Anthony Moore
Microsoft Corporation

October 2000 (updated March 2002)

Summary: A detailed examination of the workings of ASP.NET validation Web controls. (15 printed pages)

Contents

Introduction
In the Beginning
What Happens When?
Server-Side Validation Sequence
Client-Side Validation
Validity Rules and Meaningful Error Messages
Effect of Enabled, Visible, and Display Properties
The CustomValidator Control
Which Controls Can Be Validated?
That's It

Introduction

This article discusses in detail how the ASP.NET validation controls work. It is recommended reading for anyone building complex pages with validation controls, or for anyone looking to extend the validation framework. If you are just getting started with the validation controls, or are deciding whether to use them, see User Input Validation in ASP.NET.

Throughout the development of ASP.NET, we knew that helping out with validation was important. Take a look at most commercial Web sites today and you will notice that they are filled with forms that clearly execute a lot of handwritten code to perform validation. Validation code is not particularly sexy to write. It can be exciting to write code to display tables of data or to dynamically generate charts, but no one gets their coworkers to check out the cool way they stopped someone from entering a blank value for a name field.

Validation of Web applications is particularly frustrating for other reasons as well. HTML 3.2 is so limited in what you can control and what feedback you get from the user that you cannot apply the same tricks you can use on a richer client, such as preventing the user from entering certain characters, or making beep sounds. It is possible to create more powerful validation using browser script. This can be hard to justify, however, because script is not always present in client browsers and can be bypassed by malicious users. To have a secure site, it is therefore necessary to implement the same checks on the server anyway.
In the Beginning

In the development of ASP.NET, our original intention was to have just one control to handle validation: a version of the TextBox control that could also display errors. When the time came to design the control, however, it became clear that this would not cut the mustard. We looked at a large number of data-entry forms and tried to find a solution that would fit as many of them as possible, and we found a number of interesting things about data-entry forms. Consideration of all these points led to the eventual solution of the five Validator controls, the ValidationSummary control, and integration with the Page object. It was also clear that the solution needed to be extensible, and that there needed to be an API for working with it on both the client and the server.

When we looked at the different sorts of validation that take place, it seemed like we would need a bigger toolbox. In most component environments, like Microsoft® ActiveX®, we probably would have tried to overload the functionality of all the validation controls into one control that worked with different properties in different modes. However, thanks to the magic of inheritance in the Microsoft® .NET Framework, it is possible to provide a suite of controls that each do specific validation with specific properties, because the overhead of deriving each new control is very small. Most of the work done by these controls is implemented in their common parent, BaseValidator. You can also derive from BaseValidator or the other controls to take advantage of this. In fact, even BaseValidator is too lazy to implement its own Text property and inherits it from Label.

What Happens When?

It is beneficial to understand the sequence of events when a page with validation Web controls is processed. If any of the validation conditions are optional, you will want to know exactly when validation takes place on both the client and the server.
If you are writing your own validation routines that are potentially time-consuming or that have side effects, it is also important to have an idea of when they will be called.

Server-Side Validation Sequence

First, let's look at the server. It is important to understand the life cycle of a page. For those used to working with forms in Visual Basic or similar rich-client tools, it takes a bit of getting used to: a page and all the objects on it do not actually live for as long as a user is interacting with them, although it can sometimes seem like they do. On the first access, the page is created, processed, rendered, and then thrown away. When a user clicks a button or similar control, the page goes back to the server and runs through a similar sequence; this is called the post-back sequence.

Why don't we just keep all objects in memory? Because Web sites built with ASP.NET would not work with very large numbers of users. This way, the only objects in memory on the server are the things being processed right now.

When does server-side validation take place? It does not take place at all on the first page fetch. Most of our end users are very diligent, and we want to give them the benefit of the doubt that they will fill in the form correctly before we bombard them with red text. On the post-back, validation takes place just before the event fires for the button or control that triggered it. Button controls in ASP.NET have a property called CausesValidation that defaults to True; it is the action of clicking buttons that makes validation happen. The best place to check the results of validation is in the event handler that triggered the validation. You can also have buttons with CausesValidation=False that will not cause any validators to be evaluated.

One potentially confusing thing about this timing is that the validators will not have been evaluated at the time Page_Load is triggered.
The benefit of this is that it gives you a chance to programmatically change property values affecting the validity of the page, such as enabling or disabling certain validators. If this timing is not to your liking and you prefer to evaluate everything in Page_Load, you can explicitly trigger validation during this event by calling Page.Validate. After it has been called, you can check the result of Page.IsValid. If you try to query Page.IsValid before Page.Validate has been called (either explicitly, or triggered by a button with CausesValidation=True), its value is meaningless, so an exception is thrown.

The Page object has some important properties and methods with respect to server-side validation. They are summarized in Table 1.

Table 1. Page object properties and methods

The Validators collection is useful for a number of things. It is a collection of objects that implement the IValidator interface. I use the term objects rather than controls because the Page cares only about the IValidator interface. While it happens that all the validators will generally be visual controls that implement IValidator, there is no reason someone cannot come along with an arbitrary validation object and add it to the page. The IValidator interface has the following properties and methods:

Table 2. IValidator interface properties and methods

You can do some interesting things with this interface. For example, to reset the page to a valid state, use the following code (examples shown in C#):

foreach (IValidator val in Validators) {
    val.IsValid = true;
}

To re-execute the whole validation sequence, use the following code:

foreach (IValidator val in Validators) {
    val.Validate();
}

This is equivalent to calling the Validate method on Page. Another way to make some changes before validation takes place is to override the Validate method.
This example shows a page that contains a validator that is turned on or off based on the value of a check box:

public class Conditional : Page {
    public HtmlInputCheckBox chkSameAs;
    public RequiredFieldValidator rfvalShipAddress;

    public override void Validate() {
        // Only check the ship address if it is not the same as the billing address
        bool enableShip = !chkSameAs.Checked;
        rfvalShipAddress.Enabled = enableShip;
        // Now perform validation
        base.Validate();
    }
}

Client-Side Validation

If client-side validation is enabled for your page, a whole different sequence occurs in between the round trips. Client-side validation works using client JScript®. No binary components are needed to make it work. While the JScript language is reasonably well standardized, the Document Object Model (DOM) for interacting with HTML documents in browsers did not have a widely adopted standard at the time these components were developed and tested. As a result, client-side validation takes place only in Internet Explorer 4.0 and later, because it targets the Internet Explorer DOM.

From a server point of view, client-side validation simply means that the validation controls emit different content into the HTML. Other than that, the sequence of events is exactly the same: the server-side checks are still carried out. This may seem redundant, but it is important, because client script is not always present and can be bypassed by malicious users.

Every validation control makes sure that a standard block of client script is emitted into the page. This is actually just a small amount of script that includes a reference to code in a script library called WebUIValidation.js. This file, which is downloaded separately and can be cached by the browser, contains all of the logic for client-side validation. Because the validation Web controls use script that is in a script library, it is not necessary to emit all of the code for client-side validation directly into the page, although it acts as though this is what has happened.
The main script file reference looks like this:

<script language="javascript" src="/aspnet_client/system_web/1_0_3617_0/WebUIValidation.js">
</script>

By default, the script file is installed under your default root in the aspnet_client directory, and it is referenced using a root-relative script include directive, which begins with a forward slash. This means that each individual project does not have to include the script library, and all pages on the same machine can reference the same file. You will notice that the path also contains the common language runtime version number, so that different versions of the runtime can run on the same machine. If you take a look around your default virtual root, you can find this file and look inside it.

The location of these files is specified in the machine.config file, an XML file used for most ASP.NET settings. Here is the definition of the location within that file:

<webControls clientScriptsLocation="/aspnet_client/{0}/{1}/" />

You are encouraged to read the script to see more of what is going on. However, it is not recommended that you modify these scripts, because their function is very closely tied to a particular version of the run time. If the run time is updated, the scripts may need a corresponding update, and you would have to either lose your changes or face problems with the scripts not working. If you must change the scripts for a particular project, take a copy of the files and point your project to them by overriding the location of the files with a private web.config file. It is perfectly fine to change this location to a relative or absolute reference.

There are some cases where you may not want client-side validation. If the number of input fields is very small, client-side validation may not be of much benefit. You may have logic that needs a round trip to the server every time anyway.
You may also find that the dynamically appearing messages on the client have an adverse effect on your layout.

Note: The way to disable client-side validation is to set the EnableClientScript property of the validator or ValidationSummary control to False. It is possible to have a mixture of server-only and client-server validation components on the same page.

When a page with client-side validation runs, the validators are evaluated whenever their inputs change as well as at submit time, so they generally evaluate two or more times on the client. Remember that they will still be re-evaluated on the server once the submit takes place.

There is a mini-API that you can use on the client to achieve various effects with your own client-side code. Because it is not possible to hide the routines, you can theoretically make use of any of the variables, attributes, and functions defined by the client-side validation script. However, many of them are implementation details that may change. Here is a summary of the client-side objects that we encourage you to use:

Table 3. Client-side objects

A common task is to have a Cancel button or a navigation button on a page. In this case, set the CausesValidation property on the button to False and no validation will take place, on either the server or the client. If you lay out a page like this, you will want to check Page.IsValid in your button event handlers. If you instead call Page.Validate during Page_Load, you will not have a way of knowing whether a submit or a cancel button was pushed.

Another common requirement is to have effects other than the error messages displayed by the validators themselves in error situations. In this case, any modifications in behavior you make need to be made on both the server and the client. Suppose you want to have a Label that changes color depending on whether an input is valid.
Here is how you would do this on the server:

public class ChangeColorPage : Page {
    public Label lblZip;
    public RegularExpressionValidator valZip;

    protected override void OnLoad(EventArgs e) {
        Page.Validate();
        lblZip.ForeColor = valZip.IsValid ? Color.Black : Color.Red;
    }
}

This is all very nice, but whenever you modify validation like this, you may find that it looks inconsistent unless you do an equivalent operation on the client. The validation framework saves you from a lot of this double effort, but for extra effects you have to do the work in two places. Here is a client fragment that does the same thing:

<asp:Label id=lblZip runat=server>Zip code:</asp:Label>
<asp:TextBox id=txtZip runat=server onchange="txtZipOnChange();"></asp:TextBox><br>
<asp:RegularExpressionValidator id=valZip runat=server
    ControlToValidate=txtZip
    ValidationExpression="\d{5}"
    ErrorMessage="Please enter a five-digit zip code">
</asp:RegularExpressionValidator><br>

<script language=javascript>
function txtZipOnChange() {
    // Do nothing if client validation is not active
    if (typeof(Page_Validators) == "undefined")
        return;
    // Change the color of the label
    lblZip.style.color = valZip.isvalid ? "Black" : "Red";
}
</script>

Some additional scenarios are enabled by functions that can be called from your client-side script:

Table 4. Functions called from client-side script

Of particular use is the ability to enable or disable validators. If you have validation that you want active only in certain scenarios, you may need to change the activation on both the server and the client, or you will find that the user cannot submit the page.
Here is the previous example with a field that should be validated only when a check box is unchecked:

public class Conditional : Page {
    public HtmlInputCheckBox chkSameAs;
    public RequiredFieldValidator rfvalShipAddress;

    public override void Validate() {
        bool enableShip = !chkSameAs.Checked;
        rfvalShipAddress.Enabled = enableShip;
        base.Validate();
    }
}

Here is the client-side equivalent:

<input type=checkbox runat=server id=chkSameAs onclick="OnChangeSameAs();">Same as billing<br>

<script language=javascript>
function OnChangeSameAs() {
    var enableShip = !event.srcElement.checked;
    ValidatorEnable(rfvalShipAddress, enableShip);
}
</script>

Validity Rules and Meaningful Error Messages

Each validator displays a specific error message about a specific condition on a specific control. The rules for what is considered valid may at first seem confusing to you as a developer, but they are necessary to allow you to construct error messages that are actually helpful to the user.

All of the validators except RequiredFieldValidator are considered valid if the input is blank. If a blank value is not valid, you generally need to provide a RequiredFieldValidator in addition to the other validator. You need to do this because you almost universally want different error messages for blankness and for validity. Otherwise, you end up with confusing messages like "You must enter a value and it must be between 1 and 10."

Another special rule relates to CompareValidator and RangeValidator when the input fields cannot be converted to the specified data type. The evaluation of validity for the CompareValidator with ControlToCompare specified ends with a step that may seem a little counterintuitive: if the ControlToCompare field cannot be converted, the validator is still considered valid. It works this way because it would be hard to write a meaningful error message for a validator that checks the validity of more than one field at a time. A separate validator should be used to report error conditions in the ControlToCompare input field.
RangeValidator works in a similar way with its maximum and minimum properties.

Effect of Enabled, Visible, and Display Properties

The difference between the Enabled, Visible, and Display properties on validators may not be immediately obvious. Display=None can be used to specify a validator that does not display anything directly, but still gets evaluated, still affects overall validity, and can still put an error in the summary, on both the client and the server. For client-side validation, these values determine whether the visibility or the display style attribute is used to turn the validator's message on and off. For server-side validation, Display=Dynamic means that nothing at all is rendered when the input is valid, while Display=Static means that a single non-breaking space ("&nbsp;") is emitted. This last behavior exists so that table cells containing only validators do not collapse to nothing when valid.

Why not just use Visible=false for an invisible validator? In ASP.NET, the Visible property of a control has a very strong meaning: a control with Visible=false is not processed at all for pre-rendering or rendering. As a result of this stronger meaning, a validator with Visible=false not only displays nothing, it does not function either: it is not evaluated, does not affect page validity, and does not put errors in the summary.

Enabled treads a middle ground. For the most part, Enabled=false has the same effect as Visible=false. In client-side validation, however, a disabled validator is still sent to the browser, but in a disabled state, and you can activate it with the ValidatorEnable function in client script.

When using Visible or Enabled to control whether validation takes place, bear in mind the sequence of events on the server described above: either change them before validation takes place, or re-validate afterwards. Otherwise, their IsValid values may not reflect the changes to their properties.

The CustomValidator Control

The easiest way to extend the validation framework is to use the CustomValidator control.
You can use it either to perform validation that the other validation controls cannot, or to perform validation that requires access to information on the server, such as a database or a Web service.

If you add a CustomValidator with just a server validation function defined, you will notice that it does not take part in client-side validation: it is not updated as users tab between fields, and it requires a round trip to the server to perform its validation. If your check does not need any information that lives on the server, you can have your validator fully participate in client-side validation by using the ClientValidationFunction property. Ideally, the ClientValidationFunction should perform exactly the same checks as your server validation handler; failing that, it should perform a subset of them. Do not have a client validation function that performs more verification than the server does, as hackers will be able to bypass it easily.

Here is a simple example of a CustomValidator that works on the client and the server, and just checks that the input is an even number. First, here is the server function (in C#):

protected void ServerValidate(object source, ServerValidateEventArgs args) {
    try {
        int i = Int32.Parse(args.Value);
        args.IsValid = ((i % 2) == 0);
    }
    catch {
        args.IsValid = false;
    }
}

Here is how it is declared on the client, along with a client validation function that performs the same check. This would usually be in JScript, although it can also be VBScript® if you are targeting Microsoft® Internet Explorer.
Input: <asp:TextBox id=txtCustomData runat=server />
<asp:CustomValidator id=valCustomData runat=server
    ControlToValidate=txtCustomData
    ClientValidationFunction=CheckEven
    OnServerValidate=ServerValidate
    ErrorMessage="Please enter an even number.">
</asp:CustomValidator><br>

<script language="javascript">
<!--
function CheckEven(source, args) {
    var val = parseInt(args.Value, 10);
    if (isNaN(val)) {
        args.IsValid = false;
    }
    else {
        args.IsValid = ((val % 2) == 0);
    }
}
// -->
</script>

There are some additional points to note about using CustomValidator.

That's It

This is probably more than you wanted to know about ASP.NET validation. Have fun with it!
http://msdn.microsoft.com/en-us/library/aa479045.aspx
Performance data: online performance chart, and a copy of the performance measurement logs.

g2log was made to be a simple, efficient, and easy-to-understand asynchronous logger. The core of g2log is only a few short files, and it should be easy to modify to suit your needs. It comes with logging, design-by-contract CHECK macros, and catching and logging of fatal signals such as SIGSEGV (illegal memory access) and SIGFPE (floating point error), and more. It is cross-platform, tested on both Linux and Windows.

What separates g2log from other logger utilities is that it is asynchronous. By using the Active Object pattern, g2log does the slow disk access in the background. LOG calls are asynchronous, and thereby g2log gets improved application performance. A comparison with the pseudo-asynchronous Google glog (v.0.3.1) shows that g2log is much more efficient, especially in a worst-case scenario.

I have split this presentation in two parts: this article, and the source code in g2log/src.

g2log is made with code techniques and building blocks suggested by many great software gurus. I have just connected the dots. There are probably other free asynchronous loggers out there, but at the time of this writing I have not yet encountered one. That is why I am sharing this code with you: my contribution to the community, and my thanks for all the great help I have received in person, from articles, and from blogs. Of course, with this in mind, it just makes sense to make g2log free as a public domain dedication.

A version of this article was originally published at. g2log is an asynchronous logging utility made to be efficient and easy to use, understand, and modify. The reason for creating g2log was simply that the other logging software I researched was not good enough API-wise or efficiency-wise. API-wise, I was just not happy with the calls of some of the logging utilities I tried. They were too verbose and made the code look cluttered.
Efficiency-wise, I firmly believe that whenever there is slow file, disk, or network access, every effort should be made to process it in a background thread. I was disappointed with all the logging software I tried, as it was serial (synchronous): a LOG call was written to file before the log caller could continue, which obviously slows down the log caller. There are good, traditional reasons for using a synchronous logger, but I believe g2log satisfies those reasons while still being asynchronous. Being responsive is a key requirement in the software I work with, and slowing down a thread because it is doing a LOG call is not good enough. Thus I decided to create the asynchronous g2log. For those who are interested, and not for flaming reasons, the loggers I looked into and found lacking were Google's glog, ezlogger, and log4cplus.

To get the essence of g2log, you only need to read a few highlights, such as the fatal-signal handling described below. There you have it: g2log in essence. A side point on naming: g2 was a keyword in the first commercial project that used g2log. It stands for second generation (g2), so the naming was easy. It is just a happy coincidence that one of the inspirations for g2log's API was called glog.

A logger requirement is often that a log entry be on file, on disk, before the software continues with the next logical code instruction. Traditionally, that meant using only a synchronous logger, since it seemingly guaranteed writing straight to a file. The demand that a log entry be on file before continuing is common when debugging a crashing application; from now on, we call this the crashing requirement. To the developer, it is vital to know that all the information is caught in the log before taking the next, potentially fatal, step. The downside is that the synchronous, slow logging penalizes performance.

The crash-handling requirement was addressed by using a signal handler.
To see this, browse to g2log/src/ and look at crashhandler.h and the OS-specific crashhandler_win.cpp or crashhandler_unix.cpp. The signal handler catches common OS- or C-library-triggered fatal signals that would kill the application. When catching a fatal signal, g2log sends the background worker a message telling it to handle a fatal event, and the calling thread then sleeps until the background worker is finished. Meanwhile, the background worker processes the messages in FIFO order. When the background worker receives the (FIFO-queued) fatal-event message, it writes it to file and then continues to kill the application with the original signal. This way, all the FIFO-queued log messages that came before the fatal-event message are written to file before the crash is finished.

So for the crashing requirement, even when the application is killed with a fatal signal, g2log is still to be preferred over a synchronous logger: performance is good, and all written logs are still flushed to file.

g2log uses level-specific logging, without slowing down the log-calling part of the software. Thanks to the active object concept, g2log gets asynchronous logging: the actual logging work, with its slow disk I/O, is done in a background thread. Compared to other logging utilities that do the I/O in the calling thread, the logging performance gain can be huge with g2log. This is shown in the performance comparison below, where I compare the mostly awesome Google glog library to g2log. Google's glog is what I call pseudo-asynchronous, since it can fake asynchronous behavior while really being a synchronous logger. The average time is up to 48% better with g2log, and in the worst-case scenarios, g2log is a factor of 10-35 times faster than glog.

g2log provides both stream syntax and printf-like syntax, according to your preference.
The streaming API is very similar to those of other logging utilities and libraries, so you should feel right at home when using it.

A typical scenario for using g2log is shown below. Immediately at start-up, in the main function body, a g2logWorker is initialized with the prefix of the log and the path to the log file. A good rule is to use argv[0] as the log file prefix, since that is the name of the software that is starting up.

#include "g2log.h"

int main(int argc, char** argv) {
    g2logWorker g2log(argv[0], "/tmp/whatever-directory-path-you-want/");
    g2::initializeLogging(&g2log);
    // ....
}

The example program g2log-example generates a log file at /tmp/ according to the rule prefix.g2log.YYYYMMDD-HHMMSS.log, i.e., something like g2log-example.g2log.20111114-092342.log.

When the application software closes, the g2logWorker goes out of scope. This triggers the destruction of the active object that the g2logWorker is using. Before the active object is destroyed, any pending log writes are flushed to the log file; that way, no log entries are lost.

Active::~Active() {
    Callback quit_token = std::bind(&Active::doDone, this);
    send(quit_token);  // tell the thread to exit; this is the last message to be processed in FIFO order
    thd_.join();       // after join is done, all messages are processed
}

The available logging levels are INFO, DEBUG, WARNING, and FATAL. These levels are fixed in the software but can easily be changed if needed; they can be added or removed in the very first lines of g2log/src/g2log.h.

By using the C preprocessor macro for token concatenation, the level itself is used to call the appropriate function:

#define LOG(level) G2_LOG_##level.messageStream()

Typos, or using log levels that do not exist, will give compiler errors:

LOG(UNKNOWN_LEVEL) << "This log attempt will cause a compiler error";

The compiler error will express something like:

>> ...
>> 'G2_LOG_UNKNOWN_LEVEL' was not declared in this scope
>> ...

For a log level that does not exist, or is spelled incorrectly, the concatenation ends up in a call to a non-existent function, which generates the compilation error. Thanks to this safe use of a C preprocessor macro, the API is clean and direct.

FATAL has a special meaning: using the log level FATAL means the same as if the evaluation in a design-by-contract CHECK fails.

#include "g2log.h"

int main(int argc, char** argv) {
    g2logWorker g2log(argv[0], "/tmp/whatever-directory-path-you-want/");
    g2::initializeLogging(&g2log);

    LOG(INFO) << "Simple to use with streaming syntax, easy as ABC or " << 123;
    LOGF(WARNING, "Printf-style syntax is also %s", "available");
    LOGF(FATAL, "This %s is FATAL. After log flush -> Abort()", "message");
    // or using the stream API
    LOG(FATAL) << "This message is FATAL. After log flush -> Abort()";
}

Conditional logging is also provided. It is handy when making a log entry only under a certain condition:

LOG_IF(INFO, (1 < 2)) << "If " << 1 << "<" << 2 << " : this text will be logged";
// or
LOGF_IF(INFO, (1 < 2), "if %d<%d : then this text will be logged", 1, 2);

Of course, conditional logging can be used together with the FATAL log level instead of using CHECK. If the condition does not evaluate to true, then the FATAL level and the message are ignored.

LOG_IF(FATAL, (2 > 3)) << "This message is not FATAL";
LOG_IF(FATAL, (2 < 3)) << "This message is FATAL";

The streaming API uses a normal C++ std::ostringstream, making it easy to stream strings and native types (int, float, etc.). The streaming API does not suffer from the format risks that printf-type APIs have.

LOG(DEBUG) << "Hello I have " << 1 << " car";
LOG(INFO) << "PI is: " << std::setprecision(6) << PI;

For the first release of g2log, I was persuaded to add printf-like syntax.
This was implemented as a variadic function, and it comes with the usual risks associated with printf-like functions. At least the printf-like logging is buffer-overflow protected, thanks to vsnprintf. A printf-like API is still appealing to some, mainly because of the nice separation of text and data. I hope to move to variadic templates when they are supported on Windows.

If you decide to use the printf-like API, the calls are somewhat different: LOGF, the conditional LOGF_IF, and the design-by-contract CHECKF.

LOGF(DEBUG, "This API is popular with some %s", "programmers");
LOGF_IF(DEBUG, (1 != 2), "if true, then this %s will be logged", "message");
CHECKF(foo(), "if 'false == foo()' then the %s is broken: FATAL", "contract");

The risks of a printf-like API can be mitigated on Linux: compiler warnings for erroneous format syntax can be generated by using the -Wall compiler flag.

const std::string logging = "logging";
LOGF(DEBUG, "Printf-type %s is the number 1 for many %s", logging.c_str());

The log call above is badly formatted: it has two %s but only one string argument. With the gcc compiler and the -Wall flag enabled, the compiler generates a warning similar to:

warning: format "%s" expects a matching "char*" argument [-Wformat]

To be on the safe side, I personally prefer to use the stream API, both on Linux and on Windows.

It is common programming practice to have early error detection through asserts: conditions in the code are checked, and the application is aborted if they are not fulfilled. This is an important part of design-by-contract and is sometimes called assertion programming. Most common is to use various CHECK macros to verify a condition and to quit the application if CHECK(condition) fails. g2log provides CHECK functionality for both the streaming and the printf-like APIs.
CHECK(1 != 2); // true: won't be FATAL
CHECK(1 > 2) << "CHECK(false) will trigger a FATAL message, put it on log, then exit";

Or with printf-like syntax:

const std::string arg = "CHECKF";
CHECKF(1 > 2, "This is a test to see if %s works", arg.c_str());

g2log's streaming API, as well as the macro concatenation for creating a log API with a compiler check for log levels, can be found in other logging utilities. Similar logging usage can be found in Petru Marginean's Dr. Dobbs logging articles [4] and [5] and Google's glog [6]. If you have read my previous blogs or already browsed through g2log's code, then it should come as no surprise that g2log was influenced and inspired by these.

Version 1.0 of g2log is released with this article. Building it requires a C++11 std::thread-capable compiler (or the just::thread implementation). The thread part of g2log is encapsulated within an active object. I have previously done similar active objects with QThread, pthread and more. If you do not have access to std::thread then maybe these could help.

The code is available in the BitBucket repository I have put up for g2log; you can access the files there. You can do the mercurial download using the command hg clone. Another option is to use the, possibly old, snapshot that should be attached with this article. If you have any suggestions for improvements or notice something that should be corrected, then please tell me and I will do my best to incorporate it with the rest of the code.

On stress tests involving higher data loads to the queue, it could be that std::deque would be faster; I just have not tested it and I stick with std::queue for now. The change is easy to make if needed, only a couple of lines in shared_queue.h.

If logs are pushed onto the queue faster than the background worker can write them to disk, two things happen:

1. Your application software will continue to run and push logs pretty fast onto the message queue. It will still be responsive.
If you use a synchronous logger instead, it would not be so responsive but would be stalled for larger portions of its execution time.

2. The shared message queue would continue to grow, consuming more and more RAM. In the end, if the log over-scheduling continues, bad things would happen as all of the RAM would not be enough.

In short: if the use of a logger is completely nuts, then nutty things will happen. If using an asynchronous logger, the software would still be responsive for some time. If you use a synchronous logger, you would be mostly stalled. In both cases, your hard drive could fill up. Depending on your system, the out-of-disk scenario is more likely than out-of-RAM. At least on my laptop that is what happened during some of the [#extreme performance testing].

In the performance comparison it is obvious that KjellKod's g2log is faster than Google's glog, both when measuring the average times and when measuring the worst cases. This has more to do with the fact that glog is (partly) synchronous in nature while g2log is asynchronous. In the average case, KjellKod's g2log is 10-48% faster than Google's glog. In the worst case scenarios, g2log is a factor 10-35 times faster than glog. The pseudo-asynchronous glog is much more efficient than a traditional synchronous logger. However, the peak times are alarming and discouraging.

The tests were made on a laptop with a solid state drive. On another system using a standard hard drive, with even slower disk access, the difference between synchronous (glog) and asynchronous (g2log) would be considerable. The implication of using a synchronous logger is that sometimes the I/O wait times will be long. Using a synchronous logger can halt the log calling thread for a very long time. This is usually not desired since being responsive is a normal software design goal. G2log's average time is less than glog's average time.
In the worst case scenario, the asynchronous g2log has a tremendous advantage over the synchronous glog.

I can only speculate why the public versions of Google's glog, ezlogger, and log4Cplus are not made with true asynchronous logging. It is the traditional approach to a logger, but if you think about it, it is almost silly not to have the slow disk access operations done in the background. This is clearly shown in the performance comparison section above. It does seem that Google's glog might indeed be asynchronous when used internally at Google, according to a comment by the glog owner, Shinichi, in the glog issue tracker (google-glog/issues/detail?id=55).

A probable reason not already discussed could be that these loggers were made pre-C++11 (std::thread was not available, and maybe it was too much effort to make them threaded cross-platform). Another probable reason could be that the authors behind these loggers chose not to tie their logger too tightly to third-party threading libraries such as MFC, Qt, or Boost.

Either way, C++11 is already here. g2log is free. Use it with std::thread through the just::thread implementation or a C++11/std::thread compliant compiler (Visual Studio 11). Alternatively, you can simply replace the encapsulated threaded part inside the active object with a thread library of your choice. What are you waiting for? Go get it.

Thank you for reading my article. I hope you can use g2log as is, use parts of it, or just be inspired to do something else. This article, along with any associated source code and files, is licensed under a Public Domain dedication.

ctapmex wrote: I plan to go to the glog.
ctapmex wrote: > Customization of log format (unknown how coders would like to have it - please suggest) a pointer to a user-defined function? loglevels, date-time format, ...? more?

KjellKod.cc wrote: I hope you mean g2log and not Google's glog

KjellKod.cc wrote: A problem with that is that it might be less thread safe (never call unknown code)

/* Perhaps the configuration could work like in the example below where no
   extra information is given. Only severity log level and the log message */
#include "g2log.h"

int main(int argc, char** argv) {
   g2logWorker g2log(argv[0], "/tmp/whatever-directory-path-you-want/");
   g2log.configure().  /* access to configure that specifies content of log file */
      header(false).   /* no header */
      date(false).     /* no date */
      time(false).     /* no timestamp */
      file(false).     /* no file (FILE:LINE) information */
      level(true);     /* just to show how one option is explicitly set to true. It could
                          just as well have been left out since all options are enabled by default */
   g2::initializeLogging(&g2log);
   // ....
}

// Attempt to change the current log file to another name/location.
// Returns the filename with full path if successful, else an empty string,
// in which case the old log file continues to be active.
std::future<std::string> changeLogFile(const std::string& log_directory);

// logLevel & LL_DEBUG are bit flag variables
LOGIF(DEBUG, logLevel & LL_DEBUG, "Debug log");

g2::setLogLevel(DEBUG, false); // disables any DEBUG logging
//...stuff
g2::setLogLevel(DEBUG, true);  // enables any DEBUG logging
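The shared_queue mentioned earlier (g2log guards a plain std::queue with a mutex and a condition variable, in shared_queue.h) can be sketched roughly as follows. This is my own sketch of the idea, not g2log's actual code:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// A minimal thread-safe FIFO: producers (the log callers) push,
// the single background worker blocks in wait_and_pop.
template <typename T>
class shared_queue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(item));
        }
        condition_.notify_one(); // wake the background worker
    }

    // Blocks until an item is available, then removes and returns it.
    T wait_and_pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        condition_.wait(lock, [this] { return !queue_.empty(); });
        T item = std::move(queue_.front());
        queue_.pop();
        return item;
    }

    bool empty() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return queue_.empty();
    }

private:
    mutable std::mutex mutex_;
    std::condition_variable condition_;
    std::queue<T> queue_; // std::deque-backed by default; easy to swap
};
```

Swapping the underlying container (std::queue adapts a std::deque by default) is exactly the couple-of-lines change discussed above.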
http://www.codeproject.com/Articles/288827/g-log-An-efficient-asynchronous-logger-using-Cplus?msg=4565463
Hi folks, I have created a dialog window for my project as described here: There are some glitches in JIRA v6.3.1x with javascript; for example, the script handling the dialog window is loading only on some pages. But with JIRA 7 this is solved, so since our customer is going to upgrade to version 7, there is no bother with this. So what I am trying to do is: I have a button in the blue administration stripe as a Script-fragment/Custom web item defined in ScriptRunner - this opens the dialog. It calls the REST end point (Show Dialog) which is similar to the one on the Adaptavist page mentioned above. I cannot figure out how to get current project information in this REST end point and fill the information into the returned html template. Maybe it's easy and I am just overlooking something due to my lack of experience with ScriptRunner and JIRA, and maybe it's not possible to do so at all. The dialog creates a new project version and I don't have any issue context; we should be able to call it from anywhere within project administration. Anyone have some suggestions??

Hey Martin, it depends on the context of the web item. So for example, if the context of your web item is the operations-top-bar, as in the example, and assuming that your rest endpoint is /rest/scriptrunner/latest/custom/doSomething, then in the script fragment link you can have /rest/scriptrunner/latest/custom/doSomething?issueId=${issue.id} and then in your rest endpoint you can retrieve the project.
Something like:

import com.atlassian.jira.component.ComponentAccessor

doSomething() { MultivaluedMap queryParams ->
    def issueId = queryParams.getFirst("issueId") as Long
    def project = getProjectFromIssueId(issueId)
    log.debug "Project key is ${project.key}"
}

def getProjectFromIssueId(Long issueId) {
    def issue = ComponentAccessor.issueManager.getIssueObject(issueId)
    // you have the project, do something with it
    issue?.projectObject
}

Hope that helps. Regards, Thanos

Hey Thanos, thanks for the reply. Yeah your example is correct, that is what I have here - of course without the issue.id param. I just assume here now, but if I open JIRA for the first time, open the project admin part, i.e. the Versions tab, and then click the button to open the dialog, there will be no issue context, right? Because I did not open any issue yet, so the param issue.id will be null in this case, right? Well I am going to try to fiddle with it, maybe I will find the way :) Thanks again, any other ideas are very welcome. Martin

Of course you can just do something like this when you define the Script fragment calling your showDialog endpoint:

/rest/scriptrunner/latest/custom/showDialog?projectKey=${project.key}

works!! thanks again.
https://community.atlassian.com/t5/Marketplace-Apps-questions/How-to-get-project-information-in-Scriptrunner-dialog-window/qaq-p/624964
According to Stack Overflow's Annual Survey of 2018, JavaScript became the most commonly used programming language for the sixth year in a row. Let's face it, JavaScript is a cornerstone of your Full Stack Developer skills and can't be avoided in any Developer's interview. Follow through and read the FullStack.Cafe compilation of the most common and tricky JavaScript Interview Questions and Answers to land your next dream job.

🔴 Originally published on FullStack.Cafe - Kill Your Tech & Coding Interview

Q1: What is Coercion in JavaScript?

Topic: JavaScript
Difficulty:

In JavaScript, conversion between two different built-in types is called coercion. Coercion comes in two forms in JavaScript: explicit and implicit.

Here's an example of explicit coercion:

var a = "42";
var b = Number( a );
a; // "42"
b; // 42 -- the number!

And here's an example of implicit coercion:

var a = "42";
var b = a * 1; // "42" implicitly coerced to 42 here
a; // "42"
b; // 42 -- the number!

🔗 Source: FullStack.Cafe

Q2: What is Scope in JavaScript?

Topic: JavaScript
Difficulty: ⭐

In JavaScript, each function gets its own scope. Scope is basically a collection of variables as well as the rules for how those variables are accessed by name. Only code inside that function can access that function's scoped variables. A variable name has to be unique within the same scope. A scope can be nested inside another scope. If one scope is nested inside another, code inside the innermost scope can access variables from either scope.

🔗 Source: FullStack.Cafe

Q3: Explain equality in JavaScript

Topic: JavaScript
Difficulty: ⭐

JavaScript has both strict and type-converting comparisons:

- Strict comparison (e.g., ===) checks for value equality without allowing coercion
- Abstract comparison (e.g. ==) checks for value equality with coercion allowed

var a = "42";
var b = 42;
a == b; // true
a === b; // false

Some simple equality rules:
🔗 Source: FullStack.Cafe

Q4: Explain what a callback function is and provide a simple example.

Topic: JavaScript
Difficulty: ⭐⭐

🔗 Source: coderbyte.com

Q5: What does "use strict" do?

Topic: JavaScript
Difficulty: ⭐⭐

It will throw an error because x was not defined and it is being set to some value in the global scope, which isn't allowed with use strict. The small change below fixes the error being thrown:

function doSomething(val) {
  "use strict";
  var x = val + 10;
}

🔗 Source: coderbyte.com

Q6: Explain Null and Undefined in JavaScript

Topic: JavaScript
Difficulty: ⭐⭐

JavaScript (and by extension TypeScript) has two bottom types: null and undefined. They are intended to mean different things:

- Something hasn't been initialized: undefined.
- Something is currently unavailable: null.

🔗 Source: FullStack.Cafe

Q7: Write a function that would allow you to do this.

Topic: JavaScript
Difficulty: ⭐⭐

var addSix = createBase(6);
addSix(10); // returns 16
addSix(21); // returns 27

🔗 Source: coderbyte.com

Q8: Explain Values and Types in JavaScript

Topic: JavaScript
Difficulty: ⭐⭐

JavaScript has typed values, not typed variables. The following built-in types are available: string, number, boolean, null and undefined, object, symbol (new to ES6).

🔗 Source: FullStack.Cafe

Q9: Explain event bubbling and how one may prevent it

Topic: JavaScript
Difficulty: ⭐⭐

Event bubbling is the concept in which an event triggers at the deepest possible element, and triggers on parent elements in nesting order. As a result, when clicking on a child element one may exhibit the handler of the parent activating. One way to prevent event bubbling is using event.stopPropagation() or event.cancelBubble on IE < 9.

🔗 Source:

Q10: What is the let keyword in JavaScript?

Topic: JavaScript
Difficulty: ⭐⭐

In addition to creating declarations for variables at the function level, ES6 lets you declare variables to belong to individual blocks (pairs of { .. }), using the let keyword.
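The block scoping that let provides can be seen in the classic loop-closure example below (a small sketch of my own, not from the original article):

```javascript
// With `var`, all three closures would share one loop variable and each
// call would return 3. With `let`, each iteration gets its own binding.
function makeCounters() {
  const counters = [];
  for (let i = 0; i < 3; i++) {
    counters.push(() => i); // each closure captures its own `i`
  }
  return counters.map(fn => fn());
}

console.log(makeCounters()); // [ 0, 1, 2 ]
```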
🔗 Source: github.com/getify Q11: How would you check if a number is an integer? Topic: JavaScript Difficulty: ⭐⭐ 🔗 Source: coderbyte.com Q12: What is IIFEs (Immediately Invoked Function Expressions)? Topic: JavaScript Difficulty: ⭐⭐⭐ It’s an Immediately-Invoked Function Expression, or IIFE for short. It executes immediately after it’s created: (function IIFE(){ console.log( "Hello!" ); })(); // "Hello!" This pattern is often used when trying to avoid polluting the global namespace, because all the variables used inside the IIFE (like in any other normal function) are not visible outside its scope. 🔗 Source: stackoverflow.com Q13: How to compare two objects in JavaScript? Topic: JavaScript Difficulty: ⭐⭐⭐ Two non-primitive values, like objects (including function and array) held by reference, so both == and === comparisons will simply check whether the references match, not anything about the underlying values. For example, arrays are by default coerced to strings by simply joining all the values with commas ( ,) in between. So two arrays with the same contents would not be == equal: var a = [1,2,3]; var b = [1,2,3]; var c = "1,2,3"; a == c; // true b == c; // true a == b; // false For deep object comparison use external libs like deep-equal or implement your own recursive equality algorithm. 🔗 Source: FullStack.Cafe Q14: Could you explain the difference between ES5 and ES6 Topic: JavaScript Difficulty: ⭐⭐⭐ ECMAScript 5 (ES5): The 5th edition of ECMAScript, standardized in 2009. This standard has been implemented fairly completely in all modern browsers ECMAScript 6 (ES6)/ ECMAScript 2015 (ES2015): The 6th edition of ECMAScript, standardized in 2015. This standard has been partially implemented in most modern browsers. Here are some key differences between ES5 and ES6: - Arrow functions & string interpolation: Consider: const greetings = (name) => { return `hello ${name}`; } and even: const greetings = name => `hello ${name}`; - Const. 
Const works like a constant in other languages in many ways but there are some caveats. Const stands for ‘constant reference’ to a value. So with const, you can actually mutate the properties of an object being referenced by the variable. You just can’t change the reference itself. const NAMES = []; NAMES.push("Jim"); console.log(NAMES.length === 1); // true NAMES = ["Steve", "John"]; // error - Block-scoped variables. The new ES6 keyword letallows developers to scope variables at the block level. Letdoesn’t hoist in the same way vardoes. - Default parameter values Default parameters allow us to initialize functions with default values. A default is used when an argument is either omitted or undefined — meaning null is a valid value. // Basic syntax function multiply (a, b = 2) { return a * b; } multiply(5); // 10 Class Definition and Inheritance ES6 introduces language support for classes ( classkeyword), constructors ( constructorkeyword), and the extendkeyword for inheritance. for-of operator The for...of statement creates a loop iterating over iterable objects. Spread Operator For objects merging const obj1 = { a: 1, b: 2 } const obj2 = { a: 2, c: 3, d: 4} const obj3 = {...obj1, ...obj2} - Promises Promises provide a mechanism to handle the results and errors from asynchronous operations. You can accomplish the same thing with callbacks, but promises provide improved readability via method chaining and succinct error handling. 
const isGreater = (a, b) => { return new Promise ((resolve, reject) => { if(a > b) { resolve(true) } else { reject(false) } }) } isGreater(1, 2) .then(result => { console.log('greater') }) .catch(result => { console.log('smaller') }) - Modules exporting & importing Consider module exporting: const myModule = { x: 1, y: () => { console.log('This is ES5') }} export default myModule; and importing: import myModule from './myModule'; Q15: Explain the difference between "undefined" and "not defined" in JavaScript Topic: JavaScript Difficulty: ⭐⭐⭐ In JavaScript if you try to use a variable that doesn't exist and has not been declared, then JavaScript will throw an error var name is not defined and the script will stop execute thereafter. But If you use typeof undeclared_variable then it will return undefined. Before starting further discussion let's understand the difference between declaration and definition. var x is a declaration because you are not defining what value it holds yet, but you are declaring its existence and the need of memory allocation. var x; // declaring x console.log(x); //output: undefined var x = 1 is both declaration and definition (also we can say we are doing initialisation), Here declaration and assignment of value happen inline for variable x, In JavaScript every variable declaration and function declaration brings to the top of its current scope in which it's declared then assignment happen in order this term is called hoisting. A variable that is declared but not define and when we try to access it, It will result undefined. var x; // Declaration if(typeof x === 'undefined') // Will return true A variable that neither declared nor defined when we try to reference such variable then It result not defined. console.log(y); // Output: ReferenceError: y is not defined 🔗 Source: stackoverflow.com Q16: What is the difference between anonymous and named functions? 
Topic: JavaScript Difficulty: ⭐⭐⭐ Consider: var foo = function() { // anonymous function assigned to variable foo // .. }; var x = function bar(){ // named function (bar) assigned to variable x // .. }; foo(); // actual function execution x(); 🔗 Source: FullStack.Cafe Q17: What is “closure” in javascript? Provide an example? Topic: JavaScript Difficulty: ⭐⭐⭐⭐ 🔗 Source: github.com/ganqqwerty Q18: How would you create a private variable in JavaScript? Topic: JavaScript Difficulty: ⭐⭐⭐⭐ To create a private variable in JavaScript that cannot be changed you need to create it as a local variable within a function. Even if the function is executed the variable cannot be accessed outside of the function. For example: function func() { var priv = "secret code"; } console.log(priv); // throws error To access the variable, a helper function would need to be created that returns the private variable. function func() { var priv = "secret code"; return function() { return priv; } } var getPriv = func(); console.log(getPriv()); // => secret code 🔗 Source: coderbyte.com Q19: Explain the Prototype Design Pattern Topic: JavaScript Difficulty: ⭐⭐⭐⭐ The Prototype Pattern creates new objects, but rather than creating non-initialized objects it returns objects that are initialized with values it copied from a prototype - or sample - object. The Prototype pattern is also referred to as the Properties pattern. An example of where the Prototype pattern is useful is the initialization of business objects with values that match the default values in the database. The prototype object holds the default values that are copied over into a newly created business object. Classical languages rarely use the Prototype pattern, but JavaScript being a prototypal language uses this pattern in the construction of new objects and their prototypes. 
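To make Q19 concrete, here is a small sketch of prototype-style object creation with Object.create; the names are my own illustration, not from the source article:

```javascript
// The prototype object holds the default values (e.g. defaults that
// match the database), and each new object is initialized from it.
const accountDefaults = { currency: 'USD', balance: 0 };

function createAccount(overrides) {
  // The new object delegates to accountDefaults; overrides are copied onto it.
  return Object.assign(Object.create(accountDefaults), overrides);
}

const acc = createAccount({ balance: 100 });
console.log(acc.balance, acc.currency); // 100 USD
```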
🔗 Source: dofactory.com Q20: Check if a given string is a isomorphic Topic: JavaScript Difficulty: ⭐⭐⭐⭐ For two strings to be isomorphic, all occurrences of a character in string A can be replaced with another character to get string B. The order of the characters must be preserved. There must be one-to-one mapping for ever char of string A to every char of string B. paperand titlewould return true. eggand sadwould return false. dggand addwould return true. isIsomorphic("egg", 'add'); // true isIsomorphic("paper", 'title'); // true isIsomorphic("kick", 'side'); // false function isIsomorphic(firstString, secondString) { // Check if the same lenght. If not, they cannot be isomorphic if (firstString.length !== secondString.length) return false var letterMap = {}; for (var i = 0; i < firstString.length; i++) { var letterA = firstString[i], letterB = secondString[i]; // If the letter does not exist, create a map and map it to the value // of the second letter if (letterMap[letterA] === undefined) { letterMap[letterA] = letterB; } else if (letterMap[letterA] !== letterB) { // Eles if letterA already exists in the map, but it does not map to // letterB, that means that A is mapping to more than one letter. return false; } } // If after iterating through and conditions are satisfied, return true. // They are isomorphic return true; } 🔗 Source: Q21: What does the term "Transpiling" stand for? Topic: JavaScript Difficulty: ⭐⭐⭐⭐ There's no way to polyfill new syntax that has been added to the language. So the better option is to use a tool that converts your newer code into older code equivalents. This process is commonly called transpiling, a term for transforming + compiling. Typically you insert the transpiler into your build process, similar to your code linter or your minifier. 
There are quite a few great transpilers for you to choose from:

- Babel: Transpiles ES6+ into ES5
- Traceur: Transpiles ES6, ES7, and beyond into ES5

🔗 Source: You Don't Know JS, Up & Going

Q22: How does the "this" keyword work? Provide some code examples.

Topic: JavaScript
Difficulty: ⭐⭐⭐⭐

🔗 Source: quirksmode.org

Q23: How would you add your own method to the Array object so the following code would work?

Topic: JavaScript
Difficulty: ⭐⭐⭐⭐

🔗 Source: coderbyte.com

Q24: What is Hoisting in JavaScript?

Topic: JavaScript
Difficulty: ⭐⭐⭐⭐

Hoisting is the JavaScript interpreter's action of moving all variable and function declarations to the top of the current scope. There are two types of hoisting:

- variable hoisting - rare
- function hoisting - more common

Wherever a var (or function declaration) appears inside a scope, that declaration is taken to belong to the entire scope and be accessible everywhere throughout.

var a = 2;
foo(); // works because `foo()`
       // declaration is "hoisted"

function foo() {
    a = 3;
    console.log( a ); // 3
    var a; // declaration is "hoisted"
           // to the top of `foo()`
}

console.log( a ); // 2

🔗 Source: FullStack.Cafe

Q25: What will the following code output?

Topic: JavaScript
Difficulty: ⭐⭐⭐⭐

0.1 + 0.2 === 0.3

This will surprisingly output false because of floating point errors in internally representing certain numbers. 0.1 + 0.2 does not nicely come out to 0.3, but instead the result is actually 0.30000000000000004 because the computer cannot internally represent the correct number. One solution to get around this problem is to round the results when doing arithmetic with decimal numbers.

🔗 Source: coderbyte.com

Q26: Describe the Revealing Module Pattern design pattern

Topic: JavaScript
Difficulty: ⭐⭐⭐⭐⭐

An obvious disadvantage of this pattern is the inability to reference the private methods.

Thanks 🙌 for reading and good luck on your interview! Please share this article with your fellow devs if you like it!
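Q26 above mentions only a drawback of the pattern; for completeness, here is a minimal sketch of the Revealing Module pattern itself (my example, not from the article):

```javascript
// Private state lives in the IIFE's closure; only the functions listed
// in the returned object are "revealed" as the public API.
const counter = (function () {
  let count = 0; // private, not reachable from outside

  function increment() { count += 1; return count; }
  function reset() { count = 0; }

  return { increment, reset };
})();

counter.increment();
counter.increment();
// counter.count is undefined -- the state stays private
```

The drawback noted in Q26 follows from this shape: the revealed members call the closure's private functions directly, so replacing a public member from the outside does not change what the internals reference.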
Check more FullStack Interview Questions & Answers on 👉 Discussion (18) Great article! For "Q11: How would you check if a number is an integer?" I recommend using: As far as you are not tageting IE. caniuse.com/#search=isInteger nice, I was surprised most by the question 0.1+0.2===0.3. This is what always haunts me and i'm not sure if that's the case with any other language? I used Python, C, C++ and VBA but such stuff is unseen there. it's same in Python, C and C++ it's same in c# too but in goLang 0.1 + 0.2 == 0.3 is true :) i would like more of the articles that address performance of javascript under the hood like this one. Q6: Explain Null and Undefined in JavaScript Bonus point pointing out Null is an object, it's null-ness still takes up memory hi,it is a good article.but it has a problem: Q20: for your function ,it seemed wrong: isIsomorphic('sad', 'egg') !== isIsomorphic('egg', 'sad') my function: It's great article indeed! , I shared in linkedIn and twitter, I am sure this will help javascript developers in their interviews. good article! I learned a few things Q22 is not completely right according to me . thismay refer the context of the definition with the arrow function. (lexical scoping vs dynamic scoping) Great Article. Thank you Dev Alex.👍 Awesome walk through Alex! Totally saving this one! Great article I really needed that quick revision 👍 Great article Alex!! Excellent and insightful article! Great article! Thanks. I love this article. nice one!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/fullstackcafe/top-26-javascript-interview-questions-i-wish-i-knew-26k1
class ADC – analog to digital conversion

Usage:

import pyb

adc = pyb.ADC(pin)                  # create an analog object from a pin
val = adc.read()                    # read an analog value

adc = pyb.ADCAll(resolution)        # create an ADCAll object
adc = pyb.ADCAll(resolution, mask)  # create an ADCAll object for selected analog channels

val = adc.read_channel(channel)     # read the given channel
val = adc.read_core_temp()          # read MCU temperature
val = adc.read_core_vbat()          # read MCU VBAT
val = adc.read_core_vref()          # read MCU VREF
val = adc.read_vref()               # read MCU supply voltage

Constructors

Methods

ADC.read()

Read the value on the analog pin and return it. The returned value will be between 0 and 4095.

ADC.read_timed(buf, timer)

Read analog values into buf at a rate set by the timer object. buf can be bytearray or array.array for example. The ADC values have 12-bit resolution and are stored directly into buf if its element size is 16 bits or greater. If buf has only 8-bit elements (eg a bytearray) then the sample resolution will be reduced to 8 bits.

timer should be a Timer object, and a sample is read each time the timer triggers. The timer must already be initialised and running at the desired sampling frequency.

To support previous behaviour of this function, timer can also be an integer which specifies the frequency (in Hz) to sample at. In this case Timer(6) will be automatically configured to run at the given frequency.
Example using a Timer object (preferred way):

adc = pyb.ADC(pyb.Pin("P5"))  # create an ADC on pin P5
tim = pyb.Timer(6, freq=10)   # create a timer running at 10Hz
buf = bytearray(100)          # create a buffer to store the samples
adc.read_timed(buf, tim)      # sample 100 values, taking 10s

Example using an integer for the frequency:

adc = pyb.ADC(pyb.Pin("P5"))  # create an ADC on pin P5
buf = bytearray(100)          # create a buffer of 100 bytes
adc.read_timed(buf, 10)       # read analog values into buf at 10Hz
                              # this will take 10 seconds to finish
for val in buf:               # loop over all values
    print(val)                # print the value out

This function does not allocate any heap memory. It has blocking behaviour: it does not return to the calling program until the buffer is full.

The ADCAll Object

Instantiating this changes all masked ADC pins to analog inputs. The preprocessed MCU temperature, VREF and VBAT data can be accessed on ADC channels 16, 17 and 18 respectively. Appropriate scaling is handled according to the reference voltage used (usually 3.3V). The temperature sensor on the chip is factory calibrated and allows reading the die temperature to +/- 1 degree centigrade. Although this sounds pretty accurate, don't forget that the MCU's internal temperature is measured. Depending on processing loads and active I/O subsystems, the die temperature may easily be tens of degrees above ambient temperature. On the other hand, an openmvcam woken up after a long standby period will show correct ambient temperature within the limits mentioned above.

The ADCAll read_core_vbat(), read_vref() and read_core_vref() methods read the backup battery voltage, reference voltage and the (1.21V nominal) reference voltage using the actual supply as a reference. All results are floating point numbers giving direct voltage values.

read_core_vbat() returns the voltage of the backup battery. This voltage is also adjusted according to the actual supply voltage.
To avoid analog input overload the battery voltage is measured via a voltage divider and scaled according to the divider value. To prevent excessive loads to the backup battery, the voltage divider is only active during ADC conversion. read_vref() is evaluated by measuring the internal voltage reference and backscale it using factory calibration value of the internal voltage reference. In most cases the reading would be close to 3.3V. If the openmvcam is operated from a battery, the supply voltage may drop to values below 3.3V. The openmvcam will still operate fine as long as the operating conditions are met. With proper settings of MCU clock, flash access speed and programming mode it is possible to run the openmvcam down to 2 V and still get useful ADC conversion. It is very important to make sure analog input voltages never exceed actual supply voltage. Other analog input channels (0..15) will return unscaled integer values according to the selected precision. To avoid unwanted activation of analog inputs (channel 0..15) a second parameter can be specified. This parameter is a binary pattern where each requested analog input has the corresponding bit set. The default value is 0xffffffff which means all analog inputs are active. If just the internal channels (16..18) are required, the mask value should be 0x70000. Example: adcall = pyb.ADCAll(12, 0x70000) # 12 bit resolution, internal channels temp = adcall.read_core_temp()
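The scaling that the ADCAll methods perform can be illustrated with a tiny helper that converts a raw 12-bit reading to a voltage. This helper is my own illustration and not part of the pyb module; it assumes a 3.3 V reference by default:

```python
def adc_to_volts(raw, vref=3.3, bits=12):
    """Convert a raw ADC count to volts for a given reference voltage."""
    max_count = (1 << bits) - 1  # 4095 for a 12-bit ADC
    return raw * vref / max_count

# A full-scale reading corresponds to the reference voltage:
print(adc_to_volts(4095))  # ~3.3
print(adc_to_volts(2048))  # roughly half of vref
```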
http://docs.openmv.io/library/pyb.ADC.html
LANDesk Support Tools – Android Edition (Demo)

This is my first real project written for Android. Yes, I wrote it in C# using Mono for Android.

Ok, so adding a menu that pops up from the bottom when the menu button is clicked is very common and quite easy to do.

Note: This assumes you have the Android SDK, Emulator, and Eclipse all working already.

Your project is now created.

<menu xmlns:
  <item android:
</menu>

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="hello">Hello World, HelloAllActivity!</string>
    <string name="app_name">HelloAll</string>
    <string name="menu_1">Menu 1</string>
    <string name="menu_2">Menu 2</string>
    <string name="menu_3">Menu 3</string>
</resources>

You now have a menu and strings for each menu item.

Step 4 – Overload onCreateOptionsMenu

package org.rhyous;

import android.app.Activity;
import android.os.Bundle;

public class HelloAllActivity extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle inSavedInstanceState) {
        super.onCreate(inSavedInstanceState);
        setContentView(R.layout.main);
    }
}

@Override
public boolean onCreateOptionsMenu(Menu inMenu) {
    super.onCreateOptionsMenu(inMenu);
    getMenuInflater().inflate(R.layout.menu, inMenu);
    return true;
}

You can now build your application and test that the menu pops up. However, the menu doesn't do anything yet.

@Override
public boolean onOptionsItemSelected(MenuItem inItem) {
    switch (inItem.getItemId()) {
    case R.id.menu_item_1:
        // Do something here
        return true;
    case R.id.menu_item_2:
        // Do something here
        return true;
    default:
        // Should never get here
        return false;
    }
}

Based on the item clicked, the appropriate code will run. Hope you enjoyed this simple Android development example.

This post is a continuation of Writing Android apps in C# using MonoDroid.
Now that you have installed and configured MonoDroid and its prerequisites, you are ready to create your first project. You now have a sample MonoDroid app.

I read this article today and found it very insightful: ASUS Eee Pad Transformer tablet is a mixed bag. One of the biggest complaints about tablets is the ability to type, and this device eases that some, though not completely. One interesting quote, which I am not sure I agree with yet, though I have thought it myself: a coworker who owns an iPad displayed some gadget envy when I first brought the Eee Pad Transformer to the office, but he also noted, "with the clam shell …"

I don't know if I completely agree that tablets are a novelty. I remember the Palm Pilot was a novelty that wore off, and I am on the fence right now as to whether the iPad and devices such as the Motorola Xoom will end up as paperweights on people's desks, because typing and other uses just aren't there and a laptop or desktop computer is desired. However, this ASUS Eee Pad Transformer tablet may be an important bridge to these technologies, something that didn't exist in the Palm Pilot days, so problems that existed for the Palm Pilot and still exist in tablets today may have new solutions that allow the technology to go further.

There are certain uses for a tablet! I am writing this post to you from a Motorola Xoom. Typing is definitely harder than with a keyboard, yet much easier than from a phone. It didn't work out of the box: apps wouldn't download, and Google Talk wouldn't connect. I finally factory reset it and started over, and it worked. We think you have to log in during the initial configuration to avoid this issue, but we didn't try to dupe it. It is working great now.
https://www.rhyous.com/category/hardware/tablets/
CC-MAIN-2018-09
refinedweb
615
65.62
A "pretty printer" for most SeqAn data structures and related types.

#include <seqan3/core/detail/debug_stream_type.hpp>

A global instance of this type exists as seqan3::debug_stream. You can stream to it as you would to std::cout or std::cerr, but the debug stream has special overloads that make certain types streamable (that are not streamable to std::cout). Additionally, some data structures are visualised more elaborately via the debug stream, and there are extra flags to configure it (seqan3::fmtflags2). Simple usage: Changing flags: See seqan3::fmtflags2 for more details.

Change the underlying output stream. The actual underlying stream that is printed to defaults to std::cerr, but can be changed via this function. You can set any kind of output stream, e.g. a std::ostringstream or a std::ofstream if you want to write to a file, but please be aware that the debug_stream never takes ownership of the underlying stream, so you need to take special care that its object lifetime does not end before the debug_stream's. In the case where you wish to print to some stream object locally, instead create your own debug stream:

An alignment matrix can be printed to the seqan3::debug_stream. This prints out an alignment matrix, which can be a score matrix or a trace matrix.

All alphabets can be printed to the seqan3::debug_stream by their char representation.

A seqan3::alignment_coordinate can be printed to the seqan3::debug_stream. Prints the alignment coordinate as a tuple.

All trace_directions can be printed as ascii or as utf8 to the seqan3::debug_stream. The following table shows the printed symbol of a particular seqan3::detail::trace_directions:

Overload for the seqan3::mask alphabet.

A type (e.g. an enum) can be made debug streamable by customizing the seqan3::enumeration_names.
This searches the seqan3::enumeration_names of the respective type for the value op and prints the respective string if found or '<UNKNOWN_VALUE>' if the value cannot be found in the map. A std::optional can be printed by printing its value or nothing if valueless. All input ranges can be printed to the seqan3::debug_stream element-wise (if their elements are printable). If the element type models seqan3::alphabet (and is not an unsigned integer), the range is printed just as if it were a string, i.e. std::vector<dna4>{'C'_dna4, 'G'_dna4, 'A'_dna4} is printed as "CGA". In all other cases the elements are comma separated and the range is enclosed in brackets, i.e. std::vector<int>{3, 1, 33, 7} is printed as "[3,1,33,7]". All biological sequences can be printed to the seqan3::debug_stream. The (biological) sequence (except for ranges over unsigned integers) is printed just as if it were a string, i.e. std::vector<dna4>{'C'_dna4, 'G'_dna4, 'A'_dna4} is printed as "CGA". Make std::nullopt_t printable. All tuples can be printed by printing their elements separately. A std::variant can be printed by visiting the stream operator for the corresponding type. Note that in case the variant is valueless(_by_exception), nothing is printed. Streams the seqan3::alignment_result to the seqan3::debug_stream. Overload for the seqan3::sam_flags. Print the seqan3::search_result to seqan3::debug_stream.
https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1debug__stream__type.html
CC-MAIN-2021-21
refinedweb
546
57.77
To construct a vbench benchmark you need a setup string and a code string. The constructor's signature is: Benchmark(self, code, setup, ncalls=None, repeat=3, cleanup=None, name=None, description=None, start_date=None, logy=False).

Why generate benchmarks dynamically? For most scikit-learn purposes, the code string will be very close to "algorithm.fit(X, y)", "algorithm.transform(X)" or "algorithm.predict(X)". We can generate a lot of benchmarks by changing what the algorithm is, and changing what the data is or the way it is generated. A possible idea would be to create a DSL in which to specify scikit-learn tests and create benchmarks from them. However, before engineering such a solution, I wanted to test out how to generate three related benchmarks using different arguments for the dataset generation function. This is what I came up with:

[sourcecode language="python"]
from vbench.benchmark import Benchmark

_setup = """
from deps import *
kwargs = %s
X, y = make_regression(random_state=0, **kwargs)
lr = LinearRegression()
"""

_configurations = [
    ('linear_regression_many_samples',
     {'n_samples': 10000, 'n_features': 100}),
    ('linear_regression_many_features',
     {'n_samples': 100, 'n_features': 10000}),
    ('linear_regression_many_targets',
     {'n_samples': 1000, 'n_features': 100, 'n_targets': 100}),
]

_statement = "lr.fit(X, y)"

_globs = globals()
_globs.update({name: Benchmark(_statement, _setup % str(kwargs), name=name)
               for name, kwargs in _configurations})
[/sourcecode]

It works perfectly, but I don't like having to hack the globals to make the benchmarks detectable. This is because of the way the vbench suite gathers benchmarks. In __init__.py we have to do from linear_regression import *. With a small update to the detection method, we could replace the hacky part with a public list of Benchmark objects.

Exposed issues

While working on this, after my first attempt, I was surprised to see that there were no results added to the database, and output plots were empty.
It turns out that the generated benchmarks weren't running, even though if I copied and pasted their source code from the generated html, it would run. Vbench was not issuing any sort of message to let me know that anything was wrong. So what was the problem? My fault, of course: whitespace. But in all fairness, we should add better feedback. This is what I was doing to generate the setup string:

[sourcecode language="python"]
def _make_setup(kwargs):
    return """
    from deps import *
    kwargs = %s
    X, y = make_regression(random_state=0, **kwargs)
    lr = LinearRegression()
    """ % str(kwargs)
[/sourcecode]

It's clear as daylight now that I overzealously indented the multiline string, so the generated setup code carried leading whitespace and could not run. But man, was it hard to debug! Also, in this example, the bug led to a refactoring that made the whole thing nicer and more direct. Hopefully, my experience with vbench will lead to some improvements to this cool and highly useful piece of software.
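The whitespace pitfall above can be sidestepped with textwrap.dedent, which strips the common leading indentation from an indented triple-quoted string before the generated code is exec'd. A minimal illustrative sketch (the make_regression/LinearRegression setup is replaced by a stdlib stand-in so the snippet is self-contained):

```python
import textwrap

def make_setup(kwargs):
    # The indented triple-quoted string keeps its leading whitespace;
    # without dedent, exec() of the generated code fails with an
    # IndentationError, just like the vbench setup strings above.
    return textwrap.dedent("""\
        from math import sqrt   # stand-in for `from deps import *`
        kwargs = %s
        x = sqrt(kwargs["n_samples"])
        """) % str(kwargs)

setup = make_setup({"n_samples": 10000})
ns = {}
exec(setup, ns)
print(ns["x"])  # 100.0
```

The same pattern would apply to _make_setup above: wrap the template in textwrap.dedent and the nesting-induced indentation stops mattering.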
http://vene.ro/blog/dynamically-generated-benchmarks-with-vbench.html
CC-MAIN-2019-18
refinedweb
449
55.44
1. Create a Coffee class to represent a single hot beverage. Every Coffee object contains the following instance fields: a. A protected double variable named basePrice. This variable holds the cost of the beverage without accounting for any special options (cream, sugar, etc.). b. A protected ArrayList variable named options. This variable should only store CoffeeOption objects that have been added to the given beverage. c. A protected String variable named size. This variable represents the size of the beverage. d. A protected boolean variable named isDecaf. This variable will be true if the beverage is decaffeinated, and false otherwise. e. A public constructor that takes two arguments: a String followed by a boolean value. The constructor should create a new, empty ArrayList and assign it to options. The constructor should set the value of size to the String argument. The constructor should set the value of basePrice depending on the value of the String argument (“small” = 1.50, “medium” = 2.00, “large” = 2.50, “extra large” = 3.00). You may assume that the String argument will always be one of these four choices, with that spelling and capitalization. Finally, the constructor should set the value of isDecaf to that of its boolean parameter. f. A public method named addOption(). This method takes a CoffeeOption as its argument, and does not return anything. This method adds its argument to the end of the options ArrayList. g. A public method named price(). This method returns a double value, and does not take any arguments. The price() method returns the sum of basePrice and the prices of all of the elements in options. Note that decaffeinated and regular beverages are the same price. h. A public toString() method. This method returns a String, but does not take any arguments. 
Your toString() method should return a String that contains a description of the Coffee object, in the following format:

size regular-or-decaf Coffee base-price
[one line per added option]
Total: $total-price

where regular-or-decaf should be either "Regular" or "Decaf". For example, a small decaffeinated coffee with no added options would produce the following output:

small Decaf Coffee 1.50
Total: $1.50

A medium regular coffee with one cream and one sugar would produce the following output:

medium Regular Coffee 2.00
add cream 0.10
add sugar 0.05
Total: $2.15

(Do not worry about formatting the price to exactly two decimal places; 1.5 and 1.499999999 are equally acceptable substitutes for 1.50.) Hint: use the toString() method(s) that you developed for Homework 3. i. You may add any additional instance variables or methods to this class that you wish. 2. Create a class named Order. This class maintains a list of Coffee objects, and contains the following fields: a. A private ArrayList named items that holds Coffee objects. b. A private int named orderNumber. c. A public constructor that assigns a new, empty ArrayList to items and assigns a random integer value (between 1 and 2000) to orderNumber (see the end of this document for information about Java's Random class). d. A public method named add() that takes a Coffee object as its argument and does not return any value. This method adds its argument to the end of the items ArrayList. e. A public method named getNumber() that returns the order number. This method does not take any arguments. f. A public method named getTotal(). This method returns a double value and does not take any arguments. This method returns the total price of the items in the current order, including 8.625% sales tax for Suffolk County. g. A public toString() method that returns a String and does not take any arguments. This method should return a String that lists the order number, the current number of Coffee objects in items, and the total price for the order.
For example, toString() might return a String like the following: Order #212 3 item(s) $8.25 h. A public method named receipt(). This method does not take any arguments. It returns a String containing a neatly-formatted order receipt that includes the following information: i. The order number, with an appropriate label ii. The total number of items in the order, with an appropriate label. Note that this value only includes whole beverages; do not include coffee options as separate items! iii.A detailed list of the items in the order (HINT: use Coffee's toString() method) iv.The subtotal for the order, with an appropriate label v. The tax amount, with an appropriate label vi.The total price of the order, with an appropriate label. i. You may add any additional instance variables or methods to this class that you wish. 3. Create a subclass of Coffee named IcedCoffee. An IcedCoffee object is available in the same sizes as a regular Coffee object, except it costs $0.50 more for the equivalent size (for example, a large IcedCoffee has a base price of $3.00 instead of $2.50). Be sure to provide the following functionality for your IcedCoffee class: a. A public constructor that, like the Coffee constructor, takes a size (a String) and a regular/decaf value (a boolean) as its arguments. b. An overridden version of toString() that replaces the word “Coffee” with “Iced Coffee”. Otherwise, toString() provides exactly the same output as its superclass version. 4. Using the sample driver , create a menu-based program that allows the user to perform the following actions: a. Create a new Order. If there is an order currently in progress, it is discarded/ replaced without warning. b. Add a new Coffee object to the current order. This option should let the user do the following: i. Specify a size for the beverage ii. 
Specify whether the beverage is regular or decaffeinated iii. Add coffee options (cream, sugar, and flavor shots) to the beverage until the user indicates that he/she is done. (HINT: use a while or do-while loop) This option is only available if there is a current order. c. Add a new IcedCoffee object to the order. This option should provide the same functionality as the preceding option. (HINT: instead of writing the same code twice, can you avoid duplication by using a method or if statement(s)?) d. Display the contents of the current order. This option is only available if there is an order in progress. e. Place the current order. Selecting this option causes the program to print out the receipt for the current order, and then delete it (by setting any references to this Order to null). This option is only available if there is an order in progress. f. Cancel the current order. Selecting this option causes any references to the current Order to be set to null. This option is only available if there is an order in progress. g. Quit the program. Each time the menu is displayed, if there is an order currently in progress, a short summary of the order should be displayed as well (use Order's toString() method for this).

Here's the sample driver:

import java.util.*;

public class ProjectDriver {

    private static Order myOrder;
    private static Scanner sc;
    private static double totalCharge;

    public static void main(String[] args) {
        myOrder = null;
        totalCharge = 0.0;
        sc = new Scanner(System.in);

        int userChoice = -1;
        do {
            userChoice = displayMenu();
            handle(userChoice);
        } while (userChoice != 0);
    }

    private static int displayMenu() {
        System.out.println("\n\n");
        System.out.println("Main Menu\n\n");
        System.out.println("1. New order");
        System.out.println("2. Add Coffee");
        System.out.println("3. Add Iced Coffee");
        System.out.println("4. Print the current order");
        System.out.println("5. Clear the current order");
        System.out.println();
        System.out.println("0. Exit");
        System.out.println();
        System.out.print("Please select an option: ");

        int result = sc.nextInt();
        sc.nextLine(); // consume extraneous newline character
        System.out.println(); // Add an extra line for formatting
        return result;
    }

    private static void handle(int choice) {
        switch (choice) {
            case 1: newOrder(); break;
            case 2: addCoffee(); break;
            case 3: addIcedCoffee(); break;
            case 4: printOrder(); break;
            case 5: resetOrder(); break;
        }
    }

    private static void newOrder() {
        myOrder = new Order();
        totalCharge = 0;
    }

    private static void addCoffee() {
        System.out.print("Enter size: ");
        String size = sc.nextLine();
        System.out.print("Decaf? ");
        String d = sc.nextLine();

        Coffee c;
        if (d.equals("yes")) {
            c = new Coffee(size, true);
        } else {
            c = new Coffee(size, false);
        }

        // Add submenu for coffee options
        int choice = -1;
        while (choice != 0) {
            System.out.println("Options\n");
            System.out.println("1. Add sugar");
            ...
            if (choice == 1) {
                c.addOption(new Sugar());
                totalCharge += 0.05;
            }
        }
        myOrder.add(c);
    }

    private static void addSugar() {
        myOrder.add(new Sugar());
        totalCharge += 0.05;
    }

    private static void addCream() {
        myOrder.add(new Cream());
        totalCharge += 0.10;
    }

    private static void addFlavoring() {
        myOrder.add(new Flavoring());
        totalCharge += 0.25;
    }

    private static void printOrder() {
        if (myOrder == null) {
            System.out.println("NO current order!");
        } else {
            System.out.println(myOrder.receipt());
            System.out.println();
        }
        System.out.println();
        System.out.println("Total charge so far: " + totalCharge);
    }

    private static void resetOrder() {
        myOrder = new Order(); // Get rid of the current contents
        totalCharge = 0.0;
    }
}

---------------------------------------------------------------------------

Other classes:

Flavoring.java

import java.util.*;
import java.io.*;

public class Flavoring extends CoffeeOption {

    private List<String> flavors = new ArrayList<String>();
    private String selectedFlavor;

    private void loadFlavors() {
        Scanner sc;
        try {
            sc = new Scanner(new File("flavors.txt"));
        } catch (Exception e) {
            System.err.println(e);
            return;
        }
        while (sc.hasNextLine()) {
            flavors.add(sc.nextLine());
        }
    }

    public Flavoring() {
        description = "flavor shot";
        cost = 0.25;
        loadFlavors();

        Scanner sc = new Scanner(System.in);

        // Print the menu of flavors
        System.out.println("Please select a flavor:");
        for (int i = 0; i < flavors.size(); i++) {
            System.out.println((i + 1) + ") " + flavors.get(i));
        }
        System.out.print("Your choice? ");
        selectedFlavor = flavors.get(sc.nextInt() - 1);
        System.out.println();
    }

    public String toString() {
        return super.toString() + "\n" + selectedFlavor;
    }
}

-------------------------------------------------

Cream.java

public class Cream extends CoffeeOption {
    public Cream() {
        description = "cream";
        cost = 0.10;
    }
}

--------------------------------------------------

Sugar.java

public class Sugar extends CoffeeOption {
    public Sugar() {
        description = "sugar";
        cost = 0.05;
    }
}

-------------------------------------------------

CoffeeOption.java

public abstract class CoffeeOption {
    protected double cost;
    protected String description;

    public double price() {
        return cost;
    }

    public String toString() {
        return "add " + description + " " + cost;
    }
}
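For the pricing logic in parts 1(e), 1(g) and 2(f), a compact self-contained sketch may help. The class and method names below are hypothetical stand-ins for the assignment's Coffee and Order classes, with options reduced to their prices:

```java
import java.util.ArrayList;

public class PriceSketch {
    // Base price by size, per part 1(e) of the assignment.
    static double basePriceFor(String size) {
        switch (size) {
            case "small":  return 1.50;
            case "medium": return 2.00;
            case "large":  return 2.50;
            default:       return 3.00; // "extra large"
        }
    }

    public static void main(String[] args) {
        // A medium regular coffee with one cream (0.10) and one sugar (0.05).
        ArrayList<Double> optionPrices = new ArrayList<>();
        optionPrices.add(0.10);
        optionPrices.add(0.05);

        // Part 1(g): price() is the base price plus all option prices.
        double price = basePriceFor("medium");
        for (double p : optionPrices) price += p;

        // Part 2(f): getTotal() adds 8.625% Suffolk County sales tax.
        double total = price * 1.08625;

        System.out.println(Math.round(price * 100) / 100.0); // 2.15
        System.out.println(Math.round(total * 100) / 100.0); // 2.34
    }
}
```

In the real assignment the loop would iterate over the options ArrayList calling each CoffeeOption's price() method instead of raw doubles.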
https://www.daniweb.com/programming/software-development/threads/364931/java-homework-help
CC-MAIN-2019-35
refinedweb
1,675
60.41
This issue is sometimes termed the XML Processing Model problem. There was in fact an XML Processing Model Workshop. In the light of the lack of a consensus result from the workshop, and specifically prompted by a question about the relationship of XML Encryption to other specs, occurring as XEnc made its way to Candidate Recommendation status in W3C, this document was eventually started as an action item from a TAG meeting, to open discussion on a new issue, mixedNamespaceMeaning-13. That issue was then split into several other issues, one of which, xmlFunctions-34, is the main import of this document. In June 2005, this was revised as the XML Processing Model working group charter was being discussed. It might seem that the specifications of different XML namespaces can make inconsistent claims, such that the semantics of mixed-namespace documents are inconsistent. The solution sometimes proposed is a "processing model language", such that there is no default meaning of an XML document without such an external processing definition. This article argues that there is only one basic generic processing model (or rather, algorithm for allocating semantics) for XML documents which preserves needed properties of a multi-namespace system. These properties include the need to be able to define the semantics of an XML element without clashes between different specifications. This introduces the concept of an XML function: the interpretation of an XML document is defined starting at the document root by the specifications of the element types involved. A common class of foreign element name, called here an XML function, has to be recognized in default processing by any supporting application, and returns more XML when it is elaborated. If one party sends another an XML document, how does one say what it means? Or, if you don't like the meaning word, what specs are invoked in what way when an XML document is published or transmitted?
This question is sometimes posed as: What is the processing model for XML? The interpretation of a plain XHTML document is fairly well understood. The document itself is a human language document, and so the conventions - sloppy and wonderful - of human language define how it is understood and interpreted. And the interpretation of tags such as H1 is described in a well-thumbed standard and many books, and is implemented more or less consistently in many devices. But what happens when we start to mix namespaces? When SVG is embedded within XHTML, or RDF or XSLT for that matter, what are the rules which ensure that the receiver will understand well the intent, the client software does the right thing -- and the person understands the right thing? The same issues obviously apply when the information has machine-readable semantics. As Paul Prescod points out, there are plenty of places one might think of looking for information about how to process a document: In fact the general problem is that without any overall architecture, one can write specs which battle each other. "The X attribute changes the meaning of the Y attribute", "The Z attribute restores the meaning of the X attribute irrespective of any Y attribute" and so on. In such a world, one would never know whether one had correctly interpreted anything, as there might be somewhere something deemed to change the meaning of what we have. Clearly this way lies chaos. A coherent architecture defines which specs to look at to determine the interpretation of a document. We don't have this yet (2002) for XML. However, in practice if a person were to look at a document with a mixture of XHTML and SVG, they would probably find its meaning unambiguous. In the same message, Paul opines, Top-down self-descriptiveness is one of the major advantages of XML and I think that doing otherwise should be deprecated. I completely agree with this conclusion.
He concludes correctly that the root namespace (the namespace of the document element) [or a DOCTYPE, which I will not discuss further] is the only thing one must be able to dispatch on. However, he secondarily concludes that, because it is important to define what processing is to be done first, one should use wrapper elements, so that if there are any XSLT elements within a document, a wrapper causes XSLT processing to be done, and so on. The discussion about documents with more than one namespace has often made an implicit assumption that the XML is to be processed in a pipeline, in which each stage implements one XML technology, such as include processing, style sheet processing, decryption, and so on. The point of this article is that while this works in simple cases, in the general case the pipeline model is basically broken. Once you have things arbitrarily nested inside each other, there is no single pipeline which will do the general case. And nesting things inside each other in arbitrary ways is core to the power of XML. The pipeline model makes it very messy to address a situation which is increasingly common. This is that of an XML document which contains a large number of embedded elements from namespaces such as These namespaces share common properties: To treat these as a group, I will call these elements XML functions. The term is not picked randomly. Let's look at some examples, each of which has its peculiarities. Let me clarify this way of looking at XSLT. The XSLT spec defines an XSLT namespace and how you make an XSLT document (stylesheet) out of it. Normally, the style sheet has xsl:stylesheet as its document element. However, there is a special "Literal result element as Stylesheet" (LRES) form of XSLT, in which a template document in a target namespace (such as XHTML) has XSLT embedded in it only at specific places. Here is an example from the spec.
<html xsl:version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Expense Report Summary</title>
  </head>
  <body>
    <p>Total Amount: <xsl:value-of select="expense-report/total"/></p>
  </body>
</html>

The XSLT spec formally defines the LRES form as an abbreviation for the full form. In doing so it loses the valuable fact that in the LRES form, XSLT elements behave as XML functions. They actually adhere to the constraints above. This is very valuable. The XSL spec says that the interpretation is that an XSLT document is generated and processed to return the "real" document. However, this does not scale in design terms. As the XSLT specification itself notes, "In some situations, the only way that a system can recognize that an XML document needs to be processed by an XSLT processor as an XSLT stylesheet is by examining the XML document itself. Using the simplified syntax makes this harder. NOTE: For example, another XML language (AXL) might also use an axl:version attribute on the document element to indicate that an XML document was an AXL document that required processing by an AXL processor; if a document had both an axl:version attribute and an xsl:version attribute, it would be unclear whether the document should be processed by an XSLT processor or an AXL processor. Therefore, the simplified syntax should not be used for XSLT stylesheets that may be used in such a situation." It does not work when other namespaces use the same trick. It also prevents applications from using optimizations which result from the constraints above. So, while the spec formally defines a template document in that way, one can make, it seems, a completely equivalent definition in terms of XML functions. Imagine a document in which at various different parts of the tree different forms occur, and in which these XML functions are in fact nested: you resolve an XInclude and it returns something with XSLT bits in it. It is essential primarily to define what such a document should actually be when (for example) presented to a user.
It is an extra plus to have some visibility from outside the document as to what functionality will be necessary to fully process the document, such as from the MIME header, but we can get to that later. This is probably a simple function. The include element is replaced by the referenced document or part of a document. This is straightforward and obviously nests. It is also obvious that, when xincludes are nested, it makes no difference whether you consider the inner ones to be expanded before or after the outer ones. (The base URI of a reference always has to be taken as that of the original source document, no matter where the reference ends up being expanded.) I think that the battle over the order of processing of XML functions is often an ill-formed question. XML is a tree. It is appropriate for the interpretation of the tree to be defined from the top down. This does not determine the order in which the leaves of the tree have to be done. Here are some ways in which processors could handle an XHTML document containing XML functions: This is NOT supposed to be a definitive list of ways of parsing XML documents with functions - it is only supposed to illustrate the fact that many approaches are possible which can be shown to be mathematically equivalent in their effect. (This is why I tend to talk about the meaning, or interpretation, of a document, rather than the processing model.) That said, it may be necessary to define a reference processing model, just because one has to have a way of describing what the document means. In this case note that the first model above is not appropriate. It uses the fact that XHTML contains no tricks - it is "plain vanilla" in that everything in the document is part of the document in the same way, modulo styling. (I simplify.) This does not apply to other sorts of document.
Take an XML package for example: the contents of the packages are quoted, and it is not appropriate just to expand them. Only the cover note, the defining document, contains the import of the package as a whole, and the interpretation of the other packaged things is only known in as much as the cover note defines it. It is essential that languages such as XML packaging can be defined in XML. It is essential that one can, if you like, quote a bit of XML literally, and make up a new tag which says something quite new about the contents. Therefore, while it works with XHTML, and as Tim Bray says (TAG 2002/02/14) there are many applications which do "generic XML processing" such as trawling documents for links and use of language, there will be certain namespaces such as HTML and SVG for which that makes sense and others such as XML packaging and XML encryption for which it won't. (In the semantic web case, the same applies, and was the cause in 2002 of much discussion in RDF groups, because RDF does not have quotes, and of the informal use of rdf:parseType="log:quote".) If you need another example, think about the XSLT insert which generates an XInclude element: It may contain what seems to be, and even is, an XInclude element, but it should not be expanded as contents of the XSLT element. The reference processing model must be, then, that parsing of an XML document conceptually involves elaboration of any functions, and that processors must be able to dispatch based on namespace at any point in the tree. The result of such processing is the document which should correspond to the XML schema, if there is one. There is normally no call for schema validation of the document which still contains XML functions. Systems which claim to be conformant to the spec of a given XML function mean that they can, in any XML document, elaborate the function according to the specification. As Jacek Kopecky says (2002/02/21), [...]
by saying on the sender: "We expect the XHTML processor to be able to handle XInclude and therefore this thing is an XHTML document all right". We can't of course expect old XML processors to handle XInclude, but we can expect anything which claims conformance with XInclude to do so. In object-oriented software terms, one imagines handing an XML tree to an instance of an object class which supports the element type of the document element. This then returns something as defined by the spec. (An HTML document conceptually returns a hypertext page, an SVG document a diagram, an RDF document a conceptual graph (small c, small g).) The object may itself call out to a parser to get the infoset for its contents, and it may or may not call out to the XML function evaluator, but whether it does or not is defined by its own specification. But XML functions just return XML which replaces them. And any XML application which claims conformance to the XML function's spec should be able to accept this. Similarly, in an event-oriented architecture, an event stream which is being fed to an HTML handler would, when a foreign namespace such as XSLT is found, be vectored to an XSLT handler. The software design has to allow the XSLT handler to hand back a new set of events, a serialization of the resultant tree, to the HTML handler. This note does not address many of the issues around the XML processing model. There is a possible ambiguity when a function refers to the current document. In other words, though it is not allowed to change things outside itself, it may read outside itself. This (if allowed) would clearly raise the question of whether it references the document before or after its own or other functions' elaboration. A related question is whether an XPointer fragment identifier should refer to the document before or after elaboration of functions. My inclination is to say after, because then you know that an XPointer into an SVG object will resolve to a bit of SVG.
But there may be arguments the other way. XML Digital Signature (I am told) specifically requires that the signature is done on the raw source of the document before XInclude. Without going into the relative merits of signature before and after XInclude and other functions, it is clear that there are cases when either would be useful. The ambiguity of these references, like the problems in XSLT of generating XSLT stylesheets with XSLT stylesheets, stems from the lack of a quoting syntax in XML.

@@This section is not complete. It has been covered more thoroughly by TAG discussions already. @@ link

An XML document is in most cases self-describing. That is, you don't need to know anything more than that it is XML to interpret it. Internet messages use the content-type header to define how the body should be interpreted. All that is necessary, then, is that the content-type should indicate XML (text/xml or application/xml or anything with +xml), and a top-down generic processing is valid. (The algorithm for determining the character encoding is not addressed here @@ link.)

While this is sufficient, it is however useful to be able to provide more visibility as to what it contains [Roy Fielding, Dissertation, Ch4 @@link]. The document element gives, in many cases, fundamental information about the resulting type of the whole document, irrespective of functions elaborated or plugins plugged in. For example, whatever the content, an xhtml:html document is a hypertext page. This means that some systems will represent it in a window, and it allows certain functionality. The operating system, if it knows this, can use icons to tell the user, before they open an email or follow a link, what sort of a thing it contains or leads to. Similarly, an SVG document will return a diagram, and an RDF document a body of knowledge, a set of relational data. So more than any other namespace used in the document, the document element namespace is crucial.
This is why the best practice is to publish documents with standard, and therefore well-known, document element types under a special MIME type. This allows an XHTML page to be visible as such from the HTTP headers alone. This allows smarter processing by intermediaries, decisions about proxy caching, translation, and so on. It allows the content negotiation of HTTP to operate, allowing a user for example to express a preference for audio rather than video. It also allows systems which want to optimize the dispatching of a handler for the document from the MIME type alone. A "+xml" suffix as defined by RFC____@@ should be used whenever the document is also a self-describing top-down XML document for which the top-down processing model applies. (The fact that a document is a well-formed XML 1.0 document alone does not constitute grounds for adding the "+xml".)

Simon St. Laurent has suggested [@@ his Internet-draft, possibly timed out] that all namespaces used in the document be listed as parameters to the MIME type. This makes sense on the surface. It may not be practical or worth the effort: it is a lot of bits, and in any case exactly what will be required cannot be determined until the document has been interpreted top-down. However, it or something equivalent is necessary if one is to specify the software support which is necessary.

When a namespace-specific content-type has been specified, is it also necessary to specify the document namespace, or could that be assumed? That would mean that a plain XHTML file would not need an explicit namespace. It is tempting to say that the default namespace should default to that associated with the content type, but in fact the logical thing is for the document namespace. @@Decision tree diagram - add

This document defines the basic interpretation of an XML document.
There have been many suggestions of ways in which a complex and different order of processing could be specified, many of these mentioned at the workshop, including Sun's XML pipeline submission. My current view is that such languages should be regarded as themselves the top-level document, which then draws in the rest of the document by reference as it is elaborated.

In the HTTP protocol, or email for that matter, the important interface which is standardized is the one between the publisher (or sender) and receiver. We concern ourselves with what a receiver can do by way of interpretation of an XML document published or sent. Any processing which has happened on the server or sender side in order to produce that document is not part of the protocol. While XML functions may indeed be elaborated to form a document for transmission from another one, that is something for control within the server and so is not a primary concern for standardization. When a document is in a purely functional form, it is actually an optimization whether the functions are elaborated by the server or by the client.

Because the mode of operation in which the content is evaluated with function processing is very common, it would be useful in a schema, for example, to indicate this mode, or, more practically, to indicate the exceptions. There are very few elements which don't elaborate their contents at the moment in the markup world, and so they should be the exception. (Many computing languages of course reserve special punctuation for this quoting, but adding punctuation at this stage isn't the XML style!)

The top-down processing model for XML as an architectural principle resolves many of the questions which remain unanswerable with pipelined processing. In fact, consideration of the example shows that pipeline processing could actually be dangerous, producing errors and possibly security issues, in the case of generally nested XML technologies of the types discussed.

Tim BL
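The dispatch-by-namespace model described above can be sketched in a few lines of Python. This is purely illustrative: the http://example.org/fn namespace, the fn:double element and its handler are invented for the example, standing in for real XML functions such as XInclude. The point is only that a processor walks the tree and elaborates any element whose namespace has a registered handler, leaving everything else alone.

```python
import xml.etree.ElementTree as ET

# Hypothetical "function" namespace: elements in it get elaborated in place.
FN = "http://example.org/fn"

def double_text(elem):
    # Toy XML function: <fn:double>ab</fn:double> becomes <span>abab</span>.
    out = ET.Element("span")
    out.text = (elem.text or "") * 2
    return out

HANDLERS = {FN: double_text}

def elaborate(elem):
    """Replace any descendant whose namespace has a registered handler."""
    for i, child in enumerate(list(elem)):
        ns = child.tag.partition("}")[0].lstrip("{") if child.tag.startswith("{") else ""
        if ns in HANDLERS:
            elem[i] = HANDLERS[ns](child)  # splice the elaborated result in place
        else:
            elaborate(child)               # otherwise keep dispatching downwards
    return elem

doc = ET.fromstring('<p xmlns:fn="http://example.org/fn">hello <fn:double>ab</fn:double></p>')
elaborate(doc)
print(ET.tostring(doc, encoding="unicode"))  # <p>hello <span>abab</span></p>
```

A quoting element in this scheme would simply be a namespace with no handler registered, so its contents pass through unexpanded, which is exactly the distinction the essay draws for XSLT and packaging.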
http://www.w3.org/DesignIssues/XML
using tiles without struts: Hi, I am trying to make an application using Tiles 2.0. The description of my web.xml is as follows: tiles... tiles-servlet, tiles-api, tiles-core, tiles-jsp. When I try to run my web application I am getting some errors.

Related threads:

- i am Getting Some errors in Struts: I am learning Struts basics, trying the examples on this site, and getting a lot of errors. Please help :(
- I am not getting Problem (RMI): When I am executing RMI example 3.2 I am getting an error saying nested exception and ConnectException.
- using spring rmi+maven but i am getting this exception: D:\Nexustool... is: java.rmi.UnmarshalException: error unmarshalling arguments; nested exception... caused by: java.rmi.UnmarshalException: error unmarshalling arguments; nested exception.
- Tiles in jsp: How do I use Tiles? Hi, you can use the Struts tiles tags in your JSP pages: <%@ taglib uri="/WEB-INF/struts-html.tld ... I have the following error in my JSP file. Thanks.
- i am unable to identify the error in my code: class Program { public static void main(String[] args) { BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.println("enter ...
- Getting an error :( : I implemented the same code as above, but I am getting this error in the console: Oct 5, 2012 10:18:14 AM ... 'SpringMVC' did not find a matching property.
- redirect with tiles - Struts: I have a definition with three pages. I am using Tiles in Struts 2 and I want to redirect to another definition by URL tag. Please specify in detail and send me code. Thanks.
- Developing Simple Struts Tiles Application: This tutorial will show you how to develop a simple Struts Tiles application. You will learn how to set up Struts Tiles and create an example page with it.
- struts tiles framework: How could I include TLD files in my web application?
- error: I am running a program with an "insert into" statement in SQL... getting the error: SQLException caught: [Microsoft][ODBC SQL Server Driver] COUNT field incorrect or syntax error. Please suggest.
- Struts - Jboss - I-Report: Hi, I am a beginner in Java programming, and in my application I wanted to generate a report (based on a database) using Struts, JBoss and iReport.
- i got an error while compile this program manually: import... mapping.findForward("errors.jsp"); I set both servlet and Struts jar files and I got an error in saveErrors().
- error: Heading cannot find
- Page is not getting loaded when I refresh the webpage???: I am using a ... function in JSP. It will get all details of the image section from the database. I am ... when I refresh the page. Note: the header section always gets loaded.
- Getting Error - Development process: Hi, I am getting an error while executing this code. I just want to store a date in the database. java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] Number ...
- Struts validation for j-query tabs data: Hi, I want to apply Struts validation to jQuery tabs data while saving into the database. Can anybody please tell me how?
- i have problem in that program - JavaMail: 1. Write a multi-threaded Java program to print all numbers below 100,000 that are both prime and Fibonacci numbers. ... for(j=2; j<10; i++) { pw.write(f3+"\n"); ...
- i want to remove specific node in xml file: <A> <B>hi <C>by <A> <B>hellow <C>how r u ? I want to delete the node which is == hellow using a Java program. Please help me. Thanks in advance.
- xml - XML: Hi, convert an XML document to an XML string. I am using the code below... after that I am getting the XML string result, like Success xxx profile...
- Bar chart with J table: Hi, I'm new to Java and I have an application... Can you tell me how I can make a bar graph that will display the data from my table? I tried using the code from your page.
- J: How can I stop the back button after logout using JavaScript?
- Im not getting validations - Struts: I created one Struts application. I am using DynaValidations. I configured validation.xml and validation-rules.xml as well.
- Begineer Help Please Thanks A million: I am using JCreator. System.out.println(" Income Statement ..."); Income: $1.00, Retained Earnings: $2.00. I don't know how to sort the array.
- error: "+it); } } This is my program; I am getting an error saying cannot find symbol class string.
- Blue J using smart turtle: Java: I wanted to know the code to start up my smart turtle. I think it goes something like smartTurtle = my smartTurtle. I need this to tell my turtle to draw.
- I am trying to create domains for column attributes of my data dictionary?: Please provide me with the best possible solution. I already have... are shared by different fields and also some of the field values have been coded.
- Tiles-def.xml: <put name="title" value="eSRS Export To Excel" /> Here I want to get the value from a resource bundle... Oracle 10g (both EE and XE) editions... while installing I found port number 1521... when I run this program I get an error like "driver is registered".
- xml and xsd - XML: I am using the program import... 50007812 2005-03-09T17:05:59... How do I use xmlns? I have all the XSDs but I don't know how.
- File I/O: I am trying to read and write a file. My program works perfectly; I am using PrintWriter and BufferedReader. But my problem is that when... input file: blahblah i am a computer i am running windows
- Getting all XML Elements: The following program helps you in getting all XML elements. The program parses the XML file using the parse() method and creates a Document object.
- getting result in table dynamically - Struts: How do I get results in tabular format dynamically using our own tags?
- java struts error: My JSP page posts... I think everything is right but the error is still coming; I tried giving input... ActionForward execute(ActionMapping am, ActionForm af, HttpServletRequest req ...
- Java I/O Examples: What is Java I/O? Java I/O means... classes and interfaces of the I/O streams... provides the standard I/O facilities for reading text.
- Struts: I want to create a Tiles programme using Struts 1.3.8 but I got a Jasper exception. Help me out.
- Simple java program: Hi, this is the thing we have to do and I am totally confused about how to write this program. It would be a great help... Part I: Write a program...
- I/O Java: import java.io.File; import java.io.FileNotFoundException... catch (FileNotFoundException e) { System.out.println(" Error: "+e+" "+inputFile ... System.out.println(" Error in Concat:"+e); I am not really sure why...
- Program Error - WebSevices: Hello friends, write a simple PHP program using the Zend Framework with a database connection. Does anyone know the code? Help me. Hi friend, I am sending simple code using the Zend Framework.
- i am inserting an image into database but it is showing relative path not absolute path: ... catch(Exception e) { System.out.println(e); } %> When I compiled it I got the error java.io.FileNotFoundException. Please go through it and help me.
- Parsing XML using Document Builder factory - XML: Hi, I am new to XML. I am trying to parse an XML file... For this I am using DocumentBuilder... 144.35.3.90 144.35.3.91. I want to read all the tag values. I am able to do so.
http://roseindia.net/tutorialhelp/comment/3512
My current domain looks like this: [class diagram of the Snippet and Tag domain classes] (This picture is unfortunately missing a Tag attribute in the Snippet class). As you can see, a Snippet can have multiple Tags, while a Tag can belong to multiple Snippets. How can we query this? We have three options in this case.

HQL

The HQL version is not so hard (and neither are the other ones), but we have to start somewhere, so let's start with this one. To start at the end: the query looks like this:

    select tag.text, count(snippet.id)
    from Tag as tag
    inner join tag.snippets as snippet
    group by tag.text

To test this code, I find it easiest to start up the Grails console:

    grails console

and run the HQL there. You can do so by obtaining the SessionFactory from the ApplicationContext, which is done by accessing the implicit 'ctx' variable in the console. The complete code looks like this:

    def sessionFactory = ctx.sessionFactory
    def session = sessionFactory.getCurrentSession()
    def query = session.createQuery("select tag.text, count(snippet.id) from Tag as tag inner join tag.snippets as snippet group by tag.text")
    def results = query.list()

Which, in my case, returns this (I have two snippets, one with 3 tags and one with 1 tag, which is a duplicate tag of the first snippet):

    [["groovy", 1], ["io", 2], ["testing", 1]]

Hibernate Criteria

Since Grails is built on Groovy/Java, Hibernate and Spring, it's easy to use that power while coming up with a solution to your problem. If you write Java code all day, and writing Hibernate Criteria is something you do daily, this won't be a problem at all.
Just copy and paste your existing Java code into Grails, and you'll produce something like this:

    def sessionFactory = ctx.sessionFactory
    def session = sessionFactory.getCurrentSession()
    List results = session.createCriteria(Tag.class)
        .setProjection(Projections.projectionList()
            .add(Projections.groupProperty("text"))
            .add(Projections.rowCount()))
        .createCriteria("snippets")
        .list()

Hibernate Criteria using the HibernateCriteriaBuilder

If you, however, feel that the above is lacking some style, you might want to check out the HibernateCriteriaBuilder, which is a DSL for building Hibernate Criteria. When using the DSL, we first need to retrieve the HibernateCriteriaBuilder, which can be obtained from any domain class. After that, you'll have the full power of Hibernate contained in an easy, readable DSL.

    def c = Tag.createCriteria()
    def results = c.list {
        projections {
            groupProperty("text")
            rowCount()
        }
        snippets { }
    }

As you can see, Grails provides a lot of options to query your domain. Whether you prefer HQL, the safe and sound Criteria, or the new HibernateCriteriaBuilder, the choice is up to you. By leveraging the power of existing frameworks, Grails provides an easy and powerful way to quickly develop an application with a minimal learning curve!

Comments:

For grailstutorials.com I used the 'act as taggable' plugin. The inner implementation of this plugin will actually create two tables: a Tag table, and a Tagging table with the important columns tagId and taggableType (actually the class name of the tagged domain object). I prefer this approach because I don't need a tag field in my domain classes but I can still tag them. Another advantage is that querying the tag cloud from the database is easier, at least from my point of view.

    Tagging.executeQuery("select distinct t.tag.name, count(t.tag.id) from Tagging t where t.taggableType=? group by t.tag.id")

Jan: I am not sure where that ctx is coming from?? I get a "not defined" exception; actually I am trying to use that in my tests.
Hi Puran, like I described, it's an implicit variable in the console. If you're running tests, and your tests are 'normal' unit tests (so no integration), then my guess is that they are not there. Have you tried the console?

Hi, how can you render the results on a gsp file, for example using ? Thanks, Rogério Carrasqueira

In a Grails controller, you can also use DomainObject.executeQuery(). Documented here:

how do you add multiple distinct fields?
http://blog.xebia.com/2008/06/04/querying-associations-in-grails-with-hql-criteria-and-hibernatecriteriabuilder/
I've got the following data in a CSV file (a few hundred lines) that I'm trying to massage into sensible JSON to post to a REST API. I've gone with the bare minimum fields required, and here's what I've got:

    dateAsked,author,title,body,answers.author,answers.body,topics.name,answers.accepted
    13-Jan-16,Ben,Cant set a channel ,"Has anyone had any issues setting channels. it stays at '0'. It actually tells me there are '0' files.",Silvio,"I'm not sure. I think you can leave the cable out, because the control works. But you could try and switch two port and see if problem follows the serial port. maybe 'extended' clip names over 32 characters. Please let me know if you find out! Best regards.",club_k,TRUE

The JSON I'm aiming for looks like this:

    json_test = """{
        "title": "Can I answer a question?",
        "body": "Some text for the question",
        "author": "Silvio",
        "topics": [
            { "name": "club_k" }
        ],
        "answers": [
            {
                "author": "john",
                "body": "I'm not sure. I think you can leave the cable out. Please let me know if you find out! Best regards.",
                "accepted": "true"
            }
        ]
    }"""

You can use a csv.DictReader to process the CSV file as a dictionary for each row. Using the field names as keys, a new dictionary can be constructed that groups common keys into a nested dictionary keyed by the part of the field name after the '.'. The nested dictionary is held within a list, although it is unclear whether that is really necessary - the nested dictionary could probably be placed immediately under the top-level without requiring a list.
Here's the code to do it:

    import csv
    import json

    json_data = []
    for row in csv.DictReader(open('/tmp/data.csv')):
        data = {}
        for field in row:
            key, _, sub_key = field.partition('.')
            if not sub_key:
                data[key] = row[field]
            else:
                if key not in data:
                    data[key] = [{}]
                data[key][0][sub_key] = row[field]
        # print(json.dumps(data, indent=True))
        # print('---------------------------')
        json_data.append(json.dumps(data))

For your data, with the print() statements enabled, the output would be:

    {
     "body": "Has anyone had any issues setting channels. it stays at '0'. It actually tells me there are '0' files.",
     "author": "Ben",
     "topics": [
      {
       "name": "club_k"
      }
     ],
     "title": "Cant set a channel ",
     "answers": [
      {
       "body": "I'm not sure. I think you can leave the cable out, because the control works. But you could try and switch two port and see if problem follows the serial port. maybe 'extended' clip names over 32 characters. \nPlease let me know if you find out!\n Best regards.",
       "accepted ": "TRUE",
       "author": "Silvio"
      }
     ],
     "dateAsked": "13-Jan-16"
    }
    ---------------------------
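To exercise the grouping logic above without a file on disk, here is a self-contained variant. The two-column sample header is invented (shorter than the real one) but follows the same dotted-name convention, and the grouping uses the same field.partition('.') scheme as the answer:

```python
import csv
import io
import json

# Inline sample mirroring the dotted-header convention of the question.
sample = io.StringIO(
    "title,author,answers.author,answers.body\n"
    'Cant set a channel,Ben,Silvio,"Leave the cable out"\n'
)

records = []
for row in csv.DictReader(sample):
    data = {}
    for field, value in row.items():
        key, _, sub_key = field.partition(".")
        if not sub_key:
            data[key] = value  # plain top-level field
        else:
            # group dotted fields under a one-element list of dicts
            data.setdefault(key, [{}])[0][sub_key] = value
    records.append(data)

print(json.dumps(records[0], sort_keys=True))
```

Note that setdefault() replaces the explicit `if key not in data` check from the answer; the behaviour is the same.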
https://codedump.io/share/Z0UeUfVdjgng/1/trying-to-convert-a-csv-into-json-in-python-for-posting-to-rest-api
Southern Classics! Hands On Cooking Class at Cooking With Class Baytown

Fri, December 23, 2016, 7:00 PM – 10:00 PM CST

Description

Southern Classics Dinner Party!

Appetizer: Boudin balls
Main Course: Fried fish, buttermilk fried chicken, collard greens, homemade macaroni and cheese
Dessert: Bread pudding with bourbon creme glaze and vanilla ice cream

You get to work in groups of 3-4, allowing for plenty of time to cook, eat, drink, and meet new friends. The class is hands-on, with each cooking station getting to make every dish, unless the dessert is something requiring preparation ahead. Although dinner party classes are more social in nature, you will still learn how to cook and prepare amazing food! Bring your favorite beverage, some friends, and come enjoy a cooking and dining experience you won't soon forget! Come hungry because there is plenty of food!

FOR A LIMITED TIME: BOGO! Buy one ticket at regular price and receive the second person 1/2 off! Talk about a date night deal!

FAQs

Are there ID or minimum age requirements to enter the event?
Dinner parties are for patrons 18 and older.

What are my transportation/parking options for getting to and from the event?
We are located in a residential neighborhood on a corner lot. You may park in the driveway or in front of the house on the street.

What can I bring into the event?
Bring your favorite beverage! We provide aprons, wine glasses, ice, and bottle openers. Guests are welcome to bring to-go containers for leftovers. We sell bottled water and sodas. All we ask is that you wear closed-toe shoes and comfortable (but not loose) clothing for your safety.

How can I contact the organizer with any questions?
Please contact Ouindetta Thomas via text or phone call at 409.293.7744 or email at chefwin@cookingwithclassbaytown.com

What's the refund policy?
We have a 48-hour cancellation policy for our public classes and there are NO EXCEPTIONS. If you cannot attend and you are within the 48-hour window before the class, you are more than welcome to send someone else in your place. You do not need to contact us when you send someone else in your place. If no one can come in your place, you will forfeit your seat completely. We do not issue refunds, only credits towards other classes. Therefore, if you cancel 48 hours before the class, you may reschedule for a class at a later date.

A note on food allergies: please call (409) 293-7744 to talk about them well before the class.
https://www.eventbrite.com/e/southern-classics-hands-on-cooking-class-at-cooking-with-class-baytown-tickets-29292975077
EmberBindingTableEntry Struct Reference

Defines an entry in the binding table.

#include <ember-types.h>

A binding entry specifies a local endpoint, a remote endpoint, a cluster ID and either the destination EUI64 (for unicast bindings) or the 64-bit group address (for multicast bindings).

Field Documentation

- Cluster ID: A cluster ID that matches one from the local endpoint's simple descriptor. This cluster ID is set by the provisioning application to indicate which part of an endpoint's functionality is bound to this particular remote node and is used to distinguish between unicast and multicast bindings. Note that a binding can be used to send messages with any cluster ID, not just the one listed in the binding.
- Identifier: A 64-bit identifier. This is either the destination EUI64 (for unicasts) or a 16-bit multicast group address (for multicasts).
- Local endpoint: The endpoint on the local node.
- Network index: The index of the network the binding belongs to.
- Remote endpoint: The endpoint on the remote node (specified by the identifier).
- Type: The type of binding.

The documentation for this struct was generated from the following file: ember-types.h
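As a sketch only (this is not the Silicon Labs C API; the Python names and constants below are invented for illustration), the way the 64-bit identifier field is overloaded by the binding type can be modelled like this:

```python
from dataclasses import dataclass

# Invented constants standing in for the binding-type enumeration.
UNICAST_BINDING = 1
MULTICAST_BINDING = 2

@dataclass
class BindingTableEntry:
    type: int        # unicast or multicast binding
    local: int       # endpoint on the local node
    remote: int      # endpoint on the remote node
    cluster_id: int  # cluster ID from the local endpoint's simple descriptor
    identifier: int  # destination EUI64 (unicast) or 16-bit group address (multicast)
    network_index: int = 0  # index of the network the binding belongs to

    def is_multicast(self) -> bool:
        return self.type == MULTICAST_BINDING

# A unicast binding carries a full 64-bit EUI64; a multicast binding
# carries only a 16-bit group address in the same field.
unicast = BindingTableEntry(UNICAST_BINDING, 1, 1, 0x0006, 0x000B57FFFE123456)
group = BindingTableEntry(MULTICAST_BINDING, 1, 0, 0x0006, 0x1234)
print(unicast.is_multicast(), group.is_multicast())
```

The design point this illustrates is that consumers must check the binding type before interpreting the identifier, since the same field holds two different kinds of address.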
https://docs.silabs.com/zigbee/6.6/em35x/structEmberBindingTableEntry
Hi, On 23.07.2012 15:17, Rob Weir wrote: > On Fri, Jul 20, 2012 at 2:11 AM, Clarence GUO <clarence.guo.bj@gmail.com> wrote: >> HI~ >> I'm investigating bug 119639. Basically I need ODF standard to support >> chart:percentage attribute not only in <chart:plot-area> applied style but >> also in <chart:series> applied style. However ODF v1.2 says for >> chart:percentage attribute "This attribute is evaluated for chart styles >> that are applied to a <chart:plot-area> element.". So the requirement is >> conflict with the ODF standard. So is there any method can satisfy my >> requirement without ODF standard change? The details please reference below >> explanation. I also post it as comment to this bug. >> > > Two possibilities: > > 1) Generally we should avoid reinterpreting existing ODF > elements/attributes. But if we believe that something in ODF is > ambiguous or wrong, then we can make a proposal to the OASIS ODF TC to > adjust the meaning of existing ODF markup. The main restriction here > is that the ODF TC will generally not make changes that would break > existing documents or implementations. > > 2) If we need to make a bigger change, ODF allows implementors to > extend it. So we can add new elements and attributes, but they cannot > be in an ODF namespace. We need to put the new markup in our own XML > namespace. > Here, I would like to share my presentation which I gave at the OOoCon 2010 in Budapest [1] - especially subtopic "ODF Conforming Development" [1] Best regards, Oliver. > If we need to extend ODF I'd recommend also: > > a) We track on a special wiki page exactly what additions we make to > the document format. > > b) We think carefully about backwards compatibility and how the new > markup will be treated by other applications > > c) If we think the new feature would be useful to other editors we can > propose adding it to ODF 1.3, as an official part of the standard. 
>
> -Rob
>
>> The chart in the sample is not a standard chart but a composite chart.
>> Series 1, 2 and 4 are percent stacked columns which were applied to the
>> main Y axis, while series 3 is a normal column which was applied to the
>> secondary Y axis. So series 3 will cover some part of series 1, 2 and 4.
>> MS Office supports such a chart. It can set one or more data series to
>> another chart type. AOO's UI cannot do this, but the chart kernel
>> function supports it. So when opening the sample file the chart looks OK.
>> However, the ODF format doesn't support this, which is why, after saving
>> the sample to ods and reopening, the chart becomes incorrect (series 3
>> occupies 100% of the space of the percent stacked column).
>> The reason is that AOO defined a style 'ch4' and applied it to
>> <chart:plot-area>. In this style, the chart:percentage attribute was set
>> to true. This attribute specifies a percentage accumulation of values per
>> category. Because this style was applied to <chart:plot-area>, all data
>> series are regarded as percent columns. And series 3 is the only data
>> series applied to the secondary Y axis, so it occupies the total 100%
>> space and all the other data series are covered.
>> If I want to fix this defect, I must move the chart:percentage attribute
>> from the style which is applied to <chart:plot-area> to the style which
>> is applied to <chart:series>. However, for this attribute, ODF 1.2 says
>> "This attribute is evaluated for chart styles that are applied to a
>> <chart:plot-area> element.". So my requirement conflicts with the ODF
>> standard.
>> Then how can I fix this defect? It seems the only way is to change the
>> standard and change the corresponding filter code. But I know changing
>> the standard is not easy...
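Rob's second option, putting new markup in an implementation-owned namespace, can be sketched with Python's ElementTree. The chart namespace URI below is the real ODF one; the myext URI and the percentage attribute placed in it are invented for the illustration:

```python
import xml.etree.ElementTree as ET

CHART = "urn:oasis:names:tc:opendocument:xmlns:chart:1.0"  # real ODF chart namespace
MYEXT = "urn:example:aoo:chart-extension"                   # invented extension namespace
ET.register_namespace("chart", CHART)
ET.register_namespace("myext", MYEXT)

series = ET.Element(f"{{{CHART}}}series")
# The extension attribute lives outside the ODF namespaces, so a conforming
# consumer that does not understand it can simply ignore it.
series.set(f"{{{MYEXT}}}percentage", "true")
print(ET.tostring(series, encoding="unicode"))
```

This is the backward-compatibility property Rob describes: existing consumers keep treating the element as plain ODF, while an extended implementation can read its own attribute back.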
http://mail-archives.apache.org/mod_mbox/incubator-ooo-dev/201208.mbox/%3C50190B87.4060107@googlemail.com%3E
Random DevTools Entry: #001 😀

So for the inaugural entry, we have two tools and one library:

- FaceID Browser. When creating custom add-ins for Office applications, you can create a command bar button (CommandBarButton) and apply an existing Office icon using the FaceID property. This tool allows you to visually map the integer values to the icons. I've tested it to work with Office 2003.
- Xsd Generator with GAT. For some reason or another, I like the idea of working with XML schemas when building an object model; schemas seem much more natural to me than working with classes in code. In addition, you get nice XML serialization markup for free 🙂 Matias Woloski has written a custom generator for .Net 2.0. I'm gonna give it a look-see.

[Update:1] The binaries that are currently on the GDN website are not compatible with the RTM versions of VS2005.

[Update:2] Holy crap. After several hours of fiddling, I finally got it to install. Damn, the December CTP of GAT extensions is still buggy as hell. For some reason or another, uninstalling the Xsd Generator after installing it would also remove all traces of the Microsoft.Practices.RecipeFramework dlls. WTF? So this would necessitate reinstalling the GAT extensions. On top of that, the December CTP changed the default namespace on the configuration file; it took me at least a half an hour of digging to find this info. The other really stupid thing is that you can't change the config file after install without reinstalling... another big WTF; I mean, isn't that the whole point of having an XML config file? I also had to rebuild the references in two of the projects and add a missing reference to Microsoft.RecipeFramework.Common.Library. But in the end, it's worth the effort! I dig the fact that it generates generic lists instead of typed collections (typed collections are soo last gen. :-D).

- Microsoft Updater Application Block.
Looks useful for anyone currently building lots of application add-ins for any of the Office applications, as it will allow you to keep clients up-to-date without having to redeploy with each update. There's an article over at TheServerSide.Net on usage and details.

That's it for now. Look for more installments in the future.

Comments (2)

June 2nd, 2006 - 03:20
Thanks for the info Charles! You helped me get the XSD Generator with GAT working with VS2005.

June 2nd, 2006 - 13:04
No prob Bill. Sadly, I'm not too fond of the output code generated by this tool, however.
http://charliedigital.com/2006/01/14/random-devtools-entry-001/
Created on 2007-03-07 17:57 by florianfesti, last changed 2010-09-23 16:24 by pitrou. This issue is now closed.

This patch improves read performance of the gzip module. We have seen improvements ranging from 20% when reading big blocks up to a factor of 50 when reading 4-byte blocks. Additionally, this patch removes the need to seek within the file, which allows using streams like stdin as input.

Details: The whole read()/readline() stack got rewritten. The core part is a new input buffer. It consists of a list of strings (self.buffer), an offset of what has already been consumed from the first string (self.pos), and the length of the (still consumable) buffer (self.bufferlen). This makes adding to and removing from the buffer cheap. It turned out that removing from the old buffer was breaking performance, as reading one byte meant copying the whole buffer. For reading a 2k buffer in single bytes, 2 MB had to be copied. readline no longer uses read() but works directly on the buffer. This removes a whole layer of copying strings together. To remove the need for seeking, a new read-only file-like class is used (PaddedFile). It just prepends a string to a file and uses the file's read method when the string is used up. There is probably still some room for tweaking when it comes to buffer sizes, as we kept this simple. But the great performance improvements show that we can't be off by much. Performance test program and results are attached.

File Added: test_gzip2.py
File Added: results.txt

Added minor improvement to the unittest: check for newlines in the readline() test code.

File Added: test_gzip.py.diff

The patch looks good, and the tests pass. Can you add a test that ensures that a seek() method is no longer necessary for the underlying stream? (closed #914340, which provided a similar patch, as a duplicate)

I added checks to test_readline() and test_append(). I left the other read test untouched to keep some filename=filename coverage.
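The PaddedFile idea described above can be sketched in a few lines of Python. This is a simplified, hypothetical reconstruction of the concept, not the actual patch code; all names are illustrative:

```python
import io

class PaddedFile:
    # Read-only file-like wrapper that prepends a byte string to an
    # underlying file, so the underlying file never needs to seek().
    def __init__(self, prepend, fileobj):
        self._prepend = prepend
        self._file = fileobj

    def read(self, size=-1):
        if self._prepend:
            if size < 0:
                # Drain the prepended bytes plus the rest of the file.
                data = self._prepend + self._file.read()
                self._prepend = b""
                return data
            # Serve from the prepended bytes first, then top up from the file.
            data = self._prepend[:size]
            self._prepend = self._prepend[size:]
            if len(data) < size:
                data += self._file.read(size - len(data))
            return data
        return self._file.read(size)

pf = PaddedFile(b"He", io.BytesIO(b"llo, world"))
first = pf.read(5)
rest = pf.read()
print(first)  # b'Hello'
print(rest)   # b', world'
```

The point of the wrapper is that once the gzip header bytes have been consumed from a non-seekable stream, they can be "pushed back" by prepending them, instead of seeking the stream backwards.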
BTW: I really forgot special thanks to Phil Knirsch, who wrote the initial read() implementation and the performance test cases and did a lot of tweaking and testing, and without whom this patch would never have existed.

File Added: test_gzip.py-noseek.diff

Excuse me, but I can't apply the patch. I have Windows XP without any SP and I tried the command: patch -u gzip.py gzip.py.diff

Hm, works on Linux. Try this one.

File Added: gzip.py.diff

I applied the patch by hand; I think the problem is simply that it's for Python 2.6 (I have the stable 2.5 version). Anyway, like I wrote for an old similar patch by another user, the patch starts to read the header at the current position, and not at the start of the file. You can see it by trying this piece of code:

import urllib2
import array
import gzip

urlfile = urllib2.urlopen(someurl)
header = array.array("B")
header.fromstring(urlfile.read(1))
gzfile = gzip.GzipFile(fileobj=urlfile)
print gzfile.read()

Error:

File "c:\My Programs\Python\lib\gzip.py", line 285, in read
    self._read_gzip_header(self.fileobj)
File "c:\My Programs\Python\lib\gzip.py", line 177, in _read_gzip_header
    raise IOError, 'Not a gzipped file'
IOError: Not a gzipped file
>Exit code: 1

I don't know how you can solve this without seek(). Anyway, if you are interested, I created the diff for Python 2.5:

There is a simple solution for that problem: DON'T! If you pass a file descriptor to the GzipFile class, it is your responsibility to ensure that the file is a gzip file and that the file pointer is at the right position. After a short look into the original code, I don't think it can cope with the use case you brought up. I'd even argue that seeking to the beginning of the file is broken behaviour.
What if the gzip content doesn't start at the beginning of the file? In fact, prepending some header before compressed data is quite common. If the file you were reading really had a header, your example would have just worked. Indeed, the problem is that seek() is not implemented for urllib objects. It's not a bug in your patch; sorry if I explained it badly.

This patch is awesome. It makes it possible to do this with HTTP response objects that return gzipped streams:

>>> conn = httplib.HTTPConnection('blah')
>>> req = conn.request('GET', 'a.gz')
>>> resp = conn.getresponse()
>>> unzipper = gzip.GzipFile(fileobj=resp)
# read just the first 100 lines
>>> for i in range(100): print unzipper.readline()

Are compatibility concerns the main reason this patch has yet to be applied? If so, could we allay those fears by keeping the default seek-y behavior and only introducing the new seek-less style when a 'noseek' flag is passed?

There are no compatibility concerns I am aware of. The new implementation no longer needs a seek()able file. Of course, an implemented seek() method won't hurt anyone. The additional tests are only there to point out the problems of the old implementation. So there is no flag needed to maintain compatibility. The patch just has to be reviewed and then applied. If there are any concerns or questions, I'll be glad to assist. Florian

The patch could only be applied to 3.2 (2.7 is frozen now). But the gzip code has changed quite a bit and I would advocate creating a new patch if you are interested. Do notice that performance should also be much better in 3.2, and it is possible to wrap a gzip object in an io.BufferedReader object so as to reach even better performance.

As there has been a lot of support for the attached patch, what is needed to take this forward?

> As there has been a lot of support for the attached patch what is
> needed to take this forward?

Read my 2010-06-17 message above.
Someone needs to update the patch for 3.2, and preferably show that it still brings an improvement (small micro-benchmarks are enough).

I updated the performance script to Py3. You still need to change the import line to gzipnew to actually load the modified module. Right now it just compares the stdlib gzip module to itself.

Attached is the result of a run with the stdlib gzip module only. The results indicate that performance is still as bad as on Python 2. The Python 3 gzip module also still makes use of tell() and seek(). So both arguments for including this patch are still valid. Porting the patch will include some real work to get the bytes vs. string split right.

> Attached result of a run with stdlib gzip module only. Results
> indicate that performance still is as bad as on Python 2. The Python 3
> gzip module also still makes use of tell() and seek(). So both
> arguments for including this patch are still valid.

Performance is easily improved by wrapping the file object in an io.BufferedReader or io.BufferedWriter:

Text 1 byte block test
Original gzip Write: 2.125 s Read: 0.683 s
New gzip Write: 0.390 s Read: 0.240 s

Text 4 byte block test
Original gzip Write: 1.077 s Read: 0.351 s
New gzip Write: 0.204 s Read: 0.132 s

Text 16 byte block test
Original gzip Write: 1.119 s Read: 0.353 s
New gzip Write: 0.264 s Read: 0.137 s

Still, fixing the seek()/tell() issue would be nice.

Stupid me! I ran the tests against my system's gzip version (Py 3.1). The performance issue is basically fixed by rev 77289. Performance is even a bit better than my original patch, by maybe 10-20%. The only test case where it performs worse is:

Random 10485760 byte block test
Original gzip Write: 20.452 s Read: 2.931 s
New gzip Write: 20.518 s Read: 1.247 s

Don't know if it is worth bothering. Maybe increasing the maximum chunk size improves this, but I didn't try that out yet.
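The io.BufferedReader wrapping mentioned in these benchmarks can be sketched like this (a minimal, self-contained example; the filename is illustrative):

```python
import gzip
import io
import os

# Write a small gzip file, then read it back through io.BufferedReader,
# which batches the many tiny read() calls that hurt performance.
with gzip.open("example.gz", "wb") as f:
    f.write(b"line one\nline two\n")

raw = gzip.open("example.gz", "rb")
buffered = io.BufferedReader(raw)
first = buffered.readline()
rest = buffered.read()
buffered.close()
os.remove("example.gz")

print(first)  # b'line one\n'
print(rest)   # b'line two\n'
```

With the wrapper in place, small reads hit the in-memory buffer instead of triggering a decompression step on every call.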
WRT seeking: I now have two patches that eliminate the need for seek() in normal operation (rewind obviously still needs seek()). Both are based on the PaddedFile class. The first patch just creates a PaddedFile object while switching from an old to a new member, while the second wraps the fileobj all the time. Performance tests show that wrapping is cheap. The first patch is a bit ugly, while the second requires an implementation of seek() and may create problems if new methods of the fileobj are used that interfere with the PaddedFile's internals. So I leave the choice of which one is preferred to the module owner.

The patch creates another problem which is not yet fixed: the implementation of .seekable() becomes wrong. As one can now use non-seekable files, the implementation should check whether the file object used for reading is really seekable. As this is my first Py3k work, I'd prefer if this could be solved by someone else (but that should be pretty easy).

Thank you very much! I have kept the second approach (use PaddedFile at all times), since it is more regular and minimizes the probability of borderline cases. As for the supposed performance slowdown, it doesn't seem significant. On large blocks of data, I expect that compression/decompression cost will be overwhelming anyway. I've added a test case and committed the patch in r84976. Don't hesitate to contribute again.
http://bugs.python.org/issue1675951
python interview questions: 131 How do you use Python to format strings with a delimiter into columns?

>>> pairs = map(str.split, text.splitlines())
>>> max_len = max(len(pair[0]) for pair in pairs)
>>> info = '\n'.join('{key:<{indent}}{val}'.format(key=k, indent=max_len + 2, val=v) for k, v in pairs)
>>> print info
abc:    234234
aadfa:  235345
bcsd:   992

An alternative approach pads the key out to a tab-stop width:

>>> import math
>>> info = ["abc: 234234", "aadfa: 235345", "bcsd: 992"]
>>> info = [item.split() for item in info]
>>> maxlen = max([len(item[0]) for item in info])
>>> maxlen = math.ceil(maxlen/8.0)*8
>>> info = [item[0]+" "*(maxlen-len(item[0]))+item[1] for item in info]

This way you can control how the final width is computed.

python interview questions: 132 How can I get the CPU temperature in Python?

- There's no standard Python library for this, but on various platforms you may be able to use a Python bridge to a platform API to access this information.
- For example, on Windows this is available through the Windows Management Instrumentation (WMI) APIs, which are available to Python through the PyWin32 library. There is even a Python WMI library which wraps PyWin32 to provide a more convenient interface.
- To get the temperature you'll need to use one of these libraries to access the root/WMI namespace and the MSAcpi_ThermalZoneTemperature class. This gives the temperature in tenths of a kelvin, so you'll need to convert to Celsius by deducting 2732 and dividing by 10.

python interview questions: 134 Name the file-related modules in Python.
- Python provides libraries/modules with functions that enable you to manipulate text files and binary files on the file system. Using them you can create files, update their contents, and copy and delete files.
- The libraries are: os, os.path, and shutil.
- The os and os.path modules include functions for accessing the filesystem, while the shutil module enables you to copy and delete files.

python interview questions: 136 Explain the file processing modes supported by Python.

- Python allows you to open files in one of three main modes: read ("r"), write ("w"), and append ("a"), each of which can be combined with "b" for binary files.

python interview questions: 137 Explain how to redirect the output of a Python script from stdout (i.e., the monitor) to a file.

- Open an output file in "write" mode and print the contents to that file by reassigning the sys.stdout attribute:

import sys
filename = "outputfile"
sys.stdout = open(filename, "w")
print "testing"

- Alternatively, you can create a Python script, say redirect_output.py, with the contents print "Testing", and then redirect its output to a file while executing it at the command prompt:

python redirect_output.py > outputfile

python interview questions: 139

d = dict([('Company-id', 1), ('Company-Name', 'Intellipaat')])
d.items() # displays the output as: [('Company-id', 1), ('Company-Name', 'Intellipaat')]

python interview questions: 140 ... searching a list object.
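A complete, runnable version of the stdout-redirection idea from question 137 (Python 3 print syntax; the filename is just illustrative), which also restores sys.stdout afterwards:

```python
import sys

filename = "outputfile.txt"  # illustrative name
original_stdout = sys.stdout

# Redirect print() output into the file.
sys.stdout = open(filename, "w")
print("testing")
sys.stdout.close()

# Always restore the original stream afterwards.
sys.stdout = original_stdout

with open(filename) as f:
    captured = f.read().strip()
print(captured)  # testing
```

Restoring the original stream matters: leaving sys.stdout pointing at a closed file makes every later print() fail.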
https://www.wikitechy.com/tutorials/python/python-interview-questions-and-answers-for-testers
07 February 2006 7 comments Python

I learnt something useful today which I can't explain. When you use the tempfile module in the Python standard library and the mkstemp() function, you get two things back: an integer and a string:

>>> import tempfile
>>> x, y = tempfile.mkstemp()
>>> x, type(x)
(3, <type 'int'>)
>>> y, type(y)
('/tmp/tmpK19sIx', <type 'str'>)
>>> help(tempfile.mkstemp)

I don't know what to do with the integer so I just ignore it. I thought I would get the result of open("/tmp/tmpK19sIx", "w+b"). The help section of mkstemp says:

mkstemp(suffix='', prefix='tmp', dir=None, text=False)
mkstemp([suffix, [prefix, [dir, [text]]]])

User-callable function to create and return a unique temporary file. The return value is a pair (fd, name) where fd is the file descriptor returned by os.open, and name is the filename.

What does os.open do? My guess is that it's a lower-level constructor than the builtin open(). All I wanted to do was generate a temporary filename where I can place some binary content to be unpacked by another little script that uses tarfile. I had to do something like this:

def TGZFileToHTML(content):
    __, tmp_name = tempfile.mkstemp(suffix='.tgz')
    open(tmp_name, 'w+b').write(content)
    result = tarfile2html(tmp_name)
    os.remove(tmp_name)
    return result

Is this the right way to do it? I first looked at tempfile.mktemp() but it's apparently not secure.

Pass the fd number as the first arg to the write call - write(fd, content). Details are here.

Unfortunately you've just built your own (equally insecure) equivalent of mktemp(). The returned file descriptor is already opened, allowing you to avoid a race condition whereby someone else could create a file with the same name. You can create a file object from the fd with a call to fdopen. The function name is taken from a function in the C standard library which returns a file descriptor. Returning the filename as well is a Python freebie.
It is a good idea to give a book like Johnson and Troan's Linux Application Development or Richard Stevens' Advanced Programming in the UNIX Environment a read, to get the background on this and the file descriptor mechanism, even if you aren't planning to do any C programming if you can avoid it; a lot of stuff in Python's os, sys, socket, etc. modules (not to mention Perl/Ruby/whatever modules) directly reflects that heritage. (Also, the C function has a manpage: man mkstemp.)

Use the TemporaryFile class! It will take care of everything for you and return a familiar file-like object. The lower-level C library wrappers are not really necessary most of the time.

And Richard pointed it out, but it deserves to be mentioned again: DO NOT THROW AWAY the integer returned by mkstemp if you choose to use it. Really, don't.

tmp_fd, tmp_name = tempfile.mkstemp(suffix='.tgz')
f = os.fdopen(tmp_name, 'w+b')
f.write(content)
...

NM... Follow Richard's suggestion.

# typo...
f = os.fdopen(tmp_fd, 'w+b')
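Putting the commenters' advice together, here is a runnable sketch (Python 3 syntax) that keeps the already-open descriptor via os.fdopen instead of re-opening by name, which avoids the race the comments describe:

```python
import os
import tempfile

# mkstemp returns the already-open file descriptor and the filename.
fd, name = tempfile.mkstemp(suffix=".txt")

# Wrap the descriptor in a normal file object instead of re-opening by name.
with os.fdopen(fd, "w+b") as f:
    f.write(b"hello")

with open(name, "rb") as f:
    content = f.read()
print(content)  # b'hello'

os.remove(name)
```

If you don't actually need the filename, tempfile.TemporaryFile is simpler still, since it handles cleanup for you.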
https://www-origin.peterbe.com/plog/tempfile
In a previous article on async and await, I showed a very simple example of how to run some code asynchronously. Then in the 2nd article I showed an example of updating the user interface in the main thread from an async method. The code below (from the previous article) would execute a long running task which in this case counts to 5,000,000. As the task was running, on a set interval, the UI would be updated. This effectively decoupled the running of the task from the updating of the UI. You could have the task update the UI every 10ms, or you could update every 5 seconds. It really depends on what you are trying to do.

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace WindowsFormsAsync
{
    public partial class Form1 : Form
    {
        private readonly SynchronizationContext synchronizationContext;
        private DateTime previousTime = DateTime.Now;

        public Form1()
        {
            InitializeComponent();
            synchronizationContext = SynchronizationContext.Current;
        }

        private async void ButtonClickHandlerAsync(object sender, EventArgs e)
        {
            button1.Enabled = false;
            var count = 0;

            await Task.Run(() =>
            {
                for (var i = 0; i <= 5000000; i++)
                {
                    UpdateUI(i);
                    count = i;
                }
            });

            label1.Text = @"Counter " + count;
            button1.Enabled = true;
        }

        public void UpdateUI(int value)
        {
            var timeNow = DateTime.Now;

            if ((DateTime.Now - previousTime).Milliseconds <= 50) return;

            synchronizationContext.Post(new SendOrPostCallback(o =>
            {
                label1.Text = @"Counter " + (int)o;
            }), value);

            previousTime = timeNow;
        }
    }
}

A reader on Reddit suggested that you can write this code in a much more succinct way, as shown below. The code makes it much simpler to see what's going on, but the behaviour is slightly different. In this example, the code similarly counts up to 5,000,000 and updates the UI label. The await statement here will correctly restore the synchronisation context to update the UI thread, which means you don't have to deal with it manually (I didn't realise that at the time).
To enable the asynchrony of the task in this example, we need a Task.Delay(1), as we are not using any other asynchronous .NET objects (for file or database access, for example).

private async void ButtonClickHandlerAsync(object sender, EventArgs e)
{
    button1.Enabled = false;

    for (var i = 0; i <= 5000000; i++)
    {
        label1.Text = "Counter " + i;
        await Task.Delay(1);
    }

    button1.Enabled = true;
}

The reason the behaviour is different in this case is that Task.Delay actually delays the method, which slows down the counting to 5,000,000, whereas in the original example the updating of the UI was interval based, which didn't affect the running speed of the original method. As an example, I tweaked both versions to count to 5000 instead and timed them with the .NET StopWatch timer. The timing results were as follows:

- My original code: 0.8 seconds.
- Shorter version: 1 minute 22 seconds.

As you can see, that's quite a difference. So, to summarise, both versions do roughly the same thing, yet one has more code than the other. The original version decouples the timing of the UI updates from the purpose of the actual task, whereas the shorter and clearer version introduces a delay. At the end of the day, you need to decide what matters more to you. As with anything in software development, there are many different ways of doing the same thing.

You don't need an actual delay, you can just use await Task.Yield()

[Thomas Levesque]'s solution seems logically right. And if you do a little math, delay(1) for 5,000,000 times is roughly 1 minute 23 seconds. But, checking the official Microsoft site, it says "do not rely on Task.Delay to keep a UI responsive."
https://stephenhaunts.com/2014/10/16/using-async-and-await-to-update-the-ui-thread-part-2/?replytocom=13337
Abstraction is basically showing the required features to a user and hiding the implementation details. With abstraction, a user can add new functionality to a program without going into the details of it. For example, think of a machine that makes curd. The machine takes milk and makes curd from it. We don't know the details of how the machine works, but we can use the curd for different purposes. This is abstraction. In C#, we can achieve abstraction in two ways: by using abstract classes and by using interfaces. Let's learn about each of these.

C# Abstract Classes

An abstract class is a class which must contain at least one abstract method. An abstract method is a method which is declared using the abstract keyword and has an empty body. Its definition is provided in the subclasses. An abstract class can contain any number of abstract and non-abstract methods, but it must contain at least one abstract method. We can't create instances (objects) of an abstract class, but we can make objects of its subclasses (if they are not abstract). Let's take an example of a firm. The firm hires only two types of employees: either drivers or developers. Now, you have to develop software to store information about them. So, here is an idea: there is no need to make objects of the Employee class. We will make objects of only Driver or Developer. Also, both must have some salary. So, there must be a common method to get the salary. So, we can make Employee an abstract class and Developer and Driver its subclasses.
using System;

abstract class Employee // abstract class
{
    public abstract int GetSalary(); // abstract method
}

class Developer : Employee
{
    private int salary;

    public Developer(int s)
    {
        salary = s;
    }

    public override int GetSalary()
    {
        return salary;
    }
}

class Driver : Employee
{
    private int salary;

    public Driver(int s)
    {
        salary = s;
    }

    public override int GetSalary()
    {
        return salary;
    }
}

class Test
{
    static void Main(string[] args)
    {
        Developer d1 = new Developer(5000);
        Driver d2 = new Driver(3000);
        int a, b;
        a = d1.GetSalary();
        b = d2.GetSalary();
        Console.WriteLine($"Salary of developer : {a}");
        Console.WriteLine($"Salary of driver : {b}");
    }
}

abstract class Employee → Employee is an abstract class, as the 'abstract' keyword is used. Since Employee is an abstract class, there must be an abstract method in it. It is GetSalary().

public abstract int GetSalary(); → Notice that it has no body; its body has to be defined in its subclasses, and the 'abstract' keyword is also used here. In both subclasses, the GetSalary method is defined.

Let's look at one more example.

using System;

abstract class Animal
{
    public abstract void Sound();
}

class Dog : Animal
{
    public override void Sound()
    {
        Console.WriteLine("Dogs bark");
    }
}

class Cat : Animal
{
    public override void Sound()
    {
        Console.WriteLine("Cats meow");
    }
}

class Monkey : Animal
{
    public override void Sound()
    {
        Console.WriteLine("Monkeys whoop");
    }
}

class Test
{
    static void Main(string[] args)
    {
        Animal d = new Dog();
        Animal c = new Cat();
        Animal m = new Monkey();
        d.Sound();
        c.Sound();
        m.Sound();
    }
}

C# Interface

An interface is similar to an abstract class, but all of its methods are abstract by default. This means that it can't have non-abstract methods. We use the interface keyword to define an interface. Since all the methods of an interface are abstract by default, we don't use public abstract before method declarations. Let's look at an example.
using System;

interface Shape
{
    int GetArea();
    int GetPerimeter();
}

class Rectangle : Shape
{
    int length;
    int breadth;

    public Rectangle(int l, int b)
    {
        length = l;
        breadth = b;
    }

    public int GetArea()
    {
        return length * breadth;
    }

    public int GetPerimeter()
    {
        return 2 * (length + breadth);
    }
}

class Square : Shape
{
    int side;

    public Square(int s)
    {
        side = s;
    }

    public int GetArea()
    {
        return side * side;
    }

    public int GetPerimeter()
    {
        return 4 * side;
    }
}

class Test
{
    static void Main(string[] args)
    {
        Rectangle r = new Rectangle(7, 4);
        Square s = new Square(4);
        Console.WriteLine("Rectangle :");
        Console.WriteLine($"Area : {r.GetArea()} Perimeter : {r.GetPerimeter()}");
        Console.WriteLine("Square :");
        Console.WriteLine($"Area : {s.GetArea()} Perimeter : {s.GetPerimeter()}");
    }
}

Here, we have declared Shape as an interface using the interface keyword. Both methods inside it are public and abstract by default.
https://www.codesdope.com/course/c-sharp-abstraction/
Coordinates a graceful shutdown of NSPR.

Syntax

#include <prinit.h>

PRStatus PR_Cleanup(void);

Returns

The function returns one of the following values:

- If NSPR has been shut down successfully, PR_SUCCESS.
- If the calling thread of this function is not the primordial thread, PR_FAILURE.

Description

PR_Cleanup must be called by the primordial thread near the end of the main function. PR_Cleanup attempts to synchronize the natural termination of the process. It does so by blocking the caller, if and only if it is the primordial thread, until all user threads have terminated. When the primordial thread returns from main, the process immediately and silently exits. That is, the process (if necessary) forcibly terminates any existing threads and exits without significant blocking and without error messages or core files.
https://developer.mozilla.org/en-US/docs/PR_Cleanup
ByteString Basics

ByteString provides a more efficient alternative to Haskell's built-in String, which can be used to store 8-bit character strings and also to handle binary data. It provides alternative versions of functions such as readFile and also equivalents of standard list manipulation functions:

{-# START_FILE Main.hs #-}
import qualified Data.ByteString as B

main = do
    contents <- B.readFile "foo.txt"
    print $ B.reverse contents

{-# START_FILE foo.txt #-}
... em esreveR

Characters or bytes?

Depending on the context, we may prefer to view the ByteString as being made up of a list of elements of type Char or of Word8 (Haskell's standard representation of a byte). There's only one ByteString data structure for both, but the library exposes different functions depending on how we want to interpret the contents:

import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as BC

bytestring = BC.pack "I'm a ByteString, not a [Char]"
bytes = B.unpack bytestring
chars = BC.unpack bytestring

main = do
    BC.putStrLn bytestring
    print $ head bytes
    print $ head chars

Here we've used the pack function to convert a String into a ByteString and then used two different unpack functions to get back both a list of Chars (the original String) and a list of Word8s. Data.ByteString provides the Word8 functions while Data.ByteString.Char8 provides the Char equivalents. Of course, we don't need to unpack the ByteString to a list to get the first element. We can just use the head functions provided by the library itself:

import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as BC

bytestring = BC.pack "I'm a ByteString, not a [Char]"

main = do
    BC.putStrLn bytestring
    print $ B.head bytestring
    print $ BC.head bytestring

ByteStrings and Unicode

ByteString character functions only work with ASCII text, hence the Char8 in the package name.
If you try to use Unicode Strings, it will mess up:

import qualified Data.ByteString.Char8 as BC

hello = "你好"
helloBytes = BC.pack hello

main = do
    putStrLn hello
    BC.putStrLn helloBytes
    print $ BC.length helloBytes

If you are working with Unicode, you should use the Text package.

Lazy ByteStrings

ByteString also has a lazy version, which is a better choice if you are processing large amounts of data and don't want to read it all into memory at once. Just import Data.ByteString.Lazy instead of Data.ByteString. Sometimes you will find libraries which use one type when you are using the other. For example, Aeson uses lazy ByteStrings, but you may only be dealing with small JSON snippets and want to write your own code using the strict version. You can convert between them easily enough if you have to:

import qualified Data.ByteString as B
import qualified Data.ByteString.Lazy as BL
import qualified Data.ByteString.Char8 as BC
import qualified Data.ByteString.Lazy.Char8 as BLC

strict = BC.pack "I'm a strict ByteString (or am I)"
lazy = BLC.pack "I'm a lazy ByteString (or am I)"

strictToLazy = BL.fromChunks [strict]
lazyToStrict = B.concat $ BL.toChunks lazy

main = do
    BLC.putStrLn strictToLazy
    BC.putStrLn lazyToStrict

Newer versions of the library have toStrict and fromStrict functions in the Data.ByteString.Lazy module which you can use instead.

The OverloadedStrings Language Extension

When you enter a string literal, Haskell will normally assume it is of type String ([Char]). This useful language extension allows us to have string literals interpreted as ByteStrings, provided we import Data.ByteString.Char8:

{-# LANGUAGE OverloadedStrings #-}
import Data.ByteString.Char8 ()
import qualified Data.ByteString as B

bytes = "I'm a ByteString, not a [Char]" :: B.ByteString
str = "I'm just an ordinary [Char]" :: String

main = do
    print bytes
    print str

As you can see here, we might have to add explicit types in some cases to let Haskell know which kind of string we want.
In ghci, you can get the same behaviour by starting it using:

ghci -XOverloadedStrings

ByteString binary data

Manipulating binary data is easy with ByteString. In fact, these notes are really a collection of bits and pieces I picked up along the way while doing the exercises for Coursera's Cryptography I, where I had to use ByteString for the first time.

Hex and Base64 Encoding

Binary data is often encoded as hex or base64 to provide an ASCII text representation, so we need an easy way of decoding these to a ByteString containing the bare bytes. This is exactly what the base16-bytestring and base64-bytestring packages were written for. Here's an example for base64:

{-# LANGUAGE OverloadedStrings #-}
import Data.ByteString.Char8 ()
import qualified Data.ByteString as B
import Data.ByteString.Base64 (encode, decode)

Right bytes = decode "SSdtIGEgYmFzZTY0IGVuY29kZWQgQnl0ZVN0cmluZw=="

main = print bytes

And one for a hex-encoded string:

{-# LANGUAGE OverloadedStrings #-}
import Data.ByteString.Base16 (encode, decode)

bytes = fst $ decode "49276d2061206865782d656e636f6465642042797465537472696e6720286f722077617329"

main = print bytes

Unfortunately, base16-bytestring isn't available in Stackage yet, so we can't use active code here.

One-Time Pad

If you want to XOR one ByteString against another, to implement one-time pad encryption for example, you can use zipWith:

{-# LANGUAGE OverloadedStrings #-}
import Data.ByteString.Char8 ()
import qualified Data.ByteString as B
import Data.ByteString.Base64 (decode)
import Data.Bits (xor)

Right key = decode "kTSFoLQRrR+hWJlLjAwXqOH5Z3ZLDWray5mBgNK7lLuHdTwab8m/v96y"

encrypt = B.pack . B.zipWith xor key
decrypt = encrypt

main = do
    let encrypted = encrypt "I'm a secret message"
    print encrypted
    print $ decrypt encrypted

That's about it. You can view the full package documentation to see what other functions are available.
https://www.schoolofhaskell.com/school/to-infinity-and-beyond/pick-of-the-week/bytestring-bits-and-pieces
Le Tue, Feb 14, 2012 at 10:35:57AM -0800, Don Armstrong a écrit : >. Hi, I remember seeing a discussion somewhat recently; was it or even more recent ? I must say that I am not particularly convinced, as I have seen R packages moving from Omegahat to CRAN, or from CRAN to Bioconductor, and renaming source packages is quite inconvenient. If there is a need to implement simple namespaces for our source packages, perhaps just "r-" would be enough ? But if we adopt a naming scheme, it would be great to record it somewhere. Unless I am the only one to not use "r-cran-" as a prefix, or unless we reach a conclusion, I would prefer keeping "vegan" as a source package name. Have a nice day, -- Charles Plessy Debian Med packaging team, Tsurumi, Kanagawa, Japan
https://lists.debian.org/debian-devel/2012/02/msg00630.html
CC-MAIN-2018-17
refinedweb
137
69.01
From a strictly Silverlight perspective, building the Web Service and using LINQ are related skills but beyond the scope of these tutorials. That said, we'll examine how they intersect with Silverlight.

To begin, create a project named SQLData, but be sure to choose Web Application Project, as we want both a Silverlight project and also a server project in which we can create a WCF Web Service (to connect to the database). Visual Studio 2008 will create two projects under one solution. The solution and first project are named SQLData. That first project is the Silverlight application, and it has the same files that you've seen in previous tutorials. The second project, SQLData_Web, is created for you as a test environment for the Silverlight project, and it has three potential entry points. SQLDataTestPage.aspx is specifically designed to test the Silverlight controls, and a quick examination shows that it includes an AJAX ScriptManager and an ASP:Silverlight control whose source is the .xap file (pronounced "zap" file) that will be produced by the Silverlight project:

    <form id="form1" runat="server" style="height:100%;">
        <asp:ScriptManager
        <div style="height:100%;">
            <asp:Silverlight
        </div>
    </form>

LINQ is a very powerful addition to both VB 9 and C# 3 and is likely to be a central technique for data retrieval in Silverlight and other .NET technology going forward. A good way to get started with LINQ is with ScottGu's excellent tutorial, and there are numerous books on LINQ as well. In this tutorial we'll be writing a simple LINQ query that I'll parse for you.

To begin, right-click on the server project, choose Add, and then choose the LinqToSql Classes template.
Notice that the explanation below the window says "LINQ to SQL classes mapped to relational objects."

When the Object Relational Designer window opens, open the Server Explorer and navigate to the AdventureWorks database (installed with SQL Server or available from Microsoft.com). Expand it to reveal the tables and drag the Customer table onto the DataClasses1.dbml Designer workspace.

You created the LINQ class (though not the query) first so that the web service (and IntelliSense) will know about the Customer class and its members. With that in place, we can ask Visual Studio 2008 to help create the Web Service. Right-click on the test project, choose Add New, and from the templates choose WCF Service.

The result is the creation of three new files that hold the service contract for your WCF web service. Note that if you are more familiar with .NET 2 proxy-based web services, you are in for a bit of an adjustment: WCF exposes more of the underlying XML and provides more of a contract-based programming model. You can find out more about WCF here.

Open the first file, IService1.cs, which contains the contract that was created by Visual Studio 2008:

    public interface IService1
    {
        [OperationContract]
        void DoWork();
    }

We can replace this "dummy" contract with whatever contract we want our web service to provide. For the purposes of this tutorial, the contract should state that our web service will return a list of Customer objects given a string that represents the beginning of a customer's last name. Thus, we'll modify the method from returning void to returning List<Customer>. As soon as you start to change the return type, IntelliSense is able to pop up and help you, specifically because we created this type in the LINQ class we defined earlier.

The convention for a method that returns a set of foo given a bar is to name it GetFoosByBar. Thus, we'll name this GetCustomersByLastName.
    public interface IService1
    {
        [OperationContract]
        List<Customer> GetCustomersByLastName(string lastName);
    }

Having changed the contract in the interface, you must be sure to change the implementation in the .cs file. But why work so hard? When you get to the .cs file, just click on the interface and a smart tag will appear. Open the tag and it will offer to create the implementation skeleton for you! Throw away the DoWork method and fill in GetCustomersByLastName with the LINQ query:

    public class Service1 : IService1
    {
    }

(The region comments were put in when I asked the smart tag to create the implementation skeleton.)

Ah, finally, some LINQ. The trick to learning LINQ is to find Anders and have him explain it to you. No one does it better. Failing that, read Scott's tutorial, the chapters on LINQ in my C# 3.0 book, or one of the many great books on LINQ.

Let's take this one LINQ statement apart. First, we use the new var inference variable which, surprisingly, is type safe (it infers the type, but it is not without type!). We assign to it the result of the LINQ query, which will be an object of type IEnumerable. The query syntax is much like SQL except that the Select statement comes at the end. So, in English: "Give me a connection to the database I told you about earlier and name that connection db. Go into that database, find the table named Customers, and look for each record where the LastName field begins with the letters held in the string lastName. Give me all the matching records. Assign all those records to the object matchingCustomers, which is smart enough to (a) know that it has to be of type IEnumerable and (b) know that when I call ToList() on it, it should return a List<Customer>."

Silverlight, however, supports only basic binding (SOAP 1.1, etc.), so you will need to change the binding accordingly:

    <endpoint address="" binding="basicHttpBinding" contract="SQLData_Web.IService1">

That's it. Your service is ready to go.
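The LINQ query itself is elided above, but the English walkthrough pins down its shape. Here is a sketch of what the method body might look like; the DataContext class name DataClasses1DataContext is an assumption based on the DataClasses1.dbml file created earlier, so check the name the designer actually generated:

```csharp
public class Service1 : IService1
{
    public List<Customer> GetCustomersByLastName(string lastName)
    {
        // Connect to the database described by the LINQ to SQL designer file
        // (class name is assumed from the .dbml file name).
        DataClasses1DataContext db = new DataClasses1DataContext();

        // Find every Customer whose LastName starts with the supplied letters.
        var matchingCustomers =
            from customer in db.Customers
            where customer.LastName.StartsWith(lastName)
            select customer;

        // Materialize the IEnumerable into the List<Customer> the contract promises.
        return matchingCustomers.ToList();
    }
}
```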
Clicking OK adds the service to your project. You will access the Web Service (and its method) through this reference.

    <Grid x:
        <Grid.RowDefinitions>
            <RowDefinition Height="10" />  <!--0 Margin-->
            <RowDefinition Height="50" />  <!--1 Prompts-->
            <RowDefinition Height="*" />   <!--2 DataGrid-->
            <RowDefinition Height="10" />  <!--3 Margin-->
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="10" />  <!--0 Margin-->
            <ColumnDefinition Width="*" />   <!--1 Controls-->
            <ColumnDefinition Width="10" />  <!--2 Margin-->
        </Grid.ColumnDefinitions>
    </Grid>

Notice that I've set ShowGridLines to true while I'm working, to ensure that I'm getting the results I hope for, and that the third row and second column use star sizing, indicating that they should take up all the remaining space. The Grid has small margins on all sides and two rows: a small top row and a very large bottom row.

    <Border BorderBrush="Black" BorderThickness="2" Grid.
    <StackPanel Grid.
        <TextBlock Text="Last name to search for: " VerticalAlignment="Bottom" FontSize="18" Margin="15,0,0,0" />
        <TextBox x:
        <Button x:
    </StackPanel>

Finally, drag a DataGrid from the Toolbox onto the XAML:

    <my:DataGrid x:

You'll notice that it is given the prefix my, and that a new namespace is declared to support it:

    xmlns:my="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data"

    public Page()
    {
        InitializeComponent();
        Loaded += new RoutedEventHandler(Page_Loaded);
    }

    void Page_Loaded(object sender, RoutedEventArgs e)
    {
        Search.Click += new RoutedEventHandler(Search_Click);
    }

    void Search_Click(object sender, RoutedEventArgs e)
    {
    }

The first task is to get a reference to the Web Service's Service1Client member.
You can examine this in the object browser to see that it is this object that has the asynchronous methods we'll need (image slightly abridged to save space). We assign the Service1Client to the local object webService:

    void Search_Click(object sender, RoutedEventArgs e)
    {
        ServiceReference1.Service1Client webService =
            new SQLData.ServiceReference1.Service1Client();

We then use webService to set up an event handler for the method that will be called when the GetCustomersByLastNameCompleted event is raised:

        webService.GetCustomersByLastNameCompleted +=
            new EventHandler<SQLData.ServiceReference1.GetCustomersByLastNameCompletedEventArgs>(
                webService_GetCustomersByLastNameCompleted);

Finally, we make the asynchronous call:

        webService.GetCustomersByLastNameAsync(LastName.Text);
    }

When the service completes, the GetCustomersByLastNameCompleted event is raised and our method is invoked. The carefully constructed list of Customers is stashed in e.Result, which we assign to the DataGrid's ItemsSource property, and all the bindings now have a source to bind to:

    void webService_GetCustomersByLastNameCompleted(object sender,
        SQLData.ServiceReference1.GetCustomersByLastNameCompletedEventArgs e)
    {
        theDataGrid.ItemsSource = e.Result;
    }

Hey presto! Once you know how, the effort to make this kind of application, even using Styles and more to make it look just the way you want, is measured in hours at most.
http://silverlight.net/learn/tutorials/sqldatagrid.aspx
crawl-001
refinedweb
1,421
55.44
#include <NearTrack.h>

Inheritance diagram for Track::NearTrack:

Definition at line 37 of file NearTrack.h.

The types of objects Track::NearTrack can attach to.
Definition at line 85 of file NearTrack.h.

Construct with an initial position.
Definition at line 20 of file NearTrack.cpp.

Definition at line 26 of file NearTrack.cpp.

Find a u texture coordinate at a point on a face. The u texture coordinate is used for lap distance.
Definition at line 354 of file NearTrack.cpp.

Return the face the position is on.
Definition at line 148 of file NearTrack.cpp.

Get the identifier for the object the point is attached to. Useful when combined with get_source_type.
Definition at line 158 of file NearTrack.cpp.

Return the type of object the point is attached to.
Definition at line 153 of file NearTrack.cpp.

Return the position in world space.
Definition at line 143 of file NearTrack.cpp.

Set the attach point to the closest point on the AI navigation mesh. This is much slower than move_towards(btVector3); use that if you can help it. Time complexity is O(n), where n is the number of triangles in the AI navigation mesh.
Definition at line 115 of file NearTrack.cpp.

Move the attach point closer to a position. Requires an up-to-date AI mesh in the track. The path taken will be continuous. It won't move to the nearest point if the nearest point is not on a reasonable path from the current location.
Definition at line 38 of file NearTrack.cpp.

Definition at line 55 of file NearTrack.cpp.

Calculate statistics about potential movement in a given direction. Projects the direction of the given vector onto the current face, then finds the furthest edge in that direction. If the edge is shared, projects the face vector onto the next face recursively. When the edge is not shared, the total navigable distance in that direction is returned.
Definition at line 163 of file NearTrack.cpp.

Set m_object_coord, m_object_name, m_object_type, and m_face_s/t from m_face. Finds relevant information from the graph face.
Definition at line 362 of file NearTrack.cpp.

The lowest distance reached in move_towards().
Definition at line 168 of file NearTrack.h.

The face with the lowest distance reached in move_towards().
Definition at line 170 of file NearTrack.h.

The world coordinates of the solution with the lowest distance reached by move_towards().
Definition at line 172 of file NearTrack.h.

The vertices examined in move_towards().
Definition at line 166 of file NearTrack.h.

AI mesh face attached to. Will be invalid when the track changes.
Definition at line 150 of file NearTrack.h.

Coordinates in the direction of m_face's fv1 -> fv2 edge. Will be invalid when the track changes.
Definition at line 155 of file NearTrack.h.

Coordinates in the direction of m_face's fv1 -> fv3 edge. Will be invalid when the track changes.
Definition at line 160 of file NearTrack.h.

The mesh we can navigate over.
Definition at line 163 of file NearTrack.h.

Relative coordinates for the edge or vertex attached to. For an edge, this is distorted to lie along the edge. The y coordinate for an edge is between 0 (start) and 1 (finish) of the edge. The x and z coordinates are used like the edge's meshes. For a vertex, it is in the vertex's local space.
Definition at line 133 of file NearTrack.h.

Definition at line 140 of file NearTrack.h.

The type of object the Track::NearTrack is attached to. May be Unknown if the relevant object is deleted from the track.
Definition at line 138 of file NearTrack.h.

World coordinates. May become outdated when the track changes.
Definition at line 145 of file NearTrack.h.

Generated at Mon Sep 6 00:41:18 2010 by Doxygen version 1.4.7 for Racer version svn335.
http://racer.sourceforge.net/classTrack_1_1NearTrack.html
CC-MAIN-2017-22
refinedweb
643
71.21
D-Bus is a Free/Open Source software inter-process communication (IPC) system. Worker processes ("slaves") that do not directly control or interact with the user can be driven by the application the user is interacting with asynchronously via IPC. This not only allows the same slave to be used by multiple applications, but it also keeps the slaves themselves simple, as they can do things such as perform blocking operations without blocking the user interface.

There is a system-wide bus as well as per-user session buses; the session buses are the sort desktop applications will tend to use most often. Additionally, an application may create its own private bus between itself and another application in a peer-to-peer fashion. There is no practical limit to the number of buses. An application may even send messages to objects within itself, much as it would send a message to another application. These are "short-circuited" and kept local to the application, so it is not necessary for code in an application to worry about whether or not it might actually be calling a remote or local application. This is often useful in highly componentized apps and prevents possible deadlock situations.

Since multiple applications can be on the same bus, and one application may provide multiple objects to which messages can be sent, it is necessary to have a means to effectively and unambiguously address any given object on any given bus, much like how a street address does for any given residence or office. There are three pieces of information which, when taken together, create a unique address for any given object on a bus: interface, service, and object name.

An interface is a set of callable methods and signals that are advertised on the bus. An interface provides a "contract" between the applications passing messages that defines the name, parameters (if any), and return values (if any) of the interface. These methods may not map directly in a one-to-one fashion to methods or API in the application that is advertising the interface, though they often do. This allows multiple applications to provide similar or the same interfaces regardless of internal implementation, while allowing applications to use these interfaces without concerning themselves with the internal design of the applications.
Interfaces can be described for documentation and code-reuse purposes using XML. Not only can users and programmers reference the XML description of the interface, but developers can use classes that are auto-generated from the XML, making it much easier and less error-prone (e.g. the compiler can check the syntax of messages at compile time).

A service represents an application connection to a bus. These are kept unique by using a "reverse domain name" approach, as can be seen in many other systems that need to namespace multiple components. Most services provided by applications from the KDE project itself use the org.kde prefix for their service names, so one may find "org.kde.screensaver" advertised on the session bus. You should use the domain name for your organization or application for your service names. For example, if your domain is awesomeapps.org and the name of your application is wickedwidget, you would probably use org.awesomeapps.wickedwidget as the service name on the bus. If an application has more than one connection to a bus, or if multiple instances of the same application may be active at once, it will need to use a unique service name for each connection. Often this is done by appending the process ID to the service name.

Of course, an application is likely to advertise access to more than one object on the bus. This many-to-one relationship between objects and services is accommodated by providing a path component to the address. Each path associated with a service represents a different, unique object. An example might be /MainInterface or /Documents/Doc1. The actual path structure is completely arbitrary, and it is up to the application providing the service as to what the paths should be. These paths simply provide a way to identify and logically group objects for applications that send messages to the application.

Some libraries export object paths with their "reverse domain" prepended, so as to properly namespace their objects.
This is quite common for libraries and plugins that join an arbitrary service and must therefore avoid clashing with objects exported by the application and other components. However, this practice is not in use in KDE applications and libraries.

Objects provide access to interfaces. In fact, a given object can provide access to multiple interfaces at the same time.

A D-Bus message contains an address made up of all the above components so that it can be routed to the correct application, object, and method call. Such an address might look like this:

    org.kde.krunner /ScreenSaver org.kde.screensaver.setBlankOnly

In this case org.kde.krunner is the service, /ScreenSaver is the path to the object, org.kde.screensaver is the interface the object exports, and setBlankOnly is a method in the interface. If the /ScreenSaver object only provides the org.kde.screensaver interface (or the setBlankOnly method is unique amongst the interfaces it implements), then this would work equally well as an address:

    org.kde.krunner /ScreenSaver setBlankOnly

In this way each possible destination is uniquely and reliably addressable.

Now that we have a way to address any given endpoint on the bus, we can examine the possibilities when it comes to actually sending or receiving messages.

Methods are messages that are sent to cause code to be executed in the receiving application. If the method is not available because, for instance, the address was wrong or the requested application is not running, an error will be returned to the calling application. If the method is successfully called, then an optional return value will be returned to the calling application. Even if there is no return value provided, a success message will be returned. This round trip does have overhead, so it is important to keep in mind for performance-critical code. Such method calls are always initiated by the calling application, and the resulting messages have exactly one source and one destination address.
Signals are like method calls except that they happen in the "reverse" direction and are not tied to a single destination. A signal is emitted by the application which is exporting the interface and is available to any application on the same bus. This allows an application to spontaneously advertise changes in state or other events to any applications which may be interested in tracking those changes. If this sounds a lot like the signals and slots mechanism in Qt, that's because it is. For all intents and purposes it is a non-local version of the same functionality.

There are several useful tools for exploring the D-Bus buses as well as developing applications that use D-Bus. We will now look briefly at the end-user tools, as the articles that follow cover the development tools in greater detail and context.

qdbus is a command-line tool which can be used both to list the services, objects, and interfaces on a given bus, and to send messages to a given address on the bus. It can be used to explore the system bus as well as the default session bus. If the --system switch is passed, qdbus will connect to the system bus; otherwise it uses the session bus. qdbus uses the rest of the supplied arguments on the command line as an address and, if any, parameters to pass to a given object. If a full address is not supplied, then it lists all the objects available from that point on the bus. For instance, if no addresses are provided, a list of available services is shown. If a service name is provided, object paths will be shown. If a path is also provided, all methods in all interfaces will be listed. In this way one can quite easily explore and interact with objects on the bus, making qdbus very useful for testing, scripting, and even idle exploration.

dbus-viewer is a Qt application that provides a graphical interface to essentially the same set of features that qdbus provides on the command line, thus providing a more user-friendly mechanism to interact with the bus.
dbus-viewer ships with Qt4 itself (in the /demos directory), and its usage is immediately obvious to anyone who is familiar with basic D-Bus concepts such as object addresses.
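The qdbus exploration pattern described above can be sketched as a short shell session. The service, path, and method names follow the article's own example, so this assumes KRunner's screensaver interface is actually present on your session bus:

```shell
# No arguments: list the services on the session bus
qdbus

# One argument: list the object paths exported by a service
qdbus org.kde.krunner

# Service and path: list the methods of all interfaces on that object
qdbus org.kde.krunner /ScreenSaver

# Full address plus a parameter: invoke a method
qdbus org.kde.krunner /ScreenSaver org.kde.screensaver.setBlankOnly true

# The --system switch targets the system bus instead
qdbus --system
```

Each step narrows the address one component at a time, which mirrors how D-Bus addresses are built from service, object path, interface, and method.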
https://techbase.kde.org/index.php?title=Development/Tutorials/D-Bus/Introduction&diff=8085&oldid=8084
CC-MAIN-2017-47
refinedweb
1,382
52.9
Hi!

There is a new version of the 'Spider Arachnid' plug-in.

Download:

Changes since version 0.1.4:
- Renamed *.c files to C++ suffix *.cpp.
- Added namespaces.
- Added gettext support (VDR >= 1.5.7).
- Resume the last game.
- Customizable width and height.
- OSD error compensation (shrink width/height or reduce colors).
- Added setup option to hide the top row.
- Set normal variation (two decks and 10 piles) as standard, together with "shrink height" and "hide top row".
- Added user-defined variations.
- Improved key handling.
- Updated German language texts.

'Spider Arachnid' is an implementation of the best patience game, played on the On Screen Display of the VDR. See the project's homepage for details:

Tom
http://www.linuxtv.org/pipermail/vdr/2007-September/014155.html
CC-MAIN-2015-18
refinedweb
116
72.12
TIFF (Tagged Image File Format) is a flexible and adaptable file format for handling images and data within a single file. It is popular among graphic artists and the publishing industry. As a popular image format, TIFF can store image data in a lossless format, and it can also act as a container holding compressed (lossy) JPEG and (lossless) PackBits-compressed images. TIFF files can also be created through Spire.Doc: with it, developers can easily convert documents from Word to TIFF.

Download Spire.Doc (or Spire.Office) together with .NET Framework 2.0 (or above). Once Spire.Doc (or Spire.Office) is correctly installed on your system, follow the steps below to convert Word to TIFF.

Step 1: Open Visual Studio and create a new project; choose "Windows Forms Application". In "Solution Explorer", right-click "References" and then choose "Add Reference". Browse to the directory where you installed the component. Now you can add code to convert Word to a TIFF image.

Step 2: Add a "Button" to Form1. Double-click the button and add the following lines at the top of the file:

    using Spire;
    using Spire.Doc;
    using Spire.Doc.Documents;
    using System.Drawing.Imaging;

    Imports Spire
    Imports Spire.Doc
    Imports Spire.Doc.Documents
    Imports System.Drawing.Imaging

Step 3: Spire.Doc allows users to create Word documents with colorful content. Here we load a test Word document:

    Document document = new Document();
    document.LoadFromFile("Word.docx");

    Dim document As New Document()
    document.LoadFromFile("Word.docx")

Step 4: Spire.Doc presents almost the easiest solution for Word to TIFF conversion. With the following two lines of code, we can save the Word document in TIFF format.
    Image image = document.SaveToImages(0, ImageType.Bitmap);
    image.Save("Sample.tif", ImageFormat.Tiff);

    Dim image As Image = document.SaveToImages(0, ImageType.Bitmap)
    image.Save("Sample.tif", ImageFormat.Tiff)

After you finish writing the code, start the project by pressing F5 and click the button. A TIFF file will be generated automatically. Check out the effect image below:

Effective Screenshot:

Spire.Doc can convert Word to most popular file formats: PDF, HTML, XML, RTF, Text, ePub, etc. Click to learn more.
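Step 4 converts only the first page (page index 0). The same call can be looped to produce one TIFF per page. This is only a sketch: it assumes Spire.Doc's Document exposes a PageCount property, so verify that against the API reference of your Spire.Doc version:

```csharp
// Hypothetical multi-page variant of Step 4 (PageCount is an assumption).
Document document = new Document();
document.LoadFromFile("Word.docx");

for (int i = 0; i < document.PageCount; i++)
{
    Image page = document.SaveToImages(i, ImageType.Bitmap);
    page.Save("Sample_" + i + ".tif", ImageFormat.Tiff);
}
```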
http://www.e-iceblue.com/Knowledgebase/Spire.Doc/Program-Guide/How-to-Convert-Word-to-Tiff.html
CC-MAIN-2015-14
refinedweb
371
54.9
You can load the PowerShell provider by typing:

    add-pssnapin Microsoft.SharePoint.PowerShell

Much of what Christian (iLoveSharePoint) and I have worked on in SPoshMod for SharePoint 2007 supports the same pattern as the 2010 provider: Verb-Noun, with an SP prefix, e.g. Get-SPSite, Get-SPWeb, etc.

What you may not know is that developers can add custom providers and deploy them to SharePoint. You can do this by using the Microsoft.SharePoint.PowerShell namespace. To distinguish a normal PowerShell cmdlet from a SharePoint cmdlet, a new abstract class has been added to the namespace called SPCmdlet (other cmdlets inherit directly from PSCmdlet).

More to come on this topic…
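A minimal custom cmdlet along the lines described might look like the following C# sketch. The cmdlet name, class name, and output string are all invented for illustration, and the InternalProcessRecord override point is from memory of the SharePoint 2010 object model, so verify it against the SDK before building on it:

```csharp
using System.Management.Automation;
using Microsoft.SharePoint.PowerShell;

// Hypothetical custom cmdlet: Get-SPHello
[Cmdlet(VerbsCommon.Get, "SPHello")]
public class GetSPHelloCmdlet : SPCmdlet
{
    // SPCmdlet derivatives override InternalProcessRecord rather than
    // PSCmdlet.ProcessRecord (assumption; check the SharePoint 2010 SDK).
    protected override void InternalProcessRecord()
    {
        WriteObject("Hello from a custom SharePoint cmdlet");
    }
}
```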
https://blogs.msdn.microsoft.com/ekraus/2009/10/19/powershell-cmdlets-in-sharepoint-2010/
CC-MAIN-2016-36
refinedweb
110
66.94
so far for my program i have:

    import java.util.*;

    public class CalculateChange {
        public static void main(String[] args) {
            String s1, s2;
            Scanner keyboard = new Scanner(System.in);

            System.out.print("Enter your name: ");
            s1 = keyboard.nextLine();
            System.out.print("Enter current date: "); // in format of mm/dd/yyyy
            s2 = keyboard.nextLine();

            String mutation1, mutation2, mutation3;
            mutation1 = s1;
            mutation2 = mutation1.toUpperCase();
            mutation3 = mutation2.substring(0, 1);
            System.out.println("===== Ref. # " + mutation3);
        }
    }

I am stuck from here... the last line is supposed to print in the format of:

    ===== Ref. #JSMITH091003 =====

where J is the first letter of the name entered for s1, SMITH is the last name entered for s1, and 091003 is the date in the format yy mm dd. Note that JSMITH is not what the user entered, just an example of the output, and I can't use the next() method.
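One way to finish the transformation described above is sketched below. The class and helper names are mine; it assumes the name is typed as "First Last" and the date as mm/dd/yyyy, and hard-coded sample values stand in for the Scanner input so the sketch runs as-is:

```java
public class RefBuilder {
    // Build "JSMITH091003" from a name like "John Smith" and a date like "10/03/2009".
    static String buildRef(String name, String date) {
        String upper = name.toUpperCase();
        int space = upper.indexOf(' ');
        String initial = upper.substring(0, 1);     // first letter of the first name
        String last = upper.substring(space + 1);   // everything after the space
        String mm = date.substring(0, 2);           // month from mm/dd/yyyy
        String dd = date.substring(3, 5);           // day
        String yy = date.substring(8);              // last two digits of the year
        return initial + last + yy + mm + dd;       // reorder date to yy mm dd
    }

    public static void main(String[] args) {
        String name = "John Smith";   // sample values in place of keyboard.nextLine()
        String date = "10/03/2009";
        System.out.println("===== Ref. #" + buildRef(name, date) + " =====");
        // prints: ===== Ref. #JSMITH091003 =====
    }
}
```

Using substring with fixed indices works because the mm/dd/yyyy format has a fixed layout, so next() is never needed.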
http://www.javaprogrammingforums.com/whats-wrong-my-code/1047-any-help-much-appreciated.html
CC-MAIN-2016-26
refinedweb
146
58.79
Delphi 2005 Reviewer's Guide
By: Cary Jensen
Abstract: The complete Microsoft Windows development solution

The Complete Windows Development Solution
A Borland White Paper
Produced for Borland by Cary Jensen, Jensen Data Systems, Inc.
October 2004

Contents

Overview
Delphi: Advancing the Art of Software Development
The Integrated Development Environment
One IDE, Multiple Personalities
One IDE, Multiple Languages
The Structure Pane
The VCL and VCL for .NET
Floating Designer
The Tool Palette
Enhanced Tool Palette Behavior
New VCL for .NET Components
The Object Inspector
The Upgrade Project Wizard
Delphi 2005 Wizards
Find in Files Enhancements
Updated Support for International Characters
Message List Enhancements
IDE Error Reporting
Import/Export Project from/to Visual Studio .NET
The Next Generation Code Editor
Refactoring
Symbol Renaming
Variable and Field Declarations
Resource Refactoring
Extract Method Refactoring
Import Namespace (C#) and Find Unit (Delphi)
SyncEdit
Error Insight
Help Insight
The History Manager
The Content Pane
The Info Pane
The Diff Pane
Code Navigation Enhancement
Toggling Code to/from Comments
Persistent Bookmarks
J2EE and CORBA to .NET Integration with Janeva
User Selectable File Encoding
The VCL for .NET
Virtual Library Interfaces
Support for Partially Trusted Callers
The Delphi Compilers
Updates for Both Win32 and .NET Delphi Compilers
The For…In Loop
Support for Unicode and UTF8 Formats
The Delphi for .NET Compiler
Delphi Code and Namespaces
Support for Weak Packaging in VCL for .NET Applications
Forward Declared Record Types
The Delphi Win32 Compiler
Function Inlining
Support for Nested Types
Nested Type Constants in Class Declarations
Support for Pentium 4 SSE3 and SSE2 Instruction Op Codes and Data Types
XML Document Generation
The Delphi Debuggers
Multiple Debugger Support
Exception Dialog Enhancements
The Disassembled View
Breakpoints
The Log Call Stack Breakpoint Option
Breakpoint Dialog Box Updates
Updated Attach to/Detach from Process
Evaluator Frame Support for Win32 Local Variables
Database Development
RAD for ADO.NET
Providing and Resolving with DataSync and DataHub
Data Remoting with RemoteServer and RemoteConnection
Borland Data Provider for ADO.NET
The BDP Data Explorer
Managing Tables
Data Migration
Testing Stored Procedures
Creating Reports in Delphi 2005
Added VCL for .NET Data Access Components
ADO.NET Connection String Editor
Web and Internet Development
Deployment Manager
HTML Editing in the Web Forms Designer
Template Editing
Updated Code Completion and Syntax Highlighting
Updated Tag Editing
Additional ASP.NET Project Manager Support
New and Enhanced DB Web Controls
New DB Web Controls
Updated DB Web Controls
IntraWeb Support
Integrated Application Lifecycle Management
Delphi 2005 and StarTeam
Unit Testing
Enterprise Core Objects
Rapid MDA
ECO Space and Persistence Mapping
ECO and OCL
What's New in ECO II
A Highly Scalable Enterprise Object Cache
Extended Object Capabilities
ECO II Support for Web Forms and Web Services
ECO II Support for Existing Databases
Integrated and Included Partner Tools
Borland Caliber RM
Borland InterBase 7.5 Developer Edition
Borland Janeva
Borland Optimizeit™ Profiler for the Microsoft .NET Framework
Borland StarTeam 6.0 Standard Edition
Component One Studio Enterprise for Borland Delphi 2005
Crystal Reports Borland Edition
glyFX Borland Special Edition
IBM DB2 Universal Developers Edition
InstallShield Express for Borland Delphi
Internet Direct (Indy)
IntraWeb
Microsoft SQL Server 2000 Desktop Engine (MSDE 2000)
Microsoft SQL Server 2000 Developer Edition
Rave Reports Borland Edition
Wise Owl Demeanor for .NET Borland Edition
Other Resources
Summary
About Borland Software Corporation
About the Author

Welcome to the Delphi 2005 Reviewer's Guide. This document will familiarize you with Delphi 2005, the newest version of Borland's flagship development environment, culminating more than twenty years of technological innovation.
The Delphi 2005 Reviewer's Guide is organized into two parts. In this first part, the Overview, you will find a general introduction to Borland's Delphi 2005. The second part of this guide takes you on a tour of Delphi 2005. This part, which is organized by the major areas of software development and support in Delphi 2005, provides you with an overview of each area, followed by a description of the many updates, enhancements, and additions introduced in this product. If you are already a Borland enthusiast, you may want to quickly scan the overview section, concentrating instead on the updates that make this the most important upgrade to Delphi since it debuted in 1995.

Delphi's legacy began in 1983, when Turbo Pascal set a new standard for software engineering. The evolution of Turbo Pascal reads like a history lesson in the advancement of software development, including the introduction of such groundbreaking innovations as an integrated development environment (IDE), integrated debugging, syntax highlighting, a powerful object-oriented programming (OOP) model, and OWL, the Object Windows Library. With the release of Delphi 1.0 in February of 1995, Borland proved that component-based development could be applied in an object-oriented environment, permitting developers to rapidly build applications while maximizing code reuse. In more ways than one, Delphi blazed a trail that would eventually be followed by the framework class library (FCL) of the Microsoft .NET Framework.

Borland Delphi 2005 represents another impressive advance in software development by Borland, making Delphi 2005 the ultimate and complete development solution for Windows. Delphi 2005 converges Delphi, C#, Microsoft .NET Framework, and Win32 support for graphical user interface (GUI), Web, database, and model-driven application development, wrapped with the essential application lifecycle management (ALM) tools, into a unified, highly productive rapid application development (RAD) environment.
With Delphi 2005, you have everything you need to increase Windows developer productivity, personal developer productivity, and team productivity.

Windows developer productivity. Delphi 2005's IDE makes Windows development tasks faster, easier, and better by supporting the Win32 standard of yesterday and today, along with the Windows-based Microsoft .NET Framework development standard of today and tomorrow. With world-class compilers and debuggers, a rich legacy of standards-based tools, and a seamless migration path between current and emerging platforms, there is no better Windows development tool on the market today.

Personal developer productivity. Delphi 2005 takes the power of Delphi to a new level, with speed and productivity enhancements throughout. With a code editor that simplifies every aspect of your programming experience, the largest collection of reusable components, powerful code-generating wizards, and much more, Delphi 2005 is the most productive development environment available.

Team productivity. Delphi 2005 allows teams to take full control of the application lifecycle. In addition to Delphi 2005's state-of-the-art tools for software development, certain editions of Delphi 2005 also include StarTeam for team source code control, Borland Enterprise Core Objects II (ECO™ II) for model-powered development in the .NET Framework, an integrated unit testing framework, and Borland Optimizeit Profiler for the Microsoft .NET Framework for performance testing.

In short, Delphi 2005 provides you with a complete, integrated solution for all your development and project management needs.
Borland Products = Technical Excellence

Throughout the years, Borland products have been recognized for excellence and innovation. Here are a few of the honors received recently by the products that represent the heritage of Delphi 2005:

● Borland Delphi 8 for the Microsoft .NET Framework won Best of Show in the developer tool category at TechEd Europe, 02-July-04
● Borland C# Builder won the Visual Studio Magazine Readers' Choice Award for best developer tool, 24-May-04
● Borland Delphi 7 Studio won the Web Services Journal Readers' Choice Award in the Best GUI for Web Services Product category, 25-February-04

The remaining sections of this guide are organized into related topics associated with software development. Each section begins with a general overview, and then continues with a description of the new and enhanced features introduced in Delphi 2005.

Disclaimer

This reviewer's guide is based on a pre-release version of Delphi 2005. Features in the shipping product may vary slightly from the descriptions found here.

Delphi 2005's IDE (integrated development environment) represents the state of the art in software development tools. Growing out of Borland's Galileo IDE technology, first released with Borland C# Builder and Delphi 8 for .NET, Delphi 2005's IDE continues Borland's rich heritage of enabling you to develop applications faster and better. This section focuses on the features found in the various panels, designers, dialog boxes, and views of the IDE. Features that are specific to the code editor are detailed separately in a later section of this guide.

Whether you are coding in Delphi or C#, writing Win32 applications or .NET managed code, building ASP.NET Web pages or traditional client applications, Delphi 2005's IDE provides you with a consistent and powerful set of development tools designed to increase your productivity.
With Delphi 2005, the IDE keeps track of what kind of application you are working with, providing you with the designers, views, and features consistent with the task at hand. For example, if you are building an ASP.NET Web application, the HTML designer allows you to design your Web pages visually, permitting you to drag and drop the components that you want to see on your Web page and configure them with little or no code. The following figure shows Delphi 2005 with an open ASP.NET Web application and its visual HTML designer.

If you create a new Win32 client application, or open an existing one, the VCL (visual component library) designer kicks in, again providing you with unmatched support for designing your user interfaces. You can even create project groups that include two or more different kinds of projects. When you do this, the type of application that is currently active in the project group determines which designers are available, and which options you see in the supporting views. For instance, if your project group includes both an ASP.NET Web Service application and a Win32 VCL Form application, Delphi 2005 notes which of these projects is currently active, providing you with the designer and editor appropriate for each as you switch between your projects.

Delphi 2005 is more than just context-sensitive designers; it is a full multiple-language development environment. The native languages and debuggers that are included in Delphi 2005 are Delphi for Win32 development, Delphi for the Microsoft .NET Framework, and C# for the Microsoft .NET Framework. While other IDEs support multiple languages, Delphi 2005 is unique in that it supports both multiple platforms and multiple languages transparently. For example, you can create a project group that includes a C# ASP.NET Web application, a Delphi for .NET Web Control class library, and a traditional Windows DLL (dynamic link library) written in Delphi Win32.
Not only will the appropriate compiler and debugger be used for each project, based on its underlying language, but the code editor and Tool Palette snippets will also expose the appropriate features as you navigate between the various projects. Delphi 2005 can also support additional compilers, if you wish. For example, so long as you have the VB for .NET compiler installed on your workstation, you can create, open, edit, compile, and debug VB for .NET applications without ever leaving the Delphi 2005 IDE.

The Structure pane is a context-sensitive view that provides you with detailed information about whatever is displayed in your main view. When you are using the code editor, the Structure pane displays the classes, types, interfaces, and other symbols in the current file, as shown in the following figure. (In Delphi 7, this view was called the Code Explorer.) By comparison, when you are designing a VCL Form, the Structure pane displays the components that appear on your form, with the various nodes representing the containership of your controls. (In Delphi 7, this view was referred to as the Object Tree View.)

Not only does the Structure pane provide you with valuable insight into your projects, it also serves as a convenient tool for navigating the symbols and objects that you are using. When you are editing your code, double-clicking a symbol in the Structure pane takes you to the associated line of code in the editor. When you are designing a VCL Form, clicking an object selects it in the designer, permitting you to quickly change its properties or assign event handlers.

The Structure pane is also invaluable when there are errors in your code. When Delphi 2005's new Error Insight feature identifies problems in your source files, these appear automatically, as you type, in the Structure pane, permitting you to quickly navigate to the position in the code editor where problems exist.
Error Insight is described in more detail in "The Next Generation Code Editor" section of this guide.

Some developers who used Delphi 8 for the Microsoft .NET Framework wished for a "floating" VCL designer, like the one available in Delphi 7. Borland listened. For Delphi development of VCL and VCL for .NET applications, Delphi 2005 provides you with a choice between using the .NET-style embedded designer or the classic floating designer. To enable the floating designer in Delphi 2005, select Tools | Options. Navigate to the VCL Designer node under Delphi Options, and uncheck the Embedded designer check box.

When you work in a component-based environment like Delphi 2005, you typically make extensive use of design-time components, which are placed into the designer and configured using the Object Inspector. These components are available from the Delphi 2005 Tool Palette. The Tool Palette is organized by component category. Which categories are displayed, and which components appear within them, is context sensitive, based on the type of project on which you are working. Furthermore, the Tool Palette permits you to control its organization. You can change the position of a component within a Tool Palette category, as well as move a component to a different category, simply by dragging the component within the Tool Palette. You can even define your own custom categories into which you can drag your components.

Delphi 2005 includes a number of enhancements to the Tool Palette. These are discussed in the following sections.

Delphi 2005's Tool Palette is better than ever. In addition to providing access to design-time components and code snippets, depending on whether you are working with a designer or code editor, the updated Tool Palette can also be used to create new projects, files, and objects. When you do not currently have a project open, the Tool Palette provides access to all of the wizards and templates of the Object Repository.
Some of these are shown in the following figure. When you are using the code editor, the Tool Palette now includes these same options in addition to code snippets, reusable pieces of code that you can drag into the code editor.

Selecting objects from the Tool Palette has also been enhanced, greatly improving the speed with which you can build forms and applications. Simply click the Filter current items button in the Tool Palette toolbar, or press Ctrl-Alt-P, and start typing the name of the object you want to select. As you type, the characters you've entered so far appear in the Tool Palette title bar, and a filtered list of matching objects appears below, as shown in the following figure. Press Enter when the item you want is selected.

You also have additional options for controlling the Tool Palette display. To see these options, select Tools | Options from Delphi 2005's main menu. Tool Palette configuration options are available under the Tool Palette node of the Options dialog box.

Finally, the Tool Palette in Delphi 2005 now supports true drag-and-drop placement of components into the designer you are working with. Previously, component placement with VCL Forms could be better described as click-and-click, though that technique also works in Delphi 2005.

For Delphi VCL-based development, the Tool Palette now includes a number of new controls for creating better user interfaces. These include TButtonGroup, TCategoryButtons, and TDockTabSet. These components, which you can use in your Win32 and VCL for .NET applications, permit you to easily create interfaces similar to those used in Delphi 2005's Tool Palette and the Structure pane. As you have probably already guessed, these new components are the same ones that Borland's engineers developed to build Delphi 2005's IDE. In addition, VCL for .NET has been expanded to include even more Delphi VCL-compatible classes.
These additional classes make it even easier than before to migrate your existing Win32 projects to the .NET Framework. For a complete list of the new components in Delphi 2005, see "What's New in Delphi 2005" in the Delphi 2005 help.

The File Name property in the preceding figure is shown in an enabled font, indicating that you can edit the name of this file using the Object Inspector. Changing the file name here not only changes the name of the file displayed within the Project Manager, but since this file is a Delphi unit, the unit name changes as well. Of course, you can still rename a file the old-fashioned way, by selecting File | Save As from Delphi 2005's main menu.

Other objects selectable within the Project Manager can also be viewed in the Object Inspector. For example, if you select one of the assemblies listed under the References node of a .NET project in the Project Manager, the Object Inspector displays details about that assembly, as shown in this next figure.

Because Delphi 2005 includes both Win32 and .NET compilers for the Delphi language, it can be used to create new Win32 applications as well as further the development of your existing Win32 projects that you created in Delphi 7 and earlier. You can also use Delphi 2005 to migrate your existing Win32 applications to VCL for .NET, the 100% .NET managed-code solution that maintains component and source code compatibility between Win32 and .NET development.

The Upgrade Project Wizard is a special utility that runs the first time you open a Win32 application in Delphi 2005. Using this utility, you can choose to continue the current project as a Win32 application, or you can convert it to a .NET application. Once you have made your choice using this wizard, Delphi 2005 will remember your selection.
If you tell the Upgrade Project Wizard that you want to continue working with a Delphi project as a Win32 project, and at some later time decide to migrate it to VCL for .NET, simply delete your project's *.bdsproj file. After that, open the .dpr file in Delphi 2005. Once again, the Upgrade Project Wizard will ask you to choose whether to continue working with the project as a Win32 project or to migrate it to VCL for .NET.

Wizards are small applets that help you to quickly create the projects, objects, and files that you use in Delphi 2005. For example, the ASP.NET Web Application Wizard creates for you the necessary web.config, global.asax, and initial .aspx file, and configures an IIS virtual directory into which these are placed, among other tasks. In short, wizards increase your productivity, getting you off to a fast start in the right direction. The following figure shows the Delphi 2005 object repository, displaying just a few of the many available wizards.

Delphi has always provided you with a rich collection of wizards that support almost every aspect of Windows development. For Win32 development, these include the Windows 2000 Logo Wizard, the DLL Wizard, the Automation Object Wizard, the Web Service Wizard, the IntraWeb Application Wizard, the Database Form Wizard, and the Thread Wizard. These are just a few of the dozens of powerful wizards that are available. For Delphi for .NET and C#, you will find the ASP.NET Web Application Wizard, the Windows Form Application Wizard, the ASP.NET Web Service Application Wizard, the Web Control Library Wizard, and many, many more.

Delphi 2005 includes wizards that were previously found in Delphi 7, Delphi 8 for the Microsoft .NET Framework, and C# for the Microsoft .NET Framework. In addition, Delphi 2005 includes a number of new and improved wizards, accelerating your development efforts even more.
These include the updated New Component Wizard, the new DB Web Control Library Wizard, the ECO ASP.NET Application Wizard, the ECO Web Service Application Wizard, and the Satellite Assembly Wizard, just to name a few.

Delphi 2005 makes it even easier for you to search your project files by permitting you to group search results by file. To group search results by file, check the Group results by file check box in the Find Text dialog box. The following figure shows what a grouped search result looks like. As you can see, each file in which the search string appears forms a base node in a tree view. Expanding the node for a given file lists the lines on which the located search string was found. You can then double-click a particular entry to go to that line of code in the code editor.

Delphi 2005's IDE has been upgraded across the board to support UTF-8 characters in all of its wizards, windows, dialog boxes, and panes.

Delphi 2005 uses the Message List pane to list compiler errors, warnings, and hints. You can now save the contents of the Message List pane by right-clicking in the Message List pane and selecting either Copy, to copy selected messages to the Windows clipboard, or Save, to save the Message List contents to a file.

Borland's commitment to creating better software has led to the development of a number of programs for reporting and fixing problems. One of the most recent of these is Quality Central, a Web-based application for submitting bug reports. With Delphi 2005, Borland has embedded an error reporting system directly into the IDE. This feature is called IDE Error Reporting. While Borland hopes that you never have to use it, if an exception is raised within the IDE, Delphi 2005 displays the Error dialog box. If you click the Details button, you see a detailed trace of the error. Clicking the Send button displays the Send Report dialog box.
Click the Next button to see the stack trace that will be submitted to Borland along with your error report. Click Next again to enter a description of what you were doing when the error occurred. Click Next once more to optionally provide your Borland Developer Network (BDN) logon email address and password. Submitting your report using your BDN account permits you to easily follow up on your report using Borland's Quality Central. If you want to submit the report anonymously, check the Anonymous Report check box. Click Next one final time to submit the error report.

Do you currently have C# projects in Visual Studio .NET 2003, but need the advanced features offered by Delphi 2005? Don't worry. Importing these projects into Delphi 2005 is easy. Simply select File | Open, and open the Visual Studio C# project file (*.csproj). The Delphi 2005 Import Visual Studio Project Wizard will ask you for the name you want to give to the imported project. From that point forward, you can use Delphi 2005's features to design, develop, compile, test, and deploy the application. The following figure shows a C# project created in Visual Studio .NET 2003 being imported into Delphi 2005.

While the features of Delphi 2005 make it the preferred environment for .NET development, C# projects built in Delphi 2005 can be exported to Visual Studio if you need to share the results of your work with a VS-based developer. To do this, select Tools | Export to Visual Studio from Delphi 2005's main menu. Note that this menu item is only available when the current project in the Project Manager is a C# project.

Delphi 2005 continues Borland's heritage of providing developers with a world-class programming environment. To most developers, that also means a world-class code editor. And that's exactly what you get in Delphi 2005.
In fact, for most developers, the updates that Borland has introduced to the code editor in Delphi 2005 will provide ample justification to upgrade from a previous version of Delphi or C# Builder. These features include refactoring support, SyncEdit, Error Insight, Help Insight, the History Manager, and much, much more. These new features are described in the following sections.

Refactoring is the process of updating existing code to improve its readability, maintainability, and efficiency, without changing the essential behavior of the software. Common refactorings include providing more expressive names for variables, replacing duplicate code segments with a call to a common function that performs the same task, and replacing literal values with constants or resource references. Delphi 2005 includes a number of impressive refactorings. These include symbol renaming, method extraction, variable and field declarations, and resource refactorings.

Symbol renaming permits you to change all instances of a symbol's name throughout your project. Unlike a search-and-replace feature, symbol renaming respects the context in which the symbol name appears. Symbols that can be renamed using this refactoring include class and interface names, properties, methods, functions and procedures, as well as variables and constants.

To perform a symbol renaming refactoring, select the symbol whose name you want to change in the code editor, and select Refactor | Rename. Use the Rename dialog box to define a new name for your symbol. If you leave the View references before refactoring option checked, Delphi 2005 displays the Refactorings pane, which lists all of the instances within your code where the change will be applied. Click the Refactor button on the Refactorings pane toolbar to apply the changes. Alternatively, you can choose to remove one or more of the refactorings before applying them, or even cancel the refactoring altogether.
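To make the rename refactoring concrete, here is a small before-and-after sketch. The form, method, and variable names below are invented for illustration; they do not come from the product documentation.

```delphi
// Before the refactoring: the local variable "x" is not descriptive.
procedure TCustomerForm.UpdateTotal;
var
  x: Currency;
begin
  x := OrderTotal + TaxAmount;
  TotalLabel.Caption := CurrToStr(x);
end;

// After applying Refactor | Rename with the new name "GrandTotal",
// every reference to the symbol is updated:
procedure TCustomerForm.UpdateTotal;
var
  GrandTotal: Currency;
begin
  GrandTotal := OrderTotal + TaxAmount;
  TotalLabel.Caption := CurrToStr(GrandTotal);
end;
```

Because the refactoring resolves symbols rather than matching text, an unrelated identifier that happened to contain the letter x would be left alone.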
The Declare Variable and Declare Field options on the Refactor menu permit you to quickly create a local variable or member field declaration. This option is available with Delphi code, but not with C# projects. (This feature is not needed in C# since fields can appear almost anywhere within a C# class. By comparison, in Delphi variables must appear in a var block, and member fields must appear in a type block.) To insert a local variable or member field, select the symbol name that you created in the code editor, and select Refactor | Declare Variable or Refactor | Declare Field (or press Ctrl-Shift-V or Ctrl-Shift-D, respectively). If you select Declare Variable, the Declare Variable dialog box is shown. You use the Declare Variable dialog box to change the variable name, set its data type, make the variable an array type with a specific dimension, or to initialize the newly created variable to a specific value. Click OK to create the local variable, and initialize its value (if you chose that option). If you select Declare Field, the Declare New Field dialog box is displayed. You use this dialog box to set the name and data type of the new field, to declare it as an array of a given dimension, and to define its visibility within the associated class. When you click OK, the newly named field is created in the selected section of the class within whose method the symbol is located. Resource refactorings are used in Delphi code to convert string literals into resourcestring block entries, replacing the original literal with the resource string symbol. (There is no resourcestring block in the C# language.) Using resource strings instead of string literals is particularly valuable when a specific string literal is used repeatedly, as well as when you need to create localized (language and/or culture specific) versions of your application. After placing your cursor within a string literal in the Delphi code editor, select Refactor | Extract Resource String. 
Use the Extract Resource String dialog box to modify the string and to change the default name for the resource symbol. When you click OK, the string literal is replaced with the resource symbol, and the named symbol is inserted into a resourcestring block in the associated unit's interface section. Most developers think of method extraction when they think of refactoring. Method extraction involves converting one or more lines of code into an independent method call, replacing those lines with an invocation of the extracted method. In Delphi 2005, method extraction refactoring is only available for the Delphi language. Method extraction is particularly useful when the same or similar lines of code appear repeatedly in your project. By extracting those lines to a separate method, replacing each of the repeated instances with an invocation of the method, you greatly enhance your code's maintainability by creating a single location where changes to those lines of code, if desired, need to be implemented. To perform a method extraction refactoring, select the lines of code that you want to extract to a method, and then select Refactor | Extract Method. Use the Extract Method dialog box to define a name for the new method, as well as to examine the code that will be placed inside of this new method. Delphi 2005's extract method refactoring is intelligent, with respect to variables, properties, and objects referenced within the code being extracted. For example, since the code in the preceding figure includes a reference to the Disposing property of the method's class, the value of this property is passed by value to the refactored code. By comparison, if the code actually made a change to the value of a variable that needs to be passed into the refactored method, the associated parameter would be passed by reference (using the var keyword). 
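A simplified sketch may help illustrate what method extraction produces. The class, component, and method names here are invented for illustration, not taken from the product documentation.

```delphi
// Before: the lines that format and display a status message are inline.
procedure TMainForm.SaveOrder;
begin
  Order.Save;
  StatusBar1.SimpleText := Format('Saved at %s', [TimeToStr(Now)]);
  StatusBar1.Refresh;
end;

// After selecting the two StatusBar lines and choosing
// Refactor | Extract Method with the name ShowSaveStatus:
procedure TMainForm.ShowSaveStatus;
begin
  StatusBar1.SimpleText := Format('Saved at %s', [TimeToStr(Now)]);
  StatusBar1.Refresh;
end;

procedure TMainForm.SaveOrder;
begin
  Order.Save;
  ShowSaveStatus;
end;
```

If the extracted lines had modified a local variable from the original method, that variable would instead surface as a var parameter of ShowSaveStatus, as described above.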
Although not exactly a refactoring, the Import Namespace and Find Unit options under the Refactor menu permit you to quickly locate and import the namespace associated with a particular symbol. If you are coding in C#, you select Refactor | Import Namespace. Delphi developers select Refactor | Find Unit. After selecting this option from the Refactor menu, the displayed dialog box lists all of the classes in all of the namespaces available to the environment you are working in. For example, if you are creating a Delphi .NET Windows Forms application, the namespaces of the FCL and RTL for .NET (the .NET version of the Delphi runtime library) are available. Delphi VCL for .NET developers will find the VCL for .NET namespaces as well. By comparison, if you are creating a Delphi Win32 application, the various units of the VCL and RTL are listed. Type the name of the class that you want to be able to access in the Search field. As you type, the Matching Results list is filtered to include only those classes, and their associated namespaces, whose names match what you've typed so far. Select the name of the class whose namespace you want and click OK. If you are working in Delphi, you can also specify whether the namespace will be added to your interface or implementation section uses clause. SyncEdit is a new feature in Delphi 2005 that provides support similar to symbol renaming refactoring. Unlike symbol renaming, however, SyncEdit performs localized renaming of symbols for a selected code block only. This is a powerful capability and one of the most popular of the new features with developers. SyncEdit becomes available anytime you select a code block that includes at least two instances of the same symbol name. For example, consider the following figure, which depicts a selected code block that includes more than one reference to a local variable named DataTable1 (as well as DataSet1, DataAdapter1, Connection1, and the Create method). 
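In case the figure does not reproduce here, the selected block might look roughly like the following sketch. The object names follow the figure's description, but the surrounding code is invented for illustration.

```delphi
var
  Connection1: SqlConnection;
  DataAdapter1: SqlDataAdapter;
  DataSet1: DataSet;
  DataTable1: DataTable;
begin
  Connection1 := SqlConnection.Create(ConnString);
  DataAdapter1 := SqlDataAdapter.Create('SELECT * FROM Customers', Connection1);
  DataSet1 := DataSet.Create;
  DataAdapter1.Fill(DataSet1);
  DataTable1 := DataSet1.Tables[0];
  // Selecting this block and entering SyncEdit mode lets you rename
  // DataTable1 (or any other repeated symbol) in every occurrence at once.
end;
```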
The SyncEdit icon appears in the left gutter of the editor window, indicating that synchronized changes to the selected code block are available. To enter the SyncEdit mode, you either click this icon or press Shift-Ctrl-J. Once you enter the SyncEdit mode, the duplicate symbols are identified, and the symbol selected for synchronized editing appears highlighted (with the duplicates being displayed enclosed in boxes). If you want to edit a symbol other than the one selected by default, press the Tab key until the symbol you want to SyncEdit is selected. After selecting the symbol to edit, begin typing. The name of the selected symbol, and its duplicates, are updated as you type. The following figure shows the name of DataTable1 being changed to CustTable. (The edit is being performed on the first instance of DataTable1, which appeared in the var declaration of this method.)

SyncEdit is a great productivity tool when you are writing functions, procedures, and methods, because it is so easy to use. There are, however, important differences between SyncEdit and symbol renaming refactorings. SyncEdit is lexical, so it works with comment lines as well as compilable code, unlike symbol renaming refactorings, which work only on actual symbol references. Likewise, symbol renaming refactoring extends its reach into descendant classes, as well as to resource files (such as VCL and VCL for .NET form files). SyncEdit only applies to the currently selected code block.

Error Insight, which makes its debut in Delphi 2005, provides you with a service that can be roughly described as spell checking and grammar checking for programmers. As you write your Delphi or C# code, the IDE actively evaluates your work, identifying the symbols, keywords, and directives that you use, looking for syntax and semantic errors that the compiler cannot resolve.
When Error Insight locates an error, it identifies the problem by underscoring the offending text with red squiggly lines, similar to how Microsoft Word identifies words not in its dictionary. When you pause your mouse pointer briefly over a symbol that Error Insight does not recognize, Error Insight displays a hint window with information about the identified error.

In addition to the Error Insight features available in the code editor, the problems located by Error Insight also dynamically appear in the Structure pane, under the Errors node, and disappear as they are corrected. The following figure shows the Structure pane with a number of identified errors. In the example shown in the preceding figures, adding the Borland.Vcl.Registry unit (for the TRegistry class) and Borland.Vcl.Windows (for the HKEY_LOCAL_MACHINE constant) to this unit's uses clause allows Error Insight to see the various symbols that it identified as problems. Once these two units are added to the uses clause, both the Structure pane and the code editor are updated, indicating that no problems are detected in this code.

You can configure Error Insight from the Code Insight node of the Options dialog box. Display this dialog box by selecting Tools | Options from Delphi 2005's main menu.

Another new Code Insight feature appearing in Delphi 2005 is Help Insight. Help Insight provides you with information about the classes, interfaces, methods, properties, and fields that appear in your code, without you ever having to leave the code editor. To access Help Insight, briefly pause your mouse pointer over a symbol in the code editor. After a moment, a hint window appears, displaying information about the symbol. In many instances, Help Insight includes one or more links within the hint window. Clicking one of these links may drill down into the help, displaying an additional hint window with information about the link you clicked.
Alternatively, clicking a link may take you to the line of code where the clicked symbol is defined. Help Insight is also available from the windows displayed by Code Insight, including the Class Completion and Argument Value List windows. When a Code Insight window is active, select an item in the Code Insight window to show the Help Insight for that item. For example, in the following figure Help Insight is displaying information about the BeginTransaction method of a SqlConnection object. This help became available after BeginTransaction was selected in the Code Completion window. You can configure Help Insight from the Code Insight node of the Options dialog box. One of the more exciting additions to the Delphi 2005 code editor is the History Manager. The History Manager, which you display by clicking the History tab when a source file is active in the code editor, permits you to view changes to your source files over time, to view comments about specific versions of your source code, as well as to view the differences between the various saved versions of your files and to easily revert to any backup state or checkin. By default, the History Manager transparently maintains local copies of your source files in a folder named __history under your project directory each time you save your changes. This feature is called local file backup, and you use the Options dialog box to configure how many versions of your local backup to keep. Delphi 2005 maintains the last 10 saved versions of each source file, by default. Depending on your available hard disk space, you may want to increase the number of backups that Delphi 2005 maintains. If you are using Borland’s StarTeam version control server, the History Manager maintains StarTeam checkins as well. Using this feature, you can not only view changes that you have made to the source files, but also compare your changes with those implemented by other developers working on the StarTeam-managed project. 
StarTeam also permits you to track changes even after you have changed a file's name. In short, the History Manager provides you with a convenient and powerful interface to the robust StarTeam project asset management system. It's worth noting that the History Manager also works with the DFM files of VCL and VCL for .NET applications. DFM files are used in those applications to persist information about the properties of the objects that appear on your forms, data modules, and frames. As a result, the History Manager permits you to view, manage, and restore changes made to your form designs using the same tools as those used on Delphi code files. There are three panes available within the History Manager. These panes are named Content, Info, and Diff. Each of these panes is described in the following sections. You use the Content pane to review the contents of your saved source files, and optionally revert to a previously saved version. When you select a specific backup or the current saved version of the file, the contents of that file are displayed in the code area. In addition, the file name and the date last saved appear in the History Manager's status bar. Use the code area to view the contents of the selected file. If you want, you can use the code area to select and copy (Ctrl-C) lines of code that you want to paste elsewhere within your project (or even into other projects). If you want to revert your code to one of the previous saved versions, select the saved backup that you want to revert to and click the Revert to previous version button in the History Manager toolbar. You use the Info pane to view comments and notes associated with a particular version of your source file. If you are using StarTeam to manage your History Manager contents, these comments are linked to your StarTeam backups. If you are using local backups, these comments are generated by Delphi 2005 and cannot be modified. 
Some operations, such as refactorings, write information into the Info pane of the History Manager. For most developers, the Diff pane offers the most valuable feature of the History Manager. The Diff pane provides insight into the differences between the multiple versions of your source code, including comparisons between the current edit buffer and saved source files. Select one of the saved versions of your source file from the Differences From: list on the left side of the Diff pane, and either the contents of the current edit buffer or one of the other saved versions from the To: list on the right side. The difference view is displayed in the code area, with the newer code versions identified with a plus sign (+) in the left gutter, and the older versions identified with a minus (-) sign in the gutter. The following figure depicts changes between the current version of the file in the edit buffer and one of the saved local backups.

Code navigation is a feature of Delphi 2005 that permits you to easily move between sections of your code. For example, by pressing Ctrl-Shift-UpArrow (or Ctrl-Shift-DownArrow), you can move effortlessly from a method name in a Delphi class declaration to the associated implementation of that method. Delphi 2005 introduces a small but valuable enhancement to code navigation in Delphi code, permitting you to move between your interface and implementation section uses clauses, as well as between your unit's initialization and finalization sections, using Ctrl-Shift-UpArrow. Code navigation is not necessary in C# projects, as the associated modules in C# do not have a two-part structure, as is the case with Delphi units.

Delphi 2005 introduces a new feature that permits you to quickly comment and uncomment a selected code block. To comment one or more consecutive lines of code, select the code in the code editor, right-click, and then select Toggle Comment from the displayed context menu (or press Ctrl-/).
When you do this, Delphi 2005 places the single-line comment characters (//) at the start of each of the lines in the selected block. To uncomment one or more consecutive lines, select those lines and press Ctrl-/, or right-click and select Toggle Comment. Delphi 2005 will respond by removing the single-line comment characters from each selected line in the block. The single-line comment characters do not have to be in the first column of the code editor for Delphi 2005 to remove them.

Bookmarks are special tags that you place within a source file to enhance your navigation within that file. You place a bookmark by pressing Ctrl-Shift, followed by a single digit, from 0 to 9. Once placed, the bookmark appears in the left gutter of the code editor using a glyph that represents the digit. Once a bookmark has been placed, you can quickly navigate to that bookmark within the code editor by pressing the Ctrl key followed by the digit used to place the bookmark. For example, if you have previously placed a bookmark using Ctrl-Shift-1, and subsequently navigate to a different area of your code file, you can instantly return to the bookmarked line in your source code by pressing Ctrl-1.

Delphi 2005 now supports persistent bookmarks. If persistent bookmarks are enabled, a placed bookmark will remain in the source code until you specifically remove it. This means that you can place a bookmark in one editing session, and that bookmark will still be there the next time you open that source code file in Delphi 2005. To enable persistent bookmarks, check the Project desktop check box under the Autosave options group on the Environment Options page of the Options dialog box. You display the Options dialog box by selecting Tools | Options from Delphi 2005's main menu.

Janeva is Borland's middleware solution for using J2EE (Java 2 Enterprise Edition) Enterprise JavaBeans and CORBA (common object request broker architecture) objects from your Delphi 2005 applications.
With Janeva, you can leverage your existing enterprise-level objects, calling them from your Web-enabled or workstation client .NET applications. To enable access to a J2EE or CORBA object from your Delphi 2005 application, select the Project menu on Delphi 2005's main menu, or right-click the current project in the Project Manager, and select either Add J2EE Reference or Add CORBA Reference. Use the displayed dialog box to select the Java .jar or .ear file, or the CORBA IDL (interface definition language) file. If you select a Java archive, for example, Delphi 2005 then permits you to choose which of the contained Enterprise JavaBeans you want to use, as shown in the following figure. Selecting OK generates a proxy class that you use to make calls to the Java server at runtime.

You can now choose how Delphi 2005 will encode your source files. Your options include ANSI, Binary, UTF8, and so on. To set the file encoding for the current source file in the editor, right-click in the editor and select File Format from the context menu. Select the encoding you want to use from the displayed menu. Being able to select the source file encoding is particularly valuable when you are writing source files using non-US locales. For example, source files encoded using UTF-8 will correctly maintain the identity of the individual characters even when opened in a different locale. By comparison, special characters in a source file may change if the source file is encoded in ANSI and then opened with a different ANSI codepage.

The Delphi visual component library for .NET (which, for the purpose of this discussion, includes the runtime library for .NET, or RTL for .NET) is a 100% .NET managed code equivalent of the Delphi VCL for Win32. Several features of the VCL for .NET are notable. For one thing, VCL for .NET is the largest 100% managed collection of classes, types, and functions for the .NET Framework outside the .NET Framework class library itself.
And it's only available in Delphi 2005 (or its immediate predecessor, Delphi 8 for the Microsoft .NET Framework). The second notable characteristic of the VCL for .NET is its remarkable compatibility with the Win32 versions of the VCL. In fact, you can use this compatibility to migrate your Win32 Delphi code to .NET with little or no effort. There are several updates to the VCL for .NET added in Delphi 2005. These are described in the following sections.

Delphi provides extensive support for interoperability between Win32 and .NET applications, including COM interop through runtime callable wrappers (RCWs) and platform invoke (PInvoke). With Delphi 2005, this support has been extended with virtual library interfaces (VLI). Virtual library interfaces permit you to call routines in Win32 DLLs from your .NET applications much more easily than the mechanism provided by .NET's PInvoke. Normally, managed code in the .NET Framework can call routines in unmanaged libraries through the .NET platform invoke service, or PInvoke. With PInvoke, you import the exported routines of an unmanaged DLL by using the [DllImport] attribute to identify the DLL in which the function is located, as well as other characteristics of the exported function.

There are several drawbacks to using PInvoke. First, using the [DllImport] attribute, you cannot resolve the DLL name or location (path) at runtime. Second, if the specified routine in the DLL cannot be loaded, for whatever reason, a runtime exception is raised. Third, the [DllImport] attribute is somewhat verbose and repetitive, especially when you have many routines that you are importing from a single DLL.
Consider the following two functions, which are implemented and exported from a Win32 DLL created using Win32 Delphi:

  function ConvertCtoF(CentValue: Integer): Integer; stdcall;
  function ConvertFtoC(FahrValue: Integer): Integer; stdcall;

A unit that imports these routines using PInvoke has, at a minimum, an implementation block that looks something like the following (assuming that these routines were exported from a DLL named Win32DLL.dll):

  [DllImport('Win32DLL.dll', CharSet = CharSet.Auto, EntryPoint = 'ConvertCtoF')]
  function ConvertCtoF; external;
  [DllImport('Win32DLL.dll', CharSet = CharSet.Auto, EntryPoint = 'ConvertFtoC')]
  function ConvertFtoC; external;

With virtual library interfaces, importing routines from an unmanaged DLL is easier, is less prone to raising exceptions, and permits your code to resolve the name and/or location of the DLL at runtime. There are three steps to importing one or more routines from an unmanaged DLL using virtual library interfaces. These are:

1. Adding the Borland.Vcl.Win32 namespace to your uses clause.
2. Creating an interface declaration where each method in the interface maps to one of the routines exported from the DLL.
3. Calling the Supports function from the Borland.Vcl.Win32 unit, passing to it the name of the DLL (including an optional path if the DLL is not located in a location where Windows will find it), the interface you created in the preceding step, and a variable of that interface type.

If the Supports function determines that the methods of your interface map to functions exported from the named DLL, the variable you pass in the third parameter of the call to Supports will point to an object that implements the interface you passed in the second parameter. You can then use this object reference to execute the unmanaged routines of the DLL.
If one or more of the methods of the interface are not exported by the named DLL, or the named DLL does not exist or is somehow compromised, Supports returns a Boolean False without raising an exception. Here is a sample interface that declares the two exported functions of the unmanaged DLL example used earlier in this section:

  type
    IWin32DLLInt = interface
      function ConvertCtoF(CentValue: Integer): Integer;
      function ConvertFtoC(FahrValue: Integer): Integer;
    end;

If Win32DLL.dll is located in the mylib subdirectory of the application's executable directory, the following code returns an implementation of IWin32DLLInt, after which one of the methods (ConvertCtoF) of the returned object is executed:

  var
    MyDLL: String;
    MyWin32DLL: IWin32DLLInt;
    NewInt: Integer;
  begin
    MyDLL := ExtractFilePath(Application.ExeName) + 'mylib\Win32DLL.dll';
    if not Supports(MyDLL, IWin32DLLInt, MyWin32DLL) then
      MessageBox.Show(self, 'Could not load Win32DLL.dll')
    else
      NewInt := MyWin32DLL.ConvertCtoF(100);
  end;

The VCL for .NET assemblies now support partially trusted callers. A partially trusted caller is an application that does not reside on the same workstation as a managed assembly that it calls. For example, an .exe being executed from a network share or from a URL is a partially trusted caller. By default, the .NET security model prevents a partially trusted caller from invoking unmanaged code, such as that in the Windows API, unless that caller includes the appropriate declarations and checks. The Delphi 2005 assemblies of the VCL for .NET now include the additional security declarations and checks that permit the VCL for .NET to be called from a partially trusted caller without violating .NET security.

Borland compilers are legendary for their speed and compatibility, and this legacy continues with Delphi 2005. Actually, Delphi 2005 ships with three compilers. One of these compilers, the C# compiler, is licensed from Microsoft.
Consequently, C# applications you build in Delphi 2005 generate the same intermediate language (IL) code as those built with Visual Studio. The other two compilers are Delphi compilers, one for compiling traditional 32-bit Windows executables and the other for generating IL for the .NET Framework. Both of these compilers have received significant updates in the Delphi 2005 release. This section begins with a discussion of features added to both the Win32 and the .NET versions of the Delphi compilers. Later in this section, you will learn about the new features that are specific to one or the other of these compilers.

Several new features have been added to both of Delphi 2005’s Delphi compilers. The most significant of these include the new for…in loop and Unicode support. These new compiler features are described in the following sections.

The Delphi language has been updated to include a new looping control structure similar to the C# foreach keyword. In Delphi, this new loop is referred to as a for…in loop. Unlike traditional for loops in Delphi, the for…in loop does not require an ordinal control variable. Instead, the for…in loop systematically retrieves a reference to the next object in a collection of like objects. For example, the following code segment can be used to iterate through the DataRows of a DataTable's Rows property (this property is of the type DataRowCollection):

  var
    Row: DataRow;
  begin
    //…
    for Row in MyDataTable.Rows do
      ListBox1.Items.Add(Row[0].ToString);

For the .NET Delphi compiler, for…in can be used with any object that satisfies at least one of the following conditions: it implements the IEnumerable interface, it has a public GetEnumerator function, or it is an array, set, or string. For the Win32 compiler, for…in can be used with any class that has a public GetEnumerator function, as well as with any array, set, or string. Classes that implement a GetEnumerator function include TList, TCollection, TStrings, TMenuItem, and TFields, to name a few.
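As a further, purely illustrative sketch (the procedure and variable names here are invented, not taken from the product documentation), the same loop construct can be applied in Win32 code. TStringList qualifies because its ancestor TStrings exposes a public GetEnumerator, and strings can be enumerated one Char at a time:

```delphi
uses
  Classes;

procedure ListFruit;
var
  Fruit: TStringList;
  Item: string;
  Ch: Char;
begin
  Fruit := TStringList.Create;
  try
    Fruit.Add('apple');
    Fruit.Add('pear');
    // TStrings descendants implement GetEnumerator, so for..in applies
    for Item in Fruit do
      WriteLn(Item);
  finally
    Fruit.Free;
  end;
  // A string is also enumerable, yielding one Char per iteration
  for Ch in 'abc' do
    Write(Ch);
end;
```

Note that no index variable or Count property is needed; the compiler generates the enumeration calls for you.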
Both of Delphi’s compilers can now compile UTF8 and Unicode source files. Previously, only ANSI source files were supported. For the Delphi for .NET compiler, this feature supports CLS (common language specification) standard Unicode identifiers in both metadata and source code.

Borland's Delphi for .NET compiler made its debut with the release of Delphi 8 for the Microsoft .NET Framework. In addition to the updates listed in the preceding section, this compiler has received a number of updates that apply specifically to .NET applications. These include a revision to how namespaces are created and managed, forward-declared record types, and support for weak packaging in VCL for .NET applications. The updates to the Delphi for .NET compiler are described in the following sections.

The biggest change to the .NET compiler is in how it generates namespaces for the symbols defined in your units. Under the previous version of the compiler, the unit name was the namespace. For some developers, particularly those accustomed to using classes defined in C#, the namespaces created by Delphi appeared awkward. Specifically, these namespaces revealed the physical structure of the underlying code, which is irrelevant from the perspective of the person using your classes, and can be distracting.

The Delphi 2005 compiler takes a new approach to namespace generation, allowing multiple units, and even multiple applications, to contribute to a common namespace, if desired. At the same time, it is just as easy to make each unit contribute to a separate namespace. Here is how it works. If your unit names do not use dot notation, the unit name is the namespace. This is how it worked before. If a unit includes a multipart name, using dot notation, the namespace is defined by dropping the last part of the unit name. For example, if a unit has the name YourCompany.Data.Unit1, the classes within that unit will reside in the YourCompany.Data namespace.
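To make the dot-notation mapping concrete, consider the following hypothetical unit (the unit and class names are invented for this sketch). Because the unit is named YourCompany.Data.Unit1, the compiler drops the final part of the name, and the class lands in the YourCompany.Data namespace:

```delphi
unit YourCompany.Data.Unit1;

interface

type
  // In .NET metadata this class appears as YourCompany.Data.TCustomer
  TCustomer = class
  public
    procedure Refresh;
  end;

implementation

procedure TCustomer.Refresh;
begin
  // ...
end;

end.
```

A C# client would then refer to the class as YourCompany.Data.TCustomer, with no trace of the unit's physical file name in the namespace.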
Classes that appear in the YourCompany.Data.Unit2 and YourCompany.Data.Unit3 units will be in the YourCompany.Data namespace as well. Global variables, constants, functions, and procedures declared in Delphi code represent something of a challenge, in that .NET requires all declarations to be associated with a class. Therefore, the global symbols of a Delphi unit named YourCompany.Data.Unit1 are implemented in .NET metadata as members of a class named Unit1 within the namespace YourCompany.Data.Units. How Delphi symbols appear in .NET metadata has no effect on your Delphi source code. You only need to consider how your Delphi code will appear in the .NET metadata for the portion of your code that you want developers using other .NET languages to use. In general, you should avoid using global variables, global constants, or global procedures and functions when writing Delphi code that you intend to be used by other .NET languages.

A runtime package in the VCL for .NET is a managed .NET assembly; it contains declarations that the application can load and use at runtime. Under normal circumstances, if you compile a VCL for .NET application to use a runtime package, you are required to deploy that package, just as you are required to ensure the deployment of all assemblies (DLLs) that are referenced in your application. Weak packaging of a unit addresses a problem that arises when a runtime package contains one or more units that statically link to an external DLL, in particular, a DLL that is not commonly available. Under normal conditions, this situation requires that you deploy both the runtime package and the DLL.

Consider the Microsoft DLL PenWin.dll for pen device input, which is not distributed with Microsoft operating systems. The PenWin unit in Delphi statically links to the DLL PenWin.dll.
If your unit uses PenWin, and includes calls to one or more functions in the statically linked PenWin.dll, adding your unit to a runtime package without weak packaging would require that PenWin.dll be available to any application that loaded that runtime package. By making this unit weakly packaged, only applications that actually call PenWin.dll functions will require PenWin.dll. Weak packaging permits an application to link a non-packaged version of the unit into the executable instead of using the runtime package that contains this unit. As a result, applications that need the features of the weakly packaged unit will link in the non-packaged version of the unit (the version stored in the compiler-generated DCPIL file) and will therefore require the DLL. Applications that do not use the unit will not require the DLL, even if they are compiled to use the package that contains the weakly packaged unit. Weak packaging has been available for some time in the Delphi Win32 compiler. Weakly packaged unit semantics are now supported by the Delphi for .NET compiler.

Record types can now be forward declared in Delphi VCL for .NET and FCL applications. A forward-declared record instructs the compiler to recognize the record as a valid type, even though its formal declaration appears later in the same type block. Forward-declared record types permit two type declarations (records, classes, or interfaces) appearing in the same type block to reference one another in their member fields, properties, or methods. You create a forward-declared record type by declaring the record type symbol but omitting the record's field lists.

The degree of compatibility between the Delphi Win32 and .NET compilers is one of the truly remarkable Delphi 2005 features. This compatibility permits single projects to be compiled as true Win32 applications and then effortlessly migrated to 100% .NET managed code applications.
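Based on the description of forward-declared records above, the syntax might be sketched as follows. This is a hypothetical illustration (the type names are invented, and the exact syntax should be confirmed against the Delphi 2005 documentation):

```delphi
type
  TVertex = record;         // forward declaration: the field list is omitted

  TEdge = class
  public
    StartPoint: TVertex;    // references TVertex before its formal declaration
  end;

  TVertex = record          // formal declaration, later in the same type block
    X, Y: Double;
    procedure AttachTo(AnEdge: TEdge);  // references the class declared above
  end;
```

Without the forward declaration, TEdge and TVertex could not refer to each other within a single type block.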
In many cases, a single set of source files can be compiled by both the Win32 and the .NET versions of the Delphi compiler. No other development environment lets you do this as easily. Equally compelling for developers is Borland's continued support for the Win32 platform with the most modern IDE on the market. While Borland is committed to the .NET platform as the future of Windows development, Borland also knows that the majority of desktop developers maintain applications on the Win32 platform, and Borland is just as committed to providing those developers with the advanced features that they need. Although the bulk of the enhancements to the Win32 compiler have already been described earlier in this guide (in the section "Updates for Both Win32 and .NET Compilers"), the following sections discuss some of the unique features added to the Delphi Win32 compiler in Delphi 2005. Function inlining is an operation performed by the Win32 compiler at compile time. When a function is inlined, the compiler replaces a call to the subroutine (a method, function, or procedure) with the compiled instructions defined within the subroutine. Function inlining can increase application performance by eliminating the overhead associated with function, procedure, and method calls. There are two ways to influence whether the compiler will inline a function or not. One way is to include the inline directive in the function, procedure, or method declaration. This directive is a request to the compiler to consider whether or not to inline the function. If inlining has not been disabled, and the compiler determines that the function can be safely inlined, the inlining will be performed. The second way is to use the {$INLINE} compiler directive. This directive can be passed one of three parameters, ON, OFF, and AUTO. With the ON parameter, the default, the compiler will inline functions declared using the inline directive, whenever the compiler determines that inlining is safe. 
No inlining takes place when you specify the OFF parameter. When you use the {$INLINE} compiler directive with the AUTO parameter, the compiler attempts to inline, if possible, any small function — one whose code size is roughly 32 bytes or less. While function inlining can produce performance improvements, Borland is quick to note that it should be applied judiciously, and does not recommend using the AUTO parameter with the {$INLINE} compiler directive. Inlining can produce larger executables, even some that are dramatically larger. Also, inlined functions do not always produce performance benefits. In some cases, inlining can actually reduce performance.

There are a number of conditions that prevent a subroutine from being inlined. For example, subroutines that include inline assembler instructions cannot be inlined. Similarly, methods of a class that access one or more of that class's private members cannot be inlined into a method in another class. Borland has applied the inline directive to some of the smaller routines in the VCL and RTL, where deemed appropriate. As a result, code that uses these routines will execute faster than before, but with slightly larger executables.

A nested type is a type declaration inside another type declaration. The Delphi for .NET compiler already supports nested types. Delphi's Win32 compiler does now, too. The following is an example of a class that contains a nested type. This example is taken from the Delphi 2005 Help, and can be found under the heading Nested Type Declarations.

  type
    TOuterClass = class
    strict private
      myField: Integer;
    public
      type
        TInnerClass = class
        public
          myInnerField: Integer;
          procedure innerProc;
        end;
      procedure outerProc;
    end;

Nested type constants are constant class member declarations inside a class type declaration. Nested type constants are somewhat similar to class functions, in that they can be referenced using a class reference without an instance of the class.
Unlike class functions, however, nested type constants always return a constant value. Nested type constants are already available for your .NET projects. Now you can use them in your Win32 applications as well. Nested type constants can be of any simple type, such as ordinal, real, and String. You cannot declare a nested constant to be a value type, such as TDateTime. The following is an example of a class that includes a nested type constant declaration:

  type
    TTemperatureConverter = class(TObject)
    public
      const AbsoluteZero = -273;
      function ConvertFtoC(Temp: Integer): Integer;
      //…

If you need to get close to the silicon, Delphi's Win32 compiler now permits you to include Pentium 4 SSE3 and SSE2 op codes and data types in your inline assembly routines.

XML document generation was introduced in the Delphi 8 for .NET and C# Builder compilers. You can now generate XML documentation files for your Win32 source code. To enable XML Doc generation, enable the Generate XML Documentation check box on the Compiler page of the Project Options dialog box. You display the Project Options dialog box by selecting Project | Options from the Delphi 2005 main menu. When Generate XML Documentation is enabled, the compiler produces one XML file for each of your source files. This file has the same name as the source file, but with the .xml extension. If you have included custom XML Documentation comments in your source files, these will be inserted into the generated XML file. The XML files generated when you compile with Generate XML Documentation enabled can be used with widely available documentation generating tools. Alternatively, you can write your own XML parser to use this information any way you see fit.

A good debugger is one of the essential tools for successful software engineering.
Whether it is used to help you learn the values of your various variables and objects as your code executes, or to inspect the contents of your application's stack, a debugger lets you do the nearly impossible: peer into the black box and make sense of what's going on. This section provides you with insight into Delphi 2005's support for debugging your Win32 and .NET applications.

Delphi 2005 doesn't just have a world-class debugger; it has two. One of these is for your .NET applications that you have compiled to IL, and the other is for your Win32 applications that you've compiled to machine language. Delphi 2005 selects which of these debuggers to use based on the type of compiler that created your executable. For example, if you are debugging an ASP.NET Web application, a Windows Forms application, or a VCL for .NET application, Delphi 2005 uses the Borland .NET Debugger. By comparison, if you are debugging a VCL client/server application, a COM (component object model) server, or a traditional Win32 DLL, Delphi 2005 uses the Borland Win32 Debugger.

Just as Borland provides you with a consistent set of features when it comes to compiling, Borland's debuggers do a remarkable job of giving you a rich, dependable, and consistent set of tools for debugging your applications, whether you are compiling for .NET, Win32, or both. For example, each of Delphi 2005's debuggers permits you to set breakpoints, view the call stack, change the values of variables and objects, access local variable values, switch between your application's current threads, view CPU (central processing unit) data, examine the event log, as well as access the list of loaded modules. You can even use these debuggers to attach to existing processes, giving you insight into how they are functioning.

While the features offered by these two debuggers are consistent, they are not identical. Specifically, each debugger provides you with options appropriate for the associated executable.
For example, with Win32 applications you can create data breakpoints, breakpoints that trigger when the data stored in a particular memory address changes. Data breakpoints don't make sense in the .NET world, since the physical address in which data is stored cannot be predicted. On the other hand, the CPU window displayed by the .NET debugger can include the IL (intermediate language) that the .NET compiler emitted. Win32 compilers don't generate IL, so this feature does not apply to Win32 executables. The following sections provide you with information about new features that appear in the debuggers for Delphi 2005.

An exception is an error generated at runtime by your application. Unless you have specifically configured your debugger to ignore the exception (or have disabled the debugger), several things happen when an exception occurs while you are running an application from within the Delphi 2005 IDE: your program stops executing, the appropriate debugger is loaded, and the Debugger Exception Notification dialog box is shown. An example of this dialog box is shown in the following figure.

The Debugger Exception Notification dialog box in Delphi 2005 includes a number of new features. You can choose whether to stop your program's execution temporarily or close the debugger and continue executing the program using the Break and Continue buttons, respectively, located in the lower-right corner of this dialog box. In addition to these options, you may also see one or more of the check boxes that appear on the left side of the dialog box in the preceding figure. If you click the Ignore this exception type check box, the class of exception that occurred is added to the Exception types to ignore list on the Language Exception tab of the Options dialog box. From that point on, this particular exception class, as well as any class that descends from it, will no longer load the integrated debugger.
If you later want to restore the default behavior of having this exception load the integrated debugger, either uncheck the check box next to the corresponding exception in the Exception types to ignore list, or select the exception and click the Remove button. (You display the Options dialog box by selecting Tools | Options from Delphi 2005's main menu.)

If you check the Inspect exception object check box on the Debugger Exception Notification dialog box, and then click the Break button, the Debug Inspector becomes available, as shown in the following figure. The Debug Inspector allows you to view, and drill down into, the instance of the raised exception. In this case, detailed information about the exception can be discovered by double-clicking the _errors property of the SqlException object, and then inspecting the SqlErrorCollection, which contains the detailed information about the problem that was encountered.

If the raised exception does not correspond to a source location, the Show CPU view check box is available on the Debugger Exception Notification dialog box. Checking this check box, then clicking Break, loads the CPU window, displaying the disassembled view of the executing code, the CPU registers, and possibly other information, depending on the debugger.

Speaking of the disassembled view, Borland has introduced updates to both the Win32 and .NET versions of this part of the CPU window. For a .NET executable, you now have the option of viewing the generated IL, the source code that compiled to the IL, or both. An example of the CPU view displayed by the Borland .NET Debugger is shown in the following figure. This particular CPU view is associated with a source breakpoint. The highlighted statement in the disassembled pane (the left pane) is the Delphi source on which the breakpoint was placed.
Beneath this code you can see both the IL instructions that were emitted by the Delphi for .NET compiler, as well as the resulting assembly language instructions that the JIT compiler produced. You control whether IL and/or source code appears in the disassembled pane of the CPU view using the disassembled pane's context menu. When Mixed Source is checked, source code is displayed. When Mixed IL Code is checked, IL is displayed. Turn both of these options off to view only the code generated by the JIT compiler. Breakpoints are event-driven markers that can be configured to cause the integrated debugger to perform a task. In most cases, this task is to temporarily stop executing your code and load the integrated debugger, permitting you to examine features of the execution environment. On the other hand, the task might not include stopping your code's execution, but instead perform some action, such as writing a message to the event log. Delphi 2005 introduces two new features that specifically apply to breakpoints. These are described in the following sections. Source, address, and data breakpoints can now be configured to write the call stack to the event log. The call stack stores the current methods, functions, and procedures in the call chain, in the order in which they were entered. Breakpoints that write the call stack to the event log permit you to more easily track and document the events that lead to your code's execution. To write call stack information to the event log, enable the Log Call Stack check box. Use the available radio buttons to configure the breakpoint to either write the entire call stack to the event log, or only a specific number of frames. Typically, when you write the call stack to the event log, you do not need the breakpoint to load the integrated debugger. If that is the case, make sure that the Break check box is left unchecked for this breakpoint. The Breakpoint dialog box has received several updates in this release. 
First, a new toolbar is available, permitting you to more easily enable, disable, remove, and configure your breakpoints. The Breakpoint dialog box has also been upgraded to permit in-place editing of a number of breakpoint properties, without having to view a particular breakpoint's Breakpoint Properties dialog box. Using the Breakpoint dialog box, you can directly edit the Enabled, Condition, Pass Count, and Group properties of individual breakpoints. The following figure shows the Condition property of a breakpoint being edited using the Breakpoint dialog box. Previous versions of Borland IDEs have permitted you to attach to a running process. Once attached to a process, you can use the debugger's features to inspect the process execution environment. Attaching to a running process is even more powerful in Delphi 2005. For starters, when you select Run | Attach to Process, Delphi 2005 asks you to select which debugger to use to attach to the process. If you select the Borland .NET Debugger, only processes hosted by the CLR (common language runtime) are displayed for your selection. If you select the Borland Win32 Debugger, traditional Win32 processes are shown. Also new is the option to detach from a process. If you have previously used one of the Borland debuggers to attach to a process, select Run | Detach from Process from Delphi 2005's main menu to detach from the process. A popular debugging feature in Delphi 8 and C# Builder is the capability to select a particular frame from the call stack using the Local Variables dialog box. This feature is now available for the Borland Win32 Debugger. With the Borland Win32 Debugger loaded, view the Local Variables dialog box. (If this dialog box is not already visible, select View | Debug Windows | Local Variables, or press Ctrl-Alt-L, to display it.) Initially, the values of variables local to the function in which the debugger has stopped are shown.
To view local variables in one of the methods earlier in the call chain, select the method name from the drop-down menu. Delphi has long been considered the leading environment for database development. Currently, Delphi 2005 provides you with more data access options than any other environment. For Win32 development, in addition to a number of industry-standard data access mechanisms, such as ODBC (open database connectivity) and OLE DB providers, developers have a wide range of Borland technologies that they can employ, including the BDE (Borland Database Engine), dbExpress, IBExpress (InterBase Express), dbGo for ADO, MyBase (ClientDataSet), and DataSnap, Borland's multitier, distributed database environment. Delphi for .NET developers can use the same technologies as Delphi Win32 developers. The .NET implementation of the Win32 data access mechanisms uses what Borland calls its compatibility data access technologies. These are all found in VCL for .NET. In addition, both Delphi for .NET and C# developers can access their data through ADO.NET, the data access framework of the FCL. Borland also provides an advanced custom data provider for ADO.NET for both Delphi for .NET and C# developers. This technology, which is called Borland Data Providers, or BDP for ADO.NET, offers many enhancements and extensions to ADO.NET, including live data views at design time, useful component designers, greater portability between underlying databases, and more. What's especially impressive about Delphi 2005 is that Borland has added significant new database functionality in addition to the extensive features available in Delphi 8 for .NET and C# Builder. These additions and enhancements are described in the following sections. ADO.NET is the portion of the .NET framework associated with database development. While ADO.NET is very powerful, it fails to provide the design-time ease of use Delphi developers expect.
RAD for ADO.NET is Borland's answer, bringing the convenience and speed of Delphi database development to the world of ADO.NET. RAD for ADO.NET simplifies the process of using ADO.NET from within your applications in two distinct ways. First, the DataSync and DataHub components provide a flexible provider/resolver mechanism that uses industry-standard ADO.NET data providers for data access. Second, the RemoteServer and RemoteConnection components permit you to extend these capabilities to a distributed environment. These technologies, and the components that implement them, are described in the following sections. Delphi 2005 introduces two new provider/resolver components that simplify how you work with your ADO.NET-related data access objects: DataSync and DataHub. You can use these components with any ADO.NET data providers to provide design-time views of your data, simplify data access, as well as apply updates back to your underlying database. The relationship between the DataSync and DataHub components and the traditional classes of ADO.NET development is shown in the following figure. Here the DataSync and DataHub components mediate between the ADO.NET DataSet and IDbConnection classes to provide services lacking in ADO.NET alone. These services include live, design-time views of data, the management of multiple database connections, as well as flexible and optimized data resolution services. In addition, when used with Borland's new data remoting components, DataSync and DataHub simplify the process of creating distributed applications in the .NET framework. The data remoting components are discussed later in this section. The DataSync component maintains a list of data providers, that is, classes that descend from DbDataAdapter. For each data adapter, the DataSync keeps track of the provider name, the name of the DataTable that the DataSync will create for the provider, as well as how changes to the DataTable will be applied to the underlying database.
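For context, the plumbing that a DataSync/DataHub pair manages on your behalf looks roughly like the following when written directly against ADO.NET. This is a Delphi for .NET sketch; the connection string, query, and table name are hypothetical:

```delphi
uses
  System.Data, System.Data.SqlClient;

var
  Connection: SqlConnection;
  Adapter: SqlDataAdapter;
  Builder: SqlCommandBuilder;
  Data: DataSet;
begin
  Connection := SqlConnection.Create(
    'Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI');
  // The data adapter plays the provider role that a DataSync tracks.
  Adapter := SqlDataAdapter.Create('SELECT * FROM Customers', Connection);
  Builder := SqlCommandBuilder.Create(Adapter); // generates update commands
  Data := DataSet.Create;
  Adapter.Fill(Data, 'Customers');    // creates and populates a DataTable
  // ...edit rows in Data.Tables['Customers']...
  Adapter.Update(Data, 'Customers');  // resolves changes to the database
end;
```

A DataSync/DataHub pair replaces this per-adapter bookkeeping with component properties you set at design time, and a single ApplyChanges call on the DataHub takes the place of the explicit Update calls.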
Because the DataSync relies on descendants of DbDataAdapter, it can work with any data provider, not just the Borland Data Providers. As a result, you can use a DataSync with classes such as SqlDataAdapter and OdbcDataAdapter, which are included in the FCL, as well as data adapters from third-party vendors, such as IBM.Data.DB2 and Oracle.Data.Provider. You use the DataHub component in conjunction with a DataSync to feed data from the DataSync's data adapters to a DataSet, as well as to initiate the resolution of changes back to the underlying database. Importantly, the DataHub can be activated at design time, which means that the combination of a DataSync and DataHub provides you with live data views at design time, a feature that is otherwise unavailable from non-BDP data adapters. The following figure shows a C# project in which a DataSync and DataHub are used to populate a DataSet at design time. The DataGrid on the form shown in the designer is displaying the data obtained through the DataSync/DataHub combination. Another important feature of a DataHub is that it provides a single point of control for applying changes back to the underlying databases. Simply call the DataHub's ApplyChanges method, and it communicates with the DataSync, which responds by generating and executing the appropriate queries, based on the changes found in the associated DataTables. In the project shown in the preceding figure, the single line of code that is associated with the Click event of the button whose caption reads Resolve Changes is shown here: dataHub1.ApplyChanges(); The .NET framework provides extensive support for working with remote objects through its .NET remoting services. One of the more practical applications of this technology is for implementing distributed database applications where DataSets in one process are accessed from applications in another, even when the applications are on separate computers on the Internet.
However, .NET remoting is a general service, which means that using it to work with remote DataSets often requires a lot of custom code. Delphi 2005 makes working with remote data easy with two new components that encapsulate .NET remoting services, permitting you to effortlessly work with DataSync and DataHub components in a distributed environment. These components, RemoteServer and RemoteConnection, permit you to build applications where the DataSync and DbDataAdapters reside on one machine, and the DataHub and its associated DataSet component reside on another. How RemoteServer and RemoteConnection extend the capabilities of DataSync and DataHub is depicted in the following diagram. The RemoteServer component permits you to publish DataSync objects in one process to applications using a RemoteConnection component in another process. The RemoteServer and RemoteConnection components can communicate using either HTTP or TCP. When you place a RemoteServer component into a project, you set its DataSync property to the DataSync instance containing the providers that you want to expose. You also set its ChannelType (Http or Tcp), Port to listen on, and URI (the specific resource that a client requests over the specified port). You use the RemoteConnection component in an application to obtain data through a remote DataSync. After placing a RemoteConnection, you specify the ChannelType, Port, URL, and URI that identify where your remote server resides. You then set the RemoteConnection's ProviderType property to point to a particular provider on the remote server. Once the RemoteConnection object is configured, you connect a DataHub in your client application to the RemoteConnection. This provides the DataHub with access to the DataSync on the server to which the RemoteConnection is attached. From this point on, you configure and use the DataHub just as you would if the DataSync were in the same process.
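In code, that configuration might be sketched as follows. The property names are the ones given above; the enum values, host name, provider name, and the DataHub property used to attach the RemoteConnection are assumptions for illustration only:

```delphi
// Server side: publish an existing DataSync over HTTP.
RemoteServer1.DataSync    := DataSync1;
RemoteServer1.ChannelType := ChannelType.Http;   // or Tcp (enum name assumed)
RemoteServer1.Port        := 8000;
RemoteServer1.URI         := 'CustomerData';

// Client side: point a RemoteConnection at that server.
RemoteConnection1.ChannelType  := ChannelType.Http;
RemoteConnection1.URL          := 'http://appserver';  // hypothetical host
RemoteConnection1.Port         := 8000;
RemoteConnection1.URI          := 'CustomerData';
RemoteConnection1.ProviderType := 'Customers';         // hypothetical provider

// ...then attach the DataHub to the RemoteConnection instead of a
// local DataSync (the exact property name is an assumption).
DataHub1.DataPort := RemoteConnection1;
```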
The RemoteConnection and RemoteServer objects use .NET remoting to transparently move the data between the remote DataSync and the local DataHub. The following figure shows a DataGrid that displays data obtained through a remote DataSync. It is interesting to note that this client application was built using Delphi, while the server was built using C#. You could just as easily have done this the other way around. On the other hand, both the client and the server could have been built using the same language. The Borland Data Provider for ADO.NET is a set of concrete classes and associated types that implement the data access interfaces of ADO.NET. These classes, which are part of what Borland calls BDP for ADO.NET, provide you with a powerful and portable solution for connecting to a wide variety of different databases while at the same time extending the already substantial capabilities of ADO.NET. The Borland Data Provider for ADO.NET also includes powerful component editors that you use to work with the BDP data access classes, as well as additional classes that specifically bind to BDP, such as DataSync and DataHub, which provide data services that go well beyond those found in ADO.NET alone. Delphi 2005 includes a number of updates to BDP for ADO.NET. For example, BDP now supports connections to Sybase databases, as well as support for Oracle packages. There is also a new BDP for ADO.NET component — BdpCopyTable. This component provides your applications with the ability to copy a table and its primary index from one supported BDP for ADO.NET provider to another, giving you the runtime equivalent of the new Copy Table feature in the BDP Data Explorer (which is described in the following section). There is another update to BDP for ADO.NET that is not so obvious. BDP for ADO.NET has introduced additional interfaces for BDP providers that expose schema retrieval methods.
BDP uses these interface implementations to discover information about the structure of database objects beyond what is currently supported in ADO.NET alone. These behind-the-scenes interfaces are responsible for BDP's ability to copy tables, discover stored procedure parameters, and migrate data. These features are available to you at design time through the newly enhanced Data Explorer. The Data Explorer permits you to work with ADO.NET at design time through BDP for ADO.NET-supported databases, such as Oracle, DB2, MS SQL Server, InterBase, and MS Access. With the Data Explorer, you can inspect database objects, such as tables, views, and stored procedures, from within the Delphi 2005 IDE. The Data Explorer also lets you easily create and configure BDP-related data access components, such as BdpConnections and BdpDataAdapters. The Data Explorer has received a significant upgrade in Delphi 2005. Features now available from the Data Explorer permit you to create, alter, and drop database tables, test stored procedures, and copy data between BDP for ADO.NET-supported databases. Each of these features is discussed in the following sections. You can use the Data Explorer to create, modify, and delete database tables without having to leave the Delphi 2005 IDE. These capabilities are made available through BDP for ADO.NET's schema discovery services. These services, which debut in Delphi 2005, extend the already powerful capabilities of ADO.NET. For example, to create a new table, open a connection in the Data Explorer. Next, right-click the Tables node and select New Table. You use the Table Designer in Delphi 2005 to define the structure of your new table. You use this same designer when you want to modify an existing structure. To modify a table's structure, right-click the table name under the Tables node in an open connection, and select Alter. (To delete a table, you select Drop from this same context menu.)
The following figure shows a table named PROJECT being altered in the Table Designer. You can see from the preceding figure that you can define or change the data type of a field in a table using a drop-down list of the applicable data types. Once again, this information is available through BDP for ADO.NET's schema discovery capabilities. You can use the Data Explorer to migrate tables from one supported BDP for ADO.NET database to another simply by copying and pasting. When you copy a table, you copy the table’s structure, data, and primary indexes. To copy a table, right-click the table in the Data Explorer and select Copy. Next, select the connection into which you want to paste the table, right-click and select Paste. Delphi 2005 will respond with the New Table Name dialog box, as shown in the following figure. Enter the name for the copied table and click OK. Delphi 2005 also includes components you can use in your applications to provide these same data migration capabilities to your users. Another important enhancement to the Data Explorer is its support for testing stored procedures. To test a stored procedure, right-click the name of the stored procedure that you want to test in the Data Explorer and select View Parameters. Delphi 2005 examines the stored procedure's parameters, determining each parameter's data type, direction, and name. You can then test the stored procedure by assigning a value to each input parameter and clicking the Execute button, which appears in the top-left corner of the stored procedure pane. After executing the stored procedure, Delphi 2005 displays the output parameters in a data grid beneath the stored procedure pane (given that the stored procedure has output parameters). The following figure shows the stored procedure pane with a result set, which displays the mailing label lines for customer number 1003. Reports are the tools that you use to turn data into information. 
Delphi 2005 includes two powerful reporting tools for you to use. For your .NET applications written in either Delphi or C#, Delphi 2005 includes Crystal Reports for Borland Delphi from Business Objects. For your Delphi VCL applications, both VCL Forms (Win32) and VCL for .NET, Delphi 2005 includes Rave Reports Borland Edition from Nevrona Designs. Delphi 8 for the .NET Framework was notable for its extensive support for data access mechanisms compatible with Win32 Delphi. With Delphi 2005, this support has been extended further. One of the biggest additions is the support for dbGo for ADO. dbGo for ADO is a set of components that implement the standard VCL TDataSet interface, through which you can communicate with ActiveX Data Objects using installed OLE DB providers. Delphi 2005 now includes the full complement of dbGo for ADO components in VCL for .NET. Other compatibility components that have now been added to VCL for .NET include the following: TStoredProc, TSimpleDataSet, TNestedDataSet, and TUpdateSql. Additional compatibility components for DataSnap clients have also been added in Delphi 2005. DataSnap is Borland's multitier architecture for building thin clients and their associated application servers. These new VCL for .NET components include TConnectionBroker, TSharedConnection, and TLocalConnection. Delphi 2005 now provides you with an ADO.NET connection string editor for SqlConnection, OdbcConnection, and OleDbConnection components. (Previously, unless you were using BDP for ADO.NET, constructing your connection string for an ADO.NET connection generally involved referring to the documentation for your .NET data provider.) When you need to configure one of these components in the Delphi 2005 IDE, select the ConnectionString property in the Object Inspector and click the ellipsis button to display the Connection String editor. The Connection String editor for a SqlConnection ConnectionString property is shown in the following figure.
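The string the editor produces is an ordinary ADO.NET connection string, so you can also assign it in code. A Delphi for .NET sketch for a SqlConnection (the server and database names are hypothetical):

```delphi
uses
  System.Data.SqlClient;

var
  Connection: SqlConnection;
begin
  Connection := SqlConnection.Create;
  // Equivalent to a string built with the Connection String editor:
  Connection.ConnectionString :=
    'Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI';
  Connection.Open;
  try
    // ...use the connection...
  finally
    Connection.Close;
  end;
end;
```

The same pattern applies to OdbcConnection and OleDbConnection; only the key/value syntax of the string differs by provider.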
Delphi was one of the first IDEs to give you component-based, event-driven tools for building dynamic Web sites for the World Wide Web. In addition, Delphi was also one of the first development tools to provide high-level wizards, tools, and services for creating Web Service servers and clients. In Delphi 2005, Borland continues its tradition of providing you with the best tools for building standards-based applications for the Web. In fact, Delphi 2005 gives you more options than ever before for creating and deploying Internet-based applications. The technologies included in Delphi 2005 include ASP.NET Web Applications, ASP.NET Web Service Applications, Win32 Web Service servers, Win32 Web Service clients, Web Broker Web server extensions, WebSnap Web server extensions, and both Win32 and .NET IntraWeb applications. No other environment even comes close to this much Internet development support. Borland updated and improved many of the tools that you use to build Web-based applications. For example, the what-you-see-is-what-you-get (wysiwyg) designer and the drag-and-drop capabilities of the Web Forms designer have been updated. In addition, new features and components have been added and existing ones enhanced. The following sections discuss some of the new and enhanced Web and Internet-related features that you will find in Delphi 2005. You can now deploy your ASP.NET Web applications, ASP.NET Web Service Applications, and IntraWeb (both Win32 and .NET) applications directly from Delphi 2005's Project Manager. To do this, right-click the Deployment node in the Project Manager and select New Deployment from the displayed context menu. You can deploy your application's files using either XCOPY or FTP (file transfer protocol). Use XCOPY when the directory to which you want to copy your files is visible from your local machine. For example, you can use XCOPY if your Web server is on the same local area network as your development machine.
FTP is useful when the location where you want to deploy your files is available somewhere on the Internet, but is not on the local network. In order to deploy using FTP, the server to which you want to deploy your files must be running an FTP server. Once you select the directory or FTP server to deploy your files to, select which files you want to deploy, right-click, and then select Copy Selected Files to Destination or Copy All New and Modified Files to Destination. Once you have created a deployment, that deployment appears as a new node beneath the Deployment node in the Project Manager. You can re-deploy some or all of your files using the deployment configuration that you created earlier by selecting the associated node. If you want, you can have multiple deployment configurations for any of your Web-related projects. You use HTML (hypertext markup language), either in code or wysiwyg, to describe the Web pages that you create in your ASP.NET applications. Delphi 2005 provides you with a number of options for creating and modifying the HTML that defines your various ASP.NET pages. For example, the following figure shows a login page being designed using the ASP.NET Web application designer. When you drag HTML Controls, Web Controls, and DB Web Controls from the Tool Palette onto the Web Forms designer, HTML is inserted into the associated .aspx file in your project. You can edit this .aspx file directly, modifying what the Web Form designer generated, or you can insert your own custom HTML. The following figure shows a portion of the editable .aspx file that was created as the preceding login page was being designed visually. Delphi 2005's Web Form designer now permits template editing within the form designer. Certain Web controls, such as a DataList, support templates for the formatting of the header, footer, and displayed items. 
To edit a template in Delphi 2005's Web Form designer, right-click a template-supporting control and select the template that you want to edit. For example, the following figure shows the context menu that is displayed when you select a DataList. After selecting which type of template you want to edit, the designer re-draws the control, permitting you to enter the template text directly. For example, the following figure shows a DataList with its Item templates available for editing. When you are through editing your control's templates, right-click the control again and select End Template Editing. While Delphi has supported code completion and syntax highlighting of HTML in past versions, Delphi 2005 has extended this support. Code completion and syntax highlighting are now available for cascading style sheets (CSS) and XHTML. The Tag editor has also been improved in Delphi 2005. The Tag editor is the small window that appears below the Web Form Designer, and it provides you with a context-sensitive, editable view of the HTML that underlies your Web page. While earlier versions of the Tag editor permitted you to edit only the inner HTML, you can now edit the outer HTML as well. The following figure shows attributes of a <td> tag being edited in the Tag editor. Note that both Code Insight and syntax highlighting are visible in this figure. ASP.NET Web applications, more so than other types of applications, often rely on external files to operate. For example, while the HTML in your .aspx file may include an <IMG> (image) tag, the image itself is typically a .jpg or .gif resource whose location is referred to in the src attribute of the <IMG> element. In addition to supporting application deployment, as discussed earlier in this section, the Project Manager has also been updated to better manage the external resources used in your ASP.NET applications. For example, you can right-click an ASP.NET project in the Project Manager and select New | Folder.
The newly added folder will be created as a subdirectory of the ASP.NET application folder. Once you've added a new folder, you can right-click it and select Add. This brings up a browser dialog box that you can use to add support files, such as images, cascading style sheets, JavaScript files, and so forth, to the folder. The resources that you add to this folder can then be included in your configured deployments. With these enhancements to the Project Manager for ASP.NET applications, you no longer need to leave Delphi 2005 in order to manage your application's files. DB Web controls are special data-aware Web controls that you can use in your ASP.NET applications. Like the Web controls that ship with the .NET framework, you add DB Web controls to your Web Forms, and they participate in the generation of the content that is provided to the requesting browser at runtime. Compared with the standard Web controls of the FCL, DB Web controls offer better support for ASP.NET applications, making it even easier for you to build great Web sites faster. For starters, DB Web controls are data-aware, and in many cases, provide you with automated read/write access to the data to which they are bound. As a result, they greatly simplify the process of creating sophisticated Web-based applications. With this release of Delphi 2005, Borland has added a number of new and enhanced DB Web controls. The following are the DB Web controls introduced in Delphi 2005: DBWebAggregateControl, DBWebNavigationExtender, DBWebSound, and DBWebVideo. The DBWebAggregateControl is similar to a DBWebTextBox, but automatically calculates and displays an aggregate statistic, such as Sum, Min, and Count. The DBWebSound and DBWebVideo controls allow you to easily add sound and video to your ASP.NET applications. The sound or video resource can either be contained in a blob field of a database, or the database field can contain a string that specifies the URL of the external sound or video resource.
Finally, the DBWebNavigationExtender permits you to configure standard Web control Buttons to perform navigation operations against BDP for ADO.NET data sources without additional code. Simply place a DBWebNavigationExtender component on a Web Form, and any Buttons that you place will display three additional properties: DBDataSource, TableName, and DataSourceAction. The DataSourceAction property indicates what type of navigation operation the button will perform on the table accessed through the DBDataSource. There are two updated DB Web controls in Delphi 2005. These are the DBWebImage and DBWebDataSource. The DBWebImage has been updated to include a feature of the newly added DBWebSound and DBWebVideo controls described earlier in this section. The DBWebImage can be linked to either a blob field in an underlying database that contains the image to display, or a string field containing the URL of the image resource. Previously, the DBWebImage control could only refer to a blob field containing the image to display. The remaining updates to DB Web can be found in the DBWebDataSource. DBWebDataSource can now be configured to support auto-updates, as well as cascading updates and cascading deletes for master-detail relationships. DBWebDataSource now also supports XML files for the storage of the data used by DB Web controls. You can use this feature in a number of interesting ways. For example, an XML file can be used instead of an underlying database during development, providing a convenient substitute for a database connection. Alternatively, an XML file can be used as a local, read-only data source for managing static information, such as images or other resources. Or, if user authentication is being used, a DBWebDataSource can be configured to generate a unique XML file name for each user. That XML file can be used to persist data on a per-user basis between sessions.
IntraWeb is a sophisticated, RAD component-based Web development tool that automatically maintains server-side state between Web page requests. As a result, IntraWeb has advantages over ASP.NET for creating Web sites that require the type of state persistence typically associated with traditional client applications. There are a number of features that make IntraWeb an attractive alternative to ASP.NET Web site development. As mentioned previously, IntraWeb supports several convenient levels of state maintenance between Web page requests. At the application level, you can use the TIWServerController to share objects between sessions. At the session level, each IntraWeb session can have its own persistent data module that remains in memory for the duration of the session. This data module can be used to store objects and data that are used by two or more Web pages for a particular end user. Finally, unlike ASP.NET, where ASP.NET Web forms are created and destroyed for each page request, IntraWeb pages persist on the server between requests, until the page is no longer needed. The second aspect of IntraWeb that makes it attractive is its "Delphi" way of doing things. You design your user interface using Delphi components from the Tool Palette, just as you would design any VCL or VCL for .NET application. The difference is that these components participate in the IntraWeb form rendering process to emit HTML, WAP (wireless application protocol), or HTML 3.2. Finally, you have a variety of choices for deploying IntraWeb applications. An IntraWeb application can be deployed as an ISAPI (Internet Server Application Programming Interface) Web server extension, or it can run as a self-contained HTTP server. In other words, if you are already running IIS (Internet Information Server), you can use your IntraWeb application with it.
On the other hand, if you do not already have a Web server, you can design your IntraWeb application to be a Web server, providing all the features necessary to serve Web pages to any Web browser or Web-enabled device using the HTTP protocol. The following figure shows an IntraWeb Web page being designed in Delphi 2005. Unlike ASP.NET applications, there is no .aspx file. Instead, the IntraWeb components used to build the page render the appropriate HTML at runtime in response to an HTTP page request. Delphi 2005 includes both Win32 Delphi and Delphi for .NET versions of IntraWeb. In today's world of software development, most developers are part of a larger process of application definition, design, testing, deployment, and management. Consistent with Borland's commitment to providing you with the tools you need to ensure the success of your projects, Delphi 2005 provides tight integration with the essential support tools that you need. Depending on the version of Delphi 2005 that you have installed, two of these tools deserve particular attention: StarTeam and unit testing. These tools are described in the following sections. Delphi 2005 integrates the StarTeam client directly into the IDE, providing you with StarTeam menus on the main menu and in the Project Manager's context menu. As you can see, the StarTeam menus permit you to place a project into a StarTeam repository, check in and check out files, locate managed assets, launch the integrated StarTeam client, and manage your personal StarTeam options. The following figure shows the StarTeam client active in the Delphi IDE. You launch the StarTeam client by selecting StarTeam | View Client from Delphi 2005's main menu or StarTeam | View Client from the Project Manager's context menu. With the StarTeam client active within Delphi 2005, you can work with every aspect of your managed resources. For example, you can track defects, view and contribute to threaded discussions, submit change requests, and more. The following figure shows a change request that has been logged into the StarTeam server for this project.
When you are working with a StarTeam managed project, the Delphi 2005 History Manager makes use of the StarTeam repository. For example, the following figure shows the Diff pane of the History Manager. Here the Diff pane displays source code versions based on changes that have been checked into the StarTeam repository. With the StarTeam-enabled History Manager, even changes to source code file names are tracked, as shown in the following figure. Unit testing is the process of writing code to test the methods, functions, and procedures of your software. While unit testing is a cornerstone of the approach to software development called extreme programming, many developers find it useful to employ some form of unit testing as part of their everyday software development. Delphi 2005 includes unit testing support for all three of its personalities: Delphi Win32, Delphi for the .NET Framework, and C#. You establish unit testing by first creating a test project. The Test Project Wizard asks you to select which of Delphi 2005's personalities was used to create the code you want to test. After creating your test project, you add one or more test cases to your test project. Each test case requires you to select the source file (.pas or .cs) that contains the methods or routines you want to test. The Test Case Wizard then generates a simple framework for testing that file. This framework includes a Setup and a Teardown procedure, as well as a method stub for each of the subroutines in your selected source file. You modify the code generated by the Test Case Wizard to implement the Setup and Teardown procedures, as well as the individual tests. For example, you will typically call the constructor of the class in which the methods you want to test are implemented from the Setup procedure, as well as define any variables or objects that are needed for the parameters of your test methods.
Likewise, you will free the class created in the constructor, and release any allocated resources, from the Teardown procedure. The actual tests are performed within the stubbed-out methods generated by the Test Case Wizard. You implement each of these methods to invoke the method they are testing, validating either the data resulting from the method execution, or the class of exception thrown when your method detects a problem. Borland's Enterprise Core Objects, or ECO (pronounced ee'ko), is Borland's new Rapid Enterprise Development System for .NET. ECO is an object-oriented framework from Borland for the .NET framework that uses UML (Unified Modeling Language) diagrams to drive application development. This approach to building applications is often referred to as model driven architecture, or MDA. One of the more notable features of ECO concerns how the UML models are used. In many development environments, UML models simply provide you with a road map, defining the classes that you need to implement in your application. In other words, UML diagrams are used as guidelines for software development. With ECO, UML models are not just used to guide development; they are tightly integrated into the development process. Models are used to generate classes and support code that represents the core of your application logic. When changes need to be made to the application, you return to the model, modifying its attributes, associations, and constraints, after which your application's code is updated. In this respect, ECO is really "Rapid MDA." ECO dramatically reduces the amount of code that you need to write manually, reducing your time to deployment and improving the overall maintainability of your applications. More importantly, the applications you build with ECO are based on the enterprise-aware architecture of your UML designs. The following figure shows the UML class diagram for a simple ECO application.
As you can see, there are three classes defined here: Building, ResidentialBuilding, and Person. This diagram is used to generate the business objects that this application will work with at runtime. The Structure pane shown in the following figure shows you the classes and associated interfaces that ECO generated from this model. Because the model is the central focus of your development efforts, there is an inherent synchronization between your UML model and the application created with it. In other environments where UML simply guides development, the model often quickly becomes out-of-date. In ECO, the UML model defines the core business objects that are the focus of your development efforts. For example, if you build an ECO application to manage inventory, the objects that you work with will represent the entities of your application, such as items, employees, orders, storage facilities, and the like. In other words, your code operates in the domain of the business objects that you are using. Compare this approach to the type of development that you typically see in GUI applications, where code operates in the realm of the user interface, with items such as buttons, text boxes, list boxes, and menus. In most ECO applications, the business objects defined by your UML models map to an underlying ADO.NET relational database structure. This database of your choosing is used to persist and restore your business objects, as needed. You can even map your ECO objects to XML files, though most developers prefer the security and transaction support provided by a remote database server. In traditional database development, you spend considerable time designing your database and writing the code needed to store and retrieve your data. With ECO, the underlying database schema can be created for you, based on your UML models. Alternatively, you can map an existing database to your UML models, permitting you to use the power of ECO with your current databases. 
The ease with which you work with objects using ECO is particularly noteworthy. Object persistence is provided in ECO through an ECO space, a factory-like container that provides both an object cache as well as a transparent interface to the underlying data store. The ECO space creates your objects as you need them, and persists changes that you make, if persistence is required. For example, if you ask for an object that represents an existing employee, the ECO space creates an employee object and populates its attributes with data from an underlying database. Any changes made to the employee object can likewise be saved back to the database. This capability is provided by an ECO persistence mapper, which performs the required data-related tasks for you. In addition to UML, ECO employs OCL, the Object Constraint Language, an Object Management Group (OMG) standard for defining expressions for UML models. You use OCL to create declarative rules that calculate or control the values of attributes of your objects. As is the case with UML, the OCL you employ in your ECO applications reduces the amount of code that you have to write and maintain. Delphi 2005 ships with ECO II, a major update to Enterprise Core Objects. ECO II improves and extends support for building enterprise-level model driven applications in the .NET framework. The updates found in ECO II are described in the following sections. ECO II includes two important enhancements to ECO spaces that improve how and where your applications can be used, as well as their scalability. The first of these is that a single process can now include two or more ECO spaces. This capability is particularly valuable for ASP.NET applications where ECO spaces can be pooled and reused for increased application performance. The second improvement permits multiple ECO spaces to be synchronized, a capability supplied by the ECO persistence mapper components.
Synchronized ECO spaces permit changes in one ECO space to be more easily resolved with changes that appear in another ECO space. The ECO persistence mapper classes are thread-safe and remotable. In fact, two or more ECO spaces on separate computers can use .NET remoting to share a common persistence mapper, permitting those ECO spaces to be synchronized. This capability permits ECO applications to be easily scaled up to a multitier architecture as your application's needs change. ECO spaces provide more support for object persistence than ever. Added features include undo/redo, versioning, and transactions. ECO II provides extensive support for building Web-based applications using rapid MDA. Delphi 2005 includes wizards for creating ECO ASP.NET Web Form applications and ECO ASP.NET Web Service applications for both C# and Delphi for .NET. Delphi 2005 also includes the ECODataSource component, which you can use to bind your DB Web components to ECO-based business objects. This data source implements DbDataSource, which means that you can assign it to the DataSource property of any DB Web control. Two new enhancements to ECO spaces are particularly valuable to ASP.NET developers. The first is that an ECO space can be maintained on a per session basis, providing automatic state maintenance between page requests. The second provides for a pool of ECO spaces. These features can be used individually or in conjunction with one another to enhance the features and performance of your ECO-based ASP.NET applications. Each ASP.NET application contains an EcoSpaceProvider, which controls the caching of ECO spaces created within the application. You use this provider to control whether an ECO space is maintained between page requests for a particular session or not. Options include never maintaining state, always maintaining state, or only maintaining state when unresolved changes appear in the ECO space.
While maintaining state requires more server resources, it simplifies how you work with your objects in an ASP.NET application. ECO space pooling permits ECO spaces to be easily reused in ASP.NET applications. For those ASP.NET applications that do not persist an ECO space between page requests, each time an ASP.NET page is destroyed, its ECO space is returned to the ECO space pool. Performance is enhanced since a new ECO space does not need to be created for each page request. For those ASP.NET applications that maintain an ECO space for each session, the ECO space is returned to the ECO space pool when the session terminates. ECO II can examine the schema of your existing database and use this information to generate your initial UML diagrams. Alternatively, you can manually map your UML diagrams to an existing database. Previously, you had to create your UML diagrams first, and generate your database from these diagrams. With ECO II, you can now bring the power of ECO to your existing databases. The following figure shows a UML diagram that ECO created from the sample SQL Server database Northwind. In addition to the various classes and their attributes, ECO infers the relationships between the classes based on field names and indexes. All versions of Delphi 2005 include licenses for other valuable Borland products that support software development and application lifecycle management, as well as products from Borland partner companies. Which products are included with Delphi 2005 depends on the version that you are using. All of the products listed in this section are included in Delphi 2005 Architect. Delphi 2005 Enterprise and Professional include some, but not all, of these products. The following sections provide you with a short description of the associated integrated or partner tool. For more information, please use the URL provided to learn more about these integrated and included tools.
Borland Caliber RM is a collaborative requirements management system designed to facilitate collaboration, impact analysis, and the communication of changing requirements. It is a server-based system that enables distributed teams to communicate better, improving the quality of products and reducing the risk of project failure. For more information on Caliber RM, please visit: InterBase 7.5 Developer Edition permits you to develop and test your applications running against InterBase, an enterprise-quality remote database management system (RDBMS). Borland's InterBase is a small-footprint database server that minimizes maintenance while providing support for mission-critical applications. For more information on InterBase 7.5, please visit: Janeva provides you with a seamless and cost-effective solution for integrating your J2EE and CORBA back-end systems with your client and Web applications. For more information about Janeva, please visit: Identify and remove performance bottlenecks in your .NET managed code through CPU and memory usage analysis with Borland Optimizeit Profiler for the Microsoft .NET Framework. For more information about Borland Optimizeit for the Microsoft .NET Framework, please visit: StarTeam provides you with a rich and automated system for managing the assets and application lifecycle tasks from within a single repository. For more information about StarTeam 6.0 Standard Edition, please visit: Component One Studio Enterprise for Borland Delphi 2005 is a special edition of Studio Enterprise that includes a development license for eleven .NET (Windows Forms) and six ASP.NET (Web Forms) controls. For more information about Component One Enterprise Studio, please visit: Crystal Reports Borland Edition is a .NET version of the world's leading reporting tool for use in your C# and Delphi for .NET applications. 
For more information about Crystal Reports Borland Edition, please visit: glyFX Borland Special Edition is a collection of 95 high-quality images for use in toolbars, buttons, or any control that supports bitmap files. For more information on glyFX Borland Special Edition, please visit: IBM DB2 Universal Developers Edition provides you a DB2 database and associated tools for designing, building, and prototyping applications for deployment on any DB2 client or server platform. InstallShield Express for Borland Delphi provides you with an easy-to-use graphical interface for building custom installers for your Windows software. For more information on InstallShield Express for Borland Delphi, please visit: Internet Direct (Indy) is an open-source Internet component suite comprised of popular Internet protocols written in Delphi and based on blocking sockets. For more information about Internet Direct, please visit: IntraWeb is a complete RAD solution for building Web applications, dynamic Web sites that go well beyond the capabilities of regular ASP.NET Web applications and ISAPI Web server extensions. For more information on IntraWeb, please visit: Microsoft SQL Server 2000 Desktop Engine provides your small workgroup and low-volume Web applications with data storage capabilities that easily scale to Microsoft SQL Server 2000 as your needs grow. Microsoft SQL Server 2000 Developer Edition provides you with a developer license for designing, building, and prototyping applications that you can deploy with Microsoft SQL Server 2000. Rave Reports Borland Edition is a powerful and scalable suite of VCL and VCL for .NET reporting components for creating sophisticated Delphi reports. For more information about Rave Reports Borland Edition, please visit: Wise Owl Demeanor for .NET Borland Edition is a .NET obfuscator, a tool that helps prevent others from reverse-engineering your managed code applications and assemblies. 
For more information about Wise Owl Demeanor for .NET Borland Edition, please visit: Please also visit the Borland Developer Network, where you will find timely articles as well as links to a wide variety of resources that support your software development needs. The Borland Developer Network is located at. You should also consider visiting Code Central, Borland's online repository for code samples, demonstration applications, and other resources for developers using Borland products. Code Central is located at. More than twenty years in the making, Delphi 2005 achieves what no other development environment can, providing you with state-of-the-art tools that preserve your investment in today's software as you migrate towards tomorrow's new standards. With integrated tools that support every aspect of the application lifecycle, Delphi 2005 really is the ultimate Windows development solution. Cary Jensen is President of Jensen Data Systems, Inc., a software development, training, and consulting company (). He is an award-winning, best-selling author of nineteen books, a featured columnist on the Borland Developer Network (), and a popular speaker at conferences, workshops, and training seminars around the world. Cary has a Ph.D. in Human Factors Psychology, specializing in human-computer interaction, from Rice University in Houston, Texas. You can contact Cary at cjensen@jensendatasystems.com. Made in Borland® Copyright © 2004 Jensen Data Systems, Inc. Published on: 10/6/2005 12:00:00 AM
http://edn.embarcadero.com/print/33289
), how to navigate their Mac via the terminal, and they've read this post on installing mongodb as well as pymongo. The chances are good that they will be spending much more time querying databases than building them. That doesn't mean learning the process is a waste of time. At some point they will want to collect data that is not already collected and having an option other than a spreadsheet to store, update, and query that data will be a huge advantage. They can think of the data as information about customers or users, metadata about tweets, mobile phone models, or newspaper articles, but what we'll actually demonstrate with is infobox data for different species of spiders. At a high level here is what we are going to do.

- Grab a csv file from DBpedia detailing data on spiders and read that data into a python dictionary (we'll worry about cleaning the data in a later post)
- Import the information into MongoDB
- Update our database with a new data field and query the database to see what we got

My recommended approach on how to do this and learn what's happening is to do it in steps in an ipython console. I purposely kept the blocks of code outside functions to make this tutorial more fluid.

Get It, Clean It, & Prepare a List of Dictionaries

First things first, go to DBpedia and download the Species – Arachnid csv file and save as 'spiders.csv'. We'll want to get the information on each spider into a python dictionary and then we will append each one of these dictionaries to a list. This will allow us to easily import a list of dictionaries into mongo. To keep things simple we are only going to take a few fields — rdf-schema#label, synonym, rdf-schema#comment, family_label, and phylum_label.
Here is an example of what that process might look like…

```python
import csv

DATAFILE = 'spiders.csv'
FIELDS = ['rdf-schema#label', 'synonym', 'phylum_label',
          'family_label', 'rdf-schema#comment']

data = []
with open(DATAFILE, 'r') as f:
    reader = csv.DictReader(f)
    for line in reader:
        temp = {}
        for field, val in line.items():
            # Only add required fields
            if field in FIELDS:
                temp[field] = val
        # Append spider to our data list
        data.append(temp)
```

Great. Now we have a list of 3,967 spider dictionaries (len(data)). To keep this simple (again) let's write our list of dictionaries to a .json file. If you were wondering before why we used DictReader, it's because mongodb's foundation is the json object — which is very similar to a python dictionary. By using DictReader, we've made the next step extremely easy as you'll see.

```python
import json

with open('spiders.json', 'w') as outfile:
    json.dump(data, outfile)
```

Into MongoDB

Fire up a mongod instance. If you forgot what that means, check out this post. We are going to take our spiders.json file… which is essentially a list of dictionaries… and use the mongo insert method to create 3,967 new entries in a collection called arachnid in a database called examples. As a quick reminder, in a mongo shell you can see your databases, collections, and look at a sample record using these commands… Even if you do not yet have an examples database or an arachnid collection, our python script will create one. So let's have a look…

```python
import json
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.examples

with open('spiders.json') as f:
    data = json.loads(f.read())

db.arachnid.insert(data)
```

And voilà, we now have a database from which we can modify and run queries on. Let's do that now.

Update and Search

So far so good. We've got some data and processed it into mongodb.
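The field-filtering step used when reading the CSV can also be factored into a small, easily tested helper. This is our own sketch, not code from the original post — the function name is hypothetical:

```python
FIELDS = ['rdf-schema#label', 'synonym', 'phylum_label',
          'family_label', 'rdf-schema#comment']

def filter_fields(row, fields=FIELDS):
    """Return a copy of a CSV row keeping only the wanted columns."""
    return {k: v for k, v in row.items() if k in fields}
```

Factoring the filter out this way makes it trivial to unit test the cleaning logic without touching the CSV file or the database.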
Now the result of typing this in ipython…

```python
db.arachnid.find_one()
```

Should look something like this…

```python
{'family_label': 'Eriophyidae',
 'phylum_label': '{Chelicerata|Arthropod}',
 'rdf-schema#comment': '…',
 'rdf-schema#label': 'Abacarus',
 'synonym': 'NULL'}
```

As a business analyst working in a dynamic environment with dynamic data we'll need to assume that we'll have to make changes to our records at some point in the future. As an example, let's assume we want to add the spider's genus to each record (this might be similar to adding a new type of address for customers (mailing address, email address, new kind of address we don't yet know about???)). In order to do this we create a new list of dictionaries with our new fields of interest. This will be similar to above… however we only need to grab the genus value along with the label field (we need the label field so we can match it to the existing record in the database).

```python
NEW_FIELDS = ['rdf-schema#label', 'genus_label']

data = []
with open(DATAFILE, 'r') as f:
    reader = csv.DictReader(f)
    for line in reader:
        temp = {}
        for field, val in line.items():
            if field in NEW_FIELDS:
                temp[field] = val
        data.append(temp)
```

Next, we will loop through this list… match the label to an existing record in our database… and add the genus key and value to that record.

```python
for el in data:
    # Loop through the spiders in the new fields data list,
    # match the label with existing spiders in the database
    spider = db.arachnid.find_one({'rdf-schema#label': el['rdf-schema#label']})
    # Then add the new genus field to that spider's data
    spider['genus_label'] = el['genus_label']
    db.arachnid.save(spider)
```

Good stuff. To be sure everything worked, use the same find_one method as we did above. Now, we should see a record with the genus key.

```python
{u'_id': ObjectId('55a3099feebc5c09f32edf31'),
 u'family_label': u'Eriophyidae',
 u'genus_label': u'NULL',
 u'phylum_label': u'{Chelicerata|Arthropod}',
 u'rdf-schema#comment': u'…',
 u'rdf-schema#label': u'Abacarus',
 u'synonym': u'NULL'}
```

Summary

The above is one method for collecting, storing, and updating records.
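As an aside, the read-modify-save loop works, but `save` is deprecated in newer pymongo releases; the same update can be expressed server-side with `update_one` and the `$set` operator. Here is a sketch of a pure helper (our own name, not from the post) that builds the filter/update pair, which you would then apply with `db.arachnid.update_one(*genus_update(el))`:

```python
def genus_update(el):
    """Build the (filter, update) pair for adding one genus value.

    Using $set lets MongoDB modify just the one field in place,
    avoiding the round trip of fetching and re-saving the document.
    """
    flt = {'rdf-schema#label': el['rdf-schema#label']}
    upd = {'$set': {'genus_label': el['genus_label']}}
    return flt, upd
```

Because the helper is pure, the update logic can be tested without a running mongod.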
If students feel good with python, mongo's python driver makes this process enjoyable. So what's next? As we search through our database with mongo's find and aggregate methods, we would notice that the data is a bit sloppy (synonyms are stored and separated like this { val | val } which makes iterating over them difficult, many NULL values, key names could be cleaner (use 'name' instead of 'rdf-schema#label'), etc.). Making decisions on how to clean this data and programmatically implementing these solutions is a good next step for exploring MongoDB and pymongo. As an alternative option for a next step, we might look at the DBpedia ontology, select another dataset, and repeat the above procedure with that other data.
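The synonym cleanup mentioned above can be sketched as a small parser. `parse_synonyms` is our own name, and the exact formatting of DBpedia's `{ val | val }` values may vary, so treat this as a starting point rather than a complete solution:

```python
def parse_synonyms(raw):
    """Split a '{ val | val }' synonym string into a list of names.

    'NULL' and empty values become an empty list, which is easier
    to iterate over than the raw brace-and-pipe string.
    """
    if not raw or raw == 'NULL':
        return []
    inner = raw.strip().strip('{}')
    return [part.strip() for part in inner.split('|') if part.strip()]
```

Storing the parsed list back into each document (e.g. under a cleaner key such as `synonyms`) would then make MongoDB array queries like `{'synonyms': 'Arthropod'}` possible.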
http://www.frank-corrigan.com/2015/07/13/mongodb-pymongo-tutorial/
Created 09-07-2016 03:12 PM Hi Everybody, I've got an existing HBase table with a rowkey consisting of two numbers and a column family (called d). Within this column family there are several columns whose names are dates (as a running number) followed by a character, e.g. 20160808[c|s|m]. So how can I map the cf via a CREATE VIEW statement or something else to read the data? I need something like a wildcard mapping for the column family. Thanks in advance. Can you explain your use-case with some data? Try to formulate it in the form of any DDL and SELECTs you are looking for. Created 09-08-2016 01:04 PM Hi, I've got an HBase table created like this: create 'hbase_table',{NAME => 'd', BLOOMFILTER => 'NONE', VERSIONS = .......}. I've got only one column family. The Java client which fills HBase generates a rowkey consisting of two numbers and stores several columns within the column family 'd'. The names of these columns are made of the actual date followed by a character, like this: 20151119c. The name is stored binary. So if I run a scan in the hbase shell I will get the following: "NUMBER1NUMBER2" column=d:\x07\xE0\x07\x0Ds, timestamp=1472144870610, value=\x00\x00\x00\x00\x00\x04\xC7\xD3 Now I want to map the existing HBase table with Phoenix with a view like: create view "namespace:hbase_table" (A UNSIGNED_INT NOT NULL, B UNSIGNED_INT NOT NULL, "d".val VARCHAR(40) CONSTRAINT pk_hbase_table PRIMARY KEY (A, B)); --> "val" stands for the running number with the character (20151119c). So I've heard that Phoenix has to match the correct column name. So is it possible to match them like a wildcard?
https://community.cloudera.com/t5/Support-Questions/How-to-create-Phoenix-View-on-existing-HBase-table-with/m-p/141114/highlight/true
A. Using the Microsoft Management Console (MMC) DFS Management snap-in, you can create new replica sets outside the DFS namespace. Typically, when you manage a DFS namespace and add multiple targets for a folder, it's easy to then configure the multiple targets for replication. However, you can also configure replication outside a DFS namespace.

1. Start the MMC DFS Management snap-in by selecting Start, Administrative Tools, DFS Management.
2. Select the Replication node, and click the New Replication Group link in the Actions pane.
3. Select the type of replication. You can use a "Multipurpose replication group" in cases in which data is changed at all members of the replica set, users share information, and so on. Or you can choose the "Replication group for data collection," typically used only for branch server content replication to a hub server for centralized backup.
4. Enter a name, description, and domain for the replica group, and click Next.
5. The next screen prompts for the members of the replica set. As you add each member, the system performs a check for its suitability as a member of the set. For example, does it have DFSR support?
6. Select the type of replication topology. If more than three servers are part of the replica set, you can choose a "hub-and-spoke topology." Otherwise, the options are "full mesh," in which every node replicates to every other node (which really scales up to only 10 nodes), or no replication at all, in which case you need to create a custom topology.
7. Select the amount of bandwidth to use for replication. By default, you have the full bandwidth available for use. You can also specify a schedule that allows an amount of bandwidth to be specified for each hour of each day of the week. Using this feature, you can limit bandwidth during the day and catch up at night (making sure that users are aware that there might be differences in replica sets at locations).
8. Select the authoritative server from the list of servers in the replica set. During the initial replication, the selected primary server will govern the content of the replicas. If data exists on a non-primary server in the replication folder that isn't in the primary server's replication folder, it will be deleted.
9. Select the folder to replicate on the primary server. You can select multiple folders, if necessary.
10. On the next screen, select the folder to use for the other members of the replica set by selecting the member from the dialog box and editing the folder property.
11. A summary of the actions and configuration appears. Click Create to begin the DFSR set creation.
12. Once complete, click Close. A notification message will appear, informing you that replication won't begin until all members have contacted AD DCs for the configuration information. This process requires AD replication to have occurred and the replica to poll the DC.
http://www.itprotoday.com/management-mobility/how-do-i-create-new-dfsr-replica-set-09-nov-2007
MySQL Connector does not work with SQLAlchemy and Python 2.6 Bug Description Connecting to a MySQL database using SQLAlchemy 0.6.6 / MySQL Connector 0.3.2 / Python 2.6 raises the exception 'MySQLConnection' object has no attribute 'get_characterset_info'. In the python2 branch the method 'get_characterset_info' was replaced by the 'charset' property. With a quick and dirty fix by defining the method on the MySQLConnection object

```python
""" Fix to work with SQLAlchemy 0.6.6 """
def get_characterset_info(self):
    return self.get_charset()
```

the connector works for me now. The above is a nice fix for MySQL Connector/Python, but eventually SQLAlchemy has to use MySQLConnection's charset property. Those things happen in development releases and there is not much time to work on things.. Hi Geert - Did you make a backwards incompat change to the DBAPI? Feel free to send SQLA a patch that supports the driver in the way you see fit... or perhaps leaving the previous method available for backwards compatibility with the code we got from you originally (the dialect was yours originally, is that correct? I can't find the original ticket...). Yes, I should learn to keep stuff backward compatible. It's good to use the charset property, but we will leave the get_characterset_info() method in for backward compatibility. I think I should give the SQLAlchemy dialect a checkup. I've got a patch somewhere. But I didn't write the original dialect, as far as I can remember :) Cheers, Geert Confirmed same problem but with Python 2.7, found the same fix myself before seeing this ticket.
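The workaround described is a standard monkey-patching pattern: attach a backward-compatible alias for a removed method onto the class at runtime. This sketch uses a stand-in class (not the real MySQL connector) purely to illustrate the mechanism:

```python
class Connection:
    """Stand-in for a driver connection class (illustration only)."""
    def get_charset(self):
        return 'utf8'

# Attach the old method name so code written against the
# previous API keeps working without modification.
def get_characterset_info(self):
    return self.get_charset()

Connection.get_characterset_info = get_characterset_info
```

Because Python looks up methods on the class at call time, every instance — including ones created before the patch — gains the alias. The longer-term fix, as the thread notes, is for the caller to move to the new API rather than rely on the shim.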
https://bugs.launchpad.net/myconnpy/+bug/712037
Hi, I've downloaded my regional map from and now I only want to calculate the distance between two points (coordinates) using C#. I've read about BruTile and OSM, but I've not found any piece of code which implements my needs. Map rendering is not necessary for me, duration either. I mean, input: two coordinates, output: XX KM. That's all. Could anybody please help me? Thanks in advance. asked 29 Jun '15, 10:01 doShare 11●1●1●2 accept rate: 0% edited 29 Jun '15, 10:15 scai ♦ 32.2k●20●296●445 Hi again, I've found a solution consisting in importing the map into MySQL: I also have installed MySQL and Perl. The instructions are not clear for me... I don't know how to import my .osm/.osm.bz2 maps into MySQL and, moreover, how to test distances/routes between two coordinates... I have installed all the packages into Perl Package Manager, but I am not sure that my installation was fine. Thanks in advance That's a completely different question. And why would you want to use MySQL instead of PostgreSQL? There are dozens of instructions for how to import OSM data into a PostgreSQL database. Don't choose a different database unless you have very good reasons for this decision. Thanks, Maybe the question sounds different, but my goal is the same: obtain real distance (route) between two points offline using OSM. Firstly, I've tried with .NET C# but, because of the map size (around 400MB), the solution doesn't work. So, I am trying to obtain results by migrating the map into a database. I've tried with MySQL because my .NET application runs over MySQL but, if PostgreSQL is easier, I will look for information / how-to. Any suggestion? If OsmSharp can't handle large maps then you could think about installing a separate routing service, such as GraphHopper or OSRM. I thought it was PostGIS which is Postgres + a GIS extension. The latter provides functionality to work with points, lines and areas. Thanks for your help. I've implemented my .NET solution with GraphHopper.
Now, I'm looking for an option to avoid running Terminal Cygwin64 permanently (It starts a local map website) Did you already read the routing wiki page? It even mentions some tools for C#, for example OsmSharp. answered 29 Jun '15, 10:14 scai ♦ 32.2k●20●296●445 accept rate: 23% Now, i am testing but, because my map size is about 500MB it fails (memory exception). I am also trying "" but I can't do anything with it... I was able to get a result in meters from the code below... however, it took 3.5 minutes to calculate the distance between two points that are 12357 meters apart. I used the osm.pbf for Oklahoma that is 116mb. Install VS 2012, Install NuGet Extension, Install OSMSharp via NuGet. Start a new windows app for C#: using OsmSharp.Osm.PBF.Streams; using OsmSharp.Routing.Osm.Interpreter; using OsmSharp.Routing.TSP.Genetic; using OsmSharp.Math.Geo; using OsmSharp.Routing; using System; using System.IO; button1_Click(object sender, EventArgs e) { var frCoord = new GeoCoordinate(35.5275684, -97.5691736); var toCoord = new GeoCoordinate(35.5575105, -97.6740397); var f = new FileInfo("C:\\OSM\\oklahoma-latest\\oklahoma-latest.osm.pbf").OpenRead(); var p = new PBFOsmStreamSource(f); var ri = new OsmRoutingInterpreter(); var router = Router.CreateLiveFrom(p, ri); var resolved1 = router.Resolve(Vehicle.Car, frCoord); var resolved2 = router.Resolve(Vehicle.Car, toCoord); var route = router.Calculate(Vehicle.Car, resolved1, resolved2); label1.Text= Convert.ToString(route.TotalDistance); } } } answered 20 Apr '16, 19:22 kttii 16●1 accept rate: 0% edited 20 Apr '16, 19:36 Once you sign in you will be able to subscribe for any updates here Answers Answers and Comments Markdown Basics learn more about Markdown This is the support site for OpenStreetMap. 
https://help.openstreetmap.org/questions/43835/c-how-to-calculate-route-between-two-points-offline
Building your First Mobile Game using XNA 4.0

In this article by Brecht Kets and Thomas Goussaert, the authors of Building your First Mobile Game using XNA 4.0, we create a basic framework for our game and add content to it.

Adding content

Create a new project and call it Chapter2Demo. XNA Game Studio created a class called Game1. Rename it to MainGame so it has a proper name.

When we take a look at our solution, we can see two projects: a game project called Chapter2Demo that contains all our code, and a content project called Chapter2DemoContent. This content project will hold all our assets and compile them to an intermediate file format (.xnb). This is often done in game development to make sure our games start faster. The resulting files are uncompressed, and thus larger, but can be read directly into memory without extra processing. Note that we can have more than one content project in a solution. We might add one per platform, but this is beyond the scope of this article.

Navigate to the content project using Windows Explorer, and place our textures in there. The start files can be downloaded from the previously mentioned link. Then add the files to the content project by right-clicking on it in the Solution Explorer and choosing Add | Existing Item.... Make sure to place the assets in a folder called Game2D.

When we click on the hero texture in the content project, we can see several properties. First of all, our texture has a name, Hero. We can use that name to load our texture in code. Note that this has no extension, because the files will be compiled to an intermediate format anyway. We can also specify a Content Importer and a Content Processor.
Our .png file gets recognized as a texture, so XNA Game Studio automatically selects the Texture importer and processor for us. An importer will convert our assets into the "Content Document Object Model", a format that can be read by the processor. The processor will compile the asset into a managed code object, which can then be serialized into the intermediate .xnb file. That file will then be loaded at runtime.

Drawing sprites

Everything is set up for us to begin. Let's start drawing some images. We'll draw a background, an enemy, and our hero.

Adding fields

At the top of our MainGame, we need to add a field for each of our objects. The type used here is Texture2D.

```
Texture2D _background, _enemy, _hero;
```

In the LoadContent method, we need to load our textures using the content manager.

```
// TODO: use this.Content to load your game content here
_background = Content.Load<Texture2D>("Game2D/Background");
_enemy = Content.Load<Texture2D>("Game2D/Enemy");
_hero = Content.Load<Texture2D>("Game2D/Hero");
```

The content manager has a generic method called Load, generic meaning we can specify a type, in this case Texture2D. It has one argument, the asset name. Note that you do not specify an extension; the asset name corresponds to the folder structure plus the name of the asset that you specified in the properties. This is because the content is compiled to the .xnb format by our content project anyway, so the files we load with the content manager all have the same extension. Also note that we do not specify the root directory of our content, because we've set it in the game's constructor.

Drawing textures

Before we start drawing textures, we need to make sure our game runs in full screen. This is because the emulator has a bug and our sprites wouldn't show up correctly otherwise. You can enable full screen by adding the following code to the constructor:

```
graphics.IsFullScreen = true;
```

Now we can go to the Draw method.
Rendering textures is always done in a specific way:

- First we call the SpriteBatch.Begin() method. This will make sure all the correct states necessary for drawing 2D images are set properly.
- Next we draw all our sprites using the Draw method of the sprite batch. This method has several overloads. The first argument is the texture to draw, the second is an object of type Vector2 that stores the position of the object, and the last argument is a color that will tint your texture. Specify Color.White if you don't want to tint your texture.
- Finally we call the SpriteBatch.End() method. This will sort all sprites we've rendered (according to the specified sort mode) and actually draw them.

If we apply the previous steps, they result in the following code:

```
// TODO: Add your drawing code here
spriteBatch.Begin();
spriteBatch.Draw(_background, new Vector2(0, 0), Color.White);
spriteBatch.Draw(_enemy, new Vector2(10, 10), Color.White);
spriteBatch.Draw(_hero, new Vector2(10, 348), Color.White);
spriteBatch.End();
```

Run the game by pressing F5. The result is shown in the following screenshot:

Refactoring our code

In the previous code, we've drawn three textures from our game class. We hardcoded the positions, something we shouldn't do. None of the textures were moving, but if we wanted to add movement now, our game class would get cluttered, especially if we have many sprites. Therefore we will refactor our code and introduce some classes.

We will create two classes: a GameObject2D class that is the base class for all 2D objects, and a GameSprite class that will represent a sprite. We will also create a RenderContext class. This class will hold our graphics device, sprite batch, and game time objects. We will use all these classes even more extensively when we begin building our own framework.

Render context

Create a class called RenderContext. To create a new class, do the following:

- Right-click on your solution.
- Click on Add | New Item.
- Select the Code template on the left.
- Select Class and name it RenderContext.
- Click on OK.

This class will contain three properties: SpriteBatch, GraphicsDevice, and GameTime. We will use an instance of this class to pass to the Update and Draw methods of all our objects; that way they can access the necessary information. Make sure the class has public as its access specifier. The class is very simple:

```
public class RenderContext
{
    public SpriteBatch SpriteBatch { get; set; }
    public GraphicsDevice GraphicsDevice { get; set; }
    public GameTime GameTime { get; set; }
}
```

When you build this class, it will not recognize the types SpriteBatch, GraphicsDevice, and GameTime. This is because they are stored in certain namespaces and we haven't told the compiler where to look for them. Luckily, XNA Game Studio can find them for us automatically. If you hover over SpriteBatch, an icon like the one in the following screenshot will appear on the left-hand side. Click on it and choose the using Microsoft.Xna.Framework.Graphics; option. This will fix the using statement for you. Do this each time such a problem arises.

The base class

The base class is called GameObject2D. The only thing it does is store the position, scale, and rotation of the object, and a Boolean that determines if the object should be drawn. It also contains four methods: Initialize, LoadContent, Draw, and Update. These methods currently have an empty body, but objects that inherit from this base class later on will add an implementation. We will also use this base class for our scene graph, so don't worry if it still looks a bit empty.

Properties

We need to create four automatic properties. The Position and Scale properties are of type Vector2, the rotation is a float, and the property that determines if the object should be drawn is a bool.
```
public Vector2 Position { get; set; }
public Vector2 Scale { get; set; }
public float Rotation { get; set; }
public bool CanDraw { get; set; }
```

Constructor

In the constructor, we set the Scale property to one (no scaling) and set the CanDraw property to true.

```
public GameObject2D()
{
    Scale = Vector2.One;
    CanDraw = true;
}
```

Methods

This class has four methods:

- Initialize: we will create all our new objects in this method.
- LoadContent: this method will be used for loading our content. It has one argument, the content manager.
- Update: this method will be used for updating our positions and game logic. It has one argument, the render context.
- Draw: we will use this method to draw our 2D objects. It has one argument, the render context.

```
public virtual void Initialize() { }

public virtual void LoadContent(ContentManager contentManager) { }

public virtual void Update(RenderContext renderContext) { }

public virtual void Draw(RenderContext renderContext) { }
```

Summary

In this article we have got used to the 2D coordinate system.

About the Author: Brecht Kets
http://www.packtpub.com/article/2d-graphics
On Mon, 2002-07-22 at 00:50, Joey Hess wrote:

> Colin Walters wrote:
> > ** Changes to the upstream source will NOT be preserved when building
> > a non-native version 2 archive. You must generate a patch, and put it
> > in debian/patches. See below.
[...]
> I would even say that it should default to making an automatic diff for
> packages that have explicit patches too; so if you download a source for
> a quick NMU, you don't have to worry about messing with adding a new
> patch.

I see your point. I think I agree. So I'll change it so it does by default build a diff of the unpacked source directory; it will then be dropped in debian/patches/00debian.patch.

> I think they're going to make life much harder for those of us who want
> to check the resulting tree into cvs. I don't want to have to rename an
> entire directory in cvs when a new upstream comes out. It would be much
> better if the source format let us specify what directory each source
> tarball extracted to, so the db part could go right in db/ and the
> evolution part right into evolution/. You seem to have the needed
> information already.

I guess there's no particular major reason not to name the unpacked upstream source directories without the version; I just thought it looked nicer with it. If it would be painful for cvs-buildpackage users, I'll change it. (By the way, thanks for giving this feedback; this is exactly the kind of thing I need to know.)

> There's also a bit of an inconsistency between the directory format for
> single-source packages and multi-source packages; when adding a second
> source to a single-source package the first source would all have to be
> moved into packagename-version/firstsource-version/; which again will
> truly suck in cvs.

Yeah... I think adding a second source would be a rare occurrence, though. I didn't really want to make the unpacking directory confusing for packages with just a single upstream (i.e. the vast majority).
> Another concern: How will this new source format deal with packages that
> already have a debian/ directory upstream? Seems that with debian/ in
> both the upstream tarball and in the .debian.tar.bz2, things could get
> rather nasty pretty quick -- nastier than maintaining such packages
> already is, even.

Right now it bombs out with an error. I personally think released upstream tarballs should not contain a debian/ directory. This doesn't mean it can't be in the upstream CVS or something, but it should not be in the foo-1.0.tar.gz they release. I'm aware this was the subject of "active discussion" a while ago, and I know people disagree with me, so if one (against all common sense :) ) does have debian/ in the upstream tarball, then one can make the package debian-native, or just stick with the version 1 archive. I don't really have any other solutions. Personally, if the package has a debian/ upstream, that probably means upstream is very responsive to Debian, so making it Debian-native makes sense in a way.

> Another approach might be to designate one source tarball as the primary
> source -- this goes into packagename-version/ . Secondary source
> tarballs, which includes any other upstream sources and maybe the
> debian.tar.gz would go into subdirectories. Giving the maintainer
> control of exactly what the names of those subdirectories are, if
> feasible, would be really neat.

Mmm... it would be neat. To do this, though, the maintainer would really have to have control, because namespace clashes are quite likely; e.g. if the primary source has a subdirectory with the same name as one of the secondary sources. For example, it is certainly possible for the evolution upstream to have a subdirectory "db". The other thing is, though, that I don't want to make life TOO hard for those trying to unpack a source package manually. If we give the maintainer complex control over unpacking, that's another step they have to follow, or the later build process will fail confusingly.
> Wouldn't it be better if this generated a patch against the upstream source
> tree with any other existing patches applied?

That's exactly what happens (or at least what the implementation intends to do :) ). This is the same way dbs works.

> Of course if I later on add more patches to that package and have to revisit
> LFS support, it would also be nice if I could update that LFS patch by
> telling dpkg-source to generate a diff between the current tree, and the
> old patched tree minus my old lfs patch. Perhaps something like this:

Couldn't you just do "rm debian/patches/lfs.patch" before running "dpkg-source -p --create"?

> Of course this stuff all gets sticky pretty fast with order-dependent patches
> and patch dependencies, and I doubt I'd ever really use this.

Yes; but I have to do that work anyway, because I do plan to support reversing and applying individual patches, along with the patches that depend on them. You can already reverse all the patches:

    dpkg-source -p --reverse --apply-all

> But the same
> facility could be used like so, assuming 'debianization' is the standard
> debianization diff for packages that do not separate up their patches:
>
>   dpkg-source -p create debianization
>
> So that would take the original sources, unpack them, apply any patches
> from my current source tree except the debianization patch, and diff the
> result against my current source tree. If this were then run
> automatically when dpkg-source builds a package, it would solve my
> concern at the top of this message.

I think that makes sense.
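The default behaviour discussed in this thread (diff the pristine unpacked tree against the working tree and drop the result in debian/patches/00debian.patch) can be sketched in a few lines. The following is purely illustrative Python, not how dpkg-source is implemented; the helper name is invented, deletions are ignored, and files are assumed to be text:

```python
import difflib
from pathlib import Path

def auto_debian_patch(pristine_dir, work_dir, patch_path):
    """Illustrative sketch only: write changes made to an unpacked
    source tree as one unified diff (new and modified text files only;
    deletions and binary files are omitted for brevity)."""
    pristine_dir, work_dir = Path(pristine_dir), Path(work_dir)
    chunks = []
    for work_file in sorted(work_dir.rglob('*')):
        if not work_file.is_file():
            continue
        rel = work_file.relative_to(work_dir)
        if rel.parts and rel.parts[0] == 'debian':
            continue  # packaging files are not upstream changes
        old_file = pristine_dir / rel
        old = (old_file.read_text().splitlines(keepends=True)
               if old_file.is_file() else [])
        new = work_file.read_text().splitlines(keepends=True)
        chunks.extend(difflib.unified_diff(
            old, new, fromfile=f'a/{rel}', tofile=f'b/{rel}'))
    patch_path = Path(patch_path)
    patch_path.parent.mkdir(parents=True, exist_ok=True)
    patch_path.write_text(''.join(chunks))
    return patch_path
```

Identical files produce no hunks, so an unchanged tree yields an empty patch, matching the "nothing to preserve" case above.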
https://lists.debian.org/debian-devel/2002/07/msg01194.html
Ramblings from the Creator of WilsonDotNet.com

To further constrain the input for T, add a type constraint of where T : struct. You can't use : enum, so struct is the next best thing. Another handy utility enum function I use quite often:

```
public static T ParseEnum<T>(string value) where T : struct
{
    return (T)Enum.Parse(typeof(T), value, true);
}
```

Comments:

- I've been looking at the implementation of Enum.ToObject(Type, int) (using Lutz's Reflector, of course), but can't seem to think of a reason to use it. Why not cast the int directly to its enum?

```
MyEnum = (MyEnum)typeValue;
```

Can you demo the usage?

- As for the usage, consider an enum named MyEnum and a variable of that enum type that you want to load from an int value, possibly one you've stored in the database -- just do the following:

```
MyEnum test = ToEnum<MyEnum>(value);
```

As for the direct cast, that's a good question, and I could have sworn I'd tried that many times and it didn't work -- although it did just now in my test. Maybe I'm thinking of a limitation I encountered in .NET v1 that I've been working around -- but maybe I'm wrong there, as I didn't retest that assumption. Oh well, it seems that it's not necessary for .NET v2 at any rate, and if you have generics then you have .NET v2, so it's a moot point I suppose. Thanks for the comments.

- I would throw something like this into your helper:

```
System.Diagnostics.Debug.Assert(typeof(T).IsEnum);
```

- All these helper functions cannot ensure a valid enum value. In .NET 2.0, suppose MyEnum has only two enum values and the value passed in is 10: no exception will be thrown, so I won't know that my MyEnum variable now holds an invalid value.

- You can call Enum.IsDefined in the helper function first to guarantee a valid value, and if it fails then throw an exception. The same problem occurs with the simple cast syntax, so at least using a helper function makes it easier to add extra things like this when you find they are needed.

- Be careful about the performance of enum operations.
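The validation concern raised in the comments (a cast silently accepting an undefined value) is worth seeing from the other side. The following sketch uses Python's enum module rather than C#; the names MyEnum and to_enum are invented for illustration, but the point carries over: converting through the enum type itself rejects undefined values, which is the guarantee the commenters suggest adding via Enum.IsDefined:

```python
from enum import IntEnum

class MyEnum(IntEnum):   # hypothetical two-member enum
    FIRST = 1
    SECOND = 2

def to_enum(value, enum_type):
    """Convert an int to an enum member, rejecting undefined values.

    Calling the enum type raises ValueError for values that are not
    defined members -- the check a plain C# cast skips."""
    return enum_type(value)

assert to_enum(1, MyEnum) is MyEnum.FIRST

try:
    to_enum(10, MyEnum)          # undefined value
except ValueError:
    pass                         # rejected, as the commenters want
```

Whether rejection or silent acceptance is the right default depends on whether out-of-range values are ever legitimate (e.g. flags combinations), which is a separate design question.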
http://weblogs.asp.net/pwilson/archive/2007/04/10/function-to-load-enum-typed-properties-from-database.aspx
People used to statically typed languages coming to Python often complain that you have to use "self" (or whichever name you want, but self is most common) to refer to a method or variable in that object. You also have to explicitly pass a reference to the method's object in every call to it. Many people new to Python are annoyed by this and feel that it forces a lot of unnecessary typing. This recipe presents a method which makes "self" implicit.

Discussion

The recipe works using a metaclass that substitutes all function implementations in the class by recompiling their source code. The first change alters the function definition so that, in the Vector3d example, "def length()" becomes "def length(self)". That has the obvious side effect of making it impossible to add class or static methods to the class, but hopefully that's not too important. :) If it is important, one alternative is to not subclass LocalsFromInstanceVars and instead wrap the methods you do not want to write "self" in individually:

```
class Foo:
    def method():
        pass
    method = rework(method)
```

The second change makes it so that on each invocation of a method in the object, all names attached to the object will be copied to that method's local namespace. The added code for length() is:

```
def length(self):
    _LocalsFromInstanceVars__super = self._LocalsFromInstanceVars__super
    _Vector3d__super = self._Vector3d__super
    dummy = self.dummy
    length = self.length
    x = self.x
    y = self.y
    z = self.z
```

With that you can then REFER to the instance members without using "self.", but you cannot assign to them. Setting x = 45, for example, will not have any effect on the object.

Another limitation is that one-liner methods cannot be used in classes that subclass LocalsFromInstanceVars. On my Python (2.3.4) they confused the inspect module and made the getsourcelines() function behave strangely.
With all those limitations (and more bugs I haven't thought of), and with the general consensus that using "self" is a Good Thing, this code should probably not be used for anything other than demonstration purposes. If even that; but it was fun to write. :)

Comments:

- Disagree with the approach. I strongly disagree with these kinds of recipes. Python has its own way of doing things, and a Python programmer should adopt those views right from the beginning. Letting people program in Python the way they do in [other language] is the worst thing you can do. Of course, we have to make a strong distinction between things like this, which can make your code look like [other language], and extending Python with features from [other language]. The latter can be useful.

- Sandor, you are courageous. I did something similar a couple of years ago (inspect.getsource + metaclass) but I never had the courage to publish it ;-) BTW, the textwrap module has a dedent function which you may want to use.

- Please don't publicize this recipe. I was thinking of adding a comment similar to Sandor's. There is always some discomfort when encountering change; that's the pain of growth. This recipe is like aspirin: it helps the symptom but not the root cause. Although I must say I do find it clever, it really is not doing people new to the language justice. If you are learning Python, then learn Python and the philosophy of Python; otherwise one is better served by staying with the legacy language one already knows.

- Excellent recipe! I used Python a number of years ago before I discovered Ruby. The biggest issue I had with the language was that the OO bit felt like a late add-on, and having to repeatedly declare and pass "self" to an object's own methods made it feel as if Python was as much an OO language as C was.
Object-oriented programming in C also requires explicitly passing a context object to a function, same as Python, and that is just annoying when you try to envision that you're playing with a bunch of objects. The only thing I miss from Python is the speed which, truth be told, Ruby completely lacks.

- I like shorthand. I'd adopt this sort of writing by creating a flex/bison preprocessor that transforms a non-Python file into a .py. Something like this: And of course I'd have a custom version of Python that knows to search for dates on .PREPY. I usually don't use the somewhat irrelevant construct: I sort of agree with the anonymous Sandor -- my solution shows I wouldn't call the file a ".py". But if you can write it in Python, it's Pythonic!

- My problem with making self implicit by your means, if I understand how it works, is that it recompiles class methods every time it loads, even from the .pyc file. I almost always avoid "exec" and "eval".
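For readers who want to see the core idea without source recompilation, here is a hedged, hypothetical sketch (not part of the original recipe; the names implicit_self and Vec are invented): instead of rewriting source, a decorator rebuilds the function at call time with the instance's attributes exposed through its globals. Reads work; assignments still do not affect the object, mirroring the recipe's limitation:

```python
import functools
import types

def implicit_self(func):
    """Hypothetical decorator: call a no-'self' method with the
    instance's attributes visible as bare names (read-only)."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # Expose instance attributes by merging them into a copy of
        # the function's globals, then rebuild the function over it.
        namespace = dict(func.__globals__)
        namespace.update(vars(self))
        rebuilt = types.FunctionType(
            func.__code__, namespace, func.__name__, func.__defaults__)
        return rebuilt(*args, **kwargs)
    return wrapper

class Vec:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @implicit_self
    def length():          # note: no 'self' parameter
        return (x * x + y * y) ** 0.5
```

As with the recipe, assigning to x inside length would only rebind a name in the throwaway namespace, never the attribute on the object, and it avoids exec/eval entirely, addressing the last commenter's objection.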
http://code.activestate.com/recipes/362305/
# Stop losing clients! Or how a developer can test a website, by the example of PVS-Studio. Part 1

A website with bugs could be a real pain in the neck for business. Just one 404 or 500 error could end up costing an obscene amount of money for the company and hurt its good reputation. But there is a way to avoid this issue: website testing. That's sort of what this article is about. After reading this article, you will learn how to test code in Django, create your "own website tester" and much more. Welcome to the article.

![](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/4da/858/258/4da85825868cd5de6984786818965c3d.jpg)

How do you feel when you are writing tests?
-------------------------------------------

How would you answer this question? I would say that I'm enjoying writing them. Each developer has his own opinion about tests. Personally, I really love the process. The process of writing tests helps me not only write more secure code, but also understand my own and other people's programs better. And the cherry on top is that feeling when all tests go green. At this point, my perfectionism scale reaches its peak.

![](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/d31/9e5/3a3/d319e53a3c49b7e65bbff14e0abd777b.jpg)

Sometimes when testing, I get sucked into the process as if I'm playing Half-Life. I start to spend all my working time and free time on this process. Of course, over time, I get tired of tests, and then I have to take a break. After the break, I can become a no-lifer again for a few weeks, as if Valve released a new episode. If you're the same as me, then you know what I'm talking about. Enough talk, let's get down to business!

Backend testing
---------------

We build [our website](https://pvs-studio.com/en/) on Django, so the code examples are for this framework.

Before starting, I invite you to read the list of recommendations that structure the process of writing tests and make it more comfortable.
I made the list on the basis of my personal experience and other developers' tips.

* The test files are stored in the *tests* folder inside the app;
* model tests, view tests and form tests are located in *test\_models.py*, *test\_views.py* and *test\_forms.py* respectively;
* the test method name starts with the *test\_* prefix (e.g., *test\_get\_sum* or *test\_status\_code*);
* the name of the class that contains tests has the following form: *TestedEntityTests* (e.g., *TrialTests* or *FeedbackFileTests*).

### Testing of models

Let's create the *my\_app* application and fill in the *models.py* file with the following code:

```
from django.db import models

class Trial(models.Model):
    """Simple user trial model"""
    email = models.EmailField(
        verbose_name='Email',
        max_length=256,
        unique=False,
    )

    def __str__(self):
        return str(self.email)

    class Meta:
        verbose_name = 'Trial'
        verbose_name_plural = 'Trials'
```

This model is a simplified version of our *Trial* model. Here's what we can check with this model:

1. The *verbose\_name* parameter of the *email* field – "Email".
2. The *max\_length* parameter of the *email* field – 256.
3. The *unique* parameter of the *email* field – *False*.
4. The *\_\_str\_\_* method returns the *email* parameter value.
5. The *verbose\_name* parameter of the model – "Trial".
6. The *verbose\_name\_plural* parameter of the model – "Trials".

I have heard from some programmers that testing of models is a waste of time. But my experience suggests that this opinion is erroneous. Let me show you a simple example. For the *email* field, we set a maximum length of 256 characters (in accordance with [RFC 2821](https://www.ietf.org/rfc/rfc2821.txt)). Accidentally deleting the last digit is not a big deal.

If such an oversight suddenly happens, the user with the my\_super\_long\_email@gmail.com (29 characters) email will get an error and won't be able to request a trial.
This means that the company will lose a prospective client. Of course, you can write additional validation, but it's better to be sure that the program works correctly without it.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/10c/fa2/b86/10cfa2b867f842ace1af1ec1b9605e6b.png)

Let's move on to the tests and first decide where they will be located. You can write all the tests in one file, *tests.py* (Django adds this file when you create the application). Or you can follow the recommendations above and sort them.

If you like the second option more, delete *tests.py*. Then create the *tests* folder with an empty *\_\_init\_\_.py* file. When running the tests, the file will tell Python where to look for tests. Let's add 3 more files to the same folder: *test\_forms.py*, *test\_models.py*, and *test\_views.py*. The content of the application directory will be something like this:

![](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/8f2/d4f/bc7/8f2d4fbc7faef39969e6a7c2bd33e1c7.jpg)

Let's open the *test\_models.py* file and add the following code to it:

```
from django.test import TestCase
from my_app.models import Trial

class TrialTests(TestCase):
    """Tests for Trial model"""

    def test_verbose_name(self):
        pass

    def test_max_length(self):
        pass

    def test_unique(self):
        pass

    def test_str_method(self):
        pass

    def test_model_verbose_name(self):
        pass

    def test_model_verbose_name_plural(self):
        pass
```

Django has a special *django.test* module for testing. One of the most important classes of this module is [*TestCase*](https://docs.djangoproject.com/en/4.0/topics/testing/tools/#django.test.TestCase). It is the class that allows you to write tests: we just need to inherit our class from *TestCase*. All our tests are methods of the *TrialTests* class. The tests don't check anything yet, but it won't be for long. Each of the methods will test one condition from the list above. Let's figure out how to run the tests.
To run all tests of your website at once, enter this command in the console:

```
python manage.py test
```

To run tests of a specific class, for example, *TrialTests*, write:

```
python manage.py test my_app.tests.test_models.TrialTests
```

Any of these commands will run our 6 tests. Select one of them, enter it into the console, and press Enter. We will get something like this:

![](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/7e0/5cb/6e5/7e05cb6e5d31e7925bb1c7b4be1e0a39.jpg)

The output shows that 6 tests were checked in 0.001 seconds. "OK" at the end of the output indicates their successful execution.

Now let's write real tests. To write them, we need to access the parameters of the *Trial* model object. So, we need to create this object. And here it's important to know that Django uses a separate clean database for tests. Before running the tests, the database is created. After running the tests, the database is deleted. That's what the first and last lines in the screenshot above are about. If suddenly, for some reason, the database could not be deleted, Django tells you about that issue, and you need to delete it manually.

To work with this database, you can use 3 methods:

1. *setUp* — is executed before running each test;
2. *tearDown* — is executed after completion of each test;
3. *setUpTestData* — is executed once before running all tests of a particular class.

Let's use the [latter](https://docs.djangoproject.com/en/4.0/topics/testing/tools/#django.test.TestCase.setUpTestData). Since it is a method of the class, let's add the appropriate decorator. Inside, we create an object of the *Trial* class and get the *email* field from it. We will use the field in the tests.
```
class TrialTests(TestCase):
    """Tests for Trial model"""

    @classmethod
    def setUpTestData(cls):
        """Set up the database before running tests of the class"""
        cls.trial = Trial.objects.create(
            email='test@gmail.com'
        )
        cls.email_field = cls.trial._meta.get_field('email')
```

Now, when running tests of the *TrialTests* class, a *trial* object is created in the new database. After the run, the object is deleted.

Let's write the test of the *verbose\_name* parameter.

```
def test_verbose_name(self):
    """The verbose_name parameter test"""
    real_verbose_name = getattr(self.email_field, 'verbose_name')
    expected_verbose_name = 'Email'
    self.assertEqual(real_verbose_name, expected_verbose_name)
```

From the *email\_field* field we extract the value of the *verbose\_name* parameter. Then we apply the *assertEqual* method from the *TestCase* class. The method compares two parameters - the real and expected values of *verbose\_name*. If the values are equal, the test runs successfully. Otherwise, it fails.

Let's write the same tests for the *max\_length* and *unique* parameters.

```
def test_max_length(self):
    """The max_length parameter test"""
    real_max_length = getattr(self.email_field, 'max_length')
    self.assertEqual(real_max_length, 256)

def test_unique(self):
    """The unique parameter test"""
    real_unique = getattr(self.email_field, 'unique')
    self.assertEqual(real_unique, False)
```

It's the same as with *verbose\_name*.

By the way, in the *unique* parameter test, we check that the value is *False*. The *assertFalse* command makes it easier to do. Let's rewrite the code of this test.

```
def test_unique(self):
    """The unique parameter test"""
    real_unique = getattr(self.email_field, 'unique')
    self.assertFalse(real_unique)
```

The code is shorter and more readable. By the way, Django has many such [helpful assertions](https://docs.djangoproject.com/en/3.2/topics/testing/tools/#assertions).

Now let's check the string representation of the object.
```
    def test_string_representation(self):
        """The __str__ method test"""
        self.assertEqual(str(self.trial), str(self.trial.email))
```

That one's easy. We check that the string representation of the object equals its email.

And the last thing is the tests of the model's *Meta* options:

```
    def test_model_verbose_name(self):
        """The test of the verbose_name option of the Trial model"""
        self.assertEqual(Trial._meta.verbose_name, 'Trial')

    def test_model_verbose_name_plural(self):
        """The test of the verbose_name_plural option of the Trial model"""
        self.assertEqual(Trial._meta.verbose_name_plural, 'Trials')
```

We access the *Meta* options of the *Trial* model through *\_meta* and compare their values with the expected ones.

If you run the tests now, they will pass, as before. Well, that's no fun! Let's break something. Let the *verbose\_name* option of the *Trial* model become our victim. Open the model's code and change the value of this option from "Trial" to "Something else". Let's run the tests.

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/472/2a5/428/4722a5428fed8d439feabf7eae1fb8d0.png)

As you can see, one of the tests failed. Django tells us about the failure and that the real value ("Something else") doesn't equal the expected value ("Trial").

### Mixins - the helpful guys

Model tests are homogeneous, so when you have a lot of entities, testing them is not the most pleasant routine. I tried to simplify this process somewhat with mixins. My method is not perfect, and I do not insist on using it. However, you may find it useful.

You probably noticed that when we test the *verbose\_name*, *max\_length*, and *unique* parameters, there is some code duplication: we get the value of the object's field parameter and compare it with the expected one, and so it is in all three tests. That means you can write one function that does all the work.
```
def run_field_parameter_test(
        model, self_, field_and_parameter_value: dict,
        parameter_name: str) -> None:
    """Test field's parameter value"""
    for instance in model.objects.all():
        # Example 1: field = "email"; expected_value = 256.
        # Example 2: field = "email"; expected_value = "Email".
        for field, expected_value in field_and_parameter_value.items():
            parameter_real_value = getattr(
                instance._meta.get_field(field),
                parameter_name
            )
            self_.assertEqual(parameter_real_value, expected_value)
```

Let's figure out what parameters we use. I think it's clear why we need *model*. *self\_* is needed only to call the *assertEqual* method; since *self* is conventionally reserved for the instance in Python methods, we add \_ to avoid misunderstandings. *field\_and\_parameter\_value* is a dictionary that maps a field to the expected value of that field's parameter. For example, if we check the *max\_length* parameter, we can pass *email* and 256 in this dictionary; if we check *verbose\_name*, then *email* and "Email". *parameter\_name* is the parameter being tested: *max\_length*, *verbose\_name*, etc.

Now let's turn to the code. First, we get all the objects of the model and iterate over them. Next, we go through the dictionary that contains fields and expected parameter values. For each pair, we get the real parameter value by referring to the object and compare it with the expected value. The code is very similar to the one previously written in the tests, only now it's all in one function. By the way, if the function name started with the *test\_* prefix, Django would consider it a real test and would try to run it along with the others.

Let's write the mixins. Each parameter should have its own mixin. For example, let's take *verbose\_name* and *max\_length*.
```
class TestVerboseNameMixin:
    """Mixin to check verbose_name"""

    def run_verbose_name_test(self, model):
        """Function that tests verbose_name"""
        run_field_parameter_test(
            model, self, self.field_and_verbose_name, 'verbose_name'
        )


class TestMaxLengthMixin:
    """Mixin to check max_length"""

    def run_max_length_test(self, model):
        """Function that tests max_length"""
        run_field_parameter_test(
            model, self, self.field_and_max_length, 'max_length'
        )
```

In each mixin we create the necessary method, which calls our single function with the corresponding parameters. *self.field\_and\_verbose\_name* and *self.field\_and\_max\_length* are taken from the class that inherits from the mixin - namely, from the *setUpTestData* method of the *TrialTests* class.

```
    @classmethod
    def setUpTestData(cls):
        # ...
        cls.field_and_verbose_name = {
            'email': 'Email',
        }
        cls.field_and_max_length = {
            'email': 256,
        }
```

Let's inherit the *TrialTests* class from our mixins.

```
class TrialTests(TestCase, TestVerboseNameMixin, TestMaxLengthMixin):
    # ...
```

If you have a lot of mixins, you can combine them, for example, into a tuple that you unpack when inheriting.

```
MIXINS_SET = (
    TestVerboseNameMixin,
    TestMaxLengthMixin,
)


class TrialTests(TestCase, *MIXINS_SET):
    # ...
```

Now we can rewrite our tests:

```
    def test_verbose_name(self):
        """The verbose_name parameter test"""
        super().run_verbose_name_test(Trial)

    def test_max_length(self):
        """The max_length parameter test"""
        super().run_max_length_test(Trial)
```

When you have a lot of tests for different models, this method turns out to be very useful.
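The mixin plumbing itself is plain Python and can be exercised without Django at all. Below is a stripped-down, self-contained sketch of the same idea; the *FakeField* class and attribute names are made up for illustration and are not part of the article's project:

```python
import unittest


def run_attribute_test(testcase, obj, attr_and_expected):
    """Toy analogue of run_field_parameter_test: checks plain attributes."""
    for attr, expected in attr_and_expected.items():
        testcase.assertEqual(getattr(obj, attr), expected)


class MaxLengthMixin:
    """Provides a reusable check; expects self.attr_and_max_length."""

    def run_max_length_test(self, obj):
        run_attribute_test(self, obj, self.attr_and_max_length)


class FakeField:
    """Stand-in for a model field with a max_length parameter."""
    max_length = 256


class FakeFieldTests(unittest.TestCase, MaxLengthMixin):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        # The mixin reads this attribute, just like the Django version
        # reads field_and_max_length from setUpTestData.
        cls.attr_and_max_length = {'max_length': 256}

    def test_max_length(self):
        self.run_max_length_test(FakeField())
```

Running this with `python -m unittest` passes; change *max\_length* on *FakeField* and the mixin-provided check fails, which is exactly the behavior the Django mixins above rely on.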
```
def get_domain(email: str) -> str:
    """Return email's domain"""
    try:
        _, domain = email.split('@')
    except ValueError:
        domain = ''
    return domain
```

This is what the test of the function might look like:

```
from django.test import TestCase

from my_app.views import get_domain

EMAIL_AND_DOMAIN = {
    'test1@gmail.com': 'gmail.com',
    'test2@wrong_email': 'wrong_email',
    'test3@mail.ru': 'mail.ru',
    'test4@@wrong_email.com': '',
}


class FunctionsTests(TestCase):
    """Test class for views"""

    def test_get_domain(self):
        """Test get_domain function"""
        for email, expected_domain in EMAIL_AND_DOMAIN.items():
            real_domain = get_domain(email)
            self.assertEqual(real_domain, expected_domain)
```

The constant stores emails and their real domains. In the test, we go through the emails, get the domain with the help of the function under test, and compare it with the expected one.

Now let's talk a little about one useful construction. Let me change our emails somehow: for example, *test1@gmail.com* to *test1@habr.com*, and *test2@wrong\_email* to *test2@habr*. Time to run the tests.

![](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/e3b/b8b/f46/e3bb8bf468324784c7d827fe00eaa855.jpg)

They failed as expected. But why do we see that only one email is incorrect, even though we changed two? You see, by default, a test method stops at the first failed assertion, as if the *break* command were called inside the loop. This fact can hardly please you, especially if your tests take a long time to run. But, luckily, there is a solution — the [*with self.subTest()*](https://docs.python.org/3/library/unittest.html#unittest.TestCase.subTest) construction. The construction is placed inside the loop, right after the loop declaration. Let's add it to our test:

```
# ...
for email, expected_domain in EMAIL_AND_DOMAIN.items():
    with self.subTest(f'{email=}'):
        real_domain = get_domain(email)
        self.assertEqual(real_domain, expected_domain)
```

In the brackets of the *subTest* method, we specify the string that we want to see in the output when the test fails. In our case, this is the email being tested.

Now, if any check fails, Django will save a report about the failure and continue running. And after the run is completed, Django will display information on every check that failed.

![](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/22e/f5f/8e6/22ef5f8e64f735e8a58269f24f59e267.jpg)

Let's look at the test of another function. When we get a promo code from a user, we transform it into a more convenient form - we remove the "#" characters and spaces. To do this, we have the *get\_correct\_promo* function:

```
def get_correct_promo(promo: str) -> str:
    """Get promo without # and whitespaces"""
    return promo.replace('#', '').replace(' ', '')
```

This is what the function's test might look like:

```
from django.test import TestCase

from my_app.views import get_correct_promo

PROMO_CODES = {
    '#sast': 'sast',
    '#beauty#': 'beauty',
    '#test test2': 'testtest2',
    'test1 test2 test3': 'test1test2test3',
}


class FunctionsTests(TestCase):
    """Test class for views"""

    def test_get_correct_promo(self):
        """Test get_correct_promo function"""
        for incorrect_promo, correct_promo in PROMO_CODES.items():
            real_promo = get_correct_promo(incorrect_promo)
            self.assertEqual(real_promo, correct_promo)
```

The constant stores incorrect and correct promo codes. In the test, we go through the promo codes and compare each promo code processed by *get\_correct\_promo* with the correct one.

Views testing is probably the simplest of this triad of tests: we simply call the function we need, then check that the value the function returns matches the expected one.
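Since *subTest* comes from *unittest* itself rather than Django, this style of function testing can be checked in isolation with the standard library alone. A minimal, self-contained sketch (the sample data here is made up) follows the same pattern; make any expectation wrong and a run reports every failing email, not just the first:

```python
import unittest


def get_domain(email: str) -> str:
    """Same helper as above: return the part after '@', or '' if malformed."""
    try:
        _, domain = email.split('@')
    except ValueError:
        domain = ''
    return domain


class SubTestDemo(unittest.TestCase):
    def test_get_domain(self):
        cases = {
            'a@x.com': 'x.com',
            'b@@y.com': '',    # two '@' -> ValueError -> ''
            'no_at_sign': '',  # no '@' -> ValueError -> ''
        }
        for email, expected in cases.items():
            # Without subTest, the first failing email would hide the rest.
            with self.subTest(email=email):
                self.assertEqual(get_domain(email), expected)
```

Run it with `python -m unittest`; each email is reported as its own sub-test in the failure output.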
By the way, when creating constants with data for testing, I recommend that you come up with as many different values as possible. This way you will increase the chances that your tests will be effective.

### Form tests

Form tests are similar to model tests. In form tests, we can also check fields and methods.

Let's create a form for the *Trial* model:

```
from django import forms

from my_app.models import Trial


class TrialForm(forms.ModelForm):
    """Form of Trial model"""

    class Meta:
        model = Trial
        exclude = ()
```

This is what a test of the form might look like:

```
from django.test import TestCase

from my_app.forms import TrialForm


class TrialFormTests(TestCase):
    """Tests for TrialForm form"""

    def test_field_labels(self):
        """Test field's labels"""
        form = TrialForm()
        email_label = form.fields['email'].label
        self.assertEqual(email_label, 'Email')
```

In the test, we create an object of our form and compare the *label* of the field with the expected one. This is how you can write form tests, but we hardly use them: there is a more effective way to test forms, and that's roughly what the second part of the article is about.

How to create your "own website tester"
---------------------------------------

So, you tested the backend of your website. But suddenly, you noticed the 404 error on one of the pages. The tests you wrote did not find this error. They also won't help, for example, when searching for dead links on pages. Such tests are simply not designed for bugs of this kind. But then how do you catch these bugs? In this case, we need tests that simulate user actions. You can use [*django.test.Client*](https://docs.djangoproject.com/en/4.0/topics/testing/tools/#the-test-client), but it allows you to run tests only on the website server itself, which is not always convenient. So, let's turn to the Python *requests* library.

These tests usually turn out to be voluminous, so it's better to put them in a separate file (or files), for example, *test\_requests.py*.
### Checking status codes

To check a page's status code, you need to:

1. Go to the website page;
2. Get the status code of the page;
3. Check that the status code is 200.

The *requests* library has many useful [methods](https://docs.python-requests.org/en/stable/api/). The *head* method covers the first two steps: we will use it to send a HEAD request to the website pages. Let's import this method.

```
from requests import head
```

We only need to pass the URL to the method to get a response with all the necessary information about the page. And from this information, you can extract the status code:

```
response = head('')
print(response.status_code)
```

Now let's move on to writing the test. Create the necessary constants: the website domain and the relative paths of the website pages. For simplicity, let's take the domain of only the English website version.

```
DOMAIN = 'https://pvs-studio.com/en/'
PAGES = (
    '',
    'address/',
    'pvs-studio/',
    'pvs-studio/download/',
    # ...
)
PAGES = (DOMAIN + page for page in PAGES)
```

Of course, ideally, it is better to take the relative paths of the pages from the database. But if there is no such possibility, you can use a tuple.

Let's add the *PagesTests* class together with the *test\_status\_code* test:

```
from django.test import TestCase


class PagesTests(TestCase):
    """Tests for pages"""

    def test_status_code(self):
        """Test status code for pages"""
        for page in PAGES:
            with self.subTest(f'{page=}'):
                response = head(page)  # (1)
                self.assertEqual(response.status_code, 200)  # (2) and (3)
```

In the test, we send a HEAD request to each page and save the response. After that, we check whether the page's status code equals 200.

### Checking links on pages

Here's how to check a link:

1. Send a GET request to the page and get the page content;
2. Use a regular expression to get all the links from the content;
3. Go through each link and check that the link's status code is 200.
To search for links, let's use the *findall* method of the *re* module. To send a GET request, let's use the *get* method of the same *requests* library. And remember about the *head* method.

```
from re import findall

from requests import get, head
```

Next, let's move on to the variables. For this test, we need the *PAGES* constant declared earlier, and a variable with a regular expression for links.

```
LINK_REGULAR_EXPRESSION = r'<a[^>]* href="([^"]*)"'
```

And finally, let's write the test itself.

```
def test_links(self):
    """Test links on all site pages"""
    valid_links = set()
    for page in PAGES:
        page_content = get(page).content  # (1)
        page_links = set(  # (2)
            findall(LINK_REGULAR_EXPRESSION, str(page_content))
        )
        for link in page_links:
            if link in valid_links:
                continue
            with self.subTest(f'{link=} | {page=}'):
                response = head(link, allow_redirects=True)
                if response.status_code == 200:
                    valid_links.add(link)
                self.assertEqual(response.status_code, 200)  # (3)
```

We send a GET request to each page and extract the content from the received response. Next, using the regular expression and the *findall* method, we get all the links located on the page. We put these links into a set to remove duplicates. The last stage is a familiar scenario: we go through all the links, send a HEAD request to each one, and check the status code. If the link is a redirect, the [*allow\_redirects*](https://docs.python-requests.org/en/v0.8.4/api/#requests.Request.allow_redirects) parameter indicates whether we may follow it; by default, its value is False. We also add valid links to a set so as not to send requests to them again in the future.

By the way, sometimes you can find relative links on a page, for example, "/ru/pvs-studio/faq/". The website adds its URL to these links, while the test does not. As a result, the test cannot handle the request.
![](https://habrastorage.org/r/w780q1/getpro/habr/upload_files/5c9/58a/29e/5c958a29eb3966e4d4225327ce28e867.jpg)

To avoid this issue, let's create a function:

```
SITE_URL = 'https://pvs-studio.com'


def get_full_link(link: str) -> str:
    """Return link with site's URL"""
    if not link.startswith('http'):
        link = SITE_URL + link
    return link
```

If the received link is relative, the function adds the URL of the website to it. Now in the test, when we receive a link, we will use this function:

```
# ...
for link in page_links:
    link = get_full_link(link)
    # ...
```

There are situations when the test does not show the real status code of the page. It is usually either 403 or 404. For example, for [this page](https://marketplace.visualstudio.com/items?itemName=rvo.SendEmailTask&), *head* returns the 404 status code. This happens because some websites don't want to give page data to robots. To avoid this, you need to use the *get* method and, for greater confidence in the test, add a header with a *User-Agent*.

```
from requests import get

head_response = head(link)
print(head_response.status_code)  # 404

get_response = get(link, headers={'User-Agent': 'Mozilla/5.0'})
print(get_response.status_code)  # 200
```

### Redirect tests

Another kind of test where you can use *requests* is the redirect test.

To test a redirect, we need to:

1. Follow the link and get the response;
2. Compare the response URL with the expected one.

So, we need two URLs. The first URL is the redirect link that the user clicks on. The second one is the URL of the page that the visitor eventually lands on. As in the example with status codes, it's better to get these URLs from the database. If this is not possible, then I recommend using a dictionary.

```
REDIRECT_URLS = {
    '/ru/m/0008/': '/ru/docs/',
    '/en/articles/': '/en/blog/posts/',
    '/ru/d/full/': '/ru/docs/manual/full/',
}
```

Let's remember the *SITE\_URL* variable created earlier.
```
SITE_URL = 'https://pvs-studio.com'
```

Now, let's write the test.

```
def test_redirects(self):
    """Test the correctness of the redirect"""
    for link, page_url in REDIRECT_URLS.items():
        with self.subTest(f'{link=} | {page_url=}'):
            page_response = head(
                SITE_URL + link, allow_redirects=True
            )  # (1)
            expected_page_url = SITE_URL + page_url
            self.assertEqual(page_response.url, expected_page_url)  # (2)
```

First, we send a HEAD request to the link, allowing redirects. From the received response, we take the URL of the page and compare it with the expected one.

The *requests* library allows you to perform many different website tests. The main methods for tests, as you may have noticed, are *head* and *get*. But there are [other methods](https://docs.python-requests.org/en/latest/api/), and they can also be useful. It all depends on your tasks.

Conclusion
----------

So, now you know how to write tests for the backend and how to create your "own website tester". We will talk about form testing, JS, page translation testing and so on in the next parts of the article. Do you have any comments or feedback? Write them below or send them to [my Instagram](https://www.instagram.com/stepanov.programmer/). Thank you for reading this article, and see you soon!)
How to post a Multipart http message to a web service in C# and handle it with Java

January 10, 2013

Posting a multipart HTTP message is probably not a problem you often encounter in your job as a .NET developer; however, as I need to switch between the .NET and the Java world relatively frequently in my job this just happened to be a problem I had to solve recently.

Let's start with the .NET side of the problem: upload the byte array contents of a file to a server using a web service. We'll take the following steps:

- Read in the byte array contents of the file
- Construct the HttpRequestMessage object
- Set up the request message headers
- Set the Multipart content of the request
- Send the request to the web service
- Await the response

Start Visual Studio 2012 - the below code samples should work in VS2010 as well - and create a new Console application. We will only work within Program.cs for simplicity.

Step 1: read the file contents, this should be straightforward

private static void SendFileToServer(string fileFullPath)
{
    FileInfo fi = new FileInfo(fileFullPath);
    string fileName = fi.Name;
    byte[] fileContents = File.ReadAllBytes(fi.FullName);
}

Step 2: Construct the HttpRequestMessage object

The HttpRequestMessage class within the System.Net.Http namespace represents exactly what it says: a HTTP request. It is a very flexible object that allows you to specify the web method, the contents, the headers and many more properties of the HTTP message. Add the following code to SendFileToServer(string fileFullPath):

Uri webService = new Uri(@"");
HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Post, webService);
requestMessage.Headers.ExpectContinue = false;

The last line, i.e. the one that sets ExpectContinue to false, means that the Expect header of the message will not contain 'Continue'. This property is set to true by default. However, a number of servers don't know how to handle the 'Continue' value and they will throw an exception.
I ran into this problem when I was working on this scenario so I'll set it to false here. This does not mean that you have to turn off this property every time you call a web service with HttpRequestMessage, but in my case it solved an apparently inexplicable problem. You'll obviously need to replace the fictional web service address with a real one.

Step 3: set the multipart content of the http request

You should specify the boundary string of the multipart message in the constructor of the MultipartFormDataContent object. This will set the boundary of the individual parts within the multipart message. We'll then add a byte array content to the message passing in the bytes of the file to be uploaded. Note that we can add the following parameters to the individual multipart messages:

- The content itself, e.g. the byte array content
- A name for that content: this is ideal if the receiving party needs to search for a specific name
- A filename that will be added to the content-disposition header of the message: this is a name by which the web service can save the file contents

We also specify that the content type header should be application/octet-stream for obvious reasons. Add the following code to SendFileToServer(string fileFullPath):

MultipartFormDataContent multiPartContent = new MultipartFormDataContent("----MyGreatBoundary");
ByteArrayContent byteArrayContent = new ByteArrayContent(fileContents);
byteArrayContent.Headers.Add("Content-Type", "application/octet-stream");
multiPartContent.Add(byteArrayContent, "this is the name of the content", fileName);
requestMessage.Content = multiPartContent;

Step 4: send the message to the web service and get the response

We're now ready to send the message to the server by using the HttpClient object in the System.Net.Http namespace. We'll also get the response from the server.

HttpClient httpClient = new HttpClient();
Task<HttpResponseMessage> httpRequest = httpClient.SendAsync(requestMessage, HttpCompletionOption.ResponseContentRead, CancellationToken.None);
HttpResponseMessage httpResponse = httpRequest.Result;

We can send the message using the SendAsync method of the HttpClient object. It returns a Task of type HttpResponseMessage which represents a task that will be carried out in the future.
Note that this call will NOT actually send the message to the service, this is only a preparatory phase. If you are familiar with the Task Parallel Library then this should be no surprise to you - the call to the service will be made upon calling the Result property of the Task object. This post is not about the TPL so I will not go into any details here - if you are not familiar with the TPL but would like to learn about multipart messaging then read on and please just accept the provided code sample 'as is'. Otherwise there are a great number of sites on the net discussing the Task object and its workings.

Step 5: read the response from the server

Using the HttpResponseMessage object we can analyse the service response in great detail: status code, response content, headers etc. The response content can be of different types: byte array, form data, string, multipart, stream. In this example we will read the string contents of the message, again using the TPL. Add the following code to SendFileToServer(string fileFullPath):

HttpStatusCode statusCode = httpResponse.StatusCode;
HttpContent responseContent = httpResponse.Content;
if (responseContent != null)
{
    Task<String> stringContentsTask = responseContent.ReadAsStringAsync();
    String stringContents = stringContentsTask.Result;
}

It is up to you of course what you do with the string contents. Ideally we should include the web service call in a try-catch as service calls can throw all sorts of exceptions. Here is the final version of the method:

private static void SendFileToServer(string fileFullPath)
{
    FileInfo fi = new FileInfo(fileFullPath);
    string fileName = fi.Name;
    byte[] fileContents = File.ReadAllBytes(fi.FullName);
    Uri webService = new Uri(@"");
    HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Post, webService);
    requestMessage.Headers.ExpectContinue = false;

    MultipartFormDataContent multiPartContent = new MultipartFormDataContent("----MyGreatBoundary");
    ByteArrayContent byteArrayContent = new ByteArrayContent(fileContents);
    byteArrayContent.Headers.Add("Content-Type", "application/octet-stream");
    multiPartContent.Add(byteArrayContent, "this is the name of the content", fileName);
    requestMessage.Content = multiPartContent;

    HttpClient httpClient = new HttpClient();
    try
    {
        Task<HttpResponseMessage> httpRequest = httpClient.SendAsync(requestMessage, HttpCompletionOption.ResponseContentRead, CancellationToken.None);
        HttpResponseMessage httpResponse = httpRequest.Result;
        HttpStatusCode statusCode = httpResponse.StatusCode;
        HttpContent responseContent = httpResponse.Content;
        if (responseContent != null)
        {
            Task<String> stringContentsTask = responseContent.ReadAsStringAsync();
            String stringContents = stringContentsTask.Result;
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}

This concludes the .NET portion of our problem.
Let’s now see how the incoming message can be handled in a Java web service. So you have a Java web service which received the above multipart message. The solution presented below is based on a Servlet with the standard doPost method. The HttpServletRequest in the signature of the doPost method can be used to inspect the individual parts of the incoming message. This yields a collection which we can iterate through: @Override public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { Collection<Part> requestParts = request.getParts(); Iterator<Part> partIterator = requestParts.iterator(); while (partIterator.hasNext()) { } } If the message is not of type MultipartFormData then the collection of messages will be zero length. The Part object in the java.servlet.http namespace represents a section in the multipart message delimited by some string token, which we provided in the MultipartFormDataContent constructor. Now our goal is to specifically find the byte array message we named “this is the name of the content” in the .NET code. This name can be extracted using the getName() getter of the Part object. Add the following code to the while loop: Part actualPart = partIterator.next(); if (actualPart.getName().equals("this is the name of the content")) { } The Part object also offers a getInputStream() method that can be used later to save the byte array in a file. The file name we provided in the C# code will be added to the content-disposition header of the multipart message – or to be exact to the header of the PART of the message. Keep in mind that each individual message within the multipart message can have its own headers. We will need to iterate through the headers of the byte array message to locate the content-disposition header. 
Add the following to the if clause:

InputStream is = actualPart.getInputStream();
String fileName = "";
Collection<String> headerNames = actualPart.getHeaderNames();
Iterator<String> headerNamesIterator = headerNames.iterator();
while (headerNamesIterator.hasNext())
{
    String headerName = headerNamesIterator.next();
    String headerValue = actualPart.getHeader(headerName);
    if (headerName.equals("content-disposition"))
    {
    }
}

The last step of the problem is to find the file name within the header. The value of the content-disposition header is a collection of semicolon-separated key-value pairs. Within it you will find "filename=myfile.txt" or whatever file name was provided in the C# code. I have not actually found any ready-to-use method to extract exactly the filename, so my solution is a very basic one based on searching the full string. Add the below code within "if (headerName.equals("content-disposition"))":

String searchTerm = "filename=";
int startIndex = headerValue.indexOf(searchTerm);
int endIndex = headerValue.indexOf(";", startIndex);
fileName = headerValue.substring(startIndex + searchTerm.length(), endIndex);

So now you have access to all three ingredients of the message:

- The byte array in the form of an InputStream object
- The name of the byte array contents
- The file name

The next step would be to save the message in the file system, but that should be straightforward using the 'read' method of the InputStream:

OutputStream out = new FileOutputStream(f);
byte buf[] = new byte[1024];
int len;
while ((len = is.read(buf)) > 0)
{
    out.write(buf, 0, len);
}

...where 'is' is the InputStream presented above and 'f' is a File object where the bytes will be saved.

View the list of posts on Messaging here.
As we know, in Java POJO refers to a Plain Old Java Object. POJO and Bean classes in Java share some common features, which are as follows -

- Both classes must be public, i.e. accessible to all.
- Properties or variables defined in both classes must be private, i.e. they can't be accessed directly.
- Both classes must have a default constructor, i.e. a no-argument constructor.
- Public getters and setters must be present in both classes in order to access the variables/properties.

The only difference between the two is that Java makes Java Bean objects serializable so that the state of a bean class can be preserved if required. Because of this, a Java Bean class must implement either the Serializable or the Externalizable interface. This is why it is said that all JavaBeans are POJOs but not all POJOs are JavaBeans.

Example of a Java Bean:

public class Employee implements java.io.Serializable {
    private int id;
    private String name;
    public Employee() {}
    public void setId(int id) { this.id = id; }
    public int getId() { return id; }
    public void setName(String name) { this.name = name; }
    public String getName() { return name; }
}

Example of a POJO:

public class Employee {
    String name;
    public String id;
    private double salary;
    public Employee(String name, String id, double salary) {
        this.name = name;
        this.id = id;
        this.salary = salary;
    }
    public String getName() { return name; }
    public String getId() { return id; }
    public Double getSalary() { return salary; }
}
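The practical consequence of that difference can be shown with a few lines of code. The sketch below is illustrative (the class names are made up and the classes are trimmed to one field): writing a Serializable bean to an ObjectOutputStream succeeds, while the plain POJO fails with NotSerializableException.

import java.io.*;

// Bean-style class: implements Serializable, so its state can be persisted.
class EmployeeBean implements Serializable {
    private int id;
    public EmployeeBean() {}
    public void setId(int id) { this.id = id; }
    public int getId() { return id; }
}

// Plain POJO: does not implement Serializable, so object streams reject it.
class EmployeePojo {
    private int id;
    public EmployeePojo(int id) { this.id = id; }
    public int getId() { return id; }
}

public class SerializationDemo {
    // Returns true if the object can be written to an object stream.
    public static boolean canSerialize(Object o) {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        EmployeeBean bean = new EmployeeBean();
        bean.setId(1);
        System.out.println("bean serializable: " + canSerialize(bean));                // true
        System.out.println("pojo serializable: " + canSerialize(new EmployeePojo(1))); // false
    }
}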
Unit Testing Framework: Groups of Elements

To help provide a clearer overview of the Unit Testing Framework, this section organizes the elements of the UnitTesting namespace into groups of related functionality.

Elements Used for Data-Driven Testing

Use the following elements to set up data-driven unit tests. For more information, see Coding a Data-Driven Unit Test and How to: Configure a Data-Driven Unit Test.

Attributes Used to Establish a Calling Order

A code element decorated with one of the following attributes is called at the moment you specify. For more information, see Structure of Unit Tests.

Attributes Used to Identify Test Classes and Methods

Every test class must have the TestClass attribute, and every test method must have the TestMethod attribute. For more information, see Structure of Unit Tests.
libmaketmpfile (3) - Linux Man Pages

libmaketmpfile: create a temporary named file

NAME

pm_make_tmpfile() - create a temporary named file

SYNOPSIS

#include <netpbm/pm.h>

void pm_make_tmpfile(FILE ** fileP, const char ** filenameP);

EXAMPLE

This simple example creates a temporary file, writes some search patterns to it, then uses it as input to grep:

#include <netpbm/pm.h>

FILE * myfileP;
const char * myfilename;
const char * grepCommand;

pm_make_tmpfile(&myfileP, &myfilename);

fprintf(myfileP, "^account:\\s.*\n");
fprintf(myfileP, "^name:\\s.*\n");

fclose(myfileP);

asprintfN(&grepCommand, "grep --file=\"%s\" /tmp/infile >/tmp/outfile", myfilename);
system(grepCommand);
strfree(grepCommand);

unlink(myfilename);
strfree(myfilename);

DESCRIPTION

This library function is part of Netpbm(1)

pm_make_tmpfile() creates and opens a temporary file, returning to you a stream file handle for it and its name. pm_make_tmpfile() chooses a file name that is not already in use, with proper interlocking to make sure that it actually creates a file and opens the new file, as opposed to merely opening an existing file.

If you don't need to access the file by name, use pm_tmpfile() instead, because it's cleaner. With pm_tmpfile(), the operating system always deletes the temporary file when your program exits, if the program failed to clean up after itself.

The temporary file goes in the directory named by the TMPDIR environment variable. If TMPDIR is not set or is set to something unusable (e.g. too long), pm_make_tmpfile() falls back to the value of the standard C library symbol P_tmpdir (like the standard C library's tmpfile()). The name of the file within that directory is like myprog_blrfx, where myprog is the name of your program (arg 0) and the rest is an arbitrary discriminator.

If pm_make_tmpfile() is unable to create a temporary file, it issues a message to Standard Error and aborts the program.

HISTORY

pm_make_tmpfile() was introduced in Netpbm 10.27 (March 2005).
https://www.systutorials.com/docs/linux/man/3-libmaketmpfile/
ASF Bugzilla – Bug 9215: XML that contains a large amount of CDATA sections is parsed incorrectly. Last modified: 2004-11-16 19:05:39 UTC

For my work, I retrieve a large amount of data as an XML string and I use the DocumentBuilder to parse a ByteArrayInputStream containing this XML. The XML contains many CDATA sections and occasionally, depending upon the data, the document tree will have nodes that contain incorrect data. I have found that if I put crimson.jar in front of the xercesImpl.jar in the classpath, then the document tree comes out OK, but not if xercesImpl.jar is in front of the crimson.jar. Since we use such a large string of XML data, trying to have you reproduce it may be somewhat difficult, but I was able to make a small program that does produce these incorrect results:

import org.w3c.dom.*;
import javax.xml.parsers.*;
import java.io.*;

class xmltest {
    public static void main(String args[]) {
        StringBuffer xml = new StringBuffer();
        xml.append("<LETTERS>");
        for (int y = 0; y <= 100; y++) {
            xml.append("<LETTER><![CDATA[");
            for (int z = 0; z <= y; z++)
                xml.append((char) ((y % 26) + 97));
            xml.append("]]></LETTER>");
        }
        xml.append("</LETTERS>");
        byte[] b = xml.toString().getBytes();
        InputStream is = new ByteArrayInputStream(b);
        Document doc = null;
        try {
            if (is != null) {
                DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory.newInstance();
                DocumentBuilder docBuilder = docBuilderFactory.newDocumentBuilder();
                doc = docBuilder.parse(is);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        NodeList nodelist = doc.getDocumentElement().getChildNodes();
        for (int idx = 0; idx < nodelist.getLength(); idx++) {
            Node node = nodelist.item(idx);
            System.out.println(node.getFirstChild().getNodeValue());
        }
    }
}

At least in my testing, when the nodelist gets to the 65th item, the result for the node value is incorrect. Instead of the node containing the same letter, it is like a concatenation of many of the other nodes.
Thanks, Matt Havlovick, Consolidated Freightways

If changing the parser makes the problem go away, this may be a parser bug rather than a Xalan bug. Have you tried running your documents through the Xerces sample programs to see whether they're parsing correctly?

Yes, it seems to be a parser bug. The xercesImpl.jar file appears to have the problem, and because it is packaged with the Xalan download, I thought it might go here as a Xalan bug?

Nothing wrong with posting it as a Xalan bug as a first guess, but if it's clear that it's a Xerces malfunction, posting it there instead is the only way to get it fixed. Transferring to the Xerces project.
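A quick way to see what the correct output should look like is to run the same repro through a different XML parser. Below is a rough Python translation of the reporter's program (not part of the original report) using the standard library's minidom; every CDATA section should round-trip intact:

```python
# Rebuild the reporter's test document: 101 <LETTER> elements, each holding
# a CDATA section of y+1 repeated letters.
from xml.dom.minidom import parseString

parts = ["<LETTERS>"]
for y in range(101):
    letter = chr((y % 26) + 97)
    parts.append("<LETTER><![CDATA[%s]]></LETTER>" % (letter * (y + 1)))
parts.append("</LETTERS>")

doc = parseString("".join(parts))
nodes = doc.documentElement.childNodes
for y, node in enumerate(nodes):
    # Join child text/CDATA nodes in case the parser split them.
    value = "".join(child.nodeValue for child in node.childNodes)
    assert value == chr((y % 26) + 97) * (y + 1), (y, value)
print("all %d CDATA sections parsed correctly" % len(nodes))
```

A correct parser prints the success line; a parser with the reported defect would trip the assertion around the 65th item instead.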
https://bz.apache.org/bugzilla/show_bug.cgi?id=9215
Edison - thanks for the review! I've answered inline. (I've brought the technical review to the right thread from the one about marvin's repo separation.)

> Few questions:
> 1. About the "more object-oriented" CloudStack API python binding: Is the
> proposed api good enough?

As long as the CloudStack API retains its compatibility as it does now, by not altering required arguments, we are good to go.

The current implementation of VirtualMachine is bloated and does too many things, like SSH connections, NAT creation, security group creation etc. The new model will provide such special cases as factory hierarchies instead. So you'll have the regular VirtualMachine -> VpcVirtualMachine -> VirtualMachineWithNAT -> VirtualMachineWithIngress etc.

> For example,
> the current hand-written create virtual machine looks like:
>
> class VirtualMachine(object):
>     ...
>     @classmethod
>     def create(cls, apiclient, services, templateid=None, accountid=None,
>                domainid=None, zoneid=None, networkids=None, serviceofferingid=None,
>                securitygroupids=None, projectid=None, startvm=None,
>                diskofferingid=None, affinitygroupnames=None, group=None,
>                hostid=None, keypair=None, mode='basic', method='GET'):
>
> the proposed api may look like:
>
> class VirtualMachine(object):
>     def create(self, apiclient, accountId, templateId, **kwargs)
>
> The proposed api will look better than the previous one, and it's automatically
> generated, so it's easy to maintain. But as a consumer of the api, how do people
> know what kind of parameters should be passed in? Will you have an online
> document for your api? Or do you assume people will look at the api docs generated
> by CloudStack? Or why not make the api itself self-contained? For example, add
> docs before the create method:

All **kwargs will be spelt out as docstrings in the entity's methods. This is something I haven't got to yet; it's in the TODO list doc on the branch however.
I recognize the difficulty in understanding kwargs for someone looking at the API, and I will fix this before the merge. My concern, however, is that the factories are appropriately documented, since they are user-written. Those will need to be caught via review.

> 2. Regarding the data factories: with the proposed factories, in each test
> case, does the test writer still need to write the code to get data, such as
> writing code to get an account during setUpClass?

No, this is not required anymore. All data is represented as a factory, so to get account data you simply import the necessary factory. You don't have to imagine the structure of this data and json anymore.

from marvin.factory.data import UserAccount
...
def setUp():
    account = UserAccount(apiclient)

So those crufty json headers should altogether disappear.

> With the data factories, the code will look like the following?
>
> class TestFoo:
>     def setUpClass():
>         account = UserAccount(apiclient)
>         vm = UserVM(apiClient)
>
> And if I want to customize the default data factories, I should be able to
> use something like: UserAccount(apiclient, username='myfoo')?

Yes, this will create a new user account with an overridden username. You may override any attribute of the data this way. This, however, doesn't check for duplicates, so if a username 'myfoo' already exists, that account creation will fail. If you use the factory defaults, since it generates a random sequence, you won't have the problem of collisions.

> And the data factories should be able to be customized based on the test
> environment, right?
> For example, the current iso test cases are hardcoded to test against
> >, but it won't work for devcloud, or
> in an internal network. The ISO data factory should be able to return an url
> based on the test environment, thus iso test cases can be reused.

Yes, we'll have to create a LocalIsoFactory which represents an ISO available on the internal network. It is customizable.
Maybe we can represent it to look for a file within devcloud itself?

Thanks,

> On Wed, Oct 02, 2013 at 10:12:40PM +0530, Prasanna Santhanam wrote:
> > Once upon a time [1] I had propagated the idea of refactoring marvin to
> > make test case writing simpler. At the time, there weren't enough
> > people writing tests using marvin, however. Now, as focus on testing has
> > become much more important for the stability of our releases, I would
> > like to bring back the discussion and review the refactoring of
> > marvin which I've been doing in the marvin_refactor branch.
> >
> > The key goal of this refactor was to simplify test case writing. In
> > doing so I've transformed the library from its brittle hand-written
> > nature to a completely auto-generated set of libraries. In that sense,
> > marvin is much closer to cloudmonkey now.
> >
> > The two important changes in this refactor are:
> >
> > 1. data represented in an object-oriented fashion, presented as factories
> > 2. test case writing using entities and their operations rather than
> >    a sequence of disconnected API calls
> >
> > To see the full nature of this proposal I've updated the spec I put up
> > on the wiki:
> >
> > For a quick comparison I wrote a test for the VPC vm's lifecycle in
> > tools/marvin/marvin/test/test_vpc_life_cycle.py which one can compare
> > with the existing tests for vpc under
> > test/integration/component/test_vpc_vm_life_cycle.py
> >
> > These changes being 'architectural', so to speak, and in a way even
> > disruptive, I would like to merge this at the beginning of the
> > upcoming cloudstack release.
> >
> > This is only a small part of a larger change for marvin, which will be
> > moving to a more BDD-like implementation [2] where tests are written
> > using a gherkin-like language. But that will come later.
> >
> > I've also tried to disconnect marvin from depending on CloudStack's
> > build and repo.
> > This will help split marvin from CloudStack, which I
> > will discuss in a separate thread.
> >
> > [1]
> > [2]
> >
> > --
> > Prasanna.,
> >
> > ------------------------
>
> --
> Prasanna.,

------------------------
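For readers skimming the archive, the factory idea discussed above can be sketched in a few lines of plain Python. The names below are invented for illustration; this is not marvin's actual implementation:

```python
# Hypothetical sketch of a data factory: each call yields fresh,
# collision-free defaults, and any attribute can be overridden
# keyword-by-keyword (as in UserAccount(apiclient, username='myfoo')).
import itertools

class UserAccountFactory:
    _seq = itertools.count()  # monotonically increasing discriminator

    def __init__(self, apiclient, **kwargs):
        self.apiclient = apiclient
        # Defaults are generated; explicit keyword overrides win.
        self.username = kwargs.get("username", "user-%d" % next(self._seq))
        self.password = kwargs.get("password", "password")

a = UserAccountFactory(apiclient=None)
b = UserAccountFactory(apiclient=None)
c = UserAccountFactory(apiclient=None, username="myfoo")
assert a.username != b.username   # generated defaults cannot collide
assert c.username == "myfoo"      # override wins; may fail server-side if taken
print(a.username, b.username, c.username)
```

This mirrors the thread's point: generated defaults avoid name collisions, while an explicit override like username='myfoo' is honored but not checked for duplicates.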
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201310.mbox/%3C20131008132930.GA1437@cloud-2.local%3E
Things I miss in PHP: Function decorators

The problem: I want to build a set of classes accessible via a REST interface (one example here with ZF). My idea is to allow users to access my set of classes via REST. It's not so difficult. But the problem appears when I want to add an authorization engine to my REST web service. Public functions can be called freely, but sometimes the user must be authenticated (with a trusted cookie).

I have my class:

class Lib_Myclass {
    public function publicFunction($var1) {
        return "Hello {$var1}";
    }
    public function privateFunction($var1) {
        return "private Hello {$var1}";
    }
}

publicFunction can be called without login, but privateFunction only after login.

I have done something similar with Python and Google App Engine. With Python I use decorators, and it's very clean and easy:

class Myclass:
    def publicFunction(self, var1):
        return "Hello %s" % var1

    @private
    def privateFunction(self, var1):
        return "private Hello %s" % var1

And I define my decorator 'private':

def private(funcion):
    def _private(*list_args):
        if isValidUser():
            value = funcion(*list_args)
        else:
            value = 'You must be logged'
        return value
    return _private

Really simple and clean. But in PHP I don't know how to do it in a simple way. I don't want to do reflection on every call and check whether the function is allowed or not.

My solution: Instead of having one class with public and private functions, I divide it into two classes: one for the public functions and another one for the private ones (private here means logged-in sessions, not the private keyword in PHP).
I also create two empty interfaces, PublicAccess and PrivateAccess, and using PHP's instanceof I can throw an exception if a user without login tries to call any function in a class with the PrivateAccess interface:

interface PublicAccess {}
interface PrivateAccess {}

class Lib_Myclass1 implements PublicAccess {
    public function publicFunction($var1) {
        return "Hello {$var1}";
    }
}

class Lib_Myclass2 implements PrivateAccess {
    public function privateFunction($var1) {
        return "private Hello {$var1}";
    }
}

I prefer the Python solution.

Posted on December 9, 2009, in php. Bookmark the permalink. 7 Comments.

Have you ever tried looking at these PHP functions? and Using these two functions you can easily simulate argv/argc variable lists in PHP. I hope this helps.

With those functions I can have functions with n arguments, but I cannot get Python's decorators (at least as far as I know).

Yeah, PHP definitely doesn't have decorators or annotations; however, I have been playing around with the idea of using phpdoc comments to add decorators. phpdoc comments are accessible via the getDocBlock() method in the ReflectionMethod class. You would have to do some parsing, but it's not too hard to set up a system like that.

Yes. The only problem I see with this method is that you cannot call the functions directly. Each time you call a function you must use reflection, and I don't know the performance impact of that solution. ZF uses something similar to build XML-RPC servers, and Zend Studio uses phpdoc to allow autogenerating WSDL files from a class.

I usually solve this problem at a higher level of the application than the class itself. I have some common 'dispatch' function that I pass the module name, function, and params to. It then checks an access control list to see if that user has access to execute that function on that module. If not, an authentication exception is thrown. It's really the only way to do it without an extra function call for checking authentication in each function you want to protect.
Internally, Python's decorators are the "extra function call" that you mentioned, but with an elegant interface. As you said, creating a higher level to dispatch functions is another possibility. I also use it sometimes, but these days I prefer to keep things as simple as I can (less code, fewer mistakes ;) ). Those kinds of wrappers turn my classes into something different from a simple plain class. Maybe it's difficult to explain. I want to create classes with 'new', or maybe with a simple factory or singleton, and call them as in the examples in chapter 1 of PHP's OO documentation. Something that doesn't need any documentation.

Pingback: Inject dependencies via PhpDoc « Gonzalo Ayuso | Web Architect
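For completeness, the post's Python decorator can be made fully runnable. The isValidUser() stub and the logged_in flag are additions made here so the example is self-contained; they are not part of the original post:

```python
# Runnable version of the post's 'private' decorator.
logged_in = False

def isValidUser():
    # Stub standing in for a real session/cookie check.
    return logged_in

def private(funcion):
    def _private(*list_args):
        if isValidUser():
            return funcion(*list_args)
        return 'You must be logged'
    return _private

class Myclass:
    def publicFunction(self, var1):
        return "Hello %s" % var1

    @private
    def privateFunction(self, var1):
        return "private Hello %s" % var1

m = Myclass()
print(m.publicFunction("world"))   # -> Hello world
print(m.privateFunction("world"))  # -> You must be logged
logged_in = True
print(m.privateFunction("world"))  # -> private Hello world
```

Note that the decorator wraps the method once at class-definition time, but the login check runs on every call, which is exactly the per-call authorization the post is after.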
http://gonzalo123.com/2009/12/09/things-i-miss-in-php-function-decorators/
Created on 2008-11-01 14:59 by robwolfe, last changed 2011-11-16 11:45 by ezio.melotti.

I'd like to propose adding a complete example regarding scopes and binding resolution to the execution model description. A week doesn't go by on pl.comp.lang.python without a question about the UnboundLocalError problem. I'm getting tired of answering it. ;-) It does not have to look as verbose as my (attached) example, but please add some example which will clarify this issue.

Your example seems too verbose and diffuse. Perhaps something more focused on what people do wrong would be more helpful. I presume you mean something like this -- with or without x = 2 before the def.

>>> def f():
...     print(x)
...     x = 1
...
>>> f()
Traceback (most recent call last):
  File "<pyshell#31>", line 1, in <module>
    f()
  File "<pyshell#30>", line 2, in f
    print(x)
UnboundLocalError: local variable 'x' referenced before assignment

What are the other ways people get the error?

People seem to understand that they cannot use a variable before its definition. But this changes dramatically when they come across nested functions. They don't understand when a variable can be resolved from the outer scope and when it cannot, e.g.:

def outer():
    x = 1
    def inner1():
        print(x)
    def inner2():
        print(x)
        # [... some instructions (maybe a lot) ...]
        x = 1

They are always confused why `inner1` works but `inner2` doesn't.

The FAQ for this was much improved in 2009 (issue 7290).

To support the claim that this keeps biting people, at least the following bug reports all came from people misunderstanding this: issue 10043, issue 9049, issue 7747, issue 7276, issue 6833, issue 5763, issue 4109 (understood the effect of =, surprised by +=), issue 972467, issue 850110, issue 463640.

These are just the people who were persistent enough to open a bug (and in most cases managed to produce a minimal example); many more ask on c.l.p, StackOverflow (>50 hits for UnboundLocalError, many of which are this exact issue) etc., or just give up.
[Interesting point: people mostly complain when the unbound reference occurs textually *before* the assignment (though there's selection bias here), and many of them complain that things happen "out of order". It seems half the misunderstanding is that people expect variables to *become* localized when first assigned - they don't understand it's a static decision affecting all occurrences in a function.]

The central problem, I believe, is that when people try to modify a non-local var and get

UnboundLocalError: local variable foo referenced before assignment

their mental model of Python scopes is *wrong*, so the error message is *useless* for them (what 'local variable foo' is it talking about?), and they have no idea where to look next. Also, I'm afraid people just won't internalize this issue until it bites them at least once (I've seen a Python course where I had explained this, with a print-before-assignment example, and 2 days later a student was bitten by the exception and was at a loss). Therefore, I think providing a clear learning path from UnboundLocalError is crucial.

==> I propose (i.e. am working on patch(es)) attacking it at many points:

(1) Expand the UnboundLocalError message to point to help('NAMESPACES') [Execution Model → Naming and Binding] and/or the FAQ. (Requirement IMHO: use a help() ref that can be followed from a Python prompt on an offline machine; not sure if the FAQ can work this way.)

(1B) Ideally, detect if the var is bound in an outer/global scope and suggest help('nonlocal') / help('global') appropriately.

(1C) Ideally, improve UnboundLocalError to explain "foo is local throughout the function because it's bound at line 42".

(2) Add an example to the Naming and Binding section. Currently, it's a bunch of legalese, defining 7 terms(!) before you get to the meat. Mortal users won't survive that. Being the language spec, the precise legalese must stay there; but it'd be good to prepend it with a human-readable blurb and example regarding this issue.
(3) Improve the tutorial. Sections 4.6 [Defining functions] and 9.2 [Scopes and Namespaces] are relevant. 4.6 mentions the issue of assignment to a global, but neither section has a clear example. And 9.2 is scary IMHO; I'll see if I can make it any better...

(4) Add examples to the documentation of global & nonlocal? Not clear if helpful, probably too much.
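The behavior at issue, in a minimal runnable form (the nonlocal fix shown applies to Python 3):

```python
# Assignment anywhere in a function makes the name local *throughout* that
# function -- a static decision, not one made "in order" at run time.
def broken():
    try:
        print(x)          # raises: x is local here because of the line below
    except UnboundLocalError as e:
        print("caught:", type(e).__name__)
    x = 1

def fixed():
    x = 1
    def inner():
        nonlocal x        # opt back in to the enclosing binding
        x += 1
    inner()
    return x

broken()                  # prints: caught: UnboundLocalError
print(fixed())            # prints: 2
```

broken() fails even though the read occurs textually before the assignment is ever executed, which is exactly the "out of order" surprise described above; fixed() shows the one-line nonlocal remedy.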
https://bugs.python.org/issue4246
In some deployments, for example when running on FUSE or using some network-based VFS implementation, the filesystem operations might add up to a significant fraction of preamble build time. This change allows us to track time spent in FS operations to better understand the problem.

Hmm, just a few curious questions from the sidelines. Why a "custom system" instead of something -ftime-trace based? How much overhead does this introduce, especially for normal use-cases? What's the gain? The information is very coarse, and general-purpose system profiling tools should give you much better information regarding file system perf.

Thanks, LG in general. I suppose the extra timing overhead should be negligible considering the preamble build times, but it would be nice to see some numbers if you have any. Apart from that, I've raised the concern around multiple preamble builds finishing in parallel and poisoning the first_build metric; I wanted to raise it here again to make sure it isn't lost in the details.

nit: Timer(std::move(Timer))

i guess this is negligible, but timing close operations too shouldn't hurt?

i don't think we need this one

I don't think mentioning FS here is worth it. Can we rather emphasize this being an accumulating timer?

what about making all of this an implementation detail of Preamble.cpp? we can lift it up once we have other places that want to make use of it.

maybe put a FIXME:

nit: we use /// for public comments.

what about returning an int with ms granularity?

what about just storing double/int?

i feel like this deserves the FS-specific comment mentioned above, maybe something like: "This will record all time spent on IO operations in \p Timer."

i don't think we ever want concurrent access to Timer, i.e. startTime should never be called without a matching call to stopTime first. passing it in as a shared_ptr somehow gives the feeling that it might be shared across multiple objects, which might do whatever they want with the object.
maybe just pass in a reference?

what about only conditionally doing these when Stats is not null?

why use an expensive llvm::Timer if all we care about is walltime?

it might be worth leaving some comments around this TimedFS being exposed to the outside through PreambleCallbacks. We provide access to Preprocessor, which keeps the underlying FileManager alive. Today all of this happens in serial, hence all is fine, but we were discussing the possibility of performing the indexing and preamble serialization in parallel; it might result in some surprising race conditions if we do that.

nit: again /// (i see that the fields of PreambleData don't follow the style here, but it's the outlier)

nit: maybe make this the last parameter and default to nullptr to get rid of changes in tests.

can we move all of these into an anonymous namespace instead?

what about just tracking the total build time here? we can get the ratio afterwards.

nit: llvm style doesn't really use const if the parameter is being copied anyway

this can still be called in parallel as we have multiple preamble threads. moreover, this also makes "first_build" tracking somewhat tricky. do you think this metric would still be useful in the presence of such noise? e.g. a respawning clangd instance might receive addDocument requests for all the previously open files, and the first preamble to build might've benefited a lot from cached file IO that was done by other preamble builds. i suppose we can have something like the following to at least address the issue of multiple preamble threads accessing FirstReport:

static bool ReportedFirstBuild = [&] {
  PreambleBuildFilesystemLatency.record(Stats.FileSystemTime, "first_build");
  // I think it's fine to report twice here, as we've both performed a first
  // build for a file, and it was the first build of the clangd instance.
  return true;
}();
llvm::StringLiteral Label =
    IsFirstPreamble ? "first_build_for_file" : "rebuild";
PreambleBuildFilesystemLatency.record(Stats.FileSystemTime, Label);
if (Stats.TotalBuildTime > 0) // Avoid division by zero.
  PreambleBuildFilesystemLatencyRatio.record(
      Stats.FileSystemTime / Stats.TotalBuildTime, Label);

nit: drop the const per style.

nit:

if (!LatestBuild)
  return;
reportPreambleBuild(Stats, IsFirstPreamble);
if (isReliable(LatestBuild->CompileCommand))
  HeaderIncluders.update(FileName, LatestBuild->Includes.allHeaders());

addressed review comments

In D121712#3383944, @Trass3r wrote:
> Hmm just a few curious questions from the sidelines. Why a "custom system" instead of something -ftime-trace based?

I'm not sure if I understand. -ftime-trace is a one-off. I want to monitor these values across multiple clangd instances, over time.

> How much overhead does this introduce, esp. for normal use-cases?

On my machine it takes about 3.5 milliseconds to do 100k startTimer(); stopTimer(); calls in a loop. Building the preamble for SemaOverload.cpp (a random file I checked) takes about ~2200 such calls. Basically, the impact of this should not be noticeable.

> What's the gain? The information is very coarse and general-purpose system profiling tools should give you much better information regarding file system perf.

The idea is to use this to monitor large deployments of clangd on many machines over a long period of time. One question to answer is whether some sort of prefetching of header files (with parallel I/O) would be beneficial. That depends on what fraction of the time we spend waiting on serial file reads and how warm the caches are, on average. It will also help us catch regressions, like one we had recently (that went unnoticed for quite a while) where adding some -fmodule-map-file flags to do a layering check caused a lot of stat() calls, which can be quite expensive. Does that make sense?
Does it answer your questions?

Before I make the change, let me clarify: are you suggesting moving the whole TimedFS into Preamble.cpp? Definitely possible.

IMHO a FIXME should be actionable. This one is not - we shouldn't do it unless something changes. It's just a note that if someone ever wonders "could I just move this to support/ and re-use it", the answer is yes. Does that make sense?

We could, but why? llvm::Timer returns WallTime as a double, so this is somewhat consistent with it. Not a strong argument, of course, since that consistency doesn't really matter, but I'm not sure what benefit using int here offers?

Again, why? (I'm not strictly against it, just trying to understand why you're asking for this.)

It's not about concurrency, it's about lifetime. This timer needs to have the same lifetime as the entire VFS, which is also ref-counted.

Hmm...I added a comment, but nothing really changes here, right? VFS in general is not thread safe, so if we add parallelism we need to either make sure no file access happens there or make all VFSs used here thread safe.

I'd rather not, unless you insist. Besides not having to modify tests (which I already did anyway), what's the benefit of having it be the default? Do you think it's more readable?

Done. I moved them into an anonymous namespace, but I'm not sure what you mean by "instead"?

How would you get the ratio then?

Right, and I'll never get used to that. Sorry ;-)

That's an excellent point. Fixed. I do believe that this extra information is worth it, even if it's noisy. If the fact that we're building multiple TUs in parallel, and thus essentially have parallel file reading, significantly improves this metric, then perhaps the lack of parallel reads is not such a big deal after all.

Hmm...is that actually documented somewhere? There's definitely many cases of "const bool" in the LLVM codebase. I think the "const" improves readability.

Yes, exactly.
as neither the FS nor the walltimer is needed by anything else (and we probably want to be more cautious when we want to make use of them in other components).

SG. it's mostly personal preference i suppose; it just feels like reasoning about discrete durations is easier than doubles. feel free to leave as is.

it just feels easier to reason about builtin types than a template alias like std::chrono::steady_clock::duration, at least on the interface level. but again, probably just a personal preference that I don't feel strongly about, so feel free to ignore. (usually with "what about"s i try to signal this :D)

Right, I've noticed that as I was going through the rest of the review, but forgot to delete these as it was split into two days. sorry for the churn.

yes, i think that would be a nice simplification.

right. in theory we have that more loudly spelled out by having a ThreadSafeFS, but as you pointed out, that wouldn't be the only thing to consider when such a shift happens.

is this call intentional?

right, i think it's more readable, and moreover it will reduce the need for typing that parameter more times in the future (mostly in tests again). at the very least, what's the reason for not inserting it at the last position but rather before PreambleCallback?

instead of having static in front of the void reportPreambleBuild :D (sorry, the comment was definitely misplaced).

we could just divide the distributions point by point. it isn't as accurate since we won't have the exact association, but it's something we've been doing in the past. but no matter what, i forgot that we actually track the total build time through the span in buildPreamble, so nvm. no need to give up the accuracy here.

we can drop the static now.

doesn't this need to be at least static?

> is that actually documented somewhere

nothing that I can find either. There's definitely many cases of "const bool" in the LLVM codebase. I think the "const" improves readability.
yes, but I think the majority is still not having "const" in front of bool; it's at least the convention in clangd. I also agree that the const improves readability, but for a local variable it carries much less importance, and being consistent with the majority of the cases here is a lot more important, because seeing occurrences of both locals with const and non-const will eventually make the reasoning hard and each case surprising. if you think there's enough value in having consts for locals for readability, i think we should figure out a way to make the codebase (at least the clangd part) consistent with consts first.

more review comments

> It's not about concurrency, it's about lifetime. This timer needs to have the same lifetime as the entire VFS, which is also ref-counted.

Right, I've noticed that as I was going through the rest of the review, but forgot to delete these as it was split into two days. sorry for the churn.

> yes, i think that would be a nice simplification.

Done.

Oops, no, it's not.

OK, I made it default to nullptr. The logic behind it not being last was that it's usual (though not a hard rule) for callbacks to be the last argument, probably to make passing lambdas look nicer. Not really important.

OK, I'll drop const for now then.

thanks, lgtm! sorry for the long round trip.

nit: double slashes instead of triple now (as it's no longer public interface), sorry :( same for members

fix comments
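The patch itself is C++ inside clangd, but the accumulating-timer idea under review is easy to sketch in a few lines of Python. The names below are hypothetical, not clangd's API:

```python
# Wrap filesystem calls so each one adds its wall time to one shared counter,
# analogous to the review's TimedFS wrapping a VFS.
import os
import tempfile
import time

class TimedFS:
    def __init__(self):
        self.elapsed = 0.0  # total seconds spent inside wrapped FS operations

    def _timed(self, fn, *args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            self.elapsed += time.monotonic() - start

    def open(self, path, mode="r"):
        return self._timed(open, path, mode)

    def stat(self, path):
        return self._timed(os.stat, path)

fs = TimedFS()
fd, name = tempfile.mkstemp()
os.close(fd)
with open(name, "w") as f:
    f.write("hello")

with fs.open(name) as f:    # timed
    assert f.read() == "hello"
fs.stat(name)                # timed
os.remove(name)
print("time spent in FS operations: %.6f s" % fs.elapsed)
```

As with the nit raised above about close operations, only the calls you wrap are counted; time spent in read() after open() returns, or in an unwrapped close(), never reaches the counter.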
https://reviews.llvm.org/D121712?id=417010
I'm trying to build two functions using PyCrypto that accept two parameters, the message and the key, and then encrypt/decrypt the message.

I found several links on the web to help me out, but each one of them has flaws: this one at codekoala uses os.urandom, which is discouraged by PyCrypto. Moreover, the key I give to the function is not guaranteed to have the exact length expected. What can I do to make that happen? Also, there are several modes; which one is recommended? I don't know what to use :/ Finally, what exactly is the IV? Can I provide a different IV for encrypting and decrypting, or will that produce a different result?

Here's what I've done so far:

from Crypto import Random
from Crypto.Cipher import AES
import base64

BLOCK_SIZE = 32

def encrypt(message, passphrase):
    # passphrase MUST be 16, 24 or 32 bytes long, how can I do that ?
    IV = Random.new().read(BLOCK_SIZE)
    aes = AES.new(passphrase, AES.MODE_CFB, IV)
    return base64.b64encode(aes.encrypt(message))

def decrypt(encrypted, passphrase):
    IV = Random.new().read(BLOCK_SIZE)
    aes = AES.new(passphrase, AES.MODE_CFB, IV)
    return aes.decrypt(base64.b64decode(encrypted))

You may need the following two functions: pad (used when encrypting) and unpad (used when decrypting), for when the length of the input is not a multiple of BLOCK_SIZE.

BS = 16
pad = lambda s: s + (BS - len(s) % BS) * chr(BS - len(s) % BS)
unpad = lambda s: s[:-ord(s[len(s)-1:])]

So you're asking about the length of the key? You can use the md5sum of the key rather than using it directly.

More, according to my little experience of using PyCrypto, the IV is used to mix up the output of an encryption when the input is the same, so the IV is chosen as a random string, used as part of the encryption output, and then used again to decrypt the message.

And here's my implementation; hope it will be useful for you:

import base64
from Crypto.Cipher import AES
from Crypto import Random

class AESCipher:
    def __init__(self, key):
        self.key = key

    def encrypt(self, raw):
        raw = pad(raw)
        iv = Random.new().read(AES.block_size)
        cipher = AES.new(self.key, AES.MODE_CBC, iv)
        return base64.b64encode(iv + cipher.encrypt(raw))

    def decrypt(self, enc):
        enc = base64.b64decode(enc)
        iv = enc[:16]
        cipher = AES.new(self.key, AES.MODE_CBC, iv)
        return unpad(cipher.decrypt(enc[16:]))
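As a side note, the pad/unpad helpers from the answer can be sanity-checked on their own, with no PyCrypto installed, since they are pure string manipulation:

```python
# Stand-alone check of the answer's PKCS#7-style pad/unpad pair.
BS = 16
pad = lambda s: s + (BS - len(s) % BS) * chr(BS - len(s) % BS)
unpad = lambda s: s[:-ord(s[len(s)-1:])]

for msg in ["", "a", "hello world", "exactly 16 chars"]:
    padded = pad(msg)
    assert len(padded) % BS == 0   # always a whole number of blocks
    assert unpad(padded) == msg    # round-trips losslessly
print("pad/unpad round-trip OK")
```

Note that even an input whose length is already a multiple of BS gets a full extra block of padding; that is what lets unpad unambiguously recover the original length.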
https://codedump.io/share/rrHsQGEnCMs2/1/encrypt-amp-decrypt-using-pycrypto-aes-256
Have you been to AnnualCreditReport.com to order a free copy of your credit report from each of the three major credit bureaus? If you haven't, you should do so once a year. If you have, then you're aware that AnnualCreditReport.com does NOT offer a free credit score along with your free credit report.

Lenders use credit scores to make several decisions: everything from how much interest to charge to the size of your down payment to the length of the loan period. Check the details in your credit report periodically, but don't ignore your credit score, as it reveals a clearer picture of how lenders see you.

Too often, "free" scores come at a hefty price. Many companies claim they'll let you check your credit score online for free IF you sign up for a trial membership. After the trial period ends, you're charged a monthly fee. Fortunately, a handful of companies offer a no-strings-attached, complimentary credit score. Credit Karma is one of them. It's not perfect, but in my opinion, Credit Karma is the best free credit score site.

What's to like about Credit Karma?

Totally free: Register for a Credit Karma account, and leave your credit card tucked away in your wallet. You won't need it. As you may already know, countless credit scoring models exist. Therefore, your score depends on the algorithm used to calculate it. When you sign up for a Credit Karma account, you'll gain access to three different credit scores: a TransRisk New Account Score, a VantageScore, and an auto insurance score.

- The TransRisk score is computed by TransUnion using their own proprietary scoring system.
- The VantageScore was created by the three major credit reporting agencies (Equifax, Experian, and TransUnion). According to Experian.com, "VantageScore is the most consistent, predictive and accurate measure of consumer creditworthiness in the market."
- The Auto Insurance Score was designed to predict the likelihood that you'll file a claim against an insurer.
Believe it or not, people with mediocre credit cost insurance companies more money than those with stellar credit.

Personalized advice: Credit Karma not only gives you free credit scores, but it also tells you how to improve them. One of my favorite features of the web site is the Credit Report Card, an easy-to-read summary of your credit report. Based on your credit score, you’ll receive an overall letter grade ranging from A to F. Additionally, factors that impact your score, such as your percentage of on-time payments, number of applications for new credit, and average age of credit lines, are given their own letter grade. Below is an example of my Credit Report Card.

Use the My Credit Simulator to predict how certain actions will affect your credit score. Let’s say I close my oldest credit card, which happens to be with Bank of America, because I hate them. Although the simulator estimates my score will only drop eighteen points, the move would put me dangerously close to the “B” credit range. As much as I despise BoA, I’ll leave the account open until they charge an annual fee.

Daily credit monitoring: Credit Karma will email you if there are any significant changes to your TransUnion credit report. Address issues immediately if you’re notified of suspicious activity.

Unlimited credit scores: Check your credit score as many times as you like. Inquiries on your credit report only hurt your score when you apply for credit. So don’t worry about how often you view your credit score.

Safe and secure: Credit Karma uses bank-level security to protect your personal information.

What’s not to like about Credit Karma?

Uses information from only one of the three major credit bureaus: Credit scores provided by Credit Karma are based solely on information in your TransUnion credit report. Some creditors report to only one or two bureaus. Without scores from Experian and Equifax, you’re left with an incomplete picture of your credit risk profile.
Provides lesser-used credit scores: According to Zillow.com, 70% to 80% of lenders use a version of the FICO scoring model to make credit decisions. As mentioned earlier, you’ll receive three credit scores, none of which is a FICO score.

Need to provide your social security number: I don’t know about you, but I’m stingy with my social security number. I’ve heard too many horrifying stories about people who were victims of identity theft. But honestly, I signed up for a Credit Karma account on October 18, 2008, and no one has stolen my identity. Credit Karma explains on their site how your social security number is used and stored.

So, how does Credit Karma make money?

Similar to most for-profit businesses, Credit Karma likes to earn money. Instead of charging consumers for their services, they use information available in your credit report to recommend targeted offers supported by their advertisers.

The Verdict

If you borrow money or plan on borrowing money, sign up for a Credit Karma account. Even though the company has a few shortcomings, I’ve been happy with their service for almost four years. Track your score on Credit Karma’s web site, and follow their suggestions to improve your credit history over time. When you’re in the market for a loan, go to MyFico.com to pull your TransUnion and Equifax FICO scores. I didn’t forget Experian. Shamefully, they don’t make FICO scores available to consumers.

Disclosure: If you sign up for Credit Karma using a link on my web site, I’ll receive a commission and fully appreciate the love.
http://youhavemorethanyouthink.org/check-your-credit-score-online-for-free/?replytocom=12447
This tutorial intends to provide a comprehensive but relatively short introduction to Reason. Reason is a programming language built on top of OCaml. It provides functional and object-oriented features with type safety and a focus on performance. It was created at Facebook. Its syntax is similar to JavaScript's; the intention is to make interoperation with JavaScript and adoption by JavaScript programmers easier. Reason can access both the JavaScript and OCaml ecosystems.

OCaml was created in 1996. It is a functional programming language with inferred types.

The Reason website contains an online playground. It lets you play with the language and see what the generated JavaScript looks like. It can also convert from OCaml to Reason.

Why

- In JavaScript, type annotations, linting, and unified formatting are provided by external dependencies such as Flow, TypeScript, ESLint, or Prettier. Reason provides these features out of the box. This makes the development process more streamlined and convenient.
- Reason offers support for React with ReasonReact. It also supports JSX syntax (the HTML-like syntax used in React) out of the box.
- Reason also has the ability to generate native binaries. The generated code is performant: there is no VM overhead. Producing a single binary also simplifies the deployment process.

How it works

Reason is compiled to OCaml’s abstract syntax tree, which makes Reason a transpiler. OCaml cannot be run directly in the browser, but the AST can be converted to various targets. BuckleScript can be used to compile that AST to JavaScript. It also provides the interop between the OCaml and JavaScript ecosystems. BuckleScript is extremely fast and generates readable JavaScript. It also provides a Foreign Function Interface (FFI) to allow interoperability with existing JavaScript libraries. Check the BuckleScript benchmarks. BuckleScript is used at Facebook by the Messenger team and at Google by the WebAssembly spec interpreter. Check out the BuckleScript demo.
BuckleScript was created by Hongbo Zhang.

Hello Reason

We will use BuckleScript to generate a Reason project. The tool provides ready-to-use project templates known as themes. Let’s start by installing bs-platform globally:

npm install -g bs-platform

We can now use the bsb binary provided by bs-platform to generate a project scaffold. We will use the basic-reason template to start with the most basic Reason project structure.

bsb -init reason-1 -theme basic-reason
Making directory reason-1
Symlink bs-platform in /Users/zaiste/code/reason-1

Here’s the Reason directory structure generated from the basic-reason template via BuckleScript:

.
├── README.md
├── bsconfig.json
├── lib
├── node_modules
├── package.json
└── src
    └── Demo.re

bsconfig.json contains the BuckleScript configuration for a Reason project. It lets you specify the files to compile via sources, BuckleScript dependencies via bs-dependencies, additional flags for the compiler, and more.

The next step is to build the project. This will take the Reason code and pass it through BuckleScript to generate JavaScript. By default the compiler targets Node.js.

npm run build

> bsb -make-world

ninja: Entering directory `lib/bs'
[3/3] Building src/Demo.mlast.d
[1/1] Building src/Demo-MyFirstReasonml.cmj

Finally, we can run our application by using node on the files generated by BuckleScript.

node src/Demo.bs.js
Hello, BuckleScript and Reason!

Syntax 101

In this section, I will go over the syntax elements that I found peculiar, new, or just different.

Modules

In Reason, files are modules. There are no require or import statements as in JavaScript or similar programming languages. Module definitions must be prefixed with the module name to be used externally. This feature comes from OCaml. As a result, you can freely move module files around the filesystem without the need to modify the code.

Functions

Functions are defined using let and =>.
let greet = name => Js.log("Hello, " ++ name ++ "!");
greet("Zaiste");

The ++ operator is used to concatenate strings.

A function’s input arguments can be labelled. This makes the function invocation more explicit: passed-in values no longer need to follow the argument order from the function definition. Prefixing an argument name with ~ makes it labelled.

let greet = (~name, ~location) =>
  Js.log("Hello, " ++ name ++ "! You're in " ++ location);

greet(~location="Vienna", ~name="Zaiste");

Data Structures

Variants

A variant is a data structure that holds a value from a fixed set of possible values. This is also known as a tagged union, a disjoint union, or an algebraic data type. Each case in a variant must be capitalized. Optionally, it can receive parameters.

type animal =
  | Dog
  | Cat
  | Bird;

Records

This is a record:

let p = { name: "Zaiste", age: 13 };

Records need an explicit type definition:

type person = { name: string, age: int };

In the scope of a module, the type will be inferred: the p binding will be recognized as the person type. Outside of the module, you can reference the type by prefixing it with the file name:

let p: Person.person = { name: "Sean", age: 12 };

There is a convention to create a module per type and name the type t, i.e. Person.t instead of Person.person, to avoid the repetition.

Async Programming & Promises

There is built-in support for promises via BuckleScript, provided as the Js.Promise module. Here’s an example of making an API call using the Fetch API:

Js.Promise.(
  Fetch.fetch(endpoint)
  |> then_(Fetch.Response.json)
  |> then_(json => doSomethingOnResponse(json) |> resolve)
);

You need to use then_ because then is a reserved word in OCaml.

Pattern Matching

Pattern matching is a dispatch mechanism based on the shape of the provided value. In Reason, pattern matching is implemented with the switch statement. It can be used with a variant type or as a destructuring mechanism.
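For contrast, writing this kind of shape-based dispatch by hand in JavaScript requires explicit checks, and the runtime gives no exhaustiveness guarantee. The sketch below is purely illustrative: the string-tag encoding is an assumption for readability, not how BuckleScript actually represents variants.

```javascript
// Hand-written JavaScript dispatch on a tagged value, mimicking a
// Reason variant. Tags are plain strings here (an illustrative
// encoding only; BuckleScript's real representation differs).
function sound(pet) {
  switch (pet.tag) {
    case "Dog":  return "woof";
    case "Cat":  return "meow";
    case "Bird": return "chirp";
    default:
      // Reason's compiler would reject a non-exhaustive match at
      // compile time; in JavaScript we can only fail at runtime.
      throw new Error("non-exhaustive match: " + pet.tag);
  }
}

var noise = sound({ tag: "Cat" });
```

Note how the "missing case" error only surfaces when the bad value is actually passed in, which is exactly the class of bug Reason's exhaustive switch rules out statically.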
switch pet {
| Dog => "woof"
| Cat => "meow"
| Bird => "chirp"
};

We can use pattern matching for list destructuring:

let numbers = ["1", "2", "3", "4"];

switch numbers {
| [] => "Empty"
| [n1] => "Only one number: " ++ n1
| [n1, n2] => "Only two numbers"
| [n1, _, n3, ...rest] => "At least three numbers"
};

Or we can use it for record destructuring:

let project = {
  name: "Huncwot",
  size: 101101,
  forks: 42,
  deps: [{name: "axios"}, {name: "sqlite3"}]
};

switch project {
| {name: "Huncwot", deps} => "Matching by `name`"
| {deps: [{name: "axios"}, ...rest]} => "Matching by one of `deps`"
| project => "Any other situation"
};

Optional values

option is a built-in variant in Reason describing “nullable” values:

type option('a) = None | Some('a);

Varia

- unit means “nothing”
- unit => unit is the signature of a function that doesn’t accept any input parameters and doesn’t return any value; it is mostly used for callback functions

React in Reason

Hello ReasonReact

ReasonReact is Reason’s built-in solution for creating React applications. Let’s create a ReasonReact project using BuckleScript and its react template:

bsb -init reasonreact-1 -theme react

This method is recommended by the Reason team for scaffolding ReasonReact projects. It is also possible to use yarn with the reason-scripts template for a more complete starting point.

ReasonReact provides two types of components: statelessComponent and reducerComponent. Contrary to stateless components, reducer components are stateful, providing Redux-like reducers.

let s = ReasonReact.string;

let component = ReasonReact.statelessComponent("App");

let make = (~message, _children) => {
  ...component,
  render: _self => (s(message))
};

As described earlier, ~ designates a labelled argument, which lets you freely order the function’s input parameters. A leading _ in a binding name tells the compiler that the argument isn’t used in the body of that function. The spread operator (...) alongside component means that we extend an existing component.
In this example we also overwrite the render function. JSX in Reason is stricter than in React: we need to explicitly wrap strings with ReasonReact.string(). For convenience, I’ve created a shorter binding called s to use inside JSX blocks.

Building a non-trivial ReasonReact app

Let’s build a ReasonReact application that goes beyond displaying predefined data. We will create a GitHub viewer for trending repositories. The intention is to showcase how to integrate with an external API, how to manage state, and how to use React’s lifecycle methods.

For the purpose of this example we will use reason-scripts to bootstrap our Reason project:

yarn create react-app reasonreact-github --scripts-version reason-scripts

Install dependencies:

cd reasonreact-github
yarn

Start it with:

yarn start

Repository is the central concept in this application. Let’s start by defining a type to describe that entity. We will put it inside a separate module called Repo.

type t = {
  name: string,
  size: int,
  forks: int
};

From now on we can refer to this type as Repo.t from any Reason file in our application, without the need of requiring it.

Managing State

We’ve already seen a stateless component. Now let’s create a component that has state. In our context it will be the RepoList component, which manages a list of trending repositories fetched from GitHub’s API. Let’s start by defining the type for the state managed by the RepoList component:

type state = { repos: list(Repo.t) };

There is, however, a catch. Initially, before the list of trending repositories is fetched from the GitHub API, repos is undefined. Reason’s type system doesn’t allow undefined values, though. We could model the initial state with an empty list, but this is not optimal: an empty list could also mean that our query for fetching trending repositories didn’t return any results. Let’s use Reason’s optional values to deal with that situation.
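In JavaScript terms, the distinction we are after here is the difference between "not fetched yet" and "fetched, but empty". A minimal hand-rolled sketch, where null stands in for Reason's None (this encoding is an illustrative assumption, not ReasonReact code):

```javascript
// null stands in for None ("not fetched yet"); an array stands in
// for Some(list) ("fetched", possibly empty). Conflating the two
// states into a single empty array would lose information.
function describeRepos(repos) {
  if (repos === null) return "Loading repositories...";
  if (repos.length === 0) return "Empty list";
  return repos.length + " repositories";
}
```

The Reason version below achieves the same three-way distinction, but the compiler forces every consumer of the state to handle the None case explicitly.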
type state = { repos: option(list(Repo.t)) };

The next step is to define the possible actions for this component. In ReasonReact, actions are represented as variants. For now we will only have one action, called ReposFetched:

type action =
  | ReposFetched(list(Repo.t));

In order to create a stateful component in ReasonReact we need to use the reducerComponent() function:

let component = ReasonReact.reducerComponent("App");

Such a component lets us define a reducer, which describes how the state is transformed in response to actions. A reducer takes an action along with the current state as input and returns the new state as output. Reducers must be pure functions.

reducer: (action, _prevState) => {
  switch action {
  | ReposFetched(repos) => ReasonReact.Update({repos: Some(repos)})
  }
}

We pattern match on action, the parameter we receive in the reducer() method. Pattern matching must be exhaustive: all variant values must be matched. The reducer definition is placed inside the component’s main function.

To finish off the component’s definition, let’s define its initial state:

initialState: () => {
  repos: Some([
    {name: "Huncwot", size: 11011, forks: 42}
  ])
}

Integrating with an API

We will use bs-fetch to fetch data from an external API. It is a BuckleScript library that acts as a thin layer on top of the Fetch API. Once the data is fetched, we will use bs-json to extract the fields we are interested in. Start by installing bs-fetch and bs-json:

npm i bs-fetch @glennsl/bs-json

Add them to bs-dependencies in your bsconfig.json:

{
  "bs-dependencies": [
    ...,
    "bs-fetch",
    "@glennsl/bs-json"
  ]
}

We defined our Repo type as a set of three fields: name, size, and forks. Once the payload is fetched from the GitHub API, we parse it to extract those three fields:

let parse = json =>
  Json.Decode.{
    name: json |> field("name", string),
    size: json |> field("size", int),
    forks: json |> field("forks", int),
  };

field is a function from Json.Decode. The Json.Decode.{ ... } (mind the dot) opens the Json.Decode module.
Its properties can now be used within those curly brackets without prefixing them with Json.Decode.

Since GitHub returns the repos under items, let’s define another function to get that list:

let extract = (fields, json) =>
  Json.Decode.(
    json |> at(fields, list(parse))
  );

Finally, we can make a request and pass the returned data through our parsing functions:

let list = () =>
  Js.Promise.(
    Fetch.fetch(endpoint)
    |> then_(Fetch.Response.json)
    |> then_(text => extract(["items"], text) |> resolve)
  );

React Lifecycle Methods

Let’s use the didMount lifecycle method to trigger the fetch of repositories from the GitHub API:

didMount: self => {
  let handle = repos => self.send(ReposFetched(repos));

  Repo.list()
  |> Js.Promise.then_(repos => {
       handle(repos);
       Js.Promise.resolve();
     })
  |> ignore;
}

handle is a function that dispatches the ReposFetched action to the reducer. Once the promise resolves, the action will carry the fetched repositories to the reducer. This will update our state.

Rendering

Since we distinguish between non-initialized state and an empty list of repositories, it is straightforward to handle the initial "loading in progress" message:

render: self => (
  switch self.state.repos {
  | None => s("Loading repositories...")
  | Some([]) => s("Empty list")
  | Some(repos) => (
      repos
      |> List.map((repo: Repo.t) => (s(repo.name)))
      |> Array.of_list
      |> ReasonReact.array
    )
  }
);

Error handling

TBW

Types in CSS

Types for CSS with bs-css:

yarn add bs-css

"bs-dependencies": [
  ...,
  "bs-css"
]

let style = Css.(
  {
    "header": style([backgroundColor(rgba(111, 37, 35, 1.0)), display(Flex)]),
    "title": style([color(white), fontSize(px(28)), fontWeight(Bold)]),
  }
);

let make = _children => {
  ...component,
  render: _self => (s("This is title"))
};

Vocabulary

- rtop is an interactive command line (REPL) for Reason.
- Merlin is an autocompletion service for OCaml and Reason.
- [@bs....]: BuckleScript annotations for the FFI

Additional Resources

TBD

module History = {
  type h;
  [@bs.send] external goBack : h => unit = "";
  [@bs.send] external goForward : h => unit = "";
  [@bs.send] external go : (h, ~jumps: int) => unit = "";
  [@bs.get] external length : h => int = "";
};

BuckleScript allows us to mix raw JavaScript with Reason code:

[%bs.raw {|require('./app.css')|}];
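Circling back to the data-fetching part of the tutorial: the parse/extract pipeline built with bs-json above can be mimicked in plain JavaScript. The payload below is made up for illustration and merely mirrors the shape of GitHub's search response (a top-level items array); the function names are mine, not bs-json's.

```javascript
// Hand-rolled JavaScript equivalent of the parse + extract pair.
// bs-json's decoders fail loudly on shape mismatches; we mimic that
// with explicit checks that throw on unexpected input.
function parseRepo(json) {
  if (typeof json.name !== "string" ||
      typeof json.size !== "number" ||
      typeof json.forks !== "number") {
    throw new Error("decode error: unexpected repo shape");
  }
  // Only keep the three fields of the Repo type; drop everything else.
  return { name: json.name, size: json.size, forks: json.forks };
}

function extractItems(payload) {
  if (!Array.isArray(payload.items)) {
    throw new Error("decode error: missing items array");
  }
  return payload.items.map(parseRepo);
}

// Hypothetical payload, shaped like GitHub's search response:
var payload = {
  items: [
    { name: "Huncwot", size: 11011, forks: 42, extra: "ignored" }
  ]
};
var repos = extractItems(payload);
```

The point of the exercise is the same as in the Reason version: decoding is the boundary where untyped JSON becomes a value of a known shape, and anything unexpected is rejected there rather than deep inside the UI code.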
https://coinerblog.com/reason-in-a-nutshell-getting-started-guide-hacker-noon/
#include <UT_VoxelArray.h>

This provides a mip-map type structure for a voxel array. It manages the different levels of voxel arrays that are needed. You can create different types of mip maps: average, maximum, etc., which can allow different tricks. Each level is one half the previous level, rounded up. Out-of-bound voxels are ignored from the lower levels.

Definition at line 1184 of file UT_VoxelArray.h.

This does a top-down traversal of the implicit octree defined by the voxel array. Returning false will abort that branch of the octree. The bounding box given is in cell space and is an exclusive box of the included cells (ie: (0..1)^3 means just cell 0,0,0). Note that each bounding box will not be square, unless you have the good fortune of starting with a power-of-2 cube. The boolean goes true when the callback is invoked on a base level.

Definition at line 1221 of file UT_VoxelArray.h.

The different types of functions that can be used for constructing a mip map.

Definition at line 1189 of file UT_VoxelArray.h.

Definition at line 5284 of file UT_VoxelArray.C.

Definition at line 5290 of file UT_VoxelArray.C.

Copy constructor.

Definition at line 5296 of file UT_VoxelArray.C.

Builds from a given voxel array. The ownership flag determines if we gain ownership of the voxel array and should delete it. In any case, the new levels are owned by us.

Definition at line 5339 of file UT_VoxelArray.C.

Same as above, but constructs mipmaps simultaneously for more than one function. The order of the functions will correspond to the order of the data values passed to the traversal callback.

Definition at line 5350 of file UT_VoxelArray.C.

Return the amount of memory used by this mipmap.

Definition at line 5446 of file UT_VoxelArray.C.

Level 0 is the original grid; each level higher is a power of two smaller.

Definition at line 1253 of file UT_VoxelArray.h.

Definition at line 1288 of file UT_VoxelArray.h.

Definition at line 1249 of file UT_VoxelArray.h.
Assignment operator.

Definition at line 5305 of file UT_VoxelArray.C.

Definition at line 5463 of file UT_VoxelArray.C.

Top-down traversal on op. op is invoked with

bool op(const UT_BoundingBoxI &indexbox, int level)

indexbox is half-inclusive: (0..1)^3 means cell 0,0,0. Level 0 means the base level. (box.min.x()>>level, box.min.y()>>level, box.min.z()>>level) gives the index to extract the value from that level.

Definition at line 5573 of file UT_VoxelArray.C.

Top-down traversal, but which quad tree is visited first is controlled by

float op.sortValue(UT_BoundingBoxI &indexbox, int level);

Lower values are visited first.

Definition at line 5662 of file UT_VoxelArray.C.

This stores the base-most level that was provided externally.

Definition at line 1308 of file UT_VoxelArray.h.

The array of VoxelArrays, one per level. myLevels[0] is a 1x1x1 array. Each successive layer is twice as big in each dimension. However, every layer is clamped against the resolution of the base layer. We own all these layers.

Definition at line 1320 of file UT_VoxelArray.h.

Tracks the number of levels which we used to represent this hierarchy.

Definition at line 1314 of file UT_VoxelArray.h.

If true, we will delete the base level when we are done.

Definition at line 1310 of file UT_VoxelArray.h.
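The level sizing rule described above ("each level is one half the previous level, rounded up") can be illustrated with a small sketch. This is not HDK code, just a standalone JavaScript model of how many levels a given base resolution produces along one axis, under the assumption that levels shrink until they reach 1:

```javascript
// Model of mip-map level resolutions for a single axis:
// each level is half the previous, rounded up, down to 1.
function mipLevels(baseRes) {
  var levels = [baseRes];
  while (levels[levels.length - 1] > 1) {
    levels.push(Math.ceil(levels[levels.length - 1] / 2));
  }
  return levels;
}

// A 5-voxel axis yields resolutions 5, 3, 2, 1: four levels.
var levels5 = mipLevels(5);
```

This also shows why non-power-of-2 resolutions produce the non-square bounding boxes mentioned in the traversal documentation: rounding up makes intermediate levels unevenly proportioned relative to the base grid.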
http://www.sidefx.com/docs/hdk/class_u_t___voxel_mip_map.html
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.

Let $q$ be the probability of obtaining a head. Whereas $q$ was just a fixed number in the previous recipe, we consider here that it is a random variable. Initially, this variable follows a distribution called the prior distribution. It represents our knowledge about $q$ before we start flipping the coin. We will update this distribution after each trial (posterior distribution).

$$P(q | \{x_i\}) = \frac{P(\{x_i\} | q) P(q)}{\displaystyle\int_0^1 P(\{x_i\} | q) P(q) dq} = (n+1)\binom n h q^h (1-q)^{n-h}$$

We define the posterior distribution according to the mathematical formula above. We remark that this expression is $(n+1)$ times the probability mass function (PMF) of the binomial distribution, which is directly available in scipy.stats.

import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
%matplotlib inline

posterior = lambda n, h, q: (n+1) * st.binom(n, q).pmf(h)

Let's plot this distribution for an observation of $h=61$ heads and $n=100$ total flips.

n = 100
h = 61
q = np.linspace(0., 1., 1000)
d = posterior(n, h, q)

plt.figure(figsize=(5,3));
plt.plot(q, d, '-k');
plt.xlabel('q parameter');
plt.ylabel('Posterior distribution');
plt.ylim(0, d.max()+1);

We can also derive a point estimate. For example, the maximum a posteriori (MAP) estimation consists in considering the maximum of this distribution as an estimate for $q$. We can find this maximum analytically or numerically. Here, we find analytically $\hat q = h/n$, which looks quite sensible.

You'll find all the explanations, figures, references, and much more in the book (to be released later this summer). IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).
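Since the formula only needs a binomial coefficient, the MAP estimate can also be checked without NumPy or SciPy. This is a hedged, pure-Python re-derivation of the recipe's result, not part of the original notebook:

```python
from math import comb

def posterior(n, h, q):
    # (n+1) * C(n, h) * q^h * (1-q)^(n-h), matching the formula above
    return (n + 1) * comb(n, h) * q**h * (1 - q)**(n - h)

n, h = 100, 61
grid = [i / 1000 for i in range(1001)]

# Numerical maximum a posteriori over the grid
q_map = max(grid, key=lambda q: posterior(n, h, q))
```

The numerical maximum lands on the analytical answer $\hat q = h/n = 0.61$, and the distribution integrates to 1 as a proper posterior should.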
http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter07_stats/03_bayesian.ipynb
DTML is the Document Template Markup Language, a handy presentation and templating language that comes with Zope. This Appendix is a reference to all of DTML's markup tags and how they work.

The call tag

The call tag lets you call a method without inserting the results into the DTML output.

call tag syntax:

  <dtml-call Variable|expr="Expression">

If the call tag uses a variable, the method's arguments are passed automatically by DTML, just as with the var tag. If the method is specified in an expression, then you must pass the arguments yourself.

Calling by variable name:

  <dtml-call UpdateInfo>

This calls the UpdateInfo object, automatically passing arguments.

Calling by expression:

  <dtml-call

See Also: var tag

Built-in Python Functions

divmod(a, b) -- for integers, the result is (a / b, a % b). For floating point numbers the result is (q, a % b), where q is usually math.floor(a / b) but may be 1 less than that. In any case 'q * b + a % b' is very close to a; if 'a % b' is non-zero it has the same sign as b, and 0 <= abs(a % b) < abs(b).

oct(x) -- oct(-1) yields 037777777777. When evaluated on a machine with the same word size, this literal is evaluated as -1; at a different word size, it may turn up as a large positive number or raise an OverflowError exception.

ord(c) -- ord("a") returns the integer 97. This is the inverse of chr().

pow(x, y) -- pow(2, -1) or pow(2, 35000) is not allowed.

See Also: random module, math module, sequence module, Built-in Python Functions

See Also: Python Tutorial: If Statements

The in tag

The in tag gives you powerful controls for looping over sequences and performing batch processing.

in tag syntax:

  <dtml-in SequenceVariable|expr="SequenceExpression">
  [<dtml-else>]
  </dtml-in>

The in block is repeated once for each item in the sequence variable or sequence expression. The current item is pushed on to the DTML namespace during each execution of the in block. If there are no items in the sequence variable or expression, the optional else block is executed.

These variables describe the current item. If the items of the sequence are (key, value) tuples, the in tag interprets them as (sequence-key, sequence-item). sequence-var-title is the title variable of the current item.
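The built-in function descriptions above date from early Python, but most of the behavior can still be checked in a modern interpreter (note that pow(2, -1) is now allowed and returns a float, and oct() uses the 0o prefix instead of the fixed-word-size octal shown above):

```python
# divmod returns the quotient/remainder pair described above
assert divmod(7, 2) == (3, 1)
q, r = divmod(-7.5, 2.0)
# 'q * b + a % b' recovers a, and the remainder has the sign of b
assert q * 2.0 + r == -7.5 and 0 <= abs(r) < 2.0

# ord('a') is 97, and chr is its inverse
assert ord('a') == 97 and chr(97) == 'a'

# modern octal formatting
assert oct(8) == '0o10'

# pow(2, -1) now returns a float rather than being disallowed
assert pow(2, -1) == 0.5
```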
Normally you can access these variables directly, since the current item is pushed on the DTML namespace. However, these variables can be useful when displaying previous and next batch information.

These variables summarize information about numeric item variables. To use these variables you must loop over objects (like database query results) that have numeric variables.

These variables allow you to track changes in current item variables.

sequence-query -- the query string with the start variable removed. You can use this variable to construct links to next and previous batches.

previous-sequence-start-number -- previous-sequence-start-index + 1.
previous-sequence-end-number -- previous-sequence-end-index + 1.
previous-sequence-size -- calculated from batch-start-index, batch-end-index, and batch-size.
next-sequence-start-number -- next-sequence-start-index + 1.
next-sequence-end-number -- next-sequence-end-index + 1.
next-sequence-size -- calculated from batch-start-index, batch-end-index, and batch-size.

Iterating over (key, value) tuples:

  <dtml-in objectItems>
    id: <dtml-var sequence-key>, title: <dtml-var title><br>
  </dtml-in>

Creating alternate colored table cells:

  ...start=<dtml-var previous-sequence-start-number>">Previous</a>
  </dtml-in>
  <dtml-in largeSequence size=10 start=start next>
  <a href="<dtml-var absolute_url><dtml-var sequence

With the sequence-query variable, you do not lose any GET variables as you navigate between batches.

The with tag

The mime tag

The mime tag allows you to create MIME encoded data. It is chiefly used to format email inside the sendmail tag.

mime tag syntax:

  <dtml-mime>
  [<dtml-boundary>]
  ...
  </dtml-mime>

The mime tag is a block tag. The block can be divided by one or more boundary tags to create a multi-part MIME message. mime tags may be nested. The mime tag is most often used inside the sendmail tag. Both the mime and boundary tags have the same attributes:

encode -- base64 by default. Valid encoding options include base64, quoted-printable, uuencode, x-uuencode, uue, x-uue, and 7bit. If the encode attribute is set to 7bit, no encoding is done on the block and the data is assumed to be in a valid MIME format.
type and type_expr.
name and name_expr.
disposition and disposition_expr.
filename and filename_expr.
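The batch bookkeeping the in tag performs can be modeled with simple index arithmetic. This is an illustrative Python sketch with made-up names, not Zope's implementation:

```python
def batch(seq, size, start=0):
    """Return one batch of seq plus the start offsets of the previous
    and next batches (None when there is no such batch), loosely
    mirroring the in tag's previous/next sequence variables."""
    items = seq[start:start + size]
    previous = max(start - size, 0) if start > 0 else None
    nxt = start + size if start + size < len(seq) else None
    return items, previous, nxt
```

A 25-item sequence with size=10 yields batches starting at 0, 10, and 20, which is exactly the information a "Previous"/"Next" link pair needs.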
Sending a file attachment:

  <dtml-sendmail>
  To: <dtml

See Also: Python Library: mimetools

The raise tag

The raise tag raises an exception, mirroring the Python raise statement.

raise tag syntax:

  <dtml-raise ExceptionName|ExceptionExpression>
  </dtml-raise>

The raise tag is a block tag. It raises an exception. Exceptions can be an exception class or a string. The contents of the tag are passed as the error value.

Raising a KeyError:

  <dtml-raise KeyError></dtml-raise>

Raising an HTTP 404 error:

  <dtml-raise NotFound>Web Page Not Found</dtml-raise>

See Also: try tag, Python Tutorial: Errors and Exceptions, Python Built-in Exceptions

The return tag

The return tag stops executing DTML and returns data. It mirrors the Python return statement.

return tag syntax:

  <dtml-return ReturnVariable|expr="ReturnExpression">

Stops execution of DTML and returns a variable or expression. The DTML output is not returned. Usually a return expression is more useful than a return variable. Scripts largely obsolete this tag.

Returning a variable:

  <dtml-return result>

Returning a Python dictionary:

  <dtml-return

The sendmail tag

The sendmail tag sends an email message using SMTP.

sendmail tag syntax:

  <dtml-sendmail>
  </dtml-sendmail>

The sendmail tag is a block tag. It requires either a mailhost or a smtphost argument, but not both. The tag block is sent as an email message. The beginning of the block describes the email headers. The headers are separated from the body by a blank line. Alternately the To, From and Subject headers can be set with tag arguments:

mailto -- the To header.
mailfrom -- the From header.
subject -- the Subject header.

See Also: RFC 821 (SMTP Protocol), mime tag

The sqlgroup tag

The sqlgroup tag formats complex boolean SQL expressions. You can use it along with the sqltest tag to build dynamic SQL queries that tailor themselves to the environment. This tag is used in SQL Methods.

sqlgroup tag syntax:

  <dtml-sqlgroup>
  [<dtml-or>]
  [<dtml-and>]
  ...
  </dtml-sqlgroup>

The sqlgroup tag is a block tag. It is divided into blocks with one or more optional or and and tags. sqlgroup tags can be nested to produce complex logic.
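The mime/boundary tags mirror ordinary multipart MIME construction, which Python's standard library email package does as well. A minimal sketch with placeholder addresses and a fake attachment, purely for illustration:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg['To'] = 'recipient@example.com'    # hypothetical addresses
msg['From'] = 'sender@example.com'
msg['Subject'] = 'Report attached'
msg.set_content('Body text of the message.')

# Adding an attachment turns the message into multipart/mixed,
# much like dividing a dtml-mime block with dtml-boundary
msg.add_attachment(b'fake file bytes', maintype='application',
                   subtype='octet-stream', filename='report.bin')
```

The base64 transfer encoding chosen automatically here corresponds to the mime tag's default encode attribute.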
Using the sqlgroup tag in a SQL select query: if first is Bob and last is Smith, McDonald, it renders:

  select * from employees
  where
  (first='Bob'
   and
   last in ('Smith', 'McDonald')
  )

If salary is 50000 and last is Smith, it renders:

  select * from employees
  where
  (salary > 50000.0
   and
   last='Smith'
  )

Nested sqlgroup tags:

See Also: sqltest tag

The sqlvar tag

The sqlvar tag safely inserts variables into SQL code. This tag is used in SQL Methods.

sqlvar tag syntax:

  <dtml-sqlvar Variable|expr="Expression" type=Type>

The sqlvar tag is a singleton. Like the var tag, the sqlvar tag looks up a variable and inserts it. Unlike the var tag, the formatting options are tailored for SQL code.

type -- one of string, int, float and nb. nb means non-blank string and should be used in place of string unless you want to use blank strings. The type attribute is required and is used to properly escape the inserted variable.

Basic usage:

  select * from employees
  where name=<dtml-sqlvar name type=string>

This SQL quotes the name string variable.

See Also: sqltest tag

The unless tag

The unless tag provides a shortcut for testing negative conditions. For more complete condition testing use the if tag.

unless tag syntax:

  <dtml-unless ConditionVariable|expr="ConditionExpression">
  </dtml-unless>

The unless tag is a block tag. If the condition variable or expression evaluates to false, then the contained block is executed. Like the if tag, variables that are not present are considered false.

Testing a variable:

  <dtml-unless testMode>
    <dtml-call dangerousOperation>
  </dtml-unless>

The block will be executed if testMode does not exist, or exists but is false.

See Also: if tag

The var tag

The var tag allows you to insert variables into DTML output.

var tag syntax:

  <dtml-var Variable|expr="Expression">

The var tag is a singleton tag. The var tag finds a variable by searching the DTML namespace, which usually consists of the current object, the current object's containers, and finally the web request. If the variable is found, it is inserted into the DTML output. If not found, Zope raises an error.
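What the sqlvar tag does for SQL Methods — safely quoting a value into a statement — is what parameter binding does in modern database APIs. A sqlite3 sketch, purely for illustration (not Zope code):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE employees (name TEXT, salary REAL)')
# The name contains an embedded quote; the placeholder escapes it,
# much as <dtml-sqlvar name type=string> would in a SQL Method
conn.execute('INSERT INTO employees VALUES (?, ?)', ("O'Brien", 50000.0))

row = conn.execute('SELECT salary FROM employees WHERE name = ?',
                   ("O'Brien",)).fetchone()
```

Concatenating the raw value into the statement instead would break on the quote (and invite SQL injection), which is exactly the failure mode sqlvar's required type attribute guards against.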
var tag entity syntax:

  &dtml-variableName;

Entity syntax is a shortcut which inserts and HTML-quotes the variable. It is useful when inserting variables into HTML tags. Attributes can also be used with the entity syntax, in the form &dtml.attribute-variableName;.

thousands_commas -- inserts commas every three digits, so 12000 becomes 12,000.
url -- inserts the URL of the object, by calling its absolute_url method.
url_quote_plus -- like url_quote but also converts spaces to plus signs.
etc -- the text to append when truncating (see the size attribute listed above). By default, this is ...

Inserting a simple variable into a document:

  <dtml-var standard_html_header>

Truncation:

  <dtml-var colors size=10>

will produce the following output if colors is the string 'red yellow green':

  red yellow, etc.

C-style string formatting:

  <dtml-var

renders to:

  23432.23

Inserting a variable, link, inside an HTML A tag with the entity syntax:

  <a href="&dtml-link;">Link</a>

Inserting a link to a document doc, using entity syntax with attributes:

  <a href="&dtml.url-doc;"><dtml-var doc></a>

This creates an HTML link to an object using its URL and title. This example calls the object's absolute_url method for the URL (using the url attribute) and its title_or_id method for the title.
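The formatting attributes above have direct printf-style equivalents, which can be verified in Python:

```python
# thousands-comma formatting: 12000 becomes "12,000"
assert format(12000, ',') == '12,000'

# C-style formatting, as with a fmt="%.2f" attribute
assert '%.2f' % 23432.2294 == '23432.23'

# truncation to a fixed size, as with the size attribute
colors = 'red yellow green'
assert colors[:10] == 'red yellow'
```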
http://www.faqs.org/docs/ZopeBook/AppendixA.html
On Tue, Dec 14, 2010 at 2:05 AM, Noel J. Bergman <noel@devtech.com> wrote:
> Sim IJskes wrote:
>
>> The implementation that resides in com.sun could be renamed into the
>> org.apache.river.impl namespace without causing too much conversion
>> activity with the users of river. I believe there are strong feelings
>> about keeping to the original specifications.
>
> So are you saying that River will work towards eliminating com.sun, but does
> not feel that it needs to be done now?

It was clearly a misunderstanding. The current work is largely about putting
all the loose bits into a coherent order, effectively creating a solid QA
environment (kudos!!), a non-trivial effort.

So, the upcoming release is all about getting into a consistent state, and
doing so with only bug fixes found during QA setups. Hence, as Sim explains,
package renames are seen as too disruptive for that effort; moreover, the
nature of the upcoming release is "drop-in replacement, bug fixes only", with
no impact on users.

But, importantly, the community has committed itself to removing the com.sun
namespace as soon as that release is out.

Cheers
--
Niclas Hedhman, Software Developer - New Energy for Java
I live here; I work here; I relax here;

---------------------------------------------------------------------
To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
For additional commands, e-mail: general-help@incubator.apache.org
http://mail-archives.eu.apache.org/mod_mbox/incubator-general/201012.mbox/%3CAANLkTimVoT4OVxGSnqhYGRw4osyoa+1MuomyHqLMD+h7@mail.gmail.com%3E
...? Table spaces and the database's schema objects.

3. What is a Table space?
A database is divided into logical storage units called table spaces. A table space is used to group related logical structures together.

4. What is the SYSTEM table space and when is it created?
Every ORACLE database contains a table space named SYSTEM, automatically created when the database is created. The SYSTEM table space always contains the data dictionary tables for the entire database.

5. Explain the relationship among Database, Table space and Data file.
Each database is logically divided into one or more table spaces. One or more data files are explicitly created for each table space.

...table spaces? Yes.

9. Can a Table space hold objects from different Schemas?
Yes.

10. What is a Table?
A table is the basic unit of data storage in an ORACLE database. The tables of a database hold all of the user-accessible data. Table data is stored in rows and columns.

11. What is a View?
A view is a virtual table. Every view has a query attached to it. (The query is a SELECT statement that identifies the columns and rows of the table(s) the view uses.)

Page 1 of 143

12. ...
- Store complex queries.
- Present the data in a different perspective from that of the base table.

What is a Private Synonym?
A Private Synonym can be accessed only by the owner.

19. What is a Public Synonym?
Any user on the database can access a Public Synonym.

20. What are Synonyms used for?
Synonyms are used to:
- Mask the real name and owner of an object.
- Provide public access to an object.
- Provide location transparency for tables, views or program units of a remote database.
- Simplify the SQL statements for database users.

21. What is an Index?
An Index is an optional structure associated with a table to have direct access to rows, which can be created to increase the performance of data retrieval. An index can be created on one or more columns of a table.

22. How are Indexes updated?
...column of the tables in a cluster?
...can't?

A Database contains one or more Rollback Segments to temporarily store "undo" information.

42. What are the uses of a Rollback Segment?
...table space...

50. What is the use of the Control File?
When an instance of an ORACLE database is started, its control file is used to identify the database and redo log files that must be opened for database operation to proceed. It is also used in database recovery.

...queries. Cursors.

70. What is a User Process?
A user process is created and maintained to execute the software code of an application program. It is a shadow process created automatically to facilitate communication between the user and the server process.

71. What is a Server Process?
A Server Process handles...

...offline errors. These transactions are eventually recovered by SMON when the table space...
...is used for inter-instance locking when the ORACLE Parallel Server option is used.

89. What is the maximum number of Lock Processes used?
Though a single LCK process is sufficient for most Parallel Server systems, up to ten lock processes (LCK0, ..., LCK9) can be used for inter-instance locking.

DATA ACCESS

90. Define a Transaction.
A Transaction is a logical unit of work that comprises one or more SQL statements executed by a single user.

91. When does a Transaction end?
When it is committed or rolled back.

Save points can be declared to divide a transaction into smaller parts. This allows the option of later rolling back all work performed from the current point in the transaction to a declared save point within the transaction.

95. What is a Read-Only Transaction?
A Read-Only transaction ensures that the results of each query executed in the transaction are consistent with respect to the same point in time.

96. What is the function of the Optimizer?
The goal of the optimizer is to choose the most efficient way to execute a SQL statement.

97. What is an Execution Plan?
The combination...

101.
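The save points described above exist in most SQL databases, not just Oracle. A sqlite3 illustration of dividing a transaction and rolling back to a save point (sqlite syntax, not Oracle's SET TRANSACTION):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.isolation_level = None          # manage transactions explicitly
cur = conn.cursor()
cur.execute('CREATE TABLE t (x INTEGER)')

cur.execute('BEGIN')
cur.execute('INSERT INTO t VALUES (1)')
cur.execute('SAVEPOINT sp1')         # divide the transaction
cur.execute('INSERT INTO t VALUES (2)')
cur.execute('ROLLBACK TO sp1')       # undo only the work after the save point
cur.execute('COMMIT')                # the first insert survives

rows = [r[0] for r in cur.execute('SELECT x FROM t')]
```

Only the row inserted before the save point is committed, which is precisely the "roll back to a declared save point" behavior the answer describes.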
Will the Optimizer always use the COST-based approach if OPTIMIZER_MODE is set to 'COST'?
Presence of statistics in the data dictionary for at least one of the tables accessed by the SQL statement is necessary for the OPTIMIZER to use the COST-based approach. Otherwise the OPTIMIZER chooses the RULE-based approach.

...least based?

...procedure uses.

APPLICATION AWARENESS - A database application can be designed to automatically enable and disable selective roles when a user attempts to use the application.

120. How do you prevent unauthorized use of privileges granted to a Role?
By creating a Role with a password.

121. What is a default table space?
The table space to contain schema objects created without specifying a table space name.

122. What is a Table space Quota?
The collective amount of disk space available to the objects in a schema on a particular table space.

...amount?

Two-phase commit is a mechanism that guarantees a distributed transaction either commits on all involved nodes or rolls back on all involved nodes, to maintain data consistency across the global distributed database. It has two phases...

...acles?

...reused. ...archived before being...

Data Base Administration

Introduction to DBA

1. What is a Database instance? Explain.

...are clusters?
A group of tables physically stored together because they share common columns and are often used together is called a Cluster.

6. What is a cluster Key?
The related columns of the tables are called the cluster key. The cluster key is indexed using a cluster index and its value is stored only once for multiple tables in the cluster.

7. What is the basic element of the base configuration of an Oracle database?
It consists of:
- One or more data files.
- One or more control files.
- Two or more redo log files.
The database contains:
- Multiple users/schemas
- One or more rollback segments
- One or more table spaces
- Data dictionary tables
- User objects (tables, indexes, views etc.)

The server that accesses the database consists of:
- SGA (Database buffers, Dictionary Cache buffers, Redo log buffers, Shared SQL pool)
- ...accessing the same database (only in Multi-CPU ...)
- SMON (System MONitor)
- PMON (Process MONitor)
- LGWR (LoG WRiter)
- DBWR (Data Base WRiter)
- ARCH (ARCHiver)
- CKPT (CheckPoinT)
- RECO
- Dispatcher
- User Process with associated PGA

8. What is a deadlock? Explain.
When two processes are each waiting to update rows of a table that are locked by the other process, a deadlock arises. In a database environment this will often happen because of not issuing proper row locks.

Monitor the ratio of reloads that take place while executing SQL statements. If the ratio is greater than 1, then increase the SHARED_POOL_SIZE.

LOGICAL & PHYSICAL ARCHITECTURE OF DATABASE

14. What are Database Buffers?
...

If the database organizes the table spaces:
- RBS1, RBS2 - Additional/special rollback segments.
- TEMP - Temporary-purpose table space.
- TEMP_USER - Temporary table space for users.
- USERS - User table space.

20. How will you force the database to use a particular rollback segment?
SET TRANSACTION USE ROLLBACK SEGMENT rbs_name.

21. What is meant by a free extent?
A free extent is a collection of continuous free blocks in a table space.

...?
- Create a database that implicitly creates a SYSTEM Rollback Segment in the SYSTEM table space.
- Create a second Rollback Segment named R0 in the SYSTEM table space.
...available.

- The RBS checks to see if it is part of its OPTIMAL size.
- The RBS chooses its oldest inactive segment.
- The oldest inactive segment is eliminated.
- The RBS extends...
- The data dictionary table for space management is updated.
- The transaction completes.

35. How can we plan storage for very large tables?
- Limit the number of extents in the table.
- Separate the table from its indexes.
- Allocate sufficient temporary storage.

36. How will you estimate the space required by a non-clustered table?
- Calculate the total header size.
- Calculate the available data space per data block.
- Calculate the combined column lengths of the average row.
- Calculate the total average row size.
- Calculate the average number of rows that can fit in a block.
- Calculate the number of blocks and bytes required for the table.
After arriving at the calculation, add 10% additional space to calculate the initial extent size for a working table.

37. Is it possible to use raw devices as data files, and what ...

- Copy one of the existing control files to the new location.
- Edit the config.ora file by adding the new control file name.
- Restart the database.

40. What is meant by Redo Log file mirroring? How can it be achieved?
The process of keeping a copy of the redo log files is called mirroring.

...disadvantages: ...do not become free immediately after completion due to delayed cleanout. Trailing nulls and length bytes are not stored.

Inserts of, updates to and deletes of rows, as well as columns larger than a single data block, can cause fragmentation and chained row pieces.

DATABASE SECURITY & ADMINISTRATION

48. What is a user Account in an Oracle database?
...

SYSTEM user account - It has all the system privileges for the database; additional tables and views that display administrative information, and internal tables and views used by Oracle tools, are created using this username.

54. What are the minimum parameters that should exist in the parameter file (init.ora)?
...

Roles are the easiest way to grant and manage common privileges needed by different groups of database users:
- Create roles and assign privileges to roles.
- Assign each role to a group of users.
This will simplify the job of assigning privileges to individual users.

58. What are the steps to switch a database's archiving mode between ...?

59. How can you enable automatic archiving?
- Shut down the database.
- Back up the database.
- Modify/include LOG_ARCHIVE_START = TRUE in the init.ora file.
- Start up the database.

...zero-padded, and %t - thread number, not padded). The file name created is arch0001 if %S is used.
LOG_ARCHIVE_DEST = path.

61. What is the use of the ANALYZE command?
To perform one of these functions on an index, table, or cluster:
- To collect statistics about the object used by the optimizer and store them in the data dictionary.
- To delete statistics about the object from the data dictionary.
- To validate the structure of the object.
- To identify migrated and chained rows of the table or cluster.

MANAGING DISTRIBUTED DATABASES

62. How can we reduce the network traffic?
- Replication ...

...? Simple and Complex.

65. Differentiate simple and complex snapshots.
- A simple snapshot is based on a query that does not contain GROUP BY clauses, CONNECT BY clauses, JOINs, sub-queries or snapshot operations.
- A complex snapshot contains at least one of the above.

66. What is dynamic data replication?
Updating or inserting records in a remote database through database triggers. It may fail if the remote database is having any problem.

67. How can you enforce Referential Integrity in snapshots?
- Time the references to occur when the master tables are not in use.
- Perform the references manually, immediately locking the master tables. We can join tables in snapshots by creating a complex snapshot that is based on the master tables.

68. What are the options available to refresh snapshots?
COMPLETE - Tables are completely ...

...The database uses a two-phase commit.

MANAGING BACKUP & RECOVERY

73. What are the different methods of backing up an Oracle database?
- Logical Backups
- Cold Backups
- Hot Backups (Archive log)

74. What is a logical backup?
A logical backup involves reading a set of database ...

...whether database objects will be exported or not. Value is 'Y' or 'N'.

81. What is the use of the INDEXES option in the EXP command?
Page 27 of 143 A flag to indicate whether indexes on tables will be exported. 82. What is the use of ROWS option in EXP command ? Flag to indicate whether table rows should be exported. If 'N' only DDL statements for the databse objects will be created. 83. What is the use of CONSTRAINTS option in EXP command ? A flag to indicate whether constraints on table need to be exported. 84. What is the use of FULL option in EXP command ? A flag to indicate whether full databse should be written to export dump file. information about the exported objects 93. What is the use of CONSISTENT (Ver 7) option in EXP command ? A flag to indicate whether should be maintained. 94. a read consistent version of all the exported objects Page 28 of 143 exectued.., cant be rolled back. Database triggers do not fire on TRUNCATE DELETE allows the filtered deletion. Deleted records can be rolled back or committed. Database triggers fire on DELETE. Page 29 of 143 2(n, (m)), INSTR returns the position of the mth occurrence of the string 2 in String1. The search begins from nth position of string1. SUBSTR (String1 n,m) SUBSTR returns a character string of size m in string1, starting from nth postioncharacter long, block no, row number are the components of ROWID. 11. What is the fastest way of accessing a row in a table? Using ROWID. CONSTRAINTS ---------------------- Page 30 of 143 12. What is an Integrity Constraint? Integrity constraint is a rule that restricts values to a column in a table. 13. What is Referential Integrity? Maintaining data integrity through a set of rules that restrict the values of one or more columns of the tables based on the values of primary key or unique key of the referenced table. 14. What are the usages? SIZE allowed for each type? What is the maximum. 20. Where the integrity constrints are stored in Data Dictionary? The integrity constraints are stored in USER_CONSTRAINTS. 21. How will you a activate/deactivate integrity constraints? 
The integrity constraints can be enabled or disabled by ALTER TABLE ENABLE constraint/DISABLE constraint. Page 31 of 143. FORMS 3.0 BASIC 1.What is an SQL *FORMS? SQL *forms is 4GL tool for developing and executing; Oracle based interactive application. 2. What is the maximum size of a form? Page 32 of 143 255 character width and 255 characters Length. 3. Name the two files that are created when you generate the form give the file extension? INP (Source File) FRM (Executable File) 4. How do you control the constraints in forms? Select the use constraint property is ON Block definition screen. BLOCK 5. Committed. 9. What are the types of TRIGGERS? 1. Navigational Triggers. 2. Transaction Triggers. 10. What are the different types of key triggers? Function Key Key-function Key-others Key-startup 11.. 12. What does an on-clear-block Trigger fire? It fires just before SQL * forms the current block. Page 33 of 143 usage of an ON-INSERT,ON-DELETE and ON-UPDATE TRIGGERS ? These triggers are executes when inserting, deleting and updating operations are performed and can be used to change the default function of insert, delete or update respectively. For E.g., instead of inserting a row in a table an existing row can be updated in the same table. do a PRE-QUERY Trigger and POSTQUERY Trigger will get executed? PRE-QUERY fires once. POST-QUERY fires 10 times. 18. What is the difference between ON-VALIDATE-FIELD trigger and a POSTCHANGE trigger? When you changes the Existing value to null, the On-validate field trigger will fire post change trigger will not fire. At the time of execute-query post-change is the difference between a POST-FIELD trigger and a POST-CHANGE trigger? Post-field trigger fires whenever the control leaving from the filed. Post-change trigger fires at the time of execute-query procedure invoked or filed validation status changed. 21. When is PRE-QUERY trigger executed? When Execute-query or count-query Package procedures are invoked. Page 34 of 143 22. 
Give the sequence in which triggers fired during insert operations, when the following 3 triggers are defined at the same block level? a. ON-INSERT b. POST-INSERT c. PRE-INSERT PRE-INSERT, ON-INSERT & POST-INSERT. 23. Can we use GO-BLOCK package in a pre-field trigger ? No. 24. Is a Key startup ? Page 35 of 143 Restricted package procedure that affects the basic basic functions of SQL * Forms. It cannot used in all triggers execpt synchoron. Page 36 of 143. 2. 3. 4. 5. Error_Code Error_Text Form_Failure Form_Fatal Message_Code 40. How does the command POST differs from COMMIT ? Post writes data in the form to the database but does not perform database commit Commit permenently proecdure ?. Page 37 of 143 SYSTEM VARIABLES 47. List the system variables related in Block and Field? 1. 2. 3. 4. 5. 6. 7. System.block_status System.current_block System.current_field System.current_value System.cursor_block System.cursor_field System.field_status. the difference between system.current_field and 1. System.current_field gives name of the field. 2. System.cursor_field gives name of the field with block name. 49. The value recorded in system.last_record variable is of type a. Number b. Boolean c. Character. b. Boolean. User Exits :. Page :. Page 38 of 143 56. Deleting a page removes information about all the fields in that page ? a. True. b. False a. True.. Page 39 of 143? Page 40 of 143.. 7. Page 41 of 143 cursor c1 is select empno,deptno from emp; e_rec c1 %ROWTYPE..; 14. What will happen after commit statement ? Cursor C1 is Select empno, Page 42 of 143 ? Page 43 of 143 It is not possible. As triggers are defined for each table, if you use COMMIT ROLLBACK in a trigger, it affects logical transaction processing. 19. What are two virtual tables available during database trigger execution ? of..) 24. What is Raise_application_error ? Page 44 of 143 end; 33. Give the structure of the function ? Page 45 of 143 c. SQL *PLUS a. 
PACKAGE NAME.PROCEDURE NAME (parameters); Page 46 of 143 variable := PACKAGE NAME.FUNCTION NAME (arguments); EXEC SQL EXECUTE b. BEGIN PACKAGE NAME.PROCEDURE NAME (parameters) variable := PACKAGE NAME.FUNCTION NAME (arguments); END; END EXEC; c. EXECUTE PACKAGE NAME.PROCEDURE if the procedures does not have any out/in-out parameters. A function can not be called.. 19. What is a visual attribute? Page 47 of 143 26. What built-in dynamically? Set_window_property Canvas-View 27. What is a canvas-view? A canvas-view is the background object on which you layout the interface items (text-items, check boxes, radio groups, and so on.) and boilerplate objects that is when-window-closed, used for changing when-window-deactivated, the properties of the window Page 48 of 143 operators see and interact with as they run your form. At run-time, operators can? Page 49 of 143 If yes, give the name of th built-in to chage the alert messages at run-time. Yes. Set_alert_property.? Page 50 of 143 Query record group Static record group Non query record group Page 51 of 143 Parameter Error_code Error_text return character return number return char Page 52 of 143 Dbms_error_code Dbms_error_text 60. What is a predefined exception available in forms 4.0? Raise form_trigger_failure 61. What are the menu items that oracle forms 4.0 supports? groups? are the? Page 53 of 143.. 08. What are the different modals of windows? Modalless windows Modal windows 09. What are modall.. Page 54 of 143 Within this triggers, you can examine system.event_window to determine the name? Page 55 of 143 Any event that makes a different record in the master block the current record is a coordination causing event. 23.. 24. What are Most Common types of Complex master-detail relationships? There are three most common types of complex master-detail relationships: master with dependent details Page 56 of 143 Page 57 of 143 ? Page 58 of 143) Page 59 of 143.. 70. How do you reference a Parameter? 
In PL/SQL, you can reference and set the values of form parameters using bind variable syntax. Ex. :PARAMETER.name = '' or :block.item = :PARAMETER.name

76. What is the difference between boiler plat...

How do you create a new session while opening a new form?
Using the OPEN_FORM built-in, setting the session option. Ex. Open_form('Stocks', active, session).

When invoking multiple forms with OPEN_FORM and CALL_FORM in the same application, state whether the following are True/False.

In Oracle Forms, embedded objects become part of the form module, and linked objects are references from a form module to a linked source file.

87. What is the difference between OLE Server & OLE Container?
... is an example of an OLE Container.

... subprograms including user-named procedures, functions and packages.

110. What is the advantage of the library?
Libraries provide a convenient means of storing client-side program units and sharing them among multiple applications.

111.

The post-database-commit trigger fires after Oracle Forms issues the commit to finalize transactions.

116.

117. What is the trigger associated with the timer?
When-timer-expired.

118. What is the use of transactional triggers?
Using transactional triggers we can control or modify the default functionality of Oracle Forms.

REPORTS

10. Which parameter can be used to set read-level consistency across multiple queries?
Read only.

11. What is term?
The term is the terminal definition file that describes the terminal from which you are using r20run.

12. What is the use of term?

23. How can a break order be created on a column in an existing group?
By dragging the column outside the group.

24. What are the types of calculated columns available?
Summary, Formula, Placeholder column.

Are compute-at options required?
Can a text file be attached to a report while creating in the report writer?
Yes.

35. If yes, how?
By the use of anchors.

36. What are the two repeating frames always associated with a matrix object?

46. Does a Before Form trigger fire when the parameter form is suppressed?
Yes.

47. At what point of report execution is the Before Report trigger fired?
After the query is executed but before the report is executed and the records are displayed.

58. How is the link tool operation different between Reports 2.0 & 2.5?
In Reports 2.0 the link tool has to be selected and then the two fields to be linked are selected and the link is automatically created. In 2.5 the first field is selected and the link tool is then used to link the first field to the second field.

68. What are the two ways by which data can be generated for a parameter's list of values?

78. How can values be passed between precompiler exits & Oracle Call Interface?
By using the statements EXEC IAF GET & EXEC IAF PUT.

79. How can a message be passed to the user from reports?
By using the SRW.MESSAGE function.

Ans : False

10. It is very difficult to grant and manage common privileges needed by different groups of database users using roles. a] True b] False
Ans : False

11.

c] An alert requires a response from the user while a message box just flashes a message and only requires an acknowledgment from the user d] A message box requires a response from the user while an alert just flashes a message and only requires an acknowledgment from the user
Ans : C

13.

Ans : D

16. What is the difference between a LIST BOX and a COMBO BOX?
a] In the list box, the user is restricted to selecting a value from a list, but in a combo box the user can type in a value

17. In a CLIENT/SERVER environment, which of the following would not be done at the client?
a] User interface part b] Data validation at entry line c] Responding to user events d] None of the above
Ans : D

18.
19.

20. What does DLL stand for?
a] Dynamic Language Library b] Dynamic Link Library c] Dynamic Load Library d] None of the above
Ans : B

21. POST-BLOCK trigger is a ...

24.

27. The packaged procedure that makes data in the form permanent in the ...

34. Which of the following is TRUE for the ERASE packaged procedure:
1] ERASE removes an indicated Global variable & releases the memory associated with it
2] ERASE is used to remove a field from a page
a] Only 1 is TRUE b] Only 2 is TRUE c] Both 1 & 2 are TRUE d] Both 1 & 2 are FALSE
Ans : 1

35.

Which of the following does not affect the size of the SGA?
a] Database buffer b] Redolog buffer c] Stored procedure d] Shared pool
Ans : C

48. What does a COMMIT statement do to a CURSOR?
a] Open the Cursor b] Fetch the Cursor c] Close the Cursor d] None of the above
Ans : D

49. Which of the following is TRUE?
1] Host variables are declared anywhere in the program
2] Host variables are declared in the DECLARE section
a] Only 1 is TRUE b] Only 2 is TRUE c] Both 1 & 2 are TRUE d] Both are FALSE
Ans : B

50. Which of the following is NOT VALID in PL/SQL?
a] Bool boolean; b] NUM1, NUM2 number; c] deptname dept.dname%type; d] date1 date := sysdate
Ans : B

] Delete

        ENAME       SAL
A822    RAMASWAMY   3500
A812    NARAYAN     5000
A973    UMESH       2850
A500

c] Insert, Update, Delete

... into an Object group? a] Yes b] No
Ans : B

100. Can MULTIPLE DOCUMENT INTERFACE (MDI) be used in Forms 4.5?
a] Yes b] No
Ans : A

101. When is a .FMB file extension created in Forms 4.5?
a] Generating form b] Executing form c] Save form d] Run form
Ans : C

102. What is a Built_in subprogram?
a] Library b] Stored procedure & Function c] Collection of Subprograms
Ans : A

INDEX

1.
Query for retrieving N highest paid employees FROM each Department.
2. Query that will display the total no. of employees, and of that total the number who were hired in 1980, 1981, 1982, and 1983.
3. Query for listing Deptno, ename, sal, SUM(sal in that dept).
4. Matrix query to display the job, the salary for that job based on department number, and the total salary for that job for all departments.
5. Nth Top Salary of all the employees.
6. Retrieving the Nth row FROM a table.
7. Tree Query.
8. Eliminate duplicate rows in a table.
9. Displaying EVERY Nth row in a table.
10. Top N rows FROM a table.
11. COUNT/SUM RANGES of data values in a column.
12. For equal size ranges it might be easier to calculate it with DECODE(TRUNC(value/range), 0, rate_0, 1, rate_1, ...).
13. Count different data values in a column.
14. Query to get the product of all the values of a column.
15. Query to display only the duplicate records in a table.
16. Query for getting the following output with as many rows as there are in the table.
17. Function for getting the Balance Value.
18. Function for getting the Element Value.
19. SELECT query for counting no. of words.
20. Function to check for a leap year.
21. Query for removing all non-numeric characters.
22. Query for translating a column's values to INITCAP.
23. Function for displaying Rupees in Words.
24. Function for displaying Numbers in Words.
25. Query for deleting alternate even rows FROM a table.
26. Query for deleting alternate odd rows FROM a table.
27. Procedure for sending Email.
28. Alternate query for the DECODE function.
29. Create table adding a constraint on a date field to SYSDATE or 3 months later.
30. Query to list all the suppliers who are supplying all the parts supplied by supplier 'S2'.
31. Query to get the last Sunday of any month.
32. Query to get all those who have no children themselves.
33. Query to SELECT the last N rows FROM a table.
34. SELECT with variables.
35. Query to get the DB Name.
36. Getting the current default schema.
37.
Query to get all the column names of a particular table.
38. Spool only the query result to a file in SQLPLUS.
39. Query for getting the current SessionID.
40. Query to display rows FROM m to n.
41. Query to count no. of columns in a table.
42. Procedure to increase the buffer length.
43. Inserting an & symbol in a Varchar2 column.
44. Removing trailing blanks in a spooled file.
45. Samples for executing dynamic SQL statements.
46. Differences between SQL and MS-Access.
47. Query to display all the children and sub-children of a parent.
48. Procedure to read/write data from/to a text file.
49. Query to display a random number between any two given numbers.
50. Time difference between two date columns.
51. Using INSTR and SUBSTR.
52. View procedure code.
53. To convert a signed number to a number in Oracle.
54. Columns of a table.
55. Delete rows conditionally.
56. CLOB to Char.
57. Change Settings.
58. Double quoting a single-quoted string.
59. Time Conversion.
60. Table comparison.
61. Running Jobs.
62. Switching Columns.
63. Replace and Round.
64. First date of the year.
65. Create Sequence.
66. Cursors.
67. Current Week.
68. Create query to restrict the user to a single row.
69. Query to get the first inserted record FROM a table.
70. Concatenate a column value with multiple rows.
71. Query to delete all the tables at once.
72. SQL query for getting Orphan Records.

1. The following query retrieves the "2" highest paid employees FROM each Department:

SELECT deptno, empno, sal
FROM emp e
WHERE 2 > (SELECT COUNT(e1.sal)
           FROM emp e1
           WHERE e.deptno = e1.deptno
           AND e.sal < e1.sal)
ORDER BY 1, 3 DESC;

Index

2. Query that will display the total no. of employees, and of that total the number who were hired in 1980, 1981, 1982, and 1983. Give appropriate column headings.
I am looking at the following output. We need to stick to this format.
Total       1980        1981        1982        1983
----------  ----------  ----------  ----------  ----------
        14           1          10           2           1

SELECT COUNT(*),
       COUNT(DECODE(TO_CHAR(hiredate, 'YYYY'), '1980', empno)) "1980",
       COUNT(DECODE(TO_CHAR(hiredate, 'YYYY'), '1981', empno)) "1981",
       COUNT(DECODE(TO_CHAR(hiredate, 'YYYY'), '1982', empno)) "1982",
       COUNT(DECODE(TO_CHAR(hiredate, 'YYYY'), '1983', empno)) "1983"
FROM emp;

Index

3. Query for listing Deptno, ename, sal, SUM(sal in that dept):

SELECT a.deptno, ename, sal,
       (SELECT SUM(sal) FROM emp b WHERE a.deptno = b.deptno)
FROM emp a
ORDER BY a.deptno;

OUTPUT :
=======
DEPTNO  ENAME     SAL    SUM (SAL)
======  =======  =====   =========
10      KING      5000       11725
30      BLAKE     2850       10900
10      CLARK     2450       11725
10      JONES     2975       11725
30      MARTIN    1250       10900
30      ALLEN     1600       10900
30      TURNER    1500       10900
30      JAMES      950       10900
30      WARD      2750       10900
20      SMITH     8000       33000
20      SCOTT     3000       33000
20      MILLER   20000       33000

Index

4. Create a matrix query to display the job, the salary for that job based on department number, and the total salary for that job for all departments, giving each column an appropriate heading. The output is as follows - we need to stick to this format:

Job        Dept 10  Dept 20  Dept 30  Total
---------  -------  -------  -------  -----
ANALYST              6000             6000
CLERK       1300     1900      950    4150
MANAGER     2450     2975     2850    8275
PRESIDENT   5000                      5000
SALESMAN                      5600    5600

SELECT job "Job",
       SUM(DECODE(deptno, 10, sal)) "Dept 10",
       SUM(DECODE(deptno, 20, sal)) "Dept 20",
       SUM(DECODE(deptno, 30, sal)) "Dept 30",
       SUM(sal) "Total"
FROM emp
GROUP BY job;

Index

5. 4th Top Salary of all the employees:

SELECT DEPTNO, ENAME, SAL
FROM EMP A
WHERE 3 = (SELECT COUNT(B.SAL) FROM EMP B WHERE A.SAL < B.SAL)
ORDER BY SAL DESC;

Index

6. Retrieving the 5th row FROM a table:

SELECT DEPTNO, ENAME, SAL
FROM EMP
WHERE ROWID = (SELECT ROWID FROM EMP WHERE ROWNUM <= 5
               MINUS
               SELECT ROWID FROM EMP WHERE ROWNUM < 5);

Index

7. Tree Query :

Name                            Null?
Type
------------------------------- -------- ------------
SUB      NOT NULL  VARCHAR2(4)
SUPER              VARCHAR2(4)
PRICE              NUMBER(6,2)

SELECT sub, super
FROM parts
CONNECT BY PRIOR sub = super
START WITH sub = 'p1';

Index

8. Eliminate duplicate rows in a table:

DELETE FROM table_name A
WHERE ROWID > (SELECT MIN(ROWID) FROM table_name B WHERE A.col = B.col);

Index

9. Displaying EVERY 4th row in a table:
(If a table has 14 rows, rows 4, 8 and 12 will be selected)

SELECT *
FROM emp
WHERE (ROWID, 0) IN (SELECT ROWID, MOD(ROWNUM, 4) FROM emp);

Index

10. Top N rows FROM a table:
(Displays the top 9 salaried people)

SELECT ename, deptno, sal
FROM (SELECT * FROM emp ORDER BY sal DESC)
WHERE ROWNUM < 10;

Index

11. How does one count/sum RANGES of data values in a column?
A value x will be between values y and z if GREATEST(x, y) = LEAST(x, z).

SELECT f2,
       SUM(DECODE(GREATEST(f1, 59), LEAST(f1, 100), 1, 0)) "Range 60-100",
       SUM(DECODE(GREATEST(f1, 30), LEAST(f1, 59), 1, 0)) "Range 30-59",
       SUM(DECODE(GREATEST(f1, 29), LEAST(f1, 0), 1, 0)) "Range 00-29"
FROM my_table
GROUP BY f2;

(SUM is used rather than COUNT because the DECODE returns 0, not NULL, for rows outside the range, and COUNT would count those too.)

Index

12. For equal size ranges it might be easier to calculate it with DECODE(TRUNC(value/range), 0, rate_0, 1, rate_1, ...):

SELECT ename "Name", sal "Salary",
       DECODE(TRUNC(sal/1000, 0), 0, 0.0, 1, 0.1, 2, 0.2, 3, 0.3) "Tax rate"
FROM emp;

13. How does one count different data values in a column?

COL NAME  DATATYPE
--------  --------
DNO       NUMBER
SEX       CHAR

SELECT dno,
       SUM(DECODE(sex, 'M', 1, 0)) MALE,
       SUM(DECODE(sex, 'F', 1, 0)) FEMALE,
       COUNT(DECODE(sex, 'M', 1, 'F', 1)) TOTAL
FROM t1
GROUP BY dno;

Index

14. Query to get the product of all the values of a column (all values must be positive, since LN is defined only for positive numbers):

SELECT EXP(SUM(LN(col1))) FROM srinu;

Index

15. Query to display only the duplicate records in a table:

SELECT num
FROM satyam
GROUP BY num
HAVING COUNT(*) > 1;

Index

16.
Query for getting the following output, with as many rows as there are in the table:

*
**
***
****
*****

SELECT RPAD(DECODE(temp, temp, '*'), ROWNUM, '*')
FROM srinu1;

Index

17. Function for getting the Balance Value:

FUNCTION F_BALANCE_VALUE (p_business_group_id number,
                          p_payroll_action_id number,
                          p_balance_name      varchar2,
                          p_dimension_name    varchar2)
RETURN NUMBER
IS
  l_bal                  number;
  l_defined_bal_id       number;
  l_assignment_action_id number;
BEGIN
  SELECT assignment_action_id
  INTO l_assignment_action_id
  FROM pay_assignment_actions
  WHERE assignment_id = :p_assignment_id
  AND payroll_action_id = p_payroll_action_id;

  SELECT defined_balance_id
  INTO l_defined_bal_id
  FROM pay_balance_types pbt,
       pay_defined_balances pdb,
       pay_balance_dimensions pbd
  WHERE pbt.business_group_id = p_business_group_id
  AND UPPER(pbt.balance_name) = UPPER(p_balance_name)
  AND pbt.business_group_id = pdb.business_group_id
  AND pbt.balance_type_id = pdb.balance_type_id
  AND UPPER(pbd.dimension_name) = UPPER(p_dimension_name)
  AND pdb.balance_dimension_id = pbd.balance_dimension_id;

  l_bal := pay_balance_pkg.get_value(l_defined_bal_id, l_assignment_action_id);
  RETURN (l_bal);
EXCEPTION
  WHEN no_data_found THEN
    RETURN 0;
END;

Index

18.
Function for getting the Element Value:

FUNCTION f_element_value (p_classification_name in varchar2,
                          p_element_name        in varchar2,
                          p_business_group_id   in number,
                          p_input_value_name    in varchar2,
                          p_payroll_action_id   in number,
                          p_assignment_id       in number)
RETURN number
IS
  l_element_value   number(14,2) default 0;
  l_input_value_id  pay_input_values_f.input_value_id%type;
  l_element_type_id pay_element_types_f.element_type_id%type;
BEGIN
  SELECT DISTINCT element_type_id
  INTO l_element_type_id
  FROM pay_element_types_f pet, pay_element_classifications pec
  WHERE pet.classification_id = pec.classification_id
  AND upper(classification_name) = upper(p_classification_name)
  AND upper(element_name) = upper(p_element_name)
  AND pet.business_group_id = p_business_group_id;

  SELECT input_value_id
  INTO l_input_value_id
  FROM pay_input_values_f
  WHERE upper(name) = upper(p_input_value_name)
  AND element_type_id = l_element_type_id;

  SELECT NVL(prrv.result_value, 0)
  INTO l_element_value
  FROM pay_run_result_values prrv,
       pay_run_results prr,
       pay_assignment_actions paa
  WHERE prrv.run_result_id = prr.run_result_id
  AND prr.assignment_action_id = paa.assignment_action_id
  AND paa.assignment_id = p_assignment_id
  AND input_value_id = l_input_value_id
  AND paa.payroll_action_id = p_payroll_action_id;

  RETURN (l_element_value);
EXCEPTION
  WHEN no_data_found THEN
    RETURN 0;
END;

Index

19. SELECT query for counting no. of words:

SELECT ename,
       NVL(LENGTH(REPLACE(TRANSLATE(UPPER(RTRIM(ename)),
           'ABCDEFGHIJKLMNOPQRSTUVWXYZ'' ', ' @'), ' ', '')) + 1, 1) word_length
FROM emp;

Explanation:
TRANSLATE(UPPER(RTRIM(ename)), 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'' ', ' @') -- This will translate all the characters FROM A-Z, including a single quote, to a space. It will also translate a space to a @.
REPLACE(TRANSLATE(UPPER(RTRIM(ename)), 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'' ', ' @'), ' ', '') -- This will replace every space with nothing in the above result.
LENGTH(REPLACE(TRANSLATE(UPPER(RTRIM(ename)), 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'' ', ' @'), ' ', '')) + 1 -- This will give you the count of @ characters in the above result.

Index

20. Function to check for a leap year:

CREATE OR REPLACE FUNCTION is_leap_year (p_date IN DATE)
RETURN VARCHAR2
AS
  v_test DATE;
BEGIN
  v_test := TO_DATE('29-Feb-' || TO_CHAR(p_date, 'YYYY'), 'DD-Mon-YYYY');
  RETURN 'Y';
EXCEPTION
  WHEN OTHERS THEN
    RETURN 'N';
END is_leap_year;

SQL> SELECT hiredate, TO_CHAR(hiredate, 'Day') weekday
     FROM emp
     WHERE is_leap_year(hiredate) = 'Y';

Index

21. Query for removing all non-numeric characters:

SELECT TRANSLATE(LOWER(ssn), '0abcdefghijklmnopqrstuvwxyz- ', '0') FROM DUAL;

(The leading '0' mapped to itself is needed because TRANSLATE returns NULL when its third argument is an empty string.)

Index

22. Query for translating a column's values to INITCAP:

SELECT TRANSLATE(INITCAP(temp),
       SUBSTR(temp, INSTR(temp, '''') + 1, 1),
       LOWER(SUBSTR(temp, INSTR(temp, '''') + 1)))
FROM srinu1;

Index

23. Function for displaying Rupees in Words:

CREATE OR REPLACE FUNCTION RUPEES_IN_WORDS (amt IN NUMBER)
RETURN CHAR
IS
  amount   NUMBER(10,2);
  v_length INTEGER := 0;
  v_num2   VARCHAR2(50) := NULL;
  v_amount VARCHAR2(50);
  v_word   VARCHAR2(4000) := NULL;
  v_word1  VARCHAR2(4000) := NULL;
  TYPE myarray IS TABLE OF VARCHAR2(255);
  v_str myarray := myarray(' thousand ', ' lakh ', ' crore ', ' arab ', ' kharab ', ' shankh ');
BEGIN
  amount := amt;
  IF ((amount = 0) OR (amount IS NULL)) THEN
    v_word := 'zero';
  ELSIF (TO_CHAR(amount) LIKE '%.%') THEN
    IF (SUBSTR(amount, INSTR(amount, '.') + 1) > 0) THEN
      v_num2 := SUBSTR(amount, INSTR(amount, '.') + 1);
      IF (LENGTH(v_num2) < 2) THEN
        v_num2 := v_num2 * 10;
      END IF;
      v_word1 := ' AND ' || (TO_CHAR(TO_DATE(SUBSTR(v_num2, LENGTH(v_num2) - 1, 2), 'J'), 'JSP')) || ' paise ';
      v_amount := SUBSTR(amount, 1, INSTR(amount, '.') - 1);
      v_word := TO_CHAR(TO_DATE(SUBSTR(v_amount, LENGTH(v_amount) - 2, 3), 'J'), 'Jsp') || v_word;
      v_amount := SUBSTR(v_amount, 1, LENGTH(v_amount) - 3);
      FOR i IN 1 ..
v_str.COUNT LOOP
        EXIT WHEN (v_amount IS NULL);
        v_word := TO_CHAR(TO_DATE(SUBSTR(v_amount, LENGTH(v_amount) - 1, 2), 'J'), 'Jsp') || v_str(i) || v_word;
        v_amount := SUBSTR(v_amount, 1, LENGTH(v_amount) - 2);
      END LOOP;
    END IF;
  ELSE
    v_word := TO_CHAR(TO_DATE(SUBSTR(amount, LENGTH(amount) - 2, 3), 'J'), 'Jsp');
    amount := SUBSTR(amount, 1, LENGTH(amount) - 3);
    FOR i IN 1 .. v_str.COUNT LOOP
      EXIT WHEN (amount IS NULL);
      v_word := TO_CHAR(TO_DATE(SUBSTR(amount, LENGTH(amount) - 1, 2), 'J'), 'Jsp') || v_str(i) || v_word;
      amount := SUBSTR(amount, 1, LENGTH(amount) - 2);
    END LOOP;
  END IF;
  v_word := v_word || ' ' || v_word1 || ' only ';
  v_word := REPLACE(RTRIM(v_word), '  ', ' ');
  v_word := REPLACE(RTRIM(v_word), '-', ' ');
  RETURN INITCAP(v_word);
END;

Index

24. Function for displaying Numbers in Words:

SELECT TO_CHAR(TO_DATE(SUBSTR(TO_CHAR(5373484), 1), 'j'), 'Jsp') FROM DUAL;

Works only for integers from 1 to 5373484.

Index

25. Query for deleting alternate even rows FROM a table:

DELETE FROM srinu
WHERE (ROWID, 0) IN (SELECT ROWID, MOD(ROWNUM, 2) FROM srinu);

Index

26. Query for deleting alternate odd rows FROM a table:

DELETE FROM srinu
WHERE (ROWID, 1) IN (SELECT ROWID, MOD(ROWNUM, 2) FROM srinu);

Index

27.
Procedure for sending Email:

CREATE OR REPLACE PROCEDURE Send_Mail
IS
  sender      VARCHAR2(50)   := 'sender@something.com';
  recipient   VARCHAR2(50)   := 'recipient@something.com';
  subject     VARCHAR2(100)  := 'Test Message';
  message     VARCHAR2(1000) := 'This is a sample mail ....';
  lv_mailhost VARCHAR2(30)   := 'HOTNT002';
  l_mail_conn utl_smtp.connection;
  lv_crlf     VARCHAR2(2)    := CHR(13) || CHR(10);
BEGIN
  l_mail_conn := utl_smtp.open_connection(lv_mailhost, 80);
  utl_smtp.helo(l_mail_conn, lv_mailhost);
  utl_smtp.mail(l_mail_conn, sender);
  utl_smtp.rcpt(l_mail_conn, recipient);
  utl_smtp.open_data(l_mail_conn);
  utl_smtp.write_data(l_mail_conn, 'FROM: ' || sender || lv_crlf);
  utl_smtp.write_data(l_mail_conn, 'To: ' || recipient || lv_crlf);
  utl_smtp.write_data(l_mail_conn, 'Subject:' || subject || lv_crlf);
  utl_smtp.write_data(l_mail_conn, lv_crlf || message);
  utl_smtp.close_data(l_mail_conn);
  utl_smtp.quit(l_mail_conn);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error');
END;
/

Index

28. Alternate query for the DECODE function:

SELECT CASE WHEN sex = 'm' THEN 'male'
            WHEN sex = 'f' THEN 'female'
            ELSE 'unknown'
       END
FROM mytable;

Index

29. Create table adding a constraint on a date field to SYSDATE or 3 months later:

CREATE TABLE srinu (
  dt1 date DEFAULT SYSDATE,
  dt2 date,
  CONSTRAINT check_dt2 CHECK ((dt2 >= dt1) AND (dt2 <= ADD_MONTHS(SYSDATE, 3))));

Note: Oracle actually rejects SYSDATE inside a CHECK constraint because the condition must be deterministic, so in practice this rule is enforced with a trigger.

Index

30. Query to list all the suppliers who supply all the parts supplied by supplier 'S2':

SELECT DISTINCT a.SUPP
FROM ORDERS a
WHERE a.supp != 'S2'
AND a.parts IN (SELECT DISTINCT PARTS FROM ORDERS WHERE supp = 'S2')
GROUP BY a.SUPP
HAVING COUNT(DISTINCT a.PARTS) >= (SELECT COUNT(DISTINCT PARTS)
                                   FROM ORDERS
                                   WHERE supp = 'S2');

Table : orders

SUPP  PARTS
----  -----
S1    P1
S1    P2
S1    P3
S1    P4
S1    P5
S1    P6
S2    P1
S2    P2
S3    P2
S4    P2
S4    P4
S4    P5

Index

31. Query to get the last Sunday of any month:

SELECT NEXT_DAY(LAST_DAY(TO_DATE('26-10-2001', 'DD-MM-YYYY')) - 7, 'sunday')
FROM DUAL;

Index

32.
Query to get all those who have no children themselves:

table data:
id  name  parent_id
--  ----  ---------
1   a     NULL       - the top level entry
2   b     1          - a child of 1
3   c     1
4   d     2          - a child of 2
5   e     2
6   f     3
7   g     3
8   h     4
9   i     8
10  j     9

SELECT ID
FROM MY_TABLE
WHERE PARENT_ID IS NOT NULL
MINUS
SELECT PARENT_ID FROM MY_TABLE;

Index

33. Query to SELECT the last N rows FROM a table:

SELECT empno
FROM emp
WHERE ROWID IN (SELECT ROWID FROM emp
                MINUS
                SELECT ROWID FROM emp
                WHERE ROWNUM <= (SELECT COUNT(*) - 5 FROM emp));

Index

34. SELECT with variables:

CREATE OR REPLACE PROCEDURE disp
AS
  xTableName varchar2(25) := 'emp';
  xFieldName varchar2(25) := 'ename';
  xValue     NUMBER;
  xQuery     varchar2(100);
  name       varchar2(10) := 'CLARK';
BEGIN
  xQuery := 'SELECT SAL FROM ' || xTableName ||
            ' WHERE ' || xFieldName || ' = ''' || name || '''';
  DBMS_OUTPUT.PUT_LINE(xQuery);
  EXECUTE IMMEDIATE xQuery INTO xValue;
  DBMS_OUTPUT.PUT_LINE(xValue);
END;

Index

35. Query to get the DB Name:

SELECT name FROM v$database;

Index

36. Getting the current default schema:

SELECT SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') FROM DUAL;

Index

37. Query to get all the column names of a particular table:

SELECT column_name
FROM all_tab_columns
WHERE TABLE_NAME = 'ORDERS';

Index

38. How do I spool only the query result to a file in SQLPLUS:
Place the following lines of code in a file and execute the file in SQLPLUS:

set heading off
set feedback off
set colsep ' '
set termout off
set verify off
spool c:\srini.txt
SELECT empno, ename FROM emp;  /* Write your Query here */
spool off
/

Index

39. Query for getting the current SessionID:

SELECT SYS_CONTEXT('USERENV', 'SESSIONID') Session_ID FROM DUAL;

Index

40. Query to display rows FROM m to n:
To display rows 5 to 7:

SELECT DEPTNO, ENAME, SAL
FROM EMP
WHERE ROWID IN (SELECT ROWID FROM EMP WHERE ROWNUM <= 7
                MINUS
                SELECT ROWID FROM EMP WHERE ROWNUM < 5);

OR

SELECT ename
FROM emp
GROUP BY ROWNUM, ename
HAVING ROWNUM > 1 AND ROWNUM < 3;

Index

41. Query to count no.
Of columns in a table:

SELECT COUNT(column_name)
FROM user_tab_columns
WHERE table_name = 'MYTABLE';

Index

42. Procedure to increase the buffer length:

dbms_output.enable(4000);  /* allows the output buffer to be increased to the specified number of bytes */

DECLARE
BEGIN
  dbms_output.enable(4000);
  FOR i IN 1..400 LOOP
    DBMS_OUTPUT.PUT_LINE(i);
  END LOOP;
END;
/

Index

43. Inserting an & symbol in a Varchar2 column:
Set the following to some other character. By default it is &.

set define '~'

Index

44. How do you remove trailing blanks in a spooled file:
Change the environment options like this:

set trimspool on
set trimout on

45. Samples for executing dynamic SQL statements:

Sample 1:

CREATE OR REPLACE PROCEDURE CNT (P_TABLE_NAME IN VARCHAR2)
AS
  SqlString VARCHAR2(200);
  tot       number;
BEGIN
  SqlString := 'SELECT COUNT(*) FROM ' || P_TABLE_NAME;
  EXECUTE IMMEDIATE SqlString INTO tot;
  DBMS_OUTPUT.PUT_LINE('Total No. Of Records In ' || P_TABLE_NAME || ' ARE=' || tot);
END;

Sample 2:

DECLARE
  sql_stmt    VARCHAR2(200);
  plsql_block VARCHAR2(500);
  emp_id      NUMBER(4) := 7566;
  salary      NUMBER(7,2);
  dept_id     NUMBER(2) := 50;
  dept_name   VARCHAR2(14) := 'PERSONNEL';
  location    VARCHAR2(13) := 'DALLAS';
  emp_rec     emp%ROWTYPE;
BEGIN
  EXECUTE IMMEDIATE 'CREATE TABLE bonus (id NUMBER, amt NUMBER)';
  sql_stmt := 'INSERT INTO dept VALUES (:1, :2, :3)';
  EXECUTE IMMEDIATE sql_stmt USING dept_id, dept_name, location;
  sql_stmt := 'SELECT * FROM emp WHERE empno = :id';
  EXECUTE IMMEDIATE sql_stmt INTO emp_rec USING emp_id;
  plsql_block := 'BEGIN emp_pkg.raise_salary(:id, :amt); END;';
  EXECUTE IMMEDIATE plsql_block USING 7788, 500;
  sql_stmt := 'UPDATE emp SET sal = 2000 WHERE empno = :1 RETURNING sal INTO :2';
  EXECUTE IMMEDIATE sql_stmt USING emp_id RETURNING INTO salary;
  EXECUTE IMMEDIATE 'DELETE FROM dept WHERE deptno = :num' USING dept_id;
  EXECUTE IMMEDIATE 'ALTER SESSION SET SQL_TRACE TRUE';
END;

Sample 3:

CREATE OR REPLACE PROCEDURE DEPARTMENTS (NO IN DEPT.DEPTNO%TYPE)
AS
  v_cursor integer;
  v_dname  char(20);
  v_rows   integer;
BEGIN
  v_cursor :=
DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(v_cursor, 'select dname from dept where deptno > :x', DBMS_SQL.V7);
  DBMS_SQL.BIND_VARIABLE(v_cursor, ':x', no);
  DBMS_SQL.DEFINE_COLUMN_CHAR(v_cursor, 1, v_dname, 20);
  v_rows := DBMS_SQL.EXECUTE(v_cursor);
  LOOP
    IF DBMS_SQL.FETCH_ROWS(v_cursor) = 0 THEN
      EXIT;
    END IF;
    DBMS_SQL.COLUMN_VALUE_CHAR(v_cursor, 1, v_dname);
    DBMS_OUTPUT.PUT_LINE('Department name: ' || v_dname);
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(v_cursor);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_SQL.CLOSE_CURSOR(v_cursor);
    raise_application_error(-20000, 'Unknown Exception Raised: ' || sqlcode || ' ' || sqlerrm);
END;

Index

46. Differences between SQL and MS-Access:

Difference 1:
Oracle : select name from table1 where name like 'k%';
Access : select name from table1 where name like 'k*';

Difference 2:
Access : SELECT TOP 2 name FROM Table1;
Oracle : will not work; there is no such TOP keyword.

Index

47. Query to display all the children and sub-children of a parent:

SELECT organization_id, name
FROM hr_all_organization_units
WHERE organization_id IN (
  SELECT ORGANIZATION_ID_CHILD
  FROM PER_ORG_STRUCTURE_ELEMENTS
  CONNECT BY PRIOR ORGANIZATION_ID_CHILD = ORGANIZATION_ID_PARENT
  START WITH ORGANIZATION_ID_CHILD = (SELECT organization_id
                                      FROM hr_all_organization_units
                                      WHERE name = 'EBG Corporate Group'));

Index

48. Procedure to read/write data from/to a text file:

CREATE OR REPLACE PROCEDURE read_data
AS
  c_path      varchar2(100) := '/usr/tmp';
  c_file_name varchar2(20)  := 'EKGSEP01.CSV';
  v_file_id   utl_file.file_type;
  v_buffer    varchar2(1022) := 'This is a sample text';
BEGIN
  v_file_id := UTL_FILE.FOPEN(c_path, c_file_name, 'w');
  UTL_FILE.PUT_LINE(v_file_id, v_buffer);
  UTL_FILE.FCLOSE(v_file_id);
  v_file_id := UTL_FILE.FOPEN(c_path, c_file_name, 'r');
  UTL_FILE.GET_LINE(v_file_id, v_buffer);
  DBMS_OUTPUT.PUT_LINE(v_buffer);
  UTL_FILE.FCLOSE(v_file_id);
END;
/

Index

49. Query to display a random number between any two given numbers:

SELECT DBMS_RANDOM.VALUE(1, 2) FROM DUAL;

Index

50. Time difference between two date columns:
my_table;

Index

51.
Using INSTR and SUBSTR:
I have this string in a column named location: LOT 8 CONC3 RR. Using INSTR and SUBSTR, I want to take whatever value follows LOT and put it into one column, and whatever value follows CONC and put it into another column.

select substr('LOT 8 CONC3 RR', 4, instr('LOT 8 CONC3 RR', 'CONC') - 4) from dual;

select substr('LOT 8 CONC3 RR', -(length('LOT 8 CONC3 RR') - (instr('LOT 8 CONC3 RR', 'CONC') + 3))) from dual;

Index

52. View procedure code:

select text from all_source where name = 'X' order by line;
select text from user_source where name = 'X';
select text from user_source where type = 'PROCEDURE' and name = 'procedure_name';
select name, text from dba_source where name = 'ur_procedure' and owner = 'SCOTT';

Index

53. To convert a signed number to a number in Oracle:

select to_number('-999,999.99', 's999,999.99') from dual;     -- -999999.99
select to_number('+0,123.45', 's999,999,999.99') from dual;   -- 123.45
select to_number('+999,999.99', 's999,999.99') from dual;     -- 999999.99

Index

54. Columns of a table:

select column_name from user_tab_columns where TABLE_NAME = 'EMP';
select column_name from all_tab_columns where TABLE_NAME = 'EMP';
select column_name from dba_tab_columns where TABLE_NAME = 'EMP';
select column_name from cols where TABLE_NAME = 'EMP';

Index

55. Delete rows conditionally:
I have a table with fields a, b, c; (a, b) should be unique, and the row with max(c) should remain. How can I delete the other rows?

delete from 'table' where (a, b, c) not in (select a, b, max(c) from 'table' group by a, b);

Index

56. CLOB to Char:
1) This function helps if your CLOB column value does not exceed 4000 bytes (the VARCHAR2 limit). If the CLOB column's data exceeds the 4000 limit, you have to follow a different approach.
create or replace function lob_to_char (clob_col clob) return varchar2
IS
  buffer varchar2(4000);
  amt    BINARY_INTEGER := 4000;
  pos    INTEGER := 1;
  l      clob;
  bfils  bfile;
  l_var  varchar2(4000) := '';
begin
  LOOP
    if dbms_lob.getlength(clob_col) <= 4000 THEN
      dbms_lob.read(clob_col, amt, pos, buffer);
      l_var := l_var || buffer;
      pos := pos + amt;
    ELSE
      l_var := 'Cannot convert to varchar2..Exceeding varchar2 field limit';
      exit;
    END IF;
  END LOOP;
  return l_var;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    return l_var;
END;

create or replace package lobpkg is
  type ref1 is ref cursor;
  n number := 0;
  PROCEDURE lob_to_char (rvar IN OUT lobpkg.ref1);
end;
/

create or replace package body lobpkg is
  PROCEDURE lob_to_char (rvar IN OUT lobpkg.ref1)
  IS
    buffer varchar2(4000);
    amt    BINARY_INTEGER := 4000;
    pos    INTEGER := 1;
    l      clob;
    r      lobpkg.ref1;
    bfils  bfile;
    l_var  varchar2(4000) := '';
    CURSOR C1 IS SELECT * FROM clob_tab;  -- change clob_tab to your_table_name
  begin
    n := n + 1;
    FOR crec IN c1 LOOP
      amt := 4000;
      pos := 1;
      BEGIN
        LOOP
          -- change crec.clob_col to crec.your_column_name
          dbms_lob.read(crec.clob_col, amt, pos, buffer);
          -- change next line if you create temporary table with different name
          insert into temp_tab values (n, buffer);
          pos := pos + amt;
        END LOOP;
      EXCEPTION
        WHEN NO_DATA_FOUND THEN NULL;
      END;
    END LOOP;
    -- change next line if you create temporary table with different name
    open rvar for select vchar from temp_tab where id = n;
  END;
END;

Index

57. Change Settings:
Open the file oracle_home\plus32\glogin.sql and add:

set linesize 100
set pagesize 20

Save the file and exit from SQL*Plus; when you reload, the settings take effect.

Index

58.
Double quoting a single-quoted string:

declare
  -- we need one here to get a single quote into the variable
  v_str varchar2(20) := 'O''reilly''s';
begin
  DBMS_OUTPUT.PUT_LINE('original single quoted v_str= ' || v_str);
  v_str := replace(v_str, '''', '''''');
  DBMS_OUTPUT.PUT_LINE('after double quoted v_str= ' || v_str);
end;

SQL> /
original single quoted v_str= O'reilly's
after double quoted v_str= O''reilly''s

Index

59. Time Conversion:

CREATE OR REPLACE FUNCTION to_hms (i_days IN number)
RETURN varchar2
IS
BEGIN
  RETURN TO_CHAR(TRUNC(i_days)) || ' days ' ||
         TO_CHAR(TRUNC(SYSDATE) + MOD(i_days, 1), 'HH24:MI:SS');
END to_hms;

select to_hms(to_date('17-Jan-2002 13:20:20', 'dd-Mon-yyyy hh24:mi:ss') -
              to_date('11-Jan-2002 11:05:05', 'dd-Mon-yyyy hh24:mi:ss'))
from dual;

Index

60. Table comparison:
The table in both schemas should have exactly the same structure. The data in it could be the same or different: compare a-b and b-a.

select * from a.a minus select * from b.a;
select * from b.a minus select * from a.a;

Index

61. Running Jobs:

select * from user_jobs;
exec dbms_job.remove(job_no);

Index

62. Switching Columns:

Update tblname
Set column1 = column2,
    Column2 = column1;

Index

63. Replace and Round:
I have the number e.g. 63,9823874012983 and I want to round it to 63,98 and at the same time change the , to a .

select round(replace('63,9823874012983', ',', '.'), 2) from dual;

Index

64. First date of the year:

select trunc(sysdate, 'y') from dual;        -- 01-jan-2002

Last year this month through a select statement:

select add_months(sysdate, -12) from dual;   -- 05-APR-01

Index

65. Create Sequence:

create sequence sh increment by 1 minvalue 0 start with 0;

(minvalue 0 is needed because an ascending sequence's default MINVALUE is 1, so START WITH 0 would otherwise be rejected.)

Index

66. Cursors:
A cursor is something like a pointer in the C language. You fetch data using a cursor (that is, store it somewhere temporarily) and can do any manipulation on the data that is fetched, like trim, pad, concatenate or validate. All this is done in a temporary area called the context area or the cursor area.
You can then insert this data into some other table, or do whatever else you need, such as setting flags. You can display the contents of a cursor using dbms_output, from either an anonymous PL/SQL block or a stored procedure. The major advantage of cursors is that you can fetch more than one row, loop through the result set, and do your manipulations in a controlled manner.

set serveroutput on;

declare
  cursor c1 is select * from emp;
begin
  for var in c1 loop
    dbms_output.put_line('the employee ' || var.ename ||
                         ' draws a salary of ' || var.sal);
  end loop;
end;

Index 67. Current Week

select next_day(sysdate-7, 'SUNDAY'), next_day(sysdate, 'SATURDAY') from dual;

NEXT_DAY( NEXT_DAY(
--------- ---------
07-APR-02 13-APR-02

Index 68. Create Query to restrict the user to a single row
Index 69. Query to get the first inserted record FROM a table
Index 70. How to concatenate a column value with multiple rows
Index 71. Query to delete all the tables at once
Index 72. SQL Query for getting Orphan Records

ORACLE FAQs:

1. WHAT IS DATA OR INFORMATION?
Ans: The matter that we feed into the computer is called data or information.

2. WHAT IS DATABASE?
Ans: A collection of interrelated data is called a database.

3. WHAT IS A DATABASE MANAGEMENT SYSTEM (DBMS) PACKAGE?
Ans: A collection of interrelated data, together with programs to access that data, is called a Database Management System (DBMS).

4. WHEN CAN WE SAY A DBMS PACKAGE IS AN RDBMS?
Ans: For a system to qualify as a RELATIONAL database management system, it must use its RELATIONAL facilities to MANAGE the DATABASE.

5. WHAT IS ORDBMS?
Ans: An Object (Oriented) Relational Database Management System is one that can store data, the relationships of the data, and the behavior of the data (i.e., the way it interacts with other data).

6. NAME SOME CODD'S RULES.
Ans: Dr. E. F. Codd presented 12 rules that a database must obey if it is to be considered truly relational.
Of those, some are as follows:
a) The rules stem from a single rule, the zero rule: for a system to qualify as a RELATIONAL database management system, it must use its RELATIONAL facilities to MANAGE the DATABASE.
b) Information Rule: tabular representation of information.
c) Guaranteed Access Rule: uniqueness of tuples for guaranteed accessibility.
d) Missing Information Rule: systematic representation of missing information as NULL values.
e) Comprehensive Data Sub-Language Rule: a query language to support data definition, view definition, data manipulation, integrity, authorization and security.

7. WHAT ARE HIERARCHICAL, NETWORK, AND RELATIONAL DATABASE MODELS?
Ans:
a) Hierarchical Model: the hierarchical model was introduced in the Information Management System (IMS) developed by IBM in 1968. Data is organized as a tree structure. Each tree is made of nodes and branches; the nodes represent record types, each a collection of data attributes of the entity at that point. The topmost node in the structure is called the root, and nodes at successively lower levels are called children.
b) Network Model: the network model, also called the CODASYL database structure, is an improvement over the hierarchical model. The concept of parent and child is expanded to allow multiple parent-child relationships, i.e. any child can be subordinate to many different parents (or nodes). Data is represented by a collection of records, and relationships among data are represented by links; a link is an association between precisely two records. Many-to-many relationships can exist between parent and child.
c) Relational Model: the relational database model eliminates the need for explicit parent-child relationships. In an RDBMS, data is organized in two-dimensional tables (relations) consisting of rows and columns; no pointers are maintained between tables.

8. WHAT IS DATA MODELING?
Ans: Data modeling describes the relationships between the data objects.
The relationships between the collections of data in a system may be represented graphically using data modeling.

9. DEFINE ENTITY, ATTRIBUTE AND RELATIONSHIP.
Ans:
Entity: an entity is a thing that can be distinctly identified — any object, place, person, concept or activity about which an enterprise records data.
Attribute: an attribute is a property of a given entity.
Relationship: a relationship is an association among entities.

10. WHAT IS ER-MODELING?
Ans: The E-R modeling technique is a top-down approach. Entity-relationship modeling is a technique for the analysis and logical modeling of a system's data requirements. It is the most widely used technique and has gained acceptance as an ideal basis for database design. It uses three basic units — entities, their attributes, and the relationships that exist between the entities — and a graphical notation for representing them.

11. WHAT IS NORMALIZATION?
Ans: Normalization is a step-by-step decomposition of complex records into simple records.

12. WHAT ARE VARIOUS NORMAL FORMS OF DATA?
Ans: The First Normal Form (1NF), the Second Normal Form (2NF), the Third Normal Form (3NF), and the Boyce-Codd Normal Form (BCNF).

13. WHAT IS DENORMALIZATION?
Ans: The intentional introduction of redundancy into a table to improve performance is called DENORMALIZATION.

14. WHAT ARE 1-TIER, 2-TIER, 3-TIER OR N-TIER DATABASE ARCHITECTURES?
Ans:
1-Tier architecture: a single system acts as both server and client.
2-Tier architecture: a client connects directly to one database server.
3-Tier architecture: a middle tier (typically an application server) sits between the client and the database server.
N-Tier architecture: the middle layer is split further, giving any number of server tiers between the clients and the database.

15. WHAT ARE A TABLE, COLUMN, AND RECORD?
Ans:
Table: a table is a database object that holds your data. It is made up of many columns, each of which has a data type associated with it.
Column: a column, also referred to as an attribute, is similar to a field in a file system.
Record: a row, usually referred to as a tuple, is similar to a record in a file system.

16. WHAT IS DIFFERENCE BETWEEN A PROCEDURAL LANGUAGE AND A NON-PROCEDURAL LANGUAGE?
Ans: A program in a procedural language implements a step-by-step algorithm to solve the problem (it states how to do it). A program in a non-procedural language states what to do, but not how to do it.

17. WHAT TYPE OF LANGUAGE "SQL" IS?
Ans: SQL is a non-procedural, 4th-generation language, which concerns what to do rather than how any process is done.

18. CLASSIFICATION OF SQL COMMANDS?
Ans:
DDL (Data Definition Language): Create, Alter, Drop
DML (Data Manipulation Language): Select, Insert, Update, Delete
DTL (Data Transaction Language): Commit, Rollback
DCL (Data Control Language): Grant, Revoke

19. WHAT IS DIFFERENCE BETWEEN DDL AND DML COMMANDS?
Ans: For DDL commands autocommit is ON implicitly, whereas for DML commands autocommit has to be turned ON explicitly.

20. WHAT IS DIFFERENCE BETWEEN A TRANSACTION AND A QUERY?
Ans: A transaction is a unit of one or more commands, whereas a query is a single request for information from the database.

21. WHAT IS DIFFERENCE BETWEEN TRUNCATE AND DELETE COMMANDS?
Ans: Truncate removes all the records, whereas Delete removes specified records, or all of them, depending on the condition given.

22. WHAT IS DIFFERENCE BETWEEN UPDATE AND ALTER COMMANDS?
Ans: The Alter command modifies the structure of database objects, whereas the Update command modifies the values held in a database object.

23. WHAT ARE COMMANDS OF TCL CATEGORY?
Ans: Commit, Rollback and Savepoint are the commands belonging to the TCL category.

24. WHICH IS AN EFFICIENT COMMAND - TRUNCATE OR DELETE? WHY?
Ans: Truncate is the more efficient command, because it is DDL and does not generate undo for each row; Delete is the more flexible, since with it we can remove only those records that are not really required.

25. WHAT ARE RULES FOR NAMING A TABLE OR COLUMN?
Ans:
1) Names must be from 1 to 30 bytes long.
2) Names cannot contain quotation marks.
3) Names are not case sensitive.
4) A name must begin with an alphabetic character from your database character set; it may also contain the characters $ and #, but these characters are discouraged.
5) A name cannot be an ORACLE reserved word.
6) A name must be unique across its namespace: objects in the same namespace must have different names.
7) A name can be enclosed in double quotes.

26. HOW MANY COLUMNS CAN A TABLE HAVE?
Ans: A table can have up to 1000 columns.

27. WHAT ARE DIFFERENT DATATYPES SUPPORTED BY SQL?
Ans: CHAR, VARCHAR2, NUMBER, FLOAT, DATE, LONG, RAW, LONG RAW, and the large-object types CLOB, NCLOB, BLOB and BFILE.

28. WHAT IS DIFFERENCE BETWEEN LONG AND LOB DATATYPES?
Ans:
LOB: 1) the maximum size is 4 GB; 2) LOBs (except NCLOB) can be attributes of an object type; 3) LOBs support random access to the data; 4) a table may have multiple LOB columns, and an object type multiple LOB attributes.
LONG: 1) the maximum size is 2 GB; 2) LONGs cannot be attributes of an object type; 3) LONGs support only sequential access; 4) only one LONG column is allowed per table.

29. WHAT IS DIFFERENCE BETWEEN CHAR AND VARCHAR2 DATATYPES?
Ans: Char is fixed-length and blank-padded to its declared size, whereas Varchar2 stores a variable number of characters; when querying the table, varchar2 carries no trailing padding, so rows are fetched only when they exactly match the criteria.

30. HOW MUCH MEMORY IS ALLOCATED FOR DATE DATATYPE? WHAT IS DEFAULT DATE FORMAT IN ORACLE?
Ans: For the Date data type Oracle allocates 7 bytes of memory. The default date format is DD-MON-YY.

31. WHAT IS RANGE FOR EACH DATATYPE OF SQL?
Ans:
Datatype             Range
Char                 2000 bytes
Varchar2             4000 bytes
Number               precision 1 to 38, scale -84 to 127
Float                precision of 38 decimal digits, or 126 binary digits
LONG, RAW, LONG RAW  2 GB
Large Objects (LOBs) 4 GB

32. HOW TO RENAME A COLUMN?
Ans: We cannot rename a column of a table directly, so we follow these steps:
a) Alter the table, adding a new column with the new name and the same data type.
b) Copy the values from the column to be renamed into the new column.
c) Drop the old column.

33. HOW TO DECREASE SIZE OR CHANGE DATATYPE OF A COLUMN?
Ans: To decrease the size or change the data type of a column:
i. Truncate the table first.
ii.
Alter the table, modifying the column whose size is to be decreased, keeping the same name and data type but giving the new size.

34. WHAT IS A CONSTRAINT? WHAT ARE ITS VARIOUS LEVELS?
Ans: Constraints are rules attached to columns to enforce data integrity and consistency. There are two levels: 1) column-level constraints; 2) table-level constraints.

35. LIST OUT ALL THE CONSTRAINTS SUPPORTED BY SQL.
Ans: Not Null, Unique, Check, Primary Key and Foreign Key (referential integrity).

36. WHAT IS DIFFERENCE BETWEEN UNIQUE+NOT NULL AND PRIMARY KEY?
Ans: Unique plus Not Null is a combination of two constraints that can appear any number of times in a table, whereas Primary Key is a single constraint that can appear only once per table; the primary key is the main identifier of a row and is what a foreign key column of another table normally references, forming referential integrity.

37. WHAT IS A COMPOSITE PRIMARY KEY?
Ans: A primary key created on a combination of columns is called a composite primary key.

38. WHAT IS A CANDIDATE COLUMN? HOW MANY CANDIDATE COLUMNS CAN BE POSSIBLE PER COMPOSITE PRIMARY KEY?
Ans:

39. HOW TO DEFINE A NULL VALUE?
Ans: A NULL value is something that is unavailable; it is neither zero nor a space, and any mathematical calculation with NULL is always NULL.

40. WHAT IS NULL? A CONSTRAINT OR DEFAULT VALUE?
Ans: It is a default value.

41. WHAT IS DEFAULT VALUE FOR EVERY COLUMN OF A TABLE?
Ans: NULL.

42. WHAT IS CREATED IMPLICITLY FOR EVERY UNIQUE AND PRIMARY KEY COLUMNS?
Ans: An index.

43. WHAT ARE LIMITATIONS OF CHECK CONSTRAINT?
Ans: In a check constraint we cannot reference pseudo-columns such as SYSDATE, etc.

44. WHAT IS DIFFERENCE BETWEEN REFERENCES AND FOREIGN KEY CONSTRAINT?
Ans: References is used as the column-level keyword, whereas Foreign Key is used as the table-level constraint.

45. WHAT IS "ON DELETE CASCADE"?
Ans: When this keyword is included in the definition of a child table, then whenever records are deleted from the parent table, the corresponding rows in the child table are deleted automatically.

46. WHAT IS PARENT-CHILD OR MASTER-DETAIL RELATIONSHIP?
Ans: A table that references a column of another table (using References) is called the child (detail) table, and the table being referred to is called the parent (master) table.

47. HOW TO DROP A PARENT TABLE WHEN ITS CHILD TABLE EXISTS?
Ans: Using DROP TABLE <table name> CASCADE CONSTRAINTS, which drops the referential constraints pointing at the parent.

48. IS ORACLE CASE SENSITIVE?
Ans: No — identifiers are not case sensitive (data values, however, are compared case-sensitively).

49. HOW ORACLE IDENTIFIES EACH RECORD OF TABLE UNIQUELY?
Ans: By its ROWID, the internal address of the row.

50. WHAT IS A PSEUDO-COLUMN? NAME SOME PSEUDO-COLUMNS OF ORACLE?
Ans: Columns that are not created explicitly by the user but can be used explicitly in queries are called pseudo-columns. Ex: CURRVAL, NEXTVAL, SYSDATE, ROWID, ROWNUM.

51. WHAT FOR "ORDER BY" CLAUSE FOR A QUERY?
Ans: To arrange the query result in a specified order (ascending or descending); by default it sorts in ascending order.

52. WHAT IS "GROUP BY" QUERIES?
Ans: To group the query results based on one or more grouping columns.

53. NAME SOME AGGREGATE FUNCTIONS OF SQL?
Ans: AVG, MAX, SUM, MIN, COUNT.

54. WHAT IS DIFFERENCE BETWEEN COUNT (), COUNT (*) FUNCTIONS?
Ans: Count(column) counts the non-NULL values of the specified column, whereas count(*) counts the total number of rows in the table.

55. WHAT FOR ROLLUP AND CUBE OPERATORS ARE?
Ans: To get subtotals and a grand total of the values of a column.

56. WHAT IS A SUB-QUERY?
Ans: A query within a query is called a sub-query; the result of the inner query is used by the outer query.

57. WHAT ARE SQL OPERATORS?
Ans: Value() and Ref() are SQL operators.

58. EXPLAIN "ANY","SOME","ALL","EXISTS" OPERATORS?
Ans:
Any: the ANY (or its synonym SOME) operator compares a value to each value returned by the sub-query, and is true if the comparison holds for at least one of them.
All: ALL compares a value to every value returned by the sub-query, and is true only if the comparison holds for all of them.
Exists: this operator produces a BOOLEAN result.
If the sub-query produces any result it evaluates to TRUE, otherwise it evaluates to FALSE.

59. WHAT IS A CORRELATED SUB QUERY, HOW IT IS DIFFERENT FROM A NORMAL SUB QUERY?
Ans: A correlated sub-query is a nested sub-query that is executed once for each candidate row of the main query, and that on each execution uses a value from a column of the outer query. In a normal sub-query the result of the inner query is substituted once into the condition of the outer query, whereas in a correlated sub-query a column value used in the inner query refers to the column value of the current row of the outer query.

60. WHAT IS A JOIN - TYPES OF JOINS?
Ans: A join is used to combine two or more tables logically to get query results. There are four types of joins, namely the EQUI join, NON-EQUI join, SELF join and OUTER join.

61. WHAT ARE MINIMUM REQUIREMENTS FOR AN EQUI-JOIN?
Ans: There should be at least one common column between the joining tables.

62.

63. WHAT IS DIFFERENCE BETWEEN EQUI AND SELF JOINS?
Ans: A SELF join is made within a single table, whereas an EQUI join is made between different tables having a common column.

64. WHAT ARE "SET" OPERATORS?
Ans: UNION, UNION ALL, INTERSECT and MINUS are called set operators.

65. WHAT IS DIFFERENCE BETWEEN "UNION" AND "UNION ALL" OPERATORS?
Ans: UNION returns the values distinctly, whereas UNION ALL returns duplicate values as well.

66. NAME SOME NUMBER, CHARACTER, DATE, CONVERSION, OTHER FUNCTIONS.
Ans:
Number functions: Round(m, [n]), Trunc(m, [n]), Power(m, n), Sqrt(m), Abs(m), Ceil(m), Floor(m), Mod(m, n).
Character functions: Chr(x), Concat(string1, string2), Lower(string), Upper(string), Substr(string, from_str, to_str), ASCII(string), Length(string), Initcap(string).
Date functions: Sysdate, Months_between(d1, d2), To_char(d, format), Last_day(d), Next_day(d, day).
Conversion functions: To_char, To_date, To_number.

67. WHAT IS DIFFERENCE BETWEEN MAX () AND GREATEST () FUNCTIONS?
Ans: MAX is an aggregate function that takes only one column of a table as its parameter, whereas GREATEST is a general (single-row) function that can take any number of values — literals against dual, or column names against a table.

68. WHAT FOR NVL () FUNCTION IS?
Ans: The NVL function substitutes a value in place of a NULL.

69. WHAT FOR DECODE () FUNCTION IS?
Ans: It substitutes values on a value-by-value basis; in effect it performs an 'if-then-else' test.

70. WHAT IS DIFFERENCE BETWEEN TRANSLATE () AND REPLACE () FUNCTIONS?
Ans: Translate() substitutes characters one for one, whereas Replace() substitutes one entire substring with another.

71. WHAT IS DIFFERENCE BETWEEN SUBSTR () AND INSTR () FUNCTIONS?
Ans: Substr() returns the specified part of a string, whereas Instr() returns the position of the specified part within the string.

72. WHAT IS A JULIAN DAY NUMBER?
Ans: It is the count of the number of days between January 1, 4712 BC and the given date.

73. HOW TO DISPLAY TIME FROM A DATE DATA?
Ans: By using a time format such as 'hh[hh24]:mi:ss' in the to_char() function.

74. HOW TO INSERT DATE AND TIME INTO A DATE COLUMN?
Ans: By using the format 'dd-mon-yy hh[hh24]:mi:ss' in the to_date() function.

75. WHAT IS DIFFERENCE BETWEEN TO_DATE () AND TO_CHAR () CONVERSION FUNCTIONS?
Ans: To_date converts a character date into date format, whereas to_char converts date or numeric values into characters.

76. WHAT IS A VIEW? HOW IT IS DIFFERENT FROM A TABLE?
Ans: A view is a database object that exists logically, contains no physical data, and manipulates its base table(s). A view is saved as a select statement in the database and contains no physical data, whereas a table exists physically.

77. WHAT IS DIFFERENCE BETWEEN SIMPLE AND COMPLEX VIEWS?
Ans: Simple views can be modified, whereas complex views (created on more than one table) cannot be modified directly.

78. WHAT IS AN INLINE VIEW?
Ans: An inline view is basically a sub-query with an alias that you can use like a view inside a SQL statement; unlike a view, it is not a schema object.

79.
HOW TO UPDATE A COMPLEX VIEW?
Ans: Complex views can be updated by using 'INSTEAD OF' triggers.

80. WHAT FOR "WITH CHECK OPTION" FOR A VIEW?
Ans: The "WITH CHECK OPTION" clause specifies that inserts and updates performed through the view are not allowed to create rows that the view cannot select, and therefore allows integrity constraints and data-validation checks to be enforced on data being inserted or updated.

81. WHAT IS AN INDEX? ADVANTAGE OF AN INDEX
Ans: An index is a database object used in Oracle to provide quick access to rows in a table. An index increases the performance of the database.

82. WHAT IS A SEQUENCE? PSEUDO-COLUMNS ASSOCIATED WITH SEQUENCE?
Ans: A sequence is a database object used to generate unique integers, for example to use as primary keys. NEXTVAL and CURRVAL are the pseudo-columns associated with a sequence.

**83. WHAT IS A CLUSTER? WHEN TO USE A CLUSTER? HOW TO DROP A CLUSTER WHEN CLUSTERED TABLE EXISTS?
Ans: Clusters, like indexes, are transparent to the user. Clustering is a method of storing tables that are intimately related, and often joined together, in the same area on the disk. When a clustered table exists, either drop the table first and then drop the cluster, or use DROP CLUSTER <name> INCLUDING TABLES.

84. WHAT IS A SNAPSHOT OR MATERIALIZED VIEW?
Ans: Materialized views can be used to replicate data. Earlier, data was replicated through the CREATE SNAPSHOT command; now CREATE MATERIALIZED VIEW can be used as a synonym for CREATE SNAPSHOT. Query performance is improved by using materialized views, because these views pre-calculate expensive joins and aggregate operations on the table.

85. WHAT IS A SYNONYM?
Ans: A synonym is a database object that allows you to create alternate names for Oracle tables and views. It is an alias for a table, view, snapshot, sequence, procedure, function or package.

86. WHAT IS DIFFERENCE BETWEEN PRIVATE AND PUBLIC SYNONYM?
Ans: Only the owning user can reference a private synonym, whereas any user can reference a public synonym.
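As a sketch of Q85-86, assuming a hypothetical hr.employees table exists and the session has the necessary CREATE SYNONYM / CREATE PUBLIC SYNONYM privileges:

```sql
-- private synonym: an alias visible only in the owning user's schema
CREATE SYNONYM emp FOR hr.employees;

-- public synonym: an alias every user of the database can resolve
-- (requires the CREATE PUBLIC SYNONYM system privilege)
CREATE PUBLIC SYNONYM emp_all FOR hr.employees;

-- both names now behave like the underlying table in queries
SELECT COUNT(*) FROM emp;
SELECT COUNT(*) FROM emp_all;

-- note the different DROP syntax for the two kinds
DROP SYNONYM emp;
DROP PUBLIC SYNONYM emp_all;
```

Other users still need object privileges on hr.employees itself; a synonym only supplies the name, not the access rights.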
87. WHAT IS DIFFERENCE BETWEEN "SQL" AND "SQL*PLUS" COMMANDS?
Ans: SQL commands are stored in the SQL buffer, whereas SQL*Plus commands are not.

**88. NAME SOME SQL*PLUS COMMANDS?
Ans: DESC[RIBE], START, GET, SAVE and / are SQL*Plus commands.

89. WHAT ARE "SQL*PLUS REPORTING" COMMANDS?
Ans: SPOOL file-name, SPOOL OUT, TTITLE, BTITLE, BREAK ON, COMPUTE <aggregate function> OF <column name> ON <break column>, etc. are SQL*Plus reporting commands.

90. WHAT ARE SYSTEM AND OBJECT PRIVILEGES?
Ans: System privileges are rights to perform an action in the database, such as CREATE SESSION or CREATE TABLE (CONNECT and RESOURCE are predefined roles that bundle such privileges). Object privileges — SELECT, INSERT, UPDATE, ALTER, etc. — are rights on a particular object.

91. WHAT FOR DCL COMMANDS ARE?
Ans: GRANT and REVOKE are the DCL commands; they control access rights to database objects.

92. WHAT FOR GRANT COMMAND WITH "WITH GRANT OPTION"?
Ans: GRANT with the WITH GRANT OPTION gives the grantee the right to grant, in turn, the privileges he/she has received to other users.

93. HOW TO CHANGE PASSWORD OF A USER?
Ans: Using the PASSWORD command, or using the ALTER USER <user name> IDENTIFIED BY <new password> command.

94. WHAT IS A SCHEMA AND SCHEMA OBJECTS?
Ans: A schema is a collection of logical structures of data, or schema objects. A schema is owned by a database user and has the same name as that user; each user owns a single schema. Schema objects include the following types of objects: clusters, database links, functions, indexes, packages, procedures, sequences, synonyms, tables, database triggers and views.

**95. HOW TO STARTUP AND SHUTDOWN ORACLE DATABASE?
Ans: Only the administrator can start up and shut down the Oracle database: startup is done with the STARTUP command and shutdown with the SHUTDOWN command.

96. WHAT IS A SESSION?
Ans: The period between login to and logoff from a schema.

97. WHAT IS A CLIENT PROCESS? WHAT IS A SERVER PROCESS?
Ans: ref: Q&A 172.

98. HOW TO MAKE EVERY DML OPERATION AS AUTO COMMIT?
Ans: By using the SET AUTOCOMMIT ON command.

99. HOW TO DISPLAY DATA PAGE WISE IN SQL?
Ans: By using the SET PAUSE ON command.

100. HOW TO CHANGE LINE SIZE, PAGE SIZE AND SQL PROMPT?
Ans: By using SET LINESIZE <value>, SET PAGESIZE <value> and SET SQLPROMPT <new prompt>.

101. HOW PL/SQL IS DIFFERENT FROM SQL?
Ans: SQL is a non-procedural language, whereas PL/SQL is a procedural language that includes the features and design of a programming language.

102. WHAT IS ARCHITECTURE OF PL/SQL?
Ans:

103. WHAT IS A PL/SQL BLOCK?
Ans:
DECLARE
  <declarations>
BEGIN
  <executable statements>
EXCEPTION
  <exception handler(s)>
END;

104. WHAT ARE DIFFERENT TYPES OF PL/SQL BLOCKS?
Ans:
DECLARE section: all declarations of the variables used in the program are made here; if no variables are used, this section is optional.
BEGIN section: all the executable statements are placed here; this section is mandatory.
EXCEPTION section: all exceptions are handled here; this section is optional.
END: every BEGIN must be closed with an END; statement.

105. WHAT ARE COMPOSITE DATA TYPES?
Ans: Records and tables are the two composite data types.

106. WHAT IS SCOPE OF A VARIABLE IN PL/SQL BLOCK?
Ans: The visibility and accessibility of a variable within a block (and its sub-blocks) is called the scope of the variable.

107. WHAT IS A NESTED BLOCK?
Ans: A block within a block is called a nested block.

108. WHAT IS A PL/SQL ENGINE?
Ans: The PL/SQL engine accepts any valid PL/SQL block as input, executes the procedural part of the statements and sends the SQL statements to the SQL statement executor in the Oracle server.

109. WHAT IS DEFAULT VALUE FOR A NUMERIC PL/SQL VARIABLE?
Ans: NULL.

110. WHAT IS DIFFERENCE BETWEEN SIMPLE LOOP AND A FOR LOOP?
Ans: A simple LOOP requires declaration of the loop variable and an explicit exit condition, but a FOR loop requires neither.

111. WHAT IS A CURSOR? STEPS TO USE A CURSOR?
Ans: A cursor is a private SQL area in PL/SQL. The steps are: declare the cursor, open the cursor, fetch values from it into local variables, close the cursor.

112. HOW MANY TYPES OF CURSORS ARE SUPPORTED BY ORACLE?
Ans: There are two types of cursors, namely implicit cursors and explicit cursors.

113.
WHAT IS A CURSOR FOR LOOP?
Ans: A cursor FOR loop is a shortcut for working with explicit cursors: the cursor is opened, rows are fetched once for each iteration, and the cursor is closed automatically when all the rows have been processed.

114. WHAT ARE CURSOR ATTRIBUTES?
Ans: %FOUND, %NOTFOUND, %ISOPEN and %ROWCOUNT are the cursor attributes.

115. WHAT IS USE OF CURSOR WITH "FOR UPDATE OF" CLAUSE?
Ans: This clause locks the rows selected by the cursor, stopping other users from modifying the listed columns until a COMMIT (or ROLLBACK) is issued.

116. WHAT IS AN EXCEPTION? HOW IT IS DIFFERENT FROM ERROR?
Ans: Whenever an error occurs, an exception is raised. An error is a bug, whereas an exception is a warning or error condition that can be handled.

117. NAME SOME BUILT-IN EXCEPTIONS.
Ans: TOO_MANY_ROWS, NO_DATA_FOUND, ZERO_DIVIDE, NOT_LOGGED_ON, STORAGE_ERROR, VALUE_ERROR, etc.

118. HOW TO CREATE A USER-DEFINED EXCEPTION?
Ans: A user-defined exception is created as follows:

DECLARE
  <exception name> EXCEPTION;
  ...
BEGIN
  ...
  RAISE <exception name>;
EXCEPTION
  WHEN <exception name> THEN
    ...
END;

119. WHAT IS "OTHERS" EXCEPTION?
Ans: It is used along with (after) one or more specific exception handlers; it handles all the errors not already handled in the block.

120. WHAT IS SCOPE OF EXCEPTION HANDLING IN NESTED BLOCKS?
Ans: An exception's scope is the block in which its handler is written.

121. WHAT IS A SUB-PROGRAM?
Ans: A subprogram is a named PL/SQL block which can be invoked with parameters.

122. WHAT ARE DIFFERENT TYPES OF SUB-PROGRAMS?
Ans: There are two types: 1) procedures; 2) functions.

123. HOW A PROCEDURE IS DIFFERENT FROM A FUNCTION?
Ans: A function has the RETURN keyword and returns a value, whereas a procedure does not return a value.

124. WHAT ARE TYPES OF PARAMETERS THAT CAN BE PASSED TO FUNCTION OR PROCEDURE?
Ans: IN, IN OUT, OUT.

125. WHAT IS "IN OUT" PARAMETER?
Ans: A parameter which carries a value into the procedure or function and carries a value back out of it is called an IN OUT parameter.

126. DOES ORACLE SUPPORTS PROCEDURE OVERLOADING?
Ans: Not for standalone procedures; overloading is supported only inside packages (see 128-129).

127. WHAT IS A PACKAGE AND PACKAGE BODY?
Ans: The package (specification) is the declarative part of the functions and procedures stored in the package, and the package body is the definition (implementation) part of those functions and procedures.

128. WHAT IS ADVANTAGE OF PACKAGE OVER PROCEDURE OR FUNCTION?
Ans: Packages provide an overloading facility for functions and procedures, and add security and encapsulation to them.

129. IS IT POSSIBLE TO HAVE A PROCEDURE AND A FUNCTION WITH THE SAME NAME?
Ans: No if they are outside a package, yes if they are within a package.

130. DOES ORACLE SUPPORTS RECURSIVE FUNCTION CALLS?
Ans: Yes.

131. WHAT IS A TRIGGER? HOW IT IS DIFFERENT FROM A PROCEDURE?
Ans: A trigger is a stored PL/SQL program unit associated with a specific database table. A procedure has to be called explicitly by the user, whereas triggers are fired implicitly by Oracle itself whenever the triggering event occurs.

132. WHAT IS DIFFERENCE BETWEEN A TRIGGER AND A CONSTRAINT?
Ans: A constraint declaratively enforces a rule that must always hold for the data, whereas a trigger is procedural code that runs only when its triggering statement fires; constraints have some limitations on what they can express, whereas triggers have practically none.

133. WHAT ARE DIFFERENT EVENTS FOR A TRIGGER AND THEIR SCOPES?
Ans: INSERT, UPDATE or DELETE, each of which can fire BEFORE or AFTER the statement, at statement or row level.

134. WHAT IS DIFFERENCE BETWEEN TABLE LEVEL AND ROW LEVEL TRIGGERS?
Ans: Table (statement) level triggers execute once for each triggering statement, whereas row-level triggers execute once FOR EACH ROW affected.

** 135. WHAT ARE AUTONOMOUS TRIGGERS?
Ans:

136. WHAT IS AN "INSTEAD OF" TRIGGER?
Ans: These triggers are used with complex views only, to make INSERT, UPDATE and DELETE possible on those views.

** 137. HOW MANY TRIGGERS CAN BE CONFIGURED ON A TABLE AND VIEW?
Ans:

138. WHAT IS "TABLE MUTATING" ERROR? HOW TO SOLVE IT?
Ans: ORA-04091: table name is mutating, trigger/function may not see it.
Cause: a trigger, or a user-defined PL/SQL function referenced in the statement, attempted to query or modify a table that was in the middle of being modified by the statement that fired the trigger.
Action: rewrite the trigger or function so that it does not read the table.

139. WHEN TO USE ":NEW" AND ":OLD" SPECIFIERS?
Ans: The prefix :OLD refers to the values already present in the table, and the prefix :NEW is a correlation name that refers to the new value being inserted or updated.

** 140. WHAT IS A CONDITIONAL TRIGGER?
Ans:

** 141. HOW TO CREATE A USER-DEFINED VARIABLE IN PL/SQL?
Ans:

142. HOW TO CREATE AN ARRAY VARIABLE IN PL/SQL?
Ans: Using the command CREATE [OR REPLACE] TYPE <type name> AS VARRAY (size) OF <element type> [NOT NULL];

**143. HOW TO MAKE A USER-DEFINED DATA TYPE GLOBAL IN PL/SQL?
Ans:

144. HOW TO CREATE AN OBJECT IN ORACLE?
Ans: Using the command CREATE [OR REPLACE] TYPE <type name> AS OBJECT (attribute name data type, ...);

145. WHAT IS A TRANSIENT AND PERSISTENT OBJECT?
Ans: An object stored in a table is called a persistent object; an object created during the execution of a PL/SQL block is called a transient object.

**146. WHAT IS A COLUMN OBJECT AND TABLE OBJECT?
Ans: A column object is an object type used as just one column of a table.

147. HOW TO GRANT PERMISSION ON AN OBJECT TO OTHER USER?
Ans: GRANT <permission> ON <object name> TO <user name>;

148. WHAT IS A COLLECTION OF ORACLE?
Ans: VARRAYs and nested tables are the collections of Oracle.

149. WHAT IS DIFFERENCE BETWEEN VARRAY AND NESTED TABLE?
Ans: A VARRAY has a fixed maximum size, while a nested table can hold any number of values.

150. HOW TO MODIFY CONTENTS OF A VARRAY IN ORACLE?
Ans: To modify a stored VARRAY, it has to be selected into a PL/SQL variable and then inserted back into the table.

151. WHAT IS USE OF "THE" OPERATOR FOR NESTED TABLE?
Ans: The THE operator allows a nested table stored in a table to be manipulated using DML.

152.
WHICH PACKAGE IS USED FOR FILE INPUT/OUTPUT IN ORACLE?
Ans: The UTL_FILE package is used for file input/output in Oracle.

153. NAME SOME METHODS AND PROCEDURES OF FILE I/O PACKAGE?
Ans: FOPEN, FCLOSE, FFLUSH, IS_OPEN, GET_LINE, PUT_LINE, PUTF, NEW_LINE.

**154. WHAT IS SQLJ? HOW IT IS DIFFERENT FROM JDBC CONNECTIVITY?
Ans: SQLJ is basically a Java program containing embedded static SQL statements that are compatible with the Java design philosophy.

155. WHAT IS AN ITERATOR? NAME SOME TYPES OF ITERATORS?
Ans: SQLJ iterators are basically record groups generated during a transaction that requires manipulation of more than one record from one or more tables. There are two types of iterators, namely named iterators and positional iterators.

** 156. WHAT ARE DIFFERENT STEPS TO WRITE A DYNAMIC SQL PROGRAM?
Ans: E.g.:

char c_sqlstring[] = "DELETE FROM sailors WHERE rating > 5";
EXEC SQL PREPARE readytogo FROM :c_sqlstring;
EXEC SQL EXECUTE readytogo;

157. WHAT IS TABLE PARTITIONING AND INDEX PARTITIONING?
Ans: Oracle8 allows tables and indexes to be partitioned, or broken up into smaller parts, based on ranges of key values. Partitioning is a divide-and-conquer strategy that improves administration and performance in data-warehouse and OLTP systems.

158. WHAT IS PARALLEL PROCESSING?
Ans:

159. WHAT IS PHYSICAL MEMORY STRUCTURE OF ORACLE?
Ans: The basic memory structures associated with Oracle include:
Software code areas
The System Global Area (SGA)
  - the database buffer cache
  - the shared pool
The Program Global Areas (PGA)
  - stack areas
  - data areas
Sort areas

160. WHAT IS LOGICAL MEMORY STRUCTURE OF ORACLE?
Ans: Database → Tablespace → DB object → Segment → Extents → Data blocks (for example, a DB_STG database with STUDENT and SYSTEM tablespaces holding EMP and DEPT data segments and an EMP_IND index segment).

161. WHAT IS SGA?
Ans: A System Global Area is a group of shared memory structures allocated by Oracle that contains data and control information for one Oracle database instance.
If multiple users are concurrently connected to the same instance, the data in the instance's SGA is shared among the users; consequently, the SGA is often referred to as either the System Global Area or the Shared Global Area.

162. WHAT IS PGA?
Ans: The Program Global Area is a memory buffer that contains data and control information for a server process. A PGA is created by Oracle when a server process is started; the information in a PGA depends on the configuration of Oracle.

163. WHAT IS AN ORACLE INSTANCE?
Ans: Every time a database is started, an SGA is allocated and the Oracle background processes are started. The combination of these processes and memory buffers is called an Oracle instance.

164. WHAT ARE DIFFERENT ORACLE PROCESSES?
Ans: A process is a thread of control, or a mechanism in an operating system that can execute a series of steps (some operating systems use the terms job or task). A process normally has its own private memory area in which it runs. An Oracle database system has two general types of processes: user processes and Oracle processes.

**165. WHAT IS DIFFERENCE BETWEEN PMON AND SMON?
Ans: SMON (System Monitor) performs instance recovery at instance startup; in a multiple-instance system (one that uses the parallel server), the SMON of one instance can also perform instance recovery for other instances that have failed. PMON (Process Monitor) performs process recovery when a user process fails.

**166. WHAT IS DIFFERENCE BETWEEN DATABASE AND TABLESPACE?
Ans: A database is the whole set of physical data files, whereas a tablespace is a logical storage unit within the database; each database is divided into one or more tablespaces.

167. WHAT IS JOB OF DATABASE WRITER (DBWR) PROCESS?
Ans: The Database Writer writes modified blocks from the database buffer cache to the data files.

168. WHAT IS JOB OF LOG WRITER (LGWR) PROCESS?
Ans: The Log Writer writes the redo log to disk. Redo log data is generated in the redo log buffer of the SGA; as transactions commit and the log buffer fills, LGWR writes redo entries into an online redo log file.

169. WHAT IS RECOVERER?
Ans: The Recoverer (RECO) is used to resolve distributed transactions that are pending due to a network or system failure in a distributed database. At timed intervals, the local RECO attempts to connect to the remote database and automatically complete the commit or rollback of the local portion of any pending distributed transactions.

170. WHAT IS THE ARCHIVER?
Ans: The Archiver (ARCH) copies the online redo log files to archival storage when they are full. ARCH is active only when a database's redo log is used in ARCHIVELOG mode.

**171. WHAT IS A STORED QUERY?
Ans:

172. WHAT ARE USER PROCESSES AND SERVER PROCESSES?
Ans: A user process is created and maintained to execute the software code of an application program (such as a Pro* program) or an Oracle tool (such as SQL*DBA). The user process also manages the communication with server processes. User processes communicate with the server processes through the program interface. The other processes are called Oracle processes. In a dedicated server configuration, a server process handles requests for a single user process. A multithreaded server configuration allows many user processes to share a small number of server processes, minimizing the utilization of available system resources.

**173. WHAT IS SELF-REFERENTIAL INTEGRITY?
Ans:

174. WHAT IS A "RAISE" STATEMENT?
Ans: It is used to raise exceptions.

175. WHAT IS ROWID? HOW IS IT DIFFERENT FROM ROWNUM?
Ans: ROWID is the address of the row, i.e., where it is stored in the database. ROWNUM is a count of records, whereas ROWID is an identification of each row. For example, to select the distinct salaries above the fourth-highest distinct salary:

select distinct(d.salary) from employee d
where d.salary > (select distinct(a.salary) from employee a
                  where 4 = (select count(distinct(b.salary)) from employee b
                             where a.salary <= b.salary));
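The correlated counting subquery in the salary example above finds the Nth-highest distinct salary by counting how many distinct salaries are greater than or equal to a candidate. Purely as an illustration (this is not Oracle code), the same counting trick can be sketched in Python; the function name and sample data are invented:

```python
def nth_highest(salaries, n):
    # A salary s is the n-th highest distinct salary exactly when
    # n distinct salaries are >= s -- the same counting trick the
    # "4 = (select count(distinct ...))" correlated subquery uses.
    distinct = set(salaries)
    for s in distinct:
        if sum(1 for t in distinct if t >= s) == n:
            return s
    return None  # fewer than n distinct salaries

print(nth_highest([100, 200, 300, 400, 500], 4))  # → 200
print(nth_highest([300, 100, 300, 200], 1))       # → 300
```

A WHERE clause comparing against this value then yields all salaries above the Nth-highest, as in the example query.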
Doc No: SC22/WG21/N2697 PL22.16/08-0207
Date: 2008-06-30
Project: JTC1.22.32
Reply to: Robert Klarer, IBM Canada, Ltd., klarer@ca.ibm.com

Clamage called the meeting to order at 09:00 (UTC+2) on Monday, June 8, 2008. Jean-Paul Rigault (PL22.16/08-0121 = WG21/N2611). Motion to approve the agenda: Each of the Working Group chairs presented their plans for the coming week.

Motion to approve the minutes (document PL22.16/08-0102 = WG21/N2592). P. J. Plauger observed that it's unclear, in subsection 10.1 of document N2592, why Library Working Group Motion 2 did not carry. Accordingly, he proposed a friendly amendment to the motion to approve the minutes. Amendment: Add the following text to the account of Library Working Group Motion 2 in the minutes: "Because there was no clear consensus among participating WG21 member nations, the convenor ruled that this motion did not carry." Poll results among WG21 voting members were: Approved by unanimous consent as amended.

Sutter reported that seven countries are represented at this meeting; all seven have voting status. As head of the UK delegation, Glassborow made a declaration on behalf of the BSI. The main points of this declaration were: Much discussion ensued. Sutter summarized the preceding discussion thusly: "I don't think that anyone is considering not having concepts at all. The question is which bucket [i.e. which project or document] will contain concepts?" Sutter then asked an open question to the Working Group chairs: what will be the quality of the paper if we issue an FCD document at the end of this meeting? Adamczyk replied that, even if concepts were taken off the table, there are several substantial language features that are still in the pipeline, and that he "would be very uncomfortable with voting something out this meeting." Hinnant reminded Sutter that, at present, there are dangling references in the WD, and that he hoped these would be resolved at the present meeting, but there is no guarantee that will happen.
Vollmann asked how much time will be spent at this meeting by the Core Working Group on concepts, if any work is done at all. Adamczyk explained that concepts is in good shape, but it's also an extremely large proposal, at 87 pages. It's not clear that the CWG can review the proposal at the same level of detail that they review other things. For example, the proposal has not been subject to the same line-by-line review that other items routinely are. Adamczyk indicated that he "would be nervous about voting it in, but in terms of substance, it is correct." Vollmann restated his question: "how much time do you need?" Adamczyk responded by observing that "that depends on how much assurance you need that it is correct." Spicer observed that the committee shouldn't take for granted that it can delay the introduction of concepts to some indefinite point in the future; the people that we have available to work on the item now may not be available later. Gregor stated his opinion that there may be bugs in the specification of concepts that will not be discovered until a second implementation is attempted. The line-by-line review may not be as effective.

When asked to elaborate on BSI's position that the standard must include a conceptualized library if it includes concepts as a core language feature, Glassborow explained that, if the committee puts concepts into the core language and not the library, they will largely be ignored, and the discovery of defects in the specification will be further delayed. Also, if the standard library is not conceptualized, it will be difficult for users to exercise the feature. Dos Reis agreed with BSI's position on this: "my perspective is that concepts in the language without library counterparts is not viable." Meredith expressed concern that compile times will suffer drastically with concepts.
Stroustrup opined that shipping concepts without the library is useless; it will not help users to think in terms of concepts, and users will think that concepts are "not good enough." Further, Stroustrup expressed doubts that a two-year slip would help: "with two more years, we will get proposals for new features, new libraries, and enhancements to concepts. Much of the two-year period will inevitably be devoted to items other than concepts." Stroustrup noted, though, that the BSI position is not unreasonable, and it was to be expected that someone would express it. Discussion ensued.

Motion: Move that we discontinue work on concepts for C++0X. Sutter ruled that work would continue on concepts.

Plum observed that "now that we know that concepts are in the mix of work for this week, we don't need to talk about our publication schedule." Plauger noted that "we must know by the end of this meeting that we either commit or not commit to shipping an FCD at the next meeting." Plauger reported that the Special Math IS for C is going to final ballot and that the C++ committee should take corresponding actions. As well, Plauger reported that the C committee agreed to publish a threading model based on the C++ model. Nelson reported on an issue that was raised by WG14 with respect to the threading model. In the current model, given a struct with two chars beside each other, two different threads can access these chars separately without causing a data race. There is concern that this inhibits certain optimizations, including the combining of stores. There was at least some discussion of allowing the programmer to control the granularity of memory location through the use of a pragma or something similar. Kosnick asked whether there was any interest in standardizing the atomics. Nelson answered that the memory model and atomics were presented to WG14 in one paper. There was no controversy concerning the atomics.
Stoughton reported that the group will be meeting next week in Redding. The goal of the meeting is to prepare a document outline. There was no liaison statement to make at this meeting because the group had not met since the last C++ meeting. Plum reported that, at the next SC22 plenary, this group may become a regular WG. Plum also reported that OWGV is making good progress, and that an editing meeting will be conducted in "a month or so" to refine the draft document.

Becker reported that document 08-0098 = WG21/N2588 has all of the edits that were approved at Bellevue. A later revision, PL22.16/08-0116 = WG21/N2606, differs from N2588 only in that it contains some editorial changes. Meredith noted that this document contains dangling references, as expected. Motion to accept the working paper: Approved by unanimous consent.

We have three subgroups: Core, Library, and Evolution. There will be a subgroup of Evolution to deal with issues relating to concurrency. The committee broke into subgroups at 10:30 (GMT+10).

Adamczyk gave the following report on the proposals that the Core Working Group intends to move on Saturday:

1) N2659, "Thread-Local Storage". Adds the thread_local keyword.

2) N2656, "Core issue 654 wording". Adds conversions from 0 to nullptr_t (requested by library), also reinterpret_cast from nullptr_t to integer (for templates) and implicit conversion from nullptr_t to bool. On the latter, we feel nullptr_t should be the same as pointers, and pointers are currently convertible to bool everywhere, e.g.,

int *p = 0;
bool b = p;
bool c = nullptr; // ??

We're moving this forward in that form, and there may be a paper next time to restrict the conversion for both types if that doesn't seem to break existing code. Note that the wording for this was not in the pre-meeting mailing.

3) N2657, "Local and Unnamed Types as Template Arguments".
The difference from the previous version is that local and unnamed types are not given external linkage -- they're just allowed as template arguments. Note also that unnamed types are allowed; there was some question about that last time. 4) N2658, "Constness of Lambda Functions (Revision 1)". Slight change from pre-meeting version: generated function is const by default, adding mutable makes it non-const. 5) N2634, "Solving the SFINAE problem for expressions". As proposed a few meetings back: All expression and type errors in template deduction cause deduction failure, not hard errors. 6) N2664, "C++ Data-Dependency Ordering: Atomics and Memory Model". Definition of terms "carries a dependency", "is dependency-ordered before", "inter-thread happens before". Library memory_order_consume, kill_dependency. Library changes approved by concurrency group. 7) N2660, "Dynamic Initialization and Destruction with Concurrency". 8) N2672, "Initializer List proposed wording". This is a compromise proposal that all the interested parties have endorsed. Concerns about explicit constructors and conversions to unexpected third-party types are resolved, at least enough to win everyone's approval. Narrowing checking is still in there. Includes library changes, believed to have been checked by LWG. 9) N2670, "Minimal Support for Garbage Collection and Reachability-Based Leak Detection (revised)". Reachability of pointers, library functions declare_reachable, undeclare_reachable. 10) All "ready" issues in N2608, the most recent core issues list. As well, Adamczyk gave updated status on items that will not be the subject of a motion at this meeting: 1) D2628, "Non-static data member initializers". Largely ready, but wording interacts extensively with the initializer lists proposal, so the paper will be updated for the pre-SF mailing and should be ready to be voted in at that point. 2) D2663, "Towards support for attributes in C++ (Revision 5)". 
We made a number of changes, and will review an updated paper next time. 3) N2378, "User-defined Literals (aka. Extensible Literals (revision 3))". We requested some minor updates. There was discussion of allowing/requiring implementations to evaluate constant-creation code just once; that's still being studied. If there's no change in that area, this should be in good shape for next time. 4) N2617, "Proposed Wording for Concepts (Revision 6)". Still some substantial changes (already addressed in updated wording). More wording review is needed. 5) Not looked at this time, still in the queue: - N2394, "Wording for range-based for-loop (revision 3)". Waiting on concepts; otherwise believed to be ready. - N2582, "Unified Function Syntax". [] instead of auto as function return type. - N2646, "Concept Implication and Requirement Propagation". - N2581, "Named Requirements for C++0X Concepts". - N2583, "Default Move Functions". - N2584, "Default Swap Functions". - N2568, "Forward declaration of enumerations (rev. 1)". - N2631, "Resolving the difference between C and C++ with regards to object representation of integers". - N2643, "C++ Data-Dependency Ordering: Function Annotation". On the topic of concepts, Widman reported that he would be surprised if the committee encountered anything that required a significant design change. While noting that he was not proposing that they be voted into the Working Draft at this meeting, he asserted that "concepts are in better shape than you think." Dos Reis disagreed with this point of view, indicating that some design issues remain. Spicer opined that more review is necessary before the committee can decide that concepts are close to being done. Gregor reported that, while there are design points that could be revisited, the specification of the procedure by which concepts are translated is now coherent; no parts of it depend on "magic." 
Hinnant reviewed the LWG formal motions (see below). Marcus reported that the LWG had reviewed three papers about concepts in the C++ Standard Library, covering:

Plum asked as to the status of proposals for date and time library facilities. Hinnant replied that paper N2615 in the pre-meeting mailing has been revised and is being put forward as N2661.

Sutter proposed the following revised timetable for publication of the standard. According to this timetable, a Committee Draft will be issued for ballot in September 2008, and a Final Committee Draft will be issued for ballot in October 2009. Crowl asked whether there is an enforced time delay between shipping a CD and shipping an FCD. Sutter explained that there is none, except that CD ballot comments must be resolved, and that will take two meetings. Glassborow indicated that BSI would be much happier with this schedule, and expressed hope that once the CD is shipped, implementors will provide the new language features early enough that the committee can benefit by the availability of those implementations to improve the quality of the standard. Spicer asked how long it takes for ISO to issue a Working Draft for ballot. Sutter replied that, based on past experience, it takes "a small number of days." Meredith disagreed with Glassborow's interpretation of the BSI position and reported two specific concerns that had been expressed among the UK panel. First, the UK wanted greater assurance that concepts would not imply unacceptable performance penalties. Second, there was concern that the committee was already approaching burnout due to the aggressive schedule. Responding to Meredith's first point, Sutter noted that the committee was at this time being asked only to commit to issuing a CD after the next meeting. Though there is a tentative plan to issue an FCD twelve months later, that is a decision that can be made in the future.
Meredith asserted that the committee will need more than twelve months after issuing the CD to complete a document that is suitable for FCD ballot. Observing that this issue will most impact the incoming convenor, Dawes requested that P. J. Plauger comment on it. Plauger replied that the decision to issue a CD at the next meeting is "...the most planning that we can do right now." Maurer, noting that he thinks that the plan is viable, asked whether there was any possibility of allowing the NBs more time to submit comments on the CD. Sutter confirmed that there was, but that it was unnecessary to do so. He explained that the ISO directives allow the ballot period to be extended, but that they shall not exceed six months total. However, since the proposed plan is to give NBs two chances to comment, the need for extended ballot periods is greatly mitigated. Stroustrup commented on the necessity of maintaining the pace of progress, and expressed a concern that a long schedule skip without planned intermediate steps would slow that pace. Witt suggested that the only way to maintain the committee's focus is to have a tight schedule, and Stroustrup agreed. Plum, addressing Maurer's earlier question about the lengths of the ballot periods, advised the committee to be very consistent about using the minimum time for NB comments, noting that NBs have every opportunity to participate directly in the standardization effort. Glassborow agreed, adding that the Working Draft has been continuously available to the rest of the world throughout this process. He also reported that BSI will vote no to the introduction of any new feature from now until completion. Meredith asked for an explanation of the phrase "feature complete" on Sutter's slide. Sutter indicated that he hopes that, as a minimum, the CD would specify concepts in the core language and that the library would include foundation concepts, conceptualized algorithms, and conceptualized iterators. 
Stroustrup indicated that he would like to see a conceptualized library in the CD. He observed that there are some minor cases in the library (e.g., the use of enable_if in two places in the diagnostics library) in which existing entities should eventually be replaced by the use of concepts. Dawes asked Stroustrup whether he expected that such cases be addressed prior to shipping the CD. Stroustrup replied that he did not, arguing that details and corner cases are less important than emphasizing that the committee is serious enough about concepts to use them in major ways. Meredith argued that the committee has to get the library specifications of concepts 100% right before FDIS because of the backwards-compatibility implications of changing them. For this reason, he indicated that he would be more confident about conceptualizing the library after CD. Sutter expressed hope that a CD that contains a library that is at least partially conceptualized would encourage library implementors to provide such a library so that the community can gain experience with concepts, thereby enhancing the eventual quality of the standard. Dos Reis suggested that it will take many years to get the library 100% right. He encouraged the committee to take a chance, because it is unreasonable to require 100% correctness before shipping. Discussion ensued. Gregor disagreed with the foregoing characterization of the risks to the library, arguing that errors in template constraints can be fixed through Defect Reports. Discussion ensued. Nelson challenged the assumption that the publication of a core language or library feature in a CD will hasten the arrival of an implementation of that feature. Implementors consider many factors when they decide when to implement a given feature. Sutter invited implementors to comment on whether appearance in a CD expedites implementation. Adamczyk reported that whether the feature is in a CD is "irrelevant to [EDG]."
Caves indicated it is similarly irrelevant to Microsoft. Plauger commented that each of the vendors that Dinkumware, Ltd. has been talking to has a different shopping list of features they want. A CD may help stabilize dialects by influencing the order in which features are implemented. Clamage indicated that he is a lot more comfortable setting a schedule for development when a draft is feature complete. Dos Reis reported that, for this round of GCC, a decision has been made to accept patches if they were accepted in the current draft. "We are not going to accept patches in the current version if they are not in the working paper." Meredith reported that Borland do their scheduling based on the perceived cost of implementation of a feature. Sebor reported that Rogue Wave begins to exploit a core language feature in their library once they have a stable compiler that supports it. Discussion ensued.

Straw Poll: In favor of shipping a CD with concepts in the core language and the library at the end of the September 2008 meeting in San Francisco?

Crowl expressed his opposition to this motion, arguing that it is unclear what is meant by "new proposal," and that internal discipline should be sufficient. Dawes explained that this motion is meant to save the LWG some time, so that the LWG subgroup chair does not feel obliged to devote agenda time to new proposals just so that the group can discuss their technical merits before deeming them to have arrived too late. Discussion ensued.

Motion 3. Move we apply N2514 "Implicit Conversion Operators for Atomics" to the C++0X Working Paper.
Motion 4. Move we apply N2667 "Reserved namespaces for POSIX" to the C++0X Working Paper.
Motion 5. Move we apply N2668 "Concurrency Modifications to Basic String" to the C++0X Working Paper. Both HP and Rogue Wave expressed strong opposition to this motion, on the basis that it may force vendors to break ABI compatibility. Discussion ensued.
Dawes noted that great effort was made to accommodate the concerns of those vendors who were opposed to the motion, but that no technical solution was found that allowed all vendors to preserve ABI compatibility. More discussion ensued. Sutter asked whether the NBs agreed to the withdrawal of this motion from formal votes. Dos Reis answered that withdrawal would be fine, as long as there was assurance that the committee could vote on this issue in the future. There was agreement that this proposal would not be moved during Friday's session. Rao, while noting HP's objection to the proposal, declared that "I believe that the LWG concurrency group did the best they could [to accommodate HP's concerns]."

Motion 6. Move we apply N2678 "Error Handling Specification for Chapter 30 (Threads) (Revision 1)" to the C++0X Working Paper.
Motion 7. Move we apply N2661 "A Foundation to Sleep On" to the C++0X Working Paper.
Motion 8. Move we apply N2675 "noncopyable utility class (revision 1)" to the C++0X Working Paper. Stroustrup drew the committee's attention to Halpern's comments in reflector message c++lwg-xxxxxx and asked if that point had been raised in LWG. Hinnant replied that it had not. Discussion ensued. Plum argued that, by marking a base class noncopyable, a programmer is allowing a static code analyzer to detect violations. Witt observed that the Boost noncopyable class has been popular. Discussion ensued. Dos Reis asked whether the use of a noncopyable utility class imposes a space penalty, and Hinnant answered that it does. Discussion ensued. Because of the controversy, Meredith withdrew the paper.
Motion 9. Move we apply N2674 "Shared_ptr atomic access, revision 1" to the C++0X Working Paper.
Motion 10. Move we apply N2666 "More STL algorithms (revision 2)" to the C++0X Working Paper.
Motion 11. Move we apply N2669 "Thread-Safety in the Standard Library (Rev 2)" to the C++0X Working Paper.
Dawes announced that the LWG knows that there will be valid objections to the wording in this paper. Problems with the wording will be addressed as issues. Sebor argued that there are some requirements imposed on the implementation that will be difficult to satisfy performantly.

Motion 12. Move we apply N2435 "Explicit bool for Smart Pointers" to the C++0X Working Paper.
Motion 13. Move we apply N2677 "Foundational Concepts" to the C++0X Working Paper. Hinnant noted that this paper uses a core language feature that has not yet been voted into the paper. Glassborow stated that, as a matter of principle, the committee should not introduce dangling pointers into the Working Draft. Discussion ensued. The motion was withdrawn.
Motion 14. Move we apply N2679 "Initializer List for Standard Containers (Revision 1)" to the C++0X Working Paper.
Motion 15. Move we apply N2680 "Proposed Wording for Placement Insert (Revision 1)" to the C++0X Working Paper. Hinnant reported that this paper had a solution for pair<> that the LWG knows is not mature, so the current revision does not apply placement insert to pair.

Clamage observed that, according to Robert's Rules, motions need not be moved and seconded, because they have been forwarded to the group by a subcommittee. I.e., issues numbered 28, 118, 141, 276, 288, 485, 644, 661, 663, 666.

Motion 3. Move we apply N2514 "Implicit Conversion Operators for Atomics" to the C++0X Working Paper.
Motion 4. Move we apply N2667 "Reserved namespaces for POSIX" to the C++0X Working Paper.
Motion 5. Move we apply N2678 "Error Handling Specification for Chapter 30 (Threads) (Revision 1)" to the C++0X Working Paper.
Motion 6. Move we apply N2661 "A Foundation to Sleep On" to the C++0X Working Paper.
Motion 7. Move we apply N2674 "Shared_ptr atomic access, revision 1" to the C++0X Working Paper.
Motion 8. Move we apply N2666 "More STL algorithms (revision 2)" to the C++0X Working Paper.
Motion 9.
Move we apply N2669 "Thread-Safety in the Standard Library (Rev 2)" to the C++0X Working Paper.
Motion 10. Move we apply N2435 "Explicit bool for Smart Pointers" to the C++0X Working Paper.
Motion 11. Move we apply N2679 "Initializer List for Standard Containers (Revision 1)" to the C++0X Working Paper.
Motion 12. Move we apply N2680 "Proposed Wording for Placement Insert (Revision 1)" to the C++0X Working Paper.

Meredith made the following announcement on behalf of the LWG: from this point forward, the library group would like all papers to include full concept wording. See 11.1, below. None.

Nelson reported the following mailing deadlines: The upcoming meetings are as follows: Plum moved to thank the host. Applause. Motion to adjourn: Unanimous consent.
Easily use any Plain Old Ruby Object as the model for Rails form helpers. Informal is a small gem that enhances a Plain Old Ruby Object so it can be used with Rails 3 form helpers in place of an ActiveRecord model. It works with the Rails form_for helper, and simple_form as well. Here's a quick (and slightly insane) example:

```ruby
# models/command.rb
require "informal"

class Command
  include Informal::Model
  attr_accessor :command, :args
  validates_presence_of :command

  def run; `#{command} #{args}`; end
end
```

views/commands/new.html.erb

controllers/commands_controller.rb

```ruby
def create
  command = Command.new(params[:command])
  if command.valid?
    command.run
  end
end
```

It's a Ruby gem, so just install it with gem install informal, add it to your bundler Gemfile, or do whatever you like to do with gems. There is nothing to configure.

The insanity of the above example aside, Informal is pretty useful for creating simple RESTful resources that don't map directly to ActiveRecord models. It evolved from handling login credentials to creating model objects that were stored in a serialized attribute of a parent resource. In many ways using an informal model is just like using an AR model in controllers and views. The biggest difference is that you don't save an informal object, but you can add validations and check if it's valid?. If there are any validation errors, the object will have all the usual error decorations so that error messages will display properly in the form view.

If you include Informal::Model, your class automatically gets an #initialize method that takes a params hash and calls setters for all attributes in the hash. If your model class inherits from a class that has its own #initialize method that needs the super call, you should instead include Informal::ModelNoInit, which does not create an #initialize method. Make your own #initialize method, and in it you can assign the attributes using the #attributes= method and also call super with whatever args are needed.
model_name

If you name your model InformalCommand, form params get passed to your controller in the params[:informal_command] hash. As that's a bit ugly and perhaps doesn't play well with standing in for a real ActiveRecord model, Informal provides a method to override the model name:

```ruby
class InformalCommand
  informal_model_name "Command"
  # ...
end
```

Note: the informal_model_name feature is available only in Rails 3.1 or greater (unless somebody back-ports the required API change to 3.0.x).

The standard way that Rails generates ids for new records is to name them like command_new, as opposed to command_17 for persisted records. I've found that when using informal models I often want more than one per page, and it's helpful to have a unique id for JavaScript to use. Therefore Informal uses the model's object_id to get a unique id for the record. Those ids in the DOM will look like command_2157193640, which would be scary if you did anything with those memory addresses except use them for attaching scripts.

Copyright © 2011 Josh Susser. Released under the MIT License. See the LICENSE file.
SYNOPSIS
context optionally a 14-bit code returned from the stop-and-signal instruction on the SPU. The bit masks for the status codes are:
signal
architecture. computing/linuxoncell/ for the recommended libraries.

EXAMPLE
The following is an example of running a simple, one-instruction SPU program with the spu_run() system call.

#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#include <stdio.h>
#include <sys/types.h>
#include <fcntl.h>
/*);
}

SEE ALSO
close(2), spu_create(2), capabilities(7), spufs(7)

COLOPHON
This page is part of release 3.23 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
Created on 2004-12-21 21:02 by jvd66, last changed 2008-02-23 15:09 by facundobatista.

On latest versions of Linux (e.g. Red Hat FC2/FC3, kernel 2.6+, glibc 2.3.3+) it seems to be impossible to get a system call invoked by Python to time out if it enters a blocking operation (e.g. read, recvfrom) using Python's signal or threading modules. A good example is 'gethostbyname': if the network cable is unplugged, or the name servers are not contactable, then a Python program will hang for a very long time (5 minutes by default) before being able to proceed. I've tried to do timeouts using either the signal module or threading timers (see attached examples), but though the signal handler is called when the SIGALRM is received, and the timer activates the callback, the main thread is still blocked in gethostbyname and the whole process is blocked, so timeouts cannot be implemented. If anyone knows a better way of getting a blocking system call to time out in Python, please let me know. I've finally resorted to invoking the BIND 'dig' program with timeout parameters from commands.getstatusoutput so my app can recover from network connectivity problems. I think we need to be able to invoke the glibc siginterrupt() function from Python; ONLY by doing so are signals allowed to interrupt system calls so that they return EINTR. Note that this differs from "old" Linux behaviour. The siginterrupt man page states: "system calls will be restarted if interrupted by the specified signal sig. This is the default behaviour in Linux." (this is true) "However, when a new signal handler is specified with the signal(2) function, the system call is interrupted by default." (THIS IS FALSE!) With modern Linux kernels + glibcs, all signals are restarted UNLESS the siginterrupt(sig, 1) call is invoked. This may be a glibc bug, but many glibcs out there have it.
This issue can be reproduced using the attached C program; without the siginterrupt, the system call is not interrupted. PLEASE provide a wrapper to call the siginterrupt(3) glibc function from Python - THANKS.

PyOS_setsig in pythonrun.c now calls siginterrupt(sig, 1) (in Python 2.4.4/Python 2.5.1, but not in Python 2.3). So you should be able to time out the system calls with a signal.alarm call. However, having siginterrupt available would still be useful. I have some patches for the signal module and will clean them up in some days and attach to this bug.

Here's an implementation using ctypes:

def siginterrupt(signum, flag):
    """change restart behavior when a function is interrupted
    by the specified signal. see man siginterrupt.
    """
    import ctypes
    import sys
    if flag:
        flag = 1
    else:
        flag = 0
    if sys.platform == 'darwin':
        libc = ctypes.CDLL("libc.dylib")
    elif sys.platform == 'linux2':
        libc = ctypes.CDLL("libc.so.6")
    else:
        libc = ctypes.CDLL("libc.so")
    if libc.siginterrupt(signum, flag) != 0:
        raise OSError("siginterrupt failed")

here is a patch against 2.5.1

Next time please give the patch a more meaningful name than "patch" :) Looks okay, but needs docs.

I'm attaching an updated patch against trunk. It also contains some documentation now. I've attached a new patch against current trunk, which now also contains unit tests. Applied in r60983. Thank you all!
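Python itself later grew exactly this wrapper: signal.siginterrupt has been part of the standard signal module since Python 2.6. Here is a minimal sketch (assuming Linux and the main thread) of using it together with SIGALRM to time out a blocking read; note that on Python 3.5+ (PEP 475) the handler must raise an exception, otherwise the interrupted call is silently retried:

```python
import os
import signal

class Timeout(Exception):
    pass

def handler(signum, frame):
    # Raising here aborts the blocking call; merely returning would let
    # Python retry the syscall after EINTR (PEP 475 behavior).
    raise Timeout()

signal.signal(signal.SIGALRM, handler)
# Ask the kernel not to restart syscalls interrupted by SIGALRM.
signal.siginterrupt(signal.SIGALRM, True)

r, w = os.pipe()
signal.alarm(1)              # deliver SIGALRM in one second
try:
    os.read(r, 1)            # blocks: nothing is ever written to the pipe
    timed_out = False
except Timeout:
    timed_out = True
finally:
    signal.alarm(0)          # cancel any pending alarm
    os.close(r)
    os.close(w)

print(timed_out)  # → True
```

This gives the timeout behavior the original report was after, without resorting to shelling out to 'dig'.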
http://bugs.python.org/issue1089358
Happy New Year all; I hope you had as pleasant a New Year’s Eve as I did.

Last time on FAIC I described how the C# compiler first uses overload resolution to find the unique best lifted operator, and then uses a small optimization to safely replace a call to Value with a call to GetValueOrDefault(). The jitter can then generate code that is both smaller and faster. But that’s not the only optimization the compiler can perform, not by far. To illustrate, let’s take a look at the code you might generate for a binary operator, say, the addition of two expressions of type int?, x and y:

    int? z = x + y;

Last time we only talked about unary operators, but binary operators are a straightforward extension. We have to make two temporaries, so as to ensure that side effects are executed only once: [1. More specifically, the compiler must ensure that side effects are executed exactly once.]

    int? z;
    int? temp1 = x;
    int? temp2 = y;
    z = temp1.HasValue & temp2.HasValue ?
        new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()) :
        new int?();

A brief aside: shouldn’t that be temp1.HasValue && temp2.HasValue? Both versions give the same result; is the short-circuiting one more efficient? Not necessarily! AND-ing together two bools is extremely fast, possibly faster than doing an extra conditional branch to avoid what is going to be an extremely fast property lookup. And the code is certainly smaller. Roslyn uses the non-short-circuiting AND, and I seem to recall that the earlier compilers do as well.

Anyway, when you do a lifted addition of two nullable integers, that’s the code that the compiler generates when it knows nothing about either operand. Suppose however that you added an expression q of type int? and an expression r of type int [2. Roslyn will also optimize lifted binary operator expressions where both sides are known to be null, where one side is known to be null, and where both sides are known to be non-null.
Since these scenarios are rare in user-written code, I’m not going to discuss them much.]:

    int? s = q + r;

OK, reason like the compiler here. First off, the compiler has to determine what the addition operator means, so it uses overload resolution and discovers that the unique best applicable operator is the lifted integer addition operator. Therefore both operands have to be converted to the operand type expected by the lifted operator, int?. So immediately we have determined that this means:

    int? s = q + (int?)r;

Which of course is equivalent to:

    int? s = q + new int?(r);

And now we have an addition of two nullable integers. We already know how to do that, so the compiler generates:

    int? s;
    int? temp1 = q;
    int? temp2 = new int?(r);
    s = temp1.HasValue & temp2.HasValue ?
        new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()) :
        new int?();

And of course you are saying to yourself, well, that’s stupid. You and I both know that temp2.HasValue is always going to be true, and that temp2.GetValueOrDefault() is always going to be whatever value r had when the temporary was built. The compiler can optimize this to:

    int? s;
    int? temp1 = q;
    int temp2 = r;
    s = temp1.HasValue ?
        new int?(temp1.GetValueOrDefault() + temp2) :
        new int?();

Just because the conversion from int to int? is required by the language specification does not mean that the compiler actually has to generate code that does it; rather, all the compiler has to do is generate code that produces the correct results! [3. A fun fact is that the Roslyn compiler’s nullable arithmetic optimizer actually optimizes it to temp1.HasValue & true ? ..., and then Roslyn’s regular Boolean arithmetic optimizer gets rid of the unnecessary operator. It was easier to write the code that way than to be super clever in the nullable optimizer.]

Next time on FAIC: What happens when we throw some lifted conversions into the mix?
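For readers following along, the lifted-addition semantics being compiled here can be modeled in a few lines of Python, with None standing in for null. This is my own illustration of the semantics only, not of anything the C# compiler actually emits:

```python
def lifted_add(x, y):
    """Model C#'s lifted int? addition: the result is null (None)
    unless both operands have a value."""
    # Mirrors: temp1.HasValue & temp2.HasValue
    #            ? new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault())
    #            : new int?()
    if x is not None and y is not None:
        return x + y
    return None
```

When the second operand is statically known to be non-null (the q + r case), the two-sided HasValue test collapses to checking only the nullable side — which is exactly the optimization the post describes.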
Eric, I love your blog and am a long-time reader, and I was wondering if you’d be willing to install a footnote plugin (such as this one:) to make jumping around a little easier? Great post, by the way, I love reading about what’s going on under the hood 🙂

Thanks for the suggestion. I am new at running a wordpress blog and the array of available plugins is somewhat bewildering. I use a lot of footnotes, so I’ll check that out! I’ll also probably install a “markdown in comments” plugin at some point.

That would also be quite nice — I think one with a preview window would be great for those of us who aren’t so good at Markdown. If I could be so bold, I’d like to suggest also widening the page, or at least the comment box. The first comment is okay but replies-to-replies are tiny! There also seems to be a limit to how many comments can be nested in a thread: note the lack of a reply button!

“AND-ing together two bools is extremely fast” But it might give a wrong result! Is HasValue guaranteed to return either (bool)0 or (bool)1? If it sometimes returns (bool)2 the AND might produce a false negative.

You can’t cast ints to bools like that in C#:

    > (bool)2
    (1,1): error CS0030: Cannot convert type ‘int’ to ‘bool’

I think it’s guaranteed that ‘&’ will work on bools as expected.

A bool has an integer representation (I know that from Microsoft Pex – Pex can actually generate bools that are not 0 or 1 and cause the program under test to fail). The CLR allows non-0-or-1 booleans.

What about this?

    [MethodImpl(MethodImplOptions.NoInlining)]
    public static unsafe bool ByteToBoolean(byte b)
    {
        return (bool)*&b;
    }

ByteToBoolean(1) & ByteToBoolean(2) will return false!

Unsafe pointers and related conversions make pretty much anything rational go out the window.

Unsafe code is not the explanation here. It is an implementation detail. See the last section of (“This is madness!”).

Oh, very interesting, I didn’t know that!
In your example case, though, you’re running unsafe code — I assume everything with the nullable types is safe. When you’re in an unsafe context things do get a bit trickier. Perhaps a better way to put it would be “in a safe context, ‘&’ on bools works as expected”? Or are there times even then where you can get this behavior?

What this method does on the “inside” is an implementation detail. Unsafe code or not does not matter. The CLR provides the same facility – you can convert any byte losslessly to a boolean. This is perfectly defined and deterministic. “Mixed” booleans are a perfectly valid element of the CLR. They are allowed to occur anywhere.

The C# Reference on the & operator states: “For bool operands, & computes the logical AND of its operands; that is, the result is true if and only if both its operands are true.” C# isn’t C. & when used on bools isn’t “bitwise AND”, it’s “non-short-circuiting logical AND”. (Scratch the above, apparently the IL emitted is the same for integers and booleans. This might actually be a genuine bug; or one could argue C# booleans and CLR booleans aren’t the same concept, although I thought this sort of interoperability would be dealt with somewhere.)

The C# Reference and Specification both say the only values a bool can have are false and true. If all bools have values of false or true, bitwise AND and non-short-circuiting logical AND are the same thing.

I think the spec simply does not address this. It sounds suspiciously like what C calls a trap representation. In C, when reading a trap representation (such as, on some systems, a byte with a value of 2 through an lvalue of type _Bool), the behaviour is undefined; any behaviour is permitted. Even though the value 2 is a valid value for any 8-bit register that the generated machine code might use, the compiler is allowed to assume that the value isn’t 2, isn’t 3, isn’t any larger value than that.
So, for example, b > true might evaluate to false at compile time, yet so would b < true or b == true at run time.

As a practical data point, C#’s & when used with two bools compiles to a bitwise AND on my system, meaning (bool)1 & (bool)2 evaluates to false, and (bool)2 & (bool)2 evaluates to (bool)2, even though your comment suggests the result should be the same for both.

The documentation for HasValue is clear: the return value is either false, or true. So unless (bool)2 compares equal to true on some systems (not on mine), it is not a valid return value, and the compiler does not have to worry about that possibility.

This reminds me of something I’ve always wondered: why does the C# compiler not do any inlining itself? Why leave it all to the JIT? It seems like you should do as many optimizations at compile time as possible.

I imagine Eric could give a better reply; but my guess would be that a fair amount of the information that guides whether to inline or not is only available at JIT time. Inlining takes up more code space (possibly lowering performance, due to cache misses), removes the speed penalty of a call/return, and potentially saves pushing function arguments onto the stack. Until runtime, you don’t know how much extra code space inlining will use (depends on the platform and/or CPU in question). You also don’t know how many registers you have available, and hence whether you actually *can* save the time of pushing function arguments onto the stack (if inlining causes you to run out of registers, then you’re going to have to spill some onto the stack in any case). Some architectures will take longer for a call/return, which could affect your decision on whether to inline. JIT time is the point where you have all the information you need on whether to inline. At compile time you don’t, so the safest option is to leave it to the JIT.
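As an aside to the & versus && discussion above, Python draws the same distinction: & on two bools evaluates both operands eagerly, while `and` short-circuits. A small demonstration of the difference (the tracer helper is my own, purely for illustration):

```python
def tracer(value, log):
    """Return value unchanged, recording that this operand was evaluated."""
    log.append(value)
    return value


eager_log = []
# '&' on bools evaluates both operands, no short circuit...
eager = tracer(False, eager_log) & tracer(True, eager_log)

lazy_log = []
# ...while 'and' stops at the first falsy operand.
lazy = tracer(False, lazy_log) and tracer(True, lazy_log)
```

After running this, eager_log holds both operands but lazy_log holds only the first, even though both expressions produce False.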
Actually no, I don’t think I can give a better answer than that. 🙂

I understand why some inlining can only be done by the JIT, but do those considerations really come into play in these single statement cases, such as GetValueOrDefault()? To me, it seems like a reasonable assumption that single statement methods always make more sense to be inlined. Thus, it is something the C# compiler could do. The compiler is essentially doing a form of inlining when it performs the rewrites you detail in your post Eric (as opposed to generating a reusable method and calling that each time).

Eric: thanks! 😉

Sam: I can imagine two reasons why the C# compiler might not inline ‘simple’ methods (single statement might not be the best way to describe them; a single statement could be pretty complex!):

1) If the statement is really that simple, then the JITter isn’t going to spend much time analysing it, so you’re not saving much JIT time by analysing it in advance; why bother special-casing it? It complicates the compiler to optimise only simple cases that weren’t causing speed problems anyway.

2) It removes the ability to do things like set a breakpoint on the method in question – because it effectively doesn’t exist in the IL if it’s been inlined everywhere...! I think (not sure?) that the JIT normally avoids inlining code if there’s a debugger attached, for that reason.

Inlining a method also removes the ability to replace some, but not all, dlls in a compiled application (i.e., makes full compiles necessary). Though if you “know” they won’t change (e.g., constants)…

Although there are many cases where only the JIT can know whether to inline something, I would think that having the compiler inline things like struct property getters would allow it to eliminate many redundant copy operations, especially when they are invoked in read-only contexts.
For example, if one is enumerating a Dictionary(of Guid, Rectangle) and has a KeyValuePair(of Guid, Rectangle) called kvp, and if the compiler doesn’t analyze property getters, accessing kvp.Value.X will require making a copy of all 32 bytes of kvp, passing a byref to a method which fetches 16 of them, storing those to another temporary stack spot, and passing a byref to a routine that fetches four bytes. Even if the methods are inlined, one is still stuck with the overhead of the copy operations. Even though the purpose of the code is simply to fetch four bytes from kvp, it has to make a slow copy of kvp, then a somewhat faster (but still RAM-based) copy of kvp.Value, before it can finally read the four bytes it was actually interested in. I don’t think any realistic level of JIT inlining could yield anything close to the obvious optimization of simply reading the four bytes directly from the original struct, but I would think a compiler that could recognize trivial struct property getters could do so.

Inlining at compile time can’t work across assemblies, since you don’t know what code that other assembly might contain at runtime.

“so you’re not saving much JIT time by analysing it in advance; why bother special casing it?” – it’s a micro-optimization, just like the compiler using GetValueOrDefault() is a micro-optimization. Compilers should micro-optimize anywhere they can, because when you add those micro-optimizations up over millions/billions of lines of code across the globe, you’re saving real time/energy/karma. As with all compiler optimizations, it wouldn’t be present if you build in Debug mode. As it stands, the compiler can already do a significant amount of code rewriting when optimizations are enabled (for example, removing unused variables), making some breakpoints in Release mode code impossible.
Pingback: The Morning Brew - Chris Alcock » The Morning Brew #1266

Does it actually do the assignment to a temporary if r is actually an int, or just if r is an expression that can have side effects?

That’s a great question that I am not going to explore in depth in this series. Briefly, the problem is that determining when an expression either *produces* or *consumes* a side effect can be quite tricky. For example, reading a local variable never produces a side effect, but another expression might *write* to a local variable as a side effect, and therefore the read must not be re-ordered with respect to the write. The Roslyn and original recipe compilers treat constants as expressions that do not need to be stored in temporaries; pretty much everything else is put into some kind of temporary.

Pingback: Nullable micro-optimization, part four | Fabulous Adventures In Coding

There really was rather little need to be uncertain about what the classic compiler did. You may be familiar with Joseph Albahari’s excellent LINQPad utility. Prior to Roslyn’s REPL it was the best way to test a C# expression or fragment without creating a whole Visual Studio project, or calling the compiler by hand. One of the nice things it does is provide the disassembly of the method(s) you create. So to test this one, I simply switched to “C# program mode” and typed in a simple method whose body was “return a+b”, and whose parameters and return value had type ‘int?’. Then I clicked run (which ran the empty Main method) and looked at the disassembly for the method. Sure enough, the classic compiler does use bitwise AND, as you predicted. The whole thing took less than a minute to check. It literally took me several times as long to write this post as it did to check that. Hope this helps.

Pingback: When would you use & on a bool? | Fabulous adventures in coding
https://ericlippert.com/2013/01/03/nullable-micro-optimization-part-three/
[vos-d] How databases hurt scalability
___ vos-d mailing list vos-d@interreality.org

[vos-d] Sorry, need another test message
Just want to make sure the list still works after changing something... Reed --

Re: [vos-d] Rough graphs of #vos channel activity
Oops, just noticed that the days of the week are out of order. Should fix that I guess. But there's an interesting upward trend toward the middle of the week, then it goes down to drop off on Saturday.

Re: [vos-d] babies
Interesting coincidence! Congratulations to you too! Reed
Braden McDaniel wrote: On Wed, 2008-07-30 at 11:51 -0400, Reed Hedges wrote: I have a new top priority project that must compete with VOS -- Zephan Isaac was born on Friday at 7:30 PM! Though so far he's been great and we're well

[vos-d] Web Site
Am thinking of switching over to the new website I was working on earlier (), but without the half-assed background image. Would be nice to create a nicer one but don't have time. I'll also do an editing and clarity pass to try to improve whatever wording I can, and

Re: [vos-d] Web Site
On Mon, Jun 30, 2008 at 01:47:35PM -0400, Reed Hedges wrote: Am thinking of switching over to the new website I was working on earlier (), but without the half-assed Go to to see

Re: [vos-d] s5 properties proposal
We talked a bit on IRC but wanted to respond here too. I think it can be summarized briefly like this: In S4, Vobjects had an ordered list of named child links to other Vobjects, one type of which was a Property. In the proposal, Vobjects have an unordered set of named properties,

[vos-d] Re: [vos-d] s5 vobject properties
I'm worried about introducing yet more complexity into S5. You know that this is a big concern of mine. What is the exact overhead for having entries in the child list for embedded properties? You need a contextual name, and you need the object. The list itself stores the position value.
The

Re: [vos-d] s5 vobject properties
In other words, I sort of imagined it like this:

    class Entity {
        handleMessage();
        set<Entity*> parents;
        string url;
    }

    class Link {
        string cname;
        int pos;
        Entity *child;
        Entity *parent;
    }

    class Vobject : Entity {
        list<Link> children;
        vector<EmbeddedProperty> embeddedProperties;

[vos-d] Re: [vos-d] More new S5 classes/concepts
I was under the impression that, for the most part, you are always going to be working through Wrapper objects. Is this true? I still think it would be easier for people to use VOS if the wrapper classes had plain names, and the thing being wrapped had the funny name. E.g. DataType

Re: [vos-d] s5 hypervos (mod_vos)? Reed

Re: [vos-d] s5 hypervos (mod_vos)
Reed Hedges wrote:? I'd like

Re: [vos-d] More new S5 classes/concepts
What is the Extension Manager? (What's an ExtensionManagerWrapper?) What is a Service Manager (site.getServiceManager)?

[vos-d] Fwd: C++0x developments
Original Message Subject: [liblf-dev] C++0x developments Date: Fri, 22 Feb 2008 12:47:36 -0500 From: Bjorn Roche [EMAIL PROTECTED] Reply-To: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Hey all, I just wanted to update everyone on the latest developments in

Re: [vos-d] Website design ideas
Reed Hedges wrote: Here are two ideas I had for a new website design. Both are rough sketches. \ 2. I made a few small changes to this one, trying a different background image (the branches one is just to show the concept, it's a terrible

Re: [vos-d] Updated Road Map on Wiki

Re: [vos-d] Website design ideas
Lalo Martins wrote: I like #1. When I did a mockup a long long time ago, I went with a similar idea, and I think it's still valid; the metaphor being that you're looking at a few flat widgets floating in a 3d space. The main thing I don't like is it's too dark and black, which might scare

Re: [vos-d] Website design ideas
I made the divs a bit transparent in #1.
I think they're too boring though, maybe they need a bit more bubbliness? (Or is that too Web 2.0? :) Or more of a border? I just threw together the background images in blender, but I do like having them look more polygonated and emphasising that they are 3d

[vos-d] Re: [vos-d] Future of the Blender UI
The internal code changes are to use a more general event system for UI and tool actions, and make all the UI infrastructure accessible and customizable from Python. Here's more

Re: [vos-d] More new S5 classes/concepts
Croquet is very cool. I know Pete and I have been following it and reading about it since we started VOS. I don't know if its details have influenced our design much, but some of the end goals and features are similar. A lot about it is not documented, as far as I can tell. It's all in Squeak,

Re: [vos-d] Updated Road Map on Wiki
On Thu, Feb 14, 2008 at 07:13:14PM +, Lalo Martins wrote: hypervos is already alive and kicking in the form of an Apache mod_vos; Ack! Why didn't you email the list! I've been adding stuff to S4 hypervos (and intend to port them to S5!) And I have many plans for further development (see

Re: [vos-d] s5 site ids
Lalo Martins wrote: AIUI from IRC conversations, the site IDs won't actually be visible to the application programmer later on; we'll deal only with IDs like /vos/core/StringProperty. (I'm not sure about the code namespaces, but I hope some simplification is intended there too ;-) ) Peter [vos-d]()':

Re: [vos-d] s5 site ids
The new type IDs are still bothering me a bit too (among other things, like the code namespaces). A huge advantage of the old user-invented type names is that they were natural to understand and easy to remember. We're going to have to do a lot of cutting and pasting, and when we get one

Re: [vos-d] More new S5 classes/concepts
Started listing S5 changes at: Will fill them in a bit as I go through the vos-d archives and talk to Pete more.
Reed

[vos-d] More new S5 classes/concepts
I'm going through S5 a bit more deeply now. Pete, can you give a summary explanation of these new classes/concepts, and how you use them, what they do, etc. What are:
- ComponentWrapper
- Promise
- Status (used with a Promise it seems?)
- IVobject
- VobjectImpl
- ImplementationWrapper

Re: [vos-d] More new S5 classes/concepts
And just a general impression... S5 is becoming a really complex and daunting (and sophisticated, and hopefully very powerful) piece of software. But this means that we're going to need to put a *lot* of work into documenting, tutorials, as well as just polishing and refining the API itself, to

Re: [vos-d] s5 progress and design - OTD XML
On Fri, Dec 07, 2007 at 06:01:30PM +, Lalo Martins wrote: Also spracht Reed Hedges (Fri, 07 Dec 2007 10:57:10 -0500): Oh, ok, then my sketch is not really SOD, just a similar thing. Sorry for appropriating the name. I didn't know that you implemented your format (or forgot). Why wasn't

Re: [vos-d] s5 progress and design
I don't know about Karsten, but the reason I have a knee-jerk reaction against suffixes like Wrapper is that it just feels like cruft, based on other systems where the word had no consistent meaning; at least from a user's perspective, it just seemed a hack to extend an API or facilitate its

Re: [vos-d] s5 progress and design - sites, hosts, and URLs oh my
Karsten Otto wrote: Ok, I see. But this implies there could be more servers than just one, each hosting a replica. Which one do I contact for updates? With VOP/VIP URLs this was straightforward, but please remind me again, how do I contact a vos:0011223344... key-based site? Is there a

Re: [vos-d] s5 progress and design
We should also include at this point a reminder of why there's a code generator.
If I understand things correctly, the goal is to use the code generator to (a) generate code for different programming languages, and (b) make it easier for users to generate MetaObject (now called Component)

Re: [vos-d] s/MetaObject/Component
Component is generic, and also recalls COM etc. How about:
- Part
- Facet
- Role
- Type
- Fragment
- Trait
What exactly *is* a metaobject? It's a constituent Vobject that's part of a real Vobject, and which implements a facet or part of that Vobject, probably corresponding to a type. It implements part

[vos-d] S5 and ordering listeners
Can you comment on this Peter? Let's say I want a set of listeners attached to an object to be invoked in order. Let's say that both listeners live on the same local site (process) and maybe are both associated with the same vobject. Will there be a way to do this in S5? What if the listeners

Re: [vos-d] thinking about a new web site
Here are some ideas I had on revamping the web site. Graphic Design --
* Change the background to white or another light color. Maybe change the main content area to a different shade too, rather than the current grey.
* A set of background/side illustrations, that convey some of the

Re: [vos-d] notes from IRC
On Fri, Oct 19, 2007 at 05:16:27PM -0400, Peter Amstutz wrote: Notes from some initial discussion of the interreality 3d interface. Participants: winterk, zaharazod, tetron
- Should be more like stuff people expect
- Splitting and merging panels is likely to confuse casual users
-

Re: [vos-d] Forums integrated with mailing lists
Here's an RSS feed for the vos-announce forum/mailing list: (actually, it's an alias for the RSS feed that the forum software generates, but this abstracts that in case we change forum software or whatever. vos-d also has an rss feed, just go to its forum

Re: [vos-d] Forums integrated with mailing lists
I think it would be ok to just black it out (e.g. [EMAIL PROTECTED]).
It's not that useful to have email addresses visible; this is just an artifact of how some email clients do replies. On Fri, Oct 19, 2007 at 11:07:22AM -0400, Peter Amstutz wrote: Posted at:

Re: [vos-d] notes from IRC
Just thought of this: One thing that a remote app might want to customize about the UI is how some things are labeled, or it might want to add special informational labels/text blocks/tooltips/bubbles/whatever. Reed

Re: [vos-d] Forums integrated with mailing lists
Nope. You can see here: Reed
On Fri, Oct 19, 2007 at 02:43:08PM +0200, [EMAIL PROTECTED] wrote: Hi Peter, I hope the mail addresses are not openly readable in the forum then, or we might get a lot of spam here soon? --

Re: [vos-d] s5 version control and persistence
On Wed, Oct 17, 2007 at 02:53:49AM +, Lalo Martins wrote: Also spracht Reed Hedges (Tue, 16 Oct 2007 10:24:27 -0400):
- type list
- child list
- payload, if any (eg properties)
- security capabilities
- parent list?
That would be problematic. Since the PCRs are already

Re: [vos-d] s5 version control and persistence
This sounds really good. Having replication and clustering will be really important as we move forward. It will let us do all kinds of scaling and load distribution, and even manage things like internal, in-development or draft datasets that get published to a public site when ready... lots

[vos-d] Douglas Engelbart and some Google folks talk about knowledge tools and organizational improvement
Brief overview of Engelbart's vision and ideas, talking to folks at Google.

Re: [vos-d] issue building on OS X
Pete has a Mac so maybe he can help. I guess you have neither readline nor termios, so it's trying to use getch() from curses. Can you post the 'config.log' file that should have been created in the main build directory?
Can you grep for the getch() function somewhere? It should be in

Re: [vos-d] development plan
5) Come up with some milestones and prioritize development. The strategy will probably be to code enough of the VOS framework to support concurrent development of higher level pieces like A3DL, and to start putting some meat on the bones of the UI prototype. This may be something like

Re: [vos-d] What do we want in the 0.24 release?
this script recently? Have any problems that need to be fixed or missing features added? In a little while I'll consolidate possible tasks for 0.24 and we can prioritize them (we probably won't be able to do all of them, we ought to release 0.24 pretty soon). Reed
Reed Hedges wrote

[vos-d] Interpolation in TerAngreal
I've been writing some code that makes some objects fly around, and Ken's interpolation code makes it look nice and smooth. However, in doing so I experimented a bit with position update frequency, wondering what the slowest rate I could use is, especially when the velocity I was trying to

Re: [vos-d] State of S4 Scripting (Lalo!)
It doesn't have to be in depth about how it works, just show what it does and doesn't do (i.e. it just wraps the core vobject and property api, right, not all the metaobjects [yet]?) and how to go about trying to use it, maybe give some examples. How do you build it? (run setup.py?) Reed

Re: [vos-d] State of S4 Scripting (Lalo!)
Lalo Martins wrote: Yes. There is already a prototype s5-scripting branch somewhere to match Peter's prototype s5 branch, and it looks absolutely beautiful, although Is it s? Do you have any documentation or notes you can add

Re: [vos-d] State of S4 Scripting (Lalo!)
scripting? (which would be basically a dead end.) Reed
Lalo Martins wrote: Also spracht Reed Hedges (Wed, 27 Jun 2007 12:58:40 -0400): What is the state of the S4 scripting branch ()? Does the Python interface work? disclaimer: I haven't

[vos-d] What do we want in the 0.24 release?
Are there any bugs or real defects in Ter'Angreal or VOS? I know of two; don't know if we should fix them:
* Avatar settings (model/skin) aren't saved (really just an unimplemented feature)
* Objects aren't always removed from the world
Something to test is whether all A3DL

Re: [vos-d] thoughts and plans
On Wed, Jun 27, 2007 at 12:29:43PM +0200, Karsten Otto wrote: This sounds like a radical redesign... so far we had a single local vobject acting as a sequencer for multiple remote vobjects. Obviously, you have something new in mind. Please tell us more :-) My feeling is that this would be

Re: [vos-d] What do we want in the 0.24 release?
On Fri, Jun 29, 2007 at 08:05:37AM -0600, S Mattison wrote: I can fall through the map. =P Well maybe you need a floor! :) Well, it's not difficult when there are no invisible bounding boxes holding me on. Actually it's several things 1. Bounds in the world that terangreal can check,

[vos-d] State of S4 Scripting (Lalo!)
What is the state of the S4 scripting branch ()? Does the Python interface work? Thinking about what we should try to include in 0.24 (codename s4 swan song). Reed

[vos-d] tags, trees, tables, and types
On Wed, Jun 27, 2007 at 10:38:37AM -0400, Andrew Robbins wrote: I have also been thinking a great deal about tables, tags, trees, and types (talk about alliteration!). The reason I've been thinking so much about them is that every once in a while, I'll come up with a new way of converting

Re: [vos-d] development status
Chris is referring here to a proposal for the X3D format/language (new version of VRML) to add the sensor nodes mentioned, by the way. Reed
chris wrote: Hi, just a few comments on other status that may interest.
I have been testing tcp/ip networking with an implementation of the network

Re: [vos-d] Listener notifications in S5
On Thu, Jun 07, 2007 at 10:02:03PM -0400, Peter Amstutz wrote: On Thu, Jun 07, 2007 at 07:31:42PM -0400, Reed Hedges wrote: How do listener notifications fit in with the S5 vobject-as-logical-thread idea? I'm thinking specifically about impact on ability to scale number of objects

[vos-d] Listener notifications in S5
How do listener notifications fit in with the S5 vobject-as-logical-thread idea? I'm thinking specifically about the impact on the ability to scale the number of objects that one listener is listening to. I'm guessing listener notifications are processed the same as any other messages in the vobject's

[vos-d] Errors building crystalspace
Was the crystalspace snapshot updated or changed recently? I'm getting these errors now trying to build it. Is anyone else, or is something strange going on with my checkout?

    C++ ./out/linuxx86/debug/libs/csutil/csstring.o
    ./include/csutil/formatter.h:992: error: non-template

Re: [vos-d] Errors building crystalspace
not and will probably never use). Reed
Reed Hedges wrote: Was the crystalspace snapshot updated or changed recently? I'm getting these errors now trying to build it. Is anyone else or is something strange going on with my checkout?

[vos-d] Embedded properties, string pooling, and search
The planned S5 features of embedded properties and string pooling ought to make it efficient to search for objects on a site by type or name (due to string pooling), if the shared strings have pointers back to their vobjects, right?
Have you implemented string pooling yet, or what are your plans

Re: [vos-d] Scaling and Origins -- 0.23 vs 0.24
On Wed, May 16, 2007 at 10:18:34AM -0400, Peter Amstutz wrote: The bigger problem was I was doing something dumb in 0.23, which was the code that loads the md2 models for avatars recenters it to make the origin the center of the avatar bounding box rather than at the avatar's feet. So

Re: [vos-d] Wanna help the Mass Avatar Mash?
We don't have specific plans for H-anim and VOS. We haven't designed how jointed, animateable geometry will work in VOS yet. Chris is just keeping us informed about possible things to do (thanks Chris) I think. At this point, we plan on having a general VRML server for VOS that exposes a

Re: [vos-d] Van Jacobson: named data
Yeah, so his ideas cut accross all kinds of layers and aspects of networking. so I don't think VOS can be THE solution to the problems he explains, but it can provide a few key tools. Namely it can be a data storage system, both for originals, and replicated copies, and for store-and-forward,

Re: [vos-d] Van Jacobson: named data
I downloaded a copy of this video if anyone wants it. Reed

Re: [vos-d] Van Jacobson: named data -- revision control
This means that if that version object is mutable, i.e. a not read-only property, we need to also have branches in the version history, and any reference to a past version of a vobjcet is really a reference to the most recent version in the branch rooted on this object, which if there is

Re: [vos-d] Van Jacobson: named data
I guess each copy, whether changed or not, should have a pointer to its original. I wonder if any vobject version should not have it's versions inside it, but simply have a pointer to it's predecessor (or the other way around, an object has links to all its derivatives).
Then you can have

Re: [vos-d] Movement interpolation update
Karsten Otto wrote: You cannot really fix this with a don't-interpolate flag, as there is no good place to put one. You could extend the property-update notification, but a lot of properties do not use interpolation That's ok. Nothing stops you from adding whatever fields you want to the

Re: [vos-d] build system reviews (long)
On Wed, Apr 25, 2007 at 11:07:11PM -0400, Peter Amstutz wrote: It's actually make distcheck that's interesting in this case. In automake it gives you a single command that will build a source tarball, unpack it to another directory, runs configure and does a build. It's a very useful

Re: [vos-d] VOS on Solaris?
Lars O. Grobe wrote: Do you have libtool installed? That is a separate package from autoconf and automake. I compiled the most recent one from GNU. As I am not root, I installed it in my home and added it to my PATH. Does the build make any assumption where to find libtool (e.g. under

Re: [vos-d] Patches for 0.24 in MSVC
Wow, thanks Ken! The only thing I haven't really tracked down is that avatar movement seems a bit jerky in the MSVC build. If I get more time I might look into it... Try it in a release build. (both VOS and Crystalspace in release mode). MSVC puts a lot of extra code in when in debug

Re: [vos-d] VOS on Solaris?
Hi Lars, thanks for giving it a try. We have not tried the current version of vos on Solaris yet but would like to make it work there (the libraries and server at least). Are you using 0.23, or the development version (source control repository snapshot or checkout)? What version of autoconf is

[vos-d] S5 and single-thread option
Pete, in your description of S5 so far it seems like it is defining a threading model that is not neccesarily coupled to a particular thread implementation.
That is, conceptually vobjects are threads or proceses but I am guessing that you won't be implementing it by simply creating a pthread for

[vos-d] server requirements
In a different forum, Hellekin Wolf asked: The server issue mentioned above makes me think of the requirements for running a VOS world. Do I need hardware graphics or is it only for the client side? A server does not need any graphics hardware. Many servers in fact are just providing data

Re: [vos-d] Thinking about Javascript
On Tue, Apr 17, 2007 at 07:27:21AM +, Lalo Martins wrote: One problem I have with the pure-js version is the nature of HTTP; either the browser would need to keep a persistent connection to the server, like some web chat rooms do -- which is error prone (hard to recover from a

Re: [vos-d] Integration of VOS and IRC
This would be great for people who primarily want to just chat or be present in the world while doing other work, so they don't want the full 3D world. It would also make it possible for blind people to interact in the 3D world. Reed On Tue, Apr 17, 2007 at 01:54:23PM -0400, Peter Amstutz

Re: [vos-d] Integration of VOS and IRC
scenery part). Unfortunately, viewpoints usually have no navigation links between them. So for what you want to do, you need a combination of both. This requires some work, but VOS is flexible enough to support all this. Yeah, you would just have the waypoint object type have child

[vos-d] Thinking about Javascript
Is anyone here familiar with Javascript much? I'm wondering what kind of networking tools are available from Javascript. I've been reading about a thing some people call Comet, () which basically a publish/push model for the server to update pages live. It

Re: [vos-d] terangreal changes
On Thu, Apr 12, 2007 at 10:29:35PM -0400, Peter Amstutz wrote: On Thu, Apr 12, 2007 at 09:26:09PM -0400, Reed Hedges wrote: * Change mouse cursor to reflect what clicking will do (i.e.
differentiate between mouselook/move modes; change when over a hypercard or clickable) Yea

Re: [vos-d] s5 concurrency (design part 2)
Peter Amstutz wrote: On Fri, Apr 06, 2007 at 05:16:16PM -0400, Reed Hedges wrote: There's also something called Flow-Based Programming that is similar. In some ways it's closer to VOS since Actors are, I think, more like method handlers (in VOS terminology). I don't agree. Flow-based

Re: [vos-d] s5 concurrency (design part 2)
So messages between local objects will be serialized and passed like remote messages, rather than being method calls? Is that overhead a concern? If so, maybe an optimization would be to have a message format that just packs native machine format arguments into the message in the same order

Re: [vos-d] s5 concurrency (design part 2)
Peter Amstutz wrote: On Thu, Apr 12, 2007 at 08:16:59AM -0400, Reed Hedges wrote: So messages between local objects will be serialized and passed like remote messages, rather than being method calls? Is that overhead a concern? It is a concern, although I'm don't think I would call

Re: [vos-d] misc:search questions
Ken Taylor wrote: the fact that you guys seem to be allergic to comments doesn't help either ;) (i jest! i jest ... sorta) Just Pete. But he wrote most of the code. Reed

Re: [vos-d] bakefiles
Peter Amstutz wrote: Whenever I try to set up a VOS build environment on Windows, I get a sharp, throbbing headache and a strong urge to throw my chair out the window. It's difficult to understate just how big of a maintainance hassle the current build system is on Windows (whether Cygwin,

Re: [vos-d] keyboard vs. wimp interface for 3d
Karsten Otto wrote: Interestingly, while it has buttons that trigger actions, I mainly used them as a quick way to arrange keyboard shortcuts during normal gameplay. The only time I ever used the interface in a traditional way was for complex actions like trading items.
Actually this is

Re: [vos-d] bakefiles
Is it really bad enough to throw out our investment in the autotools configuration? bakefile is probably the most obscure of all the build tools you list. If you do switch to bakefile, let's keep some Makefiles in the bzr tree. And keep bakefile inside the bzr tree so that you don't have to

Re: [vos-d] Flux Worlds Server Announcement
different protocols. Not sure yet how to bridge to completely different multiuser 3D systems but at some point we might see if it's possible. Reed Reed Hedges wrote: Subject: [www-vrml] Flux Worlds Server Announcement

Re: [vos-d] s5 design overview
Ken Taylor wrote: Peter Amstutz wrote: 1. Memory footprint The current s4 design has a lot of per-vobject overhead, leading to a significant memory footprint. The development version improves on this a bit, but the honest truth is that the implementation was not written with memory

[vos-d] Flux Worlds Server Announcement
---BeginMessage--- Folks, We've been up to something over here - thought I would tell you about it before you heard it on the street. Media Machines has been developing a multi-user server based on a new protocol that we intend to put out into the open. We have dubbed it Simple Wide Area

Re: [vos-d] How to host a product design dinner party
We don't know what our niche is yet. We have one main domain (3D) and a secondary domain (Web) but there might even be others. Actually when we first began this several years ago, we knew someone who knew someone intersted in building factory tracking systems, though we ended up not really

Re: [vos-d] XOD questions
Peter Amstutz wrote: Oh, I see. I think we've been arguing at cross purposes, because in fact the load tag in XOD already has a few commands special to it that do most of what you want: Aha, thanks for describing linkin and linkout. I hadn't seen them before until very recently looking

Re: [vos-d] Upcoming changes: factory, actions.
Ken Taylor wrote: Cool!
I was actually thinking the other day that being able to select and inspect objects in Ter'Angreal would be useful and not super-difficult to implement on the current architecture. I like the ui:actions object as a sort of scripting approximation, though i'm assuming it

Re: [vos-d] Message handler problem
Another thought: does your derived class *have* to inherit Base virtually? Yes, basically. Well, in one case it doesn't and calling the method in the base class to register the handler works. In another case it has to be virtual. Maybe I can find a way to reorganize things to avoid it.

Re: [vos-d] XOD questions
Peter Amstutz wrote: On Thu, Mar 15, 2007 at 06:15:39PM -0400, Reed Hedges wrote: The reason I ask is that I want to load some 3D objects from a COD file, but then insert some non-3d children into one of those objects, and extend its types. This is the kind of thing that VOS is all about

Re: [vos-d] XOD questions
Peter Amstutz wrote: Well, the idea was more to support the ability to import other file formats (X3D comes to mind, although it's maybe not a good example since it's really an example of how not to design an XML schema) using a straightforward XSLT transform. Of course, we haven't yet

Re: [vos-d] X3D
Len Bullard wrote: I will move on to X3D because eventually I will need some of the new features like Inlines with interfaces and bits like the Keyboard Sensor, the upcoming Network Sensor and the physics engine, or the Nice-to-Haves like the Boolean Sequencer that I can replicate in script

Re: [vos-d] [www-vrml] RE: [x3d-public] X3D Game Engine?
chris wrote: Lauren, you were clear enough - I was just reinterpreting a bit because I don't think I have seen such a thing for x3d. But there are some potential candidates as a starting point, such as vos, vrspace or deepmatrix, to perhaps link with an X3D browser.
I just don't know

Re: [vos-d] Message handler problem
For one thing, apparently you can't do this:

  class Base {
  public:
      virtual void pure() = 0;
      template<class T> register() { VobjectBase::registerHandler<T>(message, handler); }
      void handler(Message *m) { ... }
  };
  class VirtualDerived : public virtual Base {
  public:
https://www.mail-archive.com/search?l=vos-d%40interreality.org&q=from:%22Reed+Hedges%22&o=newest
CC-MAIN-2022-05
refinedweb
5,246
67.28
#include <string.h>

char *
strtok(s, delim)
	register char *s;
	register const char *delim;
{
	register char *spanp;
	register int c, sc;
	char *tok;
	static char *last;

	if (s == NULL && (s = last) == NULL)
		return (NULL);

	/*
	 * Skip (span) leading delimiters (s += strspn(s, delim), sort of).
	 */
cont:
	c = *s++;
	for (spanp = (char *)delim; (sc = *spanp++) != 0;) {
		if (c == sc)
			goto cont;
	}

	if (c == 0) {		/* no non-delimiter characters */
		last = NULL;
		return (NULL);
	}
	tok = s - 1;

	/*
	 * Scan token (scan for delimiters: s += strcspn(s, delim), sort of).
	 * Note that delim must have one NUL; we stop if we see that, too.
	 */
	for (;;) {
		c = *s++;
		spanp = (char *)delim;
		do {
			if ((sc = *spanp++) == c) {
				if (c == 0)
					s = NULL;
				else
					s[-1] = 0;
				last = s;
				return (tok);
			}
		} while (sc != 0);
	}
	/* NOTREACHED */
}
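To make the control flow of the C function above easier to follow, here is a rough Python transliteration of the same algorithm (a sketch for illustration only: the real strtok mutates the C string in place and keeps its state in a static pointer, which the module-level `_last` variable merely approximates).

```python
_last = None  # stands in for the C function's static `last` pointer

def strtok(s, delim):
    """Rough Python analogue of the C strtok above (illustrative only)."""
    global _last
    if s is None:
        s = _last
        if s is None:
            return None
    # Skip leading delimiters (the `cont:` loop in the C code).
    i = 0
    while i < len(s) and s[i] in delim:
        i += 1
    if i == len(s):          # no non-delimiter characters
        _last = None
        return None
    # Scan the token until the next delimiter or end of string.
    j = i
    while j < len(s) and s[j] not in delim:
        j += 1
    # Remember where to resume, like `last = s` after writing the NUL.
    _last = s[j + 1:] if j < len(s) else None
    return s[i:j]

print(strtok("a,,b;c", ",;"))  # a
print(strtok(None, ",;"))      # b
print(strtok(None, ",;"))      # c
print(strtok(None, ",;"))      # None
```

As with the C original, the hidden state makes this non-reentrant: interleaving two tokenizations clobbers `_last`, which is why reentrant variants such as strtok_r exist.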
http://opensource.apple.com//source/Libc/Libc-186/string.subproj/strtok.c
CC-MAIN-2016-40
refinedweb
125
72.7
public RandomListNode copyRandomList(RandomListNode head) {
    if (head == null) return null;
    Map<RandomListNode, RandomListNode> map = new HashMap<RandomListNode, RandomListNode>();
    // loop 1: copy all the nodes
    RandomListNode node = head;
    while (node != null) {
        map.put(node, new RandomListNode(node.label));
        node = node.next;
    }
    // loop 2: assign next and random pointers
    node = head;
    while (node != null) {
        map.get(node).next = map.get(node.next);
        map.get(node).random = map.get(node.random);
        node = node.next;
    }
    return map.get(head);
}

In loop 2, when assigning the next and random pointers, we should check whether iter.next or iter.random is null or not!

import java.util.Hashtable;

public class Solution {
    public RandomListNode copyRandomList(RandomListNode head) {
        if (head == null) {
            return null;
        }
        Hashtable<RandomListNode, RandomListNode> ht = new Hashtable<RandomListNode, RandomListNode>();
        RandomListNode iter = head;
        while (iter != null) {
            ht.put(iter, new RandomListNode(iter.label));
            iter = iter.next;
        }
        iter = head;
        while (iter != null) {
            if (iter.next != null) {
                ht.get(iter).next = ht.get(iter.next);
            }
            if (iter.random != null) {
                ht.get(iter).random = ht.get(iter.random);
            }
            iter = iter.next;
        }
        return ht.get(head);
    }
}

Clean solution, but I have some concerns when I thought about hash tables: What is RandomListNode's hash value — is it the object's in-memory representation? Is it guaranteed to be unique for each object? As a follow-up, if RandomListNode were a black box, it seems we could not use this solution, because its hash value computation is not transparent. Am I correct?

@pinkfloyda Nice question, I want to know too.

@pinkfloyda I believe RandomListNode's hashCode() is not guaranteed to be unique. If two keys have the same hashCode(), then HashMap will try to resolve the collision by using separate chaining.
I've just condensed your solution by using for-loops here :)

public RandomListNode copyRandomList(RandomListNode head) {
    if (head == null) return null;
    Map<RandomListNode, RandomListNode> map = new HashMap<>();
    for (RandomListNode ptr = head; ptr != null; ptr = ptr.next) {
        map.put(ptr, new RandomListNode(ptr.label));
    }
    for (RandomListNode ptr = head; ptr != null; ptr = ptr.next) {
        map.get(ptr).next = map.get(ptr.next);
        map.get(ptr).random = map.get(ptr.random);
    }
    return map.get(head);
}

One assumption with this code is that all the nodes pointed to by the random pointer can be reached through the next pointer as well. I would clarify this assumption in a real interview before presenting this answer, or not make this assumption at all.

@ernieho, you don't have to check for null when using the get() method of HashMap<>, because it will return null when you pass null as the key (as long as there's no null key in the map).

@pinkfloyda Great question — when solving this problem I had the same concern. Here I read that "Typically, hashCode() just returns the object's address in memory if you don't override it." If this is right, the solution @jeantimex proposed is correct, but in an interview you should at least mention it when solving the problem and propose an alternative solution in case it is not true.

@jeantimex Very smart solution. Thank you for your sharing.

Thank you for your solution, it's very clear. Still, I have a question: in Java 8, a red-black tree was added to the implementation of HashMap<K, V>. So in the worst case, should we treat the time complexity of HashMap.put()/get() as O(log n)?
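For comparison, the same two-pass map idea from the Java solutions above can be written in a few lines of Python, with a `dict` playing the role of the `HashMap` (the class and function names here are illustrative, not from the thread). Note that Python's default object hashing is identity-based, which sidesteps the hashCode concern discussed above, and `dict.get(None)` returns `None`, mirroring the `map.get` null trick.

```python
class RandomListNode:
    def __init__(self, label):
        self.label = label
        self.next = None
        self.random = None

def copy_random_list(head):
    """Deep-copy a linked list whose nodes carry an extra `random` pointer."""
    if head is None:
        return None
    mapping = {}
    # Pass 1: copy every node reachable via `next`.
    node = head
    while node is not None:
        mapping[node] = RandomListNode(node.label)
        node = node.next
    # Pass 2: wire up next/random through the map; mapping.get(None)
    # yields None, so no explicit null checks are needed.
    node = head
    while node is not None:
        mapping[node].next = mapping.get(node.next)
        mapping[node].random = mapping.get(node.random)
        node = node.next
    return mapping[head]
```

Like the condensed Java version, this assumes every node the `random` pointers reference is also reachable through `next`.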
https://discuss.leetcode.com/topic/18086/java-o-n-solution
CC-MAIN-2018-05
refinedweb
570
50.73
It's time to write some Scala code. Before we start on the in-depth Scala tutorial, we put in two chapters that will give you the big picture of Scala, and most importantly, get you writing code. We encourage you to actually try out all the code examples presented in this chapter and the next as you go. The best way to start learning Scala is to program in it.

To run the examples in this chapter, you should have a standard Scala installation. To get one, go to scala-lang.org and follow the directions for your platform. You can also use a Scala plug-in for Eclipse, IntelliJ, or NetBeans, but for the steps in this chapter, we'll assume you're using the Scala distribution from scala-lang.org.[1]

If you are a veteran programmer new to Scala, the next two chapters should give you enough understanding to enable you to start writing useful programs in Scala. If you are less experienced, some of the material may seem a bit mysterious to you. But don't worry. To get you up to speed quickly, we had to leave out some details. Everything will be explained in a less "fire hose" fashion in later chapters. In addition, we inserted quite a few footnotes in these next two chapters to point you to later sections of the book where you'll find more detailed explanations.

The easiest way to get started with Scala is by using the Scala interpreter, an interactive "shell" for writing Scala expressions and programs. Simply type an expression into the interpreter and it will evaluate the expression and print the resulting value. The interactive shell for Scala is simply called scala. You use it by typing scala at a command prompt:[2]

  $ scala
  Welcome to Scala version 2.7.2.
  Type in expressions to have them evaluated.
  Type :help for more information.
  scala>

After you type an expression, such as 1 + 2, and hit enter:

  scala> 1 + 2

The interpreter will print:

  res0: Int = 3

This line includes:
* an automatically generated name to refer to the computed value (res0, for "result 0"),
* a colon (:), followed by the type of the expression (Int),
* an equals sign (=),
* and the value resulting from evaluating the expression (3).

The type Int names the class Int in the package scala.
Packages in Scala are similar to packages in Java: they partition the global namespace and provide a mechanism for information hiding.[3] Values of class Int correspond to Java's int values. More generally, all of Java's primitive types have corresponding classes in the scala package. For example, scala.Boolean corresponds to Java's boolean. scala.Float corresponds to Java's float. And when you compile your Scala code to Java bytecodes, the Scala compiler will use Java's primitive types where possible to give you the performance benefits of the primitive types.

The resX identifier may be used in later lines. For instance, since res0 was set to 3 previously, res0 * 3 will be 9:

  scala> res0 * 3
  res1: Int = 9

To print the necessary, but not sufficient, Hello, world! greeting, type:

  scala> println("Hello, world!")
  Hello, world!

The println function prints the passed string to the standard output, similar to System.out.println in Java.

Scala has two kinds of variables, vals and vars. A val is similar to a final variable in Java. Once initialized, a val can never be reassigned. A var, by contrast, is similar to a non-final variable in Java. A var can be reassigned throughout its lifetime. Here's a val definition:

  scala> val msg = "Hello, world!"
  msg: java.lang.String = Hello, world!

This statement introduces msg as a name for the string "Hello, world!". The type of msg is java.lang.String, because Scala strings are implemented by Java's String class. If you're used to declaring variables in Java, you'll notice one striking difference here: neither java.lang.String nor String appear anywhere in the val definition. This example illustrates type inference, Scala's ability to figure out types you leave off. In this case, because you initialized msg with a string literal, Scala inferred the type of msg to be String. In contrast to Java, where you specify a variable's type before its name, in Scala you specify a variable's type after its name, separated by a colon.
For example:

  scala> val msg2: java.lang.String = "Hello again, world!"
  msg2: java.lang.String = Hello again, world!

Or, since java.lang types are visible with their simple names[4] in Scala programs, simply:

  scala> val msg3: String = "Hello yet again, world!"
  msg3: String = Hello yet again, world!

Going back to the original msg, now that it is defined, you can use it as you'd expect, for example:

  scala> println(msg)
  Hello, world!

What you can't do with msg, given that it is a val, not a var, is reassign it.[5] For example, see how the interpreter complains when you attempt the following:

  scala> msg = "Goodbye cruel world!"
  <console>:5: error: reassignment to val
         msg = "Goodbye cruel world!"
             ^

If reassignment is what you want, you'll need to use a var, as in:

  scala> var greeting = "Hello, world!"
  greeting: java.lang.String = Hello, world!

Since greeting is a var not a val, you can reassign it later. If you are feeling grouchy later, for example, you could change your greeting to:

  scala> greeting = "Leave me alone, world!"
  greeting: java.lang.String = Leave me alone, world!

To enter something into the interpreter that spans multiple lines, just keep typing after the first line. If the code you typed so far is not complete, the interpreter will respond with a vertical bar on the next line.

  scala> val multiLine =
       | "This is the next line."
  multiLine: java.lang.String = This is the next line.

If you realize you have typed something wrong, but the interpreter is still waiting for more input, you can escape by pressing enter twice:

  scala> val oops =
       |
       |
  You typed two blank lines.  Starting a new command.

  scala>

In the rest of the book, we'll leave out the vertical bars to make the code easier to read (and easier to copy and paste from the PDF eBook into the interpreter).
Now that you've worked with Scala variables, you'll probably want to write some functions. Here's how you do that in Scala:

  scala> def max(x: Int, y: Int): Int = { if (x > y) x else y }
  max: (Int,Int)Int

Function definitions start with def. The function's name, in this case max, is followed by a comma-separated list of parameters in parentheses. A type annotation must follow every function parameter, preceded by a colon, because the Scala compiler (and interpreter, but from now on we'll just say compiler) does not infer function parameter types. In this example, the function named max takes two parameters, x and y, both of type Int. After the close parenthesis of max's parameter list you'll find another ": Int" type annotation. This one defines the result type of the max function itself.[6] Following the function's result type is an equals sign and pair of curly braces that contain the body of the function. In this case, the body contains a single if expression, which selects either x or y, whichever is greater, as the result of the max function. As demonstrated here, Scala's if expression can result in a value, similar to Java's ternary operator. For example, the Scala expression "if (x > y) x else y" behaves similarly to "(x > y) ? x : y" in Java. The equals sign that precedes the body of a function hints that in the functional world view, a function defines an expression that results in a value. The basic structure of a function is illustrated in Figure 2.1.

Sometimes the Scala compiler will require you to specify the result type of a function. If the function is recursive,[7] for example, you must explicitly specify the function's result type. In the case of max however, you may leave the result type off and the compiler will infer it.[8] Also, if a function consists of just one statement, you can optionally leave off the curly braces.
Thus, you could alternatively write the max function like this:

  scala> def max2(x: Int, y: Int) = if (x > y) x else y
  max2: (Int,Int)Int

Once you have defined a function, you can call it by name, as in:

  scala> max(3, 5)
  res6: Int = 5

Here's the definition of a function that takes no parameters and returns no interesting result:

  scala> def greet() = println("Hello, world!")
  greet: ()Unit

When you define the greet() function, the interpreter will respond with greet: ()Unit. "greet" is, of course, the name of the function. The empty parentheses indicate the function takes no parameters. And Unit is greet's result type. A result type of Unit indicates the function returns no interesting value. Scala's Unit type is similar to Java's void type, and in fact every void-returning method in Java is mapped to a Unit-returning method in Scala. Methods with the result type of Unit, therefore, are only executed for their side effects. In the case of greet(), the side effect is a friendly greeting printed to the standard output. In the next step, you'll place Scala code in a file and run it as a script.

If you wish to exit the interpreter, you can do so by entering :quit or :q.

  scala> :quit
  $

Although Scala is designed to help programmers build very large-scale systems, it also scales down nicely to scripting. A script is just a sequence of statements in a file that will be executed sequentially. Put this into a file named hello.scala:

  println("Hello, world, from a script!")

Then run:[9]

  $ scala hello.scala

And you should get yet another greeting:

  Hello, world, from a script!

Command line arguments to a Scala script are available via a Scala array named args. In Scala, arrays are zero based, and you access an element by specifying an index in parentheses. So the first element in a Scala array named steps is steps(0), not steps[0], as in Java.
To try this out, type the following into a new file named helloarg.scala:

  // Say hello to the first argument
  println("Hello, "+ args(0) +"!")

Then run:

  $ scala helloarg.scala planet

In this command, "planet" is passed as a command line argument, which is accessed in the script as args(0). Thus, you should see:

  Hello, planet!

Note that this script included a comment. The Scala compiler will ignore characters between // and the next end of line and any characters between /* and */. This example also shows Strings being concatenated with the + operator. This works as you'd expect. The expression "Hello, "+"world!" will result in the string "Hello, world!".[10]

To try out a while, type the following into a file named printargs.scala:

  var i = 0
  while (i < args.length) {
    println(args(i))
    i += 1
  }

Although the examples in this section help explain while loops, they do not demonstrate the best Scala style. In the next section, you'll see better approaches that avoid iterating through arrays with indexes. If you run this script with the command:

  $ scala printargs.scala Scala is fun

You'll see:

  Scala
  is
  fun

For even more fun, type the following code into a new file with the name echoargs.scala:

  var i = 0
  while (i < args.length) {
    if (i != 0)
      print(" ")
    print(args(i))
    i += 1
  }
  println()

This version prints all the arguments on a single line, separated by spaces. Notice that it contains no semicolons: although you didn't need to write any of them, Scala does use semicolons to separate statements as in Java, except that in Scala the semicolons are very often optional, giving some welcome relief to your right little finger. If you had been in a more verbose mood, therefore, you could have written the echoargs.scala script as follows:

  var i = 0;
  while (i < args.length) {
    if (i != 0) {
      print(" ");
    }
    print(args(i));
    i += 1;
  }
  println();
Scala enables you to program imperatively, but as you get to know Scala better, you'll likely often find yourself programming in a more functional style. In fact, one of the main aims of this book is to help you become as comfortable with the functional style as you are with imperative a function literal that takes one parameter named arg. The body of the function is println(arg). If you type the above code into a new file named pa.scala, and execute with the command: $ scala pa.scala Concise is nice You should see: Concise is nice In the previous example, the Scala interpreter infers the type of arg to be String, since String is the element type of the array on which you're calling foreach. If you'd prefer to be more explicit, you can mention the type name, but when you do you'll need to wrap the argument portion in parentheses (which is the normal form of the syntax anyway): args.foreach((arg: String) => println(arg)) Running this script has the same behavior as the previous one. If you're in the mood for more conciseness instead of more explicitness, you can take advantage of a special shorthand in Scala. If a function literal consists of one statement that takes a single argument, you need not explicitly name and specify the argument.[11] Thus, the following code also works: args.foreach(println) To summarize, the syntax for a function literal is a list of named parameters, in parentheses, a right arrow, and then the body of the function. This syntax is illustrated in Figure 2.2. Now, by this point you may be wondering what happened to those trusty for loops you have been accustomed to using in imperative languages such as Java or C. In an effort to guide you in a functional direction, only a functional relative of the imperative for (called a for expression) is available in Scala. While you won't see their full power and expressiveness until you reach (or peek ahead to) Section 7.3, we'll give you a glimpse here. 
In a new file named forargs.scala, type the following:

  for (arg <- args)
    println(arg)

The parentheses after the "for" contain arg <- args.[12] To the right of the <- symbol is the familiar args array. To the left of <- is "arg", the name of a val, not a var. (Because it is always a val, you just write "arg" by itself, not "val arg".) Although arg may seem to be a var, because it will get a new value on each iteration, it really is a val: arg can't be reassigned inside the body of the for expression. Instead, for each element of the args array, a new arg val will be created and initialized to the element value, and the body of the for will be executed. If you run the forargs.scala script with the command:

  $ scala forargs.scala for arg in args

You'll see:

  for
  arg
  in
  args

Scala's for expression can do much more than this, but this example is enough to get you started. We'll show you more about for in Section 7.3 and Chapter 23.

In this chapter, you learned some Scala basics and, hopefully, took advantage of the opportunity to write a bit of Scala code. In the next chapter, we'll continue this introductory overview and get into more advanced topics.

[1] We tested the examples in this book with Scala version 2.7.2.
[2] If you're using Windows, you'll need to type the scala command into the "Command Prompt" DOS box.
[3] If you're not familiar with Java packages, you can think of them as providing a full name for classes. Because Int is a member of package scala, "Int" is the class's simple name, and "scala.Int" is its full name. The details of packages are explained in Chapter 13.
[4] The simple name of java.lang.String is String.
[5] In the interpreter, however, you can define a new val with a name that was already used before. This mechanism is explained in Section 7.7.
[6] In Java, the type of the value returned from a method is its return type. In Scala, that same concept is called result type.
[7] A function is recursive if it calls itself.
[8] Nevertheless, it is often a good idea to indicate function result types explicitly, even when the compiler doesn't require it. Such type annotations can make the code easier to read, because the reader need not study the function body to figure out the inferred result type.
[9] You can run scripts without typing "scala" on Unix and Windows using a "pound-bang" syntax, which is shown in Appendix A.
[10] You can also put spaces around the plus operator, as in "Hello, " + "world!". In this book, however, we'll leave the space off between `+' and string literals.
[11] This shorthand, called a partially applied function, is described in Section 8.6.
[12] You can say "in" for the <- symbol. You'd read for (arg <- args), therefore, as "for arg in args."
http://www.artima.com/pins1ed/first-steps-in-scalaP.html
How To Implement Authentication In Next.js With Auth0 “Authentication” is the action of validating that a user is who he or she claims to be. We usually do this by implementing a credentials system, like user/password, security questions, or even facial recognition. “Authorization” determines what a user can (or can’t) do. If we need to handle authentication and authorization in our web application, we will need a security platform or module. We can develop our own platform, implement it, and maintain it. Or we can take the advantage of existing authentication and authorization platforms in the market that are offered as services. When evaluating whether it’s better for us to create our own platform, or to use a third-party service, there are some things that we should consider: - Designing and creating authentication services is not our core skill. There are people working specially focused on security topics that can create better and more secure platforms than us; - We can save time relying on an existing authentication platform and spend it adding value to the products and services that we care about; - We don’t store sensitive information in our databases. We separate it from all the data involved in our apps; - The tools third-party services offer have improved usability and performance, which makes it easier for us to administrate the users of our application. Considering these factors, we can say that relying on third-party authentication platforms can be easier, cheaper, and even more secure than creating our own security module. In this article, we will see how to implement authentication and authorization in our Next.js applications using one of the existing products in the market: Auth0. What Is Auth0? It allows you to add security to apps developed using any programming language or technology. 
“Auth0 is a flexible, drop-in solution to add authentication and authorization services to your applications.” — Dan Arias, auth0.com Auth0 has several interesting features, such as: - Single Sign-On: Once you log into an application that uses Auth0, you won’t have to enter your credentials again when entering another one that also uses it. You will be automatically logged in to all of them; - Social login: Authenticate using your preferred social network profile; - Multi-Factor Authentication; - Multiple standard protocols are allowed, such as OpenID Connect, JSON Web Token, or OAuth 2.0; - Reporting and analytics tools. There is a free plan that you can use to start securing your web applications, covering up to 7000 monthly active users. You will start paying when the amount of users increases. Another cool thing about Auth0 is that we have a Next.js SDK available to use in our app. With this library, created especially for Next.js, we can easily connect to the Auth0 API. Auth0 SDK For Next.js As we mentioned before, Auth0 created (and maintains) a Next.js focused SDK, among other SDKs available to connect to the API using various programming languages. We just need to download the NPM package, configure some details about our Auth0 account and connection, and we are good to go. This SDK gives us tools to implement authentication and authorization with both client-side and server-side methods, using API Routes on the backend and React Context with React Hooks on the frontend. Let’s see how some of them work in an example Next.js application. Example Next.js App Using Auth0 Let’s go back to our previous video platform example, and create a small app to show how to use Auth0 Next.js SDK. We will set up Auth0’s Universal Login. We will have some YouTube video URLs. They will be hidden under an authentication platform. Only registered users will be able to see the list of videos through our web application. 
Note: This article focuses on the configuration and use of Auth0 in your Next.js application. We won't get into details like CSS styling or database usage. If you want to see the complete code of the example app, you can go to this GitHub repository.
Create Auth0 Account And Configure App Details
First of all, we need to create an Auth0 account using the Sign Up page. After that, let's go to the Auth0 Dashboard. Go to Applications and create a new app of type "Regular Web Applications". Now let's go to the Settings tab of the application and, under the Application URIs section, configure the following details and save the changes:
- Allowed Callback URLs: add our application's base URL followed by /api/auth/callback
- Allowed Logout URLs: add our application's base URL
By doing this, we are configuring the URL where we want to redirect the users after they log in to our site (Callback), and the URL where we redirect the users after they log out (Logout). We should add the production URLs when we deploy the final version of our app to the hosting server.
Auth0 Dashboard has many configurations and customizations we can apply to our projects. We can change the type of authentication we use, the login/sign-up page, the data we request for the users, enable/disable new registrations, configure users' databases, and so on.
Install And Configure The Auth0 Next.js SDK
Let's install the Auth0 Next.js SDK in our app:
npm install @auth0/nextjs-auth0
Or
yarn add @auth0/nextjs-auth0
Now, in our .env.local file (or the environment variables menu of our hosting platform), let's add the SDK's configuration variables: AUTH0_SECRET, AUTH0_BASE_URL, AUTH0_ISSUER_BASE_URL, AUTH0_CLIENT_ID, and AUTH0_CLIENT_SECRET, taking the client ID, client secret, and domain values from the application's Settings tab. If you want more configuration options, you can take a look at the docs.
Create the Dynamic API Route
Next.js offers a way to create serverless APIs: API Routes. With this feature, we can create code that will be executed in every user request to our routes. We can define fixed routes, like /api/index.js. But we can also have dynamic API routes, with params that we can use in our API routes code, like /api/blog/[postId].js.
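Before wiring up the real SDK, it can help to see the dispatch idea a catch-all route enables. The sketch below is framework-free plain JavaScript with made-up handler names; it only illustrates how one file can answer several /api/auth/* routes, not what the SDK actually does internally:

```javascript
// Minimal sketch of the dispatch pattern behind a catch-all auth route.
// All handler names here are illustrative -- the real SDK does much more.
function makeAuthHandler() {
  const actions = {
    login: () => 'redirect to Auth0 Universal Login',
    logout: () => 'clear session and redirect',
    callback: () => 'exchange code for tokens',
    me: () => 'return the user profile as JSON',
  };
  // Next.js exposes the catch-all segment as an array, e.g. ['login']
  return function handler(route) {
    const action = actions[route[0]];
    if (!action) return { status: 404, body: 'Not Found' };
    return { status: 200, body: action() };
  };
}

const handler = makeAuthHandler();
console.log(handler(['login']).body);     // redirect to Auth0 Universal Login
console.log(handler(['unknown']).status); // 404
```

One exported function receiving the route segments is enough to serve login, logout, callback, and profile requests, which is why the SDK asks for a single catch-all file.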
Let's create the file /pages/api/auth/[...auth0].js, which will be a dynamic API route. Inside of the file, let's import the handleAuth method from the Auth0 SDK, and export the result:

import { handleAuth } from '@auth0/nextjs-auth0';

export default handleAuth();

This will create and handle the following routes:
/api/auth/login To perform login or sign up with Auth0.
/api/auth/logout To log the user out.
/api/auth/callback To redirect the user after a successful login.
/api/auth/me To get the user profile information.
And that would be the server-side part of our app. If we want to log in to our application or sign up for a new account, we should visit /api/auth/login. We should add a link to that route in our app. Same for logging out from our site: add a link to /api/auth/logout.
Add The UserProvider Component
To share the authentication state with our pages, the SDK exposes a UserProvider component based on React Context. Let's wrap our pages with it in the pages/_app.js file:

import { UserProvider } from '@auth0/nextjs-auth0';

export default function App({ Component, pageProps }) {
  return (
    <UserProvider>
      <Component {...pageProps} />
    </UserProvider>
  );
}

We have a React hook useUser that accesses the authentication state exposed by UserProvider. We can use it, for instance, to create a kind of welcome page. Let's change the code of the pages/index.js file:

import { useUser } from "@auth0/nextjs-auth0";

export default () => {
  const { user, error, isLoading } = useUser();

  if (isLoading) return <div>Loading...</div>;
  if (error) return <div>{error.message}</div>;

  if (user) {
    return (
      <div>
        <h2>{user.name}</h2>
        <p>{user.email}</p>
        <a href="/api/auth/logout">Logout</a>
      </div>
    );
  }

  return <a href="/api/auth/login">Login</a>;
};

Take into consideration that our web application allows any person to sign up for an account, using the Auth0 platform. The user can also re-use an existing Auth0 account, as we're implementing Universal Login. We can create our own registration page to request more details about the user or add payment information to bill them monthly for our service. We can also use the methods exposed in the SDK to handle authorization in an automatic way.
Conclusion
In this article, we saw how to secure our Next.js applications using Auth0, an authentication and authorization platform.
We evaluate the benefits of using a third-party service for the authentication of our web applications compared to creating our own security platform. We created an example Next.js app and we secured it using Auth0 free plan and Auth0 Next.js SDK. If you want to deploy an Auth0 example application to Vercel, you can do it here. Further Reading And Resources - Auth0 Next.js SDK GitHub repository, Auth0, GitHub - “The Ultimate Guide To Next.js Authentication With Auth0,” Sandrino Di Mattia, Auth0 Blog In our example app, we used server-side rendering, with API routes and a serverless approach. If you’re using Next.js for a static site, or a custom server to host your app, this article has some details about how to implement authentication. - “New Universal Login Experience,” Auth0 Universal Login, Auth0 Docs - “Centralized Universal Login vs. Embedded Login,” Auth0 Universal Login, Auth0 Docs
https://www.smashingmagazine.com/2021/05/implement-authentication-nextjs-auth0/
Learning D Part 1: IDE and Libraries by Malte Skarupke. The code looks simple: import std.stdio; void main() { writeln("Hello World!"); } The problem is running this. D is a language that had a lot of excitement behind it a couple years ago, which is now much diminished. So just finding a compiler and IDE proves difficult. I wanted to use the LLVM D compiler, but that one is broken in the current version of Ubuntu (12.04). So I use DMD instead. Next I want an IDE. You will find many dead or dying IDE’s. Many of them haven’t been updated in years or only support the old D version 1. After trying many different programs I ended up with Code::Blocks. It doesn’t work out of the box, but at least it has support for debugging, which others don’t. I got myself the newest “nightly” build and created a D project. By default you can’t build, because the compiler is set up incorrectly. Yay. You have to go into the advanced options and explain to Code::Blocks how to use the current version of dmd. But after that, I finally had a “Hello, World!” on the screen. The next thing for me was to try to create an OpenGL context. I again had the problem of finding many dead or dying libraries, until I eventually came across Derelict, which gives easy access to the SDL and OpenGL. With all of that, I can finally start building a real application. So I copy+paste some code from the Internet, and I have a yellow square on the screen. This has taken much longer than it should have, mainly because the D community seems to be in a slow decline. Next time I will actually program something useful. Then I’ll see why so few people seem to stick with D. Hopefully the problem is just that the community is too small to grow properly, and not something with the language itself. You need a wrapper to call C code from D? You can’t just call the functions directly? 
I would’ve thought the process of taking a C header and turning it into D declarations would be mostly mechanical… You can call the functions directly but you need to declare them, because D can’t include C headers. There are instructions about how to convert C headers to D modules in the documentation and DigitalMars provides a tool to do it automatically. But it’s still nice that a library did the work for me. And when I was looking for libraries to open a OpenGL window I was very frustrated after many were either broken or outdated. So when I tried derelict I was just happy when it worked right away. I didn’t even stop to think that it’s just a small wrapper.
https://probablydance.com/2012/10/03/learning-d-part-1-ide-and-libraries/?replytocom=49
There is a new function since Visual Studio 2012, which means, that it is also available in the recently released Visual Studio 2013. This pretty tiny, but really cool feature allows you to generate C# classes based on XML or JSON input data. I am not only going to show you where the function hides, I will also show you an example to make sure you realize the power behind the functionality. Well, to be honest, the only thing you’ll gain is time. But isn’t it always about efficiency?! Yes it is, there we go with the example: First of all we need some JSON or XML. I decided to take the following JSON as an example input: { "library": { "books": { "author": { "-name": "Jon Skeet", "book": [ { "title": "C# in Depth, Third Edition" }, { "title": "C# in Depth, Second Edition" }, { "title": "C# in Depth" } ] } } } } Let’s generated a bunch of C# classes from this JSON. It is really simple if you know where the particular function hides within the menus of Visual Studio. Go to EDIT, Paste Special, Paste JSON as the following screenshot demonstrates: What do we get? We get a full object graph, model or whatever you want to call it. As above, I will show you the result to make clear how it looks like: public class Rootobject { public Library library { get; set; } } public class Library { public Books books { get; set; } } public class Books { public Author author { get; set; } } public class Author { public string name { get; set; } public Book[] book { get; set; } } public class Book { public string title { get; set; } } What do you think about this feature? Do you think this is awesome?! I do, because if you have a large file that you need to access through a C# program, you’ll save much time. I hope this feature will save you at least the time you needed reading this blog post. Follow Me!
http://www.claudiobernasconi.ch/2013/11/14/paste-special-a-hidden-feature-revealed/
There are a few things we need to do before the watchdog timer (WDT) runs out. To do this we need to change the bits in the WDTCTL register to set up the WDT, otherwise it may reboot and get stuck in an endless loop. Most of us will disable the WDT by using WDTCTL = WDTPW + WDTHOLD; now you're wondering how this disables the watchdog timer. WDTPW needs to be set; this is the password used to access the WDT registers (they are password protected). WDTHOLD stops the WDT and disables it. This will be used for most applications and programs, but the WDT is very useful; if you need other WDT code examples, you can download them at TI.com. This should prevent the chip from going into an endless loop at start up. *This needs to be done early in the program; if you have a lot of variables to set up, you may want to initialize them farther down in the code if you have trouble with the WDT. Next we will set up the pins, which is pretty easy to do. This part of the code just sets the pins for input or output and active or not. There are quite a few registers to deal with, but we will only go over the main ones: PxIN, PxOUT, PxREN, PxDIR, PxSEL. The other ones, PxIE, PxIES, and PxIFG, are interrupt settings and we will go over those later on. The P1xxx registers are for the first 8 (0-7) pins.
PxSEL – selects the pin for use with a hardware peripheral
PxDIR – switches the pin between input and output (0 for input, 1 for output)
PxIN – reflects whether the input is high or low; read this register to check an input pin
PxOUT – selects whether the output is high or low; default is low
PxREN – selects whether a pull-up or pull-down resistor is used; pull-up or pull-down is chosen by the PxOUT register
Now that we know what the registers are, we need a little binary along with a little bit of bitwise operations.
Bitwise operators
- | OR – 1010 | 1011 = 1011; if either bit is a 1 it returns a 1, if both bits are 0 the value is 0 (1|0=1, 0|1=1, 0|0=0)
- & AND – 1010 & 1011 = 1010; if both bits equal 1 it returns a 1, if one bit is 0 and the other is 1 it returns 0, if both bits are 0 it returns 0 (1&1=1, 1&0=0, 0&0=0)
- ~ complement – ~1010 = 0101; switches the bits to the opposite value (~0=1, ~1=0)
- ^ XOR – 10101010 ^ 01001011 = 11100001; if either bit is 1 (but not both) it returns 1, if both bits are 0 or both are 1 it returns 0 (1^0=1, 0^1=1, 1^1 or 0^0 = 0)
- << shift left – 1010 << 2 = 1000 (keeping four bits); shifts the bits to the left by the given number
- >> shift right – 1010 >> 2 = 0010; shifts the bits to the right by the given number
Since we now have a bitwise reference, we can show how it applies to the pins. At first I was having a hard time understanding bitwise, but once you figure out it's just moving 1s and 0s around it becomes a great tool. So let's get started with how to set up the pins.
P1DIR |= "register value"; valid values are between 0-255, or 0x00 to 0xff
Example code: P1DIR |= 0x0c; this sets up P1.2 and P1.3 for output.
To make things even easier, the MSPGCC headers have an easy way to set up pins without knowing the ins and outs of other numbering systems. MSPGCC uses #defines for certain bits, as in BIT0 = 0x01 = 00000001 = 1.
Another example is BIT6 = 0x40 = 01000000 = 64; hopefully this is making a little more sense now.
Example code using BITx: P1DIR |= BIT0 + BIT6; This activates P1.6 (pin 7) and P1.0 (pin 1) as outputs (pin numbers may not correspond to the physical pin layout). So you may be asking why I am using an OR operator. I have no idea; all the examples from IAR Workbench set the pins this way, but if you know of a reason let me know. LOL. Now let's put that together with the rest of the pin settings.
P1DIR |= BIT5; // 0 = in, 1 = out; sets P1.5 to output
P1OUT |= BIT5; // 1 = high, 0 = low; sets P1.5 to high output
This will get P1.5 (pin 6) to go high, or if you hook it up to an LED it will turn on. At the end we will have a sample program that puts this all together, well, most of it.
The full code to set a pin up for input or output:
P1DIR |= BITx + BITx; // this will get basic functionality to your pins; you will need more code to make it fully functional. We still need to set up the power mode and any other peripherals.
P1OUT |= BITx + BITx; // output level (or pull direction when PxREN is used)
P1IN & BITx // read this to check whether an input pin is high
The different power modes give you a lot of options when it comes to low-power applications; they also affect which clocks are active and whether the CPU is on or off (some peripherals can run without the CPU). To set the different modes you call _bis_SR_register("power modes + other settings") *this is for mspgcc 4.x. The different power modes are LPM0-LPM4 and AM. Each one has its advantage for power saving, but I'm not the one for power efficiency.
- AM – CPU active, all clocks are active
- LPM0 – CPU and MCLK are disabled; SMCLK & ACLK are active
- LPM1 – CPU and MCLK are disabled; DCO and DC generator are disabled unless the DCO is used for SMCLK; ACLK is active
- LPM2 – CPU, DCO, SMCLK, and MCLK are disabled; ACLK and the DC generator are active
- LPM3 – ACLK is active; CPU, DCO, SMCLK, MCLK, and DC generator are disabled
- LPM4 – CPU and all clocks are disabled
Now I will leave it up to you which power mode to use for your application.
This now finishes up the quick walk-through to get some blinking lights from your MSP430 chip. I will continue with Timer_A, ADC10, and other peripherals. Here is a quick program to blink lights with software so you can start experimenting with your chip.

#include <msp430x20x2.h> // header for chip 2231

void main(void)
{
    WDTCTL = WDTPW + WDTHOLD; // disable WDT
    P1DIR |= BIT0; // sets pin 1.0 for output
    for (;;)
    {
        volatile unsigned int i;
        P1OUT ^= BIT0; // toggles pin 1 with exclusive or (XOR)
        i = 5000;
        do i--;
        while (i != 0);
    } // end of for()
} // end of main

I'm pretty new to this, but the reason we use the "OR equals" ( |= ) operator is to avoid changing any bits we aren't intentionally setting. I imagine it's not too important at the beginning of the program, where everything is at the defaults, but later you would want to avoid unsetting the bits you set earlier. -Doc
You're missing the in your example program after the #include. Just a heads up. It should be fixed! thanks!!
In current gcc, you can (should) include msp430.h instead of the specific header and use the -m switches (in a Makefile) to tell the compiler which chip you're using. It will find the right headers automatically.
great job
http://justinstech.org/2010/08/msp430-basic-codingprograming-part-1/
Well, I was wrong, the encryption was not a problem. I cannot even get the size of the entry. It always returns -1. I've managed to google out that it's because the size is written after the entry.... Yeah, I did it.. I seem to have found the problem, it's my overtiredness. I forgot the XML file is XOR encrypted, that's why it can't read it Hi, I've spent many hours trying to find a solution to this problem, but all in vain. Here's the deal: I have a ZipInputStream, which I've even managed to get a ZipEntry from. I can get the size... I've just found the problem myself. It was the question mark in my request string. This request: String request = "post_var=1"; will work as a charm Hi, I'm having problems forming an http request in java. Here's what I do: I've got a simple php (java.php) file on my apache server, here's it's code: <?php $post_var =... If your purpuse is to understand the principles of OOP, I'd recommend you to start it off with ActionScript 3. It's syntax is very similar to Java but it's much easier to understand if you never... I found what the problem was. Such a stupid blunder of mine :D I had to set up look and feel before creating a button: package differenttests; import java.awt.Point; import...
Hi, I'm not writing any video players right now, it's just a pure curiosity :) So, what if I want to create one for Android, how can I find codecs for it? And how do I use them in my Java code?... Hi all, I'm still trying to understand the logic of Java, and it feels like I'm getting stuck more and more :confused: Right now I wanna create a JFrame and add just one button to it. If I... Very good reply. As I'm just a beginner in Java, and I didn't get used to using any of these approaches, I'll follow your advice and try to get myself used to doing everything the right way :) Isn't there a way to check what type of object it is? If I do it in AS3 I typically use something like this: if (myArray[0] is MovieClip) { trace("I found a movie clip object"); }... Hi all, I've found how to make content assist appear automatically when, e.g. I type the name of some class instance. It will show all public methods it contains and so on. But I want it to show... Ah, I got it now. Thanks a lot! Seems like I couldn't get it at first 'cause I got use to programming in ActionScript 3 which can push any types of object into the same array, so there's no need... Ok, but what if you just extend guessinggame without any abstract methods, and just create those methods from scratch in normal and test game classes? Wouldn't it do the same job? Hi, first of all I'd like to wish a Merry Christmas and a Happy New Year to all who celebrates these holidays. And now, to my question. I'm watching video tutorials from bucky, and I just can't... When I first installed Java to Program Files, it worked fine. But then I've read a recommendation (in one book) to install it into the root directory of disk C. I thought I would be a piece of cake... Thanks for the reply :) But dash before jar didn't help either. The problem's still there I had a problem with path to jdk (which looked similar to this) when I first installed Eclipse,... Hey Everyone who reads this. 
I'm kind of new to Java, and right now I'm learning nuts and bolts of compilation and running apps process. So here's the problem. I have JRE and jdk1.6.0_20...
http://www.javaprogrammingforums.com/search.php?s=049e2ffafd218217ad3f66f198dbcd34&searchid=204153
A. When a module is used as a container for objects, it's called a namespace. Ruby's Math module is a good example of a namespace: it provides an overarching structure for constants like Math::PI and methods like Math::log, which would otherwise clutter up the main Kernel namespace. We cover this most basic use of modules in Recipes 9.5 and 9.7. Modules are also used to package functionality for inclusion in classes. The Enumerable module isn't supposed to be used on its own: it adds functionality to a class like Array or Hash. We cover the use of modules as packaged functionality for existing classes in Recipes 9.1 and 9.4. Module is actually the superclass of Class, so every Ruby class is also a module. Throughout this book we talk about using methods of Module from within classes. The same methods will work exactly the same way within modules. The only thing you can't do with a module is instantiate an object from it: Class.superclass # => Module Math.class # => Module Math.new # NoMethodError: undefined method `new' for Math:Module You want to create a class that derives from two or ...
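Pulling those points together, here is a short self-contained illustration of a module as a namespace and a module as packaged functionality (the names are invented for the example):

```ruby
# A module as a namespace: constants and module methods live under one roof,
# out of the way of Kernel.
module Geometry
  PI = 3.14159

  def self.circle_area(r)
    PI * r * r
  end
end

# A module as packaged functionality: mixed into a class with `include`.
module Describable
  def describe
    "#{self.class.name} (#{@x}, #{@y})"
  end
end

class Point
  include Describable
  def initialize(x, y)
    @x, @y = x, y
  end
end

puts Geometry.circle_area(2)    # roughly 12.566
puts Point.new(1, 2).describe   # Point (1, 2)
puts Geometry.is_a?(Module)     # true
# Geometry.new would fail: modules can't be instantiated
```

The last line is the one restriction mentioned above: Geometry responds like any other object, but because it is a Module and not a Class, there is no new to call.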
https://www.safaribooksonline.com/library/view/ruby-cookbook/0596523696/ch09.html
Recently I was working on a Java based program that hooked into an indexing process to make modifications to the data it was indexing. In this case, it was stripping out HTML formatting and removing unnecessary whitespace. Because this was a Java app, performance was dismal. Under normal circumstances I would have gone with a platform that would provide better native support for my platform, such as C, C++, or even a scripting language like Perl would have sufficed. However that was outside of the requirements for this project. So to improve performance, I needed a profiler to track down my performance bottlenecks. I tried the Eclipse Performance and Logging tools, only to be really disappointed with the results. And by disappointed, I mean that I received errors when running the Eclipse profiler so I couldn’t get any kind of results. So, my search for an alternative lead me to JRat. JRat is fairly easy to use. To Launch an application for profiling, you simply append an argument to the Java VM. From Eclipse, this can be done from the Run Dialog, under the Arguments tab. To demonstrate this, I am using the Prime Number example (not sure why I called it Factorize). The code I am using contains both the unoptimized prime number list and the optimized one. 
The code is below: package com.digiassn.blogspot; import java.util.ArrayList; import java.util.List; import java.util.Iterator; public class Factorize { private static int MAX_NUMBER = 10000; public boolean isPrime(int number) { if (number == 1) return false; for (int x = 2; x < number; x++) { if ((number % x) == 0) { return false; } } return true; } public List getFactors() { List factors = new ArrayList(); for (int x = 2; x < MAX_NUMBER; x++) { if (isPrime(x)) { factors.add(x); } } return factors; } public List getFactors2() { boolean [] list = new boolean[MAX_NUMBER]; List l = new ArrayList(); for (int x = 0; x < MAX_NUMBER; x++) { list[x] = true; } list[0] = false; list[1] = false; for (int x = 2; x < MAX_NUMBER; x++) { if (list[x]) { for (int y = (x * 2); y < MAX_NUMBER; y += x) { list[y] = false; } l.add(x); } } return l; } /** * @param args */ public static void main(String[] args) { Factorize f = new Factorize(); List l = f.getFactors(); List l2 = f.getFactors2(); for (int x = 0; x < l.size(); x++) { if (!((Integer)l.get(x)).equals((Integer)l2.get(x))) { System.out.println("Something didn't match" + l.get(x) + " " + l2.get(x)); } } System.out.println(l.size()); System.out.println(l2.size()); } } In the below screenshot, I have jRat installed under C:\Jrat. Figure 1. Adding JRat to your Eclipse Run That’s basically it, when you run the program, you will see a whole bunch of output from the console. Figure 2. Console output When run with Jrat, it will save its statistics to a file under the project folder that you will need to open in JRat in order to view your statistics. To run JRat to view statistics, you would run the following command: java -jar shiftone-jrat.jar Figure 3. The JRat Window. Above is a screenshot of the file generated by JRat opened and sorted by percentage of time spent in a method. I can see that unoptimized getFactors method is where the program spent a majority of its time, were the optimized getFactors2 barely spent any time at all. 
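You can sanity-check what the profiler reports by timing the two approaches directly. This is a rough standalone sketch (System.nanoTime is coarse, and the method names here are mine, not the ones from the Factorize class), but the gap is large enough to show up anyway:

```java
import java.util.ArrayList;
import java.util.List;

public class PrimeTiming {
    // Trial division -- the slow path the profiler flags.
    static List<Integer> slowPrimes(int max) {
        List<Integer> primes = new ArrayList<>();
        for (int n = 2; n < max; n++) {
            boolean prime = true;
            for (int d = 2; d < n; d++) {
                if (n % d == 0) { prime = false; break; }
            }
            if (prime) primes.add(n);
        }
        return primes;
    }

    // Sieve of Eratosthenes -- the optimized approach.
    static List<Integer> sievePrimes(int max) {
        boolean[] composite = new boolean[max];
        List<Integer> primes = new ArrayList<>();
        for (int n = 2; n < max; n++) {
            if (!composite[n]) {
                primes.add(n);
                for (int m = n * 2; m < max; m += n) composite[m] = true;
            }
        }
        return primes;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        List<Integer> a = slowPrimes(10000);
        long t1 = System.nanoTime();
        List<Integer> b = sievePrimes(10000);
        long t2 = System.nanoTime();
        System.out.println("trial division: " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("sieve:          " + (t2 - t1) / 1_000_000 + " ms");
        System.out.println("same result:    " + a.equals(b));
    }
}
```

Wall-clock timing like this is no substitute for a profiler once the code base grows, but it is a quick way to confirm that the hotspot the tool points at is real.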
To be fair, this doesn't offer nearly as much analysis as the Eclipse Profiler. The Eclipse Profiler has the option to test memory allocation sizes as well as execution time, and it provides some very nifty outputs, such as the ability to export class interactions into UML diagrams, as shown in the below screenshot. Figure 4. The Eclipse Profile Perspective. While I like JRat, I did find a few things I didn't like. The biggest one is that it isn't an Eclipse plug-in. While this is actually a pro as well as a con, I have to admit a certain amount of laziness on my part, and having to jump outside of Eclipse to view my results can be a little annoying. I suppose a plug-in could be built to view these results, however. Next, the release schedule is a little inconsistent. As of this writing, the last stable release was on 2007-09-11, and before that, it was 2006-07-31. Yikes. I hope the project is still viable. So if you get into a situation where the Eclipse profiler refuses to work with your application, you might want to give jRat a try. 6 comments: Giving it a go now... I managed to get the eclipse profiler working 6 months ago, but since then I've had no end of problems... Thanks for your post. It has inspired me to take a look at JRat under eclipse. I tried the same thing but it gave me an error Error occurred during initialization of VM agent library failed to init: instrument Failed to find Premain-Class manifest attribute in C:\Installer\shiftone-jrat-0.6\shiftone-jrat-0.6\dist\shiftone-jrat.jar Do you know about it?
cntrl+c cntrl+v if jdk 1.5 or above java -javaagent:shiftone-jrat.jar [your java ops] [main class] java -Xmx256M -jar shiftone-jrat.jar I also had the error: Error occurred during initialization of VM agent library failed to init: instrument Error opening zip file: shiftone-jrat.jar To resolve the error in the "Run Configurations" / arguments panel inside eclipse I explicitly set the "VM arguments" to the full path to my installation of the shiftone-jrat.jar as follows: -javaagent:"d:\tools\rjat\shiftone-jrat.jar" Hope this helps someone out there. But this way JRat will give a static view of the analysis as we have not done any such instrumentation of jars. What if we want to find the bottlenecks in the code in the actual environment when the application is running. I mean configuring JRat in such a way which involves instrumentation of jars and etc....
http://digiassn.blogspot.com/2008/01/java-using-jrat-under-eclipse.html
On Mon, Dec 03, 2007 at 11:53:04PM +0100, Jim Meyering wrote:
> "Daniel P. Berrange" <berrange redhat com> wrote:
> > This patch implements internal driver API for authentication callbacks
> > in the remote driver. It is basically a bunch of code to bridge from
> > the libvirt public API for auth/credentials and the SASL equivalent
> > API. The libvirt API is very close in style to the SASL API so it is
> > a fairly mechanical mapping.
>
> I have to start by admitting I've never used or even looked at
> policykit before.
>
> > diff -r 98599cfde033 src/libvirt.c
> > --- a/src/libvirt.c	Wed Nov 28 23:01:08 2007 -0500
> > +++ b/src/libvirt.c	Wed Nov 28 23:29:58 2007 -0500
> > @@ -62,6 +62,78 @@ static int initialized = 0;
> >  #define DEBUG0
> >  #define DEBUG(fs,...)
> >  #endif /* !ENABLE_DEBUG */
> > +
> > +static int virConnectAuthCallbackDefault(virConnectCredentialPtr cred,
> > +                                         unsigned int ncred,
> > +                                         void *cbdata ATTRIBUTE_UNUSED) {
> > +    int i;
> > +
> > +    for (i = 0 ; i < ncred ; i++) {
> > +        char buf[1024];
> > +        char *bufptr = buf;
> > +
> > +        printf("%s:", cred[i].prompt);
> > +        fflush(stdout);
>
> If printf or fflush fails, this should probably return -1.

Good point.

> > +        switch (cred[i].type) {
> > +        case VIR_CRED_USERNAME:
> > +        case VIR_CRED_AUTHNAME:
> > +        case VIR_CRED_ECHOPROMPT:
> > +        case VIR_CRED_REALM:
> > +            if (!fgets(buf, sizeof(buf), stdin)) {
> > +                return -1;
> > +            }
>
> A consistency nit: you might want to make EOF be treated the same as
> an empty name. Currently typing EOF to fgets (which then returns NULL)
> makes this code return -1, while entering an empty line doesn't.
> At least with passwords, I confirmed that cvs login treats ^D like
> the empty string.
>
> On the other hand, an empty name probably makes no sense in many
> applications.

Actually an empty name does make sense for things like REALM/AUTHNAME
as it potentially means use the default value. It's up to the underlying
SASL impl whether it likes empty names or not, so we should be treating
EOF the same as "".

> > +            if (buf[strlen(buf)-1] == '\n')
> > +                buf[strlen(buf)-1] = '\0';
> > +            break;
> > +
> > +        case VIR_CRED_PASSPHRASE:
> > +        case VIR_CRED_NOECHOPROMPT:
> > +            bufptr = getpass("");
>
> If getpass fails (it'd return NULL), return -1.
> Otherwise, the following strdup would segfault.

Yep, will fix.
https://www.redhat.com/archives/libvir-list/2007-December/msg00026.html
CC-MAIN-2014-23
refinedweb
341
63.8
On Mon, Sep 16, 2002 at 05:50:48PM -0700, Umar Qureshey wrote:
> Ok I tested filter.c and ncurses.c. In fact I launched ncurses from filter.
> It seems to be working fine. The stuff showed up on the screen.

Still guessing - looking for a clue...

Let's see - you don't have anything like strace on that platform (which could
show the actual system calls), right?

The only bug that I recall related to initialization in lynx is this chunk from
recent fixes - but it shouldn't prevent ncurses from initializing:

#if (!defined(WIN_EX) || defined(__CYGWIN__))	/* @@@ */
    if (LYscreen || lynx_called_initscr) {
	endwin();	/* stop curses */
	LYDELSCR();
    }
#endif

-- 
Thomas E. Dickey <address@hidden>

; To UNSUBSCRIBE: Send "unsubscribe lynx-dev" to address@hidden
http://lists.gnu.org/archive/html/lynx-dev/2002-09/msg00091.html
CC-MAIN-2015-06
refinedweb
120
77.53
Uni Links

A Mac OS implementation of the Avioli Unilinks plugin for Flutter.

These links are simply web-browser-like-links that activate your app and may contain information that you can use to load a specific section of the app or continue certain user activity from a website (or another app).

Universal Links are regular https links, thus if the app is not installed (or set up correctly) they'll load in the browser, allowing you to present a web page for further action, eg. install the app.

Make sure you read both the Installation and the Usage guides thoroughly, especially for App/Universal Links (the https scheme).

Installation

To use the plugin, add uni_links_macos as a dependency in your pubspec.yaml file. This package requires the original uni_links to work.

Permission

Mac OS requires you to declare links' permissions in a configuration file. The following steps are not Flutter specific, but platform specific. You might be able to find more in-depth guides elsewhere online, by searching about Universal Links or Custom URL schemas.

There are two kinds of links in Mac OS: "Universal Links" and "Custom URL schemes".

- Universal Links only work with the https scheme and require a specified host, entitlements and a hosted file - apple-app-site-association. Check the Guide links below.
- Custom URL schemes can have... any custom scheme and there is no host specificity, nor entitlements or a hosted file. The downside is that any app can claim any scheme, so make sure yours is as unique as possible, eg. hst0000001 or myIncrediblyAwesomeScheme.

You need to declare at least one of the two.

For Universal Links you need to add or create a com.apple.developer.associated-domains entitlement - either through Xcode or by editing (or creating and adding to Xcode) the ios/Runner/Runner.entitlements file.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">
<dict>
	<!-- ... other keys -->
	<key>com.apple.developer.associated-domains</key>
	<array>
		<string>applinks:[YOUR_HOST]</string>
	</array>
	<!-- ... other keys -->
</dict>
</plist>

This allows for your app to be started from links. For more information, read Apple's guide for Universal Links.

For Custom URL schemes you need to declare the scheme in ios/Runner/Info.plist (or through Xcode's Target Info editor, under URL Types):

<?xml ...>
<!-- ... other tags -->
<plist>
<dict>
	<!-- ... other tags -->
	<key>CFBundleURLTypes</key>
	<array>
		<dict>
			<key>CFBundleTypeRole</key>
			<string>Editor</string>
			<key>CFBundleURLName</key>
			<string>[ANY_URL_NAME]</string>
			<key>CFBundleURLSchemes</key>
			<array>
				<string>[YOUR_SCHEME]</string>
			</array>
		</dict>
	</array>
	<!-- ... other tags -->
</dict>
</plist>

This allows for your app to be started from YOUR_SCHEME://ANYTHING links. For a little more information, read Apple's guide for Inter-App Communication.

I strongly recommend watching the Apple WWDC 2015, session 509 - Seamless Linking to Your App to understand how Universal Links work (and are set up).

Usage

There are two ways your app will receive a link - from cold start and brought from the background. More on these after the example usage in More about app start from a link.

Initial Link (String)

Returns the link that the app was started with, if any.

import 'dart:async';
import 'dart:io';

import 'package:uni_links/uni_links.dart';
import 'package:flutter/services.dart' show PlatformException;

// ...

Future<Null> initUniLinks() async {
  // Platform messages may fail, so we use a try/catch PlatformException.
  try {
    String initialLink = await getInitialLink();
    // Parse the link and warn the user, if it is not correct,
    // but keep in mind it could be `null`.
  } on PlatformException {
    // Handle exception by warning the user their action did not succeed
    // return?
  }
}

// ...

Initial Link (Uri)

Same as the getInitialLink, but converted to a Uri.
// Uri parsing may fail, so we use a try/catch FormatException.
  try {
    Uri initialUri = await getInitialUri();
    // Use the uri and warn the user, if it is not correct,
    // but keep in mind it could be `null`.
  } on FormatException {
    // Handle exception by warning the user their action did not succeed
    // return?
  }
  // ... other exception handling like PlatformException

One can achieve the same by using Uri.parse(initialLink), which is what this convenience method does.

On change event (String)

Usually you would check the getInitialLink and also listen for changes.

import 'dart:async';
import 'dart:io';

import 'package:uni_links/uni_links.dart';

// ...

StreamSubscription _sub;

Future<Null> initUniLinks() async {
  // ... check initialLink

  // Attach a listener to the stream
  _sub = getLinksStream().listen((String link) {
    // Parse the link and warn the user, if it is not correct
  }, onError: (err) {
    // Handle exception by warning the user their action did not succeed
  });

  // NOTE: Don't forget to call _sub.cancel() in dispose()
}

// ...

On change event (Uri)

Same as the stream, but transformed to emit Uri objects. Usually you would check the getInitialUri and also listen for changes.

import 'dart:async';
import 'dart:io';

import 'package:uni_links/uni_links.dart';

// ...

StreamSubscription _sub;

Future<Null> initUniLinks() async {
  // ... check initialUri

  // Attach a listener to the stream
  _sub = getUriLinksStream().listen((Uri uri) {
    // Use the uri and warn the user, if it is not correct
  }, onError: (err) {
    // Handle exception by warning the user their action did not succeed
  });

  // NOTE: Don't forget to call _sub.cancel() in dispose()
}

// ...

More about app start from a link

getInitialLink will be either null, or the initial link, with which the app was started. Because of these two situations - you should always add a check for the initial link (or URI) and also subscribe for a Stream of links (or URIs).

Tools for invoking links

If you register a schema, say unilink, you could use these cli tools:

Android

You could do the below tasks within Android Studio.
Assuming you've installed Android Studio (with the SDK platform tools):

adb shell 'am start -W -a android.intent.action.VIEW -c android.intent.category.BROWSABLE -d "unilinks://host/path/subpath"'
adb shell 'am start -W -a android.intent.action.VIEW -c android.intent.category.BROWSABLE -d "unilinks://example.com/path/portion/?uid=123&token=abc"'
adb shell 'am start -W -a android.intent.action.VIEW -c android.intent.category.BROWSABLE -d "unilinks://example.com/?arr%5b%5d=123&arr%5b%5d=abc&addr=1%20Nowhere%20Rd&addr=Rand%20City%F0%9F%98%82"'

If you don't have adb in your path, but have the $ANDROID_HOME env variable, then use "$ANDROID_HOME"/platform-tools/adb ....

Note: Alternatively you could simply enter an adb shell and run the am commands in it.

Note: I use single quotes, because what follows the shell command is what will run in the emulator (or device) and shell metacharacters, such as question marks ( ? ) and ampersands ( & ), usually mean something different to your own shell.

adb shell communicates with the only available device (or emulator), so if you've got multiple devices you have to specify which one you want to run the shell in via:

- The only USB connected device - adb -d shell '...'
- The only emulated device - adb -e shell '...'

You could use adb devices to list currently available devices (similarly flutter devices does the same job).

iOS

Assuming you've got Xcode already installed:

/usr/bin/xcrun simctl openurl booted "unilinks://host/path/subpath"
/usr/bin/xcrun simctl openurl booted "unilinks://example.com/path/portion/?uid=123&token=abc"
/usr/bin/xcrun simctl openurl booted "unilinks://example.com/?arr%5b%5d=123&arr%5b%5d=abc&addr=1%20Nowhere%20Rd&addr=Rand%20City%F0%9F%98%82"

App Links or Universal Links

These types of links use https for the schema, thus you can use the above examples by replacing unilinks with https.

Contributing

For help on editing plugin code, view the documentation.

License

BSD 2-clause
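As an aside on those percent-escaped sample links (%5b, %20 and so on): they are plain URL encoding, and it can be handy to sanity-check what a given deep link will actually carry before wiring it into the app. A quick sketch using Python's standard library (Python is used here purely as a convenient scratchpad; the plugin itself is Dart):

```python
from urllib.parse import urlsplit, parse_qs

# One of the sample deep links from the adb examples above
link = "unilinks://example.com/?arr%5b%5d=123&arr%5b%5d=abc&addr=1%20Nowhere%20Rd"
parts = urlsplit(link)

print(parts.scheme)           # the custom scheme the app must have registered
print(parts.netloc)           # the host portion of the link
print(parse_qs(parts.query))  # query parameters, percent-escapes decoded
```

Running this shows that %5b/%5d decode to square brackets and %20 to a space, which is exactly what the app-side link handler will see after parsing.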
https://pub.dev/documentation/uni_links_macos/latest/
CC-MAIN-2021-10
refinedweb
1,218
58.48
Click is a Python package for writing command line interfaces. It produces beautiful documentation for you and lets you build command line interfaces in as little as one line of code. In short: it's awesome and can help take your programs to the next level. Here's how you can use it to liven up your Python projects.

Writing Command Line Programs Without Click

It's possible to write command line programs without using Click, but doing so requires more effort and lots more code. You need to parse command line arguments, perform validation, develop logic to handle different arguments, and build a custom help menu. Want to add a new option? You'll be modifying your help function then.

There's nothing wrong with writing your own code, and doing so is a great way to learn Python, but Click allows you to follow the "Don't Repeat Yourself" (DRY) principles. Without Click, you'll write code which is fragile and requires lots of maintenance whenever any changes happen.

Here's a simple command line interface coded without Click:

import sys
import random

def do_work():
    """ Function to handle command line usage"""
    args = sys.argv
    args = args[1:]  # First element of args is the file name

    if len(args) == 0:
        print('You have not passed any commands in!')
    else:
        for a in args:
            if a == '--help':
                print('Basic command line program')
                print('Options:')
                print('  --help  -> show this basic help menu.')
                print('  --monty -> show a Monty Python quote.')
                print('  --veg   -> show a random vegetable')
            elif a == '--monty':
                print('What\'s this, then? "Romanes eunt domus"? People called Romanes, they go, the house?')
            elif a == '--veg':
                print(random.choice(['Carrot', 'Potato', 'Turnip']))
            else:
                print('Unrecognised argument.')

if __name__ == '__main__':
    do_work()

These 27 lines of Python work well but are very fragile.
Any change you make to your program will need lots of other supporting code to change. If you change an argument name you'll need to update the help information. This code can easily grow out of control.

Here's the same logic with Click:

import click
import random

@click.command()
@click.option('--monty', default=False, help='Show a Monty Python quote.')
@click.option('--veg', default=False, help='Show a random vegetable.')
def do_work(monty, veg):
    """ Basic Click example will follow your commands"""
    if monty:
        print('What\'s this, then? "Romanes eunt domus"? People called Romanes, they go, the house?')
    if veg:
        print(random.choice(['Carrot', 'Potato', 'Turnip']))

if __name__ == '__main__':
    do_work()

This Click example implements the same logic in 16 lines of code. The arguments are parsed for you, and the help screen is generated.

This basic comparison shows how much time and effort you can save by using packages such as Click. While the command line interface may appear the same to the end user, the underlying code is simpler, and you'll save lots of time coding. Any changes or updates you write in the future will also take significantly less development time.

Getting Started With Click for Python

Before using Click, you may wish to configure a virtual environment. This will stop your Python packages conflicting with your system Python or other projects you may be working on. You could also try Python in your browser if you want to play around with Python and Click.

Finally, make sure you're running Python version 3. It's possible to use Click with Python version 2, but these examples are in Python 3. Learn more about the differences between Python 2 and Python 3.
Once ready, install Click from the command line using PIP (how to install PIP for Python):

pip install click

Writing Your First Click Program

In a text editor, start by importing Click:

import click

Once imported, create a method and a main entry point. Our Python OOP guide covers these in greater detail, but they provide a place to store your code, and a way for Python to start running it:

import click
import random

def veg():
    """ Basic method will return a random vegetable"""
    print(random.choice(['Carrot', 'Potato', 'Turnip', 'Parsnip']))

if __name__ == '__main__':
    veg()

This very simple script will output a random vegetable. Your code may look different, but this simple example is perfect to combine with Click.

Save this as click_example.py, and then run it in the command line (after navigating to its location):

python click_example.py

You should see a random vegetable name. Let's improve things by adding Click. Change your code to include the Click decorators and a for loop:

@click.command()
@click.option('--total', default=3, help='Number of vegetables to output.')
def veg(total):
    """ Basic method will return a random vegetable"""
    for number in range(total):
        print(random.choice(['Carrot', 'Potato', 'Turnip', 'Parsnip']))

if __name__ == '__main__':
    veg()

Upon running, you'll see a random vegetable displayed three times. Let's break down these changes.

The @click.command() decorator configures Click to work with the function immediately following the decorator. In this case, this is the veg() function. You'll need this for every method you'd like to use with Click.

The @click.option decorator configures Click to accept parameters from the command line, which it will pass to your method. There are three arguments used here:

- --total: This is the command line name for the total argument.
- default: If you don't specify the total argument when using your script, Click will use the value from default.
- help: A short sentence explaining how to use your program.
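If decorators like these feel magical, the underlying mechanism is plain Python: a decorator wraps your function and can fill in defaults and record metadata on it. The following is a hypothetical toy sketch of that idea using only the standard library - it is not how Click is actually implemented, just an illustration of the pattern:

```python
import functools

def option(name, default=None):
    """Toy option decorator: fills in a default and records the declared option."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(**kwargs):
            # Supply the default for any option the caller didn't pass in
            kwargs.setdefault(name.lstrip('-'), default)
            return func(**kwargs)
        # Accumulate metadata across stacked decorators (Click uses similar
        # bookkeeping to build its help screens)
        wrapper.declared_options = getattr(func, 'declared_options', []) + [name]
        return wrapper
    return decorator

@option('--total', default=3)
@option('--gravy', default=False)
def veg(total, gravy):
    return f'total={total}, gravy={gravy}'

print(veg())                 # both defaults applied
print(veg(total=5))          # override one option
print(veg.declared_options)  # metadata recorded by the decorators
```

The key point is that veg still behaves like a normal function; the decorators only intercept the call to supply defaults, which is the same shape of behaviour the Click decorators give you from the command line.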
Let’s see Click in action. From the command line, run your script, but pass in the total argument like this: python click_example.py --total 10 By setting –total 10 from the command line, your script will print ten random vegetables. If you pass in the –help flag, you’ll see a nice help page, along with the options you can use: python click_example.py --help Adding More Commands It’s possible to use many Click decorators on the same function. Add another click option to the veg function: @click.option('--gravy', default=False, help='Append "with gravy" to the vegetables.') Don’t forget to pass this into the method: def veg(total, gravy): Now when you run your file, you can pass in the gravy flag: python click_example.py --gravy y The help screen has also changed: Here’s the whole code (with some minor refactoring for neatness): import click import random @click.command() @click.option('--gravy', default=False, help='Append "with gravy" to the vegetables.') @click.option('--total', default=3, help='Number of vegetables to output.') def veg(total, gravy): """ Basic method will return a random vegetable""" for number in range(total): choice = random.choice(['Carrot', 'Potato', 'Turnip', 'Parsnip']) if gravy: print(f'{choice} with gravy') else: print(choice) if __name__ == '__main__': veg() Even More Click Options Once you know the basics, you can begin to look at more complex Click options. In this example, you’ll learn how to pass in several values to a single argument, which Click will convert to a tuple. You can learn more about tuples in our guide to the Python dictionary. Create a new file called click_example_2.py. Here’s the starter code you need: import click import random @click.command() def add(): """ Basic method will add two numbers together.""" pass if __name__ == '__main__': add() There’s nothing new here. The previous section explains this code in detail. 
Add a @click.option called numbers:

@click.option('--numbers', nargs=2, type=int, help='Add two numbers together.')

The only new code here is the nargs=2 and type=int options. This tells Click to accept two values for the numbers option, and that they must both be of type integer. You can change this to any number or (valid) datatype you like.

Finally, change the add method to accept the numbers argument, and do some processing with them:

def add(numbers):
    """ Basic method will add two numbers together."""
    result = numbers[0] + numbers[1]
    print(f'{numbers[0]} + {numbers[1]} = {result}')

Each value you pass in is accessible through the numbers object. Here's how to use it in the command line:

python click_example_2.py --numbers 1 2

Click Is the Solution for Python Utilities

As you've seen, Click is easy to use but very powerful. While these examples only cover the very basics of Click, there are lots more features you can learn about now that you have a solid grasp of the basics.

If you're looking for some Python projects to practice your new found skills with, why not learn how to control an Arduino, or how about reading and writing to Google Sheets with Python? Either of these projects would be perfect for converting to Click!

Comments

I tried pip install click in Anaconda and it threw several exceptions. I tried it in Python and it said "install not defined."

Someone has never used argparse. I stopped reading after the overly complex example without click. Either the author isn't familiar with the standard tools at their disposal or they're being intentionally misleading. Either way, I can't trust the rest of the content.
Argparse is a module in the stdlib and can provide almost all of the same functionality as Click, but without external dependencies. The example "without click" is basically a straw man; there is no good reason to parse args manually like that as opposed to using argparse.

I second the use of argparse. It has made my short little utilities cleaner and more complete.

I third argparse. I also don't like the decorator style of Click. With argparse I define one def that parses in all my args, and then passes those along accordingly. Thus, I only have to look in one spot to update/modify the command line functionality. Also, copy that def into a new project and you have a good base to start with - DRY, as the author said.
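For comparison with the commenters' suggestion, here is roughly what the article's veg example looks like using the standard library's argparse. Treat it as a sketch rather than a drop-in replacement for the Click version (the flag behaviour differs slightly: --gravy here is a boolean switch rather than a value):

```python
import argparse
import random

def build_parser():
    """Declare the command line options in one place, argparse-style."""
    parser = argparse.ArgumentParser(
        description='Print random vegetables.')
    parser.add_argument('--total', type=int, default=3,
                        help='Number of vegetables to output.')
    parser.add_argument('--gravy', action='store_true',
                        help='Append "with gravy" to the vegetables.')
    return parser

def veg(total, gravy):
    """Return the lines to print, so the logic is testable on its own."""
    lines = []
    for _ in range(total):
        choice = random.choice(['Carrot', 'Potato', 'Turnip', 'Parsnip'])
        lines.append(f'{choice} with gravy' if gravy else choice)
    return lines

if __name__ == '__main__':
    args = build_parser().parse_args()
    for line in veg(args.total, args.gravy):
        print(line)
```

As the third commenter notes, all of the option declarations live in one function, and argparse generates the --help screen automatically, much as Click does.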
https://www.makeuseof.com/tag/python-command-line-programs-click/
CC-MAIN-2020-05
refinedweb
1,774
64.71
The Dublin Core and the Metadata Object Description Schema: a look at namespaces

Namespaces. As we have seen, namespaces are a core element of the emerging Semantic Web. By posting namespaces on the Web, we can share precise vocabularies that will hopefully enable us to automate the process of searching the Web.

Searching with today's search engines, like Google, is an inaccurate and highly iterative process. Searches are based on matching our search words with words in the documents that have been found and indexed in advance by the search engine. It can be a very painstaking process: we have to click on the URLs that are returned, and for each one, make a decision as to whether or not the page is relevant. We typically end up changing our search words gradually, as we hone our search criteria.

Namespaces are intended as a key element of a long term goal to make search engines of the future smarter. If the terms we used to formulate our searches came from widely-adopted, standardized namespaces, there would be far less painstaking iteration involved in finding the right webpages. We would accompany our search requests with links to the namespaces that define terms we are using. And in fact, searching would become at least partly automatic, with the browser able to narrow the set of returned URLs by making use of its knowledge of namespaces.

The Dublin Core. Let's take a look at one of the most widely known namespaces. It's called the Dublin Core. But, as it turns out, it proved too simple and has since been eclipsed, at least in part, by a somewhat more sophisticated namespace called the Metadata Object Description Schema.

To get started, here's another way to look at a namespace: it is used to create metadata that describes some data source. In particular, the Dublin Core was engineered to provide metadata for resources that can be found on the Web, including text-based documents, images, and video, and in particular, web pages.
Want to know what a web page is all about? Look at its metadata, specified with the Dublin Core standard. By the way, the namespace is named after Dublin, Ohio, not the other Dublin. The namespace was the result of a workshop held in Dublin in 1995. It is not an XML extension, like SMIL, the language used for building multimedia presentations. However, the Dublin Core can be used to create metadata for documents that are specified with XML or one of its many extensions.

So, what is in the Dublin Core? Basically it is a set of terms such as Contributor, Publisher, and Language. Some of the terms generally refer to very simple values, like Contributor, which is the person or organization that created a document. To look at one of the potentially more complex Dublin Core terms, Coverage can describe the 3D (x,y,z) coordinates, or the time period, or the nation referenced by the document being described. It could refer to all of these. Note that this is not the time the document was written, or where it was written. Coverage refers specifically to the content of the document itself. So, if we tell a smart browser of the future to find all documents that pertain to the year 1865, it will not return documents that were written in 1865 but are about the year 1012.

One drawback of the Dublin Core is that it is very loosely defined. So, it often fails in its true purpose: to provide precisely-defined terms that all of us can use, and where we can be confident they will be uniformly interpreted.

A More Sophisticated Standard: MODS. A newer proposed standard, called the Metadata Object Description Schema, or MODS, is an XML language that has been very actively promoted as a successor to the Dublin Core. MODS has more terms, and more precisely-defined terms. Since it leverages the ability of XML to express nested or embedded structures, it can convey much more information than a list of Dublin Core terms can convey.
Here’s a little piece of MODS: This only gives a hint of the rich metadata that can be specified by using MODS. (The MODS website provides some far more detailed examples.) Still, compare this to the Dublin Core Contributor term, which might have the value “Bugs King”. Is this a human name? Is it a pest control company? But – even though it seems like an odd name, in the MODS example, we know that this is a person who goes by the name Bugs King. Dublin Core might die and blow away – but it will always be recognized as a pivotal point in the development of the Semantic Web.  Comment on this Post
http://itknowledgeexchange.techtarget.com/semantic-web/the-dublin-core-and-the-metadata-object-description-schema-a-look-at-namespaces/
CC-MAIN-2016-07
refinedweb
798
70.63
<? imagettfbbox(5, 0, "x.ttf", "the text"); ?>

shows:

Warning: libgd was not built with TrueType font support in /www/test2.php on line 1

phpinfo() shows:

gd
GD Support        enabled
GD Version        1.6.2 or higher
FreeType Support  enabled
FreeType Linkage  with freetype
JPG Support       enabled
PNG Support       enabled
WBMP Support      enabled

So basically I'd say something is not working correctly. I have tried with PHP 4.0.6 on 2 FreeBSD servers I have. Here's my configure line:

./configure --enable-inline-optimization --with-apxs --enable-calendar --enable-ftp --enable-track-vars --with-gd=/usr/local --with-freetype-dir=/usr/local --with-mysql

All looks ok during config & compilation of PHP. Here's what shows:

checking whether to include GD support... yes
checking whether to enable truetype string function in gd... no
checking for freetype(2)... yes
checking whether to include include FreeType 1.x support... no
checking whether to include T1lib support... no
checking for gdImageString16 in -lgd... (cached) yes
checking for gdImagePaletteCopy in -lgd... (cached) yes
checking for gdImageCreateFromPng in -lgd... (cached) yes
checking for gdImageCreateFromGif in -lgd... (cached) no
...

I tried with --enable-gd-native-ttf and --enable-gd-imgstrttf, it's not better.

Also note that there is an error in the configure script:

> cat configure | grep -i native-tt
  --enable-gd-native-ttf  GD: Enable TrueType string function in gd"
# Check whether --enable-gd-native-tt or --disable-gd-native-tt was given.

It's confusing... --enable-gd-native-tt or --enable-gd-native-ttf?

Thanks,
Philippe Bourcier

That "libgd was not built with TrueType font support" message comes from the GD library itself, not PHP. Did you compile the GD library with TrueType font support?

--Wez.

(Please use the web interface to reply!)

User Comment:

Heh, sure... By default GD doesn't compile without freetype installed.
Today here's what I've tried:

FreeBSD 4.3:
Apache 1.3.20 + PHP 4.04pl1 + lib GD 1.8.3 + freetype 1.3.1     << doesn't work
Apache 1.3.20 + PHP 4.06 + lib GD 1.8.3 + freetype 1.3.1        << doesn't work
Apache 1.3.20 + PHP 4.06 + lib GD 1.8.4 + freetype 2.0.4        << doesn't work
Apache 1.3.20 + PHP 4.06 + lib GD 2.0.1.beta + freetype 2.0.4   << doesn't work

Linux 2.2:
Apache 1.3.20 + PHP 4.03pl1 + lib GD 1.8.3 + freetype 1.3.1     << worked
Apache 1.3.20 + PHP 4.06 + lib GD 1.8.4 + freetype 2.0.4        << doesn't work

The only difference in the version that works is PHP is not a dynamic module.

Are you really, really sure that GD is linked with freetype support??

"By default GD doesn't compile without freetype installed"

That statement is not correct in any of the GD sources that I have seen; you need to explicitly enable TTF/Freetype support in the makefiles. Can you successfully run the GD test programs that work with TTF/Freetype? (if there are any).

--Wez.

Try this short program:

#include <stdio.h>
#include <stdlib.h>
#include "gd.h"

int main (int argc, char **argv)
{
    gdImagePtr im;
    im = gdImageCreate(100,100);
    printf("TTF returns %s\n",
        gdImageStringTTF(im, NULL, 1, "dummy.ttf", 12, 0, 0, 0, "test"));
    gdImageDestroy (im);
    return 0;
}

Then do:

gcc gdttf.c -lgd -ljpeg -lpng -lfreetype -lm

Try -lttf instead of -lfreetype, depending on which you have installed. If you still get the "libgd was not built with TrueType font support", then you haven't built gd with TrueType font support. If you get any other error, then something screwy is going on.

--Wez.

oops... I had to reply here... sorry.

Re True, for GD I always have this in the Makefile:

CFLAGS=-O -DHAVE_LIBPNG -DHAVE_LIBJPEG -DHAVE_LIBFREETYPE
LIBS=-lgd -lpng -lz -ljpeg -lfreetype -lm

Can you successfully run the GD test programs that work with TTF/Freetype? (if there are any). Yes. I modified gdtestft.c to load the arial font instead of the "times" one by default.
Another proof of GD having freetype compiled with it is the phpinfo() result:

FreeType Support  enabled
FreeType Linkage  with freetype     (with freetype 2)

FreeType Support  enabled
FreeType Linkage  with TTF library  (with freetype 1)

If phpinfo() says it's ok and then the function says it's bugged... then I say something is wrong.

The only thing that I can think of is that you have an older version of libgd.so or libgd.a hanging around that is mistakenly being linked in instead of the one you were expecting, and that configure found. Please look around your system to see if that is the case; I would recommend moving the duplicates somewhere not in the lib path temporarily while compiling PHP and see if you still have the same problem.

--Wez.

Re No Way... That's not the case. And I have the same error on:

2 FreeBSD 4.3 box
1 linux 2.4.6 box

I say when you compile PHP 4.0.4/4.0.5/4.0.6 as a DSO module, truetype font support is screwed...

It's a GD 1.8.4 problem: It has both ttf and freetype support in different files; you built it with freetype support, and PHP expects it to use TTF functions, which GD reports are not there. This problem is not present in GD 2.0.1; the TTF functions call the FT functions instead. I'll work up a patch for this case, so that PHP can try and stay ahead.

--Wez.

OK, try either using the latest CVS or changing the following line in ext/gd/gd.c of the PHP sources:

-# if HAVE_LIBGD20 & HAVE_LIBFREETYPE
+# if HAVE_LIBFREETYPE

(It's line 2761 for me in CVS; if you can't find it, look for gdImageStringTTF; it's just above it). I hope that fixes the problem!

--Wez.

Re I am now using:

Apache 1.3.20
PHP 4.0.6 + the GD patch
freetype2 2.0.3.1
gd 1.8.4.2

It works fine. Thanks wez.
https://bugs.php.net/bug.php?id=12065
CC-MAIN-2021-31
refinedweb
1,035
79.16
Search Type: Posts; User: vanderbill. Search took 0.01 seconds.

- 27 Oct 2011, Thread: Menu Event using EXT MVC (Replies: 1; Views: 1,410)
  Good afternoon. I have a Viewport with an accordion menu. How do I get the click event of each menu item? My controller: Ext.define('aplicacao.controller.Usuarios', { extend :...
- 11 Feb 2009 (Replies: 3; Views: 2,570)
  Thanks for the answer, but it's not a checkbox, it's a CheckBoxSelectionModel in a grid. :)
- 11 Feb 2009 (Replies: 3; Views: 2,570)
  Public Events: Event / Defined By: beforerowselect : (...
- 11 Feb 2009 (Replies: 3; Views: 2,570)
  Which event do I need to implement to know? Thanks.
- 11 Feb 2009
  Sorry, I had tested it wrong; I had implemented it in the wrong event. Sorry.
- 11 Feb 2009
  OK, my list is List<EmpresaData> mylist; but if I use public void loaderLoad(LoadEvent le) { getGridEmpresa().getSelectionModel().select(
- 11 Feb 2009
  Is this a bug? I used the same code but with the select(index) method, getGridEmpresa().getSelectionModel().select(1);, and that works, but I need to select my list. Thanks, everyone.
- 11 Feb 2009
  Hello guys, I'm trying to select my list when I load my page (PagingToolbar). My grid: private Grid<EmpresaData> getGridEmpresa() { if (gridEmpresa == null) { gridEmpresa = new...
- 24 Jan 2009 (Replies: 2; Views: 1,400)
  Hello guys, I'm trying to implement a basic login, but the responseText is returning all the code. Look: /** * @author Vander */ Ext.onReady(function(){
- 19 Jan 2009 (Replies: 3; Views: 1,750)
  OK, I will look on the net for gzipping, but is there any tutorial on how I can activate this? Thanks for the answer.
- 19 Jan 2009 (Replies: 3; Views: 1,750)
  Hello guys, I have an application on GXT. The first time I access it after it is deployed in Tomcat, it takes a long time to load, between 15 and 30 seconds. Does anyone have the same issue? Thanks, guys; sorry for my bad English.
- 16 Jan 2009 (Replies: 7; Views: 2,779)
  Look at the code in the examples; there is a server-side paging implementation.
- 14 Jan 2009 (Replies: 41; Views: 97,830)
  I have the same problem; can anyone help?
- 12 Jan 2009 (Replies: 0; Views: 956)
  Hello guys, I'm trying to migrate, but not the whole project. I have some problems with DateField... the trigger (date picker, I think) doesn't show! [CODE]public...
- 8 Jan 2009 (Replies: 4; Views: 2,108)
  I got it: private void refreshChk(final List<ModelData> list) { if (!getGrid().isRendered()) return; if (list == null) { if...
- 8 Jan 2009 (Replies: 3; Views: 3,157)
  Hello. With CheckBoxSelectionModel, how can I catch checkbox select/deselect? sm = new CheckBoxSelectionModel<RamoData>(); sm ...
- 8 Jan 2009 (Replies: 4; Views: 2,108)
  Hello guys. I'm running a test here to select rows on load, but nothing happens. public BasePagingLoader getLoader() { if (loader == null) { loader = new...
- 5 Jan 2009
  Sorry, I don't understand; can you give me an example? Thanks for the help, but I don't think there is a solution for Ext GWT yet. Thanks, everyone; I'm still looking.
- 30 Dec 2008
  I'm trying to override getEditor, but it doesn't work: ColumnConfig colResposta = new ColumnConfig("resposta", "Resposta", 150) { @Override ...
- 30 Dec 2008
  Hello guys. How can I have more than one widget in the same column in an EditorGrid? Example: row / Column A: 1 TextField; 2 ComboBox (options 1, 2); 3 CheckBox; 4 ...
- 22 Dec 2008 (Replies: 3; Views: 1,143)
  Thank you so much, it works.
- 22 Dec 2008 (Replies: 3; Views: 1,143)
  Hello guys. I have a query that returns 12,000 rows, and I want to page it. I'm trying it on the client side. My model: public class ConhecimentoModel extends BaseModelData { private String...
- 17 Sep 2008
  Hello guys, the problem happens only on Linux; on Windows it works. Thank you so much for the great work.
- 12 Sep 2008
  Hello, this key doesn't work either.
- 12 Sep 2008
  When I hold Delete or Backspace the mask breaks; do I need to do anything? The key '~' is not validated, I think; the key '

Results 1 to 25 of 30
https://www.sencha.com/forum/search.php?s=2860eb8fe30a05960da378d48eb10043&searchid=11964109
May 18, 2000
Changes:
- New APIs to free memory used by the Java object backing the PL/SQL parser.
Bug Fixes:

October 1, 1999
Changes:
- Now uses the Oracle XML Parser for Java V2.
- The xmlparser package has been split into two: xmlparser, which contains methods for parsing an XML document, and xmldom, which contains methods to manipulate the DOM tree.
- Methods to parse and set a DTD have been added to xmlparser.
- Methods to print DOM nodes and external DTDs have been added.
- Support for namespaces has been added.
- XSLT support has been added and is available in the xslprocessor package.
This is the first production release.

Oracle XML Parser for PLSQL 1.0.0.1.0
------------------------------
July 2, 1999
Changes:
- Now uses the Oracle XML Parser for Java V1.0.1.4.
- More API documentation has been added, and the installation instructions in the READMEs have been expanded.
This is the first beta patch release.

Oracle XML Parser for PLSQL 1.0.0.0.0
------------------------------
April 19, 1999
The Oracle XML parser for PL/SQL is an early adopter release and is written in PL/SQL and Java. It will check whether an XML document is well-formed and, optionally, whether it is valid. The parser constructs an object tree which can be accessed via PL/SQL interfaces. The licensing agreement is found in LICENSE. The parser is currently available only for testing purposes. We expect to make the parser available for commercial use in the future. Please post any questions, comments, or bug reports to the XML Forum on the Oracle Technology Network. At this time OTN is your only support resource.

The parser conforms to the following standards:
* The W3C recommendation for Extensible Markup Language (XML) 1.0
* The W3C recommendation for Document Object Model Level 1 1.0

The parser currently supports the following encodings: UTF-8, BIG5, US-ASCII, GB2312, EUC-JP, ISO-2022-JP, EUC-KR, ISO-2022-KR, KOI8-R, ISO-8859-1 to -9, EBCDIC-CP-*, and Shift_JIS. UTF-8 is the default encoding if none is specified. Any other ASCII- or EBCDIC-based encodings supported by the Oracle 8i database may be used. Additional encodings will be added to the list of supported encodings in future releases.

The parser also provides error recovery: it will recover from most errors and continue processing until a fatal error is encountered.

The following directories and files are found in the release:
license.html - licensing agreement
readme.html - this file
doc/ - API documentation
lib/ - contains the XML parser archive
sample/ - contains examples of how to use the XML parser
http://docs.oracle.com/cd/A97334_02/relnotes/xdk/plsql/parser/readme.html
This chapter provides an overview of the ELOM-to-ILOM migration process and details the preparatory steps that you need to perform for a successful upgrade of your Sun Fire X4150 and X4450 servers and Sun Blade X6250 and X6450 server modules. While the migration procedures are similar for all four servers, some differences exist, particularly between the Sun Fire and Sun Blade server groups. The differences are noted, and when necessary, the procedures are separated. The information and procedures in this chapter pertain to both Sun Fire X4150 and X4450 servers and Sun Blade X6250 and X6450 server modules.

The following is a list of questions about the ELOM-to-ILOM migration.

Why should I migrate my servers to ILOM?
Sun has consolidated and focused its resources on supplying a single server and server module management utility that best meets the needs of our customers. Sun is committed to supporting and developing ILOM as the server management utility of choice.

Does this mean Sun no longer supports ELOM?
Sun continues to support ELOM. However, Sun no longer provides updates to ELOM. Additionally, future updates to your server or server module hardware might depend on ILOM compatibility for functionality.

Does upgrading to ILOM involve changing my server's hardware?
Upgrading to ILOM is a firmware-only update and does not require changing or modifying your server's hardware.

When should I migrate my system?
Sun recommends that you migrate your servers to ILOM as soon as possible.

Can I upgrade multiple systems using scripts?
Yes. See Migrating to ILOM Using the Command-Line Interface (CLI) and Upgrade Scripts.

What is the minimum version of ELOM that I should have installed before beginning the migration process?
It is recommended that you upgrade your server or server module to the latest available version of ELOM before beginning this process.

To which version of ILOM will I migrate my servers?
You need to upgrade your servers to a temporary (transitional) version of ELOM first and then migrate to ILOM. When the migration process is finished, your server or server module will have ILOM version 2.0.3.6.

Can I use the transitional version of ELOM without migrating to ILOM?
No. The transitional ELOM version is not supported for normal use. It should only be used when migrating to ILOM.

How do I migrate my servers from ELOM to ILOM?
See The Migration Process.

What if I do not want to migrate to ILOM right now?
See Choosing Not to Migrate From ELOM to ILOM.

ILOM includes the following features that ELOM does not:

For more information about ILOM features and capabilities, go to:

Sun recommends that you migrate your servers to ILOM as soon as possible. However, you might find it more convenient to downgrade your ILOM-based servers to ELOM and perform the ELOM-to-ILOM migration at a later date. For information about downgrading a server or server module from ILOM to ELOM, see Chapter 6.

This section provides an overview of the ELOM-to-ILOM migration process and details the steps that you need to perform for a successful upgrade of your server. Upgrading from ELOM to ILOM is very similar to upgrading to a newer version of firmware. You can use the same interfaces, and the process is functionally the same. For example, you can use the web GUI or the CLI to upgrade your servers using a combined firmware image file that upgrades both the BIOS and service processor (SP) at the same time. Additionally, if you would like to upgrade multiple servers, you can use the CLI with scripting languages to create scripts that will perform the upgrade. When upgrading from ELOM to ILOM, you can use the following methods:

The migration process requires that you perform at least two upgrades. First, you must upgrade your server from the latest version (recommended) of ELOM to a temporary or transitional version of ELOM.
Then you must upgrade from the transitional version of ELOM to the final version of ILOM. FIGURE 1 shows the basic migration process.

[FIGURE 1: The Basic Migration Process]

You need to perform several preparatory steps before you upgrade your servers to the transitional version of ELOM, including reviewing the Product Notes, accessing the BIOS Setup Utility to record custom settings, and downloading the necessary firmware files. The Migration Task Table lists all the steps, tasks, and relevant sections. The following table lists the steps and relevant sections and procedures:

This section describes how to access your server's documentation collection so that you can review the Product Notes and obtain the latest version of ELOM and ILOM documents. The Product Notes document contains the most up-to-date information about your server, including Tools and Drivers CD version information and issues related to hardware, software, the BIOS, and the SP. Additionally, ELOM and ILOM documentation is available with your server's document collection. Because of the differences between the versions of ELOM, it is recommended that you use your server's Embedded Lights Out Manager Administration Guide as a reference when performing the steps in the upcoming procedures. Use the following procedure to access the Product Notes and determine:

1. Open your browser and go to one of the following:
2. Click to expand the documentation collection for your server or server module.
3. Choose to view or download the Product Notes.
4. Review the document for Tools and Drivers CD version information, ELOM-to-ILOM migration-related information, and ILOM-specific information and issues.
5. Download the ELOM and ILOM documentation for your server.

The web GUI screens and CLI commands and namespaces differ between versions of ELOM. Therefore, the commands and references in the procedures in this document might not be the same for your server.
Use the Embedded Lights Out Manager Administration Guide for your server or server module as a reference when performing the steps in the upcoming procedures.

The migration process does not preserve the BIOS settings. If you customized (changed) the BIOS settings from the default values, you might need to access the BIOS Setup Utility, review the settings, and, if necessary, record these values so that you can enter them again at the end of the migration process. You need the following:

1. Set up your server or server module to respond to POST and boot messages. Use the serial terminal or a computer running terminal-emulation software. For information about how to set up your server or server module to respond to POST and boot messages using a keyboard, video, and mouse, see the Installation Guide or the Embedded Lights Out Manager Administration Guide for your server.
2. Reboot the server and watch the screen for the BIOS prompt. A prompt appears requesting you to press F2 to enter the BIOS Setup Utility.
3. When prompted, press F2 to enter the BIOS Setup Utility. The BIOS Setup Utility main screen appears.
4. Use the arrow keys to navigate the BIOS Setup Utility menus and view the BIOS settings. For a description of the BIOS screens, see the Service Manual.
5. Step through each menu and record the customized (non-default) server or site-specific settings.
6. To exit the BIOS Setup Utility, press the Esc key (or the F10 key). The server boots.

The procedures in this section require SP reboots. After a reboot, the SP might lose its Gateway address. Before starting the migration process, record the Gateway address, then power cycle the server or server module and verify that your SP Gateway address is retained. If the address is not retained, you need to set the Gateway address using the CLI after each power cycle.

1. Start your browser, and type the IP address of the server's SP: where xxx.xxx.xxx.xxx is the IP address of the SP. The ELOM web GUI login screen appears.
2.
Log in as root or as an account with administrator privileges. The ELOM web GUI main screen appears.
3. Click the Configuration tab. The Configuration submenu screen appears.
4. Click the Network tab and record the Gateway IP address in the table below:
5. To exit the ELOM web GUI, click Log Out.

1. Log in to the CLI. For CLI access options and login instructions, see the Embedded Lights Out Manager Administration Guide for your server. The CLI prompt appears:
   ->
2. Navigate to the network directory, and enter the show command. For example:
   -> cd /SP/network
   -> show
   The network settings appear.
3. Record the Gateway IP address in the table below.
4. To exit the CLI, use the exit command:
   -> exit

To migrate your servers to ILOM, you need to download and copy the necessary firmware files. The files are available online at the product's Sun download site. You can download the appropriate Tools and Drivers CD ISO image, or you can download the files as a .zip file.

1. Start your browser and go to the Sun product page for your server:
2. Do one of the following to navigate to the Download link for your server or server module:
   a. From the product page, click the Support tab and scroll down to the ELOM-to-ILOM transitional update section.
   b. Under the Downloads heading, click Latest Drivers and Firmware. The download page for your server or server module appears.
   c. To access the relevant files for your server, locate the version listed below and click Download.
   d. Agree to the terms and log in to the download page.
   e. At the download page, do one of the following:
   -or-
   f. Proceed to Step 3.
   a. From the downloads list, locate the latest version of the Tools and Drivers CD for your server.
   b. Click Download.
   c. Agree to the terms and log in to the download page.
   d. Download the latest Tools and Drivers CD ISO image file.
   e. Burn a CD of the file.
   f. Proceed to Step 3.
3.
Copy the transitional ELOM firmware file from the Tools and Drivers CD (or copy the extracted file) to a device that is accessible to your server: The directory structure and file naming convention vary according to the server:
4. Copy the combined ILOM firmware image file from the Tools and Drivers CD (or copy the extracted file) to a device that is accessible to your server: The combined ILOM firmware image file is a .bin file. The directory structure and file naming convention vary according to the server:

You can choose to preserve ELOM configuration settings for the ELOM-to-ILOM migration process. If you want to preserve ELOM configuration settings when migrating from ELOM to ILOM, then you must select to preserve the settings when upgrading to the transitional version of ELOM, and again when upgrading to ILOM. Otherwise, all ELOM configuration settings are lost. However, because of differences between ELOM and ILOM, some ELOM features are not supported by ILOM and are therefore not retained. The following is a list of the unsupported configuration settings that are not retained during the migration process:

The following is a list of the configuration settings that are retained during the migration process:

When migrating to ILOM, ELOM user accounts with user and callback privilege settings are not retained. If you would like to retain these user accounts, you can change the privilege settings in ELOM to either operator or administrator before you perform the upgrade. However, changing privilege settings for user accounts might present a security risk. For more information about how to manage user accounts and upgrade (change) user account privileges, see the Embedded Lights Out Manager Administration Guide for your server. You can use upgrade scripts to perform the migration process. Using scripts enables you to upgrade multiple servers to ILOM consecutively.
Upgrade scripts are located on the Tools and Drivers CD: The Sun Fire X4150 and X4450 example script is a working script that you can use as a template for performing multiple server upgrades. The script was developed from information in this document. For more information about the example script, see the readme.txt file in the same directory.
http://docs.oracle.com/cd/E19045-01/blade.x6450/820-4930-12/GSGChap1.html
Mono 0.25 Released (2003-06-27) - 36 comments

A new release of the Mono runtime and SDK is available for UNIX and Windows. Packages for various distributions are also available from our download page. Many new things have been improved, implemented, fixed, and tuned since the last release. The team at Ximian has been working on three areas to pass the tests provided by the SourceGear team. Elsewhere, dotGNU has made big progress.

With things moving from a pure stock implementation of features to locking them down for robustness and reliability, things can only get better. This is the project that allows me to feel pretty comfortable using C# as my primary language without worrying about other platforms being excluded. Anything that keeps me from having to use Java is a plus.

"Anything that keeps me from having to use Java is a plus." What a stupid comment. Mono is not cross-platform compatible, and certainly not a replacement for Java.

As long as it runs on Windows, Linux, and Mac OS, it is pretty much multi-platform on a majority of desktops. Can Java compare its cross-platform reach with Kermit or zip/unzip?

It is kind of cross-platform; however, I do see where you are coming from. What would be nice is if Trolltech made their Qt version available under the same conditions as their *NIX and Mac OS versions; then the Mono versions, both Windows and *NIX, could use Qt as the widget set. As for Java, I would love to see Sun loosen up the licensing on their Java so that FreeBSD can release a binary version without needing to jump through hundreds of hoops and hand over thousands.

"As long as it runs on Windows, Linux, and Mac OS, it is pretty much multi-platform on a majority of desktops." It is running only on Linux. Mono is a .NET-like implementation, not a .NET implementation. "Can Java compare its cross-platform reach with Kermit or zip/unzip?" No. It also cannot compare with refrigerators.

> It is running only on Linux. Mono is a .NET-like implementation, not a .NET implementation.

You can find a Windows version on their download page; Mono is not limited to Linux.

It also runs on FreeBSD. There is also a move to get it to run on Solaris and Mac OS X. Maybe instead of being so bloody negative, how about looking at the positive aspects of it? If they can embrace and extend a Microsoft idea and turn it into something REALLY cool, why stop them?

Why does everyone seem to forget that M$ has indicated that they hold numerous patents on .NET and may use them in the future? For all intents and purposes, developing .NET applications is a Windows-only affair. No self-respecting business will develop a .NET application and plan on delivering it on Linux using Mono. For the time being, they are happy to allow Mono to operate, since it helps them fight Java. However, if Mono threatens Microsoft, I am sure they will squash Mono in a second. Develop on .NET at your own risk?

I don't consider MS a threat here. A lawsuit would just give Mono a hundred times more publicity than it now has, and a whole lot of anti-competitive counter-accusations against MS. However, using Mono to develop new C# or .NET applications really makes little sense. What Mono is useful for is moving VB.NET applications away from the MS Windows platform to cheaper alternatives.

I'm not bloody negative. I didn't say it is useless. Mono can be useful, especially in language research etc. I said it cannot be a competitor to Java in its current form.

C# is a relief for Java developers as well as for Java end users… If dotGNU succeeds, it might nicely create a LAMP.NET environment. :)

Haha, write once, test everywhere, and wherever it goes, it is slow.

You ignorant fool.

"'Anything that keeps me from having to use Java is a plus.' What a stupid comment. Mono is not cross-platform compatible, certainly not a replacement for Java." I feel the same as the original poster. After six years of Java, almost anything feels refreshing…

I believe it is only cross-platform IF you use GTK as well. Certainly NOT cross-platform in the Java meaning of the word.

I thought Microsoft had a patent on the Windows.Forms API. In that case, how can the dotGNU people have their implementation? Their API would have to be significantly different to prevent it being in violation of MS's patent. I think being able to patent an API is the silliest idea I've heard. Is this not why Mono developed GTK#?

"I believe it is only cross-platform IF you use GTK as well. Certainly NOT cross-platform in the Java meaning of the word." I was under the impression that the fact it used GTK was hidden by the containing System.Windows.Forms, in the same way that on Windows they are wrappers around Win32 calls, and not a new API to learn. Ah, I found this… I would assume that overriding WndProc is a fairly platform-specific thing to do, although the fact that it exists at all means there was no attempt at platform independence. I've done it myself (only once) because there was no alternative, though.

Java and .NET really aren't competition, as they are targeting different markets. .NET is predominantly CLIENT based; sure, there are things like ASP.NET and ADO, but from what I understand, the real crown jewel of .NET is WinForms. Java, on the other hand, sits handily in both the high-end enterprise server market with J2EE (way too much money and time has been invested in this for Java to disappear from this segment for a while) and at the opposite extreme: the embedded market. Almost every major cell phone manufacturer now supports Java, and it is also becoming a large presence in TV set-top boxes. (It is a little-known fact that TV set-top boxes are what Java was designed for in the first place.) However, when it comes to desktop clients, it has been relatively weak, with even Java applets fading away. The latest incarnations of Swing are starting to turn the tide on this, but it will no doubt be an uphill battle. People still associate desktop Java with slow, despite remarkable advances in JITting and other performance enhancements. Anyway, the two only overlap in two areas from what I can see at this time: ASP/JSP and desktop clients. With ASP/JSP it seems to be tipped toward Java a little, and with desktop clients I'd say the scales are currently tipped toward .NET. The interesting thing is that the two aren't really mutually exclusive; what's to prevent a .NET client app from talking to a Java server app? I think we can expect just that kind of configuration in the coming years, especially with "web services" and SOAP being very popular, and both frameworks containing robust support for such technologies.

"Anything that keeps me from having to use Java is a plus." I'm just curious: is that opinion based on technical merits or just on prejudice? The reason I ask is that I use both languages extensively, and in all honesty they are extremely similar both in functionality and in syntax. How anyone could detest one and have a horny infatuation with the other is beyond my comprehension, unless the reasoning is purely political. Mono can run on multiple platforms, but I don't think it is ready to be run on ANY platform in a production environment yet. It's only a 0.25 release. Its infancy still makes it largely irrelevant in my opinion. This will of course change. Don't get me wrong: I think C# is a great language and look forward to using Mono in a production environment once it is more mature. In the meantime, I do a lot of C# programming for Windows and Java, Python, PHP, C/C++, etc. programming everywhere else. Another thing: I think C#'s abilities on the server side are lacking. ASP.NET is just gay. Java is the better choice there regardless of Mono's maturity, in my opinion. I wish people would use the best tools for the job instead of populating their Utility Belts based on politics and corporate spats.

I understand, and I certainly wouldn't call it production ready. Regarding WinForms: they are being implemented via winelib, whereas GTK will have a wrapper called GTK#. The reason for GTK# is that it provides a familiar API for those who are used to using GTK; secondly, the pace of winelib development is pretty slow, meaning one would wait a very long time if the whole project hinged on that one factor; and thirdly, from what I have heard, GTK# is a much nicer toolkit to write code for than WinForms. On top of that, one also has to take into account the fact that SWT has already been ported to Mono and that SharpDevelop is being ported to SWT, which, once ready, gives Mono a nice IDE for development. Regarding the shared-source version of .NET: had they been truly committed to expanding the adoption of .NET on other platforms, they would never have licensed it under a more draconian license than the GPL. Had they put their money where their mouth is, they would have licensed it under X11/MIT (like the Mono libraries are). If anyone remembers, Microsoft praised the BSD license to the hills, yet we don't see them using it for one of their pet projects.

Is anyone else having problems installing the RPMs for Red Hat 9.0… dependency on PERL-Lib etc.…

Uh, STBs were not 'made for Java'. For one, flash memory is an expensive commodity (if you're talking manufacturing quantities of millions), and in making disposable STBs one wants to waste as little of it as possible. Not so if you have to install a JVM on each one. In fact, Java is not found on many of them, nor is it deployed in large quantities anywhere. Sun tried two years ago to become a big player in the STB market with Java, using the token big splashes at NAB and CES. It popped and fizzled.
If you want to develop software for STBs, you're going to have to learn C/C++ and use the PowerTV or OpenTV platforms. Good luck finding a deployed Java STB platform. The availability of other middleware vs. Java makes it even tougher for them to make inroads into the STB market. So we're down to two options. Either you do 'low-level' code using C/C++ and one of a handful of APIs supported by multiple STB vendors, or you do 'high-level' scripts (JS, HTML, etc.) supported by the same vendors, or the 'rest of them' (Linux STBs, etc.). Things like this are eating Java's STB lunch.

Since when is C++ low level? Assuming a decent runtime environment (and the STL is a lot easier to shoehorn into an embedded platform than Java), C++ is a higher-level language than Java. Java is a pretty vanilla OOP language. C++ allows you to go a good bit beyond traditional OOP in the abstraction department. I'm using C++ right now for embedded development, and my guess is that C++'s "low-level" image comes from C programmers who think they know C++ because they know what a class is! Just a generalization.

I won't even try to comment on your statements because they're such unbelievable FUD. "C++ is a higher level language than Java." Of course C++ is low level; it's the lowest-level OOP language you can get. And C++ has twice as many features as Java; as a language, Java is a subset of C++, so go troll elsewhere.

I think this is what generally qualifies C++ as low level:

    #include <iostream>
    #include <string>

    using namespace std;

    int main(int argc, char **argv) {
        string a = "Hello world";
        string *b = &a;
        b--;                 // raw pointer arithmetic: the "low-level" part
        cout << *b << endl;  // dereferences outside the object (undefined behaviour)
        return 0;
    }

Since when does having twice as many features mean a language is lower level? Seems like the opposite would be true. But it is precisely because most of Java's features are a subset of C++'s that C++ is a higher-level language than Java. Specifically, templates combined with operator overloading allow you a significant amount of abstraction power that you can't get with Java and its much more straightforward design.
A low-level programming language is mostly tied to specific hardware. Assembly is the lowest you can get without programming in binary numbers. A program written in Assembly for Intel cannot be ported to a PowerPC without completely rewriting the program for the PowerPC. C/C++ is the next level up: you can still get to the metal with C/C++ even though you have some hardware abstraction capabilities. Basic, Perl, Python, Ruby, and Java are languages further up the hardware-abstraction ladder. With these languages you can generally create a program without worrying about what type of CPU is in the machine. The things you do have to worry about are whether the interpreter/VM is installed and whether you used any system-call features of the language. A really high-level language would be one in which a program could run on any device without modification.

Your definition is very simplistic. A more accurate definition of high-level languages takes into account the abstractive power offered by the language. For example, both Python and Java are high-level by your definition, but Java is actually a medium-level language compared to Python, or something like Lisp or Haskell or any of a number of very high-level languages. In terms of abstractive power, C++ is a notch above Java, because it can do pretty much everything that Java can, along with some things Java can't. That said, if you don't write C code in C++, C++ can be almost as portable as Java. The CPU-specific features come into play when you do stuff like reinterpret_cast or hold pointers in integers, not really when you're doing regular code.
Mono is nothing but an M$ patent trap waiting to happen that will make SCO look like a sunday school picnick because Micro$oft in addition to being a monopoly is a SATANIC COMPANY!!! The original name “Palladium” for their future control everything technology PROVES this as does the REAL meaning of their “product activation” technology. (I will have the elaboration of this posted on EVERY Micro$oft related post on this forum from now on out!) The best way for Linux developers weather proprietary or free to go is with native code development with C/C++ or with FreePascal if you’re coming from a Windows/VB background and can not yet master the C/C++ language. I know that most Linux developers consider Pascal to be obsolete however I still consider FreePascal to be the best alternative to BASIC on Linux until one can master the basics of a programming language enough to graduate to C/C++ because it is easy for basic users to move to and it compiles to NATIVE CODE. I also believe that continuing with native code apps is in the best interest of Lunix because it will give a desided ADVANTAGE to Linux in native app speed of execution over “managed app” bloat and slowness that will make it even MORE desirable for both business and home use as Windows continues to slide down the bugs, bloat, .NET “managed app” molassis pit. As for software patents from both Micro$oft and others that endanger Linux development I will soon be posting a plan to fight them that will include the idea of public domain BY LAW for all declared obsolete software and the STRATEGIC use of the GNU GPL/LGPL and a third license of my own design to both make Linux development more attractive to proprietary developers and discourage their retaining or taking out software patents. I will be licensing the open source libraries that I develop under this upcomming arrangement ans hope that other open source developers follow suit. 
As far as I understood the distinction, it's basically something like imperative = low level and declarative = high level. Among the imperative languages, the purely procedural would be lower level than the object-oriented, and among the declarative, the functional would be lower than the logic-based. Considering the power introduced by the template mechanism in C++, I'd say it's higher level than Java but still lower than functional languages. As I see it: C < Java < C++ < ML < Lisp < Prolog. It's a matter of definition of course, so I guess there is no right or wrong here.

bytes256: It is a little-known fact that TV set-top boxes are what Java was designed for in the first place.

Kon: Uh, STBs were not 'made for Java'.

No, but Java was made for STBs. The Oak language was originally a notation for a proposed set of embedded media-processing chips. An emulator for the instruction set of those chips became (after marketing renamed the language) the Java Virtual Machine. Actually, the differences between the .NET IL and JVM instruction sets are rooted in this heritage: the JVM was designed to be implemented in hardware, IL to be JIT-compiled to other instruction sets. This is why Java was so slow at first – not now, though, after years of hacking around the JVM model to figure out how to make it JIT efficiently.
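The imperative-versus-declarative split described above can be made concrete with a tiny sketch; both versions are in Python here, and the function names are mine, purely for illustration:

```python
# Imperative (lower-level): spell out each step and mutate state.
def sum_of_squares_imperative(numbers):
    total = 0
    for n in numbers:
        total += n * n
    return total

# Declarative/functional (higher-level): describe *what* you want, not how.
def sum_of_squares_declarative(numbers):
    return sum(n * n for n in numbers)

print(sum_of_squares_imperative([1, 2, 3]))   # 14
print(sum_of_squares_declarative([1, 2, 3]))  # 14
```

Both compute the same thing; the second simply leaves the looping and accumulation to the language, which is the kind of abstraction the "level" argument is about.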
https://www.osnews.com/story/3905/mono-025-released/
As a general rule, removing "special" (whatever that may mean in a particular context) characters is a much more dangerous and fragile solution than removing everything but "normal" characters. Figure out what you want to allow, and then remove (or throw errors for) anything else. You'll have to do the checking yourself, or use a form validator module (search CPAN for the module that suits your needs).

Not sure exactly what you are looking for, but here's some Perl that grabs the form value and then tests it for unwanted characters and untaints in the same step. I have a bunch of validation methods depending on what I'm testing for.

Calling script:

    ($sql{'name'}, $error) = $self->val_text( 1, 64, $self->query->param('name') );
    if ( $error->{msg} ) { push @error_list, { "name" => $error->{msg} }; }

Validation script:

    sub val_alphanum {
        my $self = shift;
        my ($mand, $len, $value) = @_;
        if (!$value && $mand) {
            return (undef, { msg => 'cannot be blank' });
        } elsif ($len && (length($value) > $len) ) {
            return (undef, { msg => 'is limited to '.$len.' characters.' });
        } elsif ($value && $value !~ /^(\w*)$/) {
            return (undef, { msg => 'can only use letters, numbers and _' });
        } else {
            my $tf = new HTML::TagFilter;
            return ($tf->filter($1));
        }
    }

I've put a lot of work into figuring out this CGI stuff—you can see more complete examples at Using Perl, jQuery, and JSON for Web development and A Tutorial for CGI::Application. Also, avoid dumping raw text from user input into comments: all the user has to do is figure out you're doing that and preface any of the XSS exploits on the page with '-->' to close your comment early...
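The whitelist rule above (decide what to allow, reject everything else) is language-independent; here is a rough Python equivalent of the Perl validator, with function and parameter names of my own choosing rather than anything from the post:

```python
import re

# Whitelist: letters, digits, and underscore only -- same as Perl's /^\w*$/.
ALLOWED = re.compile(r'^\w*$')

def val_alphanum(value, mandatory=False, max_len=None):
    """Return (cleaned_value, error_message); error_message is None on success."""
    if not value:
        return (None, 'cannot be blank') if mandatory else ('', None)
    if max_len is not None and len(value) > max_len:
        return None, 'is limited to %d characters.' % max_len
    if not ALLOWED.match(value):
        return None, 'can only use letters, numbers and _'
    return value, None

print(val_alphanum('bob_42', mandatory=True, max_len=64))    # ('bob_42', None)
print(val_alphanum('<script>', mandatory=True, max_len=64))  # rejected
```

Note that the check names what is permitted; anything outside that set, including characters you never thought to blacklist, is rejected automatically.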
http://www.perlmonks.org/?node_id=850323
Streaming MP3 has been around for years, and despite the emergence of new formats such as AAC, it remains the dominant audio format on the web. Streaming MP3 works well enough with a LAN or broadband connection, but what about mobile networks, where bandwidth is generally more limited? This article describes a simple trick that enables the creation of rate-adaptive clients, among other things, without breaking backward compatibility. This trick (I am reluctant to call it a hack because the idea is so simple) also enables audio hyperlinks within an MP3 stream. I am calling this RAXAR, for rate adaptive experimental audio relay. I don't like the name all that much, so if someone else has a better idea, go for it.

MP3 + ID3v2 = RAXAR

Embedding meta-data within MP3 streams is easy using the ID3v2 system. ID3 tags are routinely used to embed title, author, and other information within a stream. So why not extend this to define a family of tags that enable MP3 streams to point to downsampled streams, relays, and even to affiliate streams or audio hyperlinks?

Adaptive Rate MP3 Clients

By embedding a few ID3 tags in a stream, it can refer listening clients to alternate streams that run at higher or lower bitrates. This will enable the creation of smarter MP3 clients that automatically upgrade their connection when bandwidth is available, and automatically step down to a lower bitrate when network performance is lacking. The client could either do this automatically, or in response to the user clicking a user interface element. Imagine a cellphone MP3 player that allows you to increase/decrease sound quality via a rocker switch (similar to volume control). To do this, we define a convention for a set of link frames in the ID3 namespace. The tag Wxxx, where xxx is the bitrate in kbps, contains a URL that points to an alternate bitrate stream. Simple. Clients that know that W064 means "a 64kbps stream lives –> here" will use this information accordingly.
Older MP3 clients will just ignore the tag.

Relay and Mirror Sites

One of the neat things we can do with RAXAR is to embed pointers to relays and mirror servers. Smart MP3 clients would parse these tags and build up an index of where peer stream servers reside, and could autonomously select stream servers closest to their network address space. Think of this as a sort of poor man's multicasting. How would this work? A RAXAR client would connect to the root stream, which would embed tags that point to relays. The client quickly learns where relays are located, and can automatically switch to them, either on its own, or in response to a force-redirect tag. Of course, existing load balancing techniques work well for spreading the initial connect requests around, but one of the neat things we can do by embedding tags in a live stream is to enable redirection after a connection is already in progress. For example, let's say that midway through a session, a new relay becomes available. The root stream starts broadcasting this information every few frames. The client decides that the relay server is closer than the root server, and connects to it. Using this technique it will be possible to build multi-hop MP3 relay networks, with each relay appending additional pointers to upstream and downstream relays. To enable this feature, we create a set of ID3 tags as follows:

WRxx –> URL of relay or mirror stream #xx

The stream server will rank relays using the numeric identifier (00 = best, 99 = dead last). This list will be dynamic, so smart clients will weight relay recommendations by numeric rank and time since the recommendation was made. Again, this is a simple trick, but it enables us to do some interesting things, like enabling automatic discovery of peer stream servers just by connecting to one stream. There is an obvious security issue here, as a stream server could insert bogus URLs for relay servers, and therefore cause all sorts of mayhem.
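A client-side sketch of the conventions above: given already-decoded ID3 frames (real frame decoding would use an ID3v2 library, and the example URLs are made up), build the bitrate map and relay list that a RAXAR-aware player would keep:

```python
def build_stream_maps(frames):
    """frames: iterable of (frame_id, url) pairs pulled from the ID3v2 stream.

    'W064' -> alternate 64 kbps stream; 'WR03' -> relay ranked 3 (00 = best).
    Unknown frames are ignored, just as older clients would ignore them.
    """
    bitrates, relays = {}, {}
    for frame_id, url in frames:
        if frame_id.startswith('WR') and frame_id[2:].isdigit():
            relays[int(frame_id[2:])] = url
        elif frame_id.startswith('W') and frame_id[1:].isdigit():
            bitrates[int(frame_id[1:])] = url
    return bitrates, relays

bitrates, relays = build_stream_maps([
    ('W064', 'http://example.net/radio-64'),
    ('W128', 'http://example.net/radio-128'),
    ('WR00', 'http://relay0.example.net/radio'),
    ('TIT2', 'ignored: ordinary title frame'),
])
```

Because unrecognized frame IDs simply fall through, the backward-compatibility property the article relies on comes for free.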
We assume the source stream is a trusted source (probably not a good assumption), but in a trusted system, you can do some interesting things with this technique.

Stream Groups and Audio Hyperlinks

Another nifty thing we can do is to create stream groups, where one stream publishes links to other streams within an affinity group. So let's say you want to create your own package of Internet radio stations: you just embed pointers to other streams within the group. A smart client that connects to one stream will automatically learn the location and current URLs for affiliated streams. Why do this instead of looking up the Shoutcast directory? Well, maybe you're streaming on a mobile device with a tiny screen. This trick enables MP3 clients, especially those designed for mobile devices, to automatically discover other streams, and to enable channel surfing via a simple channel up/down interface. This also guarantees that the URLs for affiliate streams are current, as they can be updated mid-stream. To support stream grouping, we define a few more ID3 tags:

WAFN –> name of stream or program
WAFD –> description of affiliate stream
WAFC –> channel number of affiliate stream
WAFU –> url of stream

NOTE: these tags are sent in a group, so the source stream can define as many affiliate streams/channels as it wants. RAXAR-aware clients will capture these tags as they are sent to build up a channel map (a well-designed client will cache maps from previous sessions).

Next Steps

The great thing about RAXAR is that it is backward compatible with existing ID3-aware MP3 players. Older players that do not recognize the tags will simply ignore them. New clients will be able to use this information to automatically discover alternate streams, and to build a channel guide from information embedded within the stream itself (a nifty capability that could be used in all sorts of creative ways).
For example, a radio station could use the stream groups feature to embed links to recent podcasts within their main live stream, or could play short tones as cues that a phrase is a hyperlink to another location. This ability to make MP3 streams hyperlinkable could lead to some neat applications.

What's left to do? To send a RAXAR stream, all you need to do is add the newly defined tags to your existing MP3 or other ID3-friendly stream. Note that as of this writing, this is a very informal spec, and the names for the new tags are arbitrary. If someone else wants to add to this or has a better idea for tag nomenclature, go for it. It should be equally straightforward to update MP3 clients to listen for RAXAR tags. Implementing a rate-adaptive MP3 client is simple enough: just listen for Wnnn tags, and build a map of which bitrates map to which URLs. Automatic upgrade/downgrade behavior will be somewhat of a black art due to the unpredictable nature of Internet connections. A good rule of thumb for mobile clients will be to start with a mid-range bitrate, say 64kbps, and upgrade if the connection is faster, downgrade if it's slower (e.g. GPRS). It will also be good to provide the listener with a way to manually upgrade or downgrade the connection. Other features, such as channel groups and relay networks, will require a little more work, but not much. A simple UI for channel groups is to provide a basic channel up/down click interface so the listener can cycle through a group of streams without looking at a graphical user interface. Audio hyperlinking is an especially interesting area. One idea that came out of the camp was to announce audio hyperlinks with a short tone. While this would be obnoxious in a music stream, it would work nicely for spoken word programming, possibly with different tones to signify different types of hyperlinks (e.g. one to a completely different document, one to a short recording that defines a term or concept and then reverts back to the original stream). Of course, it would be better to use a markup language to define more complex hyperlinked audio documents, but the goal here is to hack MP3 streams so this can be baked right into the stream itself (this should work for any ID3-friendly stream, not just MP3). All in all, this should be straightforward to implement, because 95% of what we need is already there. Lastly, I should point out that I am not putting this out there as a "big idea". It's a pretty minor tweak to a widely used system, and if we can reach some consensus on tag nomenclature, we'll be able to build some interesting services. When I started working on this, I was mainly interested in embedding information about downsampled streams within a parent stream. While I was exploring that notion, I realized that it could be extended to other areas, such as audio hyperlinking. I like the idea of being able to explore an audio stream, so who knows, this could lead to some interesting things. I thought I'd put this out there and see who bites.
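The start-mid-range, upgrade/downgrade rule of thumb could be sketched like this (the 1.25 headroom factor and the selection logic are my own illustrative choices, not part of the proposal):

```python
def pick_bitrate(available_kbps, measured_kbps, headroom=1.25):
    """Pick the highest advertised bitrate the link can sustain with headroom.

    available_kbps: bitrates learned from Wnnn tags.
    measured_kbps:  current estimated throughput of the connection.
    Falls back to the lowest advertised stream rather than refusing to play.
    """
    usable = [b for b in available_kbps if b * headroom <= measured_kbps]
    return max(usable) if usable else min(available_kbps)

rates = [32, 64, 128]
print(pick_bitrate(rates, 200))  # 128: plenty of bandwidth
print(pick_bitrate(rates, 90))   # 64: 128 kbps needs 160 kbps of headroom
print(pick_bitrate(rates, 20))   # 32: degrade gracefully, never go silent
```

A real client would also smooth the throughput estimate over time and rate-limit switches, so a momentary dip does not cause the stream to flap between bitrates.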
http://www.oreillynet.com/etel/blog/2005/08/foo_camp_talk_rate_adaptive_mp.html
"H. Peter Anvin" <hpa@transmeta.com> writes:

> Alan Cox wrote:
> > > Another example: all the stupid pseudo-SCSI drivers that got their own
> > > major numbers, and wanted their very own names in /dev. They are BAD for
> > > the user. Install-scripts etc used to be able to just test /dev/hd[a-d]
> > > and /dev/sd[0-x] and they'd get all the disks. Deficiencies in the SCSI
> >
> > Sorry here I have to disagree. This is _policy_ and does not belong in the
> > kernel. I can call them all /dev/hdfoo or /dev/disc/blah regardless of
> > major/minor encoding. If you dont mind kernel based policy then devfs
> > with /dev/disc already sorts this out nicely.
> >
> > IMHO more procfs crud is also not the answer. procfs is already poorly
> > managed with arbitrary and semi-random namespace. It's a beautiful example
> > of why ad hoc naming is as broken as random dev_t allocations. Maybe
> > Al Viro's per device file systems solve that.
>
> In some ways, they make matters worse -- now you have to effectively keep
> a device list in /etc/fstab. Not exactly user friendly.
>
> devfs -- in the abstract -- really isn't that bad of an idea; after all,
> device names really do specify an interface. Something I suggested also,
> at some point, was to be able to pass strings onto character device
> drivers (so that if /dev/foo is a char device, /dev/foo/bar would access
> the same device with the string "bar" passed on to the device driver --
> this would help deal with "same device, different options" such as
> /dev/ttyS0 versus /dev/cua0 -- having flags to open() is really ugly
> since there tends to be no easy way to pass them down through multiple
> layers of user-space code.)
>
> The problems with devfs (other than kernel memory bloat, which is pretty
> much guaranteed to be much worse than the bloat a larger dev_t would
> entail) is that it needs complex auxiliary mechanisms to make
> "chmod /dev/foo" work as expected (the change to /dev/foo is to be
> permanent, without having to edit some silly config file)

The permanent storage for a PC is naturally the hard disk. You could always make a device partition to store persistent state. I think a few megabytes should be enough; it could be substantially less if you had good defaults and disk storage was only used to override the default.

Of course, using disk brings us full circle back to device nodes on a filesystem. The impetus behind devfs was never (afaict) saving disk space or getting around slow disk access. People want device nodes to appear automatically and go away again when drivers are removed.

I think what all this means is that between the kernel and the collection of user-space programs, the filesystem semantics just doesn't have enough going for it in order to do all that you want with devices.

It might be a mostly userspace-solvable problem. A device daemon could create new devices on the fly, only they'd be ordinary filesystem devices. For example, it might be better to hack ls to not show dormant devices. A cron job could call a grim device reaper to cull nodes not used for a long time...

What do other vaguely Unix-like systems do? Does, say, Plan 9 have a better way of dealing with all this?

--
J o h a n  K u l l s t a m
[kullstam@ne.mediaone.net]
http://lkml.org/lkml/2001/3/27/168
Dan Fernandez - Demo of C# Express (pull images from Google) - Posted: Jul 26, 2004 at 3:09 PM

The asynchronous BackgroundWorker (reporting progress: do work, report change, run worker completed) is super powerful and easy, easy to use! Next, the general RAD stuff is really impressive. This video is a great example of how easy it is to add, edit, and debug complexish code. In debug, IntelliSense works in the watch window and you can view HTML from memory (variables) in a visualizer window. Daniel shows how to use a dictionary with a key and an image class (keeping images in memory and never writing them to disc). RAD tools demonstrated include showing how to easily add the fully qualified name for a resource, or add a using directive to the top of the code with the appropriate namespace for missing data types. Refactor and Extract Method are demonstrated, which allow you to take a piece of code and set it as a new function. You can see definitions of functions without jumping to the definition of the function. You can click definitions, or 'surround with' to automatically add a try/catch. Next, you can call a function without adding the stub for the new call; the IDE will then offer to create the method stub for the new function, and it will create the stub with the correct return type (or void) and add the necessary data types to the definition. Code snippet expansions ('For Each', for example) will write the construct out for you, and you just pick a variable name, type, and a collection to use for your For Each, and it's written. Now you can just add the code you want to execute for each item. Rename refactoring is shown to rename variables and objects in the program, and it will change the reference in all comments, strings, and code. You can use rename refactoring to rename a progress bar, for example. Thanks, Daniel, for another great demo! I've added a walkthrough of the code and some of the UI for the application on my blog if people want to know more about how the application works.
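The do-work / report-progress / completed shape of BackgroundWorker that the demo shows can be sketched outside .NET; this Python stand-in mirrors the pattern only, not the WinForms API (all names here are mine):

```python
import threading

def run_background(work_items, do_work, on_progress, on_completed):
    """Minimal BackgroundWorker-style pattern: a worker thread processes
    items, reports percentage progress after each one, then signals completion."""
    def worker():
        results = []
        for i, item in enumerate(work_items, start=1):
            results.append(do_work(item))
            on_progress(100 * i // len(work_items))
        on_completed(results)
    t = threading.Thread(target=worker)
    t.start()
    return t

progress, done = [], []
t = run_background([1, 2, 3, 4],
                   do_work=lambda n: n * n,
                   on_progress=progress.append,
                   on_completed=done.append)
t.join()
print(progress)  # [25, 50, 75, 100]
print(done[0])   # [1, 4, 9, 16]
```

The separation matters for UI work: the do-work callback runs off the UI thread, while the progress and completed callbacks are the hooks a framework would marshal back onto it.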
Thanks, -Dan

Keep up the good work /Tyko

The search and replace is a good thing if it actually looks up object references and renames only those that actually match the object. I hope it doesn't simply perform a text-based search like what we already have. Consider this:

    udtChild.Foo();
    udtChildren.Goo();

I want to rename udtChild, which is an instance of a control, to udtBar. With a text-based search and replace, I end up with the unwanted scenario where all occurrences of udtChild become udtBar, including the occurrence of the substring udtChild in udtChildren. So, in effect, my code becomes:

    udtBar.Foo();
    udtBarren.Goo();

which is not so great.

Thanks for the great post. One thing to note about your post is that you can install C# Express on Windows 2000 as well as XP and 2003. I'm not sure if this helps you, but I wanted to make sure you knew. You can find the system requirements here. Also, you can find the source code for the entire project, including the GetGoogleImages code, here. Thanks, -Dan

I was itching to say thanks, but the slow connection wouldn't let me. Finally, I've made my way through. Like always, you guys are amazing. I gather you put up the link to the zipped source code only after my post? Amazing! Thanks a big bunch, Dan. It comes as news to me, and in that a pleasant one, that I can run VS Express on Windows 2000 as well. Thanks for that info. I really didn't know that. I recollect reading something about it... but never mind. I am way too forgetful. I actually tried installing both VB Express and VC# Express Editions on my PC around the time the betas were released. I also scraped through the instructions that said it appears to hang after the first reboot during installation. But when I tried, it looked as though it had hung even prior to the first reboot. There was a blank progress bar, and after a seemingly endless wait, no progress. So, I dropped the idea.
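The udtChild/udtChildren hazard described above is exactly why a rename refactoring has to match whole identifiers rather than substrings; a word-boundary regex (Python here, just to demonstrate the matching rule) shows the difference:

```python
import re

code = "udtChild.Foo(); udtChildren.Goo();"

# Naive text replace clobbers udtChildren as well:
naive = code.replace("udtChild", "udtBar")

# Identifier-aware replace (\b = word boundary) leaves udtChildren alone:
safe = re.sub(r'\budtChild\b', 'udtBar', code)

print(naive)  # udtBar.Foo(); udtBarren.Goo();  <- the bug from the post
print(safe)   # udtBar.Foo(); udtChildren.Goo();
```

A real IDE rename goes one step further still and resolves symbols semantically, so that an unrelated identifier with the same spelling in another scope is not touched either.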
I thought if there have been major changes to the .NET Base Class Libraries, then I might as well stick to .NET 1.1, because I'm supposed to be doing a project in VC# with VS2003 in this month or the next. We've already started, so I didn't chase the idea. Thanks again, Dan. I am excited about the way you guys are treating us all on Channel 9. In general, I'm overwhelmed by the responses and the attitude Microsoft employees have towards others. I guess that is another trait that makes people like Microsoft.

One wish for refactoring stuff: I would like new code to not only appear at the end of the code file (per class), but the ability to route auto-generated code to regions would be great as well. Another wish: to be able to sort any block of lines of code alphanumerically. More VS Express stuff please.

That almost cracked me up until I read it again and noticed the word block, and I understood it must mean the ability to sort Region directives alphanumerically. I thought someone was asking the IDE to be able to sort source code lines alphanumerically.

And just out of curiosity, I stopped by the MS recruiting center and they had some convention thing at the park? What was that all about...

I added another couple of worker methods so that the program is more asynchronous. I got rid of the progress bar and made the images populate the image list as they are being retrieved. When it has fetched all the thumbnails, the program then goes and fetches all the higher-resolution images. You can click on an image in the image list at any time, and it will display either the thumbnail or the high-resolution image if it has been retrieved by then. I also added the ability to save the high-resolution images to your hard drive. It's only the second thing I've ever written in C#, so forgive me if I've made some terrible mistake. Any comments would be appreciated. My modified code is here.

I use Arial Black (size 14).
It's a very readable font from a far distance because of its thickness, and it's my preferred demo font. Thanks, -Dan

I am a very basic programmer. Any help would be great.

I think the best way to allow the copy ability is to add a context menu option called Copy Image and then set its Click event code to the following:

    Stream ImageStream = new WebClient().OpenRead("" + String.Format("ViewPhoto-{0}.aspx?Width=50", row.ID));

I just want to be clear on that line of code...

Great job, congratulations Dan! I want to have it now!!! Anthony

Hey Ivan, sorry for not responding sooner, but there is an updated version that works with the final release of Visual C# Express available here. Thanks, -Dan

:(
http://channel9.msdn.com/Blogs/TheChannel9Team/Dan-Fernandez-Demo-of-C-Express-pull-images-from-Google?format=auto
Programmer to Programmer™ – online discussion at p2p.wrox.com

Inside ASP.NET Web Matrix
Alex Homer and Dave Sussman

Part 1 – What is Web Matrix?
Part 2 – Putting Web Matrix to Work
Part 3 – Configuring and Extending Web Matrix

Wrox Press Ltd. © 2002 Wrox Press. Permission is granted to redistribute this document in its complete, original form. All other rights are reserved. The author and publisher have made every effort

Trademark Acknowledgements: Wrox has endeavored to adhere to trademark conventions for all companies and products mentioned in this book, such as the appropriate use of capitalization. Wrox cannot however guarantee the accuracy of this information.

Credits – Authors: Alex Homer, Dave Sussman. Managing Editor: Viv Emery. Commissioning Editor: Daniel Kent. Production Coordinator & Cover: Natalie O'Donnell. Technical Editor: Daniel Richardson. About the Authors: davids@ipona.co.uk.

Inside ASP.NET Web Matrix

During its relatively short but spectacularly successful life, Microsoft® (among others) support ASP 3.0, and, of course, Microsoft's own Visual Studio 6.0 included InterDev – which was also available as a standalone:

❑ Part 1 – What is Web Matrix? provides an overview of Web Matrix, looks at the features it provides, and the IDE it contains
❑ Part 2 – Putting Web Matrix to Work walks you through using Web Matrix to build an application that contains many different types of pages and resources
❑ Part 3 – Configuring and Extending Web Matrix

❑ Project-based Solutions – Visual Studio .NET has the concept of a project, to which you can add various types of file and resource. Web Matrix does not use a project-based approach; instead it treats each file as a separate item.
❑ ASP.NET Page Structure – Web Matrix creates ASP.NET pages using the inline approach, rather than the code-behind approach of Visual Studio .NET.
❑ Compilation of Class Files – Unlike Visual Studio .NET, Web Matrix does not automatically compile class files into assemblies. This has to be done from the command line.
❑ Community – Web Matrix is designed to be a community tool, and contains various types of links to the online community site, as well as links to newsgroups, list servers, and other sites that provide community support for Web Matrix.

In the current release, the Toolbox and project windows are not moveable, though they can be resized. The main editing window offers several views:

❑ Design – which shows the visible appearance of the page (with or without glyphs)
❑ HTML – which shows the actual HTML and text content of the page, but not any code sections
❑ Code – which shows just the code in the page without any HTML or other content
❑ Source – which shows the complete page, including the page directives and inline code sections (as with the All view when in the default editing mode).

The Quick Tag Edit window can also be opened by right-clicking on an element on the page in Design view, and selecting Edit Tag from the context menu. A Code Builder snippet shows a tooltip when the mouse pointer hovers over it. This snippet can then be inserted into any other page or file by dragging it from the Toolbox:

    ] = @Doors) " _
        & "AND ([tblCar].[Seats] >= @Seats))"
    Dim sqlCommand As System.Data.SqlClient.SqlCommand _
        = New System.Data.SqlClient.SqlCommand(queryString, sqlConnection)
    sqlCommand.Parameters.Add("@Doors", System.Data.SqlDbType.SmallInt).Value = doors
    sqlCommand.Parameters.Add("@Seats",

The Email Code Builder item can also be dragged onto a page, and in this case the dialog shown in the following screenshot appears. In this dialog you can specify the To and From addresses, the Subject, the mail format, and the SMTP Server to use to send the message. The following code is then created.
All that is left to do is to set the text of the message using the mailMessage.Body property:

    ' Build a MailMessage
    Dim mailMessage As System.Web.Mail.MailMessage = New System.Web.Mail.MailMessage
    mailMessage.From = "somebody@somewhere.com"
    mailMessage.To = "somebody@somewhere.com"
    mailMessage.Subject = "Email:

(The Files and Wizards features are covered later on.)

    id="SqlDataSourceControl1" runat="server"
        UpdateCommand="UPDATE [tblCar] SET [Model]=@Model,[Doors]=@Doors,[Seats]=@Seats,
            [Price]=@Price,[Precis]=@Precis WHERE [CarID]=@CarID"
        SelectCommand="SELECT * FROM [tblCar]"
        AutoGenerateUpdateCommand="False"
        ConnectionString="server='localhost'; trusted_connection=true; Database='WroxCars'"
        DeleteCommand="">

❑ Community – which provides links to useful resources, newsgroups, list servers, and other sites that provide support and references

When the currently selected control is a DataGrid or a DataList, the lower section of the Properties window contains two hyperlinks. The second one, Property Builder, opens the "property page" dialog shown in the previous screenshot. The other link, Auto-Format, provides a list of pre-defined colors and styles for the grid or list, and for its content. The style can be applied simply by selecting it in the Auto Format dialog. Finally, above the icons in the Properties window are two useful drop-down lists that make it easy to navigate through the controls on a page. The top one shows the currently selected control, and also lists all the other controls in the page, allowing you to select one directly. The drop-down list below this lists all the elements for which the currently selected element is a child or descendant, and allows you to easily find and select any ancestor or parent (enclosing) element.

The Classes Window

When working with the .NET Framework, information about the multitude of classes that are included in the .NET Framework Class Library is vital.
To make it easier, Web Matrix includes features that allow you to access detailed help about any class from within the IDE. The Classes window, by default, shows four folders that contain the ASP.NET Page Intrinsics, the range of Web Controls, the range of HTML Controls, and other common classes for web applications. Below this are the other commonly used classes, listed by namespace. Other views can be selected; for example, you can use the icons at the top of the Classes window to display the details in Assembly view (where the classes are listed by the assembly DLL that contains them), or the list can be sorted alphabetically, by class type, and by visibility (that is, whether they are public or private). Non-public classes are not shown in the list by default, though the fourth icon in this window can be used to display them in addition to the public classes. The Classes window also contains a search textbox, in which you can enter a search string. Then, only the classes that contain that string in their name will be displayed (press Return to start the search, and then click the View Search Results icon to toggle between the search results and the namespace list).

The Customize icon in the top right of the Classes window is used to modify the list of assemblies that appear in the Classes window. The following screenshot shows how to add the System.EnterpriseServices assembly to the list in the Classes window, which then shows all the namespaces within this assembly:

The Class Browser

Web Matrix includes a comprehensive tool – the Matrix ClassBrowser – that can be opened independently from the Start menu, or from within Web Matrix by double-clicking on a class displayed in the Classes window. When opened from the Start menu, it looks like the following screenshot. The left-hand window lists the .NET Framework namespaces, and for each one the classes and interfaces implemented within that namespace are listed. Double-clicking on a class opens a list of all the members of that class in a new window in the right-hand area of the Class Browser. Clicking on one of the members then displays details of that member in the right-hand area of this window – including the static fields, constructors, properties, methods, and events.

The Class Browser uses reflection to obtain details of the class and its members, so the information is limited to just the member definitions. Each parameter or enumeration listed in the right-hand pane acts as a hyperlink, which when clicked displays information about that object or enumeration in another window. This makes it easy to follow the hierarchy and to get details about the method parameters or property/field value types that are required. There are also links that open the local .NET Framework SDK documentation (if installed) or the online MSDN Library at the relevant page, where more details of the class and its members can be found. The menu bar and toolbar in the Class Browser offer the same set of features as the Classes window we discussed earlier. You can change the sort order and organization of the namespaces and classes in the left-hand window, search for classes by name, and show or hide non-public classes. The same Customize button, which opens the Select Assemblies dialog in which you can add and remove assemblies from the right-hand list, is also present. The Window menu can be used to tile or cascade the windows, or just to switch from one to another. When information about a class is opened from within the Web Matrix IDE (by double-clicking on a class in the Classes window), the same right-hand sections of the page shown in the previous screenshot are displayed within the IDE, but the left-hand list of namespaces is not shown.

Types of Files and Wizards

There are three ways to open a new file in Web Matrix.
You can:

❑ Select New from the File menu
❑ Click the New File icon in the toolbar
❑ Right-click on the target folder in the Workspace window and select Add New Item

The last two of these techniques are shown in the following compound screenshot, together with the New File dialog that appears. Notice that the path is that selected in the Workspace window:

The New File dialog lists the various kinds of file that you can create and edit within the Web Matrix IDE. Each is created from a template stored in folders within the Templates subfolder of the Program Files\Microsoft ASP.NET Web Matrix\version\ folder. Each is described next.

File Types in the (General) Section

There are 14 different types of file that you can create from the (General) section of the New File dialog. They are:

❑ ASP.NET Page – this creates a file with the extension .aspx. The file contains the @Page directive, the opening and closing <html> tags, an empty <head> section, and a <body> section containing a server-side <form>.
❑ ASP.NET User Control – this creates a file with the extension .ascx. The file contains just the @Control directive.
❑ HTML Page – this creates a file with the extension .htm. The file contains the opening and closing <html> tags, plus empty <head> and <body> sections.
❑ XML Web Service – this creates a file with the extension .asmx. The file contains the @WebService directive, Imports or using statements for the required Web Service namespaces, a Class definition, and an example public function outline that you can modify. You must specify the class name and namespace before you can create this type of file.
❑ Class File – this creates a file with the extension .vb or .cs (depending on which language you specify), containing an Imports or using statement for the System namespace, the Namespace definition, an outline Class definition, and an empty public Sub or function. You must specify the class name and namespace before you can create this type of file.
❑ Style Sheet – this creates a file with the extension .css. The file contains just an empty BODY{} selector definition.

❑ Global.asax – this creates a file with the extension .asax. The file contains the @Application directive and a <script> section that contains empty event handlers for the Application_Start, Application_End, Application_Error, Session_Start, and Session_End events.

❑ Web.Config – this creates a web.config file containing the <configuration>, <appSettings>, and <system.web> sections. Within the <system.web> section there are <sessionState>, <customErrors>, <authentication>, and <authorization> elements. All the elements are commented-out by default, and contain a description of their usage and the valid values for the common attributes.

❑ XML File – this creates a file with the extension .xml. The file contains just the <?xml ... ?> processing instruction that defines the version and encoding of the file.

❑ XSL Transform – this creates an XSLT stylesheet file with the extension .xslt. The file contains the <?xml ... ?> processing instruction and the root <stylesheet> element.

❑ XML Schema – this creates an XML (XSD) schema file with the extension .xsd. The file contains the <?xml ... ?> processing instruction and the root <xsd:schema> element.

❑ ASP.NET HTTP Handler – this creates a file with the extension .ashx. The file contains the @WebHandler directive, Imports or using statements for the required System and System.Web namespaces, a Namespace definition, and a public Class definition that implements the IHttpHandler interface. Within the Class definition are the two required member definitions for the ProcessRequest method and the IsReusable property. You must specify the class name and namespace before you can create this type of file.

❑ Text File – this creates an empty text file with the extension .txt.

❑ SQL Script – this creates a text file that contains just "/* New SQL script */". The file has an extension of .sql.
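To make the first of these templates concrete, the skeleton that the ASP.NET Page template produces looks something like this (reconstructed from the description above, so the exact spacing, id values, and attribute defaults are assumptions):

```aspx
<%@ Page Language="VB" %>
<html>
<head>
</head>
<body>
    <!-- a server-side form, ready for controls to be dropped in -->
    <form runat="server">
    </form>
</body>
</html>
```

Everything you drag onto the design surface ends up inside that server-side <form> element.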
File Types in the Data Pages Section

While the file types listed in the (General) section are predominantly empty "outline" files, the file types listed in the other sections are more like Wizards, but without any step-by-step dialogs. The templates that they use to create the new file contain code and (in some cases) ASP.NET server controls in order to implement a working page that you can use as a starting point for developing your own pages. The file types available in the Data Pages section are shown in the following screenshot, and are then described:

❑ Simple Data Report – this creates a page that accesses the local SQL Server or MSDE database and displays details from the Authors table of the sample pubs database, using a DataReader as the data source for an ASP.NET DataGrid control. The following screenshot shows the page in Design view, and when opened in a browser:

❑ Filtered Data Report – this creates a similar page to the previous example, but this time containing controls where you can select an author and see a list of their books. A DataReader is used when the page is first loaded to fill the drop-down list. Then, when Show Titles is clicked, an appropriate SQL statement is constructed and used with a DataReader to extract the information from the titleview SQL View in the pubs database. The following screenshot shows the page in Design view, and when opened in a browser:

❑ Data Report with Paging – this page fills a DataSet with data from the Authors table and then binds it to an ASP.NET DataGrid control. This time, however, it uses the built-in paging features of the DataGrid to display the results over separate pages, along with links that allow a user to navigate through the pages.

❑ Data Report with Paging and Sorting – this page extends the techniques developed in the previous type of page by adding a sorting facility.
It does this by setting the attributes of the DataGrid control, and adding a simple event handler to respond to the Sort event of the grid.

❑ Master-Details Grids – this page shows how easy it is to display data from two related tables. Data from the Authors table in the sample pubs database is loaded into a DataSet and displayed in a DataGrid control that has paging enabled, and which contains a ButtonColumn with the text Show details. Clicking on one of these button links causes a DataReader to fetch the matching data from the titleview SQL View in the pubs database and this is displayed in the second DataGrid control. The following screenshot shows the page, both in Design view and when opened in a browser:

❑ Editable Data Grid – this example shows the basic technique for editing data using the built-in features of the ASP.NET DataGrid control. A DataSet is filled with data from the Authors table. An EditCommandColumn that implements the Edit/Update/Cancel link buttons and a ButtonColumn that implements the Delete link buttons are then added to this DataGrid. Code in the page reacts to the various events raised by these link buttons; the code executes SQL statements that update the original database table contents. The following screenshot shows the page, in both Design view and when opened in a browser:

❑ Simple Stored Procedure – this example is similar to the first of the Data Pages we looked at. The only difference is that it calls the stored procedure named CustOrdersDetail within the pubs database, rather than using a SQL statement stored as a string. The result is returned as a DataReader, which is bound to an ASP.NET DataGrid control in the page.

File Types in the Mobile Pages Section

Web Matrix contains templates that allow you to create "mobile" pages and user controls. These pages are based on the classes exposed by the Microsoft Mobile Internet Toolkit (MMIT). The MMIT can be used to create pages that automatically adapt for a range of devices.
These pages, and the controls they contain, produce either HTML or WML (Wireless Markup Language) output, tailoring it for the particular device that is accessing the page:

There are two types of file in the Mobile Pages section of the Add New File dialog:

❑ Simple Mobile Page – this creates a file with the extension .aspx. The file contains the @Page directive and inherits from the special MobilePage class that is implemented within the Microsoft Mobile Internet Toolkit (MMIT). The file also includes the @Register directive for the Mobile Controls and an empty server-side <mobile:Form>.

❑ Simple Mobile User Control – this creates a file with the extension .ascx. The file contains the @Control directive but inherits from the special MobileUserControl class implemented within the Microsoft Mobile Internet Toolkit (MMIT). The file also includes the @Register directive for the Mobile Controls.

The Environment for Mobile Pages

When the page currently being edited within Web Matrix is a mobile page, the environment changes to provide the special features required for this type of page. The Toolbox now shows the Mobile Controls section, which contains the controls from the MMIT. These are the only controls that should be used on mobile pages, as the standard HTML and Web Forms controls cannot output the correct content in all circumstances (because they can't produce WML).

The following screenshot shows that the Edit window changes as well. It gains controls to specify how the page will filter on and react to different devices. In the MMIT, it's possible to set up device filters, so that sections of the output can be modified for specific devices. These controls are used to configure that filtering:

At the bottom of the Edit window, there is a minor change to the tabs for the four different views of the page. Markup replaces HTML on the second tab, as the output from a mobile page may be WML instead of HTML.
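Based on the description of the Simple Mobile Page template, the generated .aspx file looks something like this (a sketch assembled from the description above – the exact directive attributes the template emits are assumptions):

```aspx
<%@ Page Inherits="System.Web.UI.MobileControls.MobilePage" Language="VB" %>
<%@ Register TagPrefix="mobile"
    Namespace="System.Web.UI.MobileControls" Assembly="System.Web.Mobile" %>

<!-- the mobile equivalent of the server-side <form> in a normal page -->
<mobile:Form runat="server">
</mobile:Form>
```

The MobilePage base class and the registered mobile control set are what allow the same page to render as HTML or WML depending on the requesting device.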
In addition, the controls and classes from the Mobile Internet Toolkit are included in the default list of classes displayed in the Classes window, and in the separate ClassBrowser tool.

File Types in the Output Caching Section

The Output Caching section of the New File dialog contains examples of how you can set up pages that use output caching to improve performance, minimize server overhead, and reduce response times. The four available file types are fundamentally similar, and demonstrate how output caching can be configured to automatically detect different features of each request:

❑ Vary By None – this demonstrates "total" output caching, where every client is sent the same cached page until it expires. The example contains an @OutputCache directive that specifies a cache duration of 10 seconds, and contains the attribute VaryByParam="none". Two label controls on the page are set to the current time and the time that the cache expires. By refreshing the page in the browser, you can see the effect of the output caching.

❑ Vary Cache By Browser – this demonstrates output caching that sends different pages to each type of browser, based on the browser detection carried out by the Request.Browser object. The example contains an @OutputCache directive that specifies a cache duration of 10 seconds, and contains the attributes VaryByParam="none" and VaryByCustom="browser". This time there are three label controls on the page, which are set to the browser name, the current time, and the time that the cache expires. By refreshing the page in the browser, and loading it into different types of browser, you can see the effect of the output caching.

❑ Vary Cache By Headers – this demonstrates output caching that sends different pages depending on a specific HTTP header sent in the request. This example contains an @OutputCache directive that again specifies a 10 second cache duration, with the attributes VaryByParam="none" and VaryByHeader="Accept-Language".
The same page will then only be sent in response to requests where the Accept-Language header is the same. Three label controls on the page are set to the value of the Accept-Language header, the current time, and the time that the cache expires.

❑ Vary Cache By Parameters – this demonstrates output caching that sends different pages depending on a value sent as a parameter from the client – in this case a value posted from a drop-down list control on a <form>. This example contains an @OutputCache directive that specifies a cache duration of 120 seconds, with the attribute VaryByParam="Category" (the id and name of the drop-down list). Three label controls on the page are set to the value selected in the drop-down list, the current time, and the time that the cache expires. Selecting a different category and clicking the Lookup button causes a page to be created and cached for that category only if one is not already available in the cache. The following screenshots show this page, in both Design view and when opened in a browser:

File Types in the Security Section

There are three examples in the Security section of the New File dialog, which demonstrate the common techniques for creating authentication (login) pages for a secure section of a Web site:

❑ Login Page – this creates a standard "log in" page that contains two textboxes with corresponding RequiredFieldValidator controls attached, a Login button, and a label where any error message can be displayed. The code in the page uses a simple hard-coded check of the values you enter, and then shows how to execute the RedirectFromLoginPage method to load the page that was originally requested. The following screenshots show this page, both in Design view and when opened in a browser:

❑ Logout Page – this creates the corresponding "log off" page, with a Status label and a single Log Off button. The label shows the username of the currently logged-in user where available.
Clicking the button calls the SignOut method and displays a message indicating that the user is no longer authenticated. The following screenshots show the Logout page, in both Design view and when opened in a browser:

❑ Config File – this example creates a suitable web.config file to use with the two previous security examples. The file contains a <configuration> element with a child <system.web> element. The <system.web> element contains the <authentication> and <authorization> elements that specify Forms authentication, and deny anonymous users.

File Types in the Web Services Section

The final section of the New File dialog is the Web Services section. This includes four example pages that implement different features of Web Services. For each one, you must enter the class name and namespace before you can create the file:

❑ Simple – this example creates the simplest type of Web Service, basically the same as the XML Web Service option in the (General) section of the New File dialog. The file contains an @WebService directive, Imports or using statements for the required Web Service namespaces, a Class definition, and an example public function outline that you can modify.

❑ SOAP Headers – this example creates a Web Service that reads a custom value from the SOAP headers of the request, and displays the result.

❑ Custom Class – this example demonstrates how a custom class can be returned from a Web Service. The code creates an instance of a custom class named OrderDetails (which is actually an array of another custom class named OrderItem), sets some values for the class members, and then returns this instance.

❑ Output Caching – this example demonstrates how the output from a Web Service can be cached, much like the examples shown earlier that used output caching.
It simply defines a public function that is implemented as a WebMethod, and adds a CacheDuration attribute with a value of 30 to the function so that the output is automatically cached for thirty seconds. The following screenshots show the page opened in a browser, and the result:

Language, Class Names, and Namespaces

Remember that, for each type of file selected in the New File dialog, any code automatically included in the file is in the language that you specify in that dialog – the choice is between Visual Basic .NET and C#. Depending on the type of file you select, the dialog will also contain controls in which any other required information is entered – such as the class name and namespace (in some cases this is optional, while in others – such as a Web Service or Class file – it is mandatory). We'll create some example pages later on in order to demonstrate these general techniques.

Help, Support, and Reference Information

We've seen how Web Matrix provides access to reference materials and online help in several ways. Future plans are for Web Matrix to include its own comprehensive help files that describe the workings of the IDE, and how to get the best from the product. Only minimal built-in help features are implemented at the moment, such as the links to various resources and samples at and. However, if you place the cursor over a class name in the Edit window and press the F1 key, a new ClassBrowser window opens with reference details of that class.

Several other places within the Web Matrix IDE also provide access to online help and support. The Community window (in the lower part of the "project" window) contains links to the ASP.NET Web Matrix site, as well as links to several Microsoft-run .NET newsgroups, and list servers provided by other members of the Web Matrix community. The ASP.NET Web Matrix site is part of the main ASP.NET site at, which also contains a great deal of useful information and links to other ASP.NET-related sites.
It is also the prime source for downloadable add-ins, control libraries, and other resources for Web Matrix – including access to the latest version of the product. Two views of the first page follow so that you can see the range of resources that are provided:

The second page (opened from the second icon at the top of the Community window) contains links to related web sites and other resources, while the third page (opened from the third icon) accesses MSN Messenger (if you have this installed on your machine), so that you can chat in real time with other Web Matrix users.

Don't forget that the main toolbar at the top of the Web Matrix window contains a combobox drop-down list in which you can type a question or a series of keywords. Pressing Return opens the ASP.NET Web Matrix site in your default browser, and displays a list of articles and resources that match your query. You'll also recall from our earlier discussion that the ClassBrowser window contains links for each member of the .NET Framework Class Library that open the corresponding help and reference pages either locally from your own machine, or at the MSDN online library site.

Sending Feedback to Microsoft

The Help menu in the Web Matrix IDE contains an entry to send feedback on the product to Microsoft. This feedback can consist of bug reports, feature requests, or just general information and comments. Web Matrix is a "community product", and, as such, its future development will be guided to a large extent by the feedback Microsoft receives from users. So, don't be afraid to send in your opinions – the development team is keen to hear what you think!
The Send Feedback window is a three-page tabbed dialog that contains the Feedback page itself, the application Information page, and a list of all the currently Loaded Assemblies (the same dialog, but without the Feedback page, appears when you select the Application Information command from the Help menu):

The Microsoft ASP.NET Web Matrix Web Server

Before we move on to Part 2, where we'll see Web Matrix in action, we'll take a quick look at the web server that is provided with Web Matrix. This is a slim and lightweight web server that can be used to run ASP.NET pages and other resources (such as Web Services) on machines that do not already have a local web server installed.

When you first run an ASP.NET page or Web Service from within the Web Matrix IDE, a dialog opens that asks you which web server you want to use. As shown in the following screenshot, you can allow the Microsoft ASP.NET Web Matrix Web Server to execute your page or Web Service:

As you can see, the default for the web server is to run on port 8080. This is ideal if the machine you are using already has a web server (such as IIS) installed and running. The existing web server is likely to be using port 80, and so by using a different port the Web Matrix web server avoids any possibility of an error. You can change the Application Port to a different port if you wish (such as port 80 if you don't have IIS installed).

Alternatively, you can select an existing instance of Internet Information Server (IIS) to execute your page, in which case Web Matrix will create a new virtual root (with the name you specify) that points to the folder containing the file you are editing. If you wish, you can also turn on directory browsing for this virtual root, which makes it easier to find and run individual pages as you develop your application.

Once the Web Matrix web server has been started, an icon appears in the Windows Taskbar tray.
Right-clicking this icon displays a menu in which you can open the web site that the web server is providing in your default browser, Restart or Stop the web server, or show details about it:

Part 2 – Putting Web Matrix to Work

Now that you know what tasks Microsoft ASP.NET Web Matrix is capable of, it's time to put them into practice. Web Matrix is easy to use, so we're not going to show you every aspect of it in action. Instead, we'll build a simple web site for a pizza delivery company, concentrating on the most commonly used pages. This will show you just how little you need to do to get a site up and running with Web Matrix.

Pretty Quick Pizza

Our sample web site is designed to allow customers to pick pizzas and drinks, add them to a simple shopping basket, and then proceed to a checkout where they pay either by cash on delivery or by being billed to an account. It's a really simple e-commerce site, and leaves out many features (such as looking cool!) because they aren't required. We'll end up with a simple site like this:

From this page a customer can select from a variety of types and sizes of pizza and from a range of drinks. Their selection can be added to a shopping basket, and once the customer has made all their choices, they can proceed to the checkout:

The checkout page redisplays the customer's selection and allows the order to be placed. The customer can choose to pay when the pizza is delivered or to have the amount billed to an account. If the customer chooses to have the amount billed to an account they will be taken to a secure login page where they can access their account details. All of the code for this example is available from.

Building ASP.NET Pages

Because we're not building a fully functional site, we've cut out some of the stuff that you'd normally use. For example, we've only got a minimal data access layer, limited security, and few advanced features.
This is because what's important is showing you the types of pages Web Matrix can create, and what you need to do to customize them for your own requirements. As Web Matrix is file based, you'll need to set up the IIS Virtual Directory yourself. I called it PPQ. In the following sections we'll tackle:

❑ Creating a Data Layer that consists of an XML file, a VB.NET component, and some SQL stored procedures and tables
❑ Creating User Controls for the page header and the shopping basket
❑ Creating the pizza selection page, where we use a variety of ASP.NET controls, as well as the newly created User Controls
❑ Creating the checkout page, where we take the user's details and find out how they'd like to pay
❑ Creating secure pages for customers with accounts
❑ Creating a variety of different pages, such as those that use a Master and Details grid, or those that require caching
❑ How to create Web Services
❑ How to use other controls, such as the Internet Explorer controls and custom controls

The Data Layer

The data layer for this application consists of two files: an XML file that contains the data and a class that loads the data and performs some database logging. When you first start Web Matrix you'll be presented with the New File dialog (remember that Web Matrix is file based, and not project based like Visual Studio .NET). To keep the files for this web site together we'll need to create a directory – we can do this either externally in Explorer, or from within Web Matrix using the Workspace, where we select New Directory from the context menu:

Once we've created the directory, we need to start creating the files for the application. First of all, we need to create the XML file. You can do this from the New … item on the File menu, or use the context menu on the directory:

This brings up the New File dialog, from where you can select XML File from the General templates.
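The contents of the data file aren't reproduced in the text; purely as an illustration of the shape of data that DataSet.ReadXml can consume, a hypothetical fragment might look like this (every element name here is invented, not taken from the download):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<menu>
  <pizza>
    <PizzaID>1</PizzaID>
    <Description>Margherita</Description>
    <Price>5.99</Price>
  </pizza>
  <pizza>
    <PizzaID>2</PizzaID>
    <Description>Pepperoni</Description>
    <Price>6.99</Price>
  </pizza>
</menu>
```

ReadXml infers a table from the repeating <pizza> element, so the DataSet ends up with one table containing a row per pizza.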
You then need only add your data:

In reality it's likely that you'd use a database for all of these details, but this is a quick solution that shows the simplicity of Web Matrix – note that there are no special XML editing features, such as XML validation, that would add complexity to the tool.

To use this data, we create a class called DataLayer:

Here we have the option to select the default language, the class name, and the namespace for the class. The template created is a stub into which you add your required code. We could have used the Insert Data Method code builder to add code, but the code generated by the code builder creates a SQL statement to execute, and we want to use a couple of stored procedures. We'll add the code manually (although you could still use the code builder and then modify the generated code):

Imports System
Imports System.Data
Imports System.Data.SqlClient
Imports System.Web
Imports System.Xml

Namespace ppq

  Public Class DataLayer

    Public Sub New()
    End Sub

    Public Shared Function GetData() As DataSet
      Dim ctx As HttpContext = HttpContext.Current
      Dim ds As New DataSet()
      ds.ReadXml(ctx.Server.MapPath("pizzas.xml"))
      Return ds
    End Function

    Public Shared Sub LogOrder(Name As String, Address As String, _
                               ZipCode As String)
      Dim ctx As HttpContext = HttpContext.Current
      Dim Basket As DataTable = CType(ctx.Session("Basket"), DataTable)
      Dim conn As New SqlConnection("server=.; " & _
        "DataBase=AlandDave; Trusted_Connection=true")
      conn.Open()

      ' add the order
      Dim cmd As New SqlCommand()
      cmd.Connection = conn
      cmd.CommandText = "sp_PPQInsertOrder"
      cmd.CommandType = CommandType.StoredProcedure
      cmd.Parameters.Add("@Name", SqlDbType.VarChar, 25).Value = Name
      cmd.Parameters.Add("@Address", SqlDbType.VarChar, 255).Value = Address
      cmd.Parameters.Add("@ZipCode", SqlDbType.VarChar, 15).Value = ZipCode
      Dim OrderID As Integer = cmd.ExecuteScalar()

      ' add the order details
      cmd.Parameters.Clear()
      cmd.CommandText = "sp_PPQInsertOrderItem"
      cmd.Parameters.Add("@fkOrderID", SqlDbType.Int)
      cmd.Parameters.Add("@Item", SqlDbType.VarChar, 25)
      cmd.Parameters.Add("@Quantity", SqlDbType.Int)
      cmd.Parameters.Add("@Cost", SqlDbType.Decimal)
      cmd.Parameters("@fkOrderID").Value = OrderID

      Dim row As DataRow
      For Each row In Basket.Rows
        cmd.Parameters("@Item").Value = row("Description")
        cmd.Parameters("@Quantity").Value = row("Quantity")
        cmd.Parameters("@Cost").Value = row("Cost")
        cmd.ExecuteNonQuery()
      Next

      conn.Close()
    End Sub

  End Class

End Namespace

This class has two simple methods. The first method simply loads and returns the XML file as a DataSet. The second method accesses a SQL database in order to add details of the order and order lines. All this is fairly standard code that uses a couple of stored procedures and parameters. The important thing to understand about this code is that the shopping basket is held in the current Session – we'll see how that's created later.

Compiling Classes

One thing that Web Matrix doesn't do is compilation, so we have to perform this manually. We simply created a batch file with the following in it (note that this command is all one line):

vbc /debug /nologo /t:library /out:bin/DataLayer.dll /r:System.dll /r:System.Xml.dll /r:System.Web.dll /r:System.Data.dll Datalayer.vb

Then we can run this batch file from the command line. This sort of thing would actually be quite a good add-in.

Managing the Data

We discussed Web Matrix's data management features in Section 1, but now we can put these features into practice. First we'll use the Data tab to create a new connection (this assumes you've got a database already created):

Once we've connected to the database we can create a new table by selecting Tables and then clicking the New Item button:

If you don't want to go through this process, you can use SQL scripts that are provided with the code download to create the tables and stored procedures.
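The download scripts themselves aren't listed here, but as a rough sketch, the two insert procedures called by LogOrder might look something like this. The procedure names and parameter types come from the DataLayer code above; the table and column names are assumptions:

```sql
CREATE PROCEDURE sp_PPQInsertOrder
  @Name varchar(25), @Address varchar(255), @ZipCode varchar(15)
AS
  INSERT INTO PPQOrders (Name, Address, ZipCode)
  VALUES (@Name, @Address, @ZipCode)
  -- return the new order ID; the VB code reads it via ExecuteScalar
  SELECT SCOPE_IDENTITY()
GO

CREATE PROCEDURE sp_PPQInsertOrderItem
  @fkOrderID int, @Item varchar(25), @Quantity int, @Cost decimal(10,2)
AS
  INSERT INTO PPQOrderItems (fkOrderID, Item, Quantity, Cost)
  VALUES (@fkOrderID, @Item, @Quantity, @Cost)
GO
```

Whatever the real scripts look like, the first procedure must return the new order's key as a single value, since LogOrder relies on ExecuteScalar to obtain OrderID for the detail rows.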
You'll need to create the following tables:

You'll also need to create the following stored procedures:

Although the stored procedure window appears to be just a simple text editor, it actually validates the stored procedure as it saves it to SQL Server. However, it doesn't add any required permissions (such as GRANT requests for particular users) so you'll have to add these yourself. The exact form of these permissions will depend on your connection details, and what permissions your user requires.

Creating User Controls

We're going to use two User Controls in our application. The first control is a simple banner at the top of the page, which will contain an image and heading. The second control is for the shopping basket. We'll use a User Control for the shopping basket as it will be used on several pages, and encapsulates quite a lot of functionality.

You can create a User Control by selecting ASP.NET User Control from the New Item dialog. Our first control is very simple, and we can simply drag some HTML controls onto the design surface. The control comprises an HTML table that contains a single row and two columns, inside which are an image and some text. There's no need to add any code:

The second User Control is our shopping basket, and since this control requires some database functionality, we won't use the ASP.NET User Control template. Instead we'll use the Editable Data Grid from the Data Pages section of the New Item dialog, and rename the extension of the file to .ascx – we need to do this because the template creates a page, but we want the shopping basket to be a user control. Using the correct template and extension means that Web Matrix generates lots of useful code for us. The following default page is created:

Web Matrix has created an editable grid, along with code that allows us to edit and save data.
For example, the following code is provided to allow updates:

Sub DataGrid_Update(Sender As Object, E As DataGridCommandEventArgs)
  ' update the database with the new values
  '
  ' execute the command
  Try
    myConnection.Open()
    UpdateCommand.ExecuteNonQuery()
  Catch ex As Exception
    Message.Text = ex.ToString()
  Finally
    myConnection.Close()
  End Try

  ' Resort the grid for new records
  If AddingNew = True Then
    DataGrid1.CurrentPageIndex = 0
    AddingNew = False
  End If

  ' rebind the grid
  DataGrid1.EditItemIndex = -1
  BindGrid()
End Sub

This code uses the pubs database as its template, and includes a connection string at the top of the page. All we have to do to use this code in our own example is to change a few details to match our database. You might prefer to take out much of this data access code and replace it with calls to a data layer that performs this functionality for you. If this is something you'll be doing a lot of, then it's worth creating your own templates that suit your style of coding. How to create your own templates is covered in Section 3.

Modifying the Code

There's not much to do in order to modify the standard template code to work with our own data. We need to modify the connection string and SELECT statement used to fetch the data. Both of these are found at the top of the page:

Dim ConnectionString As String = _
    "server=(local);database=pubs;Integrated Security=SSPI"
Dim SelectCommand As String = _
    "SELECT au_id, au_lname, au_fname from Authors"

We also need to modify three event handlers:

❑ DataGrid_Update
❑ DataGrid_Delete
❑ AddNew_Click

Let's look at the sort of changes you'll often have to make to these methods. After that, we'll look at the specific changes we need to make for our example application.

DataGrid_Update

The following modifications need to be made to the DataGrid_Update method. This event handler is called when a row is updated, and the DataGridCommandEventArgs object passed into the method points to the row being updated.
The data is fetched out of the cells into variables – you'll need to modify these lines to pick your data out of the grid. The first two columns are the edit and delete columns, which is why the data starts at the third column (the Cells collection is zero-based).

Next we need to set the command itself – again, this will need modifying to match your data. Alternatively, you could set the CommandText property of the command to the name of a stored procedure, and the CommandType to CommandType.StoredProcedure.

Next come the parameters for the command; these also need modifying. The rest of the procedure remains the same.

DataGrid_Delete

For DataGrid_Delete all we have to do is change the command that performs the deletion:

Dim DeleteCommand As New SqlCommand("DELETE from authors where au_id='" & _
    keyValue & "'", myConnection)

Like the update, this could also be changed to call a stored procedure, as long as the CommandType was set correctly to CommandType.StoredProcedure.

AddNew_Click

For additions the code creates a new array of data that is inserted into the table:

' add a new blank row to the end of the data
Dim rowValues As Object() = {"", "", ""}
ds.Tables(0).Rows.Add(rowValues)

In this code the new values (three of them) are strings, so empty strings are used. You'll have to add objects of the correct type that match the type defined in the columns of the table.

Modifying Our User Control

For our control, we simply persist the shopping basket details in a DataTable in the Session, so all of the database access code can be removed.
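The text doesn't show how that basket DataTable is first created. A minimal sketch of the setup might look like this – the Description, Quantity, and Cost column names match those read back in LogOrder, but the ItemID column, the types, and where this code runs are all assumptions:

```vbnet
' create the basket table the first time this session needs it
If Session("Basket") Is Nothing Then
  Dim Basket As New DataTable("Basket")
  Basket.Columns.Add("ItemID", GetType(Integer))
  Basket.Columns.Add("Description", GetType(String))
  Basket.Columns.Add("Quantity", GetType(Integer))
  Basket.Columns.Add("Cost", GetType(Decimal))
  Session("Basket") = Basket
End If
```

Because the table lives in Session state, every page (and the user control) sees the same basket for a given visitor, which is what lets LogOrder read it back later.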
For example, the grid update routine is now:

Sub DataGrid_Update(Sender As Object, E As DataGridCommandEventArgs)
  ' get the edit text boxes
  Dim id As Integer = CInt(ShoppingBasket.DataKeys(e.Item.ItemIndex))
  Dim qty As String = CType(e.Item.Cells(1).Controls(0), TextBox).Text
  ChangeQuantity(id, qty)

  ' rebind the grid
  ShoppingBasket.EditItemIndex = -1
  BindGrid()
End Sub

The other data functions are also modified to use methods that manipulate the shopping basket items. These methods have been made public so that they can be called from the page that hosts the user control. We won't go into the detail of all of these methods, since they are fairly easy to understand, and aren't Web Matrix-specific. One thing that is worth mentioning is a property we create for the basket:

Public WriteOnly Property ViewMode As Boolean
  Set
    If Value = True Then
      btnClear.Visible = False
      Dim cols As Integer = ShoppingBasket.Columns.Count
      ShoppingBasket.Columns(cols-1).Visible = False
      ShoppingBasket.Columns(cols-2).Visible = False
    End If
  End Set
End Property

The setter for this property turns the editable grid into a read-only grid. It does this by making the edit column, the delete column, and the clear button invisible. Our user control can now be used on multiple pages in both editable and non-editable modes. We now have a fully functional shopping basket that can be dropped onto other pages.

The Main Page

Although our main default page will display data, the data templates provided by Web Matrix all produce grids, and we want a combination of different data controls. So, we'll start with a blank ASP.NET page instead. Once we've added a few controls the page looks like this:

There's a combination of quite a few controls here. At the top of the page we have the user control we created earlier that represents the banner. Web Matrix doesn't provide design-time support for these sorts of controls, and there's no way to drag them onto the design surface.
To add them to a page we have to use the All view, so that we can add both the @Register directive and the control:

<%@ Register TagPrefix="ppq" TagName="Banner" Src="Banner.ascx" %>
<ppq:Banner runat="server"></ppq:Banner>

We need to repeat the process for the shopping basket:

<%@ Register TagPrefix="ppq" TagName="ShoppingBasket" Src="ShoppingBasket.ascx" %>
<ppq:ShoppingBasket runat="server"></ppq:ShoppingBasket>

If you switch back into Design view then you'll see that the user controls are displayed as gray panels. You can add the text and labels to the page simply by dragging and dropping them from the Toolbox. Then you can set their properties as required. Web Matrix contains a formatting toolbar, so you don't even have to remember what any of the formatting attributes are.

For the actual data, we want to show three things:

❑ The selection of pizzas
❑ What sizes they come in
❑ Available drinks

Each of these requires a different data control: the pizzas will be displayed using a DataGrid, the sizes using a RadioButtonList, and the drinks using a Repeater with custom content. In order to format them nicely, the controls are contained within an HTML table. Dragging a table onto the design surface provides, by default, a table that has three rows and three columns, so we need to edit the code in HTML view in order to delete two of the rows. So that we can display the selection of pizzas, we drag a DataGrid into the first cell of the table.

The Auto Format and the Property Builder links, found at the bottom of the Properties panel, can be used to format and customize the DataGrid. These links provide access to the same builders that Visual Studio .NET uses. The Auto Format builder allows us to pick a visual style; the designer will set the style properties of the grid for us. The Property Builder allows us, among other things, to set the column details. Here we've stated that we don't want the columns automatically generated, and we've added two TemplateColumns.
Closing the dialog and switching to HTML view allows us to see what this has done:

<asp:DataGrid runat="server">
  <FooterStyle forecolor="#330099" backcolor="#FFFFCC"></FooterStyle>
  <HeaderStyle font- </HeaderStyle>
  <PagerStyle horizontalalign="Center" forecolor="#330099" backcolor="#FFFFCC"></PagerStyle>
  <SelectedItemStyle font- </SelectedItemStyle>
  <ItemStyle forecolor="#330099" backcolor="White"></ItemStyle>
  <Columns>
    <asp:TemplateColumn></asp:TemplateColumn>
    <asp:TemplateColumn></asp:TemplateColumn>
  </Columns>
</asp:DataGrid>

Now we can add our details into the template columns. There are two ways of doing this. The first is by simply switching to HTML view and typing in the details:

<Columns>
  <asp:TemplateColumn>
    <ItemTemplate>
      <asp:ImageButton runat="server" />
    </ItemTemplate>
  </asp:TemplateColumn>
  <asp:TemplateColumn>
    <ItemTemplate>
      <b><asp:Label runat="server" /></b>
      <br/>
      <%# DataBinder.Eval(Container.DataItem, "description") %><br/>
      Toppings: <%# DataBinder.Eval(Container.DataItem, "ingredients") %>
    </ItemTemplate>
  </asp:TemplateColumn>
</Columns>

The first column is an image button, which acts as our selection method. The second column displays the name of each pizza in bold, followed by its description and the list of toppings.
I'm trying to get synonyms of a given word using WordNet. The problem is that even though I'm doing the same as is written here, it returns an error. Here is my code:

from nltk.corpus import wordnet as wn
import nltk

dog = wn.synset('dog.n.01')
print dog.lemma_names
>>> <bound method Synset.lemma_names of Synset('dog.n.01')>

for i,j in enumerate(wn.synsets('small')):
    print "Synonyms:", ", ".join(j.lemma_names)
>>> Synonyms:
Traceback (most recent call last):
  File "C:/Users/Python/PycharmProjects/PribliznostneVyhladavanie/testy.py", line 38, in <module>
    print "Synonyms:", ", ".join(j.lemma_names)
TypeError

syns = wn.synsets('car')
print [l.name for s in syns for l in s.lemmas]
>>> TypeError: 'instancemethod' object is not iterable

Which version of nltk are you using (try print nltk.__version__)? Are you using Python 2 or Python 3? It seems that in the version you are using, lemma_names is a method and not an attribute (this is the case in nltk 3.0 for Python 3). If this is the case then you can probably fix your code by using this instead:

for i,j in enumerate(wn.synsets('small')):
    print "Synonyms:", ", ".join(j.lemma_names())
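If the code needs to run against both old and new nltk releases, you can probe whether lemma_names is callable instead of hard-coding one style. A minimal sketch — the helper name is mine, not part of nltk:

```python
def lemma_names_of(synset):
    """Return the lemma names of a synset across nltk versions.

    In older nltk releases, Synset.lemma_names is a plain list;
    in newer ones (e.g. nltk 3.0) it is a method returning the list.
    """
    names = synset.lemma_names
    # A bound method is callable; a plain list is not.
    return names() if callable(names) else names
```

With this helper, `", ".join(lemma_names_of(j))` works regardless of which nltk version is installed.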
This page lists a few tips to help you investigate issues related to SpiderMonkey. All tips listed here deal with the JavaScript shell obtained at the end of the SpiderMonkey build documentation. It is separated into two parts, one section related to debugging and another section related to drafting optimizations. Many of these tips only apply to debug builds of the JS shell; they will not function in a release build.

Debugging Tips

Getting help (from JS shell)

Use the help function to get the list of all primitive functions of the shell with their description. Note that some functions have been moved under an 'os' object, and help(os) will give brief help on just the members of that "namespace".

Getting the bytecode of a function (from JS shell)

The shell has a small function named dis to dump the bytecode of a function with its source notes. Without arguments, it will dump the bytecode of its caller.

js> function f () { return 1; }
js> dis(f);
flags:
loc     op
-----   --
main:
00000:  one
00001:  return
00002:  stop

Source notes:
 ofs  line    pc  delta desc     args
---- ---- ----- ------ -------- ------
  0:    1     0 [   0] newline
  1:    2     0 [   0] colspan 2
  3:    2     2 [   2] colspan 9

Getting the bytecode of a function (from gdb)

In jsopcode.cpp, a function named js::DisassembleAtPC can print the bytecode of a script. Some variants of this function, such as js::DumpScript, are convenient for debugging.

Printing the JS stack (from gdb)

In jsobj.cpp, a function named js::DumpBacktrace prints a backtrace à la gdb for the JS stack. Each frame of the backtrace contains, in order: the stack depth, the interpreter frame pointer (see js/src/vm/Stack.h, StackFrame class) or (nil) if compiled with IonMonkey, the file and line number of the call location, and, in parentheses, the JSScript pointer and the jsbytecode pointer (pc) being executed.
$ gdb --args js
[…]
(gdb) b js::ReportOverRecursed
(gdb) r
js> function f(i) { if (i % 2) f(i + 1); else f(i + 3); }
js> f(0)

Breakpoint 1, js::ReportOverRecursed (maybecx=0xfdca70) at /home/nicolas/mozilla/ionmonkey/js/src/jscntxt.cpp:495
495         if (maybecx)
(gdb) call js::DumpBacktrace(maybecx)
#0          (nil)   typein:2 (0x7fffef1231c0 @ 0)
#1          (nil)   typein:2 (0x7fffef1231c0 @ 24)
#2          (nil)   typein:3 (0x7fffef1231c0 @ 47)
#3          (nil)   typein:2 (0x7fffef1231c0 @ 24)
#4          (nil)   typein:3 (0x7fffef1231c0 @ 47)
[…]
#25157 0x7fffefbbc250   typein:2 (0x7fffef1231c0 @ 24)
#25158 0x7fffefbbc1c8   typein:3 (0x7fffef1231c0 @ 47)
#25159 0x7fffefbbc140   typein:2 (0x7fffef1231c0 @ 24)
#25160 0x7fffefbbc0b8   typein:3 (0x7fffef1231c0 @ 47)
#25161 0x7fffefbbc030   typein:5 (0x7fffef123280 @ 9)

Note: you can do the exact same exercise above using lldb (necessary on OS X after Apple removed gdb) by running lldb -f js and then following the remaining steps.

Since SpiderMonkey 48, we have a gdb unwinder. This unwinder is able to read the frames created by the JIT, and to display the frames which are after these JIT frames.

$ gdb --args out/dist/bin/js ./foo.js
[…]
SpiderMonkey unwinder is disabled by default, to enable it type:
        enable unwinder .* SpiderMonkey
(gdb) b js::math_cos
(gdb) run
[…]
#0  js::math_cos (cx=0x14f2640, argc=1, vp=0x7fffffff6a88) at js/src/jsmath.cpp:338
338         CallArgs args = CallArgsFromVp(argc, vp);
(gdb) enable unwinder .* SpiderMonkey
(gdb) backtrace 10
#0  0x0000000000f89979 in js::math_cos(JSContext*, unsigned int, JS::Value*) (cx=0x14f2640, argc=1, vp=0x7fffffff6a88) at js/src/jsmath.cpp:338
#1  0x0000000000ca9c6e in js::CallJSNative(JSContext*, bool (*)(JSContext*, unsigned int, JS::Value*), JS::CallArgs const&) (cx=0x14f2640, native=0xf89960 , args=...)
    at js/src/jscntxtinlines.h:235
#2  0x0000000000c87625 in js::Invoke(JSContext*, JS::CallArgs const&, js::MaybeConstruct) (cx=0x14f2640, args=..., construct=js::NO_CONSTRUCT) at js/src/vm/Interpreter.cpp:476
#3  0x000000000069bdcf in js::jit::DoCallFallback(JSContext*, js::jit::BaselineFrame*, js::jit::ICCall_Fallback*, uint32_t, JS::Value*, JS::MutableHandleValue) (cx=0x14f2640, frame=0x7fffffff6ad8, stub_=0x1798838, argc=1, vp=0x7fffffff6a88, res=JSVAL_VOID) at js/src/jit/BaselineIC.cpp:6113
#4  0x00007ffff7f41395 in <<JitFrame_Exit>> ()
#5  0x00007ffff7f42223 in <<JitFrame_BaselineStub>> ()
#6  0x00007ffff7f4423d in <<JitFrame_BaselineJS>> ()
#7  0x00007ffff7f4222e in <<JitFrame_BaselineStub>> ()
#8  0x00007ffff7f4326a in <<JitFrame_BaselineJS>> ()
#9  0x00007ffff7f38d5f in <<JitFrame_Entry>> ()
#10 0x00000000006a86de in EnterBaseline(JSContext*, js::jit::EnterJitData&) (cx=0x14f2640, data=...) at js/src/jit/BaselineJIT.cpp:150

Note: when you enable the unwinder, the current version of gdb (7.10.1) does not flush the backtrace. Therefore, the JIT frames do not appear until you settle on the next breakpoint. To work around this issue you can use the recording feature of gdb to step one instruction and settle back to where you came from, with the following set of gdb commands:

(gdb) record full
(gdb) si
(gdb) record goto 0
(gdb) record stop

If you have a core file, you can use the gdb unwinder the same way, or do everything from the command line as follows:

$ gdb -ex 'enable unwinder .* SpiderMonkey' -ex 'bt 0' -ex 'thread apply all backtrace' -ex 'quit' out/dist/bin/js corefile

The gdb unwinder is supposed to be loaded by dist/bin/js-gdb.py and to load python scripts which are located in js/src/gdb/mozilla under gdb. If gdb does not load the unwinder by default, you can force it to by using the source command with the js-gdb.py file.
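The js::DumpBacktrace lines shown earlier have a regular shape (`#depth fp file:line (script @ pc)`), so a very deep dump — like the 25,000-frame recursion above — can be post-processed mechanically. A hedged Python sketch, based only on the sample output shown above (the function name and field choices are mine):

```python
import re

# One DumpBacktrace line, e.g.:
#   "#3 (nil) typein:2 (0x7fffef1231c0 @ 24)"
#   "#25161 0x7fffefbbc030 typein:5 (0x7fffef123280 @ 9)"
FRAME_RE = re.compile(
    r"#(?P<depth>\d+)\s+(?P<fp>\(nil\)|0x[0-9a-f]+)\s+"
    r"(?P<file>.+):(?P<line>\d+)\s+"
    r"\((?P<script>0x[0-9a-f]+)\s+@\s+(?P<pc>\d+)\)"
)

def parse_backtrace(text):
    """Parse js::DumpBacktrace output into a list of frame dicts."""
    frames = []
    for m in FRAME_RE.finditer(text):
        frames.append({
            "depth": int(m.group("depth")),
            # (nil) means the frame ran in JIT code, not the interpreter.
            "fp": None if m.group("fp") == "(nil)" else int(m.group("fp"), 16),
            "file": m.group("file"),
            "line": int(m.group("line")),
            "script": int(m.group("script"), 16),
            "pc": int(m.group("pc")),
        })
    return frames
```

This makes it easy to, say, group frames by script and line when triaging runaway recursion.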
Setting a breakpoint in the generated code (from gdb, x86 / x86-64, arm)

To set a breakpoint in the generated code of a specific JSScript compiled with IonMonkey, set a breakpoint on the instruction you are interested in. If you have no precise idea which function you are looking at, you can set a breakpoint on the js::ion::CodeGenerator::visitStart function. Optionally, a condition on the ins->id() of the LIR instruction can be added to select precisely the instruction you are looking for. Once the breakpoint is on the CodeGenerator function of the LIR instruction, add a command to generate a static breakpoint in the generated code.

$ gdb --args js
[…]
(gdb) b js::ion::CodeGenerator::visitStart
(gdb) command
>call masm.breakpoint()
>continue
>end
(gdb) r
js> function f(a, b) { return a + b; }
js> for (var i = 0; i < 100000; i++) f(i, i + 1);

Breakpoint 1, js::ion::CodeGenerator::visitStart (this=0x101ed20, lir=0x10234e0)
    at /home/nicolas/mozilla/ionmonkey/js/src/ion/CodeGenerator.cpp:609
609     }

Program received signal SIGTRAP, Trace/breakpoint trap.
0x00007ffff7fb165a in ?? ()
(gdb)

Once you hit the generated breakpoint, you can replace it with a gdb breakpoint to make it conditional. The procedure is to first replace the generated breakpoint by a nop instruction, and then to set a breakpoint at the address of the nop.

(gdb) x /5i $pc - 1
   0x7ffff7fb1659:      int3
=> 0x7ffff7fb165a:      mov    0x28(%rsp),%rax
   0x7ffff7fb165f:      mov    %eax,%ecx
   0x7ffff7fb1661:      mov    0x30(%rsp),%rdx
   0x7ffff7fb1666:      mov    %edx,%ebx
(gdb) # replace the int3 by a nop
(gdb) set *(unsigned char *) ($pc - 1) = 0x90
(gdb) x /1i $pc - 1
   0x7ffff7fb1659:      nop
(gdb) # set a breakpoint at the previous location
(gdb) b *0x7ffff7fb1659
Breakpoint 2 at 0x7ffff7fb1659

Printing Ion generated assembly code (from gdb)

If you want to look at the assembly code generated by IonMonkey, you can follow this procedure:

- Place a breakpoint at CodeGenerator.cpp on the CodeGenerator::link method.
- Step next a few times, so that the "code" variable gets generated
- Print code->code_, which is the address of the code
- Disassemble the code read at this address (using x/Ni address, where N is the number of instructions you would like to see)

Here is an example. It might be simpler to use the line number of CodeGenerator::link instead of the fully qualified name to place the breakpoint. Let's say that the line number of this function is 4780, for instance:

(gdb) b CodeGenerator.cpp:4780
Breakpoint 1 at 0x84cade0: file /home/code/mozilla-central/js/src/ion/CodeGenerator.cpp, line 4780.
(gdb) r
Starting program: /home/code/mozilla-central/js/src/32-release/js -f /home/code/jaeger.js
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0xf7903b40 (LWP 12563)]
[New Thread 0xf6bdeb40 (LWP 12564)]
Run#0

Breakpoint 1, js::ion::CodeGenerator::link (this=0x86badf8) at /home/code/mozilla-central/js/src/ion/CodeGenerator.cpp:4780
4780    {
(gdb) n
4781        JSContext *cx = GetIonContext()->cx;
(gdb) n
4783        Linker linker(masm);
(gdb) n
4784        IonCode *code = linker.newCode(cx, JSC::ION_CODE);
(gdb) n
4785        if (!code)
(gdb) p code->code_
$1 = (uint8_t *) 0xf7fd25a8 "\201", <incomplete sequence \354\200>
(gdb) x/2i 0xf7fd25a8
   0xf7fd25a8:  sub    $0x80,%esp
   0xf7fd25ae:  mov    0x94(%esp),%ecx

On arm, the compiled JS code will always be ARM machine code, whereas spidermonkey itself is frequently Thumb2. Since there isn't debug info for the jitted code, you will need to tell gdb that you are looking at ARM code:

(gdb) set arm force-mode arm

Or you can wrap the x command in your own command:

def xi
  set arm force-mode arm
  eval "x/%di %d", $arg0, $arg1
  set arm force-mode auto
end

Printing asm.js/wasm generated assembly code (from gdb)

Set a breakpoint on js::wasm::Instance::callExport (defined in WasmInstance.cpp as of November 18th, 2016).
This will trigger for *any* asm.js/wasm call, so you should find a way to set this breakpoint for only the generated code you want to look at. Run the program, then do next in gdb until you reach the definition of the funcPtr:

// Call the per-exported-function trampoline created by GenerateEntry.
auto funcPtr = JS_DATA_TO_FUNC_PTR(ExportFuncPtr, codeBase() + func.entryOffset());
if (!CALL_GENERATED_2(funcPtr, exportArgs.begin(), &tlsData_))
    return false;

After it's set, x/64i funcPtr will show you the trampoline code. There should be a call to an address at some point; that's what we're targeting. Copy that address.

0x7ffff7ff6000:      push   %r15
0x7ffff7ff6002:      push   %r14
0x7ffff7ff6004:      push   %r13
0x7ffff7ff6006:      push   %r12
0x7ffff7ff6008:      push   %rbp
0x7ffff7ff6009:      push   %rbx
0x7ffff7ff600a:      movabs $0xea4f80,%r10
0x7ffff7ff6014:      mov    0x178(%r10),%r10
0x7ffff7ff601b:      mov    %rsp,0x40(%r10)
0x7ffff7ff601f:      mov    (%rsi),%r15
0x7ffff7ff6022:      mov    %rdi,%r10
0x7ffff7ff6025:      push   %r10
0x7ffff7ff6027:      test   $0xf,%spl
0x7ffff7ff602b:      je     0x7ffff7ff6032
0x7ffff7ff6031:      int3
0x7ffff7ff6032:      callq  0x7ffff7ff5000    <------ right here

- x/64i address (in this case: x/64i 0x7ffff7ff6032).
- If you want to put a breakpoint at the function's entry, you can do: b *address (for instance here, b *0x7ffff7ff6032). Then you can display the instructions around pc with x/20i $pc, and execute instruction by instruction with stepi.

Finding the script of Ion generated assembly (from gdb)

When facing a bug in which you are in the middle of IonMonkey generated code, the first thing to note is that gdb's backtrace is not reliable, because the generated code does not keep a frame pointer. To figure it out, you have to read the stack to infer the IonMonkey frame.
(gdb) x /64a $sp
[…]
0x7fffffff9838: 0x7ffff7fad2da  0x141
0x7fffffff9848: 0x7fffef134d40  0x2
[…]
(gdb) p (*(JSFunction**) 0x7fffffff9848)->u.i.script_->lineno
$1 = 1
(gdb) p (*(JSFunction**) 0x7fffffff9848)->u.i.script_->filename
$2 = 0xff92d1 "typein"

The stack is ordered as defined in js/src/ion/IonFrames-x86-shared.h: it is composed of the return address, a descriptor (a small value), the JSFunction (if it is even) or a JSScript (if it is odd; remove the low bit to dereference the pointer), and the frame ends with the number of actual arguments (a small value too). If you want to know which LIR the code is failing at, the js::ion::CodeGenerator::generateBody function can be instrumented to dump the LIR id before each instruction:

for (; iter != current->end(); iter++) {
    IonSpew(IonSpew_Codegen, "instruction %s", iter->opName());
    […]
    masm.store16(Imm32(iter->id()), Address(StackPointer, -8)); // added
    if (!iter->accept(this))
        return false;
    […]
}

This modification will add an instruction which abuses the stack pointer to store an immediate value (the LIR id) to a location which would never be generated by any sane compiler. Thus, when dumping the assembly under gdb, this kind of instruction is easily noticeable.

Viewing the MIRGraph of Ion/Odin compilations (from gdb)

With gdb instrumentation, we can call the iongraph program within gdb when the execution is stopped. This instrumentation adds an iongraph command which, when provided with an instance of a MIRGenerator*, will call iongraph, graphviz, and your preferred png viewer to display the MIR graph at the precise time of the execution. To find MIRGenerator* instances, the best approach is to look up the stack for OptimizeMIR or CodeGenerator::generateBody. The OptimizeMIR function has a mir argument, and the CodeGenerator::generateBody function has a member this->gen.
(gdb) bt
#0  0x00000000007eaad4 in js::InlineList<js::jit::MBasicBlock>::begin() const (this=0x33dbbc0) at …/js/src/jit/InlineList.h:280
#1  0x00000000007cb845 in js::jit::MIRGraph::begin() (this=0x33dbbc0) at …/js/src/jit/MIRGraph.h:787
#2  0x0000000000837d25 in js::jit::BuildPhiReverseMapping(js::jit::MIRGraph&) (graph=...) at …/js/src/jit/IonAnalysis.cpp:2436
#3  0x000000000083317f in js::jit::OptimizeMIR(js::jit::MIRGenerator*) (mir=0x33dbdf0) at …/js/src/jit/Ion.cpp:1570
…
(gdb) frame 3
#3  0x000000000083317f in js::jit::OptimizeMIR(js::jit::MIRGenerator*) (mir=0x33dbdf0) at …/js/src/jit/Ion.cpp:1570
(gdb) iongraph mir
function 0 (asm.js compilation): success; 1 passes.
/* open your png viewer with the result of iongraph */

This gdb instrumentation is supposed to work with debug builds, or with optimized builds compiled with the --enable-jitspew configure flag. External programs such as iongraph, dot, and your png viewer are searched for in the PATH; otherwise custom ones can be configured either with environment variables (GDB_IONGRAPH, GDB_DOT, GDB_PNGVIEWER) before starting gdb, or with gdb parameters (set iongraph-bin <path>, set dot-bin <path>, set pngviewer-bin <path>) within gdb.

Enabling GDB instrumentation may require launching a JS shell executable that shares a directory with a file named "js-gdb.py". If js/src/js does not provide the "iongraph" command, try js/src/shell/js. GDB may complain that ~/.gdbinit requires modification to authorize user scripts, and if so will print out directions.

Break on valgrind errors

Sometimes a bug can be reproduced under valgrind but hardly under gdb. One way to investigate is to let valgrind start gdb for you; the other way, documented here, is to let valgrind act as a gdb server which can be manipulated from the gdb remote.

$ valgrind --smc-check=all-non-file --vex-iropt-register-updates=allregs-at-mem-access --vgdb-error=0 ./js
…

This command will tell you how to start gdb as a remote.
Be aware that functions which usually dump some output will do so in the shell where valgrind is started, and not in the shell where gdb is started. Thus functions such as js::DumpBacktrace, when called from gdb, will print their output in the shell containing valgrind.

Adding spew for Compilations & Bailouts & Invalidations (from gdb)

If you are in rr and forgot to record with the spew enabled with IONFLAGS, or because this is an optimized build, then you can add similar spew with extra breakpoints within gdb. gdb has the ability to set breakpoints with commands, but a simpler / friendlier version is to use dprintf, with a location, followed by printf-like arguments.

(gdb) dprintf js::jit::IonBuilder::IonBuilder, "Compiling %s:%d:%d-%d\n", info->script_->scriptSource()->filename_.mTuple.mFirstA, info->script_->lineno_, info->script_->sourceStart_, info->script_->sourceEnd_
Dprintf 1 at 0x7fb4f6a104eb: file /home/nicolas/mozilla/contrib-push/js/src/jit/IonBuilder.cpp, line 159.
(gdb) cond 1 inliningDepth == 0
(gdb) dprintf js::jit::BailoutIonToBaseline, "Bailout from %s:%d:%d-%d\n", iter.script()->scriptSource()->filename_.mTuple.mFirstA, iter.script()->lineno_, iter.script()->sourceStart_, iter.script()->sourceEnd_
Dprintf 2 at 0x7fb4f6fe43dc: js::jit::BailoutIonToBaseline. (2 locations)
(gdb) dprintf Ion.cpp:3196, "Invalidate %s:%d:%d-%d\n", co->script_->scriptSource()->filename_.mTuple.mFirstA, co->script_->lineno_, co->script_->sourceStart_, co->script_->sourceEnd_
Dprintf 3 at 0x7fb4f6a0b62a: file /home/nicolas/mozilla/contrib-push/js/src/jit/Ion.cpp, line 3196.
(gdb) continue
Compiling self-hosted:650:20470-21501
Bailout from self-hosted:20:403-500
Invalidate self-hosted:20:403-500

Note: line 3196, listed above, corresponds to the location of the JIT spew inside the jit::Invalidate function.

Hacking tips

Benchmarking (shell)

AreWeFastYet.com displays the benchmark results of the JavaScript shell, and of the browser for B2G.
These benchmarks are publicly recognized benchmarks suggested by other companies, used as a metric to evaluate how fast JavaScript engines are. This tool is maintained by the JavaScript Team to find regressions and to compare SpiderMonkey with other JavaScript engines when possible. To run these benchmarks locally, you can clone the AreWeFastYet sources and look inside the benchmarks directory to run individual benchmarks with your JS shell.

Using the Gecko Profiler (browser / xpcshell)

See the section dedicated to profiling with the Gecko Profiler. This method of profiling has the advantage of mixing the JavaScript stack with the C++ stack, which is useful for analyzing library function issues. One tip is to start looking at a script with an inverted JS stack to locate the most expensive JS function, then to focus on the frame of this JS function, and then to remove the inverted stack and look at the C++ part of this function to determine where the cost is coming from.

Using the JIT Inspector (browser)

Install the JIT Inspector addon in your browser. This addon provides estimated costs for IonMonkey, the Baseline compiler, and the interpreter. In addition, it provides a clean way to analyze whether instructions are inferred as being monomorphic or polymorphic, along with the number of times each category of type has been observed.

Using the TraceLogger (JS shell / browser)

The TraceLogger creates graphs showing the time spent in each engine and each function. Whenever a testcase runs, the file "tl-data.json" and several "tl-*" files get created in the "/tmp" directory (per process a "tl-data-*PID*.json" file, and per thread "tl-tree.*PID*.*ID*.tl", "tl-event.*PID*.*ID*.tl" and "tl-dict.*PID*.*ID*.json" files). These files contain all the information needed to create a tracelogger graph. The instructions to create the graph can be found with the tracelogger tools (Tools V2 > 1. Creating a tracelogging graph).

Note 1: when you are doing this from a local page you will probably get a security warning in the console.
This is because Firefox doesn't allow loading files from the hard disk using an HTTP request, even when the page loading the file is itself on the hard disk. There are two solutions. One is to create a localhost server and serve the files there; the simplest way to do this is to run python -m SimpleHTTPServer from within the above directory. The other is to disable this check in "about:config" by temporarily switching "security.fileuri.strict_origin_policy" to false.

Note 2: The files can be very big and take a long time to load in the browser. Therefore it might be good to reduce the logged file. This will remove entries that took only a minor amount of time (i.e. entries that would only show up as less than 1px). This can be done with the reduce.py script. You need to download "engine.js", "reduce.py", "reduce.js", "reduce-tree.js" and "reduce-corrections.js". Running this tool is a matter of running "python reduce.py JS_SHELL /tmp/tl-data.json tl-reduced", where JS_SHELL is a real shell.

Using callgrind (JS shell)

As the SpiderMonkey just-in-time compilers rewrite the executed program, valgrind should be informed from the command line by adding --smc-check=all-non-file.

$ valgrind --tool=callgrind --callgrind-out-file=bench.clg \
    --smc-check=all-non-file --vex-iropt-register-updates=allregs-at-mem-access ./js ./run.js

The output file can then be used with kcachegrind, which provides a graphical view of the call graph.

Using IonMonkey spew (JS shell)

IonMonkey spew is extremely verbose (though not as much as the INFER spew), but you can filter it to focus on the list of compiled scripts or channels. IonMonkey spew channels can be selected with the IONFLAGS environment variable, and compilation spew can be filtered with IONFILTER. IONFLAGS contains the names of each channel separated by commas. The logs channel produces two files in /tmp/: one (/tmp/ion.json) made to be used with iongraph (made by Sean Stangl) and another one (/tmp/ion.cfg) made to be used with c1visualizer.
These tools will show the MIR & LIR steps done by IonMonkey during the compilation. If you would like to use iongraph, you must install Graphviz. Compilation logs and spew can be filtered with the IONFILTER environment variable, which contains locations as output in other spew channels. Multiple locations can be separated with commas.

$ IONFILTER=pdfjs.js:16934 IONFLAGS=logs,scripts,osi,bailouts ./js --ion-offthread-compile=off ./run.js 2>&1 | less

The bailouts channel is likely to be the first thing you should focus on, because it means that something does not stay in IonMonkey and falls back to the interpreter. This channel outputs locations (as returned by the id() function of both instructions) of the latest MIR and the latest LIR phases. These locations should correspond to phases of the logs, and a filter can be used to remove uninteresting functions.

Using the ARM simulator

The ARM simulator can be used to test the ARM JIT backend on x86/x64 hardware. An ARM simulator build is an x86 shell (or browser) with the ARM JIT backend. Instead of entering JIT code, it runs it in a simulator (interpreter) for ARM code. To use the simulator, compile an x86 shell (32-bit; x64 doesn't work as we use a different Value format there), and pass --enable-simulator=arm to configure. For instance, on a 64-bit Linux host you can use the following configure command to get an ARM simulator build:

AR=ar CC="gcc -m32" CXX="g++ -m32" ../configure --target=i686-pc-linux --enable-debug --disable-optimize --enable-threadsafe --enable-simulator=arm

Or on OS X:

$ AR=ar CC="clang -m32" CXX="clang++ -m32" ../configure --target=i686-apple-darwin10.0.0 --enable-debug --disable-optimize --enable-threadsafe --enable-arm-simulator
Use the VIXL Debugger in the simulator (arm64) Set a breakpoint (see the comments above about masm.breakpoint()) and run with the environment variable USE_DEBUGGER=1. This will then drop you into a simple debugger provided with VIXL, the ARM simulator technology used for arm64 simulation. Use the Simulator Debugger for arm32 The same instructions for arm64 in the preceeding section apply, but the environment variable differs: Use ARM_SIM_DEBUGGER=1. Building the browser with the ARM simulator You can also build the entire browser with the ARM simulator backend, for instance to reproduce browser-only JS failures on ARM. Make sure to build a browser for x86 (32-bits) and add this option to your mozconfig file: ac_add_options --enable-arm-simulator If you are under an Ubuntu or Debian 64-bits distribution and you want to build a 32-bits browser, it might be hard to find the relevant 32-bits dependencies. You can use padenot's scripts which will magically setup a chrooted 32-bits environment and do All The Things (c) for you (you just need to modify the mozconfig file). Using rr on a test Get the command line for your test run using -s: ./jit_test.py -s $JS_SHELL saved-stacks/async.js Insert 'rr' before the shell invocation: rr $JS_SHELL -f $JS_SRC/jit-test/lib/prolog.js --js-cache $JS_SRC/jit-test/.js-cache -e "const platform='linux2'; const libdir='$JS_SRC/jit-test/lib/'; const scriptdir='$JS_SRC/jit-test/tests/saved-stacks/'" -f $JS_SRC/jit-test/tests/saved-stacks/async.js (note that the above is an example; simply setting JS_SHELL and JS_SRC will not work). Or if this is an intermittent, run it in a loop capturing an rr log for every one until it fails: n=1; while rr ...same.as.above...; do echo passed $n; n=$(( $n + 1 )); done Wait until it hits a failure. Now you can run rr replay to replay that last (failed) run under gdb. 
rr with reftest

To break on the write of a differing pixel:

- Find the X/Y of a pixel that differs
- Use 'run Z' where Z is the mark in the log for TEST-START. For example, in '[rr 28496 607198]REFTEST TEST-START |' Z would be 607198.
- break 'mozilla::dom::CanvasRenderingContext2D::DrawWindow(nsGlobalWindow&, double, double, double, double, nsAString_internal const&, unsigned int, mozilla::ErrorResult&)'
- cont
- break 'PresShell::RenderDocument(nsRect const&, unsigned int, unsigned int, gfxContext*)'
- set print object on
- set $x = <YOUR X VALUE>
- set $y = <YOUR Y VALUE>
- print &((cairo_image_surface_t*)aThebesContext->mDT.mRawPtr->mSurface).data[$y * ((cairo_image_surface_t*)aThebesContext->mDT.mRawPtr->mSurface).stride + $x * ((cairo_image_surface_t*)aThebesContext->mDT.mRawPtr->mSurface).depth / 8]
- watch *(char*)<ADDRESS OF PREVIOUS COMMAND>

(NOTE: if you set a watch on the previous expression itself, gdb will watch the expression and run out of watchpoints)

rr with emacs

Within emacs, do M-x gud-gdb and replace the command line with rr replay. When gdb comes up, enter set annot 1 to get it to emit file location information so that emacs will pop up the corresponding source. Note that if you reverse-continue over a SIGSEGV and you're using the standard .gdbinit that sets a catchpoint for that signal, you'll get an additional stop at the catchpoint. Just reverse-continue again to continue to your breakpoints or whatever.

[Hack] Replacing one instruction

To replace one specific instruction, you can use, in the visit function of each instruction, the JSScript filename and lineno fields as well as the id() of the LIR / MIR instructions. The JSScript can be obtained from info().script().

bool
CodeGeneratorX86Shared::visitGuardShape(LGuardShape *guard)
{
    if (info().script()->lineno == 16934 && guard->id() == 522) {
        [… another impl only for this one …]
        return true;
    }
    [… old impl …]
}

[Hack] Spewing all compiled code

I usually just add this to the appropriate executableCopy.
if (getenv("INST_DUMP")) {
    char buf[4096];
    sprintf(buf, "gdb /proc/%d/exe %d -batch -ex 'set pagination off' -ex 'set arm force-mode arm' -ex 'x/%di %p' -ex 'set arm force-mode auto'", getpid(), getpid(), m_buffer.size() / 4, buffer);
    system(buf);
}

If you aren't running on arm, you should omit the -ex 'set arm force-mode arm' and -ex 'set arm force-mode auto'. And you should change the size()/4 to be something more appropriate for your architecture.

Benchmarking with sub-milliseconds (JS shell)

In the shell we have two simple ways to benchmark a script. We can either use the -b shell option (--print-timing), which will evaluate a script given on the command line without any need to instrument the benchmark and print an extra line showing the run-time of the script. The other way is to wrap the section that you want to measure with the dateNow() function call, which returns the number of milliseconds, with a decimal part for sub-milliseconds.

js> dateNow() - dateNow()
-0.0009765625

Benchmarking with sub-milliseconds (browser)

In a similar way to dateNow() in the JS shell, you can use performance.now() in the JavaScript code of a page.

Dumping the JavaScript heap

From the shell, you can call the dumpHeap function to dump out all GC things (reachable and unreachable) that are present in the heap. By default the function writes to stdout, but a filename can be specified as an argument. Example output might look as follows:

0x1234abcd B global object
==========
# zone 0x56789123
# compartment [in zone 0x56789123]
# compartment [in zone 0x56789123]
# arena allockind=3 size=64
0x1234abcd B object
> 0x1234abcd B prop1
> 0xabcd1234 W prop2
0xabcd1234 W object
> 0xdeadbeef B prop3
# arena allockind=5 size=72
0xdeadbeef W object
> 0xabcd1234 W prop4

The output is textual. The first section of the file contains a list of roots, one per line.
Each root has the form "0xabcd1234 <color> <description>", where <color> is the color of the given GC thing (B for black, G for gray, W for white) and <description> is a string. The list of roots ends with a line containing "==========".

After the roots come a series of zones. A zone starts with several "comment lines" that start with hashes. The first comment declares the zone. It is followed by lines listing each compartment within the zone. After all the compartments come arenas, which is where the GC things are actually stored. Each arena is followed by all the GC things in the arena. A GC thing starts with a line giving its address, its color, and the thing kind (object, function, whatever). After this comes a list of addresses that the GC thing points to, each one starting with ">".

It's also possible to dump the JavaScript heap from C++ code (or from gdb) using the js::DumpHeap function. It is part of jsfriendapi.h and it is available in release builds.

Inspecting MIR objects within a debugger

For MIRGraph, MBasicBlock, and MDefinition and its subclasses (MInstruction, MConstant, etc.), call the dump member function.

(gdb) call graph->dump()
(gdb) call block->dump()
(gdb) call def->dump()

Benchmarking without a Phone

If you do not have a mobile device or prefer to test on your desktop first, you will need to throttle your computer so that it runs programs about as fast as they would run on a phone. On Linux, you can manage the resources available to a program by using cgroups; to do so, you can install libcgroup, which provides some convenient tools such as cgexec to wrap the program that you want to benchmark.

The following list of commands is used to create 3 control groups. The top-level control group is just there to group the mask and the negate-mask. The mask control group is used to run the program that we want to benchmark. The negate-mask control group is used to reserve resources which might be used by the other program if not reserved.
$ sudo cgcreate -a nicolas:users -t nicolas:users -g cpuset,cpu,memory:/benchmarks
$ cgcreate -a nicolas:users -t nicolas:users -g cpuset,cpu,memory:/benchmarks/mask
$ cgcreate -a nicolas:users -t nicolas:users -g cpuset,cpu,memory:/benchmarks/negate-mask

Then we restrict programs of these groups to the first core of the CPU. This is a list of CPUs, which means that we could allocate 2 cores by writing 0-1 instead of 0.

$ cgset -r cpuset.cpus=0 /benchmarks
$ cgset -r cpuset.cpus=0 /benchmarks/mask
$ cgset -r cpuset.cpus=0 /benchmarks/negate-mask

Then we restrict programs of these groups to the first memory node. Most of the time you will only have one; otherwise you should read up on what the best setting is here. If this is not set, you will get an error when you try to write a pid into /sys/fs/cgroup/cpuset/benchmarks/mask/tasks while running cgexec.

$ cgset -r cpuset.mems=0 /benchmarks
$ cgset -r cpuset.mems=0 /benchmarks/mask
$ cgset -r cpuset.mems=0 /benchmarks/negate-mask

Then we limit the performance of the CPU, as a proportion, so that the result approximately corresponds to what you might get if you were running on a phone. For example, an Unagi is approximately 40 times slower than my computer, so I allocate 1/40 for the mask and 39/40 for the negate-mask.

$ cgset -r cpu.shares=1 /benchmarks/mask
$ cgset -r cpu.shares=39 /benchmarks/negate-mask

Then we limit the memory available to what would be available on the phone. For example, on an Unagi you want to limit this to 512 MB. As there is no swap on this device, we set the memsw (Memory+Swap) limit to the same value.

$ cgset -r memory.limit_in_bytes=$((512*1024*1024)) /benchmarks/mask
$ cgset -r memory.memsw.limit_in_bytes=$((512*1024*1024)) /benchmarks/mask

And finally, we run the program that we want to benchmark after the one which is consuming resources. In the case of the JS shell we might also want to set the amount of memory available, to change the GC settings as if we were running on a Firefox OS device.
$ cgexec -g 'cpuset,cpu,memory:/benchmarks/negate-mask' yes > /dev/null &
$ cgexec -g 'cpuset,cpu,memory:/benchmarks/mask' ./js --available-memory=512 ./run.js

How to debug oomTest() failures

The oomTest() function executes a piece of code many times, simulating an OOM failure at each successive allocation it makes. It's designed to highlight incorrect OOM handling, and this may show up as a crash or assertion failure at some later point. When debugging such a crash, the most useful thing is to locate the last simulated allocation failure, as it's usually this that has caused the subsequent crash. My workflow for doing this is as follows:

- Build a version of the engine with the --enable-debug and --enable-oom-breakpoint configure flags.
- Set the environment variable OOM_VERBOSE=1 and reproduce the failure. This will print an allocation count at each simulated failure. Note the count of the last allocation.
- Run the engine under a debugger and set a breakpoint on the function js_failedAllocBreakpoint.
- Run the program and continue the necessary number of times until you reach the final allocation. E.g. in lldb, if the allocation failure number shown is 1500, run `continue -i 1498` (subtract 2 because we've already hit it once and don't want to skip the last one). Drop "-i" for gdb.
- Dump a backtrace. This should show you the point at which the OOM is incorrectly handled, which will be a few frames up from the breakpoint.

Note: if you are on Linux it may be simpler to use rr.

Some guidelines for handling OOM, which lead to failures when they are not followed:

- Check for allocation failure! Fallible allocations must always be checked and handled, at a minimum by returning a status indicating failure to the caller.
- Report OOM to the context if you have one. If a function has a JSContext* argument, usually it should call js::ReportOutOfMemory(cx) on allocation failure to report this to the context.
- Sometimes it's OK to ignore OOM. For example, if you are performing a speculative optimisation you might abandon it and continue anyway. But in this case you may have to call cx->recoverFromOutOfMemory() if something further down the stack has already reported the failure.

Debugging GC marking/rooting

The js::debug namespace contains some functions that are useful for watching mark bits for an individual JSObject* (or any Cell*). js/src/gc/Heap.h contains a comment describing an example usage. Reproduced here:

// Sample usage from gdb:
//
// (gdb) p $word = js::debug::GetMarkWordAddress(obj)
// $1 = (uintptr_t *) 0x7fa56d5fe360
// (gdb) p/x $mask = js::debug::GetMarkMask(obj, js::gc::GRAY)
// $2 = 0x200000000
// (gdb) watch *$word
// Hardware watchpoint 7: *$word
// (gdb) cond 7 *$word & $mask
// (gdb) cont
//
// Note that this is *not* a watchpoint on a single bit. It is a watchpoint on
// the whole word, which will trigger whenever the word changes and the
// selected bit is set after the change.
//
// So if the bit changing is the desired one, this is exactly what you want.
// But if a different bit changes (either set or cleared), you may still stop
// execution if the $mask bit happened to already be set. gdb does not expose
// enough information to restrict the watchpoint to just a single bit.

Most of the time, you will want js::gc::BLACK (or you can just use 0) for the 2nd param to js::debug::GetMarkMask.
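The caveat in that comment (the watchpoint covers the whole mark word, not one bit) can be made concrete with a tiny model; this is purely illustrative Python, not SpiderMonkey code:

```python
def gdb_would_stop(old_word, new_word, mask):
    """Model of 'watch *$word' plus 'cond N *$word & $mask': gdb stops
    whenever the watched word changes AND the masked bit is set after
    the change."""
    return old_word != new_word and bool(new_word & mask)

# The bit we care about flips on: stop, as desired.
assert gdb_would_stop(0b0000, 0b0100, mask=0b0100)
# An unrelated bit changes while our bit is already set: gdb still
# stops, even though the watched bit itself did not change.
assert gdb_would_stop(0b0100, 0b0101, mask=0b0100)
# The word changes but our bit stays clear: no stop.
assert not gdb_would_stop(0b0000, 0b0001, mask=0b0100)
```

The second case is exactly the false positive the comment warns about: gdb cannot restrict the watchpoint to a single bit.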
https://developer.mozilla.org/pt-BR/docs/Mozilla/Projects/SpiderMonkey/Hacking_Tips
need help: cloth tag

On 19/10/2014 at 02:18, xxxxxxxx wrote:

User Information:
Cinema 4D Version: 16
Platform: Windows;
Language(s): C.O.F.F.E.E; XPRESSO; PYTHON;
---------

hi everyone.. i need help i want to do calculate cache in my multiple cloth tags in one user data button... please anyone can help... thanks (sorry for the my poor english)

On 19/10/2014 at 09:43, xxxxxxxx wrote:

Hi Suda, thanks for stopping by and don't worry, nobody will be blamed for his English. I'm no native speaker myself. If I understand correctly, you want to automate the job of caching cloth tags. By intuition I'd try that with a Python script. Unfortunately the SDK Support Team can't help with developing such scripts from scratch. But I'm quite sure, there's somebody in the community, who may want to try himself on this. Anybody?

On 19/10/2014 at 11:42, xxxxxxxx wrote:

Hi Suda, with Python you can do something like this:

import c4d

def GetNextObject(op):
    if op == None:
        return None
    if op.GetDown():
        return op.GetDown()
    while not op.GetNext() and op.GetUp():
        op = op.GetUp()
    return op.GetNext()

def IterateHierarchy(op):
    ClothTagList = []
    if op is None:
        return
    count = 0
    while op:
        count += 1
        if op.GetTag(100004020) != None:  # id for the cloth tag
            ClothTagList.append(op.GetTag(100004020))
        op = GetNextObject(op)
    return ClothTagList

def main():
    doc = c4d.documents.GetActiveDocument()
    first = doc.GetFirstObject()
    ClothTagList = IterateHierarchy(first)
    for i in ClothTagList:
        c4d.CallButton(i, c4d.CLOTH_DO_CALCULATE)

if __name__=='__main__':
    main()

On 20/10/2014 at 00:00, xxxxxxxx wrote:

Another idea would be to directly simulate the Cache button. You can use c4d.CallButton(...) to do this. As parameters you'll need to pass the object and the button ID. Be aware: this call needs to be done from the main thread.
For a normal Python script this is no problem; in a Python plugin you might need to take care.

On 20/10/2014 at 00:25, xxxxxxxx wrote:

Hi Andreas, I´ve already used the call button in the script. But thank you for the good hint to check the main thread inside a plugin. As I was scripting the snippet yesterday between changing diapers and singing a lullaby, here is a more efficient script without a list and with more error checking.

@ Suda: Hope this helps.

import c4d
from c4d import gui

def GetNextObject(op):
    if op == None:
        return None
    if op.GetDown():
        return op.GetDown()
    while not op.GetNext() and op.GetUp():
        op = op.GetUp()
    return op.GetNext()

def IterateHierarchy(op):
    while op:
        if op.GetType() != c4d.Opolygon:
            dialog = gui.MessageDialog("A Polygon object for caching is needed", c4d.GEMB_OK)
            if dialog == 1:
                return
        if op.GetTag(100004020) != None:
            ClothTag = op.GetTag(100004020)
            c4d.CallButton(ClothTag, c4d.CLOTH_DO_CALCULATE)
        op = GetNextObject(op)
    return

def main():
    doc = c4d.documents.GetActiveDocument()
    first = doc.GetFirstObject()
    if first == None:
        return
    IterateHierarchy(first)

if __name__=='__main__':
    main()

Best wishes Martin

On 20/10/2014 at 00:31, xxxxxxxx wrote:

Sorry, monkeytack, I didn't look closely enough. I didn't want to derogate your effort or idea. Thanks for helping Suda out. Would you allow us to integrate your code into our Python SDK docs?

On 20/10/2014 at 02:36, xxxxxxxx wrote:

Hi Andreas, no need to apologise, everything is alright. And yes, feel free to integrate everything I´ve posted, if it fits and someone can take advantage of it. It´s a pleasure.

Best wishes Martin
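The GetDown/GetNext/GetUp walk used by GetNextObject in both scripts above is an ordinary pre-order traversal of the object tree. Here is a c4d-free sketch of the same logic (the Node class is a made-up stand-in for a Cinema 4D BaseObject):

```python
class Node:
    """Minimal stand-in for a BaseObject: each node knows its first
    child (down), its next sibling (next) and its parent (up)."""
    def __init__(self, name, children=()):
        self.name, self.up, self.next = name, None, None
        self.down = children[0] if children else None
        for a, b in zip(children, children[1:]):
            a.next = b                      # link siblings
        for c in children:
            c.up = self                     # link children to parent

def get_next_object(op):
    # Same shape as GetNextObject in the forum scripts.
    if op is None:
        return None
    if op.down:
        return op.down
    while not op.next and op.up:
        op = op.up
    return op.next

def iterate(first):
    names, op = [], first
    while op:
        names.append(op.name)
        op = get_next_object(op)
    return names

tree = Node("root", [Node("a", [Node("a1")]), Node("b")])
print(iterate(tree))  # pre-order: ['root', 'a', 'a1', 'b']
```

Descending first, then moving to the next sibling, and climbing back up when a subtree is exhausted is exactly what makes the forum scripts visit every object in the document once.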
https://plugincafe.maxon.net/topic/8232/10730_need-help-cloth-tag
Hi, I want to give the customer of an app the choice whether to buy the app outright or to use it with a subscription (abonnement). How can I do this? I only find the option to purchase an app.

Kind regards, Martin

Hi M_Stein,

It may not be possible to sell the app itself on a subscription, but you could set up an in-app purchase (Add-on) in your app to sell content or new app functionality, either purchased one time (durable Add-on) or charged at recurring intervals (subscription Add-on). By the way, I advise you to use the new namespace Windows.Services.Store, not the old one Windows.ApplicationModel.Store. You can refer to "In-app purchases and trials" to learn how to implement the Add-on in your app, and "Add-on submissions" to submit it in the dashboard.

You can give the users an Add-on feature. The Add-on can be a one-time purchase or you can give them a subscription Add-on. These are in-app features offered to the users for a certain cost. It is your choice.
https://social.msdn.microsoft.com/Forums/en-US/48400df0-227b-4f26-9e99-712f54cb8fc1/how-can-i-give-the-customer-of-an-app-the-choice-wether-it-is-an-abonemnet-or-a-puchase?forum=wpsubmit
'\u2192', or you found it in the tables of the Unicode Database. Instead you might want to use these Java Unicode Constants (UCC). Using a small Ruby script, these constants were derived directly from the Unicode Database textual representation. For every character there is a constant with its official name and corresponding char or int value. All characters of the Unicode version 4.2.0 up to \u1FFFF are covered except CJK Ideographs. For each Unicode block, e.g. Basic Latin (\u0000..\u007F) or Aegean Numbers (\u10100..\u1013F), there is a separate interface with the block's name, defining all code-points defined in this block.

First you need to import the blocks, e.g. import unicode.AegeanNumbers. Then you can use the constants in your code like here:

Character.charCount(BasicLatin.DIGIT_NINE) // 1
Character.getNumericValue(BasicLatin.DIGIT_NINE) // 9
Character.charCount(NumberForms.ROMAN_NUMERAL_FIVE_HUNDRED) // 1
Character.getNumericValue(NumberForms.ROMAN_NUMERAL_FIVE_HUNDRED) // 500
Character.charCount(AegeanNumbers.NUMBER_EIGHT) // 2
Character.getNumericValue(AegeanNumbers.NUMBER_EIGHT) // 8

(And yes, I know, interfaces are a poor place for constants. They should only be used to model a behaviour of a class. See the AvoidConstantsInterface rule. But I was young and needed the money... ;-)

Download and Installation

Download UCC 1.00 (330 KB), together with source. Extract the ucc-*.zip and add ucc.jar to your classpath. UCC is JDK 1.1 compliant and does not depend on any other libraries. To use characters beyond \u10000, called code-points, you need Java 1.5 or newer. UCC is Open Source under the GPL license.

2 comments:

Great and useful work for JUnit tests. Thanks a lot. Did you ever think of creating an artifact on mvnrepository.com? Not that there should be numerous additional versions, but to make integration in line with current practices?

Thank you Oliver. Getting it into mvnrepository.com is some work, esp. for such small OS projects.
But I quickly released it to my own Maven repository. Just add the repository to your pom. See the project's generated site for more information.
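As a cross-check on the constants shown above, Python's standard unicodedata module exposes the same official character names and numeric values; this is only an illustration of the underlying Unicode data, not part of UCC:

```python
import unicodedata

# Look characters up by their official Unicode names, the same names
# UCC turns into constants like BasicLatin.DIGIT_NINE.
nine = unicodedata.lookup("DIGIT NINE")
d500 = unicodedata.lookup("ROMAN NUMERAL FIVE HUNDRED")
aegean8 = unicodedata.lookup("AEGEAN NUMBER EIGHT")

def utf16_char_count(ch):
    # Mirrors Java's Character.charCount: 2 for code points above U+FFFF,
    # which need a surrogate pair in UTF-16, else 1.
    return len(ch.encode("utf-16-le")) // 2

print(unicodedata.numeric(nine))     # 9.0
print(unicodedata.numeric(d500))     # 500.0
print(unicodedata.numeric(aegean8))  # 8.0
print(utf16_char_count(aegean8))     # 2
```

The Aegean number living beyond \u10000 is exactly why charCount reports 2 for it, and why Java 1.5's code-point APIs are needed there.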
https://blog.code-cop.org/2007/08/java-unicode-constants.html
not able to compile the simplest code i.e. of hello world. the code is:

Code:
#include<iostream>
using namespace std;

int main()
{
    cout<<"hello world!!!";
    return 0;
}

the error was:

collect2.exe: error: ld returned 1 exit status
Code:#include<iostream> #include<conio.h> #include<time.h> #include<stdlib.h> #include<stdio.h> #include<dos.h> using namespace std; struct player{ int health,armour,damage[2],money; char name[10]; }p; struct rat{ int health,armour,damage[2],money; }rat1; int main(); void tutorial(); void options(); void shop(); void quit(); void weapons(); void armour(); void extras(); void start(void) { system("cls"); cout<<"when you are ready press enter..."; getch(); } void tutorial(void) { system("cls"); cout<<"learn how to play this game...\npress any key to continue."; getch(); cout<<"dsfsfw"; } void options(void) { system("cls"); } void shop() { back: int b=1; system("cls"); cout<<"\nwelcome to the shop"; cout<<"press...\n (1) to buy weapons\n (2) to buy armour\n (3) to buy extras\n (4) back"; char a=getch(); switch(a) { case '1':weapons();break; case '2':armour();break; case '3':extras();break; case '4':b=0;break; } if(b) { system("cls"); goto back; } } void weapons() { cout<<"choose...\n(1) light weapons \n(2) medium weapons\n(3) heavy weapons"; int a; cin>>a; switch(a) { case 1:{ cout<<"light weapons press...\n (1) small stick (cost 10 damage : 0-3)\n (2) twig (cost 15 damage : 2-5)\n (3) sharp bamboo spear (cost 20 damage : 5-10)\n (4) rusty sword (cost 30 damage : 10-20)\n (5) iron sword (cost 40 damage : 15-25)"; cin>>a; switch(a) { case 1:{ p.damage[0]=0; p.damage[1]=3; p.money-=10; }break; case 2:{ p.damage[0]=2; p.damage[1]=5; p.money-=15; }break; case 3:{ p.damage[0]=5; p.damage[1]=10; p.money-=20; }break; case 4:{ p.damage[0]=10; p.damage[1]=20; p.money-=30; }break; case 5:{ p.damage[0]=15; p.damage[1]=25; p.money-=40; }break; case 6:system("cls");break; } }break; case 2: { cout<<"\nmedium weapons press...\n (1) medium sword (cost 50 damage : 20-30)\n (2) medium iron sword(cost 60 damage : 20-35)\n (3) sharp sword(cost 80 damage : 40-60)\n (4) sharp long sword (cost 100 damage : 50-100)\n (5) double sword(cost 120 damage : 60-100)\n (6) back"; cin>>a; switch(a) { 
case 1:{ p.damage[0]=20; p.damage[1]=30; p.money-=50; }break; case 2:{ p.damage[0]=20; p.damage[1]=35; p.money-=60; }break; case 3:{ p.damage[0]=40; p.damage[1]=60; p.money-=80; }break; case 4:{ p.damage[0]=50; p.damage[1]=100; p.money-=100; }break; case 5:{ p.damage[0]=60; p.damage[1]=100; p.money-=120; }break; case 6:system("cls");break; } }break; case 3: { cout<<"heavy weapons press...\n (12) silver sword (cost 125 damage : 65-100)\n (13) golden sword(cost 150 damage : 80-120)\n (3) diamond tipped sword(cost 180 damage : 100-200)\n (4) diamond sword (cost 200 damage : 150-250)\n (5) the sword of death(cost 300 damage : 500-600)\n (6) back"; cin>>a; switch(a) { case 1:{ p.damage[0]=65; p.damage[1]=100; p.money-=125; }break; case 2:{ p.damage[0]=80; p.damage[1]=120; p.money-=150; }break; case 3:{ p.damage[0]=100; p.damage[1]=200; p.money-=180; }break; case 4:{ p.damage[0]=150; p.damage[1]=250; p.money-=200; }break; case 5:{ p.damage[0]=500; p.damage[1]=600; p.money-=300; }break; case 6:system("cls");break; } }break; } } void quit(void) { system("cls"); } void back() { system("cls"); } int main() { system("cls"); char a; cout<<"welcome to 'the tournament'\n"; cout<<"press...\n (1) to start\n (2) for tutorial\n (3) for options\n (4) to quit"; a=getch(); switch(a) { case '1':start();break; case '2':tutorial();break; case '3':options();break; case '4':quit();break; } return 0; } Last edited by Mukul Kumar; 04-16-2013 at 09:30 AM. Verify: (1) the source file is properly included in the project, and (2) the source file has the correct file extension. Also what is this? The function main() doesn't need a prototype.The function main() doesn't need a prototype.Code:int main(); Where did you implement your armour() and extras() functions? Why are you including the <dos.h> header file? It's a "little" outdated and shouldn't really be used with today's modern operating systems. 
Also if you're writing a C++ program you should be using the C++ standard headers, not their C versions: <cstdio> instead of <stdio.h>. And since you're not actually using anything in these C headers, you shouldn't be including them anyway.

Jim

i already explained that this is incomplete executable code fragment also i need these standard library functions because i know only this much and still learning + i feel an ease to use these function i will use the function delay(double) from the header file dos.h and about the prototyping of main function i need that in the function ask() where i will ask the user to rerun this code if yes then i'll call the function main() also i face no errors in borland C++ editor!!! till i get the solution i'll continue my work in borland C++ editor

Here's some extra punctuation of mine that you can have to make your messages more clear: ......,,,,,,

You shouldn't be calling main from another function. Instead, you should have a loop in main that cycles or breaks based on the return value of "ask()". As to the problem at hand, did you check the things I mentioned in post #4?

It is not permitted to use the global main function in your program. Therefore, you are not allowed to call it. Luckily, there is an alternative: have the main function call the function (possibly indirectly) in a loop instead.

Originally Posted by Mukul Kumar: …

Since this is a C++ program this doesn't make any sense.
In a C++ program main() can't be called by any function, including main(). And even in a C program calling main() is considered a very very bad practice.

"and about the prototyping of main function i need that in the function ask() where i will ask the user to rerun this code if yes then i'll call the function main()"

So then what is your problem? Incomplete code fragments rarely compile if you haven't at least implemented "stub" functions (functions with blank bodies).

"i already explained that this is incomplete code fragment"

Jim

see the problem is ......that i was verifying time and again to check the working of the code properly and suddenly i faced this problem hence i say it is 100% executable (i tried this in turbo C++) and I AM WORKING ON THOSE FUNCTIONS HAVING NOTHING!!! thanks i will take care of this in future @laserlight, matticus & jimblumberg about that calling of main()

now i may not reply for 20 hrs because this is the time in one day when i sit on computer and work + its 10:23 PM here in my country and i m gonna goto sleep so good-night!!! i promise i'll work hard on this code

i found the problem its not in my incomplete code, not even in my bad habits it was in my editor my naughty bro disturbed some settings (i don't know which one) and i set everything to default by reinstalling the editor but thanks to all of you for telling me about my bad habits i'll take care
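The structure the repliers recommend, a loop in main driven by ask() rather than ask() calling main(), can be outlined like this; sketched in Python for brevity, with rerun_game, play_round and ask_to_rerun as made-up stand-ins for the poster's game and prompt:

```python
def play_round(log):
    log.append("round")      # stand-in for one full run of the game

def ask_to_rerun(answers):
    return answers.pop(0)    # stand-in for prompting the user yes/no

def rerun_game(answers):
    log = []
    while True:              # the loop lives in main, not in ask()
        play_round(log)
        if not ask_to_rerun(answers):
            break
    return log

# User says "yes" twice, then "no": the game runs three times,
# and control never re-enters main recursively.
```

The same shape in C++ would be a `do { ... } while (ask());` inside main, which avoids the forbidden call to main() entirely.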
http://cboard.cprogramming.com/cplusplus-programming/155930-help-dev-cplusplus-users.html
Difference between revisions of "Talk:SantaClausProblemV2"

Revision as of 12:42, 10 January 2009

(SLPJ: done) There is also a double "Here is the code" at the bottom of page 17. (SLPJ: done) Brecknell 13:59, 11 January 2007 (UTC)

The rewrite of the paragraph at top of page 17 has left behind a fragment that I think just needs to be deleted: "just runs each of the actions in sequence". (SLPJ: done) Brecknell 14:15, 11 January 2007 (UTC)

Last sentence of page 18: "Here is another way approach that..." Delete either "way" or "approach". (SLPJ: done) ArthurVanLeeuwen 14:27, 11 January 2007 (UTC)

(SLPJ: done)

(SLPJ: done)
- page 11, use subtract (+ -> -) in limitedWithDraw
- page 22 (2nd line), the the -> the
Genneth 15:07, 11 January 2007 (UTC)

p4: make analogy between () and void to help non-haskellers (SLPJ: done) ArthurVanLeeuwen 15:10, 11 January 2007 (UTC)

The bottom of page 17 has 'Here is his code:' in duplicate. (SLPJ: done).

(SLPJ: True; but I'm not sure whether to say more, or less! Or just leave it) Maeder 15:25, 11 January 2007 (UTC)

maybe generalize nTimes on page 7 to type: Int -> (Int -> IO ()) -> IO () and use nTimes on page 16 instead of sequence_ and list comprehensions. (The function choose on page 19 is also higher order.) (SLPJ: I decide to leave it) Malcolm 15:27, 11 January 2007 (UTC)

(SLPJ: done)
- .)
Malcolm 16:07, 11 January 2007 (UTC)

(SLPJ: done)
- (SLPJ: done)
Fanf 19:34, 12 January 2007 (UTC)

(SLPJ: done)) (SLPJ: hmm... I'm re-using the description in earlier papers). (SLPJ: done)
- Also, is it worth noting that the reindeer gates model the harnessing and unharnessing? (SLPJ: not sure. Anyone else?). (SLPJ: done)
- p.17: "1000,000 microseconds" instead of "1,000,000 microseconds". (SLPJ: done)

LuciusGregoryMeredith 02:00, 13 January 2007 (UTC)

Simon, lovely paper. Your first paragraph is very compelling. Let me quote it here.

- "The free lunch is over [8]. We have grown used to the idea that our programs will go faster when we buy a next-generation processor, but that time has passed. While that next-generation chip will have more CPUs, each individual CPU will be no faster than the previous year's model. If we want our program to run faster, we must learn to write parallel programs [9]."
- What evidence does this paper offer that using STM will actually get my program to run faster? (SLPJ: none whatsoever. The only claims I'm trying to make are (a) we need to learn to write concurrent programs; and (b) if we want to write parallel programs, then STM is promising. In fact we need to write concurrent programs for all sorts of reasons, and performance is only one of them; I'm just using it as a convenient "way in". A bit sloppy perhaps.) For all i can see in this example, it may be a beautiful abstraction that when used at scale on interesting programs cannot actually take advantage of the multicore architecture. To be convinced, starting from this particularly motivating opening, i would like to see an example that begins with an algorithm that is not parallel. Then be shown a beautiful, STM-based parallel version that is demonstrably faster.
- It would be particularly compelling to see just how a good implementation of STM for Haskell takes advantage of Intel and/or AMD multicore hardware.
- It would be even more compelling to see the corresponding lock-based program and how it fares relative to the STM version in terms of performance and usage of the hardware platform, as well as program understandability and analysis.
- Clearly, one of the real rubs is getting from current expressions of algorithms to parallel ones, especially parallel ones that map onto modern architectures. Perhaps your point is that STM helps one start to think and conceptualize computation as a concurrent activity -- which then offers some hope to the ordinary programmer to develop programs that will actually take advantage of today and tomorrow's architectures. If so, then the paper is an excellent start, but i would very much like to see this point made more explicit and central, especially if you only give some lipservice to the argument that STM can actually be made to take advantage of the multicore architectures. In particular, evidence for this sort of point might come in the form of a study of how long it takes an ordinary programmer -- not a Simon P-J -- to develop a beautiful solution like a solution to the Santa Clause problem.
- Evidence against this sort of point would come in the form of people finding the basic constructs of the solution "mind-bending".

Pitarou 00:51, 15 January 2007 (UTC)

In the definition of the function forever, there is the comment -- Repeatedly perform the action, taking a rest each time. I'm not sure what you mean by "taking a rest". (SLPJ: done)

Asilovy 16:43, 15 January 2007 (UTC)

Page 1 first paragraph, last sentence: I'm not a native English speaker, but I wonder if you wouldn't write "If we want our programs..." with an 's', since you use plurals elsewhere. (SLPJ: done)

Asilovy 16:43, 15 January 2007 (UTC)

Page 3, last sentence before 2.2: Isn't it "... on there being INsufficient..." rather than "sufficient"? (SLPJ: done)

Asilovy 18:33, 15 January 2007 (UTC)

Page 6, last sentence: Shouldn't you write "...being explicit aboute SIDE effects..."
and the same for the last sentence of the next paragraph (begin page 7): "This ability to make SIDE effects..." (SLPJ: done)

Asilovy 18:33, 15 January 2007 (UTC)

Page 8, second paragraph: I can understand that, in some sense, atomicity and isolation are related (in fact 2 faces, seen by different threads, of the same problem), but you start by saying the model does not ensure atomicity and explain why, finishing by "...thereby destroying the isolation guarantee". Sounds confusing. (SLPJ: indeed. I've changed it to say "does not ensure isolation...".)

Asilovy 18:33, 15 January 2007 (UTC)

Page 10, code of limitedWithdraw: isn't it "writeTVar acc (bal - amount)" with a minus? (SLPJ: done)

Asilovy 18:33, 15 January 2007 (UTC)

Page 11: name confusion in the code of limitedWithdraw2 (in the type signature), and withdraw2 in the definition and the comment (SLPJ: done)

Asilovy 18:33, 15 January 2007 (UTC)

Page 15, first line: maybe "The function newGate makes A new Gate..."? (SLPJ: done)

Jeremygibbons 12:47, 22 January 2007 (UTC)

"in reality" (p14) - I hate to break it to you, Simon, but I think "in the specification" would be more appropriate here. (SLPJ: ha ha; I think I'll leave it!) Also, on p19, "a atomic" should be "an atomic" and on p21, "sophisiticated" is misspelt. (SLPJ: done)

. atomically. I think if you said something like "if a thread is accessing an IORef while holding the atomically lock there is nothing to stop a *different* thread from accessing the same IORef directly at the same time" it would help emphasise the fact that other threads can still be running even though only one thread can be in an atomic block at any given time. (SLPJ: I have re-worded. See if you prefer the new version?)

(SLPJ: starvation is indeed possible. I deliberately left it out for brevity, but perhaps I should devote a couple of paras to it. What do others think?)

- Page 10: No definition for check.
Section 2.4 only defines retryso perhaps section 3.2 should include: check c = if not c then retry() else return () (to avoid introducing yet another Haskell function unless) SLPJ: done - Page 12: Use of sequenceand list comprehensions seems more complicated than something like: mapM_ (elf elf_gp) [1..10] SLPJ: Hmm - Page 16: Section 5 on locks. It seems to me that STM solves these problems by just conceptually using a single lock (the 'brute force' lock on atomically)). SLPJ: and perhaps not even then; you can do the commit use CAS only with no locks at all!.
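Collecting the fixes agreed in the thread above into one place, here is a small runnable sketch of the suggested check and the corrected limitedWithdraw, written against GHC's stm package. This is a reconstruction for the discussion, not the paper's exact code; the helper is named check' because modern versions of Control.Concurrent.STM already export a check of this type.

```haskell
import Control.Concurrent.STM

type Account = TVar Int

-- The definition suggested above (the paper calls it "check"):
-- block the transaction, via retry, until the condition holds.
check' :: Bool -> STM ()
check' c = if not c then retry else return ()

-- Withdraw only when the balance is sufficient, blocking otherwise;
-- note the subtraction, per the Page 10 correction above.
limitedWithdraw :: Account -> Int -> STM ()
limitedWithdraw acc amount = do
    bal <- readTVar acc
    check' (bal >= amount)
    writeTVar acc (bal - amount)

main :: IO ()
main = do
    acc <- newTVarIO 100
    atomically (limitedWithdraw acc 30)
    readTVarIO acc >>= print   -- prints 70
```

If the balance were below the requested amount, the transaction would simply block (retry) until another transaction topped the account up, rather than failing.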
https://wiki.haskell.org/index.php?title=Talk:SantaClausProblemV2&diff=prev&oldid=25708
My task: create a page template containing web parts developed by me. Here is what I've done: I've developed my own web part as described in the Kentico Developer Guide. It's a very simple one, so I can show its code.

ascx file:

    <%@ Control Language="C#" AutoEventWireup="true" CodeFile="customWebPart.ascx.cs" Inherits="CMSWebParts_Custom_customWebPart" %>
    <asp:Label ID="Label1" runat="server"></asp:Label>

cs file:

    using System;
    using CMS.PortalControls;

    public partial class CMSWebParts_Custom_customWebPart : CMSAbstractWebPart
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Label1.Text = (string)this.GetValue("Text");
        }
    }

As you can see, this web part is only a label, and you can edit its text via the web part properties. After creating it I have, of course, registered the web part and added a property "Text". The next step is quite problematic. I need to create a page template. It's possible to write something like this:

    <cms:CMSPagePlaceholder ID="plcMain" runat="server">
        <LayoutTemplate>
            <cms:CMSWebPartZone ID="zoneMain" runat="server" />
        </LayoutTemplate>
    </cms:CMSPagePlaceholder>

and add my web part via CMS Desk, but that's not the point. I need to put the web part directly into the page template, and it should be editable from CMS Desk.

Web parts are typically used for the Portal development mode, not the ASPX development mode. Web parts are collections of controls that publicly expose properties for the end user/content editor to set. So in your case you don't need the web part, just the user control that inherits the CMS namespaces and displays the data. I've personally done many Kentico websites, from a few pages to sites with thousands of pages that integrate with different payment systems, web services, etc., and I have never used ASPX development mode, simply because it makes it difficult for the client to take advantage of Kentico's abilities when all the simple settings are done in code and a content editor has to know C# or ASP.NET. Once you learn how to create web parts, it's really no different from making a user control.
https://devnet.kentico.com/questions/my-web-parts-in-aspx-page-temlate
Revision history for Class-Load

0.23  2015-06-25
  - remove use of namespace::clean

0.22  2014-08-16
  - document some of the caveats to using this module, and refer to Module::Runtime as an alternative

0.21  2014-02-09
  - repository moved to the github moose organization

0.20  2012-07-15
  - Same as the most recent 0.19, but with a new version number.

0.19  2012-07-15
  - Uploaded by Dave Rolsky, not realizing 0.19 was already used.

0.19  2012-04-03
  - Uploaded by doy (Jesse Luehrs)
  - No changes, reupload to fix indexing.

0.18  2012-02-18
  - Require Package::Stash 0.14+. Fixes RT #75095. Reported by Zefram.

0.17  2012-02-12
  - A bug in Class::Load caused test failures when Module::Runtime 0.012 was used with Perl 5.8.x. Reported by Zefram. RT #74897. (Jesse Luehrs)

0.16  2012-02-12
  - Require Module::Runtime 0.012, which has a number of useful bug fixes.

0.15  2012-02-08
  - Small test changes to accommodate the latest version of Module::Implementation. There's no need to upgrade if you're already using 0.14.

0.14  2012-02-06
  - Use Module::Implementation to handle loading the XS or PP versions of the code. Using this module fixes a few bugs.
  - Under taint mode, setting an implementation in the CLASS_LOAD_IMPLEMENTATION env var caused a taint error.
  - An invalid value in the CLASS_LOAD_IMPLEMENTATION env var is now detected and reported immediately. No attempt is made to load an invalid implementation.

0.13  2011-12-22
  - Fix some bugs with our use of Try::Tiny. This could cause warnings on some systems when Class::Load::XS wasn't installed. Fixes RT #72345.

0.12  2011-10-25
  - Depend on Module::Runtime 0.011+. This fixes problems with Catalyst under Perl 5.8 and 5.10.

0.11  2011-10-04
  - Don't accept package names that start with a digit. (Jesse Luehrs)
  - Rewrote some of the guts to use Module::Runtime, rather than reimplementing its functionality. (Jesse Luehrs)

0.10  2011-09-06
  - Make sure the $@ localization doesn't hide errors - invalid module name errors were being suppressed on perls before 5.14. (Jesse Luehrs)

0.09  2011-09-05
  - Fix is_class_loaded to ignore $ISA (but still look for @ISA) when trying to determine whether a class is loaded. (Jesse Luehrs)
  - Lots of internals cleanup. (Jesse Luehrs)

0.08  2011-08-15
  - The previous version was missing a prereq declaration for Data::OptList. Reported by Daisuke Maki and Mark Hedges. RT #70285.

0.07  2011-08-15
  -

0.06  2010-11-15
  -

0.05  2009-09-02
  - Cargo-cult Class::MOP's is_class_loaded so we work on 5.10

0.04  2008-11-09
  - No changes

0.03  2008-11-09
  - Dist fix

0.02  2008-11-09
  - Declare dependency on Test::Exception

0.01  2008-08-13
  - First version, released on an unsuspecting world.
https://metacpan.org/changes/distribution/Class-Load