Defining the DOM element of a Backbone view right in the template. Setup – Use case – Core functionality – Template caching – Other – Build and test

With Backbone.Declarative.Views, you can read the markup for the container element of a view directly from its template. Keep the tag name, class name and other attributes of el out of Backbone.View Javascript code. (Read why.) That separation of concerns works entirely behind the scenes. Just load Backbone.Declarative.Views into your project, and start declaring the view attributes in the HTML of your templates.

As a bonus, you get a Javascript API for direct access to the template cache. The cache is built into Backbone.Declarative.Views to keep it fast and efficient. So why not take advantage of it for your own template processing, too? Creating new views becomes a much speedier affair, and the overall effect on performance can be huge. Users of Marionette benefit from automatic, integrated management of the template caches which Marionette and Backbone.Declarative.Views provide.

Backbone is the only dependency. Include backbone.declarative.views.js after Backbone. If you use other components which extend Backbone.View, load these components after Backbone.Declarative.Views. Backbone.Declarative.Views augments the Backbone.View base type, so its functionality is available in every view throughout your code. Load backbone.declarative.views.js after Marionette. If you use AMD, please be aware that Marionette is not declared as a dependency in the AMD build of Backbone.Declarative.Views. Declare it yourself by adding the following shim to your config:

    requirejs.config({
        shim: {
            'backbone.declarative.views': { deps: ['marionette'] }
        }
    });

Markup, styling and behaviour should be kept separate – we all know that. Yet with Backbone views, it is common to mix them up.
Part of the view markup is often stored in the HTML, wrapped in script/template tags, while another part – the one describing the container el of the template – ends up right inside your Javascript, hidden away in tagName, className and other properties. It doesn't belong there.

Backbone views use a couple of properties to describe the container element: tagName, className, id and attributes. Instead of managing these properties in Javascript, declare them as data attributes of the script tag which is storing the template.

el with data attributes in the HTML

Let's begin with an example. Consider a template snippet whose script tag carries data attributes. Now, if your view has a template: "#my-template" property, its el is set up from those attributes. The transformation doesn't require any intervention on your part, or additional code. This is the core of what Backbone.Declarative.Views does.

Backbone.Declarative.Views looks for a template property on the view, with a selector as its value. A template option passed to the constructor will do as well. As the Backbone view is created, Backbone.Declarative.Views fetches the template and uses the data attributes, if there are any, to set up the el of the view. And that is the end of it. Processing the template, feeding template vars to it, appending the final HTML to the DOM – all that remains your responsibility. However, you can speed up the process by fetching the template from the cache of Backbone.Declarative.Views. There is no need to read it from the DOM again.

There is something else you might have noticed. The names of the Backbone.View properties have changed when they were written as data attributes. In compliance with the HTML5 data attributes spec, tagName has turned into data-tag-name, and className has become data-class-name. Likewise, there are a data-id and data-attributes. Use these names when describing an el in a template.
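The template snippets of the original example are not preserved in this extract. A sketch of what such a template might look like (the selector #my-template matches the text; the tag, class names and attribute values here are illustrative):

```html
<script id="my-template" type="text/x-template"
        data-tag-name="ul"
        data-id="myList"
        data-class-name="list"
        data-attributes='{ "lang": "en" }'>
    <li><%= content %></li>
</script>
```

A view created with template: "#my-template" would then get an el along the lines of <ul id="myList" class="list" lang="en">, while the inner HTML of the script tag stays available for your own rendering.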
attributes as JSON

Among the properties describing the el of the view, one warrants a closer look: the attributes property. In Javascript, it is a hash. So when declaring it in a template, write it as JSON. When hand-writing JSON, remember to quote property names as well as their values. And those quotes must be double quotes.

There are two ways to let a view know about its template. You can set the template property of the class with extend():

    var View = Backbone.View.extend({ template: "#selector" });

You can also pass the template in as an option when you create the view:

    var view = new View({ template: "#selector" });

The template option, if provided, is attached to the view directly. It is available to the methods of your view, including initialize(), as this.template.

If you want Backbone.Declarative.Views to pick up the properties of your el, and perhaps cache the template for you, you have to play by the rules. You can't set the template property to a selector inside initialize – that is too late. The el has already been set up at this point. Modifications of the template property in initialize() will not affect the el of the view.

This behaviour is a feature, not a bug. It is common to compile a template in initialize, along the lines of

    this.template = _.template( $( this.template ).html() );

The original value of the template property is overwritten in the process. Backbone.Declarative.Views does not interfere with this pattern; it continues to work. Overwriting the template property in initialize won't break the functionality of Backbone.Declarative.Views, either.

The el properties in a template are ignored if the view does not create its own, shiny new el. Backbone allows you to attach a view to an el which already exists in the DOM, rather than create a new one:

    var view = new View({ el: existingElement });

Even if you specify a template along with it, the data attributes of the template won't get applied to the el. That is in line with the default Backbone behaviour.
Backbone ignores el-related view properties, like tagName and className, if el is set to an existing DOM element.

el definition in a template string

Yes, that works as well. The template property of a view can be set to an HTML string instead of a selector, as in the following example:

    var templateHtml = '<li class="bullet" data-tag-name="li" data-class-name="bullet">' +
                       '  template <%= content %> goes here' +
                       '</li>';

Accessing the DOM is rather slow. Ideally, for each template, it should be enough to touch the DOM once. The very first time a template is used, Backbone.Declarative.Views retrieves it from the DOM and checks for el data on the template tag. From here on out, the data of that template is cached. It makes overwhelming sense to reuse that data and save yourself future look-ups.

Backbone.Declarative.Views tries to be helpful, so it does not just keep the data attributes of the el in its cache. It will happily hand you the inner HTML of the template, or the outer HTML. And if you tell it which template compiler to use, it will even compile the templates for you and cache the results, too.

Here is a pretty universal duo of snippets for tapping into the cache of Backbone.Declarative.Views. The snippets assume that you compile your templates with the _.template() function of Underscore. If you don't, it is pretty easy to see what you need to change.

    // Tell the caching mechanism which template compiler to use
    Backbone.DeclarativeViews.custom.compiler = function ( templateHtml ) {
        return _.template( templateHtml );
    };

Defining a compiler like that is optional, but gives you a speed boost for free.

    // Tap into the cache in a view class.
    //
    // Safe to use even if you don't define a template for some of your views.
    // Also works if you leave out the snippet above, and don't set a compiler.
    var BaseView = Backbone.View.extend({
        initialize: function () {
            var cachedTemplate = this.declarativeViews.getCachedTemplate();
            if ( cachedTemplate ) {
                this.template = cachedTemplate.compiled || _.template( cachedTemplate.html );
            }
        }
    });

This little bit of code has got you covered. For the fine print, read on.
You can access cached template data easily from inside a view. The necessary methods are tucked away, or rather namespaced, in the declarativeViews property of a view. In addition, you can deal with cache entries independently of individual, instantiated views. The global cache API is attached to the Backbone.DeclarativeViews namespace (note that there is no dot inside DeclarativeViews).

In the context of a view, call declarativeViews.getCachedTemplate():

    var cachedTemplate = this.declarativeViews.getCachedTemplate();
    // Do stuff with it, most likely with cachedTemplate.html,
    // or perhaps with cachedTemplate.compiled.

As you can see in the example, the cached template is available by the time initialize() is run, so you can use it there. The link between a view and a template is forged when the view is instantiated, and as far as the cache is concerned, it can never be changed. You can modify or overwrite the template property as you wish, do whatever you want with it during render(), even use multiple templates. But getCachedTemplate() always returns the template you started out with – the one defined by the template property, or a template option, at the time the view was created.

If you need to access the cache independently of an individual view, call getCachedTemplate() via the global API with a template selector:

    var cachedTemplate = Backbone.DeclarativeViews.getCachedTemplate( "#template" );

Don't worry about availability. If the template is not yet in the cache, that call will put it in there. To avoid duplicate cache entries, use the same selector for a getCachedTemplate() query as in your views. Selectors which are equivalent but not identical, e.g. "#template" and "script#template", create two distinct cache entries even though they refer to the same template.

When you pull data from the cache with getCachedTemplate(), you do not get a string with the template HTML back.
Rather, you receive a hash with various properties of the cache entry:

- html (string): the actual template content if the template has been specified by a selector. If you don't define your template with a selector, and rather pass in a raw HTML template string, the html property contains the inner HTML of that string. In case you need the string back verbatim, call outerHtml() instead. Please keep in mind that some HTML strings are uncacheable (see below).
- outerHtml (function): a function returning the full (outer) HTML of the template.
- compiled (function, or undefined): the compiled template (i.e. a function returning the final HTML, with the template vars filled in) if a template compiler has been set in Backbone.DeclarativeViews.custom.compiler. Or undefined, otherwise.
- tagName (string or undefined): the tag to be used for the el, if defined by a data attribute of the template.
- className (string or undefined): the class name of the el, if defined by a data attribute.
- attributes (hash or undefined): hash of el attributes and their values, if defined by a data attribute.

(Oh, and have you spotted the textbook case of bad API design? One way to get back the template HTML is by reading the html property, while its twin outerHtml is a function you have to call. Yes, that seems silly, and yes, it can trip you up. But then again, some templates are rather big, and most people don't need the outer HTML. Given today's memory constraints on mobile devices, it seemed better to reconstruct the outer HTML on demand, with a function call, rather than double the memory consumption of the cache by storing near-identical strings for every template.)

There won't be a cache miss for any template which exists in the DOM. When you call getCachedTemplate() on either a view or the global Backbone.DeclarativeViews object, you get the template back. If it is not yet in the cache, it will be put there in the process. You do get a cache miss in the following cases.
The template you request is not defined by a string. You can set the template property of a view to pretty much anything. It could be a function returning what you need. It could, theoretically, be a hash of things. Backbone.Declarative.Views does not handle these kinds of template definitions. It simply leaves them alone. Consequently, the templates do not make it into the built-in cache.

The selector does not match a DOM node. That only matters on first access. If the DOM node is deleted after its content is already in the cache, you get the cached template back.

The template is not a selector but a raw HTML string, and that string can't be turned into a template element (or a set of elements). Backbone.Declarative.Views hands the template string over to Backbone.$ (read: jQuery) for processing, or to a custom loader if you have defined one. If jQuery, or your loader, can't handle the string, you get a cache miss. In practice, that happens when you pass the inner HTML of a template to your view, and parts of the HTML are not wrapped in a tag. Consider a view like this:

    var view = new Backbone.View({
        template: "Template <%= content %> <em>without</em> a tag around it."
    });

The loader, jQuery, can't deal with the string. There would have to be HTML tags around the plain text, but without them, jQuery throws an error (which is caught, silently). Because the template loader can't handle it, Backbone.Declarative.Views ignores it. This is an uncacheable template as far as Backbone.Declarative.Views is concerned.

In all of these cases, getCachedTemplate() returns undefined.

Backbone.Declarative.Views handles the template caching, with one exception. Compiled templates are not in the cache, at least by default. You first need to tell Backbone.Declarative.Views which compiler to use. This is how:

    Backbone.DeclarativeViews.custom.compiler = function ( templateHtml, $template ) {
        // do stuff
        return yourCompiledTemplate;
    };

The compiler function receives the inner HTML of the template node as the first argument.
As the second argument, it is passed the template node itself, in a jQuery wrapper. The compiler should return a function which accepts the template vars as an argument and produces the final HTML. But in fact, the compiler is allowed to return anything. Backbone.Declarative.Views doesn't care what your compiled templates are, and what you do with them. It just stores them for you. The return value of the compiler is stored in the compiled property of each cache entry. So in effect, if you define a compiler, this is what Backbone.Declarative.Views does for you:

    cacheEntry.compiled = Backbone.DeclarativeViews.custom.compiler( cacheEntry.html, $template );

By default, the template property of your view is assumed to be a selector, or perhaps a raw HTML string. For processing, it is handed over to Backbone.$, which acts as the default loader and fetches your template from the DOM (or creates a node from the raw HTML string). If that is not how you want to go about loading your templates, define a custom loader instead. It will take the place of Backbone.$ when the template is fetched.

    Backbone.DeclarativeViews.custom.loadTemplate = function ( templateProperty ) {
        // do stuff
        return $( nodeOrOuterTemplateHtml );
    };

The custom loader is called with the template property as the only argument. That argument is always a string. The custom loader must return a jQuery object (or more precisely an instance of Backbone.$, which usually means jQuery). The returned jQuery object is considered to be the template node. The template HTML should best be inside that node (rather than be the node), though it is essentially up to you how you set that up. Inner and outer HTML of the node can be retrieved from the html property and outerHtml() method of the cache entry.

But sometimes, things just go wrong. If your loader can't process the template argument, or does not find the template, it is allowed to throw an error. The error is caught and handled silently. Alternatively, the loader can return a jQuery object which does not contain any nodes (length 0).
Both cases are treated as a permanent cache miss. Please be aware that your custom loader will only be called if the template of the view is defined by a string. If it is not, Backbone.Declarative.Views bails out well before attempting to load anything. Non-string template properties are none of its business.

If you modify a template in the DOM, and that template has already been used, you have to clear the cache. Otherwise, the cache does not pick up the changes and returns an outdated version of the template. You can clear the cache for a specific template, or a number of them, from the global Backbone.DeclarativeViews object:

    Backbone.DeclarativeViews.clearCachedTemplate( "#template", "#template2" );
    Backbone.DeclarativeViews.clearCachedTemplate( [ "#template", "#template2" ] );

You must use the exact same selectors as when you first used the templates. Selectors which are merely equivalent, e.g. "script#template" instead of "#template", don't match the cache entry and leave it in the cache. Alternatively, you can target the template associated with a specific view, and clear it from there:

    someView.declarativeViews.clearCachedTemplate();

Again, this makes sure that the template will be re-read from the DOM on next access. But it does not allow you to re-associate the view with another template (as far as the cache is concerned). That link stays in place for the lifetime of the view. Finally, if you want to clear the whole cache in one go, do it with

    Backbone.DeclarativeViews.clearCache();

There is a lightweight link between the caches of Marionette and Backbone.Declarative.Views. If you clear an item from one cache, it gets cleared from the other as well. You can call the cache-clearing methods of Marionette and Backbone.Declarative.Views interchangeably. And that, surprisingly, is where it ends. You might have expected deeper integration, like an actual joint cache, which would have saved memory and reduced DOM access even further. Indeed, that joint cache has existed briefly.
But it turned out that the costs outweighed the benefits. The performance gain was minimal at best, sometimes not even offsetting the additional overhead of integration. And crucially, it didn't work that well with some Marionette customizations. Custom template loaders in Marionette had been trickier to use. In the end, full cache integration was more trouble than it was worth, and it has been removed.

Does it work with Marionette? It does. The unit tests cover Marionette, too. With other frameworks, it should work just as well. Backbone.Declarative.Views is designed to play nice with third-party code of any kind. So go ahead and try it. Feedback is always welcome.

What about views which have no content of their own? An example of such a view is the Marionette.CollectionView type. Its content is entirely made up of iterated child views. Its own markup consists of nothing more than the containing el. And yes, that el can be defined with a template. You don't have to put the el properties back into Javascript code. For such a view, only the data attributes matter in the template. The content inside the template tag will simply be ignored.

Mostly, yes. It depends on how you set up your views. If you define the template property with extend(), before instantiating the view, things will just work. But you can run into an edge case if you load other components extending Backbone.View before Backbone.Declarative.Views – and in fact you should load them after it.

On the face of it, using data attributes on one tag to describe another tag seems nonstandard and indirect. You may wonder why the markup for the el of a view can't just be part of the HTML inside the template, as an enclosing tag perhaps. As it turns out, that kind of approach is fraught with problems. See the related Backbone issue for a discussion. @tbranyen lists some of the difficulties. Also check out the comment by @jashkenas. Backbone.Declarative.Views does not make any assumptions about what you keep inside your templates, or how you structure them. It does not break existing code, no matter what.
You can include it in any project and use it where it helps you most, without being forced to rework legacy code. Data attributes are the best solution for that kind of approach.

If you'd like to fix, customize or otherwise improve the project: here are your tools. npm and Bower set up the environment for you.

- npm install (creates the environment)
- bower install (fetches the dependencies of the script)

Your test and build environment is ready now. If you want to test against specific versions of Backbone, edit bower.json first. The test tool chain: Grunt (task runner), Karma (test runner), Mocha (test framework), Chai (assertion library), Sinon (mocking framework). The good news: you don't need to worry about any of this. A handful of commands manage everything for you: grunt test, grunt interactive, grunt build (or just grunt), and grunt ci.

License: MIT.
https://www.npmjs.com/package/backbone.declarative.views
Create a stream from a VMO.

    #include <zircon/syscalls.h>

    zx_status_t zx_stream_create(uint32_t options,
                                 zx_handle_t vmo,
                                 zx_off_t seek,
                                 zx_handle_t* out_stream);

zx_stream_create() creates a stream, which reads and writes the data in an underlying VMO. The seek offset for the stream is initialized to seek.

ZX_STREAM_MODE_READ: The stream will be used for reading. If the given vmo lacks ZX_RIGHT_READ, this function will return ZX_ERR_ACCESS_DENIED. Otherwise, ZX_RIGHT_READ will be included as a right on the created stream object.

ZX_STREAM_MODE_WRITE: The stream will be used for writing. If the given vmo lacks ZX_RIGHT_WRITE, this function will return ZX_ERR_ACCESS_DENIED. Otherwise, ZX_RIGHT_WRITE will be included as a right on the created stream object.

TODO(fxbug.dev/32253)

zx_stream_create() returns ZX_OK on success. In the event of failure, one of the following values is returned.

- ZX_ERR_BAD_HANDLE: vmo is not a valid handle.
- ZX_ERR_WRONG_TYPE: vmo is not a VMO handle.
- ZX_ERR_ACCESS_DENIED: vmo does not have the rights required for the given options.
- ZX_ERR_INVALID_ARGS: out_stream is an invalid pointer or NULL, or options has an unsupported bit set to 1.
- ZX_ERR_NO_MEMORY: Failure due to lack of memory.

See also: zx_stream_readv(), zx_stream_readv_at(), zx_stream_seek(), zx_stream_writev(), zx_stream_writev_at()
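A short sketch of how the call might be used, assuming a Fuchsia build environment (it will not compile elsewhere). The VMO here comes from zx_vmo_create(); the 4096-byte size and the helper name make_stream are illustrative, not from the reference page:

```c
#include <zircon/syscalls.h>

// Create a VMO and wrap it in a read/write stream with the seek
// offset starting at 0.
zx_status_t make_stream(zx_handle_t* out_stream) {
    zx_handle_t vmo;
    zx_status_t status = zx_vmo_create(4096, 0, &vmo);
    if (status != ZX_OK)
        return status;

    status = zx_stream_create(ZX_STREAM_MODE_READ | ZX_STREAM_MODE_WRITE,
                              vmo, 0, out_stream);

    // The stream object holds its own reference to the VMO, so the
    // local handle can be closed once the stream exists.
    zx_handle_close(vmo);
    return status;
}
```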
https://fuchsia.googlesource.com/fuchsia/+/refs/heads/releases/f1r/docs/reference/syscalls/stream_create.md
This tutorial will explain all about the NullPointerException in Java. We will discuss the causes of the Null Pointer Exception and ways to avoid it.

NullPointerException in Java is a runtime exception. Java assigns a special null value to an object reference. When a program tries to use an object reference set to the null value, then this exception is thrown.

NullPointerException In Java

If an object reference with a null value throws NullPointerException, then why do we need a null value? The null value is usually used to indicate that no value has been assigned to a reference variable. Secondly, we need null values for collections like linked lists and trees to indicate null nodes. Design patterns like the singleton pattern make use of null values. To conclude, the null value in Java has many uses.

Null Pointer Exception is thrown in specific scenarios in Java. Some of the scenarios are as follows:

- Method invoked using a null object.
- Accessing or modifying a field or data member of the null object.
- Passing a null object as an argument to a method.
- Calculating the length of a null array.
- Accessing an index of a null array.
- Synchronizing on a null object.
- Throwing a null object.

The Null Pointer Exception extends from the class RuntimeException: NullPointerException extends RuntimeException, which inherits from the Exception class. The Exception class in turn is derived from the Throwable class, which is a subclass of Object.

Causes Of java.lang.NullPointerException Occurrence

Now we will demonstrate each of the scenarios of NullPointerException occurrence that we listed above.

#1) The method is invoked using a null object

Consider the following code example. Here we have a class, MyClass, that provides two methods. The first method ‘initT’ returns a null object.
In the main method, we create an object of MyClass with a call to the initT method. Next, we call the print method of MyClass. Here, java.lang.NullPointerException is thrown as we are calling the print method using a null object.

    class MyClass {
        public static MyClass initT() {
            // method returns a null object
            return null;
        }

        public void print(String s) {
            System.out.println(s.toLowerCase());
        }
    }

    class Main {
        public static void main(String[] args) {
            MyClass t = MyClass.initT(); // create a new object (null object)
            t.print("Hello, World!");    // invoke method using null object
        }
    }

Output

#2) Access field of a null object

    class MyClass {
        int numField = 100;

        public static MyClass initT() {
            // method returns a null object
            return null;
        }

        public void print(String s) {
            System.out.println(s.toLowerCase());
        }
    }

    class Main {
        public static void main(String[] args) {
            MyClass t = MyClass.initT();  // create a new object (null object)
            int num = t.numField;         // access MyClass member using null object
        }
    }

Output

This is another cause of NullPointerException. Here we attempt to access a class member using a null object. We assign the return value of the initT method to the object t and then access numField using object t. But the object t is a null object, as initT returns a null object. At this point, java.lang.NullPointerException is raised.

#3) Passing a null object as an argument

This is a common cause of java.lang.NullPointerException occurrence. Consider the following Java program. Here we have a method ‘print_LowerCase’ that converts the String object passed as an argument to lowercase.

    public class Main {
        public static void print_LowerCase(String s) {
            System.out.println(s.toLowerCase());
        }

        public static void main(String[] args) {
            print_LowerCase(null); // pass null object as argument to the method
        }
    }

Output
#4) Calculating the length of a null array

    public class Main {
        public static void main(String[] args) {
            int[] dataArray = null; // Array is null; no data
            System.out.println("Array Length:" + dataArray.length); // print array length
        }
    }

Output

In the above program, we declare an array and assign null to it, i.e. no data. When we use the length property on this null array, NullPointerException is thrown.

#5) Access index of a null array

Similar to length, even if we try to access a value in a null array using an index, it is the cause of java.lang.NullPointerException.

    public class Main {
        public static void main(String[] args) {
            int[] dataArray = null; // Array set to null
            // access value at index 2
            System.out.println("Value at index 2:" + dataArray[2]);
        }
    }

Output

In the above program, we try to access the value at index 2 of a null array.

#6) Synchronization on a null object

We usually synchronize a block or a method to facilitate concurrent access. However, the object reference that we use for synchronization should not be null. If it is a null object, then it results in java.lang.NullPointerException. The below Java program demonstrates this. As we can see, we have a String object ‘mutex’ initialized to null. Then in the main function, we use a synchronized block with mutex as the object reference. As mutex is null, java.lang.NullPointerException is raised.

    public class Main {
        public static String mutex = null; // mutex variable set to null

        public static void main(String[] args) {
            synchronized(mutex) { // synchronized block for null mutex
                System.out.println("synchronized block");
            }
        }
    }

Output

#7) By throwing null

    public class Main {
        public static void main(String[] args) {
            throw null; // throw null
        }
    }

Output:

In the above example program, instead of throwing a valid object, null is thrown. This results in Null Pointer Exception.

Avoiding Null Pointer Exception

Now that we have seen the causes of the occurrence of NullPointerException, we must also try to avoid it in our program.
First, we must ensure that the objects we use in our programs are initialized properly, so that we can avoid the use of null objects that result in Null Pointer Exception. We should also take care that the reference variables used in the program point to valid values and do not accidentally acquire null values.

Apart from these considerations, we can also exercise more caution on a case-by-case basis to avoid java.lang.NullPointerException. Below we consider a few cases.

#1) String comparison with literals

A comparison between a string variable and a literal (an actual value or an element of an enum) is a very common operation in Java programs. But if the String variable that is an object is null, then comparing this null object to literals will throw NullPointerException. So the solution is to invoke the comparison method from the literal instead of the String object that can be null. The following program shows how we can invoke comparison methods from literals and avoid java.lang.NullPointerException.

    class Main {
        public static void main(String[] args) {
            // String set to null
            String myStr = null;

            // Checking if myStr is null using try catch.
            try {
                if ("Hello".equals(myStr)) // use equals method with literal
                    System.out.print("Two strings are same");
                else
                    System.out.print("Strings are not equal");
            } catch (NullPointerException e) {
                System.out.print("Caught NullPointerException");
            }
        }
    }

Output

#2) Keep a check on the arguments of a method

Check the arguments of the method to ensure that they are not null values. If the arguments are not as per the specification, then the code will throw IllegalArgumentException to indicate that the arguments are not as expected. This is shown in the below Java program.
    import java.io.*;

    class Main {
        public static void main(String[] args) {
            // set String to empty value
            String myStr = "";
            try {
                System.out.println("String value:" + myStr);
                System.out.println("String Length:" + getLength(myStr));
            } catch (IllegalArgumentException e) {
                System.out.println("Exception: " + e.getMessage());
            }

            // Set String to a proper value and call getLength
            myStr = "Far from home";
            try {
                System.out.println("String value:" + myStr);
                System.out.println("String Length:" + getLength(myStr));
            } catch (IllegalArgumentException e) {
                System.out.println("Exception: " + e.getMessage());
            }

            // Set String to null and call getLength()
            myStr = null;
            try {
                System.out.println("String value:" + myStr);
                System.out.println("String Length:" + getLength(myStr));
            } catch (IllegalArgumentException e) {
                System.out.println("Exception: " + e.getMessage());
            }
        }

        // Method that returns length of the String
        public static int getLength(String myStr) {
            if (myStr == null) // throw Exception if String is null
                throw new IllegalArgumentException("The String argument cannot be null");
            return myStr.length();
        }
    }

Output

#3) Use of Ternary Operator to take care of null values

We can use the ternary operator to avoid java.lang.NullPointerException. The ternary operator has three operands. The first is a boolean expression that evaluates to true or false. If the expression is true, the second operand is returned; otherwise, the third operand is returned. The following program shows the use of a ternary operator to avoid NullPointerException.

    import java.io.*;

    class Main {
        public static void main(String[] args) {
            // Initialize String with null value
            String myStr = null;

            // return a substring for this String using the ternary operator
            String myVal = (myStr == null) ?
"" : myStr.substring(0,5); if(myVal.equals("")) System.out.println("Empty String!!"); else System.out.println("String value: " + myVal); // Now set a value for String myStr = "SoftwareTestingHelp"; //return a substring for this String using ternary oprator myVal = (myStr == null) ? "" : myStr.substring(0,8); if(myVal.equals("")) System.out.println("Empty String!!"); else System.out.println("String value: " + myVal); } Output Frequently Asked Questions Q #1) How do I fix NullPointerException in Java? Answer: We must ensure that all the objects used in the program are initialized properly and they do not have null values. Also, the reference variables should not have null values. #2) Is NullPointerException checked or unchecked? Answer: NullPointerException is not a checked exception. It is a descendant of RuntimeException and is unchecked. #3) How do I stop NullPointerException? Answer: Some of the best practices to avoid NullPointerException are: - Use equals() and equalsIgnoreCase() method with String literal instead of using it on the unknown object that can be null. - Use valueOf() instead of toString() ; and both return the same result. - Use Java annotation @NotNull and @Nullable. #4) What is the null value in Java? Answer: A null value does not refer to any object or variable. It is a keyword and a literal. It represents a null reference. #5) Can we catch NullPointerException in Java? Answer: The exception java.lang.NullPointerException is an unchecked exception and extends RuntimeException class. Hence there is no compulsion for the programmer to catch it. Conclusion In this tutorial, we have discussed the NullPointerException in Java. This is quite a dangerous exception and can usually pop up when we least expect it. Null Pointer Exception mostly occurs because of the null object or null reference. We have already seen the causes and ways to avoid NullPointerException. 
As far as possible, the programmer should try to avoid the occurrence of NullPointerException in a program. As this is an unchecked runtime exception, we should make sure that it doesn't occur while the application is running.
https://www.softwaretestinghelp.com/nullpointerexception-in-java/
CC-MAIN-2021-17
refinedweb
1,794
58.08
A common alternative to printf logging is a binary event buffer, where each record holds an event type and a few variables depending on the type. This is fast, but not nearly as readable (or writable) as free-form printf messages. "Delayed printf" is a way to get it all - printf's convenience together with event buffers' speed. What we do is we take printf's arguments and save them into a buffer like this:

format_string_pointer num_args arg0 arg1 ...
format_string_pointer num_args ...

Then, outside of the program logging these messages, we can read the actual format strings from the executable file, and call printf to get formatted text. As long as all the format strings and %s arguments are constant, this works fine - and is very fast. I wrote a working example at GitHub - reading and writing log files takes just 100+ LOC. In what follows, we'll walk through the implementation.

Reading log files

How do we read strings from the executable file? We could parse the file - it's a bunch of data sections, that is, base addresses and the bytes to be copied to those addresses by the program loader. For instance, on Linux, #include <elf.h> has the structs describing program sections in ELF binaries. But there's a simpler way to lay our hands on the strings - just ask the debugger. In gdb, "print (char*)0x8045008" will print the string starting at that address (or garbage if a string doesn't start there.) We'll use something similar, except we'll script gdb in Python:

charp = gdb.lookup_type('char').pointer()

def read_string(ptrbits):
    return gdb.Value(ptrbits).cast(charp).string()

Ugly, ain't it? Well, we could use the shorter

gdb.parse_and_eval('(char*)0x%x'%ptrbits).string()

...but parse_and_eval is dog-slow - slow enough to make the longer incantations well worth it.
Armed with read_string, we can take a raw log buffer - a list of words (32-bit in my examples) - and convert it to text:

def print_messages(words):
    i = 0
    while i < len(words):
        fmt = read_string(words[i])
        n = words[i+1]
        args = words[i+2:i+2+n]
        print fmt % convert_args(fmt, args),
        i += n+2

This simply reads a format string pointer, a number of arguments, and then that number of arguments until it runs out of words. Why do we need convert_args? Because args is a bunch of integers, which is fine with %d or %x but not with %s or %f. With %s, the integer is really a string pointer, and so we want to read_string. And with %f, we want to reinterpret the word bits as a floating point number - what C does with *(float*)&int_var.

def convert_args(fmt, args):
    types = [s[0] for s in fmt.split('%')[1:]]
    unpack = {'s':read_string, 'd':int, 'x':int, 'f':to_float}
    return tuple([unpack[t](a) for t,a in zip(types,args)])

def to_float(floatbits):
    return struct.unpack('f',struct.pack('I',floatbits))[0]

This is a tad naive - it doesn't support things like %.2f - but you get the idea. And now we can add a user-defined gdb command called dprintf like so:

class dprintf(gdb.Command):
    def __init__(self):
        gdb.Command.__init__(self, 'dprintf', gdb.COMMAND_DATA)
    def invoke(self, arg, from_tty):
        bytes = open(arg,'rb').read()
        def word(b): return struct.unpack('i',b)[0]
        words = [word(bytes[i:i+4]) for i in xrange(0, len(bytes), 4)]
        print_messages(words)

dprintf()

We derive a class from gdb.Command so that __init__ registers a command named dprintf. When the command "dprintf mylog.raw" is entered, we open mylog.raw, read its words into a Python list, and call print_messages. (Note the [0] on struct.unpack - it returns a tuple even when decoding a single value.)
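The decoding logic can be exercised entirely outside gdb. Here is a minimal round-trip sketch (Python 3; the dict of fake "addresses" stands in for read_string asking the debugger, and all the addresses are made up for illustration):

```python
import struct

# Fake executable: maps "addresses" to the strings stored there.
# In the real tool, read_string asks gdb for the bytes at the address.
STRINGS = {0x1000: "i=%d half=%f %s\n", 0x2000: "odd", 0x3000: "even"}

def read_string(ptrbits):
    return STRINGS[ptrbits]

def to_float(floatbits):
    # reinterpret the 32 bits as an IEEE float, like *(float*)&int_var in C
    return struct.unpack('f', struct.pack('I', floatbits))[0]

def float_bits(x):
    # writer side of the same trick: float -> raw 32-bit word
    return struct.unpack('I', struct.pack('f', x))[0]

def convert_args(fmt, args):
    types = [s[0] for s in fmt.split('%')[1:]]
    unpack = {'s': read_string, 'd': int, 'x': int, 'f': to_float}
    return tuple(unpack[t](a) for t, a in zip(types, args))

def encode(fmt_ptr, *args):
    # writer side: format string pointer, argument count, then the raw words
    return [fmt_ptr, len(args)] + list(args)

def decode(words):
    out, i = [], 0
    while i < len(words):
        fmt = read_string(words[i])
        n = words[i + 1]
        args = words[i + 2:i + 2 + n]
        out.append(fmt % convert_args(fmt, args))
        i += n + 2
    return ''.join(out)

log = encode(0x1000, 3, float_bits(1.5), 0x2000)
print(decode(log))  # i=3 half=1.500000 odd
```

The same word layout, argument conversion, and bit-reinterpretation tricks appear below in the real writer and reader; the only piece gdb adds is resolving real string addresses.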
We don't need to be in an interactive gdb session to decode log files - we can issue the command directly from the shell:

gdb -q prog -ex 'py execfile("dprintf.py")' -ex 'dprintf mylog.raw' -ex q

This can be made shorter by importing dprintf.py (the file with all the Python code above) somewhere in gdb's initialization scripts - then we won't need the execfile part.

Writing log files

To save printf arguments, we need a way to iterate over them. I figured C++11 variadic templates are faster than fiddling with C's va_list, which wants you to know the argument types to access them, which means you must read the format string, which I'd rather not. So here's the C++11 version:

typedef int32_t dcell;

struct dbuf { dcell *start, *curr, *finish; };
extern dbuf g_dbuf;

template<class... Rest>
void dprintf(const char* fmt, Rest... args) {
    int n = sizeof...(args);
    dcell* p = __sync_fetch_and_add(&g_dbuf.curr, (n+2)*sizeof(dcell));
    if(p + n + 2 < g_dbuf.finish) {
        *p++ = (dcell)fmt;
        *p++ = n;
        dwrite(p, args...);
    }
}

I'm using a global buffer, g_dbuf - of course you could prefer passing a buffer as a dprintf argument. But actually, a global buffer isn't that bad, because dprintf updates the buffer atomically, with GNU's __sync_fetch_and_add. So it's OK to call dprintf from multiple threads and even from interrupt handlers (the latter you wouldn't do with a lock-protected buffer, because an interrupt meeting a buffer locked by a suspended thread is game over. But it's fine with a lock-free buffer - the point of lock-freedom being that someone busy with the buffer can never block someone else by getting suspended). As to sizeof...(args) - in case you've never seen that, it's the number of a variadic template's arguments, and has nothing to do with their sizes in bytes.

dprintf uses dwrite to write the arguments into the buffer:

inline void dwrite(dcell* p) {}

template<class T, class... Rest>
inline void dwrite(dcell* p, T first, Rest... rest) {
    *p++ = tocell(first);
    dwrite(p, rest...);
}

This is actually a perfectly prosaic recursively self-instantiating variadic template - "the" way to iterate over C++ variadic template function arguments (is there any other way?) In turn, tocell takes an argument of an arbitrary type and extracts the 32 bits. In my example, I overloaded it for ints, floats and char*, along the lines of:

inline dcell tocell(int x) { return x; }
inline dcell tocell(const char* p) { return (dcell)p; }
inline dcell tocell(float x) {
    union { int n; float f; } nf;
    nf.f = x;
    return nf.n;
}

That's it; the only other thing we need is a dflush function writing the buffer out to a file, and we can use dprintf:

int i;
for(i=0; i<10; ++i) {
    dprintf("i=%d i/2=%f %s\n", i, i/2., (i&1)?"odd":"even");
}
dflush();
dprintf("going to crash...\n");
volatile int* p=0;
dprintf("i=%d p=0x%x\n", i, (dcell)p);
p[i]=0;

What's with the "going to crash" part? Well, I think a really important part of logging is being able to look at not-yet-flushed messages in a debugger - either in a core dump or in a live debugging session. (Not being able to do this is an annoying thing about buffered printfs; of course you can work around this by sprintfing into a buffer of your own.) So let's extend our dprintf gdb command to print messages directly from g_dbuf instead of reading from a file:

intpp = gdb.lookup_type('int').pointer().pointer()

def invoke(self, arg, from_tty):
    if arg:
        words = read from open(arg)...
    else: # no file passed
        buf = gdb.parse_and_eval('&g_dbuf').cast(intpp)
        start = buf[0]
        curr = buf[1]
        n = curr - start
        words = [int(start[i]) for i in xrange(n)]
    print_messages(words)

Here we do an ugly thing - we treat &g_dbuf as an int** instead of a dbuf*. Why do that - why not avoid any casts and simply access start and curr with buf['start'] and buf['curr']? Because that only works if we have DWARF debug info - that is, if we compiled with -g. But the uglier way above works even without -g.
The debugger only needs to know where g_dbuf is, but nothing about its type. And where g_dbuf is, debuggers know from the ELF symbol table - which is always there unless someone actively stripped the executable. And now we can see the last messages in a core dump like so:

gdb -q progname core -ex dprintf -ex q

...and in a live debugging session, we'd simply type "dprintf". With our example, this should print:

going to crash...
i=10 p=0x0

Conclusions

P.S.: dynamic linking

If your strings are logged from a shared object, then the logged string pointers will not match the addresses stored in the .so file - there'll be an offset. It's OK if you have a debugged process or a core dump, because gdb knows where the .so was loaded. The problem is when you parse files logged by a dead process. One workaround is to log the runtime address of some global variable with a known name, and use that to compute the offset relative to the address of that name in the .so.
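The pointer-relocation workaround from the P.S. is plain arithmetic. A sketch (Python; the anchor addresses are made up for illustration - in practice the static address comes from the .so's symbol table and the runtime address is logged by the process at startup):

```python
# Static address of a known global (from the .so's symbol table) and the
# runtime address of the same global (logged by the process at startup).
static_anchor = 0x4000        # illustrative
logged_anchor = 0x7f124000    # illustrative

load_offset = logged_anchor - static_anchor

def relocate(logged_ptr):
    # Map a pointer logged at runtime back to a file-relative address,
    # so the string can be looked up in the .so on disk.
    return logged_ptr - load_offset

print(hex(relocate(0x7f125008)))  # 0x5008
```

Every format-string pointer in the dead process's log gets shifted by the same load offset, so one logged anchor is enough to recover all of them.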
http://www.embeddedrelated.com/showarticle/518.php
BashDiff adds commands that help you manipulate positional parameters, often with much greater efficiency than the normal bash routes. The pcursor command lets you manipulate the current positional parameters as a single group, clearing them all and storing and restoring them all from a stack. New commands prefixed with pp_ help you manipulate the positional parameters themselves. These commands all take an optional -a parameter to let you specify the name of an array to use instead of working on the implicit positional parameters array. For example, the BashDiff equivalent of the usual set -- "$@" Z for appending to the positional parameters is:

$ pp_append Z

Many of the pp_ commands are much more efficient than using the alternatives in the standard bash shell. How much the efficiency gain of the pp_-prefixed commands matters to you depends on how heavily you use positional parameter manipulation, particularly in a loop. The case shown below is one extreme, where an expensive prepend is done 1,000 times in a loop. As you can see, the pp_ version is much more efficient. However, if the loop only runs for 100 iterations, the standard bash set command version completes in 0.15 seconds. Even though the BashDiff version needs only 0.006 seconds, the efficiency of the standard bash syntax is likely to be acceptable, unless the script that manipulates the positional parameters 100 times is itself called many times from another script.

$ pp_trim 10000
$ time for i in `seq 1 1000`; do set -- Zoldpre $@; done

real 0m10.157s
user 0m9.880s
sys 0m0.004s

$ pp_trim 10000
$ time for i in `seq 1 1000`; do pp_push Zoldpre; done

real 0m0.088s
user 0m0.036s
sys 0m0.002s

BashDiff includes support for the Expat XML parser. You can process only one XML file per call of the new expat builtin. XML is processed, and a collection of callbacks you specify are called when elements, comments, namespaces, and so on are encountered in the XML.
BashDiff does some housekeeping for you; for example, the XML_TAG_STACK array will include the names of all XML elements that lead to the current one (when in a callback), while XML_ELEMENT_DEPTH will let you know the depth of the current XML element. XML attributes are not handled using explicit callbacks of their own; rather, all the attributes are passed as parameters to the start-of-element callback. It is convenient to use bash functions as your Expat callback entry points.

The example below parses the simple ugu.xml file using the BashDiff expat builtin. The startelem function handles the start of a new XML element and uses the standard bash declare builtin to show the contents of the XML_TAG_STACK array when it is called. I've displayed the second parameter that was handed to startelem by BashDiff to show you how XML attributes are handled.

<ugu foo="bar" linux="fun">
  <nested one="two" three="four" />
</ugu>

$ startelem() { declare -p XML_TAG_STACK; echo $2; }
$ expat -s startelem ugu.xml
declare -a XML_TAG_STACK='([0]="ugu")'
foo=bar
declare -a XML_TAG_STACK='([0]="nested" [1]="ugu")'
one=two

The BashDiff patch brings in support for easily dealing with gdbm ISAM files. To create a database and set a value, use the gdbm builtin, supplying the filename and the key-value pairs you want to set. The -e option to gdbm lets you test whether a key is set, and optionally, when it is set, whether it has the value you are expecting. This latter case is useful when you are writing a shell script that can be configured to optionally perform additional processing. There are also options to bring in all the keys or values, or to import the entire gdbm database into a shell array. The below example first sets a single key-value pair, creating the test.gdb file if it does not already exist. It then performs various tests to see if a key-value pair exists, and shows the result of each test.
Two key-value pairs are then set at once, and the entire gdbm file is read into a bash array with the -W option. I used the standard bash declare as the last command to show you what the imported data looks like when in a bash array. Note that you can also use the -K and -V options to set up arrays containing only the keys and values respectively.

$ gdbm test.gdb key value1
$ gdbm -e test.gdb key
$ echo $?
0
$ gdbm -e test.gdb key2
$ echo $?
1
$ gdbm -e test.gdb key value1
$ echo $?
0
$ gdbm -e test.gdb key value2
$ echo $?
1
$ gdbm test.gdb key2 value2 key3 value3
$ gdbm -W thedata test.gdb
$ declare -p thedata
declare -a thedata='([0]="key2" [1]="value2" [2]="key" [3]="value1" [4]="key3" [5]="value3")'

The Lsql command allows you to get at SQLite databases from a BashDiff shell. The general form is Lsql -d file.sqlite SQLquery. When called this way, Lsql will print any results to standard out. You can also use the optional -a to supply the name of a bash array you would like to store the results into instead of printing them. The below example doesn't really use the data from any SQLite database, but if realdata.sqlite contained a database, you could change the SQL command given to perform real work like table joins, and the results would be stored into the table array just as the example does.

$ declare -p table
declare -a table='([0]="foo" [1]="bar" [2]="bill" [3]="ted")'

There are also Psql and Msql commands to connect to PostgreSQL and MySQL databases in a similar manner to Lsql. The -d option to Lsql makes no sense for Psql and Msql and is replaced with other parameters telling BashDiff where your database is running and the login credentials to use when connecting.

If you want to make a small, modern GUI from bash, BashDiff's gtk command might be what you are looking for. The invocation is simply gtk gtk.xml, where the XML can also be made available through redirection. Using XML from bash to define the GUI is not as clunky as you might expect, as the XML schema is quite simple.
You can generate options and buttons using bash shell scripts that just spit out the expected XML elements. You get communication from the GUI by using special XML attributes like command, which, for example, allows you to specify a bash command to execute when the user clicks on a button. Also, the id attribute names a shell variable that the value in the text entry or combo box should be stored into when the user closes the GUI. I managed to get the gtk command to segfault a few times during testing, but on later testing was unable to easily replicate the issue.

Shown below is the XML to create a simple text entry GUI that stores the value of the entry into the $entry shell variable when the user clicks the OK button.

<dialog border="10" buttons="gtk-ok,gtk-cancel" id="dialog">
  <entry id="entry" initial="initial text"/>
</dialog>

$ gtk gtk2.xml
$ echo $entry
this is the text

BashDiff includes many other handy little expansions to the bash syntax and semantics. For example, <<+ allows you to have a here document where leading indentation is preserved relative to the input. The vplot command allows you to take data from one or two bash arrays and create an x-y plot in the terminal. You can supply the names of the arrays or simply give the points directly on the command line in the form x1 y1 x2 y2 etc. The BashDiff patch also includes an RPN calculator, and allows you to mix text and bash commands into a single file in a manner similar to the way PHP works.

There are also some functions with less general utility, such as creditcard and cardswipe for dealing with credit card numbers and extracting information from raw cardswipe data supplied on stdin respectively. These are complemented with other commands like protobase for dealing with a specific company's point-of-sale hardware. BashDiff also exposes many of the functions from ctype.h to let you test whether a string has a given form. Unfortunately, I couldn't figure out how to get these to work properly.
Executing isnumber gives the usage pattern shown below. However, trying to execute isnumber upper G gave the result of an "invalid number" instead of the expected success indicating that G is indeed upper case.

$ isnumber
isnumber: usage: isnumber { alnum | alpha | ascii | blank | cntrl | digit | graph | lower | print | punct | space | upper | xdigit | letter | word } number...
$ isnumber upper G
bash+william: isnumber: G: invalid number

Whether the features of BashDiff are enticing enough for you to want to replace your unpatched shell is the hard question. BashDiff adds a collection of things that make scripting more convenient, such as non-match alternatives for commands like case, but you can work around the lack of these options in your scripts using just the vanilla bash, at the expense of having slightly more contorted scripts. If you are dealing with relational databases or large lookup tables (gdbm), BashDiff might be worth having around just for these two features, even if it does not become your login bash.
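For scripts that outgrow the shell, the same key-value workflow maps directly onto Python's standard dbm module (dbm.gnu is the gdbm-backed flavor when the gdbm library is present; plain dbm picks whichever backend is available). A sketch, with the filenames chosen only to mirror the transcript above:

```python
import dbm
import os
import tempfile

# Throwaway location standing in for test.gdb
path = os.path.join(tempfile.mkdtemp(), 'test.gdb')

# Create the database and set a value (like: gdbm test.gdb key value1)
with dbm.open(path, 'c') as db:
    db['key'] = 'value1'

with dbm.open(path, 'w') as db:
    # Existence test (like: gdbm -e test.gdb key)
    assert b'key' in db.keys()
    # Existence-plus-value test (like: gdbm -e test.gdb key value1)
    assert db['key'] == b'value1'          # values come back as bytes
    # Set two pairs at once (like: gdbm test.gdb key2 value2 key3 value3)
    db['key2'] = 'value2'
    db['key3'] = 'value3'
    # Import everything (like: gdbm -W thedata test.gdb)
    thedata = {k.decode(): db[k].decode() for k in db.keys()}

print(sorted(thedata))  # ['key', 'key2', 'key3']
```

The trade-off is the same one the article closes on: the shell patch keeps everything in one process and one language, while the Python route gives you the richer data structures at the cost of leaving bash.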
https://www.linux.com/news/more-tricks-bashdiff
Created attachment 90 [details]
test project

Mono master has a new profile mscorlib and a new compiler which are not detected by MD, which will always use dmcs for the 4.5 profile, causing problems by referencing the wrong "old" 4.0 mscorlib. IntelliSense works but compilation fails with error CS0234: The type or namespace name `IReadOnlyDictionary' does not exist in the namespace `System.Collections.Generic'. Are you missing an assembly reference?

Is this still the case? I thought MD was able to use 4.5 now.

It's still the case.

Marking dup of the other as it has more info, and the underlying issue is the same.

*** This bug has been marked as a duplicate of bug 3984 ***
https://bugzilla.xamarin.com/20/2042/bug.html
In Whidbey, the WinForms designer takes advantage of a new C# language feature called "partial classes". This allows them to pull out the designer-generated code into a separate file. It has several advantages:

· Users are less likely to muck with it, since it's in a different file. When users start to edit designer-generated code, things break down pretty quickly. It's pretty common for the designer to eat code it doesn't understand, causing you to lose your work.

· The code generator can use no 'using' directives, instead having only fully qualified names (FQN). It's a good idea for code generators to use FQNs, because it makes them resilient to unforeseen changes in the compilation environment (like someone adding a class named "System").

· It reduces clutter in the user file, letting you focus on your work.

To show what this looks like, I created a simple C# Windows Application, and added an OK button. Here is the result:

——- program.cs ——-

#region Using directives
using System;
using System.Collections.Generic;
using System.Windows.Forms;
#endregion

namespace WindowsApplication1
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.EnableRTLMirroring();
            Application.Run(new Form1());
        }
    }
}

——- Form1.cs ——-

#region Using directives
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Windows.Forms;
#endregion

namespace WindowsApplication1
{
    partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }
    }
}

——- Form1.Designer.cs ——-

namespace WindowsApplication1
{
    partial class Form1
    {
        /// <summary>
        /// Required designer variable.
        /// </summary>
        private System.ComponentModel.IContainer components = null;

        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        protected override void Dispose(bool disposing)
        {
            if (disposing && (components != null))
            {
                components.Dispose();
            }
            base.Dispose(disposing);
        }

        #region Windows Form Designer generated code

        private void InitializeComponent()
        {
            this.buttonOK = new System.Windows.Forms.Button();
            this.SuspendLayout();
            //
            // buttonOK
            //
            this.buttonOK.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right)));
            this.buttonOK.Location = new System.Drawing.Point(205, 238);
            this.buttonOK.Name = "buttonOK";
            this.buttonOK.TabIndex = 0;
            this.buttonOK.Text = "OK";
            //
            // Form1
            //
            this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
            this.ClientSize = new System.Drawing.Size(292, 273);
            this.Controls.Add(this.buttonOK);
            this.Name = "Form1";
            this.Text = "Form1";
            this.ResumeLayout(false);
        }

        #endregion

        private System.Windows.Forms.Button buttonOK;
    }
}

————————–

What do you think of the new default template? It's an improvement, no?

I don't get it. How do Form1.Designer.cs and Form1.cs relate? I mean, what ties the two together? Because I don't see them reference each other. Is it only because they both have an implementation of the Form1 class and C# magically melds them together? Also, if C# is a standards-based language, how do you just go ahead and add 'partial' to it?

Jeremy: They are tied together because they are the same class. (They have the same FQN.) ECMA owns the C# standard. The committee evolves the language on an ongoing basis. Read more:

Ok, gotcha. So because both are WindowsApplication1.Form1, the compiler ties them together. Nifty! Regarding C#, I realize it is an ECMA standard, but if I look at that standards doc I see no mention of the keyword 'partial'. So is that an MS extension or what?

The code I posted is based on Whidbey (aka Visual Studio 2005). It's pre-release, and includes a compiler & designer that implement the proposed C# 2.0. ECMA is still considering the proposal.

I assume that any generated code for "Clicks" and other events would be in the Form1.cs file? If not, you are back to the user editing code in the "designer" file.

Paul: that's correct.
I considered including an event in my sample, but I didn't want it to get too big to read easily.

You're correct, Paul, that's how it works. Hand-editing the .designer.cs file is considered "bad" in nearly all situations 🙂

Thanks, that's the response I was hoping you were going to give. But it does now say that the generator is touching the Form1.cs file instead of being isolated to the "designer" file. With anonymous classes you could also move more toward a command pattern instead of a new btn_click method on Form1.cs. I am not sure what that would buy you in this case, but it would be possible. Java tends towards anonymous inner classes for event handlers. It's just a thought. Thanks.

Yes, it's bad. The designer is quickly frustrated by code it didn't create. Part of the distinction between "Elvis" and "Mort" is that Elvis sees designers as just a view on his code, while Mort sees the designer as what he's actually working on. Elvis thinks the generated code is his, Mort does not. By creating a situation where the typical C# user can't freely edit this code, we're failing to meet users' needs.

The key, as always, is an educated consumer. Hints like "// place code here" help, but you still end up with problems when the coder deletes the generated "event" method and forgets to clean up the += on the control. Or conversely, if the coder unhooks the event, the method stays around as dead code until the coder remembers to delete it. The designer can never be 100% correct on round trips; just so long as it is predictable and consistent, an educated coder should be fine with it. Generated inner classes might be an interesting idea to explore, though, for a cleaner way to isolate the generated code. Thanks.

What about the protected Dispose method? What if I need to add an implementation to that function? I am back to editing the designer file.

The protected Dispose method should be implemented in the Form1.cs file. I don't believe the designer ever modifies it.
I like the template – I don't like it in the project system – it's harder to work with. The designer files are hidden away (you have to do Show All Files to see them). The project should always show the [+] sign.

This generally looks good. But another area has not been talked about: the .resx files. In the current version (VS.NET 2003), the designer will destroy any custom entries in the .resx file belonging to a form whenever you edit the form (it seems to completely rewrite the file every time). This makes it a pain to use the default resources for localizing strings that the form uses (e.g. for messagebox messages etc.), because your custom strings are deleted every time you open the form. 🙁 So you either have to create another class to hold your strings or handle all the resource business yourself. Are there plans for partial resource files or something like that as well?

What is this? A Google search returns only your page.

Rich: This is something we're working on improving. Thanks for the feedback.

Leo: As much as I'd like to have a designer that could safely round-trip your code 100% of the time, that's not happening any time soon. IMO, we're not even close. The string you show in a MessageBox is not part of the current form, I think. Anyway, the right place for it is a project resource file. In Whidbey, resgen.exe should have some smarts for generating a typesafe way to access your resources. Long overdue, IMO.

Can you please tell me what the secret method Application.EnableRTLMirroring() does? I have looked around and have not found any docs on this. You are using Application.EnableRTLMirroring(); in your Main().

Rovert: I don't know. I'm still trying to get an answer for that one.

I noticed one thing that looked a little weird: the Dispose method goes into the generated portion of the partial form.
Now, this might not be highly applicable to a Form (because you should be putting resource control code in another class etc.), but what if, for whatever reason, you needed to put some code into the Dispose method?

I'm confused. Can I create partial classes in two different assemblies and merge them together when they're loaded? Otherwise, why doesn't the Designer just put the generated properties into a resource file? What's the point of further complicating the C# compiler when this is something that could be better provided in the CLR? How does this fit in with the Avalon/XAML vision of separating application functionality from the presentation layer?

I found out that this has something to do with 'right to left' languages like some Middle Eastern languages (hence the RTL) and mirroring the layout of the widgets on the form (not the language text itself). If you could find out more about this, I would be appreciative. Thanks!

Rovert: I'll ping folks on that again.
https://blogs.msdn.microsoft.com/jaybaz_ms/2004/04/28/winforms-designed-code-and-c-partial-classes/
23 December 2009 16:06 [Source: ICIS news]

By Ben Lefebvre

HOUSTON – At its height in August 2008, the industry boasted nearly 180 refiners pumping out 78m gallons during the month. Established businesses were in the game – ADM with its 85m gal/year plant in North Dakota – and so were new faces – John Plaza with his 105m gal/year Imperium Renewables in Washington.

Less than two years later, the overall industry is running at 15% of total capacity and is not expected to survive without government rescue. Amid the wreckage, three general rules of business can be recognised. We offer these for your holiday reading pleasure.

Commandment 1: No mature industry shall rely exclusively on hypothetical government aid for its survival, for the

For years, biodiesel refiners baked the following assumptions into their business plans: if the US Congress continued to annually approve the $1/gal blending credit, and if the US Environmental Protection Agency (EPA) implemented the mandates for renewable fuel usage it originally announced in 2005, then biodiesel refining would be a viable commercial venture.

Well, as of the end of December, Congress decided it would not vote on the tax credit issue until 2010, and the EPA has not yet implemented mandates. Both of these things may occur in the first quarter of 2010, but that would be too late for refiners with a thin cash cushion and thinner demand.

"Even if the tax credit is done retroactively, you're looking at potentially three or four lost months of production," said National Biodiesel Board (NBB) spokesman Michael Frohlich. "A lot of plants will close their doors."

Biodiesel producers point out that they were only seeking to help the government decrease the amount of foreign oil imported into the

Critics contend that too many business speculators jumped into the biodiesel pool without first looking for the bottom.
The easy credit available in the mid-2000s made it easy for anyone with half a plan to start a refinery with the hopes the government would create the market later. An NBB-sponsored report postulates that the industry, emasculated by the recession, would functionally cease to exist if the government did not extend the tax credits.

In the first eight months of 2009, US biodiesel refiners produced 313m gal of the renewable fuel, according to the Energy Information Administration (EIA). That was down nearly 40% from the same period of the year before.

Lesson 2: Thou shalt not over-invest in first-generation technology whilst the next generation is at the doorstep.

When the EPA announced in 2005 it would (eventually) implement renewable fuel standards, its draft proposal called for 1bn gal of first-generation biodiesel - the stuff made from food crops and animal fats - per year as of 2012, with no increases planned afterward. From that announcement, investors dove in, eventually creating 2.7bn gal/year in first-generation biodiesel production. The idea was to sell to

The vast majority of US refiners settled in the

Soybean oil, the main biodiesel feedstock, was 22.35 cents/lb in January 2006. In February 2008, near the height of the commodity bubble, it hit a high of 68.15 cents/lb. It then fell to 32.56 cents/lb when that bubble popped in November. By that time, soy-biodiesel producers had tanks full of high-priced fuel just as demand - and prices - cratered amid the recession.

To make matters worse, many refiners were not integrated with their raw material sources, meaning they had no control over and could not profit from swings in raw material costs. They had to pass along changes in soy and tallow prices to buyers, which caused biodiesel prices to fluctuate week to week.

Since then, the government and researchers became more serious about developing the next generation of biodiesel, fuel made from non-food crops such as algae, leaving the first generation behind.
"Conventional soy-based biodiesel - it's hard to understand what major innovations will take place that will let it stand on its own," said Nathaniel Greene, director of renewable energy policy at the Natural Resources Defense Council (NRDC). "The federal government's efforts and money are best spent getting advanced industries out of the lab and into commercialization."

Second- and third-generation biodiesel is still too expensive to market - estimates generally put algae-based biodiesel at $5-10/gal to produce. But look at the flow of government research money and it is not hard to see which way the feds think the future lies. Since July 2009, the

Soy-and-fats-based biodiesel producers got nothing. Now that first generation of producers, with their huge asset holdings, finds itself almost an afterthought, with a shrunken market and shrinking prospects.

Lesson 3: Thou shalt not place all of thine sales eggs in one market, for the EU is a jealous market.

For a while, US biodiesel refiners were shipping about 80% of their product across the

This scenario was popular with US producers, who sold three times more volume in

Who did not like this state of affairs? Some

The EBB called foul, and the European Commission went right to the red card on US players - anti-dumping and countervailing duties (read: tariffs) so high that the US was effectively ejected from the game. Producers who depended on the European trade scrambled to establish themselves in the few states or industries in the US with mandates in place. Unfortunately, they found that the few suppliers who had made in-roads in the domestic market early, Renewable Energy Group (REG) the giant among them, were not giving up ground.

After the tariffs, the deluge.
In March, US producer Nova Biosource filed for bankruptcy and sold off its assets, and producer Imperium shut down its 100m gal/year refinery near Seattle and laid off the majority of its workforce (the refinery came back online in November as the company sold material to Canada - then an explosion in December forced it back down, strengthening the feeling that God may not care for biodiesel). Many more refiners went out of business. The largest
Due to a loophole in the Commission’s ruling, some
The way things are looking now, anyone still looking to attend the industry’s 2010 conference in

($1 = €0.70)
http://www.icis.com/Articles/2009/12/23/9321296/INSIGHT-The-US-biodiesel-industry-its-rise-and-fall.html
Integrity/Concepts

It is about trust

Introduction

Integrity is about trusting components within your environment, and in our case the workstations, servers and machines you work on. You definitely want to be certain that the workstation you type your credentials on to log on to the infrastructure is not compromised in any way. This "trust" in your environment is a combination of various factors: physical security, the system security patching process, secure configuration, access controls and more. Integrity plays a role in this security field: it tries to ensure that the systems have not been tampered with by malicious people or organizations. And this tamperproof-ness extends to a wide range of components that need to be validated. You probably want to be certain that the binaries that are run (and the libraries that are loaded) are those you built yourself (in the case of Gentoo) or were provided to you by someone (or something) you trust. And that the Linux kernel you booted (and the modules that are loaded) are those you made, and not someone else's. Most people trust themselves and look at integrity as if it needs to prove that things are still as you've built them. But to support this claim, the systems you use to ensure integrity need to be trusted too: you want to make sure that whatever system is in place to offer you the final yes/no on the integrity only uses trusted information (did it really validate the binary?) and services (is it not running on a compromised system?). To support these claims, many ideas, technologies, processes and algorithms have passed the review. In this document, we will talk about a few of those, and how they play into the Gentoo Hardened Integrity subproject's vision and roadmap.

Hash results

Algorithmically validating a file's content

Hashes are a primary method for validating that a file (or other resource) has not been changed since it was first inspected.
A hash is the result of a mathematical calculation on the content of a file (most often a number or an ordered set of numbers), and exhibits the following properties:

- The resulting number is represented in a small (often fixed-size) length. This is necessary to allow fast verification of whether two hash values are the same or not, but also to allow storing the value in a secure location (which is, more often than not, much more restricted in space).
- The hash function always returns the same hash (output) when the file it inspects (input) has not been changed. Otherwise it would be impossible to ensure that the file content hasn't changed.
- The hash function is fast to run (the calculation of a hash result does not take up too much time or resources). Without this property, it would take too long to generate and validate hash results, leading to discontented users (who are then more likely to disable the validation altogether).
- The hash result cannot be used to reconstruct the file. Although this is often seen as a consequence of the first property (small length), it is important because hash results are often also seen as a "public validation" of data that is otherwise private in nature. In other words, many processes rely on the inability of users (or hackers) to reverse-engineer information based on its hash result. A good example is passwords and password databases, which should store hashes of the passwords, not the passwords themselves.
- Given a hash result, it is near impossible to find another file with the same hash result (or to create such a file yourself). Since the hash result is limited in space, there are many inputs that map onto the same hash result. The power of a good hash function is that it is not feasible to find them (or calculate them) except by brute force. When such a match is found, it is called a collision.
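The workflow these properties enable (hash once, store the result securely, later re-hash and compare) can be sketched in Python; the file contents here are just illustrative byte strings:

```python
import hashlib

def digest(data: bytes) -> str:
    """Return a fixed-size, one-way SHA-256 hash of the given content."""
    return hashlib.sha256(data).hexdigest()

# At "known good" time: hash the content and store the result securely.
original = b"important configuration file contents"
stored = digest(original)

# At validation time: recompute the hash and compare with the stored value.
def unchanged(data: bytes, stored_digest: str) -> bool:
    return digest(data) == stored_digest
```

Any single-bit change in the input yields a completely different digest, which is what makes the comparison meaningful.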
Compared with checksums, hashes try to be more cryptographically secure (and as such more effort is put into the last property to make sure collisions are very hard to obtain). Some even try to generate hash results in a way that the duration of the hash calculation cannot be used to obtain information about the data (such as whether it contains more 0s than 1s, etc.).

Hashes in integrity validation

Integrity validation services are often based on hash generation and validation. Tools such as tripwire or AIDE generate hashes of files and directories on your systems and then ask you to store them safely. When you want the integrity of your systems checked, you provide this information to the program (most likely in a read-only manner, since you don't want this list to be modified while validating), which then recalculates the hashes of the files and compares them with the given list. Any changes in files are detected and can be reported to you (or the administrator). A popular hash function is SHA-1 (which you can generate and validate using the sha1sum command), which gained momentum after MD5 (using md5sum) was found to be less secure (nowadays collisions in MD5 are easy to generate). SHA-2 also exists (but is less popular than SHA-1) and can be played with using the commands sha224sum, sha256sum, sha384sum and sha512sum.

user $ sha1sum ~/Downloads/pastie-4301043.rb
6b9b4e0946044ec752992c2afffa7be103c2e748  /home/swift/Downloads/pastie-4301043.rb

Hashes are a means, not a solution

Hashes, in the field of integrity validation, are a means to compare data and integrity in a relatively fast way. However, by themselves hashes cannot be used to provide integrity assurance towards the administrator. Take the use of sha1sum by itself, for instance. You are not guaranteed that the sha1sum application behaves correctly (and as such has or hasn't been tampered with).
You can't use sha1sum against itself since malicious modifications of the command can easily just return (print out) the expected SHA-1 sum rather than the real one. A way to thwart this is to provide the binary together with the hash values on read-only media. But then you're still not certain that it is that application that is executed: a modified system might have you think it is executing that application while it is actually using a different one. To provide this level of trust, you need insurance from a higher-positioned, trusted service that the right application is being run. Running with a trusted kernel helps here (but might not provide 100% closure on it), but you most likely need assistance from the hardware (we will talk about the Trusted Platform Module later). Likewise, you are not guaranteed that it is still your file with hash results that is being used to verify the integrity of a file. Another file (with modified content) may be bind-mounted on top of it. To support integrity validation with a trusted information source, some solutions use HMAC digests instead of plain hashes. Finally, checksums should not only be taken on the file level, but also on a file's attributes (which are often used to provide access controls or even toggle particular security measures on/off on a file, as is the case with PaX markings), on directories (holding information about directory updates such as file additions or removals) and on privileges. These are things that a program like sha1sum doesn't offer (but tools like AIDE do).

Hash-based Message Authentication Codes

Trusting the hash result

In order to trust a hash result, some solutions use HMAC digests instead. An HMAC digest combines a regular hash function (and its properties) with a secret cryptographic key. As such, the function generates the hash of the content of a file together with the secret cryptographic key.
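Such a keyed digest can be sketched with Python's standard hmac module (the key and file contents below are placeholders):

```python
import hashlib
import hmac

SECRET_KEY = b"example-secret-key"  # placeholder; a real key must be stored securely

def hmac_sha1(data: bytes) -> str:
    # The digest depends on both the content and the secret key, so an
    # attacker without the key cannot produce a matching value for
    # modified content.
    return hmac.new(SECRET_KEY, data, hashlib.sha1).hexdigest()

recorded = hmac_sha1(b"file contents")

def verify(data: bytes, expected: str) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(hmac_sha1(data), expected)
```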
This not only provides integrity validation of the file, but also a signature telling the verification tool that the hash was made in the past by a trusted application (one that knows the cryptographic key) and has not been tampered with. By using HMAC digests, malicious users will find it more difficult to modify code and then present a "fake" hash results file, since they cannot reproduce the secret cryptographic key that needs to be added to generate this new hash result. When you see terms like HMAC-SHA1, it means that a SHA-1 hash result is used together with a cryptographic key.

Managing the keys

Using keys to "protect" the hash results introduces another level of complexity: how do you properly and securely store the keys and access them only when needed? You cannot just embed the key in the hash list (since a tampered system might read it out when you are verifying the system, generate its own results file and have you check against that instead). Likewise, you can't just embed the key in the application itself, because a tampered system might just read out the application binary to find the key (and once it is compromised, you might need to rebuild the application completely with a new key). You might be tempted to just provide the key as a command-line argument, but then again you cannot be certain that no malicious user is idling on your system, waiting to capture this valuable information from the output of ps, etc. Again rises the need to trust a higher-level component. When you trust the kernel, you might be able to use the kernel key ring for this.

Using private/public key cryptography

Validating integrity using public keys

One way to work around the vulnerability of having the malicious user get hold of the secret key is to not rely on the key for the authentication of the hash result in the first place when verifying the integrity of the system.
This can be accomplished if you, instead of using just an HMAC, also encrypt the HMAC digest with a private key. During validation of the hashes, you decrypt the HMAC with the public key (not the private key) and use this to generate the HMAC digests again for validation. In this approach, an attacker cannot forge a fake HMAC since forgery requires access to the private key, and the private key is never used on the system to validate signatures. And as long as no collisions occur, the attacker also cannot reuse the encrypted HMAC values (which you could consider to be a replay attack).

Ensuring the key integrity

Of course, this still requires that the public key is not modifiable by a tampered system: a fake list of hash results can be made using a different private key, and the moment the tool wants to decrypt the encrypted values, the tampered system replaces the public key with its own public key, and the system is again vulnerable.

Trust chain

Handing over trust

As you've noticed from the methods and services above, you always need to have something you trust and that you can build on. If you trust nothing, you can't validate anything, since nothing can be trusted to return a valid response. And to trust something means you also want to have confidence that that system itself uses trusted resources. For many users, the hardware level is something they trust. After all, as long as no burglar has come into the house and tampered with the hardware itself, it is reasonable to expect that the hardware is still the same. In effect, these users trust that the physical protection of their house is sufficient for them. For companies, the physical protection of the working environment is not sufficient for ultimate trust. They want to make sure that the hardware is not tampered with (and that different hardware is not suddenly used), specifically when the company uses laptops instead of (less portable) workstations.
The more you don't trust, the more things you need to take care of in order to be confident that the system is not tampered with. In the Gentoo Hardened Integrity subproject we will use the following "order" of resources:

- System root-owned files and root-running processes. In most cases and most households, properly configured and protected systems will trust root-owned files and processes. Any request for integrity validation of the system is usually applied against user-provided files (no one tampered with the user account or specific user files) and not against the system itself.
- Operating system kernel (in our case the Linux kernel). Although some precautions need to be taken, a properly configured and protected kernel can provide a higher trust level. Integrity validation on the kernel level can offer higher trust in the system's integrity, although you must be aware that most kernels still reside on the system itself.
- Live environments. A bootable, preferably read-only, medium can be used to boot up a validation environment that scans and verifies the integrity of the system under investigation. In this case, even tampered kernel boot images can be detected, and by taking proper precautions when running the validation (such as ensuring no network access is enabled from boot up until the final compliance check has occurred) you can make yourself confident of the state of the entire system.
- Hypervisor level. Hypervisors are seen by many organizations as trusted resources (the isolation of a virtual environment is hard to break out of). Integrity validation on the hypervisor level can therefore provide confidence, especially when "chaining trusts": the hypervisor first validates the kernel to boot, and then boots this (now trusted) kernel, which loads up the rest of the system.
- Hardware level.
Whereas hypervisors are still "just software", you can lift trust up to the hardware level and use the hardware-offered integrity features to give you confidence that the system you are about to boot has not been tampered with. In the Gentoo Hardened Integrity subproject, we aim to eventually support all these levels (and perhaps more) to provide you, as a user, the tools and methods you need to validate the integrity of your system, up to the point that you trust. The less you trust, the more complex a trust chain might become to validate (and manage), but we will not limit our research and support to a single technology (or chain of technologies). Chaining trust is an important aspect to keep things from becoming too complex and unmanageable. It also allows users to just "drop in" at the level of trust they feel is sufficient, rather than requiring technologies for higher levels. For instance:

- A hardware component that you trust (like a Trusted Platform Module or a specific BIOS-supported functionality) verifies the integrity of the boot regions on your disk. When OK, it passes control over to the bootloader.
- The bootloader now validates the integrity of its configuration and of the files (kernel and initramfs) it is told to boot up. If everything checks out, it boots the kernel and hands over control to this kernel.
- The kernel, together with the initial ram file system, verifies the integrity of the system components (and, for instance, the SELinux policy) before the initial ram system changes to the real system and boots up the (verified) init system.
- The (root-running) init system validates the integrity of the services it wants to start before handing over control of the system to the user.

An even longer chain can be seen with hypervisors:

- Hardware validates the boot loader
- Boot loader validates the hypervisor kernel and system
- Hypervisor validates the kernel(s) of the images (or the entire images)
- The hypervisor-managed virtual environment starts the image
- ...
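These chains can be sketched in a few lines of Python; the component "images" and their expected measurements below are invented purely for illustration:

```python
import hashlib

# Invented components; a trusted stage holds the expected measurement of each.
components = {
    "bootloader": b"bootloader code",
    "kernel": b"kernel image",
    "init": b"init system",
}
expected = {name: hashlib.sha256(blob).hexdigest()
            for name, blob in components.items()}

def boot_chain(chain, components, expected):
    # Measure each component before "handing over control"; refuse to
    # continue as soon as one measurement does not match.
    for name in chain:
        if hashlib.sha256(components[name]).hexdigest() != expected[name]:
            return "halt: " + name + " failed its integrity check"
    return "system booted"
```

The essential design point is that each stage only measures the next one, so trust established at the bottom of the chain propagates upward one link at a time.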
Integrity on serviced platforms

Sometimes you cannot trust higher-positioned components, but still want to be assured that your service is not tampered with. An example would be when you are hosting a system in a remote, non-accessible data center, or when you manage an image hosted by a virtualized hosting provider (I don't want to say "cloud" here, but it fits). In these cases, you want a level of assurance that your own image has not been tampered with while being offline (you can imagine manipulating the guest image, injecting trojans or other backdoors, and then booting the image) or even while the system is running. Instead of trusting the higher components, you try to deal with a level of distrust that you want to manage. Providing you with some confidence at this level too is our goal within the Gentoo Hardened Integrity subproject.

From measurement to protection

When dealing with integrity (and trust chains), the idea behind the top-down trust chain is that higher-level components first measure the integrity of the next component, validate it (and take appropriate action) and then hand over control to this component. This is what we call protection or integrity enforcement of resources. If the system cannot validate the integrity, or the system is too volatile to enforce this integrity from a higher level, it is necessary to provide a trusted method for other services to validate the integrity. In this case, the system attests the state of the underlying component(s) towards a third-party service, which appraises this state against a known "good" value. In the case of our HMAC-based checks, there is no enforcement of integrity of the files, but the tool itself attests the state of the resources by generating new HMAC digests and validating (appraising) them against the list of HMAC digests it took before.
An implementation: the Trusted Computing Group functionality

Trusted Platform Module

Years ago, a non-profit organization called the Trusted Computing Group was formed to work on and promote open standards for hardware-enabled trusted computing and security technologies, including hardware blocks and software interfaces across multiple platforms. One of its deliverables is the Trusted Platform Module, abbreviated to TPM, to help achieve these goals. But what are these goals exactly (especially in light of our integrity project)?

- Support hardware-assisted recording (measuring) of what software is (or was) running on the system since it booted, in a way that modifications to this record (or the presentation of a different, fake record) can be easily detected
- Support the secure reporting of this state (measurement) to a third party so that the third party can attest that the system is indeed in a sane state

The idea of providing a hardware-assisted method is to prevent software-based attacks or malpractices that would circumvent security measures. By running some basic (but important) functions in a protected, tamper-resistant hardware module (the TPM), even rooted devices cannot work around some of the measures taken to "trust" a system. The TPM chip itself does not influence the execution of a system. It is, in fact, a simple request/reply service and needs to be called by software functions.
However, it provides a few services that make it a good candidate for setting up a trusted platform (next to its hardware-based protection measures to prevent tampering with the TPM hardware itself):

- An asymmetric crypto engine, supporting the generation of asymmetric keys (RSA with a key length of 2048 bits) and standard operations with those keys
- A random noise generator
- A SHA-1 hashing engine
- Protected (and encrypted) memory for user data and key storage
- Specific registers (called PCRs) to which a system can "add" data

Platform Configuration Registers, Reporting and Storage

PCR registers are made available to support securely recording the state of (specific parts of) the system. Unlike processor registers, which software can reset as needed, PCR registers can only be "extended": the previous value in the register is taken together with the newly provided value, hashed and stored again. This has the advantage that a value stores both the knowledge of the data presented to it and its order (providing values AAA and BBB gives a different end result than providing values BBB and AAA), and that the PCR can be extended an unlimited number of times. A system that wants to securely "record" each command executed can take the hash of each command (before it executes it), send that to the PCR, record the event and then execute the command. The system (kernel or program) is responsible for recording the values sent to the PCR, but at the end, the value inside the PCR has to be the same as the one calculated from the record. If it differs, then the list is incorrect and the "secure" state of the system cannot be proven. To support secure reporting of this value to a "third party" (be it a local software agent or a remote service), the TPM supports secure reporting of the PCR values: an RSA signature is made on the PCR value as well as on a random number (often called the "nonce") given by the third party (proving there is no man-in-the-middle or replay attack).
Because the private key of this signature is securely stored on the TPM, this signature cannot be forged. The TPM chip has (at least) 24 PCR registers available. These registers will contain the extended values for:

- BIOS, ROM and memory block data (PCR 0-4)
- OS loaders (PCR 5-7)
- Operating System-provided data (PCR 8-15)
- Debugging data (PCR 16)
- Localities and Trusted Operating System data (PCR 17-22)
- Application-specific data (PCR 23)

The idea of using PCRs is to first measure the data a component is about to execute (or transfer control to), then extend the appropriate PCR, then log this event in a measurement log and finally transfer control to the measured component. This provides a trust "chain".

Trusting the TPM

In order to trust the TPM, the TCG bases its model on asymmetric keys. Each TPM chip has a 2048-bit private RSA key securely stored in the chip. This key, called the Endorsement Key, is typically generated by the TPM manufacturer during the creation of the TPM chip, and is backed by an Endorsement Key certificate issued by the TPM manufacturer. This EK certificate guarantees that the EK is in fact an Endorsement Key for a given TPM (similar to how an SSL certificate is "signed" by a root CA). The private key cannot leave the TPM chip. A second key, called the Storage Root Key, is generated by the TPM chip when someone takes "ownership" of the TPM. Although this key cannot leave the TPM chip either, it can be removed (when someone else takes ownership). It is used to encrypt data and other keys (user Storage Keys and Signature Keys). The other keys (storage and signature keys) can leave the TPM chip, but always in an encrypted state that only the TPM can decrypt. That way, the system can generate specific user storage keys securely and extract them, storing them on non-protected storage and reloading them when needed in a secure manner.
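The extend-only behaviour of a PCR described earlier can be sketched as follows. This is a software model only (a real PCR lives inside the TPM and cannot be set directly); it starts from the all-zero SHA-1-sized initial register state:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR_new = SHA1(PCR_old || measurement): values can only be
    # accumulated on top of the previous state, never written directly.
    return hashlib.sha1(pcr + measurement).digest()

initial = b"\x00" * 20  # a SHA-1-sized register, initially all zeros

# Extending with AAA and then BBB ...
ab = extend(extend(initial, b"AAA"), b"BBB")
# ... gives a different end result than BBB and then AAA,
# so the register records both the data and its order.
ba = extend(extend(initial, b"BBB"), b"AAA")
```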
https://wiki.gentoo.org/wiki/Integrity/Concepts
A Java program needs to start its execution somewhere. A Java program starts by executing the main method of some class. You can choose the name of the class to execute, but not the name of the method. The method must always be called main. Here is how the main method declaration looks when located inside the Java class declaration from earlier:

package myjavacode;

public class MyClass {
    public static void main(String[] args) {
    }
}

The three keywords public, static and void have a special meaning. Don’t worry about them right now. Just remember that a main() method declaration needs these three keywords. After the three keywords you have the method name. To recap, a method is a set of instructions that can be executed as if they were a single operation. By “calling” (executing) a method you execute all the instructions inside that method. After the method name comes first a left parenthesis, and then a list of parameters. Parameters are variables (data/values) we can pass to the method, which may be used by the instructions in the method to customize its behavior. A main method must always take an array of String objects. You declare an array of String objects like this:

String[] stringArray

In the main() method example earlier I called the String array parameter args, and in this second example I called it stringArray. You can choose the name freely. After the method’s parameter list comes first a left curly bracket ({), then some empty space, and then a right curly bracket (}). Inside the curly brackets you place the Java instructions that are to be executed when the main method is executed. This is also referred to as the method body. In the example above there are no instructions to be executed. The method is empty. Let us insert a single instruction into the main method body.
Here is an example of how that could look:

package myjavacode;

public class MyClass {
    public static void main(String[] args) {
        System.out.println("Hello World, Java Program");
    }
}

Now the main method contains this single Java instruction:

System.out.println("Hello World, Java Program");

This instruction will print out the text Hello World, Java Program to the console. If you run your Java program from the command line, then you will see the output in the command line console (the textual interface to your computer). If you run your Java program from inside an IDE, the IDE normally catches all output to the console and makes it visible to you somewhere inside the IDE.
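To see the String array parameter in action, here is a small variation (the class name ArgPrinter is invented for this illustration); building the message in a separate method keeps main() short:

```java
public class ArgPrinter {

    // Builds a message describing the arguments passed to the program.
    public static String describe(String[] args) {
        return "Received " + args.length + " argument(s)";
    }

    public static void main(String[] args) {
        System.out.println(describe(args));
        // Print each command-line argument on its own line.
        for (String arg : args) {
            System.out.println(arg);
        }
    }
}
```

Running java ArgPrinter one two would print Received 2 argument(s), followed by each argument on its own line.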
https://blog.codehunger.in/java-main-method/
Modifiers are keywords in Java that are used to change the meaning of a variable, method or class. The final keyword can be used with a variable, a method or a class. When a variable is declared as final, its value cannot be changed. The variable acts like a constant. Here is a simple example:

final int a = 5;

When a method is declared as final, that method cannot be overridden. For example:

class StudyTonight
{
    final void learn()
    {
        System.out.println("learning something new");
    }
}

// concept of Inheritance
class Student extends StudyTonight
{
    void learn()
    {
        System.out.println("learning something interesting");
    }

    public static void main(String args[])
    {
        Student object = new Student();
        object.learn();
    }
}

This will give a compile-time error because the method is declared as final and thus cannot be overridden. Don't get confused by the extends keyword; we will learn about this in the Inheritance tutorial, which is next.

Note: A final method can be inherited/used in a subclass, but it cannot be overridden.

A class can also be declared as final, in which case it cannot be extended. A static variable is initialized once and is shared amongst the different objects of a class. All objects of a class having a static variable will share the same instance of that static variable. Static variables are initialized only once. Static variables are used to represent a common property of a class, and they save memory. Suppose there are 100 employees in a company. Each employee has a unique name and employee id, but the company name will be the same for all 100 employees. Here the company name is the common property. So if you create a class to store employee details, the company_name field will be marked as static. Below we have a simple class with one static variable in it:

class Employee
{
    int e_id;
    String name;
    static String company_name = "Studytonight";
}

A static variable is also known as a class variable. There is an important difference between a static/class variable and an instance variable.
Let's take an example and understand the difference:

public class Test
{
    static int x = 100;
    int y = 100;

    public void increment()
    {
        x++;
        y++;
    }

    public static void main(String[] args)
    {
        Test t1 = new Test();
        Test t2 = new Test();
        t1.increment();
        t2.increment();
        System.out.println(t2.y);
        System.out.println(Test.x);   // accessed without any instance of the class
    }
}

101
102

See the difference in the values of the two variables. The static variable x shows the changes made to it by the increment() method on both objects, while the instance variable y shows only the change made to it by the increment() method on that particular instance. A method can also be declared as static. Static methods do not need an instance of their class in order to be called. The main() method is the most common example of a static method; it is declared as static because it is called before any object of the class is created. Let's take an example:

class Test
{
    public static void square(int x)
    {
        System.out.println(x*x);
    }

    public static void main(String[] args)
    {
        square(8);   // static method square() is called without any instance of the class
    }
}

64

A static block is used to initialize static data members. A static block executes before the main() method. Time for an example:

class ST_Employee
{
    int eid;
    String name;
    static String company_name;

    static
    {
        company_name = "StudyTonight";   // static block invoked before main() method
    }

    public void show()
    {
        System.out.println(eid + " " + name + " " + company_name);
    }

    public static void main(String[] args)
    {
        ST_Employee se1 = new ST_Employee();
        se1.eid = 104;
        se1.name = "Abhijit";
        se1.show();
    }
}

The volatile keyword cannot be used with a method or a class. It can only be used with a variable.
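As a closing illustration (the class and field names below are invented for this sketch), the final and static modifiers can be combined: the class cannot be extended, and its fields are shared constants that cannot be reassigned:

```java
// A final class: no other class can extend it.
final class AppConfig {

    // static: one copy shared by the whole class; final: fixed after initialization.
    static final String COMPANY_NAME = "StudyTonight";
    static final int MAX_USERS = 100;

    // A static utility method, callable without any instance of the class.
    static String banner() {
        return COMPANY_NAME + " (max " + MAX_USERS + " users)";
    }
}

// class ExtendedConfig extends AppConfig {}  // would be a compile-time error
```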
https://www.studytonight.com/java/modifier-in-java.php
Creating a simple Augmented Reality (AR) scene is quite simple with Unity AR Foundation. The steps involved might only take a page or two. However, when we create a scene together in this chapter, each step will be explained in context so that you can gain a full understanding of what comprises an AR scene using AR Foundation. But before we do that, we'll take a look at some AR examples provided by Unity, including the AR Foundation Samples project, and build their example scenes for your device. And because that project contains some useful assets, we'll export those as an asset package for reuse in our own projects. In this chapter, we will cover the following topics: To implement the project provided in this chapter, you will need Unity installed on your development computer, connected to a mobile device that supports augmented reality applications (see Chapter 1, Setting Up for AR Development, for instructions). The completed project can be found in this book's GitHub repository at. A great way to learn about how to create AR projects with Unity AR Foundation is to explore the various example projects from Unity. These projects include example scenes, scripts, prefabs, and other assets. By cloning a project and opening an example scene, you can learn how to use AR Foundation, experiment with features, and see some best practices. In particular, consider these projects: Please look through the README file for each of these projects (found on the GitHub project's home page) to gain an understanding of what the project does, any dependencies it has, and other useful information about the project. Each of these repositories contains a full Unity project. That is, they are not simply Unity asset packages you can import into an existing project. Rather, you'll clone the entire repository and open it as its own project. This is typical for demo projects that may have other package dependencies and require preset settings to build and run properly. 
The AR Foundation Samples project is my go-to project for learning various AR Foundation features. It contains many example scenes demoing individual features, often in place of detailed documentation elsewhere (see). Each scene is extremely simple (almost to a fault) as it has the atomic purpose of illustrating a single feature. For example, there are separate scenes for plane detection, plane occlusion, and feathered planes. Notably, the project also contains a main menu scene (Assets/Scenes/ARFoundationMenu/Menu) that launches the other scenes when you build them all into a single executable. I recommend starting with the scene named SimpleAR, which we'll review in a moment. Another is the AR Foundation Demos project, which contains some more complex user scenarios and features not covered in the Samples project. For example, it demonstrates the Unity Onboarding UX assets, which we'll introduce you to in Chapter 4, Creating an AR User Framework. It also covers image tracking, mesh placement, language localization, and some useful shaders (for example, wireframe, shadows, and fog). The XR Interaction Toolkit Examples repository contains two separate Unity projects: one for VR and another for AR. It is largely a placeholder (in my opinion) for things to come. Information – XR Interaction Toolkit The XR Interaction Toolkit from Unity is not covered in this book. It provides components and other assets for developing interactive scenes using hand controllers and device-supported hand gestures. At the time of writing, XR Interaction Toolkit is focused on Virtual Reality (VR) applications (evidenced by its Examples project, which contains seven scenes for VR and just one for AR, which only supports mobile AR) but I believe it is a key part of Unity's XR strategy and architecture for the future. If you are interested in XR Interaction Toolkit for VR, check out my other book, Unity 2020 Virtual Reality Projects – Third Edition, from Packt Publishing. 
Let's get a copy of the AR Foundation Samples project and take a look at the SimpleAR scene. In this section, you are going to build the AR Foundation Samples project and run it on your device. First, please clone the project from its GitHub repository and open it in Unity, as follows:

One of the scenes, SimpleAR, is a basic AR example scene. When the app runs, the user scans their room with their device's camera, and the app detects any horizontal planes, rendering them on the screen. When your user taps on one of these planes, a small red cube will be placed in the environment. You can walk around the room and the cube will remain where it was placed. If you tap again on another location, the cube will be moved there. Let's briefly review this SimpleAR scene's GameObjects:

Now, let's try to build and run the project:

If all goes well, the project will build, be installed on your device, and launch. If you encounter errors while building the project, look at the Console window in the Unity Editor for messages (in the default layout, it's a tab behind the Project window). Read the messages carefully, generally starting from the top. If the fix is not obvious, do an internet search for the message's text, as you're almost certainly not the first person to have had a similar question!

Tip – "Failed to generate ARCore reference image library" error
If you receive an error when attempting to build the project that says something like Failed to generate ARCore reference image library, please make sure there are no spaces in the pathname of your project folder! See for more information.

The main menu will be displayed, as shown in the following screen capture (left panel):

Figure 2.1 – Screenshot of my phone running the arfoundation-samples app and SimpleAR scene

A cool thing about AR Foundation (and this project) is that it can detect the capabilities of the device it is running on at runtime.
This means that the buttons in the main menu will be disabled when AR Foundation detects that the features demoed in that scene are not supported on the device. (The device I'm using in the preceding screen capture is an Android phone, so some iOS-only feature scenes are disabled). Click the Simple AR button to open that scene. You should see a camera video feed on your device's screen. Move your phone slowly in different directions and closer/away. As it scans the environment, feature points and planes will be detected and rendered on the screen. Tap one of the planes to place a cube on the scene, as shown in the right-hand panel of the preceding screen capture. Some of the assets and scripts in the Samples project can be useful for building our own projects. I'll show you how to export them now. Unity offers the ability to share assets between projects using .unitypackage files. Let's export the assets from the AR Foundation Samples project for reuse. One trick I like to do is move all the sample folders into a root folder first. With the arfoundation-samples project open in Unity, please perform the following steps: The Assets/ARF-samples/ folder in the Project window is shown in the following screenshot: Figure 2.2 – The Samples assets folder being exported to a .unitypackage file You can close the arfoundation-samples project now if you want. You now have an asset package you can use in other projects. Tip – Starting a New Project by Copying the Samples Project An alternative to starting a new Unity AR project from scratch is to duplicate the arfoundation-samples project as the starting point for new AR projects. To do that, from your Windows Explorer (or macOS Finder), duplicate the entire project folder and then add it to Unity Hub. This way, you get all the example assets and demo scenes in one place, and it's set up with reasonable default project settings. I often do this, especially for quick demos and small projects. 
Next, we are going to import the Samples assets into your Unity project and build the given SimpleAR scene. As you will see later in this chapter, the Samples project includes some assets you can use in your own projects, saving you time and effort, especially at the start. We will import the .unitypackage file we just exported, and then build the given SimpleAR scene as another test to verify that you're set up to build and run AR applications.

If you already have a Unity project set up for AR development, as detailed in Chapter 1, Setting Up for AR Development, you can open it in Unity and skip this section. If not, perform the following steps, which have been streamlined for your convenience. If you require more details or explanations, please revisit Chapter 1, Setting Up for AR Development. To create and set up a new Unity project with AR Foundation, Universal Render Pipeline, and the new Input System, here are the abbreviated steps:

When prompted to enable the input backend, you can say Yes, but we'll actually change this setting to Both in the next topic when we import the Sample assets into the project. You might want to bookmark these steps for future reference. Next, we'll import the Sample assets we exported from the AR Foundation Samples project.

Now that you have a Unity project set up for AR development, you can import the sample assets into your project. With your project open in Unity, perform the following steps:

Hopefully, all the assets will import without any issues. However, there may be some errors while compiling the Samples scripts. This could happen if the Samples project is using a newer version of AR Foundation than your project and is referencing API functions for features your project does not have installed. The simplest solution is to upgrade your project's AR Foundation package to the same or a later version than the one used by the Samples project. To do so, perform the following steps:

This is not as threatening as it may sound.
"Unsafe" code usually means that something you installed is calling C++ code from the project that is potentially unsafe from the compiler's point of view. Enabling unsafe code in Unity is usually not a problem unless, for example, you are publishing WebGL to a WebPlayer, which we are not. Finally, you can verify your setup by building and running the SimpleAR scene, this time from your own project. Perform the following steps: The app should successfully build and run on your device. If you encounter any errors, please review each of the steps detailed in this chapter and Chapter 1, Setting Up for AR Development. When the app launches, as described earlier, you should see a camera video feed on your screen. Move your phone slowly in different directions and closer/away. As it scans the environment, feature points and planes will be detected and rendered on the screen. Tap one of these planes to place a cube on the scene. Your project is now ready for AR development! In this section, we'll create a scene very similar to SimpleAR (actually, more like the Samples scene named InputSystem_PlaceOnPlane) but we will start with a new empty scene. We'll add AR Session and AR Session Origin objects provided by AR Foundation to the scene hierarchy, and then add trackable feature managers for planes and point clouds. In the subsequent sections of this chapter, we'll set up an Input System action controller, write a C# script to handle any user interaction, and create a prefab 3D graphic to place in the scene. So, start the new scene by performing the following steps: Unity allows you to use a Scene template when creating a new scene. The one named Basic (Built-in) is comparable to the default new scene in previous versions of Unity. Your scene Hierarchy should now look as follows: Figure 2.3 – Starting a scene Hierarchy We can now take a closer look at the objects we just added, beginning with the AR Session object. 
The AR Session object is responsible for enabling and disabling augmented reality features on the target platform. When you select the AR Session object in your scene Hierarchy, you can see its components in the Inspector window, as shown in the following screenshot:

Figure 2.4 – The AR Session object's Inspector window

Each AR scene must include one (and only one) AR Session. It provides several options. Generally, you can leave these as their default values. The Attempt Update option instructs the AR Session to try and install the underlying AR support software on the device if it is missing. This is not required for all devices. iOS, for example, does not require any additional updates if the device supports AR. On the other hand, to run AR apps on Android, the device must have the ARCore services installed. Most AR apps will install these for you if they are missing, and that is what the Attempt Update feature of AR Session does. If necessary, when your app launches and support is missing or needs an update, AR Session will attempt to install Google Play Services for AR (see). If the required software is not installed, then AR will not be available on the device. You could choose to disable automatic updates and implement them yourself to customize the user onboarding experience.

Note
The Match Frame Rate option in the Inspector window is obsolete. Ordinarily, you would want the frame updates of your apps to match the frame rate of the physical device, and generally, there is no need to tinker with this. If you need to tune it, you should control it via scripting (see).

Regarding Tracking Mode, you will generally leave it set to Position and Rotation, as this specifies that your AR device is tracking in the physical world 3D space using both its XYZ position and its rotation around each axis. This is referred to as 6DOF, for six-degrees-of-freedom tracking, and is probably the behavior that you expect.
But for face tracking, for example, we should set it to Rotation Only, as you'll see in Chapter 9, Selfies: Making Funny Faces. The AR Session GameObject also has an AR Input Manager component that manages our XR Input Subsystem for tracking the device's pose in a physical 3D space. It reads input from the AR Camera's AR Pose Driver (discussed shortly). There are no options for the component, but it is required for device tracking. We also added an AR Session Origin GameObject to the Hierarchy. Let's look at that next.

The AR Session Origin will be the root object of all trackable objects. Having a root origin keeps the Camera and any trackable objects in the same coordinate space, with their positions relative to each other. This session (or device) space includes the AR Camera and any trackable features that have been detected in the real-world environment by the AR software. Otherwise, detected features, such as planes, won't appear in the correct place relative to the Camera.

Tip – Scaling Virtual Scenes in AR
If you plan to scale your AR scene, place your game objects as children of AR Session Origin and then scale the parent AR Session Origin transform, rather than the child objects themselves. For example, consider a world-scale city map or game court resized to fit on a tabletop. Don't scale the individual objects in the scene; instead, scale everything by resizing the root session origin object. This will ensure the other Unity systems, especially physics and particles, retain their scale relative to the camera space. Otherwise, things such as gravity, calculated in meters per second squared, and particle rendering could mess up.

When you select the AR Session Origin object in your scene Hierarchy, you can see its components in the Inspector window, as shown in the following screenshot:

Figure 2.5 – The AR Session Origin object's Inspector window

At the time of writing, the default AR Session Origin object simply has an AR Session Origin component.
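The scaling tip can be illustrated with a small, framework-free sketch. This is plain .NET (System.Numerics, not Unity's UnityEngine types), and the buildings and numbers are hypothetical, but the arithmetic is the same as scaling the session origin's transform:

```csharp
using System;
using System.Numerics;

// A framework-free sketch of the scaling tip above: scaling a single parent
// "session origin" rescales every child together, so all relative distances
// shrink by the same factor -- unlike scaling child objects one by one.

// World position of a child, given its local position under a uniformly
// scaled session origin placed at the world origin.
Vector3 ToWorld(Vector3 localPosition, float originScale) =>
    localPosition * originScale;

// Two hypothetical buildings of a world-scale city map, 10 m apart.
var buildingA = new Vector3(0f, 0f, 0f);
var buildingB = new Vector3(10f, 0f, 0f);

// Shrink the whole scene to tabletop size by scaling the origin once.
float originScale = 0.01f;
var worldA = ToWorld(buildingA, originScale);
var worldB = ToWorld(buildingB, originScale);

// The 10 m gap becomes roughly 10 cm, and every other relative distance
// in the scene shrinks consistently along with it.
float gap = Vector3.Distance(worldA, worldB);
Console.WriteLine($"Gap after scaling: {gap} m");
```

Because physics and particle systems run in the unscaled child space, leaving the children untouched and scaling only the root keeps effects such as gravity behaving sensibly relative to the camera.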
We'll want to build out its behavior by adding more components in a moment. The Session Origin's Camera property references its own child AR Camera GameObject, which we'll look at next. The AR Camera object is a child of AR Session Origin. Its Inspector window is shown in the following screenshot: Figure 2.6 – The AR Camera object's Inspector window During setup, we tagged the AR Camera as our MainCamera. This is not required but it is a good practice to have one camera in the scene tagged as MainCamera, for example, for any code that may use Camera.main, which is a shortcut for the find by tag name. As its name implies, the AR Camera object includes a Camera component, required in all Unity scenes, which determines what objects to render on your screen. The AR one has mostly default values. The Near and Far Clipping planes have been adjusted for typical AR applications to (0.1, 20) meters. In AR apps, it's not unusual to place the device within inches of a virtual object, so we wouldn't want it to be clipped. Conversely, in an AR app, if you walk more than 20 meters away from an object that you've placed in the scene, you probably don't need it to be rendered at all. Importantly, rather than using a Skybox, as you'd expect in non-AR scenes, the camera's Background is set to a Solid black color. This means the background will be rendered using the camera's video feed. This is controlled using the AR Camera Background component of the AR Camera. In an advanced application, you can even customize how the video feed is rendered, using a custom video material (this topic is outside the scope of this book). Similarly, on a wearable AR device, a black camera background is required, but with no video feed, to mix your virtual 3D graphics atop the visual see-through view. The video feed source is controlled using the AR Camera Manager component. 
You can see, for example, that Facing Direction can be changed from World to User for a selfie face tracking app (see Chapter 9, Selfies: Making Funny Faces). The Light Estimation options are used when you want to emulate real-world lighting when rendering your virtual objects. We'll make use of this feature later in this chapter. You also have the option to disable Auto Focus if you find that the camera feature is inappropriate for your AR application.

Tip – When to Disable Camera Auto Focus for AR
Ordinarily, I disable Auto Focus for AR applications. When the software uses the video feed to help detect planes and other features in the environment, it needs a clear, consistent, and detailed video feed, not one that is continually changing as Auto Focus adjusts. A shifting focus would make it difficult for the AR tracking algorithms to run accurately. On the other hand, a selfie face tracking app may be fine with Auto Focus enabled, and it could improve the user experience when the area behind the user loses focus due to depth of field.

The AR Pose Driver component is responsible for updating the AR Camera's transform as it tracks the device in the real world. (There are similar components for VR headsets and hand controllers, for instance.) This component relies on the XR plugin and the Input XR Subsystem to supply the positional tracking data (see). Our next step is to add Plane and Point Cloud visualizers to the scene.

When your application runs, you'll ask the user to scan the room for the AR software to detect features in the environment, such as depth points and flat planes. Usually, you'll want to show these to the user as they're detected. We do this by adding the corresponding feature managers to the AR Session Origin game object. For example, to visualize planes, you'll add an AR Plane Manager to the AR Session Origin object, while to visualize point clouds, you'll add an AR Point Cloud Manager.
AR Foundation supports detecting and tracking the following features:

Not all of these are supported on every platform. See the documentation for your current version of AR Foundation (for example, visit docs.unity3d.com/Packages/com.unity.xr.arfoundation@<version>/manual/index.html#platform-support and select your version at the top left). We will be using many of these in various projects throughout this book. Here, we will use the Plane and Point Cloud trackables. Please perform the following steps to add them:

You'll notice that the Point Cloud Manager has an empty slot for the Point Cloud Prefab visualizer and that the Plane Manager has an empty slot for the Plane Prefab visualizer. We'll use prefabs from the Samples project, as follows:

There are alternative point cloud visualizer prefabs you might like to try, such as AR Point Cloud Debug Visualizer and AllPointCloudPointsPrefab. There are also alternative plane visualizer prefabs to try, such as AR Plane Debug Visualizer, AR Feathered Plane Fade, and CheckeredPlane. We're using the visualizer prefabs we got from the Samples project. Later in this chapter, we'll talk more about prefabs, take a closer look at the visualizer ones, and learn how to edit them to make our own custom visualizers. First, we'll add the AR Raycast Manager to the scene.

There's another component I know we're going to need soon, known as AR Raycast Manager. This will be used by our scripts to determine if a user's screen touch corresponds to a 3D trackable feature detected by the AR software. We're going to use it in our script to place an object on a plane. Perform the following steps to add it to the scene:

The AR Session Origin GameObject with the manager components we added now looks like this in the Inspector window:

Figure 2.7 – AR Session Origin with various manager components

One more thing that's handy to include is light estimation, which helps with rendering your virtual objects more realistically.
By adding a Light Estimation component to your Directional Light source, the AR camera can use this information when rendering your scene to try and match the scene's lighting more closely to the real-world environment. To add light estimation, perform the following steps: Good! I think we should try to build and run what we have done so far and make sure it's working. Currently, the scene initializes an AR Session, enables the AR camera to scan the environment, detects points and horizontal planes, and renders these on the screen using visualizers. Let's build the scene and make sure it runs: The app should successfully build and run on your device. If you encounter any errors, please read the error messages carefully in the Console window. Then, review each of the setup steps detailed in this chapter and Chapter 1, Setting Up for AR Development. When the app launches, you should see a video feed on your screen. Move the device slowly in different directions and closer/away. As it scans the environment, feature points and planes will be detected and rendered on the screen using the visualizers you chose. Next, let's add the ability to tap on one of the planes to instantiate a 3D object there. We will now add the ability for the user to tap on a plane and place a 3D virtual object in the scene. There are several parts to implementing this: Let's begin by creating an input action for a screen tap. We are going to use the Unity Input System package for user input. If the Input System is new to you, the steps in this section may seem complicated, but only because of its great versatility. The Input System lets you define Actions that separate the logical meaning of the input from the physical means of the input. Using named actions is more meaningful to the application and programmers. Note – Input System Tutorial For a more complete tutorial on using the Input System package, see. Here, we will define a PlaceObject action that is bound to screen tap input data. 
We'll set this up now, and then use this input action in the next section to find the AR plane that was tapped and place a virtual object there. Before we begin, I will assume you have already imported the Input System package via Package Manager and set Active Input Handling to Input System Package (or Both) in Player Settings. Now, follow these steps:

With that, we've created a data asset named AR Input Actions that contains an action map named ARTouchActions, which has one action, PlaceObject, that detects a screen touch. It returns the touch position as a 2D vector (Vector2) with the X, Y values in pixel coordinates. The input action asset is shown in the following screenshot:

Figure 2.8 – Our AR Input Actions set up for screen taps

Now, we can add the input actions to the scene. This can be done via a Player Input component. For our AR scene, we'll add a Player Input component to the AR Session Origin, as follows:

Information – Input System Behavior Types
Unity and C# provide different ways for objects to signal other objects. The Player Input component lets you choose how you want input actions to be communicated, via its Behavior setting. The options are as follows:

Send Messages: Will send action messages to any components on the same GameObject (). As we'll see, your message handler must be named with the "On" prefix (for example, OnPlaceObject) and receives an InputValue argument (docs.unity3d.com/Packages/com.unity.inputsystem@<version>/api/UnityEngine.InputSystem.InputValue.html).

Broadcast Messages: Like Send Messages, Broadcast Messages will send messages to components on this GameObject and all its children ().

Invoke Unity Events: You can set event callback functions using the Inspector or in scripts (). The callback function receives an InputAction.CallbackContext argument (docs.unity3d.com/Packages/com.unity.inputsystem@<version>/api/UnityEngine.InputSystem.InputAction.CallbackContext.html).

Invoke C# Events: You can set event listeners in scripts ().
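To make the Send Messages naming convention concrete, here is a framework-free sketch. This is not Unity's actual implementation (Unity dispatches messages internally to components); it is just a plain C# model of delivering an action named PlaceObject to a handler named "On" + the action name, with the handler names and tap value being hypothetical:

```csharp
using System;
using System.Collections.Generic;

// A framework-free model of the Send Messages convention: an action named
// "PlaceObject" is delivered to a handler named "OnPlaceObject", if one
// exists on the receiving object; otherwise nothing happens.

var log = new List<string>();

// handlers that a hypothetical component exposes, keyed by method name
var handlers = new Dictionary<string, Action<object>>
{
    ["OnPlaceObject"] = value => log.Add($"OnPlaceObject({value})")
};

// deliver an action message; returns false when no matching handler exists
bool Send(string actionName, object value)
{
    if (!handlers.TryGetValue("On" + actionName, out var handler))
        return false;                  // no handler registered for this action
    handler(value);
    return true;
}

bool delivered = Send("PlaceObject", "(540, 960)");  // a tap position
bool ignored = Send("Jump", null);                   // no OnJump handler
```

The key point for the next section is the naming contract: because our Player Input component uses Send Messages, our script must supply a method whose name is "On" followed by the action name.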
To learn more about the Player Input component, see docs.unity3d.com/Packages/com.unity.inputsystem@<version>/api/UnityEngine.InputSystem.PlayerInput.html. I've decided to use Send Messages here, so we'll need to write a script with an OnPlaceObject function, which we'll do next. But first, I'll provide a quick introduction to Unity C# programming.

Writing C# scripts is an essential skill for every Unity developer. You don't need to be an expert programmer, but you cannot avoid writing some code to make your projects work. If you are new to coding, you can simply follow the instructions provided here, and over time, you'll get more comfortable and proficient. I also encourage you to go through some of the great beginner tutorials provided by Unity () and others, including the following:

Given that, I will offer some brief explanations as we work through this section. But I'll assume that you have at least a basic understanding of C# language syntax, common programming vocabulary (for example, class, variable, and function), using an editor such as Visual Studio, and how to read error messages that may appear in your Console window due to typos or other common coding mistakes.

We're going to create a new script named PlaceObjectOnPlane. Then, we can attach this script as a component to a GameObject in the scene. It will then appear in the object's Inspector window. Let's begin by performing the following steps:

This creates a new C# script with the .cs file extension (although you don't see the extension in the Project window).
As you can see in the following initial script content of the template, the PlaceObjectOnPlane.cs file declares a C# class, PlaceObjectOnPlane, that has the same name as the .cs file (the names must match; otherwise, it will cause compile errors in Unity):

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class PlaceObjectOnPlane : MonoBehaviour
    {
        // Start is called before the first frame update
        void Start()
        {
        }

        // Update is called once per frame
        void Update()
        {
        }
    }

The first three lines in this script are using directives, which declare an SDK library, or namespace, that will be used in the script. When a script references external symbols, the compiler needs to know where to find them. In this case, we're saying that we'll potentially be using standard .NET system libraries for managing sets of objects (collections). And here, we are using the UnityEngine API.

One of the symbols defined by UnityEngine is the MonoBehaviour class. You can see that our PlaceObjectOnPlane class is declared as a subclass of MonoBehaviour. (Beware its British spelling, "iour"). Scripts attached to a GameObject in your scene must be a subclass of MonoBehaviour, which provides a litany of features and services related to the GameObject where it is attached. For one, MonoBehaviour provides hooks into the GameObject life cycle and the Unity game loop. When a GameObject is created at runtime, for example, its Start() function will automatically be called. This is a good place to add some initialization code. The Unity game engine's main purpose is to render the current scene view every frame, perhaps 60 times per second or more. Each time the frame is updated, your Update() function will automatically be called. This is where you put any runtime code that needs to be run every frame. Try to keep the amount of work that's done in Update() to a minimum; otherwise, your app may feel slow and sluggish.
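The Start/Update relationship can be pictured with a tiny, framework-free game loop. This is a toy model only, assuming a three-frame run; Unity's real player loop has many more stages (Awake, FixedUpdate, LateUpdate, and so on):

```csharp
using System;
using System.Collections.Generic;

// A toy, framework-free model of the MonoBehaviour life cycle described
// above: the engine calls Start() once when the object comes alive, then
// Update() once per rendered frame.

var calls = new List<string>();

void Start() => calls.Add("Start");    // one-time initialization hook
void Update() => calls.Add("Update");  // per-frame hook; keep this cheap

// simulate the engine running the object for three frames
Start();
for (int frame = 0; frame < 3; frame++)
    Update();

Console.WriteLine(string.Join(", ", calls));
```

The ordering is what matters: initialization work belongs in Start because it runs exactly once, while Update runs every frame, which is why heavy work there makes an app feel sluggish.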
You can learn more about the MonoBehaviour class here:. To get a complete picture of the GameObject and MonoBehaviour scripts' life cycles, take a look at this flowchart here:. We can now write our script. Since this is the first script in this book, I'll present it slowly. The purpose of the PlaceObjectOnPlane script is to place a virtual object on the AR plane when and where the user taps. We'll outline the logic first (in C#, any text after // on the same line is a comment):

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.InputSystem;

    public class PlaceObjectOnPlane : MonoBehaviour
    {
        void OnPlaceObject(InputValue value)
        {
            // get the screen touch position
            // raycast from the touch position into the 3D scene looking for a plane
            // if the raycast hit a plane then
            //     get the hit point (pose) on the plane
            //     if this is the first time placing an object,
            //         instantiate the prefab at the hit position and rotation
            //     else
            //         change the position of the previously instantiated object
        }
    }

As it turns out, in this script, there is no need for an Update function as it is only used for frame updates, which this script can ignore. This script implements OnPlaceObject, which is called when the user taps the screen. As we mentioned previously, the Player Input component we added to the AR Session Origin uses the Send Messages behavior and thus expects our script to implement OnPlaceObject for the PlaceObject action. It receives an InputValue. Notice that I also added a line using UnityEngine.InputSystem;, which defines the InputValue class.

First, we need to get the screen touch position from the input value passed in. Add the following code, which declares and assigns it to the touchPosition local variable:

    // get the screen touch position
    Vector2 touchPosition = value.Get<Vector2>();

The next step is to figure out if the screen touch corresponds to a plane that was detected in the AR scene.
AR Foundation provides a solution by using the AR Raycast Manager component that we added to the AR Session Origin GameObject earlier. We'll use it in our script now. Add these lines to the top of your script:

    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

Then, inside the OnPlaceObject function, add the following code:

    // raycast from the touch position into the 3D scene looking for a plane
    ARRaycastManager raycaster = GetComponent<ARRaycastManager>();
    List<ARRaycastHit> hits = new List<ARRaycastHit>();

    // if the raycast hit a plane then
    if (raycaster.Raycast(touchPosition, hits, TrackableType.PlaneWithinPolygon))
    {
    }

Firstly, we get a reference to the ARRaycastManager component, assigning it to raycaster. We declare and initialize a list of ARRaycastHit, which will be populated when the raycast finds something. Then, we call raycaster.Raycast(), passing in the screen's touchPosition, and a reference to the hits list. If it finds a plane, it'll return true and populate the hits list with details. The third argument instructs raycaster.Raycast on what kinds of trackables can be hit. In this case, PlaneWithinPolygon filters for 2D convex-shaped planes.

Information – For More Information on AR Raycasting
For more information on using ARRaycastManager, see docs.unity3d.com/Packages/com.unity.xr.arfoundation@<version>/manual/raycast-manager.html. For a list of trackable types you can pass in, see docs.unity3d.com/Packages/com.unity.xr.arsubsystems@<version>/api/UnityEngine.XR.ARSubsystems.TrackableType.html.

The code inside the if statement will only be executed if raycaster.Raycast returns true; that is, if the user tapped a location on the screen that casts to a trackable plane in the scene. In that case, we must create a 3D GameObject there. In Unity, creating a new GameObject is referred to as instantiating the object. You can read more about it here:.

First, let's declare a variable, placedPrefab, to hold a reference to the prefab that we want to instantiate on the selected plane. Using the [SerializeField] directive permits the property to be visible and settable in the Unity Inspector. We'll also declare a private variable, spawnedObject, that holds a reference to the instantiated object.
Add the following code to the top of the class:

    public class PlaceObjectOnPlane : MonoBehaviour
    {
        [SerializeField] GameObject placedPrefab;
        GameObject spawnedObject;

Now, inside the if statement, we will instantiate a new object if this is the first time the user has tapped the screen, and then assign it to spawnedObject. If the object had already been spawned and the user taps the screen again, we'll move the object to the new location instead. Add the following highlighted code:

    public void OnPlaceObject(InputValue value)
    {
        // get the screen touch position
        Vector2 touchPosition = value.Get<Vector2>();

        // raycast from the touch position into the 3D scene looking for a plane
        ARRaycastManager raycaster = GetComponent<ARRaycastManager>();
        List<ARRaycastHit> hits = new List<ARRaycastHit>();

        // if the raycast hit a plane then
        if (raycaster.Raycast(touchPosition, hits, TrackableType.PlaneWithinPolygon))
        {
            // get the hit point (pose) on the plane
            Pose hitPose = hits[0].pose;

            // if this is the first time placing an object,
            if (spawnedObject == null)
            {
                // instantiate the prefab at the hit position and rotation
                spawnedObject = Instantiate(placedPrefab, hitPose.position, hitPose.rotation);
            }
            else
            {
                // change the position of the previously instantiated object
                spawnedObject.transform.SetPositionAndRotation(
                    hitPose.position, hitPose.rotation);
            }
        }
    }

Raycast populates a list of hit points, as there could be multiple trackable planes in line where the user has tapped the screen. They're sorted closest to furthest, so in our case, we're only interested in the first one, hits[0]. From there, we get the point's Pose, a simple structure that includes 3D position and rotation values. These, in turn, are used when placing the object. After that, save the script file.

Now, back in Unity, we'll attach our script as a component to AR Session Origin by performing the following steps:

You'll notice that there is a Placed Prefab property in the component's Inspector window. This is the placedPrefab variable we declared in the script. Let's populate it with the red cube prefab provided by the Samples assets.
Our script, as a component on the AR Session Origin GameObject, should now look as follows: Figure 2.9 – PlaceObjectOnPlane as a component with its Placed Prefab slot populated Let's try it! We're now ready to build and run the scene. If you've built the scene before, in the previous section, you can go to File | Build And Run to start the process. Otherwise, perform the following steps to build and run the scene: The app should successfully build and run on your device. As usual, if you encounter any errors, please read the error messages carefully in the Console window. When the app launches, you should see a video feed on your screen. Move your device slowly in different directions and closer/away. As it scans the environment, feature points and planes will be detected and rendered on the screen. If you tap the screen on a tracked plane, the red cube should be placed at that location. Refactoring is reworking a script to make the code cleaner, more readable, more organized, more efficient, or otherwise improved without changing its behavior or adding new features. We can now refactor our little script to make the following improvements: The modified script is shown in the following code block. 
The changed code is highlighted, beginning with the top part, which contains the new class variables and the Start() function:

    public class PlaceObjectOnPlane : MonoBehaviour
    {
        [SerializeField] GameObject placedPrefab;
        GameObject spawnedObject;
        ARRaycastManager raycaster;
        List<ARRaycastHit> hits = new List<ARRaycastHit>();

        void Start()
        {
            raycaster = GetComponent<ARRaycastManager>();
        }

Now, update the OnPlaceObject function, as follows:

    public void OnPlaceObject(InputValue value)
    {
        // get the screen touch position
        Vector2 touchPosition = value.Get<Vector2>();

        // raycast from the touch position into the 3D scene looking for a plane
        // REMOVE NEXT TWO LINES
        // ARRaycastManager raycaster = GetComponent<ARRaycastManager>();
        // List<ARRaycastHit> hits = new List<ARRaycastHit>();

        // if the raycast hit a plane then
        if (raycaster.Raycast(touchPosition, hits, TrackableType.PlaneWithinPolygon))
        {

The rest of the function is unchanged. Please save the script, then build and run it one more time to verify it still works.

Information – Public versus Private and Object Encapsulation
One of the driving principles of object-oriented programming is encapsulation, where an object keeps its internal variables and functions private, and only exposes properties (public variables) and methods (public functions) to other objects when they're intended to be accessible. C# provides the private and public declarations for this purpose. And in C#, any symbol not declared public is assumed to be private. In Unity, any public variables are also visible (serialized) in the Inspector window when the script is attached to a GameObject as a component. Ordinarily, private variables are not visible. Using the [SerializeField] directive enables a private variable to also be visible and modifiable in the Inspector window.

Congratulations! It's not necessarily a brilliant app, and it's modeled after the example scenes found in the Samples projects, but you started from File | New Scene and built it up all on your own.
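The core place-or-move decision in the script can also be exercised outside Unity. The following framework-free sketch uses plain .NET types and hypothetical hit positions in place of Unity's raycast API: the first valid hit "instantiates" once, later hits only move the object, and a miss changes nothing:

```csharp
using System;
using System.Numerics;

// A framework-free sketch of the script's place-or-move logic, with a
// position standing in for the spawnedObject reference and an array of
// hypothetical hit positions standing in for the ARRaycastHit list.

int instantiateCalls = 0;        // how many times we would call Instantiate()
Vector3? spawnedPosition = null; // null until the object is first placed

void OnPlaceObject(Vector3[] hitPositions)
{
    if (hitPositions.Length == 0)
        return;                        // the tap did not hit a trackable plane

    Vector3 hitPose = hitPositions[0]; // hits are sorted closest-first
    if (spawnedPosition == null)
        instantiateCalls++;            // first tap: instantiate the prefab once
    spawnedPosition = hitPose;         // first tap places it; later taps move it
}

OnPlaceObject(new[] { new Vector3(0f, 0f, 1f) });  // first tap: place
OnPlaceObject(new[] { new Vector3(1f, 0f, 2f) });  // second tap: move
OnPlaceObject(Array.Empty<Vector3>());             // miss: no change
```

This is the same null-check pattern as the real script: the spawned reference doubles as a "have we placed anything yet?" flag, which is why only one cube ever appears no matter how many times you tap.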
Now, let's have a little fun with it and find a 3D model that's a little more interesting than a little red cube.

The prefab object we've been placing on the planes in this chapter is the one named AR Placed Cube, which we imported from the AR Foundation Samples project. Let's find a different, more interesting, model to use instead. In the process, we'll learn a bit more about GameObjects, Transforms, and prefabs.

I think a good place to start is by taking a closer look at the AR Placed Cube prefab we've been using. Let's open it in the Editor by performing the following steps:

We are now editing the prefab, as shown in the following screenshot (I have rearranged my windows differently from the default layout):

Figure 2.10 – Editing the AR Placed Cube prefab

The Scene window now shows the isolated prefab object, and the Hierarchy window is the hierarchy for just the prefab itself. At its root is an "empty" GameObject named AR Placed Cube; it has only one component – Transform, which is required of all GameObjects. Its Transform is reset to Position (0, 0, 0), Rotation (0, 0, 0), and Scale (1, 1, 1).

Beneath the AR Placed Cube is a child Cube object, as depicted in the preceding screenshot. This cube is scaled to (0.05, 0.05, 0.05). These units are in meters (0.05 meters is about 2 inches per side). And that's its size when it's placed in the physical environment with our app. You'll also notice that the child Cube's X-Y-Z Position is (0, 0.025, 0), where Y in Unity is the up-axis. As 0.025 is half of 0.05, we've raised the cube half its height above the zero X-Z plane.

The origin of a Cube is its center. So, the origin of the AR Placed Cube is the bottom of the child Cube. In other words, when we place this prefab in the scene, the cube's bottom side rests on the pose position, as determined by the hit raycast. Parenting a model with an empty GameObject to normalize its scale and adjust its origin is a common pattern in Unity development.
Now, let's find a different model for our app and normalize its Transform as we make it a prefab.

To find a 3D model, feel free to search the internet for a 3D model you like. If you're a 3D artist, you may already have models of your own. You will want a relatively simple, low-poly model (that is, one with not many polygons). Look for files in .FBX or .OBJ format, as they will import into Unity without conversion. I found a model of a virus microbe on cgtrader.com here:. It is a free download and royalty-free, has 960 polygons, and is available in FBX format. My file is named uploads_files_745381_Microbe.fbx.

Once you've found a file and downloaded it to your computer, perform the following steps to import it into Unity:

Now, we'll make a prefab of the model and make sure it's been scaled to a usable size. I like to use a temporary Cube object to measure it:

The model I found did not come with a material, so let's create one for it now. With the prefab we're working on still open for editing, perform the following additional steps:

My prefab now looks like this while open for editing (I have rearranged my windows so that they're different from the default layout):

Figure 2.11 – Editing my Virus prefab

We're now ready to add this prefab to the scene. After that, we will build and run the finished project.

We now have our own prefab to place in the AR scene. Let's add it to the Place Object On Plane component, as follows:

As shown in the following screenshot, I have infected my desk with a virus!

Figure 2.12 – Running the project shows a virus on my keyboard

There it is. You've successfully created an augmented reality scene that places a virtual 3D model in the real world. Perhaps you wouldn't have chosen a virus, but it's a sign of the times! You're now ready to proceed with creating your own AR projects in Unity.

In this chapter, we examined the core structure of an augmented reality scene using AR Foundation.
We started with the AR Foundation Samples project from Unity, building it to run on your device, and then exported its assets into an asset package for reuse. Then, we imported these sample assets into our own project, took a closer look at the SimpleAR scene, and built that to run on your device.

Then, starting from a new empty scene, we built our own basic AR demo from scratch that lets the user place a virtual 3D object in the physical world environment. For this, we added AR Session and AR Session Origin game objects and added components for tracking and visualizing planes and point clouds. Next, we added user interaction, first by creating an Input Action controller that responds to screen touches, and then by writing a C# script to receive the OnPlaceObject action message. This function performs a raycast from the screen touch position to find a pose point on a trackable horizontal plane. It then instantiates an object on the plane at that location.

We concluded this chapter by finding a 3D model on the internet, importing it into the project, creating a scaled prefab from the model, and using it as the virtual object placed into the scene. Several times along the way, we did a Build And Run of the project to verify that our work at that point runs as expected on the target device.

In the next chapter, we will look at tools and practices to facilitate developing and troubleshooting AR projects, which will help improve the developer workflow, before moving on to creating more complete projects in subsequent chapters.
https://ebookreading.net/view/book/EB9781838982591_7.html
var uses local type inference to deduce a variable's type. It was introduced in Java 10. var is used in the body of a method. It exists in order to remove some forms of repeated type names in code. The benefit is that the code becomes more terse and legible. Using var has a significant effect on the appearance of code in general, since it can be used in many places.

A simple example to illustrate:

    ByteArrayOutputStream weather = new ByteArrayOutputStream();
    var weather = new ByteArrayOutputStream();

There are many cases where a straightforward use of var will improve the implementation of a method (as above). But you shouldn't use var if it hides essential type information from the reader. Using variable names which clarify the meaning of code is always a good practice. When using var, the type information may not be completely explicit, so using good variable names becomes especially important.

var is not a reserved keyword in the Java language. (This is a common misconception.) For example, this code still compiles in Java 10, and it wouldn't compile if var was a keyword:

    String var = "Landau and Lifshitz";

var is actually a reserved type name. If your code has a type named var, then conflicts will indeed occur. But given common Java naming conventions, the likelihood of such conflicts is probably infinitesimal.

var can be used for these kinds of variables:

var cannot be used for these kinds of variables:

Examples

    import java.io.ByteArrayOutputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    /** Examples of using var. @since Java 10 */
    public final class UsingVar {

      /**
       It's glaringly obvious what the type is, because it appears on both the
       LHS (left-hand side) and the RHS (right-hand side). This is likely the
       most common use case for var.
      */
      void repeatedTypeName() {
        //without var
        ByteArrayOutputStream weather = new ByteArrayOutputStream();

        //with var
        var readings = new ByteArrayOutputStream();
      }

      /**
       Small improvement to iteration with enhanced-for loops.
       The iteration variable can be a var.
      */
      void enhancedForIteration() {
        class Nucleus{/*elided*/} //a toy type to play with

        //without var
        List<Nucleus> nuclei = new ArrayList<>();
        for (Nucleus nucleus : nuclei) {
          //..elided
        }

        //with var
        //note how the type moves to the RHS of the declaration
        var molecule = new ArrayList<Nucleus>();
        for (var nucleus : molecule) {
          //..elided
        }
      }

      /** With simple integer-loops, you can use var, but it doesn't appear very useful. */
      void oldStyleForLoop() {
        for(int i = 0; i < 10; ++i) {
          //..elided
        }
        for(var i = 0; i < 10; ++i) {
          //..elided
        }
      }

      /** Sometimes a local variable exists only as a step in a chain of operations. */
      void chainingIntermediateObjects() throws IOException {
        //without var
        Path weather = Paths.get("weather.utf8"); //the intermediate object
        byte[] weatherBytes = Files.readAllBytes(weather);

        //with var
        var readings = Paths.get("readings.utf8");
        byte[] readingsBytes = Files.readAllBytes(readings);
      }

      /** Again, the type appears both on the LHS and the RHS. */
      void repeatedTypeNameTryWithResources() throws IOException {
        //without var
        try (FileInputStream weather = new FileInputStream("weather.utf8")) {
          //..elided
        }

        //with var
        try (var readings = new FileInputStream("readings.utf8")) {
          //..elided
        }
      }

      /**
       There are occasions when iterating over a Map is particularly repetitive,
       and benefits from using var.
      */
      void mapIteration() {
        //map course id to a list of student names

        //without var
        Map<String, List<String>> courseToStudents = new LinkedHashMap<>();
        for(Map.Entry<String, List<String>> entry : courseToStudents.entrySet()) {
          List<String> students = entry.getValue();
        }

        //with var
        //note how the types move to the RHS of the declaration
        var courseToStudentsWithVar = new LinkedHashMap<Integer, List<String>>();
        for (var entry : courseToStudentsWithVar.entrySet()) {
          var students = entry.getValue();
        }
      }

      /** Yes, the returned object can be a var. */
      List<LocalDate> returnedObject() {
        var result = new ArrayList<LocalDate>();
        //..some computation..
        return result;
      }
    }
http://www.javapractices.com/topic/TopicAction.do?Id=284
Everything posted by Rodrigo

- Honestly I'd do my best to accommodate both instances to update the X value on the slides' parent container, and do some math in order to update the value from the handle Draggable instance in a way that reflects the drag amount of the handle. Another possibility is to create a regular TweenLite instance that should be updated via the handle's Draggable instance. For example, use a regular Draggable with snapping for the slides container. At the same time, create a regular TweenLite instance (this should remain paused all the time) that moves the slides container all the way to the left using the X value on it, and update that TweenLite instance using Draggable, like this: That is, IMHO, basically the simplest way to achieve that. Happy Tweening!!

- Hi Richard and welcome to the GreenSock forums. It seems that this could be solved by using the update() method all Draggable instances have, which is very useful when something outside the realm of the Draggable instance itself updates its target. On top of that, right now you're updating two different things, which could also be a source of problems. Let's go backwards. First, you're updating two different things. On the slides, the Draggable instance is set to Scroll Left, while the handle Draggable instance updates the slides' X position. Those are two completely different things. My advice is to create a consistent update in both cases: the handle callback and the property being updated by the Draggable instance that controls the slides. Second, if you're updating the same property that a Draggable instance is affecting, then the recommendation is to use the update method in order to tell that Draggable instance that the value of that specific property has been updated outside the Draggable instance.
In that case GSAP updates the value in the Draggable instance, and the next time the user drags, that updated value will be reflected: Unfortunately I don't have enough time to fiddle with your code and try to create a working solution, but hopefully this will help you get started. Happy Tweening!!

- Ha!! yeah the extra rest can do that

- Ok my take on this is to use onStart instead of onComplete in the out animation instance, and in that callback use a delayedCall, which is a timer but one that runs with GSAP's ticker, so everything will be synced perfectly if you change browser tabs: Happy Tweening!!

Animation in React Project
Rodrigo replied to Appollos's topic in GSAP

Hi Apollos, I agree with Jack, there is a lot you want to do in your sample and basically most of the resources you need are here in the forums and other documentation. For example, to draw an SVG path you can use the Draw SVG plugin. Also you could tween the width of a div element with no height but a border-top property as well. As far as using GSAP in React goes, this basically comes down to using refs in order to gain access to the DOM nodes and use them in a GSAP instance. Here is an introduction on how to get your React project working with GSAP, with live-editable samples: Happy Tweening!!

progress NaN on tween's with duration 0
Rodrigo replied to erikb's topic in GSAP

If you ask me it seems about right. Keep in mind that .set() instances are zero-duration instances, so can something that actually lasts nothing have a progress? Is it at the start or the end? So to me it is actually correct that it doesn't have a progress. Of course the Yoda of GSAP, our beloved master @GreenSock, will clarify this for us all: why set tweens don't have a progress. Happy Tweening!!

Accessing ref for tween (React GSAP)
Rodrigo replied to Alexander75's topic in GSAP

Hi and welcome to the GreenSock forums.
The first thing here is that when you create a ref directly in a component, you get the component instance and not the DOM node or tree that the component ultimately renders:

    class MyApp extends Component {
      constructor(props) {
        super(props);
        this.myCard = null;
      }
      componentDidMount () {
        console.log(this.myCard); // returns a React component object
      }
      render () {
        return <div>
          <Card ref={card => this.myCard = card} />
        </div>;
      }
    }

That's because the ref callback is being called on a React Component instance and not a DOM node created in JSX. Then it's not clear where you want to create the animations; in the parent component or on each card?

If you want to create the animations in the parent component, then you should use Forwarding Refs in order to store a reference to the DOM node that sits at the top of the DOM tree created by that component. Forwarding refs is a bit of a complex resource for someone that is just starting with React, especially if your Card component can't be transformed into a functional component. If your Card component can be transformed into a functional component, then it's pretty easy. This is an extremely simple example of using forward refs to create an animation with GSAP:

An alternative is to use React Transition Group in order to create the animations inside each Card component. This approach can also be a bit convoluted, as shown here:

Finally, if you want to access the DOM element directly inside each instance of the Card component without all the hassle of React Transition Group, you're very close in fact.

Right now you have this on your Card component:
Right now you have this on your Card component: class Card extends Component { tl = new TimelineLite({ paused: true }); componentDidMount() { console.log(this.refs); } render() { return ( <div className="slide"> <div className="card"> <div className={`${this.props.card.id} card-img`} /> <div className="card-content" ref> > ); } } Keep in mind that the ref is a callback that, used as an attribute in the JSX code, grabs whatever is returned from the tag where is used, in this case a <div> element, but is a function, now you're only referencing that function but you're not doing anything with it. You have to create a reference to the DOM element in the constructor and then use the callback to update it at render time: class Card extends Component { constructor(props){ super(props); this.tl = new TimelineLite({ paused: true }); this.card = null; } componentDidMount() { console.log(this.card); // returns the div element } render() { return ( <div className="slide"> <div className="card"> <div className={`${this.props.card.id} card-img`} /> <div className="card-content" ref={div => this.card = div}> > ); } } That particular use will allow you to use the <div> element in your GSAP instance with no problems. Hopefully this is enough to get you started. Happy Tweening!! - Well, basically that has to do with dealing with asynchronous code. Since you're getting your app's data in the componentDidMount() event handler that code is executed and keeps waiting for an answer from the server and the rest of the code is executed regardless of that, hence the code is asynchronous. When you move your GSAP code to the .then() method of the promise returned by fetch() that is executed when the server sends a successful response, therefore you have the data and you can use it in your GSAP powered animations. Another alternative is to use async/await, but if you're not too familiar with promises it might come as a bit confusing. 
Basically async/await allows you to write asynchronous code in a synchronous fashion, without the .then() and .catch() methods. You can check these resources to learn about it: And of course the crazy guy from Sweden (note, not every person in Sweden is crazy and maybe this guy isn't as well ): Happy Tweening!!

staggerTo is taking more time everytime i navigate to a new page in NextJs
Rodrigo replied to TimeFrame's topic in GSAP

@Dipscom and @Shaun Gorneau, your input was invaluable in this one. Because of that I just paid attention to that final tween and nothing more. So indeed it is a true team effort!!

staggerTo is taking more time everytime i navigate to a new page in NextJs
Rodrigo replied to TimeFrame's topic in GSAP

My guess, besides what Shaun and Pedro have already pointed out here, is that this is related to the fact that your menu component could be re-rendered every time you do a route change, and that re-render causes new instances to be added to the timeline controlling the menu, farther and farther away in the timeline, as Shaun points out. Just create your animation in the componentDidMount() method and use what Shaun suggested. Also check your app and see if it's actually necessary to re-render your menu component every time a route changes; normally menus are added at the top of the app tree, because they don't mutate as the app's data or state is being updated.

@Shaun Gorneau and @Dipscom, the final tween in the timeline is not being added at an absolute time; the final number in that instance stands for the stagger time between animations and not the position, and that's why the stagger animation gets pushed every time. Happy tweening!!

- It's basically the same issue as before.
This:

    <Scroll ref={this.scroll} />

will create a reference to a React component instance, not a DOM node, because the forwarded ref goes into another component:

    <ScrollSpy ref={ref}>
      <div className="one" />
      <div className="two" />
      <div className="three" />
    </ScrollSpy>

This is basically whatever styled components (which is a tool that I have never used) returns. That most likely will be another React Component and not a DOM node. You could try to log <ScrollSpy> into the console and find out what that is. Your best choice is to look up in the styled components API a way to retrieve the created DOM node, in order to access that and create the animation. Honestly I understand the usage of styled components by developers, but personally I don't see any upside to it. Call me an old school guy, but I prefer my styles and my JS code in different files.

- Hi, First of all, the post editor has an option to include code snippets; it's the button with the smaller-than and bigger-than icon <>, it has syntax highlighting and auto-detects indentation. That makes reading code easier and faster, so please do your best to use it. On the React aspect of your issue, yeah that's a pickle. React has the forward refs feature that allows you to create a reference to the DOM node of a child component in its parent, but in the case of an array of DOM nodes I'm pretty sure it's not going to work. My advice would be to create the reference in the child component and store the array of DOM nodes in the instance.
Then create a method to simply return that array:

    // Child component
    constructor (props) {
      super(props);
      this.navEls = [];
      this.getNavElements = this.getNavElements.bind(this);
    }

    getNavElements () {
      return this.navEls;
    }

    render () {
      return <nav>
        {array.map( (element, index) => <li ref={li => this.navEls[index] = li}></li>)}
      </nav>;
    }

In the parent component, use a regular createRef method to get a reference to the child component and execute the method that returns the array, then store it in the state:

    // Parent component
    constructor (props) {
      super(props);
      this.myNav = React.createRef();
      this.state = { navElements: [] };
    }

    componentDidMount () {
      this.setState({
        navElements: this.myNav.current.getNavElements()
      });
    }

    render () {
      return <div>
        <Navigation ref={this.myNav} />
      </div>;
    }

Then you can use componentDidUpdate to create the animation, or you can also do it directly in the componentDidMount method; it's up to you actually. Here is a simple live sample: Happy Tweening!!!

- Hi and welcome to the GreenSock forums. The problem in your app is that you're setting a ref on a component instance and not a DOM element in your JSX:

    <Menu visible={visible} menuRef={this.menuRef} />

That particular reference will return a React Component instance and not a DOM node. In fact you should be seeing an error in your console, because a component instance doesn't have a current property, so this.menuRef.current should be undefined and GSAP should be complaining about it, saying that it cannot tween a null target. The solution is not complicated at all.
You have to create the animation logic inside your menu component, and since you're already passing the visible state property as a prop to the Menu component, you can listen for changes in the props in the menu component:

    // in the menu component
    componentDidMount () {
      this.menuTween = TweenMax.to(this.menuRef, 1, { autoAlpha: 1 }).reverse();
    }

    componentDidUpdate (prevProps) {
      if ( prevProps.visible !== this.props.visible ) {
        this.menuTween.reversed(!this.props.visible);
      }
    }

Here is an oversimplified example: Happy Tweening!!

React, Gatsby, and SplitText
Rodrigo replied to Lee Campbell's topic in GSAP

Mhhh... yeah the issue might be the fact that Gatsby's <layout> complains about some operation being done by SplitText. Unfortunately Codesandbox, using the Gatsby setup for a new sample, has some issues with adding SplitText via the S3 link for live samples. A simple React setup using CRA works as expected: Anyway, you could try using React.Fragment around your tags and see if it works. Happy tweening!!

- @PointC See?? the binary system is at fault here, so you should blame the Egyptians, Indians, Chinese, Leibniz (I think that's the way it's written) and the geniuses that thought it was a good idea to use it in IT. We're all victims here

- Works like a charm Jack, thanks a bunch!!! Yeah, since the values are reaching less than zero and almost 130, basically I added a return at the top of the callback if the value for x is less than 0; like that, the extra conditional logic is not expensive at all. Finally, PIXI takes care of rounding the value of y to 130, so nothing too complicated.

Is it possible to achieve this water effect with GreenSock?
Rodrigo replied to mrstevejobs's topic in GSAP

Yes, in fact that particular effect is done using PIXI JS. Lucky for us, @OSUblake, one of the superstars we have around here, is quite a PIXI/GSAP (and anything you throw at him) expert, so he has you covered regarding samples.
The title says everything here: Here is something that looks similar but is achieved in a different way: Finally: As you can see, this is done using PIXI's displacement filter, which can be used with GSAP and the PIXI Plugin. Hopefully these will help you get started. Happy Tweening!!

Bezier plugin returning negative value
Rodrigo posted a topic in GSAP

I don't know if someone else has stumbled into something like this before. I did a quick search and couldn't find anything. As you can see in the pen, I'm updating the values of an object using the bezier plugin with the idea of drawing a rounded rectangle in PIXI using quadratic beziers, because each has just one control point, which keeps the code simpler. The issue comes at the end of the GSAP instance: the final value should be { x: 0, y: 130 }, but the values returned by the bezier plugin are smaller than that. I mean reaaaaaally small; the problem is caused in the 15th decimal position!! I have no issue in adding some extra logic in my code in order to force the values to 0 and 130, but since I'm not extremely familiar with the bezier equations used by GSAP, I don't know if this is by design, some sort of inevitable tradeoff of the calculations inside the plugin, or an actual issue. Happy Tweening!!

pixiPlugin and pixi2.2.5
Rodrigo replied to reno's topic in GSAP

Hi and welcome to the GreenSock forums. You're using PIXI 2.2.5? That's a bit old, and there is a good chance the plugin doesn't work with filters in that particular version of PIXI. If possible, please try to update to the latest 4.x version of PIXI. Unfortunately I don't have time to go through the difference in filter implementations in these different versions of PIXI (2.2.5 vs 4.8.6) and see where exactly the problem is. But my guess is that perhaps it has changed during that time. Perhaps our great master @GreenSock can shed some light on this matter. Happy tweening!!
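The 15th-decimal drift described in the Bezier post above is ordinary IEEE-754 floating-point behavior rather than anything unique to GSAP's bezier math. Here is a minimal plain-JavaScript sketch of the problem and of the snapping workaround discussed in the thread; the `snap` helper and the epsilon value are illustrative assumptions, not part of the GSAP API:

```javascript
// Binary floating point cannot represent most decimal fractions exactly,
// so sums that are mathematically exact end up off by roughly 1e-16.
var sum = 0.1 + 0.2;          // 0.30000000000000004, not 0.3
var isExact = (sum === 0.3);  // false

// Illustrative workaround: snap a value to its intended end value when it
// is within a tiny epsilon, e.g. inside an onUpdate/onComplete callback.
function snap(value, target, epsilon) {
  return Math.abs(value - target) < epsilon ? target : value;
}

var x = snap(sum, 0.3, 1e-9);                // exactly 0.3
var y = snap(129.99999999999997, 130, 1e-9); // exactly 130, as in the post
var far = snap(0.5, 0.3, 1e-9);              // 0.5, unchanged (outside epsilon)
```

This mirrors the fix Rodrigo describes a few posts up (returning early for x below 0 and letting PIXI round y to 130): any value within a sub-pixel epsilon of the intended end value can safely be treated as the end value.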
staggerFrom not working on component rendered with API fetched data
Rodrigo replied to George I.'s topic in GSAP

Hi and welcome to the GreenSock forums. You got really close; in fact, you even created a boolean to check when the data fetching is completed. All you have to do is compare that part of the state in the componentDidUpdate() method in order to trigger the animation. Just change your component to this and it should work as expected:

    componentDidMount() {
      this.fetchUsers();
    }

    componentDidUpdate (preProps, preState) {
      if ( preState.isFetching && preState.isFetching !== this.state.isFetching ) {
        TweenMax.staggerFrom('#list-1 li, #list-2 li, #list-3 li', .5, { x: -10, autoAlpha: 0 }, .2);
      }
    }

Basically this compares the isFetching property of the state before and after every update. Since the only case where the previous value is true and the next value is false is after getting the data from the API, you can trigger the animation there. Happy Tweening!!

- Mhhh... I'm terribly sorry, I misunderstood the issue you're having. I rushed to think this was a tree shaking issue where the animation wasn't happening at all, and now I see it's about the animation running differently in development and production. Apologies all around

Have you tried using just rotation instead of rotationZ? The effect is the same.
Also, GSAP interprets numbers as pixels by default. I see some strings in your config objects; perhaps try passing just numbers:

    this.menuToggle
      .set(this.controlit, {className:"-=closemenu"})
      .set(this.controlit, {className:"+=openmenu"})
      .to(' .top', .2, {y:-9, transformOrigin: '50% 50%'}, 'burg')
      .to(' .bot', .2, {y:9, transformOrigin: '50% 50%'}, 'burg')
      .to(' .mid', .2, {scale:0.1, transformOrigin: '50% 50%'}, 'burg')
      .add('rotate')
      .to(' .top', .2, {y:5}, 'rotate')
      .to(' .bot', .2, {y:-5}, 'rotate')
      .to(' .top', .2, {rotation:45, transformOrigin: '50% 50%'}, 'rotate')
      .to(' .bot', .2, {rotation:-45, transformOrigin: '50% 50%'}, 'rotate');

That does exactly the same animation; give it a try and let us know. Finally, just in case tree shaking is causing this, what is the actual import statement, and which GSAP tools are you using in your file? Happy Tweening!!

- Hi and welcome to the GreenSock forums. This normally happens because of tree shaking in production mode. To prevent that, you can add this to your file:

    import { TimelineMax, CSSPlugin } from "gsap/all";

    const plugins = [ CSSPlugin ];

Take a look at the docs here (scroll a little bit and you'll find the relevant part): Happy Tweening!!

Mapping through arrays - list items the react way, with conditional logic
Rodrigo replied to UnioninDesign's topic in GSAP

In order to get the instance of the <li> element in the DOM, you can use refs. If that's not what you're looking for, please be more specific about it. Regarding the second part I'm a bit lost. The first part is that you want to add an extra animation depending on some specific criteria for a specific card. You can use the same array.map() to store some data in an array and then return the array of JSX elements. Keep in mind that the callback inside a .map() method has to return some element that is added to an array, but before that you can run any logic you want.
Then you can use a for or forEach loop to create your animations, and check there whether the extra instance has to be created, using either the original data or a new array you can create in the .map() method. It is also important to remember that the .map() method goes through an array in ascending order, that is, it starts with the element at index 0. Because of that, the order in which the DOM elements are added to the array holding their references (created in React) matches the order of the data in the original array, so you can easily use a forEach and use the index value in the callback to read the original data and avoid adding more attributes to the DOM element.

    // original data
    const originalData = [];

    // in the component constructor
    constructor (props) {
      super(props);
      this.animateCards = this.animateCards.bind(this);
      this.cards = [];
    }

    // in the render method
    render () {
      return <div>
        {originalData.map((card, index) => (<li ref={e => this.cards[index] = e}>Element</li>))}
      </div>;
    }

    // finally when creating the animations
    animateCards () {
      this.cards.forEach((card, index) => {
        // here this.cards[index] matches the data in originalData[index]
        // you can use originalData[index] to decide if an extra animation is needed
      });
    }

Finally, if you want to add an extra DOM element depending on the data, you should do that in the array.map() inside the render method and not somewhere else. Also base that on the original data and not some attribute passed to the rendered DOM element that results from the .map() method. Please do your best to provide some small and simplified sample code in order to get a better idea of what you're trying to do. Happy Tweening!!

- Well, as far as I know there is no direct way to get the type from an instance, but it could be something I'm not aware of; Jack could offer a final say on this. What I can think of is the following.
Single instances like TweenLite/Max don't have the .add() method while Timeline instances do, so that tells you if you're dealing with a timeline or a single tween. A TweenMax instance has the yoyo property, so you can check that, and finally a TimelineMax has the .getLabelsArray() method, as you already know. So a way to check would be this:

    var checkInstance = function (instance) {
      if ( instance.add ) {
        // instance is a timeline
        if ( instance.getLabelsArray ) {
          // instance is TimelineMax
        } else {
          // instance is TimelineLite
        }
      } else {
        // instance is TweenLite or TweenMax
        if (instance.yoyo) {
          // instance is TweenMax
        } else {
          // instance is TweenLite
        }
      }
    };

Happy Tweening!!

- Thanks Buddy!!! I learned from the best

The one option I can think of, by looking at the GSAP code, is to extend the TimelineLite prototype. Since getLabelsArray is quite simple: And TimelineLite has a labels object just like its Max counterpart: This shouldn't cause any problem (running some simple tests doesn't create an issue):

    TimelineLite.prototype.getLabelsArray = TimelineMax.prototype.getLabelsArray;

Unless there is something I'm not totally aware of, in which case our beloved leader and tween-guide spirit @GreenSock could warn us against this. Although this doesn't actually resolve anything as far as the original question goes, because there is still no way to introduce more than two labels in the main TimelineMax instance that holds the stagger methods. Unless Jack comes up with a new element in the API that allows dynamic labels in stagger methods using a base name for them and switching to TimelineMax. But IMHO it doesn't sound too useful; I'd go with the loop in order to add the animations to a TimelineMax instance.
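The duck-typing checks from the checkInstance post above can be exercised without loading GSAP at all, using plain mock objects. The mocks below are assumptions that only imitate the members the post relies on (.add(), .getLabelsArray(), .yoyo); real GSAP 2.x instances expose far more than this:

```javascript
// Same duck-typing rules as the post, returning a type name instead of
// branching into comments: .add() marks a timeline, .getLabelsArray()
// distinguishes TimelineMax, and .yoyo distinguishes TweenMax.
function checkInstance(instance) {
  if (instance.add) {
    return instance.getLabelsArray ? "TimelineMax" : "TimelineLite";
  }
  return instance.yoyo ? "TweenMax" : "TweenLite";
}

// Hypothetical stand-ins for real GSAP instances.
var mockTimelineMax  = { add: function () {}, getLabelsArray: function () {} };
var mockTimelineLite = { add: function () {} };
var mockTweenMax     = { yoyo: function () {} };
var mockTweenLite    = {};
```

For example, `checkInstance(mockTimelineLite)` yields "TimelineLite" while `checkInstance(mockTweenMax)` yields "TweenMax". Note that this is feature detection, not identity checking, so any object with those member names will match.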
https://staging.greensock.com/profile/9252-rodrigo/content/page/2/
Edited by: 981354 on Jan 15, 2013 2:33 AM

class A {
    class X {
        void print() {}
    }
}

class B extends A {
    class X extends A.X {
        void print() {}
    }
}

class C extends A {
    class X extends A.X {
        void print() { System.out.println( "C" ); }
    }
}

class Generic<E extends A> {
    void m( E.X x ) {
        x.print();
    }
}

public class Main {
    public static void main( String[] args ) {
        Generic<B> g = new Generic<B>();
        A a = new C();
        A.X x = ((C)a).new X();
        g.m( x ); // Prints "C"
    }
}

981354 wrote: you are instantiating a specific class instance here, "C.X". The print() method of "C.X" prints "C". What did you expect to happen?

Hi, I have a question regarding generics if anyone has time...? The following code prints "C". Why does it do that? B and C are not related. The Generic class is instantiated with B as the type argument, while the operation m is invoked with an instance of C.X. Thanks!

A.X x = ((C)a).new X();

981354 wrote: well, your code doesn't compile for me in Java 6, so yes, I'd say there was an error here. What version of Java are you using?

Hi, Yes, but the C class is not a subtype of the B class. So shouldn't there be a compilation error here?

981354 wrote: No, because C.X is extending B.X and this is extending A.X.

Yes, but the C class is not a subtype of the B class. So shouldn't there be a compilation error here?

You need to change

void m(E.X x) {

to

void m(A.X x) {

in order to get a clean compile. JDK6u30 gives:

TestGenerics.java:34: error: method m in class Generic<E> cannot be applied to given types;
        g.m( x ); // Prints "C"
         ^
  required: B.X
  found: A.X
  reason: actual argument A.X cannot be converted to B.X by method invocation conversion
  where E is a type-variable:
    E extends A declared in class Generic
TestGenerics.java:23: cannot select from a type variable
    void m( E.X x ) {
             ^
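To see the thread's conclusion in runnable form, here is a sketch of the corrected code. Class names follow the original post, with two assumptions of mine: print() returns a String instead of writing to System.out (so the result is observable), and the demo class is renamed GenericsDemo. The point it demonstrates is the fix discussed above: declaring the parameter as A.X (the owner type, not E.X) lets the code compile, and dynamic dispatch still selects C.X's override at run time.

```java
// Inner-class hierarchy kept as in the original post; print() returns a
// String instead of printing so the result can be inspected.
class A {
    class X {
        String print() { return "A"; }
    }
}

class B extends A {
    class X extends A.X {
        String print() { return "B"; }
    }
}

class C extends A {
    class X extends A.X {
        String print() { return "C"; }
    }
}

class Generic<E extends A> {
    // A.X instead of E.X: the compiler rejects selecting a member type
    // from a type variable ("cannot select from a type variable").
    String m(A.X x) { return x.print(); }
}

public class GenericsDemo {
    public static void main(String[] args) {
        Generic<B> g = new Generic<B>();
        A a = new C();
        A.X x = ((C) a).new X();
        // dynamic dispatch picks C.X's override regardless of the
        // type argument B, which is why the original program printed "C"
        System.out.println(g.m(x));
    }
}
```

This also makes the thread's answer concrete: the type argument B never constrains which print() runs; only the run-time class of x (here C.X) does.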
https://community.oracle.com/message/10800846?tstart=-1
Complete Guide to Developing a Powerful Documentation - Lab Diary One of the most important programming skills to develop, for those just starting out in programming (and everyone else!), is the skill of managing information. Next post: Better programmer - Blindfold programming Next post: Better programmer - Blindfold programming - practical tips It's like a lifeline (a rope or line used for life-saving, typically one thrown to rescue someone in difficulties in water): when I started my Lab Diary, I started researching and qualifying better, filtering information and writing more; I quit asking superiors and developed my first application; I started this project and wrote a blog; I search less and work faster; I transformed my workday completely. I'm far from perfect, but I've progressed a lot - comparing then and now is like comparing black and white. But if you don't manage information well, it can cause issues: the "short memory problem" (I remember something, but not enough), reinventing the wheel, procrastination, disorientation, confusion, tasks piling up and overwhelming you, and much more. So it's an essential skill to develop, but most people don't know where or how to start. This guide is aimed at helping you get started. First things first - Lab Diary initialization The first question that a friend asked me was: Why create such documentation if I have Google? Most of us don't want to think about documentation, let alone take a bunch of actions on a daily basis. For me, the motivation came from realizing that what I was doing was wrong or not enough. Ignoring the problems only made them worse and bigger. Working with great volumes of information can cause headaches and burden. Once you realize that you're not efficient enough, you should start improving your routines slowly and persistently. Think of this as an investment in your future self - quality information is one of the best investments nowadays.
On the contrary, the web is full of bad or duplicated information - you don't want to spend hours surfing just to find a single line of code. A line of code that you already had and used in the past.

Lab Diary benefits

Some people write things down on paper, others rely only on memory; I myself prefer to store data digitally. The benefits are:

- Search - It's great to be able to go through thousands of lines in seconds
- Copy & Paste - remember the last time you had to "copy" from paper or an image. What a waste of time. Mistakes are also very likely. Even for a snippet as simple as this, copy & paste is indispensable.

public class HelloWorld {
    public static void main(String[] args) {
        // Prints "Hello, World" to the terminal window.
        System.out.println("Hello, World");
    }
}

- Eternal Memory - once written, forever available, unlike relying on memory alone. If you meet a problem you have solved before, the solution is there.
- Independence - this way you have access to the solution regardless of internet connection or site availability. I remember when StackOverflow was down (even 5 minutes could cause a panic) and many developers were not able to produce a single line of code.

Lab Diary tools

The tools that I'm using are:

- Typora - Windows, Linux and MacOS. Typora gives you a seamless experience as both a reader and a writer. It removes the preview window, mode switcher, syntax symbols of markdown source code, and all other unnecessary distractions.
- HelpNDoc - Windows and Linux (working with Wine) - HelpNDoc is an easy to use yet powerful and intuitive tool to create HTML help files, help web sites, printed manuals and eBooks. Free for personal projects.
- Help+Manual - Windows - Help & Manual from EC Software is the favorite authoring tool for writing online help and technical documentation. Paid version.
- XMind - Windows, Linux, Mac OS - XMind is a brainstorming and mind mapping application.
It provides a rich set of different visualization styles, and allows sharing of created mind maps via their website. Free basic version.
- Freeplane - Windows, Linux, MacOS - An application for mind mapping, knowledge management and project management. Develop, organize and communicate your ideas and knowledge in the most effective way. Free.

Lab Diary topics

Defining topics and categories is another important point. My rule is: topics should correspond to knowledge areas:

- Projects - describe all projects in 3 categories
  - current
  - completed
  - planned
- Technical information - everything related to technical knowledge
  - applications
    - Browsers
    - IDEs
    - Mail clients
  - code snippets
    - language basics
    - algorithms
- Design
  - templates
  - themes
- Examples
  - example mails
  - example projects
- OS administration
  - Windows
  - Linux
    - shell commands
    - Linux terminal basics
  - Mac OS

This structure is a sample one. It's better to start with a smaller one and gradually shape it to your needs. The ability to modify the topic hierarchy and levels is of great benefit. If you are working in the marketing area you could use:

- Marketing
  - Social
  - Online Media
- Content Management
  - Articles
  - Images
- SEO
  - Online
  - Offline

Example structure from HelpNDoc: you can define the topic name, structure and topic icon, all in a free and user-friendly interface.

Conclusion

A huge mistake that a lot of people make with documentation is that they mess up at the beginning and get discouraged by this. They feel bad about messing up. This causes them to give up and not want to continue with their Lab Diary. Here's the key: giving up is not an option. My favorite quote on the topic: "Giving up is the only sure way to fail." - Gena Showalter.
https://blog.softhints.com/complete-guide-to-developing-a-powerful-documenation-lab-diary/
Duration: 16 minutes Summary: In this tutorial, we'll continue to develop our first Struts 2 application using the Eclipse tool environment and Tomcat Web Server. Introducing the Package and Action concepts, we'll show you how to configure an application to map Struts 2 actions. All MVC (Model, View, Controller) concepts are presented in this tutorial, providing a step by step guide. Transcript: Hello everybody, please be welcome. Let's continue our Java tutorial using Struts on our web application. In the last class we created our web.xml file, and now the second file that must be created is the struts.xml file. This file will give us a map for the application: all the routines that we will use inside the application, all the routes, will be defined by this file. So let's create it. Inside the web project's source folder that I'm showing here, you will create a new file, another XML file named struts.xml. Here we go. OK. Again, I will copy and paste some text just to show you how a simple application will work. We will use a DOCTYPE to set the DTD mapping for the XML. After this we will have the struts tag, and inside the struts tag we will use a constant tag that defines some parameters for our application. One of them is named struts.devMode. When you set struts.devMode to true you are saying that your application will give you, in case of error, a really big message with the whole stack trace for the error. That will help you to find the root cause of the error. When your application is deployed on the production server, the online server, you should set this option to false, because your customer should not see the stack trace produced under this constant. It helps a lot when you have some kind of trouble in the application. But let's see how the struts file works. Struts is defined by a group of packages. And these packages must contain actions that will be used by the web application.
You have the JSP files, web pages that are used as our views. And you have the actions that will be used as our model in the MVC pattern. As we create a simple Struts application we will design here a simple package with some actions inside, or a single action. We have to create the package tag. The package tag must have a name; we will give the name default here. This package must extend struts-default. And we can set a namespace for our package; we will give /namespace. Inside the package tag we will create an action. This action must hold all the business rules. And as you must know, using an MVC pattern, you will not have these rules inside the view code of the JSP file. We will put these rules inside a Java class, inside a method. So let's create the first action tag. This tag must have a name, which I will call just index. And inside our tag we must have a result. I will explain this better later. Our result will be the page or another action that will be called when the model code is done. A result may have a name, but you can just have a main result here. We will set the result as index.jsp. Inside an action you can call a class to act as a model. So we can create a class, named IndexAction here. How will the mapping work here? Well, we must create, first of all, the index.jsp, and after that we must create mrbool.IndexAction. Well, I got the index.jsp used in our last sample, and I will use this file to create our logic. Inside the web project's source folder I will create a new Java class. I will call this mrbool.IndexAction. Here we go. Package name mrbool and class name IndexAction. OK. We have a package named mrbool, our class name is IndexAction, and here we will create an IndexAction file. How does this work in the action class? You must have a method that will be called when you call an action.
You can call an action directly from the browser using an address; as we are using localhost, at port 8080, with the application name struts-test01, /index. This address will directly call the action. And when this action is called you must have a method with the full name execute, and this method must return a string that will classify the execution. We'll use 'success' in our method. All the business processes included inside this action have to go inside this main method to make it work. The next question is how do we communicate between the action class and our view, our JSP file? You have a bunch of tags that you can use for this communication. The most common is called the property tag. Inside our JSP file, we must add a taglib to make Struts work with the JSP file. Here you have a sample of how to use the taglib. You must have a prefix that will be used on every tag that uses Struts, and this prefix will call the Struts tag library to define how these tags will work when called. For these tags you must use the prefix. Here you've got all the tags that you can use. We will use, this first time, the property tag, and a property must have a value. You can call this value the information or something like that. Let's call it message. How will this message be used by the index action? Inside your IndexAction you must create a property with the same name. Now you create the getter and setter for your property. And then you can set a message to this property, like this. After this we must define, in the struts.xml, a name for the result. You can have a main result without a name, but let's create here the success name to hand the execution over to the index.jsp file. Now let's test the call. We will access the Tomcat container manager app. So here you have your struts-test right here. To have the struts-test you must create the context, the JSP work directory, as we did in our last class.
Let's click on the struts-test01, and here you just have the Hello World! information. Why don't we have the 'welcome to java struts' here? Because we are not calling the index action. To call the action you must type index here in your location bar. Enter. And here you've got 'welcome to java struts'. Without this you are accessing the index.jsp file directly, without calling the index action. The struts.xml file defined that you have an index action that will execute the IndexAction code, with a result named 'success' that redirects to the index.jsp file. That's all for now. Thank you very much. See you later.
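The action class described in the transcript can be sketched roughly like this. This is an assumption-laden sketch, not the tutorial's actual source: the class and property names follow the video (IndexAction, message, the "success" result string), the class would be declared in the mrbool package mentioned in the video, and the message text is guessed from the page output shown at the end.

```java
// Sketch of the action class from the transcript. Struts 2 calls the
// execute() method, which sets the "message" property and returns the
// result name "success"; the JSP then reads the property through its
// getter via the Struts property tag (<s:property value="message"/>).
public class IndexAction {

    private String message;

    public String execute() {
        // business logic goes here; set the value the view will display
        message = "welcome to java struts";
        return "success";
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}
```

The getter/setter pair is what lets the framework move the value between the action and the view, which is why the transcript insists the property name in the JSP and in the action class must match.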
http://mrbool.com/mvc-coursejava-struts-developing-a-finance-management-software-part-3/24003
Hey there, Just hit a road block in my assignment. I won't concern you too much with the assignment itself, but it's basically a number-input operated menu with options like "Display employee details" & "Input pay details." I'm up to a part where I need to get the section "Create Employee ID" working. Basically what it wants me to do is add a method to the class (MyString.java - a separate file from the main assignment) called 'createID()', and this method's job is to accept lastName as a parameter and return it as a string to the empID field. I'm using substring to take the first three characters from the lastName field and adding a random number between 1200-1299 to these letters to create an 'Employee ID.' So far I have this for the method:

public class MyString {
    public static String createID(C) {
        String message = C.substring(0.3).toUpperCase();
        return message += (int) (Math.random()*99 + 1200);
    }
}

Firstly I'd like to know if you see anything wrong with the above code, because it looks okay to me. For example, it will take the first three characters from the last name "Johnson" and add a random number between 1200-1299 so that it will look like JOH1288 - and that will serve as the employee ID. Now I'm having trouble with calling this method in the main assignment file. Here's the filler text for where the code I'm missing should go:

public static void createEmpID() {
    JOptionPane.showMessageDialog(null, "In (2) createEmpID()");
}

All it does right now is obviously show the text within the quotation marks - I need the code that will call the method from the MyString.java file. I'm not that great with Java so please be as descriptive as possible when answering, thank you!
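For comparison, here is a sketch of what a compilable version of the createID idea, plus a call site in another class, could look like. Several details are my assumptions, not the assignment's actual specification: the String parameter type (the posted snippet omits it), substring(0, 3) with a comma (the post has a dot, which won't compile), and Math.random() * 100 (the posted * 99 can never actually produce 1299).

```java
// Hypothetical sketch of a working createID and a cross-class call.
public class MyString {
    public static String createID(String lastName) {
        // first three letters, upper-cased, e.g. "Johnson" -> "JOH"
        String message = lastName.substring(0, 3).toUpperCase();
        // append a random number in the range 1200-1299
        return message + (int) (Math.random() * 100 + 1200);
    }
}

class MainAssignment {
    public static void main(String[] args) {
        // calling a static method defined in another file:
        // ClassName.methodName(arguments)
        String empID = MyString.createID("Johnson");
        System.out.println(empID); // e.g. JOH1288
    }
}
```

Because createID is static, no MyString object is needed; the call is qualified with the class name, which is the piece the post says is missing from createEmpID().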
https://www.daniweb.com/programming/software-development/threads/436529/help-with-calling-methods-in-classes-to-main-assignment-file
Referencing Other Assemblies in Scripting Solutions The Microsoft .NET Framework class library and the Microsoft.VisualBasic namespace provide the script developer with a powerful set of tools for implementing custom functionality in Integration Services packages. The Script task and the Script component can also use custom managed assemblies. The .NET tab of the Add Reference dialog box in Microsoft Visual Studio for Applications lists the managed assemblies found in the %windir%\Microsoft.NET\Framework\v2.0.xxxxx folder and the %ProgramFiles%\Microsoft SQL Server\90\SDK\Assemblies folder. Therefore, by default, this list is largely limited to assemblies from the Microsoft .NET Framework class library and assemblies installed by SQL Server 2005. The contents of the list are determined exclusively by file location and not by installation in the global assembly cache (GAC) or by other assembly attributes or properties. As a result, a copy of any assembly that you want to reference needs to be present in one of the specified folders. The Add Reference dialog box in VSA does not include the Browse button that is present in Microsoft Visual Studio for locating and referencing managed assemblies in other folders, and does not include the COM tab for referencing COM components. A custom assembly that you want to use in the Script task or the Script component must also be signed with a strong name and installed in the global assembly cache. When you deploy a package that uses a custom assembly to another computer, you must install the assembly in the global assembly cache. If the script is not precompiled, you must also copy the assembly into the Framework folder or the Assemblies folder as described above for adding a reference. Visual Studio for Applications does not have the Add Web Reference command that is familiar to Visual Studio developers. 
If you want to use a Web service from the Script task or from the Script component, you must first use the command prompt utility wsdl.exe to generate a proxy class in Visual Basic. Then you have two options:
- Import the proxy class file into the VSA project by using the Add Existing Item option.
- Build the proxy class into a separate assembly that is signed with a strong name key file, copy the assembly to the Framework folder as described above, add it to the global assembly cache, and then add a reference to the assembly in the VSA project. In this case, you need to deploy the additional assembly along with your package. You must also add a reference in the script project to the System.Web.Services namespace.
For more information about generating the proxy class, see the following topics in the MSDN Library:
- Creating an XML Web Service Proxy
- How to: Generate an XML Web Service Proxy
- Web Services Description Language Tool (Wsdl.exe)
The Script task and the Script component can use Microsoft Visual Basic .NET objects and functions from the Microsoft.VisualBasic namespace. The VisualBasic namespace contains many of the objects, functions, and constants from earlier versions of Visual Basic, and a wide variety of useful functions. The Microsoft.VisualBasic.Financial module, for example, contains methods for calculating depreciation, internal rates of return, and annuity payments. The Script task and the Script component can also take advantage of all the other objects and functionality exposed by the Microsoft .NET Framework. For more information about the Microsoft Visual Basic run-time library, see the MSDN Library.
http://msdn.microsoft.com/en-US/library/ms136007(v=sql.90).aspx
Linux Container (LXC) — Part 2: Working With Containers

By Lenz Grimmer on Aug 05, 2013

"Containers" by Phil Parker (CC BY 2.0).

However, Oracle has developed several enhancements which are included in the lxc package that's part of Oracle Linux 6.4; these changes were also contributed to the upstream LXC project and are now part of the official LXC releases. Support for Linux containers is also included in the libvirt project, which provides a graphical user interface for the management of virtual machines or containers using virt-manager (and other utilities). Libvirt is also included in Oracle Linux. The following example creates a Btrfs file system on the second hard disk drive and mounts it to the directory /container:

# mkfs.btrfs /dev/sdb

WARNING! - Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see before using

fs created label (null) on /dev/sdb
	nodesize 4096 leafsize 4096 sectorsize 4096 size 4.00GB
Btrfs v0.20-rc1

# mkdir -v /container
mkdir: created directory `/container'
# mount -v /dev/sdb /container
mount: you didn't specify a filesystem type for /dev/sdb
       I will try type btrfs
/dev/sdb on /container type btrfs (rw)

Now you can create a container of the latest version of Oracle Linux 6, named "ol6cont1" and using the default options, by entering the following command. The option "-t" determines the general type of the Linux distribution to be installed (the so-called "template"), e.g. "oracle", "ubuntu" or "fedora". Depending on the template, you can pass template-specific options after the double dashes ("--"). In the case of the Oracle Linux template, you can choose the distribution's version by providing values like "5.8", "6.3" or "6.latest". Further information about the available configuration options can be found in the chapter "About the lxc-oracle Template Script" of the Oracle Linux 6 Administrator's Solutions Guide.
# lxc-create -n ol6cont1 -t oracle -- --release=6.latest
/usr/share/lxc/templates/lxc-oracle is /usr/share/lxc/templates/lxc-oracle
Note: Usually the template option is called with a configuration
file option too, mostly to configure the network.
For more information look at lxc.conf (5)
Host is OracleServer 6.4
Create configuration file /container/ol6cont1/config
Downloading release 6.latest for x86_64
Loaded plugins: refresh-packagekit, security
ol6_latest                                               | 1.4 kB     00:00
ol6_latest/primary                                       |  31 MB     01:23
ol6_latest                                                          21879/21879
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package chkconfig.x86_64 0:1.3.49.3-2.el6 will be installed
--> Processing Dependency: libc.so.6(GLIBC_2.4)(64bit) for package: chkconfig-1.3.49.3-2.el6.x86_64
--> Processing Dependency: libc.so.6(GLIBC_2.3.4)(64bit) for package: chkconfig-1.3.49.3-2.el6.x86_64
[...]
--> Processing Dependency: pygpgme for package: yum-3.2.29-40.0.1.el6.noarch
--> Processing Dependency: python-iniparse for package: yum-3.2.29-40.0.1.el6.noarch
--> Processing Dependency: rpm-python for package: yum-3.2.29-40.0.1.el6.noarch
--> Running transaction check
---> Package audit-libs.x86_64 0:2.2-2.el6 will be installed
---> Package bash.x86_64 0:4.1.2-15.el6_4 will be installed
---> Package checkpolicy.x86_64 0:2.0.22-1.el6 will be installed
---> Package coreutils.x86_64 0:8.4-19.0.1.el6_4.2 will be installed
--> Processing Dependency: coreutils-libs = 8.4-19.0.1.el6_4.2 for package: coreutils-8.4-19.0.1.el6_4.2.x86_64
[...]
---> Package pinentry.x86_64 0:0.7.6-6.el6 will be installed
--> Running transaction check
---> Package groff.x86_64 0:1.18.1.4-21.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package              Arch    Version                    Repository        Size
================================================================================
Installing:
 chkconfig            x86_64  1.3.49.3-2.el6             ol6_latest       158 k
 dhclient             x86_64  12:4.1.1-34.P1.0.1.el6     ol6_latest       316 k
 initscripts          x86_64  9.03.38-1.0.1.el6_4.1      ol6_latest       937 k
[...]
 rootfiles            noarch  8.1-6.1.el6                ol6_latest       6.3 k
 rsyslog              x86_64  5.8.10-6.el6               ol6_latest       648 k
 vim-minimal          x86_64  2:7.2.411-1.8.el6          ol6_latest       363 k
 yum                  noarch  3.2.29-40.0.1.el6          ol6_latest       995 k
Installing for dependencies:
 MAKEDEV              x86_64  3.24-6.el6                 ol6_latest        88 k
 audit-libs           x86_64  2.2-2.el6                  ol6_latest        60 k
 basesystem           noarch  10.0-4.0.1.el6             ol6_latest       4.3 k
[...]
 yum-metadata-parser  x86_64  1.1.2-16.el6               ol6_latest        26 k
 zlib                 x86_64  1.2.3-29.el6               ol6_latest        72 k

Transaction Summary
================================================================================
Install     135 Package(s)

Total download size: 79 M
Installed size: 294 M
Downloading Packages:
(1/135): MAKEDEV-3.24-6.el6.x86_64.rpm                   |  88 kB     00:00
(2/135): audit-libs-2.2-2.el6.x86_64.rpm                 |  60 kB     00:00
(3/135): basesystem-10.0-4.0.1.el6.noarch.rpm            | 4.3 kB     00:00
(4/135): bash-4.1.2-15.el6_4.x86_64.rpm                  | 904 kB     00:02
(5/135): binutils-2.20.51.0.2-5.36.el6.x86_64.rpm        | 2.8 MB     00:07
[...]
(131/135): vim-minimal-7.2.411-1.8.el6.x86_64.rpm        | 363 kB     00:01
(132/135): xz-libs-4.999.9-0.3.beta.20091007git.el6.x86_ |  89 kB     00:00
(133/135): yum-3.2.29-40.0.1.el6.noarch.rpm              | 995 kB     00:03
(134/135): yum-metadata-parser-1.1.2-16.el6.x86_64.rpm   |  26 kB     00:00
(135/135): zlib-1.2.3-29.el6.x86_64.rpm                  |  72 kB     00:00
--------------------------------------------------------------------------------
Total                                           271 kB/s |  79 MB     04:59
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : libgcc-4.4.7-3.el6.x86_64                                  1/135
  Installing : setup-2.8.14-20.el6.noarch                                 2/135
  Installing : filesystem-2.4.30-3.el6.x86_64                             3/135
  Installing : basesystem-10.0-4.0.1.el6.noarch                           4/135
  Installing : ca-certificates-2010.63-3.el6_1.5.noarch                   5/135
[...]
  Installing : rsyslog-5.8.10-6.el6.x86_64                              131/135
  Installing : yum-3.2.29-40.0.1.el6.noarch                             132/135
  Installing : passwd-0.77-4.el6_2.2.x86_64                             133/135
  Installing : 2:vim-minimal-7.2.411-1.8.el6.x86_64                     134/135
  Installing : rootfiles-8.1-6.1.el6.noarch                             135/135
  Verifying  : gamin-0.1.10-9.el6.x86_64                                  1/135
  Verifying  : procps-3.2.8-25.el6.x86_64                                 2/135
  Verifying  : 12:dhclient-4.1.1-34.P1.0.1.el6.x86_64                     3/135
  Verifying  : 2:ethtool-3.5-1.el6.x86_64                                 4/135
  Verifying  : ncurses-base-5.7-3.20090208.el6.x86_64                     5/135
[...]
  Verifying  : ca-certificates-2010.63-3.el6_1.5.noarch                 130/135
  Verifying  : libssh2-1.4.2-1.el6.x86_64                               131/135
  Verifying  : cpio-2.10-11.el6_3.x86_64                                132/135
  Verifying  : mingetty-1.08-5.el6.x86_64                               133/135
  Verifying  : libcurl-7.19.7-37.el6_4.x86_64                           134/135
  Verifying  : 1:findutils-4.4.2-6.el6.x86_64                           135/135

Installed:
  chkconfig.x86_64 0:1.3.49.3-2.el6
  dhclient.x86_64 12:4.1.1-34.P1.0.1.el6
  initscripts.x86_64 0:9.03.38-1.0.1.el6_4.1
  openssh-server.x86_64 0:5.3p1-84.1.el6
[...]

Dependency Installed:
  MAKEDEV.x86_64 0:3.24-6.el6
  audit-libs.x86_64 0:2.2-2.el6
  basesystem.noarch 0:10.0-4.0.1.el6
  bash.x86_64 0:4.1.2-15.el6_4
  binutils.x86_64 0:2.20.51.0.2-5.36.el6
[...]
  upstart.x86_64 0:0.6.5-12.el6_4.1
  ustr.x86_64 0:1.0.4-9.1.el6
  util-linux-ng.x86_64 0:2.17.2-12.9.el6_4.3
  xz-libs.x86_64 0:4.999.9-0.3.beta.20091007git.el6
  yum-metadata-parser.x86_64 0:1.1.2-16.el6
  zlib.x86_64 0:1.2.3-29.el6

Complete!

Rebuilding rpm database
Configuring container for Oracle Linux 6.4
Added container user:oracle password:oracle
Added container user:root password:root
Container : /container/ol6cont1/rootfs
Config    : /container/ol6cont1/config
Network   : eth0 () on virbr0
'oracle' template installed
'ol6cont1' created

To prepare a minimal installation of the latest version of Oracle Linux 6 (about 400 MB), the installation script downloads the required RPM packages from Oracle's "public-yum" service. The directory structure of the installed container can be found at /container/ol6cont1/rootfs; it can be browsed and evaluated like any other regular directory structure. The script also creates two user accounts, "root" and "oracle", and configures a virtual network device, which obtains an IP address via DHCP from the DHCP server provided by the libvirt framework. The container's configuration file created by lxc-create is located at /container/ol6cont1/config and can be adapted and modified using a regular text editor. Before making any changes, it's recommended to create a snapshot of the container first, which can be used to quickly spawn additional containers:

# lxc-clone -o ol6cont1 -n ol6cont2
Tweaking configuration
Copying rootfs...
Create a snapshot of '/container/ol6cont1/rootfs' in '/container/ol6cont2/rootfs'
Updating rootfs...
'ol6cont2' created
# lxc-ls -1
ol6cont1
ol6cont2

Start the container using the following command:

# lxc-start -n ol6cont1 -d -o /container/ol6cont1/ol6cont1.log
# lxc-info -n ol6cont1
state:   RUNNING
pid:       311
# lxc-info -n ol6cont2
state:   STOPPED
pid:        -1

The container has now been started in the background. Eventual log messages will be redirected to the file ol6cont1.log.
As you can tell from the output of lxc-info, only the container ol6cont1 has been started, while the clone ol6cont2 remains in the stopped state until you boot it up using lxc-start. Now you can log into the container instance's console using the following command. The container's system configuration can now be modified using the usual tools (e.g. yum or rpm to install additional software).

# lxc-console -n ol6cont1

Oracle Linux Server release 6.4
Kernel 2.6.39-400.109.4.el6uek.x86_64 on an x86_64

ol6cont1 login: root
[root@ol6cont1 ~]# ps x
  PID TTY         STAT   TIME COMMAND
    1 ?           Ss     0:00 /sbin/init
  184 ?           Ss     0:00 /sbin/dhclient -H ol6cont1 -1 -q -lf /var/lib/dhclien
  207 ?           Sl     0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
  249 ?           Ss     0:00 /usr/sbin/sshd
  256 lxc/console Ss+    0:00 /sbin/mingetty /dev/console
  260 ?           Ss     0:00 login -- root
  262 lxc/tty2    Ss+    0:00 /sbin/mingetty /dev/tty2
  264 lxc/tty3    Ss+    0:00 /sbin/mingetty /dev/tty3
  266 lxc/tty4    Ss+    0:00 /sbin/mingetty /dev/tty4
  267 lxc/tty1    Ss     0:00 -bash
  278 lxc/tty1    R+     0:00 ps x
[root@ol6cont1 ~]# logout

Oracle Linux Server release 6.4
Kernel 2.6.39-400.109.4.el6uek.x86_64 on an x86_64

ol6cont1 login: CTRL-A Q

The key combination CTRL-A, Q terminates the console session. Alternatively, you can also log in to the container using SSH from the host system. All containers have their own IP address and are connected to a virtual bridge device virbr0 by default, which is also reachable from the host system. This way, you can easily set up simple client/server architectures within a host system. A running container can easily be suspended using the command lxc-freeze at any time. All running processes will be halted and won't consume CPU resources anymore, until you release them using lxc-unfreeze. Since Linux containers are based on the Linux Control Groups (cgroups) framework, it is also possible to precisely limit the resources available to a container.
A container can be shut down in various ways: either by calling lxc-stop from the host, or from within the container using the usual commands like shutdown -h or poweroff. Containers that are no longer needed can be discarded using the lxc-destroy command. If you'd like to learn more about this topic, there is a dedicated chapter about Linux containers in the Oracle Linux Administrator's Solutions Guide. It covers the creation, configuration, starting/stopping and monitoring of containers in detail. It also explains how to prepare the container storage on a Btrfs file system and how existing containers can be quickly cloned.

More links about the topic of Linux containers:

- Oracle Linux Technology Spotlight: LXC — Linux Containers
- OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy
- Video on the Oracle Linux YouTube channel: Linux Containers Explained
- Linux Containers on Wikipedia
- ArchLinux wiki article on Linux Containers
- Linux Advocates: Linux Containers and Why They Matter
- OTN Article: How I Used CGroups to Manage System Resources In Oracle Linux 6

Does Oracle Linux provide any security enhancements to Linux namespaces or LXC? When I tried the technology last year its biggest drawback, almost defeating the entire benefit of LXC, used to be that the kernel wasn't providing any real isolation of containers from the host. So for example gaining root access in a guest OS would effectively mean also gaining root access to the host.

Posted by Maciek on August 06, 2013 at 03:26 AM PDT #

Hi Maciek, thanks for your comment. User namespaces (which would allow the containment of the container root user) are not yet part of Oracle Linux - for now, user IDs (including "root") are identical in a container and on the host system. However, this does not automatically mean you could gain root access on the host, too - you are still confined by the container environment.
But a malicious root user inside a container could cause harm that would affect the host and other containers; how much depends on what files (e.g. in the virtual /dev/ directory) are made available inside the container.

Posted by guest on August 07, 2013 at 05:36 AM PDT #

I keep reading about containers, and they all seem to download a base system to the container. Is this really necessary for a system identical to the host? Can mounting of host directories containing the binaries not be used?

Posted by guest on September 19, 2013 at 03:47 AM PDT #

It's certainly possible to re-use the host's root file system and just spawn applications inside their own namespace - this is called an "Application Container". We have a chapter about this in the manual:

Posted by Lenz Grimmer on September 19, 2013 at 04:24 AM PDT #

Hi, my name is Jose. How do I change the IP of a container to connect to other computers in the network?

Posted by jose perez on May 26, 2014 at 09:04 AM PDT #
Jamie Baldaro (15,101 Points)

Return the average length of the tongues of the frogs in the array. Use a foreach loop as part of your solution.

I'm ashamed to say I'm completely lost. I've looked at this for ages, re-watched the video time and time again, and I still can't fathom it u.u Would anyone be willing to explain to me how this works exactly?

9 Answers

Kyle Waite (4,634 Points)

    namespace Treehouse.CodeChallenges
    {
        class FrogStats
        {
            public static double GetAverageTongueLength(Frog[] frogs)
            {
                double total = 0;
                foreach (Frog frog in frogs)
                {
                    total += frog.TongueLength;
                }
                return total / frogs.Length;
            }
        }
    }

anil rahman (7,781 Points)

    public static double GetAverageTongueLength(Frog[] frogs)
    {
        Frog frog = new Frog(7);
        int sum = 0;
        int count = 0;
        foreach (Frog f in frogs)
        {
            sum += f.TongueLength;
            count++;
        }
        int avg = sum / count;
        return avg;
    }

Kathryn Ann (10,071 Points)

What's the reason for creating a new Frog at the beginning of the method? Other than that, the only change I'd suggest is getting rid of the avg variable for efficiency, since you only use it once. Instead, you could just return sum / count.

Tony Lawrence (3,056 Points)

Thanks for the example, Anil. Though the Frog.cs is still to be considered. I'm still wondering how you got your 'f' value when you made your foreach loop, since it's not called in your example class or the Frog.cs.

Andrei Li (33,372 Points)

Is it ok if I share a correct code? (question to Treehouse Forum admins) I am so happy to solve it that I can't hold myself back from sharing, because there are cases where I couldn't find a correct answer in the Treehouse forum and it made me struggle.

    namespace Treehouse.CodeChallenges
    {
        class FrogStats
        {
            public static double GetAverageTongueLength(Frog[] frogs)
            {
                double total = 0;
                foreach (Frog frog in frogs)
                {
                    total += frog.TongueLength;
                }
                return total / frogs.Length;
            }
        }
    }

Steven Parker (177,495 Points)

You know how to do an average, right? You add up all the values and then divide by the number of things.
In this case, the "things" are the frog tongues and the values are the tongue lengths. Your function is going to get a list of Frog objects. In this list, every Frog has a TongueLength. So if you create a loop that takes each one and adds it to a total, and then after the loop divides the total by the number of Frogs, you'd have an average length. Do you have a better idea about what to do now?

Jamie Baldaro (15,101 Points)

I want to say yes but no... Not a clue >.< I feel like Mondays are not good days to learn u.u

Maddalena Menolascina (6,668 Points)

I did not quite understand a few basic things... please keep in mind that I will start this whole course again as well as going through the basics again, but I get very confused when I see variables used that we did not initialize in a way that I understand. For example, I get the meaning of the foreach loop, but in anil rahman's code, where did the "f" come from? I will study more, but a quick answer would be much appreciated. Sorry for the silly doubt.

Philip Lloyd (2,773 Points)

I believe you can skip the count variable altogether here and just use "frogs.Length" for the size of the array. With regards to the f question, I'm still learning and know more in the VBA world, but I believe it's just a placeholder name so you can reference the Frog within the loop, and it won't be used outside the loop. So you could easily have also said foreach(Frog kermit in frogs) or foreach(Frog meh in frogs), as long as you then said kermit.TongueLength or meh.TongueLength respectively.

Ricardo Ferreira (9,693 Points)

My question is: where does this .TongueLength come from in the body of the loop? The function's name is GetAverageTongueLength.

HIDAYATULLAH ARGHANDABI (20,988 Points)

Try this answer, it will work:

    double total = 0;
    for (int i = 0; i < frogs.Length; i++)
    {
        total += frogs[i].TongueLength;
    }
    return total / frogs.Length;

Enjoy Coding

Emma Wyman (13,250 Points)

This answer would not work because it is supposed to be a foreach loop.
anil rahman (7,781 Points)

I've kind of just guessed that and it works, so I'm guessing there's a better way to write it. :)
Facebook Wall .NET API

This project was awarded to czetxinc for $101.14 USD. Get free quotes for a project like this.

Awarded to: czetxinc
Project Budget: $100 - $150 USD
Total Bids: 2

Project Description

We need a .NET library (preferably written in C#, but VB can also be used) that allows us to send markup to the wall of a specified Facebook account. The class should look similar to this:

    public class FacebookWall
    {
        static void WriteToWall(username, password, flags, markup)

        static string GenerateMarkup(string text)
        static string GenerateMarkup(string photoUrl, string text)
        static string GenerateMarkup(string text, string urltitle, string url)
    }

The first method allows us to specify the username and password of a Facebook account. It also takes an integer that represents flags that further define how to write to the wall (for example, there should be a flag that determines whether a date/time is sent for the wall entry, etc.). Finally, it takes the markup text that should be written to the wall (it is our understanding that the wall data must be specified in FB markup).

The static helpers that follow return FB markup built from the passed-in arguments. For example, the last GenerateMarkup accepts the text and a url that is to follow the text (so when this markup is sent to the wall, it displays the text followed by a clickable link). These utility methods make the library more usable, plus help to show us how to construct the markup for different, FaceBook
Construct a region boundary RAG with the rag_boundary function. The function skimage.future.graph.rag_boundary() takes an edge_map argument, which gives the significance of a feature (such as edges) being present at each pixel. In a region boundary RAG, the edge weight between two regions is the average value of the corresponding pixels in edge_map along their shared boundary.

    from skimage.future import graph
    from skimage import data, segmentation, color, filters, io
    from matplotlib import pyplot as plt

    img = data.coffee()
    gimg = color.rgb2gray(img)

    labels = segmentation.slic(img, compactness=30, n_segments=400)
    edges = filters.sobel(gimg)
    edges_rgb = color.gray2rgb(edges)

    g = graph.rag_boundary(labels, edges)
    lc = graph.show_rag(labels, g, edges_rgb, img_cmap=None,
                        edge_cmap='viridis', edge_width=1.2)

    plt.colorbar(lc, fraction=0.03)
    io.show()

Total running time of the script: ( 0 minutes 0.576 seconds)

Gallery generated by Sphinx-Gallery
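To make the edge-weight rule concrete, here is a NumPy-only toy illustration of the averaging idea. This is a simplification of what rag_boundary computes internally (the real implementation also handles connectivity details), and the label layout and edge values are invented for the example:

```python
import numpy as np

# A tiny 2x4 label image: region 0 occupies the left half,
# region 1 the right half (labels as slic() might produce them).
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])

# A made-up edge significance map, standing in for the Sobel response.
edge_map = np.array([[0.1, 0.8, 0.6, 0.0],
                     [0.1, 0.4, 0.2, 0.0]])

# The pixels touching the 0/1 boundary are in columns 1 and 2.
# A region boundary RAG weights the 0-1 edge by averaging edge_map
# over those boundary pixels.
boundary_values = np.concatenate([edge_map[:, 1], edge_map[:, 2]])
weight = boundary_values.mean()
print(weight)  # approximately 0.5
```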
Functions

Functions are useful for carrying out actions that you will want to use multiple times. Arguments allow us to pass different values to a function - the function's actions are carried out on those values. Functions compute values and give them back to us via a return statement.

Example:

    def myAdd(x, y):
        # add 2 values and return the sum
        return x+y

    result = myAdd(1, 3)    # run myAdd with x set to 1, y set to 3
                            # store the value returned by myAdd in the result variable
    print result            # what will be printed?

    # run myAdd with x set to "foo", y set to "bar"
    # store the returned value in the result variable
    result = myAdd("foo", "bar")
    print result            # what happens? what is printed?

    # run myAdd with x set to 3, y set to "abc"
    # store return value in result
    result = myAdd(3, "abc")
    print result            # what happens?

Exercise: Write a function square() that takes a value and returns the square of that value. For example:

    >>> answer = square(4)
    >>> print answer
    16
    >>> result = square("hello")
    >>> print result
    hellohello

Exercise:.
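Here is one possible solution to the square() exercise, written in modern Python 3 syntax. Note that the sample interaction above is a bit inconsistent: with this definition, square(4) returns 16 as shown, but passing a string raises a TypeError, since Python cannot multiply two strings (producing "hellohello" would instead require something like value * 2):

```python
def square(value):
    # one possible solution: multiply the value by itself
    return value * value

print(square(4))   # 16
```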
by Rohit Kr.

Server-side rendering your React app in three simple steps

Here's what we will build in this tutorial: We'll use server-side rendering to deliver an HTML response when a user or crawler hits a page URL. We'll handle the latter requests on the client side. Why do we need it? Let me guide you to the answer.

What's the difference between client-side rendering and server-side rendering?

In client-side rendering, your browser downloads a minimal HTML page. It renders the JavaScript and fills the content into it. Server-side rendering, on the other hand, renders the React components on the server. The output is HTML content. You can combine these two to create an isomorphic app.

Cons of Rendering React on the Server

- SSR can improve performance if your application is small. But it can also degrade performance if it is heavy.
- It increases response time (and it can be worse if the server is busy).
- It increases response size, which means the page takes longer to load.
- It increases the complexity of the application.

When should you use Server-Side Rendering?

Despite these consequences of SSR, there are some situations in which you can and should use it.

1. SEO 👀

Every website wants to appear in searches. Correct me if I'm wrong! Unfortunately, search engine crawlers do not yet understand/render JavaScript. This means they see a blank page, no matter how helpful your site is.

Many folks say that Google's crawler now renders JavaScript. To test this, I deployed the app on Heroku. Here is what I saw on the Google Search Console: A blank page. 🔲 😢

This was the biggest reason I explored server-side rendering. Especially when it is a cornerstone page such as a landing page, blog, and so on.

To verify if Google renders your site, visit: Search Console Dashboard > Crawl > Fetch as Google. Enter the page URL or leave it empty for the homepage. Select FETCH AND RENDER. Once complete, click to see the result.

2.
Improve performance 🚀

In SSR, the application performance depends on the server's resources and the user's network speed. This makes it very useful for content-heavy sites.

For example, say that you have a medium-price mobile phone with slow internet speed. You try to access a site that downloads 4MB of data before you can see anything. Would you be able to see anything on your screen within 2–4 seconds? Would you visit that site again? I don't think you would.

Another major improvement is in First User Interaction Time. This is the difference in time from when a user hits the URL to when they see content. Here's the comparison. I tested it on a development Mac.

React Rendered on Server

The first interaction time is 300ms. Hydrate finishes at 400ms. The load event exits at approximately 500ms. You can see this by checking out the image above.

React Rendered on Client's Browser

The first interaction time is 400ms. The load event exits at 470ms 🤔.

The result speaks for itself. There's a 100ms difference in the First User Interaction Time for such a small app.

🚂 How does it Work? — (4 Simple Steps)

1. Create a fresh Redux Store on every request.
2. Optionally dispatch some actions.
3. Get the state out of the Store and perform SSR.
4. Send the state obtained in the previous step along with the response. We will use the state passed in the response for creating the initial state on the client side.

👋 Before you get started, clone/download the complete example from Github and use it for reference.

Getting Started by Setting up our App

First, open your favourite editor and shell. Create a new folder for your application. Let's start.

    npm init --yes

Fill in the details. After package.json is created, copy the dependencies and scripts below into it. Install all dependencies by running:

    npm install

You need to configure Babel and webpack for our build script to work. Babel transforms ESM and React into Node- and browser-understood code.
Create a new file .babelrc and put the line below in it.

    {
      "presets": ["@babel/env", "@babel/react"]
    }

webpack bundles our app and its dependencies into a single file. Create another file webpack.config.js with the following code in it:

    const path = require('path');

    module.exports = {
      entry: {
        client: './src/client.js',
        bundle: './src/bundle.js'
      },
      output: {
        path: path.resolve(__dirname, 'assets'),
        filename: "[name].js"
      },
      module: {
        rules: [
          { test: /\.js$/, exclude: /node_modules/, loader: "babel-loader" }
        ]
      }
    }

The build process outputs two files:

- assets/bundle.js — the pure client-side app.
- assets/client.js — the client-side companion for SSR.

The src/ folder contains the source code. The Babel-compiled files go into views/. The views directory will be created automatically if not present.

😌 Why do we need to compile source files?

The reason is the syntax difference between ESM and CommonJS. While writing React and Redux, we heavily use import and export in all files. Unfortunately, they don't work in Node. Here comes Babel to the rescue. The script below tells Babel to compile all files in the src directory and put the result in views.

    "babel": "babel src -d views",

Now, Node can run them. 👏

Copy Precoded & Static files

If you have already cloned the repository, copy from it. Otherwise download the ssr-static.zip file from Dropbox. Extract it and keep these three folders inside your app directory. Here's what they contain.

- React App and components reside in src/components.
- Redux files in src/redux/.
- assets/ & media/: contain static files such as style.css and images.

Server Side

Create two new files named server.js and template.js inside the src/ folder.

1. src/server.js

Magic happens here. This is the code you've been searching for.
    import React from 'react';
    import { renderToString } from 'react-dom/server';
    import { Provider } from 'react-redux';
    import configureStore from './redux/configureStore';
    import App from './components/app';

    module.exports = function render(initialState) {
      // Model the initial state
      const store = configureStore(initialState);
      let content = renderToString(<Provider store={store}><App /></Provider>);
      const preloadedState = store.getState();
      return { content, preloadedState };
    };

Instead of rendering our app, we need to wrap it into a function and export it. The function accepts the initial state of the application. Here's how it works.

1. Pass initialState to configureStore(). configureStore() returns a new Store instance. Hold it inside the store variable.
2. Call the renderToString() method, providing our App as input. It renders our app on the server and returns the HTML produced. Now, the variable content stores the HTML.
3. Get the state out of the Redux Store by calling getState() on store. Keep it in a variable preloadedState.
4. Return the content and preloadedState. We will pass these to our template to get the final HTML page.

2. src/template.js

template.js exports a function. It takes title, state and content as input. It injects them into the template and returns the final HTML document.

To pass along the state, the template attaches state to window.__STATE__ inside a <script> tag. Now you can read state on the client side by accessing window.__STATE__.

We also include the SSR companion assets/client.js client-side application in another script tag. If you request the pure client version, it only puts assets/bundle.js inside the script tag.

The Client Side

The client side is pretty straightforward.

1. src/bundle.js

This is how you write the React and Redux Provider wrap. It is our pure client-side app. No tricks here.
😅

    import React from 'react';
    import { render } from 'react-dom';
    import { Provider } from 'react-redux';
    import configureStore from './redux/configureStore';
    import App from './components/app';

    const store = configureStore();

    render(
      <Provider store={store}>
        <App />
      </Provider>,
      document.querySelector('#app')
    );

2. src/client.js

Looks familiar? Yeah, there is nothing special except window.__STATE__. All we need to do is grab the initial state from window.__STATE__ and pass it to our configureStore() function as the initial state.

Let's take a look at our new client file:

    import React from 'react';
    import { hydrate } from 'react-dom';
    import { Provider } from 'react-redux';
    import configureStore from './redux/configureStore';
    import App from './components/app';

    const state = window.__STATE__;
    delete window.__STATE__;
    const store = configureStore(state);

    hydrate(
      <Provider store={store}>
        <App />
      </Provider>,
      document.querySelector('#app')
    );

Let's review the changes:

1. Replace render() with hydrate(). hydrate() is the same as render() but is used to hydrate elements rendered by ReactDOMServer. It ensures that the content is the same on the server and the client.
2. Read the state from the global window object window.__STATE__. Store it in a variable and delete window.__STATE__.
3. Create a fresh store with state as initialState.

All done here.

Putting it all together

Index.js

This is the entry point of our application. It handles requests and templating. It also declares an initialState variable. I have modelled it with data in the assets/data.json file. We will pass it to our ssr() function.

Note: While referencing a file that is inside src/ from a file outside src/, use normal require() and replace src/ with views/. You know the reason (Babel compile).

Routing

- /: By default, the server-rendered homepage.
- /client: Pure client-side rendering example.
- /exit: Server stop button. Only available in development.

Build & Run

It's time to build and run our application.
We can do this with a single line of code.

    npm run build && npm run start

Now, the application is running at.

Ready to become a React Pro? I am starting a new series from next Monday to get your React skills blazing, immediately.

Thank you for reading this! If you like it and find it useful, here are some things you can do to show your support.

Update: The article is updated to include the latest Babel scoped package naming convention. — 2 October 2018
Targeting a high school audience and competing with Texas Instruments' TI-Nspire family, the NumWorks calculator is equipped with a non-touch color LCD. It offers a programmable interface in Python. Launched in France in the summer of 2017, the NumWorks calculator was quickly sold to several thousand students among French high schools, mainly thanks to a fine design, a Python interface (programming in Python is a mandatory part of secondary education in France), the recommendations of many teachers and the creation of a community of users offering various applications online. The calculator is announced as open hardware (schematics and plans are available) and open source, with a Creative Commons BY-NC-ND license. (Translated from the Wikipedia article.)

This is a calculator that has a great design and Python built into the OS. Updating is easy, just a modern

Why do you need it?

Great design, built with durability from the beginning, and it can be upgraded. It has all the functions of a regular calculator, and it can be programmed without much extra training.

What's the fun part?

It runs Python, has a USB connection and utilizes WebUSB for updating. The calculator runs the core uPy and can be programmed with a computer. Or, if you are up for a challenge... with the ABC keyboard on the device. I've written a small script to return a random value of a dice.

Code Example

The example can also be found here.

    import random

    # returns a number
    def roll_dice():
        print(random.randint(1, 6))

    print("""
    This python script returns a random value between 1 and 6.
    It's a dice
    """)

    flag = True
    while flag:
        user_prompt = input(">")
        if user_prompt.lower() == "quit":
            flag = False
        else:
            print("Rolling dice...\nYour number is:")
            roll_dice()
Elm Friday: Imports (Part VIII)

12/18/15

Nearly all modules you'll write in Elm need to import other modules to do their work; all our examples so far have had some import statements, too. In this episode, we take a closer look at the import statement and at the different ways to import modules.

About This Series

This is the eighth post in this series.

Qualified Imports

Let's look at a basic example that just renders some static HTML:

    import Html
    import Html.Attributes

    main : Html.Html
    main =
      Html.div []
        [ Html.p [] [ Html.text "This is the first paragraph" ]
        , Html.p [] [ Html.text "This is another paragraph" ]
        , Html.hr [] []
        , Html.ul []
            [ Html.li [] [ Html.text "some" ]
            , Html.li [] [ Html.text "bullet" ]
            , Html.li [] [ Html.text "points" ]
            ]
        , Html.p []
            [ Html.text "This is the "
            , Html.span [ Html.Attributes.style [ ("font-weight", "bold") ] ] [ Html.text "closing" ]
            , Html.text " paragraph."
            ]
        ]

This would render the following HTML:

    <div>
      <p>This is the first paragraph</p>
      <p>This is another paragraph</p>
      <hr>
      <ul>
        <li>some</li>
        <li>bullet</li>
        <li>points</li>
      </ul>
      <p>
        This is the <span style="font-weight: bold;">closing</span> paragraph.
      </p>
    </div>

As always, go ahead and try stuff out for yourself, for example by copying this into a file named Imports.elm and checking the result with elm-reactor. This piece of Elm imports two modules, Html and Html.Attributes:

    import Html
    import Html.Attributes

An import statement basically tells the Elm compiler to load these two modules and to use stuff from them whenever it encounters anything that is prefixed with Html. (or Html.Attributes., respectively). Thus, we know that the functions Html.text or Html.Attributes.style that we are using in the example code come from said modules.

By the way, where do the imported modules come from? That depends. You can import modules from third-party packages that have been installed via elm-package install.
You can also import your own modules from your current project's source folder (there will be a separate blog post on how to structure your code base with modules later). In this example, we are importing modules from the package evancz/elm-html, so you would need to install this via elm-package install --yes evancz/elm-html to follow along. If you already did the examples in episode 3 or episode 4, you can just create a new file (say, Imports.elm) in the same elm-playground directory and you are good to go. We already installed the elm-html package there.

Coming back to the example code, let's be honest here: this code looks really bloated with all the redundant Html. qualifiers. This is where the exposing keyword comes in.

Open Imports aka Unqualified Imports

The following code example makes use of open imports by using import exposing, so that the imported identifiers can be used without a prefix.

    import Html exposing (Html, div, hr, li, p, span, text, ul)
    import Html.Attributes exposing (style)

    main : Html
    main =
      div []
        [ p [] [ text "This is the first paragraph" ]
        , p [] [ text "This is another paragraph" ]
        , hr [] []
        , ul []
            [ li [] [ text "some" ]
            , li [] [ text "bullet" ]
            , li [] [ text "points" ]
            ]
        , p []
            [ text "This is the "
            , span [ style [ ("font-weight", "bold") ] ] [ text "closing" ]
            , text " paragraph."
            ]
        ]
Thanks a lot guys, the problem is fixed. I feel so stupid because it ended up being a naming problem - I thought the path looked okay, and I automatically assumed the name wasn't a problem because I would have thought the IOError message would have said something along the lines of "cannot find file". Anyways, I actually had a question about handling the exception, because I did try doing that. I had req.sendfile() in a try/except block, but in order to send the error message to the browser... Here's what I did:

    def handler(req):
        fields = util.FieldStorage(req)
        filename = os.path.basename(fields["filename"])
        directory = os.path.dirname(fields["filename"])
        req.headers_out["Content-Disposition"] = "attachment; filename=%s" % filename
        req.content_type = "text/plain"
        try:
            req.sendfile("%s/%s" % (directory, filename))
        except IOError, e:
            req.content_type = "text/html"
            req.write("Raised exception reads:\n<br>%s" % str(e))
            return apache.OK
        return apache.OK

Here's what happened: instead of the IOError displaying in the browser, the file downloaded and had the req.write() message in it:

    "Raised exception reads: <br>Could not stat file for reading"

I thought I read somewhere that you can't change the content type after it has been set (a limitation of HTTP - in fact, this is why I have a separate sendFile handler... so I can set the content type properly), but I didn't think this would be a problem. Is there a way to get the message to display in the browser instead of sending over an HTML file with the message?

Thanks again, I don't know what I would do without this mailing list sometimes.

Tom

-----Original Message-----
> Date: Fri Aug 04 12:42:37 EDT 2006
> From: "Jim Gallacher" <jpg at jgassociates.ca>
> Subject: Re: [mod_python] req.sendfile() problem
> To: "Thomas J. Schirripa" <tommys at eden.rutgers.edu>
>
> Thomas J.
Schirripa wrote:
> > Before stating my problem, let me say that I am running apache version 2.0.52, mod_python 3.2.8, python 2.3, all on Redhat Enterprise Linux WS release 4.
> >
> > Basically I have a webpage where the client uploads a file and some scripts/programs run on the server and produce output files. Some of this output is spit out onto another webpage, but I want to give the client the option to download the output files. Since I am using multiple webpages, I had to figure out a way to transfer data from one page to another. The only sensible solution that I could figure out was to use psp to feed variables from my handlers into hidden form inputs in my template. From the output html page, the client can click a submit button next to the output filename that says "download". The hidden form inputs have the information as to where the file is located on the server, and the action on the form refers to my sendFile handler. That handler looks like this:
> >
> > def handler(req):
> >     fields = util.FieldStorage(req)
> >     filename = os.path.basename(fields["filename"])
> >     path = os.path.dirname(fields["filename"])
> >     req.headers_out["Content-Disposition"] = "attachment; filename=%s" % filename
> >     req.content_type = "text/plain"
> >     req.sendfile("%s/%s" % (directory, filename))
> >     return apache.OK
> >
> > I am not sure if I am using req.header_out or req.content-type correctly or how critical those lines are (I found it in another post), but my problem is that I am getting a file to download with the proper filename, BUT the file has a mod_python error message in it that reads:
> >
> > IOError: Could not stat file for reading
> >
> > and the error points to the line with req.sendfile(). Can anyone tell me what's going on and how to correct this?
>
> An exception is being raised and mod_python is dumping the traceback in
> the response.
> You should wrap req.sendfile() in a try/except block, and
> if an exception is raised send a proper error message to the browser.
>
> sendfile() needs to stat the file so it can set the content length
> header. You are getting the IOError because the file does not exist or
> there is a permission problem. Make sure you are using absolute paths
> for sendfile(). Also, it goes without saying that the filename provided
> in the request should not be trusted. Do some checking to ensure it
> is a valid value.
>
> Jim
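Jim's closing advice (validate the untrusted filename, and stat the file before touching any response headers) can be sketched independently of mod_python. The helper below is my own illustration, not code from this thread; the idea is to call something like this first, and only set Content-Disposition and call req.sendfile() when it returns a path, writing the error page otherwise while the content type is still unset:

```python
import os

def resolve_download_path(base_dir, requested):
    """Resolve an untrusted, client-supplied filename against a fixed
    base directory. Returns the absolute path of an existing file, or
    None if the request is invalid."""
    # basename() strips any directory components the client supplied,
    # and realpath() normalizes the result before the containment check.
    candidate = os.path.realpath(os.path.join(base_dir, os.path.basename(requested)))
    if not candidate.startswith(os.path.realpath(base_dir) + os.sep):
        return None
    # Checking existence up front means headers like Content-Disposition
    # are only set once we know sendfile() can succeed.
    if not os.path.isfile(candidate):
        return None
    return candidate
```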
I'm building a React-based paper doll RPG character screen. Just for fun. More of a challenge than I expected, honestly. I have a list of quests, and when you click that the quest is "completed", the paper doll gets new gear. That "gear" is simply an image that loads over the area (head, body, legs, etc). I can't seem to figure out how to pull the appropriate image from a local file using DOM styling. Here's my onClick() handler:

    handleClick = (event, props) => {
      // copy this.state.quests into array for .map()
      var quests = [...this.state.quests];
      // toggle checked state
      quests.map((quest) => {
        if (quest.name === props.name) {
          // toggle the checked flag in this.state.quests
          quest.checked = !quest.checked;
          // just using 'pdHead' for testing purposes (paper doll Head)
          const upgrade = document.getElementById('pdHead');
          // toggle background image on quest click
          quest.checked ?
            // HERE'S THE PROBLEM - which format do I use to link to the file?
            upgrade.style.backgroundImage = "url('../images/TestImage.png')" :
            upgrade.style.backgroundImage = 'none';
        }
        return quest;
      })
      // object destructuring to update this.state.quests
      this.setState({ quests });
    }

First, I am able to do this by using ES6 import statements to import the image file, like this:

    import TestImage from '../images/TestImage.png';

and then just use a template string in the ternary operator:

    quest.checked ?
      upgrade.style.backgroundImage = `url(${ TestImage })` :
      upgrade.style.backgroundImage = 'none';

This works, but I don't want to do it this way (unless I have no other choice) because I will ultimately have dozens of pictures and I don't want to import them all if I don't have to. I've tried all sorts of variations, but the ES6 import is the only one that works. I get no errors to help me figure out why it isn't working because backgroundImage just defaults to 'none'. I've tried:
I’ve tried: upgrade.style.backgroundImage='url("../images/TestImage.png")' upgrade.style.backgroundImage='require("../images/TestImage.png")' upgrade.style.backgroundimage='url(require("../images/TestImage.png")' Is this just something you can’t do? I can change the color and set all sorts of other styles this way, and I can grab an image from the Internet with no issues. I just can’t seem to use a local image without the import statement. Any ideas? My goal is to have filepaths stored in the state, so when you click on a quest it populates the URL with the correct path for the quest. I can think of other ways to do this, but I’m trying to do it without writing a ton of code over and over. That’s also why I don’t want to use the import statements if I can avoid them.
Not to say that the Google Style Guide is the holy bible, but as a newbie programmer I find it a good reference. The Google Style Guide lists several disadvantages of forward declarations, and illustrates them with the following example:

    // b.h:
    struct B {};
    struct D : B {};

    // good_user.cc:
    #include "b.h"
    void f(B*);
    void f(void*);
    void test(D* x) { f(x); }  // calls f(B*)

Some searching on SO seemed to suggest that forward declaration is universally the better solution.

I don't think that's what SO says. The text you quote is comparing a "guerilla" forward declaration against including the proper include file. I don't think you'll find a lot of support on SO for the approach Google is criticising here. That bad approach is, "no, don't #include include files, just write declarations for the few functions and types you want to use".

The proper include file will still contain forward declarations of its own, and a search on SO will suggest that this is the right thing to do, so I see where you got the idea that SO is in favour of declarations. But Google isn't saying that a library's own include file shouldn't contain forward declarations; it's saying that you shouldn't go rogue and write your own forward declaration for each function or type you want to use.

If you #include the right include file, and your build chain works, then the dependency isn't hidden and the rest of the problems mostly don't apply, despite the fact that the include file contains declarations. There are still some difficulties, but that's not what Google is talking about here.

Looking in particular at forward declarations of types as compared with class definitions for them, point (4) gives an example of that going wrong (since a forward declaration of D cannot express that it's derived from B; for that you need the class definition). There's also a technique called "Pimpl" that does make careful use of a forward type declaration for a particular purpose.
So again you'll see some support on SO for that, but this isn't the same as supporting the idea that everyone should in general run around forward-declaring classes instead of #includeing their header files.
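The "Pimpl" technique mentioned above is the one legitimate, deliberate use of a forward type declaration. A minimal sketch follows (the Widget/Impl names are invented for illustration; header and source file are shown as one listing): the public header only forward-declares the implementation struct, so clients never see, and never recompile against, the private members.

```cpp
#include <cassert>
#include <memory>
#include <string>

// widget.h -- public interface. Impl is only forward-declared here,
// so changing Impl's members never forces clients to recompile.
class Widget {
public:
    Widget();
    ~Widget();                        // defined below, where Impl is complete
    std::string label() const;
private:
    struct Impl;                      // forward declaration ("Pimpl")
    std::unique_ptr<Impl> pimpl_;
};

// widget.cc -- the one translation unit that defines Impl.
struct Widget::Impl {
    std::string label = "default";
};

Widget::Widget() : pimpl_(new Impl) {}
Widget::~Widget() = default;          // unique_ptr's deleter needs the complete type
std::string Widget::label() const { return pimpl_->label; }
```

Note the destructor is defined after Impl's definition; defaulting it in the header would not compile, because std::unique_ptr must see the complete type to delete it.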
https://codedump.io/share/ramsiI1Hp568/1/why-does-google-style-guide-discourage-forward-declaration
CC-MAIN-2017-43
refinedweb
381
65.86
This article presents the basics of the Observer Pattern, when to use it and how to implement it in C++. I have posted a similar article that talks about the Observer pattern in C#. The main aim of this article will be to implement the observer pattern in C++.

Let us now discuss all the classes one by one:

- Subject
- ASubject
- ConcreteSubject (DummyProduct)
- Observer
- IObserver
- ConcreteObserver

//Header File
#pragma once
#include "ASubject.h"

class DummyProduct : public ASubject
{
public:
    void ChangePrice(float price);
};

//CPP File
#include "DummyProduct.h"

void DummyProduct::ChangePrice(float price)
{
    Notify(price);
}

#pragma once

class IObserver
{
public:
    virtual void Update(float price) = 0;
};

This article covers the basics of the Observer pattern and provides a basic implementation in C++. I have also implemented the same in C#. What I learnt from it is how the observer pattern works and what the similarities and differences are in implementing it in C++ and C#.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

KjellKod.cc wrote: 2) In my opinion the Observer pattern is a great teaching example that often (not always) is best to stay as just that, an example.

Stefan_Lang wrote: I've looked over signals and slots, and found them to be not that different from the observer pattern. The main difference seems to be that the logic (i.e. who signals what to whom) is separated from the observers.

Stefan_Lang wrote: Why must there be a separate object that is responsible for registering the observer?

Stefan_Lang wrote: [...] most of the observers had to be able to switch notifications on and off at all times, just so they wouldn't get swamped by them.
Anything short of that would have led to violations of hard real-time conditions.

Stefan_Lang wrote: I don't argue the flexibility of signals and slots, but my experience shows that you trade a significant amount of performance for this flexibility. Depending on the system you implement, you will have to decide which is your priority.

KjellKod.cc wrote: Meaning this is not clear in the Observer pattern either, at least not if a clear interface structure is being used.

KjellKod.cc wrote: There is nothing to stop you from using signals-n-slots and using a connect/disconnect through an interface, is there?

KjellKod.cc wrote: Nothing says you cannot do this using signals-n-slots either.

KjellKod.cc wrote: OO design needs both composition and inheritance.

KjellKod.cc wrote: (regarding performance) If you then are unlucky enough to have an "Observer" that needs to receive several types of data and use multiple inheritance to get it, then you are on a downhill slope.

Stefan_Lang wrote: In the end, what pattern you should use depends on the problem at hand.

Stefan_Lang wrote:
KjellKod.cc wrote: There is nothing to stop you from using signals-n-slots and using a connect/disconnect through an interface, is there?
Yes, in fact there is: As I pointed out, the connection in that case I mentioned often depended on the observers' inner state. That means any external object activating or deactivating that connection would have had to be notified of a change of that state, turning it into an observer! It would be rather ironic to decouple an observer from its duty to maintain its connections by introducing an observer to that observer!

Stefan_Lang wrote: there was a notable cost for extra traffic between threads, context switching, waking up sleeping threads unnecessarily, and the like. Therefore minimizing communication was top priority.
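A complete, self-contained version of the pattern the article describes might look like the sketch below. The ASubject, IObserver, and DummyProduct names follow the article; PriceRecorder is an invented concrete observer, and the attach/detach bookkeeping here is my own, not the article's exact source.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Observer interface: concrete observers implement Update().
class IObserver {
public:
    virtual ~IObserver() = default;
    virtual void Update(float price) = 0;
};

// Abstract subject: keeps a list of observers and notifies them all.
class ASubject {
public:
    void Attach(IObserver* o) { observers_.push_back(o); }
    void Detach(IObserver* o) {
        observers_.erase(
            std::remove(observers_.begin(), observers_.end(), o),
            observers_.end());
    }
protected:
    void Notify(float price) {
        for (IObserver* o : observers_) o->Update(price);
    }
private:
    std::vector<IObserver*> observers_;
};

// Concrete subject from the article: forwards price changes to observers.
class DummyProduct : public ASubject {
public:
    void ChangePrice(float price) { Notify(price); }
};

// A concrete observer that just records the last price it was given.
class PriceRecorder : public IObserver {
public:
    float last_price = 0.0f;
    void Update(float price) override { last_price = price; }
};
```

Attached observers see every ChangePrice() call; detached ones stop receiving updates, which is the decoupling the pattern is after.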
http://www.codeproject.com/Articles/328365/Understanding-and-Implementing-Observer-Pattern-in?fid=1685107&df=90&mpp=10&sort=Position&spc=None&tid=4155826
CC-MAIN-2015-22
refinedweb
625
51.89
File I/O

Introduction

This section describes how to use the FileIO API to read and write files using a local secure data store. You might use the File IO API with the URL Loading APIs to create an overall data download and caching solution for your NaCl applications. For example:

- Use the File IO APIs to check the local disk to see if a file exists that your program needs.
- If the file exists locally, load it into memory using the File IO API. If the file doesn't exist locally, use the URL Loading API to retrieve the file from the server.
- Use the File IO API to write the file to disk.
- Load the file into memory using the File IO API when needed by your application.

The example discussed in this section is included in the SDK in the directory examples/api/file_io.

Reference information

For reference information related to FileIO, see the following documentation:

- file_io.h - API to create a FileIO object
- file_ref.h - API to create a file reference or "weak pointer" to a file in a file system
- file_system.h - API to create a file system associated with a file

Local file I/O

Chrome provides an obfuscated, restricted area on disk to which a web app can safely read and write files. The Pepper FileIO, FileRef, and FileSystem APIs (collectively called the File IO APIs) allow you to access this sandboxed local disk so you can read and write files and manage caching yourself. The data is persistent between launches of Chrome, and is not removed unless your application deletes it or the user manually deletes it. There is no limit to the amount of local data you can use, other than the actual space available on the local drive.

Enabling local file I/O

The easiest way to enable the writing of persistent local data is to include the unlimitedStorage permission in your Chrome Web Store manifest file. With this permission you can use the Pepper FileIO API without the need to request disk space at run time.
When the user installs the app Chrome displays a message announcing that the app writes to the local disk. If you do not use the unlimitedStorage permission you must include JavaScript code that calls the HTML5 Quota Management API to explicitly request local disk space before using the FileIO API. In this case Chrome will prompt the user to accept a requestQuota call every time one is made.

Testing local file I/O

You should be aware that using the unlimitedStorage manifest permission constrains the way you can test your app. Three of the four techniques described in Running Native Client Applications read the Chrome Web Store manifest file and enable the unlimitedStorage permission when it appears, but the first technique (local server) does not. If you want to test the file IO portion of your app with a simple local server, you need to include JavaScript code that calls the HTML5 Quota Management API. When you deliver your application you can replace this code with the unlimitedStorage manifest permission.

The file_io example

The Native Client SDK includes an example, file_io, that demonstrates how to read and write a local disk file. Since you will probably run the example from a local server without a Chrome Web Store manifest file, the example's index file uses JavaScript to perform the Quota Management setup as described above. The example has these primary files:

- index.html - The HTML code that launches the Native Client module and displays the user interface.
- example.js - JavaScript code that requests quota (as described above). It also listens for user interaction with the user interface, and forwards the requests to the Native Client module.
- file_io.cc - The code that sets up and provides an entry point to the Native Client module.

The remainder of this section covers the code in the file_io.cc file for reading and writing files.
File I/O overview

Like many Pepper APIs, the File IO API includes a set of methods that execute asynchronously and that invoke callback functions in your Native Client module. Unlike most other examples, the file_io example also demonstrates how to make Pepper calls synchronously on a worker thread. It is illegal to make blocking calls to Pepper on the module's main thread. This restriction is lifted when running on a worker thread; this is called "calling Pepper off the main thread". This often simplifies the logic of your code; multiple asynchronous Pepper functions can be called from one function on your worker thread, so you can use the stack and standard control flow structures normally.

The high-level flow for the file_io example is described below. Note that methods in the namespace pp are part of the Pepper C++ API.

Creating and writing a file

Following are the high-level steps involved in creating and writing to a file:

- pp::FileIO::Open is called with the PP_FILEOPENFLAG_CREATE flag to create a file. Because the callback function is pp::BlockUntilComplete, this thread is blocked until Open succeeds or fails.
- pp::FileIO::Write is called to write the contents. Again, the thread is blocked until the call to Write completes. If there is more data to write, Write is called again.
- When there is no more data to write, call pp::FileIO::Flush.

Opening and reading a file

Following are the high-level steps involved in opening and reading a file:

- pp::FileIO::Open is called to open the file. Because the callback function is pp::BlockUntilComplete, this thread is blocked until Open succeeds or fails.
- pp::FileIO::Query is called to query information about the file, such as its file size. The thread is blocked until Query completes.
- pp::FileIO::Read is called to read the contents. The thread is blocked until Read completes. If there is more data to read, Read is called again.

Deleting a file

Deleting a file is straightforward: call pp::FileRef::Delete.
The thread is blocked until Delete completes.

Making a directory

Making a directory is also straightforward: call pp::FileRef::MakeDirectory. The thread is blocked until MakeDirectory completes.

Listing the contents of a directory

Following are the high-level steps involved in listing a directory:

- pp::FileRef::ReadDirectoryEntries is called, and given a directory entry to list. A callback is given as well; many of the other functions use pp::BlockUntilComplete, but ReadDirectoryEntries returns results in its callback, so it must be specified.
- When the call to ReadDirectoryEntries completes, it calls ListCallback which packages up the results into a string message, and sends it to JavaScript.

file_io deep dive

The file_io example displays a user interface with a couple of fields and several buttons. Following is a screenshot of the file_io example:

Each radio button is a file operation you can perform, with some reasonable default values for filenames. Try typing a message in the large input box and clicking Save, then switching to the Load File operation, and clicking Load. Let's take a look at what is going on under the hood.

Opening a file system and preparing for file I/O

pp::Instance::Init is called when an instance of a module is created. In this example, Init starts a new thread (via the pp::SimpleThread class), and tells it to open the filesystem:

virtual bool Init(uint32_t /*argc*/,
                  const char* /*argn*/ [],
                  const char* /*argv*/ []) {
  file_thread_.Start();
  // Open the file system on the file_thread_. Since this is the first
  // operation we perform there, and because we do everything on the
  // file_thread_ synchronously, this ensures that the FileSystem is open
  // before any FileIO operations execute.
  file_thread_.message_loop().PostWork(
      callback_factory_.NewCallback(&FileIoInstance::OpenFileSystem));
  return true;
}

When the file thread starts running, it will call OpenFileSystem.
This calls pp::FileSystem::Open and blocks the file thread until the function returns.

void OpenFileSystem(int32_t /*result*/) {
  int32_t rv = file_system_.Open(1024 * 1024, pp::BlockUntilComplete());
  if (rv == PP_OK) {
    file_system_ready_ = true;
    // Notify the user interface that we're ready
    PostMessage("READY|");
  } else {
    ShowErrorMessage("Failed to open file system", rv);
  }
}

Handling messages from JavaScript

When you click the Save button, JavaScript posts a message to the NaCl module with the file operation to perform sent as a string (See Messaging System for more details on message passing). The string is parsed by HandleMessage, and new work is added to the file thread:

virtual void HandleMessage(const pp::Var& var_message) {
  if (!var_message.is_string())
    return;

  // Parse message into: instruction file_name_length file_name [file_text]
  std::string message = var_message.AsString();
  std::string instruction;
  std::string file_name;
  std::stringstream reader(message);
  int file_name_length;
  reader >> instruction >> file_name_length;
  file_name.resize(file_name_length);
  reader.ignore(1);  // Eat the delimiter
  reader.read(&file_name[0], file_name_length);
  ...
  // Dispatch the instruction
  if (instruction == kLoadPrefix) {
    file_thread_.message_loop().PostWork(
        callback_factory_.NewCallback(&FileIoInstance::Load, file_name));
  } else if (instruction == kSavePrefix) {
    ...
  }
}

Saving a file

FileIoInstance::Save is called when the Save button is pressed. First, it checks to see that the FileSystem has been successfully opened:

if (!file_system_ready_) {
  ShowErrorMessage("File system is not open", PP_ERROR_FAILED);
  return;
}

It then creates a pp::FileRef resource with the name of the file. A FileRef resource is a weak reference to a file in the FileSystem; that is, a file can still be deleted even if there are outstanding FileRef resources.

pp::FileRef ref(file_system_, file_name.c_str());

Next, a pp::FileIO resource is created and opened.
The call to pp::FileIO::Open passes PP_FILEOPENFLAG_WRITE to open the file for writing, PP_FILEOPENFLAG_CREATE to create a new file if it doesn't already exist, and PP_FILEOPENFLAG_TRUNCATE to clear the file of any previous content:

pp::FileIO file(this);
int32_t open_result = file.Open(ref,
                                PP_FILEOPENFLAG_WRITE |
                                PP_FILEOPENFLAG_CREATE |
                                PP_FILEOPENFLAG_TRUNCATE,
                                pp::BlockUntilComplete());
if (open_result != PP_OK) {
  ShowErrorMessage("File open for write failed", open_result);
  return;
}

Now that the file is opened, it is written to in chunks. In an asynchronous model, this would require writing a separate function, storing the current state on the free store, and a chain of callbacks. Because this function is called off the main thread, pp::FileIO::Write can be called synchronously and a conventional do/while loop can be used:

int64_t offset = 0;
int32_t bytes_written = 0;
do {
  bytes_written = file.Write(offset,
                             file_contents.data() + offset,
                             file_contents.length(),
                             pp::BlockUntilComplete());
  if (bytes_written > 0) {
    offset += bytes_written;
  } else {
    ShowErrorMessage("File write failed", bytes_written);
    return;
  }
} while (bytes_written < static_cast<int64_t>(file_contents.length()));

Finally, the file is flushed to push all changes to disk:

int32_t flush_result = file.Flush(pp::BlockUntilComplete());
if (flush_result != PP_OK) {
  ShowErrorMessage("File fail to flush", flush_result);
  return;
}

Loading a file

FileIoInstance::Load is called when the Load button is pressed. Like the Save function, Load first checks to see if the FileSystem has been successfully opened, and creates a new FileRef:

if (!file_system_ready_) {
  ShowErrorMessage("File system is not open", PP_ERROR_FAILED);
  return;
}
pp::FileRef ref(file_system_, file_name.c_str());

Load creates and opens a new FileIO resource, passing PP_FILEOPENFLAG_READ to open the file for reading.
The result is compared to PP_ERROR_FILENOTFOUND to give a better error message when the file doesn't exist:

int32_t open_result = file.Open(ref,
                                PP_FILEOPENFLAG_READ,
                                pp::BlockUntilComplete());
if (open_result == PP_ERROR_FILENOTFOUND) {
  ShowErrorMessage("File not found", open_result);
  return;
} else if (open_result != PP_OK) {
  ShowErrorMessage("File open for read failed", open_result);
  return;
}

Then Load calls pp::FileIO::Query to get metadata about the file, such as its size. This is used to allocate a std::vector buffer that holds the data from the file in memory:

int32_t query_result = file.Query(&info, pp::BlockUntilComplete());
if (query_result != PP_OK) {
  ShowErrorMessage("File query failed", query_result);
  return;
}
...
std::vector<char> data(info.size);

Similar to Save, a conventional while loop is used to read the file into the newly allocated buffer:

int64_t offset = 0;
int32_t bytes_read = 0;
int32_t bytes_to_read = info.size;
while (bytes_to_read > 0) {
  bytes_read = file.Read(offset,
                         &data[offset],
                         data.size() - offset,
                         pp::BlockUntilComplete());
  if (bytes_read > 0) {
    offset += bytes_read;
    bytes_to_read -= bytes_read;
  } else if (bytes_read < 0) {
    // If bytes_read < PP_OK then it indicates the error code.
    ShowErrorMessage("File read failed", bytes_read);
    return;
  }
}

Finally, the contents of the file are sent back to JavaScript, to be displayed on the page. This example uses "DISP|" as a prefix command for display information:

std::string string_data(data.begin(), data.end());
PostMessage("DISP|" + string_data);
ShowStatusMessage("Load success");

Deleting a file

FileIoInstance::Delete is called when the Delete button is pressed. First, it checks whether the FileSystem has been opened, and creates a new FileRef:

if (!file_system_ready_) {
  ShowErrorMessage("File system is not open", PP_ERROR_FAILED);
  return;
}
pp::FileRef ref(file_system_, file_name.c_str());

Unlike Save and Load, Delete is called on the FileRef resource, not a FileIO resource.
Note that the result is checked for PP_ERROR_FILENOTFOUND to give a better error message when trying to delete a non-existent file:

int32_t result = ref.Delete(pp::BlockUntilComplete());
if (result == PP_ERROR_FILENOTFOUND) {
  ShowStatusMessage("File/Directory not found");
  return;
} else if (result != PP_OK) {
  ShowErrorMessage("Deletion failed", result);
  return;
}

Listing files in a directory

FileIoInstance::List is called when the List Directory button is pressed. Like all other operations, it checks whether the FileSystem has been opened and creates a new FileRef:

if (!file_system_ready_) {
  ShowErrorMessage("File system is not open", PP_ERROR_FAILED);
  return;
}
pp::FileRef ref(file_system_, dir_name.c_str());

Unlike the other operations, it does not make a blocking call to pp::FileRef::ReadDirectoryEntries. Since ReadDirectoryEntries returns the resulting directory entries in its callback, a new callback object is created pointing to FileIoInstance::ListCallback. The pp::CompletionCallbackFactory template class is used to instantiate a new callback. Notice that the FileRef resource is passed as a parameter; this will add a reference count to the callback object, to keep the FileRef resource from being destroyed when the function finishes.

// Pass ref along to keep it alive.
ref.ReadDirectoryEntries(callback_factory_.NewCallbackWithOutput(
    &FileIoInstance::ListCallback, ref));

FileIoInstance::ListCallback then gets the results passed as a std::vector of pp::DirectoryEntry objects, and sends them to JavaScript:

void ListCallback(int32_t result,
                  const std::vector<pp::DirectoryEntry>& entries,
                  pp::FileRef /*unused_ref*/) {
  if (result != PP_OK) {
    ShowErrorMessage("List failed", result);
    return;
  }

  std::stringstream ss;
  ss << "LIST";
  for (size_t i = 0; i < entries.size(); ++i) {
    pp::Var name = entries[i].file_ref().GetName();
    if (name.is_string()) {
      ss << "|" << name.AsString();
    }
  }
  PostMessage(ss.str());
  ShowStatusMessage("List success");
}

Making a new directory

FileIoInstance::MakeDir is called when the Make Directory button is pressed. Like all other operations, it checks whether the FileSystem has been opened and creates a new FileRef:

if (!file_system_ready_) {
  ShowErrorMessage("File system is not open", PP_ERROR_FAILED);
  return;
}
pp::FileRef ref(file_system_, dir_name.c_str());

Then the pp::FileRef::MakeDirectory function is called.

int32_t result = ref.MakeDirectory(PP_MAKEDIRECTORYFLAG_NONE,
                                   pp::BlockUntilComplete());
if (result != PP_OK) {
  ShowErrorMessage("Make directory failed", result);
  return;
}
ShowStatusMessage("Make directory success");
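The do/while write loop in Save is a general pattern, not something Pepper-specific: keep an offset, advance by however many bytes each call accepts, and flush at the end. As a rough illustration, here is the same loop written against a standard C++ stream. This is a sketch, not part of the NaCl example; ChunkedWrite is an invented helper name, and returning -1 stands in for Pepper's negative error codes.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <ostream>
#include <sstream>
#include <string>

// Write `contents` to `out` in fixed-size chunks, mirroring the example's
// offset-tracking loop. Returns the total bytes written, or -1 on error.
long ChunkedWrite(std::ostream& out, const std::string& contents,
                  std::size_t chunk_size) {
    std::size_t offset = 0;
    while (offset < contents.length()) {
        std::size_t n = std::min(chunk_size, contents.length() - offset);
        out.write(contents.data() + offset, static_cast<std::streamsize>(n));
        if (!out) return -1;  // analogous to a negative Pepper error code
        offset += n;          // advance by the bytes the call consumed
    }
    out.flush();              // analogous to pp::FileIO::Flush
    return static_cast<long>(offset);
}
```

The only Pepper-specific twist in the real example is that Write may accept fewer bytes than requested, which is why the original loop advances by bytes_written rather than by a fixed chunk size.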
https://developer.chrome.com/native-client/devguide/coding/file-io
CC-MAIN-2015-40
refinedweb
2,396
55.03
Assembly and Project naming conventions

I've noticed a standard of naming assemblies after their contained root namespace... i.e. System.Data.dll contains the System.Data namespace objects. Do you think it is a good idea to also name your project after the root namespace? The only downside I see is that it won't be easy to change the namespace in the future, if necessary. The upside is that it's easy to know what goes with what.
Stress
Wednesday, August 06, 2003

It's pretty easy to rename projects, actually. :)
Brad Wilson (dotnetguy.techieswithcats.com)
Wednesday, August 06, 2003

I always use the namespace name for the assembly. Renaming projects is basically easy but IIRC the IDE doesn't actually rename the directories, so you have to manually do that (and manually correct the project files).
Chris Nahr
Thursday, August 07, 2003

Brad Adams has written some suggestions on this topic: See Assembly/DLL Naming Guidelines
Bernard Vander Beken
Thursday, August 07, 2003

Thanks for the replies. I also found some MSDN documentation that backs giving your assembly and project the same name as the root namespace. See "Carefully Considering Naming Conventions" at the bottom of this page:
Stress
Friday, August 08, 2003

Stress, the Brad A URL is likely to be the next version of the MSDN page, so you want to check that out as well. He's been posting a lot of changes the team wants to make to the MSDN documentation to his blog to get public feedback.
Brad Wilson (dotnetguy.techieswithcats.com)
Friday, August 08, 2003
http://discuss.fogcreek.com/dotnetquestions/default.asp?cmd=show&ixPost=2084&ixReplies=5
CC-MAIN-2017-30
refinedweb
269
63.8
UltimaX (Members) - Content count: 1327 - Community Reputation: 468 Neutral - Rank: Contributor

UltimaX replied to StoneMask's topic in For Beginners
Also, keep an eye on uninitialized variables. Debug mode will initialize variables that have not been explicitly initialized. Try compiling with the /RTCu flag set. See here for more information.

UltimaX replied to UltimaX's topic in GDNet Lounge
This post was for a legitimate reason, as this was for tracking money for our drug task force. The police department is 4 floors below me so I don't think I have to worry about that. I can assure you; the police are not tech savvy and would be of no help. I found the answer and appreciate the invaluable feedback.

UltimaX posted a topic in GDNet Lounge
Found the answer.

UltimaX replied to NaturalNines's topic in General and Gameplay Programming
NaturalNines, thank you for asking. At least you inquire about it. I wish more people would see this thread as many great points were raised.

UltimaX replied to superman3275's topic in For Beginners
[quote name='superman3275' timestamp='1349064095' post='4985635']
• Books about writing cleaner code.
[/quote]
Take a look at this one. [EDIT] I found the book free online and want to assume this is a legit site? (I just Googled the book name and it was the second link). If not I am deeply sorry and apologize. You should buy the book either way as it's a great read.

UltimaX replied to Liuqahs15's topic in General and Gameplay Programming
[quote name='Shaquil' timestamp='1347991178' post='4981316']
Today I was talking with a few professors at my university about doing some work on a couple new servers they've got set up. They asked me about my background in coding, and I mentioned that I'd written a couple games over the summer with C++/Allegro, and that I used Visual Studio 2010 as my IDE.
One of the professors told me that he preferred Dev-C++, which I found quite odd considering Dev-C++ at least seems very bare-bones (I wouldn't know, I've only written a couple [i]very simple[/i] projects in it). Anyway, I told him I thought he'd find VS2010 to be nice because the debugger is great for catching unhandled exceptions/null pointers, and he told me that debuggers are for lazy people. Now something tells me he's out of his mind, but I'm no expert, so I can't be sure. Do you think using a debugger is lazy? I don't mean overusing it, or relying on it too much. Just the simple act of using it at all. I think a debugger is a great tool and it's a good idea to use one. What do you think?
[/quote]
Tell him to use VI then...

UltimaX replied to c_olin's topic in Graphics and GPU Programming
Google: Texture Mipmap Quadtree. You may find this to be a good read.

UltimaX replied to SteveDeFacto's topic in GDNet Lounge
Oh boy...

UltimaX replied to eduardo79's topic in Graphics and GPU Programming
Why don't you put yourself out there and see what happens? I wouldn't fully commit to it until you have something reliable though, because if things don't pan out then oh well. The only problem with that is conflict of interest. This is a little off topic of what you were initially asking, but I just want to point this out because it's something I didn't think about when I first started contract work. But... One thing I have learned, and it's my own fault, is get everything in writing. I have been burnt by this once and I will never do it again. Even though we had a meeting prior to me working and everything was written on the quote, it wasn't detailed enough to stand up. Make sure it specifically states the monetary value and what is expected. For instance, say something like the following: The project XYZ will be completed with the discussed expectations x and y for $xxx,xxx. Anything beyond these services will be billed at an hourly rate of $xxx.xx.
Also, make sure you define what the expectations are too (which is where I messed up) and make sure they agree to them and sign off on them. If it's a large project make sure you set up milestones and when you reach those milestones you send a bill. This way you're getting paid while you are working on it. The company I did work for was never satisfied and wanted things changed multiple times and it never ended. They initially "think they know" what they want, but they never do and will want to change things. I worked on the program for over 2 years and at the end I gave up the source code for $50,000 less than what was agreed to just to be done with it... Don't make that mistake.

UltimaX replied to SteveDeFacto's topic in GDNet Comments, Suggestions, and Ideas
I have never seen a better community than what I have seen on this site. It has always been top notch. I became a GDNet+ member to help support the site; not for anything else. I do a little game programming for the fun of it and that's about it. If I was a professional game developer I would write in the journal and take advantage of the features a little more, but I'm not and I don't. It's the only site I try to visit every day. So thank you for a great site and everything you do. With that being said, I can't believe everything I am reading here. Sure it's a little aggravating to see the site down at times, but it's not like it's a stock exchange website where we lose money if it's down. If this site is that critical in your life then maybe you, yourself, need an evaluation. We are also dealing with a community and not a Fortune 500 company. What have you done to help? Maybe I'm just used to the way things used to be almost a decade ago; it was about coming to this site to mingle, get help, and provide help. 201X has become the age where everyone is spoiled and wants everything now and their way.
DX11: UltimaX replied to Nali's topic in Graphics and GPU Programming
[quote]
[b]Warning:[/b] Microsoft DirectDraw. DirectDraw enables you to directly manipulate display memory, the hardware blitter, hardware overlay support, and flipping surface support. The following tables list the members exposed by the Microsoft.DirectX.DirectDraw namespace.
[/quote]
DirectDraw has been deprecated for a while and there is no reason to use it unless you have a legacy product you are supporting. From the sounds of it this isn't the case. You should consider Direct2D or even Direct3D. Take a look at SlimDX as it supports both of these.
[EDIT] Should also include this as well:
[quote]
As of DirectX version 8.0, DirectDraw was merged into a new package called [b]DirectX Graphics[/b], which is really just Direct3D with a few DirectDraw API additions. DirectDraw can still be used by programmers, but they must use older DirectX interfaces (DirectX 7 and below). As of the release of the June 2010 DirectX SDK package, the DirectDraw header file and library are no longer included. DirectDraw has been deprecated since version 7 but is still included with DirectX, although updates are no longer made. Developers have been instructed to use textured quads in Direct3D for 2D graphics. Managed DirectX includes a managed wrapper for DirectDraw. DirectDraw is now replaced by Direct2D.
[/quote]

UltimaX replied to Michael Tanczos's topic in GDNet Comments, Suggestions, and Ideas
I just noticed the "GameDev.net Forums RSS" RSS feed at the top. This is great and I think I will be using this as it will be a good alternative to the recent content that was on the home page. Great job with the site everyone. Keep up the great work. On a side note, the edit button does not show up on a post unless you refresh the page. Not a huge deal, but I wasn't sure why I couldn't see it at first.
- Perhaps it was a bit blunt, but it was the truth and it was meant to help you. We all want to be the best at what we do, but we all also have to crawl before we can walk.
- I would start out learning a specific language at first. If C++ is too hard at first then try something simpler like C#. Learn that language well and then a little more.
- After that start learning about games and the theories/technology behind them.
- If you want DirectX as your SDK (for example) then focus on literature/tutorials that pertain to DirectX so you can follow along easier.
- You may also want to look at XNA if you try C#. XNA can be used to create games on Windows (not Windows 8), XBOX, and Windows Phone.
- Study, study, study.
Regardless of whether you are doing it as a hobby or a career, just remember to sit back, relax, and have fun.

- Hello Flump - There are a few things that caught my eye. This is my opinion and I am only saying this to try and help you. How are you creating "How 2 Do Programming" videos when you are asking for code to learn from? You should learn to create your own and do a series about how you did it, what challenges you had to overcome, etc. Even the first tutorial was very bad. It left me with more questions than answers. Why a blank project? What are the other project types related to Allegro and what benefits do they provide? Why do I want to delete the header file? What was it for? Why do I want to use "" instead of <> for including the header file? What does this tell the preprocessor? Preprocessor... What? Why are you including that specific header? What was the main for and why did I need it? What... etc. I think you get the point. I'm not trying to be harsh, but someone may pick up very bad habits that would hurt them more than help them. I appreciate what you are trying to do, but write a pong game on your own. Trust me... It will help you more than anyone else. [EDIT] Wow I'm slow at typing today.
UltimaX replied to JonConley's topic in Games Career Development
Does not matter and here is why... Employers rarely [i]fully[/i] read your resume. So it does not matter if it's 1 page or 4 pages. The important thing about a resume, and what others have been trying to say, is the quality of the content. Keep job-relevant material towards the beginning of every job entry. They are going to skim them and only read the beginning of each. Attached is a sample of a resume I sent out a few weeks ago when a specific person requested it. Notice I am currently a lead project administrator, but the resume (and job) is for a software engineer? I designed it around this and showed progression. It's not the best, but it will hopefully give you a few ideas. It must not be too bad either because I go Wednesday to negotiate. Best of luck
https://www.gamedev.net/profile/43813-ultimax/?tab=reputation
The first function called is the c_mount() function, which is responsible for accepting the name of a .tar file to operate on, and where to manifest its contents. There's a "mount helper" utility (the file m_main.c) that we'll look at later. The beginning part of c_mount() is the same as the RAM disk, so we'll just point out the tail-end section that's different:

    ...

    // 1) allocate an attributes structure
    if (!(cfs_attr = malloc (sizeof (*cfs_attr)))) {
        return ENOMEM;
    }

    // 2) initialize it
    iofunc_attr_init (&cfs_attr -> attr, S_IFDIR | 0555, NULL, NULL);
    cfs_attr_init (cfs_attr);
    cfs_attr -> attr.inode = (int) cfs_attr;
    cfs_a_mknod (cfs_attr, ".", S_IFDIR | 0555, NULL);
    cfs_a_mknod (cfs_attr, "..", S_IFDIR | 0555, NULL);

    // 3) load the tar file
    if (ret = analyze_tar_file (cfs_attr, spec_name)) {
        return (ret);
    }

    // 4) Attach the new pathname with the new value
    ret = resmgr_attach (dpp, &resmgr_attr, mnt_point, _FTYPE_ANY,
                         _RESMGR_FLAG_DIR, &connect_func, &io_func,
                         &cfs_attr -> attr);
    if (ret == -1) {
        free (cfs_attr);
        return (errno);
    }
    return (EOK);
}

The code walkthrough is as follows: Part of the grief mentioned in step 2 above actually turns out to have a useful side-effect. If you were to reinstate the RAM-disk portion of the extended attributes structure — even though it's not implemented in the current filesystem manager — you could implement a somewhat "modifiable" .tar filesystem. If you ever went to write to a file, the resource manager would copy the data from the .tar version of the file, and then use the fileblocks member rather than the vfile member for subsequent operations. This might be a fairly simple way of making a few small modifications to an existing tar file, without the need to uncompress and untar the file. You'd then need to re-tar the entire directory structure to include your new changes. Try it and see!
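The cookbook doesn't show analyze_tar_file() itself, but conceptually it walks the .tar image and records an attribute entry for each member. As a language-neutral sketch of that first step (Python's tarfile module stands in for the hand-rolled header parsing; the function name is my own, not QNX's):

```python
import io
import tarfile

def analyze_tar(data):
    """Walk a .tar image and return {member_name: size} for regular files,
    roughly the inventory analyze_tar_file() builds into its attribute tree."""
    entries = {}
    with tarfile.open(fileobj=io.BytesIO(data)) as tf:
        for member in tf:
            if member.isfile():
                entries[member.name] = member.size
    return entries
```

A real resource manager would also remember each member's offset within the .tar file, so that later read requests can be satisfied directly from the original archive, as the mounted filesystem described above does.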
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.neutrino.cookbook/topic/s2_tarfs_c_mount.html
Soma on the Key Themes for Visual Studio 2010 - Posted: Nov 10, 2008 at 5:00 PM - 69,344

if you have a method, Foo, you can add a comment to it by typing three slashes: ///. this will generate a comment like this (note the 3 slashes instead of 2):

/// <summary>
///
/// </summary>

when you reference the dll/exe in another project, your comments will show up as long as that generated xml file is in the same directory as the referenced file.

i've only really used netbeans for ruby development so i'm not that familiar with the java experience. you can invoke intellisense by hitting ctrl-space (or possibly alt or * space in the default key mapping) to show members that start with whatever you've written. check out the edit > intellisense menu. you can change those hotkeys in the tools > options > environment > keyboard options.

VS doesn't show you all the docs at once like netbeans; instead, the docs for the method are shown if the cursor is over the method, and each parameter if the cursor is on the parameter. the parameters aren't shown while the cursor is on the method body because the different overloads may have different arguments meaning different things. experiment with typing . after a string for instance, and try starting to type the parameters to a method and see what shows up.

if you want something like the navigator window, you can use the two drop-down menus at the top of the ide. the right one shows the types in the current file (c# can have multiple types in a file) and the left one shows the members in the selected type.

also, you can check out the object browser (view > object browser). from there you can browse all the namespaces referenced in your project and also view the comments for classes and methods.

i also have a java background from my school days and i've been a c# dev all my prof life.
you'll find that c# is more of a superset of java when it comes to functionality, with events and delegates (although events are actually delegates) and linq and attributes and various other things. you should note though that c# is not a native language; c# is compiled to bytecode and then jitted to native code at runtime. i've learned pretty much everything i know about c# and vs from there.

i'm not aware of any built-in generator stuff, but that doesn't mean there aren't any. go with sandcastle :9

Herbie that player42 was talking about seems to have VS (or at least msbuild) integration
http://channel9.msdn.com/Blogs/VisualStudio/Soma-on-the-Key-Themes-for-Visual-Studio-2010
Introduction

What is FTP? FTP stands for File Transfer Protocol; it is based on the client-server model architecture and is widely used. It has two channels: a command channel and a data channel. The command channel is used to control the communication and the data channel is used for the actual transmission of files. There's a wide range of things that you can do using FTP, like moving, downloading, and copying files. We will discuss that in a later section, along with the details on how to do it using Python.

Working with FTP in Python

Moving on, you would be happy to know that ftplib is a built-in library that comes already installed with Python; all you need to do is import it in your script and you can start using its functions. To import it, use the following command:

from ftplib import FTP

After that, we need to initiate a connection to the FTP server that we want to open a communication link with. To do so, create an ftp instance:

# Replace the example domain with your domain name
ftp = FTP('')

The above method uses the default port, i.e. port 21, for establishing a connection with the server. The next step is to provide your login credentials, i.e. your username and password, to get access to the files on the server. You can use the following method for that:

ftp.login('your_username', 'your_password')

The default values for username and password are 'anonymous' and 'anonymous@', respectively. If the connection is successful, you should receive a message similar to "230 Login Successful".

Now that we have established a connection to the server, we want to navigate to the directory where we wish to do operations, i.e. get or write a file in. For that, we change the "current working directory" using the following command:

ftp.cwd('/path/to/the/directory/')

Let's now discuss some basic examples of how to get a file from a directory or write a file to a directory.
The explanation of the code is provided in the comments alongside each line of code:

file_name = 'a-filename.txt'
# Open a local file to store the downloaded file
my_file = open(file_name, 'wb')
# Enter the filename to download
ftp.retrbinary('RETR ' + file_name, my_file.write, 1024)

In the retrbinary call above, the 1024 means that the file will be downloaded in blocks of 1024 bytes until the whole file is transferred.

There's one more thing that you need to do after downloading or uploading a file - close that file and also close the FTP connection that you had opened. You can do that for the above example with the following two lines of code:

# Terminate the FTP connection
ftp.quit()
# Close the local file you had opened for downloading/storing its content
my_file.close()

Let's now try to upload a file to the server. In addition to the commands below, you would also have to rewrite the commands we used above to open an FTP connection.

file_name = 'a-filename.txt'
ftp.storbinary('STOR ' + file_name, open(file_name, 'rb'))

In the above examples, 'rb' and 'wb' mean "read binary" and "write binary", respectively.

Additional FTP Functionalities

Now that we have discussed the implementation for the main features, let's see some additional functionality that ftplib provides us.

List Files and Directories

To see the files and folders in your current working directory, in list format, run the retrlines command:

ftp.retrlines('LIST')

Make a New Directory

In order to organize your files in a certain manner, you might feel the need to create a new directory on the server, which you can do using a single line of code:

ftp.mkd('/path/for/the/directory')

The path would be the location at which you wish the new directory to be located.

Delete a File from the Server

Removing a file on the server is fairly simple; you just have to give the name of the file as a parameter to the delete function.
The operation's success or failure will be conveyed by a response message.

ftp.delete('file_name_to_delete')

Check Current Path

To check your current path, simply run the following code:

ftp.pwd()

This command will return the absolute path to the current working directory.

Caution

It is important to note that FTP transmits data, including your login credentials, in plain text, so it is not suitable for transferring sensitive information; if you're transferring something like that then you should go for more secure options like SFTP (Secure FTP) or SSH (Secure Shell). These are the most commonly used protocols for handling sensitive data transmission.

Conclusion

In this post, we discussed what FTP is and how it works with the help of different examples. We also saw how to use Python's "ftplib" module to communicate with a remote server using FTP, and saw some other functionalities that the module offers. In the end, we also discussed some more secure alternatives to FTP, such as SFTP and SSH, which are used for the transfer of sensitive information. For more information on using FTP with Python, see the official ftplib docs or RFC 959.
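As a closing sketch, the download steps above can be wrapped in one small helper. The function name is my own, not part of ftplib; it only assumes the retrbinary call shown earlier:

```python
from ftplib import FTP

def download_file(ftp, remote_name, local_path, blocksize=1024):
    """Download remote_name over an already-logged-in FTP connection,
    writing it to local_path in binary blocks of the given size."""
    with open(local_path, 'wb') as local_file:
        ftp.retrbinary('RETR ' + remote_name, local_file.write, blocksize)
```

Because the helper only calls retrbinary on whatever object it is handed, it works with a real FTP instance or anything that exposes the same method, which also makes it easy to test without a live server.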
https://stackabuse.com/introduction-to-python-ftp/
I want to display all the movies by selected director. Routes and controller work fine. However, the filtered movies shown in the view are all the same. For example, I have four movies, of which two have the same director. What I want is to show these two different tuples in the view page, but the two tuples shown are the same. This is the controller code:

def find_movies_by_same_director
  @movie = Movie.find(params[:id])
  @director = @movie.director
  if (not @director.nil?) and (not @director.empty?)
    #@movies = Movie.find_all_by_director(@director) if (not @director.nil?) and (not @director.empty?);
    @movies = Movie.find_by_sql("SELECT * FROM movies i WHERE i.director == '#{@director}'")
    render :director
  else
    flash[:notice] = "'#{@movie.title}' has no director information"
    redirect_to root_path
  end
end

%tbody
  - @movies.each do |movie|
    %tr
      %th= @movie.title
      %th= @movie.rating
      %th= @movie.release_date

In your view code, you are using the instance variable @movie, which returns the result of your original search from line 2 of your controller code. To see each movie as you iterate through @movies, you need to use the local variable that you are declaring in the block.

%tbody
  - @movies.each do |movie|
    %tr
      %th= movie.title
      %th= movie.rating
      %th= movie.release_date

If that's confusing, you might change the name of the block variable entirely. This doesn't change the result but might be more readable.

%tbody
  - @movies.each do |matched_movie|
    %tr
      %th= matched_movie.title
      %th= matched_movie.rating
      %th= matched_movie.release_date
https://codedump.io/share/i6sCqTTaMp1W/1/rails-return-tuples-with-common-attributes
Explicit Call Stack Implementation

As part of an MSR internship, a prototype implementation of some of the explicit call stack passing transform has been done within GHC as a new pass. This page outlines an example of usage, sketches some implementation details should someone want to take it further, and details future work / problems that need addressing.

- ExplicitCallStack - read this to get an overview of the idea
- Blog post giving an overview
- Darcs repository with paper sources and patches
- Paper describing the work submitted to Haskell Symposium 2009

Examples

There are several examples in the testsuite patch, and also in the blog post above. The gist is: Imagine you have the following source program:

module Main where

import Control.Exception

main :: IO ()
main = do
  baz `seq` return ()

bar str = error str

baz = bar "crash!\n"

Compile and run this and the program will predictably fail with an error message.

$ ~/ghc/ghc/ghc/stage2-inplace/ghc.exe --make -o Eek Eek.hs
$ ./Eek.exe
Eek.exe: crash!

To build up a call stack, the program needs a bit of rewriting to the following form:

{-# OPTIONS_GHC -fexplicit-call-stack #-}
module Main where

import GHC.ExplicitCallStack.Annotation
import GHC.ExplicitCallStack.Stack
import Control.Exception

main :: IO ()
main = do
  baz `seq` return ()

{-# ANN bar Debug #-}
bar str = throwStack (\s -> ErrorCall $ str ++ show s)

{-# ANN baz Debug #-}
baz = bar "crash!\n"

The changes needed are:

- indicating we want to run the explicit-call-stack pass by telling GHC, either via an options pragma or a compile time flag.
- importing GHC.ExplicitCallStack.Annotation to bring the annotation Debug into scope.
- importing GHC.ExplicitCallStack.Stack to bring throwStack into scope
- indicating that bar and baz should be rewritten to explicitly pass the stack, by decorating them with a Debug annotation
- replacing error with the primitive throwStack, which can access the Stack and throw an exception

Compile and run now, and you get:

$ ./Eek.exe
Eek.exe: crash!
 in bar, Eek.hs:13,11
 in baz, Eek.hs:16,7
 in main, Eek.hs:10,3

Alternatively, the user can indicate instead that all functions should be rewritten to pass a stack (-fexplicit-call-stack-all), and thus not require the annotations:

{-# OPTIONS_GHC -fexplicit-call-stack-all #-}
module Main where

import GHC.ExplicitCallStack.Stack
import Control.Exception

main :: IO ()
main = do
  baz `seq` return ()

bar str = throwStack (\s -> ErrorCall $ str ++ show s)

baz = bar "crash!\n"

Which gives a slightly different output of:

$ ./Eek.exe
Eek.exe: crash!
 in bar, Eek.hs:11,11
 in baz, Eek.hs:13,7
 in main, Eek.hs:9,3
 in main, Eek.hs:8,1

as main has also been rewritten to accept a stack...

Implementation

Flags

Several flags have been added in the patch:

- -fexplicit-call-stack, which actually enables or disables the core pass. It also implies -fno-method-sharing, which makes the core look a little more like you would intuitively expect and thus easier to write the transform for.
- -fexplicit-call-stack-all, which tells the core pass to ignore Debug annotations and just rewrite every top level binding. It also implies -fexplicit-call-stack to enable the core pass, and -fno-method-sharing as above.
- -ddump-ecs, which dumps the core after the explicit call stack phase has run
- -fds-simple, which causes the desugarer to massively simplify how it translates mutually recursive functions with typeclass/type parameter arguments.

Annotations

The pass uses GHC Annotations for two purposes. The first is to allow the user to guide which functions get rewritten for debugging purposes.
The second is to record the link between the original function and its debug-rewritten form. The annotations live in the module GHC.ExplicitCallStack.Annotation, which has been added to the package template-haskell. (There is a dependency on TH Names in the annotations, hence this package.) This module is tiny:

{-# LANGUAGE DeriveDataTypeable #-}
module GHC.ExplicitCallStack.Annotation where

import Language.Haskell.TH
import Data.Data

data Debug = Debug
  deriving (Data, Typeable)

data Debugged = Debugged { fromDebugged :: Name }
  deriving (Data, Typeable)

We need Data and Typeable so the Debug and Debugged constructors can be serialized using the default annotation serialization scheme.

The Debug annotation we have already seen; it is used by the core pass to establish which functions should have a stack accepting form generated.

The Debugged annotation is placed onto the core during the explicit call stack pass. The annotation is associated with the original function, and the fromDebugged field is given the template-haskell Name of the version of the function that accepts and threads through the call stack.

Source locations

In order to give helpful stack traces, the source locations of program variables had to be persisted into GHC Core somehow. The chosen route for this was to add a new type of Core Note:

data Note = ...
          | CoreSrcLoc SrcSpan

These are then sprinkled into the core by the desugarer (see dsExpr). Basically every variable reference becomes wrapped in a Note (CoreSrcLoc loc) _. A utility pass was also written and added to simplCore/ called StripSrcLocNote that removes all these notes again. This is used in e.g. desugaring RULES to make sure the patterns don't get clobbered. However this is a bit messy. It could be better to add an explicit phase in the pipeline which goes

HSSyn
  --desugarer--> Expr SrcLocAnnotatedVar
  --passes that safely work with src locs--> Expr SrcLocAnnotatedVar
  --strip anns--> CoreExpr
  --rest of pipeline--> ...
Alternatively, HPC works fine on HSSyn; maybe that'd be a better target for this work.

The transform

The transform itself lives in callStackCore/SimpleDebugTranslate. The entry point is stackPassTransform. Because of the need for annotations (and hence transitively template haskell), most of the module is conditionally compiled with an #ifdef GHCI.

The pass transforms a module by the following:

- Building up a mapping of variable names in the current module that are to be debugged (i.e. to have a stack-accepting variant generated), and allocating the names for these debugged variants. (buildLocalDebuggedMap)
- Actually doing the rewriting of all bindings in the module (processBind):
  - Those bindings that are to have a stack-accepting variant generated have that generated, and their original expression is rewritten to forward to the new variant.
  - All other bindings are rewritten so that any references to any functions with Debugged annotations instead call the fromDebugged field.
- Adding all of the new Debugged annotations into the current pipeline and into the ModGuts for persisting into the interface file. (mkAndSetAnnotations)

The actual rewriting performs a fairly dumb transform:

f = e

will be rewritten to

f = f' (pushStack #currentsrcloc# emptyStack)
f' = \stack -> [[e]]_stack

where [[e]]_stack replaces any variable x with a Debugged x' annotation with x' (pushStack #currentsrcloc# stack). The rewriting also recognises the special function throwStack and replaces it with roughly

\f -> throw (f stack)

Stacks

The library for the actual runtime call stacks lives in GHC.ExplicitCallStack.Stack in package base. The stack has been designed to abstract away recursion (since otherwise that would kill performance/memory usage, given that iteration has to use recursion). The high level view of the stack design is this. Imagine you have a real call stack, e.g.:

error in top!
 in a
 in b
 in c
 in c
 in c
 in d
 in b

Our abstracted call stack would be built by keeping the first occurrence of any element (from the top of the stack), and replacing any later occurrences by a sentinel value (hereafter ...). These sentinel values collapse together if they would ever be adjacent. Applying that rule to the stack above (the original page stepped through every intermediate snapshot, both walking down the stack and pushing elements one by one; only the end result is reproduced here) gives:

 in top!
 in a
 in b
 in c
 ...
 in d
 ...

The stack data structure represents the sentinel elements implicitly by having two constructors to represent an element on top of another stack: Then, and RecursionThen (for the sentinel). Loosely, our final stack in the example above would be:

top `Then` a `Then` b `Then` c `RecursionThen` d `RecursionThen` Empty

In order to prevent a large number of stacks being allocated and held live, each stack holds an MVar to a Hashtable of stack elements to new stacks. This table is used to memoise push calls.

Open Problems

There are several open problems and incomplete corners in the implementation. Some are mentioned here; they are elaborated on more fully in the paper, linked above.

Bootstrapping

It would be useful if many prelude functions (c.f. error and a few of its cousins) could have Debugged variants available, so e.g. fromJust or head errors could be detected more easily. However, in order for the pass to work, it requires at least a stage 2 compiler. This is because the annotations system uses template haskell in order to evaluate annotation expressions.
This means that the libraries would need rebuilding with the stage 2 compiler, which may make things a bit messy. It may also induce some horrible dependencies between base and template-haskell (Prelude needing to import GHC.ExplicitCallStack.Annotations, etc.).

Records

Partial record selectors are currently written with a specialised error function. The pass could try and special-case to detect these, or a solving of the bootstrapping problem above would allow a debugged version of the specialised error function to be generated.

Type Classes

Type classes provide many interesting problems; some theoretical, some pragmatic - I have a section outlining some approaches in the paper.

Source locations in AbsBinds

AbsBinds currently don't store source locations for the binding group accurately. This shows up in a couple of the tests in the testsuite in the patch. I haven't had a chance to investigate this properly, but it should be relatively straightforward to fix.

Late Debugging

There is currently a big open question about how or whether it would be possible to create debugged versions of functions without recompiling / changing that module. Maybe I have a library and I want to pass a call-stack down inside there.

Higher order functions

Also discussed in the paper
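Going back to the Stacks section: the collapsing rule there (keep the first occurrence of each frame, replace repeats with a sentinel, merge adjacent sentinels) is easy to state imperatively. Here is a small sketch of just that rule, in Python rather than Haskell purely for illustration; none of these names exist in GHC.ExplicitCallStack.Stack:

```python
SENTINEL = "..."

def abstract_stack(frames):
    """Collapse a concrete call stack (top frame first): keep the first
    occurrence of each frame, replace later repeats with a sentinel,
    and merge adjacent sentinels into one."""
    seen = set()
    out = []
    for frame in frames:
        if frame in seen:
            if not out or out[-1] != SENTINEL:
                out.append(SENTINEL)
        else:
            seen.add(frame)
            out.append(frame)
    return out
```

Running it on the example stack top!, a, b, c, c, c, d, b yields top!, a, b, c, ..., d, ..., matching the Then/RecursionThen structure described in the Stacks section, and it guarantees the abstracted stack never grows beyond twice the number of distinct frames.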
https://gitlab.haskell.org/ghc/ghc/-/wikis/explicit-call-stack/core-pass-implementation
Dancer::Request - interface for accessing incoming requests

This class implements a common interface for accessing incoming requests in a Dancer application. In a route handler, the current request object can be accessed by the request method, as in the following example:

get '/foo' => sub {
    request->params; # request params, parsed as a hash ref
    request->body;   # returns the request body, unparsed
    request->path;   # the path requested by the client
    # ...
};

A route handler should not read the environment by itself, but should instead use the current request object.

uri_base

Same thing as base above, except it removes the last trailing slash in the path if it is the only path. This means that if your base is, uri_base will return (notice no trailing slash). This is considered very useful when using templates to do the following thing:

<link rel="stylesheet" href="<% request.uri_base %>/css/style.css" />

uri_for

Constructs a URI from the base and the passed path. If params (hashref) is supplied, these are added to the query string of the uri. If the base is, request->uri_for('/bar', { baz => 'baz' }) would return. Returns a URI object (which stringifies to the URL, as you'd expect).

params

Called in scalar context, returns a hashref of params, either from the specified source (see below for more info on that) or merging all sources. So, you can use, for instance:

my $foo = params->{foo}

If called in list context, returns a list of key => value pairs, so you could use:

my %allparams = params;

If the incoming form data contains multiple values for the same key, they will be returned as an arrayref. If a required source isn't specified, a mixed hashref (or list of key value pairs, in list context) will be returned; this will contain params from all sources (route, query, body). In practical terms, this means that if the param foo is passed both on the querystring and in a POST body, you can only access one of them.
If you want to see only params from a given source, you can say so by passing the $source param to params():

my %querystring_params = params('query');
my %route_params = params('route');
my %post_params = params('body');

If source equals route, then only params parsed from the route pattern are returned. If source equals query, then only params parsed from the query string are returned. If source equals body, then only params sent in the request body will be returned. If another value is given for $source, then an exception is triggered.

upload

Context-aware accessor for uploads. It's a wrapper around an access to the hash table provided by uploads(). It looks at the calling context and returns a corresponding value. If you have many file uploads under the same name, and call upload('name') in an array context, the accessor will unroll the ARRAY ref for you:

my @uploads = request->upload('many_uploads'); # OK

Whereas with a manual access to the hash table, you'll end up with one element in @uploads, being the ARRAY.
http://search.cpan.org/~yanick/Dancer-1.3121/lib/Dancer/Request.pm
- Hash tables: I use klib. It's awesome and easy to use. I have this for critical algorithms like unique and factorize (unique + integer label assignment)
- O(n) sort, known in pandas as groupsort, on integers with known range. This is a variant of counting sort; if you have N integers with known range from 0 to K - 1, this can be sorted in O(N) time. Combining this tool with factorize (hash table-based), you can categorize and sort a large data set in linear time. Failure to understand these two algorithms will force you to pay O(N log N), dominating the runtime of your algorithm.
- Vectorized data movement and subsetting routines: take, put, putmask, replace, etc.

Let me give you a prime example from a commit yesterday of me applying these ideas to great effect. Suppose I had a time series (or DataFrame containing time series) that I want to group by year, month, and day:

In [6]: rng = date_range('1/1/2000', periods=20, freq='4h')

In [7]: ts = Series(np.random.randn(len(rng)), index=rng)

In [8]: ts
Out[8]:
2000-01-01 00:00:00   -0.891761
2000-01-01 04:00:00    0.204853
2000-01-01 08:00:00    0.690581
2000-01-01 12:00:00    0.454010
2000-01-01 16:00:00   -0.123102
2000-01-01 20:00:00    0.300111
2000-01-02 00:00:00   -1.052215
2000-01-02 04:00:00    0.094484
2000-01-02 08:00:00    0.318417
2000-01-02 12:00:00    0.779984
2000-01-02 16:00:00   -1.514042
2000-01-02 20:00:00    2.550011
2000-01-03 00:00:00    0.983423
2000-01-03 04:00:00   -0.710861
2000-01-03 08:00:00   -1.350554
2000-01-03 12:00:00   -0.464388
2000-01-03 16:00:00    0.817372
2000-01-03 20:00:00    1.057514
2000-01-04 00:00:00    0.743033
2000-01-04 04:00:00    0.925849
Freq: 4H

In [9]: by = lambda x: lambda y: getattr(y, x)

In [10]: ts.groupby([by('year'), by('month'), by('day')]).mean()
Out[10]:
2000  1  1    0.105782
         2    0.196106
         3    0.055418
         4    0.834441

- Factorize the group keys, computing integer label arrays (O(N) per key)
- Compute a "cartesian product number" or group index for each observation (since you could theoretically have K_1 * ... * K_p groups
observed, where K_i is the number of unique values in key i). This is again O(n) work.
- If the maximum number of groups is large, "compress" the group index by using the factorize algorithm on it again. Imagine you have 1000 uniques per key and 3 keys; most likely you do not actually observe 1 billion key combinations but rather some much smaller number. O(n) work again
- For simple aggregations, like mean, go ahead and aggregate the data in one linear sweep without moving anything around.
- Otherwise, for arbitrary user-defined aggregations, we have two options for chopping up the data set: sorting it (O(N) using groupsort!) or creating index arrays indicating which observations belong to each group. The latter would be preferable in many cases to sorting a large data set (reordering the data, even though linear time, can be very costly).

I worked on speeding up the latter part of this last bullet point yesterday. The resulting code looked like this:

def _get_indices_dict(label_list, keys):
    # Accepts factorized labels and unique key values
    shape = [len(x) for x in keys]

    # Compute group index
    group_index = get_group_index(label_list, shape)

    sorter, _ = lib.groupsort_indexer(com._ensure_int64(group_index),
                                      np.prod(shape))

    # Reorder key labels and group index
    sorted_labels = [lab.take(sorter) for lab in label_list]
    group_index = group_index.take(sorter)

    # Compute dict of {group tuple -> location NumPy array for group}
    index = np.arange(len(group_index)).take(sorter)
    return lib.indices_fast(index, group_index, keys, sorted_labels)

The details of lib.indices_fast aren't that interesting; it chops up np.arange(len(group_index)).take(sorter), the sorted indirect indices, to produce the index dictionary.
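For concreteness, the "cartesian product number" from the steps above can be sketched as a mixed-radix combination of the per-key labels. This is my own simplified version, not the pandas source; the real get_group_index also guards against integer overflow:

```python
import numpy as np

def get_group_index(label_list, shape):
    """Combine per-key integer label arrays into one group index, so each
    distinct combination of key values maps to a distinct integer in
    [0, prod(shape)). Works like interpreting the labels as digits of a
    mixed-radix number."""
    group_index = np.zeros(len(label_list[0]), dtype=np.int64)
    stride = 1
    for labels, size in zip(reversed(label_list), reversed(shape)):
        group_index += np.asarray(labels, dtype=np.int64) * stride
        stride *= size
    return group_index
```

With two keys of cardinality 2 and 3, labels (i, j) map to i * 3 + j, which is exactly the flattening that makes the later counting sort over a known range possible.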
Running %lprun to get a line profiling on a large-ish data set:

In [11]: rng = date_range('1/1/2000', '12/31/2005', freq='H')

In [12]: year, month, day = rng.year, rng.month, rng.day

In [13]: ts = Series(np.random.randn(len(rng)), index=rng)

In [14]: lprun -f gp._get_indices_dict for i in xrange(100): ts.groupby([year, month, day]).indices

Timer unit: 1e-06 s

File: pandas/core/groupby.py
Function: _get_indices_dict at line 1975
Total time: 0.628506 s

Line #  Hits    Time     Per Hit  % Time  Line Contents
==============================================================
1975                                      def _get_indices_dict(label_list, keys):
1976     400       695      1.7     0.1       shape = [len(x) for x in keys]
1977     100    114388   1143.9    18.2       group_index = get_group_index(label_list, shape)
1978
1979     100       320      3.2     0.1       sorter, _ = lib.groupsort_indexer(com._ensure_int64(group_index),
1980     100     62007    620.1     9.9                                         np.prod(shape))
1981
1982     400     53530    133.8     8.5       sorted_labels = [lab.take(sorter) for lab in label_list]
1983     100     19516    195.2     3.1       group_index = group_index.take(sorter)
1984     100     20189    201.9     3.2       index = np.arange(len(group_index)).take(sorter)
1985
1986     100    357861   3578.6    56.9       return lib.indices_fast(index, group_index, keys, sorted_labels)
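The lib.groupsort_indexer called above is Cython, but the counting-sort idea behind it fits in a few lines. Here is my own simplified NumPy sketch, not the pandas source:

```python
import numpy as np

def groupsort_indexer(labels, ngroups):
    """Stable O(N + K) counting sort of integer labels in [0, ngroups).
    Returns (indexer, counts): labels.take(indexer) is sorted by group,
    and counts[g] is the size of group g."""
    counts = np.bincount(labels, minlength=ngroups)
    # Starting write position for each group (exclusive prefix sum).
    pos = np.zeros(ngroups, dtype=np.intp)
    np.cumsum(counts[:-1], out=pos[1:])
    indexer = np.empty(len(labels), dtype=np.intp)
    for i, lab in enumerate(labels):
        indexer[pos[lab]] = i
        pos[lab] += 1
    return indexer, counts
```

Because the sort is stable, ties keep their original order, which is what lets the sorted indirect indices be chopped back into per-group index arrays afterwards.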
https://wesmckinney.com/blog/mastering-high-performance-data-algorithms-i-group-by/
Sphero client

A python client for sphero

EXAMPLES:

from kulka import Kulka

with Kulka('01:02:03:04:05:06') as kulka:
    kulka.set_inactivity_timeout(3600)
    kulka.set_rgb(0xFF, 0, 0)

from kulka import Kulka
from random import randint

with Kulka('01:02:03:04:05:06') as kulka:
    kulka.set_inactivity_timeout(3600)
    kulka.roll(randint(0, 255), randint(0, 359))

INSTALLATION:

pip install kulka

LICENSE:

Kulka is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

You should have received a copy of the GNU General Public License along with Kulka; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
https://pypi.org/project/Kulka/
This is the fourteenth in a series of posts on how to build a LINQ IQueryable provider. If you have not read the previous posts you might request a week's vacation, sit back, and relax with a mochacino in one hand and a netbook in the other, or if you've got better things to do with your time, print them all out and stuff them under your pillow. Who knows, it might work better.

Complete list of posts in the Building an IQueryable Provider series

Okay, enough with all the post-is-late guilt! It's done now, so breathe a sigh of relief and get on with the reading.

What's inside:

More Mapping - Finally a real mapping system, with attributes and XML.
More Providers - MS Access and MS SQL Server Compact Edition
More POCO - Constructors, Enum and Interfaces.
More More More

The full source code can be found at:

More Mapping

- Attribute Mapping - put attributes on the properties in the class that declares your tables. This differs from LINQ to SQL mapping attributes, which are placed on the entities themselves, and is more like the proposed LINQ to Entity mapping attributes. However, I've not actually gone out of my way to make them the same. The advantages to this approach are 1) keeping the mapping separate from the entity objects (more POCO), and 2) being able to supply different mapping for the same entity type based on the table the entities are accessed from. Mapping attributes look like this:

[Table]
[Column(Member = "CustomerId", IsPrimaryKey = true)]
[Column(Member = "ContactName")]
[Column(Member = "CompanyName")]
[Column(Member = "Phone")]
[Column(Member = "City", DbType="NVARCHAR(20)")]
[Column(Member = "Country")]
[Association(Member = "Orders", KeyMembers = "CustomerID", RelatedEntityID = "Orders", RelatedKeyMembers = "CustomerID")]
public IUpdatableTable<Customer> Customers

You specify the Table, Column and Association attributes as necessary. The 'Member' refers to the member in the entity type.
If this is the same name as the database's column name, you don't need to repeat it by specifying 'Name' too. You can specify nested mapping information by using a dot in the Member name. This allows you to have what some call value types, but to keep from clashing with .Net terminology I don't. For example, if you've defined an Address type that you want to use in a nested relationship (actually embedded in the same table row) you can do that like this:

    [Table]
    [Column(Member = "EmployeeID", IsPrimaryKey = true)]
    [Column(Member = "LastName")]
    [Column(Member = "FirstName")]
    [Column(Member = "Title")]
    [Column(Member = "Address.Street", Name = "Address")]
    [Column(Member = "Address.City")]
    [Column(Member = "Address.Region")]
    [Column(Member = "Address.PostalCode")]
    public IUpdatable<Employee> Employees

- Xml Mapping -- This is the same as attribute-based mapping, but the data is read from an XML file. Xml mapping looks like this:

    <?xml version="1.0" encoding="utf-8" ?>
    <map>
      <Entity Id="Customers">
        <Table Name="Customers" />
        <Column Member="CustomerId" IsPrimaryKey="true" />
        <Column Member="ContactName" />
        <Column Member="CompanyName" />
        <Column Member="Phone" />
        <Column Member="City" DbType="NVARCHAR(20)" />
        <Column Member="Country" />
        <Association Member="Orders" KeyMembers="CustomerID" RelatedEntityID="Orders" RelatedKeyMembers="CustomerID" />
      </Entity>
      <Entity Id="Orders">
        <Column Member="OrderID" IsPrimaryKey="true" IsGenerated="true" />
        <Column Member="CustomerID" />
        <Column Member="OrderDate" />
        <Association Member="Customer" KeyMembers="CustomerID" RelatedEntityID="Customers" RelatedKeyMembers="CustomerID" />
        <Association Member="Details" KeyMembers="OrderID" RelatedEntityID="OrderDetails" RelatedKeyMembers="OrderID" />
      </Entity>
      <Entity Id="OrderDetails">
        <Table Name="Order Details" />
        <Column Member="OrderID" IsPrimaryKey="true" />
        <Column Member="ProductID" IsPrimaryKey="true" />
        <Association Member="Product" KeyMembers="ProductID" RelatedEntityID="Products" RelatedKeyMembers="ProductID" />
      </Entity>
    </map>

You use it like this:

    XmlMapping mapping = XmlMapping.FromXml(TSqlLanguage.Default, File.ReadAllText(@"northwind.xml"));
    SqlQueryProvider provider = new SqlQueryProvider(connection, mapping);

- Multi-table mapping -- Map multiple tables into a single entity. If you've got entity data spread out over multiple tables with a 1:1 association between them, you can now specify the additional tables in mapping using the ExtensionTable attribute or the equivalent XML element. Here's what a multi-table mapping looks like:

    [Table(Name = "TestTable1", Alias = "TT1")]
    [ExtensionTable(Name = "TestTable2", Alias = "TT2", KeyColumns = "ID", RelatedAlias = "TT1", RelatedKeyColumns = "ID")]
    [ExtensionTable(Name = "TestTable3", Alias = "TT3", KeyColumns = "ID", RelatedAlias = "TT1", RelatedKeyColumns = "ID")]
    [Column(Member = "ID", Alias = "TT1", IsPrimaryKey = true, IsGenerated = true)]
    [Column(Member = "Value1", Alias = "TT1")]
    [Column(Member = "Value2", Alias = "TT2")]
    [Column(Member = "Value3", Alias = "TT3")]
    public IUpdatable<MultiTableEntity> MultiTableEntities

Extension tables are specified similarly to how Associations are specified, except you are never referring to members, only column names. You use the 'Alias' value to connect column & association mappings with columns from particular tables. All queries for this multi-table entity treat the 'Table' as the primary table queried; all other tables are queried with left-outer joins. All keys for associations must be from the same alias.

Can I mix nested mapping with multi-table mapping? I have not tried it, but in theory it should work. It should not matter which table your nested entity gets its data from, so in effect you can have a composition relationship between one table and another as long as it is 1:1.

What about many-to-many? Not yet. Making the system query a many-to-many relationship is relatively easy.
I haven't yet figured out the right semantics for inserts & updates. Right now, all inserts, updates and deletes are explicit via calls to the IUpdatable with real, live entities.

More Providers

- MS Access -- This new query provider works with both Access 2000-2003 and Access 2007 data files. I don't know what the true differences are between the Jet and the Ace engines; the query language appears to be identical (as per my limited tests so far), yet the filename extension changed in Access 2007 to 'accdb' instead of 'mdb', and the Northwind sample database plumped up an extra 66% in disk size without any additional data. In order to make this work I've added an AccessLanguage object that is necessary to get the correct semantics for MS Access queries, and an AccessFormatter object that handles generating the correct command text. In order to salvage as much as I could from the TSqlFormatter, I moved most of this code into a common SqlFormatter base class, and now the TSQL and Access formatters only supply the deviations from the standard syntax. (Of course, 'standard' is currently whatever I deem it to be, so don't go getting some actual online specification and prove me wrong.) Access only allows one command at a time, so that added an extra wrinkle, but in the end there is now support in the system for providers that can only do one command at a time. This means there are multiple round-trips to the engine for things like inserting a record and getting back the computed keys. Luckily, the Access engine is in-proc, so this is not really a burden. A new property on QueryLanguage, 'AllowMultipleCommands', determines how the execution plan is generated and whether multiple commands can be lumped together into a single ADO command. The good news is that the Access engine passes almost all the Northwind tests; some are not possible (mostly ones testing translation of framework methods that have no apparent equivalent in the Access expression engine).
There were a lot of hairy, strange & subtle differences in syntax between Access and TSQL. Most were handled by having different format rules, but some required new expression visitors to change the query tree; for example, Access has no explicit cross joins! This caused me to write a visitor that attempts to get rid of cross joins (often injected by my visitor that tries to get rid of cross-apply joins), which is now generally useful to everyone. If that didn't do it, another visitor would attempt to isolate the cross joins from any other joins and push them into sub-queries, where Access lets me use the old-style comma list, which is truly a cross join; it just can't be mixed with other kinds of joins in the same from clause.

- SQL Compact -- Yes, even more SQL Server. Though to be truthful, SQL Server Compact Edition (aka SQL CE, aka SQL Compact, aka Skweelzy) is not really SQL Server; it is an entirely different product that handles a subset of TSQL syntax, and is not a server at all, since it runs in-proc just like MS Access.

- What about MySQL or Oracle? One day. The fact is that MS SQL and MS Access are easy for me to get to; they are already on my box. Getting something else up and running would take actual effort, and the MS secret database police might come get me. Meanwhile, if someone out there wants to put together a provider implementation, I'll add it into the drop.

- Where did the SqlQueryProvider go? I moved it. With the addition of the new providers it became apparent that I'd have to start factoring out all this ADO provider-specific nonsense, otherwise all uses of the toolkit would have direct dependencies on way more than necessary. So I made separate projects, each building its own library. I may end up separating all the core 'data' stuff out into its own project too.
The solution builds these libraries now:

- IQToolkit.dll
- IQToolkit.Data.Access.dll
- IQToolkit.Data.SqlClient.dll
- IQToolkit.Data.SqlServerCe.dll

More POCO

- Constructors -- Use entities that don't have default constructors. It is now possible to have entities that require invocation of a constructor with parameters. The binding process will figure out how to call your constructor, and the client-side code will call it for you, as long as the constructor parameter names match property names. You can even have fully read-only entities if all data members are accounted for in the constructor.

- Enums -- They actually sort of work now. You can have a member in your entity typed as some enum, and you get automatic mapping between that enum and a numeric type in the database.

- Interfaces and abstract base classes -- You can now declare your IQueryables as IQueryable<SomeInterface> or IQueryable<SomeAbstractClass> and have the provider create instances of a compatible type under the covers, automatically translating all your references to interface or abstract class members to the appropriate mapped members. You can have multiple entities share a common interface or base class and get different mapping for each. You can write code using generics, constrain generic parameters based on an interface, and write queries that will get correct translation at runtime. (Note: variation of mapping likely won't work with compiled queries, since the translation is fixed on the first execution.)

Less Policy

There's not a whole lot of policy being used right now, and the policy object's dependence on the mapping object was nowhere near as deep as the mapping object's dependence on the language. So policy is now independent of mapping, which means you can construct providers without specifying policy and/or reuse mapping with different policies. Now if I could only make it simpler to specify/construct mapping without needing to know the language. Back to the drawing board.
More Insanity

I apologize for the churn. The namespace changed, so now all heck is going to break loose. Gone is the simple 'IQ' namespace, and in its place is the 'IQToolkit' namespace. I really did like the 'IQ' name; it was short, classy, and made you feel intelligent just by looking at it. Yet it was hard to guess at if you did not already know what it was. I chose to change the namespace name to match the product name and the DLL name. You add a reference to IQToolkit.dll and you import/use the IQToolkit namespace. No fuss, no muss. Except for all those files you'll have to edit now. But hey, this is pre-pre-pre-pre beta stuff. Some people may think they are something special by snarkily keeping all their products in beta. They've got a lot to learn.

I hope this toolkit is becoming useful to many. I realize there have been a variety of requests for new things in the toolkit that I just have not gotten time to put in yet. So you can expect plenty more in the future. So enough with reading. It's time to code!

Comments

- From a goal of starting off with a simple how-to, you've gone a long way and really produced something useful. Thanks for your hard work on this project!

- Great posts – I'm actually working with Mono & MySQL at the moment, so I really miss LINQ; next to generics it's one of the best features to be added to the language. Keep up the good work. Cheers, Lee

- When compiled queries are implemented I will think of using IQ instead of L2S )

- Omari, the IQToolkit has had compiled queries for many releases now.

- Thank you for submitting this cool story – Trackback from DotNetShoutout

- @Lee: Have you checked out db4o? IIRC, Mono can use this just fine.

- I was wondering whether it's possible to cast a column to a different type in the query itself.
Our tables have timestamp columns, and I'd like to cast them to bigint in the queries and then represent them using Int64 types in the entity classes. Is that possible, or is there a better approach? Thanks, – Erik

- Erik, SQL Server treats timestamp columns as binary, and they come through the ADO provider as byte[], so there is no existing conversion logic to turn this into an integer. It may be possible to do with some special-case logic in the query provider.

- I was hoping to coerce the system to express the timestamp column in SQL like this:

    SELECT [myCol1], CAST([timestampCol] AS bigint) as [timestampCol] …

SQL 2008 will convert a timestamp to a bigint (not sure about previous versions). I was thinking a new mapping attribute field to cast values on the server would be useful in cases where IConvertible can't help:

    …
    [Column(CastAs = "BigInt")]
    public long TimeStampCol { get; set; }
    …

It's not a big deal; I can poke an explicit conversion function into the entity type hierarchy. BTW, thanks so much for working on this project; it's an outstanding contribution.

- Is there any support for delayed value loading? BTW, been following your work and it is definitely outstanding. Thanks

- KShaban, there is currently only support for delayed 1:n relationship loading.

- Small typo: in IQToolkit.ExpressionWriter, line 294, shouldn't DataTime be DateTime? Unless it's some secret undocumented format you guys at Microsoft don't want us to know about. I love your work!

- Following the series nearly from the beginning (from part IV or V). I started a while ago implementing an Oracle provider. At the time, I had to refactor the last version of the toolkit a little, for it didn't provide me with the correct extensibility points to plug my provider in… So it is really great news that you provide an extensible version now. PS: I have very little time to devote to this "hobbyist" project, so I don't expect I can send feedback about this for a long time.
Maybe someone else will provide us with an Oracle provider sooner. Nevertheless, once again, I love your series and spend nights clicking (go to definition, find references, … the best ways to understand something) inside your code like a miner extracting gems. Glad to know I can safely get rid of most of my tricky derived classes and in-place modifications around the little old bugs. Keep it going, man, 'cos it's getting better and better!

- Hello mattwar, thanks for the great articles! I have one question about lazy-loading associations. In one of the previous comments you wrote that "There is currently only support for delayed 1:n relationship loading." As I understand it, this is implemented using the Association attribute. I've looked through the tests and didn't find any test case for lazy loading, so I tried to add a method to test this:

    public void TestLazyLoading()
    {
        var result = db.Customers.Where(x => x.CustomerID == "ALFKI");
        foreach (var customer in result)
        {
            foreach (var order in customer.Orders)
            {
                Console.WriteLine(order.Details);
            }
        }
    }

But this method fails. The Orders property of the customer is null (the DB contains orders for the selected customer). I've tried to create an instance of the List for the Orders property in the constructor, but in that case the collection is not loaded either. I've added only this function, and I've changed the path to the attached database file. Could you please check lazy loading via associations, or add a sample of usage of this functionality?

- You are right, I don't have a test for this. I need to add some. As it stands right now, relationships are not included in results unless the QueryPolicy claims they should be. A few of the tests mock up a policy that will claim one or more properties on the queried object should be included. Of course, you still won't get lazy loading unless you change the definition of the Orders property, since it is currently defined as a List<Order>.
If you change it to a DeferredList<Order> and have the policy claim that the Orders property should be included, then it will become lazy loaded.

- Mr. Matt, good afternoon. I am a newbie in LINQ. I saw it was very powerful. I do not know whether I am experiencing a bug with the post I had done. No one has replied to this; the only related article I saw was: any help will be great. Pl. feel free to reply to that posting in silverlight.net. Sorry to have chosen you of all the experts in the LINQ forum. Regards, Ravi.

- Hi Matt, fantastic job. This has been a lifesaver, as we're using LINQ to validate the contents of an Access database (which is not being upgraded to SQL Server any time soon…). If our paths ever cross I owe you two pints – one for each hand 😉 Cheers, Dave Harvey

PS. You've got a small bug-ette with your NULL handling. Currently your provider can handle "IS NULL", but it can't handle "IS NOT NULL". It's a quick fix required in the IQToolkit.Data.SqlFormatter VisitBinary(BinaryExpression b) method. You need something like the following to handle the "NOT" case:

    case ExpressionType.Equal:
    case ExpressionType.NotEqual:
        if (right.NodeType == ExpressionType.Constant)
        {
            ConstantExpression ce = (ConstantExpression)right;
            if (ce.Value == null)
            {
                this.Visit(left);
                this.Write(" IS");
                if (b.NodeType == ExpressionType.NotEqual)
                {
                    this.Write(" NOT");
                }
                this.Write(" NULL");
                break;
            }
        }
        else if (left.NodeType == ExpressionType.Constant)
        {
            ConstantExpression ce = (ConstantExpression)left;
            if (ce.Value == null)
            {
                this.Visit(right);
                this.Write(" IS");
                if (b.NodeType == ExpressionType.NotEqual)
                {
                    this.Write(" NOT");
                }
                this.Write(" NULL");
                break;
            }
        }
        goto case ExpressionType.LessThan;

- Thanks, Dave. You were right!
IPython newcomers may be confused by the following scenario:

- You type some commands at the Canopy Python prompt, and they work great.
- Then you copy the exact same commands to a Python file, run the file, and get errors like "No module named ..." or "name ... not defined".

The solution is to understand that in each Python module (e.g. in each .py file), you must always explicitly import all outside names that you wish to use in that module. Here's why:

By default, Canopy's IPython prompt runs in "Pylab" mode (this can be changed in Canopy's Preferences menu, in the "Python" tab). When IPython is started in pylab mode, as in Canopy, it starts with many imports already done behind the scenes. This provides almost 1000 names directly to the IPython shell, which is very convenient, but has great potential for "namespace pollution", i.e. name conflict. (There are probably names already defined that you don't even know are there and might be using for some other purpose.)

In contrast, in a Python module, you must explicitly import every name which you use. This is much more robust, and since you are not constantly typing and retyping as you would do at the prompt, the extra typing is well worth it.

Related -- don't forget to call show()

When you plot in an IPython pylab session, you don't need to explicitly invoke pyplot.show(). When you plot from a Python script, you do.
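To make the failure mode concrete, here is a minimal sketch. It uses the standard-library math module as a stand-in for illustration; in a real pylab session the equivalent names come pre-loaded from NumPy.

```python
# script.py
# At the pylab prompt, names like `pi` and `sin` already exist.
# In a script, nothing is pre-imported: using them without an
# import raises NameError ("name 'sin' is not defined").
from math import pi, sin  # the explicit import the script needs

print(sin(pi / 2))  # prints 1.0
```

With the explicit import in place, running the file behaves just like the prompt did.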
Monday, February 17, 2014

I read Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today's Computers, a good book for introducing a few common algorithms.

I'm reading Making Java Groovy. I like the syntax of Groovy, and the thinking is still very Java. There is another choice, Kotlin, but I still like Groovy's syntax more.

I've been learning quite a few programming languages, and kept forgetting what I'd read. Maybe it's time to do something rather than wasting time on reading and forgetting. I'll keep reading, but will probably only read things that could help my projects. More coding this year.

Thursday, February 20, 2014

Being devops recently, trying different system tools. Vagrant is cool, very easy to create and destroy: one command to set up a clean environment. I'm still looking for a way to do the setup purely with shell scripts, so that I don't need to mess with Puppet. But I will try Puppet too, and Docker as well.

Set up nginx to front my Node apps. Still a lot of things to play with in nginx: load balancing, caching, WebSocket proxying, etc. I will also try fronting Ruby apps with Passenger (see the latest episode of the Ruby Rogues podcast) and Unicorn.

Trying to do some UI tests with Selenium WebDriver (Ruby), PhantomJS with CasperJS, and Wraith. Looking for a way to easily do integration tests.

Trying Vaadin (a modern GWT?). I really don't like Maven; I have no idea why it keeps downloading things (there is one comment I like about Maven: it makes hard things simple and simple things hard). I'm not seriously into it, but it's still fun to try a few examples. (NetBeans has good support for Vaadin.)

Groovy: planning to write a version comparison tool with it, for myself (or the team). Must be fun.

Read a post about one PHP trick:

    // in javascript, you can do
    $foo = $a || $b || $c;

    // in php (after 5.4), you can do it with
    $foo = $a ?: $b ?: $c;

It's nice, but I probably won't use it because it doesn't feel safe.
I start to believe that if you work with a team, your code should be as verbose as Java.

Friday, February 21, 2014

I've changed my reading style on programming books a little. Now I'll read a few books at the same time but focus on one topic. I want to give myself a good understanding of Java threading, so I just randomly pick one of Learning Java 4ed, The Well-Grounded Java Developer, Beginning Java 7, and Java Concurrency in Practice and read it. It works quite well.

On the other hand, I keep switching my daily tasks to Groovy. Reading a directory recursively is just as easy as in any interpreted language:

    import groovy.io.FileType

    def dir = new File("my_folder")
    dir.eachFileRecurse(FileType.FILES) { f ->
        // each one of them is a File object
        println f.class.name
        println f.path
    }

I'll also start using Groovy for web apps; Vert.x looks good for my tiny little apps.

Wednesday, February 26, 2014

When I made up my mind to write something, I listened to two interesting podcasts: one about Go, the other about Dart. So I decided to check them out. It turns out I really like Dart. It officially offers a bundle of tools (an IDE, dart2js, a Chromium with the Dart VM), and it has already passed version 1.0 (mature enough). I finished the little Dart: Up and Running book very quickly.

Checking out some code review tools too; Gerrit and Phabricator both look good.

Friday, February 28, 2014

Using PhantomJS to take screenshots is really easy:

- Download PhantomJS, start it with –webdriver=4444
- Add "nearsoft/php-selenium-client": "v2.0" to composer.json
- Run composer update
- Run the PHP script below:

    <?php
    require '../vendor/autoload.php';

    use Nearsoft\SeleniumClient\WebDriver;

    $driver = new WebDriver();
    $driver->setScreenShotsDirectory('.');
    $driver->get('');
    $driver->screenshot();
    $driver->close();
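As a side note of my own (not from the original diary), the recursive directory walk shown in the Groovy snippet of the February 21 entry is about as short in Python's standard library. The temporary directory tree here is created only so the example is self-contained; "my_folder" in the Groovy version is assumed to already exist.

```python
# Python analogue of Groovy's eachFileRecurse(FileType.FILES)
import tempfile
from pathlib import Path

# build a tiny throwaway tree to walk
root = Path(tempfile.mkdtemp())
(root / "sub").mkdir()
(root / "sub" / "a.txt").write_text("hi")

for f in sorted(root.rglob("*")):
    if f.is_file():      # files only, like FileType.FILES
        print(f.name)    # prints: a.txt
```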
Recursive Functions with Examples in C Language

Recursion is the process of a function calling itself again and again until a condition is satisfied. We have already seen how functions can be declared, defined and called. Recursive functions are declared and defined in the same manner, but they are called within their own bodies, except for the first call, which is made by an external function. This way of calling allows a function to be executed repeatedly without the use of loops. To stop the repetition, the programmer specifies an exit condition; without one, the function would execute repeatedly, just like an infinite loop. Recursive functions need to be called by some other function first in order to start the execution process. After the exit condition is satisfied, control flows out of the function and back to the calling function.

Here is how the control flows in a recursive program:

How Stack is used in Recursion?

A stack is the data structure used to implement recursion; it determines where the data of each call is stored and the sequence in which a recursive program executes. Let us understand it better with an example:

    #include <stdio.h>

    void recursion(int n)
    {
        if (n <= 3) {            // exit condition for the recursive function
            printf("%d\n", n);
            recursion(++n);      // calling itself again
        }
        else {
            return;
        }
    }

    int main()
    {
        int x = 1;
        recursion(x);
        return 0;
    }

In this program, the value passed to the function is 1, so in the beginning the value of n is 1. For the first execution, the value of n is stored at the base of the stack, as we see in the figure below. Then it prints the value of n. Once the call inside the function is reached, the function starts executing again with the updated value n=2, which is stored as shown in the figure below. The value of n is then printed, and control flows to the function being called again with n=3. The process happens again for n=4.
For n=4, however, the condition becomes false, and the function returns, with control eventually flowing back to the main() function. The stack is what maintains the data of every call. For example, when the call for n=3 finishes, it returns to its calling function, which is actually the same function with another instance of it running. So when control returns to the execution with n=2, the data of the n=3 execution is popped from the stack, and the data for n=2, now at the top, is loaded into memory for the further execution of the function.

Factorial Program Using Recursion in C Language

Here's a simple program which prints the factorials of the numbers from 1 to n using a recursive function:

    #include <stdio.h>

    int factorial(int x)    // function definition
    {
        if (x == 1 || x == 0) {   // condition on which the exit of the recursion depends
            return 1;
        }
        else {
            return (x * factorial(x - 1));   // function being called within its own body
        }
    }

    int main()
    {
        int n, i, f = 1;
        printf("Enter the range: \n");
        scanf("%d", &n);
        for (i = 1; i <= n; i++)
        {
            f = factorial(i);   // function call at the initial stages
            printf("The factorial of %d is: %d \n", i, f);
        }
        return 0;
    }
https://www.codingeek.com/tutorials/c-programming/recursive-functions-with-examples-in-c-language/
CC-MAIN-2018-26
refinedweb
633
59.43
3 quick ways to add fonts to your React app

In HTML, fonts specify the typeface, font size, and typography of the text. You can add fonts to your React application in different ways. This article aims to explain three quick ways of adding fonts to your React app. All the examples in the article assume the React code structure provided by create-react-app; you can use them in any other code structure too.

✨ Using the Font link

We can link to any fonts hosted online using the <link> tag inside an HTML file. Let's take an example of applying Google Fonts using the <link> tag.

- Go to the Google Fonts site.
- Click on a font of your choice, then click on the "+ Select this style" button.
- Go to the "Use on the web" section and copy the code under the <link> section.
- Go to the index.html file of your project. If your app is based on create-react-app, you will find it under the public folder.
- Paste the copied lines inside the <head> section. Here is an example:

    <link rel="preconnect" href="">
    <link href="" rel="stylesheet">

- Go to your CSS file and add a style like:

    .font-link {
      font-family: 'Hanalei Fill', cursive;
    }

Here we are using the same font-family that we linked in the step above.

- Last, you can add this style anywhere in your React component:

    const FontLink = () => {
      return (
        <div className="card">
          <span className="font-link">
            This is with Font Link. We are linking the fonts from the Google Fonts.
          </span>
        </div>
      );
    };

    export default FontLink;

Please note, we are using the class name with the <span> element in the React component. This is how the component may look:

✨ Using the Web Font Loader

The Web Font Loader helps you load fonts from Google Fonts, Typekit, Fonts.com, and Fontdeck, as well as self-hosted web fonts. It is co-developed by Google and Typekit. Let us see how to load multiple fonts from Google Fonts and use them in a React component.
- Install webfontloader:

    yarn add webfontloader
    # Or,
    npm i webfontloader

- Import webfontloader in your component:

    import WebFont from 'webfontloader';

- Load the desired fonts using their font names. It is better to use the useEffect hook and let it run once when the component loads. As the fonts need to be loaded just once in the app, you can load them in the index.js file:

    useEffect(() => {
      WebFont.load({
        google: {
          families: ['Droid Sans', 'Chilanka']
        }
      });
    }, []);

Here we are loading the fonts 'Droid Sans' and 'Chilanka'.

Now you can use these fonts in a React component using the className or the style attribute. To use the className attribute, create a CSS class in the .css file:

    .font-loader {
      font-family: 'Chilanka';
    }

Then, in the component's render() method:

    <div className="font-loader">
      This is with Web Font Loader using the class attribute. We are loading the <u><b>Chilanka</b></u> font from the Google Fonts.
    </div>

With the style attribute:

    <div style={{fontFamily: 'Droid Sans'}}>
      This is with Web Font Loader using the style attribute. We are loading the <u><b>Droid Sans</b></u> fonts from the Google Fonts.
    </div>

This is how the component may look:

Read more about the Web Font Loader from here.

✨ Using @font-face

In some situations, you may not be allowed to connect to a font repository online and link/load it. A classic example: your app's users are on an intranet and have restricted access to the internet. In these situations, the fonts must be downloaded locally and packaged within the app. @font-face is a CSS rule that defines a font name by pointing to a font with a URL.

- Create a folder called fonts under src. Download the required fonts into the src/fonts folder. In this example, we have downloaded two fonts, Goldman and Lobster.
- Next, import the fonts into the index.js file:
    import './fonts/Goldman/Goldman-Bold.ttf';

- In the index.css file, add:

    @font-face {
      font-family: "GoldmanBold";
      src: local("GoldmanBold"),
        url("./fonts/Goldman/Goldman-Bold.ttf") format("truetype");
      font-weight: bold;
    }

- Now add a class name in the App.css file that uses this font family:

    .font-face-gm {
      font-family: "GoldmanBold";
    }

- Use this class name in your React component:

    const FontFace = () => {
      return (
        <div className="card">
          <div className="font-face-gm">
            This is using Font Face. We are linking the <u><b>Goldman</b></u> font from the Google Fonts.
          </div>
        </div>
      );
    };

    export default FontFace;

This is how the component may look:

Before we end...

Hope it was useful. Please like/share so that it reaches others as well. A few more points before we finish:

- All the mechanisms discussed here are also applicable to a vanilla JavaScript application.
- We can use multiple fonts in one app.

All the source code used in this article is in my GitHub repository.

Let's connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.
React’s standard flow is to start with an empty HTML template from the server, and populate the DOM entirely on the client. Letting React pre-render the initial HTML for your app on the server before the client takes over has some advantages, though: it lets slower (mobile) clients load your app much faster, and it’s good for SEO for search engines that don’t support JavaScript. Since you have to run React on the server to do this, you need a JavaScript-able backend, which is why such Universal (a.k.a. Isomorphic) apps are mostly built with Node.js backends. However, you can create universal React apps with other languages too, including Swift. In this post, I’ll walk through an example of a universal React app using Swift’s Vapor web framework. The Client JavaScript App Because Redux lends itself well as the basis of a universal React data model, we’ll take the Redux Counter Example as the basis of our app. I will leave out the unimportant parts in this post; you can always find the full example in the Git repository. The main client.js file of the Counter app fetches the state via AJAX from the server on the /api/state endpoint, and renders the Counter component: import { createStore } from 'redux'; import { Provider } from 'react-redux'; import Counter from './Counter'; import { rootReducer } from './reducers'; function load(state) { const store = createStore(rootReducer, state); // Render our main component render( <Provider store={store}> <Counter/> </Provider>, document.getElementById('root') ); } fetch('/api/state').then( response => response.json().then(load), err => console.error(err) ); On the server side, we have a Vapor handler for /api/state that gets the state from the database, and returns it as a JSON object: import Vapor // Dummy database func getStateFromDB() -> [String: Any] { return [ "value": 42 ] } let drop = Droplet() drop.get("/api/state") { req in return toJSON(value: getStateFromDB())! 
    }

For now, all we need is to serve a static index.html at the root to get the app running:

    <!doctype html>
    <html>
      <head>
        <title>React+Swift Test App</title>
      </head>
      <body>
        <div id="root"></div>
        <script src="/js/app.js"></script>
      </body>
    </html>

This gives us a regular React app, where all the rendering is dynamically happening on the client, and the server only provides the data API.

Pre-rendering the App on the Server

To pre-render our app on the server, we start by creating a helper JavaScript render function, which we will call from the server:

    import React from 'react';
    import { renderToString } from 'react-dom/server';
    import { createStore } from 'redux';
    import { Provider } from 'react-redux';
    import Counter from './Counter';
    import { rootReducer } from './reducers';
    import { default as genericRender } from './render';

    export function render(preloadedState = undefined) {
      const store = createStore(rootReducer, preloadedState);
      return {
        html: renderToString(
          <Provider store={store}>
            <Counter/>
          </Provider>
        ),
        state: JSON.stringify(store.getState())
      };
    }

The render function renders the same component as client.js, except it uses React's renderToString to build a string instead of injecting it in the DOM. Next to the rendered string, we also return the state from the Redux store (which, apart from populated default values, will be the same as preloadedState), so we can pass it on to the client. We can then bundle this function and all its dependencies (including React and Redux) up into a single server.js JavaScript file using webpack.

To run this code on the server, we can use Swift's built-in JavaScriptCore [1] framework to load the server.js bundle, and call the render function with the data we get from the database.

    func loadJS() -> JSValue?
    {
        let context = JSContext()
        do {
            let js = try String(
                contentsOfFile: "server.js",
                encoding: String.Encoding.utf8)
            context?.evaluateScript(js)
        } catch {
            return nil
        }
        return context?.objectForKeyedSubscript("server")
    }

    // Dummy
    func getStateFromDB() -> [String: Any] {
        return [ "value": 4 ]
    }

    drop.get("/") { req in
        let state = getStateFromDB()
        if let result = loadJS()?.forProperty("render")?
            .call(withArguments: [state]).toDictionary() {
            return try drop.view.make("index", [
                "html": Node.string(result["html"] as! String),
                "state": Node.string(result["state"] as! String)
            ])
        }
        throw Abort.badRequest
    }

The root handler now has to render a dynamic version of index.html with the pre-rendered html and current state embedded, so that Redux can continue on the client where the server left off:

    <!doctype html>
    <html>
      <head>
        <title>React+Swift Test App</title>
      </head>
      <body>
        <div id="root">#(html)</div>
        <script>
          window.__PRELOADED_STATE__ = #(state);
        </script>
        <script src="/js/app.js"></script>
      </body>
    </html>

Finally, in client.js, we use the preloaded state from __PRELOADED_STATE__ to hydrate our Redux store.

    import { render } from 'react-dom';
    import { createStore } from 'redux';
    import { Provider } from 'react-redux';
    import Counter from './Counter';
    import { rootReducer } from './reducers';

    function load(state) {
      const store = createStore(rootReducer, state);
      render(
        <Provider store={store}>
          <Counter/>
        </Provider>,
        document.getElementById('root')
      );
    }

    load(window.__PRELOADED_STATE__);

And that's it! Whenever a client accesses our app, the server will pre-render the HTML using data from the database, and send it to the client. On the client side, React will detect that the content has already been rendered, and not do any extra initial work.

JavaScript Development

In the server code above, loadJS() reloads the JavaScript on every access. This means we can run webpack --watch on the JavaScript code while developing, and a page reload will run the newest version of server.js every time.
For development, webpack also has a very convenient dev server mode, which compiles and serves all code from memory, and reloads your page on every change. On top of this, it can even hot-load your CSS and React modules on the fly, without any page reload at all.

Since webpack's dev-server mode works entirely from memory and doesn't write anything to disk, we can't rely on the server to have the latest version of server.js, and so the pre-rendered code can be inconsistent with the client code. To make sure this development model works as well, we can tell webpack's proxy to always add an X-DevServer header when forwarding requests to the backend; in the backend, we can then skip the pre-rendering by sending an empty html and state, and let the client do all the rendering itself again:

    drop.get("/") { req in
        if req.headers["X-DevServer"] != nil {
            // When accessing through the dev server,
            // don't pre-render anything
            return try drop.view.make("index", [
                "html": "",
                "state": "undefined"
            ])
        } else {
            // Prerender
            ...
        }

On the client, we now just have to check whether the server gave us pre-rendered state, and otherwise fetch the state via AJAX:

    // Check if we can preload the state from
    // a server-rendered page
    if (window.__PRELOADED_STATE__) {
      load(window.__PRELOADED_STATE__);
    } else {
      // We didn't pre-render on the server,
      // so we need to get our state
      fetch('/api/state')
        .then(
          response => response.json().then(load),
          err => console.error(err)
        );
    }

JavaScript Debugging

A downside of rendering JavaScript on the server is that, when your JavaScript code goes bad, it can be tricky to track down where. Doing all your development on the client definitely helps here, but sometimes you can still hit a bug only when rendering on the server (e.g. when you're passing unexpected state to render in production).
Since the render function is a pure function that translates a state object to HTML, you can log the state on the server as a JSON string when an exception occurs, and run the JSON string offline through a simple command-line Node.js script renderComponent to get decent stacktraces or do some debugging:

    // Example:
    //   ./renderComponent '{"value": 42}'

    var server = require('./JS/server');

    if (process.argv.length < 3) {
      throw Error("Missing arguments");
    }

    console.log(
      server.render(JSON.parse(process.argv[2])).html);

Conclusion

You don't need a Node.js backend to be able to build a universal React app. In this post, I used Swift, Vapor, and JavaScriptCore to build such an example app (for which you can find the complete sources in my Git repository). However, you can follow the same pattern in other languages, including Java (using the Nashorn JS engine) or any language with native bindings, using e.g. V8 (or even duktape!).

[1] Unfortunately, JavaScriptCore isn't available for Linux. As a replacement, I used Duktape with a Duktape binding for Swift on this platform.

[2] In my experiment, JavaScriptCore's interpreter was fast enough to not have to cache anything. With larger apps in production, you'll probably want to either only load the JavaScript code once, or check a timestamp to see whether the file needs to be reloaded.
https://el-tramo.be/blog/react-swift/
- I am currently facing a problem of how to dynamically "wire" OSGi Services depending on their configuration properties. I want to do that with Declarative Services. To give a concrete example: I have two different OSGi Services A and B which bot
- I am trying to use bean validation (using Hibernate implementation) in an OSGI context. The setup is the following in my blueprint file: <jaxrs:server <jaxrs:serviceBeans> <ref componen
- I am attempting to test the configuration of my application that uses Spring and OSGi. I am trying to do so without a real OSGi Container. I am trying to simply mock my BundleContext and create some sort of Bean Factory that will read my XML, to veri
- I am trying to understand the capabilities of Fabric8's container management. I just want to clarify whether the following scenario can be achieved by using Fabric8 in JBossFuse. I have created simple 2 bundles (tick, tock bundles inspired by the : h
- I was reading this article: and the author states that the most important place to use the SOLID principles is at the module joints, "It is these joints within the system that require the gre
- I have the following java classes public class SecondClass { //... } public class MyClass { public void doSomething(SecondClass secondClass) { //... } } In blueprint I have something like the following <blueprint xmlns="
- I've downloaded the AEM developer tools from and followed instructions from to setup a starting project It looks like this: My AEM is running on
- I am using Jetty 9.2.7.v20150116 within OSGi and I must create a Https Connector, but it is not working. I have created a fragment bundle that contains the jetty.xml with the following content: <New id="httpConfig" class="org.eclipse.jetty.server.
- Given a single db schema and two (or more) bundles. Question: is it possible to have JPA entities (for the single schema) distributed across bundles? (I was initially thinking about fragment bundles, but want to know if there are other possibilities)
- i have osgi services service-1.0.0.jar and service-1.1.0.jar they are implementation of service-api-1.0.0.jar both these service-1.0.0.jar and service-1.1.0.jar have same service name and packages. service are registered by bundle-activator lets assu
- I have recently started learning OSGi. While experimenting with Apache Aries and OSGi blueprint, I created the following set-up: Bundle A : public interface IMessageSender { String send(String message); String getServiceName(); } public interface IMe
- I'm looking for an automatic update mechanism in my Equinox environment. I am developing bundles which use remote Services. So I have multiple clients which communicate with each other. Now I'm looking for a way which automatically installs the new
- I have the following class hierarchy: parent and child1 declared in bundle_1, child2 declared in bundle_2. Which classloader will child2 use for loading the super class? --------------Solutions------------- In OSGi you have one ClassLoader per bundle. When lo
- Using "Tools for developing and administering WebSphere® Application Server V8.0L (or V8.5)" in Eclipse (Kepler), I am getting the following error message when I want to start the Web Preview Server: CWWKE0005E: The runtime environment could not be lau
- I have bundle A and bundle B. Bundle B imports packages from bundle A and bundle B has Eclipse-RegisterBuddy set to bundle A. Bundle A loads a class which is exported by B by Java reflection (Class.forName). When bundle B is redeployed, bundle A still
- I can't find a sample project on how to deploy a cxf bundle to virgo tomcat server. All samples I've found on the internet register the rest service in their bundle Activator (why?). I have a war bundle which has web.xml and applicationContext.xml. I
- I have 2 java applications that I need to be able to talk to each other - if the second one is installed. Both applications/frameworks are extensive and the core code is not under my control, but I can extend the parts I need to use, I just can't rew
- I have my datasource configured in tomcat\context.xml. I have deployed a bridge.war provided by Eclipse to get a servletbridge environment. I have developed another osgi bundle which contains servlet registration code and my aim is to do JNDI look up fo
- I am trying to create an integration test for my bundle. basically I want to mimic the setup I have in a normal web app project (wherein the tests are in the src/test folder) I am almost there except that I have an exception when the bundle tries to activate nativ
- I've read in the documentation of Apache Ace 2 that it works with Equinox OSGi targets as well, but I can't find out how to configure it. I am aware there is already p2 for Equinox but I also want to integrate it with the Ace software. I've found so
http://www.dskims.com/tag/osgi/
Has someone worked on it? I want to make it work on Android and iOS. Where should I find some good materials to discover how it works? (I found only)

Has someone worked on it? What is it?

ONVIF is an open industry forum that provides and promotes standardized interfaces for effective interoperability of IP-based physical security products.

sounds not very relevant…

Ok. I'm introducing my problem. I need to make an app which can provide the image stream (mostly it's via RTSP). But just putting the stream address in text form, like 192.168.0.100:554/givemestream, is a ridiculous way to do it. I want to make it possible to scan the local network and somehow control the ONVIF camera later, but from, for example, the mobile web (not only locally).

I've never heard of this ONVIF before, but perhaps you might be interested in ionic-native/zeroconf.

Yup, I'm very familiar with ONVIF. If you are asking about how ONVIF works, there is a spec, and there is reality. Many cameras do their own thing and it's a mess. If you are asking if there are implementations you can use, the closest I can think of is for something to start on. There are also C++ libs you can choose to compile and write a cordova wrapper around.

If I want to do it on my own, what do you recommend? (I mean what spec or anything else?)

This really isn't a site for ONVIF questions. Start with the core specs. (Feel free to discuss everything that is slightly related to using this "onvif" stuff in Ionic here though.)

Unfortunately not

Cool! Working on it. Will keep you posted.

I've made some huge progress here! Let me know if you're interested to know.

Still interested in

Please share your code. I am working on implementing live streaming of an ONVIF camera. Please provide some pointers on how to code the CLICK TO CONNECT button?

Aceboy +, Sorry for the delayed response. Here's a brief note about it:

- Install the onvif package through npm.
- Comment out the following line of code from node_modules/onvif/lib/onvif.js.
Otherwise, an error will be thrown as the dgram module won't work in browsers.

    // require('./discovery').Discovery

- Import the onvif module wherever needed:

    import { Cam } from 'onvif';

- Here's a sample code to initialise the cam as per the onvif documentation:

    this.pf.ready().then(() => {
      console.log("inside platform ready!");
      new Cam({
        hostname: CAM_IP,
        port: CAM_PORT,
        username: CAM_USERNAME,
        password: CAM_PASSWORD
      }, function (err) {
        if (err) return console.log(err);
        this.absoluteMove({ x: 1, y: 1, zoom: 1 });
        this.getStreamUri({ protocol: 'RTSP' }, function (err, stream) {
          console.log(stream);
          console.log("window.plugins", rtspVideo);
          rtspVideo.play(stream.uri, function () {
            console.log('Done Playing.');
          }, function (e) {
            console.error('Error: ' + e);
          });
        });
      });
    });

- You're good to go now.

Hope this helps!

Thank you so much for the reply. Appreciate it. I am getting an error in the line rtspVideo.play(stream.uri, function () {…}. The error is 'Cannot find name rtspVideo'. Please share pointers on how to solve this…

That's just an example. rtspVideo is another third-party plugin which you need to import. Check the docs here

Hey Guys, Need some help in this. Anybody successfully able to run this? What port number should be used here? HTTP port or RTSP? Any help is appreciated. Ready to pay some amount if you guys are ready to help
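One thing that helps with the nested callbacks in the sample above is wrapping the (err, result)-style onvif methods in Promises. The helper below is plain JavaScript written for illustration (it is not part of the onvif package), demonstrated against a stand-in object shaped like the Cam API:

```javascript
// Turn an (err, result)-style callback method into a Promise.
// Works for methods like cam.getStreamUri({...}, callback).
function promisify(obj, method) {
  return (...args) =>
    new Promise((resolve, reject) => {
      obj[method](...args, (err, result) => {
        if (err) reject(err);
        else resolve(result);
      });
    });
}

// Stand-in object shaped like the onvif Cam API, for demonstration only:
const fakeCam = {
  getStreamUri(options, cb) {
    cb(null, { uri: 'rtsp://example/' + options.protocol });
  }
};

promisify(fakeCam, 'getStreamUri')({ protocol: 'RTSP' })
  .then(stream => console.log(stream.uri)); // logs "rtsp://example/RTSP"
```

Against a real Cam instance, the same wrapper lets the stream URI be awaited instead of handled inside nested callbacks.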
https://forum.ionicframework.com/t/ionic-onvif/100401
import java.util.date;

I am having a lot of difficulties, have just begun to program in JAVA

Good, I have been having problems with the class java.util.date Do you have some examples or study aids of as it uses that class to order for e-mail? Thank you I await answer, it excuses him/it inconvenience

Ranch Hand

Kyle

Kyle Brown, Author of Persistence in the Enterprise and Enterprise Java Programming with IBM Websphere, 2nd Edition
See my homepage at for other WebSphere information.
Open Group Certified Distinguished IT Architect. Open Group Certified Master IT Architect. Sun Certified Architect (SCEA).

Ranch Hand

Associate Instructor - Hofstra University
Amazon Top 750 reviewer - Blog - Unresolved References - Book Review Blog

Be sure to check out the API for java.util.Date as there's lots of great information about it there. As Thomas Paul alluded to in the previous post, Java is case-sensitive. There is no such class as java.util.date, but there is a java.util.Date. Make sure that the case is correct. I hope that helps, Corey

[ March 27, 2002: Message edited by: Corey McGlone ]

Do you have some examples or study aids of how it uses that class to order for e-mail?

Here is an assignment which requires the use of the java.util.Date class. I do not understand your question about e-mail.

Sheriff

Please change your name to be compliant with JavaRanch's naming policy. Your displayed name should be 2 separate names with more than 1 letter each. We really want this to be a professional forum and would prefer that you use your REAL name. Thanks, Cindy

"JavaRanch, where the deer and the Certified play" - David O'Meara
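For reference, here is a minimal, self-contained example of the correctly capitalized java.util.Date class (note that many of its methods were later deprecated in favor of java.util.Calendar and, eventually, java.time):

```java
import java.util.Date;

public class DateDemo {
    public static void main(String[] args) {
        // Current date and time
        Date now = new Date();
        System.out.println("Now: " + now);

        // A Date can also be built from a millisecond timestamp
        // (milliseconds since January 1, 1970, 00:00:00 GMT)
        Date epoch = new Date(0L);
        System.out.println("Epoch: " + epoch);

        // Dates are comparable
        System.out.println(epoch.before(now)); // true
    }
}
```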
http://www.coderanch.com/t/391255/java/java/import-java-util-date
According to Dean J. Champion (2005, 61), corruption is defined as "behavior of public officials who accept money or other bribes for doing something they are under a duty to do anyway." In terms of law enforcement, police officers engage in corrupt actions when, for money or other favors, they fail to do something when they have a lawful duty to act or when the officer does something that he or she should not have done (Dempsey and Forst 2005, 296). Also, if officers incorrectly use their discretion, they are engaging in corruption (Champion 2005, 61). An example of an officer's failure to perform his or her duty is when an officer accepts a small bribe in exchange for not issuing a traffic citation. An example of a law enforcement officer doing something that he or she should not do would be an officer's protection of criminals who engage in unlawful actions. Finally, an example of an officer misusing his or her discretion involves letting personal values, biases, and beliefs interfere with the performance of the job, such as when an officer issues traffic tickets only to African Americans because of personal discrimination against this ethnic minority.

Dimensions of Corruption

Types of Corruption

Most researchers identify nine main types of corruption by law enforcement officers: (1) corruption of authority, (2) kickbacks, (3) opportunistic theft, (4) shakedowns, (5) protection of illegal activities, (6) case fixing, (7) direct criminal activity, (8) internal payoffs, and (9) padding (Types and dimensions 2005). Corruption of authority is defined as "when an officer receives some form of material gain by virtue of their position as a police officer without violating the law per se" (Types and dimensions 2005). An example of this form of corruption occurs when an officer accepts a gratuity, such as a free meal. Kickbacks occur when, in exchange for referring an offender to a business, the officer receives a fee.
When defense attorneys pay a police officer a fee for referring everyone he or she has arrested to their offices, this is an example of a kickback. When a police officer steals from a crime scene or an arrestee, this is known as opportunistic theft. According to Champion (2005, 231), a shakedown is defined as "a form of police corruption in which money or valuables are extorted from criminals by police officers in exchange for the criminals not being arrested." Protection of illegal activities involves, as the name suggests, the officer using his or her position to protect those individuals engaging in illegal conduct (Types and dimensions 2005). An example of this is an officer protecting organized crime figures who are engaging in prostitution rings. Fixing cases is also a problem within policing, and it involves officers using their position to get someone that they know out of trouble, such as out of a traffic ticket. Directed criminal activity involves officers actively committing crimes for money or property. The key to directed criminal activity is the officer's planning of the criminal offense. The final two forms of corruption are padding and internal payoffs. Padding involves interfering in the investigation of crime by planting evidence, and its purpose is to ensure the prosecution of an offender (Types and dimensions 2005). Internal payoffs are associated with the police department itself; they include things such as paying other officers for their holiday or vacation times.

Levels of Corruption

There are three general categories or levels of corruption within police departments (Lawrence Sherman, as cited in Dempsey and Forst 2005, 299). The first level is "the rotten apples and rotten pockets." The second level is "pervasive, unorganized corruption" (Dempsey and Forst 2005, 299). With this form of corruption, many officers within a department might be engaging in corrupt actions, but they are not working together.
The final level of corruption occurs when the entire police department is working together and protecting each other. This type of corruption is known as "pervasive, organized corruption" (Dempsey and Forst 2005, 299).

Issues Associated with Gratuities

One of the most controversial issues in terms of police corruption is the acceptance of gratuities by officers. Accepting gratuities is defined as happening when an officer accepts an incentive in exchange for favors (Champion 2005, 114). Gratuities can be free items, discounts, or gifts. Many law enforcement administrators do not allow their officers to accept any form of gratuities; some departments allow them to accept some things such as a free meal. Those departments that allow officers to accept gratuities usually rationalize it by letting the persons offering the gratuities know that they will not receive any special privileges, which is more like the officer accepting a gift rather than a gratuity. Those departments that do not allow gratuities believe that it opens the officer up to committing corrupt acts, and administrators believe gratuities are never offered without the expectation of something in return (Delattre 1996, 77). These departments believe that officers will start justifying stealing and other major corruption if they accept small incentives for services. Stuart MacIntrye and Tim Prenzler (1999) found that officers are more likely to respond favorably to those individuals who have offered them some form of special privilege. With regard to the issue of gratuities, there are different arguments for and against their use. Because most gratuities are small, many people do not see a problem with officers accepting them (White 2002, 22). According to Mike White (2002, 21), another reason why supporters of gratuities believe it is safe to accept them is that not only do the officers who receive the gratuities benefit, but the community is also helped.
The relationship between the police and the citizens within their policing community is improved when officers feel appreciated and rewarded for their work. Also, officers might feel that their supervisors do not trust them to make a decision on unacceptable and acceptable behavior if the department has guidelines against the acceptance of gratuities. In addition to the reasons previously stated, critics of the use of gratuities believe that other people are at a disadvantage when they are offered (White 2002, 22). For example, if one restaurant offers police officers free meals in exchange for patronage and another restaurant owner cannot afford to offer the same service, then problems might arise. If there were a sudden string of robberies in the area, the owner who could not afford the gratuity would be at a disadvantage. Many people believe that officers can be placed in situations where they encounter such conflicts (White 2002, 21). This conflict could occur when individuals expect officers to ignore certain unlawful behavior, such as driving while under the influence of alcohol, because of the special services that the officer received. Pollock and Becker (1996, 172) look at the issue from a different perspective. If other governmental professions cannot accept gifts or services, then police officers should not be allowed to accept them.

The Corruption Process

Many theories have been proposed for why officers or police departments become corrupt. One explanation involves the increased drug use and distribution in today's society. With the current war on drugs, police are encountering more money than ever before (Carter 1990). Because of the lack of supervision that occurs when an officer comes into contact with criminals, this can increase the likelihood that officers will take money from the offender because they are not likely to get caught.
Officers can further become involved in drug-related corruption through the sale of drugs for money or the protection of those criminals engaging in the drug trade for a bribe.

The Police Subculture and the Cop Code

Another reason for the amount of corruption among law enforcement personnel is the nature of police work and, particularly, the nature of the police subculture. Dean J. Champion (2005, 195) defined the police subculture as "the result of socialization and bonding among police officers because of their stress and job-related anxiety" and the "unofficial norms and values possessed" by those working in the field of law enforcement. These values that officers learn as part of their job are different from the values of non-law enforcement people. New recruits learn quickly what is expected of them in terms of their fellow officers. They learn the cop code that is discussed by John P. Crank and Michael A. Caldero. These authors identified the following relationships that exist within police departments between peers and supervisors. The following rules apply to law enforcement:

Rules Defining Relationships with Other Cops:

1. Watch out for your partner first and then the rest of the guys working that tour.
2. Don't give up another cop.
3. Show toughness.
4. Be aggressive when you have to, but don't be too eager.
5. Don't get involved in anything in another guy's sector.
6. Hold up your end of the work.
7. If you get caught off base, don't implicate anybody else.
8. Make sure the other guys know if another cop is dangerous or unsafe.
9. Don't trust a new guy until you have checked him out.
10. Don't tell anybody else more than they have to know; it could be bad for you and it could be bad for them.
11. Don't talk too much or too little. Both are suspicious.
12. Don't leave work for the next tour.

Rules Defining Relationships of Street Cops with Bosses:

1. Protect yourself. (If the system wants to get you, it will.)
2. Don't make waves.
Supervisors pay attention to troublemakers.
3. Don't give them too much activity. Don't be too eager.
4. Keep out of the way of any boss from outside your precinct.
5. Don't look for favors just for yourself.
6. Don't take on the patrol sergeant by yourself.
7. Know your bosses. Who's working and who has the desk?
8. Don't do the bosses' work for them.
9. Don't trust bosses to look out for your interests (Crank and Caldero 2000, 144).

The cop code demonstrates how the police subculture can foster corruption through protecting those who are engaging in criminal acts and not reporting them to supervisors and the public.

Corruption Continuum

Neal Trautman (2000, 65) developed the corruption continuum to explain the unethical actions of police officers. This continuum involves four levels. The first one is the implementation of policies that ensure that officers know the ethical rules that they have to follow. If the administrator fails to do this, then officers will believe that they can be corrupt and no one will do anything about it. The next phase of the process involves police supervisors not doing anything when they know of unethical acts committed by officers or when they try to cover for those officers who engage in corruption. The next step involves officers becoming indifferent or fearful of becoming involved in the situation. They may feel that if they come forward, they will be punished. After a period of time, when these officers become unhappy with their job, they are more likely to become corrupt. The final step of the corruption continuum involves officers doing anything they can to survive within the corrupt environment of the police organization, even if that means they have to become corrupt themselves.

Combating Corruption in Police Organizations

When faced with the issue of combating police corruption, law enforcement agencies and administrators can try various means to deal with the problem. J.
Kevin Grant (2002) listed four means of dealing with this issue. The first method of fighting police corruption is through leadership. Police agencies must have a strong administrator that is willing to assess the problems, come up with solutions, and monitor the success of their implementations. Strong leadership is very important in terms of handling corruption because officers will typically look to their leaders to determine how they should behave (Pollock 1996, 220). If officers see that their supervisors are engaging in corruption, they are more likely to engage in it themselves, but if they see that administrators are following the law, punishing violations, and behaving ethically, then they would learn that to do otherwise is unacceptable. Two other ways that J. Kevin Grant (2002) developed for combating corruption involved the hiring process and departmental procedures. Grant believed that if administrators selected quality applicants through high standards, then corruption would likely decrease. The use of psychological tests can help in the selection process because they are designed to determine characteristics of individuals. The administrator can decline those applicants that do not meet the standards set by the department. With regard to department standards, Grant believed that by providing training, setting up codes of conduct, making sure the officers are punished when violations occur, and encouraging officers to work together, a lot of the problems with corruption would disappear in police agencies. One of the main forms of training that police departments can provide is ethics training. 
Joycelyn Pollock (1996, 217) stated that "ethics training in the academy, as well as offering in-service courses, is common and recommended for all police departments today." This training will give officers the opportunity to understand the different ethical issues that are a part of the job, and, it is hoped, they will learn how to deal with these conflicts in a moral and ethical manner.

Civilian Review Boards

A final way to combat corruption in law enforcement is through civilian review boards (Dempsey and Forst 2005, 311). It is the job of law enforcement civil review boards to investigate allegations of corruption. The board can also make recommendations for change in terms of punishment meted out and policies on dealing with corruption. Many people like review boards that are independent of police departments because of the increase in impartiality that is associated with them. They believe that these types of boards are able to fully investigate issues of corruption and look at everyone's side of the story.
http://what-when-how.com/police-science/corruption-police/
Opened 12 years ago
Closed 10 years ago
Last modified 9 years ago

#2669 closed enhancement (fixed)

excel/openoffice calc export

Description

a microsoft excel/openoffice calc export would be a great feature. it could be implemented by:
- producing tab separated or comma separated format
- setting the http mime type of the result to application/msexcel
- set the extension of the result to ../xx.xls

and it should work with most (all?) browsers.

Attachments (5)

Change History (37)

comment:1 Changed 12 years ago by

comment:2 Changed 12 years ago by

I think that having a direct Excel export can be useful: it's more convenient than having to save the text output to a file, and then open that file. The following patch sets the mimetype used for Tab-separated value to be application/vnd.ms-excel. Note that the mimetype value given to the add_link is not used.

Index: trac/ticket/query.py
===================================================================
--- trac/ticket/query.py (revision 2831)
+++ trac/ticket/query.py (working copy)
@@ -381,8 +381,8 @@
                  'application/rss+xml', 'rss')
         add_link(req, 'alternate', query.get_href('csv'),
                  'Comma-delimited Text', 'text/plain')
-        add_link(req, 'alternate', query.get_href('tab'), 'Tab-delimited Text',
-                 'text/plain')
+        add_link(req, 'alternate', query.get_href('tab'),
+                 'Tab-delimited Text (Excel)', 'text/plain')
         constraints = {}
         for k, v in query.constraints.items():
@@ -406,7 +406,7 @@
         elif format == 'csv':
             self.display_csv(req, query)
         elif format == 'tab':
-            self.display_csv(req, query, '\t')
+            self.display_csv(req, query, '\t', 'application/vnd.ms-excel')
         else:
             self.display_html(req, query)
         return 'query.cs', None
@@ -572,9 +572,9 @@
             self.env.is_component_enabled(ReportModule):
             req.hdf['query.report_href'] = self.env.href.report()
-    def display_csv(self, req, query, sep=','):
+    def display_csv(self, req, query, sep=',', mimetype='text/plain'):
         req.send_response(200)
-        req.send_header('Content-Type', 'text/plain;charset=utf-8')
+
req.send_header('Content-Type', '%s;charset=utf-8' % mimetype) req.end_headers() cols = query.get_columns() comment:3 Changed 12 years ago by What does prevent the browser to open Excel when receiving a CSV file ? Again, I don't think it's a good idea to add references to proprietary formats in the Trac core - may be it is possible to have an extension (plugin) for such formats ? comment:4 Changed 12 years ago by It's the mimetype. If it's text/plain, the browser will show the text by itself. With a more neutral alternative to application/vnd.ms-excel, like text/csv or text/tab-separated-values, IE only propose to save the file, and Firefox starts the generic Open with… dialog. Note that the data format itself is unchanged and non-proprietary, it's only the type annotation that has been made more specific. Nothing prevents you to choose a text editor to open it. And I'm pretty sure that free MS-office alternatives like Open Office or KOffice will know what to do with the ms-excel type. comment:5 Changed 12 years ago by I agree with eblot that we shouldn't do this… if someone wants their report results in excel, it's easy enough. comment:6 Changed 12 years ago by eblot, exactly, it would be great for ticket exports as additional link on the bottom to ease peoples life a little who want to take the list of issues with them, for a meeting e.g. the purpose of the mime-type and the ending is to help the operating system or the browser to choose the way how to display the document. if the result is csv, then the mime type could be different from text to allow configuration of a different application. setting it is definitely not a crime. btw, the patch above replaces the csv link, which was not the idea. 
there should be two links both producing text: - one with mime type text, opens the same as displaying the source of a wiki page - one with mime type differen, wich opens a spreadsheet program on any platform by default (and the proposed solution works on linux, windows, macos). how you call this links? i'm not sure if this is important at all. e.g. csv + excel, text + csv, text + spreadsheet just to name possibilities. am i allowed to reopen the ticket pls? comment:7 Changed 12 years ago by sorry … how do i change the wrong statement above? it is already an additional link i think *sweat*. comment:8 Changed 12 years ago by My personal opinion is that Trac should not provide OS-specific format (text/csv is not, application/ms-excel is). I don't think it would be a nice thing to clutter the interface with links which are useless on MacOS or Linux, for example. People who want to add platform specific feature can edit the Trac code, or create plugins to support proprietary file format, but I wish Trac core stayed platform independent. [About the latest question: you cannot edit comments, only append new ones] comment:9 Changed 12 years ago by exactly this is the point and i agree fully with you: such a link has to work on every operating system where you have a spreadsheet program and a browser. i only tested linux and windows and it does. but as the same programs exist also for macos it would be a big surprise if it would not work there. if it is a plugin, even better. Changed 12 years ago by Additional changes on top of alect's attachment:ticket:2296:content-converter.diff comment:10 Changed 12 years ago by This ticket is now related to #2296. I attached a single file plugin implementing that feature (attachment:tickets_to_excel_tsv.py) which is using the extension points introduced in the attachment:ticket:2296:content-converter.diff patch, and some additional changes on top of that (attachment:reusable_export_csv.diff). 
I think we will be able to close this as worksforme as soon as the #2296 related changes will be in trunk. Changed 12 years ago by Simple but effective plugin implementing the requested feature (take 2) comment:11 Changed 12 years ago by I've applied the patch Christian and I'll upload a new version soon (once I've migrated the versioncontrol zip/diff downloads). comment:12 Changed 12 years ago by Alternatively, that Excel stuff could also be done in a more general way by pipelining the transforms… Registering once a 'text/csv' => 'application/vnd.ms-excel' converter, and for each of the XYZ converter of type * => 'text/csv', propose an additional XYZ (Excel) conversion… Changed 12 years ago by Report to Native-xls by pyExcelerator for trac 0.9.5 comment:13 follow-up: 29 Changed 12 years ago… Changed 12 years ago by Updated version for trunk API comment:14 Changed 12 years ago by comment:15 Changed 12 years ago by Let me first propose an implementation for the pipelining stuff… Here's the implementation on top of r3307: attachment:pipelining_conversions.patch I changed the text/plain MIME Type of the CSV to text/csv, as this is the registered on (cf. rfc:4180). The plugin would be extremely simple: """Convert any `text/csv` data to `application/vnd.ms-excel`.""" from trac.core import * from trac.mimeview.api import IContentConverter EXCEL_MIMETYPE = 'application/vnd.ms-excel' class CSVToExcelConverter(Component): implements (IContentConverter) # IContentConverter methods def get_supported_conversions(self): yield ('excel', 'Excel', 'csv', 'text/csv', EXCEL_MIMETYPE, 8) def convert_content(self, req, mimetype, content, k, fname=None, url=None): return (content, EXCEL_MIMETYPE) Changed 12 years ago by comment:16 Changed 11 years ago by I don't know exactly since when, but now for me the .csv files are automatically opened in Excel… Does it worksforme for other people too? comment:17 Changed 11 years ago by Yep, worksforme as well on a Mac. 
comment:18 Changed 11 years ago by Ok, if it works on a Mac … ;) comment:19 follow-up: 20 Changed 11 years ago by yes, it works perfect for queries. could you pls add the same mime type for reports too pls? may i reopen it for that reason. comment:20 follow-up: 21 Changed 11 years ago by comment:21 Changed 11 years ago by comment:22 Changed 11 years ago by can it be that spreadsheet is opened because the result link is called "query.csv" in case of the query? because the report link is called "1" (the id), and windows does not know that it should open excel? comment:23 Changed 11 years ago by Right, report_1.csv would certainly be a better choice of filename anyway. comment:24 Changed 11 years ago by comment:25 Changed 11 years ago by comment:26 Changed 11 years ago by tx a lot, now works correctly comment:27 follow-up: 28 Changed 11 years ago by I installed the CSVToExcelConverter plugin in my trac's plugins directory, and also set in the ini file the following fields: [components] CSVToExcelConverter.* = enabled the trac.log says 2007-02-13 12:29:35,887 Trac[init] DEBUG: Loading file plugin CSVToExcelConverter from /home/trac/plugins/CSVToExcelConverter.py but, when I open a report I only keep seeing: Download in other formats: - RSS Feed - Comma-delimited Text - Tab-delimited Text - SQL Query What am I doing wrong? Thank you. comment:28 Changed 11 years ago by What am I doing wrong? Quoting the new ticket page: Support and installation questions should be asked on the mailing list or IRC channel, not filed as tickets. Please ask for support on the MailingList rather than re-opening tickets. You may reopen ticket if the provided feature is buggy, but not to ask for support. Thanks in advance. 
comment:29 Changed 11 years ago by I ported it to trac 0.10.3, and uploaded it as a new project… comment:30 follow-up: 31 Changed 10 years ago by what about the possibility to have an organized xls report output with many tab, with different pie charts : one with percentage pieces differentiated for gravity, another with pieces for status, etc. Yes always as a plugin, so to not loose OS indipendence of trac. (sorry if this was not a good reason to reopen the ticket) comment:31 Changed 10 years ago by Replying to andreacolpo@yahoo.it: (sorry if this was not a good reason to reopen the ticket) Yep, as this feature won't be implemented in Trac core, there's no need to re-open the ticket. I would have said you need to report these suggestion to the th:ExcelReportPatch maintainer, but as trac.hacks.org web site is down for now, you'll have to wait to do so. comment:32 Changed 9 years ago by How can i export to spreadsheet ……… i want to use open office Can you give some examples of which kind of data you are thinking of ? CSV and TSV exports are already available for ticket reports. I don't really agree with the second/third points: I don't think Trac should export to some proprietary format. Plus, I'm not sure it is valid to generate a CSV file with a .XLSextension.
https://trac.edgewall.org/ticket/2669
react-dock

Resizable dockable react component.

$ npm i -S react-dock

render() {
    return (
        <Dock position='right' isVisible={this.state.isVisible}>
            {/* you can pass a function as a child here */}
            <div onClick={() => this.setState({ isVisible: !this.state.isVisible })}>X</div>
        </Dock>
    );
}
https://npm.runkit.com/react-dock
NAME

glob, globfree - Generate pathnames matching a pattern

LIBRARY

Standard C Library (libc.so, libc.a)

SYNOPSIS

#include <glob.h>

int glob(
    const char *pattern,
    int flags,
    int (*errfunc)(const char *epath, int eerrno),
    glob_t *pglob);

void globfree(
    glob_t *pglob);

STANDARDS

Interfaces documented on this reference page conform to industry standards as follows: glob(), globfree(): XPG4, XPG4-UNIX. Refer to the standards(5) reference page for more information about industry standards and associated tags.

PARAMETERS

pattern
    Contains the filename pattern to compare against accessible pathnames.
flags
    Controls the customizable behavior of the glob() function.
errfunc
    Specifies an optional function that, if specified, is called when the glob() function detects an error condition.
pglob
    Contains a pointer to a glob_t structure. The structure is allocated by the caller. The array of structures containing the filenames located that match the pattern parameter are stored by the glob() function into the structure. The last entry is a NULL pointer.
epath
    Contains the pathname that failed because a directory could not be opened or read.
eerrno
    Specifies the errno value from a failure specified by the epath parameter, as set by the opendir(), readdir(), or stat() functions.

DESCRIPTION

The glob() function constructs a list of accessible files that match the pattern parameter. The glob() function matches all accessible pathnames against this pattern and develops a list of all pathnames that match. In order to have access to a pathname, the glob() function requires search permission on every component of a pathname except the last, and read permission on each directory of any filename component of the pattern parameter that contains any of the special characters * (asterisk), ? (question mark), or [ (open bracket).

The glob() function stores the number of matched pathnames and a pointer to a list of pointers to pathnames in the pglob parameter. The pathnames are sorted, based on the setting of the LC_COLLATE category in the current locale.
The first pointer after the last pathname is NULL. If the pattern does not match any pathnames, the returned number of matched pathnames is 0 (zero). It is the caller's responsibility to create the structure pointed to by the pglob parameter. The glob() function allocates other space as needed. The globfree() function frees any space associated with the pglob parameter due to a previous call to the glob() function.

The flags parameter is used to control the behavior of the glob() function. The flags value is the bitwise inclusive OR (|) of any of the following constants, which are defined in the glob.h file.

GLOB_APPEND
    Appends pathnames located with this call to any pathnames previously located.
GLOB_DOOFFS
    Uses the gl_offs structure to specify the number of NULL pointers to add to the beginning of the gl_pathv component of the pglob parameter.
GLOB_ERR
    Causes the glob() function to return when it encounters a directory that it cannot open or read. If the GLOB_ERR flag is not set, the glob() function continues to find matches if it encounters a directory that it cannot open or read.
GLOB_MARK
    Specifies that each pathname that is a directory should have a / (slash) appended.
GLOB_NOCHECK
    If the pattern parameter does not match any pathname, then the glob() function returns a list consisting only of the pattern parameter, and the number of matched patterns is one.
GLOB_NOESCAPE
    If the GLOB_NOESCAPE flag is set, a \ (backslash) cannot be used to escape metacharacters.
GLOB_NOSORT
    Specifies that the list of pathnames need not be sorted. If the GLOB_NOSORT flag is not set, pathnames are collated according to the current setting of the LC_COLLATE category.

The GLOB_APPEND flag can be used to append a new set of pathnames to those found in a previous call to the glob() function.
The following rules apply when two or more calls to the glob() function are made with the same value of the pglob parameter and without intervening calls to the globfree() function:

1. If the application set the GLOB_DOOFFS flag in the first call to the glob() function, then it is also set in the second call, and the value of the gl_offs field of the pglob parameter is not modified between the calls.

2. If the application did not set the GLOB_DOOFFS flag in the first call to the glob() function, then it is not set in the second call.

3. After the second call, the gl_pathv field of the pglob parameter points to a list containing the following:

   - Zero or more NULLs, as specified by the GLOB_DOOFFS flag and pglob->gl_offs.
   - Pointers to the pathnames that were in the pglob->gl_pathv list before the call, in the same order as after the first call to the glob() function.
   - Pointers to the new pathnames generated by the second call, in the specified order.

The count returned in the pglob->gl_pathc parameter is the total number of pathnames from the two calls. The application should not modify the pglob->gl_pathc or pglob->gl_pathv fields between the two calls.

Note that the pglob parameter has meaning even if the glob() function fails. This allows the glob() function to report partial results in the event of an error. However, if the number of matched pathnames is 0 (zero), the pointer in the pglob parameter is unspecified even if the glob() function did not return an error.

The GLOB_NOCHECK flag can be used when an application wants to expand a pathname if wildcards are specified, but wants to treat the pattern as just a string otherwise. The sh command can use this for flag parameters, for example.
One use of the GLOB_DOOFFS flag is for applications that build an argument list for use with the execv(), execve(), or execvp() functions; for example, if an application needs to do the equivalent of ls -l *.c *.h, it can gather the two sets of pathnames using the GLOB_APPEND flag as follows:

globbuf.gl_offs = 2;
glob("*.c", GLOB_DOOFFS, NULL, &globbuf);
glob("*.h", GLOB_DOOFFS|GLOB_APPEND, NULL, &globbuf);

The new pathnames generated by a subsequent call with the GLOB_APPEND flag set are not sorted together with the previous pathnames. This process mirrors the way that the shell handles pathname expansion when multiple expansions are done on a command line.

RETURN VALUES

On successful completion, the glob() function returns a value of 0 (zero). The pglob->gl_pathc field returns the number of matched pathnames and the pglob->gl_pathv field contains a pointer to a NULL-terminated list of matched and sorted pathnames. If the number of matched pathnames in the pglob->gl_pathc parameter is 0 (zero), the pointer in the pglob->gl_pathv parameter is undefined.

If the glob() function terminates due to an error, the function returns one of the following nonzero constants, which are defined in the glob.h file. In this case, the pglob parameter values are still set as defined above.

GLOB_ABORTED
    Indicates the scan was stopped because GLOB_ERR was set or errfunc returned a nonzero value.
GLOB_NOMATCH
    Indicates the pattern does not match any existing pathname, and GLOB_NOCHECK was not set in flags.
GLOB_NOSPACE
    Indicates an attempt to allocate memory failed.

If, during the search, a directory is encountered that cannot be opened or read and the errfunc parameter value is not NULL, the glob() function calls errfunc with two arguments:

epath
    Specifies the pathname that failed.
eerrno
    Specifies the value of errno from the failure, as set by the opendir(), readdir(), or stat() functions.
If errfunc is called and returns nonzero, or if the GLOB_ERR flag is set in flags, the glob() function stops the scan and returns GLOB_ABORTED after setting the pglob parameter to reflect the pathnames already scanned. If GLOB_ERR is not set and either errfunc is NULL or errfunc returns zero, the error is ignored.

ERRORS

No errno values are returned.

FILES

glob.h
    Defines glob() macros, data types, and functions.

RELATED INFORMATION

Functions: fnmatch(3), opendir(3), readdir(3), stat(2)

Standards: standards(5)
http://backdrift.org/man/tru64/man3/globfree.3.html
I have started coding in C++ since a week, so I am a newbie at it. I have downloaded the MinGW C++ compiler and added it to the path. I used Cmder to run a hello world program:

#include <iostream>
using namespace std;

int main() {
    cout << "Hello World!";
    cout << "My first CPP program LOL";
    return 0;
}

Then I typed in Cmder:

g++ HelloWorld.cpp -o outputfile.exe

Nothing shows, which means that g++ has converted the code to an exe. But when I typed

outputfile.exe

2 to 3 seconds later, it comes up as 'Access is denied' and Cmder closes. I am sick of it. Can you please help me find the solution to this? I am the administrator of my PC.

Source: Windows Questions C++
https://windowsquestions.com/2021/12/02/c-program-access-is-denied-closed/
Python needs a main function because in that language toplevel variables are global. If you don't create a main function, you will pollute the global namespace. This is not a problem in Lua: when you load a module, it implicitly puts everything inside an invisible main function, and local variables declared at the toplevel are local to the module. What I do on my projects: if the file is a standalone script, I usually don't bother with creating a main function. If the module is intended to be used via require, I put the code inside an exported function. I don't want to have side effects when I call require("mymodule").
http://lua-users.org/lists/lua-l/2022-05/msg00109.html
This is one of my posts on how PVS-Studio makes programs safer, that is, where and what types of errors it detects. This time we are going to examine samples demonstrating the use of the IPP 7.0 library (Intel Performance Primitives Library).

Intel Parallel Studio 2011 includes the Performance Primitives Library. This library in its turn includes a lot of primitives which allow you to create efficient video and audio codecs, signal processing software, image rendering mechanisms, archivers and so on. Sure, it is rather difficult to handle such a library. That is why Intel created a lot of demonstration programs based on it. You may see descriptions of samples and download them here:

All the samples are arranged into four groups:

- IPP Samples for Windows
- IPP UIC Demo for Windows
- IPP DMIP Samples for Windows
- IPP Cryptography Samples for Windows

Each set contains many projects, so, for a start, I took only the first set, IPP Samples for Windows, for the check. I used PVS-Studio 4.10 to perform the analysis.

I want to show you in this post that static analysis is useful regardless of a programmer's skill and the level of the solution being developed. The idea "you must employ experts and write code without errors right away" does not work. Even highly skilled developers cannot be secure from all the errors and misprints while writing code. Errors in the samples for IPP show this very well.

I want you to note that IPP Samples for Windows is a high-quality project. But due to its size, 1.6 million lines of code, it cannot but contain various errors. Let's examine some of them.

Bad replacement of array's indexes

I could well include this sample into my previous article "Consequences of using the Copy-Paste method in C++ programming and how to deal with it":

PVS-Studio's diagnostic message: V557 Array overrun is possible. The '30' index is pointing beyond array bound.
avs_enc umc_avs_enc_compressor_enc_b.cpp 495

The programmer copied the code fragment several times and changed the arrays' indexes. But in the end his hand shook: he typed number 3 but forgot to delete 0. As a result, we have got index 30, and there is an overrun far outside the array's boundaries.

Identical code branches

Since we started with code copying, let's examine one more example related to it:

AACStatus aacencGetFrame(...)
{
  ...
  if (maxEn[0] > maxEn[1]) {
    ics[1].num_window_groups = ics[0].num_window_groups;
    for (g = 0; g < ics[0].num_window_groups; g++) {
      ics[1].len_window_group[g] = ics[0].len_window_group[g];
    }
  } else {
    ics[1].num_window_groups = ics[0].num_window_groups;
    for (g = 0; g < ics[0].num_window_groups; g++) {
      ics[1].len_window_group[g] = ics[0].len_window_group[g];
    }
  }
  ...
}

The PVS-Studio's diagnostic message: V523 The 'then' statement is equivalent to the 'else' statement. aac_enc aac_enc_api_fp.c 1379

But this time it is just the other way round - the programmer forgot to edit the copied code. Both branches of the conditional operator "if" perform the same actions.

Confusion with priority of "--" decrement operation and "*" pointer's dereferencing

static void sbrencConflictResolution (..., Ipp32s *nLeftBord)
{
  ...
  *nLeftBord = nBordNext - 1;
  ...
  if (*lenBordNext > 1) {
    ...
    *nLeftBord--;
  }
  ...
}

The PVS-Studio's diagnostic message: V532 Consider inspecting the statement of '*pointer--' pattern. Probably meant: '(*pointer)--'. aac_enc sbr_enc_frame_gen.c 428

The "nLeftBord" pointer returns values from the "sbrencConflictResolution" function. At first, it is the value "nBordNext - 1" which is written at the specified address. On certain conditions, this value must be decremented by one. To decrement the value, the programmer used this code:

*nLeftBord--;

The error is that it is the pointer itself which is decremented instead of the value.
The correct code looks this way:

(*nLeftBord)--;

More confusion with "++" increment operation and "*" pointer's dereferencing

I cannot understand the following code at all. I do not know how to fix it to make it meaningful. Perhaps something is missing here.

static IppStatus mp2_HuffmanTableInitAlloc(Ipp32s *tbl, ...)
{
  ...
  for (i = 0; i < num_tbl; i++) {
    *tbl++;
  }
  ...
}

The PVS-Studio's diagnostic message: V532 Consider inspecting the statement of '*pointer++' pattern. Probably meant: '(*pointer)++'. mpeg2_dec umc_mpeg2_dec.cpp 59

Here, the loop from the sample above is equivalent to the following code:

tbl += num_tbl;

The PVS-Studio analyzer supposed that brackets might be missing and that the code should read "(*tbl)++;". But this variant is meaningless too: in that case, the loop would be equivalent to this code:

*tbl += num_tbl;

So, this loop is a rather strange one. The error does exist, but only the code's author seems to know how to fix it.

Loss of error flag

The code has the function "GetTrackByPidOrCreateNew" that returns "-1" if an error occurs.

typedef signed int Ipp32s;
typedef unsigned int Ipp32u;

Ipp32s StreamParser::GetTrackByPidOrCreateNew(
  Ipp32s iPid, bool *pIsNew)
{
  ...
  else if (!pIsNew || m_uiTracks >= MAX_TRACK)
    return -1;
  ...
}

The "GetTrackByPidOrCreateNew" function itself is absolutely correct. But an error occurs while using it:

Status StreamParser::GetNextData(MediaData *pData, Ipp32u *pTrack)
{
  ...
  *pTrack = GetTrackByPidOrCreateNew(m_pPacket->iPid, NULL);
  if (*pTrack >= 0 && TRACK_LPCM == m_pInfo[*pTrack]->m_Type)
    ippsSwapBytes_16u_I((Ipp16u *)pData->GetDataPointer(),
                        m_pPacket->uiSize / 2);
  ...
}

The PVS-Studio's diagnostic message: V547 Expression '* pTrack >= 0' is always true. Unsigned type value is always >= 0. demuxer umc_stream_parser.cpp 179

The value returned by the "GetTrackByPidOrCreateNew" function is saved as the unsigned int type. It means that "-1" turns into "4294967295". The "*pTrack >= 0" condition is always true.
As a result, if the "GetTrackByPidOrCreateNew" function returns "-1", an Access Violation will occur while executing "m_pInfo[*pTrack]->m_Type".

Copy-Paste and missing +1

void H264SegmentDecoder::ResetDeblockingVariablesMBAFF()
{
  ...;
  ...
}

The PVS-Studio's diagnostic message: V523 The 'then' statement is equivalent to the 'else' statement. h264_dec umc_h264_segment_decoder_deblocking_mbaff.cpp 340

If you look at the nearby code, you will understand that the programmer forgot to add 1 in the copied line. This is the correct code:

+ 1;

Not far from this place, there is the same error with missing "+ 1" in the function "H264CoreEncoder_ResetDeblockingVariablesMBAFF".

The PVS-Studio's diagnostic message: V523 The 'then' statement is equivalent to the 'else' statement. h264_enc umc_h264_deblocking_mbaff_tmpl.cpp.h 366

Remove that does not remove anything

void H264ThreadGroup::RemoveThread(H264Thread * thread)
{
  AutomaticUMCMutex guard(m_mGuard);
  std::remove(m_threads.begin(), m_threads.end(), thread);
}

The PVS-Studio's diagnostic message: V530 The return value of function 'remove' is required to be utilized. h264_dec umc_h264_thread.cpp 226

This is quite an interesting combination. On the one hand, everything is cool: we have a mutex to correctly remove items in a multithreaded application. On the other hand, the developers simply forgot that the std::remove function does not remove items from the array but only rearranges them. Actually, this code must look this way:

m_threads.erase(
  std::remove(m_threads.begin(), m_threads.end(), thread),
  m_threads.end());

Comparing structures' fields to themselves

I was looking through the errors and noticed that the implementation of the H264 video compression standard is somewhat defective. A lot of the errors we have found relate to this very project. For instance, the programmer was in a hurry and used two wrong variable names at once.
bool H264_AU_Stream::IsPictureSame(H264SliceHeaderParse & p_newHeader)
{
  if ((p_newHeader.frame_num != m_lastSlice.frame_num) ||
      (p_newHeader.pic_parameter_set_id != p_newHeader.pic_parameter_set_id) ||
      (p_newHeader.field_pic_flag != p_newHeader.field_pic_flag) ||
      (p_newHeader.bottom_field_flag != m_lastSlice.bottom_field_flag))
  {
    return false;
  }
  ...
}

The PVS-Studio's diagnostic messages:
V501 There are identical sub-expressions 'p_newHeader.pic_parameter_set_id' to the left and to the right of the '!=' operator. h264_spl umc_h264_au_stream.cpp 478
V501 There are identical sub-expressions 'p_newHeader.field_pic_flag' to the left and to the right of the '!=' operator. h264_spl umc_h264_au_stream.cpp 479

The comparison function does not work because some members of the structure are compared to themselves. Here are the two corrected lines:

(p_newHeader.pic_parameter_set_id != m_lastSlice.pic_parameter_set_id)
(p_newHeader.field_pic_flag != m_lastSlice.field_pic_flag)

Incorrect data copying

Errors related to use of wrong objects occur not only in comparison operations but in operations of copying objects' states:

Ipp32s ippVideoEncoderMPEG4::Init(mp4_Param *par)
{
  ...
  VOL.sprite_width = par->sprite_width;
  VOL.sprite_height = par->sprite_height;
  VOL.sprite_left_coordinate = par->sprite_left_coordinate;
  VOL.sprite_top_coordinate = par->sprite_left_coordinate;
  ...
}
Double assignment for additional safety H264EncoderFrameType* H264ENC_MAKE_NAME(H264EncoderFrameList_findOldestToEncode)(...) { ... MaxBrefPOC = H264ENC_MAKE_NAME(H264EncoderFrame_PicOrderCnt)(pCurr, 0, 3); MaxBrefPOC = H264ENC_MAKE_NAME(H264EncoderFrame_PicOrderCnt)(pCurr, 0, 3); ... } The PVS-Studio's diagnostic message: V519 The 'MaxBrefPOC' object is assigned values twice successively. Perhaps this is a mistake. h264_enc umc_h264_enc_cpb_tmpl.cpp.h 784 When I saw this code, I recalled an old programmers' joke: - Why do you have two identical GOTO one right after the other in your code? - What if the first one doesn't work! Well, this error is not crucial yet it is an error. Code making you alert AACStatus sbrencResampler_v2_32f(Ipp32f* pSrc, Ipp32f* pDst) { ... k = nCoef-1; k = nCoef; ... } The PVS-Studio's diagnostic message: V519 The 'k' object is assigned values twice successively. Perhaps this is a mistake. aac_enc sbr_enc_resampler_fp.c 90 This double assignment alerts me much more than in the previous sample. It seems as if the programmer was not confident. Or as if he decided to try "nCoef-1" first and then "nCoef". It is also called "programming through experiment method". Anyway, it is that very case when you should stop for a while and think it over on encountering such a fragment. Minimum value which is not quite minimum PVS-Studio's diagnostic message: V501 There are identical sub-expressions to the left and to the right of the '<' operator: (m_cur.AcRate [2]) < (m_cur.AcRate [2]) me umc_me.cpp 898 Here is another misprint in the array's index. The last index must be 3, not 2. This is the correct code: Ipp32s BestAC= IPP_MIN(IPP_MIN(m_cur.AcRate[0],m_cur.AcRate[1]), IPP_MIN(m_cur.AcRate[2],m_cur.AcRate[3])); What is unpleasant about such errors is that the code "almost works". The error occurs only if the minimum item is stored in "m_cur.AcRate[3]". Such errors like to hide during testing and show up on users' computers at user input data. 
Maximum value which is not quite maximum There are problems with maximum values too: Ipp32s ippVideoEncoderMPEG4::Init(mp4_Param *par) { ... i = IPP_MAX(mBVOPsearchHorBack, mBVOPsearchHorBack); ... } The PVS-Studio's diagnostic message: V501 There are identical sub-expressions '(mBVOPsearchHorBack)' to the left and to the right of the '>' operator. mpeg4_enc mp4_enc_misc.cpp 547 The mBVOPsearchHorBack variable is used twice. Actually, the programmer intended to use mBVOPsearchHorBack and mBVOPsearchVerBack: i = IPP_MAX(mBVOPsearchHorBack, mBVOPsearchVerBack); A bad shot typedef struct { ... VM_ALIGN16_DECL(Ipp32f) nb_short[2][3][__ALIGNED(MAX_PPT_SHORT)]; ... } mpaPsychoacousticBlock; static void mp3encPsy_short_window(...) { ... if (win_counter == 0) { nb_s = pBlock->nb_short[0][3]; } ... } The PVS-Studio's diagnostic message: V557 Array overrun is possible. The '3' index is pointing beyond array bound. mp3_enc mp3enc_psychoacoustic_fp.c 726 There must be a simple misprint here. It is index '3' used accidentally instead of '2'. I think you understand the consequences. Error causing a slowdown PVS-Studio's diagnostic message: V503 This is a nonsensical comparison: pointer < 0. ipprsample ippr_sample.cpp 501 This is a nice example of code that works slower than it could due to an error. The algorithm must normalize only those items which are specified in the mask array. But this code normalizes all the items. The error is located in the "if(mask<0)" condition. The programmer forgot to use the "i" index. The "mask" pointer will be almost all the time above or equal to zero and therefore we will process all the items. 
This is the correct code:

if (mask[i] < 0) continue;

Subtraction result always amounts to 0

int ec_fb_GetSubbandNum(void *stat)
{
    _fbECState *state = (_fbECState *)stat;
    return (state->freq - state->freq);
}

The PVS-Studio's diagnostic message: V501 There are identical sub-expressions to the left and to the right of the '-' operator: state->freq - state->freq speech ec_fb.c 250

A misprint here causes the function to return 0 all the time. We are subtracting the wrong thing here; I do not know what it actually should be.

Incorrect processing of buffer overflow

typedef unsigned int Ipp32u;
UMC::Status Init(..., Ipp32u memSize, ...)
{
    ...
    memSize -= UMC::align_value<Ipp32u>(m_nFrames*sizeof(Frame));
    if (memSize < 0)
        return UMC::UMC_ERR_NOT_ENOUGH_BUFFER;
    ...
}

The PVS-Studio's diagnostic message: V547 Expression 'memSize < 0' is always false. Unsigned type value is never < 0. vc1_enc umc_vc1_enc_planes.h 200

Handling of the situation when the buffer's size is insufficient is implemented incorrectly. Instead of returning an error code, the program will continue working and will most likely crash. The point is that the "memSize" variable has the "unsigned int" type, so the "memSize < 0" condition is always false and we go on working past a buffer overflow. I think this is a good example of a software attack vulnerability: you may cause a buffer overflow by feeding incorrect data into the program and exploit it for your own purposes. By the way, we found about 10 such vulnerabilities in the code. I will not describe them all here, so as not to overload the text.

Overrun in the wake of an incorrect check

The PVS-Studio's diagnostic message: V547 Expression 'm_iCurrMBIndex - x < 0' is always false. Unsigned type value is never < 0. vc1_enc umc_vc1_enc_mb.cpp 188

The "m_iCurrMBIndex" variable has the "unsigned" type. Because of that, the "m_iCurrMBIndex - x" expression also has the "unsigned" type, and therefore the "m_iCurrMBIndex - x < 0" condition is always false. Let's see what consequences it has.
Let the "m_iCurrMBIndex" variable amount to 5 and "x" variable amount to 10. The "m_iCurrMBIndex - x" expression equals 5u - 10i = 0xFFFFFFFBu. The "m_iCurrMBIndex - x < 0" condition is false. The "m_MBInfo[row][0xFFFFFFFBu]" expression is executed and an overrun occurs. Error of using '?:' ternary operator The ternary operator is rather dangerous because you may easily make an error using it. Nevertheless, programmers like to write code as short as possible and use the interesting language construct. The C++ language punishes them for this. vm_file* vm_file_fopen(...) { ... mds[3] = FILE_ATTRIBUTE_NORMAL | (islog == 0) ? 0 : FILE_FLAG_NO_BUFFERING; ... } The PVS-Studio's diagnostic message: V502 Perhaps the '?:' operator works in a different way than it was expected. The '?:' operator has a lower priority than the '|' operator. vm vm_file_win.c 393 There must be a combination of flags FILE_ATTRIBUTE_NORMAL and FILE_FLAG_NO_BUFFERING. But actually, the "mds[3]" item is always assigned 0. The programmer forgot that the priority of "|" operator is higher than that of "?:" operator. So it turns out that we have the following expression in the code (note the brackets): (FILE_ATTRIBUTE_NORMAL | (islog == 0)) ? 0 : FILE_FLAG_NO_BUFFERING; The "FILE_ATTRIBUTE_NORMAL | (islog == 0)" condition is always true and we assign 0 to "mds[3]" item. This is the correct expression (note the brackets once again): FILE_ATTRIBUTE_NORMAL | ((islog == 0) ? 0 : FILE_FLAG_NO_BUFFERING); Strange handling of array AACStatus alsdecGetFrame(...) { ... for (i = 0; i < num; i++) { ... *tmpPtr = (Ipp32s)((tmp << 24) + ((tmp & 0xff00) << 8) + ((tmp >> 8) & 0xff00) + (tmp >> 24)); *tmpPtr = *srcPrt; ... } ... } The PVS-Studio's diagnostic message: V519 The '* tmpPtr' object is assigned values twice successively. Perhaps this is a mistake. aac_dec als_dec_api.c 928 I suggest that the readers examine the code themselves and draw conclusions. I would just call this code "peculiar". 
Paranormal assignments static IPLStatus ownRemap8u_Pixel(...) { ... saveXMask = xMap->maskROI; saveXMask = NULL; saveYMask = yMap->maskROI; saveYMask = NULL; ... } The PVS-Studio's diagnostic messages: V519 The 'saveXMask' object is assigned values twice successively. Perhaps this is a mistake. ipl iplremap.c 36 V519 The 'saveYMask' object is assigned values twice successively. Perhaps this is a mistake. ipl iplremap.c 38 I cannot see the reason for such strange code. Note that this block is repeated 8 times in different functions! There are also other strange assignments of one variable: Ipp32s ippVideoEncoderMPEG4::Init(mp4_Param *par) { ... mNumOfFrames = par->NumOfFrames; mNumOfFrames = -1; ... } The PVS-Studio's diagnostic message: V519 The 'mNumOfFrames' object is assigned values twice successively. Perhaps this is a mistake. mpeg4_enc mp4_enc_misc.cpp 276 Summary I described only some of the errors detected in IPP Samples for Windows in this article. I have not listed some errors because they are twins with those I have discussed in the article, so it would not be interesting to read about them. I also have not given inessential errors here. For instance, take assert() which always has a true condition because of a misprint. I skipped many code fragments because I simply did not know if there were errors or just poor code. But I think I have described enough defects to show you how difficult it is to write large projects even for skilled developers. Let me once again formulate the idea I have mentioned in the beginning of the article. Even a good programmer is not secure from misprints, absent-mindedness, urge to use Copy-Paste and logical errors. I think this article will be a good answer for those people who believe that the phrase "you must write correct code" will protect them against any errors. I wish you luck in all your C/C++/C++0x projects. May you find as many errors as possible using the static analysis methodology I love so much!
https://software.intel.com/fr-fr/articles/intel-ipp-samples-for-windows-error-correction
Java language offers you to work with several loops. Loops are basically used to execute a set of statements repeatedly until a particular condition is satisfied. Here, I will tell you about the ‘while’ loop in Java. The topics included in this article are mentioned below: - What is while loop in Java? - What is an Infinite while loop? Let’s begin! What is a while loop in Java? The Java while loop is used to iterate a part of the program again and again. If the number of iteration is not fixed, then you can use while loop. A pictorial representation of how a while loop works: In the above diagram, when the execution begins and the condition returns false, then the control jumps out to the next statement after the while loop. On the other hand, if the condition returns true then the statement inside the while loop is executed. Moving on with this article on While Loop in Java, Let’s have a look at the syntax: Syntax: while (condition) { // code block to be executed } Now that I have shown you the syntax, here is an example: Practical Implementation: class Example { public static void main(String args[]){ int i=10; while(i>1){ System.out.println(i); i--; } } } Output: 10 9 8 7 6 5 4 3 2 Next, let’s take a look at another example: Another example of While Loop in Java: // Java While Loop example package Loops; import java.util.Scanner; public class WhileLoop { private static Scanner sc; public static void main(String[] args) { int number, sum = 0; sc = new Scanner(System.in); System.out.println("n Please Enter any integer Value below 10: "); number = sc.nextInt(); while (number <= 10) { sum = sum + number; number++; } System.out.format(" Sum of the Numbers From the While Loop is: %d ", sum); } } Output: Please Enter any integer Value below 10: 7 Sum of the Numbers From the While Loop is: 34 Above illustrated example is a bit complex as compared to the previous one. Let me explain it step by step. 
In this Java while loop example, the machine would ask the user to enter any integer value below 10. Next, the While loop and the Condition inside the While loop will assure that the given number is less than or equal to 10. Now, User Entered value = 7 and I have initialized the sum = 0 This is how the iteration would work: (concentrate on the while loop written in the code) First Iteration: sum = sum + number sum = 0 + 7 ==> 7 Now, the number will be incremented by 1 (number ++) Second Iteration Now in the first iteration the values of both Number and sum has changed as: Number = 8 and sum = 7 sum = sum + number sum = 7 + 8 ==> 15 Again, the number will be incremented by 1 (number ++) Third Iteration Now, in the Second Iteration, the values of both Number and sum has changed as: Number = 9 and sum = 15 sum = sum + number sum = 15 + 9 ==> 24 Following the same pattern, the number will be incremented by 1 (number ++) again. Fourth Iteration In the third Iteration of the Java while loop, the values of both Number and sum has changed as: Number = 10 and sum = 24 sum = sum + number sum = 24 + 10 ==> 34 Finally, the number will be incremented by 1 (number ++) for the last time. Here, Number = 11. So, the condition present in the while loop fails. In the end, System.out.format statement will print the output as you can see above! Moving further, One thing that you need to keep in mind is that you should use increment or decrement statement inside while loop so that the loop variable gets changed on each iteration so that at some point, the condition returns false. This way you can end the execution of the while loop. Else, the loop would execute indefinitely. In such cases, where the loop executes indefinitely, you’ll encounter a concept of the infinite while loop in Java, which is our next topic of discussion! Infinite while loop in Java The moment you pass ‘true’ in the while loop, the infinite while loop will be initiated. 
Syntax:

while (true) {
    statement(s);
}

Practical Demonstration

Let me show you an example of an infinite while loop in Java:

class Example {
    public static void main(String args[]) {
        int i = 10;
        while (i > 1) {
            System.out.println(i);
            i++;
        }
    }
}

It’s an infinite while loop, hence it won’t end. This is because the condition in the code says i > 1, which will always be true as we are incrementing the value of i inside the while loop.

With this, I have reached the end of this blog. I really hope the above content added value to your Java knowledge. Let us keep exploring the Java world together. Stay tuned! If you have any questions, mention them in the comments section of this “While loop in Java” blog and we will get back to you as soon as possible.
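The runaway loop above terminates as soon as the update moves i toward the exit condition — i-- instead of i++. A small, self-contained sketch (the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class WhileFix {

    // Collects every value i takes; i-- guarantees that i > 1 eventually fails.
    public static List<Integer> countDown(int start) {
        List<Integer> seen = new ArrayList<>();
        int i = start;
        while (i > 1) {
            seen.add(i);
            i--; // decrement instead of increment, so the loop terminates
        }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(countDown(10)); // [10, 9, 8, 7, 6, 5, 4, 3, 2]
    }
}
```

Running it prints the same nine numbers as the first example of this post, and then stops.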
https://www.edureka.co/blog/java-while-loop/
Question: How can I convert an RGB integer to the corresponding RGB tuple (R, G, B)? Seems simple enough, but I can't find anything on Google. I know that for every RGB (r, g, b) you have the integer n = r*256**2 + g*256 + b; how can I solve the reverse in Python, i.e. given an n, get the r, g, b values?

Solution:1 I'm not a Python expert by any means, but as far as I know it has the same operators as C. If so, this should work, and it should also be a lot quicker than using modulo and division.

Blue = RGBint & 255
Green = (RGBint >> 8) & 255
Red = (RGBint >> 16) & 255

What it does is mask out the lowest byte in each case (the binary AND with 255 equals eight one-bits). For the green and red components it does the same, but shifts the colour channel into the lowest byte first.

Solution:2 From an RGB integer:

Blue = RGBint mod 256
Green = RGBint / 256 mod 256
Red = RGBint / 256 / 256 mod 256

This can be pretty simply implemented once you know how to get it. :)

Upd: Added python function. Not sure if there's a better way to do it, but this works on Python 3 and 2.4:

def rgb_int2tuple(rgbint):
    return (rgbint // 256 // 256 % 256, rgbint // 256 % 256, rgbint % 256)

There's also an excellent solution that uses bitshifting and masking, which is no doubt much faster, that Nils Pipenbrinck posted.

Solution:3

>>> import struct
>>> str = 'aabbcc'
>>> struct.unpack('BBB', str.decode('hex'))
(170, 187, 204)

for python3:

>>> struct.unpack('BBB', bytes.fromhex(str))

and

>>> rgb = (50, 100, 150)
>>> struct.pack('BBB', *rgb).encode('hex')
'326496'

for python3:

>>> bytes.hex(struct.pack('BBB', *rgb))

Solution:4 I assume you have a 32-bit integer containing the RGB values (e.g. ARGB). Then you can unpack the binary data using the struct module:

# Create an example value (this represents your 32-bit input integer in this example).
# The following line results in exampleRgbValue = binary 0x00FF77F0 (big endian)
exampleRgbValue = struct.pack(">I", 0x00FF77F0)

# Unpack the value (result is: a = 0, r = 255, g = 119, b = 240)
a, r, g, b = struct.unpack("BBBB", exampleRgbValue)

Solution:5

>>> r, g, b = (111, 121, 131)
>>> packed = int('%02x%02x%02x' % (r, g, b), 16)

This produces the following integer:

>>> packed
7305603

You can then unpack it either the long explicit way:

>>> packed % 256
131
>>> (packed / 256) % 256
121
>>> (packed / 256 / 256) % 256
111

..or in a more compact manner:

>>> b, g, r = [(packed >> (8*i)) & 255 for i in range(3)]
>>> r, g, b
(111, 121, 131)

The same applies with any number of digits, e.g. an RGBA colour:

>>> packed = int('%02x%02x%02x%02x' % (111, 121, 131, 141), 16)
>>> [(packed >> (8*i)) & 255 for i in range(4)]
[141, 131, 121, 111]

Solution:6 Just a note for anyone using Google's Appengine Images Python API. I found I had a situation where I had to supply a method with a 32-bit RGB color value. Specifically, if you're using the API to convert a PNG (or any image with transparent pixels), you'll need to supply the execute_transforms method with an argument called transparent_substitution_rgb, which has to be a 32-bit RGB color value. Borrowing from dbr's answer, I came up with a method similar to this:

def RGBTo32bitInt(r, g, b):
    return int('%02x%02x%02x' % (r, g, b), 16)

transformed_image = image.execute_transforms(output_encoding=images.JPEG,
    transparent_substitution_rgb=RGBTo32bitInt(255, 127, 0))

Solution:7

def unpack2rgb(intcol):
    tmp, blue = divmod(intcol, 256)
    tmp, green = divmod(tmp, 256)
    alpha, red = divmod(tmp, 256)
    return alpha, red, green, blue

If only the divmod(value, (divider1, divider2, divider3…)) suggestion was accepted, it would have simplified various time conversions too.
Solution:8 There's probably a shorter way of doing this:

dec = 10490586
hex = "%06x" % dec
r = hex[:2]
g = hex[2:4]
b = hex[4:6]
rgb = (r, g, b)

EDIT: this is wrong - gives the answer in Hex, OP wanted int. EDIT2: refined to reduce misery and failure - needed '%06x' to ensure hex is always shown as six digits [thanks to Peter Hansen's comment].

Solution:9 If you are using NumPy and you have an array of RGBints, you can also just change its dtype to extract the red, green, blue and alpha components:

>>> type(rgbints)
numpy.ndarray
>>> rgbints.shape
(1024L, 768L)
>>> rgbints.dtype
dtype('int32')
>>> rgbints.dtype = dtype('4uint8')
>>> rgbints.shape
(1024L, 768L, 4L)
>>> rgbints.dtype
dtype('uint8')

Solution:10 Adding to what is mentioned above, a concise one-liner alternative:

# 2003199 or #1E90FF is Dodger Blue.
tuple((2003199 >> Val) & 255 for Val in (16, 8, 0))  # (30, 144, 255)

And to avoid any confusion in the future:

from collections import namedtuple

RGB = namedtuple('RGB', ('Red', 'Green', 'Blue'))

rgb_integer = 16766720  # Or #ffd700 is Gold.

# The unpacking asterisk prevents a TypeError caused by missing arguments.
RGB(*((rgb_integer >> Val) & 255 for Val in (16, 8, 0)))
# RGB(Red=255, Green=215, Blue=0)
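Pulling the bit-shift answers together, the conversion and its inverse make a round-trip pair (the function names here are mine):

```python
def rgb_to_int(r, g, b):
    # Pack three 8-bit channels into one integer, red in the highest byte.
    return (r << 16) | (g << 8) | b

def int_to_rgb(rgbint):
    # Unpack by shifting each channel down into the low byte and masking it.
    return ((rgbint >> 16) & 255, (rgbint >> 8) & 255, rgbint & 255)

# 2003199 == 0x1E90FF, the Dodger Blue value used in solution 10.
print(int_to_rgb(2003199))               # (30, 144, 255)
print(rgb_to_int(*int_to_rgb(2003199)))  # 2003199
```

Because each channel occupies its own byte, the two functions invert each other exactly for any values in the 0–255 range.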
http://www.toontricks.com/2019/02/tutorial-rgb-int-to-rgb-python.html
Wrote the code below to take in a list/set and a string and if I am able to break the string into two words that are in the list then I return. I know I'm close, I just cant get the comparison right or something. e.g. wordBreakEasy("snowfall", ("apple", "fall", ..., "snow")) True def wordBreakEasy(str1, wordset): wordset1 = set(wordset) breakup = ['%s %s' % (str1[:i], str1[i:]) for i in range(1, len(str1))] newlist = [] for x in breakup: newlist.append((x.split())) wordset2 = set(map(tuple, newlist)) for wordset2 in wordset1: Add your word pairs as tuples, rather than as a space-delimited single string, then filter that list on tuples that are a subset of your wordset1 set: breakup = [(str1[:i], str1[i:]) for i in range(1, len(str1))] present = [tup for tup in breakup if not wordset1.issuperset(tup)] I used the set.issuperset() method here; it returns True if all of the elements in the argument iterable are present in the set, so if it returns True only if both elements in the tuple are present. Only then combine the words into a single string: newlist = [' '.join(tup) for tup in present] You don't need those intermediary lists, really; you only need to find if there is any such tuple that is a subset for your function to return True: breakup = ((str1[:i], str1[i:]) for i in range(1, len(str1))) return any(wordset1.issuperset(tup) for tup in breakup) I turned breakup into a generator expression; no need for the whole list to be built if you can find a matching word-pair early on. The any() function returns True as soon as one of the values it iterates over is true. Since that's a generator expression too, this tests word pairs lazily until a match is found. Demo: >>> def wordBreakEasy(str1, wordset): ... wordset1 = set(wordset) ... breakup = ((str1[:i], str1[i:]) for i in range(1, len(str1))) ... return any(wordset1.issuperset(tup) for tup in breakup) ... 
>>> wordBreakEasy("snowfall", ("apple", "fall", "...", "snow")) True >>> wordBreakEasy("snowflake", ("apple", "fall", "...", "snow")) False
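The version above tests exactly one split point. If the string may break into any number of dictionary words, the same membership test generalizes with a short dynamic program — this extension is mine, not part of the answer above:

```python
def word_break(s, words):
    # reachable[i] is True when s[:i] can be segmented into dictionary words.
    wordset = set(words)
    reachable = [True] + [False] * len(s)
    for end in range(1, len(s) + 1):
        reachable[end] = any(
            reachable[start] and s[start:end] in wordset
            for start in range(end)
        )
    return reachable[len(s)]

print(word_break("snowfallapple", {"apple", "fall", "snow"}))  # True
print(word_break("snowflake", {"apple", "fall", "snow"}))      # False
```

For two-word inputs it agrees with wordBreakEasy, and it also accepts strings like "snowfallapple" that need more than one cut.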
https://codedump.io/share/czWAK91HQEa6/1/comparing-tuples-to-a-list-of-non-tuples
I'm trying to make a program that checks if a file is there, if not make it with some writing in it. I've done my best to make sure everything is correct.

Code:

/* Section: header */
#include <iostream>
#include <fstream>
#include <cstdlib>
#include <string>

/* Section: constant */
const std::string NEWLINE = "\n"
const std::string NEWLINE_FLUSH = "std::endl"

/* Section: prototype */
void nfound( const std::string &v_file );

int main( int argc, char *argv[] )
{
    using namespace std;
    string the_file = "test_file.txt";

    if ( "the_file" ) {
        cout << "File found!";
    }
    else {
        cout << "File not found!";
        nfound( "the_file" );
    }

    cin.get();
    return 0;
}

void nfound( const std::string &v_file )
{
    using namespace std;
    ofstream a_file ( "v_file" );
    a_file << "This text will go inside!NEWLINE";
    a_file.close();
}

Errors:

|9|error: expected ',' or ';' before 'const'|
||In function 'int main(int, char**)':|
|21|error: 'nfound' was not declared in this scope|
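For what it's worth, here is a compiling sketch of what the post seems to be aiming at. Both reported errors come from the missing semicolons after the const string definitions (the parser then never properly sees the nfound prototype). Beyond that, if ("the_file") and ofstream a_file("v_file") pass quoted literals where the variables were intended, so the existence check and the output file name are both wrong. The helper names below are mine, and opening with std::ifstream is just one common way to test for existence:

```cpp
#include <fstream>
#include <string>

// True when the file can be opened for reading, i.e. it already exists.
bool file_exists(const std::string& path) {
    std::ifstream in(path);
    return in.good();
}

// Create (or overwrite) the file and write one line of text into it.
void make_file(const std::string& path) {
    std::ofstream out(path);  // note: the variable, not the literal "v_file"
    out << "This text will go inside!" << '\n';
}

// The intended main() logic: check first, create only when missing.
void check_or_create(const std::string& path) {
    if (!file_exists(path)) {
        make_file(path);
    }
}
```

Calling check_or_create("test_file.txt") from main() creates the file on the first run and leaves it untouched afterwards.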
https://cboard.cprogramming.com/cplusplus-programming/137033-string-parameter-errors.html
Answered by: Using Groups in regular Expressions Question Hi Experts, Am using VSTS Ultimate 2013 to extract a response using Regular Expressions. My Expression in webtest Looks like,(Context Parameter name: Details) "name":"(.*?)","ID":"(.*?)","District":null,"Region":"(.*?)","RegionNum":(.*?), How can i replace name, ID, Region in my requests? I tried {{Details_g1}}, {{Details_g2}}, {{Details_g3}}. Please suggest me how do i replace the groups. Friday, June 12, 2015 7:30 PM - Edited by AbhishekAruru Friday, June 12, 2015 8:19 PM Answers All replies - {{Details_g1}}, {{Details_g2}}, {{Details_g3}} didnt work for meFriday, June 12, 2015 8:19 PM Hi AbhishekAruru, Based on your issue, could you please tell me where you want to replace name, ID, Region in your requests? If possible, I suggest you could share me a screen shot for us so that we will further help you support this issue. Generally, I know that we could use the 'Extract Regular Expression' Extraction rule. in the web performance test. So please you check if you set the Use Groups as True in the 'Extract Regular Expression' Extraction rule for the web performance test request. Best Regards, We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time. Thanks for helping make community forums a great place. Click HERE to participate the survey.Monday, June 15, 2015 5:56 AM Hi Tina, Am already using "Use Groups" as true. Please find below the request (string body) body where am trying to replace. Attached is the screenshot that am using for the Extraction rule. 
****************** Request body************************* [{"Planned":"00000000-0000-0000-0000-000000000000","name":"xxx","NameDet":{"name":"xxx","ID":"yyy","District":null,"Region":"zzz","RegionNumber":aaa,}] ***************End of request body******************************************* I am trying to replace xxx with {{C_R_Details_g1}}, yyy with {{C_R_Details_g2}} , zzz with {{C_R_Details_g3}} and aaa with {{C_R_Details_g4}} Am not sure if am following the right syntax to replace a regular Expression in my request. Please suggest.Monday, June 15, 2015 1:00 PM I believe the "use groups" allows the extraction of one group, its value will be written to the named context parameter. If you want to extract three values then use three extraction rules, or possibly write your own extraction rule and provide three context parameters to receive the values. When using the test with the extract shown check the values shown in context parameters tab of the web test results to see what value has been extracted. I think there was another question within the last two months in this forum about using this extraction rule. But I am not able to search properly at the moment. Regards AdrianMonday, June 15, 2015 2:36 PM I guess... the only option is to write a custom plugin to meet my requirement. The requirement posted by me is pretty straight forward and easy to work with Jmeter and HP Loadrunner. Probably VSTS is not compatible with these tools, because not every one are good with writing C# codes :)Monday, June 15, 2015 2:39 PM Hi AbhishekAruru, Thanks for your reply. I did some research about this "use groups" properties for this 'Extract Regular Expression' Extraction rule in web performance test, I think that the Adrian's explanation is right for this "use groups" in web performance test. 
So if possible, please refer the following document try to write a custom plugin Extraction rule in web performance test If you still want to this feature "use groups" properties for this 'Extract Regular Expression' Extraction rule in web performance test, I suggest you could submit a uservoice:. The Visual Studio product team is listening to user voice there. You can send your idea there and people can vote. If you submit this suggestion, I hope you could post that link here, I will help you vote it. Thanks for your understanding. Best Regards, We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time. Thanks for helping make community forums a great place. Click HERE to participate the survey.Wednesday, June 17, 2015 3:20 AM Hi AbhishekAruru, please suggest if you got some way to extract multiple group values in different variables, I can see only first group value getting stored in variable, how to get values for other groups. I am facing situation same like you have mentioned. Wednesday, January 16, 2019 8:12 AM I believe that you are correct in that only the first group is saved by that extraction rule. I suggest creating your own plugin that uses the same regular expression and saves to multiple context parameters. Here is a version (simplified and not re-tested) of a plugin I used for a regex with a repeated group (i.e. using " ( )+ "). public class GetMatchesArticles : WebTestRequestPlugin { public override void PostRequest(object sender, PostRequestEventArgs e) { Regex theRegex = new Regex(" ... ( ... )+ ... 
"); MatchCollection matches = theRegex.Matches(e.Response.BodyString); int index = 0; foreach (Match mmmm in matches) { if (mmmm.Groups.Count > 1 && mmmm.Groups[1].Captures.Count > 0) { Capture cc = mmmm.Groups[1].Captures[0]; e.WebTest.Context["Match" + index] = cc.Value; index++; } } } } Simple coding changes can support multiple groups (i.e. " ( ) ( ) ") and getting the context parameter name as a plugin property. Regards AdrianWednesday, January 16, 2019 9:45 AM
https://social.msdn.microsoft.com/Forums/en-US/9f95202d-8ec5-450a-9701-54f17a68fe33/using-groups-in-regular-expressions?forum=vstest
Opened 4 years ago Closed 4 years ago #18661 closed Bug (invalid) special characters like "é" or "è" in sqlite3 file path raise a "sqlite3.OperationalError: unable to open database file" Description If my sqlite3 db file path have a "é" char, the "syncdb" action raise an error : "sqlite3.OperationalError: unable to open database file" I use in my settings os.path to create the path to this file on Windows. If this char is replaced by "e", syncdb is OK Change History (4) comment:1 Changed 4 years ago by comment:2 Changed 4 years ago by Under linux, django will run fine with a sqlite database with a non-ascii pathname. However, it will print out a UnicodeWarning. This is the output of django manage.py runserver: (...) Django version 1.5.dev20120820010350, using settings 'ticket18661.settings' Development server is running at Quit the server with CONTROL-C. /home/kandinski/.virtualenvs/django-bug-18661/src/django/django/db/backends/sqlite3/base.py:344: UnicodeWarning: Unicode unequal comparison failed to convert both arguments to Unicode - interpreting them as being unequal if self.settings_dict['NAME'] != ":memory:": [20/Aug/2012 01:25:08] "GET /admin/ HTTP/1.1" 200 3301 [20/Aug/2012 01:26:06] "GET /admin/auth/user/ HTTP/1.1" 200 6059 [20/Aug/2012 01:26:06] "GET /admin/jsi18n/ HTTP/1.1" 200 2164 [20/Aug/2012 01:26:13] "GET /admin/auth/user/add/ HTTP/1.1" 200 4615 [20/Aug/2012 01:26:13] "GET /admin/jsi18n/ HTTP/1.1" 200 2164 [20/Aug/2012 01:26:25] "POST /admin/auth/user/add/ HTTP/1.1" 302 0 [20/Aug/2012 01:26:25] "GET /admin/auth/user/2/ HTTP/1.1" 200 12995 [20/Aug/2012 01:26:25] "GET /admin/jsi18n/ HTTP/1.1" 200 2164 And a snippet from my settings: # Django settings for ticket18661 project. # coding: utf8 (...) DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'. (...) 
comment:3 Changed 4 years ago by The settings snippet seems to be missing the most important part of the information, the database's NAME. Make sure the name is an unicode string, something like this: 'NAME': u'name_containing_é' comment:4 Changed 4 years ago by I have verified that so long as the database name string is unicode (prefixed with u'', or in a file with from __future__ import unicode_literals), a sqlite file name with non-ascii characters works fine, on Windows. Tried 1.4 and current master, on current master tried both py2 and py3. The unicode warning noted above only happens when the DB name string is not unicode. Is your "é" inside a Unicode string in your settings?
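Putting comment:3 and comment:4 together, a working Python 2 settings fragment looks roughly like this (the directory layout and file name are made up for illustration; on Python 3 the issue disappears, since every literal is already unicode):

```python
# -*- coding: utf-8 -*-
# settings.py -- with unicode_literals, every string literal below is
# unicode, so a non-ASCII sqlite path works on Windows as well.
from __future__ import unicode_literals

import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        # A unicode string, so characters like "é" in the path are safe:
        'NAME': os.path.join(BASE_DIR, 'données.sqlite3'),
    }
}
```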
https://code.djangoproject.com/ticket/18661
07 May 2010 21:54 [Source: ICIS news]

HOUSTON (ICIS news)--US propylene contracts for May settled at a reduction of 12 cents/lb ($265/tonne, €209/tonne), pressured by a recent slump in spot prices, sources said on Friday.

The settlements put polymer-grade propylene (PGP) at 63.50 cents/lb and chemical-grade propylene (CGP) at 62.00 cents/lb, according to ICIS pricing.

Market participants attributed the drop to weaker demand and a sharp fall in the price of refinery grade propylene (RGP). RGP, a co-product of gasoline, is a key component in the cost structure of

The product traded this week at 39.00-40.00 cents/lb, down from deals done at 58.00-58.75 cents/lb in the first week of April.

The reduction in May was the first for US propylene contracts since October 2009 and only the second contract drop in the last 17 months.

But some market participants said the downturn in propylene could be short lived as

Light feedstocks were already on the rise in April, a consultant said, adding that crackers ran on 85% light feeds last month, up from 80% in March, driven largely by a jump in propane consumption.

One market participant attributed increased reliance on light feedstocks to a recent plunge in ethylene prices, saying cost-driven cracker operators were resuming a pattern of running mostly on ethane and propane.

“That will tighten propylene but the question is when it is going to happen,” the source said.

Another market participant predicted propylene prices would drop again in June, but the source said the market could rebound after that.

The outlook for the sector would also depend on how the start-up of a propane dehydrogenation (PDH) plant in

The Petrologistics plant in

Chevron Phillips Chemical, Enterprise Products, ExxonMobil, LyondellBasell and Shell Chemical are among the major

Dow Chemical, INEOS, Ascend Performance Materials and Total are among the main buyers.

($1 = €0.79)
http://www.icis.com/Articles/2010/05/07/9357632/us-propylene-for-may-drops-12-centslb-on-spot-price.html
Discussion in 'Visual Basic .NET' started by Warlocq2014, Oct 29, 2014. ? I know a tiny bit of both and Java seems more useful but what Do uses recommenced I would recommend Java, But if you want to look for more Python is best in my opinion!! Wait for more comments Have fun Java. It will give you a better understanding of the C language. Visual Basic net is kinda better to understand and easier to start. I'd go with C# which is very similar to Java and C# has grown pretty mature too and it's really easy to understand. Of course, it depends on what you are planning on creating. If you plan on writing software that you want to sell, keep in mind that Java is interpreted, not compiled. This feature makes Java portable between platforms. Of course, for every pro there is a con. Java decompilers are available for free and can be used by anyone with a little common sense and motivation. It is more difficult to secure and protect software written in Java than software written in a compiled language. That being said, I don't believe ANY software is 100% pirate-proof. If there's a will, there's always a way!. VB is easier to understand compare to java I started in Visual Basic and it was really easy to start with... Java is more versatile in most applications, but why not pick one that your interested in learn it first then learn the other... its better to know multiple languages in my opinion. C# The .NET framework is cross platform now with Java. If you learn the .NET framework then you're able to write software for the majority of small/medium sized businesses. It's capable of enterprise software, SOAP/REST web APIs, and though I hate asp .net, ASP .NET MVC converted me from PHP for web development. Also since there is a standardized IDE (Visual Studio), and standard package manager (NuGet) you don't have as many issues with project setup/etc like with Java. 
Hell the few times I've made Java spplications, the project setup part was probably the part that took the longest. Worst case scenario, if you know C#, learning Java becomes much easier. It's really just a difference in namespaces/class names/design patterns; and the IDE. I personally started with C#, then moved to VB.NET, because I just like the syntax, lol . After that I started learning C, and the main difference is that you have to write most of the functions by yourself, they are not pre-built as in the .NET Framework. But now... I regret that I do not know Java I'd recommend starting with VB.Net and once you will start to think in the other way -> go on and learn Java, because that's what I'm going to do. If you're going to develop applications only for windows and you don't plan to port them to another platform, then VB should be a good choice. On the other had, if you plan developing cross platform applications java is the way to go. java can go way more places than .net can. .Net is usually limited to windows with a mono framework for linux support. As long as you are in the windows ecosystem your code should run, this includes even on cellphones. Java is in android(Google OS for phones), so its a very popular language outside of windows. It depends on what you want to make and what type of application you are going to make, i would prefer java though VB.Net will be much easier and faster to learn. I suggest you go with it, and then see if it satisfies your needs. If it doesn't, it is always easier to learn a successive language as you'll have the programming basics already learned. For me it's better to learn Java because it's cross platform and it's a way to get in the C++ family Learning Java is a good gateway because it's so strict that it makes learning other languages easier. c# is my choice I also choose c#, wont regret I would suggest you to learn C#, you could learn faster than Java and imo C# is more mature programming language! 
https://www.blackhatworld.com/seo/should-i-learn-vb-net-or-java.714418/
Stork

Stork is a lightweight library focused on making the flight from JSON to Types as smooth as possible. Stork believes in simplicity, explicitness, and control. Based on functional programming principles and mildly inspired by Aeson, Stork is the sweet middle ground between JSON parsers such as Argo - full-fledged but requiring extra dependencies and too functional for some - and other parsers that require your types to be mutable, to throw on init, or that take control away from you.

How it works

To go from JSON to types, all you need to do is to state what fields you want to parse. Stork infers their type and parses them for you. To make that possible, your types must follow the FromJson protocol.

```swift
protocol FromJson {
  static func from(value: JsonValue) -> Self?
}
```

In practice, this means that for a type to be parseable from JSON, it needs to provide a way of being constructed from a JsonValue: string, number, bool, JSON, or [JsonValue]. At compile time, Stork requires the types you want to get from JSON to be FromJson compliant. Otherwise, you encounter the following compile error message in Xcode:

Generic parameter 'T' could not be inferred

Examples

Say that we want to retrieve values of type User from some JSON input, where User and its nested types are defined as follows:

```swift
struct User {
  let id: Int
  let name: String
  let email: String?
  let country: Country?
  let subscription: Subscription
  let favouriteSongs: [Song]
}

enum Subscription: String {
  case free
  case bronze
  case silver
  case gold
}

enum Country: String {
  case netherlands
  case portugal
}

struct Song: Equatable {
  let name: String
  let band: String
}
```

With Stork, to go from JSON to this model all we need to do is to have these types conform to our FromJson protocol.

```swift
extension User: FromJson {
  static func from(value: JsonValue) -> User? {
    return value.ifObject { json in
      User(
        id: try json .! "id",
        name: try json .! "name",
        email: json .? "email",
        country: json .? "country",
        subscription: try json .! "subscription",
        favouriteSongs: (json ..? "favouriteSongs") ?? []
      )
    }
  }
}

// That's all you need for String/Int RawRepresentable Enums!
extension Subscription: FromJson {}

// Or you get to say how it's done!
// In this case, the country in JSON is short-coded and
// thus needs to be translated to the right Country case.
extension Country: FromJson {
  static func from(value: JsonValue) -> Country? {
    return value.ifString { str in
      switch str {
      case "nl": return .netherlands
      case "pt": return .portugal
      default: return nil
      }
    }
  }
}

extension Song: FromJson {
  static func from(value: JsonValue) -> Song? {
    return value.ifObject { json in
      Song(
        name: try json .! "name",
        band: try json .! "band"
      )
    }
  }
}
```

Now that we have everything, let's get Stork to deliver that baby:

```swift
// Single user
let maybeUser: User? = User.from(json: userJSON)

// Array of users
let users: [User] = [User].from(jsonArray: usersJSON)
```

See more unit-tested examples in the Examples directory.

Installation

You can add Stork as a dependency to your project in the following ways.

CocoaPods

- To add Stork to your Xcode project using CocoaPods, add this line to your Podfile: pod 'StorkEgg', '0.2.1'

  Note: The pod name is StorkEgg since Stork was already taken.
- Then let CocoaPods fetch and install it for you: pod install
- Finally, build your project and import Stork: import Stork

Git Submodules

```shell
# Add Stork as a git submodule to your repository
git submodule add git@github.com:NunoAlexandre/stork.git

# Get the most updated version of Stork
git submodule update --remote
```

Read more about Git Submodules here.

TODO

I plan to support Carthage and the Swift Package Manager. Help is rather appreciated!
https://swiftpack.co/package/NunoAlexandre/stork
New Indigo Project #2: WindowBuilder

In my series on new projects that have joined the Indigo release train, Eric Clayberg, project leader of WindowBuilder, answers the 4 questions. As a side note, I have known Eric since the days of WindowBuilder for Smalltalk. It is great to have him leading the charge of making WindowBuilder an Eclipse project.

1. What does your project provide to an Eclipse user?

2. Why are you personally involved in the project?

I have been involved with the WindowBuilder project in its many various forms since 1992, when it was focused on Smalltalk. I have been the project leader for the Java/Eclipse version since its first release in 2003 by Instantiations, and then led the effort at Google to make it free and then contribute it to Eclipse. I have been very impressed with Google's willingness to "do the right thing" and make this wonderful technology freely available to everyone.

3. What is the future roadmap for your project?

WindowBuilder has been a huge commercial success over the years, and now we want to migrate it to being a successful open source project. While the technology is very mature from an end-user (e.g., Swing and SWT developer) POV, we need to do a lot to establish and document the public APIs so that others may build upon the framework to create new UI designers for other UI toolkits.

4. What have been your experiences of participating in the Indigo release train?

We are relatively late to this release cycle and have been rushing to catch the train and release WindowBuilder with the rest of Indigo. We had a lot of work to do to move WindowBuilder to the Eclipse namespace, vet all of its IP, externalize all of its strings, update its build process, etc. It has been a labor of love for all involved and a huge learning process for all of us.

Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/new-indigo-project-2
To the Pros out there: Hi, I've been searching for a bug for 2 days, and I asked a lot of pros in my company, but nobody could help me. Here's the situation:

Visual Studio: Microsoft Visual Studio 2005 Version 8.0.50727.762 (SP.050727-7600)

I had a project with a strange bug, so I could boil the problem down to this simple program (I took the original project and deleted file after file to see when it would start behaving correctly) and still got the strange behaviour. Here's the whole program; it's a .vcproj project with one source file called main.cpp containing the following:

Code:
#include <string>

class EventTest
{
public:
    EventTest (int a_id, const std::string& a_text);

    int m_Id;
    std::string m_Text;
};

/*inline*/ EventTest::EventTest (int a_id, const std::string& a_text)
: m_Id(a_id)
, m_Text(a_text)
{
    //m_Id = a_id;
    //m_Text = a_text;
}

int main(int argc, char** argv)
{
    EventTest test(1, "Text");
    //std::string strTest = "TEST";
    return 0;
}

There are also some additional Dependencies in the Properties, but I won't list them here.

The result when debugging it is extremely strange. It's about the content of m_Text, which (watched with the debugger) is always something like "0,0,0,0,T,e,x,t,-b,4,0,0,0,d,4,....." (and signs I can't write here) and so on, with a string size of _MySize = 14220893!! instead of 4. As soon as I use a variable of type std::string (the line that is commented out now), all works fine!!!

I can make the behaviour go the other way round as well: make the constructor inline, or initialize the members in the body. When doing either of the two, the program only works correctly (having "Text" in the string) when NOT having the variable of type std::string in the code!!

I made another new project (not deleting things from the original project) with exactly the same code, the same project settings, and the same Dependencies. And this program is ALWAYS working correctly!!! But I can't find any difference; it's like magic. Also the output (which DLLs are loaded) is the same.

The program would actually work fine; only this std::string in the class is nasty! Does anybody have an idea? Or a tip on how to analyze the problem further?? I'm sure it is some strange side effect, but of what?? Thanks for any input!

andreas
http://cboard.cprogramming.com/cplusplus-programming/93971-std-string-has-my-compiler-gone-nuts-printable-thread.html
This page provides information to help you plan a new installation of Anthos Service Mesh.

Customize the control plane

The features that Anthos Service Mesh supports differ between platforms. We recommend that you review the Supported features to learn which features are supported on your platform. Some features are enabled by default, and others you can optionally enable by creating an IstioOperator overlay file. When you run asmcli install, you can customize the control plane by specifying the --custom_overlay option with the overlay file.

You can use Anthos Service Mesh certificate authority (Mesh CA) or Istio CA. Certificates from Mesh CA include the following data about your application's services:

- The Google Cloud project ID
- The GKE namespace
- The GKE service account name
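For a sense of what an overlay file looks like: it is a partial IstioOperator resource whose fields are layered over the defaults. The sketch below uses Istio's real resource kind and apiVersion; the specific setting shown (writing Envoy access logs to stdout) is just one illustrative example of something you might enable, not a required part of any installation.

```yaml
# overlay.yaml -- a minimal IstioOperator overlay (illustrative)
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    # Send Envoy access logs to stdout
    accessLogFile: /dev/stdout
```

You would then pass it during installation with the option mentioned above, e.g. asmcli install --custom_overlay overlay.yaml, alongside whatever other arguments your installation requires.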
https://cloud.google.com/service-mesh/v1.10/docs/unified-install/plan-install
Part of twisted.flow

flow.protocol

This allows one to use the flow module to create protocols. A protocol is actually a controller, but it is specialized enough to deserve its own module.

Construct a flow-based protocol. This takes a base protocol class and a set of callbacks, and creates a connection flow based on the two. For example, the following would build a simple 'echo' protocol:

from __future__ import generators
from twisted.internet import reactor, protocol
from twisted.flow import flow

PORT = 8392

def echoServer(conn):
    yield conn
    for data in conn:
        conn.write(data)
        yield conn

def echoClient(conn):
    conn.write("hello, world!")
    yield conn
    print "server said: ", conn.next()
    reactor.callLater(0, reactor.stop)

server = protocol.ServerFactory()
server.protocol = flow.makeProtocol(echoServer)
reactor.listenTCP(PORT, server)

client = protocol.ClientFactory()
client.protocol = flow.makeProtocol(echoClient)
reactor.connectTCP("localhost", PORT, client)

reactor.run()

Of course, the best part about flow is that you can nest stages. Therefore it is quite easy to make a lineBreaker generator which takes an input connection and produces an output connection. Anyway, the code is almost identical as far as the client/server is concerned:

# this is a filter generator: it consumes from the
# incoming connection, and yields results to
# the next stage, the echoServer below
def lineBreaker(conn, lineEnding = "\n"):
    lst = []
    yield conn
    for chunk in conn:
        pos = chunk.find(lineEnding)
        if pos > -1:
            lst.append(chunk[:pos])
            yield "".join(lst)
            lst = [chunk[pos+1:]]
        else:
            lst.append(chunk)
        yield conn
    yield "".join(lst)

# note that this class is only slightly modified;
# simply comment out the line-breaker line to see
# how the server behaves without the filter...
def echoServer(conn):
    lines = flow.wrap(lineBreaker(conn))
    yield lines
    for data in lines:
        conn.write(data)
        yield lines

# and the only thing that is changed is that we
# are sending data in strange chunks, and even
# putting the last chunk on hold for 2 seconds.
def echoClient(conn):
    conn.write("Good Morning!\nPlease ")
    yield conn
    print "server said: ", conn.next()
    conn.write("do not disregard ")
    reactor.callLater(2, conn.write, "this.\n")
    yield conn
    print "server said: ", conn.next()
    reactor.callLater(0, reactor.stop)
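Outside of Twisted, the reassembly idea behind lineBreaker can be exercised with a plain generator over an in-memory list of chunks. This is an illustrative sketch in modern Python 3 (no Twisted API involved), which additionally handles several line endings within a single chunk:

```python
def line_breaker(chunks, line_ending="\n"):
    """Reassemble complete lines from arbitrarily split chunks."""
    buffered = []
    for chunk in chunks:
        pos = chunk.find(line_ending)
        while pos > -1:
            # A line ending completes the buffered text: emit one line.
            buffered.append(chunk[:pos])
            yield "".join(buffered)
            buffered = []
            chunk = chunk[pos + 1:]
            pos = chunk.find(line_ending)
        buffered.append(chunk)
    if any(buffered):
        # Flush any trailing partial line at end of input.
        yield "".join(buffered)

chunks = ["Good Morning!\nPlease ", "do not disregard ", "this.\n"]
print(list(line_breaker(chunks)))
# -> ['Good Morning!', 'Please do not disregard this.']
```

The same chunk sequence the echoClient above sends in three pieces comes out as two complete lines, which is exactly what the Twisted filter stage hands to echoServer.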
http://twistedmatrix.com/documents/8.2.0/api/twisted.flow.protocol.html
/* fgens */ /* FGENS.H */ #ifndef CSOUND_FGENS_H #define CSOUND_FGENS_H #define MAXFNUM 100 #define GENMAX 60 /** * Create ftable using evtblk data, and store pointer to new table in *ftpp. * If mode is zero, a zero table number is ignored, otherwise a new table * number is automatically assigned. * Returns zero on success. */ int hfgens(CSOUND *csound, FUNC **ftpp, const EVTBLK *evtblkp, int mode); /** * Allocates space for 'tableNum' with a length (not including the guard * point) of 'len' samples. The table data is not cleared to zero. * Return value is zero on success. */ int csoundFTAlloc(CSOUND *csound, int tableNum, int len); /** * Deletes a function table. * Return value is zero on success. */ int csoundFTDelete(CSOUND *csound, int tableNum); #endif /* CSOUND_FGENS_H */
http://csound.sourcearchive.com/documentation/1:5.12.1~dfsg-2ubuntu2/fgens_8h-source.html
Opened 4 years ago. Closed 4 years ago. Last modified 4 years ago.

#15337 closed (worksforme): cyclic import dependency when extending a builtin backend

Description (last modified by russellm)

When writing a custom cache backend which inherits from Django's builtin memcache backend, say mycache.py:

from django.core.cache.backends.memcached import CacheClass as BaseCacheClass

class MyCache(BaseCacheClass):
    ....

and setting the backend to "mycache.MyCache" in settings.py, mycache.py and django.core.cache.__init__.py import each other. Turning the cache object in django.core.cache.__init__.py into a SimpleLazyObject would solve this problem.

Attachments (1)

Change History (5)

Changed 4 years ago by yi.codeplayer@…

comment:1 Changed 4 years ago by russellm

Can't reproduce -- I dropped in a custom cache class defined as you describe, and don't see the problem you describe. Looking at the code, I can't even work out what set of conditions would allow this to be reproduced -- I don't see any circular dependency. memcached doesn't import anything from django.core.cache.

comment:2 Changed 4 years ago by anonymous

- Resolution worksforme deleted
- Status changed from closed to reopened

You need to configure this custom backend as the default cache backend in settings.py to reproduce this problem; django.core.cache imports the default cache backend. Sorry for the poor English description.

comment:3 Changed 4 years ago by russellm

- Resolution set to worksforme
- Status changed from reopened to closed

And that's exactly what I did. If you want to convince me, you'll need to provide a sample project.

comment:4 Changed 4 years ago by anonymous

I was wrong, there's no bug in django ;-)
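The fix proposed in the ticket, wrapping the module-level cache in a lazy object, works because construction is deferred until first attribute access, long after import time. Here is a minimal sketch of the lazy-proxy idea behind Django's SimpleLazyObject (which lives in django.utils.functional); the names LazyCache, DummyBackend, and make_backend are made up for the demo:

```python
class LazyCache:
    """Defer constructing the real backend until first attribute access."""
    def __init__(self, factory):
        self._factory = factory
        self._wrapped = None

    def __getattr__(self, name):
        # Only called for attributes not found on LazyCache itself.
        if self._wrapped is None:
            self._wrapped = self._factory()  # construct the real backend now
        return getattr(self._wrapped, name)

class DummyBackend:
    def get(self, key):
        return "value-for-" + key

created = []

def make_backend():
    created.append(True)  # record when construction really happens
    return DummyBackend()

cache = LazyCache(make_backend)
print(len(created))    # 0: nothing constructed at "import time"
print(cache.get("x"))  # value-for-x (backend built on first use)
print(len(created))    # 1: constructed exactly once
```

Because nothing is constructed while the module is still being imported, a custom backend module can safely import this module without completing a cycle.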
https://code.djangoproject.com/ticket/15337
copy_file_range man page

copy_file_range — Copy a range of data from one file to another

Synopsis

#include <sys/syscall.h>

The flags argument is provided to allow for future extensions and currently must be 0.

Return Value

Upon successful completion, copy_file_range() will return the number of bytes copied between files. This could be less than the length originally requested.

Errors

- EINVAL: Requested range extends beyond the end of the source file; or the flags argument is not 0.
- EIO: A low-level I/O error occurred while copying.

Conforming to

The copy_file_range() system call is a nonstandard Linux extension.

This page is part of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page can be found at the project's website.

Referenced By

sendfile(2), splice(2), stress-ng(1), syscalls(2), xfs_io(8).
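To see the call in action without writing C, Python exposes the same system call as os.copy_file_range (Python 3.8+, Linux only). The sketch below is illustrative: the helper name copy_range is made up, and it falls back to an ordinary read/write loop where the binding is unavailable.

```python
import os
import tempfile

def copy_range(src_path, dst_path, count, offset=0):
    """Copy `count` bytes starting at `offset` of src into dst."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        if hasattr(os, "copy_file_range"):
            copied = 0
            while copied < count:
                n = os.copy_file_range(src.fileno(), dst.fileno(),
                                       count - copied, offset + copied)
                if n == 0:  # reached end of the source file
                    break
                copied += n
            return copied
        # Portable fallback for platforms without copy_file_range.
        src.seek(offset)
        data = src.read(count)
        dst.write(data)
        return len(data)

with tempfile.TemporaryDirectory() as d:
    a, b = os.path.join(d, "a"), os.path.join(d, "b")
    with open(a, "wb") as f:
        f.write(b"hello, copy_file_range")
    n = copy_range(a, b, 5)
    with open(b, "rb") as f:
        print(n, f.read())  # 5 b'hello'
```

Note the loop around the call: as the Return Value section above says, each invocation may copy fewer bytes than requested, so callers should not assume a single call finishes the job.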
https://www.mankier.com/2/copy_file_range
Ticket #371 (closed defect: fixed)

New --database option for migrate command doesn't work

Description

I attempted using the --database option when migrating and noticed it always used the default database.

settings.py:

DATABASES = {
    'default': {
        'NAME': 'test',
        'ENGINE': 'django.db.backends.mysql',
        'USER': 'test',
    },
    'test': {
        'NAME': 'test1',
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'USER': 'test1',
    },
}

Using the command:

python manage.py migrate appname --database=test

the default database is used instead of 'test'. It appears the correct database is set at one point. However, the code uses the import: from south.db import db. When it does that, the last line of that file always sets the db to the default. So even though we passed in a different database at the command line, it is always overwritten during the import.

Fixed in [2273d7101099]. It was actually because "from x import y" imports copy the name into the importing namespace, so when we reassign south.db.db it doesn't reassign the copy held by the importing module.
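The root cause noted in the fix is a general Python behaviour: "from x import y" binds y in the importing namespace, so later reassigning x.y does not update that binding. A self-contained demonstration (the module name fake_south_db is made up for the demo; it stands in for south.db):

```python
import sys
import types

# Build a tiny stand-in module so the demo needs no files on disk.
mod = types.ModuleType("fake_south_db")
exec(
    "db = 'default'\n"
    "def use_database(name):\n"
    "    global db\n"
    "    db = name\n",
    mod.__dict__,
)
sys.modules["fake_south_db"] = mod

from fake_south_db import db, use_database  # 'db' is copied into this namespace

use_database("test")  # rebinds fake_south_db.db ...
print(db)             # ... but our copy still says: default
print(mod.db)         # the module attribute itself now says: test
```

This is why the fix had to stop relying on reassigning south.db.db: every caller that had already done "from south.db import db" would keep its stale reference to the default backend.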
http://south.aeracode.org/ticket/371
public class Gradesv2 {
    public static void main(String[ ] args) {
        //local variables
        int numTests = 0;      //counts number of tests
        int testgGrades = 0;   //individual test grade
        int totalPoints = 0;   //total points for all test
        double average = 0.0;  //average grade

        int test1 = 95;
        int test2 = 73;
        int test3 = 91;
        int test4 = 82;

        System.out.println("");
        System.out.println("Test # 1 Test Grade: " + test1 + " Total Points: " + test1 + " = " + totalPoints + "Average Score: " + test1 = +);
    }//end of main method
}//end of class

Recommended Answers

Answered by Reverend Jim in a post:

Let's start at line 0. Because I don't know all languages I'm not going to make an assumption here, so what programming language is this? And what line is giving the error? It would also help if you posted in the appropriate forum.

This is java code.

All 6 Replies

Reverend Jim: Hi, I'm Jim, one of DaniWeb's moderators.

rproffitt commented: When I first read this, I didn't see that tag. Anyhow, will re-read.
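For reference, the "illegal start of expression" comes from the trailing + test1 = + in the println: an assignment can't sit in the middle of a string-concatenation expression, and the statement ends with a dangling +. A corrected sketch (assuming the intent was to total the four grades and print the average; the class name GradesFixed is made up) might look like:

```java
public class GradesFixed {
    public static void main(String[] args) {
        int test1 = 95, test2 = 73, test3 = 91, test4 = 82;

        // Accumulate the total instead of assigning inside println
        int numTests = 4;
        int totalPoints = test1 + test2 + test3 + test4;
        double average = (double) totalPoints / numTests;

        System.out.println("Test # 1 Test Grade: " + test1
                + " Total Points: " + totalPoints
                + " Average Score: " + average);
    }
}
```

With the sample grades this prints Total Points: 341 and Average Score: 85.25.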
https://www.daniweb.com/programming/threads/517803/i-keep-getting-this-illegal-start-of-expression-and-can-t-figure-out-why
=head1 NAME perl5db.pl - the perl debugger =head1 SYNOPSIS perl -d your_Perl_script =head1 DESCRIPTION C<perl5db.pl> is the perl debugger. It is loaded automatically by Perl when you invoke a script with C<perl -d>. This documentation tries to outline the structure and services provided by C<perl5db.pl>, and to describe how you can use them. See L<perldebug> for an overview of how to use the debugger. =head1 C<local()> operator in creative ways. Some of these have survived into the current debugger; a few of the more interesting and still-useful idioms are noted in this section, along with notes on the comments themselves. =head2 I<are> debugger internals, and are therefore subject to change. Future development should probably attempt to replace the globals with a well-defined API, but for now, the variables are what we've got. =head2 Automated variable stacking via C<local()> As you may recall from reading C<perlfunc>, the, C<$some_global> is localized, then altered. When the subroutine returns, Perl automatically undoes the localization, restoring the previous value. Voila, automatic stack management. The debugger uses this trick a I<lot>. Of particular note is C<DB::eval>, which lets the debugger get control inside of C<eval>'ed code. The debugger localizes a saved copy of C<$@> inside the subroutine, which allows it to keep C<$@> safe until it C<DB::eval> returns, at which point the previous value of C<$@> is restored. This makes it simple (well, I<simpler>) to keep track of C<$@> inside C<eval>s which C<eval> other C<eval's>. In any case, watch for this pattern. It occurs fairly often. =head2 The C<^> trick This is used to cleverly reverse the sense of a logical test depending on the value of an auxiliary variable. For instance, the debugger's C<S> (search for subroutines by pattern) allows you to negate the pattern like this: # Find all non-'foo' subs: S !/foo/ Boolean algebra states that the truth table for XOR looks like this: =over 4 =item * 0 ^ 0 = 0 (! 
not present and no match) --> false, don't print =item * 0 ^ 1 = 1 (! not present and matches) --> true, print =item * 1 ^ 0 = 1 (! present and no match) --> true, print =item * 1 ^ 1 = 0 (! present and matches) --> false, don't print =back As you can see, the first pair applies when C<!> isn't supplied, and the second pair applies when it is. The XOR simply allows us to compact a more complicated if-then-elseif-else into a more elegant (but perhaps overly clever) single test. After all, it needed this explanation... =head2 FLAGS, FLAGS, FLAGS There is a certain C programming legacy in the debugger. Some variables, such as C<$single>, C<$trace>, and C<$frame>, have I, C<$scalar> is acting sort of like an array of bits. Obviously, since the contents of? =over 4 =item * First, doing an arithmetical or bitwise operation on a scalar is just about the fastest thing you can do in Perl: C<use constant> actually creates a subroutine call, and array and hash lookups are much slower. Is this over-optimization at the expense of readability? Possibly, but the debugger accesses these variables a I<lot>. Any rewrite of the code will probably have to benchmark alternate implementations and see which is the best balance of readability and speed, and then document how it actually works. =item * Second, it's very easy to serialize a scalar number. This is done in the restart code; the debugger state variables are saved in C<%ENV> and then restored when the debugger is restarted. Having them be just numbers makes this trivial. =item *). =back =head2 What are those C<XXX> comments for? Any comment containing C C<XXX>. =head1 DATA STRUCTURES MAINTAINED BY CORE There are a number of special data structures provided to the debugger by the Perl interpreter. The array C<@{$main::{'_<'.$filename}}> (aliased locally to C<@dbline> via glob assignment) contains the text from C<$filename>, with each element corresponding to a single line of C<$filename>. 
Additionally, breakable lines will be dualvars with the numeric component being the memory address of a COP node. Non-breakable lines are dualvar to 0. The hash C<%{'_<'.$filename}> (aliased locally to C<%dbline> via glob assignment) contains breakpoints and actions. The keys are line numbers; you can set individual values, but not the whole hash. The Perl interpreter uses this hash to determine where breakpoints have been set. Any true value is considered to be a breakpoint; C<perl5db.pl> uses C<$break_condition\0$action>. Values are magical in numeric context: 1 if the line is breakable, 0 if not. The scalar C<${"_<$filename"}> simply contains the string C<$filename>. This is also the case for evaluated strings that contain subroutines, or which are currently being executed. The $filename for C<eval>ed strings looks like C<(eval 34)>. =head1 DEBUGGER STARTUP When C<perl5db.pl> starts, it reads an rcfile (C<perl5db.ini> for non-interactive sessions, C<.perldb> for interactive ones) that can set a number of options. In addition, this file may define a subroutine C<&afterinit> that will be executed (in the debugger's context) after the debugger has initialized itself. Next, it checks the C<PERLDB_OPTS> environment variable and treats its contents as the argument of a C<o> command in the debugger. =head2 STARTUP-ONLY OPTIONS The following options can only be specified at startup. To set them in your rcfile, add a call to C<&parse_options("optionName=new_value")>. =over 4 =item * TTY the TTY to use for debugging i/o. =item * noTTY if set, goes in NonStop mode. On interrupt, if TTY is not set, uses the value of noTTY or F<$HOME/.perldbtty$$> to find TTY using Term::Rendezvous. Current variant is to have the name of TTY in this file. =item * ReadLine if false, a dummy ReadLine is used, so you can debug ReadLine applications. =item * NonStop if true, no i/o is performed until interrupt. =item * LineInfo file or pipe to print line number info to. 
If it is a pipe, a short "emacs like" message is used. =item * RemotePort host:port to connect to on remote host for remote debugging. =item * HistFile file to store session history to. There is no default and so no history file is written unless this variable is explicitly set. =item * HistSize number of commands to store to the file specified in C<HistFile>. Default is 100. =back =head3 SAMPLE RCFILE &parse_options("NonStop=1 LineInfo=db.out"); sub afterinit { $trace = 1; } The script will run without human intervention, putting trace information into C<db.out>. (If you interrupt it, you had better reset C<LineInfo> to something I<interactive>!) =head1 INTERNALS DESCRIPTION =head2 DEBUGGER INTERFACE VARIABLES Perl supplies the values for C<%sub>. It effectively inserts a C<&DB::DB();> in front of each place that can have a breakpoint. At each subroutine call, it calls C<&DB::sub> with C<$DB::sub> set to the called subroutine. It also inserts a C<BEGIN {require 'perl5db.pl'}> before the first line. After each C<require>d file is compiled, but before it is executed, a call to C<&DB::postponed($main::{'_<'.$filename})> is done. C<$filename> is the expanded name of the C<require>d file (as found via C<%INC>). =head3 IMPORTANT INTERNAL VARIABLES =head4 C<$CreateTTY> Used to control when the debugger will attempt to acquire another TTY to be used for input. =over =item * 1 - on C<fork()> =item * 2 - debugger is started inside debugger =item * 4 - on startup =back =head4 C<$doret> The value -2 indicates that no return value should be printed. Any other positive value causes C<DB::sub> to print return values. =head4 C<$evalarg> The item to be eval'ed by C<DB::eval>. Used to prevent messing with the current contents of C<@_> when C<DB::eval> is called. =head4 C<$frame> Determines what messages (if any) will get printed when a subroutine (or eval) is entered or exited. 
=over 4 =item * 0 - No enter/exit messages =item * 1 - Print I<entering> messages on subroutine entry =item * 2 - Adds exit messages on subroutine exit. If no other flag is on, acts like 1+2. =item * 4 - Extended messages: C<< <in|out> I<context>=I<fully-qualified sub name> from I<file>:I<line> >>. If no other flag is on, acts like 1+4. =item * 8 - Adds parameter information to messages, and overloaded stringify and tied FETCH is enabled on the printed arguments. Ignored if C<4> is not on. =item * 16 - Adds C<I<context> return from I<subname>: I<value>> messages on subroutine/eval exit. Ignored if C<4> is not on. =back To get everything, use C<$frame=30> (or C<o f=30> as a debugger command). The debugger internally juggles the value of C<$frame> during execution to protect external modules that the debugger uses from getting traced. =head4 C<$level> Tracks current debugger nesting level. Used to figure out how many C<E<lt>E<gt>> pairs to surround the line number with when the debugger outputs a prompt. Also used to help determine if the program has finished during command parsing. =head4 C<$onetimeDump> Controls what (if anything) C<DB::eval()> will print after evaluating an expression. =over 4 =item * C<undef> - don't print anything =item * C<dump> - use C<dumpvar.pl> to display the value returned =item * C<methods> - print the methods callable on the first item returned =back =head4 C<$onetimeDumpDepth> Controls how far down C<dumpvar.pl> will go before printing C<...> while dumping a structure. Numeric. If C<undef>, print all levels. =head4 C<$signal> Used to track whether or not an C<INT> signal has been detected. C<DB::DB()>, which is called before every statement, checks this and puts the user into command mode if it finds C<$signal> set to a true value. =head4 C<$single> Controls behavior during single-stepping. Stacked in C<@stack> on entry to each subroutine; popped again at the end of each subroutine. =over 4 =item * 0 - run continuously. 
=item * 1 - single-step, go into subs. The C<s> command. =item * 2 - single-step, don't go into subs. The C<n> command. =item * 4 - print current sub depth (turned on to force this when C<too much recursion> occurs. =back =head4 C<$trace> Controls the output of trace information. =over 4 =item * 1 - The C<t> command was entered to turn on tracing (every line executed is printed) =item * 2 - watch expressions are active =item * 4 - user defined a C<watchfunction()> in C<afterinit()> =back =head4 C<$slave_editor> 1 if C<LINEINFO> was directed to a pipe; 0 otherwise. =head4 C<@cmdfhs> Stack of filehandles that C<DB::readline()> will read commands from. Manipulated by the debugger's C<source> command and C<DB::readline()> itself. =head4 C<@dbline> Local alias to the magical line array, C<@{$main::{'_<'.$filename}}> , supplied by the Perl interpreter to the debugger. Contains the source. =head4 C<@old_watch> Previous values of watch expressions. First set when the expression is entered; reset whenever the watch expression changes. =head4 C<@saved> Saves important globals (C<$@>, C<$!>, C<$^E>, C<$,>, C<$/>, C<$\>, C<$^W>) so that the debugger can substitute safe values while it's running, and restore them when it returns control. =head4 C<@stack> Saves the current value of C<$single> on entry to a subroutine. Manipulated by the C<c> command to turn off tracing in all subs above the current one. =head4 C<@to_watch> The 'watch' expressions: to be evaluated before each line is executed. =head4 C<@typeahead> The typeahead buffer, used by C<DB::readline>. =head4 C<%alias> Command aliases. Stored as character strings to be substituted for a command entered. =head4 C<%break_on_load> Keys are file names, values are 1 (break when this file is loaded) or undef (don't break when it is loaded). =head4 C<%dbline> Keys are line numbers, values are C<condition\0action>. 
If used in numeric context, values are 0 if not breakable, 1 if breakable, no matter what is in the actual hash entry. =head4 C<%had_breakpoints> Keys are file names; values are bitfields: =over 4 =item * 1 - file has a breakpoint in it. =item * 2 - file has an action in it. =back A zero or undefined value means this file has neither. =head4 C<%option> Stores the debugger options. These are character string values. =head4 C<%postponed> Saves breakpoints for code that hasn't been compiled yet. Keys are subroutine names, values are: =over 4 =item * C<compile> - break when this sub is compiled =item * C<< break +0 if <condition> >> - break (conditionally) at the start of this routine. The condition will be '1' if no condition was specified. =back =head4 C<%postponed_file> This hash keeps track of breakpoints that need to be set for files that have not yet been compiled. Keys are filenames; values are references to hashes. Each of these hashes is keyed by line number, and its values are breakpoint definitions (C<condition\0action>). =head1 C<$deep> variable that C<DB::sub> uses to tell when a program has recursed deeply. In addition, the debugger has to turn off warnings while the debugger code is compiled, but then restore them to their original setting before the program being debugged begins executing. The first C<BEGIN> block simply turns off warnings by saving the current setting of C<$^W> and then setting it to zero. The second one initializes the debugger variables that are needed before the debugger begins executing. The third one puts C<$^X> back to its former value. We'll detail the second C<BEGIN> block later; just remember that if you need to initialize something before the debugger starts really executing, that's where it has to go. =cut package DB; use strict; use Cwd (); my $_initial_cwd; BEGIN {eval 'use IO::Handle'}; # Needed for flush only? 
breaks under miniperl BEGIN { require feature; $^V =~ /^v(\d+\.\d+)/; feature->import(":$1"); $_initial_cwd = Cwd::getcwd(); } # Debugger for Perl 5.00x; perl5db.pl patch level: use vars qw($VERSION $header); # bump to X.XX in blead, only use X.XX_XX in maint $', $o ) || open( OUT, ">&STDERR" ) || open( OUT, ">&STDOUT" ); # so we don't dongle stdout } ## end if ($console) elsif ( not defined $console ) { # No console. Open STDIN. open( IN, "<&STDIN" ); # merge with STDERR, or with STDOUT. open( OUT, ">&STDERR" ) || open( OUT, ">&STDOUT" ); # so we don't dongle stdout $' x $level ) . " " ); return defined($cmd); } sub _DB__trim_command_and_return_first_component { my ($obj) = @_; $cmd =~ s/\A\s+//s; # trim annoying leading whitespace $cmd =~ s/\s+\z//s; # trim annoying trailing whitespace # A single-character debugger command can be immediately followed by its # argument if they aren't both alphanumeric; otherwise require space # between commands and arguments: my ($verb, $args) = $cmd =~ m{\A([^\.-]\b|\S*)\s*(.*)}s; $obj->cmd_verb($verb); $obj->cmd_args($args); return; } sub _DB__handle_f_command { my ($obj) = @_; if ($file = $obj->cmd_args) { # help for no arguments (old-style was return from sub). if ( !$file ) { print $OUT "The old f command is now the r command.\n"; # hint print $OUT "The new f command switches filenames.\n"; next CMD; } ## end if (!$file) # if not in magic file list, try a close match. if ( !defined $main::{ '_<' . $file } ) { if ( ($try) = grep( m#^_<.*$file#, keys %main:: ) ) { { $try = substr( $try, 2 ); print $OUT "Choosing $try matching '$file':\n"; $file = $try; } } ## end if (($try) = grep(m#^_<.*$file#... } ## end if (!defined $main::{ ... # If not successfully switched now, we failed. if ( !defined $main::{ '_<' . $file } ) { print $OUT "No file matching '$file' is loaded.\n"; next CMD; } # We switched, so switch the debugger internals around. elsif ( $file ne $filename ) { *dbline = $main::{ '_<' . 
$file }; $max = $#dbline; $filename = $file; $start = 1; $&STDOUT" ) || _db_warn("Can't save STDOUT"); open( STDOUT, ">&OUT" ) || _db_warn("Can't redirect STDOUT"); } ## end if ($pager =~ /^\|/) else { # Not into a pipe. STDOUT is safe. open( SAVEOUT, ">&OUT" ) || _db_warn("Can't save DB::OUT"); } # Fix up environment to record we have less if so. fix_less(); unless ( $obj->piped(scalar ( open( OUT, $pager ) ) ) ) { # Couldn't open pipe to pager. _db_warn("Can't pipe output to '$pager'"); if ( $pager =~ /^\|/ ) { # Redirect I/O back again. open( OUT, ">&STDOUT" ) # XXX: lost message || _db_warn("Can't restore DB::OUT"); open( STDOUT, ">&SAVEOUT" ) || _db_warn("Can't restore STDOUT"); close(SAVEOUT); } ## end if ($pager =~ /^\|/) else { # Redirect I/O. STDOUT already safe. open( OUT, ">&STDOUT" ) # XXX: lost message || _db_warn("Can't restore DB::OUT"); } next CMD; } ## end unless ($piped = open(OUT,... # Set up broken-pipe handler if necessary. $SIG{PIPE} = \&DB::catch if $pager =~ /^\|/ && ( "" eq $SIG{PIPE} || "DEFAULT" eq $SIG{PIPE} ); _autoflush(\*OUT); # Save current filehandle, and put it back. $obj->selected(scalar( select(OUT) )); # Don't put it back if pager was a pipe. if ($cmd !~ /\A\|\|/) { select($obj->selected()); $obj->selected(""); } # Trim off the pipe symbols and run the command now. $cmd =~ s#\A\|+\s*##; redo PIPE; } return; } sub _DB__handle_m_command { my ($obj) = @_; if ($cmd =~ s#\Am\s+([\w:]+)\s*\z# #) { methods($1); next CMD; } # m expr - set up DB::eval to do the work if ($cmd =~ s#\Am\b# #) { # Rest gets done by DB::eval() $', $filename ) { # chomp to remove extraneous newlines from source'd files chomp( my @truelist = map { m/\A\s*(save|source)/ ? "#$_" : $_ } @truehist ); print {$fh} join( "\n", @truelist ); print "commands saved in $filename\n"; } else { DB::_db_warn("Can't save debugger commands in '$new_fn': $!\n"); } next CMD; } return; } sub _n_or_s_and_arg_commands_generic { my ($self, $letter, $new_val) = @_; # s - single-step. 
Remember the last command was 's'. if ($DB::cmd =~ s#\A\Q$letter\E\s#\$DB::single = $new_val;\n#) { $laststep = $letter; } return; } sub _handle_sh_command { my $self = shift; # $sh$sh - run a shell command (if it's all ASCII). # Can't run shell commands with Unicode in the debugger, hmm. my $my_cmd = $DB::cmd; if ($my_cmd =~ m#\A$sh#gms) { if ($my_cmd =~ m#\G\z#cgms) { # Run the user's shell. If none defined, run Bourne. # We resume execution when the shell terminates. DB::_db_system( $ENV{SHELL} || "/bin/sh" ); next CMD; } elsif ($my_cmd =~ m#\G$sh\s*(.*)#cgms) { # System it. DB::_db_system($1); next CMD; } elsif ($my_cmd =~ m#\G\s*(.*)#cgms) { DB::_db_system( $ENV{SHELL} || "/bin/sh", "-c", $1 ); next CMD; } } } sub _handle_x_command { my $self = shift; if ($DB::cmd =~ s#\Ax\b# #) { # Remainder gets done by DB::eval() $' => 'pre590_prepost', '>>' => 'pre590_prepost', '{' => 'pre590_prepost', '{{' => 'pre590_prepost', }, ); my %breakpoints_data; sub _has_breakpoint_data_ref { my ($filename, $line) = @_; return ( exists( $breakpoints_data{$filename} ) and exists( $breakpoints_data{$filename}{$line} ) ); } sub _get_breakpoint_data_ref { my ($filename, $line) = @_; return ($breakpoints_data{$filename}{$line} ||= +{}); } sub _delete_breakpoint_data_ref { my ($filename, $line) = @_; delete($breakpoints_data{$filename}{$line}); if (! scalar(keys( %{$breakpoints_data{$filename}} )) ) { delete($breakpoints_data{$filename}); } return; } sub _set_breakpoint_enabled_status { my ($filename, $line, $status) = @_; _get_breakpoint_data_ref($filename, $line)->{'enabled'} = ($status ? 1 : '') ; return; } sub _enable_breakpoint_temp_enabled_status { my ($filename, $line) = @_; _get_breakpoint_data_ref($filename, $line)->{'temp_enabled'} = 1; return; } sub _cancel_breakpoint_temp_enabled_status { my ($filename, $line) = @_; my $ref = _get_breakpoint_data_ref($filename, $line); delete ($ref->{'temp_enabled'}); if (! 
%$ref) { _delete_breakpoint_data_ref($filename, $line); } return; } sub _is_breakpoint_enabled { my ($filename, $line) = @_; my $data_ref = _get_breakpoint_data_ref($filename, $line); return ($data_ref->{'enabled'} || $data_ref->{'temp_enabled'}); } =head2 C<cmd_wrapper()> (API) C<cmd_wrapper()> allows the debugger to switch command sets depending on the value of the C<CommandSet> option. It tries to look up the command in the C<%set> package-level I<lexical> (which means external entities can't fiddle with it) and create the name of the sub to call based on the value found in the hash (if it's there). I<All> of the commands to be handled in a set have to be added to C<%set>; if they aren't found, the 5.8.0 equivalent is called (if there is one). This code uses symbolic references. =cut sub cmd_wrapper { my $cmd = shift; my $line = shift; my $dblineno = shift; # Assemble the command subroutine's name by looking up the # command set and command name in %set. If we can't find it, # default to the older version of the command. my $' : $_->tid) } threads->list )."\n"; } } ## end sub cmd_E =head3 C<cmd_h> - help command (command) Does the work of either =over 4 =item * Showing all the debugger help =item * Showing help for a specific command =back =cut use vars qw($help); use vars qw($summary); sub cmd_h { my $cmd = shift; # If we have no operand, assume null. my $line = shift || ''; # 'h h'. Print the long-format help. if ( $line =~ /\Ah\s*\z/ ) { print_help($help); } # 'h <something>'. Search for the command and print only its help. elsif ( my ($asked) = $line =~ /\A(\S.*)\z/ ) { # support long commands; otherwise bogus errors # happen when you ask for h on <CR> for example my $qasked = quotemeta($asked); # for searching; we don't # want to use it as a pattern. # XXX: finds CR but not <CR> # Search the help string for the command. if ( $help =~ /^ # Start of a line <? 
# Optional '<'
             (?:[IB]<)   # Optional markup
             $qasked     # The requested command
            /mx
      )
    {

        # It's there; pull it out and print it.
        while (
            $help =~ /^
                  (<?            # Optional '<'
                     (?:[IB]<)   # Optional markup
                     $qasked     # The command
                     ([\s\S]*?)  # Description line(s)
                  \n)            # End of last description line
                  (?!\s)         # Next line not starting with
                                 # whitespace
                 /mgx
          )
        {
            print_help($1);
        }
    }

    # Not found; not a debugger command.
    else {
        print_help("B<$asked> is not a debugger command.\n");
    }
    } ## end elsif ($line =~ /^(\S.*)$/)

    # 'h' - print the summary help.
    else {
        print_help($summary);
    }
} ## end sub cmd_h

=head3 C<cmd_L> - list breakpoints, actions, and watch expressions (command)

To list breakpoints, the command has to determine where all of them are
first. It starts with C<%had_breakpoints>, which tells us which files have
breakpoints and/or actions.

For each file, we switch the C<*dbline> glob (the magic source and breakpoint
data structures) to the file, and then look through C<%dbline> for lines with
breakpoints and/or actions, listing them out.

We look through C<%postponed> for not-yet-compiled subroutines that have
breakpoints, and through C<%postponed_file> for not-yet-C<require>'d files
that have breakpoints.

Watchpoints are simpler: we just list the entries in C<@to_watch>.

=cut

sub _cmd_L_calc_arg {
    # If no argument, list everything. Pre-5.8.0 version always lists
    # everything
    my $arg = shift || 'abw';
    if ($CommandSet ne '580') {
        $', $o ) or die "Cannot open TTY '$o' for write: $!";
        $IN  = \*IN;
        $OUT = \*OUT;
        _autoflush($OUT);
    } ## end if ($tty)

    # We don't have a TTY - try to find one via Term::Rendezvous.
    else {
        require Term::Rendezvous;

        # See if we have anything to pass to Term::Rendezvous.
        # Use $HOME/.perldbtty$$ if not.
        my $rv = $ENV{PERLDB_NOTTY} || "$ENV{HOME}/.perldbtty$$";

        # Rendezvous and get the filehandles.
        my $term_rv = Term::Rendezvous->new( $rv );
        $IN  = $term_rv->IN;
        $OUT = $term_rv->OUT;
    } ## end else [ if ($tty)
} ## end if ($notty)

# We're a daughter debugger.
Try to fork off another TTY. if ( $term_pid eq '-1' ) { # In a TTY with another debugger resetterm(2); } # If we shouldn't use Term::ReadLine, don't. if ( !$rl ) { $term = Term::ReadLine::Stub->new( 'perldb', $IN, $OUT ); } # We're using Term::ReadLine. Get all the attributes for this terminal. else { $term = Term::ReadLine->new( 'perldb', $IN, $OUT ); $rl_attribs = $term->Attribs; $rl_attribs->{basic_word_break_characters} .= '-:+/*,[])}' if defined $rl_attribs->{basic_word_break_characters} and index( $rl_attribs->{basic_word_break_characters}, ":" ) == -1; $rl_attribs->{special_prefixes} = '$@&%'; $rl_attribs->{completer_word_break_characters} .= '$@&%'; $rl_attribs->{completion_function} = \&db_complete; } ## end else [ if (!$rl) # Set up the LINEINFO filehandle. $LINEINFO = $OUT unless defined $LINEINFO; $lineinfo = $console unless defined $lineinfo; $term->MinLine(2); load_hist(); if ( $term->Features->{setHistory} and "@hist" ne "?" ) { $term->SetHistory(@hist); } # XXX Ornaments are turned on unconditionally, which is not # always a good thing. ornaments($ornaments) if defined $ornaments; $term_pid = $$; } ## end sub setterm sub load_hist { $histfile //= option_val("HistFile", undef); return unless defined $histfile; open my $fh, "<", $histfile or return; local $/ = "\n"; @hist = (); while (<$fh>) { chomp; push @hist, $_; } close $fh; } sub save_hist { return unless defined $histfile; eval { require File::Path } or return; eval { require File::Basename } or return; File::Path::mkpath(File::Basename::dirname($histfile)); open my $fh, ">", $histfile or die "Could not open '$histfile': $!"; $histsize //= option_val("HistSize",100); my @copy = grep { $_ ne '?' } @hist; my $start = scalar(@copy) > $histsize ? scalar(@copy)-$histsize : 0; for ($start .. 
$#copy) {
        print $fh "$copy[$_]\n";
    }
    close $fh or die "Could not write '$histfile': $!";
}

=head1 GET_FORK_TTY EXAMPLE FUNCTIONS

When the process being debugged forks, or the process invokes a command
via C<system()> which starts a new debugger, we need to be able to get a new
C<IN> and C<OUT> filehandle for the new debugger. Otherwise, the two processes
fight over the terminal, and you can never quite be sure who's going to get
the input you're typing.

The debugger provides C<get_fork_TTY> functions which work for TCP socket
servers, X11, OS/2, and Mac OS X. Other systems are not supported. You are
encouraged to write C<get_fork_TTY> functions which work for I<your> platform
and contribute them.

=head3 C<socket_get_fork_TTY>

=cut

sub connect_remoteport {
    require IO::Socket;

    my $socket = IO::Socket::INET->new(
        Timeout  => '10',
        PeerAddr => $remoteport,
        Proto    => 'tcp',
    );
    if ( ! $socket ) {
        die "Unable to connect to remote host: $remoteport\n";
    }
    return $socket;
}

sub socket_get_fork_TTY {
    $tty = $LINEINFO = $IN = $OUT = connect_remoteport();

    # Do I need to worry about setting $term?
    reset_IN_OUT( $IN, $OUT );
    return '';
}

=head3 C<xterm_get_fork_TTY>

This function provides the C<get_fork_TTY> function for X11. If a
program running under the debugger forks, a new C<xterm> window is opened and
the subsidiary debugger is directed there.

The C<open()> call is of particular note here. We have the new C<xterm>
we're spawning route file number 3 to STDOUT, and then execute the C<tty>
command (which prints the device name of the TTY we'll want to use for input
and output to STDOUT), then C<sleep> for a very long time, routing this output
to file number 3.

This way we can simply read from the C<XT> filehandle (which is STDOUT from
the I<commands> we ran) to get the TTY we want to use.

Only works if C<xterm> is in your path and C<$ENV{DISPLAY}>, etc. are
properly set up.
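The file-descriptor plumbing is worth seeing in isolation. Below is a
self-contained sketch of the same trick using a plain POSIX C<sh -c 'echo ...'>
in place of C<xterm> (the real thing needs a running X server): the child's
file descriptor 3 is routed to our read pipe, so whatever the command writes
to fd 3 comes back through the filehandle, exactly as C<tty>'s output does.
The string C<fake-tty-name> is a placeholder of ours, not anything the
debugger produces.

```perl
use strict;
use warnings;

# Stand-in for the xterm invocation: route the child's fd 3 to its
# stdout (our read pipe), then have the command write the "TTY name"
# to fd 3, just as `tty 1>&3` does in xterm_get_fork_TTY.
open( my $xt, q[3>&1 sh -c 'echo fake-tty-name 1>&3' |] )
    or die "cannot spawn: $!";

my $tty = <$xt>;    # read what the child wrote to fd 3
close $xt;
chomp $tty;

print "got TTY: $tty\n";
```

The leading C<3E<gt>&1> is ordinary shell redirection syntax; because the
command string contains shell metacharacters, Perl hands it to C</bin/sh>,
which applies the redirection before running the command.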
=cut sub xterm_get_fork_TTY { ( my $name = $0 ) =~ s,^.*[/\\],,s; open XT, qq[3>&1 xterm -title "Daughter Perl debugger $pids $name" -e sh -c 'tty 1>&3;\ sleep 10000000' |]; # Get the output from 'tty' and clean it up a little. my $tty = <XT>; chomp $tty; $', $out or die "cannot open '$out' for write: $!"; # Swap to the new filehandles. reset_IN_OUT( \*IN, \*OUT ); # Save the setting for later. return $tty = $in; } ## end if (@_ and $term and $term... # Terminal doesn't support new TTY, or doesn't support readline. # Can't do it now, try restarting. if ($term and @_) { _db_warn("Too late to set TTY, enabled on next 'R'!\n"); } # Useful if done through PERLDB_OPTS: $console = $tty = shift if @_; # Return whatever the TTY is. $tty or $console; } ## end sub TTY =head2 C<noTTY> Sets the C<$notty> global, controlling whether or not the debugger tries to get a terminal to read from. If called after a terminal is already in place, we save the value to use it if we're restarted. =cut sub noTTY { if ($term) { _db_warn("Too late to set noTTY, enabled on next 'R'!\n") if @_; } $notty = shift if @_; $notty; } ## end sub noTTY =head2 C<ReadLine> Sets the C<$rl> option variable. If 0, we use C<Term::ReadLine::Stub> (essentially, no C<readline> processing on this I<terminal>). Otherwise, we use C<Term::ReadLine>. Can't be changed after a terminal's in place; we save the value in case a restart is done so we can change it then. =cut sub ReadLine { if ($term) { _db_warn("Too late to set ReadLine, enabled on next 'R'!\n") if @_; } $rl = shift if @_; $rl; } ## end sub ReadLine =head2 C<RemotePort> Sets the port that the debugger will try to connect to when starting up. If the terminal's already been set up, we can't do it, but we remember the setting in case the user does a restart. 
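In practice C<RemotePort> is usually set from the environment, e.g.
C<PERLDB_OPTS="RemotePort=localhost:9000">, with something already listening
on that port. Here is a minimal, self-contained sketch of the connection step,
with the "remote" listener faked in-process so the snippet runs on its own
(the loopback address and variable names are illustrative, not the
debugger's):

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Fake "remote" end: listen on a free loopback port.
my $listener = IO::Socket::INET->new(
    Listen    => 1,
    LocalAddr => '127.0.0.1',
    LocalPort => 0,             # let the OS pick a free port
) or die "listen: $!";

# What the RemotePort option would hold, e.g. parsed from PERLDB_OPTS.
my $remoteport = '127.0.0.1:' . $listener->sockport;

# Essentially what connect_remoteport() does at startup.
my $socket = IO::Socket::INET->new(
    Timeout  => 10,
    PeerAddr => $remoteport,
    Proto    => 'tcp',
) or die "Unable to connect to remote host: $remoteport\n";

print "connected to $remoteport\n";
```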
=cut sub RemotePort { if ($term) { _db_warn("Too late to set RemotePort, enabled on next 'R'!\n") if @_; } $remoteport = shift if @_; $remoteport; } ## end sub RemotePort =head2 C<tkRunning> Checks with the terminal to see if C<Tk> is running, and returns true or false. Returns false if the current terminal doesn't support C<readline>. =cut sub tkRunning { if ( ${ $term->Features }{tkRunning} ) { return $term->tkRunning(@_); } else { local $\ = ''; print $OUT "tkRunning not supported by current ReadLine package.\n"; 0; } } ## end sub tkRunning =head2 C<NonStop> Sets nonstop mode. If a terminal's already been set up, it's too late; the debugger remembers the setting in case you restart, though. =cut sub NonStop { if ($term) { _db_warn("Too late to set up NonStop mode, enabled on next 'R'!\n") if @_; } $runnonstop = shift if @_; $runnonstop; } ## end sub NonStop sub DollarCaretP { if ($term) { _db_warn("Some flag changes could not take effect until next 'R'!\n") if @_; } $^P = parse_DollarCaretP_flags(shift) if @_; expand_DollarCaretP_flags($^P); } =head2 C<pager> Set up the C<$pager> variable. Adds a pipe to the front unless there's one there already. =cut sub pager { if (@_) { $pager = shift; $$lineinfo"; # If this is a pipe, the stream points to a slave editor. $slave_editor = ( $stream =~ /^\|/ ); my $new_lineinfo_fh; # Open it up and unbuffer it. open ($new_lineinfo_fh , $stream ) or _db_warn("Cannot open '$stream' for write"); $LINEINFO = $new_lineinfo_fh; _autoflush($LINEINFO); } return $lineinfo; } ## end sub LineInfo =head1 COMMAND SUPPORT ROUTINES These subroutines provide functionality for various commands. =head2 C<list_modules> For the C<M> command: list modules loaded and their versions. Essentially just runs through the keys in %INC, picks each package's C<$VERSION> variable, gets the file name, and formats the information for output. 
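The key-to-package normalization is easy to miss inside the loop, so here it
is applied to a single known C<%INC> entry as a standalone sketch (the real
C<list_modules()> walks all of C<%INC> and pretty-prints the result via
C<dumpit()>; the variable names here are mine):

```perl
use strict;
use warnings;
use File::Spec;    # guarantees a 'File/Spec.pm' entry in %INC

my $key = 'File/Spec.pm';                 # "as-loaded" name, as keyed in %INC
( my $pkg = $key ) =~ s,\.p[lm]\z,,i;     # strip '.pm' or '.pl'
$pkg =~ s,/,::,g;                         # path separators -> '::'

# Look up the package's $VERSION, if it defines one.
my $v    = do { no strict 'refs'; ${ $pkg . '::VERSION' } };
my $line = ( defined $v ? "$v from " : '' ) . $INC{$key};

print "$pkg => $line\n";
```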
=cut

sub list_modules {    # versions
    my %version;
    my $file;

    # keys are the "as-loaded" name, values are the fully-qualified path
    # to the file itself.
    for ( keys %INC ) {
        $file = $_;                    # get the module name
        s,\.p[lm]$,,i;                 # remove '.pl' or '.pm'
        s,/,::,g;                      # change '/' to '::'
        s/^perl5db$/DB/;               # Special case: debugger
                                       # moves to package DB
        s/^Term::ReadLine::readline$/readline/;    # simplify readline

        # If the package has a $VERSION package global (as all good packages
        # should!) decode it and save as partial message.
        my $pkg_version = do { no strict 'refs'; ${ $_ . '::VERSION' } };
        if ( defined $pkg_version ) {
            $version{$file} = "$pkg_version from ";
        }

        # Finish up the message with the file the package came from.
        $version{$file} .= $INC{$file};
    } ## end for (keys %INC)

    # Hey, dumpit() formats a hash nicely, so why not use it?
    dumpit( $OUT, \%version );
} ## end sub list_modules

=head2 C<sethelp()>

Sets up the monster string used to format and print the help.

=head3 HELP MESSAGE FORMAT

The help message is a peculiar format unto itself; it mixes C<pod> I<ornaments>
(C<< B<> >> C<< I<> >>) with tabs to come up with a format that's fairly
easy to parse and portable, but which still allows the help to be a little
nicer than just plain text.

Essentially, you define the command name (usually marked up with C<< B<> >>
and C<< I<> >>). I<Be careful>: the help-string parser is not very
sophisticated, and if you don't follow these rules it will mangle the help
beyond hope until you fix the string.

=cut

use vars qw($pre580_help);
use vars qw($pre580_summary);

sub sethelp {

    # XXX: make sure there are tabs between the command and explanation,
    #      or print_help will screw up your formatting if you have
    #      eeevil ornaments enabled. This is an insane mess.

    $help = "
Help is currently only available for the new 5.8 command set.
No help is available for the old command set.
We assume you know what you're doing if you switch to it.<v> [I<line>] View> [I<a|b|w>] List actions and or breakpoints and or watch-expressions. B<S> [[B<!>]I<pattern>] List subroutine names [not] matching I<pattern>. B<t> [I<n>] Toggle trace mode (to max I<n> levels below current stack depth). B<t> [I<n>] I<expr> Trace through execution of I<expr>. B<b> Sets breakpoint on current line)<B> [I<line>] Delete the breakpoint for I<line>. B<B> I<*>> Does nothing B<A> [I<line>] Delete the action for I<line>. B<A> I<*> Delete all actions. B<w> I<expr> Add a global watch-expression. B<w> Does nothing B<W> I<expr> Delete a global watch-expression. B<W> I<*><M> Show versions of loaded modules. B<i> I<class> Prints nested parents of given class. B<e> Display current thread id. B<E> Display all thread ids the current one will be identified: <n>. B<y> [I<n> [I<Vars>]] List lexicals in higher scope <n>. Vars same as B<V>. B<<> ? List Perl commands to run before each prompt. B<<> I<expr> Define Perl command to run before each prompt. B<<<> I<expr> Add to the list of Perl commands to run before each prompt. B<< *> Delete the list of perl commands to run before each prompt. B<>> ? List Perl commands to run after each prompt. B<>> I<expr> Define Perl command to run after each prompt. B<>>B<>> I<expr> Add to the list of Perl commands to run after each prompt. B<>>B< *> Delete the list of Perl commands to run after each prompt. B<{> I<db_command> Define debugger command to run before each prompt. B<{> ? List debugger commands to run before each prompt. B<{{> I<db_command> Add to the list of debugger commands to run before each prompt. B<{ *> Delete<save> I<file> Save current debugger session (actual history) to I<file>. B<rerun> Rerun session to current position. B<rerun> I<n> Rerun session to numbered command. B<rerun> I<-n> Rerun session to number'th-to-last command. B<H> I<-number> Display last number commands (default all). B<H> I<*> Delete complete history. 
B<p> I<expr> Same as \"I<print {DB::OUT} expr>\" in current package. B<|>I<dbcmd> Run debugger command, piping DB::OUT to current pager. B<||>I<dbcmd> Same as B<|>I<dbcmd> but DB::OUT is temporarily select()ed as well. B<\=> [I<alias> I<value>] Define a command alias, or list current aliases. I<command> Execute as a perl statement in current package.> Summary of debugger commands. B<h> [I<db_command>] Get help [on a specific debugger command], enter B<|h> to page. B<h h> Long help for debugger commands B<$doccmd> I<manpage> Runs the external doc viewer B<$doccmd> command on the named Perl I<manpage>, or on B<$doccmd> itself if omitted. Set B<\$DB::doccmd> to change viewer. Type '|h h' for a paged display if this was too hard to read. "; # Fix balance of vi % matching: }}}} # note: tabs in the following section are not-so-helpful $summary = <<"END_SUM"; I<List/search source lines:> I<Control script execution:> B<l> [I<ln>|I<sub>] List source code B<T> Stack trace B<-> or B<.> List previous/current line B<s> [I<expr>] Single step [in expr] B<v> [I<line>] View<M> Show module versions B<c> [I<ln>|I<sub>] Continue until position I<Debugger controls:> B<L> List break/watch/actions B<o> [...] Set debugger options B<t> [I<n>] [I<expr>] Toggle trace [max depth] ][trace expr] B<<>[B<<>]|B<{>[B<{>]|B<>>[B<>>] [I<cmd>] Do pre/post-prompt B<b> [I<ln>|I<event>|I<sub>] [I<cnd>] Set breakpoint B<$prc> [I<N>|I<pat>] Redo a previous command B<B> I<ln|*> Delete a/all breakpoints B<H> [I<-num>] Display last num commands B<a> [I<ln>] I<cmd> Do cmd before line B<=> [I<a> I<val>] Define/list an alias B<A> I<ln|*> Delete a/all actions B<h> [I<db_cmd>] Get help on command B<w> I<expr> Add a watch expression B<h h> Complete help page B<W> I<expr|*> Delete a/all watch exprs<i> I<class> inheritance tree. B<y> [I<n> [I<Vars>]] List lexicals in higher scope <n>. Vars same as B<V>. B<e> Display thread id B<E> Display all thread ids. 
For more help, type B<h> I<cmd_letter>, or run B<$doccmd perldebug> for all docs. END_SUM # ')}}; # Fix balance of vi % matching # and this is really numb... $pre580_help = "<w> [I<line>] List> List all breakpoints and actions. B<S> [[B<!>]I<pattern>] List subroutine names [not] matching I<pattern>. B<t> [I<n>] Toggle trace mode (to max I<n> levels below current stack depth) . B<t> [I<n>] I<expr> Trace through execution of I<expr>.<d> [I<line>] Delete the breakpoint for I<line>. B<D>> [I<line>] Delete the action for I<line>. B<A> Delete all actions. B<W> I<expr> Add a global watch-expression. B<W><<> ? List Perl commands to run before each prompt. B<<> I<expr> Define Perl command to run before each prompt. B<<<> I<expr> Add to the list of Perl commands to run before each prompt. B<>> ? List Perl commands to run after each prompt. B<>> I<expr> Define Perl command to run after each prompt. B<>>B<>> I<expr> Add to the list of Perl commands to run after each prompt. B<{> I<db_command> Define debugger command to run before each prompt. B<{> ? List debugger commands to run before each prompt. B<{{> I<db_command> Add to<H> I<-number> Display last number commands (default all). B<p> I<expr> Same as \"I<print {DB::OUT} expr>\" in current package. B<|>I<dbcmd> Run debugger command, piping DB::OUT to current pager. B<||>I<dbcmd> Same as B<|>I<dbcmd> but DB::OUT is temporarilly select()ed as well. B<\=> [I<alias> I<value>] Define a command alias, or list current aliases. I<command> Execute as a perl statement in current package. B<v> Show versions of loaded modules.> [I<db_command>] Get help [on a specific debugger command], enter B<|h> to page. B<h h> Summary of debugger commands. B<$doccmd> I<manpage> Runs the external doc viewer B<$doccmd> command on the named Perl I<manpage>, or on B<$doccmd> itself if omitted. Set B<\$DB::doccmd> to change viewer. Type '|h' for a paged display if this was too hard to read. 
"; # Fix balance of vi % matching: }}}} # note: tabs in the following section are not-so-helpful $pre580_summary = <<"END_SUM"; I<List/search source lines:> I<Control script execution:> B<l> [I<ln>|I<sub>] List source code B<T> Stack trace B<-> or B<.> List previous/current line B<s> [I<expr>] Single step [in expr] B<w> [I<line>] List<v> Show versions of modules B<c> [I<ln>|I<sub>] Continue until position I<Debugger controls:> B<L> List break/watch/actions B<O> [...] Set debugger options B<t> [I<expr>] Toggle trace [trace expr] B<<>[B<<>]|B<{>[B<{>]|B<>>[B<>>] [I<cmd>] Do pre/post-prompt B<b> [I<ln>|I<event>|I<sub>] [I<cnd>] Set breakpoint B<$prc> [I<N>|I<pat>] Redo a previous command B<d> [I<ln>] or B<D> Delete a/all breakpoints B<H> [I<-num>] Display last num commands B<a> [I<ln>] I<cmd> Do cmd before line B<=> [I<a> I<val>] Define/list an alias B<W> I<expr> Add a watch expression B<h> [I<db_cmd>] Get help on command B<A> or B<W> Delete all actions/watch<y> [I<n> [I<Vars>]] List lexicals in higher scope <n>. Vars same as B<V>. For more help, type B<h> I<cmd_letter>, or run B<$doccmd perldebug> for all docs. END_SUM # ')}}; # Fix balance of vi % matching } ## end sub sethelp =head2 C<print_help()> Most of what C<print_help> does is just text formatting. It finds the C<B> and C<I> ornaments, cleans them off, and substitutes the proper terminal control characters to simulate them (courtesy of C<Term::ReadLine::TermCap>). =cut sub print_help { my $help_str = shift; # Restore proper alignment destroyed by eeevil I<> and B<> # ornaments: A pox on both their houses! # # A help command will have everything up to and including # the first tab sequence padded into a field 16 (or if indented 20) # wide. If it's wider than that, an extra space will be added. $help_str =~ s{ ^ # only matters at start of line ( \ {4} | \t )* # some subcommands are indented ( < ? 
# so <CR> works [BI] < [^\t\n] + ) # find an eeevil ornament ( \t+ ) # original separation, discarded ( .* ) # this will now start (no earlier) than # column 16 } { my($leadwhite, $command, $midwhite, $text) = ($1, $2, $3, $4); my $clean = $command; $clean =~ s/[BI]<([^>]*)>/$1/g; # replace with this whole string: ($leadwhite ? " " x 4 : "") . $command . ((" " x (16 + ($leadwhite ? 4 : 0) - length($clean))) || " ") . $text; }mgex; $help_str =~ s{ # handle bold ornaments B < ( [^>] + | > ) > } { $Term::ReadLine::TermCap::rl_term_set[2] . $1 . $Term::ReadLine::TermCap::rl_term_set[3] }gex; $help_str =~ s{ # handle italic ornaments I < ( [^>] + | > ) > } { $Term::ReadLine::TermCap::rl_term_set[0] . $1 . $Term::ReadLine::TermCap::rl_term_set[1] }gex; local $\ = ''; print {$OUT} $help_str; return; } ## end sub print_help =head2 C<fix_less> This routine does a lot of gyrations to be sure that the pager is C<less>. It checks for C<less> masquerading as C<more> and records the result in C<$fixed_less> so we don't have to go through doing the stats again. =cut use vars qw($fixed_less); sub _calc_is_less { if ($pager =~ /\bless\b/) { return 1; } elsif ($pager =~ /\bmore\b/) { # Nope, set to more. See what's out there. my @st_more = stat('/usr/bin/more'); my @st_less = stat('/usr/bin/less'); # is it really less, pretending to be more? return ( @st_more && @st_less && $st_more[0] == $st_less[0] && $st_more[1] == $st_less[1] ); } else { return; } } sub fix_less { # We already know if this is set. return if $fixed_less; # changes environment! # 'r' added so we don't do (slow) stats again. $fixed_less = 1 if _calc_is_less(); return; } ## end sub fix_less =head1 DIE AND WARN MANAGEMENT =head2 C<diesignal> C<diesignal> is a just-drop-dead C<die> handler. It's most useful when trying to debug a debugger problem. It does its best to report the error that occurred, and then forces the program, debugger, and everything to die. =cut sub diesignal { # No entry/exit messages. 
local $frame = 0; # No return value prints. local $doret = -2; # set the abort signal handling to the default (just terminate). $SIG{'ABRT'} = 'DEFAULT'; # If we enter the signal handler recursively, kill myself with an # abort signal (so we just terminate). kill 'ABRT', $$ if $panic++; # If we can show detailed info, do so. if ( defined &Carp::longmess ) { # Don't recursively enter the warn handler, since we're carping. local $SIG{__WARN__} = ''; # Skip two levels before reporting traceback: we're skipping # mydie and confess. local $Carp::CarpLevel = 2; # mydie + confess # Tell us all about it. _db_warn( Carp::longmess("Signal @_") ); } # No Carp. Tell us about the signal as best we can. else { local $\ = ''; print $DB::OUT "Got signal @_\n"; } # Drop dead. kill 'ABRT', $$; } ## end sub diesignal =head2 C<dbwarn> The debugger's own default C<$SIG{__WARN__}> handler. We load C<Carp> to be able to get a stack trace, and output the warning message vi C<DB::dbwarn()>. =cut sub dbwarn { # No entry/exit trace. local $frame = 0; # No return value printing. local $doret = -2; # Turn off warn and die handling to prevent recursive entries to this # routine. local $SIG{__WARN__} = ''; local $SIG{__DIE__} = ''; # Load Carp if we can. If $^S is false (current thing being compiled isn't # done yet), we may not be able to do a require. eval { require Carp } if defined $^S; # If error/warning during compilation, # require may be broken. # Use the core warn() unless Carp loaded OK. CORE::warn( @_, "\nCannot print stack trace, load with -MCarp option to see stack" ), return unless defined &Carp::longmess; # Save the current values of $single and $trace, and then turn them off. my ( $mysingle, $mytrace ) = ( $single, $trace ); $single = 0; $trace = 0; # We can call Carp::longmess without its being "debugged" (which we # don't want - we just want to use it!). Capture this for later. my $mess = Carp::longmess(@_); # Restore $single and $trace to their original values. 
( $single, $trace ) = ( $mysingle, $mytrace ); # Use the debugger's own special way of printing warnings to print # the stack trace message. _db_warn($mess); } ## end sub dbwarn =head2 C<dbdie> The debugger's own C<$SIG{__DIE__}> handler. Handles providing a stack trace by loading C<Carp> and calling C<Carp::longmess()> to get it. We turn off single stepping and tracing during the call to C<Carp::longmess> to avoid debugging it - we just want to use it. If C<dieLevel> is zero, we let the program being debugged handle the exceptions. If it's 1, you get backtraces for any exception. If it's 2, the debugger takes over all exception handling, printing a backtrace and displaying the exception via its C<dbwarn()> routine. =cut sub dbdie { local $frame = 0; local $doret = -2; local $SIG{__DIE__} = ''; local $SIG{__WARN__} = ''; if ( $dieLevel > 2 ) { local $SIG{__WARN__} = \&dbwarn; _db_warn(@_); # Yell no matter what return; } if ( $dieLevel < 2 ) { die @_ if $^S; # in eval propagate } # The code used to check $^S to see if compilation of the current thing # hadn't finished. We don't do it anymore, figuring eval is pretty stable. eval { require Carp }; die( @_, "\nCannot print stack trace, load with -MCarp option to see stack" ) unless defined &Carp::longmess; # We do not want to debug this chunk (automatic disabling works # inside DB::DB, but not in Carp). Save $single and $trace, turn them off, # get the stack trace from Carp::longmess (if possible), restore $signal # and $trace, and then die with the stack trace. my ( $mysingle, $mytrace ) = ( $single, $trace ); $single = 0; $trace = 0; my $mess = "@_"; { package Carp; # Do not include us in the list eval { $mess = Carp::longmess(@_); }; } ( $single, $trace ) = ( $mysingle, $mytrace ); die $mess; } ## end sub dbdie =head2 C<warnlevel()> Set the C<$DB::warnLevel> variable that stores the value of the C<warnLevel> option. 
Calling C<warnLevel()> with a positive value results in the debugger taking over all warning handlers. Setting C<warnLevel> to zero leaves any warning handlers set up by the program being debugged in place. =cut sub warnLevel { if (@_) { my $prevwarn = $SIG{__WARN__} unless $warnLevel; $warnLevel = shift; if ($warnLevel) { $SIG{__WARN__} = \&DB::dbwarn; } elsif ($prevwarn) { $SIG{__WARN__} = $prevwarn; } else { undef $SIG{__WARN__}; } } ## end if (@_) $warnLevel; } ## end sub warnLevel =head2 C<dielevel> Similar to C<warnLevel>. Non-zero values for C<dieLevel> result in the C<DB::dbdie()> function overriding any other C<die()> handler. Setting it to zero lets you use your own C<die()> handler. =cut sub dieLevel { local $\ = ''; if (@_) { my $prevdie = $SIG{__DIE__} unless $dieLevel; $dieLevel = shift; if ($dieLevel) { # Always set it to dbdie() for non-zero values. $SIG{__DIE__} = \&DB::dbdie; # if $dieLevel < 2; # No longer exists, so don't try to use it. #$SIG{__DIE__} = \&DB::diehard if $dieLevel >= 2; # If we've finished initialization, mention that stack dumps # are enabled, If dieLevel is 1, we won't stack dump if we die # in an eval(). print $OUT "Stack dump during die enabled", ( $dieLevel == 1 ? " outside of evals" : "" ), ".\n" if $I_m_init; # XXX This is probably obsolete, given that diehard() is gone. print $OUT "Dump printed too.\n" if $dieLevel > 2; } ## end if ($dieLevel) # Put the old one back if there was one. elsif ($prevdie) { $SIG{__DIE__} = $prevdie; print $OUT "Default die handler restored.\n"; } else { undef $SIG{__DIE__}; print $OUT "Die handler removed.\n"; } } ## end if (@_) $dieLevel; } ## end sub dieLevel =head2 C<signalLevel> Number three in a series: set C<signalLevel> to zero to keep your own signal handler for C<SIGSEGV> and/or C<SIGBUS>. Otherwise, the debugger takes over and handles them with C<DB::diesignal()>. 
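All three level-setters (C<warnLevel>, C<dieLevel>, C<signalLevel>) share the
same save-and-restore shape: remember the user's handler the first time we
take over, and put it back when the level drops to zero. Here is that shape
reduced to a standalone sketch for C<__WARN__> (the variable and sub names
are mine, not the debugger's):

```perl
use strict;
use warnings;

my $level = 0;
my $prev_handler;
my $caught = '';

sub set_level {
    if (@_) {
        # Remember the user's handler only on the first takeover.
        $prev_handler = $SIG{__WARN__} unless $level;
        $level = shift;
        if ($level) {
            $SIG{__WARN__} = sub { $caught .= "@_" };
        }
        elsif ($prev_handler) {
            $SIG{__WARN__} = $prev_handler;    # put the old one back
        }
        else {
            undef $SIG{__WARN__};              # nothing to restore
        }
    }
    return $level;
}

set_level(1);
warn "boom\n";    # intercepted by our handler, not printed
set_level(0);     # original (non-)handler restored
print "caught: $caught";
```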
=cut sub signalLevel { if (@_) { my $prevsegv = $SIG{SEGV} unless $signalLevel; my $prevbus = $SIG{BUS} unless $signalLevel; $signalLevel = shift; if ($signalLevel) { $SIG{SEGV} = \&DB::diesignal; $SIG{BUS} = \&DB::diesignal; } else { $SIG{SEGV} = $prevsegv; $SIG{BUS} = $prevbus; } } ## end if (@_) $signalLevel; } ## end sub signalLevel =head1 SUBROUTINE DECODING SUPPORT These subroutines are used during the C<x> and C<X> commands to try to produce as much information as possible about a code reference. They use L<Devel::Peek> to try to find the glob in which this code reference lives (if it does) - this allows us to actually name code references which correspond to named subroutines (including those aliased via glob assignment). =head2 C<CvGV_name()> Wrapper for C<CvGV_name_or_bust>; tries to get the name of a reference via that routine. If this fails, return the reference again (when the reference is stringified, it'll come out as C<SOMETHING(0x...)>). =cut sub CvGV_name { my $in = shift; my $name = CvGV_name_or_bust($in); defined $name ? $name : $in; } =head2 C<CvGV_name_or_bust> I<coderef> Calls L<Devel::Peek> to try to find the glob the ref lives in; returns C<undef> if L<Devel::Peek> can't be loaded, or if C<Devel::Peek::CvGV> can't find a glob for this ref. Returns C<< I<package>::I<glob name> >> if the code ref is found in a glob. =cut use vars qw($skipCvGV); sub CvGV_name_or_bust { my $in = shift; return if $skipCvGV; # Backdoor to avoid problems if XS broken... return unless ref $in; $in = \&$in; # Hard reference... eval { require Devel::Peek; 1 } or return; my $gv = Devel::Peek::CvGV($in) or return; *$gv{PACKAGE} . '::' . *$gv{NAME}; } ## end sub CvGV_name_or_bust =head2 C<find_sub> A utility routine used in various places; finds the file where a subroutine was defined, and returns that filename and a line-number range.
Tries to use C<@sub> first; if it can't find it there, it tries building a reference to the subroutine and uses C<CvGV_name_or_bust> to locate it, loading it into C<@sub> as a side effect (XXX I think). If it can't find it this way, it brute-force searches C<%sub>, checking for identical references. =cut sub _find_sub_helper { my $subr = shift; return unless defined &$subr; my $name = CvGV_name_or_bust($subr); my $data; $data = $sub{$name} if defined $name; return $data if defined $data; # Old stupid way... $subr = \&$subr; # Hard reference my $s; for ( keys %sub ) { $s = $_, last if $subr eq \&$_; } if ($s) { return $sub{$s}; } else { return; } } sub find_sub { my $subr = shift; return ( $sub{$subr} || _find_sub_helper($subr) ); } ## end sub find_sub =head2 C<methods> A subroutine that uses the utility function C<methods_via> to find all the methods in the class corresponding to the current reference and in C<UNIVERSAL>. =cut use vars qw(%seen); sub methods { # Figure out the class - either this is the class or it's a reference # to something blessed into that class. my $class = shift; $class = ref $class if ref $class; local %seen; # Show the methods that this class has. methods_via( $class, '', 1 ); # Show the methods that UNIVERSAL has. methods_via( 'UNIVERSAL', 'UNIVERSAL', 0 ); } ## end sub methods =head2 C<methods_via($class, $prefix, $crawl_upward)> C<methods_via> does the work of crawling up the C<@ISA> tree and reporting all the parent class methods. C<$class> is the name of the next class to try; C<$prefix> is the message prefix, which gets built up as we go up the C<@ISA> tree to show parentage; C<$crawl_upward> is 1 if we should try to go higher in the C<@ISA> tree, 0 if we should stop. =cut sub methods_via { # If we've processed this class already, just quit. my $class = shift; return if $seen{$class}++; # This is a package that is contributing the methods we're about to print. my $prefix = shift; my $prepend = $prefix ? 
"via $prefix: " : ''; my @to_print; # Extract from all the symbols in this class. my $class_ref = do { no strict "refs"; \%{$class . '::'} }; while (my ($name, $glob) = each %$class_ref) { # references directly in the symbol table are Proxy Constant # Subroutines, and are by their very nature defined # Otherwise, check if the thing is a typeglob, and if it is, it decays # to a subroutine reference, which can be tested by defined. # $glob might also be the value -1 (from sub foo;) # or (say) '$$' (from sub foo ($$);) # \$glob will be SCALAR in both cases. if ((ref $glob || ($glob && ref \$glob eq 'GLOB' && defined &$glob)) && !$seen{$name}++) { push @to_print, "$prepend$name\n"; } } { local $\ = ''; local $, = ''; print $DB::OUT $_ foreach sort @to_print; } # If the $crawl_upward argument is false, just quit here. return unless shift; # $crawl_upward true: keep going up the tree. # Find all the classes this one is a subclass of. my $class_ISA_ref = do { no strict "refs"; \@{"${class}::ISA"} }; for my $name ( @$class_ISA_ref ) { # Set up the new prefix. $prepend = $prefix ? $prefix . " -> $name" : $name; # Crawl up the tree and keep trying to crawl up. methods_via( $name, $prepend, 1 ); } } ## end sub methods_via =head2 C<setman> - figure out which command to use to show documentation Just checks the contents of C<$^O> and sets the C<$doccmd> global accordingly. =cut sub setman { $doccmd = $^O !~ /^(?:MSWin32|VMS|os2|dos|amigaos|riscos|NetWare)\z/s ? "man" # O Happy Day! : "perldoc"; # Alas, poor unfortunates } ## end sub setman =head2 C<runman> - run the appropriate command to show documentation Accepts a man page name; runs the appropriate command to display it (set up during debugger initialization). Uses C<_db_system()> to avoid mucking up the program's STDIN and STDOUT. 
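For example (an illustrative sketch; the page name is arbitrary):

    runman('perldebug');   # display the perldebug manpage via $doccmd
    runman();              # no page given: show the docs for $doccmd itself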
=cut sub runman { my $page = shift; unless ($page) { _db_system("$doccmd $doccmd"); return; } # this way user can override, like with $doccmd="man -Mwhatever" # or even just "man " to disable the path check. if ( $doccmd ne 'man' ) { _db_system("$doccmd $page"); return; } $page = 'perl' if lc($page) eq 'help'; require Config; my $man1dir = $Config::Config{man1direxp}; my $man3dir = $Config::Config{man3direxp}; for ( $man1dir, $man3dir ) { s#/[^/]*\z## if /\S/ } my $manpath = ''; $manpath .= "$man1dir:" if $man1dir =~ /\S/; $manpath .= "$man3dir:" if $man3dir =~ /\S/ && $man1dir ne $man3dir; chop $manpath if $manpath; # harmless if missing, I figure local $ENV{MANPATH} = $manpath if $manpath; my $nopathopt = $^O =~ /dunno what goes here/; if ( CORE::system( $doccmd, # I just *know* there are men without -M ( ( $manpath && !$nopathopt ) ? ( "-M", $manpath ) : () ), split ' ', $page ) ) { unless ( $page =~ /^perl\w/ ) { # Previously the debugger contained a list which it slurped in, # listing the known "perl" manpages. However, it was out of date, # with errors both of omission and inclusion. This approach is # considerably less complex. The failure mode on a butchered # install is simply that the user has to run man or perldoc # "manually" with the full manpage name. # There is a list of $^O values in installperl to determine whether # the directory is 'pods' or 'pod'. However, we can avoid tight # coupling to that by simply checking the "non-standard" 'pods' # first. my $pods = "$Config::Config{privlibexp}/pods"; $pods = "$Config::Config{privlibexp}/pod" unless -d $pods; if (-f "$pods/perl$page.pod") { CORE::system( $doccmd, ( ( $manpath && !$nopathopt ) ? ( "-M", $manpath ) : () ), "perl$page" ); } } } ## end if (CORE::system($doccmd... 
} ## end sub runman #use Carp; # This did break, left for debugging =head1 DEBUGGER INITIALIZATION - THE SECOND BEGIN BLOCK Because of the way the debugger interface to the Perl core is designed, any debugger package globals that C<DB::sub()> requires have to be defined before any subroutines can be called. These are defined in the second C<BEGIN> block. This block sets things up so that (basically) the world is sane before the debugger starts executing. We set up various variables that the debugger has to have set up before the Perl core starts running: =over 4 =item * The debugger's own filehandles (copies of STDIN and STDOUT for now). =item * Characters for shell escapes, the recall command, and the history command. =item * The maximum recursion depth. =item * The size of a C<w> command's window. =item * The before-this-line context to be printed in a C<v> (view a window around this line) command. =item * The fact that we're not in a sub at all right now. =item * The default SIGINT handler for the debugger. =item * The appropriate value of the flag in C<$^D> that says the debugger is running =item * The current debugger recursion level =item * The list of postponed items and the C<$single> stack (XXX define this) =item * That we want no return values and no subroutine entry/exit trace. =back =cut # The following BEGIN is very handy if debugger goes havoc, debugging debugger? use vars qw($db_stop); BEGIN { # This does not compile, alas. (XXX eh?) $IN = \*STDIN; # For bugs before DB::OUT has been opened $OUT = \*STDERR; # For errors before DB::OUT has been opened # Define characters used by command parsing. $sh = '!'; # Shell escape (does not work) $rc = ','; # Recall command (does not work) @hist = ('?'); # Show history (does not work) @truehist = (); # Can be saved for replay (per session) # This defines the point at which you get the 'deep recursion' # warning. It MUST be defined or the debugger will not load.
$deep = 1000; # Number of lines around the current one that are shown in the # 'w' command. $window = 10; # How much before-the-current-line context the 'v' command should # use in calculating the start of the window it will display. $preview = 3; # We're not in any sub yet, but we need this to be a defined value. $sub = ''; # Set up the debugger's interrupt handler. It simply sets a flag # ($signal) that DB::DB() will check before each command is executed. $SIG{INT} = \&DB::catch; # The following lines supposedly, if uncommented, allow the debugger to # debug itself. Perhaps we can try that someday. # This may be enabled to debug debugger: #$warnLevel = 1 unless defined $warnLevel; #$dieLevel = 1 unless defined $dieLevel; #$signalLevel = 1 unless defined $signalLevel; # This is the flag that says "a debugger is running, please call # DB::DB and DB::sub". We will turn it on forcibly before we try to # execute anything in the user's context, because we always want to # get control back. $db_stop = 0; # Compiler warning ... $db_stop = 1 << 30; # ... because this is only used in an eval() later. # This variable records how many levels we're nested in debugging. # Used in the debugger prompt, and in determining whether it's all over or # not. $level = 0; # Level of recursive debugging # "Triggers bug (?) in perl if we postpone this until runtime." # XXX No details on this yet, or whether we should fix the bug instead # of work around it. Stay tuned. @stack = (0); # Used to track the current stack depth using the auto-stacked-variable # trick. $stack_depth = 0; # Localized repeatedly; simple way to track $#stack # Don't print return values on exiting a subroutine. $doret = -2; # No entry/exit tracing. $frame = 0; } ## end BEGIN BEGIN { $^W = $ini_warn; } # Switch warnings back =head1 READLINE SUPPORT - COMPLETION FUNCTION =head2 db_complete C<readline> support - adds command completion to basic C<readline>.
Returns a list of possible completions to C<readline> when invoked. C<readline> will print the longest common substring following the text already entered. If there is only a single possible completion, C<readline> will use it in full. This code uses C<map> and C<grep> heavily to create lists of possible completion. Think LISP in this section. =cut sub db_complete { # Specific code for b c l V m f O, &blah, $blah, @blah, %blah # $text is the text to be completed. # $line is the incoming line typed by the user. # $start is the start of the text to be completed in the incoming line. my ( $text, $line, $start ) = @_; # Save the initial text. # The search pattern is current package, ::, extract the next qualifier # Prefix and pack are set to undef. my ( $itext, $search, $prefix, $pack ) = ( $text, "^\Q${package}::\E([^:]+)\$" ); =head3 C<b postpone|compile> =over 4 =item * Find all the subroutines that might match in this package =item * Add C<postpone>, C<load>, and C<compile> as possibles (we may be completing the keyword itself) =item * Include all the rest of the subs that are known =item * C<grep> out the ones that match the text we have so far =item * Return this as the list of possible completions =back =cut return sort grep /^\Q$text/, ( keys %sub ), qw(postpone load compile), # subroutines ( map { /$search/ ? ($1) : () } keys %sub ) if ( substr $line, 0, $start ) =~ /^\|*[blc]\s+((postpone|compile)\s+)?$/; =head3 C<b load> Get all the possible files from C<@INC> as it currently stands and select the ones that match the text so far. =cut return sort grep /^\Q$text/, values %INC # files if ( substr $line, 0, $start ) =~ /^\|*b\s+load\s+$/; =head3 C<V> (list variable) and C<m> (list modules) There are two entry points for these commands: =head4 Unqualified package names Get the top-level packages and grab everything that matches the text so far. For each match, recursively complete the partial packages to get all possible matching packages. 
Return this sorted list. =cut return sort map { ( $_, db_complete( $_ . "::", "V ", 2 ) ) } grep /^\Q$text/, map { /^(.*)::$/ ? ($1) : () } keys %:: # top-packages if ( substr $line, 0, $start ) =~ /^\|*[Vm]\s+$/ and $text =~ /^\w*$/; =head4 Qualified package names Take a partially-qualified package and find all subpackages for it by getting all the subpackages for the package so far, matching all the subpackages against the text, and discarding all of them which start with 'main::'. Return this list. =cut return sort map { ( $_, db_complete( $_ . "::", "V ", 2 ) ) } grep !/^main::/, grep /^\Q$text/, map { /^(.*)::$/ ? ( $prefix . "::$1" ) : () } do { no strict 'refs'; keys %{ $prefix . '::' } } if ( substr $line, 0, $start ) =~ /^\|*[Vm]\s+$/ and $text =~ /^(.*[^:])::?(\w*)$/ and $prefix = $1; =head3 C<f> - switch files Here, we want to get a fully-qualified filename for the C<f> command. Possibilities are: =over 4 =item 1. The original source file itself =item 2. A file from C<@INC> =item 3. An C<eval> (the debugger gets a C<(eval N)> fake file for each C<eval>). =back =cut if ( $line =~ /^\|*f\s+(.*)/ ) { # Loaded files # We might possibly want to switch to an eval (which has a "filename" # like '(eval 9)'), so we may need to clean up the completion text # before proceeding. $prefix = length($1) - length($text); $text = $1; =pod Under the debugger, source files are represented as C<_E<lt>/fullpath/to/file> (C<eval>s are C<_E<lt>(eval NNN)>) keys in C<%main::>. We pull all of these out of C<%main::>, add the initial source file, and extract the ones that match the completion text so far. =cut return sort map { substr $_, 2 + $prefix } grep /^_<\Q$text/, ( keys %main:: ), $0; } ## end if ($line =~ /^\|*f\s+(.*)/) =head3 Subroutine name completion We look through all of the defined subs (the keys of C<%sub>) and return both all the possible matches to the subroutine name plus all the matches qualified to the current package. 
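As an illustration (the sub name here is made up), if a subroutine C<main::foo> is known to the debugger and the current package is C<main>, then:

    &fo          # completes to &foo (the unqualified match)
    &main::fo    # completes to &main::foo (the fully-qualified key in %sub)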
=cut if ( ( substr $text, 0, 1 ) eq '&' ) { # subroutines $text = substr $text, 1; $prefix = "&"; return sort map "$prefix$_", grep /^\Q$text/, ( keys %sub ), ( map { /$search/ ? ($1) : () } keys %sub ); } ## end if ((substr $text, 0, ... =head3 Scalar, array, and hash completion: partially qualified package Much like the above, except we have to do a little more cleanup: =cut if ( $text =~ /^[\$@%](.*)::(.*)/ ) { # symbols in a package =pod =over 4 =item * Determine the package that the symbol is in. Put it in C<::> (effectively C<main::>) if no package is specified. =cut $pack = ( $1 eq 'main' ? '' : $1 ) . '::'; =pod =item * Figure out the prefix vs. what needs completing. =cut $prefix = ( substr $text, 0, 1 ) . $1 . '::'; $text = $2; =pod =item * Look through all the symbols in the package. C<grep> out all the possible hashes/arrays/scalars, and then C<grep> the possible matches out of those. C<map> the prefix onto all the possibilities. =cut my @out = do { no strict 'refs'; map "$prefix$_", grep /^\Q$text/, grep /^_?[a-zA-Z]/, keys %$pack; }; =pod =item * If there's only one hit, and it's a package qualifier, and it's not equal to the initial text, re-complete it using the symbol we actually found. =cut if ( @out == 1 and $out[0] =~ /::$/ and $out[0] ne $itext ) { return db_complete( $out[0], $line, $start ); } # Return the list of possibles. return sort @out; } ## end if ($text =~ /^[\$@%](.*)::(.*)/) =pod =back =head3 Symbol completion: current package or package C<main>. =cut if ( $text =~ /^[\$@%]/ ) { # symbols (in $package + packages in main) =pod =over 4 =item * If it's C<main>, delete main to just get C<::> leading. =cut $pack = ( $package eq 'main' ? '' : $package ) . '::'; =pod =item * We set the prefix to the item's sigil, and trim off the sigil to get the text to be completed. 
=cut $prefix = substr $text, 0, 1; $text = substr $text, 1; my @out; =pod =item * We look for the lexical scope above DB::DB and auto-complete lexical variables if PadWalker could be loaded. =cut if (not $text =~ /::/ and eval { local @INC = @INC; pop @INC if $INC[-1] eq '.'; require PadWalker } ) { my $level = 1; while (1) { my @info = caller($level); $level++; $level = -1, last if not @info; last if $info[3] eq 'DB::DB'; } if ($level > 0) { my $lexicals = PadWalker::peek_my($level); push @out, grep /^\Q$prefix$text/, keys %$lexicals; } } =pod =item * If the package is C<::> (C<main>), create an empty list; if it's something else, create a list of all the packages known. Append whichever list to a list of all the possible symbols in the current package. C<grep> out the matches to the text entered so far, then C<map> the prefix back onto the symbols. =cut push @out, map "$prefix$_", grep /^\Q$text/, ( grep /^_?[a-zA-Z]/, do { no strict 'refs'; keys %$pack } ), ( $pack eq '::' ? () : ( grep /::$/, keys %:: ) ); =item * If there's only one hit, it's a package qualifier, and it's not equal to the initial text, recomplete using this symbol. =back =cut if ( @out == 1 and $out[0] =~ /::$/ and $out[0] ne $itext ) { return db_complete( $out[0], $line, $start ); } # Return the list of possibles. return sort @out; } ## end if ($text =~ /^[\$@%]/) =head3 Options We use C<option_val()> to look up the current value of the option. =cut if ( ( substr $line, 0, $start ) =~ /^\|*[oO]\b.*\s$/ ) { # Options after space # We look for the text to be matched in the list of possible options, # and fetch the current value. my @out = grep /^\Q$text/, @options; my $val = option_val( $out[0], undef ); # Set up a 'query option's value' command. my $out = '? '; if ( not defined $val or $val =~ /[\n\r]/ ) { # There's really nothing else we can do. } # We have a value. Create a proper option-setting command. elsif ( $val =~ /\s/ ) { # XXX This may be an extraneous variable.
my $found; # We'll want to quote the string (because of the embedded # whitespace), but we want to make sure we don't end up with # mismatched quote characters. We try several possibilities. foreach my $l ( split //, qq/\"\'\#\|/ ) { # If we didn't find this quote character in the value, # quote it using this quote character. $out = "$l$val$l ", last if ( index $val, $l ) == -1; } } ## end elsif ($val =~ /\s/) # Don't need any quotes. else { $out = "=$val "; } # If there were multiple possible values, return '? ', which # makes the command into a query command. If there was just one, # have readline append that. $rl_attribs->{completer_terminator_character} = ( @out == 1 ? $out : '? ' ); # Return list of possibilities. return sort @out; } ## end if ((substr $line, 0, ... =head3 Filename completion For entering filenames. We simply call C<readline>'s C<filename_list()> method with the completion text to get the possible completions. =cut return $term->filename_list($text); # filenames } ## end sub db_complete =head1 MISCELLANEOUS SUPPORT FUNCTIONS Functions that possibly ought to be somewhere else. =head2 end_report Say we're done. =cut sub end_report { local $\ = ''; print $OUT "Use 'q' to quit or 'R' to restart. 'h q' for details.\n"; } =head2 clean_ENV If we have $ini_pids, save it in the environment; else remove it from the environment. Used by the C<R> (restart) command. =cut sub clean_ENV { if ( defined($ini_pids) ) { $ENV{PERLDB_PIDS} = $ini_pids; } else { delete( $ENV{PERLDB_PIDS} ); } } ## end sub clean_ENV # PERLDBf_...
flag names from perl.h our ( %DollarCaretP_flags, %DollarCaretP_flags_r ); BEGIN { %DollarCaretP_flags = ( PERLDBf_SUB => 0x01, # Debug sub enter/exit PERLDBf_LINE => 0x02, # Keep line # PERLDBf_NOOPT => 0x04, # Switch off optimizations PERLDBf_INTER => 0x08, # Preserve more data PERLDBf_SUBLINE => 0x10, # Keep subr source lines PERLDBf_SINGLE => 0x20, # Start with single-step on PERLDBf_NONAME => 0x40, # For _SUB: no name of the subr PERLDBf_GOTO => 0x80, # Report goto: call DB::goto PERLDBf_NAMEEVAL => 0x100, # Informative names for evals PERLDBf_NAMEANON => 0x200, # Informative names for anon subs PERLDBf_SAVESRC => 0x400, # Save source lines into @{"_<$filename"} PERLDB_ALL => 0x33f, # No _NONAME, _GOTO ); # PERLDBf_LINE also enables the actions of PERLDBf_SAVESRC, so the debugger # doesn't need to set it. It's provided for the benefit of profilers and # other code analysers. %DollarCaretP_flags_r = reverse %DollarCaretP_flags; } sub parse_DollarCaretP_flags { my $flags = shift; $flags =~ s/^\s+//; $flags =~ s/\s+$//; my $acu = 0; foreach my $f ( split /\s*\|\s*/, $flags ) { my $value; if ( $f =~ /^0x([[:xdigit:]]+)$/ ) { $value = hex $1; } elsif ( $f =~ /^(\d+)$/ ) { $value = int $1; } elsif ( $f =~ /^DEFAULT$/i ) { $value = $DollarCaretP_flags{PERLDB_ALL}; } else { $f =~ /^(?:PERLDBf_)?(.*)$/i; $value = $DollarCaretP_flags{ 'PERLDBf_' . uc($1) }; unless ( defined $value ) { print $OUT ( "Unrecognized \$^P flag '$f'!\n", "Acceptable flags are: " . join( ', ', sort keys %DollarCaretP_flags ), ", and hexadecimal and decimal numbers.\n" ); return undef; } } $acu |= $value; } $acu; } sub expand_DollarCaretP_flags { my $DollarCaretP = shift; my @bits = ( map { my $n = ( 1 << $_ ); ( $DollarCaretP & $n ) ? ( $DollarCaretP_flags_r{$n} || sprintf( '0x%x', $n ) ) : () } 0 .. 31 ); return @bits ? join( '|', @bits ) : 0; } =over 4 =item rerun Rerun the current session, replaying the saved command history (C<@truehist>) up to a given command number (or, with a negative argument, back that many commands).
=cut sub rerun { my $i = shift; my @args; pop(@truehist); # strim unless (defined $truehist[$i]) { print "Unable to return to non-existent command: $i\n"; } else { $#truehist = ($i < 0 ? $#truehist + $i : $i > 0 ? $i : $#truehist); my @temp = @truehist; # store push(@DB::typeahead, @truehist); # saved @truehist = @hist = (); # flush @args = restart(); # setup get_list("PERLDB_HIST"); # clean set_list("PERLDB_HIST", @temp); # reset } return @args; } =item restart Restarting the debugger is a complex operation that occurs in several phases. First, we try to reconstruct the command line that was used to invoke Perl and the debugger. =cut sub restart { # I may not be able to resurrect you, but here goes ... print $OUT "Warning: some settings and command-line options may be lost!\n"; my ( @script, @flags, $cl ); # If warn was on before, turn it on again. push @flags, '-w' if $ini_warn; # Rebuild the -I flags that were on the initial # command line. for (@ini_INC) { push @flags, '-I', $_; } # Turn on taint if it was on before. push @flags, '-T' if ${^TAINT}; # Arrange for setting the old INC: # Save the current @init_INC in the environment. set_list( "PERLDB_INC", @ini_INC ); # If this was a perl one-liner, go to the "file" # corresponding to the one-liner read all the lines # out of it (except for the first one, which is going # to be added back on again when 'perl -d' runs: that's # the 'require perl5db.pl;' line), and add them back on # to the command line to be executed. if ( $0 eq '-e' ) { my $lines = *{$main::{'_<-e'}}{ARRAY}; for ( 1 .. $#$lines ) { # The first line is PERL5DB chomp( $cl = $lines->[$_] ); push @script, '-e', $cl; } } ## end if ($0 eq '-e') # Otherwise we just reuse the original name we had # before. else { @script = $0; } =pod After the command line has been reconstructed, the next step is to save the debugger's status in environment variables. 
The C<DB::set_list> routine is used to save aggregate variables (both hashes and arrays); scalars are just popped into environment variables directly. =cut # If the terminal supported history, grab it and # save that in the environment. set_list( "PERLDB_HIST", $term->Features->{getHistory} ? $term->GetHistory : @hist ); # Find all the files that were visited during this # session (i.e., the debugger had magic hashes # corresponding to them) and stick them in the environment. my @had_breakpoints = keys %had_breakpoints; set_list( "PERLDB_VISITED", @had_breakpoints ); # Save the debugger options we chose. set_list( "PERLDB_OPT", %option ); # set_list( "PERLDB_OPT", options2remember() ); # Save the break-on-loads. set_list( "PERLDB_ON_LOAD", %break_on_load ); =pod The most complex part of this is the saving of all of the breakpoints. They can live in an awful lot of places, and we have to go through all of them, find the breakpoints, and then save them in the appropriate environment variable via C<DB::set_list>. =cut # Go through all the breakpoints and make sure they're # still valid. my @hard; for ( 0 .. $#had_breakpoints ) { # We were in this file. my $file = $had_breakpoints[$_]; # Grab that file's magic line hash. *dbline = $main::{ '_<' . $file }; # Skip out if it doesn't exist, or if the breakpoint # is in a postponed file (we'll do postponed ones # later). next unless %dbline or $postponed_file{$file}; # In an eval. This is a little harder, so we'll # do more processing on that below. ( push @hard, $file ), next if $file =~ /^\(\w*eval/; # XXX I have no idea what this is doing. Yet. my @add; @add = %{ $postponed_file{$file} } if $postponed_file{$file}; # Save the list of all the breakpoints for this file. set_list( "PERLDB_FILE_$_", %dbline, @add ); # Serialize the extra data %breakpoints_data hash. # That's a bug fix. set_list( "PERLDB_FILE_ENABLED_$_", map { _is_breakpoint_enabled($file, $_) ? 1 : 0 } sort { $a <=> $b } keys(%dbline) ) } ## end for (0 .. 
$#had_breakpoints) # The breakpoint was inside an eval. This is a little # more difficult. XXX and I don't understand it. foreach my $hard_file (@hard) { # Get over to the eval in question. *dbline = $main::{ '_<' . $hard_file }; my $quoted = quotemeta $hard_file; my %subs; for my $sub ( keys %sub ) { if (my ($n1, $n2) = $sub{$sub} =~ /\A$quoted:(\d+)-(\d+)\z/) { $subs{$sub} = [ $n1, $n2 ]; } } unless (%subs) { print {$OUT} "No subroutines in $hard_file, ignoring breakpoints.\n"; next; } LINES: foreach my $line ( keys %dbline ) { # One breakpoint per sub only: my ( $offset, $found ); SUBS: foreach my $sub ( keys %subs ) { if ( $subs{$sub}->[1] >= $line # Not after the subroutine and ( not defined $offset # Not caught or $offset < 0 ) ) { # or badly caught $found = $sub; $offset = $line - $subs{$sub}->[0]; if ($offset >= 0) { $offset = "+$offset"; last SUBS; } } ## end if ($subs{$sub}->[1] >=... } ## end for $sub (keys %subs) if ( defined $offset ) { $postponed{$found} = "break $offset if $dbline{$line}"; } else { print {$OUT} ("Breakpoint in ${hard_file}:$line ignored:" . " after all the subroutines.\n"); } } ## end for $line (keys %dbline) } ## end for (@hard) # Save the other things that don't need to be # processed. set_list( "PERLDB_POSTPONE", %postponed ); set_list( "PERLDB_PRETYPE", @$pretype ); set_list( "PERLDB_PRE", @$pre ); set_list( "PERLDB_POST", @$post ); set_list( "PERLDB_TYPEAHEAD", @typeahead ); # We are officially restarting. $ENV{PERLDB_RESTART} = 1; # We are junking all child debuggers. delete $ENV{PERLDB_PIDS}; # Restore ini state # Set this back to the initial pid. $ENV{PERLDB_PIDS} = $ini_pids if defined $ini_pids; =pod After all the debugger status has been saved, we take the command we built up and then return it, so we can C<exec()> it. The debugger will spot the C<PERLDB_RESTART> environment variable and realize it needs to reload its state from the environment. =cut # And run Perl again. 
Add the "-d" flag, all the # flags we built up, the script (whether a one-liner # or a file), add on the -emacs flag for a slave editor, # and then the old arguments. return ($^X, '-d', @flags, @script, ($slave_editor ? '-emacs' : ()), @ARGS); }; # end restart =back =head1 END PROCESSING - THE C<END> BLOCK Come here at the very end of processing. We want to go into a loop where we allow the user to enter commands and interact with the debugger, but we don't want anything else to execute. First we set the C<$finished> variable, so that some commands that shouldn't be run after the end of program quit working. We then figure out whether we're truly done (as in the user entered a C<q> command, or we finished execution while running nonstop). If we aren't, we set C<$single> to 1 (causing the debugger to get control again). We then call C<DB::fake::at_exit()>, which returns the C<Use 'q' to quit ...> message and returns control to the debugger. Repeat. When the user finally enters a C<q> command, C<$fall_off_end> is set to 1 and the C<END> block simply exits with C<$single> set to 0 (don't break, run to completion). =cut END { $finished = 1 if $inhibit_exit; # So that some commands may be disabled. $fall_off_end = 1 unless $inhibit_exit; # Do not stop in at_exit() and destructors on exit: if ($fall_off_end or $runnonstop) { save_hist(); } else { $DB::single = 1; DB::fake::at_exit(); } } ## end END =head1 PRE-5.8 COMMANDS These are the implementations of the debugger commands as they worked before the 5.8 command realignment; the C<CommandSet> option selects between the old and the new command sets. =head2 Null command Does nothing. Used to I<turn off> commands. =cut sub cmd_pre580_null { # do nothing... } =head2 Old C<a> command. This version added actions if you supplied them, and deleted them if you didn't. =cut sub cmd_pre580_a { my $xcmd = shift; my $cmd = shift; # Argument supplied. Add the action. if ( $cmd =~ /^(\d*)\s*(.*)/ ) { # If the line isn't there, use the current line. my $i = $1 || $line; my $j = $2; # If there is an action ... if ( length $j ) { # ... but the line isn't breakable, skip it.
if ( $dbline[$i] == 0 ) { print $OUT "Line $i may not have an action.\n"; } else { # ... and the line is breakable: # Mark that there's an action in this file. $had_breakpoints{$filename} |= 2; # Delete any current action. $dbline{$i} =~ s/\0[^\0]*//; # Add the new action, continuing the line as needed. $dbline{$i} .= "\0" . action($j); } } ## end if (length $j) # No action supplied. else { # Delete the action. $dbline{$i} =~ s/\0[^\0]*//; # Mark as having no break or action if nothing's left. delete $dbline{$i} if $dbline{$i} eq ''; } } ## end if ($cmd =~ /^(\d*)\s*(.*)/) } ## end sub cmd_pre580_a =head2 Old C<b> command Add breakpoints. =cut sub cmd_pre580_b { my $xcmd = shift; my $cmd = shift; my $dbline = shift; # Break on load. if ( $cmd =~ /^load\b\s*(.*)/ ) { my $file = $1; $file =~ s/\s+$//; cmd_b_load($file); } # b compile|postpone <some sub> [<condition>] # The interpreter actually traps this one for us; we just put the # necessary condition in the %postponed hash. elsif ( $cmd =~ /^(postpone|compile)\b\s*([':A-Za-z_][':\w]*)\s*(.*)/ ) { # Capture the condition if there is one. Make it true if none. my $cond = length $3 ? $3 : '1'; # Save the sub name and set $break to 1 if $1 was 'postpone', 0 # if it was 'compile'. my ( $subname, $break ) = ( $2, $1 eq 'postpone' ); #} = $break ? "break +0 if $cond" : "compile"; } ## end elsif ($cmd =~ ... # b <sub name> [<condition>] elsif ( $cmd =~ /^([':A-Za-z_][':\w]*(?:\[.*\])?)\s*(.*)/ ) { my $subname = $1; my $cond = length $2 ? $2 : '1'; cmd_b_sub( $subname, $cond ); } # b <line> [<condition>]. elsif ( $cmd =~ /^(\d*)\s*(.*)/ ) { my $i = $1 || $dbline; my $cond = length $2 ? $2 : '1'; cmd_b_line( $i, $cond ); } } ## end sub cmd_pre580_b =head2 Old C<D> command. Delete all breakpoints unconditionally. =cut sub cmd_pre580_D { my $xcmd = shift; my $cmd = shift; if ( $cmd =~ /^\s*$/ ) { print $OUT "Deleting all breakpoints...\n"; # %had_breakpoints lists every file that had at least one # breakpoint in it. 
my $file; for $file ( keys %had_breakpoints ) { # Switch to the desired file temporarily. local *dbline = $main::{ '_<' . $file }; $max = $#dbline; my $was; # For all lines in this file ... for my $i (1 .. $max) { # If there's a breakpoint or action on this line ... if ( defined $dbline{$i} ) { # ... remove the breakpoint. $dbline{$i} =~ s/^[^\0]+//; if ( $dbline{$i} =~ s/^\0?$// ) { # Remove the entry altogether if no action is there. delete $dbline{$i}; } } ## end if (defined $dbline{$i... } ## end for my $i (1 .. $max) # If, after we turn off the "there were breakpoints in this file" # bit, the entry in %had_breakpoints for this file is zero, # we should remove this file from the hash. if ( not $had_breakpoints{$file} &= ~1 ) { delete $had_breakpoints{$file}; } } ## end for $file (keys %had_breakpoints) # Kill off all the other breakpoints that are waiting for files that # haven't been loaded yet. undef %postponed; undef %postponed_file; undef %break_on_load; } ## end if ($cmd =~ /^\s*$/) } ## end sub cmd_pre580_D =head2 Old C<h> command Print help. Defaults to printing the long-form help; the 5.8 version prints the summary by default. =cut sub cmd_pre580_h { my $xcmd = shift; my $cmd = shift; # Print the *right* help, long format. if ( $cmd =~ /^\s*$/ ) { print_help($pre580_help); } # 'h h' - explicitly-requested summary. elsif ( $cmd =~ /^h\s*/ ) { print_help($pre580_summary); } # Find and print a command's help. elsif ( $cmd =~ /^h\s+(\S.*)$/ ) { my $asked = $1; # for proper errmsg my $qasked = quotemeta($asked); # for searching # XXX: finds CR but not <CR> if ( $pre580_help =~ /^ <? # Optional '<' (?:[IB]<) # Optional markup $qasked # The command name /mx ) { while ( $pre580_help =~ /^ ( # The command help: <? # Optional '<' (?:[IB]<) # Optional markup $qasked # The command name ([\s\S]*?) 
# Lines starting with tabs \n # Final newline ) (?!\s)/mgx ) # Line not starting with space # (Next command's help) { print_help($1); } } ## end if ($pre580_help =~ /^<?(?:[IB]<)$qasked/m) # Help not found. else { print_help("B<$asked> is not a debugger command.\n"); } } ## end elsif ($cmd =~ /^h\s+(\S.*)$/) } ## end sub cmd_pre580_h =head2 Old C<W> command C<W E<lt>exprE<gt>> adds a watch expression, C<W> deletes them all. =cut sub cmd_pre580_W { my $xcmd = shift; my $cmd = shift; # Delete all watch expressions. if ( $cmd =~ /^$/ ) { # No watching is going on. $trace &= ~2; # Kill all the watch expressions and values. @to_watch = @old_watch = (); } # Add a watch expression. elsif ( $cmd =~ /^(.*)/s ) { # add it to the list to be watched. push @to_watch, $1; # Get the current value of the expression. # Doesn't handle expressions returning list values! $evalarg = $1; # The &-call is here to ascertain the mutability of @_. my ($val) = &DB::eval; $val = ( defined $val ) ? "'$val'" : 'undef'; # Save it. push @old_watch, $val; # We're watching stuff. $trace |= 2; } ## end elsif ($cmd =~ /^(.*)/s) } ## end sub cmd_pre580_W =head1 PRE-AND-POST-PROMPT COMMANDS AND ACTIONS The debugger used to have a bunch of nearly-identical code to handle the pre-and-post-prompt action commands. C<cmd_pre590_prepost> and C<cmd_prepost> unify all this into one set of code to handle the appropriate actions. =head2 C<cmd_pre590_prepost> A small wrapper around C<cmd_prepost>; it makes sure that the default doesn't do something destructive. In pre 5.8 debuggers, the default action was to delete all the actions. =cut sub cmd_pre590_prepost { my $cmd = shift; my $line = shift || '*'; my $dbline = shift; return cmd_prepost( $cmd, $line, $dbline ); } ## end sub cmd_pre590_prepost =head2 C<cmd_prepost> Actually does all the handling for C<E<lt>>, C<E<gt>>, C<{{>, C<{>, etc. 
Since the lists of actions are all held in arrays that are pointed to by references anyway, all we have to do is pick the right array reference and then use generic code to all, delete, or list actions. =cut sub cmd_prepost { my $cmd = shift; # No action supplied defaults to 'list'. my $line = shift || '?'; # Figure out what to put in the prompt. my $which = ''; # Make sure we have some array or another to address later. # This means that if for some reason the tests fail, we won't be # trying to stash actions or delete them from the wrong place. my $aref = []; # < - Perl code to run before prompt. if ( $cmd =~ /^\</o ) { $which = 'pre-perl'; $aref = $pre; } # > - Perl code to run after prompt. elsif ( $cmd =~ /^\>/o ) { $which = 'post-perl'; $aref = $post; } # { - first check for properly-balanced braces. elsif ( $cmd =~ /^\{/o ) { if ( $cmd =~ /^\{.*\}$/o && unbalanced( substr( $cmd, 1 ) ) ) { print $OUT "$cmd is now a debugger command\nuse ';$cmd' if you mean Perl code\n"; } # Properly balanced. Pre-prompt debugger actions. else { $which = 'pre-debugger'; $aref = $pretype; } } ## end elsif ( $cmd =~ /^\{/o ) # Did we find something that makes sense? unless ($which) { print $OUT "Confused by command: $cmd\n"; } # Yes. else { # List actions. if ( $line =~ /^\s*\?\s*$/o ) { unless (@$aref) { # Nothing there. Complain. print $OUT "No $which actions.\n"; } else { # List the actions in the selected list. print $OUT "$which commands:\n"; foreach my $action (@$aref) { print $OUT "\t$cmd -- $action\n"; } } ## end else } ## end if ( $line =~ /^\s*\?\s*$/o) # Might be a delete. else { if ( length($cmd) == 1 ) { if ( $line =~ /^\s*\*\s*$/o ) { # It's a delete. Get rid of the old actions in the # selected list.. @$aref = (); print $OUT "All $cmd actions cleared.\n"; } else { # Replace all the actions. (This is a <, >, or {). @$aref = action($line); } } ## end if ( length($cmd) == 1) elsif ( length($cmd) == 2 ) { # Add the action to the line. (This is a <<, >>, or {{). 
push @$aref, action($line); } else { # <<<, >>>>, {{{{{{ ... something not a command. print $OUT "Confused by strange length of $which command($cmd)...\n"; } } ## end else [ if ( $line =~ /^\s*\?\s*$/o) } ## end else } ## end sub cmd_prepost =head1 C<DB::fake> Contains the C<at_exit> routine that the debugger uses to issue the C<Debugged program terminated ...> message after the program completes. See the C<END> block documentation for more details. =cut package DB::fake; sub at_exit { "Debugged program terminated. Use 'q' to quit or 'R' to restart."; } package DB; # Do not trace this 1; below! 1;
https://metacpan.org/dist/perl/source/lib/perl5db.pl
To help you experience EDAS, Quick Start describes how to deploy microservice application demos based on Spring Cloud and Apache Dubbo (Dubbo for short) to different EDAS environments. Each deployment scenario can be completed within 30 minutes.

EDAS provides one to five pay-as-you-go application instances for free. If you have activated pay-as-you-go Standard Edition for EDAS and deploy no more than five application instances, you are not charged for using EDAS. However, you must pay for the Alibaba Cloud resources that you use to create the Elastic Compute Service (ECS) and Server Load Balancer (SLB) instances until you release these resources.

Nacos is used as the registry for the application demo that is provided by EDAS. You can also create a registry, or use MSE to host another type of registry, such as Eureka or ZooKeeper, based on your requirements. Make sure that the registry and the environment where the applications are to be deployed are connected over a network. If this condition is met, you can deploy your applications to EDAS and use its application hosting, microservice, and cloud-native application PaaS platform capabilities without modifying code. For more information, see What is EDAS?.

You can deploy the demos to either of the following environments:

- Default environment: the default ECS cluster in the default VPC and the default namespace that are provided by EDAS. EDAS provides a default environment only for ECS clusters, not for Kubernetes clusters.
- Custom environment: the ECS cluster or the Kubernetes cluster that you create.
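As an illustration only (this fragment is not taken from the EDAS documentation), a Spring Cloud application typically points at a Nacos registry through a few lines of configuration. The application name and server address below are invented placeholders:

```yaml
# Hypothetical application.yml for a Spring Cloud demo application
# that registers itself with a Nacos registry.
spring:
  application:
    name: demo-provider          # placeholder service name
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848   # placeholder Nacos address
```

Because registration is driven entirely by configuration like this, pointing the same application at a registry reachable from the EDAS environment requires no code changes.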
https://www.alibabacloud.com/help/doc-detail/168115.htm
For in-depth information on various Big Data technologies, check out my free e-book "Introduction to Big Data".

In YARN, the Resource Manager is a single point of failure (SPOF). Multiple Resource Manager instances can be brought up for fault tolerance, but only one instance is Active. When the Active goes down or becomes unresponsive, another Resource Manager has to be elected to be the Active. Such a leader election problem is common for distributed systems with an active/standby design. YARN relies on ZooKeeper for electing the new Active. In fact, distributed systems also face other common problems such as naming services, configuration management, synchronization, and group membership. ZooKeeper is a highly reliable distributed coordination service for all these use cases. Higher-order constructs, e.g., barriers, message queues, locks, two-phase commit, and leader election, can also be implemented with ZooKeeper. In the rest of the book, we will find that many distributed services depend on ZooKeeper, which is exactly the goal of ZooKeeper: implement the coordination service once, and well, and share it among many distributed applications. Essentially, ZooKeeper is a distributed in-memory CP data store with the following characteristics.

Data Model

ZooKeeper has a hierarchical namespace, much like a file system. The major difference is that each node (called a znode) in the namespace, both internal nodes and leaves, can have associated data. The data stored at each znode is accessed atomically: reads get all the data bytes associated with a znode, and a write replaces them. To achieve high throughput and low latency, ZooKeeper keeps all the data in main memory. For recoverability, updates are logged to disk, and the whole data tree is also snapshot in a fuzzy way (of both the data content and snapshot frequency). So ZooKeeper is like an in-memory key-value data store whose key namespace is organized in a tree structure.
However, ZooKeeper is not intended to be used as a general database or large object store. In fact, the ZooKeeper client and server implementations have sanity checks to ensure that znodes have less than 1MB of data. In practice, the data should be on the scale of kilobytes on average, as ZooKeeper is designed to manage coordination data such as configuration, status information, and rendezvous points. Each znode has an Access Control List (ACL) and a stat structure that includes timestamps and version numbers for data changes and ACL changes. ZooKeeper stamps each update with a zxid (ZooKeeper Transaction Id), which exposes the total ordering of all changes to ZooKeeper. When a znode's data or ACL changes, the corresponding version number increases too. For every read, the client also receives the version of the data, and when it performs an update or a delete, it must supply the version of the data. If the version it supplies doesn't match the current version of the data, the update will fail. Clients can also set watches on znodes. A watch is a one-time trigger: a change to a znode triggers the watch associated with it and then clears the watch. When a watch triggers, the client receives a notification from ZooKeeper. Watches are sent asynchronously to watchers, but ZooKeeper guarantees that a client will see a watch event for a znode it is watching before seeing the new data that corresponds to that znode. Besides, the order of watch events from ZooKeeper corresponds to the order of the updates as seen by the ZooKeeper service. ZooKeeper also has special ephemeral nodes, which exist as long as the session that created them is active. When the session ends, the znode is deleted. With ephemeral nodes, we can easily implement group membership for distributed systems: the group is represented by a znode, and each group member creates an ephemeral node under the group node.
If a member leaves or fails abnormally, the corresponding znode is deleted automatically when ZooKeeper detects the failure. Another special kind of znode is the sequence node, whose name is automatically appended with a monotonically increasing counter by ZooKeeper. This counter is unique to the parent znode. A simple way of implementing leader election with ZooKeeper is to use sequence and ephemeral nodes under a group node. The process that created the znode with the smallest appended sequence number is the leader. If the group size is not very big, all application processes can watch the current smallest znode. If the leader goes offline, the corresponding ephemeral node is removed, and all other processes can observe who the new leader is. If the group is very large, this design may cause a burst of operations that ZooKeeper has to process, referred to as the "herd effect". An alternative approach is for each process to watch only the largest znode that is smaller than its own znode. When a process receives a notification that the smallest znode is removed, it then executes the leader procedure. This avoids the herd effect because only one process is notified. With watches and sequence nodes, one may also implement message queues with ZooKeeper. However, just as ZooKeeper should not be used as a general database, it is not recommended to replace a normal message queue with ZooKeeper. The design of ZooKeeper does not fit the typical use cases of message queues: the performance of ZooKeeper is bad if there are many nodes with thousands of children, and the 1MB size limit of ZooKeeper also prevents large messages.

Atomic Broadcast
More specifically, the service requires at least 2f+1 servers to tolerate up to f crash failures. In practice, a ZooKeeper service usually consists of three to seven machines. Because ZooKeeper requires a majority, it is best to use an odd number of machines. Every ZooKeeper server services clients, and clients connect to exactly one server. To create a client session, the application code must provide a connection string containing a comma-separated list of host:port pairs, each corresponding to a ZooKeeper server. The ZooKeeper client library picks an arbitrary server and tries to connect to it. If the client becomes disconnected from the server, the client automatically tries the next server in the list until a connection is re-established. To provide high read throughput, ZooKeeper services read requests from the local replica of state at each server. In contrast, all write requests are forwarded to a single server, referred to as the leader. The leader uses an atomic broadcast protocol, called Zab, to keep all the servers in sync. Such a leader is elected through a leader election algorithm and synchronized with a quorum of other servers, called followers. By sending all updates through the leader, non-idempotent requests are transformed into idempotent transactions. To guarantee the correct transformation, ZooKeeper enforces that there is only one leader in Zab. And the Zab protocol meets the following requirements:

- Reliable delivery: if a message m is delivered by one server, it will eventually be delivered by all correct servers.
- Total order: if message m is delivered before message m' by one server, then m is delivered before m' by all servers.
- Causal order: if message m' is sent after m has been delivered by the sender of m', then m must be ordered before m'.

Zab at a high level is a leader-based protocol similar to Paxos. Compared to Paxos, Zab is primarily designed for primary-backup systems rather than for state machine replication. The Zab protocol consists of two modes: recovery/leader activation and broadcast/active messaging. When the service starts or after a leader failure, Zab transitions to recovery mode. Recovery mode ends when a leader emerges and a quorum of servers have synchronized their state with the leader.
Synchronizing their state consists of guaranteeing that the leader and the new server have the same state. Once a leader has a quorum of synchronized followers, it accepts messages to propose and coordinates message delivery. The broadcast looks just like two-phase commit without the need to handle aborts, and all communication channels are assumed to be FIFO:

- The leader sends proposals to all followers in the order that requests have been received. Before proposing a message, the leader assigns it a monotonically increasing unique zxid.
- Followers process messages in the order they are received.
- The leader issues a COMMIT to all followers as soon as a quorum of followers have ACKed a message.
- Followers deliver the message when they receive the COMMIT from the leader.
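The sequence-and-ephemeral-node election scheme described earlier is easy to simulate without a real ZooKeeper connection. The sketch below is plain Python with invented node names; it shows how each process derives the leader and its single watch target from the children of an election znode, and why only one process is notified when the leader's node disappears:

```python
def election_view(seq_nodes):
    """Given the children of an election znode (e.g. 'n_0000000007'),
    return (leader, watch_map), where watch_map tells each non-leader
    node which single predecessor node it should watch."""
    # Sort by the numeric suffix ZooKeeper appended to each name.
    ordered = sorted(seq_nodes, key=lambda n: int(n.rsplit("_", 1)[1]))
    leader = ordered[0]
    # Each non-leader watches only the node just before it,
    # which is what avoids the herd effect.
    watch_map = {node: ordered[i - 1] for i, node in enumerate(ordered) if i > 0}
    return leader, watch_map


nodes = ["n_0000000012", "n_0000000007", "n_0000000009"]
leader, watches = election_view(nodes)
print(leader)   # n_0000000007
print(watches)  # {'n_0000000009': 'n_0000000007', 'n_0000000012': 'n_0000000009'}
```

If the leader's ephemeral node vanishes, only the holder of `n_0000000009` is notified; it re-derives the view from the remaining children and finds itself the new leader.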
http://126kr.com/article/69dsr5jac75
Beginner Error, Urgent help please!

983060 Jan 5, 2013 4:45 PM

Hello, I am a beginner at programming and I am currently going through the Database 11g XE Two Day Plus Java tutorial, so I am using JDeveloper. All has been going well, until I came upon the filter-by-name portion. I have followed the instructions to a tee, and after several checks everything matches with the tutorial. However, I am receiving an error. I am using a JSP page and two classes: DataHandler.java and JavaClient.java. The getEmployeesByName method is located inside DataHandler.java, as well as the getAllEmployees method that I used earlier in the tutorial. However, when I add in the getEmployeesByName call on the JSP after a form, I receive an error saying that method getEmployeesByName could not be found. It has been an hour and I can't find my mistake.

Here is my code. For JavaClient.java:

package hr;

import java.sql.ResultSet;

public class JavaClient {
    public JavaClient() {
        super();
    }

    public static void main(String[] args) throws Exception {
        DataHandler datahandler = new DataHandler();
        ResultSet rset = datahandler.getAllEmployees();
        while (rset.next()) {
            System.out.println(rset.getInt(1) + " " + rset.getString(2) + " " +
                rset.getString(3) + " " + rset.getString(4));
            rset = datahandler.getEmployeesByName("King");
            System.out.println("\nResults from query: ");
            while (rset.next()) {
                System.out.println(rset.getInt(1) + " " + rset.getString(2) + " " +
                    rset.getString(3) + " " + rset.getString(4));
            }
        }
    }
}

For DataHandler.java:

package hr;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import oracle.jdbc.pool.OracleDataSource;

public class DataHandler {
    public DataHandler() {
    }

    String jdbcUrl = null;
    String userid = null;
    String password = null;
    Connection conn;
    Statement stmt;
    ResultSet rset;
    String query;
    String sqlString;

    public void getDBConnection() throws SQLException {
        OracleDataSource ds;
        ds = new OracleDataSource();
        ds.setURL(jdbcUrl);
        conn = ds.getConnection(userid, password);
    };
    }

    public ResultSet getEmployeesByName(String name) throws SQLException {
        name = name.toUpperCase();
        getDBConnection();
        stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
            ResultSet.CONCUR_READ_ONLY);
        query = "SELECT * FROM Employees WHERE UPPER(first_name) LIKE \'%" + name + "%\'" +
            " OR UPPER(last_name) LIKE \'%" + name + "%\' ORDER BY employee_id";
        System.out.println("\nExecuting query: " + query);
        rset = stmt.executeQuery(query);
        return rset;
    }
}

And for the JSP page:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "">
<%@ page
<title>employees</title>
<link type="text/css" rel="stylesheet" href="resources/css/jdeveloper.css"/>
</head>
<body>
<h2 align="center"> AnyCo Corporation </h2>
<h3> Employee Data </h3>
<jsp:useBean
<form action="employees.jsp">
Filter by Name: <input type="text" name="query"/>
<input type="submit" value="Filter"/>
</form>
<>
<td width="17%"> <h4> First Name </h4> </td>
<td width="17%"> <h4> Last Name </h4> </td>
<td width="17%"> <h4> Email </h4> </td>
<td width="17%"> <h4> Job </h4> </td>
<td width="16%"> <h4> Phone </h4> </td>
<td width="16%"> <h4> Salary </h4> </td>
</tr>
<>");
}%>
</table>
</body>
</html>

I know this is a very big question, but I am lost and the tutorial isn't offering any further guidance. I have a deadline to meet soon, so thanks in advance for help.

1. Re: Beginner Error, Urgent help please!

Timo Hahn Jan 5, 2013 2:06 PM (in response to 983060)

User, welcome to the forum. Please read the FAQ () and format the code you post in this forum. Right now it's too hard to read. Next, you should always tell us the exact JDev version you are using.
11g is not enough as there are a couple of 11g versions out in the wild. May I ask you another question: why do you use plain JSP as the UI technology? This is quite old. Nowadays JSF and ADF are popular.

Timo

2. Re: Beginner Error, Urgent help please!

983060 Jan 5, 2013 6:02 PM (in response to Timo Hahn)

I edited it, I apologize for that. Also, I am doing this for a competitive project due in under two weeks. I must be able to show all the source code. So do JSF or ADF allow you to view all source? Also, which one would be easier to create an app that queries data from a database, edits data in the database, and provides login functionality? The app must work offline as well.

Edited by: DanDan Sc on Jan 5, 2013 10:56 AM

3. Re: Beginner Error, Urgent help please!

dvohra21 Jan 5, 2013 9:35 PM (in response to 983060)

Use JSF with an associated managed bean for bean properties. With JSF, the bean properties may be used in a JSF page with an EL expression.

4. Re: Beginner Error, Urgent help please!

Timo Hahn Jan 6, 2013 1:05 AM (in response to 983060)

Thanks for formatting the code. Much easier to read. What I don't get are your requirements. Are you trying to create a web application? A standalone application? If yes, what do you mean by 'the app must run offline too'? Without a DB? Without a web server?

Timo

5. Re: Beginner Error, Urgent help please!

983060 Jan 6, 2013 3:10 AM (in response to Timo Hahn)

Well, the problem with this code is that I receive an error on the JSP page saying "method getEmployeesByName could not be found". But my requirements are: the application must be put on a USB drive with an executable file to run the app. It must include database files saved as .txt files, which will be queried and edited through the application. I am overwhelmed with all the options, and my deadline is rapidly approaching. What would you recommend to do this? Thank you in advance for the help.

6.
Re: Beginner Error, Urgent help please!

983060 Jan 7, 2013 12:26 AM (in response to 983060)

Does anyone have suggestions on how to do such a thing?

7. Re: Beginner Error, Urgent help please!

bigchill Jan 7, 2013 2:08 AM (in response to 983060)

Hi Dan. Firstly, looking at your example app, it seems like you are not using ADF but only an Oracle DB; apart from that, it's pure Java/JDBC and a DAO. Secondly, your requirements are not clear, or overloaded. I'm going to assume (a) you know what your requirements are and (b) you have selected the technologies based on your initial research. So I'm not going to veer from all that or suggest you something else. If you answered YES to (a) and (b) then fine, else let me know, as it's crucial. Going with what you have, please answer the following:

1) explain how you deploy this app
2) post the full error message

8. Re: Beginner Error, Urgent help please!

983060 Jan 7, 2013 2:16 AM (in response to bigchill)

After doing much more research, I have decided to start over and use an offline database inside JDeveloper, and I will likely use ADF or JSF to develop it now. I'm figuring it out as I go. The thing I'm now stuck on is: how do I get the data files in my database to save as .txt files? JDev says that they are saved as .xml files, so do I have a choice here? That is my only specific question now. I'm basically just jumping in and looking at examples to learn how to do the things I need.

9. Re: Beginner Error, Urgent help please!

bigchill Jan 7, 2013 2:27 AM (in response to 983060)

When you say offline database, do you mean a stub Java class with helper methods that returns data back based on hardcoded fields in the class? If YES, then fine. By doing this you leave out the JDBC connection/driver and other SQL code in your DAO and make it simpler. If NO, then what do you mean by offline DB?

10.
Re: Beginner Error, Urgent help please!

983060 Jan 7, 2013 2:52 AM (in response to bigchill)

Well, I created a new project and right-clicked and selected New > Database Tier > Offline Database Objects > Offline Database in JDeveloper. Then I created tables with the columns I needed. I'm going to be honest, I have no experience with this. In general, this is what I need to do: develop a program that is executable off of a CD or USB drive that begins with a login page, then a home page. Then the user must be able to navigate to pages where they can add employees or contracted employers, and edit their data (address, phone, etc.). Then they must be able to go to a page that is unique for each employee to perform an evaluation. They will input numerical scores and comments, and save the evaluation. Then they must be able to go to the employee's page and pull reports of saved evaluations. Other reports must be possible, such as: all employees, all employers, and a more detailed report for each individual. So if you were tasked with this, how would you do it? I find myself buried with all the different methods JDeveloper provides.

11. Re: Beginner Error, Urgent help please!

bigchill Jan 7, 2013 3:30 AM (in response to 983060)

DanDan, your answer can be divided into 2 very broad categories: 1) non-functional requirements and 2) functional requirements. Point 1 is how the system is deployed or usable, and point 2 is how the system should behave from a user perspective, which could be divided into 1 or more use cases. Your app seems to have more than half a dozen use cases - most likely more, looking at it from a high level...

If I was tasked with this, CASE 1 (assumption here is Knowledge=BEGINNER) and only 2 weeks given until completion: then I'd work with the technologies I already know and try not to do it with a new technology, given ADF's high sophistication, customisation capabilities and complexity.
Having said this, it can still be possible, but you will require a lot, and I still don't think 2 weeks will do it.

CASE 2 (assumption here is Knowledge=INTER-ADV) and only 2 weeks given until completion: then this would be a standalone app deployed on a memory device that is runnable by launching the application. Currently my work is using ADF, web and JEE technologies. Standalone application development has evolved rapidly since I last did my bit in that space. Having said that, I'd create the same application without a JSP: I'd use Swing as a frontend, have a database since you have a lot of data manipulation like CRUD etc., and then deploy the application as an executable jar file which, when double-clicked, launches itself. The last time I did something like this, it was with Java Web Start, and now it might have evolved into something better. Again here, if you are going to LEARN Swing then you'll fall into the CASE 1 bracket, which will alter your estimated completion time. And at the end of the day, how I'd go about this problem/research would be different based on the individual and what resources one has at their disposal.

12. Re: Beginner Error, Urgent help please!

983060 Jan 7, 2013 3:59 AM (in response to bigchill)

Alright, I'll see what I can do. Thank you for the help.
https://community.oracle.com/message/10779637
codeEditor

The source code editor is another scripting triumph. I started it last Friday by simply copying the PythonCard textEditor sample, creating a simple component wrapper around the wxPython wxStyledTextCtrl, changing the component used from a TextArea to the new CodeEditor component, and in 15 minutes I had a syntax-highlighting Python source code editor. wxStyledTextCtrl is in turn a wrapper around Neil Hodgson's open source Scintilla. After a few more days of work, the editor does all the basic editing functions you would expect of a good Python source code editor: you can set all of the styles used, check the syntax of scripts, run scripts, use the file history, work in a built-in shell, and extend the editor by writing scripts in (surprise) Python (we're calling them Scriptlets). So the full object model and data of the editor is there to be programmed, just like a "real editor" such as Emacs. Not bad for around 1000 lines of code and less than a week's work. There is a codeEditor wiki page if you want more info on the editor.

Here's an example Scriptlet included with codeEditor, called appropriately enough insertDateAndTime.py:

import time

now = time.localtime(time.time())
dateStr = time.strftime("%A, %B %d, %Y, %I:%M %p", now)
comp.document.ReplaceSelection(dateStr)

PythonCard 0.6.6

PythonCard is a GUI construction kit for building cross-platform desktop applications on Windows, Mac OS X, and Linux. The latest release of PythonCard includes 30 sample applications; new additions include a Python source code editor and a sample for creating flat-file databases. This release also supports the new wxPython 2.3.3 preview for Mac OS X. The documentation page has links to installation instructions for Windows that cover installing Python, wxPython, and PythonCard. There is a new set of Wiki pages for "in progress" documentation and to supplement the main web site. Check the changelog for a complete list of changes for release 0.6.6.
http://radio.weblogs.com/0102677/categories/pythoncard/2002/05/10.html
A string of characters is probably the most commonly used data type when developing scripts, and PHP provides a large library of string functions to help transform, manipulate, and otherwise manage strings. We introduced the basics of PHP strings in Chapter 2. In this section, we show you many of the useful PHP string functions.

The length of a string is determined with the strlen( ) function, which returns the number of eight-bit characters in the subject string. We used strlen( ) earlier in the chapter to compare string lengths. Consider another simple example that prints the length of a 16-character string:

print strlen("This is a String"); // prints 16

In the previous chapter, we presented the basic method for outputting text with echo and print. Earlier in this chapter, we showed you the functions print_r( ) and var_dump( ), which can determine the contents of variables during debugging. PHP provides several other functions that allow more complex and controlled formatting of strings, and we discuss them in this section.

Sometimes, more complex output is required than can be produced with echo or print. For example, a floating-point value such as 3.14159 might need to be truncated to 3.14 in the output. For complex formatting, the sprintf( ) or printf( ) functions are useful. The operation of these functions is modeled on the identical C programming language functions, and both expect a format string with optional conversion specifications, followed by variables or values as arguments to match any formatting conversions. The difference between sprintf( ) and printf( ) is that the output of printf( ) goes directly to the output buffer that PHP uses to build the HTTP response, whereas the output of sprintf( ) is returned as a string. Consider an example printf( ) statement:

$variable = 3.14159;

// prints "Result: 3.14"
printf("Result: %.2f\n", $variable);

The format string Result: %.2f\n is the first parameter to the printf( ) statement.
Strings such as Result: are output the same as with echo or print. The %.2f component is a conversion specification that describes how the value of $variable is to be formatted. Conversion specifications always start with the % character and end with a type specifier; and can include width and precision components in between. The example above includes a precision specification .2 that prints two decimal places. A specifier %5.3f means). Table 3-1 shows all the types supported by sprintf( ) and printf( ). While width specifiers can be used with all types?we show examples in Example 3-2?decimal precision can only be used with floating point numbers. Both sprintf( ) and printf( ) allow the formatting of multiple parameters: each conversion specification in the format string formatting the corresponding parameter. Example 3-2 illustrates the use of printf( ) and sprintf( ), including how multiple parameters are formatted. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" ""> <html> <head> <meta http- <title>Examples of using printf( )</title> </head> <body bgcolor="#ffffff"> <h1>Examples of using printf( )</h1> <pre> <?php // Outputs "pi equals 3.14159" printf("pi equals %f\n", 3.14159); // Outputs "3.14" printf("%.2f\n", 3.14159); // Outputs " 3.14" printf("%10.2f\n", 3.14159); // Outputs "3.1415900000" printf("%.10f\n", 3.14159); // Outputs "halfofthe" printf("%.9s\n", "halfofthestring"); // Outputs "1111011 123 123.000000 test" printf("%b %d %f %s\n", 123, 123, 123, "test"); // Outputs "Over 55.71% of statistics are made up." 
printf("Over %.2f%% of statistics are made up.\n", 55.719);

// sprintf( ) works just the same except the
// output is returned as a string
$c = 245;
$message = sprintf("%c = %x (Hex) %o (Octal)", $c, $c, $c);

// prints "õ = f5 (Hex) 365 (Octal)"
print($message);
?>
</pre>
</body>
</html>

A simple method to space strings is to use the str_pad( ) function. Characters are added to the input string so that the resulting string has length characters. The following example shows the simplest form of str_pad( ) that adds spaces to the end of the input string:

// prints "PHP" followed by three spaces
print str_pad("PHP", 6);

An optional string argument padding can be supplied that is used instead of the space character. By default, padding is added to the end of the string. By setting the optional argument pad_type to STR_PAD_LEFT or to STR_PAD_BOTH, the padding is added to the beginning of the string or to both ends. The following example shows how str_pad( ) can create a justified index:

$players = array("DUNCAN, king of Scotland"=>"Larry",
                 "MALCOLM, son of the king"=>"Curly",
                 "MACBETH"=>"Moe",
                 "MACDUFF"=>"Rafael");

print "<pre>";

// Print a heading
print str_pad("Dramatis Personae", 50, " ", STR_PAD_BOTH) . "\n";

// Print an index line for each entry
foreach($players as $role => $actor)
    print str_pad($role, 30, ".") .
          str_pad($actor, 20, ".", STR_PAD_LEFT) . "\n";

print "</pre>";

A foreach loop is used to create a line of the index: the loop assigns the key and value of the $players array to $role and $actor. The example prints:

                Dramatis Personae
DUNCAN, king of Scotland.....................Larry
MALCOLM, son of the king.....................Curly
MACBETH........................................Moe
MACDUFF.....................................Rafael

We have included the <pre> tags so a web browser doesn't ignore the spaces used to pad out the heading, and so that a non-proportional font is used for the text; without the <pre> tags in this example, things don't line up.
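Width specifiers combine naturally with str_pad( ) when laying out columns: str_pad( ) left-justifies the labels while a %8.2f conversion right-aligns the numbers. A minimal sketch, using an invented price list purely for illustration:

```php
<?php
// Hypothetical item/price pairs used only to illustrate alignment
$prices = array("Widget" => 9.5, "Gadget" => 129.95, "Gizmo" => 4.25);

print "<pre>";
foreach ($prices as $item => $price)
    // str_pad() pads the label to 10 characters with spaces;
    // %8.2f reserves eight characters, two of them decimal places
    print str_pad($item, 10) . sprintf("%8.2f\n", $price);
print "</pre>";
?>
```

As with the index example above, the <pre> tags stop a browser from collapsing the padding spaces.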
The following PHP functions return a copy of the subject string with changes in the case of the characters. The following fragment shows how each operates:

print strtolower("PHP and MySQL"); // php and mysql
print strtoupper("PHP and MySQL"); // PHP AND MYSQL
print ucfirst("now is the time");  // Now is the time
print ucwords("now is the time");  // Now Is The Time

PHP provides three functions that trim leading or trailing whitespace characters from strings. By default these functions trim space, tab (\t), newline (\n), carriage return (\r), NULL (\x00), and the vertical tab (\x0b) characters. The optional character_list parameter allows you to specify the characters to trim. A range of characters can be specified using two periods (..) as shown in the following example:

$var = trim("16 MAY 2004", "0..9 "); // Trims digits and spaces
print $var; // prints "MAY"

PHP provides the string comparison functions strcmp( ) and strncmp( ) that compare two strings, str1 and str2, in alphabetical order. While the equality operator == can compare two strings, the result isn't always as expected for strings with binary content or multi-byte encoding: strcmp( ) and strncmp( ) provide binary-safe string comparison. Both strcmp( ) and strncmp( ) take two strings as parameters, str1 and str2, and return 0 if the strings are identical, -1 if str1 is less than str2, and 1 if str1 is greater than str2. The function strncmp( ) takes a third argument length that restricts the comparison to length characters. String comparisons are often used as a conditional expression in an if statement like this:

$a = "aardvark";
$z = "zebra";

// Test if $a and $z are not different (i.e. the same)
if (!strcmp($a, $z))
    print "a and z are the same";

When strcmp( ) compares two different strings, the function returns either -1 or 1, which is treated as true in a conditional expression.
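Because strcmp( ) and its relatives return negative, zero, or positive values, they already match the signature usort( ) expects for a comparison callback, so a function name can be passed to usort( ) directly. A small sketch with made-up words, sorted case-insensitively:

```php
<?php
// Sample words chosen only for illustration
$words = array("zebra", "Mouse", "aardvark");

// strcasecmp() compares without regard to case, so "Mouse"
// sorts between "aardvark" and "zebra"
usort($words, "strcasecmp");

print implode(" ", $words); // prints "aardvark Mouse zebra"
?>
```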
These examples show the results of various comparisons:

print strcmp("aardvark", "zebra");         // -1
print strcmp("zebra", "aardvark");         // 1
print strcmp("mouse", "mouse");            // 0
print strcmp("mouse", "Mouse");            // 1
print strncmp("aardvark", "aardwolf", 4);  // 0
print strncmp("aardvark", "aardwolf", 5);  // -1

The functions strcasecmp( ) and strncasecmp( ) are case-insensitive versions of strcmp( ) and strncmp( ). For example:

print strcasecmp("mouse", "Mouse"); // 0

The functions strcmp( ), strncmp( ), strcasecmp( ), and strncasecmp( ) can be used as the callback function when sorting arrays with usort( ). See Section 3.1.4 earlier in this chapter for a discussion of usort( ). PHP provides several simple and efficient functions that can identify and extract specific substrings of a string. As is common with string libraries in other languages, PHP string functions reference characters using an index that starts at zero for the first character, one for the next character, and so on. The substr( ) function returns a substring from a source string. If a negative start is passed as a parameter, the starting point of the returned string is counted from the end of the source string. If the length is negative, the returned substring ends length characters from the end of the source string. The strpos( ) function returns the index of the first occurring substring needle in the string haystack. When called with two arguments, the search for the substring needle is from the start of the string haystack at position zero. When called with three arguments, the search occurs from the index offset into the haystack. The following examples show how strpos( ) works:

$var = "To be or not to be";
print strpos($var, "T");  // 0
print strpos($var, "be"); // 3

// Start searching from the 5th character in $var
print strpos($var, "be", 4); // 16

The strrpos( ) function returns the index of the last occurrence of needle in the string haystack. Prior to PHP 5, strrpos( ) uses only the first character of needle to search.
The following example shows how strrpos( ) works:

$var = "and by a sleep to say we end the heart-ache";

// Prints 18 using PHP 4.3 matching the "s" in "say"
// Prints 9 using PHP 5 matching the whole string "sleep"
print strrpos($var, "sleep");

// Prints 22 using PHP 4.3 matching the "w" of "we"
// The function returns false using PHP 5 as "wally"
// is not found
print strrpos($var, "wally");

If the substring needle isn't found by strpos( ) or strrpos( ), both functions return false. The is-identical operator ===, or the is-not-identical operator !==, should be used when testing the returned value from these functions. This is because if the substring needle is found at the start of the string haystack, the index returned is zero, which is interpreted as false if used as a Boolean value. Example 3-3 shows how strpos( ) can be repeatedly called to find parts of a structured sequence like an Internet domain name.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">
<html>
<head>
<meta http-
<title>Hello, world</title>
</head>
<body bgcolor="#ffffff">
<?php
$domain = "orbit.mds.rmit.edu.au";
$a = 0;

while (($b = strpos($domain, ".", $a)) !== false)
{
    print substr($domain, $a, $b-$a) . "\n";
    $a = $b + 1;
}

// print the piece to the right of the last found "."
print substr($domain, $a);
?>
</body>
</html>

A while loop is used to repeatedly find the period character (.) in the string $domain. The body of the loop is executed if the value returned by strpos( ) is not false (we also assign the return result to $b in the same call). This is possible because an assignment can be used as an expression. In Example 3-3, the value of the assignment ($b = strpos($domain, ".", $a)) is the same as the value returned from calling strpos( ) alone: strpos($domain, ".", $a). Each time strpos( ) is called, we pass the variable $a as the starting point in $domain for the search. For the first call, $a is set to zero and the first period in the string is found.
The body of the while loop uses substr( ) to print the characters from $a up to the period character that's been found; the first time through the loop, substr( ) prints $b characters from the string $domain starting from position zero. The starting point for the next search is calculated by setting $a to the location of the next character after the period found at position $b. The loop is then repeated if another period is found. When no more period characters are found, the final print statement uses substr( ) to print the remaining characters from the string $domain.

// print the piece to the right of the last found "."
print substr($domain, $a);

The output of Example 3-3 is:

orbit
mds
rmit
edu
au

The strstr( ) and stristr( ) functions search for the substring needle in the string haystack and return the portion of haystack from the first occurrence of needle to the end of haystack. The strstr( ) search is case-sensitive, and the stristr( ) search isn't. If the needle isn't found in the haystack string, both strstr( ) and stristr( ) return false. The following examples show how the functions work:

$var = "To be or not to be";
print strstr($var, "to");  // "to be"
print stristr($var, "to"); // "To be or not to be"
print stristr($var, "oz"); // false

The strrchr( ) function also returns a portion of haystack; however, it searches for the single character needle and returns the portion from the last occurrence of needle. Unlike strstr( ) and stristr( ), strrchr( ) searches for a single character, and only the first character of the needle string is used. The following examples show how strrchr( ) works:

$var = "To be or not to be";

// Prints: "not to be"
print strrchr($var, "n");

// Prints "o be": Only searches for "o" which
// is found at position 14
print strrchr($var, "or");

PHP provides several simple functions that can replace specific substrings or characters in a string with other strings or characters.
These functions don't change the input string; instead, they return a copy of the input modified by the required changes. In the next section, we discuss regular expressions, which are powerful tools for finding and replacing complex patterns of characters. However, the functions described in this section are faster than regular expressions and usually a better choice for simple tasks. The substr_replace( ) function returns a copy of the source string with the characters from the position start to the end of the string replaced with the replace string. If the optional length is supplied, only length characters are replaced. The following examples show how substr_replace( ) works:

$var = "abcdefghij";

// prints "abcDEF";
print substr_replace($var, "DEF", 3);

// prints "abcDEFghij";
print substr_replace($var, "DEF", 3, 3);

// prints "abcDEFdefghij";
print substr_replace($var, "DEF", 3, 0);

The last example shows how a string can be inserted by setting the length to zero. The str_replace( ) function returns a string created by replacing occurrences of the string search in subject with the string replace. In the following example, the subject string, "old-age for the old", is printed with both occurrences of old replaced with new:

$var = "old-age for the old.";
print str_replace("old", "new", $var);

The result is:

new-age for the new.

Since PHP 4.0.5, str_replace( ) allows an array of search strings and a corresponding array of replacement strings to be passed as parameters. The following example shows how the fields in a very short form letter can be populated:

// A short form-letter for an overdue account
$letter = "Dear #title #name, you owe us $#amount.";

// Set-up an array of three search strings that will be
// replaced in the form-letter
$fields = array("#title", "#name", "#amount");

// Set-up an array of debtors.
// Each element is an array that
// holds the replacement values for the form-letter
$debtors = array(
    array("Mr", "Cartwright", "146.00"),
    array("Ms", "Yates", "1,662.00"),
    array("Dr", "Smith", "84.75"));

foreach($debtors as $debtor)
    print str_replace($fields, $debtor, $letter) . "\n";

The $fields array contains a list of strings that are to be replaced. These strings don't need to follow any particular format; we have chosen to prefix each field name with the # character to clearly identify the fields in the letter. The body of the foreach loop calls str_replace( ) to replace the corresponding fields in $letter with the values for each debtor. The output of this script is as follows:

Dear Mr Cartwright, you owe us $146.00.
Dear Ms Yates, you owe us $1,662.00.
Dear Dr Smith, you owe us $84.75.

If the array of replacement strings is shorter than the array of search strings, the unmatched search strings are replaced with empty strings. The strtr( ) function translates characters or substrings in a subject string. When called with three arguments, strtr( ) translates the characters in the subject string that match those in the from string with the corresponding characters in the to string. When called with two arguments, the second argument must be an associative array, a map: occurrences of the map keys in subject are replaced with the corresponding map values. The following example uses strtr( ) to replace all lowercase vowels with the corresponding umlauted character:

$mischief = strtr("command.com", "aeiou", "äëïöü");
print $mischief; // prints cömmänd.cöm

When an associative array is passed as a translation map, strtr( ) replaces substrings rather than characters.
The following example shows how strtr( ) can expand acronyms:

// Create an unintelligible email
$geekMail = "BTW, IMHO (IOW) you're wrong!";

// Short list of acronyms used in e-mail
$glossary = array("BTW"=>"by the way",
                  "IMHO"=>"in my humble opinion",
                  "IOW"=>"in other words",
                  "OTOH"=>"on the other hand");

// Maybe now I can understand
// Prints: by the way, in my humble opinion (in other words) you're wrong!
print strtr($geekMail, $glossary);
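One property of the map form of strtr( ) worth noting: each piece of the subject is replaced at most once, and replaced text is never scanned again. This makes it safe for swapping strings, where chained str_replace( ) calls would clobber earlier replacements. A small sketch with invented text:

```php
<?php
// Swap "yes" and "no" in a single pass; strtr() never
// rescans text it has already replaced
$swap = array("yes" => "no", "no" => "yes");
print strtr("no means no, yes means yes", $swap);
// prints "yes means yes, no means no"
?>
```

Two successive str_replace( ) calls on the same string would instead turn every occurrence into the same word, because the second call also matches text produced by the first.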
news.digitalmars.com - digitalmars.DDec 31 2010 const - Best practices (11) Dec 31 2010 D for game development (10) Dec 31 2010 Happy New Year!!! (7) Dec 31 2010 While we were discussing lambda syntax.. (11) Dec 31 2010 Windows: gdc-4.5 development (1) Dec 31 2010 Less commas (30) Dec 30 2010 Re: PROPOSAL: Implicit conversions of integer literals to floating (1) Dec 30 2010 [SPEC/DMD] Bug (?): extern( C ) function pointers (D1 and D2) (6) Dec 30 2010 range practicle use (5) Dec 30 2010 Something Go and Scala syntax (18) Dec 29 2010 PROPOSAL: Implicit conversions of integer literals to floating point (30) Dec 29 2010 asForwardRange, a ForwardRange based on an InputRange (1) Dec 29 2010 Android development using D (7) Dec 29 2010 member access times (3) Dec 29 2010 Mac OS X: gdc-4.2 testing (5) Dec 28 2010 Serenity web framework - early feedback wanted (6) Dec 28 2010 std.openrj (4) Dec 28 2010 Phobos Patch - Version (X86_64) for struct_stat64 on Linux x86_64 (8) Dec 28 2010 D Language Custom Google Search (3) Dec 28 2010 Contracts in library code (2) Dec 28 2010 Inline asm expressions for ranged integrals? (2) Dec 27 2010 align(n) not working as expected (6) Dec 27 2010 streaming redux (66) Dec 27 2010 Subtyping with "alias this" doesn't mix with regular inheritance (2) Dec 27 2010 "The D Programming Language" : Still valid? (6) Dec 27 2010 Clay language (90) Dec 27 2010 GC conservatism -- again (30) Dec 27 2010 htod simple macro translation (2) Dec 26 2010 typeof(t) not working correctly? (4) Dec 26 2010 How do I make/use my own run-time library? (5) Dec 26 2010 Phobos usability with text files (14) Dec 26 2010 auto init & what the code means (1) Dec 25 2010 A few experiments with partial unrolling (4) Dec 25 2010 Requiring weak purity for opAssign, postblit (8) Dec 25 2010 Owned members (10) Dec 24 2010 Infinite BidirectionalRange? (7) Dec 24 2010 D vs C++ (84) Dec 24 2010 TDPL dictionary example - ERROR with dmd and gdc (4) Dec 24 2010 Merry Christmas everyone! 
(8) Dec 24 2010 Installation problem (3) Dec 24 2010 Phobos usability (1) Dec 23 2010 assocArray.remove() gives strange error (2) Dec 23 2010 How is the D programming language financed? (35) Dec 22 2010 Is std.demangle usable? (12) Dec 22 2010 Why is D slower than LuaJIT? (64) Dec 22 2010 rdmd and extern(C) (8) Dec 21 2010 What's the problem in opensourcing htod? (17) Dec 21 2010 synchronized statements in C++ ;) (1) Dec 21 2010 Azul's Pauseless GC (2) Dec 21 2010 [OT] How to post here without getting spam (9) Dec 21 2010 Feature: __FUNCTION__ to give name of parent function. (5) Dec 21 2010 Should Tuple!( T, "name" ) be implicitly castable to Tuple!T? (2) Dec 21 2010 DSource.org down? (3) Dec 20 2010 Scala containers (2) Dec 20 2010 thin heaps (5) Dec 19 2010 What is this D book? (13) Dec 19 2010 Offense programming (2) Dec 19 2010 Optimizing delegates (30) Dec 19 2010 try...catch slooowness? (17) Dec 18 2010 executable size (12) Dec 18 2010 freebsd (1) Dec 18 2010 is it possible to learn D(2)? (30) Dec 17 2010 Threads and static initialization. (19) Dec 16 2010 Purity (41) Dec 16 2010 (Improved) Benchmark for Phobos Sort Algorithm (11) Dec 16 2010 gdc-4.5 testing (30) Dec 16 2010 Does Phobos have thread pool class? (2) Dec 16 2010 [OT] Mozilla Thunderbird (32) Dec 15 2010 Binary heap method to update an entry. (7) Dec 15 2010 A Benchmark for Phobos Sort Algorithm (6) Dec 15 2010 Infinite loop using phobos sort (3) Dec 15 2010 Re: For whom is (1) Dec 15 2010 Cross-post from druntime: Mixing GC and non-GC in D. (AKA, "don't (2) Dec 15 2010 type classes for selection of template variant (3) Dec 15 2010 How to write template ModuleOf!T ? (3) Dec 14 2010 An opportunity to benchmark the DM back-end. (1) Dec 14 2010 How do I do placement delete in D? (14) Dec 14 2010 Paralysis of analysis (34) Dec 14 2010 Version statement (11) Dec 14 2010 Reducing template constraint verbosity? 
[was Re: Slides from my ACCU (21) Dec 14 2010 emscripten (115) Dec 14 2010 write, toString, formatValue & range interface (5) Dec 13 2010 New syntax for string mixins (53) Dec 13 2010 Using unary expressions with property functions (3) Dec 13 2010 CTAN, CPAN, RubyGem like (12) Dec 13 2010 Unused memory filling (2) Dec 13 2010 Bleeding edge DMD2? (3) Dec 13 2010 VLAs (1) Dec 12 2010 Fast string search (6) Dec 12 2010 fast string searching (4) Dec 12 2010 Inlining Code Test (19) Dec 12 2010 SWIG 4 D2 How To : namespace, friend, operator() (2) Dec 12 2010 Slides from my ACCU Silicon Valley talk (57) Dec 12 2010 empty string & array truth values& comparisons to null (1) Dec 11 2010 ByToken Range (3) Dec 11 2010 Casting functions to delegates (4) Dec 11 2010 Problems with dmd inlining (19) Dec 11 2010 String to boolean inconsistency (16) Dec 11 2010 Verbose checking of range category (11) Dec 11 2010 const / in (1) Dec 10 2010 Why Ruby? (313) Dec 10 2010 Jeff Dean's keynote at LADIS 2009 (3) Dec 10 2010 Problems with sort (8) Dec 10 2010 ACCEPTED: std.datetime (2) Dec 10 2010 Problems with D const ref vs C++ const & (11) Dec 10 2010 static import of std.signals (1) Dec 10 2010 rationale: [] and () (5) Dec 09 2010 404 & small proposal (17) Dec 09 2010 Please vote on std.datetime (52) Dec 09 2010 Scripting again. (24) Dec 09 2010 How convince computer teacher (21) Dec 09 2010 Choosing Go vs. D (4) Dec 08 2010 improvement request - enabling by-value-containers (29) Dec 08 2010 DMD2 .deb fails to install on Ubuntu 10.10 =?UTF-8?B?4oCTIGp1c3Qg?= (12) Dec 07 2010 merry christmas (4) Dec 06 2010 D is a Systems Language: 16-bit Compilation? (2) Dec 06 2010 Insight into the DMD back-end (7) Dec 06 2010 ldc2: Current State? (6) Dec 05 2010 Cannot get thread ID with Thread.getThis() in specific callback functions (11) Dec 05 2010 const(Object)ref is here! (65) Dec 05 2010 future of std.process? (8) Dec 04 2010 I know D now (too) (1) Dec 04 2010 double.min - should be 5e-324? 
(11) Dec 04 2010 TDPL source code (3) Dec 03 2010 Destructors, const structs, and opEquals (73) Dec 03 2010 On const and inout (was Re: Logical const) (1) Dec 03 2010 Research breakthrough from the Haskell team (11) Dec 02 2010 "Programming in D for C++ Programmers" mistake (3) Dec 02 2010 Andrei's "Sealed Containers" (8) Dec 02 2010 debugging code with mixins (3) Dec 01 2010 Interval foreach iteration variable (3) Dec 01 2010 Is opCast need, we have to! (21) Dec 01 2010 A bug with matching overloaded functions? (3) Dec 01 2010 delegates and heap usage (4) Dec 01 2010 Setting the stack size (23) Nov 30 2010 tail const (51) Nov 30 2010 [review] new string type (39) Nov 30 2010 why Rust when there is D? (11) Nov 29 2010 Logical Const using a Mutable template (4) Nov 29 2010 Type Classes as Objects and Implicits (2) Nov 29 2010 Tidy template instantiation syntax (8) Nov 28 2010 D's greatest mistakes (145) Nov 28 2010 XmlTokenizer review: Features and API (1) Nov 28 2010 GDC: druntime GC wrongly frees data pointed to by TLS (1) Nov 28 2010 What's wrong with opCall ? (3) Nov 27 2010 value range propagation for % (4) Nov 27 2010 String compare performance (36) Nov 27 2010 Discussion about D on the gentoo forum (7) Nov 27 2010 ddmd: is suspended ? (4) Nov 26 2010 C#'s greatest mistakes (67) Nov 26 2010 Deprecation schedule (20) Nov 25 2010 Explicitly unimplemented computed gotos (5) Nov 25 2010 What are modules for? (7) Nov 25 2010 const a storage class or a type modifier? (23) Nov 25 2010 custom AST library (8) Nov 25 2010 Some algorithms on immutable data: bug 5134? (2) Nov 25 2010 name guess by the compiler (5) Nov 25 2010 a different kind of synchronized (1) Nov 25 2010 name guess by the compiler (1) Nov 24 2010 Proposal: User Code Bug Tracking (6) Nov 23 2010 Array Appending Plus Postblits (4) Nov 23 2010 [challenge] can you break wstring's back? (4) Nov 23 2010 GDC2 compilation warnings (25) Nov 23 2010 when is a GDC/D2/Phobos bug a GDC bug? 
(12) Nov 23 2010 Why is 'scope' so weak? (8) Nov 22 2010 cheers to gdc team (15) Nov 22 2010 Template performance (7) Nov 22 2010 rdmd --main (8) Nov 22 2010 Random numbers in strongly pure functions (7) Nov 22 2010 A trouble with constructors (1) Nov 22 2010 Basic coding style (44) Nov 21 2010 Design by Contract != Runtime Assertion (11) Nov 21 2010 Atomic Ref Counting (9) Nov 21 2010 How to parse c source file to json file ? (1) Nov 21 2010 A CTFE Segfault (with explanation, but I'm not sure what the fix (3) Nov 21 2010 PEG lib (1) Nov 20 2010 OT: Planes, Space Crafts and Computer Science (2) Nov 20 2010 Repairing BigInt const (4) Nov 20 2010 php strings demo (9) Nov 20 2010 Logical const (160) Nov 20 2010 Principled method of lookup-or-insert in associative arrays? (22) Nov 19 2010 Error 42: Symbol Undefined __d_throwc (3) Nov 19 2010 Register Preservation in Inline ASM Blocks (2) Nov 19 2010 Re: Simple tagged attribute for unions [OT] (2) Nov 19 2010 standardization ISO (4) Nov 19 2010 Some new LLVM slides/videos (4) Nov 19 2010 state & effects (1) Nov 19 2010 Review: A new stab at a potential std.unittests (26) Nov 19 2010 Faster uniform() in [0.0 - 1.0( (11) Nov 18 2010 Asynchronous Programming in C#5 (1) Nov 18 2010 DIP9 -- Redo toString API (54) Nov 18 2010 casting class pointer (4) Nov 18 2010 it's time to change how things are printed (22) Nov 18 2010 Invariants for methods (14) Nov 18 2010 Shared pain (20) Nov 18 2010 D1 -> D2 (45) Nov 17 2010 =?ISO-8859-1?Q?Re:_DDMD_not_update=a3=acwhy=a3=bf?= (1) Nov 17 2010 Debugging with gdb on Posix but setAssertHandler is deprecated (3) Nov 17 2010 Why unix time is signed (1) Nov 17 2010 Eror message comprehensibility (1) Nov 16 2010 Another Go vs Python vs D thread (1) Nov 16 2010 "In praise of Go" discussion on ycombinator (23) Nov 16 2010 std.container.BinaryHeap + refCounted = WTF??? 
(19) Nov 16 2010 std.date (21) Nov 16 2010 std.socket and std.socketstream (1) Nov 16 2010 DMD Automatic Dependency Linking (3) Nov 16 2010 assert(false) with -unittest (4) Nov 16 2010 modulus and array.length (7) Nov 16 2010 std.algorithm.remove using SwapStrategy.unstable doesn't works (4) Nov 15 2010 =?ISO-8859-1?Q?DDMD_not_update=a3=acwhy=a3=bf?= (6) Nov 15 2010 TDPL bug or phobos bug? (2) Nov 15 2010 What to do when the linker fails (5) Nov 14 2010 RegExp.find() now crippled (19) Nov 14 2010 forbid field name conflict in class hierarchy (22) Nov 14 2010 C header file importer using -J (2) Nov 14 2010 Delegates, closures and scope (2) Nov 14 2010 We need a way to make functions pure and/or nothrow based on the (10) Nov 14 2010 Compiler optimization breaks multi-threaded code (7) Nov 13 2010 D and multicore (9) Nov 13 2010 Standard third party imports (8) Nov 12 2010 Multichar literals (7) Nov 12 2010 Basic standard graphics (20) Nov 12 2010 Emacs D Mode (1) Nov 12 2010 UText (2) Nov 12 2010 Function, signatures and tuples (6) Nov 12 2010 Question about std.bind (8) Nov 12 2010 Explicit Thread Local Heaps (2) Nov 12 2010 Output ranges and arrays (4) Nov 12 2010 RFC, ensureHeaped (35) Nov 12 2010 LC_SEGMENT command 0 filesize field greater than vmsize field (5) Nov 12 2010 Memory Pools support in Phobos (3) Nov 12 2010 One year of Go (73) Nov 11 2010 language wars (1) Nov 11 2010 Help for .h to D pain? (5) Nov 11 2010 Hacking on DMD (8) Nov 11 2010 No property 'clear/shrinkTo' for type 'Appender!(string)' (2) Nov 11 2010 linker wrapper (26) Nov 11 2010 the D scripting language -- command line (7) Nov 11 2010 class instance construction (9) Nov 10 2010 Kill implicit joining of adjacent strings (60) Nov 10 2010 Thoughts on parallel programming? (36) Nov 10 2010 std.crypto (2) Nov 10 2010 Build Linux shared library from DMD (3) Nov 10 2010 Is Walter tired of D? 
(2) Nov 10 2010 Call to immutable method during immutable construction (1) Nov 09 2010 What every D programmer really wants (6) Nov 09 2010 Ask HN: What do you think of the D language? (1) Nov 08 2010 Which compiler regressions are blocking people fom upgrading? (3) Nov 08 2010 Apache "mod_d" needs C to instantiate D interpreter? (40) Nov 08 2010 Passing dynamic arrays (79) Nov 08 2010 Attribute hiding, strict compiler (1) Nov 08 2010 PEG matching/parsing lib in progress (1) Nov 07 2010 Visual studio project files (1) Nov 07 2010 The D Scripting Language (64) Nov 07 2010 nullable done right, was #Spec (2) Nov 07 2010 Can non-nullable references be implemented as a library? (25) Nov 07 2010 in-parameter (12) Nov 07 2010 missing "new" --> Error: "no property 'opCall'..." (1) Nov 06 2010 D Best Practices: Default initializers for structs (3) Nov 06 2010 why a part of D community do not want go to D2 ? (132) Nov 06 2010 Wikipedia purity example and discussion (4) Nov 06 2010 null [re: spec#] (105) Nov 05 2010 Helper unit testing functions in Phobos (possible std.unittests) (11) Nov 05 2010 Should pure functions be prevented from reading changeable immutable (19) Nov 05 2010 disable on override function (2) Nov 05 2010 Stack Traces on Linux (4) Nov 04 2010 Spec#, nullables and more (199) Nov 04 2010 Sealed Containers + Ignore Interior (1) Nov 04 2010 User feedback - recurrent runtime bug (11) Nov 04 2010 An Idea - New data stucture for D (1) Nov 03 2010 Scriptometer (16) Nov 03 2010 [help]operator overloading with opEquals in a class (13) Nov 03 2010 D/Objective-C Preliminary Design (24) Nov 02 2010 DWT build error function tango.io.FileSystem.FileSystem.toAbsolute is (2) Nov 02 2010 Overzealous recursive template expansion protection? 
(3) Nov 02 2010 Deduction of Template Value Parameters (6) Nov 02 2010 Immutable fields (20) Nov 02 2010 Linking to C (10) Nov 02 2010 D in Fedora 14 (1) Nov 02 2010 D2 on FreeBSD (10) Nov 02 2010 The Expressiveness of D (28) Nov 01 2010 The Computer Languages Shootout Game (12) Nov 01 2010 The Computer Languages Shootout Game (2) Nov 01 2010 The Computer Languages Shootout Game (1) Nov 01 2010 shorter foreach syntax - C++0x range-based for (13) Oct 31 2010 Array-bound indexes (1) Oct 31 2010 Pointer types [Was: Re: Lints, Condate and bugs] (4) Oct 31 2010 The Computer Languages Shootout Game (20) Oct 30 2010 Ubuntu Gutsy Gibbon listed on download page. (1) Oct 30 2010 /usr/bin/shell & Ubuntu 10.10 (1) Oct 30 2010 TDPL Errata (2) Oct 30 2010 GCC 4.6 (82) Oct 30 2010 Magpie language (1) Oct 30 2010 GDC, Debian and Ubuntu (1) Oct 29 2010 Interfacing C functions with safer ptr/length (7) Oct 28 2010 Interfacing to C++ (4) Oct 28 2010 Language Popularity (6) Oct 28 2010 Simulating Multiple Inheritance (8) Oct 28 2010 ddt 0.4rc1 Installation error (2) Oct 28 2010 Visual D problems (3) Oct 26 2010 Experiments with weak purity for the win, outer (1) Oct 26 2010 Temporary suspension of disbelief (invariant) (22) Oct 26 2010 Fighting with alias this: bugs or features? (11) Oct 26 2010 Lints, Condate and bugs (49) Oct 26 2010 D in accounting program (4) Oct 25 2010 More Clang diagnostic (35) Oct 25 2010 Issue 5109 (2) Oct 25 2010 Time computation in receiveTimeout() (1) Oct 24 2010 Quick question about target patforms . . . (29) Oct 24 2010 Reflection? (8) Oct 24 2010 Improving std.range.Zip (12) Oct 23 2010 Q: What are the rules for emitting template code? (6) Oct 23 2010 What can the community do to help D? (17) Oct 23 2010 "Expressive vs. 
permissive languages" and bugs (5) Oct 23 2010 Possible bug in atomicOp (11) Oct 22 2010 Branding and Logos (4) Oct 22 2010 DMD Linux Version and shared libraries (2) Oct 22 2010 Less free underscores in number literals (12) Oct 22 2010 Language progress? [partially OT] (12) Oct 22 2010 Some benchmarks of efficient C++ data structures (1) Oct 21 2010 Linux Agora D thread (20) Oct 21 2010 Simple tagged attribute for unions (8) Oct 21 2010 How to work with an "arbitrary input range"? (7) Oct 21 2010 Looking for champion - std.lang.d.lex (140) Oct 21 2010 The Language I Wish Go Was (5) Oct 21 2010 Duck typing for structs (3) Oct 21 2010 [debate] About D's pretension for homoiconicicity (5) Oct 21 2010 noreturn property (30) Oct 20 2010 A quotation from Reddit (3) Oct 20 2010 Spec Clarification: Template Alias Parameters (2) Oct 20 2010 typedef keyword? (6) Oct 20 2010 d-mode for Emacs (20) Oct 20 2010 approxEqual() has fooled me for a long time... (55) Oct 19 2010 [due diligence] std.xml (22) Oct 19 2010 First big PITA in reallife D project (18) Oct 19 2010 Why cannot scopes be used in template mixins? (6) Oct 19 2010 How does import work? (5) Oct 18 2010 array, map, filter, sort, schwartzSort (1) Oct 18 2010 [challenge] Limitation in D's metaprogramming (13) Oct 18 2010 D for Python programmers (3) Oct 18 2010 deepCopy (4) Oct 18 2010 Associative array .get with .init as default second argument (4) Oct 18 2010 Pure functions as initializers for immutable structures? (13) Oct 18 2010 "How Hardware Will Shape Languages" (1) Oct 18 2010 Improving version(...) 
(20) Oct 18 2010 "The Next Big Language" discussion is for brainless Kids (1) Oct 17 2010 datetime review part 2 (1) Oct 17 2010 Tips from the compiler (21) Oct 17 2010 struct field alignment (16) Oct 17 2010 htod feature request: save commands in translated file (9) Oct 17 2010 The Next Big Language (73) Oct 17 2010 Ddoc to PDF (26) Oct 17 2010 The Next Big Language (68) Oct 16 2010 Why struct opEquals must be const? (15) Oct 16 2010 __traits(getMember) and uniform call syntax (3) Oct 16 2010 Typeless function arguments (3) Oct 16 2010 std.algorithm.remove and principle of least astonishment (89) Oct 16 2010 Is mimicking a reference type with a struct reliable? (6) Oct 16 2010 rationale for function and delegate (12) Oct 15 2010 Dependent types & ATS language (1) Oct 15 2010 Review on amazon.com (8) Oct 15 2010 duck! (127) Oct 15 2010 Feature discussion: __traits(getSource, function) (7) Oct 15 2010 automatic code examples in documentation (11) Oct 15 2010 Weird writeln behavior with associative arrays (6) Oct 15 2010 Safety of casting away constness instead of using postblit (1) Oct 14 2010 zero-copy API (13) Oct 14 2010 A little class benchmark (1) Oct 14 2010 New slides about Go (73) Oct 14 2010 Doesn't work: Ubuntu 10.10, DMD 2.049, GDB 7.2 (3) Oct 14 2010 pure and (fully) lazy? (4) Oct 14 2010 Streaming transport interfaces: input (34) Oct 14 2010 [OT] a good color distance function (6) Oct 14 2010 creating void[] arrays (1) Oct 14 2010 [nomenclature] systems language (49) Oct 14 2010 A move semantics benchmark (5) Oct 13 2010 Visual D Build + DMD Bugginess = Bad (29) Oct 13 2010 Slightly off the wall question about D strategy . . . (4) Oct 12 2010 What do people here use as an IDE? (44) Oct 12 2010 Consider generalizing Bounded (5) Oct 12 2010 Will uniform function call syntax apply to operator overloads? (12) Oct 12 2010 Current status of DB libraries in D (13) Oct 12 2010 [joke] D type system and making love in a canoe? 
(6) Oct 12 2010 [D typesystem] What is the type of null? (17) Oct 12 2010 [nomenclature] What is a bug? (11) Oct 12 2010 Partial function profiling feature (1) Oct 12 2010 "Strong typing vs. strong testing" (1) Oct 11 2010 improving the join function (47) Oct 10 2010 [challenge] Bounded types (32) Oct 10 2010 Call C (or C++) with pointer to static function (8) Oct 10 2010 assert(false) in release == splinter in eye (18) Oct 10 2010 convenient backward compatible template arguments type deduction (1) Oct 10 2010 Schrödinger's Stride (6) Oct 10 2010 ; not required after pragma (4) Oct 10 2010 [theory] What is a type? (4) Oct 09 2010 ParserCobinator like scala in D (3) Oct 09 2010 Minor site suggestion regarding NG and bugs (2) Oct 09 2010 Caching in computing ranges (14) Oct 09 2010 Ghost fields for Contract Programming (6) Oct 09 2010 [Theory] Halting problem (21) Oct 08 2010 Go vs D on reddit (8) Oct 08 2010 Uniform Function Call syntax for properties (26) Oct 08 2010 What is a tuple and why should one care?
(4) Oct 07 2010 Open invitation for Kenji Hara (3) Oct 07 2010 Default struct constructors with pure (3) Oct 07 2010 Re: Tuple literal syntax + Tuple assignment (1) Oct 07 2010 Proposal to expand existing OO languages (Jiri Soukup) (1) Oct 07 2010 Intrusive Data Structures (1) Oct 07 2010 Re: Tuple literal syntax + Tuple assignment (1) Oct 07 2010 Re: On C/C++ undefined behaviours (on the term "undefined behaviours") (1) Oct 07 2010 Re: Tuple literal syntax + Tuple assignment (10) Oct 07 2010 "in" everywhere (94) Oct 06 2010 Tuple assignment (20) Oct 06 2010 Tuple literal syntax (66) Oct 06 2010 Template params: decl vs instantiation syntax (8) Oct 06 2010 Optionally beefed-up shadowing-prevention (3) Oct 06 2010 OwnerTerminated graceful handling (6) Oct 06 2010 Ada, SPARK [Was: Re: tolf and detab (language succinctness)] (3) Oct 06 2010 Re: On C/C++ undefined behaviours (on the term "undefined behaviours") (2) Oct 06 2010 Strict endianness management where necessary (1) Oct 06 2010 Tail call optimization in dmd (5) Oct 06 2010 'aka' for alias error messages (1) Oct 06 2010 Improving std.typecons.defineEnum (6) Oct 06 2010 is D too modern for Emacs? (10) Oct 06 2010 Ruling out arbitrary cost copy construction? (95) Oct 06 2010 ddmd: Enhanced buildscript to simplify compiling for new users (9) Oct 05 2010 Big executable? (20) Oct 05 2010 Suggestions for std.stream (esp. for 64bit) (4) Oct 05 2010 What would you rewrite in D? (47) Oct 05 2010 Enum proprierties (1) Oct 05 2010 Immutable and cache coherence (5) Oct 05 2010 Is there anybody working on a linear algebra library for D2? (9) Oct 05 2010 Type wrapping blockers (16) Oct 05 2010 First Experiences in D (3) Oct 05 2010 phobos is failure (17) Oct 05 2010 Is D right for me? 
(171) Oct 04 2010 Nifty chaining (6) Oct 04 2010 We need to kill C syntax for declaring function types (18) Oct 03 2010 Partial return type specification (9) Oct 03 2010 Alias template parameters and runtime functions (5) Oct 03 2010 Am I doing it wrong? (15) Oct 03 2010 The Many Faces of D - slides (68) Oct 03 2010 Module-level accessibility (39) Oct 02 2010 SSE in D (6) Oct 01 2010 RFC: Adding an alternative dispatch mechanism (4) Oct 01 2010 name resolution in module imports (5) Oct 01 2010 Inheriting from an interface twice (6) Sep 30 2010 Redundancies often reveal bugs (19) Sep 30 2010 About DbC, pros & cons (8) Sep 30 2010 SO question on DLLs (1) Sep 30 2010 [std.concurrency] Critical bug (2) Sep 29 2010 where to find Implements! (16) Sep 29 2010 [std.concurrency] prioritySend is 1000 times slower than send? (11) Sep 29 2010 [typing] Type-erasure re generics (26) Sep 29 2010 Getting D ready for prime time (9) Sep 29 2010 sorting hidden data. (20) Sep 28 2010 Switch implementation (17) Sep 28 2010 Fedora 14 will integrate D into the distribution (20) Sep 27 2010 [BUG} dsss linker error when a module has property (4) Sep 27 2010 [contest] Is a Cow an animal ++ (40) Sep 25 2010 opStaticSlice (1) Sep 24 2010 About DMD intrinsics? (1) Sep 23 2010 Initialization of unions (4) Sep 22 2010 The Wrong Stuff (51) Sep 22 2010 Some hash functions (2) Sep 22 2010 Broken links (2) Sep 22 2010 Template parameter shadowing (3) Sep 21 2010 Proposal: Relax rules for 'pure' (106) Sep 21 2010 [OT] Google search restricted to a directory? (6) Sep 21 2010 LazyInterface (simplified Boost.Interfaces) (10) Sep 21 2010 Producing 64-bit executables [Win/Lin/Mac]? 
(2) Sep 20 2010 Language features and reinterpret casts (18) Sep 20 2010 GC- and thread-related deadlock (3) Sep 20 2010 delegate/function ptr implicit casting (1) Sep 19 2010 ddmd and ide error recovery and error reporting (3) Sep 19 2010 For D's manifesto: transitivity of shared, const and immutable (2) Sep 18 2010 Typography (6) Sep 18 2010 splitting class (4) Sep 17 2010 dmd and visual D (5) Sep 16 2010 OT: Firefox 3.6.10 (3) Sep 16 2010 TDPL stats (5) Sep 15 2010 Right blame (3) Sep 15 2010 2.049b (2) Sep 15 2010 A summary of D's design principles (142) Sep 15 2010 Safe std.parallelism (2) Sep 13 2010 DMD source overview? (17) Sep 13 2010 what does it take to build DMD on linux? (5) Sep 13 2010 Bug in destructors (2) Sep 13 2010 [challenge] To implement XPath 2.0 in D (6) Sep 12 2010 Random string samples & unicode - Reprise (31) Sep 12 2010 std.bigint doesn't show up in the docs (2) Sep 12 2010 Some missing things in the current threading implementation (8) Sep 12 2010 std.json API improvement - Request for code review (14) Sep 11 2010 Well, it's been a total failure (102) Sep 10 2010 std.concurrency: Returning from spawned function (7) Sep 10 2010 Random string samples & unicode (13) Sep 10 2010 blog: Overlooked Essentials for Optimizing Code (52) Sep 10 2010 New structs (4) Sep 09 2010 One more update on d-programming-language.org (56) Sep 09 2010 R-values and const ref (12) Sep 08 2010 The D2 homepage is missing something (6) Sep 08 2010 foreach is slower than for? (8) Sep 08 2010 opDispatch and template parameters (6) Sep 08 2010 Phobos + DCollections (3) Sep 08 2010 SortedRange and new docs for std.container (12) Sep 08 2010 Bizprac database.. Urgent (22) Sep 08 2010 D2: immutable/const and delegates (2) Sep 07 2010 [OT]: Google AI Challenge: starter package for D (1) Sep 07 2010 D1/D2: How to check if a method has been overridden (19) Sep 07 2010 Thread + socket = (shared) problem? (2.048) (8) Sep 06 2010 Does DMD build as a 32bit binary under linux? 
(6) Sep 06 2010 btw ... GDB 7.2 is stable - with D support :) (3) Sep 05 2010 CMake for D2 ready for testers (56) Sep 05 2010 Bikeshedding fun: suggest a name for 64 bit Phobos library (10) Sep 05 2010 Cloning in D (14) Sep 05 2010 std.typecons.Rebindable with arrays (1) Sep 05 2010 Purity in Java & D (7) Sep 05 2010 Creation in Dylan (2) Sep 04 2010 Behaviour of goto into catch blocks. (2) Sep 04 2010 Style updated on d-programming-language.org (6) Sep 04 2010 Semantics of casting **to** shared? (5) Sep 03 2010 any update on large file support for linux? (5) Sep 03 2010 Logger For D Design Doc (3) Sep 03 2010 32-bit Memory Limitations (11) Sep 03 2010 this as lvalue? (25) Sep 03 2010 blosc (3) Sep 03 2010 Assigning map result, and in-place map (8) Sep 02 2010 D Compiler for .NET (1) Sep 02 2010 Overloading + on points (2) Sep 02 2010 do D support something like C# 4.0 co/contra-variance? (9) Sep 02 2010 Please comment on (50) Sep 02 2010 Miscellaneous memory management questions (4) Sep 01 2010 The new Mono GC (13) Sep 01 2010 [Challenge] implementing the ambiguous operator in D (52) Sep 01 2010 [OT] Dark Star (1974) - the platinum age of movies (49) Sep 01 2010 [D.typesystem] Suggestion for improving OO inheritance models (12) Sep 01 2010 [D.typesystem] Static (CT) enforce anybody? (17) Sep 01 2010 Reporting TDPL bugs in Bugzilla (11) Aug 31 2010 Overloading + on points (1) Aug 31 2010 Bug 3999 and 4261 (57) Aug 30 2010 std.mixins (47) Aug 30 2010 DMD Samples (2) Aug 29 2010 Where do I post website bugs? (10) Aug 29 2010 inplace_merge, nWayUnion (7) Aug 29 2010 std.container: Work in progress or ready for prime time? (2) Aug 29 2010 Beginner not getting "string" (9) Aug 29 2010 std.algorithm move() struct emptying (16) Aug 29 2010 About removing the new keyword (9) Aug 28 2010 Generic code: autoconst, autopure, autonothrow (34) Aug 28 2010 Type inference in array literals (6) Aug 27 2010 Implementing something like DLR in D? 
(6) Aug 27 2010 std.concurrency : using delegate in lieu of function (2) Aug 27 2010 file i/o in a variety of languages (43) Aug 26 2010 Bug reports [Was: Re: About Andrei's interview, part 3] (10) Aug 26 2010 moveFront() and friends: Request for comment (5) Aug 26 2010 [Slight OT] TDPL in Russia (228) Aug 26 2010 Interfacing to C: const or immutable? (2) Aug 26 2010 Array types (6) Aug 26 2010 Getting the name of current module (6) Aug 26 2010 Why is Exceptions c'tor this(immutable(char[]) instead of (7) Aug 26 2010 Safe Cursors and Ranges (11) Aug 25 2010 Using glog's design for Phobos? (52) Aug 25 2010 About Andrei's interview, part 3 (26) Aug 25 2010 int[new] (1) Aug 25 2010 D.challenge (6) Aug 25 2010 Andrei A. interview (1) Aug 25 2010 Retrieving the traversed range (21) Aug 25 2010 Berkeley DB lib for d2? (1) Aug 25 2010 Sources for D docs? (3) Aug 24 2010 Link to that D doc-search tool? (3) Aug 24 2010 How about a nabble forum archive? (1) Aug 24 2010 update zlib used in phobos? (2) Aug 24 2010 Why all the D hate? (29) Aug 24 2010 Range literals (2) Aug 23 2010 contracts for external functions? (4) Aug 23 2010 D1: accepts-invalid? (7) Aug 22 2010 Link to dsource on each Phobos module doc (2) Aug 22 2010 Yet another Gregorian.d contribution (5) Aug 22 2010 std.socket is horrible. (5) Aug 22 2010 Self-compilation (5) Aug 22 2010 Patches for DMD2 interface generation: cleanup, bugfixes and prettyprinting (4) Aug 22 2010 Socket.shutdown() error handling (1) Aug 22 2010 Algorithms & opApply (9) Aug 22 2010 safeD formal semantics (5) Aug 21 2010 Double backslashes in dmd's -deps? (2) Aug 21 2010 Possible new COW/copy suggestions? (1) Aug 21 2010 Suggested changes in Phobos min() and max() (1) Aug 21 2010 Are aliases supossed to be implicitly cross-module overloadable?
(3) Aug 21 2010 ddmd (14) Aug 21 2010 map on fixed-size arrays (5) Aug 21 2010 ddmd (1) Aug 21 2010 The type of an element-wise arithmetic expression (2) Aug 21 2010 re: Scope/block behaviour (1) Aug 20 2010 re: Scope/block behaviour (2) Aug 20 2010 Version bug? (3) Aug 20 2010 How does D handle null pointers? (3) Aug 20 2010 On C/C++ undefined behaviours (110) Aug 20 2010 Unused variables, better as error or warning? (33) Aug 19 2010 std.process.system asynchronous? or win cmd line oddity? (20) Aug 19 2010 Regarding iota (2) Aug 19 2010 Fixing std.string (15) Aug 19 2010 re: welcome! (1) Aug 19 2010 Collateral Exceptions, Throwable vs Exception (22) Aug 19 2010 [META] Amending subject names in long discussions (5) Aug 19 2010 Future optimizations (1) Aug 19 2010 TDPL ebook (1) Aug 19 2010 Element-wise addition of arrays (2) Aug 19 2010 Scope/block behaviour (8) Aug 18 2010 Why foreach(c; someString) must yield dchar (34) Aug 18 2010 Immutability and strings (4) Aug 18 2010 Why C++ compiles slowly (112) Aug 18 2010 DMD 64bit Status? (6) Aug 17 2010 array operations enhancements (8) Aug 17 2010 [OT] Stupid programmer tricks (or "Fun with RDMD") (6) Aug 17 2010 [OT] (7) Aug 17 2010 htod, no version for Linux? (14) Aug 17 2010 [OT] NG needs more simultaneous connections? (2) Aug 17 2010 std.algorithm.move (1) Aug 17 2010 OT: Mars on upcoming OpenGL SuperBible edition cover (1) Aug 17 2010 Xoc, yaspl (yet another SPL) (4) Aug 16 2010 Contributing (5) Aug 16 2010 Current RDMD, please? (36) Aug 16 2010 to!()() & leading/trailing whitespace (9) Aug 16 2010 Notes on the Phobos style guide (15) Aug 16 2010 readf for the novice (7) Aug 16 2010 To avoid some 32=>64 bit user code port bugs (14) Aug 16 2010 Unit tests in libraries? (10) Aug 15 2010 Precise garbage collection (3) Aug 15 2010 Does D support dynamically allocated multi-dimensional arrays? (2) Aug 15 2010 OPTLINK & cmake (2) Aug 15 2010 Phobos incubator project ? 
(18) Aug 15 2010 DMD(2), linux/bin/* tools (5) Aug 15 2010 [OT] Zen on objects vs closures (1) Aug 14 2010 wikibooks.org (9) Aug 13 2010 Alias parameters for templates does not accept built-in types (2) Aug 13 2010 Infinite loop in compiler with forward reference (6) Aug 13 2010 private vs protected in Interfaces (9) Aug 13 2010 D examples (8) Aug 13 2010 [OT] A model for recording everything that is computable (1) Aug 12 2010 dmd as link driver (4) Aug 12 2010 alias this and immutable shenanigans (3) Aug 12 2010 The Status of Const (109) Aug 12 2010 Matlab and D (2) Aug 12 2010 LDC can't import std.stdio (2) Aug 12 2010 Head-to-head comparison of Go and D on golang-nuts (1) Aug 12 2010 [OT] LALR table-generation docs? (2) Aug 12 2010 Time for ng name fixes? (4) Aug 12 2010 Phobos urllib (12) Aug 12 2010 Integer Square Root (2) Aug 12 2010 Am I doing it wrong or is this a bug ? (8) Aug 11 2010 How Garbage Collector works? (12) Aug 11 2010 Is there a report for performance compared with popular languarge? (6) Aug 11 2010 scope function parameters (3) Aug 11 2010 Inconsistent stdout buffering behaviour (1) Aug 11 2010 A const idiom + a different 'delete' (15) Aug 11 2010 Overloading Lazy Vs. Non-Lazy (14) Aug 10 2010 Linker errors with Interfaces (7) Aug 10 2010 Queue (3) Aug 10 2010 Destructor semantics (52) Aug 10 2010 Should destructors be able to tell who called them? (13) Aug 10 2010 What's the use case of clear? (9) Aug 09 2010 Custom Blocks (29) Aug 09 2010 TDPL Errata: Page 208 (13) Aug 09 2010 P!=NP (10) Aug 08 2010 synchronized vs. 
C volatile (19) Aug 08 2010 typeid() woes (5) Aug 08 2010 TDPL: Inheritance (1) Aug 08 2010 Static constructor call ordering in imported modules (5) Aug 08 2010 TDPL: Manual invocation of destructor (62) Aug 08 2010 where is these files => TGcc.h and str4.h (4) Aug 08 2010 C1X features (2) Aug 08 2010 D2 and stream (9) Aug 08 2010 I don't quite get foreach ref (3) Aug 07 2010 void initialization of class fields (3) Aug 07 2010 getopt & single-letter options with parameters (14) Aug 06 2010 tolf and detab (61) Aug 06 2010 Wrong and somewhat rude statement in your docs. (8) Aug 06 2010 Templated struct operator overloading (2) Aug 06 2010 Template constraints error messages using cputs() (21) Aug 05 2010 D 2.0 windows installer overwrites PATH variable (6) Aug 05 2010 range chunks (22) Aug 05 2010 Build farm(er) (18) Aug 05 2010 Where to find D2 docs? (2) Aug 05 2010 Congratulations to Philippe Sigaud (6) Aug 05 2010 Mixin Expressions, can't evalutate string variable (24) Aug 05 2010 Bug in the spec or implementation? (10) Aug 05 2010 [OT] Is the D(n) PL discovery or invention? (12) Aug 04 2010 Sharing Ref Counted Containers (7) Aug 04 2010 DMD1 binaries a lot bigger than DMD2 due to weird zero blocks? (6) Aug 04 2010 Where is stdout? 
(3) Aug 04 2010 Regarding typedef (3) Aug 03 2010 Bugs in template constraints (9) Aug 03 2010 D2 and FreeBSD (12) Aug 03 2010 Andrei's Google Talk (308) Aug 02 2010 A working backtrace for linux (15) Aug 02 2010 TDPL: Function literals with missing argument type specifiers do not compile (3) Aug 02 2010 TDPL: Cross-module overloading (1) Aug 02 2010 Non-null request in Bugzilla (3) Aug 02 2010 std.typecons.Tuple and tuples (3) Aug 02 2010 Problem with constant structs, destructors and copy constructors (1) Aug 01 2010 Embedded software DbC (5) Aug 01 2010 Mac OSX installer for dmd (57) Aug 01 2010 Shared (4) Aug 01 2010 Documentation generation (19) Aug 01 2010 std.concurrency and efficient returns (19) Aug 01 2010 What functions could be added to std.algorithm? (15) Aug 01 2010 gdb bugs/patches (3) Jul 31 2010 A problem with D contracts (22) Jul 30 2010 Phobos-compatible license on Google Code? (9) Jul 30 2010 Behavior of signed/unsigned conversion in template parameters (2) Jul 30 2010 Axiomatic purity of D (19) Jul 29 2010 Unfortunately delete is a keyword (5) Jul 28 2010 indexOf's return val upon not found (3) Jul 28 2010 Joe Duffy's "Thoughts on immutability and concurrency" (4) Jul 28 2010 Superhash buried in Druntime does super work. (3) Jul 27 2010 TDPL: Overloading template functions (31) Jul 27 2010 Proposal for dual memory management (9) Jul 27 2010 TDPL: Foreach over Unicode string (23) Jul 27 2010 Next computer architectures & D (3) Jul 27 2010 [OT] CMS recommendation for these criteria...? (6) Jul 27 2010 FloatLiteral 1f (3) Jul 27 2010 Associative-array .remove method returns void for non-existent keys (3) Jul 27 2010 Better alignment management (2) Jul 27 2010 FFT Lib? (22) Jul 27 2010 alternative to candydoc? (10) Jul 27 2010 Array-wise assignment on unallocated array (4) Jul 27 2010 Re: poll about delete (16) Jul 27 2010 Interest in std.algorithm.joiner? 
(15) Jul 27 2010 GC & IRC Server (7) Jul 26 2010 Documentation on D's vtable (3) Jul 26 2010 Problems building druntime (2) Jul 26 2010 Contravariance (is this a bug) (4) Jul 26 2010 couldn't we keep complex number literals? (3) Jul 26 2010 [OT] The Clay Programming Language (5) Jul 26 2010 D and Emerging Languages Camp (3) Jul 25 2010 Concurrency (3) Jul 25 2010 Should alias expand visibility? (24) Jul 25 2010 library defined typedef (3) Jul 25 2010 Do sorted ranges have any special properties? (40) Jul 25 2010 Uniform call syntax for operator overloads (4) Jul 25 2010 Where statement (10) Jul 24 2010 D 2.0 (14) Jul 24 2010 PNG Lib? (6) Jul 24 2010 Conditional purity (6) Jul 24 2010 CTFE JIT (1) Jul 24 2010 My presentation at Oscon last Thursday (5) Jul 24 2010 Some questions (5) Jul 24 2010 Why don't other programming languages have ranges? (112) Jul 23 2010 ReturnThis/ chain (11) Jul 23 2010 [OT] TDPL errata ofline (2) Jul 23 2010 TDPL Errata site is down (14) Jul 23 2010 Ropes (4) Jul 23 2010 Manually linking druntime and phobos2 (6) Jul 21 2010 Phobos2 networking (1) Jul 21 2010 dsource down? (2) Jul 20 2010 emplace, scope, enforce [Was: Re: Manual...] (25) Jul 20 2010 C strings (1) Jul 20 2010 Proposal: Automatic shallow Unqual on IFTI (4) Jul 20 2010 D's treatment of values versus side-effect free nullary functions (52) Jul 19 2010 Are iterators and ranges going to co-exist? 
(57) Jul 18 2010 Higher level built-in strings (51) Jul 18 2010 Linking problem with custom associative arrays (1) Jul 17 2010 C++ Stylistics (7) Jul 17 2010 Improving std.algorithm.find (18) Jul 17 2010 opDollar and infinite slices on infinite ranges (8) Jul 17 2010 Static array initialization (2) Jul 17 2010 Spotting possible integer overflows statically (1) Jul 16 2010 Suggestion for allocators and D integration for them (2) Jul 16 2010 The singleton design pattern in D, C++ and Java (19) Jul 16 2010 [OT] Next word in this sequence of words ending with 'ty' (10) Jul 16 2010 Range indexes (1) Jul 15 2010 Poll regarding delete removal (7) Jul 15 2010 [100% OT] (7) Jul 15 2010 dflplot/Plot2Kill, Most Mature *nix GUI For D2 (20) Jul 15 2010 One case of careless opDispatch :) (4) Jul 15 2010 Extending deprecated (2) Jul 15 2010 Overloading property vs. non-property (24) Jul 15 2010 TDPL notes, part 2 (4) Jul 15 2010 State of and plans for the garbage collector (12) Jul 14 2010 Getting # Physical CPUs (21) Jul 14 2010 TDPL notes, part 1 (7) Jul 13 2010 TDPL promo at informit.com (1) Jul 13 2010 Why will the delete keyword be removed? (26) Jul 13 2010 A way to promote the language (2) Jul 13 2010 [OT] opera 10.60 and d's bugzilla (4) Jul 13 2010 reddit discussion on Fedora's inclusion of ldc (3) Jul 13 2010 TDPL, shared data, and Phobos (18) Jul 12 2010 Debugging (20) Jul 12 2010 getNext (73) Jul 12 2010 I want my Memory back ;-) (18) Jul 12 2010 More than cache effects (1) Jul 12 2010 Cost of Returning Unused Struct? (1) Jul 12 2010 One more purpose for -cstyle (2) Jul 12 2010 unittest behavior (3) Jul 12 2010 Why is array.reverse a property and not a method? 
(24) Jul 12 2010 C#'s conditional attributes (3) Jul 12 2010 renaming foreach_reverse to rforeach (1) Jul 12 2010 Anybody else working on an advanced Windows GUI (6) Jul 12 2010 Static Analysis at Mozilla, must_override (3) Jul 11 2010 Overhauling the notion of output range (34) Jul 11 2010 Getting a tuple of all members of a struct or a class (1) Jul 11 2010 Getting a tuple of all members (1) Jul 11 2010 Empty subexpressions captures in std.regex (3) Jul 10 2010 Manual memory management in D2 (55) Jul 10 2010 Winelib: DFL -> Linux? (6) Jul 09 2010 Allocating structs with new? (19) Jul 08 2010 "Rust" language (5) Jul 08 2010 What are AST Macros? (78) Jul 08 2010 D2 Phobos what's the scope ? (1) Jul 08 2010 Update to current RDMD? (1) Jul 08 2010 std.pattern..mixin temptes..std.concurrency (10) Jul 07 2010 SafeD & Java (1) Jul 07 2010 One usage of GNU C (1) Jul 07 2010 Concurrency in the D Programming Language: free chapter (7) Jul 07 2010 [OT, but getting closer]: reddit discussion on C++ Concepts: A Postmortem (11) Jul 07 2010 Abstract Classes vs Interfaces (10) Jul 06 2010 Tuple, TypeTuple, tupleof etc (2) Jul 06 2010 metaprograming: staticMap and such? (5) Jul 06 2010 Nullable!T (13) Jul 06 2010 D on langpop.com (2) Jul 05 2010 [OT] D shows up on SO. (3) Jul 05 2010 OT: anybody up for a beer? (1) Jul 04 2010 More for bitfields (4) Jul 03 2010 Organize dsource by D1/D2? (3) Jul 02 2010 slow compilation speed (3) Jul 02 2010 Error in TDPL? Order of static constructors (2) Jul 01 2010 mangle (18) Jul 01 2010 Bit disappointed with TDPL build quality (5) Jul 01 2010 D: An Up and Coming Embedded Software Language (5) Jul 01 2010 Spikes in array capacity (13) Jul 01 2010 Can someone explain why this is not an error? (8) Jun 30 2010 Better tuples (6) Jun 30 2010 Class field inheritance (4) Jun 30 2010 LDC, GDC for D2 (9) Jun 30 2010 Interesting to see (for geeks) (7) Jun 30 2010 Final switch statement (1) Jun 29 2010 [OT] TDPL is out of stock on Amazon? 
(3) Jun 29 2010 Iterating over containers of immutable objects (8) Jun 29 2010 Network I/O and streaming in D2 (27) Jun 29 2010 Actors are not always the best solution (1) Jun 29 2010 C# 4.0 dynamic vs std.variant (7) Jun 29 2010 [OT] modules vs filenames in "module-name == filename" package systems (7) Jun 28 2010 Using ()s in property functions (34) Jun 28 2010 Rob Pike about Go, seems D has a lot of friends. (3) Jun 28 2010 Data-binding infrastructure. (3) Jun 28 2010 bind() vs curry() (1) Jun 28 2010 MPIR lib (9) Jun 28 2010 Backquotes look like regular quotes in TDPL? (16) Jun 27 2010 Compilation of a numerical kernel (1) Jun 27 2010 Requesting some DMD hacking advice for property lowering. (3) Jun 27 2010 dmd build fail (2) Jun 27 2010 What exactly are the rules of governance of the development of D? (4) Jun 27 2010 Status of std.xml (D2/Phobos) (37) Jun 27 2010 [TDPL] arrays of D future (6) Jun 27 2010 An idea for the D community (9) Jun 26 2010 immutable singleton pattern with static opCall (10) Jun 26 2010 Publicity idea: Suggest TDPL at Libraries (1) Jun 26 2010 Issues with array expressions (3) Jun 26 2010 [TDPL] Will TDPL LaTeX class/template be accessible? (4) Jun 26 2010 What's the authoritative difference between immutable and const for (6) Jun 26 2010 loading D into R in OSX. Dynamic libraries problem (3) Jun 25 2010 Part 1 of the Language Reference Docs Review (5) Jun 25 2010 Good dotProduct (12) Jun 25 2010 Automatic library download plan (7) Jun 25 2010 Make D more public visible (20) Jun 25 2010 TraceHandler not being called on Access violation (3) Jun 24 2010 Question about the validness of code examples in the docs (1) Jun 24 2010 std.functional.curry isn't (9) Jun 24 2010 The X Macro (23) Jun 24 2010 property (25) Jun 24 2010 Intel Concurrent Collections for Haskell [OT] (1) Jun 24 2010 cent/ucent (12) Jun 24 2010 Public code reviews of Phobos code (1) Jun 24 2010 guide for building DMD/Phobos from SVN (on Linux)? 
(3) Jun 24 2010 D1 is such a nice little language (3) Jun 24 2010 The status of D2 (3) Jun 24 2010 is expression (5) Jun 24 2010 The future of DWT (13) Jun 24 2010 More on StringToken (4) Jun 23 2010 Mac OS X Installation (8) Jun 23 2010 is Expression, Type Identifier : TypeSpecialization (4) Jun 23 2010 GUI Library for D2+Phobos (25) Jun 23 2010 I'd like to try D, but... (12) Jun 23 2010 Is it time for D1 to die of natural causes? (17) Jun 22 2010 readf anyone? (5) Jun 22 2010 Feature request: Shared receive for all waitable objects (2) Jun 22 2010 Latest string_token Code (21) Jun 21 2010 Operator Precedence Table in Online Docs (1) Jun 21 2010 Re: Is there ANY chance we can fix the bitwise operator precedence (1) Jun 21 2010 Calling C function with static array includes length and pointer (6) Jun 21 2010 Errors in TDPL (72) Jun 21 2010 finding a circular dependency (11) Jun 21 2010 DMD Backend Long-term (56) Jun 21 2010 Stack info (1) Jun 21 2010 How to erase chars from char[]? (3) Jun 20 2010 Using Classes as the KeyType (from the Docs) (6) Jun 20 2010 Combsort comparison (6) Jun 20 2010 A web server with D (7) Jun 19 2010 String Literal Docs (20) Jun 19 2010 main.d(61): Error: temp_[i_] isn't mutable (14) Jun 19 2010 Floating point not loaded (6) Jun 19 2010 Re: Is there ANY chance we can fix the bitwise operator precedence (1) Jun 19 2010 MAX_CHAR (7) Jun 19 2010 Where will D be in 2015 in the programming language ecosphere? (15) Jun 18 2010 Is there ANY chance we can fix the bitwise operator precedence rules? (97) Jun 18 2010 Speeding up program loading (4) Jun 17 2010 std.bind (2) Jun 17 2010 Improving std.regex(p) (51) Jun 17 2010 Recent work on Phobos and asking for volunteers (2) Jun 17 2010 Review: std.msgpack (24) Jun 17 2010 Pathfinding? (4) Jun 17 2010 Pancake Sort comparison (1) Jun 16 2010 beforeGarbageCollection (9) Jun 16 2010 An Introduction (2) Jun 16 2010 Re: help with bind (1) Jun 16 2010 Associative array dup property? 
(4) Jun 15 2010 enforce()? (159) Jun 15 2010 When will Phobos be usable from safe functions? (1) Jun 15 2010 Idea: bug-of-the-week club (7) Jun 15 2010 Idea: Compilation benchmarks for D publicity (5) Jun 15 2010 "ubyte[size] store = void" in std.variant (8) Jun 15 2010 std.container / tightArray / class (de)allocators (9) Jun 14 2010 Signed word lengths and indexes (152) Jun 14 2010 Constraints error messages [Was: Re: Constrained Templates] (3) Jun 14 2010 D const enables multi-reader synchronization (13) Jun 13 2010 Compiler Test Suite (1) Jun 13 2010 Price drop for TDPL on Amazon to $41.10<eom> (17) Jun 13 2010 Is there a way to get the names of a function's parameters? (6) Jun 13 2010 Layout of 80-bit Reals on Mac (2) Jun 13 2010 Constrained Templates (26) Jun 13 2010 The design principles of D (aka The D Manifesto) (7) Jun 13 2010 Brief review of the samples in D distribution package (1) Jun 12 2010 Re: Go Programming talk [OT] - C is simple enough!??? (1) Jun 12 2010 Re: Go Programming talk [OT] - C is simple enough!??? (1) Jun 12 2010 unary operator overloading syntax (5) Jun 11 2010 Do you think free ad's might help advance D? (16) Jun 11 2010 C++ and D stackoverflow (11) Jun 11 2010 Could we get a list of things that have been, or will be, removed (4) Jun 11 2010 How do you use C based libs with D? (6) Jun 11 2010 Memcache (3) Jun 11 2010 TDPL shipping off Amazon (12) Jun 11 2010 MingW compatibility (11) Jun 10 2010 Trying to build Tango as dynamic library on linux (8) Jun 09 2010 std.algorithm and immutable arrays (7) Jun 09 2010 Static analysis at Mozilla (8) Jun 09 2010 Hamming numbers comparison, take 2 (2) Jun 08 2010 Relative performance (1) Jun 08 2010 LLDB Debugger (7) Jun 08 2010 Phobos import graph (6) Jun 08 2010 Questions about Unicode, particularly Japanese (12) Jun 08 2010 Experimenting with std.all (4) Jun 08 2010 BinaryHeap is a range so it goes in std.range. Agree? 
(36) Jun 08 2010 Out parameters and the strong exception guarantee (5) Jun 08 2010 First experience with std.algorithm: I had to resort to writing a (16) Jun 07 2010 [ot] D users at Google (11) Jun 07 2010 Is the declaration grammar definition of 'Parameter' correct? (16) Jun 07 2010 Wide characters support in D (55) Jun 06 2010 D gets the rounding right, too (1) Jun 06 2010 Go Programming talk [OT] (65) Jun 06 2010 Re: need clarification: will typedef, C struct initialization, etc. (2) Jun 05 2010 Variant[string] assoc array -- runtime error (4) Jun 05 2010 Who is BearPile? (3) Jun 05 2010 I'm holding it in my hands (17) Jun 05 2010 Various documentation questions (9) Jun 04 2010 Inlining done by JavaVM (2) Jun 04 2010 D at shootout.alioth.debian.org (9) Jun 04 2010 File.byLine should return dups? (7) Jun 04 2010 Discussion on D on StackOverflow.com (1) Jun 03 2010 Marketing of D - article topic ideas? (111) Jun 03 2010 Anagrams comparison (1) Jun 03 2010 Is it intentional that forward references in unittests aren't (7) Jun 03 2010 Error 42: Symbol Undefined __tls_array (7) Jun 03 2010 Convert "C Function Pointer" to D? (4) Jun 03 2010 article on ctfe on reddit (1) Jun 03 2010 _based pointers in D? (3) Jun 02 2010 OT: "Using C++ in GCC is OK" (3) Jun 02 2010 Why libphobos2.a? (5) Jun 02 2010 Re: need clarification: will typedef, C struct initialization, etc. (3) Jun 01 2010 BinaryHeap (2) Jun 01 2010 dmdz, take 2 (4) Jun 01 2010 OT: Cookie Monster consumes computerized coffee machine (1) Jun 01 2010 Socket + Thread - any working example?? (1) Jun 01 2010 What Every Programmer Should Know About Memory (1) Jun 01 2010 One document about Go (81) Jun 01 2010 This just in: authorless TDPL becomes collector's edition (34) Jun 01 2010 anybody got D source code for a DirectX/DirectDraw proxy dll? 
(1) May 31 2010 Stricter protection attributes for students (3) May 31 2010 Unofficial wish list status.(Jun 2010) (5) May 31 2010 If you have to learn just one programming language (62) May 30 2010 APT repository (1) May 30 2010 Containers I'd like to see in std.containers (26) May 30 2010 Array access via pointer (12) May 30 2010 eliminating std.range.SListRange? (17) May 30 2010 Basic principles behind parallelism (2) May 30 2010 std.container: the advent of deterministic containers (2) May 30 2010 Binary data-structure serialization (45) May 30 2010 associative arrays / hash function speed up (1) May 29 2010 The last changes to range (12) May 29 2010 std.mmfile doc. (3) May 28 2010 Copy constructors for lazy initialization (15) May 28 2010 Go has contempt for generics (17) May 28 2010 C#5 desiderata (17) May 28 2010 Huffman coding comparison (51) May 28 2010 Memory Mapped File Access (17) May 27 2010 Shared Class Variables (4) May 27 2010 std.container update - now Array is in (8) May 27 2010 Bug: compiler crash when using module name twice (4) May 27 2010 Method hiding (4) May 27 2010 AAs of struct or array (5) May 27 2010 Static constructors in circularly imported modules - again (8) May 26 2010 std.container update (19) May 26 2010 Uniform function call syntax (10) May 26 2010 'out of memory' error compiling on windows (2) May 25 2010 container stuff (66) May 24 2010 Installing D on MacOS X Leopard box (13) May 24 2010 To interface or not to interface (38) May 23 2010 To use opDispatch (10) May 23 2010 ODE bindings (2) May 23 2010 Bug fix week (30) May 22 2010 GWT clone (4) May 22 2010 Another example for comparison (1) May 20 2010 Default argument values (18) May 19 2010 Poll: Primary D version (126) May 19 2010 "The Right Tool" site (4) May 18 2010 [OT] My tips on giving technical talks (4) May 18 2010 dmdbindmd not found on OSX (3) May 17 2010 Samples directory (7) May 17 2010 Alternative typeof syntax (7) May 17 2010 Re: Misc questions:- licensing, VC++ 
IDE compatible, GPGPU, LTCG, (2) May 17 2010 need clarification: will typedef, C struct initialization, etc. go or not? (46) May 17 2010 LLVM backend for GHC (2) May 16 2010 Does D suck? (21) May 16 2010 News Reader for iPod? (11) May 16 2010 std.gregorian contribution (10) May 16 2010 Misc questions:- licensing, VC++ IDE compatible, GPGPU, LTCG, QT, SDL (29) May 15 2010 [OT] The One Hundred Year Data Model (4) May 14 2010 public import and bugs it causes (49) May 14 2010 complement to $ (45) May 13 2010 Intel Single-chip Cluster (12) May 13 2010 On Iteration is discussed again on reddit (3) May 12 2010 pure generators (10) May 12 2010 Drop extern (C++) (10) May 11 2010 Logger for D (12) May 11 2010 Should scope(exit) be valid outside of a scope? (11) May 11 2010 JSON improvement request (2) May 11 2010 covariance, operator overloads, and interfaces (4) May 10 2010 Linking Phobos with libcurl (3) May 10 2010 Programming Dojo (3) May 10 2010 tango naming conversation (3) May 08 2010 Tuple unpacking example, and to! (3) May 08 2010 Spellechecker - is this really fun? (20) May 08 2010 Can D be cute? (Qt) (38) May 07 2010 What is D? (14) May 07 2010 Apple disallows D-Sources (17) May 06 2010 Large Address Aware W/ OptLink (5) May 06 2010 envy for "Writing Go Packages" (69) May 05 2010 Another typedef usage example (2) May 04 2010 Unit tests in D (51) May 04 2010 c++ vs lisp -- D perspective (3) May 04 2010 // Function parameters, sound, clear and clean // (5) May 03 2010 Is [] mandatory for array operations? (46) May 03 2010 Phobos Proposal: replace std.xml with kxml. (17) May 03 2010 ISO646 (1) May 02 2010 Studying the DMD front-end and contributing to similar projects? 
(3) May 01 2010 Improving Compiler Error Messages (88) May 01 2010 std.gregorian (10) May 01 2010 // A great help for the best D, I think // (15) May 01 2010 The BSD license problem and a trivial solution (5) May 01 2010 Linking large D programms (5) Apr 30 2010 Unofficial wish list status.(May 2010) (1) Apr 30 2010 Tango & Phobos (39) Apr 30 2010 sad (12) Apr 29 2010 Re: D2 std.thread and ThreadAddr (3) Apr 29 2010 Results are in: static foreach is a slower than hand unrolling your (5) Apr 29 2010 Numerical code (1) Apr 28 2010 Debugging with GDB on Mac (2) Apr 28 2010 Less than 30 days to bound copies of TDPL (19) Apr 28 2010 Masahiro Nakagawa and SHOO invited to join Phobos developers (38) Apr 28 2010 Clojure Protocols & expression problem (25) Apr 28 2010 Built-in range type (3) Apr 27 2010 disable usage and design (1) Apr 27 2010 Change some keywords and enum manifests? (19) Apr 27 2010 Loop invariant for a binary search (4) Apr 27 2010 Changeset 442, implicit Vs explicit (12) Apr 26 2010 Multiple "Standard" Libraries for D? (3) Apr 26 2010 json output and Visual D (3) Apr 26 2010 Meta-programming in Heron (1) Apr 26 2010 Inlining of function(){...}() (4) Apr 25 2010 Function-local imports (4) Apr 25 2010 Less pure for objects (3) Apr 24 2010 Some thoughts on std.demangle (1) Apr 23 2010 not an lvalue (2) Apr 23 2010 Anyone know what's going on here? (variable with an instantiated (8) Apr 23 2010 Things I Learned from ACCU 2010 (58) Apr 23 2010 DDoc improvements (1) Apr 22 2010 Function name (8) Apr 22 2010 To help other D implementations (1) Apr 21 2010 typedef in D2 (2) Apr 21 2010 Remove real type (79) Apr 21 2010 DMD crash! and JSON files (16) Apr 20 2010 D FTP Library (6) Apr 20 2010 SListRange assignable elements (1) Apr 20 2010 JavaScript is the "VM" to target for D (45) Apr 17 2010 Is this a bug with goto? (4) Apr 16 2010 Re: value range propagation for _bitwise_ OR (6) Apr 16 2010 DMD and commercial application ? 
(7) Apr 15 2010 Low dimensional matrices, vectors, quaternions and a cubic equation (31) Apr 15 2010 Problems with std.time and local conversion? (18) Apr 14 2010 Undefined behaviours in D and C (31) Apr 14 2010 Re: value range propagation for _bitwise_ OR (3) Apr 13 2010 Is it time to deprecate COM compatibility through D interfaces? (12) Apr 13 2010 OpEquals and Interfaces (23) Apr 13 2010 What do you think about that ? (2) Apr 13 2010 freepascal and fantom links (3) Apr 13 2010 Struct template type inference (1) Apr 13 2010 Automatic opApply iteration counter (4) Apr 13 2010 Vectorization and more (1) Apr 12 2010 [feedback] folding in scintilla (12) Apr 12 2010 signed -> unsigned (2) Apr 11 2010 SListRange, Ranges, costructors (4) Apr 11 2010 D compilation speed vs. go (13) Apr 11 2010 struct opCast to void* and back (12) Apr 11 2010 Benchmarking in D (7) Apr 10 2010 Re: value range propagation for _bitwise_ OR (20) Apr 10 2010 opDispatch is grand! (7) Apr 10 2010 Real Time Reflection using __traits (1) Apr 10 2010 value range propagation for logical OR (83) Apr 10 2010 Patches, bottlenecks, OpenSource (15) Apr 09 2010 [gdb] Pushing the D patches upstream (again) (5) Apr 09 2010 Druntime AA interface could be enhanced a bit (7) Apr 09 2010 Patches (6) Apr 08 2010 Euler problems 14, 135, 174 (6) Apr 07 2010 Google Code Jam (3) Apr 07 2010 memory management (1) Apr 07 2010 std.stream and std.stdio (2) Apr 06 2010 New Software Design Technique Allows Programs To Run Faster (3) Apr 06 2010 garbage collection in d (16) Apr 06 2010 Clang error recovery (12) Apr 06 2010 Easily building a dynamic library with DMD (2) Apr 05 2010 data corruption (5) Apr 05 2010 "bstring" (6) Apr 05 2010 Bug 4070 and so on (4) Apr 04 2010 Implementation of open_cl (4) Apr 04 2010 Binary size with dmd 2.042 (4) Apr 04 2010 Runtime Reflection using Compile Time Reflection (4) Apr 03 2010 Pattern matching example (4) Apr 03 2010 Compile-time class instances (3) Apr 03 2010 Having problems 
using __traits (3) Apr 02 2010 Are anonymous enums mostly available for performance reasons? (6) Apr 02 2010 Memory Corruption with AAs (21) Apr 02 2010 Dendrite Flow Based Progamming System Released (in D) (1) Apr 01 2010 Need help fixing "The linker can't handle *.d.obj" issue (12) Apr 01 2010 The D programming language newsgroup should lift its game (46) Mar 31 2010 Is string.ptr a part of the language? (2) Mar 31 2010 pinned classes (13) Mar 31 2010 Re: Solution for Fatal flaw in D design which is holding back widespread (1) Mar 31 2010 Unofficial wish list status.(Apr 2010) (1) Mar 31 2010 console output in dll doesn't work (8) Mar 31 2010 go and defer implementation in Go (1) Mar 31 2010 Solution for Fatal flaw in D design which is holding back widespread adoption(tm) (18) Mar 31 2010 dmd changesets: 422, 427, 428 (1) Mar 31 2010 It is impossible to debug code compiled with dmd (18) Mar 30 2010 Enum arguments? (6) Mar 30 2010 "Avoid a Void" paper (3) Mar 30 2010 Fatal flaw in D design which is holding back widespread adoption (34) Mar 30 2010 DSpec / Templates + Delegates (5) Mar 30 2010 LIB flag on Windows DMD (nagging) (4) Mar 30 2010 Code Poet IDE (2) Mar 29 2010 [OT] Who lives in the by area? (11) Mar 29 2010 shouldn't phobos finally use some complete windows bindings (from the (1) Mar 29 2010 Bugzilla votes (2) Mar 28 2010 errata on tuple webpage (both 1.0 and 2.0) (2) Mar 28 2010 static foreach (9) Mar 28 2010 Two CTFE benchmarks (3) Mar 28 2010 More precise GC (15) Mar 28 2010 literals (45) Mar 27 2010 D is dead, so now where it the money going? (27) Mar 27 2010 Network in phobos (7) Mar 27 2010 Update an example on the website (2) Mar 25 2010 Does D allow paradoxical code? (6) Mar 25 2010 Linking fails on Hello World?! (5) Mar 25 2010 Few ideas to reduce template bloat (19) Mar 24 2010 Output of dmd and dmd.conf location (13) Mar 23 2010 Go updates (6) Mar 23 2010 D2 std.container ? container update events.. 
(8) Mar 23 2010 Implicit enum conversions are a stupid PITA (194) Mar 23 2010 Summary on unit testing situation (12) Mar 23 2010 Scope operator like in C++? (8) Mar 23 2010 Ranges and/versus iterators (30) Mar 23 2010 DMD 2.042 -- what happened to ModuleInfo.name? (2) Mar 23 2010 suspected ctfe bug (4) Mar 22 2010 Can we drop cast from float to int ? (6) Mar 22 2010 Can we drop casts from float to bool? (12) Mar 22 2010 storing the hash multiplier instead of the hash value (29) Mar 21 2010 Covariance and Contravariance in C# 1 - 11 (1) Mar 21 2010 Append char to char-array in function (1) Mar 21 2010 Append char to char-array in function (1) Mar 21 2010 Append char to char-array in function (6) Mar 21 2010 Append char to char-array in function (1) Mar 21 2010 Append char to char-array in function (1) Mar 20 2010 Obfuscating function names and the like inside exe file (15) Mar 20 2010 typeof from TypeInfo (2) Mar 19 2010 D: A solution looking for a problem? (10) Mar 19 2010 An important potential change to the language: transitory ref (19) Mar 19 2010 D Business Contacts? (5) Mar 19 2010 128 or 256 bit arithmetic (1) Mar 19 2010 Question concerning Exceptions (2) Mar 19 2010 static foreach (1) Mar 19 2010 What's the status of opDollar? (3) Mar 19 2010 Improve D1/D2 home page (17) Mar 19 2010 An idiom for disabling implicit conversions (11) Mar 18 2010 covariant final interface functions (6) Mar 18 2010 Reading few CPU flags from D code (2) Mar 18 2010 Guide for D (6) Mar 18 2010 trouble getting started with D (15) Mar 17 2010 More on invariants (1) Mar 17 2010 Some attributes (3) Mar 17 2010 pNaCl (1) Mar 17 2010 A problem with generators (25) Mar 17 2010 Breaking enhancement requests (1) Mar 16 2010 D: a computer programming language? (6) Mar 16 2010 Re: Why are some casts from floating point to integral done differently (1) Mar 16 2010 The D 1.0 docs are stuffed. 
(2) Mar 15 2010 Associative Arrays need cleanout method or property to help garbage (15) Mar 15 2010 assuaging static array literals (1) Mar 15 2010 A paper about traps and programming stress (17) Mar 14 2010 Why are some casts from floating point to integral done differently from others? (9) Mar 14 2010 Dynamic libraries, again (12) Mar 14 2010 Templates everywhere (18) Mar 14 2010 Some problems with operator overloading (11) Mar 14 2010 AssociativeArray!(K,V) .init has no [] operator (3) Mar 13 2010 [OT] Business idea: make a case that makes the iPhone look like a (2) Mar 13 2010 "future" for async calling? (2) Mar 13 2010 shouldn't override be obligatory? (11) Mar 13 2010 C++0x news (23) Mar 13 2010 T.init for static arrays? (9) Mar 13 2010 On * Language Design (6) Mar 13 2010 A possible replacement for pragma(msg,...) (1) Mar 13 2010 Worlds of CTFEs & templates (1) Mar 12 2010 Google opensources linear time, fixed space regex library (7) Mar 12 2010 [ot] Scala gets a system for tracking immutability (3) Mar 12 2010 Best Builder to Use (11) Mar 12 2010 casting of arrays really needs to be clarified in the specs! (3) Mar 12 2010 Built-in unsafety in D (3) Mar 11 2010 order of static constructor execution (40) Mar 11 2010 dmdz (64) Mar 11 2010 Re: any tool to at least partially convert C++ to D (htod for source (9) Mar 10 2010 Snow Leopard (2) Mar 10 2010 GC.calloc with random bits causes slowdown, also seen in built in (8) Mar 10 2010 [OT] Thunderbird 3 vs. 2 (37) Mar 10 2010 Set ops in std.algorithm (3) Mar 10 2010 toString, to!(char[]) & co (5) Mar 10 2010 functional (10) Mar 10 2010 Re: any tool to at least partially convert C++ to D (htod for source (1) Mar 10 2010 Re: any tool to at least partially convert C++ to D (htod for source (2) Mar 10 2010 The D license (11) Mar 09 2010 Property rewriting; I feel it's important. Is there still time? (27) Mar 09 2010 Is return by ref really safe? 
(5) Mar 09 2010 availabilty of D compiler for 64bit ubuntu (4) Mar 09 2010 LDC 0.9.2 release candidate 3 (10) Mar 09 2010 Collections in Scala (2) Mar 09 2010 Problem with opCast(bool) for classes (2) Mar 09 2010 any tool to at least partially convert C++ to D (htod for source files)? (55) Mar 08 2010 new operator overloading buggy? (4) Mar 08 2010 Web-News Reader Problems (13) Mar 08 2010 Proposal: Multidimensional opSlice solution (26) Mar 08 2010 opCmp: strange definition in specs (2) Mar 08 2010 Complete off topic: Nimrod language (1) Mar 08 2010 Simple spell check in new DMD release (3) Mar 07 2010 Interesting slides about compiler optimizations (2) Mar 07 2010 Holes in structs and opEquals (16) Mar 06 2010 Can we get a DMD with debug symbols as a standard thing? (2) Mar 06 2010 Empty array literals (8) Mar 06 2010 Container insertion and removal (36) Mar 05 2010 Is D a cult? (94) Mar 05 2010 Static attributes & immutability, static attributes seen from instances (14) Mar 05 2010 shared library (3) Mar 05 2010 OCaml compiler for NaCl (1) Mar 05 2010 wikipedia const-correctness link to d const article is missing (2) Mar 04 2010 Container hierarchy vs. container types (32) Mar 04 2010 Pantheios 1.0.1 (beta 193) released (4) Mar 04 2010 Line number of Exception instantiation (8) Mar 04 2010 speed.pypy.org (2) Mar 04 2010 An example of Clang error messages (37) Mar 04 2010 Is D a useful programming language for the web? (9) Mar 04 2010 Arguments and attributes with the same name (19) Mar 04 2010 Misleading contract syntax (9) Mar 04 2010 Feature suggestion: in-place append to array (25) Mar 03 2010 shouldn't std.perf.PerformanceCounter.stop return the this reference? (1) Mar 03 2010 Stack Alignment for Numerics (4) Mar 03 2010 Accessing the symbol table of a mach-o file (3) Mar 03 2010 Other parts of Contract Programming (1) Mar 02 2010 One minute to twelve: last chance to fix D2 ABI? 
(15) Mar 02 2010 64-bit and SSE (21) Mar 02 2010 Good Contract programming idiom? (24) Mar 01 2010 std.array.put doesn't put (7) Mar 01 2010 Readonly asserts and more (2) Mar 01 2010 Recursive template expansion (6) Feb 28 2010 Unofficial wish list status.(Mar 2010) (2) Feb 28 2010 nothrow functions/methods (5) Feb 28 2010 Problem with writeln (9) Feb 27 2010 Updating GDC links on Digital Mars D page (1) Feb 27 2010 A possible future purpose for D1 (15) Feb 26 2010 Tidier pre/post conditions (4) Feb 25 2010 A little challenge... (16) Feb 24 2010 A rationale for pure nothrow ---> pure nothrow (and nothing else (37) Feb 24 2010 Casts, especially array casts (9) Feb 24 2010 Evaluation order (8) Feb 24 2010 dil: updated to Tango 0.99.9 (3) Feb 24 2010 Light-weight threads (10) Feb 23 2010 Tango withered? (1) Feb 23 2010 Research Grant Proposal: Numerical Arrays (4) Feb 23 2010 Heap memory limit? (10) Feb 21 2010 Questions about IEEE754 floating point in D (8) Feb 21 2010 Improved doc page - small change to html (4) Feb 20 2010 Design of intuitive interfaces (19) Feb 20 2010 "Consume", "Skip", "Eat", "Munch", "Bite", or...? (12) Feb 20 2010 Back in the game: Numerics in D (17) Feb 20 2010 error of the day (2) Feb 19 2010 D hates to be dynamic linked (38) Feb 19 2010 Attacking Attack Patterns (10) Feb 19 2010 How to initialize static arrays with variable data (7) Feb 19 2010 From the silent community (17) Feb 18 2010 Unit tests (4) Feb 18 2010 Whither Tango? (93) Feb 18 2010 DWT (5) Feb 18 2010 Some questions now D2 is frozen (12) Feb 17 2010 detect if a symbol is template (4) Feb 17 2010 [TDPL] function type syntax (1) Feb 17 2010 CfP: WGP 2010 (1) Feb 17 2010 D3 Feature Request: rename 'in' to 'into'. (1) Feb 17 2010 call variadic function with argptr (1) Feb 17 2010 D2 Closure (14) Feb 17 2010 !in (28) Feb 17 2010 Array literals MUST be immutable. 
(39) Feb 16 2010 TDPL stats (9) Feb 16 2010 assert(object) calls invariant and crash (1) Feb 16 2010 Assertion Failue : toobj.c (5) Feb 16 2010 Does std.bigint compile under D2.40? (10) Feb 16 2010 C++ concepts (8) Feb 15 2010 array comparison (10) Feb 15 2010 Requesting FreeBSD and OSX volunteers to help test Goldie (2) Feb 15 2010 Const system names (10) Feb 15 2010 Module-level visibility (13) Feb 15 2010 Template specialization ignores attributes (at least 'immutable') (3) Feb 14 2010 OT: Linux shell validate-all-command-before-executing-anything behavior? (6) Feb 14 2010 placement new with void[] instead of void* (1) Feb 14 2010 new T[10] => new T[](10) (5) Feb 14 2010 change mixins (54) Feb 14 2010 disabling unary "-" for unsigned types (80) Feb 14 2010 typing base ^^ exp (7) Feb 14 2010 eliminate new operator paraphernalia (27) Feb 14 2010 strict (4) Feb 13 2010 foreach_reverse is better than ever (55) Feb 13 2010 LDC 0.9.2 release candidate (11) Feb 12 2010 OT: How not to be seen! (5) Feb 12 2010 Decimal separator (4) Feb 10 2010 How DMD's -w *Prevents* Me From Seeing My Warnings (25) Feb 10 2010 D on Reddit! (10) Feb 10 2010 D Website not loading D1 page (6) Feb 09 2010 Array collection ? (6) Feb 09 2010 Is there a modern GC for D? (8) Feb 09 2010 A special treat (18) Feb 08 2010 Coverity tool (46) Feb 08 2010 delegating constructors and "this = ..." (6) Feb 08 2010 Implicit conversion between calling conventions (1) Feb 08 2010 std.contracts.enforceEx (2) Feb 07 2010 Syntax for struct constructors (3) Feb 07 2010 "The last feature": overridable methods in interfaces (31) Feb 07 2010 "The last feature": scope structs (16) Feb 07 2010 Built-in arrays as output ranges (4) Feb 06 2010 Function with try/catch and no return statement (13) Feb 05 2010 Visual Studio plugin (4) Feb 05 2010 Disable NaN and Inf (9) Feb 05 2010 Proposal: Dedicated-string-mixin templates/functions (15) Feb 04 2010 Why does std.intrinsic contain nothrow? 
(1) Feb 04 2010 It's interesting how many old bugzilla issues are still open though (7) Feb 03 2010 A thought for template alias parameters? (9) Feb 03 2010 OT: I'm back (8) Feb 03 2010 A little project (3) Feb 03 2010 Making all strings UTF ranges has some risk of WTF (52) Feb 03 2010 Array operation for computing the dot product? (57) Feb 03 2010 Virtual methods (17) Feb 03 2010 Is there any more detailed dmd source guide? (7) Feb 03 2010 Static initialization order (8) Feb 03 2010 Scala design process vs D (9) Feb 01 2010 Some Java/C# design flaws (8) Feb 01 2010 I'm getting a Bus Error with dmd 1.056 (1) Jan 31 2010 Using DMD2 on Ubuntu 9.04 x64? (23) Jan 31 2010 Unofficial wish list status.(Feb 2010) (3) Jan 31 2010 Unit testing with asserts: Why is assertHandler required to throw? (11) Jan 31 2010 D-IDE and its new project site (4) Jan 31 2010 d2 bug ! bug ! (4) Jan 30 2010 TDPL a bad idea? (98) Jan 29 2010 more property discussion... (2) Jan 29 2010 std.string will get the boot (31) Jan 28 2010 Advertising D on Stackoverflow? (3) Jan 28 2010 Proposal: Definition of -attributes (15) Jan 26 2010 Function calls (163) Jan 26 2010 std.bigint? (3) Jan 26 2010 DMD generated exes' size differ from PE headers (5) Jan 26 2010 D Exceptions (9) Jan 25 2010 profiling in heavily concurrent D2 (3) Jan 23 2010 Google's Go (148) Jan 22 2010 Perfect hashing for string switch (13) Jan 21 2010 dmd warning request: warn for bitwise OR in conditional (9) Jan 21 2010 What's left to do for a stable D2? (62) Jan 21 2010 Isn't it time (19) Jan 20 2010 opDispatch or equivalent at static context (5) Jan 20 2010 "Unsigned-related bugs never occur in real code." (9) Jan 20 2010 Guy Steele on language design (7) Jan 19 2010 xTests 0.14.4 released (2) Jan 19 2010 interesting iterator library (3) Jan 19 2010 Why not throw everything at dmd (4) Jan 19 2010 Will D 2.0 concurrency be supported between processes? 
(9) Jan 18 2010 Invalid pointer reference (7) Jan 18 2010 Function pointers as values? (1) Jan 18 2010 Maybe type in Fortress (5) Jan 18 2010 What if D would require * for reference types? (20) Jan 17 2010 D Language 2.0 (124) Jan 17 2010 array operation a[] + b[] not implemented?? (8) Jan 17 2010 Setting Timeout for a Socket (phobos D1.0) (1) Jan 15 2010 Private default function arguments (20) Jan 14 2010 disable (82) Jan 13 2010 Why does GDC on the main page still link to the old sf.net site? (1) Jan 13 2010 D's auto keyword (20) Jan 13 2010 shouldn't auto ref parameters also be available for class template (1) Jan 13 2010 The magic meta namespace (again) (2) Jan 13 2010 Thread-local Member Variables? (9) Jan 12 2010 __traits proposal possible? or must be AST macros? (1) Jan 12 2010 opDispatch with template parameter and property syntax doesn't work (3) Jan 12 2010 D language help for a C# programmer (2) Jan 11 2010 Sturcts with constructor (dmd1.x) (10) Jan 11 2010 Variable-length stack allocated arrays (41) Jan 11 2010 Class Instance allocations (11) Jan 10 2010 Help with demangling (8) Jan 10 2010 Identity assignment operator overload LEGAL for const, etc. (1) Jan 10 2010 Division by zero - why no exception? (4) Jan 10 2010 Some random thoughts on Phobos2 (2) Jan 09 2010 Compiler: Size of generated executable file (106) Jan 08 2010 Class nomenclature (2) Jan 08 2010 Should .idup simply do nothing for arrays that are already immutable? (8) Jan 07 2010 What is this (2) Jan 07 2010 Is this a bug or a feature? (7) Jan 07 2010 Linker error with array expressions (8) Jan 06 2010 Computed gotos in LLVM (1) Jan 06 2010 Porting C# code (18) Jan 06 2010 dmd.1.055.zip missing linux shell binary (2) Jan 05 2010 "Compiler as a service" in C# 4.0 (6) Jan 05 2010 Named variadic arguments (2) Jan 05 2010 A newer WinDBG version, please... 
(9) Jan 05 2010 Compare in ParaSail (2) Jan 04 2010 The future of D (2) Jan 03 2010 restrict in practice (4) Jan 03 2010 Concurrency mailing list (3) Jan 02 2010 casting array literals doesn't work as stated in the docs (17) Jan 02 2010 Why doesn't __traits(allMembers,...) return a tuple? (1) Jan 01 2010 D related reddit link (4) Jan 01 2010 Things to look up in the docs (9) Dec 31 2009 Unofficial wish list status.(Jan 2010) (1)
http://www.digitalmars.com/d/archives/digitalmars/D/index2010.html
in reply to Getting soap client/server to work

You will get better diagnostics if you do:

    use SOAP::Lite +trace => 'all';

/J\

Thanks! Looks like I'm getting closer :) A 500 error from Apache. Hmmm. This should be reported in /var/log/httpd/error_log but isn't.

Jason L. Froebe, Team Sybase member
No one has seen what you have seen, and until that happens, we're all going to think that you're nuts. - Jack O'Neil, Stargate SG-1

Figured it out! The URI was wrong! :)

    print SOAP::Lite
        ->uri('')
        ->proxy('')
        ->hi()
        ->result;

Yeah, the uri actually becomes the namespace for the content of the SOAP Body element. SOAP::Lite uses this to determine the module to use for an rpc/encoded request. So your original attempt would try to load a module like cgi-bin::soap::Demo. The namespace URI doesn't have to be a URI of an actual resource and shouldn't be confused with one. This is one reason why I tend to use a URI of the form urn:Demo (which should work in your case; avoiding the http scheme reduces the potential for confusion IMO).
http://www.perlmonks.org/index.pl/jacques?node_id=587741
Cal.com raises $7.4 million after rebrand. Growth takes off after domain upgrade. During a global pandemic that has seen travel restrictions and lockdowns affect vast swathes of the

Judge grants preliminary injunction that will force Canvas to stop using its valuable domain name. A judge has ordered Canvas to stop using

Here's a look at my activity in 2021. Domain investing is my third-most important business from a financial standpoint. I don't do it for cash

Panel cites 8 false statements it says Simple Plan Inc made in UDRP. Simple Plan Inc's plan to get SimplePlan.com back wasn't so simple. The

No, Soccer Store is not confusingly similar to The Olympic Store. The International Olympic Committee argued that "Soccer Store" was confusingly

Try to get this song out of your head. The makers of the Baby Shark character won the domain name BabyShark.com in a cybersquatting case. Image from

These are the fastest growing and biggest domain name registrars for .com domains. ICANN has published the latest official data from Verisign

The operator of free annual credit report site goes after cybersquatters. Central Source, LLC has filed an in rem lawsuit (pdf) against 108 domain

Researchers sent requests under privacy laws to website owners. It is canceling the study after complaints. Researchers at Princeton University and

Domains ending in something other than .com sold briskly in 2021. .Com is king. It's a fair statement. .Com is the largest namespace with nearly

Hackers stole data from Epik this year, connecting the dots between extremist websites and their owners. This year, the domain name industry

A surge in demand for domains with meta in them. "All the good domains are taken." It's a common thing you hear about domain names. I've

The Complainant appears to have provided the wrong acquisition date, but the Respondent's dates don't add up, either. Regal Games, LLC was found

Many domain investors acquired NFTs this year, which likely impacted the domain aftermarket. 2021 was the year of the non-fungible token (NFT). NFTs

Scooter company upgrades its domain, which will allow it to expand beyond scooters. Voi upgraded its domain, which will allow it to expand beyond
https://laptrinhx.com/news/tag/domainnamewire-com/p/2/
Agenda

See also: IRC log

Draft agenda:

1. Approval of last telecon minutes from 10 April for publication, see [a]
2. Primer on Delivery Context Ontology and its relationship to client and server-side APIs, see [b], plus thanks to Jose for his work as editor
3. Personalization (Rich), see [c]
4. RWC Charter and work item on location API, see [d]
5. Face to face planning based upon questionnaire results, see [e]
6. Review of open actions, see [f]
7. Any other business

a. b. c. d. e. f.

Any objections to publishing last week's minutes?

<Rotan> none here

No objections, so Resolution: publish minutes of 10 April 2008.

Thanks to Jose for his work on editing. We are now featured on the W3C home page.

<Rotan> Will primer address management of the ontology as it grows? I have a wee problem with an issue mentioned in 1.1.1 that might cause trouble managing a distributed ontology process in the future.

The next step is a primer explaining the ontology and relating it to client and server-side APIs.

<Rotan> Requirement to have a "unique value" for properties.
<Rotan> How would uniqueness be guaranteed if different groups were to take responsibility for different domains of the ontology?

Sure, Jose says he wants to modularize the ontology for the next draft.

<Rotan> Exactly.
<Rotan> So we should consider management of that process.
<Rotan> Primer should explain the ontology, its role, and its future direction.
<Rotan> In DD we don't have that problem because vocabularies will be in their own namespace.
<Rotan> Permitted for properties in different vocabs to overlap.
<Rotan> But eventually all vocabs will reference the "ontology".
<Rotan> Even if that ontology is created in a distributed manner.

Jose agrees with the aims of the primer.

Jose: the primer is needed to explain the role of the ontology and the relation to the core vocabulary published by the DDWG.

<Rotan> +1 to Jose

Dave asks if Jose is willing to act as editor for the Primer.

<Rotan> Could primer use DDWG vocab as an example, showing mapping of DDRCV properties to items in UWA Ontology?

Jose: yes, but I am looking for someone else to help, perhaps Rotan?

<Rotan> I can help, yes.
<Rotan> Jose take editorship lead.

Dave asks what kind of timescale we can expect for a first editor's draft and what input Jose and Rotan are looking for.

<Rotan> Wondering how much/little about ontology concepts would need to be explained, and how much we could merely reference to external sources of expertise.

<scribe> ACTION: Jose to work with Rotan on preparing a Primer for the ontology [recorded in]

<trackbot-ng> Created ACTION-89 - Work with Rotan on preparing a Primer for the ontology [on Jose Manuel Cantera Fonseca - due 2008-04-24].

Dave invites Rich to give a status report, see

Rich met with ISO, IMS, EU for All, and Fluid.

Dave: welcomes Carlos and invites him to give a few words of introduction (Carlos Velasco from Fraunhofer Institute for Applied Information Technology)

Dave asks Rich what he sees as the next steps for work on personalization.

Dave confirms that the current charter covers user preferences and adaptation as part of work on the delivery context.

Rich says he is currently very busy as editor on 3 documents and will be able to get back to us with concrete next steps in May.

Dave asks Carlos if he has any suggestions for things we could be doing.

Carlos: need to synchronise with what ISO are doing.

<Rich> LIP = Learner Information Package
<Rich> also known as personal preferences as part of Access For All standards
<Rich> ISO has their version of these specifications in SC36
<Rich> You pick resources to see if they are adaptable or whether there are alternative resources. It is not just passing preferences, it is ensuring that on the server we can do the appropriate selection/adaptation based upon existence of the associated metadata.

Rich is interested in what has been done or is planned for rich descriptions of applications as opposed to devices, since both are needed for effective personalization.

Dave notes that this is related to the plans for work on model-based user interfaces and a W3C Incubator Group he is trying to set up with Charlie Wiecha, Jose and Fabio Paterno. The idea is to look at use cases, requirements and practical solutions, based upon many years of research on model-based UI design.

<carlosV> For interface personalization, we may take a look at ANSI/V2. I think it is also transferred to ISO

Rich asks about the liaison with the OMA.

<Rich> what is his name?
<Rotan> Bennett Marks

Dave notes that Bennett Marks acts as the OMA liaison to the W3C HCG, see the HCG archive, see

Dave introduces the activity proposal and explains that it includes a work item for a location API stimulated by a submission from Google.

Jose: unconvinced of the need for a specific API for location and that the need could be addressed by a generic means to expose location through the ontology.

Dave thinks that there are a number of complications. Here is a link to my slides for the talk I am giving on geolocation in the mobile web from the W3C track at WWW2008

<Rotan> Must also acknowledge comment from Art that some people don't want a dependency on DCCI.
<Rotan> Perhaps we could bless some work on a location API that would be DCCI-friendly but not depend on it?

Dave quickly runs through some of the points in his slides. Perhaps we need to enrich the ontology to cover some of the additional considerations.

In respect to Art's comments, this is one reason why we need the Primer for the ontology to explain the relationship between the ontology and DCCI and other possible APIs.

<Rotan> If we tell Art that the primer would answer the issues, we'll need to give an aggressive timeline for delivery of the Primer.

Dave thinks we need to be aware of demonstrating a real market need for any work we do.

<Rotan> Have Google evidence for such a need?

We shall see, but their Google Maps for mobile devices seems relevant. We also need to look at the work of the IETF GeoPriv WG.

<Rotan> Just because there's a Google Labs gadget out there doesn't mean there's evidence of demand, though it would seem natural to make such an assumption. Hard figures would be more convincing.

The same applies to the DCCI and the DC Ontology, no?

<Rotan> For the ontology, requests from the CT community and OMA suggest there is a demand.
<Rotan> Not so sure about the market stats for DCCI though.
<Rotan> Would be interested in knowing how many people have downloaded/installed the Google app. How many projects have integrated it into a SatNav device.

It seems that we should continue work on the DC Ontology, and show that it provides a superior solution, and see how the market responds.

<Rotan> Nokia might have info on market interest in DCCI (or similar, since they've expanded on it a bit).
<Rotan> Let's define some criteria for deciding that the ontology has been taken up. For example, that it is actually used in commercial/OS solutions, which have referenceable customers.

We need to show the benefits of the DCCI and DC Ontology, which is something we have been a bit slow to do.

<Rotan> That's what I call a convincing piece of evidence.
<Rotan> Without such proof of utility, we could rightly be accused of creating technology for the sake of creating technology.

Indeed, which is why we need to address this issue soon.

Dave encourages people who have yet to do so to fill out the questionnaire. You have until the end of Friday.

We have several volunteers for hosting and the next step is to fix the dates and select the host.

<carlosV> We are happy to host, as I said in the questionnaire. Shall I send an email?

Thanks for the offer, I think the next step is for me to post a summary of the results and we can then discuss this via email on the WG list.

Dave will be in Beijing next week for the AC meeting and WWW2008 conference and won't be able to chair. The next call will be on 1 May 2008.

scribe: end of meeting ...
http://www.w3.org/2008/04/17-uwawg-minutes.html
crawl-002
refinedweb
1,433
70.33
Odd OpenGL-related Errors attempting to run on phone

I have a Nokia N8 with Symbian that I'm trying to run a QGLWidget on. I'm gonna preface this by saying that in the Simulator, there are no problems whatsoever. But when I switch to trying to use a Symbian device, I get the following types of errors:

[...]\glwidget.cpp:294: error: 'GL_MODELVIEW' was not declared in this scope
[...]\glwidget.cpp:294: error: 'glMatrixMode' was not declared in this scope
[...]\glwidget.cpp:295: error: 'glLoadIdentity' was not declared in this scope
[...]\glwidget.cpp:296: error: 'GL_PROJECTION' was not declared in this scope

Again, compiling AND running is perfectly fine in the Qt Simulator, where I'm even able to play the game perfectly fine. In my pro file, I have:

QT += opengl

In my GLWidget.cpp, I originally had:

#include <QtOpenGL>

But removing it didn't change any of the errors. Has anyone else encountered this that could point me in the right direction? I can't understand why none of these errors are appearing in the simulator build if they really are problems. Thanks in advance.

Okay, I figured it out. This was my first time using OpenGL ES and I didn't realize how different they were.

Yep, I recently took the leap myself, having worked with OpenGL the previous time in around 1999. OpenGL ES 2.0 was rather.. different :P (but so much better since you can now control everything!)

- m

Reptile wrote: "Okay, I figured it out. This was my first time using OpenGL ES and I didn't realize how different they were."

Hello, I'm having the exact same problem. How did you fix it?

imleet wrote: "Hello, I'm having the exact same problem. How did you fix it?"

Basically, in OpenGL ES you can't use 90% of the functions you use in regular OpenGL (going the other way is easy, though: OpenGL can process OpenGL ES code just fine). What you need to do is convert all your rendering code to use shaders instead of what you're currently using, and get rid of the helper functions that control the camera, matrices, etc., since you'll be doing all that work yourself and passing it in as a variable to the shaders.

Thanks, but this is really bad news for me. If I have to use shaders I'll have to rewrite a lot of code that is already done using OpenGL ES 1.1 (which is supported on Android, iPhone, Bada..) to 2.0 =(

Haha, I totally understand. The good news is that once you familiarize yourself with the programmable pipeline you'll never look back!
https://forum.qt.io/topic/12528/odd-opengl-related-errors-attempting-to-run-on-phone/2
(An IPython notebook with math and code is available on GitHub.)

Today, we will learn another dimensionality reduction method called ICA. ICA is a linear dimension reduction method which transforms the dataset into columns of independent components. Blind Source Separation and the "cocktail party problem" are other names for it. ICA is an important tool in neuroimaging, fMRI, and EEG analysis that helps in separating normal signals from abnormal ones.

So, what exactly is ICA? ICA stands for Independent Components Analysis. It assumes that each sample of data is a mixture of independent components, and it aims to find these independent components. At the heart of ICA is "independence", so we should try to understand that first. What does independence mean in the context of ICA? When can we safely say that two variables are independent? How is it different from correlation? And lastly, how do we measure the degree of independence?

Suppose x and y are two random variables whose distribution functions are given by Px and Py respectively. If we receive some information about x and that doesn't change whatever knowledge we have about y, then we can safely say that x and y are independent variables. Now you might say, "Hang on, this is the absence of correlation you are talking about." You are right, but only partly. Correlation is not the only means of measuring the dependence between two variables; what correlation captures is linear dependence. If two variables are independent, then both their linear and non-linear dependence are zero. The absence of linear dependence, however, does not imply independence, since there might be a non-linear relationship.

Let's take a small example to understand this. Suppose x = (-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5) and y = x², which gives us y = (25, 16, 9, 4, 1, 0, 1, 4, 9, 16, 25). Now, calculate the correlation between these two variables.
import numpy as np

x = np.array([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5])
y = np.array([25, 16, 9, 4, 1, 0, 1, 4, 9, 16, 25])
np.correlate(x, y)   # array([0])

As you see, the correlation in the above case is 0, even though the two variables have a non-linear relationship. Thus, independence between two variables implies zero correlation, but the reverse is not true.

Let's come back to our topic for the day, ICA. As said earlier, ICA tries to find the independent sources that the data is made of. We will start with a classic example to explain ICA and its working principle. In the image shown above, Alice and Bob are both speaking at the same time. The two mics receive inputs S1 and S2 from Alice and Bob respectively. ICA assumes that the mixing process is linear, i.e. it can be represented as a matrix multiplication. Each mic mixes S1 and S2 according to its location and settings, which is given by a matrix A. The matrix operation produces the vector M as output. Now, you wish to separate S1 and S2 from M1 and M2. This is referred to as the cocktail party problem, or blind source separation. The solution to this problem is trivial if the matrix A is known: a simple inversion of A followed by multiplication with M gives the answer. But in a real-world scenario, the matrix A is often unknown. The only information we have is the output of the mixing process.

The ICA approach to this problem is based on three assumptions:

- The mixing process is linear.
- All source signals are independent of each other.
- All source signals have non-gaussian distributions.

We have already talked about the first two assumptions. Let's talk a bit about the third assumption ICA makes: the non-gaussianity of the source signals. The basis of this assumption comes from the Central Limit Theorem. According to the Central Limit Theorem, the sum of independent random variables is more Gaussian than the independent variables themselves. So to infer the source variables, we have to move away from gaussianity.
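The pull toward gaussianity that the Central Limit Theorem describes is easy to check numerically. Below is a small sketch (plain NumPy, my own illustration rather than code from the post) using excess kurtosis as a crude non-gaussianity measure: it is 0 for a Gaussian, about -1.2 for a uniform variable, and averaging independent uniforms drives it toward 0.

```python
import numpy as np

rng = np.random.RandomState(0)

def excess_kurtosis(x):
    # E[(x - mean)^4] / var^2 - 3; zero for a Gaussian
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2 - 3

n = 200000
u = rng.uniform(-1, 1, size=(n, 12))

single = excess_kurtosis(u[:, 0])        # one uniform: about -1.2
summed = excess_kurtosis(u.sum(axis=1))  # sum of 12 uniforms: close to 0

print(single, summed)
```

The exact values wander a little with the sample, but the direction is the point: mixing independent signals makes each observed channel look more Gaussian, which is why ICA hunts for maximally non-Gaussian projections.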
In the case of the Gaussian distribution, uncorrelated Gaussian variables are also independent; this is a unique property of the Gaussian distribution. Let's take a simple example to understand this concept. First, create four datasets: two from a Gaussian distribution and two from a uniform distribution.

np.random.seed(100)
U1 = np.random.uniform(-1, 1, 1000)
U2 = np.random.uniform(-1, 1, 1000)
G1 = np.random.randn(1000)
G2 = np.random.randn(1000)

%matplotlib inline
# let's plot our signals
from matplotlib import pyplot as plt

fig = plt.figure()
ax1 = fig.add_subplot(121, aspect="equal")
ax1.scatter(U1, U2, marker=".")
ax1.set_title("Uniform")
ax2 = fig.add_subplot(122, aspect="equal")
ax2.scatter(G1, G2, marker=".")
ax2.set_title("Gaussian")
plt.show()

Now, mix U1 & U2 and G1 & G2 to create the outputs U_mix and G_mix.

# now comes the mixing part. we can choose a random matrix for the mixing
A = np.array([[1, 0],
              [1, 2]])
U_source = np.array([U1, U2])
U_mix = U_source.T.dot(A)
G_source = np.array([G1, G2])
G_mix = G_source.T.dot(A)

# plot of our dataset
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.set_title("Mixed Uniform")
ax1.scatter(U_mix[:, 0], U_mix[:, 1], marker=".")
ax2 = fig.add_subplot(122)
ax2.set_title("Mixed Gaussian")
ax2.scatter(G_mix[:, 0], G_mix[:, 1], marker=".")
plt.show()

U_mix and G_mix are what we have in a real-world scenario. Remove the linear dependence from both mixtures.
# PCA and whitening the dataset
from sklearn.decomposition import PCA

U_pca = PCA(whiten=True).fit_transform(U_mix)
G_pca = PCA(whiten=True).fit_transform(G_mix)

# let's plot the uncorrelated columns from the datasets
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.set_title("PCA Uniform")
ax1.scatter(U_pca[:, 0], U_pca[:, 1], marker=".")
ax2 = fig.add_subplot(122)
ax2.set_title("PCA Gaussian")
ax2.scatter(G_pca[:, 0], G_pca[:, 1], marker=".")

Notice the differences between the uncorrelated plots (PCA Uniform, PCA Gaussian) and the source plots (Uniform, Gaussian). In the Gaussian case they look alike, while the uncorrelated Uniform data still needs a rotation to get there. By removing correlation in the Gaussian case, we have achieved independence between the variables. If the source variables are Gaussian, ICA is not required and PCA is sufficient.

How do we measure and remove the non-linear dependence between variables? Non-linear dependence between variables can be measured by the mutual information among them: the higher the mutual information, the higher the dependence.

mutual information = sum of entropies of the marginal distributions - entropy of the joint distribution

Entropy is a measure of uncertainty in a distribution. The entropy of a variable x is given by H(x) = -sum(P(x)*log(P(x))) over every possible value of x. The Gaussian distribution has the highest entropy (for a given variance). A term closely related to entropy is negentropy, which is formulated as negentropy(x) = H(x_gaussian) - H(x). Here x_gaussian is a Gaussian random vector with the same covariance as x. Thus, negentropy is always non-negative, and it is equal to zero only if x is a Gaussian random variable. Also,

mutual information(y1, y2) = constant - sum(negentropy(yi))

Calculation of negentropy and mutual information requires knowledge of the entropy, and entropy calculation requires the probability distribution function, which is not known. We can, however, approximate negentropy with some suitable functions. A few popular examples are tanh(a*y), -exp(-y²/2) and y*exp(-y²/2).
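One of these approximations can be tried out directly. The sketch below (plain NumPy, my own illustration rather than code from the post) uses G(y) = -exp(-y²/2) and scores non-gaussianity as (E[G(y)] - E[G(ν)])², where ν is a standard Gaussian; for this particular G, E[G(ν)] is exactly -1/√2.

```python
import numpy as np

rng = np.random.RandomState(0)

def approx_negentropy(y):
    # J(y) ~ (E[G(y)] - E[G(nu)])^2 with G(y) = -exp(-y^2/2)
    # and nu a standard Gaussian; E[G(nu)] = -1/sqrt(2) in closed form.
    y = (y - y.mean()) / y.std()   # G assumes zero mean, unit variance
    e_y = -np.exp(-y**2 / 2.0).mean()
    e_ref = -1.0 / np.sqrt(2.0)
    return (e_y - e_ref) ** 2

n = 100000
j_gauss = approx_negentropy(rng.randn(n))            # essentially 0
j_unif = approx_negentropy(rng.uniform(-1, 1, n))    # clearly positive
j_lapl = approx_negentropy(rng.laplace(size=n))      # clearly positive

print(j_gauss, j_unif, j_lapl)
```

The Gaussian sample scores orders of magnitude lower than either non-Gaussian sample (sub-gaussian uniform, super-gaussian Laplace), which is exactly the quantity the ICA iteration tries to maximise.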
Pseudocode for ICA (G and g are the approximating function and its derivative respectively; X is the dataset):

- Initialize W
- X = PCA(X)
- While W changes:
  W = average(X*G(WX)) - average(g(WX))*W
  W = orthogonalize(W)
- Return S = WX

Orthogonalization is a process that makes the rows of a matrix orthogonal.

How many independent components should be selected? Which independent components should be selected? ICA outputs a source matrix whose columns are independent sources. It never tells us whether a component is significant or irrelevant. If the number of components is small, checking every component is advisable. For a large number of components, the selection should be done at the PCA stage (step 2). If you are unfamiliar with PCA, check out post 1 in this series.

Let's implement this algorithm in PySpark. We will create a few signals and then mix them up to get a dataset suitable for ICA analysis.

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

np.random.seed(0)
num_rows = 3000
t = np.linspace(0, 10, num_rows)

# create source signals
s1 = np.sin(3*t)              # a sine wave
s2 = np.sign(np.cos(6*t))     # a square wave
s3 = signal.sawtooth(2*t)     # a sawtooth wave

# combine the single sources to create a numpy matrix
S = np.c_[s1, s2, s3]

# add a bit of random noise to each value
S += 0.2 * np.random.normal(size=S.shape)

# create a mixing matrix A
A = np.array([[1.0, 1.5, 0.5],
              [2.5, 1.0, 2.0],
              [1.0, 0.5, 4.0]])
X = S.dot(A.T)

# plot the source and mixed signals
plt.figure(figsize=(26, 12))
colors = ['red', 'blue', 'orange']
plt.subplot(2, 1, 1)
plt.title('True Sources')
for color, series in zip(colors, S.T):
    plt.plot(series, color)
plt.subplot(2, 1, 2)
plt.title('Observations (mixed signals)')
for color, series in zip(colors, X.T):
    plt.plot(series, color)

Code for PCA and whitening the dataset.
from pyspark.mllib.linalg.distributed import IndexedRowMatrix, IndexedRow, BlockMatrix
from pyspark.mllib.feature import StandardScaler
from pyspark.mllib.linalg import Vectors, DenseMatrix, Matrix

# create the standardizer model for centring the dataset
X_rdd = sc.parallelize(X).map(lambda x: Vectors.dense(x))
scaler = StandardScaler(withMean=True, withStd=False).fit(X_rdd)
X_sc = scaler.transform(X_rdd)

# create the IndexedRowMatrix from the rdd
X_rm = IndexedRowMatrix(X_sc.zipWithIndex().map(lambda x: (x[1], x[0])))

# compute the svd factorization of the matrix. The first argument is the
# number of columns, the second a boolean stating whether to compute U or not.
svd_o = X_rm.computeSVD(X_rm.numCols(), True)

# svd_o.V is of shape n * k, not k * n (as in sklearn)
P_comps = svd_o.V.toArray().copy()
num_rows = X_rm.numRows()

# U is whitened and projected onto the principal components subspace.
S = svd_o.s.toArray()
eig_vals = S**2

# change n_comp to 3 for this tutorial
# n_comp = np.argmax(np.cumsum(eig_vals)/eig_vals.sum() > 0.95) + 1
n_comp = 3
U = svd_o.U.rows.map(lambda x: (x.index, (np.sqrt(num_rows-1)*x.vector).tolist()[0:n_comp]))

# K is our transformation matrix for projecting onto the PCs' subspace
K = (P_comps / S).T[:n_comp]

Now, the code for calculation of the independent components.
import pyspark.sql.functions as f
import pyspark.sql.types as t

df = spark.createDataFrame(U).toDF("id", "features")

# Approximating function g(y) = y*exp(-y**2/2) and its derivative
def g(X):
    x = np.array(X)
    return (x * np.exp(-x**2/2.0))

def gprime(Y):
    y = np.array(Y)
    return ((1 - y**2) * np.exp(-y**2/2.0))

# function for calculating step 2 of the ICA algorithm
def calc(df):
    """Calculate the approximating function and its derivative."""
    def foo(x, y):
        y_arr = np.array(y)
        gy = g(y_arr)
        gp = gprime(y_arr)
        x_arr = np.array(x)
        res = np.outer(gy, x_arr)
        return ([res.flatten().tolist(), gp.tolist()])

    udf_foo = f.udf(foo, t.ArrayType(t.ArrayType(t.DoubleType())))
    df2 = df.withColumn("vals", udf_foo("features", "Y"))
    df2 = df2.select("id",
                     f.col("vals").getItem(0).alias("gy"),
                     f.col("vals").getItem(1).alias("gy_"))
    GY_ = np.array(df2.agg(f.array([f.sum(f.col("gy")[i])
                                    for i in range(n_comp**2)]))
                      .collect()[0][0]).reshape(n_comp, n_comp) / num_rows
    GY_AVG_V = np.array(df2.agg(f.array([f.avg(f.col("gy_")[i])
                                         for i in range(n_comp)]))
                           .collect()[0][0]).reshape(n_comp, 1) * V
    return (GY_, GY_AVG_V)

np.random.seed(101)

# Initialization
V = np.random.rand(n_comp, n_comp)

# symmetric decorrelation function
def sym_decorrelation(V):
    U, D, VT = np.linalg.svd(V)
    Y = np.dot(np.dot(U, np.diag(1.0/D)), U.T)
    return np.dot(Y, V)

numIters = 10
V = sym_decorrelation(V)
tol = 1e-3
V_bc = sc.broadcast(V)

for i in range(numIters):
    # Y = V*X
    udf_mult = f.udf(lambda x: V_bc.value.dot(np.array(x)).tolist(),
                     t.ArrayType(t.DoubleType()))
    df = df.withColumn("Y", udf_mult("features"))
    gy_x_mean, g_y_mean_V = calc(df)
    V_new = gy_x_mean - g_y_mean_V
    V_new = sym_decorrelation(V_new)
    # condition for convergence
    lim = max(abs(abs(np.diag(V_new.dot(V.T))) - 1))
    V = V_new
    # V needs to be broadcast after every change
    V_bc = sc.broadcast(V)
    print("i = ", i, " lim = ", lim)
    if lim < tol:
        break
    elif i == numIters - 1:
        print("Lower the tolerance or increase the number of iterations")

# calculate the unmixing matrix for the dataset
W = V.dot(K)

# now multiply U with V to get the source signals
S_ = df.withColumn("Y", udf_mult("features"))

Plot the result.

# collect the recovered signals locally (ordered by row id) for plotting
S_local = np.array(S_.orderBy("id").select("Y").rdd.map(lambda r: r[0]).collect())

plt.title('Recovered source signals')
for color, series in zip(colors, S_local.T):
    plt.plot(series, color)

Drawbacks of ICA:

- ICA cannot uncover non-linear relationships in the dataset.
- ICA does not tell us anything about the order of the independent components, or how many of them are relevant.

Conclusion:

In this post, we learned the practical aspects of Independent Component Analysis. We touched on a few topics important for understanding ICA, such as gaussianity and independence. Afterwards, the ICA algorithm was implemented in PySpark and used on a toy dataset. If you wish to learn more about ICA and its applications, try the ICA paper on fMRI and EEG data. The next article in this series will be on Multi-Dimensional Scaling.
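As a reference for the Spark implementation above, here is the same symmetric FastICA iteration condensed into plain NumPy. Function names and structure are mine, not from a library; it follows the pseudocode (whiten, update, symmetric decorrelation) with g(y) = y*exp(-y²/2).

```python
import numpy as np

def g(y):
    return y * np.exp(-y**2 / 2.0)

def g_prime(y):
    return (1 - y**2) * np.exp(-y**2 / 2.0)

def sym_decorrelation(W):
    # W <- (W W^T)^(-1/2) W, keeps the rows of W orthonormal
    U, D, _ = np.linalg.svd(W)
    return U.dot(np.diag(1.0 / D)).dot(U.T).dot(W)

def whiten(X):
    # centre, then project onto unit-variance principal components
    X = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U * np.sqrt(X.shape[0] - 1)

def fast_ica(X, max_iter=200, tol=1e-4, seed=0):
    n, k = X.shape
    Z = whiten(X)                           # n x k, whitened data
    rng = np.random.RandomState(seed)
    W = sym_decorrelation(rng.rand(k, k))
    for _ in range(max_iter):
        Y = Z.dot(W.T)                      # current source estimates
        # W+ = E[g(Y) z^T] - diag(E[g'(Y)]) W
        W_new = g(Y).T.dot(Z) / n - np.diag(g_prime(Y).mean(axis=0)).dot(W)
        W_new = sym_decorrelation(W_new)
        lim = np.max(np.abs(np.abs(np.diag(W_new.dot(W.T))) - 1))
        W = W_new
        if lim < tol:
            break
    return Z.dot(W.T)                       # recovered sources, n x k
```

On toy data like the sine/square mixture above, `fast_ica(X)` recovers the sources up to permutation, sign, and scale.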
https://blog.paperspace.com/dimension-reduction-with-independent-components-analysis/
Create subdomain
Discussion in 'ASP .Net' started by Roshawn.

Similar threads:
- ... - Views: 357 - Michael Horton - Apr 20, 2004
- namespace and subdomain - Jun 3, 2004, in forum: ASP .Net - Replies: 4 - Views: 388
- Implications of subdomain vs. subfolder for web services - Bill Borg, Oct 13, 2004, in forum: ASP .Net - Replies: 2 - Views: 501 - Bill Borg - Oct 14, 2004
- How to create a Subdomain - Tina, Nov 9, 2007, in forum: ASP .Net - Replies: 12 - Views: 2,148 - darrel - Nov 12, 2007
- Across-subdomain authentication failed on one subdomain while working on other ones - saify, Sep 28, 2009, in forum: ASP .Net Security - Replies: 0 - Views: 912 - saify - Sep 28, 2009
http://www.thecodingforums.com/threads/create-subdomain.382100/
CPSC 124, Winter 1998
Second Test

This is the second test given in CPSC 124: Introductory Programming, Winter 1998. See the information page for that course for more information. The answers given here are sample answers that would receive full credit. However, they are not necessarily the only correct answers.

Question 1: What is a constructor? What is the purpose of a constructor in a class?

Answer: (Quoted from the solutions to Quiz 4.)

Question 2: Explain carefully everything that the computer does when it executes the declaration statement:

Color hue = new Color(180,180,255);

Answer: In this statement, a new object is created and is assigned to a newly created variable. To do this, the computer performs the following four steps:

- Space is allocated on the heap to hold an object belonging to the class Color.
- A constructor from the Color class is called to initialize the object, and the parameters 180, 180, 255 are passed to this constructor.
- Space is allocated for a new variable named "hue".
- A pointer to the new object is stored in the variable hue.

Question 3: Some of the applets that you have worked with in lab use an off-screen Image to do double buffering. Explain this. (What are off-screen Images? How are they used? Why are they important? What does this have to do with animation?)

Answer: An off-screen Image is a segment of the computer's memory that can be used as a drawing surface. What is drawn to the off-screen Image is not visible on the screen, but the Image can be quickly copied onto the screen with a drawImage() command. It is important to use an off-screen Image in a situation where the process of drawing the image should not be seen by the user. This is true, for example, in animation. Each frame of the animation can be composed in an off-screen Image and then copied to the screen when it is complete. The alternative would be to erase the screen and draw the next frame directly on the screen. This causes unacceptable flickering of the image.
Question 4: Variables in a class can be either instance variables or static variables. When you declare a variable, you have to decide whether to make it static or instance. Explain how you can decide, and give several examples. Answer: When a class declares an instance variable, every object of that class gets its own copy of the instance variable. This means that the variable can have different values in different objects. For example, in a Rectangle class, where an object represents a rectangle drawn on the screen, instance variables could be used to store the position, size, and color of the rectangle. Then every Rectangle would have its own position, size, and color. Suppose that the color were stored in a static variable instead of in an instance variable. A static variable belongs to the class rather than to the object created with that class. This means that there would be only one color that applies to all Rectangle objects. Changing the value of that static variable would, presumably, change the color of every Rectangle that exists! On the other hand, there are some cases where static variables are appropriate. For one thing, static methods can only use static variables, not instance variables. Static variables can also be used to store data that pertains to the whole class. For example, suppose you want to keep track of the number of Rectangle objects that exist. This value can be stored in a static variable in the Rectangle class. It wouldn't make sense to use an instance variable since there is only one value, not one value per object. Question 5: a) In your second programming assignment, you worked with roman numbers. Suppose that you wanted to represent roman numbers as objects. How would you design a class, RomanNumber, so that each object belonging to the class represents a roman number? List the instance variables, methods, and constructors that you would include in the class, and state what each one is for. 
For the methods and constructors, specify the parameters and return type as well as the name; however, do not give full definitions. (The class you describe must work with part b) of this problem.)

b) Assume that the RomanNumber class from part a) is already written. Write a complete main program, using that class, that will read two roman numbers from the user, add them, and print the result. The class you described in part a) must include any methods that you need to write this program.

Answer: (A Roman number has a value. The class needs an instance variable to represent that value. It is convenient to be able to construct Roman numbers from strings or from ints. Instance methods provide for the conversions in the opposite direction: from Roman number to int and from Roman number to String. This gives a pretty minimal class, but it's sufficient for writing the program in part b.)

a)

class RomanNumber {

   int value;  // The int with the same numerical value as
               // this roman number.  (Could use a String
               // representation instead, but this is easier.)

   RomanNumber(String rn) { ... }
         // Assume rn is a string such as "MCXII" that represents
         // a roman number.  Construct that roman number.

   RomanNumber(int N) { ... }
         // Construct a roman number with value N.

   int getIntValue() { ... }
         // Return the int with the same value as this roman number.

   String toString() { ... }
         // Return a string representation of this Roman number.
}

b)

public class AddRoman {

   public static void main(String[] args) {
      Console console = new Console();
      console.put( "Enter a Roman number: " );
      RomanNumber rn1 = new RomanNumber( console.getln() );
      console.put( "Enter another Roman number: " );
      RomanNumber rn2 = new RomanNumber( console.getln() );
      int sum = rn1.getIntValue() + rn2.getIntValue();
      RomanNumber romanSum = new RomanNumber( sum );
      console.putln( "The sum is: " + romanSum.toString() );
   }

}

Question 6: Describe the picture that is produced by the following paint method.

public void paint(Graphics g) {
   int redLevel = 0;
   for (int x = 0; x < 256; x++) {
      g.setColor( new Color(redLevel, 0, 0) );
      g.drawLine( x, 0, x, 255 );
      redLevel++;
   }
}

Answer: Here is what the image looks like (except that you need a full-color monitor to display it properly):

This image is a square made up of 256 vertical lines. Each line is 256 pixels long. Each line is a different color. Moving from left to right, the amount of red in the color increases. At the left, the lines are pure black; at the right, the lines are pure red. (There is no green or blue in any of the colors.)

Question 7: Write a subroutine that finds the sum of all the numbers in an array of int's. The subroutine should be named arraySum. It should have one parameter of type int[], and it should return a value of type int. The value that is returned should be obtained by adding up all the numbers in the array.

Answer:

static int arraySum(int[] A) {
      // Returns the sum of the numbers in A.
   int sum = 0;
   for (int i = 0; i < A.length; i++)
      sum += A[i];
   return sum;
}

Question 8: ... more than 20 years. (Assume that there is a console variable to use for output.)

Answer:

   ...
      console.putln(employeeData[i].firstName + " " +
                       employeeData[i].lastName + ": " +
                       employeeData[i].hourlyWage);
}

Question 9: Java uses garbage collection. What is meant here by garbage collection?

Answer: When an object on the heap is no longer pointed to by any variable, then that object can play no further role in the program.
It is "garbage." Java keeps track of references to objects, so that it can recognize when they are no longer in use. When that happens, they can be garbage collected, and the memory that they occupy can be reused.

Question 10: A checkers board is an 8-by-8 grid of squares that contains empty spaces, red pieces, and black pieces. (For this problem, ignore the possibility of "kings.") Assume that the contents of a checkers board are stored in an array

int[][] board = new int[8][8];

The value of board[row][col] represents the square in row number row and column number col. A value of 1 represents a red piece and a value of 2 represents a black piece. An empty square is represented by a value of zero.

Write a code segment that will determine whether the board contains more red pieces, more black pieces, or the same number of red pieces and black pieces. You can print out your answer on a console.

Answer:

int redCount = 0;    // number of red pieces
int blackCount = 0;  // number of black pieces

for (int row = 0; row < 8; row++)
   for (int col = 0; col < 8; col++) {
      if (board[row][col] == 1)
         redCount++;
      else if (board[row][col] == 2)
         blackCount++;
   }

if (redCount > blackCount)
   console.putln("There are more red pieces.");
else if (redCount < blackCount)
   console.putln("There are more black pieces.");
else
   console.putln("The numbers of red and black pieces are equal.");

Question 11: The distinguishing features of object-oriented programming are inheritance and polymorphism. Write an essay discussing these two terms. Explain what they mean and how they are related. Discuss how they can be useful in designing and writing programs. Include some examples in your answer.

Answer: Inheritance allows a programmer to reuse and build on previous work. A new class can be created that "extends" an existing class. The new class inherits all the variables and methods of the existing class and can then add to and modify the stuff that it inherits.
So the work that was done creating the original class does not have to be redone. (In fact, the original code does not have to be modified -- which can be a difficult and error-prone task. The original class remains unchanged. The changes are all isolated in the new class.) Polymorphism can arise whenever one or more classes inherit from a base class. If some method in the base class is overridden in a subclass, then that method is "polymorphic." If doSomething() is a polymorphic method, then the result of calling anObject.doSomething() will depend on the actual class of the object anObject. For example, suppose that a Vehicle class has subclasses named Car, Train, and Boat. Suppose that go() is a polymorphic method that is defined in Vehicle and overridden in the subclasses. If myVehicle is a variable of type Vehicle, then myVehicle can refer to an object belonging to any of the classes Vehicle, Car, Train, and Boat. The method called by "myVehicle.go()" is the one that is appropriate to the object that myVehicle points to. In addition to making reuse of existing work easier, polymorphism and inheritance can be useful when a program is being designed. Suppose that several classes have been identified and that it is recognized that those classes have certain properties or behaviors in common. In that case, the common properties and behaviors can be encoded in a common base class, so that they only have to be written once. Polymorphism allows each subclass to display somewhat different versions of common behaviors. David Eck 2 March 1998
http://math.hws.edu/eck/cs124/javanotes1/tests98/test2.html
mechanize — Forms

This documentation is in need of reorganisation!

This page is the old ClientForm documentation. ClientForm is now part of mechanize, but the documentation hasn't been fully updated to reflect that: what's here is correct, but not well-integrated with the rest of the documentation.

This page deals with HTML form handling: parsing HTML forms, filling them in and returning the completed forms to the server. See the front page for how to obtain form objects from a mechanize.Browser.

Simple working example (examples/forms/simple.py in the source distribution):

import sys

from mechanize import ParseResponse, urlopen, urljoin

if len(sys.argv) == 1:
    uri = ""
else:
    uri = sys.argv[1]

response = urlopen(urljoin(uri, "mechanize/example.html"))
forms = ParseResponse(response, backwards_compat=False)
form = forms[0]
print form
form["comments"] = "Thanks, Gisle"

# form.click() returns a mechanize.Request object
# (see HTMLForm.click.__doc__ if you want to use only the forms support, and
# not the rest of mechanize)
print urlopen(form.click()).read()

A more complicated working example (from examples/forms/example.py in the source distribution):

import sys

import mechanize

if len(sys.argv) == 1:
    uri = ""
else:
    uri = sys.argv[1]

request = mechanize.Request(mechanize.urljoin(uri, "mechanize/example.html"))
response = mechanize.urlopen(request)
forms = mechanize.ParseResponse(response, backwards_compat=False)
response.close()
## f = open("example.html")
## forms = mechanize.ParseFile(f, "",
##                             backwards_compat=False)
## f.close()
form = forms[0]
print form  # very useful!

# A 'control' is a graphical HTML form widget: a text entry box, a
# dropdown 'select' list, a checkbox, etc.

# Indexing allows setting and retrieval of control values
original_text = form["comments"]  # a string, NOT a Control instance
form["comments"] = "Blah."

# Controls that represent lists (checkbox, select and radio lists) are
# ListControl instances.
# Their values are sequences of list item names.
# They come in two flavours: single- and multiple-selection:
form["favorite_cheese"] = ["brie"]  # single
form["cheeses"] = ["parmesan", "leicester", "cheddar"]  # multi
# equivalent, but more flexible:
form.set_value(["parmesan", "leicester", "cheddar"], name="cheeses")

# A couple of notes about list controls and HTML:

# 1. List controls correspond to either a single SELECT element, or
# multiple INPUT elements.  Items correspond to either OPTION or INPUT
# elements.  For example, this is a SELECT control, named "control1":

#   <select name="control1">
#     <option>foo</option>
#     <option value="1">bar</option>
#   </select>

# and this is a CHECKBOX control, named "control2":

#   <input type="checkbox" name="control2" value="foo" id="cbe1">
#   <input type="checkbox" name="control2" value="bar" id="cbe2">

# You know the latter is a single control because all the name attributes
# are the same.

# 2. Item names are the strings that go to make up the value that should
# be returned to the server.  These strings come from various different
# pieces of text in the HTML.  The HTML standard and the mechanize
# docstrings explain in detail, but playing around with an HTML file,
# ParseFile() and 'print form' is very useful to understand this!

# You can get the Control instances from inside the form...
control = form.find_control("cheeses", type="select")
print control.name, control.value, control.type
control.value = ["mascarpone", "curd"]
# ...and the Item instances from inside the Control
item = control.get("curd")
print item.name, item.selected, item.id, item.attrs
item.selected = False

# Controls may be referred to by label:
# find control with label that has a *substring* "Cheeses"
# (e.g., a label "Please select a cheese" would match).
control = form.find_control(label="select a cheese")

# You can explicitly say that you're referring to a ListControl:
# set value of "cheeses" ListControl
form.set_value(["gouda"], name="cheeses", kind="list")
# equivalent:
form.find_control(name="cheeses", kind="list").value = ["gouda"]
# the first example is also almost equivalent to:
form.set_value(["gouda"], name="cheeses", nr=0)

# find, and set the value of, the first single-selection list control
form.set_value(["spam"], kind="singlelist", nr=0)

# You can find controls with a general predicate function:
def control_has_caerphilly(control):
    for item in control.items:
        if item.name == "caerphilly":
            return True
form.find_control(kind="list", predicate=control_has_caerphilly)

# HTMLForm.controls is a list of all controls in the form
for control in form.controls:
    if control.value == "inquisition":
        sys.exit()

# Control.items is a list of all Item instances in the control
for item in form.find_control("cheeses").items:
    print item.name

# To remove items from a list control, remove it from .items:
cheeses = form.find_control("cheeses")
curd = cheeses.get("curd")
del cheeses.items[cheeses.items.index(curd)]

# To add items to a list container, instantiate an Item with its control
# and attributes:
# Note that you are responsible for getting the attributes correct here,
# and these are not quite identical to the original HTML, due to
# defaulting rules and a few special attributes (e.g. Items that represent
# OPTIONs have a special "contents" key in their .attrs dict).  In future
# there will be an explicitly supported way of using the parsing logic to
# add items and controls from HTML strings without knowing these details.
mechanize.Item(cheeses, {"contents": "mascarpone",
                         "value": "mascarpone"})

# You can specify list items by label using set/get_value_by_label() and
# the label argument of the .get() method.  Sometimes labels are easier to
# maintain than names, sometimes the other way around.
form.set_value_by_label(["Mozzarella", "Caerphilly"], "cheeses")

# Which items are present, selected, and successful?

# is the "parmesan" item of the "cheeses" control successful (selected
# and not disabled)?
print "parmesan" in form["cheeses"]
# is the "parmesan" item of the "cheeses" control selected?
print "parmesan" in [
    item.name for item in form.find_control("cheeses").items if item.selected]
# does the cheeses control have a "caerphilly" item?
print "caerphilly" in [item.name for item in form.find_control("cheeses").items]

# Sometimes one wants to set or clear individual items in a list, rather
# than setting the whole .value:
# select the item named "gorgonzola" in the first control named "cheeses"
form.find_control("cheeses").get("gorgonzola").selected = True
# You can be more specific:
# deselect "edam" in third CHECKBOX control
form.find_control(type="checkbox", nr=2).get("edam").selected = False
# deselect item labelled "Mozzarella" in control with id "chz"
form.find_control(id="chz").get(label="Mozzarella").selected = False

# Often, a single checkbox (a CHECKBOX control with a single item) is
# present.  In that case, the name of the single item isn't of much
# interest, so it's a good idea to check and uncheck the box without
# using the item name:
form.find_control("smelly").items[0].selected = True   # check
form.find_control("smelly").items[0].selected = False  # uncheck

# Items may be disabled (selecting or de-selecting a disabled item is
# not allowed):
control = form.find_control("cheeses")
print control.get("emmenthal").disabled
control.get("emmenthal").disabled = True
# enable all items in the control
control.set_all_items_disabled(False)

request2 = form.click()  # mechanize.Request object
try:
    response2 = mechanize.urlopen(request2)
except mechanize.HTTPError, response2:
    pass

print response2.geturl()
# headers
for name, value in response2.info().items():
    if name != "date":
        print "%s: %s" % (name.title(), value),

... for example, you pickle them.
Parsers

There are two parsers.

TODO: more!

See also the FAQ entries on XHTML and parsing bad HTML.

Backwards-compatibility mode

mechanize (and ClientForm 0.2) includes three minor backwards-incompatible interface changes from ClientForm version 0.1. To make upgrading from ClientForm 0.1 easier, and to allow me to stop supporting ClientForm version 0.1 sooner, there is support for operating in a backwards-compatible mode, under which code written for ClientForm 0.1 should work without modification. This is done on a per-HTMLForm basis via the .backwards_compat attribute, but for convenience the ParseResponse() and ParseFile() factory functions accept backwards_compat arguments. These backwards-compatibility features will be removed soon.

The default is to operate in backwards-compatible mode. To run with backwards-compatible mode turned OFF (strongly recommended):

from mechanize import ParseResponse, urlopen
forms = ParseResponse(urlopen(""), backwards_compat=False)
# ...

The backwards-incompatible changes are:

- Ambiguous specification of controls or items now results in AmbiguityError. If you want the old behaviour, explicitly pass nr=0 to indicate you want the first matching control or item.

- Item label matching is now done by substring, not by strict string-equality (but note leading and trailing space is always stripped). (Control label matching is always done by substring.)

- Handling of disabled list items has changed. First, note that handling of disabled list items in ClientForm 0.1 (and in ClientForm 0.2's backwards-compatibility mode!) is buggy: disabled items are successful (i.e. disabled item names are sent back to the server). As a result, there was no distinction to be made between successful items and selected items.
In ClientForm 0.2, the bug is fixed, so this is no longer the case, and it is important to note that list controls' .value attribute contains only the successful item names; items that are selected but not successful (because disabled) are not included in .value.

Second, disabled list items may no longer be deselected: AttributeError is raised in ClientForm 0.2, whereas deselection was allowed in ClientForm 0.1. The bug in ClientForm 0.1 and in ClientForm 0.2's backwards-compatibility mode will not be fixed, to preserve compatibility and to encourage people to upgrade to the new ClientForm 0.2 backwards_compat=False behaviour.

I prefer questions and comments to be sent to the mailing list rather than direct to me.

John J. Lee, April 2010.
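To make the first two changes concrete, here is a tiny toy model of the new matching rules in plain Python. This is not mechanize code — the class and function here are made up purely to illustrate AmbiguityError, the nr=0 escape hatch, and substring label matching:

```python
class AmbiguityError(Exception):
    pass

class Control(object):
    def __init__(self, name, label):
        self.name, self.label = name, label

def find_control(controls, name=None, label=None, nr=None):
    # Label matching is by substring (leading/trailing space stripped);
    # name matching is exact.
    matches = [c for c in controls
               if (name is None or c.name == name) and
                  (label is None or label.strip() in c.label)]
    if not matches:
        raise LookupError("no matching control")
    if nr is not None:
        # nr picks the nth match, as in the old ClientForm 0.1 behaviour
        return matches[nr]
    if len(matches) > 1:
        # backwards_compat=False behaviour: ambiguity is an error
        raise AmbiguityError("%d controls match" % len(matches))
    return matches[0]
```

With two controls named "cheeses", asking for name="cheeses" raises AmbiguityError, while passing nr=0 returns the first match — mirroring the first change listed above.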
http://wwwsearch.sourceforge.net/mechanize/forms.html
My current work project uses the .NET framework v4.5, and we’re developing using Visual Studio 2013. I’ve been reading about the new language features of C# 6.0 for a while, and I’m really interested in finding a way to use them.

I used Bing/Google to identify what the new language features are, read some blogs and fired up my personal copy of Visual Studio 2015 to try out a few examples. The one I was really interested in was Primary Constructors, but there was a list of new features I wanted to try:

- Primary Constructors
- Import of static type members into namespace
- Auto property initializers
- Default values for getter-only properties
- String interpolation
- nameof operator
- Dictionary initializer
- Null propagator

This isn’t a comprehensive list of the new features, they’re just the ones that I think I would use most in my day-to-day coding.

The first feature I decided to try was creating a Primary Constructor…but when I wrote the code in VS2015 and .NET 4.6, it showed the dreaded red squiggly line and didn’t compile. What went wrong? After some more research, I found that the Primary Constructor feature has been removed (at least temporarily). So those articles (for example, this one) showing me how to do this are, for now, wrong.

This made me sit back and think a bit more.

- When I look at the dropdown list of available .NET frameworks in Visual Studio 2015, there’s quite a few (at least in my system). Which one should I be using to compile C# 6.0 language features?
- And what does C# 6.0 actually mean? Should I be assuming that .NET Framework 4.6 necessarily corresponds to C# 6.0?
- Can I make Visual Studio 2013 compile code written using C# 6.0 language features?
- And where does Roslyn fit into all of this?

Some Sample Code for C# language features

I wrote a simple class which contains each of the C# 6.0 features that I listed above (except Primary Constructors, obviously). It’s a bit of a silly example, but hopefully it illustrates the point.
I’ve commented each of the features, and put some of the most relevant code in bold font.

namespace CSharp6SampleApplication
{
    using System;
    using System.Collections.Generic;
    using static System.Console;

    public class SuperCar
    {
        // Dictionary initializer
        private static readonly Dictionary<string, DateTime> _specialDates =
            new Dictionary<string, DateTime>
            {
                ["past"] = new DateTime(1985, 10, 26),
                ["current"] = new DateTime(1985, 11, 5),
                ["future"] = new DateTime(2015, 10, 21)
            };

        // Auto property initializers
        public string Manufacturer { get; set; } = "DeLorean";

        // Auto property initializers
        public int TopSpeed { get; set; } = 88;

        // Default values for getter-only properties - no need to specify a private setter
        public double Power { get; }

        public Engine Engine { get; set; }

        public SuperCar()
        {
            // Default values for getter-only properties - possible to set in the constructor only
            Power = 1.21;
        }

        public override string ToString()
        {
            // String interpolation
            return $"Made by {Manufacturer}, Top Speed = {TopSpeed}";
        }
    }

    public class Engine
    {
        public string Manufacturer { get; set; }

        public bool IsEfficient(string engineType)
        {
            // nameof operator
            if (engineType == null)
            {
                throw new ArgumentNullException(nameof(engineType));
            }

            if (engineType == "Mr. Fusion")
            {
                return true;
            }

            return false;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var car = new SuperCar();

            // Import of static type members into namespace
            WriteLine(car.ToString());

            // Null propagator
            WriteLine(car.Engine?.Manufacturer ?? "No engine type specified yet");
        }
    }
}

How it works

There’s a difference between a language specification and the version of the framework that supports it. C# 6.0 is a language specification that is supported by the Roslyn compiler for the .NET platform. This compiler ships by default with Visual Studio 2015 – however, Roslyn doesn’t ship with Visual Studio 2013 (obviously, because this came out before Roslyn).
So all of the above code will compile and works out of the box in Visual Studio 2015, and it works for .NET framework versions 2.0, 3, 3.5, 4, 4.5, 4.5.1, 4.5.2, and 4.6 (I just haven’t included version 1.0 and 1.1 because I don’t have them installed on my machine). It doesn’t matter about the framework – it’s the compiler that matters.

Can it work in VS2013?

I think the answer to this is “partly but not really”. When I try the above code in VS2013, the environment looks like the screenshot below, with more of the red-squiggly lines and a bunch of compiler errors.

But it’s possible to compile C# 6.0 features with Visual Studio 2013 – you just need to install a nuget package. Run the code below from the package manager console.

Install-Package Microsoft.Net.Compilers -Version 1.1.1

This will now compile, but the VS2013 development environment still thinks there’s a problem – it leaves the red squiggly lines and reports the errors in the Error List window. This is because Visual Studio 2013’s real-time compiler hasn’t been replaced, and the development environment doesn’t understand the new language features.

So this isn’t really a long term workable solution to develop in VS2013 with C# 6.0 language features. But you can make the code compile if you need to.
https://jeremylindsayni.wordpress.com/2016/03/10/whats-the-link-between-c-6-0-language-specifications-net-frameworks-and-visual-studios/
jQuery is a lightweight open source JavaScript library (only 15kb in size) that in a relatively short span of time has become one of the most popular libraries on the web. "Write less, do more" is a very good idea for us web developers. But when using it in ASP.NET something happened... We have some challenges using jQuery in ASP.NET as well.

$(function() {
    $("#datepicker").datepicker({
        buttonText: 'Close',
        changeMonth: true,
        changeYear: true,
        gotoCurrent: true,
        showOn: 'click',
        onSelect: function() {
            .....
        }
    });
});

<div id="accordion">
    <h3><a href="#">Section 1</a></h3>
    <div>
        Section 1 content here
    </div>
    <h3><a href="#">Section 2</a></h3>
    <div>
        Section 2 content here
    </div>
    <h3><a href="#">Section 3</a></h3>
    <div>
        Section 3 content here
    </div>
</div>

So how can I "write less, do more"?! I write a lot of code just to use jQuery!

Against such a background, I built a lightweight framework for jQuery in ASP.NET named "DJ", so the server control's code looks like this:

using System;
using System.Web.UI;
using DNA.UI; // step 1: import the DNA JQuery UI Framework library

// step 2: add the JQuery attribute to your control class header
[JQuery(Assembly = "jQuery", Name = "lightbox", ScriptResources = new string[] { "plugin.lightbox.js" })]
public class LightBox : Control
{
    // step 3: add the JQueryOption attribute to the property header
    [JQueryOption(Name = "fixedNavigation")]
    public bool FixedNavigation { get; set; }

    protected override void OnPreRender(EventArgs e)
    {
        // step 4: register the jQuery control
        DNA.UI.ClientScriptManager.RegisterJQueryControl(this);
        base.OnPreRender(e);
    }
}

It's great! In just four steps I wrote the LightBox server control! So I wrote all the widgets of jQuery UI in "DJ", so that you can use them to build your ASP.NET application quickly, or you can use this framework to write your own jQuery server controls!
If you want to see more live demos of "DJ" you can visit my website, and I have put the latest version of the source code on CodePlex.
http://www.c-sharpcorner.com/uploadfile/lruikun/write-jquery-plugin-webcontrol-for-Asp-Net-just-in-few-miniutes/
I am having trouble figuring out how to get this program to work correctly and was wondering if someone could give me a little guidance? I know the problem is in the last function Grade::getWeightedGrade() but I can't figure out what I am doing wrong. The program runs correctly but instead of returning the right score it returns zero. The score is supposed to be based upon the (score of the test * (weight / 100)) for both grade1 and grade2, and then these totals are added together to give the final score....

Here is what it should run like:

Enter Type: Midterm
Enter Score: 86
Enter Weight: .40
Enter Type: Final
Enter Score: 97
Enter Weight: .60

Your grades are as follows:
Midterm 86, which counts 40%
Final 97, which counts 60%
Your final grade is: 92.6

Code:
#include <iostream>
#include <string>

using namespace std;

class Grade
{
public:
    void enterGradeInfo();
    void showGradeInfo();
    double getWeightedGrade();
private:
    int mWeight;
    int mScore;
    string mType;
};

int main()
{
    Grade grade1;
    Grade grade2;

    grade1.enterGradeInfo();
    grade2.enterGradeInfo();

    cout << "Your grades are as follows:" << endl;
    grade1.showGradeInfo();
    grade2.showGradeInfo();

    cout << "Your final grade is " << grade1.getWeightedGrade + grade2.getWeightedGrade;
    cout << endl;

    system("pause");
    return 0;
}

void Grade::enterGradeInfo()
{
    cout << "Enter type: ";
    cin >> mType;
    cout << endl;
    cout << "Enter Score: ";
    cin >> mScore;
    cout << endl;
    cout << "Enter weight: ";
    cin >> mWeight;
    cout << endl;
}

void Grade::showGradeInfo()
{
    cout << mType << " which counts for " << mWeight << "%" << endl;
}

double Grade::getWeightedGrade()
{
    return mScore * (mWeight/100);
}
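For what it's worth, the zero result looks like the classic integer-division pitfall: mWeight and 100 are both ints, so mWeight/100 truncates to 0 for any weight below 100 (and the call site also needs parentheses: getWeightedGrade(), not getWeightedGrade). A minimal sketch of the fix, assuming weights are entered as whole numbers like 40 and 60 — the helper names here are hypothetical, just to isolate the arithmetic:

```cpp
// Sketch of the corrected calculation (not the full Grade class).
// mWeight / 100 in the original is integer division: 40 / 100 == 0.
// Dividing by 100.0 forces floating-point arithmetic instead.
double weightedGradeBroken(int score, int weight)
{
    return score * (weight / 100);    // truncates: 40 / 100 == 0
}

double weightedGradeFixed(int score, int weight)
{
    return score * (weight / 100.0);  // 40 / 100.0 == 0.4
}
```

With this change, and calling the function as grade1.getWeightedGrade() + grade2.getWeightedGrade(), 86 * 0.40 + 97 * 0.60 gives the expected 92.6.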
https://cboard.cprogramming.com/cplusplus-programming/99651-parameter-passing.html
iPcRules Struct Reference

This property class represents a set of active rules for an entity. More...

#include <propclass/rules.h>

Detailed Description

This property class represents a set of active rules for an entity. It uses the rule base system (iCelRuleBase). Note that this property class will automatically load the rule base system if it is missing.

When reading properties out of this pcrules it will check if there are any rules affecting the property. If not you will simply get the value from the associated 'pcproperties' property class (if present). Otherwise you will get a modified value according to the rules defined for this property (starting from the value out of pcproperties). If the 'pcproperties' is not present a default value is assumed (0 for numbers, empty string for string, 0-vector for vectors, and black for colors).

This property class can send out the following messages:

- 'cel.rules.modifypar' (old 'pcrules_modifypar'): a parameter has been modified: parameters 'name' (name of the parameter).

This property class supports the following actions (add prefix 'cel.rules.action.' if you want to access this action through a message):

- AddRule: parameters 'name' (string). Optional 'time' (long) parameter.
- DeleteRule: parameters 'name' (string).
- DeleteAllRules: no parameters.

Definition at line 50 of file rules.h.

Member Function Documentation

Add a rule which times out after a specific amount of time.

Add a rule.

Delete all rules.

Delete a rule.

Get a specific property.

Get a specific property.

Get a specific property.

Get a specific property.

Get a specific property.

Get a specific property.

Get the type of the property. This will base itself on the underlying pcproperties. If there is no pcproperties class with the right property in it then this will return CEL_DATA_NONE even though getting a specific typed value might work.

Get a specific property.

Get a specific property.
The documentation for this struct was generated from the following file:

Generated for CEL: Crystal Entity Layer 2.1 by doxygen 1.6.1
http://crystalspace3d.org/cel/docs/online/api/structiPcRules.html
I’m looking for advice on how to structure a multi language application using ember-i18n. I wasn’t able to find many resources on this so any guidance would be greatly appreciated!

Routes should be localized and prefixed with the locale (unless it’s the default). Like this:

- news
- es/noticias

The API can be namespaced with the locale as such:

- api/news/5
- es/news/5
- /news/5?locale=es

I thought about nesting all routes in a:

this.route('locale', {path: ':locale'}, function () {
  this.route('music');
});

… which would allow you to localize and redirect on the language route. But when you for instance transition to the route en.news the path would not end up being /news/ but /en/news/ (with locale). I need the default locale (en in this case) to not be prefixed.
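The desired mapping can be sketched in plain JavaScript (a toy model, not Ember router code — the names, segments, and default-locale value below are made up for illustration): compute the path prefix from the locale and leave the default locale unprefixed. In an actual Ember app, one common workaround with this shape is to define the default-locale routes at the top level and only the translated routes under the :locale route.

```javascript
// Toy model of the URL scheme described above: the default locale gets
// no prefix, every other locale is prefixed and may use translated segments.
const DEFAULT_LOCALE = 'en';

// Hypothetical per-locale route segments (e.g. 'news' vs 'noticias').
const SEGMENTS = {
  en: { news: 'news' },
  es: { news: 'noticias' }
};

function localizedPath(locale, routeName) {
  const segment = SEGMENTS[locale][routeName];
  return locale === DEFAULT_LOCALE
    ? `/${segment}`
    : `/${locale}/${segment}`;
}

console.log(localizedPath('en', 'news')); // "/news"
console.log(localizedPath('es', 'news')); // "/es/noticias"
```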
https://discuss.emberjs.com/t/how-to-architect-multilanguage-routes-with-i18n/9747/6
NAME
       getrusage - get resource usage

SYNOPSIS
       #include <sys/time.h>

       POSIX.1-2001 specifies getrusage(), but only specifies the fields ru_utime and ru_stime. RUSAGE_THREAD is Linux-specific.

NOTES
       Resource usage metrics are preserved across an execve(2).

       POSIX.1-2001 explicitly prohibits this. This non-conformance is rectified in Linux 2.6.9 and later.

       The structure definition shown at the start of this page the description of /proc/PID/stat in proc(5).

SEE ALSO
       getrlimit(2), times(2), wait(2), wait4(2), clock(3), clock_gettime(3)

COLOPHON
       This page is part of release 3.15 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
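The page above is truncated by extraction, so as a quick illustration of the call it documents, here is a minimal sketch (not part of the original man page) that reads the calling process's own user-mode CPU time via getrusage():

```c
#include <sys/resource.h>

/* Returns the total user-mode CPU time consumed so far by the calling
   process, in microseconds, or -1 on error.  ru_utime is a struct timeval,
   so seconds and microseconds are combined here. */
long user_cpu_usec(void)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) != 0)
        return -1;
    return ru.ru_utime.tv_sec * 1000000L + ru.ru_utime.tv_usec;
}
```

RUSAGE_SELF reports usage for the calling process; per the CONFORMING TO text above, only ru_utime and ru_stime are specified by POSIX.1-2001.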
http://manpages.ubuntu.com/manpages/jaunty/man2/getrusage.2.html
Input monitoring

I am trying to detect an input change at a GPIO pin. I have the following python script, based on the cpp script here:

I can get the ADC measurements from the same pin successfully but can't get input monitoring to work. I hope I am missing something simple… any help would be appreciated.

from mbientlab.metawear import MetaWear, libmetawear, parse_value
from mbientlab.metawear.cbindings import *
from time import sleep
from threading import Event
import thread
import time
import sys

def state_handler(data):
    print("Edge detected...")
    print("%s" % (parse_value(data)))

state_handler_fn = FnVoid_DataP(state_handler)

device = MetaWear(sys.argv[1], device='hci1')
device.connect()
print("Connected (1)")
sleep(1.0)

int_signal0 = libmetawear.mbl_mw_gpio_get_pin_monitor_data_signal(device.board,)

for x in range(0, 10, 1):
    print("Looping...")
    sleep(2.0)

sleep(1.0)
print("Disconnecting...")
device.disconnect()

Which gpio pin are you monitoring? Your code uses both '3' and '0'.

I have tried a number of different pins (and eventually made the error in the code above), but I double checked and re-tested pins 0, 2, 3... with all the functions using the same pin number as per the code below (for pin 0):

int_signal0 = libmetawear.mbl_mw_gpio_get_pin_monitor_data_signal(device.board,)

Quoting myself from the other thread, which in turn quotes the first statement from the Input Monitoring section of the linked gpio doc:

Just because the ADC value changes does not mean the digital state has changed.

I'm pulling the pin down to 0V from 3V.

Yes, but is the digital state changing? Also, please provide a circuit diagram showing how the sensor is powered and connected to your board.
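The distinction the replies draw — a changing ADC value versus a changing digital state — can be illustrated with a toy edge detector in plain Python (no MetaWear APIs involved, and the threshold value is a made-up example): an analog reading only produces a digital edge when it crosses the logic threshold.

```python
# Toy illustration: an analog reading only yields a digital edge when it
# crosses the logic threshold, so a changing ADC value alone is not enough.
LOGIC_THRESHOLD_V = 1.5  # hypothetical threshold for a 3V system

def digital_state(voltage):
    return 1 if voltage >= LOGIC_THRESHOLD_V else 0

def detect_edges(samples):
    """Return a list of (index, 'rising'|'falling') transitions."""
    edges = []
    prev = digital_state(samples[0])
    for i, v in enumerate(samples[1:], start=1):
        cur = digital_state(v)
        if cur != prev:
            edges.append((i, 'rising' if cur > prev else 'falling'))
        prev = cur
    return edges
```

For samples [3.0, 2.5, 2.0, 1.0, 0.0] the ADC value changes at every step, but there is only one falling edge — where the voltage crosses the threshold. That is the behaviour pin monitoring reacts to.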
https://mbientlab.com/community/discussion/comment/6497
csTraceBeamResult Struct Reference

Return structure for the csColliderHelper::TraceBeam() method. More...

#include <cstool/collider.h>

Detailed Description

Return structure for the csColliderHelper::TraceBeam() method.

Definition at line 169 of file collider.h.

Member Data Documentation

closest_isect will be set to the closest intersection point (in world space).
Definition at line 181 of file collider.h.

closest_mesh will be set to the closest mesh that is hit.
Definition at line 185 of file collider.h.

Closest triangle from the model. closest_tri will be set to the closest triangle that is hit. The triangle will be specified in world space.
Definition at line 176 of file collider.h.

Sector in which the collision occurred.
Definition at line 194 of file collider.h.

The squared distance between 'start' and the closest hit or else a negative number if there was no hit.
Definition at line 190 of file collider.h.

The documentation for this struct was generated from the following file:

- cstool/collider.h

Generated for Crystal Space 1.4.1 by doxygen 1.7.1
http://www.crystalspace3d.org/docs/online/api-1.4/structcsTraceBeamResult.html
IRC log of sml on 2008-06-12

Timestamps are in UTC.

18:04:07 [RRSAgent] RRSAgent has joined #sml
18:04:07 [RRSAgent] logging to
18:04:26 [Kumar] scribe: Kumar Pandit
18:04:29 [Kumar] scribenick: Kumar
18:04:42 [Kumar] meeting: SML Conf Call
18:04:55 [Kumar] chair: Pratul
18:04:59 [MSM] zakim, who's here?
18:04:59 [Zakim] On the phone I see [Microsoft], Kirk, johnarwe, MSM, kumar, jim
18:05:00 [Zakim] On IRC I see RRSAgent, Sandy, Kumar, johnarwe, pratul, Jim, Zakim, MSM, trackbot
18:05:19 [ginny] ginny has joined #sml
18:05:23 [Kumar] topic: Approval of minutes from previous meeting(s):
18:05:41 [Kumar] Pratul: minutes for 6/5 call are approved due to no objection.
18:05:54 [Kumar] topic: Review of action items
18:06:01 [Zakim] +ginny
18:07:48 [Kirk] Kirk has joined #sml
18:08:43 [Kumar] Kumar: my AI is related to Pratul's. Not done yet.
18:08:56 [Kumar] msm: Will have AIs done before f2f
18:09:13 [Kumar] topic: Review all non-editorial bugs with external comments
18:09:39 [Kumar] topic: bug# 5522
18:12:54 [Zakim] +Sandy
18:14:53 [Kumar] Kumar: The existing text is correct. I have mentioned this in comment# 7.
18:16:30 [Kumar] msm: the existing text looks correct, ok to close this bug.
18:18:57 [Kumar] Pratul: we need to mention in the bug that comment# 5 no longer holds.
18:24:53 [Kumar] John: Henry looked at an old draft that really had the word 'containing' and since we fixed that in the LC draft, we should resolve this bug as 'fixed'.
18:26:04 [Kumar] resolution: mark as fixed.
18:26:44 [Kumar] bug# 5541
18:26:49 [Kumar] topic: bug# 5541
18:28:33 [Kumar] Pratul: proposal in comment# 14
18:37:01 [Kumar] Pratul: does this preclude identifying references using sml:ref=true ?
18:37:05 [Zakim] +Kirk.a
18:37:08 [Kumar] msm: no
18:37:37 [MSM] zakim, who's here?
18:37:37 [Zakim] On the phone I see [Microsoft], Kirk, johnarwe, MSM, kumar, jim, ginny, Sandy, Kirk.a
18:37:40 [Zakim] On IRC I see Kirk, ginny, RRSAgent, Sandy, Kumar, johnarwe, pratul, Jim, Zakim, MSM, trackbot
18:38:09 [Kumar] msm: PSVI will still have the attribute value if the attribute is present in the instance.
18:39:15 [Kumar] resolution: fix per comment# 14, mark decided+editorial
18:41:15 [johnarwe] LC text: 4.1.1 SML Reference
18:41:15 [johnarwe]
18:41:15 [johnarwe] An element information item in an SML model instance document is as an SML reference if and only if it has an attribute information item for which all of the following is true:
18:41:15 [johnarwe]
18:41:15 [johnarwe] 1.
18:41:17 [johnarwe]
18:41:19 [johnarwe] Its [local name] is ref
18:41:21 [johnarwe] 2.
18:41:23 [johnarwe]
18:41:25 [johnarwe] Its [namespace name] is
18:41:27 [johnarwe] 3.
18:41:29 [johnarwe]
18:41:31 [johnarwe] Its [normalized value], after whitespace normalization using collapse following schema rules, is either "true" or "1".
18:41:34 [johnarwe]
18:41:37 [johnarwe] This mechanism enables schema-less identification of SML reference, i.e., SML references can be identified without relying on the Post Schema Validation Infoset. [XML Schema Structures]
18:41:39 [johnarwe]
18:41:41 [johnarwe] Although its normative definition allows several syntaxes to be used to identify an SML reference, for the sake of brevity and consistency, the rest of this specification uses sml:ref="true" to denote an SML reference in examples and text.
18:50:03 [Kumar] john: The proposed text would replace the 'implementation-defined' part of the current text this is not good.
18:51:17 [MSM] [Seaking for myself, I would like (a) to allow non-validating consumers to use either the base infoset or the PSVI, (b) to say so explicitly and clearly, not indirectly, (c) to say so normatively.]
18:51:58 [MSM] [... and (d) to require non-validating consumers to document which they use, as part of their claim of conformance]
18:56:31 [Kumar] john to open another bug about non-validating consumers.
18:56:39 [Kumar] action: john to open another bug about non-validating consumers.
18:56:39 [trackbot] Created ACTION-194 - Open another bug about non-validating consumers. [on John Arwe - due 2008-06-19].
18:56:58 [MSM] zakim, who's here?
18:56:58 [Zakim] On the phone I see [Microsoft], Kirk, johnarwe, MSM, kumar, jim, ginny, Sandy, Kirk.a
18:57:01 [Zakim] On IRC I see Kirk, ginny, RRSAgent, Sandy, Kumar, johnarwe, pratul, Jim, Zakim, MSM, trackbot
18:57:04 [Kumar] resolution: fix per comment# 14, mark decided+editorial
18:58:11 [Kumar] topic: bug# 5519
18:58:25 [Kumar] ginny: not ready to make a decision on this bug till I see msm's test case.
18:58:42 [Kumar] topic: bug# 5529
18:59:58 [MSM] [text in comment #5 works for me]
19:03:50 [MSM] I wonder if 'no element-specific constraints' would work?
19:07:01 [MSM] or 'no (target-* or acyclic) constraints'
19:09:55 [Kumar] ...long discussion about the current text.
19:10:12 [Kumar] s/current/proposed/
19:12:39 [Kumar] resolution: remove needsReview, resolve as fixed
19:13:08 [Kumar] action: ginny to open a new bug for defining the term 'SML constraints'
19:13:08 [trackbot] Sorry, couldn't find user - ginny
19:13:23 [Kumar] action: virginia to open a new bug for defining the term 'SML constraints'
19:13:23 [trackbot] Created ACTION-195 - Open a new bug for defining the term 'SML constraints' [on Virginia Smith - due 2008-06-19].
19:13:45 [Kumar] topic: bug# 5598
19:15:24 [Kumar] ginny: will re-fix the bug to use smlfn prefix
19:15:29 [Kumar] resolution: mark editorial again
19:15:42 [Kumar] topic: bug# 5546
19:18:10 [pratul] Zakim, Microsoft is me
19:18:10 [Zakim] +pratul; got it
19:19:40 [Kumar] Kumar: I had sent my findings about 2557 to public-sml. John had responded to it.
19:20:29 [Kumar] John: The biggest reason I remember was 2557 allows one alias per document, sml-if needs multiple.
19:21:35 [Kumar] Pratul: Kumar, can you add your findings to the bug?
19:21:39 [Kumar] Kumar: yes
19:27:30 [johnarwe]
19:30:59 [Kumar] john: ... talks about findings in his email.
19:35:41 [Kumar] john: everyone should read the emails sent by John & Kumar and then they can decide if they want to do deeper research by reading the rfc as well. Once everyone has done the research, we can decide on this bug.
19:35:57 [Zakim] -ginny
19:36:11 [Kumar] topic: bug# 5707
19:48:32 [Kumar] sandy: if relative reference is an empty string, we can skip 2.a as well.
19:49:05 [Kumar] modified proposal:
19:49:25 [Kumar] Note: If the relative reference is an empty string or if it consists of only a
19:49:25 [Kumar] fragment component then steps 2.a and 2.b are skipped because the document containing the
19:49:25 [Kumar] SML reference is the target document.
19:59:49 [johnarwe] apologies, need to leave promptly today for another call
20:00:04 [johnarwe] reminder to PD and MSM, mtg in 1 hr
20:00:07 [Kumar] sandy: we cannot deduce the conclusion in the proposed non-normative note using the normative text therefore that text should be normative.
20:00:12 [Jim] need to leave also
20:00:12 [Zakim] -johnarwe
20:00:14 [Jim] Jim has left #sml
20:00:23 [Kumar] msm: I think that text should remain non-normative.
20:00:53 [MSM] johnarwe, ack
20:01:05 [MSM] sandy, the other bug on which I think this one depends is 5542
20:01:07 [Kumar] Pratul: Sandy can you add a summary of today's discussion to the bug?
20:01:18 [Zakim] -jim
20:02:04 [Zakim] -pratul
20:02:12 [Zakim] -Kirk.a
20:02:14 [Zakim] -kumar
20:02:25 [Kumar] rrsagent, generate minutes
20:02:25 [RRSAgent] I have made the request to generate Kumar
20:02:32 [Kumar] rrsagent, make log public
20:03:51 [Zakim] -MSM
20:03:51 [Zakim] -Sandy
20:06:01 [johnarwe] johnarwe has left #sml
20:08:51 [Zakim] disconnecting the lone participant, Kirk, in XML_SMLWG()2:00PM
20:08:55 [Zakim] XML_SMLWG()2:00PM has ended
20:08:56 [Zakim] Attendees were Kirk, johnarwe, MSM, kumar, jim, ginny, Sandy, pratul
21:49:46 [Zakim] Zakim has left #sml
http://www.w3.org/2008/06/12-sml-irc