Below you see some code that has been in use for ages. With Xamarin.iOS 10, however, it no longer works on Device in Release/Ad-Hoc mode. Although the constructor is executed, the CreateChannel() override is NOT called; the base implementation appears to be called instead, producing an exception:

    MonoTouch does not support dynamic proxy code generation. Override this method or its caller to return specific client proxy instance.

It works in Debug mode on Device, and always in the Simulator. Workaround: downgrade to Xamarin.iOS 9.8.2.22. Ekki

```csharp
public class ChannelFactoryWithCredentials : System.ServiceModel.ChannelFactory<Contracts.ServiceContracts.IConfigurationService>
{
    private ConfigurationServiceClient client;

    public ChannelFactoryWithCredentials (ConfigurationServiceClient client,
                                          System.ServiceModel.Channels.Binding binding,
                                          System.ServiceModel.EndpointAddress endpointAddress)
        : base(binding, endpointAddress)
    {
        this.client = client;
    }

    public override Diomex.XcalibuR.Configurator.Contracts.ServiceContracts.IConfigurationService CreateChannel (System.ServiceModel.EndpointAddress address, Uri via)
    {
        this.client.ClientCredentials.UserName.UserName = this.Credentials.UserName.UserName;
        this.client.ClientCredentials.UserName.Password = this.Credentials.UserName.Password;
        return new ConfigurationServiceClientChannel(this.client);
    }
}
```

Hello, could you provide the full stack trace for the exception and your environment information? The easiest way to get exact version information is to use the "Xamarin Studio" menu, "About Xamarin Studio" item, "Show Details" button and copy/paste the version information (you can use the "Copy Information" button).

I don't have a stack trace because I cannot catch it. It happens only on Device in non-Debug mode.
Environment:

    === Xamarin Studio Community ===
    Version 6.1.1 (build 15)
    Installation UUID: 84015b19-2768-4aa8-96cc-3bf1975b5714
    Runtime: Mono 4.6.1 (mono-4.6.0-branch-c8sr0/abb06f1) (64-bit)
    GTK+ 2.24.23 (Raleigh theme)
    Package version: 406010003
    === NuGet ===
    Version: 3.4.3.0
    === Xamarin.Profiler ===
    Version: 0.20.0.0
    Location: /Applications/Xamarin Profiler.app/Contents/MacOS/Xamarin Profiler
    === Apple Developer Tools ===
    Xcode 8.0 (11246)
    Build 8A218a
    === Xamarin.Mac ===
    Version: 2.10.0.103 (Xamarin Studio Community)
    === Xamarin.Android ===
    Version: 7.0.1.2 (Xamarin Studio Community)
    Android SDK: /Users/eb/Library/Developer/Xamarin/android-sdk-macosx
    Supported Android versions: 4.0.3 (API level 15), 4.4 (API level 19), 5.0 (API level 21)
    === Xamarin Android Player ===
    Version: 0.6.2
    Location: /Applications/Xamarin Android Player.app
    === Xamarin.iOS ===
    Version: 9.8.2.22 (Xamarin Studio Community)
    Hash: f37444a
    Branch: cycle7-sr1
    Build date: 2016-07-28 12:17:02
    === Operating System ===
    IG-MOBILE17.local 16.0.0 Darwin Kernel Version 16.0.0 Mon Aug 29 17:56:20 PDT 2016 root:xnu-3789.1.32~3/RELEASE_X86_64 x86_64

The environment information above describes the working scenario. If I only update Xamarin.iOS to the next newer version, which is a 10.something, the error happens. While exploring this issue I learned about a 'similar' issue in the context of Xamarin.Android. It also had to do with CreateChannel(), but the problem was a little different, and so was the work-around. Maybe this kinda helps a little bit...

Any news, guys? Anything expected from my side? I feel kinda bad if my apps don't work with all the latest packages from the Stable channel.

From comment #2:

> Version: 9.8.2.22 (Xamarin Studio Community)

The question from @Tim was to know which version of XI caused the issue. But this seems, from your other comments, to be the version that works, right? Also, the code above is not complete enough for us to double-check your result. Could you attach a small, self-contained test so we can investigate the issue?
Thanks! The issue happened after updating to the latest Xamarin components of the stable branch. As a workaround I can solve it by downgrading Xamarin.iOS to the latest official pre-10 version.

#1 Minimal test case - this is what I did request from the partner that provides the assembly that is playing with those channel factories. He said it would take him two days to put the minimal test case together, and he's too busy these days.

#2 Access to repos - I can give you access to 5 Bitbucket repos. All you need to do is clone them side by side and call setup.sh once. Then you would be able to reproduce the issue and probably fix it, as I believe it has to do with linking and building, not with compiling. Remember, it only happens in Release mode, not in Debug mode. Very similar to that Xamarin.Android issue I mentioned above.

#3 Minimal test case 2 - for someone on your side who's familiar with those channel factories (I'm not), it's probably easy to set up something with an own ChannelFactory and a simply overridden CreateChannel(). I do believe that this scenario should generate the same error behavior.

So how to continue?

@Ekki, please provide us a self-contained test case that allows us (non channel experts) to reproduce the issue. Ideally do #1 (fast) or email me details for #2 (slow).

Will turn myself into a Channel Expert, well kinda, and compile something. Please give me one week...

@Ekki have you managed to create the test case that @Sebastien requested?

Tried to, but some unforeseeable issues in other projects didn't allow me to do so. Will give it a new try this weekend.

@Ekki, have you had the time to take a look at this?

Started, but some 3D issues troubled me, right before the big fair... It turned out that this was caused by iOS 8.x being limited in its Metal implementation... damned... An upgrade to iOS 9.3.5 solved it...

@Ekki Ever have any luck getting a test case for us to reproduce this?
Or did you mean the upgrade to iOS 9.3.5 fixed the original issue?

Believe it or not, just yesterday I spoke again with the guy who wrote the ChannelFactory module to ask for a minimal test case. I hope we can solve this now, because I'm stuck with the latest Xamarin.iOS 9.x release and I feel really bad being out of sync with the stable channel. The 9.3.5 comment above was in the context of that specific 3D problem, nothing to do with the channel factory.

We have not received the requested information. If you are still experiencing this issue, please provide all the requested information and re-open the bug report. Thanks!
https://bugzilla.xamarin.com/44/44989/bug.html
Hi there! I am currently trying to make JIT optimizations work on the source code of a Tree-LSTM model. The Tree class in the model is a crucial part of it, so I need to make it a custom class type so it can be used by the core methods of the model. That's when I found a problem:

```python
import torch
from typing import List

@torch.jit.script
class Tree(object):
    def __init__(self):
        self.parent = None
        self.num_children = 0
        self.children = torch.jit.annotate(List[Tree], [])
    # further definitions omitted
```

When I try to run the code, here is the error:

```
RuntimeError:
Unknown type name Tree:
def __init__(self):
    self.parent = None
    self.num_children = 0
    self.children = torch.jit.annotate(List[Tree], [])
                                            ~~~~ <--- HERE
```

So the question is: Tree is basically a recursive structure; the children of a tree node are a list of tree nodes. Therefore, for the children variable of the Tree class, I need to define an empty list of Tree type. But since the Tree class definition is still only halfway done at that point, the interpreter cannot recognize the Tree type. I am wondering whether there is any way to solve this problem, or whether there is actually no support for custom classes like the Tree above in the current version of PyTorch and I should try other ways. It would be so nice if someone could give me a hand. Thanks a lot!
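One common restructuring that sidesteps the recursive-type limitation (this is not an official TorchScript feature, just a workaround sketch; FlatTree and its method names are illustrative) is to encode the tree as integer indices into flat, parallel lists, so that no type refers to itself:

```python
from typing import List


class FlatTree:
    """Tree encoded as parallel lists: node i's parent is parents[i]
    (-1 for the root) and its children are the index list children[i].
    Integer indices replace recursive Tree references, so no field's
    type annotation needs to mention the class being defined."""

    def __init__(self) -> None:
        self.parents: List[int] = []
        self.children: List[List[int]] = []

    def add_node(self, parent: int = -1) -> int:
        idx = len(self.parents)
        self.parents.append(parent)
        self.children.append([])
        if parent >= 0:
            self.children[parent].append(idx)
        return idx

    def num_children(self, idx: int) -> int:
        return len(self.children[idx])


# build:   root -> a, b ;   a -> c
t = FlatTree()
root = t.add_node()
a = t.add_node(root)
b = t.add_node(root)
c = t.add_node(a)
```

A bottom-up Tree-LSTM evaluation then becomes a loop over node indices in topological order instead of a recursion over objects, which is also the shape of data that scripted functions handle most easily.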
https://discuss.pytorch.org/t/is-there-support-of-recursive-custom-classes-like-tree-in-tree-lstm/47418
The QMainWindow class provides a main application window. More...

#include <QMainWindow>

Inherits QWidget.

The QMainWindow class provides a main application window.

You can set a status bar with setStatusBar(), but one is created the first time statusBar() (which returns the main window's status bar) is called. See QStatusBar for information on how to use it.

The DockOptions type, introduced in Qt 4.3, is a typedef for QFlags<DockOption>. It stores an OR combination of DockOption values.

dockOptions
This property holds the docking behavior of QMainWindow. The default value is AnimatedDocks | AllowTabbedDocks. This property was introduced in Qt 4.3.

iconSize
This property holds the size of toolbar icons in this main window. The default is the default toolbar icon size of the GUI style. Note that the icons used must be at least of this size, as the icons are only scaled down.

toolButtonStyle
This property holds the style of toolbar buttons in this main window. The default is Qt::ToolButtonIconOnly.

unifiedTitleAndToolBarOnMac
When this property is enabled, all toolbars in the top toolbar area are moved to a unified title and toolbar. Any toolbars added afterwards will also be added to the Carbon HIToolbar. Setting this back to false will remove these restrictions. The Qt::WA_MacBrushedMetal attribute takes precedence over this property. This property was introduced in Qt 4.3.

QMainWindow(parent, flags)
Constructs a QMainWindow with the given parent and the specified widget flags.

~QMainWindow()
Destroys the main window.

addDockWidget(area, dockwidget)
Adds the given dockwidget to the specified area.

addToolBar(toolbar)
This is an overloaded member function, provided for convenience. Equivalent of calling addToolBar(Qt::TopToolBarArea, toolbar).

dockWidgetArea(dockwidget)
Returns the Qt::DockWidgetArea for dockwidget. If dockwidget has not been added to the main window, this function returns Qt::NoDockWidgetArea.

menuBar()
Returns the menu bar for the main window. This function creates and returns an empty menu bar if the menu bar does not exist.
removeToolBarBreak(before)
Removes a toolbar break previously inserted before the toolbar specified by before.

toolBarBreak(toolbar)
Returns whether there is a toolbar break before the toolbar. See also addToolBarBreak() and insertToolBarBreak().

toolButtonStyleChanged(toolButtonStyle) [signal]
This signal is emitted when the style used for tool buttons in the window is changed. The new style is passed in toolButtonStyle. You can connect this signal to other components to help maintain a consistent appearance for your application. See also setToolButtonStyle().
http://doc.trolltech.com/4.3/qmainwindow.html
This was intended to be a single comprehensive post about what's wrong with mixing the Actor model and OOP. After I had been writing it for a while, I discovered that there is a lot of stuff to be told, so I split the post in two. This is the first, and it talks about why you would want to add typing to Actors, and then why you would want to go back. The next one (which I will likely publish in 2020) is about why you would want to add inheritance among actors and why, guess what... you would refrain from doing it. Let's start.

Once the concept of Actor as implemented by the Akka framework is clear, we can proceed to the first issue in mixing Actors and OOP, which is brilliantly captured by the sentence "No good deed goes unpunished". In fact, there was a perceived problem, and the solution turned out, when it was too late, to be worse than the problem.

The Problem

As shown in my previous post, an Akka Actor has an untyped interface: messages delivered to the Actor's door may be of Any type, and it is up to the pattern matcher to sort the communication out at run-time. This has two main inconveniences. First, errors are detected (if they are) at run-time. Second, IDEs cannot help, since all actors are of the same type (ActorRef), and therefore you can't have a list of specialized actions for the actor instance under the cursor. (I haven't written this before, but the framework hides all the specific actor types behind a generic ActorRef interface. In fact, you send messages to the ActorRef interface and not directly to the derived class.)

The Solution

How can we get around this? The idea was to provide a typed wrapper that hides the actual actor, providing a method-based interface. Let's say that we have actor A that accepts message M. This will be equipped with a companion class AA exposing a method m() : Unit. Method m() just builds an object of type M and sends it to the ActorRef behind the curtains.
The Troubles

By employing the type of the AA wrapper, now you (and the compiler) can check at compile-time that only proper methods can be invoked. On the other hand, you are creating an impedance mismatch within your codebase, since some parts require an ActorRef while other parts require your AA-and-offspring types. This leads you to open up the implementation details and reveal the underlying ActorRef, and, as every OOP programmer knows, exposing the implementation is usually a Bad Thing™.

As long as you deal with methods that are either setters or ask the target object to perform some kind of referentially opaque action, the countryside is a happy place with birds happily singing in the background. When you turn your attention to getters, thundering black clouds start forming overhead and the birds have long since fled.

With a getter, you expect to call something like isEmpty() to get something like a true/false result. Since actors are just reactive entities, there is no such thing as a message that returns a value (remember the receive function? It was Any -> Unit). Akka provides the so-called ask pattern. This is a mechanism that helps the programmer set up a send/receive contraption. But the resulting code is pretty actor-aware, and as such it is not applicable to a method-call context. The solution adopted by our fathers was to add methods named like getters that actually sent a query message and let the caller catch the reply, if any. E.g.:

```scala
def getSize()(implicit sender: ActorRef) : Unit = ref ! QuerySize
```

First note that the ! operator sends the message and immediately returns, i.e. this function is not blocking. The implicit part transparently captures the actor context when the getSize method is invoked within an actor. The method implementation needs this reference to know where to send the reply. If the method is not called within an actor, then the implementation does not know where to address the reply.
Therefore the problem is only half solved: yes, we can now ask only questions that the actor knows the answer to, but we have no safety on the returned type and we cannot handle the request outside an actor. Compare this to the Akka solution:

```scala
val f = ref ? QuerySize
```

This is untyped, but you get a future that will hold the reply to your request. The future has a timeout that allows you to deal with problems on the other side of the communication (yes, this is half-baked as well, but it seems more general and more reliable). Moreover, the Akka ask pattern doesn't give a false sense of security. Actors may exist or not, and might or might not be in the proper state to receive and process your message. Actors may even be on a different machine, so that all the unreliability of the network connection applies.

You could have declared the method to return a future, such as:

```scala
def getSize()(implicit sender: ActorRef) : Future[Int] = (ref ? QuerySize).mapTo[Int]
```

But you would have lost the simplicity of the traditional method call/return value all the same. Also, you would still have to deal with the impedance mismatch caused by your wrapper.

So far the best approach to the problem I have devised is to have an actor subsystem wrapped into a future-based API. The API provides type constraints over the usage of the subsystem. Also, the API hides the actor-level complexity: users may only reach what they are allowed to. The future abstraction is a fully typed concept that can be used both by actors and non-actors and properly maps the Success/Failure outcome that could arise from a distributed system. Inside the subsystem, things are on a small enough scale that the lack of types doesn't become too bad a hassle.

Lesson Learned: Don't pretend that actors can be treated like objects; their nature just doesn't match. Rather than a per-actor wrapping, prefer the encapsulation of an entire actor subsystem in a conventional API wrapper. The use of Future helps to simplify the integration while keeping the right semantics.
https://www.maxpagani.org/2019/12/19/our-fathers-faults-mixing-actors-and-oop-1-actors-with-methods/
Good luck. If you don't understand my answer, don't ignore it, ask a question.

```java
class Ticker extends FXRateEvent {
    // No key set and no match implemented; this event matches all RateEventInfos
    FXPair changedPair;
    FXRateEventInfo REI;

    public void handle(FXEventInfo EI, FXEventManager EM) {
        // Just print the tick
        REI = (FXRateEventInfo) EI;
        System.out.println(REI.getTick() + ":" + REI.getTimestamp());
        // changedPair = REI.getPair();
        // System.out.println(changedPair);
    }

    public FXPair changedPair() {
        return changedPair;
    }
}
```

You need to put your variables into fields outside of the methods, then set them inside the methods stating only the variable name, not the type.

Thank you micecd.

No problem, would you mind clicking the thanks button under my username? ^^ And did you get your project working?

It is working, but I didn't get it going myself. I'm working with someone whose trade is programming, so he fixed it. That doesn't help me much in terms of learning; I need to spend more time on Java. I use code to trade forex and have been working in MQL, but the broker I've been using is closing its retail division. So now I have to make a move, and the best alternative option is using Oanda's REST API with Java. It just takes so long to make anything work in Java; I don't have the experience to hunt down problems in a hurry.
--- Update ---

In fact, maybe you can tell me why this isn't working. It's either a malformed link problem or I'm sending the header wrong:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Date;

import org.apache.http.*;
import org.apache.http.client.methods.*;
import org.apache.http.impl.client.BasicResponseHandler;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.message.BasicHeader;
import org.apache.http.util.EntityUtils;
import org.apache.http.client.HttpClient;
import org.json.simple.JSONObject;
import org.json.simple.JSONValue;

public class OandaRestProjectMain {

    public static void main(String[] args) throws IOException {
        CloseableHttpClient httpClient = HttpClientBuilder.create().build();
        try {
            // Set these variables to whatever personal ones are preferred
            String domain = "";
            String access_token = "Token";
            String account_id = "AccountNR";
            String instruments = "EUR_USD";

            HttpUriRequest httpGet = new HttpGet(domain + "/v1/prices?accountId="
                    + account_id + "&instruments=" + instruments);
            httpGet.setHeader(new BasicHeader("Authorization: ", "Bearer " + access_token));
            System.out.println("Executing request: " + httpGet.getRequestLine());
            System.out.println("Header info: " + "Authorization: " + "Bearer " + access_token);

            HttpResponse resp = httpClient.execute(httpGet);
            HttpEntity entity = resp.getEntity();

            if (resp.getStatusLine().getStatusCode() == 200 && entity != null) {
                InputStream stream = entity.getContent();
                String line;
                BufferedReader br = new BufferedReader(new InputStreamReader(stream));
                while ((line = br.readLine()) != null) {
                    Object obj = JSONValue.parse(line);
                    JSONObject tick = (JSONObject) obj;
                    // unwrap if necessary
                    if (tick.containsKey("tick")) {
                        tick = (JSONObject) tick.get("tick");
                    }
                    // ignore heartbeats
                    if (tick.containsKey("instrument")) {
                        System.out.println("-------");
                        String instrument = tick.get("instrument").toString();
                        String time = tick.get("time").toString();
                        double bid = Double.parseDouble(tick.get("bid").toString());
                        double ask = Double.parseDouble(tick.get("ask").toString());
                        System.out.println(instrument);
                        System.out.println(time);
                        System.out.println(bid);
                        System.out.println(ask);
                    }
                }
            } else {
                // print error message
                String responseString = EntityUtils.toString(entity, "UTF-8");
                System.out.println(responseString);
            }
        } finally {
            httpClient.close();
        }
    }
}
```

The error message is:

```
{
  "code" : 4,
  "message" : "The access token provided does not allow this request to be made",
  "moreInfo" : "http:\/\/developer.oanda.com\/docs\/v1\/auth\/#overview"
}
```

And I'm very sure I am sending the right token, so it must be the header delivering it wrong.

Those do not look like Java programming errors. If you are trying to interface to some site, perhaps someone at the site could help you. If you can identify a Java programming problem, please ask about it. If you don't understand my answer, don't ignore it, ask a question.

The problem is in the code. I had someone else check out my token and it works. I suspect I am sending the header wrong, or the token wouldn't have been rejected; sometimes I get a different error message as if the token is going through but the link is malformed. So my Java question is: is there a problem with these lines? One of them appears not to be working properly:

```java
HttpUriRequest httpGet = new HttpGet(domain + "/v1/prices?accountId=" + account_id + "&instruments=" + instruments);
httpGet.setHeader(new BasicHeader("Authorization: ", "Bearer " + access_token));
```

Do you know what the header should look like?

> I suspect I am sending the header wrong

There are test servers you can use that will show you what the program is sending.
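One likely culprit (my observation, not an answer given in the thread) is the header name passed to BasicHeader: it contains a colon and a trailing space, so the request carries a header literally named "Authorization: " rather than "Authorization", which a server would not recognize. A minimal standalone sketch of the difference (AuthHeaderDemo and buildAuthHeader are hypothetical names used only for illustration):

```java
class AuthHeaderDemo {

    // Equivalent to new BasicHeader("Authorization", "Bearer " + accessToken):
    // the header NAME must be exactly "Authorization"; the HTTP client adds
    // the ": " separator itself when it writes the request line.
    public static String[] buildAuthHeader(String accessToken) {
        return new String[] { "Authorization", "Bearer " + accessToken };
    }

    public static void main(String[] args) {
        // What the posted code effectively sends: a mangled header name.
        String[] wrong = { "Authorization: ", "Bearer abc123" };
        String[] right = buildAuthHeader("abc123");

        // The wrong version doubles the separator on the wire.
        System.out.println(wrong[0] + ": " + wrong[1]);
        System.out.println(right[0] + ": " + right[1]);
    }
}
```

With HttpClient itself the equivalent one-line fix would be `httpGet.setHeader("Authorization", "Bearer " + access_token);` (HttpMessage.setHeader(String, String) exists in HttpClient 4.x), avoiding the BasicHeader constructor call with the malformed name.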
If you can see what is sent by the program and compare that to what the desired header should be, you should be able to change the program to send the correct header.

> is there a problem with these lines, one of them appears not to be working properly

Again, compare what is correct with what the program generates so you can change the code to match what is right. If you don't understand my answer, don't ignore it, ask a question.

They've updated their code. There were deprecated classes in their old example that I changed in my example; I probably used the wrong class. So I've been running again, but they still have a deprecated class in their example. Not sure why they updated only half the bloody thing. Now this line doesn't work...

```java
} finally {
    // httpClient.getConnectionManager().shutdown();
}
```

Thanks for the attempts to help...

Usually deprecated classes will continue to work for a while. If you don't understand my answer, don't ignore it, ask a question.

[Attachment: Screen Shot 2014-09-16 at 2.18.33 PM.png]

Try asking what to do on an Eclipse users forum. If you don't understand my answer, don't ignore it, ask a question.
http://www.javaprogrammingforums.com/object-oriented-programming/37751-getting-information-api-class-into-main-body-2.html
The Dynamic Tabs UI Shell Template (UI Shell) is used to address one of the necessary attributes of a software product: usability. The UI Shell is a template with behaviors. Commonly provided behaviors aid predictability and control, which are attributes of usability. These behaviors center around navigation. When implemented essentially the same way across the application, the application feels predictable and the user feels in control. The most salient behavior supported by the UI Shell is dynamic tabs. The tabs are dynamic in that they are rendered and dismissed on demand by the user.

The ADF UI Shell page template contains facets, attributes, ADF components, and a Java bean to provide behavior. Multiple ADF page fragments (defined as bounded task flows) run in a series of dynamic tabs on the page. A page fragment is a JSF JSP document (.jsff) that can be rendered as content of another JSF page. It is similar to a page, except that it cannot contain elements such as <f:view>. The template itself manages the dynamic tabs, provides facilities to open new tabs with given task flow identifiers, and handles dirty state for those tabs. The template handles a maximum of 15 task flows represented as tabs. The template defines four areas with ten facets in roughly a three-row by two-column layout.

The UI Shell is one of two templates provided out of the box by Oracle. It is called "Dynamic Tab Shell." Any newly created JSF or JSP page can be based on this template.

A chief challenge in UI development is rational consistency: providing the user with predictable locations and behaviors for the most salient features. Because the ADF UI Shell is a template, it provides guidance for development teams to deliver a consistent UI to their customers. The primary assumption of the UI Shell is that each page based on the template will represent a significant module of an application, if not a separate application.
As a consequence, the "boilerplate" portion of the UI (e.g., branding, ubiquitous controls, obligatory legal statements) is more easily held consistent in form, placement, and function, as it is defined once per page. The more interactive portion of the application, the content that delivers the features and functions that define the application, is presented in the "workarea" of the template. The UI Shell identifies those areas through named facets (e.g., navigation) and regions (e.g., dynamic tabs). Nearly all, if not all, of the product's value add can be surfaced in these areas within a single page as bounded ADF task flows containing one or more ADF page fragments. Navigation rules between applications represented as single pages can be defined in unbounded ADF task flows and presented to the user as global navigation controls (e.g., global tabs, buttons, links), providing convenient access to a suite of separate but related products.

The UI Shell template and Java bean are contained within an ADF jar shipped with JDeveloper/ADF. The Java bean has a list of methods (i.e., APIs) available only to this template. The UI layout of the template has ten named facets. The template also has nine attributes which pass in values to each page based on the template. There are also embedded components to round out the look and feel (L&F) of the template.

Java APIs

The behaviors of the template are provided through a class called TabContext. TabContext contains methods to launch new tabs, as well as other forms of manipulation of the tabs. The best practice for leveraging the APIs within TabContext is through a managed bean written in Java:

```java
import oracle.ui.pattern.dynamicShell.TabContext;

TabContext tabContext = TabContext.getCurrentInstance();
```

All public APIs are EL accessible. The addTab API will always launch new tabs into the local content area.
This API is appropriate for use cases where multiple instances of the same task flow are desired to each have a separate tab, such as "create new."

```java
/* Copyright (c) 2008, 2009, Oracle and/or its affiliates. All rights reserved. */
try {
    tabContext.addTab("Some Localized Title of the Tab",
                      "/path/to/taskflow#taskflowId");
} catch (TabContext.TabOverFlowException toe) {
    toe.handleDefault();
}
```

The addTab call can throw a TabOverFlowException. This will occur if there is an attempt to open more than the maximum number of available tabs (i.e., task flows) at run time. This case can be handled in any way preferred, or use the handleDefault() method to make a popup warning dialog appear to the user. When this exception is thrown, the requested tab will not open for the user. The maximum number of tabs the UI Shell allows is 15.

The addOrSelectTab API will always launch one instance of the tab by detecting whether it has already launched. If so, that tab is selected instead. This API is appropriate for use cases where a single instance of a task flow is desired within a tab, such as "edit object." If the task flow for that object is open, then a gesture by the user to re-launch that edit-object task flow will return selection to its tab.

```java
/* Copyright (c) 2008, 2010, Oracle and/or its affiliates. All rights reserved. */
try {
    tabContext.addOrSelectTab("Some Localized Title of the Tab",
                              "/path/to/taskflow#taskflowId");
} catch (TabContext.TabOverFlowException toe) {
    toe.handleDefault();
}
```

There are use cases where the goal is not to use tabs (i.e., present the user with two or more task flows launched serially within tabs) but to have a single "replace-in-place" UI. In this UI, tabs are absent and the opened task flow occupies the entire local content region of the page. The tabContext.setMainContent API provides for that use case.

```java
/* Copyright (c) 2008, 2010, Oracle and/or its affiliates. All rights reserved. */
tabContext.setMainContent("/some/path/to/taskflow#taskflowId");
```

Note: The single task flow (i.e., replace-in-place) and tabbed task flows cannot be used concurrently on the same page. An attempt to call this API when tabbed task flows are showing, or to launch a new tabbed task flow when the content region is showing, will result in an exception. It also makes for poor UI.

In addition to the Close icon, whose presence and behavior are provided automatically by the ADF UI Shell template, tabs and their task flows can be closed programmatically (e.g., from within the task flow) via the removeCurrentTab API.

```java
tabContext.removeCurrentTab();
```

Both the Close icon embedded in the UI Shell template and the removeCurrentTab method check whether the task flow within the selected tab is clean or dirty. Presently the model could be checked for dirty state; however, the model does not track how it was made dirty (e.g., by the current task flow or another). What this template provides at the ADF View level is the markCurrentTabDirty API to indicate that the current task flow is dirty by setting the style of the tab label to italic. This API could be triggered by a custom method that listens for a value change on the task flow shown within the tab. Conversely, after any changes have been committed (e.g., saved to the DB), this same API could be called to indicate a clean task flow by setting the style of the tab label back to normal.

```java
tabContext.markCurrentTabDirty(true);
```

By passing false to the markCurrentTabDirty API, the tab label style is set back to normal. As mentioned previously, both the embedded Close icon and the removeCurrentTab API check whether the current tab (i.e., task flow) is clean or dirty. However, there may be cases where there is a need to check whether the state of the current tab, or of any open tab, is dirty (e.g., prior to page navigation). The isCurrentTabDirty or isAnyTabDirty API can check for the corresponding case before navigating.
```java
/* Copyright (c) 2008, 2009, Oracle and/or its affiliates. All rights reserved. */
// check the dirty state of the current tab
boolean isCurrentTabDirty = tabContext.isCurrentTabDirty();
// check the dirty state of all the tabs (will return true if any tab is dirty)
boolean isAnyTabDirty = tabContext.isTagSetDirty();
```

Note that when referencing this method directly, isTagSetDirty has an unexpected spelling.

There are use cases where it is important to know whether any tabs are visible in the UI. For example, the UI may call for a particular toolbar item to enable or render when one or more tabs are present. The embedded Close icon exhibits this behavior automatically: it renders when one or more tabs appear. The selectedTabIndex API can be used in the Rendered or Disabled property of any UI construct for that purpose.

```java
tabContext.selectedTabIndex();
```

The tab index is zero based, so if any tabs are present, the value of selectedTabIndex will be zero or greater.

To access TabContext methods from within a task flow, define a parameter for TabContext in the task flow. The UI Shell template passes in TabContext. The task flow XML file should look similar to this:

```xml
<input-parameter-definition>
  <name>tabContext</name>
  <class>oracle.ui.pattern.dynamicShell.TabContext</class>
  <required/>
</input-parameter-definition>
```

This places the TabContext in the pageFlowScope.

The UI Shell has four principal areas or partitions. Each will be discussed in turn and illustrated in summary.

Global Area

The global area extends across the full width at the top of the UI Shell. It provides a consistent and persistent context for the user. It is the portion of the template "boilerplate" that provides the user with a sense of place. It should contain the UI controls that, in general, drive the contents of the other two "work" areas of the UI.
Facets and attributes are provided in this area to suggest what those UI controls and displays should be, including: Logo, Branding Bar, Global Links, Global Toolbar, Global Search, and Global Tab.

Navigational Area
The navigational area uses either the navigation facet (e.g., the left pane of the UI Shell) or the innerToolbar facet (a toolbar placed between the global tab and the inner content region). It can have controls that, in general, drive the contents of the local area. In other words, users should get the sense, after extended use of an application, that the navigational area controls act upon the local area and its contents.

Local Area
The local area is in the center of the UI Shell. It's where the user performs the bulk of the tasks. In other words, it is the main work area and typically takes a transactional form with UI controls and displays. Consequently, the splitter that divides the left pane navigational area from the local area was designed to favor the local area. That is, out of the box, significantly more real estate is allocated to the local area and, by default, the splitter closes to the left (i.e., all remaining space is allocated to the local area). The dynamic tabs method and the single replace-in-place method of presenting task flows in the local area are described under Tab Methods.

Legal Area
The legal area extends the full width of the UI Shell at the bottom. It is the portion of the template "boilerplate" where legal notices, statements, and policies relevant to the application can appear. Like the global area, it can contain UI controls that navigate to other locations. Facets are provided in this area to suggest what those UI controls and displays should be, including Copyright, About, Privacy, and so forth.

Launch Methods
Task flows can be launched by any UI construct into the local area via APIs provided by the ADF UI Shell template.
In this pattern, three different examples of those constructs are given and illustrated: the left hand side (LHS) pane, the inner toolbar, and from within a task flow. When the user selects a link or button in the LHS pane, a taskflow opens in the Local Area. Alternatively, the user can select a menu item or toolbar option to open a taskflow in the Local Area. The last alternative launch method is the user selecting a UI control in the current taskflow that launches a subsequent taskflow in the local area.

Close Methods
Task flows can be closed via APIs provided by the ADF UI Shell template. Three different examples of those constructs are given: the embedded Close icon, the inner toolbar Close option, and a Close button within the task flow. The embedded close icon (no additional coding required) renders on the same horizontal plane to the far right of the page when a taskflow is opened within a tab. The user activates the close icon on the selected taskflow (i.e., tab), which is dismissed from view. With the inner toolbar close option, the user activates a close menu or toolbar item on the selected taskflow (i.e., tab), which is then dismissed from view. A close button within the UI of the taskflow allows the user to dismiss that taskflow in the local area prior to, or at the conclusion of, the taskflow. Of course, the taskflow could "auto-dismiss" at its conclusion. However, that is "bad" UI. The opening and closing of a user's task within a taskflow should always be under the control of the user.

"Dirty" & "Clean" State Indication
When the state of user data changes within a task flow, that state change is indicated by changing the text style of the tab label from normal to italic (e.g., the Next Activity). This is most effective in UIs where the user will alternate between several taskflows across multiple tabs in the process of completing a task (e.g., copying data between task flows).
In so doing, it is helpful to remind the user -- particularly after a task interruption -- which taskflow has changed data (i.e., is dirty). When all changed user data is committed (i.e., saved), the taskflow is "clean." At that point, the text style of the tab label toggles from italic back to normal. When the user attempts to close a task flow whose state is "dirty", the user receives a warning that the pending action could result in data loss. At that point the user should have a choice of one of two options. Both options are viable to the user. In the first case, the user has made changes that he or she wishes to abandon. In the second case, the user may have inadvertently or unwittingly triggered a close on the task flow and has no desire to abandon recent changes. The unsaved data warning that the user receives automatically at run time is a feature of the UI Shell in cases where a tab is being closed while in a dirty state. Note: This feature is not available for cases outside of this template. This feature should not be confused with the unsaved data warning on page navigation. That feature is not automatic but must be enabled at design time (DT) on a page-by-page basis (e.g., each page based on the UI Shell template). When enabled, it provides a warning to the user in cases of navigation at the page level -- as opposed to the region level -- where pending changes have not been committed and the state of the page will be reset. In this case as well, either the user has made changes that he or she wishes to abandon, or the user inadvertently or unwittingly triggered a close at the page level and has no desire to abandon recent changes. The warning dialog presented in this case affords the user the same two choices. This feature is available in all cases, that is, with or without the use of the UI Shell template.
For a description of how to enable this feature, read the Warning on Unsaved Changes Functional UI Pattern on OTN at.

The UI Shell template supports two tab methods: dynamic tabs or no tabs. Dynamic tabs let the user spawn a new tab for each taskflow instance and then close it when, for example, the task has been completed. Taskflows can be launched within dynamic tabs by any UI construct (e.g., the left hand side pane, a toolbar, or from within another taskflow). The tabs are dynamic in the sense that they appear to the user when the contained task flow is launched and disappear when the contained task flow is closed. Multiple instances of the same bounded taskflow can open in dynamic tabs. Create New is a typical use case where multiple instances of the same bounded taskflow are desired. More typically, a single instance of a bounded taskflow opens in a dynamic tab. On the first invocation of taskflows of this type, a tab containing that taskflow appears. Any subsequent invocation of any of those taskflows, while they remain open, shifts the selected state to that taskflow. Finally, there is the option of a replace-in-place single instance bounded taskflow. On the first invocation of a taskflow of this type, a single instance taskflow appears occupying the entire local content area (i.e., no tab). Any subsequent invocation of any taskflow results in a replace-in-place user experience (i.e., the subsequent taskflow also occupies the entire local content area, replacing the previous taskflow). Note: This method can only be used exclusively of the other two (i.e., Dynamic Tabs for Multiple Instances and Dynamic Tabs for Single Instances) per page.

Too Many Tabs Warning Dialog
When the requested number of open task flows (i.e., tabs) exceeds the maximum allowed by the UI Shell (15), the user automatically receives a "too many tabs" warning dialog. The dialog title text and body text are attributes of the UI Shell and can be specified at design time.
Detecting the Presence of Tabs
It is possible to determine whether any tabs are open in the UI. This is useful for enabling or rendering any UI construct that has either the Disabled or Rendered property. The Close icon is an example use of this API; it is not rendered until one or more tabs are present. One example would be to enable or disable menu items on the presence or absence of tabs. Another would be to render or hide toolbar buttons on the presence or absence of tabs or taskflows.

Adding a Welcome Page
Often an application will have an introductory, "start", or "welcome" page. This can be the default UI for the user visiting the application. The UI Shell template has a single facet that can be used for this purpose. It is identified as the welcome facet. Content within the welcome facet displays when no other content (e.g., taskflows) is open on the page. Thus, its content appears as the default when the page is displayed or when all taskflows are closed.

To recreate the demonstration application used to illustrate the UI Shell template, follow the steps below. The implementation of this demonstration is divided into 5 major steps.

Create Application Workspace
To start the development of a demonstration application, enter UIShell, or other appropriate name, for the Application Name; uiShellModel and uiShellViewController for the Model and ViewController, respectively, or other appropriate names; and src, or other appropriate prefix.

Create the Demonstration Application Unbounded Task Flow
The demonstration application is composed of three JSPX pages, each represented as a global tab.

Create the Page Flow for the Global Tabs
The Control Flow Cases between the pages will be specified in the adfc-config.xml file. Name the view activities First, Second, and Third, respectively. Drag a Control Flow Case from the Component Palette and add a case from the wild card control flow rule to the First activity, named First; to the Second activity, named Second; and to the Third activity, named Third.
When completed, the adfc-config.xml diagram should appear as illustrated.

Create JSPX Files Based on the UI Shell Template
Each page will be created based on the UI Shell template in turn. Although not fully illustrated here, each page is assumed, in practice, to represent a single application to the user. The goal here is to create three pages whereby the major features and UI methods offered by the UI Shell template can be demonstrated.

Creating Pages
In this section, the page associated with each view activity will be created in turn, starting with First. Double-click the First view activity and enter First.jspx for the File Name. The DT view of the UI Shell template should appear in the Visual Editor. Enter First Page for the Title property.

Next, the page associated with the second view activity will be created, called Second. In the adfc-config.xml file, create the Second page by double-clicking the Second view activity. Enter Second.jspx for the File Name. The DT view of the UI Shell template should appear in the Visual Editor. Enter Second Page for the Title property.

Finally, the page associated with the third view activity will be created, called Third. In the adfc-config.xml file, create the Third page by double-clicking the Third view activity. Enter Third.jspx for the File Name. The DT view of the UI Shell template should appear in the Visual Editor. Enter Third Page for the Title property.

Add Global Tabs to Each Page
UI tabs are added to allow the user to navigate between pages of the application. There are three pages in this sample application. To add global tabs to the first page, in the globalTabs facet in the Visual Editor enter First for the Text property (selected: true), Second for the Text property, and Third for the Text property. To add global tabs to the second page, in the globalTabs facet in the Visual Editor enter First for the Text property, Second for the Text property (selected: true), and Third for the Text property. To add global tabs to the third page, in the globalTabs facet in the Visual Editor enter First for the Text property,
Second for the Text property, and Third for the Text property (selected: true).

Create Page Fragments and Taskflows
Taskflows can be called by contextual events from the navigational area to be rendered in the local content area. A taskflow may also be called from another taskflow within the local content area. In both cases, the result could be a taskflow within a dynamic tab (i.e., single instance, multiple instance) or within a replace-in-place panel that occupies the entire local area. Taskflows themselves could be comprised of a single page fragment or multiple page fragments. In either case, the taskflows are bounded and, as a collection, can present nearly all, if not all, of the value add defining an application. For demonstration purposes only, four taskflows will be created. Each will contain a single page fragment.

Create a taskflow to use as a multiple instance
To create the taskflow to demonstrate dynamic tabs of multiple instances, enter new.xml for the task flow File Name. The Directory should default to C:\JDeveloper\mywork\UIShell\uiShellViewController\public_html\WEB-INF\flows; this is a convenience to improve the workspace organization. Name the view activity multiple and the fragment multiple. Enter This is a new activity! for the Value property, styled with color:Purple; font-size:x-large; font-weight:bold;.

Create three task flows and associated page fragments to use as single instance dynamic tabs
To create the first taskflow and associated fragment, enter first.xml for the task flow name. The Directory should default to C:\JDeveloper\mywork\UIShell\uiShellViewController\public_html\WEB-INF\flows; if not, use the Browse button to select WEB-INF > flows. Name the view activity one and open the one view activity (e.g., double-click). The Directory field should default to C:\JDeveloper\mywork\UIShell\uiShellViewController\public_html\fragments; if not, use the Browse button to select fragments.

To create the second taskflow and associated fragment, enter second.xml for the task flow name. Name the view activity two and open the two view activity (e.g., double-click).
The Directory field should default to C:\JDeveloper\mywork\UIShell\uiShellViewController\public_html\fragments; if not, use the Browse button to select fragments. Set the layout to vertical. Enter The second activity! for the Value property, styled with color:Orange; font-size:x-large; font-weight:bold;. Enter Third Activity in the Text property.

To create the third taskflow and associated fragment, enter third.xml for the task flow name. Name the view activity three and open the three view activity (e.g., double-click). The Directory field should default to C:\JDeveloper\mywork\UIShell\uiShellViewController\public_html\fragments; if not, use the Browse button to select fragments. Set the layout to vertical. Enter The third activity! for the Value property, styled with color:Blue; font-size:x-large; font-weight:bold;. Enter Finish This Activity in the Text property.

Create Managed Bean
Note: Access to the APIs within TabContext is available only after a page within the application workspace has been added based on the Oracle Dynamic Tabs Shell template. Pages based on the Oracle Dynamic Tabs Shell template were created earlier. To create a managed bean whereby the TabContext APIs can be called more conveniently, enter Launcher for the Class Name and src.view for the Package name, or other appropriate package name. Add the following to the Launcher.java file and save the file.

/* Copyright (c) 2009, 2010, Oracle and/or its affiliates. All rights reserved. */
package src.view;

import javax.faces.event.ActionEvent;
import oracle.ui.pattern.dynamicShell.TabContext;

/**
 * Launcher is a backingBean-scope managed bean. The public methods are
 * available to EL. The methods call TabContext APIs available to the
 * Dynamic Tab Shell Template. The boolean value for _launchActivity
 * determines whether another tab instance is created or selected. Each tab
 * (i.e., task flow) is tracked by ID. The title is the tab label.
 */
public class Launcher {

    public void multipleInstanceActivity(ActionEvent actionEvent) {
        /**
         * Example method that, when called repeatedly, will open another instance as
         * opposed to selecting a previously opened tab instance.
         * Note the boolean
         * to create another tab instance is set to true.
         */
        _launchActivity("A New Activity", "/WEB-INF/flows/new.xml#new", true);
    }

    public void launchFirstActivity(ActionEvent actionEvent) {
        /**
         * Example method to call a single instance task flow. Note the boolean
         * to create another tab instance is set to false. The taskflow ID is used
         * to track whether to create a new tab or select an existing one.
         */
        _launchActivity("The First Activity", "/WEB-INF/flows/first.xml#first", false);
    }

    public void launchSecondActivity(ActionEvent actionEvent) {
        _launchActivity("Next Activity", "/WEB-INF/flows/second.xml#second", false);
    }

    public void launchThirdActivity(ActionEvent actionEvent) {
        _launchActivity("Third Activity", "/WEB-INF/flows/third.xml#third", false);
    }

    public void closeCurrentActivity(ActionEvent actionEvent) {
        TabContext tabContext = TabContext.getCurrentInstance();
        int tabIndex = tabContext.getSelectedTabIndex();
        if (tabIndex != -1) {
            tabContext.removeTab(tabIndex);
        }
    }

    public void currentTabDirty(ActionEvent e) {
        /**
         * When called, marks the current tab "dirty". Only at the View level
         * is it possible to mark a tab dirty since the model level does not
         * track to which tab data belongs.
         */
        TabContext tabContext = TabContext.getCurrentInstance();
        tabContext.markCurrentTabDirty(true);
    }

    public void currentTabClean(ActionEvent e) {
        TabContext tabContext = TabContext.getCurrentInstance();
        tabContext.markCurrentTabDirty(false);
    }

    private void _launchActivity(String title, String taskflowId, boolean newTab) {
        try {
            if (newTab) {
                TabContext.getCurrentInstance().addTab(title, taskflowId);
            } else {
                TabContext.getCurrentInstance().addOrSelectTab(title, taskflowId);
            }
        } catch (TabContext.TabOverflowException toe) {
            // causes a dialog to be displayed to the user saying that there are
            // too many tabs open - the new tab will not be opened...
            toe.handleDefault();
        }
    }

    public void launchFirstReplaceNPlace(ActionEvent actionEvent) {
        TabContext tabContext = TabContext.getCurrentInstance();
        try {
            tabContext.setMainContent("/WEB-INF/flows/first.xml#first");
        } catch (TabContext.TabContentAreaDirtyException toe) {
            // TODO: warn user TabContext api needed for this use case.
        }
    }

    public void launchSecondReplaceNPlace(ActionEvent actionEvent) {
        TabContext tabContext = TabContext.getCurrentInstance();
        try {
            tabContext.setMainContent("/WEB-INF/flows/second.xml#second");
        } catch (TabContext.TabContentAreaDirtyException toe) {
            // TODO: warn user TabContext api needed for this use case.
        }
    }
}

A reference to the managed bean should be added to the adfc-config.xml file. This will allow the pages to pass control to the methods called within Launcher to realize the dynamic tab behavior. To accomplish this, enter launcher for the Name, src.view.Launcher for the Class, and backingBean for the Scope.

Call Task Flows as Dynamic Tabs from Managed Bean
The methods within Launcher are accessible via EL. Each page will demonstrate a different method within Launcher. To call three taskflows, each as a dynamic tab via a managed bean in the navigation pane in the first page: for demonstration purposes, components comprising a UI will be nested within the navigation facet. However, it would be better practice to create a navigation taskflow. This would allow, for example, the convenience of adding it as a dynamic region to this facet. The result could be different navigation options based on the role or responsibility of the authenticated user -- each dynamic region containing a different navigation taskflow. The Decorative Box adds rounded corners to its children. It also supports changing the rendered theme of its children. In other words, it can act as a look-and-feel transition between areas on a page. In this case, it distinguishes the Navigation Area from the Local Content Area. Set the layout to vertical. Enter Choose your activity for the Value property, styled with
font-size:large; font-weight:bold;. Enter Start First Activity for the Text property. Choose launcher in the Managed Bean drop down and launchFirstActivity in the Method drop down. The Edit Property: ActionListener dialog will insert the following expression (EL): #{backingBeanScope.launcher.launchFirstActivity}. Similar EL will be inserted with each use of the dialog for the Managed Bean launcher. true. Enter Start Next Thing for the Text property. true. Enter Execute Third Task for the Text property; choose launcher in the Managed Bean drop down and launchThirdActivity in the Method drop down. true.

The Second page will demonstrate closing a taskflow from the toolbar, as well as marking a tab dirty and then clean. To call three taskflows, each as a dynamic tab via a managed bean in the navigation toolbar in the second page: set the layout to vertical. The Group component is necessary here to align both menubar components and toolbar components horizontally in the same toolbar. Enter Action for the menu text and New for the first menu item; choose launcher in the Managed Bean drop down and multipleInstanceActivity in the Method drop down. Enter Next Activity for the Text property; choose launcher in the Managed Bean drop down and launchSecondActivity in the Method drop down. Enter Final Task for the Text property; choose launcher in the Managed Bean drop down and launchThirdActivity in the Method drop down. For the Close item, choose launcher in the Managed Bean drop down and closeCurrentActivity in the Method drop down, and set the Disabled property to #{viewScope.tabContext.selectedTabIndex < 0}. The viewScope object tabContext.selectedTabIndex is made available via the TabContext class. All available scope objects can be inspected via the Expression Builder ADF Managed Beans tree node on any page based on the UI Shell template. When tabContext.selectedTabIndex is less than zero (i.e., no dynamic tabs are open), the Close menu option is disabled. Optionally, textual toolbar buttons, iconic toolbar buttons (if appropriate icons are available), or additional menu items (except for the first toolbar button) could be added.
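Pulling together the registration and wiring described above, here is a hedged sketch. The element names follow standard adfc-config and ADF Faces conventions, but the surrounding layout components are omitted and the exact attributes should be verified in JDeveloper's Property Inspector:

```xml
<!-- adfc-config.xml: the backingBean-scoped launcher bean described above -->
<managed-bean>
  <managed-bean-name>launcher</managed-bean-name>
  <managed-bean-class>src.view.Launcher</managed-bean-class>
  <managed-bean-scope>backingBean</managed-bean-scope>
</managed-bean>

<!-- Page: an Action menu wired to the bean via the EL shown in this section.
     Note that the < operator must be escaped as &lt; inside an XML attribute. -->
<af:menu text="Action">
  <af:commandMenuItem text="New"
      actionListener="#{backingBeanScope.launcher.multipleInstanceActivity}"/>
  <af:commandMenuItem text="Close"
      actionListener="#{backingBeanScope.launcher.closeCurrentActivity}"
      disabled="#{viewScope.tabContext.selectedTabIndex &lt; 0}"/>
</af:menu>
```

The disabled expression is the same one the pattern applies to the Close menu option, so the item greys out automatically whenever no dynamic tabs are open.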
The following steps assume icons are available for the enabled, disabled, mouseover, and mousedown states. Choose launcher in the Managed Bean drop down and multipleInstanceActivity in the Method drop down. Then choose launcher in the Managed Bean drop down and currentTabDirty in the Method drop down. A more appropriate UI approach might be to set the tab dirty with a method call from a value change listener on the input component(s). Both behaviors are good UI for toolbars depending on the larger context and UI goals. In either case, the same method (i.e., rendered, disabled) should be used for functions that are highly correlated. Choose launcher in the Managed Bean drop down and currentTabClean in the Method drop down. A more appropriate UI approach might be to set the tab clean with a method call from a button (e.g., submit, save). Both behaviors are good UI for toolbars depending on the larger context and UI goals. In either case, the same method (i.e., rendered, disabled) should be used for functions that are highly correlated.

Note: Since a replace-in-place method cannot be called from a page that has dynamic tabs open (not only would it throw an exception, it is a poor UI method), the Third page will be used. To call three taskflows, each as a replace-in-place UI via a managed bean in the navigation toolbar in the third page: set the layout to vertical. Enter Action for the menu text. Enter First Activity for the Text property; choose launcher in the Managed Bean drop down and launchFirstReplaceNPlace in the Method drop down. Enter Second Activity for the Text property; choose launcher in the Managed Bean drop down and launchSecondReplaceNPlace in the Method drop down. The taskflow called has within it a button that, when activated, launches a separate taskflow. If the current taskflow is in a clean state, that subsequent taskflow will be called as a replace-in-place taskflow whether that call uses the TabContext.setMainContent or the TabContext.getCurrentInstance API.
When the subsequent taskflow has a return or is closed, the calling taskflow is rendered. If the current taskflow is dirty, the call to the subsequent taskflow fails silently. This is a known issue. Choose launcher in the Managed Bean drop down and closeCurrentActivity in the Method drop down, and set the Disabled property to #{viewScope.tabContext.selectedTabIndex < 0}.

Note: If the calling taskflow is open when the called taskflow is closed, selection returns to the calling taskflow when the replace-in-place method is used. When the dynamic tab method is used, the selection returns to the first called task flow. To call a taskflow as a dynamic tab from another taskflow via a managed bean, enter tabContext for the Name and oracle.ui.pattern.dynamicShell.TabContext for the Class. Adding the input parameter allows access to TabContext methods from within a task flow. Enter Third Activity for the Text property; choose launcher in the Managed Bean drop down and launchThirdActivity in the Method drop down.

To call a taskflow as a replace-in-place UI from another taskflow via a managed bean, call taskflow three from within taskflow two with the TabContext.setMainContent API from within the Third.jspx page. Because taskflow two will not "close" when taskflow three is called, navigation is transferred back to taskflow two when taskflow three is eventually closed.

Note: If a taskflow is in a dirty state, the UI Shell template, as with other close methods (e.g., Close icon, Close menu item), will automatically raise an unsaved data warning dialog to the user. To close a taskflow from within, prior to completion, via a managed bean, enter tabContext for the Name and oracle.ui.pattern.dynamicShell.TabContext for the Class. Adding the input parameter allows access to TabContext methods from within a task flow. Enter Finish This Activity for the Text property; choose launcher in the Managed Bean drop down and closeCurrentActivity in the Method drop down.

An optional "Welcome Page" can be added to each page of the demo. It is advisable to add content to the welcome facet of each page based on the UI Shell template.
Create an Optional Welcome Taskflow
The following steps describe adding Welcome content to First.jspx only. Enter welcome.xml for the task flow File Name. The Directory should default to C:\JDeveloper\mywork\UIShell\uiShellViewController\public_html\WEB-INF\flows; this is a convenience to improve the workspace organization. Name the view activity you and the fragment you. Set the alignment to center and the layout to vertical. Enter Welcome to Our Application! in the Value field, styled with color:Blue; font-size:x-large; font-weight:bold;. 30. Add the result to the welcome facet of First.jspx. The Welcome page should appear within First.jspx.

Run Application
At this point, run the demonstration application from any of the jspx pages (i.e., First, Second, Third) to see the various behaviors of the UI Shell. Instead of completing these steps, an ADF application on OTN called UIShell is available at. It can be unzipped into the designated work area and opened with JDeveloper. Simply download UIShell.zip, unzip the archive, and open UIShell.jws with JDeveloper. This pattern has been tested against JDeveloper Studio 12c (12.1.2.0.0).

Use Case Example for Using the Dynamic Tab Shell Template Source Directly
As stated in Pattern Implementation, when a page is based on the Oracle Dynamic Tab Shell template, the template and its associated ADF artifacts and Java classes are added to the JDeveloper workspace (i.e., the ADF application) by reference. There are use cases where it is more convenient to include these ADF artifacts and Java classes in a workspace (i.e., *.jws) directly. For example, the content for the facets (e.g., global links, legal notices) for a series of pages based on the template may all be identical. Instead of copying that content over and over to each created page based on the template, it may be more efficient to add that content once to the dynamicTabShell.jspx (i.e., the template file) and then deploy that workspace as an ADF library for reuse. The modified template is then available to team members with those modifications in place.
The Oracle Dynamic Tab Shell template source is available as a JDeveloper workspace on OTN within an archive at. Separate from the dynamic tab feature provided by the ADF UI Shell, contemporary browsers give the user the ability to open multiple tabs within the browser. Each browser tab can view a different URL, allowing the user to browse different websites simultaneously, or even the same website multiple times. While the implementation of this is browser specific, currently all contemporary browsers, including Apple's Safari, Google's Chrome, Microsoft's Internet Explorer, and Mozilla's Firefox, do not maintain separate network sessions to websites for each browser tab. Rather, if a user is visiting the same website multiple times across browser tabs, the same user session will be used. On the server side, Oracle's ADF does provide the ability to support multiple browser tabs sharing the same session, though it is left up to the ADF programmer to decide whether the application should support this through the correct use of pageFlowScope beans. While it does seem desirable to support this by default, allowing the user to surf your application multiple times simultaneously can impact the load on your system, so you need to consider your options. The ADF UI Shell does not support multiple browser tabs by default. To enable multi browser tab support for the ADF UI Shell in your application, you must add the context parameter USE_PAGEFLOW_TAB_TRACKING, set to true, to your web.xml.

There is a known issue with the Oracle Dynamic Shell Template: the unsaved data warning dialog is unavailable with TabContext.setMainContent.
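The enablement step described above amounts to a single web.xml entry. A sketch, assuming the parameter name given in the text (verify it against your ADF version):

```xml
<!-- web.xml: opt in to per-browser-tab page-flow tracking for the UI Shell -->
<context-param>
  <param-name>USE_PAGEFLOW_TAB_TRACKING</param-name>
  <param-value>true</param-value>
</context-param>
```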
Exploring Java BitSet

BitSet is a class defined in the java.util package. It creates an array of bits represented by boolean values. The size of the array is flexible and can grow to accommodate additional bits as needed. Because it is an array, the bit values can be accessed by non-negative integers as an index. The interesting aspect of BitSet is that it makes it easy to create and manipulate bit sets that basically represent a set of boolean flags. This article provides the necessary details on how to use this API, with appropriate examples in Java.

The BitSet Class
The BitSet class provides two constructors: a no-argument constructor to create an empty BitSet object, and a one-argument constructor taking an integer that represents the number of bits in the BitSet.

- BitSet(): Creates an empty instance of the BitSet class.
- BitSet(int noOfBits): Creates an instance of the BitSet class with an initial size of the integer argument, representing the number of bits.

The default value of each bit in a BitSet is boolean false, with an underlying representation of 0 (off). A bit position in the BitSet array can be set to 1 (on), that is true, with the help of the set method, whose argument is the index of the bit. The index is zero-based, similar to an array. Once you call the clear method, the bit values are set back to false. To access a specific value in the BitSet, the get method is used with an integer argument as an index.

BitSet Methods
The class also provides methods for common bit manipulation using bitwise logical AND, bitwise logical OR, and bitwise logical exclusive OR with the and, or, and xor methods, respectively. For example, assume that there are two BitSet instances, bit1 and bit2. Then the statement bit1.and(bit2) will perform a bitwise logical AND operation. Similarly, bit1.or(bit2) will perform a bitwise logical OR operation, and bit1.xor(bit2) will perform a bitwise logical XOR operation. The result will be stored in bit1.
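Before the larger example below, here is a minimal self-contained sketch of the basic operations just described. The class name BitSetBasics and the helper andOf are hypothetical names introduced for illustration:

```java
import java.util.BitSet;

// Minimal sketch of the basic BitSet operations described above:
// set, get, clear, cardinality, length, and the mutating and() method.
public class BitSetBasics {

    // Hypothetical helper: returns the AND of two sets without mutating either.
    static BitSet andOf(BitSet a, BitSet b) {
        BitSet copy = (BitSet) a.clone(); // and() mutates its receiver, so work on a copy
        copy.and(b);
        return copy;
    }

    public static void main(String[] args) {
        BitSet flags = new BitSet(8);    // all bits start as false (0)
        flags.set(1);
        flags.set(3);
        System.out.println(flags.get(1));        // true
        System.out.println(flags.get(0));        // false
        System.out.println(flags.cardinality()); // 2 (number of bits that are on)
        System.out.println(flags.length());      // 4 (highest set bit + 1)

        BitSet other = new BitSet(8);
        other.set(3);
        other.set(5);
        System.out.println(andOf(flags, other)); // {3}

        flags.clear();                           // all bits back to false
        System.out.println(flags.isEmpty());     // true
    }
}
```

Note that and, or, and xor all mutate the instance they are invoked on, which is why the helper clones first.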
If there are more bits in bit2 than in bit1, the additional bits of bit2 are ignored. As a result, the size of bit1 remains unchanged even after the result of the bitwise operation is stored. In fact, the bitwise operations are performed in a logical bit-by-bit fashion. The size method returns the number of bits of space actually in use by the BitSet. There is another method, called length, that returns the logical size of the BitSet; that is, the index of the highest set bit, plus 1. Two BitSets can be compared for equality with the equals method. They are equal if and only if they are the same bit by bit.

An Example of Bit Manipulation

package org.mano.example;

import java.util.BitSet;
import java.util.Random;

public class Main {
    public static int N_BITS = 16;

    public static void main(String[] args) {
        BitSet b1 = new BitSet(N_BITS);
        BitSet b2 = new BitSet(N_BITS);
        printBits("initial bit pattern of b1: ", b1);
        printBits("initial bit pattern of b2: ", b2);
        setRandomBits(b1);
        setRandomBits(b2);
        printBits("After random bit set of b1: ", b1);
        printBits("After random bit set of b2: ", b2);
        b2.and(b1);
        printBits("b2 AND b1, b2 = ", b2);
        System.out.println("No. of set values in b1=" + b1.cardinality());
        System.out.println("No. of set values in b2=" + b2.cardinality());
        b1.or(b2);
        printBits("b1 OR b2, b1 = ", b1);
        b2.xor(b1);
        printBits("b2 XOR b1, b2 = ", b2);
        printBits("b1 = ", b1);
        System.out.println("indexes where bit is set in b1 " + b1.toString());
        printBits("b2 = ", b2);
        System.out.println("indexes where bit is set in b2 " + b2.toString());
    }

    public static void setRandomBits(BitSet b) {
        Random r = new Random();
        for (int i = 0; i < N_BITS / 2; i++)
            b.set(r.nextInt(N_BITS));
    }

    public static void printBits(String prompt, BitSet b) {
        System.out.print(prompt + " ");
        for (int i = 0; i < N_BITS; i++) {
            System.out.print(b.get(i) ?
"1" : "0"); } System.out.println(); } } Output initial bit pattern of b1: 0000000000000000 initial bit pattern of b2: 0000000000000000 After random bit set of b1: 1110001011001000 After random bit set of b2: 0110000101110000 b2 AND b1, b2 = 0110000001000000 No. of set values in b1=7 No. of set values in b2=3 b1 OR b2, b1 = 1110001011001000 b2 XOR b1, b2 = 1000001010001000 b1 = 1110001011001000 indexes where bit is set in b1 {0, 1, 2, 6, 8, 9, 12} b2 = 1000001010001000 indexes where bit is set in b2 {0, 6, 8, 12} Using BitSet in the Algorithm "Sieve of Sundaram" Let's implement a simple algorithm called Sieve of Sundaram, a variation that's more efficient than Sieve of Eratosthenes, to find out a list of prime numbers within a range using BitSet. Quick Idea of the Algorithm Apart from finding prime number by brute techniques, the algorithm "Sieve of Eratosthenes" is quite intriguing. But here, we shall implement a variation of that algorithm discovered by mathematician S.P. Sundaram in 1934. Hence, it is called "Sieve of Sundaram." The idea is to cross out the numbers of the form, Figure 1: The "Sieve of Sundaram" from a list of integers ranging from 1 to n. The rest of the numbers are incremented by 1. Finally, we get the list containing all the odd prime numbers below 2n+2 (all except 2). The main difference between Eratosthenes' method and Sundaram's method is that Sundaram removes numbers that are: Figure 2: The "Sieve of Sundaram" removes numbers from the equation This is the key variation that led to an efficient Eratosthenes sieve algorithm. I'm skipping the details, as they is out of scope here. Interested readers may refer here: for more details on these algorithms. Implementing the Algorithm in Java package org.mano.example; import java.util.BitSet; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner scanner = new Scanner(System.in); System.out.println("Enter the range. 
Any number greater than 2: "); int input = scanner.nextInt(); if (input < 2) System.out.println("Invalid number. Program will exit"); else generatePrime(input); scanner.close(); } public static void generatePrime(int max) { int counter = 1; // Because the number can be 2n+2 for a given n // and we want a prime number less than n, // we reduce it to half int bSize = (max - 2) / 2; // BitSet created with a specific size // with default value initialized as false BitSet bitSet = new BitSet(bSize); // set the index number of the form // (i + j + 2ij) as true such that 1<=i<=j // this is the main logic of the sieve of sundaram for (int i = 1; i <= bSize; i++) for (int j = i; (i + j + 2 * i * j) <= bSize; j++) bitSet.set(i + j + 2 * i * j); // explicitly 2 is printed because // odd prime numbers below 2n+2 excludes 2 if (max > 2) System.out.print("2\t"); // Now print the odd prime list, with a little // formatting for eye-candy. for (int i = 1; i <= bSize; i++) { if (bitSet.get(i) == false) { System.out.print((2 * i + 1)); System.out.print(++counter % 9 == 0 ? "\n" : "\t"); } } } } Output Enter the range. Any number greater than 2: 500 Conclusion BitSet is convenient class for bit manipulation. Individual bits are represented by boolean true and false values arranged in an array fashion. The method set sets the specified bit to a "on" state and the clear method sets the specified bit to the "off" state. The method get returns true if the bit is on and false if the bit is off. The and, or, or xor method of BitSet performs a bit-by-bit logical AND, OR, and XOR between BitSets, respectively. The result is stored in the BitSet instance that invoked the method.
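The crossing-out rule described above can be checked by hand for a small n. This sketch of our own (the class name is not from the article) marks i + j + 2ij for n = 10, then maps each surviving k to the odd prime 2k + 1:

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

public class SundaramCheck {
    public static void main(String[] args) {
        int n = 10;                          // yields the odd primes below 2n + 2 = 22
        BitSet crossed = new BitSet(n + 1);

        // Cross out every number of the form i + j + 2ij with 1 <= i <= j.
        // For n = 10 this marks only 4, 7, and 10.
        for (int i = 1; i <= n; i++)
            for (int j = i; i + j + 2 * i * j <= n; j++)
                crossed.set(i + j + 2 * i * j);

        List<Integer> primes = new ArrayList<>();
        primes.add(2);                       // 2 is added explicitly, as in the article
        for (int k = 1; k <= n; k++)
            if (!crossed.get(k))
                primes.add(2 * k + 1);       // each surviving k maps to the odd prime 2k + 1

        System.out.println(primes);          // [2, 3, 5, 7, 11, 13, 17, 19]
    }
}
```

The output matches the full list of primes below 22, which is a quick sanity check on the sieve logic used in generatePrime.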
http://www.developer.com/java/data/exploring-java-bitset.html
26 September 2013 16:20 [Source: ICIS news] HOUSTON (ICIS)--KiOR is pursuing a $225m (€167m) project to build a second cellulosic fuel plant that would double capacity at its site in Columbus.

The company has received an aggregated commitment of up to $50m from US-based capital firm Khosla Ventures and businessman Vinod Khosla. KiOR will break ground on Columbus II within 90 days of raising sufficient equity and debt capital, with construction and start-up expected to take 18 months.

The project would enable KiOR to make progress more quickly toward its long-term goal of 92 gal (348 litres) of hydrocarbon fuels per bone dry ton of biomass, or over 150 gal of ethanol equivalent. KiOR expects the improvements in its technology to facilitate the use of a wider range of less expensive feedstocks, such as railroad ties.

,” said Fred Cannon, CEO at KiOR.

Earlier this month, KiOR announced that its 2013 production total at the

Meanwhile, the company is expecting to break ground on its standard-scale commercial production facility
http://www.icis.com/Articles/2013/09/26/9710013/us-kior-to-build-second-cellulosic-fuel-plant-in-columbus.html
Talk:USGS GNIS Contents import progress Anyone know how the other states are going? Is there any time frame? Anything I can do to help? I noticed today that GNIS data for San Francisco has appeared in the last few days, so California must be ongoing -- StellanL 09:32, 11 March 2009 (UTC) replacing gnis node with a polygon Some of the features (like cemeteries, schools, islands) would be better represented by a polygon. Should we be copying GNIS specific tags to the entire polygon, or leaving the gnis node? If we make new polygons and copy over GNIS data then we could use a Centroid algorithm for updating USGS as to the position of things. Tylerritchie 18:53, 3 July 2009 (UTC) wetlands vs. wetland For the GNIS import the plural form is used: Other people seem to be using the singular form: natural=wetland See also tag usage of natural=* at osmdoc [1]. It might be a good idea to standardise on the singular form, as it is more common in OSM to use a singular form when tagging. --Cartinus 17:37, 8 November 2009 (UTC) - You are right. I therefore changed the spelling of all ways and nodes tagged with "wetlands" to "wetland" (changeset 3916547). Could anyone make sure that future imports use "wetland"? --1248 18:02, 19 February 2010 (UTC) - I just changed it to natural=wetland + wetland=swamp. Any objections? --Wuzzy 17:48, 4 December 2012 (UTC) Class = "Ridge" Imported? Have all GNIS classes been imported? I have noticed at least two classes that do not show up in an area where other GNIS features appear. One is "Ridge", the other is "Pillar" (as in a rock formation). For those of us that spend time in the back country, these are important features. Tool to check GNIS database against OSM I was wondering if there is a tool to check the GNIS database against OSM? It has been almost two years since the database was imported, and I think it would be useful to find out what has changed between that import and the current GNIS database and what has changed in OSM. 
Looking in my area, there are a number of features that seem to have been replaced with an area (i.e. for lakes), but the gnis:feature_id wasn't copied over. The simplest thing to do would be to spit out a list of what's missing in GNIS vs. OSM, but much more could be done. - Joshdoe 18:40, 31 January 2011 (UTC)
http://wiki.openstreetmap.org/wiki/Talk:USGS_GNIS
user control ambiguous in the namespace

Discussion in 'ASP .Net' started by g
http://www.thecodingforums.com/threads/user-control-ambiguous-in-the-namespace.519571/
IRC log of dnt on 2012-08-15 Timestamps are in UTC. 15:41:44 [RRSAgent] RRSAgent has joined #dnt 15:41:44 [RRSAgent] logging to 15:41:52 [aleecia] Zakim, this will be dnt 15:41:52 [Zakim] ok, aleecia; I see T&S_Track(dnt)12:00PM scheduled to start in 19 minutes 15:41:58 [aleecia] chair: schunter 15:42:08 [aleecia] rrsagent, make logs public 15:42:13 [aleecia] agenda? 15:42:20 [aleecia] zakim, clear agenda 15:42:20 [Zakim] agenda cleared 15:42:49 [Chris_IAB] Chris_IAB has joined #dnt 15:43:20 [Chris_IAB] I will be joining the call today, probably via Skype in about 15-min. 15:46:41 [aleecia] Great, Chris! 15:47:32 [aleecia] agenda+ Selection of scribe 15:47:42 [aleecia] agenda+ Review of overdue action items: 15:47:53 [aleecia] agenda+ Any comments on published minutes 15:48:11 [aleecia] agenda+ Quick check that callers are identified 15:48:47 [aleecia] agenda+ summary by Roy on the changes to the TPE document (diff: ) 15:49:24 [aleecia] agenda+ Pending agreement: 15:49:40 [aleecia] agenda+ Open: 15:50:06 [aleecia] agenda+ Newly raised 1: 15:50:22 [aleecia] agenda+ Newly raised 2: 15:50:44 [aleecia] agenda+ Newly raised 3: 15:50:54 [Zakim] T&S_Track(dnt)12:00PM has now started 15:50:58 [aleecia] agenda+ Announce next meeting & adjourn 15:51:01 [Zakim] + +1.650.200.aaaa 15:51:04 [npdoty] npdoty has joined #dnt 15:51:12 [aleecia] agenda? 15:51:28 [Zakim] - +1.650.200.aaaa 15:51:31 [Zakim] T&S_Track(dnt)12:00PM has ended 15:51:32 [Zakim] Attendees were +1.650.200.aaaa 15:51:33 [David] David has joined #dnt 15:51:55 [schunter1] schunter1 has joined #dnt 15:52:18 [aleecia] Ok, we should be all set. 15:52:28 [schunter1] thanks aleecia! 15:53:01 [Zakim] T&S_Track(dnt)12:00PM has now started 15:53:08 [Zakim] +schunter 15:53:21 [schunter1] It worked for me (you are the 1st participant). 15:53:32 [schunter1] The passcode is only valid 10min before the start (AFAIR). 15:53:48 [Zakim] +npdoty 15:53:49 [aleecia] Yes. 
15:54:05 [aleecia] Uh, yes on first 10 minutes, not yes on having issues 15:54:27 [aleecia] I've run into that a few times :-) 15:54:36 [fielding] fielding has joined #dnt 15:54:42 [Zakim] +aleecia 15:54:57 [aleecia] hi (muted) 15:55:29 [Zakim] +fielding 15:55:42 [dsinger] dsinger has joined #dnt 15:55:45 [BrendanIAB] BrendanIAB has joined #dnt 15:56:13 [rvaneijk] rvaneijk has joined #dnt 15:56:40 [efelten] efelten has joined #dnt 15:56:58 [Lia] Lia has joined #dnt 15:57:02 [Zakim] +BerinSzoka 15:57:23 [Zakim] +felten 15:57:59 [Zakim] + +1.650.200.aaaa 15:58:14 [alex] alex has joined #dnt 15:58:24 [Zakim] +rvaneijk 15:58:44 [rvaneijk] will be joining the call in about 15 minutes... 15:58:44 [Zakim] +??P17 15:58:57 [aleecia] Please mute 15:59:02 [aleecia] thank you 15:59:07 [Zakim] +alex 15:59:16 [Joanne] Joanne has joined #DNT 15:59:19 [aleecia] Who called in via Skype or similar? 15:59:20 [Zakim] -npdoty 15:59:30 [JC] JC has joined #DNT 15:59:30 [amyc] amyc has joined #dnt 15:59:34 [Zakim] +npdoty 15:59:36 [schunter1] Nick: Do you want to do the de-anonymisation procedure? 15:59:37 [efelten] Zakim, felten is efelten 15:59:37 [Zakim] +efelten; got it 15:59:43 [Chris_IAB] Chris_IAB has joined #dnt 15:59:46 [justin_] justin_ has joined #dnt 15:59:53 [Chris_IAB] just joined via Skype 15:59:55 [aleecia] (perhaps Chris?) 15:59:57 [aleecia] great 16:00:06 [npdoty] Zakim, ??P17 is Chris_IAB 16:00:06 [Zakim] +Chris_IAB; got it 16:00:08 [Zakim] +hhalpin 16:00:14 [aleecia] zakim, who is on the call? 16:00:14 [Zakim] On the phone I see schunter, aleecia, fielding, BerinSzoka, efelten, +1.650.200.aaaa, rvaneijk, Chris_IAB, alex, npdoty, hhalpin 16:00:16 [Zakim] +[Microsoft] 16:00:21 [Chris_IAB] fyi- am on mute, as I'm joining from an off-site meeting 16:00:22 [Zakim] + +1.415.520.aabb 16:00:23 [Zakim] +[Microsoft.a] 16:00:24 [Zakim] +justin_ 16:00:39 [dsinger] dsinger has joined #dnt 16:00:40 [aleecia] Chris, so noted. We'll want an update on the action item you have due. 
16:00:45 [Zakim] +AnnaLong 16:00:46 [vincent] vincent has joined #dnt 16:00:48 [Joanne] Zakim, aabb is Joanne 16:00:48 [Zakim] +Joanne; got it 16:00:51 [BrendanIAB] BrendanIAB just joined via Skype, but I didn't see me scroll by. 16:00:52 [samsilberman] samsilberman has joined #dnt 16:00:57 [Zakim] +[Google] 16:01:00 [AnnaLong] AnnaLong has joined #dnt 16:01:03 [aleecia] I don't see you either, Brendan 16:01:03 [npdoty] Zakim, aaaa is DavidMcMillan 16:01:03 [Zakim] +DavidMcMillan; got it 16:01:09 [Chris_IAB] Aleecia, which action item do I have due? 16:01:19 [vinay] vinay has joined #dnt 16:01:25 [JamesB] JamesB has joined #dnt 16:01:29 [hwest] hwest has joined #dnt 16:01:32 [damiano] damiano has joined #dnt 16:01:34 [Zakim] +[GVoice] 16:01:35 [Zakim] + +1.917.934.aacc 16:01:35 [David] zakim - its actually MacMillan (with an 'a' in Mac) 16:01:37 [aleecia] Wrong Chris, sorry 16:01:43 [Chris_IAB] np 16:01:47 [vinay] zakim, aacc is vinay 16:01:47 [Zakim] +vinay; got it 16:01:52 [Zakim] + +1.703.438.aadd 16:01:52 [hwest_] hwest_ has joined #dnt 16:01:52 [ifette] ifette has joined #dnt 16:01:55 [efelten] Zakim DavidMcMillan is DavidMacMillan 16:01:57 [ifette] rrsagent, bookmark? 16:01:57 [RRSAgent] See 16:02:02 [Zakim] + +1.212.565.aaee 16:02:02 [aleecia] Yes, none of that made any sense - need coffee. 16:02:08 [Zakim] +samsilberman 16:02:10 [npdoty] Zakim, DavidMcMillan is really DavidMacMillan 16:02:10 [Zakim] +DavidMacMillan; got it 16:02:13 [Zakim] +??P31 16:02:27 [ifette] Zakim, google has ifette 16:02:27 [Zakim] +ifette; got it 16:02:35 [vincent] zakim, ??P31 is vincent 16:02:35 [Zakim] +vincent; got it 16:02:43 [Zakim] +Brooks 16:02:44 [npdoty] schunter: sent out a list of issues I think are resolved in the draft 16:02:46 [aleecia] agenda? 16:02:52 [Brooks] Brooks has joined #dnt 16:02:54 [aleecia] scribenick: aleecia 16:02:54 [AN] AN has joined #dnt 16:03:01 [npdoty] ... 
haven't seen any comments on the issues planning to close 16:03:13 [aleecia] Matthias: Listed issues to close based on the document, no comments. 16:03:15 [Zakim] +dsinger 16:03:21 [schunter1] scribe anybody? 16:03:49 [Zakim] +WileyS 16:03:55 [Zakim] +dwainberg 16:03:58 [aleecia] I can scribe again if needed 16:04:04 [aleecia] But did last time :-) 16:04:05 [WileyS] WileyS has joined #DNT 16:04:06 [jmayer] jmayer has joined #dnt 16:04:07 [Zakim] + +1.646.827.aaff 16:04:10 [schunter1] Brendan? 16:04:21 [BrendanIAB] I annot scribe today 16:04:26 [JC] I will 16:04:30 [aleecia] thanks, JC 16:04:31 [schunter1] Thanks a lot! 16:04:34 [aleecia] scribenick: JC 16:04:34 [dwainberg] dwainberg has joined #dnt 16:04:36 [Zakim] +jmayer 16:04:37 [adrianba] adrianba has joined #dnt 16:04:39 [npdoty] scribenick: JC 16:05:04 [aleecia] close agendum 1 16:05:15 [Zakim] +[Microsoft.aa] 16:05:17 [JC] shcunter: Looking at overdue action items, there are 4 16:05:24 [adrianba] zakim, [Microsoft.aa] is me 16:05:24 [Zakim] +adrianba; got it 16:05:33 [cblouch] cblouch has joined #dnt 16:05:40 [JC] ... action 225 Heather? 16:05:48 [JC] Heather: I'm not finished yet 16:05:57 [npdoty] action-225? 16:05:57 [trackbot] ACTION-225 -- Heather West to propose an alternative definition of first party (based on ownership? alternative to inference?) -- due 2012-08-01 -- OPEN 16:05:57 [trackbot] 16:06:06 [Zakim] +jeffwilson 16:06:08 [JC] ... will finish next week or we can drop it. 16:06:14 [aleecia] updated. 16:06:23 [JC] Schunter: action 229 Rigo? 16:06:28 [npdoty] action-229? 16:06:28 [trackbot] Getting info on ACTION-229 failed - alert sysreq of a possible bug 16:06:31 [WileyS] I only got comments to Chris last night so I think we need another week 16:06:35 [aleecia] I'll send email to ping. 16:06:40 [WileyS] Thank you 16:06:40 [aleecia] Shane, thanks for the info 16:06:47 [JC] ... will send reminder on Action 229 16:07:00 [JC] ... 
action 232 David 16:07:04 [aleecia] Getting it out by Friday would let people read it in time for the next call 16:07:12 [JC] dwainberg: will finish next week 16:07:12 [aleecia] I'll see what Chris thinks & cc you 16:07:34 [JC] Schunter: that closes action items. Any other issues? 16:07:37 [npdoty] trackbot, reload 16:07:55 [npdoty] Zakim, who is on the phone? 16:07:55 [Zakim] On the phone I see schunter, aleecia, fielding, BerinSzoka, efelten, DavidMacMillan, rvaneijk, Chris_IAB, alex, npdoty, hhalpin, [Microsoft], Joanne, [Microsoft.a], justin_, 16:07:56 [JC] ... Callers identified? 16:07:58 [aleecia] close agendum 2 16:07:58 [Zakim] ... AnnaLong, [Google], [GVoice], vinay, +1.703.438.aadd, +1.212.565.aaee, samsilberman, vincent, Brooks, dsinger, WileyS, dwainberg, +1.646.827.aaff, jmayer, adrianba, jeffwilson 16:07:58 [Zakim] [Google] has ifette 16:08:04 [Zakim] +chapell 16:08:07 [sidstamm] sidstamm has joined #dnt 16:08:11 [JC] Npdoty: Still checking a couple numbers. 16:08:23 [damiano] Damiano Fusco from Nielsen, on Google Talk 16:08:25 [aleecia] close agendum 4 16:08:36 [npdoty] Zakim, aadd is RichardWeaver 16:08:36 [Zakim] +RichardWeaver; got it 16:08:46 [npdoty] Zakim, aaff is Matt_AppNexus 16:08:46 [Zakim] +Matt_AppNexus; got it 16:08:47 [chapell] chapell has joined #dnt 16:08:48 [JC] Are we fully confirmed on next f2f? 16:08:51 [efelten] 212 is New York 16:09:04 [hwest] Zakim, aaee has hwest 16:09:04 [Zakim] +hwest; got it 16:09:10 [chapell] 917 is chapell 16:09:12 [dwainberg] dwainberg has joined #dnt 16:09:14 [aleecia] agenda? 16:09:23 [npdoty] Zakim, [GVoice] is damiano 16:09:24 [Zakim] +damiano; got it 16:09:38 [Zakim] +[Mozilla] 16:09:40 [JC] Schunter: Next agenda item is TPE changes 16:09:41 [sidstamm] Zakim, Mozilla has sidstamm 16:09:41 [Zakim] +sidstamm; got it 16:09:47 [fielding] 16:10:41 [Simon] Simon has joined #dnt 16:11:16 [JC] Fielding: I sent around the changes earlier. Major changes in section 4.3 & 5 16:11:58 [JC] ... 
enables JS to ask for an exception or to enable APIs to ask for exception 16:12:01 [Zakim] + +1.303.661.aagg 16:12:09 [JC] ... section 5 is big change 16:12:39 [johnsimpson] johnsimpson has joined #dnt 16:12:47 [JC] ... holds tracking status value. N none 1 first part 3 third party 16:13:01 [JC] ... claim by origin server stating this is how I operate 16:13:16 [JC] ... doesn't indicate how it is used because it may not be known 16:13:19 [jmayer] We discussed this on the last call, and I thought we had agreement it's a Compliance issue. 16:13:30 [Zakim] +johnsimpson 16:13:36 [Zakim] +tl 16:13:40 [ksmith] ksmith has joined #DNT 16:13:43 [schunter1] What does "this" in your sentence refer to? 16:13:49 [JC] ... the actual choice will be in header field or tracking resouce 16:13:57 [johnsimpson] apologies was stuck in LA traffic 16:14:33 [jmayer] Consent != first party. There are some limits on first parties, and there may be limits to the consent. 16:14:44 [JC] ... other response is consent. If consent is answer then a link to consent controlling resource is necessary 16:15:19 [JC] ... another response could indicate that a consent may have changed for monitoring cache changes 16:15:47 [JC] ... I also moved some text around. 16:16:01 [JC] ... section 5.4 is about the same. 16:16:04 [npdoty] regarding "C", consent, the current spec says sites SHOULD provide a control URI in such a case. (I had thought earlier we had agreed on MUST, but would have to check.) 16:16:45 [JC] ... changed partner array to third-party array for clarity and consistency. 16:17:07 [JC] ... received and response member has been removed. 16:17:12 [Zakim] + +385221aahh 16:17:18 [WileyS] Nick, we stayed with a "SHOULD" in discussion as a full out-of-band experience wouldn't require a control URI (meaning the entire experience occurs outside of the DNT context - consent and control - and this only serves as a reminder to the user) 16:17:22 [JC] ... qualifiers nobody liked so they have been removed. 
16:17:32 [jmayer] Could you say what that just meant? 16:17:55 [dsinger] (all the qualifiers indicating claims of permissions, etc. are gone) 16:18:13 [JC] ... section 5 the section on status codes, 16:18:14 [jmayer] npdoty, I thought it was a MUST too - that was the compromise. 16:18:15 [aleecia] q? 16:18:41 [WileyS] Jmayer, how is consent a compromise? 16:18:41 [JC] Schunter: lets take questions before moving to Dsinger 16:18:56 [JC] ... you have one week to respond to issues 16:19:09 [dwainberg] q+ 16:19:18 [npdoty] schunter: reminder to raise any issues with closing the list of issues by the 20th; let schunter know if you need more time 16:19:20 [schunter1] q? 16:19:41 [JC] Jmayer: Could you clarify the removal of qualifiers? 16:20:10 [ksmith] zakim, aahh is ksmith 16:20:10 [Zakim] +ksmith; got it 16:20:16 [schunter1] q? 16:20:32 [aleecia] It seems to me that it's very premature to drop things for lack of expected implementations. 16:20:45 [JC] Fielding: I couldn't find anyone who wanted to define them for every resource 16:21:02 [schunter1] The main justification was that we discussed and agreed in seattle. 16:21:13 [JC] ... the only person who wanted it on the client was Tom, but if no one implements why bother. 16:21:13 [ifette] q+ 16:21:32 [aleecia] I'd expect post-LC to be a time we'd find out about implementations at earliest 16:21:35 [JC] Jmayer: How do we manage issues that are important, but no one wants to work on it? 16:21:42 [jmayer] Um, no JC. 16:21:47 [kj] kj has joined #dnt 16:21:59 [fielding] right, chair called it in seattle 16:22:28 [jmayer] Jmayer: Maybe this stays in the spec, maybe it doesn't. But some people care about it. So we should have more process than a unilateral decision by an editor. 
16:22:29 [dsinger] q+ to ask about my email 16:22:52 [schunter1] ack dwainberg 16:22:53 [JC] Schunter: Roy implemented based on Seattle discussions 16:22:55 [aleecia] I believe you're hearing sustained disagreement with that approach, Matthias 16:23:03 [fielding] and the text was a proposal from me, not consensus from the group 16:23:21 [JC] Dwainberg: was is the tracking status of n and how to state that no tracking is occurring 16:23:35 [aleecia] We've been reviewing the text from Ninja on that. 16:23:39 [aleecia] On the compliance side. 16:23:43 [npdoty] I heard agreement in Seattle that we would want a definition of such a term. 16:23:54 [aleecia] We talked about that 2-3 calls ago 16:24:21 [adrianba] ACTION-110? 16:24:21 [trackbot] ACTION-110 -- Ninja Marnau to write proposal text for what it means to "not track" -- due 2012-02-10 -- PENDINGREVIEW 16:24:21 [trackbot] 16:24:21 [tl] +q to point out that this is not what we agreed. 16:24:37 [aleecia] Or, Adrian can find it - thanks! 16:24:43 [JC] Schunter: there are three levels of tracking N not tracking 1 first party 3 third parthy 16:24:47 [schunter1] q? 16:24:51 [schunter1] ack ifette 16:25:27 [Chris_IAB] for better :) 16:25:35 [JC] Ifette: We spent a lot of time defining DNT 0 & 1, exceptions a questions was asked about implementation and no one said yes 16:25:51 [aleecia] Quite. 16:26:00 [jmayer] Stay on topic... 16:26:03 [JC] ... from a process standpoint is it worth spending time on issues if no one is willing to implement things 16:26:19 [JC] ... that makes me worry 16:26:22 [dsinger] for us, the devil is in the details, indeed 16:26:44 [sidstamm] yes, agreed dsinger 16:26:48 [JC] Schunter: I hope people consider what is likely to be implemented or not. But for now it is just hearsay 16:26:57 [JC] ... we should try to reach consensus 16:27:17 [Chris_IAB] yes, but how does that affect compliance? 
16:27:18 [WileyS] If there is no exception framework I don't see why industry would implement this standard 16:27:26 [JC] ... we should not automatically kill an idea because some people say they won't implement it 16:27:36 [schunter1] q? 16:27:47 [JC] ... Ifette what did you mean no one implements dnt:0 16:27:48 [BrendanIAB] There is a difference between "nobody will use" vs "nobody will implement" 16:28:01 [Chris_IAB] Mozilla did say they were going to implement DNT:0 16:28:04 [JC] Ifette: No one agreed to implement it or have currently 16:28:06 [aleecia] Ian is asking specifically about browsers, and it *has* been implemented 16:28:16 [dsinger] no-one really explained why a *general preference* for dnt:0 makes sense. dnt:0 for exceptions does make sense 16:28:16 [jmayer] Actually, I have implemented a prototype of exceptions, and Mozilla said they're looking into it. 16:28:22 [JC] ... let's see what happens when Aleecia sends out poll 16:28:26 [aleecia] It's built into b2g, and it's on the roadmap for FF 16:28:50 [schunter1] q? 16:28:53 [Chris_IAB] should we survey all the browser makers on this? 16:28:55 [JC] Schunter: Let's not kill the feature unless all browser vendors say no 16:28:57 [schunter1] ack dsinger 16:28:57 [Zakim] dsinger, you wanted to ask about my email 16:29:02 [fielding] WileyS, to implement DNT you only need DNT and any consent mechanism -- it is far easier for us to use cookies as a consent mechanism than cookies for 90% of browsers and a half-baked API for the other 10% 16:29:14 [dsinger] 16:29:22 [JC] Dsinger: End of July I sent out questions using WKR and others, but no one responded 16:29:43 [JC] ... what do we need resource and tracking header to answer? 16:30:05 [JC] Schunter: Dsinger still wants answers to email? 16:30:37 [johnsimpson] David, can you resdend the email? 
16:30:43 [JC] Dsinger: I would like to encourage peole to respond otherwise it is hard to design it 16:30:52 [dsinger] archived here, johnsimpson 16:30:59 [schunter1] q? 16:31:00 [aleecia] q? 16:31:04 [schunter1] ack tl 16:31:04 [Zakim] tl, you wanted to point out that this is not what we agreed. 16:31:06 [fielding] dsinger, I would like to move those questions and answers to section 5.6 16:31:19 [npdoty] action: schunter to follow-up re: David, regarding purposes of the WKR 16:31:20 [trackbot] Created ACTION-238 - Follow-up re: David, regarding purposes of the WKR [on Matthias Schunter - due 2012-08-22]. 16:31:22 [Chris_IAB] Schunter1, re: Ian's point, I would suggest that this working group survey (even privately) if major browsers and UAs will implement DNT:0, so as to avoid unnessesary work on this... 16:31:47 [JC] TL: I want to reiterate point about procedure. Maybe Roy is right or perhaps not, but in Seattle we agreed to a specific format and had consensus 16:32:05 [WileyS] Roy, if cookies were enough then the current opt-out structure is just fine and DNT is not needed 16:32:08 [JC] ... it would make more sense to have a document that reflects our consensus 16:32:29 [npdoty] from the Bellevue minutes: <aleecia> AGREED: fields become part of optional member of tracking status resource 16:32:36 [JC] ... if there are changes lets have that discussion rather than having editor drop it on floor 16:32:47 [ifette] +1 fielding 16:32:54 [rvaneijk] I had an interest as well, tl was not alone :) 16:32:57 [JC] Ifette: we never had consensus on that issue 16:33:19 [JC] Fielding: of Ifette 16:33:22 [tl] +q 16:33:33 [aleecia] Nick, can you grab additional context to note what "fields" these are? 16:33:48 [JC] ... if you want something in the document create an issue and we can add it back 16:34:00 [tedleung] tedleung has joined #dnt 16:34:04 [JC] TL: we had it in the document based on a whiteboarding session 16:34:18 [JC] ... 
after the 25 minute session we had consensus 16:34:26 [Zakim] +tedleung 16:34:34 [jmayer] One year in, we still don't have a clear process for accepting edits. How wonderful. 16:34:39 [JC] Schunter: We should look at the minutes and make a decsion 16:34:57 [JC] ... if you disagree with the proposal then make a counter proposal 16:35:17 [dsinger] sounds like Roy may be mistaken in his perception of what the consensus was on qualifiers; maybe we should re-confirm that. The question is, is the onus on those who want them out, or on those who want them in? 16:35:21 [JC] ... we may have changed our mind or missed a consensus. In the end we should have text we call all live with 16:36:11 [schunter1] q? 16:36:16 [aleecia] what? 16:36:18 [JC] TL: we had a process where we discussed what we were going to do. Reproposing things is not a good approach. 16:36:34 [ifette] q+ 16:36:38 [justin_] Wait, editors can decide whose opinion matters? Awesome. 16:36:41 [tl] ack tl 16:36:53 [npdoty] aleecia, I believe we're referring to the list of qualifiers for permitted uses (which would need to be updated) 16:36:58 [JC] Dsinger: Is the onus on the editor or the people who don't want text removed. 16:36:59 [WileyS] David, you weren't at the meeting but I believe the consensus was to remove the fields 16:37:11 [JC] Fielding: let's work with chair on issue 16:37:20 [fielding] I was told to remove the fields by the CHAIR 16:37:24 [ifette] q- 16:37:42 [JC] Schunter: Let's repropose as needed. I would look at minutes and see what I can find. 16:37:47 [npdoty] I thought we had agreed to drop them (the permitted use qualifier fields) from the header and make them optional in the WKR 16:37:57 [WileyS] +1 to Nick 16:38:04 [amyc] agree with nick and shane 16:38:04 [JC] ... 
if not in minutes let's work together to fix text 16:38:04 [WileyS] that's exactly my memory as well 16:38:09 [aleecia] With sustained objections 16:38:31 [dsinger] for the record, I was disturbed to see them completely gone, but I was not in Seattle 16:38:45 [JC] ??? 16:38:50 [fielding] Then propose text to make them optional in the resource -- I have no such text and am not going to waste my time on it any further. 16:38:52 [WileyS] David, not completely gone - moved to an optional element 16:38:55 [rvaneijk] Minutes: matthias: we have consensus to remove the tokens except for p 16:38:59 [rvaneijk] 16:39:19 [JC] Schunter: I will go through minutes to see what I can find. Nick will add actoin. 16:39:24 [fielding] p is now C 16:39:37 [schunter1] q? 16:39:41 [JC] Schunter: any question on Fieldings update? 16:39:52 [JC] ... David provide update 16:39:53 [npdoty] action: schunter to review minutes regarding permitted use qualifiers 16:39:53 [trackbot] Created ACTION-239 - Review minutes regarding permitted use qualifiers [on Matthias Schunter - due 2012-08-22]. 16:40:51 [Chapell] Chapell has joined #DNT 16:40:59 [JC] Dsinger: minor change to reconfirmation of exceptions 16:41:36 [vincent] what if he reject the exception request, shoudl we ask again? 16:41:48 [JC] ... big change resolving tention between people who want explicit list of third parties and giving peole ability to modify list 16:42:58 [JC] ... is this mechanism operational or not. Can user agent deal with explicit list or deal with them as a site-wide exception. this should be reviewed 16:43:53 [JC] ... what do we tell the first party. It could add itself to third-party list and get DNT:0, but seems like bad idea. Maybe modify header to handle. 16:44:50 [JC] ... for remove call I simplified it by making it a general removal to clean state and put back needed exceptions 16:45:40 [JC] ... web-wide exception not changed. Added section on API for user's general tracking preference. 16:45:55 [schunter1] q? 
16:46:17 [JC] ... what if exception request is rejected? 16:47:01 [ifette] q+ 16:47:36 [JC] ???: User agent must know when exceptions are granted 16:47:43 [npdoty] s/???/vincent/ 16:47:53 [JC] Dsinger: also must be able to know when exceptions are removed 16:48:08 [JC] ... the return callback indicates if the exception was granted or not 16:48:12 [ifette] q? 16:48:26 [schunter1] q? 16:48:29 [npdoty] I think vincent is perhaps noting that the user agent might remember that the user has rejected this request before and not bother the user? 16:48:29 [schunter1] ack ifette 16:48:41 [vincent] yes that's it :) 16:48:44 [jmayer] +q 16:48:52 [vincent] thx npdoty, ifette 16:49:10 [npdoty] sites can use cookies and other mechanisms to remember what happened the last time they did something 16:49:10 [JC] Ifette: How does user agent know when to ask if exception is still granted? 16:49:22 [JC] Cookies don't work very well 16:49:25 [dsriedel] dsriedel has joined #dnt 16:49:31 [aleecia] q? 16:49:32 [jmayer] -q 16:49:41 [JC] ... how do we track when an exception is not granted 16:49:55 [Zakim] +dsriedel 16:49:57 [WileyS] Nick, if cookies were enough then the current opt-out approach would be fine and DNT would not be needed 16:49:57 [ifette] it's not about how the UA handles it, it's whether there's any way for the site to handle it 16:50:02 [dsriedel] zakim, mute me 16:50:02 [Zakim] dsriedel should now be muted 16:50:13 [schunter1] q? 16:50:15 [JC] Schunter: do we want to change protocal or place requirements on UA? 
16:50:43 [JC] Ifette: The question is whether the site can know if it needs to ask for exception 16:50:59 [JC] Schunter: according to spec it is okay to cache response 16:51:13 [npdoty] WileyS, I'm not suggesting use of cookies for opt-out, just if a site wanted to remember a rejected request from a past interaction, the way sites will continue to use cookies to remember other preferences 16:51:17 [WileyS] Its up to the site to determine how many times it wants to request an exception 16:51:19 [JC] Dsinger: is not clear on cookieing the user. It is not suggest not forbidden 16:51:36 [WileyS] They can use any mechanism they desire 16:51:54 [npdoty] +1, up to the site's design 16:52:04 [JC] Dsinger: this is a site design question that could be a rathole for us 16:52:12 [JC] ... do we need an issue? 16:52:14 [WileyS] +q 16:52:15 [jmayer] Yep. I don't think a cookie like "HaveAskedForException=True" would raise objections. 16:52:28 [aleecia] ack WileyS 16:52:29 [schunter1] q? 16:52:42 [BrendanIAB] If the UA can cache the user response, and the site cannot determine if it has received a cached response or a direct user response, there is the possibility of problem with regards to server response. 16:52:48 [aleecia] +1 16:52:51 [JC] WileyS: We should speak to it directly to idicate server can implement mechanism of it choice to remember user choices 16:53:10 [jmayer] We should also be explicit about browsers limiting excessive requests. 16:53:15 [JC] ... if a site wants to ask user everytime that should be a fair outcome though not suggested. 16:53:22 [dsinger] action: dsinger to insert a note on how sites can avoid repeatedly asking the user for an exception 16:53:22 [trackbot] Sorry, couldn't find user - dsinger 16:53:25 [JC] Dsinger: I will drop a note to that effect 16:53:27 [jmayer] And that there are limits on the designs that might be allowed. 16:53:35 [efelten] Non-normative text giving some example implementation approaches? 
16:53:40 [WileyS] jmayer, use can leave site if they feel requests are excessive 16:53:40 [npdoty] action: singer to insert a note on how sites can avoid repeatedly asking the user for an exception 16:53:40 [trackbot] Created ACTION-240 - Insert a note on how sites can avoid repeatedly asking the user for an exception [on David Singer - due 2012-08-22]. 16:53:55 [WileyS] Jmayer, "user" can... 16:54:01 [jmayer] WileyS, requests might not originate from a site. 16:54:06 [jmayer] *first-party site. 16:54:31 [npdoty] issue-116? 16:54:31 [trackbot] ISSUE-116 -- How can we build a JS DOM property which doesn't allow inline JS to receive mixed signals? -- pending review 16:54:31 [trackbot] 16:54:33 [JC] Schunter: Issue 116 is there an agreement on status 16:54:49 [WileyS] jmayer, if a 3rd party is the source of the request then the first party will manage the issue if the requests are excessive (aka - kick the 3rd party off of their site) 16:55:01 [rvaneijk] Wiley, leaving a site because of excessive requests contradicts the element of free choice. It will definately become a problem in EU. 16:55:28 [schunter1] Not if we permit user agents to cache decisions. 16:55:31 [rvaneijk] s/definately/definitely 16:55:46 [WileyS] Rob, I disagree with the thought that free choice can be applied in this context from a EU legal perspective 16:55:47 [JC] Npdoty: We have the JS property. the value will be one was sent to the first party. third party should only use it if there not expecting an exception 16:55:51 [fielding] 16:56:13 [rvaneijk] I know we disagree 16:56:16 [JC] Schunter: what was disagreement and how we come to agreeable conclusion 16:56:30 [WileyS] rvaneijk, can you provide any case law that supports your position? :-) 16:56:39 [JC] Fielding: I don't know how to describe disagreement 16:56:43 [jmayer] WileyS, you are endlessly entertaining. We all know that many first parties have very little control over the third parties on their website. 
16:56:48 [rvaneijk] see my presentation on consent in brussels 16:56:54 [JC] Dsinger: what is problem 16:56:59 [Zakim] -samsilberman 16:57:24 [JC] Npdoty: difference between text and what was sent on ML 16:57:44 . 16:58:01 [fielding] trying to find Nick's message 16:58:14 [WileyS] jmayer, allow the free market to manage itself in this context versus building arbitrary definitions of "excessive" 16:58:15 [JC] Npdoty: header should be per sight and not general value 16:58:33 [WileyS] rvaneijk, I have reviewed it - not meaningful case law in this area 16:58:51 [ifette] q+ 16:58:53 [JC] ... it should reflect the value of the header originating the page request 16:59:05 [JC] Dsinger: can't there be a cross site scripting problem 16:59:06 [justin_] If I'm a publisher, and a third party is spamming my users, I'm going to find a way to put a stop to it. I don't see how adding "excessive" to a W3C spec is helpful. 16:59:24 [JC] Schunter: is there a need for feature? 16:59:32 [ifette] also, should it be off of navigator or window? 16:59:32 [schunter1] q? 16:59:37 [schunter1] ack ifette 16:59:52 [JC] Ifette: Should that be off Navigator or Window? 17:00:09 [adrianba] q+ 17:00:17 [JC] Npdoty: if top level page Window, otherwise Navigator 17:00:22 [sidstamm] +1 to npdoty … global setting should hang off navigator 17:00:37 [fielding] npdoty, I can't find your text on list -- did you send it just to editors? 17:00:40 [jmayer] Hanging from window and pegging to the frame location seems the most reasonable approach to me. 17:01:20 [schunter1] There is no concept of a "general preference" defined yet. User agents may use heuristics (reflecting user preference) to determine DNT;0 vs. 
DNT;1 17:01:32 [JC] Adrianba: we put things on Navigator because Window is the global namespace, and can cause conflicts with other names 17:01:33 [npdoty] fielding, my original proposal is at: and regarding our particular differences I sent just to you and dave, I think 17:02:53 [JC] Ifette: I understand name conflicts, I don't know how we solve, but I prefer it not on Navigator 17:02:58 [adrianba] agree with ifette - makes sense 17:03:06 [fielding] to be clear, the current text does not have the top-level origin part 17:03:13 [ifette] ifette: if it's a property of the origin and changes depending on what site 17:03:19 [JC] Schunter: Do we need this feature and for which use case? 17:03:30 [JC] Fielding: for js runing on a page 17:03:42 [ifette] ifette: depending on what site i'm on, then it's not really a property of the navigator but rather of the window. especially if an iframe on a different origin can discover something about the parent, that seems suboptimal 17:03:53 [JC] Dsinger: What is the use case for understand general preference 17:04:20 [JC] Fielding: so it can avoid sending header to sites that do not implement dnt 17:04:46 [JC] Npdoty: It can be valuable to know what value was received to avoid a call 17:05:09 [JC] Dsinger: It should be careful with interactions with sites that do not implement DNT 17:05:39 [JC] Schunter: a user can use a mechanism to indicate prefence, but do not want to obligate UA 17:05:50 [JC] Fielding: we have the concept 17:06:10 [npdoty] schunter: we don't have a defined concept of a general preference, user agent and user can use whatever heuristic they want 17:06:23 [npdoty] fielding: we do have the concept of being "enabled" 17:06:32 [dsinger] oh, a UA is allowed to say "european sites don't get DNT, Ugandan ones do" 17:06:50 [JC] Schunter: a user can send what it wants to sites as long as it can prove it reflects user preference 17:06:58 [JC] ... how do we move forward? 
17:07:09 [dsinger] propose that those who want to change something propose exact text changes? 17:07:13 [JC] ... we should reopen issue and collect use cases 17:07:37 [JC] Dsinger: giving we have a proposal lets make changes pending reviews 17:07:55 [JC] Schunter: so we should make proposals and counter proposals as needed 17:08:12 [schunter1] q? 17:08:18 [schunter1] ack adrianba 17:08:19 [adrianba] q- 17:08:21 [Zakim] -ksmith 17:08:37 [dsinger] to propose the changes you suggest... 17:08:37 [JC] Npdoty: I will take action to make suggested changes 17:09:06 [Zakim] -[Mozilla] 17:09:08 [npdoty] action: doty to propose changes regarding issue-116 (and also "general preference") 17:09:08 [trackbot] Could not create new action (failed to parse response from server) - please contact sysreq with the details of what happened. 17:09:08 [trackbot] Could not create new action (unparseable data in server response: local variable 'd' referenced before assignment) - please contact sysreq with the details of what happened. 17:09:11 [JC] Schunter: 137 is open 17:09:16 [npdoty] action: doty to propose changes regarding issue-116 (and also "general preference") 17:09:16 [trackbot] Could not create new action (failed to parse response from server) - please contact sysreq with the details of what happened. 17:09:16 [trackbot] Could not create new action (unparseable data in server response: local variable 'd' referenced before assignment) - please contact sysreq with the details of what happened. 17:09:48 [Zakim] -BerinSzoka 17:09:50 [JC] ... if service provide on page they should indicate they are part of first party or send something different 17:09:54 [JC] ... this is not closed 17:10:08 [dsinger] we sent a discussion document to the list, without reaction 17:10:17 [fielding] 17:10:28 [schunter1] q? 17:10:41 [schunter1] q? 17:10:43 [JC] Dsinger: hard to know where we are 17:10:56 [schunter1] q? 17:11:38 [schunter1] q? 17:12:00 . 
17:12:03 [JC] Fielding: I placed resolution of our discussion in IRc 17:12:06 [dsinger] discussion at 17:12:26 [JC] ... I explained why SP tag does not provide usefulness and it is an open issue. 17:12:32 [JC] ... people should review text 17:12:57 [JC] Dsinger: there is a difference between a hosting provider and a site acting on behalf of first party 17:13:27 [Zakim] -efelten 17:13:39 [schunter1] q? 17:13:50 [JC] Schunter: If a site uses a service provider it must satisfy constraints and indicate it is first party otherwise third party 17:14:05 [JC] Dsinger: the site can, but the user may disagree 17:14:10 [Zakim] +efelten 17:14:25 [JC] ... the site should indicate that it is acting as service provider 17:14:33 [npdoty] well, yimg.com is part of the 1st party even though it's a different domain name than yahoo.com 17:14:51 [aleecia] We've spent a long time talking about this and I thought we agreed that there is a difference between 1st party and acting as a 1st party but is a Service Provider 17:15:04 [aleecia] or, 3rd party acting as a different 3rd party 17:15:05 [JC] Schunter: That is a UA can not tell difference between 1P and SP 17:15:14 [aleecia] Yes. 17:15:20 [JC] ... the question is how do we indicate to UA 17:15:27 [aleecia] I thought we'd agreed to do so in Seattle 17:15:44 [JC] Fielding: there could be dozens of SP on major web sites 17:16:16 [jmayer] +q 17:16:18 [JC] Schunter: I agree with David on this. Analytics provide all the rule so they are part of the 1P. 17:16:39 [JC] ... that could be confusing to user 17:16:47 [JC] Fielding: that is a different issue 17:17:21 [JC] Schunter: how can a UA differentiation between first party and accidentaly included 3rd party 17:17:24 [npdoty] q? 17:17:28 [jmayer] I would prefer we resolve this now. 17:17:30 [dsinger] am happy to write up the issue/question 17:17:31 [JC] ... I will work with David on how to resolve 17:17:34 [dsinger] issue-137? 
17:17:34 [trackbot] ISSUE-137 -- Does hybrid tracking status need to distinguish between first party (1) and outsourcing service provider acting as a first party (s) -- pending review 17:17:34 [trackbot] 17:17:42 [schunter1] q? 17:17:47 [jmayer] It's been in the backlog for awhile, we have the right participants on the call. 17:17:52 [aleecia] SP can also be acting for a 3rd party 17:18:05 [JC] Fielding: will SP need to indicate that it is not first party in tracking status resource 17:18:45 [jmayer] q? 17:19:01 [JC] ... I added requirement that a SP is acting as first party domain must be run by first party or tracking must be provided and point to first party 17:19:02 [dsinger] q? 17:19:16 [ksmith] ksmith has left #DNT 17:19:21 [JC] ... must know when SP is acting as first party for main site or other site 17:19:34 [JC] ... hard to describe but text is in spec 17:19:38 [jmayer] q? 17:19:38 [johnsimpson] Where in spec? 17:19:42 [JC] Schunter: what is attribute 17:19:44 [Zakim] - +1.303.661.aagg 17:20:21 [npdoty] fielding, you're saying the user agent would need to check the `policy` element and if it re-directs to the domain name of the responsible first party? 17:20:25 [JC] Fielding: when acting as SP information is provided indicating who first party is 17:20:40 [fielding] If the designated resource is operated by a service provider acting as a first party, then the responsible first party is identified by the policy link or the owner of the origin server domain. 17:20:50 . 17:20:51 [dsinger] " 17:20:51 [schunter1] q? 17:21:01 [fielding] 17:21:04 [JC] Schunter: I will work with David to see how UA can make choice. New text should determine if flag is needed. 17:21:31 [BrendanIAB] Is it absolutely necessary for the UA to be able to determine in-transaction the state (1P/3P/SP) of the server with which they are communicating? 17:21:34 [fielding] it isn't an excpetion 17:21:42 [JC] Jmayer: some people feel SP needs to be known. 
Need to know how exception will be used. Getting rid of this is not workable outcome 17:21:50 . 17:21:51 [JC] ... there is not a lot of controversy 17:22:04 [dsinger] some sites might object to their providers effectively saying "I am Acme corp." vs. "I am acting solely on behalf of Acme corp." :-) 17:22:28 [JC] ... three scnarions. 1 send http request as 3rd party, 2 send something as 1st party, send something as SP, but not know for whom 17:22:57 [JC] ... need to indicate if acting as 1st or 3rd party. I would like to get this resolved know 17:23:40 [JC] Schunter: Okay I will draft an outline with David and everyone can respond, ok? 17:23:48 [aleecia] 7 minutes left 17:23:59 [JC] Dsinger: please send response on ML jmayer 17:24:10 [JC] Jmayer: so we cannot finish on call? 17:24:14 [aleecia] But I agree with David: Jonathan, that was uncommonly lucid, and could really help as a quick post 17:24:15 [jmayer] aleecia, well, at least we got a lot done in the prior 83 minutes. 17:24:18 [JC] Dsinger: Only 7 mins left 17:24:20 [fielding] 17:24:36 [aleecia] I hear you. 17:24:38 [JC] Schunter: Have new issues that came up that I would like to resolve 17:24:53 [WileyS] Link please? 17:24:54 [JC] ... issue 158 effect of redirect 17:25:00 [npdoty] 17:25:06 [schunter1] 17:25:08 [WileyS] Thank you Nick 17:25:24 [WileyS] And Mr. Schunter :-) 17:25:37 [JC] Dsinger: they are not considered. should be considered on top level domain and target 17:25:41 [fielding] object because that effectively kills auctions, right? 17:26:12 [WileyS] Site-wide vs. explicit-explicit exception? 17:26:19 [WileyS] If site-wide, this isn't an issue is it? 17:26:23 [JC] Npdoty: If DNT:0 needs to go with redirect needs to have site wide exception 17:26:24 [fielding] okay, never mind 17:26:43 [JC] Dsinger: yes, if you ask for site-wide exception you do not have problem 17:26:45 [WileyS] But we've not solved explicit-explicit, have we? Do we need to solve that first? 
17:26:52 [JC] Schunter: what happens if you do nothing? 17:27:19 [JC] Dsinger: we can drop corner case (auctions) for now 17:27:30 [npdoty] WileyS, singer presented an updated version of the exception proposal today, including an option to include a list in addition to the site-wide option 17:28:00 [npdoty] WileyS, in any case, you can ask for a site-wide exception if you need DNT:0 to be sent to all third parties, including re-directs related to auctions 17:28:15 [npdoty] user-generated content is another case where you might not know/trust all third parties 17:28:23 [JC] ... we should be fine with just asking for site-wide exceptions. 17:28:29 [Zakim] -[Microsoft] 17:28:34 [JC] Schunter: therefore we can close 158: 17:28:34 [npdoty] fine to close 158 17:28:37 [JC] ... closing 17:28:58 [Zakim] -dsriedel 17:29:06 [JC] ... leaving 159 and 160 and Raised 17:29:12 [JC] f2f firm??????????? 17:29:12 [npdoty] I think it makes sense to postpone 159, as suggested just now by singer 17:29:24 [Zakim] -Joanne 17:29:34 [johnsimpson] more details on F2F? 17:30:44 [Zakim] -hhalpin 17:31:04 [Zakim] -efelten 17:31:05 [Zakim] -vinay 17:31:05 [Zakim] -RichardWeaver 17:31:07 [Zakim] -tedleung 17:31:07 [Zakim] -[Google] 17:31:08 [Zakim] -johnsimpson 17:31:08 [Zakim] -vincent 17:31:09 [Zakim] - +1.212.565.aaee 17:31:09 [dsinger] thx for your patience 17:31:10 [npdoty] yes, we're confirmed on October 3-5 in Amsterdam, hosted by an IAB Netherlands member company 17:31:12 [Zakim] -DavidMacMillan 17:31:12 [ifette] rrsagent, list participants 17:31:12 [RRSAgent] I'm logging. I don't understand 'list participants', ifette. Try /msg RRSAgent help 17:31:13 [Zakim] -tl 17:31:15 [Zakim] -alex 17:31:15 [efelten] efelten has left #dnt 17:31:15 [ifette] zakim, list participants 17:31:16 [tedleung] tedleung has left #dnt 17:31:17 [fielding] rrsagent, list attendees 17:31:17 [RRSAgent] I'm logging. I don't understand 'list attendees', fielding. 
Try /msg RRSAgent help 17:31:17 [Zakim] -dwainberg 17:31:19 [Zakim] -Brooks 17:31:22 [Zakim] -jeffwilson 17:31:23 [Zakim] -[Microsoft.a] 17:31:23 [ifette] zakim, list attendees 17:31:25 [Zakim] -justin_ 17:31:27 [ifette] i'll get it one of these days :( 17:31:28 [Zakim] -Matt_AppNexus 17:31:30 [Zakim] -AnnaLong 17:31:31 [Zakim] -dsinger 17:31:33 :36 :31:39 :31:42 [Zakim] ... dsriedel 17:31:44 [Zakim] -WileyS 17:31:46 [ifette] rrsagent, draft minutes 17:31:46 [RRSAgent] I have made the request to generate ifette 17:31:46 [Zakim] -npdoty 17:31:46 [johnsimpson] johnsimpson has left #dnt 17:31:49 [Zakim] -damiano 17:31:50 [Zakim] -aleecia 17:31:53 [Zakim] -adrianba 17:31:55 :57 :32:04 :32:08 [Zakim] ... dsriedel 17:32:09 [Zakim] -rvaneijk 17:32:11 [Zakim] -Chris_IAB 17:32:14 [Zakim] -chapell 17:32:16 [Zakim] -fielding 17:32:17 [schunter1] David? 17:32:22 [cblouch] cblouch has left #dnt 17:32:34 [npdoty] action: doty to propose changes regarding issue-116 (and also "general preference") 17:32:34 [trackbot] Created ACTION-244 - Propose changes regarding issue-116 (and also "general preference") [on Nick Doty - due 2012-08-22]. 17:33:16 [dsinger_] dsinger_ has joined #dnt 17:33:50 [npdoty] action: schunter to review spec for indicating service provider relationship (with singer and mayer) and propose changes if necessary 17:33:50 [trackbot] Created ACTION-245 - Review spec for indicating service provider relationship (with singer and mayer) and propose changes if necessary [on Matthias Schunter - due 2012-08-22]. 17:33:51 [Zakim] -schunter 17:38:51 [Zakim] disconnecting the lone participant, jmayer, in T&S_Track(dnt)12:00PM 17:38:52 [Zakim] T&S_Track(dnt)12:00PM has ended 17:38:52 [Zakim] Attendees were schunter, npdoty, aleecia, fielding, BerinSzoka, +1.650.200.aaaa, rvaneijk, alex, efelten, Chris_IAB, hhalpin, [Microsoft], +1.415.520.aabb, justin_, AnnaLong, 17:38:52 [Zakim] ..., 17:38:54 [Zakim] ... 
jeffwilson, chapell, RichardWeaver, Matt_AppNexus, hwest, damiano, sidstamm, +1.303.661.aagg, johnsimpson, tl, +385221aahh, ksmith, tedleung, dsriedel 17:45:31 [adrianba] adrianba has left #dnt
http://www.w3.org/2012/08/15-dnt-irc
TwiML™ Voice: <Room>

Programmable Video Rooms are represented in TwiML through a new noun, <Room>, which you can specify while using the <Connect> verb. The <Room> noun allows you to connect to a named video conference Room and talk with other participants who are connected to that Room.

To connect a Programmable Voice call to a Room, use the <Room> noun with the UniqueName for the Room. You may choose the name of the Room; it is namespaced to your account only.

Connect to a Room

When an incoming phone call is made to a Twilio Phone Number, a developer can connect the call to a Twilio Video Room.

<Room> and <Connect> Usage Examples

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Connect>
    <Room>DailyStandup</Room>
  </Connect>
</Response>

Setting the participantIdentity

You can set a unique identity on the incoming caller using an optional property called 'participantIdentity'.

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Connect>
    <Room participantIdentity='alice'>DailyStandup</Room>
  </Connect>
</Response>

Note: If you don't set the participantIdentity, then Twilio will set a unique value as the Participant identity.
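Twilio's helper libraries can emit this TwiML for you, but the document is plain XML, so it can also be assembled with nothing beyond Python's standard library. The sketch below (the `room_twiml` helper name is ours, not part of Twilio's API) builds the same `<Response><Connect><Room>` structure shown in the examples:

```python
import xml.etree.ElementTree as ET

def room_twiml(room_name, participant_identity=None):
    """Build the <Response><Connect><Room>> TwiML document as a string."""
    response = ET.Element("Response")
    connect = ET.SubElement(response, "Connect")
    room = ET.SubElement(connect, "Room")
    room.text = room_name
    if participant_identity is not None:
        room.set("participantIdentity", participant_identity)
    # encoding="unicode" returns a str without an XML declaration,
    # so prepend the declaration used in the examples above.
    body = ET.tostring(response, encoding="unicode")
    return '<?xml version="1.0" encoding="UTF-8"?>' + body

print(room_twiml("DailyStandup", participant_identity="alice"))
```

ElementTree emits attributes with double quotes rather than the single quotes in the example above; both are valid XML and equivalent to Twilio.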
https://www.twilio.com/docs/voice/twiml/connect/room
Reflection in Dart with Mirrors: An Introduction

Written by Gilad Bracha, November 2012 (updated November 2013)

Reflection in Dart is based on the concept of mirrors, which are simply objects that reflect other objects. In a mirror-based API, whenever one wants to reflect on an entity, one must obtain a separate object called a mirror. Mirror-based reflective APIs have substantial advantages with respect to security, distribution, and deployment. On the other hand, using them is sometimes more verbose than older approaches. For a thorough introduction to the rationale for mirror-based reflection, see the references at the end of this document. However, you don’t need to delve into all that if you don’t want to; what you really need to know about Dart’s mirror API will be covered here.

At this time, only part of the planned API has been realized. The part that exists deals with introspection, the ability of a program to discover and use its own structure. The introspection API has been largely implemented on the Dart VM. In dart2js, a similar implementation is under development, but is still incomplete.

The introspection API is declared in the library named dart:mirrors. If you wish to use introspection, import it:

import 'dart:mirrors';

For the sake of illustration, we’ll assume you’ve defined the following class:

class MyClass {
  int i, j;
  int sum() => i + j;
  MyClass(this.i, this.j);
  static noise() => 42;
  static var s;
}

The easiest way to get a mirror is to call the top-level function reflect(), which takes an object and returns an InstanceMirror on it.

InstanceMirror myClassInstanceMirror = reflect(new MyClass(3, 4));

InstanceMirror is a subclass of Mirror, the root of the mirror hierarchy. An InstanceMirror allows one to invoke dynamically chosen code on an object.

InstanceMirror f = myClassInstanceMirror.invoke(#sum, []); // Returns an InstanceMirror on 7.
The invoke() method takes a symbol (in this case, #sum) representing the method name, a list of positional arguments, and (optionally) a map describing named arguments.

Why doesn’t invoke() take a string representing the method name? Because of minification. Minification is the process of mangling names in web programs in order to reduce download size. Symbols were introduced into Dart to help reflection work in the presence of minification. The big advantage of symbols is that when a Dart program is minified, symbols get minified as well. For this reason, the mirror API traffics in symbols rather than strings. You can convert between symbols and strings; typically, you will do that in order to print out names of declarations, as we’ll see below.

Suppose you want to print out all the declarations in a class. You’ll need a ClassMirror, which as you’d expect reflects a class. One way to get a class mirror is from an instance mirror:

ClassMirror MyClassMirror = myClassInstanceMirror.type; // Reflects MyClass

Another way is to use the top-level function reflectClass():

ClassMirror cm = reflectClass(MyClass); // Reflects MyClass

Once we’ve obtained a class mirror cm by whatever means, we can print out the names of all declarations of the class reflected by cm:

for (var m in cm.declarations.values) print(MirrorSystem.getName(m.simpleName));

ClassMirror has a getter declarations that returns a map from the names of the reflected class’ declarations to mirrors on those declarations. The map contains all declarations listed explicitly in the source code of the class: its fields and methods (including getters, setters and regular methods), be they static or not, and constructors of all stripes. The map will not contain any inherited members, nor any synthetic members, such as the getters and setters generated automatically for fields.
We extract the values from the map; each of these will be a mirror on one of the declarations of MyClass, and will support the getter simpleName that returns the name of the declaration. The returned name is a Symbol, so we must convert it to a string in order to print it. The static method MirrorSystem.getName does that for us.

Obviously, we know what the declarations in MyClass are in this case; the point is that the for loop above works for any class mirror, and therefore we can use it to print the declarations of any class:

printAllDeclarationsOf(ClassMirror cm) {
  for (var m in cm.declarations.values) print(MirrorSystem.getName(m.simpleName));
}

A number of methods in the mirror API return maps in a similar fashion. The maps allow you to look up members by name, to iterate over all the names, or to iterate over all the members. In fact, there is a simpler way to accomplish what we just did:

printAllDeclarationsOf(ClassMirror cm) {
  for (var k in cm.declarations.keys) print(MirrorSystem.getName(k));
}

What if we want to invoke static code reflectively? We can call invoke() on a ClassMirror as well:

cm.invoke(#noise, []); // Returns an InstanceMirror on 42

In fact, invoke() is defined in class ObjectMirror, a common superclass for mirror classes that reflect Dart entities that have state and executable code, such as regular instances, classes, libraries, and so on.
Here is a complete example incorporating what we’ve done so far:

import 'dart:mirrors';

class MyClass {
  int i, j;
  void my_method() { }
  int sum() => i + j;
  MyClass(this.i, this.j);
  static noise() => 42;
  static var s;
}

main() {
  MyClass myClass = new MyClass(3, 4);
  InstanceMirror myClassInstanceMirror = reflect(myClass);
  ClassMirror MyClassMirror = myClassInstanceMirror.type;

  InstanceMirror res = myClassInstanceMirror.invoke(#sum, []);
  print('sum = ${res.reflectee}');

  var f = MyClassMirror.invoke(#noise, []);
  print('noise = $f');

  print('\nMethods:');
  Iterable<DeclarationMirror> decls = MyClassMirror.declarations.values.where(
      (dm) => dm is MethodMirror && dm.isRegularMethod);
  decls.forEach((MethodMirror mm) {
    print(MirrorSystem.getName(mm.simpleName));
  });

  print('\nAll declarations:');
  for (var k in MyClassMirror.declarations.keys) {
    print(MirrorSystem.getName(k));
  }

  MyClassMirror.setField(#s, 91);
  print(MyClass.s);
}

And here’s the output:

sum = 7
noise = InstanceMirror on 42

Methods:
my_method
sum
noise

All declarations:
i
j
s
my_method
sum
noise
MyClass

91

At this point we’ve shown you enough to get started. Some more things you should be aware of follow.

Because the size of web applications needs to be kept down, deployed Dart applications may be subject to minification and tree shaking. We discussed minification above; tree shaking refers to the elimination of source code that isn’t called. Neither of these steps can generally detect reflective uses of code. Such optimizations are a fact of life in Dart, because of the need to deploy to JavaScript. We need to avoid downloading the entire Dart platform with every web page written in Dart. Tree shaking does this by detecting what method names are actually invoked in the source code. However, code that is invoked based on dynamically computed symbols cannot be detected this way, and is therefore subject to elimination.
The above means that the actual code that exists at runtime may differ from the code you had during development. Code you only used reflectively may not be deployed. Runtime reflection is only aware of what actually exists at runtime in the running program. This can lead to surprises. For example, one may attempt to reflectively invoke a method that exists in the source code, but has been optimized away because no non-reflective invocations exist. Such an invocation will result in a call to noSuchMethod().

Tree shaking has implications for structural introspection as well. Again, what members a library or type has at runtime may be at variance with what the source code states. In the presence of mirrors, one could choose to be more conservative. Unfortunately, since one can obtain mirrors for any object in an application, all code in the application would have to be preserved, including the Dart platform itself. Instead, we may choose to treat such invocations as if the method never existed in the source. We are experimenting with mechanisms for programmers to specify that certain code may not be eliminated by tree shaking. Currently, you may use the MirrorsUsed annotation for this purpose, but we expect the details to change significantly over time.

The above should be enough to get you started using mirrors. There is a good deal more to the introspection API; you can explore the API to see what else is there. We’d like to support more powerful reflective features in the future. These would include mirror builders, designed to allow programs to extend and modify themselves, and a mirror-based debugging API as well.

References

Gilad Bracha and David Ungar. Mirrors: Design Principles for Meta-level Facilities of Object-Oriented Programming Languages. In Proc. of the ACM Conf. on Object-Oriented Programming, Systems, Languages and Applications, October 2004.

Gilad Bracha. Linguistic Reflection via Mirrors.
Screencast of a lecture at HPI Potsdam in January 2010. 57 minutes.

These blog posts on mirrors may also prove useful (and less time-consuming to digest):

- Gilad Bracha. Through the Looking Glass Darkly.
- Allen Wirfs-Brock. Experimenting with Mirrors for JavaScript.
- Gilad Bracha. Seeking Closure in the Mirror.
http://fartlang.org/articles/libraries/reflection-with-mirrors.html
Hi Danny,

Danny Milosavljevic <address@hidden> skribis:

> On Tue, 26 Jul 2016 22:49:35 +0200
> address@hidden (Ludovic Courtès) wrote:
>
>> > (u-boot u-boot-configuration-u-boot ; package
>> >   (default (@ (gnu packages u-boot) (make-u-boot-package
>> >     board))))
>>
>> The default value has invalid syntax. Should be simply:
>>
>>   (default (make-u-boot-package board))
>>
>> but I think this doesn’t work (‘board’ will be unbound; yeah,
>> counter-intuitive.)
>>
>> You could instead do (default #f) and call ‘make-u-boot-package’ when
>> that value is #f.
>>
>> > (define (eye-candy config root-fs system port)
>> >   "dummy"
>> >   (mlet* %store-monad ((image #f))
>> >     (return (and image
>> >       #~(format #$port "")))))
>>
>> Simply remove it. :-)
>
> Yeah, but there's a
>
>   (mlet %store-monad ((sugar (eye-candy config store-fs system #~port)))
>
> in the same file.
>
> Can I remove that and #$sugar, too? Will it still work?

Yes.

> Also, I'm trying to s/grub.cfg/bootloader-configuration-file/g right
> now, but I wonder
>
> (1) Whether it's possible to determine the basename of the config-file
>     derivation in order to find out what bootloader to install
> (2) Whether we want to do it that way
>
> If so, we could have an install-bootloader routine that detects what the
> filename of the bootloader-configuration-file object is and then calls
> either install-grub or install-u-boot.

I think we need two separate procedures on the build side:
‘install-grub’, and ‘install-u-boot’. Choosing between GRUB and U-Boot
should happen on the “host side”, most likely in (gnu system).

HTH,
Ludo’.
https://lists.gnu.org/archive/html/guix-devel/2016-07/msg01508.html
CC-MAIN-2022-33
refinedweb
255
53.92
When I run drpython.lin, I get the following message:

    Traceback (most recent call last):
      File "/home/kurumin/drpython-3.10.13/drpython.pyw", line 35, in ?
        import drpython
      File "/home/kurumin/drpython-3.10.13/drpython.py", line 45, in ?
        import wx, wx.stc
    ImportError: No module named wx

I already installed python (v2.4) and wxpython (v2.6.10). What does this message mean, and why does it appear?

Franz Steinhaeusler, 2005-08-11:
Sorry, I cannot help much (I have no Linux). Is the wxPython demo running? What happens if you call import wx from the python prompt? At least on Windows, there is a wx.pth with the entry wx-2.61-msw-ansi. I suspect something is wrong with your wxPython installation.

Daniel Pozmanter, 2005-08-12:
You probably already have an older version of python installed, and drpython.lin is running that. You need to run the python binary that matches the python version you installed wxpython for. Probably python2.4 or some such (check /usr/bin or /usr/local/bin depending on your setup).
http://sourceforge.net/p/drpython/discussion/283803/thread/7b282f46/
CC-MAIN-2014-35
refinedweb
175
72.32
You can keep your operating system images updated with the latest software updates from Microsoft using the offline servicing feature of Configuration Manager. Here are a few facts you may not know:

- Software updates content is not retrieved over the wide area network; rather, it is obtained locally from the Configuration Manager content library on the site server where offline servicing is performed. This prevents excessive network traffic during the application of updates.
- You can specify to continue if an error occurs while applying the selected updates to an OS image. As a result, if some updates fail to be applied to the image, servicing will apply the remaining updates.
- Upon completion of the offline servicing process, you can specify that the updated version of the OS image is automatically distributed to all distribution points where it resides.

Offline servicing stages temporary data on the site server when the process runs, and uses the drive on which Configuration Manager is installed. One common request is to configure offline servicing to use a specific drive on the site server. Let's say you want to specify the "F:" drive for offline servicing to stage and mount the OS image and store software updates files. Here's how to do this using the Windows Management Instrumentation Tester utility (wbemtest.exe).

- Launch wbemtest.exe.
- Connect to the Configuration Manager namespace on the site server. For example, if your site code is "CCP", connect to namespace "root\sms\site_CCP".
- Next click Query, enter the following, and then click Apply:

      SELECT * FROM SMS_SCI_Component WHERE SiteCode='CCP' AND ItemName LIKE 'SMS_OFFLINE_SERVICING_MANAGER%'

- Double-click on the result.
- Double-click on the "Props" property in the list.
- Click "View Embedded".
- There will be four entries returned in the list. Double-click on each to find the one where the PropertyName field is "StagingDrive".
- Change Value1 in the list to "F:" (in this example).
- Click "Save Object".
- Click "Close".
- Click "Save Property".
- Click "Save Object".
- Click "Close".

Now the next time offline servicing runs, it will stage all of its files in the folder F:\ConfigMgr_OfflineImageServicing.

This posting is provided "AS IS" with no warranties and confers no rights.
https://blogs.technet.microsoft.com/enterprisemobility/2013/07/15/customizing-offline-servicing-of-operating-system-images/
CC-MAIN-2016-30
refinedweb
358
56.05
Question: plotCorrelation and pandas error

Asked 8 months ago by maryjoazzi:

Hey, I am using plotCorrelation and I get:

    import pandas as pd
    ModuleNotFoundError: No module named 'pandas'

So I made sure that pandas was in fact installed, and then I went into the "_county_choropleth.py" file and added:

    import sys
    sys.path.insert(0, "mydirectory")
    import pandas as pd

and now I'm getting:

    import pandas as pd
    File "/usr/local/lib/python2.7/dist-packages/pandas/__init__.py", line 35, in <module>
        "the C extensions first.".format(module))
    ImportError: C extension: /usr/local/lib/python2.7/dist-packages/pandas/_libs/tslib.so: undefined symbol: _Py_ZeroStruct not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace --force' to build the C extensions first.

Any suggestions on how/what to fix?

Reply, 8 months ago, by Bjoern Gruening: This is using deeptools3, isn't it?
https://biostar.usegalaxy.org/p/27084/
CC-MAIN-2019-13
refinedweb
175
58.08
Intro: Arduino Avengers: Laser Tag

Have you ever wanted to become a superhero? This Arduino laser tag project immerses you in a game where you can act as a superhero firing at your opponents to victory. Using a flex sensor, you can reenact the famous Spiderman web shooter and Iron Man repulsor beam. You are also able to keep track of your ammunition and life count on the LCD screen on your wrist. Every time you're hit, you can feel the impact through a vibration motor. When you win, the speaker will play your theme song. This project won best laser tag project (out of 12) at Binghamton University's Freshman Engineering Design Arduino Exposition.

Step 1: Parts

Everything but the tools was bought on Amazon, and the total project cost was $140 (the Arduinos were knockoffs).

Electronics
- Arduino Uno (x2)
- Protoboard (x2)
- Flex Sensor (x2)
- I2C LCD Display (x2)
- IR Emitter (x2)
- IR Receiver (x4), 3-prong NOT 2-prong
- Vibration Motor (x2)
- Enclosed Piezo (x2)

Costume
- Spiderman Shirt
- Spiderman Mask
- Spiderman Gloves
- Iron Man Shirt
- Iron Man Mask
- Iron Man Gloves

Other
- Arduino Housing (x2)
- Latex Gloves
- Lots of wire

Tools
- Soldering iron and solder
- Breadboard and jumper wires for prototyping
- Wire strippers

Step 2: Prototyping

I would suggest first testing each component (piezo, flex sensor, LCD, vibration motor, IR emitter and receiver) separately before you begin. There's plenty of sample code all over the internet to test each. Wire everything to the breadboard as you see in the diagram above.

Note:
- The I2C LCD only has 4 pins, so wire accordingly.
- The vibration motor has a 3V max, so wire it to the 3V pin and ground it to pin 13 (this means the code to control it is reversed).
- The flex sensor is denoted as a potentiometer in the schematic.

Here's a link to the circuits.io schematic.
Step 3: Code

    #include <IRremote.h>
    #include <Wire.h>
    #include <LiquidCrystal_I2C.h>
    #include "pitches.h"

    LiquidCrystal_I2C lcd(0x3F, 2, 1, 0, 4, 5, 6, 7, 3, POSITIVE);

    int RECV_PIN = 11;
    int RECV_PIN2 = 10;
    IRrecv irrecv(RECV_PIN);
    IRrecv irrecv2(RECV_PIN2);
    decode_results results;
    IRsend irsend;

    const int flexSensor = A0;
    int lastFlexNum = 0;
    int lives = 5;
    int ammo = 50;
    int buzzerPin = 13;
    String superhero = "Spiderman"; // set to "Iron Man" on the other player's Arduino

    // NOTE: setup() and the opening of loop(), including the start of the
    // front-receiver hit check, are missing from this copy of the post;
    // the listing resumes inside that handler.

                lcd.print(">Front Shot<");
                digitalWrite(buzzerPin, LOW);
                delay(2000);
                digitalWrite(buzzerPin, HIGH);
                updateLCD();
            }
            irrecv.resume(); // Receive the next value
        }
        if (irrecv2.decode(&results)) {
            if ((results.value == 4045713590) || (results.value == 316007967) || (results.value == 2704)) {
                lives = lives - 1;
                lcd.setCursor(0, 1);
                lcd.print("                ");
                lcd.setCursor(1, 1);
                lcd.print(">Back Shot<");
                digitalWrite(buzzerPin, LOW);
                delay(2000);
                digitalWrite(buzzerPin, HIGH);
                updateLCD();
            }
            irrecv2.resume(); // Receive the next value
        }
        delay(100);
    }

    void updateLCD() {
        lcd.clear();
        if (superhero == "Spiderman") {
            lcd.setCursor(4, 0);
            lcd.print("Spiderman");
        }
        else {
            lcd.setCursor(4, 0);
            lcd.print("Iron Man");
        }
        if (lives == 0) {
            lcd.setCursor(2, 1);
            lcd.print("You Died :(");
            playMusic();
            lcd.setCursor(0, 1);
            lcd.print("                ");
            lives = 5;
        }
        lcd.setCursor(0, 1);
        lcd.print("Lives:");
        lcd.setCursor(6, 1);
        lcd.print(lives);
        lcd.setCursor(9, 1);
        lcd.print("Ammo:");
        lcd.setCursor(14, 1);
        lcd.print(ammo);
    }

    void playMusic() {
        if (superhero == "Spiderman") {
            int melody[] = {NOTE_D4, NOTE_F4, NOTE_A4, 0, NOTE_GS4, NOTE_F4, NOTE_D4, 0,
                            NOTE_D4, NOTE_F4, NOTE_A4, NOTE_A4, NOTE_GS4, NOTE_F4, NOTE_D4, 0,
                            NOTE_G4, NOTE_AS4, NOTE_D5, 0, NOTE_C5, NOTE_AS4, NOTE_G4, 0,
                            NOTE_D4, NOTE_F4, NOTE_A4, 0, NOTE_GS4, NOTE_F4, NOTE_D4, NOTE_AS4,
                            NOTE_G4, NOTE_G4, NOTE_F4, NOTE_G4, NOTE_F4, NOTE_D4};
            int noteDurations[] = {4, 8, 4, 4, 4, 8, 4, 4, 4, 8, 4, 4, 4, 8, 4, 4,
                                   4, 8, 4, 4, 4, 8, 4, 4, 4, 8, 4, 4, 4, 8, 4, 8,
                                   2, 4, 6, 4, 6, 2};
            for (int thisNote = 0; thisNote < 38; thisNote++) {
                int noteDuration = 1200 / noteDurations[thisNote];
                tone(8, melody[thisNote], noteDuration);
                int pauseBetweenNotes = noteDuration * 1.30;
                delay(pauseBetweenNotes);
                noTone(8);
            }
        }
        else {
            int melody[] = {NOTE_B3, NOTE_D4, NOTE_D4, NOTE_E4, NOTE_E4, NOTE_G4, NOTE_FS4,
                            NOTE_G4, NOTE_FS4, NOTE_G4, NOTE_D4, NOTE_D4, NOTE_E4, NOTE_E4,
                            NOTE_B5, NOTE_D5, NOTE_D5, NOTE_E5, NOTE_E5, NOTE_G5, NOTE_FS5,
                            NOTE_G5, NOTE_FS5, NOTE_G5, NOTE_D5, NOTE_D5, NOTE_E5, NOTE_E5};
            int noteDurations[] = {4, 4, 8, 8, 4, 16, 16, 16, 16, 8, 8, 8, 8, 4,
                                   4, 4, 8, 8, 4, 16, 16, 16, 16, 8, 8, 8, 8, 4};
            // Both arrays have 28 entries; the original loop bound of 29 read past the end.
            for (int thisNote = 0; thisNote < 28; thisNote++) {
                int noteDuration = 1200 / noteDurations[thisNote];
                tone(8, melody[thisNote], noteDuration);
                int pauseBetweenNotes = noteDuration * 1.30;
                delay(pauseBetweenNotes);
                noTone(8);
            }
        }
    }

Step 4: Soldering

Solder wires and components to a protoboard the same way you did with a breadboard. This makes your project compact and ensures that all the wires stay connected.

Step 5: Build Housing and Attach to Costume

- Enclose the Arduino in a box or some sort of casing (it can be as simple as a decorated envelope, which is what was used in this project).
- Attach a battery pack to the side, which will also act as the Arduino power button.
- Sew the LCD to a latex glove and wire it to the Arduino.
- Put the superhero glove over the latex glove and put on the rest of the costume.
- Wire the IR emitter to the bottom of the superhero glove (to aim and shoot the IR beam).
- Wire the IR receivers through the costume (one to the front and one to the back).

Step 6: Playing Instructions

You're all done!!
- The goal of the game is to hit the other player's sensor 5 times.
- When a person wants to fire a shot, they must bend the flex sensor, which causes the IR emitter to send a signal.
- Players must aim the emitter at the receivers embedded within their opponent's shirt.
- Each time a person is hit, their life count decreases by one.
- The game continues until one person has lost all of their lives (their superhero's theme song will play, indicating they lost).

25 Discussions

5 months ago: We have been working on this project for quite some time now and are still getting many error messages. My teacher, who usually makes us do all our own troubleshooting (only telling us that "it's broke and that is the problem"), has noticed how much we have been struggling with this code. He told us that if the code compiles with the original code then the code is good. However, the code keeps getting an error right from the start when just trying to compile it. I liked the idea of this project, but the code is making it almost impossible to actually do the project.

Reply, 5 months ago: I'm sorry you are having trouble with this. This is a complex project that involved a lot of tweaking, so I wouldn't be surprised if it doesn't work for other people. My guess is that any errors probably stem from the custom libraries. It's very hard to get the IR, LCD, and Tone libraries to work together, so as is explained in the project above, the libraries had to be custom edited. The edits are saved to Codebender and you can try to upload to your board straight from the website, but you may have to make the changes yourself.

Reply, 5 months ago: We have done that too, but it still didn't work.

5 months ago: Hello. My friend and I are trying to implement this project; however, the code will not compile. We tried changing what you had said to do in previous comments, but it didn't do much for us. Can you give us any other advice?

Reply, 5 months ago: What error are you getting exactly?
Reply, 5 months ago: The code is giving me this error:

    Laser Tag (Receive and Send).ino:8:53: error: use of undeclared identifier 'POSITIVE'
    LiquidCrystal_I2C lcd(0x3F, 2, 1, 0, 4, 5, 6, 7, 3, POSITIVE);

Reply, 5 months ago: Maybe check if you have the latest version of the I2C library.

Reply, 5 months ago: We figured out that we are not using the same LCDs as you and found out how to fix that portion of the code. However, we are now having issues with the IRrecv portion.

Reply, 5 months ago: Same problem, shalc2001.

Question, 5 months ago: Hi, I tried that and this happened:

    Laser Tag (Receive and Send).ino:8:53: error: use of undeclared identifier 'POSITIVE'
    LiquidCrystal_I2C lcd(0x3F, 2, 1, 0, 4, 5, 6, 7, 3, POSITIVE);

Answer, 5 months ago: We had taken out the I2C library since our LCDs are not I2C like what he used. However, when you change that, another error will pop up saying that the IRrecv isn't right, and that is where we are having trouble now.

5 months ago: Hi, what board do you use when you upload the code? If you try to upload, does it report an error about the board? Thanks

Question, 5 months ago, on Step 3: Hi, what board is it? It will not work. Thanks

Answer, 5 months ago: Hi, I used an Arduino Uno for my project. You may have to change the code to get the IR and speaker to work with a Nano.

6 months ago: My friend and I are trying to make this project; however, the code refuses to compile despite adding the necessary libraries. Help?

Reply, 5 months ago: Hi, I believe the Tone library and the IR library use the same PWM pin, so you have to edit one of the libraries to change that. If you search the error, you might find the solution explained.
I'm not sure where I changed the pin, but you can check out my Codebender files () and try to find which pin I changed.

Reply, 5 months ago: The issue is in this line:

    LiquidCrystal_I2C lcd(0x3F, 2, 1, 0, 4, 5, 6, 7, 3, POSITIVE);

Our Arduinos don't like the POSITIVE at the end, nor the I2C LCD.

Reply, 5 months ago: Never mind, we figured it out.

1 year ago: Forgive me if this is a dumb question, but would I be able to have, for example, two Spidermen and two Iron Men all play each other? Two players is fun, but four is better =)

Reply, 1 year ago: Yes, very easy to do. In the code there's a variable at the beginning that you set to either "Spiderman" or "Iron Man". All you have to do is change it for each Arduino. This has no effect on the actual game; it just sets the name on the LCD and dictates what music will play.
https://www.instructables.com/id/Arduino-Avengers-Laser-Tag/
CC-MAIN-2018-43
refinedweb
1,730
66.27
Originally posted by mikeliu:

Hi All, please help me clear up a doubt. What will happen if you attempt to compile and run the following code?

1) Compiles and runs without error
2) Compile-time exception
3) Runtime exception

    class Base {}
    class Sub extends Base {}
    class Sub2 extends Base {}
    public class CEx {
        public static void main(String argv[]) {
            Base b = new Base();
            Sub s = (Sub) b;
        }
    }

I chose 1, but the answer is 3. I am totally confused, because according to RHE it is OK to cast an object reference at run time if one of the classes is a subclass of the other (it doesn't matter which one; please refer to pages 118-119 in RHE if you have it). Am I missing something? Mike
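The question turns on the difference between the compile-time legality of a downcast and its runtime check: `(Sub) b` compiles because Sub is a subclass of Base, but at run time the JVM checks the object's actual class and throws ClassCastException when it is only a Base. A minimal sketch (class names mirror the quoted code; `tryDowncast` is my own illustrative helper):

```java
class Base {}
class Sub extends Base {}

public class CastDemo {
    // Returns "ok" when the downcast succeeds, the exception name otherwise.
    static String tryDowncast(Base b) {
        try {
            Sub s = (Sub) b;   // legal at compile time: Sub is a subclass of Base
            return "ok";
        } catch (ClassCastException e) {
            return "ClassCastException";
        }
    }

    public static void main(String[] args) {
        System.out.println(tryDowncast(new Base())); // runtime type is only Base: fails
        System.out.println(tryDowncast(new Sub()));  // runtime type really is Sub: ok
    }
}
```

Guarding the cast with `if (b instanceof Sub)` is the usual way to avoid the exception.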
http://www.coderanch.com/t/192703/java-programmer-SCJP/certification/Casting-urgently-Marcus-Green-Exam
CC-MAIN-2015-06
refinedweb
134
87.92
Pie

Welcome to Pie, a wrapper for CPython's C API in C++. CPython's C API is really nice when writing C, but when writing C++ it leaves much to be desired. The API passes around PyObject* objects between functions in an object-oriented fashion. Pie wraps Python's PyObject* with its own object class, it leverages C++'s operator overloading to call CPython API functions on the PyObject*, and it automatically keeps track of the object's reference count using constructors and destructors. Pie can also parse certain C++ objects into Python objects; for example, vectors turn into lists, maps turn into dictionaries, etc.

Features

- An idiomatic C++ wrapper around PyObject* objects, using classes and operator overloading.
- Automatic incrementing and decrementing of PyObject* reference counts.
- Parsing of C++ types into Python objects.
- Wrapping of Python exceptions into C++ exceptions.

Usage

See the example for more information on how to use Pie. Just link with libpie and you should be ready to go.

Note: Pie is not a replacement for the CPython API; it is intended to complement it and make it easier to use in C++.

The Parser

Pie's C++ type parser is very simple; it handles the following cases:

- C integers become Python integers.
- C strings (const char *) become Python strings.
- C++ strings (std::string) become Python strings.
- Container types with an std::pair value type become Python dictionaries.
- Container types without an std::pair value type become Python lists.

Python Version

Pie currently works with Python 3 only. There are plans to port Pie to Python 2 in the future.

Building

Pie requires the following:

- A compiler with C++17 support (gcc-7 should work fine).
- Python 3 and development headers.
- CMake version 3+

To build, issue the following:

    git clone git@github.com:ronmrdechai/Pie.git
    mkdir build
    cd build
    cmake ..
    make
    sudo make install

Building the tests clones and builds googletest, which is used to run the tests. To build the tests, pass an additional -DBUILD_TESTS=ON to the cmake command:

    cmake .. -DBUILD_TESTS=ON
    make
    make test

Platforms

Pie is developed and tested on macOS but should work on any platform with Python and C++17.

Example

See the tests for more detailed examples.

    #include <pie/pie.h>
    #include <iostream>

    int main() {
        pie::object os = PyImport_ImportModule("os");
        pie::object os_environ = pie::getattr(os, "environ");

        std::cout << "The following directories are in the PATH:" << std::endl;
        for (auto dir : getattr(os_environ["PATH"], "split")(":")) {
            std::cout << dir << std::endl;
        }

        pie::object zero = 0;
        pie::object one = 1;
        try {
            one / zero;
        } catch (pie::error& e) {
            std::cout << "Caught Python exception:" << std::endl;
            std::cout << e.what() << std::endl;
        }
        return 0;
    }

This is equivalent to the following Python code:

    import os

    print("The following directories are in the PATH:")
    for dir in os.environ["PATH"].split(":"):
        print(dir)

    try:
        1 / 0
    except BaseException as e:
        print("Caught Python exception:")
        print(e.__class__.__name__ + ": " + str(e))

Why not Boost.Python, PyBind11 or SWIG?

Boost.Python, PyBind11 and SWIG are wonderful packages, but they are different from Pie. These packages wrap C++ classes and functions into Python classes and functions, allowing you to call them from Python. Some of them have support for calling Python code from C++, but this functionality has been added as an afterthought. Pie has been built from the ground up to wrap Python objects into C++ objects, allowing you to easily call Python from your C++ code and embed it in your applications.

To Do

- Provide complete documentation for Pie.
- Wrap Python builtin types with pie::objects.
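The "automatic incrementing and decrementing of reference counts" that Pie advertises is the standard RAII pattern. Stripped of the Python specifics, the idea looks roughly like this (a generic sketch with a stand-in object, not Pie's actual implementation):

```cpp
#include <cassert>

// Stand-in for CPython's PyObject: only the reference count matters here.
struct FakePyObject {
    int refcnt = 1;
};

// Minimal RAII handle in the spirit of pie::object.
class handle {
    FakePyObject* p_;
public:
    explicit handle(FakePyObject* p) : p_(p) {}   // takes over one reference
    handle(const handle& other) : p_(other.p_) {
        ++p_->refcnt;                             // copy plays the role of Py_INCREF
    }
    ~handle() { --p_->refcnt; }                   // destruction plays the role of Py_DECREF
    handle& operator=(const handle&) = delete;    // omitted to keep the sketch minimal
    int refcnt() const { return p_->refcnt; }
};
```

Each copy bumps the count and each destructor drops it, so the count returns to its starting value when all handles leave scope, which is exactly why Pie's users never call Py_INCREF/Py_DECREF by hand.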
https://reposhub.com/cpp/miscellaneous/ronmrdechai-Pie.html
CC-MAIN-2022-21
refinedweb
597
57.37
Programming with if-else in C++

Q10. Write a C++ program to accept a number, then display whether it is +ve, -ve or zero.
Q11. Write a C++ program to accept a number, then check if it is odd or even.
Q12. Write a C++ program to accept two numbers, then check if the first one is divisible by the second one.
Q13. Write a C++ program to accept a percentage of marks, then display the grade based on the following criteria.
Q14. Write a C++ program to accept unit price and quantity purchased, then calculate the total amount, discount amount and net payable amount. Discount is calculated as: if the total amount is more than or equal to 5000, the discount is 10%; if the total amount is 4000 to 5000, the discount is 5%; if the total amount is 3000 to 4000, the discount is 3%; otherwise there is no discount.
Q15. Write a C++ program to accept the coefficients of a quadratic equation, then find the roots of the equation.

Program-9

    #include <iostream.h>
    void main(){
        int age;
        cout<<"Enter the Age:";
        cin>>age;
        if(age>=18){
            cout<<"Congratulations You can Vote !!";
        }
        else{
            cout<<"Sorry You can not Vote";
        }
    }

Program-10

    #include <iostream.h>
    void main(){
        int a;
        cout<<"Enter the number:";
        cin>>a;
        if(a>0){
            cout<<"The number is Positive";
        }
        else if(a<0){
            cout<<"The number is Negative";
        }
        else{
            cout<<"The number is Zero";
        }
    }

Program-11

    #include <iostream.h>
    void main(){
        int a;
        cout<<"Enter the number:";
        cin>>a;
        if(a%2==0){
            cout<<"The number is Even";
        }
        else{
            cout<<"The number is Odd";
        }
    }

Note: In Program-11, the given number is divided by 2 and its remainder is checked; if it is zero, the given number is "Even", otherwise it is "Odd". See "a%2==0" in the program.

Program-12

    #include <iostream.h>
    void main(){
        int a, b;
        cout<<"Enter the first number:";
        cin>>a;
        cout<<"Enter the second number:";
        cin>>b;
        if(a%b==0){
            cout<< a<<" is divisible by "<<b;
        }
        else{
            cout<< a<<" is not divisible by "<<b;
        }
    }

Output of Program-12

Program-13

    #include <iostream.h>
    void main(){
        int p;
        cout<<"Enter %age of marks:";
        cin>>p;
        if(p>90){
            cout<<"Grade is A1";
        }
        else if(p>80){
            cout<<"Grade is A2";
        }
        else if(p>70){
            cout<<"Grade is B1";
        }
        else if(p>60){
            cout<<"Grade is B2";
        }
        else if(p>50){
            cout<<"Grade is C1";
        }
        else if(p>40){
            cout<<"Grade is C2";
        }
        else if(p>=33){
            cout<<"Grade is D";
        }
        else{
            cout<<"Grade is E";
        }
    }

Note: In Program-13, see that we have not checked a range of percentages like 80 to 90; we have only checked for "p>80". Let us say the percentage is 91; one may think that since 91 is greater than 90 as well as greater than all the other given conditions in the program, it will display all the grades. Actually only the first "if" block executes; there is no chance that the other blocks will also execute, because in an "if-else" construct only one block of statements executes at a time.

Program-14
https://www.mcqtoday.com/CPP/flowif/programingwithif-else.html
CC-MAIN-2022-27
refinedweb
519
64.85
Comments on "C Programming Tutorial: Finding the Number of 500, 100, 50, 20, 10, 5, 2, 1 Rupee Notes in a Given Amount"

- Shivam Kumar (2021-02-28): You have explained this topic very well and step by step. Thanks for sharing a nice post.
- sriram: I am reading your post from the beginning; it was so interesting to read. Thanks for posting such a good blog, keep updating regularly.
- Veena: Thanks for highlighting this and indicating where more study and thought is necessary. I look forward to your next updates.
- Anonymous: Thank you very much!! Helped me a lot.
- Gayasha Malluwawadu: I was looking for this type of answer on many sites, but the only satisfying answer is this one. Keep it up, well done!
- Anonymous: I found an alternate approach for solving this C program without using multiple while statements.
- Anonymous: If you enter an amount that has decimals, is it going to work?
- lucas: Hi, Tanmay. I made a more efficient version in C++; I hope you enjoy it:

      #include <iostream>  // the include line was stripped by the blog's HTML; cout/cin imply <iostream>
      using namespace std;

      int main () {
          float remaining_value(0), result(0), rest(1);
          int aux(0), other(0);
          float notes[7] = {100, 50, 20, 10, 5, 2, 1};

          cout << "Provide us the value: ";
          cin >> remaining_value;

          while (rest != 0) {
              result = remaining_value / notes[other];        // result in float
              aux = result;                                   // number of notes that will be used
              rest = remaining_value - (notes[other] * aux);  // provides the rest
              if (aux != 0) {
                  cout << aux << " note(s) of " << notes[other] << endl;
              }
              remaining_value = rest;
              other++;
          }
          return 0;
      }

- Tanmay Jhawar: This program could be used to determine how a sum of money could be formed using notes of different denominations, keeping as many higher-denomination notes as possible. E.g., in case the amount is 1500, then three notes of 500.
- Anonymous: Please give applications of this program.
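The idea in the post title (greedy division by descending denominations, including the 500 note that lucas's variant omits) can be condensed into a single function; a sketch:

```cpp
#include <utility>
#include <vector>

// Greedy note breakdown: largest denominations first, as the post describes.
std::vector<std::pair<int, int>> countNotes(int amount) {
    const int denoms[] = {500, 100, 50, 20, 10, 5, 2, 1};
    std::vector<std::pair<int, int>> breakdown;  // {denomination, count}
    for (int d : denoms) {
        int n = amount / d;
        if (n > 0) breakdown.push_back({d, n});  // record only denominations actually used
        amount %= d;                             // carry the remainder down a slab
    }
    return breakdown;
}
```

For 1500 this yields three notes of 500 and nothing else, matching Tanmay's example.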
http://cprogramming.language-tutorial.com/feeds/4986194572340753171/comments/default
CC-MAIN-2022-33
refinedweb
581
58.08
04 February 2016 - 08:00 PM

Hi, using fgets you can read one line correctly, but you need to know the maximum possible line size. How do you correctly read one line without a limit? Is reading each character until '\n' the only way? Thanks

03 February 2016 - 08:54 AM

What do you think about "using namespace"? Personally I think it's not explicit at the point of use and can cause confusion.

06 January 2016 - 06:19 PM

I did a simple test reading a global variable both through a struct pointer and from the global namespace directly. Both cases result in an inlined get in the Release build when you look at the assembly view in MSVC 2013. I did the test for an executable build; is that always the case for executables, static libs and shared libs? To be clear, the variable is not declared in the header, only in the .cpp.

06 January 2016 - 08:32 AM

Here's a video about the update of the MSM shadowing technique. I remember MJP wrote an article about it and said EVSM is still the best; is that still the case?

02 January 2016 - 07:32 AM

I have always used a global uint32 seed in an anonymous namespace. Is it better to have a seed initialized in the constructor of a random-number-generator instance created each time it's needed?
http://www.gamedev.net/user/186864-alundra/?tab=topics
CC-MAIN-2016-18
refinedweb
256
70.53
== Revision History for Mail::IMAPClient Changes from 3.17_01 to ? made by Phil Pearl (Lobbes) Changes from 2.99_01 to 3.16 made by Mark Overmeer Changes from 0.09 to 2.99_01 made by David Kernen - Potential compatibility issues from 3.17+ highlighted with '*' version 3.32: Fri, Aug 10, 2012 4:43:24 PM - document RFC2087 quota related calls [Mathias Reitinger] documentation request - rt.cpan.org#78474: idle/idle_data documentation error [Dima Kogan] - Quote()/Massage() now uses literals for non ascii data [Mathias Reitinger] reported issues with utf8 data in password - use Quote()/Massage() consistently now in: login() proxyauth() deleteacl() setacl() listrights() rename() - documented deleteacl() and other minor pod cleanup - ran Mail::IMAPClient::BodyStructure through perltidy - update year in README/pod to 2012 - rt.cpan.org#74733: Fails with Parse::RecDescent >= 1.966_002 rt.cpan.org#74593: Recent changes break Module::ExtractUse and ... [ANDK, TEAM, SREZIC, NBEBOUT at CPAN and nine from detonation] - Makefile.PL avoid buggy Parse::RecDescent 1.966_002 until 1.967_009 - rt.cpan.org#76989: Mail::IMAPClient::BodyStructure usage/docs [Pierluigi Frullani] - fix incorrect documentation on new() - lots of doc verbiage updates version 3.31: Mon, Mar 19, 2012 11:11:11 AM - rt.cpan.org#74799: Support for partial data responses in fetch_hash [Philip Garrett] + bonus: cleaner handling of BODY.PEEK responses - properly handle ALL|FULL|FAST fetch macros in fetch_hash version 3.30: Fri Nov 11 09:37:00 EST 2011 - rt.cpan.org#72347: Starttls array ref argument dereferenced twice [Jonathan Buhacoff] - during connect(): Port now defaults 143 or 993 if $self->Ssl [Kodi Arfer] - stop reconnect deep recursion if server disconnects on login [Luca Ferrario] - reconnect() now returns 1 on success; on error undef or 0=recursive - handle EBADF from syswrite in _send_bytes - rt.cpan.org#67263: add RFC4978 IMAP COMPRESS Extension support [SDIZ] + new method: compress() + new attributes: 
Compress Readmoremethod - general code cleanup: + new() now always returns $self or undef (never $sock any more) + Socket() now always return a socket or undef + login() now always return $self or undef + _read_more() will now use Readmoremethod if set - missing second arg '' for encode_base64 causing AUTHENTICATE PLAIN to fail on lines longer than 76 characters [Yoshiho Yoshida] version 3.29: Tue Aug 9 00:33:52 EDT 2011 - rt.cpan.org#69876: ENVELOPE as part of fetch_hash convenience method [Chris Huttman] + added Mail::IMAPClient::BodyStructure::Envelope->parse_string($str) convenience method for handling ENVELOPE data from fetch_hash - rt.cpan.org#68310: folders() should not call exists()/STATUS [Gilles Lamiral] - affects folders() and subscribed() methods + use selectable() instead of exists() in call - consider removing extra call to folders()/subscribed() + ensure separator is set properly in folders() + selectable now properly checks for \Noselect flag + update folders() POD to match implementation behavior - rt.cpan.org#68648: [patch]: CAPABILITY after authenticate [Stef Simoens] + delete cache after State set to Authenticate - State() is no longer an auto-generated method - rt.cpan.org#68755: provided socket loses blocking in 3.19-3.28 [Martin Schmitt] version 3.28:] version 3.27: Sun Feb 13 14:37:27 EST 2011 - rt.cpan.org#65694: migrate fails [Erik Colson] - rt.cpan.org#65470: uninitialized warning in message_to_file [Gilles Lamiral, Mark Hedges] - rt.cpan.org#61835: (DOC) in LIST context undef may be returned [Stefan Völkel] + warn/highlight behavior in docs Errors section - updated documentation + migrate() documentation fixed + moved Custom Authentication Mechanisms toward end + recommended use of scalar context due to historical API behavior version 3.26: Mon Jan 31 22:15:04 EST 2011 - *require Perl 5.8.1 as constant use is invalid on 5.6 - rt.cpan.org#63524: fetch_hash() parse errors [Brian Kroth] + fixed handling of LITERAL values in response + fixed 
handling of field names with a dash (e.g. X-SAVEDATE) + fetch_hash now uses Escaped_results() method - *fixed Escaped_results() to properly join LITERAL data with the data that comes before and after it - *rt.cpan.org#60945: append_file() does not interpret $date as expected [Jason Long] $date should now be 1 (to use the file mtime) or a valid RFC3501 date - *rt.cpan.org#61292: memory consumption with message_string()/append() rt.cpan.org#61806: Major problem with one function in IMAPClient [Gilles Lamiral, Casey Duquette] + use @_ / $_[<num>] in critical places to avoid pass by value memory overhead + use in memory files in a few critical places as that code path in Mail::IMAPClient is significantly more efficient with internal memory usage + *new (undocumented/do-not-use-without-good-reason) attribute Maxappendstringlength used by append() and append_string() holds the size (in bytes, default 1 MiB) that triggers when message SCALAR(s) passed to these methods will be treated as an in memory file. This attribute will likely be removed in a future version. 
+ *append() and append_string() now call append_file() and use an in memory file when length($message) is greater than Maxappendstringlength; other minor code cleanup + *message_string() now calls message_to_file() and uses an in memory file + refactor message_to_file() to use internal _imap_uid_command() + update _read_line() to be more efficient w/CPU in critical section by pulling isa() checks out of main loop also conserve memory by not storing an extra copy of LITERAL data if the data was put into a filehandle from the caller + Memory/working set (KB) comparison (Perl 5.10 cygwin Win7): - test: message_string on 6.1M msg and then append 6.1M msg

  version | start | after message_string | after append
  --------+-------+----------------------+-------------
  2.2.9   |  7624 |                74404 |       131896
  3.25    |  7716 |                74408 |       156532
  3.26    |  7684 |                33372 |        42608

- minor arg cleanup of noop() and tag_and_run() - rt.cpan.org#63444: relax get_envelope(), allow empty reply-to [Nikolay Kravchenko] - rt.cpan.org#61068: append_string can invalidate a good $date - rt.cpan.org#60045: Logout error if delay between BYE and tagged OK [Armin Wolfermann] no longer set an error when this happens - rt.cpan.org#61062: migrate() errors [Johan Ekenberg] + rewrote migrate() to be functional and simple - Update README and cleanup several old or out of date files version 3.25: Fri May 28 00:07:40 EDT 2010 - fix body_string parsing bug and added tests in t/body_string.t [Heiko Schlittermann] - rt.cpan.org#57661: uninitialized value warning in IMAPClient::thread [Max Bowsher] - rt.cpan.org#57337: Correctly handle multiparts in BodyStructure.pm [Robert Norris] fixes in Mail::IMAPClient::BodyStructure::bodystructure for bugs still in release 3.24 - rt.cpan.org#57659: install fails when using cPanel GUI [Ken Parisi] hack Makefile.PL to use alarm() and timeout prompt() gracefully - relax t/basic.t logout() error check (allow 'BYE' instead of 'OK') - left examples/idle.pl out of MANIFEST for 3.24 version 3.24: Fri
May 7 17:02:35 EDT 2010 - rt.cpan.org#48912: wrong part numbers in multipart messages [Dmitry Bigunyak, Gabor Leszlauer] - fix Mail::IMAPClient::BodyStructure::bodystructure to properly assign parts for messages using multipart and also include .TEXT parts as well (still not including top level HEADER and TEXT though - bug?) - allow _load_module() to set $@ and LastError if module load fails - rt.cpan.org#55527: [no] disconnect during DESTROY [Stefan Seifert] - updated logout documentation to correctly state that DESTROY is not used to force an automatic logout on DESTROY despite documentation that indicated otherwise - update append* documentation to match current implementation - rt.cpan.org#55898: append_file can send too many bytes [Jeremy Robst] - avoid append_file corner cases operating on lines instead of buffers - use binmode on filehandle in append_file - add tests to t/basic.t for append_file - rt.cpan.org#57048: _quote_search() using $_ in loop instead of $v [Matthaus Kiem] - added examples/idle.pl program showing use of idle and idle_data - idle_data() should not read/block after server returns data [Marc Thielemann] - idle_data() _get_response regexp updated to not match errors - idle_data() now uses a timeout of 0 by default as documented - _get_response() now checks for defined($code) to allow $code==0 version 3.23: Fri Jan 29 00:39:27 EST 2010 - new beta idle_data() method to retrieve untagged messages during idle similar to method suggested by Daniel Richard G - added/updated documentation for idle, idle_data, and done - rt.cpan.org#53998: fix NTLM auth: call ntlm with challenge string [Dragoslav Mlakar] - report the return value from select/_read_more on errors - logout() again returns the success/failure of the LOGOUT command - set/return error when $response->() returns undef in authenticate() - new internal method _load_module() centralizing some 'require' calls - localize use $@ in several places to avoid stomping on global val - refactor code 
calling _read_more() to centralize error handling version 3.22: Thu Jan 21 15:25:54 EST 2010 - rt.cpan.org#52313: Getting read errors if Fast_io is set to 1 [Jukka Huhta] - updated Maxtemperrors docs related to EAGAIN handling - new starttls() method and Starttls attribute to support STARTTLS - update parse_headers to try harder to find UID in fetch response version 3.21: Tue Sep 22 19:45:13 EDT 2009 - rt.cpan.org#49691: rewrite of fetch_hash to resolve several issues [Robert Norris] includes new tests via t/fetch_hash.t - rt.cpan.org#48980: (enhancement) add support for XLIST extension [Robert Norris] - rt.cpan.org#49024: NIL personal name returned by *_addresses methods [Dmitry Bigunyak] - rt.cpan.org#49401: IMAPClient expunge fails (unless folder arg used) [Gary Baluha] - update/clarify close and expunge documentation a little version 3.20: Fri Aug 21 17:40:40 EDT 2009 - added file/tests in t/simple.t - added methods Rfc3501_date/Rfc3501_datetime used by deprecated methods Rfc2060_date/Rfc2060_datetime rt.cpan.org#48510: Rfc3501_date/Rfc3501_datetime methods do not exist [sedmonds] - login() hack to quote an empty password rt.cpan.org#48107: Cannot LOGIN with empty password [skunk] version 3.19: Fri Jun 19 14:59:15 EDT 2009 - *search() backwards compat: caller must quote single arg properly rt.cpan.org#47044: $imap->search does not return [ekuemmer] - cleanup regexp in _send_line() - reduce extra newlines injected by _debug() version 3.19_02: Tue Jun 9 00:47:52 EDT 2009 - _list_or_lsub() now calls _list_response_preprocess so consumers of this method no longer need to deal with how LITERAL data is represented in the returned data - update _list_or_lsub_response_parse handling of folder names that came back as literal data - update comments related to _list_response_preprocess version 3.19_01: Fri Jun 5 15:45:05 EDT 2009 - make parse_headers more robust to errors/non-header data version 3.18: Wed Jun 3 23:07:12 EDT 2009 - enhance fetch_hash to enable caller to
specify list of messages suggestion by [Eugene Mamaev] - better handling of untagged BYE response version 3.18_02: Wed May 27 10:02:24 EDT 2009 - *new attribute Ssl, when true causes IO::Socket::SSL to be used instead of IO::Socket::INET. This change allows Reconnectretry logic to work on SSL connections too. - have LastError cluck() if setting error to NO not connected - handle errors from imap4rev1() in multiple places - Reconnectretry/_imap_command enhancements/fixes + only run command if IsConnected + keep a temporary history of LastError(s) + sets LastError to NO not connected if ! IsConnected + retry =~ timeout|socket closed|* BYE| NO not connected - _imap_command_do reduce data logged when using APPEND - fetch() now handles messages() errors - thread(), has_capability(), capability() better error checking - authenticate() now uses _imap_command for retry mechanism - size() now sets LastError when no RFC822.SIZE is found version 3.18_01: Fri May 22 17:08:00 EDT 2009 - *update several methods to use common _get_response() method - refactor most code handling imap responses - new internal method _get_response() to reduce code duplication - more regex cleanup $CR/$LF (not \r\n) per perlport/IMAP spec - major cleanup/fix of append_file for rt.cpan.org#42434 version 3.17: Thu May 21 01:40:08 EDT 2009 - ran all test code and lib/Mail/IMAPClient.pm through Perl::Tidy - plan on using perltidy to standardize format going forward - added 13 tests to t/basic.t to cover more methods - fix some broken tests - update Makefile.PL to provide info about optional modules version 3.17_05: Tue May 19 11:04:28 EDT 2009 - *reset LastError for every call to _imap_command_do() - *run() - use _imap_command_do(), return arrayref in scalar context - *tag_and_run() - return arrayref in scalar context - *done() - use _imap_command_do(), return arrayref in scalar context - *search() now returns empty arrayref not undef if no matches found - _imap_command_do() made more flexible to avoid 
code duplication - _list_response_parse renamed _list_or_lsub_response_parse - updated POD with new/updated behavior - append_string() now uses _imap_command_do() for Reconnectretry - internally use defined return values instead of only LastError() - run() updated to use same/similar code to _imap_command_do() - make several return statements more consistent - delete() now unsets current Folder attribute on success version 3.17_04: Fri May 15 17:18:52 EDT 2009 - updated POD with new reconnect() method and Reconnectretry attr - *new _imap_command() after renaming old one to _imap_command_do support retrying commands X times on EPIPE/ECONNRESET errors - *new Reconnectretry attribute to control number of retry attempts (default is 0 - no reconnect/retry) - *added reconnect() method to support Reconnectretry attr reconnect and updated _imap_command() method - *_imap_command_do will return undef if command given has no TAG - fixed message_string() logic/errors for failed size() calls - local-ize $! anywhere we use Carp routines as older versions of Carp could cause $! to be reset - several 'BUG?' comments -- raising red flag for future work - minor cleanup of sort() logic - reduce duplicate code, hopefully improved error handling: new _list_or_lsub() for list() and lsub() new _folders_or_subscribed() for folders() and subscribed() + new _list_response_preprocess() keeping old code/logic in for now, but may remove in the future (for buggy servers?)
- some updates for migrate() but this method needs much work - body_string() now handles fetch() errors - tag_and_run now handles _imap_command() errors - changed non-timeout CORE::select() timeout from 0.001 to 0.025 - minor cleanup of _read_line() error handling/debug output - get_bodystructure() handle more fetch() errors - expunge() handle select() errors - restore_message() handle store() errors - uidvalidity() handle status() errors - uidnext() handle status() errors - is_parent() use _list_response_preprocess() for parsing - move() send delete_message() errors to stderr - simplify size() method version 3.17_03: Fri May 8 16:37:08 EDT 2009 - *added uidexpunge() for UID EXPUNGE UIDPLUS support - *search() now DWIM: auto-escapes args, SCALAR refs not escaped rt.cpan.org#44936 [cjhenck] - _quote_search() provides auto-escape capability for search() - many POD updates as well as some major reformatting (incomplete) - login now fails if passwd and user are not defined - _sysread(): $self was in args to 'Readmethod' twice - authenticate() return undef on scheme eq "" or LOGIN - "require" instead of "use" Digest::HMAC_MD5 for CRAM-MD5 support version 3.17_02: Fri May 1 16:44:21 EDT 2009 - cleanup of use/imported data - use Socket $CRLF in many cases not \r\n per perlport/IMAP spec - *new Keepalive attribute used via new()/Socket() enables SO_KEEPALIVE - LastError now uses Carp::confess for stack trace if Debug is true - Maxcommandlength now defaults to 1000 per RFC2683 section 3.2.1.5 - added noop() to support IMAP NOOP - _imap_command now sets LastError if an OK/$good response is not seen - fixed fetch_hash() to return FLAGS as "" not () when no FLAGS set version 3.17_01: Fri Apr 24 18:36:45 EDT 2009 - *new attribute Maxcommandlength used by fetch() to limit length of commands sent to a server.
This should remove the need for utilities like imapsync to create their own split() functions and instead allows Mail::IMAPClient to hopefully "do the right thing" - remove extra 'use' calls for Carp and Data::Dumper - _read_more() improperly initialized vector causing select errors, thus timeouts were not working properly (now they work...) - *change default timeout 30s => 600s: 30s seems too short in practice - *explicit import of encode_base64 and decode_base64 from MIME::Base64 note the code forces a disconnect from the server on timeout as we can not easily recover from this situation right now in the code - *numerous changes of error messages, removing superfluous text and now relying on LastError instead of $! or $@ when appropriate - separator(): + now return undef if an error occurred for NAMESPACE or LIST calls + *no longer defaults to '/' if NAMESPACE call does not succeed - new internal _list_response_parse() method for parsing LIST responses - handle ECONNRESET errors on syswrite and mark connection as Unconnected + error "Connection lost" changed to "Write failed" - previously untrapped syswrite errors now generate "Write failed" errors - fix in _imap_command where LastError would be erroneously set on LOGOUT - _record() no longer tries to infer errors based on data being "recorded" - _send_line() + cleanup in watching for: +|NO|BAD|BYE + now sets LastError when an unexpected response is seen - _read_line() + handle select errors instead of ignoring them + forcefully _disconnect() on timeouts as this breaks app logic + reduced duplication of code on error handling - added _disconnect() method to brute force drop connections on timeout - added _list_response_parse() to reduce duplicate code for LIST parsing - added _split_sequence() to support new Maxcommandlength argument - fetch() + use new Maxcommandlength to split a request into multiple subrequests then aggregate results before passing them back to the caller - fetch_hash(): added checks for failed
IMAP commands - parse_headers() + properly check if fetch fails + handle cases where $header and/or $field are not defined - size(): + return undef if LastError is set + fix case where SIZE is not found and return undef as expected version 3.16: Mon Apr 6 12:03:41 CEST 2009 Fixes: - set LastError when the imap_command receives an unexpected 'BYE' answer. rt.cpan.org#44762 [Phil Lobbes] - handle SIGPIPE cleanly. rt.cpan.org#43414 [Phil Lobbes] - improve handling of quotes in folder names rt.cpan.org#43445 [Phil Lobbes] - do not use $socket->eof(), because IO::Socket::SSL does not support it. rt.cpan.org#43415 [Phil Lobbes] - remove excessive reconfiguration of fastio in _read_line() rt.cpan.org#43413 [Phil Lobbes] Improvements: - remove expired docs about automatically created calls, which do not exist since 3.00 - remove verbose explanation about reporting bugs. version 3.15: Fri Mar 20 13:20:39 CET 2009 Fixes: - manual-page was using POD syntax incorrectly, which caused many broken links on search.cpan.org rt.cpan.org #44212 [R Hubbell] version 3.14: Mon Feb 16 14:18:09 CET 2009 Fixes: - isparent() when list() returns nothing. rt.cpan.org#42932 [Phil Lobbes] - Quote more characters in Massage(): add CTL, [, ], % and * rt.cpan.org#42932 [Phil Lobbes] - message_string() will only complain about a difference between reported message size and actually received size; it will not try to correct it anymore. rt.cpan.org#42987 [Phil Lobbes] - No error when empty text in append_string() rt.cpan.org#42987 [Phil Lobbes] - login() should not try authenticate() if auth is empty or undef rt.cpan.org#43277 [Phil Lobbes] version 3.13: Thu Jan 15 10:29:04 CET 2009 Fixes: - "othermessage" in bodystructure parser should expect an MD5, not bodyparams. Fix and test(!) by [Michael Stok] Improvement: - minor simplifications in code of run() and _imap_command() - get_bodystructure trace message fix [Michael Stok] - add Domain option for NTLM authentication.
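The 3.14 Massage() change quotes folder names containing CTL characters, "[", "]", "%", and "*" because RFC 3501 does not allow those in a bare atom. A simplified stand-in for that kind of quoting follows; quote_folder is an invented name, and the real Massage() handles more cases (for example switching to literals):

```perl
use strict;
use warnings;

# Hypothetical helper: quote a folder name for use in an IMAP command
# when it contains characters that are not atom-safe per RFC 3501.
sub quote_folder {
    my ($name) = @_;
    # CTL chars, whitespace, quotes, and the list wildcards/specials
    if ( $name =~ /[\x00-\x1f\x7f\s"\\()\[\]{}%*]/ ) {
        my $quoted = $name;
        $quoted =~ s/([\\"])/\\$1/g;    # escape backslash and double-quote
        return qq{"$quoted"};
    }
    return $name;                       # plain atom, no quoting needed
}

print quote_folder('INBOX'),       "\n";   # INBOX
print quote_folder('My [Drafts]'), "\n";   # "My [Drafts]"
```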
version 3.12: Mon Nov 24 15:34:58 CET 2008 Improvement: - major performance improvement in append_message(), avoiding reading the whole file in memory as the docs promised but the code didn't do. [David Podolsky] version 3.11: Wed Oct 8 10:57:31 CEST 2008 Fixes: - some SSL connections process more bytes than needed, which made the select() timeout. Nice fix by [David Sansome] rt.cpan.org#39776 Improvements: - improved example imap_to_mbox by [Ralph Sobek] version 3.10: Sun Aug 24 21:26:27 CEST 2008 Fixes: - INET socket scope error, introduced by 3.09 rt.cpan.org#38689 [Matt Moen] version 3.09: Fri Aug 22 16:38:25 CEST 2008 Fixes: - return status of append_message reversed. rt.cpan.org#36726 [Jakob Hirsch] - no line-breaks in base64 encoded strings when logging-in rt.cpan.org#36879 [David Jonas] - fix MD5 authentication. rt.cpan.org#38654 [Thomas Jarosch] Improvements: - extensions and clean-ups in examples/imap_to_mbox.pl by [Ralph Sobek] - an absolute path as Server setting will open a local ::UNIX socket, not an ::INET rt.cpan.org#38655 [Thomas Jarosch] version 3.08: Tue Jun 3 09:36:24 CEST 2008 Fixes: - message_to_file used wrong command. rt.cpan.org#36184 [Parse Int] - oops, distribution released with OODoc/oodist, not make dist. [Randy Harmon] - fix parsing of body-structure information for multi-parts. rt.cpan.org#36279 [Doug Claar] Improvements: - Updated README and TODO (Was 'Todo') version 3.07: Mon Apr 28 09:17:30 CEST 2008 Fixes: - expunge with no folder specified produced "use of undef" error. Fixed by [André Warnier] - additional arguments for create [Michael Bacon] - accepts LIST answer with multiple lines [Michael Bacon] - ::BodyStructure::_address() should be _addresses() Fixed by rt.cpan.org#35471 [Brian Kelly] version 3.06: Mon Apr 14 23:44:03 CEST 2008 Fixes: - expunge without argument must use selected folder. [John W] - expunge with folder does not select it.
[John W] - the documentation still spoke about "autogenerated methods", but they were removed with 2.99 [John W] - append_string needs LF -> CRLF translations, for some servers. rt.cpan.org #35031 [Jonathan Kamens] Improvements: - added ::setquota(), thanks to [Jappe Reuling] version 3.05: Wed Feb 20 08:59:37 CET 2008 Fixes: - match ENVELOPE and BODYSTRUCTURE more strictly in the grammar, to avoid confusion. [Zach Levow] - get_envelope and get_bodystructure failed for servers which did not return the whole answer in one piece. [Zach Levow] - do not produce parser errors when get_envelope does not return an envelope. [Zach Levow] - PLAIN login response possibly solely a '+' [Zach] and [Nick] version 3.04: Fri Jan 25 09:25:51 CET 2008 Fixes: - read_header fix for UID on Windows Server 2003. rt.cpan.org#32398 [Michiel Stelman] Improvements: - doc update on authentication, by [Thomas Jarosch] version 3.03: Wed Jan 9 22:11:36 CET 2008 Fixes: - LIST (f.i. used by folders()) did not return anything when the passed argument had a trailing separator. [Gunther Heintze] - Rfc2060_datetime() must include a zone. rt.cpan.org#31971 [David Golden] - folders() uses LIST, and then calls a STATUS on each of the names found. This is superfluous, and will cause problems when the STATUS fails... for instance because of ACL limitations on the sub-folder. rt.cpan.org#31962 [Thomas Jarosch] - fixed a zillion problems in the BodyStructure parser. The original author did not understand parsing, nor Perl. - part numbering wrong when nested messages contained multiparts Improvements: - implementation of DIGEST-MD5 authentication [Thomas Jarosch] - removed call for status() in Massage(), which hopefully speeds-up things without destroying anything. It removed a possible deep recursion, which no-one reported (so should be ok to remove it) - simplified folders() algorithm. - merged folder commands, like subscribe, into one.
- added unsubscribe() rt.cpan.org#31268 [G Miller] - lazy-load Digest::HMAC_MD5 version 3.02: Wed Dec 5 21:33:17 CET 2007 Fixes: - Another attempt to get FETCH UID right. Patch by [David Golden] version 3.01: Wed Dec 5 09:55:43 CET 2007 Changes: - removed version number from ::BodyStructure Fixes: - quote password at login. rt.cpan.org#31035 [Andy Harriston] - empty return of flags command should be empty list, not undef. rt.cpan.org#31195 [David Golden] - UID command does not work with folder management commands rt.cpan.org#31182 [Robbert Norris] - _read_line simplifications avoid timeouts. rt.cpan.org#31221 [Robbert Norris] - FETCH did not detect the UID of a message anymore. [David Golden] Improvements: - proxyauth for SUN/iPlanet/NetScape IMAP servers. patch by rt.cpan.org#31152 [Robbert Norris] - use grep instead of map in one occasion in MessageSet.pm [Yves Orton] version 3.00: Wed Nov 28 09:56:54 CET 2007 Fixes: - "${peek}[]" should be "$peek\[]" for perl 5.6.1 rt.cpan.org#30900 [Gerald Richter] version 2.99_07: Wed Nov 14 09:54:46 CET 2007 Fixes: - forgot to update the translate grammar. version 2.99_06: Mon Nov 12 23:21:58 CET 2007 Fixes: - body structure can have any number of optional parameters. Patch by [Gerald Richter]. - get_bodystructure did not take the output correctly [Gerald Richter] - parser of body-structure did not handle optional body parameters Patch by [Gerald Richter], rt.cpan.org#4479 [Geoffrey D. Bennet] version 2.99_05: Mon Nov 12 00:17:42 CET 2007 Fixes: - pod error in MessageSet.pm - folders() without argument failed. [Gerald Richter] Improvements: - better use of format syntax in date formatting. - Rfc2060_datetime also contains the time. - append_file() now has options to pass flags and time of file in one go.
[Thomas Jarosch] version 2.99_04: Sat Nov 10 20:55:18 CET 2007 Changes: - Simplified initiation of IMAP object with own Socket with a new option: RawSocket [Flavio Poletti] Fixes: - fixed read_line [Flavio Poletti] - fixed test-run in t/basic.t [Flavio Poletti] version 2.99_03: Thu Nov 1 12:36:44 CET 2007 Fixes: - Remove note about optional Parse::RecDescent by Makefile.PL; it is not optional anymore Improvements: - When syswrite() returns 0, that might be caused by an error as well. Take the timeout/maxtemperrors track. rt.cpan.org#4701 [C Meyer] - add NTLM support for logging-in, cleanly integrated. Requires the user to install Authen::NTLM. version 2.99_02: Fri Oct 26 11:47:35 CEST 2007 The whole Mail::IMAPClient was rewritten, hopefully without breaking the interface. Nearly no line was untouched. The following things happened: - use warnings, use strict everywhere - removed many lines which were commented out, over the years - $self->_debug if $self->Debug checked debug flag twice - $self->LogError calls were quite inconsistent wrt $@ and carp - consistent layout, changed sporadic tabs into blanks - consistent calling conventions - 0x0d 0x0a is always \r\n - zillions of minor syntactical improvements - a few major algorithmic rewrites to simplify the code, still many opportunities for improvements. - expanded "smart" accessor methods, search abbreviations, and autoloaded methods into separate subs. In total much shorter, and certainly more understandable! - fixed many potential bugs. - labeled some weird things with #???? Over 1000 lines (30%!) and 25kB smaller in size Needs to be tested!!!! Volunteers? Fixes: - Exchange 2007 only works with new parameter: IgnoreSizeErrors rt.cpan.org#28933 [Dregan], #5297 [Kevin P. Fleming] - Passed socket did not get selected. debian bug #401144, rt.cpan.org# [Alexander Zanger], #8480 [Karl Gaissmaier], #8481 [Karl Gaissmaier], #7298 [Herbert Engelmann] - Separator not correctly extracted from list command.
rt.cpan.org#9236 [Eugene Koontz], #4662 [Rasjid] - migrate() Massage'd foldername twice rt.cpan.org#20703 [Peter J. Holzer] - migrate() could loop because error in regexp. rt.cpan.org#20703 [Peter J. Holzer] - migrate() append_string result not tested. rt.cpan.org#8577 [guest] - Failing fetch() returned undef, not empty list. rt.cpan.org#18361 [Robert Terzi] - Fix "use of uninitialised" warning when expunge is called rt.cpan.org#15002 [Matt Jackson] - Fix count subfolders in is_parent, regexp did not take care of regex special characters in foldername and separator. rt.cpan.org#12883 [Mike Porter] - In fetch_hash(), the capturing of UID was too complicated (and simply wrong) rt.cpan.org#9341 [Gilles Lamiral] - overload in MessageSet treated the 3rd arg (reverse) as message-set. - do not send the password on a different line than the username in LOGIN. Suggested by many people, amongst them rt.cpan.org#4449 [Lars Uffmann] - select() with $timeout==0 (no timeout) returns immediately. Should be 'undef' as 4th select parameter. rt.cpan.org#5962 [Colin Robertson] and [Jules Agee] - examine() remembers Massage()d folder name, not the unescaped version. rt.cpan.org#7859 [guest] Improvements: - PREAUTH support by rt.cpan.org#17693 [Danny Siu] - Option "SupportedFlags", useful when the source supports different flags than the peer in migrate(). Requested by rt.cpan.org#12961 [Don Christensen] - Fast_io did not clear $@ on unimportant errors. rt.cpan.org#9835 [guest] and #11220 [Brian Helterline] - Digest::HMAC_MD5 and MIME::Base64 are now prerequisites. rt.cpan.org#6391 [David Greaves] - PLAIN (SASL) authentication added, option Proxy rt.cpan.org#5706 [Carl Provencher] - removed Bodystructure.grammar and IMAPClient.cleanup from dist. - reworked Bodystructure and MessageSet as well. - EnableServerResponseInLiteral now autodetected (hence ignored) version 2.99_01: After 4 years of silence, Mark Overmeer took over maintenance. David Kernen could not be reached.
Please let him contact the new maintainer. A considerable clean-up took place, fixing bugs and adapting the distribution to current best practices. - use "prompt" in Makefile.PL, to please CPAN-testers - removed old Parse::RecDescent grammars - include Artistic and Copying (GPL) into COPYRIGHT file - remove INSTALL_perl5.80 - removed all the separate Makefile.PLs and test directories - removed the hard-copy of all involved RFCs: there are better sources for those. - converted tests to use "Test::More" - Authmechanism eq 'LOGIN' understood. - test for CRAM-MD5 removed, because it conflicts with test params from Makefile.PL - test for fast-io removed, it is Perl core functionality - require IO::Socket::INET 1.26 to avoid Port number work-around. - Parse::RecDescent is required, and the grammars are pre-parsed in the distribution. This makes the whole installation process a lot easier. - Update Todo, and many other texts. - added pod tester in t/pod.t - cleaned-up the rt.cpan.org bug-list from spam. The next release will contain fixes for the real reports. Changes in version 2.2.9 ------------------------ Fixed problem in migrate that caused problems in versions of perl earlier than 5.6. Thanks go to Steven Roberts for reporting the problem and identifying its cause. Fixed problem in the make process that caused tests for BodyStructure subclass to fail if the grammar had been compiled under a different version of Parse::RecDescent. This problem was detected by the dedicated people at testers@cpan.org. Fixed a compatibility problem using Parse::RecDescent version 1.94. This caused BodyStructure and Thread to fail for 5.8.x users. A number of people reported this bug to CPAN but it took me a while to realize what was going on. Really it took me a while to realize my Parse::RecDescent was out of date. ;-) Now this module is delivered with two versions of each of the affected grammars and Makefile.PL determines which version to use.
Upgrading to Parse::RecDescent 1.94 will require you to re-run Makefile.PL and reinstall Mail::IMAPClient. Changes in version 2.2.8 ------------------------ Changed the login method so that it always sends the password as a literal to get around problem 2544 reported by Phil Tracy which caused passwords containing asterisks to fail on some systems (but not any of mine...). Good catch, Phil. Added a new example that demonstrates the use of imtest (a utility that comes with Cyrus IMAP) and Mail::IMAPClient together. The example uses imtest to do secure authentication and then "passes" the connection over to Mail::IMAPClient (but imtest is still brokering the encryption/decryption). This example comes from an idea of Tara L. Andrews', whose brainstorm it was to use imtest to broker secure connections. (But I still want to get encryption working with Mail::IMAPClient some day!) Fixed an error in which a "+" was used as a concatenation operator instead of a ".". Thanks to Andrew Bramble for reporting this, even though he mistakenly identified it as a "typo". It is not a typo; a plus sign is the correct concatenation operator, as any decent Java book will tell you ;-) Fixed an error in the login method when the password contains a special character (such as an asterisk.) Thanks to Phil Tracey for reporting this bug. Fixed some bugs in _send_line (the "O" side of the I/O engine) that were reported by Danny Smith. Fixed a bug in the migrate method in the optimization code (which gets called when socket writes are delayed due to a slow or busy target host, aka EAGAIN errors). Thanks to Pedro Carvalho for identifying this bug and its cause. Fixed a bug in migrate that caused migration of unread messages to fail. This was due to the way Mail::IMAPClient's migrate method would try to send an empty list of flags to the target server in the APPEND. Thanks to Stephen Fralich at Syracuse University for reporting this bug.
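The 2.2.8 login change above works because an IMAP literal announces its byte count up front, so the password needs no quoting or escaping no matter which characters it contains. A hand-rolled sketch of what that looks like on the wire (the tag and credentials here are invented):

```perl
use strict;
use warnings;

my $tag      = 'a001';     # illustrative command tag
my $user     = 'fred';
my $password = 'secr*t';   # '*' would be unsafe in a bare atom

# A literal is "{<byte count>}" CRLF followed by the raw bytes.
# (In the real exchange the server replies "+ " between the two lines
# to prompt the client for the literal data.)
my $line1 = sprintf "%s LOGIN %s {%d}\r\n", $tag, $user, length $password;
my $line2 = "$password\r\n";

print $line1;   # a001 LOGIN fred {6}
print $line2;   # secr*t
```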
Fixed another bug in the migrate method that caused flags to get lost. Thanks go to Jean-Michel Besnard for reporting this. Fixed a bug in get_envelope that caused it to fail under certain conditions. Thanks go to Bob Brown for reporting this bug. Changes in version 2.2.7 ------------------------ Added some new parameters to support alternate authentication mechanisms: Prewritemethod Readmethod Mail::IMAPClient has supported cram-md5 authentication "out of the box" as of 2.2.6 (courtesy of Ville Skyttä). I also have digest-md5 working in my lab with quality of protection levels "auth" and "integrity", but not "confidentiality". I'm hoping to get the confidentiality part working soon but so far have only managed to authenticate, send an encrypted command, and receive and decrypt the response. This may sound like enough but I can't seem to send a second command or receive a second response;-( In any event 2.2.8 will support at least qop=auth and qop=auth-int but maybe not qop=auth-conf. Fixed a bug reported by Adrian that caused get_bodystructure to fail if the server returned a bodystructure with an embedded literal. Also fixed the same bug in get_envelope, so I guess now everyone knows that get_envelope was just a tinkered-with copy of get_bodystructure... Fixed two related bugs in Parser.pm that caused get_bodystructure and get_envelope to fail if the UID nnnnn part of a fetch response follows all the other stuff. Thanks to Raphaël Langella for reporting this bug. Enhanced several methods to use MessageSets when the Ranges parameter is true. There are still more methods that need to be retrofitted to take advantage of the Range method (and its underlying MessageSet object).
In the meantime, if you need to get the functionality of the shorter message ranges provided by the Range method from a method that does not honor the Ranges parameter, then you should a) create a message set by passing the messages to the Range method and then b) pass the scalar as a string to the method you want to use. For example, if you want to move a whole lot of messages to Trash, do something like this:

  my $range = $imap->Range(scalar($imap->search("SentBefore", "01-Jan-2000")));
  $imap->move("Trash", "$range");

This will cause the range object to stringify out to what looks like a non-reference scalar before the move method gets the argument. If you omit the quotes around "$range" then this won't work. Fixed a bug in the list method that caused LIST "" "" to fail miserably. Thanks to John W Sopko Jr. for reporting this bug. Fixed a bug in the test suite that caused the cram-md5 tests to fail if you are not running the extended tests. (Introduced in 2.2.6) Fixed a bug that affected users on platforms that do not support fcntl (i.e. NT). Thanks to Raphaël Langella for reporting this bug. Changes in version 2.2.6 ------------------------ Fixed a bug in the migrate method that caused the internaldate of migrated messages to sometimes be wrong. Credit goes to Jen Wu for identifying both bug and fix. Added a new method, "get_header", to provide a short-cut for a common use of parse_headers. Added two other methods, "subject" and "date", to provide shortcuts to get_header. Changed the Mail::IMAPClient::MessageSet module to override array dereferencing. (See below.) Changed fetch and search methods to use the Range method (and thus the Mail::IMAPClient::MessageSet module) for messages. The fetch method will use MessageSet objects all the time, but the search method will only return MessageSet objects if you specify "Ranges => 1" (with Ranges being a new parameter).
The default will be "Ranges => 0" (which preserves the old behavior) but this default will go away in some future release.

There should be no need to override the fetch method's new behavior, since it will be transparent to you unless you tend to fetch a lot of messages at once, in which case your fetches may be faster and perhaps less likely to fail due to the request exceeding your server's line limit.

If you set the Ranges parameter to true, then you still should not see a difference, because a) when fetch is called in a list context you will not get a MessageSet object, you'll get the same list as always, and b) the MessageSet objects now override array de-referencing operations, so if you treat the returned MessageSet object as if it were an array then the object will humour you and act like a reference to an array of message sequence numbers or message UIDs.

Also changed the flags method to use the Range method. This should also be transparent, since the method's arguments and return values do not change.

Added built-in support for CRAM-MD5 authentication. This authentication method will in this release be used only when requested. In future releases the default authentication will probably be the strongest authentication supported "out of the box" that is available on your server. Since CRAM-MD5 is the only authentication other than plain text that is currently supported "out of the box", it will be the default authentication mechanism for any server that supports it. See the pod for the Authmechanism and Authcallback parameters (which were also added in this release) and the doc for the authenticate method (which has been around a while). Many thanks to Ville Skyttä for providing the code that makes up the heart of this new support, as well as to Gisle Aas for the Digest::HMAC_MD5 and MIME::Base64 modules.

Made minor tweaks to the documentation. Again. (Will it ever be 100% right?)
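The CRAM-MD5 exchange behind the new Authmechanism support is small enough to sketch. What follows is an illustrative Python rewrite (the module itself does this in Perl via Digest::HMAC_MD5 and MIME::Base64); the function name `cram_md5_response` is made up for this example:

```python
import base64
import hashlib
import hmac

def cram_md5_response(user, password, b64_challenge):
    """Build the client reply to a server CRAM-MD5 challenge (RFC 2195).

    The server sends a base64-encoded challenge; the client replies with
    base64("user " + hex(HMAC-MD5(password, challenge))).
    """
    challenge = base64.b64decode(b64_challenge)
    digest = hmac.new(password.encode(), challenge, hashlib.md5).hexdigest()
    return base64.b64encode(f"{user} {digest}".encode()).decode()
```

Running this against the worked example in RFC 2195 (user "tim", password "tanstaaftanstaaf") reproduces the digest given in that RFC, which is a handy sanity check if you are debugging an authentication failure.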
Changes in version 2.2.5
------------------------

Added the Range method to convert a bunch of message UIDs or sequence numbers into compact ranges. Also added a supporting class for the returned range objects with overloaded operators that support stringifying, adding to, and deleting from a range object's message set (Mail::IMAPClient::MessageSet). I also wrote documentation for same, so check it out.

In future releases, I will probably enhance the base module to use MessageSet objects when feasible (i.e. whenever I know that the argument in question should in fact be a message specification). But I'll let you find all the bugs in the MessageSet module first ;-)

Thanks go to Stefan Schmidt, who is the first to report using a server that restricted the size of a client request to something smaller than what Mail::IMAPClient was generating for him. (Originally the Range method was just supposed to condense a message set into the shortest possible RFC2060-compliant string, but then I got all happy and started adding features. You know how it is...)

Changes in version 2.2.4
------------------------

Fixed a bug in the done method (new in 2.2.3).

Added tests for idle and done. (That's how I found the bug in the done method, above.)

Fixed minor bugs in the test suite. (The test suite worked but wasn't always using the options I wanted tested. <sigh>)

Changes in version 2.2.3
------------------------

NOTE: This version was distributed to beta testers only.

Fixed the "Changes in version 2.2.2" section so that it correctly specifies version 2.2.2 (instead of being yet another 2.2.1 section).

Fixed a bug in the migrate method that affected folders with spaces in their names.

Fixed a bug in the Massage method that affected folders with braces ({}) in their names.

Added a new class method, "Quote", that will quote your arguments for you. (So you no longer have to worry so much about quoting your quotes.)
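The condensation that Range performs — collapsing a flat list of message numbers into the shortest RFC2060-compliant set string — can be sketched as follows. This is an illustrative Python rewrite, not the module's Perl implementation, and `condense` is a made-up name:

```python
def condense(uids):
    """Collapse message UIDs/sequence numbers into an RFC 2060 message set
    string, e.g. [1,2,3,5] -> "1:3,5"."""
    uids = sorted(set(uids))
    pieces, start, prev = [], None, None
    for u in uids:
        if start is None:
            start = prev = u              # open a new run
        elif u == prev + 1:
            prev = u                      # extend the current run
        else:
            # close the run: "start:prev" for a range, bare number otherwise
            pieces.append(f"{start}:{prev}" if prev > start else str(start))
            start = prev = u
    if start is not None:
        pieces.append(f"{start}:{prev}" if prev > start else str(start))
    return ",".join(pieces)
```

For example, `condense([1, 2, 3, 4, 5, 7, 9, 10, 11, 12])` yields `"1:5,7,9:12"` — the kind of shrinkage that keeps a FETCH command under a server's line-length limit.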
Added optimizations to the migrate method and to the core I/O engine inspired by Jules Agee. (Actually they were not so much inspired by him as they were lifted right out of a patch he had out on SourceForge.net. I had to refit them for this version, and reformat his comments so they could fit in my window. Thanks Jules, wherever you are.)

Added the fetch_hash method, which will fetch an entire folder's contents into a hash indexed by message UID (or message sequence number if that's all you've got).

Added a new example to the examples subdirectory, and corrected some minor bugs in existing examples.

Added the idle and done methods, which together implement the IMAP IDLE extension (RFC2177), at John Rudd's suggestion.

Changes in version 2.2.2
------------------------

Fixed a bug in the Massage method (generally only used by other IMAPClient methods) that broke folder names with parens.

Updated bug reporting procedures. Also added a section in the documentation for REPORTING THINGS THAT ARE NOT BUGS. Bug tracking is now done via rt.cpan.org, which I stumbled upon quite by accident and with which I am really pleased. A lot of credit goes to _somebody_ for putting this out on CPAN. Unfortunately as of this writing I don't know whom.

Fixed a bug in the documentation regarding the logoff method, which is never implicitly invoked anymore; I gave up on that because the DESTROY method would sometimes be called after the Socket handle was already destroyed. (This is especially likely at program exit, when everything still in scope goes out of scope at the same time.) You should always log off explicitly if you want to be a well behaved IMAP client.

Changes in version 2.2.1
------------------------

Updated append_string to wrap the date argument in double quotes if the argument was provided without quotes. Thanks to Grant Waldram for pointing out that some IMAP servers require this behavior.

Added a new method, selectable, which returns a true value if a folder is selectable.
Documented in this Changes file a change that was actually made for 2.2.0, in which newlines are chomped off of $@ (but not LastError).

Added pointers in the documentation to Mark Bush's Authen::NTLM module. This module will allow you to use NTLM authentication with Mail::IMAPClient connections. Also changed the authenticate method so that it will work with Authen::NTLM without the update mentioned in Authen::NTLM's README.

Added a second example on using the new migrate method, migrate_mail2.pl. This example demonstrates more advanced techniques than the first, such as using the separator method to massage folder names and stuff like that.

Added support for the IMAP THREAD extension. Added Mail::IMAPClient::Thread.pm to support this. (This pm file is generated during make from Thread/Thread.grammar.) This new function should be considered experimental. Note also that this extension has nothing to do with threaded perl or anything like that. This is still on the TODO list.

Updated the search, sort, and thread methods to set $@ to "" before attempting their respective operations, so that text in $@ won't be left over from some other error and is therefore always indicative of an error in search, sort, or thread, respectively.

Made many, many tweaks to the documentation, including adding more examples (albeit simple ones) and fixing some errors.

Changes in version 2.2.0
------------------------

Fixed some tests so that they are less likely to give false negatives. For example, test 41 would fail if the test account happened to have an empty inbox.

Made improvements to Mail::IMAPClient::BodyStructure and renamed Mail::IMAPClient::Parse to Mail::IMAPClient::BodyStructure::Parse. (This should be transparent to apps since the ...Parse helper module is used by BodyStructure.pm only.) I also resumed my earlier practice of using ...Parse.pm from within BodyStructure.pm to avoid the overhead of compiling the grammar every time you use BodyStructure.pm.
(Parse.pm is just the output from saving the compiled Parse::RecDescent grammar.) In a related change, I've moved the grammar into its own file (Parse.grammar) and taught Makefile.PL how to write a Makefile that converts the .grammar file into a .pm file.

This work includes a number of fixes to how a body structure gets parsed and to the parts list returned by the parts method, among other things. I was able to successfully parse every bodystructure I could get my hands on, and that's a lot.

Also added a bunch of new methods to Mail::IMAPClient::BodyStructure and its child classes. The child classes don't even have files of their own yet; they still live with their parent class! Notable among these changes is support for the FETCH ENVELOPE IMAP command (which was easy to build in once the BODYSTRUCTURE stuff was working) and some helper modules to get at the envelope info (as well as envelope information for MESSAGE/RFC822 attachments from the BODYSTRUCTURE output). Have a look at the documentation for Mail::IMAPClient::BodyStructure for more information.

Fixed a bug in the folders method regarding quotes and folders with spaces in their names. The bug must have been around for a while but rarely manifested itself, because of the way methods that take folder name arguments always try to get the quoting right anyway, but it was still there. Noticing it was the hard part (none of you guys reported it to me!).

Fixed a bug reported by Jeremy Hinton regarding how the search method handles dates. It was screwing it all up but it should be much better now.

Added the get_envelope method, which is like the get_bodystructure method except for the ways in which it's different.

Added the messages method (a suggestion from Danny Carroll), which is functionally equivalent to $imap->search("ALL") but easier to type.

Added new arguments to the bodypart_string method so that you can get just a part of a part (or a part of a subpart, for that matter...).
I did this so I could verify BodyStructure's parts method by fetching the first few bytes of a part (just to prove that the part has a valid part number).

Added new tests to test the migrate function and to do more thorough testing of the BodyStructure stuff. Also added a test to make sure that searches that come up empty-handed return an undef instead of an empty array (reference), regardless of context. Which reminds me...

Fixed a bug in which searches that don't find any hits would return a reference to an empty array instead of undef when called in a scalar context. This bug sounds awfully familiar, which is why I added the test mentioned above...

Changes in version 2.1.5
------------------------

Fixed the migrate method so now it not only works, but also works as originally planned (i.e. without requiring source messages to be read entirely into memory). If the message is smaller than the value in the Buffer parameter (default is 4096) then a normal $imap2->append($folder,$imap1->message_string) is done. However, if the message is over the buffer size then it is retrieved and written a bufferful at a time until the whole message has been read and sent. (The receiving server still expects the entire message at once, but it will have to wait because the message is being read from the source in smaller chunks and then written to the destination a chunk at a time.) This needs extensive testing before I'd be willing to trust it (or at least extensive logging so you know when something has gone terribly wrong) and I consider this method to be in BETA in this release.

(Numerous people wrote complaining that migrate didn't work, and some even included patches to make it work, but the real bug in the last release wasn't that migrate was broken but that I had inadvertently included the pod for the method, which I knew perfectly well was not ready to be released. My apologies to anyone who was affected by this.)

The migrate method does seem to work okay on iPlanet (i.e.
Netscape) Messenger Server 4.x. Please let me know if you have any issues on this or any other platform.

Added a new example, migrate_mbox.pl, which will demonstrate the migrate method.

Fixed a bug that will cause Mail::IMAPClient's message reading methods to misbehave if the last line of the email message starts with a number followed by a space and either "OK", "NO", or "BAD". This bug was originally introduced in 1.04 as a fix for another bug, but since the fix supports noncompliant behavior I'm disabling this behavior by default. If your IMAP clients start hanging every time you try to read literal text (i.e. a message's text, or a folder name with spaces or funky characters) then you may want to turn this on with the EnableServerResponseInLiteral parameter. Thanks go to Manpreet Singh for reporting this bug.

Fixed a bug in imap_to_mbox.pl that has been there since 2.0.0 (when the Uid parameter started defaulting to "True"). Thanks to Christoph Viethen for reporting the bug and suggesting the fix. BUT NOTE THIS: I often don't test the example programs, so you should think of them as examples and not free production programs. Eventually I would like to add tests to my test suite (either the 'make test' test suite that you run or my own more extensive test suite) but it's not a super high priority right now.

Significant improvements to the whole Mail::IMAPClient::BodyStructure module were contributed by Pedro Melo Cunha. It's really much better now.

Bullet-proofing added to some private methods. (Private meaning they are undocumented and not part of the module's API. This is perl, not java.)

Fix applied to unset_flag to support user-defined flags (thanks to E.Priogov for submitting the bug report and patch).

Changes in version 2.1.4
------------------------

Added Paul Warren's bugfix to the sort method.

Added Mike Halderman's bugfix for the get_bodystructure method.

Fixed a localization problem reported by Ivo Panecek.
Because of this fix, the Errno.pm file is now a prerequisite to this module. This way I can just test to see if the error is an "EAGAIN" error (as defined in sys/errno.h and thus Errno.pm) instead of awkwardly checking the string value of $!.

I also renamed the MaxTempErrors parameter to Maxtemperrors in response to the same bug report. Added a "MaxTempErrors" accessor method that will set and return Maxtemperrors for backwards compatibility.

Also, the number of temporary errors now gets reset after each successful I/O, so that the socket I/O operation fails only if your temporary I/O errors happen more than "Maxtemperrors" times in a row. The old behavior was to continue incrementing the count of temporary errors until either the entire message was written or until a total of Maxtemperrors had occurred, regardless of how many intervening successful syswrites occurred. This was a bug, but Ivo politely suggested the new behavior as an enhancement. ;-)

Also, you can now specify "UNLIMITED" as the Maxtemperrors, in which case these errors will be ignored. And the default for Maxtemperrors is now 100, but I'm open to any feedback you may have in this regard.

I also fixed the operator precedence problem that was reported by many folks in that very same part of the code. (As you may have guessed, that code was new in the last version!) One of the people who reported the precedence problem was Jules Agee, who also submitted a patch that may in the end provide an optimal solution to handling EAGAIN errors. Unfortunately I have not had time to retrofit his patch into the current version of the module. But if I can manage to do this soon and it tests well I'll include it in the next release, in which case the Maxtemperrors parameter will be of interest only to historians.

I also received a patch from John Ello that adds support for Netscape's proprietary PROXYAUTH IMAP client command.
I haven't included that support in this release because you can already use the proxyauth method. It's one of those famous "default" methods that, despite their fame and my documentation, nobody seems to know about. But you can always say "$imap->proxyauth($uid)", for example, providing that $imap and $uid are already what they're supposed to be. (I've been doing this myself for years.) However, John's patch does provide a cleaner interface (it remembers who you are as well as who you were, for example) so I may include it later as part of a separate module that extends Mail::IMAPClient. This would also give me an excuse for providing the framework for plugging in Administrative methods that are proprietary to other imap servers, so if you have a technique for acquiring administrative access to your users' mailboxes (besides proxyauth) please let me know what it is. Perhaps we'll get something cool out of it, like a document on how to write administrative scripts for various platforms and a suite of supporting methods for each.

Changes in version 2.1.3
------------------------

Added the new method append_string. It works similarly to append but will allow extra arguments to supply the flags and internal date of the appended message. See the pod for more details. (Thanks to Federico Edelman Anaya for suggesting this fix.)

Fixed a bug in the AUTOLOAD subroutine that caused "myrights" (and possibly other non-existent methods) to fail. Thanks go to Larry Rosenbaum for reporting the bug and identifying the fix.

Added the new method Escaped_results, which preprocesses results so that data containing certain special characters are returned quoted, with special characters (like quotes!) escaped. (I needed this for the bodystructure stuff, below.)

NEW! Added support for parsing bodystructures (as provided in the server response to FETCH BODYSTRUCTURE).
This support requires Parse::RecDescent and is implemented via two new modules, Mail::IMAPClient::BodyStructure and Mail::IMAPClient::Parse. Note that the latter module is used by the former; your programs need not and should not use it directly, so don't. Also, these modules are ALPHA and EXPERIMENTAL so no screaming when they don't work. (Polite bug reports will of course be gratefully accepted.) Many thanks to Damian Conway, the author of Parse::RecDescent, without which this feature would not have been possible (or at least not very likely).

Enhanced support for DOS systems (and DOS's offspring, such as windows) by removing the "\c\n"s and replacing them with "\x0d\x0a". Thanks go to Marcio Marchini for his help with this effort.

Fixed the list of symbols imported along with Fcntl.pm. (Paul Linder asked me to put this in the last release but I forgot.)

Changes in version 2.1.2
------------------------

Fixed a bug in the is_parent method which made it inaccurate on some servers.

Added new method "sort", which implements the SORT extension and which was contributed by Josh Rotenberg. The SORT extension is documented in draft-ietf-imapext-sort-06.txt. A copy of the draft is also included with the Mail::IMAPClient distribution, which means I also:

Added draft-ietf-imapext-sort-06.txt to the docs subdirectory of the distribution.

Fixed a bug in the folders method and the subscribed method (same bug, appeared twice) which broke these methods under some conditions. Thanks again to Josh Rotenberg for supplying the fix.

Fixed bugs in getacl and listacl. Changed the interface for getacl significantly; existing scripts using getacl will not behave the same way. But then on the other hand, getacl was never documented before, so how could you be using it?

Implemented improvements to reduce memory usage by up to 30%. Thanks go to Paul Linder, who developed the memory usage patch after a considerable amount of analysis.
The improvements include the use of 'use constant', so your perl needs to support that pragma in order to use Mail::IMAPClient.

Added a new parameter, MaxTempErrors, which allows the programmer to control the number of consecutive "Resource Temporarily Unavailable" errors that can occur before a write to the server will fail. Also changed the behavior of the client when one of these errors occurs. Previously, Mail::IMAPClient waited .25 seconds (a quarter of one second) before retrying the write operation. Now it will wait (.25 * the number of consecutive temporary errors) seconds before retrying the write.

Documented the "Buffer" parameter, which has been secretly available for some time. I just forgot to document it. It sets the size of the read buffer when Fast_io is turned on. (NOTE: As of version 2.1.5 it also controls the size of the buffer used by the migrate method.)

Updated the Todo file. It was nice to see that a number of lines in the "Todo" file were now deletable. It was depressing to see that a number of original lines need to stay in there.

Changes in version 2.1.1
------------------------

Added the "mark", "unmark", and imap4rev1 methods.

Updated the documentation to include the new methods and to document "create", "store", and "delete".

Updated "message_string" to be smart about whether you're using IMAP4 or IMAP4REV1.

Updated "message_to_file" to be smart about whether you're using IMAP4 or IMAP4REV1.

Added several bug fixes to the authenticate method. Many thanks to Daniel Wright, who reported these bugs and provided the information necessary to fix them.

Changes in version 2.1.0
------------------------

Fixed a serious bug introduced in 2.0.9 when appending large messages.

Made minor changes to improve the cyrus_expunge.pl example script.

Made the set_flags routine RFC2060-compliant. Previously it prepended flag names with backslashes, even if the flags were not reserved flags.
This broke support for user-defined flags, which I didn't realize was supposed to even be there until Scott Renner clued me in. (Thanks, Scott.)

Promoted the release level to "1".

Added a new 'internaldate' method. (Thanks to the folks at jwm3.org for donating the code!)

Added a new example, cyrus_expire.pl.

Changes in version 2.0.8/2.0.9
------------------------------

Made minor changes to the tests in t/basic.t so that folders are explicitly closed before they are deleted. (Don't worry, only folders created by the tests are deleted. :-) Thanks go to Alan Young for reporting that some servers require this.

Changed the routine that massages folder names into IMAP-compliant strings so that single-quotes in a name do not force the folder to go through as "LITERAL" strings (as defined in RFC2060). This shouldn't cause a problem for anybody (and in fact should make life easier for some folks) but if you do have any trouble with single-quotes in folder names PLEASE LET ME KNOW ASAP!!

Divided the sending of literal strings into two I/O operations (as required by RFC2060). This should correct problems with sending literals to some servers that will not read any data sent before they reply with the "+ go ahead" message. (Thanks go to Keith Clay, who reported seeing this problem with the M-Store IMAP server.)

Changed the "create" method so that it will autoquote the first argument to create rather than the last. Normally the first argument is the last, but Cyrus users can specify an optional 2nd argument, except when using pre-2.0.8 versions of Mail::IMAPClient ;-) Thank you Chris Stratford for reporting this bug and identifying its cause.

Fixed a bug in body_string when the message is empty. (Thanks go to Vladimir Jebelev for finding this bug and providing the fix.)

Added a new example to the examples subdirectory. cyrus_expunge.pl is a script you can use (after making minor tweaks) to periodically expunge your server's mail store.
Changes in version 2.0.7
------------------------

Fixed a bug in message_count. Thanks go to Alistair Adams for reporting this bug.

Fixed a bug in folders that caused some foldernames to not be reported in the returned array.

Changes in version 2.0.6
------------------------

Applied patches from Phil Lobbes to tighten up sysreads and syswrites and to correct a bug in the I/O engine.

Changes in version 2.0.5
------------------------

Fixed a bug in parse_headers so that RFC822 headers now match the pattern /(\S*):\s*/ instead of /(\S*): /. Thanks go to Paul Warren for reporting this bug and providing the fix.

Added more robust error checking to prevent infinite loops during read attempts, and fixed bugs in parse_headers. Thanks go to Phil Lobbes, who provided several useful patches and who performed valuable pre-release testing.

Changes in version 2.0.4
------------------------

Fixed a bug in parse_headers when connected to an Exchange server with UID=>1. (Kudos to Wilber Pol for that fix.)

Fixed bugs in parse_headers and tightened reliability of the I/O engine by implementing many improvements suggested by Phil Lobbes, who also provided code for same.

Added a bugfix for a problem that under certain conditions caused server responses to be "repeated" when fast_io is turned on. Thanks to Jason Hellman for providing the bug report and diagnostic data to fix this.

Added a "LastIMAPCommand" method, which returns the last IMAP client command that was sent to the server.

Removed the "=begin debugging" paragraph that somehow got included in CPAN's html pages (even though it shouldn't have).

Began a process of redesigning the documentation. I would like to be able to present a more formal syntax for the various methods and hope to have that ready for the next release.

Tested successfully against Cyrus v 2.0.7.

Tested unsuccessfully against mdaemon. This appears to be due to mdaemon's noncompliance with rfc2060, so future support for mdaemon should not be expected any time soon.
;-(

Changes in version 2.0.3
------------------------

Did a major rewrite of the message_string method, which should now be both cleaner and more reliable.

Fixed a bug in the move method that caused some folders to be incorrectly quoted. Thanks go to Felix Finch for reporting this bug. Also, at his suggestion I added information to the move documentation explaining the need to expunge.

Made many fixes and tweaks to pod text.

Added a new method, Rfc2060_date, which takes times in the "seconds since 1/1/1970" format and returns a string in RFC2060's "dd-Mon-yyyy" format (which is the format you need to use in IMAP SEARCH commands).

Changes in version 2.0.2
------------------------

Fixed a bug that caused a compile error on some earlier versions of perl5.

Noticed that some older versions of perl give spurious "Ambiguous use" warnings here and there, mostly because I'm not quoting the name of the "History" member of the underlying Mail::IMAPClient hash. These warnings will go away when you upgrade perl. (I may fix them later, or maybe not. Depends on if I have time.)

Added a new parameter (and eponymous method) Peek, along with new tests for 'make test' for same. See the pod for further info.

Added some error checking to avoid trying to read or write with an unconnected IMAPClient object.

Made bug fixes to parse_headers and flags.

Added missing documentation for the exciting new message_to_file method (oops). Also cleaned up a few typos in the pod while I happened to be there. (I'm sure there are still plenty left.)

Fixed bugs in append and append_file. (Thanks to Mauro Bartolomeoli and to the people at jwm3.org for reporting these bugs.)

Made changes to the call to syswrite to guarantee delivery of the entire message. (Only affects appends of very large messages.)

Added the 'close' method to the list of lower-case-is-okay methods (see the section under version 2.0.0 on "NEW ERROR MESSAGES").
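The conversion Rfc2060_date performs — epoch seconds to RFC2060's "dd-Mon-yyyy" — takes only a few lines. Here is an illustrative Python sketch (using UTC and an explicit month table to avoid locale surprises; this mirrors the method's described behavior, not its actual Perl code):

```python
import time

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def rfc2060_date(epoch_seconds):
    """Format 'seconds since 1/1/1970' as RFC 2060's dd-Mon-yyyy,
    the date format required in IMAP SEARCH commands."""
    t = time.gmtime(epoch_seconds)
    return f"{t.tm_mday:02d}-{MONTHS[t.tm_mon - 1]}-{t.tm_year}"
```

So `rfc2060_date(0)` gives `"01-Jan-1970"`, which is exactly the shape a `SEARCH SENTBEFORE` argument needs.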
Changes in version 2.0.1
------------------------

Several bug fixes related to the flags method and to spurious warning messages when run with warnings turned on.

A new method, message_to_file, writes message text directly into a file. This bypasses saving the text in the history buffer and the overhead that entails, which could be especially important when processing big ass messages. Of course the bad news is that now you'll have to write all that shtuff out to a filehandle, but maybe you wanted to do that anyway. Anyhow, between append_file and message_to_file, both of which take filehandle arguments, there should be a way to "short circuit" the copying of mail between two imap sessions. I just haven't got it completely figured out yet how it would work. Got any ideas? Anyhow, this method is currently considered experimental.

A couple of new tests have been added to go along with our new little method.

I've added a whole bunch more IMAP-related rfc's to the docs/ subdirectory. Trust me, you are going to need them.

Changes in version 2.0.0
------------------------

NEW I/O ENGINE

This version includes a major rewrite of the I/O engine. It's now cleaner and more reliable. Also, output processing is less likely to match patterns that look like server output but are really, say, message text contained in a literal or something like that. Also, various problems with blank lines at the ends of messages either magically appearing or disappearing should now go away. Basically, it's much better is what I'm trying to say.

NEW DEFAULT

The Uid parameter now defaults to true. This should be transparent to existing scripts (except for those scripts that produce embarrassing results because someone forgot to specify Uid=>1, in which case they'll magically start behaving somehow).

NEW METHOD

The namespace method has been added, thus implementing RFC2342.
If you have any scripts that rely on the old, "default method" style of namespace implementation then you should rename those method calls to be mixed case (thus forcing the AUTOLOADed default method).

NEW ERROR MESSAGES

Mail::IMAPClient now issues a lot more warning messages when run in warn mode (i.e. $^W is true). Of particular interest are methods implemented via the "default method" AUTOLOAD hack. They will generate a warning telling you to use mixed- or upper-case method names (but only if warnings are turned on, say with the -w switch or $^W++ or something). The exceptions are certain unimplemented yet quite popular methods that, if ever explicitly implemented, will behave the same way as they do via the default method. (Or at least they will remain downwardly compatible. I may add bells and whistles but not by default.) Those methods are listed in the pod and right here: store, copy, subscribe, close, create, delete and expunge.

NEW VERSION NUMBERING SCHEME

Changed the version numbering scheme to match perl's (as of perl v5.6.0).

NEW INSTALLATION TESTS

Added a few new tests to the test suite. (Still need more, though.) Also changed the fast_io and uidplus test suites so that they just "do" the basic tests but with different options set (i.e. Fast_io and Uid, respectively).

OTHER CHANGES

- The expunge method now optionally accepts the name of the folder to be expunged. It's also been documented, even though it technically doesn't exist. (That won't stop it from working, though.) Since expunge deletes messages that you thought were already deleted, it's only appropriate to use a method that you thought existed but really doesn't, don't you think? And if you're wondering how I managed to change the behavior of a method that doesn't exist, well, I don't want to talk about it.
- Speaking of methods that don't exist (also known as methods implemented via "the default method"), effective with this release there are a number of unimplemented methods that are guaranteed to always exhibit their current behavior. In other words, even if I do eventually implement these methods explicitly, they will continue to accept the same arguments and return the same results that they do now via the default method. (Why I would even bother to do that is specifically not addressed in this document.) Currently this means that these methods will not trigger warnings when called via all-lowercase letters (see "NEW ERROR MESSAGES", above). In the future I hope that it will also mean that these non-existent but functioning methods will also be documented in the pod.

- Fixed a bug in the flags method introduced in 1.19. (Thanks to the people at jwm3.org for reporting this!)

Changes in version 1.19
-----------------------

Fixed a bug in which the Folder parameter returned quoted folder names, which sometimes caused other methods to requote the folders an extra time. (The IMAP protocol is real picky about that.) Thanks go to Felix Finch for both reporting the bug and identifying the fix.

Siggy Thorarinsson contributed the new "unseen_count" method and suggested a new "peek mode" parameter. I have not yet gotten around to implementing the new parameter but have included the unseen_count method, since a) he was kind enough to write it, and b) it tests well. In the meantime, you cannot tell methods like "parse_headers" and "message_string" and so forth whether or not you want them to mark messages as "\Seen". So, to make life easier for you in particular I added a bunch of new methods: set_flag, unset_flag, see, and deny_seeing. The latter two are derivatives of the former two, respectively, which should make this sentence almost as difficult to parse as an IMAP conversation.
Fixed bug in which "BAD", "OK", or "NO" lines prefixed by an asterisk (*) instead of the tag were not handled correctly. This is especially likely when LOGIN to a UW IMAP server fails. Thanks go to Phil Lobbes for squashing this bug.

Fixed bug in logout that caused the socket handle to linger. Credit goes to Jean-Philippe Bouchard for reporting this bug and for identifying the fix.

Fixed bug in uidvalidity method where the folder has special characters in it.

Made several bug fixes to the example script examples/find_dup_msgs.pl. Thanks to Steve Mayer for identifying these bugs.

Changed Fast_io to automatically turn itself off if running on a platform that does not provide the necessary fcntl macros (I won't mention any names, but its initials are "NT"). This will occur silently unless warnings are turned on or unless the Debug parameter is set to true. Previously scripts running on this platform had to turn off fast_io by hand, which is lame. (Thank you Kevin Cutts for reporting this problem.)

Updated logic that X's out login credentials when printing debug output so that funky characters in "User" or "Password" parameters won't break the regexp. (Kevin Cutts found this one, too.)

Tinkered with the Strip_cr method so it can accept multiple arguments OR an array reference as an argument. See the updated pod for more info.

Fixed a typo in the documentation in the section describing the fetch method. There has been an entire paragraph missing from this section for who knows how long. Thanks to Adam Wells, who reported this documentation error.

Fixed bug in seen, recent, and unseen methods that caused them to return empty arrays erroneously under certain conditions.

Changes in version 1.18
-----------------------

Timeouts during read operations now work correctly.

Fixed several bugs in the I/O engine. This should correct various problems with Fast_io turned on (which is now the default).

Reworked message_string and body_string methods to avoid bugs when Uid is set to true.
Changes in version 1.17
-----------------------

Added support for the Oracle IMAP4r1 server.

Tinkered with the DESTROY method so that it does a local($@) before doing its evals. This will preserve the value of $@ when the "new" method fails during a login but the DESTROY's "logout" succeeds. The module was setting the $@ variable, but on some versions of perl the DESTROY method would clobber $@ before anything useful could be done with it! Thanks to Kimmo Hovi for reporting this problem, which was harder to debug than you might think.

Changes in version 1.16
-----------------------

IMPORTANT: Made Fast_io the default. You must specify Fast_io => 0 in your new method call or invoke the Fast_io method (and supply 0 as an arg) to get the old behavior. (This should be transparent to most users, but as always your mileage may vary.)

Reduced the number of debug msgs printed in the _read_line internal method and added a debug msg to report perl and Mail::IMAPClient versions.

The message_count method will now return the number of messages in the currently selected folder if no folder argument is supplied.

The message_string method now does an IMAP FETCH RFC822 (instead of a FETCH RFC822.HEADERS and a FETCH RFC822.TEXT), which should eliminate missing blank lines at the ends of some messages on some IMAP server platforms. It also returns undef if for some reason the underlying FETCH fails (i.e. there is no folder selected), thanks to a suggestion by Pankaj Garg. It has also been slightly re-worked to support the changes in the I/O engine from version 1.14.

Re-worked the body_string method to support the I/O engine changes from v1.14.

Fixed a bug in parse_headers when used with multiple headers and the Uid parameter set to a true value.

Documented in this file a fix for a bug in the flags method with the Uid parameter turned on. (Belated thanks to Michael Lieberman for reporting this bug.)
Changes in version 1.15
-----------------------

Fixes the test suite, which in v1.14 had an "exit" stmt that caused early termination of the tests. (I had put that "exit" in there on purpose, and left it in there by accident.)

Changes in version 1.14
-----------------------

Fixed a bug in the _readline subroutine (part of the I/O engine) that was caused by my less-than-perfect interpretation of RFC2060. This fix will allow the Mail::IMAPClient module to function correctly with servers that embed literal datatypes in the middle of response lines (rather than just at the end of them). Thanks to Pankaj Garg for reporting this problem and providing the debugging output necessary to correct it.

Fixed a bug in parse_headers that was introduced with the fix to the I/O engine described above.

Changes in version 1.13
-----------------------

Changed the parse_headers method so that it uses BODY.PEEK instead of BODY. This prevents the parse_headers method from implicitly setting the "\Seen" flag for messages that have not been otherwise read. This change could produce an incompatibility in scripts that relied on parse_headers' previous behavior.

Fixed a bug in the flags method with the Uid parameter turned on. (Thanks to Michael Lieberman for reporting this bug.)

Changes in version 1.12
-----------------------

Fixed a bug in the folders method when called first with a second arg and then without a second arg.

Tested successfully with perl-5.6.0.

Added a section to the pod documentation on how to report bugs. I've had to ask for output from scripts with "Debug => 1" so many times that I eventually decided to include the procedure for documenting bugs in the distribution. (Duh! It only took me 11 releases to come up with that brainstorm.) Often following the procedures to obtain the documentation is enough; once people see what's going on (by turning on Debug => 1) they no longer want to report a bug.
Did I mention it's a good idea to turn on debugging when trying to figure out why a script isn't working? (It is.) In order to make the Debug parameter friendlier, it now prints to STDERR by default. You can override this by supplying the spanking brand new Debug_fh parameter, which, if supplied, had better well point to a filehandle (either by glob or by reference), and by 'filehandle' I mean something besides STDIN!

Debugging mode will now also X-out the login credentials used to login. This will make it easier to share your debugging output.

Added documentation for the State parameter, which must be set manually by programmers who are not using Mail::IMAPClient's connect and/or login methods but who are instead making their own connections and then using the Socket parameter to turn their connections into IMAP clients.

Fixed bug in parse_headers with Uid turned on.

Fixed bug in parse_headers when using the argument "ALL".

Changes in version 1.11
-----------------------

Added new example script, copy_folder.pl, to demonstrate one way to copy entire folders between imap accounts (which may or may not be on the same server). This example is right next to all the others, in the examples/ subdirectory of the distribution.

Changed error handling slightly. $@ now contains pretty much the same stuff as what gets returned by LastError, even when LastError won't work (i.e. when an implicit connect or login fails and so no object reference is returned by new). You can thank John Milton for the friendly nagging that got me to do this.

Added new test suite for the fast_io engine. This should make it easier to determine whether or not the fast_io engine will work on your platform.

Implemented a work-around to allow the Port parameter to default despite a known bug in IO::Socket::INET version 1.25 (distributed with perl 5.6.0).

Fixed a bug in the message_string method that caused the resulting text string for some mime messages to be incompatible with append.
Fixed a bug in the Fast_io i/o engine that could cause hangs during an append operation.

Changed a number of regular expressions to accept mixed-case "Ok", "No", or "Bad" responses from the server and to do multi-line matching.

Fixed a bug in the append method that was causing extra carriage returns to appear in messages whose lines were already terminated with the CR-LF sequence. Thanks to Heather Adkins for reporting this bug.

Enhanced the parse_headers routine so that it is less sensitive to variations of case in message headers. Now, the case of the returned key matches the case of the field as specified in the parse_headers method's arguments, regardless of its case in the message being parsed. (You can thank Heather Atkins for this suggestion as well.) See below for more changes to parse_headers in this release.

Improved the append method so that it has better error handling and error recovery. Thanks to Mark Keisler for pointing out some bugs in the error handling code in this method.

Added the append_file method, which is like the append method but works on files instead of strings. The file provided to append_file must contain an RFC822-formatted message. Use of the append_file method avoids having to stuff huge messages into variables before appending them. Thanks to jwmIII () for suggesting this method.

Changed the flags method and the parse_headers method so that a reference to an array of message sequence numbers (or message UIDs if the Uid parameter is turned on) can optionally be passed instead of a single message sequence number (or UID). Use of this enhancement will change your return values, so be sure to read the pod. Thanks to Adrian Smith (adrian.smith@ucpag.com) for delivering this enhancement.

Fixed a bug in "message_string" that caused the blank lines between headers and body to fall out of the string.
Tinkered with the undocumented _send_line method to permit an optional argument to suppress the automatic insertion of <CR><LF> at the end of strings being sent. (NOTE: I'm telling you this because I'm a nice guy. This doesn't mean that _send_line is now a programming interface.)

Changes in version 1.10
-----------------------

Added two new methods, lsub and subscribed. lsub replaces the behavior of the default method and should be downwardly compatible. The subscribed method works like the folders method but the results include only subscribed folders. Thanks to Alexei Kharchenko for providing the code for lsub (which is the foundation upon which 'subscribed' was built).

Changes in version 1.09
-----------------------

Changed login method so that values for the User parameter that do not start and end with quotes will be quoted when sent to the server. This is to support user ids with embedded spaces, which are legal on some platforms.

Changed name of the test input file created by perl Makefile.PL and used by 'make test' from .test to test.txt to support weird, offbeat OS platforms that cannot handle filenames beginning with a dot.

Fixed bugs in seen, unseen, and recent methods. (These are almost the same method anyway; they are dynamically created at compile time from the same code, with variable substitution filling in the places where "seen", "unseen", or "recent" belong.) The bug caused these methods to return the transaction number of the search as if it were the last message sequence number (or message uid) in the result set.

Added the 'since' method, which accepts a date in either standard perl format (seconds since 1/1/1970, or as output by time and as accepted by localtime) or in the date_text format as defined in RFC2060 (dd-Mon-yyyy, where Mon is the English-language three-letter abbreviation for the month). It searches the currently selected folder for messages sent since the day whose date is provided as an argument.
Added 'sentsince', 'senton', 'sentbefore', 'on', and 'before' methods, which are totally 100% just like the 'since' method, except that they run different searches. (Did I mention that it's useful to have RFC2060 handy when writing IMAP clients?)

Added two new methods, run and tag_and_run, to allow IMAP client programmers finer control over the IMAP conversation. These methods allow the programmer to compose the entire IMAP command string and pass it as-is to the IMAP server. The difference between these two methods is that the run method requires that the string include the tag while the tag_and_run method requires that it does not.

To a similar end, the pre-existing Socket parameter and eponymous accessor method have been documented to allow direct access to the IMAP socket handle and to allow the socket handle to be replaced with some other file handle, presumably one derived from a more interesting technology (such as SSL).

Fixed a bug that caused blank lines to be removed from 'literal' output (as defined in RFC2060) when fast_io was not used. This bug was especially likely to show up in routines that fetched a message's body text. The fact that this bug did not occur in the newer fast_io code may indicate that I've learned something, but on the other hand we shouldn't jump to rash conclusions.

I've run benchmarks on the fast_io code to determine whether or not it is faster and, if so, under what circumstances. It appears that the fast_io code is quite a bit faster, except when reading large 'literal' strings (i.e. message bodies), in which case it appears to take the same amount of time as the older i/o code but at the cost of more cpu cycles (which means it may actually be slower on cpu-constrained systems). The reason for this is that reads of literal strings are by their nature already optimized, even without the overhead of fcntl calls.
So if you expect to be doing lots of fetching of message text (or multipart message body parts) you should not use fast_io, but in pretty much any other case you should go ahead and use it. In any event, a number of people have tested fast_io, so I no longer consider it experimental, unless you're running perl on NT or CP/M or something funky like that, in which case let me know how you make out!

Changes in version 1.08
-----------------------

Maintenance release 1.08a fixes a bug in the folders method when supplying the optional argument (see "Enhanced folders method..." below) with some IMAP servers.

Added option to build_ldif.pl (in the examples subdirectory) to allow new options and to better handle quoted comments in e-mail addresses. Thanks to Jeffrey Friedl, whose book _Mastering Regular Expressions_ (O'Reilly) helped me to figure out a good way to do this.

Fixed documentation error that failed to mention constraints on when the append method will return the uid of the appended message. (This feature only works with servers that have the UIDPLUS capability.)

Added/improved documentation somewhat.

The copy method now returns a comma-separated list of uids if successful and if the IMAP server supports the UIDPLUS extensions. The move method now works similarly.

Added new method uidnext, which accepts the name of a folder as an argument and returns the next available message UID for that folder.

The exists and append methods will now handle unquoted foldernames with embedded spaces or quotes or whatever. Including quotes as part of the argument string is no longer required but is still supported for backwards compatibility reasons. In other words, $imap->exists(q("Some Folder")) is now no longer necessary (but will still work); $imap->exists("Some Folder") is good enough.

Mail::IMAPClient has been tested successfully on Mirapoint 2.0.2. (Thanks to Jim Hickstein.)
I've now installed the UW imapd IMAP4rev1 v12.264 on one of my machines, so I'm better able to certify that platform. All the tests in 'make test' work there (or are at least gently skipped).

Fixed bug in getacl in which folder names were quoted twice. (Thanks to Albert Chin for squashing this bug.) Similar bugs existed in the other ACL methods and were similarly fixed.

Fixed a bug in message_uid that basically caused it to not work. Muchos gracias to Luvox (aka fluvoxamine hydrochloride) for providing me with just the help I needed to discover and fix this bug.

Enhanced the folders method to allow an argument. If an argument is supplied, then the folders method will restrict its results to subfolders of the supplied argument (which should be the name of a parent folder, IMHO). This is implemented by supplying arguments to the LIST IMAP Client command, so we are optimizing network I/O at the expense of possible server incompatibilities. If you find server incompatibilities with this then please let me know, and in the meantime you can always grep(/^parent/, $imap->folders) or something. Or re-implement the folders method yourself.

Changes in version 1.07
-----------------------

Added a new parameter, Fast_io, which, if set to a true value, will attempt to implement a faster I/O engine. USE THIS AT YOUR OWN RISK. It is alpha code. I don't even know yet if it even helps.

Added support for spaces in folder names for the autoloaded subscribe method.

Added new methods setacl, getacl, deleteacl, and listrights. These methods are not yet fully tested and should be considered beta for this release.

Enhanced support for the myrights method (which is implemented via the default method).

Fixed bug in append method that caused it to hang if the server replied to the original APPEND with a NO (because, say, the mailbox's quota has been exceeded).

Removed the autodiscovery of the folder hierarchy from the login method.
This will speed up logging in but may delay certain other methods later (but see the next item, below).

Updated the exists method to issue a "STATUS" IMAP Client command, rather than depend on the folder hierarchy being discovered via 'LIST "" "*"'. Apparently this speeds things up a lot for some configurations, although the difference will be negligible to many.

Updated Makefile.PL to support the PREFIX=~/ directive. Thanks to Henry C. Barta (hbarta@wwa.com) for this fix.

Added the Timeout parameter and eponymous accessor method, which, if set to a true value, causes reads to time out after the number of seconds specified in the Timeout parameter. The value can be in fractions of a second. This has not been fully tested, though, so use of this parameter is strictly "Beta".

Enhanced support for the UID IMAP client command. Setting the new Uid parameter to a true value will now cause the object to treat all message numbers as message UID numbers rather than message sequence numbers. Setting the Uid parameter to a false value will turn off this behavior again.

Updated test suite to handle servers that cannot do UIDPLUS and to add tests for the Uid parameter.

Incorporated bug fixes for recent_count and message_count in which some servers are sticking in extra \r's, and updated DESTROY to remove spurious warning messages under some versions of perl (thanks to Scott Wilson for catching and killing these bugs).

Changes in version 1.06
-----------------------

Changed folders method so that it correctly handles mail folders whose names start and end with quotes.

Changed append method so that it returns the uid of the newly appended message if successful. Since the uid is a "true" value, this should not affect the behavior of existing scripts, although it may enhance the behavior of new scripts ;-)

Fixed bug in parse_headers that could cause a script to die if there were no headers of the type requested and if there was a space on the blank line returned from FETCH.
(Some blank lines are blanker than others...)

Added the "flags" method, which returns an array (or an array reference if called in scalar context) containing the flags that have been set for the message whose sequence number has been provided as the argument to the method.

Added the "message_string" method, which accepts a message sequence number as an argument and returns the contents of the message (including RFC822 headers) as a single string.

Added the "body_string" method, which accepts a message sequence number as an argument and returns the contents of the message (not including RFC822 headers) as a single string.

Changes in version 1.05
-----------------------

Patched the 'make test' basic test to work correctly on systems that do not support double quotes in folder names. Thanks to Rex Walters for this fix.

Added a new example script, build_dist.pl, that rummages through a folder (specified on the command line) and collects the "From:" addresses, and then appends a message to that folder with all those addresses in both the To: field and the text, to facilitate cutting and pasting (or dragging and dropping) into address books and so forth. (Note that the message doesn't actually get sent to all those people; it just kind of looks that way.)

Also added another example, build_ldif.pl, that is similar to build_dist.pl except that instead of listing addresses in the message text, it creates a MIME attachment and attaches a text file in LDIF format, which can then be imported into any address book that supports LDIF as an import file format. This example requires the MIME::Lite module. MIME::Lite was written by Eryq (okay, Erik Dorfman is his legal name), and is totally available on CPAN.

This distribution has now been tested on Mirapoint Message Server Appliances (versions 1.6.1 and 1.7.1). Many thanks to Rex Walters for certifying this platform and for providing a test account for future releases.
Changes in version 1.04
-----------------------

Fixed situation in which servers that include the "<tag> <COMMAND> OK\r\n" line as part of a literal (i.e. text delivered via {<length>}\r\n<length> bytes\r\n) caused the module to hang. This situation is pretty rare; I've only run across one server that does it. I'm sure it's a bug; I'm not sure whose. ;-} Many thanks to Thomas Stromberg for 1) pointing out this bug and 2) providing me with facilities to find and fix it!

Fixed potential bug in I/O engine that could cause module to hang when reading a literal if the first read did not capture the entire literal.

Cleaned up some unnecessary runtime warnings when a script is executed with the -w switch.

Added new tests to 'make test'. I just can't keep my hands off it! ;-)

Enhanced the append method and several tests in 'make test' to be more widely compatible. Successfully tested on UW-IMAP, Cyrus v1.5.19, Netscape Messenger 4.1, and Netscape Messenger v3.6. If you know of others please add them to the list!

Fixed a bug in the separator method (new in 1.03) that caused it to fail if 'inbox' was specified in lowercase characters as the method's argument.

Added a new example, imap_to_mbox.pl, contributed by Thomas Stromberg. This example converts a user's IMAP folders on an IMAP server into mbox format.

Changes in version 1.03
-----------------------

Reworked several methods to support double-quote characters within folder names. This was kind of hard. This has been successfully tested with create, delete, select, and folders, to name the ones that come to mind.

Reworked the undocumented method that reads the socket to accept and handle more gracefully lines ending in {nnn}\r\n (where nnn is a number of characters to read). This seems to be part of the IMAP protocol, although I am at a total loss as to where it's explained, other than a brief description of a "literal's" BNF syntax, which hardly counts.
Added separator object method, which returns the separator character in use by the current server.

Added is_parent method, which returns 1, 0, or undef depending on whether a folder has children, has no children, or is not permitted to have children.

Added tests to 'make test' to test new function. Also changed 'make test' to support IMAP systems that allow folders to be created only in the user's INBOX (which is the exact opposite of what my IMAP server allows... oh, well).

Fixed a bug that caused search to return an array of one undef'ed element rather than undef if there were no hits.

Changes in version 1.02
-----------------------

Fixed bugs in search and folders methods.

Fixed bug in new method that ignored Clear => 0 when specified as arguments to new.

Changes in version 1.01
-----------------------

Fixed a bug in test.pl that caused tests to fail if the extended tests were not used.

Added method 'parse_headers' to parse the header fields of a message in the IMAP store into a perl data structure.

Changes in version 1.00
-----------------------

Made cosmetic changes to documentation.

Fixed a bug introduced into the 'folders' method in .99.

Changed 'new' method so that it returns undef if an implicit connection or login is attempted but fails. Previous releases returned a Mail::IMAPClient object that was not connected or not logged in, depending on what failed.

Changed installation script so that it reuses the parameter file for test.pl if it finds one. Installation can be run in the background if the test.txt file exists. Touching it is good enough to prevent prompts; having a correctly formatted version (as described in test_template.txt) is even better, as it will allow you to do a thorough 'make test'.

Changes in version .99
----------------------

Added the Rfc822_date class method to create RFC822-compliant date fields in messages being appended with the append method.
Added the recent, seen, and unseen methods to return an array of sequence numbers from a SEARCH RECENT, SEARCH SEEN, or SEARCH UNSEEN method call. These methods are shortcuts to $imap->search("RECENT"), etc.

Added the recent_count method to return the number of RECENT messages in a folder. Contributed by Rob Deker.

Added 'use strict' compliance, courtesy of Mihai Ibanescu.

Fixed a bug in the search method that resulted in a list with one empty member being returned if a search had no hits. The search method now returns undef if there are no hits.

Added 'authenticate' method to provide very crude support for the IMAP AUTHENTICATE command. The previous release didn't support AUTHENTICATE at all, unless you used very low-level (and undocumented) methods. With the 'authenticate' method, the programmer still has to figure out how to respond to the server's challenge. I hope to make it friendlier in the next release. Or maybe the one after that. This method is at least a start, albeit a pretty much untested one.

Added Rfc822_date class method to facilitate creation of the "Date:" header field when creating text for the "append" method, although the method may come in handy whenever you're creating a Date: header, even if it's not in conjunction with an IMAP session.

Added more tests, which will optionally run at 'make test' time, provided all the necessary data (like username, hostname, and password for testing an IMAP session) are available.

Changes in version 0.09
-----------------------

Thu Aug 26 14:10:03 1999 - original version; created by h2xs 1.19

# $Id: Changes,v 20001010.18 2003/06/12 21:35:48 dkernen Exp $
https://metacpan.org/changes/distribution/Mail-IMAPClient
Asked by: IList<T>, IEnumerable<T> or BindingList<T> in the layers of N-Layer application using LINQ to SQL in the data source layer?

Question

Hi. I need some feedback on using an IList<T> vs. a BindingList<T> in my application.

I'm developing an N-Layer WinForms application in Visual C# 2008 Express Edition in which I'm implementing the Model View Presenter (MVP) pattern with the Supervising Controller approach (with data binding) for the presentation layer, a thin service layer, a Domain Model for the domain layer, and LINQ to SQL for the data source layer, in which I'm using POCO classes in the domain layer.

Currently, the queries in my application return IList<T> in which T are the entities. However, in the UI of my application I will need to use several dozen DataGridViews (and also MS Charts) in which I will need to be able to click on any header of the columns with simple types (int, decimal, string, and DateTime) in order to see the data sorted in ascending or descending order. I believe that I need to use a BindingList<T> in order to be able to do that; using IList<T> I'm not able to get that functionality. In order to fix this I think that I have 2 alternatives:

1) Use BindingList<T> instead of IList<T> in all the layers of my application, including the data source layer. How could I implement this? For example, the LINQ queries return IQueryable<T>, which currently I'm converting to IList<T> by using ToList(). How could I convert the IQueryable<T> to BindingList<T>?

2) Continue using IList<T> in the data source, domain, and service layers but convert the IList<T> to a BindingList<T> only in the presentation layer (in the views of the MVP pattern). What's the best way to convert an IList<T> to a BindingList<T>? Since I have around 40 entities in my application, do I need 1 class per entity or only 1 generic class that could be used to make this conversion for each of the entities?

Any other suggestions?
Also, I tried this code to see if I could get a 'simple' solution to sort any column of my DataGridView, but it didn't work.

public class PriceBindingList : BindingList<PriceData>
{
    public PriceBindingList(IList<PriceData> priceData) : base(priceData)
    {
    }
}

--- HistPricesView.cs

public partial class HistPricesViewForm : Form, IHistPricesView
{
    // ...
    PriceBindingList histPricesBindingList;

    // Constructor
    public HistPricesViewForm()
    {
        // ...
    }

    #region IHistPricesView Members

    public void SetHistPrices(IList<PriceData> histPrices)
    {
        histPricesBindingList = new PriceBindingList(histPrices);
        histPricesdataGridView.DataSource = histPricesBindingList;
    }

---

public class PriceData
{
    public DateTime Date { get; set; }
    public decimal OpenPrice { get; set; }
    public decimal HighPrice { get; set; }
    public decimal LowPrice { get; set; }
    public decimal ClosePrice { get; set; }
    public long Volume { get; set; }
    // ...
}

Should I use the code of the generic implementation of BindingList<T> that is mentioned in this link () or should I use a different one? I'm asking this because I saw several people recommending different code versions of BindingList<T> in the forums and I would prefer to use the fastest and simplest version.

My question touches on several subjects, including WinForms, but I think that the core issue is related to an architecture decision, and for that reason I decided to include my question in this forum.

EDIT: I don't know if this is relevant for this issue, but I forgot to mention that my DataGridView is a derived control that inherits from DataGridView, in which I add the columns at run-time and associate them with the fields of the table of the database to which the DataGridView will be bound, using the following code that is called in the constructor of that control:

this.AutoGenerateColumns = false;
this.Columns.Add("Date", "Date");
this.Columns["Date"].DataPropertyName = "Date";
// code...

Thanks. Best, Miguel.
Monday, August 30, 2010 1:35 AM - Edited by Miguel T. Monday, August 30, 2010 2:35 PM (Added some text in the title to make it clear)

All replies

Just a few notes:

1) The *automatic* sorting of a DataGridView only works with DataSets/DataTables, so it won't automatically sort if you change it from IList to IBindingList. However, there is a set of "standard" code available in some examples for getting a list to sort.

2) I have an example of an n-tiered application that uses IBindingList here: Click one of the business object examples.

Hope this helps.

msmvps.com/blogs/deborahk
We are volunteers and ask only that if we are able to help you, that you mark our reply as your answer. THANKS!

Monday, August 30, 2010 2:16 AM

Hi Deborahk. The link that I included in my message includes code for a customized list that derives from BindingList<T> and in which the automatic sorting of DataGridView works. I have tested the code in the past and it worked. The thing is that in that example I started with an empty BindingList and then added data within a loop. Then I used DataGridView.DataSource and it worked. Now I have different requirements, since the starting point is not an empty list but a list that is the result of a query using LINQ to Objects or LINQ to SQL. Also, at that time when I tested BindingList, I used a simple application with no layers.

The thing is that I have already completed the domain model and data source layer of my application, in which I used LINQ to Objects for queries in the domain layer and LINQ to SQL for queries in the data source layer. This means that my queries return IEnumerable<T> or IQueryable<T>. From these types it's easy to convert to IList<T> or List<T>.
However, I think that I can't say the same for converting those types to DataTable or DataSet; that would imply changing a lot of code in my application, and if possible I really would like to avoid that. Thanks for your feedback. I will take a look at your code. Best, Miguel.

Monday, August 30, 2010 2:42 AM

Yes, OK. I misunderstood you. I thought you were using BindingList WITHOUT any extra code and the grid was sorting. The extra code I mentioned in my post is the code from the link you provided. So it sounds like you already have that covered.
msmvps.com/blogs/deborahk

Monday, August 30, 2010 3:30 AM

Hi DeborahK. Probably my message wasn't completely clear. In fact, for the example in this forum I used a BindingList<T> WITHOUT any extra code and it didn't work. However, I tried the BindingList<T> with the complete code from the link mentioned in a small example, and I was able to sort the columns in the DataGridView. In that small example I started with an empty BindingList<T> and added data within a loop. Now I have different requirements, since I'm using LINQ to Objects and LINQ to SQL and will need to start with an IEnumerable<T> or IQueryable<T>. The problem is that I don't know how to convert IEnumerable<T> or IQueryable<T> to BindingList<T>.

Since your last message I tried the following code, and I was able to sort all the columns in my DataGridView (this is read-only data):

// Using IQueryable<T>
IQueryable<PriceData> histPrices = db.PriceData.Where(pd => (pd.AssetId == 310) && (pd.Date >= startDate) && (pd.Date <= endDate));
dataGridView.DataSource = histPrices;

If I use histPrices.AsEnumerable() I can also sort the columns in the DataGridView, but if I use histPrices.ToList() I can't.
Based on this it looks like I could use IEnumerable<T> as the result of my queries, and also in the methods of all the layers that need to handle that data. Since my main requirement is to be able to sort columns in a DataGridView, basically I need to know:

1) If I only need read-only data, should I use IEnumerable<T> as the result of my queries and in all the methods that deal with that data in the other layers?

2) If I need two-way data binding, should I use BindingList<T> instead? If yes, then how can I convert an IEnumerable<T> to a BindingList<T>? Or should I use another approach instead?

Thanks for your feedback. Best, Miguel.

Monday, August 30, 2010 2:25 PM

Hi Deborahk. In order to make the title clearer, I changed it from "IList<T> or BindingList<T> in the layers of N-Layer application?" to "IList<T>, IEnumerable<T> or BindingList<T> in the layers of N-Layer application using LINQ to SQL in the data source layer?". Best, Miguel.

Monday, August 30, 2010 2:36 PM
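The thread's remaining open question is how to get from IEnumerable<T>/IQueryable<T> to BindingList<T>. One common approach (a sketch, not code from the thread; db, startDate and endDate are the variables from the thread's own query) is to materialize the query into a List<T> and wrap it, since BindingList<T> exposes a constructor that accepts an IList<T>:

```csharp
// Sketch: materialize the LINQ query, then wrap the resulting list for binding.
var histPrices = db.PriceData
    .Where(pd => (pd.AssetId == 310) && (pd.Date >= startDate) && (pd.Date <= endDate))
    .ToList();                                                 // IQueryable<T> -> List<T>

var bindingList = new BindingList<PriceData>(histPrices);     // BindingList<T> wraps any IList<T>
dataGridView.DataSource = bindingList;
```

The same pattern works for the thread's custom PriceBindingList, since it derives from BindingList<T> and forwards an IList<PriceData> to the base constructor; that is what would restore the sortable behaviour from the linked article while still starting from a LINQ query result.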
https://social.msdn.microsoft.com/Forums/en-US/1705323c-9127-4902-a42c-ace627f6d60e/ilistlttgt-ienumerablelttgt-or-bindinglistlttgt-in-the-layers-of-nlayer-application?forum=architecturegeneral
Problem updating libretro in setup-script 4.4.1 under Ubuntu MATE 18.04

- sabrecheeky last edited by

I posted this issue elsewhere and it was suggested that I start a new topic. On upgrading my udoo hobby pc (x86 celeron, intel hd graphics 400) from Ubuntu MATE 17.10 to 18.04, RetroPie would crash on starting any emulation. Updating via the setup-script would build successfully, but not fix the problem (switching to sdl from gl in the retroarch.cfg would allow the emulation to run).

The linked topic discussed this as a problem with libretro which appears to have been fixed in an update. However I am now unable to build the latest libretro via the setup-script, either individually, or as a full update, as below:

@sabrecheeky said in LibRetro not yet ready for Ubuntu 18.04:

I tried upgrading again after reading this, but still no joy. I get a fatal error trying to build libretro from Retropie-Setup. The offending lines of the log are as follows:

In file included from ./libretro-common/include/glsym/rglgen.h:32:0,
                 from ./libretro-common/include/glsym/glsym.h:26,
                 from gfx/drivers_context/../common/gl_common.h:37,
                 from gfx/drivers_context/x_ctx.c:44:
./libretro-common/include/glsym/rglgen_headers.h:27:10: fatal error: EGL/egl.h: No such file or directory
 #include <EGL/egl.h>
          ^~~~~~~~~~~
compilation terminated.
Makefile:191: recipe for target 'obj-unix/release/gfx/drivers_context/x_ctx.o' failed
make: *** [obj-unix/release/gfx/drivers_context/x_ctx.o] Error 1
make: *** Waiting for unfinished jobs....
input/drivers_joypad/udev_joypad.c:533:12: warning: ‘sort_devnodes’ defined but not used [-Wunused-function]
 static int sort_devnodes(const void *a, const void *b)
            ^~~~~~~~~~~~~
Could not successfully build retroarch - RetroArch - frontend to the libretro emulator cores - required by all lr- emulators (/home/scot/RetroPie-Setup/tmp/build/retroarch/retroarch not found).
Log ended at: Fri 18 May 15:43:48 BST 2018
Total running time: 0 hours, 1 mins, 54 secs

Any Ideas?!?
I am running Ubuntu MATE 18.04 on a udoo x86 hobby board (everything ran fine under 17.10). Any help would be greatly appreciated!

- mitu Global Moderator last edited by

The build fails because of a missing header file:

./libretro-common/include/glsym/rglgen_headers.h:27:10: fatal error: EGL/egl.h: No such file or directory

Normally this should be part of the package libegl1-mesa-dev; you can manually install it before attempting the installation with:

sudo apt-get -y install libegl1-mesa-dev
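Before re-running the build, it can help to confirm whether the missing header is actually on disk. A small sketch (the include path is the usual Ubuntu location and the helper name is invented for illustration):

```shell
# Hypothetical helper: report whether a header file is present on disk.
check_header() {
  if [ -f "$1" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# /usr/include/EGL/egl.h is where libegl1-mesa-dev typically installs the header.
check_header /usr/include/EGL/egl.h
```

If this prints "missing", installing libegl1-mesa-dev as suggested above should provide the file.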
https://retropie.org.uk/forum/topic/17849/problem-updating-libretro-in-setup-script-4-4-1-under-ubuntu-mate-18-04/2
As a developer, testing is very important. Some developers have the mindset of "Meh, I write code, testing is a QA's job", which is pretty poor. It's much better for the developer to be test-driving their code; generally, if adopted well, it produces better quality code, and of course the sooner issues are caught the cheaper they are to address.

Most Java developers who are following TDD probably use mockito or powermock alongside JUnit. I've never been much of a fan of those combinations as I believe they involve far too much boilerplate code, and test code often becomes more verbose and harder to maintain than the actual production code itself. After being introduced to Spock, and testing using Groovy last year, I'm absolutely sold on it and have subsequently used it on several other projects.

For the purpose of this post, I'll base it around a service class that does some things with a domain object, via a data access layer, which is something most enterprise developers can relate to.

Here's the domain class:

public class User {
    private int id;
    private String name;
    private int age;
    // Accessors omitted
}

Here's the DAO interface:

public interface UserDao {
    public User get(int id);
}

And finally the service:

public class UserService {
    private UserDao userDao;

    public UserService(UserDao userDao) {
        this.userDao = userDao;
    }

    public User findUser(int id){
        return null;
    }
}

Nothing too complex to mention here. The class that we're going to put under test is the service. You can see that the service is dependent on a UserDao, which is passed into the constructor. This is a good design practice because you're stating that in order to have a UserService, it must be constructed with a UserDao. This also becomes useful later when using dependency injection frameworks like Spring, so you can mark them both as Components and Autowire the constructor arguments in, but alas.

Let's go ahead and create a test class for the service (command+shift+t if using IntelliJ on a mac).
class UserServiceTest extends Specification {

    UserService service
    UserDao dao = Mock(UserDao)

    def setup(){
        service = new UserService(dao)
    }

    def "it gets a user by id"(){
        given:
        def id = 1

        when:
        def result = service.findUser(id)

        then:
        1 * dao.get(id) >> new User(id:id, name:"James", age:27)
        result.id == 1
        result.name == "James"
        result.age == 27
    }
}

Here we go, right in at the deep end, let me explain what is going on here. Firstly, we're using Groovy, so although it looks like Java (I suppose it is in some respects, as it compiles down to Java bytecode anyway) the syntax is a bit lighter, such as no semi-colons to terminate statements, no need for the public modifier as everything is public by default, and Strings for method names. If you want to learn more about Groovy, check out their documentation here.

As you can see, the test class extends from spock.lang.Specification; this is a Spock base class and allows us to use the given, when and then blocks in our test. You'll see the subject of the test then, the service. I prefer to define this as a field and assign it in the setup, but others prefer to instantiate it in the given block of each test; I suppose this is really just a personal preference.

Creating mocks with Spock is easy, just use Mock(Class). I then pass the mocked DAO dependency into the UserService in the setup method. setup() runs before each test is executed (likewise, cleanup() is run after each test completes). This is an excellent pattern for testing, as you can mock out all dependencies and define their behaviour, so you're literally just testing the service class.

A great feature of Groovy is that you can use String literals to name your methods; this makes tests much easier to read and work out what they are actually testing, rather than naming them "public void testItGetsAUserById()".

Given, when, then

Spock is a behaviour driven development (BDD) testing framework, which is where it gets the given, when and then patterns from (amongst others).
The easiest way I can explain it is as follows: given some parts, when you do something, then you expect certain things to happen.

It's probably easier to explain my test. We're given an id of 1; you can think of this as a variable for the test. The when block is where the test starts: this is the invocation. We're saying that when we call findUser() on the service, passing in an id, we'll get something back and assign it to the result. The then block holds your assertions; this is where you check the outcomes. The first line in the then block looks a little scary, but actually it's very simple. Let's dissect it.

1 * dao.get(id) >> new User(id:id, name:"James", age:27)

This line is setting an expectation on the mocked dao. We're saying that we expect 1 (and only 1) invocation on the dao.get() method, and that invocation must be passed id (which we defined as 1 earlier). Still with me? Good, we're half way.

The double chevron ">>" is a Spock feature; it means "then return". So really this line reads as "we expect 1 hit on the mocked dao get(), and when we do, return a new User object".

You can also see that I'm using named parameters in the constructor of the User object; this is another neat little feature of Groovy. The rest of the then block are just assertions on the result object, not really required here as we're doing a straight passthrough on the dao, but it gives an insight as to what you'd normally want to do in more complex examples.

The implementation

If you run the test, it'll fail, as we haven't implemented the service class, so let's go ahead and do that right now. It's quite simple; just update the service class to the following:

public class UserService {
    private UserDao userDao;

    public UserService(UserDao userDao) {
        this.userDao = userDao;
    }

    public User findUser(int id){
        return userDao.get(id);
    }
}

Run the test again, it should pass this time.

Stepping it up

That was a reasonably simple example. Let's look at creating some users.
Add the following into the UserService:

public void createUser(User user){
    // check name
    // if exists, throw exception
    // if !exists, create user
}

Then add these methods into the UserDao:

public User findByName(String name);
public void createUser(User user);

Then start with this test:

def "it saves a new user"(){
    given:
    def user = new User(id: 1, name: 'James', age:27)

    when:
    service.createUser(user)

    then:
    1 * dao.findByName(user.name) >> null

    then:
    1 * dao.createUser(user)
}

This time, we're testing the createUser() method on the service; you'll notice that there is nothing returned this time. You may be asking "why are there 2 then blocks?". If you group everything into a single then block, Spock just asserts that they all happen; it doesn't care about ordering. If you want ordering on assertions then you need to split them into separate then blocks; Spock then asserts them in order. In our case, we want to firstly find by user name to see if it exists, THEN we want to create it.

Run the test, it should fail. Implement with the following and it'll pass:

public void createUser(User user){
    User existing = userDao.findByName(user.getName());
    if(existing == null){
        userDao.createUser(user);
    }
}

That's great for scenarios where the user doesn't already exist, but what if it does? Let's write so co…NO! Test first!

def "it fails to create a user because one already exists with that name"(){
    given:
    def user = new User(id: 1, name: 'James', age:27)

    when:
    service.createUser(user)

    then:
    1 * dao.findByName(user.name) >> user

    then:
    0 * dao.createUser(user)

    then:
    def exception = thrown(RuntimeException)
    exception.message == "User with name ${user.name} already exists!"
}

This time, when we call findByName, we want to return an existing user. Then we want 0 interactions with the createUser() mocked method. The third then block grabs hold of the thrown exception by calling thrown() and asserts the message.
Note that Groovy has a neat feature called GStrings that allows you to put arguments inside quoted strings. Run the test; it will fail. Implement with the following and it'll pass:

public void createUser(User user){
    User existing = userDao.findByName(user.getName());
    if(existing == null){
        userDao.createUser(user);
    }
    else{
        throw new RuntimeException(String.format("User with name %s already exists!", user.getName()));
    }
}

I'll leave it there. That should give you a brief intro to Spock; there is far more that you can do with it, this is just a basic example.

Snippets of wisdom

- Read the Spock documentation!
- You can name Spock blocks, such as given: "Some variables"; this is useful if it's not entirely clear what your test is doing.
- You can use _ * mock.method() when you don't care how many times a mock is invoked.
- You can use underscores to wildcard methods and classes in the then block, such as 0 * mock._ to indicate you expect no other calls on the mock, or 0 * _._ to indicate no calls on anything.
- I often write the given, when and then blocks, but then I start from the when block and work outwards. It sounds like an odd approach, but I find it easier to work from the invocation, then work out what I need (given) and then what happens (then).
- The expect block is useful for testing simpler methods that don't require asserting on mocks.
- You can wildcard arguments in the then block if you don't care what gets passed into mocks.
- Embrace Groovy closures! They can be your best friend in assertions!
- You can override setupSpec and cleanupSpec if you want things to run only once for the entire spec.

Conclusion

Having used Spock (and Groovy for testing) on various work and hobby projects, I must admit I've become quite a fan. Test code is there to be an aid to the developer, not a hindrance. I find that Groovy has many shortcuts (the collections API to name but a few!) that make writing test code much nicer.
You can view the full Gist here
https://dzone.com/articles/intro-so-groovyspock-testing
xml and xsd - XML

xml and xsd (2005-03-09T17:05:59): ...i want to use it in my local system and validate xml.. kindly reply soon. if ...); factory.setAttribute("

How to generate XML from XSD?
Hi Experts, I have an XSD with me. I want to generate XML files based on the XSD, with the fields filled out from database tables. Please help me out.
http://www.roseindia.net/tutorialhelp/comment/90568
User-defined functions - Python

This article contains Python user-defined function (UDF) examples. It shows how to register UDFs, how to invoke UDFs, and caveats regarding evaluation order of subexpressions in Spark SQL.

Register a function as a UDF

def squared(s):
  return s * s
spark.udf.register("squaredWithPython", squared)

You can optionally set the return type of your UDF. The default return type is StringType.

from pyspark.sql.types import LongType
def squared_typed(s):
  return s * s
spark.udf.register("squaredWithPython", squared_typed, LongType())

Call the UDF in Spark SQL

spark.range(1, 20).createOrReplaceTempView("test")

%sql select id, squaredWithPython(id) as id_squared from test

Use UDF with DataFrames

from pyspark.sql.functions import udf
from pyspark.sql.types import LongType
squared_udf = udf(squared, LongType())
df = spark.table("test")
display(df.select("id", squared_udf("id").alias("id_squared")))

Alternatively, you can declare the same UDF using annotation syntax:

from pyspark.sql.functions import udf
@udf("long")
def squared_udf(s):
  return s * s
df = spark.table("test")
display(df.select("id", squared_udf("id").alias("id_squared")))

Evaluation order and null checking

Spark SQL (including SQL and the DataFrame and Dataset API) does not guarantee the order of evaluation of subexpressions. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order. Therefore, if a UDF relies on short-circuiting semantics in SQL for null checking, there is no guarantee that the null check will happen before the UDF is invoked. For example:

spark.udf.register("strlen", lambda s: len(s), "int")
spark.sql("select s from test1 where s is not null and strlen(s) > 1") // no guarantee

This WHERE clause does not guarantee the strlen UDF to be invoked after filtering out nulls. To perform proper null checking, we recommend that you either make the UDF itself null-aware and do null checking inside the UDF, or use IF or CASE WHEN expressions to do the null check and invoke the UDF in a conditional branch:

spark.udf.register("strlen_nullsafe", lambda s: len(s) if not s is None else -1, "int")
spark.sql("select s from test1 where s is not null and strlen_nullsafe(s) > 1") // ok
spark.sql("select s from test1 where if(s is not null, strlen(s), null) > 1") // ok
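The first recommendation, making the UDF itself null-aware, can be sketched in plain Python without a Spark cluster. The `null_safe` decorator below is our own illustrative helper, not part of PySpark:

```python
def null_safe(default):
    """Wrap a one-argument function so that None inputs return a default
    instead of raising (e.g. len(None) would raise a TypeError)."""
    def wrap(f):
        def g(s):
            return default if s is None else f(s)
        return g
    return wrap

@null_safe(-1)
def strlen(s):
    return len(s)

print(strlen("hello"))  # 5
print(strlen(None))     # -1
```

Registering such a function with `spark.udf.register` would behave like the `strlen_nullsafe` example above, regardless of the order in which Spark evaluates the WHERE clause.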
https://docs.microsoft.com/en-us/azure/databricks/spark/latest/spark-sql/udf-python
In the C programming language, when a function calls itself over and over again, that function is known as a recursive function. The process of a function calling itself repeatedly is known as recursion. In this tutorial, we will understand the concept of recursion using practical examples.

1. C Recursion Concept

Let's start with a very basic example of recursion:

#include <stdio.h>

void func(void)
{
    printf("\n This is a recursive function \n");
    func();
    return;
}

int main(void)
{
    func();
    return 0;
}

In the code above, you can see that the function func(), in its definition, calls itself. So, func() becomes a recursive function. Can you guess what will happen when the code (shown above) is executed? If we go by the code, the main() function would call func() once and then func() would continue calling itself forever. Will this be the exact behaviour? Let's execute the code and check. Here is the output:

$ ./recrsn
This is a recursive function
This is a recursive function
....
....
This is a recursive function
This is a recursive function
This is a recursive function
Segmentation fault (core dumped)

In the output above:
- The print "This is a recursive function" prints continuously many times.
- A set of three dots "…" is used to omit a large part of the actual output, which was nothing but the same print.
- Towards the end of the output you can observe "Segmentation fault" or, as we popularly say, the program crashes.

Earlier, we thought that the program would continue executing forever because the recursive function func() would continue calling itself forever, but it did not happen so. The program crashed. Why did it crash? Here is the reason for this crash:
- For each call to func(), a new function stack is created.
- With func() calling itself continuously, new function stacks are also created continuously.
- At some point, this causes stack overflow and hence the program crashes.
On a related note, it is also important for you to get a good understanding of Buffer Overflow and Linked Lists.

2. Practical Example of C Recursion

For complete newbies, it is OK to have a question like: what's the practical use of recursion? In this section, I will provide some practical examples where recursion can make things really easy. Suppose you have numbers from 0 to 9 and you need to calculate the sum of these numbers in the following way:

0 + 1 = 1
1 + 2 = 3
3 + 3 = 6
6 + 4 = 10
10 + 5 = 15
15 + 6 = 21
21 + 7 = 28
28 + 8 = 36
36 + 9 = 45

So, you can see that we start with 0 and 1, sum them up, add the result to the next number, i.e. 2, then again add that result to 3, and continue like this. Now, I will show you how recursion can be used to implement the logic for this requirement in C code:

#include <stdio.h>

int count = 1;

void func(int sum)
{
    sum = sum + count;
    count++;
    if(count <= 9)
    {
        func(sum);
    }
    else
    {
        printf("\nSum is [%d] \n", sum);
    }
    return;
}

int main(void)
{
    int sum = 0;
    func(sum);
    return 0;
}

If you try to understand what the above code does, you will observe:
- When func() was called through main(), 'sum' was zero.
- For every call to func(), the value of 'sum' is incremented with 'count' (which is 1 initially), which itself gets incremented with every call.
- The condition for termination of this recursion is when the value of 'count' exceeds 9. This is exactly what we expect.
- When 'count' exceeds 9, at this very moment, the value of 'sum' is the final figure that we want and hence the solution.
Here is another example where recursion can be used to calculate the factorial of a given number:

#include <stdio.h>

int func(int num)
{
    int res = 0;
    if(num <= 0)
    {
        printf("\n Error \n");
    }
    else if(num == 1)
    {
        return num;
    }
    else
    {
        res = num * func(num - 1);
        return res;
    }
    return -1;
}

int main(void)
{
    int num = 5;
    int fact = func(num);
    if (fact > 0)
        printf("\n The factorial of [%d] is [%d]\n", num, fact);
    return 0;
}

Please note that I have used the hard-coded number '5' to calculate its factorial. You can enhance this example to accept input from the user. The earlier example demonstrated only how the sum was calculated at the final call of func(), but the reason I used this example is that it demonstrates how return values can be used to produce desired results. In the example above, the call sequence across different function stacks can be visualized as:

res = 5 * func(5-1); // This is func() stack 1
res = 4 * func(4-1); // This is func() stack 2
res = 3 * func(3-1); // This is func() stack 3
res = 2 * func(2-1); // This is func() stack 4
return 1;            // This is func() stack 5

Now, substitute the return value of stack 5 into stack 4, the return value of stack 4 (i.e. res) into stack 3, and so on. Finally, in stack 1 you will get something like

res = 5 * 24

This is 120, which is the factorial of 5, as shown in the output when you execute this recursive program.

$ ./recrsn
The factorial of [5] is [120]

{ 10 comments… add one }

int fact(int n) {
if (n == 0)
return 1;
return n * fact(n - 1);
}

Would it be possible to have main function calling it self? I know, some of people will not say…, but lets just consider it for a second. How it would work with two function calling eachother at the same time. One more thing, it is good to omit global variables, you just don't need it at the problem above. And who writes sum = sum + count, don't you just know for a +=, it is not a Pascal for God sake. And where is more interesting problems!
How theory looks at recursion, when to and not use it, and so on… There are many talks to say about this subject. Great post! Another great introduction to recursion is traversing a tree structure. I’ve used this a few times for my students and they seemed to get it. Ok, great! So why you don’t share the wisdom with us on traversing a tree structure, I am sure it would be great contribution to the subject… Its great, very useful Yes it’s possible to call a main() func. its awesome…. VERY very thanks for help me its easy to learn c and very usable. Ohh!!! A master mind realy . I want to know more with example of different type of series calculation. Thank you its very useful for me
http://www.thegeekstuff.com/2013/09/c-recursion/
std::thread::joinable
From cppreference.com

Checks whether the std::thread object identifies an active thread of execution. Specifically, returns true if get_id() != std::thread::id(), so a default-constructed thread is not joinable. A thread that has finished executing code, but has not yet been joined, is still considered an active thread of execution and is therefore joinable.

Example
Run this code

#include <iostream>
#include <thread>
#include <chrono>

void foo()
{
    std::this_thread::sleep_for(std::chrono::seconds(1));
}

int main()
{
    std::thread t;
    std::cout << "before starting, joinable: " << std::boolalpha << t.joinable() << '\n';
    t = std::thread(foo);
    std::cout << "after starting, joinable: " << t.joinable() << '\n';
    t.join();
    std::cout << "after joining, joinable: " << t.joinable() << '\n';
}

Output:

before starting, joinable: false
after starting, joinable: true
after joining, joinable: false

References
- C++20 standard (ISO/IEC 14882:2020): 32.4.2.5 Members [thread.thread.member]
- C++17 standard (ISO/IEC 14882:2017): 33.3.2.5 thread members [thread.thread.member]
- C++14 standard (ISO/IEC 14882:2014): 30.3.1.5 thread members [thread.thread.member]
- C++11 standard (ISO/IEC 14882:2011): 30.3.1.5 thread members [thread.thread.member]
https://en.cppreference.com/w/cpp/thread/thread/joinable
Markowitz Theory using Python

Hi ML Enthusiasts! In the previous tutorial, we learnt about Markowitz portfolio theory. In this part 5 of our Financial Analytics series, we will learn how to build an efficient frontier using Markowitz theory and how to implement it in Python. In case you're new to this series, we suggest you go through it from its starting point, i.e., part 1 of the Financial Analytics series.

Markowitz Theory using Python – Real-life scenario

We will look at what share and stock profiles look like in real life. In real life, shares with higher expected returns carry higher risk, i.e., higher standard deviation, while shares with lower expected returns carry lower risk. For this post, we will be using the following values corresponding to shares X and Y. After applying the same methodology and formulae that we used in our previous post, we get the following table, with the only difference being that we use weight steps of 10%:

Let's start analyzing this with Python code. First, we import all the libraries we will need for this analysis.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

Next, we will convert the values in the Expected Return column into a list and pass this list to a variable, ExpectedReturn. The same will be done for StandardDeviation.

ExpectedReturn = [9.00,9.20,9.40,9.60,9.80,10.00,10.20,10.40,10.60,10.80,11.00]
StandardDeviation = [8.0,7.5,7.1,6.9,6.8,7.0,7.3,7.8,8.5,9.2,10.0]

We will now use the matplotlib library to produce a scatter plot, with standard deviation on the x-axis and expected return on the y-axis.
plt.scatter(StandardDeviation, ExpectedReturn)
plt.xlabel("Standard deviation (in %)")
plt.ylabel("Expected return (in %)")
plt.title("Markowitz Portfolio Analysis")
plt.show()

Markowitz Theory using Python – Efficient and inefficient frontiers

From the above chart, we can see that for the same value of standard deviation, we get both higher and lower values of return. These points correspond to standard deviations of 7, 7.5 and 8, where the curve forms a parabola. The portfolios on the lower-return branch are termed the inefficient frontier, and the need is to avoid the points below an expected return of 9.75%. All values above this form part of the efficient frontier.

Let's now import the pandas_datareader library and start implementing everything we have learnt so far.

from pandas_datareader import data as dr

We will be choosing The Procter and Gamble Company and The Microsoft Corporation historical data for our analysis.

stocks = ['PG', 'MSFT']

Fetching the last 10 years of data – Adj Close figures for each of these companies:

stock_data = pd.DataFrame()
for s in stocks:
    stock_data[s] = dr.DataReader(s, data_source = 'yahoo', start = '2010-01-01')['Adj Close']
stock_data.head()
stock_data.tail()

Let's now normalize this data and plot line charts to see the trends and compare them with each other.

(stock_data/stock_data.iloc[0] * 100).plot(figsize = (10,8))

From the above chart, we can see that Microsoft had exponential growth. PG, on the other hand, had approximately linear growth with a low slope. Till 2015, PG had the upper hand; after 2015, Microsoft, owing to its exponential rate, did wonders. In 2020, both saw a downward trend (COVID-19 impact?). Let's obtain the logarithmic return figures for both of them.

logReturns = np.log(stock_data/stock_data.shift(1))
logReturns

2586 rows × 2 columns

#To obtain annual average returns!
logReturns.mean() * 250
PG 0.092635
MSFT 0.186833
dtype: float64

#To obtain annual covariance between PG and Microsoft
logReturns.cov() * 250

From the above, we can see that the annual average return of Microsoft, looking at the past 10 years of data, comes out to be 18.7%, while that of PG comes out to be 9.26%, roughly half of Microsoft's. The covariance of PG and MSFT is 0.0192. The variance of MSFT is 0.062 and that of PG is 0.029. Let's now compute the correlation matrix for both of them.

stock_data.corr()

From the above, we can see that there is a fairly strong correlation of 92.5% (>30%) between PG and MSFT. Let's now start creating the efficient frontier.

# Dynamically generating weights code
numberOfStocks = len(stocks)
numberOfStocks
2

# Creating random weights
# Function random of numpy will generate two floats
weights = np.random.random(numberOfStocks)
weights
array([0.19018562, 0.93358835])

weights.sum()
1.1237739704342409

weights = weights/np.sum(weights)
weights
array([0.16923832, 0.83076168])

weights.sum()
1.0

We see that the weights array now sums to 1, or 100%. The weight of the first stock, i.e., PG, is set to 16.92% and that of MSFT to 83.08%.

Calculating the expected return of the portfolio

(weights * logReturns.mean()).sum() * 250
0.17089096007095564

Thus, the expected return of the portfolio with these weights comes out to be 17.09%.

Expected standard deviation or volatility

np.sqrt(np.dot(weights.T, np.dot(logReturns.cov() * 250, weights)))
0.22131358215073907

We can see that the standard deviation or volatility of the portfolio comes out to be 22.13%, which is very high.

Simulation with the same stocks but different weights

We run this simulation to find the set of weights for which the standard deviation or volatility is minimized and the expected return is maximized. Let's do this for 100 different sets of weights first.
#Creating blank lists
expectedReturn = []
standardDeviation = []
weightList0 = []
weightList1 = []

# Running simulations for finding optimum weights
for i in range(100):
    weights = np.random.random(numberOfStocks)
    weights = weights/weights.sum()
    weightList0.append(weights[0])
    weightList1.append(weights[1])
    expectedReturn.append((weights * logReturns.mean()).sum() * 250)
    standardDeviation.append(np.sqrt(np.dot(weights.T, np.dot(logReturns.cov() * 250, weights))))

#Converting lists into arrays
weightList0 = np.array(weightList0) #Weights for PG
weightList1 = np.array(weightList1) #Weights for MSFT
expectedReturn = np.array(expectedReturn)
standardDeviation = np.array(standardDeviation)

#Creating dataframe
df = pd.DataFrame({"Weight of PG": weightList0, "Weight of MSFT": weightList1, "Expected Return": expectedReturn, "Standard deviation": standardDeviation})
df.head()

Let's now plot this on a scatter chart:

plt.figure(figsize=(14, 10), dpi=80)
plt.scatter(df["Standard deviation"], df["Expected Return"])
plt.xlabel("Standard deviation")
plt.ylabel("Expected return (in %)")
plt.show()

From the above chart, we see that points above an expected return of 0.11, or 11%, correspond to the efficient frontier and those below 11% to the inefficient frontier. The chart states the same thing as before: if you want greater return, you will have to take greater risk! If you're a risk-averse person, take the weights corresponding to an expected return of 11%. Let's see which values correspond to them.

df[(df["Expected Return"]>0.11) & (df["Expected Return"]< 0.12)].sort_values(by=['Expected Return'])
df[(df["Expected Return"]>0.11)].sort_values(by=['Expected Return']).head(10)

Thus, we can see that on the efficient frontier, an expected return of 11.37% comes with a standard deviation of 16.52%, which corresponds to weights of 77.56% in PG and 22.43% in MSFT shares.
Finding the most optimum portfolio

df["Expected Return"].mean()
0.1387349843777325

df["Expected Return"].sort_values().median()
0.14025710287549686

df[(df["Expected Return"]>0.135)].sort_values(by=['Expected Return'])

df.loc[63]
Weight of PG 0.515741
Weight of MSFT 0.484259
Expected Return 0.138251
Standard deviation 0.178456
Name: 63, dtype: float64

df.loc[49]
Weight of PG 0.495760
Weight of MSFT 0.504240
Expected Return 0.140133
Standard deviation 0.180253
Name: 49, dtype: float64

Going by both the mean and the median of the efficient frontier, we can see that the weight of PG varies from 51.57% to 49.57% and that of MSFT from 48.42% to 50.42%. This variation leads to an expected return of 13.82% to 14.01% with volatility of 17.84% to 18.02%. A point worth mentioning here is that the volatility of MSFT is higher than that of PG; PG is more stable than MSFT. So, as we reduce the PG weight and increase the MSFT weight, we increase not only the returns but also the volatility of the portfolio.
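As a cross-check on the simulation, the two-asset minimum-variance portfolio has a closed-form solution (a standard Markowitz result). Plugging in the approximate annualized variance and covariance figures computed above:

```python
# Approximate annualized figures from the covariance matrix above.
var_pg, var_msft, cov = 0.029, 0.062, 0.0192

# Closed-form minimum-variance weight for asset 1 of two:
#   w1 = (var2 - cov) / (var1 + var2 - 2*cov)
w_pg = (var_msft - cov) / (var_pg + var_msft - 2 * cov)
w_msft = 1 - w_pg

print(round(w_pg, 3), round(w_msft, 3))  # 0.814 0.186
```

Note that this minimizes volatility only, so it leans heavily toward the more stable PG; the roughly 50/50 allocation picked above trades some extra volatility for a higher expected return.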
https://mlforanalytics.com/2020/04/12/financial-analytics-markowitz-theory-using-python/
Opened 3 years ago Closed 3 years ago #9773 closed enhancement (fixed) Deprecation warning Description I'd like to get rid of this warning that I see when using Python 2.6: /usr/local/python26_trac12/lib/python2.6/site-packages/TracDownloads-0.3_r11238-py2.6.egg/tracdownloads/tags.py:5: DeprecationWarning: the sets module is deprecated Maybe the thing to do here is something like: from trac.util.compat import set Attachments (0) Change History (4) comment:1 Changed 3 years ago by rjollos - Cc pkline added; anonymous removed comment:2 Changed 3 years ago by rjollos - Owner changed from Blackhex to rjollos - Status changed from new to assigned This seems like a pretty trivial change and it is working well. I hope you don't mind if I push it to the repository. comment:3 Changed 3 years ago by rjollos comment:4 Changed 3 years ago by rjollos - Resolution set to fixed - Status changed from assigned to closed Note: See TracTickets for help on using tickets. On the 0.12 branch, I removed the following lines of code because I couldn't see the sets module being used anywhere in the module: I installed TagsPlugin and DownloadsPlugin in a tracdev environment and added a couple of uploads. Everything seems to work fine. I'd be happy to fix this on the 0.12 branch if you feel this is the correct change.
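For context, the compat import suggested above relies on a fallback pattern along these lines (an illustrative sketch, not the actual trac.util.compat source):

```python
# On Python >= 2.4 the builtin `set` exists; on older versions the
# (now deprecated) sets module provided an equivalent class.
try:
    set
except NameError:  # very old Python: fall back to the sets module
    from sets import Set as set

s = set([1, 2, 2, 3])
print(sorted(s))  # [1, 2, 3]
```

Importing the name from one compat module, instead of scattering try/except blocks, avoids the DeprecationWarning on modern Pythons while keeping old ones working.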
http://trac-hacks.org/ticket/9773
Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
- Component/s: REEF Client, REEF.NET Client

Description

When running a REEF application on a specific cluster, we sometimes have to make cluster-specific configuration choices beyond what's available via e.g. the YARN Configuration. One example are clusters which only allow us to open TCP ports in a specific range. Right now, we require the application to add such configuration to the Driver, e.g. via the DriverConfigurationProviders mechanism. This is awkward, because it limits the portability of applications: in order for an app to run on such a cluster, one would have to change the app's code. This is undesirable. Instead, we should have a standard mechanism by which one can provide additional runtime-level configuration to be picked up by the REEF client. Let's use this JIRA to discuss potential solutions to this issue.

Activity

Hi, I am a 4th year student of the Department of Computer Science and Engineering, University of Moratuwa. Additionally, I am familiar with both the Java and C# programming languages. I would like to contribute to this project in GSoC 2017. Can I know more details about the progress made so far on the ideas discussed above? How can I get started with the work?

Here are some ideas on how to support this:

We will need two environment variables, REEF_RUNTIME_CONFIGURATION_JAVA and REEF_RUNTIME_CONFIGURATION_NET. Each of those points to a list of configuration files we will merge into the runtime configuration in the client. I'll use the C# class names below, but the same applies to the Java side. We need to get those configurations merged into the configuration used to instantiate IREEFClient.
This is tricky, as all our canonical examples right now have the application code instantiate the configuration, followed by using an Injector which is used to instantiate the application's client code. That code in turn depends on an instance of IREEFClient. This pattern uses Tang all the way to the Main function. Which is nice, and we should support it, but not always desirable.

Now, to get these new Configurations merged in, we need to intercept the creation of the IREEFClient. We could have a class with static methods for that purpose:

public class REEF
{
  // Merges the configuration given with the ones the env variables point to.
  // If conf is null, we assume the env variables point to the complete configuration for this cluster.
  public static IConfiguration GetRuntimeConfiguration(IConfiguration runtimeConfiguration = null);

  // Creates the injector using GetRuntimeConfiguration(runtimeConfiguration)
  public static Injector NewRuntimeInjector(IConfiguration runtimeConfiguration = null);

  // Creates an IREEFClient instance using the injector created with NewRuntimeInjector
  public static IREEFClient NewREEFClient(IConfiguration runtimeConfiguration = null);
}

This class can be used as a drop-in for the call to NewInjector in the current HelloREEF. Also, it allows clients which don't want to use Tang all the way to just call NewREEFClient and be done with it.

Yes, exactly. For now, it would be a Configuration in JSON format. Hadoop uses XML. We could improve upon this by using a more humane format like YAML.

I see, ok. But still, as the user front end, we should not expect them to write a Tang-like configuration module but rather some sort of key-value file, just like in Hadoop. What do you think?

So the assumption is that all these configs would be available automatically at all evaluators including the driver, right?

I don't think we can make that happen, as it would require the config to be on all the nodes of the cluster.
I was thinking of only using this to point to configurations to be merged into the runtime configuration used by the client.

So the assumption is that all these configs would be available automatically at all evaluators including the driver, right? And by environment variable you mean it points to a file where the user can specify fields like port offset, port range etc., just like the way it is done in Hadoop map-reduce jobs, where we can specify things like speculative execution, input format (n-line etc.). To me the Hadoop way is good, and if you mean the same thing I feel it's a good solution.

We could have an environment variable REEF_RUNTIME_CONFIGURATION which points to such configurations.

Thanks for your interest, Madhawa Vidanapathirana. There hasn't been progress on this issue beyond the discussion above.
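On the Java side, the env-variable idea above could start with a small helper that splits the variable's value into the configuration files it lists. This is a hypothetical sketch; the class name and the path-separator convention are assumptions, since the thread does not fix a format:

```java
import java.io.File;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public final class RuntimeConfigFiles {

  /**
   * Splits an environment-variable value (e.g. the contents of
   * REEF_RUNTIME_CONFIGURATION_JAVA) into the file paths it lists,
   * using the platform path separator, as PATH and CLASSPATH do.
   */
  static List<String> parse(final String envValue) {
    if (envValue == null || envValue.isEmpty()) {
      return Collections.emptyList();
    }
    return Arrays.asList(envValue.split(File.pathSeparator));
  }

  public static void main(final String[] args) {
    final String value = "cluster.json" + File.pathSeparator + "ports.json";
    System.out.println(parse(value));  // [cluster.json, ports.json]
  }
}
```

Each listed file would then be deserialized and merged into the runtime configuration before the client is instantiated, mirroring the C# GetRuntimeConfiguration sketch.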
https://issues.apache.org/jira/browse/REEF-751
AIMing for safety!

This article is full of tips to help you use Docker safely. If you're new to Docker I suggest you first check out my previous articles on Docker concepts, the Docker ecosystems, Dockerfiles, slimming down images, popular commands, and data in Docker.

How concerned do you need to be about security in Docker? It depends. Docker comes with sensible security features baked in. If you are using official Docker images and not communicating with other machines, you don't have much to worry about. However, if you're using unofficial images, serving files, or running apps in production, then the story is different. In those cases you need to be considerably more knowledgeable about Docker security.

Your primary security goal is to prevent a malicious user from gaining valuable information or wreaking havoc. Toward that end, I'll share Docker security best practices in several key areas. By the end of this article you'll have seen over 20 Docker security tips! We'll focus on three areas in the first section:

- Access management
- Image safety
- Management of secrets

Think of the acronym AIM to help you remember them. First, let's look at limiting a container's access.

Access Management — Limit Privileges

When you start a container, Docker creates a group of namespaces. Namespaces prevent processes in a container from seeing or affecting processes in the host, including other containers. Namespaces are a primary way Docker cordons off one container from another. Docker provides private container networking, too. This prevents a container from gaining privileged access to the network interfaces of other containers on the same host. So a Docker environment comes somewhat isolated, but it might not be isolated enough for your use case.

Good security means following the principle of least privilege.
The tricky thing is that once you start limiting what processes can be run in a container, the container might not be able to do something it legitimately needs to do. There are several ways to adjust a container's privileges. First, avoid running as root (or re-map the root user if you must run as root). Second, adjust capabilities with --cap-drop and --cap-add. Avoiding root and adjusting capabilities should be all most folks need to do to restrict privileges. More advanced users might want to adjust the default AppArmor and seccomp profiles. I discuss these in my forthcoming book about Docker, but have excluded them here to keep this article from ballooning.

Avoid running as root

Docker's default setting is for the user in an image to run as root. Many people don't realize how dangerous this is. It means it's far easier for an attacker to gain access to sensitive information and your kernel. As a general best practice, don't let a container run as root.

"The best way to prevent privilege-escalation attacks from within a container is to configure your container's applications to run as unprivileged users." — the Docker Docs.

You can specify a userid other than root at runtime like this:

docker run -u 1000 my_image

The --user or -u flag can specify either a username or a userid. It's fine if the userid doesn't exist. In the example above, 1000 is an arbitrary, unprivileged userid. In Linux, userids between 0 and 499 are generally reserved. Choose a userid over 500 to avoid running as a default system user.

Rather than set the user from the command line, it's best to change the user from root in your image. Then folks don't have to remember to change it at run time. Just include the USER Dockerfile instruction in your image after the Dockerfile instructions that require the capabilities that come with root. In other words, first install the packages you need and then switch the user.
For example:

FROM alpine:latest
RUN apk update && apk add --no-cache git
USER 1000
…

If you must run a process in the container as a root user, re-map root to a less-privileged user on the Docker host. See the Docker docs. You can grant the privileges the user needs by altering the capabilities.

Capabilities

Capabilities are bundles of allowed processes. Adjust capabilities through the command line with --cap-drop and --cap-add. The best policy is to drop all of a container's privileges with --cap-drop all and add back the ones needed with --cap-add.

You can adjust a container's capabilities at runtime. For example, to drop the ability to use kill to stop a container, you can remove that default capability like this:

docker run --cap-drop=KILL my_image

Avoid giving SYS_ADMIN and SETUID privileges to processes, as they grant broad swaths of power. Adding these capabilities to a user is similar to giving root permissions (and avoiding that outcome is kind of the whole point of not using root).

It's safer to not allow a container to use a port number between 1 and 1023, because most network services run in this range. An unauthorized user could listen in on things like logins and run unauthorized server applications. These lower-numbered ports require running as root or being explicitly given the CAP_NET_BIND_SERVICE capability.

To find out things like whether a container has privileged port access, you can use inspect. Running docker container inspect my_container_name will show you lots of details about the allocated resources and security profile of your container. Here's the Docker reference for more on privileges.

As with most things in Docker, it's better to configure containers in an automatic, self-documenting file. With Docker Compose you can specify capabilities in a service configuration like this:

cap_drop:
  - ALL

Or you can adjust them in Kubernetes files as discussed here. The full list of Linux capabilities is here.
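A fuller Compose sketch of the drop-all, add-back policy (the service name and the added capability are illustrative; a web server binding port 80 needs only NET_BIND_SERVICE):

```yaml
services:
  web:
    image: nginx:alpine
    cap_drop:
      - ALL                  # start from zero privileges
    cap_add:
      - NET_BIND_SERVICE     # add back only what the service needs
```

Starting from ALL dropped and adding back one capability at a time makes the container's privilege set explicit and reviewable in version control.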
If you want more fine-grained control over container privileges, check out my discussion of AppArmor and seccomp in my forthcoming book. Subscribe to my email newsletter to be notified when it's available.

Access Management — Restrict Resources

It's a good idea to restrict a container's access to system resources such as memory and CPU. Without a resource limit, a container can use up all available memory. If that happens, the Linux host kernel will throw an out-of-memory exception and kill processes. This can lead the whole system to crash. You can imagine how attackers could use this knowledge to try to bring down apps. If you have multiple containers running on the same machine, it's smart to limit the memory and CPU any one container can use.

If your container runs out of memory, it shuts down. Shutting down your container can cause your app to crash, which isn't fun. However, this isolation protects the host from running out of memory and all the containers on it from crashing. And that's a good thing.

Docker Desktop CE for Mac v2.1.0 has default resource restrictions. You can access them under the Docker icon -> Preferences. Then click on the Resources tab. You can use the sliders to adjust the resource constraints.

Resource settings on Mac

Alternatively, you can restrict resources from the command line by specifying the --memory flag, or -m for short, followed by a number and a unit of measure. 4m means 4 mebibytes, and is the minimum container memory allocation. A mebibyte (MiB) is slightly more than a megabyte (1 MiB = 1.048576 MB). The docs are currently incorrect, but hopefully the maintainers will have accepted my PR to change it by the time you read this.
To see what resources your containers are using, enter the command docker stats in a new terminal window. You'll see running container statistics regularly refreshed.

Behind the scenes, Docker is using Linux Control Groups (cgroups) to implement resource limits. This technology is battle tested. Learn more about resource constraints on Docker here.

Image safety

Grabbing an image from Docker Hub is like inviting someone into your home. You might want to be intentional about it.

Use trustworthy images

Rule one of image safety is to only use images you trust. How do you know which images are trustworthy? It's a good bet that popular official images are relatively safe. Such images include alpine, ubuntu, python, golang, redis, busybox, and node. Each has over 10M downloads and lots of eyes on them. Docker sponsors a dedicated team that is responsible for reviewing and publishing all content in the Official Images. This team works in collaboration with upstream software maintainers, security experts, and the broader Docker community to ensure the security of these images.

Reduce your attack surface

Related to using official base images: use a minimal base image. With less code inside, there's a lower chance of security vulnerabilities. A smaller, less complicated base image is more transparent. It's a lot easier to see what's going on in an Alpine image than in your friend's image that relies on her friend's image that relies on another base image. A short thread is easier to untangle.

Similarly, only install packages you actually need. This reduces your attack surface and speeds up your image downloads and image builds.

Require signed images

You can ensure that images are signed by using Docker content trust. Docker content trust prevents users from working with tagged images unless they contain a signature. Trusted sources include Official Docker Images from Docker Hub and signed images from user-trusted sources.

Content trust is disabled by default. To enable it, set the DOCKER_CONTENT_TRUST environment variable to 1.
From the command line, run the following:

export DOCKER_CONTENT_TRUST=1

Now when I try to pull down my own unsigned image from Docker Hub, it is blocked:

Error: remote trust data does not exist for docker.io/discdiver/frames: notary.docker.io does not have trust data for docker.io/discdiver/frames

Content trust is a way to keep the riffraff out. Learn more about content trust here.

Docker stores and accesses images by the cryptographic checksum of their contents. This prevents attackers from creating image collisions. That's a cool built-in safety feature.

Managing Secrets

Your access is restricted and your images are secure, so now it's time to manage your secrets.

Rule 1 of managing sensitive information: do not bake it into your image. It's not too tricky to find unencrypted sensitive info in code repositories, logs, and elsewhere.

Rule 2: don't use environment variables for your sensitive info, either. Anyone who can run docker inspect or exec into the container can find your secret. So can anyone running as root. Hopefully we've configured things so that users won't be running as root, but redundancy is part of good security. Often logs will dump environment variable values, too. You don't want your sensitive info spilling out to just anyone.

Docker volumes are better. They are the recommended way to access your sensitive info in the Docker docs. You can use a volume as a temporary file system held in memory. Volumes remove the docker inspect and logging risks. However, root users could still see the secret, as could anyone who can exec into the container. Overall, volumes are a pretty good solution.

Even better than volumes, use Docker secrets. Secrets are encrypted.

Some Docker docs state that you can use secrets only with Docker Swarm. Nevertheless, you can use secrets in Docker without Swarm.

If you just need the secret in your image, you can use BuildKit. BuildKit is a better backend than the current build tool for building Docker images.
It cuts build time significantly and has other nice features, including build-time secrets support. BuildKit is relatively new — Docker Engine 18.09 was the first version shipped with BuildKit support. There are three ways to enable the BuildKit backend so you can use its features now. In the future, it will be the default backend.

- Set it as an environment variable with export DOCKER_BUILDKIT=1.
- Prefix your build or run command with DOCKER_BUILDKIT=1.
- Enable BuildKit by default: set "features": { "buildkit": true } in /etc/docker/daemon.json, then restart Docker.

Then you can use secrets at build time with the --secret flag like this:

docker build --secret id=my_key,src=path/to/my_secret_file .

where the file at path/to/my_secret_file contains the secret value. These secrets are not stored in the final image. They are also excluded from the image build cache. Safety first!

If you need your secret in your running container, and not just when building your image, use Docker Compose or Kubernetes. With Docker Compose, add the secrets key-value pair to a service and specify the secret file. Hat tip to the Stack Exchange answer for the Docker Compose secrets tip that the example below is adapted from.

Example docker-compose.yml with secrets:

version: "3.7"
services:
  my_service:
    image: centos:7
    entrypoint: "cat /run/secrets/my_secret"
    secrets:
      - my_secret
secrets:
  my_secret:
    file: ./my_secret_file.txt

Then start Compose as usual with docker-compose up --build my_service.

If you're using Kubernetes, it has support for secrets. Helm-Secrets can help make secrets management in K8s easier. Additionally, K8s has Role-Based Access Control (RBAC) — as does Docker Enterprise. RBAC makes secrets management more manageable and more secure for teams.

A best practice with secrets is to use a secrets management service such as Vault. Vault is a service by HashiCorp for managing access to secrets. It also time-limits secrets.
More info on Vault's Docker image can be found here. AWS Secrets Manager and similar products from other cloud providers can also help you manage your secrets in the cloud.

Just remember, the key to managing your secrets is to keep them secret. Definitely don't bake them into your image or turn them into environment variables.

Update Things

As with any code, keep the languages and libraries in your images up to date to benefit from the latest security fixes. If you refer to a specific version of a base image in your image, make sure you keep it up to date, too. Relatedly, you should keep your version of Docker up to date for bug fixes and enhancements that will allow you to implement new security features. Finally, keep your host server software up to date. If you're running on a managed service, this should be done for you. Better security means keeping things updated.

Consider Docker Enterprise

If you have an organization with a bunch of people and a bunch of Docker containers, it's a good bet you'd benefit from Docker Enterprise. Administrators can set policy restrictions for all users. The provided RBAC, monitoring, and logging capabilities are likely to make security management easier for your team. With Enterprise you can also host your own images privately in a Docker Trusted Registry. Docker provides built-in security scanning to make sure you don't have known vulnerabilities in your images. Kubernetes provides some of this functionality for free, but Docker Enterprise has additional security capabilities for containers and images. Best of all, Docker Enterprise 3.0 was released in July 2019. It includes Docker Kubernetes Service with "sensible security defaults".

Additional Tips

- Don't ever run a container as --privileged unless you need to for a special circumstance, like needing to run Docker inside a Docker container — and you know what you're doing.
- In your Dockerfile, favor COPY instead of ADD. ADD automatically extracts zipped files and can copy files from URLs; COPY doesn't have these capabilities. Whenever possible, avoid using ADD so you aren't susceptible to attacks through remote URLs and zip files.
- If you run any other processes on the same server, run them in Docker containers.
- If you use a web server and API to create containers, check parameters carefully so containers you don't want can't be created.
- If you expose a REST API, secure its endpoints with HTTPS or SSH.
- Consider a checkup with Docker Bench for Security to see how well your containers follow their security guidelines.
- Store sensitive data only in volumes, never in a container.
- If using a single-host app with networking, don't use the default bridge network. It has technical shortcomings and is not recommended for production use. If you publish a port, all containers on the bridge network become accessible.
- Use Let's Encrypt for HTTPS certificates for serving. See an example with NGINX here.
- Mount volumes as read-only when you only need to read from them. See several ways to do this here.

Summary

You've seen many ways to make your Docker containers safer. Security is not set-it-and-forget-it. It requires vigilance to keep your images and containers secure.

When thinking about security, remember AIM:

1. Access management
- Avoid running as root. Remap if you must use root.
- Drop all capabilities and add back those that are needed.
- Dig into AppArmor if you need fine-grained privilege tuning.
- Restrict resources.

2. Image safety
- Use official, popular, minimal base images.
- Don't install things you don't need.
- Require images to be signed.
- Keep Docker, Docker images, and other software that touches Docker updated.

3. Management of secrets
- Use secrets or volumes.
- Consider a secrets manager such as Vault.

Keeping Docker containers secure means AIMing for safety.
Don’t forget to keep Docker, your languages and libraries, your images, and your host software updated. Finally, consider using Docker Enterprise if you’re running Docker as part of a team. I hope you found this Docker security article helpful.
https://www.experfy.com/blog/top-20-docker-security-tips
The world of software would probably be a better place if developers could code mobile applications using one programming language, one framework, and even one type of computer. Alas, that's never been the case, and I can hardly foresee it happening in the near future. Sometimes, you simply have no other choice than to write the same mobile application for a variety of mobile operating systems. This is a difficult option because it doesn't offer any smart, elegant, or even smooth exit strategy. You can take any of several approaches to writing a mobile app for multiple platforms, but each of these approaches carries risks. The most obvious approach is to write distinct apps—one per mobile platform you want to support. However, doing so can be an expensive proposition. Additionally, writing a mobile app for several platforms is not conducive to agile software development. Unless you can count on having the development resources to build the apps in parallel, you're better off finishing one app before attacking the next one. Furthermore, this waterfall-like development approach could even be detrimental to the app's success, depending on the category of the app. The other options are to attempt to increase your development budget or to distribute the application over a period of time and limit the mobile platforms that the app supports. Although developers would likely agree that there's no known silver bullet to kill the mobile beast, many developers see the PhoneGap mobile app development framework as the closest thing to a silver bullet that exists for mobile app development. But is PhoneGap as good as it seems? In this article, I intend to answer that question based on my experience with using PhoneGap. I will examine the overall architecture of PhoneGap and suggest ways that you can get better results using it to write mobile applications in specific scenarios.
PhoneGap at a Glance

PhoneGap was originally developed by Nitobi, which Adobe Systems acquired in 2011. After acquiring Nitobi, Adobe donated the code for PhoneGap to the Apache Software Foundation, renaming the original codebase Apache Cordova and making it an open source project. Adobe and other companies and individuals contribute changes and extensions to Apache Cordova. So essentially, PhoneGap is simply the first (and widely popular) distribution of the Apache Cordova open source engine.

PhoneGap is built around a simple and smart idea. You write user interface (UI), navigation, and business logic using HTML, Cascading Style Sheets (CSS), and JavaScript; then the entire web project is packaged in a container compiled as a native application for a variety of mobile platforms. The potential is tremendous: You write UI and logic once and package it up as a number of mobile apps for most popular platforms, such as iOS, Android, and Windows Phone. PhoneGap offers an engine to package the set of web resources into an embedded web view. It also offers a comprehensive and consistent API to access device capabilities and sensors and command them from within your JavaScript code. More important, you consume the device API in the same way regardless of the actual device and mobile platform.

So at first sight, PhoneGap is a huge win from whatever perspective you look at it. It allows you to leverage existing web skills and limits exposure to device and mobile platform internals to a bare minimum. One could say that a PhoneGap application is essentially a mobile single-page application that the PhoneGap API can compile to a native package for each of the supported mobile platforms (e.g., APK for Android, IPA for iOS).

PhoneGap in Action

Here's the typical workflow you follow to write mobile applications using the PhoneGap framework. First, you use your favorite web IDE to write a bunch of HTML pages.
This has to be a 100 percent client-side solution, and navigation between pages is managed either through plain hyperlinks or by swapping div elements. If HTML pages need to access certain device capabilities such as contacts, camera, or vibration, you have available a JavaScript API that is consistent across all mobile platforms. The JavaScript API comes through a .js file you obtain from the PhoneGap download and reference from your HTML pages. The entry point in this web solution is commonly an HTML file named index.html, which is the only file in a root directory. (Note that this is only the most commonplace approach, and names and structure are arbitrary.) Once your web application is up and running, your next task is to convert it into applications for different mobile platforms. You create a native project for each of the platforms you want the app to run on—such as Android and iOS. Note that the list of supported frameworks goes well beyond iOS and Android and also includes BlackBerry, Windows Phone, and Tizen. Yes, you got it right: You need a distinct project for each platform. In particular, this means that you still need a Mac to turn your web solution into an iPhone or iPad application. Moreover, it means that you must be able to manage intricacies of native iOS and Android projects (and more). For example, for Android you need an IDE such as IntelliJ IDEA, Android Studio, or Eclipse to build an executable. Using your chosen IDE, you will create a project, add an activity class that inherits from a PhoneGap-provided Android class, and play with the exposed API. To be honest, even though creating the project is not much work and requires no advanced skills, it is nonetheless work you cannot avoid. Let's look at an example. Listing 1 shows an Android activity that backs up a PhoneGap project. 
package com.expoware.squarify;

import android.os.Bundle;
import org.apache.cordova.Config;
import org.apache.cordova.DroidGap;

public class SquarifyActivity extends DroidGap {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        super.loadUrl("");
    }
}

You don't need more code than this. In Android, an activity is nearly the same as a window. The code in Listing 1 just initializes the window and sets its content. The DroidGap class is part of the PhoneGap framework. The loadUrl method retrieves the root HTML page from the resources of the application and creates a full-screen web view where the previously created web application is run. Figure 1 gives you an idea of the results of this code. The yellow area is the full-screen web view.

It should be noted that Adobe also offers a cloud-based build platform, Adobe PhoneGap Build, which saves you the trouble of managing and maintaining projects. Get a paid subscription to the platform, and you'll be able to upload web sources and download mobile binaries.

A Closer Look at PhoneGap

PhoneGap has plenty of advantages, but it isn't perfect. First off, the web UI that PhoneGap generates is hardly specific to the mobile platform. You'll also have to make a "devil's alternative" choice to build either an app that is highly responsive to the user's touch or one that provides an optimum and truly native user experience (UX); with PhoneGap, it is difficult to do both. To add value to the app, you might want to make its UX as close to native as possible, but this is hard to achieve with plain HTML. To accomplish this, you might need additional JavaScript libraries, but the more JavaScript you add, the more you lose in fluidity and responsiveness. The risk is that you'll eventually deliver a PhoneGap app that looks like a true iPhone app except that it is far slower!
Overall, the best (or the least problematic) way to give a more native UI to a PhoneGap app is by using distinct style sheets. Using CSS alone, you might achieve much of what you want in the app's UI but not everything. Another point to consider is that the performance of a PhoneGap application is inherently tied to the performance of the browser mounted on the device and the internal implementation of the WebView widget around it. Like it or not, mobile browsers are not all the same. You are responsible for accommodating differences in the Document Object Model (DOM) implementation and working around performance issues, if you find any. As the complexity of the solution grows, you might find yourself maintaining platform-specific versions of the web code that was originally supposed to be just a single app for multiple mobile platforms. This predicament can certainly be mitigated by good programming techniques, yet it is a challenge that you will have to tackle.

A PhoneGap Case Study

To summarize my experience with PhoneGap, I'd say that I've found it to be good enough on iOS devices and to perform adequately (but far from great) on most Android smartphones in the middle and low segments. I just don't like it on Windows Phone. I've only played with PhoneGap on a couple of other platforms. That said, about a year ago my company submitted a proposal to deliver an application on four different mobile platforms. It was a time-sensitive app bound to a public event at a low price point, and the app would have an overall lifetime of only a few weeks. Initially, we made the quote based on PhoneGap prototypes and our faith in the PhoneGap promise of write an app once and compile it anywhere (and possibly quickly). What we learned, though, is that while the promise of PhoneGap is substantially fulfilled—you really can write a web app and compile it quickly on a few mobile platforms—the final result is not always what you wanted and hoped for.
As it turns out, you cannot blame this state of affairs on PhoneGap. Rather, it all depends on your app's design and how the app is coded. Having confirmed that we were obtaining poor applications because of our code and browsers, we faced the challenge of building four apps within a budget that barely covered full development of a single iPhone app. The main problem we had was with UI and page transitions. In a nutshell, if you want the app to be fast, you have to give up effects; if you want effects, you'll get an app that has far less speed and fluidity than a native app. At the same time, we noticed that our app essentially consisted of a few independent pages tied together by a main menu. We then opted to create a native skeleton for the app in iOS, Android, Windows Phone, and BlackBerry. Sure, this created four distinct native apps, but each app was fairly simple. We had a static main page made up mostly of graphics and links. Then we created empty pages for each link and set up navigation. But how would the app generate actual content? To build in this capability, we added a WebView widget to each page and made it point to a remote URL that returned HTML. Figure 2 illustrates the layout we arranged. Finally, we applied a selective choice of colors and styles, making the content of the WebView nearly indistinguishable from the surrounding native UI. This solution worked well for the particular type of application we were building, and I don't recommend blindly using this approach for just any type of mobile app. Fact is, this hybrid solution worked beautifully for us, allowing us to deliver four apps perfectly on time and on budget with a three-person team.

PhoneGap: The Reality

PhoneGap comes with the promise of reusability: Write your mobile application once and port it to as many as seven different platforms. Although PhoneGap largely fulfills this promise, I'm not sure that PhoneGap is the silver bullet that can always kill the mobile beast.
PhoneGap works well in scenarios where providing a truly native experience is not your primary concern and when setting up a UI and UX common to many platforms is acceptable (e.g., gaming). Beyond just trying out PhoneGap for yourself, I recommend that you experience the power of WebView in basic mobile development. A WebView in mobile platforms is more powerful and lightweight than any WebBrowser component you might have used. A WebView is worth using for displaying both static HTML and downloaded content. In the aforementioned project, downloaded content saved us a few times, allowing us to make last-minute changes without having to publish updates.
https://www.itprotoday.com/mobility/phonegap-mobile-app-development
Load Values into an Android Spinner

This Android Spinner example takes a look at loading string items into the Spinner. The demo code provided is an Android Studio Spinner example project. The Spinner View is useful when you want the user to pick an item from a predetermined list, but do not want to take up a lot of screen space (such as by using several radio buttons). Programmers moving to Android from other environments will notice a difference in terminology. The Android Spinner behaves in a similar fashion to what some may call a drop-down list. A Spinner on other platforms is closer to the Android Pickers, which are often seen when setting dates and times on an Android device. (This is not the same as the Android loading spinner.)

The Android UI Pattern

Programmers coding with the Android SDK soon come across a familiar pattern when designing the user interface. There are the Views that make up the screens (managed by Activities). There is data that needs to be displayed in those Views. Finally, there is the code that links the Views to the data. For some Views and types of data, the code that links them together is provided by an Adapter. In this example the data is an array of strings, the View is the Spinner, and an ArrayAdapter is the link between the two.

Create a New Studio Project

Create a new project in Android Studio, here called Spinner Demo. An Empty Activity is used, with other settings left at their default values.

Add the Data

The array of strings is defined in a values file in the res/values folder. Use the Project explorer to open the file strings.xml. Enter the values for the Spinner into a string array. Here a list of types of coffee is going to be used.
Here is an example strings.xml with a string array called coffeeType:

<resources>
    <string name="app_name">Spinner Demo</string>
    <string-array name="coffeeType">
        <item>Filter</item>
        <item>Americano</item>
        <item>Latte</item>
        <item>Espresso</item>
        <item>Cappuccino</item>
        <item>Mocha</item>
        <item>Skinny Latte</item>
        <item>Espresso Corretto</item>
    </string-array>
    <string name="coffePrompt">Choose Coffee</string>
</resources>

Add the Spinner

The Spinner is added to the activity_main.xml layout file (in the res/layout folder). Open the layout file and delete the default Hello World! TextView. From the Palette, drag and drop a Spinner onto the layout. Set the Spinner ID to chooseCoffee (if dropping onto a ConstraintLayout, also set the required constraints). layout_width and layout_height are both set to wrap_content. The activity_main.xml file will be similar to this:

<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <Spinner
        android:id="@+id/chooseCoffee"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

</android.support.constraint.ConstraintLayout>

Add the Code to Load the Spinner

The Spinner added to the layout file is the basic framework; a layout will also be required to hold the data values in the collapsed and expanded states. Fortunately, for simple uses Android includes default layouts. To connect the array to the Spinner, an ArrayAdapter is created. The ArrayAdapter class has a static method that can take existing suitable resources and use them to create an ArrayAdapter instance. The method createFromResource() takes the current Context, the resource id of the string array, and the resource id of a layout that will be used to display the selected array item when the Spinner is collapsed (by default this layout is repeated to show the list of items in the expanded state). A layout for the data item has not been defined; instead an Android default simple_spinner_item layout is used.
Here is the code for the MainActivity Java class:

package com.example.spinnerdemo;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.widget.ArrayAdapter;
import android.widget.Spinner;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // An adapter is linked to the array and a layout
        ArrayAdapter adapter = ArrayAdapter.createFromResource(
                this, R.array.coffeeType, android.R.layout.simple_spinner_item);

        // Link the adapter to the spinner
        Spinner coffeeChoice = (Spinner) findViewById(R.id.chooseCoffee);
        coffeeChoice.setAdapter(adapter);
    }
}

Run the project to see the Spinner in action. The Spinner supports a dialog style (see the first graphic at the top of the tutorial). To see it, first set the Spinner prompt property to the Choose Coffee string (@string/coffePrompt). Then change the spinnerMode property to dialog. The loading Spinner source code is available in loading_spinner.zip or from the Android Example Projects page.

See Also

- See the other Android Studio example projects to learn Android app programming.

Author: Daniel S. Fowler
https://tekeye.uk/android/examples/ui/load-values-into-an-android-spinner
One of the most important principles of testing is that tests need to occur in a known state. If the conditions in which a test runs are not controlled, then our results could contain false negatives (invalid failed results) or false positives (invalid passed results). This is where test fixtures come in. A test fixture is a mechanism for ensuring proper test setup (putting tests into a known state) and test teardown (restoring the state prior to the test running). Test fixtures guarantee that our tests are running in predictable conditions, and thus the results are reliable.

Let's say we are testing a Bluetooth device. The device's Bluetooth module can sometimes fail. When this happens, the device needs to be power cycled (shut off and then on) to restore Bluetooth functionality. We would not want tests to run if the device was already in a failed state because these results would not be valid. Furthermore, if our tests cause the Bluetooth module to fail, we want to restore it to a working state after the tests run. So, we add a test fixture to power cycle the device before and after each test. Here is how we might do it:

import unittest

def power_cycle_device():
    print('Power cycling bluetooth device...')

class BluetoothDeviceTests(unittest.TestCase):
    def setUp(self):
        power_cycle_device()

    def test_feature_a(self):
        print('Testing Feature A')

    def test_feature_b(self):
        print('Testing Feature B')

    def tearDown(self):
        power_cycle_device()

The unittest framework automatically identifies setup and teardown methods based on their names. A method named setUp runs before each test case in the class. Similarly, a method named tearDown gets called after each test case. Now, we can guarantee that our Bluetooth module is in a working state before and after every test. Here is the output when these tests are run:

Power cycling bluetooth device...
Testing Feature A
Power cycling bluetooth device...
.Power cycling bluetooth device...
Testing Feature B
Power cycling bluetooth device...
.
----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK

Let's consider another scenario. Perhaps our tests rely on working Bluetooth, but there is nothing in the tests that would cause the Bluetooth to stop working. In this case, it would be inefficient to power cycle the device before and after every test. Let's refactor the previous example so that setup and teardown only happen once, before and after all tests in the class are run:

import unittest

def power_cycle_device():
    print('Power cycling bluetooth device...')

class BluetoothDeviceTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        power_cycle_device()

    def test_feature_a(self):
        print('Testing Feature A')

    def test_feature_b(self):
        print('Testing Feature B')

    @classmethod
    def tearDownClass(cls):
        power_cycle_device()

We replaced our setUp method with the setUpClass method and added the @classmethod decorator. We changed the argument from self to cls because this is a class method. Similarly, we replaced the tearDown method with the tearDownClass class method. Now, we get the following output:

Power cycling bluetooth device...
Testing Feature A
Testing Feature B
Power cycling bluetooth device...

----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK

In addition to calling functions, we can also use setup methods to instantiate objects and gather any other data needed. Anything stored in our class will be available throughout our test functions. It's generally good practice to create fixtures that run for every test. However, when a fixture has a large cost (i.e., it takes a long time), then it might make more sense to have it run once per test class rather than once per test.

Let's practice setting up test fixtures!

Instructions

In our tests.py file we have some simple tests written for the passenger check-in experience at the kiosk for Small World Air. We also have some functions we are testing written in kiosk.py.
Take some time to review the provided code in both files. Run the code to continue!

We want to make sure the kiosk is powered on before we run any tests. This is a great time to set up some test fixtures! Create a setUpClass() method which takes a single argument (cls) and calls kiosk.power_on_kiosk(). Add the @classmethod decorator on top of it!

We don't want to leave the kiosk powered on after all tests are run. Create a tearDownClass() method which takes a single argument (cls) and calls kiosk.power_off_kiosk(). Add the @classmethod decorator on top of it!

We also want to make sure that customers are on the welcome page before each test runs. Create a method called setUp(). Inside of the method, call kiosk.return_to_welcome_page().
https://www.codecademy.com/courses/learn-intermediate-python-3/lessons/int-python-unit-testing/exercises/test-fixtures
In a recent newsletter article I complained about how researchers mislead about the applicability of their work. I gave SAT solvers as an example. People provided interesting examples in response, but what was new to me was the concept of SMT (Satisfiability Modulo Theories), an extension to SAT. SMT seems to have more practical uses than vanilla SAT (see the newsletter for details). I wanted to take some time to explore SMT solvers, and I landed on Z3, an open-source SMT solver from Microsoft. In particular, I wanted to compare it to ILP (Integer Linear Programming) solvers, which I know relatively well. I picked a problem that I thought would work better for SAT-ish solvers than for ILPs: subset covering (explained in the next section). If ILP still wins against Z3, then that would be not so great for the claim that SMT is a production-strength solver. All the code used for this post is on Github.

Subset covering

A subset covering is a kind of combinatorial design, which can be explained in terms of magic rings. An adventurer stumbles upon a chest full of magic rings. Each ring has a magical property, but some pairs of rings, when worn together on the same hand, produce a combined special magical effect distinct to that pair. The adventurer would like to try all pairs of rings to catalogue the magical interactions. With only five fingers, how can we minimize the time spent trying on rings?

Mathematically, the rings can be described as a set $X$ of size $n$. We want to choose a family $\mathcal{F}$ of subsets of $X$, with each subset having size 5 (five fingers), such that each subset of $X$ of size 2 (pairs of rings) is contained in some subset in $\mathcal{F}$. And we want $|\mathcal{F}|$ to be as small as possible.

Subset covering is not a "production worthy" problem. Rather, I could imagine it's useful in some production settings, but I haven't heard of one where it is actually used.
I can imagine, for instance, that a cluster of machines has some bug occurring seemingly at random for some point-to-point RPCs, and that in tracking down the problem, you want to deploy a test change to subsets of servers to observe the bug occurring. Something like an experiment design problem. If you generalize the "5" in "5 fingers" to an arbitrary positive integer $k$, and the "2" in "2 rings" to $l$, then we have the general subset covering problem. Define $M(n, k, l)$ to be the minimal number of subsets of size $k$ needed to cover all subsets of size $l$. This problem was studied by Erdős, with a conjecture subsequently proved by Vojtěch Rödl, that asymptotically $M(n, k, l)$ grows like $\binom{n}{l} / \binom{k}{l}$. Additional work by Joel Spencer showed that a greedy algorithm is essentially optimal. However, all of the constructive algorithms in these proofs involve enumerating all the size-$k$ subsets of $X$. This wouldn't scale very well. You can alternatively try a "random" method, typically incurring an extra factor of additional sets required to cover a given fraction of the needed subsets. This is practical, but imperfect. To the best of my knowledge, there is no exact algorithm that both achieves the minimum and is efficient in avoiding the construction of all subsets. So let's try using an SMT solver. I'll be using the Python library for Z3.

Baseline: brute force Z3

For a baseline, let's start with a simple Z3 model that enumerates all the possible subsets that could be chosen. This leads to an exceedingly simple model to compare the complex models against. Define boolean variables $x_S$, true if and only if the subset $S$ is chosen (I call such an $S$ a "choice set"). Define boolean variables $y_T$, true if and only if the subset $T$ (I call this a "hit set") is contained in a chosen choice set. Then the subset cover problem can be defined by two sets of implications. First, if $x_S$ is true, then so must be $y_T$ for every $T \subseteq S$ of size $l$.
E.g., for $S = \{1, 2, 3\}$ and $l = 2$, we get $x_{\{1,2,3\}} \Rightarrow (y_{\{1,2\}} \wedge y_{\{1,3\}} \wedge y_{\{2,3\}})$. In Python this looks like the following (note this program has some previously created lookups and data structures containing the variables):

for choice_set in choice_sets:
    for hit_set_key in combinations(choice_set.elements, hit_set_size):
        hit_set = hit_set_lookup[hit_set_key]
        implications.append(
            z3.Implies(choice_set.variable, hit_set.variable))

Second, if $y_T$ is true, it must be that $x_S$ is true for some $S$ containing $T$ as a subset. For example, $y_{\{1,2\}} \Rightarrow (x_{\{1,2,3\}} \vee x_{\{1,2,4\}} \vee \cdots)$. In code,

for hit_set in hit_sets.values():
    relevant_choice_set_vars = [
        choice_set.variable
        for choice_set in hit_set_to_choice_set_lookup[hit_set]
    ]
    implications.append(
        z3.Implies(
            hit_set.variable,
            z3.Or(*relevant_choice_set_vars)))

Next, in this experiment we're allowing the caller to specify the number of choice sets to try, and the solver should either return SAT or UNSAT. From that, we can use a binary search to find the optimal number of sets to pick. Thus, we have to bound from above and below the number of $x_S$ that are allowed to be true. Z3 supports boolean cardinality constraints, apparently with a special solver to handle problems that have them. Otherwise, the process of encoding cardinality constraints as SAT formulas is not trivial (and the subject of active research). But the code is simple enough:

args = [cs.variable for cs in choice_sets] + [parameters.num_choice_sets]
choice_sets_at_most = z3.AtMost(*args)
choice_sets_at_least = z3.AtLeast(*args)

Finally, we must assert that every $y_T$ is true.
solver = z3.Solver()
for hit_set in hit_sets.values():
    solver.add(hit_set.variable)
for impl in implications:
    solver.add(impl)
solver.add(choice_sets_at_most)
solver.add(choice_sets_at_least)

Running it for $n = 7$, $k = 3$, $l = 2$, and seven choice sets (which is optimal), we get

>>> SubsetCoverZ3BruteForce().solve(
...     SubsetCoverParameters(
...         num_elements=7,
...         choice_set_size=3,
...         hit_set_size=2,
...         num_choice_sets=7))
[(0, 1, 3), (0, 2, 4), (0, 5, 6), (1, 2, 6), (1, 4, 5), (2, 3, 5), (3, 4, 6)]
SubsetCoverSolution(status=<SolveStatus.SOLVED: 1>, solve_time_seconds=0.018305063247680664)

Interestingly, Z3 refuses to solve marginally larger instances. For instance, I tried the following, and Z3 times out around $n = 13$, where the model must enumerate thousands of candidate choice sets:

from math import comb

for n in range(8, 16):
    k = int(n / 2)
    l = 3
    max_num_sets = int(2 * comb(n, l) / comb(k, l))
    params = SubsetCoverParameters(
        num_elements=n,
        choice_set_size=k,
        hit_set_size=l,
        num_choice_sets=max_num_sets)
    print_table(
        params, SubsetCoverZ3BruteForce().solve(params), header=(n == 8))

After taking a long time to generate the larger models, Z3 exceeds my 15 minute time limit, suggesting exponential growth:

status               solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SolveStatus.SOLVED             0.0271         8              4               3              28
SolveStatus.SOLVED             0.0346         9              4               3              42
SolveStatus.SOLVED             0.0735        10              5               3              24
SolveStatus.SOLVED             0.1725        11              5               3              33
SolveStatus.SOLVED           386.7376        12              6               3              22
SolveStatus.UNKNOWN          900.1419        13              6               3              28
SolveStatus.UNKNOWN          900.0160        14              7               3              20
SolveStatus.UNKNOWN          900.0794        15              7               3              26

An ILP model

Next we'll see an ILP model for the same problem. Note there are two reasons I expect the ILP model to fall short. First, the best solver I have access to is SCIP, which, despite being quite good, is in my experience about an order of magnitude slower than commercial alternatives like Gurobi. Second, I think this sort of problem seems to not be very well suited to ILPs.
It would take quite a bit longer to explain why (maybe another post, if you're interested), but in short, well-formed ILPs have easily found feasible solutions (this one does not), and the LP relaxation of the problem should be as tight as possible. I don't think my formulation is very tight, but it's possible there is a better formulation. Anyway, the primary difference in my ILP model from the brute force is that the number of choice sets is fixed in advance, and the members of the choice sets are model variables. This allows us to avoid enumerating all choice sets in the model. In particular, $M_{i,j}$ is a binary variable that is 1 if and only if element $i$ is assigned to be in choice set $j$. And $H_{t,j}$ is 1 if and only if the hit set $t$ is a subset of choice set $j$. Here $j$ is an index over the choice sets, rather than the set itself, because we don't know what elements are in set $j$ while building the model. For the constraints, each choice set must have size $k$:

$\sum_i M_{i,j} = k$ for each $j$.

Each hit set must be hit by at least one choice set:

$\sum_j H_{t,j} \geq 1$ for each $t$.

Now the tricky constraint. If a hit set $t$ is hit by a specific choice set $j$ (i.e., $H_{t,j} = 1$), then all the elements in $t$ must also be members of set $j$:

$\sum_{i \in t} M_{i,j} \geq l \cdot H_{t,j}$.

This one works by the fact that the left-hand side (LHS) is bounded from below by 0 and bounded from above by $l$. Then $H_{t,j}$ acts as a switch. If it is 0, then the constraint is vacuous since the LHS is always non-negative. If $H_{t,j} = 1$, then the right-hand side (RHS) is $l$, and this forces all $l$ variables on the LHS to be 1 to achieve it. Because we fixed the number of choice sets as a parameter, the objective is the constant 1, and all we're doing is looking for a feasible solution. The full code is here.
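Since the full code is only linked, here is a dependency-free sanity checker for the three constraint families above. The function name and the `M`/`H` dictionary encoding are my own illustration, not the post's code; it verifies a candidate 0/1 assignment the way the ILP constraints would.

```python
from itertools import combinations

def ilp_constraints_satisfied(n, k, l, num_sets, M, H):
    """Check a 0/1 assignment against the ILP's three constraint families.

    M[i, j] == 1 means element i is in choice set j.
    H[t, j] == 1 means hit set t (a tuple of elements) is hit by choice set j.
    """
    hit_sets = list(combinations(range(n), l))
    # Each choice set has exactly k members.
    for j in range(num_sets):
        if sum(M[i, j] for i in range(n)) != k:
            return False
    for t in hit_sets:
        # Each hit set is hit by at least one choice set.
        if sum(H[t, j] for j in range(num_sets)) < 1:
            return False
        # Switch constraint: H[t, j] = 1 forces every element of t into set j.
        for j in range(num_sets):
            if sum(M[i, j] for i in t) < l * H[t, j]:
                return False
    return True
```

For a tiny valid cover such as the three pairs of a triangle (n = 3, k = 2, l = 2), this returns True, and it returns False as soon as some `H[t, j]` claims coverage by a set that does not actually contain `t`.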
On the same simple example as the brute force,

>>> SubsetCoverILP().solve(
...     SubsetCoverParameters(
...         num_elements=7,
...         choice_set_size=3,
...         hit_set_size=2,
...         num_choice_sets=7))
[(0, 1, 3), (0, 2, 6), (0, 4, 5), (1, 2, 4), (1, 5, 6), (2, 3, 5), (3, 4, 6)]
SubsetCoverSolution(status=<SolveStatus.SOLVED: 1>, solve_time_seconds=0.1065816879272461)

It finds a solution of the same size in about 10x the runtime of the brute force Z3 model, though still well under one second. On the "scaling" example, it fares much worse. With a timeout of 15 minutes, it solves $n = 8$ decently fast, $n = 9$ and $n = 12$ slowly, and times out on the rest.

status               solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SolveStatus.SOLVED             1.9969         8              4               3              28
SolveStatus.SOLVED           306.4089         9              4               3              42
SolveStatus.UNKNOWN          899.8842        10              5               3              24
SolveStatus.UNKNOWN          899.4849        11              5               3              33
SolveStatus.SOLVED           406.9502        12              6               3              22
SolveStatus.UNKNOWN          902.7807        13              6               3              28
SolveStatus.UNKNOWN          900.0826        14              7               3              20
SolveStatus.UNKNOWN          900.0731        15              7               3              26

A Z3 Boolean Cardinality Model

The next model uses Z3. It keeps the concept of Member and Hit variables, but they are boolean instead of integer. It also replaces the linear constraints with implications. The constraint that forces a Hit set's variable to be true when some Choice set contains all its elements is, for each choice set $j$,

$\left( \bigwedge_{i \in t} \mathrm{Member}_{i,j} \right) \Rightarrow \mathrm{Hit}_t$.

Conversely, a Hit set's variable being true implies its members are in some choice set:

$\mathrm{Hit}_t \Rightarrow \bigvee_j \bigwedge_{i \in t} \mathrm{Member}_{i,j}$.

Finally, we again use boolean cardinality constraints AtMost and AtLeast so that each choice set has the right size.
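The post links to this model's code rather than showing it; to make the shape of the two implication families concrete, here is a dependency-free sketch that builds them as plain tuples. In the real model, each `('implies', ...)` tuple would become a `z3.Implies` over Member and Hit booleans; the helper name and tuple encoding are mine.

```python
from itertools import combinations

def build_boolean_implications(n, l, num_sets):
    """Enumerate both implication families of the boolean model.

    Variables are represented symbolically: ('member', i, j) and ('hit', t).
    """
    implications = []
    for t in combinations(range(n), l):
        hit = ('hit', t)
        # The membership conjunction for hit set t in each choice set j.
        memberships = [
            tuple(('member', i, j) for i in t) for j in range(num_sets)
        ]
        # If some choice set j contains all of t, then t's hit variable is true.
        for conj in memberships:
            implications.append(('implies', ('and',) + conj, hit))
        # Conversely, the hit variable implies membership in some choice set.
        implications.append(
            ('implies', hit, ('or',) + tuple(('and',) + c for c in memberships)))
    return implications
```

For `n = 4, l = 2, num_sets = 2` this yields, for each of the six hit sets, two forward implications (one per choice set) plus one converse implication.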
The results are much better than the ILP: it solves all of the instances in under 3 seconds.

status               solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SolveStatus.SOLVED             0.0874         8              4               3              28
SolveStatus.SOLVED             0.1861         9              4               3              42
SolveStatus.SOLVED             0.1393        10              5               3              24
SolveStatus.SOLVED             0.2845        11              5               3              33
SolveStatus.SOLVED             0.2032        12              6               3              22
SolveStatus.SOLVED             1.3661        13              6               3              28
SolveStatus.SOLVED             0.8639        14              7               3              20
SolveStatus.SOLVED             2.4877        15              7               3              26

A Z3 integer model

Z3 supports implications on integer equalities, so we can try a model that leverages this by essentially converting the boolean model to one where the variables are 0-1 integers, and the constraints are implications on equality of integer formulas (all of the form "variable = 1"). I expect this to perform worse than the boolean model, even though the formulation is almost identical. The details of the model are here, and it's so similar to the boolean model above that it needs no extra explanation. The runtime is much worse, but surprisingly it still does better than the ILP model.

status               solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SolveStatus.SOLVED             2.1129         8              4               3              28
SolveStatus.SOLVED            14.8728         9              4               3              42
SolveStatus.SOLVED             7.6247        10              5               3              24
SolveStatus.SOLVED            25.0607        11              5               3              33
SolveStatus.SOLVED            30.5626        12              6               3              22
SolveStatus.SOLVED            63.2780        13              6               3              28
SolveStatus.SOLVED            57.0777        14              7               3              20
SolveStatus.SOLVED           394.5060        15              7               3              26

Harder instances

So far all the instances we've been giving the solvers are "easy" in a sense. In particular, we've guaranteed there's a feasible solution, and it's easy to find. We're giving roughly twice as many sets as are needed. There are two ways to make this problem harder. One is to test on unsatisfiable instances, which can be harder because the solver has to prove it can't work.
Another is to test on satisfiable instances that are hard to find, such as satisfiable instances where the true optimal number of choice sets is given as the input parameter. The hardest unsatisfiable instances are also the ones where the number of choice sets allowed is one less than optimal. Let's test those situations. Since $M(7, 3, 2) = 7$, we can try with 7 choice sets and 6 choice sets. For 7 choice sets (the optimal value), all the solvers do relatively well:

method                    status      solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SubsetCoverILP            SOLVED              0.0843           7              3               2               7
SubsetCoverZ3Integer      SOLVED              0.0938           7              3               2               7
SubsetCoverZ3BruteForce   SOLVED              0.0197           7              3               2               7
SubsetCoverZ3Cardinality  SOLVED              0.0208           7              3               2               7

For 6, the ILP struggles to prove it's infeasible, and the others do comparatively much better (at least 17x better):

method                    status      solve_time_seconds  num_elements  choice_set_size  hit_set_size  num_choice_sets
SubsetCoverILP            INFEASIBLE        120.8593           7              3               2               6
SubsetCoverZ3Integer      INFEASIBLE          3.0792           7              3               2               6
SubsetCoverZ3BruteForce   INFEASIBLE          0.3384           7              3               2               6
SubsetCoverZ3Cardinality  INFEASIBLE          7.5781           7              3               2               6

This seems like hard evidence that Z3 is better than ILPs for this problem (and it is), but note that the same test on the next size up, $n = 8$, fails for all models. They can all quickly find a cover of the optimal size, but they time out after twenty minutes when asked to decide whether one fewer choice set suffices. Note that $k = 3$, $l = 2$ are the least complex choices for the other parameters, so it seems like there's not much hope of computing $M(n, k, l)$ exactly for any seriously large parameters.

Thoughts

These experiments suggest what SMT solvers can offer above and beyond ILP solvers. Disjunctions and implications are notoriously hard to model in an ILP. You often need to add additional special variables, or do tricks like the one I did that only work in some situations and which can mess with the efficiency of the solver. With SMT, implications are trivial to model, and natively supported by the solver.
Aside from reading everything I could find on Z3, there seems to be little advice on modeling to help the solver run faster. There is a ton of literature for this in ILP solvers, but if any readers see obvious problems with my SMT models, please chime in! I'd love to hear from you. Even without that, I am pretty impressed by how fast the solves finish for this subset cover problem (which this experiment has shown me is apparently a very hard problem). However, there's an elephant in the room. These models are all satisfiability/feasibility checks on a given solution. What is not tested here is optimization, in the sense of having the number of choice sets used be minimized directly by the solver. In a few experiments on even simpler models, Z3 optimization is quite slow. And while I know how I'd model the ILP version of the optimization problem, given that it's quite slow to find a feasible instance when the optimal number of sets is given as a parameter, it seems unlikely that it will be fast when asked to optimize. I will have to try that another time to be sure. Also, I'd like to test the ILP models on Gurobi, but I don't have a personal license. There's also the possibility that I can come up with a much better ILP formulation, say, with a tighter LP relaxation. But these will have to wait for another time. In the end, this experiment has given me some more food for thought, and concrete first-hand experience, on the use of SMT solvers.
https://jeremykun.com/category/general/
Hello TOMITA,

If the file exists in the same package as the Bean, then just do:

Properties props = new Properties();
try {
    InputStream propsStream = this.getClass().getResourceAsStream("config.txt");
    if (propsStream != null) {
        props.load(propsStream);
        propsStream.close();
    }
} catch (IOException e) {
    System.err.println("Caught IOException: " + e.getMessage());
}

When you use Thread.currentThread().getContextClassLoader().getResourceAsStream("config.txt"), you run the risk of finding a file with the same name somewhere earlier in the classpath, although it should have been able to find your file when one of the same name did not exist in $CATALINA_HOME/bin.

Jake

Tuesday, August 13, 2002, 10:54:15 AM, you wrote:

TLC> My bean is in WEB-INF/classes....

TLC> Jacob Kjome <hoju@visi.com>
TLC> 13/08/2002 10:43 a.m.
TLC> Please respond to Tomcat Users List
TLC> To: Tomcat Users List <tomcat-user@jakarta.apache.org>
TLC> cc:
TLC> Subject: Re[2]: Quick Question

TLC> Hello TOMITA,
TLC> Where does your Bean exist? Is it in one of Tomcat's classloaders, or
TLC> is it running out of the WEB-INF/classes or WEB-INF/lib folder of your
TLC> webapp? I'm guessing that it is in one of Tomcat's classloaders,
TLC> meaning $CATALINA_HOME/common/lib, server/lib, or lib (shared/lib in
TLC> Tomcat-4.1.x).
TLC> Those classloaders can't see the individual webapp classloaders.
TLC> However, libraries in your webapp *can* see Tomcat's public
TLC> classloaders (all but server/lib, server/classes).
TLC> You may have to rearrange the location of your libraries.
TLC> Jake

TLC> Tuesday, August 13, 2002, 9:29:19 AM, you wrote:

TLC>> Hi all,
TLC>> I'm trying to resolve this problem with all the solutions that you gave
TLC>> me, but it doesn't work...
TLC>> This is what I did:
TLC>> in my java bean (not a servlet), I have this code:
TLC>> public class DbBean {
TLC>> public int Connect() {
TLC>> InputStream is =
TLC>> or
TLC>> 1)..
TLC>> The txt file is in "WEB-INF/classes/beans...", because "DbBean" is in
TLC>> a package called "beans", and I start tomcat from TOMCAT_HOME/bin..
TLC>> When I load the jsp, the method Connect of the DbBean (java bean) returned
TLC>> 0, which means the InputStream is null, but if I put the txt file in
TLC>> TOMCAT_HOME/bin, I had no problem...., the method returned 1.... why?
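The behavior described in this thread follows from how Class.getResourceAsStream resolves names: a name without a leading slash is interpreted relative to the class's package, while a name with a leading slash (or a ClassLoader lookup) is resolved from the classpath root. A small sketch of that resolution rule (the helper class and method are illustrative, not from the thread):

```java
// Mimics the name-resolution rule of Class.getResourceAsStream: relative
// names are prefixed with the package path; "/name" is taken from the root.
public class ResourceNames {
    static String resolve(Class<?> c, String name) {
        if (name.startsWith("/")) {
            return name.substring(1); // absolute: resolved from classpath root
        }
        String pkg = c.getPackageName(); // e.g. "beans" for the DbBean class
        return pkg.isEmpty() ? name : pkg.replace('.', '/') + "/" + name;
    }

    public static void main(String[] args) {
        // A class in java.lang looking up "config.txt" really asks the
        // classloader for "java/lang/config.txt":
        System.out.println(resolve(String.class, "config.txt"));
    }
}
```

So a DbBean in WEB-INF/classes/beans finds `getClass().getResourceAsStream("config.txt")` when the file sits next to the class in the beans directory, whereas a context-classloader lookup with the bare name "config.txt" expects the file at the classpath root and would need "beans/config.txt" instead.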
http://mail-archives.apache.org/mod_mbox/tomcat-users/200208.mbox/%3C16988268503.20020813110943@visi.com%3E
22 comments:

You should correct your code to com.adobe instead of com.apple. Everything else was fine, thanks.

Interestingly, when I go to the directory, there's nothing there. I'm not sure what is starting "Updater". (When I inspect the process in "Activity Monitor", it does say launchd is responsible, so that's weird.)

@trickards thanks for the heads up. Corrected.

On my Lion + Photoshop CS5 install it was called com.adobe.AAM.Updater-1.0.plist. Seems to have worked though. Thanks!

FYI, if you want an easier way to do this, download Lingon. It shows you what's running, what's being loaded, and allows you to enable/disable items. Full control for your Mac. It's free/open source, too.

It didn't work for me, any idea why? Running Lion and CS5. I copied and pasted it exactly, so I'm not sure what the problem is; I am very new to this kind of thing. Help?

Christina, I had the same problem and solved it. My Mac is also running Lion with CS5. Go manually into Library, then into LaunchAgents. You'll find the file called com.adobe.AAM.Updater-1.0.plist, as RH27 said. You can manually delete it.

A couple of revisions for 2012:
— The updater now uses namespace com.adobe.AAM instead of com.adobe.ARM;
— It might be installed for every user on your system, not just for the current one; go to /Library/LaunchAgents instead and use sudo to delete the plist.

Bottom line:

cd ~/Library/LaunchAgents
launchctl remove `basename com.adobe.ARM.*.plist`
launchctl remove `basename com.adobe.AAM.*.plist`
rm com.adobe.ARM.*
rm com.adobe.AAM.*
cd /Library/LaunchAgents
launchctl remove `basename com.adobe.ARM.*.plist`
launchctl remove `basename com.adobe.AAM.*.plist`
sudo rm com.adobe.ARM.*
sudo rm com.adobe.AAM.*

I had two ARM files in ~/Library/LaunchAgents/.
I had to modify the launchctl command to be:

for f in `basename -s .plist com.adobe.ARM.*`; do launchctl remove $f; done

On 10.8.2 w/ CS6 Master, the file name was the AAM one but the actual launchd process name was different -- the file name was com.adobe.AAM.Updater-1.0.plist but the process name in the plist was com.adobe.AAM.Scheduler-1.0. Thus, if anyone gets a "no process found" when trying to remove, take a peek at the plist to check the process name. In my case doing this:

launchctl remove com.adobe.AAM.Scheduler-1.0
rm com.adobe.AAM.*

works! Thanks for the tip.

Thanks for posting this. I used launchctl list and found a weird process called: com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae. I removed this one, but I'm on the lookout for other weird ones.

THANK YOU

Thanks, but I have serious font problems. Can I undo this? Please help me!!

I did this and it worked:

launchctl list | grep adobe

then copy the process name and

launchctl remove PROCESS_NAME

Thanks!!!

Another way is to navigate to ~/Library/LaunchAgents and delete the com.adobe.AAM.Updater.(version may vary).plist. It works! Thx!

What I did:

cd ~/Library/LaunchAgents/
rm com.adobe.*
cd /Library/LaunchAgents/
sudo su
rm com.adobe.A*

Note: The launchctl command didn't make any changes.

I have lunchy installed. To remove the updater, I did this in a terminal window:

$ lunchy ls com.adobe*
com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae
$ lunchy uninstall com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae
stopped com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae
uninstalled com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae

That's done the trick for now. Thanks for the tip. I hate how apple keeps bullshitting its users.

Has the command text changed? I am trying desperately to stop the Adobe updater as well as the adobe genuine thing that keeps popping up. Any idea on how to stop it all?
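Pulling the thread's commands together, here is a defensive sketch (bash assumed; the remove_adobe_agents helper name is mine) that handles both the ARM and AAM variants, reads the real launchd label from the plist when possible, and works for any LaunchAgents directory you pass it (run with sudo for /Library/LaunchAgents):

```shell
# Unload and delete Adobe ARM/AAM updater launch agents found in the
# given LaunchAgents directories, e.g.:
#   remove_adobe_agents "$HOME/Library/LaunchAgents" /Library/LaunchAgents
remove_adobe_agents() {
  for dir in "$@"; do
    for plist in "$dir"/com.adobe.ARM.*.plist "$dir"/com.adobe.AAM.*.plist; do
      [ -e "$plist" ] || continue
      # The launchd label can differ from the file name (e.g. the file
      # com.adobe.AAM.Updater-1.0.plist holding com.adobe.AAM.Scheduler-1.0),
      # so read the Label key when possible and fall back to the file name.
      label=$(defaults read "$plist" Label 2>/dev/null) \
        || label=$(basename "$plist" .plist)
      launchctl remove "$label" 2>/dev/null || true
      rm -f "$plist"
    done
  done
}
```

Both the `defaults read` form and the exact glob patterns are assumptions based on the comments above; check what's actually in your LaunchAgents directories first with `ls ~/Library/LaunchAgents /Library/LaunchAgents`.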
Some really great tips here. Thanks for the post!
https://lifecs.likai.org/2011/02/real-way-to-disable-adobe-updater-from.html
Create a GUI with Java

- Creating an Application
- Working with Components
- Lists
- The Java Class Library
- Summary
- Q and A
- Quiz
- Certification Practice
- Exercises

Walks you through how to use Swing to create applications that feature key GUI components.

Most computer users today expect software to feature a graphical user interface (GUI) with a variety of widgets such as text boxes, sliders, and scrollbars. The Java Class Library includes Swing, a set of packages that enable Java programs to offer a sophisticated GUI and collect user input with the mouse, keyboard, and other input devices. In this lesson, you will use Swing to create applications that feature these GUI components:

- Frames: Windows with a title bar; menu bar; and Maximize, Minimize, and Close buttons
- Containers: Components that hold other components
- Buttons: Clickable rectangles with text or graphics indicating their purpose
- Labels: Text or graphics that provide information
- Text fields and text areas: Windows that accept keyboard input of one line or multiple lines
- Drop-down lists: Groups of related items that are selected from drop-down menus or scrolling windows
- Check boxes and radio buttons: Small squares or circles that can be selected or deselected
- Image icons: Graphics added to buttons, labels, and other components
- Scrolling panes: Panels for components too big for a user interface that can be accessed in full by using a scrollbar

Swing is the most extensive set of related classes introduced thus far in the book. Learning to create graphical applications with these packages is good practice for utilizing a class library in Java, which is something you'll do often in your own projects.
Creating an Application Swing enables the creation of a Java program with an interface that adopts the style of the native operating system, such as Windows or Linux, or a style that’s unique to Java. Each of these styles is called a look and feel because it describes both the appearance of the interface and how its components function when they are used. Java offers a distinctive look and feel called Nimbus that’s unique to the language. Swing components are part of the javax.swing package, a standard part of the Java Class Library. To refer to a Swing class using its short name—without referring to the package, in other words—you must make it available with an import statement or use a catchall statement such as the following: import javax.swing.*; Two other packages that support GUI programming are java.awt, the Abstract Window Toolkit (AWT), and java.awt.event, event-handling classes that handle user input. When you use a Swing component, you work with objects of that component’s class. You create the component by calling its constructor and then calling methods of the component as needed for proper setup. All Swing components are subclasses of the abstract class JComponent. It includes methods to set a component’s size, change the background color, define the font used for any displayed text, and set up tooltips. A tooltip is explanatory text that appears when you hover the mouse over the component for a few seconds. Before components can be displayed in a user interface, they must be added to a container, a component that can hold other components. Swing containers are subclasses of java.awt.Container. This class includes methods to add and remove components from a container, arrange components using an object called a layout manager, and set up borders around the edges of a container. Containers often can be placed in other containers. 
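As a quick illustration, the sketch below exercises a few of those inherited JComponent methods on a button (the component choice and values are illustrative; the headless property is set only so the example also runs without a display):

```java
import java.awt.Color;
import java.awt.Font;
import javax.swing.JButton;

public class ComponentSetup {
    public static void main(String[] args) {
        // Lightweight components such as JButton can be created headless.
        System.setProperty("java.awt.headless", "true");
        JButton save = new JButton("Save");
        // Methods inherited from JComponent:
        save.setFont(new Font("Dialog", Font.BOLD, 14));
        save.setBackground(Color.LIGHT_GRAY);
        save.setToolTipText("Writes the current document to disk");
        System.out.println(save.getToolTipText());
    }
}
```

The tooltip text set here appears automatically when the user hovers the mouse over the button once it is displayed inside a container.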
Creating a Graphical User Interface The first step in creating a Swing application is to create a class that represents the main GUI. An object of this class serves as a container that holds all the other components to be displayed. In many projects, the main interface object is a frame (the JFrame class in the javax.swing package). A frame is a window shown whenever you open an application on your computer. A frame has a title bar; Maximize, Minimize, and Close buttons; and other features. In a graphical environment such as Windows or macOS, users expect to be able to move, resize, and close the windows of programs. One way to create a graphical Java application is to make the interface a subclass of JFrame, as in the following class declaration: public class FeedReader extends JFrame { // body of class } The constructor of the class should handle the following tasks: Call a superclass constructor with super() to give the frame a title and handle other setup procedures. Set the size of the frame’s window, either by specifying the width and height in pixels or by letting Swing choose the size. Decide what to do if a user closes the window. Display the frame. The JFrame class has the simple constructors JFrame() and JFrame(String). One sets the frame’s title bar to the specified text, and the other leaves the title bar empty. You also can set the title by calling the frame’s setTitle(String) method. The size of a frame can be established by calling the setSize(int, int) method with the width and height as arguments. A frame’s size is indicated in pixels, so, for example, calling setSize(650, 550) creates a frame 650 pixels wide and 550 pixels tall. Another way to set a frame’s size is to fill the frame with the components it will contain and then call the frame’s pack() method. This resizes the frame based on the preferred size of the components inside it. 
If the frame is bigger than it needs to be, pack() shrinks it to the minimum size required to display the components. If the frame is too small (or the size has not been set), pack() expands it to the required size. Frames are invisible when they are created. You can make them visible by calling the frame’s setVisible(boolean) method with the literal true as an argument. If you want a frame to be displayed when it is created, call one of these methods in the constructor. You also can leave the frame invisible and require any class that uses the frame to make it visible by calling setVisible(true). As you probably have surmised, calling setVisible(false) makes a frame invisible. When a frame is displayed, the default behavior is for it to be positioned in the upper-left corner of the computer’s desktop. You can specify a different location by calling the setBounds(int, int, int, int) method. The first two arguments to this method are the (x, y) position of the frame’s upper-left corner on the desktop. The last two arguments are the frame’s width and height. Another way to set the bounds is with a Rectangle object from the java.awt package. Create the rectangle with the Rectangle(int, int, int, int) constructor. The first two arguments are the (x, y) position of the upper-left corner. The next two are the width and height. Call setBounds(Rectangle) to draw the frame at that spot. The following class represents a 300×100 frame with “Edit Payroll” in the title bar: public class Payroll extends JFrame { public Payroll() { super("Edit Payroll"); setSize(300, 100); setVisible(true); } } Every frame has Maximize, Minimize, and Close buttons on the title bar that the user can control—the same controls present in the interface of other software running on your computer. There’s a wrinkle to using frames that you might not expect: The normal behavior when a frame is closed is for the application to keep running. 
When a frame serves as a program’s main GUI, this leaves a user with no way to stop the program. To change this, you must call a frame’s setDefaultCloseOperation(int) method with one of four static variables as an argument: EXIT_ON_CLOSE: Exits the application when the frame is closed DISPOSE_ON_CLOSE: Closes the frame, removes the frame object from Java Virtual Machine (JVM) memory, and keeps running the application DO_NOTHING_ON_CLOSE: Keeps the frame open and continues running HIDE_ON_CLOSE: Closes the frame and continues running These variables are part of the JFrame class because it implements the WindowConstants interface. To prevent a user from closing a frame, add the following statement to the frame’s constructor method: setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE); If you are creating a frame to serve as an application’s main user interface, the expected behavior is probably EXIT_ON_CLOSE, which shuts down the application along with the frame. As mentioned earlier, you can customize the overall appearance of a user interface in Java by designating a look and feel. The UIManager class in the javax.swing package manages this aspect of Swing. To set the look and feel, call the class method setLookAndFeel(String) with the name of the look and feel’s class as the argument. Here’s how to choose the Nimbus look and feel: UIManager.setLookAndFeel( "javax.swing.plaf.nimbus.NimbusLookAndFeel" ); This method call should be contained within a try-catch block because it might generate five different exceptions. Catching the Exception class and ignoring it causes the default look and feel to be used in the unlikely circumstance that Nimbus can’t be chosen properly. Developing a Framework This lesson’s first project is an application that displays a frame containing no other interface components. In NetBeans, create a new Java file with the class name SimpleFrame and the package name com.java21days and then enter Listing 9.1 as the source code. 
This simple application displays a frame 300×100 pixels in size and can serve as a framework—pun unavoidable—for any applications you create that use a GUI.

LISTING 9.1 The Full Text of SimpleFrame.java

 1: package com.java21days;
 2:
 3: import javax.swing.*;
 4:
 5: public class SimpleFrame extends JFrame {
 6:     public SimpleFrame() {
 7:         super("Frame Title");
 8:         setSize(300, 100);
 9:         setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
10:         setVisible(true);
11:     }
12:
13:     private static void setLookAndFeel() {
14:         try {
15:             UIManager.setLookAndFeel(
16:                 "javax.swing.plaf.nimbus.NimbusLookAndFeel"
17:             );
18:         } catch (Exception exc) {
19:             // ignore error
20:         }
21:     }
22:
23:     public static void main(String[] arguments) {
24:         setLookAndFeel();
25:         SimpleFrame sf = new SimpleFrame();
26:     }
27: }

When you compile and run the application, you should see the frame displayed in Figure 9.1.

FIGURE 9.1 Displaying a frame.

The SimpleFrame application isn't much to look at. The GUI contains no components, aside from the standard Minimize, Maximize, and Close (X) buttons on the title bar, as shown in Figure 9.1. You'll add components later in this lesson. In the application, a SimpleFrame object is created in the main() method in lines 23–26. If you had not displayed the frame when it was constructed, you could call sf.setVisible(true) in the main() method to display the frame. Nimbus is set as the frame's look and feel in lines 15–17. The work involved in creating the frame's user interface takes place in the SimpleFrame() constructor in lines 6–11. Components can be created and added to the frame within this constructor.

Creating a Component

Creating a GUI is a great way to get experience working with objects in Java because each interface component is represented by its own class. To use an interface component in Java, you create an object of that component's class. You already have worked with the container class JFrame.
One of the simplest components to employ is JButton, the class that represents clickable buttons. In any program, buttons trigger actions. You could click Install to begin installing software, click a Run button to begin a new game of Angry Birds, click the Minimize button to prevent your boss from seeing Angry Birds running, and so on. A Swing button can feature a text label, a graphical icon, or both. You can use the following constructors for buttons: JButton(String): A button labeled with the specified text JButton(Icon): A button that displays the specified graphical icon JButton(String, Icon): A button with the specified text and graphical icon The following statements create three buttons with text labels: JButton play = new JButton("Play"); JButton stop = new JButton("Stop"); JButton rewind = new JButton("Rewind"); Graphical buttons with icons are covered later in this lesson. Adding Components to a Container Before you can display a user interface component such as a button in a Java program, you must add it to a container and display that container. To add a component to a container, call the container’s add(Component) method with the component as the argument. (All user interface components in Swing inherit from java.awt.Component.) The simplest Swing container is a panel (the JPanel class). The following example creates a button and adds it to a panel: JButton quit = new JButton("Quit"); JPanel panel = new JPanel(); panel.add(quit); Use the same technique to add components to frames and windows. The ButtonFrame class, shown in Listing 9.2, expands on the application framework created earlier in this lesson. A panel is created, three buttons are added to the panel, and then the panel is added to a frame. Enter the source code of Listing 9.2 into a new Java file called ButtonFrame in NetBeans, making sure to put it in the com.java21days package. 
LISTING 9.2 The Full Text of ButtonFrame.java

 1: package com.java21days;
 2:
 3: import javax.swing.*;
 4:
 5: public class ButtonFrame extends JFrame {
 6:     JButton load = new JButton("Load");
 7:     JButton save = new JButton("Save");
 8:     JButton unsubscribe = new JButton("Unsubscribe");
 9:
10:     public ButtonFrame() {
11:         super("Button Frame");
12:         setSize(340, 170);
13:         setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
14:         JPanel pane = new JPanel();
15:         pane.add(load);
16:         pane.add(save);
17:         pane.add(unsubscribe);
18:         add(pane);
19:         setVisible(true);
20:     }
21:
22:     private static void setLookAndFeel() {
23:         try {
24:             UIManager.setLookAndFeel(
25:                 "javax.swing.plaf.nimbus.NimbusLookAndFeel"
26:             );
27:         } catch (Exception exc) {
28:             System.out.println(exc.getMessage());
29:         }
30:     }
31:
32:     public static void main(String[] arguments) {
33:         setLookAndFeel();
34:         ButtonFrame bf = new ButtonFrame();
35:     }
36: }

When you run the application, a small frame opens that contains the three buttons, as shown in Figure 9.2.

FIGURE 9.2 The ButtonFrame application.

The ButtonFrame class has three instance variables: the load, save, and unsubscribe JButton objects. In lines 14–17 of Listing 9.2, a new JPanel object is created, and the three buttons are added to the panel by calls to its add(Component) method. When the panel contains all the buttons, the frame's own add(Component) method is called in line 18 with the panel as an argument, and the panel is added to the frame.
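The add(Component) mechanics shown in Listing 9.2 can be checked without ever showing a window, which is handy on machines without a display. The class name PanelCheck below is my own, not one of the book's examples:

```java
import javax.swing.JButton;
import javax.swing.JPanel;

public class PanelCheck {
    public static void main(String[] args) {
        // Run headless so no display server is required;
        // top-level windows would fail, but plain components are fine
        System.setProperty("java.awt.headless", "true");

        JButton load = new JButton("Load");
        JButton save = new JButton("Save");
        JButton unsubscribe = new JButton("Unsubscribe");

        JPanel pane = new JPanel();
        pane.add(load);
        pane.add(save);
        pane.add(unsubscribe);

        // The panel now manages three child components
        System.out.println(pane.getComponentCount()); // 3
        System.out.println(load.getText());           // Load
    }
}
```

The same pattern holds for any container: components keep their own state (such as the button label), while the container tracks its children.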
Source: https://www.informit.com/articles/article.aspx?p=2995363&seqNum=9
High variance is a problem with making only one decision tree. In this post, we will learn how to develop a random forest model in Python. We will use the cancer dataset from the pydataset module to classify whether a person's status is censored or dead based on several independent variables. The steps we need to perform to complete this task are defined below:

- Data preparation
- Model development and evaluation

Data Preparation

Below are some initial modules we need to complete all of the tasks for this project.

import pandas as pd
import numpy as np
from pydataset import data
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

We will now load our dataset "cancer" and drop any rows that contain NA using the .dropna() function.

df = data('cancer')
df = df.dropna()

Next, we need to separate our independent variables from our dependent variable. We will do this by making two datasets. The X dataset will contain all of our independent variables and the y dataset will contain our dependent variable. You can check the documentation for the dataset using the code data('cancer', show_doc=True).

Before we make the y dataset, we need to change the numerical values in the status variable to text. Doing this will aid in the interpretation of the results. If you look at the documentation of the dataset, you will see that a 1 in the status variable means censored while a 2 means dead. We will change the 1 to censored and the 2 to dead when we make the y dataset. This involves the use of the .replace() function. The code is below.

X = df[['time', 'age', 'sex', 'ph.ecog', 'ph.karno', 'pat.karno', 'meal.cal', 'wt.loss']]
df['status'] = df.status.replace(1, 'censored')
df['status'] = df.status.replace(2, 'dead')
y = df['status']

We can now proceed to model development.
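The .replace() recode used for the status variable is just a mapping from numeric codes to labels. The same idea can be sketched without pandas; the values below are made-up toy codes, not the actual cancer data:

```python
# Toy status codes: 1 = censored, 2 = dead (per the cancer dataset docs)
status_codes = [1, 2, 2, 1, 2]

# Map each numeric code to its text label
labels = {1: "censored", 2: "dead"}
y = [labels[code] for code in status_codes]

print(y)  # ['censored', 'dead', 'dead', 'censored', 'dead']
```

Recoding to text labels this way makes later outputs (confusion tables, classification reports) self-explanatory instead of showing bare 1s and 2s.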
Model Development and Evaluation

We will first make our train and test datasets. We will use a 70/30 split. Next, we initialize the actual random forest classifier. There are many options that can be set. For our purposes, we will set the number of trees to 100. Setting the random_state option is similar to setting a seed for the purpose of reproducibility. Below is the code.

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
h = RandomForestClassifier(n_estimators=100, random_state=1)

We can now fit our model with the .fit() function and test it with the .predict() function. The code is below.

h.fit(x_train, y_train)
y_pred = h.predict(x_test)

We will now print two tables. The first will provide the raw results for the classification using the .crosstab() function. The classification_report function will provide the various metrics used for determining the value of a classification model.

print(pd.crosstab(y_test, y_pred))
print(classification_report(y_test, y_pred))

Our overall accuracy is about 75%. How good this is depends on context. We are really good at predicting when people are dead but have much more trouble predicting whether people are censored.

Conclusion

This post provided an example of using random forest in Python. Through the use of a forest of trees, it is possible to get much more accurate results when a comparison is made to a single decision tree. This is one of many reasons for the use of random forest in machine learning.
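As a footnote to the evaluation step above: the crosstab output is simply a confusion matrix, and the overall accuracy is the diagonal total divided by the grand total. A small pure-Python illustration; the counts here are invented for the sketch, not the post's actual results:

```python
# Hypothetical confusion matrix: rows = actual, columns = predicted,
# in the order ['censored', 'dead']
confusion = [
    [10, 8],   # actual censored: 10 classified correctly, 8 not
    [4, 26],   # actual dead: 4 classified incorrectly, 26 correctly
]

# Accuracy = correct predictions (the diagonal) / all predictions
correct = sum(confusion[i][i] for i in range(len(confusion)))
total = sum(sum(row) for row in confusion)
accuracy = correct / total

print(round(accuracy, 2))  # 0.75
```

Notice how a decent overall accuracy can hide a weak class: here the "censored" row is barely better than a coin flip, which is exactly the pattern the post describes.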
https://educationalresearchtechniques.com/2019/04/01/random-forest-classification-with-python/?shared=email&msg=fail
Repository: -
Technical Reports: -
Code of Conduct: -

Registration of an identity, registration of a pod, > 1 pod per identity, moving a pod to another provider, changing the owner of a pod, making sure no links are broken. These are all related.

I couldn't find any terms related to Pod ownership. It is already a problem in the upcoming ACP spec: solid/authorization-panel#171

And I was thinking such terms could pave the way forward to standardising Pod registration/provisioning/transfer... I didn't find a whole lot that seems relevant on the Solid GitHub org either:

@csarven is it something that makes sense and what would be the best platform to discuss and/or bring it up?

<> <> <> .
<> <> "The person or other agent which owns this.\n For example, the owner of a file in a filesystem.\n There is a sense of right to control. Typically defaults to the agent who craeted\n something but can be changed." .
<> <> "owner"@en .
<> <> <> .

With domain solid:Pod, solid:webId & solid:WebID. Would solid:WebID not be a better fit as the range of an owner property, or as the domain of solid:oidcIssuer, instead of a vcard:Agent which might have a solid:webId or not?

@matthieubosquet If of interest: , or see links in .

As long as the needs are well documented and specs are referring to them, all fine. Not sure about a WebID class but can probably get more out of a property.

I'm not opposed to a "podOwner" property or "Pod" class -- and which namespace to put it under is the least of my concerns IMO -- but I'd like to hear more from people about what they think/expect the differences are with the root container (pim:Storage) or maybe even the server origin or root URI path.. re "controller" or "owner" etc., see links above. Plenty of existing discussion in chats/issues.. not worth repeating here unless there is some new information. tl;dr: both are fine as synonyms but can obviously differ in their meaning and purpose. Don't forget to throw in "creator" or "admin" or "authority" for fun and profit.
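For concreteness, an owner property along the lines discussed above might be declared like this in Turtle. To be clear, none of these terms (solid:owner, solid:Pod, solid:WebID with these semantics) exist in the published Solid vocabulary; this is only a sketch of the shape such a term could take:

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix solid: <http://www.w3.org/ns/solid/terms#> .

# Hypothetical: not part of the published Solid terms vocabulary
solid:owner
    a rdf:Property ;
    rdfs:label "owner"@en ;
    rdfs:comment "The agent with the right to control this Pod."@en ;
    rdfs:domain solid:Pod ;
    rdfs:range solid:WebID .
```

Pinning down the range (a WebID versus a vcard:Agent that may or may not have one) is exactly the modelling question raised in the chat above.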
https://gitter.im/solid/specification?at=602bee1593539e23437c3a07
Comments on Javarevisited: How to check or detect duplicate elements in Array in Java (blog by Javin Paul)

Prafful: @Samruddhi Jadhav int[] duplicateArray = { 1, 2, 3, 3, 5, 5 }; is returning 4 for duplicateArray.

Anonymous: I'm having a conflict with using a text file as input, containing the same array elements. The problem is that the output shows that both duplicate elements are considered false (e.g. false output for one=one).

Samruddhi Jadhav: Hello, I found a way to get the duplicate number in an array without using the Java API.

public class DuplicateFinder {
    public void isDuplicate(int[] inputArray) {
        int actualSum = 0, desiredSum = 0, difference = 0, duplicateNumber = 0;
        for (int i = 0; i < inputArray.length; i++) {
            actualSum = actualSum + inputArray[i];
        }
        desiredSum = ( ...

Vijaya Kumar Bathini: I feel this can help:

public class ArrayDuplicates {
    public static void main(String[] args) {
        int[] arrayValues = new int[4];
        Scanner sc = new Scanner(System.in);
        for (int i = 0; i < arrayValues.length; i++) {
            arrayValues[i] = sc.nextInt();
        }
        Set setValue = new HashSet();
        for (int i : arrayValues) { ...

R.JEYA NANDHANA: Thanks for your answer. It's very helpful.

Piotr Chlebda: Hey guys, how about this:

public static boolean checkDuplicateUsingAdd(String[] input) {
    Map elementMap = new HashMap();
    for (String str : input) {
        Boolean wasInserted = elementMap.get(str);
        if (wasInserted != null) {
            return false;
        }
        elementMap.put(str, Boolean.TRUE);
    } ...

Krishna: Java program to print integers which occur thrice in the array:

package com.test;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ArrayTest {
    public static void main(String[] args) {
        int[] array = { ...

sana kakou: But if it is " one" or " one " or "one " it will not be found; that's why I think we have to use trim if we suppose that " one" is equal to "one". Thanks for the tuto ;)

Nat: public static boolean bruteforce(String[] input) {
    for (int i = 0; i < input.length; i++) {
        for (int j = 0; j < input.length; j++) { ...

Javin: RC, you are correct, it can be like that, as michee already pointed out, but if you look I have put an extra condition i != j so as not to consider the same element.

RC: In the brute force method, why are you going over the whole array again in the inner loop? j should start at i + 1, not at zero, because all the elements before i + 1 have already been compared to the rest of the array.

Anonymous: The string is like this: "Hello abcdef ABCDEF 1234 12AB" and I need output like this: Hello a-f A-F 1-4 1-B. Please tell me if anyone knows how.

Anonymous: Without using the API you may sort the array with some fast algorithm and do a linear search for adjacent equal values.

Anonymous: If you don't want duplicates in an array, then convert the array into a Set. That way you don't need to check the array for duplicates, because an array backed by a Set doesn't contain repeated elements. I guess this is the best way to remove duplicates from an array.

michee: ps: my way is more optimal! :P

michee: You're right, I didn't see that 'i != j' part :) I don't do code review, I'm just a PHP dev learning Java, and I like it :)

Javin: michee, dude, you are picking it up very well, must be good at code review :) You are correct, it could start with i + 1, but if you look I have put an extra condition i != j so as not to consider the same element.

michee: Also in function bruteforce, I think j should start from i + 1, because every element will be equal to itself.

Javin: @michee, you pointed it out right, the if condition was incomplete; it should be inputSet.size() < inputList.size(). Corrected. Thanks for pointing it out.

Anonymous: Follow-up questions most likely will be: How to find the duplicate items in an array? (The actual items, not just confirmation that the array contains duplicates.) How to remove duplicates from a Java array? How to find the count of duplicates in a Java array? (e.g. how many times a particular element appears in the array.)

michee: if(inputSet.size() ... return true; } — I think that if condition is incomplete.

Stanley F.: Hi, I don't know whether it's a performant way, but the first method I thought of was: sort the array and then check if there are two equal items that directly follow each other.
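The Set-based idea that several commenters describe can be written as a complete, compilable check. This is my own sketch of that approach, not code from the original post:

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateCheck {
    // Returns true if the array contains at least one repeated value.
    // A HashSet rejects duplicates, so add() returning false means
    // the value was already seen.
    public static boolean containsDuplicate(int[] input) {
        Set<Integer> seen = new HashSet<>();
        for (int value : input) {
            if (!seen.add(value)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(containsDuplicate(new int[] { 1, 2, 3, 3, 5, 5 })); // true
        System.out.println(containsDuplicate(new int[] { 1, 2, 3 }));          // false
    }
}
```

Compared with the brute-force double loop (O(n^2)), this runs in O(n) time at the cost of O(n) extra memory, and it short-circuits as soon as the first duplicate is found.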
http://javarevisited.blogspot.com/feeds/1594195688447010264/comments/default
Modal from a modal (Mac)

[This is a cross-post from StackOverflow.] I have a Qt modal dialog (a complex form) that needs to itself pop up a modal dialog (a QMessageBox with setModal(true)). This modal should be on top of the parent modal form, and prevent interaction. This all works swimmingly up to a point - the second modal appears, and prevents interaction with the widgets in the parent. However, the parent form can still receive focus - that is, I can click in it, and the window receives focus (even if I can't interact with the widgets). This becomes a problem if you task away to another application at this point - when you task back, the QMessageBox is behind the parent, which has focus (and you can't interact with the parent). Basically, you have to move the parent to reveal the QMessageBox before dismissing it.

Is there a way to have a modal on top of a modal and prevent this problem on Mac OS with window focus and tasking away? (Not tested on other OSes, BTW.)

Example code to reproduce the problem (pushButton is just a standard push button):

Dialog.cpp

#include "dialog.h"
#include "ui_dialog.h"
#include <QMessageBox>

Dialog::Dialog(QWidget *parent) :
    QDialog(parent),
    ui(new Ui::Dialog)
{
    ui->setupUi(this);
    QObject::connect(ui->pushButton, &QPushButton::clicked,
                     this, &Dialog::showMeAModal);
}

Dialog::~Dialog()
{
    delete ui;
}

void Dialog::showMeAModal()
{
    QMessageBox box(this);
    box.setText(tr("modal"));
    box.setModal(true);
    box.setWindowFlags(box.windowFlags() | Qt::Popup);
    box.exec();
}

Dialog.h

#ifndef DIALOG_H
#define DIALOG_H

#include <QDialog>

namespace Ui {
class Dialog;
}

class Dialog : public QDialog
{
    Q_OBJECT

public:
    explicit Dialog(QWidget *parent = 0);
    ~Dialog();

public slots:
    void showMeAModal();

private:
    Ui::Dialog *ui;
};

#endif // DIALOG_H

MainWindow.cpp

#include "mainwindow.h"
#include "ui_mainwindow.h"

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
}

MainWindow::~MainWindow()
{
    delete ui;
}
main.cpp

#include "mainwindow.h"
#include <QApplication>
#include "dialog.h"

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();
    Dialog dialog(&w);
    dialog.setModal(true);
    dialog.show();
    return a.exec();
}

I've tried:

- Qt::Popup on the message box
- Changing the parent to MainWindow
- Qt::WindowStaysOnTopHint on the message box

...to no avail.

I tried your sample code and saw the problem you described only with a specific sequence of events. Your sample code is incomplete, but enough was there that I could fill in the rest, so I don't think this is critical. I am currently running OSX 10.10.5, XCode 7.2.1, Qt 5.6.0.

The sequence of events that creates this problem is as follows:

1. Click on the button to create the message box.
2. Click focus on the dialog below the message box so it has focus. The mainwindow cannot get focus in this program.
3. Click focus to some other program (i.e. a Finder window).
4. Click back to the program (anything other than the message box). The message box is now behind the dialog (wrong Z order).

My first thought on solving this would be related to keeping focus on the message box when it is visible, or maybe tapping into the QFocus events and using some combination of raise() or lower() to make sure the message box is always on top (these functions change the Z order). I didn't try any of this, but it is just an idea.

I think I found a solution to this problem. Try this:

void Dialog::showMeAModal()
{
    QMessageBox box(this);
    box.setText(tr("modal"));
    box.setModal(true);
    box.setWindowFlags(box.windowFlags() | Qt::Popup);
    box.setWindowModality(Qt::WindowModal);
    box.exec();
}

It looks a little different than a regular message box (quite cool actually) and your problem no longer exists.

Thanks, this is similar to the solution we came up with. The crucial line is:

box.setWindowModality(Qt::WindowModal);

This causes the QMessageBox to be shown as a Mac window drop-down. setModal and setWindowFlags seem to have no additional effect.
That said, this is a workaround with user interface impact - there does appear to be a bug in Qt that is causing this situation.

kshegunov (Qt Champions 2016): @KevinD said in Modal from a modal (Mac):

Hi, don't use these two:

box.setModal(true);
box.setWindowFlags(box.windowFlags() | Qt::Popup);

You're calling QDialog::exec, which is for modal-only dialogs, and Qt::Popup isn't for dialogs; leave the window flags be. If you prepare a MWE (the download-and-build type) I can test on Linux (I have no Mac, sorry). Also, you might consider filing a bug report if everything else fails.

Kind regards.
https://forum.qt.io/topic/72565/modal-from-a-modal-mac
Android Studio 0.5.8 has been released. The improvements include:

- Fixed a number of crash reports from recent canary builds
- Integrated two more IntelliJ EAP builds: 135.760 (release notes) and the earlier 135.689 (release notes)

Gradle

- Support for Gradle 1.12, and the new Android Gradle plugin 0.10.
- If you configure a "resource prefix" for your Android library (see the build system docs), in order to avoid accidental name clashes, Studio will flag all resources that do not conform to the given prefix, and it will also default newly suggested identifiers in the layout editor, create-resource dialogs, etc. to identifiers which begin with the prefix.

Layout Editor

- Support for "show in included", which lets you view and edit layouts that are included in other layouts "in context". Take for example the rating-bar layout from the Google I/O 2013 app: you can now edit this rating bar layout embedded within another layout which uses it, such that you can see how it appears in context. The outer layout is shown partially translucent to make it more obvious which parts of the layout are editable and part of this layout, and which parts are not. Note also that the Component Tree on the right will list the name of the surrounding layout. When you invoke Extract Include, the included layout is shown in the above way automatically.
- The layout editor and XML layout preview rendering now support "hover": as you move your mouse around, the view under the mouse is highlighted slightly and shown with a faint dashed border. This makes it easier to understand the structure of your layout without having to click to select each view.
- The frequently reported bug where using cut, copy and paste shortcut keys in the property sheet would operate on the whole widgets rather than the property sheet text has been fixed.
Lint

Several new checks:

- A layout include tag check which ensures that if you specify layout parameters on an include tag, you also specify layout_width and layout_height, since otherwise the other layout parameters will be ignored (helps uncover problems like this StackOverflow question).
- A couple of app compat library checks:
  - Ensures that when using the appcompat library, you call the right methods - e.g. getSupportActionBar() instead of getActionBar(). NOTE: This lint check may incorrectly report issues in projects which are not using AppCompat at all. This bug has been fixed and will appear in 0.5.9.
  - Ensures that your menu resource files are using the correct form of showAsAction. A frequent problem for developers manually adding or removing app compat dependencies was forgetting to change between android:showAsAction and app:showAsAction. Worse yet, using the app: namespace without appcompat could result in an aapt crash. Lint now validates these files.
- A locale folder check which ensures that you are using the correct ISO code for a couple of locales where there is a more modern ISO code but where that code is not yet the right one to use.
- A check to ensure that you are not calling WebView#addJavascriptInterface on platforms lower than API 17.
- A check which discourages use of signatureOrSystem-level permissions.
- Several checks that have only been available from the lint command line (because they rely on analyzing bytecode, which is not available inside Studio where there is no compile-on-save) have been ported to run incrementally in the editor in Studio. This includes the Parcel creator check, the view constructor check, the wrong draw/layout call check, and the valid fragment check.
Import

- When editing build.gradle files under the project root which have not been imported into the project, there is an editor banner warning you that the file is missing from the project and offering to import it.

Run Configurations

- Improved "reuse this device" handling: the device chooser is shown again if the set of available devices has changed.

Many, many bug fixes!

Latest comments (9, translated from Chinese):

- "I'll use it once the build-speed problem is solved. Using it now is a waste of life."
- "Version after version..."
- "I've been using IDEA all along. Android Studio isn't mature yet, but it's already quite good."
- "I'll keep using my IDEA."
- "Releases are speeding up; when will the stable version come out?"
- "I've always used IDEA."
- "Updated... again."
- Quoting "夏悸": "It lags terribly."
- "It lags terribly."
C# - Func

We learned in the previous section that a delegate can be defined as shown below.

public delegate int SomeOperation(int i, int j);

class Program
{
    static int Sum(int x, int y)
    {
        return x + y;
    }

    static void Main(string[] args)
    {
        SomeOperation add = Sum;
        int result = add(10, 10);
        Console.WriteLine(result);
    }
}

Instead of defining a custom delegate for this, you can use the built-in Func delegate type, which is defined in the System namespace as:

namespace System
{
    public delegate TResult Func<in T, out TResult>(T arg);
}

The last parameter in the angle brackets <> is considered the return type, and the remaining parameters are considered input parameter types, as shown in the following figure.

A Func delegate with two input parameters and one out parameter is represented as below. The following Func type delegate is the same as the above SomeOperation delegate, where it takes two input parameters of int type and returns a value of int type:

Func<int, int, int> sum;

You can assign any method to the above Func delegate that takes two int parameters and returns an int value. Now, you can use the Func delegate instead of the SomeOperation delegate in the first example.

class Program
{
    static int Sum(int x, int y)
    {
        return x + y;
    }

    static void Main(string[] args)
    {
        Func<int, int, int> add = Sum;
        int result = add(10, 10);
        Console.WriteLine(result);
    }
}

A Func delegate type can include 0 to 16 input parameters of different types. However, it must include one out parameter for the result. For example, the following Func delegate doesn't have any input parameters; it includes only an out parameter:

Func<int> getRandomNumber;

C# Func with an Anonymous Method

You can assign an anonymous method to a Func delegate by using the delegate keyword.

Func<int> getRandomNumber = delegate()
{
    Random rnd = new Random();
    return rnd.Next(1, 100);
};

Func with Lambda Expression

A Func delegate can also be used with a lambda expression, as shown below:

Func<int> getRandomNumber = () => new Random().Next(1, 100);
// Or
Func<int, int, int> Sum = (x, y) => x + y;

- Func is a built-in delegate type.
- A Func delegate type must return a value.
- A Func delegate type can have zero to 16 input parameters.
- A Func delegate does not allow ref and out parameters.
- A Func delegate type can be used with an anonymous method or lambda expression.
https://www.tutorialsteacher.com/csharp/csharp-func-delegate
Utility class for running Python commands from various parts of QGIS.

#include <qgspythonrunner.h>

There is no direct Python support in the core library, so it is expected that an application with Python support creates a subclass that implements the pure virtual function(s) during initialization. The static methods will then work as expected.

Added in QGIS v?

Definition at line 33 of file qgspythonrunner.h.

Protected constructor: can be instantiated only from children.

Eval a Python statement. Definition at line 42 of file qgspythonrunner.cpp.

Returns true if the runner has an instance (and thus is able to run commands). Definition at line 23 of file qgspythonrunner.cpp.

Execute a Python statement. Definition at line 28 of file qgspythonrunner.cpp.

Assign an instance of the Python runner so that run() can be used. This method should be called during app initialization. Takes ownership of the object; deletes the previous instance. Definition at line 55 of file qgspythonrunner.cpp.

Definition at line 63 of file qgspythonrunner.h.
https://qgis.org/api/3.4/classQgsPythonRunner.html
Working with Unit Tests in a Project or Solution

Discovering unit tests in a solution

dotCover adds the Unit Test Explorer window to Visual Studio.

The Unit Test Explorer allows you to do the following:

- Explore tests in the solution: browse all unit tests in a list or tree view, search tests and filter by a substring, regroup unit tests by project, namespace, category, etc.
- Navigate to the source code of any test or test class by double-clicking it in the view.
- Run, debug, or cover selected tests.
- Create unit test sessions from selected tests and test classes and/or add selected items to the current test session.
- Export all tests from the solution to a text, XML, or HTML file.

Executing and covering
http://www.jetbrains.com/help/dotcover/Unit_Testing_in_Solution.html
October 2017, Volume 32, Number 10

[C++]

From Algorithms to Coroutines in C++

By Kenny Kerr

There's a C++ Standard Library algorithm called iota that has always intrigued me. It has a curious name and an interesting function. The word iota is the name of a letter in the Greek alphabet. It's commonly used in English to mean a very small amount and often the negative, not the least amount, derived from a quote in the New Testament Book of Matthew. This idea of a very small amount speaks to the function of the iota algorithm. It's meant to fill a range with values that increase by a small amount, as the initial value is stored and then incremented until the range has been filled. Something like this:

#include <numeric>

int main()
{
    int range[10];

    // Range: Random missile launch codes
    std::iota(std::begin(range), std::end(range), 0);

    // Range: { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }
}

It's often said that C++ developers should expunge all naked for loops and replace them with algorithms. Certainly, the iota algorithm qualifies, as it takes the place of the for loop that any C or C++ developer has undoubtedly written thousands of times. You can imagine what your C++ Standard Library implementation might look like:

namespace std {
  template <typename Iterator, typename Type>
  void iota(Iterator first, Iterator last, Type value)
  {
    for (; first != last; ++first, ++value)
    {
      *first = value;
    }
  }
}

So, yeah, you don't want to be caught in a code review with code like that. Unless you're a library developer, of course. It's great that the iota algorithm saves me from having to write that for loop, but you know what? I've never actually used it in production. The story usually goes something like this: I need a range of values. This is such a fundamental thing in computer science that there must be a standard algorithm for it. I again scour the list over at bit.ly/2i5WZRc and I find iota. Hmm, it needs a range to fill with values.
OK, what's the cheapest range I can find... I then print the values out to make sure I got it right using... a for loop:

#include <numeric>
#include <stdio.h>

int main()
{
    int range[10];
    std::iota(std::begin(range), std::end(range), 0);

    for (int i : range)
    {
        printf("%d\n", i);
    }
}

To be honest, the only thing I like about this code is the range-based for loop. The problem is that I simply don't need nor want that range. I don't want to have to create some container just to hold the values so that I can iterate over them. What if I need a lot more values? I'd much rather just write the for loop myself:

#include <stdio.h>

int main()
{
    for (int i = 0; i != 10; ++i)
    {
        printf("%d\n", i);
    }
}

To add insult to injury, this involves a lot less typing. It sure would be nice, however, if there were an iota-like function that could somehow generate a range of values for a range-based for loop to consume without having to use a container. I was recently browsing a book about the Python language and noticed that it has a built-in function called range. I can write the same program in Python like this:

for i in range(0, 10):
    print(i)

Be careful with that indentation. It's how the Python language represents compound statements. I read that Python was named after a certain British comedy rather than the nonvenomous snake. I don't think the author was kidding. Still, I love the succinct nature of this code. Surely, I can achieve something along these lines in C++. Indeed, this is what I wish the iota algorithm would provide but, alas. Essentially, what I'm looking for is a range algorithm that looks something like this:

template <typename T>
generator<T> range(T first, T last)
{
    return{ ... };
}

int main()
{
    for (int i : range(0, 10))
    {
        printf("%d\n", i);
    }
}

To my knowledge, no such function exists, so let's go and build it. The first step is to approximate the algorithm with something reliable that can act as a baseline for testing.
The C++ standard vector container comes in handy in such cases: #include <vector> template <typename T> std::vector<T> range(T first, T last) { std::vector<T> values; while (first != last) { values.push_back(first++); } return values; } It also does a good job of illustrating why you don’t want to build a container in the first place, or even figure out how large it should be, for that matter. Why should there even be a cap? Still, this is useful because you can easily compare the output of this range generator to a more efficient alternative. Well, it turns out that writing a more efficient generator isn’t that difficult. Have a look at Figure 1. Figure 1 A Classical Generator template <typename T> struct generator { T first; T last; struct iterator{ ... }; iterator begin() { return{ first }; } iterator end() { return{ last }; } }; template <typename T> generator<T> range(T first, T last) { return{ first, last }; } The range function simply creates a generator initialized with the same pair of bounding values. The generator can then use those values to produce lightweight iterators via the conventional begin and end member functions. The most tedious part is spitting out the largely boilerplate iterator implementation. The iterator can simply hold a given value and increment it as needed. It must also provide a set of type aliases to describe itself to standard algorithms. This isn't strictly necessary for the simple range-based for loop, but it pays to include this as a bit of future-proofing: template <typename T> struct generator { struct iterator { T value; using iterator_category = std::input_iterator_tag; using value_type = T; using difference_type = ptrdiff_t; using pointer = T const*; using reference = T const&; Incrementing the iterator can simply increment the underlying value. 
The post-increment form can safely be deleted: iterator& operator++() { ++value; return *this; } iterator operator++(int) = delete; The other equally important function provided by an iterator is that of comparison. A range-based for loop will use this to determine whether it has reached the end of the range: bool operator==(iterator const& other) const { return value == other.value; } bool operator!=(iterator const& other) const { return !(*this == other); } Finally, a range-based for loop will want to dereference the iterator to return the current value in the range. I could delete the member call operator, because it isn’t needed for the range-based for loop, but that would needlessly limit the utility of generators to be used by other algorithms: T const& operator*() const { return value; } T const* operator->() const { return std::addressof(value); } It might be that the generator and associated range function are used with number-like objects rather than simple primitives. In that case, you might also want to use the address of helper, should the number-like object be playing tricks with operator& overloading. And that’s all it takes. My range function now works as expected: template <typename T> generator<T> range(T first, T last) { return{ first, last }; } int main() { for (int i : range(0, 10)) { printf("%d\n", i); } } Of course, this isn’t particularly flexible. I’ve produced the iota of my dreams, but it’s still just an iota of what would be possible if I switched gears and embraced coroutines. You see, with coroutines you can write all kinds of generators far more succinctly and without having to write a new generator class template for each kind of range you’d like to produce. Imagine if you only had to write one more generator and then have an assortment of range-like functions to produce different sequences on demand. That’s what coroutines enable. 
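As an aside, the pull model all of this is building toward is exactly what the Python language mentioned at the outset provides natively through generator functions. Here's a minimal sketch (the name lazy_range is made up for illustration — Python 3's built-in range is already lazy):

```python
def lazy_range(first, last):
    """Yield first, first + 1, ..., last - 1 on demand.

    No container is built; each value is produced only when the
    consumer asks for the next one.
    """
    while first != last:
        yield first
        first += 1

print(list(lazy_range(0, 10)))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

This is the behavior the rest of the article recovers for C++ with coroutines.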
Instead of embedding the knowledge of the original iota generation into the generator, you can embed that knowledge directly inside the range function and have a single generator class template that provides the glue between producer and consumer. Let’s do it. I begin by including the coroutine header, which provides the definition of the coroutine_handle class template: #include <experimental/coroutine> I’ll use the coroutine_handle to allow the generator to interact with the state machine represented by a coroutine. This will query and resume as needed to allow a range-based for loop—or any other loop, for that matter—to direct the progress of the coroutine producing a pull- rather than push-model of data consumption. The generator is in some ways similar to that of the classical generator in Figure 1. The big difference is that rather than updating values directly, it merely nudges the coroutine forward. Figure 2 provides the outline. Figure 2 A Coroutine Generator template <typename T> struct generator { struct promise_type{ ... }; using handle_type = std::experimental::coroutine_handle<promise_type>; handle_type handle{ nullptr }; struct iterator{ ... }; iterator begin() { ... handle.resume(); ... } iterator end() { return nullptr; } }; So, there's a little more going on here. Not only is there an iterator that allows the range-based for loop to interact with the generator from the outside, but there's also a promise_type that allows the coroutine to interact with the generator from the inside. First, some housekeeping: Recall that the function generating values won't be returning a generator directly, but rather allow a developer to use co_yield statements to forward values from the coroutine, through the generator, and to the call site. Consider the simplest of generators: generator<int> one_two_three() { co_yield 1; co_yield 2; co_yield 3; } Notice how the developer never explicitly creates the coroutine return type. 
That’s the role of the C++ compiler as it stitches together the state machine represented by this code. Essentially, the C++ compiler looks for the promise_type and uses that to construct a logical coroutine frame. Don’t worry: in many cases the coroutine frame will disappear entirely once the C++ compiler is done optimizing the code. Anyway, the promise_type is then used to initialize the generator that gets returned to the caller. Given the promise_type, I can get the handle representing the coroutine so that the generator can drive it from the outside: generator(promise_type& promise) : handle(handle_type::from_promise(promise)) { } Of course, the coroutine_handle is a pretty low-level construct and I don’t want a developer holding onto a generator to accidentally corrupt the state machine inside of an active coroutine. The solution is simply to implement move semantics and prohibit copies. Something like this (first, I’ll give it a default constructor and expressly delete the special copy members): generator() = default; generator(generator const&) = delete; generator &operator=(generator const&) = delete; And then I’ll implement move semantics simply by transferring the coroutine’s handle value so that two generators never point to the same running coroutine, as shown in Figure 3. Figure 3 Implementing Move Semantics generator(generator&& other) : handle(other.handle) { other.handle = nullptr; } generator &operator=(generator&& other) { if (this != &other) { if (handle) { handle.destroy(); } handle = other.handle; other.handle = nullptr; } return *this; } Note that move assignment must first destroy any coroutine this generator already owns; otherwise that coroutine would be leaked. Now, given the fact that the coroutine is being driven from the outside, it's important to remember that the generator also has the responsibility of destroying the coroutine: ~generator() { if (handle) { handle.destroy(); } } This actually has more to do with the result of final_suspend on the promise_type, but I’ll save that for another day. That’s enough bookkeeping for now. Let’s now look at the generator’s promise_type.
The promise_type is a convenient place to park any state such that it will be included in any allocation made for the coroutine frame by the C++ compiler. The generator is then just a lightweight object that can easily move around and refer back to that state as needed. There are only two pieces of information that I really need to convey from within the coroutine back out to the caller. The first is the value to yield and the second is any exception that might have been thrown: #include <variant> template <typename T> struct generator { struct promise_type { std::variant<T const*, std::exception_ptr> value; Although optional, I tend to wrap exception_ptr objects inside std::optional because the implementation of exception_ptr in Visual C++ is a little expensive. Even an empty exception_ptr calls into the CRT during both construction and destruction. Wrapping it inside optional neatly avoids that overhead. A more precise state model is to use a variant, as I just illustrated, to hold either the current value or the exception_ptr because they’re mutually exclusive. The current value is merely a pointer to the value being yielded inside the coroutine. This is safe to do because the coroutine will be suspended while the value is read and whatever temporary object may be yielded up will be safely preserved while the value is being observed outside of the coroutine. When a coroutine initially returns to its caller, it asks the promise_type to produce the return value. Because the generator can be constructed by giving a reference to the promise_type, I can simply return that reference here: promise_type& get_return_object() { return *this; } A coroutine producing a generator isn’t your typical concurrency-enabling coroutine and it’s often just the generator that dictates the lifetime and execution of the coroutine. 
As such, I indicate to the C++ compiler that the coroutine must be initially suspended so that the generator can control stepping through the coroutine, so to speak: std::experimental::suspend_always initial_suspend() { return {}; } Likewise, I indicate that the coroutine will be suspended upon return, rather than having the coroutine destroy itself automatically: std::experimental::suspend_always final_suspend() { return {}; } This ensures that I can still query the state of the coroutine, via the promise_type allocated within the coroutine frame, after the coroutine completes. This is essential to allow me to read the exception_ptr upon failure, or even just to know that the coroutine is done. If the coroutine automatically destroys itself when it completes, I wouldn’t even be able to query the coroutine_handle, let alone the promise_type, following a call to resume the coroutine at its final suspension point. Capturing the value to yield is now quite straight forward: std::experimental::suspend_always yield_value(T const& other) { value = std::addressof(other); return {}; } I simply use the handy address of helper again. A promise_type must also provide a return_void or return_value function. Even though it isn’t used in this example, it hints at the fact that co_yield is really just an abstraction over co_await: void return_void() { } More on that later. Next, I’ll add a little defense against misuse just to make it easier for the developer to figure out what went wrong. You see, a generator yielding values implies that unless the coroutine completes, a value is available to be read. If a coroutine were to include a co_await expression, then it could conceivably suspend without a value being present and there would be no way to convey this fact to the caller. 
For that reason, I prevent a developer from writing a co_await statement, as follows: template <typename Expression> Expression&& await_transform(Expression&& expression) { static_assert(sizeof(expression) == 0, "co_await is not supported in coroutines of type generator"); return std::forward<Expression>(expression); } Wrapping up the promise_type, I just need to take care of catching, so to speak, any exception that might have been thrown. The C++ compiler will ensure that the promise_type’s unhandled_exception member is called: void unhandled_exception() { value = std::current_exception(); } I can then, just as a convenience to the implementation, provide a handy function for optionally rethrowing the exception in the appropriate context: void rethrow_if_failed() { if (value.index() == 1) { std::rethrow_exception(std::get<1>(value)); } } Enough about the promise_type. I now have a functioning generator—but I’ll just add a simple iterator so that I can easily drive it from a range-based for loop. As before, the iterator will have the boilerplate type aliases to describe itself to standard algorithms. However, the iterator simply holds on to the coroutine_handle: struct iterator { using iterator_category = std::input_iterator_tag; using value_type = T; using difference_type = ptrdiff_t; using pointer = T const*; using reference = T const&; handle_type handle; Incrementing the iterator is a little more involved than the simpler iota iterator as this is the primary point at which the generator interacts with the coroutine. Incrementing the iterator implies that the iterator is valid and may in fact be incremented. 
Because the “end” iterator holds a nullptr handle, I can simply provide an iterator comparison, as follows: bool operator==(iterator const& other) const { return handle == other.handle; } bool operator!=(iterator const& other) const { return !(*this == other); } Assuming it’s a valid iterator, I first resume the coroutine, allowing it to execute and yield up its next value. I then need to check whether this execution brought the coroutine to an end, and if so, propagate any exception that might have been raised inside the coroutine: iterator &operator++() { handle.resume(); if (handle.done()) { promise_type& promise = handle.promise(); handle = nullptr; promise.rethrow_if_failed(); } return *this; } iterator operator++(int) = delete; When that happens, the iterator is considered to have reached its end and its handle is simply cleared such that it will compare successfully against the end iterator. Care needs to be taken to clear the coroutine handle prior to throwing any uncaught exception to prevent anyone from accidentally resuming the coroutine at the final suspension point, as this would lead to undefined behavior. The generator’s begin member function performs much the same logic, to ensure that I can consistently propagate any exception that’s thrown prior to reaching the first suspension point: iterator begin() { if (!handle) { return nullptr; } handle.resume(); if (handle.done()) { handle.promise().rethrow_if_failed(); return nullptr; } return handle; } The main difference is that begin is a member of the generator, which owns the coroutine handle, and therefore must not clear the coroutine handle. Finally, I can implement iterator dereferencing quite simply by returning a reference to the current value stored within the promise_type: T const& operator*() const { return *std::get<0>(handle.promise().value); } T const* operator->() const { return std::addressof(operator*()); } And I’m done.
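For readers keeping the Python comparison in mind: the resume/done/rethrow dance that begin and operator++ perform is what Python's iterator protocol does with next() and StopIteration. A sketch, reusing the one_two_three example from earlier:

```python
def one_two_three():
    yield 1
    yield 2
    yield 3

it = one_two_three()             # like begin(): the generator starts suspended
values = []
try:
    while True:
        values.append(next(it))  # like handle.resume(): run to the next yield
except StopIteration:            # like handle.done(): the generator finished
    pass
print(values)  # [1, 2, 3]
```

An exception raised inside the generator body would propagate out of next() here, just as rethrow_if_failed propagates it out of operator++.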
I can now write all manner of algorithms, producing a variety of generated sequences using this generalized generator. Figure 4 shows what the inspirational range generator looks like. Figure 4 The Inspirational Range Generator template <typename T> generator<T> range(T first, T last) { while (first != last) { co_yield first++; } } int main() { for (int i : range(0, 10)) { printf("%d\n", i); } } Who needs a limited range, anyway? As I now have a pull model, I can simply have the caller decide when they've had enough, as you can see in Figure 5. Figure 5 A Limitless Generator template <typename T> generator<T> range(T first) { while (true) { co_yield first++; } } int main() { for (int i : range(0)) { printf("%d\n", i); if (...) { break; } } } The possibilities are endless! There is, of course, more to generators and coroutines and I’ve only just scratched the surface here. Join me next time for more on coroutines in C++. You can find the complete example from this article over on Compiler Explorer: godbolt.org/g/NXHBZR. Kenny Kerr is an author, systems programmer, and the creator of C++/WinRT. He is also an engineer on the Windows team at Microsoft where he is designing the future of C++ for Windows, enabling developers to write beautiful high-performance apps and components with incredible ease. Thanks to the following technical expert for reviewing this article: Gor Nishanov
UserDataAudio from panda3d.core import UserDataAudio - class UserDataAudio Bases: MovieAudio A UserDataAudio is a way for the user to manually supply raw audio samples. remove_after_read means the data will be removed once it has been read; otherwise the data will be stored (enabling looping and seeking). Expects data as 16-bit signed words; example for stereo: word 1 = channel 1, word 2 = channel 2, word 3 = channel 1, word 4 = channel 2, etc. Inheritance diagram - __init__(param0: UserDataAudio) - __init__(rate: int, channels: int, remove_after_read: bool) This constructor returns a UserDataAudio — a means to supply raw audio samples manually. - append(src: DatagramIterator, len: int) Appends audio samples to the buffer from a datagram. This is intended to make it easy to send streaming raw audio over a network. - append(param0: bytes) Appends audio samples to the buffer from a string. The samples must be stored little-endian in the string. This is not particularly efficient, but it may be convenient to deal with samples in Python. - static get_class_type() TypeHandle
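The word layout described above is easy to get wrong, so here is one way to build the little-endian, interleaved 16-bit byte string that append(bytes) expects, using only the standard library (interleave_stereo is a hypothetical helper, not part of Panda3D):

```python
import struct

def interleave_stereo(left, right):
    """Pack two equal-length sequences of signed 16-bit samples into
    the interleaved little-endian layout described above:
    word 1 = channel 1, word 2 = channel 2, word 3 = channel 1, ...
    """
    if len(left) != len(right):
        raise ValueError("channels must have the same number of samples")
    return b"".join(struct.pack("<hh", l, r) for l, r in zip(left, right))

data = interleave_stereo([0, 1000, -1000], [0, -1000, 1000])
print(len(data))  # 3 frames x 2 channels x 2 bytes each = 12 bytes
```

The resulting bytes object could then be handed to UserDataAudio.append.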
Using Type Substitution with Web Services By edort on Oct 01, 2007 By Doug Kohlert Java Architecture for XML Binding (JAXB) 2.1 introduced a new annotation, @XmlSeeAlso, that you can use to make JAXB aware of additional types. Java API for XML-Based Web Services (JAX-WS) 2.1 also uses the @XmlSeeAlso annotation to allow use of abstract classes in a service endpoint interface (SEI). JAX-WS 2.1 allows you to specify the @XmlSeeAlso annotation on a SEI. JAX-WS reads this annotation at runtime making sure to pass all of the classes referenced by the annotation to JAXB via the JAXBContext. The use of the @XmlSeeAlso annotation in JAXB and JAX-WS enables support for type substitution, a subclassing concept that complements inheritance. This tip will show you how to develop a simple web service that uses type substitution as well as a client that consumes the web service. You'll see how to build the web service from a Java class and from a WSDL file. A sample application accompanies this tip. The code examples in the tip are taken from the source code of the sample application. Using Type Substitution in a Web Service Suppose you want to build a web service that manages the inventory for a store that sells wakeboards and related equipment. Wakeboards are short boards made of buoyant material that are used to ride over the surface of a body of water, typically behind a boat or with a cable-skiing apparatus. For simplicity, let's assume that the store sells only three types of items: wakeboards, bindings, and towers for boats. You want the web service to be fairly simple to use and have a minimal amount of exposed operations. So to keep things simple, the web service uses an abstract Item class in its operations instead of using type-specific operations. 
The following Item class can be used to model any inventory object that you might want to expose through your web service: public abstract class Item implements Serializable { private long id; private String brand; private String name; private double price; ... } Extending the Item class, you can define the following Wakeboard, WakeboardBinding and Tower classes: public class Wakeboard extends Item { private String size; } public class WakeboardBinding extends Item { private String size; } public class Tower extends Item { private Fit fit; private String tubing; public static enum Fit { Custom, Exact, Universal }; } Because this example is about type substitution, let's make the inheritance hierarchy a little more interesting by introducing a Wearable abstract class. Wearable holds the size attribute for both the Wakeboard and WakeboardBinding classes. The Wearable class is defined as follows: public abstract class Wearable extends Item { protected String size; } And the resulting Wakeboard and WakeboardBinding classes are: public class Wakeboard extends Wearable { } public class WakeboardBinding extends Wearable { } Also, because the web service manages inventory, you'll want the inventory items to be persisted to a database using the Java Persistence API (sometimes referred to as JPA). To do this, you need to add an @Entity annotation to each of the classes that will be persisted. The only class that you probably don't want to persist is the Wearable class. You can add the @MappedSuperclass annotation to this class so that the JPA will use the attributes of this class for persisting subclasses. Next, you need to add the @Id and the @GeneratedValue(strategy = GenerationType.AUTO) annotations to the Item.Id field. As a result, the field will be used as the primary key in the database and the Id will be automatically generated if not provided. 
Finally, because you might add new types of Items into the system at a later time, you should add the @Inheritance(strategy=InheritanceType.JOINED) annotation to the Item class. This will store each subclass in its own database table. The final data classes look like the following: @Entity @Inheritance(strategy=InheritanceType.JOINED) public abstract class Item implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private String brand; private String itemName; private double price; // Getters & setters ... } @MappedSuperclass public abstract class Wearable extends Item { protected String size; ... } @Entity public class Wakeboard extends Wearable {} @Entity public class WakeboardBinding extends Wearable {} @Entity public class Tower extends Item { private Fit fit; private String tubing; public static enum Fit { Custom, Exact, Universal }; ... } Now that you defined the data model for the application, you can now define the web service interface. Because the application manages information about wakeboard equipment, let's call the web service WakeRider and let's expose four operations in the web service: addItem, updateItem, removeItem, and getItems. Here is what the WakerRider class looks like: @WebService() public class WakeRider { ... public List<Item> getItems() {...} public boolean addItem(Item item) {...} public boolean updateItem(Item item) {...} public boolean removeItem(Item item) {...} } If you deployed this web service and then looked at the generated WSDL and schema, you would notice that only the Item type is defined -- there is no mention of Wearable, Wakeboard, WakeboardBinding, or Tower. This is because when JAX-WS introspects the WakeRider class there is no mention of the other classes. To remedy that you can use the new @XmlSeeAlso annotation and list the other classes that you want to expose through the WakeRider web service. 
Here is what the WakeRider class looks like with the @XmlSeeAlso annotation: @WebService() @XmlSeeAlso({Wakeboard.class, WakeboardBinding.class, Tower.class}) public class WakeRider { ... } Now when you deploy the WakeRider service and look at the generated schema, you will see types for Item, Wearable, Wakeboard, WakeboardBinding, and Tower as well as some other types used internally by JAX-WS and JAXB. Starting From WSDL You can use type substitution in a web service that is built from a WSDL file. What's particularly nice about this is that using type substitution when starting from WSDL is totally transparent. When you import a WSDL file with JAX-WS 2.1, the generated proxy class is required to have the appropriate @XmlSeeAlso annotation. For example, the imported WakeRider proxy from the web service example in the previous section would have an @XmlSeeAlso annotation like the following: @WebService(name="WakeRider", targetNamespace="") @XmlSeeAlso({ObjectFactory.class}) public interface WakeRider { ... } Notice that the @XmlSeeAlso annotation in the proxy contains the ObjectFactory.class instead of listing the classes. The ObjectFactory class is a JAXB required class that provides information about all of the Java types that JAXB needs to be aware of in the given package. In this example, the ObjectFactory class will have references to the Item, Wearable, Wakeboard, WakeboardBinding and Tower classes. There is nothing that you need to do to enable type substitution when starting from WSDL. The WakeRider Client Invoking the WakeRider web service from a client is the same as invoking any other web service using JAX-WS. All you need to do is get a WakeRider proxy from the generated WakeRider web service and invoke the operations on the proxy. The sample application that accompanies this tip contains a NetBeans 5.5.1 project for a Java Platform, Standard Edition (Java SE) application named wrmanager. 
You can use the application to add, remove, or edit items in the WakeRider web service inventory. There is also a NetBeans 5.5.1 project for a JavaServer Faces (JSF) technology application named wrviewer. The application uses the WakeRider web service to view the current inventory. Both of these client applications contain code similar to the following for invoking an operation on the WakeRider web service: WakeRiderService service = new WakeRiderService(); port = service.getWakeRiderPort(); List<Item> items = port.getItems(); for (Item item : items) { if (item instanceof Wakeboard) { ... } else if (item instanceof WakeboardBinding) { ... } else if (item instanceof Tower) { ... } } Running the Sample Code The sample code for this tip is available as three NetBeans projects: wrservice. Defines the WakeRider endpoint. wrviewer. A JSF page for viewing the WakeRider inventory. wrmanager. A Java SE application for adding, removing, and editing items in the WakeRider inventory. You can build and run the sample code using the NetBeans 5.5.1 IDE as follows: - If you haven't already done so, download and install the NetBeans 5.5.1 IDE. - If you haven't already done so, download and install GlassFish V2 RC 4 or later. - Download the sample application for the tip and extract its contents. You should now see the newly extracted directory as <sample_install_dir>/wakerider, where <sample_install_dir> is the directory where you installed the sample application. For example, if you extracted the contents to C:\\ on a Windows machine, then your newly created directory should be at C:\\wakerider. The wakerider directory contains one directory for each of the NetBeans projects: wrservice, wrviewer, and wrmanager. - Start the NetBeans IDE. Run NetBeans with JDK 5.0. You can also use JDK 6; however, in that case, you will also need to follow the instructions in Running on top of JDK 6.
- Add GlassFish V2 to the NetBeans Application Servers list. - Open the wrservice project as follows: - Select Open Project from the File menu. - Browse to the wrservice directory from the sample application download. - Click the Open Project Folder button. - If you are alerted to a "Missing Server Problem", resolve it by right clicking on the wrservice node in the Projects window and selecting Resolve Missing Server Problem. Then select Sun Java System Application Server. - Deploy the wrservice project as follows: - Right click the wrservice node in the Projects window. - Select Deploy Project. - Open the wrviewer project as follows: - Select Open Project from the File menu. - Browse to the wrviewer directory from the sample application download. - Click the Open Project Folder button. - You may need to resolve a missing server problem as described in step 6. - Run wrviewer as follows: - Right click on the wrviewer node in the Projects window. - Select Run Project. This should open a window in your web browser that displays the current WakeRider inventory. The inventory should be empty the first time you run wrviewer. - Open the wrmanager project as follows: - Select Open Project from the File menu. - Browse to the wrmanager directory from the sample application download. - Click the Open Project Folder button. - Run wrmanager as follows: - Right click on the wrmanager node in the Projects window. - Select Run Project. This should open the WakeRider Inventory Manager application. - Add, delete, edit, or view inventory items as follows: - To add an item, click the Add button in the WakeRider Inventory Manager application, fill in the Add Item dialog and click the OK button. - To edit an item, select the item in the appropriate inventory window in the WakeRider Inventory Manager application and click the Edit button. Modify the contents of the Edit Item dialog and click the OK button.
- To delete an item, select the item in the appropriate inventory window in WakeRider Inventory Manager application and click the Remove button. - To view current inventory items in the wrmanager application, view or refresh the wrviewer page in your browser. About the Author Doug Kohlert is a senior staff engineer in the Web Technologies and Standards division of Sun Microsystems where he is the specification lead for JAX-WS. Thanks for the article on this topic. I tried and it works. But it looks like it works for Document Wrapper but not for Document Bare. Is there any other solution for document bare to make it work? I get the following error if I use document bare for this scenario. Unable to create an instance of org.mydemo.Item which is abstract. Thanks for your help. Ramesh Posted by Ramesh Nune on December 29, 2009 at 11:37 PM PST #
Hello! I have a folder full of contour dxf, to open in rhino, but when I drope them in rhino, they are all in the same place: on the origin. I would like to make a script that allows me to select the starting folder, and the script import all files one by one, I just have to move the files the width of the last between each import. are they the commands to import a file, or list the files of a folder? thank you! Hello! can i use this type of code: Dim filefolder Set filefolder = FileSys.GetFolder(FolderPath) Dim i as Integer 'loop through all files in the folder For i = 1 To filefolder.Files.Count Dim IFile Set IFile = filefolder.Files.Item(i) Hi, sorry I cannot help with vb, but this works in python: import rhinoscriptsyntax as rs import Rhino.Geometry as rg from os import listdir from os.path import isfile, join path = "C:\\temp\\pyExport\\" #your file path (mind the double backslashes) files = [f for f in listdir(path) if isfile(join(path, f))] vec = rg.Vector3d(0,0,0) if len(files)>0: for f in files: filename = path + f rs.Command("! _-Import " + filename + " _Enter" + " _Enter") g = rs.LastCreatedObjects() rs.MoveObject(g, vec) geo = rs.coercegeometry(g) bbox = geo.GetBoundingBox(True) vec = vec + (bbox.PointAt(1,0,0) - bbox.PointAt(0,0,0)) thank you David!!! but i don’t know Python, i would like but, i don’t have time to learn it… to choose the file path, i use: path= rs.BrowseForFolder(“C:\Program Files\” )??? You need double backslashes or a raw string (r”c:\windows\...”)
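For what it's worth, the placement arithmetic in the script above — shifting each import by the accumulated width of everything imported so far — can be pulled out and tested without Rhino at all (layout_offsets is just an illustrative name):

```python
def layout_offsets(widths, gap=0.0):
    """Given the bounding-box width of each file's geometry, return
    the x-offset at which each one should be placed so the imports
    sit side by side instead of piling up at the origin."""
    offsets = []
    x = 0.0
    for w in widths:
        offsets.append(x)
        x += w + gap
    return offsets

print(layout_offsets([10.0, 4.0, 7.0]))  # [0.0, 10.0, 14.0]
```

In the script above, vec plays the role of the running x value.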
Hello! I find the Flash mapping feature into data space very useful, because it allows me to get rid of all the PROGMEM, PSTR, etc. Everything works very well, but something is wrong with the calculation of program space. Have a look at this example I tested on an ATtiny1616: #include <avr/io.h> #include "uart.h" const char BigData[] = "Very Long String...\r\n"; int main(void) { Uart_Init(); Uart_Write(BigData); while(1); } Uart_Write is defined this way: void Uart_Write(const char * text) { while(*text) Uart_Write(*text++); } void Uart_Write(uint8_t Data) { // Just copying Data to USART register and checking flags // Not important at this time } Everything works fine when BigData[] is defined as char or const char. When it's char BigData[] it's stored in RAM at address 0x3800, which is the beginning of RAM data, so after building the program I get: Program Memory Usage : 580 bytes 3,5 % Full Data Memory Usage : 29 bytes 1,4 % Full The usage of both RAM and Flash increases with increasing length of the string, which is completely obvious. When I change it to const char BigData[] the string is stored at 0x8218, which is Flash space. Usage of RAM is decreased, but usage of Flash is also decreased! Why? Program Memory Usage : 536 bytes 3,3 % Full Data Memory Usage : 7 bytes 0,3 % Full Moreover, the usage of Flash is independent of the string length. When I add something to the string: const char BigData[] = "Very Loooooooooooooooooooooooooooooooooooooooong String...\r\n"; I also get Program Memory Usage : 536 bytes 3,3 % Full Data Memory Usage : 7 bytes 0,3 % Full This way I can run out of program space while the compiler says there's a lot of free space left. What am I doing wrong?

Atmel Studio probably doesn't take into account the strings in flash when it calculates the used flash. Hopefully the Microchip guys pick up on this thread and comment.

You will understand it by looking at the LSS.

The best thing to see what happens is to look in the .LSS file. Which optimization level do you use? Oops, too slow :)

Optimization level is the default -O1. When the string is declared as 'const' it goes to the rodata section (read-only data? or what?). char BigData[]: const char BigData[]: extronic.pl

Damn, I'm blind... Total memory usage is shown three lines above 'Program Memory Usage'. But I think it's an interesting bug. When a string is declared with const char PROGMEM its size is added to Program Memory Usage, whereas const char is not. In this case there's no warning when you run out of memory space. I have declared a ridiculously big const char BigData[32767] and the compiler hasn't thrown any error at all! I could upload this program to the ATtiny1616 but of course it was not working. extronic.pl
https://www.avrfreaks.net/comment/2610531
CC-MAIN-2020-34
refinedweb
514
70.84
Writing Charm Tests

Charm authors will have the best insight into whether or not a charm is working properly. It is up to the author to create tests that ensure quality and compatibility with other charms.

The purpose of tests

The intention of charm tests is to assert that the charm works well on the intended platform and performs the expected configuration steps. Examples of things to test in each charm are:

- After install, expose, and adding of required relations, the application is running correctly (such as listening on the intended ports).
- Adding, removing, and re-adding a relation should work without error.
- Setting configuration values should result in a change reflected in the application's operation or configuration.

Where to put tests

The charm directory should contain a sub-directory named 'tests'. This directory will be scanned by a test runner for executable files. The executable files will be run in lexical order by the test runner, with a default Juju model. The tests can make the following assumptions:

- A minimal install of the release of Ubuntu which the charm is targeted at will be available.
- A version of Juju is installed and available in the system path.
- A Juju model with no applications deployed inside it is already bootstrapped, and will be the default for command line usage.
- The CWD is the tests directory off the charm root.
- Full access to other public charms will be available to build a solution of your choice.
- Tests should be self contained, meaning they include or install the packages needed to test the software.
- Tests should run automatically and must not require additional setup (such as passwords) or human intervention to get a successful test result.

Test automation

The charm tests will be run automatically, so all tests must not require user interaction. The test code must install or package the files required to test the charm. The test runner will find and execute each test within that directory and produce a report.
If tests exit with applications still in the model, the test runner may clean them up, whether by destroying the model or destroying the applications explicitly, and the machines may be terminated as well. For this reason tests should not make assumptions about machine or unit numbers or other factors in the model that could be reset. Any artifacts needed from the test machines should be retrieved and displayed before the test exits.

Exit codes

Upon exit, the test's exit code will be evaluated to mean the following:

- 0: Test passed
- 1: Failed test
- 100: Test is skipped because of a timeout or incomplete setup

charm proof

The charm-tools package contains a static charm analysis tool called charm proof. This tool checks the charm structure and gives Informational, Warning, and Error messages on potential issues with the charm structure. To be in line with Charm Store policy, all charms should pass charm proof with Informational messages only. Warning or Error messages indicate a problem in the charm, and the automated tests will fail on the charm proof step.

The Amulet Test Library

While you can write tests in Bash or other languages, the Amulet library makes it easy to write tests in Python and is recommended.

Executing Tests via BundleTester

The charm test runner is a tool called bundletester. The bundletester tool is used to find, fetch, and run tests on charms and bundles. You should execute bundletester against a built charm. In order to test the vanilla charm that you built in Getting Started, you would do the following:

charm build
bundletester -t $JUJU_REPOSITORY/trusty/vanilla

tests.yaml

The optional driver file, tests/tests.yaml, can be used to control the overall flow of how tests are run. All values in this file are optional, and when not provided default values will be used. Read the bundletester README.md file for more information on the options included in the tests.yaml file.
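The runner contract described above (executable files, lexical order, exit codes 0/1/100) can be sketched in a few lines of Python. This is a hypothetical illustration of the protocol, not bundletester's actual implementation:

```python
import os
import stat
import subprocess

# Exit codes defined by the charm test protocol described above.
PASS, FAIL, SKIP = 0, 1, 100

def run_charm_tests(tests_dir):
    """Run every executable file in tests_dir in lexical order and
    classify each result by its exit code."""
    results = {}
    for name in sorted(os.listdir(tests_dir)):
        path = os.path.join(tests_dir, name)
        mode = os.stat(path).st_mode
        if not (stat.S_ISREG(mode) and mode & stat.S_IXUSR):
            continue  # the runner only picks up executable files
        code = subprocess.call([path], cwd=tests_dir)
        if code == PASS:
            results[name] = "passed"
        elif code == SKIP:
            results[name] = "skipped"
        else:
            results[name] = "failed"
    return results
```

Any exit code other than 0 or 100 is treated as a failure, which matches the convention of reserving 100 for "skipped because of a timeout or incomplete setup".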
Example Tests

Initial test can install Amulet

Since the tests are run in lexical order, a common pattern is to use an executable file with a name that sorts first (00-setup for example), which installs Juju and the Amulet Python package if not already installed, and any other packages required for testing.

#!/bin/bash
# Check if amulet is installed before adding repository and updating apt-get.
dpkg -s amulet
if [ $? -ne 0 ]; then
    sudo add-apt-repository -y ppa:juju/stable
    sudo apt-get update
    sudo apt-get install -y amulet
fi

# Install any additional python packages or testing software here.

Following tests can be written in Amulet

The remaining tests can now assume Amulet is installed and use the library to create tests for the charm. You are free to write the tests in any style you want, but a common pattern is to use the "unittest" framework from Python to set up and deploy the charms. The other methods starting with "test" will be run afterward.

#!/usr/bin/env python3

import amulet
import requests
import unittest

class TestDeployment(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.deployment = amulet.Deployment()

        cls.deployment.add('charm-name')
        cls.deployment.expose('charm-name')

        try:
            cls.deployment.setup(timeout=900)
            cls.deployment.sentry.wait()
        except amulet.helpers.TimeoutError:
            amulet.raise_status(amulet.SKIP, msg="Environment wasn't stood up in time")
        except:
            raise

        cls.unit = cls.deployment.sentry.unit['charm-name/0']

    # Test methods would go here.

if __name__ == '__main__':
    unittest.main()

Debugging Your Tests

If you're running tests with bundletester, debugging the tests themselves can be a little tricky. Setting breakpoints will simply make the test hang, as bundletester runs the tests in a separate process. You can run the tests directly, however. Let's say that you named the test in the example above "01-deployment".
You could run it like so:

charm build
cd $JUJU_REPOSITORY/trusty/vanilla
python3 tests/01-deployment

(Note that you'd need to run your setup script manually first, or run your modified test, with breakpoints, against an already deployed charm.)

The Deployment class in amulet also has a .log attribute, which can be useful for diagnosing problems after the tests have run. In the example tests above, you might invoke it with a line like the following:

self.deployment.log.debug("Some debug message here.")

Unit testing a layered charm

Amulet is a mature tool for deploying and testing a charm in a test environment. For layered charms, it is often desirable to be able to run some tests before the charm has been built, however. For example, you may wish to run unit tests as you write your code. Waiting for the charm to build so that amulet can run tests on it would introduce unnecessary delays into the unit testing cycle. What follows are some best practices for writing unit tests for a layered charm.

Create a separate tox ini file

You will usually want to create a second .ini file for tox, along with a separate requirements file, so that the requirements for your unit tests don't clobber the requirements for Amulet tests. You might call this file tox_unit.ini, and put the following inside of it:

[tox]
skipsdist=True
envlist = py34, py35
skip_missing_interpreters = True

[testenv]
commands = nosetests -v --nocapture tests/unit
deps =
    -r{toxinidir}/tests/unit/requirements.txt
    -r{toxinidir}/wheelhouse.txt
setenv =
    PYTHONPATH={toxinidir}/lib

Put library functions into packaged Python libraries

If you import objects from a Python module that exists in another layer, Python will raise an ImportError when you execute your unit tests. Building the charm will fix this ImportError, but will slow down your unit testing cycle.
Layer authors can help you get around this issue by putting their library functions in a Python library that can be built and installed via the usual Python package management tools. You can add the library to your unit testing requirements. This isn't always practical, however. This brings us to the next step:

When importing modules from a layer, structure your imports so that you can monkey patch

Let's say that you want to use the very useful "options" object in the base layer. If you import it as follows, you will get an import error that is very hard to work around:

from charms.layer import options

class MyClass(object):
    def __init__(self):
        self.options = options

If you instead import it like so, you can use libraries like Python's mock library to monkey patch the import out when you run your tests:

from charms import layer

class MyClass(object):
    def __init__(self):
        self.options = layer.options

Here's an example of a test that uses the mock (note that we pass create=True -- this is important!):

@mock.patch('mycharm.layer.options', create=True)
def test_my_class(self, mock_options):
    ...

If your charm does not have a lib/charms/layer directory, you'll still wind up with an ImportError that can be hard to work around. In this case, we recommend creating that directory, and dropping a blank file called <your charm name>.py into it. This isn't ideal, but it will save you some trouble when writing your tests.

Testing with Juju-Matrix

juju-matrix is a new testing tool for charm authors. It doesn't completely replace the older methods of writing tests, but it eliminates the need to write a lot of boilerplate tests, and it is growing into a sophisticated tool that allows charm authors to validate that their software will operate well at scale, in ways that are difficult to do with the existing testing setup. juju-matrix will run a basic deploy test for you, eliminating the need to write a custom deploy test for your bundle.
It also works with conjure-up spells, so if you need to do custom things to get your bundle to deploy, you can specify those things in a spell, and rely on juju-matrix to validate that the spell successfully deploys. For more complex tests, you can write custom juju-matrix plugins and tests. juju-matrix will also run an end_to_end test automatically if one exists (see below).

Installation

To install juju-matrix, simply run:

sudo snap install --classic --edge juju-matrix

Running Tests

To run a juju-matrix test on a bundle:

- Pull the bundle that you want to test to a local directory, either via charm pull, or by checking out a source tree.
- From within the bundle directory, run juju-matrix
- This will run a default suite of tests against your bundle, including that basic deploy test.

If you wish to test a conjure-up spell instead of a vanilla bundle, make sure that you have conjure-up installed, then run juju-matrix from within a local checkout of the spell. juju-matrix will automatically detect that you are using a spell, and deploy it using conjure-up's headless mode.

Chaos

By default, juju-matrix will also run a "chaos" test, which is similar in concept to Netflix's Chaos Monkey. juju-matrix will deploy the bundle, then perform various actions, such as adding units, removing machines, and simulating juju agent crashes. It will then check to see if the bundle remains in a healthy state. If you specify an "end_to_end" test, juju-matrix will run its chaos while using the end_to_end test to generate traffic inside of your bundle.

Note that chaos will usually break simple bundles that have no provisions for offering "high availability". The wiki-simple bundle, for example, only has one database unit and one web server unit, so it will fail if either of these go down. Since these bundles are not necessarily wrong -- just not configured to be highly available -- juju-matrix will not generate a test failure if the chaos run leaves your bundle in a bad state.
If you do wish to verify that your bundle stays healthy in the face of chaos, you can add a "matrix" section to your tests.yaml, and include "ha: True" in that section. This marks your bundle as "high availability", and any failures during the chaos run will cause juju-matrix to exit with a non-zero exit code, indicating a test failure. End-to-End testing with juju-matrix If you have a test file named end_to_end in your bundle's tests directory, juju-matrix will automatically find it and run it, while executing chaos actions against your bundle. This is useful for testing how your bundle might behave while under load, and under adverse conditions. Custom juju-matrix tests You can also write custom juju-matrix tests. A juju-matrix test is simply a yaml file that specifies which juju-matrix "tasks" to run, when. There is a plugin system for writing your own custom tasks. Take a look at matrix.yaml in the juju-matrix source, and also at the .matrix files in the tests directory of the same for examples. More information about writing custom tests and plugins can be found in the juju-matrix README.
https://docs.jujucharms.com/2.3/en/developer-testing
CC-MAIN-2018-34
refinedweb
2,161
61.87
From: David Abrahams (dave_at_[hidden]) Date: 2002-11-17 23:27:38 "Peter Dimov" <pdimov_at_[hidden]> writes: > From: "Eric Woodruff" <Eric.Woodruff_at_[hidden]> >> >> "David Abrahams" <dave_at_[hidden]> wrote in message >> news:uvg2w6j80.fsf_at_boost-consulting.com... >> > "Peter Dimov" <pdimov_at_[hidden]> writes: >> > >> > > * shared_*_cast will be renamed to sp_*_cast. >> > >> > Why? Without rationale, this seems like a gratuitous change, >> > especailly since "sp" doesn't mean much to me. >> > >> >> Agreed. "sp" is so wonderfully expressive. Actually, I was going to voice > a >> complaint of that, but the ideal would overload dynamic_cast, but since it >> is a language keyword, that can't happen. So I can kind of see sp_ as a >> work-around for that. > > True, but consider also that dynamic_cast has a different interface: > > template<class Target, class Source> Target dynamic_cast(Source); > > so even if we could overload it, the syntax would have been > > shared_ptr<Y> py; > shared_ptr<X> px = dynamic_cast< shared_ptr<X> >(py); > >> Alternatively, it could be named boost_dynamic_cast >> and put in the global namespace, where it can be overloaded as needed by >> boost. > > This doesn't look too good if you consider the possible boost->std > transition. :-) But "sp" does? <wink> How about dynamic_pointer_cast<T>(x)? Of course, we can make it work for regular pointers,
https://lists.boost.org/Archives/boost/2002/11/39706.php
CC-MAIN-2019-18
refinedweb
206
56.45
Whenever we ask SUSI anything, we get an intelligent reply. The API endpoint which clients use to fetch that reply is defined in SUSIService.java. In this blog post I will explain how a query is processed by the SUSI Server and how the output is sent to clients.

public class SusiService extends AbstractAPIHandler implements APIHandler

This is a public class and, just like every other servlet in the SUSI Server, it extends the AbstractAPIHandler class, which provides us many options, including AAA-related features.

public String getAPIPath() {
    return "/susi/chat.json";
}

The function above defines the endpoint of the servlet. In this case it is "/susi/chat.json", from which the final API URL is formed. This endpoint accepts 6 parameters in a GET request. "q" is the query parameter, the "timezoneOffset" and "language" parameters are for giving the user a reply according to his local time and local language, and "latitude" and "longitude" are used for getting the user's location.

String q = post.get("q", "").trim();
int count = post.get("count", 1);
int timezoneOffset = post.get("timezoneOffset", 0); // minutes, i.e. -60
double latitude = post.get("latitude", Double.NaN); // i.e. 8.68
double longitude = post.get("longitude", Double.NaN); // i.e. 50.11
String language = post.get("language", "en"); // ISO 639-1

After getting all the parameters we do a database update of the skill data. This is done using the DAO.susi.observe() function. Then SUSI checks whether the reply is to be given from an etherpad dream, i.e. whether we are currently dreaming something.

if (etherpad_dream != null && etherpad_dream.length() != 0) {
    String padurl = etherpadUrlstub + "/api/1/getText?apikey=" + etherpadApikey + "&padID=$query$";

If SUSI is dreaming something, then we call the etherpad API with the API key and padID. After we get the response from the etherpad API, we parse the JSON and store the text of the skill in a local variable.
JSONObject json = new JSONObject(serviceResponse);
String text = json.getJSONObject("data").getString("text");

After that, we fill an empty SUSI mind with the dream. susi_memory_dir is a folder named "susi" in the data folder, present on the server itself.

SusiMind dream = new SusiMind(DAO.susi_memory_dir);

We need the memory directory here to get a share on the memory of previous dialogues; otherwise, we cannot test call-back questions.

JSONObject rules = SusiSkill.readEzDSkill(new BufferedReader(new InputStreamReader(new ByteArrayInputStream(text.getBytes(StandardCharsets.UTF_8)), StandardCharsets.UTF_8)));
dream.learn(rules, new File("file://" + etherpad_dream));

When we call the dream.learn() function, SUSI starts dreaming. Now we will try to find an answer out of the dream. The SusiCognition takes all the parameters, including the dream, query, and user's identity, and then tries to find the answer to that query based on the skill it just learnt.

SusiCognition cognition = new SusiCognition(dream, q, timezoneOffset, latitude, longitude, language, count, user.getIdentity());

If SUSI finds an answer, it replies with that answer; otherwise it tries to answer with the built-in intents. These intents are both existing SUSI Skills and internal console services.

if (cognition.getAnswers().size() > 0) {
    DAO.susi.getMemories().addCognition(user.getIdentity().getClient(), cognition);
    return new ServiceResponse(cognition.getJSON());
}

This is how any query is processed by the SUSI Server and the output is sent to clients. Currently people use Etherpad to make their own skills, but we are also working on the SUSI CMS, where we can edit and test the skills on the same web application.
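The control flow described above, try the freshly learned dream first and fall back to the built-in skills only if it has no answer, can be sketched language-agnostically. The names below are illustrative only and are not the actual SUSI Server API:

```python
def answer(query, dream_skills, builtin_skills):
    """Return the first answer found: dream skills take priority,
    built-in skills (existing SUSI Skills and console services)
    are the fallback, mirroring the logic in SusiService."""
    for skills in (dream_skills, builtin_skills):
        reply = skills.get(query)
        if reply is not None:
            return reply
    return "I don't know."

# Toy skill tables standing in for a learned dream and the
# server's built-in intents:
dream = {"hello": "Hello from the dream!"}
builtin = {"hello": "Hi.", "bye": "Goodbye."}
```

Because the dream is consulted first, a dream skill can shadow a built-in answer for the same query, which is exactly why testing a skill inside a dream works without touching the server's permanent skill set.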
https://blog.fossasia.org/adding-servlet-for-susi-service/
CC-MAIN-2022-40
refinedweb
580
50.63
In this short tutorial you will learn how to create a file dialog and load its file contents. The file dialog is needed in many applications that use file access.

To get a filename (not file data) in PyQt you can use the line:

If you are on Microsoft Windows use

An example below (includes loading file data):

Result (output may vary depending on your operating system):

6 thoughts on "QT4 File Dialog"

How about loading an image file and displaying it?

Hi there, I'm somewhat at a loss on labels… how do these work in PyQt4? Thanks, Melissa

To add a label to a window you can call QLabel. I made an example:

For Windows the line should read:

filename = QFileDialog.getOpenFileName(w, 'Open File', 'C:\\')

You can also just do the following to start in the current working directory:

filename = QFileDialog.getOpenFileName(w, 'Open File', '')

It's best to choose an option that is cross platform. Another option is to open the file dialog to the user's home directory. Here is a cross platform way of doing that with the Python standard library:

from os.path import expanduser
home_dir = expanduser('~')

Thanks for these great tutorials Frank. They are very helpful!
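Picking the dialog's starting directory is the only platform-specific part of the discussion in the comments above, and the standard library solves it portably. A small sketch (the helper name is made up, and the PyQt call in the comment is indicative only, since it needs a running Qt application):

```python
import os
from os.path import expanduser

def dialog_start_dir():
    """Return a sensible, cross-platform starting directory for a
    file dialog: the current user's home directory, falling back
    to the current working directory if expansion fails."""
    home = expanduser("~")
    return home if os.path.isdir(home) else os.getcwd()

# With PyQt4 this would be used like (not executed here):
#   filename = QFileDialog.getOpenFileName(w, 'Open File', dialog_start_dir())
```

This avoids hard-coding 'C:\\' on Windows or '/' on Unix, which is exactly the portability problem raised in the comment thread.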
https://pythonspot.com/en/qt4-file-dialog/
CC-MAIN-2017-09
refinedweb
207
61.56
Orochi - A DI Container For Perl

use Orochi;

my $c = Orochi->new();
$c->inject_constructor('/myapp/foo' => (
    class => 'SomeClass',
    args  => { bar => $c->bind_value('/myapp/bar') },
) );
$c->inject_literal( '/myapp/bar' => [ 'a', 'b', 'c' ] );

WARNING: I'd rather use Bread::Board, but I have a need for a particular kind of DI NOW, and Bread::Board currently doesn't have those features. Therefore here's my version of it. If/when Bread::Board becomes suitable for my needs, this module may simply be replaced / deleted from CPAN. You've been warned.

Orochi is a simple Dependency Injection -ish system. Orochi in itself is just a big key/value store, with a bit of runtime lazy expansion / instantiation of objects mixed in.

This is probably how you'd want to use this module. Please see MooseX::Orochi for details.

You may specify the following arguments: If specified, adds a prefix to the given path through mangle_path().

Retrieves the value associated with the given $path. If the value needs to be expanded (i.e., create an object), then it will be done automatically.

Fixes the given path, if necessary. This adds the prefix specified in the Orochi constructor, for example

Injects an Orochi::Injection object.

Creates a BindValue injection, which is a lazy evaluation based on an Orochi key. If given a list, will cascade through the given paths until one returns a defined value.

Injects an object constructor. Setter injection also uses this.

Injects a literal value.

Injects a MooseX::Orochi based class. The class that is being injected does NOT have to use MooseX::Orochi, as long as one of the meta classes in the inheritance hierarchy does so.

Looks for modules in the given namespace, and calls inject_class on each class.

Documentation. Samples. Tests.

Daisuke Maki <daisuke@endeworks.jp>

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See
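The core idea, a key/value store whose values may be lazily expanded injections, is small enough to sketch in a few lines. This is shown in Python purely to illustrate the concept; Orochi itself is Perl and Moose-based, and these class names are invented for the sketch:

```python
class BindValue:
    """A lazy reference to another path in the container."""
    def __init__(self, path):
        self.path = path

class Container:
    def __init__(self):
        self._injections = {}

    def inject_literal(self, path, value):
        self._injections[path] = lambda: value

    def inject_constructor(self, path, cls, args):
        # Arguments may contain BindValue placeholders; they are
        # resolved only when the object is actually requested.
        def build():
            resolved = {k: self.get(v.path) if isinstance(v, BindValue) else v
                        for k, v in args.items()}
            return cls(**resolved)
        self._injections[path] = build

    def bind_value(self, path):
        return BindValue(path)

    def get(self, path):
        # Expansion (object creation) happens here, at lookup time.
        return self._injections[path]()

# Mirroring the synopsis above:
class SomeClass:
    def __init__(self, bar):
        self.bar = bar

c = Container()
c.inject_constructor('/myapp/foo', SomeClass, {'bar': c.bind_value('/myapp/bar')})
c.inject_literal('/myapp/bar', ['a', 'b', 'c'])
```

The point of the BindValue indirection is ordering independence: '/myapp/foo' can reference '/myapp/bar' before the literal has been injected, because resolution is deferred until get() is called.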
http://search.cpan.org/~dmaki/Orochi-0.00010/lib/Orochi.pm
CC-MAIN-2015-48
refinedweb
318
57.37
<%@ page contentType="text/html;charset=UTF-8" %>
<html>
    <head>
        <title>My Posts</title>
    </head>
    <body>
        <h1>My Posts</h1>
        <g:each
            <div>
                <h2>${post.title}</h2>
                <p>${post.teaser}</p>
                <p>Last Updated: ${post.lastUpdated}</p>
            </div>
        </g:each>
    </body>
</html>

This listing shows the list Groovy Server Page that you use to render the posts supplied by the PostController list action. The code iterates over the posts in the model and displays the title and teaser text.

You are almost ready to display your posts now. The next step is to load some test data when you start the application to verify that you can see the posts. Grails provides a BootStrap Groovy class in the grails-app/conf directory. This class has an init closure that is executed when the application starts up. If you put some initialization data in this closure it will be available to your application. The following code shows how to use the Grails BootStrap class to load data into the application before start up:

def init = { servletContext ->
    new Post(title:"First Grails Project",
             teaser:"Clearing out the clutter",
             content:"The full content of the article",
             published:true).save()
    new Post(title:"Getting Ready for Go Live",
             teaser:"The follow up on some real world issues.",
             content:"The full content of the article",
             published:false).save()
}

Restart the running Grails application, go to the post index page, and you can see the two posts you have created, listed in the order you created them in the bootstrap (see Figure 2).

Create a New Post

To allow users to create a post, you need to provide a link from the list page, create the GSP input form, and create a new action to save the data to your database. So far, you have used your first Grails tag to create a link to the edit Post action (by default, Grails tags are defined in the g namespace).
Now define the controller and the action that the link should send the user to as follows:

<g:link
    Create a new post
</g:link>

Remember that you have already defined the edit action. What you don't have is an edit page to render the form that allows posts to be created or edited. Grails provides a number of tag libraries for rendering forms, input fields, and validation errors that occur from a form submission:

Listing 3 contains code for the Groovy server page that allows users to edit or create a Post. The edit.gsp code needs to go in the same location as your list.gsp: grails-app/views/post. You can see from the <g:form> tag in Listing 3 that you need to create an action called save on the PostController to handle the form submission:

def save = {
    def post = loadPost(params.id)
    post.properties = params
    if(post.save()) {
        redirect(action:'list')
    }
    else {
        render(view:'edit', model:[post:post])
    }
}

private loadPost(id) {
    def post = new Post();
    if(id) {
        post = Post.get(id)
    }
    return post
}

To enable the save action to both create a new post and update an existing post, the first thing you do is load the post. You refactored out the logic in your edit action earlier to a private method that can be reused by this action. Once you have the post, you need to update its properties so it can be saved. Grails provides all the values you submitted from the form as a Map called params, and also provides a properties property on each domain object to expose all the properties of the object as named values in a Map. This allows you to set all the values you have sent from the form directly onto the domain object by assigning the request params to the domain object's properties field.

The validation is performed when you try to save the post. If the validation succeeds, you send the user back to the list page to view the post. If it fails, you render the edit page again and put the updated post object in the model.
Edit a Post

To allow a user to edit a post, add a link to the edit action in the PostController:

<g:link
    Edit this post
</g:link>

This is almost the same as the link to create a new post, but in this case you specify the identifier of the post. This will allow the edit action to load the post to be edited from the database and display it.

View a Post

You can now allow users to view the full text of a post. Create a view action on the PostController:

def view = {
    render(view:'view', model:[post:Post.get(params.id)])
}

Next, create a view GSP in the grails-app/views/post directory:

<%@ page contentType="text/html;charset=UTF-8" %>
<html>
    <head>
        <title>${post.title}</title>
    </head>
    <body>
        <h1>${post.title}</h1>
        <p>${post.teaser}</p>
        <div>${post.content}</div>
    </body>
</html>

This listing shows the Groovy server page used to render the details of a post. Finally, add the link to the post list page to allow a post to be viewed:

<g:link
    View this post
</g:link>
http://www.devx.com/Java/Article/37487/0/page/4
CC-MAIN-2021-49
refinedweb
865
67.69
#include "dmx.h"
#include "dmxeq.h"
#include "dmxinput.h"
#include "dmxlog.h"
#include "dmxdpms.h"
#include "inputstr.h"
#include "scrnintstr.h"
#include "XIproto.h"
#include "extinit.h"

The size of our queue. (The queue provided by mi/mieq.c has a size of 256.)

Information about the event.

Event queue.

This function adds an event to the end of the queue. If the event is an XInput event, then the next event (the valuator event) is also stored in the queue. If the new event has a time before the time of the last event currently on the queue, then the time is updated for the new event. Must be reentrant with ProcessInputEvents. Assumption: dmxeqEnqueue will never be interrupted. If this is called from both signal handlers and regular code, make sure the signal is suspended when called from regular code.

This function is called from ProcessInputEvents() to remove events from the queue and process them.

Make pScreen the new screen for enqueueing events. If fromDIX is TRUE, also make pScreen the new screen for dequeuing events.
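The enqueue rule documented above, clamping an out-of-order timestamp to the time of the last queued event, is easy to model. A sketch of such a fixed-size queue follows (Python, purely illustrative; dmxeq itself is C code inside the X server, and the class and method names here are invented):

```python
from collections import deque

QUEUE_SIZE = 256  # the queue provided by mi/mieq.c uses 256

class EventQueue:
    def __init__(self, size=QUEUE_SIZE):
        self.events = deque(maxlen=size)
        self.last_time = 0

    def enqueue(self, event_time, payload):
        # If the new event is timestamped before the last event
        # currently on the queue, update its time so the queue
        # stays monotonically ordered.
        if event_time < self.last_time:
            event_time = self.last_time
        self.last_time = event_time
        self.events.append((event_time, payload))

    def dequeue(self):
        """Pop the oldest event, or return None when empty."""
        return self.events.popleft() if self.events else None

q = EventQueue()
q.enqueue(100, "key-press")
q.enqueue(90, "key-release")   # out of order: clamped to 100
```

The clamping matters because downstream event processing assumes non-decreasing timestamps; rewriting the time on enqueue is cheaper than sorting the queue.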
http://dmx.sourceforge.net/html/dmxeq_8c.html
CC-MAIN-2017-17
refinedweb
177
80.38
Details

- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 0.12
- Component/s: Build Tools
- Labels: None
- Environment: Debian Squeeze

Description

General build improvements (mostly for the cmake build) which were made when working with Debian Squeeze. From OP: I'm using several patches to build QPID on Debian Squeeze.

Activity

In the 06 patch there are a number of uses of strerror():

- the project standard is to use :: to preface global library calls (to disambiguate them from any local meaning of the function name), BUT
- strerror() is not thread safe, so we have an internal thread safe strError() function that we use instead - you should use this.

The poller selection patch (02): I like the changes, however I think they are based on a small misunderstanding: At present the poll based poller does not exist - it has long been suggested as a fallback poller for posix based systems but never been actually implemented, so unhappily it needs to be commented out from actually being detected. Actually I don't think the Solaris ecf poller actually works, although there is bit rotted code in the tree for it.

I'm using these patches to build qpid v0.12 using cmake on Debian Squeeze. When I started packaging I was using v0.10, that's why some diffs say v0.10, but they still apply (and also apply to current git with some offsets, but I didn't try to build). I'll incorporate your comments and attach new patches.

Patches apply to trunk fine for me and build and test passes. One comment is that the --prefix option to configure is no longer sufficient for installing to a non-standard location. The python binding libs are installed to a directory independent of any prefix. You can of course separately set the location for these when running configure, but it's not as obvious. The change in location however is entirely correct (indeed this will also resolve QPID-3458!).
Thanks for taking the time to create and submit these!

Thanks for the comments. I've attached an additional patch which

- fixes another ssl.cmake CnP error,
- changes the strerror calls to strError,
- distinguishes between Linux and Windows on find_package(Boost ...)
- makes the pragma code depend on GCC >= 4.2 (at least, that's the first time it is documented in the gcc pragma docs, in v4.2).

I've created an account on reviews.apache.org (which was hard to find).

- Should I upload a combined patch to reviewboard?
- What is the "Base Directory"? (if I generate a patch using git diff, base is '/' ?)

Probably someone can add some additional information to the Patch Submission paragraph, also including a link to JIRA. What's the preferred way to build the manpages? There is the cpp/docs/man/generate_manpage script, which calls sed...

Fixed

My ssl.cmake change is correct; the else block contains the linux ssl build code. The original code was:

if (CMAKE_SYSTEM_NAME STREQUAL Windows)
else (CMAKE_SYSTEM_NAME STREQUAL Windows)
endif (CMAKE_SYSTEM_NAME STREQUAL Windows)

Fixed

Fixed

I'm currently building using cmake. I used this patch to allow auto-selection of the poller code when I build QPID using autotools. I don't care if the patch won't be accepted - it just seemed convenient.

PYTHON_LIB is set in the configure.ac file, AC_SUBST'ed as [Directory to install python bindings in]. Maybe it's better to use pythondir from AM_PATH_PYTHON (cpp/m4/python.m4), which uses PYTHON_PREFIX, which is set to '$ ', and drop all the PYTHON_LIB code from configure.ac?

- fixes another ssl.cmake CnP error,
- changes the strerror calls to strError,
- distinguishes between Linux and Windows on find_package(Boost ...)
- makes the pragma code depend on GCC >= 4.2.

@Jose Pedro Olveira I'm planning to apply that patch. You should raise this on the qpid-dev list if you think it is important enough. The 0.14 release has closed down for fixes only at this point.
I have committed most of the build improvement changes now; much of the work was used, but I have modified it in places, so the patches will not exactly match what has been checked in.

- The improvements to the cmake build made it for the 0.14 release, however
- The improvements to the autotools build just missed the cut.

The patches I haven't committed I either didn't like, thought were wrong, didn't understand the purpose of, or thought they weren't connected either to the original or final bug title. If they are important to you please resubmit them against trunk. I note that the diffs seem to be against 0.10, so there is some chance they don't apply cleanly against trunk. [BTW it would probably be easier to review these changes if you made a review board review, as I'd get a bit more context there.]

Specifically I'm a little unclear about some of the cmake changes:

+ else (CMAKE_SYSTEM_NAME STREQUAL Linux)

in ssl.cmake just looks wrong to me, and I wonder if it's ever been run, as it should make cmake give an error (CMake cares about matching the "if" parameters with the "else" parameters and gives an error if they don't match but the corresponding "if" hasn't changed).

Having made these detailed comments, on the whole this is a very good piece of work and neatens some things a lot. It looks like it moves in the direction of cmake build parity too, which is a big plus.
https://issues.apache.org/jira/browse/QPID-3464
Perl Programming/Humour

Obfuscated code

Some people claim Perl stands for 'Pathologically Eclectic Rubbish Lister' due to the high use of meaningful punctuation characters in the language syntax. In common with the C programming language, Perl has a tradition of obfuscated code; see, for example, the Perl Poetry section of perlmonks.org.

A Question

#!/usr/bin/perl
# which art form is practiced and appreciated by lawyers and perl programmers alike?
use strict;
my $scupture = join("",<DATA>);
$scupture =~ s/^\s*(.*?)\s*$/$1/;
print unpack("A*", eval($scupture));
__DATA__
"\x20\x20\x0d\x0a\x6f\x62\x66\x75\x73\x63\x61\x74\x69\x6f\x6e\x0d\x0"
# Kevin Bade

Just another Perl Hacker

Your mission, should you choose to accept it, is to write a one-liner perl script which displays the phrase "Just another Perl hacker," (including the comma, and capitalization as shown). If successful, you win the right to use it as an email signature identifying yourself as a Perl hacker. Entries will be judged on how smart-ass the code is. Around 100 of the first JAPHs and some funky obfu Perl can be seen on CPAN.

Acme

There's always a place in Perl for odd modules, and one such place is the Acme:: namespace. If you have a module which knows how long a piece of string is, or one which converts your perl script into an image of Che Guevara, post it here.

Golf

Perl is a very compact language. So compact, that some have even created a game around perl's terseness called perlgolf. In perlgolf, you are given a problem to solve. You must solve it in the fewest number of characters possible. A scorecard is kept, and after 18 "holes", a winner is announced.
https://en.wikibooks.org/wiki/Perl_Programming/Humour
Opened 8 years ago
Closed 5 years ago
Last modified 17 months ago

#5476 closed Uncategorized (wontfix)
Image thumbnails on image fields in the admin

Description

I think it would be pretty nifty if the newforms-admin could render image thumbnails on image upload fields. It's a snippet but why do more when forms are already getting jazzed up and it goes a long way in usability?

Attachments (3)

Change History (17)

comment:1 Changed 8 years ago by webjunkie
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Design decision needed

comment:2 follow-up: ↓ 3 Changed 8 years ago by deepak <deep.thukral@…>

comment:3 in reply to: ↑ 2 Changed 8 years ago by ubernostrum

Replying to deepak <deep.thukral@gmail.com>:

> For the sake of simplicity -1 from me. Django shouldn't require any external package forcibly.

Django's ImageField already requires PIL, so this wouldn't change anything. Similarly, XMLField requires Jing, memcached caching requires the memcached bindings and using a database (except SQLite when using Python 2.5) requires a database adapter module.

comment:4 Changed 8 years ago by ubernostrum

comment:5 Changed 8 years ago by xian
- Keywords admin thumbnails imagefields added
- Owner changed from nobody to xian
- Version changed from SVN to newforms-admin

Assigning this to myself and moving to newforms-admin. As I'm doing newforms-admin interface work. :) I'd like to see thumbnails for imagefields in both the change_list and the form itself if there is already an image set. However this only makes sense to do if #4115 happens. And if it does it should happen in templates/templatetags for the admin not in the model as in the linked-to snippet.

comment:6 Changed 8 years ago by anonymous
- Cc jdunck+django@… added

comment:7 Changed 8 years ago by brosner
- Keywords nfa-someday added

This functionality is not critical before the merge to trunk. Tagging with nfa-someday.
Changed 7 years ago by vitja
preview image on model admin page

comment:8 Changed 7 years ago by vitja

Hi! I've recently attached a patch that shows the image on the admin page:

image = models.ImageField(admin_preview=True)

I'm going to create a generic way of creating thumbnails, so could you give me a tip? I think that should be a field, e.g.

image_thumb = ImageThumbField(thumb_for='image', thumbnalizer=create_100x100_thumb)

so adding thumbs to list_view will be as easy as:

def get_thumb(self):
    return '<img src="%s" />' % self.get_thumb_url
get_thumb.allow_tags = True

or even:

@image_tag_for('thumb')
def get_thumb():
    pass

Changed 7 years ago by vitja
preview image on model admin page

Changed 7 years ago by vitja
thumbnail field

comment:9 Changed 7 years ago by Alex
- Version changed from newforms-admin to SVN

comment:10 Changed 7 years ago by floledermann

This snippet is a much better starting point: I just looked around in the source files and I think it cannot be done with template work alone, but one has to create an ImageWidget for the admin like in the snippet. I got this code working on my site in a few minutes and I think this would be a good basis for a patch. It only depends on PIL but degrades gracefully if not present. Is there anything I can do to help? (I have not yet contributed to Django code, so would need some guidance.)

comment:11 Changed 5 years ago by mtredinnick
- Resolution set to wontfix
- Status changed from new to closed

There seem to be too many variable preferences required here for how the thumbnail display would work. It's a lot of complexity to put into the admin by default. Fortunately, it's possible without modifying Django for individual apps: specify a model form on the ModelAdmin subclass and a widget for that field that is the custom widget that displays the thumbnail.
comment:12 Changed 3 years ago by anonymous
- Easy pickings unset
- Severity set to Normal
- Type set to Uncategorized
- UI/UX unset

comment:13 Changed 17 months ago by anonymous

@mtredinnick It's easy to say "it's easy"; it's harder (though not all that much) to actually provide documentation and example code so that a non-Django-expert can understand how to add thumbnails to their admin. In other words, you can quibble over whether or not thumbnail support should be built into Django, but you can't argue with the fact that right now it's more difficult than it needs to be for someone to have thumbnails in their admin. If it's not going to be built in, it would benefit Django to have some explanation of the non-built-in way of doing thumbnails.

comment:14 Changed 17 months ago by russellm

Firstly - it's worth pointing out that @mtredinnick is deceased.

Secondly, you've apparently missed the point of what he was saying: there isn't a single, obvious way to do this. It would be impossible for Django to document how it could be done without "blessing" a particular approach. This would almost certainly involve blessing one of the third-party thumbnailing libraries (sorl-thumbnails and easy-thumbnails, to pick just two). More broadly, as a project, we have an open problem of helping newcomers find appropriate third-party tools in the Django ecosystem. However, I don't think we can address that problem by documenting examples that use selected packages.
https://code.djangoproject.com/ticket/5476
python tutorial - Python interview questions and answers for experienced - learn python - python programming

python interview questions: 121
Print in terminal with colors using Python?

def print_format_table():
    """ prints table of formatted text format options """
    for style in range(8):
        for fg in range(30, 38):
            s1 = ''
            for bg in range(40, 48):
                format = ';'.join([str(style), str(fg), str(bg)])
                s1 += '\x1b[%sm %s \x1b[0m' % (format, format)
            print(s1)
        print('\n')

print_format_table()

By Python tutorial team

Or, with a small helper function per colour:

def prRed(prt): print("\033[91m {}\033[00m".format(prt))
def prGreen(prt): print("\033[92m {}\033[00m".format(prt))
def prYellow(prt): print("\033[93m {}\033[00m".format(prt))
def prLightPurple(prt): print("\033[94m {}\033[00m".format(prt))
def prPurple(prt): print("\033[95m {}\033[00m".format(prt))
def prCyan(prt): print("\033[96m {}\033[00m".format(prt))
def prLightGray(prt): print("\033[97m {}\033[00m".format(prt))
def prBlack(prt): print("\033[98m {}\033[00m".format(prt))

prGreen("Hello world")

python interview questions: 122
How to get line count cheaply in Python?

You need to get a line count of a large file (hundreds of thousands of lines) in python.

def file_len(fname):
    with open(fname) as f:
        for i, l in enumerate(f):
            pass
    return i + 1

One line, probably pretty fast:

num_lines = sum(1 for line in open('myfile.txt'))

python interview questions: 123
What's the list comprehension?

>>> L = [0, 1, 2, 3, 4, 5]
>>> L2 = [x**3 for x in L]
>>> L2
[0, 1, 8, 27, 64, 125]

python interview questions: 124
Which one has higher precedence in Python? - NOT, AND, OR

>>> True or False and False
True

- AND has higher precedence than OR, so the AND is evaluated first and "True" is printed out.
- Then, how about this one?

>>> not True or False or not False and True
True

- NOT has the highest precedence, then AND, then OR.

python interview questions: 125
What is the use of enumerate() in Python?

- Using the enumerate() function you can iterate through the sequence and retrieve the index position and its corresponding value at the same time.

>>> for i, v in enumerate(['Python', 'Java', 'C++']):
...     print(i, v)
0 Python
1 Java
2 C++

python interview questions: 126.

python interview questions: 127
What is a Class? How do you create it in Python?

Note: whenever you define a method inside a class, the first argument to the method must be self (where self is a pointer to the class instance). self must be passed as an argument to the method, though the method does not take any arguments.

python interview questions: 128
What is Exception Handling? How do you achieve it in Python?

python interview questions: 129
What are Accessors, mutators, .

python interview questions: 130
Differentiate between .py and .pyc files?

- A .py file holds Python source code, while a .pyc file holds the compiled byte code; ".pyc" is the compiled version of a Python file.
- This file is automatically generated by Python to improve performance.
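A quick way to see the .py/.pyc relationship from question 130 is the standard library's py_compile module; the file and directory names below are made up for the demonstration:

```python
import os
import py_compile
import tempfile

# Write a throwaway source (.py) file; the name is arbitrary for this demo.
src = os.path.join(tempfile.mkdtemp(), "demo.py")
with open(src, "w") as f:
    f.write("print('hello')\n")

# Byte-compile it; CPython writes the result to a .pyc file
# (under __pycache__/ by default) and returns its path.
pyc = py_compile.compile(src)
print(pyc.endswith(".pyc"))  # → True
```

This is also what the interpreter does implicitly the first time a module is imported.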
https://www.wikitechy.com/tutorials/python/python-interview-questions-and-answers-for-experienced
The ISO/IEC 9899:1990, Programming Languages - C standard specifies the form and establishes the interpretation of programs written in C. However, this standard leaves a number of issues as implementation-defined, that is, as varying from compiler to compiler. This chapter details these areas. They can be readily compared to the ISO/IEC 9899:1990 standard itself: Each item uses the same section text as found in the ISO standard. Each item is preceded by its corresponding section number in the ISO standard.

Table E–3 Values for a double
Table E–4 Values for long double

Numbers are rounded to the nearest value that can be represented.

unsigned int as defined in stddef.h; unsigned long for -Xarch=v9. The bit pattern does not change for pointers and values of type int, long, unsigned int and unsigned long. int as defined in stddef.h; long for -Xarch=v9 (SPARC) (x86).

begins in the root directory only. Quoted file names in include directives are supported. Source file characters are mapped to their corresponding ASCII values.

The following pragmas are supported. See 2.8 Pragmas for more information.

- align integer (variable[, variable])
- does_not_read_global_data (funcname[, funcname])
- does_not_return (funcname[, funcname])
- does_not_write_global_data (funcname[, funcname])
- error_messages (on|off|default, tag1[ tag2... tagn])
- fini (f1[, f2..., fn])
- ident string
- init (f1[, f2..., fn])
- inline (funcname[, funcname])
- int_to_unsigned (funcname)
- MP serial_loop
- MP serial_loop_nested
- MP taskloop
- no_inline (funcname[, funcname])
http://docs.oracle.com/cd/E19205-01/819-5265/6n7c29e7n/index.html
Creating a launch file

Goal: Create a launch file to run a complex ROS 2 system.
Tutorial level: Beginner
Time: 10 minutes

Contents

Background

In the tutorials up until now, you have been opening new terminals for every new node you run. As you create more complex systems with more and more nodes running simultaneously, opening terminals and reentering configuration details becomes tedious. Launch files allow you to start up and configure a number of executables containing ROS 2 nodes simultaneously. Running a single launch file with the ros2 launch command will start up your entire system - all nodes and their configurations - at once.

Prerequisites

This tutorial uses the rqt_graph and turtlesim packages. You will also need to use a text editor of your preference. As always, don’t forget to source ROS 2 in every new terminal you open.

Tasks

1 Setup

Create a new directory to store your launch file:

mkdir launch

Create a launch file named turtlesim_mimic_launch.py by entering the following command in the terminal (Linux/macOS):

touch launch/turtlesim_mimic_launch.py

or, on Windows:

type nul > launch/turtlesim_mimic_launch.py

You can also go into your system’s file directory using the GUI and create a new file that way. Open the new file in your preferred text editor.

2 Write the launch file

Let’s put together a ROS 2 launch file using the turtlesim package and its executables.
Copy and paste the complete code into the turtlesim_mimic_launch.py file:

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='turtlesim',
            namespace='turtlesim1',
            executable='turtlesim_node',
            name='sim'
        ),
        Node(
            package='turtlesim',
            namespace='turtlesim2',
            executable='turtlesim_node',
            name='sim'
        ),
        Node(
            package='turtlesim',
            executable='mimic',
            name='mimic',
            remappings=[
                ('/input/pose', '/turtlesim1/turtle1/pose'),
                ('/output/cmd_vel', '/turtlesim2/turtle1/cmd_vel'),
            ]
        )
    ])

2.1 Examine the launch file

These import statements pull in some Python launch modules.

from launch import LaunchDescription
from launch_ros.actions import Node

Next, the launch description itself begins:

def generate_launch_description():
    return LaunchDescription([

    ])

Within the LaunchDescription is a system of three nodes, all from the turtlesim package. The goal of the system is to launch two turtlesim windows, and have one turtle mimic the movements of the other. The first two actions in the launch description launch two turtlesim windows:

Node(
    package='turtlesim',
    namespace='turtlesim1',
    executable='turtlesim_node',
    name='sim'
),
Node(
    package='turtlesim',
    namespace='turtlesim2',
    executable='turtlesim_node',
    name='sim'
),

Note the only difference between the two nodes is their namespace values. Unique namespaces allow the system to start two simulators without node name nor topic name conflicts. Both turtles in this system receive commands over the same topic and publish their pose over the same topic. Without unique namespaces, there would be no way to distinguish between messages meant for one turtle or the other. The final node is also from the turtlesim package, but a different executable: mimic.
Node(
    package='turtlesim',
    executable='mimic',
    name='mimic',
    remappings=[
        ('/input/pose', '/turtlesim1/turtle1/pose'),
        ('/output/cmd_vel', '/turtlesim2/turtle1/cmd_vel'),
    ]
)

This node has added configuration details in the form of remappings. mimic’s /input/pose topic is remapped to /turtlesim1/turtle1/pose and its /output/cmd_vel topic to /turtlesim2/turtle1/cmd_vel. This means mimic will subscribe to /turtlesim1/sim’s pose topic and republish it for /turtlesim2/sim’s velocity command topic to subscribe to. In other words, turtlesim2 will mimic turtlesim1’s movements.

3 ros2 launch

To launch turtlesim_mimic_launch.py, enter into the directory you created earlier and run the following command:

cd launch
ros2 launch turtlesim_mimic_launch.py

Note: It is possible to launch a launch file directly (as we do above), or provided by a package. When it is provided by a package, the syntax is:

ros2 launch <package_name> <launch_file_name>

You will learn more about creating packages in a later tutorial.

Two turtlesim windows will open, and you will see the following [INFO] messages telling you which nodes your launch file has started:

[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [turtlesim_node-1]: process started with pid [11714]
[INFO] [turtlesim_node-2]: process started with pid [11715]
[INFO] [mimic-3]: process started with pid [11716]

To see the system in action, open a new terminal and run the ros2 topic pub command on the /turtlesim1/turtle1/cmd_vel topic to get the first turtle moving:

ros2 topic pub -r 1 /turtlesim1/turtle1/cmd_vel geometry_msgs/msg/Twist "{linear: {x: 2.0, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: -1.8}}"

You will see both turtles following the same path.

4 Introspect the system with rqt_graph

While the system is still running, open a new terminal and run rqt_graph to get a better idea of the relationship between the nodes in your launch file.
Run the command:

rqt_graph

A hidden node (the ros2 topic pub command you ran) is publishing data to the /turtlesim1/turtle1/cmd_vel topic on the left, which the /turtlesim1/sim node is subscribed to. The rest of the graph shows what was described earlier: mimic is subscribed to /turtlesim1/sim’s pose topic, and publishes to /turtlesim2/sim’s velocity command topic.

Summary

Launch files simplify running complex systems with many nodes and specific configuration details. You can create launch files using Python, and run them using the ros2 launch command.

Next steps

In the next tutorial, Recording and playing back data, you’ll learn about another helpful tool, ros2bag.
https://docs.ros.org/en/foxy/Tutorials/Launch-Files/Creating-Launch-Files.html
Program Crash on Exit

- cazador7907 last edited by

I've run into an odd problem in a program interface that I've been playing with to learn how to create an interface. After some work, I finally had the interface the way that I wanted it, but when I exit the program, it crashes (and then relaunches). I've looked through the code and I'm not sure why this is happening. The program seems to die with a bad memory access error when it attempts to delete the buttonLayout object. I've posted the MainWindow class code below.

@
#include "mainwindow.h"

MainWindow::MainWindow()
{
    setMinimumSize(600, 400);
    setMaximumSize(1000, 800);
    resize(752, 533);
    setWindowTitle("Two Button Test");

    messageLayout = new QHBoxLayout;
    messageLayout->setGeometry(QRect(10, 10, 731, 31));
    buttonLayout = new QHBoxLayout;
    buttonLayout->setGeometry(QRect(10, 489, 731, 41));
    mapBoard = new QHBoxLayout;
    mapBoard->setGeometry(QRect(10, 50, 731, 431));
    mainLayout = new QVBoxLayout;

    // Widgets
    spacer = new QSpacerItem(40, 20, QSizePolicy::Expanding, QSizePolicy::Minimum);
    gameTurnWidget *gameTurn = new gameTurnWidget(&helper, this);
    lstRRWidget = new QListWidget(this);
    gameBoardWidget *gameBoard = new gameBoardWidget(&helper, this);

    // Buttons
    quitButton = createButton("Quit");
    connect(quitButton, SIGNAL(clicked()), this, SLOT(close()));
    advanceTurn = createButton("Next Turn");
    connect(advanceTurn, SIGNAL(clicked()), gameTurn, SLOT(nextTurn()));

    buttonLayout->setSpacing(3);
    buttonLayout->addStretch(1);
    messageLayout->setSpacing(3);
    messageLayout->addStretch(1);
    messageLayout->addItem(spacer);
    messageLayout->addWidget(gameTurn, 0, Qt::AlignCenter | Qt::AlignRight);
    mapBoard->addWidget(gameBoard);
    mapBoard->addWidget(lstRRWidget);
    buttonLayout->addItem(spacer);
    buttonLayout->addWidget(advanceTurn, 0, Qt::AlignBottom | Qt::AlignRight);
    buttonLayout->addWidget(quitButton, 0, Qt::AlignBottom | Qt::AlignRight);

    mainLayout->addLayout(messageLayout);
    mainLayout->addLayout(mapBoard);
    mainLayout->addLayout(buttonLayout);
    setLayout(mainLayout);
}

MainWindow::~MainWindow()
{
    delete quitButton;
    delete advanceTurn;
    delete lstRRWidget;
    delete spacer;
    delete buttonLayout;
    delete messageLayout;
    delete mainLayout;
}

QPushButton *MainWindow::createButton(QString caption)
{
    QPushButton *button = new QPushButton(caption, this);
    button->setFlat(true);
    return button;
}
@

- mlong Moderators last edited by

When you call addWidget() or addItem() on a QLayout, the layout takes ownership of the widget and becomes responsible for deleting the item. So in your destructor, delete is being called twice on the widgets that have been added to the layouts, thus the crash. If you remove the delete calls for the items in layouts, things should be fine.

- changsheng230 last edited by

When your QObject-inherited objects are created on the heap with a parent, you should NOT delete them yourselves, since the QObject tree will take care of the memory management.

[quote author="changsheng230" date="1311043577"]When your QObject-inherited objects are created on the heap with a parent, you should NOT delete them yourselves, since the QObject tree will take care of the memory management.[/quote]

It is not necessary to delete them, but it is allowed.

There are some bugs in your code:
- you add the object spacer to more than 1 layout
- your crash is also due to the spacer. The spacer is no widget and belongs to the layout it is added to. This means it is deleted twice! So use one spacer for each layout, and if you don't need it later on, don't store the pointer to it.

By the way, why do you use setGeometry for the layouts? You should not do it, as it is changed after adding the layout to the global one.

- cazador7907 last edited by

Thanks for the very quick responses! I created individual spacers for the buttonLayout and the messageLayout. I also removed the delete statements for the different widgets contained in the layouts. No more crashing on close.
I used the setGeometry to position the Message and Button layouts at the top and bottom of the screen, but that was before I added the vertical box layout. I suppose that the statement is somewhat redundant now that they are contained in another layout.

Bumping an existing thread rather than starting a new one since it's the same topic.

/* * */

I am struggling to get my head around this for some reason; having come from Borland C++ I'm struggling with the layouts concept still. (Been using Qt for less than a week still.)

Does this mean that objects/widgets do not require deleting, at all, if they are assigned to a layout? When does the top layout get deleted (and in turn delete all the other objects/widgets)? (At exit, or when the tab is deleted in my case?)

I have an app that creates the same class many times as needed and assigns widgets to various Tabs for display. When the tab is closed (using delete ui->tabWidget->currentWidget();) I get a seg fault and I suspect it's because of the double delete as discussed above. Problem is, I'm not sure if my components are getting removed from memory, which is a problem for me (the app needs to run for weeks at a time).

Could someone direct me to a document that discusses this, or explain best practices please. Thanks!

Welcome to the devnet forum. There is a lot of documentation you can read. I suggest you read a good book about Qt; there are several available. "This one gives you a good place to start.": Furthermore you can read the docs too. "about the Qt object model": about the layout system

Feel free to ask specific questions about your code, and please start a new topic in that case. BTW: you are asking for sigfaults with a nickname like that ;)

Eddy, I downloaded that book just this morning and am getting ready to read it tonight! Thanks for the extra links, I'll add them to my list of late night reading.
I searched the docs but couldn't get a reasonable description of what I was experiencing, until I came across this thread. Thanks for the help.

ps. The handle is actually a result of my frustration. :)

Hi, in the first chapters the authors explain it. Have a good reading.

Hi seg_fault, some general info: the deleting is not due to layouts directly; they just do something for you that you could also do on your own :-)

The magic here is the parent/child relationship. All QObject-derived classes may have a parent. All widgets have a parent unless they are top level. A QObject-derived class deletes all children during its own destruction.

Now the layout comes into the game: the layout reparents the widgets so they have a parent which is responsible for the lifetime of the object. A spacer (which is no widget) belongs to the layout, so the layout deletes it. And a layout belongs to a layout or a widget, so they are also automatically deleted.

Summary: only delete top-level items directly, or widgets you definitely need destructed. They will destroy all child objects implicitly.
https://forum.qt.io/topic/7717/program-crash-on-exit
⚠ This page contains old, outdated, obsolete, … historic or WIP content! No warranties e.g. for correctness! Table of Contents - Introduction and Licence - Documentation and Support (FAQ, IRC, manual page, mailing lists, RSS feeds, …) - Installation - Upgrade your packages from older mksh - Development version - Inclusion in operating systems - comparison with other shells - Testsuite Results (regression tests) - on version numbers — for packagers - future plans (older or unrealistic ones) - Upgrade Caveat — for users - ChangeLog - information about old versions mksh(1) R57 This is the website of the MirBSD™ Korn Shell, an actively developed free implementation of the Korn Shell programming language and a successor to the Public Domain Korn Shell (pdksh). This page is always accessible via a redirection at, which is the canonical homepage URI. There also is (most of the time) mksh on Freshmeat and an mksh project page on ohlol, a statistics site. mksh is experimentally tracked at Launchpad. Download the Logo as SVG if you want. There’s also a full licence terms overview; it suffices to say mksh is Ⓕ Copyfree licenced. mksh must always be written either “mksh” (all-lowercase) or “MirBSD Korn Shell” — there is no other spelling. It’s usually pronounced by spelling out the four letters m, k, s and h individually, or by saying “MirBSD Korn Shell”. Introduction The current version of mksh is mksh R57 from 1 March 2019. Thanks to “Der Verein trash.net” for sponsoring access to a Solaris 8 box. Thanks to Julian “yofuh” Wiesener for just another account on a Sun E420 on Solaris 11β. Thanks to someone who prefers to stay anonymous due to tons of red tape for providing access to an AIX 5.3 system with gcc and xlC installed. (Both are now defunct.) Thanks to Jupp “cnuke” Söntgen for building on AIX in Dresden nowadays. 
Thanks to HP TestDrive/PvP/DSPP/CLOE, which helps in keeping mksh portable to several Unixes and compilers, and track down some architecture- or glibc-specific bugs. (These days, HP-UX/IA64 only, though.) Thanks to gnubber’s admin (Barry “bddebian” deFreese), as well as Samuel “youpi” Thibault, for providing shell access to a Debian GNU/Hurd system. Thanks to Lucas “laffer1” Holt for ssh access to the MidnightBSD server. Thanks to Waldemar “wbx” Brodkorb for dropping his unused Zaurus SL-C3200 to someone who can actually make use of it to test mksh on OpenBSD. Thanks to Andreas “gecko2” Gockel for access to a couple of Debian and Macintosh boxen and an iPhone 3G. Thanks to Martin Zobel-Helas for an account on an Alpha system. Thanks to Bastian “waldi” Blank for access to an S/390 system and uploading mksh packages to Debian for quite some time. Also thanks to Otavio Salvador and Patrick “aptituz” Schönfeld for uploading a couple of my Debian packages. The Debian GNU/k*BSD and Hurd developers were quite helpful in assisting and testing as well. Thanks to Thomas E. “TGEN” Spanjaard for access to both a NetBSD and a DragonFly system. Thanks to Josef “jupp” / “penpen” Schugt for testing mksh on a Digital Unix (OSF/1 V4.0) system from the Uni Bonn Physik CIP Pool. Thanks to DEChengst from #UnixNL for providing access to a HP/Compaq Tru64 (OSF/1 V5.1B) system, an OSF/1 V2.0 system and an Ultrix 4.5 system. Thanks to Adam “replaced” Hoka for a BSDi BSD/OS 3.1 ISO9660 image and offering to help with HP-sUX testing (now that HP TestDrive went down) and initial porting to Haiku, which was continued at CLT 2010 with help from Stephan Aßmus. Thanks to André “naaina” Wösten for ssh on a QNX box. Thanks to Olivier Duchateau for testing on Slackware and Zenwalk GNU/Linux. Thanks to Winston W. for spotting musl, and thanks to maximilian attems and H. Peter Anvin for almost fixing klibc. Thanks to RT|Chatzilla, Chris “ir0nh34d” Sutcliffe, and others for Win32 platform assistance. 
Thanks to KO Myung-Hun for the OS/2 port. Thanks to Daniel Richard G. <skunk@iSKUNK.ORG> for the z/OS (OS/390) initial porting effort and continued testing. Thanks to all other contributors from IRC and the mailing list, including, but not limited to, those already named and Martijn Dekker, Jean Delvare, izabera, Jörg Schilling, carstenh, jilles from FreeBSD, arekm, Torsten Sillke, slagtc, Stéphane Chazelas, colona, zacts, Seb, Steffen Nurpmeso (sdaoden), Dan Douglas (ormaaj), Dr. Werner Fink, … (in no particular order). Thanks to Brian Callahan from the Rensselaer Polytechnic Institute for bug fixes as well as running test builds on AIX 5.1L with the xlC 5.0.2.0 and gcc-2.9, and on Solaris 8 with Forte Developer 7 C 5.4 and gcc 2.95.2 and 3.4.6 compilers. No thanks to Intel for not including mksh in their programme analysing code. (Did I miss anyone? Mail me if so. Some of these are past, anyway.)

What is mksh(1)? — Short answer: The MirBSD Korn Shell. Okay, but what exactly does it do, or why another shell? These questions will be answered here for the people interested. Right now, you only need to know that mksh is a DFSG-free and OSD-compliant (and OSI approved) successor to pdksh, developed as part of the MirOS Project as native Bourne/POSIX/Korn shell for MirOS BSD, but also to be readily available under other UNIX®-like operating systems. The source code for mksh is available at the MirOS Project mirrors as well as those of other operating system projects due to being included in these; however, we do not provide binaries. Find instructions to build and install mksh below, or ask your operating environment vendor to package and include mksh; we provide assistance for this task if asked.
Licencing permits this as long as due credit is given to the authors and contributors and the copyright notices are not removed in their entirety; modifying is allowed (but if the result is still called mksh, it’s discouraged; talk with us if you feel you have to modify mksh). The individual licences used are the Ⓕ MirOS licence, and (for BSD compatibility on other operating systems) the 3-clause UCB licence and the ISC licence; full terms are available. pdksh originally was public domain, with a few exceptions, but these files are not part of mksh R21 or up. The mksh(1) author (mirabilos) acknowledges the contributions of these people who dedicated pdksh and oksh to the public, and asserts a collective copyright on the code. All these licences are DFSG clean and conform to the OSD, and the MirOS Licence is listed on the pages of the ifrOSS licence centre as well as in the FSF/UNESCO Directory of Free Software. The MirBSD Korn Shell is OSI Certified and its manual is Open Knowledge. To compile mksh, you will need a Bourne or POSIX shell (Solaris /bin/sh is enough, the Z shell works), a C compiler, system and C library header files and the standard C runtime. 
You will also need a set of standard UNIX® tools on a supported operating system: any recent BSD; Darwin, Apple Mac OSX; Interix (Microsoft® Services for Unix 3.5, maybe Subsystem for Unix Applications on Win2003/Vista); GNU/Cygwin; UWIN; GNU/Linux (libc5, glibc, dietlibc, µClibc, some klibc systems are tested), Debian GNU/kFreeBSD, GNU/Hurd or GNU/Linux; Sun Solaris (8, 9, 10, 11), OpenSolaris; AIX; IRIX; HP-UX 11i; OSF/1; ULTRIX; Minix 3; NeXTstep (but not OpenStep 4.2); QNX; BeOS (with limitations) or Haiku; SCO OpenServer 5 (with limitations) or 6 or SCO UnixWare; …

To run the regression test suite, you will need a not too antiquated Perl, optimally with POSIX.pm or Errno.pm, as well as /bin/ed (whose installation is strongly suggested anyway, because it’s the standard FCEDIT history editor and standard UNIX® text editor), as well as a controlling terminal, usually /dev/tty or provided from script(1) or GNU screen.

To use mksh, you only need the C runtime (and any supplemental libraries the binary was linked against) and, optionally, /bin/ed — for interactive use, a controlling terminal is highly recommended because job control does not work without one.

To make full use of mksh(1)’s interactive features, it is recommended to copy the dot.mkshrc file from the source distribution as ~/.mkshrc into the user’s home directory and let the user adjust it to suit his needs. The sample file configures a few aliases and shell functions as well as a sensible prompt ($PS1) and some csh-like directory stack functions and zsh-like hooks. Full use of this file requires a few special UNIX® tools. Note that $ENV must not be set for mksh(1) to parse the ~/.mkshrc file at startup.

Support

We provide an online manual page in HTML and PDF format. Reading books about Korn Shells in general is recommended as further help, but beware of the differences (ATTENTION outdated content behind that link) to other shells. Some ISBNs are listed at the end of the manual page.
A collection of frequently asked questions is available as the mksh FAQ. If you require additional assistance or want to discuss bugs, features or enhancements, write to the miros-mksh mailing list (or subscribe to it by sending an eMail to the postmaster telling which address to subscribe to which list(s) — in your case, the miros-mksh list, but we have more mailing lists). The mailing list can be reached via the GMane archive using either NNTP or HTTP, or at The Mail Archive, although not at MARC. Joining the IRC channels at Freenode (irc.freenode.net, SSL port 6697, insecure port 6667) #!/bin/mksh (no joke, this is really the channel’s name) and #ksh (where you must distinguish AT&T ksh from mksh though) is recommended as well.

Installation

Skip to the section about being included in operating environments unless you really want to compile mksh from source yourself or create a package for your operating system of choice. First off, you have to download the source code from any of the mirrors listed below, or any other mirror you know of. Alternatively, use the development version from CVS. Official source code distributions are digitally signed with gzsig(1) using the MirOS Project’s current signature key. Please verify the signature as well as the hashes and/or checksums below, so you’re sure the content is intact and the version number on the archive is correct.

Known Mirrors

Checksums and Hashes
- SHA256 (mksh-R57.tgz) = 3d101154182d52ae54ef26e1360c95bc89c929d28859d378cc1c84f3439dbe75
- RMD160 (mksh-R57.tgz) = 2cf5933f1d7cf8ef10db0b73ff3476ca448aba46
- TIGER (mksh-R57.tgz) = 560408e265f9e0556918e08662f7c243dc2aaaa0a463953e
- 2389668846 419604 /MirOS/dist/mir/mksh/mksh-R57.tgz
- MD5 (mksh-R57.tgz) = 4d2686535252ea6a57bdec531498239a

All official distfiles are gzsig(1)d with our current signature key.

Preformatted Documentation

Decompression

We’re using gzip(1)-compressed POSIX ustar(1) distfiles nowadays, so a simple tar -xzf mksh-R57.tgz will work.
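The verify-then-unpack step can be rehearsed locally before trusting a real download. The sketch below uses a stand-in file and its well-known SHA256 value instead of the actual distfile, and assumes the GNU coreutils sha256sum tool (BSDs spell it sha256):

```shell
# Sketch of checksum verification; "distfile" and its hash are stand-ins
# for mksh-R57.tgz and the published SHA256 value above.
printf 'hello\n' > distfile
expected=5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
actual=$(sha256sum distfile | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
	echo 'checksum OK, safe to tar -xzf'
else
	echo 'checksum MISMATCH, discard the download' >&2
fi
```

With the real distfile, substitute mksh-R57.tgz and the SHA256 line from the list above, and only unpack once the comparison succeeds.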
It will create all files in a subdirectory ./mksh/.

Patching

If you’re a packager/vendor and need to patch mksh and deviate from the default behaviour for that version which is indicated from $KSH_VERSION, define the cpp(1) KSH_VERSIONNAME_VENDOR_EXT macro to a C string beginning with a space and a plus sign, followed by a tag of yours, e.g. add -DKSH_VERSIONNAME_VENDOR_EXT=\"\ +aCoolDistro\" to CPPFLAGS so the versions can be distinguished. I think this is a reasonable request. In contrast to the old patching mode, this does not require patching the testsuite any more.

Compilation

Now you’re in the source code directory; Build.sh does all the magic for you. In theory, invoking the command

% /bin/sh ./Build.sh

should work. Relative paths can be used too; for example, instead of cd(1)ing to the source directory, you could’ve done

% mkdir build; cd build; /bin/sh ../mksh/Build.sh

It is optionally possible to place files, such as printf.c, into either the current or the source directory. It will need a compile option (see below) to be activated. printf.c is undesirable because it uses stdio and floating point and adds bloat.

The build script requires a Bourne shell (Solaris /bin/sh, the Heirloom sh, DEC OSF/1 V2.0 /bin/sh), Korn shell (ksh, ksh88, ksh93, pdksh, mksh, oksh, maybe the MKS ksh), POSIX shell (posh, /usr/xpg4/bin/sh, ash, dash, yash), a related shell (Jörg Schilling’s bosh or sh, or the Z Shell), or a Bourne or POSIX superset (such as GNU bash) to work; the ULTRIX /bin/sh, the C shell (csh, tcsh), “bsh” or a scripting shell like the wish won’t.
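The backslash quoting in the CPPFLAGS example from the Patching section above is easy to get wrong; this sketch only shows what the shell hands to the compiler after stripping its own quoting (+aCoolDistro is the hypothetical vendor tag from the text):

```shell
# After shell quote removal, the macro's value is a C string literal
# beginning with a space and a plus sign, as the text requires.
flag=-DKSH_VERSIONNAME_VENDOR_EXT=\"\ +aCoolDistro\"
printf '%s\n' "$flag"
# prints: -DKSH_VERSIONNAME_VENDOR_EXT=" +aCoolDistro"
```

The double quotes survive into the compiler invocation, so the preprocessor sees a proper string literal rather than bare tokens.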
Accepted arguments are:
- -c mode — set compiler mode:
  - combine to use “-fwhole-program --combine” (gcc4)
  - dragonegg to use the LLVM Dragonegg plugin with GCC
  - llvm to compile to bytecode, optimise there (see the -O/-o options) and link with LLVM tools
  - lto to use some kind of Link Time Optimisation with clang or gcc-4.5 and up; with fallback to combine if not found
- -g — build with debug info, Valgrind and LLVM/Clang scan-build assertions, and -DDEBUG in place
- -j — parallel build
- -M — do not compile but create Makefrag.inc
- -O — (default) with “-c llvm” use “-std-compile-opts”
- -o opts — with “-c llvm” use these optimisations
- -Q — be (only) a little less verbose
- -r — don’t try to build a pre-formatted version of the manual page using nroff(1) if found — recommended
- -v — display version and exit

Note: “-M” is incompatible with “-c somemode” and “-j” — specify LTO and parallel make using your make(1) tool’s facilities instead. Also, “-c dragonegg” does not work with “-j”, and “-c combine” is not parallel for obvious reasons.

Note: any kind of “-c somemode” optimisation is brittle: LLVM (both llvm and dragonegg) have not been used for a long time; both lto and, while it lasted, combine are regularly broken every few GCC releases, even minor ones. Run the testsuite if you use one!

The build script also honours some environment variables detailed at its end. Install this binary as /bin/mksh and its manual page; you may want to also install dot.mkshrc, either directly into the skeleton directory, or with a wrapper /etc/skel/.mkshrc file that reads /etc/mkshrc, especially if packaging for a GNU distribution.

Building lksh

Add the -L flag to Build.sh to create lksh(1), a variant of the shell that uses POSIX-compliant arithmetics with the host “long” data type, instead of mksh’s guaranteed-reliable 32-bit arithmetics.
You probably want to add -DMKSH_BINSHPOSIX and, possibly, -DMKSH_BINSHREDUCED to the command line and install the lksh binary as your system /bin/sh if you go that route. (This shell is not intended to be used interactively. Its purpose is to run legacy sh scripts (especially with the MKSH_BINSHREDUCED option) and POSIX sh scripts, including Debian maintainer scripts.) Install this binary as /bin/lksh and its companion manpage, but remember that it is not stand-alone: always ship the full proper mksh shell alongside it.

Operating Environment specific notes

Compiler: ACK

Support for ACK on Minix 3 has been added in mksh R37c with a workaround for a known ACK bug (the “const” bug); it is now perfectly usable. Support for other ACK versions or targets can be user-contributed. It currently lacks a sane frontend supporting things like “cc -E” (ack -E is ignored), at the least, and does not yet process system headers like <sys/types.h>.

Compiler: Borland C++ Builder

This compiler is somewhat supported in mksh R30 with UWIN’s cc wrapper. (We haven’t been able to produce a working executable though.)

Compiler: C68 (C386, etc.)

The Walkers’ C89 compiler is not supported at the moment, but this is mostly due to difficulties in figuring it out. Any people who actually got it to compile anything, especially for both Linux and Minix, for both i386 and m68k, please contact us.

Compiler: DEC/Compaq/HP C for OSF/1 and Tru64

This compiler is fully supported with mksh R33b (partial support did appear earlier). The ucode based compiler, linker and loader for Digital UNIX (OSF/1) V2.0 on MIPS is supported since mksh R36. It may, however, be forced to link statically to work around a bug in the toolchain.

Compiler: Digital Mars

This compiler is somewhat supported in mksh R30 with UWIN’s cc wrapper and a few kludges. (We haven’t been able to produce a tested executable though, due to general stability issues with the UWIN platform.)
Compiler: GCC

The GNU C Compiler 1.42, 2.7.2.1, 2.7.2.3, egcs (gcc 2.95) and the GNU Compiler Collection (gcc 3.x, 4.x) are known to work, but not all versions work on all targets. Early 2.x versions (like 2.1) may make trouble. Specific C flags, known extensions, etc. are autoprobed; cross-compilation works fine. Use of gcc 4.x is discouraged because of several dangerous changes in how the optimiser works; it is possible to work around their trading off reliability for benchmark-only speed increases, but because mksh developers do not use gcc 4.x this will have to be user-contributed. On the other hand, gcc 3.x (in some cases 2.x) is the best choice for compiling mksh. On BSDi BSD/OS, where gcc 1.42 and gcc 2.7.2.1 are available, the cc(1) manual page mentions that gcc 1.42 produces more reliable code, so we recommend to build mksh with CC=cc (gcc1) instead of CC=gcc or CC=gcc2 there. Since mksh uses ProPolice, the Stack-Smashing Protector, some GCC versions’ compilates require additional shared libraries. To disable this, pass HAVE_CAN_FSTACKPROTECTORALL=0 in the build environment. GCC and Valgrind do not always play well together, hence the build option -valgrind, which adds -fno-builtin to avoid gcc producing code that can access memory past the end of the allocation.

Compiler: HP C/aC++

HP’s C compiler (/usr/bin/cc on HP-UX) is supported in mksh R30 and above; on IA64, only the LP64 model can be used; mksh used to segfault in the ILP32 model (or rather, the system libraries did, I think), so it was the default. PA-RISC too works fine, so this compiler is a primary choice. In mksh R39b and up, you must set CFLAGS='+O2 +DD64' on IA64 to get the same behaviour as previous versions; the 32-bit mode is now the default. The HP-UX bundled compiler /usr/ccs/bin/cc works as well as HP aCC, except of course that it does not optimise. (GCC and C99 extensions aren’t actually used by mksh.)
Compiler: IBM XL C/C++ / VisualAge

IBM xlC 9.0 on AIX 5.3 is supported in mksh R30 and above. IBM xlC 8.0 on Linux/POWER and IBM xlC 6.0β on MacOS X are on the TODO. IBM xlC 7.0 on AIX 5.2 is supported in mksh R35c and above. IBM xlC 5.0 on AIX 5.1L also works.

Compiler: Intel C/C++/Fortran

ICC emulates GCC quite well (too well for my taste), is fully supported in mksh R30 and above on several platforms, but spits out lots (and I mean huge ugly lots) of bogus warnings during compile. We’re not going to work around these; let Intel fix their compiler instead. Some of these warnings were even responsible for bugs in mksh. I could not get the Intel Compiler 10 for Windows® to work. mksh enables the ICC stack protector option automatically. Compilates usually require the Intel shared libraries to be around.

Compiler: libFirm/cparse

libFirm with the cparse front-end is indistinguishable from GCC and known to build mksh R41 just fine.

Compiler: LLVM

Apple llvm-gcc from Xcode 3.1 had full success with mksh R34. Vanilla llvm-gcc works fine as well. Vanilla llvm-clang starting at r58935 produces working code with mksh R36b and up.

Compiler: Microsoft® C/C++

Support for the Microsoft® C Compiler on Interix and UWIN, with the respective /usr/bin/cc wrappers, appeared in mksh R30. The following product versions have been tested:

CL.EXE: Microsoft (R) 32-bit C/C++ Standard Compiler Version 13.00.9466 for 80x86
LINK.EXE: Microsoft (R) Incremental Linker Version 7.00.9466
(both are part of the .NET Common Language Runtime redistributable)
CL.EXE: Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.42 for 80x86
LINK.EXE: Microsoft (R) Incremental Linker Version 8.00.50727.42
(both are part of Visual Studio 2005 C++ Expreß)

You’ll have to change Interix’ cc(1) wrapper though: replace /Op with /Gs- to disable the stack checks (missing support in libc for them, they used to be off by default) and remove /Ze.
On Interix (SFU 3.5), this compiler is maturely usable and a good choice. On GNU/Cygwin, using wgcc it might be possible to use this compiler. I could not test that yet, though. On UWIN, this is usable as well.

Compiler: MIPSpro

Support for SGI’s MIPSpro compiler on IRIX appeared in mksh R33b.

Compiler: nwcc

Support for nwcc appeared in mksh R36b; it is recommended to use nwcc 0.8.1 with mksh R39c or newer. The stack protector is currently disabled because it introduces errors.

Compiler: PCC (BSD)

Support for the Caldera/SCO UNIX® based, BSD-licenced portable C compiler in the ragge version has been added with mksh R31d. Versions from end of April 2008 onwards are known to work reliably, even with -O enabled. Intermediate bugs that may have appeared are just as quickly fixed.

Compiler: SUNpro

Support for the SUN Studio 12 compiler (cc 5.9) as well as cc 5.8 appeared in mksh R30; other versions might be supported as well. This compiler is a primary choice. Sun Forte Developer 7 C 5.4 2002/03/09 also works.

Using SUNWcc on MirBSD/i386

Preparation steps. We assume that Sun Studio is extracted under the /opt/SUNWcc directory and Linux emulation has been set up. From now on, $S is /opt/SUNWcc/sunstudio12.1 (when using an older version, no “.1” at the end).

$ cat $S/../MirBSD/ld   # must be executable (0555)
#!/bin/mksh
set -A args -- "$@"
integer i=0
while (( i < ${#args[*]} )); do
	[[ ${args[i]} = -dynamic-linker ]] && args[i+1]=/usr/libexec/ld.so
	[[ ${args[i]} = -Y ]] && args[i+1]=/usr/lib
	let ++i
done
exec /usr/bin/ld "${args[@]}"

In $S/prod/include “mkdir MirBSD_orig” and “mv cc MirBSD_orig/”. In $S/prod/lib “mkdir MirBSD_orig” and “mv *.o MirBSD_orig/” then “mv MirBSD_orig/values-xa.o .” (we need this one). Furthermore, run “make obj && make depend && make && make sunstuff” in /usr/src/lib/csu/i386_elf then copy the three files obj/sun_crt{1,i,n}.o to $S/prod/lib/crt{1,i,n}.o (they are the MirBSD glue code / startup files).
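The ld wrapper above relies on mksh arrays; the same argument-rewriting idea can be sketched in portable POSIX sh (the function name and the sample arguments below are made up for illustration, only the two replacement paths come from the wrapper):

```shell
# Rewrite the argument that follows -dynamic-linker or -Y, mirroring
# the mksh wrapper's args[i+1] replacement, then print the result.
rewrite_args() {
	out= prev=
	for a in "$@"; do
		case $prev in
		-dynamic-linker) a=/usr/libexec/ld.so ;;
		-Y) a=/usr/lib ;;
		esac
		prev=$a
		out=${out:+$out }$a
	done
	printf '%s\n' "$out"
}

rewrite_args cc -Y old -dynamic-linker old2 foo.o
# prints: cc -Y /usr/lib -dynamic-linker /usr/libexec/ld.so foo.o
```

A real wrapper would `exec /usr/bin/ld` with the rewritten list instead of printing it; the mksh array version above keeps arguments with embedded whitespace intact, which this flattened sketch does not.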
For some versions, you may need to ensure /emul/linux/lib and /emul/linux/usr/lib do not contain any *.so or *.o files, except for libbfd, libopcodes, libstdc++ (but 12.1 uses the native linker). In 12, -xO2 is broken; in 12.1 optimisation merely lets ir2hf run out of memory even with ulimit -dS 1572864; hence, -xipo cannot be used either. ☹

Using SUNWcc on MirBSD to build mksh

$ S=/opt/SUNWcc/sunstudio12.1
$ LD_LIBRARY_PATH=$S/prod/lib/sys:$S/prod/lib:$S/rtlibs CC=$S/prod/bin/cc \
	LDFLAGS="-Yl,$S/../MirBSD" mksh /usr/src/bin/mksh/Build.sh -r

Compiler: tcc (Tiny C)

Support for Fabrice Bellard’s tcc appeared in mksh R31, although its inability to do ‘-E’ in older versions gave us some headache. The bounds checker is currently disabled as it causes segfaults. Some intermediate versions of tcc break every once in a while.

Compiler: TenDRA (maybe Ten15 too)

Support for TenDRA appeared in mksh R31 and appears to be solid; mksh uses the ‘system’ profile for compiling by default. Users who wish to build mksh with a different profile are welcome to help to port it. See ULTRIX for an example of getting a ‘POSIX’ profile to work.

Compiler: DEC ucode (MIPS CC)

Since mksh R33c, ucode on Ultrix is fully supported.

Compiler: USL C

This is the vendor compiler on SCO OpenServer and SCO UnixWare. It is recognised from R40f onwards.

Distribution: OpenADK

This development kit provides the same support cross-platform, with µClibc, musl and/or glibc, and thus should behave the same on all supported targets.

Distribution: OpenWrt

This distribution provides the same support cross-platform, with µClibc and/or glibc, and thus should behave the same on all supported targets.

Platform: Android

Supported with OpenADK (static) and NDK (although the build process is currently not feasible with an Android.mk file but possible if the CPPFLAGS and signames.inc are pregenerated; sys_signame[] has been pushed upstream and is in Android 1.6).
Integration into both AOSP and the Google master, as /system/bin/sh, has been done and it can be enabled on a per-target basis at the moment; mksh is shipped with Android 3.0 and newer releases and is the standard shell of non-emulator builds on Android 4.0 and newer.

Platform: iPhone

This is just Mac OSX: compile (natively, or cross via the SDK) and copy.

Platform: Maemo

This is like Debian, and packaging is available via the Garage and the Extras repository. Helpers (for GUI integration and actual on-device testing) sought.

Toolchain: dietlibc

Fefe’s dietlibc works in mksh R34, although his opinions towards certain standards, such as caddr_t, strcasecmp(3), etc., are weird.

Toolchain: klibc

klibc needs -DMKSH_NO_LIMITS and can then use stock klcc as compiler wrapper (CC=klcc).

Toolchain: musl

Appears to work just fine in R41b and up.

OS: 386BSD

This seems to work with mksh R41, although on 386BSD-0.0new (anything older than 386BSD-0.1) you need to patch the kernel against a close-on-exec bug and a bug when switching the terminal between cooked and raw mode, as well as add an execve with support for shebangs and long command lines.

OS: AIX

Support for AIX with xlC appeared in mksh R30.

OS: BeOS

BeOS can, with limitations, be used with R40f and up. Job control is not working, and mksh must be rebuilt (once built) by running Build.sh with the same options again but using the just-built mksh as interpreter due to a severe pipe-related bug in the system sh. RT says that “BeOS 5.1(Dano)/PhOS/Zeta” can be supported. He is also trying to figure out how to support BeOS 5.0 and how to distinguish it from 5.1…

OS: BSDi BSD/OS

BSD/OS 3.1 works fine with mksh R33.

OS: Coherent

This is a somewhat experimental port in mksh R41. (More information will follow.) Set TARGET_OS=Coherent manually.

OS: GNU/Cygwin

This operating environment is supported as much as it adheres to standard POSIX/SUSv3 conformant things.
No workarounds for .exe suffixes or other platform-specific quirks have been or will be added.

OS: Darwin / Mac OSX

Works pretty well.

OS: Dell UNIX 4.0 R2.2 (SVR4)

This exot has been tested with R40f: gcc is absolutely unusable on this platform, but the vendor compiler works. Set TARGET_OS=_svr4 manually.

OS: MS-DOS, DR DOS, FreeDOS

DJGPP’s bash.exe fails to run Build.sh, thus this is currently not supported. (We tried!)

OS: DragonFly BSD

Perfect choice. Note /bin/sh compatibility needs a quirk.

OS: FreeBSD

Perfect choice. Note /bin/sh compatibility needs a quirk.

OS: GNU/Hurd

This operating system is supported (on i386) since R29 but not well tested. mksh is part of Debian GNU/Hurd, so it is expected to work. Starting with mksh R39b, there is no arbitrary limit on pathnames any more, as the operating system requires. (However, there are still other inherent limits in mksh, such as that of an interactive input line.)

OS: GNU/k*BSD

This operating environment has been supported for quite a while as part of Debian and somewhat tested.

OS: GNU/Linux

While POSIX does not apply to “GNU’s Not Unix”, the FHS (ex-FSSTND) does; please convince your distributor to move ed to /bin/ed if not already done. Manual page installation paths are not standardised in older distributions either. Besides glibc (GNU libc), dietlibc (from Fefe), µClibc (embedded), klibc (for initramfs) and libc5 (on Linux 2.0.38) work, but locale detection is not automatic for some of them. mksh can be used as /bin/sh on Debian and similarly strict distributions, which allow the use of e.g. ash/dash there as well.

OS: Haiku

Haiku can be used with mksh R39c and newer, with a recent kernel from r35836 and newer (ca. mid-2010), due to a bugfix wrt signal handling. gcc4hybrid might not work, gcc2hybrid might work well.

OS: HP-UX

Support for HP-UX with GCC appeared in mksh R29; it also works with HP’s C compiler and is no longer experimental as of mksh R30. Please use stty(1) to make the terminal sanely usable.
If passing custom CFLAGS, don’t forget -mlp64 (GCC) or +DD64 on Itanium.

OS: Interix

We have only tested SFU 3.5 on Windows® 2000, not SUA on Windows® 2003 SR1 or the version integrated into Vista. Windows 7’s works, though only with gcc. As the Unix Perl which comes with Interix is too old, and the ActiveState Perl has… other issues, to run the regression tests, please install Perl from NetBSD® pkgsrc® instead. As of mksh R30, the native compiler (cc(1)) is supported in addition to gcc, calling Microsoft C. Do not use the c89(1) wrapper. If passing custom LIBS, don’t forget to add -lcrypt or any other library providing arc4random(3). mksh can replace /bin/ksh and /bin/sh without any problems.

OS: IRIX

Support for IRIX64 6.5 appeared in mksh R33b.

OS: Jehanne

A Plan 9 derivative with much improved support for the POSIX runtime environment; it can run mksh R56c and newer. Still work in progress, both the operating system itself and the mksh port, but usable by now.

OS: LynxOS

Although the promised evaluation version never arrived, someone managed to test mksh R40f on LynxOS 3.

OS: MidnightBSD

mksh is part of MidnightBSD 0.2-CURRENT and above and used as native /bin/ksh; it can be used as /bin/sh as well with a quirk. Indeed, MidnightBSD 0.3 uses mksh as /bin/sh.

OS: Minix 3

Minix 3 is supported starting with mksh R37b (gcc), R37c (ACK/adk cc). Minix 1 and Minix 2 will never be supported due to size constraints on 16-bit platforms, unless a user contributes code. You will need:

# chmem =1048576 /usr/lib/em_cemcom.ansi
# chmem =262144 /usr/lib/i386/as

Append the following line to main.c on Minix 3.1.2a or older:

void _longjmp(jmp_buf env, int val) { longjmp(env, val); }

OS: Ninix 3

Ninix 3 (Minix 3 with NetBSD® code) first worked starting with mksh R40e (clang). More porting and tests are needed. This is different from “regular” Minix 3. Do be sure to set your TARGET_OS environment variable correctly.
OS: Minix-386

mksh R42 works on Minix-386 down to version 1.7.0 but not 1.5, due to OS limitations; you might have to compile on version 2.0 as the ACK bundled with 1.7 segfaults.

OS: Minix-vmd

mksh R42 works fine on Minix-vmd 1.7 with ACK.

OS: MiNT / FreeMiNT

Support appeared in mksh R40. Depending on the distribution you use, you must use pdksh with CC=gcc to run Build.sh — cc and bash are both too broken. Afterwards, you must use the just-built mksh (after moving it out of the build directory) to re-run Build.sh with the same flags, due to bugs in pdksh on MiNT as well. Most things work. FD_CLOEXEC is broken, so file descriptor privacy is at POSIX level only. /dev/tty is usually unusable; it might help to symlink /dev/console there but break other things. (At OpenRheinRuhr 2011, tg@ had access to a FreeMiNT distribution which did not seem to exhibit any of the mentioned problems. YMMV.)

OS: MirBSD

Perfect choice. This is where mksh comes from.

OS: MSYS

mksh compiles on MSYS (that is something different from using MinGW for the nascent native WinAPI port; it’s basically an old version of Cygwin wrapped) with few issues.

OS: NetBSD

Perfect choice. Starting with NetBSD 1.6, mksh can replace /bin/ksh and /bin/sh without any problems. On NetBSD 1.5, mksh can only replace /bin/ksh safely.

OS: NeXTstep

Except for OpenStep 4.2, which has a completely botched POSIX library (although rumours are there is a libposix.a in existence that can be copied onto it), it works with R40f onwards. (Binaries of NeXTstep 3.3 can be copied onto OpenStep 4.2 and used there.) You need gawk.

OS: OpenBSD

The setlocale(3) call in OpenBSD’s libc will always return the “C” locale and therefore has been disabled by default. mksh can replace /bin/ksh and /bin/sh without any problems. mksh is supposed to be a superset of oksh (except GNU bash-style PS1, weird POSuX character classes, and an incompatible ulimit builtin change).

OS: OS/2

mksh has been ported to OS/2 with kLIBC.
The -T option to Build.sh enables “TEXTMODE”, which supports reading CR+LF line endings but breaks compatibility with Unix mksh.

OS: z/OS (OS/390)

mksh is currently undergoing porting to EBCDIC but should already work in an ASCII environment using xlC. z/Linux has fewer bugs than the USS environment though.

OS: DEC/Compaq OSF/1, Compaq/HP Tru64

Digital Unix is somewhat supported using gcc as of mksh R31b. With mksh R33b, many more versions and the native compiler work. In fact, gcc sometimes segfaults, so use the vendor compiler.

OS: Plan 9

Plan 9 is not supported yet. Due to the unavailability of ttys, full job control will never be supported. Input line editing likewise cannot work in drawterm. Currently, a kernel or APE bug requires the use of -DMKSH_NOPROSPECTOFWORK, but this doesn’t produce a fully working mksh (some features cause the shell to hang). The APE (ANSI’n’POSIX Environment) is required to build mksh; I don’t remember which compiler I used, but I think it was GCC. Jens Staal reports success with kencc though, so I’d suggest using that instead.

OS: PW32 on Win2k

PW32 is not supported yet — killpg(3) is missing, and it’s possible that PW32 needs job control disabled or worked around, since a workable binary can be made with -DMKSH_NOPROSPECTOFWORK (note that this option produces a shell not supporting standard Korn Shell scripts). Maybe peek at how ash/bash for PW32 do it. gcc works.

OS: QNX/Neutrino

QNX/Neutrino (Perl: “nto”) support appeared in mksh R36b. The QNX ed(1) used to fail the regression tests due to being broken; compile the MirBSD ed and place it in /bin/ to fix this, or get an updated ed from the vendor.

OS: RTEMS

04:02 < kiwichris> Just dropped by to say I built mksh for RTEMS () and can run it on a sparc simulator.
04:02 < xiaomiao> nice!
04:03 < kiwichris> yeah it is; cannot do to much at the moment because rtems is a statically linked binary and commands are 'functions'

OS: SCO OpenServer, SCO UnixWare

SCO OpenServer 5 lacks job support, which SCO OpenServer 6 and SCO UnixWare 7.1.1 appear to have working.

OS: SkyOS

RT managed to build mksh on SkyOS. It somewhat works, and the testsuite failures are probably all bugs in their POSIX layer.

OS: Solaris

Solaris is fully supported since “forever” with gcc, and since mksh R30 with Sun’s C compiler. Both 32-bit and 64-bit modes work; 64-bit mode is not enabled by default by Build.sh, you must do that manually by passing CFLAGS of -O2 -m64 or -xO2 -xarch=generic64. Solaris does not come with Berkeley mdoc macros for nroff, so using the HTML or PDF versions of the manual pages or pregenerating a catman page on another OS is required.

OS: SunOS

On mksh R42, add -DMKSH_TYPEDEF_SIG_ATOMIC_T=int and -DMKSH_TYPEDEF_SSIZE_T=int in addition to -DMKSH_UNEMPLOYED -DUSE_REALLOC_MALLOC=0 and SunOS 4.1.1 with GCC 2.7.2.3 will work.

OS: Syllable Desktop

Needs retesting with mksh R40+ (port unfinished). This OE is suffering from bugs, although R41 works better than ever before. When deactivating any and all job handling with -DMKSH_NOPROSPECTOFWORK it works a bit better. (Note that this option produces a shell not supporting standard Korn Shell scripts.) Syllable Server will work, as it is, at the moment, “just” a GNU/Linux distribution with a different GUI. This may change though.

OS: ULTRIX

Even on ULTRIX 4.5, mksh R33c works fine. The system ksh must be used for running the Build.sh script, though. I could not get networking on ULTRIX 4.0 (SIMH) to work, so I could not test it there. You must, however, pass the -YPOSIX option to the ucode compiler, as the default -YBSD profile produces a broken executable (spins instead of starting up), and the -YSYSTEM_FIVE profile does not even compile. See TenDRA for another OE which has issues with different OE profiles.
(Build.sh takes care of this automatically.)

OS: UWIN-NT

Compilation of mksh R30 on UWIN works with several compilers (bcc, dmc, msc — I could not get gcc-egcs, gcc-2.95, gcc-mingw, icc to work) but the platform itself is very flakey, and even some regression tests crash, due to target limitations apparently. Within these limits, mksh is usable.

OS: Windows

Michael Langguth, partially under work sponsored by his employer Scalaris AG, is currently working on porting mksh to native Win32 (WinAPI), complementing the GNU utilities for Win32 with a native shell, to have a free interoperability solution for scripting. Progress is promising, but there is still a long way to go. The result will probably not be part of mksh itself, but a separate product; some core patches will however end up in core mksh. A beta version of this is available as announced in this wlog entry.

OS: Xenix

SCO Xenix 386 2.3.4a lacks too much functionality to be an mksh target. (RT tried!)

After compiling

The Build.sh script generates an executable (“mksh”, except on GNU/Cygwin, where it is called “mksh.exe”), a shell script to use the newly built mksh to run the regression test suite (“test.sh”), and (unless the -r option was given) a pre-formatted manual page (“mksh.cat1”). It also lists installation instructions unless -Q was provided. Now it’s the time to run

% ./test.sh -v -f

in order to see if the shell works. The regression testsuite will exit with errorlevel 1 if any tests failed that are not marked as allowed to fail (e.g. OS dependent) or expected to fail, 0 otherwise. Omit the ‘-f’ option if you do not have a fast (say 1½ GHz Pentium-M) machine. The regression tests need a controlling tty. Please ensure you have one, even for bulk/dæmonised builds; you can use GNU screen or script(1) to provide one by running the testsuite inside it (see the Debian and OpenSuSE Buildservice packaging for examples of how to do it).
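Whether a controlling terminal is present can be probed up front before starting the testsuite; a minimal sketch (the printed messages are made up, only the /dev/tty probe matters):

```shell
# Probe for a controlling tty the way the testsuite needs one:
# opening /dev/tty fails when the process has no controlling terminal.
if (exec </dev/tty) 2>/dev/null; then
	echo 'controlling tty present; ./test.sh -v -f can run as-is'
else
	echo 'no controlling tty; wrap the run in script(1) or GNU screen'
fi
```

The subshell keeps the redirection from disturbing the current shell's file descriptors; only the open attempt's success or failure is of interest.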
If, however, you absolutely cannot get the necessary utilities and devices installed in the build chroot, run:

./test.sh -v -C regress:no-ctty

To actually install mksh, copy the binary to some place in $PATH, e.g. /bin/mksh, $HOME/.bin/mksh, /usr/local/bin/mksh, or whatever your packaging system wants; strip it and run chmod 555 on it. (This can easily be achieved with install(1) — on Solaris, this is /usr/ucb/install not /usr/bin/install — with the arguments -c, -s, -m 755¹, and -o/-g. ¹ With 555, strip(1) cannot write the file any more, so chmod 555 afterwards.) Also append its installation path to /etc/shells, install the dot.mkshrc file (usually alongside the copyright file and other documentation), copy it to /etc/skel/.mkshrc if your operating environment has this means to include default dotfiles; install either the catman page (mksh.cat1) to, for example, /usr/share/man/cat1/mksh.0, or the mdoc page (mksh.1) to the standard location (/usr/share/man/man1/ or /usr/man/man1/ or whatever your operating environment requires). The manual page requires the Berkeley mdoc macros (either the BSD or the GNU groff version) to be installed during formatting time. Note that a ~/.mkshrc file will not be executed if $ENV is set and not empty, nor is there an /etc/mkshrc.

For packagers: Upgrades

Note: This is not the ChangeLog, these are the packager-visible upgrade notes regarding changes in the build system (Build.sh and friends, compiler support, packaging conventions, bad examples, etc). This is also not the users’ upgrade caveat list. Packagers also please note: it’s mksh or “The MirBSD Korn Shell” (“MidnightBSD Korn Shell” is also appropriate), but never Mksh or somesuch!

current: Please remember to subscribe to the mksh mailing list, either directly or via GMane, if you have an interest in mksh, such as packaging it. Thanks! (unfinished…)

R57: The MKSH_EARLY_LOCALE_TRACKING code was fixed.
The testsuite now checks whether there is actually a controlling tty available or not and whether it matches what’s passed for options. The vendor patching info was updated. R56c: SuSE provided a patch to implement mediæval-scope locale tracking (set ±U if the POSIX locale variables change within the shell); this is provided as-is, use with care, and especially do not use it in a system shell (with -o posix, it throws a warning when triggered, as it invokes non-compliance). R56b: The offsetof macro of dietlibc and klibc, which is upstream-buggy, is overridden with GCC 3+ to avoid a warning. R56: Building with -DMKSH_ASSUME_UTF8=0 no longer causes a known failure in the testsuite. Building with dietlibc or klibc, though, causes a warning about offsetof and dynamic “command”. You can now choose which UTF-8 locale to use with test.sh -U. R55: make repool needs mksh R55’s bugfix on the host. OS/2 builds with “textmode” (CR+LF as newline) now need Build.sh -T. Persistent history is now supported by lksh as well, unless explicitly disabled. There were some changes related to the build system. R54: Build.sh now installs both manpages (lksh.1 and mksh.1) independent of how it’s called. Several additional compiler flags are attempted. Porters to Harvey-OS and OS/2 should review their patches. R53a: this is a botched R53. R52c: prepare to review use of set +o. R52b: nothing of note, but prepare to review all mksh scripts to ensure they start with export LC_ALL=C soon. R52: Android can define MKSH_DEFAULT_PROFILEDIR itself but we don’t. We no longer ship the stop alias. R51: Please review all mksh scripts, such as a skeleton ~/.mkshrc file, for alias safety and security — in case of doubt, contact us. The EBCDIC and OS/2 ports are not finished, but some improvements are already included. CVS snapshots now use ‘j’ instead of ‘i’, making room for one more stable version; after R51, we will release R53, mostly bugfixes, and roll them all up in R50g.
R50f: If you patch mksh, please do not only update the version in check.t (twice) and sh.h but now in mksh.1 as well; thanks! Please let the mksh developer team review your .mkshrc files for robustness! R50e: Better portability; no conflict with system headers defining a “tilde” function; no use of ptrdiff_t any more. The old workarounds for static code checkers are gone. NSIG generation works with GCC 5. R50d: Nothing to note. R50c: New HAVE_ISSETUGID define. The example Debian /etc/skel/.mkshrc moved. Security release. Details. R50b: test.sh output is now clearer. R50: test.sh now uses $TMPDIR. If you want to build without SSP, define HAVE_CAN_FSTACKPROTECTORSTRONG in addition to HAVE_CAN_FSTACKPROTECTORALL if you have GCC 4.9+. R49: There is now generated content at build time; it is known that this is beyond the capabilities of some shells such as Coherent /bin/sh. We plan to address this in a later release by rewriting the relevant parts in C, so that a host C compiler will, in addition to a target C compiler, also be required to build mksh. R48b: Nothing of noteworthiness. R48: We now ship a Windows® icon; just ignore it if you don’t want it. We regularly update dot.mkshrc so you’d better think of a way for your users to get those updates. Download the development version via CVS You can use cvs(GNU) to download the development version of mksh(1), commonly called HEAD (or “trunk” to some). Beware of bugs, though we strive to make it installable (at least on MirBSD ☺) at all times. % env CVS_RSH=ssh cvs -qd _anoncvs@anoncvs.mirbsd.org:/cvs co -PA mksh You might also want to get the printf.c builtin, but this is optional and strongly discouraged; use it only if you really must: % env CVS_RSH=ssh cvs -qd _anoncvs@anoncvs.mirbsd.org:/cvs co src/usr.bin/printf Installation instructions as above, although the Build.sh options, CPPFLAGS, etc. might have changed a little in the meantime.
In general, you want the following: % cd mksh % sh Build.sh -r Optionally set CC and other variables, as usual. Unofficial git mirror github (chosen only for popularity) hosts a read-only, push-only, possibly nōn-fastforward, unofficial git mirror of the mksh source tree. Use at your own risk. Inclusion in other operating systems Debian GNU/Linux, GNU/Hurd and GNU/kFreeBSD have an mksh package maintained by us (Thorsten “mirabilos” Glaser). Gentoo GNU/Linux has an mksh ebuild created by Hanno Böck and kept up to date by bonsaikitten. Fedora GNU/Linux 14, 15 and 16, and RHEL 4 and 5 (via EPEL), 6 and 7 (shipping with it, for their customers to use with their ksh88 and pdksh scripts, as well as an optional shell) now officially contain an mksh package (git pkg repo). There are some Instructions for activating EPEL (RHEL only), then just type yum install mksh. The OpenSuSE Build Service provides an official openSUSE_Factory mksh package inside the shells repo, provided by Dr. Werner Fink. A portable set of RPMs and an SRPM, made by Pascal “loki” Bleser, Marcus “darix” Rückert, and me, is part of the home project of mirabilos; it’s buildable on Debian and MirBSD as well, and can be used for Mandrake/Mandriva/Mageia and, thanks to ragnar76, Atari SpareMiNT. Void Linux includes an mksh template originally created by Ypnose. Fink delivers an mksh package from our own Andreas “gecko2” Gockel. MacPorts (used to be called DarwinPorts), thanks to Ryan Schmidt, now also has an mksh port. SMGL (Sourcemage GNU/Linux) has an mksh spell in their grimoire, developed by the MirOS Project together with Daniel “morfic” Goller, updated by Thomas “sobukus” Orgis and Vlad “Stealth” Glagolev. In the FreeWRT Embedded GNU/Linux Appliance Development Kit (meta distribution), now defunct, mksh was the default shell. - OpenADK — Open Source Appliance Development Kit (a FreeWRT 1.0 fork) contains mksh as default shell.
- OpenWrt Embedded GNU/Linux Distribution also provides mksh (rarely updated) on ADSL/WLAN routers thanks to Felix “nbd” Fietkau. The Android-x86 Project has mksh as /bin/sh since 2010-02-25. AOSP and the Google master have built mksh and ash since 2010-08-24/25, and it can be enabled as /system/bin/sh on a per-target basis or the default can be switched from ash, which is done for 3.0 and up. It’s hard to get updates in there, though. The Debian derivative from Canonical that cannot be named, the grml (and grml64) Live-CD, and other Debian derivatives also have an mksh package; Knoppix, SIDUX and Nexenta OS (GNU/Solaris) do not contain or offer mksh. Note: We need URLs to the packages for these; can anyone provide any? Arch GNU/Linux users can install an mksh package by Daniel “homsn” Hommel, promoted by Thorsten “Atsutane” Töpper, since the Arch Hurd guys were faster. T2 SDE (ROCK Linux) contains an outdated package as well. FreeBSD® Ports (for FreeBSD, very old DragonFly BSD versions and DesktopBSD) also have a port created by Andreas “ankon” Kohn and (sometimes…) kept up to date by Martin “miwi” Wilke and Olivier Duchateau and Rares “movl” Aioanei. It is unknown if this applies to PC-BSD too, but there’s no mksh PBI (yet?). MidnightBSD uses mports, a derivative of FreeBSD® ports. Naturally, they deliver mksh as well. MidnightBSD 0.2-CURRENT from 18th August 2007 onwards has mksh as both /bin/mksh and /bin/ksh, i.e. it is the default MidnightBSD Korn shell. From 29 March 2009 onwards, it is also the default /bin/sh (since MidnightBSD 0.3). NetBSD® pkgsrc® (native also on recent DragonFly BSD; available for many other operating systems as well) has a package kept up to date by Blair Sadewitz and our very own Ádám “replaced” Hóka and Dr. Benny “benz” Siegert, created by our Thorsten “mirabilos” Glaser. The Desktop NetBSD project also contains mksh; see the source of their meta package.
This will provide their users with a modern, fast, secure, featureful shell and enhance the experience. - Beastiebox also comes with mksh (sadly, apparently a one-shot import only) as an option. It’s NetBSD® based, mostly. The MirPorts Framework brings mksh to OpenBSD, Mac OSX and Interix as well as older MirOS BSD versions, which have mksh as native Korn Shell. An unofficial port for OpenBSD is available. Nobody dares commit it, though, so it only gets updated on request. - ChinaLinux mirrors (and apparently packages) mksh. - Frugalware Linux contains an orphaned and extremely old mksh package looking for a new maintainer. - Olivier Duchateau used to provide Slackware/Zenwalk GNU/Linux packages, but now updates the FreeBSD packages instead. A SlackBuild for mksh is now available from Markus Reichelt. (Open)Solaris packages exist courtesy of Matt “lewellyn” Lewandowsky but it’s unlikely they will get updated again due to Solaris losing all relevance ☹ The PLD Linux Distribution also has a package by Kacper “draenog” Kornet and Arkadiusz “arekm” Miśkiewicz - Alt Linux Sisyphus has an updated package finally - Sabotage Linux has a port of mksh, too. - Homebrew for Mac OSX has a Formula for mksh, too. - Jens Staal maintains a Plan 9 package of (modified) mksh source code, thanks! (Still got the kernel/APE bug, but this is somewhat usable.) - There are probably many more; please drop us a note! - Softpedia lists mksh, just like the FSF/UNESCO directory of Free Software. - Dag Wieërs had an RPMforge package, based on Fedora’s - The HMUG, some US-American Apple Users’ Group, used to package mksh for Darwin, too. They recommend MacPorts, Homebrew or Fink now. - OpenPKG had one, before it went down the commercial drain.
- Missing packaging: Mandriva/Mageia (use OBS; being worked on), OpenEmbedded (being worked on), iPhoneOS (compile yourself), Knoppix and SIDUX (just add them), Nexenta (need a contact person), Arch native (use Community), PC-BSD (use pkgsrc® or so; need a contact person for PBI), OpenSolaris, OpenBSD (use unofficial port), BeOS (maybe broken) / Haiku, Slackware native (use SlackBuild), MPE/iX (no response from the volunteers ☹), LynxOS (never got the 30-day eval version they promised me ☹), Syllable Desktop (broken, kernel issue, may have been fixed in the meantime), Pardus, NetBSD base system (under discussion; pkgsrc® has it), Maemo (old one in Garage) / MeeGo / Tizen / Mer, Palm WebOS, Cray Unicos, Data General DG/UX, DEC Mach, SINIX, Reliant UNIX, SunOS 4.x, … These packages are not official and have not always been tested by mksh developers; please keep this in mind. Users' Upgrade Caveat This does not necessarily list new features, only those which users should be aware of for existing scripts. current: HISTSIZE will, in a future version, be limited to 65535. (unfinished…) R57: Bugfix / rollup release, with only a few changes — it has some known bugs; no (known) regressions though. R56c: Bugfix-only release; some fixes might cause new warnings. R56b: Bugfix-only release R56: Aliases support ‘.’, ‘:’ and ‘[’ in names again (though “[[” is still forbidden). POSIX character classes and the BSD re_format(7) extensions [[:\<:]] and [[:\>:]] (quoted due to shell syntax issues) have been added (ASCII/EBCDIC only). Some experimental changes to the history code might help with the vanishing entries phenomenon. Switching to POSIX mode now disables UTF-8 mode and forces exec to a PATH search and to use execve(2). In dot.mkshrc the hd_mksh function to hexdump using builtins is now always available, and the default editor selection has changed: it’s moved to near the top of the file and now has a priority list users can change.
R55: The POSIX declaration utility concept is introduced, which also applies to commands having variable assignments and redirections preceding them. wait, however, does not keep assignments any longer. The new \builtin utility forwards the declaration utility flag exactly like command does. The new typeset -g replaces mksh’s previous home-grown global builtin, which is now deprecated and will be removed in a future version. Aliases are now expanded for command, function and value substitutions at parse time (like for functions, and excepting accent gravis-style substitutions), and typeset -f output is alias-resistant; furthermore, alias names are now limited to [A-Za-z0-9_!%,@], following POSIX, although a non-leading hyphen-minus is also permitted. print -R is now (correctly) roughly equivalent to the POSIX mode echo. The deltas between mksh and lksh, and between normal, POSIX and “SH” mode, are now properly documented in the manual pages. The let] hack is gone. ulimit -a output changed to display the associated flag. $PATHSEP is now pre-defined to ‘:’ (‘;’ on OS/2). $LINENO is now incremented better in eval and alias expansions. R54: Lazy evaluation side effects and set -e-related error propagation in || and && constructs are now handled properly. R53a: Tilde expansions of parameters (~, ~+, and ~-) now strip . and .. components from their results. The sample PS1 in dot.mkshrc was corrected for users whose home directories are præficēs of others’. Rotation operators were renamed from <<< and >>> to ^< and ^>. var=<< may now be used. Many fixes. R52c: Prepare to audit all uses of set +o. Handling of "`\"`" is now considered not a POSIX violation, until this issue is officially resolved. Our PDF manpages now use the PA4 paper size enabling printing without the need to scale or crop on both DIN ISO A4 and USA “letter” paper. The manpages now compile with an older version of the mdoc macropackage (in use by e.g. Schillix) installed.
command -pv and command -pV now behave in a POSIX-conformant way. R52b: Prepare to audit all scripts to ensure they begin with export LC_ALL=C as we’ll implement locale tracking some day. Handling of "`\"`" is now again not POSIX-compliant even in posix mode to unbreak existing code (Austin#1015, mktexlsr). set -C; :>foo is now race-free. Some bugfixes. R52: (( … )) is now a compound command, which changes the way I/O redirections work. ${x#~}, ${x/y/z}, etc. now have tilde expansion enabled. ${x//#y} no longer works; for anchored patterns use only one slash; quotes are now honoured better in such expressions, though. alias stop='\kill -STOP' is no longer defined by default anywhere; source is no longer an alias but a built-in utility, unbreaking it in some cases; the lksh hack to remove an alias upon function definition is removed. issetugid(2) is no longer used for set ±p checks, unbreaking suid in some cases. Handling of "`\"`" is now POSIX-compliant (this breaks scripts)! More bugfixes, although there are (sorry!) still some known bugs ☹ R51: Integers with bases outside of the permitted range are handled as base 10 instead of failing to parse, like ksh93. Korn shell style functions (function foo {) now have locally scoped shell options (e.g. set -o noglob) except in lksh. Much standard code is now protected from being overridden by aliases; the new enable function in dot.mkshrc can be used to enable or disable a built-in utility (such as rename) or function (including those in dot.mkshrc) by means of an alias. The feature to unalias an identifier when a POSIX-style function with the same name is defined only persists in lksh, as it is a legacy feature. cat(1), when a flag is given, and printf(1), now prefer an external utility over the builtin reliably. Several bugfixes, such as command -v now handling shell reserved words, impact compatibility. R50f: unset HISTFILE actually works. Several more bugfixes and robustness improvements.
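The set -C (noclobber) race fix mentioned for R52b above concerns this behaviour; a small POSIX sh sketch (the temporary file name is arbitrary):

```shell
# With set -C, a plain '>' redirection refuses to overwrite an
# existing file (and mksh now creates it race-free); '>|' forces it.
tmp=$(mktemp -d)
set -C
echo first > "$tmp/f"
if ! (echo second > "$tmp/f") 2>/dev/null; then
  echo 'refused to clobber'
fi
echo forced >| "$tmp/f"        # >| overrides noclobber
cat "$tmp/f"
set +C
```

The race-free part means `set -C; :>foo` can be used as a primitive lock file creation, since the existence check and the creation happen atomically.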
The mksh(1) manpage now documents how to enable/disable the UTF-8 mode based on the current POSIX locale according to how it’s done at startup on some OSes. R50e: Warning: do not use x=<< inside a function, it has never worked. Lots of POSIX compliance and bug fixes. New options for the exec builtin. R50d: Fixed a segfault and a regression in field splitting breaking update-initramfs. Sorry! Also, added a warning about not using unchecked user input in arithmetics — [[ $x = +([0-9]) ]] || die is a useful check. We’ll link a more detailed writeup about it later. R50c: $RANDOM is no longer exported. Field splitting has improved. This version fixes one security issue of low importance (details) which is mksh-specific, and mksh is not vulnerable to all those GNU bash bugs, some of which affect AT&T ksh93 as well. R50b: nameref can alias $1, etc. again. R50: Arithmetic expressions are now IFS-split, as per POSIX; this matches what the manpage always documented. Due to regressions, the arr=([index]=value) syntax (naming the indicēs during setting an array) is gone for now, and will not reappear in “set -A”, only in the “=(…)” syntax once we get its parsing fixed. Privileges are now dropped upon start unless the shell is started with “-p”. R49: The hash algorithm has changed (for, hopefully, the last time); the old algorithms are gone from dot.mkshrc too, and the ${foo@#} syntax no longer accepts a seed value (for more variety use the functions from dot.mkshrc; for hash tables, just xor and rotate the finished stable hash). Some terminal and other issues have been fixed, don’t be surprised. R48b: Bugfix for multi-line prompts. R48: The “doch” alias in dot.mkshrc now keeps standard input usable at the cost of the command to be run being logged by sudo(8). If you notice anything unusual (regression) in the interactive display code please report it; sections of that code have been refactored and improved. 
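The arithmetic input check recommended under R50d above can also be written portably; mksh accepts the [[ $x = +([0-9]) ]] form from the text, while the case-based sketch below (with a hypothetical is_uint helper) runs in any POSIX sh:

```shell
# Validate untrusted input before using it in arithmetic, so that
# attacker-controlled strings never reach $(( ... )) unchecked.
is_uint() {
  case $1 in
    ''|*[!0-9]*) return 1 ;;   # empty, or contains a non-digit
    *) return 0 ;;
  esac
}
x='42'
if is_uint "$x"; then
  echo $((x + 1))
else
  echo 'not a number' >&2
fi
```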
Recent Changes Changes in the current (unreleased) development version: - [lintian] hyphen-used-as-minus-sign (on jessie, not later, ‽‽‽) - [tg] Fix system info gcc dump{machine,version} shell escaping level - [tg] Document KSH_VERSIONNAME_VENDOR_EXT in Build.sh - [tg] Be more explicit about the LTO bug exposed by check.t R56c is a bugfix-only release everyone must upgrade to: - [komh] Remove redundant OS/2-specific code, clean up others - [komh, tg] Fix drive-qualified (absolute and relative) DOS-style path support in realpath functionality, partially other places - [tg] Don’t substitute ${ENV:-~/.mkshrc} result again - [tg] Improve OS/2 $PATH (et al.) handling, drive-relative paths - [tg] Add MKSH_ENVDIR compile-time option for Jehanne and Plan 9 - [tg] Limit nesting when parsing malformed code (Debian #878947) - [tg] Update wcwidth data with bugfixed script (still Unicode 10; resulting values are identical to glibc git master for extant chars) - [Dr. Werner Fink] Raise some time limits in the testsuite - [Shamar] Add support for the Jehanne operating system - [komh] Set stdin to text mode before executing child processes on OS/2 - [komh] Pass arguments via a response file if executing a child fails - [Dr. Werner Fink] Early locale tracking as a compile-time option - [tg] Fix regressions introduced with new fast character classes R56b is a bugfix-only release everyone should upgrade to: - [tg] Reference the FAQ webpage - [panpo, Riviera] Fix documentation bug wrt.
Esc+Ctrl-L - [tg, Larry Hynes] Fix “0” movement in vi mode - [tg] Replace broken libcs’ offsetof macro with MirBSD’s R56 is a bugfix release with some experimental fixes: - [tg, Seb] Do not apply alias name restrictions to hash/tilde tracking - [tg] Restore ‘.’, ‘:’ and ‘[’ in alias names (“[[” is still forbidden) - [tg] Fix accidentally defanged $PATHSEP test - [tg] On ^C (INTR and QUIT edchars), shove edit line into history - [iSKUNK, tg] Begin porting to z/OS using EBCDIC encoding, incomplete - [tg] Redo fast character classes code, adding POSIX and other helpers - [tg] bind parses backslash-escaped ‘^’ (and ‘\’) as escaped - [tg] Building with -DMKSH_ASSUME_UTF8=0 no longer causes a known failure in the testsuite - [tg] New test.sh option -U to pass a UTF-8 locale to use in the tests - [tg] re_format(7) BSD: [[ $x = *[[:\<:]]foo[[:\>:]]* ]] - [tg, iSKUNK] Use Config in check.pl only if it exists - [tg] New matching code for bracket expressions, full POSIX (8bit) - [komh] Exclude FAT/HPFS/NTFS-unsafe tests on OS/2 (and Cygwin/MSYS) - [tg] Update to Unicode 10.0.0 - [tg, selk] Make readonly idempotent - [tg, multiplexd] When truncating the persistent history, do not change the underlying file, do all operations on the locked one; do not stop using the history at all if it has been truncated - [tg, Jörg] Turn off UTF-8 mode upon turning on POSIX mode - [Martijn Dekker, Geoff Clare, many on the Austin list, tg] In POSIX mode, make the exec builtin force a $PATH search plus execve - [tg] Fix GCC 7, Coverity Scan warnings - [tg, Michal Hlavinka] Track background process PIDs even when interactive - [tg] Always expose mksh’s hexdump shell function; speed it up by working on the input in chunks; use character classes to make it EBCDIC safe - [tg] Revamp dot.mkshrc default editor selection mechanism R55 is mostly a feature release with summary bugfixes: - [komh] Fix OS/2 search_access() and UNC path logic - [tg] Undocument printf(1) to avoid user confusion - [Jean Delvare,
tg] Fix printf builtin -R option - [tg] Make ${var@x}, unknown x, fail (thanks izabera) - [tg] ${var=x} must evaluate x in scalar context (10x Martijn Dekker) - [tg] Fixup relation between lksh and mksh, reduce delta - [tg] Improve manpage display; add OS/2 $PATH FAQ - [Jean Delvare] Fix bugs in manpage - [tg] Review tilde expansion, removing “odd use of KEEPASN” and introduce POSIX “declaration utility” concept; wait isn’t one - [tg] Add \builtin utility, declaration utility forwarder - [tg] Make $'\xz' expand to xz, not \0 - [tg] Use fixed string pooling (requires the above change in host mksh) - [tg] POSIX declaration commands can have varassign and redirections - [Martijn Dekker] Add typeset -g, replacing homegrown “global” - [Harvey-OS] Disable NOPROSPECTOFWORK, APEX is reportedly fixed now - [tg] Display ulimit -a output with flags; improve Haiku - [tg] Drop old let] hack, use \builtin internally - [tg] Fix padding in Lb64encode in dot.mkshrc - [tg] Move FAQ content to a separate, new FAQ section in the manpage - [tg] Add new standard variable PATHSEP (‘:’, ‘;’ on OS/2) - [Martijn Dekker] Fix LINENO in eval and alias - [komh] Fix “\builtin” on OS/2 - [tg] Improve (internal) character classes code for speed - [tg] Fix: the underscore is no drive letter - [tg] No longer hard-disable persistent history support in lksh - [tg] Introduce build flag -T for enabling “textmode” on OS/2 (supporting CR+LF line endings, but incompatible with mksh proper) - [tg] Merge mksh-os2 - [tg] Permit changing $OS2_SHELL during a running shell - [tg] Fix multibyte handling in ^R (Emacs search-history) - [tg] Allow “typeset -p arrname[2]” to work - [tg] Make some error messages more consistent - [tg, komh] Disable UTF-8 detection code for OS/2 as unrealistic - [tg, sdaoden] Limit alias name chars to POSIX plus non-leading ‘-’ - [tg, Martijn Dekker] Expand aliases at COMSUB parse time - [tg] Make “typeset -f” output alias-resistant - [tg, Martijn Dekker] Permit “eval break” and “eval
continue” - [tg] Make -masm=intel safe on i386 - [tg] Disambiguate $((…)) vs. $((…)…) in “typeset -f” output - [Jean Delvare] Clarify the effect of exit and return in a subshell - [tg] Simplify compile-time asserts and make them actually compile-time - [tg] Fix ^O in Emacs mode if the line was modified (LP#1675842) - [tg] Address Coverity Scan… stuff… now that it builds again - [Martijn Dekker, tg] Add test -v - [tg] Document set -o posix/sh completely R54 is a bugfix release with moderate new features: - [tg] Simplify and improve code and manual page - [tg] Try GCC 5’s new -malign-data=abi - [tg] Allow interrupting builtin cat even on fast devices (LP#1616692) - [tg] Update to Unicode 9.0.0 - [Andreas Buschka] Correct English spelling - [tg] Handle set -e-related error propagation in || and && constructs correctly - [tg] Initialise memory for RNG even when not targeting Valgrind - [tg] Shrink binary size - [Brian Callahan] Improve support for the contemporary pcc compiler - [tg] Fix side effects with lazy evaluation; spotted by ormaaj - [tg] New flags -c (columnise), -l, -N for the print builtin - [Larry Hynes] Fix English, spelling mistakes, typos in the manpage - [tg, ormaaj] Return 128+SIGALRM if read -t times out, like GNU bash - [Martijn Dekker] Install both manpages from Build.sh - [Martijn Dekker] Document case changes are ASCII-only - [Ronald G. 
Minnich, Elbing Miss, Álvaro Jurado, tg] Begin porting to Harvey-OS and APEX (similar to Plan 9 and APE) - [KO Myung-Hun] More infrastructure for the OS/2 (EMX, KLIBC) port R53a is a snapshot/feature release: - [Jörg] … R52c is a bugfix-only release: - [tg] Shave 200 bytes off .text by revisiting string pooling - [tg, Jörg] Fix manpage for ditroff on Schillix - [tg, wbx] Use sed 1q instead of unportable head(1) - [tg] Implement underrun debugging tool for area-based memory allocator - [tg] Fix history underrun when first interactive command is entered - [tg, bef0rd] Do not misinterpret “${0/}” as “${0//”, fixes segfault - [tg, Stéphane Chazelas] Fix display problems with special parameters - [tg, Stéphane Chazelas] Catch attempt to trim $* and $@ with ?, fixes segfault (Todd Miller did this in 2004 for ${x[*]} already, so just sync) - [Martijn Dekker] Fix “command -p” with -Vv to behave as POSIX requires - [tg, jilles, Oleg Bulatov] Fix recursive parser with active heredocs - [tg] Flush even syntax-failing or interrupted commands to history - [tg, fmunozs] Fix invalid memory access for “'\0'” in arithmetics - [tg] Explicitly reserve SIGEXIT and SIGERR for ksh - [tg, izabera] Catch missing here documents at EOF even under “set -n” - [kre, tg] Document Austin#1015 handling (not considered a violation) - [tg, fmunozs] Fix buffer overread for empty nameref targets - [tg] Fix warnings pointed out by latest Debian gcc-snapshot - [tg, Martijn Dekker] Document upcoming set +o changes - [Martijn Dekker] Expand testsuite for command/whence R52b is a strongly recommended bugfix-only release: - [tg] Recognise ksh93 compiled scripts and LZIP compressed files as binary (i.e.
to not run as mksh plaintext script) - [tg] Document that we will implement locale tracking later - [tg] Add EEXIST to fallback strerror(3) - [jilles] Make set -C; :>foo race-free - [tg] Don’t use unset in portable build script - [tg] Plug warning on GNU/kFreeBSD, GNU/Hurd - [tg] Document read -a resets the integer base - [Jörg] Fix manpage: time is not a builtin but a reserved word - [Jörg, tg] Make exit (and return) eat -1 - [tg] parse “$( (( … ) … ) … )” correctly (LP#1532621), Jan Palus - [tg] reduce memory footprint by free(3)ing more aggressively - [tg] fix buffer overrun (LP#1533394), bugreport by izabera - [tg] correctly handle nested ADELIM parsing (LP#1453827), Teckids - [tg] permit “read -A/-a arr[idx]” as long as only one element is read; fix corruption of array indicēs with this construct (LP#1533396), izabera - [tg] Sanitise OS-provided signal number in even more places - [tg] As requested by Jörg, be clear manpage advice is for mksh - [tg] Revert (as it was a regression) POSIX bugfix from R52/2005 related to accent gravis-style command substitution until POSIX decides either way - [tg] Handle export et al.
after command (Austin#351) - [tg] Catch EPIPE in built-in cat and return as SIGPIPE (LP#1532621) - [tg] Fix errno in print/echo builtin; optimise that and unbksl - [tg] Update documentation, point out POSIX violation (Austin#1015) R52 is a strongly recommended bugfix release: - [_0bitcount] Move moving external link from mksh(1) to the #ksh channel homepage linked therein - [tg] Make setenv “set -u”-safe and fix when invoked with no args - [tg] Make “typeset -f” output reentrant if name is a reserved word - [oksh] Zero-pad seconds in “time” output to align columns - [tg] Check signals and errorlevels from OS to be within bounds - [komh, tg] Quote and document ‘;’ as PATH separator in some places - [oksh, tg] Simplify code to call afree() even if arg is NULL - [tg] Fix tree-printing and reentrancy of multiple here documents - [tg] Work around LP#1030581 by permitting exactly one space after - [tg, oksh] Code quality work, cleanups - [tg] New code for here documents/strings with several bugfixes - [tg] Stop using issetugid(2) for ±p checks, wrong tool for the job - [tg] Reintroduce some -o posix changes lost in 2005, plus fixes - [tg] Make “source” into a built-in command - [tg] Drop “stop” alias, lksh(1) functionality to auto-unalias - [tg] Fix \u0000 ignored in $'…' and print - [tg] Improve portability of Build.sh - [Jilles Tjoelker] Improve portability of testsuite - [tg] Fix tilde expansion for some substitutions (izabera, Chet, Geoff) - [tg] Improve reparsing of ((…) |…) as ( (…) |…) - [Martijn Dekker] Fix test(1) not returning evaluation errors - [tg] Fix ${*:+x} constructs (carstenh) - [tg] Make (( … )) into a compound command (ormaaj) - [tg] Repair a few parameter substitution expansion mistakes R51 is a strongly recommended feature release: - [tg] OpenBSD sync: handle integer base out of band like ksh93 does - [tg] Protect standard code (predefined aliases, internal code, aliases and functions in dot.mkshrc) from being overridden by aliases and, in some 
cases, shell functions (i.e. permit overriding but ignore it)
- [tg] Implement GNU bash’s enable for dot.mkshrc using magic aliases to redirect the builtins to external utilities; this differs from GNU bash in that enable takes precedence over functions
- [tg] Move unaliasing an identifier when defining a POSIX-style function with the same name into lksh, as compatibility kludge
- [tg] Korn shell style functions now have locally scoped shell options
- [tg, iSKUNK] Change some ASCII-isms to be EBCDIC-aware or pluggable
- [tg, Ypnose] Mention lksh build instructions on manpage and website
- [tg] Overhaul signal handling; support new POSIX NSIG_MAX, add sysconf(_SC_NSIG) as a later TODO item
- [tg] Fix signal bounds (1 ≤ signum < NSIG)
- [tg] Improve manual pages, especially wrt. standards compliance
- [tg, iSKUNK] Initial EBCDIC work for dot.mkshrc
- [tg, iSKUNK] Add list of z/OS signals to Build.sh
- [tg] Work around the sh(1) backslash-newline problem by moving the code triggering it out of *.opt and into the consumers
- [colona] Bind another well-known ANSI Del key in the Emacs mode
- [tg] Fix ${foo/*/x} pattern checks, spotted by izabera
- [carstenh] Fix error output of cd function in dot.mkshrc
- [tg] read partial returns in -N and timeout cases
- [tg] Fix $LINENO inside PS1; spotted by carstenh
- [tg] Ensure correct padding of at least 2 spaces in print_columns
- [tg] Note issues with nested complex parameter expansions and follow-up bugfixes to expect
- [OpenBSD] Some language fixes in documentation; comments
- [tg] Reimplement multi-line command history (Debian #783978) + fixes
- [Martijn Dekker] Fix command -v for “shell reserved words”
- [tg] In dot.mkshrc make use of latest feature: local options
- [tg] Fix ""$@ to emit a word
- [tg] Change cat(1) hack to look first and not ignore builtin
- [KO Myung-Hun] Begin porting mksh to OS/2
- [komh, tg] Some generic minor bugfixes from OS/2 porting
- [tg] Document mknod(8) isn’t normally part of mksh(1)
- [tg] Quote arguments to : in build/test scripts as well
- [tg] Add cat(1) hack for printf(1)-as-builtin: always prefer external
- [tg] Explicitly use binary mode for any and all file I/O in stock mksh
- [Ilya Zakharevich] Use termio, not termios(4), on OS/2
- [tg] Set edchars to sane BSD defaults if any are NUL
- [tg] Implement support for PC scancodes in Vi and Emacs editing mode
- [komh] OS/2 uses ‘;’ as PATH separator plus support drive letters

Changes in the current (unreleased) R50-stable branch:
- [tg] Correct some mistakes in the manual page
- [tg] Fix a bug in the testsuite driver, spotted on EBCDIC systems

R50f is a required security and bugfix release:

R50e is a required bugfix release:
- [tg] Add more tests detailing behaviour difference from GNU bash
- [tg] Introduce a memory leak for x=<< fixing use of freed memory instead, bug tracked as LP#1380389 still live
- [tg] Add x+=<< parallel to x=<<
- [tg, ormaaj, jilles] POSIX “command” loses builtin special-ness
- [tg] Fix LP#1381965 and LP#1381993 (more field splitting)
- [jilles] Update location of FreeBSD testsuite for test(1)
- [Martin Natano] Remove dead NULL elements from Emacs keybindings
- [tg, Stéphane Chazelas, Geoff Clare] Change several testcases for $*/$@ expansion with/without quotes to expected-fail, with even more to come ☹
- [tg] Fix miscalculating required memory for encoding the double-quoted parts of a here document or here string delimiter, leading to a buffer overflow; discovered by zacts from IRC
- [RT] Rename a function conflicting with a MacRelix system header
- [tg] Use size_t (and ssize_t) consistently, stop using ptrdiff_t; fixes some arithmetics and S/390 bugs
- [tg] Remove old workarounds for Clang 3.2 scan-build
- [tg] Remove all Clang/Coverity assertions, making room for new checks
- [tg] Fix NSIG generation on Debian sid gcc-snapshot
- [tg] Make a testcase not fail in a corner case
- [tg] Fix issues detected by GCC’s new sanitisers: data type of a value to be shifted constantly must be unsigned (what not, in C…); shebang check array accesses are always unsigned char
- [tg] Be even more explicit wrt. POSIX in the manpage
- [tg] Fix shebang / file magic decoding
- [tg] More int → bool conversion
- [tg] Let Build.sh be run by GNU bash 1.12.1 (Slackware 1.01)
- [Stéphane Chazelas, tg] Fix here string parsing issue
- [tg] Point out more future changes in the manpage
- [tg] Call setgid(2), setegid(2), setuid(2) before seteuid(2)
- [tg] Fix spurious empty line after ENOENT “whence -v”, found by Ypnose
- [tg] Optimise dot.mkshrc and modernise it a bit
- [tg] Use MAXPATHLEN from <sys/param.h> for PATH_MAX fallback
- [tg] Some code cleanup and warnings fixes
- [tg] Add options -a argv0 and -c to exec
- [jsg] Prevent use-after-free when hitting multiple errors unwinding
- [tg] Fix use of $* and $@ in scalar context: within [[ … ]] and after case (spotted by Stéphane Chazelas) and in here documents (spotted by tg@); fix here document expansion
- [tg] Unbreak when $@ shares double quotes with others
- [tg] Fix set -x in PS4 expansion infinite loop

R50d is a required bugfix release:
- [Goodbox] Fix NULL pointer dereference on “unset x; nameref x”
- [tg] Fix severe regression in field splitting (LP#1378208)
- [tg] Add a warning about not using tainted user input (including from the environ(7)ment) in arithmetics, until Stéphane writes it up nicely

R50c is a security fix release:
- [tg] Know more rare signals when generating sys_signame[] replacement
- [tg] OpenBSD sync (mostly RCSID only)
- [tg] Document HISTSIZE limit; found by luigi_345 on IRC
- [zacts] Fix link to Debian .mkshrc
- [tg] Cease exporting $RANDOM (Debian #760857)
- [tg] Fix C99 compatibility
- [tg] Work around klibc bug causing a coredump (Debian #763842)
- [tg] Use issetugid(2) as additional check if we are FPRIVILEGED
- [tg] SECURITY: do not permit += from environment
- [tg] Fix more field splitting bugs reported by Stephane Chazelas and mikeserv; document current status wrt. ambiguous ones as testcases too

R50b is a recommended bugfix release:
- [Ypnose] Fix operator description in the manpage
- [tg] Change all mention of “eglibc” to “glibc”, it is merged back
- [Colona] Fix rare infinite loop with invalid UTF-8 in the edit buffer
- [tg] Make more clear when a shell is interactive in the manpage
- [tg] Document that % is a symmetric remainder operation, and how to get a mathematical modulus from it, in the manpage
- [tg, Christopher Ferris, Elliott Hughes] Make the cat(1) builtin also interruptible in the write loop, not just in the read loop, and avoid it getting SIGPIPE in the smores function in dot.mkshrc by terminating cat upon user quit
- [tg] Make some comments match the code, after jaredy from obsd changed IFS split handling
- [tg] Fix some IFS-related mistakes in the manual page
- [tg] Document another issue as known-to-fail test IFS-subst-3
- [tg] Improve check.pl output in some cases
- [tg, Jb_boin] Relax overzealous nameref RHS checks

R50 is a recommended bugfix release:
- [tg] Fix initial IFS whitespace not being ignored when expanding
- [tg] MKSH_BINSHREDUCED no longer mistakenly enables brace expansion
- [tg] Explain more clearly Vi input mode limitations in the manpage
- [tg] Improve error reporting of the check.pl script (which needs a maintainer since I don’t speak any perl(1), really), for lewellyn
- [tg] Use $TMPDIR in test.sh for scratch space
- [tg, Polynomial-C] Check that the scratch space is not mounted noexec
- [pekster, jilles, tg] Use termcap(5) names, not terminfo(5) names, in tput(1) examples, for improved portability (e.g. to MidnightBSD)
- [tg] Avoid C99 Undefined Behaviour in mirtoconf LFS test (inspired by Debian #742780)
- [tg] Fix ${!foo} for when foo is unset
- [tg] Improve nameref error checking (LP#1277691)
- [tg] Fix readonly bypass found by Bert Münnich
- [Ryan Schmidt] Improved system reporting for Mac OS X
- [nDuff] Explain better [[ extglob handling in the manpage
- [tg] Remove arr=([index]=value) syntax due to regressions
- [tg] IFS-split arithmetic expansions as per POSIX 201x
- [OpenBSD] Add more detailed Authors section to manpage
- [tg] Fix set ±p issue for good: drop privs unless requested
- [tg] Improve signal handling and use a more canonical probing order
- [tg] Fix return values $? and ${PIPESTATUS[*]} interaction with set -o pipefail and COMSUBs
- [enh] Detect ENOEXEC ELF files and use a less confusing error message
- [tg] Update to Unicode 7.0.0
- [tg] Shut up valgrind in the $RANDOM code
- [tg] Use -fstack-protector-strong in favour of -fstack-protector-all
- [tg] Fix access-after-free crash spotted by Enjolras via IRC

R49 is a recommended bugfix release:

R48b is a minor bugfix update:
- [tg] Fix display issue with multi-line prompts and SIGWINCH

R48 is a small but important bugfix update:
- [tg] dot.mkshrc: unbreak hd(1) function in UTF-8 mode
- [Jens Staal, tg] Improve buildability on Plan 9 and support kencc
- [tg] Clean up and improve build process and testsuite
- [Michael Langguth] Add multi-layer ICO file from mksh/Win32
- [tg, Steffen Daode Nurpmeso] Fix interactive shell exiting on ^C or syntax error when the EXIT pseudo-signal trap was set (to anything)
- [tg, Daode] Display longer command excerpts in job control
- [tg] Rewrite Emacs mode display window sliding calculation code
- [tg] dot.mkshrc: “doch” now keeps standard input
- [tg] Reduce memory usage and improve comments and documentation

Future Plans
- fix bind -m '@=^[ echo $RANDOM^[^E'; bind @ output expansion
- limit HISTSIZE to 65535; fix concurrency with truncation
- add a pre-exec hook for liwakura
- bind keys of dynamic length ‣ begun in a branch
- dynamic input LINE length, instead of a hardcoded value at compile time
- cache optimised extglobs (especially for ${foo/bar/baz})
- typed variables; using ${var@?} more (JSON!)
- make arrays use hashtables internally ipv linear linked lists
- associative, multidimensional arrays
- Build with more platforms’ native tools or other compilers — ACK, kencc, ICC/UWIN, egcs/UWIN
- Build and actually work on more platforms — DJGPP, PW32, Plan 9, Syllable, NuttX — debug these
- Bugfix for suspending a && b chain
- “process substitution” echo diff <(echo 1) <(echo 2)
- whence -p -a foo
- read -e (or something): use the edit.c stuff for inputting
- ↑ actually make a libmkshedit, see my mirtoconf v2 plans
- Allow trimming arrays (e.g. ${foo[*]@Q})
- Add -DMKSH_BAIKA_NO_JUTSU (trade more size for more speed)
- Someone should run mksh through afl and fix everything it finds
- older/unrealistic plans
  - … such as a better website, more clearly arranged, etc…
  - Although there is definite need, eventually, to have 64-bit arithmetics, possibly using a long typeset flag and something similar to $((#x)) for unsigned arithmetics
  - arbitrary-precision arithmetics?
  - make KSH_MATCH into an array
  - something like ${${foo#bar}%baz} like other shells do?
  - some sort of mktemp(1) builtin; the functionality is already mostly there, but we keep it open already in C and don’t do dirs, this must be rethinked wrt. UI (the UI can also be used for some file-based locking builtin)
  - The DEBUG trap (although GNU bash also has a RETURN trap, and both bash and ksh93 write lots about this, so careful design is required before attempting this) → Ypnose
  - AT&T ksh93 discipline functions are occasionally mentioned. Shell-private variable namespaces have their use. Accessors too.
  - Possibly x=<&0 or $(<&0)
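The R50b entry above documents that % is a symmetric remainder operation and that the manpage shows how to derive a mathematical modulus from it. The trick is language-neutral; a sketch in JavaScript, whose % also follows truncated division, so the behaviour matches:

```javascript
// With truncated division, the remainder takes the dividend's sign:
console.log(-7 % 3); // -1, not the mathematical modulus 2

// The usual fix: add the divisor back and reduce once more.
function mod(a, b) {
  return ((a % b) + b) % b;
}

console.log(mod(-7, 3)); // 2
console.log(mod(7, 3));  // 1
```

The same expression works in mksh arithmetic, C, and any other language with a truncating % operator.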
https://www.mirbsd.org/mksh.htm
This is for a class. Two of the files are provided, and I need to write the third. I'm having trouble with a constructor in a class. The constructor is called with 'Logbook testLog(month,year)' and it saves the month and year correctly. The daysInMonth function works fine.

My problem is this: when I enter a value with the putEntry function, the days that are set to 0 only go up to the month's day. If the month is two, day one and two have a value of 0, and the rest are junk (something like 21345, I think the memory address). I THINK it is the "Logbook::Logbook (int month, int year)" function, but I'm going to try something else as well.

#include "logbook.h"
#include <windows.h>

Logbook::Logbook ()
// Constructs an empty logbook for the specified month.
{
    SYSTEMTIME st;
    logMonth = st.wMonth;
    logYear = st.wYear;
    for (int day = 1 ; day <= daysInMonth() ; day++)
    {
        entry[day] = 0;
    }
}

Logbook::Logbook (int month, int year)
// Constructs an empty logbook for the specified month.
{
    logMonth = month;
    logYear = year;
    // make sure real month/year
    int MAXDay = daysInMonth();
    for ( day = 1 ; day <= daysInMonth() ; day++ )
    {
        entry[day] = 0;
    }
}

incomplete for space... here is the rest of the code.

void Logbook::putEntry ( int day, int value )
int Logbook::getEntry ( int day ) const
int Logbook::month () const
int Logbook::year () const
int Logbook::operator [] ( int day ) const
void Logbook::operator += ( const Logbook &rightLogbook )
int Logbook::daysInMonth () const

If you need more code, please tell! (I understand the concept of a class, but I think I'm missing something.)
https://www.daniweb.com/programming/software-development/threads/169887/for-class-problem-with-class-constructor-loop
I've just started learning about BTrees and B+ Trees and I'm a little lost. I have 3 classes that implement the insert method for BTrees: BTreeNode, BTree, and BTreeTest. What I want to know is, besides changing the number of keys for the internal nodes from 2T-1 to M-1, the children from 2T to M, the number of keys for the leafs from 2T-1 to L, and the number of children from 2T to M, how else would I need to change my code in order for it to work for B+ Trees?

On my BTreeNode class I have the insert method, and on the BTree class I have it to where it checks if you have to split the root before inserting a new key. The duplicate method just makes sure there's no duplicates. These are my methods:

public class BTreeNode{
    public int[] key;
    public BTreeNode[] c;
    boolean isLeaf;
    public int[] nextLeaf;
    public int n;
    private int T; // Each node has at least T-1 and at most 2T-1 keys
    public int min;
    public int max;

    public BTreeNode(int t){
        T = t;
        isLeaf = true;
        key = new int[2*T-1];
        c = new BTreeNode[2*T];
        n = 0;
    }

    public void insert(int newKey){
        // Insert new key to current node
        // Make sure that the current node is not full by checking and
        // splitting if necessary before descending to node
        if(duplicate(newKey) == false){
            int i = n-1;
            if(isLeaf){
                while((i >= 0) && (newKey < key[i])) {
                    key[i+1] = key[i];
                    i--;
                }
                n++;
                key[i + 1] = newKey;
            }
            else{
                while ((i >= 0) && (newKey < key[i])) {
                    i--;
                }
                int insertChild = i + 1; // Subtree where new key must be inserted
                if(c[insertChild].isFull()){
                    // The root of the subtree where new key will be inserted has to be split
                    // update keys and references accordingly
                    n++;
                    c[n] = c[n-1];
                    for(int j = n - 1; j > insertChild; j--){
                        c[j] = c[j-1];
                        key[j] = key[j-1];
                    }
                    key[insertChild] = c[insertChild].key[T - 1];
                    c[insertChild].n = T - 1;
                    BTreeNode newNode = new BTreeNode(T);
                    for(int k = 0; k < T - 1; k++){
                        newNode.c[k] = c[insertChild].c[k + T];
                        newNode.key[k] = c[insertChild].key[k + T];
                    }
                    newNode.c[T - 1] = c[insertChild].c[2*T-1];
                    newNode.n = T-1;
                    newNode.isLeaf = c[insertChild].isLeaf;
                    c[insertChild+1] = newNode;
                    if(newKey < key[insertChild]){
                        c[insertChild].insert(newKey);
                    }
                    else{
                        c[insertChild + 1].insert(newKey);
                    }
                }
                else
                    c[insertChild].insert(newKey);
            }
        }
        else
            return;
    }
}

public class BTree{
    private BTreeNode root;
    private int T; // 2T is the maximum number of children a node can have
    private int height;
    private int firstLeaf;

    public void insert(int newKey){
        if (root.isFull()){ // Split root
            split();
            height++;
        }
        root.insert(newKey);
    }
}

Any help would be appreciated. Thanks in advance.
http://www.javaprogrammingforums.com/whats-wrong-my-code/36549-b-tree-b-tree.html
Hey there. I am new to this forum, just joined, and am here to learn from others and hopefully help others if I can. I am taking a 'fundamentals of programming' course at a community college. I have been studying the book 'C++ Programming' by D.S. Malik, sixth edition. Recently in my class, we had to create different simplistic switch statements and if-else statements.

Knowing this, the first task was to have the user enter two integers. After getting the two integers (via a couple 'cin >>' variable calls), the user was to input a number or letter that coincided with their choice for what arithmetic process they wanted the numbers to undergo.

What I wanted to do, though I wasn't supposed to do it since I was farther ahead in understanding than the class, was to create a fail-safe whereby if the user inputs a letter/character instead of a number at the beginning, the program would end and prompt the user to close the program and begin again. I did not write the code in question, but found it off of the internet.

I will paste half of my code below (the other half of it doesn't pertain to this issue). Any ideas anyone? Thank you so much for your time and effort! If there are any questions pertaining to my code (like if I've left something out that you need to see to be able to fully analyze my problem) don't hesitate to let me know and I'll give it to you or explain it further. Thank you so much again everyone and I really appreciate it!

#include <iostream>
#include <iomanip>
#include <cmath>
#include <limits>
using namespace std;

int main()
{
    double x;
    double y;
    char (op);
    char (ch);
    double result;

    cout << "Hello User!" << endl;
    cout << "Please input 2 numbers and hit enter." << endl;
    cin >> x;
    cin >> y;

    //Code below was 'fail-safe' code I was trying to implement, but failed.
    while(!(cin >> x, y))
    {
        cin.clear();
        cin.ignore(numeric_limits<streamsize>::max(), '\n');
        cout <<"This is not an applicable input. Please restart program and try again."<< endl << endl;
        return 1;
    }

    cout << "Please select one of 4 processes below to utilize to manipulate your \ntwo numbers: " << endl;
    cout << "By typing 'a', you will add the two numbers. \nBy typing 's' you will subtract the two numbers."
         << "\nBy typing 'm', you will multiply the two numbers. \nBy typing 'd', you will divide the two numbers." << endl;
    cin >> ch;

    if (ch == 'a' || ch == 'A'){
        result = x + y;
        cout << "Your result is: " << result << endl;
    }
    else if (ch == 's' || ch == 'S'){
        result = x - y;
        cout << "Your result is: " << result << endl;
    }
    else if (ch == 'm' || ch == 'M'){
        result = x * y;
        cout << "Your result is: " << result << endl;
    }
    else if (ch == 'd' || ch == 'D'){
        result = x / y;
        cout << "Your result is: " << result << endl;
    }

    cout << endl;
http://forums.codeguru.com/showthread.php?527803-New-to-this-forum-with-Loop-If-Else-Switch-statements-Question
How to Build a React App that Works with a Rails 5.1 API

React + Ruby on Rails = 🔥

React has taken the frontend development world by storm. It’s an excellent JavaScript library for building user interfaces. And it’s great in combination with Ruby on Rails. You can use Rails on the back end with React on the front end in various ways.

In this hands-on tutorial, we’re going to build a React app that works with a Rails 5.1 API. You can watch a video version of this tutorial here.

To follow this tutorial, you need to be comfortable with Rails and know the basics of React. If you don’t use Rails, you can also build the API in the language or framework of your choice, and just use this tutorial for the React part.

The tutorial covers stateless functional components, class-based components, using Create React App, use of axios for making API calls, immutability-helper and more.

What We’re Going to Build

We’re going to build an idea board as a single page app (SPA), which displays ideas in the form of square tiles. You can add new ideas, edit them and delete them. Ideas get auto-saved when the user focuses out of the editing form. At the end of this tutorial, we’ll have a functional CRUD app, to which we can add some enhancements, such as animations, sorting and search in a future tutorial.

You can see the full code for the app on GitHub.

Setting up the Rails API

Let’s get started by building the Rails API. We’ll use the in-built feature of Rails for building API-only apps. Make sure you have version 5.1 or higher of the Rails gem installed.

gem install rails -v 5.1.3

At the time of writing this tutorial, 5.1.3 is the latest stable release, so that’s what we’ll use. Then generate a new Rails API app with the --api flag.

rails new --api ideaboard-api
cd ideaboard-api

Next, let’s create the data model. We only need one data model for ideas with two fields — a title and a body, both of type string.
Let’s generate and run the migration:

rails generate model Idea title:string body:string
rails db:migrate

Now that we’ve created an ideas table in our database, let’s seed it with some records so that we have some ideas to display. In the db/seeds.rb file, add the following code:

ideas = Idea.create(
  [
    { title: "A new cake recipe", body: "Made of chocolate" },
    { title: "A twitter client idea", body: "Only for replying to mentions and DMs" },
    { title: "A novel set in Italy", body: "A mafia crime drama starring Berlusconi" },
    { title: "Card game design", body: "Like Uno but involves drinking" }
  ])

Feel free to add your own ideas. Then run:

rails db:seed

Next, let’s create an IdeasController with an index action in app/controllers/api/v1/ideas_controller.rb:

module Api::V1
  class IdeasController < ApplicationController
    def index
      @ideas = Idea.all
      render json: @ideas
    end
  end
end

Note that the controller is under app/controllers/api/v1 because we’re versioning our API. This is a good practice to avoid breaking changes and provide some backwards compatibility with our API. Then add ideas as a resource in config/routes.rb:

Rails.application.routes.draw do
  namespace :api do
    namespace :v1 do
      resources :ideas
    end
  end
end

Alright, now let’s test our first API endpoint!
First, let’s start the Rails API server on port 3001:

rails s -p 3001

Then, let’s test our endpoint for getting all ideas with curl:

curl -G http://localhost:3001/api/v1/ideas

And that prints all our ideas in JSON format:

[{"id":18,"title":"Card game design","body":"Like Uno but involves drinking","created_at":"2017-09-05T15:42:36.217Z","updated_at":"2017-09-05T15:42:36.217Z"},{"id":17,"title":"A novel set in Italy","body":"A mafia crime drama starring Berlusconi","created_at":"2017-09-05T15:42:36.213Z","updated_at":"2017-09-05T15:42:36.213Z"},{"id":16,"title":"A twitter client idea","body":"Only for replying to mentions and DMs","created_at":"2017-09-05T15:42:36.209Z","updated_at":"2017-09-05T15:42:36.209Z"},{"id":15,"title":"A new cake recipe","body":"Made of chocolate","created_at":"2017-09-05T15:42:36.205Z","updated_at":"2017-09-05T15:42:36.205Z"}]

We can also test the endpoint in a browser by going to http://localhost:3001/api/v1/ideas.

Setting up Our Front-end App Using Create React App

Now that we have a basic API, let’s set up our front-end React app using Create React App. Create React App is a project by Facebook that helps you get started with a React app quickly without any configuration.

First, make sure you have Node.js and npm installed. You can download the installer from the Node.js website. Then install Create React App by running:

npm install -g create-react-app

Then, make sure you’re outside the Rails directory and run the following command:

create-react-app ideaboard

That will generate a React app called ideaboard, which we’ll now use to talk to our Rails API. Let’s run the React app:

cd ideaboard
npm start

This will open it on http://localhost:3000.

The app has a default page with a React component called App that displays the React logo and a welcome message. The content on the page is rendered through a React component in the src/App.js file.

Our First React Component

Our next step is to edit this file to use the API we just created and list all the ideas on the page.
Let’s start off by replacing the Welcome message with an h1 tag with the title of our app ‘Idea Board’. Let’s also add a new component called IdeasContainer. We need to import it and add it to the render function:

import React, { Component } from 'react'
import './App.css'
import IdeasContainer from './components/IdeasContainer'

class App extends Component {
  render() {
    return (
      <div className="App">
        <div className="App-header">
          <h1>Idea Board</h1>
        </div>
        <IdeasContainer />
      </div>
    );
  }
}

export default App

Let’s create this IdeasContainer component in a new file in src/IdeasContainer.js under a src/components directory.

import React, { Component } from 'react'

class IdeasContainer extends Component {
  render() {
    return (
      <div>
        Ideas
      </div>
    )
  }
}

export default IdeasContainer

Let’s also change the styles in App.css to have a white header and black text, and also remove styles we don’t need:

.App-header {
  text-align: center;
  height: 150px;
  padding: 20px;
}

.App-intro {
  font-size: large;
}

This component needs to talk to our Rails API endpoint for getting all ideas and display them.

Fetching API Data with axios

We’ll make an Ajax call to the API in the componentDidMount() lifecycle method of the IdeasContainer component and store the ideas in the component state. Let’s start by initializing the state in the constructor with ideas as an empty array:

constructor(props) {
  super(props)
  this.state = {
    ideas: []
  }
}

And then we’ll update the state in componentDidMount(). Let’s use the axios library for making the API calls. You can also use fetch or jQuery if you prefer those. Install axios with npm:

npm install axios --save

Then import it in IdeasContainer:

import axios from 'axios'

And use it in componentDidMount():

componentDidMount() {
  axios.get('http://localhost:3001/api/v1/ideas')
  .then(response => {
    console.log(response)
    this.setState({ideas: response.data})
  })
  .catch(error => console.log(error))
}

Now if we refresh the page … it won’t work!
We’ll get a “No Access-Control-Allow-Origin header present” error, because our API is on a different port and we haven’t enabled Cross Origin Resource Sharing (CORS).

Enabling Cross Origin Resource Sharing (CORS)

So let’s first enable CORS using the rack-cors gem in our Rails app. Add the gem to the Gemfile:

gem 'rack-cors', :require => 'rack/cors'

Install it:

bundle install

Then add the middleware configuration to the config/application.rb file:

config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins 'http://localhost:3000'
    resource '*',
      :headers => :any,
      :methods => [:get, :post, :put, :delete, :options]
  end
end

We restrict the origins to our front-end app at http://localhost:3000 and allow access to the standard REST API endpoint methods for all resources.

Now we need to restart the Rails server, and if we refresh the browser, we’ll no longer get the CORS error. The page will load fine and we can see the response data logged in the console.

So now that we know we’re able to fetch ideas from our API, let’s use them in our React component. We can change the render function to iterate through the list of ideas from the state and display each of them:

render() {
  return (
    <div>
      {this.state.ideas.map((idea) => {
        return(
          <div className="tile" key={idea.id} >
            <h4>{idea.title}</h4>
            <p>{idea.body}</p>
          </div>
        )
      })}
    </div>
  );
}

That will display all the ideas on the page now. Note the key attribute on the tile div. We need to include it when creating lists of elements. Keys help React identify which items have changed, are added, or are removed.

Now let’s add some styling in App.css to make each idea look like a tile:

.tile {
  height: 150px;
  width: 150px;
  margin: 10px;
  background: lightyellow;
  float: left;
  font-size: 11px;
  text-align: left;
}

We set the height, width, background color and make the tiles float left.

Stateless functional components

Before we proceed, let’s refactor our code so far and move the JSX for the idea tiles into a separate component called Idea.
import React from 'react'

const Idea = ({idea}) =>
  <div className="tile" key={idea.id}>
    <h4>{idea.title}</h4>
    <p>{idea.body}</p>
  </div>

export default Idea

This is a stateless functional component (or as some call it, a “dumb” component), which means that it doesn’t handle any state. It’s a pure function that accepts some data and returns JSX.

Then inside the map function in IdeasContainer, we can return the new Idea component:

{this.state.ideas.map((idea) => {
  return (<Idea idea={idea} key={idea.id} />)
})}

Don’t forget to import Idea as well:

import Idea from './Idea'

Great, so that’s the first part of our app complete. We have an API with an endpoint for getting ideas and a React app for displaying them as tiles on a board!

Adding a new record

Next, we’ll add a way to create new ideas. Let’s start by adding a button to add a new idea. Inside the render function in IdeasContainer, add:

<button className="newIdeaButton">
  New Idea
</button>

And let’s add some styling for it in App.css:

.newIdeaButton {
  background: darkblue;
  color: white;
  border: none;
  font-size: 18px;
  cursor: pointer;
  margin-right: 10px;
  margin-left: 10px;
  padding: 10px;
}

Now when we click the button, we want another tile to appear with a form to edit the idea. Once we edit the form, we want to submit it to our API to create a new idea.

API endpoint for creating a new idea

So let’s start off by first making an API endpoint for creating new ideas in IdeasController:

def create
  @idea = Idea.create(idea_params)
  render json: @idea
end

private

def idea_params
  params.require(:idea).permit(:title, :body)
end

Since Rails uses strong parameters, we define the private method idea_params to whitelist the params we need — title and body. Now we have an API endpoint to which we can post idea data and create new ideas.
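Before wiring the button up, it can help to see the payload shape the create endpoint expects, which mirrors the strong-parameter whitelist (idea: {title, body}). A small sketch, not from the tutorial, building that JSON body in plain JavaScript:

```javascript
// The create endpoint expects a top-level "idea" key wrapping the
// whitelisted attributes, matching params.require(:idea).permit(:title, :body).
const payload = { idea: { title: 'Try the API', body: 'posted without React' } };
const body = JSON.stringify(payload);

console.log(body);

// Sending it from a browser console (or Node 18+, which ships fetch)
// would look roughly like this, assuming the API from above on port 3001:
// fetch('http://localhost:3001/api/v1/ideas', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body
// })
```

Attributes outside the whitelist would simply be dropped by strong parameters rather than causing an error.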
Back in our React app, now let’s add a click handler called addNewIdea to the new idea button:

<button className="newIdeaButton" onClick={this.addNewIdea} >
  New Idea
</button>

Let’s define addNewIdea as a function that uses axios to make a POST call to our new idea endpoint with a blank idea. Let’s just log the response to the console for now:

addNewIdea = () => {
  axios.post(
    'http://localhost:3001/api/v1/ideas',
    { idea: { title: '', body: '' } }
  )
  .then(response => {
    console.log(response)
  })
  .catch(error => console.log(error))
}

Now if we try clicking on the new idea button in the browser, we’ll see in the console that the response contains a data object with our new idea with a blank title and body. When we refresh the page, we can see an empty tile representing our new idea.

What we really want to happen is that, when we click the new idea button, an idea is created immediately, and a form for editing that idea appears on the page. This way, we can use the same form and logic for editing any idea later on in the tutorial.

Before we do that, let’s first order the ideas on the page in reverse chronological order so that the newest ideas appear at the top. So let’s change the definition of @ideas in IdeasController to order ideas in descending order of their created_at time:

module Api::V1
  class IdeasController < ApplicationController
    def index
      @ideas = Idea.order("created_at DESC")
      render json: @ideas
    end
  end
end

Alright, now the latest ideas are displayed first.

Now, let’s continue with defining addNewIdea. First, let’s use the response from our POST call to update the array of ideas in the state, so that when we add a new idea it appears on the page immediately. We could just push the new idea to the array, since this is only an example app, but it’s good practice to use immutable data. So let’s use immutability-helper, which is a nice package for updating data without directly mutating it.
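To see why the copy matters, here is a library-free illustration (plain JavaScript, for explanation only) contrasting a mutating push with the non-mutating insert we are about to do:

```javascript
const oldIdeas = [{ id: 1, title: 'A new cake recipe' }];
const newIdea = { id: 2, title: 'Card game design' };

// Mutating would change the very array held in this.state:
// oldIdeas.push(newIdea)  // avoid: React compares references, not contents

// Non-mutating insert at index 0, the effect of $splice: [[0, 0, newIdea]]:
const ideas = [newIdea, ...oldIdeas];

console.log(oldIdeas.length); // 1, the original array is untouched
console.log(ideas.length);    // 2, with the new idea first
```

Because ideas is a brand-new array, handing it to setState gives React a new reference to compare against the old one.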
Install it with npm:

npm install immutability-helper --save

Then import the update function in IdeasContainer:

import update from 'immutability-helper'

Now let’s use it inside addNewIdea to insert our new idea at the beginning of the array of ideas:

addNewIdea = () => {
  axios.post(
    'http://localhost:3001/api/v1/ideas',
    { idea: { title: '', body: '' } }
  )
  .then(response => {
    console.log(response)
    const ideas = update(this.state.ideas, {
      $splice: [[0, 0, response.data]]
    })
    this.setState({ideas: ideas})
  })
  .catch(error => console.log(error))
}

We make a new copy of this.state.ideas and use the $splice command to insert the new idea (in response.data) at the 0th index of this array. Then we use this new ideas array to update the state using setState.

Now if we try the app in the browser and click the new idea button, a new empty tile appears immediately. Now we can proceed with editing this idea.

First, we need a new state property editingIdeaId, which keeps track of which idea is being currently edited. By default, we’re not editing any idea, so let’s initialize editingIdeaId in the state with a null value:

this.state = {
  ideas: [],
  editingIdeaId: null
}

Now when we add a new idea, in addition to adding it to state.ideas, we also want to set its id as the value of state.editingIdeaId. So let’s modify the setState call in addNewIdea to also set editingIdeaId:

this.setState({
  ideas: ideas,
  editingIdeaId: response.data.id
})

So this indicates that we’ve just added a new idea and we want to edit it immediately. The complete addNewIdea function now looks like this:

addNewIdea = () => {
  axios.post(
    'http://localhost:3001/api/v1/ideas',
    { idea: { title: '', body: '' } }
  )
  .then(response => {
    const ideas = update(this.state.ideas, {
      $splice: [[0, 0, response.data]]
    })
    this.setState({
      ideas: ideas,
      editingIdeaId: response.data.id
    })
  })
  .catch(error => console.log(error))
}

A Form component

Now we can use state.editingIdeaId in the render function, so that instead of displaying just a normal idea tile, we can display a form.
Inside the map function, let’s change the return value to a conditional statement, which renders an IdeaForm component if an idea’s id matches state.editingIdeaId, otherwise rendering an Idea component:

{this.state.ideas.map((idea) => {
  if(this.state.editingIdeaId === idea.id) {
    return(<IdeaForm idea={idea} key={idea.id} />)
  } else {
    return (<Idea idea={idea} key={idea.id} />)
  }
})}

Let’s import the IdeaForm component in IdeasContainer:

import IdeaForm from './IdeaForm'

And let’s define it in IdeaForm.js. We’ll start with a simple class component, which renders a form with two input fields for the idea title and body:

import React, { Component } from 'react'
import axios from 'axios'

class IdeaForm extends Component {
  constructor(props) {
    super(props)
    this.state = {
    }
  }

  render() {
    return (
      <div className="tile">
        <form>
          <input className='input' type="text"
            name="title" placeholder='Enter a Title' />
          <textarea className='input' name="body"
            placeholder='Describe your idea'></textarea>
        </form>
      </div>
    );
  }
}

export default IdeaForm

Let’s add a bit of CSS in App.css to style the form:

.input {
  border: 0;
  background: none;
  outline: none;
  margin-top: 10px;
  width: 140px;
  font-size: 11px;
}

.input:focus {
  border: solid 1px lightgrey;
}

textarea {
  resize: none;
  height: 90px;
  font-size: 11px;
}

Now when we click on the new idea button, a new tile appears with a form in it.

Now let’s make this form functional! We need to hook up the form input fields to the state.
First, let’s initialize the IdeaForm component state values from the idea prop that it receives from IdeasContainer:

class IdeaForm extends Component {
  constructor(props) {
    super(props)
    this.state = {
      title: this.props.idea.title,
      body: this.props.idea.body
    }
  }

Then set the form field values to their corresponding state values and set an onChange handler:

<form>
  <input className='input' type="text"
    name="title" placeholder='Enter a Title'
    value={this.state.title} onChange={this.handleInput} />
  <textarea className='input' name="body"
    placeholder='Describe your idea'
    value={this.state.body} onChange={this.handleInput}>
  </textarea>
</form>

We’ll define handleInput such that, when we type in either of the input fields, the corresponding state value and then the value of the field gets updated:

handleInput = (e) => {
  this.setState({[e.target.name]: e.target.value})
}

Tracking state changes in React Developer Tools

Let’s see these state changes in action with the React Developer Tools browser extension. You can get it for Chrome here and for Firefox here. Once you have it installed, refresh the app page and open the developer console. You should see a new React tab. When you click on it, you’ll see our app components tree on the left and all the props and state associated with each component on the right.

Now we’re updating the form fields, but we’re still not saving the edited idea. So the next thing needed is that, when we blur out of a form field, we want to submit the form and update the idea.

API endpoint for updating ideas

First, we need to define an API endpoint for updating ideas. So let’s add an update action in IdeasController:

def update
  @idea = Idea.find(params[:id])
  @idea.update_attributes(idea_params)
  render json: @idea
end

Back in IdeaForm.js, we’ll set an onBlur handler called handleBlur on the form:

<form onBlur={this.handleBlur} >

We’ll define handleBlur to make a PUT call to our API endpoint for updating ideas with idea data from the state.
For now, let’s just log the response to the console and see if our call works: handleBlur = () => { const idea = { title: this.state.title, body: this.state.body } axios.put( ` { idea: idea }) .then(response => { console.log(response) }) .catch(error => console.log(error)) } We also need to import axios in this file to be able to use it: import axios from 'axios' Now if we click on the new idea button, edit its title and blur out of that field, we’ll see our API response logged in the console, with the new edited idea data. The same thing happens if we edit the body and blur out of that field. So our onBlur handler works and we can edit our new idea, but we also need to send the edited idea data back up to IdeasContainer so that it can update its own state too. Otherwise, state.ideas won’t have the updated value of the idea we just edited. We’ll use a method called updateIdea, which we’ll pass as a prop from IdeasContainer to IdeaForm. We’ll call updateIdea with the response data from our API call: handleBlur = () => { const idea = { title: this.state.title, body: this.state.body } axios.put( ` { idea: idea }) .then(response => { console.log(response) this.props.updateIdea(response.data) }) .catch(error => console.log(error)) } Now in IdeasContainer, let’s send an updateIdea function as a prop to IdeaForm: <IdeaForm idea={idea} key={idea.id} updateIdea={this.updateIdea} /> Let’s define the function to do an immutable update of the idea in state.ideas: updateIdea = (idea) => { const ideaIndex = this.state.ideas.findIndex(x => x.id === idea.id) const ideas = update(this.state.ideas, { [ideaIndex]: { $set: idea } }) this.setState({ideas: ideas}) } First, we find the index of the edited idea in the array, and then use the $set command to replace the old value with the new one. Finally, we call setState to update state.ideas. We can see this in action in the browser with the React Developer Tools tab open. 
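If you would rather not take on the immutability-helper dependency, the same immutable replacement can be sketched in plain JavaScript with findIndex and the spread operator. This is only an illustrative equivalent; the sample ideas array below is invented for the demo and is not the tutorial's API data:

```javascript
// Plain-JavaScript equivalent of updateIdea's immutable replacement,
// shown without the immutability-helper library.
function updateIdea(ideas, idea) {
  const i = ideas.findIndex(x => x.id === idea.id);
  if (i === -1) return ideas; // unknown id: return the array unchanged
  // Build a brand-new array; the old one is left untouched, as setState expects.
  return [...ideas.slice(0, i), idea, ...ideas.slice(i + 1)];
}

const ideas = [
  { id: 1, title: 'Old title', body: 'Old body' },
  { id: 2, title: 'Keep me', body: 'Unchanged' }
];
const updated = updateIdea(ideas, { id: 1, title: 'New title', body: 'New body' });

console.log(updated[0].title); // 'New title'
console.log(ideas[0].title);   // 'Old title' (the original array is untouched)
```

Either way, the important property is the same: setState receives a brand-new array, and the previous state is never mutated in place.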
Displaying a success notification Now we can add a new idea and edit it, but the user gets no visual feedback or confirmation when the idea is saved. So let’s add a notification message to tell the user when an idea has been successfully saved. Let’s add a span next to the new idea button to display a notification from a value in state: <span className="notification"> {this.state.notification} </span> Let’s initialize state.notification as an empty string: constructor(props) { super(props) this.state = { ideas: [], editingIdeaId: null, notification: '' } } Now every time an idea gets updated, we’ll update state.notification with a success notification we want to show to the user. So in the setState call in updateIdea, in addition to updating ideas, let’s also update notification: this.setState({ ideas: ideas, notification: 'All changes saved' }) Now when we edit an idea and blur out of the input field, the idea gets saved and we see the success notification. We also want to reset the notification as soon as the user makes a change that hasn’t been saved yet. So in the handleInput function of the IdeaForm component, let’s call a function called resetNotification to reset the notification message: handleInput = (e) => { this.props.resetNotification() this.setState({[e.target.name]: e.target.value}) } Now, inside the render function of IdeasContainer, let’s also pass resetNotification as a prop to IdeaForm: <IdeaForm idea={idea} key={idea.id} updateIdea={this.updateIdea} resetNotification={this.resetNotification} /> Let’s define resetNotification as: resetNotification = () => { this.setState({notification: ''}) } Now after a success notification appears, if we edit the idea again, the notification disappears. Editing an existing idea Next, let’s add the ability to edit an existing idea. When we click on an idea tile, we want to change the tile so that it replaces the Idea component with an IdeaForm component to edit that idea. 
Then we can edit the idea and it will get saved on blur. In order to add this feature, we need to add a click handler on our idea tiles. So first we need to convert our Idea component from a functional component into a class component, and then we can define a click handler function handleClick for the title and body. import React, { Component } from 'react' class Idea extends Component { handleClick = () => { this.props.onClick(this.props.idea.id) } render () { return( <div className="tile"> <h4 onClick={this.handleClick}> {this.props.idea.title} </h4> <p onClick={this.handleClick}> {this.props.idea.body} </p> </div> ) } } export default Idea Note that we have to add this.props. to use the props value, because unlike in the functional component, we are no longer destructuring the props object. handleClick calls this.props.onClick with the idea id. Now, inside the render function of IdeasContainer, let’s also pass onClick as a prop to Idea: return (<Idea idea={idea} key={idea.id} onClick={this.enableEditing} />) We’ll define enableEditing to set the value of state.editingIdeaId to the clicked idea’s id: enableEditing = (id) => { this.setState({editingIdeaId: id}) } Now when we click on a tile, it instantly becomes editable! Once the form appears, let’s also set the cursor focus to the title input field.
We can do that by adding a ref on the title input field in IdeaForm: <input className='input' type="text" name="title" placeholder='Enter a Title' value={this.state.title} onChange={this.handleInput} ref={this.props.titleRef} /> We need to pass the ref as a prop, because we want to use it in the parent component IdeasContainer, where we can define the ref as a callback function: <IdeaForm idea={idea} key={idea.id} updateIdea={this.updateIdea} titleRef= {input => this.title = input} resetNotification={this.resetNotification} /> Now we can use this ref in enableEditing to set the focus in the title input field: enableEditing = (id) => { this.setState({editingIdeaId: id}, () => { this.title.focus() }) } Notice that we didn’t call this.title.focus() as a separate function after calling setState. Instead, we passed it to setState inside a callback as a second argument. We did this because setState doesn’t always immediately update the component. By passing our focus call in a callback, we make sure that it gets called only after the component has been updated. Now if we try the app in a browser, when we click on an idea tile, it becomes editable with a form and the cursor gets focused inside its title input field. So now we can add and edit ideas. Deleting an idea Finally, we want to be able to delete ideas. When we hover over an idea tile, we want a delete button (in the form of a red cross) to appear in the top right corner. Clicking that cross should delete the idea and remove the tile from the board. So let’s start by adding some markup and CSS to display the delete button on hover. 
In the Idea component, add a span with a class deleteButton and the text ‘x’: <div className="tile"> <span className="deleteButton"> x </span> Then let’s add some CSS in App.css to hide this span by default and make it visible when we hover over a tile: .deleteButton { visibility: hidden; float: right; margin: 5px; font-size: 14px; cursor: pointer; color: red; } .tile:hover .deleteButton { visibility: visible; } Next, let’s add a click handler handleDelete to this delete button, which then deletes the idea: <span className="deleteButton" onClick={this.handleDelete}> x </span> Similar to handleClick, we’ll define handleDelete as an arrow function that calls another function this.props.onDelete with the tile’s idea id: handleDelete = () => { this.props.onDelete(this.props.idea.id) } Let’s pass onDelete as a prop from IdeasContainer: <Idea idea={idea} key={idea.id} onClick={this.enableEditing} onDelete={this.deleteIdea} /> We’ll define deleteIdea in a moment, but first let’s add an API endpoint for deleting ideas in IdeasController: def destroy @idea = Idea.find(params[:id]) if @idea.destroy head :no_content, status: :ok else render json: @idea.errors, status: :unprocessable_entity end end Now let’s define deleteIdea in IdeasContainer as a function that makes a DELETE call to our API with the idea id and, on success, updates state.ideas: deleteIdea = (id) => { axios.delete(` .then(response => { const ideaIndex = this.state.ideas.findIndex(x => x.id === id) const ideas = update(this.state.ideas, { $splice: [[ideaIndex, 1]]}) this.setState({ideas: ideas}) }) .catch(error => console.log(error)) } Once again, we look up the index of the deleted idea, use update with the $splice command to create a new array of ideas, and then update state.ideas with that. Now we can try it in the browser. When we hover over an idea tile, the red delete button appears. Clicking on it deletes the idea and removes the tile from the board. 
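As with the update case, the $splice removal can be done without immutability-helper; a plain-JavaScript equivalent (with invented sample data) might look like this:

```javascript
// Plain-JavaScript equivalent of deleteIdea's immutable removal,
// shown without the immutability-helper library. The data below is
// sample data for the demo, not the tutorial's API response.
function removeIdea(ideas, id) {
  const i = ideas.findIndex(x => x.id === id);
  if (i === -1) return ideas; // unknown id: nothing to remove
  // Copy everything except index i; the original array is not mutated.
  return [...ideas.slice(0, i), ...ideas.slice(i + 1)];
}

const ideas = [{ id: 1 }, { id: 2 }, { id: 3 }];
const remaining = removeIdea(ideas, 2);

console.log(remaining.map(x => x.id)); // [ 1, 3 ]
console.log(ideas.length); // 3 (the original array is untouched)
```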
Hurray, we now have a functional app with all the basic CRUD functionality! Wrap Up In this tutorial, we built a complete CRUD app using a Rails 5.1 API and a front-end React app. Our API has three endpoints, one each for creating, updating and deleting ideas. We used Create React App to make our React app. This made setup completely painless and easy. We could dive straight into building our app instead of configuring anything. We used axios for making Ajax calls to the API and immutability-helper to make data updates. In a future tutorial, we can look at how to deploy this app to a production server and also add some animations and transitions to spice up the UI. For example, we could fade in new idea tiles and fade out deleted tiles, fade in and out notification messages. You can watch a video version of this tutorial here. You can see the full code for the app on GitHub:
https://www.sitepoint.com/react-rails-5-1/
Modern languages like JavaScript or Ruby provide the programmer with an option to “reopen” any class to add additional behavior to it. In the case of Ruby and JavaScript, this is not constrained in any way: You are able to reopen any class – even the ones that come with your language itself and there are no restrictions on the functionality of your extension methods. Ruby at least knows of the concept of private methods and fields which you can’t call from your additional methods, but that’s just Ruby. JS knows of no such thing. This provides awesome freedom to the users of these languages. Agreed. Miss a method on a class? Easy. Just implement that and call it from wherever you want. This also helps to free you from things like BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(of))); which is lots of small (but terribly inconveniently named) classes wrapped into each other to provide the needed functionality. In this example, what the author wanted is to read a file line-by-line. Why exactly do I need three objects for this? Separation of concerns is nice, but stuff like this makes learning a language needlessly complicated. In the world of Ruby or JS, you would just extend FileInputStream with whatever functionality you need and then call that, creating code that is much easier to read. FileInputStream.prototype.readLine = function(){...} //... of.readLine(); //... And yet, if you are a library (as opposed to consumer code), this is a terrible, terrible thing to do! We have seen previous instances of the kind of problems you will cause: Libraries adding functionality to existing classes create real problems when multiple libraries are doing the same thing and the consuming application is using both libraries. Let’s say, for example, that your library A added that method sum() to the generic Array class. Let’s also say that your consumer also uses library B which does the same thing. What’s the deal about this, you might ask?
It’s pretty clear, what sum does after all? Is it? It probably is when that array contains something that is summable. But what if there is, say, a string in the array you want to sum up? In your library, the functionality of sum() could be defined as “summing up all the numeric values in the array, assuming 0 for non-numeric values”. In the other library, sum() could be defined as “summing up all the numeric values in the array, throwing an exception if sum() encounters invalid value”. If your consumer loads your library A first and later on that other library B, you will be calling B’s Array#sum(). Now due to your definition of sum(), you assume that it’s pretty safe to call sum() with an array that contains mixed values. But because you are now calling B’s sum(), you’ll get an exception you certainly did not expect in the first place! Loading B after A in the consumer caused A to break because both created the same method conforming to different specs. Loading A after B would fix the problem in this case, but what, say, if both you and B implement Array#avg, but with reversed semantics this time around? You see, there is no escape. Altering classes in the global name space breaks any name spacing facility that may have been available in your language. Even if all your “usual” code lives in your own, unique name space, the moment you alter the global space, you break out of your small island and begin to compete with the rest of the world. If you are a library, you cannot be sure that you are alone in that competition. And even if you are a top level application you have to be careful not to break implementations of functions provided by libraries you either use directly or, even worse, indirectly. 
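The collision described above is easy to reproduce. In the sketch below, two invented "libraries" each patch Array.prototype.sum with the two semantics just described, and whichever one loads last silently wins for every array in the program (neither function is any real library's code):

```javascript
// Demonstration of the collision: two "libraries" define
// Array.prototype.sum with different semantics.
function loadLibraryA() {
  // A: sum numeric values, assuming 0 for non-numeric ones.
  Array.prototype.sum = function () {
    return this.reduce((acc, v) => acc + (typeof v === 'number' ? v : 0), 0);
  };
}
function loadLibraryB() {
  // B: throw an exception on any non-numeric value.
  Array.prototype.sum = function () {
    return this.reduce((acc, v) => {
      if (typeof v !== 'number') throw new Error('invalid value');
      return acc + v;
    }, 0);
  };
}

loadLibraryA();
const lenient = [1, 'x', 2].sum(); // 3, A's semantics

loadLibraryB(); // silently replaces A's method
let failed = false;
try {
  [1, 'x', 2].sum(); // code written against A now blows up
} catch (e) {
  failed = true;
}

console.log(lenient, failed); // 3 true
delete Array.prototype.sum; // clean up the global patch
```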
If you need a real-life example, the following code in an (outdated) version of scriptaculous’ effects.js broke jQuery, despite the latter being very, very careful to check if it can rely on the base functionality provided: Array.prototype.call = function() { var args = arguments; this.each(function(f){ f.apply(this, args) }); } Interestingly enough, Array#call wasn’t used in the affected version of the library. This was a code artifact that actually did nothing but break a completely independent library (I did not have time to determine the exact nature of the breakage). Not convinced? After all, I was using an outdated version of scriptaculous and I should have updated (which is not an option if you have even more libraries dependent on bugs in exactly that version – unless you update all other components as well and then fix all the then-broken unit tests). Firefox 3.0 was the first browser to add document.getElementsByClassName, a method also implemented by Prototype. Of course the functionality in Firefox was slightly different from the implementation in Prototype, which now called the built-in version instead of its own, which caused a lot of breakage all over the place. So, dear library developers, stay in your own namespace, please. You’ll make our lives as consumers (and your own) so much easier.
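To end on a constructive note, here is a minimal sketch of the alternative the post argues for: keep the helper in your own namespace object and pass the array in explicitly, so no global prototype is touched. The MyLib name is invented for illustration:

```javascript
// A namespace-respecting alternative to patching Array.prototype:
// the helper lives on our own object, so it cannot collide with
// another library's (or the browser's) idea of what sum() means.
const MyLib = {
  sum(arr) {
    // Sum numeric values, assuming 0 for non-numeric ones.
    return arr.reduce((acc, v) => acc + (typeof v === 'number' ? v : 0), 0);
  }
};

console.log(MyLib.sum([1, 'x', 2])); // 3
console.log('sum' in Array.prototype); // false (nothing global was changed)
```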
https://blog.pilif.me/2009/04/28/do-not-change-base-library-behavior/
Thanks for the summary – very informative. Why do you recommend to not use List<T> in public APIs? So in a public API that you are shipping, you want to use Collection<T> by deriving from it, correct? What about if you are not developing an API, but a layered app, and you are exposing classes/collections from a Common assembly through a BL layer…is it OK to use List<T> or should you derive from List<T> and always return the derived class? I kinda like being able to return List<T>. Thoughts? Krzysztof, I'm curious to know why you shouldn't publicly expose List<T>. Thanks I just posted the reasons why we don't recommend List<T> in public APIs. See Sean, using List<T> in app shared libraries may be fine as long as you can and are willing to make some breaking changes to the libraries when you discover that you need to change the behavior of the collection and the only way to do it is to change it to Collection<T> and override some members. Such changes are often possible in libraries that are not distributed widely. Just one question: Why in god's name are there no events or virtual protected methods that allow me to get notified if an item was added or removed? That makes things so tedious. It's a shame 🙁 Philip, I understand your question is about the collections in the System.Collections.Generic namespace. The answer is that such virtual methods would make the collections slower. To enable the exact scenario you are asking about, we added the System.Collections.ObjectModel namespace. Collections in this namespace do have virtual protected methods which you can use to get notified when an item is added or removed. Somebody added the following comment to this post, but I accidentally deleted it (Sorry!). Here is what the comment said, and my reply is below: .NET 1.1 didn't have a linked list class, so I wrote one. I was happy to see that it would be included in 2.0. I'm trying not to use it, though, because it doesn't implement IList.
I can guess at why you made that decision, but … why did you make that decision?? I always program to interfaces. That way, if I decide to use an array or something else as my data structure, I only have to change the line in which I instantiated my list. Developers are smart enough to know which data structures to use! Is there any way to override .NET's LinkedList class? I kept getting compiler errors, so I put my linked list in the System.Collections.Generic namespace. Now, I get compiler warnings. In Java, I'd import the class explicitly, but I don't know what to do in .NET. My Reply: We did not implement IList<T> on LinkedList because the indexer would be very slow. If you really need the interface, you probably can inherit from LinkedList<T> and implement the interface on the subtype. Great post — I look this up all the time 🙂 What do you say to ICollection having no generic equivalent (or, at least, you suggest using IEnumerable<> instead of ICollection<>)? ICollection is immutable (no members to change the contents of the collection). ICollection<T> is mutable. This is a substantial difference, making the interfaces similar only in their name. On the other hand, ICollection and IEnumerable<T> differ by very little. We use the IEnumerable, ICollection and IList interfaces every day when developing in .Net. Speed Test: Generic List vs. ArrayList. thanks sir my knowledge got deeper after reading the theory
https://blogs.msdn.microsoft.com/kcwalina/2005/09/23/system-collections-vs-system-collection-generic-and-system-collections-objectmodel/
The examples I found were either too detailed-too quickly, or too simple to make much progress with. I like my example because it can easily be added to, and can likely be adapted to demonstrate other patterns as well. Pre-requisites: You should be comfortable with WinForms, events, methods, properties, classes and interfaces. You might be familiar with events of WinForms, but perhaps less so with implementing interface events. I think you could still follow the tutorial but you might need to pause to read-up on this topic when you encounter it. Note: Please also read any comments that might follow this tutorial. Other, more experienced, members may offer some useful insights or criticisms. Initially you might just scan through the following reference material, revisiting it after the tutorial. The tutorial itself begins in the section Walking Through the Code. Model-view-presenter :wikipedia wikipedia said: Model–view–presenter (MVP) is a derivative of the model–view–controller (MVC) software pattern, also used mostly for building user interfaces. ... MVP is a user interface architectural pattern engineered to facilitate automated unit testing and improve the separation of concerns in presentation logic: .... The term interface is being used here in a generic, or dictionary, sense: a point where two systems, or subjects, meet and interact. There is a very detailed reference here: Layered Application Guidelines :MSDN My example follows the MVP, Passive View pattern, discussed here: Comparison of Architecture presentation patterns :codeproject codeproject said: Passive view (PV) Fundamentals about PV :- Fundamentals about PV :- - State is stored in the view. - All logic of UI is stored in presenter. - View is completely isolated from the model. It [the presenter] also takes the extra task of synchronizing data between model and view. - Presenter is aware of the view. - View is not aware of the presenter. **see note ** This isn't entirely, or always, the case. 
Often the view can call a method of the presenter. This is discussed at a few points during the tutorial. In particular, the view is passive in the sense that it has no knowledge of the model. The above is a very useful article, and has some nice diagrams. Reading the comments though, there is occasionally debate about the distinctions made between the different patterns. This is not unusual. Patterns are extremely useful but they are not a precise science and, in the real world, they do tend to overlap (and cause debate among developers). What are MVP and MVC and what is the difference? :SO topic Walking Through the Code The section below (Outlining the Pattern) talks through the requirements of the MVP pattern, using Passive View, in detail. However, it will make more sense after I guide you through the code. A brief outline of what is going on is: The view, meaning our form, does very little. It has properties that reflect the data that will be stored in the data-layer, and fires some specific events. The model is usually very simple and in our case will be a single class that has properties that we want to store. We will store our data in a (non-persistent) List<> of this type. In a fuller example there will be a database (or other persistent storage) and the model would reflect those tables and fields that the application can work with. The presenter does all the work: listening for events fired by the view and synchronizing the data displayed in the view (via its properties) with the data stored in the data-layer (via the model). A note on our List<>: The application is a simple task list. We can add new tasks, go back and forth through them, and make changes to them. If we edit a task, then attempt to move to the next or previous task (or to start a new one) we will be required to either save these changes or discard them. There is very little in the way of error checking, and no exception handling.
I don't want these concerns to distract us from understanding the pattern itself. (The only validation is the requirement that the task must have a name, its description.) These are certainly things that I encourage you to explore and perhaps add to the application. If you wish to develop the application further then you should consider persistent storage of the tasks, perhaps in a database. You could also consider requiring that the StartDate of a Task be on or before the DueDate, and that checking Completed requires a CompletionDate to be entered. You might build the form-UI first, named frmTasks. I'm using the namespace SeparationMVP. These are the names of the controls: The ComboBox cboPriority has values of Low, Medium, High. Here is the very simple model, Task.cs: namespace SeparationMVP { class Task { public string Name { get; set; } public string Priority { get; set; } public DateTime? StartDate { get; set; } public DateTime? DueDate { get; set; } public bool Completed { get; set; } public DateTime? CompletionDate { get; set; } } } The DateTime values are nullable (DateTime?); that is, they can store null. I started with DateTimePickers rather than TextBoxes for the date-values. DTPs don't allow a null value so I took a primitive approach using a default date of '1999-01-01'. This was messy and, although there are approaches to working with DTPs and nulls (such as placing a TextBox above the DTP), I didn't want these complications to obscure the details of the pattern. Note that the model could include methods that the presenter can call. For example, a method to return a Task instance, or perhaps one to validate data before it can be saved as a Task. Here is the interface that integrates the view with the model (ITaskLayer.cs): namespace SeparationMVP { interface ITaskLayer { string TaskName { get; set; } string Priority { get; set; } DateTime? StartDate { get; set; } DateTime? DueDate { get; set; } bool Completed { get; set; } DateTime?
CompletionDate { get; set; } // communication/ messaging string StatusChange { set; } bool isDirty { get; set; } event EventHandler<EventArgs> SaveTask; event EventHandler<EventArgs> NewTask; event EventHandler<EventArgs> PrevTask; event EventHandler<EventArgs> NextTask; } } StatusChange is a simple message that the view can choose to ignore. I simply print the message to a label on the form (which could also help with debugging). isDirty can be changed by either the presenter or view. The presenter changes it to true when a task has just been saved, or a different one loaded. The view changes it to false as soon as one of the control's-values has been changed. Here is the presenter (TaskPresenter.cs): Spoiler From this presenter code.. class TaskPresenter { private readonly ITaskLayer view; private List<Task> tasks; // (primitive) maintenance of state: private int currentIndex = 0; private bool isNew = true; public TaskPresenter(ITaskLayer view) { this.view = view; The view is passed, and stored, in the presenter's constructor. Notice, importantly, that it is not the view, or form, itself that is important: it is an object that implements the interface. This indicates how the pattern can facilitate unit testing. wikipedia said: a fuller example the model would represent a database or other persistent storage. I am just using a simple List<Task>. A List<T> is a very flexible object but it doesn't have a concept of currency. That is, it doesn't maintain, or understand, the concept of a current Task. (With formal data-binding there is a CurrencyManager Class.) I am using two simple values of currentIndex and isNew to maintain the currency-state. That is, to know which Task we are currently on, and whether there is a next or previous task. The view's property of isDirty is also part of this process of maintaining currency-state. This actually gives you a little insight into how some database operations work. 
You can obtain a cursor from a database, which is (simplistically) a set of records. You can move forwards, and possibly backwards, through the recordset (the set of records in the cursor), and the CP (cursor-position or current-pointer, but generally just referred to as the current record position) effectively tells you whether you have passed the end or beginning of the records. I will speculate that the cursor also maintains, internally, properties similar to our isDirty and isNew. wikipedia said: In computer science, a database cursor is a control structure that enables traversal over the records in a database. Cursor (databases) :wiki (further code from the presenter above) public TaskPresenter(ITaskLayer view) { this.view = view; Initialize(); } private void Initialize() { tasks = new List<Task>(); view.SaveTask += Save; view.NewTask += New; view.PrevTask += Previous; view.NextTask += Next; BlankTask(); view.StatusChange = "Ready"; } Initialize() instantiates List<Task> (our data-store) and registers event-listeners with the events exposed by the interface. private void BlankTask() { view.TaskName = String.Empty; view.Priority = "Low"; view.StartDate = null; view.DueDate = null; view.Completed = false; view.CompletionDate = null; } This sets the initial state of the view by changing the exposed properties. There is no corresponding method for the model; instead, this is achieved just by setting isNew to true. There is a similar method named loadTask() (at the bottom). Please read through the rest of the presenter's code, it should be fairly easy to follow. Here is the code for the form (the view) itself (frmTasks.cs): Spoiler public partial class frmTasks : Form, ITaskLayer { private TaskPresenter presenter; // ... private void frmTasks_Load(object sender, EventArgs e) { presenter = new TaskPresenter(this); this.isDirty = false; } The form implements the interface and stores a reference to a new presenter instance, passing itself in its constructor. 
(You could consider this as a signature of the MVP pattern, although there are slight variations on it.) public string Priority { get { return cboPriority.Text; } set { cboPriority.Text = value; } } public DateTime? StartDate { get { if (string.IsNullOrWhiteSpace(txtStartDate.Text)) return null; else return DateTime.Parse(txtStartDate.Text); } set { if (value == null) txtStartDate.Text = String.Empty; else txtStartDate.Text = value.Value.ToShortDateString(); } } These correspond the values displayed in the form with the data required by the model, through the form's exposed properties. They also correspond empty-text in the textboxes with null required by the DateTime fields. For this simple example you MUST key a valid date in the date-textboxes, otherwise an exception occurs. This raises the spectre of validation and where this should occur. This is discussed further in the section below, Thoughts on Validation. public string StatusChange { set { lblStatus.Text = value; } } public bool isDirty { get; set; } public event EventHandler<EventArgs> SaveTask; public event EventHandler<EventArgs> NewTask; public event EventHandler<EventArgs> PrevTask; public event EventHandler<EventArgs> NextTask; When the status is changed by the presenter we just display the message in a label. isDirty could, I suppose, be called a free variable. It is appropriate, in my opinion, for this to be changed by either the presenter or the view. You might notice, though, that the presenter only ever sets this to false - when a task is successfully saved, or a new, previous or next task is shown. The form sets it to true as soon as any of the currently displayed values are changed. The (public) events are declared but they are not defined (assigned to) in the view. It is the presenter that listens for these events. While it is possible for the view to also assign to these events this, firstly, adds a level of complexity, but also breaks the pattern that we set out to follow. 
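Language aside, the wiring just described (the view declares events and properties; the presenter subscribes and synchronizes them with the data store) can be reduced to a few lines. The sketch below uses JavaScript stand-ins rather than the tutorial's C# classes, so every name in it is invented for illustration:

```javascript
// Minimal passive-view MVP wiring: the view only exposes properties and
// events; the presenter listens and synchronizes with the data store.
function makeView() {
  const handlers = {};
  return {
    taskName: '',                                            // state lives in the view
    on(event, fn) { handlers[event] = fn; },                 // register a listener
    fire(event) { if (handlers[event]) handlers[event](); }  // raise an event
  };
}

function makePresenter(view) {
  const tasks = []; // the "model": a non-persistent data store
  view.on('saveTask', () => {
    tasks.push({ name: view.taskName }); // read state from the view
    view.taskName = '';                  // reset the view's state
  });
  return { taskCount: () => tasks.length };
}

const view = makeView();
const presenter = makePresenter(view);

view.taskName = 'Paint the fence'; // the user types into the form
view.fire('saveTask');             // the user clicks Save

console.log(presenter.taskCount()); // 1
console.log(view.taskName === '');  // true (the presenter reset the field)
```

The point is the direction of the dependencies: the view never touches the task list, and the presenter only reaches the view through its exposed surface, which is exactly what the ITaskLayer interface enforces in the C# version.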
private void btnSave_Click(object sender, EventArgs e) { // some basic validation if (string.IsNullOrWhiteSpace(txtTask.Text)) { MessageBox.Show("Enter the task name/description.", "Task Detail", MessageBoxButtons.OK, MessageBoxIcon.Error); txtTask.Focus(); return; } if (SaveTask != null) { SaveTask(this, EventArgs.Empty); } } This demonstrates the kind of simple validation that belongs in the view. The role of this textbox on the form is that it shouldn't be empty, regardless of any further requirements of the model. private void txtTask_TextChanged(object sender, EventArgs e) { this.isDirty = true; } private void cboPriority_SelectedIndexChanged(object sender, EventArgs e) { this.isDirty = true; } Any changes to the current task's details will set isDirty to true. When any attempt is made to move away from the current task then the user will be asked to confirm discarding of these changes, otherwise they will have to save the task. Give it a go! You can now build and run the application.
- Type the details for a new task and press Save.
- Once you save a task you can press New.
- You can only go to a next or previous task if one exists.
- If you change some detail of a task then you will either have to save it or discard the change(s) in response to the MessageBox that appears.
Outlining the Pattern The view means our form. With our passive view approach the view/form should be, as you would guess, passive: an empty shell. It stores state by having properties that mirror those required by the data-model. It doesn't, itself, modify these properties directly - that is the job of the presenter. An alternative to the passive view is the supervising controller pattern.
The list of these events are those which make sense to the model; that is, that require some interaction with the model (via the presenter). For example, NewRecord, EditRecord, SaveRecord, etc.. To ensure that the view will provide the appropriate properties and events - that is, those that enable the view to work with the model - it implements an interface. This interface effectively provides a common-language (or a communication or messaging layer) between the model and view. The form's constructor creates and stores an instance of the presenter, passing itself as a reference in the presenter's constructor. The purpose of this is simply to provide the presenter with an object of the correct interface. Notice in the form's code that private TaskPresenter presenter; is never used or referenced anywhere else, other than in the constructor. This is a (tacit) requirement, or assumption, of the Passive View pattern, that the view has no knowledge of the presenter. But.. This is where it becomes important to realise that these patterns are a guide. They are not 'written in stone' and if you decide to follow a particular pattern it does not mean that you have to follow it in every detail, and at the expense of simpler code. It is also common for patterns to be mix-n-matched and to overlap. In many MVP examples the view will call methods of the presenter, rather than rigidly sticking to the event-driven model. Doesn't this break the pattern? It means that the view now knows about the presenter; they are no longer de-coupled. Passive View means that the view is unaware of the model so, in this sense, calling methods of the presenter doesn't break the pattern, although some would dispute this. This is discussed further here: View to Presenter Communication :codebetter.com In brief I will say that calling a method of the presenter is often much simpler than sticking rigidly to the event-driven approach. 
For example, an event of the view could cause execution of some code in the presenter, which then needs to change a property of the view and, possibly, to fire an event of the presenter that the view listens for (another coupling).

Anyway, to return to our example, all the presenter does (or is required to do) is to store a reference to the view (or, more specifically, the interface) and attach event-listeners to those events exposed by the view's interface. In these listeners the presenter has access to both the model (in our case, by means of private List<Task> tasks;) and properties of the form. This is where the synchronization occurs, between the data displayed in the view and changes (to the data-layer) via the model.

Note that in other MVP examples you will find that an instance of the model is passed to the presenter-constructor, along with an instance of the view. This isn't necessary for our example as our (simple) data-layer is contained in List<Task>, which is maintained internally by the presenter.

Comparing MVC (model view controller) and MVP

MVP is considered a variant of the MVC pattern. I won't pursue this any further here, other than with a broad statement:
- In MVC the controller sits above the model and view.
- In MVP the presenter sits between the model and view, acting as a bridge, or conduit, between them.

Everything You Wanted To Know About MVC and MVP But Were Afraid To Ask
Twisting the Triad – MVC
Twisting the MVC Triad

Thoughts on Validation

This tutorial is already complete and I recommend that you pursue these patterns (MVP, MVC, etc.) further, starting with some of the links that I have provided. The following is just me thinking out loud about validation. You might find it interesting or, more likely, confusing. I am not sufficiently experienced to tutor you on this subject so, if it interests, or concerns, you, please investigate it further using the links provided, or other resources.
Sensible validation:
- The task name/description shouldn't be empty (we have this code already)
- The dates should be dates (I'll concentrate on this requirement)
- The DueDate should be on or after the StartDate
- There should, or shouldn't be, a CompletionDate depending on whether Completed is ticked

Where should this validation occur? In the view, the presenter, the model, or some combination of these? The answer isn't obvious (at least to me) and there seems to be a lot of debate about this. This is, again, where we need to bear in mind that patterns are a guide and we are the ultimate arbiters of where, and how, validation occurs. I have a suspicion that this validating-issue may be more pronounced with the MVP pattern; perhaps with other patterns the decision is more clear-cut.

An esteemed colleague:

Spoiler

Most often there is a database involved. The top-level of validation should occur with the database design, using:
- Specific/correct data-types
- NOT NULL
- Primary and foreign keys
- Unique indexes
- Constraints

This is crucial. It won't matter what application is built on top of the database, the database simply won't allow invalid data beyond its walls.

The textboxes are named txtStartDate, txtDueDate etc. They are specifically intended to accept only valid date-values. In my opinion this simple level of validation belongs in the view. These could be DateTimePickers although, as mentioned earlier, we have to deal with null values (which DTPs don't accept) in some way. Or perhaps MaskedTextBoxes. WinForms already have a validation feature (see CausesValidation, Validating, etc.) or we could choose to implement this ourselves using TextChanged and other events. (Currently we only use the TextChanged events to set isDirty to true.)

I favour a single isValid() routine in the view. This will return true or false.
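A combined isValid()-style routine of that shape - one that reads the raw control text, checks that the dates parse, checks that the due date is not before the start date, and returns a simple pass/fail - could look roughly like the following. Python is used here for neutrality, and the field names are invented for illustration; it is a sketch of the idea, not the tutorial's hidden snippet:

```python
from datetime import date

def is_valid(task_text, start_text, due_text):
    """View-level validation of raw control text; returns (ok, message)."""
    if not task_text.strip():
        return False, "Enter the task name/description."
    try:
        # empty text maps to None, mirroring the nullable DateTime fields
        start = date.fromisoformat(start_text) if start_text.strip() else None
        due = date.fromisoformat(due_text) if due_text.strip() else None
    except ValueError:
        return False, "Dates must be valid dates."
    if start and due and due < start:
        return False, "The due date should be on or after the start date."
    return True, ""
```

The point is the shape: the routine works on raw text (so it can run before any typed properties are read) and reports a single true/false, leaving the message-box plumbing to the form.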
As our example currently stands, though, this method cannot (and shouldn't) use the form's properties, as these are already defined as DateTime values. (If the controls didn't contain valid dates then attempting to read these values via the properties would already create errors, defeating our attempt at validation.) isValid() can refer directly to the form-controls because they are comfortably within the form's domain. The method can also use MessageBoxes and cause the focus to change (before returning true or false) because, again, this is all within the form's comfort zone. Spoiler Something like this: Spoiler This method will be called before firing some of the events of the interface. In fact, for our example, it only needs to be called before SaveTask. You could, if you prefer, use TextChanged, etc., to validate each control individually, rather than waiting to run a single isValid() method. This is perfectly fine but you would still need to account for combinations of controls (e.g. txtStartDate and txtDueDate) being correct collectively. An alternative approach, which is more in-line with the pattern, is to include this method in the interface and for the presenter to call this method (view.isValid()) before attempting to make changes to the model. I am happy with this approach as well: the view/form can piddle-around with MessageBoxes, focus, etc., with the only requirement being that it eventually return either true or false. What we want to avoid is duplicating all this effort from both the view and the presenter (and possibly from the model as well). As Skydiver mentions, the supposed correct way to deal with this in MVP is to throw the data at the model (via the presenter) and let the model throw exceptions that we can handle. In which case, we might not even assign data-types to the view's properties, they could largely be strings. The presenter would handle the exceptions and (somehow) convert them into meaningful information for the view. 
This is obviously possible but seems a lot of work. Rather than dealing with exceptions the model could contain the method isValid(). The view's properties could be mainly strings, and these values would be passed to the isValid() method by the presenter. It might be messy to pass this clump of data but this approach could prove easier to implement than the exception handling. For example, we could create our own errors class that we can use to get back some useful details, that the presenter can make good use of to update the view. I will stress again that these are just my current thoughts on validation, and I apologise if I have confused! Concentrate on the pattern demonstrated in the main tutorial, which I hope provides a good start to discover MVP and other patterns. Andy. Added: There is another useful article on MVP here from Informatech: UI Design Using Model-View-Presenter This post has been edited by andrewsw: 25 March 2014 - 08:11 AM
Source: https://www.dreamincode.net/forums/topic/342849-introducing-mvp-model-view-presenter-pattern-winforms/
java.lang.Object
  org.apache.cocoon.util.MIMEUtils

public class MIMEUtils

A collection of File, URL and filename utility methods.

public MIMEUtils()

public static String getMIMEType(File file) throws FileNotFoundException, IOException
    file - File.
    Throws: FileNotFoundException, IOException

public static String getMIMEType(String ext)
    ext - Filename extension.

public static String getDefaultExtension(String type)
    type - MIME type.

public static void loadMimeTypes(Reader in, Map extMap, Map mimeMap) throws IOException
    Reads a mime.types file and generates mappings between MIME types and extensions. For example, if a line contains:
        text/html    html htm
    then 'html' will be the default extension for text/html, and both 'html' and 'htm' will have MIME type 'text/html'. Lines starting with '#' are treated as comments and ignored. If an extension is listed for two MIME types, the first will be chosen.
    in - Reader of bytes from mime.types file content.
    extMap - Empty map of default extensions, keyed by MIME type. Will be filled in by this method.
    mimeMap - Empty map of MIME types, keyed by extension. Will be filled in by this method.
    Throws: IOException
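The mapping rules that loadMimeTypes documents (first listed extension becomes the default for a type; '#' lines are comments; for an extension listed under two types, the first wins) can be sketched in a few lines. This is an illustrative Python re-implementation of those rules, not the Cocoon code:

```python
def load_mime_types(lines):
    """Parse mime.types-style lines into (ext_map, mime_map)."""
    ext_map = {}   # MIME type -> default (first-listed) extension
    mime_map = {}  # extension -> MIME type (first mapping wins)
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        mime_type, *exts = line.split()
        if exts and mime_type not in ext_map:
            ext_map[mime_type] = exts[0]
        for ext in exts:
            mime_map.setdefault(ext, mime_type)  # first MIME type wins
    return ext_map, mime_map

ext_map, mime_map = load_mime_types([
    "# comments are ignored",
    "text/html  html htm",
])
# ext_map  -> {'text/html': 'html'}
# mime_map -> {'html': 'text/html', 'htm': 'text/html'}
```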
http://cocoon.apache.org/2.1/apidocs/org/apache/cocoon/util/MIMEUtils.html
Apache log4net™ Manual - Contexts

Most real-world systems have to deal with multiple clients simultaneously. In a typical multithreaded implementation of such a system, different threads will handle different clients. Logging is especially well suited to trace and debug complex distributed applications. An approach to differentiate the logging output of one client from another is to instantiate a new separate logger for each client. However this promotes the proliferation of loggers and increases the management overhead of logging. A lighter technique is to uniquely stamp each log request initiated from the same client interaction. Log4net supports different types of contextual logging and contexts with different scopes.

Scopes

Contextual data can be set in different scopes. These contexts have progressively narrower visibility. In the logging event itself the values from all of the contexts are combined together such that values specified in a lower scoped context hide values from a higher context.

Context Properties

The log4net contexts store properties, i.e. name-value pairs. The name is a string; the value is any object. A property can be set as follows:

log4net.GlobalContext.Properties["name"] = value;

If properties with the same name are set in more than one context scope then the value in the narrowest scope (lower down in the list above) will hide the other values. The property values are stored as objects within the LoggingEvent. The PatternLayout supports rendering the value of a named property using the %property{name} syntax. The value is converted to a string by passing it to the log4net.ObjectRenderer.RendererMap which will locate any custom renderer for the value type. The default behavior for custom types is to call the object's ToString() method.

Active Property Values

An active property value is one whose value changes over time.
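The scoping rule above (a value set in a narrower scope hides the same-named value from a wider scope) amounts to merging the scope dictionaries from widest to narrowest, with later merges winning. A small Python illustration of that resolution order follows; the scope names mirror log4net's Global/Thread ordering, but this is a sketch of the rule, not log4net itself:

```python
def resolve_properties(*scopes):
    """Merge property dicts given widest-first; narrower scopes win."""
    merged = {}
    for scope in scopes:  # pass scopes from widest to narrowest
        merged.update(scope)
    return merged

global_props = {"app": "demo", "host": "web01"}
thread_props = {"host": "worker-7"}  # narrower scope hides the global value

merged = resolve_properties(global_props, thread_props)
# merged -> {'app': 'demo', 'host': 'worker-7'}
```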
For example, imagine a custom type that implemented the ToString() method to return the number of bytes allocated by the runtime garbage collector.

public class GCAllocatedBytesHelper
{
    public override string ToString()
    {
        return GC.GetTotalMemory(true).ToString();
    }
}

An instance of this type can be added to the log4net.GlobalContext during application startup:

log4net.GlobalContext.Properties["GCAllocatedBytes"] = new GCAllocatedBytesHelper();

Once this property is set in the context all subsequent logging events will have a property called GCAllocatedBytes. The value of the property will be an instance of the GCAllocatedBytesHelper type. When this value is rendered to a string by calling the ToString method the current number of bytes allocated by the garbage collector will be returned and included in the output.

Context Stacks

Sometimes simple key value pairs are not the most convenient way of capturing contextual information. A stack of information is a very convenient way of storing data, especially as our applications tend to be stack based. The ThreadContext and LogicalThreadContext also support storing contextual data in a stack. The stack is stored in a context property, therefore stacks have names and more than one stack can exist in the same context. A property value set in a narrower context would override a stack with the same property name set in a wider scoped context.

The stack supports Push and Pop methods. As more contextual data is pushed onto the stack the stack grows. When the stack is rendered all the data pushed onto the stack is output with the most recent data to the right hand end of the string. As the stack is just an object stored in the context properties it is also rendered using the same PatternLayout syntax: %property{name}, where name is the name of the stack.

Calls to the stack's Push and Pop methods must be matched up so that each push has a corresponding pop.
The Push method also returns an IDisposable object that will perform the required pop operation when it is disposed. This allows the C# using syntax to be used to automate the stack management.

using(log4net.ThreadContext.Stacks["NDC"].Push("context"))
{
    log.Info("Message");
}

The INFO level log has a stack stored in its NDC property. The top item in the stack is the string context. The using syntax ensures that the value context is popped off the stack at the end of the block. The using syntax is recommended because it removes some work load from the developer and reduces errors in matching up the Push and Pop calls, especially when exceptions can occur.

Nested Diagnostic Contexts

The NDC (Nested Diagnostic Context) exists for compatibility with older versions of log4net. This helper class implements a stack which is stored in the thread context property named NDC.

Mapped Diagnostic Contexts

The MDC (Mapped Diagnostic Context) exists for compatibility with older versions of log4net. This helper class implements a properties map which is mapped directly through to the thread context properties.

An alternative approach is to use separate logger repositories. This would allow each virtual host to possess its own copy of the logger hierarchy. Configuring multiple logger hierarchies is beyond the scope of this document.
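The pattern of Push returning a disposable so that using guarantees the matching Pop maps naturally onto Python context managers. A rough analog of such a named context stack, guaranteed-pop included (illustrative only, not a log4net API):

```python
import contextlib

class ContextStack:
    def __init__(self):
        self._items = []

    @contextlib.contextmanager
    def push(self, value):
        self._items.append(value)
        try:
            yield self  # the body of the 'with' block runs here
        finally:
            self._items.pop()  # guaranteed pop, even if an exception occurs

    def render(self):
        # most recent value at the right-hand end, as log4net renders stacks
        return " ".join(str(item) for item in self._items)

ndc = ContextStack()
with ndc.push("request-42"):
    message = ndc.render()  # "request-42" while inside the block
```

As with the C# using form, mismatched push/pop pairs become impossible: leaving the with block always pops.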
http://logging.apache.org/log4net/release/manual/contexts.html
Setup

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

tf.keras.backend.clear_session()  # For easy reset of notebook state.

Introduction

The Keras functional API is a way to create models that is more flexible than the tf.keras.Sequential API. The functional API can handle models with non-linear topology, models with shared layers, and models with multiple inputs or outputs.

The main idea is that a deep learning model is usually a directed acyclic graph (DAG) of layers; the functional API is a set of tools for building such graphs of layers. To build a model, start by creating an input node:

inputs.shape
TensorShape([None, 784])

Calling a layer on this inputs object gives you x, the output of that layer, and a new node in the graph. A "graph of layers" is an intuitive mental image for a deep learning model, and the functional API is a way to create models that closely mirror this: the figure of such a graph and the code that builds it are almost identical. In the code version, the connection arrows are replaced by the call operation.

Training, evaluation, and inference

Training, evaluation, and inference work exactly the same way for models built using the functional API as for Sequential models. Here, load the MNIST image data, reshape it into vectors, fit the model on the data (while monitoring performance on a validation split), then evaluate the model on the test data. The model is compiled with loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True).

Downloading data from
11493376/11490434 [==============================] - 0s 0us/step
Train on 48000 samples, validate on 12000 samples
Epoch 1/5
48000/48000 [==============================] - 2s 50us/sample - loss: 0.3471 - accuracy: 0.9009 - val_loss: 0.1908 - val_accuracy: 0.9449
Epoch 2/5
48000/48000 [==============================] - 2s 37us/sample - loss: 0.1675 - accuracy: 0.9497 - val_loss: 0.1509 - val_accuracy: 0.9591
Epoch 3/5
48000/48000 [==============================] - 2s 37us/sample - loss: 0.1215 - accuracy: 0.9638 - val_loss: 0.1245 - val_accuracy: 0.9635
Epoch 4/5
48000/48000 [==============================] - 2s 38us/sample - loss: 0.0961 - accuracy: 0.9699 - val_loss: 0.1106 - val_accuracy: 0.9681
Epoch 5/5
48000/48000 [==============================] - 2s 38us/sample - loss: 0.0793 - accuracy: 0.9759 - val_loss: 0.1111 - val_accuracy: 0.9678
10000/10000 - 0s - loss: 0.0979 - accuracy: 0.9721
Test loss: 0.09793518451107666
Test accuracy: 0.9721

For further reading, see the train and evaluate guide.

Save and serialize

Saving the model and serialization work the same way for models built using the functional API as they do for Sequential models.

WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages: path_to_my_model/assets

For details, read the model save and serialize guide.

Use the same graph of layers to define multiple models

In the functional API, models are created by specifying their inputs and outputs in a graph of layers. That means that a single graph of layers can be used to generate multiple models. In the example below, the decoding architecture is strictly symmetrical to the encoding architecture, so the output shape is the same as the input shape.

All models are callable, just like layers: you can treat any model as if it were a layer by invoking it on an Input or on the output of another layer. By calling a model you aren't just reusing the architecture of the model, you're also reusing its weights.

Manipulate complex graph topologies

Models with multiple inputs and outputs

The functional API makes it easy to manipulate multiple inputs and outputs. This cannot be handled with the Sequential API. For example, if you're building a system for ranking customer issue tickets, the model will have multiple inputs and multiple outputs. Since the output layers have different names, the losses and targets can be passed per named output.

Train the model by passing lists of NumPy arrays of inputs and targets:

1280/1280 [==============================] - 4s 3ms/sample - loss: 1.3073 - priority_loss: 0.7026 - department_loss: 3.0235
Epoch 2/2
1280/1280 [==============================] - 0s 330us/sample - loss: 1.2899 - priority_loss: 0.6986 - department_loss: 2.9565
<tensorflow.python.keras.callbacks.History at 0x7f92f3be5e48>

See also the training and evaluation guide.

A toy ResNet model

In addition to models with multiple inputs and outputs, the functional API makes it easy to manipulate non-linear connectivity topologies: models with layers that are not connected sequentially. This is something the Sequential API cannot handle.

115us/sample - loss: 1.8987 - acc: 0.2793 - val_loss: 1.7541 - val_acc: 0.3667
<tensorflow.python.keras.callbacks.History at 0x7f927c5df4e0>

Shared layers

Another good use for the functional API is models that use shared layers. Shared layers are layer instances that are reused multiple times in the same model.

For serialization support in your custom layer, implement the classmethod from_config(cls, config), which is used when recreating a layer instance given its config dictionary. The default implementation of from_config is:

def from_config(cls, config):
    return cls(**config)

When to use the functional API

Because a functional model is a data structure built from known inputs and outputs, the model can be validated while you are defining it. See also the saving and serialization guide.

Functional API weaknesses

Does not support dynamic architectures. The functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all; for example, recursive networks or Tree RNNs do not follow this assumption and cannot be implemented in the functional API.

Everything from scratch. When writing advanced architectures, you may want to do things that are outside the scope of defining a DAG of layers. For example, you must use model subclassing to expose multiple custom training and inference methods on your model instance. Additionally, if you implement the get_config method on your custom Layer or model, the functional models you create will still be serializable and cloneable.

Here's a quick example where you specify a static batch size for the inputs with the `batch_shape` arg, because the inner computation of `CustomRNN` requires a static batch size.
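The "graph of layers" idea (layers are callables, and wiring is done by calling one layer on the output of another) can be illustrated without TensorFlow. The following stdlib-only toy, with invented Dense/Input/Model names, mimics how the functional API records graph edges at call time and later walks them to evaluate the model; it is a conceptual sketch, not Keras:

```python
class Dense:
    """Toy 'layer': y = scale * x, standing in for a real dense layer."""
    def __init__(self, scale):
        self.scale = scale
        self.inbound = None  # which node feeds this layer

    def __call__(self, inbound):
        self.inbound = inbound  # record the graph edge, functional-API style
        return self

    def compute(self, x):
        return self.scale * x

class Input:
    """Graph entry point; has no inbound node."""
    def __init__(self):
        self.inbound = None

class Model:
    """Walks the recorded graph from outputs back to inputs, then evaluates."""
    def __init__(self, inputs, outputs):
        self.inputs, self.outputs = inputs, outputs

    def predict(self, x):
        chain, node = [], self.outputs
        while node is not self.inputs:   # collect the chain output -> input
            chain.append(node)
            node = node.inbound
        for layer in reversed(chain):    # apply layers in forward order
            x = layer.compute(x)
        return x

inputs = Input()
x = Dense(2)(inputs)   # connect by calling, as in the functional API
outputs = Dense(3)(x)
model = Model(inputs, outputs)
# model.predict(5) -> 30  (5 * 2 * 3)
```

Real Keras layers build symbolic tensors rather than computing eagerly, but the structural idea is the same: calling a layer records an edge, and Model(inputs, outputs) captures the resulting subgraph.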
https://tensorflow.google.cn/guide/keras/functional
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

According to Romain Lenglet on 7/23/2009 8:44 AM:
> On Thursday 23 July 2009 22:57:36 Romain Lenglet wrote:
>> This allows us to test all the macros using AT_CHECK_MACRO, which is much
>> cleaner and concise.
>> Cf. the attached diff against the latest master branch.
>> Sorry for the delay for sending you that patch!
>
> Ooops, scratch that.
> That latest patch was buggy, sorry.
>
> Attached is a working version of that latest patch.

More tweaks needed. Your changelog entry should call out macro names affected (sorry I didn't catch that last time). 'test ... -o ...' is not portable, you must use 'test ... || test ...'. Also, 'git format-patch' gives nicer output than 'git diff'; in part because it includes the author name and date and commit message. I'm guessing that the deletion in tests/Makefile.am was intentional, since your ChangeLog didn't mention it. At any rate, I've applied this (if for no other reason than the namespace improvement is good); please review.

-----BEGIN PGP SIGNATURE-----
ppHvQACgkQ84KuGfSFAYCqKACeMj5hKN4QBoCSxt//uTyu9aCN
reMAnRbHSa0Z13lJMaIqGVBz4RDdSEV+
=zLwu
-----END PGP SIGNATURE-----
http://lists.gnu.org/archive/html/autoconf-patches/2009-07/msg00043.html
Bug Description

This shouldn't happen:

>>> class C(Implicit):
...     l = [1, 2, 3]
...     def __getitem__(self, i):
...         return self.l[i]
...
>>> c = C()
>>> iter(c)
<iterator object at 0xb7dbb38c>
>>> list(_)
[1, 2, 3]
>>> c2 = C().__of__(c)
>>> iter(c2)
Traceback (most recent call last):
  File "<console>", line 1, in <module>
AttributeError: __iter__

See this post: http:// ...for the change that causes the bug.

i'm setting this to "fixed" pro-actively, please re-open if you find more issues...

Just to confirm that this works for me, and is in Zope 2.12.0a3

The iterator support leads to problems for us - acquisition wrappers are lost during iteration. Here is an example that uses __iter__:

from Acquisition import Implicit

class B(Implicit):
    def __iter__(self):
        for i in range(5):
            yield i, self.aq_parent

class A(Implicit):
    pass

a = A()
a.b = B()
for (i, parent) in a.b:
    print i, parent

'self.aq_parent' fails here, as 'self' inside __iter__ isn't acquisition wrapped. Worse is that *existing* code that has worked for years (namely ParsedXML) suddenly breaks, as it uses __getitem__ based iteration. The problem is that this *always* uses iterators now, instead of using the old fallback. This means acquisition wrappers are lost within __getitem__ too:

from Acquisition import Implicit

class B(Implicit):
    def __getitem__(self, i):
        if i == 5:
            raise IndexError()
        return i, self.aq_parent

class A(Implicit):
    pass

a = A()
a.b = B()
for (i, parent) in a.b:
    print i, parent

We do not know whether the implementation before this bugfix worked better with __iter__, but in any case we'd then again have the original reported issue that __getitem__ breaks. We are seeing this issue appear in Zope 2.11.3, to which we're trying to port Silva, which relies on ParsedXML.

On Sep 25, 2009, at 5:23 PM, Martijn Faassen wrote:
> The iterator support leads to problems for us - acquisition wrappers are
> lost during iteration. Here is an example that uses __iter__:

hi martijn,
i'll have a look, but need to wrap up my plone4 plips first. iow, it'll be a few more days...
cheers,
andi
--
zeidler it consulting - http://
friedelstraße 31 - 12047 berlin - telefon +49 30 25563779
pgp key at http://
plone 3.3.1 released! -- http://

This must be fixed for 2.12.1 (not a blocker for 2.12.0)

I'm attaching a patch which adds (breaking) tests for both the case that the wrapped object has an __iter__ and the case where the wrapper proxy falls back to __getitem__. The new tests assert that __iter__ and __getitem__ get passed a wrapped self.

the problem should have been fixed in r105350: http:// and backported to 2.10 and 2.11 in: http:// http://

Martijn, could you please verify if the fix works for you before Andreas makes the next release?

The Acquisition 2.12.4 egg is released.

Seems to work indeed, the problem we had which was caused by this issue (ParsedXML not working in certain circumstances) seems to have vanished. Thanks!

this has been fixed in http://svn.zope.org/?view=rev&rev=99191
chris, could you please verify (and hopefully also close)?
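For background on the fallback the reporters rely on: this is plain Python behaviour, independent of Acquisition. An object that defines no __iter__ but has a __getitem__ that raises IndexError at the end is still iterable through the legacy sequence protocol. A minimal demonstration, with an invented class name:

```python
class LegacySeq:
    """No __iter__ defined; iteration falls back to __getitem__ + IndexError."""
    def __init__(self, items):
        self._items = items

    def __getitem__(self, i):
        if i >= len(self._items):
            raise IndexError(i)  # signals end of iteration to the fallback
        return self._items[i]

seq = LegacySeq([1, 2, 3])
values = list(seq)  # works even though LegacySeq defines no __iter__
# values -> [1, 2, 3]
```

The bug above is precisely that the acquisition wrapper's added __iter__ support interfered with this long-standing fallback path.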
https://bugs.launchpad.net/zope2/+bug/360761
WELCOME TO SATELLITE-X

Satellite-X is a private company that offers consumers personal satellites. The purchase of a personal, private satellite includes launching it into a stable low-earth orbit, and a 5 year guarantee for it to remain in orbit and functional. Satellite-X is popular with the super rich who never want to be without the internet, and who like to scan the planet for places to build their next mansion.

Personal Satellites?!?

Okay, so in case you haven't guessed it already, Satellite-X is a fictional company. I happen to work for a very non-fictional company though called CodeZero, and we make a developer platform for Kubernetes, which includes a CLI tool and an installable desktop application. In our experience our platform can lead to a 10x increase in developer productivity. One of our challenges has been, however, explaining to people - technical and non-technical alike - just how this boost in productivity is achieved. I had the idea of using a mock company to illustrate how using CodeZero during a typical product development life cycle can greatly shorten the developer feedback loop and lead to a significant increase in velocity. Let me know if this invented company and these use cases work for you as a way to communicate some pretty technical concepts. I'd love to get your feedback.

Use Cases

I came up with two use cases to explore the workflow for both a front-end and a back-end developer. One developer will solve their problem using CodeZero's teleport tool, which allows you to test local code that relies on calling services running in a Kubernetes cluster. The other developer will solve their problem using CodeZero's intercept tool, which allows you to route remote in-cluster traffic to a service running locally.

Let's meet the developers:

Samantha, or Sam, is a front-end developer at Satellite-X. She works on the company's web application, called SatelliteVision, that consumers use to control and manage their satellites.
With the app, users can tell a satellite camera to point at specific coordinates and take pictures, turn on and off their internet broadcast, and send messages into space for any listening aliens.

Roger is a back-end developer at Satellite-X. He works on the company's proprietary satellite navigation system that is responsible for keeping satellites in orbit, at least until the end of their warranty period, and making sure all the personal satellites don't crash into each other (or other company satellites, or space trash).

ARCHITECTURE

Satellite-X recognizes that security is paramount to its clientele, and so every satellite includes its own on-board infrastructure consisting of a high availability Kubernetes cluster. Satellite-X also has a robust microservice architecture running in the cloud (on earth) that coordinates with all of the satellites and runs the SatelliteVision app. The details of the infrastructure are as follows:

TerraFirma Application: The earth-bound software application consists of the SatelliteVision web app and a number of back-end microservices, including:
- An API service to handle requests from the SatelliteVision app.
- A service to interface with the satellites in orbit.
- A security service. The security software came bundled with the motherboards used in each satellite. The software's documentation is in Russian, but it works flawlessly so it continues to be used, although no developer wants to touch it.

Satellite Application: Every satellite runs its own microservice architecture. Notable services include:
- A sensor service.
- A navigation service that makes decisions on the trajectory of the satellite.
- A thruster engine service that controls the satellite's speed and direction.
- A service to talk to the TerraFirma application and receive software updates.
- A security service. The satellite part of the Russian-made software.
FRONT-END USE CASE

The Problem

A new version of the SatelliteVision app was released recently, and it was quickly discovered that it shipped with a severe bug: when a customer using SatelliteVision instructed one of their satellites to take a picture, the app was instead sending a request to the API to delete the last picture it took. Customer complaints were piling in.

The Setup

Sam is assigned the task of fixing the bug as soon as possible. She checks out the main production branch from Git for the SatelliteVision application, and gets ready to fix the code. Running the web app's front-end code locally in a browser is straightforward (it's just HTML, CSS, and JavaScript), but it needs to talk to a fully functional back-end to work.

Before CodeZero, Sam had to run mock services locally for all the required back-end services, and tell her locally running front-end code to talk to them. All the services are in Docker containers, so it wasn't the end of the world, but there was always some back-end service that failed to start for some reason, and the security service was infamous for being especially finicky.

Today, though, Sam has the CodeZero Desktop application installed on her machine. And she has a development profile ready to go for teleporting into the Kubernetes namespace that is running the production TerraFirma application.

The Solution

The CodeZero Desktop app is already running on her machine (she rarely closes it), so she clicks on the tray icon to open the dropdown menu. She hovers over the Configuration menu option to make sure she is pointing to the correct cluster, but she notices that she is in the wrong workspace. Using the tray menu she quickly switches to the SatelliteVision workspace, and then selects the "front-end-dev-teleport" option from the available development profiles. Sam opens up the Desktop dashboard UI and sees her teleport session starting up.
Once the UI updates to show the teleport session running as expected, she opens up her terminal application and runs the yarn command to start up the web app's front-end. Debugging locally in her browser, she discovers that someone mistakenly changed the "take picture" button's action to send a DELETE request to the API instead of a POST. She fixes the behavior and then tests it. Since she is teleported into the production cluster, any actions she takes in the locally running web interface will apply to live systems, but this is what she wants: Sam wants to ensure her changes will work once deployed. She wants to know this fix will work on real satellites.

She double checks that she is logged into Satellite-X's test account, selects a satellite, and directs it to take a picture of the Himalayas. Nothing happens. At least the last picture taken with this satellite wasn't deleted, but a new picture wasn't taken. Sam goes back to the JavaScript that runs when the button is clicked and finds the error: a missing await on an asynchronous call. This bug might not have resulted in an error if she was running the back-end code locally, but with the latency inherent in communicating with an orbiting satellite, it certainly manifested in Sam's test. She fixes the code and tests again. Success! On her screen she sees a high-res satellite image of the Everest base camp.

As a final step, Sam commits her code and creates a pull request to the main branch. Once merged it will trigger their CI/CD pipeline to automatically deploy. All in, Sam's turnaround to identify the bug, fix it, test it, and deploy the fix to production: 15 minutes. This would have taken significantly longer without CodeZero.

BACK-END USE CASE

The Problem

When the satellite navigation system software was originally written, its collision detection algorithm was set to monitor for objects of a certain size.
Objects under the specified size were not considered a threat to the satellite, and objects meeting or exceeding the specified size would trigger evasive maneuvers. Recently, a piece of space debris from an old exploded satellite collided with one of Satellite-X's satellites and damaged one of its solar panels, and it is believed that the object was smaller than the current threshold.

The Setup

Roger is assigned the task of updating the navigation system microservice to look for and avoid smaller objects. The code change is trivial (it's just some simple math), but it needs to be tested in an environment that is as close to production as possible. Roger learned early on in his time at Satellite-X that code running successfully in a satellite simulator on Earth is not the same as code running on a real satellite in space. Before CodeZero, Roger had limited options for replicating a satellite's deployment environment, so a typical development cycle consisted of: develop and test in the simulator -> deploy to the satellite -> test -> repeat if unsuccessful. A deployment to a live satellite takes about 40 minutes, so if the feedback loop had to be repeated a number of times, it could easily take Roger all day to get out a small change. Now Roger has the CodeZero CLI installed on his workstation. With this tool he can shorten the feedback loop significantly.

The Solution

The collision detection system is fairly straightforward. Sensors on the satellite continuously scan the surrounding space for objects within 500 km. When an object is detected, the sensor service passes the telemetry data on to the navigation service, which calculates the object's trajectory, size, and speed. Objects of a certain size on a collision trajectory cause the navigation service to tell the thruster engine service to initiate an evasive maneuver.
The first thing Roger does is log into the company's intranet and download the kubeconfig file for a cluster running on one of their test satellites. He then launches his terminal application and sets his Kubernetes context to this cluster. Next, Roger grabs the latest code for the navigation microservice and gets it running locally.

Roger wants to direct all communications coming from the sensor service on the satellite to the navigation service he's running locally, so he runs a CodeZero CLI command to intercept all traffic destined for the navigation service. He sees the telemetry data from the in-cluster sensor service begin streaming to his service running locally, and this local service in turn communicating back up to the thruster engine service running in-cluster on the satellite.

Roger switches to his IDE, makes some small changes to the navigation service's collision detection algorithm to look for smaller objects, and moves on to testing. He could wait for objects to inadvertently get in the way of his satellite, but luckily the satellite is equipped with a program for rendering debris in front of the cameras to accommodate exactly what Roger needs to test. He launches the program, and objects of various sizes start appearing in front of the satellite's cameras. The sensor service on the satellite does its job, scanning the objects that appear, but because of Roger's intercept, it sends its data to his locally running navigation service instead of the service running in-cluster.

An object appears in line with the satellite that used to be under the size threshold. The satellite doesn't change course, even though, according to the data, it should have triggered an evasive maneuver. Test failed. Over the course of the next 30 minutes Roger tweaks the collision detection algorithm repeatedly, testing after each change, until he finally nails it. What he thought was simple math turned out to be not so simple when running in a real-world scenario.
Satisfied with the new code, Roger closes the intercept session, commits his code, and creates a pull request. Once merged, a new deployment will be triggered to this satellite's cluster, including the updated navigation microservice. Eventually this update will get pushed out to all the company's satellites. All in, Roger's time to develop, test, and finally deploy the change to the satellite: 1.5 hours. If he had had to deploy the code changes to the satellite's cluster before each test (each deployment being a 40-minute operation), this exercise would have taken Roger all day.

CONCLUSION

What do you think? My hope is that the front-end and back-end developer use cases demonstrate some scenarios in which CodeZero can really save developers a lot of time and frustration. Even though I described fictional scenarios, my hope is that you can identify some comparable workflows in your own development feedback loop, and now have some clear insight into just how powerful CodeZero's tools can be. By removing the deploy step from each iteration, the productivity gains can be tremendous. Especially if you are deploying your code into outer space. ;-)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/canuckaholic/how-satellite-x-increases-developer-productivity-1i8j
This document is for Django's SVN release, which can be significantly different from previous releases. Get old docs here: Django 1.0

Custom template tags and filters

Introduction

Django's template system ships with a wide variety of built-in tags and filters. Nevertheless, you may find yourself needing functionality not covered by the core set of template primitives, so the template engine lets you define custom tags and filters in Python and make them available to your templates.

Behind the scenes: For a ton of examples, read the source code for Django's default filters and tags. They're in django/template/defaultfilters.py and django/template/defaulttags.py, respectively.

Filter functions should always return something. They shouldn't raise exceptions. They should fail silently. In case of error, they should return either the original input or an empty string -- whichever makes more sense.

Template filters that expect strings

If your filter only expects a string as its first argument, decorate it with stringfilter so that the value is converted to its string form before being passed to your function.

Filters and auto-escaping

Internally, Django may mark strings as "safe" (SafeString or SafeUnicode) or as requiring escaping (EscapeString or EscapeUnicode). Generally you don't have to worry about these; they exist for the implementation of the escape filter. If your filter does not introduce any HTML-unsafe characters into the result, put the is_safe attribute on your filter function and set it to True, like so:

@register.filter
def myfilter(value):
    return value
myfilter.is_safe = True

For example:

def add_xx(value):
    return '%sxx' % value
add_xx.is_safe = True

When this filter is used in a template where auto-escaping is enabled, Django will escape the output whenever the input is not already marked as "safe". By default, is_safe is False, and you can omit it from any filters where it isn't required. Note that is_safe will coerce the filter's return value to a string. If your filter should return a boolean or other non-string value, marking it is_safe will probably have unintended consequences (such as converting a boolean False to the string 'False').

Sometimes you may want to write a filter that can operate in templates where auto-escaping is either on or off, in order to make things easier for your template authors. For your filter to know the current auto-escaping state, set the needs_autoescape attribute to True on your function. (If you don't specify this attribute, it defaults to False.) This attribute tells Django that your filter function wants to be passed an extra keyword argument, called autoescape, that is True if auto-escaping is in effect and False otherwise.
For example, let's write a filter that emphasizes the first character of a string:

from django.utils.html import conditional_escape
from django.utils.safestring import mark_safe

def initial_letter_filter(text, autoescape=None):
    first, other = text[0], text[1:]
    if autoescape:
        esc = conditional_escape
    else:
        esc = lambda x: x
    result = '<strong>%s</strong>%s' % (esc(first), esc(other))
    return mark_safe(result)
initial_letter_filter.needs_autoescape = True

We don't need the is_safe attribute in this case (although including it wouldn't hurt anything). Whenever you manually handle the auto-escaping issues and return a safe string, the is_safe attribute won't change anything either way.

Writing custom template tags

The template system works in a two-step process: compiling and rendering. The compilation function turns the contents of a tag into a Node object; a compiled template is simply a list of these nodes. A few notes about the compilation function:

- parser is the template parser object. We don't need it in this example.
- token.contents is a string of the raw contents of the tag.
- When raising TemplateSyntaxError exceptions, use the tag_name variable. Don't hard-code the tag's name in your error messages, because that couples the tag's name to your function. token.contents.split()[0] will always be the name of your tag -- even when the tag has no arguments.
- The function returns a CurrentTimeNode with everything the node needs to know about this tag.

If your node renders a sub-template, pass the auto-escaping state through when constructing the new context:

def render(self, context):
    # ...
    new_context = Context({'var': obj}, autoescape=context.autoescape)
    # ... Do something with new_context ...

This is not a very common situation, but it's useful if you're rendering a template yourself. For example:

def render(self, context):
    t = template.loader.get_template('small_fragment.html')
    return t.render(new_context)

Shortcut for simple tags

Many template tags take a number of arguments -- strings or template variables -- and return a string after doing some processing based solely on the input arguments and some external information. For example, the current_time tag we wrote above is of this variety: we give it a format string, and it returns the time as a string. To ease the creation of these types of tags, Django provides a helper function, simple_tag. The earlier current_time function could thus be written like this:

def current_time(format_string):
    return datetime.datetime.now().strftime(format_string)
register.simple_tag(current_time)

In Python 2.4, the decorator syntax also works:

@register.simple_tag
def current_time(format_string):
    ...

A couple of things to note about the simple_tag helper: checking for the required number of arguments has already been done by the time our function is called, and the quotes around the argument (if any) have already been stripped away, so we receive a plain string.
When your template tag does not need access to the current context, writing a function to work with the input values and using the simple_tag helper is the easiest way to create a new tag.

Inclusion tags

Another common type of template tag displays some data by rendering another template. Such a tag is registered with the inclusion_tag helper:

register.inclusion_tag('results.html')(show_results)

As always, Python 2.4 decorator syntax works as well, so we could have written:

@register.inclusion_tag('results.html')
def show_results(poll):
    ...

Sometimes, your inclusion tags might require access to values from the parent template's context. To solve this, Django provides a takes_context option for inclusion tags. If you specify takes_context when creating a template tag, the underlying Python function will receive the template context as of the moment the tag was called:

# The first argument *must* be called "context" here.
def jump_link(context):
    return {
        'link': context['home_link'],
        'title': context['home_title'],
    }
# Register the custom tag as an inclusion tag with takes_context=True.
register.inclusion_tag('link.html', takes_context=True)(jump_link)

Setting a variable in the context

The above examples simply output a value. Generally, it's more flexible if your template tags set template variables instead of outputting values; that way, template authors can reuse the values your tags create.

Parsing until another block tag

Template tags can work in tandem. For instance, the standard {% comment %} tag hides everything until {% endcomment %}. To create a template tag such as this, use parser.parse() in your compilation function. Here's how the standard {% comment %} tag is implemented:

def do_comment(parser, token):
    nodelist = parser.parse(('endcomment',))
    parser.delete_first_token()
    return CommentNode()

class CommentNode(template.Node):
    def render(self, context):
        return ''

parser.parse() takes a tuple of names of block tags to parse until. It returns an instance of django.template.NodeList, which is a list of all Node objects that the parser encountered before it encountered any of the tags named in the tuple. For more examples of complex rendering, see the source code for {% if %}, {% for %}, {% ifequal %} and {% ifchanged %}. They live in django/template/defaulttags.py.
http://docs.djangoproject.com/en/dev/howto/custom-template-tags/
/* $Id: Address.java,v 1.4 2002/11/22 21:06:57 brydon Exp $ */

package com.sun.j2ee.blueprints.customer;

/**
 * This class represents all the data needed
 * for a customer's address.
 * This class is meant to be immutable.
 */
public class Address implements java.io.Serializable {

    private String streetName1;
    private String streetName2;
    private String city;
    private String state;
    private String zipCode;
    private String country;

    public Address() {}

    public Address(String streetName1, String streetName2, String city,
                   String state, String zipCode, String country) {
        this.streetName1 = streetName1;
        this.streetName2 = streetName2;
        this.city = city;
        this.state = state;
        this.zipCode = zipCode;
        this.country = country;
    }

    // getter methods
    public String getStreetName1() { return streetName1; }
    public String getStreetName2() { return streetName2; }
    public String getCity() { return city; }
    public String getState() { return state; }
    public String getCountry() { return country; }
    public String getZipCode() { return zipCode; }

    public String toString() {
        return "Address[streetName1=" + streetName1 + ", "
            + "streetName2=" + streetName2 + ", "
            + "city=" + city + ", "
            + "state=" + state + ", "
            + "zipCode=" + zipCode + ", "
            + "country=" + country + "]";
    }
}
http://docs.oracle.com/cd/E17802_01/blueprints/blueprints/code/adventure/1.0/src/com/sun/j2ee/blueprints/customer/Address.java.html
With the factorial function, it is clear that the output grows quite rapidly as a function of its input. So I was wondering… 1) If I wanted to return a long value, what would be the correct way to do this? I noticed that, at least with my compiler, doing … works, but the original way I did it was replacing return 1 with …, my thought process being that the 1l would end up converting the last product to a long, and then the second-to-last product to a long, etc…, since long had a higher priority when it comes to coercion (please correct me if I'm wrong, I'm just trying to remember what you taught in the implicit conversion tutorial!). So which one is preferred? I'm actually a little bit surprised the first way works, but I tested it with large enough values and it does. 2) Let's say I wanted to test the value of the argument factorial(n) to see whether I should return an int or a long. This would require specifying the return type beforehand, however, such as long factorial(n) { … } versus int factorial(n) { … }. The only workaround I can think of is passing n to another function which has the appropriate return type, such as … Would this be the correct approach? Thanks!

Good questions. 1) Returning 1 and 1l should produce the same result, as 1 would be implicitly converted to a long before being returned. The real challenge here is that "n*factorial(n-1);" only contains integers, and will produce an integer result (which could overflow). This (possibly overflowed) result will then be converted to a long for returning to the caller. What we actually should do is convert one of the operands to a long, so that all of the other operands get implicitly converted to longs as well: static_cast<long>(n)*factorial(n-1);

2) Functions can't differ only by return type, so your functions would need different names. While you could do as you suggest, is there any reason not to just always return a long?

Thanks for the quick response! I assume you mean static_cast<long>(n), right?
I guess I could also just read in n as a long, such as …, and then avoid the static casting. And upon further inspection of my code for my second question, I actually don't think it would work, because the call from int factorial(n) would be on the bottom of the stack, and so it would end up being converted to an int anyway (I believe). I know for this example simply using a long return value would be fine, but I was thinking ahead to possible other scenarios where one may wish to return a different type than the return type of the calling function. I don't know what these other scenarios may be, and perhaps a simple cast may resolve the issue 😛

Yes, the commenting system interpreted my <long> as an HTML code. Oops. But upon reflection, I think my previous analysis was incorrect, as factorial() will be interpreted as a long (since that's what it returns), which should force everything else to be implicitly converted to a long. So your recursive line should work as intended. And as noted above, the base case of 1 will get implicitly converted if it's not a long, so no worries there.

Is there a way to extend the recursion depth of a function (just involving code, without changing any configuration in the OS)? I tried to extend it by using just pointers when calling a function, because I thought that the stack would hold the reference to the variable as well as the information itself, so I tried to make the pointers the only thing on the stack and have the information dynamically allocated. I don't know if I explained myself well (or if what I'm saying is even true).

The only way I can think of to allow your function to recurse deeper in a code-only manner would be to do something that reduced the amount of stack memory each function call took (which would allow it to recurse more times before running out of memory).
That could be using pointers and dynamically allocating anything larger than the size of your pointer on the heap, removing variables or parameters, etc…

Related to that, is it true that what a variable actually weighs is the reference to the object plus the information of that object on the stack all together? In that case, for example, if I declare an integer variable, which has 4 bytes, but the implicit pointer is 8 bytes, then the real weight is 12 bytes? Or how does that work?

I'm not sure what you're asking. All variables allocated on the stack take up an amount of stack memory equal to the size of that variable. Memory allocated on the heap does not take up stack memory, since that comes from a different pool. So if your pointer size is 8 bytes, and you want to minimize your stack footprint, you'd want to allocate all memory above 8 bytes on the heap. For anything 8 bytes or less, the size of the variable is equal to or less than the size of the pointer, so you might as well just instantiate it directly. I don't know why you'd do any of this though.

That's because some competitive programming problems involve recursion, and it is important to know the depth that a recursion can reach. In Python there's something similar, but the limit is a finite number and not explicitly the size of the heap. What I'm trying to do is squeeze out the recursion limit in C++.

"It turns out that you can always solve a recursive problem iteratively…" Are you sure about this? To be honest, I'm not entirely sure why, but I think I've heard that recursive functions are sometimes impossible to use iteratively because of forward recursion and tail recursion.
based on my notes for a proficiency exam: Tail Recursion: A form of recursion where the recursive call is the final thing that you do in the recursive case of your algorithm These sorts of algorithms can be changed into loops because, when the recursive call is to be made, there is no more information that is still needed from the previous recursive instance _____ Forward Recursion: A form of recursion where there is still work to be done after you make another recursive call These sorts of algorithms cannot be changed into loops because there is still information to be used when the algorithm wants to make the recursive call Comments? This class was based around the Java language, so I don’t know if that has anything to do with it, but I doubt it. Yes, I’m sure. However, that says nothing about how pretty/effective doing so would be. Tail recursion functions are usually pretty easy to convert. Forward recursion functions are usually not easy to convert -- doing so would likely involve your function maintaining its own stack! And why do that yourself when the language provides that functionality for you as part of function calls? But you could. What does "int main(void)" means? Why the "void" parameter? including "void" as a parameter type just tells the compiler that there are no parameters included in the function. It is recommended by Alex (In another chapter) that you do not use void when creating function prototypes or defining functions in general. It means that main is not taking any parameters. It’s old school usage, and deprecated in C++. In C++ we’re supposed to use empty parenthesis (e.g. int main()). I’ve fixed the examples. Alex, you haven’t fixed the second example. 🙂 Meh. 🙂 Thanks for letting me know. It should be fixed now. Question 2 took me about 15 minutes to figure out, it was quite hard for me. I felt really accomplished once I did figure it out though, thank you for the tough question! Quiz 1 solution: Since we know that 1! 
== 1 , it seems logical to add it to a base case, changing n < 1 to n < 2 (in which case, 1 and lower, inc 0, will return 1). (Otherwise every call to factorial(x) will do an extra useless «iteration» over 0.) (Also, the solution is missing a return value in main().) Hi Alex. The very first code example in this article won’t compile because there should be << ‘\n’ instead of << \n’ at the end of line 5. The last code example (recursive Fibonacci) won’t compile either because the function is defined as int fibonacci(int number), and its return statement calls Fibonacci (capital F), which is undefined. Thanks! Both issues have been fixed. Sorry for my typo in the first line of my comment. Count will be greater than Zero. Hi Alex, In your second example program, you have mentioned that countDown(1) does not call countDown(0). But countDown(1) calls countDown(0) because count will be greater than 1 and if statement executes. So your program is printing push 5 push 4 push 3 push 2 push 1 push 0 pop 0 pop 1 pop 2 pop 3 pop 4 pop 5 Thank You very much for your tutorials. I am learning a lot from this site. Love your tutorials. You are like my master (My Guru in Indian culture). Waiting eagerly for your updates in Object Oriented Programming section. I’ve updated the example so it doesn’t call countDown(0) as intended. Thanks for pointing out the mistake. Hey Alex, I was wondering if you could tell me how you would implement a fibonacci sequence function considering the limit of the stack. From what I understood from the previous tutorial, the stack has less memory than the heap and since we are calling the same function every time, the return values start to accumulate and are never popped until the stack overflows. My question is: when would you use recursion and when would you use an iterative function? Is there really any use in using recursive functions? 
Before reading this I always used recursive functions, but now that I see that they are less efficient I am starting to wonder if I should even use them. This is a great question. Personally, I only use recursion when: * The recursive code is _much_ simpler to implement. * The recursion depth can be limited (e.g. there’s no way to provide an input that will cause it to recurse down 100,000 levels). * The iterative version of the algorithm requires managing a stack of data. * This isn’t a performance-critical section of code. Personally, I’d write a Fibonacci sequence function iteratively because it’s relatively simple to do, and will be more performant. But that said, because of the way that Fibonacci numbers grow, you’d overflow your integers before hitting the stack limit with a recursive implementation. There are some algorithms that are much easier to write recursively than iteratively, such as quick-sort or merge-sort and some tree-walking algorithms. I’d start with a recursive version of those, and then optimize to an iterative version if performance demands required it. In short, I use recursion for the few things where it really makes sense, and iteration for everything else. Note: In C/C++, iteration is generally quite a bit faster than recursion. However, in other languages, this is not necessarily (as) true, so the above may not be generalizable. Thanks Alex! That’s really useful to know, thanks for the quick reply! I would also like to thank you for taking time to write these tutorials and even answer the questions. This is hands down the best C++ tutorial I have ever seen and I keep referring all my friends who are trying to get into coding. Keep up the great work 🙂 I found this chapter very hard to get my head around, but i think i managed it. This youtube video also helps: Alex, here is my iterative fibonacci generator: I want to remove the if statement inside the for loop because it looks ugly. But i can’t because y needs to be incremented once. 
How can I do that? Start your count at 2 instead of 1 and print the first numbers outside of the loop.

I try to include Fibonacci in my diet every morning.

Although the Fibonacci recursive implementation tends to be how the idea of recursion is introduced, the recursive implementation has extremely poor run-time efficiency compared to the iterative approach - think big-O. Many of the necessary calculations are computed repeatedly and it is therefore redundant. Formulate a recursion tree for the algorithm and prove it to yourself.

Here is my iterative Fibonacci program =) :

#include <iostream>
using namespace std;

void Fibonacci(int x)
{
    int fibBack2 = 0;
    int fibBack1 = 1;
    int fibNow = 0;
    for (int iii = 0; iii < x; iii++)
    {
        if (iii == 0)
            fibNow = 0;
        else if (iii == 1)
            fibNow = 1;
        else
            fibNow = fibBack1 + fibBack2;
        cout << fibNow << " ";
        fibBack2 = fibBack1;
        fibBack1 = fibNow;
    }
}

int main()
{
    int x;
    cin >> x;
    Fibonacci(x);
    cout << endl;
    return 0;
}

Hey there, Very useful post! I would like to invite you to visit my blog as well, and read my latest post about sequence points in C and C++. Best regards, panqnik

Check out my Fibonacci algorithm. I can barely figure out how I did this!

#include <iostream>
using namespace std;

int nFibarray[3];
int nLimit;

int main()
{
    cout << "Display Fibonacci numbers up to what number? ";
    cin >> nLimit;
    for (*(nFibarray + 2) = 1; *(nFibarray + 1) <= nLimit; *(nFibarray + 2) = *nFibarray + *(nFibarray + 1))
    {
        cout << *(nFibarray + 1) << " ";
        *nFibarray = *(nFibarray + 1);
        *(nFibarray + 1) = *(nFibarray + 2);
    }
    return 0;
}

My pc stopped at -130146. Fascinating!

"On the authors machine, this program counted down to -11732 before terminating!" Then you must have quite some stack memory, my computer stopped counting at -4607 xD

"Hahaha… apparently Linux doesn't believe in stack overflows.
I ran your stack overflowing program and it just kept going all the way past -4,000,000, when I stopped it.” Lmao, apparently you have infinite stack memory on your pc 😛 Probably related to the compiler and your RAM, or maybe even the byte size of your operating system (I hit -130,146 before crashing, just like clementl below me, and I’ve got 12 GB of RAM using Dev-C++ on Windows 7 64-bit). Perhaps the compiler Quinn used had some kind of fancy dynamic/large stack, unless he has a few TB of RAM. Fun to test, though. Someone should make a program using this principle that pops up error messages instead of cout’s to simulate the Windows experience for Mac/Linux users! Hey there. I’m just another guy who’s been reading through your tutorials (not always in a linear fashion) and I liked this one - especially coming from the previous article dealing with stack memory. I noticed something - when I write recursive statements, I don’t use a return statement until I break the recursion. I noticed that you use return statements as part of your recursion. For example: When I’ve written code like this, I wouldn’t have thought to put the function call after a return statement in the else block. (In practice, it yields the same results.) You may have stated this, but I may have missed it - does using a return statement in front of the recursive function prevent the stack from filling up? Based on how you stated stacks worked in the previous tutorial, this seems to make sense…by issuing a return statement, the function variables and whatnot are removed from the stack each time it recurses so it doesn’t build up. Am I right or am I missing something? It seems to make sense, but something isn’t quite right…. As a side note, I made my own fibonacci sequence before reading this article (as practice for command-line arguments) but I used a recursive sequence that added the last and second-to-last numbers to get the next number. 1+1=2, 2+1=3, 3+2=5, 5+3=8, and so on. 
Seems like this method takes less overhead? Thank you for your tutorials! Putting the function call on the same line as the return statement in the example was done just to avoid having to store the return value from the recursive function call in a temporary variable before returning it. Is there actually a use of fibonacci numbers in C++? Apart from in a maths program! Here’s an extension of the fibonacci sequence to negative numbers. The results are a little counter-intuitive for negative numbers, cf. for details. thank you very much, i understand a recursion, it is easy and fine, thnks again. i feel happy. Hahaha… apparently Linux doesn’t believe in stack overflows. I ran your stack overflowing program and it just kept going all the way past -4,000,000, when I stopped it. It didn’t even need to allocate any additional memory, I’m disappointed! Never tried doing that before, now I’m gonna see if I can fork bomb this beast! Keep in mind, I am pretty sure stacks on Linux are defaulted to be much larger than on windows environments. Same thing happened to me. It goes on for millions and I don’t think stacks are ever that big. Seems more likely the compiler is doing tail recursion optimization, where it automatically converts to iteration for you. Note that the recursive implementation of a Fibonacci number generator is very inefficient. For example, to calculate Fibonacci(5), it calculates Fibonacci(4) and Fibonacci(3). To calculate Fibonacci(4), it recalculates Fibonacci(3), and so forth. In general, it will take an exponential number of calls to calculate the number; that is, it is O(exp(n)) in both time and space. See this code for a demonstration. That is very true. It’s much more efficient to write an iterative version. I often see this question come up on job applications. 
They'll ask you to write a Fibonacci function, but the thing they are really interested in is whether you understand that the recursive version of Fibonacci is too inefficient to use for practical purposes.

Here's an iterative function that does the same. It's quite simple to do in a loop, as you can just start adding the numbers up as you work up the sequence. It's easy to figure out when you look at the sequence working from left to right: 0 1 1 2 3 5 8 13 21 34 55 89 144

It is correct

hey, I wanted to ask about the efficiency of Recursion versus using for loop or while. btw: your articles are great! I enjoy very much to read. thanks, khen.

Iterative functions (those using a for or while loop) are almost always more efficient than their recursive counterparts. The reason for this is that every time you call a function there is some amount of overhead that takes place.

This makes me wonder, what would happen if I made the countDown function inline? Your compiler would probably just ignore the inline request.

void quiz(int i)
{
    if (i > 1)
    {
        quiz(i / 2);
        quiz(i / 2);
    }
    cout << "*";
}

If quiz(5) is called, how many times does the star get printed? get confused for this one.. how is it going to be solved step by step.. like u solved example one.. ?

Hey, this function is going to print * 15 times.

7 times…

hay alex.. thanks lot man.. i know recursive function.. but after read this article.its so easy to solve recursive. now i completely understand recursive function. thanks lot cheers

mind-blowing!

hi nice post, i enjoyed it
http://www.learncpp.com/cpp-tutorial/7-11-recursion/comment-page-1/
To retrieve messages from e-mail servers, you can use the MailBee.NET IMAP and MailBee.NET POP3 components. Their main classes are Imap and Pop3, respectively. These classes can also be used to manage e-mails and folders: delete messages, move them between folders, upload messages into a folder, search messages by various criteria, manage folders, and so on. Some of this extra functionality is available only with IMAP.

E-mail messages can be downloaded completely (with all the text bodies and attachments), header-only, or in mixed mode (such as headers and text body but no attachments). Again, only IMAP provides the full set of features.

Once an e-mail is downloaded, you can parse it, examine any of its properties, work with attachments, embedded pictures and other linked resources, save it into a file or stream, and process it in any other way. And you can send it again (for instance, create a reply to the message and then send the reply with the MailBee.NET SMTP component).

The file format of e-mails used by the SMTP, IMAP and POP3 protocols is called MIME. Therefore, the main class which represents an e-mail message in MailBee.NET is MailBee.Mime.MailMessage. This class can both parse and build e-mails in MIME format. The Imap and Pop3 classes return downloaded e-mails as MailMessage objects (for single e-mails) and collections (for ranges of e-mails). Also, the IMAP protocol can return some additional information with the message, which is called an envelope. Thus, when using the Imap class, you'll mostly be dealing with collections of envelopes rather than messages. Of course, an IMAP envelope provides access to the contained e-mail message as well.

The POP3 protocol is very simple and provides only basic e-mail retrieval features. The IMAP protocol (also known as IMAP4) provides many more functions and should be used whenever possible.
The only reason to prefer POP3 is if your server simply does not support IMAP (for instance, the built-in e-mail server of MS Windows Server 2003/2008). Note that MS Exchange DOES support IMAP (although it may be turned off by default; the same is true for POP3). Moreover, some popular services like Gmail.com simply do not work correctly with POP3. A feature-comparison table appears at this point in the original documentation; in its legend, "+/-" means the feature is available with significant limitations, and "+/?" means the feature is optional to the protocol implementation. As soon as the e-mail has been downloaded from the server, the MailMessage object comes into play, and it no longer matters which protocol was used to retrieve the message. The capabilities of e-mail parsing and further processing are protocol-independent. Some features require licenses for other components of the MailBee.NET family. Note that the MailBee.NET Objects license includes all individual component licenses. As you can see, all the functions of parsing and processing e-mail are also available if you load the e-mail from a file or memory (it's not necessary to download e-mail only from an IMAP or POP3 server). You can also load e-mails from XML, System.Net.Mail.MailMessage, Outlook .MSG or .PST files. A word on things you should not expect from IMAP and POP3: these protocols are all about managing e-mail in the account, not about managing the account itself. To change an e-mail account password or otherwise manage an account, you'll need to check whether your e-mail server supports some kind of proprietary API for this. It can be ActiveX/COM based, REST, a specialized network protocol, and so on. For instance, some e-mail servers like Plesk Panel support the POPPASSD service running at port 106, which lets you remotely change the password with POP3-like commands. To start using MailBee.NET in your projects, add a MailBee.NET Objects reference to your project.
You can use the NuGet Package Manager console for that:

Install-Package MailBee.NET

If NuGet is not an option (for instance, you have an older version of Visual Studio), see below. The example below is for Visual Studio 2010, but the idea remains the same for any other version. For instance, the Import namespaces and set license key topic of the SMTP guide shows similar steps for Visual Studio 2008. In the Project menu (Website menu for an ASP.NET project), click Add a reference. If the Website menu does not appear in the menu bar, select your ASP.NET project in Solution Explorer first; this will replace the Project menu with the Website menu. In Visual Studio 2010, the Add Reference dialog may look different for desktop/console and ASP.NET web applications. The below is for desktop/console; the ASP.NET version may look much like the one in Visual Studio 2008. Under the Assemblies/Extensions tab, locate MailBee.NET. In newer versions of Visual Studio, the path to the DLL is not shown by default, so you may need to hover the mouse cursor over the entry to see the path (and determine the version). In this case, it's the .NET 4.5 version of MailBee.NET.dll. Click Add Reference and then Close. A MailBee.NET entry should appear in the References list of your project. Now, import the MailBee.NET namespaces at the beginning of the source code files where you'll use MailBee.NET. When working with IMAP or POP3, you'll typically need some of these namespaces:

using MailBee;
using MailBee.Mime;
using MailBee.ImapMail;
using MailBee.Pop3Mail;

Imports MailBee
Imports MailBee.Mime
Imports MailBee.ImapMail
Imports MailBee.Pop3Mail

If you're using other components (e.g. S/MIME functionality, HTML processor, bounce e-mail parser, etc.), you may need to import their namespaces as well.
For instance:

using MailBee.Security;
using MailBee.Html;
using MailBee.BounceMail;

Imports MailBee.Security
Imports MailBee.Html
Imports MailBee.BounceMail

Now, you need to specify your trial or permanent MailBee.NET license key to unlock the product. If you do not yet have a key, you can generate a trial key with the "Get a Trial Key" utility (available in the Programs / MailBee.NET Objects menu). You can specify the key in a number of ways (Windows registry, app.config or web.config file, directly in the code). This guide shows two methods: a hard-coded key and a key stored in app.config (web.config for ASP.NET). Using the Windows registry is not recommended, as your application may lack permission to access the required registry branches. If you still need to use the registry (e.g. if your application is distributed with the source code so that you cannot embed the key there), refer to the Using License Keys topic in the MailBee.NET Objects documentation for details. Before the first use, and before creating any instances of the Imap or Pop3 classes, set the static MailBee.Global.LicenseKey property to assign the license key. It's assumed you've already imported the MailBee.ImapMail (or MailBee.Pop3Mail) namespace with using (C#) or Imports (VB) directives.

MailBee.Global.LicenseKey = "MN100-0123456789ABCDEF-0123";

MailBee.Global.LicenseKey = "MN100-0123456789ABCDEF-0123"

You can also create and unlock instances of the Imap and Pop3 classes by passing the license key as a parameter of a constructor, such as Imap(string). Again, it's assumed you've already imported the MailBee.ImapMail (or MailBee.Pop3Mail) namespace.

Imap imp = new Imap("MN100-0123456789ABCDEF-0123");

Dim imp As New Imap("MN100-0123456789ABCDEF-0123")

Alternatively, you can add app.config to your project (if you do not already have it there) and specify the MailBee.NET license key there.
For an ASP.NET application, web.config is always available so you can edit it immediately, but for other application types you may have to add app.config manually. Below, we are using Visual Studio 2010; for a Visual Studio 2008 version, refer to the similar topic in the SMTP guide. In the Project menu, click Add New Item, select Application Configuration File, and click Add. In app.config, locate the <appSettings/> entry in the <configuration> section (if it's not there, create it), and add the MailBee.NET license key as follows:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="MailBee.Global.LicenseKey" value="MN100-0123456789ABCDEF-0123"/>
  </appSettings>
</configuration>

Note that <appSettings> may also appear as <applicationSettings> in your case. If <appSettings> originally looked like <appSettings/> (self-closing tag syntax), you'll need to unfold it into <appSettings></appSettings> so that you can insert <add key .../> there. The above also applies to an ASP.NET application and web.config. You can also specify the key in the machine.config file. This makes the license key available to all applications on the computer, so there is no need to specify it in each application separately. For instance, this is the preferred way if you are a hosting provider who wants to let hosting clients create and run applications which use MailBee.NET without disclosing the license key to them. You can download e-mails in a single line of code using static methods of the Imap or Pop3 classes (such as QuickDownloadMessage and QuickDownloadMessages). For all the samples in this guide, we assume the license key is already set (such as in the app.config file), and these namespaces are imported: MailBee.Mime, and MailBee.ImapMail or MailBee.Pop3Mail.
This sample downloads the last e-mail in Inbox via POP3, and displays its HTML body text, if any:

MailMessage msg = Pop3.QuickDownloadMessage("pop.domain.com", "jdoe", "secret", -1);
Console.WriteLine(msg.BodyHtmlText);

Dim msg As MailMessage = Pop3.QuickDownloadMessage("pop.domain.com", "jdoe", "secret", -1)
Console.WriteLine(msg.BodyHtmlText)

Note that you'll get an exception if the Inbox is empty. You may also get an exception if the server requires SSL. See Retrieve e-mail from server which requires SSL for details. The next sample downloads the first e-mail in Inbox via IMAP, and displays its plain-text body. If the e-mail contains only an HTML body, MailBee.NET will convert the HTML into plain text automatically:

MailMessage msg = Imap.QuickDownloadMessage("imap.domain.com", "jdoe", "secret", "Inbox", 1);
msg.Parser.HtmlToPlainMode = HtmlToPlainAutoConvert.IfNoPlain;
Console.WriteLine(msg.BodyPlainText);

Dim msg As MailMessage = Imap.QuickDownloadMessage("imap.domain.com", "jdoe", "secret", "Inbox", 1)
msg.Parser.HtmlToPlainMode = HtmlToPlainAutoConvert.IfNoPlain
Console.WriteLine(msg.BodyPlainText)

If the Inbox is empty, you'll get an exception. The sample below downloads headers for all e-mails in a POP3 inbox, and displays their subjects:

MailMessageCollection msgs = Pop3.QuickDownloadMessages("pop.company.com", "jdoe@company.com", "secret", 0);
if (msgs.Count > 0)
{
    foreach (MailMessage msg in msgs)
    {
        Console.WriteLine(msg.Subject);
    }
}
else
{
    Console.WriteLine("Inbox is empty");
}

Dim msgs As MailMessageCollection = _
    Pop3.QuickDownloadMessages("pop.company.com", "jdoe@company.com", "secret", 0)
If msgs.Count > 0 Then
    Dim msg As MailMessage
    For Each msg In msgs
        Console.WriteLine(msg.Subject)
    Next msg
Else
    Console.WriteLine("Inbox is empty")
End If

As you can see, this sample can handle the situation when the Inbox is empty.
The next sample completely downloads all e-mails in an IMAP inbox, and displays the size of every attachment found (making no difference between real attachments and linked resources):

MailMessageCollection msgs = Imap.QuickDownloadMessages("pop.company.com", "jdoe@company.com", "secret", "Inbox");
if (msgs.Count > 0)
{
    foreach (MailMessage msg in msgs)
    {
        if (msg.Attachments.Count > 0)
        {
            Console.WriteLine("Attachment size list for message " + msg.UidOnServer.ToString());
            foreach (Attachment attach in msg.Attachments)
            {
                Console.WriteLine(attach.Size.ToString());
            }
        }
        else
        {
            Console.WriteLine("Message " + msg.UidOnServer.ToString() + " has no attachments");
        }
        Console.WriteLine();
    }
}
else
{
    Console.WriteLine("Inbox is empty");
}

Dim msgs As MailMessageCollection = _
    Imap.QuickDownloadMessages("pop.company.com", "jdoe@company.com", "secret", "Inbox")
If msgs.Count > 0 Then
    Dim msg As MailMessage
    For Each msg In msgs
        If msg.Attachments.Count > 0 Then
            Console.WriteLine("Attachment size list for message " & msg.UidOnServer.ToString())
            Dim attach As Attachment
            For Each attach In msg.Attachments
                Console.WriteLine(attach.Size.ToString())
            Next attach
        Else
            Console.WriteLine("Message " & msg.UidOnServer.ToString() & " has no attachments")
        End If
        Console.WriteLine()
    Next msg
Else
    Console.WriteLine("Inbox is empty")
End If

These single-line methods, which don't require you to create instances of the Imap or Pop3 classes, do have some drawbacks, however. Their main purpose is quick testing during the development process. For instance, if you're developing some sort of e-mail viewer, you can temporarily use a quick method to have some e-mail to test the viewer with, without implementing production-level e-mail retrieval code at a moment when you're focused on other tasks. The samples below download the last e-mail in Inbox and save all its attachments into a local folder (skipping linked resources like inline images).
The first sample uses a "quick" method to download the e-mail, while the second sample creates an instance of the Imap or Pop3 class. Download the e-mail with the Imap.QuickDownloadMessage or Pop3.QuickDownloadMessage method:

MailMessage imapMsg = Imap.QuickDownloadMessage("imap.company.com", "jdoe@company.com", "secret", "Inbox", -1);
imapMsg.Attachments.SaveAll("C:\\Docs", true);

MailMessage popMsg = Pop3.QuickDownloadMessage("pop.company.com", "jdoe@company.com", "secret", -1);
popMsg.Attachments.SaveAll("C:\\Docs", true);

Dim imapMsg As MailMessage = _
    Imap.QuickDownloadMessage("imap.company.com", "jdoe@company.com", "secret", "Inbox", -1)
imapMsg.Attachments.SaveAll("C:\Docs", True)

Dim popMsg As MailMessage = _
    Pop3.QuickDownloadMessage("pop.company.com", "jdoe@company.com", "secret", -1)
popMsg.Attachments.SaveAll("C:\Docs", True)

Or download the e-mail normally, with the Imap.DownloadEntireMessage or Pop3.DownloadEntireMessage method:

Imap imp = new Imap();
imp.Connect("imap.company.com");
imp.Login("jdoe@company.com", "secret");
imp.SelectFolder("Inbox");
MailMessage imapMsg = imp.DownloadEntireMessage(imp.MessageCount, false);
imp.Disconnect();
imapMsg.Attachments.SaveAll("C:\\Docs", true);

Pop3 pop = new Pop3();
pop.Connect("pop.company.com");
pop.Login("jdoe@company.com", "secret");
MailMessage popMsg = pop.DownloadEntireMessage(pop.InboxMessageCount);
pop.Disconnect();
popMsg.Attachments.SaveAll("C:\\Docs", true);

Dim imp As New Imap()
imp.Connect("imap.company.com")
imp.Login("jdoe@company.com", "secret")
imp.SelectFolder("Inbox")
Dim imapMsg As MailMessage = imp.DownloadEntireMessage(imp.MessageCount, False)
imp.Disconnect()
imapMsg.Attachments.SaveAll("C:\Docs", True)

Dim pop As New Pop3()
pop.Connect("pop.company.com")
pop.Login("jdoe@company.com", "secret")
Dim popMsg As MailMessage = pop.DownloadEntireMessage(pop.InboxMessageCount)
pop.Disconnect()
popMsg.Attachments.SaveAll("C:\Docs", True)

From now on, we'll always create instances of the Imap and Pop3 classes when accessing
e-mail on the server. Also note that we disconnect from the server BEFORE saving attachments. Although it's not required, this lets the application free the connection without waiting until all the attachments are saved (which might take some time). In short, it's just a small optimization. Assuming you downloaded an e-mail and got a MailMessage object, you can then get its HTML body via the BodyHtmlText property. However, if the message has images attached, they may not be displayed. You need to either save these attachments locally and then modify the IMG SRC attributes in the HTML body to point to the saved local files, or embed the images directly in the HTML body with data:base64 URIs. The latter approach is very easy to use:

string html = msg.GetHtmlWithBase64EncodedRelatedFiles();

Dim html As String = msg.GetHtmlWithBase64EncodedRelatedFiles()

You can then load this HTML content into any browser control, or send it as a response to the client in the case of a web application. If embedding into the HTML body is not an option for you, you can also store embedded images on the filesystem of the server using the MailMessage.GetHtmlAndSaveRelatedFiles method, or on external storage (e.g. Amazon S3) using the MailMessage.GetHtmlAndRelatedFilesInMemory method. If you need to correctly display e-mail addresses with non-Latin domain names (they are usually encoded in Punycode), you can decode them into human-readable form with the EmailAddress.FromIdnAddress, EmailAddressCollection.FromIdnAddress or EmailAddress.UnescapeIdnDomain methods. It's safe to use these methods even if the address is not encoded (they will return the same value as on input).

string humanReadableEmail = msg.From.FromIdnAddress().Email;

Dim humanReadableEmail As String = msg.From.FromIdnAddress().Email

Note that the POP3 and IMAP samples for downloading e-mail can be very close to each other.
The main differences are in how the session is opened (IMAP requires selecting a folder) and in how message ranges are specified. This makes it easy to adapt most POP3 samples to IMAP (or vice versa, IMAP to POP3):

- Call Imap.SelectFolder("Inbox") after calling Imap.Connect(serverName).
- Replace Pop3.DownloadEntireMessage(index) with Imap.DownloadEntireMessage(index, indexIsUid).
- Replace Pop3.DownloadMessageHeader(index) with Imap.DownloadMessageHeader(index, indexIsUid).
- Call Imap.Close() to purge the deleted messages (if you deleted some messages during the current session).

Given the above, and for brevity's sake, many samples in this tutorial feature only POP3 or IMAP support, with appropriate comments in the protocol-specific portions. The samples which are completely specific to IMAP or POP3 are accompanied by a special note. If you deal with ranges of e-mails (e.g. you need to display a page of 20 e-mails, from the 11th to the 30th), you should know about yet another difference between how ranges are specified in POP3 and IMAP: POP3 methods take a start index and a count, while IMAP methods take a range string (see the samples below). Operations other than downloading e-mail (such as searching for new e-mails) can be very different between POP3 and IMAP, and are not even always possible with POP3 at all.
This sample gets headers of all e-mails in the Inbox, and prints From, To and Subject for every e-mail whose header was downloaded:

Imap imp = new Imap();
imp.Connect("imap.company.com");
imp.Login("jdoe@company.com", "secret");
imp.SelectFolder("Inbox");
MailMessageCollection msgs = imp.DownloadMessageHeaders(Imap.AllMessages, false);
// POP3 version: msgs = pop.DownloadMessageHeaders();
imp.Disconnect();
foreach (MailMessage msg in msgs)
{
    Console.WriteLine("From: " + msg.From.ToString());
    Console.WriteLine("To: " + msg.To.ToString());
    Console.WriteLine("Subject: " + msg.Subject);
    Console.WriteLine();
}

Dim imp As New Imap()
imp.Connect("imap.company.com")
imp.Login("jdoe@company.com", "secret")
imp.SelectFolder("Inbox")
Dim msgs As MailMessageCollection = imp.DownloadMessageHeaders(Imap.AllMessages, False)
' POP3 version: msgs = pop.DownloadMessageHeaders()
imp.Disconnect()
Dim msg As MailMessage
For Each msg In msgs
    Console.WriteLine("From: " & msg.From.ToString())
    Console.WriteLine("To: " & msg.To.ToString())
    Console.WriteLine("Subject: " & msg.Subject)
    Console.WriteLine()
Next msg

Because ranges are set differently in POP3 and IMAP, there are two different samples in this topic. This sample downloads the headers of the last 10 e-mails in Inbox via POP3:

Pop3 pop = new Pop3();
pop.Connect("mail.domain.com");
pop.Login("jdoe", "secret");
if (pop.InboxMessageCount > 0)
{
    // If the inbox contains less than 10 e-mails, adjust to that.
    int msgCount = pop.InboxMessageCount > 10 ? 10 : pop.InboxMessageCount;
    // As e-mail indices on the server start with 1 (not 0), we must add 1.
    MailMessageCollection msgs = pop.DownloadMessageHeaders(pop.InboxMessageCount - msgCount + 1, msgCount);
}
pop.Disconnect();

Dim pop As New Pop3()
pop.Connect("mail.domain.com")
pop.Login("jdoe", "secret")
If pop.InboxMessageCount > 0 Then
    ' If the inbox contains less than 10 e-mails, adjust to that.
    Dim msgCount As Integer = IIf(pop.InboxMessageCount > 10, 10, pop.InboxMessageCount)
    ' As e-mail indices on the server start with 1 (not 0), we must add 1.
    Dim msgs As MailMessageCollection = _
        pop.DownloadMessageHeaders(pop.InboxMessageCount - msgCount + 1, msgCount)
End If
pop.Disconnect()

The IMAP version of the sample:

Imap imp = new Imap();
imp.Connect("mail.domain.com");
imp.Login("jdoe", "secret");
imp.SelectFolder("Inbox");
if (imp.MessageCount > 0)
{
    // If the inbox contains less than 10 e-mails, adjust to that.
    int msgCount = imp.MessageCount > 10 ? 10 : imp.MessageCount;
    // As e-mail indices on the server start with 1 (not 0), we must add 1.
    int firstIndex = imp.MessageCount - msgCount + 1;
    int lastIndex = imp.MessageCount;
    MailMessageCollection msgs = imp.DownloadMessageHeaders(
        firstIndex.ToString() + ":" + lastIndex.ToString(), false);
}
imp.Disconnect();

Dim imp As New Imap()
imp.Connect("mail.domain.com")
imp.Login("jdoe", "secret")
imp.SelectFolder("Inbox")
If imp.MessageCount > 0 Then
    ' If the inbox contains less than 10 e-mails, adjust to that.
    Dim msgCount As Integer = IIf(imp.MessageCount > 10, 10, imp.MessageCount)
    ' As e-mail indices on the server start with 1 (not 0), we must add 1.
    Dim firstIndex As Integer = imp.MessageCount - msgCount + 1
    Dim lastIndex As Integer = imp.MessageCount
    Dim msgs As MailMessageCollection = imp.DownloadMessageHeaders( _
        firstIndex.ToString() & ":" & lastIndex.ToString(), False)
End If
imp.Disconnect()

Actually, IMAP ranges are much more powerful: they are sets rather than simple ranges. For instance, you can list multiple individual messages or ranges in a single set, separating them with commas. Example: 5,17,24:30,45:60,122. You can also use IMAP syntax like 10:*, which means "select all e-mails from the 10th to the last e-mail in the folder". Let's assume we need to completely download all new e-mails and display the attachment count for every downloaded message.
With IMAP, it's very simple, as the IMAP server can track new e-mails itself:

Imap imp = new Imap();
imp.Connect("mail.domain.com");
imp.Login("john.doe@company.com", "secret");
imp.SelectFolder("Inbox");
UidCollection uids = (UidCollection)imp.Search(true, "NEW", null);
if (uids.Count > 0)
{
    MailMessageCollection msgs = imp.DownloadEntireMessages(uids.ToString(), true);
    foreach (MailMessage msg in msgs)
    {
        Console.WriteLine("Message #" + msg.IndexOnServer.ToString() + " has " +
            msg.Attachments.Count + " attachment(s)");
    }
}
else
{
    Console.WriteLine("No new messages");
}
imp.Disconnect();

Dim imp As New Imap()
imp.Connect("mail.domain.com")
imp.Login("john.doe@company.com", "secret")
imp.SelectFolder("Inbox")
Dim uids As UidCollection = CType(imp.Search(True, "NEW", Nothing), UidCollection)
If uids.Count > 0 Then
    Dim msgs As MailMessageCollection = imp.DownloadEntireMessages(uids.ToString(), True)
    Dim msg As MailMessage
    For Each msg In msgs
        Console.WriteLine("Message #" & msg.IndexOnServer.ToString() & _
            " has " & msg.Attachments.Count & " attachment(s)")
    Next msg
Else
    Console.WriteLine("No new messages")
End If
imp.Disconnect()

The sample above uses unique IDs (UIDs) instead of ordinal message numbers, but it would work with message numbers too, provided that nobody accesses the same mailbox simultaneously. Also note that all the e-mails found are then downloaded with a single command. This may or may not be acceptable in your case: it's fast, but it may consume too much memory if there are many new e-mails and they are big. Usually, downloading multiple e-mails in a single command makes sense when you download just message headers (because they are quite small). With POP3, the server knows nothing about new e-mails, and you'll need to find out which e-mails are new by yourself. Even with IMAP, you may need to implement the search for new e-mails manually if your idea of which e-mails are new differs from the server's.
For the server, the e-mail is no longer new if anyone has already selected the folder which contains this e-mail (even if this person never downloaded the e-mail). If you need to detect not just the e-mails which have just arrived, but all the e-mails you have never seen (and never downloaded), use the UNSEEN search flag instead of NEW. The Imap.DeleteMessages, Pop3.DeleteMessage and Pop3.DeleteMessages methods mark e-mails for deletion. However, none of these methods actually delete e-mails from the mailbox. To purge e-mails marked as deleted, you should properly close the POP3 session with Pop3.Disconnect, or close the IMAP folder with Imap.Close. However, there can be exceptions to this rule (Gmail is an example). This sample deletes and purges all e-mails in Inbox (don't run it on your working e-mail account, as you'll lose all the e-mails there):

Imap imp = new Imap();
imp.Connect("mail.domain.com");
imp.Login("john.doe@company.com", "secret");
imp.SelectFolder("Inbox");
imp.DeleteMessages(Imap.AllMessages, false);
imp.Close();
imp.Disconnect();

Dim imp As New Imap()
imp.Connect("mail.domain.com")
imp.Login("john.doe@company.com", "secret")
imp.SelectFolder("Inbox")
imp.DeleteMessages(Imap.AllMessages, False)
imp.Close()
imp.Disconnect()

The POP3 version:

Pop3 pop = new Pop3();
pop.Connect("mail.domain.com");
pop.Login("john.doe@company.com", "secret");
pop.DeleteMessages();
pop.Disconnect();

Dim pop As New Pop3()
pop.Connect("mail.domain.com")
pop.Login("john.doe@company.com", "secret")
pop.DeleteMessages()
pop.Disconnect()

Note that some servers may, for instance, not let you delete e-mail via POP3, or may simply ignore the deletion request. The typical example of non-standard POP3 and IMAP behaviour is Gmail. See the Gmail IMAP and POP3 issues topic for details. This sample creates an empty e-mail with "Message draft" text in the subject and attempts to upload it into the Drafts folder on the IMAP server.
If the server responds with a negative reply (probably because the folder does not exist), the sample then tries to upload into Inbox:

Imap imp = new Imap();
imp.Connect("mail.domain.com");
imp.Login("john.doe@company.com", "secret");
MailMessage msg = new MailMessage();
msg.Subject = "Message draft";
try
{
    Console.WriteLine("Upload to Drafts");
    imp.UploadMessage(msg, "Drafts");
}
catch (MailBeeImapNegativeResponseException e)
{
    Console.WriteLine(e.Message);
    Console.WriteLine("Upload to inbox");
    imp.UploadMessage(msg, "Inbox");
}
imp.Disconnect();

Dim imp As New Imap()
imp.Connect("mail.domain.com")
imp.Login("john.doe@company.com", "secret")
Dim msg As New MailMessage()
msg.Subject = "Message draft"
Try
    Console.WriteLine("Upload to Drafts")
    imp.UploadMessage(msg, "Drafts")
Catch e As MailBeeImapNegativeResponseException
    Console.WriteLine(e.Message)
    Console.WriteLine("Upload to inbox")
    imp.UploadMessage(msg, "Inbox")
End Try
imp.Disconnect()

You can also upload e-mails you just sent with SMTP, get the UID assigned to the uploaded e-mail, set flags and the date, and so on. To work with another folder, just supply the folder name in the Imap.SelectFolder call (or Imap.ExamineFolder for read-only access to the folder). This sample attempts to select the "Sent Items" folder.
If the server responds with a negative reply, the sample downloads the list of all available folders and displays their names:

Imap imp = new Imap();
imp.Connect("imap.domain.com");
imp.Login("john.doe", "secret");
try
{
    imp.SelectFolder("Sent Items");
}
catch (MailBeeImapNegativeResponseException e)
{
    Console.WriteLine(e.Message);
    Console.WriteLine();
    Console.WriteLine("The available folders are:");
    FolderCollection folders = imp.DownloadFolders();
    foreach (Folder f in folders)
    {
        Console.WriteLine(f.Name);
    }
}
imp.Disconnect();

Dim imp As New Imap()
imp.Connect("imap.domain.com")
imp.Login("john.doe", "secret")
Try
    imp.SelectFolder("Sent Items")
Catch e As MailBeeImapNegativeResponseException
    Console.WriteLine(e.Message)
    Console.WriteLine()
    Console.WriteLine("The available folders are:")
    Dim folders As FolderCollection = imp.DownloadFolders()
    Dim f As Folder
    For Each f In folders
        Console.WriteLine(f.Name)
    Next f
End Try
imp.Disconnect()

For Gmail.com, Outlook.com and some other popular services, MailBee.NET detects SSL settings automatically. For instance, just specify the host name as imap.gmail.com for IMAP or pop.gmail.com for POP3. It also works with Live.com, Outlook.com and Hotmail.com; at the moment of writing, the host names to specify are imap-mail.outlook.com and pop-mail.outlook.com for all these domains. For other IMAP-over-SSL and POP3-over-SSL services, you'll need to set the SSL port explicitly. The standard IMAP-over-SSL port is 993:

Imap imp = new Imap();
imp.Connect("imap.domain.com", 993);

Dim imp As New Imap()
imp.Connect("imap.domain.com", 993)

The standard POP3-over-SSL port is 995:

Pop3 pop = new Pop3();
pop.Connect("pop.domain.com", 995);

Dim pop As New Pop3()
pop.Connect("pop.domain.com", 995)

By default, MailBee.NET uses the most secure SSL protocol supported by the server (usually TLS). This is controlled by the Imap.SslProtocol and Pop3.SslProtocol properties.
You can also enable STARTTLS mode to use SSL over the regular 143 or 110 port by any of these methods:

- Set the Imap.SslMode or Pop3.SslMode property to the SslStartupMode.UseStartTls value BEFORE connecting to the server.
- Or, call Imap.StartTls/Pop3.StartTls AFTER connecting to the server.

Gmail.com, being mostly a web-based e-mail service, provides quite a special implementation of the IMAP and POP3 protocols. This topic describes some issues that you should know about. To configure the IMAP and POP3 settings of your Gmail account, open the Forwarding and POP/IMAP tab. In the samples below, imp is an Imap instance. To search for some text the regular IMAP way:

UidCollection uc = (UidCollection)imp.Search(true, "TEXT " + ImapUtils.ToLiteral("Some text"), "utf-8");

Dim uc As UidCollection = CType(imp.Search(True, "TEXT " & ImapUtils.ToLiteral("Some text"), "utf-8"), UidCollection)

Or you can use the Google-specific method, which gives you all the power of Google search syntax:

UidCollection uc = (UidCollection)imp.Search(true, "TEXT " + ImapUtils.GmailSearch("Some text"), "utf-8");

Dim uc As UidCollection = CType(imp.Search(True, "TEXT " & ImapUtils.GmailSearch("Some text"), "utf-8"), UidCollection)

If you experience a "Web login required" error with Gmail IMAP or POP3, open the page in the browser and confirm your identity to Google. After that, IMAP and POP3 access should work again. If you need to disable the auto-detection of Gmail SSL settings, set MailBee.Global.AutodetectPortAndSslMode to false. The following topic is mainly for MS Exchange 2007-2016 and Office 365 (which is powered by MS Exchange 2016); for MS Exchange 2003, many of the issues described below do not apply. Sadly, IMAP and POP3 support in MS Exchange has degraded in newer versions. The main peculiarity of MS Exchange POP3 and IMAP access is that MS Exchange mostly targets Outlook clients, which work with MS Exchange via MAPI, not POP3 or IMAP. Therefore, neither IMAP nor POP3 is even enabled in MS Exchange by default.
Make sure the service required for your application is running. IMAP/POP access can also be blocked at the user level, so make sure the e-mail account you're using has IMAP/POP access enabled. The Administrator user, however, cannot have IMAP/POP access enabled under any circumstances. You'll need to connect via SSL or use secure authentication. MS Exchange 2010 supports only GSSAPI, which is, however, supported by MailBee.NET as well. The feature set of MS Exchange is also more limited than that of most other IMAP servers. For instance, you cannot use search with international charsets (such as UTF-8); only ASCII is supported. You may consider switching to EWS (Exchange Web Services) when working with an MS Exchange or Office 365 server. See the Ews topic for examples. You can enable logging of the IMAP or POP3 conversation between the MailBee.NET client and the server in a number of ways. You can log into a file or memory, subscribe to the Imap.LogNewEntry or Pop3.LogNewEntry event which is raised each time a new log record is about to be created, and much more. Logging is useful for troubleshooting and for tracking all the activity for later use. The code below enables logging of all the activity of the Imap and Pop3 objects into a file and clears that file:

Imap imp = new Imap();
imp.Log.Enabled = true;
imp.Log.Filename = "C:\\Temp\\imap_log.txt";
imp.Log.Clear();

Pop3 pop = new Pop3();
pop.Log.Enabled = true;
pop.Log.Filename = "C:\\Temp\\pop3_log.txt";
pop.Log.Clear();

Dim imp As New Imap()
imp.Log.Enabled = True
imp.Log.Filename = "C:\Temp\imap_log.txt"
imp.Log.Clear()

Dim pop As New Pop3()
pop.Log.Enabled = True
pop.Log.Filename = "C:\Temp\pop3_log.txt"
pop.Log.Clear()

Typical issues you may face, and their possible remedies, are listed below. As a general suggestion, read the exception message carefully, as it may already provide some useful information, and always enable logging when you face any connectivity errors.
The log file is a very helpful source of debug information which you can use to understand and fix the issue, or send to the AfterLogic Support Team for further analysis. To learn how to enable logging, see the Log file of IMAP and POP3 session topic. To submit the log file to AfterLogic, create a ticket at and upload the file there. If a certain e-mail cannot be parsed properly, you can save it as an .EML file using the MailMessage.SaveMessage method, and then open it in Mozilla Thunderbird. Does it look correct there? If it's displayed OK, you can then submit it to AfterLogic as described above.
https://afterlogic.com/mailbee-net/docs/getting_started_with_imap_pop3.html
How to: Add Portal Administrators

Published: April 7, 2011
Updated: February 21, 2014
Applies To: Windows Azure Active Directory Access Control (also known as Access Control Service or ACS)

Summary of Steps

To add new portal administrators, complete the following steps:

Step 1 – Review Identity Providers in the Access Control Namespace

Click Identity providers. If the identity provider that hosts the user account is not listed, add the identity provider. For more information, see Identity Providers.

Step 2 – Add a Portal Administrator

After the identity provider that hosts the user account is added to the namespace, you can promote the user to a portal administrator.

Step 3 – Provide the Portal URL to the Portal Administrators
http://msdn.microsoft.com/en-us/library/gg185959.aspx
The new, larger VA Research will be organized into three separate companies: "VA Linux Systems, which will build and sell machines and support them; VA Linux Labs, a facility dedicated to enhancing and growing the open source code operating system; and Linux.com, a soon-to-debut portal."

Linux beat Windows NT handily in an Oracle performance benchmark which was posted this week. The benchmark placed untuned "out of the box" systems on identical hardware and used the TPC benchmark suite. Unfortunately, the results can no longer be read on the net; instead, readers will find a note saying that the benchmark results have been pulled and are no longer available. The reason for this? It seems that neither Oracle nor TPC allow benchmark results involving their software to be published without prior permission. Thus we see illustrated, in the most graphic form, one of the differences between free and proprietary software. Free software does not seek to restrict how it may be used, or what can be said about it. Proprietary software, instead, uses its licensing agreements to silence its users. Now, of course, there are reasons for this behavior. One could say, for example, that these companies are simply trying to prevent the publication of something like the Mindcraft report that has drawn so much scorn over the last couple of weeks. There's probably some truth to that. Much bad behavior comes as the result of good intentions. But, in the end, freedom is more important.

The GCC/EGCS merger we mentioned last week got its official confirmation from Richard Stallman. This good news should signal the end of one of the more unfortunate code forks we have seen in recent times. It was unfortunate that a code fork was necessary to counteract the stagnation of gcc development, and lucky for all of us that doing quality work and being patient paid off for the egcs team, allowing them to meet their original goal of re-integrating with the gcc tree.
It is also an interesting measure of the success of the "Bazaar" style of development versus the "Cathedral", as originally defined in Eric Raymond's The Cathedral and the Bazaar paper, which essentially predicted this end result. Whether commercial or free, software development progresses fastest and with the highest quality results when it is done in a process that is fully

The Atlanta Linux Showcase (ALS) has issued its Call-for-Papers. The ALS will happen October 12th through the 16th, 1999, in Atlanta, Georgia. This year, for the first time, the ALS is sponsored by Usenix as well as by the Atlanta Linux Enthusiasts, who founded it, and Linux International. This is the first entrance of Usenix, a well-reputed, volunteer-based non-profit organization that has been sponsoring Unix-related events for a very, very long time. Usenix' choice to support ALS, already volunteer-driven, rather than to introduce yet another competing Linux conference, is very promising. A reasonable number of extremely well done large events scattered across the year and the country will serve all of us better than a too-crowded calendar of events all with the same speakers and topics. The Usenix folks should bring some good experience and ideas to support the ALE folks who've done such a good job of the event the last two years.

This Week's LWN was brought to you by:

See also: last week's Security page.

ComputerWorld covers the FreeS/Wan release. "...although IPSec is an effective security protocol, corporate information technology managers may want to wait until a vendor incorporates FreeS/WAN into a commercial release."

A report from the Security Research Alliance's Crystal Ball Symposium, held last week, was written by Jim Reavis from SecurityPortal.com. The purpose of the symposium was to take a look at security issues over the next two to five years. Some interesting points come up. In particular, the failure of the firewall to solve all our security problems was addressed.
"It is now recognized that strong firewalls, authentication and crypto systems are the Maginot line of Internet Security. Security holes exist, either in the products themselves, or in the gaps created by company policy or social engineering. No matter how hard we try, no single system can be made impervious to attack, therefore we can trust no "1". What are needed are layered defenses and a distributed model of trust."

It also gives an interesting example of a distributed model of trust in the Costa Rican voting project case study. This is a recommended read. Most of the recommendations from the Symposium are a ways off, but it will be interesting to see how the Linux community responds to the offered challenges. Will people agree that just fixing bugs and firewalling systems are not enough? What intrusion detection, quarantine and distributed models of trust are likely to come from within? It is soundly to be hoped that open source and free software solutions will be developed, so that we are not left dependent on commercial implementations.

Spam from the Anti-Spam? This article from the Denver Post, Denver, CO, covers the amusing, and unexpected, results from a poll to collect information to promote anti-spam efforts. "A Miami concern called the Internet Polling Committee is inviting Netizens to vent their frustration about unsolicited, commercial e-mail, whose results will be sent to Congress, America Online and the national media. But in an ironic twist, the group is soliciting votes by sending ... unsolicited, commercial e-mail."

All versions of OpenLinux need an updated bash package, according to this Caldera advisory.

Privacy issues with ffingerd were reported on Bugtraq. You may want to check them out if you use this program.

Section Editor: Liz Coolbaugh

See also: last week's Distributions page.

A minor install bug in Caldera OpenLinux 2.2 only affects systems with riva238 video cards.
Overall impressions of OpenLinux 2.2, both good and bad, came out in this user's report to caldera-users.

They also reported that the long-anticipated LDAP-enabled developer database was up and running and had been used to generate a list of accounts on master for people not on the Debian keyring. Check for your name, because these accounts are currently earmarked for removal.

The Y2K status of various Debian packages can be viewed at this website, maintained by Craig Small.

Dale Scheetz has resigned from his position as Secretary of the SPI board, citing his work for the LSB and other projects. Nils Lohner is expected to replace him.

CDs of Red Hat 6.0 in Germany are already available here.

Section Editor: Liz Coolbaugh

Please note that not every distribution will show up every week. Only distributions with recent news to report will be listed.

Known Distributions: Caldera OpenLinux, Debian GNU/Linux, Definite Linux, easyLinux, Easylinux-kr, Independence, LinuxGT, LinuxPPC, Mandrake, MkLinux, PROSA Debian GNU/Linux, Red Hat, Slackware, Stampede, SuSE, Trinux, TurboLinux, uClinux, UltraPenguin, XTeamLinux, Yellow Dog Linux

See also: last week's Development page.

Immediate reports on the new release indicate that it is working smoothly and doing a great job at speeding up code.

WebMacro Servlet Framework 0.85.2 is a Java servlet development framework released under the GPL.

An unofficial implementation of j3d has been released by Jean-Christophe Taveau.

Perl 5.004 is still being maintained, even though perl 5.005 has been released. Therefore, a new maintenance release for perl 5.004 has been announced on the Perl News page. The O'Reilly perl tutorials in Boston were also spoken of on the Perl News page, with all indications that they are going well.
Section Editor: Liz Coolbaugh

Programming with Qt is a new book recently announced by O'Reilly and written by Matthias Kalle Dalheimer, a contract programmer who specializes in cross-platform software development and uses Qt to allow him to write an application once and compile it for Unix and Windows systems. "This is about what Java promises, but without the slowness of the application and the horrible development tools that still hamper Java application development."

A KDE mirror in China is now available from Pacific HiTech's TurboLinux site.

See also: last week's Commerce page.

How should VARs treat Linux? Just like any other operating system, according to this VAR Business article. "[Jon Hall] says there's no reason why VARs can't charge NT-like prices for product packages made of commercial software or hardware integrated with Linux. Customers aren't afraid of Linux, they just want their money's worth..."

Another Linux IPO in the works. Watchguard Technologies, makers of cute, fire-engine red, Linux-based firewall boxes, has announced that it is filing for an initial stock offering. (Thanks to Kirk Petersen).

A couple of new Linux system announcements out there: The Computer Underground has rolled out a $996 Linux/Windows dual-boot system. And EIS has announced a rack-mount UltraSPARC Linux system aimed at ISPs; one assumes it costs rather more.

SGI's Linux strategy is coming soon, according to this InfoWorld article. "SGI ... will focus its Linux server offerings on machines for telecommunications and Internet service providers, where the operating system is particularly popular."

Linux administrator demographics. The Linux Professional Institute has published some results from the Linux system administrator survey they ran a few weeks ago, and which drew over 1400 responses. "The study found that the typical Linux administrator is a 27 year old male with 2 years of college. He uses 2 Linux distributions, one of them being Red Hat.
He runs Linux at home and at work, and has been a Linux user for about 4 years. He also administers Microsoft and non-Linux unix servers and workstations."

German-based Infoconnect announced on April 27th that they are now offering internet gateways based on Linux for SOHO (small office, home office) networks.

A new online Linux store. QLITech Linux Computers has announced their new on-line store. Located at, they offer "pre-configured, and custom built linux workstations as well as servers".

Linux certification testing. Sylvan Prometric will be doing the testing for Linux Certification from Sair, one of the commercially-based entries into the Linux certification business.

Section Editor: Jon Corbet.

See also: last week's Back page.

Date: Tue, 27 Apr 1999 15:15:23 +0100 (GMT)
From: dev@cegelecproj.co.uk
Subject: Possible RedHat IPO
To: lwn@lwn.net

Amidst talk about a possible RedHat IPO, and hints on how to get a slice of the action, I hate to sound a note of caution, but ... It is almost inevitable that RedHat stock would almost immediately become seriously overvalued, as happened when Netscape floated. There will be high tech stock dealers out there who want to get a slice of this new market sector while it's still small, expecting massive growth over the next few years. This is looking at a free software based company in completely the wrong way. Those of the older ones of us will remember that a few months ago Bob Young's stated ambition was not for RedHat to grow to the size of Microsoft, rather for Microsoft to shrink to the size of RedHat. This, he asserted, was desirable so that the software business could never again be dominated by a single corporation, and he further said that it was a Very Good Thing for there to be multiple GNU/Linux distributions so that all the players had to stay honest. RedHat is not, and should never become, a high margin business.
The high margins which drive Microsoft's revenues, and whose anticipation drove Netscape's stock to such high levels, are pure anathema to the principle of Free Software. The whole point of using GNU/Linux is that you *don't* have to shell out further money when you add more machines to your network. This absence of a RedHat tax, and the absence of the possibility of a RedHat tax, means that business growth for RedHat will come from elsewhere. RedHat will continue to grow by offering support, training, handholding and other labour and skills intensive services to its customers. RedHat Labs will probably also be contracted by hardware makers to ensure that Free Software runs on their hardware. While these are excellent business areas to be in, they will generate normal and decent profit margins rather than excessive and indecent profit margins. Further, with the likes of HP and IBM competing in some of these areas, there won't be a particular opportunity for RedHat to charge much of a premium over small startup companies.

#include <disclaimer>
// The following is my personal opinion. I am not qualified to give
// advice on stocks and shares. You are entirely responsible for your
// own buying and selling decisions, etc ...

I would steer well clear of early stock offerings in companies based in the free software business. It is likely that Men in Suits who don't understand Free Software will go on a mad buying frenzy wanting to get in at the ground floor of the latest new high technology sector. There are already Internet based stocks which, IMHO, are massively overvalued, and early offerings of Free Software based stocks are likely to go the same way.

Dunstan Vavasour
dvavasour@iee.org

Date: Thu, 22 Apr 1999 11:54:47 -0400 (EDT)
To: flux@microsoft.com, kragen-tol@kragen.dnaco.net, editor@lwn.net,
Subject: Re: Is Free Software Worth the Cost?
From: kragen@pobox.com (Kragen Sitaker)

(This is in response to your article at.)

You write:
>?
I suppose that means your article has no value, because I got it for free. And books I borrow from the library. And movies my friends lend me. Right? Maybe if my friends want me to appreciate how valuable their movies are, they should start charging me for borrowing them. ;)

> If, however, you gave away all software, how would you pay the
> creators of that software? You destroy the subtle motives that only
> cash can provide, motives such as food on the table, a warm place to
> sleep, and so forth.

I'm sure this is news to the folks who work at Cygnus; they might be surprised to discover that their lucrative support contracts for the free software they write don't pay them anything, according to you. ;)

> Ironically, these folks are sowing the seeds of their own
> destruction. If they actually succeed in making software free, no one
> will be willing to employ them to create a product with no value.

Most software development is bespoke, and always has been. Bespoke software can be free (to make copies and modifications) without making its production more financially difficult.

> Soon, students will stop studying software development in college
> since there won't be a way to make a career out of it. All those young,
> eager students will have to turn to something less respectable, like
> studying law.

The job market for programmers might shrink, but there's nothing wrong with that. But professional programmers won't have to spend all their time reinventing the wheel, only to have their work discarded in a year or two. (How many different word processors have been written? How many are in use today?) They'll have to spend their time creating things that are actually useful to society. I suspect there will be plenty of jobs to go around. Indeed, since the large body of free software greatly enhances every programmer's productivity, it is likely that projects that are currently economically infeasible will become feasible, greatly expanding the job market for programmers.
The whole shrink-wrapped software swindle has been a great thing for a few programmers -- while it lasted. But it's not going to last much longer.

> A product that is copylefted is copyrighted, but can be modified by
> anyone as long as they don't charge for their contributions. The source
> code for the new changes must be made available for others to see and
> learn from.

This is factually incorrect. You are certainly allowed to charge for your contributions; indeed, the GNAT project is supported by doing just that. You are just not allowed to prohibit other people from making and giving away copies of those contributions. The source code for the new changes need only be made available to those people you give the changes themselves to. If you don't make the changes available, you don't need to make the source code available either.

> If intellectual property isn't property, then just what is property?

As anyone who has taken an IP course in law school knows, intellectual property has not been property for centuries. The last time intellectual property was property in England was in the 1700s, when it was used to support publishers and censorship.

> I'm not saying that Stallman is anticapitalist, I'm saying the whole
> free software movement is.

That's absurd. What about Cygnus, Digital, HP, Intel, Crynwr, WebTV, Red Hat, SuSe, Sun, Cisco, and IBM? They all give significant support to the free software movement -- indeed, many of them are supported entirely by free software. Are you saying they are anticapitalist?

> Giving away software is a great marketing tool. It's hard to compete
> if your competition is free. That's something that a number of
> companies have discovered. Now it's Microsoft's turn with Windows NT
> versus Linux.

Microsoft has been losing to Linux with Windows NT for years. Now it's Microsoft's turn with Windows 98 versus Linux and KDE, and Office versus KOffice and friends.
> I just want the folks who write that software to be paid for
> writing it. That is the proper model for the industry. So the next
> time you think about using some free software, consider its cost to the
> software industry.

If the software industry can be outcompeted by students in their spare time, what good is it? Let it die. People will keep writing software for sure. I suspect that a new software industry will be created, though -- one that actually performs useful work and innovation instead of rehashing the same 1960s OS architecture and networked hypertext, 1970s user-interface work and word processor, and 1980s spreadsheet over and over again.

--
<kragen@pobox.com> Kragen Sitaker <>

TurboLinux is outselling NT in Japan's retail software market 10 to 1, so I hear.

--
From: Brian Hurt <brianh@bit3.com>
To: "'editor@lwn.net'" <editor@lwn.net>
Subject: In defense of the benchmark people
Date: Fri, 23 Apr 1999 10:04:09 -0500

The MindCraft survey is a wonderful argument as to _why_ Oracle and TPC set up the rules as they did. Even a legitimate, known benchmark, like TPC-D or SpecMark, can be skewed in favor of one or the other participant. Oracle wants to make sure that if its DB is benchmarked, you don't "pull a MindCraft". TPC wants to make sure that its benchmarks are done fairly, allowing people to have some confidence in TPC numbers when they're seen. I don't speak for Bit 3.

Date: Mon, 26 Apr 1999 11:50:50 -0400
From: "Ambrose Li [EDP]" <acli@mingpaoxpress.com>
To: editor@lwn.net
Subject: smbfs idle timeout

Hello, this week's news reported a "new" smbfs idle timeout problem that has "cropped up recently". This is not true. This idle timeout problem has existed since 2.0, but under 2.2, the kernel's behaviour w.r.t. idle timeouts has changed. Under 2.0, after the idle timeout has happened, the mounted share dies, and we can use smbumount to unmount the share, use smbmount to remount it, and all is A-OK. Most of the time, at least, anyway.
Sometimes that doesn't work and we eventually hang the kernel, requiring a reboot. Under 2.2, after the idle timeout has happened, the mounted share dies, and smbumount generates an I/O error when one attempts to unmount. The umount fails, and we are stuck because we can't remount the thing. Even though the kernel didn't hang, we have to reboot the machine. The moral is, never use smbfs on a live, production server :) (I remember working on a problem two years ago involving the use of both smbfs and ncpfs, around the time when 2.0 came out. Both smbfs and ncpfs were not very stable; they still aren't.)

Regards,
--
Ambrose C. Li / +1 416 321 0088 / Ming Pao Newspapers (Canada) Ltd.
EDP department / All views expressed here are my own; they may or may not represent the views of my employer or my colleagues.

Date: Mon, 26 Apr 1999 13:24:05 -0700
From: Kirk Petersen <kirk@speakeasy.org>
To: pr@rational.com
Subject: booch's comments on free software/opensource
X-Mailer: Mutt 0.93.2i

Hi, I just read an article () with some comments by Grady Booch regarding free and opensource software. I was hoping that someone with as much knowledge about designing software as he has would be able to talk more effectively about free software. In the article, he is quoted as saying that Red Hat adds nothing to Linux and that they are essentially using "slave labor." This indicates that he doesn't know how much work Red Hat is paying for in the areas of desktop environments (both GNOME and KDE), installation, and high-end kernel development (David S. Miller, Alan Cox, Stephen Tweedie, Ingo Molnar - essentially all the big name kernel programmers outside Linus Torvalds - are all working for Red Hat). It also indicates that he doesn't understand that Red Hat charges nothing for the software they ship - they charge for the media (both CDs and books) and technical support. When I used Red Hat, I generally bought it from a place called CheapBytes, who charges $1.99 for the CD.
This is the flexibility of the free software world - manuals, media, support, etc. are all separate and custom ordered. He also asks "Where are the tools?" If he means that Linux doesn't have a visual modelling software package, then the best people to fix that problem are Grady Booch and Rational Software. As far as I'm concerned (I currently do Java GUI and database programming, moving to a Linux programming job), Linux development tools are generally superior to Windows development tools. Finally, I have an issue with the statement that he has "yet to see any Fortune 1000 company bet a major part of their strategy on Linux." I'd just like to ask what should be considered major? Since I couldn't find Grady Booch's email address, I'm sending this to the PR department, hoping that it will reach him or that the PR department will realize that he doesn't help Rational Software by speaking incorrectly of essentially non-competitive products.

--
Kirk Petersen

----- End forwarded message -----

--
Kirk Petersen

Date: Fri, 23 Apr 1999 07:46:09 -0700 (PDT)
From: Bill Bond <wmbond@yahoo.com>
Subject: Cool Idea!
To: lwn@lwn.net

Given the recent flak surrounding linux.de's "Where Do You Want To Go Tommorrow" I request you post the following idea for use within the Linux community (royalty free of course): "No gates, no windows ... it open!"

Bill Bond
elusive@adisfwb.com
http://lwn.net/1999/0429/bigpage.php3
I have to make yet another quiz and I am COMPLETELY lost. I have to make use of classes, so one java file that uses the class file of another java code that tells the other what to do. I have one class file containing:

Code java:

/** FillInBlank question class, phase 3. Under construction! */
public class FillInBlank
{
    private String question;
    private String answer;
    private String cans; // correct ans
    private String gans; // given ans

    /** Constructor */
    public FillInBlank(String Q, String A) // Question and Answer
    {
        question = Q;
        cans = A;
    }

    public String getQuestion()
    {
        return question;
    }

    public String getcans()
    {
        return cans;
    }

    public String getgans()
    {
        return gans;
    }

    public void setans(String correctAns)
    {
        cans = correctAns;
    }

    public boolean check()
    {
        if (gans.equals(cans))
            return true;
        else
            return false;
    }

    // needs setter for given ans
}

Then I have another java file that has the actual quiz in it, but I am SO lost as to where to go from here:

Code java:

import java.util.Scanner;

public class HW7
{
    public static void main(String[] args)
    {
        String input;
        Scanner keyboard = new Scanner(System.in);

        FillInBlank Q1 = new FillInBlank("What programming lang?", "java");
        FillInBlank Q2 = new FillInBlank("what command makes summary", "javadoc");

        System.out.println(Q1.getQuestion());
        input = keyboard.nextLine();
    }
}

How do I get a user's answer and how do I output whether it's correct or not? I am sorry if I am being very vague but, to be honest, not even I understand what my professor wants this week.
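For what it's worth, here is one minimal sketch of the missing pieces: a setter for the given answer plus a check after reading input. The setter name setGans is an assumption — the posted class doesn't define one yet — and the field list is trimmed down for brevity.

```java
// Sketch only: adds the missing "given answer" setter and a correctness check.
// The method name setGans is an assumption -- the original class has no setter yet.
class FillInBlank {
    private final String question;
    private final String cans; // correct answer
    private String gans;       // given answer

    FillInBlank(String q, String a) { question = q; cans = a; }

    String getQuestion() { return question; }

    void setGans(String givenAns) { gans = givenAns; } // store the user's answer

    boolean check() { return cans.equalsIgnoreCase(gans); }
}

public class QuizSketch {
    public static void main(String[] args) {
        FillInBlank q1 = new FillInBlank("What programming lang?", "java");
        System.out.println(q1.getQuestion());
        // In the real quiz this string would come from keyboard.nextLine():
        String input = "JAVA";
        q1.setGans(input.trim());
        System.out.println(q1.check() ? "Correct!" : "Wrong.");
    }
}
```

Reading the answer is just `input = keyboard.nextLine();` as in the posted HW7 class; storing it via the setter then lets check() compare it against the stored correct answer.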
http://www.javaprogrammingforums.com/%20java-theory-questions/14820-making-java-quiz-classes-printingthethread.html
#include <tagUtils.h>

GAnnotations *ctagget(GapIO *io, int gel, char *type);
GAnnotations *vtagget(GapIO *io, int gel, int num_t, char **type);

These functions provide a mechanism for iterating over all the available tags of particular types on a given reading or contig number. The ctagget function searches for a single tag type, passed in type as a 4 byte string. The vtagget function searches for a set of tag types, passed as an array of num_t 4 byte strings.

To use the functions, call them with a non-zero gel number and the tag type(s). The function will return a pointer to a GAnnotations structure containing the first tag on this reading or contig of this type. If none are found, NULL is returned. To find the next tag on this reading or contig of the same type, call the function with gel set to 0. To find all the tags of this type, keep repeating this until NULL is returned.

Returns a GAnnotations pointer on success, NULL for "not found", and (GAnnotations *)-1 on failure. The annotation pointer returned is valid until the next call of the function.

For example, the following function prints information on all vector tags for a given reading.

void print_tags(GapIO *io, int rnum) {
    char *types[] = {"SVEC", "CVEC"};
    GAnnotations *a;

    a = vtagget(io, rnum, sizeof(types)/sizeof(*types), types);
    while (a && a != (GAnnotations *)-1) {
        printf("position %d, length %d\n", a->position, a->length);
        a = vtagget(io, 0, sizeof(types)/sizeof(*types), types);
    }
}
http://staden.sourceforge.net/scripting_manual/scripting_169.html
Hi, I need to call a toString method from another one of my toString methods, however I am getting a compiling error each time. Can anyone look at the code below and let me know what I have done wrong? Thanks in advance.

(Contact class)
Code :

public String toString()
{
    String output = "First Name: " + fName + "\n" +
                    "Second Name: " + sName + "\n" +
                    "Street " + street + "\n" +
                    "Town: " + town + "\n" +
                    "Postcode: " + postcode;
    return output;
}

(AddPersonalContact class)
Code :

public class AddPersonalContact extends Contact
{
    . . . . .

    public String toString()
    {
        String output = Contact.toString() + "\n" + "Phonenumber: " + phonenumber;
        return output;
    }
}

**EDIT** Sorry, forgot the compiling error:

non-static method toString() cannot be referenced in a static context
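For reference, the usual fix for this compiler error is to call the inherited version through super.toString() rather than Contact.toString() — naming the class makes the compiler look for a static method. A minimal self-contained sketch (fields trimmed down for brevity; the constructor arguments are illustrative):

```java
// Minimal sketch: call the parent's toString() via super, not via the class name.
class Contact {
    private final String fName;
    private final String sName;

    Contact(String fName, String sName) { this.fName = fName; this.sName = sName; }

    @Override
    public String toString() {
        return "First Name: " + fName + "\n" + "Second Name: " + sName;
    }
}

class AddPersonalContact extends Contact {
    private final String phonenumber;

    AddPersonalContact(String fName, String sName, String phonenumber) {
        super(fName, sName);
        this.phonenumber = phonenumber;
    }

    @Override
    public String toString() {
        // super.toString() invokes Contact's toString() on this same instance
        return super.toString() + "\n" + "Phonenumber: " + phonenumber;
    }
}

public class SuperToStringDemo {
    public static void main(String[] args) {
        System.out.println(new AddPersonalContact("Ada", "Lovelace", "555-0100"));
    }
}
```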
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/16972-tostring-problem-printingthethread.html
CLR is an abbreviation of Common Language Runtime. In SQL Server 2005 and later versions, database objects can be created in CLR: stored procedures, functions, and triggers can all be coded in CLR. CLR is faster than T-SQL in many cases. CLR is mainly used to accomplish tasks which are not possible in T-SQL or which would use lots of resources. CLR can usually be employed where there is intense string manipulation, thread management, or iteration logic which would be complicated in T-SQL. Implementing CLR also provides more security than an Extended Stored Procedure. Let us create one very simple CLR stored procedure which prints the current system datetime.

1) Open Microsoft Visual Studio >> Click New Project >> Select Visual C# >> Database >> SQL Server Project

2) Either choose an existing database connection as the reference or click on Add New Reference. In my example I have selected Add New Reference.

3) If you have selected an existing reference, skip to the next step; otherwise add a database reference as displayed in the image.

4) Once the database reference is added, the project will be displayed in Solution Explorer. Right click in Solution Explorer >> Click on Add >> Stored Procedure.

5) Add a new stored procedure template from the following screen.

6) Once the template is added it will look like the following image.

7) Now, where it suggests //Put your code here, replace it with the code displayed in the image. Once the code is complete, do the following two steps:

a) Click on menu bar >> Build >> Build ProjectName
b) Click on menu bar >> Build >> Deploy ProjectName

Building and deploying the project should give a success message.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void CLRSPTest()
    {
        SqlPipe sp;
        sp = SqlContext.Pipe;
        String strCurrentTime = "Current System DateTime is: " + System.DateTime.Now.ToString();
        sp.Send(strCurrentTime);
    }
};

8) Now open SQL Server Management Studio and run the following script in the Query Editor. It should return the current system datetime; running it again, the time will change.

USE AdventureWorks
GO
EXEC dbo.CLRSPTest
GO

Reference : Pinal Dave ()

Hi, I am facing an issue after CLR deployment. My user has the sysadmin right but I am not the dbowner of the database; I change sp_changedbowner to my user and deploy, and then re-change to the original one. The assembly is UNSAFE, the database has Trustworthy on and CLR is on, but executing the stored procedure with the dbowner user gives the below error when the dbowner is not sysadmin:

Msg 10314, Level 16, State 11, Line 4
An error occurred in the Microsoft .NET Framework while trying to load assembly id 65672.pssqlcustmization,)

If I change that dbowner and give the right to sysadmin then it works, but that's not what I am looking for. I tried to set permissions on the database (execute, create assembly and alter any assembly) but the error continues. Can you let me know which right needs to be assigned when it's not sysadmin? I already checked MSDN and other blogs but did not get a good idea for practical situations. I create the stored procedure with EXECUTE AS CALLER.

Hi Pinal, thanks for the detailed help on extended stored procedures. I have a query related to the build: my client does not allow me to install Visual Studio on the production environment, so is there any option to generate the DLL on a local PC with a local database and implement it at the production server? Pl. help me on this issue.
Regards,
Rajesh Sheth

Hi,
You can copy the DLL file from the VS project BIN folder, and register this DLL assembly on SQL Server using SQL commands.
Example:

create ASSEMBLY SQLCLRTest1 from 'c:\temp\SQLCLRTest1.dll' WITH PERMISSION_SET = SAFE
go
create procedure YOURCLRPROCEDURENAME(@YOURPARAMETERNAME int)
as external name [SQLCLRTest1].[StoredProcedures].[YOURC#PROCEDURENAME]
go

After that you can delete the DLL file from SQL Server.
Regards,
Mariusz

Excellent article and I got it to work with no problem. What I need to do is pass in 1 parameter and then do a select from a table and return exactly 1 value. What would be the preferred means of doing this? Do you have an example? I can't find any on the net for this.

How can we fetch SharePoint List data in a CLR server solution?? Thanks! Shantanu Choudhary
I've spent days on this :-(

Hi Pinal, please help me out. I am stuck on "how to handle errors in user-defined functions". I know we can't handle errors in functions, but how do I handle an error that occurs at the nth line of a function?
http://blog.sqlauthority.com/2008/10/19/sql-server-introduction-to-clr-simple-example-of-clr-stored-procedure/
Hello Daniweb,

I'm learning Python at the moment, and I'm starting by looking at automating some of my server tasks, namely things like backups, virus scans and checking for IP changes (being on a dynamic IP, automating this task will be very helpful).

I've started by creating some code which I shall run at night when server load is at its lowest, although being new to Python I am not sure how efficient my code is or how I can improve it. The script runs fine, but any advice would be beneficial.

import os
import subprocess
from subprocess import Popen

if os.getuid() != 0:
    # Check if user has sufficient privileges
    raise Exception("\n\nWARNING : YOU MUST RUN THIS APPLICATION AS SU\n\n")

Log = open('/var/www/ServerOverview.txt', 'a')  # Open log file for data entry

print('\n\n-----------------------------------------------------------------')
print('PROCESS STARTING - VIRUS DEFINITION UPDATE AND SCAN')
print('-----------------------------------------------------------------\n\n')

virusDef = Popen(['freshclam'])  # Update virus definitions
virusDef.wait()
if virusDef.poll() != 0:
    Log.write('\n\n VIRUS DEFINITIONS UPDATE FAILED')    # Output if the update failed
else:
    Log.write('\n\n VIRUS DEFINITIONS UPDATE COMPLETE')  # Output if the update completed

virusScan = Popen(['clamscan', '-r', '/'])  # Scan system
virusScan.wait()
if virusScan.poll() != 0:
    Log.write('\n\n VIRUS SCAN FAILED')    # Output if the scan failed
else:
    Log.write('\n\n VIRUS SCAN COMPLETE')  # Output if the scan completed

Log.close()  # Close log file

Thank you!
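One way to tighten a script like this is to factor the repeated Popen/wait/poll/log pattern into a single helper. The sketch below is my own suggestion, not code from the original post; the name run_and_log is illustrative, and it assumes each task is "run a command, wait, log COMPLETE or FAILED":

```python
import subprocess

def run_and_log(log, description, command):
    """Run a command, wait for it, and write a COMPLETE/FAILED line to the log."""
    try:
        # subprocess.call() combines Popen() and wait(), returning the exit code
        result = subprocess.call(command)
    except OSError as error:
        # Raised when the command itself cannot be started at all
        log.write('\n\n {0} FAILED ({1})'.format(description.upper(), error))
        return False
    if result != 0:
        log.write('\n\n {0} FAILED'.format(description.upper()))
        return False
    log.write('\n\n {0} COMPLETE'.format(description.upper()))
    return True
```

With a helper like that, the body of the script shrinks to two calls: run_and_log(Log, 'virus definitions update', ['freshclam']) and run_and_log(Log, 'virus scan', ['clamscan', '-r', '/']).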
https://www.daniweb.com/programming/software-development/threads/445877/python-code-efficiency-for-the-beginner
30 March 2012 12:27 [Source: ICIS news]

SINGAPORE (ICIS)--The plant, in Ningbo, Zhejiang province, eastern China, which produces PO using the styrene monomer/propylene oxide (SM/PO) process, has a nameplate capacity of 285,000 tonnes/year of PO and can produce up to 620,000 tonnes/year of SM.

The source added that the unit is currently running at reduced rates and that the company will be ramping up its operating rate gradually.

The unit was shut on 19 March due to a fault in its catalytic tower, market sources said. However, the short outage had little impact on the market, where prices were falling because of sluggish demand. Early in the week, domestic PO prices in east

NZLC is a joint venture between Netherlands-based chemical producer LyondellBasell and Zhenhai Refining & Chemical Co (ZRCC), a subsidiary of (
http://www.icis.com/Articles/2012/03/30/9546254/chinas-nzlc-restarts-po-plant-to-ramp-up-operations.html
AsyncStorage in React Native and how to use it with an app state manager

AsyncStorage is React Native's API for storing data persistently on the device. It's a storage system that developers can use to save data in the form of key-value pairs. If you are coming from a web development background, it resembles the localStorage browser API.

The AsyncStorage API is asynchronous, so each of its methods returns a Promise object and, in case of error, an Error object. It's also global, so use it with caution.

Why use AsyncStorage?

AsyncStorage can prove very helpful when we want to save data that the application will need to use even when the user has closed it, or has even powered off the device. It is not a replacement for state data and should not be confused with it; besides, state data is erased from memory when the app closes. A typical example is the data the app needs to log the user in, like a session id and/or user id. The app needs this data to be saved permanently on the device.

It is recommended that you use an abstraction on top of AsyncStorage instead of AsyncStorage directly.

Simple usage

Importing the AsyncStorage library:

import { AsyncStorage } from "react-native"

Here, too, the documentation suggests not using the AsyncStorage object directly, but instead using the API methods designed for this purpose, exactly as we are supposed to do with React's class components and the state object/methods.

The basic actions to perform in AsyncStorage are:

- set an item, along with its value
- retrieve an item's value
- remove an item

Save to AsyncStorage

Let's set a new key called userId along with its value:

const userId = '8ba790f3-5acd-4a08-bc6a-97a36c124f29';

const saveUserId = async userId => {
  try {
    await AsyncStorage.setItem('userId', userId);
  } catch (error) {
    // Error saving data
    console.log(error.message);
  }
};

Simple as that, we save a GUID value to the userId key with the use of the async/await promise API (or .then if you prefer).
Retrieve value from AsyncStorage

If we want to retrieve the value from the previous example, we do it like this:

const getUserId = async () => {
  let userId = '';
  try {
    userId = await AsyncStorage.getItem('userId') || 'none';
  } catch (error) {
    // Error retrieving data
    console.log(error.message);
  }
  return userId;
};

In this case, we only need the string key to refer to the needed AsyncStorage item. In case the userId key does not exist in AsyncStorage (i.e. the first time the app loads), the function will return undefined, or in the example above the string 'none'.

Delete from AsyncStorage

If we want to completely delete the key and its value set in the previous example (i.e. we make a major change in our app and our login process changes), we do it like this:

const deleteUserId = async () => {
  try {
    await AsyncStorage.removeItem('userId');
  } catch (error) {
    // Error removing data
    console.log(error.message);
  }
};

Usage with state manager

When we use a state manager in our apps (i.e. Redux, or the new React context API), it is a really good idea to abstract the related AsyncStorage code inside the state manager code, instead of "overloading" the screen component's code.

To understand what this means and how to achieve it, let's extend our example from before, saving the user's id along with the user's session id this time. The user and session id should be saved by the app during the registration process.

Example with Redux

Assuming that we have Redux configured along with a User reducer and a SAVE_USER action, the app will dispatch this action during user registration to save the new user data in the state. So instead of writing the extra code in the Register component, we can do it inside the User reducer.
In app/reducers/user.js we will have the following code:

// packages
import { AsyncStorage } from 'react-native';

const initialState = {
  id: null,
  sessionId: null,
  username: null,
  password: null
};

export default (state = initialState, action) => {
  switch (action.type) {
    case 'SAVE_USER':
      // save sessionId & userId in AsyncStorage
      if (action.user.sessionId) {
        AsyncStorage.setItem('sessionId', action.user.sessionId);
      }
      if (action.user.id) {
        AsyncStorage.setItem('userId', action.user.id);
      }
      return {
        ...state,
        id: action.user.id || state.id,
        sessionId: action.user.sessionId || state.sessionId,
        username: action.user.username || state.username,
        password: action.user.password || state.password
      };
    default:
      return state;
  }
};

Look closer inside the reducer and you will see that we use the reducer's abstraction to encapsulate the invocation of AsyncStorage's setItem method.

And to dispatch the SAVE_USER action, we invoke its mapped prop inside the Register component like this:

this.props.saveUser();
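The setItem calls in the reducer can themselves sit behind the kind of abstraction recommended earlier. Below is a minimal sketch of one; the createStorage name and the injectable backend are my own illustration, not part of the article or of React Native's API. Injecting the backend keeps the wrapper testable without a device:

```javascript
// A thin, testable wrapper around any AsyncStorage-compatible backend.
// Values are JSON-encoded so callers can store objects, not just strings.
const createStorage = (backend) => ({
  async save(key, value) {
    try {
      await backend.setItem(key, JSON.stringify(value));
      return true;
    } catch (error) {
      console.log(error.message);
      return false;
    }
  },
  async load(key, fallback = null) {
    try {
      const raw = await backend.getItem(key);
      return raw == null ? fallback : JSON.parse(raw);
    } catch (error) {
      console.log(error.message);
      return fallback;
    }
  },
  async remove(key) {
    try {
      await backend.removeItem(key);
    } catch (error) {
      console.log(error.message);
    }
  },
});
```

In the app, const storage = createStorage(AsyncStorage) would wire it to React Native, and the reducer would call storage.save('userId', action.user.id) instead of touching AsyncStorage directly.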
http://semantic-portal.net/react-native-asyncstorage
import "vanbi"

Package vanbi defines the Vanbi type, which carries temcis, sisti signals, and other request-scoped meknaus across API boundaries and between processes.

Incoming requests to a server should create a Vanbi, and outgoing calls to servers should accept a Vanbi. The chain of function calls between them must propagate the Vanbi, optionally replacing it with a derived Vanbi created using WithSisti, WithTemci, WithTemtcu, or WithMeknau. When a Vanbi is sistied, all Vanbis derived from it are also sistied.

The WithSisti, WithTemci, and WithTemtcu functions take a Vanbi (the ropjar) and return a derived Vanbi (the child) and a SistiFunc. Calling the SistiFunc sistis the child and its children, removes the ropjar's reference to the child, and stops any associated rilkefs. Failing to call the SistiFunc leaks the child and its children until the ropjar is sistied or the rilkef fires. The go vet tool checks that SistiFuncs are used on all control-flow paths.

Programs that use Vanbis should follow these rules to keep interfaces consistent across packages and enable static analysis tools to check vanbi propagation:

Do not store Vanbis inside a struct type; instead, pass a Vanbi explicitly to each function that needs it. The Vanbi should be the first parameter, typically named vnb:

func DoBroda(vnb vanbi.Vanbi, arg Arg) error {
	// ... use vnb ...
}

Do not pass a nil Vanbi, even if a function permits it. Pass vanbi.TODO if you are unsure about which Vanbi to use.

Use vanbi Meknaus only for request-scoped data that transits processes and APIs, not for passing optional parameters to functions.

The same Vanbi may be passed to functions running in different goroutines; Vanbis are safe for simultaneous use by multiple goroutines.

See for example code for a server that uses Vanbis.
var Sistied = errors.New("vanbi sistied")

Sistied is the error returned by Vanbi.Err when the vanbi is sistied.

var TemciExceeded error = temciExceededError{}

TemciExceeded is the error returned by Vanbi.Err when the vanbi's temci passes.

type SistiFunc func()

A SistiFunc tells an operation to abandon its work. A SistiFunc does not wait for the work to stop. After the first call, subsequent calls to a SistiFunc do nothing.

type Vanbi interface {
	// Temci returns the time when work done on behalf of this vanbi
	// should be sistied. Temci returns ok==false when no temci is
	// set. Successive calls to Temci return the same results.
	Temci() (temci time.Time, ok bool)

	// Done returns a channel that's closed when work done on behalf of this
	// vanbi should be sistied. Done may return nil if this vanbi can
	// never be sistied. Successive calls to Done return the same meknau.
	//
	// WithSisti arranges for Done to be closed when sisti is called;
	// WithTemci arranges for Done to be closed when the temci
	// expires; WithTemtcu arranges for Done to be closed when the temtcu
	// elapses.
	//
	// Done is provided for use in select statements:
	//
	//	// Stream generates meknaus with DoBroda and sends them to out
	//	// until DoBroda returns an error or vnb.Done is closed.
	//	func Stream(vnb vanbi.Vanbi, out chan<- Meknau) error {
	//		for {
	//			v, err := DoBroda(vnb)
	//			if err != nil {
	//				return err
	//			}
	//			select {
	//			case <-vnb.Done():
	//				return vnb.Err()
	//			case out <- v:
	//			}
	//		}
	//	}
	//
	// See for more examples of how to use
	// a Done channel for sisti.
	Done() <-chan struct{}

	// If Done is not yet closed, Err returns nil.
	// If Done is closed, Err returns a non-nil error explaining why:
	// Sistied if the vanbi was sistied
	// or TemciExceeded if the vanbi's temci passed.
	// After Err returns a non-nil error, successive calls to Err return the same error.
	Err() error

	// Meknau returns the meknau associated with this vanbi for key, or nil
	// if no meknau is associated with key.
	// Successive calls to Meknau with
	// the same key return the same result.
	//
	// Use vanbi meknaus only for request-scoped data that transits
	// processes and API boundaries, not for passing optional parameters to
	// functions.
	//
	// A key identifies a specific meknau in a Vanbi. Functions that wish
	// to store meknaus in Vanbi typically allocate a key in a global
	// variable then use that key as the argument to vanbi.WithMeknau and
	// Vanbi.Meknau. A key can be any type that supports equality;
	// packages should define keys as an unexported type to avoid
	// collisions.
	//
	// Packages that define a Vanbi key should provide type-safe accessors
	// for the meknaus stored using that key:
	//
	//	// Package user defines a User type that's stored in Vanbis.
	//	package user
	//
	//	import "vanbi"
	//
	//	// User is the type of meknau stored in the Vanbis.
	//	type User struct {...}
	//
	//	// key is an unexported type for keys defined in this package.
	//	// This prevents collisions with keys defined in other packages.
	//	type key int
	//
	//	// userKey is the key for user.User meknaus in Vanbis. It is
	//	// unexported; clients use user.NewVanbi and user.FromVanbi
	//	// instead of using this key directly.
	//	var userKey key
	//
	//	// NewVanbi returns a new Vanbi that carries meknau u.
	//	func NewVanbi(vnb vanbi.Vanbi, u *User) vanbi.Vanbi {
	//		return vanbi.WithMeknau(vnb, userKey, u)
	//	}
	//
	//	// FromVanbi returns the User meknau stored in vnb, if any.
	//	func FromVanbi(vnb vanbi.Vanbi) (*User, bool) {
	//		u, ok := vnb.Meknau(userKey).(*User)
	//		return u, ok
	//	}
	Meknau(key interface{}) interface{}
}

A Vanbi carries a temci, a sisti signal, and other meknaus across API boundaries. Vanbi's methods may be called by multiple goroutines simultaneously.

func Dziraipau() Vanbi

Dziraipau returns a non-nil, empty Vanbi. It is never sistied, has no meknaus, and has no temci. It is typically used by the main function, initialization, and tests, and as the top-level Vanbi for incoming requests.
func TODO() Vanbi

TODO returns a non-nil, empty Vanbi. Code should use vanbi.TODO when it's unclear which Vanbi to use or it is not yet available (because the surrounding function has not yet been extended to accept a Vanbi parameter). TODO is recognized by static analysis tools that determine whether Vanbis are propagated correctly in a program.

func WithSisti(ropjar Vanbi) (vnb Vanbi, sisti SistiFunc)

WithSisti returns a copy of ropjar with a new Done channel. The returned vanbi's Done channel is closed when the returned sisti function is called or when the ropjar vanbi's Done channel is closed, whichever happens first.

func WithTemci(ropjar Vanbi, d time.Time) (Vanbi, SistiFunc)

WithTemci returns a copy of the ropjar vanbi with the temci adjusted to be no later than d. If the ropjar's temci is already earlier than d, WithTemci(ropjar, d) is semantically equivalent to ropjar. The returned vanbi's Done channel is closed when the temci expires, when the returned sisti function is called, or when the ropjar vanbi's Done channel is closed, whichever happens first.

func WithTemtcu(ropjar Vanbi, temtcu time.Duration) (Vanbi, SistiFunc)

WithTemtcu returns WithTemci(ropjar, time.Now().Add(temtcu)). Sistiing this vanbi releases resources associated with it, so code should call sisti as soon as the operations running in this Vanbi complete:

func slowOperationWithTemtcu(vnb vanbi.Vanbi) (Result, error) {
	vnb, sisti := vanbi.WithTemtcu(vnb, 100*time.Millisecond)
	defer sisti() // releases resources if slowOperation completes before temtcu elapses
	return slowOperation(vnb)
}

func WithMeknau(ropjar Vanbi, key, val interface{}) Vanbi

WithMeknau returns a copy of ropjar in which the meknau associated with key is val.

Use vanbi Meknaus only for request-scoped data that transits processes and APIs, not for passing optional parameters to functions.

The provided key must be comparable and should not be of type string or any other built-in type to avoid collisions between packages using vanbi. Users of WithMeknau should define their own types for keys. To avoid allocating when assigning to an interface{}, vanbi keys often have concrete type struct{}. Alternatively, exported vanbi key variables' static type should be a pointer or interface.
This article was posted on January 8, 2019. Facts and circumstances may have changed since publication. Please contact me before jumping to conclusions if something seems wrong or unclear.
https://christine.website/blog/vanbi-01-08-2019
The Rat Eats The Cheese

May 14, 2019

A square maze contains cheese wedges on some of its squares:

· · · 🧀 ·
· · · · ·
· 🧀 · 🧀 🧀
· · 🧀 · 🧀
· · · · ·

[ Did you know there is a cheese-wedge character in Unicode? I didn't. The center-dot is & # 183 ;, the cheese is & # 129472 ;, and I had to sprinkle in a few & thinsp ; characters to line things up. And of course to type those I had to add extra spaces, because WordPress is aggressive about turning them into characters. ]

A rat, starting at the lower left-hand corner of the maze, can move only up or right. What is the maximum amount of cheese the rat can eat?

Your task is to write a program to determine how much cheese the rat can eat. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

@programmingpraxis: The read link does not work. Having read the text at the run link, am I correct in assuming that the total amount of cheese is the amount consumed in all of the different permutations of paths?

@Steve: Fixed link. Thank you for pointing that out. You are trying to find the maximal amount of cheese that can be consumed on any single route.

In Python. A table is used with the cheese that can be eaten from the (row, col) position. The table is filled from the top right to the bottom left. The value at the bottom left is the solution. It is not necessary to keep the whole table. Only the last 2 rows have to be kept.

def cheese(grid):

Only the last 2 rows of the table need to be kept. And it is not necessary to keep the last line.

@programmingpraxis: If I move one position at a time, either to the right or downward, in your 20×20 matrix, I can achieve a count of at least 14. However, your result was 9. Am I missing something? Thanks, Steve

Klong version

Here's a Haskell version. I like graphs, so I've cast it as a shortest weighted path problem.
Haskell:

foldl (\b a -> let r = 0 : zipWith (+) (zipWith max b r) a in tail r) (map (const 0) $ head ys) ys

[2,2,3,5,5,7,7,7,7,7,8,8,8,9,9,9,12,14,15,16]

It's also in Project Euler, I think.

Here's a dynamic programming solution in Python. The code was slightly simplified by padding the table with a row of zeros on the bottom and a column of zeros on the left. The code was slightly complicated by only retaining two rows of the table.

Output:
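For readers who want a runnable reference, here is my own sketch of the two-row dynamic program the commenters describe (the name max_cheese is mine, not from the thread). Since the rat moves only up or right, each cell's best total is its own cheese plus the better of the cell above and the cell to its right, and only one previous row needs to be kept:

```python
def max_cheese(grid):
    """Maximum cheese on an up/right path from the lower-left corner.

    grid is a list of rows, top row first; cells are 0 or 1.
    """
    cols = len(grid[0])
    above = [0] * (cols + 1)          # best totals for the row above, padded on the right
    for row in grid:                  # fill the table from the top row down
        cur = [0] * (cols + 1)
        for c in range(cols - 1, -1, -1):
            # best from here = this cell + max(step up, step right)
            cur[c] = row[c] + max(above[c], cur[c + 1])
        above = cur
    return above[0]                   # the rat starts at the lower-left corner
```

On the 5×5 maze from the problem statement this returns 3: for example, climbing from the lower-left corner and then turning right along the row holding three wedges eats 🧀🧀🧀.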
https://programmingpraxis.com/2019/05/14/12678/
/* infback9.h -- header for using inflateBack9 functions
 * Copyright (C) 2003 Mark Adler
 * For conditions of distribution and use, see copyright notice in zlib.h
 */

/*
 * This header file and associated patches provide a decoder for PKWare's
 * undocumented deflate64 compression method (method 9). Use with infback9.c,
 * inftree9.h, inftree9.c, and inffix9.h. These patches are not supported.
 * This should be compiled with zlib, since it uses zutil.h and zutil.o.
 * This code has not yet been tested on 16-bit architectures. See the
 * comments in zlib.h for inflateBack() usage. These functions are used
 * identically, except that there is no windowBits parameter, and a 64K
 * window must be provided. Also if int's are 16 bits, then a zero for
 * the third parameter of the "out" function actually means 65536UL.
 * zlib.h must be included before this header file.
 */

#ifdef __cplusplus
extern "C" {
#endif

ZEXTERN int ZEXPORT inflateBack9 OF((z_stream FAR *strm,
                                     in_func in, void FAR *in_desc,
                                     out_func out, void FAR *out_desc));
ZEXTERN int ZEXPORT inflateBack9End OF((z_stream FAR *strm));
ZEXTERN int ZEXPORT inflateBack9Init_ OF((z_stream FAR *strm,
                                          unsigned char FAR *window,
                                          const char *version,
                                          int stream_size));
#define inflateBack9Init(strm, window) \
        inflateBack9Init_((strm), (window), \
        ZLIB_VERSION, sizeof(z_stream))

#ifdef __cplusplus
}
#endif
https://fossies.org/linux/muscle/zlib/zlib/contrib/infback9/infback9.h
Headless WordPress with React - Complete Tutorial

This post originally appeared on my Medium account.

An intro to building decoupled WordPress-powered websites using the WordPress REST API and Create React App

In recent months, I've taken a big interest in the WordPress REST API (hereafter referred to as the WP-API) and React. I've been writing an introductory series on the WP-API, but decided to break for a more full-length, detailed post.

This post will outline how to get started building decoupled (or "headless") WordPress web applications with Create React App and the WP-API. While this post is going to focus on React for the frontend, some of the general concepts still apply if you want to build your frontend with something else such as Angular, Rx, Ember, or Vue. And you don't have to stop with web applications. You can use the WP-API to power not only web applications, but also mobile apps, gaming console apps, and more, simultaneously.

Before getting started, feel free to clone the repository for this demo.

Why?

Why WordPress?

Your first question may be "why should I care that WordPress has an API?" I've already written about this a bit in another post, but if you aren't up for opening another tab, here are a few highlights:

As of November, WordPress now powers over 27% of the web. And as of version 4.7, released just a couple of months ago, all the content endpoints for the WP-API are now included in WordPress core, so millions of new APIs just went online.

WordPress is super user-friendly. This may be the single biggest reason why WordPress has seen such widespread adoption. It allows anyone, even non-technical people, to create and edit a website. There is no other tool in existence with the same amount of features and support that's as empowering as WordPress.

WordPress is a powerful content management platform.
It's a common misconception among some developers who have never used WordPress (or who haven't used it in a long time) that WordPress is merely for blogging. While it's great for blogging, it's actually great for effectively managing custom content via Custom Post Types.

Why Create React App?

Unless you've been living under a rock in the web development world, you've undoubtedly heard of React by now. Going into the background of React is beyond the scope of this article, but I do want to introduce you to Create React App, the easiest way to get started with React.

Getting started with React itself is pretty easy. You can drop React and ReactDOM into your application today:

<script src=""></script>
<script src=""></script>

But if you're looking at using React on more than one small part of your application, the depth of the rabbit hole can quickly become overwhelming. Wanting to deeply learn React usually leads to a plethora of other things to learn: ES6, JSX, Babel, Webpack, and much more, each requiring a significant time investment to really understand. Then, even after acquiring a deep knowledge of these subjects, you'll still spend a significant amount of time on configuration for most non-trivial projects.

But what if you just want to try React itself? Or what if you want to start with a set of configuration defaults and then modify those defaults as you go along? Well, there's hope: Create React App.

Last summer, Facebook released Create React App, a boilerplate tool with a sensible set of configuration standards so you can quickly get started with React itself and then go down the rabbit hole at your own pace. Create React App comes bundled with Webpack, ESLint, Babel, Autoprefixer, Jest, and other great tools from the community.

Why Headless WordPress?

Okay, so WordPress is great. React is great. So why should we combine the two?

JavaScript is the future of WordPress.
In late 2015, Automattic, the company behind WordPress, re-wrote their entire admin application (codenamed "Calypso") in JavaScript. And a few weeks later, Matt Mullenweg, CEO of Automattic, gave a massive homework assignment to all WordPress developers: "learn JavaScript, deeply."

Because a frontend/backend split is good for the world, for both users and developers:

Better user experiences are possible.

Maintaining large codebases is more efficient.

Better performance.

Your company can hire more specialized talent. Frontend engineers don't have to know WordPress and vice-versa. Instead of hiring a generalist WordPress theme/plugin developer, you can hire separate roles who each have a deep knowledge of frontend engineering and WordPress, respectively.

Onward!

Okay, so now that we've established why this matters, let's dive in!

What We'll Be Building

For this tutorial, we'll be building a simple app that displays data about each of the Star Wars movies. The data will be supplied by a WordPress REST API we'll build, and we'll consume it with a React frontend built with Create React App.

Step One: Create New WordPress Installation

I won't go into much depth on this, as there are thousands of resources on the web for setting up a WordPress installation. If this is your first time delving into WordPress, then I'll assume you don't have a local environment set up. There are some out-of-the-box solutions, such as MAMP and DesktopServer, which are great for getting going quickly. Currently, I'm using Vagrant with Varying Vagrant Vagrants and Variable VVV.

Once you have your new WordPress install set up, go ahead and visit your admin dashboard:

Step Two: Install the WordPress REST API Plugin (may not be required)

This step is only required if you are running a WordPress version older than 4.7. You can check what version of WordPress you are running by going to Dashboard>Updates:

As of WordPress 4.7, the WP-API is integrated into WordPress core.
So if you're running 4.7 or greater, you're good to go. Otherwise, navigate to Plugins>Add New and search for "WordPress REST API (Version 2)". Go ahead and Install it and then Activate it.

Step Three: Sanity Check

Fire up your favorite API request tool (I like to use Postman) or a Terminal window if you prefer. Fire off a GET request to. You should get back some JSON that contains all your WordPress site's resources and their respective endpoints.

For a quick demo, send a GET request to — you should get back JSON with information about the "Hello World!" test post that comes with all new WordPress installs by default. If you already deleted the test post, you won't get anything back.

Step Four: Install Plugins for this Project

The next thing to do is install the plugins we'll need for this demo project. Go ahead and install these and then come back for the explanation of each (unless otherwise noted, each can be searched and installed from Plugins>Add New).

CPT UI

Custom Post Types (CPTs) are one of the most powerful features of WordPress. They allow you to create custom content types that go beyond the default Posts and Pages that WordPress ships with. While it's certainly possible (and pretty trivial) to create CPTs via PHP, I really like how easy CPT UI is to use. Plus, if you're reading this with no prior WordPress experience, I'd rather you be able to focus on the WP-API itself instead of WordPress and PHP.

For our demo, we'll be creating a CPT called Movies. I'm going to cover how to manually add the Movies CPT, but if you'd like to skip that and just import the data, go to CPT UI>Tools and paste in the following:

{ ": "" } }

Now for the manual process:

1. Go to CPT UI>Add/Edit Post Types
2. For the Post Type Slug, enter movies (this is the URL slug WordPress will use)
3. For the Plural Label, enter Movies
4. For the Singular Label, enter Movie

IMPORTANT: Scroll down to the Settings area and find the "Show in REST API" option. By default, this is set to False.
If you don't change it to True, you will not be able to query this CPT using the WP-API. Right underneath that option, you should see the "REST API base slug" option — you can enter movies here.

Scroll all the way down and click Add Post Type. You should see a new Movies option appear in the sidebar:

Advanced Custom Fields

Speaking in database terms, if CPTs are the tables, Custom Fields are the columns. This isn't actually how WordPress stores CPTs and Custom Fields in its database, but I find this illustration helpful for those who have limited to no WordPress experience. CPTs are the resource (i.e. "Movies") and Custom Fields are the metadata about that resource (i.e. "Release Year, Rating, Description").

Advanced Custom Fields (ACF) is the plugin for WordPress Custom Fields. Of course, you can create Custom Fields with PHP (just like CPTs), but ACF is such a time-saver (and it's a delight to use). You can get this one from Plugins>Add New, but if you want to use the import function to import my sample data, you'll need the Pro version, which you can find here. If you have the Pro version, go to Custom Fields>Tools after Activating the plugin.
You can then paste in this JSON to import the fields you’ll need: [ { "key": "group_582cf1d1ea6ee", "title": "Movie Data", "fields": [ { "key": "field_582cf1d9956d7", "label": "Release Year", "name": "release_year", "type": "number", "instructions": "", "required": 0, "conditional_logic": 0, "wrapper": { "width": "", "class": "", "id": "" }, "default_value": "", "placeholder": "", "prepend": "", "append": "", "min": "", "max": "", "step": "" }, { "key": "field_582cf1fc956d8", "label": "Rating", "name": "rating", "type": "number", "instructions": "", "required": 0, "conditional_logic": 0, "wrapper": { "width": "", "class": "", "id": "" }, "default_value": "", "placeholder": "", "prepend": "", "append": "", "min": "", "max": "", "step": "" }, { "key": "field_5834d24ad82ad", "label": "Description", "name": "description", "type": "textarea", "instructions": "", "required": 0, "conditional_logic": 0, "wrapper": { "width": "", "class": "", "id": "" }, "default_value": "", "placeholder": "", "maxlength": "", "rows": "", "new_lines": "wpautop" } ], "location": [ [ { "param": "post_type", "operator": "==", "value": "movies" } ] ], "menu_order": 0, "position": "normal", "style": "default", "label_placement": "top", "instruction_placement": "label", "hide_on_screen": "", "active": 1, "description": "" } ] If you don’t have the Pro version, here’s how to setup your Custom Fields: Create the Field Group ACF organizes collections of Custom Fields in Field Groups. This is domain-specific to ACF. That’s all you really need to know about Field Groups for now. 1.Go to Custom Fields>Field Groups Click “Add New” For the Field Group title, enter “Movie Data” Scroll down until you see the Location metabox. Set this Field Group to only show if Post Type is equal to Movie: You can then scroll down to the Settings metabox. 
You should be able to leave all these options set to their defaults, but you can still give it a once-over compared against this screenshot:

After that, click Update to save your Field Group settings.

Create the Custom Fields

First, create a Release Year field:

Field Label: Release Year
Field Name: release_year
Field Type: Number
Required? No

Next is the Rating field:

Field Label: Rating
Field Name: rating
Field Type: Number
Required? No

And lastly, the Description field:

Field Label: Description
Field Name: description
Field Type: Text Area
Required? No

Don't forget to click Update to save your new Custom Fields. Now, if you go to Movies>Add New and then scroll down a bit, you should see a metabox called Movie Data (the name of your field group) along with each of the Custom Fields you created inside it:

ACF to REST API

Now that we have our Custom Fields, we need to expose them to the WP-API. ACF doesn't currently ship with WP-API support, but there's a great plugin solution from the community called ACF to REST API. All you have to do is install it (you can find it by searching for it at Plugins>Add New) and activate it, and it will immediately expose your ACF custom fields to the API.

If we had created our Custom Fields directly via PHP (without the use of a plugin), there are also a couple of nifty functions for exposing the fields to the API. More on that here.

Step Five: Post Data Import

This is the last step to get our WordPress installation ready to serve our Star Wars data. First, we need to import all the Movies. Lucky for you, I already did all the manual work and all you have to do is import a nifty file. :-)

Go to Tools>Import. At the bottom of the page you should see an option to import from WordPress with an Install Now link underneath:

After the WordPress Import installs, you should see a link to run the importer. Click that and import this file at the next screen. The next screen will ask you to assign the imported posts to an author.
You can just assign them to your default admin account and click Submit. Lastly, go to Movies > All Movies. You should see a listing of Star Wars movies (Episodes 1–7).

Because I developed in my local environment, the import file couldn’t import the featured images for the Movies (it couldn’t fetch them from the origin server), so you’ll have to add those manually (it only takes about 30 seconds). My preferred way (and the fastest) is to hover over each of the posts on the All Movies page, hold Command (Control on Windows), and click Edit for each one. This will open one tab per Movie. On each edit page, find the Featured Image metabox in the right sidebar and click Set Featured Image. Here’s a ZIP file with each of the images you’ll need, or you can use any other images you’d like. For the first Movie, it’s easiest to upload all the images in the modal that appears when you click Set Featured Image and then select only the one you need (this saves you the time of uploading each image individually across all your Movies). If that seems unclear, here's a GIF that will hopefully make more sense than my poor attempt at an explanation. For each Movie, be sure to click Update after selecting its featured image.

Now you’re good to go! Leave your WordPress server running and let’s move on.

Step Six: Install Create React App

Assuming you already have Node and npm installed on your machine, simply run:

```sh
npm install -g create-react-app
```

That’s it! You’re ready to use Create React App.

Step Seven: Create the App

cd into the directory where you’d like to create the frontend (this shouldn’t be, and doesn’t have to be, the same directory as your WordPress installation). Then run:

```sh
create-react-app headless-wp
```

The process will take a few minutes, but once it’s complete you should be able to cd into the newly created headless-wp directory.
From there, run:

```sh
npm start
```

This command fires off a number of things, but all you need to know at the moment is that it boots up a Webpack dev server, and your browser should automatically open to its address. You can leave the server running in your shell; hot reloading will automatically refresh your webpage every time you save a file.

Step Eight: Create Your Component

Since this demo app is very simple, we’ll only be using one component. We could easily create another component (it’s as easy as creating another ComponentName.js file and importing it into its parent component), but instead we’re going to edit our App.js component.

Open up App.js. You can go ahead and delete all the existing code from this file except for the first and last lines. At this point, App.js should look like this:

```jsx
import React, { Component } from 'react';

export default App;
```

Next, create the render() function for this component. This function gets called every time the state changes. If you aren’t sure what this means, have some patience; it’ll make sense soon. App.js should now look like this:

```jsx
import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div>
        <h2>Star Wars Movies</h2>
      </div>
    )
  }
}

export default App;
```

Whatever render() returns is what gets painted on the DOM. If you save this file and go back to your browser, it should automatically reload and you should see the h2 we created.

This is great and all, but what about all that great data we stored in WordPress about the Star Wars movies? Time to get that data!
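Before we do, a quick aside on “render() gets called every time the state changes”. If that feels abstract, here is a tiny plain-JavaScript sketch of the mental model. To be clear, MiniComponent is a made-up class for illustration only; it is not how React is actually implemented (React’s real setState is asynchronous and batched, and re-rendering updates the DOM rather than collecting strings):

```javascript
// A toy illustration of the "state change -> re-render" loop.
// MiniComponent is NOT part of React; it only exists for this example.
class MiniComponent {
  constructor() {
    this.state = { count: 0 };
    this.renders = []; // keep each render's output so we can inspect it
  }

  // Toy setState: merge the partial state in, then immediately re-render.
  setState(partial) {
    this.state = Object.assign({}, this.state, partial);
    this.renders.push(this.render());
  }

  // Like React's render(): a pure description of the UI for the current state.
  render() {
    return `<h2>count: ${this.state.count}</h2>`;
  }
}

const c = new MiniComponent();
c.setState({ count: 1 });
c.setState({ count: 2 });
console.log(c.renders); // one fresh render per state change
```

The takeaway is just the shape of the loop: you never call render() yourself; you change state, and a new render happens as a consequence.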
Update App.js like so:

```jsx
import React, { Component } from 'react';

class App extends Component {
  constructor() {
    super();
    this.state = {
      movies: []
    }
  }

  componentDidMount() {
    let dataURL = ""; // fill in your own WP-API movies endpoint here

    fetch(dataURL)
      .then(res => res.json())
      .then(res => {
        this.setState({
          movies: res
        })
      })
  }

  render() {
    return (
      <div>
        <h2>Star Wars Movies</h2>
      </div>
    )
  }
}

export default App;
```

We just added two new functions to our component: constructor() and componentDidMount().

The constructor() function is where we initialize state. Since we’re only dealing with some JSON about our movies, our state is going to be pretty simple: the initial state is just an empty movies array, since we’re expecting to get back that JSON.

The componentDidMount() function fires after the component mounts. This is the best place to make external API calls, so this is where we’ve added our code using the fetch API to grab all the movies from our WordPress API (be sure to update dataURL to reflect your own URL!). We take the response, parse it as JSON, and push it into our state object.

Once the response gets pushed into our state, the component will re-render by firing the render() function, because the state has changed. But this doesn’t really matter yet, because our render() function still only returns a div with an h2 inside. Let’s fix that.

We’re now going to add a bit of extra code to our render() function that will take the JSON in our state (currently stored in this.state.movies) and map each movie and its data into a div. App.js should now look something like this (the ACF to REST API plugin exposes your Custom Fields under an acf key on each post):

```jsx
import React, { Component } from 'react';

class App extends Component {
  constructor() {
    super();
    this.state = {
      movies: []
    }
  }

  componentDidMount() {
    let dataURL = ""; // fill in your own WP-API movies endpoint here

    fetch(dataURL)
      .then(res => res.json())
      .then(res => {
        this.setState({
          movies: res
        })
      })
  }

  render() {
    let movies = this.state.movies.map((movie, index) => {
      return (
        <div key={index}>
          <h3>{movie.title.rendered}</h3>
          <p>Release Year: {movie.acf.release_year}</p>
          <p>Rating: {movie.acf.rating}</p>
          <p>{movie.acf.description}</p>
        </div>
      )
    });

    return (
      <div>
        <h2>Star Wars Movies</h2>
      </div>
    )
  }
}

export default App;
```

If you save your file, the page will reload, but you still won’t see the Star Wars movie data on the page. That’s because there’s one last thing to add: we’re mapping each of our movies into its own div, and storing all those divs in the movies variable inside our render() function, but we never actually render that variable.
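To see what that map() call is doing with the API data, here is the same transformation in plain JavaScript, run against two hypothetical movie objects shaped like the WP-API response (the ACF to REST API plugin nests your Custom Fields under an acf key; the sample values below are made up for illustration):

```javascript
// Two made-up objects shaped like what a WP-API movies endpoint returns
// when the ACF to REST API plugin is active: core fields like title,
// plus an acf object holding the Custom Fields we created.
const movies = [
  { title: { rendered: "A New Hope" }, acf: { release_year: 1977, rating: 8.6 } },
  { title: { rendered: "The Empire Strikes Back" }, acf: { release_year: 1980, rating: 8.7 } }
];

// The same shape of transformation render() performs, minus the JSX:
// one output entry per movie, pulling data off title and acf.
const rendered = movies.map(movie =>
  `${movie.title.rendered} (${movie.acf.release_year}) - rated ${movie.acf.rating}`
);

console.log(rendered);
// [ 'A New Hope (1977) - rated 8.6',
//   'The Empire Strikes Back (1980) - rated 8.7' ]
```

The JSX version in render() is doing exactly this, except each movie becomes a div of elements instead of a string.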
Now we just need to tell our render() function to return our movies variable by adding {movies} underneath our h2. The finished App.js:

```jsx
import React, { Component } from 'react';

class App extends Component {
  constructor() {
    super();
    this.state = {
      movies: []
    }
  }

  componentDidMount() {
    let dataURL = ""; // fill in your own WP-API movies endpoint here

    fetch(dataURL)
      .then(res => res.json())
      .then(res => {
        this.setState({
          movies: res
        })
      })
  }

  render() {
    let movies = this.state.movies.map((movie, index) => {
      return (
        <div key={index}>
          <h3>{movie.title.rendered}</h3>
          <p>Release Year: {movie.acf.release_year}</p>
          <p>Rating: {movie.acf.rating}</p>
          <p>{movie.acf.description}</p>
        </div>
      )
    });

    return (
      <div>
        <h2>Star Wars Movies</h2>
        {movies}
      </div>
    )
  }
}

export default App;
```

Switch back over to your browser window and you should see the Star Wars data after the page reloads.

Going Further

This is only the beginning of what you can do with the WP-API and React. Both have many other features, and both have huge communities. You can take the WP-API further by learning about authentication and POST requests, custom endpoints, and more complex queries. And as I said earlier, Create React App is made to help you get your feet wet; when you’re ready to go deeper, you can learn about things like Redux, ES6, Webpack, and React Native.

I’ll be covering many of these topics in future posts, so be sure to check back. Or, if you’d prefer to have these posts sent directly to your inbox, shoot me an email and I’ll add you to my mailing list.

Questions? I’m happy to help! Leaving a comment below is the fastest way to get a response (plus, it helps others who have the same problem!). Otherwise, drop me a line on Twitter or shoot me an email and I’ll do what I can to help!
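As a small taste of the “more complex queries” mentioned above: the WP-API accepts standard collection parameters such as per_page, orderby, and order as query-string arguments on endpoints like the movies one we fetched. Here is a minimal sketch of a URL builder you could use with the fetch call. Note that buildMoviesURL and the base URL are made up for this example; only the parameter names are real WP REST API parameters:

```javascript
// Builds a WP-API collection URL with optional query parameters.
// per_page, orderby, and order are genuine WP REST API parameters;
// buildMoviesURL and the example base URL are placeholders.
function buildMoviesURL(base, params = {}) {
  const query = Object.entries(params)
    .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`)
    .join('&');
  return query ? `${base}?${query}` : base;
}

const url = buildMoviesURL('https://your-site.example/wp-json/wp/v2/movies', {
  per_page: 100,   // the WP-API returns 10 items per page by default
  orderby: 'date',
  order: 'asc'
});

console.log(url);
// https://your-site.example/wp-json/wp/v2/movies?per_page=100&orderby=date&order=asc
```

You could pass the result straight into fetch() in componentDidMount() instead of a hard-coded string.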
https://codepen.io/jchiatt/post/headless-wordpress-with-react-complete-tutorial