O'Reilly Book Excerpts: Secure Programming Cookbook for C and C++
Secure Cooking with C and C++, Part 3
Editor's note: We've covered basic data validation techniques and how to evaluate URL encodings in the two previous sample recipes from Secure Programming Cookbook for C and C++. This week, the authors cover how to verify the authenticity of an email address.
Recipe 3.9: Validating Email Addresses
Problem
Your program accepts an email address as input, and you need to verify that the supplied address is valid.
Solution
Scan the email address supplied by the user, and validate it against the lexical rules set forth in RFC 822.
Discussion
RFC 822 defines the syntax for email addresses. Unfortunately, the syntax is complex, and it supports several address formats that are no longer relevant. The fortunate thing is that if anyone attempts to use one of these no-longer-relevant address formats, you can be reasonably certain they are attempting to do something they are not supposed to do.
You can use the following `spc_email_isvalid()` function to check the format of an email address. It performs only a syntactic check and will not actually attempt to verify the authenticity of the address by attempting to deliver mail to it or by performing any DNS lookups on the domain name portion of the address.

The function validates only the actual email address and will not accept any associated data. For example, it will fail to validate "Bob Bobson <bob@bobson.com>", but it will successfully validate "bob@bobson.com". If the supplied email address is syntactically valid, `spc_email_isvalid()` returns 1; otherwise, it returns 0.
TIP: Keep in mind that almost any character is legal in an email address if it is properly quoted, so if you are passing an email address to something that may be sensitive to certain characters or character sequences (such as a command shell), you must be sure to properly escape those characters.
```c
#include <string.h>

int spc_email_isvalid(const char *address) {
  int         count = 0;
  const char  *c, *domain;
  static char *rfc822_specials = "()<>@,;:\\\"[]";

  /* first we validate the name portion (name@domain) */
  for (c = address; *c; c++) {
    if (*c == '\"' && (c == address || *(c - 1) == '.' || *(c - 1) == '\"')) {
      while (*++c) {
        if (*c == '\"') break;
        if (*c == '\\' && (*++c == ' ')) continue;
        if (*c <= ' ' || *c >= 127) return 0;
      }
      if (!*c++) return 0;
      if (*c == '@') break;
      if (*c != '.') return 0;
      continue;
    }
    if (*c == '@') break;
    if (*c <= ' ' || *c >= 127) return 0;
    if (strchr(rfc822_specials, *c)) return 0;
  }
  if (c == address || *(c - 1) == '.') return 0;

  /* next we validate the domain portion (name@domain) */
  if (!*(domain = ++c)) return 0;
  do {
    if (*c == '.') {
      if (c == domain || *(c - 1) == '.') return 0;
      count++;
    }
    if (*c <= ' ' || *c >= 127) return 0;
    if (strchr(rfc822_specials, *c)) return 0;
  } while (*++c);

  return (count >= 1);
}
```
See Also
RFC 822: Standard for the Format of ARPA Internet Text Messages
In this post you’ll learn:
- How to structure a Visual Studio solution that uses React for the front-end and ASP.NET Web API for the back-end
- How to use webpack and npm together with Visual Studio
- How to easily make your applications realtime with Pusher
Before getting started it might be helpful to have a basic understanding of:
- React
- Babel
- webpack
- ASP.NET Web API
- NuGet
- npm
You should also be using Visual Studio 2015 or greater.
In order to demonstrate how to combine the power of React, ASP.NET Web API, and Pusher, we’ll be building a realtime chat application. The chat application itself will be very simple:
Upon loading the application, the user will be prompted for their Twitter username:
… And upon clicking Join, taken to the chat where they can send and receive messages in realtime:
The Visual Studio solution will be comprised of two projects, namely PusherRealtimeChat.WebAPI and PusherRealtimeChat.UI:
PusherRealtimeChat.WebAPI is where we'll implement the ASP.NET Web API server. This simple server will revolve around a route called `/api/messages`, to which clients can `POST` chat messages and from which they can `GET` existing ones. Upon receiving a valid chat message, the server will broadcast it to all connected clients via Pusher.
PusherRealtimeChat.UI is where we’ll implement the React client. This client will subscribe to a Pusher channel for new chat messages and upon receiving one, immediately update the UI.
Implementing the Server
Separating the server and the client into separate projects gives us a clear separation of concerns. This is handy because it allows us to focus on the server and client in isolation.
In Visual Studio, create a new ASP.NET Web Application called PusherRealtimeChat.WebAPI:
When prompted to select a template, choose Empty and check Web API before clicking OK:
If you’re prompted by Visual Studio to configure Azure, click Cancel:
Once the project has been created, in Solution Explorer, right-click the PusherRealtimeChat.WebAPI project, then click Properties. Under the Web tab, set Start Action to Don’t open a page. Wait for request from an external application:
Setting this option does what you might expect – it tells Visual Studio to not open a web page in the default browser when you start the server. This is a lesser-known option that proves to be convenient when working with ASP.NET Web API projects, as ASP.NET Web API projects have no user interface.
Now that the PusherRealtimeChat.WebAPI project has been set up we can start to implement some code! A good place to start is by creating a `ChatMessage.cs` model inside the `Models` directory:
```csharp
using System.ComponentModel.DataAnnotations;

namespace PusherRealtimeChat.WebAPI.Models
{
    public class ChatMessage
    {
        [Required]
        public string Text { get; set; }

        [Required]
        public string AuthorTwitterHandle { get; set; }
    }
}
```
Note: If you’re following along and at any point you’re not sure where a code file belongs, check out the source code on GitHub.
The above model represents a chat message and we'll be using it in the next step to define controller actions for the `/api/messages` route. The `Required` attributes make it easy to validate the model from said controller actions.
Next, we’ll define controller actions for the
/api/messages route I mentioned. To do that, create a new controller called
MessagesController.cs inside the
Controllers directory:
```csharp
using PusherRealtimeChat.WebAPI.Models;
using PusherServer;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Web.Http;

namespace PusherRealtimeChat.WebAPI.Controllers
{
    public class MessagesController : ApiController
    {
        private static List<ChatMessage> messages = new List<ChatMessage>()
        {
            new ChatMessage { AuthorTwitterHandle = "Pusher", Text = "Hi there! ?" },
            new ChatMessage { AuthorTwitterHandle = "Pusher", Text = "Welcome to your chat app" }
        };

        public HttpResponseMessage Get()
        {
            return Request.CreateResponse(HttpStatusCode.OK, messages);
        }

        public HttpResponseMessage Post(ChatMessage message)
        {
            if (message == null || !ModelState.IsValid)
            {
                return Request.CreateErrorResponse(
                    HttpStatusCode.BadRequest, "Invalid input");
            }

            messages.Add(message);
            return Request.CreateResponse(HttpStatusCode.Created);
        }
    }
}
```
Note: Remember to import `PusherServer`.
As you can see, this controller is very simple and has just two principal members: `Post` and `Get`.

`Post` is called with an instance of the `ChatMessage` model whenever a `POST` request is sent to `/api/messages`. It validates the model using `ModelState.IsValid` (remember those `Required` attributes?) before storing the incoming message in the `messages` list.

`Get` is even simpler – it's called whenever a `GET` request is sent to `/api/messages` and it returns the `messages` list as JSON.
Making the server realtime
As it stands, the server can accept and send messages via `POST` and `GET` requests respectively. This is a solid starting point but ideally, clients should be immediately updated when new messages become available (i.e. updated in realtime).

With the current implementation, one possible way we could achieve this is by periodically sending a `GET` request to `/api/messages` from the client. This technique is known as short polling and whilst it's simple, it's also really inefficient. A much more efficient solution to this problem is to use WebSockets, and when you use Pusher, the code is equally simple.
If you haven’t already, head over to the Pusher dashboard and create a new Pusher application:
Take a note of your Pusher application keys (or just keep the Pusher dashboard open in another window) and return to Visual Studio.
In Visual Studio, click Tools | NuGet Package Manager | Package Manager Console, then install `PusherServer` with the following command:

```
Install-Package PusherServer
```
Once `PusherServer` has finished installing, head back to the `MessagesController.cs` controller we defined earlier and replace the `Post` method with:
```csharp
public HttpResponseMessage Post(ChatMessage message)
{
    if (message == null || !ModelState.IsValid)
    {
        return Request.CreateErrorResponse(
            HttpStatusCode.BadRequest, "Invalid input");
    }

    messages.Add(message);

    var pusher = new Pusher(
        "YOUR APP ID",
        "YOUR APP KEY",
        "YOUR APP SECRET",
        new PusherOptions { Cluster = "YOUR CLUSTER" });

    pusher.Trigger(
        channelName: "messages",
        eventName: "new_message",
        data: new
        {
            AuthorTwitterHandle = message.AuthorTwitterHandle,
            Text = message.Text
        });

    return Request.CreateResponse(HttpStatusCode.Created);
}
```
As you can see, when you use Pusher, you don't have to do a whole lot to make the server realtime. All we had to do was instantiate `Pusher` with our application details before calling `pusher.Trigger` to broadcast the inbound chat message. When the time comes to implement the React client, we'll subscribe to the `messages` channel for new messages.
CORS
We’re almost ready to build the client but before we do, we must first enable cross-origin resource sharing (CORS) in ASP.NET Web API.
In a nutshell, the PusherRealtimeChat.WebAPI and PusherRealtimeChat.UI projects will run on separate port numbers and therefore have different origins. In order to make a request from PusherRealtimeChat.UI to PusherRealtimeChat.WebAPI, a cross-origin HTTP request must take place. This is noteworthy because web browsers disallow cross-origin requests unless CORS is enabled on the server.
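Concretely, the browser tags such a request with an `Origin` header, and only accepts the response if the server echoes permission back. A sketch of the exchange (the port numbers here are illustrative, not from the tutorial):

```
GET /api/messages HTTP/1.1
Host: localhost:50000
Origin: http://localhost:50001

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Type: application/json
```

Without that `Access-Control-Allow-Origin` response header, the browser discards the response and the client sees a cross-origin error.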
To enable CORS in ASP.NET Web API, it's recommended that you use the `Microsoft.AspNet.WebApi.Cors` NuGet package.
Just like we did with the `PusherServer` NuGet package, to install `Microsoft.AspNet.WebApi.Cors`, click Tools | NuGet Package Manager | Package Manager Console, then run:

```
Install-Package Microsoft.AspNet.WebApi.Cors
```
Once `Microsoft.AspNet.WebApi.Cors` has finished installing, you'll need to enable it by going to `App_Start/WebApiConfig.cs` and calling `config.EnableCors()` from the `Register` method, like this:
```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

namespace PusherRealtimeChat.WebAPI
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Web API configuration and services
            config.EnableCors();

            // Web API routes
            config.MapHttpAttributeRoutes();

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}
```
You’ll also need to decorate the
MessagesController.cs controller with the
EnableCors attribute (remember to import
System.Web.Http.Cors!):
```csharp
using System.Web.Http.Cors;

namespace PusherRealtimeChat.WebAPI.Controllers
{
    [EnableCors("*", "*", "*")]
    public class MessagesController : ApiController
    {
        ...
    }
}
```
And that’s it! You won’t be able to observe the impact of this change right now, but know that it’ll save us from cross-origin errors later down the road.
Implementing the Client
As I mentioned in the overview, the client code will reside in its own project called PusherRealtimeChat.UI. Let's create that project now.
In Solution Explorer, right-click the PusherRealtimeChat solution, then go to Add | New Project. You should be presented with the Add New Project window. Choose ASP.NET Web Application and call it PusherRealtimeChat.UI:
When prompted again to choose a template, choose Empty before clicking OK:
Note: There’s no need to check the Web API check box this time.
Again, if you’re prompted by Visual Studio to configure Azure, click Cancel:
Once the PusherRealtimeChat.UI project has been created, the first thing we'll want to do is declare all the front-end `dependencies` and `devDependencies` we anticipate needing. To do that, create an npm configuration file called `package.json` in the root of the PusherRealtimeChat.UI project:
```json
{
  "version": "1.0.0",
  "name": "ASP.NET",
  "private": true,
  "devDependencies": {
    "webpack": "1.13.1",
    "babel": "6.5.2",
    "babel-preset-es2015": "6.9.0",
    "babel-preset-react": "6.11.1",
    "babel-loader": "6.2.4"
  },
  "dependencies": {
    "react": "15.2.1",
    "react-dom": "15.2.1",
    "axios": "0.13.1",
    "pusher-js": "3.1.0"
  }
}
```
Upon saving the above `package.json` file, Visual Studio will automatically download the dependencies into a local `node_modules` directory, via npm:
I expect that the `react`, `react-dom`, `webpack`, and `babel-*` dependencies are already familiar to you, as they're commonly used with React. `axios` is a modern HTTP client and `pusher-js` is the Pusher client library we'll be using to subscribe for new messages.
Once the aforementioned modules have finished installing, we can set up Babel and webpack to transpile our source code.
Transpilation
Because modern web browsers don't yet understand JavaScript modules or JSX, we must first transpile our source code before distributing it. To do that, we'll use webpack in conjunction with the `babel-loader` webpack loader.
At the core of any webpack build is a `webpack.config.js` file. We'll put ours alongside `package.json` in the root of the PusherRealtimeChat.UI project:
```javascript
"use strict";

module.exports = {
  entry: "./index.js",
  output: {
    filename: "bundle.js"
  },
  module: {
    loaders: [
      {
        test: /\.js$/,
        loader: "babel-loader",
        exclude: /node_modules/,
        query: {
          presets: ["es2015", "react"]
        }
      }
    ]
  }
};
```
I shan’t belabour the
webpack.config.js configuration file but suffice to say, it directs WebPack to look at the
index.js file and to transpile its contents using
babel-loader, and to output the result to a file called
bundle.js.
This is all well and good, but how do we run webpack from Visual Studio?
First of all, you’ll want to define an npm
script in
package.json that runs
webpack with the
webpack.config.js configuration file we just created:
```json
{
  "version": "1.0.0",
  "name": "ASP.NET",
  "private": "true",
  "devDependencies": { ... },
  "dependencies": { ... },
  "scripts": {
    "build": "webpack --config webpack.config.js"
  }
}
```
Then, to actually run the above `script` from within Visual Studio, I recommend using the npm Task Runner Visual Studio extension by @mkristensen:
If you haven’t already, install the extension, then go to Tools | Task Runner Explorer to open it.
Note: You can also load the extension by searching for “Task Runner Explorer” in Quick Launch. Also Note: You’ll need to restart Visual Studio before npm scripts will appear in Task Runner Explorer.
Inside Task Runner Explorer, you should see the custom `build` script we added:
There isn’t much use in running the
build script quite yet, as there’s nothing to build. That being said, for future reference, to run the script you just need to double click it.
Rather than running the script manually every time we update the client code, it would be better to automatically run the script whenever we run the Visual Studio solution. To make that happen, right-click the `build` script, then go to Bindings and check After Build:
Now, whenever we run the PusherRealtimeChat.UI project, the `build` script will be run automatically – nice!
One more thing we could do to make development easier going forward is to treat both the PusherRealtimeChat.WebAPI and PusherRealtimeChat.UI projects as one, so that when we press Run, both projects start.
Setting Multiple Start Up Projects
To set up multiple startup projects, in Solution Explorer, right-click the PusherRealtimeChat solution, then click Properties. In the Properties window, go to Common Properties | Startup Projects, then click the Multiple startup projects radio button. Finally, set the Action for both PusherRealtimeChat.UI and PusherRealtimeChat.WebAPI to Start:
Now, when you press Run, both projects will start. This makes perfect sense for this project because it’s rare that you would want to run the server but not the client and vice versa.
That is more or less it in terms of setting up our build tools. Let's move on and implement some code… at last!
Implementing the Client
To begin with, create an `index.html` file in the PusherRealtimeChat.UI project root:
```html
<!DOCTYPE html>
<html>
<head>
  <title>Pusher Realtime Chat</title>
  <meta charset="utf-8" />
</head>
<body>
  <div class="container" id="app"></div>
  <script src="./bundle.js"></script>
</body>
</html>
```
Note: The `index.html` file on GitHub will look a bit different due to the fact that I applied styles to the final code but do not mention styles in this post.
There isn’t much to note here except that we reference
bundle.js, which is the file output by WebPack.
bundle.js won’t exist at the moment because there’s no code to build. We’ll implement some code in just a moment but first, let’s take a step back and try to get a feeling for the structure of the client application.
React popularized the idea of breaking your UI into a hierarchy of components. This approach has many benefits, one of which is that it makes it easy to see an overview of the application, here’s ours:
Notice how I make a distinction between containers and presentational components. You can read more about the distinction here in an article by @dan_abramov but in a nutshell, container components fetch data and store state whereas presentational components concern themselves only with presentation. I won't be explaining the presentational components in this article, as they simply render content – all the noteworthy stuff happens inside the `App` container!
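If the diagram is hard to picture, the split can be illustrated without React at all. In this sketch (illustrative names, not the tutorial's code), plain functions stand in for components: the presentational pieces are pure functions of their inputs, and the container owns the state:

```javascript
// Presentational: pure functions of their props – no state, no fetching.
// (Strings stand in for rendered markup in this framework-free sketch.)
const chatMessage = ({ username, text }) => `@${username}: ${text}`;

const chatMessageList = ({ messages }) =>
  messages.map(chatMessage).join("\n");

// Container: owns the state and decides what the presentational
// pieces receive; it "re-renders" whenever the state changes.
function makeAppContainer(render) {
  const state = { messages: [] };
  return {
    addMessage(message) {
      state.messages.push(message);
      render(chatMessageList({ messages: state.messages }));
    }
  };
}
```

Swap the string-building functions for JSX and the hand-rolled state for `this.setState`, and you have the shape of the real client below.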
For production applications, it's recommended that you separate your components into separate files. For the purposes of this tutorial, however, I'm going to present the code in a single file called `index.js`:
```javascript
import React from "react";
import ReactDOM from "react-dom";
import axios from "axios";
import Pusher from "pusher-js";

const ChatMessage = ({ username, message }) => (
  <li>
    <img src={`${username}/profile_image?size=original`}
         style={{ width: 24, height: 24 }} />
    <strong>@{username}: </strong>
    {message}
  </li>
);

const ChatMessageList = ({ messages }) => (
  <ul>
    {messages.map((message, index) =>
      <ChatMessage
        key={index}
        message={message.Text}
        username={message.AuthorTwitterHandle} />
    )}
  </ul>
);

const Chat = ({ onSubmit, messages }) => (
  <div>
    <ChatMessageList messages={messages} />
    <ChatInputForm onSubmit={onSubmit} />
  </div>
);

// The Welcome and ChatInputForm presentational components and the App
// container are walked through below; see the full index.js on GitHub.

ReactDOM.render(<App />, document.getElementById("app"));
```
Note: Remember to update `YOUR APP KEY` and `baseUrl`. `baseUrl` should point to your server's address.
Like I mentioned previously, I won't be explaining the presentational components in this post, but I will be explaining the `App` container component.
When the `App` container is first loaded, the `getInitialState` lifecycle method is called:
```javascript
getInitialState() {
  return {
    authorTwitterHandle: "",
    messages: []
  };
}
```
`getInitialState` quickly returns an object that describes the initial state of the application – the `render` method is run almost immediately afterwards:
```javascript
render() {
  if (this.state.authorTwitterHandle === '') {
    return (
      <Welcome onSubmit={
        author => this.setState({ authorTwitterHandle: author })
      } />
    );
  } else {
    return <Chat messages={this.state.messages} onSubmit={this.sendMessage} />;
  }
}
```
The `render` function first looks at `this.state.authorTwitterHandle` to determine whether or not the user has provided their Twitter handle yet. If they have not, the `Welcome` component is rendered; otherwise, the `Chat` component is rendered.
Notice how we pass an `onSubmit` property to the `Welcome` component. This allows us to update the state and re-render the `App` container when the `Welcome` component's form is submitted. Similarly, we pass `this.state.messages` and another `onSubmit` function to the `Chat` component. These properties allow the `Chat` component to render and submit messages respectively.
After `render`, `componentDidMount` is called:

```javascript
componentDidMount() {
  axios.get(`${baseUrl}/api/messages`)
    .then(response => {
      this.setState({ messages: response.data });

      const pusher = new Pusher("YOUR APP KEY");
      const channel = pusher.subscribe("messages");
      channel.bind("new_message", message => {
        this.setState({ messages: this.state.messages.concat(message) });
      });
    });
},
```
`componentDidMount` first makes an asynchronous `GET` request to `/api/messages`.
When the asynchronous `GET` request to `/api/messages` has finished, we update the container's state using `this.setState` before initializing `Pusher` and subscribing for new `new_message` events on the `messages` channel. Remember how we programmed the server to broadcast messages via the `messages` channel? This is where we subscribe to them. As new messages trickle in, the state, and by extension the UI, is updated in realtime.
Conclusion
Using React with ASP.NET Web API is not only possible, it's easy!
I appreciate that a tutorial with so many moving parts might be hard to follow, so I took the liberty of putting the code on GitHub.
If you have any comments or questions, please feel free to leave a comment below or message me on Twitter (I'm @bookercodes!).
.
Hi Folks,
We have one more upcoming pre-release before we're done with SQL Server 2008, and while I've posted a few articles about the coordinate order swap, there's another exciting piece we're releasing: our builder API.
Let's say you wanted to write a very simple method that shifted a geometry instance in space. You could write this today, but it would require writing code to digest one of the formats we support (WKT, WKB, or GML), perform the operation, and then recreate one of these data formats. When the operation is so simple, the bulk of the effort needed will be in moving between formats.
This seems pretty silly, given the fact that we certainly have code that already digests these instances programmatically. Why don't we give you access?
Now we do.
If you scroll down, you'll see a lot of C# code. The solution we have can only be accessed directly through the CLR, but since the CLR is hosted in SQL Server, functionality developed can be deployed to the server and used through T-SQL. This new functionality, which will be released in RC0, has three key parts: the sink interfaces (such as IGeometrySink), the builder classes (such as SqlGeometryBuilder), and the Populate methods that replay an existing instance into a sink.
So, let's take a look at a working example that solves our shifting problem. The main part is to write our own implementation of IGeometrySink that will perform the shifting. Our constructor will take the amount of shift we want in the x and y directions, as well as a target IGeometrySink instance. The implementations of each of the IGeometrySink methods will simply pass the calls through to the target, except that every point passed through will be shifted first.
```csharp
using Microsoft.SqlServer.Types;

namespace Tools
{
    class GeometryShifter : IGeometrySink
    {
        IGeometrySink _target;
        double _xShift;
        double _yShift;

        public GeometryShifter(double xShift, double yShift, IGeometrySink target)
        {
            _target = target;
            _xShift = xShift;
            _yShift = yShift;
        }

        // Each AddLine call will just move the endpoint by the required amount.
        public void AddLine(double x, double y, double? z, double? m)
        {
            _target.AddLine(x + _xShift, y + _yShift, z, m);
        }

        // Each BeginFigure call will just move the start point by the required amount.
        public void BeginFigure(double x, double y, double? z, double? m)
        {
            _target.BeginFigure(x + _xShift, y + _yShift, z, m);
        }

        // Just pass through without change.
        public void BeginGeometry(OpenGisGeometryType type)
        {
            _target.BeginGeometry(type);
        }

        // Just pass through without change.
        public void EndFigure()
        {
            _target.EndFigure();
        }

        // Just pass through without change.
        public void EndGeometry()
        {
            _target.EndGeometry();
        }

        // Just pass through without change.
        public void SetSrid(int srid)
        {
            _target.SetSrid(srid);
        }
    }
}
```
To use this in SQL Server, we need to wrap it in a function that can be registered and used in the server. To do this, we create a function that puts the pipeline together and runs it.
```csharp
using Microsoft.SqlServer.Types;

namespace Tools
{
    public partial class Functions
    {
        public static SqlGeometry ShiftGeometry(SqlGeometry g, double xShift, double yShift)
        {
            SqlGeometryBuilder constructed = new SqlGeometryBuilder();
            GeometryShifter shifter = new GeometryShifter(xShift, yShift, constructed);
            g.Populate(shifter);
            return constructed.ConstructedGeometry;
        }
    }
}
```
Because of SQL Server's CLR support, we can build this to Tools.dll and load it into the server. After that, we can register the function and use it directly from T-SQL:
```sql
create assembly Tools from 'c:\tmp\Tools.dll'
go
create function ShiftGeometry(@g geometry, @x float, @y float)
returns geometry
as external name Tools.[Tools.Functions].ShiftGeometry
go
declare @g geometry = 'LINESTRING (0 0, 1 1, 12 3, 4 0)'
select @g.ToString()
union all
select dbo.ShiftGeometry(@g, 10, 10).ToString()
go
```
This yields:
```
LINESTRING (0 0, 1 1, 12 3, 4 0)
LINESTRING (10 10, 11 11, 22 13, 14 10)
```
This is a pretty simple example, but it illustrates the power of the approach. Now you can build functions like this without all of the parsing glue.
Our spatial implementation makes use of this kind of API internally, and it was this internal use that inspired us to release it to users of the system as well. We see many cases in which new spatial operations can be added to the server through this API, and we hope that you can find some as well. (Looking at the Spatial forum, we can find many examples of problems that can be tackled with this approach, and this list is nowhere near exhaustive.)
Cheers,
-Isaac
Published Friday, May 30, 2008 4:42 PM by isaac
This is great! Geoffrey Emery and I gave a spatial presentation at the MS SQL 2008 Fire Starter event in Irvine, CA. We had a lot of examples viewing spatial data in Virtual Earth. We did this by having a web handler grab the GML output from SQL 2008 and converting it to a GeoRSS feed, which Virtual Earth reads natively.
Using the builder API we could now create a routine to output GeoRSS straight from SQL 2008.
Great job!
Matt Penner
Hummm. I'm not sure I like this particular part of the interface very much:
```csharp
public void AddLine(double x, double y, double? z, double? m) {
    throw new NotImplementedException();
}
```
X and Y aren't nullable, but Z and M are. The Z and M make sense as they are user-defined. But consider that you may be building against an invalid geometry that has null points in a collection.
While this works for many of the use cases, that would break and I can't see a good reason to let it.
Feedback?
Kent Tegels
@Kent Tegels
There is a way to specify empty points!
```csharp
...
BeginGeometry(OpenGisGeometryType.Point)
// note: no begin/end figure calls
EndGeometry()
```
In general, to figure out the call sequence for any possible geometry, use the SqlGeometry.Populate method with an IGeometrySink implementation that just logs the calls.
Marko Tintor
Hi, I am interested in using the Builder API you describe in a .NET project.
Seems like a basic question but where is the Microsoft.SqlServer.Types located so I can add it to the my project?
I am using VS2005 but have installed SQL Server 2008
lester
Hi,
I have a geometry of type SqlGeometry and then I convert it into varbinary.
I then use a BinaryReader to read the bytes.
```csharp
BinaryReader r = new BinaryReader(new MemoryStream(byteGeomIn));
```
Then I use SqlGeometry's Read method to read the binary:

```csharp
sqlGeom.Read(r);
```
It should be working properly, but I am getting an error at the Read statement. It says Invalid Format or Spatial Reference Id should be between 0-9999. But while creating the geometry type in SQL, I have given the SRID as 0. I don't understand what the problem could be.
Swat | http://blogs.msdn.com/isaac/archive/2008/05/30/our-upcoming-builder-api.aspx | crawl-002 | refinedweb | 1,179 | 54.52 |
Merge lp:~futatuki/mailman/i18n-add-whence-to-adminack-templates into lp:mailman/2.1
Commit message
i18nize whence candidate message strings
Description of the change
This is a solution to the problem left in mp+347992.
To mark whence messages for pygettext, I introduced a dummy function i18n.D_(s) that returns s itself, and then added the "-k D_" option to the pygettext command line in messages/. This costs an extra function call at run time, in exchange for maintainability.

I've picked up whence candidate message strings except from bin/add_member, bin/delete_member, and cron/disabled, and tested them with a ja translation (temporary, for testing, and not included in this proposal).
Note:

* whence messages for user confirmation of subscribe and unsubscribe are inconsistent. For subscribe they are 'via email confirmation' and 'via web confirmation', but for unsubscribe they are 'email confirmation' and 'web confirmation'.
* admin approval for unsubscription doesn't report whence because ListAdmin.
I'm going to merge this although somewhat differently. I like what you have done and it is conceptually similar to what we do in Mailman 3. There, we have augmented i18n._ with a context manager to not do the translation and then we can do things like:
```python
with _.defer_translation():
    msgdata.setdefault('moderation_reasons', []).append(
        _('Message contains administrivia'))
    # This will be translated at the point of use.
```
to mark messages without translating them. However, there are already many places in Mailman 2.1 where we accomplish a similar thing by doing things like:
def _(s):
return s
_('string to translate later')
_ = i18n._
In fact, for example, in Mailman/Bouncer.py we already have
```python
def _(s): return s

REASONS = {MemberAdaptor.BYBOUNCE: _('due to excessive bounces'),
           MemberAdaptor.BYUSER: _('by yourself'),
           MemberAdaptor.BYADMIN: _('by the list administrator'),
           MemberAdaptor.UNKNOWN: _('for unknown reasons'),
           }

_ = i18n._
```
So, to be more consistent with what is done elsewhere in the code, rather than defining D_() in i18n, I'll define it in the places it's needed and then just do things like:

```python
_ = D_
_('string to translate later')
_ = i18n._
```
Your way is actually better, but in this case, I think it's more important to maintain consistency.
Version 0.16.0 released
08 January 2017 The Nim Team
We’re happy to announce that the latest release of Nim, version 0.16.0, is now available!
As always, you can grab the latest version from the downloads page.
This release includes over 80 bug fixes and improvements. To see a full list of changes, take a look at the detailed changelog below.
Some of the most significant changes in this release include: a major new Nimble release, an improved import syntax, and the stabilisation of name mangling rules enabling faster compile times.
The new Nimble release that is included with Nim 0.16.0 includes a variety of new features and bug fixes. The most prominent of which is the improved output system, as shown in the figure below.
For a full list of changes in Nimble, see its changelog.
The new import syntax makes it easier to import multiple modules from the same package or directory. For example:
import compiler/ast, compiler/parser, compiler/lexer import compiler / [ast, parser, lexer]
The two are equivalent, but the new latter syntax is less redundant.
Finally, the code responsible for name mangling in the generated C and C++ code has been improved to reduce compile times. In particular, compile-time for the common edit-compile-run cycles have been reduced.
Changelog
Changes affecting backwards compatibility
staticExecnow uses the directory of the nim file that contains the
staticExeccall as the current working directory.
TimeInfo.tznamehas been removed from
timesmodule because it was broken. Because of this, the option
"ZZZ"will no longer work in format strings for formatting and parsing.
Library Additions
- Added new parameter to
errorproc of
macromodule to provide better error message
Added new
dequesmodule intended to replace
queues.
dequesprovides a superset of
queuesAPI with clear naming.
queuesmodule is now deprecated and will be removed in the future.
Added
hideCursor,
showCursor,
terminalWidth,
terminalWidthIoctland
terminalSizeto the
terminal(doc) module.
- Added new module
distros(doc) that can be used in Nimble packages to aid in supporting the OS’s native package managers.
Tool Additions
Compiler Additions
- The C/C++ code generator has been rewritten to use stable name mangling rules. This means that compile times for edit-compile-run cycles are much reduced.
Language Additions
The
emitpragma now takes a list of Nim expressions instead of a single string literal. This list can easily contain non-strings like template parameters. This means
emitworks out of the box with templates and no new quoting rules needed to be introduced. The old way with backtick quoting is still supported but will be deprecated.
type Vector* {.importcpp: "std::vector", header: "<vector>".}[T] = object template `[]=`*[T](v: var Vector[T], key: int, val: T) = {.emit: [v, "[", key, "] = ", val, ";"].} proc setLen*[T](v: var Vector[T]; size: int) {.importcpp: "resize", nodecl.} proc `[]`*[T](v: var Vector[T], key: int): T {.importcpp: "(#[#])", nodecl.} proc main = var v: Vector[float] v.setLen 1 v[0] = 6.0 echo v[0]
The
importstatement now supports importing multiple modules from the same directory:
import compiler / [ast, parser, lexer]
Is a shortcut for:
import compiler / ast, compiler / parser, compiler / lexer
Bugfixes
The list below has been generated based on the commits in Nim’s git repository. As such it lists only the issues which have been closed via a commit, for a full list see this link on Github.
- Fixed “staticRead and staticExec have different working directories” (#4871)
- Fixed “CountTable doesn’t support the ‘==’ operator” (#4901)
- Fixed “documentation for module sequtls apply proc” (#4386)
- Fixed “Operator
==for CountTable does not work.” (#4946)
- Fixed “sysFatal (IndexError) with parseUri and the / operator” (#4959)
- Fixed “initialSize parameter does not work in OrderedTableRef” (#4940)
- Fixed “error proc from macro library could have a node parameter” (#4915)
- Fixed “Segfault when comparing OrderedTableRef with nil” (#4974)
- Fixed “Bad codegen when comparing isNil results” (#4975)
- Fixed “OrderedTable cannot delete entry with empty string or 0 key” (#5035)
- Fixed “Deleting specific keys from ordered table leaves it in invalid state.” (#5057)
- Fixed “Paths are converted to lowercase on Windows” (#5076)
- Fixed “toTime(getGMTime(…)) doesn’t work correctly when local timezone is not UTC” (#5065)
- Fixed “out of memory error from
test=type proc call when parameter is a call to a table’s
[]proc” (#5079)
- Fixed “Incorrect field order in object construction” (#5055)
- Fixed “Incorrect codegen when importing nre with C++ backend (commit 8494338)” (#5081)
- Fixed “Templates, {.emit.}, and backtick interpolation do not work together” (#4730)
- Fixed “Regression: getType fails in certain cases” (#5129)
- Fixed “CreateThread doesn’t accept functions with generics” (#43)
- Fixed “No instantiation information when template has error” (#4308)
- Fixed “realloc leaks” (#4818)
- Fixed “Regression: getType” (#5131)
- Fixed “Code generation for generics broken by sighashes” (#5135)
- Fixed “Regression: importc functions are not declared in generated C code” (#5136)
- Fixed “Calling split(“”) on string hangs program” (#5119)
- Fixed “Building dynamic library: undefined references (Linux)” (#4775)
- Fixed “Bad codegen for distinct + importc - sighashes regression” (#5137)
- Fixed “C++ codegen regression: memset called on a result variable of
importcpptype” (#5140)
- Fixed “C++ codegen regression: using channels leads to broken C++ code” (#5142)
- Fixed “Ambiguous call when overloading var and non-var with generic type” (#4519)
- Fixed “[Debian]: build.sh error: unknown processor: aarch64” (#2147)
- Fixed “RFC: asyncdispatch.poll behaviour” (#5155)
- Fixed “Can’t access enum members through alias (possible sighashes regression)” (#5148)
- Fixed “Type, declared in generic proc body, leads to incorrect codegen (sighashes regression)” (#5147)
- Fixed “Compiler SIGSEGV when mixing method and proc” (#5161)
- Fixed “Compile-time SIGSEGV when declaring .importcpp method with return value “ (#3848)
- Fixed “Variable declaration incorrectly parsed” (#2050)
- Fixed “Invalid C code when naming a object member “linux”” (#5171)
- Fixed “[Windows] MinGW within Nim install is missing libraries” (#2723)
- Fixed “async: annoying warning for future.finished” (#4948)
- Fixed “new import syntax doesn’t work?” (#5185)
- Fixed “Fixes #1994” (#4874)
- Fixed “Can’t tell return value of programs with staticExec” (#1994)
- Fixed “startProcess() on Windows with poInteractive: Second call fails (“Alle Pipeinstanzen sind ausgelastet”)” (#5179) | https://nim-lang.org/blog/2017/01/08/version-0160-released.html | CC-MAIN-2017-47 | refinedweb | 990 | 51.18 |
- September 3, 2001 Camp set to begin without last year s top player By JEFF GOODMAN Faceoff.com correspondent With training camp rapidly approaching, the BostonSep 3, 2001 1 of 1View SourceSeptember 3, 2001
Camp set to begin without last year's top player
By JEFF GOODMAN
Faceoff.com correspondent
With training camp rapidly approaching, the Boston Bruins have been
able to make sure every player on the club will be in attendance
except for one: Jason Allison.
Boston general manager Mike OÂ'Connell re-signed defenseman Jarno
Kultanen and forward Mikko Eloranta this past week, leaving Allison,
the teamÂ's captain and leading scorer, as the only player unhappy
with his current deal.
Eloranta signed a one-year deal after a fairly disappointing season
in which he spent much of it in former Bruins coach Mike KeenanÂ's
doghouse. The Finland native scored 12 goals and added 11 assists in
62 games last season, his second in the NHL. The 29-year-old will
offer depth and speed up front.
Kultanen, 28, was the Bruins eighth pick in the 2000 draft after
playing five seasons in the Finnish Elite League. He signed a two-
year deal after scoring two goals and adding eight assists in 62
games last year.
"We are pleased to have both of these players returning to our team,"
said O'Connell. "Jarno proved to be a steady defenseman in just his
first year in this league and we feel he will continue to improve.
Mikko has been a solid contributor who has made the most of his ice
time and he brings a valuable versatility to the club."
Now that OÂ'Connell has been able to tie up nearly every loose end
contract-wise, he can turn nearly all of his focus to the Allison
matter. The center seems primed to wait the situation out, and
OÂ'Connell may end up doing the same if he canÂ't get a quality trade
in return.
Allison is coming off a career-year and seemed to click on the number
one line with Bill Guerin and Sergei Samsonov. If Allison holds out,
it could force new coach Robbie Ftorek to move Joe Thornton up to
center the top line, which would hurt the depth on the second and
third lines.
There has been little to no progress on the Allison front of late.
The 26-year-old is coming off a 36-goal campaign in which he led the
Bs with 59 assists and 95 points.
Â"IÂ'd like to (get it done before the season starts), because I think
we have a good team, and with Jason  or whatever players we might
get in return (in a trade) Â weÂ'd be a better team,Â" OÂ'Connell told
the Boston Herald. Â"But IÂ'm not going to give him away just for the
sake of getting a deal. IÂ'm going to take my team and see whatÂ's
there.Â"
Whatever ends up happening, the Allison situation could end up being
a major distraction in the clubÂ's quest to get back into the post-
season.
OÂ'Connell has done a good job in bringing all the key players back Â
except for veteran defenseman Eric Weinrich and fellow blue-liner
Darren Van Impe  and has added hard-nosed veterans Martin Lapointe,
Rob Zamuner and Scott Pellerin up front  and Sean OÂ'Donnell on the
defense.
If the Bruins can somehow manage to get Allison in the fold or get a
quality playmaker in return, they should be much better than a year
ago, when they were just squeezed out of making the playoffs by
virtue of a tiebreaker with Carolina.
Notes: Training camp will open on Sept. 11 at the Ristuccia Memorial
Arena in Wilmington, Mass. New faces that have the best shot at
making this yearÂ's club include former Michigan sniper Andy Hilbert
and former Boston College blue-liner Bobby Allen. Â Former Bruins
defenseman Al Iafrate, who retired in 1998 because of knee injuries,
may be making a comeback with the Carolina Hurricanes.
Your message has been successfully submitted and would be delivered to recipients shortly. | http://groups.yahoo.com/neo/groups/Islanders-SoundTigers/conversations/topics/10105?l=1 | CC-MAIN-2013-48 | refinedweb | 701 | 65.15 |
how to return null from a constructor
say, i am trying to do something like:

class something
{
    something(streamreader)
    {
        if (wrong format or end of file)
            caller receives null as result
        else
            build object from what has been read in file
    }
}

i tried 'return null', but it says it cannot do that since a constructor is 'void', so i cannot place a return statement. i'm not sure something like this.dispose would return a null either, since the 'dispose' may be asynchronous (or something)

i have the next alternative:

class something
{
    something(string)
    {
        // build object from string; check if string == null or wrong format before calling
    }

    static bool checkformat(string)
    {
        // check to see if the string is adequate to build a something object
    }
}

but if i use this, that would mean that i have to parse the string twice, once to check the format and once to build it, so i'm not too in favour of it. how do i return null from a constructor if the params are not adequate?
but if i use this, that would mean that i have to parse the string twice, onece to check format and once to build it, so im not too in favour of it, how do i return null from a cosntructor if params are not adecuate?Thursday, November 30, 2006 9:07 PM
All replies
a constructor doesn't return anything. Its only job in life is to construct the object. That's all. Nothing else.

you can only return a value if the method has a return type (non-void). also, you can return null for string or other reference return types, but not for value types such as integers
Thursday, November 30, 2006 9:16 PM
-
- hmm.. i think i didn't make myself clear. a constructor, whether it is called a constructor or not, is a function; it returns something, actually a pointer to an object of the type of the class in which it was declared. a variable declared as some kind of object, or whatever inherits from the 'object' class, can point to that type of object or have a null value. now, if i call a function and that function returns null for an object, that's a valid value. i am asking how to do that in a constructor. for example, if i was talking about plain C and i called alloc or malloc or whatever, it might return the pointer to the allocated space or it might return NULL if it was not possible to allocate the space. i want to return the object if it was created successfully, or null if the parameters passed to the constructor function (which is an ordinary function in most senses, declared to return an object of some class) are not adequate
Thursday, November 30, 2006 9:30 PM
- it seems that's what i must do, but as stated above, i was trying to avoid that, since i would have to check for validity twice. i'm not happy about it though
Thursday, November 30, 2006 9:32 PM
- What about making your constructor throw an exception when it isn't created properly, and then using try and catch statements in the function that's calling the constructor?
Friday, December 01, 2006 4:23 AM
- it wouldn't be recommended and it's bad practice. you should really validate your inputs/objects before creating the class object
Friday, December 01, 2006 12:38 PM
Also note that it is NOT the constructor that is returning the constructed object, but the new operator.
I'm curious - why is it bad practice to throw an exception in a constructor? I know this is frowned upon in C++ - is the same true for C#, and why?
Friday, December 01, 2006 1:40 PM
well, because exceptions are expensive and because that's just the way the design pattern is, really. Maybe the same reason as for C/C++? (I don't dev in C/C++)

It's just not recommended and I've not seen any classes that throw an exception via the constructor, except for maybe invalid inputs, but even then it's recommended to check/validate your inputs before constructing an object - it's better design.
Friday, December 01, 2006 2:07 PM
It isn't bad practice to throw an exception from a constructor. If the inputs are invalid or the object cannot be constructed then you should throw exceptions as with any other member. You have to write your class with no control over how it will be called (unless it is an internal class) so the only defence you have against invalid inputs is exceptions.
Friday, December 01, 2006 2:58 PM
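To make the point concrete, a defensive constructor might look like this (a minimal sketch; the class and its validation rule are invented for illustration):

```csharp
using System;

public class Temperature
{
    public double Celsius { get; private set; }

    public Temperature(double celsius)
    {
        // Reject impossible input immediately; the caller gets a clear
        // exception instead of a half-usable object.
        if (celsius < -273.15)
            throw new ArgumentOutOfRangeException(
                "celsius", "Below absolute zero.");
        Celsius = celsius;
    }
}
```

A caller that passes impossible data fails right at the construction site, which is where the bug actually is.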
Note, it is not a bad practice to throw an exception in a ctor (either in C# or C++).

It is a bad practice to allow an exception to leave an object in a "half-constructed" state.
For example (in C++):
class MyClass
{
    char* ptr1;
    char* ptr2;
public:
    MyClass()
    {
        ptr1 = new char[100];
        ptr2 = new char[10000];
    }
};
In that example, if the new for ptr2 throws an out-of-memory exception, the memory of ptr1 would be lost as a memory leak. The ctor should be written as:
MyClass()
{
    ptr1 = NULL;
    ptr2 = NULL;
    try {
        ptr1 = new char[100];
        ptr2 = new char[10000];
    }
    catch(...)
    {
        delete[] ptr1;
        delete[] ptr2;
        throw;
    }
}
Friday, December 01, 2006 4:13 PM
- Ahmedilyas: "it wouldnt be recommended and its bad practice. you should really validate your inputs/objects before creating the class object "
No mate. Validating a function's (or constructor's) arguments outside the function itself is the bad practice. If the function's interface changes even a tiny bit, you have a lot of search/replacing to do or you'll be throwing exceptions anyways.
And arguments about exceptions are way more expensive than the exceptions themselves. Since they're only supposed to be used in *exceptional* circumstances, they should never slow anything down in normal use. Especially in a constructor, where there's just been a slow memory allocation.
Bad practice is obfuscating and duplicating code to avoid throwing exceptions.
Friday, December 01, 2006 10:55 PM
ah well, we have our own views but I still wouldn't recommend it. I've always been told that throwing an exception in a constructor is not the way to go about doing things and that inputs should be validated beforehand, before constructing an object.

as long as the original question is resolved...
Friday, December 01, 2006 10:57 PM
- I don't think the original question has been resolved. The OP is talking about "validating twice." This is exactly why I say not throwing exceptions from a constructor is bad practice.
As far as I can see, there are only two proper ways to safely handle object initialization:

- Throw exceptions from the constructor
- Don't use exceptions; use empty constructors, and have an Init() method with a return value to indicate success or failure
Friday, December 01, 2006 11:05 PM
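The second option can be sketched like this (the names are illustrative, not from any post above):

```csharp
using System;

public class Record
{
    private string name;
    private bool initialized;

    public Record() { }              // cheap, never fails

    // Returns false instead of throwing; the caller must check.
    public bool Init(string input)
    {
        if (string.IsNullOrEmpty(input))
            return false;
        name = input.Trim();
        initialized = true;
        return true;
    }

    public string Name
    {
        get
        {
            // Every member has to guard against use before Init.
            if (!initialized)
                throw new InvalidOperationException("Init was not called.");
            return name;
        }
    }
}
```

Note that every member now has to check the flag, which is the main cost of this pattern.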
I'm glad some of you agree that throwing exceptions from a constructor is not necessarily bad practice, as I tend to do this, e.g., if the input arguments are invalid.
I think it is one of those rules that has been handed down and got garbled in the process.
Throwing an exception from a constructor is bad IF the object is left half constructed. In C++ this could be a real problem where you have to manage all the memory yourself. So the rule has warped into "do not throw exceptions from a constructor".
In C# this is less likely to be a problem, and an exception is often the best way to report an error when creating an object, as long as the application state remains consistent before and after the exception.
Monday, December 04, 2006 1:32 PM
You could have the constructor create an empty object and then have an Init method to process any actual data and fill the object. The Init method could return NULL or an error code that would give the caller more information. Hell, you could even add a GetError method to give detailed error information.
This way you never have an object that's half constructed, it would simply be set to default values. Any calls to the empty object could fail safely.
I agree that throwing an exception in a constructor is not the way to go. You would be simply setting yourself up for future heartbreak.
Monday, December 04, 2006 8:37 PM
- There are MANY classes in the .Net framework that throw exceptions, so it is totally untrue to say that it is not recommended practice!
Examples include:
DateTime
List<T>
Queue<T>
String()
Encoding()
...and many, many more. In fact, it's difficult to find any .Net constructor that takes a parameter that DOESN'T throw an exception!
Tuesday, December 05, 2006 1:13 PM
Although I still don't see a problem with throwing an execption from a constructor, it seems that a few people still don't agree. So how's this for a compromise: Instead of throwing an exception or using a separate Init() function, why not declare the constructor like this:
public Something(string szInput, out bool errorOccurred)
Then if there's a problem with the input, or if there isn't enough memory for the object, the calling method will know by checking the value of errorOccurred. Eh?
Tuesday, December 05, 2006 11:59 PM
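As a sketch (the class and its behaviour here are hypothetical):

```csharp
public class Something
{
    public string Data { get; private set; }

    public Something(string input, out bool errorOccurred)
    {
        // Signal failure through the out parameter instead of throwing.
        errorOccurred = string.IsNullOrEmpty(input);
        if (errorOccurred)
            return;              // object exists, but only with defaults
        Data = input;
    }
}
```

One drawback: the caller still receives a non-null object either way, so forgetting to check errorOccurred silently yields an object stuck at its defaults.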
- Since parameter-accepting constructors for most .Net classes can throw exceptions, I think people just have to suck it up and accept that constructors will throw exceptions.
I'm not sure where the idea came from that it's bad for a constructor to throw an exception. Could it be that people are confusing it with the fact that C++ destructors should never throw an exception?
Anyway, people MUST TAKE NOTE that something as innocent looking as:
using (FileStream myStream = new FileStream("myFile", FileMode.OpenOrCreate))
...
can throw no less than NINE different kinds of exceptions.
To be honest, it is somewhat worrying that there can be so much debate about exceptions in constructors. Are people writing code that blithely ignores the exceptions that can be thrown from all the .Net types?
Wednesday, December 06, 2006 9:46 AM
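In other words, callers should be prepared for the documented failure modes. A sketch of what that can look like for file access (returning null on failure is just one possible policy; logging or wrapping the exception are equally valid choices):

```csharp
using System;
using System.IO;

public static class ConfigLoader
{
    // Returns the file's text, or null for the expected failure modes.
    public static string ReadConfig(string path)
    {
        try
        {
            using (StreamReader reader = new StreamReader(path))
            {
                return reader.ReadToEnd();
            }
        }
        catch (FileNotFoundException)
        {
            return null;    // missing file: fall back to defaults
        }
        catch (UnauthorizedAccessException)
        {
            return null;    // no permission to read it
        }
        catch (IOException)
        {
            return null;    // locked, unreadable media, etc.
        }
    }
}
```

Note the catch order: FileNotFoundException derives from IOException, so the more specific handler must come first.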
- This is the most obvious solution, except the Create method should return a Something object instead of void.
Wednesday, December 06, 2006 3:33 PM
Now you know why so much software is so buggy. People don't want to handle errors - a) it's hard to do well and b) it disrupts the flow of what might otherwise be clean and easy to read code.
Unfortunately, it's necessary and I'm with you.. you gotta do what you gotta do. But the answer is yes.. I think a lot of people DO blithely ignore the possible exceptions.

I have the luck of working with people who love breaking software though, so the stuff I write usually gets a good pounding. While it's impossible to create bug-free software, I think I at least get the opportunity to create very reliable software..
Friday, December 08, 2006 5:37 PM
A couple of the posts in this discussion have highlighted the reasons to prefer factory methods over constructors. I can't give a constructor a meaningful name, and I can only have the ctor return a single object of a specific type. Using a constructor is hard coding a dependency into an application - which will need to happen occasionally, but can also hurt testability and maintainability.
It sounds like the original poster has a scenario where some complicated logic is required to construct the right object - and this is a place where I'd favor using a factory method over both a constructor and over any "Init"-type / two-step initialization technique. Two-phase construction introduces more complications than throwing from inside a constructor would ever introduce. What happens if Init is called twice? What sort of checks do I need to place into every method to ensure Init was correctly invoked once and once only? Life is simpler if I know the object is created all at once or not at all.
Friday, December 08, 2006 7:15 PM
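A factory method along those lines might look like this (the Person class and its age rule are invented for illustration):

```csharp
public class Person
{
    public int Age { get; private set; }

    // Private constructor: the factory is the only way in.
    private Person(int age)
    {
        Age = age;
    }

    // Returns null when the input cannot produce a valid Person -
    // which is exactly what a constructor cannot do.
    public static Person Create(int age)
    {
        if (age < 0 || age > 120)
            return null;
        return new Person(age);
    }
}
```

The private constructor guarantees the complicated creation logic lives in one named, testable place.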
The best way I know to return null from a constructor, or to cancel a constructor, is this:
public class Person
{
public Person(int age, out Person result)
{
result = this;
if (age > 120)
{
result = null;
return;
}
// Continue with constructor...
}
}
// Call constructor and return null if person is more than 120 years old.
Person p;
new Person(130, out p);
// p is Null
Friday, March 21, 2008 11:23 AM
I don't think that returning a null from a constructor is a good programming practice. That appears to be valid code that returns valid objects, not nulls. It would require intimate knowledge of the code to realize that a null was being returned. Works great if you have the source code for Person. Too bad if you don't.

Rudedog
Friday, March 21, 2008 6:00 PM Moderator
- An easier, and much more sensible, way to deal with this (if you really didn't want to throw an exception):
Tuesday, March 25, 2008 3:15 PM
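The "IsValid" approach discussed in the replies below can be sketched like this (illustrative only, not the original poster's code):

```csharp
public class Person
{
    public bool IsValid { get; private set; }
    public int Age { get; private set; }

    public Person(int age)
    {
        if (age >= 0 && age <= 120)
        {
            Age = age;
            IsValid = true;
        }
        // else: the object is constructed, but flagged invalid
    }
}
```

The weakness, as a later reply points out, is that nothing forces the caller to check the flag.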
- I agree with Scott, a factory method would be ideal for this situation. Having to create a Person object to create another Person object isn't right...
Tuesday, March 25, 2008 3:31 PM Moderator
Everything I've read on the subject from Microsoft indicates that it is perfectly valid to throw exceptions in constructors, and that in fact exceptions should ALWAYS be used to report errors so that errors are always reported and dealt with consistently. Most of the suggestions posted here to avoid throwing an exception in a constructor (such as returning the object as an out parameter) actually violate the design guidelines established by Microsoft for .NET.
The only time you really want to avoid throwing an exception is when you expect it to happen in the normal flow of the program.

That said, using factory methods is a perfectly reasonable approach when one needs to do something that can't be done with a constructor, such as the option of returning null instead of a valid object.
Tuesday, March 25, 2008 5:38 PM
Don't like the IsValid idea at all. You are given the illusion that the object was created (because you've had one returned to you). It's way too easy to forget to call IsValid because, for starters, you have to notice that the class has an IsValid property.

The pattern I've seen used with great success is: have a C'Tor and a TryCreate(input, out objectCreated) method. If you use the C'Tor, be prepared for exceptions. If you use the TryCreate, be prepared to get back false and have objectCreated be null.
Thursday, May 01, 2008 11:47 PM
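That pair of entry points can be sketched like this (a hypothetical Person class with an invented age rule):

```csharp
using System;

public class Person
{
    public int Age { get; private set; }

    // Constructor path: bad input is an error, so it throws.
    public Person(int age)
    {
        if (age < 0 || age > 120)
            throw new ArgumentOutOfRangeException("age");
        Age = age;
    }

    // TryCreate path: bad input is expected, so nothing is thrown.
    // Mirrors the Int32.TryParse pattern.
    public static bool TryCreate(int age, out Person result)
    {
        if (age < 0 || age > 120)
        {
            result = null;
            return false;
        }
        result = new Person(age);
        return true;
    }
}
```

Callers pick the entry point that matches how "expected" failure is in their situation.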
- I think this whole discussion here has derailed.
Exceptions are used for *exceptional* cases. Yes, they're more costly than simply an if-statement with a "return null" statement inside, but the thing is that the "return null" case will only happen as *an exception*, it should not be the general case.
If you have to validate input from a file, like a CSV file full of data, then throwing exceptions on 50% of the data will slow down the import. On the other hand, if you *know* that the first column will be a valid Int32 value, doing an Int32.Parse, which will throw an exception on bad input, is *exactly the right idea*.
Constructors *should* throw exceptions on bad data, unless you can *guarantee* that the constructor is only called from controlled methods, like factory methods that is in the same library. You should *never* assume the caller has done the job required to validate the input. But again, as long as the invalid data are *exceptions* and not the rule, you won't have a problem with performance.
The alternative is the odd bug where the wrong data slipping through to a class will cause unpredictable results later on. Without the fail-at-once mentality, you're going to use more time to figure out why a class fails at a particular point when in fact you could've known about the problem well in advance, and at the point where the wrong data was given to the class, which will typically give you a stack trace that shows the location that the real bug is: the code that is calling your constructor with wrong data.
As for handling the problem, if all you do is return "I can't do this", you might compare this to a lightweight version of "throw new Exception();" where you simply don't tell the caller what exactly went wrong. This will make finding and fixing bugs in this code that much harder when all you know is that it "doesn't work". I'm sure most of you have heard that phrase from coworkers, support, customers, etc. and wondered how these people could work when they can't even read a simple error message back to you, but if you follow this "return null" way of doing things, you're ensuring this is all you'll ever know. I'm pretty sure the user of your software would feel much better too if the program told him that "The file you picked has the wrong file format" instead of just "I can't open that file", which, as was shown, could be from nine different exceptions (like ACL, file locked, network no longer there).
Personally I advocate that all public methods should throw exceptions if the data that is invalid is exceptional, or there is no (known) way to process that particular type of data. If for no other reason you'll get a crash which tells you that "this particular combination of data was invalid according to the rules I know at the moment", and then you can go figure out if you need to extend your method to handle that case.
Under no circumstances is it bad to throw exceptions from a constructor as *a general rule*. It might be that you want to prevent costly exceptions that will occur a lot because you're doing things that will often fail (like the import case), but in that case you will still use exceptions for the absolutely invalid data, and then pre-validate the data to avoid the throw at all.
The exception, however, is the last defense for the class. If you drop that, all bets are off.
This is my 2 cents, but if someone is going to challenge this with "throwing exceptions in constructors is bad practice" then I challenge you to come up with the source for this bad practice, otherwise it's just an opinion.
Sunday, May 04, 2008 11:30 AM
- I am not good at English and I am a beginner programmer. I looked for this information too, and everywhere it said that a constructor can't return a value. What about this solution:
class something
{
public:
something()
{
if ( !{test validity} )
{
delete(this);
(something*)this = NULL;
}
}
}
But I don't know if it is the correct way (all allocated memory from the class will be destroyed etc...).
The next problem is that I am deallocating the memory in which the program is running...
So write your opinion on whether I can solve this problem this way.
Monday, September 08, 2008 12:57 PM
- "I am not good at English and I am programmer beginner. I looked for this information too and everywhere was, that constructor cant return value. What about this solution:"
The this keyword is read only and you cannot assign to it.
You cannot have a constructor that returns null. Your best bet is to create static methods in your class that have the ability to create an object of your type based on your requirements:

Enjoy

Agility.
September 08, 2008 3:23 PM
- Lasse V. Karlsen said:
I think this whole discussion here have derailed.
Anyone who'd like to add to this thread, could you please scroll up to Lasse's statement on May 4th. Sums it up very well. The constructor is no place for user input, or text file input validation. The constructor's job is to construct things and if it will be handed garbage data regularly, it should throw(up) an exception regularly. By the way, learn "Design Patterns", the Object Factory pattern is classic and should be used where it fits. It may apply in this case, but not as a way to get around constructors not returning null.
IMHO...
Les Potter, Xalnix Corporation, Yet Another C# Blog
Monday, September 08, 2008 4:21 PM
- A long time ago in a galaxy far away...
There was cpp, where you could explicitly define operator new, which called malloc(). It was easy to return NULL from the new operator. It was a very interesting feature. It was used to create very complicated programs by almost as complicated programmers. These programs were debugged by very hard bugslayers. Right then the very hard bugslayers beat the complicated programmers.
Wednesday, July 29, 2009 1:31 PM
You cannot return null in the constructor, but you can overload the == operator and make the result look like it's null.

The following code works fine: (You don't need to override GetHashCode and Equals, but the compiler throws a warning if you don't)
Person p = new Person(150);
if (p == null)
{
    MessageBox.Show("Yes it's null!");
}
And this is the Person class:
public class Person
{
    public Person(int age)
    {
        if (age > 120)
        {
            isNull = true;
        }
    }

    private bool isNull;

    public static bool operator ==(Person personA, Person personB)
    {
        if ((object)personA != null && personA.isNull)
        {
            personA = null;
        }
        if ((object)personB != null && personB.isNull)
        {
            personB = null;
        }
        return object.Equals(personA, personB);
    }

    public static bool operator !=(Person a, Person b)
    {
        return !(a == b);
    }

    public override int GetHashCode()
    {
        if (isNull)
        {
            return 0;
        }
        return base.GetHashCode();
    }

    public override bool Equals(object obj)
    {
        if (obj is Person)
        {
            if (((Person)obj).isNull)
            {
                obj = null;
            }
        }
        return base.Equals(obj);
    }
}
Wednesday, January 13, 2010 9:17 PM
- If possible, use the Null-object pattern. Don't return a null if at all possible, because you start checking for NULLs everywhere.
As an example, in the "build object from what has been read in file", have your null object return an empty list (or empty StreamReader).
If the problem is (contrived example) that your streamreader is supposed to load the contents of a file, and that file doesn't exist, throw an exception (even if it's in the constructor). Validate your inputs, throw exceptions, and handle those exceptions where necessary.
Wednesday, January 13, 2010 11:34
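A minimal sketch of the Null Object pattern for the stream-reading case (all names invented):

```csharp
using System.Collections.Generic;

public interface ILineSource
{
    IList<string> ReadLines();
}

// Returned instead of null when there is nothing to read;
// callers can iterate it safely without any null checks.
public class EmptyLineSource : ILineSource
{
    public IList<string> ReadLines()
    {
        return new List<string>();
    }
}
```

A caller can foreach over ReadLines() and naturally do nothing in the empty case, with no null checks anywhere.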
1. You could have a Parameter on the Factory Method that tells you why but: a) you'd need to declare a separate Variable to hold the results of that Parameter and b) to follow good practices, do so in every place you're trying to create the Object vs. sharing a Global one.
2. You could Throw a custom Exception but that would require use of the Try Statement, which: a) is cumbersome, b) IMHO, should be reserved for unexpected errors where you just want to display and / or log an error(s) and exit vs. recover and continue processing, and c) cannot be enforced (via a Base Class or a more generic Interface) to exist like a Public Variable / Property of the Class can.
3. You could have a Shared validation Method but: a) it would have the same disadvantage as option 1 above, plus b) it would have to duplicate the Parameters needed by the Constructor and the appreciating human cost of that duplication is usually much more significant than the depreciating machine costs of Constructing an Object only to Destruct it shortly after.
For commonly expected errors (like the User typed in an invalid value(s) needed to create the Object), it's much easier to just check one or more Public Variable(s) / Property(ies) of the Class.
Friday, June 29, 2012 6:30
For commonly expected errors (like the User typed in an invalid value(s) needed to create the Object), it's much easier to just check one or more Public Variable(s) / Property(ies) of the Class
For commonly expected errors, why not validate the parameters prior to using them?
If it is possible to determine the reason "why" after the fact, shouldn't you be able to determine the reason before the fact?
I have a previous post on this thread. You cite the fact that try/catch is undesirable, which I agree with. Throwing an exception would be deceptive and misleading. A Factory Method seems to be the best choice. There are examples of this in the Base Class Library; i.e. Delegate.CreateDelegate.
Rudy =8^D
Mark the best replies as answers. "Fooling computers since 1971."
June 30, 2012 10:50 AM (Moderator)
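The factory-method approach endorsed above, a private constructor plus a static creator that returns null for inadequate parameters, might look like this. The sketch is in Java rather than C#, and the age rule is invented for illustration:

```java
// Sketch of a factory method that returns null for inadequate parameters.
// The constructor is private, so a half-built object can never escape.
public class Person {

    private final int age;

    private Person(int age) { this.age = age; }

    // Returns null instead of an object in an invalid state.
    public static Person create(int age) {
        if (age < 0 || age > 120) {
            return null;
        }
        return new Person(age);
    }

    public int getAge() { return age; }

    public static void main(String[] args) {
        System.out.println(Person.create(30) != null);  // true
        System.out.println(Person.create(200) == null); // true
    }
}
```

As the rest of the thread points out, the caller must still remember to check the returned reference before using it, which is the main weakness of this design.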
For commonly expected errors (like the User typed in an invalid value(s) needed to create the Object), it's much easier to just check one or more Public Variable(s) / Property(ies) of the Class
It would be even easier to forget to check those public variables, and to start using the object as if it had been constructed correctly.
Personally, I think your suggestion that the class should validate the user input is suboptimal. It should be the responsibility of a business logic class to do that kind of verification, and then the verified data should be passed to the constructor of the object that requires the input data to be correct. Any user of the "verified" class can be sure that it is in a usable state.
If you allow a class to have a property that tells you if it was properly constructed or not, then *every* method that is passed such an object will need to check that property. Not only that, such methods will need to decide what to do if the object is NOT properly constructed. That would not be a great design...
Monday, July 02, 2012 7:52 AM
- Edited by Matthew Watson Monday, July 02, 2012 8:21 AM
Thursday, July 05, 2012 5:08 PM
Thanks for the compliment.
Under the scenario you describe above, I would not call those circumstances "unexpected errors". I would call them "bugs". A Factory Method should return null or a valid object. In fact, you do not even need to throw an exception. I say let the OS do it for you because that is most likely what will happen anyway.
If your Factory Method has the potential to throw exceptions, then it should catch them and return a null object. If the code is for your own internal use, then proceed by whatever means seems most logical and appropriate. I'm just saying that I would be livid if a constructor threw an exception on me. To me, that's a bug.
Rudy =8^D
Thursday, July 05, 2012 5:24 PM (Moderator)
- Edited by Rudedog2 (Moderator) Thursday, July 05, 2012 5:28 PM
@Rudedog2: You're welcome.
Re. "I would call them "bugs".": I agree. That's precisely why I think they should generate Exceptions vs. just returning a Null and hoping that all instances of Calling Code will properly suspend / abort processing when Null is returned.
Re. "In fact, you do not even need to throw an exception.": When I said "Throw an Exception", I meant indirectly (by allowing a Runtime Error to occur un-Catch'ed) or directly (via an explicit Throw Statement), both from inside the Constructor.
Re. "I would be livid if a constructor threw an exception on me.": Well, Class Constructors in the .NET Framework do that all the time (albeit for what I would call "unexpected" errors / "bugs" in the Calling Code). Ex. "String(Char, Int32)", "List(Of T)(Int32)", "List(Of T)(IEnumerable(Of T))".
Sent from my iMind
Thursday, July 05, 2012 6:56 PM
I didn't look up all of the constructors that you listed, but the String constructor throws an IndexOutOfRange Exception, which is documented. Remember, the original question is how to return null from a set of invalid parameters. The recommendation is to use a Factory Method, not a constructor, per se.
Also, using a negative integer to index an array should throw an exception, and it is something that the consumer should catch. The original question was how to throw the exception and return something, preferably a null. I would bet that the other types you cited throw exceptions for invalid parameters, not unexpected or unforeseen issues.
[EDIT] List<T> throws an exception if the parameter is null. Again, another developer error that has been foreseen. Not the scenario posed by the original question.
Rudy =8^D
July 05, 2012 9:41 PM (Moderator)
Re. "the String constructor throws an IndexOutOfRange Exception": Huh? The String Constructor example I listed was "String(Char, Int32)" which, according to the MSDN docs, Throws an "ArgumentOutOfRangeException" (not "IndexOutOfRange") Exception (btw, when "count is less than zero" which the Parameter's Type of "Integer" does not prevent).
Re. "which is documented": Huh? I never claimed it wasn't. If you're just trying to imply that being "documented" means it's not an "unexpected" error, then see my 3rd "Re." after this one.
Re. "Remember, the original question is how to return null from a set of invalid parameters. The recommendation is to use a Factory Method, not a constructor, per se." and "The original question was how to throw the exception and return something, preferably a null.": Huh? The O.P. asks at the end of his post "how do i return null from a cosntructor if params are not adecuate?". The answer to the O.P.'s literal Q. is "No, you can't do it". Now the *closest* *workaround* to what he's asking for is to force use of a Factory Method (which returns Null on errors) vs. a "cosntructor".

Regardless of whether one uses a *theoretical* Constructor that *could* return Null or a Factory Method that *can* to Construct an Object, what I (and others including yourself) have been trying to also point out is that it's bad practice to rely on it doing so *if* it's doing so due to errors that the App was supposedly designed to catch (i.e. via "Business Logic") before Object Construction.

In addition to that, I (and I think no one else in this Thread) am also trying to point out that *if* the errors were designed to be caught before Object Construction, then the Class should not *quietly* announce the errors by simply returning Null, but *instead / also* *loudly* announce them by (directly / indirectly) Throwing Exceptions from inside the Constructing Method (New or Factory) so that it's much less likely the Consumer will continue processing (by simply avoiding References to the Null Object because I think the code to do so is much less likely to be well designed / tested code which therefore is much more likely to result in invalid processing / corrupted data without generating other error messages and / or doing so later in the process when it's harder to trace the source and / or recover from the damages) vs. aborting.
BTW, in the O.P.'s specific example where the Class' Constructor is checking for "wrong format or end of file (?aka empty file?)" prior to returning an Object that represents the File's contents, I don't think his App was designed to check for those errors prior to Object Construction, nor do I think it should've been. I think that in that specific example: a) the validation logic should be encapsulated inside his Class which would make the errors they catch "expected" errors at the time of Object Construction and b) *if* his Consumer code wants to know why the Construction failed, then I recommend the Object return the error(s) via Public Variable(s) / Property(ies) or Optional Parameter(s) on the Constructing Method (New or Factory).
Re. "Also, using a negative integer to index an array should throw an exception, and it is something that the consumer should catch.": Did you mean "should catch" with a Try - Catch Statement around the Constructor Call or (as several of us including yourself have recommended above) before even calling the Constructor?
Re. "I would bet that the other types you cited throw exceptions for invalid parameters, not unexpected or unforeseen issues.": As I was trying to point out in my 2nd reply above, whether an error is considered "unexpected" / "unforeseen" / "bug" (in the Consumer) at the point of Object Construction depends on whether the Consumer was designed to catch those errors prior to Object Construction. If the latter was, then the former is. As for the .NET Classes, since they're Throwing Exceptions vs. returning errors from their Constructors and we can't change them, then I think the best practice would be to validate the Parameters prior to calling a .NET Class Constructor which would make the errors "unexpected" inside the .NET Constructors such that if somehow my validation logic failed, I'd want errors due to it to be *loudly* announced by the Constructor via Thrown Exceptions and either: a) just generate a Runtime Error and abort the App or b) be Catch'ed via a Try Statement but only for display / logging purposes before ultimately aborting the App.
Friday, July 06, 2012 12:27 AM
I'm just saying that I would be livid if a constructor threw an exception on me. To me, that's a bug.
Sure it's a bug - in your own code, not in the constructor throwing the exception. :)
How do you deal with the myriad of .Net types that throw exceptions from their constructors?
* DateTime - if you pass in an illegal year/month/day combination
* FileStream - if you try to open a non-existent file.
* Hundreds of other classes - if you pass in null for parameters that are not allowed to be null.
There are so very many - how do you deal with it?
To me it is pretty clear that if you commit a programming error by passing bad data to a constructor, you should get an exception.
I would like to point you at Microsoft's documentation for Constructor Design. In particular, note the comment: Do throw exceptions from instance constructors if appropriate.
Friday, July 06, 2012 8:16 AM
- Edited by Matthew Watson Friday, July 06, 2012 8:19 AM | https://social.msdn.microsoft.com/Forums/en-US/1450ed84-277f-46d3-b2ea-8352986f877c/how-to-return-null-from-a-constructor?forum=csharplanguage | CC-MAIN-2016-22 | refinedweb | 5,738 | 59.84 |
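The guideline quoted above ("Do throw exceptions from instance constructors if appropriate") translates directly to other languages. A Java sketch, with IllegalArgumentException standing in for .NET's ArgumentException; the age rule is invented for illustration:

```java
// Sketch: the constructor itself rejects bad input, so an object in an
// invalid state can never exist.
public class ThrowingConstructorDemo {

    static final class Person {
        final int age;

        Person(int age) {
            if (age < 0 || age > 120) {
                throw new IllegalArgumentException("age out of range: " + age);
            }
            this.age = age;
        }
    }

    public static void main(String[] args) {
        try {
            new Person(200);
            System.out.println("constructed");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The design trade-off debated in this thread is exactly here: the throwing constructor announces programmer errors loudly, while a null-returning factory leaves it to every caller to remember the check.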
We often interface with applications where data needs to be transferred in XML format instead of HTML format, which is the default for visual clients such as web browsers.
A good example is the Facebook APIs. Facebook has exposed its services through open endpoints in the form of RESTful web services, where you hit a URL and post some parameters, and the API returns the data in XML format. It is then up to you how you use that data.
In this post, I am giving an example of marshalling and unmarshalling of Map objects, e.g. HashMap. These map objects usually represent a mapping between some simple keys and complex data.
1) JAXB Maven Dependencies
To run the JAXB examples, we need to add runtime dependencies like below.

<dependency>
    <groupId>com.sun.xml.bind</groupId>
    <artifactId>jaxb-core</artifactId>
    <version>2.2.8-b01</version>
</dependency>
<dependency>
    <groupId>com.sun.xml.bind</groupId>
    <artifactId>jaxb-impl</artifactId>
    <version>2.2-promoted-b65</version>
</dependency>
2) Map model classes
I have created a model class Employee.java which has some common fields. I want to build code which can parse a map of objects where the key is a sequence code and the value is the Employee object itself.

@XmlRootElement(name = "employee")
@XmlAccessorType(XmlAccessType.FIELD)
public class Employee {
    private Integer id;
    private String firstName;
    private String lastName;
    private double income;

    // Getters and Setters
}

@XmlRootElement(name = "employees")
@XmlAccessorType(XmlAccessType.FIELD)
public class EmployeeMap {
    private Map<Integer, Employee> employeeMap = new HashMap<Integer, Employee>();

    public Map<Integer, Employee> getEmployeeMap() {
        return employeeMap;
    }

    public void setEmployeeMap(Map<Integer, Employee> employeeMap) {
        this.employeeMap = employeeMap;
    }
}
3) Marshal Map to XML Example
A Java example to marshal (convert) a Java map to its XML representation. In the example code below, I write the map of employees first to the console, and then to a file.

public static void main(String[] args) throws JAXBException {
    HashMap<Integer, Employee> map = new HashMap<Integer, Employee>();

    Employee emp1 = new Employee();
    emp1.setId(1);
    emp1.setFirstName("Lokesh");
    emp1.setLastName("Gupta");
    emp1.setIncome(100.0);

    Employee emp2 = new Employee();
    emp2.setId(2);
    emp2.setFirstName("John");
    emp2.setLastName("Mclane");
    emp2.setIncome(200.0);

    // Add employees in map
    map.put(1, emp1);
    map.put(2, emp2);

    EmployeeMap employeeMap = new EmployeeMap();
    employeeMap.setEmployeeMap(map);

    /******************** Marshalling example *****************************/
    JAXBContext jaxbContext = JAXBContext.newInstance(EmployeeMap.class);
    Marshaller jaxbMarshaller = jaxbContext.createMarshaller();
    jaxbMarshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);

    jaxbMarshaller.marshal(employeeMap, System.out);
    jaxbMarshaller.marshal(employeeMap, new File("c:/temp/employees.xml"));
}

Output:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<employees>
    <employeeMap>
        <entry>
            <key>1</key>
            <value>
                <id>1</id>
                <firstName>Lokesh</firstName>
                <lastName>Gupta</lastName>
                <income>100.0</income>
            </value>
        </entry>
        <entry>
            <key>2</key>
            <value>
                <id>2</id>
                <firstName>John</firstName>
                <lastName>Mclane</lastName>
                <income>200.0</income>
            </value>
        </entry>
    </employeeMap>
</employees>
4) Unmarshal XML to Map Example
A Java example to convert XML to a Java map object. Let's see the example with our EmployeeMap class.

private static void unMarshalingExample() throws JAXBException {
    JAXBContext jaxbContext = JAXBContext.newInstance(EmployeeMap.class);
    Unmarshaller jaxbUnmarshaller = jaxbContext.createUnmarshaller();

    EmployeeMap empMap = (EmployeeMap) jaxbUnmarshaller.unmarshal(new File("c:/temp/employees.xml"));

    for (Integer empId : empMap.getEmployeeMap().keySet()) {
        System.out.println(empMap.getEmployeeMap().get(empId).getFirstName());
        System.out.println(empMap.getEmployeeMap().get(empId).getLastName());
    }
}

Output:

Lokesh
Gupta
John
Mclane
5) Sourcecode Download
To download the source code of the above example, follow the link below.
Happy Learning !!
28 thoughts on “JAXB – Marshal and Unmarshal HashMap in Java”
how to use only the name of xml file without giving the full path address in unMarshalingExample() ?
what to do if wanna call xml file only by its name not by full path address?
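Two common ways to avoid hard-coding the full path (a sketch, not part of the original article; the file name is illustrative): pass a path relative to the JVM's working directory, or put the file on the classpath (for example under src/main/resources) and load it as a stream. Unmarshaller.unmarshal accepts either a File or an InputStream.

```java
import java.io.File;
import java.io.InputStream;

// Sketch: resolving "employees.xml" without an absolute path.
public class ResourceLookupDemo {

    // Option 1: a relative path is resolved against the JVM's working directory.
    public static File relative(String name) {
        return new File(name); // e.g. unmarshaller.unmarshal(new File("employees.xml"))
    }

    // Option 2: load the file from the classpath (e.g. src/main/resources).
    public static InputStream fromClasspath(String name) {
        return ResourceLookupDemo.class.getResourceAsStream("/" + name);
    }

    public static void main(String[] args) {
        File f = relative("employees.xml");
        System.out.println(f.getName());    // employees.xml
        System.out.println(f.isAbsolute()); // false
    }
}
```

The classpath option is usually preferable for files shipped with the application, since it works unchanged inside a packaged jar.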
I want to learn Data Structures and Algorithms, and I want to know which algorithm is better for which Data Structure. Please give me the best reference to learn Time Complexity and Space Complexity as well.
Not sure. I still have not gone through any resource/book which I can recommend.
OK, but as per your point of view you can prefer one book, so just give me any one based on your experience to refer to; commercial is fine, or open source is also fine…
I want to grow my skills; I have been trying for 6 months but didn't find one, so please help me……
Hi Lokesh, good evening,
Thank you for sharing content here and giving replies to us.
I have got one business case: I have to develop one web service, and that web service will have to work as a SOAP (JAX-WS) web service as well as a RESTful web service. To get the solution I am referring to the JAX-RS spec and the SOAP (JAX-WS) spec, but I couldn't find the solution. So can you help me here?
Thanking you.
You will need to develop two web services with almost similar contract.
Hi Lokesh,
Yes, but from the SOAP point of view we can see the WSDL; the JAX-RS API / RESTEasy impl will not support any WSDL or WADL, so in this case one contract is not useful.
Here I have some doubts: if we develop the SOAP web service, then the web service will not take the GET method, because we can't send complex XML data over the GET method's string query param. So the SOAP approach fails here, as per my knowledge. Are there any alternative tools in the market to support the above business case? Please tell me.
I am not aware of any such tool or methodology.
OK thanks for your answer,
Is it possible to build one web service that could work as both a SOAP and a REST web service?
Not really!! Both are too much different with very less in common.
Which algorithm is better for sorting Linked List and Searching the Element in Linked list
Perhaps MergeSort will give you best results in case of LinkedList. It has been established many times by many researchers e.g. here.
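For reference, a top-down merge sort over a hand-rolled singly linked list looks like this (an illustrative sketch, not tied to any particular library; the Node type and helper names are invented):

```java
// Top-down merge sort on a singly linked list: split with slow/fast
// pointers, sort each half recursively, then merge.
public class ListMergeSort {

    static final class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    public static Node sort(Node head) {
        if (head == null || head.next == null) return head;
        // Find the midpoint with slow/fast pointers.
        Node slow = head, fast = head.next;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }
        Node second = slow.next;
        slow.next = null; // cut the list in two
        return merge(sort(head), sort(second));
    }

    static Node merge(Node a, Node b) {
        Node dummy = new Node(0, null), tail = dummy;
        while (a != null && b != null) {
            if (a.value <= b.value) { tail.next = a; a = a.next; }
            else                    { tail.next = b; b = b.next; }
            tail = tail.next;
        }
        tail.next = (a != null) ? a : b;
        return dummy.next;
    }

    // Helper: build a list from values and return the sorted values as "1,2,3".
    public static String sortToString(int... values) {
        Node head = null;
        for (int i = values.length - 1; i >= 0; i--) head = new Node(values[i], head);
        StringBuilder sb = new StringBuilder();
        for (Node n = sort(head); n != null; n = n.next) {
            if (sb.length() > 0) sb.append(',');
            sb.append(n.value);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(sortToString(5, 1, 4, 2, 3)); // 1,2,3,4,5
    }
}
```

Merge sort suits linked lists well because merging only relinks nodes; no random access or extra array is needed.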
Hi,
This is a very good article which helped me through the issue I have been struggling with for a few days now.
How can I get rid of the unwanted tags from the output XML and only retain the actual Employee-related tags?
Thanks,
Ganga
Sorry, looks like the tags that I was referring to in the query I posted did not display quite well. The tags that I am referring to are:
1. Entry
2. Key
3. Value
I found a solution for this: I used an ArrayList instead of a Map. I added all the map values into the ArrayList before passing it to the JAXB marshaller.
Thanks,
Ganga
Seems good to me.
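The workaround described above, copying the map's values into a plain list before handing it to the marshaller so that JAXB emits a flat sequence of employee elements instead of entry/key/value wrappers, amounts to something like this. The JAXB annotations are omitted so the sketch stays self-contained, and the names are illustrative; JAXB's XmlAdapter (via @XmlJavaTypeAdapter) is another option for customizing how a Map is written without changing the model classes.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the "ArrayList instead of Map" workaround: marshal a list
// built from the map's values, so no <entry>/<key>/<value> elements appear.
public class MapToListDemo {

    static final class Employee {
        final int id;
        final String firstName;
        Employee(int id, String firstName) { this.id = id; this.firstName = firstName; }
    }

    // What you would hand to the marshaller instead of the EmployeeMap wrapper.
    public static List<Employee> toList(Map<Integer, Employee> map) {
        return new ArrayList<>(map.values());
    }

    public static void main(String[] args) {
        Map<Integer, Employee> map = new LinkedHashMap<>();
        map.put(1, new Employee(1, "Lokesh"));
        map.put(2, new Employee(2, "John"));

        List<Employee> employees = toList(map);
        System.out.println(employees.size());           // 2
        System.out.println(employees.get(0).firstName); // Lokesh
    }
}
```

A LinkedHashMap is used here so the list order matches insertion order; with a plain HashMap the element order in the output would be unspecified.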
I have one ParentItem.java class having common attributes and 2 child classes, Merchandise.java and Offer.java. I have one Transaction.java class having a list of ParentItem, so as to marshal a list of 2 different object types, and I want to get a structure like this:
LINE_ITEM
ADD
1
REG2
7.00
7.42
0.42
5.00
#1 Combo Meal
1695155651
5.00
1
5.00
#1 Combo Meal
1695155652
5.00
1
OTHER_COUPON
1695155653
EmpDisc
2.00
1695155651
MERCHANT_COUPON
1695155654
1OffAnyPurch
1.00
1695155651
but after doing all my coding I am getting the following structure:
ADD
LINE_ITEM
TIE WAIST SHIRT DRESS
1
Offer
100
How do I create a Java client now to view all the employees and perform a simple query?
Java client for what purpose??
To retrieve employee information.
Ankush
I want output is like that as PFB
Employees e =
e.getFirstName() = “Ankush”
e.getAddress() = “>”
how can get this result from jaxb
Something missing in comment. I am not able to understand it properly.
Hi Lokesh,
Could you please help me with the below XML structure where the below block would be repeated many times depending upon the condition.
ABC
.
Many Thanks!
Hi Lokesh i am getting the following error while sending the @requestline(get test/search?userid={userId}deptid={deptid}
unable to marshal type “java.util.LinkedHashMap” as an element because it is missing an @XmlRootElement annotation
You cannot return an instance of LinkedHashMap from your method because it does not contain the @XmlRootElement annotation. You must return an instance of a class which has the @XmlRootElement annotation, and add the LinkedHashMap as an attribute in this class (with setters and getters). has some good examples.
I need to send only couple of parameters but still facing the problem
Doesn’t matter the number of parameters. The class you return from method “MUST” have @XmlRootElement annotation. | https://howtodoinjava.com/jaxb/jaxb-example-marshalling-and-unmarshalling-hashmap-in-java/ | CC-MAIN-2022-27 | refinedweb | 1,390 | 56.55 |
Member since 11-16-2015
Posts: 156
Kudos Received: 36
Solutions: 16
03-02-2021 06:09 PM
Hello @PR_224 Please replace steps f to j with what @singh101 suggested in one of the above comments. The idea is - we make use of the binaries from the CDH parcel, instead of downloading it from upstream. On a side note: CDP Base provides SparkR out of the box (in case you plan to upgrade in the near future). Good luck!
10-24-2019 10:30 AM
@aahbs Thanks for the call today. Let's see if we can narrow those 401's to the browser level (Chrome).
@simps In CDSW version 1.6.0 there was a wrong check in our code which failed engines if the /etc/krb5.conf file was missing. We fixed it in 1.6.1: Fixed an issue where sessions on non-kerberized environments would throw the following error even though no principal was provided: "Kerberos principal provided, but no krb5.conf and cluster is not Kerberized." Cloudera Bug: DSE-7236. Please see if you can upgrade to this minor release, or as a workaround you can place a dummy krb5.conf in /etc/ on all CDSW hosts. Regards, Amit
09-20-2019 11:51 PM
@aahbs good point. Certain organizations which make use of firewalls or proxies can block websockets. If your browser shows problems with websockets using Chrome Developer Tools, it's likely the case. You might want to speak with your network admin and get this sorted. Regarding the extension, see if you can download the Chrome extension on a machine which has internet connectivity and then scp / install it manually on your laptop.
09-20-2019 08:54 AM
@aahbs these 2 lines suggest the POD is ready from the k8s perspective:

2019-09-20 08:30:24.762 29 INFO Engine 76jt0ox8nexowxq5 Finish Registering running status: success
2019-09-20 08:30:24.763 29 INFO Engine 76jt0ox8nexowxq5 Pod is ready data = {"secondsSinceStartup":2.6,"engineModuleShare":2.092}

Basically once the init process completes in the engine and the kernel (e.g. python) boots up the handler code in the engine, it directly updates the livelog status badge that the engine has transitioned from Starting to Running state. In our case this is broken, which could indicate a problem with websockets. You can enable the developer console in the browser to check the websocket errors. To open the Developer console in Chrome, click on the three dots on the extreme right side of the URL bar, then click on More tools -> Developer tools -> Console.

To identify if the browser supports websockets and can connect, use the echo test from here. You can also use a Chrome extension which lets you connect to the livelog pod from the browser using websockets and ensures that there are no connectivity problems between the browser and CDSW's livelog using websockets.

Another thing to ensure is that you are able to resolve the wildcard subdomain from both your laptop and the server. For example, if you configured your DOMAIN in the CDSW configuration as "cdsw.company.com", then a dig *.cdsw.company.com and a dig cdsw.company.com should return the A record correctly from both your laptop and the CDSW host.

You might also want to double check that there are no conflicting environment variables at the global or project level.
09-20-2019 12:27 AM
@aahbs good to hear that you are past the node.js segfaults. Regarding the session stuck in the launching state, start by having a look at the engine pod logs. The engine pod name will be the ID at the end of the session URL (e.g. in this case ilc5mjrqcy2hertx). You can then run kubectl get pods to find out the namespace that the pod is launched with:

kubectl get pods --all-namespaces=true | grep -i <engine ID>

Followed by kubectl logs to review the logs of the engine and kinit containers:

kubectl logs <engineID> -n <namespace> -c engine

BTW, is this a new installation or an upgrade of an existing one? Do you use Kerberos and HTTPS? If TLS is enabled, are you using self-signed certificates?
09-18-2019 09:37 AM
@aahbs we recently observed this with CDSW 1.6 on hosts which have IPv6 disabled. If you're hitting this behaviour please check dmesg; it would likely show segfaults in the node process. We are working internally to understand the GRPC behaviour and its connection with IPv6, but in the meantime you might want to enable IPv6 per the RedHat article:

1. Edit /etc/default/grub and delete the entry ipv6.disable=1 from GRUB_CMDLINE_LINUX, like the following sample:

GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/swap crashkernel=auto rd.lvm.lv=rhel/root"

2. Run the grub2-mkconfig command to regenerate the grub.cfg file:

# grub2-mkconfig -o /boot/grub2/grub.cfg

Alternatively, on UEFI systems, run the following:

# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

3. Delete the file /etc/sysctl.d/ipv6.conf which contains the entries:

# To disable for all interfaces
net.ipv6.conf.all.disable_ipv6 = 1
# The protocol can be disabled for specific interfaces as well.
net.ipv6.conf.<interface>.disable_ipv6 = 1

4. Check the content of the file /etc/ssh/sshd_config and make sure the AddressFamily line is commented:

#AddressFamily inet

5. Make sure the following line exists in /etc/hosts, and is not commented out:

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

6. Enable IPv6 support on the ethernet interface. Double check /etc/sysconfig/network and /etc/sysconfig/network-scripts/ifcfg-* and ensure we have IPV6INIT=yes. This setting is required for IPv6 static and DHCP assignment of IPv6 addresses.

7. Stop the CDSW service.

8. Reboot the CDSW hosts to enable IPv6 support.

9. Start the CDSW service.
07-16-2019 10:58 AM
@rssanders3 Thanks for your interest in the upcoming CDSW release.

> Has a more specific date been announced yet?

Not yet publicly (but it should be out very soon).

> Specifically, will it run on 7.6?

Yes
07-15-2019 10:20 PM
06-15-2019 10:53 AM
Hello @Data_Dog Welcome!
Hello @Baris There are no such limitations from CDSW. If a node has spare resources, Kubernetes could use that node to launch the pod. May I ask how many nodes there are in your CDSW cluster? What is the CPU and memory footprint on each node, and what version of CDSW are you running? And what error are you getting when launching the session with > 50% memory? You can find out how many spare resources there are cluster-wide using the CDSW homepage (Dashboard). If you want to find out exactly how many spare resources there are on each node, you can find that out by running $ kubectl describe node on the CDSW master server. Example: In the snip below you can see that out of 4 CPU (4000m), 3330m was used, and similarly out of 8 GB RAM, around 6.5 GB was used. This means if you try to launch a session with 1 CPU or 2 GB RAM it will not work.

$ kubectl describe nodes
Name: host-aaaa
Capacity:
  cpu: 4
  memory: 8009452Ki
Allocatable:
  cpu: 4
  memory: 8009452Ki
Allocated:
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  3330m (83%)   0 (0%)      6482Mi (82%)     22774Mi (291%)

Do note that a session can only spin up an engine pod on one node. This means, for example, if you have three nodes with 2 GB RAM left on each of them, it might give you the impression that you have 6 GB of free RAM and that you can launch a session with 6 GB memory; but because a session can't share resources across nodes, you'd eventually see an error something like this: "Unschedulable: No nodes are available that match all of the predicates: Insufficient memory (3)"
12-20-2018 03:01 PM
Hi @JSenzier Right, this won't work in client mode. It's not about the compatibility of Spark 1.6 with the CDH version, but the way deploy mode 'client' works. spark-shell on Cloudera installs runs in yarn-client mode by default. Given the use of (which is generally used for local disks), we recommend running the app in local mode for such local testing, or you can turn your script (using Maven or sbt) into a jar file and execute this using spark-submit in cluster mode.

$ spark-shell --master local[*]
05-11-2018 08:35 AM
I see, thanks. Are you able to print the results on the console using a simple Spark Kafka streaming app? If yes, we'd need to look at why the logging part is not working.
05-11-2018 08:23 AM
I have not got a chance to run your code locally, but I believe it should be where you've defined your log path to be:

logPath = '/home/<username>/logs/' + os.path.basename(__file__) + '-' + dtmStamp + '.log'

Did you forget to replace the <username> with the actual username, or was it redacted for sharing purposes?
Hi @sim6

Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timed out waiting for client connection.

It looks like the RPC times out waiting for resources to become available on the Spark side. Given that it is random, this indicates that the error might be happening when the cluster does not have enough resources, and that nothing is permanently wrong with the cluster as such. For testing you can explore the following timeout values and see if that helps:

hive.spark.client.connect.timeout=30000ms (default 1000ms)
hive.spark.client.server.connect.timeout=300000ms (default 90000ms)

You'd need to set it up in the Hive Safety Valve using the steps below, so that it takes effect for all the Spark queries:

1. Go to the Cloudera Manager home page
2. Click through the "Hive" service
3. Click "Configuration"
4. Search for "Hive Service Advanced Configuration Snippet (Safety Valve) for hive-site.xml"
5. Enter the following in the XML text field:

<property>
  <name>hive.spark.client.connect.timeout</name>
  <value>30000ms</value>
</property>
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <value>300000ms</value>
</property>

Restart Hive services to allow changes to take effect, then run the query again to test. Let us know how it goes.
05-11-2018 01:23 AM
Hi @Nick Yes, you should get a count of the words. Something like this:

-------------------------------------------
Time: 2018-05-11 01:05:20
-------------------------------------------
(u'', 160)
...

To start with, please let us know if you are using Kerberos on either of the clusters. Next, can you help confirm you can read the Kafka topic data using a kafka-console-consumer command from the Kafka cluster? Next, can you verify (from the host where you are running the Spark job) that you can reach the ZooKeeper on the Kafka cluster (using ping and nc on port 2181)? Lastly, please double check that you have the topic name listed correctly, along with the ZK quorum, in the spark(2)-submit command line.

For comparison, I am sharing the same exercise from my clusters, one running Spark and the other Kafka (note, however, that both are using SIMPLE authentication, i.e. non-kerberized).

Kafka cluster
=============

[systest@nightly511 tmp]$ kafka-topics --create --zookeeper localhost:2181 --topic wordcounttopic --partitions 1 --replication-factor 3
....
Created topic "wordcounttopic".

[systest@nightly511-unsecure-1 tmp]$ vmstat 1 | kafka-console-producer --broker-list `hostname`:9092 --topic wordcounttopic

Spark cluster
=============

[user1@host-10-17-101-208 ~]$ vi kafka_wordcount.py
[user1@host-10-17-101-208 ~]$ spark2-submit --master yarn --deploy-mode client --conf "spark.dynamicAllocation.enabled=false" --jars /opt/cloudera/parcels/SPARK2/lib/spark2/examples/jars/spark-examples_*.jar kafka_wordcount.py nightly511:2181 wordcounttopic

Notice the last 2 arguments: the ZK (hostname/URL) in the Kafka cluster and the Kafka topic name in the Kafka cluster.

18/05/11 01:04:55 INFO cluster.YarnClientSchedulerBackend: Application application_1525758910545_0024 has started running.
18/05/11 01:05:21 INFO scheduler.DAGScheduler: ResultStage 4 (runJob at PythonRDD.scala:446) finished in 0.125 s
18/05/11 01:05:21 INFO scheduler.DAGScheduler: Job 2 finished: runJob at PythonRDD.scala:446, took 1.059940 s

-------------------------------------------
Time: 2018-05-11 01:05:20
-------------------------------------------
(u'', 160)
(u'216', 1)
(u'13', 1)
(u'15665', 1)
(u'28', 1)
(u'17861', 1)
(u'872', 6)
(u'3', 5)
(u'8712', 1)
(u'5', 1)
...

18/05/11 01:05:21 INFO scheduler.JobScheduler: Finished job streaming job 1526025920000 ms.0 from job set of time 1526025920000 ms
18/05/11 01:05:21 INFO scheduler.JobScheduler: Total delay: 1.625 s for time 1526025920000 ms (execution: 1.128 s)

Let us know if you find any differences and manage to get it working. If it's still not working, let us know that too. Good luck!
Cool. I will feed it back in the internal Jira we are discussing this issue for. Thx for sharing.
Thanks, Lucas. That's great to hear! Can you please check if toggling it back to /var/log/spark2/lineage followed by redeploying the client configuration helps too? As promised, once the fix is identified I will update this thread.
In the last article in this series, I discussed how to encapsulate actions that sprites undertake — such as running, falling, pacing, or blowing up — in pluggable objects known as behaviors. At run time, you can easily adorn any sprite with any set of behaviors you desire. Among its many benefits, that flexibility encourages the exploration of game aspects that might otherwise lie dormant.
In this article I continue to discuss sprite behaviors, with a couple of twists. First, this is the first of two consecutive articles in the series devoted to a single sprite behavior: the runner's jump behavior. By the end of "Manipulating time, Part 2," Snail Bait will ultimately arrive at the natural jump sequence depicted in Figure 1:
Figure 1. A natural jump sequence
Second, the jump behavior, unlike the behaviors I discussed in the preceding article, doesn't repeat indefinitely. Because of that simple difference, Snail Bait must keep track of time as jumps progress. That requirement begets the need for something akin to a stopwatch, so I will implement a JavaScript stopwatch and use it to time the runner's ascent and descent as she jumps.
Runner tracks and platform tops
Snail Bait's platforms move horizontally on three tracks, as shown in Figure 2:
Figure 2. Platform tracks
The space between tracks is 100 pixels. That gives the runner, whose height is 60 pixels, more than enough room to maneuver.
Listing 1 shows how Snail Bait sets the runner's height and the platforms' vertical positions. It also lists a convenience method —
calculatePlatformTop()— that, given a track (either 1, 2, or 3), returns the track's corresponding baseline.
Listing 1. Calculating platform tops from track baselines
var SnailBait = function () {
   // Height of the runner's animation cells:
   this.RUNNER_CELLS_HEIGHT = 60; // pixels

   // Track baselines:
   this.TRACK_1_BASELINE = 323; // pixels
   this.TRACK_2_BASELINE = 223;
   this.TRACK_3_BASELINE = 123;
   ...
};
...
SnailBait.prototype = {
   ...
   calculatePlatformTop: function (track) {
      var top;

      if      (track === 1) { top = this.TRACK_1_BASELINE; }
      else if (track === 2) { top = this.TRACK_2_BASELINE; }
      else if (track === 3) { top = this.TRACK_3_BASELINE; }

      return top;
   },
   ...
};
Snail Bait uses
calculatePlatformTop() to position nearly all of the game's sprites.
The initial jump implementation
As implemented at the end of the last article, Snail Bait has the most simplistic of algorithms for jumping, as shown in Listing 2:
Listing 2. Keyboard handling for jumps
window.onkeydown = function (e) {
   var key = e.keyCode;
   ...
   if (key === 74) { // 'j'
      if (snailBait.runner.track === 3) { // At the top; nowhere to go
         return;
      }

      snailBait.runner.track++;

      snailBait.runner.top =
         snailBait.calculatePlatformTop(snailBait.runner.track) -
         snailBait.RUNNER_CELLS_HEIGHT;
   }
};
...
When the player presses the j key, Snail Bait immediately puts the runner's feet on the track above the runner (provided the runner is not on the top track already), as shown in Figure 3:
Figure 3. Jerky jump sequence: simple to implement, but unnatural
The jumping implementation shown in Listing 2 has two serious drawbacks. First, the way the runner moves from one level to another — instantly — is far from the desired effect. Second, the jumping implementation is at the wrong level of abstraction. A window event handler has no business directly manipulating the runner's attributes; instead, the runner itself should be responsible for jumping.
Shifting responsibility for jumping to the runner
Listing 3 shows a refactored implementation of the window's
onkeydown event handler. It's much simpler than the implementation in Listing 2, and it shifts the responsibility for jumping from the event handler to the runner.
Listing 3. The window's key handler, delegating to the runner
window.onkeydown = function (e) {
   var key = e.keyCode;
   ...
   if (key === 74) { // 'j'
      runner.jump();
   }
};
When the game starts, Snail Bait invokes a method named
equipRunner(), as shown in Listing 4:
Listing 4. Equipping the runner at the start of the game
SnailBait.prototype = {
   ...
   start: function () {
      this.createSprites();
      this.initializeImages();
      this.equipRunner();
      this.splashToast('Good Luck!');
   },
};
The
equipRunner() method, shown in Listing 5, adds attributes and a
jump() method to the runner:
Listing 5. Equipping the runner: The runner's
jump() method
SnailBait.prototype = {
   equipRunner: function () {
      // This function sets runner attributes:
      this.runner.jumping = false; // 'this' is snailBait
      this.runner.track = this.INITIAL_RUNNER_TRACK;
      ... // More runner attributes omitted for brevity

      // This function also implements the runner's jump() method:
      this.runner.jump = function () {
         if ( ! this.jumping) { // 'this' is the runner.
            this.jumping = true; // Start the jump
         }
      };
   },
},
The runner has attributes that represent, among other things, her current track and whether or not she is currently jumping.
If the runner is not currently jumping, the
runner.jump() method merely sets the runner's
jumping attribute to
true. Snail Bait implements the act of jumping in a separate behavior object, as it does for all the runner's other behaviors such as running and falling — and indeed, for all sprite behaviors. When it creates the runner, Snail Bait adds that object to the runner's array of behaviors, as shown in Listing 6:
Listing 6. Creating the runner with its behaviors
var SnailBait = function () {
   ...
   this.jumpBehavior = {
      execute: function (sprite, time, fps) {
         // Implement jumping here
      },
      ...
   };
   ...
   this.runner = new Sprite('runner',          // type
                            this.runnerArtist, // artist

                            [ this.runBehavior, // behaviors
                              this.jumpBehavior,
                              this.fallBehavior ]);
   ...
};
Now that the infrastructure is in place for initiating a jump, I can concentrate solely on the jump behavior.
The jump behavior
Listing 7, which shows an initial implementation of the runner's jump behavior, is functionally equivalent to the code in Listing 2. If the runner's
jumping attribute — which is set by the runner's
jump() method (see Listing 5) — is
false, the behavior does nothing. The behavior also does nothing if the runner is on the top track.
Listing 7. An unrealistic jump behavior implementation
var SnailBait = function () {
   ...
   this.jumpBehavior = {
      ...
      execute: function (sprite, time, fps) {
         if ( ! sprite.jumping || sprite.track === 3) {
            return;
         }

         sprite.track++;

         sprite.top = snailBait.calculatePlatformTop(sprite.track) -
                      snailBait.RUNNER_CELLS_HEIGHT;

         sprite.jumping = false;
      }
   },
   ...
};
If the runner is jumping and she's not on the top track, the jump behavior implemented in Listing 7 moves her to the next track and completes the jump by setting her
jumping attribute to
false.
Just like the jumping implementation in Listing 2, the implementation in Listing 7 instantly moves the runner from one track to another. For a realistic jumping motion, you must gradually move the runner from one track to another over a specific period of time.
Timed animations: Stopwatches
All the motion that I've implemented so far in Snail Bait has been constant; for example, all the game's sprites, except for the runner, move continuously in the horizontal direction, and buttons and snails constantly pace back and forth on their platforms. (See the Scrolling the background section from the second article in this series to see how that motion is implemented.) Coins, sapphires, and rubies can also slowly bob up and down without ever stopping to take a break.
Jumping, however, is not constant; it has a definite start and end. To implement jumping, therefore, I need a way to constantly monitor how much time has elapsed since a jump began. What I need is a stopwatch.
Listing 8 shows the implementation of a
Stopwatch JavaScript object:
Listing 8. A
Stopwatch object
// Stopwatch..................................................................
//
// You can start and stop a stopwatch and you can find out the elapsed
// time the stopwatch has been running. After you stop a stopwatch,
// its getElapsedTime() method returns the elapsed time
// between the start and stop.

Stopwatch = function () {
   this.startTime = 0;
   this.running = false;
   this.elapsed = undefined;
   this.paused = false;
   this.startPause = 0;
   this.totalPausedTime = 0;
};

// You can get the elapsed time while the stopwatch is running, or after it's
// stopped.

Stopwatch.prototype = {
   start: function () {
      this.startTime = +new Date();
      this.running = true;
      this.totalPausedTime = 0;
      this.startPause = 0;
   },

   stop: function () {
      if (this.paused) {
         this.unpause();
      }

      this.elapsed = (+new Date()) - this.startTime - this.totalPausedTime;
      this.running = false;
   },

   pause: function () {
      this.startPause = +new Date();
      this.paused = true;
   },

   unpause: function () {
      if (!this.paused) {
         return;
      }

      this.totalPausedTime += (+new Date()) - this.startPause;
      this.startPause = 0;
      this.paused = false;
   },

   getElapsedTime: function () {
      if (this.running) {
         return (+new Date()) - this.startTime - this.totalPausedTime;
      }
      else {
         return this.elapsed;
      }
   },

   isPaused: function () {
      return this.paused;
   },

   isRunning: function () {
      return this.running;
   },

   reset: function () {
      this.elapsed = 0;
      this.startTime = +new Date();
      this.running = false;
      this.totalPausedTime = 0;
      this.startPause = 0;
   }
};
You can start, stop, pause, unpause, and reset the stopwatch object in Listing 8. You can also get its elapsed time, and you can determine whether a stopwatch is running or paused.
In the Freezing the game section of the third article in this series, I discussed how to resume a paused game exactly where it left off by accounting for the amount of time the game was paused. Like the game itself, paused stopwatches must resume exactly where they left off, so they also account for the amount of time they've been paused.
The stopwatch implementation, though simple, is of great importance because it lets you implement behaviors that last for a finite amount of time — in this case, more-natural jumping.
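To see the pause bookkeeping in isolation, here is a condensed stand-alone sketch of the same idea (trimmed to start/pause/unpause/stop and driven by a busy-wait so it runs synchronously); the expected value is approximate because timer resolution varies:

```javascript
function Stopwatch() {
   this.startTime = 0;
   this.running = false;
   this.elapsed = undefined;
   this.paused = false;
   this.startPause = 0;
   this.totalPausedTime = 0;
}

Stopwatch.prototype.start = function () {
   this.startTime = +new Date();
   this.running = true;
   this.totalPausedTime = 0;
};
Stopwatch.prototype.pause = function () {
   this.startPause = +new Date();
   this.paused = true;
};
Stopwatch.prototype.unpause = function () {
   if (!this.paused) return;
   this.totalPausedTime += (+new Date()) - this.startPause;
   this.paused = false;
};
Stopwatch.prototype.stop = function () {
   if (this.paused) this.unpause();
   this.elapsed = (+new Date()) - this.startTime - this.totalPausedTime;
   this.running = false;
};

// Busy-wait so the example is synchronous and self-contained
function busyWait(ms) {
   var until = (+new Date()) + ms;
   while (+new Date() < until) { /* spin */ }
}

var watch = new Stopwatch();
watch.start();
busyWait(50);  // 50 ms of "game time"
watch.pause();
busyWait(50);  // 50 ms while paused -- must not count
watch.unpause();
watch.stop();

console.log(watch.elapsed); // roughly 50, not 100
```

The key design point is that the stopwatch never accumulates paused time; it subtracts totalPausedTime at the end, which is exactly how the game itself resumes where it left off after a pause.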
Refining the jump behavior
Now that I have stopwatches, I'll use them to refine the jump behavior. First, I modify the
equipRunner() method from Listing 5 as shown in Listing 9:
Listing 9. Revised
equipRunner() method
var SnailBait = function () {
   ...
   this.RUNNER_JUMP_HEIGHT = 120;    // pixels
   this.RUNNER_JUMP_DURATION = 1000; // milliseconds
   ...
};

SnailBait.prototype = {
   ...
   equipRunnerForJumping: function () {
      this.runner.JUMP_HEIGHT = this.RUNNER_JUMP_HEIGHT;
      this.runner.JUMP_DURATION = this.RUNNER_JUMP_DURATION;
      this.runner.jumping = false;

      this.runner.ascendStopwatch = new Stopwatch(this.runner.JUMP_DURATION/2);
      this.runner.descendStopwatch = new Stopwatch(this.runner.JUMP_DURATION/2);

      this.runner.jump = function () {
         if (this.jumping) // 'this' is the runner
            return;

         this.jumping = true;
         this.runAnimationRate = 0; // Freeze the runner while jumping
         this.verticalLaunchPosition = this.top;
         this.ascendStopwatch.start();
      };
   },

   equipRunner: function () {
      ...
      this.equipRunnerForJumping();
   },
   ...
};
The revised implementation of
equipRunner() invokes a new method:
equipRunnerForJumping(). As its name implies, it equips the runner for jumping. That method creates two stopwatches:
runner.ascendStopwatch for the jump's ascent and
runner.descendStopwatch for its descent.
When the jump begins, the
jump() method starts the runner's ascend stopwatch, as you can see from Listing 9. That method also sets the runner's run animation rate — which determines how quickly the runner progresses through its run animation — to zero to freeze the runner while she's in the air. The
jump() method also records the runner's vertical position so the runner can return to that position when the jump completes.
All the runner attributes set in Listing 9 are summarized in Table 1:
Table 1. The runner's jump-related attributes
Next, in Listing 10, I refactor the jump behavior originally implemented in Listing 7:
Listing 10. The jump behavior, revisited
var SnailBait = function () {
   this.jumpBehavior = {
      ...
      execute: function (sprite, context, time, fps) {
         if ( ! sprite.jumping) {
            return;
         }

         if (this.isJumpOver(sprite)) {
            sprite.jumping = false;
            return;
         }

         if (this.isAscending(sprite)) {
            if ( ! this.isDoneAscending(sprite)) this.ascend(sprite);
            else                                 this.finishAscent(sprite);
         }
         else if (this.isDescending(sprite)) {
            if ( ! this.isDoneDescending(sprite)) this.descend(sprite);
            else                                  this.finishDescent(sprite);
         }
      }
   },
   ...
The jump behavior in Listing 10 is the implementation of a high-level abstraction that leaves jumping details to other methods such as
ascend() and
isDescending(). Now all that remains is to fill in the details by using the runner's ascend and descend stopwatches to implement the following methods:
isJumpOver()
ascend()
isAscending()
isDoneAscending()
finishAscent()
descend()
isDescending()
isDoneDescending()
finishDescent()
Linear motion
For now, the methods I list above produce linear motion, meaning that the runner ascends and descends at a constant rate of speed, as depicted in Figure 4:
Figure 4. Smooth linear jump sequence
Linear motion results in an unnatural jumping motion, because gravity should be constantly accelerating or decelerating the runner when she's descending or ascending, respectively. In the next installment I'll reimplement those methods so they result in nonlinear motion, as depicted in Figure 1. For now, I'll stick to the simpler case of linear motion.
First, Listing 11 shows the implementation of the jump behavior's
isJumpOver() method, which is the same whether the motion is linear or nonlinear: A jump is over if neither stopwatch is running.
Listing 11. Determining if a jump is over
var SnailBait = function () {
   ...
   this.jumpBehavior = {
      isJumpOver: function (sprite) {
         return !sprite.ascendStopwatch.isRunning() &&
                !sprite.descendStopwatch.isRunning();
      },
      ...
   };
   ...
};
The jump behavior's methods dealing with ascending are shown in Listing 12:
Listing 12. Ascending
var SnailBait = function () {
   ...
   this.jumpBehavior = {
      ...
      isAscending: function (sprite) {
         return sprite.ascendStopwatch.isRunning();
      },

      ascend: function (sprite) {
         var elapsed = sprite.ascendStopwatch.getElapsedTime(),
             deltaY  = elapsed / (sprite.JUMP_DURATION/2) * sprite.JUMP_HEIGHT;

         sprite.top = sprite.verticalLaunchPosition - deltaY; // Moving up
      },

      isDoneAscending: function (sprite) {
         return sprite.ascendStopwatch.getElapsedTime() > sprite.JUMP_DURATION/2;
      },

      finishAscent: function (sprite) {
         sprite.jumpApex = sprite.top;
         sprite.ascendStopwatch.stop();
         sprite.descendStopwatch.start();
      }
   };
   ...
};
The methods in Listing 12 are summarized in Table 2:
Table 2.
jumpBehavior's ascend methods
Recall that the runner's
jump() method, shown in Listing 9, starts the runner's ascend stopwatch. Subsequently, that running stopwatch causes the jump behavior's
isAscending() method to return
true temporarily. Until the runner is done ascending — meaning the jump is halfway over — the runner's jump behavior repeatedly calls the
ascend() method, as you can see from Listing 10.
Ascending and descending
The
ascend() method incrementally moves the runner higher. It calculates the number of pixels to move the runner for each animation frame by dividing the stopwatch's elapsed time (milliseconds) by one half of the jump's duration (milliseconds) and multiplying that value by the height of the jump (pixels). The milliseconds cancel each other out, yielding pixels as the unit of measure for the
deltaY value. That value, therefore, represents the number of pixels to move the runner in the vertical direction for the current animation frame.
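Plugging in the constants from Listing 9 makes the arithmetic concrete. The following stand-alone function restates the calculation that ascend() performs, pulled out for illustration:

```javascript
// Linear interpolation used during the ascent: how far (in pixels) the
// runner has risen after `elapsed` milliseconds of the jump.
function ascendDeltaY(elapsed, jumpDuration, jumpHeight) {
   return elapsed / (jumpDuration / 2) * jumpHeight;
}

// With JUMP_DURATION = 1000 ms and JUMP_HEIGHT = 120 px:
console.log(ascendDeltaY(250, 1000, 120)); // 60  -- halfway up at 250 ms
console.log(ascendDeltaY(500, 1000, 120)); // 120 -- full height at 500 ms
```

Because the ascent lasts JUMP_DURATION/2 milliseconds, the ratio elapsed / (JUMP_DURATION/2) sweeps linearly from 0 to 1, so the runner rises at a constant speed.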
When the runner finishes her ascent, the jump behavior's
finishAscent() method records the sprite's position as the jump apex, stops the ascend stopwatch, and starts the descend stopwatch.
The jump behavior methods associated with descending are shown in Listing 13:
Listing 13. Descending
var SnailBait = function () {
   ...
   this.jumpBehavior = {
      ...
      isDescending: function (sprite) {
         return sprite.descendStopwatch.isRunning();
      },

      descend: function (sprite, verticalVelocity, fps) {
         var elapsed = sprite.descendStopwatch.getElapsedTime(),
             deltaY  = elapsed / (sprite.JUMP_DURATION/2) * sprite.JUMP_HEIGHT;

         sprite.top = sprite.jumpApex + deltaY; // Moving down
      },

      isDoneDescending: function (sprite) {
         return sprite.descendStopwatch.getElapsedTime() > sprite.JUMP_DURATION/2;
      },

      finishDescent: function (sprite) {
         sprite.top = sprite.verticalLaunchPosition;
         sprite.descendStopwatch.stop();
         sprite.jumping = false;
         sprite.runAnimationRate = snailBait.RUN_ANIMATION_RATE;
      }
   };
   ...
};
The methods in Listing 13 are summarized in Table 3:
Table 3.
jumpBehavior's descend methods
There's a lot of symmetry between the ascend methods in Listing 12 and the descend methods in Listing 13. Both
ascend() and
descend() calculate the number of pixels to move the runner in the vertical direction for the current frame in exactly the same manner. The
descend() method, however, adds that value to the jump's apex, whereas
ascend() subtracts it from the launch position. (Recall that the Canvas Y axis increases from top to bottom.)
When the jump's descent is finished,
finishDescent() puts the runner back at the same vertical position where she began the jump and restarts her run animation.
Next time
In the next article in this series, I'll show you how to implement nonlinear motion to produce the realistic jumping motion shown in Figure 1. Along the way, I'll show you how to warp time itself so you can produce nonlinear effects for any other function of time, such as color change.

Resources
- HTML5 fundamentals: Learn HTML5 basics with this developerWorks knowledge path.
Get products and technologies
- Replica Island: You can download the source for this popular open source platform video game for Android. Most of Snail Bait's sprites are from Replica Island (used with permission).
Functor
Functor is a type class that abstracts over type constructors that can be map'ed over. Examples of such type constructors are List, Option, and Future.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// Example implementation for Option
implicit val functorForOption: Functor[Option] = new Functor[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa match {
    case None    => None
    case Some(a) => Some(f(a))
  }
}
A Functor instance must obey two laws:

- Composition: Mapping with f and then again with g is the same as mapping once with the composition of f and g: fa.map(f).map(g) = fa.map(f.andThen(g))
- Identity: Mapping with the identity function is a no-op: fa.map(x => x) = fa
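These laws can be spot-checked directly. The following stand-alone sketch (mirroring, not importing, the definitions above) exercises both laws against the Option instance for a couple of sample values:

```scala
object FunctorLawsCheck extends App {
  // Minimal Functor trait mirroring the definition above
  trait Functor[F[_]] {
    def map[A, B](fa: F[A])(f: A => B): F[B]
  }

  val functorForOption: Functor[Option] = new Functor[Option] {
    def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa match {
      case None    => None
      case Some(a) => Some(f(a))
    }
  }

  val f: Int => Int = _ + 1
  val g: Int => Int = _ * 2

  for (fa <- List(Option(3), Option.empty[Int])) {
    // Composition: map f then g == map (f andThen g)
    assert(functorForOption.map(functorForOption.map(fa)(f))(g) ==
           functorForOption.map(fa)(f.andThen(g)))
    // Identity: mapping the identity function is a no-op
    assert(functorForOption.map(fa)(identity) == fa)
  }
  println("laws hold for these samples")
}
```

Such sample-based checks are no proof, of course; in practice the laws are usually tested property-based over many generated values.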
A different view
Another way of viewing a Functor[F] is that F allows the lifting of a pure function A => B into the effectful function F[A] => F[B]. We can see this if we re-order the map signature above.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]

  def lift[A, B](f: A => B): F[A] => F[B] =
    fa => map(fa)(f)
}
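As a small stand-alone sketch (again mirroring the trait rather than using Cats itself), lifting addition into Option's context produces a reusable effectful function:

```scala
object LiftDemo extends App {
  trait Functor[F[_]] {
    def map[A, B](fa: F[A])(f: A => B): F[B]
    def lift[A, B](f: A => B): F[A] => F[B] = fa => map(fa)(f)
  }

  val optionFunctor: Functor[Option] = new Functor[Option] {
    def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
  }

  // Lift the pure function once...
  val incOpt: Option[Int] => Option[Int] =
    optionFunctor.lift((x: Int) => x + 1)

  // ...then reuse it on effectful values.
  assert(incOpt(Some(1)) == Some(2))
  assert(incOpt(None) == None)
  println("lift works as expected")
}
```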
Functors for effect management
The F in Functor is often referred to as an "effect" or "computational context." Different effects will abstract away different behaviors with respect to fundamental functions like map. For instance, Option's effect abstracts away potentially missing values, where map applies the function only in the Some case but otherwise threads the None through.
Taking this view, we can see Functor as the ability to work with a single effect - we can apply a pure function to a single effectful value without needing to "leave" the effect.
Functors compose
If you’re ever found yourself working with nested data types such as
Option[List[A]] or a
List[Either[String, Future[A]]] and tried to
map over it, you’ve most likely found yourself doing something
like
_.map(_.map(_.map(f))). As it turns out,
Functors compose, which means if
F and
G have
Functor instances, then so does
F[G[_]].
Such composition can be achieved via the Functor#compose method.
import cats.Functor
import cats.instances.list._
import cats.instances.option._
val listOption = List(Some(1), None, Some(2))
// listOption: List[Option[Int]] = List(Some(1), None, Some(2))

// Through Functor#compose
Functor[List].compose[Option].map(listOption)(_ + 1)
// res1: List[Option[Int]] = List(Some(2), None, Some(3))
This approach will allow us to use composition without wrapping the value in question, but can introduce complications in more complex use cases. For example, if we need to call another function which requires a Functor and we want to use the composed Functor, we would have to explicitly pass in the composed instance during the function call or create a local implicit.
def needsFunctor[F[_]: Functor, A](fa: F[A]): F[Unit] =
  Functor[F].map(fa)(_ => ())

def foo: List[Option[Unit]] = {
  val listOptionFunctor = Functor[List].compose[Option]
  type ListOption[A] = List[Option[A]]
  needsFunctor[ListOption, Int](listOption)(listOptionFunctor)
}
We can make this nicer at the cost of boxing with the Nested data type.
import cats.data.Nested
import cats.syntax.functor._
val nested: Nested[List, Option, Int] = Nested(listOption)
// nested: cats.data.Nested[List,Option,Int] = Nested(List(Some(1), None, Some(2)))

nested.map(_ + 1)
// res3: cats.data.Nested[List,Option,Int] = Nested(List(Some(2), None, Some(3)))
The Nested approach, being a distinct type from its constituents, will resolve the usual way modulo possible SI-2712 issues (which can be addressed through partial unification), but requires syntactic and runtime overhead from wrapping and unwrapping.
sequence_enumerate
paddle.static.nn.sequence_enumerate(input, win_size, pad_value=0, name=None)
api_attr: Static Graph
Generate a new sequence for the input index sequence with shape [d_1, win_size], which enumerates all the sub-sequences with length win_size of the input with shape [d_1, 1], padded by pad_value if necessary in generation.
Please note that the input must be a LodTensor.
Input x:
    x.lod  = [[0, 3, 5]]
    x.data = [[1], [2], [3], [4], [5]]
    x.dims = [5, 1]

Attrs:
    win_size  = 2
    pad_value = 0

Output:
    out.lod  = [[0, 3, 5]]
    out.data = [[1, 2], [2, 3], [3, 0], [4, 5], [5, 0]]
    out.dims = [5, 2]
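To make the windowing rule concrete, here is a small pure-Python sketch (not the Paddle implementation; enumerate_windows is a hypothetical helper name) that reproduces the example above:

```python
def enumerate_windows(data, lod, win_size, pad_value=0):
    """Enumerate length-win_size sub-sequences of each sequence.

    data is a flat list of indices; lod gives the sequence offsets,
    e.g. [0, 3, 5] means two sequences: data[0:3] and data[3:5].
    Windows that run past the end of a sequence are padded with pad_value.
    """
    out = []
    for start, end in zip(lod, lod[1:]):
        for i in range(start, end):
            window = data[i:min(i + win_size, end)]
            window += [pad_value] * (win_size - len(window))
            out.append(window)
    return out

# Reproduces the documented example: win_size=2, pad_value=0
print(enumerate_windows([1, 2, 3, 4, 5], [0, 3, 5], 2))
# [[1, 2], [2, 3], [3, 0], [4, 5], [5, 0]]
```

Note that the windows never cross a sequence boundary: the lod offsets [0, 3, 5] keep [3, 4] from being emitted, which is why the third window is padded to [3, 0].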
- Parameters
input (Variable) – The input variable, which is an index sequence; it should be a LodTensor with shape [d_1, 1] and 1-level lod info. The data type should be int32 or int64.
win_size (int) – The window size for enumerating all sub-sequences.
pad_value (int, optional) – The padding value, default 0.
name (str, optional) – For detailed information, please refer to Name. Usually name is no need to set and None by default.
- Returns: The enumerated sequence variable, which is a LoDTensor with shape [d_1, win_size] and 1-level lod info. The data type is the same as input.

Return Type: Variable
Examples
import paddle

paddle.enable_static()

x = paddle.static.data(name='x', shape=[-1, 1], dtype='int32', lod_level=1)
out = paddle.static.nn.sequence_enumerate(input=x, win_size=3, pad_value=0)
Quantity box to appear up to 1000
Asked by drverde on May 22, 2011 at 03:30 PM
Hello!
I am trying to create a simple order form, but we have more than 100 products. I need the user to be able to select what item(s) he wants to order, and the quantity for each one. I see that this is almost done with the default Purchase Order tool, but the quantity is a dropdown list. And this is a problem for me, because someone could order up to 1000 products, and you can see that a dropdown list with 1000 elements is not usable. I was thinking of creating a check list, and it would have been cool if the quantity would appear as a text box next to it if a product is checked. I don't know, is this possible?
The easiest way, I suppose, is to create a text box for every product and have the quantity be the input field. But then, when I receive the email, I would have to look through 100 items and see which of them were selected and in what quantity. If I use the check list, maybe only the checked items will appear. But I could be very well mistaken.
Anyway, your service is the greatest and that is why I think that using a JotForm for this job would be great. We will most likely purchase the 1000/month option, but I need to know if there is a way to get around my "little" problem.
I hope I was clear enough about what I require, but will gladly add more specifications if needed. Please help!
Thank you very much!
Sincerely,
Liviu Lungu
- JotForm Support
Hello Liviu
When you stated that a dropdown list with 1000 elements is "not usable", were you referring to the impracticality of typing the numbers 1 to 1000 and hitting the return (Enter) key after each entry? If so, the tool at can do that job for for you. As a matter of fact, you could easily generate a sequential list of 100,000 numbers with that tool. However, your idea of each product having a check box that would reveal a text box if checked, where a user can then enter an amount, is a good one but to my knowledge, not presently possible. A variation of that might be to have the dropdowns only appear when items are selected but here again, I don't believe that's currently possible.
Regarding your other concern, only items selected by the user (and the total, if the option to display that on the form was chosen when configuring the purchase order field) would be displayed in the email (and on the submissions page of the form). I have created a form containing just a few products that you can clone and test for yourself to see how submissions are displayed. The form's URL is
Hopefully this helps clarify things but if it doesn't please feel free to ask for more information. As always, our team would be glad to assist you.
~ Wayne
Thank you very much Wayne!
Sorry for the delay but I've been away.
I think your advice is just what I need. I was thinking that a dropdown with 1000 elements would pop up with most of the elements off screen. But that's not the case, as you well know, and I didn't.

I think that I'll be able to have a form meeting my requirements in full. I will try to make the complete form and then I'll make it live. I'm sure that pretty soon I'll have to get a subscription for 1000 forms a month.

Thanks again! As usual, great service, and quick too.
Oh, one more thing.
Isn't there a way to have a custom currency? I mean, I just want the word "lei" to appear after the price. Ideally, the plural form would be able to differ from the singular. Is there any way to do this?

I've seen that there are a few currencies to choose from, but the one I need isn't in the list. I was thinking that just typing in the currency I need could be a good solution for me, and I'm sure for others as well.
Thanks again!
- JotForm Support
You're quite welcome, drverde. We're always happy to help our users and to know that the advice given proved worthwhile.
~ Wayne
P.S. Sorry, I didn't see your second query until I had posted my acknowledgement of your gratitude. I'll have to check and see if such customization is possible. In the meantime, perhaps one of my colleagues knows about this and may well provide an answer before I get back to you.
Thanks again!
I was going to try the "Purchase Order" tool, but it has some inconveniences that make it unusable for me. Perhaps I should first explain what I'm trying to accomplish with this form, and maybe after that you can tell me whether JotForm is the right solution, or whether I need a more complicated tool.

So, we have about 10 companies and from each one we have 30-50 products. Because there are many, many products, I wanted to use a Form Collapse for every company. Say the user knows he wants a product from company A and 2 products from company D; he doesn't have to scroll through all the products. He un-collapses company A, chooses the product, then goes straight to company D and marks the products he wants over there.

Here is where I found the first problem with the Purchase Order tool. I can only have one in a form, but I need about 10, one for each company, because the Form Collapse tool can't separate products within one Purchase Order tool.

For my products I don't want to display the price; we just need the name of the product, the check box if one desires to select that product, and the quantity dropdown or text box (preferably a text box). Our clients will have different prices, depending on what we negotiated with each one. The Purchase Order tool requires a price, though.
In a nutshell that is what I am trying to create. Do you think that JotForm has a solution for this?
Thank you very much! I really appreciate all the help!
- JotForm Support
Hi,
I believe that this cannot be made possible under Jotform. The Form Builder does not allow more than one payment field in forms. A workaround I suggest is to consolidate all the products in one payment field, although form collapses or any other field types cannot be inserted inside a payment field. Also, a single payment field can only accept one payee. I figured that might be a problem if you want to have the total payment divided across the corresponding sellers.
Another workaround I could see fit is to create a separate form for each merchant. You can then have those forms linked in a single page or form.
Unfortunately, that is all I can think of that might work for you, given Jotform's limitations. Let me know if you require further assistance. Thank you.
Neil
Thank you Neil!
I understand. But I think I have found a good solution with the Matrix tool. I use it to list all the products, and use the text box option for the quantity. I use one Matrix tool for every product category. This solution is almost perfect for what I need. I wish there were the possibility to add more predefined columns (to add a little description in a new column to the left of each product). Could that option be added in future updates of this great service? That would really be a huge door opener for new possibilities, in my opinion.
I am so happy that I can use JotForm for this project of mine. Thanks again and keep up the good work!
- JotForm Support
Hello,
It's good to hear that you have found an alternate solution using the Matrix tool. However, what you're asking for, adding an extra column to the left of the rows label, is not possible. I have forwarded your request though to our user requests list.
We will have you updated on the status of this request once our developers have been notified. Please note that we cannot guarantee a lead time for completion or if the request will be approved or not. Thank you for supporting Jotform.
Neil
Object detectors, such as YOLO, Faster R-CNNs, and Single Shot Detectors (SSDs), detect the presence of objects in images and generate four sets of (x, y)-coordinates that represent the bounding box of an object in an image.
Obtaining the bounding boxes of an object is a good start but the bounding box itself doesn’t tell us anything about (1) which pixels belong to the foreground object and (2) which pixels belong to the background.
That begs the question:
Is it possible to generate a mask for each object in our image, thereby allowing us to segment the foreground object from the background?
Is such a method even possible?
The answer is yes — we just need to perform instance segmentation using the Mask R-CNN architecture.
To learn how to apply Mask R-CNN with OpenCV to both images and video streams, just keep reading!
I’ll then show you how to apply Mask R-CNN with OpenCV to both images and video streams.
Let’s get started!
Instance segmentation vs. Semantic segmentation
Figure 1: Image classification (top-left), object detection (top-right), semantic segmentation (bottom-left), and instance segmentation (bottom-right). We’ll be performing instance segmentation with Mask R-CNN in this tutorial. (source)
Explaining the differences between traditional image classification, object detection, semantic segmentation, and instance segmentation is best done visually.
When performing traditional image classification our goal is to predict a set of labels to characterize the contents of an input image (top-left).
Object detection builds on image classification, but this time allows us to localize each object in an image. The image is now characterized by:
- Bounding box (x, y)-coordinates for each object
- An associated class label for each bounding box
An example of semantic segmentation can be seen in the bottom-left. Semantic segmentation algorithms require us to associate every pixel in an input image with a class label (including a class label for the background).
Pay close attention to our semantic segmentation visualization — notice how each object is indeed segmented but each “cube” object has the same color.
While semantic segmentation algorithms are capable of labeling every object in an image they cannot differentiate between two objects of the same class.
This behavior is especially problematic if two objects of the same class partially occlude each other, as demonstrated by the two purple cubes: we have no idea where the boundary of one object ends and the next begins.
Instance segmentation algorithms, on the other hand, compute a pixel-wise mask for every object in the image, even if the objects are of the same class label (bottom-right). Here you can see that each of the cubes has their own unique color, implying that our instance segmentation algorithm not only localized each individual cube but predicted their boundaries as well.
The Mask R-CNN architecture we’ll be discussing in this tutorial is an example of an instance segmentation algorithm.
What is Mask R-CNN?

The Mask R-CNN architecture builds on a line of object detection work that began with the original R-CNN:
Figure 2: The original R-CNN architecture (source: Girshick et al., 2013)
The original R-CNN algorithm is a four-step process:
- Step #1: Input an image to the network.
- Step #2: Extract region proposals (i.e., regions of an image that potentially contain objects) using an algorithm such as Selective Search.
- Step #3: Use transfer learning, specifically feature extraction, to compute features for each proposal (which is effectively an ROI) using the pre-trained CNN.
- Step #4: Classify each proposal using the extracted features with a Support Vector Machine (SVM).
The reason this method works is due to the robust, discriminative features learned by the CNN.
However, the problem with the R-CNN method is that it’s incredibly slow. Furthermore, we’re not actually learning to localize via a deep neural network; we’re effectively just building a more advanced HOG + Linear SVM detector.
To improve upon the original R-CNN, Girshick et al. published the Fast R-CNN algorithm:
Figure 3: The Fast R-CNN architecture (source: Girshick et al., 2015).
Similar to the original R-CNN, Fast R-CNN still utilizes Selective Search to obtain region proposals; however, the novel contribution of the paper was the Region of Interest (ROI) Pooling module.
ROI Pooling works by extracting a fixed-size window from the feature map and using these features to obtain the final class label and bounding box. The primary benefit here is that the network is now, effectively, end-to-end trainable:
- We input an image and associated ground-truth bounding boxes
- Extract the feature map
- Apply ROI pooling and obtain the ROI feature vector
- And finally, use the two sets of fully-connected layers to obtain (1) the class label predictions and (2) the bounding box locations for each proposal.
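The ROI Pooling idea above can be illustrated with a small NumPy sketch. This is not the implementation from the paper (which pools over projected proposal coordinates on a CNN feature map); it simply shows how an arbitrary-sized crop is reduced to a fixed-size grid by taking the max activation inside each bin:

```python
import numpy as np

def roi_pool(feature_map, box, output_size=2):
    """Max-pool an arbitrary-sized ROI crop into a fixed output_size x output_size grid."""
    # box = (startY, endY, startX, endX) on the feature map
    startY, endY, startX, endX = box
    roi = feature_map[startY:endY, startX:endX]
    h, w = roi.shape
    pooled = np.zeros((output_size, output_size), dtype=roi.dtype)
    # divide the crop into an output_size x output_size grid of bins
    ys = np.linspace(0, h, output_size + 1).astype(int)
    xs = np.linspace(0, w, output_size + 1).astype(int)
    for i in range(output_size):
        for j in range(output_size):
            # keep the maximum activation inside each bin
            pooled[i, j] = roi[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return pooled

fm = np.arange(36).reshape(6, 6)   # toy 6x6 "feature map"
print(roi_pool(fm, (0, 4, 0, 6)))
```

Regardless of the proposal’s size, the pooled output always has the same fixed dimensions, which is what makes the downstream fully-connected layers possible.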
While the network is now end-to-end trainable, performance suffered dramatically at inference (i.e., prediction) by being dependent on Selective Search.
To make the R-CNN architecture even faster we need to incorporate the region proposal directly into the R-CNN:
Figure 4: The Faster R-CNN architecture (source: Ren et al., 2015)

The Faster R-CNN paper by Ren et al. introduced the Region Proposal Network (RPN), which bakes region proposal directly into the architecture, alleviating the need for the Selective Search algorithm.
As a whole, the Faster R-CNN architecture is capable of running at approximately 7-10 FPS, a huge step towards making real-time object detection with deep learning a reality.
The Mask R-CNN algorithm builds on the Faster R-CNN architecture with two major contributions:
- Replacing the ROI Pooling module with a more accurate ROI Align module
- Inserting an additional branch out of the ROI Align module
This additional branch accepts the output of the ROI Align and then feeds it into two CONV layers.
The output of the CONV layers is the mask itself.
We can visualize the Mask R-CNN architecture in the following figure:
Figure 5: The Mask R-CNN work by He et al. replaces the ROI Pooling module with a more accurate ROI Align module. The output of the ROI module is then fed into two CONV layers. The output of the CONV layers is the mask itself.
Notice the branch of two CONV layers coming out of the ROI Align module — this is where our mask is actually generated.
As we know, the Faster R-CNN/Mask R-CNN architectures leverage a Region Proposal Network (RPN) to generate regions of an image that potentially contain an object.
Each of these regions is ranked based on their “objectness score” (i.e., how likely it is that a given region could potentially contain an object) and then the top N most confident objectness regions are kept.
In the original Faster R-CNN publication, Ren et al. set N=2,000, but in practice, we can get away with a much smaller N, such as N={10, 100, 200, 300}, and still obtain good results.
He et al. set N=300 in their publication which is the value we’ll use here as well.
Each of the 300 selected ROIs goes through three parallel branches of the network:
- Label prediction
- Bounding box prediction
- Mask prediction
Figure 5 above visualizes these branches.
During prediction, each of the 300 ROIs goes through non-maxima suppression and the top 100 detection boxes are kept, resulting in a 4D tensor of 100 x L x 15 x 15, where L is the number of class labels in the dataset and 15 x 15 is the size of each of the L masks.

The Mask R-CNN we’re using here today was trained on the COCO dataset, which has L=90 classes, so the resulting volume size from the mask module of the Mask R-CNN is 100 x 90 x 15 x 15.
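To make that tensor layout concrete, here is a tiny NumPy sketch using dummy data in place of a real network output. The key point is the indexing: for detection i with predicted class classID, the relevant mask is the (i, classID) slice:

```python
import numpy as np

# dummy stand-in for the Mask R-CNN mask output:
# 100 detections x 90 COCO classes x 15 x 15 mask per class
masks = np.random.rand(100, 90, 15, 15)

# for detection i with predicted class label classID,
# the relevant mask is the (i, classID) 15x15 slice
i, classID = 0, 2
mask = masks[i, classID]
print(mask.shape)  # (15, 15)
```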
To visualize the Mask R-CNN process take a look at the figure below:
Figure 6: A visualization of Mask R-CNN producing a 15 x 15 mask, the mask resized to the original dimensions of the image, and then finally overlaying the mask on the original image. (source: Deep Learning for Computer Vision with Python, ImageNet Bundle)
Here you can see that we start with our input image and feed it through our Mask R-CNN network to obtain our mask prediction.
The predicted mask is only 15 x 15 pixels so we resize the mask back to the original input image dimensions.
Finally, the resized mask can be overlaid on the original input image. For a more thorough discussion on how Mask R-CNN works be sure to refer to:
- The original Mask R-CNN publication by He et al.
- My book, Deep Learning for Computer Vision with Python, where I discuss Mask R-CNNs in more detail, including how to train your own Mask R-CNNs from scratch on your own data.
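The resize-and-threshold step shown in Figure 6 can also be sketched in a few lines of NumPy. For illustration only, a tiny 2x2 “network” mask is upsampled with nearest-neighbor repetition via np.kron; the actual script uses cv2.resize with interpolation on the real 15 x 15 mask:

```python
import numpy as np

threshold = 0.3
# pretend 2x2 mask straight out of the "network" (values in [0, 1])
small_mask = np.array([[0.9, 0.1],
                       [0.2, 0.8]])

# nearest-neighbor upsample by an integer factor (here 3x);
# the real code uses cv2.resize to the bounding box dimensions
big_mask = np.kron(small_mask, np.ones((3, 3)))

# threshold into a binary (boolean) mask: True = object pixel
binary = big_mask > threshold
print(binary.shape, int(binary.sum()))
```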
Project structure
Our project today consists of two scripts, but there are several other files that are important.
I’ve organized the project in the following manner (as is shown by the tree command output directly in a terminal):
Our project consists of four directories:
- mask-rcnn-coco/ : The Mask R-CNN model files. There are four.
- colors.txt : This text file contains six colors to randomly assign to objects found in the image.
- images/ : I’ve provided three test images in the “Downloads”. Feel free to add your own images to test with.
- videos/ : This is an empty directory. I actually tested with large videos that I scraped from YouTube (credits are below, just above the “Summary” section). Rather than providing a really big zip, my suggestion is that you find a few videos on YouTube to download and test with. Or maybe take some videos with your cell phone and come back to your computer and use them!
- output/ : Another empty directory that will hold the processed videos (assuming you set the command line argument flag to output to this directory).
We’ll be reviewing two scripts today:
- mask_rcnn.py : This script will perform instance segmentation and apply a mask to the image so you can see where, down to the pixel, the Mask R-CNN thinks an object is.
- mask_rcnn_video.py : This video processing script uses the same Mask R-CNN and applies the model to every frame of a video file. The script then writes the output frame back to a video file on disk.
OpenCV and Mask R-CNN in images
Now that we’ve reviewed how Mask R-CNNs work, let’s get our hands dirty with some Python code.
Before we begin, ensure that your Python environment has OpenCV 3.4.2/3.4.3 or higher installed. You can follow one of my OpenCV installation tutorials to upgrade/install OpenCV. If you want to be up and running in 5 minutes or less, you can consider installing OpenCV with pip. If you have some other requirements, you might want to compile OpenCV from source.
Make sure you’ve used the “Downloads” section of this blog post to download the source code, trained Mask R-CNN, and example images.
From there, open up the mask_rcnn.py file and insert the following code:
First we’ll import our required packages on Lines 2-7. Notably, we’re importing NumPy and OpenCV. Everything else comes with most Python installations.
From there, we’ll parse our command line arguments:
Our script requires that command line argument flags and parameters be passed at runtime in our terminal. Our arguments are parsed on Lines 10-21, where the first two of the following are required and the rest are optional:
- --image : The path to our input image.
- --mask-rcnn : The base path to the Mask R-CNN files.
- --visualize (optional): A positive value indicates that we want to visualize how we extracted the masked region on our screen. Either way, we’ll display the final output on the screen.
- --confidence (optional): You can override the probability value of 0.5 which serves to filter weak detections.
- --threshold (optional): We’ll be creating a binary mask for each object in the image and this threshold value will help us filter out weak mask predictions. I found that a default value of 0.3 works pretty well.
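The argument parsing described above can be sketched as follows. The flag names and defaults are taken from the list; the snippet is illustrative rather than the exact script, and the sample values passed to parse_args are just for demonstration:

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-m", "--mask-rcnn", required=True,
    help="base path to mask-rcnn directory")
ap.add_argument("-v", "--visualize", type=int, default=0,
    help="whether or not we are going to visualize each instance")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
    help="minimum probability to filter weak detections")
ap.add_argument("-t", "--threshold", type=float, default=0.3,
    help="minimum threshold for pixel-wise mask segmentation")

# note: argparse converts "--mask-rcnn" into the key "mask_rcnn"
args = vars(ap.parse_args(["--image", "images/example_01.jpg",
                           "--mask-rcnn", "mask-rcnn-coco"]))
print(args["confidence"], args["threshold"])
```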
Now that our command line arguments are stored in the args dictionary, let’s load our labels and colors:
Lines 24-26 load the COCO object class LABELS . Today’s Mask R-CNN is capable of recognizing 90 classes including people, vehicles, signs, animals, everyday items, sports gear, kitchen items, food, and more! I encourage you to look at object_detection_classes_coco.txt to see the available classes.
From there we load the COLORS from the path, performing a couple array conversion operations (Lines 30-33).
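The color loading can be sketched like this, parsing from an inline string rather than the colors.txt file on disk (the six RGB triplets below are hypothetical placeholder values):

```python
import numpy as np

# hypothetical contents of colors.txt: one comma-separated RGB triplet per line
colors_text = "0,255,0\n0,0,255\n255,0,0\n0,255,255\n255,255,0\n255,0,255"

# split into lines, convert each triplet to ints, stack into a uint8 array
COLORS = colors_text.strip().split("\n")
COLORS = [np.array(c.split(",")).astype("int") for c in COLORS]
COLORS = np.array(COLORS, dtype="uint8")
print(COLORS.shape)  # (6, 3)
```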
Let’s load our model:
First, we build our weight and configuration paths (Lines 36-39), followed by loading the model via these paths (Line 44).
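The path construction can be sketched as below. The specific file names are from the standard TensorFlow Mask R-CNN Inception V2 COCO release and are an assumption here; check the files in your own mask-rcnn-coco/ directory:

```python
import os

args = {"mask_rcnn": "mask-rcnn-coco"}

# assumed file names from the Mask R-CNN Inception V2 COCO release
weightsPath = os.path.sep.join([args["mask_rcnn"],
    "frozen_inference_graph.pb"])
configPath = os.path.sep.join([args["mask_rcnn"],
    "mask_rcnn_inception_v2_coco_2018_01_28.pbtxt"])

# the network itself would then be loaded with OpenCV's dnn module:
# net = cv2.dnn.readNetFromTensorflow(weightsPath, configPath)
print(weightsPath)
```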
In the next block, we’ll load and pass an image through the Mask R-CNN neural net:
Here we:
- Load the input image and extract dimensions for scaling purposes later (Lines 47 and 48).
- Construct a blob via cv2.dnn.blobFromImage (Line 54). You can learn why and how to use this function in my previous tutorial.
- Perform a forward pass of the blob through the net while collecting timestamps (Lines 55-58). The results are contained in two important variables: boxes and masks .
Now that we’ve performed a forward pass of the Mask R-CNN on the image, we’ll want to filter + visualize our results. That’s exactly what this next for loop accomplishes. It is quite long, so I’ve broken it into five code blocks beginning here:
In this block, we begin our filter/visualization loop (Line 66).
We proceed to extract the classID and confidence of a particular detected object (Lines 69 and 70).
From there we filter out weak predictions by comparing the confidence to the command line argument confidence value, ensuring we exceed it (Line 74).
Assuming that’s the case, we’ll go ahead and make a clone of the image (Line 76). We’ll need this image later.
Then we scale our object’s bounding box as well as calculate the box dimensions (Lines 81-84).
Image segmentation requires that we find all pixels where an object is present. Thus, we’re going to place a transparent overlay on top of the object to see how well our algorithm is performing. In order to do so, we’ll calculate a mask:
On Lines 89-91, we extract the pixel-wise segmentation for the object as well as resize it to the original image dimensions. Finally we threshold the mask so that it is a binary array/image (Line 92).
We also extract the region of interest where the object resides (Line 95).
Both the mask and roi can be seen visually in Figure 8 later in the post.
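The box-crop and boolean-mask logic can be sketched with plain NumPy, using a tiny synthetic image and box in place of a real detection:

```python
import numpy as np

clone = np.zeros((8, 8, 3), dtype="uint8")   # stand-in for the cloned image
startY, endY, startX, endX = 2, 6, 2, 6      # hypothetical scaled bounding box

# a binary mask the size of the box: True where the object is
mask = np.zeros((endY - startY, endX - startX), dtype=bool)
mask[1:3, 1:3] = True

# the ROI is simply the box crop of the image
roi = clone[startY:endY, startX:endX]

# boolean indexing pulls out only the object pixels inside the ROI
object_pixels = roi[mask]
print(roi.shape, object_pixels.shape)
```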
For convenience, this next block accomplishes visualizing the mask , roi , and segmented instance if the --visualize flag is set via command line arguments:
In this block we:
- Check to see if we should visualize the ROI, mask, and segmented instance (Line 99).
- Convert our mask from boolean to integer where a value of “0” indicates background and “255” foreground (Line 102).
- Perform bitwise masking to visualize just the instance itself (Line 103).
- Show all three images (Lines 107-109).
Again, these visualization images will only be shown if the --visualize flag is set via the optional command line argument (by default these images won’t be shown).
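The boolean-to-uint8 conversion and instance extraction can be sketched as follows, with NumPy multiplication standing in for cv2.bitwise_and (synthetic pixel values, not real detections):

```python
import numpy as np

roi = np.full((4, 4, 3), 200, dtype="uint8")   # fake ROI pixel values
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

# "0" for background, "255" for foreground
visMask = (mask * 255).astype("uint8")

# keep only the instance pixels; equivalent in effect to
# cv2.bitwise_and(roi, roi, mask=visMask)
instance = roi * mask[:, :, None]
print(int(visMask.max()), instance[1, 1].tolist(), instance[0, 0].tolist())
```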
Now let’s continue on with visualization:
Line 113 extracts only the masked region of the ROI by passing the boolean mask array as our slice condition.
Then we’ll randomly select one of our six COLORS to apply our transparent overlay on the object (Line 118).
Subsequently, we’ll blend our masked region with the roi (Line 119) followed by placing this blended region into the clone image (Line 122).
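The transparent overlay is a simple weighted blend. A sketch with the 40%/60% color-to-pixel weighting described above (the pixel and color values are illustrative):

```python
import numpy as np

roi = np.full((2, 2, 3), 100, dtype="uint8")   # masked object pixels
color = np.array([0, 255, 0], dtype="uint8")   # a chosen overlay color

# 40% overlay color, 60% original pixels
blended = ((0.4 * color) + (0.6 * roi)).astype("uint8")
print(blended[0, 0].tolist())  # [60, 162, 60]
```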
Finally, we’ll draw the rectangle and textual class label + confidence value on the image as well as display the result!
To close out, we:
- Draw a colored bounding box around the object (Lines 125 and 126).
- Build our class label + confidence text as well as draw the text above the bounding box (Lines 130-132).
- Display the image until any key is pressed (Lines 135 and 136).
Let’s give our Mask R-CNN code a try!
Make sure you’ve used the “Downloads” section of the tutorial to download the source code, trained Mask R-CNN, and example images. From there, open up your terminal and execute the following command:
Figure 7: A Mask R-CNN applied to a scene of cars. Python and OpenCV were used to generate the masks.
In the above image, you can see that our Mask R-CNN has not only localized each of the cars in the image but has also constructed a pixel-wise mask as well, allowing us to segment each car from the image.
If we were to run the same command, this time supplying the --visualize flag, we can visualize the ROI, mask, and instance as well:
Figure 8: Using the --visualize flag, we can view the ROI, mask, and segmentation intermediate steps for our Mask R-CNN pipeline built with Python and OpenCV.
Let’s try another example image:
Figure 9: Using Python and OpenCV, we can perform instance segmentation using a Mask R-CNN.
Our Mask R-CNN has correctly detected and segmented both people, a dog, a horse, and a truck from the image.
Here’s one final example before we move on to using Mask R-CNNs in videos:
Figure 10: Here you can see me feeding a treat to the family beagle, Jemma. The pixel-wise map of each object identified is masked and transparently overlaid on the objects. This image was generated with OpenCV and Python using a pre-trained Mask R-CNN model.
In this image, you can see a photo of myself and Jemma, the family beagle.
Our Mask R-CNN is capable of detecting and localizing me, Jemma, and the chair with high confidence.
OpenCV and Mask R-CNN in video streams
Now that we’ve looked at how to apply Mask R-CNNs to images, let’s explore how they can be applied to videos as well.
Open up the mask_rcnn_video.py file and insert the following code:
First we import our necessary packages and parse our command line arguments.
There are two new command line arguments (replacing --image from the previous script):
- --input : The path to our input video.
- --output : The path to our output video (since we’ll be writing our results to disk in a video file).
Now let’s load our class LABELS , COLORS , and Mask R-CNN neural net :
Our LABELS and COLORS are loaded on Lines 24-31.
From there we define our weightsPath and configPath before loading our Mask R-CNN neural net (Lines 34-42).
Now let’s initialize our video stream and video writer:
Our video stream ( vs ) and video writer are initialized on Lines 45 and 46.
We attempt to determine the number of frames in the video file and display the total (Lines 49-53). If we’re unsuccessful, we’ll capture the exception and print a status message as well as set total to -1 (Lines 57-59). We’ll use this value to approximate how long it will take to process an entire video file.
Let’s begin our frame processing loop:
We begin looping over frames by defining an infinite while loop and capturing the first frame (Lines 62-64). The loop will process the video until completion which is handled by the exit condition on Lines 68 and 69.
We then construct a blob from the frame and pass it through the neural net while grabbing the elapsed time so we can calculate estimated time to completion later (Lines 75-80). The result is included in both boxes and masks .
Now let’s begin looping over detected objects:
First we filter out weak detections with a low confidence value. Then we determine the bounding box coordinates and obtain the mask and roi .
Now let’s draw the object’s transparent overlay, bounding rectangle, and label + confidence:
Here we’ve blended our roi with the color and stored the result in the original frame , effectively creating a colored transparent overlay (Lines 118-122).
We then draw a rectangle around the object and display the class label + confidence just above (Lines 125-133).
Finally, let’s write to the video file and clean up:
On the first iteration of the loop, our video writer is initialized.
An estimate of the amount of time that the processing will take is printed to the terminal on Lines 143-147.
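The time estimate is simple arithmetic: the per-frame forward-pass time multiplied by the total frame count. A sketch with hypothetical numbers:

```python
# elap: seconds one forward pass took (hypothetical value)
# total: number of frames reported by the video file
elap = 0.5
total = 350

if total > 0:
    estimated = elap * total
    print("[INFO] single frame took {:.4f} seconds".format(elap))
    print("[INFO] estimated total time to finish: {:.4f}".format(estimated))
```

If total is -1 (frame count unknown), the script simply skips this estimate.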
The final operation of our loop is to write the frame to disk via our writer object (Line 150).
You’ll notice that I’m not displaying each frame to the screen. The display operation is time-consuming, and you’ll be able to view the output video with any media player once the script has finished processing anyway.
Lastly, we release video input and output file pointers (Lines 154 and 155).
Now that we’ve coded up our Mask R-CNN + OpenCV script for video streams, let’s give it a try!
Make sure you use the “Downloads” section of this tutorial to download the source code and Mask R-CNN model.
You’ll then need to collect your own videos with your smartphone or another recording device. Alternatively, you can download videos from YouTube as I have done.
Note: I am intentionally not including the videos in today’s download because they are rather large (400MB+). If you choose to use the same videos as me, the credits and links are at the bottom of this section.
From there, open up a terminal and execute the following command:
Figure 11: Mask R-CNN applied to video with Python and OpenCV.
In the above video, you can find funny video clips of dogs and cats with a Mask R-CNN applied to them!
Here is a second example, this one applying OpenCV and a Mask R-CNN to video clips of cars “slipping and sliding” in wintry conditions:
Figure 12: Mask R-CNN object detection is applied to a video scene of cars using Python and OpenCV.
You can imagine a Mask R-CNN being applied to highly trafficked roads, checking for congestion, car accidents, or travelers in need of immediate help and attention.
Credits for the videos and audio include:
- Cats and Dogs
- Slip and Slide
How do I train my own Mask R-CNN models?
Figure 13: Inside my book, Deep Learning for Computer Vision with Python, I include a detailed explanation of both the algorithm and code, ensuring you will be able to successfully train your own Mask R-CNNs.
To learn more about my book (and grab your free set of sample chapters and table of contents), just click here.
Summary
In this tutorial, you learned how to apply the Mask R-CNN architecture with OpenCV and Python to segment objects from images and video streams.
Object detectors such as YOLO, SSDs, and Faster R-CNNs are only capable of producing bounding box coordinates of an object in an image — they tell us nothing about the actual shape of the object itself.
Using Mask R-CNN we can generate pixel-wise masks for each object in an image, thereby allowing us to segment the foreground object from the background.
Furthermore, Mask R-CNNs enable us to segment complex objects and shapes from images which traditional computer vision algorithms would not enable us to do.
I hope you enjoyed today’s tutorial on OpenCV and Mask R-CNN!
To download the source code to this post, and be notified when future tutorials are published here on PyImageSearch, just enter your email address in the form below!
Hi. How can we train our own Mask RCNN model. Can we use Tensorflow Models API for this purpose?
Hey Faizan — I cover how to train your own custom Mask R-CNN models inside Deep Learning for Computer Vision with Python.
Thank you Adrian for the article. I am a beginner in Python and OpenCV. When I was testing the code with the example_01 image, it was detecting only one car instead of two cars… any explanation?
Click on the window opened by OpenCV and click any key on your keyboard to advance the execution of the script.
Hi Adrian,
thanks a lot for another great tutorial.
I already knew Mask-RCNN for trying it on my problem, but apparently that is not the way to go.
What I want to do is to detect movie posters in videos and then track them over time. The first time they appear I also manually define a mask to simplify the process. Unfortunately any detection/tracking method I tried failed miserably… the detection step is hard, because the poster is not an object available in the models, and it can vary a lot depending on the movie it represents; tracking also fails, since I need a pixel perfect tracking and any deep learning method I tried does not return a shape with straight borders but always rounded objects.
Do you have any algorithms to recommend for this specific task? Or shall I resort to traditional, not DL-based methids?
Thanks in advance!
How many example images per movie poster do you have?
I have 10 videos, each of them showing a movie poster for about 150 frames. The camera is always panning or zooming, so the shape and size of the poster is constantly changing.
Thanks in advance for any help 🙂
I assume each of the 150 frames has the same movie poster? Are these 150 frames your training data? If so, have you labeled them and annotated them so you can train an object detector?
Yes, I have 1500 images as training data. For each movie poster, i created a binary mask showing where is the poster. The shape is usually a quadrilateral, unless in case the poster is partially occluded.
I’d like to train a system which, given an annotated frame of a video, could then detect the movie poster with pixel precision during camera movement and occlusions, but so far I didn’t have luck. Even system especially trained for that (as they do in the Davis challenge) seem to fail after just a few frames.
If you are going to work / publish a post on the issue, let me know!
Thanks for the clarification. In that case I would highly suggest using a Mask R-CNN. The Mask R-CNN will give you a pixel-wise segmentation of the movie poster. Once you have the location of the poster you can either:
1. Continue to process subsequent frames using the Mask R-CNN
2. Or you can apply a dedicated object tracker
Adrian, you are constantly bombarding us with such valuable information every single week, which otherwise would take us months to even understand.
Thank you for sharing this incredible piece of code with us.
Thanks Mansoor — it is my pleasure 🙂
Hello, Adrian.
Thanks so much for your article and explanation of principles R-CNN
You are welcome, I’m happy you found the post useful! I hope you can apply it to your own projects.
Thanks , very informative and useful 🙂
Thanks Atul!
Hi Adrain.
Thank you again for the great effort. My question: according to the authors of the Mask R-CNN paper, it runs at around 5 FPS. Isn’t that a bit slow for real-time applications? And how do YOLO and SSD compare with it? Thanks.
Yes, Faster R-CNN and Mask R-CNN are slower than YOLO and SSD. I would request you read “Instance segmentation vs. Semantic segmentation” section of this tutorial — the section will explain to you how YOLO, SSD, and Faster R-CNN (object detectors) are different than Mask R-CNN (instance segmentation).
Thanks Adrian, so what I understand is that Mask R-CNN may not be suitable for real-time applications. Great tutorial by the way. Thumbs up!
Hi Adrian,
Thank you very much for your sharing the code along with the blog, as it will be very helpful for us to play around and understand better.
Thanks Cenk!
Thanks a lot.
It worked when I updated OpenCV 🙂
Awesome, glad to hear it!
Great post, Adrian. Actually, a large number of papers are published every day on machine learning, so can you share how you keep track of almost all of them? Thanks so much, Adrian.
Adrian, please give me some comment about this. Thanks.
Hi Adrian
This is awesome. I loved your book. (still trying to learn most of it)
I used matterport’s Mask RCNN in our software to segment label-free cells in microscopy images and track them.
I wonder if you can comment on two things
1.
would you comment on how to improve the accuracy of the mask?
Do you think it’s the interpolation error or we can improve the accuracy by increasing the depth of the CNNs?
2. I’ve seen this “flickering” thing in segmentation (as in the video).
In image segmentation, one set of trained weights can recognize a target while another may not; some kind of false negative.
Would you know where it comes from?
1. Are you already applying data augmentation? If not, make sure you are. I’m also not sure how much data you have for training but you may need more.
2. False-negatives and false-positives will happen, especially if you’re trying to run the model on video. Ways to improve your model include using training data that is similar to your testing data, applying data augmentation, regularization, and anything that will increase the ability of your model to generalize.
This is looks really cool. Is this the same thing as pose estimation?
No, pose estimation actually finds keypoints/landmarks for specific joints/body parts. I’ll try to cover pose estimation in the future.
Thank you so much for all the wonderful tutorials. I am a great follower of your work. I had a doubt here:
To perform localization and classification at the same time, we add 2 fully connected layers at the end of our network architecture. One classifies and the other provides the bounding box information. But how will we come to know which fully connected layer produces coordinates and which one is for classification?
What I read in some blogs is that we receive a matrix at the end which contains: [confidence score, bx, by, bw, bh, class1, class2, class3].
We know due to our implementation. One FC branch is (N + 1)-d where N is the number of class labels plus an additional one for the background. The other FC branch is 4xN-d where each of the four values represents the deltas for the final predicted bounding boxes.
Thanks for your invaluable tutorials. I ran your code as is; however, I am getting only one object instance segmented, i.e., if I have two cars in the image (e.g., example_01), only one car is detected and instance segmented. I have tried with other images. Same story.
My OpenCV version is 3.4.3. Please suggest a resolution.
Please ignore my previous comment. I thought it would be an animated gif.
Click on the window opened by OpenCV and press any key on your keyboard. It will advance the execution of the script to highlight the next car.
Hi Adrian,
Can you suggest any architecture for semantic segmentation which performs segmentation without resizing the image? Any blog/code related to it would be great.
I would suggest starting by reading my tutorial on semantic segmentation to help you get started.
Hi Adrian,
Thanks for this awesome post.
I am working on a similar project where I have to identify and localize each object in the picture. Can you please advise how to make this script identify all the objects in the picture like a carton box, wooden block etc. I will not know what could be in the picture in advance.
You would need to first train a Mask R-CNN to identify each of the objects you would like to recognize. Mask R-CNNs, and in general, all machine learning models, are not magic boxes that intuitively understand the contents of an image. Instead, we need to explicitly train them to do so. If you’re interested in training your own custom Mask R-CNN networks be sure to refer to Deep Learning for Computer Vision with Python where I discuss how to train your own models in detail (including code).
Great tutorial.
I am interested in extracting and classifying/labeling plant disease(s) and insects from an image sent by a farmer using deep learning paradigm. Please advice the relevant approaches/techniques to be employed.
Are you planning to diversify your blog with examples in the field of plant pests or disease diagnosis in future?
I haven’t covered plant diseases specifically before but I have cover human diseases such as skin lesion/cancer segmentation using a Mask R-CNN. Be sure to take a look at Deep Learning for Computer Vision with Python for more details. I’m more than confident that the book would help you complete your plant disease classification project.
As you mentioned, the output is stored to disk. I wanted to know: how can we show the output on the screen frame by frame?
You can insert a call to cv2.imshow , but keep in mind that the Mask R-CNN running on a CPU, at best, may only be able to do 1 FPS. The results wouldn’t look as good.
Hi Adrian, Another great tutorial – Your program examples just work first time (unlike many other object detection tutorials on the web…)
I am trying to reduce the number of false positives from my CCTV alarm system which monitors for visitors against a very ‘noisy’ background (trees blowing in the wind etc) and using an RCNN looks most promising. The Mask RCNN gives very accurate results but I don’t really need the pixel-level masks and the extra CPU time to generate them.
Is there a (simple) way to just generate the bounding boxes?
I have tried to use Faster RCNN rather than Mask RCNN but the accuracy I am getting (from the aforementioned web tutorials and Github downloads) is much poorer.
If Faster R-CNN isn’t working you may want to try YOLO or Single Shot Detector (SSDs).
Never even heard of R-CNN until now, but great follow-up to the YOLO post. Question: sometimes the algo seems to identify the same person twice with very similar confidence levels, and at other times the same person twice, once at ~90% and once at ~50%.
Any ideas?
The same person in the same frame? Or the same person in subsequent frames?
Another great article! Would it be possible to use instance segmentation or object detection to detect whether an object is on the floor? I want to be able to scan a room and trigger an alert if an object is on the floor. I haven’t seen any deep learning algorithm applied to detect the floor. Thanks, look forward to your reply.
That would actually be a great application of semantic segmentation. Semantic segmentation algorithms can be used to classify all pixels of an image/frame. Try looking into semantic segmentation algorithms for room understanding.
thanks Adrian, I’ll look into using semantic segmentation for this, look forward to more articles from you!
Hi Adrian, I found u have lots of blogs on install opencv on raspberry pi, they build and compile (min 2hours)…..I found pip install opencv- python working fine on raspberry Pi. Did you try it?
I actually have an entire tutorial dedicated to installing OpenCV with pip. I would refer to it to ensure your install is working properly.
Like always great tutorial.
No algorithm is perfect.What are the short comings of Mask R-CNN approach/algorithm?
Mask R-CNNs are extremely slow. Even on a GPU they only operate at 5-7 FPS.
Hey Adrian,
I made the entire tree structure on Google Colab and ran the mask_rcnn.py file.
!python mask_rcnn.py –mask-rcnn mask-rcnn-coco –image images/example_01.jpg
It gave the following result:
[INFO] loading Mask R-CNN from disk…
[INFO] Mask R-CNN took 5.486852 seconds
[INFO] boxes shape: (1, 1, 3, 7)
[INFO] masks shape: (100, 90, 15, 15)
: cannot connect to X server
Could you please tell me why did this happen?
I don’t believe Google Colab has X11 forwarding which is required to display images via
cv2.imshow. Don’t worry though, you can still use matplotlib to display images.
cool..leading the way for us to the most recent technology
Thinking to use MASK R-CNN for background removal, is there and way to make the mask more accurate then the examples in the video in the examples?
You would want to ensure your Mask R-CNN is trained on objects that are similar to the ones in your video streams. A deep learning model is only as good as the training data you give it.
I’m talking about person recognize, It can be any person… so I’m understanding your comment ” objects that are similar ”
look on the picture below the mask cut part of the person head (the one near the dog)… for example…
however if I’m looking on this document the mask cover the persons better
any idea how the mask can cover the body better then the examples?
tFirst thanks for all the information you share with us!!!!
I Just to verify, as I understand your opinion is that better training can improve the mask fit to the object required and it is not the limitation that related to the ability of Mask RCNN and for my needs I need to search for other AI model
Thanx a lot for a great blog !
on internet lots of article available on custom object detection using tensorflow API , but not well explained..
In future Can we except blog on “Custom object detection using tensorflow API” ??
thanx a lot your blogs are really very helpful for us…
Best regards
Gagandeep
Hi Gagandeep — if you like how I explain computer vision and deep learning here on the PyImageSearch blog I would recommend taking a look at my book, Deep Learning for Computer Vision with Python which includes six chapters on training your own custom object detectors, including using the TensorFlow Object Detection API.
Hi Adrian,
Thanks for such a great tutorial! I have some questions after reading the tutorial:
1. Which one is faster between Faster R-CNN and Mask R-CNN? What about the accuracy?
2. Under what condition I should consider using Mask R-CNN? Under what condition I should consider using Faster-CNN? (Just for Mask R-CNN and Faster R-CNN)
3. What is the limitation of Mask R-CNN?
Sincerely,
Sunny
1. Mask R-CNN builds on Faster R-CNN and includes extra computation. Faster R-CNN is slightly faster.
2 and 3. Go back and read the “Instance segmentation vs. Semantic segmentation” section of this post. Faster R-CNN is an object detector while Mask R-CNN is used for instance segmentation.
the mask output that I’m getting for the images that you provided is not as smooth as the output that you have shown in this article – there are significant jagged edges on the outline of the mask. is there any way to get a smoother mask as you have got ? I’m running the script on a Macbook Pro.
looking forward to your reply, thanks.
Hi Adrian,
don’t mean to annoy you, but it’d help me considerably if you could give me some ideas for why I’m getting masks with jagged edges (like steps all over the outline) as opposed to the smooth mask outputs, and how I can possible fix this problem. Thanks,
See my reply to Robert in this same comment thread. What interpolation are you using? Try using a different interpolation method when resizing. Instead of “cv2.INTER_NEAREST” you may want to try linear or cubic interpolation.
using cubic interpolation gives the same results as you show in this post. thank you so much!!
Awesome, glad to hear it!
I’m running into the same issue. Do you have any recommendation Adrian? Are you smoothing the pixels in some way?
What interpolation method are you using when resizing the mask?
box = boxes[0, 0, i, 3:7] * np.array([W, H, W, H])
(startX, startY, endX, endY) = box.astype(“int”)
boxW = endX – startX
boxH = endY – startY
What is happening in the first step.?
Why is it 3:7…?
Looking forward for your reply.
That is the NumPy array slice. The 7 values correspond to:
[batchId, classId, confidence, left, top, right, bottom]
In a very simple yet detailed way all the procedures are described. Easy to understand.
Can you please tell me how to get or generate these files ?
colors.txt
frozen_inference_graph.pb
mask_rcnn_inception_v2_coco_2018_01_28.pbtxt
object_detection_classes_coco.txt
I want to go through your example.
These models were generated by training the Mask R-CNN network. You need to train the actual network which will require you to understand machine learning and deep learning. Do you have any prior experience in those areas?
it looks like those files are generated by Tensorflow, look for tutorials on how to use Tensorflow Object detection API.
Any thoughts on this error:
… cv2.error: OpenCV(3.4.2) /home/estes/git/cv-modules/opencv/modules/dnn/src/tensorflow/tf_graph_simplifier.cpp:659: error: (-215:Assertion failed) !field.empty() in function ‘getTensorContent’
Note that I’m using opencv 3.4.2, as suggested, and am running an unmodified version of your code.
Thanks!
Found a link suggesting I needed 3.4.3. I updated to 3.4 and all is well.
Typo: can’t edit post. I upgraded to 4.0.0 and it worked.
Thanks for letting us know, Bob!
Hello Adrian,
Thanks for you post, it’s a really good tutorial!
But I am wondering whether there is any way to limit the categories of coco dataset if I just want it to detect the ‘person’ class. Forgive my stupidity, I really couldn’t find the model file or some other file contains the code related to it.
Looking forward to your reply;)
I show you exactly how to do that in this post.
this is probably my favorite of all of your posts! i have a question about extending the Mask R-CNN model. Currently, if i run the code on a video that has more than 1 person, i get a mask output labeled ‘person’ for each person in the video. Is there any way to identify and track each person in the video, so the output would be ‘person 1’, ‘person 2’ and so on… Thanks,
I would suggest using a simple object tracking algorithm.
Hi Adrian,
Amazing book. I’ve been reading through it. Love the materials. I was going through your custom mask rcnn pills example and the annotation is done using a circle. If I am training on something custom I’m using polygons. The code has it finding the center the circle from the annotation and draws a mask. Any suggestions on how to update this to get it to work with polygon annotations in via? Thanks!
Thanks Michael, I’m glad you’re enjoying Deep Learning for Computer Vision with Python!
As for your question, yes, there is a way to draw polygons. Using the scikit-image library it’s actually quite easy. You’ll need the skimage.draw.polygon function.
Hi Adrian,
Thanks for that. I was able to train now but I realized it was only on CPU and it was sooo slow. When I convert to GPU I get a Segmentation Fault (Core Dumped) could be related to a version issue? How can I repay your time???
Michael
Hey Michael, be sure to see my quote from the tutorial:
.”
Can I use this on a gray scale image like Dental x-ray?
Yes, Mask R-CNNs can be used on grayscale, single channel images. I demonstrate how to train your own custom Mask R-CNNs, including Mask R-CNNs for medical applications, inside my book, Deep Learning for Computer Vision with Python.
Adrian,
I really appreciate all of your detailed tutorials. I’m just getting familiar with openCV, and after walking through a few of them I have been able to start some cool projects.
I was curious if you could think of a method to add a contrail to tracked objects using the code provided? Right now, I am “ignoring” all objects except for the sports ball class, so I am just looking to add the movement path to the ball (similar to your past Ball Tracking with OpenCv tutorial.
Thanks!
Thanks Christian, I’m glad you’re enjoying the tutorials.
You could certainly adapt the ball tracking contrails to this tutorial as well. Just maintain a “deque” class for each detected object like we do in the ball tracking tutorial (I would recommend computing the center x,y-coordinates of the bounding box).
when i run it i see this error can you pls tell me how to fix it
mask_rcnn.py: error: the following arguments are required: -i/–image, -m/–mask-rcnn
If you’re new to command line arguments that’s okay, but you need to read this tutorial first.
Hi Adrian,
Currently, I am doing a project which is about capturing the trajectory of some scalpels when a surgeon is doing operations, so that I can input this data to a robot arm and hope it can help surgeons with operations.
The first task of my project is to track the scalpels first, then the second task is to know their 2D movement from the videos provided and even 3D motions.
I think CNN can help me with the first task easily, right?
My question is: is it possible to help me with the second task?
Looking forward to your reply, thanks.
Yes, Mask R-CNNs and object detectors will help you detect an object. You can then track them using object tracking algorithms.
Hi,
congrats for the tuorial. Really well done!
I have a question:
I used your code but the masks are not as smooths as the one I see on your article, but they are quite roughly squared.
Is there a reason for this?
Thank you!
See my reply to Sophia.
How do you set ask_rcnn_video .py” line 97: box = boxes[0, 0, i, 3:7] * np.array([W, H, W, H])”, I am through your other articles and try I will use YOLO+opencv with centroidtracker, but there is always a problem with the coordinates. I think it is a problem with box. I don’t know yolo’s box=[0:4]. What is the difference between the two, I saw you have used centroidtracker’s article, all use: box = boxes[0, 0, i, 3:7], please help me answer
I tried to use YOLO+centroidtracker to achieve thank you.
The returned coordinates for the bounding boxes are:
[batchID, classID, confidence, left, top, right, bottom]
yes,but yolo_video.py is ” box = detection[0:4] * np.array([W, H, W, H])”,i don’t know how to use
YOLO’s return signature is slightly difference. It’s actually:
[center_x, center_y, width, height]
Hi Adrian, really helpful post. Would it be possible to extract a 128-D object embedding vector (or larger size vector like 256-D or 512-D) that quantifies a specific instance of that object class – similar to the way a 128-D face embedding vector is extracted for a face?
For example, if you have two different (different color, different model) Toyota cars in an image, then two object embedding vectors would be generated in such a way that both cars could be re-identified in a later image, even if those cars would appear in different angles – similar to the way a person’s face can be re-identified by the 128-D face embedding vector.
Yes, but you would need to train a model to do exactly that. I would suggest looking into siamese networks and triplet loss functions.
How do I do that showing two bounding boxes in one image without pressing ESC
You would remove the “cv2.imshow” statement inside the “for” loop and place it after the loop.
I think it is better in Figure 5 to change notation N to L for consistency
Would it possible to run MaskR-CNN in the raspberry pi ?
Realistically, no. The Raspberry Pi is far too underpowered. The best you could do is attempt to run the model a Movidius NCS connected to the Pi.
I ordered the max bundler imageNet. It worth it !
I hope more material using Tensorflow 2.0, TF Lite , TPU, Colab for more coherent and easy development.
I have a question: can we add background sample images without masking them with the masked objects to train the model better on detecting similar object. Like detecting windows but not doors ?
Thanks for picking up a copy of the ImageNet Bundle, Adama! I’m glad you are enjoying it.
As far as your question goes, yes, you can insert “negative” samples in your dataset. As long as none of the regions are annotated they will be used as negative samples.
Hello dear
i want to know if it’s possible to run the Mask R-CNN with Web cam to make it detect in real time?
thanks
You would need a GPU to run the Mask R-CNN network in real-time. It is not fast enough to run in real-time on the CPU.
it’s works but so heavy there’s no way to make it littel faster?
Hello, fantastic articles that are just a wealth of information. Is the download link for the source code still functioning?
Yes, you can use the “Downloads” section of the post to download the source code and pre-trained model.
Hi Adrian, How did you get the fc layers as 4096 in Figure 5? According to the Mask R-CNN paper the fc layers are 1024 from Figure 4 (in their paper).
Dear Adrian,
Great post, as always. Based on your posts I have learned a lot about CV, NN and python. I still have a question: I have my own Keras CNN saved as a model.h5. I would like to use it to detect features in the pictures, also hopefully with masking. I have transformed keras model to tensorflow and also generated the pdtxt file, however, my model does not want to work because of the error: ‘cv::dnn::experimental_dnn_34_v11::`anonymous-namespace’::addConstNodes’. Is there any other way to use own CNN to detect features on the images? I have tried with dividing image into blocks which were fed into CNN but this approach is rather slow and I would also need to include some more sophisticated algorithms to specify exact location. I would be very grateful for your answer!
Could you elaborate a bit more about what you mean by “detect features”? What is the end goal of what you are trying to achieve?
do you have the code for training?I want to test it on my datasets,thank you
I cover how to train your own custom Mask R-CNN networks inside my book, Deep Learning for Computer Vision with Python.
Hi Adrian,
I am so much thankful to you for writing, encouraging and motivating so many young talents in the field of Computer Vision and AI.
Thank you so much, once again.
Keep writing.
We love you so much.
God bless you.
Than you for the kind words, Pallawi 🙂
Adrian thank you so much for yet another amazing post!
Thanks Izack, I’m glad you enjoyed it!
how to draw contours for the output of the mask rcnn
Take a look at Line 92 where the mask is calculated. You can take that mask and find contours in it.
Hello Adrian,
thank you for the tutorial. It really is great.
Can you tell whether I can use this program also for the raspberry?
Thank you 🙂
No, the RPi is too underpowered to run Mask R-CNN. You would need to combine the Pi with a Movidius NCS or Google Coral USB Accelerator.
Hi Adrian,
Thanks for another great tutorial!
I was wondering how I would go about getting the code to also output coordinates for the four corners of each bounding box? Is that possible?
Thanks!
What do you mean by “output” the bounding box coordinates?
Hi, thanks for your response.
I am looking to collect data on where each object is located in an image. So, ideally, as well as producing the output image/video, the code will also produce an array containing the pixel coordinates for each bounding box.
Line 82 gives you the (x, y)-coordinates of the box.
Hi
Thanks for this great tutorial.
I am trying run this on intel movidius ncs 2 but am getting the following error:
[INFO] loading Mask R-CNN from disk…
terminate called after throwing an instance of ‘std::bad_cast’
what(): std::bad_cast
Aborted (core dumped)
It works perfectly with opencv but gives error with openvino’s opencv
OpenVINO’s OpenCV has their own custom implementations. Unfortunately it’s hard to say what the exact issue is there. Have you tried posting the issue on their GitHub?
Hi Adrian
This is very informative. Actually I am trying to detect different color wires in an images. My dataset has images of wires in it, I want to detect where are the wires and what colors are they. I was trying to use MASK RCNN, it was able to detect the wires but it is classifying all the wires of same color.
Do you know how can I improve my code.
Have you taken a look at Raspberry Pi for Computer Vision? That book will teach you how to train your own Mask R-CNNs. I also provide my best practices, tips, and suggestions.
Hi Adrian,
Thank you for this excellent tutorial, I ran the code, it works but it gives me rectangular shapes, not like the results in the tutorial. the second problem is when I test with a 5MB image it gives me an error (cv::OutOfMemoryError). All my images contain only one object which is the body of a person, I like to use mask rcnn in order to detect the shape of the skin, can I obtain such a result starting from your tutorial code?
Thank you in advance.
To avoid the memory error first resize your input image to the network — your machine is running out of memory trying to process the large image.
I wan to plot the image with Matplotlib but I don’t know exactly where in the code I put that.
You mean you want to use the matplotlib’s “plt.imshow” function to display the image?
Hi Adrian
I really appreciate all of your detailed tutorials.
For reference, I am not very familiar with DNN
in line (source code for images): 113 ,,,
roi = roi [ mask ]
Q1 : Does ‘roi’ have all the pixels that are masked?
Q2 : I want to know the center of the coordinates of the masked area using the OPENCV function. Is it possible?
1. The ROI contains the “Region of Interest”. The “mask” variable contains the masked pixels. We use NumPy array indexing to grab only the masked pixels.
2. Compute the centroid of the mask.
Hi Adrian,
Great work! I bought the practitioner package to try and learn more about the process. I can’t find anything about image annotation tools for training my own dataset in the book. I found VGG from Oxford but I’m not sure if that will work with the tools you’ve put together.
Thanks again for all these great tutorials!
Reed
Hi Reed — it’s the ImageNet Bundle of Deep Learning for Computer Vision with Python that covers Mask R-CNN and my recommended image annotation tools.
Hi Adrian,
In which bundle you teach to train a Mask R-CNN on a custom dataset? I have the starter bundle of your book and it’s not there.
Thanks
The ImageNet Bundle of Deep Learning for Computer Vision with Python contains the Mask R-CNN chapters.
If you would like to upgrade to the ImageNet Bundle from the Starter Bundle just send me an email and I can get you upgraded!
Hi Adrian,
Can we do object detection in video by retaining the sound of the video?
I’m not sure what you mean by “retaining the sound”? What do you hope to do with the audio from the video?
Thank you it works great, had some issues getting started because of the project interpreter but once I sorted that out it works exactly as stated, I learnt a lot from this tutorial thanks again.
Hi Adrian!
I am curious if I can combine mask r-cnn with webcam input in real time? Could you please give me any ideas how to achieve this?
A Mask R-CNN, even with a GPU, is not going to run in real-time (you’ll be in the 5-7 FPS range).
Hi Adrian,
Am a novice in the field of image recognition. I started exploring your blog and ran my first sample today.
I have two points to mention
1) Why is the Mask R-CNN not accurate in real time images? If I have around 5 images of car then it is detecting only 3 (The other 2 cars are might not be clear but still they are clearly visible (60%) for human eyes in the image and this algorithm is not detecting them).
2) Instead of viewing different output files of an image, can’t I view the image segmentation in a single image? (Ex: If it detected 2 cars then it is poping up a window showing a single car and after closing it then it is reopening it and showing me the second car. Is there any chance of viewing them in a sigle window probably on a single image). | https://www.pyimagesearch.com/2018/11/19/mask-r-cnn-with-opencv/ | CC-MAIN-2019-43 | refinedweb | 9,814 | 63.7 |
I've been knocking my head against the wall for a few days trying to get my Arduino Uno to receive IR signals from a 2nd gen Apple remote, and then control a standard Futaba servo with the received codes. It is a BoeBot application, so I am using the Board of Education shield with an IR receiver from Parallax running to pin 4 and the servo to pin 12. The problem I am having is that whenever I load Ken Shirriff's IRremote.h library and the Servo.h library, I encounter a servo twitch.
Here's a simple instance of the code that is giving me the issue:
- Code: Select all | TOGGLE FULL SIZE
#include <IRremote.h>
#include <Servo.h>
int RECV_PIN = 4;
IRrecv irrecv(RECV_PIN);
Servo myservo;
void setup()
{
irrecv.enableIRIn(); // Start the receiver
myservo.attach(12);
}
void loop()
{
}
It is powered by a 6V supply (4, 1.5V alkalines), and there is a common ground between the servo and the board.
Here are some things I've tried:
1. Using a different servo
2. Using different pins, for both the servo and IR receiver
3. Change the IRreceive.h library to interrupt with timer0, instead of timer2. (My thinking here is that there may have been a priority of operation, and I wanted to give the servo priority over the IR pin. This didn't change anything though, and I'm pretty sure timer0 has higher priority anyhow..)
4. Wrap the IRremote.h timer ISR in a "detach.servo();" and "attach.servo();" command somehow, but I can't seem to get the coding correct, and I'm not sure if this approach would work.
I should mention that the IR receive demo sketch and a basic servo sweep sketch work just fine when they're independent of one another.
I've seen some postings online where people have gotten the same code to work on the same type of application, so I feel like I'm really missing something big here. I've copied their approach exactly, but am still stuck with a twitchy servo. Any help would be great..I really do appreciate the support.
Ken Shirriff's IRremote library: | http://adafruit.com/forums/viewtopic.php?f=25&t=34328&p=169911 | CC-MAIN-2014-41 | refinedweb | 365 | 76.11 |
On Friday 30 November 2007, David Brownell wrote:> Thanks for the review. I'll snip out typos and similar trivial> comments (and fix them!), using responses here for more the> substantive feedback.Here's the current version of this patch ... updated to put thedriver into drivers/gpio (separate patch setting that up) andthe header into <linux/i2c/pcf857x.h>Note that after looking at the GPIO expanders listed at the NXPwebsite, I updated this to accept a few more of these chips.Other than reset pins and addressing options, the key differencebetween these seems to be the top I2C clock speed supported: pcf857x ... 100 KHz pca857x ... 400 KHz pca967x ... 1000 KHzOtherwise they're equivalent at the level of just swapping parts.- Dave============= SNIP!This is a new-style I2C driver for most common 8 and 16 bit I2C based"quasi-bidirectional" GPIO expanders: pcf8574 or pcf8575, and severalcompatible models (mostly faster, supporting I2C at up to 1 MHz).Since it's a new-style driver, these devices must be configured aspart of board-specific init. That eliminates the need for error-pronemanual configuration of module parameters, and makes compatibilitywith legacy drivers (pcf8574.c, pc8575.c)for these chips easier.The driver exposes the GPIO signals using the platform-neutral GPIOprogramming interface, so they are easily accessed by other kernelcode. The lack of such a flexible kernel API is what has ensuredthe proliferation of board-specific drivers for these chips... stuffthat rarely makes it upstream since it's so ugly. 
This driver willlet them use standard calls.Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>--- drivers/gpio/Kconfig | 23 +++ drivers/gpio/Makefile | 2 drivers/gpio/pcf857x.c | 331 ++++++++++++++++++++++++++++++++++++++++++++ include/linux/i2c/pcf857x.h | 45 +++++ 4 files changed, 401 insertions(+)--- a/drivers/gpio/Kconfig 2007-12-05 15:13:27.000000000 -0800+++ b/drivers/gpio/Kconfig 2007-12-05 15:14:12.000000000 -0800@@ -5,4 +5,27 @@ menu "GPIO Support" depends on GPIO_LIB +config GPIO_PCF857X+ tristate "PCF857x, PCA857x, and PCA967x I2C GPIO expanders"+ depends on I2C+ help+ Say yes here to provide access to most "quasi-bidirectional" I2C+ GPIO expanders used for additional digital outputs or inputs.+ Most of these parts are from NXP, though TI is a second source for+ some of them. Compatible models include:++ 8 bits: pcf8574, pcf8574a, pca8574, pca8574a,+ pca9670, pca9672, pca9674, pca9674a++ 16 bits: pcf8575, pcf8575c, pca8575,+ pca9671, pca9673, pca9675++ Your board setup code will need to declare the expanders in+ use, and assign numbers to the GPIOs they expose. 
Those GPIOs+ can then be used from drivers and other kernel code, just like+ other GPIOs, but only accessible from task contexts.++ This driver provides an in-kernel interface to those GPIOs using+ platform-neutral GPIO calls.+ endmenu--- a/drivers/gpio/Makefile 2007-12-05 15:14:03.000000000 -0800+++ b/drivers/gpio/Makefile 2007-12-05 15:14:12.000000000 -0800@@ -1 +1,3 @@ # gpio support: dedicated expander chips, etc++obj-$(CONFIG_GPIO_PCF857X) += pcf857x.o--- /dev/null 1970-01-01 00:00:00.000000000 +0000+++ b/drivers/gpio/pcf857x.c 2007-12-05 15:15:18.000000000 -0800@@ -0,0 +1,331 @@+/*+ * pcf857x - driver for pcf857x, pca857x, and pca967x I2C GPIO expanders+ *+ * Copyright (C) 2007 David Brown/kernel.h>+#include <linux/slab.h>+#include <linux/i2c.h>+#include <linux/i2c/pcf857x.h>++#include <asm/gpio.h>+++/*+ * The pcf857x, pca857x, and pca967x chips only expose one read and one+ * write register. Writing a "one" bit (to match the reset state) lets+ * that pin be used as an input; it's not an open-drain model, but acts+ * a bit like one. 
This is described as "quasi-bidirectional"; read the+ * chip documentation for details.+ *+ * Some other I2C GPIO expander chips (like the pca953{4,5,6,7,9}, pca9555,+ * pca9698, mcp23008, and mc23017) have more complex register models.+ */++ /* 8574 addresses are 0x20..0x27; 8574a uses 0x38..0x3f;+ * 9670, 9672, 9764, and 9764a use quite a variety.+ *+ * NOTE: we dont distinguish here between *4 and *4a parts.+ */+ if (strcmp(client->name, "pcf8574") == 0+ || strcmp(client->name, "pca8574") == 0+ || strcmp(client->name, "pca9670") == 0+ || strcmp(client->name, "pca9672") == 0+ || strcmp(client->name, "pca9674") == */+ else+ status = i2c_smbus_read_byte(client);++ /* '75/'75c addresses are 0x20..0x27, just like the '74;+ * the '75c doesn't have a current source pulling high.+ * 9671, 9673, and 9765 use quite a variety of addresses.+ *+ * NOTE: we dont distinguish here between 8575/8575a parts.+ */+ } else if (strcmp(client->name, "pcf8575") == 0+ || strcmp(client->name, "pca8575") == 0+ || strcmp(client->name, "pca9671") == 0+ || strcmp(client->name, "pca9673") == 0+ || strcmp(client->name, "pca96 */+ else+ status = i2c_read_le16(client);++ } else+ status = -ENODEV;++ if (status < 0)+ goto fail;++ gpio->chip.label = client->name;++ gpio->client = client;+ i2c_set_clientdata(client, gpio);++ /* all-ones reset state. 
But some systems will+ * need to drive some pins low, while avoiding transient glitches.+ * Handle those cases by assigning n_latch to a nonzero value.+ */+ gpio->out = ~pdata->n_latch;++ status = gpiochip_add(&gpio->chip);+ if (status < 0)+ goto fail;++ /*);+ }++ return 0;++fail:+ dev_dbg(&client->dev, "probe error %d for '%s'\n",+ status, client->name);+ kfree(gpio);+ return status;+}+");+MODULE_AUTHOR("David Brownell");--- /dev/null 1970-01-01 00:00:00.000000000 +0000+++ b/include/linux/i2c/pcf857x.h 2007-12-05 15:14:12.000000000 -0800@@ -0,0 +1,45 @@+ register value; if+ * you leave this initialized to zero the driver will act+ * like the chip was just reset+ * @setup: optional callback issued once the GPIOs are valid+ * @teardown: optional callback issued before the GPIOs are invalidated+ * @context: optional parameter passed to setup() and teard.+ *+ * These GPIO chips are only "quasi-bidirectional"; read the chip specs+ * to understand the behavior. They don't have separate registers to+ * record which pins are used for input or output, record which output+ * values are driven, or provide access to input values. That must be+ * inferred by reading the chip's value and knowing the last value written+ * to it. If you leave n_latch initialized to zero, that last written+ * value is presumed to be all ones (as if the chip were just reset).+ */+struct pcf857x_platform_data {+ unsigned gpio_base;+ unsigned n_latch;++ int (*setup)(struct i2c_client *client,+ int gpio, unsigned ngpio,+ void *context);+ int (*teardown)(struct i2c_client *client,+ int gpio, unsigned ngpio,+ void *context);+ void *context;+};++#endif /* __LINUX_PCF857X_H */--To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2007/12/5/447 | CC-MAIN-2014-15 | refinedweb | 1,057 | 57.16 |
Chapter 18 Trajectory Analysis
18.1 Overview

Many biological processes, such as differentiation, manifest as a continuum of cell states rather than a set of discrete clusters. We can represent such a continuum as a "trajectory" through the high-dimensional expression space; some trajectories are simple linear paths while others are complex structures that branch to multiple endpoints.
The “pseudotime” is defined as the positioning of cells along the trajectory that quantifies the relative activity or progression of the underlying biological process. For example, the pseudotime for a differentiation trajectory might represent the degree of differentiation from a pluripotent cell to a terminal state where cells with larger pseudotime values are more differentiated. This metric allows us to tackle questions related to the global population structure in a more quantitative manner. The most common application is to fit models to gene expression against the pseudotime to identify the genes responsible for generating the trajectory in the first place, especially around interesting branch events.
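To make the model-fitting idea concrete, the sketch below regresses a gene's expression on pseudotime using a natural spline and tests for a non-zero trend with an F-test. All values here are simulated purely for illustration; in a real analysis one would use the fitted pseudotimes from the trajectory and dedicated tools (e.g., from the TSCAN or tradeSeq packages) rather than this toy lm() fit.

```r
# Sketch: detecting a pseudotime-dependent gene on simulated data.
# The pseudotime and expression values are made up for illustration.
set.seed(42)
pseudotime <- runif(200, 0, 10)

# One gene whose expression rises along the trajectory, plus noise.
expr <- 2 * pseudotime + rnorm(200, sd = 2)

# Fit a natural spline of expression against pseudotime and compare
# to an intercept-only model via an F-test.
library(splines)
fit <- lm(expr ~ ns(pseudotime, df = 3))
null <- lm(expr ~ 1)
anova(null, fit)
```

A significant F-test here only indicates that expression changes somewhere along the ordering; interpreting the direction and shape of that change requires inspecting the fitted curve itself.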
18.2 Obtaining pseudotime orderings
18.2.1 Overview
The pseudotime is simply a number describing the relative position of a cell in the trajectory, where cells with larger values are consider to be “after” their counterparts with smaller values. Branched trajectories will typically be associated with multiple pseudotimes, one per path through the trajectory; these values are not usually comparable across paths. It is worth noting that “pseudotime” is a rather unfortunate term as it may not have much to do with real-life time. For example, one can imagine a continuum of stress states where cells move in either direction (or not) over time but the pseudotime simply describes the transition from one end of the continuum to the other. In trajectories describing time-dependent processes like differentiation, a cell’s pseudotime value may be used as a proxy for its relative age, but only if directionality can be inferred (see Section 18.4).
The big question is how to identify the trajectory from high-dimensional expression data and map individual cells onto it. A massive variety of different algorithms are available for doing so (Saelens et al. 2019), and while we will demonstrate only a few specific methods below, many of the concepts apply generally to all trajectory inference strategies. A more philosophical question is whether a trajectory even exists in the dataset. One can interpret a continuum of states as a series of closely related (but distinct) subpopulations, or two well-separated clusters as the endpoints of a trajectory with rare intermediates. The choice between these two perspectives is left to the analyst based on which is more useful, convenient or biologically sensible.
18.2.2 Cluster-based minimum spanning tree
18.2.2.1 Basic steps
The TSCAN algorithm uses a simple yet effective approach to trajectory reconstruction. It uses the clustering to summarize the data into a smaller set of discrete units, computes centroids by averaging the coordinates of each cluster’s member cells, and then forms the minimum spanning tree (MST) across those centroids. The MST is simply an undirected acyclic graph that passes through each centroid exactly once and is thus the most parsimonious structure that captures the transitions between clusters. We demonstrate below on the Nestorowa et al. (2016) dataset, computing the cluster centroids in the low-dimensional PC space to take advantage of data compaction and denoising (Chapter 9).
library(scater)
by.cluster <- aggregateAcrossCells(sce.nest, ids=colLabels(sce.nest))
centroids <- reducedDim(by.cluster, "PCA")

# Set clusters=NULL as we have already aggregated above.
library(TSCAN)
mst <- createClusterMST(centroids, clusters=NULL)
mst
## IGRAPH 2f74a1d UNW- 9 8 --
## + attr: name (v/c), coordinates (v/x), weight (e/n), gain (e/n)
## + edges from 2f74a1d (vertex names):
## [1] 1--3 1--9 2--3 2--6 3--4 5--8 5--9 6--7
For reference, we can draw the same lines between the centroids in a \(t\)-SNE plot (Figure 18.1).
This allows us to identify interesting clusters such as those at bifurcations or endpoints.
Note that the MST in
mst was generated from distances in the PC space and is merely being visualized here in the \(t\)-SNE space,
for the same reasons as discussed in Section 9.5.5.
This may occasionally result in some visually unappealing plots if the original ordering of clusters in the PC space is not preserved in the \(t\)-SNE space.
line.data <- reportEdges(by.cluster, mst=mst, clusters=NULL, use.dimred="TSNE")

plotTSNE(sce.nest, colour_by="label") +
    geom_line(data=line.data, mapping=aes(x=dim1, y=dim2, group=edge))
Figure 18.1: \(t\)-SNE plot of the Nestorowa HSC dataset, where each point is a cell and is colored according to its assigned cluster. The MST constructed from the cluster centroids is overlaid on top.

We can then obtain a pseudotime ordering by projecting the cells onto the MST with
mapCellsToEdges().
More specifically, we move each cell onto the closest edge of the MST;
the pseudotime is then calculated as the distance along the MST to this new position from a “root node” with
orderCells().
For our purposes, we will arbitrarily pick one of the endpoint nodes as the root,
though a more careful choice based on the biological annotation of each node may yield more relevant orderings
(e.g., picking a node corresponding to a more pluripotent state).
map.tscan <- mapCellsToEdges(sce.nest, mst=mst, use.dimred="PCA")
tscan.pseudo <- orderCells(map.tscan, mst)
head(tscan.pseudo)
##          7  8
## [1,] 33.90 NA
## [2,] 53.34 NA
## [3,] 47.95 NA
## [4,] 59.92 NA
## [5,] 54.36 NA
## [6,] 70.73 NA
Here, multiple sets of pseudotimes are reported for a branched trajectory.
Each column contains one pseudotime ordering and corresponds to one path from the root node to one of the terminal nodes - the name of the terminal node that defines this path is recorded in the column names of
tscan.pseudo.
Some cells may be shared across multiple paths, in which case they will have the same pseudotime in those paths.
We can then examine the pseudotime ordering on our desired visualization as shown in Figure 18.2.
# Taking the rowMeans just gives us a single pseudo-time for all cells. Cells
# in segments that are shared across paths have the same pseudo-time value for
# those paths anyway, so the rowMeans doesn't change anything.
common.pseudo <- rowMeans(tscan.pseudo, na.rm=TRUE)

plotTSNE(sce.nest, colour_by=I(common.pseudo),
        text_by="label", text_colour="red") +
    geom_line(data=line.data, mapping=aes(x=dim1, y=dim2, group=edge))
Figure 18.2: \(t\)-SNE plot of the Nestorowa HSC dataset, where each point is a cell and is colored according to its pseudotime value. The MST obtained using TSCAN is overlaid on top.
Alternatively, this entire series of calculations can be conveniently performed with the
quickPseudotime() wrapper.
This executes all steps from
aggregateAcrossCells() to
orderCells() and returns a list with the output from each step.
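For reference, a call along the following lines would reproduce the ordering shown below (the exact arguments are an assumption based on the preceding TSCAN steps, and the `ordering` element of the returned list is where the pseudotimes are stored):

```r
# Assumed invocation of the wrapper; this bundles aggregateAcrossCells(),
# createClusterMST(), mapCellsToEdges() and orderCells() into one call.
pseudo <- quickPseudotime(sce.nest, use.dimred="PCA")
head(pseudo$ordering)
```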
##          7  8
## [1,] 33.90 NA
## [2,] 53.34 NA
## [3,] 47.95 NA
## [4,] 59.92 NA
## [5,] 54.36 NA
## [6,] 70.73 NA
18.2.2.2 Tweaking the MST
The MST can be constructed with an “outgroup” to avoid connecting unrelated populations in the dataset.
Based on the OMEGA cluster concept from Street et al. (2018),
the outgroup is an artificial cluster that is equidistant from all real clusters at some threshold value.
If the original MST sans the outgroup contains an edge that is longer than twice the threshold,
the addition of the outgroup will cause the MST to instead be routed through the outgroup.
We can subsequently break up the MST into subcomponents (i.e., a minimum spanning forest) by removing the outgroup.
We set
outgroup=TRUE to introduce an outgroup with an automatically determined threshold distance,
which breaks up our previous MST into two components (Figure 18.3).
pseudo.og <- quickPseudotime(sce.nest, use.dimred="PCA", outgroup=TRUE)

set.seed(10101)
plot(pseudo.og$mst)
Figure 18.3: Minimum spanning tree of the Nestorowa clusters after introducing an outgroup.
Another option is to construct the MST based on distances between mutual nearest neighbor (MNN) pairs between clusters (Section 13.5). This exploits the fact that MNN pairs occur at the boundaries of two clusters, with short distances between paired cells meaning that the clusters are “touching”. In this mode, the MST focuses on the connectivity between clusters, which can be different from the shortest distance between centroids (Figure 18.4). Consider, for example, a pair of elongated clusters that are immediately adjacent to each other. A large distance between their centroids precludes the formation of the obvious edge with the default MST construction; in contrast, the MNN distance is very low and encourages the MST to create a connection between the two clusters.
pseudo.mnn <- quickPseudotime(sce.nest, use.dimred="PCA", with.mnn=TRUE)
mnn.pseudo <- rowMeans(pseudo.mnn$ordering, na.rm=TRUE)

plotTSNE(sce.nest, colour_by=I(mnn.pseudo), text_by="label", text_colour="red") +
    geom_line(data=pseudo.mnn$connected$TSNE,
        mapping=aes(x=dim1, y=dim2, group=edge))
Figure 18.4: \(t\)-SNE plot of the Nestorowa HSC dataset, where each point is a cell and is colored according to its pseudotime value. The MST obtained using TSCAN with MNN distances is overlaid on top.
18.2.2.3 Further comments
The TSCAN approach is straightforward as it uses the same clusters as the rest of the analysis, allowing us to recycle previous knowledge about the biological annotations assigned to each cluster.
However, the reliance on clustering is a double-edged sword. If the clusters are not sufficiently granular, it is possible for TSCAN to overlook variation that occurs inside a single cluster. The MST is obliged to pass through each cluster exactly once, which can lead to excessively circuitous paths in overclustered datasets as well as the formation of irrelevant paths between distinct cell subpopulations if the outgroup threshold is too high. The MST also fails to handle more complex events such as “bubbles” (i.e., a bifurcation and then a merging) or cycles.
18.2.3 Principal curves

Another strategy is to fit a principal curve, i.e., a one-dimensional curve that passes through the “middle” of the cloud of cells, avoiding any direct dependence on a clustering. Here, we use the slingshot package to fit a single principal curve to the Nestorowa dataset, again using the low-dimensional PC coordinates for denoising and speed. This yields a pseudotime ordering of cells based on their relative positions when projected onto the curve.
library(slingshot)
sce.sling <- slingshot(sce.nest, reducedDim='PCA')
head(sce.sling$slingPseudotime_1)
## [1] 89.44 76.34 87.88 76.93 82.41 72.10
We can then visualize the path taken by the fitted curve in any desired space with
embedCurves().
For example, Figure 18.5 shows the behavior of the principal curve on the \(t\)-SNE plot.
Again, users should note that this may not always yield aesthetically pleasing plots if the \(t\)-SNE algorithm decides to arrange clusters so that they no longer match the ordering of the pseudotimes.
embedded <- embedCurves(sce.sling, "TSNE")
embedded <- slingCurves(embedded)[[1]] # only 1 path.
embedded <- data.frame(embedded$s[embedded$ord,])

plotTSNE(sce.sling, colour_by="slingPseudotime_1") +
    geom_path(data=embedded, aes(x=Dim.1, y=Dim.2), size=1.2)
Figure 18.5: \(t\)-SNE plot of the Nestorowa HSC dataset where each point is a cell and is colored by the slingshot pseudotime ordering. The fitted principal curve is shown in black.
The previous call to
slingshot() assumed that all cells in the dataset were part of a single curve.
To accommodate more complex events like bifurcations, we use our previously computed cluster assignments to build a rough sketch for the global structure in the form of an MST across the cluster centroids.
Each path through the MST from a designated root node is treated as a lineage that contains cells from the associated clusters.
Principal curves are then simultaneously fitted to all lineages with some averaging across curves to encourage consistency in shared clusters across lineages.
This process yields a matrix of pseudotimes where each column corresponds to a lineage and contains the pseudotimes of all cells assigned to that lineage.
sce.sling2 <- slingshot(sce.nest, cluster=colLabels(sce.nest), reducedDim='PCA')
pseudo.paths <- slingPseudotime(sce.sling2)
head(pseudo.paths)
##          curve1 curve2 curve3
## HSPC_025 107.11     NA     NA
## HSPC_031  95.38  101.6  117.1
## HSPC_037 103.74  104.1  109.3
## HSPC_008  99.25  115.7  103.9
## HSPC_014 103.07  111.0  105.7
## HSPC_020     NA  124.0     NA
By using the MST as a scaffold for the global structure,
slingshot() can accommodate branching events based on divergence in the principal curves (Figure 18.6).
However, unlike TSCAN, the MST here is only used as a rough guide and does not define the final pseudotime.
sce.nest <- runUMAP(sce.nest, dimred="PCA")
reducedDim(sce.sling2, "UMAP") <- reducedDim(sce.nest, "UMAP")

shared.pseudo <- rowMeans(pseudo.paths, na.rm=TRUE)

# Need to loop over the paths and add each one separately.
gg <- plotUMAP(sce.sling2, colour_by=I(shared.pseudo))
embedded <- embedCurves(sce.sling2, "UMAP")
embedded <- slingCurves(embedded)
for (path in embedded) {
    embedded <- data.frame(path$s[path$ord,])
    gg <- gg + geom_path(data=embedded, aes(x=Dim.1, y=Dim.2), size=1.2)
}

gg
Figure 18.6: UMAP plot of the Nestorowa HSC dataset where each point is a cell and is colored by the average slingshot pseudotime across paths. The principal curves fitted to each lineage are shown in black.
Applying an approximation with
approx_points= reduces computational work without any major loss of precision in the pseudotime estimates.
sce.sling3 <- slingshot(sce.nest, cluster=colLabels(sce.nest),
    reducedDim='PCA', approx_points=100)
pseudo.paths3 <- slingPseudotime(sce.sling3)
head(pseudo.paths3)
##          curve1 curve2 curve3
## HSPC_025 106.85     NA     NA
## HSPC_031  95.38  101.8  117.1
## HSPC_037 103.08  104.1  109.0
## HSPC_008  98.72  115.5  103.7
## HSPC_014 103.08  110.9  105.3
## HSPC_020     NA  123.5     NA
The MST can also be constructed with an OMEGA cluster to avoid connecting unrelated trajectories. This operates in the same manner as (and was the inspiration for) the outgroup for TSCAN’s MST. Principal curves are fitted through each component individually, manifesting in the pseudotime matrix as paths that do not share any cells.
sce.sling4 <- slingshot(sce.nest, cluster=colLabels(sce.nest),
    reducedDim='PCA', approx_points=100, omega=TRUE)
pseudo.paths4 <- slingPseudotime(sce.sling4)
head(pseudo.paths4)
##          curve1 curve2 curve3
## HSPC_025 111.83     NA     NA
## HSPC_031  96.16  99.78     NA
## HSPC_037 105.49 105.08     NA
## HSPC_008 102.00 117.28     NA
## HSPC_014 105.49 112.70     NA
## HSPC_020     NA 126.08     NA
shared.pseudo <- rowMeans(pseudo.paths4, na.rm=TRUE)

gg <- plotUMAP(sce.sling4, colour_by=I(shared.pseudo))
embedded <- embedCurves(sce.sling4, "UMAP")
embedded <- slingCurves(embedded)
for (path in embedded) {
    embedded <- data.frame(path$s[path$ord,])
    gg <- gg + geom_path(data=embedded, aes(x=Dim.1, y=Dim.2), size=1.2)
}

gg
Figure 18.7: UMAP plot of the Nestorowa HSC dataset where each point is a cell and is colored by the average slingshot pseudotime across paths. The principal curves (black lines) were constructed with an OMEGA cluster.
The use of principal curves adds an extra layer of sophistication that complements the deficiencies of the cluster-based MST. The principal curve has the opportunity to model variation within clusters that would otherwise be overlooked; for example, slingshot could build a trajectory out of one cluster while TSCAN cannot. Conversely, the principal curves can “smooth out” circuitous paths in the MST for overclustered data, ignoring small differences between fine clusters that are unlikely to be relevant to the overall trajectory.
That said, the structure of the initial MST is still fundamentally dependent on the resolution of the clusters. One can arbitrarily change the number of branches from slingshot by tuning the cluster granularity, making it difficult to use the output as evidence for the presence/absence of subtle branch events. If the variation within clusters is uninteresting, the greater sensitivity of the curve fitting to such variation may yield irrelevant trajectories where the differences between clusters are masked. Moreover, slingshot is no longer obliged to separate clusters in pseudotime, which may complicate interpretation of the trajectory with respect to existing cluster annotations.
18.3 Characterizing trajectories
18.3.1 Overview
Once we have constructed a trajectory, the next step is to characterize the underlying biology based on its DE genes. The aim here is to find the genes that exhibit significant changes in expression across pseudotime, as these are the most likely to have driven the formation of the trajectory in the first place. The overall strategy is to fit a model to the per-gene expression with respect to pseudotime, allowing us to obtain inferences about the significance of any association. We can then prioritize interesting genes as those with low \(p\)-values for further investigation. A wide range of options are available for model fitting but we will focus on the simplest approach of fitting a linear model to the log-expression values with respect to the pseudotime; we will discuss some of the more advanced models later.
18.3.2 Changes along a trajectory
To demonstrate, we will identify genes with significant changes with respect to one of the TSCAN pseudotimes in the Nestorowa data.
We use the
testPseudotime() utility to fit a natural spline to the expression of each gene,
allowing us to model a range of non-linear relationships in the data.
We then perform an analysis of variance (ANOVA) to determine if any of the spline coefficients are significantly non-zero,
i.e., there is some significant trend with respect to pseudotime.
library(TSCAN)
pseudo <- testPseudotime(sce.nest, pseudotime=tscan.pseudo[,1])
pseudo$SYMBOL <- rowData(sce.nest)$SYMBOL
pseudo[order(pseudo$p.value),]
## DataFrame with 46078 rows and 4 columns
##                         logFC      p.value          FDR      SYMBOL
##                     <numeric>    <numeric>    <numeric> <character>
## ENSMUSG00000029322 -0.0872517  0.00000e+00  0.00000e+00       Plac8
## ENSMUSG00000105231  0.0158450  0.00000e+00  0.00000e+00       Iglj3
## ENSMUSG00000076608  0.0118768 2.66618e-310 3.78002e-306       Igkj5
## ENSMUSG00000106668  0.0153919 2.54019e-300 2.70105e-296       Iglj1
## ENSMUSG00000022496  0.0229337 4.84822e-297 4.12418e-293    Tnfrsf17
## ...                       ...          ...          ...         ...
## ENSMUSG00000107367          0          NaN          NaN      Mir192
## ENSMUSG00000107372          0          NaN          NaN          NA
## ENSMUSG00000107381          0          NaN          NaN          NA
## ENSMUSG00000107382          0          NaN          NaN     Gm37714
## ENSMUSG00000107391          0          NaN          NaN        Rian
In practice, it is helpful to pair the spline-based ANOVA results with a fit from a much simpler model
where we assume that there exists a linear relationship between expression and the pseudotime.
This yields an interpretable summary of the overall direction of change in the
logFC field above,
complementing the more powerful spline-based model used to populate the
p.value field.
In contrast, the magnitude and sign of the spline coefficients cannot be easily interpreted.
To simplify the results, we will repeat our DE analysis after filtering out cluster 7. This cluster seems to contain a set of B cell precursors that are located at one end of the trajectory, causing immunoglobulins to dominate the set of DE genes and mask other interesting effects. Incidentally, this is the same cluster that was split into a separate component in the outgroup-based MST.
# Making a copy of our SCE and including the pseudotimes in the colData.
sce.nest2 <- sce.nest
sce.nest2$TSCAN.first <- tscan.pseudo[,1]
sce.nest2$TSCAN.second <- tscan.pseudo[,2]

# Discarding the offending cluster.
discard <- "7"
keep <- colLabels(sce.nest)!=discard
sce.nest2 <- sce.nest2[,keep]

# Testing against the first path again.
pseudo <- testPseudotime(sce.nest2, pseudotime=sce.nest2$TSCAN.first)
pseudo$SYMBOL <- rowData(sce.nest2)$SYMBOL
sorted <- pseudo[order(pseudo$p.value),]
Examination of the top downregulated genes suggests that this pseudotime represents a transition away from myeloid identity, based on the decrease in expression of genes such as Mpo and Plac8 (Figure 18.8).
up.left <- sorted[sorted$logFC < 0,]
head(up.left, 10)

## DataFrame with 10 rows and 4 columns
##                         logFC      p.value          FDR      SYMBOL
##                     <numeric>    <numeric>    <numeric> <character>
## ENSMUSG00000029322 -0.0951619  0.00000e+00  0.00000e+00       Plac8
## ENSMUSG00000009350 -0.1230460 6.07026e-245 1.28963e-240         Mpo
## ENSMUSG00000040314 -0.1247572 5.29679e-231 7.50202e-227        Ctsg
## ENSMUSG00000031722 -0.0772702 3.46925e-217 3.68521e-213          Hp
## ENSMUSG00000020125 -0.1055643 2.21357e-211 1.88109e-207       Elane
## ENSMUSG00000015937 -0.0439171 8.35182e-204 5.91448e-200       H2afy
## ENSMUSG00000035004 -0.0770322 8.34215e-201 5.06369e-197       Igsf6
## ENSMUSG00000045799 -0.0270218 8.85762e-197 4.70450e-193      Gm9800
## ENSMUSG00000026238 -0.0255206 1.31491e-194 6.20783e-191        Ptma
## ENSMUSG00000096544 -0.0264184 3.73314e-177 1.58621e-173      Gm4617
best <- head(up.left$SYMBOL, 10)
plotExpression(sce.nest2, features=best, swap_rownames="SYMBOL",
    x="TSCAN.first", colour_by="label")
Figure 18.8: Expression of the top 10 genes that decrease in expression with increasing pseudotime along the first path in the MST of the Nestorowa dataset. Each point represents a cell that is mapped to this path and is colored by the assigned cluster.
Conversely, the later parts of the pseudotime may correspond to a more stem-like state based on upregulation of genes like Hlf. There is also increased expression of genes associated with the lymphoid lineage (e.g., Ltb), consistent with reduced commitment to the myeloid lineage at earlier pseudotime values.
up.right <- sorted[sorted$logFC > 0,]
head(up.right, 10)

## DataFrame with 10 rows and 4 columns
##                        logFC      p.value          FDR      SYMBOL
##                    <numeric>    <numeric>    <numeric> <character>
## ENSMUSG00000047867 0.0869463 1.06721e-173 4.12235e-170      Gimap6
## ENSMUSG00000028716 0.1023233 4.76874e-172 1.68853e-168    Pdzk1ip1
## ENSMUSG00000086567 0.0294706 9.89947e-165 2.62893e-161      Gm2830
## ENSMUSG00000027562 0.0646994 5.91659e-156 1.04748e-152        Car2
## ENSMUSG00000006389 0.1096438 4.69440e-151 7.67174e-148         Mpl
## ENSMUSG00000037820 0.0702660 1.80467e-135 1.78327e-132        Tgm2
## ENSMUSG00000003949 0.0934931 3.07633e-126 2.37661e-123         Hlf
## ENSMUSG00000061232 0.0191498 1.24511e-125 9.44725e-123       H2-K1
## ENSMUSG00000044258 0.0557909 3.49882e-121 2.28715e-118      Ctla2a
## ENSMUSG00000024399 0.0998322 5.53699e-116 3.17928e-113         Ltb
best <- head(up.right$SYMBOL, 10)
plotExpression(sce.nest2, features=best, swap_rownames="SYMBOL",
    x="TSCAN.first", colour_by="label")
Figure 18.9: Expression of the top 10 genes that increase in expression with increasing pseudotime along the first path in the MST of the Nestorowa dataset. Each point represents a cell that is mapped to this path and is colored by the assigned cluster.
Alternatively, a heatmap can be used to provide a more compact visualization (Figure 18.10).
on.first.path <- !is.na(sce.nest2$TSCAN.first)

plotHeatmap(sce.nest2[,on.first.path], order_columns_by="TSCAN.first",
    colour_columns_by="label", features=head(up.right$SYMBOL, 50),
    center=TRUE, swap_rownames="SYMBOL")
Figure 18.10: Heatmap of the expression of the top 50 genes that increase in expression with increasing pseudotime along the first path in the MST of the Nestorowa HSC dataset. Each column represents a cell that is mapped to this path and is ordered by its pseudotime value.
18.3.3 Changes between paths
A more advanced analysis involves looking for differences in expression between paths of a branched trajectory. This is most interesting for cells close to the branch point between two or more paths, where the differential expression analysis may highlight the genes responsible for the branching event. The general strategy here is to fit one trend to the unique part of each path immediately following the branch point, followed by a comparison of the fits between paths.
To this end, a particularly tempting approach is to perform another ANOVA with our spline-based model and test for significant differences in the spline parameters between paths.
While this can be done with
testPseudotime(), the magnitude of the pseudotime has little comparability across paths.
A pseudotime value in one path of the MST does not, in general, have any relation to the same value in another path; the pseudotime can be arbitrarily “stretched” by factors such as the magnitude of DE or the density of cells, depending on the algorithm.
This compromises any comparison of trends as we cannot reliably say that they are being fitted to comparable \(x\)-axes.
Rather, we employ the much simpler ad hoc approach of fitting a spline to each trajectory and comparing the sets of DE genes. To demonstrate, we focus on the cluster containing the branch point in the Nestorowa-derived MST (Figure 18.2). We recompute the pseudotimes so that the root lies at the cluster center, allowing us to detect genes that are associated with the divergence of the branches.
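One way to perform this recomputation is to rerun orderCells() with an explicit starting node. The sketch below assumes the branch point lies in cluster 3, based on the MST edges examined later; the start= argument of TSCAN's orderCells() re-roots the ordering at that node.

```r
# Assumed recomputation: re-root the pseudotime ordering at the
# branch point cluster (cluster 3 in the Nestorowa-derived MST).
starter <- "3"
tscan.pseudo2 <- orderCells(map.tscan, mst, start=starter)
```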
We visualize the reordered pseudotimes using only the cells in our branch point cluster (Figure 18.11), which allows us to see the correspondence between each pseudotime to the projected edges of the MST.
A more precise determination of the identity of each pseudotime can be achieved by examining the column names of
tscan.pseudo2, which contains the name of the terminal node for the path of the MST corresponding to each column.
# Making a copy and giving the paths more friendly names.
sub.nest <- sce.nest
sub.nest$TSCAN.first <- tscan.pseudo2[,1]
sub.nest$TSCAN.second <- tscan.pseudo2[,2]
sub.nest$TSCAN.third <- tscan.pseudo2[,3]

# Subsetting to the desired cluster containing the branch point.
keep <- colLabels(sce.nest) == starter
sub.nest <- sub.nest[,keep]

# Showing only the lines to/from our cluster of interest.
line.data.sub <- line.data[grepl("^3--", line.data$edge) |
    grepl("--3$", line.data$edge),]
ggline <- geom_line(data=line.data.sub,
    mapping=aes(x=dim1, y=dim2, group=edge))

gridExtra::grid.arrange(
    plotTSNE(sub.nest, colour_by="TSCAN.first") + ggline,
    plotTSNE(sub.nest, colour_by="TSCAN.second") + ggline,
    plotTSNE(sub.nest, colour_by="TSCAN.third") + ggline,
    ncol=3
)
Figure 18.11: TSCAN-derived pseudotimes around cluster 3 in the Nestorowa HSC dataset. Each point is a cell in this cluster and is colored by its pseudotime value along the path to which it was assigned. The overlaid lines represent the relevant edges of the MST.
We then apply
testPseudotime() to each path involving cluster 3.
Because we are operating over a relatively short pseudotime interval, we do not expect complex trends and so we set
df=1 (i.e., a linear trend) to avoid problems from overfitting.
pseudo1 <- testPseudotime(sub.nest, df=1, pseudotime=sub.nest$TSCAN.first)
pseudo1$SYMBOL <- rowData(sce.nest)$SYMBOL
pseudo1[order(pseudo1$p.value),]
## DataFrame with 46078 rows and 5 columns
##                        logFC   logFC.1     p.value         FDR        SYMBOL
##                    <numeric> <numeric>   <numeric>   <numeric>   <character>
## ENSMUSG00000009350  0.332855  0.332855 2.67471e-18 9.59018e-14           Mpo
## ENSMUSG00000040314  0.475509  0.475509 5.65148e-16 1.01317e-11          Ctsg
## ENSMUSG00000064147  0.449444  0.449444 3.76156e-15 4.49569e-11         Rab44
## ENSMUSG00000026581  0.379946  0.379946 3.86978e-14 3.46877e-10          Sell
## ENSMUSG00000085611  0.266637  0.266637 7.51248e-12 5.38720e-08     Ap3s1-ps1
## ...                      ...       ...         ...         ...           ...
## ENSMUSG00000107387         0         0         NaN         NaN 5430435K18Rik
## ENSMUSG00000107391         0         0         NaN         NaN          Rian
pseudo2 <- testPseudotime(sub.nest, df=1, pseudotime=sub.nest$TSCAN.second)
pseudo2$SYMBOL <- rowData(sce.nest)$SYMBOL
pseudo2[order(pseudo2$p.value),]
## DataFrame with 46078 rows and 5 columns
##                         logFC    logFC.1     p.value         FDR      SYMBOL
##                     <numeric>  <numeric>   <numeric>   <numeric> <character>
## ENSMUSG00000027342 -0.1265815 -0.1265815 1.14035e-11 4.01425e-07        Pcna
## ENSMUSG00000025747 -0.3693852 -0.3693852 5.06241e-09 6.43725e-05        Tyms
## ENSMUSG00000020358 -0.1001289 -0.1001289 6.95055e-09 6.43725e-05    Hnrnpab
## ENSMUSG00000035198 -0.4166721 -0.4166721 7.31465e-09 6.43725e-05       Tubg1
## ENSMUSG00000045799 -0.0452833 -0.0452833 5.43487e-08 3.19298e-04      Gm9800
## ...                       ...        ...         ...         ...         ...
## ENSMUSG00000107386          0          0         NaN         NaN     Gm42800
## ENSMUSG00000107391          0          0         NaN         NaN        Rian
pseudo3 <- testPseudotime(sub.nest, df=1, pseudotime=sub.nest$TSCAN.third)
pseudo3$SYMBOL <- rowData(sce.nest)$SYMBOL
pseudo3[order(pseudo3$p.value),]
## DataFrame with 46078 rows and 5 columns
##                        logFC   logFC.1     p.value         FDR        SYMBOL
## ENSMUSG00000015937 -0.163091 -0.163091 1.18901e-13 2.00860e-09         H2afy
## ENSMUSG00000002985  0.351661  0.351661 7.64160e-13 8.60597e-09          Apoe
## ENSMUSG00000053168 -0.398684 -0.398684 8.17626e-12 6.90608e-08 9030619P08Rik
## ENSMUSG00000029247 -0.137079 -0.137079 1.78997e-10 1.06448e-06         Paics
## ...                      ...       ...         ...         ...           ...
## ENSMUSG00000107381         0         0         NaN         NaN            NA
## ENSMUSG00000107382         0         0         NaN         NaN       Gm37714
## ENSMUSG00000107384         0         0         NaN         NaN       Gm42557
## ENSMUSG00000107387         0         0         NaN         NaN 5430435K18Rik
## ENSMUSG00000107391         0         0         NaN         NaN          Rian
We want to find genes that are significant in our path of interest (for this demonstration, the third path reported by TSCAN) and are not significant and/or changing in the opposite direction in the other paths. We use the raw \(p\)-values to look for non-significant genes in order to increase the stringency of the definition of unique genes in our path.
only3 <- pseudo3[which(pseudo3$FDR <= 0.05 &
    (pseudo1$p.value >= 0.05 | sign(pseudo1$logFC)!=sign(pseudo3$logFC)) &
    (pseudo2$p.value >= 0.05 | sign(pseudo2$logFC)!=sign(pseudo3$logFC))),]
only3[order(only3$p.value),]
## DataFrame with 64 rows and 5 columns
##                        logFC   logFC.1     p.value         FDR      SYMBOL
## ENSMUSG00000002985  0.351661  0.351661 7.64160e-13 8.60597e-09        Apoe
## ENSMUSG00000016494 -0.248953 -0.248953 1.89039e-10 1.06448e-06        Cd34
## ENSMUSG00000000486 -0.217213 -0.217213 1.24423e-09 5.25468e-06       Sept1
## ENSMUSG00000021728 -0.293032 -0.293032 3.56762e-09 1.20535e-05         Emb
## ...                      ...       ...         ...         ...         ...
## ENSMUSG00000004609 -0.205262 -0.205262 0.000118937   0.0422992        Cd33
## ENSMUSG00000083657  0.100788  0.100788 0.000145710   0.0484448     Gm12245
## ENSMUSG00000023942 -0.144269 -0.144269 0.000146255   0.0484448     Slc29a1
## ENSMUSG00000091408 -0.149411 -0.149411 0.000157634   0.0499154      Gm6728
## ENSMUSG00000053559  0.135833  0.135833 0.000159559   0.0499154       Smagp
We observe upregulation of interesting genes such as Gata2, Cd9 and Apoe in this path, along with downregulation of Flt3 (Figure 18.12). One might speculate that this path leads to a less differentiated HSC state compared to the other directions.
gridExtra::grid.arrange(
    plotTSNE(sub.nest, colour_by="Flt3", swap_rownames="SYMBOL") + ggline,
    plotTSNE(sub.nest, colour_by="Apoe", swap_rownames="SYMBOL") + ggline,
    plotTSNE(sub.nest, colour_by="Gata2", swap_rownames="SYMBOL") + ggline,
    plotTSNE(sub.nest, colour_by="Cd9", swap_rownames="SYMBOL") + ggline
)
Figure 18.12: \(t\)-SNE plots of cells in the cluster containing the branch point of the MST in the Nestorowa dataset. Each point is a cell colored by the expression of a gene of interest and the relevant edges of the MST are overlaid on top.
While simple and practical, this comparison strategy is even less statistically defensible than usual. The differential testing machinery is not suited to making inferences on the absence of differences, and we should not have used the non-significant genes to draw any conclusions. Another limitation is that this approach cannot detect differences in the magnitude of the gradient of the trend between paths; a gene that is significantly upregulated in each of two paths but with a sharper gradient in one of the paths will not be DE. (Of course, this is only a limitation if the pseudotimes were comparable in the first place.)
18.3.4 Further comments
The magnitudes of the \(p\)-values reported here should be treated with some skepticism. The same fundamental problems discussed in Section 11.5 remain; the \(p\)-values are computed from the same data used to define the trajectory, and there is only a sample size of 1 in this analysis regardless of the number of cells. Nonetheless, the \(p\)-value is still useful for prioritizing interesting genes in the same manner that it is used to identify markers between clusters.
The previous sections have focused on a very simple and efficient (yet largely effective) approach to trend fitting.
Alternatively, we can use more complex strategies that involve various generalizations to the concept of linear models.
For example, generalized additive models (GAMs) are quite popular for pseudotime-based DE analyses
as they are able to handle non-normal noise distributions and a greater diversity of non-linear trends.
We demonstrate the use of the GAM implementation from the tradeSeq package on the Nestorowa dataset below.
Specifically, we will take a leap of faith and assume that our pseudotime values are comparable across paths of the MST,
allowing us to use the
patternTest() function to test for significant differences in expression between paths.
# Getting rid of the NA's; using the cell weights
# to indicate which cell belongs on which path.
nonna.pseudo <- tscan.pseudo
nonna.pseudo[is.na(nonna.pseudo)] <- 0
cell.weights <- !is.na(tscan.pseudo)
storage.mode(cell.weights) <- "numeric"

# Fitting a GAM on the subset of genes for speed.
library(tradeSeq)
fit <- fitGAM(counts(sce.nest)[1:100,], pseudotime=nonna.pseudo,
    cellWeights=cell.weights)

res <- patternTest(fit)
res$Symbol <- rowData(sce.nest)[1:100,"SYMBOL"]
res <- res[order(res$pvalue),]
head(res, 10)
##                    waldStat df pvalue fcMedian  Symbol
## ENSMUSG00000000028   275.03  6      0   1.5507   Cdc45
## ENSMUSG00000000058   124.99  6      0   1.3323    Cav2
## ENSMUSG00000000078   188.82  6      0   0.9602    Klf6
## ENSMUSG00000000088   122.82  6      0   0.5421   Cox5a
## ENSMUSG00000000184   216.14  6      0   0.2182   Ccnd2
## ENSMUSG00000000247   108.03  6      0   0.2142    Lhx2
## ENSMUSG00000000248   131.13  6      0   1.2077  Clec2g
## ENSMUSG00000000278   201.74  6      0   2.0016  Scpep1
## ENSMUSG00000000303   111.59  6      0   1.1410    Cdh1
## ENSMUSG00000000318    89.31  6      0   1.1405 Clec10a
From a statistical perspective, the GAM is superior to linear models as the former uses the raw counts. This accounts for the idiosyncrasies of the mean-variance relationship for low counts and avoids some problems with spurious trajectories introduced by the log-transformation (Section 7.5.1). However, this sophistication comes at the cost of increased complexity and compute time, requiring parallelization via BiocParallel even for relatively small datasets.
When a trajectory consists of a series of clusters (as in the Nestorowa dataset), pseudotime-based DE tests can be considered a continuous generalization of cluster-based marker detection. One would expect to identify similar genes by performing an ANOVA on the per-cluster expression values, and indeed, this may be a more interpretable approach as it avoids imposing the assumption that a trajectory exists at all. The main benefit of pseudotime-based tests is that they encourage expression to be a smooth function of pseudotime, assuming that the degrees of freedom in the trend fit prevents overfitting. This smoothness reflects an expectation that changes in expression along a trajectory should be gradual.
18.4 Finding the root
18.4.1 Overview
The pseudotime calculations rely on some specification of the root of the trajectory to define “position zero”. In some cases, this choice has little effect beyond flipping the sign of the gradients of the DE genes. In other cases, this choice may be necessarily arbitrary depending on the questions being asked, e.g., what are the genes driving the transition to or from a particular part of the trajectory? However, in situations where the trajectory is associated with a time-dependent biological process, the position on the trajectory corresponding to the earliest timepoint is clearly the best default choice for the root. This simplifies interpretation by allowing the pseudotime to be treated as a proxy for real time.
18.4.2 Entropy-based methods
Trajectories are commonly used to characterize differentiation where branches are interpreted as multiple lineages. In this setting, the root of the trajectory is best set to the “start” of the differentiation process, i.e., the most undifferentiated state that is observed in the dataset. It is usually possible to identify this state based on the genes that are expressed at each point of the trajectory. However, when such prior biological knowledge is not available, we can fall back to the more general concept that undifferentiated cells have more diverse expression profiles (Gulati et al. 2020). The assumption is that terminally differentiated cells have expression profiles that are highly specialized for their function while multipotent cells have no such constraints - and indeed, may need to have active expression programs for many lineages in preparation for commitment to any of them.
We quantify the diversity of expression by computing the entropy of each cell’s expression profile (Grun et al. 2016; Guo et al. 2017; Teschendorff and Enver 2017), with higher entropies representing greater diversity. We demonstrate on the Nestorowa HSC dataset (Figure 18.13) where clusters 5 and 8 have the highest entropies, suggesting that they represent the least differentiated states within the trajectory. It is also reassuring that these two clusters are adjacent on the MST (Figure 18.1), which is consistent with branched differentiation “away” from a single root.
library(TSCAN)
entropy <- perCellEntropy(sce.nest)

ent.data <- data.frame(cluster=colLabels(sce.nest), entropy=entropy)
ggplot(ent.data, aes(x=cluster, y=entropy)) +
    geom_violin() +
    coord_cartesian(ylim=c(7, NA)) +
    stat_summary(fun=median, geom="point")
Figure 18.13: Distribution of per-cell entropies for each cluster in the Nestorowa dataset. The median entropy for each cluster is shown as a point in the violin plot.
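The entropy computation itself is simple to state: convert each cell's counts into proportions and take the Shannon entropy. The sketch below (written in Python/numpy purely for illustration; the analysis above uses TSCAN's perCellEntropy in R) shows the idea for a genes-by-cells count matrix:

```python
import numpy as np

def per_cell_entropy(counts):
    """Shannon entropy of each cell's expression profile.

    counts: genes x cells matrix of non-negative counts.
    Returns one entropy value per cell (column).
    """
    counts = np.asarray(counts, dtype=float)
    totals = counts.sum(axis=0, keepdims=True)
    # Proportion of each cell's counts assigned to each gene.
    p = np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        logp = np.where(p > 0, np.log(p), 0.0)
    return -(p * logp).sum(axis=0)

# A cell spreading its counts evenly over many genes ("undifferentiated")
# has higher entropy than one dominated by a single gene ("specialized").
diverse = np.ones(100)          # uniform over 100 genes
specialized = np.zeros(100)
specialized[0] = 100.0          # all counts in one gene
ent = per_cell_entropy(np.column_stack([diverse, specialized]))
# ent[0] == log(100) ~ 4.6; ent[1] == 0
```

As discussed below, the raw magnitude of this quantity still depends on sequencing depth, so comparisons are only meaningful between cells of similar coverage.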
Of course, this interpretation is fully dependent on whether the underlying assumption is reasonable. While the association between diversity and differentiation potential is likely to be generally applicable, it may not be sufficiently precise to enable claims on the relative potency of closely related subpopulations. Indeed, other processes such as stress or metabolic responses may interfere with the entropy comparisons. Furthermore, at low counts, the magnitude of the entropy is dependent on sequencing depth in a manner that cannot be corrected by scaling normalization. Cells with lower coverage will have lower entropy even if the underlying transcriptional diversity is the same, which may confound the interpretation of entropy as a measure of potency.
18.4.3 RNA velocity
Another strategy is to use the concept of “RNA velocity” to identify the root (La Manno et al. 2018). For a given gene, a high ratio of unspliced to spliced transcripts indicates that that gene is being actively upregulated, under the assumption that the increase in transcription exceeds the capability of the splicing machinery to process the pre-mRNA. Conversely, a low ratio indicates that the gene is being downregulated as the rate of production and processing of pre-mRNAs cannot compensate for the degradation of mature transcripts. Thus, we can infer that cells with high and low ratios are moving towards a high- and low-expression state, respectively, allowing us to assign directionality to any trajectory or even individual cells.
To demonstrate, we will use matrices of spliced and unspliced counts from Hermann et al. (2018). The unspliced count matrix is most typically generated by counting reads across intronic regions, thus quantifying the abundance of nascent transcripts for each gene in each cell. The spliced counts are obtained in a more standard manner by counting reads aligned to exonic regions; however, some extra thought is required to deal with reads spanning exon-intron boundaries, as well as reads mapping to regions that can be either intronic or exonic depending on the isoform (???). Conveniently, both matrices have the same shape and thus can be stored as separate assays in our usual SingleCellExperiment.
library(scRNAseq)
sce.sperm <- HermannSpermatogenesisData(strip=TRUE, location=TRUE)
assayNames(sce.sperm)
## [1] "spliced" "unspliced"
We run through a quick-and-dirty analysis on the spliced counts, which can - by and large - be treated in the same manner as the standard exonic gene counts used in non-velocity-aware analyses. Alternatively, if the standard exonic count matrix was available, we could just use it directly in these steps and restrict the involvement of the spliced/unspliced matrices to the velocity calculations. The latter approach is logistically convenient when adding an RNA velocity section to an existing analysis, such that the prior steps (and the interpretation of their results) do not have to be repeated on the spliced count matrix.
# Quality control:
library(scuttle)
is.mito <- which(seqnames(sce.sperm)=="MT")
sce.sperm <- addPerCellQC(sce.sperm, subsets=list(Mt=is.mito), assay.type="spliced")
qc <- quickPerCellQC(colData(sce.sperm), sub.fields=TRUE)
sce.sperm <- sce.sperm[,!qc$discard]

# Normalization:
set.seed(10000)
library(scran)
sce.sperm <- logNormCounts(sce.sperm, assay.type="spliced")
dec <- modelGeneVarByPoisson(sce.sperm, assay.type="spliced")
hvgs <- getTopHVGs(dec, n=2500)

# Dimensionality reduction:
set.seed(1000101)
library(scater)
sce.sperm <- runPCA(sce.sperm, ncomponents=25, subset_row=hvgs)
sce.sperm <- runTSNE(sce.sperm, dimred="PCA")
We use the velociraptor package to perform the velocity calculations on this dataset via the scvelo Python package (Bergen et al. 2019). scvelo offers some improvements over the original implementation of RNA velocity by La Manno et al. (2018), most notably eliminating the need for observed subpopulations at steady state (i.e., where the rates of transcription, splicing and degradation are equal). velociraptor conveniently wraps this functionality by providing a function that accepts a SingleCellExperiment object such as sce.sperm and returns a similar object decorated with the velocity statistics.
library(velociraptor)
velo.out <- scvelo(sce.sperm, assay.X="spliced", subset.row=hvgs, use.dimred="PCA")
velo.out
## class: SingleCellExperiment 
## dim: 2500 2175 
## metadata(4): neighbors velocity_params velocity_graph
##   velocity_graph_neg
## assays(6): X spliced ... Mu velocity
## rownames(2500): ENSMUSG00000038015 ENSMUSG00000022501 ...
##   ENSMUSG00000095650 ENSMUSG00000002524
## rowData names(3): velocity_gamma velocity_r2 velocity_genes
## colnames(2175): CCCATACTCCGAAGAG AATCCAGTCATCTGCC ... ATCCACCCACCACCAG
##   ATTGGTGGTTACCGAT
## colData names(7): velocity_self_transition root_cells ...
##   velocity_confidence velocity_confidence_transition
## reducedDimNames(1): X_pca
## altExpNames(0):
The primary output is the matrix of velocity vectors that describe the direction and magnitude of transcriptional change for each cell. To construct an ordering, we extrapolate from the vector for each cell to determine its future state. Roughly speaking, if a cell’s future state is close to the observed state of another cell, we place the former behind the latter in the ordering. This yields a “velocity pseudotime” that provides directionality without the need to explicitly define a root in our trajectory. We visualize this procedure in Figure 18.14 by embedding the estimated velocities into any low-dimensional representation of the dataset.
sce.sperm$pseudotime <- velo.out$velocity_pseudotime

# Also embedding the velocity vectors, for some verisimilitude.
embedded <- embedVelocity(reducedDim(sce.sperm, "TSNE"), velo.out)
grid.df <- gridVectors(reducedDim(sce.sperm, "TSNE"), embedded, resolution=30)

library(ggplot2)
plotTSNE(sce.sperm, colour_by="pseudotime", point_alpha=0.3) +
    geom_segment(data=grid.df,
        mapping=aes(x=start.1, y=start.2, xend=end.1, yend=end.2),
        arrow=arrow(length=unit(0.05, "inches"), type="closed"))
Figure 18.14: \(t\)-SNE plot of the Hermann spermatogenesis dataset, where each point is a cell and is colored by its velocity pseudotime. Arrows indicate the direction and magnitude of the velocity vectors, averaged over nearby cells.
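Conceptually, the extrapolation step projects each cell forward along its velocity vector and asks which observed cell its future state most resembles. The toy numpy sketch below illustrates that idea only; scvelo's actual pseudotime is computed from a probabilistic velocity graph, not a hard nearest-neighbour match:

```python
import numpy as np

def likely_successor(X, V):
    """Toy directionality from velocity vectors.

    X: cells x dims observed expression positions; V: matching velocity
    vectors. Each cell's extrapolated future state (X + V) is matched to
    the nearest *other* observed cell, which we call its successor.
    """
    future = X + V
    # Pairwise distances between each future state and every observed cell.
    d = np.linalg.norm(future[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # a cell cannot succeed itself
    return d.argmin(axis=1)

# Three cells along a line, all moving "right": ordering 0 -> 1 -> 2.
X = np.array([[0.0], [1.0], [2.0]])
V = np.array([[1.0], [1.0], [1.0]])
succ = likely_successor(X, V)
# Cell 0's future lands on cell 1, and cell 1's on cell 2.
```

Chaining such successor relationships across many cells is what yields an ordering without ever naming a root explicitly.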
While we could use the velocity pseudotimes directly in our downstream analyses, it is often helpful to pair this information with other trajectory analyses. This is because the velocity calculations are done on a per-cell basis but interpretation is typically performed at a lower granularity, e.g., per cluster or lineage. For example, we can overlay the average velocity pseudotime for each cluster onto our TSCAN-derived MST (Figure 18.15) to identify the likely root clusters. More complex analyses can also be performed (e.g., to identify the likely fate of each cell in the intermediate clusters) but will not be discussed here.
library(bluster)
colLabels(sce.sperm) <- clusterRows(reducedDim(sce.sperm, "PCA"), NNGraphParam())

library(TSCAN)
mst <- TSCAN::createClusterMST(sce.sperm, use.dimred="PCA", outgroup=TRUE)

# Could also use velo.out$root_cell here, for a more direct measure of 'rootness'.
by.cluster <- split(sce.sperm$pseudotime, colLabels(sce.sperm))
mean.by.cluster <- vapply(by.cluster, mean, 0)
mean.by.cluster <- mean.by.cluster[names(igraph::V(mst))]
color.by.cluster <- viridis::viridis(21)[cut(mean.by.cluster, 21)]

set.seed(1001)
plot(mst, vertex.color=color.by.cluster)
Figure 18.15: TSCAN-derived MST created from the Hermann spermatogenesis dataset. Each node is a cluster and is colored by the average velocity pseudotime of all cells in that cluster, from lowest (purple) to highest (yellow).
Needless to say, this lunch is not entirely free. The inferences rely on a sophisticated mathematical model that has a few assumptions, the most obvious of which is that the transcriptional dynamics are the same across subpopulations. The use of unspliced counts increases the sensitivity of the analysis to unannotated transcripts (e.g., microRNAs in the gene body), intron retention events, annotation errors or quantification ambiguities (Soneson et al. 2020) that could interfere with the velocity calculations. There is also the question of whether there is enough intronic coverage to reliably estimate the velocity for the relevant genes for the process of interest, and if not, whether this lack of information may bias the resulting velocity estimates. From a purely practical perspective, the main difficulty with RNA velocity is that the unspliced counts are often unavailable.
18.4.4 Real timepoints
There does, however, exist a gold-standard approach to rooting a trajectory: simply collect multiple real-life timepoints over the course of a biological process and use the population(s) at the earliest time point as the root. This approach experimentally defines a link between pseudotime and real time without requiring any further assumptions. To demonstrate, we will use the activated T cell dataset from Richard et al. (2018) where they collected CD8+ T cells at various time points after ovalbumin stimulation.
library(scRNAseq)
sce.richard <- RichardTCellData()
sce.richard <- sce.richard[,sce.richard$`single cell quality`=="OK"]

# Only using cells treated with the highest affinity peptide
# plus the unstimulated cells as time zero.
sub.richard <- sce.richard[,sce.richard$stimulus %in%
    c("OT-I high affinity peptide N4 (SIINFEKL)", "unstimulated")]
sub.richard$time[is.na(sub.richard$time)] <- 0
table(sub.richard$time)
## 
##  0  1  3  6 
## 44 51 64 91
We run through the standard workflow for single-cell data with spike-ins - see Sections 7.4 and 8.2.3 for more details.
library(scran)
sub.richard <- computeSpikeFactors(sub.richard, "ERCC")
sub.richard <- logNormCounts(sub.richard)
dec.richard <- modelGeneVarWithSpikes(sub.richard, "ERCC")
top.hvgs <- getTopHVGs(dec.richard, prop=0.2)
sub.richard <- denoisePCA(sub.richard, technical=dec.richard, subset.row=top.hvgs)
We can then run our trajectory inference method of choice. As we expect a fairly simple trajectory, we will keep matters simple and use slingshot() without any clusters. This yields a pseudotime that is strongly associated with real time (Figure 18.16) and from which it is straightforward to identify the best location of the root. The rooted trajectory can then be used to determine the "real time equivalent" of other activation stimuli; see Richard et al. (2018) for more details.
sub.richard <- slingshot(sub.richard, reducedDim="PCA")
plot(sub.richard$time, sub.richard$slingPseudotime_1,
    xlab="Time (hours)", ylab="Pseudotime")
Figure 18.16: Pseudotime as a function of real time in the Richard T cell dataset.
Of course, this strategy relies on careful experimental design to ensure that multiple timepoints are actually collected. This requires more planning and resources (i.e., cost!) and is frequently absent from many scRNA-seq studies that only consider a single "snapshot" of the system. Generation of multiple timepoints also requires an amenable experimental system where the initiation of the process of interest can be tightly controlled. This is often more complex to set up than a strictly observational study, though having causal information arguably makes the data more useful for making inferences.
Bibliography
Bergen, Volker, Marius Lange, Stefan Peidli, F. Alexander Wolf, and Fabian J. Theis. 2019. "Generalizing RNA Velocity to Transient Cell States Through Dynamical Modeling." bioRxiv.
Gulati, G. S., S. S. Sikandar, D. J. Wesche, A. Manjunath, A. Bharadwaj, M. J. Berger, F. Ilagan, et al. 2020. “Single-cell transcriptional diversity is a hallmark of developmental potential.” Science 367 (6476): 405–11.
Guo, M., E. L. Bao, M. Wagner, J. A. Whitsett, and Y. Xu. 2017. “SLICE: determining cell differentiation and lineage based on single cell entropy.” Nucleic Acids Res. 45 (7): e54.
Hastie, T., and W. Stuetzle. 1989. “Principal Curves.” J Am Stat Assoc 84 (406): 502–16.
Hermann, B. P., K. Cheng, A. Singh, L. Roa-De La Cruz, K. N. Mutoji, I. C. Chen, H. Gildersleeve, et al. 2018. “The Mammalian Spermatogenesis Single-Cell Transcriptome, from Spermatogonial Stem Cells to Spermatids.” Cell Rep 25 (6): 1650–67.
La Manno, G., R. Soldatov, A. Zeisel, E. Braun, H. Hochgerner, V. Petukhov, K. Lidschreiber, et al. 2018. "RNA velocity of single cells." Nature 560 (7719): 494–98.
Saelens, W., R. Cannoodt, H. Todorov, and Y. Saeys. 2019. “A comparison of single-cell trajectory inference methods.” Nat. Biotechnol. 37 (5): 547–54.
Soneson, C., A. Srivastava, R. Patro, and M. B. Stadler. 2020. "Preprocessing Choices Affect RNA Velocity Results for Droplet scRNA-Seq Data." bioRxiv.
Teschendorff, A. E., and T. Enver. 2017. “Single-cell entropy for accurate estimation of differentiation potency from a cell’s transcriptome.” Nat Commun 8 (June): 15599. | https://bioconductor.org/books/release/OSCA/trajectory-analysis.html | CC-MAIN-2021-17 | refinedweb | 8,551 | 51.04 |
Introduction to Tkinter geometry
The Tkinter geometry() method is one of the toolkit's built-in methods and is used to set the dimensions of a Tkinter window. In a desktop application it sets the size of the main application window, while related widget options control fonts, styles, and colors in the screen layout. The window's contents are arranged in rows and columns of widgets that make up the presentation of the application.
Syntax
Tkinter is one of Python's most powerful GUI toolkits and ships with an extensive library of methods used around the world. Among these, geometry() is a standard method: it can be called directly on any window that needs a specific size, wherever that is required. It has its own syntax and default parameters.
from tkinter import *   # import everything from the tkinter module

root = Tk()
root.geometry("300x200")   # "widthxheight" in pixels
root.mainloop()
The code above is the basic pattern for creating a window with the geometry() function from the Tkinter library; the same module provides all the other widgets used to present a desktop application.
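The geometry string itself has the form "widthxheight", optionally followed by "+x+y" screen offsets (e.g. "300x200+100+50"). The helper below is a small sketch, plain string arithmetic with nothing Tkinter-specific, that builds such a string to centre a window; applying it needs a display, so that part is commented out:

```python
def centered_geometry(win_w, win_h, screen_w, screen_h):
    """Build a Tk geometry string "WxH+X+Y" that centres a win_w x win_h
    window on a screen_w x screen_h screen."""
    x = (screen_w - win_w) // 2
    y = (screen_h - win_h) // 2
    return f"{win_w}x{win_h}+{x}+{y}"

geo = centered_geometry(300, 200, 1920, 1080)
print(geo)   # 300x200+810+440

# To apply it (requires a display):
# import tkinter as tk
# root = tk.Tk()
# root.geometry(geo)
# root.mainloop()
```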
How does the geometry method work in Tkinter?
The Tkinter package has a set of built-in functions that cover the UI parts of desktop applications. A UI is typically decorated with text boxes, labels, buttons, views, scroll boxes, and so on; a button is usually the element that triggers the application logic, which runs in the backend once it is clicked. The geometry() method is one of the UI-related widget methods: it takes the window's dimensions as its argument. It works together with a geometry manager (also known as the pack manager), a set of predefined methods used to lay widgets out on the screen in the required positions; nested frames can be used to cover the entire window.
Each frame can use a different layout pattern, so both simple and complex screen layouts can be created, and frames can be nested to group widgets together (extra frames and so on). Under the grid manager, the rows and columns hold a grid of values. The fill option lets a widget claim otherwise unused space automatically, reducing gaps on the screen: fill=X stretches a widget horizontally, fill=Y stretches it vertically, and fill=BOTH stretches it in both directions across the available rows and columns.
Constructor
The geometry() call is usually made from a class constructor, with the width and height values passed in at runtime.
class ClassName(tk.Tk):
    def __init__(self, first_argument, second_argument):
        super().__init__()
        self.geometry("300x200")

The method can also be called on a window object directly, e.g. gui.geometry("300x200").
The code above creates the object, passes the arguments into the constructor, and from there the other defined methods, including geometry(), are called.
Methods
As discussed in the previous paragraphs, the geometry() method works alongside a geometry manager, which offers more than the plain pack widgets. The manager uses a handful of default options such as fill, side, and expand, which control the layout of widgets within the sized window.
root.geometry("300x200")
widget.pack(fill=X, expand=2)   # example pack() call; these options come with the geometry manager
Examples of Tkinter geometry
Here are the following examples mention below
Example #1
Code:
import tkinter as tk
from tkinter import ttk

def demo(events):
    print("Please select the new events from the list")

first = tk.Tk()
first.geometry('310x110')

def month():
    example["values"] = ["first", "second", "third", "four"]

lists = tk.Label(first, text="Welcome User please select list of values")
lists.grid(column=5, row=5)

example = ttk.Combobox(first, values=["first", "second", "third", "four"], postcommand=month)
example.grid(column=5, row=5)

first.mainloop()
Output:
Example #2
Code:
from tkinter import *

first = Tk()
first.geometry('120x213')

second = Button(first, text='Welcome To My Domain', font=('Courier', 7, 'bold'))
second.pack(side=TOP, pady=8)

lists = Listbox(first)
lists.pack()
for i in range(12):
    lists.insert(END, str(i))

mainloop()
Output:
Example #3
Code:
from tkinter import *
import tkinter as tk
from tkinter import ttk

first = Tk()
first.geometry("132x110")

frm = Frame(first)
frm.pack()
frm1 = Frame(first)
frm1.pack(side=RIGHT)
frm2 = Frame(first)
frm2.pack(side=LEFT)

butn = Button(frm, text="Welcome", fg="pink", activebackground="green")
butn.pack(side=LEFT)
butn1 = Button(frm, text="To", fg="orange", activebackground="red")
butn1.pack(side=RIGHT)
butn2 = Button(frm1, text="My", fg="green", activebackground="violet")
butn2.pack(side=RIGHT)
butn3 = Button(frm2, text="Domain", fg="blue", activebackground="yellow")
butn3.pack(side=LEFT)

comb = tk.Label(first, text="Welcome Back", background='pink', foreground="pink",
    font=("Times New Roman", 17))
result = Button(first, text="Have a Nice Day")
example = ttk.Combobox(first, values=["first", "second", "third", "four"])

first.mainloop()
Output:
The three examples above use the geometry() method in different scenarios. Alongside the window size passed as an argument, they use the other layout options and show their usage; those options come from the geometry manager package and work together with the other default widgets.
Conclusion
The geometry() method serves both basic and advanced application purposes, and together with the other Tkinter libraries it can create many kinds of user-friendly GUI layouts. Tkinter supports most operating systems, and its libraries are imported wherever they are used in a Python script.
Recommended Articles
This is a guide to Tkinter geometry. Here we discuss How does the geometry method work in Tkinter and Examples along with the Codes and Outputs. You may also have a look at the following articles to learn more – | https://www.educba.com/tkinter-geometry/ | CC-MAIN-2022-33 | refinedweb | 1,157 | 53.61 |
The form module is an extended replacement for the standard Python cgi module, providing robust form-handling features in order to make writing secure form-handling CGIs in Python less work.
The idea is to define the kind of data you want returned for each field of the form. This definition is done using a mapping of form field names to datatypes (fdefs), which is passed to the main function, readForm. This call reads CGI input and interprets it, returning a mapping of field names to values.
form also fully supports [multiple] file-upload fields, image-submit fields and embedding values in names, protects against some denial-of-service problems common to CGI scripting, and provides miscellaneous utility functions useful to CGI programmers. It has been proven to cope with very large input sets.
form and cgi have completely different interfaces and are not compatible. form works at a somewhat higher level than cgi. Its ease of use comes at the expense of disallowing direct access to the exact submitted data.
The main advantage is that the returned values from reading a form submission are guaranteed to conform to your specifications, regardless of how malformed the submission may have been. This reduces the error-checking necessary to produce error-free scripts. The abstraction of datatype from submission data also allows some elements in an HTML form to be changed without having to re-write the corresponding CGI.
cgi is part of the standard distribution and so guaranteed available without having to add any modules. It easily suffices for writing simple forms. form is more complicated than cgi, so it may be more likely to have bugs in it, although none are currently known. form is also not suitable for applications where you don't know the names of the submitted fields in advance (e.g. generic form-to-mail scripts).
A user sign-up form might be read like this:
import form

fdefs = {
    'email':      (form.STRING, 128),
    'username':   (form.STRING, 16),
    'password':   (form.STRING, 16),
    'sex':        (form.ENUM, ['m', 'f'], 'f'),
    'age':        form.INT,
    'sendmespam': form.BOOL
}
fvals = form.readForm(fdefs)

if fvals.username == '':
    errorPage('You forgot to enter a user name.')
if allUsers.has_key(fvals.username):
    errorPage('Sorry, someone has already had that user name')
# and so on
Each item in an fdefs dictionary defines one form field. The key should be the same as the name property in the HTML form, which should not normally contain a period or colon (see 2.2). The value of the item dictates the datatype to be returned. readForm returns a dictionary-like object with the names of the fields as keys. The type of the values depends on which type was requested for that field in the fdefs. You can read the returned object like a dictionary (fvals['address']) or like an object (fvals.address); it makes no difference.
In the case where a field is included more than once in a submission but a list-of-values submission (form.LIST) was not expected, the last field in the input takes precedence.
The following field types are available. Some of them take parameters, which you can specify by putting the type in a tuple, with the parameters following. If you are not passing parameters, you can use the type name on its own or in a singleton tuple, it doesn't matter which.
For input type=text or password. Return a string of maximum length length characters, with all characters in the exclude string removed. You can omit the exclude string to allow all (non-control) characters. You can omit length or set it to 0 to allow any length string; it's mostly there so you can copy the value into a database without having to worry about it being too big to fit.
For textarea. As form.STRING, but single newlines are converted to space, and double newlines are converted to a Python '\n'. Other control characters are still removed.
For select and input type="radio". Return one of the list of string values passed if it matches the input, else return the default value, which can be of any type. If the default is not supplied, '' is used as the default.
For input type="checkbox" with no value property. The value returned is a boolean object which evaluates true if the input value for this field was 'on', else false.
For select multiple and multiple fields with the same name (especially checkboxes). Return a list of the non-empty input strings given for this field.
For input type="image". Return a tuple (x, y) of the position of the click, clipped to within (0:width, 0:height) if the (width, height) tuple is supplied. Returns (0, 0) if the input field was supplied but without x and y co-ords, or (-1, -1) if the field was not in the input at all.
For input type="file". Fills the given directory with files uploaded through the field, and returns a list of tuples (storedFile, suppliedFile, mimeType, length). The suppliedFile filename may be '' if no filename was specified. storedFile is the full pathname of the stored file. The list is empty if no files were uploaded, and is unlikely to be longer than one entry since few browsers support multiple-file upload.
Parse the input as a decimal (possibly negative) integer. Returns the default value if no parsable number could be read. If default is omitted, zero is used as the default. Returns sys.maxint if the number is higher than Python can represent as an integer.
Note! Future versions of form may return a long integer for form.INT. I might restrict this to Python 1.6 and later, where str doesn't add an 'L' to the end of the number, to avoid problems.
Parse the input as a simple floating point number, which may contain a decimal point, but not 'E' notation. Returns 0.0, or, if supplied, the default if the input is not a valid number or not supplied.
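As a rough illustration of the integer-parsing semantics described above, the sketch below is a simplified standalone stand-in, not the module's actual code (it also ignores the sys.maxint clamping mentioned earlier):

```python
def parse_int(raw, default=0):
    """Parse a decimal (possibly negative) integer, falling back to a
    default when the input is missing or unparsable."""
    try:
        return int(str(raw).strip())
    except ValueError:
        return default

print(parse_int("42"))     # 42
print(parse_int(" -7 "))   # -7
print(parse_int("oops"))   # 0
print(parse_int("", 18))   # 18
```

The default-on-failure behaviour is what lets a script use the returned value directly without wrapping every field read in its own error handling.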
In HTML, there are some kinds of form fields where you can't use the value attribute to pass information to the CGI script. These are input type="map", where the value is always a pair of co-ordinates, and submit, where the value is used as the text for the button.
So if you wanted to detect which of a set of identically-labelled buttons was pressed, you'd have to give them all a different name, and include a check for each one in your script. This would be especially tedious for an order form with a hundred "Buy It!" buttons, for example.
For this reason, form allows you make a group of controls where the value submitted for each is taken from the name of the control instead of the value, when such a control is included in a submission. The actual value submitted is ignored.
To use the feature, put both the name and the desired value together in the HTML name of the field, separated by a colon (which is a valid character for name, albeit a seldom-used one).
<input type="submit" name="b:left" value="Click me!">
<input type="submit" name="b:middle" value="Click me!">
<input type="submit" name="b:right" value="Click me!">
In this example, an call to
form.readForm({'b': form.STRING})
would return either 'left', 'middle' or 'right', depending on which button was used to submit
the form. This is not limited to
STRING: values of all types
except
FILE may be embedded in names.
(You can still use names with colons if you do not wish to use the value-embedding feature. form only tries to separate a name with a colon in if it can't find the whole name as a key in your fdefs. The same goes for periods, which are special characters used by HTML in image maps.)
To embed characters which aren't normally allowed in HTML
name
attributes, see the
encI function.
form will automatically decode this for you when reading name-embedding values.
Calling this function is not compulsory, but it allows you to set some of form's internal variables easily.
form.py includes features to protect against certain kinds of denial-of-service attacks in POST requests. They are turned off by default, but passing non-zero values in the "limit" parameters enables them.
The arguments you can set are:
form.INTand
form.FLOATwill read numbers using European-style punctuation (where "." is a thousands-separator and "," is the decimal point). If false (the default), it's the other way around.
All
read functions take submitted form data and parse it,
returning a dictionary-like object containing the values that have been posted
to the form, standardised according to the fdefs argument passed to
the function. The returned object may be read like a dictionary or like an object.
Typically, a script calls
readForm at the start of its
code. Scripts do not normally need to call the other
read
functions directly.
readfunctions is appropriate, and calls that.
readUrlEncoded, but takes its input from a stream object (must support
read()) instead of a string.
Decodes fields encoded in a multipart/form-data formatted string. parameters is a dictionary of MIME headers, lower-cased keys, containing at least a 'boundary' key.
Currently this function is no more efficient than
readFormDataStream, since it is not
commonly needed.
readFormData, but input is taken from a stream object instead of a string. The length is the number of bytes that should be read from the stream.
The
write functions take form values from a dictionary
(or dictionary-like object returned by the
read
functions), and convert them into encoded text sent to a string or a
stream.
File upload fields only work for
writeFormData and
writeFormDataStream since it does not make much
sense to try to upload a file to a query string or hidden form. File
upload values need not have a valid length value in the tuple as the
length is read directly from the file specified.
Currently, the string-returning functions are no more efficient than the stream-writing versions.
input type="hidden"controls for each field in the fvals dictionary. This is useful for writing a follow-up-form that retains all the information posted into a previous form.
writeForm, but send output to a stream object (or anything supporting
write) instead of returning a string.
<a href="...">, remember to HTML-encode the whole URL, or those & characters could confuse a browser.
writeUrlEncoded, except that the output is sent to the nominated stream.
These convenience functions are available for coding text for representation in HTML, URLs and JavaScript strings. If you have user input anywhere in your scripts, you'll need to do this a lot, or you're likely to make a site susceptible to security problems. (See this CERT advisory for an example of this.)
Encode text as HTML and return as string. ", &, <, > and control characters are replaced with HTML entities. This assumes you use the double-quote rather than single-quote for attribute strings, which is advisable. Obviously quotes do not need to be escaped outside of attribute values, but it does no harm.
Encode text as a URL part (replacing spaces with '+' and many symbols with %-encoded entities), and return as a string.
Note: you should not pass entire URLs through
encU,
only separate parts, for example a directory name in a path, or a key or
value string in a query. Once encoded you can combine these parts using '/', '?' and so. Also allows the range of characters between C0-FF, used for accented letters in ISO-Latin encodings.
The input-reading functions may throw the following exceptions:
Some aspect of the CGI environment is broken, for example environment variables not being correctly set by the script's caller.
cgiErrors are the fault of the web server, and should not happen in
working web sites.
An fdefs dictionary was passed to
readForm which included unknown fdef
values or unexpected parameters. Alternatively you passed a set of fields
to
writeForm or
writeUrlEncoded (or the
stream versions) which included a file-upload field. Note,
readForm may
also raise a TypeError, if some of the parameters in the fdefs were of the wrong type.
fdefErrors are your script's fault, and should not happen in working web
sites.
The HTTP request or the MIME message in a HTTP POST request is malformed in some way.
httpErrors are the user-agent's fault, so could happen in a working web site, but
only if either:
Finally,
initialise may throw a
NotImplementedError if it is called with a version number
higher than the version of form being used.
form was written by Andrew Clover and is available under the GNU General Public Licence. There is no warranty. However it has been in use on several production systems without apparent trouble.
Bugs, queries, comments to: and@doxdesk.com.
del.
Copyright © 2000 Andrew Clover.. | http://www.doxdesk.com/file/software/py/v/form-1.3.html | crawl-001 | refinedweb | 2,165 | 64.91 |
Python allows a lot of control over formatting of output. But here we will just look at controlling decimal places of output.
There are some different ways. Here is perhaps the most common (because it is most similar to other languages).
The number use is represented in the print function as %x.yf where x is the total number of spaces to use (if defined this will add padding to short numbers to align numbers on different lines, for example), and y is the number of decimal places. f informs python that we are formatting a float (a decimal number). The %x.yf is used as a placeholder in the output (multiple palceholders are possible) and the values are given at the end of the print statement as below:
import math pi = math.pi pi_square = pi**2 print('Pi is %.3f, and Pi squared is %.3f' %(pi,pi_square)) OUT: Pi is 3.142, and Pi squared is 9.870
It is also possible to round numbers before printing (or sending to a file). If taking this approach be aware that this may limit the precision of further work using these numbers:
import math pi = math.pi pi = round(pi,3) print (pi) OUT: 3.142
One thought on “15. Python basics: decimal places in output” | https://pythonhealthcare.org/2018/03/22/15-python-basics-decimal-places-in-output/ | CC-MAIN-2020-29 | refinedweb | 215 | 76.32 |
Near Field Communication (NFC) is an emerging, short range wireless technology. With a precise range of 2 cm, people can physically tap devices together to send/receive content. Tapping lets you select something (or someone) quickly. Not only is it quick, it’s also easy to understand. Once you see it, you get it; there’s no manual needed.
For example, imagine you are looking at some photos with a friend and she wants your pictures; with NFC, you can simply tap your device against her PC to send the photos. Over simplifying? Maybe, but the main idea is that it’s simple to share content between devices.
Figure 1: This image shows the Nokia 360 speaker sharing its Bluetooth
pairing information with a Windows Phone
Similar to Bluetooth or Wi-Fi, NFC is a standard wireless protocol defined by the NFC Forum. Your PC needs an NFC radio. There are lots of Windows 8 PCs that have NFC radios integrated directly into them. If NFC isn’t already part of your PC, you can buy NFC dongles to plug into your PC.
NFC offers some cool capabilities. People can tap and send photos, tap a menu and order a meal at a restaurant, or even tap to pair a Bluetooth device. These scenarios are pretty different from each other, but the thing they have in common is the 'tap' to initiate the experience. NFC is used in variety of devices, such as PCs, phones, speakers, headsets, wireless displays, etc..., to make connecting devices together a really intuitive experience. Also, NFC uses RFID tags; these are really cheap, lightweight passive antennas that can hold a sizable amount of data and can be stuck on virtually anything, most commonly posters. For example, buying a movie ticket could be as simple as tapping the movie poster! These are called NFC tags. We'll walk through a basic example of tapping an NFC tag to demonstrate some key concepts of the NFC API.
But first, let’s take a closer look at what it means to tap devices together.
Defining a ‘Tap’
Whether you’re tapping to pair a Bluetooth mouse or tapping to share photos, it’s important for users to tap devices together the same way. And while tapping is a well understood concept, tapping your PC against another device is new to most people. Here are some guidelines for tapping that let users know:
- Where to tap devices together – by using a touchmark, also known as the ‘Tap and Do’ visual mark, indicating where the NFC antenna is located. Depending on the PC model, you tap different parts of the PC. For example, you may tap on the back on a tablet but on the front an all-in-one. Here’s what the mark looks like on any NFC enabled, Windows 8 PC:
Figure 2: Tap and Do visual mark
- Devices are communicating with each other - During the tap, users should have confidence that something is going on; even if they can’t see the data being transferred. Therefore, Windows plays a sound when devices are in-range and communicating with each other.
Windows does these things automatically, so you won’t need to worry about them. For more info on these user experience elements, see the Windows 8 Near Field Proximity Implementation Specification. With that in mind, let's check out some of the cool scenarios you can experience with NFC.
When to use NFC
Use NFC when a user needs to select something, or someone, in your app. NFC gives you an intuitive way to select; and it’s often faster (and cooler!) than manually searching. The tap is a trigger to initiate an experience; and depending on your app, the experience can range from receiving a photo to starting a playlist. It’s really up to your app to decide what happens after the tap. So, to keep it simple, we classify this range of experiences as ‘Tap and Do’ experiences.
Below are a few examples of using NFC to select something in an app. You could Tap and…
- Get information from a poster: NFC tags are light, cheap RFID tags; they cost between $0.15 - $1.00 (price varies on printing cost). These are comparable to QR codes, but are easier and faster to use. Tapping a tag feels more comfortable than taking a picture of the bar code (and hoping the camera got the right angle). Manufacturers are increasingly embedding tags into posters in high traffic areas like airports, metro stations, and bus stops. They can hold between 48 B – 4 KB of data. You can program a tag to launch your app to a specific page.
- Exchange contact information: instead of spelling out your contact information to a friend, and hoping he didn’t misspell anything, tap devices together to exchange contact information. Similar to above, you can program your information to an NFC-business card/tag; or you could directly exchange information via an app.
- Play your favorite music station: whether you’re about to go work out, hopping into your car or just lounging at home – use an NFC tag to start a radio station. You can even have different music stations programmed on different tags; for example, one tag for the gym, one for lounging, and one for sleep.
- Order dinner at a busy restaurant: a popular restaurant at dinner time means you might be waiting a long time just to place an order. Instead, tap a tag at your table to order your food.
- Play a multiplayer game with a friend: you can easily connect a multiplayer game like Battleship, Chess, or Scrabble with a friend by tapping devices together. After the tap, the connection persists over an out-of-band transport with a bigger range and higher throughput, such as Bluetooth or Wi-Fi Direct.
By now, we’ve got a pretty good idea of when to use NFC; now for fun stuff – building an app that uses our Windows 8 Proximity (NFC) APIs.
How to implement NFC
As you can see NFC makes lots of everyday tasks easier for end-users. Take setting an alarm for example. I’m sure just about everyone has had a few experiences where they mistakenly set their alarm for the wrong time in the morning. When its late and you just want to get to sleep, you’re not always thinking at your best. NFC makes this easier by letting you just tap a preset tag, confirm, and then go to sleep worry free. So to help users with this everyday task let’s imagine you wrote a basic alarm app that let users set an alarm using an NFC tag. This breaks into two scenarios
- Setting an alarm on the tag: NFC tags can be reusable, so the app should have a way for users to program an alarm. For example, a user may want to program different alarms – one for the week, another for the weekend. This is known as publishing data to a tag.
- Setting an alarm from the tag: After a user taps a tag, the app should launch to confirm setting an alarm. This means the app can be launched with context, or arguments.
The NFC API allows for several ways to achieve the same thing, but I’ll go over the simplest way to implement this scenario.
Let’s walk through a flow to set an alarm on an NFC tag:
- Mario launches the alarm app and sets a time, say 7.00 AM. Normal alarm stuff, nothing with NFC yet.
- Mario selects an option to ‘Set alarm on an NFC tag’. At this time, the app calls the NFC APIs to publish information to the NFC radio, specifically an app identifier string & 07:00. NFC tags use a standardized message format called NDEF, NFC Defined Exchange Format. Your app does not need to worry about formatting data into an NDEF message; Windows does this for you! Now, the user can tap the NFC tag.
- Mario taps the tag against the PC and the app confirms programming the tag. After the tag is tapped, it’s important to let users know that your app successfully programmed the tag. As we discussed earlier – tapping your PC is a new concept for most people, so the confirmation gives users confidence that what they just did worked. The app knows a message was successfully transmitted by registering for a message transmitted handler.
The NFC APIs are located in the Windows.Networking.Proximity namespace. They come into play at step 2 – after the user selects the option to ‘Set alarm on NFC tag.’ First, the app initializes a proximity object. The proximity object is used to tell when a tag (or device) is in/out of range. Next, we’ll add the DeviceArrival event handler. The handler recognizes when the tag has been tapped, which means we can start writing information to the tag. It’s useful to let users know when you’re writing to a tag, so they don’t move it out of range. You can use the same event to recognize when any proximity device has been tapped.
The next code snippet shows how to initialize and add a DeviceArrival event handler.
JavaScript
var proximityDevice;
function initializeProximityDevice() {
proximityDevice = Windows.Networking.Proximity.ProximityDevice.getDefault();
if (proximityDevice) {
proximityDevice.addEventListener("devicearrived", proximityDeviceArrived);
}
else {
// No NFC radio on the PC, display an error message
}
function proximityDeviceArrived(device) {
// Let the user know we’re ‘Writing to Tag’
}
}
C#
private void InitializeProximityDevice()
{
Windows.Networking.Proximity.ProximityDevice proximityDevice;
proximityDevice = Windows.Networking.Proximity.ProximityDevice.GetDefault();
if (proximityDevice != null) {
proximityDevice.DeviceArrived += ProximityDeviceArrived;
}
else
{
// No NFC radio on the PC, display an error message
}
}
private void ProximityDeviceArrived(Windows.Networking.Proximity.ProximityDevice device)
{
// Let the user know we’re ‘Writing to Tag’
}
Next, we publish information to the tag. The app publishes two things: an app identifier string, which consists of an app ID and app platform, and launch arguments. For Windows 8, the app Id is <package family name>!<app Id> and the app platform is ‘Windows.’ You must copy the app ID value from the ID attribute of the Application element in the package manifest for your app. The launch argument is ’07:00’ – the alarm set by the user. Let’s call this the message.
If the app works across platforms, Windows lets you publish alternate app ID(s) and app platform(s); which means you can tap the same tag on a different device which support NFC, like Windows Phone 8! You can find more information about Alternate IDs on MSDN.
The app publishes the data to the tag using a method called publishBinaryMessage. The method takes three parameters – messageType, message, and a messageTransmittedHandler function. We’ll set messageType to ‘LaunchApp:WriteTag’, which lets Windows know that your app wants to write information to an NFC tag. The message is just the message we defined earlier (app identified string and launch arguments); we’ll need to store the message as a binary message in a buffer. The messageTransmittedHandler function registers for callbacks. This lets your app know that the message has successfully been written to the tag. We’ll use this to tell the user two things: we’ve successfully written a message to the tag and the tag no longer needs to be in range.
Messages continue to be published until we call the StopPublishingMessage function or the ProximityDevice object is released. In this example, we’ll use the stop function. PublishBinaryMessage returns a publication ID; we’ll use this same publication ID to stop publishing the message onto the NFC Radio.
The next code snippet shows how to write data to an NFC tag:
JavaScript
var proximityDevice;
function getAlarmTime(){
// Grab time set by the user, call this variable ‘Alarm’
return Alarm;
}
function publishLaunchApp() {
proximityDevice = Windows.Networking.Proximity.ProximityDevice.GetDefault();
if (proximityDevice) {
// The format of the app launch string is: "<args>\tWindows\t<AppName>".
// The string is tab or null delimited.
// The <args> string can be an empty string ("").
var launchArgs = getAlarmTime();
// The format of the AppName is: PackageFamilyName!PRAID.
var praid = "AlarmApp"; // The Application Id value from your package.appxmanifest.
var appName = Windows.ApplicationModel.Package.current.id.familyName + "!" + praid;
var) {
// Stop publishing the message on NFC radio
proximityDevice.stopPublishingMessage(launchAppPubId);
}
}
}
function proximityWriteTagLaunchAppMessageTransmitCallback() {
// Inform the user that: the message has been successfully written to a tag & the tag no longer needs to be in range
}
C#
Windows.Networking.Proximity.ProximityDevice proximityDevice;
private string GetAlarmTime(){
// Grab time set by the user, call this variable ‘Alarm’
return Alarm;
}
private void PublishLaunchApp()
{
proximityDevice = Windows.Networking.Proximity.ProximityDevice.GetDefault();
if (proximityDevice != null)
{
// The format of the app launch string is: "<args>\tWindows\t<AppName>".
// The string is tab or null delimited.
// The <args> string can be an empty string ("").
string launchArgs = getAlarmTime();
// The format of the AppName is: PackageFamilyName!PRAID.
string praid = "MyAppId"; // The Application Id value from your package.appxmanifest.
string appName = Windows.ApplicationModel.Package.Current.Id.FamilyName + "!" + praid;
string)
{
proximityDevice.StopPublishingMessage(launchAppPubId);
// Stop publishing the message on NFC radio
}
}
}
private void proximityWriteTagLaunchAppMessageTransmitCallback(
Windows.Networking.Proximity.ProximityDevice sender,
long messageId)
{
// Inform the user that: the message has been successfully written to a tag & the tag no longer needs to be in range
}
That’s it! Now you know how to write to an NFC tag from a Windows Store app. Simple enough; so, let’s move onto the next scenario - setting an alarm from the tag. Let’s walk through a flow to read an alarm from an NFC tag:
- Mario is reading his home screen/writing an email/playing a game/using Windows and he realizes he needs to set an alarm for Monday morning. He grabs his ‘Weekday alarm’ tag and taps it on his PC. He gets a toast inviting him to launch your Alarm app. Windows takes care of everything up until this point; your app doesn’t do a thing.
- Mario accepts and your app launches to a ‘Confirm Alarm?’ screen with a time of 7:00 AM. After Mario accepts the toast, Windows passes the launch arguments (same as above) to your app during activation. This is called contextual launching, which is the same thing as launching your app to specific page.
- Mario sets the alarm. Normal alarm stuff, no NFC.
It’s easy enough to get the launch arguments from the NFC tag. The app needs to handle contextual launching from an NFC tag. Contextual launching is the synonymous with launching your app to a specific page. Our launch arguments specify an alarm, 7.00 AM, which your app uses to display a proposed alarm. Also, in case your app isn’t installed on the PC, Windows invites Mario to install your app from the Windows store – automatically!
The following code snippet shows how to implement contextual launching.
JavaScript
app.onactivated = function (args) {
if (args.detail.kind === activation.ActivationKind.launch) {
if (args.detail.arguments == "Windows.Networking.Proximity.PeerFinder:StreamSocket") {
//do nothing here.
}
else {
// Use args.detail.arguments to parse out ’07.00’ string, and display to the user
}
args.setPromise(WinJS.UI.processAll());
}
}
C#
async protected override void OnLaunched(LaunchActivatedEventArgs args)
{
if (args.Arguments == "Windows.Networking.Proximity.PeerFinder:StreamSocket")
{
_isLaunchedByTap = true;
}
else
{
// Use args.Arguments to parse out ’07.00’ string, and display to the user
}
Window.Current.Activate();
}
That’s all your app has to do to support reading and writing to an NFC tag; pretty simple for a cutting edge scenario. Before I wrap up, let’s go over some good hygiene stuff – error handling.
Error Handling
There are a few common errors your app may run into.
- The tapped tag is not NDEF formatted. Windows 8 doesn’t support automatically reformatting a tag to NDEF, so you’ll need to download and install an NDEF formatter.
- The tapped tag is read-only. Some NFC tags can be locked to read-only (similar to old school VHS tapes).
- The tapped tag is too small and cannot hold all the data.
- A users PC doesn’t have NFC. As I mentioned from the start, NFC is an emerging technology; widespread adoption is still growing. To check whether a PC supports proximity, use the ProximityDevice.getDefault() method; the method returns NULL if no NFC radios are installed.
It’s fun and intuitive!
NFC is finally here, ready for mainstream consumers – Windows has the ecosystem and a well-designed end-to-end UX. The technology has a lot of potential to make apps and device experiences incredibly interactive. It’s fun and intuitive.
NFC is a big area, so stay tuned for more blog posts on other cool Windows 8 NFC developer experiences.
For more info about NFC and proximity, check out the resources below.
Resources
Thanks,
Priya Dandawate
Program Manager, Devices and Networking
Contributions by: Max Morris, Marzena Makuta, Mike Loholt, Jake Sabulsky, and Vishal Mhatre
The problem is that a lot of the thing you can do with NFC on other platforms like android is not possible here since WP8 is locked down to much. It is not even possible to turn wifi/Bluetooth on/off via API.
Why should you be able to turn it on/off, Mathias? The user may decide if it should be enabled or not, and not an installed app. I don't know how to do it, but a very few apps actually can redirect you to the Bluetooth section in the Settings app. Maybe making this accessible for developers coudl help, but still the app should have no control about that.
It could be enabled via manifest and a popup. Just like location info is. Lots of people turn off bluetooth and often wifi if they have a data plan, to extend battery life. | https://blogs.msdn.microsoft.com/windowsappdev/2013/04/18/develop-a-cutting-edge-app-with-nfc/ | CC-MAIN-2018-09 | refinedweb | 2,955 | 56.35 |
Sudoku Solver using Image Processing and Neural Networks
Can Neural Networks and Image Processing be able to solve sudoku puzzles?
:
- Binarization of Images
- Extracting Data using Connected Components
- Apply Convolutional Neural Network to predict the digits.
- Apply py-sudoku in solving the puzzle.
In my previous post, we were able to perform binarization and extracting data from images. You may visit this post for more details about the first two steps..
The code below is tailor-fit for a sudoku puzzle.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pdfrom skimage.io import imread, imshow
from skimage.measure import label, regionprops, regionprops_table
from skimage.color import rgb2gray
from skimage.transform import resizesudoku_image = imread('sudoku.png')
sudoku_image = resize(sudoku_image, (900, 900), anti_aliasing=True)
def bounding_box(image):
"""
Returns an image with the corresponding bounding box plotted.
"""
#Binarization
bin_image = rgb2gray(image) > 0.3
label_im = label(bin_image, background=1)
#Region Props
regions = regionprops(label_im)
fig, ax = plt.subplots(figsize=(5,5))
count = 0
for props in regions:
minr, minc, maxr, maxc = props.bbox
area = (maxr - minr)*(maxc - minc)
#Filtering Box by Area
if area > 500 and area < 4000:
bx = (minc, maxc, maxc, minc, minc)
by = (minr, minr, maxr, maxr, minr)
ax.plot(bx, by, '-r', linewidth=2)
count += 1
#Counting the number of fruits
ax.set_title("Number of Box : {}".format(count))
ax.imshow(image, alpha = 0.5)
bounding_box(sudoku_image)
This code will add bounding boxes to the detected digits in the image. The coordinates of the boxes can be as slicing index to produce the numbers are images. These images are then read by the neural network to predict the digits.
Apply Convolutional Neural Network to predict the digits.
There are different datasets available to train a neural network in digit recognition. There are also OCR tools to read numbers from images. In this post, we used the MNIST handwritten dataset in predicting the numbers from images. Sounds ironic since the sudoku digits are not handwritten. We chose this dataset anyway because it has the largest dataset and produces the highest accuracy on test data (the test data refers to recognizing the sudoku digits)
There are numerous sources on how to train a neural network using the MNIST dataset. This is the most useful reference for me:
You can use the CNN model to predict the digits. Take note that you need to reshape first the image before predicting the image. You may do this by using this code:
from skimage.transform import resize
import numpy as npimage = resize(image, (28, 28), anti_aliasing=True)
image = image.reshape(1,28,28,1)
As you can see, we can use a neural network to predict the numbers represented by the images.
We can then map out the location of these numbers by using the coordinates of the bounding boxes. In summary, the process converts the sudoku image into a 9x9 array. This array shall be used as input to py-sudoku, a library for solving a sudoku puzzle.
Apply py-sudoku in solving the puzzle
Applying py-sudoku is straightforward. You just need to input a 9x9 array into the program, then the library will give you the solution. You need to install this on your machine via pip install.
from sudoku import Sudoku]]
puzzle = Sudoku(3, 3, board=board)
puzzle.solve().show()
There are many things that can be improved from this approach. For instance, we can use homography to read images on non orthogonal views of the images, balancing techniques on dark captured images, and filters on images with noise. Moreover, we can use opencv and other similar libraries to have a real time sudoku solver on a camera. That is for later. Thanks for reading this post. | https://kdtabongds.medium.com/sudoku-solver-using-image-processing-and-neural-networks-262877ea82ef?source=post_internal_links---------7---------------------------- | CC-MAIN-2021-17 | refinedweb | 618 | 58.58 |
BlackBerry Java Application Development
So, without any further words, let's get to work!
Choosing the SDK version
Remember that the first step is to choose the SDK version to use. For this project we want to choose the lowest possible SDK version, which is 4.2.1. This is because this application is so simple that it will not need to use any of the newer features of more recent versions of the SDK.
Choosing a lower version means that more handheld models can run the application; conversely, choosing a higher SDK version means that fewer models can run it. Therefore, you should choose the lowest version of the SDK that still supports the features you require in order to support as many devices as possible. We will go through the steps of actually applying this later on, but for now, the choice is made and we are ready to move on.
Creating a new project
You need to create a new project for your new application. The IDE makes it very simple to get started, but because you are creating a BlackBerry project you have to be careful. Let's get started and see what I mean.
Time for action – creating a new project
- You can create a new project by clicking on the File | New | Project... option in the menu bar (not the File | New | Java Project menu item).
- The New Project dialog gives you many choices for which type of project to create.
- You want to create a BlackBerry project, of course. Expand the BlackBerry folder in the tree and then select the BlackBerry Project node. When that is done click on the Next button.
- Enter TipCalc as the name of the application and click on the Finish button to create the new project.
What just happened?
These three steps are all that is needed to create a BlackBerry project in Eclipse.
You were told earlier that choosing New | Java Project was not the right thing to do. This is because the wizard you get from that menu item is the Swiss Army Knife wizard that can set up any kind of Eclipse project. It is powerful, complicated, and not for beginners, so we just won't use it at all. The BlackBerry Project option is much easier to use; you just have to remember to choose the New | Project... option instead of the New | Java Project option.
Once you have chosen the right menu item, the New Project dialog is shown. It is possible to have so many project types available that finding the one you want can be a challenge. The text field at the top of the dialog filters the tree below to include only project types whose names match the filter text. In our case, though, the BlackBerry Project entry is right near the top and easily accessible, so there really isn't a need for the search feature.
The last step of the wizard prompts you to enter the name of your new application. The project name is used as a directory name but does not appear in code, so it can contain some special characters, such as a space, that would otherwise be invalid in code. If you try to provide an invalid name, the wizard will show a warning to indicate that the name is not valid.
Below the Project name text box is a checkbox indicating whether to use the default workspace location. If you leave the box checked, the new project is placed in a directory named after the project, under the directory set as the workspace. You can change the location where the new project files are stored by unchecking the Default location checkbox and then entering a new location in the edit field provided.
Adding a package to the new project
Next, you will create a new package for the application to use. A Java package is a container for the objects in your application and is used to prevent conflicts if the classes you create happen to have the same name as another class in the same project or even the system classes. Packages are equivalent to namespaces in C# and Visual Basic .NET (VB.NET).
Adding a package to the project in this way is a minor housekeeping task, but it is also a good habit because it forces you to choose your package name up front, before creating any code. In Java, the naming convention for a package is to use your Internet domain name in reverse, almost as though you were naming a new server. In this case, we will use the package name com.rimdev.demos.tipcalc. The package name can be any valid Java name, though, and doesn't have to follow this convention.
Time for action – adding a package to the project
- Add the package by right-clicking on the src folder in the Package Explorer and then selecting New | Package.
- After selecting the menu item, the New Java Package wizard is shown. This small wizard collects only two things: the folder where the package files will go and the name of the package itself. Because you started by selecting the src folder, that part is already filled in, so you need to specify only the name of the package.
- Enter the package name com.rimdev.demos.tipcalc into the Name field and then click on Finish to create the package.
What just happened?
At this point you have an empty project that is ready to start being used. You've taken the BlackBerry application project that you had before and added a package to the src directory in preparation for creating the actual source files (which will come next).
The project tree is expanded slightly to include the package you just created under the src directory—the directory whose icon looks like a little mail parcel. Creating a package in your project doesn't result in any actual source files being created. Instead, it sets up the project so that when you do create files later on they will be created with package definitions already included in them.
Start at the beginning
Every application must have a starting point, and for BlackBerry applications it is at a method named main. The use of the name main goes all the way back to the C programming language, if not further. At that time, simply making a method named main was enough to be able to run an application. However, because Java is an object-oriented language, you can't just make a method named main. In Java, all methods must be in a class, and this includes the method main as well.
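To make the rule concrete, here is the smallest plain-Java illustration (the class name Minimal and the entryPoint helper are made up for this sketch):

```java
// Even the entry point must live inside a class in Java.
public class Minimal {
    static String entryPoint() {
        return "started from main";
    }

    public static void main(String[] args) {
        // The runtime looks for exactly this signature to start the program.
        System.out.println(entryPoint());
    }
}
```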
In addition to the main method, every BlackBerry application must contain an object derived from Application. Because both the Application-derived class and the main method are required, it is standard practice to put the main method in the same class as your Application.
Application and UiApplication
As we just said, every BlackBerry application must contain a class derived from Application. The Application class contains the bare essentials for interacting with the BlackBerry operating system. If an application displays a User Interface (UI) then the bare essentials in the Application class are not enough. Instead, you should use UiApplication—the derived class that handles the special processing needed to interact with the user as well as the operating system.
So, the next step is to create a class derived from UiApplication and that contains the main method to serve as the starting point of your BlackBerry application.
Time for action – adding the UiApplication class
- To create this starting class right-click on the package you just created in the project and select New | Class.
- First, give the class a name; enter TipCalcApplication into the Name field.
The next step is to set the superclass for your new class. The superclass is another name for a base class, that is, the class from which your new class will be derived. Eclipse offers a powerful browsing tool to quickly and easily find the right class.
- Click on the Browse button next to the Superclass field. This dialog is aware of all of the classes in the libraries and allows you to choose the proper one. By default, the class java.lang.Object is set as the superclass. Replace java.lang.Object with uiapplication. Notice that as you type, other class names appear in the list below, but once the name is completely entered, only the net.rim.device.api.ui.UiApplication class is shown. Also notice that even though you entered the name in lowercase and did not enter the complete package name, the filter found the correct class with the correct casing. Click on OK to select this class.
- Back at the New Java Class dialog there is one more setting to make: check the public static void main(String[] args) checkbox so the wizard will generate a stub main method that is used to initiate the application. Use this checkbox only when creating UiApplication classes; no other classes need it.
- Finally, click on Finish and see the new class in the project.
What just happened?
You just created the first class for your new application! Well, to be more accurate, you used Eclipse to set up a new class with some standard elements based on how you filled out the dialog. You could have done the same thing by simply creating a new file and adding all of the code by hand, but that's just not as interesting, is it? In truth, the tools that Eclipse provides really are helpful and easy to use.
The New Java Class dialog that is displayed has many options that can be set and which will cause Eclipse to generate different code. Notice that the package name has already been supplied in the dialog because we started creating this class by right-clicking on the package name in the project. Also, the Source Folder is properly set already because you created the package inside the src folder previously.
A closer look at the code
Now, let's look at the code that was generated.
package com.rimdev.demos.tipcalc;
import net.rim.device.api.ui.UiApplication;
public class TipCalcApplication extends UiApplication {
/**
* @param args
*/
public static void main(String[] args) {
// TODO Auto-generated method stub
}
}
The first line of code is the package declaration for com.rimdev.demos.tipcalc. This line defines the package where the TipCalcApplication class will reside. The package can be specified in the New Class dialog, but because we previously added the package to the project, it was supplied automatically in the New Class dialog.
package com.rimdev.demos.tipcalc;
The next line is an import statement for net.rim.device.api.ui.UiApplication. Import statements are similar to .NET using or imports statements and declare which libraries are being used. The Java convention is to specifically import each class being referenced. It is possible to wildcard the import statement though, in which case the class name would be replaced with *, that is, net.rim.device.api.ui.*.
Doing this imports all of the classes in that package into your application, which can make coding easier; it can certainly be annoying to have to go back and add an import statement every time you want to use a new class. The compiler is smart enough not to include any classes that are not actually used when it builds your application, so there is no negative impact on performance either. Having said all that, the established convention is to avoid wildcards because they make it less clear, to someone reading your application later on, exactly which classes are being used. In the end, it is probably best to stay with the established convention, which we will do in this article.
import net.rim.device.api.ui.UiApplication;
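The two import styles can be compared in a plain-Java sketch (ImportDemo and its count method are names made up for illustration, not part of the project):

```java
// The convention this article follows: import each class explicitly.
import java.util.ArrayList;
// The wildcard alternative would be: import java.util.*;

public class ImportDemo {
    static int count() {
        // Either import style makes ArrayList usable by its short name.
        ArrayList<String> names = new ArrayList<String>();
        names.add("UiApplication");
        return names.size();
    }

    public static void main(String[] args) {
        System.out.println(count());
    }
}
```

Either way the compiled output is identical; the choice only affects how readable the source is.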
Next, we have the class declaration itself. Again, notice that the extends keyword is already added and the class chosen to be the superclass, UiApplication, is added as well. These are added because we chose the UiApplication to be the superclass in the New Class dialog.
public class TipCalcApplication extends UiApplication {
Lastly, notice that the public static void main method is also created. Remember that every application must have a main method, and this is that method. The method was added because we checked the checkbox for it. Very simple and easy! The words public and static are special keywords that allow the main method to be called by the system before any of the objects in your application are created.
public static void main(String[] args) {
// TODO Auto-generated method stub
}
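The effect of those two keywords can be seen in a plain-Java sketch (StaticDemo and greet are names made up for illustration):

```java
public class StaticDemo {
    private int value = 41;

    // static: callable before any StaticDemo instance exists
    static String greet() {
        return "called without an instance";
    }

    public static void main(String[] args) {
        String s = greet();              // no object needed for a static call
        StaticDemo d = new StaticDemo(); // instance members do need an object
        System.out.println(s + ", value=" + d.value);
    }
}
```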
Time for action – expanding TipCalcApplication
Now that you have the class created with some of the boilerplate code it's time to expand it and make the application actually do something.
- You can start off by giving the static main function something to do. Replace the main method with the following code.
/**
* @param args
*/
public static void main(String[] args) {
// TODO Auto-generated method stub
// Create a new instance of the application.
TipCalcApplication theApp = new TipCalcApplication();
// To make the application enter the event thread and start
// processing messages, we invoke the enterEventDispatcher() method.
theApp.enterEventDispatcher();
}
- Secondly, you need to add the TipCalcApplication constructor to the class, so add the following code.
private TipCalcApplication()
{
// Push the main screen instance onto the UI stack for rendering.
pushScreen(new TipCalcMainScreen());
}
What just happened?
The code that you just added takes the simple generated code that you got from the New Class wizard and expands it to set up and start the application.
The first thing you did was to put some code in the initially empty main method. This code creates an instance of the application's object, which happens to be the very class that contains the main method. That may seem strange until you understand what the static keyword means: static means that the main method can be called without an instance of the object that contains it existing. You still need to create an instance of the application, though, and that's the first step.
TipCalcApplication theApp = new TipCalcApplication();
The next line of code in the main method is the call to the enterEventDispatcher method on the application that you just created. This is a method already implemented in the UiApplication class. It does all of the setup necessary to get the application started, runs the application, and waits until the application is finished.
theApp.enterEventDispatcher();
As we said earlier, an Application object is required, but it's the main function that is the actual entry point of the program. When the main function is exited the application is terminated and cleaned up by the operating system. The Application object, and more specifically the call to enterEventDispatcher, is why a class derived from Application is required.
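Conceptually, enterEventDispatcher runs an event loop: it blocks, dispatching queued events until the application ends. The following is only a toy analog of that idea, with names made up for illustration; it is not how the BlackBerry runtime is actually implemented:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ToyDispatcher {
    private final Queue<Runnable> events = new ArrayDeque<Runnable>();
    private boolean running = true;

    void post(Runnable r) { events.add(r); }
    void quit() { running = false; }

    // Blocks, draining events until quit() is called or the queue empties,
    // loosely like enterEventDispatcher(); returns how many events ran.
    int enterEventDispatcher() {
        int handled = 0;
        while (running && !events.isEmpty()) {
            events.poll().run();
            handled++;
        }
        return handled;
    }

    public static void main(String[] args) {
        final ToyDispatcher app = new ToyDispatcher();
        app.post(new Runnable() { public void run() { System.out.println("click"); } });
        app.post(new Runnable() { public void run() { app.quit(); } });
        System.out.println(app.enterEventDispatcher()); // handled 2 events
    }
}
```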
The last thing to do for this class is to create the constructor and show the first screen. We haven't created the screen yet, but we can go ahead and create the code to use it.
The constructor is also very simple and does only one thing. You could do more in the setup and initialization of things in the application constructor of course (if your application needs it), but this simple application does not. Here we create a new instance of the TipCalcMainScreen class and then push it onto the UI stack. The TipCalcMainScreen is the class that you will create next and is the screen that you will display to the user. We will come back to pushScreen and the UI Stack later.
pushScreen(new TipCalcMainScreen());
MainScreen
A MainScreen is a specific class in the BlackBerry SDK and not just the name of our first screen. The MainScreen class provides a lot of services for you by providing a standard framework to work in. This framework helps you in the following tasks:
- Making menus
- Detecting a prompt for save
- Laying out fields by providing a field manager
- Providing a standard title bar
- Adding standard menu items to menus
Because of this support, making your TipCalcMainScreen class derive from MainScreen is an easy choice. Again, don't let the name MainScreen box you in. Generally, every screen class that you create should be derived from MainScreen because of these features that are automatically provided.
Time for action – adding a MainScreen
Now it's time to add that screen just mentioned to your project. Adding the screen follows the same steps that you just did to add the UiApplication class.
- Right-click on the package and select New | Class.
- In the New Java Class dialog, enter TipCalcMainScreen as the class name.
- Select the MainScreen class to be the superclass by using the Browse button (just like we did before).
- Unlike last time, make sure that the public static void main(String[] args) checkbox remains unchecked.
- Click on Finish.
What just happened?
Much like before, the package declaration, the import statement, and the class definition are created automatically for you. It should look a great deal like the code generated for the TipCalcApplication class except the main method is not there. Just like before, this is just a starting point and now we need to start modifying it.
Determining your screen requirements
So, what kind of screen should this application have? Well, in order to calculate a tip, you need:
- Some way to collect the total bill amount
- Some way to trigger the calculation
- Some way to display the result
There are a lot of different ways in which you can accomplish these three requirements. Laying out a screen, even a simple one such as this, is partially an art and partially a science. You could, for instance, collect the bill amount in two separate fields—one for dollars and one for cents. You might choose to use a button to trigger a calculation, or use a menu item to do it instead.
The topic of good UI design is well beyond the scope of this article. I will generalize and summarize development guidelines and best practices, but for those seeking a deeper look at the topic, BlackBerry publishes a set of UI development guidelines that you can examine for all the details.
For this application, we need to make some choices and keep going even though we haven't talked about the kinds of fields available yet. To simplify things, let's just say that we will:
- Use one edit type of field to collect the amount
- Use a menu to trigger the calculation
- Display the results in a dialog box
Alright, let's get to it!
Time for action – expanding the TipCalcMainScreen
OK, so we have a plan for what we want the screen to look like. Let's start by adding the field to accept the bill amount into the application.
Add the following code to the TipCalcMainScreen class as a data member.
protected EditField amount = new EditField();
What just happened?
Talk about baby steps! This one line didn't accomplish much, but a couple of things happened that need more explanation. First, you need to know a bit more about the EditField class. It may seem obvious, but an EditField is another class in the SDK that is designed to work with the MainScreen class in order to provide standard functionality. As the name implies, an EditField is meant to allow the user to enter text data.
Secondly, as you can see in the following screenshot, EditField is underlined within Eclipse with a red squiggly line, which indicates that there is an error.
Hovering over the line shows a dialog with some suggestions about how to solve the problem as shown in the next screenshot:
In this case the editor is letting you know that the class EditField is not defined yet. The dialog offers a few suggestions about how to fix the error, such as to create a new class or interface called EditField or to go back to the project setup, in case something there is wrong. In this case, all you really need is to make sure that the class has been imported. If you select the first item in the list, Eclipse will add an import statement to your class for the EditField class. You can manually add these import statements if you wish.
Time for action – adding more to the MainScreen
You can continue by adding the constructor for the screen and setting up the rest of the fields. This constructor does a number of things to set up the screen. In general, it is creating and adding all of the screen elements that will be displayed.
- Add the following code to the TipCalcMainScreen class.
public TipCalcMainScreen()
{
// Each screen can have a field in the Title section.
LabelField title = new LabelField("TipCalc" ,
LabelField.ELLIPSIS | LabelField.USE_ALL_WIDTH);
// Set the title to the label.
setTitle(title);
// setup the EditField to accept the Bill Amount
amount.setLabel("Bill Amount: $");
// add the field to the screen
add(amount);
}
- Add the calculateTip method to the TipCalcMainScreen class.
protected double calculateTip()
{
double billamount;
// Convert the text entered into the textfield into
// a floating point number.
try
{
billamount = Double.valueOf(amount.getText().trim()).doubleValue();
}
catch (NumberFormatException nfe)
{
billamount = 0;
}
double tipamount = billamount * 0.10;
// round the computed amount to two decimal places.
tipamount += 0.005;
tipamount *= 100.0;
int tip = (int)tipamount;
tipamount = (double)tip / 100.0;
return tipamount;
}
What just happened?
The previous section just set up the data member variables for the screen elements but in this section you actually used them to set up the screen in the constructor. The first step is to create a field for the title portion of the MainScreen. The MainScreen class reserves a field to be displayed at the top of the screen as a title and automatically adds a line under that field. This is part of the standard look and feel of an application that you get by using the UiApplication framework and the MainScreen class.
In this case, set the title of the screen with a LabelField. Much like the EditField, a LabelField also displays text but it is intended to be a label only and therefore is not editable. The LabelField is given two style attributes as well—ELLIPSIS and USE_ALL_WIDTH. The ELLIPSIS property indicates that if the text of the LabelField is too large for the screen (remember there are many different screen sizes on the various BlackBerry handhelds), then the text will be trimmed and an ellipsis (that is, three dots in a row) is shown to indicate that it was trimmed. The USE_ALL_WIDTH property indicates that the LabelField should use as much of the screen as it is allowed.
LabelField title = new LabelField("TipCalc" ,
LabelField.ELLIPSIS | LabelField.USE_ALL_WIDTH);
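Combining style attributes with the bitwise OR operator can be sketched in plain Java; the names and flag values below are made up for illustration and are not the real LabelField constants:

```java
public class StyleFlags {
    // Hypothetical stand-ins for LabelField.ELLIPSIS and
    // LabelField.USE_ALL_WIDTH; the real constants have other values.
    static final long ELLIPSIS = 1L << 0;
    static final long USE_ALL_WIDTH = 1L << 1;

    // A field can test whether a given style bit was set.
    static boolean hasFlag(long style, long flag) {
        return (style & flag) != 0;
    }

    public static void main(String[] args) {
        long style = ELLIPSIS | USE_ALL_WIDTH; // combine flags with bitwise OR
        System.out.println(hasFlag(style, ELLIPSIS));   // true
        System.out.println(hasFlag(0L, USE_ALL_WIDTH)); // false
    }
}
```

Because each flag occupies its own bit, any combination of styles fits into a single long parameter.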
After creating the LabelField for the title, you need to use it by calling the setTitle method with the new LabelField object. The setTitle method is one of those methods provided by the MainScreen as part of the framework. Don't forget to add the import statement for LabelField!
setTitle(title);
The amount field was previously created as a member of the class, but there is still some setup work to do on it, specifically setting the label and adding it to the screen. Unlike in some development environments, many BlackBerry fields include a label portion automatically. You could, of course, not set the label, in which case you would get just an empty EditField, but a label is desired so often that it is built into the EditField and many other fields. Not all fields have a label portion; for some, such as a ButtonField, it just doesn't make sense. For those fields where it does make sense, the label is already included.
amount.setLabel("Bill Amount: $");
It should be noted that the label portion of a field is not the same as a LabelField, but is simply a portion of the EditField that is dedicated to a label function and which cannot be edited.
Once the label has been set for the amount field, the last step is to add it to the screen. This is done by using the add method. The add method is one of those framework methods which is part of the MainScreen class. Using it will add the Field object to the screen so that it can be displayed. Notice that you didn't have to use the add method for the title. Using the setTitle method does this for you under the covers.
add(amount);
Lastly, you must add the calculateTip method, which is pure Java programming with nothing BlackBerry-specific in it, so we won't go over it line by line. The only line worth noting is the call to amount.getText(). Remember, amount is the name of the EditField where the user will enter the bill amount, so the getText method retrieves the text that the user entered there.
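The parsing-and-rounding logic can be exercised as plain Java, independent of any BlackBerry classes; TipMath and tipFor are names made up for this sketch:

```java
public class TipMath {
    // Parse the bill text, defaulting to 0 on bad input, then compute
    // a 10% tip rounded to two decimal places using the same
    // add-0.005-then-truncate trick as calculateTip.
    static double tipFor(String billText) {
        double bill;
        try {
            bill = Double.parseDouble(billText.trim());
        } catch (NumberFormatException nfe) {
            bill = 0; // unparseable input is treated as a zero bill
        }
        double tip = bill * 0.10;
        tip += 0.005;                    // push halfway cases upward
        int cents = (int) (tip * 100.0); // truncate to whole cents
        return (double) cents / 100.0;
    }

    public static void main(String[] args) {
        System.out.println(tipFor("20.00")); // 2.0
        System.out.println(tipFor("oops"));  // 0.0
    }
}
```

A bill of 20.00 yields a tip of exactly 2.0, and unparseable input falls back to 0.0 instead of crashing.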
Adding a menu to the application
Menus are important user interface components on a BlackBerry. It is the preferred way to enable a user to trigger an action in an application and generally speaking, every application should have at least one.
The framework provides a great deal of support for menus and even adds some standard menu items automatically for you in proper situations. This support helps to provide a standard look and feel that makes working with your application just as easy and familiar as it is to work with any of the other standard applications.
So far, your program has a screen and a method to calculate a tip, but nothing is using that method yet. This last step sets up a menu item for your program to calculate the tip and display the results.
Time for action – adding a menu to the MainScreen
- Add the following code to create the menu.
// Menu items
MenuItem _calculateAction = new MenuItem("Calculate" , 100000, 10)
{
public void run()
{
Dialog.alert("The tip is $"+Double.toString(calculateTip()));
}
};
- Add the menu to the screen by adding the following code snippet to the class.
protected void makeMenu(Menu menu, int instance)
{
menu.add(_calculateAction);
super.makeMenu(menu, instance);
}
What just happened?
The first step is the declaration of the menu item itself. This may look unusual if you aren't used to it: it is a shorthand way of declaring the menu item that uses a technique called anonymous classes. The declaration, the creation, and the inner code for the run method are all wrapped together in one concise fragment. This technique certainly makes the code easier to write and read, but you should understand that it is just a shortcut and that, behind the scenes, the compiler generates a lot of boilerplate code for you.
The important thing to get here is that the member _calculateAction is actually a data member of the class because this line of code is not part of a method. It MUST be done this way in order to take advantage of the shortcut. If you ignore the run method under it, this looks just like any other member declaration and creation statement.
MenuItem _calculateAction = new MenuItem(...)
In the creation of the MenuItem, the text of the menu item is set to Calculate and two more numbers are given to the constructor. These numbers are weighting values that tell MainScreen how to organize the menu that it is creating. The first number, 100000, is a sort order called the ordinal; items with a lower ordinal are placed closer to the top of the menu, and that is where the most commonly used functions should go. The exact value here is somewhat arbitrary and can just be thought of as "a big number" because this application has only one custom menu item to place.
new MenuItem("Calculate" , 100000, 10)
The second number is a priority. Priority is used to determine which menu item will be selected by default; a menu item with a lower priority number is more likely to be selected. Lastly, the run method is also implemented. This run method is very simple: it displays the computed tip amount in a dialog. The point of all this is that when the Calculate menu item is selected, a dialog will be displayed with the tip amount. But simply creating the menu item isn't enough; we still have to add it to the menu.
public void run()
{
Dialog.alert("The tip is $"+Double.toString(calculateTip()));
}
By overriding makeMenu we can add the menu items that we want to the menu which the MainScreen is already creating. The menu being created is passed in as a parameter and we can add more menu items to it by using the add method. When the menu button is pressed, the framework calls makeMenu for your application so that you can supply the menu items to be displayed. Because this method is called each time the menu button is pressed, you could add different menu items depending on the current state of your application. In our case, we need only one menu item to calculate the tip amount.
menu.add(_calculateAction);
The second parameter to makeMenu is an instance value that lets you know what kind of menu is being created. By checking this value you can put different menu items on the different menus, depending on what makes sense for your application. Generally, the full menu should have every menu item available, while the context menu (aka the short menu) should have only the bare essentials and most commonly used items in it. Because we are ignoring the instance value in this case, the menu item will be added to each kind of menu.
If you are adding more than one menu item it doesn't matter in what order you add them. Remember, all of the ordering is done based on the "sort order" and "priority" parameters passed in when constructing the menu item objects.
The last line in the makeMenu method, super.makeMenu, is like saying "Ok, I'm done interrupting you; please continue with what you were going to do." When you override a method you get in the way of what would normally happen. Sometimes, this is desirable, such as if you don't want the normal response to happen, but sometimes it is not. super is a Java keyword meaning the superclass of your derived class. Calling the same method on super lets the superclass execute the code that it normally would execute if you hadn't overridden the method and interrupted it.
super.makeMenu(menu, instance);
Forgetting to call super.makeMenu will cause all of the menu items that the system would normally add on your behalf to be missing!
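The override-then-delegate pattern itself can be sketched in plain Java; BaseScreen here stands in for MainScreen, and all of the names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

class BaseScreen {
    List<String> items = new ArrayList<String>();

    // The framework's version adds its standard items.
    void makeMenu() { items.add("Close"); }
}

class TipScreen extends BaseScreen {
    void makeMenu() {
        items.add("Calculate"); // add our own item first
        super.makeMenu();       // then let the superclass add its standard items
    }
}

public class SuperDemo {
    public static void main(String[] args) {
        TipScreen s = new TipScreen();
        s.makeMenu();
        System.out.println(s.items); // both items present
    }
}
```

Drop the super.makeMenu() call from TipScreen and the "standard" Close item vanishes, which mirrors the warning above.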
So there you have it! At this point, the application should work, assuming you don't have any copy/paste errors. It may seem hard to believe that an application can be made with such little code. This is mostly because the framework just does so much for you and you have to concentrate only on the basics of application design.
Setting the SDK version
Early on in this article, we spent a little time to decide which version of the SDK we wanted to use. However, we haven't done anything to make that selection a reality yet. Even though the application could be compiled and executed successfully right now, we need to take a moment to make sure that we are using the right SDK version. If you've done nothing since installing Eclipse, then the SDK version is probably 4.5. We need to change the build settings to use 4.2 instead.
Time for action – selecting the right component package
Eclipse can have multiple component packages installed and each workspace is configured to use one component package. You can change the package from the Configure BlackBerry Workspace menu item on the BlackBerry menu.
- Navigate to BlackBerry | Configure BlackBerry Workspace. From this dialog, you can change many of the settings specific to this BlackBerry workspace. (We will come back to this part later.)
- Expand the BlackBerry JDE branch of the tree and select the Installed Components node. This presents a drop-down list of the installed component packages. Strictly speaking, it doesn't matter which version you use for this sample, but choosing the version ahead of time, based on which models you want to support, is part of the plan we made earlier.
- If it is not already present, install the BlackBerry JDE Component Package v4.2.
- Once it is installed, make sure that the BlackBerry JDE Component Package 4.2.1 is selected in the list.
- When done, click on the OK button.
What just happened?
Because a workspace is tied to a specific component package, when you change the JDE component you will see a progress meter (like the one shown in the next screenshot) while the changes are being applied. Eclipse actually recreates the workspace from scratch in order to configure it properly for the new component package.
Afterward, another dialog will be displayed to clean and rebuild the workspace. Any compiled output needs to be deleted because you changed the BlackBerry libraries that the project uses. Accepting the defaults will clean and rebuild all of the projects in the workspace.
Testing it out
Now that we are done with the code, let's build and debug it. Because you copied all of the code there won't be any compile errors, right? Good. Eclipse is pretty good about flagging compile errors before you actually try to compile the application by placing the red squiggly line under any code with problems. At a quick glance, if you have none of these red squiggly lines in your application, then chances are good that it will compile.
Of course, just because your program will compile doesn't mean it's right and that there aren't other problems. Even a simple application like this should be tested, which is what we will do now.
Time for action – running your new application
- Click on Run | Debug and wait for the simulator to be displayed.
Note that you may get warnings from your firewall or antivirus system about the simulator trying to access the Internet. There are a number of reasons why the simulator does this so don't worry about the warnings. If you do get any, acknowledge them so that you can continue.
- Once the simulator starts, activate the TipCalc application and you can see the simple screen that you created.
- Next, click the trackball and the "short menu" comes up.
- Click on the Menu button or select the Full Menu menu item from the short menu to show a standard full menu.
- Close the menu by pressing the Escape key.
- Enter a valid value and click on the Calculate menu item.
- Continue testing the application and see if you can spot any problems.
- When you are ready to quit the application, close the simulator by clicking on the Close button.
What just happened?
Going through those steps should give you a good idea of how the application is used. Think back to the code you used to set up the screen and look at the screen of the application. Notice the title bar at the top and the edit field with the Bill Amount: $ label. The $ looks like it is part of the edit field, but it is really not. So, while the screen is very simple, you did get everything on the screen and it's looking good.
When you open the short menu you will notice that the Calculate menu item is displayed there. Because you didn't check the instance parameter of the makeMenu method, the Calculate menu item is added to all of the menus of this application.
Once you open the full menu you will notice that the Calculate menu item is present and that a few other menu items are shown as well. We didn't add these menu items; they were added by the framework automatically, which is one of the reasons we are using it. The presence of these menu items adds to the standard look and feel of a BlackBerry application.
Now that you've explored the menus you can test out the Calculate menu item. You can see that the tip was calculated and displayed in a dialog box, just as we intended. Thanks to the nifty math in the calculateTip method, the value is rounded properly as well.
Once you quit the simulator you may notice that Eclipse is still showing the Debug perspective. If this happens, it is because you encountered an error or had a breakpoint in the code that caused the debugger to stop and switch to the Debug perspective. You can switch back to the Java perspective by clicking on the Java button in the upper-right corner of the screen.
There are quite a lot of things wrong with this application still, but in a short period of time, and with relatively little source code, you've got a basic application complete and running in the simulator. This shows the power and ease with which a basic application can be put together. As with all programs though, the little details make a big difference, and that's what we will focus on next.
Giving TipCalc some polish
Now that we have covered the basics, let's go back and give it some polish. Did you play with it enough to know what needs to be improved? Here are a few of the obvious ones that I will address.
- There is no application icon. When you select the application on the simulator, there is a plain black window icon—a default icon supplied by the operating system if an icon is not presented.
- The name of the application is the same as the project name. This isn't horrible, but it could be better.
- The edit field on the application for the bill amount will accept any value. The bill amount needs to be limited to valid numbers.
- When exiting the application, a standard save prompt is shown. This is not useful, and is annoying, so it should be removed.
So let's start going through and addressing these issues!
Adding an icon to TipCalc
Adding an icon is very easy, but choosing the right size and shape of the icon can be very confusing. Each class of device has a different screen resolution and uses different sizes for the icon. Furthermore, the rules for handling icons that are not of the right size have also changed over time. We won't worry about that right now. Instead, we will just focus on adding an icon.
The icon that I made for this application is 52 x 52 pixels. Most of the icons are square-shaped and 52 pixels seems to be a pretty common size.
Icons can be in either GIF or PNG formats. When creating an icon be sure to use a tool that will let you specify the transparent color for the image; and no, MS Paint won't work. In this case, gray is the transparent color and the black border is not part of the image, but is displayed by the image viewer.
First, you need to add the image to the project in Eclipse. Eclipse does not have an image editor component, nor does it have the ability to add an existing file to a project. Instead, just use the Windows Explorer to copy and paste the file into the proper directory or, when creating the image using an editor, save the file directly to the project directory. What directory is that? Remember when you first launched Eclipse, it asked which directory would be used for the workspace. If you don't remember where that is you can view the project properties and the path is shown there.
Time for action – adding an icon
- View the project's Properties window to get to the workspace directory.
Once the Properties dialog is shown, you will find the path to the project at the top of the Resource properties page in the Location field.
- Using the path in the Location field, open a Windows Explorer window and browse to that location.
- Using Windows Explorer, move the icon's image file into the project's directory in the workspace.
Once the file is in the proper directory you may notice that it does not show up in the project list in Eclipse. Eclipse won't automatically pick it up, so you need to tell Eclipse to refresh the project by selecting Refresh from the right-click menu or by pressing the F5 key.
- Refresh the Package Explorer by pressing the F5 key.
- Once the image is listed in the project by Eclipse, open the file properties by right-clicking the image and selecting the Properties menu item. This will look similar to the project's Properties, but have only a few property pages. In this case, we want to see the BlackBerry File Properties dialog, which is done by selecting the BlackBerry File Properties group on the left-hand side of the dialog to display the BlackBerry File Properties tab on the right. There are quite a few options here, most of which are used in advanced situations, but here, the Use as application Icon checkbox is what we are after. Checking this checkbox is all there is to do; Eclipse will handle the rest. Now, when you run the application the icon will be shown instead of the default black window icon.
- Check the Use as application icon checkbox in the BlackBerry File Properties page of the image properties dialog.
What just happened?
Setting the application icon is straightforward once the file is in the proper place. There were a number of steps, partly because we first looked up where the workspace directory is located, and partly because we had to copy the file into place using Windows Explorer. Once the file was in the right place, setting the image file to be the application icon was simply a matter of checking the checkbox.
You can test this out by running the application again in the debugger and seeing that the icon is set properly. Notice that the background yellow disc shows up properly because this image file has the transparency color set properly.
Time for action – changing the application title
The next issue, changing the display name, is also solved by changing properties in the project's Properties dialog. In fact, there are several project settings that should be addressed at the same time!
- Right-click on the project name and then select the Properties menu item.
- Next, select the BlackBerry Project Properties from the list on the left-hand side of the screen to display the BlackBerry Project Properties tab.
In the General tab you will find fields for several attributes, including the title and version of the application. It's a good idea to fill this dialog out as soon as you create the project.
- Enter the information needed for application name, vendor, and version in the dialog.
- Click on the OK button to close the dialog.
Next time you compile and debug the application, the changes will be there!
What just happened?
Changing the title is an easy step and one that doesn't really have any effect on the application; it simply changes what is displayed on the screen when the application icon is selected.
Although we came to this screen just to change the title, the other values here should be given a value as well. Obviously, the system will allow the Version, Vendor, and Description fields to be empty, but it is good practice to make sure these are populated. These will be used later when you build your application for distribution. These values will show up in the BlackBerry App World or in the Application Loader when you go to install it on a device.
Fixing the Bill Amount field
The third issue is one that must be solved with code. The issue here is that the Bill Amount text box will allow any kind of text, when we really want to allow only numbers. We aren't the first to want to do this, so as you might expect, there is already a way to do it, called a TextFilter. TextFilter is the base class and can support many different kinds of filters, such as phone number, e-mail address, and more. A more specific class, called NumericTextFilter, is geared toward handling numbers, so we will use that one. The EditField class knows how to interact with a TextFilter, so adding this capability is as simple as creating the filtering object and calling setFilter on the EditField.
Time for action – fixing the bill amount field
- Add the following code to the TipCalcMainScreen constructor.
// In order to keep things nice and easy for the user, set a filter
// preventing them from entering anything but numbers
NumericTextFilter amt_filter =
    new NumericTextFilter(TextFilter.REAL_NUMERIC);
amount.setFilter(amt_filter);
- Of course, you need to add a couple of imports to tell the compiler about the new classes that you are using. Add these as well with the other import statements at the top of the file.
import net.rim.device.api.ui.text.TextFilter;
import net.rim.device.api.ui.text.NumericTextFilter;
What just happened?
The NumericTextFilter needs the additional style of TextFilter.REAL_NUMERIC to let the filter know what kind of numeric is needed. Without this parameter, a decimal point would not be allowed and only whole numbers could be entered into the Bill Amount field. After creating the new filter object, we call setFilter, so that the amount field will start using it.
The result is that the amount field will now accept only characters that make up a real number.
Disabling the save prompt
The next issue to tackle is to disable the save prompt on exiting the application. Something like this is usually a good feature and the fact that it is already supported automatically by the MainScreen is a nice benefit. However, in this case it doesn't make sense and you need to stop it from happening. To do this, you need to understand how the MainScreen handles the saving feature.
The API reference shows that MainScreen offers two methods related to saving—onSavePrompt and onSave. MainScreen implements onSavePrompt for us and it is this implementation that detects if the fields have changed and then displays the dialog prompt to save.
For this application though, you won't be saving the value of the Bill Amount field, so it just doesn't make sense to display the save dialog. Fortunately, you can shortcut that logic by overriding onSavePrompt and doing nothing. By simply returning true (and not calling super.onSavePrompt), you are effectively telling the MainScreen that the user did save the data and that the application can be closed.
Time for action – disabling the "save" dialog
Add the following method to the TipCalcMainScreen class.
// return true to allow an exit without displaying the save prompt
protected boolean onSavePrompt()
{
    return true;
}
What just happened?
Remember, when we talked about overriding the makeMenu method, that sometimes you want to call the same method of the super class (that is, super.makeMenu), and sometimes you don't. This was an example of a time when you don't want to call the super class, because you wanted to interrupt what the super class was doing. In this case, simply returning true gave you the desired behavior.
Have a go hero – expanding TipCalc even more
Now that you've covered some of the basics of your first application, why not take it a step further and refine it on your own? One of the biggest problems with this application is that it can calculate the tip at only one percentage. Sometimes you want to tip different amounts based on the quality of service, so it can be helpful to be able to change the percentage that is used to calculate the tip.
To make this happen, add a second field to the screen where the user can change the percentage that can be used to calculate the tip. We haven't really covered any other kinds of fields so it would be best to use another EditField for now. Don't forget that the actual method for calculating the tip amount will need to be changed as well!
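As a hedged starting point for that exercise (nothing below comes from the article's source; the class name, method name, and cents-based representation are all assumptions), the calculation itself only needs to accept the percentage as a parameter. Working in whole cents keeps the "round up properly" behavior explicit:

```java
// Illustrative sketch only: TipMath and calculateTipCents are assumed
// names, not part of the TipCalc source shown in the article.
class TipMath {
    // billCents: bill amount in cents; percent: tip percentage (e.g. 15)
    static long calculateTipCents(long billCents, int percent) {
        long raw = billCents * percent;   // value in hundredths of a cent
        return (raw + 99) / 100;          // round up to the next whole cent
    }

    public static void main(String[] args) {
        // $20.00 at 15% works out to $3.00
        System.out.println(calculateTipCents(2000, 15));
    }
}
```

A second EditField, given the same NumericTextFilter treatment as the bill amount, could then supply the percent argument.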
Summary
You just completed a couple of iterations of a real BlackBerry application, and it's a pretty good one too! The fact that you can do so little work and make such a good application speaks volumes to how powerful the BlackBerry framework is and how much it provides for you. We did gloss over many things as we sped through the article but this is just a hint of the depth and power of the development environment.
Specifically, we covered:
- We created the project files from scratch and utilized the Eclipse New Java Class dialog to make things a bit easier
- We created both an Application and a Screen class for the new application
- We updated the project properties; the defaults were OK, but even this simple change improves the application tremendously
- In the second iteration, we added an application icon; every application needs one
About the Author:
Bill Foust
Bill has been a professional developer for over 15 years in a variety of platforms and languages. Bill first encountered a RIM 950, the predecessor to the BlackBerry brand, in 1998 and has been working with them since that time. During this early period, he wrote and sold many applications for BlackBerry through Handango.com and was involved in many custom development projects as well.
With the release of the BlackBerry 5810 in 2002, the development environment changed from C++ to Java Micro, and Bill started pursuing other opportunities, including .NET and has continued to do development and consulting professionally since then.
He wrote his first book "Mobile Guide to BlackBerry" in 2005 and later became part of the podcasting and blogging team at MobileComputingAuthority.com, which focuses on mobile devices of all kinds.
>> %apply int *OUTPUT { AllData *data };
>This means you apply the int* typemap to the AllData* type. So swig thinks
>that AllData is an int, while it is not.
>-Matthias;
This will work:
%module example
%include "typemaps.i"
%{
#include "example.h"
%}
void someFunc(AllData* OUTPUT);
void someOtherFunc(AllData* OUTPUT, AllData* OUTPUT);
Of course that requires you to write every function into the interface
file instead of using %include example.h.
Probably you can use %apply to make it possible to only use %include. I am
not sure how %apply works exactly here; maybe instead of writing the
functions like above you could do something like
%apply AllData *OUTPUT { AllData *data };
or
%apply SWIGTYPE *OUTPUT { AllData *data };
Not sure if these %applys make sense or not, you'll have to try.
-Matthias
Hi, folks. Up until now I have been placing all my RESTful service classes and their DTOs in the root of my Web project. Here's an example structure:
Services project
-- Product.cs
-- ProductsService.cs
I'm configuring the routes like so:
routes.MapServiceRoute<ProductsService>("products", configuration);
Everything works perfectly with that approach. However, my project is starting to get a bit messy due to all the files in the Services namespace. In order to get a bit better organization, I tried to place my service class and its DTOs in a subfolder
within the Web project. The structure looks something like this:
Services project
-- Products
-- Products/Product.cs
-- Products/ProductsService.cs
When I try to access my service via its standard URL, I get the following response:
<html><head><title>Object moved</title></head><body>
<h2>Object moved to <a href='/products/'>here</a>.</h2>
</body></html>
This is followed up with a 404 error response. I am wondering how the route should change to accomodate the existence of the Products subfolder. Can anyone help?
To my understanding, moving code files to different folders and namespaces does not affect the service location when accessing the service from the web. You might want to check your MapServiceRoute calls for refactoring errors. You also didn't specify whether you moved ProductsService.cs as well, since MapServiceRoute uses it rather than the ProductService.cs you said you moved to the new folder (assuming your file names match your class names).
I spotted a typo in my original post. ProductService.cs should be ProductsService.cs (in my actual code it is correctly named).
I assure you that MapServiceRoute is coded correctly. There's not much with that method call that one can get wrong.
Somehow, the existence of a folder that matches the route causes problems. I should note that renaming the folder to Products2 allowed the code to work once again. The problem only occurs when the route matches the folder.
A quick update: I configured the
ASP.NET Route Debugger and noticed that when there is a folder name matching the URL structure, the route debugger doesn't even execute. This leads me to believe that the ASP.NET Web Development Server itself is actually handling the request completely
and that the request never makes it to the ASP.NET routing engine.
After reading this post I am inclined to believe that this is working as intended due to the way IIS and ASP.NET integrate. I think it makes sense that IIS should handle several types
of requests (folders, content files, etc.) while leaving ASP.NET-specific requests to ASP.NET. Unfortunately Visual Studio projects don't have any kind of "virtual folder" capabilities that other IDEs like Xcode have, so I'm stuck having to place these service
folders within App_Code. The extra level of folderness won't hurt too bad. :) I also set the Namespace Provider setting on the App_Code folder to False to preserve the namespace structure I was looking for.
Runnables as Function Pointers
Last Friday’s article expressed some longing for C-style function pointers. It attempted to use AS3’s Namespace class to fake a function pointer. Unfortunately, this resulted in far slower code than simple direct access. Today’s article shows a technique that actually results in far faster code!
The Namespace approach failed due to Namespace being a dynamic class and access through it therefore being necessarily slow. So how about a non-dynamic approach? The idea I came up with was to create a class like Java’s Runnable interface. The idea is to create an interface that defines one function that takes the parameters you want to pass and returns the parameters you want returned. This is similar to function pointers in C where the syntax forces you to specify what the function’s parameters and return value are. The next step is to create a class implementing the interface for each function you want to point to. Next, simply implement the one function that the interface specifies. Lastly, the code wanting to emulate function pointers points its Runnable variable at an appropriate object implementing Runnable and calls to its one function are then made. Consider this simple example:
package
{
    import flash.display.*;
    import flash.utils.*;
    import flash.text.*;

    public class RunnableTest extends Sprite
    {
        public function RunnableTest()
        {
            var logger:TextField = new TextField();
            logger.autoSize = TextFieldAutoSize.LEFT;
            addChild(logger);

            var i:int;
            const FUNC_CALL_REPS:int = 10000000;
            var runnable:Runnable = new MyRunnable();
            var func:Function = runnable.run;

            var beforeTime:int = getTimer();
            for (i = 0; i < FUNC_CALL_REPS; ++i)
            {
                func();
            }
            logger.appendText("Func call time: " + (getTimer()-beforeTime) + "\n");

            beforeTime = getTimer();
            for (i = 0; i < FUNC_CALL_REPS; ++i)
            {
                runnable.run();
            }
            logger.appendText("Runnable call time: " + (getTimer()-beforeTime) + "\n");

            beforeTime = getTimer();
            for (i = 0; i < FUNC_CALL_REPS; ++i)
            {
                this.foo();
            }
            logger.appendText("Direct call time: " + (getTimer()-beforeTime));
        }

        private function foo(): void
        {
        }
    }
}

internal interface Runnable
{
    function run(): void
}

internal class MyRunnable implements Runnable
{
    public function run(): void
    {
    }
}
The results are impressive:
It seems as though you sacrifice almost no performance using this technique. Runnables only slightly lag behind direct function calls and are some 14x faster than Function objects! But what else do you lose? Well, you’ll need to create at least one interface and two classes, which contributes to SWF size, project complexity, and number of files to maintain. You’ll need to create more classes implementing the interface if you have more functions to point at. You’ll also need to instantiate an object of each of these types in order to point at it. If you are going to do a lot of processing, this probably won’t be much of a burden for the benefits (fewer if-else chains) you’ll get. You can also pre-allocate and re-use them. Lastly, you’ll need to split these functions out of the classes they naturally belong in and therefore need to pass along the “this” pointer they would normally have and provide non-private access to the class as far as the function needs to do its work. This can lead to overexposed classes, but the internal access specifier can help with that.
There is one surprising benefit of the runnable strategy as described above. If you intend to pass the same arguments to the function over and over, such as the “this” pointer it would normally have, you can simply store them as fields of the class. This leads to a related secondary advantage. Say your function normally computed a value and added it to an Array field of the class it should belong to when not using the runnable technique. You could pass this Array to the runnable’s constructor, store it as a field, and then share a reference to the Array with the original class. Both of these are good for limiting argument passing.
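For instance (a sketch only; the class and field names here are made up, not from the article's code):

```actionscript
// A runnable that shares an Array with its owner; the result of run()
// lands directly in the owner's collection with no argument passing.
internal class AppendSquare implements Runnable
{
    private var __results:Array;
    private var __value:int;

    public function AppendSquare(results:Array, value:int)
    {
        __results = results; // reference shared with the owning class
        __value = value;
    }

    public function run(): void
    {
        __results.push(__value*__value);
    }
}
```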
This technique works out surprisingly well. It’s a lot of overhead in typing, propagation of files, and a little overhead in SWF size, but when you really need to eliminate some if-else chains with a function pointer-style solution, runnables sure seem to be the way to go.
#1 by Troy Gilbert on September 25th, 2009
You can get the same benefits without having to implement an interface. I’ve done some performance tests and any function reference (except for an anonymous function, what you’re calling a function object) will run at virtually the same speed of direct call.
For the latest rev of our API we’re using callbacks (Function references) for all our notifications because they’re significantly faster (10x+) than native events (because they require a memory allocation).
#2 by jackson on September 25th, 2009
Could you elaborate on your technique? The three ways I know to do function pointers are:
The first two are painfully slow, as shown in this article and the linked one about Namespaces. The runnables technique has almost no performance penalty. If you know of a fourth technique, I’d sure like to hear about it.
I totally agree about Function variables being much faster than events. That really just shows how Function variables are relatively fast when compared to events, not that Function variables are absolutely fast when compared to all available techniques. For example, your API could use the runnable strategy for callbacks rather than Function variables. Consider a simple case:
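(The names here are purely illustrative; this is just a sketch of the shape such a callback API could take.)

```actionscript
// The API holds a TickCallback reference and invokes
// callback.onTick(now) directly, with no Function variable involved.
internal interface TickCallback
{
    function onTick(time:int): void;
}

internal class Game implements TickCallback
{
    public function onTick(time:int): void
    {
        // react to the tick
    }
}
```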
That would definitely be faster than if the callback was a Function variable. Just a thought if you’re concerned enough about speed to ditch native events.
Thanks for commenting!
#3 by Troy Gilbert on September 25th, 2009
I’m guessing when you do your performance tests of Function variables you’re assigning an anonymous function to them. Anonymous functions are slow (closure and such, I guess). But if you assign any normal, named function (class method, static or instance, package function, etc.) to a Function variable it executes at virtually the same speed as a direct function call.
#4 by jackson on September 25th, 2009
My test is really just what you see above. I'm assigning a normal, named function to the Function variable on the line:

var func:Function = runnable.run;

And the performance is terrible. More precisely, the performance of the function call is terrible, not the execution of the function.
Could you provide a test case that shows otherwise? The runnables technique does have downsides, as discussed in the article, so I’d really like to see if you have a better way in mind.
#5 by Troy Gilbert on September 25th, 2009
I’ve got a test case that compares events, callbacks and observers. What’s the best way to post code to the blog (or I could email it to you)?
#6 by jackson on September 25th, 2009
We might be talking about two different comparisons. I think you’re talking about events versus callbacks and I’m talking about callbacks via Function variables versus callbacks via runnables. I totally agree about events being slower than callbacks via Function variables and am trying to show that there’s a further speedup. It’s kind of like this:
This article is an attempt to improve even on callbacks via Function variables by implementing callbacks via runnables. There’s nothing limiting runnables to callbacks, so I covered it in general rather than just the callbacks case. Now I’m wondering if you have discovered a technique that improves even on callbacks via runnables. That would be awesome to see! You can either e-mail it to me directly or post it in comments by wrapping it up:
<pre lang="actionscript3">
// code goes here
</pre>
Thanks!
#7 by Troy Gilbert on September 25th, 2009
Whoops, forgot the obvious: check out my blog post on the subject from a few weeks ago:
That link is to a correction on an earlier article. I haven’t had a chance to go back and write a fresh one cleaning everything up, but you’ll find examples (and perf tests) of callbacks.
#8 by Troy Gilbert on September 25th, 2009
Okay, here’s my test harness:
It tries to be a slightly more real-world usage scenario. I allocate multiple objects to receive notification (via event, callback or observer), and iterate through the targets through a container (you call directly on a local instance, which will make it seem even faster).
Looking back at my times, there is a difference between callbacks (Function references) and observers (runnables). A not insignificant difference, to be honest, particularly if you look at the times in the standalone Flash Player (as opposed to debug browser plug-in). But when compared to events, callbacks and observers are in the same class.
All that being said, observers look to give a 2x-3x improvement over callbacks, but that’s only measurable over hundreds of thousands of calls. Given the huge drawback of having to implement a specific interface and being limited to a single function per-class, I think callbacks (Function references) are the winner.
#9 by Troy Gilbert on September 25th, 2009
Well, some of it's been eaten up by making it HTML-safe, so cut-n-paste with care!
#10 by jackson on September 25th, 2009
Thanks for the demo. I edited your comment to fix the HTML safety stuff. My results on a 3.0Ghz Intel Core 2 Duo with 2GB of RAM on Windows XP are:
So right away the "observer" technique is 3x faster than callbacks, just like you say. It sure looks like your observer technique is almost the same as my runnable technique; perhaps "observer" is the more correct terminology. Wikipedia seems to think so. In any case, it seems as though your observer technique test validates my runnable technique test, which is good to hear.
Thanks very much for providing the test!
#11 by Troy Gilbert on September 25th, 2009
Wow, Flash Player on Windows performs so much better when it comes to memory allocations (the real difference between events and callbacks):
That’s the release standalone Flash Player 10, the fastest way to run Flash on the Mac.
#12 by Uwe Holland on September 25th, 2009
I’m following your discussion interested and I’ve just posted our approch on my blog relying to this.
#13 by jackson on September 25th, 2009
I saw and commented about it. Thanks for doing the writeup. Between the three of us we have really explored the options an AS3 programmer has for doing callbacks!
#14 by Uwe Holland on September 26th, 2009
Find my answers here. :) | https://jacksondunstan.com/articles/323?replytocom=55 | CC-MAIN-2020-16 | refinedweb | 1,767 | 61.36 |
Update: This is completely deprecated, thanks to Growl's new built-in network notifications. Here's a better way to do most of this, including locating machines to notify.
I've been wanting to do something like this for some time now, but it's taken a while to find some free time, inspiration and a few of the puzzle pieces (and it's still a bit of a hack, since I'm not being a good Rendezvous neighbour, as you'll see later).
The concept is simple: I wanted my UNIX boxes to broadcast simple notifications to my LAN. Typically stuff like:
- fetchmail has finished
- arpwatch spotted a new MAC address
- Something interesting has been spotted in a log file
...etc. You get my drift. I had previously looked at LanOSD, but it's neither open nor simple enough for what I had in mind. Plus, I wanted this to be something machines could react upon too, and it had to be something unprivileged processes had easy access to with a minimal API.
Simple Plumbing
Integrating processes across security contextes has an age-old solution in the UNIX world: piping. You create a pipe someplace in your filesystem, and processes can read and write to it to exchange data. In my case, all a batch script needs to do is this:
...
# We have a new LAN station:
echo "New station:|$IPADDRESS ($MACADDRESS)" >> /tmp/alert
...
Anything can do this, or be modified to do this. No network programming is required, and it requires no special privilege levels (other than the ones you set on the pipe, of course).
Abusing Rendezvous
Now, my original idea was to use Rendezvous itself as the messaging protocol. I'll eventually move on to use SIP (since that's something I'd like to do a baseline implementation of), but for now, to send an alert to the LAN I simply create a new _alerter._udp.local. Rendezvous record with a short TTL (15 seconds):
from Rendezvous import *
import socket, time, sys, os

PIPE = "/tmp/alert"

def loop():
    r = Rendezvous()
    while 1:
        f = open(PIPE, 'r')
        line = f.readline()
        f.close() # we only want the one line - avoid buffering issues
        # pack line into mDNS record
        desc = { 'source':'pipe', 'text':line }
        # use MDNS as notifier with short TTL. Note that the socket calls
        # might send the wrong IP address on a multihomed machine
        info = ServiceInfo(
            "_alerter._udp.local.",
            "Alert" + str(int(time.time())) + "._alerter._udp.local.",
            socket.inet_aton(socket.gethostbyname(socket.gethostname())),
            0, 0, 0, desc )
        # we'll need to unregister this later
        r.registerService(info, 15)

if __name__ == '__main__':
    if not os.access(PIPE, os.F_OK):
        os.mkfifo(PIPE)
    loop()
To do this, I'm using pyzeroconf. It's working so far, except that records seem to outlast their TTL (I've yet to investigate the inner workings of Rendezvous.py to the point where I fully understand its semantics, so I think I might have to keep track of alerts and unregister them manually later).
The upshot is that, after a while, I get a couple of dozen still "live" mDNS records hanging around (I really should write a couple of scripts to do decent ASCII dumps of mDNS records, since there seem to be so few Rendezvous debugging tools around).
Still, this is a proof of concept, not a finished package - and the sample code is easier to understand this way.
Glueing it to Growl
Picking up on the basic Python bridge I had done earlier, I just create a Rendezvous listener of the appropriate type and invoke Growl when needed:
import os
import socket

APPLESCRIPT = "/usr/bin/osascript"

from Rendezvous import *

class AlertListener(object):
    def __init__(self):
        pass

    def addService(self, rendezvous, type, name):
        print "Service", name, "added"
        info = rendezvous.getServiceInfo(type, name)
        properties = info.getProperties()
        text = properties['text']
        try:
            (title, text) = text.split( "|", 1 )
        except:
            title = ""
        # try to resolve the server address
        addr = socket.inet_ntoa(info.getAddress())
        try:
            name = socket.gethostbyaddr(addr)
        except:
            name = addr
        notify(title, text + "\n(from %s)" % (name))

def notify(title, description, icon = "Finder"):
    if os.path.exists(APPLESCRIPT): # assume we're on a Mac
        # See if Growl is installed
        if os.path.exists("/Library/Frameworks/GrowlAppBridge.framework"):
            applescript = os.popen(APPLESCRIPT, 'w')
            applescript.write(
                'tell application "GrowlHelperApp"\nnotify with ' +
                'title "%s" description "%s" icon of application "%s"\n'
                    % (title, description, icon) +
                'end tell')
            applescript.close()
        else:
            pass # use something else
    else:
        # use the age old UNIX way
        print "NOTIFICATION - %s: %s" % (title, description)

if __name__ == '__main__':
    notify( "Ready", "Alert monitor running" )
    r = Rendezvous()
    type = "_alerter._udp.local."
    listener = AlertListener()
    browser = ServiceBrowser(r, type, listener )
Stuff to Improve
The server side of things will eventually become a bit more complex and incorporate my own "reaper" to unregister services after a while (again, this is mostly a proof of concept). I'll also need to create a system startup script for it, tie it in with all sorts of system events, and (this is the juicy bit) create listeners to act upon alerts (like nmaping new machines that pop up on my network, rsyncing files when batches finish, etc.).
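A minimal sketch of such a reaper (the names here are made up for illustration — the unregister callback stands in for the actual Rendezvous unregister call, and MAX_AGE is an arbitrary choice):

```python
import time

MAX_AGE = 300  # seconds an alert record is allowed to live

class Reaper:
    """Track registration times and drop records older than MAX_AGE."""
    def __init__(self, unregister):
        self.unregister = unregister   # callback that removes one service
        self.registered = {}           # service name -> registration timestamp

    def track(self, name):
        self.registered[name] = time.time()

    def reap(self, now=None):
        now = time.time() if now is None else now
        for name, born in list(self.registered.items()):
            if now - born > MAX_AGE:
                self.unregister(name)
                del self.registered[name]

removed = []
reaper = Reaper(removed.append)
reaper.track("alert-1")
reaper.reap(now=time.time() + MAX_AGE + 1)   # pretend 5+ minutes passed
print(removed)  # → ['alert-1']
```

Calling reap() periodically (say, from the server's main loop) would keep the number of "live" mDNS records bounded.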
Setting the correct permissions on the pipe is also something I'll have to look into, as is checking pipe input correctly, making sure buffering works properly, etc. - right now I blindly trust whatever comes in, regardless of size and format, and have the client figure out what it is (if data reaches the client, then it hasn't crashed the Rendezvous layer - and that is likely to have size limitations, too). Also, pipes work a little differently on Linux - I keep getting null data out of readline, but it might be just my lack of Python experience.
And, of course, I'll have to start using Rendezvous properly (i.e., merely to announce that a server exists) and start using SIP in a hub-and-spoke configuration, with clients waking up, looking for server(s) using Rendezvous, registering with them using SIP and getting notified in the same way.
I actually have a few unconventional ideas I'd like to try out, such as making it a fully P2P, self-organizing setup, but, as always, it will take some time for all the pieces to fall in the right place.
(As usual, the working bits will find their way into my CVS repository someplace, and I'll post regarding any updates.)
After all, I've got a lot of relaxing to catch up on. In the meanwhile, maybe the code snippets above are of use to someone.
I decided to just go with the inelegant solution of building a new
project and adding submodules into which I clicked and dragged the
source code I needed. So the problem is solved.
It looks like this in IDEA now:
-- parent
-- sub-module 1
-- sub-module 2
-- etc.
Perhaps I'm not using Maven in the spirit it was intended... I noticed
in the POM reference that you can declare a parent project and, in the
<module /> element, specify its aggregated projects with a relative
pathname (meaning the modules don't need to be subdirectories of the
parent). Similarly, I saw that an inheriting pom.xml can use relative
pathnames in its <parent /> element.
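Sketched as POM fragments (the coordinates and paths below are illustrative, not taken from the thread):

```xml
<!-- parent/pom.xml: an aggregator whose modules live OUTSIDE its directory -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>parent</artifactId>
  <version>1.0</version>
  <packaging>pom</packaging>
  <modules>
    <!-- relative paths: modules need not be subdirectories of the parent -->
    <module>../project-a</module>
    <module>../project-b</module>
  </modules>
</project>

<!-- ../project-a/pom.xml: inherits from a parent in a sibling directory -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <artifactId>project-a</artifactId>
  <parent>
    <groupId>com.example</groupId>
    <artifactId>parent</artifactId>
    <version>1.0</version>
    <relativePath>../parent/pom.xml</relativePath>
  </parent>
</project>
```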
So, it is possible to "relate" projects strongly without unifying them
into a directory hierarchy of IDEA project files. Then, the user can
import multiple projects that are then shown as a list in the project
explorer. I was hoping that the IDEA Maven plugin would unify all of my
projects for me when I declared the relationship. However, given the
observation in the previous paragraph, there must be situations in which
this behavior is undesirable. And, there are probably very few
situations where this is really required given Maven's ability to relate
projects with such flexibility.
Maven by Example makes it seem as though the directory hierarchy for
parent/submodule projects is highly desirable (if not required).
Correct analysis?
--
Jason Franklin
j_fra@fastmail.us
On Sun, Mar 20, 2016, at 07:00 AM, Jason van Zyl wrote:
> I generally use Eclipse but I believe what you want is what’s referred to
> a workspace in Eclipse. A graph of loosely related projects in the
> workspace where you resolve against the version in your workspace as
> opposed to resolving the dependency from the repository system. If I
> recall there is a “+” button hidden in the Maven view that allows you to
> add more Maven projects to a single workspace so that you don’t have to
> make aggregrator POMs or have multiple windows open. I remember it being
> extremely hard to find. If you can’t see it I’ll fire up IDEA and try to
> put together an example.
>
> > On Mar 17, 2016, at 11:50 AM, Jason Franklin <j_fra@fastmail.us> wrote:
> >
> > Greetings,
> >
> > I'm new to Maven, and I've taken the time over the past week to read
> > "Maven by Example." I'm trying to join two independent projects as
> > submodules in a third project serving as the parent project. Is there a
> > way to create a parent project in IntelliJ and import two previously
> > independent projects as submodules without simply moving the directories
> > and changing the pom.xml files manually?
> >
> > --
> > Jason Franklin
> > j_fra@fastmail.us
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: users-unsubscribe@maven.apache.org
> > For additional commands, e-mail: users-help@maven.apache.org
> >
>
> Thanks,
>
> Jason
>
> ----------------------------------------------------------
> Jason van Zyl
> Founder, Takari and Apache Maven
>
>
> ---------------------------------------------------------
>
>
>
> ---------------------------------------------------------------------
#include "stdafx.h"
#include <iostream>
#include <fstream>
#include <string>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <objbase.h>

using namespace std;

BYTE* StringToGUID(LPOLESTR szBuf)
{
    GUID *g = (GUID *) malloc(sizeof(GUID));
    HRESULT h2 = CLSIDFromString(szBuf, g);
    return (BYTE*) g;
}
I Get:
1>Linking...
1>XorCryt.obj : error LNK2019: unresolved external symbol __imp__CLSIDFromString@8 referenced in function "unsigned char * __cdecl StringToGUID(wchar_t*)" (?StringToGUID@@YAPAEPA_W@Z)
1>C:\Users\Grace\Documents\Visual Studio 2008\XORCRYT\XORCRYT\Debug\XorCryt.exe : fatal error LNK1120: 1 unresolved externals
1>Build log was saved at ":\Users\Grace\Documents\Visual Studio 2008\XORCRYT\XORCRYT\Debug\BuildLog.htm"
1>XorCryt - 2 error(s), 0 warning(s)
========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========
Any ideas? Any help... I am getting ready to give up on this and try to
do the hex-to-binary conversions myself, but if this worked it would be
nice. What am I missing? A library reference?
According to Microsoft Forum this code is supposed to work. I will continue
to look at it but if anyone has some practical experience that might be
helpful.... I can already generate the GUIDs that I need. I need to get
GUIDs entered by the user into binary form also. This seemed to be well
documented but it was hard to find the right include file and I am not
sure that that is enough.
Thanks again folks.. I *AM* learning... albeit slowly.
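For what it's worth, this particular LNK2019 almost always means a missing import library rather than a code problem: CLSIDFromString lives in Ole32.lib, which the project isn't linking. A sketch of the fix — add Ole32.lib under Project Properties → Linker → Input → Additional Dependencies, or request it from source with an MSVC-only pragma. The #ifdef guard and the GUID string below are just so the sketch compiles anywhere:

```cpp
// LNK2019 on __imp__CLSIDFromString@8: the linker never saw Ole32.lib,
// the import library that exports CLSIDFromString. (Also note the malloc'd
// GUID in the original StringToGUID is never freed, and the input string
// must be a braced GUID like "{...}".)
#ifdef _WIN32
#include <objbase.h>
#pragma comment(lib, "Ole32.lib")   // tells the MSVC linker to pull in Ole32
#endif

const char* clsid_demo() {
#ifdef _WIN32
    GUID g;
    HRESULT hr = CLSIDFromString(L"{6B29FC40-CA47-1067-B31D-00DD010662DA}", &g);
    return SUCCEEDED(hr) ? "parsed" : "failed";
#else
    return "windows-only";          // stub so the sketch compiles off Windows
#endif
}
```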
Build a simple chat web app using Faye, Express and Vue - Part 4
In the previous part, I implemented a simple pub/sub server to push messages to the client side. In this part, I am going to implement the user interface using Vue
Data flow⌗
Vue supports Flux officially through the use of Vuex. There are a lot of blog posts around the Internet explaining the Flux pattern in great detail, so I won't go deep into it. Instead I will summarize the core concepts of Vuex.
Vuex provides a single source of truth (a single state tree) through which all changes must go. UI components change according to changes made to state.
Each change in Vuex happens through a mutation (event). Each mutation modifies the state tree and in turn re-renders the UI accordingly.
export default {
  [types.DELETED_MESSAGE] (state, {id}) {
    state.messages = state.messages.filter((msg) => {
      return msg.id !== id
    })
  }
}
Vuex encourages the use of constants for mutation types in order to share them with actions (more on this later). In the above mutation, the state is mutated by filtering out the message matching the given id.
As mentioned before, actions are a part of the data flow. In Vuex, there is no restriction on how to mutate the data; it can be done through mutations or actions. However, actions are there to separate the mutation logic from the actual action leading to the mutation. For example, when a user clicks the button to send a message, it triggers the sendMessage action, which in turn commits the SENT_MESSAGE mutation after calling the API, notifying the UI that there is a new message. If all that happened inside a mutation, it would be difficult to test and re-use.
export const sendMessage = ({commit}, payload) => {
  api('/messages', {
    method: 'post',
    body: payload
  }).then((message) => {
    commit(types.SENT_MESSAGE, message)
  })
}
There are getters but since they are just normal functions used to get stuff from state, I will skip them.
Vuex also supports modules which are used to split the state tree into smaller sections for more complicated applications. In this simple chat application, I am not going to use any modules.
All of them are combined into one single store and passed to the main application instance:
new Vue({
  el: '#app',
  router,
  store,
  template: '<App/>',
  components: { App }
})
Mutations⌗
In this chat app, there are 5 types of mutations
export default {
  [types.DELETED_MESSAGE] (state, {id}) {
    state.messages = state.messages.filter((msg) => {
      return msg.id !== id
    })
  },
  [types.FETCHED_MESSAGES] (state, messages) {
    state.messages = messages
    state.isFetchingMessages = false
  },
  [types.SENT_MESSAGE] (state, message) {
    addNewMessage(state, message)
  },
  [types.RECEIVED_MESSAGE] (state, message) {
    addNewMessage(state, message)
  },
  [types.FETCHED_ME] (state, me) {
    state.me = me
    state.isFetchingMe = false
  }
}
- DELETED_MESSAGE is called when a message is deleted, either by the current user or someone else
- FETCHED_MESSAGES is called when the app receives messages from the API
- SENT_MESSAGE is called after sending a new message, in order to append the new message to the current message list
- RECEIVED_MESSAGE is similar to SENT_MESSAGE, but for when receiving a new message from someone else
- FETCHED_ME is called after receiving the data of the current user
Actions⌗
Corresponding to those mutations are the following actions
export const fetchMessages = ({commit}) => {
  api('/messages').then((messages) => {
    commit(types.FETCHED_MESSAGES, messages)
  })
}

export const sendMessage = ({commit}, payload) => {
  api('/messages', {
    method: 'post',
    body: payload
  }).then((message) => {
    commit(types.SENT_MESSAGE, message)
  })
}

export const deleteMessage = ({commit}, payload) => {
  commit(types.DELETED_MESSAGE, { id: payload.id })
  api(`/messages/${payload.id}`, { method: 'delete' })
}

export const receivedMessage = ({commit}, payload) => {
  commit(types.RECEIVED_MESSAGE, payload)
}

export const deletedMessage = ({commit}, payload) => {
  commit(types.DELETED_MESSAGE, payload)
}

export const fetchMe = ({commit}, payload) => {
  api('/me', { prefix: 'auth' }).then((me) => {
    commit(types.FETCHED_ME, me)
  })
}
Actions usually follow the same pattern: call the API, then commit a mutation based on the data received. However, since this is a chat app receiving data in real time, there need to be some actions specifically for handling events from faye. For example, when sending a message, the API triggers a faye event
faye.publish('/messages', { event: 'receivedMessage', payload: message });
Then in the UI, I listen to the faye channel and dispatch appropriate actions
const client = new Faye.Client(config.get('faye.url'))

client.subscribe('/messages', ({event, payload}) => {
  store.dispatch(event, payload)
})
UI components⌗
Let’s take a look at the final UI first
I usually divide components into 2 categories, presentational and container components. They are also known as stateless and stateful components. Presentational components are responsible for rendering the actual UI, they are often nested inside of another container component. Data is passed down to presentational components by the parent (container) component. Presentational components usually communicate with their parent through the use of events.
There are 4 presentational components in this chat app
- CurrentUser renders the info about the current user or a button to log in
- MessageList renders the list of messages
- Message renders one message
- MessageInput renders the input for sending a new message
There is only 1 container component Main which does all the API calls and manages the state tree. In reality, there might be many container components, each handles one route/path or whatever unit you use to define a single page in the application.
Components⌗
A component defines a UI element; it can be as simple as an input or as complicated as a list of messages.
A component in Vue is just a normal Javascript object with proper attributes to define the behaviour of the component. Vue borrows the same props concept from React to indicate data passed to the component by its parent. There is also data, which is somewhat similar to state in the React world. However, when accessing data, Vue doesn't make any distinction between external and internal data; everything can be accessed through the component instance (this). It's convenient for development but might come back and bite me later on when I accidentally change props.
The style of defining a component in Vue is definitely my favourite. Everything is in one file
<template>
</template>

<script>
import moment from 'moment'

export default {
  name: 'Message',
  props: {
    message: Object,
    me: Object
  }
}
</script>

<style lang="scss" scoped>
</style>
Data binding is another strong point of Vue; everything is automatic, in a style somewhat similar to Angular's:

<a class="CurrentUser__card-avatar">
  <img :src="user.avatarUrl">
</a>
This binds this.user.avatarUrl to the src attribute of the img tag; every time avatarUrl changes, src is also updated.
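Under the hood, this kind of reactivity can be approximated with property interception; a toy sketch of the idea (this is not Vue's actual implementation):

```javascript
// Toy one-way binding: intercept writes to a data property and re-run a
// "render" callback on every assignment. Vue 2 does something similar with
// Object.defineProperty on each data field, plus dependency tracking.
function bind(obj, key, onChange) {
  let value = obj[key];
  Object.defineProperty(obj, key, {
    get: () => value,
    set: (next) => { value = next; onChange(next); }
  });
}

const user = { avatarUrl: '/a.png' };
let src = '';                                  // stands in for the DOM attribute
bind(user, 'avatarUrl', (v) => { src = v; });  // the "render": copy into src

user.avatarUrl = '/b.png';                     // a plain assignment...
console.log(src); // → '/b.png'                // ...updates the bound target
```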
It can go the other way as well (this is often known as 2-way data binding) using the v-model attribute. This can save a lot of time when doing form controls:

<input v-model="input" @keyup.enter="sendMessage">
There is a lot more when it comes to components; this blog post probably won't be able to cover everything. So I just write about things that I find interesting and somewhat important to mention.
Interaction between components⌗
There is one simple rule: “props down, events up”, this is true for most frameworks I have a chance to work with (Angular, React, Vue). Events here can mean an actual event fired and forgotten or a function call (in the case of React)
In MessageInput, I have this method to emit an event with the message typed by the user. This method is triggered when the user presses "Enter" (@keyup.enter="sendMessage"):
methods: {
  sendMessage (e) {
    const content = this.input.trim()
    if (!content) return

    this.$emit('send-message', {content})
    this.input = ''
  }
}
In the parent component, which is Main, it listens to the send-message event:
<message-input @send-message="sendMessage" :me="me"></message-input>
and acts accordingly (me here refers to the current user). The input is disabled if the user is a guest:
sendMessage ({content}) {
  this.$store.dispatch('sendMessage', {content})
}
Passing props down is straightforward; in Main, when rendering MessageList:
<message-list @delete-message="deleteMessage" :messages="messages" :me="me"></message-list>
messages is a computed attribute which uses the getMessages getter in the store to get the current messages:
computed: {
  messages () {
    return this.$store.getters.getMessages()
  }
}
Router⌗
Vue comes with vue-router
import Vue from 'vue'
import Router from 'vue-router'
import Main from 'pages/Main'

Vue.use(Router)

export default new Router({
  routes: [
    {
      path: '/',
      name: 'Main',
      component: Main
    }
  ]
})
There is only one route in this application, so the setup is very simple.
Root component⌗
The root component is usually in charge of bootstrapping the whole application. It does the initial requests to fetch data, loads all the routes, and many other things.
<template>
  <div id="app" class="container">
    <router-view></router-view>
  </div>
</template>

<script>
export default {
  name: 'App',
  mounted: function () {
    this.$store.dispatch('fetchMessages')
    this.$store.dispatch('fetchMe')
  }
}
</script>

<style>
html, body, #app {
  height: 100%;
  background-color: #f5f8fa;
  padding: 10px 0px;
}
</style>
Put everything together⌗
At the root, there is the App component, which loads chat messages and current user data from the API. At the root path /, the Main component is rendered. It receives the store instance created during initialization; from the store, Main gets the messages and current user data, then passes them down to MessageList and CurrentUser respectively.
MessageInput emits a send-message event every time the user presses "Enter" or clicks the button to send the message. Main listens to this event and dispatches the sendMessage action when it happens. The action then sends a POST /api/messages request to the API to create a new message and commits a SENT_MESSAGE mutation upon success.
MessageList just renders whatever messages it receives; each message is a Message component. This component also emits a delete-message event when the user wants to delete a message. This event propagates all the way to Main (through MessageList). In Main, upon receiving this event, it calls the deleteMessage action, which sends a DELETE /api/messages/:id request to the API. When it finishes, it commits a DELETED_MESSAGE mutation to update the state tree.
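The whole dispatch → API → commit → mutate loop described above can be sketched without any framework (the fake api and the ids below are stand-ins; Vuex adds reactivity and dev tooling on top of essentially this):

```javascript
// Minimal Vuex-style loop: dispatch(action) -> async work -> commit(mutation)
// -> new state. Mutation names mirror the post; fakeApi replaces the HTTP call.
function createStore({ state, mutations, actions }) {
  const store = {
    state,
    commit(type, payload) { mutations[type](state, payload); },
    dispatch(type, payload) { return actions[type]({ commit: store.commit }, payload); }
  };
  return store;
}

const fakeApi = (path, opts) =>
  Promise.resolve({ id: 1, content: opts.body.content }); // stand-in for api()

const store = createStore({
  state: { messages: [] },
  mutations: {
    SENT_MESSAGE(state, message) { state.messages.push(message); },
    DELETED_MESSAGE(state, { id }) {
      state.messages = state.messages.filter(m => m.id !== id);
    }
  },
  actions: {
    sendMessage: ({ commit }, payload) =>
      fakeApi('/messages', { method: 'post', body: payload })
        .then(message => commit('SENT_MESSAGE', message))
  }
});

store.dispatch('sendMessage', { content: 'hello' })
  .then(() => console.log(store.state.messages.length)); // → 1
```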
That is pretty much everything for this simple chat app. I skip error handling to make everything simpler to follow since this is just a demo application for me to learn Vue.
Conclusion⌗
There is a lot to talk about with Vue; this blog post probably can't cover everything. But my impression of Vue is extremely positive: everything just works, with no complicated setup (actually, vue-cli does all the hard work for me)
The source code for this part can be found at
I also changed the api and faye services to make them work with the UI:
- Source code for api can be found at
- Source code for faye can be found at
Next, I am going to write about the deployment process. The goal is to deploy this app as 3 separate services (api, faye and web) using dokku.
How to monitor ML runs live: step by step guide¶
Introduction¶
This guide will show you how to:
Monitor training and evaluation metrics and losses live
Monitor hardware resources during training
By the end of it, you will monitor your metrics, losses, and hardware live in Neptune!
Before you start¶
Make sure you meet the following prerequisites before starting:
Have Python 3.x installed
Have TensorFlow 2.x with Keras installed

Step 1: Create a basic training script¶

As an example, I'll use a script that trains a simple Keras model on the MNIST dataset.
Note
You don’t have to use Keras to monitor your training runs live with Neptune.
I am using it as an easy to follow example.
There are links to integrations with other ML frameworks and useful articles about monitoring in the text.
Create a file train.py and copy the script below.
train.py
import keras

PARAMS = {'epoch_nr': 100,
          'batch_size': 256,
          'lr': 0.005,
          'momentum': 0.4,
          'use_nesterov': True,
          'unit_nr': 256,
          'dropout': 0.05}

mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = keras.models.Sequential([
    keras.layers.Flatten(),
    keras.layers.Dense(PARAMS['unit_nr'], activation=keras.activations.relu),
    keras.layers.Dropout(PARAMS['dropout']),
    keras.layers.Dense(10, activation=keras.activations.softmax)
])

optimizer = keras.optimizers.SGD(lr=PARAMS['lr'],
                                 momentum=PARAMS['momentum'],
                                 nesterov=PARAMS['use_nesterov'])

model.compile(optimizer=optimizer,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train,
          epochs=PARAMS['epoch_nr'],
          batch_size=PARAMS['batch_size'])
Run training to make sure that it works correctly.
python train.py
Step 2: Install psutil¶
To monitor hardware consumption in Neptune you need to have psutil installed.
pip
pip install psutil
conda
conda install -c anaconda psutil
neptune.create_experiment(name='great-idea')
This opens a new “experiment” namespace in Neptune to which you can log various objects.
Step 5. Add logging for metrics and losses¶
To log a metric or loss to Neptune, use the neptune.log_metric method.
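A runnable sketch of the pattern (the local log_metric below is a stand-in with the same shape as the client call, so the loop runs without a Neptune account; in a real script each call would be neptune.log_metric(...)):

```python
# Stand-in for neptune.log_metric so the sketch runs anywhere. The real
# client has the same shape: log_metric(name, value) appends one point to
# the named series, and calling it once per epoch builds the live chart.
logged = {}

def log_metric(name, value):
    logged.setdefault(name, []).append(value)

for epoch in range(3):
    loss = 1.0 / (epoch + 1)     # pretend training loss
    acc = 1.0 - loss / 2         # pretend accuracy
    log_metric('loss', loss)     # in a real run: neptune.log_metric('loss', loss)
    log_metric('accuracy', acc)

print(logged['loss'])  # → [1.0, 0.5, 0.3333333333333333]
```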
Many frameworks, like Keras, let you create a callback that is executed inside of the training loop.
Now that you know all this, you can put the Neptune logging into a Keras callback.
Steps for Keras
Create a Neptune callback.
class NeptuneMonitor(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        for metric_name, metric_value in logs.items():
            neptune.log_metric(metric_name, metric_value)
Pass the callback to the model.fit() method:
model.fit(x_train, y_train,
          epochs=PARAMS['epoch_nr'],
          batch_size=PARAMS['batch_size'],
          callbacks=[NeptuneMonitor()])
Note
You don’t actually have to implement this callback yourself and can use the Callback that we created for Keras. It is one of many integrations with ML frameworks that Neptune has.
Tip
You may want to read our article on monitoring ML/DL experiments:
Step 6. Run your script and see results in Neptune¶
Run training script.
python train.py
If it worked correctly you should see:
a link to Neptune experiment. Click on it and go to the app
metrics and losses in the Logs and Charts sections of the UI
hardware consumption and console logs in the Monitoring section of the UI
What’s next¶
Now that you know how to create experiments and log metrics you can learn:
See what objects you can log to Neptune
See how to connect Neptune to your codebase
Other useful articles:
Portal talk:Research
Contents
- 1 New versions of the Research Portal
- 2 Research tasks
- 3 Harvard Liberal Arts Faculty Votes to Distribute Research Free
- 4 Proposals
- 5 Wiki Research could be Sold for Fundraising
- 6 Categories
- 7 research initiative
- 8 How to display and coordinate researches
- 9 Wiki Research Ideas
New versions of the Research Portal[edit]
If you want to experiment with a radical new version of Portal:Research you can work at Portal:Researchnew.
- Thanks for the above comment. I looked and it seemed very similar :-) --McCormack 11:56, 14 April 2008 (UTC)
- For the clueless: Portal:Researchnew started as an exact copy of the Portal:Research. It is an available workspace for planning new content for this portal. --JWSchmidt 15:38, 24 June 2008 (UTC)
If you update featured content on this portal, make sure that you update Portal:Research/Featured. --JWSchmidt 14:41, 27 January 2007
I just made this comment on the Researchnew portal, I just noticed that the graph of the number of pages online at Wikiversity only encompasses a page count before December 2006. Shouldn't a new version be availible? --Sgutkind 21:54, 14 April 2007 (UTC)
- These are not very good graphs. Here is the most recent info from stats.wikimedia.org:
--JWSchmidt 22:12, 14 April 2007 (UTC)
Research tasks[edit]
I was coming here to add participation at to the list of "Research tasks", but I don't see any such list. Should we add one to the new design?--Rayc 01:25, 2 September 2007 (UTC)
Harvard Liberal Arts Faculty Votes to Distribute Research Free[edit]
Harvard Liberal Arts Faculty Votes to Distribute Research Free .. sort of interesting item related to research. Would there be a better place to put items like these? --Remi 09:59, 13 February 2008 (UTC)
Proposals[edit]
There should be a process to get research topics approved by peers. This would allow for better organization of research and collaboration among user researchers. Proposals are often used to get funding for research; Wikiversity should use proposals to get pages made for research. This will eliminate all bogus research and increase the credibility of the research. Wbeadle3 17:26, 7 September 2010 (UTC)
- See this page at beta Wikiversity and Wikiversity:Review board. --JWSchmidt 01:42, 8 September 2010 (UTC)
Wiki Research could be Sold for Fundraising[edit]
- Not a bad idea, though I know little about the details. The Jade Knight 07:10, 25 March 2008 (UTC)
- I love this idea, but how will funds be distributed? Wbeadle3
Radical new version (April 2008)[edit]
General ideas of the new version: (1) update antiquated references, (2) improve visual appear, (3) improve ergonomy, (4) improve overview of and access to content (category tree system), (5) incorporate some dynamic rotating content (features, quotes), (6) visually connect the portal into the other five top-level "resources by level" portals. --McCormack 14:40, 14 April 2008 (UTC)
For the moment I have left out the content of Portal:Research/Research news and Portal:Research/Did you know. If someone reckons this content needs adding, it's still there on the subpages. At the moment this page seems a little overloading, so some trimming seemed in order. --McCormack 11:56, 14 April 2008 (UTC)
I have combined the following pages under the heading of "research administration" to try and simplify things a little: Portal:Research/WikiProjects, Portal:Research/Categories, Portal:Research/Selected biography. In the future it might be a good idea to actually consolidate the content of these pages into something more digestible for a new researcher trying to get to grips with what can and cannot be done at Wikiversity. Incidentally, the selected biography page had nothing to do with biographies! --McCormack 12:14, 14 April 2008 (UTC)
Archive of older content[edit]
To access old content (i.e. pre-radical revision), you can use the page history, or alternatively the redundant subpages listed above. The subpages have mostly been left alone - i.e. quite a bit of "productive forking" has been done. Other subpages have simply been incorporated intact into the new design. --McCormack 12:37, 14 April 2008 (UTC)
Categories[edit]
Category:Research needs a thorough tidy-up. One useful recategorisation that could be performed is changing the category to Category:Research project for specific projects. -- Jtneill - Talk 15:55, 23 April 2008 (UTC)
research initiative[edit]
Before there was a WikiMedia project (Wikiversity) that was open to original research, wiki participants with an interest in doing research on wikis congregated at the Meta wiki. It would make sense to bring those folks to Wikiversity. --JWSchmidt 15:26, 24 June 2008 (UTC)
How to display and coordinate researches[edit]
- The hierarchical WV file organization, including Portal, School, Project, Topic, Category, etc., is far more complex than the simple WP one, perhaps too complex for simpletons like me, hence a great barrier to maximal process and progress in concert.
- ". --JWSchmidt 14:02, 6 September 2010 (UTC)
- To be or not to be "correct"
That is not the question.
For I'm supposed to be invited to the brainstorming, hence have to feel so free that even simpletons and newbies should rarely be upset, opposed, discouraged, shamed or blamed for their folly, but just welcome to talk anything hopefully good, in good faith. Obviously I mistook Project as a namespace. Should you or anyone take that so seriously here, JWS? The guest here should speak of her opinion rather than interfere with others disruptively. The hostess should take whatever is good and just ignore the rest. Pretentious, few would come to help her. She must behave like a business girl, as it were!
The "content development' is any wiki's business. What exactly do you mean by that? You may be too clever to understand how complicated or confused the hierarchical organization of WV looks to me as a simpleton as I confessed. Should Topic and School not be structural or complicated at all, Category would be just as good as Topic and School, which thus look so redundant as to obstruct the process and progress! -- KYPark [T] 01:29, 7 September 2010 (UTC)
- "Should you or anyone take that so seriously here, JWS?" <-- KYPark, I don't know what you mean by "that". If you mean the claim that Wikiversity has a namespace organization "far more complex" than that of Wikipedia, then I take such claims seriously and I provided links to a page that explains how the topic and school namespaces can help organize content development efforts. If you follow those two links then you can find additional links to a page that explains what is meant by "content development". "What exactly do you mean by that?" <-- I mean people editing collaboratively to develop learning resources. I'm not surprised when people are confused about the school and topic namespaces. It would be useful if those namespaces and content development were mentioned on the main page since 99.9999% of Wikiversity remains to be developed. "Should you or anyone take that so seriously here, JWS?" <-- KYPark, if by "that" you mean confusion about multiple ways to use the word "project", including the project namespace, then I take seriously the danger that people can be confused when a word is used in several ways. There are many kinds of projects at Wikiversity. At Wikipedia, content development projects are called "WikiProjects". Is that any less confusing than making use of the school and topic namespaces to organize "content development projects"? KYPark, you say, "Category would be just as good as Topic and School", but when Wikiversity started it was organized around schools and a decision was made to create the school and topic namespaces rather than just try to make use of categories like "Directories of content development projects" and "Content development projects". That was a decision similar to deciding to drive on the right or left side of a road. Is it really worth arguing about the benefits from driving on one side of the road or the other? --JWSchmidt 07:08, 7 September 2010 (UTC)
- By "that" I mean the fact that I mistook Project for a namespace so that neither of your understandings is relevant, I'm afraid. -- KYPark [T] 08:21, 7 September 2010 (UTC)
- I'm not really arguing for the abolition of School and Topic namespaces, but pointing out the complication thereby which might be hinder the novice editor in particular from organizing resources easily and freely. Anyone may take this opportunity to review them in this perspective, which is also a self-referential research in kind! -- KYPark [T] 08:21, 7 September 2010 (UTC)
- "hinder the novice editor in particular from organizing resources" <-- KYPark, what do you mean by "the novice editor"? A "novice editor" who has not participated in collaborative development of learning resources? A "novice editor" who arrives from Wikipedia and cannot be bothered to learn what is different between Wikipedia and Wikiversity? A "novice editor" who feels free to delete and disrupt the work of Wikiversity community members without first helping to create any learning resources? Such editors are hindered by much more than the existence of the school and topic namespaces. I'm highly skeptical about the ability of any novice editor to be "organizing resources" when they have never demonstrated any understanding of what Wikiversity is or how learning resources are collaboratively developed. KYPark, in exactly what sense is "organizing resources" a job for "novice editors"? --JWSchmidt 14:17, 7 September 2010 (UTC)
- JWSchmidt, your argument against novice editors sounds too challenging for me. Sooner or later, however, I may try to address it very carefully perhaps at #Who notices novices? (under construction). Meanwhile, knowing is one thing, teaching is another, editing is still another, and so on. A novice editor is not always a novice of learning. Most elders of high literacy in scholarship suffer high illiteracy in computing. To use that literacy is to ease that illiteracy as far as possible, by easing computing. Such would be the case with simpletons and newbies. The easiest is editing on the main namespace, regardless of School, Topic, and the like, which others would better bother from the file-organizational perspective. This would be a maximal collaboration by division of work as per varied capacity or competence. You'd better not long for Leibnizean talents or polymaths. Or, you'd suffer from the absolute manpower shortage. Never forget WV is far less popular than WP anyway. (BTW, note that your comment beginning with ":" would destroy my numbering beginning with "#". Thanks.) -- KYPark [T] 02:21, 8 September 2010 (UTC)
- We may better not isolate information searching, understanding, learning, researching, and teaching from one another, but integrate all into the same thread while highlighting the point of intensive research required. (Alternatively, a link to the research page may be added to the headline of the other relevant pages.)
- . --JWSchmidt 14:02, 6 September 2010 (UTC)
- Your advice is not welcome at all, for again this is brainstorming. You just take whatever good there is if any fortunately. JWS, you'd better talk about your opinion to the headline invitation than such opposition to others that is just improper, if not uncivil, unreasonable or unseasonable on this occasion. -- KYPark [T] 01:29, 7 September 2010 (UTC)
- It would be very hard to describe or prescribe a research agenda from scratch as well as an information search query from the "anomalous state of knowledge" (ASK) as pointed out by Nicholas J. Belkin of the UCL school of, so to speak, uncertain legacy, say, hypo-text.
- An ambitious goal setting or seeking would be implausible at first. Another way of reading the intra- or inter-text, hypo-text or subtext should also be taken seriously as sort of research.
- Editors would better write up to show up their research competence or qualification on the user page. The red link to it is a shame.
- "The red link to it is a shame" <-- What red link? --JWSchmidt 14:02, 6 September 2010 (UTC)
- The research coordinator should do their best.
-- KYPark [T] 13:08, 6 September 2010 (UTC)
- . "The red link to it is a shame" <-- What red link? --JWSchmidt 14:02, 6 September 2010 (UTC)
- As a devoted contextualist, I dislike you drawing the points at issue out of context, as I elsewhere complained and asked for your cooperation or adaptation to my texture. (As a Darwinist you'd know how vital it is.) So I have to post each point back or close to the context, as above. -- KYPark [T] 01:29, 7 September 2010 (UTC)
- I'm proud to have participated in this brainstorming alone so far, whether wisely or foolishly. -- KYPark [T] 02:10, 7 September 2010 (UTC)
- My contextualist information strategy
- search and research of information
- reference thereto and inference therefrom
- teaching and learning
by not just roughly doing but toughly
- linking and sinking
the relevant, in context in concert or in consilience, hence the names
- hypertext and hypotext, respectively!
- Note
- The file organization, if too complicated, may become a curse rather than a blessing. For simple is beautiful! And it appears self-manifest why and how the relevant be well coordinated within and without research projects.
-- KYPark [T] 03:52, 7 September 2010 (UTC)
Ah! As far as hyperlinking is concerned, see Category:Linda Smith 1980#Literature and its edit box to find how easily the relevant are bilaterally linked. See also User:KYPark/Hi Ottava Rima/Colloquium#KYPark 2 and the other relevant here and there!
-- KYPark [T] 04:24, 7 September 2010 (UTC)
Wiki Research Ideas
Hello: I'm starting to collect ideas from a long thread from September 2012 in the wiki-research-l list, in which many different proposals related to Open Access Journals and the "wiki way" of doing research were circulated. I will start adding content to meta:Wiki Research Ideas, please feel free to join me! Arided (talk) 13:04, 20 September 2012 (UTC) | https://en.wikiversity.org/wiki/Portal_talk:Research | CC-MAIN-2018-30 | refinedweb | 2,335 | 52.6 |
Introduction
In this tutorial, we will check how to draw circles in an image with OpenCV and Python.
One common task when using OpenCV is detecting regions of interest with some computer vision algorithm. So, it makes sense for the programmer to be able to highlight those regions of interest in some way.
Thus, OpenCV offers a decent amount of drawing functions, which we can leverage to draw forms in images. So, as already mentioned, in this tutorial we will check how to draw circles in images.
One important thing to highlight is that when drawing in the image the coordinates referential works as shown in figure 1. The origin is at the top left corner of the image and we specify the x coordinate from left to right and the y coordinate from the top to the bottom. Also, the coordinates are specified in pixels.
Figure 1 – Referential when drawing in OpenCV.
This tutorial was tested on version 3.2.0 of OpenCV.
The code
As usual, we start by importing the cv2 module, so we have access to all the OpenCV functionalities we will need.
import cv2
Then, we need to read the image in which we want to draw some circles. To do it, we simply need to use the imread function, passing as input the path of the image in the file system.
image = cv2.imread('C:/Users/N/Desktop/Test.jpg')
Now, to draw a circle on the image, we simply need to call the circle function of the cv2 module. Note that this function will not return a new image but rather draw the circle on the image we will pass as input.
But, as long as we don’t save this edited image, the circle will be drawn in the image we have in memory and not in the original file we have read, so these functions can be used safely.
As first input, the circle function receives the image on which we want to draw the circle. As second, it receives a tuple with the x and y coordinates of the center of the circle.
As third argument, we need to pass the radius of the circle and as fourth, we need to specify another tuple with the color of the circle, in BGR (Blue, Green and Red) format.
These are the mandatory parameters we need to pass in order to draw the circle, but there are some optional ones that assume default values if not specified.
So, we will first draw a green circle at coordinates x = 100 and y = 0, and with a radius of 25 pixels.
cv2.circle(image,(100, 0), 25, (0,255,0))
Next, we will draw a circle at coordinates x= 0 and y = 100, also with a radius of 25 pixels. Its color will be red.
cv2.circle(image,(0, 100), 25, (0,0,255))
Note that in the previous calls, we did not specify the thickness of the circle outline. This value is actually one of the optional parameters we can pass to the circle function, and it defaults to 1 pixel. So, the previous two circles will have a thickness of 1 pixel.
Just to illustrate the use of this parameter, we will now draw a circle at coordinates x = 100 and y = 100, a radius of 50 and with a blue color. Additionally, we will pass a fifth parameter, which will correspond to the thickness of the circle outline, with a value of 3 pixels.
cv2.circle(image,(100, 100), 50, (255,0,0), 3)
Finally, we will display the image with the drawn circles and wait for a key event. When that event happens, we will destroy the window where the image was being displayed and finish the execution. The final source code is shown below and already includes these last calls.
import cv2 image = cv2.imread('C:/Users/N/Desktop/Test.jpg') cv2.circle(image,(100, 0), 25, (0,255,0)) cv2.circle(image,(0, 100), 25, (0,0,255)) cv2.circle(image,(100, 100), 50, (255,0,0), 3) cv2.imshow('Test image',image) cv2.waitKey(0) cv2.destroyAllWindows()
Testing the code
To test the code, simply run the previous program in your Python environment of choice, pointing it to an existing image in your file system. You should get an output similar to figure 2.
Figure 2 – Drawing circles in an image with OpenCV.
As can be seen, all the three circles were drawn in the specified coordinates, with the corresponding radius, colors and thicknesses.
Note that the first two circles were centered so that their top and left halves, respectively, fall outside the image borders, which is why the circles appear cut off.
Created on 2021-11-06 04:02 by tim.peters, last changed 2021-11-11 03:22 by tim.peters.
A number of contexts allow specifying a tuple of arguments to be passed later to a function. The Thread constructor is a fine example, and happened to come up (again! for me) here today:
This often confuses especially newbies, because the function they intend to parallelize often takes only a single argument, and Python's syntax for a 1-element tuple actually _requires_ parentheses in the context of an argument list, with a naked trailing comma:
t = threading.Thread(target=access, args=(thread_number,))
It "looks weird" to people.
I'm not suggesting to change that, but instead to officially bless the workaround I've seen very often in real code: use a list instead.
t = threading.Thread(target=access, args=[thread_number])
Nobody scratches their head over what that means.
CPython's implementations typically couldn't care less what kind of sequence is used, and none that I'm aware of verify that it's specifically a tuple. The implementations just go on to do some simple variation of
self.target(*self.args)
Tuple or list makes no real difference. I'm not really keen to immortalize the "any sequence type whatsoever that just happens to work" implementation behavior, but am keen to promise that a list specifically will work. A lot of code already relies on it.
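A minimal sketch of the equivalence described above (the function and argument names here are illustrative):

```python
import threading

results = []

def access(thread_number):
    results.append(thread_number)

# Tuple form: the trailing comma is required for a 1-element tuple.
t1 = threading.Thread(target=access, args=(1,))
# List form: reads naturally, and works just as well in practice.
t2 = threading.Thread(target=access, args=[2])

for t in (t1, t2):
    t.start()
    t.join()

print(sorted(results))  # → [1, 2]
```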
There is a difference if you modify the arguments list after creating a thread.
args = [1]
t = threading.Thread(target=access, args=args)
args[0] = 2
t.start()
Would it call access(1) or access(2)?
Serhiy, we haven't documented such stuff, and, indeed, I've been burned by it but much more often in the case of multiprocessing.Process. But note that I'm SWAPPING the order of your last two lines. In the original, you mutated the argument _before_ starting any parallel work, so "of course" the new worker will see the mutation:
def access(xs):
    print(xs)

args = ([1],)
t = multiprocessing.Process(target=access, args=args)
t.start() # start parallel work before mutating
args[0][0] = 2
Does that print [1] or [2]? Passing a tuple in no way prevents mutations to mutable objects the tuple contains.
When the docs are silent, "implementation defined" rules. Whether you use threading or multiprocessing in the altered example above, the result printed simply isn't defined - it's a race between the main thread doing the mutation and the "parallel part" accessing the mutated object.
This is subtler in the multiprocessing context, though, because the relevant "parallel part" is really the hidden thread that pickles the argument list to send to the worker. That effectively makes a deep copy. But it's still a race, just not one visible from staring at the Python code. In the threading case, no copies are made.
Changed stage back to "needs patch", since Raymond appears to have closed his PR. Raymond, what's up with that? | https://bugs.python.org/issue45735 | CC-MAIN-2022-05 | refinedweb | 501 | 56.76 |
- Author: youell
- Posted: July 17, 2008
- Language: Python
- Version: .96
- Tags: text javascript html snippet
- Score: -2 (after 4 ratings)
Help me get better! If you vote (either way) please leave a comment if you have time and say what was good or bad. I appreciate any and all feedback. Thanks!
I keep finding places in my apps where I need an isolated snippet of text that can periodically be changed from the admin interface. Most often it's html but sometimes it's text, javascript, or css.
Use it like so: (Assuming this snippet lives in snippy_snip/models.py and there is a snippet named "Welcome Message" in the database)
    from snippy_snip.models import snip

    msg = snip("Welcome Message")
Or, you might populate a parameter hash for a template:
    def showpage(request):
        params = {
            'welcome': snip('Welcome Message'),
            'video1': snip('Video 1'),
            'NavHeader': snip('Nav.SectionHeader'),
        }
        return render_to_response("main.html", params)
For clarity, params might look something like this:
welcome -> "Welcome to our site. Please use the menu on the left..."
video1 - > a YouTube snippet
NavHeader -> Some HTML which comprises the top of a navigation menu.
This is a very simple bit of code but I've found it very useful. It isn't intended for instant changes... Your snippets will cache like anything else, which may cause confusion if you expect immediate changes. And it's probably not great for a high traffic site, but for my moderate traffic sites and workgroup apps I've found it useful.
(This code was created for 0.96, but I'm working to bring it into alignment with the latest svn version of Django, see comments.)
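The Snippet model itself is not shown on this page. Stripped of the ORM, the lookup-with-fallback pattern that snip() implements can be sketched in plain Python — the dict below stands in for the database table, and everything except the snip name is illustrative:

```python
# Stand-in for the Snippet database table: title -> snippet body.
_SNIPPETS = {
    "Welcome Message": "Welcome to our site. Please use the menu on the left...",
}

def snip(title):
    """Return the body of the snippet named `title`, or "" if it is missing.

    The real version presumably wraps an ORM lookup such as
    Snippet.objects.get(title=title) in a try/except, as the crude
    exception handling mentioned in the comments suggests.
    """
    try:
        return _SNIPPETS[title]
    except KeyError:
        return ""

print(snip("Welcome Message"))        # prints the stored welcome text
print(repr(snip("No Such Snippet")))  # → ''
```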
first of all, __str__ method in model long ago was replaced by __unicode__.
This try: except: is horrible. It should looks like
Any way. this ^^ is not so smart approach.
err
@KpoH, thanks for the feedback!
I'm on 0.96, which didn't call __unicode__ by default. But I've created a workaround which I think should be forward compatible. Tell me what you think:
Works fine on 0.96, but I don't have a trunk version to test against at the moment, so I only assume it would work with the latest svn.
As for the exception handling, yeah it's crude, but I'm happy with it in this particular situation.
and please also add unique=True to titles ;)
@burly - Done. Thank you!
This seems kinda like (almost exactly like) gettext in the i18n module. You might also end up with your Snippets confusing you at some point in the future when you start using snip("Submit") and decide that half of your buttons should read "Send" and half "Save" (as happened to me when translating to Japanese).
I like the idea but I'm not convinced its maintainable or even sticking to the DRY principle.
Is there a reason this can't be achieved with gettext/i18n and something like the rosetta interface?
Hi @aarond10ster! I'm not entirely sure I understood what you're going for, but I took a shot. Let me know how bad I've misunderstood you. :)
Mechanically, I think yes you could use Rosetta. It would be a bit like using a screwdriver as a hammer though. From what I've gathered you'd have to restart the webserver every time a user changed text and the .MOs were recompiled, but technically I think it would work. Of course if you tried to use Rosetta for actual localization at the same time that would probably get awkward fast.
To put it all in perspective, I use this code on small sites where the clients/users want to change information on a page frequently and they have a limited number of spots to change.
Let me know how I did with understanding you. Thanks!
Oh, and I should add some more clarification. SnippySnip is intended to work more like server-side includes or a hash table than anything (like localization) at this point.
How to Arbitrage
Looking for a risk-free return? Arbitrage is the way to go. It's the process of simultaneously buying an asset at a low price and selling essentially the same asset at a higher price, locking up the difference as profit. Warren Buffett, arguably the best investor in the world, has used arbitrage to generate average annualized rate of return of 81.28% from 1980 to 2003 with very low risk.[1] Learn below how you too can get on the action.
Steps
- 1. Understand the different types of arbitrage. Pure arbitrage, generally available only to market makers, is the purchase of securities on one market for immediate resale on another market at a higher price, earning a risk-free profit. Risk arbitrage, available to retail investors, entails some risk and involves purchase of a security and simultaneous sale of a similar security at a higher price in anticipation of convergence of value between the two securities.
- 2. Identify arbitrage opportunities. While pure arbitrage opportunities are scarce in efficient markets, risk arbitrage opportunities exist all over the world in diverse financial markets, including stocks, bonds, funds, currencies, commodities, and derivatives.
- Among the most profitable arbitrage opportunities are mergers and acquisitions, whereby a stock of a company being acquired trades at a discount to the offer price.
- Another type of arbitrage is liquidation arbitrage, the purchase of undervalued securities at prices below their estimated liquidation values. Liquidation value is the value owners can receive if they were to give up the business and sell off the assets and pay off the liabilities. Net current asset value (NCAV), the difference between current assets and total liabilities, can be used as a rough approximation of liquidation value.[2]
- Pairs trading exploits the difference between two very similar companies in the same industry that have historically been highly correlated, for example, Coke and Pepsi. When the two company's values diverge to a historically high level you can go long on the undervalued company and short on the overvalued one, and profit when their values converge, as history has shown that they eventually will.
- 3. Determine the transactions needed to realize your arbitrage profit.
- Example 1: If Company A offers to buy Company B for $10 per share of Company B stock, with acquisition to close in 6 months, and Company B stock currently trades at $9, you simply buy Company B stock at $9 and realize a $1 profit (difference between $10 offer and $9 paid) when the acquisition closes. When the deal closes in 6 months, then you will have made an annualized 22% return ($1 profit divided by $9 investment gives 11% in a 1/2 year period, then multiply that by 2, which shows an annualized 22% gain).
- Example 2: If Company A offers to buy Company B for $5 and 1 share of Company A stock per share of Company B stock, and Company A stock currently trades at $5 while Company B stock currently trades at $9, you see that $5 and 1 share of Company A stock cost a total of $10, while 1 share of Company B stock costs $9. So you buy low (the Company B stock at $9) and sell high (the Company A stock at $5), locking in $1 arbitrage profit and 22% annualized return when the deal closes after 6 months.
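The arithmetic behind both examples, restated as a checkable sketch (this is the simple, non-compounded annualization the text uses):

```python
def annualized_return(profit, cost, years):
    """Profit over cost, scaled to a one-year horizon (no compounding)."""
    return (profit / cost) / years

# Example 1: buy Company B at $9, receive $10 when the deal closes in 6 months.
print(round(annualized_return(1.0, 9.0, 0.5), 3))  # → 0.222, i.e. ~22% a year
```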
- 4. Evaluate the risk in a risk arbitrage. You can use Benjamin Graham's risk arbitrage formula to determine optimal risk/reward: Expected annual return = (S×G − L×(100% − S)) / (Y×P), where
- S is the expected chance of Success (%).
- P is the current price of the security.
- L is the expected loss in the event of a failure (usually difference between current price after announcement of an event and original price before announcement of the event).
- Y is the expected holding time in years (usually the time until the merger takes place).
- G is the expected gain in the event of a success (usually takeover price less current price).
- To use the example above, let's assume the chance of success is 90% (a friendly takeover with no looming regulatory concerns). Current price is $9. Let's assume Company B stock traded at $5 per share prior to merger announcement. The expected loss is the difference between current price of $9 and the original price of $5 before merger announcement, or $4. The holding time in years is 0.5 (6 months). The gain is the difference between the $10 takeover price and the $9 current price, or $1. Plugging these inputs into the equation, the expected annual return = (90%*$1 - $4*(100%-90%))/(0.5*$9) = 11%.
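Graham's formula translates directly into code; a sketch using the worked numbers from the text:

```python
def expected_annual_return(S, G, L, Y, P):
    """Graham's risk-arbitrage formula: (S*G - L*(1 - S)) / (Y * P).

    S: chance of success (fraction), G: gain on success, L: loss on failure,
    Y: holding time in years, P: current price of the security.
    """
    return (S * G - L * (1 - S)) / (Y * P)

# 90% success odds, $1 gain, $4 loss, 6-month holding period, $9 price.
r = expected_annual_return(S=0.90, G=1.0, L=4.0, Y=0.5, P=9.0)
print(round(r, 3))  # → 0.111, i.e. about 11% per year
```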
- 5. Decide whether the expected annual return is acceptable, by comparing that to your personal discount rate. Your personal discount rate is the rate of return you can expect by investing your money elsewhere such as the stock market and represents an opportunity cost of capital. Let's suppose your discount rate is 10%, in line with historical market performance. Since the expected annual return of 11% from the merger is greater than your discount rate, you would decide to partake in the merger arbitrage.
- 6. Execute your transactions. Be sure to act quickly. The more efficient the market, the more quickly you must act.
- For the brave, use leverage in a margin account to amplify the potential return from an arbitrage. For example, by using 2 to 1 leverage in the example above, you can turn a 22% return assuming merger completion into 44% (or expected annual return from 11% to 22%). Leverage is double edge sword, however, as it could also magnify your losses if prices continue to diverge instead of converging as expected, so use leverage carefully.
Tips
- Do an online search for mergers and acquisitions and be alerted when new opportunities arrive.
- Diversify to avoid arbitraging deals where the chance of success is all or nothing. Make two or more smaller arbitrage deals to avoid having 100% of the risk in one larger venture.
Warnings
- For mergers and acquisitions, be aware of the risk that the deal may take longer than expected to close or even fall apart, thereby diminishing or even eliminating any potential return.
- Watch out for transaction fees. Use low or no-fee brokers to minimize transaction fees. Your arbitrage profit is net of transaction fees, so do not allow transaction fees to eat away your profit.
References
- ↑ Buffett, Mary, and David Clark. Warren Buffett and the Art of Stock Arbitrage. New York: Simon & Schuster, 2010.
- ↑
Article Info
Categories: Investments and Trading
Thanks to all authors for creating a page that has been read 9,470 times.
03 February 2010 23:21 [Source: ICIS news]
HOUSTON (ICIS news)--Port Freeport on the Texas Gulf coast has been closed to night vessel traffic due to rough sea conditions, shipping sources said on Wednesday.
Port officials did not return calls, but the Freeport-based Brazos Pilots Association issued a statement saying the port would be closed for all night traffic.
The statement said the pilots would review conditions again at 19:00 hours local time (1:00 GMT) to determine if the weather was favourable to begin vessel movements.
A dispatcher for the Houston Pilots group said seas in the Gulf of Mexico near Freeport were rough.
"It's pretty rough right now and supposed to get worse," the dispatcher said.
State law calls for large vessels coming into
Two large chemical plants are located in Freeport.
For more on Dow Chemical and BASF. | http://www.icis.com/Articles/2010/02/03/9331600/texas+port+closed+to+vessel+traffic+due+to+rough+seas.html | CC-MAIN-2013-20 | refinedweb | 143 | 65.35 |
On Tue, 2011-11-15 at 15:22:46 +0100, Raphael Hertzog wrote:
> On Mon, 14 Nov 2011, Andrew Stormont wrote:
> > diff --git a/lib/dpkg/md5.c b/lib/dpkg/md5.c
> > index 3da18c9..5e9f311 100644
> > --- a/lib/dpkg/md5.c
> > +++ b/lib/dpkg/md5.c
> > @@ -15,6 +15,8 @@
> >   * MD5Context structure, pass it to MD5Init, call MD5Update as
> >   * needed on buffers full of bytes, and then call MD5Final, which
> >   * will fill a supplied 16-byte array with the digest.
> > + *
> > + * Copyright © 2011 Nexenta Systems Inc. All rights reserved.
> >   */
>
> That file is in the public domain and it's best if we keep it that way, so
> please accept the same and don't claim any copyright on it.

This does not matter any more given the pushed changes, but in addition I
don't think these changes are copyrightable, as they are just a symbol
rename (at least according to the GNU maintainers doc).

> Hum, C99 is not a requirement to build dpkg. Some features are required
> but those standard types are currently not part of it (see README and
> doc/coding-style.txt). So maybe it's better to add the required typedefs
> specifically for Solaris?

They are assumed to be present, and checked by dpkg-compiler.m4. Those are
not on the doc, because they don't really need compiler support, and can
be easily mapped to other types by configure.

> That said I don't really know why Guillem did not mandate C99 in its
> entirety.

Because C99 is not yet fully implemented by many compilers (not even gcc).

> > +#ifdef HAVE_SYS_CDEFS
> > #include <sys/cdefs.h>
> > +#endif
>
> So this test should probably be changed into something else. Not sure
> what though... this header is provided by glibc but is not glibc specific
> apparently.
>
> If we can't find anything better, we could go with this I guess:
> #if !defined(__sun)
> #include <sys/cdefs.h>
> #endif

It's a BSDism and it's not needed.
> > @@ -31,6 +33,7 @@
> >  # define OSHurd
> > #elif defined(__sun)
> >  # define OSsunos
> > +# undef HAVE_KVM_H
> > #elif defined(OPENBSD) || defined(__OpenBSD__)
> >  # define OSOpenBSD
> > #elif defined(hpux)
>
> Why? Does kvm.h exist on Solaris and is it something totally unrelated?

kvm implementations vary slightly from system to system, given that they
expose kernel internal structures to user-land.

regards,
guillem
One of the welcome additions to Java language in Java 5 release is the static import declaration. static import works in the same way as traditional import declaration but it imports only the static members of a class.
Traditional import declaration looks like this
import java.util.*;
The above statement will import all the classes under java.util package.
The format of static import declaration is similar with the addition of keyword static. E.g;
import static java.lang.Math.*;
This statement will import all the static members of java.lang.Math class.
Similar to import declaration the static import offers two options. You can import all static members of a class or import only the members that you need in your program.
import static java.lang.Math.PI;
will import only PI.
import static java.lang.Math.*;
will import all static elements from Math class.
Before Java5 to use static fields or methods of a class you had to fully qualify the field/method like
double pi= Math.PI;
double randomNumber = Math.random();
Following is a simple program that uses static members from Math class.
public class DemoStatic{ public static void main(String args[]){ double pi = Math.PI; double randomNumber = Math.random(); System.out.println("PI = " + pi); System.out.println("Random = " + randomNumber); } }
Typing fully qualified names quickly becomes cumbersome if you use a lot of static members.
static import eliminates the need of fully qualified names. This not only reduces the number of key strokes for developers it also makes the code more readable.
The above example rewritten with static import will become
import static java.lang.Math.*; public class DemoStatic2{ public static void main(String args[]){ double pi = PI; double randomNumber = random(); System.out.println("PI = " + pi); System.out.println("Random = " + randomNumber); } }
To import a static method you just give the method name without parenthesis.
The only drawback I see with static import is if you import too many static elements from various classes and then use them unqualified in your code it may become hard to determine which element belongs to which class. Used wisely the static import may help you avoid repeating class names over and over.
Hey, you’ve done it again. Another well written bit of Java programming advice. Perhaps more seasoned programmers don’t need such guidance but part-time Java programmers, like myself, will undoubtedly find posts like this one extremely useful. Thanks, and keep the tips coming.
Thanks, I’m learning Java, and you’ve explained this better than a similar write-up I saw at the Sun website on static imports. I’m glad you gave the history behind it, as a newb I kept wondering why importing java.lang.Math.* wasn’t letting me call the static method random() unqualified – thanks to your write-up I understand much better.
Good Website, to the point words/explainations are found rearly. | http://zparacha.com/java-static-import | CC-MAIN-2017-51 | refinedweb | 518 | 59.19 |
30 January 2009 19:27 [Source: ICIS news]
TORONTO (ICIS news)--LANXESS and union IG BCE have agreed to introduce short-time working hours and pay cuts for 5,000 non-managerial workers in Germany as the chemicals producer responds to declining demand amid the global recession, it said on Friday.
A 35-hour work week with corresponding pay cuts would be introduced in March for initially 12 months, the company said.
In addition, there would be no bonus for 2009.
LANXESS also said that salaries of managerial employees would be “adjusted”, salary reviews would be postponed by at least six months, and the board of management would take a pay cut equivalent to about 10% of fixed salary.
Outside of Germany, salary increases would be postponed by twelve months in some countries. In addition, measures to lower personnel costs would be agreed upon according to the respective country conditions, LANXESS said.
Overall, the measures were expected to save LANXESS some €50m ($65m) in cash outflow in the 2009-2010 period, it said.
“We are in the midst of a global recession,” LANXESS CEO Axel Heitmann said.
“Customer demand, especially in the automobile and construction industries, remained weak in January, and we do not expect the current economic environment to radically improve going forward,” he said.
“LANXESS has the expertise to cope with difficult economic conditions. We have already demonstrated our competence in the recent past with the realignment of our company.
“I am impressed by the commitment and the flexibility of our employees worldwide,” Heitmann said.
Should economic conditions worsen in the coming months, senior management and employee representatives would meet on a regular basis to discuss further measures, the company added.
LANXESS employs 15,000 people in 21 countries.
($1 = €0.77)
For more on LANXESS
Say I have this class:

@Entity
@Table(name="PICTURE")
public class Picture {
    private String category1, category2;
}

TABLE PICTURE {
    int category1;
    int category2;
    ...
}
@Id
@GeneratedValue
@Column(name = "MESSAGE_ID")
private Long id;

If I try to use the above statement, the hibernate_sequence sequence is used, even though I have not specified a sequence in my @GeneratedValue. That means hibernate_sequence is used by default for ID generation. In case two threads simultaneously try to save an object, how will the sequence number generation take place ...
I have the same problem. I searched on forums and reached the conclusion that there is no annotation support for specifying a column's default value. This is very strange, since in the XML mapping you can set a default value for a column. I am not sure if I am right, but maybe someone from the Hibernate team can help ...
Created on 2010-08-23 08:06 by doko, last changed 2010-11-25 13:16 by msuchy@redhat.com. This issue is now closed.
I do not think your wish is sensibly possible. GzipFile wraps an object that is or simulates a file. This is necessary because GzipFile "simulates most of the methods of a file object, with the exception of the readinto() and truncate() methods." Note that seek, rewind, and tell are not excluded. For instance, I see no way that this:
def rewind(self):
'''Return the uncompressed stream file position indicator to the
beginning of the file'''
if self.mode != READ:
raise IOError("Can't rewind in write mode")
self.fileobj.seek(0)
...
could be implemented without seek, and without having a complete local copy of everything read.
urllib.request.urlopen returns a 'file-like' object that does not quite fully simulate a file. The downstream OP should save the gzip file locally using urlretrieve() and *then* open and iterate through it. Feel free to forward this suggestion to the OP.
I don't know the gzip format well enough, but I was hoping that it would be possible to iterate through the lines of a gzip-compressed stream without having to use any of the functions that would require seeking.
Matt: if you want to learn the file format and propose a patch, I think it would be OK for gzip to duck-type the file object and only raise an error when a seek is explicitly requested. After all, that's the way real file objects work. A quick glance at the code, though, indicates this isn't a trivial refactoring. I think it should be possible in theory since one can pipe a gzipped file into gunzip, and I don't think it buffers the whole file to unzip it...but I don't know for sure. Another issue is that if the patch substantially changes the memory/performance footprint it might get rejected on that basis.
If you (or anyone else) wants to work on a patch let me know and I'll reopen the issue.
It is possible that only a fixed-size buffer is needed. If so, use of an alternate read mechanism could be conditioned on the underlying file(like) object not having seek.
It is also possible to direct a stream to a temporary file, but I think having the user do so explicitly is better so there are no surprises and so that the user has file reference for any further work.
Or there could be a context manager class for creating temp files from streams (or urls specifically) and deleting when done. One could then write

with TempStreamFile(urlopen('xxx')) as f:
    for line in GzipFile(fileobj=f):
        ...
I'm proposing a GzipStream class which inherits from gzip.GzipFile and handles streaming gzipped data.
You can use this module under either the Python or the GPLv2 license.
We use this module under Python 2.6. Not sure if it will work under Python 3.
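For later readers: Python 3's gzip module can iterate a wrapped file-like object line by line without needing rewind; a minimal sketch, using an in-memory stream as a stand-in for a network response:

```python
import gzip
import io

# Compress some sample text entirely in memory.
raw = b"first line\nsecond line\nthird line\n"
compressed = gzip.compress(raw)

# A BytesIO stands in here for a file-like network response.
stream = io.BytesIO(compressed)

# Iterate the decompressed stream line by line without materialising
# the whole file first.
lines = list(gzip.GzipFile(fileobj=stream))
print(lines)
```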
Antti Koivunen wrote:
>
>.
Good point.
> Also, if the URI is used to locate the descriptor, there's always the
> possibility that you're offline or behind a firewall (although the block
> manager should provide a way around this by giving the option of storing
> the descriptor locally).
Yes, of course.
>:
> >>
> >> <provides-file
> >> <provides-implementation
> >
> >
> > Yes, this is where I aim to start.
>
> Very good.
>
> >>>3) detailed description: the contract identifier indicates both the
> >>>skeleton and the behavior of the contract. This allows high granular
> >>>automatic validation.
> >>
> >>Sounds good, but would be difficult to implement using just an XML
> >>descriptor.
> >
> >
> > If you are saying that the XML descriptor might get insanely complex, I
> > totally agree.
>
> Exactly my point, but as said before, a few simple validation rules
> would go a long way (but probably not all the way).
>
> >>Following proper SoC, perhaps the role itself should provide
> >>the tools for more complex validation.
> >
> >
> > No, this *breaks* SoC! Validation is not your concern; it is *ours* to
> > understand if what you provided us with works depending on the contract
> > that we are expecting!
>
> Well, SoC isn't just about who writes the Java code. Offering a clean
> Java API to the block (role) authors for defining the validation rules
> might be better than offering an "insanely complex" XML API. Our main
> concern is to perform the validation in a uniform way according to these
> rules.
Yes, but it's a matter of *trust* more than SoC at this point: I give
you the contract, you give me the implementation *and* a way to check
the validation.
As italian, I'm always very sensible at cheating :)
> >>The role descriptor could make
> >>use of the simple built-in validators (see above) and/or define custom
> >>ones if necessary.
> >>
> >>It should be possible to define an 'intermediate' API to make it easy to
> >>implement new validators, e.g.
> >>
> >> interface Validator
> >> {
> >> void validate( ValidationContext ctx ) throws ValidationException;
> >> }
> >>
> >> interface ValidationContext
> >> {
> >> BlockInfo getBlockInfo();
> >> URL getResource( String name );
> >> ClassLoader getContextClassLoader();
> >> Configuration getConfiguration(); // from the role descriptor
> >> }
> >>
> >>This approach would allow practically any level complexity, but would
> >>also mean that the role might not consist of just the XML descriptor,
> >>i.e. we might end up with another archive format, say '.cor'. Still,
> >>it's probably be better than trying to please everybody and ending up
> >>with 50kB role descriptors.
> >
> >
> > Hmmm, no, I was thinking more of using namespaces to trigger different
> > validation behavior during installation.
>
> I'm also quite hesitant to go beyond XML, but it might be difficult to
> define standalone roles. Consider the following fairly simple example:
>
> <validate-xml
>
> Now, if the schema URI does not resolve to a valid schema (or there's no
> internet access or the server is down), we have a problem. There are a
> couple of possible solutions (pre-install the schema, require momentary
> internet access), but wouldn't it be more convenient to download a
> single file that contains everything? Then we could do something like:
>
> <validate-xml
>
> This is just one example, but I'm pretty sure there other similar
> situations.
Ok, I'll try to come up with something as soon as I have time.
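To make the Validator idea discussed in the thread concrete, here is a self-contained sketch; the real Avalon/Cocoon ValidationContext is richer, so these minimal stand-in types are assumptions:

```java
// Minimal stand-ins for the interfaces quoted in the thread (assumptions).
interface ValidationContext {
    String getBlockInfo();
}

class ValidationException extends Exception {
    ValidationException(String msg) { super(msg); }
}

interface Validator {
    void validate(ValidationContext ctx) throws ValidationException;
}

// One possible custom validator: reject blocks with no block info at all.
public class NameValidator implements Validator {
    public void validate(ValidationContext ctx) throws ValidationException {
        String info = ctx.getBlockInfo();
        if (info == null || info.isEmpty()) {
            throw new ValidationException("block info missing");
        }
    }

    public static void main(String[] args) throws Exception {
        Validator v = new NameValidator();
        v.validate(() -> "my-block");   // passes silently
        System.out.println("ok");
    }
}
```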
01 April 2008 15:47 [Source: ICIS news]
TORONTO (ICIS news)--RAG-Stiftung has decided to postpone the planned initial public offering (IPO) of a stake in Evonik Industries, which includes the former Degussa specialty chemicals business, due to poor capital market conditions, a spokesman said on Tuesday.
RAG’s main objective was to list the Evonik stake at a good price rather than selling it under value at this time, spokesman Klaus-Henning Groth told ICIS news.
The IPO plans could be quickly revived if and when capital markets improved, he said without providing firm timelines.
In the meantime, RAG was prioritising its talks with potential financial investors to buy a stake in Evonik, he said.
Groth would not name the interested investors RAG is negotiating with and would not say how much money it hoped to receive.
Duplicate files have their uses, but when they are duplicated multiple times or under different names and in different directories, they can be a nuisance. This article shows readers how to use Python to eliminate such files in a Windows system.
Computer users often have problems with duplicate files. Sometimes we mistakenly create the same file again and again with different names, or copy one file to different locations with different names. So it becomes very difficult to find the duplicate file due to its different name. There is also the case of files with the same name having different content. In order to solve this problem, let’s check out a new Python program.
Before jumping to the source code, I want to explain the principle behind it, which is based upon file integrity. If two files have the same content, with the same or different names, then their MD5 hashes (or hashes from another algorithm) must be the same. In this article, I am going to use the MD5 hash to check the integrity of files. In the first step, let's create and save the MD5 hashes of all the files on all drives. See the basic code flow in Figure 1, which shows how the database file containing the hashes of all files is generated.
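The principle is easy to check interactively (Python 3 syntax here, unlike the article's Python 2 listings): identical bytes always hash to the same MD5 digest, while different bytes almost surely do not.

```python
import hashlib

a = hashlib.md5(b"same bytes").hexdigest()
b = hashlib.md5(b"same bytes").hexdigest()
c = hashlib.md5(b"different bytes").hexdigest()

print(a == b)  # True  - identical content, identical digest
print(a == c)  # False - different content, different digest
```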
The function create() will be called by the user with arguments. It accesses all hard disk drives through the get_drives() function; then it creates a thread for each drive and calls the search1() function. The search1() function uses the md5() function to generate the MD5 hash of each file. In this way, the search1() function builds a Python default dictionary, which contains hashes as keys and files with paths as values. Finally, the create() function dumps the Python default dictionary into a pickle file.
Let’s discuss the code.
# program created by mohit
# official website L4wisdom.com
# email-id [email protected]
The following modules will be used in the program. Do not worry about creating MD5 hashes because there is a module hashlib which will do this for you.
import os
import re
import sys
from threading import Thread
from datetime import datetime
import subprocess
import cPickle
import argparse
import hashlib
import collections

# Create a Python default dictionary
dict1 = collections.defaultdict(list)
The md5() function calculates the MD5 hash of the file.
def md5(fname, size=4096):
    hash_md5 = hashlib.md5()
    with open(fname, "rb") as f:
        for chunk in iter(lambda: f.read(size), b""):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()
The all_duplicate() function in the following code is used to print all duplicate files in the drive. It gives the output to a file named duplicate.txt in the current running folder.
def all_duplicate(file_dict, path=""):
    file_txt = open('duplicate.txt', 'w')
    all_file_list = [v for k, v in file_dict.items()]
    for each in all_file_list:
        if len(each) > 1:
            file_txt.write("-------------------\n")
            for i in each:
                str1 = i + "\n"
                file_txt.write(str1)
    file_txt.close()
The get_drives() function shown below returns the list of all drives. If you insert a pen drive or external drive while running the program, it will also be listed.
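The body of get_drives() is not shown here; a minimal stand-in with the same contract (illustrative, not the author's original code) simply probes the Windows drive letters:

```python
import os
import string

def get_drives():
    # Probe candidate drive roots A:\ .. Z:\ and keep those that exist.
    # On non-Windows systems this simply returns an empty list.
    drives = []
    for letter in string.ascii_uppercase:
        root = letter + ":\\"
        if os.path.exists(root):
            drives.append(root)
    return drives
```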
The search1() function shown in the following code gets the drive’s name from the above function and accesses all files. It then sends the file with the full path to the md5() function to get the MD5 hash. Finally, it builds the global default dictionary dict1.
def search1(drive, size):
    for root, dir, files in os.walk(drive, topdown=True):
        try:
            for file in files:
                try:
                    if os.access(root, os.X_OK):
                        orig = file
                        file = root + "/" + file
                        if os.access(file, os.F_OK):
                            if os.access(file, os.R_OK):
                                s1 = md5(file, size)
                                dict1[s1].append(file)
                except Exception as e:
                    pass
        except Exception as e:
            pass
The create() function is what starts to create the hashes of all files. It creates the threads for each drive and calls the search1() function. After the termination of each thread, the create() function dumps the default dictionary dict1 to the pickle file named mohit.dup1, as shown below:
def create(size):
    t1 = datetime.now()
    list2 = []  # empty list is created
    list1 = get_drives()
    print "Drives are \n"
    for d in list1:
        print d, " ",
    print "\nCreating Index..."
    for each in list1:
        process1 = Thread(target=search1, args=(each, size))
        process1.start()
        list2.append(process1)
    for t in list2:
        t.join()  # Terminate the threads
    print len(dict1)
    pickle_file = open("mohit.dup1", "w")
    cPickle.dump(dict1, pickle_file)
    pickle_file.close()
    t2 = datetime.now()
    total = t2 - t1
    print "Time taken to create ", total
The following function opens the pickle file and loads the dictionary into the memory (RAM):
def file_open():
    pickle_file = open("mohit.dup1", "r")
    file_dict = cPickle.load(pickle_file)
    pickle_file.close()
    return file_dict
The file_search() function in the following code is used to match the hash of the file provided by the user. It first opens the pickle file and loads the Python default dictionary. When you provide the file with the pathname, it calculates the MD5 hash of the file and then matches it with the dictionary’s keys.
def file_search(file_name):
    t1 = datetime.now()
    try:
        file_dict = file_open()
    except IOError:
        create(4096)
        file_dict = file_open()
    except Exception as e:
        print e
        sys.exit()
    file_name1 = file_name.rsplit("\\", 1)
    os.chdir(file_name1[0])
    file_to_be_searched = file_name1[1]
    if os.access(file_name, os.F_OK):
        if os.access(file_name, os.R_OK):
            sign = md5(file_to_be_searched)
            files = file_dict.get(sign, None)
            if files:
                print "File(s) are "
                files.sort()
                for index, item in enumerate(files):
                    print index + 1, " ", item
                print "---------------------"
            else:
                print "File is not present or accessible"
    t2 = datetime.now()
    total = t2 - t1
    print "Time taken to search ", total
Shown below is the main() function responsible for all actions. We will discuss all its options later with diagrams.
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("file_name", nargs='?',
                        help="Give file with path in double quotes")
    parser.add_argument('-c', nargs='?', const=4096, type=int,
                        help="For creating MD5 hash of all files")
    parser.add_argument('-a', action='store_true',
                        help="To get all duplicate files in duplicate.txt in running current folder")
    parser.add_argument('-f', nargs=1,
                        help="To find the MD5 hash, provide file with path in double quotes")
    args = parser.parse_args()
    try:
        if args.c:
            print args.c
            create(args.c)
        elif args.a:
            file_dict = file_open()
            all_duplicate(file_dict)
        elif args.f:
            if os.access(args.f[0], os.R_OK):
                print "Md5 Signature are : ", md5(args.f[0], 4096)
                print "\n"
            else:
                print "Check the file path and file name\n"
        else:
            file_search(args.file_name)
        print "Thanks for using L4wisdom.com"
        print "Email id [email protected]"
        print "URL:"
    except Exception as e:
        print e
        print "Please use proper format to search a file use following instructions"
        print "dupl file-name"
        print "Use <dupl -h> For help"

main()
Figure 5: Creating a pickle file of the hashes of all files
Let us save the complete code as dupl.py and make it a Windows executable (exe) file using the Pyinstaller module. You can also download a readymade exe file from. Run the command as shown in Figure 2. After running it successfully, you can find the dupl.exe in folder C:\PyInstaller-2.1\dupl\dist, as shown in Figure 3. You can put the dupl.exe file in the Windows folder, but if you place this in a different folder, you will have to set the path to that folder. Let us run the program with the following command:
1. dupl -h
This helps to show all options, as seen in Figure 4.
2. dupl -c 81920
The above command will create the database pickle file mohit.dup1, which contains the hashes of all files. The argument 81920 is the chunk size, in bytes, read at a time while computing the MD5 hashes; you can change it. If you increase the size, the speed of hash calculation increases at the expense of RAM. If you don't provide the number, it defaults to a 4096-byte chunk size.
As you can see in Figure 5, it takes 12 minutes and 53 seconds to create the database. This speed depends upon various factors, such as the RAM, the model of the computer, and the number of files and their sizes. The hash calculation time is directly proportional to the size of the file. The database pickle file will be created in the current running folder.
3. dupl <file-with full path>
Figure 6 shows the duplicate files based upon the hash.
4. dupl -a
In this option, duplicate.txt has been created in the current folder, which contains all the duplicate files in pairs, as shown in Figure 7.
5. dupl -f <file with path>
This option takes a file with path and returns the MD5 hash.
Furthermore, there are some platforms where a further transformation
is necessary to get from a PCI-relative memory-mapped I/O address
to a CPU address on the MIPS platform, thus in our io.h we have:
extern inline void * ioremap(unsigned long offset, unsigned long size)
{
extern unsigned long platform_io_mem_base;
return (void *) KSEG1ADDR(offset | platform_io_mem_base);
}
> 2. If yes, isn't it better to or (`|') instead of add ('+') 0xa0000000 in the
> readb() et al. macros (or to use the macro KSEG1ADDR())?
One could make that argument. Others might say that addition is
a more mnemonic operation for adding a base displacement.
The results will be, one hopes, the same. But it's a fair question
as to why KSEG1ADDR isn't used in preference; indeed, it is in
the MIPS 2.2.12 distribution.
>FYI, I'm trying to make the UART in the NEC Vrc-5074 host bridge work cleanly
>with serial.c. And serial.c first ioremap()s it.
The ioremap/readb stuff is only in the latest versions of serial.c
(newer than what I run with, anyway), and yes, you are right, it's broken.
Isn't there an isa_slot_offset declaration? Odd. Even the
i386 has a __ISA_IO_base in the definition.
So, while we didn't put in the isa support, we did do a certain
amount at MIPS to make arbitrary PCI platforms work with MIPS.
You can snarf it from and
see what I mean.
And yes, one of these days, somebody needs to merge it into
the SGI 2.3.x tree...
Regards,
Kevin K.
While all of our code is functional, there are potentially numerous ways in which we could better organize it. We’ll look at a pair of ways that will help us both reduce the visual clutter of working with a large script and reduce the odds of introducing bugs by centralizing some of our important data.
Use of the #region and #endregion keywords would allow us to break our code up into collapsible sections. Let's take a look at one of our largest scripts—PlayerController—and see how we could break it down using regions. Open the script and adjust it as shown in Listing 12.1.
Listing 12.1 Regions in the PlayerController Script
public class PlayerController : MonoBehaviour ...
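The full listing is truncated above; purely as an illustration of the syntax (member names invented, not the book's code), a region-organized class looks like this:

```csharp
public class PlayerController
{
    #region Public Properties
    public float Speed { get; set; } = 5f;
    #endregion

    #region Movement
    public void Move(float deltaX)
    {
        // movement logic collapses under this region in the editor
    }
    #endregion
}
```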
Lately I’ve received several emails from people asking about setting up projects using more than one file. Having come from an Eclipse background, I found it really intuitive but realized there are not many good tutorials on how this works specifically with Flex Builder or Flash Builder from Adobe. Here is a quick start on how to get your project up and running.
Start a new project and name it. A Flash Builder/Flex Builder project may contain several components, ActionScript files, classes, packages and other assets. The first step is to identify your project’s entry point and then reference other files.
The Flex Project I had emailed has the following lines of code:
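The listing itself is not reproduced here; a hypothetical reconstruction consistent with the line-by-line description below (the namespace URI is assumed to be the standard Flex 3 one) would be:

```xml
<?xml version="1.0" encoding="utf-8"?>
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml"
                        xmlns:components="components.*">

    <components:CountriesCombo/>

</mx:WindowedApplication>
```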
Line 1 is the XML processing declaration and line two contains the root component of the application. This particular application is a Windowed Application (Adobe AIR). The xmlns:mx=”” line tells the compiler to use a specific Flex SDK, in this case the Flex 3.4 SDK. The namespace declaration below on line 3 (xmlns:components=”components.*”) declares that the project may use any or all of the components in the Package named “components”. Note that at this point in your application, the Package has not yet been created so your project will throw an error (correct behavior).
Line 5 of the code is where the a specific component is referenced. Because this component is namespace qualified with the same namespace prefix given to the “components” Package, the component MUST exist within that package. The specific component named here is CountriesCombo. The declaration components:CountriesCombo tells the compiler to create an instance of that component at runtime.
Creating a new Package is really easy. In Flash Builder 4, highlight the src folder and right-click (PC) or Command-click (OSX) the folder and a context menu will appear. Select "New -> Package" as shown, and when the dialog pops up, give the Package the name "components". Remember that these names are case sensitive.
Now that you have a Package created, it is time to add your component. It is just as easy. right-click (PC) or Command-Click (OSX) on the newly created package and select “New -> MXML Component” from the Menu as shown below.
In the dialog box that opens, name your component “CountriesCombo”. Paste the following code into the component source view:
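The pasted listing is not shown here; a minimal hypothetical CountriesCombo component consistent with its name might be:

```xml
<?xml version="1.0" encoding="utf-8"?>
<mx:ComboBox xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:dataProvider>
        <mx:String>Canada</mx:String>
        <mx:String>Mexico</mx:String>
        <mx:String>United States</mx:String>
    </mx:dataProvider>
</mx:ComboBox>
```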
Your project should now be runnable and have the following structure:
Note that all red X’s are gone.
If your project still does not work, try cleaning it ("Project -> Clean" from the top menu). If you have added a component to your project that is not being recognized, you may have to manually refresh the Package Explorer view. Do this by right-clicking on the root folder of the project (or even just the src folder) and hitting "refresh".
Hi -
Newbie here working on a POC for work. My only goal at this point is very
simple: open up the REST API on ActiveMQ to allow a voice web application
to post log messages to a queue. A separate consumer app will pick up the
messages via a MessageListener.
I don't want to get bogged down with security right now... just want to get
this up and running but my voice app keeps getting a 401 response from AMQ.
I tried adding the simpleAuthenticationPlugin as described on the AMQ
security page, but that didn't work. Probably because of the jetty.xml
import which I see has a bunch of pre-configured security settings in there
(which I am not familiar with).
Anyway, I would like to know what is the simplest and quickest way to
configure anonymous (or basic auth) for AMQ's REST API with or without
jetty?
Thank you
--
View this message in context:
Sent from the ActiveMQ - Dev mailing list archive at Nabble.com.
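For later readers, one commonly suggested direction for this kind of setup is to disable authentication on the web console's security constraint in jetty.xml. Treat the snippet below as a sketch to verify against the jetty.xml shipped with your own ActiveMQ version; bean and property names vary between releases:

```xml
<!-- Sketch of a jetty.xml security constraint with authentication disabled. -->
<bean id="securityConstraint" class="org.eclipse.jetty.util.security.Constraint">
    <property name="name" value="BASIC"/>
    <property name="roles" value="user,admin"/>
    <!-- false = allow anonymous access to the REST/web paths -->
    <property name="authenticate" value="false"/>
</bean>
```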
ReactJS ships a tool called create-react-app that can generate a new project for you. This gets one started very quickly. But I see several problems with a generated project:
- The build process is heavily abstracted. Meaning, it hides how Webpack and Babel are used and configured.
- It launches a web server that uses Websocket to automatically refresh a web page when you modify any code.
Why do we need Webpack?
If you have experience with jQuery or AngularJS 1.x then you may be wondering why we need a build system like Webpack. There are several reasons but the main one is module loading. Let me explain. When we code a large application using ES6 we break up the code in several JavaScript files called modules. We can import a module from another using a syntax like this:
import React from 'react';
import {render} from 'react-dom';
Although this is valid JavaScript, today’s browsers do not support module loading yet. Webpack will inspect your module files and build a dependency tree between the modules. It will then combine all JavaScript files in the correct order in a single file. This built output file can then be used from an HTML document. Webpack builds this output file in a very clever manner. For example, in case of an exception you will see the original file and line number in the stack trace. This makes debugging easy.
(Note: NodeJS runtime does support ES6 import. So we do not need to use Webpack there.)
Why do we need Babel?
When we code ReactJS we embed HTML tags as JSX code. We will use Babel to translate JSX code into proper JavaScript.
You can also transpile ES6 code into ES5 JavaScript. But increasingly web browsers support most of ES6. In this article we will not transpile ES6.
OK, that’s enough theory. Let’s get started.
Create the Project
mkdir simple-app
cd simple-app
npm init
Now we will install the packages we need to develop and build ReactJS.
npm i react react-dom \ webpack babel-core babel-loader babel-preset-react -S
Configure Webpack
In the root folder of your project create a file called webpack.config.js.
Here APP_DIR points to the root of our JavaScript files. That will be the src folder. We will create this folder shortly and write all of our JS code there.
BUILD_DIR points to where the combined output JavaScript file will be stored.
The core configuration is being done through the config object here. The entry property is one of the most important. When Webpack builds the dependency tree it starts by inspecting the entry JS file. This module is at the very top of the tree. In ReactJS this will be the root-level component that includes all other components.
The output property should be obvious. It names the combined JS file name and location.
Then comes the loaders. A loader is used by Webpack to process a source JS file. In this case we are using the babel-loader that will transpile each JS file using Babel before being included in the combined output file. Of course we have to configure Babel so that it knows how to transpile JSX. We will do so next.
Configure Babel
In your project's root folder create a file called .babelrc. Add this to the file.
{ "presets" : ["react"] }
This tells Babel to transpile JSX (but not ES6).
We are done with build configuration. We can start writing some code.
Develop a ReactJS component
Create a folder called src. Recall, all our JS files should go there. Within the src folder create a file called index.js. Add this code:
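A hypothetical minimal index.js consistent with the rest of the article (rendering into a div with id "app"; not the author's original listing) could be:

```jsx
import React from 'react';
import {render} from 'react-dom';

class App extends React.Component {
  render() {
    return <h1>Hello React!</h1>;
  }
}

render(<App/>, document.getElementById('app'));
```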
Use the component in HTML
In the root folder of the project create a file called index.html. Add this code:
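A hypothetical reconstruction of the page, matching the note below about loading build/bundle.js (the div id is an assumption):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Simple App</title>
  </head>
  <body>
    <div id="app"></div>
    <script src="build/bundle.js"></script>
  </body>
</html>
```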
Note how we are loading the combined output file build/bundle.js.
Build and test
From the root folder of your project run this command to do a Webpack build.
./node_modules/.bin/webpack -d
This will create the build/bundle.js file. Feel free to look at it. This will have the code for all modules required by index.js including the ReactJS modules.
Open the index.html file in a browser. You don't need a web server. On a Mac you can do:
open index.html
You should see this page.
Create convenience scripts
Running the build as shown above can become tedious. Let’s define a few NPM scripts to make things easier.
Open package.json. Add the build and watch scripts as shown below.
"scripts": { "build": "webpack -d", "watch": "webpack -d --watch", "test": "echo \"Error: no test specified\" && exit 1" }
Now you can have Webpack watch your code:
npm run watch
Try changing your JS code and refreshing the browser. You should see the changes right away.
Summary
In this article we learned how Webpack and Babel work with ReactJS. We learned how to configure these build systems.
Imagine following piece of code (---> is a tab, | is caret):
public class Foo {
--->public void bar() {
--->}|
}
Now when you press Enter two times after the method, you will get:

public class Foo {
--->public void bar() {
--->}
--->
--->|
}

Now you press Backspace and get:

public class Foo {
--->public void bar() {
--->}
--->
|
}

Now the little annoyance occurs: pressing Backspace again moves the caret to the end of the previous line (although the tab is usually not visible), and you have to press the key multiple times to delete the visually empty line. The little plug-in "Less Hungry Backspace" solves this problem. When you press the Backspace key and it would just delete the previous newline character, this plug-in causes it to also delete the preceding whitespace characters (tabs and spaces).
Tom
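The deletion rule described above can be sketched independently of the IntelliJ API; this is illustrative logic only, not the plug-in's source:

```java
public class LessHungryBackspace {

    // When backspace would delete a newline, also swallow the trailing
    // tabs/spaces that precede it; otherwise delete a single character.
    static String backspace(String text, int caret) {
        if (caret == 0) return text;
        int start = caret - 1;
        if (text.charAt(start) == '\n') {
            while (start > 0 && (text.charAt(start - 1) == ' '
                                 || text.charAt(start - 1) == '\t')) {
                start--;
            }
        }
        return text.substring(0, start) + text.substring(caret);
    }

    public static void main(String[] args) {
        String before = "foo();\t\t\n";   // line ends with tabs, then newline
        System.out.println(backspace(before, before.length()));  // "foo();"
    }
}
```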
Imagine following piece of code (---> is a tab, | is caret):
Did you try the Hungry BackSpace plugin? I wrote that to take care of the problem you talk about, unless I'm misunderstanding...
That was one of the immediate things I missed when I moved from Emacs to IDEA.
For such a sophisticated editor, IDEA still treats whitespace like a text editor. I would like to see Hungry Backspace and EmacsTab (Editor Actions->Emacs Tab) be the default keybindings for BackSpace and TAB keys.
Alex, I got inspired by your plug-in, but for my personal taste it deletes too much whitespace. Less Hungry Whitespace just deletes the tabs and spaces at the end of the previous line, just as if the previous line had no trailing whitespace. It does not delete multiple newlines.
Tom | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206773935-ANN-Less-Hungry-Whitespace-0-1-available-via-Plug-in-Manager | CC-MAIN-2020-05 | refinedweb | 260 | 64.85 |
I'll try to keep it brief. When I try to pass the 'list' vector to the counter function my compiler throws an error of: "error: request for member size in count, which is of non-class type std::vector<int>*"
Googling hasn't produced anything quite what I'm looking for syntax wise. If someone could enlighten me I'd be most appreciative.
My compiler is g++ version 4.6.3.
#include <iostream>
#include <vector>

using namespace std;

void counter(vector<int> count[]);

int main() {
    vector<int> list[5];
    counter(list);
    return 0;
}

void counter(vector<int> count[]) {
    int i, j = 0;
    for (i = 0; i <= count.size(); i++) {
        j++;
    }
    cout << "There are " << j << " elements.\n";
}
You're the man! gcc4 and -fno-tree-ch did the trick for me, too. -fno-tree-ch was mentioned earlier on this list, to compile with gcc4 on OS X. But since gcc4 is still not in the default toolchain, I did not even try :(.
Seems that we have a problem with gcc3.3 and not gcc4 for once :) The error behaviour is similar on your machine. I also got a bus error (sometimes it did not reach the menu, either; I started to hit the 3 very early, so I could go right through it...). Early crashes always happened for me on MS-DOS 6.22 and DOS 7.
Tested it with DOS 6.22 and DOS 7 (Win95). No problems so far. I hope Fabrice stumbles upon this. Maybe we should make the patch a little more selective with "ifeq ($(CONFIG_DARWIN),yes)" and post it.
Don't know whether this affects other platforms, too... Thanks for your work so far!

Mike

On 11.12.2005, at 15:56, Joachim Henke wrote:
I just did some tests on the freedos image from your web-site and my first impression is that these crashes are something compiler related. When I build qemu with

./configure --prefix=/usr/local --cc=gcc-3.3 --target-list=i386-softmmu --enable-cocoa

and start your image with

qemu -hda harddisk_1.img -soundhw sb16

it starts up and immediately crashes after 1 or 2 seconds (Bus error) - even before I could choose one of the 3 menu options. For the next test I applied the patch below:

--- Makefile.target
+++ Makefile.target
@@ -148,7 +148,7 @@
 ifeq ($(HAVE_GCC3_OPTIONS),yes)
 # very important to generate a return at the end of every operation
-OP_CFLAGS+=-fno-reorder-blocks -fno-optimize-sibling-calls
+OP_CFLAGS+=-fno-reorder-blocks -fno-optimize-sibling-calls -fno-tree-ch
 endif
 ifeq ($(CONFIG_DARWIN),yes)

./configure --prefix=/usr/local --cc=gcc-4.0 --target-list=i386-softmmu --enable-cocoa --disable-gcc-check

With this build everything seems to work perfectly. I boot into option 1 and run the DOOM demo with b.bat - it runs for ca. 100 seconds and then quits back to DOS saying 'timed 2134 gametics in 2325 realtics'. Can you try if using GCC 4 helps for you too? I'll do some more tests now. Hopefully I can track down the problem to something more specific.

Jo.

Mike Kronenberg wrote:
You find a freedos including doom here: or at the oszoo.org
Crashes:
- when choosing option 1 (standard) about 10-20 sec into doom, when playing timedemo (doom -timedemo demo3), otherwise, too. This used to work great before.
Thanks, Mike

_______________________________________________
Qemu-devel mailing list address@hidden
I recently migrated an existing project to .NET 4.5 and changed out what this project was using for data access (switching to Entity Framework).

For some reason, any time I try to access any of the functions on a DbSet (Where, First, FirstOrDefault, ...) I get an error like this:

Error 53 'System.Data.Entity.DbSet`1<MyProject.Data.Customer>' does
not contain a definition for 'FirstOrDefault' and no extension method
'FirstOrDefault' accepting a first argument of type
'System.Data.Entity.DbSet`1<MyProject.Data.Customer>' could be found
(are you missing a using directive or an assembly reference?)
VIModel Db = new VIModel();
Customer = Db.Customers.FirstOrDefault(c => c.CustomerId == CustomerId && c.IsPrimary);
public partial class VIModel : DbContext
{
........
public virtual DbSet<Customer> Customers { get; set; }
........
}
The assembly for Queryable (the thing that adds the FirstOrDefault extension method you are using) is System.Core; however, its namespace is System.Linq. You can see this on the MSDN page for it:

Namespace: System.Linq
Assembly: System.Core (in System.Core.dll)

You need to have in your project a reference to System.Core, and in the file where you are trying to use it, a

using System.Linq;
If you have both of these things, double check that your project or some project you are referencing did not create its own System.Data.Entity.DbSet<T> class which does not implement IQueryable<T> or IEnumerable<T>.
The Chronicles – Julius Mirembe discusses UNBS 'Q' and 'S' classification
Uganda National Bureau of Standards (UNBS)
held its annual quality awards dinner last week. The dinner is basically to
recognise and appreciate companies that have observed the continuous standards,
“Q” and “S” throughout the year. The
dinner is also intended to encourage those companies that have not yet obtained
standards to do so.
The event didn't happen last year due to logistical challenges and a restructuring exercise at the standards organisation, and for the first time, UNBS contracted East Africa Media Consult (EAMC), an organisation that publishes an independent magazine (The Chronicles), to promote standards in the country. Qmag's Milly Kalyabe talked to Julius Mirembe, the proprietor of EAMC, about general issues of standards in the country and below are the excerpts.
Qmag:
You have been involved in promoting and disseminating information about
standards for about ten years now through your magazine, The Chronicles. What
are your feelings about standards in Uganda?
JM: The standards in Uganda are basically a little bit behind, going by the East African Community. They are not yet harmonised, but they are quite ahead of Rwanda and Tanzania, although Kenya is a bit ahead of us in terms of the actual number of standards. I think the Standards Council is doing whatever it can to ensure that they harmonise the standards, and everything looks to be on course. There are a few products that still need standards, but these standards are developed according to demand. So I can say the standards are rolling; they are moving forward given our market and what we produce.
Qmag:
There are people producing without standards in our market and yet competing in
the same market. What is your take on this, how fair is this?
JM: The Standards Council would probably be in a better position to answer that, but I think normally standards are developed according to the demand of the market and the technical personnel. For instance, if you look at the standard for plastic tanks, it has just been developed, and yet these products have been produced for a long time. They haven't had standards, but as the market grows, of course there is need for the standards, and that is the rationale for developing standards.
Qmag:
What do you think of enforcement of standards in this country?
JM: Enforcement is there. It doesn't look at standards per se; they look at quite a number of things. When you look at market surveillance and also import inspection, the two departments in UNBS that are mandated to monitor the market, you will appreciate that UNBS is still underfunded and understaffed. So enforcement is where they find a challenge, as they cannot be everywhere all the time as they would be required. That is why other mechanisms are being developed to ensure that they reduce the importation of substandard goods.
Thus the idea of Pre-Inspection Verification of Conformity (PIVoC) to restrict entry of substandard goods. I think this is a point where enforcement can yield results. The enforcement team in the field also keeps monitoring.

But there is also what is called a quality management system which is going to be put in supermarkets, because some of those people selling those products are tasked to explain why they have goods on their shelves which do not have standards marks.

So basically enforcement is there, but because of lack of personnel, UNBS is looking at other mechanisms that can curtail substandard goods from the market.
Qmag: From your experience, are manufacturers and/or importers happy to acquire these standards willingly?
JM: Of course, the world over, everything has a standard. It may not really be by choice, but standards are the yardstick of everything we do, and certainly for manufacturers it cuts across the board. It's not casting stones that there must be particular standards, but by default there have to be standards.

The genuine manufacturers are happy to be involved with the standards. Some of them have approached UNBS and asked them for standards here and there. And the market is driven by standards; if you don't have a standard, I tell you, it will be very hard for you to export your products, say to Rwanda. You can't export to certain markets if you don't have certain standards marks.

So people are happy to have standards, both manufacturers and importers.
Qmag:
Many of our local consumers are not really sensitised on standards. In most cases they buy what is affordable.
What is your experience on this one?
JM: Consumers here will always go for the cheap products. It may not necessarily follow that whatever is cheap doesn't conform to the standards. But because of the nature of our society, which is by and large semi-illiterate or illiterate, they normally go for the cheap products. Truth be told, most of the cheap products don't conform to standards because the manufacturers use less of the materials than they would use for something that is durable.

You will also agree that our economy, pockets and incomes may not necessarily be in consonance with what people consume. So there is need to sensitise people about what they consume. You buy a belt for Sh 4,000 instead of going for a belt of Sh 6,000; the former will last months and the latter will last two years, and the difference is only Sh 2,000.
But sensitisation needs money and it is a whole other ball game. You have to go on radio and TV talk shows, but slowly UNBS is doing whatever it can to make people aware of the benefits of buying quality.
There are some products which have mandatory standards, those that normally have a direct impact on people's lives, for example food, construction materials, shoes etc. So whether you like it or not, these ones have to conform to standards.

There is a lot that needs to be told to the consumers, but besides that, the pre-verification of conformity to standards will come in handy to try and stop importation of substandard goods; then even if one wanted them, they would not find them.
Qmag: There are reports that even with the standards marks obtained, some businesses still produce substandard goods at the expense of the quality marks. What is your comment on this one?
JM: It depends on who says what. For anyone to get the quality marks, they go through various stages, but as we know, some of the businesses don't get to renew their licenses.

Standardisation is a continuous process; renewals and calibrations never end. There is a lot more to it than saying I have the marks.
Well, UNBS also has its challenges, we can't say it's an angel, but by and large, if we talk about percentages, I would say about 98 percent of people with the quality marks really qualify. Some people may manoeuvre and get the Q mark, but what I know is that UNBS tries to make sure that whoever gets the standards marks is truly deserving. And when you look at our products and compare them with other products, say from Kenya or Rwanda, they are better. I grew up knowing that Ugandan products are the best; for example, our plastic basins from TAMPECO and NICE toothbrushes have always been better regionally.

So, even for a market that is now flooded with cheap Chinese imports, most locally manufactured products are really OK. Look at Movit products; people, mostly women, who use them say they are good. At least they pass the test and they qualify to be on the market.
Qmag: On the side of manufacturers, is this an area that needs more sensitisation to bring more on board with standards?
JM: Of course, most of the manufacturers here are not really manufacturers. Most of them import these things and repack them here. They bring them in as raw materials and repack them as though they were manufactured here. That is why there have been fights between manufacturers and UNBS, because they know that PIVoC will catch them easily and the issue of taxation will come in, and it will be eating into their profits. But the genuine manufacturers are really happy about PIVoC because it protects the indigenous manufacturers. So the genuine ones know the essence of standardisation and the mark, but the quicker ones will always fight with UNBS. Otherwise, why would anyone fight someone trying to aid you in producing a good product and then having a market? UNBS is a watchdog, but its main purpose is to aid trade and to ensure best practices in trade.
Qmag: On the question of sensitisation, you are publishing a quarterly magazine. Who is it targeted at, and what more do you think could be done in this area?
JM: The magazine targets people who are supposed to ensure that the information trickles down. We target politicians, district chairpersons, manufacturers, supermarkets etc. We make sure that people who get this magazine help the person who may not be able to read, which much of our population is. It's a downstream kind of mechanism, but a lot more still needs to be done: radio, TV talk shows and a host of other media to make sure everyone benefits.
Qmag:
This brings us to the Qmag, our online publication. What do you think about it?
JM: Ya, ya, ya. This comes in handy; people can receive it on laptops, computers and phones, and I think it's really a good idea. People should be able to consume all media. UNBS should be able to utilise all the media as much as they can to reach out to the population.
Qmag:
Slightly on the Dinner. What is the purpose of it?
JM: The UNBS Quality Gala has always been set aside to acknowledge and reward people who continue conforming to standards and to encourage others who are yet to come onto the scheme of standardisation. It is also to encourage the SMEs and invite them to learn from the big boys. Most of them are producing without the standards. It's a day that has been set aside for UNBS to appreciate these manufacturers and the importers. It's also a platform to tell Ugandans that these are the manufacturers who are safe to buy from. So it's basically a celebration for people who are on the Q mark and not the S mark. Maybe in future we will incorporate the two standards, because the S mark is the largest sector.
Qmag:
Any other activities to loudly appreciate the companies that have embraced the
standards?
JM: We had the East African standards conference, but it has been put on hold because the institution (UNBS) has been going through a lot of changes. But maybe next year we will revive it. It's a big one; it brings people from all over the world.
Qmag: Any specific message to your clients?

JM: Well, I would like to encourage people who were not at the gala to make sure that next year they participate. We will keep improving it, and we encourage feedback from those who attended.
It's been a while since I learned about property based testing, yet I never had a chance to apply it in my work. Property based testing is a stochastic process in which your code is bombarded with multiple combinations of examples, and the output is checked for compliance with some condition that is supposed to hold on all inputs. The idea of describing and checking higher level properties of some code is just as beautiful as it is impractical in many areas of programming. In particular, in web development, where logic is usually intertwined with the database layer. Running hundreds of instances of the same test would introduce prohibitive overhead. This explains why I have not even attempted to adopt it before.
Let's take for example this little function:
def add(a, b) do a + b end
Suppose I meant it to be used with positive integers and I want to make sure that the result is always equal or greater than either operand. Kinda silly, but this is just an example. Let's describe this assumption then:
property "result of addition always >= of each operand" do check all a <- integer(), b <- integer() do s = add(a, b) assert s >= a && s >= b end end
This fails (expectedly):
Failed with generated values (after 1 successful run): * Clause: a <- integer() Generated: -1 * Clause: b <- integer() Generated: 0
Right! We forgot about negative ints. In order to make the fix comprehensive, we need to both add a guard to the function itself and bound the input stream. Whenever we write var <- integer(), it means instantiation of a particular value out of the stream of integers. Since we don't want just any integer, we should be able to apply a filter somehow. Even though the library provides a dedicated positive_integer() generator, it does not suit our needs, because it filters out zeroes as well. Instead we apply a filter to the stream:
def add(a, b) when a >= 0 and b >= 0 do a + b end property "result of addition always >= of each operand" do check all a <- filter(integer(), &(&1 >= 0)), b <- filter(integer(), &(&1 >= 0)) do s = add(a, b) assert s >= a && s >= b end end
Now the test passes. We could go on with improvements by creating a custom generator, but instead I want to move to the first useful and successful use of property based testing in my practice. Meanwhile, feel free to dig deeper into the StreamData docs to learn more about the API of the library that was almost included into Elixir itself.
Real life example
Recently, while working on a poll feature for a social app, I ran into the issue of distributing percents. There are multiple options in the poll, each option has an integral number of votes greater than or equal to zero, and there is a total number of votes as well. What I needed was to hide the absolute numbers of voters and convert these numbers into percents, while making sure they add up to 100%, or 0% if there were no votes whatsoever. The main issue I expected was rounding errors. Simple example:
4.5 + 5.5 == 10.0 round(4.5) + round(5.5) == 11
With different distributions of votes, rounding can introduce overflows as well as underflows.
While 146% of votes might be okay for Russian politics, I wanted to avoid that. And the original approach was to maintain some accumulator initialized at 100 and return it at the last option.
@doc """ Computing percent for all options using total_votes and option_votes in Poll and Option respectively. This function makes sure that integral percents sum up properly to 100%. """ @spec distrib_votes(%Poll{}, [%Option{}]) :: [%Option{}] def distrib_votes(poll, options, acc \\ 100) def distrib_votes(%Poll{total_votes: 0}, options, _) do options end def distrib_votes(_, options, percent) when options == [] or percent <= 0 do options end def distrib_votes(_, [option], percent) do [%{option | percent: percent}] end def distrib_votes(poll, [option | options], percent) do p = round(100 * option.option_votes / poll.total_votes) option = %{option | percent: min(p, percent)} [option | distrib_votes(poll, options, percent - p)] end
When I tried to come up with meaningful and comprehensive test cases, I realized that it slips out of my control and I recalled property based testing. Really, this little function does not need the database, and the assertion is extremely easy to formulate: the sum of the percents should either equal to 100% or 0% if there is no votes:
property "distrib_votes always sums to either 100 or 0" do check all votes <- list_of(filter(integer(), & &1 >= 0)) do options = Enum.map(votes, & %Option{option_votes: &1}) total_votes = Enum.sum(votes) poll = %Poll{total_votes: total_votes} total_percents = Feed.distrib_votes(poll, options) |> Enum.map(&(&1.percent)) |> Enum.sum() assert total_votes > 0 && total_percents == 100 || total_votes == 0 && total_percents == 0 end end
These were the final versions of the test and the function definition. Initially I failed to define a correct assertion by setting the naive condition total_percents == 100 || total_percents == 0, and of course it would allow for buggy behavior, where a poll with no votes would show 100% at the last option. Using the implicative form (precondition) fixed that and proved that the test is just as good, or as bad, as the logic behind it.
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/youroff/a-case-study-for-property-based-testing-in-elixir-44c6 | CC-MAIN-2021-43 | refinedweb | 864 | 56.79 |
Cherniavsky Beni <cben at users.sf.net> added the comment:

Hi Steven. Please confirm if we can mark the bug closed; if you need further advice, posting your full code (not just the error case) on comp.lang.python or StackOverflow would be more effective.

The documentation is indeed correct but hard to find (you're not the first to be surprised by UnboundLocalError); I'm working on making things more discoverable in issue 4246. See also

First of all, it's important to understand that a Python function has a *fixed* set of local variables, frozen when the function is parsed. If you assign to a name (e.g. ``name = None``), *all* appearances of the name in the function refer to a local variable; if not, they refer to the outer scope. Therefore, you can't achieve what you want with local variables.

Generally, dynamically creating variables is a bad programming practice. A dictionary is the cleanest way to hold a set of names/values that is not fixed. Yes, you'll have to write ``cols['foo']`` instead of ``foo``; OTOH, setting them will not require any ugly magic... Note also that string formatting can use values from a dictionary very conveniently: ``"... {foo} ...".format(**cols)``.

The next best thing, if ``cols['foo']`` is too verbose for you, is ``cols.foo``: create an object which will contain the values as instance variables (that's a good use for setattr()). This is the most Pythonic solution if a dictionary doesn't suffice - it's what most object-relational mappers do.

The third idea is to (ab)use a class statement. A class statement in Python creates a temporary namespace *during the class definition* (we'll not be defining any methods or using any object-oriented stuff). And the nice part is that you can put a class statement anywhere, even inside a function:

def f():
    cols = {'foo': 42}  # however you fetch them...
    class temp_namespace:
        locals().update(cols)
        print(foo / 6)  # prints 7.0
    assert 'foo' not in locals()  # no effect outside the class!
This works both in CPython 2 and 3. I'm not 100% sure that being able to change locals() in a class is guaranteed in all other implementations. (Note again that locals() of a *function* are not a real dictionary and you *can't* change them - as I said these are fixed when the function is defined.)

The fourth idea, if you must have code that says just ``foo`` to access columns, is to use the exec statement - you can pass it a dictionary that will serve as globals and/or locals. An upside is that the code will be a string and can be dynamic as well. (BTW, if the code is not dynamic, how come you don't know the names you're accessing? If you do, you could just set ``foo = cols['foo']`` etc. for every variable you need - tedious but no magic needed.)

Lastly, as you discovered, you can dynamically create global variables. (As Terry said, just use the dictionary returned by ``globals()``; no need for setattr). But this is a very last resort (changing globals for a single function is ugly), and somewhat dangerous - e.g. consider what happens if a column name changes and overwrites a function name you had...

----------
nosy: +cben

_______________________________________
Python tracker <report at bugs.python.org>
<>
_______________________________________
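As a compact, runnable illustration of the advice in the comment above (the names ``counter``, ``bump_badly`` and ``describe`` are made up for the example):

```python
# (1) Assigning to a name anywhere in a function makes it local everywhere,
#     so reading it before the assignment raises UnboundLocalError.
# (2) A dict is the clean way to hold a dynamic set of name/value pairs.
counter = 0

def bump_badly():
    try:
        counter += 1  # the assignment makes 'counter' local to the function
    except UnboundLocalError as exc:
        return type(exc).__name__
    return "ok"

def describe(cols):
    # Values live in a dict; str.format can unpack it directly.
    return "foo={foo} bar={bar}".format(**cols)

print(bump_badly())                      # UnboundLocalError
print(describe({"foo": 42, "bar": 7}))   # foo=42 bar=7
```

The first call fails precisely because the assignment freezes ``counter`` as a local name; the second shows the recommended dictionary approach.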
13 September 2012 10:57 [Source: ICIS news]
(adds paragraphs 6-9)
SINGAPORE (ICIS)--
The second-half October naphtha contract lost $18.50/tonne (€14.43/tonne) from Wednesday to $995-997/tonne CFR Japan, according to ICIS data.
Naphtha had slid to the weakest levels since 31 August, ICIS data showed.
“Naphtha came off hard today,” said a trader.
In physical trading, Glencore sold an open-spec naphtha cargo to Marubeni at $990/tonne for delivery in the first half of November, traders said.
The naphtha crack spread for second-half October fell by 10.9% from Wednesday to $127.80/tonne on Thursday – the lowest since 29 August when the crack spread stood at $123.33/tonne, according to ICIS data.
The intermonth spread between the second-half October and the second-half November contracts on Thursday weakened to its lowest level since 3 September, at $12/tonne in backwardation, the data showed.
The backwardation was at $13.50/tonne on Wednesday, according to the data.
“The end-users are getting bearish on demand,” one | http://www.icis.com/Articles/2012/09/13/9595069/asia-naphtha-falls-below-1000tonne-on-weak-petchem-demand.html | CC-MAIN-2014-52 | refinedweb | 179 | 60.31 |
A simple functional, type-safe pattern matcher in Python
This is a port of Rematch, a Typescript pattern matching library. Matcher allows for type-safe, functional pattern matching in Python.
Basic usage
Without type-checking
from matcher import Matcher m = Matcher() def powerLevel(hero): return m.match(hero, [ m.Type(Speedster, lambda hero: print('Speedsters are too fast!'), lambda hero: math.inf), m.Values(['Goku', 'Vegeta'], lambda hero: 9001), m.Value('Iron Man', lambda hero: 616) ]) print(powerLevel('Goku')) # 9001 print(powerLevel(Speedster.Flash)) # Speedsters are too fast! # inf print(powerLevel('Captain America')) # matcher.MatchError: Captain America doesn't match any of the provided clauses
With type-checking
from matcher import Matcher m = Matcher[int, str]() def wrongInput(s: str) -> str: return m.match(s, [ m.Value(1, lambda s: s), m.Else(lambda s: s) ]) # Argument 1 to "match" of "Matcher" has incompatible type "str"; expected "int" def wrongOutput(n: int) -> Any: return m.match(n, [ m.Values((1, 2, 3), lambda n: n + "Hello World"), m.Else(lambda n: n**2) ]) # Argument 2 to "Values" of "Matcher" has incompatible type Callable[[int], int]; expected Callable[[int], str]
The Matcher.match function takes in an argument and a group of cases to test the argument against.
There are 4 types of cases:
- Value - argument matches single value
- Values - argument matches one of multiple values
- Type - argument matches a type
- Else - argument does not match any previous cases
If no cases are valid, a MatchError is thrown. There are no ‘fall-throughs’ like in switch statements.
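To make the dispatch described above concrete, here is a minimal, dependency-free sketch of this matching style. It is a simplified illustration, not the library's actual implementation (which also carries the type-checking machinery shown earlier):

```python
# Each case pairs a predicate with a lazily invoked action; the first
# predicate that accepts the value wins, and there is no fall-through.
class MatchError(Exception):
    pass

def Value(v, action):
    return (lambda x: x == v, action)

def Values(vs, action):
    return (lambda x: x in vs, action)

def Type(t, action):
    return (lambda x: isinstance(x, t), action)

def Else(action):
    return (lambda x: True, action)

def match(value, cases):
    for predicate, action in cases:
        if predicate(value):
            return action(value)
    raise MatchError(f"{value!r} doesn't match any of the provided clauses")

def power_level(hero):
    return match(hero, [
        Values(("Goku", "Vegeta"), lambda h: 9001),
        Value("Iron Man", lambda h: 616),
        Else(lambda h: 0),
    ])

print(power_level("Goku"))  # 9001
```

Because the actions are plain callables evaluated only on a match, a case list can safely reference itself, which is what makes the recursive Fibonacci example later in this page work.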
Why use pattern matching over if/else?
For the large majority of code that isn’t performance-sensitive, there are a lot of great reasons why you’d want to use pattern matching over if/else:
- it enforces a common return value and type for each of your branches (when using type definitions)
- in languages with exhaustiveness checks, it forces you to explicitly consider all cases and noop the ones you don’t need
- it prevents early returns, which become harder to reason about if they cascade, grow in number, or the branches grow longer than the height of your screen (at which point they become invisible). Having an extra level of indentation goes a long way towards warning you you’re inside a scope.
- it can help you identify logic to pull out, rewriting it into a more DRY, debuggable, and testable form.
A longer example
Let’s do an example! We’re building a webapp, and we need to authenticate our users and update them on their status. Here’s a straightforward solution:
if isinstance(user, BlacklistedUser): warnBlacklistMonitor() return elif user.password == enteredPassword: login() print("You're logged in!") else: onUserFailedLogin() print("Mistyped your password? Try again or do a password reset.")
This code works. Let’s see how a pattern matching solution stacks up:
from matcher import Matcher m = Matcher[User, None]() m2 = Matcher[str, str]() m.match(user, [ m.Type(BlacklistedUser, lambda user: warnBlacklistMonitor()), m.Else(lambda user: print( m2.match(enteredPassword, [ m2.Value(user.password, lambda password: login(), lambda password: "You're logged in!"), m2.Else(lambda password: onUserFailedLogin(), lambda password: f"Your password isn't {password}!") ]) )) ])
It’s immediately clear that there are 3 return points, and that 2 of them are dependent on the other one. We’ve factored out the print statement, which’ll make debugging / testing easier down the line. And lastly, all the return points consistently return nothing.
A more fun example
We can also calculate Fibonacci numbers using matching!
from matcher import Matcher m = Matcher[int, int]() cases = [ m.Values([1, 2], lambda n: 1), m.Else(lambda n: m.match(n-1, cases) + m.match(n-2, cases)) ] print(m.match(10, cases)) # 55
This is more in line with the functional definition that fib(1) == fib(2) == 1, and fib(n) == fib(n-1) + fib(n-2). Due to the lazy evaluation of the actions provided to the cases, we can use recursion.
Covers application checkpoint/restart, overall design, interfaces, usage, shared objects, and checkpoint image format.

 Documentation/checkpoint/ckpt.c        |  32 ++++++
 Documentation/checkpoint/internals.txt | 125 ++++++++++++++++++++++++++
 Documentation/checkpoint/readme.txt    | 104 ++++++++++++++++++++++
 Documentation/checkpoint/rstr.c        |  20 ++++
 Documentation/checkpoint/security.txt  |  38 ++++++++
 Documentation/checkpoint/self.c        |  57 ++++++++++++
 Documentation/checkpoint/test.c        |  48 ++++++++++
 Documentation/checkpoint/usage.txt     | 153 ++++++++++++++++++++++++++++++++
 8 files changed, 577 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/checkpoint/ckpt.c
 create mode 100644 Documentation/checkpoint/internals.txt
 create mode 100644 Documentation/checkpoint/readme.txt
 create mode 100644 Documentation/checkpoint/rstr.c
 create mode 100644 Documentation/checkpoint/security.txt
(The hash table itself is not saved as part of the+checkpoint image: it is constructed dynamically during both checkpoint+and restart, and discarded at the end of the operation).++Each shared object that is found is first looked up in the hash table.+On the first encounter, the object will not be found, so its state is+dumped, and the object is assigned a unique identifier and also stored+in the hash table. Subsequent lookups of that object in the hash table+will yield that entry, and then only the unique identifier is saved,+as opposed the entire state of the object.++During restart, shared objects are seen by their unique identifiers as+assigned during the checkpoint. Each shared object that it read in is+first looked up in the hash table. On the first encounter it will not+be found, meaning that the object needs to be created and its state+read in and restored. Then the object is added to the hash table, this+time indexed by its unique identifier. Subsequent lookups of the same+unique identifier in the hash table will yield that entry, and then+the existing object instance is reused instead of creating another one.++The interface for the hash table is the following:++cr_obj_get_by_ptr() - find the unique object reference (objref)+ of the object that is pointer to by ptr [checkpoint]++cr_obj_add_ptr() - add the object pointed to by ptr to the hash table+ if not already there, and fill its unique object reference (objref)++cr_obj_get_by_ref() - return the pointer to the object whose unique+ object reference is equal to objref [restart]++cr_obj_add_ref() - add the object with given unique object reference+ (objref), pointed to by ptr to the hash table. 
diff --git a/Documentation/checkpoint/readme.txt b/Documentation/checkpoint/readme.txt
new file mode 100644
--- /dev/null
+++ b/Documentation/checkpoint/readme.txt

... C/R products out there, as well as the research project Zap.

Two new system calls are introduced to provide C/R. In particular,
"pre-dump" works before freezing the container, e.g. the pre-copy for
live migration, and "post-dump" works after the container resumes
execution, e.g. write-back the data to secondary storage.

The restart code basically reads the saved kernel state from a file
descriptor, and re-creates the tasks and the resources they need to
resume execution. The restart code is executed by each task that is
restored in a new container to reconstruct its own state.

=== Current Implementation

* How useful is this code as it stands in real-world usage?

Right now, the application must be a single process that does not
share any resources with other processes. The only file descriptors
that may be open are simple files and directories; they may not
include devices, sockets, or pipes.

For an "external" checkpoint, the caller must first freeze (or stop)
the target process. For a "self" checkpoint, the application must be
specifically written to use the new system calls. The restart does not
yet preserve the pid of the original process, but will use whatever
pid it is given by the kernel.

What this means in practice is that it is useful for a simple
application doing computational work and input/output from/to files.

Currently, namespaces are not saved or restored. They will be treated
as a class of shared object. In particular, it is assumed that the
task's file system namespace is the "root" for the entire container.
It is also assumed that the same file system view is available for the
restart task(s). Otherwise, a file system snapshot is required.

* What additional work needs to be done to it?

We know this design can work. We have two commercial products and a
horde of academic projects doing it today using this basic design.
We're early in this particular implementation because we're trying to
release early and often.

diff --git a/Documentation/checkpoint/security.txt b/Documentation/checkpoint/security.txt
new file mode 100644
--- /dev/null
+++ b/Documentation/checkpoint/security.txt

===== Security consideration for Checkpoint-Restart =====

diff --git a/Documentation/checkpoint/usage.txt b/Documentation/checkpoint/usage.txt
new file mode 100644
--- /dev/null
+++ b/Documentation/checkpoint/usage.txt

===== How to use Checkpoint-Restart =====

The API consists of two new system calls:

* int sys_checkpoint(pid_t pid, int fd, unsigned long flags);

  Checkpoint a container whose init task is identified by pid, to
  the file designated by fd. 'flags' will have future meaning (must
  be 0 for now).

  Returns: a positive checkpoint identifier (crid) upon success, 0
  if it returns from a restart, and -1 if an error occurs.

  'crid' uniquely identifies a checkpoint image. For each checkpoint
  the kernel allocates a unique 'crid' that remains valid for as
  long as the checkpoint is kept in the kernel (for instance, when a
  checkpoint, or a partial checkpoint, may reside in kernel memory).

* int sys_restart(int crid, int fd, unsigned long flags);

  Restart a container from a checkpoint image that is read from the
  blob stored in the file designated by fd. 'crid' will have future
  meaning (must be 0 for now). 'flags' will have future meaning
  (must be 0 for now).

  The role of 'crid' is to identify the checkpoint image in the case
  that it remains in kernel memory; this will be useful to restart
  from a checkpoint image that is kept there.

  Returns: -1 if an error occurs, 0 on success when restarting from
  a "self" checkpoint, and the return value of the system call at the
  time of the checkpoint when restarting from an "external" checkpoint.

  If restarting from an "external" checkpoint, tasks that were
  executing a system call will observe the return value of that
  system call (as it was when interrupted for the act of taking the
  checkpoint), and tasks that were executing in user space will be
  ready to return there.

  Upon successful "external" restart, the container will end up in a
  frozen state.

The granularity of a checkpoint usually is a whole container.

If the caller passes a pid which does not refer to a container's init
task, then sys_checkpoint() will return -EINVAL. (This is because
with nested containers a task may belong to more than one container.)

Here is a code snippet that illustrates how a checkpoint is initiated
by a process in a container - the logic is similar to fork():

  ...
  crid = checkpoint(1, ...);
  ...

To illustrate how the API works, refer to these sample programs.
An "external" checkpoint is triggered by first freezing the target
process, or by sending SIGSTOP.

--
1.5.4.3
Python client library for Google Maps Platform
Python Client for Google Maps Services
Description
Use Python? Want to geocode something? Looking for directions?
Maybe matrices of directions? This library brings the Google Maps Platform Web
Services to your Python application.
The Python Client for Google Maps Services is a Python Client library for the following Google Maps APIs:
- Directions API
- Distance Matrix API
- Elevation API
- Geocoding API
- Geolocation API
- Time Zone API
- Roads API
- Places API
Keep in mind that the same terms and conditions apply to usage of the APIs when they're accessed through this library.
Support
This library is community supported. We're comfortable enough with the stability and features of the library that we want you to build real production applications on it. We will try to support, through Stack Overflow, the public and protected surface of the library and maintain backwards compatibility in the future; however, while the library is in version 0.x, we reserve the right to make backwards-incompatible changes. If we do remove some functionality (typically because better functionality exists or if the feature proved infeasible), our intention is to deprecate and give developers a year to update their code.
If you find a bug, or have a feature suggestion, please log an issue. If you'd like to contribute, please read contribute.
Requirements
- Python 2.7 or later.
- A Google Maps API key.
API Keys
Each Google Maps Web Service request requires an API key or client ID. API keys are generated in the 'Credentials' page of the 'APIs & Services' tab of Google Cloud console.
For even more information on getting started with Google Maps Platform and generating/restricting an API key, see Get Started with Google Maps Platform in our docs.
Important: This key should be kept secret on your server.
Installation
$ pip install -U googlemaps
Note that you will need requests 2.4.0 or higher if you want to specify connect/read timeouts.
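To make the timeout settings concrete, here is a minimal sketch. The keyword names connect_timeout and read_timeout below are assumptions drawn from the googlemaps.Client signature at the time of writing; verify them against the version you install. The kwargs are built as a plain dict so the snippet runs even without the library present:

```python
# Hedged sketch: separate connect/read timeouts need requests 2.4.0+
# under the hood. The keyword names here are assumptions; check the
# googlemaps.Client signature in your installed version.
client_kwargs = {
    "key": "Add Your Key here",
    "connect_timeout": 5,   # seconds allowed to establish the connection
    "read_timeout": 10,     # seconds allowed to wait for the response
}

# With the library installed, the client would be built like this:
# import googlemaps
# gmaps = googlemaps.Client(**client_kwargs)
```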
Usage
This example uses the Geocoding API and the Directions API with an API key:
import googlemaps
from datetime import datetime

gmaps = googlemaps.Client(key='Add Your Key here')

# Geocoding an address
geocode_result = gmaps.geocode('1600 Amphitheatre Parkway, Mountain View, CA')

# Look up an address with reverse geocoding
reverse_geocode_result = gmaps.reverse_geocode((40.714224, -73.961452))

# Request directions via public transit
now = datetime.now()
directions_result = gmaps.directions("Sydney Town Hall",
                                     "Parramatta, NSW",
                                     mode="transit",
                                     departure_time=now)
Below is the same example, using client ID and client secret (digital
signature) for authentication. This code assumes you have previously
loaded the client_id and client_secret variables with appropriate
values. For a guide on how to generate the client_secret (digital
signature), see the documentation for the API you're using. For
example, see the guide for the Directions API.
gmaps = googlemaps.Client(client_id=client_id, client_secret=client_secret)

# Geocoding an address
geocode_result = gmaps.geocode('1600 Amphitheatre Parkway, Mountain View, CA')
For more usage examples, check out the tests.
Features
Retry on Failure
Automatically retry when intermittent failures occur. That is, when any of the retriable 5xx errors is returned from the API.
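The retry loop lives inside the client, so user code does not have to implement it. Still, a simplified, self-contained sketch (not the library's actual code) helps make the behavior concrete; the backoff constants below are illustrative assumptions, not the library's real values:

```python
import random
import time

def call_with_retries(request_fn, max_retries=4, base_delay=0.5):
    """Retry request_fn while it returns a retriable 5xx status code.

    Simplified illustration only: the googlemaps client implements
    its own internal retry loop, so user code normally never needs this.
    """
    for attempt in range(max_retries + 1):
        status = request_fn()
        if status < 500:
            return status  # success, or a non-retriable 4xx error
        if attempt == max_retries:
            break
        # Exponential backoff with jitter so retries don't stampede.
        time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
    raise RuntimeError("server error persisted after retries")
```

Here request_fn stands in for any callable returning an HTTP status code; the real client retries the underlying web request itself.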
Client IDs
Google Maps APIs Premium Plan customers can use their client ID and secret to authenticate, instead of an API key.
Building the Project
# Installing nox
$ pip install nox

# Running tests
$ nox

# Generating documentation
$ nox -e docs

# Copy docs to gh-pages
$ nox -e docs && mv docs/_build/html generated_docs && git clean -Xdi && git checkout gh-pages
Documentation & resources
Getting started
- Generating/restricting an API key
- Authenticating with a client ID
API docs
- Google Maps Platform web services
- Directions API
- Distance Matrix API
- Elevation API
- Geocoding API
- Geolocation API
- Time Zone API
- Roads API
- Places API
Support
Changelog
All notable changes to this project will be documented in this file.
Unreleased
Changed
Added
Removed
v3.1.0
Changed
- Switched build system to use nox, pytest, and codecov. Added Python 3.7 to test framework.
- Set precision of truncated latitude and longitude floats to 8 decimals instead of 6.
- Minimum version of requests increased.
- Session token parameter added to place().
- Fixed issue where headers in request_kwargs were being overridden.
Added
- Automation for PyPi uploads.
- Long description to package.
- Added tests to manifest and tarball.
Removed
- Removed places_autocomplete_session_token, which can be replaced with uuid.uuid4().hex.
- Removed deprecated places_radar.
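The removed helper can be replaced with a one-liner: a Places session token is just an opaque, reasonably unique string. The commented calls are a hedged usage sketch only; they assume a configured googlemaps.Client named gmaps and a valid API key:

```python
import uuid

# Drop-in replacement for the removed places_autocomplete_session_token:
# any sufficiently unique opaque string works as a Places session token.
session_token = uuid.uuid4().hex  # 32 lowercase hex characters

# Hedged usage sketch, assuming `gmaps` is a configured googlemaps.Client:
# predictions = gmaps.places_autocomplete("Sydney Town", session_token=session_token)
# details = gmaps.place(predictions[0]["place_id"], session_token=session_token)
```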
Note: Start of changelog is 2019-08-27, v3.0.2.
THE WASHINGTON HERALD, SUNDAY, DECEMBER 8, 1912.
Twelve of the Thirteen Articles of
Impeachment Yet to Be
Presented.
SUIT NOW MOVING SLOWLY
Twelve of the thirteen articles of impeachment against Judge Robert W.
Archbald, of the United States Commerce Court, were yet to be presented
when the fifth session of the trial in the Senate began yesterday. Four
witnesses, G. A. Richardson, William P. Boland, J. H. Rittenhouse, and
Richard Bradley, had still to be heard upon the first article. This
article concerned Archbald's attempt to buy the Katydid culm bank,
owned by the Erie Railroad, at a time when the Erie had two cases
pending in his court.
Richardson, who was sick in New York Friday when his name was called,
was to testify to Archbald's visit to his office in connection with the
proposed culm purchase by Archbald and E. J. Williams. Richardson is
vice president of the Erie and vice president also of the Hillside Coal
and Iron Company, owner of the culm pile. Boland was called to explain
the drawing up of the agreement for purchase, in which Archbald is
referred to only as a "silent party." Rittenhouse is the man who made a
survey of the culm bank and estimated its value. Bradley is the coal
operator to whom Archbald and Williams tried to sell the Katydid.
The second article deals with the aid given by Archbald to an attempt
made to sell stock of the Marion Coal Company to the Lackawanna
Railroad at a time when a suit by the owners of the Marion Company
against the railroad was before the Interstate Commerce Commission and
subject to review by Archbald's court. It was expected yesterday to
bring out a sharp clash between the House managers of the prosecution
and Attorney Worthington, for the defense.
The original deposition made by Williams before Wrisley Brown, special
agent of the Department of Justice, at Scranton last spring, in which
Williams described an interview between Judge Archbald and Manager May
of the Katydid culm dump, was read to the Senate yesterday following
the completion of the testimony of Charles F. Conn, of the Lackawanna
Railroad. In this statement Williams asserted that Judge Archbald
became quite excited when Williams reported to him that May curtly
refused to sell.
"He told me to go back again next day and that he would see Brownell,
the attorney for the Erie, about it."
Richard Bradley described the contract into which he entered to buy the
Katydid and the recall of the contract by Manager May, of the Hillside
Company, in a letter which called his attention to the filing of
adverse claims against the property. All of these claims were made
immediately following the first rumors of an investigation of
Archbald's conduct.
SOLVE MANY THEFTS.
Police Believe Arrest of Negro Explains Apartment Pilfering.
Mysterious thefts in the Mamo Apartments at 12 Twelfth Street Northwest
were solved last night, according to the police, when Detective Barbee,
of the Second Precinct, arrested Major Rankin, a negro, eighteen years
old, on a charge of housebreaking and grand larceny.
According to the police, Rankin confessed that he entered the
apartments of Miss Ita Boykin in December and stole a ring, set with a
pearl and diamond, and a bracelet.
The negro admits he opened a rear window and entered the apartments of
William McMahon on December 6 and made away with clothing. The property
will be recovered in a few days.
Barbee picked up Rankin at Twelfth and V streets Northwest. The negro
had been working at various kinds of employment for occupants of the
apartments.
Alabama Mob Lynches Negro.
Butler, Ala., Dec. 7. - A mob to-day lynched Azariah Curtis, colored,
for the murder of B. B. Buch, a planter, who was killed when Curtis and
three companions attempted to rob him. The prisoner was forcibly taken
from the jail and hanged near the scene of his crime.
Law Partner of Arthur Dies.
Easton, Pa., Dec. 7. - Gen. Frank Reeder, once law partner of Chester
A. Arthur and former Secretary of the Commonwealth, died here to-day.
His father, Andrew Reeder, was first Governor of Kansas.
CREAM, MILK
AND
TYPHOID, Etc.
In the typhoid epidemic of Cassel in 1900 (over 300 cases within 10
days) only those who drank raw milk contracted the disease.
In hospitals where a change was made from raw to properly pasteurized
milk, typhoid conditions immediately improved and the mortality rate
decreased. (Edsall.)
It has been found that among patrons of dairies supplying properly
pasteurized milk and cream there occur but very few cases of typhoid.
(Rosenau.)
Disease germs rise with or cling to the cream, which contains at least
twelve times as many bacteria by volume as the whole milk from which it
was separated. In Japan, where little cow's milk is used, scarlet fever
is practically unknown. (Hall.)
Milk trusts and others have spread the report that pasteurizing is
harmful. Able sanitarians have often disproved this. Children and
persons in a run-down condition should not drink raw milk and cream;
it is rarely safe to do so.
Properly pasteurizing means heating to 140 degrees Fahrenheit for
twenty minutes. Home pasteurizing is just as efficient. Commercial
pasteurization is unreliable.
MORAL:
Either buy only properly pasteurized milk and cream or home-pasteurize
it by bringing it to near boiling, then cool and keep cold and covered
until used.
Society for Prevention of Sickness
BERLINER, Secretary.
GOVERNORS PREPARE
TO BUILD UP G. O. P.
Continued from Page One.
the party cap stand together. The Roose
velt people contended In the conference
that the only ground on which a reor
ganization could be brought about would
be through the acceptance by a Repub
lican convention of most of'the principles
advocated In the Bull Moose platform.
Immediately after the conference Gov.
Hadley of Missouri gave out a formal
statement of his own v,lews on the ques
tion of party reorganization. Here Is
the statement:
Gov. Hadley's Statement.
"I have felt and acted upon the theory that the Republican party has
not outlived its usefulness as an agency of good government, but I do
not agree with those who contend that all that is necessary for us to
do is to sit still or to stand pat in the hope and belief that
Democratic mistakes or general business depression will restore the
Republican party to power.
"I believe the fact that 4,000,000 voters who voted the Republican
ticket in 1908 refused to vote it in 1912 requires a careful
consideration as to the reason of their action, and as to what may be
done to correct conditions to which they have objected.
"One of the conditions which has been the cause of objection within
the party for years has been the present basis of representation from
Southern States. Another is the party's attitude toward direct
primaries for the election of delegates to national conventions. I believe
that such changes should be made in the existing rules for the conduct
of party affairs in both of these matters that there can be no question
but that the will of the majority will control both as to policy and as
to candidates. We Republicans who believe in progressive policies can
not ask that those who are conservative should change their opinions,
but we ought all to be able to agree that the conventions that settle
these questions shall be truly representative of the voters, not a
political or official authority, and that they shall express the wishes
of the majority.
"At what time and in what way changes should be made, it is in my
opinion too early to say. There should be ample time for the prejudices
and feelings aroused by the recent contest to subside. I believe that
prior to the beginning of the campaign of 1914, probably within the
next year, there should be called a national convention of the
Republican party to consider these questions or any other matters that
it might then deem advisable to consider."
Carroll Optimistic.
Gov. Carroll, of Iowa, said that he felt, after the short talks that he
had listened to, that the situation of the Republican party might not
be considered as bad as certain of its enemies in the other two parties
were inclined to consider.
"I am not a prophet," said Gov. Carroll, "but I will say this much: We
have not seen the last national Republican victory by a long chalk, but
I am not so sure as to just when the next victory will be."
Gov. Joseph M. Carey, of Wyoming, emerging from the conference before
it was ended, said:
"I have nothing to tell the reporters. I can't interest any of you, for
I'm a Bull Moose and you all know it."
Gov. William Glasscock, of West Virginia, said:
"It was only a free and open expression of the views of the twelve
gentlemen. It is rather too early to figure on plans for
reorganization. But you must remember that we are not in a position to
figure on any scheme that eliminates Theodore Roosevelt."
Gov. Glasscock was asked if he meant the elimination of Theodore
Roosevelt from the Republican party.
"No, but I mean more than that," he said. "I mean that Roosevelt as a
factor in our whole national life is a factor to be reckoned with by
everybody."
Gov. Hadley in the course of the day had talks with Senator Crane and
other Taft leaders. He impressed upon them the necessity of
co-operating in any plans for the calling of another national
convention and the reorganization of the party.
"You are wrong in suggesting that a suitable committee to reorganize
the Republican party would be Gov. Hadley, Mr. McCall, of
Massachusetts, and myself," wrote Senator Bristow, of Kansas, to a
friend to-day. "My choice for such a committee is Theodore Roosevelt,
Joe Cannon, Senator La Follette, Senator Root, Gov. Johnson, Senator
Lodge, and Senator Cummins. Agreement upon fundamental principles is
necessary to any party. When these gentlemen can agree upon the
principles which must underlie a great party, let them report."
"I have not a word to say," was Senator La Follette's comment on the
letter. "I can't help their using my name," he added.
Visitors at White House
Continued from Page One.
been seeking to wrest from nature her
secrets, so that the farming of the coun
try shall be done on better scientific
principles, and the rate of production per
acre shall be Increased
."eed Mod.
It is well, and indeed it is necessary, that these new methods should
be studied and adopted if we are to bring about necessary improvement,
but our farmers can hardly do this unless in some way the additional
capital is furnished them, which is indispensable to such an
improvement of agricultural methods. We have great capital in this
country, and we have farming property that is producing farm products
of immense value. It would seem clear that with these two elements it
would be possible to introduce a third, by which the farmer engaged in
producing the crops should be able, in view of what he produces and the
value of the land on which it is produced, to obtain money on the faith
of the land and on the faith of the products which will enable him to
expand his acreage and better his methods of cultivation and
production. This is a field in which those who are clamoring for
progress and who are looking to the government to furnish ways of
progress may well devote their attention, for this is real and
practical.
"An easy exchange between capital and farmers," Mr. Taft continued,
"with proper security, has been established in European countries,
where the rate of interest has been lowered so that the farmer is on
practically the same basis of advantage in the borrowing of money to
aid his farming as the business man is in borrowing money to aid and
carry on his business. If this can be done abroad it can be done here,
and if abroad we find that government institutions adapted to form the
conduit pipe between capitalists and farmers are successfully
operating, why should we not adopt them here?"
Conditions Different.
"I am quite willing to agree that conditions here are different from
those in Europe, and that such conditions may make necessary a
modification of the methods adopted to produce the flow of capital to
the farms and the return of proper security to the capitalist; but the
general plans adopted abroad can be amended to suit the peculiarities
of the present conditions, and a convention of Governors, representing
all the States of the Union, is the place where such methods ought to
be discussed with a view to adopting uniform legislation in all the
States to secure the desired end."
In concluding, Mr. Taft said:
"There is no subject matter of greater importance to the people of the
United States than the improvement of agricultural methods, keeping
them up to date in all agricultural communities, the securing of profit
to the farmer, the attraction of the young men of the country to
farming as a lucrative profession, and the lowering of the cost of
producing agricultural products and the lowering of their prices to the
consumers."
Herrick Also Speaks.
Addresses were also made by Secretary Wilson, Senator Fletcher, and
Ambassador Herrick, who, in a report to the State Department several
months ago, made the original suggestion on the subject of land credit
schemes which prompted President Taft to refer to the conference of
Governors a consideration of the plan.
"In the United States," Mr. Herrick said, "the land mortgage business
is not specialized except in the cities, and never has an effective
attempt been made to adopt the principle of amortization, without which
a land credit system remains an uncompleted and fragile scaffold,
endangering the good name and the public welfare of any country
trusting upon it. Amortization, however, is the all-dominating feature
of the land mortgage systems of all European nations. The average
length of a loan is about thirty years, and many run for seventy-five
years. There is no land mortgage system in Europe which could be
transplanted bodily into the United States. We should have to take the
best principles, such as the associated guaranty of the Landschaften,
the government supervision, and amortization methods, and construct a
new machinery adapted to American needs and business habits."
M. S. THOMPSON FIRST
EMPLOYE TO APPEAL
DIRECT FOR A "RAISE"
M. S. Thompson, an employe of the Navy Department, is the first man in
the civil service of the government to take advantage of a new law
passed by Congress at the last session which allows government workers
to send petitions direct to Congress without consulting their bureau
chiefs. Thompson asks Congress, in a document printed yesterday, for an
increase in salary. Accompanying his request is a letter from the
Secretary of the Navy saying that his position deserves no higher rate
of compensation than that which he now receives. The two letters are
printed together, and the whole document is referred to the House
Committee on Appropriations. More than a thousand copies were printed.
ASKS CHURCH FOR AID.
Madero Administration Appeals to Catholics to Urge Peace.
Mexico City, Dec. 7. - President Madero's government is now beseeching
the Roman Catholic Church to use its influence throughout the republic
to restore peace and confidence in the national administration. This is
the first time that the government has resumed political relations with
the clericals since the church and state were separated over half a
century ago.
Minister of the Interior Hernandez has petitioned the papal delegate to
Mexico to issue an order to all priests throughout the republic to urge
their congregations to remain loyal to Madero. The papal delegate
answered that he would have prayers for peace held in all Catholic
churches, but so far has gone no further in answering the government's
requests.
CURFEW "TOOTS" IN
SPOKANE TO WARN
CHILDREN OFF STREETS
Special to The Washington Herald.
Spokane, Wash., Dec. 7. - Some cities have the curfew ring; at least
one has the curfew blink; but Spokane is going to have it toot to-night
to warn the boys and girls to hurry home. The system of having electric
lights throughout the city blinked at the hour of 8 was suggested, but
was deemed impracticable, so the city commissioners arranged with a
number of factories to blow their whistles as a warning to the little
folks that their hour before the hearth had come.
LOOK FOR RUNAWAY BOY.
Trenton Police Ask for Aid in Finding Henry Doussard.
Maj. Sylvester last night received a request from Chief of Police John
Flears, of Trenton, N. J., to institute a search in Washington for
Henry Doussard, fifteen years old, who vanished from his home in
Trenton on November 12 and has not been heard from since.
The lad had been in this country only five weeks when he disappeared.
He cannot speak English, although he converses fluently in French and
German. The boy is five feet seven inches tall, has light hair and a
light complexion, light gray eyes, and small features.
When he vanished young Doussard wore a gray overcoat that he bought in
Germany, a dark coat and vest, dark trousers with a light stripe, and
lace shoes bearing the mark of a Trenton shoe firm.
TO DINE SULZER.
Gov.-elect Sulzer, of New York, it was announced yesterday, would be
the guest Tuesday evening at the Metropolitan Club at a dinner at which
Representative Fairchild, of New York, will be host. All the members of
the New York delegation in Congress are expected to attend, along with
Bird S. Coler, of New York City, and a number of prominent men from
that city.
VEHICLE OF DIVINE POWER.
Testament of Tolstoi Indicates He Believed His Writings Inspired.
Special Cable to The Washington Herald.
Paris, Dec. 7. - "If the people of the world wish to read my writings,
let them dwell upon those passages where I know the divine power has
been spoken through me, and let them profit throughout their lives."
This is one of the most striking passages in the diary of Count Leo
Tolstoy, which the Journal des Debats prints this evening as the
philosopher's hitherto unpublished testament. This testament was
replaced by a brief formal will dated July 17, 1910, and he requested
in it that it be regarded as his final testament if he did not make
another. He asked to be buried where he died with the least
ostentation. He said:
"Let there be no flowers, no wreaths, no discourses, and, if possible,
let the funeral take place without priests and without liturgy."
The diary, which was written under date of 1895, is published on the
authority of Count Sergius Tolstoy, the writer's son.
Woodrow Wilson will be the 11th Presbyterian to occupy the White House.
Salt water, in which they are washed, will free light-colored stockings
from leather stains.
PAY
LATER
Our Easy
$17 Arts and Crafts
Rocker or Armchair
In fumed oak or Early English finish. Built for long service. Loose
Spanish leather cushion.
REPORT ON WRECK
SCORES RAILROAD
New York. New Haven and Hart
ford Target for Caustic Criticism
from Commerce Commission.
The report of the Interstate Commerce Commission on the wreck on the
New York, New Haven and Hartford Railroad at Westport, Conn., on
October 3 last, in which nine persons were killed and a score or more
injured, is a strong indictment of the officials of this railroad and
points out that the deliberate ignoring by this road of recommendations
by the commission was responsible for the wreck, and that it was in
every way preventable. The wreck was due to the train taking a
cross-over at an excessive rate of speed.
The commission strongly sets forth the recommendations it made at the
time it investigated the Bridgeport wreck on the same road, in which
fourteen lives were lost and many injured, and says that following its
report "the whole result of such consideration as officials of this
railroad have given the subject is a pessimistic hopelessness indicated
by testimony that we are at our wits' end."
The report states that the operating vice president of the New Haven
road, when asked what hope he saw of being able to accomplish anything
to prevent the recurrence of these accidents, answered that he knew of
nothing that would absolutely prevent such recurrence.
Commenting on this the Commission says:
"The public interest involved and decent regard for the safety of the
lives of those who travel do not justify a great railroad in passively
waiting until some private inventor, at his own cost, develops to full
perfection appliances which will 'absolutely prevent the recurrence of
such accidents.'"
The report says that the recommendations of the Commission are not
mandatory, but "if railroad directors and managing officials remain
passive and give to such occurrences no such serious consideration as
the situation demands, then it becomes the duty of public officials to
bluntly and plainly point out to them their duties as trustees of the
safety of the traveling public."
SOCIETY MATRON SEEKS ESTATE.
Mrs. Nathaniel A. Campbell Sues for $100,000 Left by Grandfather.
New York, Dec. 7. - Mrs. Nathaniel A. Campbell, a society matron, of
Ardsley-on-the-Hudson, has begun a fight in the Easton (Pa.) courts for
a $100,000 estate left by her grandfather, John Knecht, an ironmaster.
Mrs. Campbell is one of the popular women members of the exclusive
Ardsley Club, in which Edwin Gould, John F. Havemeyer, Maitland Griggs,
and J. Allen Townsend are prominent.
Mr. Knecht was one of the founders of the Bethlehem Iron Company, and
helped finance the Lehigh Valley Railroad. He left an estate of over
$600,000 with the understanding that his two children were to have only
a life interest, the property then to pass to his grandchildren.
One of his daughters, Miss Anna May Knecht, who had the use of the
estate, died recently. His other daughter, Mrs. J. J. Detwiller, mother
of Mrs. Campbell, is also dead. What remains of the original Knecht
estate is now valued at $100,000. Mrs. Campbell bases her claim on the
provisions of the original Knecht will, which bequeathed the property
eventually to Mr. Knecht's grandchildren.
To Cure Constipation.
Do not dose the system with a lot of dangerous habit-forming drugs.
Physicians everywhere are now prescribing Lemon Seidlitz, the
good-tasting seidlitz powder. All druggists sell it.
WE GIVE HERALD $25,000 CONTEST VOTES.
LANSBURGH'S
512 NINTH STREET N. W.
Payment System Is Open to All
One Lot $2.50 Lace
Curtains, $1.45
While they last this is a wonderful chance
for gift buyers. Six artistic new designs
on firm, durable mesh that will wear. Full
length and width.
$5.00 Indian Couch Covers, $1.45
An assortment of strikingly beautiful designs in rich color
combinations. Three yards long and 60 inches wide.
For Sound Sleep Buy
This $10 All-felt
MATTRESS
At $7.35
Made in our own sanitary shop of clean, white felt, and covered with
good ticking.
NAVAL BATTLE
IN DARDANELLES
Continued from Page One.
...authorities of new cases that come to their knowledge are threatened
with imprisonment and fine.
The British government to-day placed the historic St. James Palace at
the disposal of the peace plenipotentiaries from Turkey and the Balkan
states who will meet here on December 13.
Greece has not yet signed the armistice and has given no definite
intimation of her intentions. A well-founded report from Vienna states
that Greece will enter into separate negotiations for peace with
Turkey, probably at Vienna.
AGREES TO CONFERENCE.
Vienna, Dec. 7. - The semi-official Fremdenblatt states that
Austria-Hungary has agreed to the British proposal to hold a conference
of ambassadors on the Balkan affairs.
RENEW TRIPLE ALLIANCE.
Berlin, Dec. 7. - Official announcement was made to-day that Germany,
Italy, and Austria have renewed the Triple Alliance without
alterations. This expression of confidence by each of the three members
is regarded as significant in the present international controversy.
Conditions in Capital Improved.
Ambassador Rockhill telegraphed the State Department yesterday that
sanitary conditions at Constantinople are greatly improved.
Send More Funds.
The American Red Cross yesterday sent $1,000 to Turkey, with additional
sums each to the war relief funds of Montenegro, Bulgaria, and Servia.
These contributions add to the total already sent to the Balkans.
PLAN DOUBLE MARRIAGE
TO BRING PEACE IN BALKANS
Paris, Dec. 7. - A rumor in circulation here places the diplomats of
Roumania and Bulgaria in the position of trying to bring about peace in
the Balkans by the arrangement of a double marriage.
They are working hard, it is said, to bring about an alliance between
the eldest son of the Prince of Roumania, Prince Carol, aged nineteen,
and Princess Eudoxia, aged fifteen, eldest daughter of King Ferdinand,
on the one side, and on the other between Prince Boris, aged nineteen,
the future king of the Bulgarians, and the Roumanian princess,
Elizabeth of Hohenzollern, aged nineteen, sister of Prince Carol. If
this plan is realized it will result in a remarkable solution of one
phase of the Balkan problem.
GENERAL COUNCIL OF
INDIANS DISCUSS
SETTLEMENT OF CLAIMS
Special to The Washington Herald.
Spokane, Wash., Dec. 7. - A general council of Indians of the Pacific
Northwest has just been concluded at Fort Spokane, Wash., where is
located the Colville reservation agency.
Six hundred Indians, many of them rich in lands and stock, gathered for
final settlement of claims to reservation allotments. Three hundred of
these came with a petition asking that they be adopted by the Colville
tribes so that they might procure a division of the land. All who are
adopted not only receive some of the land, but share in the money due
the tribe from the government under treaties.
A remarkable feature of the gathering was the fact that only one out of
ten wore the Indian blanket or was able to speak any of the tribal
languages.
Carload
Extension Tables
Special prices this week on
high-class tables in all finishes.
A Gift
"SPO-KANE" OR "SPO-KAN"? THAT
IS THE QUESTION
WORRYING SPOKANERS
Special to The Washington Herald.
Spokane, Wash., Dec. 7. - Residents of this city have received a sudden
shock in the announcement of Edward S. Curtis, noted Indian authority,
that the "a" should be long. Battle lines are drawn closely and the
argument waxes warmer as the days pass.
Back East nearly all people call it "Spo-kane," with the "a" long as in
"cane." When they come West with this pronunciation they are frowned
down as tenderfeet, and are educated to say "Spo-kan."
Now comes the edict of the Indian expert, and orthographers and
etymologists have had their two score years of peace shattered.
Meanwhile, old-timers are clinging tenaciously to the short "a," lest
they be designated as tenderfeet by extremists. A board of arbitration
has been suggested to settle the dispute.
PROGRESSIVES SEEM TO HAVE CARRIED CALIFORNIA

Sacramento, Cal., Dec. 7. With totals in the election returns from Los Angeles, the Progressives (taking Elector K. J. Wallace's total) carried the State by 171 votes (taking Elector Thomas Griffin's total for the Democratic vote), and the Progressives elected eleven, while the Democrats elected two Presidential electors.

These figures are final, but they cannot be made official until the Los Angeles returns are audited by the Secretary of State and certified by the Governor, showing the totals for the entire State. This certification, so Secretary of State Jordan announced to-day, will be made immediately on the completion of the Los Angeles audit.
Pigeon Flies 3,000 Miles.

Montreal, Dec. 7. Ernest Robinson of Westmount, Canada, received word to-day that one of a flock of pigeons he imported from England, which had escaped, had returned to its English home. The distance is 3,000 miles. Apparently the pigeon's flight took twelve days.
A French scientist has suggested an international monetary standard which he claims is adapted to all values now in use, the value of the basic unit being 8 cents.
FREE TO YOU, MY SISTER

want to continue, it will cost you only about 12 cents a week, or less than two cents a day. It will not interfere with your work or occupation. Just send me your name and address, tell me how you suffer if you wish, and I will send you the treatment for your case, entirely free, in plain wrapper, by return mail. I will also send you free of cost my book, with explanatory illustrations showing why women suffer, and how they can easily cure themselves at home. Every woman should have it, and learn to think for herself. Then when the doctor says "You must have an operation," you can decide for yourself. Thousands of women have cured themselves with my home remedy. It cures all, old or young. To Mothers of Daughters, I will explain a simple home treatment which speedily and effectually cures Leucorrhoea, Green Sickness and Painful or Irregular Menstruation in young ladies. Plumpness and health always result from its use.

Wherever you live, I can refer you to ladies of your own locality who know and will gladly tell any sufferer that this Home Treatment really cures all women's diseases, and makes women well, strong, plump and robust. Just send me your address, and the free ten days' treatment is yours, also the book. Write to-day, as you may not see this offer again. Address

MRS. M. SUMMERS, Box H, Notre Dame, Ind., U.S.A.
NO EXTRA CHARGE FOR CREDIT

CO.

This $19 Quartered Oak Extension Table has a massive pedestal and claw feet, and is beautifully hand-polished. Sure to Please.

This Golden Oak Chiffonier, $26 Value. Solid quartered oak in colonial style. Large French plate beveled mirror.
$100 TIPS FOR HEAD WAITERS

$25 and $50 Fees Common at White Light Hostelries.

New York, Dec. 7. New York waiters and head waiters were amused to-day over a dispatch from Philadelphia regarding the astonishment of the head waiter of the Hotel Stafford when he got a $5 tip. The donor is reported to be a man of this city.

"It is not unusual for the head waiter of one of our big hotels in New York to get such a tip," said the manager of one white light hostelry to-day. As for $50 or $100 tips, they are so frequently received as to be almost common. These large perquisites are received as a rule from guests who have been in the hotel for a length of time and are about to leave, but often liberal guests of only a few days present the head of the dining room staff with as much as $25 or $50.
BIBLICAL QUOTATION LEADS PASTOR TO KILL PARISH COMMISSIONER

Greensburg, La., Dec. 7. Nat Lindsay, parish commissioner, was killed and his son Charles fatally wounded by the Rev. Fleet Harrell in a quarrel as to the origin of a biblical quotation after an argument in a grocery store.

The minister was seriously wounded.
Woman Stowaway on Battleship.

Galveston, Tex., Dec. 7. A woman stowaway was discovered on the United States battleship Kansas when she entered Galveston Bay to-day with Admiral Fletcher's fleet. The woman was found hidden in the coal bunkers, clad in man's clothing. She refused to divulge her name. Sailors made up a purse to defray her expenses for her journey to Philadelphia, where she said she lived.
Offers Work to Ex-convicts.

Philadelphia, Dec. 7. The Bethlehem Steel Company, Charles M. Schwab's corporation, has asked Sheriff Meredith, of Bucks County, Pa., whether it would be possible to obtain the services of 100 men who have been released from the Bucks County jail, offering regular employment in the coke department at wages ranging from seventeen and one-half cents to twenty cents per hour.
Two Pennsylvanians have patented a can opener in which the cutting blade slides on the handle bar, making as neat a job with a rectangular can as with a round one.
Free to You and Every Sister Suffering from Woman's Ailments.

I am a woman. I know woman's sufferings. I have found the cure.

I will mail, free of any charge, my home treatment with full instructions to any sufferer from woman's ailments. I want to tell all women about this cure: you, my reader, for yourself, your daughter, your mother, or your sister. I want to tell you how to cure yourselves at home without the help of a doctor. Men cannot understand women's sufferings. What we women know from experience, we know better than any doctor. I know that my home treatment is a safe and sure cure for Leucorrhoea or Whitish discharges, Ulceration, Displacement or Falling of the Womb, Profuse, Scanty or Painful Periods, Uterine or Ovarian Tumors or Growths; also pains in the head, back and bowels, bearing down feelings, nervousness, creeping feeling up the spine, melancholy, desire to cry, hot flashes, weariness, kidney and bladder troubles where caused by weakness peculiar to our sex.

I want to send you a complete ten days' treatment entirely free to prove to you that you can cure yourself at home, easily, quickly and surely. Remember, that it will cost you nothing to give the treatment a complete trial; and if you
str

The string to search for in the intern pool. To add a string to the intern pool, use the string.Intern(string) method.

This method does not return a Boolean value. If you call the method because you want a Boolean value that indicates whether a particular string is interned, you can test whether its return value is null, as the following example does.

Starting with the .NET Framework version 2.0, you can override the use of the intern pool when you use the Native Image Generator (Ngen.exe) to install an assembly to the native image cache on a local computer. For more information, see Performance Considerations in the Remarks section for the string.Intern(string) method.

The following example demonstrates the string.IsInterned(string) method.

C# Example

using System;
using System.Text;

public class StringExample
{
    public static void Main()
    {
        // The string is built at run time, so the compiler does not
        // place it in the intern pool.
        String s1 = new StringBuilder().Append("My").Append("Test").ToString();
        Console.WriteLine(String.IsInterned(s1) != null);
    }
}

The output is

False
Tutorial
Introduction
This tutorial tries to help newcomers to cats-effect to get familiar with its
main concepts by means of code examples, in a learn-by-doing fashion. Two small
programs will be coded. The first one copies the contents from one file to
another, safely handling resources in the process. That should help us to flex
our muscles. The second one is a bit more elaborated, it is a light TCP server
able to attend concurrent connections. In both cases complexity will grow as we
add more features, which will allow us to introduce more and more concepts from
cats-effect. Also, while the first example is focused on
IO, the second one
will shift towards polymorphic functions that make use of cats-effect type
classes and do not tie our code to
IO.
This tutorial assumes certain familiarity with functional programming. It is
also a good idea to read cats-effect documentation prior to starting this
tutorial, at least the excellent documentation about
IO data
type.
Please read this tutorial as training material, not as a best-practices document. As you gain more experience with cats-effect, probably you will find your own solutions to deal with the problems presented here. Also, bear in mind that using cats-effect for copying files or building TCP servers is suitable for a ‘getting things done’ approach, but for more complex systems/settings/requirements you might want to take a look at fs2 or Monix to find powerful network and file abstractions that integrate with cats-effect. But that is beyond the purpose of this tutorial, which focuses solely on cats-effect.
That said, let’s go!
Setting things up
This Github repo includes all
the software that will be developed during this tutorial. It uses
sbt as the
build tool. To ease coding, compiling and running the code snippets in this
tutorial it is recommended to use the same
build.sbt, or at least one with the
same dependencies and compilation options:
name := "cats-effect-tutorial"

version := "1.0"

scalaVersion := "2.12.8"

libraryDependencies += "org.typelevel" %% "cats-effect" % "1.0.0" withSources() withJavadoc()

scalacOptions ++= Seq(
  "-feature",
  "-deprecation",
  "-unchecked",
  "-language:postfixOps",
  "-language:higherKinds",
  "-Ypartial-unification")
Code snippets in this tutorial can be pasted and compiled right in the scala console of the project defined above (or any project with similar settings).
Copying contents of a file - safely handling resources
Our goal is to create a program that copies files. First we will work on a function that carries such task, and then we will create a program that can be invoked from the shell and uses that function.
First of all we must code the function that copies the content from a file to
another file. The function takes the source and destination files as parameters.
But this is functional programming! So invoking the function shall not copy
anything, instead it will return an
IO instance that encapsulates all the
side effects involved (opening/closing files, reading/writing content), that way
purity is kept. Only when that
IO instance is evaluated all those
side-effectful actions will be run. In our implementation the
IO instance will
return the amount of bytes copied upon execution, but this is just a design
decision. Of course errors can occur, but when working with any
IO those
should be embedded in the
IO instance. That is, no exception is raised outside
the
IO and so no
try (or the like) needs to be used when using the function,
instead the
IO evaluation will fail and the
IO instance will carry the error
raised.
Now, the signature of our function looks like this:
import cats.effect.IO
import java.io.File

def copy(origin: File, destination: File): IO[Long] = ???
Nothing scary, right? As we said before, the function just returns an IO
IO
instance. When run, all side-effects will be actually executed and the
IO
instance will return the bytes copied in a
Long (note that
IO is
parameterized by the return type). Now, let’s start implementing our function.
First, we need to open two streams that will read and write file contents.
Acquiring and releasing
Resources
We consider opening a stream to be a side-effect action, so we have to
encapsulate those actions in their own
IO instances. For this, we will make
use of cats-effect
Resource, which allows us to orderly create, use and then
release resources. See this code:
import cats.effect.{IO, Resource}
import cats.implicits._
import java.io._

def inputStream(f: File): Resource[IO, FileInputStream] =
  Resource.make {
    IO(new FileInputStream(f))                         // build
  } { inStream =>
    IO(inStream.close()).handleErrorWith(_ => IO.unit) // release
  }

def outputStream(f: File): Resource[IO, FileOutputStream] =
  Resource.make {
    IO(new FileOutputStream(f))                          // build
  } { outStream =>
    IO(outStream.close()).handleErrorWith(_ => IO.unit)  // release
  }

def inputOutputStreams(in: File, out: File): Resource[IO, (InputStream, OutputStream)] =
  for {
    inStream  <- inputStream(in)
    outStream <- outputStream(out)
  } yield (inStream, outStream)
We want to ensure that streams are closed once we are done using them, no matter
what. That is precisely why we use
Resource in both
inputStream and
outputStream functions, each one returning one
Resource that encapsulates
the actions for opening and then closing each stream.
inputOutputStreams
encapsulates both resources in a single
Resource instance that will be
available once the creation of both streams has been successful, and only in
that case. As seen in the code above
Resource instances can be combined in
for-comprehensions as they implement
flatMap. Note also that when releasing
resources we must also take care of any possible error during the release
itself, for example with the
.handleErrorWith call as we do in the code above.
In this case we just swallow the error, but normally it should be at least
logged.
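As noted, in real code the swallowed error should at least be logged. A minimal variant of the release action that reports the failure instead of discarding it (the printing itself suspended in IO) could be sketched like this:

```scala
import cats.effect.IO
import java.io.FileInputStream

// Sketch: release action that logs closing failures to stderr
// instead of silently ignoring them
def closeStream(inStream: FileInputStream): IO[Unit] =
  IO(inStream.close()).handleErrorWith { e =>
    IO(System.err.println(s"Error closing input stream: ${e.getMessage}"))
  }
```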
Optionally we could have used
Resource.fromAutoCloseable to define our
resources, that method creates
Resource instances over objects that implement
java.lang.AutoCloseable interface without having to define how the resource is
released. So our
inputStream function would look like this:
import cats.effect.{IO, Resource}
import java.io.{File, FileInputStream}

def inputStream(f: File): Resource[IO, FileInputStream] =
  Resource.fromAutoCloseable(IO(new FileInputStream(f)))
That code is way simpler, but with that code we would not have control over what
would happen if the closing operation throws an exception. Also it could be that
we want to be aware when closing operations are being run, for example using
logs. In contrast, using
Resource.make allows us to easily control the actions
of the release phase.
Let’s go back to our
copy function, which now looks like this:
import cats.effect.{IO, Resource}
import java.io._

// as defined before
def inputOutputStreams(in: File, out: File): Resource[IO, (InputStream, OutputStream)] = ???

// transfer will do the real work
def transfer(origin: InputStream, destination: OutputStream): IO[Long] = ???

def copy(origin: File, destination: File): IO[Long] =
  inputOutputStreams(origin, destination).use { case (in, out) =>
    transfer(in, out)
  }
The new method
transfer will perform the actual copying of data, once the
resources (the streams) are obtained. When they are not needed anymore, whatever
the outcome of transfer (success or failure), both streams will be closed. If
any of the streams could not be obtained, then
transfer will not be run. Even
better, because of
Resource semantics, if there is any problem opening the
input file then the output file will not be opened. On the other hand, if there
is any issue opening the output file, then the input stream will be closed.
What about
bracket?
Now, if you are familiar with cats-effect’s
bracket you may be wondering why
we are not using it as it looks so similar to
Resource (and there is a good
reason for that:
Resource is based on
bracket). Ok, before moving forward it
is worth taking a look at
bracket.
There are three stages when using
bracket: resource acquisition, usage,
and release. Each stage is defined by an
IO instance. A fundamental
property is that the release stage will always be run regardless of whether the
usage stage finished correctly or an exception was thrown during its
execution. In our case, in the acquisition stage we would create the streams,
then in the usage stage we will copy the contents, and finally in the release
stage we will close the streams. Thus we could define our
copy function as
follows:
import cats.effect.IO
import cats.implicits._
import java.io._

// function inputOutputStreams not needed

// transfer will do the real work
def transfer(origin: InputStream, destination: OutputStream): IO[Long] = ???

def copy(origin: File, destination: File): IO[Long] = {
  val inIO: IO[InputStream]   = IO(new FileInputStream(origin))
  val outIO: IO[OutputStream] = IO(new FileOutputStream(destination))

  (inIO, outIO)              // Stage 1: Getting resources
    .tupled                  // From (IO[InputStream], IO[OutputStream]) to IO[(InputStream, OutputStream)]
    .bracket {
      case (in, out) =>
        transfer(in, out)    // Stage 2: Using resources (for copying data, in this case)
    } {
      case (in, out) =>      // Stage 3: Freeing resources
        (IO(in.close()), IO(out.close()))
          .tupled            // From (IO[Unit], IO[Unit]) to IO[(Unit, Unit)]
          .handleErrorWith(_ => IO.unit).void
    }
}
New
copy definition is more complex, even though the code as a whole is way
shorter as we do not need the
inputOutputStreams function. But there is a
catch in the code above. When using
bracket, if there is a problem when
getting resources in the first stage, then the release stage will not be run.
Now, in the code above, first the origin file and then the destination file are
opened (
tupled just reorganizes both
IO instances into a single one). So
what would happen if we successfully open the origin file (i.e. when
evaluating
inIO) but then an exception is raised when opening the destination
file (i.e. when evaluating
outIO)? In that case the origin stream will not
be closed! To solve this we should first get the first stream with one
bracket
call, and then the second stream with another
bracket call inside the first.
But, in a way, that’s precisely what we do when we
flatMap instances of
Resource. And the code looks cleaner too. So, while using
bracket directly
has its place,
Resource is likely to be a better choice when dealing with
multiple resources at once.
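To make the contrast concrete, here is a sketch of the nested-bracket approach just described (the helper signature is ours, for illustration only). It guarantees that the origin stream is closed even if opening the destination fails, which is essentially what flatMap on Resource gives us for free:

```scala
import cats.effect.IO
import java.io._

// Sketch: nested brackets ensure the first resource is released
// even if acquiring the second one fails.
def copyNested(origin: File, destination: File)(use: (InputStream, OutputStream) => IO[Long]): IO[Long] =
  IO(new FileInputStream(origin)).bracket { in =>
    IO(new FileOutputStream(destination)).bracket { out =>
      use(in, out) // both resources available here
    } { out => IO(out.close()).handleErrorWith(_ => IO.unit) }
  } { in => IO(in.close()).handleErrorWith(_ => IO.unit) }
```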
Copying data
Finally we have our streams ready to go! We have to focus now on coding
transfer. That function will have to define a loop that at each iteration
reads data from the input stream into a buffer, and then writes the buffer
contents into the output stream. At the same time, the loop will keep a counter
of the bytes transferred. To reuse the same buffer we should define it outside
the main loop, and leave the actual transmission of data to another function
transmit that uses that loop. Something like:
import cats.effect.IO
import cats.implicits._
import java.io._

def transmit(origin: InputStream, destination: OutputStream, buffer: Array[Byte], acc: Long): IO[Long] =
  for {
    amount <- IO(origin.read(buffer, 0, buffer.size))
    count  <- if (amount > -1) IO(destination.write(buffer, 0, amount)) >> transmit(origin, destination, buffer, acc + amount)
              else IO.pure(acc) // End of read stream reached (by java.io.InputStream contract), nothing to write
  } yield count // Returns the actual amount of bytes transmitted

def transfer(origin: InputStream, destination: OutputStream): IO[Long] =
  for {
    buffer <- IO(new Array[Byte](1024 * 10)) // Allocated only when the IO is evaluated
    total  <- transmit(origin, destination, buffer, 0L)
  } yield total
Take a look at
transmit, observe that both input and output actions are
encapsulated in (suspended in)
IO.
IO being a monad, we can sequence them
using a for-comprehension to create another
IO. The for-comprehension loops as
long as the call to
read() does not return a negative value that would signal
that the end of the stream has been reached.
>> is a Cats operator to sequence
two operations where the output of the first is not needed by the second (i.e.
it is equivalent to
first.flatMap(_ => second)). In the code above that means
that after each write operation we recursively call
transmit again, but as
IO is stack safe we are not concerned about stack overflow issues. At each
iteration we increase the counter
acc with the amount of bytes read at that
iteration.
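The equivalence between >> and flatMap mentioned above can be seen in a tiny standalone sketch:

```scala
import cats.effect.IO
import cats.implicits._

val first  = IO(println("first"))
val second = IO(println("second"))

// These two programs are equivalent: the result of `first` is discarded
val a: IO[Unit] = first.flatMap(_ => second)
val b: IO[Unit] = first >> second
```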
We are making progress, and already have a version of
copy that can be used.
If any exception is raised when
transfer is running, then the streams will be
automatically closed by
Resource. But there is something else we have to take
into account:
the execution of IO instances can be canceled! And cancellation
should not be ignored, as it is a key feature of cats-effect. We will discuss
cancellation in the next section.
Dealing with cancellation
Cancellation is a powerful but non-trivial cats-effect feature. In cats-effect,
some
IO instances can be cancelable, meaning that their evaluation will be
aborted. If the programmer is careful, an alternative
IO task will be run
under cancellation, for example to deal with potential cleanup activities.
We will see how an
IO can be actually canceled at the end of the Fibers are
not threads! section later on, but for now we will
just keep in mind that during the execution of the
IO returned by the
copy
method a cancellation could be requested at any time.
Now,
IOs created with
Resource.use can be canceled. The cancellation will
trigger the execution of the code that handles the closing of the resource. In
our case, that would close both streams. So far so good! But what happens if
cancellation happens while the streams are being used? This could lead to
data corruption as a stream where some thread is writing to is at the same time
being closed by another thread. For more info about this problem see Gotcha:
Cancellation is a concurrent
action in
cats-effect site.
To prevent such data corruption we must use some concurrency control mechanism
that ensures that no stream will be closed while the
IO returned by
transfer is being evaluated. Cats-effect provides several constructs for
controlling concurrency, for this case we will use a
semaphore. A semaphore has a number of
permits, its method
.acquire ‘blocks’ if no permit is available until
release is called on the same semaphore. It is important to remark that
there is no actual thread being really blocked, the thread that finds the
.acquire call will be immediately recycled by cats-effect. When the
release
method is invoked then cats-effect will look for some available thread to
resume the execution of the code after
.acquire.
We will use a semaphore with a single permit. The
.withPermit method acquires
one permit, runs the
IO given and then releases the permit. We could also
use
.acquire and then
.release on the semaphore explicitly, but
.withPermit is more idiomatic and ensures that the permit is released even if
the effect run fails.
import cats.implicits._
import cats.effect.{Concurrent, IO, Resource}
import cats.effect.concurrent.Semaphore
import java.io._

// transfer and transmit methods as defined before
def transfer(origin: InputStream, destination: OutputStream): IO[Long] = ???

def inputStream(f: File, guard: Semaphore[IO]): Resource[IO, FileInputStream] =
  Resource.make {
    IO(new FileInputStream(f))
  } { inStream =>
    guard.withPermit {
      IO(inStream.close()).handleErrorWith(_ => IO.unit)
    }
  }

def outputStream(f: File, guard: Semaphore[IO]): Resource[IO, FileOutputStream] =
  Resource.make {
    IO(new FileOutputStream(f))
  } { outStream =>
    guard.withPermit {
      IO(outStream.close()).handleErrorWith(_ => IO.unit)
    }
  }

def inputOutputStreams(in: File, out: File, guard: Semaphore[IO]): Resource[IO, (InputStream, OutputStream)] =
  for {
    inStream  <- inputStream(in, guard)
    outStream <- outputStream(out, guard)
  } yield (inStream, outStream)

def copy(origin: File, destination: File)(implicit concurrent: Concurrent[IO]): IO[Long] =
  for {
    guard <- Semaphore[IO](1)
    count <- inputOutputStreams(origin, destination, guard).use { case (in, out) =>
               guard.withPermit(transfer(in, out))
             }
  } yield count
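As a side note, the withPermit call used above can be unfolded into an explicit acquire/release pair. A small standalone sketch (not part of the copy program) showing both styles:

```scala
import cats.effect.{ExitCode, IO, IOApp}
import cats.effect.concurrent.Semaphore
import cats.implicits._

object SemaphoreDemo extends IOApp {
  def run(args: List[String]): IO[ExitCode] =
    for {
      s <- Semaphore[IO](1)
      // Idiomatic: the permit is released even if the inner IO fails
      _ <- s.withPermit(IO(println("in critical section")))
      // Manual: guarantee makes sure the release runs in any case
      _ <- s.acquire
      _ <- IO(println("manual critical section")).guarantee(s.release)
    } yield ExitCode.Success
}
```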
Before calling
transfer we acquire the semaphore, which is not released
until
transfer is done. The
use call ensures that the semaphore will be
released under any circumstances, whatever the result of
transfer (success,
error, or cancellation). As the ‘release’ parts in the
Resource instances are
now blocked on the same semaphore, we can be sure that streams are closed only
after
transfer is over, i.e. we have implemented mutual exclusion of
transfer execution and resources releasing. An implicit
Concurrent instance
is required to create the semaphore instance, which is included in the function
signature.
Note that while the
IO returned by
copy is cancelable (because so are
IO
instances returned by
Resource.use), the
IO returned by
transfer is not.
Trying to cancel it will not have any effect and that
IO will run until the
whole file is copied! In real world code you will probably want to make your
functions cancelable, section Building cancelable IO
tasks of
IO documentation
explains how to create such cancelable
IO instances (besides calling
Resource.use, as we have done for our code).
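As a hedged sketch of what such a cancelable task can look like (using IO.cancelable, described in that section), here is a toy sleep built on java.util.Timer whose cancellation token aborts the scheduled work:

```scala
import cats.effect.IO
import cats.implicits._
import java.util.{Timer, TimerTask}

// Sketch: the function passed to IO.cancelable registers the callback
// and returns an IO to be run if the task gets canceled.
def sleep(millis: Long, timer: Timer): IO[Unit] =
  IO.cancelable { cb =>
    val task = new TimerTask {
      override def run(): Unit = cb(Right(()))
    }
    timer.schedule(task, millis)
    IO(task.cancel()).void // cancellation token: abort the scheduled task
  }
```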
And that is it! We are done, now we can create a program that uses this
copy function.
IOApp for our final program
We will create a program that copies files, this program only takes two
parameters: the name of the origin and destination files. For coding this
program we will use
IOApp as it allows to maintain purity in our definitions
up to the program main function.
IOApp is a kind of ‘functional’ equivalent to Scala’s
App, where instead of
coding an effectful
main method we code a pure
run function. When executing
the class a
main method defined in
IOApp will call the
run function we
have coded. Any interruption (like pressing
Ctrl-c) will be treated as a
cancellation of the running
IO. Also
IOApp provides implicit instances of
Timer[IO] and
ContextShift[IO] (not discussed yet in this tutorial).
ContextShift[IO] allows for having a
Concurrent[IO] in scope, as the one
required by the
copy function.
When coding
IOApp, instead of a
main function we have a
run function,
which creates the
IO instance that forms the program. In our case, our
run
method can look like this:
import cats.effect._
import cats.implicits._
import java.io.File

object Main extends IOApp {

  // copy as defined before
  def copy(origin: File, destination: File): IO[Long] = ???

  override def run(args: List[String]): IO[ExitCode] =
    for {
      _ <- if (args.length < 2) IO.raiseError(new IllegalArgumentException("Need origin and destination files"))
           else IO.unit
      orig = new File(args(0))
      dest = new File(args(1))
      count <- copy(orig, dest)
      _ <- IO(println(s"$count bytes copied from ${orig.getPath} to ${dest.getPath}"))
    } yield ExitCode.Success
}
Note how
run verifies the
args list passed. If there are fewer than two
arguments, an error is raised. As
IO implements
MonadError we can at any
moment call
IO.raiseError to interrupt a sequence of
IO operations.
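A minimal standalone sketch of this short-circuiting behavior (and of recovering from it with handleErrorWith):

```scala
import cats.effect.IO

val checked: IO[Int] =
  for {
    n <- IO.pure(-1)
    _ <- if (n < 0) IO.raiseError[Unit](new IllegalArgumentException("negative!"))
         else IO.unit
    r <- IO.pure(n * 2) // never reached when the error is raised
  } yield r

// Recover with a fallback value instead of failing
val recovered: IO[Int] = checked.handleErrorWith(_ => IO.pure(0))
```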
Copy program code
You can check the final version of our copy program here.
The program can be run from
sbt just by issuing this call:
> runMain catsEffectTutorial.CopyFile origin.txt destination.txt
It can be argued that using
IO{java.nio.file.Files.copy(...)} would get an
IO with the same characteristics of purity as our function. But there is a
difference: our
IO is safely cancelable! So the user can stop the running code
at any time for example by pressing
Ctrl-c, our code will deal with safe
resource release (streams closing) even under such circumstances. The same will
apply if the
copy function is run from other modules that require its
functionality. If the
IO returned by this function is canceled while being
run, still resources will be properly released. But recall what we commented
before: this is because
use returns
IO instances that are cancelable, in
contrast our
transfer function is not cancelable.
Polymorphic cats-effect code
There is an important characteristic of
IO that we shall be aware of.
IO is
able to encapsulate side-effects, but the capacity to define concurrent and/or
async and/or cancelable
IO instances comes from the existence of a
Concurrent[IO] instance.
Concurrent[F[_]] is a type class that, for an
F
carrying a side-effect, brings the ability to cancel or start concurrently the
side-effect in
F.
Concurrent also extends type class
Async[F[_]], which allows defining synchronous/asynchronous computations.
Async[F[_]], in turn,
extends type class
Sync[F[_]], which can suspend the execution of side effects
in
F.
So well,
Sync can suspend side effects (and so can
Async and
Concurrent as
they extend
Sync). We have used
IO so far mostly for that purpose. Now,
going back to the code we created to copy files, could have we coded its
functions in terms of some
F[_]: Sync instead of
IO? Truth is we could and
in fact it is recommendable in real world programs. See for example how we
would define a polymorphic version of our
transfer function with this
approach, just by replacing any use of
IO by calls to the
delay and
pure
methods of the
Sync[F[_]] instance!
import cats.effect.Sync
import cats.effect.syntax.all._
import cats.implicits._
import java.io._

def transmit[F[_]: Sync](origin: InputStream, destination: OutputStream, buffer: Array[Byte], acc: Long): F[Long] =
  for {
    amount <- Sync[F].delay(origin.read(buffer, 0, buffer.size))
    count  <- if (amount > -1) Sync[F].delay(destination.write(buffer, 0, amount)) >> transmit(origin, destination, buffer, acc + amount)
              else Sync[F].pure(acc) // End of read stream reached (by java.io.InputStream contract), nothing to write
  } yield count // Returns the actual amount of bytes transmitted
We can do the same transformation to most of the code we have created so far,
but not all. In
copy you will find out that we do need a full instance of
Concurrent[F] in scope, this is because it is required by the
Semaphore
instantiation:
import cats.effect._
import cats.effect.concurrent.Semaphore
import cats.effect.syntax.all._
import cats.implicits._
import java.io._

def transfer[F[_]: Sync](origin: InputStream, destination: OutputStream): F[Long] = ???

def inputOutputStreams[F[_]: Sync](in: File, out: File, guard: Semaphore[F]): Resource[F, (InputStream, OutputStream)] = ???

def copy[F[_]: Concurrent](origin: File, destination: File): F[Long] =
  for {
    guard <- Semaphore[F](1)
    count <- inputOutputStreams(origin, destination, guard).use { case (in, out) =>
               guard.withPermit(transfer(in, out))
             }
  } yield count
Only in our
main function will we set
IO as the final
F for
our program. To do so, of course, a
Concurrent[IO] instance must be in scope,
but that instance is brought transparently by
IOApp so we do not need to be
concerned about it.
During the remainder of this tutorial we will use polymorphic code, only falling
back to
IO in the
run method of our
IOApps. Polymorphic code is less
restrictive, as functions are not tied to
IO but are applicable to any
F[_]
as long as there is an instance of the type class required (
Sync[F[_]],
Concurrent[F[_]]…) in scope. The type class to use will depend on the
requirements of our code. For example, if the execution of the side-effect
should be cancelable, then we must stick to
Concurrent[F[_]]. Also, it is
actually easier to work on
F than on any specific type.
Copy program code, polymorphic version
The polymorphic version of our copy program in full is available here.
Exercises: improving our small
IO program
To finalize we propose you some exercises that will help you to keep improving your IO-kungfu:
- Modify the IOApp so it shows an error and aborts the execution if the origin and destination files are the same, the origin file cannot be opened for reading, or the destination file cannot be opened for writing. Also, if the destination file already exists, the program should ask for confirmation before overwriting that file.
- Modify transmit so the buffer size is not hardcoded but passed as a parameter.
- Use some other concurrency tool of cats-effect instead of semaphore to ensure mutual exclusion of transfer execution and streams closing.
- Create a new program able to copy folders. If the origin folder has subfolders, then their contents must be recursively copied too. Of course the copying must be safely cancelable at any moment.
TCP echo server - concurrent system with
Fibers
This program is a bit more complex than the previous one. Here we create an echo
TCP server that replies to each text message from a client by sending back that
same message. When the client sends an empty line, the server shuts down its
connection. This server also brings a key feature: it will be able to attend
several clients at the same time. For that we will use
cats-effect’s
Fiber,
which can be seen as light threads. For each new client a
Fiber instance will
be spawned to serve that client.
We will stick to a simple design principle: the method that creates a resource is solely responsible for releasing it! It is worth remarking on this from the beginning, to better understand the code listings shown in this tutorial.
Ok, we are ready to start coding our server. Let’s build it step-by-step. First
we will code a method that implements the echo protocol. It will take as input
the socket (
java.net.Socket instance) that is connected to the client. The
method will basically be a loop that at each iteration reads the input from the
client; if the input is not an empty line, the text is sent back to the
client, otherwise the method finishes.
The method signature will look like this:
import cats.effect.Sync import java.net.Socket def echoProtocol[F[_]: Sync](clientSocket: Socket): F[Unit] = ???
Reading and writing will be done using
java.io.BufferedReader and
java.io.BufferedWriter instances built from the socket. Recall that this
method will be in charge of closing those buffers, but not the client socket (it
did not create that socket after all!). We will use again
Resource to ensure
that we close the streams we create. Also, all actions with potential
side-effects are encapsulated in
F instances, where
F only requires an
implicit instance of
Sync[F] to be present. That way we ensure no side-effect
is actually run until the
F returned by this method is evaluated. With this
in mind, the code looks like:
import cats.effect._ import cats.implicits._ import java.io._ import java.net._ def echoProtocol[F[_]: Sync](clientSocket: Socket): F[Unit] = { def loop(reader: BufferedReader, writer: BufferedWriter): F[Unit] = for { line <- Sync[F].delay(reader.readLine()) _ <- line match { case "" => Sync[F].unit // Empty line, we are done case _ => Sync[F].delay{ writer.write(line); writer.newLine(); writer.flush() } >> loop(reader, writer) } } yield () def reader(clientSocket: Socket): Resource[F, BufferedReader] = Resource.make { Sync[F].delay( new BufferedReader(new InputStreamReader(clientSocket.getInputStream())) ) } { reader => Sync[F].delay(reader.close()).handleErrorWith(_ => Sync[F].unit) } def writer(clientSocket: Socket): Resource[F, BufferedWriter] = Resource.make { Sync[F].delay( new BufferedWriter(new PrintWriter(clientSocket.getOutputStream())) ) } { writer => Sync[F].delay(writer.close()).handleErrorWith(_ => Sync[F].unit) } def readerWriter(clientSocket: Socket): Resource[F, (BufferedReader, BufferedWriter)] = for { reader <- reader(clientSocket) writer <- writer(clientSocket) } yield (reader, writer) readerWriter(clientSocket).use { case (reader, writer) => loop(reader, writer) // Let's get to work } }
Note that, as we did in the previous example, we swallow possible errors when closing the streams, as there is little to do in such cases.
The actual interaction with the client is done by the
loop function. It tries
to read a line from the client and, if successful, checks the line’s
content. If it is empty the method finishes; if not, it sends the line back through
the writer and loops back to the beginning. And what happens if we find an
error in the
reader.readLine() call? Well,
F will catch the exception and
short-circuit the evaluation; this method would then return an
F
instance carrying the caught exception. Easy, right :) ?
So we are done with our
echoProtocol method, good! But we still miss the part
of our server that will listen for new connections and create fibers to attend
them. Let’s work on that: we will implement that functionality in another method
that takes as input the
java.net.ServerSocket instance that will listen for
clients:
import cats.effect._ import cats.effect.syntax.all._ import cats.effect.ExitCase._ import cats.implicits._ import java.net.{ServerSocket, Socket} // echoProtocol as defined before def echoProtocol[F[_]: Sync](clientSocket: Socket): F[Unit] = ??? def serve[F[_]: Concurrent](serverSocket: ServerSocket): F[Unit] = { def close(socket: Socket): F[Unit] = Sync[F].delay(socket.close()).handleErrorWith(_ => Sync[F].unit) for { _ <- Sync[F] .delay(serverSocket.accept()) .bracketCase { socket => echoProtocol(socket) .guarantee(close(socket)) // Ensuring socket is closed .start // Will run in its own Fiber! }{ (socket, exit) => exit match { case Completed => Sync[F].unit case Error(_) | Canceled => close(socket) }} _ <- serve(serverSocket) // Looping back to the beginning } yield () }
We invoke the
accept method of
ServerSocket and use
bracketCase to define
both the action that will make use of the resource (the client socket) and how
it will be released. The action in this case invokes
echoProtocol, and then
uses a
guarantee call on the returned
F to ensure that the socket will be
safely closed when
echoProtocol is done. Also quite interesting: we use
start! By doing so the
echoProtocol call will run on its own fiber thus
not blocking the main loop. To be able to invoke
start we need an instance of
Concurrent[F] in scope (in fact we are invoking
Concurrent[F].start(...)
but the
cats.effect.syntax.all._ classes that we are importing did the
trick). Finally, the release part of the
bracketCase will only close the
socket if there was an error or cancellation during the
accept call or the
subsequent invocation to
echoProtocol. If that is not the case, it means that
echoProtocol was started without any issue and so we do not need to take any
action, the
guarantee call will close the socket when
echoProtocol is done.
You may wonder if using
bracketCase when we already have
guarantee is not a
bit overkill. We could have coded our loop like this:
for { socket <- Sync[F].delay(serverSocket.accept) _ <- echoProtocol(socket) .guarantee(close(socket)) .start _ <- serve(serverSocket) }
That code is way simpler, but it contains a bug: if there is a cancellation in
the
flatMap between
socket and
echoProtocol then
close(socket) does not
execute. Using
bracketCase solves that problem.
So there it is, we have our concurrent code ready, able to handle several client connections at once!
NOTE: If you have coded servers before, you are probably wondering whether cats-effect provides some magical way to attend an unlimited number of clients without somehow balancing the load. The truth is, it doesn’t. You can spawn as many fibers as you wish, but there is no guarantee they will run simultaneously. More about this in the Fibers are not threads! section.
IOApp for our server
So, what do we miss now? Only the creation of the server socket of course,
which we can already do in the
run method of an
IOApp:
import cats.effect._ import cats.implicits._ import java.net.ServerSocket object Main extends IOApp { // serve as defined before def serve[F[_]: Concurrent](serverSocket: ServerSocket): F[Unit] = ??? def run(args: List[String]): IO[ExitCode] = { def close[F[_]: Sync](socket: ServerSocket): F[Unit] = Sync[F].delay(socket.close()).handleErrorWith(_ => Sync[F].unit) IO( new ServerSocket(args.headOption.map(_.toInt).getOrElse(5432)) ) .bracket{ serverSocket => serve[IO](serverSocket) >> IO.pure(ExitCode.Success) } { serverSocket => close[IO](serverSocket) >> IO(println("Server finished")) } } }
Note how this time we can use
bracket right away, as there is a single
resource to deal with and no action to be taken if its creation fails. Also
IOApp provides a
ContextShift in scope that brings a
Concurrent[IO], so we
do not have to create our own.
Echo server code, simple version
Full code of our basic echo server is available here.
As before, you can run it, for example, from the
sbt console just by typing
> runMain catsEffectTutorial.EchoServerV1_Simple
That will start the server on the default port
5432; you can also set any other
port by passing it as an argument. To check that the server is properly running, you can
connect to it using
telnet. Here we connect, send
hi, and the server replies
with the same text. Finally we send an empty line to close the connection:
$ telnet localhost 5432 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. hi hi Connection closed by foreign host.
You can connect several telnet sessions at the same time to verify that our server can indeed attend all of them simultaneously. Several… but not many; more about that in the Fibers are not threads! section.
Unfortunately this server is a bit too simplistic. For example, how can we stop it? Well, that is something we have not addressed yet, and that is when things can get a bit more complicated. We will deal with proper server halting in the next section.
Graceful server stop (handling exit events)
There is no way to gracefully shut down the echo server coded in the previous
section. Sure, we can always
Ctrl-c it, but proper servers should provide
better mechanisms for stopping them. In this section we use some other
cats-effect
types to deal with this.
First, we will use a flag to signal when the server shall quit. The main server
loop will run on its own fiber, which will be canceled when that flag is set.
The flag will be an instance of
MVar. The
cats-effect documentation
describes
MVar as a mutable location that can be empty or contain a value,
asynchronously blocking reads when empty and blocking writes when full. Why not
use
Semaphore or
Deferred? The thing is, as we will see later on, we will need
to be able to ‘peek’ whether a value has been written or not in a non-blocking
fashion. That is a handy feature that
MVar implements.
So, we will ‘block’ by reading our
MVar instance, and we will only write to it
when
STOP is received, the write being the signal that the server must be
shut down. The server will only be stopped once, so we are not concerned about
blocking on writing.
And who shall signal that the server must be stopped? In this example, we will
assume that it will be the connected users who can request the server to halt by
sending a
STOP message. Thus, the method attending clients (
echoProtocol!)
needs access to the flag to use it to communicate that the server must stop when
that message is received.
Let’s first define a new method
server that instantiates the flag, runs the
serve method in its own fiber and waits for the flag to be set. Only when
the flag is set will the server fiber be canceled.
import cats.effect._ import cats.effect.syntax.all._ import cats.effect.concurrent.MVar import cats.implicits._ import java.net.ServerSocket // serve now requires access to the stopFlag, it will use it to signal the // server must stop def serve[F[_]: Concurrent](serverSocket: ServerSocket, stopFlag: MVar[F, Unit]): F[Unit] = ??? def server[F[_]: Concurrent](serverSocket: ServerSocket): F[ExitCode] = for { stopFlag <- MVar[F].empty[Unit] serverFiber <- serve(serverSocket, stopFlag).start // Server runs on its own Fiber _ <- stopFlag.read // Blocked until 'stopFlag.put(())' is run _ <- serverFiber.cancel // Stopping server! } yield ExitCode.Success
As before, creating new fibers requires a
Concurrent[F] in scope.
We must also modify the main
run method in
IOApp so now it calls to
server:
import cats.effect._ import cats.implicits._ import java.net.ServerSocket object Main extends IOApp { // server as defined before def server[F[_]: Concurrent](serverSocket: ServerSocket): F[ExitCode] = ??? override def run(args: List[String]): IO[ExitCode] = { def close[F[_]: Sync](socket: ServerSocket): F[Unit] = Sync[F].delay(socket.close()).handleErrorWith(_ => Sync[F].unit) IO{ new ServerSocket(args.headOption.map(_.toInt).getOrElse(5432)) } .bracket{ serverSocket => server[IO](serverSocket) >> IO.pure(ExitCode.Success) } { serverSocket => close[IO](serverSocket) >> IO(println("Server finished")) } } }
So
run calls
server which in turn calls
serve. Do we need to modify
serve as well? Yes, as we need to pass the
stopFlag to the
echoProtocol
method:
import cats.effect._ import cats.effect.ExitCase._ import cats.effect.concurrent.MVar import cats.effect.syntax.all._ import cats.implicits._ import java.net._ // echoProtocol now requires access to the stopFlag, it will use it to signal the // server must stop def echoProtocol[F[_]: Sync](clientSocket: Socket, stopFlag: MVar[F, Unit]): F[Unit] = ??? def serve[F[_]: Concurrent](serverSocket: ServerSocket, stopFlag: MVar[F, Unit]): F[Unit] = { def close(socket: Socket): F[Unit] = Sync[F].delay(socket.close()).handleErrorWith(_ => Sync[F].unit) for { _ <- Sync[F] .delay(serverSocket.accept()) .bracketCase { socket => echoProtocol(socket, stopFlag) .guarantee(close(socket)) // Ensuring socket is closed .start // Client attended by its own Fiber }{ (socket, exit) => exit match { case Completed => Sync[F].unit case Error(_) | Canceled => close(socket) }} _ <- serve(serverSocket, stopFlag) // Looping back to the beginning } yield () }
There is only one step missing, modifying
echoProtocol. In fact, the only
relevant changes are on its inner
loop method. Now it will check whether the
line received from the client is
STOP, if so it will set the
stopFlag to
signal the server must be stopped, and the function will quit:
import cats.effect._ import cats.effect.concurrent.MVar import cats.effect.syntax.all._ import cats.implicits._ import java.io._ def loop[F[_]:Concurrent](reader: BufferedReader, writer: BufferedWriter, stopFlag: MVar[F, Unit]): F[Unit] = for { line <- Sync[F].delay(reader.readLine()) _ <- line match { case "STOP" => stopFlag.put(()) // Stopping server! Also put(()) returns F[Unit] which is handy as we are done case "" => Sync[F].unit // Empty line, we are done case _ => Sync[F].delay{ writer.write(line); writer.newLine(); writer.flush() } >> loop(reader, writer, stopFlag) } } yield ()
Echo server code, graceful stop version
The code of the server able to react to stop events is available here.
If you run the server coded above, open a telnet session against it and send an
STOP message you will see how the server is properly terminated.
Exercise: closing client connections to echo server on shutdown
There is still a catch in our server. If there are several clients connected,
sending a
STOP message will close the server’s fiber and the one attending
the client that sent the message. But the other fibers will keep running
normally! It is as if they were daemon threads. Arguably, we could expect
shutting down the server to close all connections. How could we do it?
Solving that issue is the proposed exercise below.
We need to close all connections with clients when the server is shut down. To
do that we can call
cancel on each one of the
Fiber instances we have
created to attend each new client. But how? After all, we are not tracking
which fibers are running at any given time. We propose this exercise to you: can
you devise a mechanism so all client connections are closed when the server is
shut down? We outline a solution in the next subsection, but maybe you want to
take some time looking for a solution yourself first :) .
Solution
We could keep a list of active fibers serving client connections. It is doable, but cumbersome… and not really needed at this point.
Think about it: we have a
stopFlag that signals when the server must be
stopped. When that flag is set we can assume we shall close all client
connections too. Thus, every time we create a new fiber to
attend some new client, we must also make sure that when
stopFlag is set that
client is shut down. As
Fiber instances are very light, we can create a new
instance just to wait for
stopFlag.read and then force the client to stop.
This is how the
serve method will look now with that change: def serve[F[_]: Concurrent](serverSocket: ServerSocket, stopFlag: MVar[F, Unit]): F[Unit] = { def close(socket: Socket): F[Unit] = Sync[F].delay(socket.close()).handleErrorWith(_ => Sync[F].unit) for { socket <- Sync[F] .delay(serverSocket.accept()) .bracketCase { socket => echoProtocol(socket, stopFlag) .guarantee(close(socket)) // Ensuring socket is closed .start >> Sync[F].pure(socket) // Client attended by its own Fiber, socket is returned }{ (socket, exit) => exit match { case Completed => Sync[F].unit case Error(_) | Canceled => close(socket) }} _ <- (stopFlag.read >> close(socket)) .start // Another Fiber to cancel the client when stopFlag is set _ <- serve(serverSocket, stopFlag) // Looping back to the beginning } yield () }
Here we close the client socket once the read on
stopFlag unblocks. This will
trigger an exception on the
reader.readLine call. To capture and process the
exception we will use
attempt, which returns an
Either instance that will
contain either a
Right[String] with the line read or a
Left[Throwable] with
the exception captured. If some error is detected, first the state of
stopFlag
is checked; if it is set, a graceful shutdown is assumed and no action is
taken, otherwise the error is raised:
import cats.effect._ import cats.effect.concurrent.MVar import cats.implicits._ import java.io._ def loop[F[_]: Sync](reader: BufferedReader, writer: BufferedWriter, stopFlag: MVar[F, Unit]): F[Unit] = for { lineE <- Sync[F].delay(reader.readLine()).attempt _ <- lineE match { case Right(line) => line match { case "STOP" => stopFlag.put(()) // Stopping server! Also put(()) returns F[Unit] which is handy as we are done case "" => Sync[F].unit // Empty line, we are done case _ => Sync[F].delay{ writer.write(line); writer.newLine(); writer.flush() } >> loop(reader, writer, stopFlag) } case Left(e) => for { // readLine() failed, stopFlag will tell us whether this is a graceful shutdown isEmpty <- stopFlag.isEmpty _ <- if(!isEmpty) Sync[F].unit // stopFlag is set, cool, we are done else Sync[F].raiseError(e) // stopFlag not set, must raise error } yield () } } yield ()
Recall that we used
Resource to instantiate both the
reader and
writer
used by
loop; following how we coded that resource, both that
reader and
writer will be automatically closed.
Now you may think ‘wait a minute, why don’t we cancel the client fiber instead of closing the socket straight away?’ In fact this is perfectly possible, and it will have a similar effect: def serve[F[_]: Concurrent](serverSocket: ServerSocket, stopFlag: MVar[F, Unit]): F[Unit] = { def close(socket: Socket): F[Unit] = Sync[F].delay(socket.close()).handleErrorWith(_ => Sync[F].unit) for { fiber <- Sync[F] .delay(serverSocket.accept()) .bracketCase { socket => echoProtocol(socket, stopFlag) .guarantee(close(socket)) // Ensuring socket is closed .start // Client attended by its own Fiber, which is returned }{ (socket, exit) => exit match { case Completed => Sync[F].unit case Error(_) | Canceled => close(socket) }} _ <- (stopFlag.read >> fiber.cancel) .start // Another Fiber to cancel the client when stopFlag is set _ <- serve(serverSocket, stopFlag) // Looping back to the beginning } yield () }
What is happening in this latter case? If you take a look again at
echoProtocol you will see that the
F returned by
echoProtocol is the
F
given by
Resource.use. When we cancel the fiber running that
F, the release
of the resources defined is triggered. That release phase closes the
reader
and
writer streams that we created from the client socket… which in turn
closes the client socket! As before, the
attempt call will take care of the
exception raised. In fact using
cancel looks cleaner overall. But there is a
catch. The call to
cancel does not force an
F to be immediately terminated;
it is not like a
Thread.interrupt! It worked in our server because the cancellation
indirectly created an exception that was raised inside the
F running the
reader.readLine, caused by the socket being closed. If that had not been the
case, the
cancel call would only have taken effect when the code inside the
F running the
reader.readLine had finished normally. Keep that in mind when
using
cancel to handle fibers.
Echo server code, closing client connections version
The resulting code of this new server, able to shutdown all client connections on shutdown, is available here.
Fibers are not threads!
As stated before, fibers are like ‘light’ threads, meaning they can be used in a
similar way to threads to create concurrent code. However, they are not
threads. Spawning new fibers does not guarantee that the action described in the
F associated with them will be run right away if there is a shortage of threads. At the end
of the day, if no thread is available to run the fiber, the actions
in that fiber will be blocked until some thread is free again.
You can test this yourself. Start the server defined in the previous sections and try to connect several clients and send lines to the server through them. Soon you will notice that the latest clients… do not get any echo reply when sending lines! Why is that? Well, the answer is that the first fibers have already used up all the available underlying threads! But if we close one of the active clients by sending an empty line (recall that this makes the server close that client session), then one of the blocked clients will immediately become active.
It shall be clear from that experiment that fibers are run by thread pools. And
that in our case, all our fibers share the same thread pool!
ContextShift[F]
is in charge of assigning threads to the fibers waiting to be run, each one
with a pending
F action. When using
IOApp we also get the
ContextShift[IO]
that we need to run the fibers in our code. So there are our threads!
The
ContextShift type class
Cats-effect provides ways to use different
ContextShifts (each with its own
thread pool) when running
F actions, and to swap which one should be used for
each new
F, or to ask to reschedule threads among the currently active
F
instances, e.g. for improved fairness. The code below shows an example of how to
declare tasks that will be run in different thread pools: the first task will be run
by the thread pool of the
ExecutionContext passed as a parameter, the second
task will be run in the default thread pool.
import cats.effect._ import cats.implicits._ import scala.concurrent.ExecutionContext def doHeavyStuffInADifferentThreadPool[F[_]: ContextShift: Sync](implicit ec: ExecutionContext): F[Unit] = { val csf = implicitly[ContextShift[F]] for { _ <- csf.evalOn(ec)(Sync[F].delay(println("Hi!"))) // Swapping to thread pool of given ExecutionContext _ <- Sync[F].delay(println("Welcome!")) // Running back in default thread pool } yield () }
Exercise: using a custom thread pool in echo server
Given what we know so far, how could we solve the problem of the limited number
of clients attended in parallel in our echo server? Recall that in traditional
servers we would make use of a specific thread pool for clients, able to resize
itself by creating new threads if they are needed. You can get such a pool using
Executors.newCachedThreadPool(). But take care of shutting the pool down when
the server is stopped!
Solution
Well, the solution is quite straightforward. We only need to create a thread pool
and an execution context, and use them whenever we need to read input from some
connected client. So the beginning of the
echoProtocol function would look like:
def echoProtocol[F[_]: Sync: ContextShift](clientSocket: Socket, stopFlag: MVar[F, Unit])(implicit clientsExecutionContext: ExecutionContext): F[Unit] = { val csf = implicitly[ContextShift[F]] def loop(reader: BufferedReader, writer: BufferedWriter, stopFlag: MVar[F, Unit]): F[Unit] = for { lineE <- csf.evalOn(clientsExecutionContext)(Sync[F].delay(reader.readLine()).attempt) // ...
and… that is mostly it. The only pending change is to create the thread pool and
execution context in the
server function, which will be in charge also of
shutting down the thread pool when the server finishes:
import cats.effect._ import cats.effect.concurrent.MVar import cats.effect.syntax.all._ import cats.implicits._ import java.net.ServerSocket import java.util.concurrent.Executors import scala.concurrent.ExecutionContext def serve[F[_]: Concurrent: ContextShift](serverSocket: ServerSocket, stopFlag: MVar[F, Unit])(implicit clientsExecutionContext: ExecutionContext): F[Unit] = ??? def server[F[_]: Concurrent: ContextShift](serverSocket: ServerSocket): F[ExitCode] = { val clientsThreadPool = Executors.newCachedThreadPool() implicit val clientsExecutionContext = ExecutionContext.fromExecutor(clientsThreadPool) for { stopFlag <- MVar[F].empty[Unit] serverFiber <- serve(serverSocket, stopFlag).start // Server runs on its own Fiber _ <- stopFlag.read // Blocked until 'stopFlag.put(())' is run _ <- Sync[F].delay(clientsThreadPool.shutdown()) // Shutting down clients pool _ <- serverFiber.cancel // Stopping server } yield ExitCode.Success }
Signatures of
serve and of
echoProtocol will have to be changed too to pass
the execution context as a parameter. Finally, we need an implicit
ContextShift[F] that will be carried in the functions’ signatures. It is
IOApp
that provides the instance of
ContextShift[IO] in the
run method.
Echo server code, thread pool for clients version
The version of our echo server using a thread pool is available here.
Let’s not forget about
async
The
async functionality is another powerful capability of cats-effect we have
not mentioned yet. It is provided by the
Async type class, which allows us to
describe
F instances that may be terminated by a thread different from the
one carrying out the evaluation of that instance. The result will be returned by using
a callback.
Some of you may wonder if that could help us to solve the issue of having
blocking code in our fabulous echo server. Unfortunately,
async cannot
magically ‘unblock’ such code. Try this simple code snippet (e.g. in
sbt
console):
import cats.effect._ import cats.effect.syntax.all._ import cats.implicits._ import scala.util.Either def delayed[F[_]: Async]: F[Unit] = for { _ <- Sync[F].delay(println("Starting")) // Async extends Sync, so (F[_]: Async) 'brings' (F[_]: Sync) _ <- Async[F].async{ (cb: Either[Throwable,Unit] => Unit) => Thread.sleep(2000) cb(Right(())) } _ <- Sync[F].delay(println("Done")) // 2 seconds to get here, no matter what, as we were 'blocked' by previous call } yield() delayed[IO].unsafeRunSync() // a way to run an IO without IOApp
You will notice that the code above still blocks, waiting for the
async call
to finish.
Using
async in our echo server
So how is
async useful? Well, let’s see how we can apply it to our server
code. Because
async allows a different thread to finish the task, we can
replace the blocking read call inside the
loop function of our server with
something like:
for { lineE <- Async[F].async{ (cb: Either[Throwable, Either[Throwable, String]] => Unit) => clientsExecutionContext.execute(new Runnable { override def run(): Unit = { val result: Either[Throwable, String] = Try(reader.readLine()).toEither cb(Right(result)) } }) } // ...
Note that the call
clientsExecutionContext.execute will create a thread from
that execution context, freeing the thread that was evaluating the
F
for-comprehension. If the thread pool used by the execution context can create
new threads if no free ones are available, then we will be able to attend as
many clients as needed. This is similar to the solution we used previously when
we asked to run the blocking
readLine call in a different execution context.
The final result will be identical to our previous server version. To attend
client connections, if no thread is available in the pool, new threads will be
created from that pool.
Echo server code, async version
A full version of our echo server using this async approach is available here.
When is
async useful then?
The
Async type class is especially useful when the task run by the
F can
be terminated by any thread. For example, calls to remote services are often
modeled with
Futures so they do not block the calling thread. When defining
our
F, should we block on the
Future waiting for the result? No! We can
wrap the call in an
async call like:
import cats.effect._ import scala.concurrent.ExecutionContext.Implicits.global import scala.concurrent.Future import scala.util._ trait Service { def getResult(): Future[String] } def service: Service = ??? def processServiceResult[F[_]: Async] = Async[F].async{ (cb: Either[Throwable, String] => Unit) => service.getResult().onComplete { case Success(s) => cb(Right(s)) case Failure(e) => cb(Left(e)) } }
So, let’s say our new goal is to create an echo server that does not require a
thread per connected socket to wait on the blocking
read() method. If we use a
network library that avoids blocking operations, we could then combine that with
async to create such a non-blocking server. And Java NIO can be helpful here.
While Java NIO does have some blocking methods (
Selector’s
select()), it
allows building servers that do not require a thread per connected client:
select() will return those ‘channels’ (such as
SocketChannel) that have data
available to read from; processing of the incoming data can then be split among
the threads of a size-bounded pool. This way, a thread-per-client approach is not
needed. Java NIO2 or netty could also be applicable to this scenario. We leave it
as a final exercise to implement our echo server again, this time using an
async lib.
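Although this tutorial’s code is Scala, the NIO API described above is itself Java, so the select() mechanism can be sketched directly in plain Java. The snippet below is only an illustration (class and method names are made up, and a real server would keep the selector loop running and track many clients): a single selector accepts one connection and echoes one message back without ever dedicating a thread to that client.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

class NioEchoSketch {

    // Accepts a single connection, echoes a single message back to the
    // client, and returns what the client read. Hypothetical demo code.
    static String echoOnce(String message) {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

            // Plain blocking client, just to talk to our selector-based server.
            try (SocketChannel client =
                     SocketChannel.open(new InetSocketAddress("127.0.0.1", port))) {
                client.write(ByteBuffer.wrap(message.getBytes(StandardCharsets.UTF_8)));

                while (true) {
                    selector.select(); // blocks until some channel is ready
                    for (SelectionKey key : selector.selectedKeys()) {
                        if (key.isAcceptable()) {
                            // New client: register its channel for reads, no new thread needed
                            SocketChannel ch = server.accept();
                            ch.configureBlocking(false);
                            ch.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            // Data available: read it and echo it back
                            SocketChannel ch = (SocketChannel) key.channel();
                            ByteBuffer buf = ByteBuffer.allocate(1024);
                            ch.read(buf);
                            buf.flip();
                            ch.write(buf);
                            key.cancel();

                            // Client side: read the echoed bytes (blocking is fine here)
                            ByteBuffer in = ByteBuffer.allocate(1024);
                            client.read(in);
                            in.flip();
                            return StandardCharsets.UTF_8.decode(in).toString();
                        }
                    }
                    selector.selectedKeys().clear();
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(echoOnce("hi"));
    }
}
```

Note how the server side never blocks on a per-client read: readiness is multiplexed through the single select() call, which is exactly the property that makes a thread-per-client design unnecessary.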
Conclusion
With all this we have covered a good deal of what cats-effect has to offer (but not all!). Now you are ready to create code that operates side effects in a purely functional manner. Enjoy the ride!
Enough chitchat, let's start coding, shall we?
Requirements
- Have the JDK installed on your machine (duh!).
- An IDE (I would recommend IntelliJ IDEA Community Edition).
- In terms of Java as a language, you need to be familiar with the concepts of a
variable,
constant,
function,
class and
object in order to fully understand this post. (I might be able to help with this.)
- Add the Math
class from this GitHub Gist to your project.
Getting Started
The very first thing we need to do is to add the TestNG framework to our project. This will provide us with a set of
classes and
annotations which will come in handy later.
Add TestNG to your project
This can be done in two ways: manually or using Maven. Feel free to skip one or the other depending on how you set up your project.
Manually
You can follow this guide in order to add it manually.
Via Maven
- Open up your
pom.xml, which should be in your project's root folder.
- Inside the
<project>tag, make a
<dependencies>tag.
- Add the following block of XML code:
<dependency> <groupId>org.testng</groupId> <artifactId>testng</artifactId> <version>6.14.2</version> </dependency>
In case you haven't done it, make sure to select the
Enable auto-import option that should appear on the bottom right of your screen. This will allow IntelliJ IDEA to automatically detect any changes made to your
pom.xml and refresh it accordingly. Life saver :)
The test subject
Our tests will exercise the Math class you added earlier, so create a class for them named MathTests. The Tests suffix in the name is a common naming convention that will allow other developers and yourself to quickly know, without opening the file, that it has testing logic inside of it.
Which should look like this:
public class MathTests { // ... }
Can you spot any difference between a regular
class and this one?
Nope.
And that's fine.
What makes a
class a "testing class" is not its signature but its methods. It is also important for these classes to be inside a directory marked as a Test Source Root.
These methods should follow a particular naming convention, should have the
public access modifier, should have
void as their return type and the
@Test annotation before their signature.
The
@Test annotation tells our IDE's test runner that this is not a regular method, and it will execute it accordingly.
So, our first test method would look like this:
@Test public void add_TwoPlusTwo_ReturnsFour() { // ... }
Adding testing logic
There's another convention that is really common among other programming languages, known as the Triple A:
- Arrange: consists of a few lines of code that are used to declare and initialize the objects we need in our test.
- Act: is usually a few lines of code where we perform the actions, whether that is doing some calculation or modifying the state of our objects.
- Assert: usually consists of a single line of code where we verify that the outcome of the Act part was successful, comparing the
actual value with the
expected value that we plan to get.
In practice, this would be:
@Test public void add_TwoPlusTwo_ReturnsFour() { // Arrange final int expected = 4; // Act final int actual = Math.add(2, 2); // Assert Assert.assertEquals(actual, expected); }
The
finalkeyword is used to prevent our value to be changed later, also known as a constant.
Here we can see how the whole arrange, act and assert come together.
If you take a look at the line numbers on the left of the
add_TwoPlusTwo_ReturnsFour() method, there's a green play button, select it and then choose the first option from the context menu.
Wait a few moments and... the test runner panel should open up with the test results.
If you see that everything is green, it means that our test passed!
But, as a good tester, we should also try to make our test fail.
Let's change the act part so it adds
3 and
2 together, so our
actual value becomes
5 and our test should fail.
...
Did it fail?
Great!
Now, some of you may be wondering why we use the
assertEquals() method from the Assert
class, we could manually try to use an
if-else block that can simulate the same results, but the Assert
class provides a handy set of methods to do various types of validations.
The most common ones are:
assertTrue(): evaluates a given
conditionor
boolean, if it's value it's
true, the test will be marked as PASSED, otherwise, it will be marked as FAILED.
assertFalse(): similar to the
assertTrue(), but whenever the
conditionor
booleanis
falsethe test will be marked as PASSED, otherwise, it will be marked as FAILED.
asssertEquals(): commonly used to compare two given values that can be either primitive types (int, double, etc) or any objects.
If we were to implement our own logic using
if-else, not only it would clutter our code, but also could lead to unwanted results. Since, if we forget to
throw and exception in one of our
if-else blocks, both of our code paths will be marked as PASSED.
Tip: most of the time we should only use 1 single Assert method per test, although there are exceptions to this rule. But this is normally recommended in order for our test to be really small and straight to the point. Each test should only verify 1 code path at a time. Also, if we had 3 assertions and the first one fails, the following ones will never be executed, so keep that in mind.
Now that we got that out of the way, let's continue with more tests!
How about you practice testing more scenarios for the
add() method?
- Add a positive number with a negative one.
- Add two negative numbers.
If we take another look at our Math
class, we can see there are 2 more methods.
I'll let you do the tests for the
multiply() (hint: make sure to test when we multiply a number by zero) method and I'll focus on the
divide() one for the rest of this article.
The
divide() method
Let's take a closer look to this method:
public static double divide(int dividend, int divisor) { if (divisor == 0) throw new IllegalArgumentException("Cannot divide by zero (0)."); return dividend / divisor; }
As you can see, if the value of the
divisor argument is
0, we will throw an
IllegalArgumentException, otherwise, the division operation will be performed.
Note: the
throwkeyword not only throws a given exception, but also stops the code execution, so it works similar to the
breakkeyword inside a loop or a
switchblock.
So, this method has 2 possible outcomes or "code paths". We need to make sure to test them.
The amount of tests per method, should be equal or more than the amount of code paths it has.
Which means, that we should at least have 2 tests.
Let's go ahead and make them!
- Divide two numbers, where the
divisoris any number but zero (0).
- Divide two numbers, where the
divisoris zero (0).
Our first test would be something like:
@Test public void divide_TenDividedByFive_ReturnsTwo() { final double expected = 2.0; final double actual = Math.divide(10, 5); Assert.assertEquals(actual, expected); }
And our second test would be:
@Test(expectedExceptions = IllegalArgumentException.class) public void divide_TenDividedByZero_ThrowsIllegalArgumentException() { Math.divide(10, 0); }
Wait wut!
Mr./Mrs Reader: "B-bu-but what happened with the arrange, act and the assert? what is the
expectedExceptions part doing?"
Do not worry, I shall explain shortly!
- I decided to skip the whole arrange, act and assert because the execution of our code will automatically be interrupted when the
divide()method is ran. So the whole Tripple A can be omitted for this test in particular.
- The
expectedExceptionpart is needed in order to tell our test runner that the
IllegalArgumentExceptionis actually possible to happen in this test, if we were to change that to another exception, our test would fail.
Tip: remember to use the
.classat the end of the exception name, otherwise, this code would not compile.
Testing objects
You have noticed that so far we have been testing static methods of our Math
class, which means we don't have to create objects of it. Which is fine.
But what if we had a
class that didn't have static methods?
For this, our testing framework (TestNG) provides a pair of annotations to make sure that each of our test use a fresh instance of our
class.
Let's imagine we could create instances of the Math
class.
In that case, our tests would look like this:
@Test public void add_TwoPlusTwo_ReturnsFour() { final Math math = new Math(); final int expected = 4; final int actual = Math.add(2, 2); Assert.assertEquals(actual, expected); } @Test public void divide_TenDividedByFive_ReturnsTwo() { final Math math = new Math(); final double expected = 2.0; final double actual = Math.divide(10, 5); Assert.assertEquals(actual, expected); }
Which isn't that bad, but remember that we can make many more tests for this same
class and having this Math objects initialized over and over will create more code noise.
If we have to ignore certain parts of our test, specially in the arrangement, it means we can use one of our testing framework's tools:
@BeforeMethod & @AfterMethod
These two annotations can be added to our test functions like we have been using the
@Test one, but they work in a particular way.
@BeforeMethod: this code block will always be executed before any other
@Testmethod.
@AfterMethod: this code block will always be executed after any other
@Testmethod.
So, why would we use them?
In all of our
@Test methods we would have to constantly initiate a new Math object, so with the help of the
@BeforeMethod annotation we can get rid of this repetitive code.
First thing we need to do is to promote our Math object to a member variable or property.
public final class MathTests { private Math); } }
Then add our
@BeforeMethod function, which is commonly named as "setUp".); } }
Now, in order to make sure we clear out our
math object, we can set it's value to
null inside our
@AfterMethod function, which is usually called
tearDown():); } @AfterMethod public void tearDown() { math = null; } }
This means that the order of execution of our test would be:
- The
setup().
- And
add_TwoPlusTwo_ReturnsFour().
- Then
tearDown().
setup()again.
- And
divide_TenDividedByFive_ReturnsTwo().
- Then
tearDown()again.
Aaaaand that's it
With this you should be more familiar now with how Unit Testing works.
Although we didn't do any tests that required us using the
assertTrue() and
assertFalse(), I encourage you to do your own tests to play around with them for a little bit :)
Feel free to leave a comment if you have any questions and I'll do my best to clear them out!
If you would like to take a look at the entire project, head over to this repository on GitHub.
Discussion (16)
Great intro. I hit an error on line 11 of the final code. I get an
Error:(11, 16) java: Math() has private access in Math. IntelliJ's linter is yelling about it as well. My Java knowledge is minimal so I'm wondering how would I fix this error? I'm guessing it has something to do with line 2 of
Math.java.
Thanks for letting me know, Seth!
That's my fault.
Try removing this from the Math.java file:
The entire class should be like this now:
In case you or someone else also wonders why, the
private Math() {}refers to the constructor of our Math class, I made it
privateat the beginning because all it's methods are
static, which prevents anyone from trying to instantiate it. But later on I decided to also add an example where we had the need to use an object and I forgot to update it hahaha.
That works. Thanks!
Hi, just a small hint. In case you add a dependency in Maven which is only intended for testing, which TestNG is, you should do it like this:
Apart from that if suggest to name a test class
*Tests.javayou have add an example to use the most recent versions of maven-surefire-plugin (2.21.0+). Otherwise this will not being executed as test. The best it to name it
*Test.javathis will work with older versions well..
Thank you, Karl!
That's really helpful 😄
Just to give some other options: We’ve just started using JUnit 5, the best thing is actually @DisplayName to price a readable test name. Also, we switched to AssertJ that has a pretty neat fluent API.
Very well explained as usual (> ._.)> Kuddos!. This is really helpful since Im trying to implement a new testing framework for the folks at work, wish you luck and here your reward.
Hahahha, thanks Manuel!
I'm glad you found it useful 🤓
Awesome guide. This is a great refresher for me as I have not wrote some unit tests in a while 😬
Hey Christian,
Which IDE do you prefer? :)
Good article!
One of the best articles that I found on the whole web, Thank you, sir.
But I got "Error:(3, 34) java: package org.graalvm.compiler.debug does not exist" when I type expectedExceptions.
Hey Mohammad,
Thank you very much!
I'm not entirely sure what might cause it, but it seems you are missing a dependency.
In case it might help you, here's a repository with the project I used while making this article: github.com/chrisvasqm/intro-unit-t...
Great article!
Great post
This is a great post. thank you very much Christian Vasquez
Great Article, Thank you! | https://dev.to/chrisvasqm/introduction-to-unit-testing-with-java-2544 | CC-MAIN-2021-39 | refinedweb | 2,251 | 64 |
EEG source localization given electrode locations on an MRI#
This tutorial explains how to compute the forward operator from EEG data when the electrodes are in MRI voxel coordinates.
# Authors: Eric Larson <larson.eric.d@gmail.com> # # License: BSD-3-Clause
import os.path as op import nibabel from nilearn.plotting import plot_glass_brain import numpy as np import mne from mne.channels import compute_native_head_t, read_custom_montage from mne.viz import plot_alignment
Prerequisites#NI#
Let’s load our
DigMontage using
mne.channels.read_custom_montage(), making note of the fact that
we stored our locations in Freesurfer surface RAS (MRI) coordinates.
What if my electrodes are in MRI voxels?.pick_types(meg=False, eeg=True, stim=True, exclude=()).load_data() raw.set_montage(dig_montage) raw.plot_sensors(show_names=True). Removing projector <Projection | PCA-v1, active : False, n_channels : 102> Removing projector <Projection | PCA-v2, active : False, n_channels : 102> Removing projector <Projection | PCA-v3, active : False, n_channels : 102>()
Adding average EEG reference projection. 1 projection items deactivated Average reference projection was added, but has not been applied yet. Use the apply_proj method to apply it. 320 events found Event IDs: [ 1 2 3 4 5 32] Not setting metadata 320 matching events found Setting baseline interval to [-0.19979521315838786, 0.0] sec Applying baseline correction (mode: mean) Created an SSP operator (subspace dimension = 1) 1 projection items activated Using data from preloaded Raw for 72 events and 421 original time points ... 0 bad epochs dropped Using data from preloaded Raw for 73 events and 421 original time points ... 0 bad epochs dropped Using data from preloaded Raw for 73 events and 421 original time points ... 0 bad epochs dropped Using data from preloaded Raw for 71 events and 421 original time points ... 0 bad epochs dropped Using data from preloaded Raw for 15 events and 421 original time points ... 0 bad epochs dropped Using data from preloaded Raw] Projections have already been applied. Setting proj attribute to True.
Getting a source estimate#
New we have all of the components we need to compute a forward solution, but first we should sanity check that everything is well aligned:
fig = plot_alignment( evoked.info, trans=trans, show_axes=True, surfaces='head-dense', subject='sample', subjects_dir=subjects_dir)
Using lh.seghead for head surface. Channel types:: eeg: 59
Now we can actually compute the forward:
fwd = mne.make_forward_solution( evoked.info, trans=trans, src=fname_src, bem=fname_bem, verbose=True)... Loading the solution matrix... Three-layer model surfaces loaded....) Checking surface interior status for 4098 points... Found 846/4098 points inside an interior sphere of radius 43.6 mm Found 0/4098 points outside an exterior sphere of radius 91.8 mm Found 2/3252 points outside using surface Qhull [Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers. [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 2.0s remaining: 0.0s [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 2.0s finished Found 0/3250 points outside using solid angles Total 4096/4098 points inside the surface Interior check completed in 1991.5 ms 2 source space points omitted because they are outside the inner skull surface. Computing patch statistics... Patch information added... Checking surface interior status for 4098 points... Found 875/4098 points inside an interior sphere of radius 43.6 mm Found 0/4098 points outside an exterior sphere of radius 91.8 mm Found 0/3223 points outside using surface Qhull [Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers. [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.9s remaining: 0.0s [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.9s finished Found 1/3223 point outside using solid angles Total 4097/4098 points inside the surface Interior check completed in 1936.6 ms 1 source space point omitted because it is outside the inner skull surface. Computing patch statistics... Patch information added... Setting up for EEG... Computing EEG at 8193 source locations (free orientations)... [Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers. [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 3.3s remaining: 0.0s [Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 3.3s finished Finished.
Finally let’s compute the inverse and apply it:
inv = mne.minimum_norm.make_inverse_operator( evoked.info, fwd, cov, verbose=True) stc = mne.minimum_norm.apply_inverse(evoked, inv) brain = stc.plot(subjects_dir=subjects_dir, initial_time=0 = 8192/8193 = 10.040598 scale = 157084 exp = 0.8 Applying loose dipole orientations to surface source spaces:.86184 scaling factor to adjust the trace = 3.24877e+19 (nchan = 59 nzero = 1).17591891 4.939572 10.86348066]
Total running time of the script: ( 0 minutes 37.407 seconds)
Estimated memory usage: 579 MB
Gallery generated by Sphinx-Gallery | https://mne.tools/dev/auto_tutorials/inverse/70_eeg_mri_coords.html | CC-MAIN-2022-21 | refinedweb | 775 | 52.97 |
Now that we understand the basics of creating functions in LLVM, let's move on to a more complicated example: something with control flow. As an example, let's consider Euclid's Greatest Common Denominator (GCD) algorithm:
unsigned gcd(unsigned x, unsigned y) { if(x == y) { return x; } else if(x < y) { return gcd(x, y - x); } else { return gcd(x - y, y); } }
With this example, we'll learn how to create functions with multiple blocks and control flow, and how to make function calls within your LLVM code. For starters, consider the diagram below.
The above is a graphical representation of a program in LLVM IR. It places each basic block on a node of a graph, and uses directed edges to indicate flow control. These blocks will be serialized when written to a text or bitcode file, but it is often useful conceptually to think of them as a graph. Again, if you are unsure about the code in the diagram, you should skim through the LLVM Language Reference Manual and convince yourself that it is, in fact, the GCD algorithm.
The first part of our code is practically the same as from the first tutorial. The same basic setup is required: creating a module, verifying it, and running the
PrintModulePass on it. Even the first segment of
makeLLVMModule() looks essentially the same, except that
gcd takes one fewer parameter than
mul_add.
#include <llvm/Module.h> #include <llvm/Function.h> #include <llvm/PassManager.h> #include <llvm/Analysis/Verifier.h> #include <llvm/Assembly/PrintModulePass.h> #include <llvm/Support/LLVMBuilder.h> using namespace llvm; Module* makeLLVMModule(); int main(int argc, char**argv) { Module* Mod = makeLLVMModule(); verifyModule(*Mod, PrintMessageAction); PassManager PM; PM.add(new PrintModulePass(&llvm::cout)); PM.run(*Mod); return 0; } Module* makeLLVMModule() { Module* mod = new Module("tut2");");
Here, however, is where our code begins to diverge from the first tutorial. Because
gcd has control flow, it is composed of multiple blocks interconnected by branching (
br) instructions. For those familiar with assembly language, a block is similar to a labeled set of instructions. For those not familiar with assembly language, a block is basically a set of instructions that can be branched to and is executed linearly until the block is terminated by one of a small number of control flow instructions, such as
br or
ret.
Blocks corresponds to the nodes in the diagram we looked at in the beginning of this tutorial. From the diagram, we can see that this function contains five blocks, so we'll go ahead and create them. Note that, in this code sample, we're making use of LLVM's automatic name uniquing, since we're giving two blocks the same name.
BasicBlock* entry = new BasicBlock("entry", gcd); BasicBlock* ret = new BasicBlock("return", gcd); BasicBlock* cond_false = new BasicBlock("cond_false", gcd); BasicBlock* cond_true = new BasicBlock("cond_true", gcd); BasicBlock* cond_false_2 = new BasicBlock("cond_false", gcd);
Now, we're ready to begin generate code! We'll start with the
entry block. This block corresponds to the top-level if-statement in the original C code, so we need to compare
x == y To achieve this, we perform an explicity comparison using
ICmpEQ.
ICmpEQ stands for an integer comparison for equality and returns a 1-bit integer result. This 1-bit result is then used as the input to a conditional branch, with
ret as the
true and
cond_false as the
false case.
LLVMBuilder builder(entry); Value* xEqualsY = builder.CreateICmpEQ(x, y, "tmp"); builder.CreateCondBr(xEqualsY, ret, cond_false);
Our next block,
ret, is pretty simple: it just returns the value of
x. Recall that this block is only reached if
x == y, so this is the correct behavior. Notice that, instead of creating a new
LLVMBuilder for each block, we can use
SetInsertPoint to retarget our existing one. This saves on construction and memory allocation costs.
builder.SetInsertPoint(ret); builder.CreateRet(x);
cond_false is a more interesting block: we now know that
x != y, so we must branch again to determine which of
x and
y is larger. This is achieved using the
ICmpULT instruction, which stands for integer comparison for unsigned less-than. In LLVM, integer types do not carry sign; a 32-bit integer pseudo-register can interpreted as signed or unsigned without casting. Whether a signed or unsigned interpretation is desired is specified in the instruction. This is why several instructions in the LLVM IR, such as integer less-than, include a specifier for signed or unsigned.
Also, note that we're again making use of LLVM's automatic name uniquing, this time at a register level. We've deliberately chosen to name every instruction "tmp", to illustrate that LLVM will give them all unique names without getting confused.
builder.SetInsertPoint(cond_false); Value* xLessThanY = builder.CreateICmpULT(x, y, "tmp"); builder.CreateCondBr(xLessThanY, cond_true, cond_false_2);
Our last two blocks are quite similar; they're both recursive calls to
gcd with different parameters. To create a call instruction, we have to create a
vector (or any other container with
InputInterators) to hold the arguments. We then pass in the beginning and ending iterators for this vector.
builder.SetInsertPoint(cond_true); Value* yMinusX = builder.CreateSub(y, x, "tmp"); std::vector<Value*> args1; args1.push_back(x); args1.push_back(yMinusX); Value* recur_1 = builder.CreateCall(gcd, args1.begin(), args1.end(), "tmp"); builder.CreateRet(recur_1); builder.SetInsertPoint(cond_false_2); Value* xMinusY = builder.CreateSub(x, y, "tmp"); std::vector<Value*> args2; args2.push_back(xMinusY); args2.push_back(y); Value* recur_2 = builder.CreateCall(gcd, args2.begin(), args2.end(), "tmp"); builder.CreateRet(recur_2); return mod; }
And that's it! You can compile and execute your code in the same way as before, by doing:
# c++ -g tut2.cpp `llvm-config --cppflags --ldflags --libs core` -o tut2 # ./tut2 | http://llvm.org/releases/2.2/docs/tutorial/JITTutorial2.html | CC-MAIN-2015-35 | refinedweb | 949 | 56.96 |
Summary: Use Windows PowerShell to normalize names before importing data.
Microsoft Scripting Guy, Ed Wilson, is here. I have been reading Jane Austen Northanger Abbey this week in the evenings. I really love the way Jane Austen develops her characters. I also like some of the ways the characters express themselves. For example, “Give me but a little cheerful company…”
Indeed. I like to hang out with people who are cheerful, and who have a good attitude. It rubs off.
This is also how Windows PowerShell is. For example, rather than complaining about the person who designed a poor database—who did not implement strict data input controls, and rather, provided a blank unencumbered text box for data input, I can be cheerful and glad that I have Windows PowerShell to help me out. Indeed, Windows PowerShell makes quick work of text manipulation.
As you will recall from my recent post, Use PowerShell to Read Munged Data, the names in the data import file are in poor shape. The first names and last names are in a single field. In some cases, the names are first name first and last name last. In other cases, they are reversed. Some are uppercase with lowercase, and others are all lowercase. It seems that just about every way of inputting the data has been used to create the file.
In the following image, I can see the different ways that the names appear:
The first thing I do is import the CSV file and store the created custom objects in the $datain variable. This is shown here:
$datain = import-csv C:\DataIn\DataDump.csv
I now use a Foreach command to walk through the collection of custom objects:
Foreach ($d in $datain)
{
By looking at my data, I can see that if I have a comma, the name is Last name, First name. First name Last name does not use a comma. So I use the If statement to look for a comma. If there is a comma, I enter a script block to fix the name. This is shown here:
If($d.name -match ',')
{
The easiest way to title cap the name (that is, the first letter is capitalized and the remaining letters in the name are lowercase) is to use the ToTitleCase method from the TextInfo class from the CultureInfo class that is found in the System.Globalization namespace.
Luckily, I can gain access to this by using the Get-Culture cmdlet and accessing the TextInfo property. (Accessing the TextInfo property returns a TextInfo object, and that gives me the ToTitleCase method.) I call this method to title cap the name. I then split the name at the comma (and create two elements in an array). Then I trim the elements to remove any leading or trailing spaces. This line is shown here:
$name = (Get-Culture).TextInfo.ToTitleCase($d.name).Split(',').trim()
Now I take my elements, plug them into the Lname and Fname fields of a custom object, and I am done with the record. This is shown here:
[PSCustomObject]@{
Lname = $name[0]
Fname = $name[1] }}
If the name field does not include a comma, it means the name is First name, Last name. I could also title cap the names if I need to, but for this example, I leave that out. I split the field into two elements of an array, and plug them into custom properties to make a new object.
Note If not specified, the Split method splits at a blank space.
Here is the code to split the First name, Last name entries:
ELSE {
$name = $d.Name.Split().trim()
[PSCustomObject]@{
Fname = $name[0]
Lname = $name[1] } }
The complete FixName.ps1 script is shown here:
If($d.name -match ',')
{
$name = (Get-Culture).textinfo.ToTitleCase($d.name).Split(',').trim()
ELSE {
}
When I run it, the following output appears:
That is all there is to using Windows PowerShell to fix user names before importing data. Data Manipulation Week will continue tomorrow when I will talk about fixing the | http://blogs.technet.com/b/heyscriptingguy/archive/2014/08/27/use-powershell-to-normalize-names-before-data-import.aspx | CC-MAIN-2015-18 | refinedweb | 670 | 72.97 |
Book Review: Building Websites with
VB.NET and DotNetNuke 3.0
by
Peter A. Bromberg, Ph.D.
I don't normally write about or review books focusing on VB.NET, as it's not my preferred programming language. In fact, I could probably be blamed for contributing my two cents to the flame wars about VB.NET vs. C#. However, the fact of the matter is that if used properly, VB.NET is a pretty much full-fledged member of the .NET Framework programming language family. As a side note, when I say "used properly", what I mean specifically is to have Option Strict and Option Explicit "ON" at all times, and preferably, remove any and all references to namespaces with "VisualBasic" in their names so as to be able to write CLS-compliant assemblies.
In the case of "Building Websites with VB.NET and DotNetNuke 3.0" I digress from my normal programming "posture" because I believe this is a truly excellent book, no matter what your preferred programming language.
The reason DotNetNuke became so popular with ASP.NET developers is because it provides a templatized, pre-made framework for developing database-driven portal web sites with all the features one would ever expect to have, already built-in. It's easily customizable, "skinnable", and extensible. In fact, a whole control development community has sprung up around it. One of the interesting things I've seen with DotNetNuke is that some smart developers have even been able to make a full time job out of it. What they do is use DotNetNuke to set up a complete portal site for a customer for a fixed fee, and then they charge by the hour for further customization. In this way, they can pick up a quick $250.00 for setting up the basics, which can easily turn into $1,000 to $5,000 additional fees for the customization.
In addition to the fact that it is open source and well supported by a community of developers, DotNetNuke uses the ASP.NET 2.0 Provider model, which is highly extensible. It also comes pre-packaged with modules for discussions, events, news feeds, links, contact, FAQ, announcements and others. It also separates page layout, page content, and application logic in a very OOP - oriented manner, supports custom "Skins", and multiple portals with a single database. It's also been recognized by the Microsoft team as a "best practices" application. That means it uses quality coding techniques, some of which include the little tidbits I alluded to in my first paragraph above.
If you work with ASP.NET and VB.NET, and want an interactive website, with forums, news and image management, where visitors can register, participate and contribute to your site, then DotNetNuke is for you. This book is a complete guide to creating content-rich websites with DotNetNuke 3.0, as quickly as possible.
The first part of this book gives you a thorough understanding of working with basic DotNetNuke sites, guiding you through the features and giving you the confidence to create and manage your own site. After that, you will get to the heart of DotNetNuke, and learn about its core architecture. From there, you will learn how to customize DotNetNuke sites through skinning and creating custom modules.
The subjects of enhancing your site with forums and ecommerce functionality, creating multiple sites, and deploying your site round off the book. Each of these topics is covered in detail as you step through the development of a DotNetNuke 3.0 site.
Developers can use this book to help you set up and administer a DotNetNuke portal, even if you have a limited knowledge of ASP.NET. You will learn how to setup and administer an example site, stepping through all the tasks to ease your learning.
This book will help you extend the DotNetNuke portal by first helping you understand how the core framework works and then show you how to create custom modules and skins. A rudimentary knowledge of VB.NET programming is assumed; however the emphasis is not on becoming a better VB.NET programmer but on taming DotNetNuke.
No prior knowledge of DotNetNuke is assumed.
The new features of DotNetNuke 3.0 are discussed extensively, so even if you have worked with previous versions of DotNetNuke, you will find something new.
I give this one a hearty "Thumbs Up" -- not only does it cover the subject extensively, it also promotes best-practices programming methodology. Pick up a copy of "Building Websites with
VB.NET and DotNetNuke 3.0", by Dan Egan from Packt Publishing (list price $39.95 US) from your favorite bookseller or online store such as Amazon.com or BookPool.com.
Buy this book: | http://www.nullskull.com/articles/20050806.asp | CC-MAIN-2015-14 | refinedweb | 789 | 65.62 |
Issue #1: Can’t generate tests for a Web Application Project
Workaround
1. Add the Web Application Project Assembly via the “Add Assembly…” button (located on the bottom left of the dialog).
a. After adding the assembly, you may get an error message box saying: “One or more of assemblies you selected could not be loaded. Please check those assemblies are valid and try again:”; just hit OK; and afterwards, you will be able to select the methods that you want to generate tests against.
2. After generating the tests, you must add the correct namespace (Microsoft.VisualStudio.TestTools.UnitTesting.Web) and the proper attributes to the test method for the test to execute properly. The required attributes include [HostType] (set to ASP.NET), [AspNetDevelopmentServerHost] (set to the path to the web site), and [UrlToTest] (set to the URL to test). For example:
/// <summary>
///A test for Class1ConstructorTest
///</summary>
[TestMethod()]
[HostType("ASP.NET")]
[AspNetDevelopmentServerHost("%PathToWebRoot%\\WebSite2", "/WebSite2")]
[UrlToTest("")]
public void Class1ConstructorTest()
{
}
3. That’s it, you’re ready to go!
Issue #2: When running ASP.NET unit tests against an ASP.NET Development Server (A.K.A. Cassini), they fail with the following error message: “Could not find WebDev.Webserver.exe”
3. Issue #3: Can’t generate tests for an ASP.NET Web Site Project
Unfortunately, we don’t have a workaround for this problem. The only thing that will work in this scenario is having a Visual Studio Team System 2005 web site project, and opening it in Orcas (going through the conversion wizard, which should be automatic when opening any old project in Orcas).Hope this helps.
Thanks,David Gorena Elizondo | http://blogs.msdn.com/b/vstsqualitytools/archive/2007/04/26/known-issues-and-workarounds-for-orcas-beta1-asp-net.aspx | CC-MAIN-2013-48 | refinedweb | 273 | 58.79 |
Introduction
This is a project I have been planning for a while. I want to know if my caravan is moving (when it shouldn't be!) and also to know where it is at any time. I purchased an Arduino MKR GSM 1400 some time ago and all I have done so far is check that it worked by loading the Blink sketch. So I thought it was time to put it to use.
I want to use the GSM 1400 together with a NEO-6M GPS module to get the GPS position and send location-based text messages. I want to locate it in my caravan, so I need to make a case for everything to fit in.
Project Steps
connect everything up
write a script
test it works
test the power consumption
make a case
locate in caravan
test it still works.
Parts list
Arduino GSM MKR1400
GY-GPS6MV2 board
usb cable to connect to pc
1500Mah Lipo battery
SIM card
8 screws
piece of ribbon for battery holder
Parts added as afterthoughts
Perf board 90mm x 70mm
female headers
Oled display
RGB LED
Software required
Connecting everything up
First insert the Sim Card.
Then connect the GPS. The GPS I am using has 4 pins VCC, GND, TX and RX.
Here is the connection diagram.
Script plan
So with everything connected up I want to write the script. The script needs to perform the following functions:
- Gsm1400 to initialise
- get gps location.
- store this location as the home position
- send location to pre planned mobile phone
- wait for a response from phone
- check correct phone was used
- react to responses the planned responses are;
STORE tells gsm1400 to monitor current position and report changes
HALT tells gsm1400 to stop monitoring because I’m towing the caravan.
TEST tells gsm1400 to send current location
- if gps position changes send location to planned phone
Before you can write the script you need to load the necessary libraries. I opted for the TinyGPSPlus library, which can be found on GitHub here.
Download the zip file.
Then in Arduino IDE Sketch > Include Library > Add ZIP Library
Find your download and click open
The library will be installed and you can check this by looking at the examples under the File menu
Next you need to install the MKRGSM library for the GSM 1400, which can be found under Tools > Manage Libraries…
When it is finished the version number and INSTALLED will appear alongside the author.
The script
With the libraries installed select the board -
We now also have the sample sketches for MKR1400 available. I opened up and tested the sketches ReceiveSMS and SendSMS and they worked really well.
I also tested the TinyGPSplus BasicExample and this also worked well.
I stripped down the three samples and this formed the basis of my sketch, which I have annotated as best I can. In some places I am aware the coding could be improved, and I would appreciate any pointers to this end. I noticed that the latitude and longitude readings constantly fluctuate even when the GPS isn't moving, so simply storing the coordinates and then checking later for an exact match wouldn't work. Instead, I decided to store the latitude and longitude but only alert if either value changes by more than 0.0003 degrees.
So the sketch starts the GSM and then the GPS. It gets and stores the current position and sends a text message containing the stored location. It then monitors the GPS position and, if it has changed (as discussed above), it sends a text message every 2 minutes with the latest GPS position. The sketch also monitors the SMS feed, waiting to receive one of three messages: Store, Halt or Test. Texting Halt to the GSM phone number pauses monitoring of the GPS location and would be used when you want to move the caravan. Texting Store takes the current GPS location, makes it the new stored location, and GPS monitoring continues. A text message containing Test gets Cara-duino to send the current position. Once the status is Moving, a Halt message must be sent before sending Store again.
Here's the code ... there are lots of Serial.print lines which are only there for error proofing. Now that I am happy it is working I could remove these lines.
#include <TinyGPS++.h>       // GPS library
#include <MKRGSM.h>          // GSM library
#include <ArduinoLowPower.h>
#include <Adafruit_SSD1306.h> // OLED display library
#include <Wire.h>            // I2C library

// The TinyGPS++ object
TinyGPSPlus gps; // Create a GPS object

Adafruit_SSD1306 display(4);

// initialize the GSM library instance
GSM gsmAccess;
GSM_SMS sms;

// variables to control text message responses
//bool responded = true; // should be initialised to false; set to true to avoid unnecessary sms
bool msgreceived = false; // not currently used
char admin_phone[14] = {'+', '4', '4', '1', '2', '3', '4', '5', '6', '7', '8', '9', '0'}; // the only number that can be used to send a message
char senderNumber[14]; // to check against admin number
int mysatnum;          // number of satellites
float mylong;          // longitude
float mylat;           // latitude
String smspos;         // text message to send to admin
// connection state
bool connected = false; // GSM status
float StoredLat;        // latitude of stored position
float StoredLong;       // longitude of stored position
String myStatus = "wait for gps"; // status of caravan
char MsgRcd = 'S';      // last message received
char PrvMsgRcd;         // previous message received; used to revert status when Test received
unsigned long currentmillis = 0;  // used for mydelay
unsigned long previousmillis = 0; // used for mydelay
unsigned long myinterval = 0;     // used for mydelay
int lastupdate = 0;
String oledMessage1; // OLED display messages
String oledMessage2;
String oledMessage3;
int redpin = 3;
int greenpin = 4;
int bluepin = 5;

void getpos() {
  // keep reading Serial1 until gps.time.minute no longer matches the last recorded update time.
  // This means the GPS has to get an update to move out of this loop, or if no update from the GPS
  // for 3 minutes then keep reading until an update is received
  while (Serial1.available() && (gps.time.minute() == lastupdate) || Serial1.available() && (gps.time.minute() - lastupdate >= 3)) {
    gps.encode(Serial1.read());
  }
  // update variables
  lastupdate = gps.time.minute(); // last recorded GPS update time
  mysatnum = gps.satellites.value();
  mylong = gps.location.lng();
  mylat = gps.location.lat();
  smspos = myStatus + " " + String(mylat, 6) + " " + String(mylong, 6);
  Serial.println("getpos executed");
  myPrint();
}

// send sms to admin phone
void mySmsSend() {
  // sms.beginSMS(admin_phone);
  // sms.print(smspos);
  // sms.endSMS();
  smspos = "";
  Serial.println("mySmsSend complete!");
}

// used mostly for error trapping
void myPrint() {
  Serial.print(gps.time.hour());
  Serial.print(":");
  Serial.print(gps.time.minute());
  Serial.print(" ");
  Serial.print(StoredLat, 6);
  Serial.print(" ");
  Serial.println(StoredLong, 6);
  Serial.print(mylat, 6);
  Serial.print(" ");
  Serial.print(mylong, 6);
  Serial.print(" ");
  Serial.print(gps.location.isValid());
  Serial.print(" ");
  Serial.println(mysatnum);
  Serial.print(smspos);
  Serial.print(" ");
  Serial.print("Last message received = ");
  Serial.println(MsgRcd);
  Serial.print("checksum failures ");
  Serial.println(gps.failedChecksum());
  oledMessage1 = String(mylat, 6) + " " + String(mylong, 6);
  oledMessage2 = smspos;
  oledMessage3 = myStatus + " " + MsgRcd;
  oled();
}

// when SMS = S this routine is used to store the location
void storepos() {
  StoredLat = mylat;
  StoredLong = mylong;
  Serial.print(StoredLat);
  Serial.println(" Storepos executed");
}

// check the stored location matches the current location; has a degree of adjustment to allow for gps fluctuations.
// Messages sent to the phone are full GPS coordinates so the exact location is known.
void checkpos() {
  if (StoredLat >= (mylat + .0003) || StoredLat <= (mylat - .0003) || StoredLong >= (mylong + .0003) || StoredLong <= (mylong - .0003)) {
    myStatus = "Moving";
  } else {
    myStatus = "Static";
  }
  Serial.println("checkpos executed");
  Serial.println(myStatus);
}

// check sms messages
void checksms() {
  connected = false;
  while (!connected) {
    if (gsmAccess.begin() == GSM_READY) {
      connected = true;
      Serial.println("Connected");
    } else {
      Serial.println("Not connected");
      delay(1000);
    }
  }
  // If there are any SMSs available
  Serial.println("checking for SMS");
  if (sms.available()) {
    Serial.println("Message Received");
    // Get remote number
    sms.remoteNumber(senderNumber, 14);
    Serial.print(senderNumber);
    Serial.println("|");
    Serial.print(admin_phone);
    Serial.println("|");
    // test remote number matches admin phone
    bool test = false;
    int x = 0;
    for (x = 0; x < 13; x++) {
      Serial.print(senderNumber[x]);
      Serial.print(" ");
      Serial.println(admin_phone[x]);
      if (senderNumber[x] == admin_phone[x]) {
        test = true;
      } else {
        test = false;
        break;
      }
    }
    Serial.println(test);
    // Test message is valid
    if (test == true) {
      PrvMsgRcd = MsgRcd;
      MsgRcd = char(sms.peek());
      if (MsgRcd == 'T' || MsgRcd == 'H' || MsgRcd == 'S') {
        Serial.println("Message received okay");
      } else {
        MsgRcd = PrvMsgRcd;
        Serial.println("Message ignored");
      }
      sms.flush();
      test = false;
      Serial.print("Message received ");
      Serial.println(MsgRcd);
    } else {
      sms.flush();
    }
  }
}

// delay routine because the standard delay seems to interfere with Serial1
void mydelay() {
  currentmillis = millis();
  previousmillis = millis();
  while ((unsigned long)(currentmillis - previousmillis) <= myinterval) {
    currentmillis = millis();
  }
}

void oled() {
  // Clear the buffer.
  display.clearDisplay();
  display.setTextSize(1);
  display.setTextColor(WHITE);
  display.setCursor(0, 0);
  display.println(oledMessage1);
  display.println(oledMessage2);
  display.println(oledMessage3);
  display.display();
}

void setup() {
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C); // initialize with the I2C addr 0x3C (use 0x3D for the 128x64)
  display.clearDisplay();
  pinMode(redpin, OUTPUT);
  pinMode(greenpin, OUTPUT);
  pinMode(bluepin, OUTPUT);
  analogWrite(redpin, 255);
  analogWrite(greenpin, 0);
  analogWrite(bluepin, 0);
  delay(1000);
  Serial.begin(115200); // connect usbserial
  Serial1.begin(9600);  // connect serial1 on the tx and rx pins
  // while (!Serial) { // wait for serial port to connect. Serial required for error proofing only. Serial doesn't work
  //   ;               // when the battery is connected, so these lines have to be commented out;
  // }                 // otherwise caraduino will forever stay in this loop waiting for serial.
  // connect to GSM service
  while (!connected) {
    if (gsmAccess.begin() == GSM_READY) {
      connected = true;
      Serial.println("Connected");
    } else {
      Serial.println("Not connected");
      delay(1000);
    }
  }
  Serial.println("GSM initialized");
  Serial.println("Reached initial gps store");
  // get position
  while (mylat == 0) {
    gps.encode(Serial1.read());
    Serial.println("getting first fix");
    getpos();
  }
  storepos(); // store position
  Serial.print("Stored pos");
  Serial.print(StoredLat);
  Serial.print(" ");
  Serial.println(StoredLong);
  lastupdate = gps.time.minute();
  smspos = "Stored " + String(StoredLat) + " " + String(StoredLong);
  Serial.println(smspos);
  analogWrite(redpin, 0);
  analogWrite(greenpin, 255);
  analogWrite(bluepin, 0);
}

void loop() {
  switch (MsgRcd) {
    case 'H': // Halt means stop monitoring position while moving from one location to another
      smspos = "Halt message received. Position monitoring has stopped";
      mySmsSend();
      while (MsgRcd == 'H') {
        Serial.println("Halt mode checking SMS");
        oledMessage1 = "Halt mode";
        oledMessage2 = " ";
        oledMessage3 = " ";
        oled();
        analogWrite(redpin, 0);
        analogWrite(greenpin, 0);
        analogWrite(bluepin, 255);
        myinterval = 30000;
        mydelay();
        checksms();
        myPrint();
      }
      break;
    case 'S': // Store means remember position and monitor
      analogWrite(redpin, 0);
      analogWrite(greenpin, 255);
      analogWrite(bluepin, 0);
      if (PrvMsgRcd != 'S') {
        getpos();
        storepos();
        smspos = "Stored " + String(StoredLat, 6) + " " + String(StoredLong, 6);
        Serial.println(smspos);
        mySmsSend();
        checkpos();
      }
      // just monitor position and check sms
      while (myStatus == "Static" && MsgRcd == 'S') {
        checksms();
        getpos();
        checkpos();
        Serial.println("Static and Stored checking for SMS");
        myinterval = 30000;
        mydelay();
      }
      // if checkpos determines the caravan is moving, send text messages with the current position
      while (myStatus == "Moving" && MsgRcd == 'S') {
        getpos();
        smspos = myStatus + " " + mylat + " " + mylong;
        mySmsSend();
        oledMessage1 = String(mylat, 6) + " " + String(mylong, 6);
        oledMessage2 = smspos;
        oledMessage3 = myStatus + " " + MsgRcd;
        oled();
        smspos = "";
        myinterval = 120000;
        mydelay(); // wait 2 minutes before checking again
        checksms();
        Serial.println("Static and Moving checking for SMS");
      }
      break;
    case 'T': // send current location
      getpos();
      smspos = "Test requested " + String(mylat, 6) + " " + String(mylong, 6);
      mySmsSend();
      smspos = "";
      MsgRcd = PrvMsgRcd; // revert back to previous status
      break;
  }
}
Some points to note. Firstly, the battery lasts no time at all, so I haven't used the ArduinoLowPower library; it just wouldn't make enough difference, and the battery is only there as a backup if main power is lost. Any kind of constant use would need a much bigger battery. Fortunately my caravan has a bigger battery and also a solar panel, so I just need to work out how to connect the device to my caravan.
The second point is that once a battery is connected the usbserial stops working which makes it very hard to error trap so this led me to modify my design by adding an OLED into the design.
For this I used the Adafruit SSD1306 library and cut their examples down to the bare bones, which amounts to only a few lines of code that you can see in the oled() routine in the sketch.
Library can be found here.
With the sketch working I wanted to get away from the jumper cables and have something more stable. So I started work on attaching everything to a piece of perf board. First try wasn't very elegant
See photo.
As you can see it isn't pretty and also it needed solder both sides which is nigh on impossible.
This is my second attempt, which I think is much better; it also includes the RGB LED as a status indicator.
Enclosure
So now I need an enclosure. I thought about disguising it as a Bluetooth speaker but in the end I want to hide the device out of sight if I can, so I opted for a plain box.
I drew it on FREECAD and printed it.
I have made the box bigger than required because I am thinking about how I could incorporate a PIR sensor, so I have left enough room for whatever I come up with. I am hiding the OLED display under the lid (I will probably remove it and just leave the socket empty for future use, as this will save energy). I realised that this would mean there was no visual indication of what was going on inside the box, so I have made a hole in the lid to incorporate an LED. This means a slight change to the sketch and the circuit.
So the sketch change involved making the LED turn to red while waiting for connection, Change to green once working and change to blue while in halt mode.
Here is the circuit board installed in the enclosure. One problem with my enclosure design is that the gap between the battery and the usb socket is too small and only one of my usb power supplies fits. I have redesigned the enclosure slightly larger but haven't had time to print it.
So this is it working
And finally it working in its enclosure
I have tested it in the car going shopping but because of the lockdown I haven't been able to go to the caravan to test it. So I don't know what kind of impact the caravan will have on the signals or where I will have to site it in the caravan to make it work.
I think it works quite nicely but time will tell.
Future Considerations
I guess I shouldn't hint at the fact that my project isn't perfect. Yes it works and I think it works well. However like most things it could be improved or at least upgraded.
The first upgrade, I think, should be the PIR sensor mentioned earlier: a text message sent when the PIR is triggered would make a good early warning.
Secondly I was thinking that it is impossible to hitch up a caravan without changing its angle so a gyroscope could be a good addition.
I also wondered about number spoofing. I really think this is unlikely with my caravan as its value isn't that high, but some of the more expensive units might make a thief think a bit harder. That said, firstly they would need to know my number, which is possible I suppose because I get calls from the accident solicitors, but they would also need the Cara-duino number, which can be kept secret. If they somehow got the number, one solution would be to modify the sketch to include a password. This got me thinking a bit more, and I wondered if the sketch should warn when a valid text is received from an unauthorised number. That would be fairly simple to do, but again number spoofing could trick it. I don't know how number spoofing is done or how hard it is to do, so I don't really know if this should even be a consideration.
In Python, how can I parse a numeric string like "545.2222" to its corresponding float value, 545.2222? Or parse the string "31" to an integer, 31?

I just want to know how to parse a float string to a float, and (separately) an int string to an int.
def isfloat(value):
    try:
        float(value)
        return True
    except (ValueError, TypeError):
        return False
A longer and more accurate name for this function could be:
isConvertibleToFloat(value)
val isfloat(val) Note -------------------- ---------- -------------------------------- "" False Blank string "127" True Passed string True True Pure sweet Truth "True" False Vile contemptible lie False True So false it becomes true "123.456" True Decimal " -127 " True Spaces trimmed "tn12rn" True whitespace ignored "NaN" True Not a number "NaNanananaBATMAN" False I am Batman "-iNF" True Negative infinity "123.E4" True Exponential notation ".1" True mantissa only "1,234" False Commas gtfo u'x30' True Unicode is fine. "NULL" False Null is not special 0x3fade True Hexidecimal "6e7777777777777" True Shrunk to infinity "1.797693e+308" True This is max value "infinity" True Same as inf "infinityandBEYOND" False Extra characters wreck it "12.34.56" False Only one dot allowed u'四' False Japanese '4' is not a float. "#56" False Pound sign "56%" False Percent of what? "0E0" True Exponential, move dot 0 places 0**0 True 0___0 Exponentiation "-5e-5" True Raise to a negative number "+1e1" True Plus is OK with exponent "+1e1^5" False Fancy exponent not interpreted "+1e1.3" False No decimals in exponent "-+1" False Make up your mind "(1)" False Parenthesis is bad
You think you know what numbers are? You are not so good as you think! Not big surprise. | https://www.dowemo.com/article/70322/in-python,-resolve-how-to-resolve-strings-to-float | CC-MAIN-2018-26 | refinedweb | 277 | 72.05 |
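The original question also asks about the integer case; a similar guarded pattern works there (this helper and its name are my own illustration, not from the answer above):

```python
def try_parse_int(value):
    """Return the int value of a string, or None if it cannot be parsed."""
    try:
        return int(value)
    except (ValueError, TypeError):
        return None

print(try_parse_int("31"))        # 31
print(try_parse_int("545.2222"))  # None - int() rejects decimal strings
print(try_parse_int(" -7 "))      # -7 - surrounding whitespace is fine
```

Returning None rather than a sentinel number keeps falsy values like 0 unambiguous: check the result with `is None`.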
In this post we will run through a backtest of a simple mean-reversion strategy that buys stocks which have "gapped down" at the market open and liquidates them at the close.
The strategy rules are as follows:
1) Select all stocks near the market open whose returns from their previous day’s lows to today’s opens are lower than one standard deviation. The standard deviation is computed using the daily close-to-close returns of the last 90 days. These are stocks that “gapped down”.
2) Narrow down this list of stocks by requiring that their open prices be higher than the 20-day moving average of the closing prices.
3) Liquidate the positions at the market close.
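Before running these rules on real data, the two entry conditions can be sketched on a toy price series. The column names and the dummy standard deviation below are my own illustration, chosen to mirror the backtest code that follows:

```python
import pandas as pd

df = pd.DataFrame({
    "Open":  [100.0, 101.0, 95.0],
    "Low":   [ 99.0, 100.0, 94.0],
    "Close": [101.0, 102.0, 96.0],
})

stdev = 2.0  # stands in for the 90-day rolling standard deviation

# rule 1: today's open sits below yesterday's low by more than one standard deviation
gap_down = (df["Open"] - df["Low"].shift(1)) < -stdev

# rule 2: today's open is above the moving average of closes (window shortened for the toy data)
above_ma = df["Open"] > df["Close"].rolling(window=2).mean()

print(gap_down.tolist())  # [False, False, True] - only the third day gaps down
```

Note that the first day is False by construction: `shift(1)` produces a NaN for the first row, and a comparison against NaN evaluates to False.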
So we will first begin with our necessary module imports as follows:
import pandas as pd
import numpy as np
from pandas_datareader import data
from math import sqrt
I will be running this backtest using the NYSE stock universe, which contains 3,159 stocks. You can download the ticker list by clicking on the download button below.
Once you have that file stored somewhere, we can feed it in using pandas, and set up our stock ticker list as follows:
# make sure the NYSE.txt file is in the same folder as your python script file
stocks = pd.read_csv('NYSE.txt', delimiter="\t")

# set up our empty list to hold the stock tickers
stocks_list = []

# iterate through the pandas dataframe of tickers and append them to our empty list
for symbol in stocks['Symbol']:
    stocks_list.append(symbol)
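As an aside, pandas can build the same list without an explicit loop; the loop is arguably clearer for beginners, but this one-liner is equivalent (shown here on a small stand-in DataFrame rather than the real NYSE.txt file):

```python
import pandas as pd

# stand-in for pd.read_csv('NYSE.txt', delimiter="\t")
stocks = pd.DataFrame({"Symbol": ["A", "AA", "AAC", "AAN", "AAP"]})

# tolist() converts the Symbol column straight to a plain Python list
stocks_list = stocks["Symbol"].tolist()
print(stocks_list)  # ['A', 'AA', 'AAC', 'AAN', 'AAP']
```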
As a quick check to see if they have been fed in correctly:
len(stocks_list)
should produce
3159
and
stocks_list[:5]
Should produce:
['A', 'AA', 'AAC', 'AAN', 'AAP']
Ok great, so now we have our list of stocks that we wish to use as our “investment universe” – we can begin to write the code for the actual backtest.
The logic of our approach is as follows: we will iterate through the list of stock tickers, and for each one download the relevant price data into a DataFrame. We then add a couple of columns to help us create signals showing when our two criteria are met (a gap down larger than one 90-day rolling standard deviation, and an opening price above the 20-day moving average).
We will then use these signals to create our return series for that stock, and then store that information by appending each stocks return series to a list. Finally we will concatenate all those return series into a master DataFrame and calculate our overall daily return.
# create empty list to hold our return series DataFrame for each stock
frames = []

for stock in stocks_list:
    try:
        # download stock data and place in DataFrame
        df = data.DataReader(stock, 'yahoo', start='1/1/2000')

        # create column to hold our 90 day rolling standard deviation
        df['Stdev'] = df['Close'].rolling(window=90).std()

        # create a column to hold our 20 day moving average
        df['Moving Average'] = df['Close'].rolling(window=20).mean()

        # create a column which holds a TRUE value if the gap down from the previous
        # day's low to today's open is larger than the 90 day rolling standard deviation
        df['Criteria1'] = (df['Open'] - df['Low'].shift(1)) < -df['Stdev']

        # create a column which holds a TRUE value if the opening price of the stock
        # is above the 20 day moving average (note: as discussed in the comments below,
        # shifting the moving average by one day would avoid look-ahead bias from
        # including today's close in the average)
        df['Criteria2'] = df['Open'] > df['Moving Average']

        # create a column that holds a TRUE value if both of the above criteria are also TRUE
        df['BUY'] = df['Criteria1'] & df['Criteria2']

        # calculate daily % return series for stock
        df['Pct Change'] = (df['Close'] - df['Open']) / df['Open']

        # create a strategy return series by using the daily stock returns where the trade criteria above are met
        df['Rets'] = df['Pct Change'][df['BUY'] == True]

        # append the strategy return series to our list
        frames.append(df['Rets'])
    except:
        pass
Now this stock list has over 3000 stocks in it, so expect this code to take a bit of time to run…I believe mine took about 15-20 minutes to run when I tried it, so try to be a bit patient.
Once the code has run and we have our list filled with all the individual strategy return series for each stock, we have to concatenate them all into a master DataFrame and then calculate the overall daily strategy return. This can be done as follows:
# concatenate the individual DataFrames held in our list - and do it along the column axis
masterFrame = pd.concat(frames, axis=1)

# create a column to hold the sum of all the individual daily strategy returns
masterFrame['Total'] = masterFrame.sum(axis=1)

# create a column that holds the count of the number of stocks that were traded each day -
# we minus one from it so that we don't count the "Total" column we added as a trade
masterFrame['Count'] = masterFrame.count(axis=1) - 1

# divide the "Total" strategy return each day by the number of stocks traded
# that day to get the equally weighted return
masterFrame['Return'] = masterFrame['Total'] / masterFrame['Count']
So now we have a return series that holds the strategy returns based on trading the qualifying stocks each day, in equal weight. If 2 stocks qualified, we would weight each stock at 50% in our portfolio for example.
So all that’s left to do now, is to plot the equity curve and calculate a rough Sharpe Ratio and annual return.
masterFrame['Return'].dropna().cumsum().plot()
The Sharpe Ratio (excluding the risk free element for simplicity) can be calculated as follows:
(masterFrame['Return'].mean() *252) / (masterFrame['Return'].std() * (sqrt(252)))
which gets us:
2.176240875776992
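To make this reusable, the same calculation can be wrapped in a small helper function; the return series below is made-up sample data, purely to show the call:

```python
from math import sqrt

import pandas as pd

def sharpe_ratio(daily_returns, periods=252):
    """Annualised Sharpe ratio, ignoring the risk-free rate as in the post."""
    return (daily_returns.mean() * periods) / (daily_returns.std() * sqrt(periods))

# illustrative daily returns only - not the strategy's actual results
rets = pd.Series([0.001, -0.002, 0.003, 0.0005, 0.002])
print(round(sharpe_ratio(rets), 2))
```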
and the annual return can be calculated as:
# number of calendar days the backtest spans
days = (masterFrame.index[-1] - masterFrame.index[0]).days

(masterFrame['Return'].dropna().cumsum()[-1] + 1) ** (365.0 / days) - 1
Which gets us:
0.088146958591373892
So a Sharpe Ratio of over 2 and an annual return of around 8.8% – that’s not too shabby!!
Of course, we have to remember that we are not taking into account any transaction costs, so those returns could be quite heavily affected in a real-world setting. Also, this strategy logic assumes we can buy the stocks that have gapped down exactly at their opening price, and assumes we always achieve the closing (settlement) price on selling at the end of the day, which of course wouldn't be the case.
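As a rough illustration of the transaction-cost point, a flat per-trade cost can simply be subtracted from each day's strategy return before compounding. The 10-basis-point round-trip figure below is my own assumption, not something from the post:

```python
import pandas as pd

daily_rets = pd.Series([0.004, -0.002, 0.006])  # toy daily strategy returns
round_trip_cost = 0.001  # assumed 10bp to buy at the open and sell at the close

# net return after deducting the cost from every trading day
net_rets = daily_rets - round_trip_cost
print([round(r, 3) for r in net_rets])  # [0.003, -0.003, 0.005]
```

Even a small per-trade charge like this compounds quickly for a strategy that trades every day, which is why the headline Sharpe and annual return figures above should be treated as an upper bound.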
I’ll leave it up to you guys and girls to delve more deeply into the strategy returns – you can use my previous blog post where I analysed the returns of our moving average crossover strategy as inspiration. That post can be found here
Hi,
Thanks for the post. We are working on a high performance data analytics framework in python and would like to use your codes as examples. Are we allowed to use the material? Is there a license for this material?
Thanks,
Ehsan
Hi Ehsan – thanks for the kind words. I write this blog just for my own amusement, so no license is needed to re-use the code, please feel free to do so. All I would ask is that, if possible, you reference my blog as the source so that I may possibly attract more traffic. That’s up to you though 😉
I would be very interested to see the outcome of/hear more about your project, it sounds very interesting!
Of course, I’ll add a reference to this post. Here is the link to the example in the project:
HPAT will compile this code (with minimal changes) automatically to run efficiently on clusters.
Looks great! Thanks for the mention too…much appreciated!
Hi S666 I was using your codes to test
I noticed something because this is taking Open to Close change, the line below should add a shift(1)?
df[‘Criteria2’] = df[‘Open’] > df[‘Moving Average’].shift(1)
Because if you dont you will be taking in today close price (But we are buying at Open and cannot possibly know today close prices)
*I am pulling data from my own database, but your data source may have accounted for this already; if so, please ignore me. Thanks.
Hi Jerrickng – good spot, I believe you are correct. If we are buying at the open price based upon the opening price being higher than the moving average, and we are using closing prices to calculate the moving average, we are in effect suffering from look forward bias as in real time we would not know the close price to use in the moving average calculation.
I shall change the code as soon as I get a moment.
nice blog!! …The best that I found about Python being used in Finance!!!
The only model which closely approximates financial markets is Geometric Brownian movement(GBM).Distance travelled under GBM is proportional to square root of time interval. Positive & negative shocks cancel each other over time in A diversified portfolio of stocks. On A net basis one can rarely beat the markets. According to option formula for A given stock S, if one month option costs 1 dollar then 4 month option on the same stock costs only 2 dollars because square root of 4 is two..
My question is whether following strategy is possibly sound in trading using computerized trading by A fund manager–
Computer puts in following order on stock “ S”.On the same ticket take profit & stop loss orders are always on the same side of current market price that day & not on opposite sides of current stock price.
1) Below the current price “P” put an order to buy that stock at “P minus 1d” with take profit at “P minus 1/2 d” & a stop loss at “P minus 2d”. This order is entered every day based on the current price that day until executed, whether at profit or with a loss, & the same process is repeated on a diversified portfolio of stocks, all by computer with no human intervention. Similar orders are placed on the upside to sell short every day based on current prices that day using the same principles by the computer. No directional bet is ever made.
2)Stock prices go through noise every day on intraday basis. Chances that buy order would get filled at distance of “P minus 1D” is 4 times compared to hitting stop loss at “ P minus 2D” within same period of time on the same ticket order. With intraday noise, reversion to the mean, take profit order would get hit more times than stop loss on the same ticket order.
3) Under GBM, out of 4 episodes, 3 times there would be profit earned of “1/2d” each & one time there would be loss of “ 1d”with net profit of “½ d” on these 4 executions over & over again both on the downside as well as on the upside. Unfilled orders are cancelled every day when stock exchange closes. New orders are entered every morning based on CURRENT PRICE of the stock that day. Distance d is adjusted depending upon historical volatility of the stock so that decent number of orders are getting executed—if too many orders are getting executed then value of “d” is increased to slow down executions.With decent number of executions laws of averages would apply. Risk is controlled by controlling how many stock orders are placed both on the upside & downside. No directional bet any time—all orders are non-directional ,automatic & computer generated based on current volatility.Risk is also controlled by trading smaller amount of fund assets relative to total assets.
With low transactional costs ,fund manager would make money.
I would greatly appreciate your input into this strategy
Hi S666!
Great blog! I’m learning a lot!
I have a question about relative returns, log returns, and adding returns. In another blog post you mention that relative returns aren’t able to be summed like log returns can. ()
But here, it looks like we are using relative returns:
#calculate daily % return series for stock
df[‘Pct Change’] = (df[‘Close’] – df[‘Open’]) / df[‘Open’]
Then later we sum them up and even cumsum them:
#create a column to hold the sum of all the individual daily strategy returns
masterFrame[‘Total’] = masterFrame.sum(axis=1)
…
masterFrame[‘Return’].dropna().cumsum().plot()
Should be we using log returns here?
Thanks!
It seems the link to the txt file is not working:
Forbidden
You don’t have permission to access /wp-content/uploads/delightful-downloads/2017/02/NYSE.txt on this server.
Is there a new link?
Regards.
Thanks for bringing that to my attention – I will look into it now and update once fixed!! Hopefully shouldn’t take too long!
Ok that should work now – when you click the button it will open the text file in your browser – you can just right click and select “save as” and then it will save as a text file onto your local machine. Hope you can access it now…if not, just let me know and I will send you the text file myself.
Thank you so much S666 for answering so fast. I’ll like to try your code, it looks great. Regards.
Super duper! Got it, thank you so much S666. I’ll try the code right now. Regards.
No problem :D….let me know if you come across any problems and I will try to help
Hi S666, I have a little problem, when I run this section:
#concatenate the individual DataFrames held in our list- and do it along the column axis
masterFrame = pd.concat(frames,axis=1)
#create a column to hold the sum of all the individual daily strategy returns
masterFrame[‘Total’] = masterFrame.sum(axis=1)
#create a column that hold the count of the number of stocks that were traded each day
#we minus one from it so that we dont count the “Total” column we added as a trade.
masterFrame[‘Count’] = masterFrame.count(axis=1) – 1
#create a column that divides the “total” strategy return each day by the number of stocks traded that day to get equally weighted return.
/usr/local/lib/python3.6/dist-packages/pandas/core/reshape/concat.py in init(self, objs, axis, join, join_axes, keys, levels, names, ignore_index, verify_integrity, copy)
243
244 if len(objs) == 0:
–> 245 raise ValueError(‘No objects to concatenate’)
246
247 if keys is None:
ValueError: No objects to concatenate
Any idea, what I’m doing wrong? I’m running on Google Colab Notebook 3.
Thank you for your help.
I am pretty sure I can guess what is going on – the message at the end “ValueError: No objects to concatenate” is the important one…it’s saying exactly that – that you actually have no DataFrame objects in your “frames” list to concatenate together..
It can be adapted to make it work again – I don’t know what level of ability/knowledge you have just at the moment but if I point you towards this package:
That is a working package that has been adapted to the new Yahoo API – do you feel comfortable adapting the code, installing the package and using it?
Hi S666, thank you for your guidance. Let me try with the package you said and I’ll let you know.
Thank you for sharing with all of us your expertise. I’m very interesting in using Python for stock trading.
Regards.
Hello S666, I found a solution for the data retrieval, this is the fix:
from pandas_datareader import data as pdr
import fix_yahoo_finance as yf
yf.pdr_override() # <== that’s all it takes 🙂
download dataframe
I think we are almost there but I think there is a little bug but I can’t find it.
Regards.
Hi there – i have noticed there is a bug in the code – WordPress has changed the formatting of some of the symbols – namely “<“,”>” and the ampersand sign
They have been changed (incorrectly) to “lt;”, “gt;” and “amp;” – (all with ampersands at the start too) so make sure you change them back!
Let me know if that doesn’t make sense…
jajaja, you were right S666.
It worked!!!!
I just had to define the days variable because it’s not defined anywhere.
Thank you for you help. Now I’ll try with more stocks and I’ll keep you informed.
Regards.
can i know for this column (masterFrame[‘Return’].dropna().cumsum()[-1]+1)**(365.0/days) – 1, what value should i put for ‘days’?
Hi S666, thanks for the blog !
Hi S666, | https://www.pythonforfinance.net/2017/02/20/intraday-stock-mean-reversion-trading-backtest-in-python/?utm_source=rss&utm_medium=rss&utm_campaign=intraday-stock-mean-reversion-trading-backtest-in-python | CC-MAIN-2020-10 | refinedweb | 2,711 | 58.32 |
Results 1 to 2 of 2
Thread: How to sort Linked List?
- Join Date
- Mar 2011
- 2
- Thanks
- 0
- Thanked 0 Times in 0 Posts
How to sort Linked List?
I need to know how to sort my linked list "teams" below.Code:
package linkorderedlist; import java.util.*; /** * * @author jason.gladfelder */ public class Main { /** * @param args the command line arguments */ public static void main(String[] args) { LinkedList teams = new LinkedList(); Scanner scannerObject= new Scanner (System.in); System.out.println("Input a team, then press enter, then its number of wins, then enter again."); System.out.println("Only 4 teams permmitted, input done when finished."); int teamwinA, teamwinB, teamwinC, teamwinD; String teamA= scannerObject.nextLine(); teamwinA=scannerObject.nextInt(); String teamB= scannerObject.nextLine(); teamwinB=scannerObject.nextInt(); String teamC= scannerObject.nextLine(); teamwinC=scannerObject.nextInt(); String teamD= scannerObject.nextLine(); teamwinD=scannerObject.nextInt(); BufferedReader reader = new BufferedReader(input); while (team!=done) { teams.add(teamA+teamwinA); teams.add(teamB+teamwinB); teams.add(teamC+teamwinC); teams.add(teamD+teamwinD); } Collections.sort(teams); for (String teamsh : teams) { System.out.println("Grade = " + grade1); } // TODO code application logic here } }
- Join Date
- Sep 2002
- Location
- Saskatoon, Saskatchewan
- 17,025
- Thanks
- 4
- Thanked 2,668 Times in 2,637 Posts
I won't figure out what all the compiler errors are supposed to be, so I'll just just answer your question.
Use a generic LinkedList<String>. Then it will sort. Strings are Comparable by default, so no special Comparator needs to be written for it. Since you are combining data like names and wins and whatnots, you'd be better off with writing an object to deal with those and implement a Comparable yourself.PHP Code:
header('HTTP/1.1 420 Enhance Your Calm'); | http://www.codingforums.com/java-and-jsp/220795-how-sort-linked-list.html | CC-MAIN-2017-30 | refinedweb | 282 | 51.34 |
//**************************************
// Name: Sum of Three Numbers in Java
// Description:A simple program that I wrote in Java that will ask the user to give three integer numbers and then our program will find the total sum of three numbers based of the given values
//**************************************
sum_three.java
package hello;
import java.util.Scanner;
public class sum_three {
public static void main(String args[]){
Scanner input = new Scanner(System.in);
int sum=0;
int a=0,b=0,c=0;
System.out.println("Sum of Three Numbers in Java");
System.out.println();
System.out.print("Enter First Number : ");
a = input.nextInt();
System.out.print("Enter Second Number : ");
b = input.nextInt();
System.out.print("Enter Third Number : ");
c = input.nextInt();
sum = (a+b+c);
System.out.println();
System.out.println("The sum of " +a + "," + b + " and " +
c + " is " + sum + ".");
System.out.println();
System.out.println("\t End of Program");
}
}
Other 71 4/9
By Mike Smith on 3/31
By Mike Smith on 3/13
By Ben128 on
By Jake R. Pomperada on 6/15
By Jake R. Pomperada on 6/10
By Jake R. Pomperada on 5/3
By Jake R. Pomperada on 4/22
By Jake R. Pomperada on 4. | http://planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=7153&lngWId=2 | CC-MAIN-2020-24 | refinedweb | 197 | 53.58 |
This tutorial will help you how to append a strings in java.
Ads
In this tutorial we are going to discuss about appending string in java. Java provide a method called append() in java.lang.StringBuilder class, which appends the specified string to the character. StringBuilder are in replacement for StringBuffer where synchronization is applied that is used by single thread. StringBuilder principal operation is insert and append which take any type of data whether it is int, char, boolean etc. Methods summary for StringBuilder append() method which take any data type are as follows :
Syntax :
public StringBuilder append(String str)
The str is a string and this method returns a reference to this object. For example, if str refers to a string builder object whose current contents are "Rose", then the method call str.append("India") would cause the string builder to contain "RoseIndia", Now here is a simple example to illustrate the use of append() in java.
import java.lang.*; public class AppendString { public static void main(String[] args) { StringBuilder str = new StringBuilder("Rose "); System.out.println("string = " + str); // appends the string str.append("India"); // after appending prints the string System.out.println("After appending = " + str); str = new StringBuilder("0289"); System.out.println("string = " + str); // appends the string argument to the StringBuilder str.append(" hi "); // print the StringBuilder after appending System.out.println("After append = " + str); } }
In the above code creating a instance of StringBuilder class and initialize the content with "Rose ", by append() method which take string as argument to append the specified string to the content of str, like that we can add integer, float, character etc to the string sequence.
Output of the program :
Advertisements
Fee:
Rs. 20,000 US$ 300
Today: Rs. 10,000 US$150
Course Duration: 30 hrs
Posted on: June 25, 2013 If you enjoyed this post then why not add us on Google+? Add us to your Circles
Advertisements
Ads
Ads
Discuss: Appending String in java
Post your Comment | http://roseindia.net/java/beginners/Append-in-java.shtml | CC-MAIN-2017-34 | refinedweb | 329 | 64.61 |
Investors eyeing a purchase of XPO Logistics, Inc. (Symbol: XPO) shares, but tentative about paying the going market price of $19.28/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular, is the September put at the $15 strike, which has a bid at the time of this writing of $2.30. Collecting that bid as the premium represents a 15.3% return against the $15 commitment, or a 22.9% annualized rate of return (at Stock Options Channel we call this the YieldBoost ).
Selling a put does not give an investor access to XPO XPO Logistics, Inc. sees its shares fall 22.2% and the contract is exercised (resulting in a cost basis of $12.70 per share before broker commissions, subtracting the $2.30 from $15), the only upside to the put seller is from collecting that premium for the 22.9% annualized rate of return.
Below is a chart showing the trailing twelve month trading history for XPO Logistics, Inc., and highlighting in green where the $15 strike is located relative to that history:
The chart above, and the stock's historical volatility, can be a helpful guide in combination with fundamental analysis to judge whether selling the September put at the $15 strike for the 22.9% annualized rate of return represents good reward for the risks. We calculate the trailing twelve month volatility for XPO Logistics, Inc. (considering the last 253 trading day closing values as well as today's price of $19.28) to be 53%. For other put options contract ideas at the various different available expirations, visit the XPO Stock Options page of StockOptionsChannel.com.
In mid-afternoon trading on Friday, the put volume among S&P 500 components was 1.44M contracts, with call volume at 1.46M, for a put:call ratio of 0.98. | https://www.nasdaq.com/articles/commit-buy-xpo-logistics-15-earn-229-annualized-using-options-2016-01-15 | CC-MAIN-2019-39 | refinedweb | 316 | 66.84 |
I don’t have any major conclusions to share in this blog post, but ... what I was curious about is how Scala implements “lazy val” fields. That is, when the Scala code I write is translated into a .class file and bytecode that a JVM can understand, what does that resulting code look like?Back to top
A little `lazy val` conversion example
To look at this I created a file named Foo.scala and put the following code in it:
class Foo { lazy val foo = "foo " + "bar" }
I then compiled this code with this command:
$ scalac -Xprint:all Foo.scala
That command didn’t print out what I was looking for, so I then tried the
javap command on the resulting .class file:
$ javap Foo Compiled from "Foo.scala" public class Foo { public java.lang.String foo(); public Foo(); }
That also didn’t tell me too much, except it looks like the
foo field is converted to a method. So then I used the
jad command to decompile the .class file back to a Java file:
$ jad Foo Foo() { } private String foo; private volatile boolean bitmap$0; }
This is what I was looking for. It shows that there is a private
foo field, a public
foo method, and that
foo method calls a private method named
foo$lzycompute. The
lazy magic happens in both the
foo method and the
foo$lzycompute method. I’m not going to take the time to explain it, but if you know Java, you can see it in that code.
A second `lazy val` conversion example
Curious how this might grow, I added another line of code to my Scala source code:
class Foo { lazy val foo = "foo " + "bar" val baz = foo + " baz" }
I compiled this code with
scalac, then skipped the intermediate things I tried before and just decompiled the .class file with
jad, and got the following output from it:
import scala.collection.mutable.StringBuilder; String baz() { return baz; } public Foo() { } private String foo; private final String baz = (new StringBuilder()).append(foo()).append(" baz").toString(); private volatile boolean bitmap$0; }
The big change in that code is this line:
private final String baz = (new StringBuilder()).append(foo()).append(" baz").toString();
As a reminder, I defined
baz in my Scala file like this:
val baz = foo + " baz"
and that simple line of code resulted in the
private final String baz line of code shown.
One more `lazy val` conversion example
I was going to stop there, but okay, here’s one more example. First I created this Foo.scala file:
class Foo { lazy val text = io.Source.fromFile("/etc/passwd").getLines.foreach(println) }
Then I compiled it with
scalac and then decompiled it with
jad to get this Java source code:
import scala.Predef$; import scala.Serializable; import scala.collection.Iterator; import scala.io.*; import scala.runtime.AbstractFunction1; import scala.runtime.BoxedUnit; public class Foo { private void text$lzycompute() { synchronized(this) { if(!bitmap$0) { Source$.MODULE$.fromFile("/etc/passwd", Codec$.MODULE$.fallbackSystemCodec()).getLines().foreach(new Serializable() { public final void apply(Object x) { Predef$.MODULE$.println(x); } public final volatile Object apply(Object v1) { apply(v1); return BoxedUnit.UNIT; } public static final long serialVersionUID = 0L; } ); bitmap$0 = true; } BoxedUnit _tmp = BoxedUnit.UNIT; } } public void text() { if(!bitmap$0) text$lzycompute(); } public Foo() { } private BoxedUnit text; private volatile boolean bitmap$0; }
As you can see, the simple “wrapper” code outside of the
text$lzycompute method is very similar, but the
text$lzycompute method itself is quite different. Once again I’m not going to comment on that code because all I want to do today is to take a quick look at how this works. If/when I feel a little less lazy, I’ll write more about this.
The end
As I mentioned at the beginning of this post, I don’t have any major conclusions or anything here. I just wanted to take a quick look at how the Scala
lazy val syntax gets converted into Java code and/or bytecode that the JVM can understand, and these examples are a start down this road.
Of course there are many more things you can do with the Scala
lazy val syntax, and while I’m not going to dig into that today, these examples show how you can do that if you’re interested in looking into it.
Many thanks for your information
In my project, I have some environment variables with lazy val, to let the system operator has chances to set some settings of service when it just be deployed. However, that puzzles me when writing the test code for that, if the setting needs be changed in different test case, and I have no way to reloaded that.
And your post help me to solve this problem, I could reset the bitmap$0 to zero by the Java reflection way, and let it be reloaded from the new setting.
Many thanks for your information, it's helpful for me.
Add new comment | https://alvinalexander.com/scala/look-how-scala-lazy-val-converted-to-java-code-jvm-bytecode | CC-MAIN-2019-26 | refinedweb | 832 | 71.04 |
Consider a country that is represented as a tree with N nodes and N-1 edges. Now each node represents a town, and each edge represents a road. We have two lists of numbers source and dest of size N-1. According to them the i-th road connects source[i] to dest[i]. And the roads are bidirectional. We also have another list of numbers called population of size N, where population[i] represents the population of the i-th town. We are trying to upgrade some number of towns into cities. But no two cities should be adjacent to each other and every node adjacent to a town should be a city (every road must connect a town and a city). So we have to find the maximum possible population of all the cities.
So, if the input is like source = [2, 2, 1, 1] dest = [1, 3, 4, 0] population = [6, 8, 4, 3, 5], then the output will be 15, as we can upgrade cities 0, 2, and 4 to get a population of 6 + 4 + 5 = 15.
To solve this, we will follow these steps −
Let us see the following implementation to get better understanding −
from collections import defaultdict class Solution: def solve(self, source, dest, population): adj = defaultdict(list) for a, b in zip(source, dest): adj[a].append(b) adj[b].append(a) seen = set() def dfs(x, choose): if x in seen: return 0 seen.add(x) ans = 0 if choose: ans += population[x] for neighbor in adj[x]: ans += dfs(neighbor, not choose) return ans x = dfs(0, True) return max(x, sum(population) - x) ob = Solution() source = [2, 2, 1, 1] dest = [1, 3, 4, 0] population = [6, 8, 4, 3, 5] print(ob.solve(source, dest, population))
[2, 2, 1, 1], [1, 3, 4, 0], [6, 8, 4, 3, 5]
15 | https://www.tutorialspoint.com/program-to-find-maximum-possible-population-of-all-the-cities-in-python | CC-MAIN-2021-49 | refinedweb | 312 | 67.79 |
In a previous post, I talked about the “Hello World” example in the C# standard. The C++ mention of the Hello World is brief and not really a code example, to demonstrate the Function and specifically the parameter-declaration-clause from page 192 (page number at the bottom of the page, not the pdf defined page) or paragraph 8.3.5:
printf("hello world");
printf("a=%d b=%d", a, b);
But I wrote a little example below that works if you use the Visual Studio 2013 C++ Console Project:
#include <stdio.h>
#include "stdafx.h"
int main()
{
int a = 42, b=56;
//int b = 56;
printf("hello world \n");
printf("a=%d b=%d \n", a, b);
return 0;
}
I also checked the C ISO/IEC 9899:2011, but there is no “Hello World” example, but there is the “Eh?” example and I will discuss that in my next blog.
At this point you might be thinking has Surf4Fun lost his mind? And the answer would be yes. Frankly there wasn’t much to lose.
But the point is this: The C# specification is actually not a bad read, but neither are the other specifications like C++ or C or even FORTRAN. If you work for a corporation or attend a school it is likely you can get a copy of the C++ and C specification through your company.
Bear in mind that the C# spec is free whether you use the ECMA or ISO standard. C++, C cost $150 or so, but usually most technology corporations or colleges have a subscription. | https://blogs.msdn.microsoft.com/devschool/2014/07/09/hello-world-in-the-c-standard-isoiec-148822011e/ | CC-MAIN-2016-44 | refinedweb | 263 | 69.31 |
Binary for Apache Camel SQL Database
I need some tutorial on which to use uploading binaries from folder to MySQL database using Camel. Basically I want to store voice logs from our PBX system in a database. The directory with voice logs will be the remote directory
I've developed a prototype but I'm not sure if it is really effective, but it works, but I'm not happy with the design. Let me explain what I am doing. Camel as follows:
="blahblah" /> </bean> <bean id="fileToSQL" class="com.hia.camelone.fileToSQL"/>
And the fileToSQL bean code:
public class fileToSQL { public String toString(@Headers Map<String,Object> header, @Body Object body){ StringBuilder sb = new StringBuilder(); String filename =(String)header.get("CamelFileNameOnly"); String escapedFileName = StringEscapeUtils.escapeJava(filename).replace("\'", ""); String filePath = StringEscapeUtils.escapeJava((String)header.get("CamelFilePath")); sb.append("insert into FileLog "); sb.append("(FileName,FileData) values ("); sb.append("'").append(escapedFileName).append("',").append("LOAD_FILE(\"").append(filePath).append("\")"); sb.append(")"); System.out.println(sb.toString()); System.out.println(body); System.out.println(header.toString()); return sb.toString(); } }
Nice short explanation. I get a file component to use files, then I create a SQL string using the MySQL LOAD_FILE () function to load the file.
My thoughts on this:
The LOAD_FILE function only works on the local computer, and thus this route will only work with files residing on the local computer. I could use a file producer to copy files from some remote directory to a local directory and then use a route. Then my route would be something like this:
<route> <from uri=""/> <to uri=""/> <to uri="bean://fileToSQL"/> <to uri="jdbc://timlogdb"/> </route>
However, since I have access to the content of the files in the message from the users of the files, I should in theory be able to access the body / content of the string and create a SQL command that does NOT use the LOAD_FILE () function.
The only way I know how to construct such a string is by using a JDBC prepared statement. This would be the first prize if I could somehow create an insert statement with content from the user of the file.
Is it possible to create a prepared statement in the fileToSQL bean and pass it to my jdbc component? Or how can I create an INSERT statement without the LOAD_FILE () function?
Since I have to use the LOAD_FILE () function, I now have to serve both unix and windows file paths. While it shouldn't be hard, I just don't like the idea of injecting OS code into my apps (it seems to work).
Anyone have ever uploaded binaries to MySQL database using Camel who can give me some advice on the above points. While I could work around the problems, I just want to make sure I don't miss the obvious way of doing things.
I have looked around here and found people who work with text files. Guys, please don't even come down with me by saving the file to the filesystem and linking it to the database. We have some very specific disaster recovery and legal requirements that require me to save to the database.
source to share
Correct, so I managed to find a way and it wasn't that hard. What I basically did was get rid of the JDBC Camel component in the route. Then I injected the datasource bean into my ToSQL bean file. Then I used a simple prepared statement to insert the file and its name into MySQL.
As always the code is much more explicit than my English.
="lalala" /> </bean> <bean id="fileToSQL" class="com.hia.camelone.fileToSQL"> <property name="dataSource" ref="timlogdb"/> </bean>
As you can see, I am injecting my timlogdb bean into my ToSQL bean file. Spring ROCKY!
So here is my ToSQL bean file.
public class fileToSQL { private DriverManagerDataSource dataSource; private static final String SQL_INSERT="insert into FileLog(FileName,FileData)values(?,?)"; @Handler public void toString(@Headers Map<String,Object> header,Exchange exchange){ Connection conn = null; PreparedStatement stmt=null; String filename =StringEscapeUtils.escapeJava(((String)header.get("CamelFileNameOnly")).replace("\'", "")); try { conn= dataSource.getConnection(); stmt =conn.prepareStatement(SQL_INSERT); stmt.setString(1, filename); byte[] filedata = exchange.getIn().getBody(byte[].class); stmt.setBytes(2,filedata ); int s = stmt.executeUpdate(); } catch (Exception e) { System.out.println(e.getMessage()); } finally{ try { if (stmt!=null) { stmt.close(); } if (conn!=null) { conn.close(); } } catch(SQLException e) { System.out.println(e.getMessage()); } } } /** * @param dataSource the dataSource to set */ public void setDataSource(DriverManagerDataSource dataSource) { this.dataSource = dataSource; } }
Camel guys did a great job. Camel is really flexible, especially if you combine it with Spring.
What a trip!
source to share | https://daily-blog.netlify.app/questions/1894614/index.html | CC-MAIN-2021-43 | refinedweb | 773 | 57.98 |
Welcome to Part II of our CPP Delegates article.
In Part_II we are going to illustrate how Borland's C++ Builder handles Windows Events using its proprietary keyword "__closure" and the delegation model instead of Inheritance as happens in MFC library and Win32 API.
__closure
I assume that the reader has read Part 1 of this article. Also I assume that he/she has intermediate to advanced knowledge of Borland's C++ Builder IDE.
When you want another person to replace you in a job, this is called delegating this person. We may see a manager of some company delegates one of the experienced employees to replace him for a period of time when he is not available.
In the programming world, this happens by asking some function in some class to do some job that our class cannot do. The most sensitive example for this model is the “Event Handling”. When you press a button, a message is sent to the operating system (namely Microsoft Windows) declaring that you have clicked your mouse's left button. This is called Event. The job that your software does as a response to this message or event is called Event Handling. The function which is responsible for handling this event (i.e. the code that is actually executed when you press a button) is called the Event Handler.
Borland C++ Builder uses closures to implement the delegation model and hence the “Event Handling story”. This happens by declaring a closure that can point to a sort of methods in a form like this:
__property OnClick; // declared as __published in class TButton in stdctrls.hpp
__property Classes::TNotifyEvent OnClick = {read=FOnClick, write=FOnClick,
stored=IsOnClickStored}; // declared as protected in
//class TControl in controls.hpp
Classes::TNotifyEvent FOnClick; // declared as private in TControl in controls.hpp
typedef void __fastcall (__closure *TNotifyEvent)
(System::TObject* Sender); // declared as global in namespace
// Classes in classes.hpp
To understand what this is, let’s explain it from bottom to up:
TObject*
TNotifyEvent
typedef
private
TControl
OnClick
private
public
__property
protected
__published
TButton
public
That is it. To see what happens when you click a button, let’s reverse what we said.
Assume you have written some code in some function that finishes a certain job, and you named your function as OnClickHandler. Using the IDE, we may assign this function address to our closure (OnClick) which is a member of class TButton.
OnClickHandler
OnClick
TButton
But, wait, OnClick is not a variable in itself. It is just a gate to allow to you assign values to or retrieve them from real members (usually private) using assignment operator (l-value/r-value) instead of using old-fashioned Set/Get methods. Our real closure that is going to be assigned this address is FOnClick closure. This assignment is called delegation.
l-value
r-value
Set
Get
FOnClick
TButton objects are programmed to capture Click-Messages (events) sent by your operating system and then call FOnClick closure as a response to this message (event), but did TButton class handled the event? No, the real handler is your function which is written somewhere in some class. That is TButton delegates your function to replace it in handling such events. By this way, each object of type TButton has its own handler. This will not be the case if we defined the handler in the TButton class, since it will be general for all TButton objects. This means that all buttons would have the same job.
FOnClick
You may wonder, what if we used inheritance and virtual functions to implement a specific handler to each TButton object. You are right. This is one solution by declaring a handler as a virtual function that has no body, and then you have to inherit from this class every time you want to add a button to your form, moreover you have to redefine your virtual function to suit your desired job.
Assume you are expected to program a scientific calculator that has more than 30 different buttons with very different jobs. Using Windows API, Borland’s OWL, or Microsoft’s MFC you have to do more than 30 inheritances. But with Borland’s C++ Builder, you have to make more than 30 assignment statements. Or even use mouse clicks in the IDE.
Redefining your virtual function is equal to defining your handlers, and there is no way to avoid this. Borland’s products are not smart enough to expect your button jobs.
Just imagine your source-code file that contains 30 inheritances, and another that contains 30 assignments if any.
If you understood this article, you may be able to understand Charlie Calvert’s statement in his Borland C++ Builder Unleashed book.
“Delegation is an alternative to inheritance. It's a trick to allow you to receive the same benefits as inheritance but with less work.”
“Delegation is an alternative to inheritance. It's a trick to allow you to receive the same benefits as inheritance but with less work.”
Thank you Borland.
Code in part I and class traces in part II are tested against Borland's C++ Builder 6.0 Enterprise Edition.
Windows XP service pack 3.0. | https://www.codeproject.com/Articles/45063/C-Delegates-and-Borland-s-C-Builder-Event-Handling?fid=1552855&df=90&mpp=25&sort=Position&spc=Relaxed&tid=3286870 | CC-MAIN-2017-34 | refinedweb | 865 | 63.39 |
The Apache Software Foundation recently released version 2.5 of Groovy, with new features including improvements to AST transformations, new macro support, and other miscellaneous enhancements. Groovy 2.5 requires JDK 7 and runs on JDK 9+ with benign warnings that can be ignored.
Despite the more recent industry focus on other JVM languages such as Kotlin, Groovy is still experiencing tremendous growth. As Dr. Paul King, principal software engineer at OCI and Groovy committer, stated in a recent webinar:
The Groovy download numbers still make it the second most popular language on the JVM after Java and the numbers just keep increasing. For the first quarter of this year, there were 90 million downloads which was twice the number of downloads than the first quarter of last year. So you can see, there is still a lot of interest in Groovy.
Groovy has also gained 30 new committers in the past 12 months.
AST Transformations - Annotations
As shown in the diagram below, a number of the existing AST transformations were improved for consistency among the transformations and 11 new transformations were added for version 2.5. An additional transformation has been added for Groovy 3.0, but more could appear before GA release.
AST Transformations - Macros
As shown in the previous section, there are a large number of built-in AST transformations. Developers could, however, create their own custom transformations, but this required knowing Groovy's internal internal representation of the syntactic structures.
The new macro feature in version 2.5 eliminates the need to know the internal representation of the syntactic structures. As defined in the release notes:
Macros let you use Groovy syntax directly rather than using the internal compiler representations when creating compile-time metaprogramming extensions. This puts the creation of transformations in the hands of all Groovy programmers not just Groovy compiler gurus.
For example, say a developer wishes to create a transformation,
@Info, that generates a method,
@getInfo(). Before version 2.5, it was necessary to write the following code:
...) ...
With macros, the first two lines in the code shown above can be replaced with:
def body = macro(true) { getClass().name }
More details can be found in the release notes.
Groovy 3.0
Groovy 3.0.0-alpha-3 has been available since late June with beta versions scheduled for later this year and release candidates expected in early 2019.
Groovy 3.0 will require a minimum of JDK 8 with improved support for JDK 9 and above. A new parser, called the Parrot Parser, will be a significant new feature that will support new Groovy syntax.
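A few of the syntax additions the Parrot parser enables can be sketched as follows; these lines assume a Groovy 3 alpha (or later) runtime and will not compile under 2.5:

```groovy
import java.util.stream.Collectors

// Java-style lambdas now work alongside Groovy closures
def squares = [1, 2, 3].stream()
                       .map(n -> n * n)
                       .collect(Collectors.toList())
assert squares == [1, 4, 9]

def name = null
name ?= 'fallback'        // Elvis assignment, new in Groovy 3
assert name == 'fallback'

assert 4 !in [1, 2, 3]    // negated membership operator
```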
King spoke to InfoQ about this latest release.
InfoQ: Can you speak to the differences between the default Indy parser and the new Parrot parser for Groovy 3.0?
Paul King: To explain the "Indy" and "Parrot" flavors of the Groovy parser, you need to know a little more detail. Groovy's parser does its work in multiple phases. The early phases take source code and convert it into an internal abstract syntax tree (AST) representation. The later phases convert the AST into bytecode which runs on the JVM in an almost identical fashion to how bytecode produced from the Java compiler runs.
When we speak about the "Parrot" parser, we are talking about Groovy 3's totally revamped early stages of the compiler. It has been re-engineered to be much more flexible, using recent, well-supported technologies. This work puts Groovy into a great position to evolve quickly, which means we can incorporate changes from Java for "cut-n-paste" compatibility with Java as well as native Groovy changes.
When we speak of the "Indy" support, this is all about the kind of bytecode produced in the later stages of the compiler. In Groovy versions 2.0 through 2.5, we support producing "classic" Groovy bytecode as well as an "indy" version. The indy version makes use of the "INvoke DYnamic" bytecode instruction introduced in JDK 7. The invoke dynamic bytecode instruction was added to the JVM for numerous reasons but a particular use case was to improve the performance for dynamic languages like Groovy. Early versions of the JDK supporting that bytecode instruction were a little buggy and although performance in some areas improved, other areas like our hand-coded primitive optimisations were still faster when using the classic bytecode instructions. For these reasons we have been supporting both variants. The alpha versions of the Groovy 3 compiler still support both versions but there is work underway to make the indy variant the default and potentially incorporate some of the other optimisations so that we can remove the old classic flavor altogether.
InfoQ: With the large number of built-in AST transformations available in Groovy, what would be a typical use case for a developer to write his or her own custom AST transformations?
King: There are numerous use cases for using AST transformations. They remove boilerplate, capture common design patterns and coding styles, and support declarative programming. You are correct that we bundle a lot of useful AST transformations within Groovy but we try to bundle transforms that we think would be widely applicable. I can see some obvious places where developers will still find it useful to create their own:
- If the transformation someone needs is special purpose in nature, we aren't likely going to include that in Groovy.
- If someone doesn't like the behavior our transforms offer and they can't use annotation attributes to customise the behavior, then they have a few options. Firstly, they can use the meta-annotation capability to "bundle" together combinations of our annotations and potentially combine them with for example Spring annotations. If that still doesn't give them what they need, then writing their own is a good option.
- Someone might be writing a DSL specific to their domain and wish to incorporate annotations in the DSL. That is just one option they have when writing DSLs.
- Someone might be writing their own framework and wish to provide powerful annotations to greatly simplify coding for users of the framework.
To give you an example of framework usage of AST transformations, Grails has about 20 AST transformations in grails-core alone. Griffon has added numerous ones also. Micronaut's AST transformation processing handles an aggregate of over 100 annotations.
InfoQ: With the new macro feature, do you anticipate more developers to write their own custom AST transformations?
King: It will no doubt take some time for people to become familiar with macros. But once more examples start to appear, there is often a snowball effect. We'll have to wait and see. I can see us adding numerous ones into the Groovy codebase in any case.
InfoQ: Has the new JDK release cadence affected Groovy development in any way?
King: Yes, there are some really nice aspects to seeing the faster release cadence but it has also been a strain on many open source projects to keep up. Groovy tries to run across as many JDK versions as makes sense. The more versions there are, the harder that gets. Not just in terms of potentially extra work from us but we rely on other open source tools that must also keep up to date and work consistently enough across the versions for us to be able to remain compatible. Perhaps JPMS has affected us more. Some of the changes it brought about remain on our todo list to fix before GA release of Groovy 3.0.
InfoQ: Are there plans for Groovy 3.0 to support JDK 11?
King: Groovy 3.0 already supports JDK11 unofficially. It builds and the test suite runs fine on the recent JDK11 EA builds. In terms of our Java source code compatibility, we already support var as an alias for def in local variable lambda parameters (JEP 323), so even running on JDK 8, you can define lambdas in Groovy like this:
(var x, var y) -> x.process(y)
InfoQ: What else would you like to share with our readers about Groovy 2.5 and the upcoming Groovy 3.0?
King: We are very excited with the continued growth in Groovy usage and have many more things in store that we'd like to introduce in future versions of Groovy. We are also very thankful for the patience the Groovy community has had while we complete all the features planned on our roadmap and work on some outstanding bug fixes in a few areas.
So, while we still have some engineering challenges ahead we see a great future ahead for Groovy. We are super keen for anyone who wants to contribute to come along and help out or become involved in discussions on the mailing lists.
InfoQ: What are your current responsibilities, that is, what do you do on a day-to-day basis?
King: Much of my time is spent working on Groovy either participating in discussions on mailing lists and other forums or contributing to the code base or supporting other projects which make use of Groovy.
Resources
- Groovy 2.5+ Roadmap (webinar) by Paul King
- Groovy 2.5 Features and Groovy 3+ Roadmap by Paul King (June 1, 2018)
Community comments
Business model
by Javier Paniza,
90 million downloads in a quarter and not able to create a business model! Wow! I say this because of the layoff of the Groovy team by Pivotal some time ago.
Maybe developers are not a good market, too stingy.
Re: Business model
by Richard Richter,
I like Groovy. I can't "sell" it in the company though. But I can imagine something like Wikipedia donation or crowd-funding - ideally with some goal (periodic). Goal makes it transparent and makes it feel more like an achievement unlike mere "donate" with black-hole like feeling where we rely on someone else. :-)
Anyway, it's great that Groovy is still here and progressing. Groovy 3.0 with "improved support for JDK 9 and above" means what exactly? No warnings? That would be cool.
Re: Business model
by Owen Rubel,
Well, the Grails team shut out a lot of developers, isolated themselves, shut down the forums and basically closed out the community. As a result, they lost 80% of their market share in under 2 years.
People have turned to Go and/or Kotlin. I suggest the same. | https://www.infoq.com/news/2018/07/apache-releases-groovy-2.5/ | CC-MAIN-2022-33 | refinedweb | 1,825 | 63.9 |
Quoting Stefan Berger (stefanb linux vnet ibm com):
> On 04/30/2012 06:59 PM, Serge Hallyn wrote:
> > configure.ac:
> > Check for libnl-3. If found, find libnl-route-3. If not found,
> > do the original check to look for libnl-1.
> [...]
> > --- a/src/util/virnetlink.c
> > +++ b/src/util/virnetlink.c
> > @@ -67,7 +67,11 @@ struct _virNetlinkEventSrvPrivate {
> >      virMutex lock;
> >      int eventwatch;
> >      int netlinkfd;
> > +#ifdef HAVE_LIBNL1
> >      struct nl_handle *netlinknh;
> > +#else
> > +    struct nl_sock *netlinksock;
> > +#endif
>
> Since the two members are treated similarly, could you give these
> structure members the same name and with that we could get rid of a
> couple of the #ifdef's below. I suppose the major change between v1
> and v3 that we are touching upon here is that of nl_handle to
> nl_sock.

I could - that's what I was referring to later in the commit message.
I would worry that it would over time not be robust, and would get all
the more confusing if it needed to be unwound at some point.

But, while I think it would be short-sighted to do this just to shorten
the patch, it would also make the flow of the rest of the code cleaner,
so it may be worth it. For that matter, with a simple wrapper function
or two we should be able to hide the remaining ifdefs, if that's what
you'd like.

> >      /*Events*/
> >      int handled;
> >      size_t handlesCount;
> > @@ -121,15 +125,31 @@ int virNetlinkCommand(struct nl_msg *nl_msg,
> >      int fd;
> >      int n;
> >      struct nlmsghdr *nlmsg = nlmsg_hdr(nl_msg);
> > +#ifdef HAVE_LIBNL1
> >      struct nl_handle *nlhandle = nl_handle_alloc();
> > +#else
> > +    struct nl_sock *nlsock = nl_socket_alloc();
> > +#endif
>
> Also same name here.
>
> > +#ifdef HAVE_LIBNL1
> >      if (!nlhandle) {
> > +#else
> > +    if (!nlsock) {
> > +#endif
>
> This could then be just one test.
>
> >          virReportSystemError(errno,
> > +#ifdef HAVE_LIBNL1
> >              "%s", _("cannot allocate nlhandle for netlink"));
> > +#else
> > +            "%s", _("cannot allocate nlsock for netlink"));
> > +#endif
> >          return -1;
> >      }
> >
> > +#ifdef HAVE_LIBNL1
> >      if (nl_connect(nlhandle, NETLINK_ROUTE) < 0) {
> > +#else
> > +    if (nl_connect(nlsock, NETLINK_ROUTE) < 0) {
> > +#endif
>
> ... this one also ...
>
> >          virReportSystemError(errno,
> >              "%s", _("cannot connect to netlink socket"));
> >          rc = -1;
> > @@ -140,7 +160,11 @@ int virNetlinkCommand(struct nl_msg *nl_msg,
> >
> >      nlmsg->nlmsg_pid = getpid();
> >
> > +#ifdef HAVE_LIBNL1
> >      nbytes = nl_send_auto_complete(nlhandle, nl_msg);
> > +#else
> > +    nbytes = nl_send_auto_complete(nlsock, nl_msg);
> > +#endif
>
> as well as this function call and from what I can see pretty much
> all of the rest too except for the destroy/free calls.
>
> Regards,
> Stefan
Dear Wiki user,
You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.
The "Hive/HowToContribute" page has been changed by Ning Zhang.
--------------------------------------------------
= How to Contribute to Apache Hive =
-
This page describes the mechanics of ''how'' to contribute software to Apache Hive. For
ideas about ''what'' you might contribute, please see open tickets in [[]].
<<TableOfContents(3)>>
== Getting the source code ==
-
First of all, you need the Hive source code.<<BR>>
Get the source code on your local drive using [[]].
Most development is done on the "trunk":
@@ -15, +13 @@
{{{
svn checkout hive-trunk
}}}
-
== Setting up Eclipse Development Environment (Optional) ==
This is an optional step. Eclipse has a lot of advanced features for Java development,
and it makes the life much easier for Hive developers as well.
[[Hive/GettingStarted/EclipseSetup|How to set up Eclipse for Hive development]]
== Making Changes ==
-
Before you start, send a message to the [[
developer mailing list]], or file a bug report in [[]].
Describe your proposed changes and check that they fit in with what others are doing and
have planned for the project. Be patient, it may take folks a while to understand your requirements.
Modify the source code and add some (very) nice features using your favorite IDE.<<BR>>
But take care about the following points
+
* All public classes and methods should have informative [[
* Do not use @author tags.
* Code should be formatted according to [['s conventions]],
with one exception:
@@ -39, +36 @@
* You can run all the unit test with the command {{{ant test}}}, or you can run a specific
unit test with the command {{{ant -Dtestcase=<class name without package prefix> test}}}
(for example {{{ant -Dtestcase=TestFileSystem test}}})
=== Understanding Ant ===
-
- Hive is built by Ant, a Java building tool.
+ Hive is built by Ant, a Java building tool.
* Good Ant tutorial:
=== Unit Tests ===
+.
{{{
> cd hive-trunk
> ant clean test tar -logfile ant.log
}}}
After a while, if you see
+
{{{
BUILD SUCCESSFUL
}}}
all is ok, but if you see
+
{{{
BUILD FAILED
}}}
then you should fix things before proceeding. Running
+
{{{
> ant testreport
}}}
@@ -79, +76 @@
* Run "ant test -Dtestcase=TestCliDriver -Dqfile=XXXXXX.q -Doverwrite=true -Dtest.silent=false".
This will generate a new XXXXXX.q.out file in ql/src/test/results/clientpositive.
* If the feature is added in contrib
* Do the steps above, replacing "ql" with "contrib", and "TestCliDriver" with "TestContribCliDriver".
+ === Debugging Hive code ===
+
+.
+
+
+ * Compile Hive code with javac.debug=on. Under Hive checkout directory.
+ {{{
+ > ant -Djavac.debug=on package
+ }}}
+ If you have already built Hive without javac.debug=on, you can clean the build and then
run the above command.
+ {{{
+ > ant clean # not necessary if the first time to compile
+ > ant -Djavac.debug=on package
+ }}}
+
+
+ * Run ant test with additional options to tell Java VM that we want to wait for debugger
to attach.
+ First define some convenient macros for debugging. You can put it in your .bashrc or .cshrc.
+ {{{
+ > export HIVE_DEBUG_PORT=8000
+ > export $HIVE_DEBUG="-Xdebug -Xrunjdwp:transport=dt_socket,address=${HIVE_DEBUG_PORT},server=y,suspend=y"
+ }}}
+ In particular HIVE_DEBUG_PORT is the port that the JVM is listening on and the debugger
should attach to. Then run the unit test as follows:
+ {{{
+ > $HADOOP_OPTS=$HIVE_DEBUG; ant test -Dtestcase=TestCliDriver -Dqfile=<mytest>.q
+ }}}
+
+ The unit test will run until it shows:
+ {{{
+ [junit] Listening for transport dt_socket at address: 8000
+ }}}
+
+ * Now, you can use jdb to attach to port 8000 to debug
+ {{{
+ > jdb -attach 8000
+ }}}
+ or better off if you are running eclipse and projects are already imported, you can debug
with eclipse. Under eclipse Run -> Debug Configurations, find "Remote Java Application"
at the bottom of the left panel. There should be MapRedTask configuration already. If there
is no such configuration, you can create one with the following property:
+
+ Project: the Hive project that you imported.
+ Connection Type: Standard (Socket Attach)
+ Connection Properties: Host: localhost Port: 8000
+
+ Then hit "Debug" button and it will attach the JVM listening on port 8000 and continue running.
You can define breakpoints in the source code before hit "Debug" so that it will stop there.
The rest is the same as debugging client side Hive.
+
=== Creating a patch ===
Check to see what files you have modified with:
+
{{{
svn stat
}}}
-
Add any new files with:
+
{{{
svn add .../MyNewClass.java
svn add .../TestMyNewClass.java
svn add .../XXXXXX.q
svn add .../XXXXXX.q.out
}}}
-
In order to create a patch, type (from the base directory of hive):
{{{
svn diff > HIVE-1234.patch
}}}
-
- This will report all modifications done on Hive sources on your local disk and save them
into the ''HIVE-1234.patch'' file. Read the patch file.
+ This will report all modifications done on Hive sources on your local disk and save them
into the ''HIVE-1234.patch'' file. Read the patch file. Make sure it includes ONLY the
modifications required to fix a single issue.
- Make sure it includes ONLY the modifications required to fix a single issue.
Please do not:
+
* reformat code unrelated to the bug being fixed: formatting changes should be separate
patches/commits.
- * comment out code that is now obsolete: just remove it.
+ *:
+
1. Write a shell script that uses 'svn mv' to rename the original files.
1. Edit files as needed (e.g., to change package names).
1. Create a patch file with 'svn diff --no-diff-deleted --notice-ancestry'.
1. Submit both the shell script and the patch file.
+
This way other developers can preview your change by running the script and then applying
the patch.
-
=== Applying a patch ===
-
- To apply a patch either you generated or found from JIRA, you can issue
+
+ If you are an Eclipse user, you can apply a patch by : 1. Right click project name in Package
Explorer , 2. Team -> Apply Patch
== Contributing your work ==
-
-).
When you believe that your patch is ready to be committed, select the '''Submit Patch'''
link on the issue's Jira.
+ Folks should run {{{ant clean test}}} before selecting '''Submit Patch'''. Tests should
all pass. If your patch involves performance optimizations, they should be validated by benchmarks
that demonstrate an improvement.
- Folks should run {{{ant clean test}}} before selecting '''Submit Patch'''. Tests should
all pass.
-.
@@ -156, +196 @@ [[ mailing
lists]]. In particular the dev list (to join discussions of changes) and the user list (to
help others).
== See Also ==
-
* [[ contributor documentation]]
* [[ voting documentation]] | http://mail-archives.apache.org/mod_mbox/hadoop-common-commits/200910.mbox/%3C20091029062425.22198.55315@eos.apache.org%3E | CC-MAIN-2015-27 | refinedweb | 1,037 | 66.23 |
Below is a piece of code (in SML) that will do the trick. It's a function from int to bool list.
fun odd n = (n mod 2) = 1
fun f 0 false false acc = acc
| f 0 true false acc = false::acc
| f 0 evenp true acc = f 0 (not evenp) (not evenp) (true::acc)
| f n false carry acc = f (n div 2) true (odd(n) orelse carry) ((not(odd(n) = carry))::acc)
| f n true carry acc = f (n div 2) false (odd(n) andalso carry) ((not(odd(n) = carry))::acc)
fun minus2 n = f n true false []
- minus2 5;
val it = [true,false,true] : bool list
- minus2 6;
val it = [true,true,false,true,false] : bool list
This is a solution to the question in the hard interview questions node. However, you need not go there if you simply read the following. The task is to convert a positive integer from base 2 to base minus 2. In both bases, all digits are either ones or zeros. For base 2, a one for the digit located n places from the right contributes 2^n towards the value of a number. In base minus 2, a digit contributes (-2)^n.
As this writeup is composed, there is already a solution to this problem posted above. However, the previous solution exhibits some peculiarities. It is terse and uncommented. It is in a slightly oddball language. And it's incomplete.
An interviewer gives hard interview questions to evaluate an interviewee's ability to think logically and her proficiency in some particular areas. Therefore, an ideal interviewee (from the interviewer's point of view) would improve her critical thinking abilities as well as her skills specific to the job. So this desirable interviewee would read these hard interview questions and work them out as a way to do that. The answers would hopefully provide either a confirmation of correctness to those who solve the problems or a thought path for those who could not solve the problem and require guidance. A terse answer such as the one above fails to provide the second quality. Although an interviewee could go to the effort of memorizing all known questions and their unexplicated answers to appear competent in an interview, doing so would shortchange the interviewer and would potentially fall apart if the interviewer invented new questions or dug deeper.
From the author's perspective, SML has oddball syntax. Never having seen SML code prior to this node, he can't say much of the language having learned only enough SML to read the solution. That said, the syntax it uses seems far removed from more familiar languages including BASIC, C, Java, Perl, and Lisp. Further, the program itself is bereft of comments or explanation. Its use of elegant polymorphism also impairs comprehension.
Finally, the statement of the problem appears to add a constraint which is unheeded in the previously given solution. That is, it is requested that an integer be converted. This implies an input of a fixed sized variable, likely 32 bits in length, and output of another variable with the same length. As will be shown below, a base minus 2 integer may require one more bit to store the same value as a base 2 integer. Granted, since the sample code gives the solution in a list of boolean values, it is at no risk of overflow. This makes the solution incomplete, but not really incorrect.
Anyone interested enough to read this far has likely gone through the first few steps of thought on their own, but this solution intends to be complete. Please ignore any banality. Also, note that the author came to his solution in something of a roundabout manner, taking longer than an interviewer would likely allow. Therefore, view the below as an instruction in a way you might not want to think.
First, it is noted that all even powers of -2 are just powers of 2 since (-2)^(2n) = (-1*2)^(2n) = (-1)^(2n) * 2^(2n) = ((-1)^2)^n * 2^(2n) = 1^n * 2^(2n) = 2^(2n). Secondly, all odd powers of -2 will be the negative of the corresponding power of 2 by similar logic.
This means that setting a bit an odd number of places from the right in base minus 2 will decrease the represented number. For example, if one were to take 0101 in base -2 (2^2 + 2^0 = 5 in base 10) and set the bit one digit from the right, the result would be 0111 in base -2 (2^2 - 2^1 + 2^0 = 3 in base 10). Therefore, the largest positive value that could be represented in 32-bit base minus 2 would be:
0101 0101 0101 0101 0101 0101 0101 0101 (base -2)
  = (-2)^30 + (-2)^28 + (-2)^26 + (-2)^24 + ... + (-2)^4 + (-2)^2 + (-2)^0
  = sum from n=0 to 15 of 2^(2n)
  = sum from n=0 to 15 of 4^n        (geometric sequence)
  = (1 - 4^16) / (1 - 4)
  = 1,431,655,765 (base 10)
Meanwhile, the largest positive value available in a signed 32-bit base 2 is 231-1 = 2,147,483,64810, which is 1.5 times the maximum value of base minus 2. Deviating from the question asked (probably not a good idea if actually solving this on an interview), the largest negative value is checked:
1010 1010 1010 1010 1010 1010 1010 1010 (base -2)
  = (-2)^31 + (-2)^29 + (-2)^27 + (-2)^25 + ... + (-2)^5 + (-2)^3 + (-2)^1
  = sum from n=0 to 15 of (-2)^(2n+1)
  = -2 * sum from n=0 to 15 of 2^(2n)
  = -2 * sum from n=0 to 15 of 4^n
  = -2 * (1 - 4^16) / (1 - 4)
  = -2,863,311,530 (base 10)
This compares with a maximum negative value of -2^31 = -2,147,483,648 (base 10) for a 32-bit integer. Calculating the range for both sets shows that both can represent 4,294,967,296 distinct numbers, including 0. Therefore, a base minus 2 number is just as efficient for storing a number as a base 2 number, and there is a unique representation for every number in base minus 2. The important lesson learned here is that any proper base-2-to-base-minus-2 conversion routine should do a range check on the input to ensure it can be converted.
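These bounds are easy to check numerically. A quick Python sketch (mine, not part of the original node) that evaluates the two extreme bit patterns directly:

```python
# Extreme values representable in 32 bits of base minus 2:
# even bit positions (0, 2, ..., 30) contribute +2**n,
# odd bit positions (1, 3, ..., 31) contribute -(2**n).

max_pos = sum(2 ** n for n in range(0, 32, 2))     # pattern 0101...0101
max_neg = sum(-(2 ** n) for n in range(1, 32, 2))  # pattern 1010...1010

print(max_pos)                 # 1431655765
print(max_neg)                 # -2863311530
print(max_pos - max_neg + 1)   # 4294967296, i.e. 2**32 distinct values
```

The count of distinct values matches an ordinary 32-bit integer exactly, which is what makes a range check against 1,431,655,765 sufficient for positive inputs.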
It is apparent that a number which can be represented as the sum of even powers of 2 is the same in base minus 2 as it is in base 2. e.g. 10101 in base 2 = 10101 in base -2. Therefore, a promising strategy would seem to simply convert all even powers of 2 in this fashion, then add any contribution made by odd powers of 2.
To convert odd powers of 2, one clearly needs to set more than one bit. At least one set bit needs to be an even power of 2, adding to the result, and at least one set bit needs to be an odd power of 2, subtracting from the result. We know that even if we set the bits of the even powers of 2 up to the odd power, that it will not sum to the desired result. This is known because in a base 2 number, setting bit n gives 2^n while setting all bits from 0 to n-1 gives 2^n - 1. (And, if we didn't already know this, we could apply the closed form for a geometric sequence displayed above.)
Starting with the simplest case, we guess only two bits need to be set: one even power and one odd power. (If this strategy fails, we can then try 2, 3, and more bits.) The obvious candidate for the former is the bit for the power one greater than we are converting. Given 2^(2n+1), we set the bit for (-2)^(2n+2) in the base minus 2 number. Next, we try to find what single bit needs to be set to subtract from this, to give us our original number. Let's call the position of that bit x.
2^(2n+2) + (-2)^x = 2^(2n+1)
(-2)^x = 2^(2n+1) - 2^(2n+2)
(-2)^x = 2^(2n+1) * (1 - 2)
(-2)^x = -2^(2n+1)
x = 2n+1
That worked out quite easily! Therefore, to convert a bit representing an odd power of two, we need to set the same bit in the base minus 2 number as well as set the next bit. e.g. 2^13 = 8,192 (base 10) = 0010 0000 0000 0000 (base 2) = 0110 0000 0000 0000 (base -2) = 2^14 - 2^13
Next, we need to deal with circumstances in which the bit which an even numbered power intends to set is already set by the last odd power. e.g. 2^14 + 2^13 = 0110 0000 0000 0000 (base 2). One's first impression might be to introduce a system of carries as would normally be used in addition. So, in the above case:
     1
0000 0110 0000 0000 0000 (base -2) = 2^13
0000 0100 0000 0000 0000 (base -2) = 2^14
------------------------
0000 1010 0000 0000 0000 (base -2) = -2^13 - 2^15
Whoops! That doesn't work, because when we carry a bit forward to an odd power of two, we have the same difficulty as when converting an odd power of two. That is:
(2^(2n+2)) + (2^(2n+1)) =
(2^(2n+2)) + (2^(2n+2) - 2^(2n+1)) =
2 * 2^(2n+2) - 2^(2n+1) =
2^(2n+3) - 2^(2n+1) =
2^(2n+4) - 2^(2n+3) - 2^(2n+1)
Therefore, we should treat a carry in this case like its analogous conversion:
   1 1
0000 0110 0000 0000 0000 (base -2) = 2^13
0000 0100 0000 0000 0000 (base -2) = 2^14
------------------------
0001 1010 0000 0000 0000 (base -2) = 2^16 - 2^15 - 2^13
Finally, there is the occurrence of a cascaded carry, which is seen when we add 2^(2n+1) + 2^(2n+2) + 2^(2n+3).
(2^(2n+3)) + (2^(2n+2) + 2^(2n+1)) =
(2^(2n+3)) + (2^(2n+4) - 2^(2n+3) - 2^(2n+1)) = (substitution from above)
2^(2n+4) - 2^(2n+1)
In this case, we see that, since the -2^(2n+3) bit is already set, we merely have to clear this bit (which is decreasing the sum) to add the 2^(2n+3) we desire. An example appears below, drawing from the previous conversion.
     1
0000 0110 0000 0000 0000 (base -2) = 2^13
0000 0100 0000 0000 0000 (base -2) = 2^14
0001 1000 0000 0000 0000 (base -2) = 2^15
------------------------
0001 0010 0000 0000 0000 (base -2) = 2^16 - 2^13
This gives us a general sequence for converting numbers to base minus 2. We start on the rightmost digit with no carry in. For each digit, we set the outgoing digit if exactly one of the carry in or incoming digit is true. For even digits, the outgoing carry is set if both are true. For odd digits, the outgoing carry is set if either or both are true. Pseudo code to implement this appears below.
BASE_MINUS_2_INT_MAX = 1431655765;

if (integerToConvert > BASE_MINUS_2_INT_MAX)
    throw OverFlowError(integerToConvert);

extract digitString from integerToConvert
isEven = true;
carry = false;
digitStack = < >;
for each digit in digitString
    if (digit XOR carry)
        digitStack.push('1')
    else
        digitStack.push('0')
    endif
    if (isEven)
        carry = carry AND digit;
    else
        carry = carry OR digit;
    endif
    isEven = !isEven;
endfor
return digitStack;
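The pseudocode translates almost line for line into Python. The sketch below is mine (the function name and the string packaging are not from the original node); it returns the digits most-significant-first as a string:

```python
BASE_MINUS_2_INT_MAX = 1431655765  # 0101...0101, the largest 32-bit value

def to_base_minus_2(n):
    """Convert a non-negative integer to its base minus 2 digit string."""
    if not 0 <= n <= BASE_MINUS_2_INT_MAX:
        raise OverflowError(n)
    digits = []
    is_even = True    # position 0 is an even power of -2
    carry = False
    while n or carry:
        bit = bool(n & 1)
        digits.append('1' if bit != carry else '0')   # digit = bit XOR carry
        # Even positions propagate a carry only when both are set;
        # odd positions propagate when either is set.
        carry = (bit and carry) if is_even else (bit or carry)
        n >>= 1
        is_even = not is_even
    return ''.join(reversed(digits)) or '0'

print(to_base_minus_2(5))   # 101
print(to_base_minus_2(6))   # 11010
```

Reading a result back is just summing (-2)^i over the set digits, which makes the routine easy to test exhaustively over a small range.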
The observant reader, who may have already considered this solution too long and meandering, may note that each set of bits of 2^(2n+1) and 2^(2n+2) forms a simple counter which follows the sequence of:
base 2 | base -2 | base -2 carry | Value
-------+---------+---------------+-------------------------------------
  00   |   00    |       0       | 0
  01   |   11    |       0       | 1 * 2^(2n+1)
  10   |   10    |       0       | 2 * 2^(2n+1) = 2^(2n+2)
  11   |   01    |       1       | 3 * 2^(2n+1) = 2^(2n+1) + 2^(2n+2)
Therefore, after the first bit of 2^0, we can view the conversion process as a series of two bit conversions. First, 2^0 can be copied directly. Then, we process each remaining pair of bits in a loop. We begin with carry in to each step being clear. The carry out from each step becomes the carry in to the next step. The process would proceed by looking up the next set of base 2 digits in the above table. If carry in is set and the base 2 digits are '11', we set the carry out and write '00'. Otherwise, we copy down the base -2 value in the table, advancing one position down if carry in is set. Then, if the copied value is '01', we set carry out.
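Sketched in Python (the table name and helper function are mine, not the author's), the two-bit lookup indexes on the pair's value plus carry-in, exactly as described above:

```python
# (two base-2 bits + carry-in) -> (base minus 2 digit pair, carry-out).
# A pair covers positions 2n+1 (low) and 2n+2 (high); a carry-out is
# worth one unit of the next pair's low position, 2^(2n+3).
PAIR_TABLE = {
    0: ('00', 0),
    1: ('11', 0),
    2: ('10', 0),
    3: ('01', 1),
    4: ('00', 1),   # the '11' plus carry-in case: write 00, carry out
}

def to_base_minus_2_pairs(n):
    out = [str(n & 1)]           # the 2^0 bit copies straight across
    n >>= 1
    carry = 0
    while n or carry:
        pair, carry = PAIR_TABLE[(n & 3) + carry]
        out.append(pair[::-1])   # collect low digit first, reverse at the end
        n >>= 2
    return ''.join(out)[::-1].lstrip('0') or '0'

print(to_base_minus_2_pairs(6))   # 11010
```

This produces the same strings as the digit-at-a-time method, two bits per loop iteration.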
This is a perfectly feasible method for solving the problem, and should create identical results to the bit-at-a-time method. However, it should be noted that one could just as well build a table of four bit conversions with carry outs. Or eight bit conversions for that matter. And, while one is at it, a full 32-bit conversion table would be completely feasible, though tedious to compile. Therefore, a solution might appear as below:
if (isBase2Convertible[base2Num])
    return base2toBaseMinus2[base2Num];
else
    throw overflowError(base2Num);
endif
Such a solution might be suboptimal if one's intention is to display his mathematical knowledge to an interviewer. In that case, write Θ(1) below the above code. That should do it.
Θ(1)
As said above, the author is familiar with SML only in passing, so terminology may not be correct in this section. Of particular note is that one does not normally modify a variable in SML but creates a new variable which is created by applying a function to existing variables. Therefore, one does not "push a value onto the stack A", but "creates a new list composed of a value and the previous list A". The former terminology will be preferred below for clarity, and the difference is trivial for these purposes. This explanation is intended mostly to draw parallels between it and the C++ code below. As always, feel free to inform the author of any factual mistakes or misrepresentations in the below text.
The given SML code defines three functions named odd, f, and minus2. odd takes an integer and returns true if the number is odd or false otherwise. minus2 makes a call to f, which does most of the work, and then returns f's result.
f is a recursive, polymorphic function which takes four arguments in all of its forms. These four arguments are:
n, an int
evenp, a bool
carry, a bool
acc, a bool list
f is composed of five cases. The first two are base cases that return the accumulated list once n has reached zero and no carry remains (the second prepends a final '0' digit); the third flushes a pending carry once n is zero; and the last two handle digits at odd and even positions respectively, prepending the XOR of the current bit and the carry, updating the carry (OR at odd positions, AND at even positions), and halving n.
/*
* File: baseminustwo.cpp
* Author: OldMiner
* Created: 2003 Jan 03
* Last Modified: 2003 Jan 04
*
* A simple conversion utility to convert a given
* number in base 2 to base minus 2.
*
* Written largely for a writeup on Everything2.com
*
*/
#include <stdlib.h>
#include <iostream>
#include <stdexcept>
#include <string>
using namespace std;
// 8 sets of the 4 bit sequence: 0101 (base 2 and minus 2)
// == 5 (base 10 and 16)
const unsigned int BASE_MINUS_2_INT_MAX = 0x00005555 + 0x55550000;
string convert(unsigned int convertMe) {
int digitPos = 33;
bool carry = false, evenDigit = true;
bool currentDigit;
char convertedDigits[33];
if (convertMe > BASE_MINUS_2_INT_MAX)
throw overflow_error("Exceeded maximum base minus two value.");
// Null-terminate returned string
convertedDigits[--digitPos] = '\0';
do {
// Get current base 2 digit
currentDigit = convertMe % 2;
// Add a '0' or a '1' at the front of the existing string
convertedDigits[--digitPos] = '0' + (currentDigit ^ carry);
if (evenDigit)
carry &= currentDigit;
else
carry |= currentDigit;
// Move to the next base 2 digit
convertMe >>= 1;
evenDigit = !evenDigit;
// Continue till all base 2 digits have been converted
// and all carries written
} while (convertMe || carry);
return string(convertedDigits + digitPos);
}
int main(int argc, const char **argv) {
if (argc < 2) {
cerr << "Usage: " << argv[0] << " n" << endl;
return -2;
}
try {
string result = convert(atoi(argv[1]));
cout << "Amazingly " << argv[1] << " in base minus 2 is " <<
result << "!" << endl;
return 0;
}
catch (const overflow_error &overflew) {
cerr << "What's all this now?\n" << overflew.what() << endl;
return -1;
}
/* Should never get here */
return 0;
}
One subtlety in the above code is that convert takes an unsigned int. This will cast any negative number to a very large unsigned number, which will then be thrown out by the range check. Since the problem only asks that positive integers be converted, this should be a sufficient solution.
The do...while loop in the C++ code is equivalent to the SML code above it. The base case occurs when the while condition fails; the value to be converted is zero, and no carries remain. At that point, the built string is returned. There is no need for a second base case like line 2 in the SML code. Because the loop is always run at least once, zero is added to the string without any extra logic. Line 3 of the SML code is mirrored in the while condition. For most runs through the do...while loop, the value to be converted will be non-zero, so the carry is not checked. This is the same as the alternating between lines 4 and 5 in the SML code. Once the value to be converted becomes zero, the carry is checked, and the loop continues till the carry is cleared. The innards of the loop constantly add the next digit to the string to be returned, set the carry, and halve the value to be converted, just as in lines 4 and 5 of the SML code. The remainder of the code is ultimately packaging for this procedure.
Created on 2006-08-10 06:57 by djmdjm, last changed 2007-03-13 18:32 by georg.brandl. This issue is now closed.
Hi,
tempfile.NamedTemporaryFile provides a good interface
to creating temporary files, but its insistence on
deleting the file upon close is limiting. The attached
patch adds an optional parameter to NamedTemporaryFile
that allows persistence of the temp file after it has
been closed.
One use-case that demonstrates where keeping the
temporary file around is handy would be when you need
to safely create and fill a temp file before it is
atomically moved into place with os.rename(). E.g.
def update_conf(conf_path):
    old = open(conf_path)
    tmp = tempfile.NamedTemporaryFile(prefix=os.path.basename(conf_path),
                                      dir=os.path.dirname(conf_path),
                                      delete=False)
    for line in old:
        tmp.write(line.replace('war', 'peace'))
    old.close()
    tmp.close()
    os.link(conf_path, conf_path + ".old")
    os.rename(tmp.name, conf_path)
Logged In: YES
user_id=1359232
oops, wrong Category: this should be Lib and not Modules
Logged In: YES
user_id=357491
Why can't you store into an instance of StringIO instead of
a temp file?
Logged In: YES
user_id=1359232
As far as I can tell, StringIO doesn't actually create a
filesystem object that can be manipulated.
Logged In: YES
user_id=1359232
Here is a diff that includes a regression test
Logged In: YES
user_id=357491
Right, it doesn't create a filesystem file. But that is the
point. You work in memory and then write to your final
destination as needed. Your code you have pasted in the
description does nothing special that requires the use of a
temporary file. You can just write into a StringIO object,
skip the os.link call, and then just write out to the final
file location.
Logged In: YES
user_id=1359232
Well, that would a) not be an atomic replacement, and b) you would miss (or have to reimplement) the mkstemp()-like behaviour.
Thanks for the patch, committed with doc changes as rev. 54344. | http://bugs.python.org/issue1537850 | CC-MAIN-2017-30 | refinedweb | 331 | 64.41 |
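For reference, the delete parameter from this patch shipped in Python 2.6 and is still in tempfile today. Below is a minimal sketch of the submitter's atomic-replace pattern against the released API; the 'war'/'peace' rewrite is kept from the example above, and the os.link backup step is omitted for brevity:

```python
import os
import tempfile

def update_conf(conf_path):
    """Atomically rewrite conf_path, replacing 'war' with 'peace'."""
    dir_name = os.path.dirname(os.path.abspath(conf_path))
    with open(conf_path) as old, tempfile.NamedTemporaryFile(
            mode='w',
            prefix=os.path.basename(conf_path),
            dir=dir_name,
            delete=False) as tmp:
        for line in old:
            tmp.write(line.replace('war', 'peace'))
    # delete=False means the file survives the close() performed by the
    # with-block; os.rename() then swaps it into place (atomic on POSIX
    # when source and destination are on the same filesystem).
    os.rename(tmp.name, conf_path)
```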
3. Release notes for version 8.10.1¶
The significant changes to the various parts of the compiler are listed in the following sections.
3.1. Highlights¶
- The UnliftedNewtypes extension, allowing newtypes to wrap types of kind other than Type.
- The StandaloneKindSignatures extension, allowing explicit signatures on type constructors.
- A new, low-latency garbage collector for the oldest generation.
3.2. Full details¶
3.2.1. Language¶
Kind variables are no longer implicitly quantified when an explicit forall is used; see GHC proposal #24. -Wimplicit-kind-vars is now obsolete.
Kind variables are no longer implicitly quantified in constructor declarations:

data T a        = T1 (S (a :: k)) | forall (b::k). T2 (S b)  -- no longer accepted
data T (a :: k) = T1 (S (a :: k)) | forall (b::k). T2 (S b)  -- still accepted
In type synonyms and type family equations, free variables on the RHS are no longer implicitly quantified unless used in an outermost kind annotation:

type T = Just (Nothing :: Maybe a)        -- no longer accepted
type T = Just Nothing :: Maybe (Maybe a)  -- still accepted
A new extension StandaloneKindSignatures allows one to explicitly specify the kind of a type constructor, as proposed in GHC proposal #54:

type TypeRep :: forall k. k -> Type
data TypeRep a where
  TyInt   :: TypeRep Int
  TyMaybe :: TypeRep Maybe
  TyApp   :: TypeRep a -> TypeRep b -> TypeRep (a b)
Analogous to function type signatures, a standalone kind signature enables polymorphic recursion. This feature is a replacement for
CUSKs.
Note: The finer points around this feature are subject to change. In particular, it is likely that the treatment around specified and inferred variables may change, to become more like the way term-level type signatures are handled.
GHC now parses visible, dependent quantifiers (as proposed in GHC proposal 35), such as the following:
data Proxy :: forall k -> k -> Type
See the section on explicit kind quantification for more details.
Type variables in associated type family default declarations can now be explicitly bound with a forall when ExplicitForAll is enabled, as in the following example:

class C a where
  type T a b
  type forall a b. T a b = Either a b
This has a couple of knock-on consequences:
Wildcard patterns are now permitted on the left-hand sides of default declarations, whereas they were rejected by previous versions of GHC.
It used to be the case that default declarations supported occurrences of left-hand side arguments with higher-rank kinds, such as in the following example:
class C a where
  type T a (f :: forall k. k -> Type)
  type T a (f :: forall k. k -> Type) = f Int
This will no longer work unless f is explicitly quantified with a forall, like so:

class C a where
  type T a (f :: forall k. k -> Type)
  type forall a (f :: forall k. k -> Type). T a f = f Int
A new extension UnliftedNewtypes relaxes restrictions around what kinds of types can appear inside of the data constructor for a newtype. This was proposed in GHC proposal #13.
A new extension ImportQualifiedPost allows the syntax import M qualified, that is, to annotate a module as qualified by writing qualified after the module name. This was proposed in GHC proposal #49.
New flag -Wderiving-defaults that controls a warning message when both DeriveAnyClass and GeneralizedNewtypeDeriving are enabled and no explicit deriving strategy is in use. The warning is enabled by default and has been present in earlier GHC versions, but without the option of disabling it. For example, this code would trigger the warning:

class C a
newtype T a = MkT a
  deriving C

GHC now performs more validity checks on inferred type signatures. Consider two modules, where module A enables -XRankNTypes but module B does not, and B defines a function bar whose inferred type is higher-rank. Previous versions of GHC would allow bar to typecheck, even though its inferred type is higher-rank. GHC 8.10 will now reject this, as one must now enable -XRankNTypes in B to accept the inferred type signature.
Type family dependencies (also known as injective type families) sometimes now need -XUndecidableInstances in order to be accepted. Here is an example:

type family F1 a = r | r -> a
type family F2 a = r | r -> a
type instance F2 [a] = Maybe (F1 a)

Because GHC needs to look under a type family to see that a is determined by the right-hand side of F2's equation, this now needs -XUndecidableInstances. The problem is very much akin to its need to detect some functional dependencies.
The pattern-match coverage checker received a number of improvements wrt. correctness and performance.
Checking against user-defined COMPLETE pragmas “just works” now, so that we could move away from the complicated procedure for disambiguation we had in place before.
Previously, the checker performed really badly on some inputs and had no good story for graceful degradation in these situations. These situations should occur much less frequently now and degradation happens much more smoothly, while still producing useful, sound results (see
-fmax-pmcheck-models=⟨n⟩).
3.2.2. Compiler¶
The LLVM backend of this release is to be used with LLVM 9.
(x86) Native code generator support for legacy x87 floating point coprocessor has been removed. From this point forth GHC will only support floating point via SSE2.
Add new flags -Wunused-record-wildcards and -Wredundant-record-wildcards, which warn users when they have redundant or unused uses of a record wildcard match.
Calls to memset and memcpy are now unrolled more aggressively and the produced code is more efficient on x86-64 with added support for 64-bit MOVs. In particular, setByteArray# and copyByteArray# calls that were not optimized before now will be. See #16052.
When loading modules that use UnboxedTuples or UnboxedSums into GHCi, it will now automatically enable -fobject-code for these modules and all modules they depend on. Before this change, attempting to load these modules into the interpreter would just fail, and the only convenient workaround was to enable -fobject-code for all modules. See the GHCi FAQ for further details.
The eventlog now contains events for biographical and retainer profiling. The biographical profiling events all appear at the end of the eventlog but the sample start event contains a timestamp of when the census occurred. The retainer profiling events are emitted using the standard events.
The eventlog now records the cost centre stack on each profiler sample. This enables the .prof file to be partially reconstructed from the eventlog.
Add new flag -fkeep-going which makes the compiler continue as far as it can despite errors.
Deprecated flag -fwarn-hi-shadowing because it was not implemented correctly, and appears to be largely unused. This flag will be removed in a later version of GHC.
The Windows bindist has been updated to GCC 9.2 and binutils 2.32. These binaries have been patched to no longer have the MAX_PATH limit. Windows users should no longer have any issues with long path names.
Introduce DynFlags plugins, which allow users to modify, from plugins, the DynFlags that GHC is going to use when processing a set of files. They can be used for applying tiny configuration changes, registering hooks and much more. See the user guide for more details as well as an example.
Deprecated flag -fmax-pmcheck-iterations in favor of -fmax-pmcheck-models=⟨n⟩, which uses a completely different mechanism.
GHC now writes .o files atomically, resulting in reduced chances of truncated files when a build is cancelled or the computer crashes.
This fixes numerous bug reports in Stack and Cabal where GHC was not able to recover from such situations by itself and users reported having to clean the build directory.
Other file types are not yet written atomically. Users that observe related problems should report them on GHC issue #14533. This fix is part of the Stack initiative to get rid of persistent build errors due to non-atomic file writes across the Haskell tooling ecosystem.
3.2.3. GHC API¶
- GHC’s runtime linker no longer uses global state. This allows programs that use the GHC API to safely use multiple GHC sessions in a single process, as long as there are no native dependencies that rely on global state.
- In the process of making GHC’s codebase more modular, many modules have been renamed to better reflect the different phases of the compiler. See #13009. Programs that rely on the previous GHC API may use the ghc-api-compat package to make the transition to the new interface easier. The renaming process is still going on so you must expect other similar changes in the next major release.
3.2.4. GHCi¶
- Added a command :instances to show the class instances available for a type.
- Added new debugger commands :disable and :enable to disable and re-enable breakpoints.
- Improved command name resolution with option !. For example, :k! resolves to :kind!.
3.2.5. Runtime system¶
The runtime system linker now marks loaded code as non-writable (see #14069) on all tier-1 platforms. This is necessary for out-of-the-box compatibility with OpenBSD and macOS Catalina (see #17353).
The RTS API now exposes an interface to configure EventLogWriters, allowing eventlog data to be fed to sinks other than .eventlog files.
A new +RTS flag --disable-delayed-os-memory-return was added to make for accurate resident memory usage of the program as shown in memory usage reporting tools (e.g. the RSS column in top and htop).
This makes it easier to check the real memory usage of Haskell programs.
Using this new flag is expected to make the program slightly slower.
Without this flag, the (Linux) RTS returns unused memory "lazily" to the OS. This makes the memory available to other processes while also allowing the RTS to re-use the memory very efficiently (without zeroing pages) in case it needs it again, but common tools will incorrectly show such memory as occupied by the RTS (because they do not process the LazyFree field in /proc/PID/smaps).
3.2.6. Template Haskell¶
The Lift typeclass is now levity-polymorphic and has a liftTyped method. Previously disallowed instances for unboxed tuples, unboxed sums, and primitive unboxed types have also been added. Finally, the code generated by DeriveLift has been simplified to take advantage of expression quotations.
Using TupleT 1, TupE [exp], or TupP [pat] will now produce unary tuples (i.e., involving the Unit type from GHC.Tuple) instead of silently dropping the parentheses. This brings Template Haskell's treatment of boxed tuples in line with that of unboxed tuples, as UnboxedTupleT, UnboxedTupE, and UnboxedTupP also produce unary unboxed tuples (i.e., Unit#) when applied to only one argument.
GHC’s constraint solver now solves constraints in each top-level group sooner. This has practical consequences for Template Haskell, as TH splices necessarily separate top-level groups. For example, the following program would compile in previous versions of GHC, but not in GHC 8.10:
data T = MkT

tStr :: String
tStr = show MkT

$(return [])

instance Show T where
  show MkT = "MkT"
This is because each top-level group's constraints are solved before moving on to the next, and since the top-level group for tStr appears before the top-level group that defines a Show T instance, GHC 8.10 will throw an error about a missing Show T instance in the expression show MkT. The issue can be fixed by rearranging the order of declarations. For instance, the following will compile:

data T = MkT

instance Show T where
  show MkT = "MkT"

$(return [])

tStr :: String
tStr = show MkT
TH splices by default don't generate warnings anymore. For example, $([d| f :: Int -> void; f x = case x of {} |]) used to generate a pattern-match exhaustivity warning, which it now doesn't. The user can activate warnings for TH splices with -fenable-th-splice-warnings. The reason for opt-in is that the offending code might not have been generated by code the user has control over, for example by the singletons or lens library.
3.2.7. ghc-prim library¶

- Add new bitReverse# primops that, for a Word of 8, 16, 32 or 64 bits, reverse the order of its bits, e.g. 0b110001 becomes 0b100011. These primitives use optimized machine instructions when available.
3.2.10. Build system¶
- Countless bug fixes in the new Hadrian build system
- Hadrian now supports a simple key-value configuration language, eliminating the need for users to use Haskell to define build configuration. This should simplify life for packagers and users alike. See #16769 and the documentation in
hadrian/doc/user-settings.md. | https://downloads.haskell.org/~ghc/8.10.1/docs/html/users_guide/8.10.1-notes.html | CC-MAIN-2020-29 | refinedweb | 2,014 | 54.42 |
One of the questions I have come across most often for Angular 2 on Stack Overflow is "how to generate a select dropdown from an array or object". Hence, I decided to write a sample post with code to ease the search.

I will be posting two samples: one with a simple array, and the other with an object.

Assume you want to generate a select dropdown from an array of years.
years = ['2016','2015','2014'];
The app.component.ts code will look like,
import { Component } from '@angular/core';

@Component({
  selector: 'material-app',
  templateUrl: 'app.component.html'
})
export class AppComponent {
  years = ['2016', '2015', '2014'];
  selectedyear = '2015';

  onChange(year) {
    alert(year);
  }
}
In the above code, selectedyear is the default value of the dropdown whenever the app is loaded. onChange is the event handler fired whenever an option is changed; you can capture the selected value through it.
*ngFor is used to repeat the items as options. It's as simple as that.
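The post's template file (app.component.html) is never shown; a template consistent with the component above might look like the following. The exact binding style is an assumption, one of several ways to wire it up:

```html
<select (change)="onChange($event.target.value)">
  <option *ngFor="let year of years"
          [value]="year"
          [selected]="year === selectedyear">
    {{ year }}
  </option>
</select>
```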
Next we will see how to bind a object using *ngFor . Assume if you have a object and want to bind the keys as drop down values,
currencyList = {
  "USD": {
    "symbol": "$",
    "name": "US Dollar",
    "symbol_native": "$",
    "decimal_digits": 2,
    "rounding": 0,
    "code": "USD",
    "name_plural": "US dollars"
  },
  "CAD": {
    "symbol": "CA$",
    "name": "Canadian Dollar",
    "symbol_native": "$",
    "decimal_digits": 2,
    "rounding": 0,
    "code": "CAD",
    "name_plural": "Canadian dollars"
  }
};
To get the keys of the object you can use Object.keys(this.currencyList), and the rest is the same as in the sample above.
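Outside of Angular, the Object.keys step itself is plain JavaScript; a minimal standalone sketch (trimmed currency data, variable names invented here):

```javascript
const currencyList = {
  USD: { symbol: "$", name: "US Dollar" },
  CAD: { symbol: "CA$", name: "Canadian Dollar" }
};

// Object.keys returns the own enumerable keys in insertion order;
// these become the option values for *ngFor to iterate over.
const currencyCodes = Object.keys(currencyList);

// Each code can still be used to look the full record back up:
const symbols = currencyCodes.map(code => currencyList[code].symbol);
```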
Adding files to your project
As programs get larger, it is common to split them into multiple files for organizational or reusability purposes. One advantage of working with an IDE is that they make working with multiple files much easier. You already know how to create and compile single-file projects. Adding new files to existing projects is very easy.
Best practice
When you add new code files to your project, give them a .cpp extension.
For Visual Studio users
In Visual Studio, right click on the Source Files folder in the Solution Explorer window, and choose Add > New Item…. Make sure C++ File (.cpp) is selected, give the new file a name, and it will be added to your project.
Now when you compile your program, you should see the compiler list the name of your file as it compiles it.
For Code::Blocks users
In Code::Blocks, go to the File menu and choose New > File….
In the New from template dialog, select C/C++ source and click Go.
You may or may not see a welcome to the C/C++ source file wizard dialog at this point. If you do, click Next.
On the next page of the wizard, select “C++” and click Next.
Now give the new file a name (don’t forget the .cpp extension), and click the All button to ensure all build targets are selected. Finally, select finish.
For GCC/G++ users

From the command line, you can create the additional file yourself, using your favorite editor. When you compile your program, you'll need to include all of the relevant code files on the compile line. For example: g++ main.cpp add.cpp -o main, where main.cpp and add.cpp are the names of your code files, and main is the name of the output file.
In lesson 2.6 -- Forward declarations and definitions, we took a look at a single-file program that wouldn't compile:

#include <iostream>

int main()
{
    std::cout << "The sum of 3 and 4 is: " << add(3, 4) << '\n';
    return 0;
}

int add(int x, int y)
{
    return x + y;
}
When the compiler reaches the function call to add on line 5 of main, it doesn’t know what add is, because we haven’t defined add until line 9! Our solution to this was to either reorder the functions (placing add first) or use a forward declaration for add.
Now let’s take a look at a similar multi-file program:
add.cpp:

int add(int x, int y)
{
    return x + y;
}
main.cpp:

#include <iostream>

int main()
{
    std::cout << "The sum of 3 and 4 is: " << add(3, 4) << '\n';
    return 0;
}
Your compiler may decide to compile either add.cpp or main.cpp first. Either way, main.cpp will fail to compile, giving the same compiler error as the previous example:
main.cpp(5) : error C3861: 'add': identifier not found
The reason is exactly the same as well: when the compiler reaches line 5 of main.cpp, it doesn’t know what identifier add is.
Remember, the compiler compiles each file individually. It does not know about the contents of other code files, or remember anything it has seen from previously compiled code files. So even though the compiler may have seen the definition of function add previously (if it compiled add.cpp first), it doesn’t remember.
This limited visibility and short memory is intentional, so that files may have functions or variables that have the same names without conflicting with each other. We’ll explore an example of such a conflict in the next lesson.
Our options for a solution here are the same as before: place the definition of function add before function main, or satisfy the compiler with a forward declaration. In this case, because function add is in another file, the reordering option isn’t a good one.
The better solution here is to use a forward declaration:
main.cpp (with forward declaration):

#include <iostream>

int add(int x, int y); // needed so main.cpp knows that add() is a function defined elsewhere

int main()
{
    std::cout << "The sum of 3 and 4 is: " << add(3, 4) << '\n';
    return 0;
}
add.cpp (stays the same):

int add(int x, int y)
{
    return x + y;
}
Now, when the compiler is compiling main.cpp, it will know what identifier add is and be satisfied. The linker will connect the function call to add in main.cpp to the definition of function add in add.cpp.
Something went wrong!

1. If you get a compiler error about add not being defined in main, you probably forgot the forward declaration for function add in main.cpp.

2. If you get a linker error about add not being defined, e.g.

unresolved external symbol "int __cdecl add(int,int)" (?add@@YAHHH@Z) referenced in function _main

2a. …the most likely reason is that add.cpp is not added to your project correctly. When you compile, you should see the compiler list both main.cpp and add.cpp. If you only see main.cpp, then add.cpp definitely isn't getting compiled.

3. Do not #include "add.cpp" from main.cpp. This will cause the compiler to insert the contents of add.cpp directly into main.cpp instead of treating them as separate files.
Summary.
Quiz time
Question #1
Split the following program into two files (main.cpp, and input.cpp). Main.cpp should have the main function, and input.cpp should have the getInteger function.

#include <iostream>

int getInteger()
{
    std::cout << "Enter an integer: ";
    int x{};
    std::cin >> x;
    return x;
}

int main()
{
    int x{ getInteger() };
    int y{ getInteger() };

    std::cout << x << " + " << y << " is " << x + y << '\n';
    return 0;
}
i use VS code, i downloaded an extension to combine files, so after selecting the two separate files using ctrl + left click , then right click and there is an option saying combine which I press but I am still getting an error . I use code runner extension as well, is that causing a problem ??
the error i was getting :::
Windows PowerShell
Try the new cross-platform PowerShell
PS Z:\My C PROGRAMS\C++> cd "z:\My C PROGRAMS\C++\" ; if ($?) { g++ main.cpp -o main } ; if ($?) { .\main }
C:\Users\PRAJWA~1\AppData\Local\Temp\ccuZ5kAM.o:main.cpp:(.text+0x3d): undefined reference to `add(int, int)'
collect2.exe: error: ld returned 1 exit status
PS Z:\My C PROGRAMS\C++>
I am also getting the same error! Can someone help
I'm loving these tutorials so far!
Though from time to time during the last couple of days, I couldn't access the website, and it only displayed a white screen with "Origin Error" on it, seems to be fixed now though.
Just wanted to know if it was a problem on my end, and if so, suggestions on fixing it in case it pops up again?
Or maybe the site was just under maintenance.
Thanks again to you guys for making this awesome content!
The Origin Errors are errors on our end. If they occur, it's likely because the site is either down, or there are internet connectivity issues somewhere. All you can do is wait and try again.
You can use the waybackmachine when you've problems accessing the site.
When using Visual Studio on Windows, I only have the options "Class..." and "Resource..." when I right-click on "HelloWorld" and move with the cursor over "add".
Also, the line starting with "Solution" is not in the solution explorer.
What is going on?
Oh, never mind!
I had changed some settings in Visual Studio. After a reset, it was solved.
is ```int x{}``` the variable x value-initialized to 0 or is it equal to an uninitialized variable ```int x;```?
`x` is value-initialized to 0
I can compile the two files on the Linux command line fine with g++ and the program works, in VSCode (version 1.51.1) on Linux it doesn't seem to work, probably because there is no concept of an project? I did install the Create C/C++ Project extension, but that didn't help.
The error ends in "/C++/multipleFiles/main.cpp:7: undefined reference to `add(int, int)"
Love the site by the way, donated $10 yesterday because I like it so much, looking forward to the other chapters.
The error indicates that you didn't compile "add.cpp", only "main.cpp". I don't know the extension you're using and I can't recommend using extensions to compile code. If the author of that extension removes it from the store or you switch editors, you won't be able to compile your code anymore. Look into CMake or Meson, or write a script that compiles your code instead. If you need help with any of those, feel free to ask.
Hey there, Good work on donation. I started learning today. Also using VSCode. The Problem with code editor is more manual configurations. So, you need to know more tweaks in VSCode. Visit and do more tweaks to learn more :
Just tweak the task.json with below code. Remember to remove the two file configuration after use.
it's like : g++ -std=c++17 -g Excercise_001.cpp TwoTimerGuy.cpp -o GoodBoy.exe
feel free to connect : [email protected]
Thank you so much for making this entire guide. I hope you'll continue to maintain it and this site because it is beyond useful.
Before discovering this site, I tried a number of highly rated textbooks and a couple of video courses to try getting my head around C++. This site beats them all, and it's not even close. Many Thanks for putting all of this together. I'm looking forward to completing all the lessons.
Possible typo:
"The reason for is exactly the same as well:"
should be:
"The reason for this is exactly the same as well:"
?
Thanks again.
Typo fixed, thanks!
I just want to say that the construction of this tutorial is beautiful. Even though I just learned about scope a few lessons ago and I felt that I understood it I didn't TRULY get it until the quiz question in this lesson. Initially I couldn't understand how 'x' could be used in both functions or how 'y' could get the result of 'getInteger', or really how 'y' could get a result different from 'x'. I compiled and ran it anyway after writing it and to my surprise it asked for my input twice, well of course it did the function was called twice! It seems so obvious now but if this tutorial hadn't been set up as it was I wouldn't have had the background information to make it so obvious once I saw it. The function is called twice so it gets two inputs stored as 'x' and 'y' in the main function, the 'x' in getInteger is only stored there. So in this one lesson you've cemented in my head forward declarations, scopes, calls, different ways to achieve results and have really given me a much more solid base to stand on. Again, it's just beautiful and I so look forward to the rest of the tutorial, thank you!
Hello,
I'm totally new to Code::Blocks
I'm trying to compile 2 files together but I can't add input.cpp to main.cpp
I am not able to check the add file checkbox...
What should I do?
Using Code::Blocks 20.03 on Microsoft Windows 10.
Hello, I'm learning two work with multiple files but is met with this error:
main.cpp
checkASCII.cpp
when i try to compile main.cpp, the error is "undefined reference to checkASCII(char). What am i doing wrong here?
You're not compiling "checkASCII.cpp" alongside "main.cpp". Look up how to add a source file in your IDE
i think i did compile both of them, the checkASCII file is inside the same source folder as main.cpp. I checked the suggested errors and none of them applied to my case : ( Is it because i didnt include a header file in my main.cpp? i just assumed from this chapter that only an additional .cpp file is needed
you're right! i'm sorry i thought the other file being listed under the sources folder was enough. thank you
#include <iostream>
add.cpp
main.cpp
this is what im assuming its supposed to look like and it wont succeed i get this.
1>------ Build started: Project: hello world, Configuration: Debug Win32 ------
1>hello world.cpp
1>C:\Users\Gaming\source\repos\hello world\hello world\hello world.cpp(3,4): error C3927: '->': trailing return type is not allowed after a non-function declarator
1>C:\Users\Gaming\source\repos\hello world\hello world\hello world.cpp(3,4): error C3484: syntax error: expected '->' before the return type
1>C:\Users\Gaming\source\repos\hello world\hello world\hello world.cpp(5,1): error C3613: missing return type after '->' ('int' assumed)
1>C:\Users\Gaming\source\repos\hello world\hello world\hello world.cpp(5,1): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>C:\Users\Gaming\source\repos\hello world\hello world\hello world.cpp(3,5): error C2146: syntax error: missing ';' before identifier 'cpp'
1>main.cpp
1>Generating Code...
1>Done building project "hello world.vcxproj" -- FAILED.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
added #iostream to add.cpp and now i dont see main.cpp in the compiler
removed it and now neither add or main cpp show up in comp.
Alex,
Your method show here does not seem to work with C::B 20.3 W-10 for some reason. I've tried many ways to fix this but the method you show doesn't work on multi files. If I save the main.cpp then I try to add file (add.cpp) it has to be given a the full path name and saved to work. (not just add.cpp as you've shown.) If I use C:/CBProjects/main.cpp it doesn't work at all.
Is it possible that you may have left out a step in your lesson? What should we us as a full path to save here? I'm using the path show above.
Thanks
You need to do this in an already existing project.
Wow I finally got the multi file Process to worked.
I had to make an entirely new project I called testing to do this.
Then I copied main.cpp from lesson 2.8 with the "forward declaration" with add function. Once I did this I made the new>file>add.cpp. I had to use the full for the testing project for the name path, (you showed add.cpp in the lesson.) I used (C:\CBProjects\testing) there. It put main.cpp and add.cpp into the management window on the left side. I copied app.cpp from lesson 2.8 again, into the editors add function. I also checked the properties of both files to make sure the would compile and link. The first time I ran build/run the file worked, but I got a red process terminated with status 1073741510 in the build log.
I ran the process again and it ran great and the red build log signal went away.
I hope this post will help other C::B Users. Just remember to make a new project and substitute CBProjects above in the full path with whatever you used there.
It seems that, at least in my case that in add.cpp i still have to add #include <iostream>, is that normal?
If you use the iostream library in the file add.cpp, then you need to #include it.
can someone tell me how to work with separate .cpp files in atom IDE?
Atom, is a simple Text editor, built on chromium, why dont you just use Code::blocks.
Atom is owned be github, and github is owned by Microsoft, and Microsoft already have a alternative, to Atom, and they are putting a lot of work of, it, If you Just dont want to use code::blocks you should consider using `vs code` rather then Atom.
Can someone please tell me why am i getting this error
Error :-
undefined reference to `add(int, int)'
ld returned 1 exit status
my main.cpp
#include <iostream>
int add(int x,int y);
int main()
{
std::cout<<add(3,5);
return 0;
}
my add.cpp
int add(int x,int y){
return x+y;
}
See point 2 in section "Something went wrong!"
I added my add.cpp to my project correctly but its not working if i add #include"add.cpp" then its working else its not working
Then you didn't add add.cpp to your project correctly. Don't #include cpp files.
In the quiz time Q#1, you haven't defined y and nor did it holds a value neither a value entered by the user.
you have only given a value (entered by the user)for x not for y. Should'nt it show undefined behaviour?
`getInteger`'s `x` has nothing to do with `main`'s `x`. A function can't see variables declared in other functions.
`main`'s `x` is initialized with the return value of `getInteger`, and so is `y`. See lesson 2.2 for an explanation of return values.
In Code Lite I didn't have to #include<iostream> in the second file. In Visual Studio, I did have to do this. Should I worry about that? Or not?
Yes, worry about it. Always include everything you use. If your second file uses something from <iostream>, then include <iostream> in the second file. As you already found out, if you don't do this, your code won't work with other compilers.
"3. Do not #include “add.cpp” from main.cpp. This will cause the compiler to insert the contents of add.cpp directly into main.cpp instead of treating them as separate files."
About that, can we somehow avoid declaring the function prototype in every file where we need to use it? Is there a way, like in some other programming languages, to somehow import a whole file into a separate kind of namespace and use the functions defined in it?
This is done via header files, which we cover later in this chapter.
C++20 replaced header files with modules, but no compiler supports modules yet. Headers will continue to work.
Can anyone explain why compiling in IDEA gives a compiler error on valid code which Maven compiles just fine?
The java-code is (in maven-module A):
public class Fisk {
    public static class A {
    }
    public static A A = new A();
}
The following Java code is an example of usage:
Fisk.A a = new Fisk.A();
Fisk.A b = Fisk.A;
Then in maven-module B, which has A as dependency the following code results in compiler-error in IDEA (but works in maven):
val fisk = new Fisk.A()
val strupp: Fisk.A = Fisk.A
I posted a question regarding this here:
And filed a ticket here:
You could add this to your log.xml in the IDEA installation to log out the compiler command line:
<category name="#org.jetbrains.plugins.scala.compiler">
<priority value="DEBUG"/>
<appender-ref
</category>
Or post a zip of a minimal project that demonstrates the problem.
-jason
I will post an example. Right now I'm off for a bachelor's party so it'll be tomorrow:-)
I cooked up the attached example.
The project compiles fine using "mvn install" but making the project in IDEA (CTRL+F9) fails with:
error: A is already defined as object A
public static A A = new A();
Attachment(s):
test-fisk.tar.bz2
Can I pay JetBrains to prioritize this issue (I'm already a license-owner)? It's really getting in the way for productivity.
PS: I'm using Scala-2.8.1, not 2.9.RCx and will keep doing so until Lift is released with a version which fully supports 2.9
--
Andreas
I'm now looking into the problem...
The root of the problem is some bug in scalac Java parser / typer (and it's a good idea to report the bug to Scala's issue tracker).
To invoke the bug, you need to feed both given .java & .scala files into scalac (move Fish.java next to FishTest.scala, run "mvn compile" and you will get the same error).
When there are two separate modules, Maven compiles them sequentially: it runs javac first for the Java module, and then it runs scalac for the second, so the Scala compiler uses binary .class files instead of Java sources (and thus the scalac bug is bypassed). However, Maven (of course) won't compile those files within a single module.
By default, Scala plugin for IDEA compiles Scala files in the first place, but you may uncheck "Project Settings / Scala Compiler / Compile Scala files first" to switch the order, so you will be able to compile your sample files (even in a single module).
P.S. Many thanks to Jason Zaugg for clarifying the problem and fixing "Compile Scala files first" setting.
If you select 'Compile Java Files first', why do we include the .java files from other modules in the batch of files passed to Scalac? Should we just limit it to .java files from the current module?
In the sample project "test-scala" module has an explicit dependency on "test-java" one.
BTW, you don't need to include dependency on scala-compiler in scala/pom.xml (if you don't use Scalac classes directly in your code).
It's worth pointing out that it'll be impossible to compile files with Java-Scala circular dependencies when "Compile Scala files first" is turned off.
Is this really a reason to pass the java-files to scalac? It seems to me the java-files in the java-module should be compiled first (by javac) and then the scala-files in the scala-module with the already compiled classes from java-module in the classpath, or is that not the way it works?
--
Andreas
Actually I know this, but I've never gotten around to removing it:-) And copy-pasting deps around projects haunts me...
Filed ticket:
I agree, I still don't see the need to ever pass .java files from downstream modules to scalac.
It seems that there's no need to pass external .java files to scalac except when there's a mutual module dependency.
Oh wow, I didn't know that was possible! Sounds simultaneously evil and convenient...
In reality, I'm unsure whether we need to pass .java files or not.
The current implementation of project compilation is, undoubtedly, based on its own historical prerequisites, and some of them I know nothing about.
The issue needs further investigation.
elric@xxxxxxxxxx wrote:
> This does not work. The problem is that in GBDE for sector n
> which is written, there are two operations:
>
> 1. change the key by which sector n is encrypted, and
> 2. write sector n encrypted with the new key.
>
> If one of these fails, how could the write be ignored? If one of
> the two completes but not both, then one is left in the situation
> of either:
>
> 1. trying to decrypt the old sector with the new encryption key, or

No.

> 2. trying to decrypt the new sector with the old encryption key.

Yes, the data sector is written first and then the key sector. Since, as you pointed out, GBDE is more susceptible to dictionary attacks than CGD, one can then use this advantage (it's a feature, not a design flaw!) to recover the lost key, so no data is lost. :->

Seriously, how can one make writing atomic without breaking compatibility with existing GBDE volumes? Which approach does CGD use to solve the problem of atomic writing?

How about changing GBDE in a backwards-incompatible way by adding one key shadow sector for every n key sectors (n would be chosen at volume initialization)? The key shadow sector would hold the xor of the last encrypted key in the zone to be replaced and its replacement key, plus a 32 byte status chunk to indicate the status of the operation: all zeros (0x0) would indicate the new key has not been written to the key sector yet (but it is present in xor-ed form in the key shadow sector), all ones (0xFF) would indicate the new encrypted data sector has been written successfully, and random garbage would indicate that both the new key and the encrypted data have been successfully written to disk. Then the relative offset of the key sector from the key shadow sector would follow, and then the offset of the key within the key sector. The rest of the sector would also be padded with random garbage, which would be regenerated every x writes (this value could be tunable via sysctl, kern.geom.bde.garbage_recycle_freq).
The whole write procedure would look something like this:

1. read old key from key sector
2. xor-ed key = new key ^ old key
3. pad xor-ed key with:
   - status chunk: 32 0x0 characters
   - relative offset of key sector from key shadow sector (chosen at volume initialization, 8 bytes by default)
   - relative offset of key within key sector (8 bytes by default)
4. write padded xor-ed key to key shadow sector
5. encrypt data with new key
6. write encrypted data to disk
7. set status chunk in key shadow sector to 0xFF
8. write new key to key sector
9. if (number of key writes % kern.geom.bde.garbage_recycle_freq == 0) then
       overwrite control chunk in key shadow sector and the rest of the sector with random garbage
   else
       overwrite only control chunk in key shadow sector with random garbage (not to waste entropy)
   endif

> Either way, the sector has been lost. Neither the original
> contents of the sector nor the new contents can be recovered without
> breaking AES-128. Fsck(8) does not contain this functionality (and it
> would be rather impressive if it did.)

The solution I proposed always provides consistent atomic writes. If only the encrypted data sector gets written and the key does not get written to the key sector, you can recover the new key by xor-ing the xor-ed key from the key shadow sector with the old key. That way fsck(8) would only need to walk through all the key shadow sectors, check for 0xFF status chunks and recover new keys automatically. Zero control chunks would have to be handled by the user, since fsck(8) would have no way of telling whether a zero chunk means the newly encrypted data got written to disk (and the plug was pulled just before the control chunk was to be set to 0xFF) or not (the new key just got written to disk in xor-ed form, but the existing key and data are unchanged). The simplest way would be to let fsck(8) report and record all zero chunks in the form which could be fed to a utility which would set the key directly.
The input (output generated by fsck(8)) for such a utility would consist of the device name, relative offset and the key. Both options would always be listed, and then the user could try all the alternatives and see which key is the right one. There would be a performance hit, but I would like to see this implemented to see just how big the difference would be.

What do you guys think of this solution?

ALeine
Get/set format flags.
The first syntax returns the current set of format flags for the stream.
The second syntax sets fmtfl as the format flags for the stream.
The stored format flags affect the way data is interpreted by certain input functions
and how it is written by certain output functions.
The values of the different format flags are explained in the fmtflags type reference.
The second syntax of this function sets the value for all the format flags of the stream. If you want to modify a single flag refer to setf and unsetf.
Parameters.
fmtfl
  Format flags to be set for the stream.
Return Value.
The format flags of the stream before the call.
Example.
// modify flags
#include <iostream>
using namespace std;

int main () {
  cout.flags ( ios_base::right | ios_base::hex | ios_base::showbase );
  cout.width (10);
  cout << 100;
  return 0;
}
This simple example sets some format flags for cout that affect the
later insertion operation, printing the value in hexadecimal base with its base prefix,
right-aligned in a field ten spaces wide.
See also.
setf, unsetf
fmtflags type, ios_base class
#include "FastLED.h"#define PIN 6#define NUM_LEDS 40#define BRIGHTNESS 50CRGBArray<NUM_LEDS> leds;uint8_t hue[NUM_LEDS];void setup() { FastLED.addLeds<NEOPIXEL, PIN>(leds, NUM_LEDS); FastLED.setBrightness(BRIGHTNESS); for (int i = 0; i < NUM_LEDS; i++) { hue[i] = 255 / NUM_LEDS * i; }}void loop() { for (int i = 0; i < NUM_LEDS; i++) { leds[i] = CHSV(hue[i]++, 255, 255); } FastLED.show(); delay(30);}
Don't do this:

for (int i = 0; i < NUM_LEDS; i++)

For just using LEDs 0 to 4 (the first five), use:

for (int i = 0; i < 5; i++)

Then do:

for (int i = 20; i < 24; i++)

And so on to build up the colours you want to show on the LEDs.
@Grumpy_Mike big shoutout to you for this idea, it was a good starting point. But the problem is that the effect is supposed to appear on special pixel groups with gaps between each other. So like I wrote in my first post, for example pixels 1, 2, 3, 4, 30, 31, 32 should show the same effect as a group. If they had been in one row (from pixel 1 to 10, for example) this would have worked, but I need them paired together with an effect that cycles through all of the pixels of one group. Any ideas?
#include "FastLED.h"#define PIN 6#define NUM_LEDS 40#define BRIGHTNESS 50#define LED_TYPE WS2812B#define COLOR_ORDER GRB#define NUM_SEGMENTS 5#define NUM_ROWS 8CRGBArray<NUM_LEDS> leds;uint8_t hue[NUM_ROWS];const uint8_t aRowMap[NUM_SEGMENTS][NUM_ROWS] = { { 0, 1, 2, 3, 36, 37, 38, 39 }, { 4, 5, 6, 7, 8, 33, 34, 35 }, { 9, 10, 11, 12, 29, 30, 31, 32 }, {13, 14, 23, 24, 25, 26, 27, 28 }, {15, 16, 17, 18, 19, 20, 21, 22 } };void setup() { Serial.begin(115200); FastLED.addLeds<LED_TYPE, PIN, COLOR_ORDER>(leds, NUM_LEDS).setCorrection(TypicalSMD5050); FastLED.setBrightness(BRIGHTNESS); for (int i = 0; i < NUM_SEGMENTS; i++) { hue[i] = 255 / NUM_SEGMENTS * i; }}void loop() { static uint8_t nCurrentRow =0; for (int i = 0; i < NUM_ROWS; i++) { leds[aRowMap[nCurrentRow][i]] = CHSV(hue[i]++, 255, 255); } FastLED.show(); delay(50);}
but I have no clue how to apply an effect to each individual group at the same time.
I have multiple arrays of LED's
But my problem is still that I don't know how to assign the effect to the LED group.
I want to show this effect on LED 1, 2, 3, 4, 5, 20, 21, 22 and 23.
byte firstGroup[] = {1, 2, 3, 4, 5, 20, 21, 22, 23};
for (int i = 0; i < 9; i++) {
  leds[firstGroup[i]].setRGB(255, 128, 0);
}
FastLED.show();
delay(50);
Hi I'm learning C++ as a beginner through the book "C++ Primer by Stephen Prata". I have just learned a bit about creating my own functions and I'm now doing some practice at the end of the chapter. The problem is I'm meant to make output of:
"Three blind mice
Three blind mice
See how they run
See how they run"
Using my own functions. Now, so far the book has only taught me how to make functions with integers. I do not know what to put as a prototype at the top, or how to call the functions where highlighted. Could someone please advise me on what to do where marked in red as a prototype, if all I want is a function to display text?
// using 3 user defined functions
#include <iostream>

[B]tbm; shr;[/B]

int main()
{
    using namespace std;
    cout << tbm << endl;
    cout << tbm << endl;
    cout << shr << endl;
    cout << shr << endl;
    system ("pause");
    return 0;
}

[B]tbm[/B]
{
    using namespace std;
    cout << "Three blind mice";
}

[B]shr[/B]
{
    using namespace std;
    cout << "See how they run";
}
LIST OF DOCUMENTS
PUBLISHED IN
,,DOCUMENTEN BETREFFENDE DE
BUITENLANDSE POLITIEK VAN NEDERLAND 1919-1945”
(DOCUMENTS RELATING TO THE
FOREIGN POLICY OF THE NETHERLANDS 1919--1945)
SEPTEMBER 1, 1921 - JULY 31, 1922
THE HAGUE 1980
This book contains the complete text of the ,,List of documents” from:
Documenten betreffende de buitenlandse politiek van Nederland 1919-1945.
Periode A: 1919-1930. Deel III: 1 September 1921-31 juli 1922. Bewerkt door J.
Woltring.
(Rijks Geschiedkundige Publicatiën, Grote Serie 173).
’s-Gravenhage, Martinus Nijhoff, 1980.
LIST OF DOCUMENTS¹
No. Date; From/to Description
1 1.9.1921
From Beelaerts van Blokland
to Van Panhuys
(Berne)
2 1.9.1921
From De Graaff
3 2.9.1921
From Van Heeckeren
(Ems Estuary Committee)
3A Annex 1
3B Annex 2
4 3.9.1921
To Van Vredenburgh
(Brussels)
4A 28.8.1921
Annex 1
Serbia: Diplomatic Service and Rapaport question
(see Part II, No. 185); Serbian insistence on
dispatch of Netherlands representative; comments
of Serbian diplomats in Bucharest and
Sofia.
Netherlands East Indies and the League of
Nations: applicability of Labour Agreements to
Netherlands East Indies; objections to separate
representation of the colony on the League of
Nations delegation.
Germany (Ems Estuary): Handing over of protocols
of 19 and 20 August (see Part II, No. 439);
paraphrase of points discussed then; military aspects
of divided possession of Ems Huibertsgat
and water between Borkum and Huibertsplaats;
line of demarcation from Knocke to the sea, and
German objection to a line running across the
Paap or Hond rivers; allowing Germany a waterway
from Emden to the sea (500 m wide and
14.5 m deep at average high tide) and the Netherlands
a waterway (of equal depth) from Delfzijl
to the south, connecting with Oost Friesche
Gaatje, and one (200 m wide and 8.5 m deep) to
the north, connecting with Doekegat; military
matters to be dealt with by military members of
the committee.
Protocol German-Netherlands Ems Estuary Committee
(4th session on 19 August).
Ditto (5th session on 20 August).
Belgian question: instructions to call upon Jaspar
to obtain text of the latter’s proposed formula
for shipping on the river Scheldt; suggestion for
consultation between Struycken and Bourquin in
Geneva; refusal to cooperate in revision of commercial
treaty only; enclosure of Annex 1.
Notes by Van Karnebeek on discussion held at
Schweizerhof Hotel, Lucerne, on treaty revision
(points relating among other things to Limburg,
1. The numbers in the first column refer to the numbers of the documents. The date of
the document, the sender's and the addressee's names and places where the document was
written are shown in the second column. Where the minister of Foreign Affairs was the
sender or the addressee, this is not indicated. Titles have not been added. Where extracts
from diaries, notes, minutes of meetings and other documents that were not dispatched
are concerned, only the author's name or that of the institution in question has been indicated
(i.e. without the addition of from or to). The third column contains a short description
of the text of the document.
4B
5
27.8.1921
Annex 2
5.9.1921
Minutes Trade Treaties
Committee
6 6.9.1921
From Van Dijk
7 8.9.1921
To De Graaff
8
9
10
8.9.1921
From Oliphant to
Michiels van Verduynen
(London)
8.9.1921
From Hubrechts
(Washington)
9.9.1921
From Van Vredenburch
(Brussels)
military consultation, Wielingen, economic treaty,
activism); application to Terneuzen of import,
export and transit tariffs in force in Belgian ports.
Aide-mémoire from Van Karnebeek concerning
Scheldt shipping.
Inaugural meeting of Committee (see Part II, No.
420): Nederbragt’s exposition of objectives and
work (study of trade policy trends elsewhere,
weighing the interests of the Netherlands against
those of other countries, recognition of companies
and admittance of commercial travellers,
principles underlying the conclusion and renewal
of trade agreements and method of drawing up
instructions in specific cases); discussion of trade
relations with Spain.
Belgium: pilot service on river Scheldt; piloting
by Dutchmen to and from Antwerp quay (see
Part II, No. 193-A); interpretation of the terms
(1) ,,beneden (below) Antwerpen” (Art. IX, Para.
2 Treaty of 1839) and (2) ,jusque devant l’entrée
du port” (alongside the quay or before the
entrance to the docks) drafted for the new treaty;
Netherlands authority to arrange pilotage ex
S.1859/93 (amounting to the use of State pilots
everywhere except in Rotterdam) and ban on
foreign pilots in any Dutch port; special arrangement
for Terneuzen.
Radio link Germany-Japan via Nauen-Funabashi
(Java-Japan) (request from Telefunken): Dutch
co-operation in the case of British approval and
compatibility with Treaty of Versailles, on condition
that the Netherlands be given access to the
Java-Japan link for official telegrams.
Turkey: (navigation dues: Commission des
Détroits): comments on the steps taken by the
Scandinavian countries and the Netherlands referred
to in Part II, Nos. 211, 293 and 330. The
problem of the Straits required an approach different
from that followed for the Scheldt, the
Sound and the Elbe.
Washington Conference: talk with Hughes; his account
of his discussion with Britain, France,
China, Italy and Japan on the invitation from
Belgium and the Netherlands.
Belgian question: No. 4B read to Jaspar; discussion
with him on the transport of arms and
ammunition on the Scheldt, through Limburg
and across the Wielingen; declination to sign
10A
Annex
11 10.9.1921
From Schrikker
12 13.9.1921
From Hubrechts
(Washington)
13
13A
14
15
13.9.1921
From Kon. Mij Exploitatie
Petroleum (Royal
Dutch) sources in
Netherlands East Indies
30.9.1921
Annex
19.9.1921
To Struycken
21.9.1921
From Van Vredenburch
(Brussels)
16 24.9.1921
From van Karnebeek
(Geneva)
to Snouck Hurgronje
17 24.9.1921
From Melvill Carnbee
(Madrid)
economic agreement only (see No. 4); the Gazette
de Hollande: glorification of Van Karnebeek
and denigration of Jaspar; anti-Dutch press
in Belgium and the Queen’s visit to Staats Vlaanderen;
Jasper’s dilatoriness in the dispatch of
business.
Quotation from an article by Terlinden (,,Le
traité de Versailles et le livre de Tardieu”) in
Revue Générale of 15 August.
Spain: note on the provisional (protectionist) import
duties in that country (two columns); temporary
Netherlands-Spanish arrangement of 16/
24 June; Spanish plans for the introduction of a
definitive protectionist tariff; Netherlands balance
of trade with Spain; import and export figures.
Yap cables and Japanese mandate over Yap:
Hughes on premature reports on this matter in
the American press; Sidehara on the progress of
the negotiations and on the discussions yet to be
conducted with the Netherlands regarding cable
allocations.
Djambi affair: incorrect statements by Senator
Lodge about control of that company passing into
British hands; protest against this by Andrews,
the group’s legal adviser in the U.S., should be
brought to the attention of the U.S. government
through the Envoy in Washington.
Draft and text sent to the State Department.
Pilot service on the Scheldt: enclosure of No. 6
with indication of some confusion of terms in
the documents (non-existence of the terms
,,reede” and ,,haven”).
Belgian question: possible appearance of Forthomme
at Foreign Affairs; press on visit of
Queen Wilhelmina referred to in No. 10; Envoy’s
limited confidence in Jaspar-Forthomme administration
in view of the latter’s annexionist teridencies
(Cf. No. 10).
Austria (relief credits) : conversion of credit
granted to Austria after the armistice into relief
credit up to F. 12,710,000.
Spain: (trade) (Cf. No. 11); Gonzales Hontoria’s
views on measures to be taken; postponement of
negotiations until early 1922 to allow for preparation
by a committee in that country; rejection
18 25.9.1921
From Cohen Stuart
(Delft) to Snouck-
Hurgronje
18A 23.9.1921
Annex
19 26.9.1921
From Gevers
(Berlin)
20
20A
21
21A
22
23
26.9.1921
From Snouck Hurgronje
to Van Karnebeek
(Geneva)
Annex
26.9.1921
From Van Karnebeek
(Geneva) to Snouck
Hurgronje
25.9.1921
Annex
From Nixon (Geneva)
27.9.1921
From Snouck Hurgronje
to Cohen Stuart
28.9.1921
From Van Vredenburch
(Brussels)
of similar Spanish proposals by France, Switzerland
and Italy.
Russia: Notes on aid, based on personal experience
in that country, in connection with forthcoming
conference in Brussels.
Observations on the political and economic significance
of aid; impossibility of organising it
without involving the Bolsheviks; unlikelihood of
a change for the better in Russia by European
standards, despite failure of world revolution and
changing views of those in power in Russia; need
to strive for a compromise that would also be acceptable
to the Russians; little chance of success
with purely private attempts at reconstruction;
guarantees required for effective implementation
of the Nansen plan (international credit of £30
to £40 million) in view of possibility of ,,private”
looting, theft and corruption.
Taking over of premises of German embassy guard
in Peking under Art. 130 of the Treaty of Versailles
to prevent the Chinese from taking possession;
preparation of German-Netherlands exchange
of notes, Peking 18 and 20 February
1922.
Austria (relief credits): reply to No. 16; need for
Cabinet to agree with Van Karnebeek’s changed
views and likely reservations on the part of the
Min. of Finance.
Report from Snouck-Hurgronje containing figures
on money already advanced for the purchase
of food and as relief; viewpoints of other countries
regarding chances of Ter Meulen plan in the
League of Nations.
Ditto: inquiry about replies received to Part I
No. 423. Elucidation of No. 16 in connection
with annex.
Communication from financial-economic section
of League of Nations concerning conversion of
monies advanced into relief credit.
Russia (aid): reply to No. 18. No reference to
Third International’s propaganda lacuna in Cohen
Stuart’s notes; guarantees to be given in this
respect.
Belgian question: Van de Vijvere’s objections to
Jaspar-Van Karnebeek discussions (Cf. No. 4A);
criticism in Belgian R.C. circles of Jaspar (Carton
24
25
26
27
28
28A
28B
29
30
31
28.9.1921
Minutes of Council of
Ministers
29.9.1921
From Snouck Hurgronje
to Van Karnebeek
(Geneva)
30.9.1921
From Van Nispen tot
Sevenaer (Vatican City)
30.9.1921
From Beelaerts van
Blokland
30.9.1921
From Econ. Affairs Dept.
Annex 1
Annex 2
30.9.1921
From Van Karnebeek
(Geneva) to Snouck
Hurgronje
30.9.1921
From Nolens (I.L.O.) to
De Gasparri
1.10.1921
To Van Karnebeek
(Geneva)
de Wiart, Imperiali) and of the Jaspar-Forthomme
partnership (Cf. No. 15); Belgian-Luxembourg
negotiations suggest Jaspar poaching on French
preserves; ambassador had little faith in Jaspar’s
ability to restore normal relations with the
Netherlands.
League of Nations: rejection of Van Karnebeek’s
suggestion that the Netherlands be moved from
the fourth to the second class for the annual contribution.
Russia (aid): announcement of invitation for the
Netherlands to attend the conference in Brussels
on 6 October; Loudon recommended as Netherlands
delegate, with secondment of Cohen Stuart
as expert.
Vatican: diplomatic service: unfortunate behaviour
of Internuncio Vicentini in the Netherlands
(addressed H.M. the Queen while presenting his
credentials); his efforts (,,priority procedure”) to
become doyen of the diplomatic corps.
Turkey: navigation dues: Commission des Détroits):
at Sweden’s request memorandum on
Netherlands intentions regarding further steps
following the démarche of 1 May (Part II, Nos.
293,322,324 and 350).
Yap cables: analytical summary of the situation
regarding the DNTG and its position under the
Treaty of Versailles; caution to be exercised in
regard to participation in international telegraph
conferences in view of commitments ensuing
from the Convention of St. Petersburg and the
telegraph regulations lately revised in Lisbon.
Answer to questionnaire compiled by ,,Commission
des Réparations” (subsidies to company, its
liquidation and settlement of affairs).
Standpoint to be adopted at Washington conference
by Netherlands delegation.
Russia (aid): answer to No. 25; would the Netherlands
be formally represented in Brussels Agreement
with nomination of Loudon and promise of
further decision regarding Cohen Stuart; need for
caution at conference.
I.L.O.: Plea for participation of Vatican.
Russia (aid): meeting in Brussels; delegation of
Loudon ,,ad audiendum”; non-secondment of
Cohen Stuart (Cf. Nos. 25 and 29) on the grounds
32
33
34
35
36
36A
37
38
39
40
41
2.10.1921
From Van Karnebeek
(Geneva) to Beelaerts
van Blokland
3.10.1921
From Van Karnebeek
(Geneva)
4.10.1921
To Van Karnebeek
(Geneva)
5.10.1921
From Van Karnebeek
5.10.1921
From Phillips
Annex
6.10.1921
From Beelaerts van
Blokland
6.10.1921
From Oudendijk
(Peking)
7.10.1921
To Van Karnebeek
7.10.1921
From Van Karnebeek
(Washington) to Ruys
de Beerenbrouck
7.10.1921
To Melvill v. Carnbee
(Madrid)
of his at least temporary support of the USSR, as
shown in No. 18A. Serbia: restoration of diplomatic
relations with Belgrade.
Vatican: dissatisfaction regarding Mgr. Vicentini
(Cf. No. 26): contact with Pope only on diplomatic
grounds and not in his capacity of Head of
Church.
League of Nations (Albania Commission): request
for designation of Netherlands member.
Ditto : Ruys de Beerenbrouck’s approval of proposal
contained in No. 33; choice between Wittert,
Pop and Heifrich.
Ditto: address delivered by him as Chairman
at the closing session of the Second Assembly
on 5 October.
Washington Conference on Far East: invitation
for the Netherlands to participate in the discussion
of Pacific and Far Eastern questions.
Tentative suggestions as to the agenda (limitation
of armaments) proposed by the United States.
Turkey (Commission des Détroits; navigation
dues): notes on No. 27; no determination of future
attitude before all replies had been received.
Yap cables and Washington Conference: retention
of Netherlands rights; statement to this effect
in response to a report from the Chung Mei
news agency in the Chinese press.
Washington Conference: notes on a conversation
between Beelaerts and Pustoshkin on the importance
of the matter to Russia; request by the latter
that in Far Eastern questions of interest to
Russia only the pre-revolution status be considered.
Yap cables and Washington Conference: plans for
operating the Yap-Guam and Yap-Japan cables in
order to promote traffic between Japan and US
(provisional arrangement); request for further instructions
regarding the Netherlands’ share in
these cables based on the 1904 agreement with
Germany.
Spain (trade): influence to be brought to bear in
the committee referred to in No. 17 by Dutch
exporters in collaboration with interested parties
in Spain; assignment of Engelbrecht to Madrid as
temporary commercial attaché charged with furnishing,
on request, information to Spanish committee;
consultation between Van Karnebeek-
42 8.10.1921
From Oudendijk
(Peking)
43
44
45
46
47
8.10.1921
From Central Industrial
Federation to Van
IJsselsteyn
8.10.1921
From De Kat Angelino
11.10.1921
Minutes of Council of
Ministers
13.10.1921
To HM the Queen
17.10.1921
To HM the Queen
Van IJsselsteyn concerning Note to be presented;
somewhat misleading Spanish representation of
Dutch views on traffic regulations.
Washington Conference (China) : composition of
Chinese delegation; their intention of making use
of American-Japanese differences to strengthen
their own international position (largely at the
expense of foreign powers); likely Chinese demands
for return of Chinese territory, national
status of Chinese abroad, abolition of concessions
and international Boxer indemnity and introduction
of an autonomous Chinese tariff.
The Netherlands (trade policy): Objections to
the levying of export duties as favoured by the
Netherlands East Indies authorities.
Washington Conference: Chinese questions divided
into ten categories, viz: Territorial inviolability
of China and retrocession of territory.
Open door.
Notification to China of all treaties and agreements
relating to her that had been concluded
between the powers.
Chinese resistance - directed against Japan - to
secret agreements.
Chinese political, jurisdictional and administrative
freedom of action. Terms to be set for all Chinese
obligations of unlimited duration (the customs
tariff being the main issue).
Interpretation of special rights and privileges accorded
the (Chinese) grantor (relating, inter alia,
to concessions, settlements and likin levy).
Neutrality and recognition of ,,China’s lordship
of the soil”, including that of ceded territories.
The Netherlands and the above questions.
Ditto: participation only in so far as relations in
the Far East were concerned; designation of Van
Karnebeek, Van Limburg Stirum, Beelaerts van
Blokland and Van der Houven van Oordt as
members of Netherlands delegation.
Ditto: request for authorisation to accept the
non-solicited invitation to attend.
Ditto: request for authorisation to designate the
delegates mentioned in No. 45, with the addition
of Moresco, and omission of Van Limburg Stirum
and Van der Houven van Oordt. Doude van
Troostwijk, ambassador on call (Chef du Cabinet
to the Min. of Foreign Affairs 1914-1919), to
48 17.10.1921
From De Beaufort
(Washington)
49
50
51
18.10.1921
From Van Vredenburch
(Brussels)
21.10.1921
Minutes of Council of
Ministers
22.10.1921
To Sir Eric Drummond
(Geneva)
52 25.10.1921
To Van Nispen tot
Sevenaer (Vatican)
52A
53
Annex
2.11.1921
From Sweerts de Landas
Wyborgh (Stockholm )
54 4.11.1921
To De Graaff
deputize at the Ministry for Beelaerts van Blokland
(Head of Political Affairs).
Mexico (petroleum legislation): extension of
Netherlands-Mexican economic relations through
regular line services by Holland-America Line and
Koninklijke Hollandse Lloyd; emigration of Dutch
farmers; institution of joint commission to assess
losses sustained by Dutch nationals in Mexico;
mutual diplomatic representation; honours for
Mexican officials on the occasion of Mexico’s
centenary celebrations.
Belgian question: party relations in Belgium;
Forthomme as candidate for Foreign Affairs (cf.
Nos. 15 and 23) for the Liberals; incompatibility
of his views with those of the Netherlands; anti-
Dutch tendencies of Devèze.
Purchase of Netherlands Embassy buildings: supplementary
estimate (Fl. 300,000) for the deficit
in the funds made available by the Netherlands
Overseas Trust for establishments in Athens,
Berne, Brussels, Paris and Washington.
League of Nations matters, notably its relations
with the Permanent Court and the Carnegie Foundation;
unsatisfactory nature of the League’s decision
regarding Upper Silesia (its repercussions
on the political and economic situation in Germany).
Vatican (diplomatic service): likelihood of
deterioration in relations with the Vatican
if attention were drawn to the position of the
Pope as the Head of Christianity; recall of
Mgr. Vicentini to be recommended owing to his
lack of circumspection and political insight (Cf.
Nos. 26 and 32).
Address by Vicentini on presenting his credentials
on 19 September.
Russia (aid): treatment of Nansen on his last visit
to Moscow “as overripe fruit in a fruit shop”;
other “frank” statements about him; special number
of Swedish communist journal Politiken on
the occasion of the fourth anniversary of the
Russian revolution.
Settlement of American coloured people in Surinam:
objections to Governor Van Heemstra’s proposals
for promoting emigration; inflow of “large
masses of negroes who would retain their American
citizenship” would constitute potential cause
of friction with the United States.
54A 20.8.1921
Annex
54B 17.6.1921
Annex
55
56
57
7.11.1921
From Van Nispen tot
Sevenaer (Vatican)
14.11.1921
From Ruys de Beerenbrouck
to Van Nispen
tot Sevenaer (Vatican)
14.11.1921
Minutes of Trade
Treaties Committee
58 15.11.1921
Paper on the subject of emigration promotion by
Van Heemstra for W.S. Burghardt du Bois, leader
of the Association for the Advancement of
Coloured People set up in New York (“where
the majority of Surinam’s inhabitants are descendants
of the negroes, there is no objection to
extending this part of the population”).
Letter from Van Steyn Parvé, Consul-General in
New York, to Beaufort concerning the activities
of the association; non-revolutionary conceptions
of the American negroes; talks with Burghardt du
Bois about the realisation of Van Heemstra’s plan;
United States as a reservoir of elements welcome
in Surinam; publicity in the Association’s journal
The Crisis.
Vatican (diplomatic service): discussions with
Secretary of State Gasparri on the contents of
No. 52; Vatican had rebuked Mgr. Vicentini; the
suggestion that he be recalled not favoured there.
Ditto: appreciation of his handling of the Vicentini
affair (Cf. No. 55); preference in the Netherlands
for “promoting Vicentini out of the way”.
The Netherlands (trade policy for various countries):
Spain (Cf. No. 41); Czechoslovakia (most-favoured
nation clause and Czech reservations
concerning tariff facilities for Austria and Hungary
and Czech quotas to be fixed by special
agreement); Italy (protection of domestic electric
light bulb industry and preference for solution of
difficulties as and when they arise); Finland (request
for most-favoured nation treatment by the
Netherlands with, perforce, acceptance of the exclusion
of the major reductions granted under
agreement to France in the surcharge on duties
on imports into Finland); Romania (importance
of a new trade agreement with that country on
expiry of the old one next April; special position
of Austria in that country); Bulgaria (doubling of
import duties upon termination of all its trade
agreements; question of the applicability of the
most-favoured nation clause to special agreements
under which that country granted special
reciprocity; no Bulgarian protectionism because
there was no domestic industry; Netherlands
preference for most-favoured nation treatment
with shorter term of notice).
Washington Conference (Yap cables): Nether-
From Ruys de Beerenbrouck
to Van Karnebeek
(Washington)
59 16.11.1921
From Engelbrecht
(Madrid) to Nederbragt
60 17.11.1921
From Van Nispen tot
Sevenaer (Vatican)
61 18.11.1921
Van Karnebeek’s
diary
62 20.11.1921
From De Romrêe de
Vichenet
63 21.11.1921
From Ruys de Beerenbrouck
to Van Nispen
tot Sevenaer (Vatican)
64 21.11.1921
From Ruys de Beerenbrouck
to Oudendijk
(Peking)
lands banking institutions and liquidation of
DNTG; Le Roy pessimistic about satisfying creditors;
his preference for stringent government control
in the establishment of a new body; withdrawal
of credits to the amount of some
Fl. 700,000.
Spain (commerce): report on his experiences in
Madrid; talk with Palacios (need for high import
duties in Spain connected with the war in Morocco,
but no inclination to start a tariff war) and
discussion with Lopez Lago on the “futility of
negotiations so long as the new Spanish tariff has
not yet been fixed”; costly campaign by Philips
in Spanish press against proposed high duties
there; intransigence of Spanish government expected
by Engelbrecht.
Vatican (diplomatic service): De Gasparri informed
of the heated discussions at Lower House
committee meetings on the difficulty of retaining
Mgr. Vicentini in his present post.
Washington Conference: talk with Briand about
expected French opposition to attempts by British
Admiralty to prohibit the submarine; China’s
capacity for reform and consolidation as basic
factor in problem of the Pacific; limitation of
large battleships inspired by their costliness and
doubtful value in the light of modern means of attack;
relevant discussion with Balfour and information
given by Van Karnebeek on the Netherlands
fleet plan (based on the importance of the
Netherlands colonial possessions as a link in the
British Calcutta-Melbourne line); point 10 of the
Chinese proposals at the conference (Cf. No. 44)
in connection with the League of Nations and
“ententes régionales”.
Belgium (dredging operations West Scheldt): reply
to Part II, No. 441; objections to existing restrictions
on work at night; reduction of the hazard
to navigation by improvement of river lighting;
Belgian desiderata.
Vatican (diplomatic service): visit by Internuntio
to inform him of his transfer to Constantinople;
Dutch desire that his move not be delayed.
Washington Conference (Yap cables): Japanese
claims to the Nafa cable as a link with the Liukiu
Islands at variance with Chinese undertakings
given to the Northern Extension Company; requi-
65
65A
66
67
68
21.11.1921
From Ruys de Beerenbrouck
to Van Karnebeek
(Washington)
21.10.1921
Annex
21.11.1921
From Van Karnebeek
(Washington) to Ruys
de Beerenbrouck
21.11.1921
Minutes of the Council
of Ministers
22.11.1921
Minutes of Council of
Ministers
68A 22.11.1921
Annex
site co-operation of China and the Netherlands in
allocation of the overland link from Shanghai to
Wusung, and of China alone in allocation of the
submarine cable from Wusung to the limit of Chinese
territorial waters.
Ditto: further reference to the Nafa-Shanghai
cable; Chinese protests and upholding of all her
rights by the Netherlands.
Note from Leroy concerning Japanese attempts
to anticipate decision on DNTG cables; cession
of Yap-Shanghai cable to Japan in exchange for
other link for reconstituted DNTG, either via Manilla
or via Guam, outside Japanese control.
Ditto (naval disarmament): objections from viewpoint
of commensurate freedom of weaker
powers to counter British attempts (as in 1907)
to ban the submarine; information on French
and Italian attitude gained in conversations with
Briand and Schanzer (British plans doomed to
failure); reference by Van Karnebeek to his explanations
to Balfour (Cf. No. 51); the latter’s
cautious behaviour, Van Karnebeek’s opinion in
retrospect that the Netherlands delegation should
have been larger.
Trade statistics (international): authorisation for
the introduction of a relevant Bill in Parliament.
Washington Conference (naval disarmament):
luncheon with Root at Metropolitan Club; his assessment
of the chances “to slow Japan down”
during China’s attempts “to work out its own salvation”
(Japan hampered by economic difficulties);
US objections to ceding Japan territories
north of the Amur.
Letter from Van Karnebeek to Ruys de Beerenbrouck
with reference to the contents of No. 66;
account of his discussion with Lee concerning
the submarine question (British opposition to
this weapon rooted in its threat to merchant
shipping, in view of Britain’s dependence thereon
for supplies from overseas); Britain would not
tolerate the conquest of Netherlands colonies by
a third power and Lee’s appreciation of the
Netherlands’ wish - if only for reasons of national
dignity - to meet unaided, as far as possible,
the demands imposed by its striving for self-preservation
and the maintenance of international
69 23.11.1921
From Ruys de Beerenbrouck
to Van Karnebeek
(Washington)
70 23.11.1921
From LeRoy to Six,
member of Council
of State
71 23.11.1921
From Van Nispen tot
Sevenaer (Vatican)
72 23.11.1921
Van Karnebeek’s
diary
73 24.11.1921
From Economic
Affairs Dept.
74 24.11.1921
obligations; his recognition of the need not to
allow Japan to force its way to the south as a result
of wrong policies towards China and Siberia;
less reassuring views on this matter in the United
States.
Ditto (Yap cables): confidential information received
from De Graaff concerning LeRoy’s success
in negotiations with Denmark, Britain and
the United States regarding reconstruction of the
DNTG as a new company; consequent need for
more certainty in regard to the cable allocations:
risk of bankruptcy of DNTG (Cf. No. 58) and
objections to allowing LeRoy to proceed to
Washington in those circumstances; information
furnished by him by telegraph should suffice.
Ditto (Yap cables): policy to be pursued by the
Netherlands; no arrangements with Japan (relating,
inter alia, to the radio link via Nauen); objections
to Allied “projet de convention et de règlement”,
which implied recognition of the supremacy
of a particular group of powers. Backgrounds
to anti-monopolism of America: US control
by reducing importance of Danish and British
cable companies in China; Netherlands cooperation
with those companies; agreement with the
Chinese standpoint regarding the sovereign right
to grant landing rights within territorial waters;
desirability of Netherlands delegates in Washington
confining themselves to the main lines and
remaining non-committal in the implementation;
placing the Menado-Yap cable at the disposal of
the Netherlands government.
Vatican (diplomatic service): further to No. 63
(transfer of internuncio to Constantinople),
suggesting that the transfer be published in the
Osservatore Romano; objections to this at that
time on De Gasparri’s part.
Washington Conference: little driving force in
Hughes’ leadership; Schanzer on friction in Committee
of Five concerning land armaments.
Spain (trade): memorandum concerning the very
high Spanish duties: need for reduction, if not
return to former level, so as to avoid growing
resistance in Netherlands business circles to
continuation of existing benevolent trade policy
on the part of this country; Dutch plans for
raising excise duty on wine.
Washington Conference: further discussion with
Van Karnebeek’s
diary
75 25.11.1921
From Ruys de Beerenbrouck
to Netherlands
delegation, Washington
(Van Karnebeek)
76 26.11.1921
Minutes of Economic
Policy Committee
76A
Annex
77 26.11.1921
Van Karnebeek’s
diary
, 78 27.11.1921
From delegation in
Washington (Van
Karnebeek) to Foreign
Schanzer regarding land armaments (Cf. No. 72)
and Franco-Italian clash on this issue; regulation
of war practices by the five big powers outside
their competence; Schanzer’s fear of a disarmaments
conference.
Ditto (Yap cables): Netherlands-Chinese agreement
on landing rights on Menado (sovereign
rights not unilaterally available to third parties);
Chinese obligation to grant the Netherlands rights
for a link via China and Siberia, to be requested
on the establishment of the Menado-Manilla link.
Netherlands foreign trade policy: discussion of
measures to overcome the economic difficulties
facing trade and industry; need for and nature of
temporary import restrictions to counter abnormal
foreign exchange rates (prices of imports too
low and prices of Netherlands exports too high);
fall in shipbuilding orders; unemployment among
men normally working outside the national frontiers;
domestic consequences of foreign exchange,
competition and wage level problems.
General survey of state of Netherlands industry:
difficulties arising from foreign competition in
imports and exports; causes thereof (difference
between coal prices at home and abroad, wage
differentials and comparison between the Netherlands,
Belgium and Germany, differences in working
hours, differences in raw materials prices
(dumping by foreign countries), differences in
freight rates, harbour dues and taxation, low
foreign exchange rates, increased import duties
or other impediments abroad, specified by commodity)
; “other circumstances”.
Washington Conference: low prestige of Chinese
delegation; Harding’s statement at a White House
press conference concerning the possible extension
of the conference to other nations (including
Germany) and the establishment of an Association
of States instead of the League, in compliance
with the wishes of the Republican Party;
efforts to shift the lead in international politics
back to America; support for Harding so as to get
him through the expected difficulties and thereby
closer to the League.
Ditto (America and the League of Nations): discussion
of the problem of international organisation;
America’s efforts to reattain the dominant
position lost by the conflict between Wilson and
Ministry
79 29.11.1921
Van Karnebeek’s
diary
80 1.12.1921
From Van Panhuys
(Berne)
81 1.12.1921
From Van Asbeck
(Warsaw)
81A 1.12.1921
Annex
From Skirmunt
(Warsaw) to Van Asbeck
the Senate; belief that this could bring the U.S.
into the League; importance of this to the Netherlands
in connection with the Permanent Court.
Ditto: final decision on Far Eastern question
through U.S. disarmament plan (not imposing
America’s will on Japan, and Japanese freedom
of action vis-à-vis China, without risk of conflict);
discussion with Fletcher on Djambi affair (American
hope that this had not left any ill-feeling in
the Netherlands and the writer’s expression of
the hope that the position of Phillips had not sustained
too much damage); information passed on
from Repington to Van Karnebeek at press party
about exchange of views with Harding and Hughes
on the convening of a major conference in The
Hague (entry of United States to League of Nations,
and rehabilitation of Germany); China at
that day’s committee meeting on withdrawal of
foreign troops and foreign police; moodiness, as
usual, of Viviani.
Serbia: diplomatic relations following upon settlement
of Rapaport question (Cf. No. 1); discussion
with Yovanovitch, Serbian ambassador in Berne,
regarding restoration of diplomatic relations by
exchange of declarations to the effect that both
parties agreed to surmount certain difficulties;
threat of failure would attend Dutch demand for
some satisfaction; resumption of relations without
a Serbian legation in The Hague, whilst maintaining
Netherlands legation in Belgrado pro
forma.
Clothing credit, Poland: discussion of the proposal
contained in No. 81A; further details about
the discussion with Michalski and Kowalski and
about the meaning of the Polish proposal; suggestion
by Van Asbeck that the proposed settlement
of Fl. 3,560,000 be accepted in principle, pending
negotiations for a further concession, and
that the balance of the debt of Fl. 14,240,000 be
included in the relief credit.
Clothing credit, Poland: discussion between Polish
Envoy in The Hague, Wierusz-Kowalski, Van Asbeck
and Michalski on Polish proposals relating
to this matter, and analysis of the agreement concluded
in The Hague on 16 June 1919; request
that debt be reduced by one-fifth, or
Fl. 3,560,000, of which Fl. 2,000,000 would be
repaid in instalments (guaranteed by priority
82
83
84
85
86
87
88
89
2.12.1921
From Ruys de Beerenbrouck
to Van Karnebeek
(Washington)
2.12.1921
From Ruys de Beerenbrouck
to Van Nispen
tot Sevenaer (Vatican)
3.12.1921
From Ruys de Beerenbrouck
to Van Karnebeek
(Washington)
3.12.1921
From Van Karnebeek
(Washington) to Ruys
de Beerenbrouck
3.12.1921
Van Karnebeek
(Washington) to Ruys
de Beerenbrouck
5.12.1921
Departmental
Memorandum
6.12.1921
From Ruys de Beerenbrouck
to Van Karnebeek
(Washington)
6.12.1921
From Van Karnebeek’s
diary
rights to the mortgage taken out against the Polish
salt mines.
Washington Conference (China): time not ripe
for autonomous tariff (cable from Oudendijk, Peking);
raising tariff to 5 per cent (Chinese Foreign
Minister), 12½ per cent, without abolition of
likin (Washington legation) or 7½ per cent and a
further 5 per cent after complete abolition of
likin ad valorem (Oudendijk).
Vatican (diplomatic service): discussion with Vicentini
regarding his forthcoming transfer (Cf.
No. 71) featured in Dutch press through De Tijd
(newspaper); appreciation of co-operation of the Cardinal
State Secretary; standards to be set for new
Internuncio (viz. avoidance of conduct such as
that mentioned in Nos. 26 and 32).
Washington Conference: no Dutch agreement to
tariff increase in China before payment of debts
(treasury bonds and debentures of Chinese loans
in Dutch hands); no reply to communications
from the Netherlands to China regarding the
DNTG.
Ditto (Yap cables): American proposal to Japan
to allocate the Yap-Menado cable to the Netherlands.
Ditto: further to No. 85; toning down of statement
by American Secretary of State.
Spitsbergen (mining regulations): Art. 6 of convention
compared with the rights of nations laid
down in earlier agreements; amendments to the
Mining Act deemed necessary by the Netherlands.
Washington Conference (Chinese customs tariff):
arrival of further cable from Oudendijk (received
after that referred to in No. 82) regarding recognition
of Chinese sovereignty and settlement of
Chinese debts; his statement to Chinese government
that the desire for indemnity from Germany
did not entitle China to seize property of friendly
neutrals; China’s bad faith to be discussed with
USA (interested party through its participation
in loan for Hankow-Canton railway).
Washington Conference: discussion with Schanzer
on a broader-based agreement in replacement of
the British-Japanese alliance of 1902; position of
Italy and the Netherlands in regard to an agree-
90 8.12.1921
To Ridder van Rappard
(Copenhagen)
91 9.12.1921
From Van Karnebeek
(Washington) to Ruys
de Beerenbrouck and
Oudendijk (Peking)
92 9.12.1921
From Van Nispen tot
Sevenaer (Vatican)
93 9.12.1921
Van Karnebeek’s
diary
94 9.12.1921
Van Karnebeek
(Washington) to Hughes
95 10.12.1921
Van Karnebeek
ment between Pacific states; institution of a
special committee for the question of troops; utterances
by Harding about an association of
powers with a view to a specific mode of co-operation.
Spitsbergen Convention: text of amendments to
Norwegian Mining Act proposed by the Netherlands.
Washington Conference (China): reply to Nos.
82, 84 and 88; rights of Dutch holders of Chinese
securities; American government’s view that non-recognition
thereof would contribute little to restoration
of shaken Chinese credit; for the rest,
American debenture holding in Hu Kuang railway
loan of minor importance.
Belgian question: information given him by a
Belgian prelate to the effect that Flemings and
many Roman Catholic Wallons wished to loosen
their ties with France in order to conclude an
economic Union - later, possibly, a military alliance
- with the Netherlands. Observations by
the Envoy concerning the difficulties entailed in
the conclusion of economic agreements, and the
lack of enthusiasm in the Netherlands for military
commitments.
Washington Conference (Pacific): Incorporation
in the quadruple alliance replacing the British-Japanese
treaty of 1902 (Cf. No. 89) of the various
resolutions to be adopted by the Nine powers.
Pressure on Hughes to insert in the General Arrangement
for the Pacific a formal recognition of
the territorial status quo and to announce this in
the statements he was expected to make at the
first plenary session, “otherwise the impression
might possibly at first prevail that Holland is to
be the only power with insular possessions in the
Far East whose territorial rights will find no explicit
recognition at the Washington Conference”;
co-operation in this promised by Hughes, and his
views on the new Entente in replacement of the
British-Japanese treaty; Van Karnebeek’s intention
not to show irritation at the fact that the
Netherlands had been excluded up to that point.
Ditto (Cf. No. 93): written expression of appreciation
of undertaking given by Hughes.
Ditto (General Agreement for the Pacific): talk
between Van Karnebeek and Hughes on the scope
(Washington) to Ruys
de Beerenbrouck
96 10.12.1921
From Michiels van
Verduynen (Prague)
97 12.12.1921
From Ruys de Beerenbrouck
to Van Karnebeek
(Washington)
98 12.12.1921
Van Karnebeek’s
diary
99 13.12.1921
From Van Karnebeek
(Washington)
100 14.12.1921
From LeRoy to
Nederbragt
of the agreement and the ultimate admittance of
France to this “instrument de paix”; adequate
guarantees for the Netherlands provided by general
agreement of all the powers concerned
“juxtaposée à quadruple Entente”. Netherlands
reluctance to join the Entente rooted in fear of becoming
involved in others’ conflicts and of the
concomitant possibility of their interfering in
Netherlands affairs.
Central Europe: report on conference of Austrian
succession states in Porto Rosa and Rome: exchange
rates, obstacles to free trade and imperfections
in the tariff policies of the nations concerned.
Washington Conference: comparison of the General
Agreement (Pacific Affairs) with earlier
Mediterranean and North Sea declarations.
Ditto (General Agreement and Quadruple Entente)
: Netherlands distrust of Japan proceeding
from the guaranteeing of the insular possessions
of America, Britain, France and Japan only could
lead to an increase in armaments in this country;
Netherlands objections to the role of “hanger-on”
and to the aggression clause in the Quadruple Entente;
desire to seek a solution in a general settlement
between the Nine States providing mutual
guarantees of territorial rights; Shantung and
Manchuria problem areas.
Ditto: French proposal that wireless stations in
China be run in cooperation under Chinese control;
recommendations on three principles.
Ditto: Chinese opposition to “pénétration pacifique”
in their country by the French Compagnie
Générale de Télégraphie sans Fil; contract concluded
by that company with Telefunken, Marconi
and Radio Corp. of New York; Telefunken’s
monopoly position in Argentina through construction
of a large radio station; no recognition
of faits accomplis before meeting of World Congress
on Radio Telegraphy based on the London
Convention. Incompatibility of French plans with
the agreement between the Chinese Telegraph
Association, Eastern Extension and the Great
Northern Telegraph Company; writer’s wish to
remain uncommitted and to propose giving sympathetic
consideration to China’s plans for the
improvement of international radio traffic.
101 14.12.1921
Van Karnebeek’s
diary
102 14.12.1921
Ditto
103 14.12.1921
From Melvill van
Carnbee (Madrid)
104
105
106
16.12.1921
From Van Karnebeek
(Washington) to Ruys
de Beerenbrouck
17.12.1921
From Ruys de Beerenbrouck
to Van Karnebeek
(Washington)
17.12.1921
Van Karnebeek’s
diary
Washington Conference (Yap cables and general
agreement) : consultation with Root on possibility
of settlement before the writer’s return to the
Netherlands; early convocation of ex-allied
powers by Root for the purpose of allocating
Menado-Yap cable to the Netherlands and deferment
of negotiations on operating rights; agreement
between Root and Hughes on general Far
East treaty on the basis of Dutch desiderata.
Ditto (general agreement and Quadruple Entente):
calls paid on Shidehara and Hanihara; advisability
of avoiding disturbing restoration of
confidence between Japan and the Netherlands
(Cf. No. 98); displeasure and apprehension in the
Netherlands East Indies to be expected in the
event of the Netherlands remaining outside the
security statute of the Quadruple Entente (despite
Art. 10 of that agreement); was Shidehara
seeking grounds for withholding his cooperation?
Should China also participate? Difficulties in
finding a form for an arrangement.
Spain (trade policy): advice, after discussion with
Spanish Minister of State, that the Netherlands
agree to replacement of “Tarif du 21 Mai” by the
not yet definitively fixed “Tarif Espagnol”, in
view of the provisional nature of the latter and
Spain’s willingness to enter into negotiations regarding
the proposals formulated by Engelbrecht;
in the event of non-acceptance of this proposal,
early cancellation of the existing arrangements
by Spain could be expected.
Washington Conference (naval arms limitation):
rejection by the other four powers of the French
tonnage figures (designed to double the pre-war
fleet); significance of dominance of any one
power or combination of powers in the Mediterranean
for the Netherlands’ lines of communication
with the East Indies.
Ditto (Yap cables): reply to No. 99 in accordance
with the recommendations contained in No. 100.
Ditto (Yap cables): further to No. 101; discussion
with Hughes on a provision to be included in the
agreement with Japan whereby that country
would guarantee the same rights as those granted by the
U.S., supplementing the allocation agreement
with one between Japan, the Netherlands and the
107 17.12.1921
From Van Lamping
(Antwerp) to Huyssen
Van Kattendijke
(Brussels)
108 17.12.1921
From Van Nispen tot
Sevenaer (Vatican)
109 18.12.1921
From Washington
Delegation (Van Karnebeek)
to Ruys de
Beerenbrouck
110 19.12.1921
From Ruys de Beerenbrouck
to Ridder Van
Rappard (Copenhagen)
111 19.12.1921
U.S. on the use and operation of the cables;
Hughes anticipated no difficulties; limitation of
arms (submarines question); Britain’s intention
to make an issue of their abolition (Cf. No. 68A);
no recognition of settlement of this matter by
the Big Five alone; rules of warfare not to be regarded
as a prerogative of those powers; conference
at Balfour’s with Borden on League of
Nations; Hughes’ irritation with attitude of the
French (De Bon’s statements in the commission).
Belgium (Dutch and Belgian pilots on the Scheldt):
“Haven” (port) as complex of maritime facilities
(Cf. No. 6); “dok” (dock) as enclosed area of
water, serving as berth for ships, and “reede”
(roads) as mooring in stream, etc.
Vatican (diplomatic service): answer to No. 83
and discussion on the subject with Cardinal State
Secretary; his view was that the Vicentini affair
was grossly exaggerated.
Washington Conference (limitation of arms/submarine
question): discussion with Hughes on total
abolition of this weapon as demanded by Britain
(Cf. No. 106); request for standpoint of Netherlands
government towards such capitulation of
the small powers; possible need for the Netherlands
to issue a statement on the matter, even
though it was not a participant in the naval discussions;
meetings expected to continue after 4
January to allow for discussion of demands made
by France.
Spitsbergen (mining regulations): instructions to
give sympathetic consideration to Norwegian objections
to Netherlands amendments; non-imposition
of the system of concession application on
those with acquired rights; Netherlands proposals
with explanatory note to be forwarded to other
nations only after evident lack of Norwegian responsiveness;
observations on preliminary report
of Lower House on Spitsbergen question; French
concurrence with Norwegian mining regulations;
Swedes inclined to take Norwegians’ views into
account; comments on a communication from
the Netherlands government (number of daily
services and exploration centres, non-application
of claims system for sites occupied there); Norwegian
Mining Regulations and Art. 128 of
Netherlands East Indies Mining Order.
Central Europe: progress of conference of Aus-
From Van Weede
(Vienna)
112 19.12.1921
Van Karnebeek’s
diary
112A 19.12.1921
Annex
113 20.12.1921
From Ruys de Beerenbrouck
to Ridder Van
Rappard (Christiania)
114 20.12.1921
Van Karnebeek’s
diary
trian succession states in Porto Rosa (Cf. No. 96).
Washington Conference (naval question): meeting
between Beelaerts and Sarraut; French sensitivity
and tendency to assertiveness; lack of tact
on Hughes’ part (belated involvement of the
French - as a favour - in the Entente on replacement
of the British-Japanese alliance of 1902);
reports by U.S. journalists on French agitation
(inter alia at their demands being rejected by
Lord Riddell at a press conference, stagnation of
the conference and the shifting of its centre of
gravity to London); talks between Briand and
Lloyd George; anti-British mood in the U.S.;
chance of Quadruple Entente stranding in the
U.S. senate; suspicion aroused by the Netherlands
not being party to these agreements; writer’s
views on this; dangers attaching to a conference
of this kind.
Notes on the question put by an unknown person
to Hughes as to why the Netherlands had not
been included in the Quadruple Entente, and the
latter’s expectation that the conference would
end with a general agreement in which the
Netherlands would also be involved.
Spitsbergen (mining regulations): further to No.
110, elaboration of the principles to be left to
the Norwegian government after main lines had
been laid down jointly by powers concerned (e.g.
in the manner of Art. 8 of the convention).
Washington Conference (naval question, submarines):
discussion with Hughes on position of
States interested but not participating in the
naval conference relative to the expected British
proposal for abolition of this weapon; agreement
reached at conference on restricting consultation
to the five principal allies (Cf. No. 106); outlawing
of the submarine to be viewed against the
background of “droit de visite”, the law of booty
and principle of contraband with a voice for all
states concerned; allusions to possible statement
to be issued by the Netherlands (Cf. No. 109);
standpoints of Italy, France and, presumably,
Japan in this matter different from Britain’s; Van
Karnebeek’s urging of Hughes to act as guardian
of the legal rights of all; discussion with Sarraut
on ship ratio of 1:Y; Hughes again questioned
about non-inclusion of the Netherlands in the
Quadruple Entente (Cf. Nos. 95, 98, 102, 112
and 112A); treatment as ,,quantité négligeable”,
and state secretary’s explanations of what had
taken place in regard to replacement of the
British-Japanese treaty of 1902; the Netherlands,
as a non-aggressive power, was of insufficient importance
to Japan for inclusion in the convention;
territorial restriction of the latter to the islands.
115 20.12.1921
Ditto
Ditto (submarine question): question asked at
press conference about Netherlands standpoint,
and statement to the effect that, as in the matter
of battleships, any limitations the powers might
wish to impose upon themselves as regards the
ratio of submarines would be welcomed, subject
to reservations in regard to the raising of the
question of the use of the submarine as a legitimate
weapon.
116 20.12.1921
From Michiels van
Verduynen (Prague)
Czechoslovakia (trade): report on the visit of the
Polish minister Skirmunt to Prague, and that of
the Austrian Federal Chancellor; consultation between
Schober and Masaryk (demolition of the
,,Chinese Wall” between the Central European
States); rapprochement between Poland and Austria
- born of economic necessity - as first milestone
on the right road.
117 21.12.1921
From Ridder van Rappard (Christiania)
Spitsbergen (mining regulations): Norwegian opposition to amendments proposed by the Netherlands.

118 21.12.1921
From Federation of Committees for Aid to People in Distress in Russia
Aid for Russia: goods to the value of about Fls. 180,000 presented by the Netherlands government shipped to Riga and their distribution via Nansen; need to supplement them with other goods (such as fats); urging of further government aid as being in the national interest with a view to the reconstruction of Russia as a factor in future world trade and the reopening of Russia as a market for the Netherlands (inter alia, as a means of ending the crisis in trade and industry and of reducing unemployment in the Netherlands); reference to the aid rendered by Germany, Britain, France, the U.S. and Switzerland.

119 22.12.1921
From Ruys de Beerenbrouck to Washington Delegation
Washington Conference (limitation of arms, submarine question): further to No. 110: banning of submarines not permissible as this would deprive the small nations of a defensive weapon they could afford; objection to public declaration, however, in view of domestic policy.
120 22.12.1921
Ditto (maintenance of territorial status quo in the Pacific): presentation of annex.

120A From Delegation to Washington Conference (Van Karnebeek) to Hanihara
Annex

121 22.12.1921
From Ruys de Beerenbrouck to Ridder van Rappard (Christiania)

122 24.12.1921
From Ruys de Beerenbrouck to Van IJsselsteyn

122* 25.12.1921
Van Karnebeek’s diary
Draft convention as mentioned above (Resolution
of the United States of America, the British Empire,
China, France, Japan, the Netherlands and
Portugal to maintain and preserve intact their
sovereign rights to their territories in these regions).
Spitsbergen (mining regulations): exploration
centres; social and labour legislation; envoy to
confine himself to verbal consultation with Norwegian
government; claims of N.V. Netherlands
Spitsbergen Company to Green Harbour, Colen
Bay, and another area of far greater (22,000 sq
km) extent.
Ditto: on the analogy of No. 121, intention of
N.V. Netherlands Spitsbergen Company to take
over Ise Fjord Kul Company (territory the size
of the Netherlands); objections to non-enforcement
of Norwegian Mining Act in such large
areas; possibility of forfeiture of rights through
non-exploitation (limits to applicability of provisions
of Mining Act to rights acquired) and
possible extension of Art. 35 of the draft mining
regulations; difficulty in finding a formulation
guaranteeing Netherlands interested parties that
they would not have to operate under too onerous
conditions; Norwegian regret at not having
consulted the Netherlands beforehand on the
Mining Bill; British approval of the Bill.
Washington Conference (submarines): Hughes’
compromise (impressed by Balfour’s vigorous action)
on the basis of 60,000 tons; acceptance implied
proportionally small margin for the Netherlands;
exercise of power on ,,our side of the
Pacific” left to Japan by the United States;
mutual honouring of agreements by the four
powers in respect of each other’s island territories
without accepting obligations towards the
Netherlands, whilst curtailing Dutch means of
defence (submarine); this provided proof of
danger of conferences convened by a small number
of dominant powers (Cf. No. 114); Britain’s
viewpoint that her interests coincided with those
of others; congruence - up to a point - of
British and Netherlands interests, but less certainty of British help in the East than in the North Sea.

123 25.12.1921
From Ridder van Rappard (Copenhagen)

124 25.12.1921
To De Graaff

124A 25.12.1921
Annex

125 26.12.1921
Van Karnebeek’s diary

126 27.12.1921
Ditto
Spitsbergen (mining regulations): further to 110
(question in Lower House); completion of memorandum
to be handed to Norwegian government
(Art. 33 and chapter 6); conviction that Norway
would persist in its attitude towards the principle
embodied in para. 35.
Hadramaut: reply to Vol. II, No. 383: no earlier
opportunity to make the statement referred to
there on the political nature of Sajjids and Sheiks;
the matter to be left in abeyance for the present.
Note for Snouck Hurgronje (,,Colonies should
know that we have done nothing”); observation
- with reference to Part II No. 177 - on Department’s
somewhat unfortunate handling of the
matter thus far.
Washington Conference: general discussion on
state of affairs: dragging on of talks on naval arms
meant deferment of discussion of Far Eastern
question until after New Year; Hughes’ concurrent
chairmanship of both parts of the conference
was a mistake (delays through overburdening
of a man who also had responsibilities as Secretary
of State); probable ending of submarine
question in deadlock; for the rest, Britain would
emerge from the conference fairly advantageously;
Japan ditto, thanks to America’s abdication as
a military power in the Western Pacific; reflections
on what the different nations had striven
after and achieved, with short sketches of Balfour
and Schanzer; self-righteousness of United
States and resistance put up by France to American
dictatorship.
Ditto: discussion (in company of Beelaerts) with
Root on status quo declaration before departure
for the Netherlands; difficulties arising from relations
between Japan and China; China’s exclusion
from status quo declaration and substitution
of statement of policy concerning China
(,,our declaration and amplification thereof”);
little objection on Root’s part to China’s refusal
to undertake any obligation to respect other
countries’ territory (little real importance to be
attached to China’s Washington delegation); need
for a clause providing for consultation in the
event of a threat to territorial rights (arrangement
acceptable alongside four-power pact); his objections to identical arrangement which would confer upon the subsequent statement the nature of an addendum devised as an afterthought ,,as a result of which its prestige would suffer”; draft text of a declaration emanating from the discussion (annex).

126A 27.12.1921
Annex

127 27.12.1921
From Ruys de Beerenbrouck to Ridder van Rappard (Christiania)

128 28.12.1921
From Ridder van Rappard (Christiania) to Ruys de Beerenbrouck

129 29.12.1921
Van Karnebeek’s diary
Draft status quo declaration formulated by Van
Karnebeek and Root during discussion on 27 December
1921.
Spitsbergen (mining regulations): approval of No.
123; presentation of amplified memorandum to
Norway with a view to it being discussed à deux
prior to presentation to other participants in
Spitsbergen convention.
Ditto: reply to No. 127; unfortunate omission of
a passage concerning the mining regulations
which the Envoy had already used during the
consultations.
Washington Conference (general considerations
and status quo declaration): further talk with
Root on future moves and division of tasks between
the two of them (discussions between
Root and the French, and between Van Karnebeek
and Balfour and the Japanese); Root deplored
the deepening of the Franco-British controversy
and France refused to abandon a military
programme obviously directed against
Britain; his criticism of Hughes (,,has talked too
much with the British, too little with the French”);
discussion about Root’s proposals of 28 December
for regulating the use of submarines (Cf. Nos.
114-115); Van Karnebeek’s view that law in the
previous fifty years had been characterised by
participation of all sovereign states on basis of
equality of status, and Root’s accession clause
was thus incompatible with participation in the
democratic deliberations which had gained acceptance
in the community of states and offered
more scope than the mere acceptance of regulations;
Van Karnebeek’s opposition to the attempts
of the Big Powers to form together a
higher power in the international order (fear of
the emergence of a super state; Nos. 106 and
114); Root’s defence, based on the League’s rejection
of a recommendation by the Judicial
Committee of the Court of Justice regarding the
advisability of further conferences for the revision
and extension of international law; his persistence in the view that the accession clause in his resolutions of 28 December took into account the interests of other nations.

130 29.12.1921
From Ruys de Beerenbrouck to Ridder van Rappard (Christiania)

131 30.12.1921
Van Karnebeek’s diary

132 30.12.1921
From Ridder van Rappard (Christiania) to Ruys de Beerenbrouck

133 30.12.1921
From Ridder van Rappard (Christiania) to Ruys de Beerenbrouck

134 31.12.1921
From Kikkert
Spitsbergen (mining regulations): reply to No.
128, with authorisation to use first part of No.
121; three reasons why emphasis should not be
placed on the wide scope of Netherlands interests
beyond the claims to Green Harbour and Colenburg:
(1) formally, there was a Norwegian company
(shares held by N.V. Netherlands Spitsbergen
Company), (2) the claims were disputed and
in danger of not being recognised, and (3) investigation
of the claims made on behalf of the
Netherlands government had proved impossible.
Washington Conference (status quo declaration):
entries concerning farewell audience with Harding
prior to the writer’s departure for the Netherlands;
call on Balfour to present text of new
status quo declaration drawn up jointly with
Root; discussion with Balfour of Root’s comments
(No. 129) on the League’s rejection of the
recommendations of the Judicial Committee, and
Van Karnebeek’s arguments against his interpretation
(need to protect the League, which in 1920
felt itself too close to the war and was aware of
the drawbacks of its non-universality); Balfour’s
suggestion that the letter to be written by Van
Karnebeek to Root presented a suitable opportunity
to protest against the tendency of the
major powers ,,to reduce the smaller ones to the
status of adherents”.
Spitsbergen (mining regulations): report on the
implementation of No. 127; Norwegian promise
to study the memorandum before presenting it
to other powers, and Norwegian soundings as to
whether the Netherlands would be prepared to
send an expert to discuss technical details.
Ditto: further to No. 132: on reflection, proposal
to drop the first part of No. 121 as well
(Cf. No. 130); recommendations of line of
thought set out in No. 113.
Rhine navigation (lateral canal): report on the
question of the construction of such a canal
through the Alsace; previous history of the matter
(canalisation and flow control in the system
of a lateral canal) and submission to the Central
Commission for Rhine Navigation by France
during the sessions of 5-17 December 1921, in
accordance with Art. 30 of the Mannheim Treaty and the variously interpreted Art. 358 of the Treaty of Versailles; summary of Franco-Swiss divergencies in the sessions; Carlin-Jolles controversy deriving from Dutch failure to give strong support to the opposition to the French draft; the Netherlands against internationalisation of canal administration, or in any case preponderant influence of the Central Commission; mediating role of the Netherlands in seeking a solution to these difficulties and acceptance of a Dutch formula slightly modified by France; renewed attack by Italy on the maximum flow rate (and resultant reduced speed) and continued Swiss opposition to the entire project; wavering by the majority and deferment of final decision inter alia by France in an attempt to avoid probable defeat on a point closely related to the Treaty of Versailles; the guiding principle of the Netherlands delegation was that it was preferable to defer the final decision rather than risk being outvoted; analysis of the attitudes of the various delegations and their individual members.

135 1.1.1922
From Van Karnebeek (Washington) to Root

136 2.1.1922
Van Karnebeek’s diary
Washington Conference (status quo declaration
Far East, limitation of arms with special reference
to submarines): letter to Root in the spirit
of No. 129.
Ditto (status quo declaration Far East): discussion
Van Karnebeek-Hughes; contents of No. 129 unacceptable
to the latter because China and Russia
were ,,passed by” (U.S. opposition to Japan in
Siberia) and recognition of annexation of Korea
by Japan was implied; Netherlands fear of expansion
of Japan in the direction of the East Indian
archipelago (loss of American influence there
through non-fortification of the Philippines);
Netherlands inability (outside the Statute) to
contribute to the proposed arms limitation; presentation
of Annex by Hughes and Dutch objections
to its use as an Annex to a convention concluded
by third parties; further proposal by
Hughes (joint declaration by Britain, France,
Japan and the United States) and writer’s objections
to such an arrangement ,,concerning, yet
without us”; third proposal by Hughes (to include
the necessary points in a four-power declaration
meeting Italy’s claims to be one of the five
powers to which the Treaty of Versailles had
entrusted the islands placed under mandate); disparaging remarks by Van Karnebeek about that country’s striving after the position of a major power; consultation between Van Karnebeek and Beelaerts van Blokland: their rejection of the latter proposal and preference for a declaration to be presented by letter by the four major allies to the effect that the rights of the Netherlands in the Pacific would be respected.

136A Annex 1

136B Annex 2

137 3.1.1922
Van Karnebeek’s diary

138 3.1.1922
To H.M. the Queen

138A 3.1.1922
Annex 1

138B 3.1.1922
Annex 2

138* 3.1.1922
From Beucker Andreae
Malkin’s Draft (draft of four-power treaty guaranteeing
the rights of the Netherlands in the Pacific).
Draft by Van Karnebeek and Beelaerts van Blokland
of an American written declaration ,,that it
is firmly resolved to respect the rights of the
Netherlands in relation to their insular possessions
in the region of the Pacific Ocean” to be
adopted in identical terms by the other allies
concerned.
Ditto (status quo declaration Far East): Hughes’
satisfaction with solution in accordance with No.
136-B; binding agreement between the four powers
on simultaneous presentation of letters by
their Envoys in The Hague.
Venezuela: Non-admittance of Venezuelan revolutionaries
to Curaçao (black list of 32 persons);
instructions for the Envoy at Caracas concerning
enforcement of Art. 1 of the Curaçao Order of
29 April 1905 (President Gomez’ wishes should
be met wherever possible without losing sight of
the possibility of a change of government and
taking account of the importance to Willemstad
of undisturbed tourist traffic).
Cancelled, somewhat deviant draft (the matter to
be left undecided and unreasonable demands of
the Venezuelan government to be vigorously opposed).
Communication to Envoy at Caracas, d’Artillac
Brill, concerning instructions for Governor of
Curaçao in conformity with the covering document.
Applicability of the treaties concluded between
the Netherlands and the former Donau Monarchy
to the Republic of Austria, in connection, inter
alia, with the admission of consuls to the Netherlands
East Indies only after drawing up new provisions
for the implementation of the relevant
old treaty or concluding a new treaty (Cf. Part II,
Nos. 425 and 426).
138* Annex 1 13.2.1922
Notes compiled by Economic Affairs Dept.
138* Annex 4 24.9.1921
139 6.1.1922
From Van Eysinga
(Rhine Navigation
Comm.)
140 7.1.1922
From Ruys de Beerenbrouck to De Graaff and LeRoy

141 9.1.1922
Minutes of 3rd Meeting of Trade Policy Committee

142 11.1.1922
From Beelaerts van Blokland, delegate to the Washington Conference

142A 10.1.1922
Annex

142* 15.1.1922
From Van Vredenburch (Brussels)

143 15.1.1922
From Beelaerts van Blokland (Washington)

144 17.1.1922
From Carobbio

144A 6.1.1922
Annex

145 17.1.1922
To Beelaerts van Blokland (Washington)
Rhine navigation: conflicting views on the applicability
or otherwise of Art. 46 of the Mannheim
Treaty (viz. resolutions adopted by majority vote
in the Central Commission were binding only after
approval by governments) to resolutions ex
Article 358 of the Treaty of Versailles.
Yap cables: résumé of Japanese-American draft
treaty relating to the allocation of cables (with
summary of a cable just received from Van Karnebeek,
Washington); Anglo-French approval of
that draft; government consultation on the matter
in Italy (linking this question to that of the
Transatlantic cables).
Netherlands trade policy vis-à-vis Spain, Bulgaria,
Romania, Italy, Finland, Hungary, Brazil, Australia
and Czechoslovakia.
Washington Conference: situation after Van
Karnebeek’s departure; discussion with Root on
closing date of conference and status quo declaration
in Far East.
Summary in English of Root-Beelaerts van Blokland
discussion on 10 January.
Belgium (Dutch Protestant School, Brussels):
Preference for its continuation as a non-legal person
(not a Belgian public institution); corporate
body to be established in the Netherlands as
owner and lessor of the premises to the governing
body in Brussels; political importance of admitting
children of Flemish origin.
Washington Conference (status quo declaration):
Netherlands delegation had no part in the New
York Times article on letter from Van Karnebeek
to Root (Cf. No. 135); Root’s distress at leak.
Genoa Conference (all European states, including
Bulgaria, Germany, Hungary, Austria and the
Soviet Union) on the economic and financial rehabilitation
of Central and Eastern Europe: enclosure
of Annex.
Text of relevant resolution adopted by the allied
powers at Cannes on 6 January.
Washington Conference (Eight-po wer Declaration):
report of communication to U.S. Envoy
in the Netherlands, Phillips, of disappointment in
the Netherlands at the attitude of Hughes in this matter, notwithstanding assurances previously given by him.

146 17.1.1922
From Van Nispen tot Sevenaer (Vatican)

147 18.1.1922
From Beelaerts van Blokland (Washington)

148 20.1.1922
To Ridder van Rappard (Stockholm), Sweerts de Landas Wyborgh (Christiania) and Van Panhuys (Berne)

149 21.1.1922
From Beelaerts van Blokland (Washington)

150 21.1.1922
From Beelaerts van Blokland (Washington)

151 21.1.1922
From Van Vredenburch (Brussels)
Vatican (diplomatic service): account of talks
with Under-Secretary of State Mgr. Borgoncini
about need for early appointment of new Internuncio;
Borgoncini’s reversion to préséance question,
with reference to the rules laid down at the
Congress of Vienna.
Washington Conference (Far East, China): course
of events; resolutions relating to Chinese customs
tariffs; money squandered in China on maintenance
of excessively large military establishment,
largely under the command of more or less
independent generals; resolution concerning
foreign troops and police on Chinese territory;
American proposal for further elaboration of the
open-door principle; report requested from subcommittee
on Chinese Oriental Railway; limitation
of arms (difficulties in demarcation of
territory within which no new fortifications will
be permitted).
Genoa Conference: notification of No. 144; request
to ascertain whether the invitation had
been received as sympathetically in Berne as in
The Hague.
Washington Conference (limitation of arms): editorial
committee concerned with the questions
(1) whether the existing provisions were adequate
in the light of the development of weapons since
1907, and (2) what new provisions would be
needed if (1) was answered in the negative; likely
technical procedure for sub-committee’s report.
Ditto: money squandered on troops in China (Cf.
No. 147); item 7 of American agenda (status of
commitments); full information on these contracts;
21 demands in Manchuria; Sarraut’s opposition
to elaboration of Art. 4 resolution on
open-door principle; cooperation between American
and British delegations on all fronts.
Belgian question and moratorium on German
reparations: discussion with Jaspar on the moratorium
and the Genoa Conference; his inclination
to ,,faire du tapage” (maintenance of Belgian
priority claim in its entirety); further talk with
him about the Genoa Conference (acceptance by
Soviet Union with tacit nescience of preliminary conditions) and Anglo-French and Anglo-Belgian agreements for Belgium; clause on ,,agression non provoquée d’Allemagne” in the former agreement only; ,,gentillesse” of Belgian Prime Minister towards the Netherlands apparent from his government declaration; Belgium studying Van Karnebeek’s Wielingen proposal.

152 22.1.1922
From Van Ketwich Verschuur (Tangier)

153 21.1.1922
From Carobbio

154 27.1.1922
From Beelaerts van Blokland (Washington)

155 31.1.1922
To Carobbio

156 1.2.1922
From Beelaerts van Blokland (Washington)

157 2.2.1922
From De Marees van Swinderen (London)
Tangier Statute: Netherlands participation in
naval review in the roads of Tangier on the occasion
of the French President’s visit; prominent
position of Netherlands flag in port of Tangier
and expected offer of a directorship in the Société
Internationale pour le développement de
Tanger (construction and management of port);
participation in naval review dependent on international
importance of French visit from viewpoint
of most interested powers, viz. Britain and
Spain; non-availability of a Dutch warship.
Genoa Conference: (French) announcement of
the seven agenda items and their subdivision.
Washington Conference (sundries): reduced entertainment
expenses in view of criticism voiced in
the Netherlands; police troops in China; taxation
and railways there; Siberia subject of discussion
between Japan and U.S.A.; Anglo-American
collaboration at conference with support from
France; favourable reception of Netherlands draft
of identical Notes for Tokyo and Washington
(Status Quo Declaration).
Genoa Conference (agenda items): reply to No.
144; comments on ,,nature aussi variée que complexe
des très nombreuses questions”; acceptance
of invitation and request for further information
with a view to the most desirable composition of
the Netherlands delegation.
Washington Conference (identical Notes on Status
Quo Declaration): discussion with Hughes about
their presentation before close of conference;
talks with Balfour and Shidehara and agreement of
both; an analogous declaration for the Portuguese
government; writer’s view that ,,insular
possessions” could only partly relate to Portugal’s
East Asian possessions.
Belgian question: discussion with Curzon regarding
matter of Anglo-Belgian guarantee treaty first
raised by France at Cannes (Briand); also ,,attaque
non provoquée” restricted to Germany - ,,we have no intention of guaranteeing Belgium against an attack by you” (= the Netherlands).

157* 2.2.1922
From Putman Cramer

158 4.2.1922
From Van Vredenburch (Brussels)

158A Annex

159 4.2.1922
From Beelaerts van Blokland (Washington)

160 5.2.1922
Ditto

161 5.2.1922
From Tatsuke

161-A

162 6.2.1922
To H.M. the Queen
Note on Netherlands naval plan (,,The Dutch
Navy in European waters must necessarily confine
itself to a purely defensive attitude; in the
Netherlands East Indies the Royal Navy finds itself
confronted with a task of far wider scope”).
Belgian question: objections to draft text of an
interview on the subject of Belgian-Dutch relations
to be published at the request of the Belgian
government in a daily paper widely read in
the Netherlands; request to the correspondent of
that paper to refrain from publication.
Text of the interview referred to in No. 158.
Washington Conference: drafting of Far East
treaties by sub-committee of heads of delegations;
no success in his efforts to effect amendments to
the wording; a few spontaneous concessions
made by Japan in respect of her 21 demands of
1915.
Ditto: report on previous day’s meeting; signing
of treaties at final session on 6 February made
possible by Hughes’ obvious desire to please Balfour,
who wished to depart.
Need for Netherlands delegates to remain in
Washington until the 18th for finalisation.
Ditto: enclosure of statement identical to that
issued by Britain, France and the U.S.: Japan
,,declares that it is firmly resolved to respect the
rights of the Netherlands in relation to their insular
possessions in the region of the Pacific
Ocean”.
Declaration.
Ditto: notification of the presentation of the
four identical Notes referred to in No. 161-161A.
Background was the fact that the four-power
treaty in which they declared that they would respect
one another’s insular possessions had created
a political situation in the Pacific which was disadvantageous
to the Netherlands (four-power
treaty intended to do away with the Anglo-
Japanese alliance deplored by America and the
British Dominions, while the Four did not anticipate
aggressive intentions on the part of the
Netherlands and the Netherlands possessions
were deemed to lie outside the ring of islands
which could prompt international conflicts); solution
sought which would not entail the drawback of participation in a four-power pact; no commitments which might involve the Netherlands in the difficulties of other states. The Netherlands preferred four separate declarations to a collective one since anything suggesting patronage by other states or any decrease in the full sovereignty of the Netherlands as an Asiatic power was to be avoided. Similar declaration in respect of Portugal.

163 7.2.1922
From Quarles van Ufford (Middelburg)

163A 7.2.1922
Annex 1

163B 9.3.1922
Annex 2

164 9.2.1922
From Van Vredenburch (Brussels)

165 13.2.1922
Minutes of Council of Ministers

166 15.2.1922
From LeRoy

167 15.2.1922
From Legation in Washington

168 16.2.1922
From Van Vredenburch (Brussels)

169 16.2.1922
From Oudendijk (Peking)

170 16.2.1922
From Oudendijk (Peking)
Belgian question: damming up of Zandkreek; installation
of ad hoc committee; Eendracht not to
be regarded as an island waterway between
Scheldt and Rhine.
Carsten’s objections to damming up plan.
J. Beucker Andreae’s concurrence with No.
163-A.
Ditto: Jaspar’s suggestion of meeting Van Karnebeek
during Genoa Conference; doubts as to the
utility of such a meeting because of the differences
of opinion on the Wielingen problem.
Russia: no aid to be granted in view of the state
of the Dutch finances.
China: wireless telegraphy in that country;
Netherlands abstinence in view of politically
dangerous aspects of the matter.
Washington Conference: four-power pact; Senator
Hitchcock’s questioning of Senator Lodge as
to reason for non-participation of the Netherlands;
Lodge’s reply (British objections because
of boundary line running too close to Singapore).
Belgian question: information given to Barendse
and Pieterse concerning the Wielingen negotiations;
Jaspar’s reticence based on fear of influential
circles in Belgium chef-de-cabinet Davignon’s
influence on Jaspar; position of Flemings
in Belgium; Franco-Belgian treaty of
guarantee against attack (from any quarter);
Netherlands publicity in Belgium.
Yap cables and DNTG: Chinese share of possessions
of Shanghai Company and Netherlands
protest voiced against this; probable advantages
to China of arrangement with Netherlands
interested parties.
Yap cables: enclosure of Annex with elaboration
of arguments in favour of Chinese-Netherlands
cooperation (Cf. no. 169), including the Japanese
request for rights for a cable from the island of Nafa.

170A 14.2.1922
Annex 1

170B 15.2.1922
Annex 2

171 17.2.1922
To Diplomatic Missions (except Berne and Bucharest)

172 18.2.1922
From Van Vredenburch (Brussels)

173 18.2.1922
From De Graaff

173A 10.12.1921
Annex 1; From Aschke to De Graaff

173B 24.1.1922
Annex 2; From De Graaff to Aschke

174 20.2.1922
Minutes of 4th Meeting of Trade Treaties Committee
Translation of a communication from Chinese
Foreign Secretary Yen to Oudendijk.
Oudendijk’s objections (for Yen) to assertions
advanced by Chinese Ministry of Communications
with regard to Netherlands rights and interests.
Serbia (diplomatic service): explanatory note relating
to course of events in the Rapaport question
(Cf. Part II, Nos. 192, 195 and 197, and 1
and 80 above); satisfaction demanded by the
Netherlands for Serbian lack of regard; striving of
government in Belgrade to restore relations before
visit of King to Bucharest in connection
with his marriage; willingness in Belgrade to take
the initiative; ending of suspension of relations
by exchange of notes expressing mutual desire
for resumption.
Belgian question: cuttings from ,,Nation Belge”
and ,,Handelsblad van Antwerpen” concerning
Gerretson’s speech in the Lower House (,,grist to
the mill of the opponents of the Netherlands”)
on 9 February.
Djambi affair: enclosure of two annexes relating
to American capital in the development of oil
fields in the Netherlands East Indies; avoidance
of commitments for a new Colonial Minister.
Request from Vice-president of Standard Oil to
De Graaff; reference to No. 317, Part II (forwarded
too late) expressing confidence that
,,there no doubt would be found important
petroleum fields suited for contracts similar to
that with the BPM” and the view that American
oil discoveries should entitle the companies to
share in the subsequent exploitation.
Evasive reply to No. 173-A; no particular preference
expressed as regards future forms of exploitation.
Trade policy of and vis-à-vis various countries.
Portugal: import duties, shipping rights and possible
retaliatory measures.
Germany: revision of 1851 trade treaty with German
Customs Union in connection with revision
of Netherlands East Indies Tariffs Act; observations
of a general nature concerning the upholding
of free trade and possible special measures
to aid Netherlands trade and industry in the prevailing depression; rejection of protective duties - both temporary and permanent - by the economic policy subcommittee (fear of temporary measures becoming permanent and fear of unwillingness on the part of the exchequer to forgo benefits once received).

175 20.2.1922
To Van IJsselsteyn

176 22.2.1922
From Van Asbeck (Warsaw)

176A 18.2.1922
Annex

177 25.2.1922
To Brussels, London, Paris, Rome and Tokyo

177A 25.2.1922
Annex 1

177B 25.2.1922
Annex 2

178 27.2.1922
From Economic Affairs Dept., Assistance Council

179 27.2.1922
From Gevers (Berlin)
Spitsbergen (mining regulations): failure to adopt
a standpoint (for the sake of private interests)
was not consistent with loyal recognition of Norwegian
sovereignty.
Poland (clothing credit): method of repayment
of FIS. 17,800,000 (Cf. No. 81); instalments and
interest rate; further - deviating - Polish proposal
for procedure with Polish treasury notes.
Specification from Van Asbeck for Skirmunt.
Aviation Conference, Paris: Non-accession to
international convention of 13 October 1919 in
connection with Articles 5 and 34 (derogation
and loss of freedom to make own decisions concerning
admission of foreign aircraft over Netherlands
territory together with unacceptable division
of votes in international committee); Van
Karnebeek’s question as to standpoint and views
of neutral states.
Note from Economic Affairs Dept. regarding
standpoint of former neutral states.
Communication from State Commission on Aviation
(J.B. Kan) about the technical part of the
convention (annexes); acceptance of the provisions
contained therein in general partly in so
far as practicable with the organisation and resources
existing in the Netherlands; several other
proposed technical amendments.
Genoa Conference: notes on basis and schedule;
aims to be pursued, based on results of Cannes
Conference.
Ditto : discussion with Rathenau on deferment;
latter’s contention that the conference could not
and would not become a gathering where definitive
decisions would be worked out or adopted
for improving the economic situation, but merely
,,eine Konferenz der allseitige Erkenntnis”; need
for thorough preparation (difficult to achieve in
time) did not lessen the urgent need to bring the
various governments together for an exchange of
~~
No. Date; From/to Descrip tion
LXIX
views; risk that deferment would mean cancellation;
definitive (official) fixing of the opening
date for 10 April.
180  27.2.1922  From Sweerts de Landas Wyborgh (Stockholm)
Ditto: neutral states of Europe and recognition of 1919 peace
treaties; Swedish opposition even to indirect recognition;
common interest of ex-neutrals in this matter; question whether
the time had not come for them to unite (possibly openly, in
the form of a discussion on the reintroduction of the gold
standard); Sweden’s preference for an entente between the
Netherlands, Switzerland and Sweden.
181  28.2.1922  From Van Vredenburch (Brussels)
Belgian question and Genoa Conference: discussion with Jaspar
on Genoa, the Franco-British and Belgian-British military
agreements and possible consultation between Jaspar and Van
Karnebeek in Genoa; limited success of meeting in Lucerne
(Cf. No. 4A); Netherlands hydraulic engineering works in
Zandkreek.

182  1.3.1922
Telegraph policy, the Netherlands: report to Executive of
Postal and Telegraph Services on confidential discussion
(countering foreign „imperialism”, non-establishment of offices
of foreign companies in the Netherlands); fear of American
infiltration.

183  1.3.1922  From Hooft
Relief Credits Central Europe (Austrian succession states):
proposal that R.J.H. Patijn be appointed trustee; few
objections to appointment of a Dutchman as such, and advantages
that could ensue.

184  1.3.1922  From Van Rappard (Christiania)
Genoa Conference and (non-)recognition of peace treaties by
ex-neutrals (Cf. No. 180): Norwegian opposition to recognition
because of possible undermining of the authority of the League,
and non-acceptance of Branting’s standpoint (possible
acceptance of the economic agreements only by Norway).

185  3.3.1922  From De Ligne
Belgian question (waterways between Rhine and Scheldt):
reservations about installation of an ad hoc committee for
damming off the Zandkreek (Cf. Nos. 163 and 181); contention
that Zandkreek should be regarded as a waterway between the
Rhine and Scheldt.

186  4.3.1922  From Nederbragt
Czechoslovakia (trade treaty): no objection to imports; most
favoured nation clause; greater Netherlands import quotas
and/or lower tariffs.

187  6.3.1922  To Van Panhuys (Berne)
Genoa Conference and (non-)recognition of peace treaties (prior
consultation of ex-neutrals - Cf. Nos. 180 and 184). Discussion
with Swiss Envoy Carlin and Motta’s instructions for him;
objections to action by a neutral bloc (time the war groups
were abolished); room for Denmark and Norway in discussion of
gold standard; reticence to be exercised in the talks; need for
subsequent discussions in The Hague (nearer to London); wish
not to become involved in Genoa in the probably sharply
conflicting views.

187A  2.3.1922  Annex  From Motta to Carlin
Instructions from Motta for Carlin (to sound Netherlands
government on common interests at conference).

188  9.3.1922  From Van Panhuys
Ditto: Swiss request to Italian government for elucidation of
programme in regard to reconstruction of Europe; consulting
Spain; Dinichert’s objections to the creation of yet a third
group besides the „large” and the „small” Entente, and to the
treaties of 1919 which on so many points had a far-reaching
influence on the economic life of Europe.

189  9.3.1922  From De Marees van Swinderen (London)
Rhine navigation: further to No. 139; consultation with British
Rhine Navigation delegate, Baldwin; his objections to Van
Eysinga’s impractical academic standpoint, which made it
possible for non-riparian states to block measures of no
importance to them.

190  9.3.1922  From De Marees van Swinderen (London)
Genoa Conference: agenda; criticism in London of composition of
Permanent Court; Belgian question: Wielingen article expected
to appear in Revue des deux Mondes; no need for Van Eysinga to
go to London for the Rhine navigation controversy in view of
Baldwin’s attitude (Cf. No. 189).

191  10.3.1922  From Walree de Bordes (Geneva) to Nederbragt
Relief Credits (Austria): Anglo-French request to League of
Nations finance committee to appoint a financial adviser;
objections in Austria and London to a „snooper” („they will not
be bothered in their actions by a League of Nations man, but
will want to appoint their own control”); view of Walree that
„a strong and well-organised social-democratic party and a
large Roman Catholic Party with specialistic tendencies cannot
be submitted to a purely capitalistic control”; suspension for
twenty years of the Dutch lien on the Austrian state assets in
order that the League of Nations scheme for the reconstruction
of Austria might be put into practice.
192  11.3.1922  From Sweerts de Landas Wyborgh (Stockholm)
Genoa Conference: further to Nos. 180, 181 and 187; acceptance
by Switzerland of Swedish invitation for preliminary
consultation with Scandinavian countries; Swiss recommendation
that Spain also be included; expectation expressed in a Swedish
paper that this might induce the Netherlands to take part.

193  12.3.1922  From Ridder van Rappard (Christiania)
Ditto (Cf. No. 192): discussion with Raestadt about No. 187-A
and his concurrence with the observations made by Van Karnebeek
in No. 187; likelihood of matter being taken up by Norwegian
Prime Minister Blehr with Branting in Stockholm; virtual
exclusion of Denmark and Norway from decision regarding a
return to the gold standard as pretext for a meeting in The
Hague (Cf. No. 187).

194  12.3.1922  To De Marees van Swinderen (London)
Rhine navigation: reply to No. 189; opposition to the
construction put on Van Eysinga’s views there.

195  13.3.1922  To Tatsuke
Washington Conference - Status Quo Declaration: expression of
thanks for sending No. 161.

196  13.3.1922  To Sweerts de Landas Wyborgh (Christiania)
Genoa Conference and non-recognition of peace treaties:
instructions to announce participation in the Genoa meeting on
18 March in the terms of No. 187.

197  13.3.1922  To Gevers (Berlin)
Ditto: preparations in Germany; request for confidential
perusal of German documents (Cf. No. 179).

198  13.3.1922  From Sweerts de Landas Wyborgh (Stockholm)
Ditto: further to No. 197: reservations regarding presence of
Spanish representative.

199  14.3.1922  Minutes of 76th Meeting of Econ. Affairs’ Dept. Assistance Council
Ditto (discussions on preparation): I: reconstruction of Russia
and II: credits for and monetary matters connected with Eastern
Europe in general; Van Vollenhoven’s contact with British
circles.
I: Attitude of the Netherlands regarding confiscated securities
and claims on Soviet state; possible participation in
international syndicate („Parent Cy”); reports on Kröller’s
conference with Belgian industrialists.
II: International Gold Standard Convention, credit questions
and Ter Meulen plan.
Appointment of Assistance Council sub-committees for (1)
Russia, (2) economic and (3) monetary questions.

200  14.3.1922  From De Marees van Swinderen (London)
Ditto (preparation, international consortium): admittance of
Danish financier Glückstadt to provisional committee of
experts; establishment of Central International Corporation
with capital of £20,000,000 (20 per cent each for Belgium,
Germany, France, Britain and Italy); invitations for Denmark,
Japan, the Netherlands, Czechoslovakia and the United States to
join via national corporations to be formed in each of these
countries, operating under a state guarantee, with shares in
the International Corporation; serious action only on the part
of Germany, Britain and Italy, and opposition from British
Joint Stock Banks.

200A  10.1.1922  Annex
Resolution of Supreme Council pertaining to the establishment
of an international corporation for the reconstruction of
Central and Eastern Europe, and its national branch
establishments.

201  14.3.1922  From Van Vredenburch (Brussels)
Belgian question: early resumption of negotiations; Comité de
Politique Nationale on the war path again (article in
„Flambeau”, 22 Feb.).

202  14.3.1922  From Michiels van Verduynen (Prague) to Nederbragt
Czechoslovakia (trade treaty): reply to No. 186; addition of a
second clause to Articles 1 and 3; ditto to para. 1 of the
protocol of the treaty and deletion of para. 4 thereof;
moderation called for in requesting tariff reductions.

203  16.3.1922  To De Marees van Swinderen (London)
Genoa Conference: complaint regarding belated notification of
February conference in London on the establishment of an
international consortium of the Allies, Germany and Denmark
(Cf. No. 200); request for information on further developments.

204  16.3.1922  To Oudendijk (Peking)
China and the Yap cables: division of assets of former DNTG;
protest against Chinese plans for the assets in Shanghai
(Cf. No. 169) and Wusung; description of the company’s assets
in Wusung.

204A  Annex
Memorandum by LeRoy on action taken by China.

205  16.3.1922  From De Marees van Swinderen (London)
Rhine navigation: reply to No. 194; British Foreign Secretary’s
view that „nullement résolution du commission sera valable sans
ratification par le gouvernement territorial impliqué”.

206  18.3.1922  From De Marees van Swinderen
Genoa Conference: further to No. 200 (confidential disclosure
of British „avant-projet”): details of agenda items; tendency
in Britain to preserve continuity as far as possible between
Tsarist and Soviet governments; reflections on governmental and
private debts and confiscation of private property; Foreign
Office’s refusal to furnish further written information about
„avant-projet”.
207  19.3.1922  From Van Sweerts de Landas Wyborgh (Stockholm)
Ditto: confidential; information supplied to him by Branting
(Cf. No. 180) regarding agenda items; statements by Swiss Envoy
Schreiber (non-acceptance of responsibility for the war
reparations arrangement which was to be considered the main
reason for Europe’s economic decline); reserved attitude of
Sweerts to these statements; Branting’s satisfaction at
Netherlands’ willingness to cooperate at the conference;
enclosure of two annexes.

207A  18.3.1922  Annex 1
Memorandum from Branting (French text) read to Sweerts de
Landas Wyborgh.

207B  Annex 2
Résumé (French text) of the discussions held on 18 March.

208  20.3.1922  Minutes of Council of Ministers
Loans for certain states in Central and Eastern Europe:
authorisation to introduce Bill.

209  21.3.1922  To Political and Economic Affairs Depts.
Genoa Conference: question regarding the adequacy or otherwise
of the report from Berlin on Soviet Russia.

209A  22.3.1922  Annex 1
Negative comments on the subject from Beelaerts van Blokland.

209B  23.3.1922  Annex 2
Ditto from Nederbragt.

209C  23.3.1922  Annex 3
Final instructions from Van Karnebeek (request to be
communicated to the legation in Berlin for political and
economic information about Russia from there if possible).

210  21.3.1922  To Oudendijk (Peking)
Resolution on wireless telegraphy in China (adopted in
Washington): „to replace present competition between wireless
stations in China by cooperation under Chinese control” with
recommendations on four basic principles (English text)
(Cf. Nos. 100 and 105). Netherlands desire to remain free of
undertakings, with favourable consideration of Chinese
proposals for improvement of communications conditions.

210A  1.2.1922  Annex 1
Resolution regarding radio stations in China and accompanying
declarations (Washington Conference).

210B  7.12.1921  Annex 2
Declaration of the Powers other than China concerning the
resolution on radio stations in that country.

210C  7.12.1921  Annex 3
Chinese declaration concerning resolution of 7 December
regarding radio stations in China.

210D  8.7.1921  Annex 4
Press release from American Department of State.

210E  13.1.1922  Annex 5
British memorandum on wireless in China (number of conflicting
concessions granted by the Chinese government).
211  21.3.1922  From De Graaff
Yap cables: acceptance by the Netherlands of Menado-Yap cable;
calculation of claims arising from German-Dutch Pool (total of
Frs. 3,165,062.26 of which Frs. 764,006.26 accruing to the
Netherlands).

211A  Annex
Draft cable agreement.
212  21.3.1922  From Gevers (Berlin)
Genoa Conference: reply to No. 197: preparatory activity on the
part of the German government was only apparent (but real
activity displayed by „Korporationen und grosse Verbände”);
discussion with Rathenau; his slender hope of practical results
from conference and inclination to take no further part;
Envoy’s objections to this.

213  22.3.1922  From Everwijn (Washington)
Washington Conference (Four-Power pact and status quo
declaration): discussion in Senate of treaties concluded;
Senators Underwood and Pittman on encroachment on interests of
smaller states (including the Netherlands) through their
non-inclusion in Four-Power pact; defeat of amendments proposed
by Pittman, Robinson and Walsh.

214  23.3.1922  To Sweerts de Landas Wyborgh (Stockholm)
Genoa Conference: reply to No. 207; approval of „réunion
ultérieure” to be held in Berne, and designation of Van Panhuys
and Van de Sande Bakhuyzen as Netherlands delegates;
abandonment of plan for meeting to this end in The Hague due,
inter alia, to impossibility of drawing up final agenda at the
time and absence of Vissering, an obstacle to discussion of the
gold standard.

215  24.3.1922  To H.M. the Queen
Diplomatic Service (Baltic countries): in view of the trend of
trade relations, doubt as to the possibility of effective
representation in five such widely separated countries
simultaneously (i.e. Denmark and Norway as well); posting of
Van Rappard to Estonia, Latvia and Lithuania.

216  25.3.1922  Van Karnebeek’s diary
Genoa Conference: discussion with Carlin on meeting of
ex-neutrals at Berne en route to Genoa (Cf. Nos. 207-A and
214); question of the need for stressing cooperation between
ex-neutrals.

217  25.3.1922  To Van IJsselsteyn
Spitsbergen (mining regulations): further to No. 175:
objections to Norwegian recognition of acquired rights of
surface ownership only; time limits in Articles 15 and 35 too
short; guarantee fund in Art. 33 and salaries; social
legislation in draft chapter 6 and Netherlands wish for
certainty; several other technical objections; rectangular
shape of concession not very practical with sharply indented
coastline (preference for parallelogram); support to be given
to certain proposed amendments to regulations; presentation
anew to Norway of certain desiderata with notification thereof
to Britain and Sweden with a view to their bringing more
pressure to bear in Christiania.

218  25.3.1922  DEZ. Working Paper
Genoa Conference (Russia): notes on restoration of relations
and conclusion of trade agreement. Admission of Soviet
representatives to the Netherlands and other countries;
reinstitution of the system of law in Russia; concessions and
trusts; restoration of the capitalist system; Soviet
propaganda; Russia and the Genoa Conference; International
Consortium and aid.
219  25.3.1922  Minutes of 77th Meeting of Economic Affairs’ Dept. Assistance Council
Ditto: further discussion of the preparations; I: Treub report
(sub-committee on Russia; Cf. No. 199) on the interests of
holders of securities; acknowledgment of progress made by
Soviet Union and value of the rouble; Van Karnebeek’s report on
his several discussions with Carlin (Cf. No. 216) and Ter
Meulen’s report on his talks in London (organisation in
accordance with British plans); Lloyd George and recognition of
USSR; opposition to this from Belgium and France pending proof
that USSR merited trust (need for pledges); reports on British
plans for (1) acknowledgment of the debts in foreign currency
and gold roubles abroad, (2) acknowledgment of debts relating
to public utilities, (3) determining the amounts owed by Russia
to France and Britain, (4) compensation for private property,
(5) right to appoint consular officials in Russia; freedom of
movement for foreign nationals, and (6) accession of USSR to a
number of international agreements; relevant negotiations; pros
and cons of recognition; scepticism regarding the state of
affairs in Russia as against the dangers entailed by further
delaying the resumption of trade.
II: Credits: discussion of documents relating to the Central
International Corporation (objectives and Articles of
Incorporation); exchange risk inherent in the objectives;
reduction of the share of countries with depreciating
currencies in the finance of the corporation, right of
co-determination of transactions as a condition governing a
guarantee to provide capital; Japan’s refusal to participate.

220  27.3.1922  From Nederbragt to Michiels van Verduynen (Prague)
Czechoslovakia (trade treaty): comments on No. 202; import
quotas and certificates of origin.

221  27.3.1922  From Netherlands Ems Estuary Committee (Van Heeckeren)
Ems Estuary: Protocol drawn up by the committee, regulating the
frontier in the Ems and the Dollard and provisions relating to
the Ems-Dollard questions (tying in with the discussions
conducted in Aug. 1921 - Cf. Part II No. 439 and No. 3 above);
curtailment of both parties’ sovereignty by a servitude imposed
not only longitudinally (i.e. relating to the part of the river
between the sea and the old West Ems), but in such a way that
it extended to the first point where each party had the
necessary freedom of movement on its own territory; no absolute
necessity for the proposed latitudinal restriction of the
servitude; divergent viewpoint of Van Oordt (military
objections); view that the settlement thus drawn up could not
prejudice settlement of the (entirely different) Wielingen
question; description of Ems Estuary region; right to Ems
engineering works to be accorded only after German standpoint
regarding allocation of the cost of improvement of the river
had been made known; arbitration clause of Art. 29.

222  28.3.1922  From Loudon (Paris)
Conflict (Greece-Turkey) Middle East: report on the eight-day
conference of the Foreign Ministers of France, Britain and
Italy for the restoration of peace; discussions relating to
Asia Minor, Dardanelles, Constantinople, the Turkish army and
Turkish economy, Armenia, protection of minorities and
preparation for replacement of terms of Turkish capitulation.

222A  28.3.1922  Annex 1
Advice sought by Van Karnebeek relating to (1) the necessity or
otherwise for the Netherlands to secure a seat on the
Supervisory Commission for the Dardanelles and (2) „to avoid
lagging behind” in the event of the abolition of the terms of
capitulation.

222B  4.6.1922  Annex 2
Notes by Schuurman (tying in with the previous discussion on
the Commission des Détroits and shipping rights - Cf. Nos. 27
and 37); a new element had arisen in the form of plans to
change the Treaty of Sèvres; need for Netherlands seat on any
organisation set up for the purpose of placing the Dardanelles,
the Sea of Marmora and the Bosphorus under supervision as
regards trade and shipping; figures relating, inter alia, to
Netherlands interests.

222C  9.6.1922  Annex 3
Notes by Nederbragt: limited commercial interest in the matter
so long as there was no question of differential treatment
(virtually ruled out).

222D  9.6.1922  Annex 4
Notes by Snouck-Hurgronje: No. 222-B, based on Netherlands
position as major maritime nation, carried sufficient weight
for participation.

222E  9.6.1922  Annex 5
Van Karnebeek’s concurrence with 222-B; instructions to act
accordingly.

223  29.3.1922  To Van Dijk
Belgium: recordings by the „Hydrograph” in the mouth of the
Scheldt (Wielingen) could scarcely be considered recordings in
the sense of Articles 68-69 of the 1839 treaty; need for prior
notification to the Belgian hydrographic service.

223A  23.3.1922  Annex
Relevant notes by Legal Affairs Section (with special reference
to end of No. 223); caution required in notifying Belgium on
account of Netherlands views regarding sovereignty over the
Wielingen (call at Zeebrugge to be main subject).

224  31.3.1922  From Van Asbeck (Warsaw)
Genoa Conference: eleventh hour USSR agreement with Baltic
States and Poland; preliminary discussions not in Moscow but in
Riga; abstention of Finland.

225  1.4.1922  To Gevers (Berlin)
Germany (Tubantia claims): agreement with appointment of
experts and establishment of committee of three arbitrators
should the latter not succeed within three months.

225A  23.3.1922  Annex 1
Notes by Legal Affairs Section (Beucker Andreae): possible
preference for one arbitrator and Plate’s preference for
immediate submission of the case to arbitration; designation of
experts in consultation with Royal Dutch Lloyd, with marginal
note by Van Karnebeek.

225B  Annex 2
Supplementary notes by Beucker Andreae with postscript by
Nederbragt.

226  2-3.4.1922  From Professor Bruins
Genoa Conference: memorandum on the question of international
credits in general, likely to be raised in a form different
from that used in the plans elaborated in London in regard to a
Central International Corporation operating with national
subsidiary corporations; changed relations since the Brussels
conference of September 1920 (Cf. Nos. 25 and 29); aspects of
the matter in regard to Germany, Russia and Czechoslovakia;
negative verdict on the wisdom of Netherlands participation in
view of the domestic financial situation (greater resilience of
the interest rate and the capital market in Britain than in the
Netherlands; capital depletion owing to immense amounts in
foreign securities having left the country in recent years and
need for very early restoration of equilibrium through drastic
curtailment of public expenditure); participation only if
conference absolutely essential for alleviating the situation
in Germany.

227  3.4.1922  From Nederbragt
Ditto (notes on international consortium for the reconstruction
of the USSR): further explanation of the scepticism he evinced
at the departmental meeting on 25 March (Cf. No. 219) and
provisional negative conclusion regarding participation because
(1) though the commercial interests of the Netherlands in
Russian exports of grain and timber were not inconsiderable,
the parties directly concerned were not over-eager to establish
relations with the Soviet Union; (2) it would be better for the
present, with or without official relations, to benefit
indirectly from Russia’s recovery; (3) Dutch nationals’
property in Russia (about Fls. 165,000,000) was relatively too
unimportant to allow it to carry weight; and (4) Russian
securities in Dutch hands, though more substantial (about
Fls. 960,000,000), were not of such importance to the
Netherlands economy as to warrant the risks involved in an
agreement with the USSR (lack of code of commercial ethics in
that country and the threat to possible Netherlands exports to
Germany, with its lower prices and more favourable location for
trade); caution to be observed with complex organisations
which, like the consortium under discussion, had not evolved
from small and simple beginnings, but had been set up in a
complicated manner; limited importance of the consortium for
employment in the Netherlands in the event of participation to
the amount of approx. Fls. 12,000,000, circulating slowly or
not at all; advice in regard to Russia „to stand firm in all
respects and deliberately to lag behind”, and to be mindful of
guarantees should it prove really necessary to yield.

228  3.4.1922  To Van Panhuys (Berne)
Ditto: proposed meeting of delegates after close
of meeting of experts in Berne (Cf. No.214);
report on discussion with Carlin (Cf. No. 216);
continuing objection to accentuated formation
No. Date; From/to Descrip tion
LXXIX
229 3.4.1922
From Van Vredenburch
(Brussels)
230 4.4.1922
Minutes of the 78th
Meeting of the
Economic Affairs’
Dept., Assistance
Council
231 4.4.1922
From De Geer
232 5.4.1922
From Ridder van
Rappard (Copenhagen)
of neutral bloc (construed by the Allies as a German
stratagem); Netherlands delegation to remain
uncommitted ,,without losing the confidence
of the other ex-neutrals”; regular participation
in the discussions only if this seemed essential.
Belgian question: Minister of State Seegers’ indignation
about sentiments attributed to him by
the ,,Standard” (Cf. No. 158); his version of the
interview in question.
Genoa Conference: international credit bank
(private capital with State guarantee) and monetary
question; I: reading by Patijn of further subcommittee
report; discussion and summary by
Van Karnebeek; cautious linking up with the
group wishing to go ahead in Genoa and doubt as
to the size of the Netherlands’ share
(Fls. 12,000,000) and the British share
(Fls. 48,000,000); Trip’s concurrence with a
state guarantee up to a total of 50 per cent of the
shares (to preserve the participating industries’
interest in a sound industrial policy); further
Treub and Waller. 11: reading by Patijn of a
report from Van Vollenhoven (monetary question);
proposed convention merely declaration of
certain principles (non-binding nature of free
gold markets); abandonment of gold centres
owing to French opposition; possibility that discussion
of this point might lead to loan of Fls. 7
or 8 million, with the proceeds from which Germany
could pay its reparations; elucidation by
Prof. Bruins of his memorandum (Cf. No. 226);
summary by Van Karnebeek: fullest possible cooperation
in everything at Geneva that could lead
to reconstruction; final communication from
Van Aalst about ,,Germany being well on the
way to reaching an understanding with Russia”.
Poland (clothing credit) (Cf. No. 176-176-A): repayment
and interest; rejection of Polish request
for an alternative arrangement by means of relief
credits.
Genoa Conference: ,,Berlingske Tidende” on
Danish attitude; anticipated grouping at conference;
ex-neutrals joining hands not to be regarded
as a bloc in the international political
sense, but as a form of cooperation between
states which by virtue of a certain similarity in
LXXX
No. Date, Fromlto Description
233 5.4.1922
From Van Rathenau
to Gevers (Berlin)
233A 21.4.1922
Annex 1
233B 30.4.1922
Annex 2
233C 30.4.1922
Annex 3
233D 30.4.1922
Annex 4
234 8.4.1922
Van Karnebeek’s
diary
235 9.4.1922
Ditto
236 10.4.1922
Ditto
237 11.4.1922
Ditto
238 11.4.1922
Ditto
239 12.4.1922
Ditto
240 13.4.1922
Ditto
241 14.4.1921
size, power and relations with the outside world
had various interests in common, and could thus
discuss the possibility of adopting a joint approach.
Germany: payment of compensation for torpedoing
,,Tubantia” during the war, unsatisfactory offer
of no more than ,,angemessene” damages.
Notes by Beucker Andreae adhering to ,,restituio
in integrum”.
Notes by Van Karnebeek in the spirit of No.
225-A: full compensation sole yardstick.
Notes by Struycken concurring with two previous
notes, full compensation (if necessary to be fixed
by arbitration) as guideline; objections to value
of lost tonnage as a basis (resulting in forfeiture
of compensation for loss of profit); difficult
questions would arise in regard to underwriters,
shippers, stevedores and passengers.
Notes by Snouck Hurgronje concurring with Nos.
233-A to 233-C; fixing of the extent of the compensation
by experts and (ultimately) by arbitration.
Genoa Conference: arrival, welcoming and accommodation
of Netherlands delegation.
Ditto : Discussion with Schanzer (Italy): objections
to Belgium being seated with the Great
Powers.
Ditto : Report on the opening of the conference.
Ditto : Composition of sub-committees of First
Committee.
Ditto: Report on the proceedings in the Second
Committee during the afternoon; chances of the
Netherlands joining View that Switzerland
ought not to vote for herself but for the Netherlands
(as the only free-trade country); discussion
with Fentener van Vlissingen about his talks with
Mendelssohn and German industrialists.
Ditto: Inclusion of Van Karnebeek in Fourth
Committee and Ruys in Third Committee; consultation
with other committee members.
Ditto: Report on luncheon with Ruys, Wirth,
Hermes, Melchior and Kreuter at Rathenau’s
(Germany to join League of Nations) and dinner
with Ruys as guests of Theunis and Jaspar (Wielingen
question).
Ditto
Ditto: Election of the Netherlands by Fourth Committee to subcommittee for the waterways.

No. Date; From/to Description

242 15.4.1922
Ditto
Ditto: Request by Banffy for support at the conference on the question of the minorities in Hungary; discussion with Facta and possible meeting with Jaspar; view that the conference lent itself to confidential discussions and expectation that something might be achieved in regard to reparations; meetings between Lloyd George and Chicherin outside the conference; lack of leadership and cohesion at the conference.

243 17.4.1922
Ditto
Ditto: Whom to send on mission to Lenin (question by Schanzer); Chicherin’s objections to taking this task upon himself; drafting and publication of Russian-German treaty (Rapallo); weakening of other powers vis-à-vis Russia and „incorrect” attitude of Germany; Lloyd George „very upset”; worsening of atmosphere at conference; convening of subcommittee of First Committee after break-away of Russians and Germans (Van Karnebeek’s consultation with Swiss delegation).

244 ca. 17.4.1922
From François
League of Nations (limitation of arms): reply by the Netherlands communicated on 17 May 1921 to Secretary General of League concerning the resolution adopted by the first Assembly (Cf. Part II Nos. 303 and 318-A); implementation of the first and third increase in the Naval Act reserve, „situation exceptionnelle” in which the latter increase was admissible for the Netherlands; protocol of British delegate Herschell (Paris, February) pertaining to the limitation of all armed forces in proportion to the size of the population.

245 18.4.1922
Van Karnebeek’s diary
Genoa Conference: discussion between Van Karnebeek and Schanzer at Villa Raggio; Netherlands memorandum pertaining to the committee of experts’ proposals concerning Russia (substantial interests (cf. No. 227) of the Netherlands as Russia’s creditor); Italian irritation at Treaty of Rapallo and view that Germany ought not to be given a lead on other countries by Russia; Starkenborch Stachouwer’s report on Swiss-Scandinavian and Netherlands-Spanish consultation on the situation, and Van Karnebeek’s moderating influence; his objections to „rather pronounced banding together of the ex-neutrals”; press communiqué by Van Karnebeek; latter’s discussion with Lloyd George on the occasion of the state banquet at the Palazzo Reale; L.G.’s view of the German attitude („very unloyal”) shared by Van K. in more mitigated form; conversation between Chicherin and Prince Gonsaga and others; isolated position of Wirth and Rathenau.
246 19.4.1922
To Ruys de Beerenbrouck
Belgian question: discussion with Jaspar who had made cooperation in finding a solution to the Wielingen question contingent upon Dutch cooperation in military agreements - this was unacceptable to Van Karnebeek; further talks with Jaspar expected.

247 19.4.1922
From Van Karnebeek
Ditto: report on his talk with Jaspar (see No. 246) on the Wielingen question and the military clause, in the presence of Struycken, Carsten, Bourquin and Davignon; Jaspar’s insistence on link between Limburg and Wielingen questions; Wielingen demarcation line proposed by Belgium (tangent to the N.E. point of the Bol van Heyst drawn from the frontier); need to arrive at a solution of the Scheldt question.

248 19.4.1922
Van Karnebeek’s diary
Genoa Conference: tales and wild rumours. Rathenau and Wirth’s call on Lloyd George; Giolitti press supported Germany; Van Karnebeek’s conviction that Russia had forced Germany to publish Treaty of Rapallo; further consultation between Van Karnebeek, Jaspar, Struycken and Carsten.

249 20.4.1922
Ditto
Ditto: Morning conference with Patijn, Vissering and Ter Meulen on plan for international loan of four billion gold marks, part of which would go to German Reichsbank for interest payment for two years; moratorium on Germany’s reparations payments for five years; Netherlands support if this would lead to improvement in the general financial situation; German austerity and tackling industry on sound financial footing (Fentener van Vlissingen’s plan as a guideline); discussion on the use of the 140 million-guilder Netherlands credit already provided; Van Karnebeek’s press conference.

250 20.4.1922
From Snouck Hurgronje
Ditto: information from the German Envoy in The Hague regarding the Treaty of Rapallo (not considered incompatible with Art. 260 of the Treaty of Versailles and third parties’ interests).

251 21.4.1922
Van Karnebeek’s diary
Ditto: conversation with Barthou about Netherlands memorandum; Netherlands views on matter close to those of French; no objections on Barthou’s part to seat for the Netherlands on the
new small committee of experts on Russia (Struycken); cooperation on this point from Jaspar, Motta and Branting, and Van Karnebeek’s letter to Schanzer; election of Committee of Seven (five Genoa convening powers, one representative of the Small Entente and the Netherlands); Van Karnebeek’s conclusion that the Netherlands’ opposition to the formation of blocs had had a favourable effect on the five convening powers; speech by Patijn (20 Apr.) in Second Committee on reparation payments; economic recovery possible only if the latter question was settled satisfactorily, which in any case was a matter for the parties concerned and not for the conference; dinner with Branting; Van Karnebeek’s idea that the question of Russian armaments should be raised in the new committee not as a political but as an economic issue; no credit for USSR if the Red Army should profit by it; approval on the part of Motta and Schulthess; talk with Benes about Russian question; his fierce opposition, shared by Masaryk, to recognition of USSR.

252 22.4.1922
Ditto
Ditto: Reception by King of Italy aboard the „Dante Alighieri”; strange reception by the King of Chicherin and Krassin; Chicherin’s remark to the Archbishop of Genoa about the „wonder of freedom of religion in Russia”; Barthou deluged by telegrams from Poincaré; would the French stay in Genoa; meeting of allied delegates following German reply to their Note; clash between Lloyd George and Barthou; discussions on international loan; British abstention pending settlement of reparations.

253 22.4.1922
From Carsten (Geneva) to Beelaerts van Blokland
Belgian question: enclosure of No. 246; not unfavourable impression of the discussion; France and Treaty of Rapallo.

254 23.4.1922
Van Karnebeek’s diary
Genoa Conference: meeting of sub-committee of First Committee preceded by conference with allied delegates; Russian memorandum considered non avenu; calls on Van Karnebeek by Fierlinger and don Sturzo; conciliatory attitude of Italians.

255 24.4.1922
Ditto
Ditto: Conference in smoother waters; diminishing buoyancy; French favoured recommendations rather than agreements; little enthusiasm for „syndicate” (Lloyd George’s hobby horse); Russian question to be referred to a new committee yet to be appointed; Britain’s predominant
influence on conference and ignorance of French and other delegates; presumably no reaction forthcoming to the response of the group of nations to the German reply; would Genoa peter out?

256 25.4.1922
Ditto
Ditto: Protocol of the experts on the last discussion with the Russians; Van Karnebeek’s consultation with Swiss and Scandinavian delegates on the situation created by the Russians’ attitude; proposal that the group meet again with inclusion of Spain.

257 25.4.1922
From Van IJsselsteyn
Poland (clothing credit): appreciation of Van Asbeck’s action and success in Warsaw (Cf. Nos. 176 and 231); payment of annuities (interest rate); acknowledgement of Polish government’s good will.

258 26.4.1922
Van Karnebeek’s diary
Genoa Conference: talks with other ex-neutrals (Cf. No. 256) opened by Van Karnebeek; Branting’s reflections on the legitimate rights of the Russians, who should nevertheless be addressed with some firmness; little success in the approach made by those delegations to Schanzer at the Palazzo Reale, initiated that afternoon by Van Karnebeek; Motta favoured support for accommodating attitude of the Italians; Van Karnebeek for coming to grips with Soviets; Schanzer felt there would probably be no credits for the Bolsheviks. Dinner given by ex-neutrals at Miramare Hotel; Van Karnebeek’s views on Facta and Barthou; still no definitive draft non-aggression pact (Lloyd George’s show piece); Van Karnebeek’s fear that this might further reduce Germany’s already slight inclination to join the League of Nations; Evans’ evasive statements on Germany’s accession; negative nature of the non-aggression pact unimportant beside Art. 10 of Covenant and inadequacy of the four-million gold mark loan to Germany; plans for ten-year truce and consortium for Russia. Van Karnebeek’s views to the effect that on those points where too much had been conceded to the Russians efforts should be made to achieve what was still possible, providing the position of those who wished to settle in Russia was regulated; Jaspar’s suspicion that British delegation will be too conciliatory towards the Russians in an effort to please Lloyd George; Van Karnebeek’s attempt to arrange a further discussion with the Belgians concerning revision of
the 1839 Treaty.

259 27.4.1922
Ditto
Ditto: Luncheon of Netherlands delegation members with Delacroix, Lepreux, Avenol and other delegation members; Delacroix’s views on Jaspar and Theunis’ wish to settle the Belgian question; Van Karnebeek would have no objection to Jaspar’s presenting the Dutch solution for the Wielingen in the Belgian parliament as a Belgian success (the Netherlands was in fact already relinquishing its claims): „He still feels uneasy about the Limburg question for the sake of the Belgians”; confirmation by Delacroix of the British tendency to make matters easy for the Russians and to be content with a minimum (leaving pre-war debts to bond holders, being content with a simple acknowledgement of liability, acceptance of usufruct („jouissance”) and the institution of mixed courts of law, without further adjustments).

260 27.4.1922
Ditto
Ditto: Morning conference of ex-neutrals at Swiss quarters to discuss projected meeting of sub-committee; Motta’s account of his talk with Lloyd George about the proposal the latter intended to make to the Soviets; chance of rupture with the French who insisted on „restitution de la propriété” and rejected „jouissance”; the latter was based on their fear that the socialists in the different countries would seize upon it in their increasing efforts to transfer ownership to the State; the writer’s objections to being grouped a priori with the British or the French (personal preference for French views, but preferred supporting the British to risking a breakdown of the conference); his support for a comparative study, and disapproval of Motta’s reconciliation proposal based on the idea of instructing the committee of experts to determine whether the two plans were not, after all, compatible (the writer’s objections to such intervention in the conflict were confirmed by the unfavourable reception accorded this step taken by Motta). Van Karnebeek’s criticism of the way things were going at the conference (insufficient information owing to non-distribution of essential documents); Skirmunt’s complaint about this had been rejected by Schanzer and the writer had the impression that the latter in fact regarded the meetings as conferences of the Supreme
Council with invitees who had to take care not to interfere; Schanzer’s rejection of Motta’s suggestion that the experts should work out a compromise, and his observation that the issue here was a political one; Schanzer’s counter-proposal - after reading out the British and French preambles - that an editorial committee drawn from the political sub-committee should bring the two texts into harmony as far as possible; Fentener van Vlissingen had heard from Bücher about the visit to Genoa of five prominent industrialists and businessmen to enquire whether they could work together with Germany and Russia; Wirth’s toast the previous evening to Chicherin on the significance of Rapallo for the international proletariat; Italy’s subordination of everything to financial speculation.

261 29.4.1922
Ditto
Ditto: Luncheon with Rathenau, Mendelssohn and Kreuter: Rathenau had discussed (1) psychological nature of reparations problem (could be settled only after election of a new parliament in France); (2) high hopes set on Morgan’s joining the committee on the new loan and (3) Germany’s „Zwangslage” arising from position vis-à-vis Russia and Powers, Treaty of Rapallo a move intended to counteract disadvantageous position. Afternoon conference with Schulthess on economic and financial matters. The writer’s objection to Schulthess’ intention of addressing the plenary session on behalf of the „neutrals” (his intention to speak there himself cf. No. 238); Schulthess’s reluctance to refrain from assuming a measure of leadership. Struycken’s account of another clash between Barthou and Lloyd George during the afternoon conference of the political sub-committee.

262 29.4.1922
From Beelaerts van Blokland (Genoa)
Belgian question: decision of Council of Ministers that discussion with Jaspar should continue and could (Cf. Nos. 246-7) if necessary be carried out on neutral ground (London).

263 29.4.1922
From Beucker Andreae
Spitsbergen (mining regulations): further to Nos. 175 and 217; a Note to be sent to London only, or to London and Christiania? According to Rappard, little support to be expected from Sweden, and fear of meeting with a rebuff; new Norwegian chargé d’affaires on shape of concession (parallelogram or rectangle); deferment of Netherlands reply if Norway should fail to give satisfaction on any point.
264 29.4.1922
From De Geer
Relief Credits (Central Europe): complaint about administrative procedure; figures relating to the credits for Austria, Hungary, Poland, Romania, Serbia and Czechoslovakia to that date; payment urged of the interest still owed by each of those countries.

265 30.4.1922
From Van Karnebeek (Genoa) to Ruys de Beerenbrouck
Belgian question and other matters: inquiry as to the reasoning behind No. 262; astonishment at the suggestion (made in The Hague) about consultation in London; such meddling would be unwise at this moment; ascertainment of the mood in Brussels by Theunis on basis of Jaspar-Van Karnebeek talks in Genoa; chaotic situation at the conference; dissatisfaction there with the „meeting of the Conseil Suprême with a few other states around it”.

265A 2.5.1922
Annex
Enclosure of the telegram referred to in No. 265.

266 30.4.1922
Van Karnebeek’s diary
Genoa Conference: festivities at Pegli in honour of the delegations; departure of Branting and Trygger in mood of despondency.

267 1.5.1922
Ditto
Ditto: Arrangement with Avenol to include Ter Meulen in a small committee set up to devise a formula for Russia’s pre-war debts on the basis of the French proposal; discussion with Schanzer on further procedure; ex-neutrals’ objections to arrangements which implied confirmation of the peace treaties; heated debates on the question of restitution; Japanese objection to „tame” attitude of the French at the meeting of the political subcommittee; Struycken felt that Europe was busily engaged in erecting a scaffold upon which capitalism and ownership were to expire; conflict between Poincaré and Barthou.

268 2.5.1922
Ditto
Ditto: Van Karnebeek’s discussion with Lloyd George before the former’s departure for the Netherlands; the latter’s fear of rejection by the Russians of the demands made, and of a conflagration in the Balkans; peace treaties as „res inter alios jacta”; more about the disagreement between France and Belgium; Barthou instructed to be accommodating.

269 3.5.1922
Ditto
Ditto: Account of a satisfactory plenary meeting.

270 3.5.1922
From Snouck Hurgronje to Ruys de Beerenbrouck
Ditto: enclosure of a telegram from Van Karnebeek concerning his discussion with Lloyd George; the latter’s request to Van Karnebeek to postpone his departure because of possible difficulties with Russia (preparing for war against Poland) and the chance of having to take „decisions of the utmost importance” in the event of failure of the conference (Cf. Nos. 265-265A); less pessimistic views in other circles in Genoa.

271 3.5.1922
From Van Vredenburch (Brussels)
Ditto: discussion with Theunis, head of the Belgian delegation, on (1) the worsened atmosphere in Genoa as a result of the Treaty of Rapallo and British-Russian cooperation; (2) Jaspar’s proposals to the Economics Committee on 1 May concerning amendment to Art. 6 (compensation and indemnity) of the memorandum to be sent to the Russians, and Lloyd George’s fear that this might lead to a breakdown of the conference; (3) Belgium stood alone in the defence of the proposal referred to under (2); danger of sanctions in the event of Germany failing to pay reparations by 31 May; Francophile attitude of the Belgian minister Theunis; his derogatory remarks about Krassin and Chicherin and pessimism regarding the revival of Russian industry.

272 4.5.1922
Van Karnebeek’s diary
Ditto: long discussion with Benes; the latter’s fear of a Russian refusal and failure of the conference; exchange of views concerning the non-aggression pact and Art. 10 of the League of Nations Covenant (Cf. No. 258); repeated objections, also vis-à-vis Benes, to ratification of the peace treaties (res inter alios jacta). Discussion with Jaspar (Cf. No. 273) and dinner with Schanzer, Bratianu, Lloyd George, Benes et al. as guests of the Japanese.

273 4.5.1922
From Van Karnebeek
Belgian question: report on his second talk with Jaspar (Cf. No. 272) at Villa Farfati near Genoa, in accordance with arrangement referred to at the close of No. 272; Netherlands’ Wielingen standpoint unacceptable to Belgium and this question had been linked to the defence of Limburg; both parties’ adherence to their own points of view and Van Karnebeek’s objection to Jaspar’s suggestion that the matter be left in abeyance; press communiqué.

274 5.5.1922
Van Karnebeek’s diary
Ditto: composition of Note for Belgium concerning Wielingen arbitration and (Genoa Conference) of Note to political sub-committee on the draft presented to the Russians.

275 5.5.1922
From Ruys de Beerenbrouck to De Geer
Relief credits (Central Europe, with special reference to Austria): revival of priority on expiration of term of twenty years during which it was to
be suspended; need to refrain from what could be construed as obstructing the granting of a reconstruction loan in accordance with the plans of the League of Nations’ finance committee.

276 6.5.1922
Van Karnebeek’s diary
Genoa Conference: unfavourable impressions gained during and after conference; further talk with Jaspar on Belgian question (Cf. No. 277); no unconditional rejection of arbitration by Belgians; dinner with the Swiss; the writer was seated between Wirth (preoccupied with the sharp rise in prices in Germany) and Banffy (preoccupied with Benes’ intentions with regard to non-aggression pact, exclusion of dynasties - Emperor Charles of Hungary - and enforcement of peace treaties).

277 6.5.1922
From Van Karnebeek
Belgian question: report on the third discussion with Jaspar in Genoa; the writer had handed him the Netherlands draft press communiqué; Jaspar’s objections to announcing at that stage that arbitration was being considered in the Wielingen question; Van Karnebeek was opposed to a communiqué which would in fact be the same as the one issued in August 1921; further consultation on the wording of the communiqué.

278 8.5.1922
Van Karnebeek’s diary
Genoa Conference: Note to Lloyd George on non-aggression pact; talk with Schanzer on Scialoja’s efforts to reach a restitution arrangement acceptable to all parties; rumours of possible failure of conference not taken too seriously by Schanzer (after discussions with Lloyd George and the Russians); more generous credits for the USSR in the form of advances for payment of goods supplied (no loans from State to State); Schanzer’s rejection of adjournment; further complaint about procedure at conference; talk with Lloyd George (who was much less tenacious than the writer) at Miramare.

279 9.5.1922
Ditto
Ditto: Preliminary discussion with Patijn and Struycken on convening a meeting of ex-neutrals to discuss non-aggression pact (need for adding to it a provision to the effect that it would terminate when all signatories had joined the League of Nations); pessimism about regulating Russian debts (suspicion that USSR would not be satisfied with prospects offered).

280 10.5.1922
Ditto
Ditto: Conference of ex-neutrals (Cf. No. 279); speculation on the Russian reply expected that day; resolution of host countries to table the
questions of Georgia and Eastern Galicia (continuation of conference); management of affairs by small clique who took notice of others only when they needed them; the writer’s view that the members of the Supreme Council would have done better to deal exclusively with one another.

281 11.5.1922
Ditto
Ditto: Luncheon as guest of Facta and Schanzer at Villa Reggio (with Lloyd George, Evans, Patijn, Struycken and others); Lloyd George in agreement with Netherlands proposals regarding non-aggression pact and his tending towards adjournment of conference as being useful for study of Russian problems; reply by Van Karnebeek that the German problem was equally important; Lloyd George evasive about reparations and his view that Bolshevism existed only on paper (restoration of private property on a wide scale); the writer’s reference to persecution in Russia of priests and socialists; Russian reply read aloud by Barthou during discussion; the latter’s remarks on recognition of the Soviets after a trial period and on Lloyd George and Schanzer’s „scheming” with the Russians; discussion with Avezzano and dinner with Barthou, the guests including U.S. Ambassador Child who enquired whether the Netherlands and the USSR were conducting separate negotiations and expressed the view that France was in the process of regaining moral leadership in Europe and that the Netherlands would have to assume that task if France should prove to be incapable of it; his opinion on the reply (free from polemics) to be given to Russia and his objections to the U.S. taking part in the conference.

282 11.5.1922
From Van Karnebeek to Snouck Hurgronje
Ditto: Request for information concerning an alleged claim by Shell to a monopoly in the USSR.

283 12.5.1922
Van Karnebeek’s diary
Ditto: dinner with Lloyd George at Villa de Alberti; he considered the reply to the Russians to be „sharp, but not on a very realistic level”; impossibility of agreement on that basis, while the negotiations must nevertheless be continued.

284 12.5.1922
From Snouck Hurgronje to Ruys de Beerenbrouck
Ditto: enclosure of a telegram from Van Karnebeek dated the previous day, expressing the expectation that the conference would founder; he asked whether another cabinet member could deputise for him in the debate on his estimates in the Upper House, feeling that
public opinion would not condone his absence from Genoa at such a critical moment; Ruys willing to take his place in the House.

285 12.5.1922
From Snouck Hurgronje
Spitsbergen (mining regulation): Norwegian chargé d’affaires’ insistence on reply from the Netherlands in connection with British desire for early settlement of the matter; request for telegraphic instructions as to whether Netherlands objections and definitive proposals should first be communicated to Britain only, or simultaneously to Britain and Norway (Cf. No. 263).

286 13.5.1922
Van Karnebeek’s diary
Genoa Conference: Barthou’s objections to new Russian committee and his rather unfavourable impression of non-aggression pact; consultation between Barthou and Lloyd George; Facta was urging Van Karnebeek not to return home yet; reflections on unsatisfactory state of affairs and fiasco of the Supreme Council; writer’s criticism of the fact that the most important issues were dealt with outside the committees (absence of legal basis); unwarranted disregard of the Baltic States in dealing with the Russian questions; unfavourable verdict of various delegates and real appreciation of Jaspar’s courage; good showing by the Netherlands („without becoming entangled in controversies and bickering as between France and Britain”); the Germans had practically ceased to exist at conference since the Treaty of Rapallo.

287 13.5.1922
From Van Karnebeek
Belgian question: account of further discussion with Jaspar in the presence of Struycken, at Palazzo Reale, Genoa, regarding the press release referred to in No. 277.

288 13.5.1922
From Snouck Hurgronje to Ruys de Beerenbrouck
Genoa Conference: enclosure of a telegram from Van Karnebeek referring to the unlikelihood of agreement with Russia and the establishment of a committee for maintaining the contact established with Russia thus far; non-aggression pact for the duration of the committee’s deliberations; Van Karnebeek’s expectation of failure here as well.

289 13.5.1922
Report from the Colonial Ministry
Yap Cables and DNTG: further details regarding the new Netherlands company to be founded (capital participation by Eastern Extension, concession from the Netherlands government for operating the Yap-Menado cable); appointment of representatives (who for the first five years were required to be Dutch nationals); maintenance and repair of cables; working agreement with Commercial Pacific for laying Menado-Manila cable; joint purse agreement with Eastern Extension and associated companies and with Northern for traffic between the Netherlands East Indies and Europe via the Menado-Manila and the Napa-Shanghai cables.

290 14.5.1922
Van Karnebeek’s diary
Genoa Conference: Van Karnebeek’s departure from Genoa; Patijn had remained behind.
291 14.5.1922
From Netherlands delegation in Genoa
Ditto: minutes of a meeting of the delegation leaders from Belgium, Britain, France, Italy and Japan held on that date (recommendation that the sub-committee of the First Committee meet without the Germans and Russians).

291A
Annex: Projet de clause à communiquer à la légation Russe.

292 15.5.1922
From Patijn (Genoa)
Ditto (continuation in The Hague): agreement in sub-committee on proposals to be made to Russia; setting up of a Russian and non-Russian committee (excluding Germany) at the invitation of the United States; meeting of the non-Russian committee planned for 15 June in The Hague; expected arrival of the Russian committee on 26 June; agenda for the meetings (debts, property, credits); non-aggression and abstention from propaganda.

293 15.5.1922
Ditto
Ditto: further telegram about the choice of The Hague as location for the follow-up conference; request for authorisation to announce that the Netherlands was in agreement and to call upon Chicherin.
294 17.5.1922
To Patijn
Ditto: authorisation requested in No. 293 given; request for notification of the reasons for choosing The Hague (in the Netherlands „ni désir ni intrigues”); police surveillance of Soviet delegates in the Netherlands.

295 17.5.1922
From Patijn (Genoa)
Ditto: probable acceptance of the (amended) proposal by the Soviets and Chicherin’s preference for meeting in a friendly country with which normal relations were maintained (statement to journalists); Chicherin’s criticism of „disobliging attitude of Netherlands delegation” in Genoa; weak support from Lloyd George for The Hague and Patijn’s abstention from démarche in favour of it; Chicherin’s rejection of non-membership of Germany on non-Russian committee on the grounds of the arrangement already made independently between Germany and Russia at Rapallo (participation of Poland, despite similar agreement concluded between Poland and Russia).

296 17.5.1922
From Patijn (Genoa)
Ditto: Chicherin’s attitude seen as insurmountable objection to courtesy visit as long as the choice of The Hague was not definite.

297 17.5.1922
To Patijn
Ditto: Chicherin’s statement construed as „prétexte et manoeuvre parce qu’on désire autre endroit”; concurrence with contents of No. 296 and in the event of courtesy visit Netherlands attitude to be explained („qui s’inspire de l’idée de l’accord collectif avec Russie, qui est le sens de la conférence et que pour cette raison les Pays-Bas se sont abstenus d’action séparée”); expectation that the new meeting would be more in the nature of a new conference than of a committee meeting; doubt about its advisability if a fresh fiasco were to be expected in June.

298 17.5.1922
From Patijn (Genoa)
Ditto: Chicherin’s rooted objection to going to The Hague, threatening rupture, and Patijn’s statement that the Soviet committee would receive the same treatment as the non-Russian committee; Chicherin no longer opposed to plan.

299 17.5.1922
To Patijn
Ditto: in view of the conflict of opinion regarding the venue of further meetings, the conference need not consider itself committed to The Hague („gouvernement ne désire pas créer complications, mais ne regretterait pas si la commission se réunit ailleurs”).

300 17.5.1922
From Patijn (Genoa)
Ditto: confirmation of definitive choice of The Hague.

301 17.5.1922
From Patijn (Genoa)
Ditto: further details concerning No. 300; initial objections on Chicherin’s part (poor connection between The Hague and Russia) and change in Lloyd George’s attitude (Cf. No. 295); his vigorous defence of The Hague on the grounds of its „international atmosphere”; Michiels van Verduynen (Prague) recommended as secretary general of the forthcoming conference, and Van Blankestein as press secretary.

302 18.5.1922
From Beelaerts van Blokland
Ditto: Notes on No. 282; denial by Dr. H. Loudon of any Shell monopoly in Russia and furthermore of the existence of any agreement; mere thought of this „abhorrent to the company”; talks with Krassin confined to consultation on nationalised former property of Shell; unrealistic offer by Krassin of a contract.

303 19.5.1922
From Beelaerts van Blokland
Morocco: Notes on the Von Motz affair (offer of
compensation by Spanish government in connection with liquidation of his business); continuance of the Netherlands claim (difficult to specify) of Fls. 15,000; less favourable chances of arbitration; Von Motz could return to Morocco only at his own risk.

304 21.5.1922
To Everwijn (Washington)
Genoa Conference (continuation in The Hague): reason for the decision to hold the meetings in the Netherlands was that this presented the last chance of preventing the nations - inspired by the Treaty of Rapallo - from concluding separate agreements with Soviet Russia and thereby securing the same advantages as Germany; Hughes to be urged to take part by pointing out the technical nature of the discussions.

305 22.5.1922
To Van Vredenburch (Brussels)
Belgian question: enclosure of the text of the press release referred to in Nos. 273, 277 and 287; criticism of the deviations from the agreed text.

305A
Annex
Text of the press release forwarded by the Envoy on 17 May.

306 24.5.1922
From Loudon (Paris)
France and Belgium: objections to abolition of post of Netherlands military attaché in view of military pact concluded between Belgium and France; attaché should not be recalled before true limitation of arms had been effected and the political horizon in Europe had brightened.

307 26.5.1922
From Van Panhuys (Berne)
Rhine navigation: report on the solution reached in April to the question of the lateral canal; Swiss dissatisfaction with the attitude of the Netherlands in this matter (cf. No. 134).

308 27.5.1922
To De Marees van Swinderen (London)
Continuation of Genoa Conference in The Hague: private communication setting out the reasons for a further meeting in the Netherlands: incidents in the summer feared by Poland and Romania if the Genoa Conference should end inconclusively („gaining time until the season in which military operations in the East could no longer take place”); efforts to achieve collective agreement in order to prevent a rush to conclude separate agreements with Russia under the pressure of Rapallo (cf. No. 304); Van Karnebeek’s initial preference for Stockholm; agreement on The Hague as concession to the general interest (notably Lloyd George’s); chairmanship to be placed in the hands of one of the powers that had borne responsibility for Genoa, with honorary chairmanship and possibly deputy-chairmanship for the
No.
Date; From/to
Description
xcv
309 29.5.1922
To Everwijn
(Washington)
310 29.5.1922
To Jaspar
311 29.5.1922
From Advisory
Committee for
Problems of
International Law
312
313
314
3 15
316
30.5.1922
From Van Vredenburch
(Brussels)
30.5.1922
From Van Vredenburch
(Brussels)
31.5.1922
From Hughes to
Everwijn
(Washington)
1.6.1922
From French
Government
1.6.1922
Netherlands.
Ditto: Hughes to be urged to take part (if need
be ad audiendum); advisable for U.S. Secretary
of State to approach the chairman of the Genoa
Conference for this purpose, without mentioning
that the suggestion had come from the Netherlands.
Belgian question: formal offer to submit Wielingen
dispute to arbitration or to the Permanent
Court in the spirit of the agreements of 1907 and
the League of Nations Covenant so as to remove
the sole point of controversy still impeding the
signature of the draft treaties.
League of Nations: Report on discussion of the
Second Assembly pertaining to Art. 16 of the
Covenant under the terms of which it was conceivable
that force could be used in defence of
the international legal order whilst respecting the
sovereignty of states (Van Eysinga: attack on one
state regarded as attack on all); objections to
resolutions which constantly weakened the purport
of Art. 16.
Belgian question: report on No. 310; fundamental
objections of Jaspar to the arbitration proposal
on the grounds that the Netherlands-Belgian draft
treaty itself was already unpopular enough in
Belgium; Vredenburch’s reaction to this and his
view that Jaspar would not attempt to solve the
Wielingen question unless forced to do so by
Flemings and socialists. Non-advisability of agitation
against Jaspar (,,whose head was still
adorned with the halo of Genoa”) at that juncture.
Ditto: call on Jaspar in connection with No. 305.
The latter’s promise that the matter would be
gone into.
Continuation of Genoa Conference in The Hague:
instructions the State Department had sent Ambassador
Child in Genoa on 17 May regarding
Russian participation in the work of an economic
committee of inquiry, on condition that Russia
withdraw the memorandum of 11 May.
Genoa Conference (continuation in The Hague):
primary need for a ,,plan d’ensemble très clair et
très complet” in regard to Russian recovery, to
be accepted by the Soviets. Impossibility of
having such a plan ready by 20 June.
Ditto: Talk with Hughes with reference to No.
From Everwijn
(Washington)
317 2.6.1922
To Van Panhuys
(Berne)
317-A 24.5.1922
Annex
317-B 31.5.1922
Annex 2
From François
317-C 12.6.1922
Annex 3
From Aalberse
318 2.6.1922
From De Geer
319 2.6.1922
From Van Vredenburch
(Brussels)
320 3.6.1922
From Van den Bosch
(Reval)
321 3.6.1922
From De Marees
van Swinderen
(London)
322 4.6.1922
304: non-dispatch of a delegate ad audiendum or
observer by United States. Offer to come to The
Hague for oral consultation.
International Labour Organisation: resolution of
League of Nations Council of 12 May requesting
ruling by Permanent Court on whether the
Netherlands delegate to the third ILO conference
had been appointed in conformity with Art. 389
of the Treaty of Versailles. Surprise at such ,,interference
without prior consultation with the
Netherlands government”.
Relevant report by correspondent of N.R.C. (Rotterdam
Daily) of 29 May: note on what was considered
misleading presentation by A. Thomas
suggesting that acceptance of the resolution proposed
by the ,,Commission des vérifications des
pouvoirs” was tantamount to acceptance of the
motion tabled in the Council of the League of
Nations. (Bulletin No. 7, pp. 10 and 8 resp.).
Note on interpretation of Art. 389 of the Treaty
of Versailles: outline of the procedure by which
the Netherlands labour delegate had until then
been appointed: sectarianism of the trade unions
in the Netherlands, as a result of which none of
the five general unions could be regarded as truly
representative.
Relief credits: abolition of pledging of Austrian
State assets instituted in order to make the Ter
Meulen League of Nations plan possible.
Belgian question: writer’s objections to interlocking
Belgian-Netherlands military measures prior
to revision of the 1839 treaties. Arbitration was
the only way left.
Genoa Conference (continuation in The Hague):
meeting with Litvinov on the train from Berlin
to Reval. Probable composition of Russian delegation
(Joffe instead of Litvinov); Soviets
could be expected to be more accommodating in
The Hague so as to secure recognition and credits.
Litvinov on inevitability of world revolution.
Ditto. Talk with Eyre Crowe: the latter’s ignorance
regarding British attitude at the Conference,
where the Foreign Office would not be represented.
His optimism about German reparations
payments and his belief that aggressive intentions
were not the reason for the Russian troop concentration
on the western frontier.
Netherlands Naval attachés: hopes that Colonel
323
323-A
323-B
324
325
326
From De Marees
van Swinderen
(London)
6.6.1922
From Kikkert
Annex 1
From Beucker Andreae
Annex 2
From Beucker Andreae
6.6.1922
To De Marees
van Swinderen
6.6.1922
To De Marees van
Swinderen (London)
and Loudon (Paris)
8.6.1922
From Van Vredenburch
(Brussels)
327 9.6.1922
Sluys would not be recalled as such from London.
Importance of keeping in direct touch with developments
in the British naval programme so as
to avoid alarming isolation.
Rhine navigation: articles of the Versailles Treaty
and revision of the 1868 Treaty of Mannheim:
difference of opinion as to whether there had
been unanimous approval of the resolutions
adopted by the Central Rhine Navigation Commission,
and consequent suspension of the debate
in the Netherlands Parliament on the Bill pertaining
to Netherlands entry.
Note relating to applicability of Art. 46 of the
Treaty in regard to shippers’ patents.
Second note concerning the interpretation of
Art. 46 as laid down in the Government’s explanatory
note to Art. 356 of the Treaty of Versailles.
Genoa Conference (continuation in The Hague):
Van Karnebeek’s doubts about accepting chairmanship
(should, rather, go to one of the major
host powers); likelihood of fierce controversies
during the debates; need for prior agreement between
Britain and Italy; possible honorary chairmanship
for the Netherlands; pessimism about
the outcome of the conference.
Belgian question: Jaspar’s opposition to arbitration
referred to in No. 319; his continued linking
of settlement of Wielingen dispute with a
military agreement on Limburg; Van Swinderen
on weakening of Belgian position through rejection
of arbitration.
Ditto: discussion with de Broqueville on the Van
Karnebeek-Jaspar meeting in Genoa (Cf. Nos.
246-47, 272-73, 277, 287, 305, 310 and 312).
Detaching the Rhine provinces from Prussia and
elevating them to the position of a separate state
under the protection of the Netherlands and Belgium
preferred by de Broqueville to the annexationist
policy he had persistently advised the
King against. Possible effect of the breaking off
of the negotiations on the Flemish movement.
Need for early Belgian co-operation in view of
expiration of 5-year term within which Belgium
could by virtue of the Treaty of Versailles force
Germany to settle the question of the Rhine
canal.
Genoa Conference (continuation in The Hague):
328
329
Minutes of the
Council of Ministers
9.6.1922
From Van Welderen
Rengers
(Constantinople)
9.6.1922
Minutes of joint meeting
of Foreign Affairs,
Public Works and
Marine
330 10.6.1922
From De Graaff
330-A
331
Annex
From C. Snouck
Hurgronje
(Leyden)
10.6.1922
Minutes of the
Committee on International
Law
designation of Struycken and Snouck Hurgronje
as experts.
Turkey: abolition of the capitulations: unilateral
Young Turkish declaration of 1914: impracticability
in the Netherlands (arbitrariness of Turkish
justice officials and insufficiently developed
Turkish system of law); objections to the voluntary
surrender of a justifiable cultural privilege
and inclination to participate in the deliberations
of a preparatory committee on reforms with a
view to the replacement of the capitulatory instruments.
Discussion of public works and Belgium: evaluation
of Zealand plan for damming up the Eendracht
and the Zandkreek; dredging near Bath;
Belgian complaints about condition of Wemeldingse
Vlije and Belgian objections to Netherlands
plan for a lateral canal in Limburg; factual
and legal problems; was the Meuse (common to
both countries) a navigable or a non-navigable
river?
Turkey: abolition of capitulations (Cf. No. 328):
enclosures of annex; the Netherlands’ reduced
interest in maintenance of the capitulations
owing to the cessation of Turkish sovereignty
over the holy cities and Jiddah.
Little enthusiasm on the part of intellectual Mohammedans
in NEI for Pan-Islamic views and no
gratitude towards the Netherlands administration
for passive co-operation in abolition of the capitulations.
Revision of League of Nations Treaty: economic
pressure from non-belligerents; prize courts and
blockade; special cases provided for in Art. 16;
rupture and reprisals; limitation of Art. 4 by 5th
resolution; what would become of Art. 16 if
there were a recurrence of the 1914 situation?
Did Art. 16 require the Council to be accessible
to all States? Unanimity on the intention to exclude
the violator? Possible amendment of resolutions
7 and 9; participation of Small States in
the case of resolution 9 (as regards the blockade
of Germany, the Netherlands the obvious choice
for blockading the river Ems); implementation of
Art. 16 expected to proceed slowly owing to
very gradual increase in means to bring pressure
to bear.
332 12.6.1922
To van Dijk
333 12.6.1922
Van Karnebeek’s
diary
334
335
335-A
336
13.6.1922
Ditto
13.6.1922
From Colonel Sluys
(London)
12.6.1922
Annex
From British Government
13.6.1922
From Patijn
337 14.6.1922
From François
League of Nations (arms reduction); supplement
to reply to question about not increasing military
expenditure for two years in connection with the
naval estimates. Decrease of Fls. 71.000.000 in
expenditure on the Netherlands defence budget
for 1922 compared with 1921, as against a decrease
of Fls. 2.000.000 in the naval estimates
over the same period.
Genoa Conference (continuation in The Hague):
talk with Benoist about his acting as representative
of France and non-acceptance of the chairmanship
by Britain or France. His view that the
Netherlands should not pursue neutrality to the
point of declining the chairmanship (Cf. No.
324); Van Karnebeek’s reply that the Netherlands
could not bear any responsibility for a conference
about which it had been neither consulted nor informed;
the conflicting views and confusion that
had already become evident in Genoa; Van Karnebeek’s
condemnation of the way international
consultation on important political issues had
been handled.
Ditto: Britain’s agreement with the proposed arrangement
of an honorary chairmanship for Van
Karnebeek and decision on the presidency by the
conference itself.
Aviation Conference, Paris 1919: Netherlands
objection to Art. 5 (originally directed against
Germany) which was no longer relevant. Britain
wanted an Article of that tenor in order to bring
pressure and repression to bear on profiteering
non-member and member States which did not
fulfil their obligations.
Memorandum refuting the objections mentioned
in the preamble.
League of Nations loan to Austria (credits):
Danish trade treaty with the Soviets and action
by other countries (claims upon the Soviets in
respect of securities and amounts owing).
Terms of reference for third League of Nations
conference: adherence to general guidelines used
for the first and second conferences; election of
president, work of Council and secretariat; Art.
19 as basis for the various other articles, notably
Art. 16; need from the point of view of legal
security for a uniform interpretation of Art. 18;
Netherlands backing of requests for admission
338 14.6.1922
Van Karnebeek’s
diary
339 14.6.1922
From Van Vredenburch
(Brussels)
340 17.6.1922
Van Karnebeek’s
diary
341 21.6.1922
From Van Dijk
341-A 1.6.1922
Annex
From Beelaerts van
Blokland
342 21.6.1922
(Austria, possibly Germany); support for any
general plan for the reduction of arms and for
implementation of Art. 8, A1.5; reflections on
the committee reports on the Bills of Exchange
law and the Conference of Barcelona; countering
opium abuse; conciliation; better allocation of
the costs of the League; rules governing the election
of permanent members of the Council; election
procedure in conformity with the 1921
Orange Paper, page 20.
Genoa Conference (continuation in The Hague):
account of talks with Marling and Graeme, who
had stated that Britain was counting on Van Karnebeek’s
chairmanship (which he did not desire);
talk with Benoist: dependence of Van Karnebeek’s
decision on attitude of France; announcement
by Benoist that as head of a ,,commission
d’étude” he would for the present act only as an
observer; insistence on his part, too, that Van
Karnebeek should accept the chairmanship.
Belgian question: enclosure of analytical report
on session of Belgian Parliament of 13 June,
paraphrasing annexationism, Wielingen dispute
and other matters relating to the revision treaty;
British support for Belgium in negotiations.
Speech by Theunis regarding the deliberations of
the bankers in Paris; unwillingness on the part of
Belgium and France ,,d’admettre une amputation
de leur créance qui n’aurait pas de contre-partie”.
Genoa Conference (continuation in The Hague):
Avezzano’s request, also on behalf of Lord
Graeme, to Van Karnebeek concerning chairmanship
of the non-Russian committee, where what
mattered was his personal qualities, not his office.
Van Karnebeek’s reluctance to refuse point blank.
Netherlands naval attachés (Cf. No. 322): insistence
on retention of Colonel Sluys in London in
1923, mainly in connection with the continuing
chance of the Naval Act being passed in the
Netherlands; objections to his being employed in
the Netherlands with periodic official visits to
Britain, since the vital contacts he had built up
might then be lost.
Memorandum expressing agreement with Van
Dijk’s reasoning but suggesting that this argued
more for transferring Van Sluys to the naval budget.
United States: Fock advised against compliance
From Fock (Batavia)
to De Graaff
343 23.6.1922
From Snouck
Hurgronje
344 23.6.1922
From Van
IJsselsteyn
345
345-A
346
347
24.6.1922
From Pustoshkin to
Beelaerts van Blokland
26.6.1922
Annex
From Beelaerts van
Blokland
27.6.1922
Van Karnebeek’s
diary
27.6.1922
From Van Eysinga
347-A Annex
From Kikkert
with the requests from the US consul in Batavia
and the US consul general in Singapore for confidential
information in view of the inevitability,
ultimately, of war between America and Japan,
in which the Netherlands would co-operate with
America; he felt that discussion of this kind was
outside the competence of the NEI government.
Poland (clothing credit) (Cf. No. 176): Request
to urge Ministry of Finance to retract refusal to
heed Polish request.
Germany (coal credits): unemployment in IJmuiden
fishery industry owing to inability to compete
with the selling prices of German trawlers;
possible decrease in monthly deliveries of coal by
Germany of, say, 20,000 tons in exchange for
German undertaking that those trawlers would
avoid the port of IJmuiden for the duration of
the arrangement.
Russian diplomatic mission in the Netherlands:
request for retention of the (Tsarist) legation on
the grounds of the need to allow continuation of
,,une autorité Russe non-bolchéviste” in all countries.
Russian agreement with retention of Pustoshkin
on the diplomatic list as first secretary (instead
of chargé d’affaires); deletion from the list of the
(absent) military attaché and the commercial attaché
(residing in Brussels) in order to reduce the
staff of the former Russian legations to the
smallest possible proportions.
Norway (Spitsbergen question): Willingness to
abandon the idea of an international conference
on the matter only if Oslo took fuller account of
Dutch wishes, which it had declared unacceptable.
Rhine navigation: appointment of members of
Central Commission; ,,Kleinstaaterei” desired on
the part of the Entente powers, as against the
statement by the German envoy that the small
German States wished their delegates to be regarded
as a Reichs delegation, and not as representatives
of riparian states; that question to be
measured against the provisions of the Treaty of
Versailles; formal untenability of German standpoint
but little inclination on the part of the
Netherlands to oppose that standpoint.
Agreement with Van Eysinga’s reasoning.
347-B
347-C
347-D
347-E
347-F
Annex 2
From Van Karnebeek
15.6.1922
Annex 3
From Nederbragt
17.6.1922
Annex 4
Second note from
Kikkert
22.6.1922
Annex 5
From Snouck
Hurgronje
Annex 6
From Van Karnebeek
348 28.6.1922
From De Graaff
348-A 21.6.1922
Annex
From C. Snouck
Hurgronje
(Leyden)
349 28.6.1922
To Van Panhuys
(Berne)
350 28.6.1922
From George
Notes expressing doubt whether the Netherlands
interest in this case warranted diplomatic negotiation,
and his disinclination to lend support to
attempts to dismember Germany.
Suggestion not to react to the German statement
referred to in No. 347 in consideration of the
fact that this could never be construed as the surrender
of any right by the Netherlands.
Agreement with plan to leave the German envoy
out of it for the present and accordingly not to
dispatch an accusé de réception. View expressed
that the matter, which was certain to be raised at
the next meeting of the Central Commission,
should not be allowed to be disposed of without
the Netherlands being consulted.
Reference to the Netherlands’ preference up till
then for treatment of Rhine navigation matters
(including navigation rights) by the riparian
states rather than by the German State.
Request to draft a formula making it clear to
Germany that the matter could not be decided
without reference to the Netherlands, leading to
a recommendation (24 October) to the Netherlands
delegates to abstain in the Central Commission,
while pointing out that the matter definitely
concerned the Netherlands.
Aid for Turkish refugees: desire to prevent the
establishment of a committee, as referred to in
the Annex, in the NEI.
The writer could understand the action of Boon
and Nijpels, reported by Van Welderen Rengers
(envoy in Constantinople), to induce Europeans
resident in the NE1 to lend support to certain
Turkish refugees. Need to refrain from stressing
the Mohammedan character and advice to remain
aloof from all political elements so as to avoid
the reproach of courting Islam.
League of Nations aid to Russian and Armenian
refugees: agreement in principle with the proposals
of the High Commissioner; attention directed
to the small number of Russians seeking
refuge in the Netherlands, so that for such persons
only a small number of identity cards would
have to be issued; the Netherlands was prepared
to make rolling stock and transport facilities
available.
Portuguese trade and tariffs: objections to Portuguese
proposal to terminate the Declaration of
(Lisbon)
351 29.6.1922
From Patijn
352 29.6.1922
To Patijn
353
354
355
356
357
30.6.1922
To Van IJsselsteyn
30.6.1922
Minutes of the
Committee on
International Law
4.7.1922
From Binder
(London)
5.7.1922
Minutes of the 5th
meeting of the Committee
for the Revision
of Trade Agreements
5.7.1922
From König
1894 (S.1896/89) immediately after announcement
of the new tariff, and simultaneously to
open negotiations for a new treaty; preference
for retention of the Declaration for one more
year to provide opportunity for closer study of
the new Portuguese tariff.
Genoa Conference (continuation in The Hague):
view that as Secretary General of the conference
he should not be involved with any measures the
government might take in regard to the residence
of Russians in the Netherlands.
Ditto: agreement with No. 351, but would appreciate
receipt of information and suggestions;
each day that went well was a day gained; prevention
of misuse by the Russians of stagnation
in the negotiations.
League of Nations agenda: pollution of public
waters by industry - unlike pollution of seas and
ports by tankers - less suitable for being dealt
with by the League in view of possibility of consultation
between the individual States concerned;
expectation that pursuant to a resolution
passed by the House of Representatives the US
President would take action in regard to seas and
ports.
Revision of League of Nations Treaty: exhaustive
discussion of Art. 16; draft Bill relating to the
provisions for implementation of Art. 16 in the
Netherlands.
Genoa Conference (continuation in The Hague):
economic reconstruction of Europe: national
relief corporations and their proceeding with the
scheme dependent on further discussion with
their respective governments; a definite decision
to be given 30 days after termination of the
Genoa Conference; difficulty of arriving at such a
decision before the results of the Conference at
The Hague were known.
General survey of current trade agreements: the
position with regard to Albania, Australia, Brazil,
Bulgaria, Germany, Great Britain, Finland,
France, Hungary, New Zealand, Austria, Poland,
Portugal, the border states, Romania, Spain,
Czechoslovakia and Venezuela.
Belgium: damming of Eendracht and Zandkreek
(cf. Nos. 163 and 185); financial objections to
damming of Zandkreek; non-acceptance by the
Netherlands of obligation to keep channels of
358 6.7.1922
From Van Dijk
358-A Annex 1
From Van Dijk’s
ministry
358-B July 1922
Annex 2
From François
358-C 16.8.1922
Annex 3
To League of Nations
Zandkreek at proper depth by means of engineering
or dredging works.
League of Nations (arms reduction): agreement
with the military-political considerations contained
in the first annex; impossibility of furnishing
comprehensive guarantees; fundamental objections
of the Netherlands to the conclusion of
military alliances and preference for the conclusion
of agreements for the prevention of war
and respect for international law; duty of the
Netherlands, within the terms of the League
Treaty, to possess adequate means of repulsing
with the force of arms any violation of its own
rights, pending joint action.
Draft reply to the League in accordance with the
contents of No. 358; continued possibility of
lawful (defence against aggression, participation
in economic boycotts or in League of Nations
military expeditions) and some unlawful wars;
Netherlands military needs for the State and for
the colonies; reflections on international obligations,
geographical location and internal security.
Notes relating to No. 358-A: need for an army
for the maintenance of neutrality and for defence
in the cases referred to in No. 358-A; possibility
of substantial reduction in the armed forces if
the other States decided upon a similar line of
conduct; when would the Netherlands be required
to take part in international action
French text of the reply sent to the secretariat.
359 11.7.1922
To De Geer
Regulation of trade relations with Austria: provision
to be included in the exchange of memoranda
concerning control under the old agreements
of imports of samples, in order to leave
undecided the question whether the Republic of
Austria was bound by the agreements of the former
monarchy; need for consolidation of conditions
in Central Europe and clearer evidence of
the need for new agreements before the conclusion
of a new treaty.
360 11.7.1922
From Quarles van
Ufford
(Rome)
League of Nations mandate (Palestine): summary
of the British White Paper published shortly before;
Pius XI and the report on it given by Cardinal
Gasparri to the Dutch envoy.
361 15.7.1922
To Van Panhuys (Berne),
Rappard (Copenhagen)
and Sweerts de Landas
Wyborgh (Stockholm)
362 15.7.1922
From Nederbragt
363 15.7.1922
From König
364 15.7.1922
From Emir El Djabri
and Suleiman Kanaan
365 16.7.1922
From Litvinov to
Patijn
366 17.7.1922
From Patijn to
Litvinov
367 18.7.1922
From De Geer
League of Nations agenda: unacceptability of the
increase in the League’s annual budget by nearly
4.5 million gold francs (from Frs. 20,873,945 to
Frs. 25,248,190) in view of the efforts being
made everywhere just then to reduce costs.
Freedom of transit: note relating to the Barcelona
agreement of 20 April 1921 signed by the Netherlands
on 28 November 1921. Strangeness of the
priorities in the explanatory memorandum of the
Dutch enabling Act; signature only for the Kingdom
in Europe; articles 2 and 5 and distinction
made between nationalities in respect of passport
and visa requirements for transit.
Germany (coal credits) and unemployment in
IJmuiden fisheries (further to No. 344): objections
to restrictions on the sales of German
catches by closing the fish market to foreign
nationals or raising the tariffs; to achieve effect
the tariffs would have to be increased more than
sixfold.
League of Nations mandate (Syria and the Lebanon):
request to disclaim all responsibility for
the ,,régime périlleux pour la paix du monde” resulting
from France’s misuse of the mandate
granted her at San Remo against the wishes of
the Syrian people.
Genoa Conference (continuation in The Hague):
statement of intent of the Russian delegation
(obtaining reconstruction credits and willingness
to discuss indemnification for the old Russian
debts, provided restoration of the private property
of foreign creditors was not made a preliminary
requirement); proposal that the three non-Russian
sub-committees (private property, debts and credits)
be convened with a view to establishing a
basis for resumption of the Genoa talks.
Reply to No. 365, rejecting the proposal referred
to in the closing passage because the chairmen of
those sub-committees ,,ne formant pas un organisme
de la commission non-Russe, n’auraient aucune
compétence dans la matière”; willingness of
the credit subcommittee to meet on 18 July with
the ,,commission Russe” so as to enable the
latter to put forward a better offer.
League of Nations agenda: reference to the annex
in connection with No. 361; unfairness to
the Netherlands of the cost allocation scale then
applied; the writer agreed that the League’s bud-
get should not be increased; amplification of instructions
in the sense that reduction of the
League’s costs should be urged so that the League
might serve as an example to all States of the
sobriety and thrift needed to save the world, and
Europe in particular, from extinction.
367-A 20.7.1922
Annex
the preamble; no objection to support for the
campaign against the spread of infectious diseases;
reservations about firm announcement to the effect
that the Netherlands would no longer wish
to co-operate on the present basis of cost allocation,
as this would be tantamount to giving
conditional notice of termination of membership
of the League; a number of suggestions for economising,
including limiting the number of secretariat
officials, scrapping some items for unforeseen
expenditure, improved auditing and collection
of amounts outstanding; restriction of the
League’s activities (termination of less important
activities such as surveys, etc.).
368 18.7.1922
From Litvinov to Patijn
Genoa Conference (continuation in The Hague):
reply to No. 366: objections to transfer of work
to sub-committees which individually were unable
to reach definitive conclusions; express purpose
of Russian delegation’s visit to The Hague
had been to meet the plenary non-Russian committee,
but it had stranded there on the three
non-competent sub-committees; request for the
convening of a plenary meeting of the two committees
(Russian and non-Russian) for the purpose
of drafting the outlines of a basic agreement;
refusal of the (final) invitation for the session of
the first sub-committee as being contrary to the
,,base de l’égalité des droits”.
369 18.7.1922
From Patijn to
Litvinov
Ditto. Reply to No. 368: acceptance of the proposal
for a ,,réunion plénière des deux commissions’’
(on the 19th) subject to withdrawal of the
demand formulated at the end of No. 365, which
was not to be regarded as a ,,condition préliminaire”,
but as a ,,nécessité pratique en vue d’éviter
les pertes de temps”; defence of the ,,faits et
gestes” of the individual sub-committees; lack of
results achieved there attributable to the obstinate
,,Commission Russe”.
370 18.7.1922
Minutes of the Committee on International Law
Revision of League of Nations Treaty: continuation
of discussion (cf. No. 354) of Art. 16 c.
371 19.7.1922
From Litvinov
372 21.7.1922
Minutes of the
Council of Ministers
373 21.7.1922
Van Karnebeek’s
diary
374 22.7.1922
From Van Panhuys
(Berne)
Genoa Conference (continuation in The Hague):
proposal put forward by the writer at the plenary
meeting to refer certain matters to the governments
concerned by reason of the fact that the
delegates were only experts, not plenipotentiaries;
linking of the acknowledgement of old debts
and indemnification of foreign nationals to
the granting of credits by the Western powers;
the proposal had contained no guarantee
regarding the answer from his government; Patijn’s
comment on the negative attitude of the Russian
delegates and the closing of the session; emphasis
on the fact that ,,the declaration made by the
Russian delegation could not form the basis of an
agreement as it did not embody any working rules
and excluded the possibility of any guarantee ensuring
the effective discharge of the undertakings
which it was suggested the Russian government
should assume”.
League of Nations Conference: appointment of
Loudon, Struycken, and Van Eysinga as delegates;
approval of their instructions.
Genoa Conference (continuation in The Hague):
farewell visit by Litvinov who, even after Genoa,
had cherished hopes of obtaining credits, but for
the rest felt that the conference had had aclarifying
effect and had thus not been useless; Litvinov’s
question whether the Netherlands was prepared
to agree to some arrangements with Russia;
unlikelihood of any initiative on the part of the
Netherlands; Dutch trade with Russia linked by
Litvinov to Soviet representation in the Netherlands;
Van Karnebeek’s fear that such a body
might conduct political propaganda; Litvinov’s
view that the Third International had nothing to
do with the Russian government; Van Karnebeek’s
doubts about that and his reluctance to
conclude an agreement in view of the terror tactics
the Soviets continued to deploy; discussion
of the question whether the Netherlands had
played any part in the blockade and intervention.
League of Nations agenda: Swiss agreement with
the Netherlands’ objections to increasing the
League’s budget; they feared, however, that a démarche
on their part as well would add to the
existing dissatisfaction of the Secretariat General
with the seat of the League; absurd demands
made by the Secretariat in financial and other
areas.
375
376
376-A
376-B
377
22.7.1922
From Van Ketwich
Verschuur
(Tangier)
22.7.1922
Ditto
21.11.1921
Annex I
From Van Kleffens
to Beucker Andreae
Annex 2
From Van Karnebeek
25.7.1922
To Emir el Djabri
and Suleiman Kanaan
378 27.7.1922
To De Marees
van Swinderen
(London)
379 27.7.1922
Tangier Statute: The High Commissioner in the
Spanish zone, General Berenguer, was to be succeeded
by the Military Governor of Madrid, General
Burguette, who could be expected to attempt
to consolidate the Spanish protectorate in Morocco
by ,,pénétration pacifique” (co-operation
with the native population); rumours that the
Foreign Office in London would not be averse to
a Netherlands mandate over the zone, a solution
favoured by the writer in view of the Netherlands’
experience (neutral power) in governing Mohammedan
peoples; in that case, however, it would
be necessary to limit the responsibilities and to
have adequate statutory guarantees against serious
political difficulties.
French nationality in Morocco: enclosure of a
decree relating to the French zone of the Sherifian
Empire, with a copy of the objections raised
by the Italian Ambassador in Paris.
Notes on the questionable innovation featured in
this decree (imposition of French nationality on
children born there if one of the parents came
under French jurisdiction).
Inclination to keep the matter in abeyance pending
the ruling of The Hague Court in a forthcoming
case between France and Britain; fear,
based partly on the objections raised - not without
reason - by Italy, that in the event of judgement
going against her France would annex
Morocco.
League of Nations mandate (Syria and the Lebanon):
acknowledgement of receipt of No. 364,
deleting the statement still appearing in original
that the Netherlands - not represented on the
League of Nations Council - bore no responsibility
for the mandate.
Yap cables: claims of the DNTG against the Eastern
Telegraph Company in respect of the pool
agreements totalling 3,165,061 gold francs
(1,161,011 gold francs from the Dutch Indies
Pool and 2,004,050 gold francs from the German-
Dutch Pool); these amounts to be divided between
the DNTG (10%), the Netherlands (375/1400 of
the remaining 90%) and the German (the balance)
governments; the British Government to be asked
to authorise payment of the 763,006.26 gold
francs accruing to the Netherlands.
Yap cables: Italian objections to allocation of the
CIX
To Hubrecht
(Washington)
380 28.7.1922
To De Graaff
380-A Annex 1
380-B 6.5.1919
Annex 2
From C. van
Vollenhoven
381 29.7.1922
To H.M. the Queen
Yap-Menado cable to the Netherlands withdrawn;
Hubrecht instructed to press the US government
for a definitive decision.
American claims to Miangas (Palmas-Miangas arbitration):
an attempt should first be made to
bring this case before the Permanent Court in
view of the expense of settling disputes through
arbitration; simultaneous presentation of a draft
arbitration compromise in case the United States
should decide against The Hague Court; some
Special (draft) agreement on the submission to
arbitration of the question of sovereignty over
the island of Palmas (or Miangas).
Memorandum relating to the writer’s discussions
about the arbitration compromise at the Department
of State; agreement in principle to arbitration
in this dispute dating from 1905 reached
in 1914; Netherlands draft compromise dated
1916 and amendments made in April 1919.
Netherlands Diplomatic Service: meeting new
needs arising from the disintegration of the
Austro-Hungarian monarchy: transfer (from
London) of F.E.M.H. Michiels van Verduynen to
Vienna (with station Budapest) as chargé d’affaires
with the Hungarian Foreign Minister to
deputise in the absence of Van Weede, the Envoy
in Vienna. | https://www.yumpu.com/en/document/view/36301051/list-of-documents-1-september-1921-31-juli-1922-historicinl | CC-MAIN-2020-05 | refinedweb | 28,785 | 50.77 |
Slim.
Vanilla Slim
Let's begin by looking at some common Slim code to identify the problem. After you've installed Slim through Composer, you need to create an instance of the
Slim object and define your routes:

<?php
$app = new \Slim\Slim;

$app->get('/', function () {
    echo "Home Page";
});

$app->get('/testPage', function () use ($app) {
    $app->render('testpage.php');
});

$app->run();
The first method call sets a new route for the root URI (
/), and connects the given function to that route. This is fairly verbose, yet easy to set up. The second method call defines a route for the URI
testPage. Inside the supplied method, we use Slim's
render() method to render a view.
Here lies the first problem: this function (a closure) is not called in the current context and has no way of accessing Slim's features. This is why we need to use the
use keyword to pass the reference to the Slim app.
The second issue stems from Slim's architecture; it's meant to be defined all in one file. Of course, you can outsource the variable to another file, but it just gets messy. Ideally, we want the ability to add controllers to modularize the framework into individual components. As a bonus, it would be nice if these controllers offered native access to Slim's features, removing the need to pass references into the closures.
A Little Reverse Engineering
It's debatable whether reading source code from an open-source project is considered reverse engineering, but it's the term I'll stick with. We understand how to use Slim, but what goes on under the hood? Let's look at a more complicated route to get to the root of this question:
$app->get('/users/:name', function ($name) {
    echo "Hello " . $name;
});
This route definition uses a colon with the word,
name. This is a placeholder, and the value used in its place is passed to the function. For example,
/users/gabriel matches this route, and 'gabriel' is passed to the function. The route,
/users, on the other hand, is not a match because it is missing the parameter.
If you think about it logically, there are a number of steps that must complete in order to process a route.
- Step One: check if the route matches the current URI.
- Step Two: extract all parameters from the URI.
- Step Three: call the connected closure and pass the extracted parameters.
To better optimize the process, Slim — using regex callbacks and groups — stores the placeholders as it checks for matches. This combines two steps into one, leaving only the need to execute the connected function when Slim is ready. It becomes clear that the route object is self-contained, and frankly, all that is needed.
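As a rough, self-contained sketch of that idea — this is not Slim's actual code, and the `matchRoute()` function name is invented for illustration — a placeholder route can be compiled into a regex with named capture groups, so a single `preg_match()` both tests the URI and extracts the parameters:

```php
<?php
// Hypothetical illustration only — Slim's real matching logic lives in \Slim\Route.
function matchRoute($pattern, $uri)
{
    // turn each ":name" placeholder into a named capture group
    $regex = preg_replace_callback('#:(\w+)#', function ($m) {
        return '(?P<' . $m[1] . '>[^/]+)';
    }, $pattern);

    if (preg_match('#^' . $regex . '$#', $uri, $matches)) {
        // keep only the named captures, i.e. the route parameters
        return array_filter($matches, 'is_string', ARRAY_FILTER_USE_KEY);
    }

    return false; // no match
}

var_dump(matchRoute('/users/:name', '/users/gabriel')); // array("name" => "gabriel")
var_dump(matchRoute('/users/:name', '/users'));         // false
```

Matching and extraction happen in one pass, which is exactly why the route object can stay self-contained.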
In the previous example, we had access to Slim's features when parsing the routes, but we needed to pass a Slim object reference because it would otherwise be unavailable within the function's execution context. That's all you need for most applications, as your application's logic should occur in the controller.
With that in mind, let's extract the "routing" portion into a class and turn the Slim object into the "controller."
Getting Started
To begin, let's download and install "vanilla Slim" if you haven't done so already. I'm going to assume that you have Composer installed; if not, follow the steps in the Composer documentation.
Within a new directory, create a file named
composer.json, and append the following:
{
    "name": "nettuts/slim-mvc",
    "require": {
        "slim/slim": "*",
        "slim/extras": "*",
        "twig/twig": "*"
    }
}
In a terminal window, navigate to said directory and type
composer install. I'll walk you through these packages, if this is your first time using Slim.
- slim/slim - the actual Slim framework.
- slim/extras - a set of optional classes to extend Slim.
- twig/twig - the Twig templating engine.
You technically don't need the Slim extras or Twig for this tutorial, but I like using Twig instead of standard PHP templates. If you use Twig, however, you need the Slim extras because it provides an interface between Twig and Slim.
Now let's add our custom files, and we'll start by adding a directory to the
vendors folder. I'll name mine
Nettuts, but feel free to name yours whatever you wish. If you are still in the terminal, ensure that your terminal window is in the project's directory and type the following:
mkdir vendor/Nettuts
Now, edit
composer.json by adding the reference to this new folder:
{
    "name": "nettuts/slim-mvc",
    "require": {
        "slim/slim": "*",
        "slim/extras": "*",
        "twig/twig": "*"
    },
    "autoload": {
        "psr-0": {
            "Nettuts": "vendor/"
        }
    }
}
We want our app to automatically load classes from the
Nettuts namespace, so this tells Composer to map all requests for
Nettuts to the PSR-0 standard starting from the
vendor folder.
Now execute:
composer dump-autoload
This recompiles the autoloader to include the new reference. Next, create a file, named
Router.php, within the
Nettuts directory, and enter the following:
<?php
namespace Nettuts;

class Router
{
}
We saw that each route object has a self-contained function that determines if it matches the provided URI. So, we want an array of routes and a function to parse through them. We'll also need another function to add new routes, and a way to retrieve the URI from the current HTTP request.
Let's begin by adding some member variables and the constructor:
class Router
{
    protected $routes;
    protected $request;

    public function __construct()
    {
        $env = \Slim\Environment::getInstance();
        $this->request = new \Slim\Http\Request($env);
        $this->routes = array();
    }
}
We set the
routes variable to contain the routes, and the
request variable to store the Slim
Request object. Next, we need the ability to add routes. To stick with best practices, I will break this into two steps:
public function addRoutes($routes)
{
    foreach ($routes as $route => $path) {
        $method = "any";

        if (strpos($path, "@") !== false) {
            list($path, $method) = explode("@", $path);
        }

        $func = $this->processCallback($path);

        $r = new \Slim\Route($route, $func);
        $r->setHttpMethods(strtoupper($method));
        array_push($this->routes, $r);
    }
}
This public function accepts an associative array of routes in the format of
route => path, where
route is a standard Slim route and
path is a string with the convention Class:function@method.
Optionally, you can leave out certain parameters to use a default value. For example, the class name will be replaced with
Main if you leave it out,
index is the default for omitted function names, and the default for the HTTP method is
any. Of course,
any is not a real HTTP method, but it is a value that Slim uses to match all HTTP method types.
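To make the convention concrete, here are a few hypothetical route definitions — the `Pages` class and the `search` function are invented for illustration — together with the defaults they resolve to:

```php
<?php
$routes = array(
    '/'       => 'Main:index@get', // class, function and HTTP method all explicit
    '/about'  => 'Pages:about',    // no "@method" -> "any", i.e. matches every HTTP method
    '/search' => 'search@get',     // no "Class:" prefix -> the default class "Main"
    '/ping'   => ''                // everything omitted -> Main:index, any method
);
```

Note that the class default only kicks in when the colon is absent entirely; a leading colon with an empty class name would not fall back to `Main`.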
The
addRoutes function starts with a
foreach loop that cycles through the routes. Next, we set the default HTTP method, optionally overriding it with the provided method if the
@ symbol is present. Then we pass the remainder of the path to a function to retrieve a callback, and attach it to a route. Finally, we add the route to the array.
Now let's look at the
processCallback() function:
protected function processCallback($path)
{
    $class = "Main";

    if (strpos($path, ":") !== false) {
        list($class, $path) = explode(":", $path);
    }

    $function = ($path != "") ? $path : "index";

    $func = function () use ($class, $function) {
        // prepend the "Controller" namespace used throughout this tutorial
        $class = '\Controller\\' . $class;
        $class = new $class();
        $args = func_get_args();
        return call_user_func_array(array($class, $function), $args);
    };

    return $func;
}
We first set the default class to
Main, and override that class if the colon symbol is found. Next, we determine if a function is defined and use the default method
index if necessary. We then pass the class and function names to a closure and return it to the route.
Inside the closure, we prepend the class name with the namespace. We then create a new instance of the specified class and retrieve the list of arguments passed to this function. If you remember, while Slim checks if a route matches, it slowly builds a list of parameters based on wildcards from the route. This function (
func_get_args()) can be used to get the passed parameters in an array. Then, using the
call_user_func_array() method enables us to specify the class and function, while passing the parameters to the controller.
It's not a very complicated function once you understand it, but it is a very good example of when closures come in handy.
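Stripped of the routing context, the pattern inside that closure can be demonstrated on its own. The `Greeter` class below is a stand-in invented for this sketch:

```php
<?php
// A hypothetical target class, standing in for one of our controllers.
class Greeter
{
    public function hello($name)
    {
        return "Hello " . $name;
    }
}

$class = 'Greeter';
$function = 'hello';

// The closure only captures the *names*; the class is instantiated when it runs.
$func = function () use ($class, $function) {
    $instance = new $class();
    $args = func_get_args(); // whatever the caller passed, as an array
    return call_user_func_array(array($instance, $function), $args);
};

echo $func('gabriel'); // prints "Hello gabriel"
```

This is the same deferred-instantiation trick the router uses: nothing is constructed until a route actually matches and invokes the closure.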
To recap, we added a function to our
Router that allows you to pass an associative array containing routes and paths that map to classes and functions. The last step is to process the routes and execute any that match. Keeping with the Slim naming convention, let's call it
run:
public function run()
{
    $display404 = true;

    $uri = $this->request->getResourceUri();
    $method = $this->request->getMethod();

    foreach ($this->routes as $i => $route) {
        if ($route->matches($uri)) {
            if ($route->supportsHttpMethod($method) || $route->supportsHttpMethod("ANY")) {
                call_user_func_array($route->getCallable(), array_values($route->getParams()));
                $display404 = false;
            }
        }
    }

    if ($display404) {
        echo "404 - route not found";
    }
}
We begin by setting the
display404 variable, representing no routes found, to
true. If we find a matching route, we'll set this to
false and bypass the error message. Next, we use Slim's request object to retrieve the current URI and HTTP method.
We'll use this information to cycle through and find matches from our array.
Once the route object's
matches() function executes, you are able to call
getParams() to retrieve the parsed parameters. Using that function and the
getCallable() method, we are able to execute the closure and pass the necessary parameters. Finally, we display a 404 message if no route matched the current URI.
Let's create the controller class that holds the callbacks for these routes. If you have been following along, then you may have realized that we never forced a protocol or class type. If you don't want to create a controller class, then any class will work fine.
So why create a controller class? The short answer is that we still haven't really used Slim! We used parts of Slim for the HTTP request and routes, but the whole point of this was to have easy access to all of Slim's properties. Our controller class will extend the actual Slim class, gaining access to all of Slim's methods.
You can just as easily skip this and subclass Slim directly from your controllers.
Building the Controller
This controller basically allows you to modify Slim while still keeping it vanilla. Name the file
Controller.php, and write the following code:
<?php
namespace Nettuts;

class Controller extends \Slim\Slim
{
    protected $data;

    public function __construct()
    {
        $settings = require("../settings.php");

        if (isset($settings['model'])) {
            $this->data = $settings['model'];
        }

        parent::__construct($settings);
    }
}
When you initialize Slim, you can pass in a variety of settings, ranging from the application's debug mode to the templating engine. Instead of hard coding any values in the constructor, I load them from a file named
settings.php and pass that array into the parent's constructor.
Because we are extending Slim, I thought it would be cool to add a 'model' setting, allowing people to hook their data object directly into the controller.
That's the section you can see in the middle of the above code. We check if the
model setting has been set and assign it to the controller's
data property if necessary.
Now create a file named
settings.php in the root of your project (the folder with the
composer.json file), and enter the following:
<?php
$settings = array(
    'view' => new \Slim\Extras\Views\Twig(),
    'templates.path' => '../Views',
    'model' => (object) array(
        "message" => "Hello World"
    )
);

return $settings;
These are standard Slim settings with the exception of the model. Whatever value is assigned to the
model property is passed to the
data variable; this could be an array, another class, a string, etc... I set it to an object because I like using the
-> notation instead of the bracket (array) notation.
We can now test the system. If you remember in the
Router class, we prepend the class name with the "
Controller" namespace. Open up
composer.json and add the following directly after the psr-0 definition for the
Nettuts namespace:
{
    "name": "nettuts/slim_advanced",
    "require": {
        "slim/slim": "2.2.0",
        "slim/extras": "*",
        "twig/twig": "*"
    },
    "autoload": {
        "psr-0": {
            "Nettuts": "vendor/",
            "Controller": "./"
        }
    }
}
Then like before, just dump the autoloader:
composer dump-autoload
If we just set the base path to the root directory, then the namespace
Controller will map to a folder named "
Controller" in the root of our app. So create that folder:
mkdir Controller
Inside this folder, create a new file named
Main.php. Inside the file, we need to declare the namespace and create a class that extends our
Controller base class:
<?php
namespace Controller;

class Main extends \Nettuts\Controller
{
    public function index()
    {
        echo $this->data->message;
    }

    public function test()
    {
        echo "Test Page";
    }
}
This is not complicated, but let's take it in moderation. In this class, we define two functions; their names don't matter because we will map them to routes later. It's important to notice that I directly access properties from the controller (i.e. the model) in the first function, and in fact, you will have full access to all of Slim's commands.
Let's now create the actual public file. Create a new directory in the root of your project and name it
public. As its name implies, this is where all the public stuff will reside. Inside this folder, create a file called
index.php and enter the following:
<?php
require("../vendor/autoload.php");

$router = new \Nettuts\Router;

$routes = array(
    '/' => 'Main:index@get',
    '/test' => 'Main:test@get'
);

$router->addRoutes($routes);
$router->run();
We include Composer's autoloading library and create a new instance of our router. Then we define two routes, add them to the router object and execute it.
You also need to turn on mod_rewrite in Apache (or the equivalent using a different web server). To set this up, create a file named
.htaccess inside the
public directory and fill it with the following:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [QSA,L]
Now all requests to this folder (that do not match an actual file) will be transferred to
index.php.
In your browser, navigate to your
public directory, and you should see a page that says "Hello World". Navigate to "
/test", and you should see the message "Test Page". It's not terribly exciting, but we have successfully moved all the logic code into individual controllers.
Round Two
So we have basic functionality, but there are a few rough edges. Let's start with the router.
As of right now, we display a simple error message if a route doesn't exist. In a real application, we want the same functionality as loading a regular page. We want to take advantage of Slim's ability to load views, as well as set the response's error code.
Let's add a new class variable that holds an optional path (just like the other routes). At the top of the file, add the following line directly after the request object definition:
protected $errorHandler;
Next, let's create a function that accepts a path and assigns it a callback function. This is relatively simple because we already abstracted this functionality:
public function set404Handler($path)
{
    $this->errorHandler = $this->processCallback($path);
}
Now let's adjust the
run command to optionally execute the callback instead of just displaying the error message:
if ($display404) {
    if (is_callable($this->errorHandler)) {
        call_user_func($this->errorHandler);
    } else {
        echo "404 - route not found";
    }
}
Open the controller class. This is where you can adjust Slim's functionality to your own personal preferences. For example, I would like the option to omit the file extension when loading views. So instead of writing
$this->render("home.php");, I just want to write:
$this->render("home");. To do this let's override the render method:
public function render($name, $data = array(), $status = null)
{
    if (strpos($name, ".php") === false) {
        $name = $name . ".php";
    }

    parent::render($name, $data, $status);
}
We accept the same parameters as the parent function, but we check if the file extension is provided and add it if necessary. After this modification, we pass the file to the parent method for processing.
This is just a single example, but we should put any other changes here in the
render() method. For example, if you load the same header and footer pages on all your documents, you can add a function
renderPage(). This function would load the passed view between the calls to load the regular header and footer.
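Here is a hedged sketch of that renderPage() idea — the "header" and "footer" view names are assumptions, and Slim's real render() is replaced with a tiny stub so the sketch runs on its own:

```php
<?php
// Hypothetical sketch, not part of the tutorial's actual Controller class.
class PageController
{
    // stand-in for Slim's render() so this example has no dependencies
    public function render($name, $data = array(), $status = null)
    {
        echo "[" . $name . "]";
    }

    // wrap every view between a shared header and footer
    public function renderPage($name, $data = array(), $status = null)
    {
        $this->render("header", $data);
        $this->render($name, $data, $status);
        $this->render("footer", $data);
    }
}

(new PageController())->renderPage("home"); // prints "[header][home][footer]"
```

In the real controller, renderPage() would call the parent Slim render() three times in the same way.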
Next, let's take a look at loading some views. In the root of your project create a folder named "
Views" (the location and name can be adjusted in the
settings.php file). Let's just create two views named
test.php and
error.php.
Inside
test.php, add the following:
<h1>{{title}}</h1> <p>This is the {{name}} page!</p>
And inside the
error.php file, enter this:
<h1>404</h1> <p>The route you were looking for could not be found</p>
Also, modify the
Main controller by changing the
index() function to the following:
public function index()
{
    $this->render("test", array("title" => $this->data->message, "name" => "Home"));
}
Here, we render the test view that we just made and pass it data to display. Next, let's try a route with parameters. Change the
test() function to the following:
public function test($title)
{
    $this->render("test", array("title" => $title, "name" => "Test"));
}
Here, we take it one step further by retrieving the page's title from the URI itself. Last, but not least, let's add a function for the 404 page:
public function notFound()
{
    $this->render('error', array(), 404);
}
We use the
render() function's third optional parameter, which sets the response's HTTP status code.
Our final edit is in
index.php to incorporate our new routes:
$routes = array(
    '/' => '',                          // all defaults: Main:index, any method
    '/test/:title' => 'Main:test@get'
);

$router->addRoutes($routes);
$router->set404Handler("Main:notFound");
$router->run();
You should now be able to navigate to the three routes and see their respective views.
Conclusion
With everything that we accomplished, you may well have a few questions about why Slim does not already offer these modifications. They seem logical, they don't stray too far from Slim's implementation, and they make a lot of sense. Josh Lockhart (Slim's creator) put it best:
"Slim is not CodeIgniter, it's not Symfony, and it's not Laravel. Slim is Slim. It was built to be light-weight and fun, while still able to solve about 80% of the most common problems. Instead of worrying about the edge cases, it focuses on being simple and having an easy-to-read codebase."
Sometimes, as developers, we get so caught up covering crazy scenarios that we forget about what's really important: the code. Mods, like the one in this tutorial, are only possible because of the code's simplicity and brevity. So yes, there may be some edge cases that need special attention, but you get an active community, which in my opinion heavily outweighs the costs.
I hope you enjoyed this article. If you have any questions or comments, leave a message down below. You can also contact me through IRC on Freenode in the #nettuts channel.
Recently we ran into some issues when doing a training with several trainees on our SAP Datahub environment, which runs on a Kubernetes cluster deployed on Azure. A default deployment of Kubernetes with advanced networking gives you a limit of 30 pods per node. This is something you need to consider before installation, since it can only be set during the initial deployment of the cluster and cannot be changed afterwards (see this article).
We deployed SAP Datahub on a Kubernetes cluster on Azure currently running 3 nodes (8 vCPUs, 32 GB memory each). For the small exercises we had foreseen, this should have been plenty, but we still ran into issues. Expanding the cluster to 8 nodes temporarily resolved the problem, but soon we were having trouble again. Here is an explanation why.
Every node in the cluster is limited to 30 pods. When starting up a Kubernetes cluster, Kubernetes itself already starts a number of pods on each node to provide services to the applications that will be deployed onto the cluster:
- network services
- dns services
- proxy services
- kubernets dashboard
- monitoring services
- …
If you look at our 3-node deployment, 25 pods are already taken by Kubernetes itself:
The Datahub installation also needs a considerable number of pods to provide all the core services required to run the environment. Looking at the pods in the Kubernetes namespace "SAPDATAHUB" (the namespace where we deployed our Datahub), we have 51 pods running for the core system — actually 49, since 2 of those pods only appeared after using the SAP Datahub launchpad and System Management, which I will explain later.
With 25 pods used by Kubernetes and the pods used by Datahub, 76 of the 90 pods available on our 3-node deployment are already used, and I haven't even started using the application yet. When you launch the Datahub launchpad, a pod is started on the cluster. As soon as you start one of the Datahub applications by clicking one of the tiles, at least one more pod is started on the cluster.
If I start using Connection Management, Metadata Explorer, Modeler, Vora Tools and System Management, you can find the corresponding pods in the Kubernetes dashboard. Using the launchpad and starting the applications launches pods that are dedicated to the user, which means another user will spin up another set of pods dedicated to them. The initial delays experienced when using the launchpad and applications for the first time as a new user are caused by the spin-up of the needed pods on the cluster.
Be aware that the launchpad and application pods stay active on the cluster even if you log out of Datahub. A user who has logged in before will reuse the already started pods; you will notice, however, that the initial startup delay is gone.
If you start profiling data via the Metadata Explorer or executing graphs you created in the Modeler, these processes perform their execution tasks by submitting pods to the cluster. The next screenshot shows the additional pods launched by starting a profiling job on a dataset.
In the case of profiling, one coordinator pod is started that stays active and dedicated to the user; the other pods finish and free their pod allocation on the Kubernetes node where they ran.
Once you run into your pod limit, new pods will no longer start and will wait for pod slots to free up — either because running pods complete, or because slots become available when you scale the Kubernetes cluster up with an additional node. Another way to free up some pods is to delete application instances for some of the users via the System Management application available in the launchpad. Deleting user instances will also delete the related pods.
To work around the issue we faced during the training, we spun up some additional nodes, but because of our pod limit, two training users would completely allocate one VM of 8 vCPUs and 32 GB, leaving the available resources underutilized. Looking at the CPU and memory requests on the node concerned, only a third of the CPU and memory was allocated. If you spin up your Kubernetes cluster with even more powerful VMs, the resource loss becomes even greater.
So when deploying your Kubernetes cluster, you should probably consider a pod limit higher than 30. Azure allows a maximum pod limit of 250, while Kubernetes recommends staying at roughly 110 pods per node or fewer.
Feel free to give any comment or feedback.