text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
JSF still sucks?
Granted, this post about how painful JSF is is almost 6 months old, but I think it's still mostly true.
Want to compare times? More than three man-weeks have been spent fixing silly JSF navigation problems. A full CRUD AJAX interface with Spring MVC and prototype in the same project took four days, and there was no previous experience with Spring MVC.
If you're going to use JSF, I highly recommend Facelets or Shale/Seam. However, those are mentioned as'.
IMO, Facelets is very easy to learn. If you know how to program JSPs with JSF, you should be able to use Facelets in under an hour. When we converted AppFuse's JSF flavor from JSP to Facelets, rarely did the body have to change - we just had to change from taglibs to XML namespaces..
There's two problems with Shale and Facelets - the activity on these projects is very low. Shale still has its creators around, so even while its seldom used, you can probably still get your questions answered. However, Facelets seems to be suffering from "developer abandonment".
Conclusion: don't use JSF simply because it's a "standard". Use other frameworks that are more actively developed and designed for the web. For component-based frameworks, the most popular are Tapestry and Wicket. Less popular ones are RIFE and Click.
If you still want to use JSF, you should probably use Seam, but don't simply use JSF because it's a standard. If it was a de-facto standard, that'd be another story.
Of course, you could also help improve JSF 2.0. But that's not scheduled for release until late 2008. I'm sure 2 or 3 commentors will claim we'll all be using Rails or Grails by then.
Posted by Paul Barry on April 16, 2007 at 01:43 PM MDT #
As far as rails/grails is concerned, I disagree. Sounds like MDA revisited, or just another language to keep me busy (although I like some features of ruby)
Koen
Posted by Koen Serry on April 16, 2007 at 02:04 PM MDT #
Posted by Twice on April 17, 2007 at 01:40 AM MDT #
Being a standard gives JSF a great advantage over other frameworks - there are so many third-party JSF component libraries which plays pretty well together. I agree that you should not use some framework simply because it is a standard, but if that gives you some extra benefits, you should think again. Anyway, people often forget that JSF is not just another framework but specification which aims to build ecosystem of other frameworks (Seam, Shale...), component libraries (Ajax4jsf, RichFaces, Trinidad, IceFaces..) and implementations.
Posted by Dejan on April 17, 2007 at 03:03 AM MDT #
Posted by Craig on April 17, 2007 at 05:53 AM MDT #
Posted by Matt Raible on April 17, 2007 at 06:14 AM MDT #
Posted by Ignacio Coloma on April 17, 2007 at 06:37 AM MDT #
I work in an IT department within a corporation. The business doesn't care about technology decisions, they only care that a business need is satisfied. There are 20+ websites in various stages of evolution my team has to deal w/on a daily basis. The technology utilized for these sites was pretty much what was "Hot" for that period of time (remember when container managed entity beans were cool? or how about a JSP scriplet-based application?). The problem we have run into, is the myriad technology, frameworks and approaches taken for each site vary so greatly, that it makes maintenance a real pain to do and to find appropriate people to do it.
We're now in a refactoring / rewrite of all the sites because we're pretty much pinned down and unable to move fast enough to adapt to changing business needs. As part of the refactoring, we adopted JSF as the common framework for presentation. Yes, we're running into issues Mr Raible has outlined (and then some), but we're dealing with them. We've also needed to write a number of custom components, which some developers are complaining about having to do.
The positive side of things, is that we're in a better position now than the position we got ourselves into over the past 10 years. One of the reasons we chose JSF is because it is a spec. IMHO, Java presentation technology is evolving, but it seems like it is evolving from a JSF base, not some framework or OSS project.
When our first AJAX requirement was submitted, we incorporated Ajax4jsf and it wasn't a huge deal. To me, that was pretty "cool".
Posted by m@t on April 18, 2007 at 09:40 AM MDT #
Posted by Java on April 20, 2007 at 02:31 PM MDT #
Posted by Ryan Lubke on May 17, 2007 at 08:14 AM MDT #
Posted by Jacob Hookom on May 17, 2007 at 08:21 AM MDT #
Posted by Srinivas Narayanan on March 10, 2008 at 07:37 PM MDT #
Posted by Jeremy Leipzig on September 17, 2008 at 08:21 AM MDT #
JSF 2.0 is out, it is based on Facelets, and guess what... it still sucks!
Tooling is also practically absent, components are scarce (apart from IceFaces, RichFaces and PrimeFaces you don't find anything JSF 2.0... maybe Oracle will update their ADF soon).
Seems like JSF needs another generation to get things right... is this deja vu? [Entity Beans?]
Posted by jbx on July 17, 2010 at 08:44 AM MDT # | https://raibledesigns.com/rd/entry/jsf_still_sucks | CC-MAIN-2022-21 | refinedweb | 935 | 70.33 |
Porting a C++ Application to Python¶
Qt for Python lets you use Qt APIs in a Python application. So the next question is: What does it take to port an existing C++ application? Try porting a Qt C++ application to Python to understand this.
Before you start, ensure that all the prerequisites for Qt for Python are met. See Getting Started for more information. In addition, familiarize yourself with the basic differences between Qt in C++ and in Python.
Basic differences¶
This section highlights some of the basic differences between C++ and Python, and how Qt differs between these two contexts.
C++ vs Python¶
In the interest of code reuse, both C++ and Python provide ways for one file of code to use facilities provided by another. In C++, this is done using the
#includedirective to access the API definition of the reused code. The Python equivalent is an
importstatement.
The constructor of a C++ class shares the name of its class and automatically calls the constructor of any base-classes (in a predefined order) before it runs. In Python, the
__init__()method is the constructor of the class, and it can explicitly call base-class constructors in any order.
C++ uses the keyword,
this, to implicitly refer to the current object. In python, you need to explicitly mention the current object as the first parameter to each instance method of the class; it is conventionally named
self.
And more importantly, forget about curly braces, {}, and semi-colon, ;.
Precede variable definitions with the
globalkeyword, only if they need global scope.
var = None def func(key, value = None): """Does stuff with a key and an optional value. If value is omitted or None, the value from func()'s last call is reused. """ global var if value is None: if var is None: raise ValueError("Must pass a value on first call", key, value) value = var else: var = value doStuff(key, value)
In this example,
func() would treat
var as a local
name without the
global statement. This would lead to
a
NameError in the
value is None handling, on
accessing
var. For more information about this, see
Python refernce documentation.
Tip
Python being an interpreted language, most often
the easiest way is to try your idea in the interperter.
You could call the
help() function in the
interpreter on any built-in function or keyword in
Python. For example, a call to
help(import) should
provide documentation about the
import statment
Last but not the least, try out a few examples to familiarize yourself with the Python coding style and follow the guidelines outlined in the PEP8 - Style Guide.
import sys from PySide2.QtWidgets import QApplication, QLabel app = QApplication(sys.argv) label = QLabel("Hello World") label.show() sys.exit(app.exec_())
Note
Qt provides classes that are meant to manage the application-specific requirements depending on whether the application is console-only (QCoreApplication), GUI with QtWidgets (QApplication), or GUI without QtWidgets (QGuiApplication). These classes load necessary plugins, such as the GUI libraries required by an application. In this case, it is QApplication that is initialized first as the application has a GUI with QtWidgets.
Qt in the C++ and Python context¶
Qt behaves the same irrespective of whether it is used in a C++ or a Python application. Considering that C++ and Python use different language semantics, some differences between the two variants of Qt are inevitable. Here are a few important ones that you must be aware of:
Qt Properties:
Q_PROPERTYmacros are used in C++ to add a public member variable with getter and setter functions. Python’s alternative for this is the
@propertydecorator before the getter and setter function definitions.
Qt Signals and Slots: Qt offers a unique callback mechanism, where a signal is emitted to notify the occurrence of an event, so that slots connected to this signal can react to it. In C++, the class definition must define the slots under the
public Q_SLOTS:and signals under
Q_SIGNALS:access specifier. You connect these two using one of the several variants of the QObject::connect() function. Python’s equivalent for this is the @Slot` decorator just before the function definition. This is necessary to register the slots with the QtMetaObject.
QString, QVariant, and other types
Qt for Python does not provide access to QString and QVariant. You must use Python’s native types instead.
QChar and QStringRef are represented as Python strings, and QStringList is converted to a list of strings.
QDate, QDateTime, QTime, and QUrl’s __hash__() methods return a string representation so that identical dates (and identical date/times or times or URLs) have identical hash values.
QTextStream’s bin(), hex(), and oct() functions are renamed to bin_(), hex_(), and oct_() respectively. This should avoid name conflicts with Python’s built-in functions.
QByteArray: A QByteArray is treated as a list of bytes without encoding. The equivalent type in Python varies; Python 2 uses “str” type, whereas Python 3 uses “bytes”. To avoid confusion, a QString is represented as an encoded human readable string, which means it is a “unicode” object in Python 2, and a “str” in Python 3.
Here is the improved version of the Hello World example, demonstrating some of these differences:
Note
The
if block is just a good practice when
developing a Python application. It lets the Python file
behave differently depending on whether it is imported
as a module in another file or run directly. The
__name__ variable will have different values in
these two scenarios. It is
__main__ when the file is
run directly, and the module’s file name
(
hello_world_ex in this case) when imported as a
module. In the later case, everything defined in the
module except the
if block is available to the
importing file.
Notice that the QPushButton’s
clicked signal is
connected to the
magic function to randomly change the
QLabel’s
text property. The @Slot` decorator marks
the methods that are slots and informs the QtMetaObject about
them.
Porting a Qt C++ example¶
Qt offers several C++ examples to showcase its features and help
beginners learn. You can try porting one of these C++ examples to
Python. The
books SQL example
is a good starting point as it does not require you to write UI-specific code in
Python, but can use its
.ui file instead.
The following chapters guides you through the porting. | https://doc.qt.io/qtforpython/tutorials/portingguide/index.html | CC-MAIN-2020-10 | refinedweb | 1,061 | 62.58 |
Ajax Transport Reference
Ajax (Asynchronous Java and XML) is a group of interrelated web development techniques used for creating interactive web applications or rich Internet applications. With Ajax, web applications can retrieve data from the server asynchronously in the background without interfering with the display and behavior of the existing page.
The Mule Ajax connector allows Mule events to be sent and received asynchronously to and from the web browser. The connector includes a JavaScript client that listens for events, sends events, and performs RPC calls. It can be deployed in Mule standalone or embedded in a servlet container such as Apache Tomcat or Tcat Server.
Aj, add the
ajax namespaces:
Configuring the Server
The usual way of setting up the Ajax server is to use the one embedded in Mule. This can be created by adding an ajax:connector element to your config:
<ajax:connector
This starts an Ajax server and is ready to start publishing and subscribing. Next you can create a flow that listens to Ajax messages on a channel:
Or to publish on an Ajax channel, use an outbound endpoint:
Embedding in a Servlet Container
If you are running Mule inside a servlet container such as Apache Tomcat, bind Ajax endpoints to the servlet container by adding the
org.mule.transport.ajax.container.MuleAjaxServlet to your
web.xml in your webapp and use the
ajax:servlet-xxx-endpoint elements.
Configure your
web.xml using:
Then replace any
ajax:inbound-endpoint and
ajax:outbound-endpoint with
ajax:servlet-inbound-endpoint and
ajax:servlet-outbound-endpoint respectively.
To use the football scores example again:
Then configure your connector and endpoints as described below.
Using the JavaScript Client
Mule provides a powerful JavaScript client with full Ajax support that can be used to interact with Mule flows directly in the browser. It also provides support for interacting directly with objects running inside the container using Cometd, a message bus for Ajax web applications that allows multi-channel messaging between the server and client.
Configuring the Server
To use the JavaScript client, you just need to have a flow that has an Ajax inbound endpoint through which requests can be sent. This example shows a simple echo flow published on the
/services/echo Ajax channel:
Enabling the Client
To enable the client in an HTML page, add a single script element to the page:
Adding this script element makes a 'mule' client object available for your page.
Making an RPC request
This example defines a button in the body that, when clicked, sends a request to the Echo flow:
<input id="sendButton" class="button" type="submit" name="Go" value="Send" onclick="callEcho();"/>
The button calls the
callEcho function, which handles the logic of the request:
This function uses the
rpc method to request data from the flow. The
rpc method sets up a private response channel that Mule uses to publish when response data is available. The first argument is the channel on which you’re making the request (this matches the channel that our Echo Flow is listening on), the second argument is the payload object, and the third argument is the callback function that processes the response, in this case a function called call Echo Response:
If you use
rpc just for a one-way request where you don’t pass a callback function as parameter because you don’t expect a response, use the
disableReplyTo flag in the Ajax connector:
<ajax:connector
Listening to Server Events
The Mule JavaScript client allows developers to subscribe to events from Mule flows. These events just need to be published on an Ajax endpoint. Here is a flow that receives events on JMS and publishes them to an Ajax channel.
Now you can register for interest in these football scores by adding a subscriber via the Mule JavaScript client.
The first argument of the
subscribe method is the Ajax path that the flow publishes to. The second argument is the name of the callback function that processes the message. In this example, it’s the
scoresCallback function, which is defined next:
Sending a Message
Let’s say you want to send a message out without getting a response. In this case, you call the
publish function on the Mule client:
Example Configurations
Mule comes bundled with several examples that employ the Ajax Connector. We recommend you take a look at the "Notifications Example" and the "GPS Walker Example" (which is also explained in further detail in Walk this Way: Building AJAX apps with Mule). In the following typical use cases we provides an AJAX connector, an AJAX outbound endpoint and the required JavaScript client library to take care of this.
We add an AJAX connector that hosts the pages (HTML, CSS, etc.) using the JavaScript client and that lets them interact with Mule’s AJAX endpoints. It’s the same connector we used in the two previous examples.
We also need to publish some content via an AJAX outbound endpoint in a channel.
RPC Example Server Code
This configuration is very similar to the one in the previous example. As a matter of fact, the only significant changes are the channel name and an out-of-the-box echo component to bounce the request back to the caller.
RPC Example Client Code
The browser sends information to Mule (using the JavaScript Mule client) when a button is pushed, just as it did before. This time however, a callback method displays the response.
Note the following changes:
Loading the mule.js script ❶ makes the Mule client automatically available via the ‘mule’ variable.
The rpcCallMuleEcho() ❷ method gathers some data from the page and submits it to the ‘/services/echo’ channel we configured before.
The mule.rpc() ❸ method makes the actual call to Mule. This time, it receives three parameters:
The channel name.
The data to send.
The callback method to be invoked when the response is returned.
The rpcEchoResponse() callback method ❹ takes a single parameter, which is the response message, and displays its data on the page.
Connector
Allows Mule to expose Mule Services over HTTP using a Jetty HTTP server and Cometd. A single Jetty server is created for each connector instance. One connector can serve many endpoints. Users should rarely need to have more than one Ajax servlet connector.
There are no default values in the following table.
Inbound Endpoint
Allows a Mule service to receive Ajax events over HTTP using a Jetty server. This is different from the equivalent
servlet-inbound-endpoint because it uses an embedded servlet container rather that relying on a pre-existing servlet container instance. This endpoint type should not be used if running Mule embedded in a servlet container.
No child elements.
Outbound Endpoint
Allows a Mule service to send Ajax events over HTTP using Bayeux. JavaScript clients can register interest in these events using the Mule JavaScript client.
No child elements.
Best Practices
Use Ajax outbound endpoints mainly for broadcasting information to several clients simultaneously. For example, broadcasting live news updates to several browsers in real time without reloading the page.
It’s recommended to subscribe/unsubscribe callback methods associated with outbound channels on
<body>onload/onunload. See example above. Pay special attention to unsubscribing callback methods.
When sending information back and forth between clients and servers using Ajax you should consider using JSON. Mule provides a JSON module to handle transformations gracefully. | https://docs.mulesoft.com/mule-user-guide/v/3.9/ajax-transport-reference | CC-MAIN-2018-09 | refinedweb | 1,233 | 61.16 |
Unity Coroutine Performance
Unity’s coroutine support allows you to easily create pseudo-threads and write synchronous-looking code that doesn’t block the rest of the app. They can be very handy for a variety of tasks. Before using them, we should understand the performance cost. Today’s article takes a look at the cost of starting a coroutine as well as the cost of running it. Just how expensive are they? Read on to find out!
Coroutines are just C# iterator functions. That means they return a
System.Collections.IEnumerator and have at least one
yield return X statement in them. Here’s one that moves a
GameObject toward a destination every time it’s resumed:
IEnumerator MoveToDestination( GameObject objectToMove, Vector3 destination, float speed ) { // Not at destination yet while (objectToMove.transform.position != destination) { // Move toward destination objectToMove.transform.position = Vector3.MoveTowards( objectToMove.transform.position, destination, Time.deltaTime * speed ); // Yield new position yield return objectToMove.transform.position; } }
You could manually iterate over this function, but Unity can do that for you by starting it as a coroutine. If you do, it’ll be resumed every frame just like your
Update function. To start it as a coroutine, all you need is a
MonoBehaviour and to call the StartCoroutine function on it like so:
class MyScript : MonoBehaviour { void Start() { // Pass the IEnumerator the coroutine function returns // to the StartCoroutine function StartCoroutine(MoveToDestination(gameObject, Vector3.zero, 5)); } IEnumerator MoveToDestination( GameObject objectToMove, Vector3 destination, float speed ) { // ... } }
Now to test its performance. I’ve set up a small script that starts one thousand, ten thousand, or one hundred thousand coroutines that do nothing but yield. They’re the cheapest coroutine you could write. They also start up as fast as possible, which is good because I’m measuring the time it takes to start up all the coroutines. From there on I display the frame rate the app is running at. Here’s the code:
using System; using System.Diagnostics; using System.Reflection; using UnityEngine; using UnityEngine.UI; using System.Collections; public static class StopwatchExtensions { public delegate void TestFunction(); public static long RunTest(this Stopwatch stopwatch, TestFunction testFunction) { stopwatch.Reset(); stopwatch.Start(); testFunction(); return stopwatch.ElapsedMilliseconds; } } public class TestScript : MonoBehaviour { private Rect drawRect; private bool showModeScreen; private long startTime; private const float UpdateInterval = 1; private float totalTime; private int numFrames; private float timeleft; private float fps; void Start() { drawRect = new Rect(0, 0, Screen.width, Screen.height); showModeScreen = true; } void OnGUI() { if (showModeScreen) { GUI.Label(new Rect(0, 0, 200, 25), "How many coroutines?"); if (GUI.Button(new Rect(0, 25, 100, 25), "1,000")) { StartTest(1000); } else if (GUI.Button(new Rect(0, 50, 100, 25), "10,000")) { StartTest(10000); } else if (GUI.Button(new Rect(0, 75, 100, 25), "100,000")) { StartTest(100000); } } else { timeleft -= Time.deltaTime; totalTime += Time.timeScale / Time.deltaTime; numFrames++; if (timeleft <= 0) { fps = totalTime / numFrames; timeleft = UpdateInterval; totalTime = 0; numFrames = 0; } GUI.Label(drawRect, "Start Time: " + startTime + ", FPS: " + fps); } } private void StartTest(int numCoroutines) { showModeScreen = false; var stopwatch = new Stopwatch(); startTime = stopwatch.RunTest( () => { for (var i = 0; i < numCoroutines; ++i) { StartCoroutine(CoroutineFunction()); } } ); } private IEnumerator CoroutineFunction() { do { yield return null; } while (true); } }<<
First off, starting coroutines is cheap. It’s not free either, but unless you’re suddenly starting hundreds of them in a single frame it probably won’t matter. In cases where you’re tempted to do that, like making a bunch of game objects all move to a destination at the exact same time, you should probably consider spreading that work out over multiple frames or not using coroutines.
As for the actual coroutine performance, we see here that it’s certainly possible to drive the frame rate into the ground with nothing but empty coroutines. In this sense, the overhead is enough to make them expensive. On the other hand, a hundred thousand coroutines is probably not a realistic number. But even 1000 is much higher than the test computer’s frame rate with no coroutines: about 740. There is definitely some expense to them above and beyond simple function calls.
So if you’re looking for maximum performance, coroutines aren’t for you. However, they are really quite cheap when used in small numbers. A few dozen shouldn’t be much of an issue for most games.
Do you have any thoughts to add on coroutines in Unity? Love ’em? Hate ’em? Had performance troubles? Post a comment!
#1 by Shawn Blais Skinner on May 19th, 2015 · | Quote
The one issue I just ran into with co-routines, was Garbage Collection. I didn’t realize that everytime you cann StartCoroutine, it allocated 9bytes of memory. And each time you do something like yield return new WaitForSeconds(.1f) you’re generating even more garbage.
Given that, there’s some best practices. For example, avoid calling StartCoroutine if you can, so prefer
To:
Or use internal timers to monitor time, rather than WaitForSeconds(), so prefer:
To:
#2 by jackson on May 19th, 2015 · | Quote
It makes sense that some memory would be allocated behind the scenes from a
StartCoroutinecall, but it’s good to have the exact byte figure. I haven’t yet run into a case where I needed to start a lot of them at once, so this hasn’t been an issue for me. How has the 9 byte allocation impacted your projects?
As for the
WaitForSecondsclass, have you considered reusing instances of it? I never have, but if your wait times are constant then it seems like that may be a viable alternative.
#3 by Jarnak on June 17th, 2015 · | Quote
Quite interesting, thank you for this post!
#4 by Leucaruth on July 4th, 2015 · | Quote
I just discovered your site while searching for coroutines performance. Thanks for all your hard work. Its really useful to have these kind of unity references so I just suscribed. Please, keep the good work :)
#5 by jackson on July 5th, 2015 · | Quote
Glad to hear you’re enjoying the site! Let me know—in comments or mail—if there’s anything in particular you’d like to see articles about.
#6 by Idea++ on March 2nd, 2016 · | Quote
Thanks for the article. There is a bug in the test. OnGUI() may be called multiple times in the same frame. Therefore the resulting Start Time and FPS values are unreliable. Use Time.frameCount to check if it’s in the same frame, if the time bookkeeping has to be in OnGUI().
#7 by jackson on March 2nd, 2016 · | Quote
Thanks for pointing this out! You’re correct that
OnGUIcan be called more than once per frame, leading to inaccuracies. I adjusted it to only count if
Time.frameCounthad advanced and re-ran the test using the same computer but Unity 5.3.2f1. Here are the results I got:
The second test now hits the 60 FPS cap. I adjusted it to 50,000 coroutines and got about 16 FPS, just as a data point.
The conclusion remains the same though: coroutines are cheap and probably not an issue for most games unless you start running thousands of them.
Thanks again for pointing out the issue!
#8 by bzor on April 14th, 2016 · | Quote
people might be interested in this:
it’s helped me squeeze some perf out where I was using a lot of coroutines | https://jacksondunstan.com/articles/2981 | CC-MAIN-2019-09 | refinedweb | 1,221 | 65.83 |
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
For the last two hours, under the heading of object-oriented, encapsulated code without global variables, I have been reading spaghetti code. Every class method I look at involves a few private instance variables whose lifetimes are as long as the object itself. Ok, the scope is limited to the class methods only, but for a class of a certain size, how is this any different from global variables?
PS: No, they are not static member fields. In general, I have nothing against singletons.
Right, public static variables in a class have all of the same semantics as global variables. It was an interesting tradeoff that the Java designers decided to allow static public variables in classes, but not global variables in packages - because they're essentially the same thing.
It's a good practice to avoid the unnecessary use of global variables and static public class member variables, though they are often hard to avoid. The Unreal codebase has a number of global variables, including a particularly onerous one named UglyHackFlags.
Because a public getter and setter for a private class variable is little better.
Doesn't fluid (dynamic) scoping (and the related with-foo lisp idiom) help alleviate the problem by exposing a more controlled interface to global variables? For example, in the filehandle example, dynamic scoping would help us guarantee that the handle won't suddenly change to something else (especially that it won't be closed) -- except in the dynamic scope of a function that explicitly changes it.
[EDIT: I guess assignment is a better word than side-effect when trying to contrast with fluid scoping]
The primary evil of global variables is unexpected side-effects and dynamic scoping removes that evil. It effectively turns them into arguments that get implicitly passed to all functions. It also has one extra advantage.
Consider a pretty-printer implemented with the visitor patter. You need to store the indentation level in the visitor object, this basically makes it an evil global variable and requires that you write lots of error-prone code to make sure side-effects are undone - even in the presence of exceptions and functions with multiple return points. You basically end up rolling your own dynamic scope. One alternative is to make your visitor infrastructure capable of passing extra paramerts but then your visitor's all messed up and more paramters are going to result in a lot of stack twiddling, especially as they would have to be passed by value.
The same goes for anything else involving callbacks where some state has to be passed through the callback.
It's a pity that more languages don't support it.
I've read a little bit about dynamic scoping and I found the concept somewhat difficult/dangerous. Could you explain how it actually helps with safety ?
I don't speak for fergal, but in a nutshell, dynamic scoping definitely has some evilness itself, but it is certainly less evil than state (as typically used by a "typical" programmer). If dynamic scoping can replace a use of state, then that is almost certainly a win. If dynamic scoping itself can also be gotten rid of, all the better.
There are of course trivial examples of long lived "objects" outside of your program you might be connected with e.g. databases. So instead of acting like Sysiphus or fighting against Hydra I would vote for better techniques of distributing responsibilities and monitoring. Your suggestion to localize effects are of course part of the answer because you delegate responsibility for creation and destruction of an object to a certain distinct code block. But an onion-like program cannot perform communication well. Another technique I like, allthough the realization might not be radical enough or could be refined, are friend classes in C++ where collaborations are determined by the owner of an attribute. So why not extend the concept of locality to a certain group of classes where each class determines the boundarys of sharing it's own attributes ( or a group of them ) with others? This would imply something like delegation-contracts. Public variables would really be public only for the purpose of visibility by 3rd-party module/package authors.
Eiffel has something akin to a more refined friend declaration. You can specify what classes can see what features. A friend a la C++ can see all private data.
Section 5. here is a good grounding.
Not a great fan of Eiffel but I always like this aspect of it. I think there's problems with derived classes changing visibility though.
Global variables are not so bad. For instance in most of the languages Classes are globals.
However it's sometimes difficult to reuse a library that is storing some states into global variables, since often it means that you cannot use it in multithread for example.
Restricting the global access to a given class (using private) is then a good thing since if you want to remove the global state you'll hopefully only have this class to modify.
A global variable (a mutable storage cell) holds state that may be read and written from anywhere and is used to pass state implicitly. Properly sequencing accesses to a global variable is usually very important.
Usually a class is a declaration (comparable to a constant) that may be read from anywhere, but is not (usually) written to and is not used for passing state implicitly. There is (usually) no need to sequence accesses to a class.
Your comment that classes are read-only and are thus more like constants is true only in a few OO languages. It's true in Java, for instance, but not in Ruby or Python (or Smalltalk or Self, from what I understand).
Also, it doesn't take into account stateful effects on classes. Many classes have static fields with getters and setters (or constant fields which point to objects with getters and setters of their own), in which case a class isn't just a single global variable. It's actually an entry point to a whole universe of global state.
Finally, even in Java there's a very strong need to sequence accesses to a class in a wide variety of cases. Consider for instance the subtleties involved in class initialization. Just a couple of weeks ago I spent a lot of time tracking down a race condition involving a complex web of static initialization blocks in a Java application.
Well, this depends on what you mean by read-only. What I mean is that a class declaration specifies the components (methods, variables, etc...) of a class and the set of components usually won't change after the declaration. This is the case in basically all statically typed OO languages (ignoring AOP extensions). As you say, changing the set of components of a class is possible in some (dynamically checked) languages, but even then it is
recommended practise to make class declarations complete and avoid mutating classes (adding and removing methods, for instance) after the fact, because such mutations can be very confusing. And, this is why I said "usually" more than once.
The rest of what you say is irrelevant in this respect. Many OO languages allow you to have (global) variables at class scope and it should not be news to anyone that it can cause problems just like global variables.
The comments below are ones I previously used in a related discussion.
"*Stateful* Singletons" are evil; they are just Object Oriented global variables. "Stateless Singletons" which are just used to avoid creating multiple instances of an Object with no instance variables, is fine. “magical” connection to this shared pool of global or static information or services. In all other fields, components, be it CPU or carburetor, only have access to what they’re explicitly connected to. Because of this limitation they are guaranteed to be re-usable and re-composable. I can use a given CPU in a computer or a DVD player and I can put two tail pipes on a car if I want. Software on the other hand has all sorts of artificial restrictions. Try to instantiate two instances of your favourite ORB (because you want to be configured with different hostnames of socket layers on different networks for example), or database driver or logging package. Even having one part of your application use a different System.out than the other can be a problem. Software components which rely on globals being one way or another are potentially excluding other software from running (especially another instance of themselves). Globals/Statics are holding back the entire software industry (or rather, the people who use them are); and that’s why I classify them as “evil”.
While I mostly agree with your comments, I think it's important to realize that even "real world" objects in other engineering disciplines do suffer from the equivalent of "global variables" that components are not explicitly wired to, and "magical" connections to shared "information". Two examples of this kind of thing are the radiative and convective thermal environment of a system, and the electromagnetic environment of a system. Now, in many cases, these things are not really an issue. But in some designs electromagnetic coupling between different components is a major concern, and design for electromagnetic compatibility becomes a major focus of design efforts. In other cases, thermal interference between components is an issue, and it becomes necessary to add extra thermal control elements, such as insulation or thermal straps, to maintain the correct thermal environment. In both of these example situations, it isn't feasible to just plug components together and go. In fact, my own experience is that these implicit interfaces between components and subsystems are a significant contributor to the difficulties of design and integration for "real" systems (such as spacecraft).In other disciplines, these problems are managed through careful analysis during the design phase. In the software world we have two rational options: we can, like other disciplines, perform careful analysis (using various formalisms); or, since we are dealing with purely artificial constructs, we can eliminate the globals and make our lives much easier. Sadly, the default choice seems to be a third option: use globals, and skip the careful analysis. That, at least IMHO, is why component-based software has run into problems.
Incidentally, this is exactly the problem that Odersky and Zenger try to address. I've said it before, and I guess I'll just keep saying it: I think this paper presents a very compelling approach that works today in Scala, and they make it very clear what language features enable this pattern. I definitely urge you to take a look.
(And yeah I realize this isn't the problem the original post was trying to get at...)
Aggred, Stateful Singletons are evil. They required for some complex situations where you need to have just one (generally when you have real hardware that you need strict access control on within your program). However I think the GoF made a big mistake when they put them in their book. (Though in their defense, they seem aware of all the problems, they just didn't tell you strongly enough in the book that you shouldn't use them if there is any other choice)
There is a time and place for global variables. There is a time and a place for singletons. In the real world those times and places are rare. I've debugged many programs just by removing instance() and making the program compile.
is that writing a class which declares, a priori, "there shall be only one!", is almost always a violation of separation of concerns.
The correct (IMHO!) way to implement a singleton is to write your class (or other datatype) as normal; then use an external "SingletonHolder" or "InstanceManager" to manage the single (or n if you prefer) instances of the class.
The Singleton pattern, as is currently specified, requires one to modify the class in question if one wants to change the number of allowable instances in a program. With an external instance manager; all you need to do is change the configuration of the instance manager.
Since a class is already an object factory that knows how to instantiate objects I do not understand why do you think it needs additional management classes to manage multiplicities? The requirement of creating exactly n instances ( with n>1 ) seems there for pure theoretical reasons and I beg for non-trivial use cases. At least for me your suggestion looks like bloat and overdesign I know from Java projects where colleagues and me tried to figure out the responsibilities of all the clunky Manager-, Helper-, Holder- etc. stuff.
Instantiation strategies (singletons, object pooling, ...) also fall very neatly into the domain of metaclasses. See, for example, this paper (which, now that I remember it, definitely belongs in a top list for 2005)...
Thanks Matt, for linking this. I will take a look at the paper. As for Python metaclasses that are mentioned by segphault too, Python classes reference metaclasses that customize them explicitely i.e. they accept metaclass manipulation. This is satisfying since instantiation still remains in the responsibility of the class and interfaces need not to be changed so that the code remains DRY - something I don't see with Scott Johnstons SingletonManager instances that duplictate access points for instance access/creation. Nevertheless I'm not much worried about his decoupling solution since the feared codebloat as it is present in much Java designs is obviously not there - on the contrary. Its a fair tradeoff. I should have known LtU readers better.
Kay
I don't mean for every class Foo that you want to make a singleton; you need a FooManager class.
Instead, have a pre-existing class, SingletonManager, which is global--and which will only bind to one instance of something else. By using generics/templates; this can be done without any obnoxious casting (one reason in support of the Singleton pattern in Java is that the get_impl() method can return the correct type, rather than Object).
Often time, the real problem you want to solve is NOT preventing users from creating two instances of class Foo; it's to make sure that (within some context), there is exactly one Foo which is the "official" Foo. In a properly designed class (i.e. one that isn't oozing with static non-final/const members), having additional instances of the class generally aren't harmful--the issue is ensuring unique or synchronized access to some external resource.
When a class is used to manage/synchronize access to some external resource, in many cases the IDEAL solution is for the external resource to provide its own syncronization. That isn't always an option, of course.
More sophisticated instance managers can provide more interesting semantics. For example--in the context of logfiles; you could have a method which returns the current Logger associated with a given system logfile (say, /var/log/myApp.log), creating a new Logger if one doesn't already exist. If Logger is a Singleton; no other Logger instances (pointing to different logfiles) can exist; but if Logger is a non-Singleton class whose instances are managed by an instance manager; it becomes far more useful.
Virtually every electrical component of your car is connected to the common ground a.k.a the battery's negative terminal. On older cars, that common ground is accessible globally via the car's frame.
Or consider wireless routers. Every comptuter is my house is connected to the same global resouce, the router. There is only one true router, and all communication to the outside world goes through it.
Both of which are examples of particular singleton objects, not grounds (as a class) or routers (as a class). This is the core of the argument against singleton classes and for factories, wrappers, etc. which enforce singleton-ism where appropriate.
Surely a "ground" class is more general, more readily reusable and has less unnecessary complexity than a "singleton ground" class (assuming the difference is the "singleton" bit rather than some hidden details in the "ground" part, which would invalidate the example anyway). Why waste time duplicating functionality (singleton-ism) where it is not needed and reducing reusability?
...there can be multiple grounds in a circuit, and not all are alike. Analog ground, digital ground, safety ground, etc. Even though the impedance of a huge hunk of metal (or a circuit board layer) is low, it is not zero; detectable difference in potential may exist at different points in a circuit region called "ground".
A proper and thorough discussion of this is outside the scope of LtU (and my technical ability); but be rest assured, "ground" is frequently not a singleton.
Good luck reusing your RouterSingleton in a bridge device or a computer with multiple network cards.
My problem is with non-static, private instance variables and (especially non-const) private methods.
Imagine a class with 6 public methods, 6 private helper methods and 6 instance variables. Some of these instance variables are objects themselves, and probably quite large.
This looks innocent at the first glance, but what we have here is a miniature program with 12 functions and 6 global variables.
The situation is not exactly as bad as 12 functions and 6 global variables in the non-OO world where the functions could access any global, and here they are limited to the private instance variables.
However, that's of little help. If these 12 functions are in the same file, I could pretty easily figure out which globals they have access to and come up with the list of 6.
Then the problem becomes figuring out exactly which function touches exactly which globals, which is as hard as the OO case.
If you were to refactor the non-OO version, you would decouple many of the functions from the globals by passing the required state as arguments. In the end, perhaps only a few of them would still have a dependency on globals.
But, why shouldn't this be done for the OO case as well? At the very least, why aren't the private methods always static so that the public methods have to pass the state explicitly?
Every public method still has full access to (*this), and I wish I could think of a way to express that it doesn't access all of (*this), but I can't.
It's just so easy to tack on a new private instance variable and a few new methods when the class needs to do something more, especially as the original author who knows that the original 12 methods will have nothing to do with the new instance variable. However, as a reader of the code, you now see 13+ methods that can access 7 instance variables.
I have been struggling with the same exact issues you raise. In fact I was going to comment earlier about this, but after reading comments by others, I thought I was the one who misunderstood! :)
For the past several weeks I've been thinking about how inside of a class, member variables are essentially global variables regardless of their access modifiers. I was thinking about how, when working with a class, you have to keep all of these details in your head. This isn't as big of a problem as the global variables of yore, but yet, OOP is like spaghetti code in its own way. The situation is much better now, because instead of having one large platter of spaghetti you have several smaller plates of spaghetti. Some are very small and quite manageable, others are larger and much more difficult to reason about.
I concluded that a class should only contain private member variables for what it absolutely must. Everything else should be passed from method to method, much like in procedural or functional programming. For me these thoughts were interesting thoughts, and a good idea in general, but they were just thoughts: until last week. You see, last week I had to add some functionality to a class that somebody else wrote. Of course I knew that changing the behavior of the existing methods was a bad idea, so I was careful to not change anything. So, I went ahead and wrote my code, using some of the private methods of the class in my new code. I made a major mistake. I didn't write a unit test. The next day a bug showed up: I got the sequence wrong. Unit test! Oh, I should have written a test case! Just to make sure. But it was such a small change, and it was so very simple. (I know, bad excuses! Bad Ben, bad! Doh!) The problem was the methods were mucking around with the private member variables. It got me thinking: now, unit testing should have caught the error, right? But that doesn't mean the problem just goes away. If unit testing fixes everthing, then we don't need to avoid gloabal variables or anything. "Just write a unit test!" If you have to get the sequence of method calls just right (even thought it isn't obvious that the methods need to be sequenced) isn't that telling something about the design? But there was nothing wrong with the design as far as OOP goes. Yet, it leaves me with a bad taste in my mouth. I can hear everybody saying: Unit Test! But in a way, private member variables are not very amiable to testing, are they? Passing arguments into methods instead of using private member variables just seems like a better idea to me (if you can help it) as it just seems safer and more amiable to unit testing.
But in a way, private member variables are not very amiable to testing, are they?
The problem is caused by encapsulation, which is not unique to OO. However, non-static private methods additionally require a whole object to be constructed in order to be tested. This could be ok, but sometimes, these objects establish connections to databases, etc. so the unitness of unit tests becomes questionable. Now you have to create a mock object if you like to decouple your unit tests from the database, etc. etc.
Happy New Year everybody.
It looks like to me that the problem was that the code imposed a defined order to call private methods. Yes, instance variables allow this sort of bad design, but it's bad design nontheless. In my experience, when instance variables are semantically germane to the class' responsibilities they don't pose a problem.
What's wrong with the standard-OO tools, derivation, composition, maybe MI or mixins? Isn't that exactly what they do?
... At the very least, why aren't the private methods always static so that the public methods have to pass the state explicitly?
Funny, I always thought the other way round: why make private static methods at all? I mean, if it's a static method, it doesn't depend on the internals of the class and is predestined for reuse, so why hide it? Chances are, you'll be hiding away a useful tool method...
What's wrong with the standard-OO tools, derivation, composition, maybe MI or mixins? Isn't that exactly what they do?
I mean, if it's a static method, it doesn't depend on the internals of the class and is predestined for reuse, so why hide it? Chances are, you'll be hiding away a useful tool method...
If there's a reasonable demand, surely we can promote that method to the public interface. My problem is not with what is public or private, though. It's about limiting the portion of the object's state that a method has access to. I can't do anything to limit the public methods, but I can limit the private methods.
My problem is not with what is public or private, though. It's about limiting the portion of the object's state that a method has access to. I can't do anything to limit the public methods, but I can limit the private methods.
It would seem that what you really want is decent fine grained control of what state variables can be accessed by any given method/function/procedure, and that certainly is available in the right languages. In Java, using JML annotations and tools, you can declare, at the top of each method (public or private), which instance variables are assignable, or accessible from within the method (and even which other methods are callable from within the given method), and have checks that such restrictions are followed (as unit tests, or runtime checks, or even static checks with ESC/Java2). A language like SPARKAda provides similar functionality (though not with quite the same OO mentality) via global in out and derives annotations to provide control over what state variables are accessed or changed, and what inputs/other state variables any changes are derived from. SPARKAda even goes so far as to refuse a function or procedure access to any state variables unless you specifically grant it in a global annotation.
Koray, you and Benjamin raise some interesting observations, and something that has always bothered me about mainstream OO languages. It seems to me that it's far too easy to basically get yourself in a sphagetti mess of coupling and state because it's so easy to just throw another instance variable and another random method into a class.
Maybe I'm not understanding the other posts here. But I think some people misunderstood the original post.
Unfortunatly, I can't seem to load blog.lab49.com, so maybe I am missing something in that reference.
I understood Koray Can's complaint to be about instance variables that are utilized in many methods. This is not surprising to me and just points out that 1) no matter the language, it comes down to the programmer and good design; 2) 00 code benefits a lot from applied Functional principles.
If you have a bloated object, with many methods accessing the same member variable, you can run into the same old problems caused by global variables. I don't think globals are inherently bad. Proper naming (possibly with hungarian notation to indicate purpose, not data type) and proper functionalization of behavior can make good use of "global within an instance" variables.
To me, oo is about clean scoping and controlled access. Functionalism are about more readable code and maybe fewer lines. But just because you have the power specify scope doesn't mean you will do it well.
I understood that as his meaning as well.
It sounds like the classes have some very obvious "code smells" that can be used to direct refactoring to improve the design.
1) Groups of methods that use only a subset of the instance variables. It sounds like the methods and variables they use should be in a separate class.
2) If the use of instance variables becomes as confusing as system-wide global variables, then the class is much, much too big. It should be split into multiple, collaborating classes. Other smells, such as the one above, will indicate where to split the big class.
So, yes, instance variables are like global variables in a way. The way to avoid the problems inherent in global variables when doing OO programming is by divide and conquer. Just as a global variable is not a problem in a small program, so an instance variable is not a problem in a small class.
I guess Matthias Felleisen's "Functional Objects" presentation is relevant here:
Slides of the presentation [pdf]
LtU discussion | http://lambda-the-ultimate.org/node/1206 | CC-MAIN-2013-48 | refinedweb | 4,663 | 61.56 |
[
]
Craig L Russell commented on JDO-669:
-------------------------------------
It's a test case bug. It's not permitted to access a persistent field after the object has
been deleted.
The test case should be changed to perform a different check to see if the deleted instance
is in the collection. This can be done by saving the id before deleting the instance and later
checking to see if any of the instances in the collection have that id.
> TCK : RelationshipManyToManyAllRelationships.testDeleteFromMappedSide - problem with
check
> ------------------------------------------------------------------------------------------
>
> Key: JDO-669
> URL:
> Project: JDO
> Issue Type: Bug
> Components: tck
> Affects Versions: JDO 3
> Reporter: Andy Jefferson
> Assignee: Craig L Russell
> Fix For: JDO 3 maintenance release 1
>
>
> Whilst this test passes with current DataNucleus (2.2 M3), I was in the process of extending
its support for managed relationships, and now get this test to fail which provokes this question
:-
> pm.deletePersistent(proj1);
> pm.flush();
> deferredAssertTrue(!emp1.getProjects().contains(proj1),
> ASSERTION_FAILED + testMethod, "Postcondition is false; other side of relationship
not set on flush");
> After the call to deletePersistent() and flush() the object "proj1" is in P_DELETED state.
So when the call goes in to emp1.getProjects().contains(proj) this will interrogate the hashCode()
method of Project. This is defined as
> public int hashCode() {
> return (int)projid;
> }
> But when using datastore identity "projid" is not a primary-key field, and so, as per
section 5.5.6 of the spec
> <spec>Read access to primary key fields is permitted. Any other access to persistent
fields is not supported and might throw a JDOUserException.</spec>
> So what does the implementation do ?
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/db-jdo-dev/201011.mbox/%3C25107207.203791290198377065.JavaMail.jira@thor%3E | CC-MAIN-2015-27 | refinedweb | 285 | 53.71 |
Dare Obasanjo
Microsoft Corporation
January 13, 2005
Summary: Dare Obasanjo covers the Cω programming language created by Microsoft Research by augmenting C# with constructs to make it better at processing information, such as XML and relational data. (18 printed pages)
One of the main reasons for XML's rise to prominence as the lingua franca for information interchange is that, unlike prior data interchange formats, XML can easily represent both rigidly-structured tabular data (relational data or serialized objects) and semi-structured data (office documents). The former tends to be strongly typed and is typically processed using Object<->XML mapping technologies, while the latter tends to be untyped and is usually processed using traditional XML technologies like DOM, SAX, and XSLT. However, in both cases, there is somewhat of a disconnect for developers processing XML using traditional object oriented programming languages.
In the case of processing strongly typed XML using Object<->XML mapping technologies, there is the impedance mismatch between programming language objects and XML schema languages like DTDs or W3C XML Schema. Notions such as the distinction between elements and attributes, document order, and content models that specify a choice of elements are all intrinsic to XML schema languages, but have no counterpart in traditional object oriented programming. These mismatches tend to lead to some contortions and data loss when mapping XML to objects. Processing untyped XML documents using technologies such as XSLT or the DOM has a different set of issues. In the case of XSLT or other XML specific languages like Xquery, the developer has to learn a whole other language to effectively process XML as well as their programming language of choice. Often, all the benefits of the integrated development environment of the host language, such as compile time checking and IntelliSense, cannot be utilized when processing XML. In the case of processing XML with APIs such as the DOM or SAX, developers often complain that the code they have to write tends to become unwieldy and cumbersome.
As design patterns and Application Programming Interfaces (APIs) for performing particular tasks become widely used, they sometimes become incorporated into programming languages. Programming languages like C# have promoted concepts that exist as design patterns and API calls in other languages, such as native string types, memory management using garbage collection, and event handling to core constructs within the language. This evolutionary process has now begun to involve XML. As XML has grown in popularity, certain parties have begun to integrate constructs for creating and manipulating XML into mainstream programming languages. The expectation is that making XML processing a native part of these programming languages will ease some of the problems facing developers that use traditional XML processing techniques.
Two of the most notable examples of XML being integrated into conventional programming languages are Cω (C-Omega) produced by Microsoft Research, which is an extension of C# and ECMAScript for XML (E4X) produced by ECMA International, which is an extension of ECMAScript. This article provides an overview of the XML features of Cω and an upcoming article will explore the E4X language. The article begins with an introduction to the changes to the C# type system made in Cω, followed by a look at the operators added to the C# language to enable easier processing of relational and XML data
The goal of the Cω type system is to bridge the gap between Relational, Object, and XML data access by creating a type system that is a combination of all three data models. Instead of adding built-in XML or relation types to the C# language, the approach favored by the Cω type system has been to add certain general changes to the C# type system that make it more conducive for programming against both structured relational data and semi-structured XML data.
A number of the changes to C# made in Cω make it more conducive for programming against strongly typed XML, specifically XML constrained using W3C XML Schema. Several concepts from XML and XML Schema have analogous features in Cω. Concepts such as document order, the distinction between elements and attributes, having multiple fields with the same name but different values, and content models that specify a choice of types for a given field exist in Cω. A number of these concepts are handled in traditional Object<->XML mapping technologies, but it is often with awkwardness. Cω aims to make programming against strongly typed XML as natural as programming against arrays or strings in traditional programming languages.
Streams in Cω are analogous to sequences in XQuery & XPath 2.0 and the System.Collections.Generic.IEnumerable<T> type, which will exist in version 2.0 of the .NET Framework. With the existence of streams, Cω promotes the concept of an ordered, homogenous collection of zero or more items into a programming language construct. Streams are a fundamental aspect of the Cω type system upon which a number of the other extensions to C# rest.
A stream is declared by appending the operator '*' to the type name in the declaration of the variable. Typically streams are generated using iterator functions. An iterator function is a function that returns an ordered sequence of values by using a yield statement to return each value in turn. When a value is yielded, the state of the iterator function is preserved and the caller is allowed to execute. The next time the iterator is invoked, it continues from the previous state and yields the next value. Iterators functions in Cω work like the iterators functions planned for C# 2.0. The most obvious difference between iterator functions in Cω and iterator functions in C# is that Cω iterators return a stream (T*), while C# 2.0 iterators return an enumerator (IEnumerator<T>). However, there is little difference in behavior when interacting with a stream or an enumerator. The more significant difference is that, just like sequences in XQuery, streams in Cω cannot contain other streams. Instead, when multiple streams are combined the results are flattened into a single stream. For example, appending the stream (4,5) to the stream (1,2,3) results in a stream containing five items (1,2,3,4,5), not a stream with four items (1, 2, 3, (4, 5)). In C# 2.0, it isn't possible to combine multiple enumerators in such a manner, although it is possible to create an enumerator of enumerators.
The following iterator function returns a stream containing the three books in the Lord of the Rings trilogy.
public string* LoTR(){
yield return "The Fellowship of the Ring";
yield return "The Two Towers";
yield return "The Return of the King";
}
The results of the above function can be processed using a traditional C# foreach loop as shown below.
public void PrintTrilogyTitles(){
foreach(string title in LoTR())
Console.WriteLine(title);
}
A powerful feature of Cω streams is that one can invoke methods on a stream, which are then translated to subsequent method calls on each item in the stream. The following method shows an example of this feature in action:
public void PrintTrilogyTitleLengths(){
foreach(int size in LoTR().Length)
Console.WriteLine(size);
}
The method call above results in the value of the Length property being invoked on each string returned by the PrintTrilogyTitles() method. The ability to access the properties of the contents of a stream in such an aggregate manner allows one to write XPath-style queries over object graphs.
There is also the concept of an apply-to-all-expressions construct that allows one to apply an anonymous method directly to each member of a stream. These anonymous methods may contain the special variable it, which is bound to each successive element of the iterated stream. Below is an alternate implementation of the PrintTrilogyTitles() method which uses the apply-to-all-expressions construct.
public void PrintTrilogyTitles(){
LoTR().{Console.WriteLine(it)};
}
Cω's choice types are very similar to union types in programming languages like C and C++, the '|' operator in DTDs and the xs:choice element in W3C XML Schema. The following is an example of a class that uses a choice type:
public class NewsItem{
string title;
string author;
choice {string pubdate; DateTime date;};
string body;
}
In this example, an instance of the NewsItem class can have a pubdate field of type System.String or a date field of type System.DateTime, but not both. It should be noted that the Cω compiler enforces that each field in the choice should have a different name, otherwise there would be ambiguity as to what type was intended when the field is accessed. The way fields in a choice type are accessed in Cω differs from union types in C and C++. In C/C++, the programmer has to keep track of what type the value in a particular union type represents because no static type checking is done by the compiler. Cω union types are statically checked by the compiler, which allows one to declare them like so:
choice{string;DateTime;} x = DateTime.Now;
choice{string;DateTime;} y = "12/12/2004";
However, there is still the problem of how to statically declare a type that may depend on a field in a choice type that does not exist. For example, in the above sample, x.Length is not a valid property access if the variable is initialized with an instance of System.DateTime, but returns 10 when initialized with the string "12/12/2004". This is where nullable types come in to play.
In both the worlds of W3C XML Schema and relational databases, it is possible for all types to have instances whose value is null. Languages such as Java and existing versions of C# do not allow one to assign null to an integer or floating point value type. However, when working with XML or relational data it is valuable to be able to state that null is a valid value for a type. In such cases, one would not want property accesses on the value to result in NullReferenceExceptions being thrown. Nullable types make this possible by mapping all values returned by a property access to the value null. Below are some examples of using nullable types.
string? s = null;
int? size = s.Length; // returns null instead of throwing
// NullReferenceException
if(size == null)
Console.WriteLine("The value of size is NULL");
choice{string;DateTime;} pubdate = DateTime.Now;
int? dateLen = pubdate.Length; //works since it returns null because
//Length is a property of System.String
int dateLen2 = (int) pubdate.Length; //throws NullReferenceException
This feature is similar to but not the same as nullable types in C# 2.0. A nullable type in C# 2.0 is an instance of Nullable<T>, which contains a value and an indication whether the value is null or not. This is basically a wrapper for value types such as ints and floats that can't be null. Cω takes this one step further with the behavior of returning null instead of throwing a NullReferenceException on accessing a field or property of a nullable type whose value is null.
Anonymous structs are analogous to the xs:sequence element in W3C XML Schema. Anonymous structs enable one to model certain XML-centric notions in Cω, such as document order and the fact that an element may have multiple child elements with the same name that have different values. An anonymous struct is similar to a regular struct in C# with a few key distinctions:
The following examples highlight the various characteristics of anonymous structs in Cω.
struct{ int; string;
string; DateTime date;
string;} x = new {47, "Hello World",
"Dare Obasanjo", date=DateTime.Now,
"This is my first story"};
Console.WriteLine(x[1]);
DateTime pubDate = x.date;
struct{ long; string; string;
DateTime date; string;} newsItem = x;
Console.WriteLine(newsItem[1] + " by " + newsItem[2] + " on " + newsItem.date);
struct {string field;
string field;
string field;} field3 = new {field="one",
field="two",
field="three"};
string* strField = field3.field;
//string strField = field3.field doesn't work since field3.field returns a stream
struct {int value; string value;} tricky = new {value=10, value="ten"};
choice {int; string;}* values = tricky.value;
A content class is a class that has its members grouped into distinct units using the struct keyword. To some degree, content classes are analogous to DTDs in XML. Consider the following content class for a Books object:
public class Books{
struct{
string title;
string author;
string publisher;
string? onloan;
}* Book;
}
This sample is analogous to the following DTD:
<!ELEMENT Books (Book*)>
<!ELEMENT Book (title, author,publisher, onloan?)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT author (#PCDATA)>
<!ELEMENT publisher (#PCDATA)>
<!ELEMENT onloan (#PCDATA)>
As well as the following XML schema:
<xs:schema xmlns:
following code sample shows how the Books object would be used:
using Microsoft.Comega;
using System;
public class Books{
struct{
string title;
string author;
string publisher;
string? onloan;
}* Book;
public static void Main(){
Books books = new Books();
books.Book = new struct {title="Essential.NET", author="Don Box",
publisher="Addison-Wesley", onloan = (string?) null};
Console.WriteLine((string) books.Book.author + " is the author of " +
(string) books.Book.title);
}
}
Cω adds two broad classes of query operators to the C# language:
With the existence of streams and anonymous structs that can have multiple members with the same name, even ordinary direct member access with the '.' operator in Cω can be considered a query operation. For example, the operation books.Book.title from the previous section returns the titles of all the Book objects contained within the Books class. This is equivalent to the XPath query '/Books/Book/title' that returns all the titles of Book elements contained within the Books element.
The wildcard member access operator '.*' is used to retrieve all the fields of a type. This operator is equivalent to the XPath query child::* that returns all the child nodes of the principal node type of the current node. For example, the operation books.Book.* return a stream container all the members of all the Book objects contained within the Books class. This is equivalent to the XPath query '/Books/Book/*' that returns all the child elements of the Book elements contained within the Books element.
Cω also supports transitive member access using the '...' operator, which is analogous to the descendent-or-self axis or '//' abbreviated path in XPath. The operation books...title returns a stream containing all title member fields that are contained within the Books class or are member fields of any of its contents in a recursive manner. This is equivalent to the XPath query '/Books//title' that returns all title elements that are descendants of the Books element. The transitive member access operator can also be used to match nodes restricted to a particular type using syntax of the form '...typename::*'. For example, the operation books...string::* returns a stream containing all the member fields of type System.String that are contained within the Books class or are member fields of any of its contents in a recursive manner. This is analogous to the XPath 2.0 query /Books//element(*, xs:string) that matches any descendent of the Books element of type xs:string.
Filter operations can be applied to the results of a transitive member access in the same manner that predicates can be used to filter XPath queries. Just as in XPath, a Cω filter is applied to a query operation by using the '[expression]' operator placed after the query. As is the case with apply-to-all-expressions, the filter may contain the special variable it, which is bound to each successive element of the iterated stream. Below is an example, which queries all the fields of type System.Int32 in an anonymous struct, and then filters the results to those whose value is greater than 8.
struct {int a; int b; int c;} z = new {a=5, b=10, c= 15};
int* values = z...int::*[it > 8];
foreach(int i in values){
Console.WriteLine(i + " is greater than 8");
}
Cω includes a number of constructs from the SQL language as keywords. Operators for performing selection with projection, filtering, ordering, grouping, and joins are all built into Cω. The SQL operators can be applied to both in-memory objects and relational stores that can be accessed using ADO.NET. When applied to a relational database, the Cω query operators are converted to SQL queries over the underlying store. The primary advantage of using the SQL operators from the Cω language is that the query syntax and results can be checked at compile time instead of at runtime, as is the case with embedding SQL expressions in strings using traditional relational APIs.
To connect to a SQL database in Cω, it must be exposed as a managed assembly (that is, a .NET library file), which is then referenced by the application. A relational database can be exposed to a Cω as a managed assembly either by using the sql2comega.exe command line tool or the Add Database Schema... dialog from within Visual Studio. Database objects are used by Cω to represent the relational database hosted by the server. A Database object has a public property for each table or view, and a method for each table-valued function found in the database. To query a relational database, a table, view, or table-valued function must be specified as input to the one or more of the SQL-based operators.
The following sample program and output shows some of the capabilities of using the SQL-based operators to query a relational database in Cω. The database used in this example is the sample Northwind database that comes with Microsoft SQL Server. The name DB used in the example refers to a global instance of a Database object in the Northwind namespace of the Northwind.dll assembly generated using sql2comega.exe.
using System;
using System.Data.SqlTypes;
using Northwind;
class Test {
static void Main() {
// The foreach statement can infer the type of the
// iteration variable 'row' statically evaluating
// the result type of the select expression
foreach( row in select ContactName from DB.Customers ) {
Console.WriteLine("{0}", row.ContactName);
}
}
}
The following sample program and output shows some of the capabilities of using the SQL-based operators to query in-memory objects in Cω.
Code Sample
using Microsoft.Comega;
using System;
public class Test{
enum CDStyle {Alt, Classic, HipHop}
static struct{ string Title; string Artist; CDStyle Style; int Year;}* CDs =
new{new{ Title="Lucky Frog", Artist="Holly Holt", Style=CDStyle.Alt, Year=2001},
new{ Title="Kamikaze", Artist="Twista", Style=CDStyle.HipHop, Year=2004},
new{ Title="Stop Light Green", Artist="Robert O'Hara", Style=CDStyle.Alt, Year=1981},
new{ Title="Noctures", Artist="Chopin", Style=CDStyle.Classic, Year=1892},
new{ Title="Mimosa!", Artist="Brian Groth", Style=CDStyle.Alt, Year=1980},
new {Title="Beg For Mercy", Artist="G-Unit", Style=CDStyle.HipHop, Year=2003}
};
public static void Main(){
struct { string Title; string Artist;}* results;
Console.WriteLine("QUERY #1: select Title, Artist from CDs where Style == CDStyle.HipHop");
results = select Title, Artist from CDs where Style == CDStyle.HipHop;
results.{ Console.WriteLine("Title = {0}, Artist = {1}", it.Title, it.Artist); };
Console.WriteLine();
struct { string Title; string Artist; int Year;}* results2;
Console.WriteLine("QUERY #2: select Title, Artist, Year from CDs order by Year");
results2 = select Title, Artist, Year from CDs order by Year;
results2.{ Console.WriteLine("Title = {0}, Artist = {1}, Year = {2}", it.Title, it.Artist, it.Year); };
}
}
Output
QUERY #1: select Title, Artist from CDs where Style == CDStyle.HipHop
Title = Kamikaze, Artist = Twista
Title = Beg For Mercy, Artist = G-Unit
QUERY #2: select Title, Artist, Year from CDs order by Year
Title = Noctures, Artist = Chopin, Year = 1892
Title = Mimosa!, Artist = Brian Groth, Year = 1980
Title = Stop Light Green, Artist = Robert O'Hara, Year = 1981
Title = Lucky Frog, Artist = Holly Holt, Year = 2001
Title = Beg For Mercy, Artist = G-Unit, Year = 2003
Title = Kamikaze, Artist = Twista, Year = 2004
A number of operations that require tedious nested loops can be processed in a straightforward manner using the declarative SQL-like operators in Cω. Provided below is a brief description of the major classes of SQL operators included in Cω.
The projection of the select expression is the list of expressions following the select keyword. The projection is executed once for each row specified by the from clause. The job of the projection is to shape the resulting rows of data into rows containing only the columns required. The simplest form of the select command consists of the select keyword followed by a projection list of one or more expressions identifying columns from the source, followed by the from keyword, and then an expression identifying the source of the query. Here's an example:
rows = select ContactName, Phone from DB.Customers;
foreach( row in rows ) {
Console.WriteLine("{0}", row.ContactName);
}
In this example the type designator for the results of the select query are not specified. The Cω compiler automatically infers the correct type. The actual type of an individual result row is a Cω tuple type. One can specify the result type directly using a tuple type (that is, an anonymous struct) and the asterisk (*) to designate a stream of results. For example:
struct{SqlString ContactName;}* rows =
select ContactName from DB.Customers;
struct{SqlString ContactName; SqlString Phone;}* rows =
select ContactName, Phone from DB.Customers;
The results of a select expression can be filtered using one of three keywords—distinct, top, and where. The distinct keyword is used to restrict the resulting rows to only unique values. The top keyword is used to restrict the total number of rows produced by the query. The keyword top is followed by a constant numeric expression that specifies the number of rows to return. One can also create a distinct top selection that restricts the total number of unique rows returned by the query. The where clause is used to specify a Boolean expression for filtering the rows returned by the query source. Rows where the expression evaluates to true are retained, while the rest are discarded. The example below shows all three filter operators in action:
select distinct top 10 ContactName from DB.Customers where City == "London";
The resulting rows from a select expression can be sorted by using the order by clause. The order by clause is always the last clause of the select expression, if it is specified at all. The order by clause consists of the two keywords order by followed immediately by a comma-separated list of expressions that define the values that determine the order. The first expression in the list defines the ordering criteria with the most precedence. It is also possible to specify whether each expression should be considered in ascending or descending order. The default for all expressions is to be considered in ascending order. The example below shows the order by clause at work:
rows = select ContactName, Phone from DB.Customers
order by ContactName desc, Phone asc;
Values can be aggregated across multiple rows using the group by clause and the built-in aggregate functions. The group by clause enables one to specify how different rows are actually related, so they can be grouped together. Aggregate functions can then be applied to the columns to compute values over the group. An aggregate function is a function that computes a single value from a series of inputs; such as computing the sum or average of a series of numbers. There are six aggregate functions built into Cω. They are Count, Min, Max, Avg, Sum, and Stddev. To use these functions in a query, one must first import the System.Query namespace. The following example shows how to use the group by clause and the built-in aggregate functions.
rows = select Country, Count(Country) from DB.Customers
group by Country;
This example uses an aggregate to produce the set of all countries and the count of customers within each country. The Count() aggregate tallies the number of items in the group.
Aggregates can be used in all clauses evaluated after the group by clause. The projection list is evaluated after the group by clause, even though it is specified earlier in the query. A consequence of this is that aggregate functions cannot be applied in the where clause. However, it is still possible to filter grouped rows using the having clause. The having clause acts just like the where clause, except that it is evaluated after the group by clause. The following example shows how the having clause is used.
rows = select Country, Count(Country) from DB.Customers
group by Country
having Count(Country) > 1;
Select queries can be used to combine results from multiple tables. A SQL joining is a Cartesian product of one or more tables where each row from one table is paired up with each row from another table. The full Cartesian product consists of all such pairings. To select multiple sources whose data should be joined to perform a query, the from clause can actually contain a comma separated list of source expressions, each with its own iteration alias. The following example pairs up all Customer rows with their corresponding Order rows, and produces a table listing the customer's name and the shipping date for the order.
rows = select c.ContactName, o.ShippedDate
from c in DB.Customers, o in DB.Orders
where c.CustomerID == o.CustomerID;
Cω also supports more sophisticated table joining semantics from the SQL world including inner join, left join, right join, and outer join using the corresponding keywords. Explanations of the semantics of the various kinds of joins is available at the W3Schools tutorial on SQL JOIN. The following example shows a select expression that uses the inner join keyword.
rows = select c.ContactName, o.ShippedDate
from c in DB.Customers
inner join o in DB.Orders
on c.CustomerID == o.CustomerID;
The relational data access capabilities of Cω are not limited to querying data. One can also insert new rows into a table using the insert command, modify existing rows within a table using the update command, or delete rows from a table using the delete command.
The insert command is an expression that evaluates to the number of successful inserts that have been made to the table as a result of executing the command. The following example inserts a new customer in the Customers table.
int n = insert CustomerID = "ABCDE", ContactName="Frank", CompanyName="Acme"
into DB.Customers;
The same effect can be obtained by using an anonymous struct as opposed to setting each field directly as shown below:
row = new{CustomerID = "ABCDE", ContactName="Frank", CompanyName="Acme"};
int n = insert row into DB.Customers;
The update command is an expression that evaluates to the number of rows that were successfully modified as a result of executing the command. The following example shows how to do a global replace of all misspelled references to the city "London".
int n = update DB.Customers
set City = "London"
where Country == "UK" && City == "Lundon";
It is also possible to modify all rows in the table by omitting the where clause. The delete command is an expression that evaluates the number of rows that were successfully deleted as a result of executing the command. The following example deletes all the orders for customers in London.
int n = delete o from c in DB.Customers
inner join o in DB.Orders
on c.CustomerID == o.CustomerID
where c.City == "London";
Most applications that use insert, update, and delete expressions will also use transactions to guarantee ACIDity (atomicity, consistency, isolation, and durability) of one or more changes to the database. The Cω language includes a transact statement that promotes the notion of initiating and exiting transactions in a programming language feature. The transact statement is a statement bounding a block of code associated with a database transaction. If the code executes successfully and attempts to exit the block, the transaction will automatically be committed. If the block is exited due to a thrown exception, the transaction will automatically be aborted. The developer can also specify handlers that execute some user-defined once the transact block is exited using the commit and rollback handlers. The following example shows transactions in action:
transact(DB) {
delete from DB.Customers where CustomerID == "ALFKI";
}
commit {
Console.WriteLine("commited");
}
rollback {
Console.WriteLine("aborted");
}
In Cω, one can construct object instances using XML syntax. This feature is modeled after the ability to construct elements in languages like XQuery and XSLT. Like XQuery and XSLT, the XML literals can contain embedded code for constructing values. However, because Cω is statically typed, the names of members and types must be known at compile time, and cannot be constructed dynamically.
One can control the shape of the XML using a number of constructs. One can specify that fields should be treated as attributes in the XML literal with the use of the attribute modifier on the member declaration. Similarly, choice types and anonymous structs are treated as the children of the XML element that maps to the content class. The following example shows how to initialize a content class from an XML literal.
using Microsoft.Comega;
using System;
public class NewsItem{
attribute string title;
attribute string author;
struct { DateTime date; string body; }
public static void Main(){
NewsItem news = <NewsItem title="Hello World" author="Dare Obasanjo">
<date>{DateTime.Now}</date>
<body>I am the first post of the New Year.</body>
</NewsItem>;
Console.WriteLine(news.title + " by " + news.author + " on " + news.date);
}
}
XML literals are intended to make the process of constructing strongly typed XML much simpler. Consider the following XQuery example taken from the W3C XQuery Use cases document. It iterates over a bibliography that contains a number of books. For each book in the bibliography, it lists the title and authors, grouped inside a result element.
for $b in $bs/book
return
<result>
{$b/title}
{$b/author}
</result>
Performing the equivalent task in Cω looks very similar.
foreach (b in bs.book){
yield return <result>
{b.title}
{b.author}
</result>
}
The Cω language is an interesting attempt to bridge the impending mismatches involved in typical enterprise development efforts when crossing the boundaries of the relational, object oriented, and XML worlds. A number of the ideas in the language have taken hold in academia and within Microsoft as is evidenced in comments by Anders Hejlsberg about the direction of C# 3.0. Until then developers can explore what it means to program in a more data oriented programming language by downloading the Cω compiler or Visual Studio.NET plug-in.
I'd like to thank Don Box, Mike Champion, Denise Draper, and Erik Meijer for their ideas and feedback while writing this article.
Dare Obasanjo is a program manager on the MSN Communication Services Platform team. He brings his love of solving problems with XML to building the server infrastructure utilized by the MSN Messenger, MSN Hotmail, and MSN Spaces teams.
Feel free to post any questions or comments about this article on the Extreme XML message board on GotDotNet. | http://msdn.microsoft.com/en-us/library/ms974195.aspx | crawl-002 | refinedweb | 5,152 | 54.63 |
07 September 2012 13:31 [Source: ICIS news]
Correction: In the ICIS news story headlined “?xml:namespace>
This is a 1.5% increase from the same period a year earlier, according to the ministry.
Japan’s ethylene-equivalent exports – which include ethylene contained in derivative products – increased by 17% month on month to 143,000 tonnes in July, but decreased by 13% year on year, based on the data.
Among the exported ethylene derivatives, 42,791 tonnes of vinyl chloride monomer (VCM) were exported in July, a 59% increase from the previous month, and a 40% decrease from July 2011.
Exports of polyvinyl chloride (PVC) were up by 17% to 21,766 tonnes in July from June, and down by 49% compared with July 2011, according to METI.
Exports of styrene monomer (SM) increased by 25% to 91,033 tonnes in July from the previous month, and decreased by 11% year on year, | http://www.icis.com/Articles/2012/09/07/9593821/corrected-japan-ethylene-exports-increase-by-11-in-july.html | CC-MAIN-2014-49 | refinedweb | 153 | 67.49 |
Python Day 15
Love Earth Day ❤
the Earth day is coming! many years ago, i vaguely remember krupa, tim and i were spending the earth day together.
I forgot who said the best way to celebrate earth day is to throw Krupa in the trash can.
haha…
this year, I decided to celebrate the Earth day by throwing a lot of love to myself, my surroundings and the world using Python!
import math c='♥' width = 40 print ((c*2).center(width//2)*2) for i in range(1,width//10+1): print (((c*int(math.sin(math.radians(i*width//2))*width//4)).rjust(width//4)+ (c*int(math.sin(math.radians(i*width//2))*width//4)).ljust(width//4))*2) for i in range(width//4,0,-1): print ((c*i*4).center(width)) print ((c*2).center(width))
2. Number System
Elena used to like a joke, there are 10 kinds of Mathematicians, ones can count, ones couldn’t.😂
i thought this joke was making fun of the mathematicians who cannot count.
but, the punchline was 10 is 2 in the binary system.
So today we will try to write the number in different systems in Python.
Challenge: Given an integer, n, print the numbers in Decimal, Octal, Hexadecimal, Binary
Octal : number system based on 8 (1,2,3,4,5,6,7,10,11,…)
Hexadecimal: Number system based on 16 (1,2,3,4,5,6,7,8,9,A,B,C,D,E,F, 10,11,…)
Solution:
def print_formatted(number): # your code goes here n=number n1=bin(number) n2=oct(number) n3=hex(number) print(n,n1,n2,n3)
Output:
print_formatted(4) 4 0b100 0o4 0x4
Happy Python Learning! 🍀
Reference:
himanshurawlani | https://fangya18.com/2019/03/22/python-little-goal-9/ | CC-MAIN-2021-17 | refinedweb | 286 | 55.74 |
.IO;
using System.Net;
public class Bugged
{
static void CB (IAsyncResult res)
{
return;
}
public static void Main ()
{
var wr = HttpWebRequest.Create ("");
wr.Method = "POST";
var stream = wr.GetRequestStream ();
//str.WriteByte (0xAC); // Uncomment this to make it work
var gr = wr.BeginGetResponse (CB, null);
gr.AsyncWaitHandle.WaitOne ();
Console.WriteLine ("end");
return;
}
}
I did quite some digging when fixing other bug and the issue is most likely in how
void CheckIfForceWrite ()
works. It seems to assume that when the stream was requested it will always be filled.
Fixed; master 1ccb01e, mono-2-10 63506fa, mobile-master f0a2a4f
This is a .NET 4.0 feature. Fixed in 3a7a355 / 4025e28 / 2853ddd. | https://xamarin.github.io/bugzilla-archives/76/7637/bug.html | CC-MAIN-2019-35 | refinedweb | 107 | 60.92 |
I know late again, sorry. My excuse this time? I’ve just bought a Garmin Edge 305 GPS for my bike and I seem to have become obsessed with riding, my bike that is! Now down to business, last time we wrote our first test script this time we’re going to use the Ruby unit testing framework to run the script.
OUR FIRST TEST CASE
First create a new file called c:\watir\scripts\myfirsttc.rb with the following content. I know this code does not look particularly good in HTML but should be OK to cut and paste into an editor. Note to self: Find out which tags to use for posting code to your SAP blog!
require 'watir'
require 'test/unit'
class MyFirstTC < Test::Unit::TestCase
def setup
@ie = Watir::IE.new
end
end
def teardown
@ie.close
end
Now lets just break this script down and explain the new bits.
REFERENCE THE TEST LIBRARY
We need to reference the Ruby Unit Test Library this is done with one simple line, nothing new here.
require 'test/unit'
THE TEST CASE CLASS
To use the unit test framework our class needs to inherit from the Test::Unit::TestCase base class. This is done using the Ruby operator ‘<‘.
class MyFirstTC < Test::Unit::TestCase
THE SETUP METHOD
Before each test method the setup method automatically runs, this allows you to do any required setup before a test method is run. The setup method runs before EVERY test method so if your test case class contained 100 test methods the setup method would be run 100 times. In our case we simply create a class level variable called @ie which will be available to our test method.
NOTE: The @ symbol is significant as this is how Ruby defines class level variables.
def setup
@ie = Watir::IE.new
end
THE TEST METHOD
Now this is were our actual testing goes on. One thing to note about our test method is it must start with the prefix ‘test_’ this simple naming convention is how the test framework knows which methods are test methods and which are not. So we call out login test ‘test_login’.
The code goes to our portal url fills in the logon fields, clicks the logon button and finally our actual test comes in the form of an assert method call.
assert(@ie.contains_text('Bahh I failed!'), 'Text not found!')
The assert method checks that the first parameter equals true, if not it raises and error and displays the descriptive message. So this line is actually saying if you cannot find the text ‘Bahh I failed!’ then raise an error and display the message ‘Text not found!’. Now I know we’re not going to find the text ‘Bahh I failed!’ during this test but to be sure the test is working we MUST first make it fail. Also I want you to see what happens when we get a failed assert.
NOTE: The Ruby unit testing framework has a lots of different assertion methods which you will want to checkout.
THE TEARDOWN METHOD
The opposite of the setup method the teardown method is called automatically after each and every test method. In our case we simply close Internet Explorer.
def teardown
@ie.close
end
THE RUN
Now that we know what the script is doing lets run our test. Open a DOS window and type the following commands.
cd c:\watir\scripts
ruby myfirsttc.rb
Again you should see an Internet Explorer window open with your portal login page showing. The User ID and Password fields will turn yellow as they are filled with the values from your script. The Log on button will turn yellow as it is clicked and finally the Internet Explorer will close and you will be returned to your DOS prompt. You should now see some report text output from the test framework.
c:\watir\scripts>
ruby myfirsttc.rb
Loaded suite c:/watir/scripts/myfirsttc
Started
F
Finished in 5.609 seconds.
1) Failure:
test_login(MyFirstTC) c:/watir/scripts/myfirsttc.rb:20:
Text not found!.
THE REPORT
Although it does not look like much and there is no fancy GUI do not be deceived there is quite a bit of information here. First it tells us how long the tests take to run.
Finished in 5.609 seconds.
Next is a list of failures which shows the name of the method which failed and the line number of where the failure occured in this case the method test_login at line 20.
1) Failure:
test_login(MyFirstTC) c:/watir/scripts/myfirsttc.rb:*20*:
We can also see the descriptive message we used in our code and also the actual Watir error message.
Text not found!.
NOTE: A test method can contain as many assertions as you like. In fact it is usual that a test method will contain several assertions, they are after all the core of your tests.
FIXING THE TEST
Ok we’ve seen a failing test so lets go ahead and fix it. We simply need to change the assert line to look for a different piece of text. Lets look for ‘Log Off’ as we should see this text if our log on was successfull right?
assert(@ie.contains_text('Log Off'), 'Text not found!')
Run the test again using THE RUN procedure above. You should now have a passing test. Congratulations!
Started
.
Finished in 5.359 seconds.
1 tests, 1 assertions, 0 failures, 0 errors
HUNDREDS OF TESTS
So that’s one test but how do you run hundreds of tests? Simple add more test methods to your test case class. When you run the test case the framework will pickup your new test method and run that too. I tend to have one test case class for each module I am testing and add new test methods as I go. Eventually you will end up with hundreds of tests and the total confidence in your code which comes along with it.
COMING UP IN PART FIVE
That’s quite big post (for me anyway!) so I’ll leave it there until next time when we’ll go through my top tips which should help with your testing. That is providing I can stay off my bike for long enough to write it!
Fantastic blog, thank you! One small problem I’m curious about:
My IE window doesn’t close after the script is run. Any ideas why this is? Will this be a problem in later tests?
Thanks
Chris
Glad you’re enjoying the blog!?
It should not cause any problems with your tests but you will get lots of open windows!
Cheers,
Justin
I adapted your script to my portal and I faced the same problem as Chris: The IE explorer window did not close.
Besides that, the output looked exactly the same as you described it in your blog.
So everything is fine, but the IE did not close.
Any hints?
Thanks in advance,
guido
Sorry for the slow reply my proper job is manic at the moment!
Same questions as Chris really?
Cheers,
Justin | https://blogs.sap.com/2006/09/13/automated-functional-testing-part-4-of/ | CC-MAIN-2018-30 | refinedweb | 1,186 | 82.75 |
The factorial function
n! calculates the number of permutations in a set. Say you want to rank three soccer teams Manchester United, FC Barcelona, and FC Bayern München — how many possible rankings exist? The answer is
3! = 3 x 2 x 1 = 6.
In general, to calculate the factorial
n!, you need to multiply all positive integer numbers that are smaller or equal to
n. For example, if you have 5 soccer teams, there are
5! = 5 x 4 x 3 x 2 x 1 = 120 different pairings.
There are many different ways to calculate the factorial function in Python easily (see alternatives below). You can also watch my explainer Calculate the Factorial in NumPy?
NumPy’s math module is relatively little known. It contains efficient implementations of basic math function such as the factorial function
numpy.math.factorial(n).
Here’s an example of how to calculate the factorial
3! with NumPy:
>>> import numpy as np >>> np.math.factorial(3) 6
The factorial function in NumPy has only one integer argument
n. If the argument is negative or not an integer, Python will raise a value error.
Practical Example: Say,.
Here’s how you can calculate this in Python for 3 teams:
Exercise: Modify the code to calculate the number of rankings for 20 teams!
How to Calculate the Factorial in Scipy?
The popular scipy library is a collection of libraries and modules that help you with scientific computing. It’s a powerful collection of functionality—built upon the NumPy library. Thus, it doesn’t surprise that the scipy factorial function
scipy.math.factorial() is actually a reference to NumPy’s factorial function
numpy.math.factorial(). In fact, if you compare their memory addresses using the keyword
is, it turns out that both refer to the same function object:
>>> import scipy, numpy >>> scipy.math.factorial(3) 6 >>> numpy.math.factorial(3) 6 >>> scipy.math.factorial is numpy.math.factorial True
So you can use both
scipy.math.factorial(3) and
numpy.math.factorial(3) to compute the factorial function
3!.
As both functions point to the same object, the performance characteristics are the same — one is not faster than the other one.
How to Calculate the Factorial in Python’s Math Library?
As it turns out, not only NumPy and Scipy come with a packaged “implementation” of the factorial function, but also Python’s powerful math library. Here’s an example of how to use the
math.factorial(n) function to compute the factorial
n!.
>>> import math >>> math.factorial(3) 6
The factorial of 3 is 6 — nothing new here.
Let’s check whether this is actually the same implementation as NumPy’s and Scipy’s factorial functions:
>>> import scipy, numpy, math >>> scipy.math.factorial is math.factorial True >>> numpy.math.factorial is math.factorial True
Ha! Both libraries NumPy and Scipy rely on the same factorial function of the math library. Hence, to save valuable space in your code, use the math factorial function if you have already imported the math library. If not, just use the NumPy or Scipy factorial function aliases.
So up ’till now we’ve seen the same old wine in three different bottles: NumPy, Scipy, and math libraries all refer to the same factorial function implementation.
How to Calculate the Factorial in Python?
It’s often a good idea to implement a function by yourself. This will help you understand the underlying details better and gives you confidence and expertise. So let’s implement the factorial function in Python.
To calculate the number of permutations of a given set of
n elements, you use the factorial function
n!. The factorial is defined as follows:
n! = n × (n – 1) × ( n – 2) × . . . × 1
For example:
- 1! = 1
- 3! = 3 × 2 × 1 = 6
- 10! = 10 × 9 × 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = 3,628,800
- 20! = 20 × 19 × 18 × . . . × 3 × 2 × 1 = 2,432,902,008,176,640,000
Recursively, the factorial function can also be defined as follows:
n! = n × (n – 1)!
The recursion base cases are defined as shown here:
1! = 0! = 1
The intuition behind these base cases is that a set with one element has one permutation, and a set with zero elements has one permutation (there is one way of assigning zero elements to zero buckets).
Now, we can use this recursive definition to calculate the factorial function in a recursive manner:
>>> factorial = lambda n: n * factorial(n-1) if n > 1 else 1 >>> factorial(3) 6
Try It Yourself: Run this one-liner in our interactive code shell:
Exercise: What’s the output?
The lambda keyword is used to define an anonymous function in a single line. You can learn everything you need to know about the lambda function in this comprehensive tutorial on the Finxter blog.
If you love one-liners like I do, check out my book “Python One-Liners” that will teach you everything there is to learn about a single line of Python code!
You create a lambda function with one argument
n and assign the lambda function to the name
factorial. Finally, you call the named function
factorial(n-1) to calculate the result of the function call
factorial(n).
Roughly speaking, you can use the simpler solution for
factorial(n-1) to construct the solution of the harder problem
factorial(n) by multiplying the former with the input argument
n. As soon as you reach the recursion base case
n <= 1, you simply return the hard-coded solution
factorial(1) = factorial(0) = 1.
An alternative is to use the iterative computation like this:
def factorial(n): fac = n for i in range(1, n): fac *= i return fac print(factorial(3)) # 6 print(factorial(5)) # 120
In the function
factorial(n), we initialize the variable
fac to the value
n. Then, we iterate over all values
i between 1 and n-1 (inclusive) and multiply them with the value currently stored in the variable
fac. The result is the factorial of the integer value
n.
Speed Comparison
Let’s compare all three different ways to calculate the factorial function regarding speed. Note that the NumPy, Scipy, and math factorial functions are referencing to the same function object—they have the same speed properties. Thus, we only compare the
math.factorial() function with our two implementations in Python (recursive and iterative).
Want to take a guess first?
I used my own notebook computer (Quadcore, Intel Core i7, 8th Generation) with Python 3.7 to run 900 factorial computations for each method using the following code:
import time num_runs = 900 speed = [] ## SPEED TEST MATH.FACTORIAL ## import math start = time.time() for i in range(num_runs): math.factorial(i) stop = time.time() speed.append(stop-start) ## SPEED TEST RECURSIVE ## factorial = lambda n: n * factorial(n-1) if n > 1 else 1 start = time.time() for i in range(num_runs): factorial(i) stop = time.time() speed.append(stop-start) ## SPEED TEST ITERATIVE ## def factorial(n): fac = n for i in range(1, n): fac *= i return fac start = time.time() for i in range(num_runs): factorial(i) stop = time.time() speed.append(stop-start) ## RESULT print(speed) # [0.011027336120605469, 0.10074210166931152, 0.0559844970703125] import matplotlib.pyplot as plt plt.bar(["Math", "Recursive", "Iterative"], height=speed) plt.show()
Wow—the clear winner is the math module! A clear sign that you should always prefer library code over your own implementations!
The math library’s implementation is almost 600% faster than the iterative one and 1000% faster than the recursive implementation.
Try It Yourself: You can perform this speed comparison yourself in the interactive code shell:
Exercise: Do you receive similar results in your browser? Run the shell to find out!
Where to Go From Here
The three library implementations
numpy.math.factorial(),
scipy.math.factorial(), and
math.factorial() point to the same function object in memory—they are identical so use any of them.
One a higher-level, you’ve learned that library implementations of popular libraries such as NumPy are blazingly fast and efficient. Do yourself a favor and use library implementations wherever possible.
A good place to start is the NumPy library which is the basis of many more advanced data science and machine learning libraries in Python such as matplotlib, pandas, tensorflow, and scikit-learn. Learning NumPy will set the foundation on which you can build your Python career.
If you liked this article, you’ll also like my NumPy book “Coffee Break NumPy” that leads you step-by-step into the NumPy library in a fun, engaging, and interactive way. Pro coders read more books!
>. | https://blog.finxter.com/the-factorial-function-competition-numpy-scipy-math-and-python/ | CC-MAIN-2021-43 | refinedweb | 1,432 | 58.99 |
Anyone who writes web applications knows that web development is not easy. Developers wrangle with a soup of technologies distributed across multiple tiers. We live in a world where programmers accept the fact that they need to know four or five different languages, tools, and environments just to get a site up and running. In ancient times, the Egyptians built marvelous structures despite the primitive tools that the workmen used. Building a pyramid took most of the national resources of wealth and labor. Today, we build structures which are vastly more complicated and yet require only a tiny fraction of the resources. The difference is in the tools and the infrastructure.
In a similar way, Volta significantly improves web development. Programmers write web applications using familiar .NET languages, libraries, and tools. Volta splits the application into multiple parts potentially running on different tiers, say, client and server. Client code needs only a minimal, JavaScript-enabled browser, though Volta will take advantage of additional runtimes that may be present.
Programmers simply refactor classes that need to run on tiers other than the client and Volta injects the boilerplate code for communication, serialization, synchronization -- all the remoting code. The developer enjoys all of the benefits of .NET: a great debugger, test tools, profiling, intellisense, refactorings, etc.
Just how simple is Volta? Let's write an application that uses a button to query the server for a string and displays that string to the client: the hello world web application.
Now, let's write the code for the web page. We need a Div for the output and an Input for the interaction. Of course, we could have constructed the page elements with HTML/CSS instead.
using System;
using Microsoft.LiveLabs.Volta.Html;
using Microsoft.LiveLabs.Volta.Xml;
namespace HelloWorld
{
public partial class VoltaPage1 : Page
{
public VoltaPage1()
{
var output = new Div();
var b = new Input();
b.Type = "button";
b.Value = "Get Message";
b.Click += () => output.InnerHtml = C.GetMessage();
Document.Body.AppendChild(output);
Document.Body.AppendChild(b);
}
}
class C
{
public static string GetMessage()
{
return "Hello, World";
}
}
}
But we want to produce the message on the server. Time to refactor.
Browser clients call the server, the "Origin", because this ensures the message will come from the same server that supplied the HTML.
[RunAtOrigin]
class C
{
public static string GetMessage()
{
return "Hello, World";
}
}
That is it. Try it out.
Now, click "Get Message".
Great. But is it cross-browser compatible?
Yes. And you can debug it.
You can even debug across tiers.
There is a lot more to Volta in the first technology preview which was made publicly available at 11 am PST today and there will be a lot more to come.
Still skeptical? Try it out for yourself.
If you would like to receive an email when updates are made to this post, please register here
RSS
I could add a hands-on with Volta to elaborate on my previous post, but Wes already beat me to it on
OK ... now turn on Fiddler and watch what comes down over the pipe for this simple application ... you gotta be kidding me.
So, you decided to start blogging again...I was about to finally remove you from my feed reader.
Hi Wes,
Does Volta use Nikhil Kothari's Javascript Converter? Is it finally being productized?
Mike:
You are right that a lot is coming down the pipe today. The first preview is about establishing the right basic feature set. In the upcoming releases performance will be improved dramatically.
Chyld:
I am glad to start blogging again. I have a lot to say that I've been holding back.
Jafar:
No, it is not using Nikhil's JavaScript Converter. One of the things that Volta provides is a deep embedding of MSIL in JavaScript. So you can also write apps in VB.NET, F#, ...
Hi, My name is Harish Kantamneni. I am a developer at Microsoft and work for Erik Meijer of the LINQ
Erik Meijer and I have discussed Volta a number of times - very exciting to see it released on Live Labs
Oh, finally a new post. Welcome back!
Good question. I saw this today in my RSS reader:
You've been kicked (a good thing) - Trackback from DotNetKicks.com
(My first post! Be gentle ;) )
I'm not really convinced; the reason that I use different programs is to ensure the stages of communication are separated. For example, in your demonstration a button is defined which is exactly what I don't want; I want to separate the interface from the back-end code. I think, it makes for cleaner programmer practices.
Met Volta wil Microsoft de wirwar van methodieken die er op dit moment is om een ASP.NET applicatie te
This looks kind of nice. I'm still a bit sceptical, so I will play around with the hands-on-lab ;-)
I do have one concern though: I think that with all the cool new initiatives, Microsofts strategy on web-development is becoming a bit unclear. At the moment we have classic ASP.NET, the ASP.NET AJAX Framework, Silverlight, MVC, Volta, WCSF. All (very! :-) nice things on their own, but I think that at the moment there appears to be a lack of cohesion between all of them. They are all trying to solve a piece of the web-development puzzle, but they are not integrated in one solution (yet?). I hope this will become more clear in the future.
Always wanted to develop web applications a bit different? Microsoft just release volta, a new way to
Glad to see you blogging again... Was the same as Chyld and was just about to remove you from my RSS...
Arno:
Welcome. Can you explain what you mean by "I want to seperate the interface from the backend code"? I think that you are getting at the fact that you want to define the button in HTML, is that right or am I missing something?
Martijn:
Yes, there are a lot of technologies in the space. In some sense, I think that it reflects the confusion we see in the industry at large. Remember that Volta is an experimental technology preview at this point, so it really is about showing what the future may hold and shouldn't really influence your short-term planning. Furthermore, Volta is meant to complement the other technologies: it is a set of developer tools that can/could target various runtime web features.
I'm glad you're skeptical. You should be. Try it out and check out the information.
The Volta quickstart as well as Wes ' blog post Volta: Redefining Web Development tier-split an application
It has been quite a while without a post from you, now I understand why. I do like many of the volta ideas. Is there a future plan to do things like:
[RunAtDatastorage] [RunAtDatabase]
[RunAtBusinessTier]
[RunAtPresentationTier]
etc. and getting your works into the database engine (hey translating IL either to IL for MS SQL or to PL/SQL for Oracle) to your SAP (by translating to ABAP) and similar? Are you planning to set this up as a framework, so that others can plug their tiers into it?
Please make it that way ...
buzina:
Do you dabble in mind reading? ;)
I think you will be pleased with what you see in the coming months.
I can see two major problems here (this is why i can't say "great!"):
1) Code samples shows very unstructured code like php or js. It is not look like OOP or component oriented thing.
2) Performance is awfull. Both IE7 and Firefox nearly dies on the samples.
dreamy_zombie:
I like the name, it is pretty catchy.
I am very interested in why you identified problem (1). Are you referring to a particular sample or did something about their design make you cringe?
As for problem (2), we know the performance is not what it needs to be. The first release is a very early prototype. As I am sure you know, prototypes are most effective when they address the riskiest parts of the design. We did not nor do we consider performance to be the riskiest part of the design. We have measured and analyzed performance and know what we need to do to address it. The riskier parts of the design are how it feels from the programmer's point of view.
Hi Buddy,
It seems that the icon of Volta looks like firefox:)
Ray:
There is definitely a similarity. But it was actually designed to look like the other live labs logos (photosynth, deepfish, ...). Now, were they designed to look like the fox? I have no idea.
What is the language named for? The Lake? The Physicist? The Bjork CD?
It is named after the physicist, but it also means "revolution" in portugese. | http://blogs.msdn.com/wesdyer/archive/2007/12/05/volta-redefining-web-development.aspx | crawl-002 | refinedweb | 1,470 | 66.64 |
Hi, i hope you are still around and havnt given up yet on this ... On Sun, Jul 12, 2009 at 02:00:33AM +0200, Erik Van Grunderbeeck wrote: > Hello; > > Included are the files, and patches needed for the dvd decoder. > > I threw away my original system on trying to insert packets into the main > stream (as the "attachment/data" type stream). The reason for that is that > the dvd data stream, that contains commands for the client can change/insert > packets "in the past". Example; a user seeks back. The DVD ifo system will > insert packets containing "change the subtitle color" so it works ok for the > displayed cell/chapter. LibAV doesn't expect new packets inserted > "backwards" in the stream, and the buffer system gets all messed up because > packet sizes are off. please elaborate on what the problem with this is. It should have worked from how i understand it, at least if all packets have correct timestamps > Changing to a new system means my "hack" on decoding > the packet header isn't needed any more, and except for stream flagging the > mpegps demuxer should be unchanged. My attachment decoder is not needed > anymore (for those who looked at it, only dvd_protocol.c/h are still > needed). > > The new system uses a callback, that the user registers. This is, after much > experimenting, the best way I could come up with. The callback though is > registered per protocol, and thus changes are needed in "gasp" URLContext. > Reason for keeping it per context? Well, a user can be reading from several > dvd devices at the same time if he/she uses the avlibs. Or some end devices > (like the ps3) send several request to open a file at the same time. If one > callback is registered for the whole library, this messes up the system. > > Whats send in the callback? Basically AVDVDParseContext packets. They > contain a type (the enum effdvd_Type ) that tells the end-user app what > should be done, and a pts timestamp in dvd time (90000/tick). The packets > should be handled at that time. > > There are some packets which are obvious (like: new subtitle palette) and > some that are not. The most important one is effdvd_Reset_IFO. > > effdvd_Reset_IFO asks the end-user app: clear your streams, flag all streams > you are using as invalid (more later on that), and be prepared to re-init > all your codec decoders. > > Why does one need to do that? Because a vob file 1 might contain audio data > in ac3 with 2 channels, while a vob 2 might have it in 6 channels. Or the > resolution of the video stream might change (from dvd pal to dvd ntsc > widescreen for example). > > First; DVD's may contain up to 64 streams. One of the patches changes the > MAX_STREAMS constant to match that. > > Second; Stream re-init. I implemented this by adding a flag to AVStream (at > the end, discard_flags). Set this to one in end-user app, and this will tell > the demuxer that this stream has been discarded by user (user closed all > codecs, etc). > > It works by telling the demuxer that once it dedects a certain packet type, > and normally does a av_new_stream; don't re-add the stream to the end of the > new list (pushing more and more streams on the list). Example; suppose the > discard_flag is set to 1 for the video stream (and we only have one). When a > new packet is received by the demuxer, its loop will not find a valid video > stream anymore. It thus will create a new one, but will fill the slot for > the "old" video stream with the new video stream. Don't know if this text > makes sense, bit a quick look at the code should be helpful. > > Why not do this in the protocol reader? Basically 1) because the protocol > doesn't and shouldn't know about the streams and 2) because the end-user > needs to close and free his/her decoders. Once that's done, the demuxer > doesn't even know. > > Next; chapter seeking. I added a new flag for that: > > #define AVSEEK_FLAG_DIRECT 8 ///< seek directly, skipping any stamps > and translation > #define AVSEEK_FLAG_CHAPTER 16 ///< seek to chapter, if supported > > Using that will basically use av_seek_frame() to override a search like it > does for AVSEEK_FLAG_BYTE. The arguments timestamp will thus position > chapters, instead of timestamps. > > Summary; whats the total change; > > 1) The whole setup adds a protocol handler called ffdvd. User inits his open > file with av_open_input_file ("dvdread:e:/") or > av_open_input_file("dvdread:/cdrom1/") > 2) User registers a callback by using url_fset_protocol_callback(), a new > function in avio.c. I tried to make it generic. > 3) User polls streams as before for audio/video > 4) User can get callback events from the protocol, with timestamps. > 5) Events get examined for the timestamps on the protocol packet, and > handled upon. > 6) When needed, these events are executed, and for some functions a notify > is send back (example, when user selects a button > dvd_protocol_select_button() is called). > 7) That's it. > > How well does this all work? I build a example app separate from ffmpeg, on > which rather big changes are going to be needed. Note that these changes > have almost nothing to do with the dvd protocol handler. They are mostly > implementing the "we should look at this/this is wrong/etc" comments > (example; discarding new streams). I have ffplay ok, but will need to work > with someone on ffmpeg. Source is. Offcourse available. > > I am attaching the main files here, and the most important patches. Since I > am not sure on how the continue on this, please tell me how this needs to be > handled (ege do patches first, what order,. Etc. ) If the system is > accepted, at least one patch needs a major version bump. I should probably > split up some patches too. > > Note 2; I need to talk to the people for libdvdread, since I fixed a few > problems in their code (and added one function to read attributes that its > not available now dvdnav_get_video_attributes()). To have this working, we > would offcourse also need to depend on libdvdread (if the user enabled it by > ./configure, changes for makesfiles and that needed). > > Note 3: we will probably need some commandline extensions to specify seeking > to a chapter, etc. for ffmpeg > > I have worked on and of on this for the better part of 2 months figuring out > the best and least intrusive way. If this all gets rejected, I am fine with > that. If it doesn't, also. Just let me know. I definitly want this in ffmpeg, so no reject nitpick: we use spaces not tabs also try tools/patcheck these of course are not truly important and it would probably be unwise to spend much time on cleaning this up for some parts of the code Some comments 1. First the av_seek_direct() code looks good, if you could send this as a seperate patch (also please name it av_seek_protocol() and AVSEEK_FLAG_PROTOCOL) i suspcct we should be able to commit this quickly 2. AVSEEK_FLAG_CHAPTER should be ok as well if you could send this as a seperate patch. But note the parametr should be an absolute chapter number not a relative to current position or you will have to deal with packet fifos between demux and decode causing a delay and +-1 errors close to chapter transitions 3. iam against stream reusal, stream numbers must be unique, we cant just by seeking around have chapter 5 use stream 1 as video and when we seek back video is stream 0 with audio stream 1. (this can happen if streams are marked "discard" and reused. 4. about the callback, i think this will need to be changed somewhat as is i see 2 issues A. thread sync B. every application would need a buffer to make sure the data from the callback is delayed and processed at the correct time (that is the time matchng what went through demux & decode with all its fifos) I thus think some way to pull data out like av_read_frame() seems a bit easier to handle. 5. If we had a data stream instead of a callback, stream copy to arbitrary containers might work somewhat. With a callback all callback provided information would be lost, i dont know if this would lead to any problem or not with actual DVDs being stream copied to mkv/nut/avi whatever more comments below [...] > typedef struct AVAttachmentButton { > uint16_t x; > uint16_t y; > uint16_t w; > uint16_t h; > } AVAttachmentButton; 64k should be enough for everyone ;) i do prefer int for this and most other variables > > typedef struct AVDVDParseContext { > // pts of this context > int64_t pts; > > // type > uint16_t Type; > > // when waiting > uint16_t Wait; > > // when buttons > uint16_t HighLightIndex; > uint16_t ButtonCount; > AVAttachmentButton Buttons[MAX_ATTACH_BUTTONS]; > > // when color-palette > uint32_t rgba_palette[16]; seems like job for a union not a struct > > // expected size of the video pictures (allows for skip of scan-stream) > uint16_t video_width; > uint16_t video_height; > > // aspect ratio (0 = 4:3 , 3 = 16:9) > uint8_t aspect_ratio : 4; > // video format (0 = ntsc, 1 = pal) > uint8_t video_format : 4; > > // current vts > uint8_t current_vts; > > // count of languages > uint8_t AudioLanguage_Count; > uint8_t SubTitleLanguage_Count; > > // languages > uint16_t Audio_Language[MAX_ATTACH_AUDIO_LANGUAGE]; > uint16_t Audio_Flags[MAX_ATTACH_AUDIO_LANGUAGE]; > uint8_t Audio_Channels[MAX_ATTACH_AUDIO_LANGUAGE]; > uint8_t Audio_Mode[MAX_ATTACH_AUDIO_LANGUAGE]; > uint16_t SubTitle_Language[MAX_ATTACH_SUB_LANGUAGE]; > uint16_t SubTitle_Flags[MAX_ATTACH_SUB_LANGUAGE]; > alot of this looks misplaced, width, height, aspect and language have their fields in AVStream & AVCodecContext already. So please explain why they are here duplicated. > // when angle change > uint8_t current_angle; > uint8_t max_angle; > > // size of current title in pts ticks. divide by 90000 to get time in seconds > uint64_t titletime; needs a AVRational time_base instead of 90khz and i guess duration is a better term > > // flags that describe actions allowed for current chapter > uint32_t flags; > } AVDVDParseContext; Also this name hurt my eyes, this could be usefull for MMS/RTP and who knows what else. In that sense it also should be kept reasonably generic > > /* *************** protos *************** */ > > /* register */ > int dvd_protocol_register(); > /* select a button */ > void dvd_protocol_select_button(AVFormatContext *ctx, uint32_t nIndex); > /* signal queue empty */ > void dvd_protocol_signal_wait(AVFormatContext *ctx, uint32_t iSkip); > /* signal queue empty */ > void dvd_protocol_signal_queue(AVFormatContext *ctx); > /* reset has been processed */ > void dvd_protocol_signal_reset(AVFormatContext *ctx); A more generic message passing to AVProtocols seems better than one function specific to one protocol and function each [...] > @@ -2493,10 +2544,10 @@ > av_log(s, AV_LOG_ERROR, "dimensions not set\n"); > return -1; > } > - if(av_cmp_q(st->sample_aspect_ratio, st->codec->sample_aspect_ratio)){ > - av_log(s, AV_LOG_ERROR, "Aspect ratio mismatch between encoder and muxer layer\n"); > - return -1; > - } > +// if(av_cmp_q(st->sample_aspect_ratio, st->codec->sample_aspect_ratio)){ > +// av_log(s, AV_LOG_ERROR, "Aspect ratio mismatch between encoder and muxer layer\n"); > +// return -1; > + // } > break; > } > > @@ -2932,6 +2983,7 @@ > } > #endif > > +/* @@@ BUG BUG will fail on encoding at t 23.59.59 started + */ > int64_t av_gettime(void) > { > struct timeval tv; ehm ... [...] > Index: avformat.h > =================================================================== > --- avformat.h (revision 19309) > +++ avformat.h (working copy) > @@ -185,6 +185,7 @@ > #define AVFMT_GENERIC_INDEX 0x0100 /**< Use generic index building code. */ > #define AVFMT_TS_DISCONT 0x0200 /**< Format allows timestamp discontinuities. */ > #define AVFMT_VARIABLE_FPS 0x0400 /**< Format allows variable fps. */ > +#define AVFMT_NOHEADER 0x0800 /**< do not try to read headers. */ what is this good: <> | http://ffmpeg.org/pipermail/ffmpeg-devel/2010-February/085194.html | CC-MAIN-2018-05 | refinedweb | 1,830 | 60.65 |
More 1 2 sorted lists together into 1 sorted list.
We begin with n items, which we treat as n sorted 1 in order of n log 1 merge to 1 thread, which can perform each individual merge using a serial algorithm on that thread.
We might get better memory coalescing performance if we use shared memories as a staging area to read many input elements or store many output elements at the same time.
I’d say it’s more common, though, at the start of a large merge sort, to just sort a block of elements, say, 1024 2 stages here, and we’re going to see a good algorithm for an in block sort after we finish the discussion on merge)
// C++ program for Merge Sort #include <iostream> using namespace std; // Merges two subarrays of arr[]. // First subarray is arr[l..m] // Second subarray is arr[m+1..r] void merge(int arr[], int l, int m, int r) { int n1 = m - l + 1; int n2 = r - m; // Create temp arrays int L[n1], R[n2]; // Copy data to temp arrays L[] and R[] for (int i = 0; i < n1; i++) L[i] = arr[l + i]; for (int j = 0; j < n2; j++) R[j] = arr[m + 1 + j]; // Merge the temp arrays back into arr[l..r] // Initial index of first subarray int i = 0; // Initial index of second subarray int j = 0; // Initial index of merged subarray int k = l; while (i < n1 && j < n2) { if (L[i] <= R[j]) { arr[k] = L[i]; i++; } else { arr[k] = R[j]; j++; } k++; } // Copy the remaining elements of // L[], if there are any while (i < n1) { arr[k] = L[i]; i++; k++; } while (j < n2) { arr[k] = R[j]; j++; k++; } } void mergeSort(int arr[],int l,int r){ if(l>=r){ return;//returns recursively } int m = (l+r-1)/2; mergeSort(arr,l,m); mergeSort(arr,m+1,r); merge(arr,l,m,r); } void printArray(int A[], int size) { for (int i = 0; i < size; i++) cout << A[i] << " "; } int main() { int arr[] = { 12, 11, 13, 5, 6, 7 }; int arr_size = sizeof(arr) / sizeof(arr[0]); cout << "Given array is \n"; printArray(arr, arr_size); mergeSort(arr, 0, arr_size - 1); cout << "\nSorted array is \n"; printArray(arr, arr_size); return 0; }
Now we move on to stage 2. Now we have lots of small sorted blocks, and we need to merge these small sorted blocks together. On the GPU for these intermediate merges, we would usually assign 1 merge to 1 thread block.
Now, the obvious way to merge 2 sorted sequences is a serial algorithm. So let’s take a little bit of a closer look at the algorithm that we choose to use here, and we’ll come back to this diagram a little bit later.
The obvious way to merge 2 sorted sequences is a serial algorithm, and here’s our little serial processor here. The input to this algorithm is 2 sorted sequences, and the output is 1 instructive to look another way, a better way, a more parallel way to merge 2 sorted sequences.
Merge Sort C++ STL
C++ offers in its STL library a merge() which is quite useful to merge sort two containers into a single container..
Merge Sort C++ Vector
Implementation
I am using C++ to implement the mergesort algorithm using vectors instead of arrays. I worked step by step from the original algorithm but when I compile, I get no output or error messages. I think there is an issue with my "merge" function but I cannot find it. I am new to sorting algorithms so if there are any fundamental misunderstandings or errors in my code please explain them to me. #include <iostream> #include <vector> using namespace std; void mergeSort(vector<int>& numvec, int left, int right){ if(left < right){ int middle = left + (right - left) / 2; mergeSort(numvec, left, middle); mergeSort(numvec, middle + 1, right); merge(numvec, left, middle, right); } } int merge(vector<int>& numvec , int left, int mid, int right){ int i, j, k, temp[right-left+1]; i = left; j = right; k = mid + 1; while(i <= mid && j <= right){ if(numvec[i] < numvec[j]) { temp[k] = numvec[i]; k++; i++; } else {temp[k] = numvec[j]; k++; j++; } while(i <= mid){ temp[k] = numvec[i]; k++; i++; } while( j <= right){ temp[k] = numvec[j]; } for(i = left; i <= right; i++){ numvec[i] = temp[i - left]; } } }
there is not need to create new space at each call of merge. std::vector<int> L(&array[0], &array[lo]); will actually create space
Merge Sort Complexity
Merge Sort is a stable sort which means that the same element in an array maintain their original positions with respect to each other. Overall time complexity of Merge sort is O(nLogn). It is more efficient as it is in worst case also the runtime is O(nlogn) The space complexity of Merge sort is O(n).
The run time complexity of merge sort so this is really the meat of merge sort so what’s the idea here the idea is you sort using your recursive call to merge sort.
you sort the first couple valve and you sort a second cup of L and then you merge the two together using merge so merge is the function that they send two sorted lists and it returns a sorted version of both of the firsts together step together and it’s sorted and it can do it.
Length of a 1 plus length of l2 time like we saw in lecture now the length of this is n over 2 and the length of this sent over to so the runtime of the code to merge here is open all of the length of L.
So merge this merge runs in o of n where n is the length about so if you break the list up into halves and then assume that the halves are sorted so you run merge sort to make sure that this house is worth in this half is sorted you get distorted versions then to run merge that’s oh that where n is the length of the list.
Now we’ll just say that rather than just saying that it’s all bad we’ll say assume that merge sort and so on takes eight and operations so we get to do that because we know that it runs in open time that means it performs a number of operations that’s proportional to K and.
Merge Sort Algorithm Steps
Merge Sort
- Divide the unsorted list into sublists, each containing element.
- Take adjacent pairs of two singleton lists and merge them to form a list of 2 elements. N. will now convert into lists of size 2.
- Repeat the process till a single sorted list of obtained.
We will see the recursive algorithm for merge sort. This algorithm usage divide and conquer technique. So let’s see what this algorithm is. First the list to be sorted is divided into two sublists of almost equal size.
Then the left sublist is sorted recursively using merge sort. Then the right sublist is recursively sorted using merge sort. After that the two sorted sublists are merged to get the final sorted list.
The terminating condition for a recursion is the sublist form should contain only one element. Because we can consider a one element sublist as sorted list. So now let’s take an example and see how this algorithm works. This is a list of 11 elements.
It is divided into these two sublists. This one contains 6 elements and this one contains 5 elements. These two sublists are to be sorted recursively using merge sort.
So this is divided into two. This is divided into two. Now here we get a sublist of only one element and we can consider it as a sorted sublist. So the recursion will stop here.
This is again divided into two. Here also the recursion stops and now these two sorted sublists are merged. So we will get this list. You can see that this list has become sorted. Now this sorted list and this sorted list both of them are merged.
So now we can see that this sublist has become sorted. Now we come to this sublist, this is divided, this is divided. These two sorted sublists are merged. Then this sorted list and this sorted list, are merged.
So we can see that this sublist has become sorted. Now these two sorted sublists are merged and we can see that this list has become sorted. Now we come to this sublist. This is divided into two. This is divided into two. This is divided.
These two sorted lists are merged, then this and this are merged, so this list becomes sorted. This is divided and merged. Now this sorted list and this sorted lists are merged. So we can see that this sublist has become sorted. Now these two sorted sublists are merged to get the final sorted list. | https://epratap.com/merge-sort-cpp-stl-vector-complexity-algorithm-steps/ | CC-MAIN-2021-10 | refinedweb | 1,515 | 67.89 |
How to Get Outlook Task Owner
Discussion in 'ASP General' started by Artur B, Jun 11, 2004.
- Similar Threads
Re: Problem in ant replace task and replaceregexp task to update xmlVictor, Sep 1, 2004, in forum: Java
- Replies:
- 0
- Views:
- 9,277
- Victor
- Sep 1, 2004
- Replies:
- 1
- Views:
- 9,937
- Marco Meschieri
- Aug 9, 2006
Accessing ant task name when running a taskteggy, May 29, 2007, in forum: Java
- Replies:
- 0
- Views:
- 973
- teggy
- May 29, 2007
stop org.jdesktop.application.Task started using @Action public TaskMike, May 12, 2008, in forum: Java
- Replies:
- 1
- Views:
- 2,164
- GArlington
- May 12, 2008
[Rake] call a task of a namespace from an other task.Stéphane Wirtel, Jun 14, 2007, in forum: Ruby
- Replies:
- 3
- Views:
- 569
- Stephane Wirtel
- Jun 15, 2007 | http://www.thecodingforums.com/threads/how-to-get-outlook-task-owner.794921/ | CC-MAIN-2016-18 | refinedweb | 131 | 56.93 |
Inferno 4 Available for Download 287
Tarantolato writes "A new preliminary public release of the Inferno distributed operating system is now available for downloading from Vita Nuova's website. Inferno is meant to be a better Plan 9, which was meant to be a better Unix. It can run as a standalone OS, as an application on top of an existing one, or even as a browser plugin. Also, all of its major components are named after things related to hell."
inferno? (Score:5, Funny)
Re:inferno? (Score:5, Funny)
Yes and no. (Score:2)
Re:inferno? (Score:5, Funny)
Re:inferno? (Score:2)
Managing daemon apps is apparently very easy, but sending stuf to
/dev/null can be quite spectacular ;-)
Or so they tell me, I am stuck with Windows, the orginal devils OS.
Just what we need (Score:5, Funny)
Re:Just what we need (Score:5, Funny)
Inferno? (Score:4, Funny)
Re:Inferno? (Score:5, Informative)
Could be interesting stuff, especially the Limbo "C-like, concurrent" programming language (though the syntax seems like an ugly version of Python with some bizarre odds and ends tacked on like a <- operator for "channels").
-fren
Re:Inferno? (Score:5, Interesting)
-- Anon Coward
Re:Inferno? (Score:5, Insightful)
No, I think mistake that Bell Labs made was to charge everybody (including Joe User, the hobbyist) for the OS. Nobody could download it for free and try it out. You had to pay for it.
If I'm a hobbyist and just want to try something out, I don't feel like shelling out $100 for something that seems quite esoteric.
Basically, BL's desire to milk it completely destroyed Plan9's chances. Couple that with Linux's surge, and Plan9 was doomed.
Later, much later, they released it for free, but by then it was too little, too late.
Re:Inferno? (Score:4, Interesting)
Trust me, it is all very interesting stuff. Just don't let it slip that you use the heathen UNIX, especially on #plan9 on FreeNode.
Re:Inferno? (Score:2, Informative)
Re:Inferno? (Score:5, Informative)
I don't see the resemblance to Python. Limbo has:
I don't see any redeeming qualities.
Re:Inferno? (Score:5, Insightful)
Wrong. Compilers do not need separate interface definitions. They might just as well use the source files and find all the definitions there.
There is actually a very good (programmer-centric) reason for doing having separate interface/implementation: If you want to remain completely (binary or source) interface-compatible you just lock down the interface file. If the language is strongly typed and pedantic about matching type/function/value definitions exactly it will complain if you accidentally change the declaration of a function (this is particularly easy to do accidentally in type-inferenced languages like ML). Thus, you can ensure interface compatibility in this very simple way.
(Of course, in C/C++ this is not nearly that useful because compilers usually don't actually check any of this. But it works very well).
Re:Inferno? (Score:3, Interesting)
Compilers do not need them if properly designed, as with the many modern languages I cited.
But C and C++ require this. Ever notice how in C and C++, you can't refer to an undeclared type, even if it is declared later in the file? You have to provide at least a forward definition. ("class Bob;") Likewise functions, data members, etc. This is most annoying in C++, wi
Re:Inferno? (Score:5, Insightful)
This is unrelated to whether or not you require a separate interface file. The reason that the forward declaration exists is that you cannot declare circular types (such as linked list elements) without them. In all other cases, you can just sort the declarations topologically and write them out in that order.
Besides, what you're saying is true even of "properly designed languages" like ML. Just try: It doesn't compile, but it DOES work if you use: (note the and)
So you're basically talking about a syntactical problem in C/C++ which forces you to declare (textuall) things in a topological order.
Making up a statistic is never a good way to argue a point.
Besides, nobody said anything about forcing you to use separate interface/implementation. I just said to it could be a good thing to use it and have it be supported by the compiler.
In my preferred language, OCaml, you have the option of having a separately declared interface (a
By the way, since you brought them up, declaring a proper interface is much more important in type-inferencing languages, since even tiny
changes to code can cause completely different types to be inferred. For example, in OCaml: f and g have different signatures even though the difference is tiny. If you're interfacing to binary libraries it helps immensely to know that the library would not have compiled if such a type-altering change has occurred in the "hidden" code.
You call that intelligent? Instead of just having the compiler do it? It already knows all about type aliases, what types are compatible,
etc. etc. (i.e. all the stuff that makes checking such things using a postprocessor extremely error-prone).
Re:Inferno? (Score:5, Informative)
Of particular note is that Dennis Ritchie has written exactly two language reference manuals in his life: C and Limbo. that says a lot to me, anyway.
name dropping aside, Limbo really is a huge win for user-mode programming. the channel stuff isn't bizarre at all - it's a very elegant way to handle inter-process communication. Python's got nothing on Limbo for this.
Re:Inferno? (Score:4, Interesting)
substantial software under Inferno. It has
some nice features, and is by far the
cleanest environment for multithreading
that I've ever seen.
At the time (c. 2000), it had a few misfeatures,
such as no way to signal that you've closed
a channel between two threads, but hopefully
that's been fixed. Limbo is a nice, clean
language. It isn't object oriented: think of
it as the ideal C, rather than python, java or
C++. However, interestingly enough,
you can do large-scale OO things reasonably
nicely in two ways:
First, for objects that are more like lightweight
daemons, you can have a thread that simulates
a file (or file system, even). The rest of
the program can then read and write to that
special file to interact.
One can be OO by implementing a whole directory,
where each file corresponds to a OO message
(member function).
Second, for even ligher weight stuff,
you can easily (trivially easily, compared
to most languages) spawn a thread that
talks via the rendevous mechanism,
and treat that as a little light-weight
server to which you can pass
arbitrary data structures.
There's no support for fine-grained OO,
which was, I think, a reaction to some
of the OO-idiots out there that
insist on making objects out of things
that aren't naturally separable.
The failing is that there are not
extensive libraries, and there is certainly
not much in the line of applications that
one can download.
It is very elegant in many respects.
If you need to write multi-threaded things,
and can live without much in the way of
libraries or applications, you should think
about it.
Re:Inferno? (Score:3, Insightful)
Actually, that's not a misfeature, but a fundamental (and desirable, I think) part of the way that channels work. The point is that, unlike pipes, channels have just a single endpoint. Any process can use that endpoint for reading or writing as it likes - the runtime system will sort out the rendezvous.
So there's no real concept of a channel being "between" two threads. It just exist
informative (Score:4, Funny)
License (Score:4, Interesting)
Any ideas why they didint use GPL/BSD or any other standard license. Or is there some subtle(or obvious) licensing issue
Re:License (Score:5, Interesting)
Re:License (Score:5, Informative)
Plan 9 had a license where you couldn't sue Lucent on an unrelated matter if you used it. They've now changed that (as of June 2003), and Stallman now considers [mirror5.com] it a "free software license incompatible with the GPL". From the GNU site:
Inferno's license seems to be the same as the new plan 9 one. (But I haven't looked in depth).
Re:License (Score:2, Insightful)
How did you get the time to read his rants about the Plan 9 license? I'm still working on his rants from 2002.
But seriously, what necessarily makes a license that might not be Stallman compliant a problem? Can't it just be different? As far as I can tell, he's not some sort of supreme licensing authority. Also, he doesn't have the power to actually enforce his ideas on what's GPL compatible or incompatibl
Re:License (Score:2, Insightful)
Don't be assanine, he's allowed to give his advice and that's all he is doing. Licences aren't rejected just on Stallman's say so but because, more often than not, any problems that Stallman points out are potentially serious; whether for the developer or the end user.
I'm getting really bored with this rele
Re:License (Score:3, Interesting)
as a trivia point, with the exception of legacy ones like
on to your question proper: i, of course, am neither a lawyer nor british, but from what i was told GB law
Shall Jesux rise again? (Score:5, Funny)
INSERT WITTY BSD DAEMON JOKE HERE
Re:Shall Jesux rise again? (Score:3, Informative)
Yeah I tried it (Score:4, Funny)
Re:Yeah I tried it (Score:3, Funny)
Re:Yeah I tried it (Score:5, Funny)
yay! (Score:5, Funny)
Re:yay! (Score:4, Interesting)
Re:yay! (Score:3, Interesting)
one: the idea of portable executables is not a new thing to Java, nor is a virtual machine. Java and Inferno were started at about the same time, Java just beat Inferno to market. the fact that some solution to a problem exists shouldn't deter anyone from seeking a better one. Inferno takes a very different approach to platform portability than Java does, and manages to preserve the write-once-run-everywhere promise than Java does.
two: javascript has nothing to do with java - t
New p2p (Score:4, Interesting)
Host Operating Systems
Windows NT/2000/XP
Linux
FreeBSD
Solaris
Plan 9
Supported Architectures
Intel x86 (386 & higher)
Intel XScale
IBM PowerPC
ARM StrongARM (ARM & Thumb)
Sun SPARC
and it supports crypto and since its native code its faster than java.
Re:New p2p (Score:5, Insightful)
The problem with Java is that its GUI toolkit is slow.
In any case, with a file sharing app, CPU efficiency is certainly not an issue. You should never worry abot CPU efficiency if you don't need to, as you will only be making things harder on yourself.
And, finally, writing portable C/C++ code is really not that hard if you know what you are doing. Certainly you'd be better off with that than you would be asking all of your users to install an extra OS over their current one just to run your program. Really, the most important factor in making file sharing successful is to get lots and lots of users, and most of those users are going to be people who have absolutely no idea what an operating system even is.
Re:New p2p (Score:3, Interesting)
I must admit to a bias, as I quite like Inferno. But here are a few rebuttals to your points anyway.
Java may compile to native code, but it's not done at program startup. The Inferno VM has a Harvard architecture which prohibits the byte-code manipulation that's allowed at Java run-time. This allows a one-off translation to native code at module load time. In contrast, the Java VM must JIT hotspots as you hit them (which does make it slower).
I agree that CPU efficiency shouldn't be a factor. How
Re:New p2p (Score:3)
> Oh, I see... could that possibly be because it's written in Java?
No, the problem with the Java GUI / other std libraries toolkits is that they are bloated. As in: they mount very deeply in the top of each other, for the sake of consistency, but... this means every time you take one, you take the whole family.
Why the last link? (Score:2, Insightful)
we don't want you to burn in hell (Score:2)
I did not submit the article but I believe that the last link was included for the sake of non-believers.
On a related note, what do you think of the sounds supposidly recorded from hell [av1611.org] [RealAudio] on that webpage? It sounds kind of electronic to me but maybe it really is from hell?
Re:we don't want you to burn in hell (Score:2)
Re:we don't want you to burn in hell (Score:2, Interesting)
Off topic, and scary. (Score:2)
The editors reserve the right to be off topic.
Woooo! It's scary:
YOU will see HELL. .
YOU will smell HELL. .
YOU will breathe HELL. .
YOU will hear HELL. .
YOU will feel HELL. .
YOU WILL BE HELL. . .
Re:Why the last link? (Score:2)
Thanks for the nightmares! (Score:3, Funny)
How dare you link to a site that uses RealAudio! How will I go to sleep now, I'll have "buffering..." nightmares!
Re:Thanks for the nightmares! (Score:3, Funny)
Re:Thanks for the nightmares! (Score:3, Offtopic)
Imagine for a moment that we could drill a hole to Hell and rescue all these tormented souls. Millions upon millions - 40 billion, I think was the number cited - many of whom have been writhing in agony for thousands of years, and you just open the door for them? A couple of billion pissed off spirts running loose is not my idea of fun. If there was some way to offer them repentance before exit then sure, it's worth
Hell comes in many flavors (Score:5, Funny)
I am not sure which part of hell the Tk UI toolkit represents, but I feel their pain.
Re:Hell comes in many flavors (Score:3, Funny)
hmm.
prooooogramming awaaaay
full of open source
in the language C
myyyyy mallocs are freeeeee
Re:Hell comes in many flavors (Score:3, Informative)
Good introduction to Limbo (Score:5, Informative)
I've briefly looked into trying out Inferno, but bear in mind it's not designed as a desktop system. Instead, the market it seems to be used in is the embedded market - so it'd be interesting to see how easy you can write server apps for application boxes with it.
However, it initially appears that Limbo is the only way to program for Inferno (prove me wrong please), which would be an obvious impediment to developer take-up.
Re:Good introduction to Limbo (Score:5, Informative)
"Features
Compact
Runs on devices with as little as 1MB of RAM
Complete Development Environment
Including Acme IDE, compilers, shell, UNIX like commands & graphical debugger
Limbo
An advanced modular, safe, concurrent programming language with C like syntax.
Library Modules
Limbo modules for networking, graphics (including GUI toolkit), security and more...
JIT Compilation
Improves application performance by compiling object code on the fly (Just In Time).
Namespaces
Powerful resource representation using a single communication protocol. Import and export resources completely transparently.
Full Source Code
Full source code for the whole system and applications, subject to licence terms
And more...
# Online manual pages
# Full unicode support
# Dynamic modules
# Advanced GUI toolkit
# Javascript 1.1 web browser
# C cross compiler suite
# Remote Debugging
# Games, Demos & Utilities"
Most relevant on the list is the C cross compiler suite. Theres at least one language other than Limbo you can code in (although it seems limbo is designed by many of the guys who wrote C and other minor items of note such as Unix).
If there is one language any developer you'd really want on the playing field knows, it's C.
Re:Good introduction to Limbo (Score:2, Informative)
Still, I think the compiler might be one of the most valuable parts of this distribution. It was originally written by Ken Thompson; it is fast; its code is small and readable.
If enough people notice, that could be a worty competition to GCC.
Re:Good introduction to Limbo (Score:5, Interesting)
there are options for getting existing C code into the Inferno world. at a high level, 3.
and yes, it has been an impediment to developer take-up, which is a real shame. Limbo is a simply beautiful language.
was that really necessary? (Score:2)
OSS authors: Think carefully about communication. (Score:5, Insightful)
It amazes me how bad open source people are at marketing. Why make your project, which requires a huge amount of excellent thinking, the butt of jokes?
Why give a name to your open source project that will cause those who have less than complete technical knowledge to feel uncomfortable about adopting what you have done?
One question is, how bad can it get? Will there one day be a "Worthless" project? There is already a "Waste [sourceforge.net]".
The funniest bad name for an open source project was "Killustrator". It's easy to see how the name was chosen. Everything in KDE began with a K, as much as possible, and Killustrator is an open source illustration program. It didn't seem to bother anyone that the first syllable of the name was "Kill". I can imagine the Killustrator author thinking how convenient it is that the word illustrator begins with a vowel; that makes it easy, just put a K at the beginning, and you have a name!
The name Killustrator gave everyone a million dollars worth of laughs, and did perhaps $10 million damage to Adobe's reputation when the CEO of Adobe overreacted, saying people would confuse Killustrator with Adobe Illustrator.
Do open source authors believe that there are only a few concepts available, not enough for everyone? Why copy the FreeBSD devil idea? [freebsd.org]
And Why did the FreeBSD project adopt that idea? I know FreeBSD is an excellent OS, and the favorite BSD for ISPs, but there are some who will be discouraged by the amateurish baby red devil marketing scheme.
Re:OSS authors: Think carefully about communicatio (Score:5, Funny)
OSS authors:Think carefully about [making money] (Score:5, Insightful)
The cURL license seems okay now: (Score:3, Informative)
The cURL license seems okay now: cURL license [curl.haxx.se]. I suppose it wouldn't be on Sourceforge [sourceforge.net] if it weren't okay.
Don't confuse [curl.haxx.se] cURL with Curl, from the Curl Corporation [curl.com].
Re:OSS authors: Think carefully about communicatio (Score:5, Insightful)
Maybe these folks don't give shit about marketing
... they just do it because they like it. WASTE is a good name IMHO - funny reference to Pynchon's Crying of lot 49. I don't think WASTE author wanted to 'take over the market' with his prog either.
FreeBSD's beastie
... yeah, sure, the OS logo is the first thing everyone would consider when choosing a solution (yahoo seems very much discouraged by chuck - name for beastie btw -, as does NYInternet, Pair Networks, netcraft itself or the apache project).
Linux was criticized for the 'idiotic' looking penguin as well, remember? Yet I don't think that its market entry was very much hindered because of its logo.
You can save me hours of boring, repetitive... (Score:3, Interesting)
"... funny reference to Pynchon's Crying of lot 49".
To those who understand the reference, it may be funny. To everyone else, it is just confusing.
FreeBSD's little devil logo is well-drawn and cute. But the logo doesn't match the subject. FreeBSD is seriously important! It's the OS of choice for those who want to run a secure web server. It's not clear to me why FreeBSD is chosen more than the other BSDs, but FreeBSD has become important to the world. The FreeBSD license allows mixing with closed s
Re:You can save me hours of boring, repetitive... (Score:3, Interesting)
On the other hand, I think you are right about netbsd logo, but for different reasons (poli
Re:OSS authors: Think carefully about communicatio (Score:5, Insightful)
Re:OSS authors: Think carefully about communicatio (Score:3, Interesting)
Anyway, that was the first encounter I had with the idea of a daemon as a program that just sat around waiting for an activation command. (It may not have been new then,
Re:OSS authors: Think carefully about communicatio (Score:4, Funny)
FreeBSD is not alone in this, as can be seen from why Mac is bad [jesussave.us]
;-)
Re:OSS authors: Think carefully about communicatio (Score:3, Funny)
Well, I've seen better names than ProjectTraq Intranet System Services aka "PISS" [freshmeat.net] anyway..
Re:OSS authors: Think carefully about communicatio (Score:2)
The number one rule of marketing is that there is no such thing as bad press. Every time I see an absolutely awful commercial on TV, I'll talk about it with my friends, and we'll all agree that the commercial was terrible.
Except it isn't - it's brilliant.
We're all sitting around talking about a commercial we would have otherwise forgotten.
The same principle applies here - if you get a clever or mem
Looks right. (Score:3, Funny)
Yup. All related to hell.
Re:Looks right. (Score:2)
Alternative to VMWare? (Score:5, Interesting)
Inferno on Lucent BRICK Firewal (Score:4, Informative)
All jokes aside... (Score:5, Insightful)?
From the description it sounds like it's multi-threaded and designed with distributed systems (read cluster) in mind.
Plus it already has a language designed by the fathers of C and C cross compiler (wonder how well it works, also being designed by the fathers of C).
So in one sweep we have a solution suitable (sounds like it carries 1mb ram overhead) for most applications. Anything written for it magically runs on every major platform, it's highly scriptable and carries most of the magic of Unix packed with it wherever it's run from.
If it's significantly faster than Java I'd say we have a solution to the multi-distro problem as far as apps go.
Re:All jokes aside... (Score:5, Informative)
operating system that lends itself to clustering
applications, and Vita Nuova has a few big clients
looking at exactly this.
Plus the Vita Nuova people are very approachable.
(Their office is virtually within sight of mine).
One of the great advantages is that just about
everything looks like a file so it is very easy
to create namespaced collections of device-type
files that might be resident on your machine, or
just as easily resident on a collection of
disparate machines. It makes prototyping GRID
applications very much easier.
Personally I am very keen on looking more at
Inferno for GRID computing just as soon as I have
more time to spend on it. It's not a solution to
all ills, but it has definite advantages, and
seems to be very robust and has a small footprint.
I've seen it running happily on a fairly old
PDA being used to seamlessly integrate a whole
series of remote devices.
Aaron Turner, University of York
Re:All jokes aside...(scary) (Score:3, Funny)?
(snip)
So in one sweep we have a solution suitable (sounds like it carries 1mb ram overhead) for most applications. Anything written for it magically runs on every major platform, it's highly scriptable and
Re:All jokes aside... (Score:3, Informative)
Re:All jokes aside... (Score:4, Insightful)
In fact, I'd say that most applications won't be able to use this. And only a few will find it the best choice. (A part of the reason that it's so small is that it doesn't hold much. Useful stuff, but not much of it.)
environment (Score:2, Insightful)
When's the last time you saw an app so well developed that it ran on almost any platform - not to mention as its own OS.
At this point, I don't even care what it does, I think that part shows a level that many other applications need to strive for.
Re:environment (Score:5, Funny)
Emacs
:-P (at least the "own OS" part).
Inferno on the Mac G5 (Score:3, Informative)
Just wondering -- has anyone else tried this, successfully? I downloaded the demo disk and ran the OS X install script, and when the script got to the part where it started running the "emu" binary, all sorts of fascinating and wonderful errors began, starting with malloc messages. I finally ended up having to kill the process.
I seem to remember... (Score:2, Informative)
Re:I seem to remember... (Score:3, Informative)
Discreet's Inferno (Score:4, Interesting)
Re:Discreet's Inferno (Score:2)
Re:Discreet's Inferno (Score:3, Interesting)
It was distributed-only, where the Disk-subsystem ran as seperate (networked) nodes from the CPU-subsystem(s), which were seperate from the Terminal(s), etc, etc. It seemed an awful lot like a mainframe-style system using commodity parts, but you had to invest in at least 3 nodes in this way, if not more. This could have been expensive for what was mostly a research or hobby system at the time -- at least if you were going to get anything usable, speed-wi
Tinkered with an early version of Inferno (Score:3, Informative)
Re:Tinkered with an early version of Inferno (Score:5, Informative)
the license has changed substantially (it's free if your work is), a commercial source license is now a couple orders of magnitude cheaper, and the tech has progressed substantially since 1997 (which, if i recall properly, was before even the 1.0 release).
MS, incidentally, found it interesting enough to offer to buy it twice in 1996 and 1997.
oh, and having met Dennis Ritchie in a work environment, i'm thinking that if your co-worker was chewed out, he/she deserved it. the big three - Dennis, Ken, and Brian - are some of the easiest geniuses to work with i've ever met (and Bell Labs had plenty wandering around).
Call me when they get to (Score:5, Funny)
A few obvious questions:
"I was made by the first power, the first holiness and the first love"
And if the above sounds like raving, just google for Dante Alighieri...
Re:Call me when they get to (Score:3, Funny)
Well, duh. Why do you think the ninth circle is made of ice?
And considering what the core [appstate.edu] looks like, I'm glad they've expanded the traditional four rings of protection to nine...
My favorite user license term ever (Score:3, Funny)
"I will not be using Plan 9 in the creation of weapons of mass destruction to be used by nations other than the US."
There are so many ways that this is funny. There are enough jokes in that one line to keep a sitcom running for two years, maybe more.
Additional License issue (Score:3, Insightful)
This is realistically commercial software with a "demo" license. You can't do anything serious with it. (Compare to Perl/PHP/Apache)
Re:Cool ... (Score:5, Funny)
In hell it is always the 1980s (Score:2)
Re:In hell it is always the 1980s (Score:2)
Re:I thought a better unix was ... (Score:5, Informative)
Linux is better mostly because it's free. It does not fix some of the imperfections in the core design (for good reasons; that would break Posix compatibility). According the Inferno Design Principles [vitanuova.com], Inferno takes Unix ideas and applies them more consistently. For instance: everything is a file. In Inferno, what you're typing in a text editor window can be queried in something like
Re:Inferno 4... (Score:2)
Re:Distributed Operating System? (Score:4, Interesting)
Not that this is bad, but it isn't just "UNIX++".
Distributed operating systems are cool -- to do research on. However, they suffer from some serious real-world-usage problems. Unless you really know what you're doing and frequently are writing the application you plan to use, you don't "magically get lots more speed" because most tasks that people want to do just don't parallelize all that well (and even if they do, take more work to parallelize). There are only a couple of non-unique software systems that *really* parallelize really, really well. Raytracing is one. The problem is that these systems are so few and far between that it's often better to just write application-specific distributed code rather than trying to write a general distributed OS that gives less good performance. There's often a fair amount of overhead involved in distributing an OS, so the vast majority of common tasks run with overhead they they wouldn't need to on a traditional OS.
*IX is pretty good. There aren't a whole lot of obvious changes I'd like to see. Hmm...if I could make changes:
* Standard home directory structure redone. I wrote a detailed proposal on Slashdot for this that allows a standard mechanism for dropping off files, having public files without exposing the contents of one's home directory, and not having config files litering ones home directory.
* ACLs being standardized (and ideally used minimally or not at all on vanilla boxes). ACLs are terribly useful for end users, as it's much easier to do many tasks (and you can do things that you can't do with the standard *IX permission scheme). Minimal use is important to keep things easy to audit.
* Linux has a fully-ordered init system rather than a partially-ordered init system. This is not that great from a performance and usability perspective. Partial orderings allow a full ordering to be forced, if necessary. However, full orderings prevent clever things being done like getting the desktop up as quickly as possible on a desktop-based system, but the nfs server up as quickly as possible on a fileserver.
* *IX lacks a standard utility that can escape all non-line-terminators. This is terribly important for dealing with files with spaces and parens and things in their name. I have a replacement awk script called "myxargs" that does this and lets me do all the standard *IX file operations easily without having my stuff barf on files named using Windows conventions.
* *IX does not have a standard set of features -- and on Linux, no easily-end-user-available features at all for transparent file encryption. Windows does. This is an embarassment.
* Chroot is very cool, but also overkill for a lot of things. I'd like to see a support for a standard Linux restricted
* I've always wondered why network interfaces (at least under Linux, not sure if this is the same under other OSes) are not files like almost everything else in the UNIX world.
* *IX lacks a good, common secure, easy to set up a distributed filesystem. It would be really nice if AFS was a piece of cake to set up, supported large files out of box, and was present on all *IX systems. If it could serve the role that SMB/CIFS does in the Windows world (Joe User can easily make a share), but with better performance and security, and the ability to easily distribute, we'd definitely be going somewhere.
* *IX lacks a good, common, secure, easy to set up messaging client. Talk was absolutely wonderful back in the day, but firewalls and other nastiness have made it very uncommon. This is not just for desktop systems -- messaging can be a CLI application for troubleshooting and the like. I'd personally hope that such a system be able to do end-to-end encryption.
Re:Distributed Operating System? (Score:3, Informative)
Spelling creat with a "e"
And umount with an n... (Plan 9 has unmount. Don't know about create though. It also lacks the root of string overflows,
* I've always wondered why network interfaces (at least under Linux, not sure if this is the same under other OSes) are not files like almost everything else in the UNIX world.
In Plan 9... the whole network interface and system is done as files, not merely the c
Re:Tryst with Plan9 (Score:5, Interesting)
also, "its GUI sucked" is an overly broad and essentially content-free statement. a large part of it is subjective. the gui is certainly minimalist, but i really like that. i try hard to get any X11 system i have to use to look as much like it as possible. there's a number of things which you simply can't say "suck" - things like the chording in Acme, the exact window positioning with sweeping on creation, the underlying model. all amazing. particularly the underlying model - built using the same primitives as everything else in the system. you get things like distribution and recursion for free. wonderful stuff.
all that being said, if you can't get Plan 9 working, that's a good reason to check out Inferno. all the Plan 9 concepts, with one or two others in the mix, and can run hosted (read: no driver worries).
Re:really cool... (Score:5, Informative)
By using a single, simple metaphor to represent external resources (a hierarchical filesystem with streamlined semantics), it's possible to write general purpose components that are not conceivable in other systems, because their resources are not available in such a uniform way.
For instance:
Almost all of the complexity in most conventional systems today comes from backward compatibility requirements. Inferno can do what it does by discarding that backward compatibility - the obvious cost is that it's quite an effort to get your old programs to run underneath it. However, for many applications, that's not an issue, whereas the unreasonable complexity of other "modern" systems is. | https://developers.slashdot.org/story/04/05/16/2334220/Inferno-4-Available-for-Download | CC-MAIN-2016-40 | refinedweb | 5,736 | 62.98 |
en and Todd,
What you are seeing is cruft--I think I was planning on using something
called that, but never wrote it. It can safely be removed.
Todd, I have on my todo list for the next release getting the Bio::GMOD
namespace stuff sorted out, because right now I suspect there is a
conflict. The only modules I use now are Bio::GMOD::Config, which
returns basic configuration info) and Bio::GMOD::DB::Config, which
returns database specific information. I suspect both of these will go
away, to be replaced with commandline scripts.
Thanks,
Scott
On Fri, 2005-07-08 at 15:56 -0600, Todd Harris wrote:
> Whoa. There may be a namespace conflict with the already registered
> Bio::GMOD. I've already encapsulated many packages under a different
> structure as suggested at the GMOD meeting but there are stil some
> things to do.
>
> SC - do you have a list of _your_ Bio::GMOD modules? I'd like to get
> this straightened out so I'm not holding up the gmod release.
>
> T
>
>
> On Fri, 8 Jul 2005 3:32 pm, Allen Day wrote:
> > % grep -r 'Bio::GMOD::Util' BUILD/gmod-0.003/
> > BUILD/gmod-0.003/lib/Bio/GMOD/Load.pm:use Bio::GMOD::Util;
> >
> > Where can I get this module?
> >
> > -Allen
> >
> >
> > -------------------------------------------------------
> >-devel mailing list
> > Gmod-devel@...
> >
--
------------------------------------------------------------------------ | https://sourceforge.net/p/gmod/mailman/gmod-devel/?viewmonth=200507&viewday=9 | CC-MAIN-2017-17 | refinedweb | 220 | 65.32 |
-25-2018 10:39 AM
Hi,
I need some tips do implement a project: I need to connect a MicroBlaze processor to a Convolution IP Block that i've just implemented using HLS. There must be a Image stored in MicroBlaze and that image must be transfered to the IP Block, be processed and return to the MicroBlaze.
Whats is the best way to do this? It's a single image that a need to send.
For now, that is the blocks I have:
The Conv HLS code is the following:
#include "conv.h" void conv(uint8_t image_in[rows*collums],uint8_t image_out[rows*collums]){ #pragma HLS INTERFACE m_axi depth=1440000 port=image_out offset=slave bundle=CRTL_BUS #pragma HLS INTERFACE m_axi depth=1440000 port=image_in offset=slave bundle=CRTL_BUS #pragma HLS DATAFLOW #pragma HLS INTERFACE s_axilite port=return bundle=lite const char coefficients[3][3] = { {-1,-2,-1}, { 0, 0, 0}, { 1, 2, 1} }; hls::Mat<collums,rows,HLS_8UC1> src; hls::Mat<collums,rows,HLS_8UC1> dst; hls::AXIM2Mat<rows,uint8_t,collums,rows,HLS_8UC1>(image_in,src); hls::Window<3,3,char> kernel; for (int i=0;i<3;i++){ for (int j=0;j<3;j++){ kernel.val[i][j]=coefficients[i][j]; } } hls::Point_<int> anchor = hls::Point_<int>(-1,-1); hls::Filter2D(src,dst,kernel,anchor); hls::Mat2AXIM<rows,uint8_t,collums,rows,HLS_8UC1>(dst,image_out); }
03-25-2018 07:40 PM
@tiago0297 Create a block with BRAM or DDR. It will expose a slave port.
Then connect both MicroBlaze and Conv's master ports to it through an AXI interconnect. Connection automation should take care of this for you.
You also need to make s_axi_lite a slave of the Microblaze so you can start/stop/configure your block.
03-25-2018 08:27 PM
The image will be stored in the BRAM? How it's done?
When I do this, the MicroBlaze will be able to get it from the BRAM and send it to the Conv, then Conv will send it back to the BRAM?
03-25-2018 09:13 PM
@tiago0297 The way you set up, you have an AXI master. It will issue AXI read and then write commands to the interconnect and fetch the data for you. This is already done by HLS in your block.
Microblaze only has to configure the IP's slave offset and start the IP block using the SDK-provided API.
03-27-2018 08:52 AM
Nice, I think I understood. But I still have some questions, i'm new to Vivado.
So the MicroBlaze will be in charge to get the image from the BRAM and send it to Conv. But my Image is a .jpeg file and I don't understood how can I load this file in the BRAM. How can I do this?
03-27-2018 08:59 AM
Microblaze will just fetch this image and decompress it in memory. Not sure if opencv is available for microblaze but if it does you can use cvloadimage. Otherwise I remember tinydnn comes with a header only jpeg/png reader.
You can alternatively use "convert" on Linux to generate a raw file and then upload to Bram using XSCT
03-30-2018 07:37 AM
I"m still confused about how I can initialize my memory with my image. What's the best way to do this?
03-31-2018 12:35 PM
I've connected the MicroBlaze, the Conv IP Block and a BRAM. I'm attaching the pdf of my design. Now, I want to initialize my memory with a Image I have stored in my computer, then send this image to the Conv and get it back to the memory. Am I on the right way to do this?
I've tryied to use the Memory Configuration File Generator on Tools of the Vivado, selecting my image and asking it to generate a file for me. I'm a very beginner user, so I really don't know if it's the best way to generate a initialization file for my BRAM. Unfortunately, this didn't work for me because of the following error: [Writecfgmem 68-4] Bitstream at address 0x00000000 has size 385828 bytes which cannot fit in memory of size 2097152 bytes. | https://forums.xilinx.com/t5/Processor-System-Design/Merging-a-MicroBlaze-and-a-Image-Processing-IP-Block/td-p/841665 | CC-MAIN-2019-47 | refinedweb | 709 | 63.29 |
This is the third, and last, of a series of posts on Benford’s law, this time looking at a famous open problem in computer science, the 3n + 1 problem, also known as the Collatz conjecture.
Start with a positive integer n. Compute 3n + 1 and divide by 2 repeatedly until you get an odd number. Then repeat the process. For example, suppose we start with 13. We get 3*13+1 = 40, and 40/8 = 5, so 5 is the next term in the sequence. 5*3 + 1 is 16, which is a power of 2, so we get down to 1.
Does this process always reach 1? So far nobody has found a proof or a counterexample. (But there has been progress.)
If you pick a large starting number n at random, it appears that not only will the sequence terminate, the values produced by the sequence approximately follow Benford’s law (source). If you’re unfamiliar with Benford’s law, please see the first post in this series.
Here’s some Python code to play with this.
from math import log10, floor def leading_digit(x): y = log10(x) % 1 return int(floor(10**y)) # 3n+1 iteration def iterates(seed): s = set() n = seed while n > 1: n = 3*n + 1 while n % 2 == 0: n = n / 2 s.add(n) return s
Let’s save the iterates starting with a large starting value:
it = iterates(378357768968665902923668054558637)
Here’s what we get and what we would expect from Benford’s law:
|---------------+----------+-----------| | Leading digit | Observed | Predicted | |---------------+----------+-----------| | 1 | 46 | 53 | | 2 | 26 | 31 | | 3 | 29 | 22 | | 4 | 16 | 17 | | 5 | 24 | 14 | | 6 | 8 | 12 | | 7 | 12 | 10 | | 8 | 9 | 9 | | 9 | 7 | 8 | |---------------+----------+-----------|
We get a chi-square of 12.88 (p = 0.116) and so we get a reasonable fit.
Here’s another run with a different starting point.
it = iterates(243963882982396137355964322146256)
which produces
|---------------+----------+-----------| | Leading digit | Observed | Predicted | |---------------+----------+-----------| | 1 | 44 | 41 | | 2 | 22 | 24 | | 3 | 15 | 17 | | 4 | 12 | 13 | | 5 | 11 | 11 | | 6 | 9 | 9 | | 7 | 11 | 8 | | 8 | 6 | 7 | | 9 | 7 | 6 | |---------------+----------+-----------|
This has a chi-square value of 2.166 (p = 0.975) which is an even better fit.
See also: Benford’s law posts organized by application area
One thought on “The 3n+1 problem and Benford’s law”
Maybe my intuition is off, but I think it’s relatively obvious that a process that generates new values by (approximate) multiplication and (exact) division, i.e., by “randomly” shifting values up and down a logarithmic scale would generate numbers that follow Benford’s Law. | https://www.johndcook.com/blog/2017/05/03/the-3n1-problem-and-benfords-law/ | CC-MAIN-2021-31 | refinedweb | 432 | 70.94 |
This is a trivial, yet very fast approximation of calculating binomial coefficients is to use the logarithm rules we got from the basic course in calculus. We have that
Therefore,
.
Example: In Python, that is
from math import log from math import exp def log_fac(n): sum = 0 for i in xrange(2,n+1): sum += log(i) return sum def log_binomial(n,k): return (log_fac(n)-log_fac(k)-log_fac(n-k))
To compute for instance
, write
print exp(log_binomial(80000,30)) which outputs
4.64170748337e+114 in a matter of milliseconds. We can improve this further. Here’s a code which does a bit less calculation, although the improvement is marginal:
def log_fac(m,n): sum = 0 for i in xrange(m,n+1): sum += log(i) return sum def log_binomial(n,k): if k > (n-k): return (log_fac(n-k+1,n)-log_fac(2,k)) else: return (log_fac(k+1,n)-log_fac(2,n-k))
Sergey Shashkov suggested in the comments the following exact computation of the binomial coefficient:
def pow_binomial(n, k): def eratosthenes_simple_numbers(N): yield 2 nonsimp = set() for i in xrange(3, N + 1, 2): if i not in nonsimp: nonsimp |= {j for j in xrange(i * i, N + 1, 2 * i)} yield i def calc_pow_in_factorial(a, p): res = 0 while a: a //= p res += a return res ans = 1 for p in eratosthenes_simple_numbers(n): ans *= p ** (calc_pow_in_factorial(n, p) - calc_pow_in_factorial(k, p) - calc_pow_in_factorial(n - k, p)) return ans
The code above generates a set
containing all primes up to
. Then, for each
it computes the number of times that
divides
,
and
. Define
as the number of times
divides
. It then computes the difference
and multiplies the product with
. By the fundamental theorem of arithmetic, we have that
Another way of visualizing it is to write
as
To count the number of times
occurs in
, it first computes
, which is the number of times
occurs once in the range
. Then, it computes
and so forth until
. Adding all cardinalities, we get
.
The reason why it runs faster than computing
is that it omitts multiplying with factors of
that we know will cancel out from division by
.
Running the code 100 times each, the average time to compute is as follows
Exact approach: 38.8791418076 ms
Non-exact log approach: 15.0316905975 ms
It still seems that the approximate log computation is a bit faster, although the difference is quite small. For excessively large binomial coefficients, we can exploit Stirling’s approximation:
.
def log_fac2(m,n): return n*log(n)-m*log(m) def log_binomial2(n,k): if k > (n-k): return (log_fac2(n-k+1,n)-log_fac2(2,k)) else: return (log_fac2(k+1,n)-log_fac2(2,n-k))
This runs in about
1.18 μs, at the cost of lower precision.
Computing factorial and binomial coefficients modulo prime
A quite efficient way to compute the factorial modulo a prime
can be achieved by using a precomputed table containing a sparse subset of the function image (if you are familiar with the baby-step giant-step algorithm for computing discrete logarithms, these two approaches are very similar). For instance, let the table contain values
{0: 1, 10000000: 682498929, 20000000: 491101308, 30000000: 76479948, 40000000 ... }, up to the maximum value (the modulus
). This means that we at most need to compute 10000000 values. If the distance between the points in the table is
, we can compute the a factorial mod
in time
. Let us for simplicity assume that
.
Define the
factorial function as follows:
pre_computed = {} sqrt_mod = int(sqrt(mod)) def factorial(n): if n >= mod: return 0 f = 1 for i in reversed(xrange(1, n + 1)): if i % sqrt_mod == 0: if i in pre_computed: f = f * pre_computed[i] % mod break f = f * i % mod if f == 0: break return f
Then, we compute the table as follows:
def create_precomputed_table(): for i in range(0, mod, sqrt_mod): pre_computed[i] = factorial(i) create_precomputed_table()
Obviously, we do this only once (it is a precomputation, right?). This precomputation requires
time (we have to compute all values in between and
).
Here is some code to compute the modular inverse (you can use basically any implementation):
def extended_gcd(aa, bb): lastremainder, remainder = abs(aa), abs(bb) x, lastx, y, lasty = 0, 1, 1, 0 while remainder: lastremainder, (quotient, remainder) = remainder, divmod(lastremainder, remainder) x, lastx = lastx - quotient*x, x y, lasty = lasty - quotient*y, y return lastremainder, lastx * (-1 if aa < 0 else 1), lasty * (-1 if bb < 0 else 1) def modinv(a, m): g, x, y = extended_gcd(a, m) if g != 1: raise ValueError return x % m
Finally, we compute the binomial coefficient as follows
def binomial(n, k): if n < k: return 0 return factorial(n) * modinv(factorial(n - k) * factorial(k), mod) % mod
which also requires
time.
6 thoughts on “Quick and dirty way to calculate large binomial coefficients in Python”
What about using the recursive formula to avoid calculating the factorials explicitly? Something like
# def log_binomial(n,k)
# b = 1
# for i in range(1,k): b += log(n-(k-i)) – log(i)
# return b
Yeah, why not? Nice 🙂
If you replace range by xrange, the script will require less memory
True! I can’t see why you shouldn’t. I didn’t know this until now, thanks!
The same time, but the exact result.
And it does work for (80000 40000).
# def pow_binomial(n, k):
# ….def eratosthenes_simple_numbers(N):
# ……..yield 2
# ……..nonsimp = set()
# ……..for i in range(3, N + 1, 2):
# …………if i not in nonsimp:
# …………….nonsimp |= {j for j in range(i * i, N + 1, 2 * i)}
# …………….yield i
# ….def calc_pow_in_factorial(a, p):
# ……..res = 0
# ……..while a:
# …………a //= p
# …………res += a
# ……..return res
# ….ans = 1
# ….for p in eratosthenes_simple_numbers(n+1):
# ……..ans *= p ** (calc_pow_in_factorial(n, p) – calc_pow_in_factorial(k, p) – calc_pow_in_factorial(n – k, p))
# ….return ans
Yes, this works very well! Thank you, I have added it to the post. | https://grocid.net/2012/07/02/quick-and-dirty-way-to-calculate-large-binomial-coefficients-in-python/ | CC-MAIN-2017-17 | refinedweb | 983 | 52.09 |
Hello, i am new to C++ and doing an assignment, "Binary Code Breaker"
what i have to do is make a game where the player has to guess a four digit code made up of 1's and 0's. The game can either be made so that the play guesses all 4 digits at once, or one digit at a time. I am trying to do a 1 digit at a time.
Right now, i'm just trying to get it functional, and would gladly take any help or tips on how to go about doing this.
#include <iostream> #include <cstdlib> #include <ctime> #include <conio.h> #include <string> using namespace std; int main() { srand(static_cast<unsigned int>(time(0))); int userSelection; const short PLAY_GAME = 1; const short QUIT_GAME = 2; do { cout << "[1] Start Game." << endl; cout << "[2] Quit Game." << endl; cout << "> "; cin >> userSelection; switch (userSelection) { case PLAY_GAME: break; case QUIT_GAME: cout << endl; cout << "Goodbye!" << endl; cout << endl; break; default: cout << endl; cout << "That is not a valid option!" << endl << endl; cout << endl; } while (userSelection == 1) { const int CODES = 10; string BIN[CODES] = { "0100", "1010", "1110", "1011", "0001", "0010", "0000", "1111", "1000", "0100" }; const int TRIES = 5; string guess; while (true) { cout << "play Game? (y/n) " << endl << endl; char decision = _getch(); if (decision == 'y') { int outCode = (rand() % CODES); string secretCode = BIN[outCode]; for (int i = 0; i < TRIES; i++) { int counter = 0; cout << "Enter Number" " You have (" << 5 - i << ") Attempts left. ->"; cout << "Enter your guess -> "; getline(cin, guess); for (size_t i = 0; i < guess.size(); i++) { if (guess[i] == secretCode[i]) { cout << guess[i]; ++counter; cout << "Is correct" << endl; } else { cout << endl << endl << "\t\t** WRONG ** " << endl << endl; } } cout << endl; if (counter == 5) { cout << "You win" << endl; } else { cout << "Sorry, you lose." << endl; } } } else if (decision == 'n') { cout << "Goodbye"; } } } } while (userSelection != 2); system("PAUSE"); return 0; }
When i..
[1]start game,
Press - y - to play
it produces the first two lines as if i have already made a guess.
"Enter digit #1" " You have (5) Attempts left. ->"
"Enter your guess -> ";
sorry, you lose.
"Enter digit #1" " You have (4) Attempts left. ->"
"Enter your guess -> "
It looks like you're new here. If you want to get involved, click one of these buttons! | https://programmersheaven.com/discussion/436717/binary-code-game | CC-MAIN-2018-34 | refinedweb | 372 | 79.8 |
Creating and Using Widgets
A widgeta mini-application that you place on a Web page using PageBuilder; a widget provides either specific functionality (calculators, search, social bars, and so on) or areas into which you can add content (content blocks, list summaries, collections, and so on). is a mini-application that can provide either specific functionality (search, social bars, and so on) or areas into which you can add Ektron content (content blocks, list summaries, collectiona list of Ektron content links for display on a Web page.s, and so on). You can drag and drop widgets onto a page using a wireframethe architecture of a Web page containing columns, dropzones, and layout information., dropzonean area on a Web page where you can drag and drop a widget.s, and widgets.
The following figure shows the relationship between a wireframe, dropzones, and widgets.
A widget consists of 3 file types.
.ascx—contains a widget’s source code
.ascx.csor
.vb—contains widget’s code-behind
.ascx.jpg—image that represents a widget in the widget selection tool
When you create a widget, save the widget files to the
siteroot/widgets folder. This folder path is defined in the site
root/web.config file, so if you change the folder name or path, you must update the following
web.config element:
<add key=”ek_widgetPath” value=”Widgets/" />.
NOTE: Your widget might use additional files, such as
.css or
.js files.You should place these files in a folder within
siteroot/widgets, and give the folder the same name as the custom widget.
Ektron stores each page’s data (a serialized XML string) as a type of content in the Ektron Workarea. The string is stored like other content types, such as HTML content and XML Smart Forman Ektron-defined Web page that contains XML (hidden from the end user) to display content, and receive, verify, and save user input.s.
After you integrate widgets into Ektron, you can add them to a dashboard in your profile page or a community group’s page. You also can drag-and-drop these building blocks onto a PageBuilder page or a Personalized Web page. See also: Creating Web Pages with PageBuilder and Personalizing a Web Page.
Content authors open the widget bar from the PageBuilder menu by clicking the up/down (
) or down (
) controls.
For HTML5 CSS3-compliant browserall browsers are supported EXCEPT for the following versions *and older*: IE 8, Chrome 3 and older, Firefox 3 and older, Safari 3 and older, and Opera 10 and older.s:
Widget States on a Pagebuilder Page
Widgets placed on a PageBuilder page have 3 possible combinations of states.
In a widget’s user control file, you create an
asp:MultiView element that determines available actions when a widget is in View mode and Edit mode.
<%@ Control</asp:Label><br /> <asp:Label</asp:Label> <!-- End To Do .............................. --> </asp:View> /><br /> <!-- End To Do .............................. --> <asp:Button <asp:Button </div> </asp:View> </asp:MultiView>
An administrator can determine which widgets can appear for use on a wireframe.
To learn how to create a widget, create a simple widget in the
siteroot/widgets folder. This widget is based on the Hello World widget that is installed with the sample site with the following files:
HelloWorld.ascx—user control file
HelloWorld.ascx.cs—user control code-behind file
HelloWorld.ascx.jpg—image that represents this control on the widget menu
widgetsfolder.
HelloWorld.ascx.
widgetsfolder.
Copy of HelloWorld.ascx.
new_widget.ascx. Visual Studio automatically renames the code-behind file to
new_widget.ascx.cs.
helloworld.ascx.jpgfile to
new_widget.ascx.jpg.The image file is 48 x 48 pixels and 72 dpi. Ektron administrators and content authors drag a widget’s image onto the page.
new_widget.ascxto open it.
HelloWorld(circled) with
new_widget.
new_widget.ascx.cs.
HelloWorldwith
new_widget.
new_widget.ascxand
new_widget.ascx.cs.
NOTE: Use this procedure to select (or restrict) any widget that you want to be available to use on a wireframe template. When the template is used, the selected widgets appear in the PageBuilder Widgets menu.
new_widget.ascx, appears in the list.
PageLayout.aspx. Or, any wireframethe architecture of a Web page containing columns, dropzones, and layout information. template that you are using to create a PageBuilder page.
After you create a new widget and enable it in the Workarea, you can begin to customize it. For more information about customizing widgets, see Customizing a Widget.
Here is the
new_widget.ascx file that is the basis of the widget.
<%@ Control Language=”C#” AutoEventWireup=”true” CodeFile=”new_widget.ascx.cs” Inherits=”widgets_new_widget” %> <%@="TextLabel" runat="server"></asp:Label><br /> <asp:Label</asp:Label> <!-- End To Do .............................. --> </asp:View> <asp:View <div id="<%=ClientID%>_edit"> <!-- You Need To Do ............................. --> <asp:TextBox </div> </asp:View> </asp:MultiView>
Notice the following elements of the file.
asp:MultiViewelement declares that the control has 2 possible modes: View and Edit.
<asp:MultiView
multiviewtags is information about the control in view mode. It has 2 fields: one is a text field, and the other is a check box.
<asp:View <asp:Label</asp:Label><br /> <asp:Label</asp:Label> </asp:View>
multiviewtags is information about the control in edit mode. In edit mode, a text box, a check box, and a Save button appear. The text box and check box collect end-user input, and the Save button saves that input to the database.
" />
Review the code-behind file,
new_widget.ascx.cs.
usingstatements are at the top of the file. Notice the Ektron ones in particular:
using Ektron.Cms.Widget; using Ektron.Cms;Marketing team using Ektron.Cms.API; using Ektron.Cms.Common; using Ektron.Cms.PageBuilder; using System.Text.RegularExpressions;
system.Web.UI.UserControland
IWidgetclasses.
public partial class widgets_new_widget: System.Web.UI.UserControl, IWidget
The following figure summarizes the remaining elements of the code-behind file.
#region properties private string _HelloString; private bool _CheckBoxBool; [WidgetDataMember(true)] public bool CheckBoxBool { get { return _CheckBoxBool; } set { _CheckBoxBool = value; } } [WidgetDataMember("Hello Wolrd")] public string HelloString { get { return _HelloString; } set { _HelloString = value; } } #endregion
private IWidgetHost _host;
page_initevents.
protected void Page_Init(object sender, EventArgs e) { (); }); ViewSet.SetActiveView(View); }
gethostmethod returns a reference to the container widgethost for this widget. This is the case in both Personalization and PageBuilder.
Titleproperty is the title of this widget. By setting it in
page_initfor the widget, we inform the host what text to put in the title bar above the widget. This works in both PageBuilder and Personalization.
host.Titleare raised by the widgethost. It’s up to the widget to subscribe to them. In all cases, if we don’t subscribe to them, the icons don’t show up. This is a method of attaching widget code to button clicks and other events that occur outside the widget.
PreRender: Ektron renders the contents of this widget on pre-render, thus ensuring a single render event. Another option is to call
SetOutputon the Load event, but you can only do that if the widget is not in edit mode currently.
editevents.
void EditEvent(string settings) { string sitepath = new CommonApi().SitePath; ScriptManager.RegisterClientScriptInclude(this, this.GetType(), "widgetjavascript", sitepath + "widgets/widgets.js"); ScriptManager.RegisterOnSubmitStatement(this.Page, this.GetType(), "gadgetescapehtml", "GadgetEscapeHTML('" + HelloTextBox.ClientID + "');"); HelloTextBox.Text = HelloString; MyCheckBox.Checked = CheckBoxBool; ViewSet.SetActiveView(Edit); }
IMPORTANT: You must register JavaScript and cascading style sheet (css) instructions in an external file.
sitepathto ensure that the correct path for included files is used across installations.
scriptmanagerto include the script. Alternatively, you can use
Ektron.Cms.Api.Js.RegisterJSInclude ScriptManager.RegisterOnSubmitStatement(this.Page, this.GetType(), "gadgetescapehtml", "GadgetEscapeHTML('" + HelloTextBox.ClientID + "');");
onsubmitstatementis JavaScript that is run when the widget is submitted. It calls escape html, which cleans the submitted text to avoid any XSS.
HelloTextBox.Text = HelloString;
MyCheckBox.Checked = CheckBoxBool;
ViewSet.SetActiveView(Edit);
saveevents.
protected void SaveButton_Click(object sender, EventArgs e) { HelloString = ReplaceEncodeBrackets(HelloTextBox.Text); CheckBoxBool = MyCheckBox.Checked; _host.SaveWidgetDataMembers(); ViewSet.SetActiveView(View); }
SetOutputevents.
protected void SetOutput() { HelloTextLabel.Text = HelloString; // client javascript remove brackets, server side adds back CheckBoxLabel.Text = CheckBoxBool.ToString(); }
Cancelevents.
protected void CancelButton_Click(object sender, EventArgs e) { ViewSet.SetActiveView(View); }
protected string ReplaceEncodeBrackets(string encodetext) { encodetext = Regex.Replace(encodetext, "<", "<"); encodetext = Regex.Replace(encodetext, ">", ">"); return encodetext; }
The following topics let you further customize widget behavior.
You can use JavaScript or a cascading style sheet to add custom functionality or styling to a widget. To do this, place the JavaScript or cascading style sheet (css) instructions in an external file, then register it in the code-behind file.
Example of including a JavaScript file.
void EditEvent(string settings) JS.RegisterJSInclude(this, _api.SitePath + "widgets/contentblock/jquery.cluetip.js", "EktronJqueryCluetipJS");
Example of including a .css file.
Css.RegisterCss(this, _api.SitePath + "widgets/contentblock/CBStyle.css","CBWidgetCSS");
WARNING! You must register JavaScript and .css files in an external file, as shown above. If you do not, the OnSubmit event places HTML in the TextArea field in encoded brackets (< >) and generates a dangerous script error.
The
JS.RegisterJSInclude and
Css.RegisterCss functions take 3 arguments.
this. For example:
_api.SitePath + "widgets/contentblock/jquery.cluetip.js"
_api.SitePath + "widgets/contentblock/CBStyle.css"
"EktronJqueryCluetipJS"
NOTE: Widgets use an update panel for partial postbacks. As a result, the ASP.NET tree view and file upload controls do not work with widgets. Ektron has workarounds for these functions. For an example of a tree view, see the content block widget (
siteroot/widgets/contentblock.ascx). For an Ajax file uploader, see the flash widget (
siteroot/widgets/flash.ascx).
Whenever your code is interacting with a widget, you need to verify that it is on a PageBuilder page (as opposed to another Ektron page that hosts widgets, such as personalization).
To check for this, insert the following code:
Ektron.Cms.PageBuilder.PageBuilder p = (Page as PageBuilder); If(p==null) // then this is not a wireframe When you want to check the mode, use code like this. If(p.status == Mode.Edit) // we are in edit mode
Global and local widget properties reduce your development effort by eliminating settings data classes. While you can still use these classes and manage your own serialization, for the vast majority of types, the built-in engine performs the necessary work.
Global properties apply to every instance of a widget. Local properties apply to one instance. If both local and global values are assigned to a property, local overrides global.
As an example of using a local property to override a global, consider a ListSummary widget. You may want its sort mostly by modified date in descending order, but in certain instances you want to sort by title in ascending order.
[GlobalWidgetData()] public string NewWidgetTextData { get {return _NewWidgetTextData;} set {_NewWidgetTextData = value; }}
[WidgetDataMember()] public string NewWidgetTextData { get { return _NewWidgetTextData; } set { _NewWidgetTextData = value; } }
A global property lets an Ektron developer or administrator assign properties and values that apply to all instances of a widget. You apply a global property to the widget’s code-behind page. Administrators could then set or update the property’s value in the Workarea’s Widgets screen.
For example, the Brightcove Video widget requires a player ID. You could insert that in the widget’s code-behind file. Then, an administrator could review and possibly update that information in the Workarea widgets screen. Whenever a user drops a Brightcove Video widget onto a page, the player ID is already assigned.
If the developer does not set a default value in code-behind, an administrator must set one on the Workarea’s Widgets screen.
If the developer does set a default value in code-behind, it will be applied unless changed by an administrator on the Workarea’s Widgets screen.
To set global properties:
siteroot/widgetsfolder.
propertiessection, insert the
GlobalWidgetDataattribute (as shown) to set the global property’s name and type.
[GlobalWidgetData()] public string NewWidgetTextData { get { return _NewWidgetTextData; } set { _NewWidgetTextData = value; } }
The supported types for
GlobalWidgetData are:
A local property lets an Ektron user assign property values that apply to a particular instance of a widget. For example, the Brightcove Video widget requires a Video ID, which identifies the video that appears where you drop the widget.
To set a local properties:
site root/widgetsfolder.
propertiessection, insert the
WidgetDataMemberattribute to set the property.
[WidgetDataMember(150530105432)]1 public long VideoID { get { return _VideoID; } set { _VideoID = value; } }
[WidgetDataMember. In the example above, the value is
150530105432.
_host.SaveWidgetDataMembers();.
You can add a Content type drop-down to the ListSummary widget. The drop-down lets the person dropping the widget on the page select from these choices.
The drop-down appears as follows after it is implemented.
To add this drop-down to the ListSummary widget:
siteroot/widgets/ListSummary.ascx.
DisplaySelectedContent.
DisplaySelectedContent, add the following code to create a drop-down list for the
ContentTypeproperty.
<tr> <td>DisplaySelectedContent:</td> <td> <asp:CheckBox </td> </tr> <tr> <td>ContentType:</td> <td> <asp:DropDownList <asp:ListItemAllTypes</asp:ListItem> <asp:ListItemContent</asp:ListItem> <asp:ListItemAssets</asp:ListItem> </asp:DropDownList> </td> </tr>
ListSummary.ascxfile.
ListSummary.ascx.cs.
ContentTypeproperty:
private string _ContentType;
AllTypes:
[WidgetDataMember("AllTypes")] public string ContentType { get { return _ContentType; } set { _ContentType = value; } }
EditEventarea, set the select list's value to
ContentType:
ContentTypeList.SelectedValue = ContentType;
SaveButton_Clickevent, set
ContentTypeas the select list's value:
ContentType = ContentTypeList.SelectedValue;
SetListSummary()function, set the List Summary server control's
ContentTypeto the
CMSContentTypeproperty:
ListSummary1.ContentType = (CMSContentType)Enum.Parse(typeof(CMSContentType), ContentType);
ListSummary.ascx.csfile.
You can include help for any widget that has the help icon ().
The help icon only appears when a user is editing a PageBuilder page. The icon appears both when a user is viewing a widget and editing its properties. It is not available to a page’s site visitors.
To create a widget’s help file:
You could create a content block within Ektron then switch to source view, copy the content into a word processor (like Notepad), and save it with an HTML extension.
HelpFileproperty to the code-behind of the page that hosts the widget.
protected void Page_Init(object sender, EventArgs e) { _host = Ektron.Cms.Widget.WidgetHost.GetHost(this); _host.HelpFile = "~/widgets/myWidget/help.html";
If you want to remove a widget, follow these steps. After you do, the widget is removed from the list of widgets, and from the dashboard of users and community groups.
/siteroot/widgets/folder.
The following conditions cause this image to appear.
IMPORTANT: You need an account on Brightcoveto show videos in the Ektron Brightcove Video widget.
Your Brightcove.comaccount lets you upload, store, and play videos on your Web page with the Ektron Brightcove Video widget.
You see the following screen the first time you use a Brightcove Video widget if you have not entered account information in the Workarea. Enter your Brightcove account information here. After you successfully save your account data, this screen does not appear again.
Follow these steps to play videos with the Brightcove Video widget.
NOTE: Brightcove provides the values for the Global Settings for this widget on your Brightcove Account pages. Log in to Brightcoveand go to Account Settings > API Management.
NOTE: You can upload your video using your Brightcove.com account or follow these steps and upload a video using the Ektron Brightcove Video widget.
A message on the Edit window shows that the Video is being uploaded. In this example, we are uploading the Training Program 8.0 video.
NOTE: Your video may not be available to view immediately after uploading. Allow time for Brightcove to publish it or check its status on your Brightcove Account page.
The following image is an example of the Brightcove Video widget on an Ektron OnTrek website page.
If your videos do not show in the Brightcove Video widget, check the following topics.
Check the status of your video by logging into your account on Brightcove and looking for it in the Media Library.
According to Brightcove's article Uploading Videos with the Media Module, "Your video files can use most available file formats; if your files are not already encoded as VP6 (FLV—Flash video) or H.264, Brightcove transcodes them into one of those formats."
Check the publisherID setting in the widget Configuration.
Widgets are located in the Ektron
webroot/siteroot/widgets/ folder. Ektron assigns standard names to widgets, but Ektron administrators can change a widget's name on the Synchronize Widgets Screen.
Displays activities of a user or group, depending on the type of page on which it is placed. See also Using the ActivityStream Widget.
After you select a blog ID, displays posts from that blog.
Properties Tab:
Plays a Brightcove video. You must have a Brightcove account to add a video with the widget.
See Using the Brightcove Video Widget for information about using the BrightCove widget.
Double click on the CTACall to action; a user interface element that prompts a site visitor to touch or click it to proceed on a path toward conversion from site visitor to customer. For example, links that say "For more information...," "Add to cart," or "Buy now." that you want on the page. If you have a lot of CTAs in the list, enter the name of the CTA you want in the Search box and click Search (
).
Displays a collection. You select a Collection ID. See Working with Collections.
Displays a list of content blocks. In contrast to a List Summary, where content must be in a specified folder, the ContentList control displays content from any Ektron folder.
Displays a selected Flash file which resides in Ektron. You can also set the display’s height and width. See Using the Flash Widget.
Lets you enter a path to a Web page or an item on the Web page.
Displays an Ektron List Summary, a list of certain types of content in a selected folder. You can click on the Folder tab and go to the content or click on the Property tab and specify the following fields.
Reports on the following categories of content on your website: Most Viewed, Most Emailed, Most Commented, or Highest Rated. See also the Most Popular Widget.
NOTE: You can change this to any number you wish. However, the widget can only show data for days for which data is stored in your database.
This controls the experiment. Settings include the target content number, start/stop button and the Report hide/show button. See also Measuring Web Experiences with Multivariate Testing.
Drag any type of content widget into the Multivariate Section widget. These produce the variations used during the experiment. See also Measuring Web Experiences with Multivariate Testing.
When a page view occurs on a page containing this widget, the conversion count is increased. See also Measuring Web Experiences with Multivariate Testing.
Adds a responsive image to a page. Click Select Image to browse for an image in the Library.
Lets you create a set of conditions. As soon as any condition evaluates to true, an appropriate widget appears. See Connecting Visitors with Targeted Content for information about the TargetedContent widget.
Displays content assigned to a taxonomy category.
See also Organizing Content with Taxonomies .
By default, the Trends widget shows the Most Viewed content on your website. You can edit the widget so it displays any of these content categories instead: Most Emailed, Most Commented, or Highest Rated.
NOTE: You can change to any number you wish. However, the widget only shows data for days for which data is stored in your database.
Provides full calendar functionality, including adding events.
See also Working with Calendars.
Lets you embed code for any YouTube video. | https://webhelp.episerver.com/Ektron/documentation/documentation/wwwroot/cms400/v9.10/Reference/Web/Widgets/Using_Widgets.htm | CC-MAIN-2020-34 | refinedweb | 3,225 | 50.73 |
Hi everyone
This should be one of our last assignments. Well, not sure, but most probably.
As the title implies, we're supposed to write a program using multi-dimensional arrays for a 6x6 matrix multiplication.
Here's what I've done. It's almost OK; except the most important part which is the final result. It's wrong.
What's the algorithm for matrix multiplication?
#include <iostream> #include <fstream> using namespace std; int main() { ifstream infile("MATRIX.dat"); ofstream outfile ("RESULT.dat"); int m1[6][6], m2[6][6], M[6][6]; int i=0, j=0; for (i=0; i<6; i++) { for (j=0; j<6; j++) {infile>>m1[i][j] >>m2[i][j]; M[i][j] = m1[0][j]*m2[i][0]; cout<<M[i][j]<<" "; } cout<<endl; } infile.close (); outfile.close (); return 0; }
BTW, Ancient Dragon, Our tutor said, I asked you to use End Of File function. Why you invent sth different when you can't solve it? Insist on it, and solve it that way. :@
WTF!!! :'( | https://www.daniweb.com/programming/software-development/threads/184945/matrix-multiplication-using-multi-dimensional-arrays | CC-MAIN-2017-09 | refinedweb | 173 | 68.77 |
While the .NET platform has much to offer developers, one of the frustrations is the "black box" nature of the framework itself.
While Delphi has always shipped with the source code to the entire VCL, Microsoft has chosen not to ship the source code to the framework - leaving developers attempting to do advanced "stuff" in dark, when the often patchy documentation is inadequate.
This article attempts to show how it is possible to create debug versions of the framework assemblies (as also of any managed assemblies such as those used by Visual Studio itself) and how one can set breakpoints in IL code and step from one's own code into that of the framework.
My frustration and quest started when I was attempting to create a user control with 3 panels. I wanted the end user to be able to drop the user control on a form and then add controls to individual panels using the Visual Studio designer. However, the Windows Forms designer does not offer this facility and is only able to place controls on the user control itself, not on its sub controls.
I felt that, knowing how Visual Studio and the framework work behind the scenes would help me understand how a custom designer that offers this functionality could be written.
Reflector .NET, an excellent free tool written by Lutz Roeder helped me view the decompiled source of the concerned assemblies (Microsoft.VisualStudio.dll and System.Design.dll) and figure out how designers are created and individual controls selected, etc.
However, the hierarchy is quite complex and the exact sequence in which the methods of the many classes are called and the contents of the variables was still obscure. Only stepping through the Visual Studio code would really help me understand its operations well.
Browsing the web did not yield any articles on this topic, with many submissions asserting that it was not possible to step into the framework classes.
However, perusal of the .NET Framework Developer's guide and a lot of luck helped me find one way to achieve the goal.
Since debugging design time behavior is more difficult and less well documented, the walkthrough demonstrates how to debug a custom designer for a user control. Of course, debugging run time behavior would not require the second instance of Visual Studio and one would just have to set a breakpoint in the source in the main instance of Visual Studio 2003 itself.
This requires the creation of an INI file with the same name as the process that is to be debugged - since we are going to be debugging Visual Studio in its design time mode, the name of the file will be devenv.ini and it will be located in the same directory as devenv.exe, usually \Program Files\Microsoft Visual Studio .NET 2003\Common7\IDE.
The INI file is simple and looks like this:
[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0
More information can be obtained from the .NET Framework Developer's Guide article Making an Image Easier to Debug.
If this is not done, any modified version of the assembly is immediately replaced with the original version by Windows' system file protection mechanism.
While there are some manual ways to disable system file protection (see Disable Windows File Protection on the excellent WinGuides site), the shareware wfpAdmin utility from Collake Software is very convenient and allows specific folders to be removed from Windows File Protection. The minimum required is to remove file protection for the <WINDIR>Microsoft.NET\Framework\v1.1.4322 directory and its sub directories.
The Microsoft Visual Studio assemblies are not under system file protection and do not require this measure.
This involves a round trip from retail DLL -> ILDASM to extracted resources and an IL file containing MSIL instructions -> ILASM to recompiled DLL with debug information embedded and accompanying PDB file created.
A pair of batch files is convenient to automate this process where multiple assemblies are to be converted.
The first batch file simply loops through the specified directory, and for each file that matches the passed file mask, calls the batch file that does the actual processing.
REM DEBUGMAKEALL.BAT
rem process each matching file
for %%a in (%1\%2) do DebugMake.bat "%%a"
:end
The second batch file first calls ILDASM to decompile the assembly and then calls ILASM to create a debug version of the assembly. The IL file is saved with the name of the assembly with .il suffixed.
REM DEBUGMAKE.BAT
rem delete any il file left over from a previous invocation,
else output will be appended to it and compilation will fail
del %1.il
rem call ILDASM to create the il file
ILDASM /OUT=%1.il /NOBAR /LINENUM /SOURCE %1
rem call ILASM to compile a debug version
of the dll as well as a pdb file
ILASM /DEBUG /DLL /QUIET /OUTPUT=%1 %1.IL
<Pathtothebatchfile>DebugMakeAll.bat . System.Design.dll
To create debug version of all System.*.dll, type:
<Pathtothebatchfile>DebugMakeAll.bat . System.*.dll
The end result of the round trip is an <assemblyname>.il file that contains MSIL, a recompiled assembly with the DebuggableAttribute set and a PDB file that contains debugging information. A number of ico and BMP files are also created and these can be deleted if so desired.
DebuggableAttribute
A small portion of the IL file for System.Design.dll looks like this:
.namespace System.Design
{
.class private auto ansi sealed beforefieldinit SRDescriptionAttribute
extends [System]System.ComponentModel.DescriptionAttribute
{
.custom instance void
[mscorlib]System.AttributeUsageAttribute::.ctor(valuetype
[mscorlib]System.AttributeTargets) =
( 01 00 FF 3F 00 00 00 00 ) // ...?....
.field private bool replaced
.method public hidebysig specialname rtspecialname
instance void .ctor(string description) cil managed
{
// Code size 15 (0xf)
.maxstack 8
IL_0000: ldarg.0
IL_0001: ldc.i4.0
IL_0002: stfld bool System.Design.SRDescriptionAttribute::replaced
IL_0007: ldarg.0
IL_0008: ldarg.1
IL_0009: call instance void
[System]System.ComponentModel.DescriptionAttribute::.ctor(string)
IL_000e: ret
} // end of method SRDescriptionAttribute::.ctor
To ensure none of these assemblies are loaded by any process, it is best to reboot the machine and open only a command prompt and possible Windows Explorer.
This is required only for Framework assemblies.
The GacUtil supplied with the framework does this well and has the advantage of being able to process multiple files from the command line, unlike the GUI extension to Windows Explorer.
Here again a pair of batch files do the trick.
REM GACInstallAll.bat
rem process each matching file
for %%a in (%1\%2) do GACInstall.bat "%%a"
:end
REM GACInstall.bat
rem call gacUtil, asking it to install the passed file
gacutil /i %1
<Pathtothebatchfile>GACInstallAll.bat . System.Design.dll
or
<Pathtothebatchfile>GACInstallAll.bat . System.*.dll
Start Visual Studio 2003 and open up the solution that is to be debugged. A sample solution TestApp.sln is included in the source files download.
This second instance of Visual Studio 2003 will be the debugger.
Name it as devenv.sln and save it in the same directory that devenv.exe is located.
This option is accessed via Tools -> Options.
The sample project devenv.sln in the sample files can be used for this. The paths to source and symbol files would of course have to be changed. In the walkthrough, a breakpoint has been set in the designer's overridden method Initialize - this will be called by Visual Studio when an instance of the designed control is created - either by dropping on to a form or by opening a form that contains the control in design view.
Initialize
Switch over the Visual Studio 2003 instance and initiate an action that should cause the breakpoint to be hit in the other instance.
If you are using the TestApp.sln, just open Form1.vb from the Solution Explorer.
As soon as Visual Studio creates the custom control, it will instantiate the custom designer and call its Initialize method. The debugger instance will stop execution at the placed breakpoint and come to the foreground automatically.
As the image shows, Visual Studio has loaded the symbols for a number of framework as well as Visual Studio assemblies.
The call stack also shows calls originating in framework and Visual Studio assemblies. The black (instead of grey) font indicates that symbols are available for the procedure higher up the stack. In case the stack says "<Non user Code>", right click and select "Show Non User Code" to at least view the names and parameters of methods for which debug versions of assemblies have not been created.
Double click on the calling method from microsoft.visual.studio.dll!DesignerHost.Add and the debugger will automatically load the decompiled IL file and show the return line. Though the language is unfamiliar, all the features of the debugger are available for this source file also, including setting breakpoints, stepping into and over, etc. Also, as the image shows, the locals displays all locals in the calling procedure and any variable can also be inspected using the watch window.
microsoft.visual.studio.dll!DesignerHost.Add
Of course the language is IL and not as easy to understand as VB or C#. With the help of a few articles (ILDASM is Your New Best Friend in Bugslayer, for example), it is however possible to get the gist of what the code is doing and the locals window is really useful in understanding what is happening.
This section is in response to queries by some users as to whether it is possible to debug core assemblies such as System.dll and Mscorlib.dll also.
This is relatively straight forward provided system file checking has been turned off.
The next screen shot shows the commands given and the results (@echo off has been added to all batch files to keep the display clear).
This is more complicated as Windows seems to load this DLL on normal startup. Preparing it requires:
The next screen shot shows the commands given and the results.
We can use the same sample application.
The Sub main of Form1.vb has been modified to instantiate a CodeSnippetStatement (resident in System.dll) and also a StringBuilder (resident in mscorlib.dll).
Sub main
CodeSnippetStatement
StringBuilder
The following screens show how it is possible to step through from the user code into the IL code for the constructors (called .ctor in MSIL) in each assembly.
.ctor
I wonder why Microsoft chose not to ship the .NET Framework with code - they did choose not to obfuscate it (thanks, Microsoft) but considering many decompilers (Salamander, a commercial program from RemoteSoft, appears to do the best job and can decompile entire assemblies) do a good job of decompiling it anyway, why not just release the code itself?
Also, stepping through the MSIL code is very slow (about 20 times slower than stepping through custom code) and there seems to be a memory leak in Visual Studio 2003 as the virtual memory used went up to 800 MB (on my 1 GB RAM machine) after 15 rounds on breaking and resuming. I wonder if there are any ways of getting around these problems?
The article mentioned the need to create an INI file in the same directory as the process being debugged and with the same root name. In response to a query by Scott, further testing showed that this step is not required if working with assemblies recompiled to include debug. | https://www.codeproject.com/articles/5227/debugging-net-framework-and-ms-visual-studio-manag?fid=24901&df=90&mpp=10&sort=position&spc=none&tid=4072324 | CC-MAIN-2016-50 | refinedweb | 1,894 | 55.03 |
* L. :) * Lars Marius Garshol > > The trouble is that it will be very hard (if at all possible) to do > this without doing damage to backwards compatibility. * Jack Jansen > >. This sounds like a viable alternative, even if it is just a limited form of support. However, you can do exactly the same (and much more) with architectural forms, which we already have support for via Geir Oves xmlarch module. Why do you want to use namespaces instead? Also, perhaps we should add to the DOM implementations some standard way of inserting a SAX ParserFilter (something we should perhaps also work on) between the parser and the DOM. This would enable us to do automate things like removing whitespace, joining blocks of PCDATA that were separated by buffer boundaries in the parser, doing architectural processing, (for those who want it) doing namespace filtering, filtering out XLinks for special processing etc etc --Lars M. | https://mail.python.org/pipermail/xml-sig/1998-November/000478.html | CC-MAIN-2017-22 | refinedweb | 151 | 54.36 |
LastPass clone - part 9: Password Generation UI
Hey guys, last week we made the
EditFormView, and in this one and the next, we will create the password generation functionality. We will start by creating the user interface as it will be a little bit involved. Let’s get started.
Preparations
You should have the source code link in your email inbox if you are subscribed, otherwise click here to subscribe and get the source code.
Password generator view
Start by creating a swift UI file in the Views folder named PasswordGeneratorView.swift. Now add the following properties to the top of the struct:
@State private var password = "" @Binding var generatedPassword: String @State private var lowercase = true @State private var uppercase = false @State private var specialCharacters = true @State private var digits = false @State private var length: CGFloat = 6
The above properties will be changed when we implement the password generation service which will implement the
ObservableObject protocol.
Then replace everything in body with the following:
VStack(spacing: 30) { Text("Generate password").font(.title).bold() Group { SharedTextfield(value: self.$password, header: "Generated Password", placeholder: "Your new password", errorMessage: "", showUnderline: false, onEditingChanged: { flag in }).padding() }.background(Color.background) .cornerRadius(10).neumorphic() VStack(spacing: 10){ Toggle(isOn: $lowercase) { Text("Lowercase") } Toggle(isOn: $uppercase) { Text("Uppercase") } Toggle(isOn: $specialCharacters) { Text("Special Characters") } Toggle(isOn: $digits) { Text("Digits") } Slider(value: $length, in: 1...30, step: 1) .accentColor(Color.accent) }.padding() .background(Color.background) .cornerRadius(10) .neumorphic() }.padding(.horizontal) .frame(maxHeight: .infinity) .background(Color.background) .edgesIgnoringSafeArea(.all)
The above code might seem enormous, but it’s way simpler than you think. Here is what it does:
- We create a text that will act as a header title for this view.
- We then use our
SharedTextfieldthat we created in one of previous parts to create a field that will display the generated password. Here we use a textfield rather than a simple text label because we want to give the user an option to change if he wishes. We then add some corner radius and made it neumorphic using our own neumorphic modifier.
- Next, we put four Toggles and a slider inside a vertical Stack container. Right now, those toggles have the built-in boring styles, but we will create an amazing and beautiful neumorphic toggle design later on. Next, we add a bunch of modifiers to style the second
VStack.
- Last, we also add modifiers to the outer
VStackto make it look beautiful. Remember the order of modifiers matters.
Currently the preview is broken because of the error that we have, so replace the
PasswordGeneratorView() in the preview with the following:
PasswordGenerationView(generatedPassword: .constant(""))
Resume the preview, and you should see something like this:
As you can see the screen looks good, but the toggles are boring, let’s create a custom toggle neumophic style next.
Custom toggle style
Let’s start by creating a new folder that we will name Styles, inside it add a swift file named CustomToggleStyle.swift containing the following code:
import SwiftUI struct CustomToggleStyle: ToggleStyle { func makeBody(configuration: Self.Configuration) -> some View { EmptyView() } }
In order for us to use the custom toggle style, it needs to conform to the
ToggleStyle protocol which in turn gives us the
makeBody function which is required to customise the toggle that uses this style. Let’s start with something simpler.
Replace the
EmptyView() with the following code:
Button(action: { configuration.isOn.toggle() }) { if configuration.isOn { Circle().frame(width: 50, height: 50).foregroundColor(Color.green) } else { Circle().frame(width: 50, height: 50).foregroundColor(Color.red) } }
Here is what we are doing here:
- The configuration has a
isOnwhich indicates whether the toggle is on or off. This field is a boolean binding property which means that we can also mutate it using the
toggle()function. We toggle the
isOnproperty in the button’s action to switch states.
- Next, we return a simple green
Circle()when
isOnis set to true or a red
Cirlce()otherwise. This the logic we will eventually use to toggle on and off the neumorphic design.
To test this, open the password generation view, and add the following modifier to each toggle:
.toggleStyle(CustomToggleStyle())
After making that change, you will need to resume the preview and click the play button to run the it, and try clicking the toggles… You will notice that there are switching from red to green and vice-versa.
Here what we got now:
Before we start making the neumorphism design, let’s first add the label that was there before. Now replace the entire block of code in the
makeBody function with the following:
HStack { configuration.label Spacer() Button(action: { configuration.isOn.toggle() }) { if configuration.isOn { Circle().frame(width: 50, height: 50).foregroundColor(Color.green) } else { Circle().frame(width: 50, height: 50).foregroundColor(Color.red) } } }
What we’ve done here is we put the label and the existing button inside and
HStack container. You might be wondering where’s the label coming from, well it’s coming from the configuration. Go back to the
PasswordGeneratorView , and resume the preview again.
As you can see in the above preview, the label is the
Text you passed in the
Toggle. Now let’s make that neumorphic design now, shall we?
We will start with the simple state which is when the toggle is off. So go back in the
CustomToggleStyle, and replace the else content which is the red circle with the following:
Circle() .frame(width: size, height: size) .foregroundColor(Color.background) Image(systemName: "power") .resizable() .frame(width: 20, height: 20) .foregroundColor(Color.gray) }.cornerRadius(size / 2).neumorphic()
We are missing the size property hence the errors, so add the following to the top:
var onColor: Color = .background var offColor: Color = .darkerAccent var size: CGFloat = 40
Now go back to
PasswordGeneratorView, resume the preview, and you should see this:
As you can see in the above picture, the off state will be convex which means we want to make the on state concave. We can create concave neumophic designs using inner shadow, but there’s a problem, swift UI does not have inner shadows, so we will try to create our own.
There are a bunch of ways one can create this concave neumophic design, but I prefer the one made by Paul Hudson from HackingWithSwift. So let’s start making it.
In the Extensions folder, create a swift file named LinearGradientExt containing the following block of code:
extension LinearGradient { init(_ colors: Color...) { self.init(gradient: Gradient(colors: colors), startPoint: .topLeading, endPoint: .bottomTrailing) } }
Here we are just extending the
LinearGradient with yet another initialiser, this one will just take in a gradient object, the start and end point will be the same for what we are trying to do, that’s why there’s no need to pass them in as parameters.
Next, let’s extend the
View protocol as well. In the same Extensions folder, add another swift file named ViewExt containing the following code:
func innerShadow(radius: CGFloat, colors: (dark: Color, light:Color)) -> some View { self.overlay( Circle() .stroke(colors.dark, lineWidth: 4) .blur(radius: radius) .offset(x: radius, y: radius) .mask(Circle().fill(LinearGradient(colors.dark, Color.clear))) ) }
Here is what that code he is doing:
- We add an overlay to the view that will use this modifier.
- Inside the overlay, we add a circle
- Then stroke it with the dark color used for the darker side of our neumorphic design.
- Next, we add a
blurmodifier with the same radius as the offset value we will add next.
- Next, we move the blurred view a little to the bottom right.
- We then mask the view with yet another circle filled with a linear gradient.
That was one side which is the top left side, we are missing the bottom right.
Add another overlay directly after first one with the following code:
.overlay( Circle() .stroke(colors.light, lineWidth: 8) .blur(radius: radius) .offset(x: -radius, y: -radius) .mask(Circle().fill(LinearGradient(Color.clear, colors.dark))) )
The above code will create the same overlay as the one above but in the opposite side. This one goes from bottom right to top left whereas the first one went from top left to bottom right.
Next, in the
CustomToggleStyle, replace the following code:
Circle().frame(width: 50, height: 50).foregroundColor(Color.green)
With the following:
ZStack { Circle() .fill(Color.background) .frame(width: size, height: size) Image(systemName: "power") .resizable() .frame(width: 20, height: 20) .foregroundColor(Color.accent) }
Now, open the
PasswordGeneratorView and resume the preview, then run it. You should see something like this:
As you can see, there’s still no concave neumorphic design on the toggle when it’s selected. Well that’s because we still haven’t added the the design we’ve created. So, add the following after the
.frame(width: size, height: size) modifier in the
CustomToggleStyle:
.innerShadow(radius: 1, colors: (Color.darkShadow, Color.lightShadow))
Now if you resume and run the preview, you should be able to click the toggle and see your concave neumorphic design.
And with this, we are done with this part . In the next one, we will finish the password generation process by creating its brain. Please feel free to share this article, subscribe if you haven’t done so already. Stay tuned for more and happy coding.
Support my work by becoming a patreon | https://liquidcoder.com/lastpass-redesigned-clone-part-9/ | CC-MAIN-2020-50 | refinedweb | 1,553 | 56.76 |
Suppose we are in charge of building a library system that monitors and queries various operations at the library. We are now asked to implement three different commands that perform the following −
By using command 1, we can record the insertion of a book with y pages at shelf x.
By using command 2, we can print the page number of the y-th book at shelf x.
By using command 3, we can print the number of books on shelf x.
The commands are given to us as a 2D array in this format {command type, x, y}. If there is no y value, the value will default at 0. We print the results of the given commands.
So, if the input is like number of shelves = 4, queries = 4, input_arr = {{1, 3, 23}, {1, 4, 128}, {2, 3, 0}, {3, 4, 0}}; then the output will be
23 1
Command 1 inserts a book with 23 pages on shelf 3. Command 2 inserts a book with 128 pages on shelf 4. Command 3 prints the page number of book 0 on shelf 3. Command 4 prints the number of books on shelf 3.
To solve this, we will follow these steps −
Let us see the following implementation to get better understanding −
#include <stdio.h> #include <stdlib.h> void solve(int s, int q, int q_array[][3]) { int* b; int** p; b = (int*)malloc(sizeof(int)*s); p = (int**)malloc(sizeof(int*)*s); for(int i = 0; i < s; i++) { b[i] = 0; p[i] = (int*)malloc(sizeof(int)); } int loopCount; for(loopCount = 0; loopCount < q; loopCount++) { int qtype; qtype = q_array[loopCount][0]; if (qtype == 1) { int x, y; x = q_array[loopCount][1]; y = q_array[loopCount][2]; b[x] += 1; p[x] = realloc(p[x], b[x]*sizeof(int)); p[x][b[x] - 1] = y; } else if (qtype == 2) { int x, y; x = q_array[loopCount][1]; y = q_array[loopCount][2]; printf("%d\n", p[x][y]); } else { int x; x = q_array[loopCount][1]; printf("%d\n", b[x]); } } if (b) free(b); for (int i = 0; i < s; i++) if (p[i]) free(p[i]); if (p) free(p); } int main() { int input_arr[][3] = {{1, 3, 23}, {1, 4, 128}, {2, 3, 0}, {3, 4, 0}}; solve(4, 4, input_arr); }
int input_arr[][3] = {{1, 3, 23}, {1, 4, 128}, {2, 3, 0}, {3, 4, 0}}; solve(4, 4, input_arr);
23 1 | https://www.tutorialspoint.com/c-program-to-demonstrate-usage-of-variable-length-arrays | CC-MAIN-2022-21 | refinedweb | 402 | 71.89 |
hello! First post
I must create two functions, one that asks user for a character and determines whether a character is a vowel or isn't by returning true or false.
The second is to call the first function and ask the user to input a word.
This is what I have so far:
vowel = ["A","E","I","O","U","a","e","i","o","u"]
vowelinpt = input("Please enter a character: ")
def isVowel(x):
if x in vowel:
return True
else: return False
a = isVowel(vowelinpt)
print(a)
def countvowel(b)
count 0
for a in b:
if isVowel(a) == True
count +=1
return count
x = input("Please input a WORD: ")
y = countvowel(x)
print(y)
Any ideas on how to actually make it count the correct number of vowels? | http://forums.devshed.com/beginner-programming/949785-vowel-counter-python-3-a-last-post.html | CC-MAIN-2014-23 | refinedweb | 130 | 50.84 |
Difference between revisions of "Forth"
Revision as of 12:21, 9 December 2014
Forth on Raspberry Pi] and here [5]
Contents gb_common.h and gb_spi.h right after include <math.h>:
#ifdef MATH #include <math.h> #endif #ifdef GERTBOARD #include "gb_common.h" #include "gb_spi.h" #endif
Then add your own word definitions at the end of the section with word definitions, around line 2704, right after #endif /* COMPILERW */ in atlast.c:
#ifdef GERTBOARD prim P_gert_io() // state --- { // Setup and restore I/O revision of Raspberry Pi I/O port int rev; Sl(1); if (S0 == 21) { // Find out which revision of Raspberry Pi.c, gb_common.h, gb_spi.c and gb_spi.h from the gertboard_sw directory to atlast-1.2 directory.
Add gb_common.o and gb_spi.o to the file Makefile in atlast-1.2.
ATLOBJ = atlast.o gb_common.o gb_spi Raspberry Pi. I/O port negative edge unsigned int i; clock_t start, end; start = clock(); Sl(1); if (S0 == 21) { // Find out which rev of Raspberry Pi [6] three.
Add this primary word to atlast.c. Put it right after the function prim P_quit()
prim P_sleep() // microsec --- { Sl(1); usleep(S0); Pop; }
Remember to wire up the Gertboard according to the information you get when you run the command sudo ./leds in the Gertboard demo directory. Now we can do:
sudo ./atlast : use 1 ; : free 0 ; : leds 25 24 23 22 21 18 17 11 10 9 8 7 ; : on 12 0 do 1 setio 500000 sleep loop ; : off 12 0 do 0 setio 500000 sleep loop ;
Now define the word leddemo:
: leddemo use gertboard leds on leds off leds on leds off free gertboard ; ( and try it ) leddemo
Reflective Sensor
The edge trigger word can be used for other sensors as well, I tried it with this Reflective Sensor [7] and it works right out of the box. Connect VCC to a digital output and the "Out" to a digital input. Gnd to Gnd. I found it more convenient to look for positive edges so here is the word for that:
prim P_gert_getposedge() // channel wait_microsec --- clocks clocks_per_second { // Get a digital I/O port positive edge unsigned int i; // wait is duration until port is low again clock_t start, end; start = clock(); Sl(1); if (S1 == 21) S1 = 27; INP_GPIO(S1); i = 0; while(!(GPIO_IN0 & (1 << S1))) { i++; if(i > 100000000) break; } usleep(S0); end = clock(); S1 = (double) (end - start); S0 = CLOCKS_PER_SEC; }
Define a word COUNT to test the setup (output on the sensor is connected to Buf2 on Gertboard and VCC is connected to Buf3):
1 gertboard 23 1 setio : count 1 100 1 do 24 100000 getposedge 2drop 1 + .s cr loop ; count
More Gertboard Words
Here are some more words for reading and writing the analog input and output (DtoA and AtoD). First GETATOD, it takes a channel on the stack and leaves a voltage.
prim P_gert_getatod() // chan --- voltage { // V will be in range 0-1023 (0-3.3 V) Sl(1); INP_GPIO(8); SET_GPIO_ALT(8,0); INP_GPIO(9); SET_GPIO_ALT(9,0); INP_GPIO(10); SET_GPIO_ALT(10,0); INP_GPIO(11); SET_GPIO_ALT(11,0); setup_spi(); S0 = read_adc(S0); }
And SETDTOA expects a channel and a voltage on the stack and sets the output accordingly.
prim P_gert_setdtoa() // chan volt -- { // V between 0 and 255 Sl(2); INP_GPIO(7); SET_GPIO_ALT(7,0); INP_GPIO(9); SET_GPIO_ALT(9,0); INP_GPIO(10); SET_GPIO_ALT(10,0); INP_GPIO(11); SET_GPIO_ALT(11,0); setup_spi(); write_dac(S1, S0*16); // V_out = S0*16 / 256 * 2.048 Pop2; }
If you would like to read a burst of values from an analog input, here's how to do it. The word GETBURST expects a channel, a delay in microseconds between samples, and the number of samples you want. It leaves the samples on stack. You may want to increase the stack length constant at line 138 like this: atl_int atl_stklen = 1000; /* Evaluation stack length */
prim P_gert_burst() // chan delay samples --- n1, n2, nn { unsigned int i, num, chan, delay; Sl(3); So(S0 - 3); INP_GPIO(8); SET_GPIO_ALT(8,0); INP_GPIO(9); SET_GPIO_ALT(9,0); INP_GPIO(10); SET_GPIO_ALT(10,0); INP_GPIO(11); SET_GPIO_ALT(11,0); setup_spi(); num = S0; delay = S1; chan = S2; Pop; Pop; Pop; for(i = 0; i < num; i++){ Push = (stackitem) read_adc(chan); usleep(delay); } }
And finally, add the new words to the list of primitives:
{"0GETATOD", P_gert_getatod}, {"0GETBURST", P_gert_burst}, {"0SETDTOA", P_gert_setdtoa},
If you want a simple way to inspect the values captured with the getburst word, one way is to let ATLAST Forth create an HTML page. Here is the code for that:
FILE fd : printstring ( s1 -- ) fd fputs drop ; : printreal ( f1 -- ) " " dup 2swap "%f" 4 roll fstrform dup " " swap strcat printstring ; : printint ( n -- ) " " dup -rot "%i" swap strform dup " " swap strcat printstring ; : htmlcanvas ( s1 -- ) 10 fd fopen drop "<html><head><title>Burstdemo</title></head><body>" printstring "<canvas id='burst' height='1023' width='1000' style='position:absolute; left:0; top:0; z-index:0;'></canvas>" printstring "<script> var canvas = document.getElementById('burst'); var ctx = canvas.getContext('2d'); ctx.lineWidth=1; " printstring "ctx.strokeStyle ='#000000'; ctx.beginPath(); " printstring 0 100000 100 getburst 100 0 do "ctx.lineTo(" printstring 10 i * printint "," printstring printint "); " printstring loop "ctx.stroke(); </script></body></html>" printstring fd fclose ;
The word htmlcanvas takes a filename on stack. It runs getburst and writes the result as a line diagram in a canvas element.
"test.html" htmlcanvas .( "ok" cr
Tellstick
In order to have a safe way to handle mains switching, a Tellstick [8] is an excellent choice. It can control many consumer brands of wireless controlled sockets and dimmers. Implementing some words in Forth for switching mains on and off leaves the safe voltages for sensing through the Gertboard interface.
Download and install the development kit from Telldus:
wget tar -xzvf telldus-core-2.1.1.tar.gz sudo apt-get install libconfuse-dev sudo apt-get install libftdi-dev sudo apt-get install cmake cd telldus-core-2.1.1 cmake . make sudo make install
When that installation is done you can copy two files over to the ATLAST Forth directory:
cd /home/pi/atlast-1.2 cp ../telldus-core-2.1.1/client/telldus-core.h . cp ../telldus-core-2.1.1/client/libtelldus-core.so .
Then add the lib libtelldus-core.so to the Makefile, at the top, right after -lm:
LIBRARIES = -lm libtelldus-core.so
In atlast.c, add define TELLDUS right before #define MATH;
#define TELLDUS /* Tellstick functions */
In atlast.c, include the telldus_core.h file:
#ifdef TELLDUS #include "telldus_core.h" #endif
In atlast.c, right after #endif /* COMPILERW */ add the following Tellstick primitives:
#ifdef TELLDUS prim P_tell_init() // --- { tdInit(); } prim P_tell_close() // --- { tdClose(); } prim P_tell_on() // device --- { Sl(1); tdTurnOn(S0); Pop; } prim P_tell_off() // device --- { Sl(1); tdTurnOff(S0); Pop; } #endif /* TELLDUS */
And finally, add the new words to the list of primitives:
#ifdef TELLDUS {"0TELLON", P_tell_on}, {"0TELLOFF", P_tell_off}, {"0TELLINIT", P_tell_init}, {"0TELLCLOSE", P_tell_close}, #endif /* TELLDUS */
Save atlast.c and re-run make to compile and you can try your new words. Remember to configure /etc/tellstick.conf according to the brands of hardware you are using, read the Tellstick documentation. First, start telldusd and then ATLAST.
telldusd ./atlast
In the ATLAST console, type your new Forth words:
tellinit 1 tellon 1 telloff tellclose
Tellstick duo
If you are fortunate enough to have a Tellstick Duo [9], you probably want to listen for incoming 433 MHz signals so here we go. But first follow the Tellstick instructions above, and test that they work as expected.
Add this primitive word to atlast.c. We will use it to register a callback. (A callback is an ordinary function, nothing strange, but it will be called not by you but by the Tellstick Duo when there is data coming in).
prim P_tell_rawevent() // string --- { Sl(1); tdRegisterRawDeviceEvent( rawcallback, S0); Pop; }
Then add the callback function. This is the function that will be called. Add it around line 295, just above the line /* TOKEN -- Scan a token and return its type. */
#ifdef TELLDUS static void rawcallback(const char *data, int controllerId, int callbackId, void *context) { dictword *dw; V strcpy((char *)context, data); dw = atl_lookup("rawevent"); atl_exec(dw); } #endif
Finally, add the new primitive word to the list of primitives
{"0TELLRAWCALLBACK", P_tell_rawevent},
Now, to try it out, save and re-compile with make and then start telldusd and ./atlast. Type the following in the ATLAST console:
127 string raw : rawevent raw type cr ; raw tellrawcallback
The word rawevent is executed by the callback function. You can re-define it, and it will use the latest definition. With this definition it will just type the incoming data in the console. If you take a Nexa remote control and press a few buttons, the incoming data should print as a string, for example,
"class:command;protocol:waveman;model:codeswitch;house:A;unit:1;method:turnoff;"
You can even put the three lines of Forth code in a file, for example, telldus.atl and start ATLAST like this
./atlast -i./telldus.atl
Websockets
With all these sensor signals coming in from Gertboard and Tellstick Duo it would be nice to be able to connect to a web server and upload data in real time. Websockets is a new way to create a full duplex communication pipe between a websocket client and a web server. Download and install noPoll
wget tar -xzvf nopoll-0.2.7.b164.tar.gz cd nopoll-0.2.7.b164/ ./configure make sudo make install
When that installation is done you can copy all nopoll header files over to the ATLAST Forth directory:
cd ~/atlast-1.2 cp ../nopoll-0.2.7.b164/*.h .
I like to have all application header files in the application directory, so edit all the nopoll header files so that all #include "nopoll_xxx.h" have filenames between " " instead of < >.
Then add the lib nopoll to the Makefile so that the lines LIBRARIES and INCLUDE looks like this:
LIBRARIES = -lrt -lm -L/usr/local/lib libtelldus-core.so -lnopoll INCLUDE = -Wl,-rpath -Wl,/usr/local/lib
In atlast.c, add #define WEBSOCKETS right before #define MATH;
#define WEBSOCKETS /* Websockets functions */
In atlast.c, include the nopoll.h file:
#ifdef WEBSOCKETS #include "nopoll.h" #endif
In atlast.c, right before /* TOKEN -- Scan a token and return its type. */ add the following nopoll globals:
#ifdef WEBSOCKETS noPollCtx *ctx; noPollConn * conn; #endif
In atlast.c, right before /* Table of primitive words */ add the following nopoll primitives:
#ifdef WEBSOCKETS prim P_ws_connect() // "localhost" "1234" "/wsh" -- bool { Sl(3); Hpc(S0); Hpc(S1); Hpc(S2); ctx = nopoll_ctx_new(); conn = nopoll_conn_new (ctx, (char *) S2, (char *) S1, NULL, (char *) S0, NULL, NULL); nopoll_conn_wait_until_connection_ready (conn, 5); Pop; Pop; S0 = (nopoll_conn_is_ok (conn)) ? Truth : Falsity; } prim P_ws_close() // -- { nopoll_conn_close (conn); nopoll_ctx_unref(ctx); } prim P_ws_send() // string -- length { Sl(1); int bytes_written, length; Hpc(S0); length = strlen((char *) S0); bytes_written = nopoll_conn_send_text(conn, (char *) S0, length); if(bytes_written != length) bytes_written = nopoll_conn_flush_writes(conn, 2000000, bytes_written); S0 = bytes_written; } #endif /* WEBSOCKETS */
And finally, add the new words to the list of primitives:
#ifdef WEBSOCKETS {"0WSCONNECT", P_ws_connect}, {"0WSSEND", P_ws_send}, {"0WSCLOSE", P_ws_close}, #endif /* WEBSOCKETS */
Save atlast.c and re-run make to compile and you can try your new words.
make clean make
To connect to the echo server:
./atlast "echo.websocket.org" "80" "/" wsconnect . "test" wssend . "send some more" wssend . wsclose
The dot after wsconnect will print out a -1 if connected, 0 otherwise. The dot after wssend will print the number of characters sent. | https://elinux.org/index.php?title=Forth&diff=prev&oldid=364130 | CC-MAIN-2020-10 | refinedweb | 1,887 | 62.48 |
You must understand your data in order to get the best results.
In this post you will discover 7 recipes that you can use in Python to learn more about your machine learning data.
Let’s get started.
- Update March/2018: Added alternate link to download the dataset as the original appears to have been taken down.
Understand Your Machine Learning Data With Descriptive Statistics in Python
Photo by passer-by, some rights reserved.
Python Recipes To Understand Your Machine Learning Data
This section lists 7 recipes that you can use to better understand your machine learning data.
Each recipe is demonstrated by loading the Pima Indians Diabetes classification dataset from the UCI Machine Learning repository (update: download from here).
Open your python interactive environment and try each recipe out in turn.
Need help with Machine Learning in Python?
Take my free 2-week email course and discover data prep, algorithms and more (with code).
Click to sign-up now and also get a free PDF Ebook version of the course.
Start Your FREE Mini-Course Now!
1. Peek at Your Data
There is no substitute for looking at the raw data.
Looking at the raw data can reveal insights that you cannot get any other way. It can also plant seeds that may later grow into ideas on how to better preprocess and handle the data for machine learning tasks.
You can review the first 20 rows of your data using the head() function on the Pandas DataFrame.
You can see that the first column lists the row number, which is handy for referencing a specific observation.
2. Dimensions of Your Data
You must have a very good handle on how much data you have, both in terms of rows and columns.
- Too many rows and algorithms may take too long to train. Too few and perhaps you do not have enough data to train the algorithms.
- Too many features and some algorithms can be distracted or suffer poor performance due to the curse of dimensionality.
You can review the shape and size of your dataset by printing the shape property on the Pandas DataFrame.
The results are listed in rows then columns. You can see that the dataset has 768 rows and 9 columns.
3. Data Type For Each Attribute
The type of each attribute is important.
Strings may need to be converted to floating point values or integers to represent categorical or ordinal values.
You can get an idea of the types of attributes by peeking at the raw data, as above. You can also list the data types used by the DataFrame to characterize each attribute using the dtypes property.
You can see that most of the attributes are integers and that mass and pedi are floating point values.
4. Descriptive Statistics
Descriptive statistics can give you great insight into the shape of each attribute.
Often you can create more summaries than you have time to review. The describe() function on the Pandas DataFrame lists 8 statistical properties of each attribute:
- Count
- Mean
- Standard Devaition
- Minimum Value
- 25th Percentile
- 50th Percentile (Median)
- 75th Percentile
- Maximum Value
You can see that you do get a lot of data. You will note some calls to pandas.set_option() in the recipe to change the precision of the numbers and the preferred width of the output. This is to make it more readable for this example.
When describing your data this way, it is worth taking some time and reviewing observations from the results. This might include the presence of “NA” values for missing data or surprising distributions for attributes.
5. Class Distribution (Classification Only)
On classification problems you need to know how balanced the class values are.
Highly imbalanced problems (a lot more observations for one class than another) are common and may need special handling in the data preparation stage of your project.
You can quickly get an idea of the distribution of the class attribute in Pandas.
You can see that there are nearly double the number of observations with class 0 (no onset of diabetes) than there are with class 1 (onset of diabetes).
6. Correlation Between Attributes
Correlation refers to the relationship between two variables and how they may or may not change together.
The most common method for calculating correlation is Pearson’s Correlation Coefficient, that assumes a normal distribution of the attributes involved. A correlation of -1 or 1 shows a full negative or positive correlation respectively. Whereas a value of 0 shows no correlation at all.
Some machine learning algorithms like linear and logistic regression can suffer poor performance if there are highly correlated attributes in your dataset. As such, it is a good idea to review all of the pair-wise correlations of the attributes in your dataset. You can use the corr() function on the Pandas DataFrame to calculate a correlation matrix.
The matrix lists all attributes across the top and down the side, to give correlation between all pairs of attributes (twice, because the matrix is symmetrical). You can see the diagonal line through the matrix from the top left to bottom right corners of the matrix shows perfect correlation of each attribute with itself.
7. Skew of Univariate Distributions
Skew refers to a distribution that is assumed Gaussian (normal or bell curve) that is shifted or squashed in one direction or another.
Many machine learning algorithms assume a Gaussian distribution. Knowing that an attribute has a skew may allow you to perform data preparation to correct the skew and later improve the accuracy of your models.
You can calculate the skew of each attribute using the skew() function on the Pandas DataFrame.
The skew result show a positive (right) or negative (left) skew. Values closer to zero show less skew.
More Recipes
This was just a selection of the most useful summaries and descriptive statistics that you can use on your machine learning data for classification and regression.
There are many other statistics that you could calculate.
Is there a specific statistic that you like to calculate and review when you start working on a new data set? Leave a comment and let me know. note pad.
Summary
In this post you discovered the importance of describing your dataset before you start work on your machine learning project.
You discovered 7 different ways to summarize your dataset using Python and Pandas:
- Peek At Your Data
- Dimensions of Your Data
- Data Types
- Class Distribution
- Data Summary
- Correlations
- Skewness
Action Step
- Open your Python interactive environment.
- Type or copy-and-paste each recipe and see how it works.
- Let me know how you go in the comments.
Do you have any questions about Python, Pandas or the recipes in this post? Leave a comment and ask your question, I will do my best to answer it..
Excellent write-up. I definitely appreciate this site. Continue the good
work!
Thanks M. Willson, I’m glad you found it useful.
Hi Jason .
Thankyou for the explanation .
it is very clear and well understood many things from the article .
Perhaps i have a doubt in understand the purpose of each step as like what message it is conveying with respect to the data. for example ” Descriptive Statistics” give some many output with respect to the data but what do i understand from that.
Seeking for some clarification on the same.
We are understanding the univariate distribution of each feature.
This can help in data preparation and algorithm selection.
Hey Jason, amazingly concise and effective post, as always.
Any suggestions on how to do exploratory analysis with binary features?
Please don’t stop your work, it’s immensely helpful. 🙂
Thanks, glad to hear it Neeraj.
A good start with binary and categorical variables is to look at proportions by level.
“Is there a specific statistic that you like to calculate and review when you start working on a new data set?”
> in addition to the descriptive stats given by .describe(), I like to calculate:
– median
– 95%-ile
oops. 50th percentile is median. already given by “.describe()”
Nice!
Hi, I want to thank you for your useful and well explained articles, specially this one.. But I would like to ask you about imbalanced data and resampling effectiveness if you don’t mind.
Well, as you have stated in your other article, in some cases the imbalance is so natural because like in my case where I have mainly 2 classes with this data distribution (A: 6418, B: 81) the class B is a rare phenomena that we are looking to understand its reasons.
I’m more interested in class B, but I’m afraid that resampling changed a lot in my dada since I noticed a change in correlation after applying an undersampling for class A and oversampling for B to get 1000:1000 samples in final dataset.
total correlation of my 6 features with the target feature passed from 0.14 to 0.67 which I find somehow artificial and not realistic.
If you can help me understand this, I will be so grateful. And thanks for the wonderful website 🙂
I have some ideas for handling imbalanced data in this post that might help as a start:
I am building a linear regression model and using the dataset which is a CSV file of some numbers of 13 columns and 5299 rows. I am following your tutorials and applied the skew function and correlation but it showing an empty array and also in the data types section,it shows object for whole 13 columns. Please help
Perhaps try posting your code and data to stackoverflow?
Hi Jason!
Loved the blog post! You structured it very well. Thanks for all the guidance.
I have a small question. As you mentioned in the data types section, many Machine Learning algorithms take numerical attributes as input. Any specific reason for why they do that??
Not really, more of an engineering reason so the same APIs can be used for many algorithms.
Many algorithms don’t care about data types in principle, e.g. decision trees, knn, etc.
Dear Sir,
I am a new member to Python. I am lucky to find out this website and start learning with this. I just have a small practice myself based on your above lesson as follow (I try to load the data with numpy.loadtxt):
import pandas
import numpy
names = [‘preg’, ‘plas’, ‘pres’, ‘skin’, ‘test’, ‘mass’, ‘pedi’, ‘age’, ‘class’]
dataset = numpy.loadtxt(“pima-indians-diabetes.csv”, delimiter=”,”)
description = dataset.dtypes()
print(description)
However, there is an error and I don’t know how to correct.
“AttributeError: ‘numpy.ndarray’ object has no attribute ‘dtypes’ ”
Please help. Thank you.
Looks like you have mixed up a Numpy data loading example with a pandas exploration example.
You can laod the data as a DataFrame via pandas or change your loaded numpy array to a dataframe in order to call dtypes()
Thank you so much. You are right. The problem is solved now.
Glad to hear it.
U r Awesome… Thanks a lot
Thanks! | https://machinelearningmastery.com/understand-machine-learning-data-descriptive-statistics-python/ | CC-MAIN-2018-51 | refinedweb | 1,840 | 64.51 |
The key is to define namespace, why is also different behavior from ironpython: Python 2.7.9 |Continuum Analytics, Inc.| (default, Dec 12 2014, 14:56:54) [MSC v .1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. Anaconda is brought to you by Continuum Analytics. Please check out: and >>> import clr Attempting to load Python.Runtime using standard binding rules... Attempting to load Python.Runtime from: 'C:\Python\Python27\lib\site-packages\Py thon.Runtime.dll'... >>> import System >>> class ViewModel(System.ComponentModel.INotifyPropertyChanged): ... > I am not sure what is the reason. > > I remember a post last year saying the the interface inheritance is > implemented in a development brand and not integrated into main branch. not > sure if it is the reason. > > I am using python 2.1.0.dev1. > > thanks > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/pythondotnet/2016-March/001732.html | CC-MAIN-2020-10 | refinedweb | 145 | 54.59 |
Parsing builds trees over sentences, according to a phrase structure grammar. Now, all the examples we gave earlier.
>>> from nltk.corpus import treebank >>> t = treebank.parsed_sents('wsj_0001.mrg')[0] >>> print)))) (. .))
We can use this data to help develop a grammar. For example, the
program in Example 8-18 uses a simple
filter to find verbs that take sentential complements. Assuming we
already have a production of the form
VP
-> SV S, this information enables us to identify
particular verbs that would be included in the expansion of
SV.
Example 8-18. Searching a treebank to find sentential complements.
def filter(tree): child_nodes = [child.node for child in tree if isinstance(child, nltk.Tree)] return (tree.node == 'VP') and ('S' in child_nodes)
>>> from nltk.corpus import treebank >>> [subtree for tree in treebank.parsed_sents() ...
No credit card required | https://www.safaribooksonline.com/library/view/natural-language-processing/9780596803346/ch08s06.html | CC-MAIN-2017-09 | refinedweb | 136 | 61.93 |
If you think about it for a moment the way that you ensure that an object has a particular method is to insist that it implements an interface that contains that method.
If you recall an interface is a bit like a class with definitions but no implementation. If another class implements the interface then it has to provide code for everything that the interface defines. One extra detail is that you can access the interface methods on the class by casting it to the interface.
This is a subtle point so let's have an example. If you already know enough about interfaces and how they relate to type then skip the next small section.
If we define an interface:
public interface myInterface{ public void myMethod(int a);}
Then any class that implements the interface has to implement myMethod with the same signature and return type.
Now if we have Class2 that inherits from Class1 we can implement the interface:
public class Class2 extends Class1 implements myInterface{ public void myMethod(int a){ return 2*a; }}
public class Class2 extends Class1 implements myInterface{
public void myMethod(int a){ return 2*a; }}
Obviously you can now call the interface method on an object of type Class2:
Class2 class2=new class2();int b=class2.myMethod(1);
However you can also call the interface methods on an object of type Class1 that happens to be at run time an object of Class2 as long as you cast it to the interface. That is:
Class2 class2=new class2();int b=class2.myMethod(1);Class1 class1=class2;int c=(myInterface) class1.myMethod(1);
An interface allows you to call a method defined in the interface by casting an object that might have been passed to you as a base type to the interface type and not to a specific child class.
Why is this useful?
If we insist that any Activity that uses the Fragment implements an interface with the callback defined then we can cast the Activity to the interface and this works no matter what the type of the Activity is.
So our plan now is to:
1) Define an interface within the Fragment that defines the callbacks that the Activity has to implement to use the Fragment.
2) Make the Activity implement the interface and hence the callbacks
3) Use Fragment onAttach to get the callbacks and store them for later use.
Time to see how the template does this.
The Fragment template generates the following code for the interface:
public interface OnFragmentInteractionListener { public void onFragmentInteraction(View view);}
You are free to change the names of the interface and the method or methods it defines. You can define as many callback methods as you need to signal to the Activity that the Fragment needs attention. Also you can include whatever parameters you like to pass data from the Fragment to the Activity.
In this case we have modified the generated code to pass the Button as a View object to follow the pattern of the event handler used in the earlier examples.
The next step is to save a reference to the interface in a private field for later use each time the Fragment is attached to a new Activity:
First we need a private field to store the callback object in:
private OnFragmentInteractionListener mListener;
and we can now define the onAttach event handler:
@Overridepublic void onAttach(Activity activity) { super.onAttach(activity); try { mListener = (OnFragmentInteractionListener) activity; } catch (ClassCastException e) { throw new ClassCastException(activity.toString() + " must implement OnFragmentInteractionListener"); }}
This simply casts the Activity to the interface type we have created. In theory every Activity that uses the Fragment should implement the interface but we need to check that it has - hence the try-catch.
One small detail - it is possible for the Activity to remove the Fragment without the Fragment being destroyed we have to remove the reference to the callback interface onDetach:
@Overridepublic void onDetach() { super.onDetach(); mListener = null;}
That's all we need to set up the persistence of the callback.
Now as long as the Activity implements the interface the callback will be hooked up and working whenever the Fragment is attached to an Activity. It can be destroyed and created by the system as many times as necessary and the callback will still work.
To use the callback all the Fragment has to do is call it:
mListener.onFragmentInteraction(v);
where v is the View object to pass to the Activity.
For example; if you want to activate the callback when the Button is clicked you would write in the onCreateView method:
Button bt=(Button) v.findViewById(R.id.button);bt.setOnClickListener( new View.OnClickListener() { @Override public void onClick(View view) { mListener.onFragmentInteraction(view); }});
This creates an event handler for the Button which delegates what has to happen to the interface by calling onFragmentInteraction. Notice that you can't hook up the callback directly to the Button because it is the wrong type. In general it is better to create custom callbacks and use event handlers within the Fragment to call them.
This completes all the code needed to get the Fragment to use a callback provided by any Activity that wants to use it. The final part of the whole thing is getting the Activity to implement the interface.
public class MainActivity extends Activity implements OnFragmentInteractionListener{
You can get Android Studio to implement the interface for you by right clicking on its name and selecting Generate,Implement methods. In this case the following method is generated:
@Overridepublic void onFragmentInteraction(View view) {}
This can now be filled out to do whatever you want. In our case transfer the text from the Button to the TextView. This isn't a good use of the callback because, as we have already discussed anything to do with the UI provided by the Fragment should be handled by the Fragment. It does make an easy example however:
@Overridepublic void onFragmentInteraction(View view) { TextView tv = (TextView) findViewById(R.id.textView); Button bt = (Button) view; tv.setText(bt.getText());}
If you run this program you will discover that when you click the Button the text is transferred and this still works after your rotate the device.
This has been a long description and a short summery of exactly what happens might make it all seem simpler.
What you have to do to implement Fragment callbacks is: | http://i-programmer.info/programming/android/6996-fragment-and-activity-working-together.html?start=3 | CC-MAIN-2017-04 | refinedweb | 1,061 | 50.16 |
Content Count18
Joined
Last visited
Reputation Activity
- jasonsturges reacted to bubamara in TilingSprite offset distortion over time
Seems like precision problem over the time. Try adding:
this.tile.tilePosition.x %= this.tileTexture.width;
- jasonsturges reacted to ivan.popelyshev in TilingSprite offset distortion over time
> Any insight as to what might cause this, or how to debug it?
Old known bug, shader precison, not possible to fix, you should do what @bubamara said.
Its both
0. reported and solved with that workaround 100 times
1. not mentioned in documentation and even in pixijs wiki ()
2. Workaround is not integrated into pixijs vanilla but I think its possible
Maybe someone wants that situation to change and makes PR either fixing it either adding workaround to the docs. I wont do that because I'm on huge vacation from pixi.
- jasonsturges reacted to ivan.popelyshev in Remove `onComplete()` handler from shared loader
That's because resource-loader is in separate repo so our docs aren't usually synced up
it uses mini-signals, its in second circle of pixi deps, and they have detach
- jasonsturges got a reaction from jonforum in Large texture data on mobile with zoom pan
Looking)
- jasonsturges got a reaction from Tiggs in Change cursor view in PIXI.js
Since this is the #1 Google Search result for changing Pixi cursor, just noting the URL appears to have updated:
- jasonsturges got a reaction from ivan.popelyshev in Component invalidaton lifecycle
!
- jasonsturges reacted to ivan.popelyshev in Component invalidaton lifecycle
Another possibility is removal of updateTransform FOR's - but you need special event queue per root stage element, and every element should add itself to it in case of change. I've spent big number of hours to debug that, and now my work project can handle 6000+ animated objects and it spends time on them only when animation is switched to another, element moves, or camera moves far away from clipping rectangle.
- jasonsturges reacted to ivan.popelyshev in Component invalidaton lifecycle
Only partial.
PixiJS cares about transforms of all elements, calculated vertices for sprite, graphics bounds, some other things.
We switched from simple dirtyflags to dirtyID's because usually one object is watched by several validators. The trick is also to set watcher updateID to -1 if parent was changed.
Unfortunately we cant cover everything:
If we implement dynamic props like in react and angular, it will seriously affect debuggability of code.
I have my own Stage to execute flash SWF's, I forked it from mozilla shumway, fixed many things and covered them with tests, spent 2 years on it, and I still think it doesn't fit vanilla pixijs. I made it to cache results of filters, because they are too heavy: blurs, shadows, glows, e.t.c.
If you want full control like that - I advice to copy code of pixi/display , sprite, graphics and whatever elements you need, to make your own scene tree. It is possible, I proved that. Most of our logic is already in inner components : Transform, Texture, GraphicsGeometry, e.t.c.
Its even possible to make FilterSystem and MaskSystem work with your components, just need some extra interfaces like MaskTarget or FilterTarget.
What I don't recommend - is to try track dirtyRects. Its hell, don't even try it. Stick to elements that you cache in renderTextures.
Btw, that's the reason I want to make `cacheAsBitmap` not a boolean ,but a number. Just `cacheAsBitmap++` should invalidate the cache.
- jasonsturges got a reaction from ivan.popelyshev in Component invalidaton lifecycle
A } }
- jasonsturges got a reaction from ivan.popelyshev in Fuzzy blurry edges on transparent background on zoom
@ivan.popelyshev Awesome - yes, that makes significant improvement.
Have a few curved paths that become a little grainy with that scale mode, but that's minor.
As always, really appreciate your help and insight! Thanks!
- jasonsturges reacted to ivan.popelyshev in Fuzzy blurry edges on transparent background on zoom
texture.baseTexture.scaleMode = PIXI.SCALE_MODES.NEAREST doesn't help?
- jasonsturges reacted to xerver in Change Background Color In JavaScript?
renderer.backgroundColor = 0xFF00FF;
- jasonsturges reacted to ivan.popelyshev in Extending PIXI.utils
Example of what can go wrong with pixijs + webpack + plugins:
- jasonsturges got a reaction from ivan.popelyshev in Extending PIXI.utils
Are?
- jasonsturges reacted to ivan.popelyshev in Extending PIXI.utils
Extending such way its supported with webpack is difficult. Most of my plugins work like that:
import * as PIXI from "pixi.js"; global.PIXI = PIXI; require("pixi-spine"); then pixi-spine takes global PIXI and adds "PIXI.spine" inside, and also a few methods for existing classes in pixi.
You can use the same approach - make a library that works with global pixi. In future we'll provide a way that is free of problems like "there are two versions of pixi in node_modules now", we have some problems with peer dependencies.
Right now dont focus on webpack and just do this thing.
- jasonsturges got a reaction from ivan.popelyshev in Fabric.js select and transform suggestions in Pixi.js
@ivan.popelyshev Minus multi-selection, that's a perfect match.
Thanks! Really appreciate all the support you provide.
- jasonsturges reacted to ivan.popelyshev in Fabric.js select and transform suggestions in Pixi.js
- jasonsturges got a reaction from themoonrat in Change cursor view in PIXI.js
Since this is the #1 Google Search result for changing Pixi cursor, just noting the URL appears to have updated:
- jasonsturges got a reaction from ivan.popelyshev in Pixi.js color accuracy
Hi,
- jasonsturges got a reaction from ivan.popelyshev in Pixi.js color accuracy
Appreciate the insight - thanks, @ivan.popelyshev | https://www.html5gamedevs.com/profile/35353-jasonsturges/reputation/?type=forums_topic_post&change_section=1 | CC-MAIN-2020-29 | refinedweb | 934 | 59.09 |
It has been about a month since I have posted, that does not mean I have stopped coding. Lately I’ve been back on my “security” kick. Although for me it’s more of an obsession rather than just a kick. When it comes to security, a programming language like Python can make many common task a breeze to accomplish. Here I have a basic Linux password cracker that can crack the current SHA-512 shadowed hashes from a user supplied dictionary and detect whether a hash is the lesser used MD5 or SHA-256 format. Enjoy.
import crypt def testPass(cryptPass): hashType = cryptPass.split("$")[1] if hashType == '1': print "[+] Hash Type is MD5" elif hashType == '5': print "[+] Hash Type is SHA-256" elif hashType == '6': print "[+] Hash Type is SHA-512" else: "[+] Hash Type is Unknown" salt = cryptPass.split("$")[2] dictFile = open('dictionary.txt', 'r') for word in dictFile.readlines(): word = word.strip('\n') pepper = "$" + hashType + "$" + salt cryptWord = crypt.crypt(word, pepper) if cryptWord == cryptPass: print '[+] Found Password: ' + word + '\n' return print '[-] Password Not Found.\n' return def main(): passFile = open('passwords.txt') for line in passFile.readlines(): if ':' in line: user = line.split(':')[0] cryptPass = line.split(':')[1].strip(' ') print '[*] Cracking Password For: ' + user testPass(cryptPass) if __name__ == '__main__': main()
Advertisements | https://rundata.wordpress.com/2013/12/29/linux-password-cracker/ | CC-MAIN-2017-26 | refinedweb | 213 | 68.87 |
I need some help with the prime_factor fxn. I think I'm just missing something easy that I'm not seeing right now. Anyways, it's supposed to output the prime factors of an input number.
When I input 12, it outputs the desired results; however, if I enter in another nonprime number, it doesn't output the desired results. For example, if I input 6, it only says that 2 is a factor, but does not say 3 is a factor.
Ex. Output:
Enter an integer < 1000 -> 12
The prime factorization of the number 12 is:
2 is a factor
2 is a factor
3 is a factor
______________________
Enter an integer < 1000 -> 6
The prime factorization of the number 6 is:
2 is a factor
// It doesn't output that 3 is also a factor for the number 6.
Here's an example main to go with the program, followed by the function I need help with:
Code:
#include <iostream>
#include <string>
#include <cmath>
using namespace std;
const string SPACE_STR = " ";
void prime_factor(int number);
bool is_prime(int number);
int main() {
int number;
cout << "Enter a number < 1000 -> ";
cin >> number;
prime_factor(number);
return 0;
}
// Function I need help with.
void prime_factor(int number)
{
bool isPrime = is_prime(number);
int prime = number;
int i = 2, j;
double squareRoot = sqrt(static_cast<double>(number));
int count = 0;
cout << "The prime factorization of the number " << number << " is:" << endl;
if(isPrime)
cout << SPACE_STR << number << " is a prime number." << endl;
else {
while((prime > 0) && (i <= squareRoot)) {
if((prime % i) == 0) {
count++;
for(j = 0; j < count; j++)
cout << SPACE_STR;
cout << i << " is a factor" << endl;
prime /= i;
} else
i++;
}
}
}
bool is_prime(int number)
{
int i;
for(i = 2; i < number; i++) {
if((number % i) == 0)
return false;
}
return true;
} | http://cboard.cprogramming.com/cplusplus-programming/46147-prime-factor-fxn-printable-thread.html | CC-MAIN-2015-48 | refinedweb | 293 | 57.74 |
Hello all!
I've begun coding for the ArbotiX and the Dynamixel AX-12 servos using the Arduino IDE on Ubuntu Linux (with the BioloidController library). I've installed and configured everything according to the online instructions for the Arduino and the ArbotiX, and I'm able to load programs and control my servos no problem.
My problem lies with the Serial Monitor in the Arduino IDE. I'm trying to display servo data like position, load etc., and program debug output, but nothing will output to the Serial Monitor. Even a simple "Hello World!" produces no output.
e.g.
#include <ax12.h>
void setup() {
Serial.begin(38400);
}
void loop() {
Serial.println("Hello World!");
delay(1000);
}
I read somewhere that the Serial Monitor on Linux will not work correctly with certain versions of GCC (edit: programs compiled with certain versions of GCC). I'm hoping someone here can help!
My setup is as follows (some of which I realise is redundant given my problem, but just in case...
):):
Ubuntu 10.04 LTS (32-bit version)
ArbotiX Robocontroller and the Pololu ISP
AX-12 servos (from the Bioloid Comprehensive Kit)
Arduino 0018 (with the BioloidController library)
From a terminal my versions of gcc (gcc --version) and avr-gcc (avr-gcc -version) are as follows:
gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3
avr-gcc (GCC) 4.3.4
Does anyone else have a similar setup to mine and experience or knowledge of this issue?
Thanks,
Stephen
Bookmarks | http://forums.trossenrobotics.com/showthread.php?4386-Arduino-IDE-Serial-Monitor-output-on-Linux&s=1bb95f2af2ca5d9e8cfe72f65fe9bbd0 | CC-MAIN-2020-16 | refinedweb | 247 | 57.47 |
How to: Create a WMI Event Alert (SQL Server Management Studio)
This topic describes how to create a SQL Server Agent alert that is raised when a specific SQL Server event occurs that is monitored by the WMI Provider for Server Events. For information about the using the WMI Provider to monitor SQL Server events, see WMI Provider for Server Events Concepts. For information about the permissions necessary to receive WMI event alert notifications, see Selecting an Account for the SQL Server Agent Service.
To create a WMI event alert
In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that instance.
Expand SQL Server Agent.
Right-click Alerts and then click New Alert.
In the Name box, enter a name for this alert.
Select Enable to enable the alert to run. Enable is checked by default.
In the Type box, click WMI event alert.
In the Namespace box, specify the WMI namespace for the WMI Query Language (WQL) statement that identifies which WMI event will trigger this alert. Only namespaces on the computer that runs SQL Server Agent are supported.
In the Query box, specify the WQL statement that identifies the event that this alert responds to. For more information about WQL, see Using WQL with the WMI Provider for Server Events. | https://technet.microsoft.com/en-US/library/ms366332(v=sql.105).aspx | CC-MAIN-2017-26 | refinedweb | 219 | 72.56 |
Aerospike
Detailed information on the Aerospike state store component
Component format
To setup Aerospike state store create a component of type
state.Aerospike. See this guide on how to create and apply a state store configuration.
apiVersion: dapr.io/v1alpha1 kind: Component metadata: name: <NAME> namespace: <NAMESPACE> spec: type: state.Aerospike version: v1 metadata: - name: hosts value: <REPLACE-WITH-HOSTS> # Required. A comma delimited string of hosts. Example: "aerospike:3000,aerospike2:3000" - name: namespace value: <REPLACE-WITH-NAMESPACE> # Required. The aerospike namespace. - name: set value: <REPLACE-WITH-SET> # Optional
WarningThe above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here.
Spec metadata fields
Setup Aerospike
You can run Aerospike locally using Docker:
docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike
You can then interact with the server using
localhost:3000.
The easiest way to install Aerospike on Kubernetes is by using the Helm chart:
helm repo add incubator helm install --name my-aerospike --namespace aerospike stable/aerospike
This installs Aerospike into the
aerospike namespace.
To interact with Aerospike, find the service with:
kubectl get svc aerospike -n aerospike.
For example, if installing using the example above, the Aerospike host address would be:
aerospike-my-aerospike.aerospike.svc.cluster.local:3000) | https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-aerospike/ | CC-MAIN-2021-43 | refinedweb | 221 | 56.35 |
ToString() and Error Logging
In production environment, it is not always possible to step through the compiled code using a debugger and thus we rely on error logs to provide insights into the potential causes of bugs. The location of the bug, which is provided by the stacktrace of the exception, is not always helpful in determining the reason of the exception as the exception might be caused by data. So it is desirable to log not only the location of the error but also the object and the related object’s states (e.g. the value of the variables, etc).
One of the best places to insert logging code that captures object’s state would be in an object’s ToString method. By overriding the ToString method, an object can be visualized not only in debugging mode, but in shipped code as well. Another benefit of this approach is that the visualization can be many levels deep. For instance, if the objects that the current object has access to implement similar ToString methods, the states of all variables would be captured.
To illustrate this, let’s take a look at some code:
using System;
using System.Collections.Generic;
using System.Text;
namespace ExceptionHandling {
public class Record {
public string ObjName = string.Empty;
public string ObjColor = string.Empty;
public Record(string name, string color) { ObjName = name; ObjColor = color; }
public override string ToString() { return string.Concat("{", ObjName, " — ", ObjColor, "}"); } }public override string ToString() { return string.Concat("{", ObjName, " — ", ObjColor, "}"); } }
public class Records {
public List<Record> records = null;
public Records() { records = new List<Record>(); }
public override string ToString() {
StringBuilder sb = new StringBuilder();
foreach (Record r in records) {
sb.Append(r.ToString());
sb.Append(Environment.NewLine);
}
return sb.ToString();return sb.ToString();
}
}
}
As you can see in the code above, both Record and Records classes override the base ToString function and each outputs its member variables’ states.
To test our implemtation, consider the code below:
public void Test() {
Records r = new Records();
try {
r.records.Add(new Record("car", "red"));
r.records.Add(new Record("apple", "green"));
r.records.Add(new Record("sky", "blue"));
throw new Exception("This is a test");
}
catch (Exception e) {
Console.WriteLine(e.StackTrace);
Console.WriteLine(r.ToString());
}
}
Her we purposefully threw an exception and the second line writes out the object (note, here is the place where logging function would be placed).
The output of r.ToString() is:
{car — red}
{apple — green}
{sky — blue} | http://www.kerrywong.com/2007/03/18/tostring-and-error-logging/ | crawl-001 | refinedweb | 403 | 56.25 |
Hi, can you help with a C programming assignment?
So a small part of the course covers writing/compiling assembler code as well, & I'm having some trouble figuring it out:
Basically it deals with unsigned integer multiplication (specifically in dealing with 32-bit int overflow), using two files: big_mult.c, and mull.s.
So far, I have written the big_mult.c, which contains two methods: int main, & void mull (mull.s).
Now mull.s function is to be written manually, and will be linked to the final program after compile: $ gcc -Wall -std=c99 -o big_mult.exe big_mult.c mull.sNow mull.s function is to be written manually, and will be linked to the final program after compile: $ gcc -Wall -std=c99 -o big_mult.exe big_mult.c mull.sCode:
#include <stdio.h>
void mull(unsigned int x, unsigned int y, unsigned int* low, unsigned int* high);
int main(char *argv[]) {
unsigned x, y;
unsigned low = high = 0;
sscanf(argv[1],"%x",&x);
sscanf(argv[2],"%x",&y);
mull(x,y,&low,&high);
return 0;
}
This is the part I am lost at...
All I really know is that it starts out something like this:
Code:
_mull:
pushl %ebp | save stack pointer
movl %esp, %ebp | new stack pointer
movl 8(%ebp), %eax | get x
movl 12(%ebp), %edx | get y
movl 16(%ebp), (%?) | get *low
movl 20(%ebp), (%?) | get *high
...
...
pop %ebp
ret
Here is the sample output:
* Using Cygwin Bash Shell* Using Cygwin Bash ShellCode:
$ ./big_mult 2f432f43 629b03cb
2f432f43 x 629b03cb = 12345678 87654321
Can you please help?
Thank you! Thank you! | http://cboard.cprogramming.com/c-programming/126518-assembly-integration-unsigned-integer-multiplication-printable-thread.html | CC-MAIN-2015-32 | refinedweb | 263 | 70.63 |
After a succession of traumas arising from problems in the four cumulative updates that Microsoft has released for Exchange 2013 to date, it comes as blessed relief to report that Cumulative Update 5 (CU5) appears to be totally unremarkable. Of course, time will tell whether as yet unknown bugs bubble to the surface, but the signs are that CU5 is a simple, easy-to-apply set of fixes. Exchange administrators around the world will roundly applaud the sheer ordinariness of the update. You can get CU5 from the Microsoft download center. See KB2936880 for a list of the fixes included in CU5.
I've been applying builds of CU5 to servers for the last several weeks and after an initial hiccup, each update has been smooth and uneventful. The hiccup was caused by a missing registry key (CERES_REGISTRY_PRODUCT_NAME). This key is associated with the Search Foundation and its absence causes Setup to fail. Apparently it's an issue that has been around since Exchange 2013 was first released and many examples are reported on the web. I had never encountered the problem before and it seems to be more common when servers are heavily loaded. In any case, the Exchange engineers fixed the problem in CU5 and it shouldn't raise its head again.
Released thirteen weeks after Exchange 2013 SP1, CU5 contains many other bug fixes because it is a clean-up, fit and finish update. Stuff that didn't make it into SP1 or bugs that showed their heads after SP1 was announced are fixed. One example is the fix described in KB2958430. In this case, administrators reported that "identity references" could not be translated with an "IdentityNotMappedException" when working with the Set-DatabaseAvailabilityGroup or the Add/Remove-DatabaseAvailabilityGroupMember cmdlets to build out a Database Availability Group (in EMS or through EAC). Essentially, Exchange didn't cope well when DAGs were deployed into disjoint namespaces and produces a Dr Watson dump when these cmdlets are run. You won't have seen this problem if your DNS domain name matches the primy DNS suffix assigned to servers. The bug was introduced in Exchange 2013 SP1 and has since caused some issues for those who have a need to maintain disjoint namespaces.
The biggest achitectural fix included in CU5 was revealed by Microsoft on May 13 when they posted information about changes to the way that Exchange provides Offline Address Book data to Outlook clients. The EHLO blog says that these changes are "improvements" but I think they are fixes for flaws in the new OAB generation and storage mechanisms introduced in Exchange 2013 in an attempt to address some known issues in the older implementation. The fixes make OAB generation and distribution more efficient in scenarios where multiple OAB arbitration mailboxes exist within an organization. However, multiple OAB arbitration mailboxes are typically only met in very large and complex deployments and the changes made in CU5 will have zero impact on most customers.
Those who have implemented multiple OAB arbitration mailboxes will enjoy the fact that all of their clients will have to download a complete OAB after the changes are applied. Remember, these are large deployments so they have large OABs too and the prospect of every client having to download several hundred megabytes of OAB data is not welcome, especially if it occurs on a Monday morning following a weekend update. Oh well, you cannot make an omelette without breaking eggs and you can't have OABs without downloads.
Despite some expectations to the contrary, CU5 contains nothing much to help modern public folders to scale up from the horribly inadequate limits revealed in March. These limits, together with the sheer manual nature of the work required to prepare for and then execute the migration is enough to give anyone heartburn. During a spirited "Experts Unplugged" session at the recent MEC, Microsoft promised that they were working hard to raise the limits to a point where Exchange would support 1 million public folders. Although I hear that some progress has been made, clearly that work is still not ready for prime time. Let's hope that it will appear in Exchange 2013 CU6. On the upside, CU5 does contain a fix to allow Autodiscover to reference legacy public folders when MAPI over HTTP is used.
One small point to keep an eye on: apparently a hyperactive Managed Availability probe that forces frequent restarts the shared cache service exists in CU5 and you might be concerned when you hear about it. However, the service that is restarted is currently unused by the product and the probe has been around for a while. It does nothing except chew up some CPU cycles unnecessarily. KB2971467 explains how to mitigate the pesky probe if you consider this necessary.
On a housekeeping note, I approve of the way that Setup now cleans up the many PowerShell scripts that it creates in the \ExchangeSetupLogs folder to perform various operations when installing Exchange 2013. These files absolutely did not need to be kept after a successful installation and it's good that they are now removed. Noticing stuff like this simply proves that I have the capacity to pick up on the oddest thing while ignoring other stuff that's probably more important.
Remember that every cumulative update for Exchange 2013 is a full version of the product. You can apply it to a server to install Exchange 2013 from scratch or you can update any previous version of Exchange 2013 with CU5 to bring a server completely up-to-date.
The fact that CU5 contains only bug fixes and no new features is actually a pleasant change. CU5 does include a schema update and you still need to test the new code before you bring CU5 anywhere near a production server, but signs are that applying CU5 is easy and straightforward. I'm thankful that build 913.22 is so undramatic. I suspect others will be too.
Update: MVP Michael Van Horenbeeck points out that there are some useful updates to the Hybrid Configuration Wizard (HCW) included in CU5. Read all about this topic on his blog.
Update 2: In a note on the Facebook Exchange 2013 page, Microsoft's Brian Day said that CU5 brings back certificate-based authentication for Exchange ActiveSync. Apparently the documentation is still being worked on. Stay tuned!
Update 3: It appears that the bug requiring app pools to be recycled to allow ActiveSync connections after mailboxes are moved to Exchange 2013 is not fixed in CU5. No word yet whether the fix will be in CU6 or in a roll-up update for Exchange 2010.
Update 4: KB2958434 describes a fix for the issue that I detailed in my "Cherish your old databases" article. The approach that Microsoft has taken to mitigate the problem is to reduce the lifetime of the cookie that is cached by the CAS. In turn, this reduces the amount of time that you need to keep a database after all mailboxes are moved from it. However, it would be unreasonable to reduce the lifetime of the cookie to a point where you could remove the database immediately as this would impact performance (cached data is key to responsiveness). So you still have to cherish those old databases, but for less than you had to before.
Update 5: CU5 breaks OWA redirection from Exchange 2007 to Exchange 2013 if FBA is disabled on Exchange 2007. This is a curious bug that is a definite regression. The workaround is simple - enable FBA on both Exchange 2007 and 2013 (alternatively, point Exchange 2007 to the legacy namespace). Microsoft is aware of the issue and is working on a fix, or so I hear.
Follow Tony @12Knocksinna | http://www.itprotoday.com/microsoft-exchange/exchange-2013-cu5-totally-unremarkable-update-good-way | CC-MAIN-2017-47 | refinedweb | 1,291 | 58.82 |
instances of a script always execute, both as part of Play Mode and when editing.
By default, MonoBehaviours are only executed in Play Mode and only if they are on GameObjects in the main stage containing the user Scenes. They do not execute in Edit Mode, nor do they execute on objects edited in Prefab Mode, even while in Play Mode. By adding this attribute, any instance of the MonoBehaviour will have its callback functions executed at all times.
The [ExecuteAlways] attribute can be used when you want your script to perform certain things as part of Editor tooling, which is not necessarily related to the Play logic that happens in buildt Players and in Play Mode. Sometimes the Play functionality of such a script is identical to its Edit Mode functionality, while other times they differ greatly.
A MonoBehaviour using this attribute must ensure that they do not run Play logic that incorrectly modifies the object while in Edit Mode, or if the object is not part of the playing world. This can be done via Application.IsPlaying in which the script can pass in its own GameObject to check if it's part of the playing world.
If a MonoBehaviour runs Play logic in Play Mode and fails to check if its GameObject is part of the playing world, a Prefab being edited in Prefab Mode may incorrectly get modified and saved by logic intended only to be run as part of the game.
If your script makes use of static variables or Singleton patterns, you should ensure that instances of the script that belong to the playing world and instances that don't will not accidentally affect each other through those variables or Singletons.
On an object which is not part of the playing world, the functions are not called constantly like they otherwise are.
See Also: Application.IsPlaying, runInEditMode, EditorApplication.QueuePlayerLoopUpdate.
using UnityEngine;
[ExecuteAlways] public class ExampleClass : MonoBehaviour { void Start() { if (Application.IsPlaying(gameObject)) { // Play logic } else { // Editor logic } } } | https://docs.unity3d.com/ScriptReference/ExecuteAlways.html | CC-MAIN-2021-25 | refinedweb | 333 | 59.23 |
Next: Pretty Printing API, Previous: Values From Inferior, Up: Python API [Contents][Index] alignment of this type, in bytes. Type alignment comes from the debugging information; if it was not specified, then GDB will use the relevant ABI to try to determine the alignment. In some cases, even this is not possible, and zero will be returned.
The type code for this type. The type code will be one of the
TYPE_CODE_ constants defined below.
The name of this type. If this type has no name, then
None
is returned.
The size of this type, in target
char units. Usually, a
target’s
char type will be an 8-bit byte. However, on some
unusual platforms, this type may have a different size.
The tag name for this type. The tag name is the name after
struct,
union, or
enum in C and C++; not all
languages have this concept. If this type has no tag name, then
None object, with some pre-defined attributes:
bitpos
This attribute is not available for
enum or
static
(as in C++) fields. The value is the position, counting
in bits, from the start of the containing type.
enumval
This attribute is only available for
enum fields, and its value
is the enumeration member’s integer representation.
name
The name of the field, or
None for anonymous fields.
artificial
This is
True if the field is artificial, usually meaning that
it was provided by the compiler and not the user. This attribute is
always provided, and is
False if the field is not artificial.
is_base_class
This is
True if the field represents a base class of a C++
structure. This attribute is always provided, and is
False
if in some situations.
parent_type
The type which contains this field. This is an instance of
gdb.Type.
Return a new
gdb.Type object object and a
vector is that
arrays behave like in C: when used in expressions they decay to a pointer
to the first element whereas vectors are treated as first class values.
Return a new
gdb.Type object which represents a
const-qualified variant of this type.
Return a new
gdb.Type object which represents a
volatile-qualified variant of this type.
Return a new
gdb.Type object which represents an unqualified
variant of this type. That is, the result is neither
const nor
volatile.
Return a Python
Tuple object that contains two elements: the
low bound of the argument type and the high bound of that type. If
the type does not have a range, GDB will raise a
gdb.error exception (see Exception Handling).
Return a new
gdb.Type object which represents a reference to this
type.
Return a new
gdb.Type object which represents a pointer to this
type.
Return a new
gdb.Type that represents the real type,
after removing all layers of typedefs.
Return a new
gdb.Type object is an instantiation of a template, this will
return a new
gdb.Value or
gdb.Type which represents the
value of the nth template argument (indexed starting at 0).
If this
gdb.Type is not a template type, or if the type has fewer
than n template arguments, this will throw an exception.
Ordinarily, only C++ code will have template types.
If block is given, then name is looked up in that scope. Otherwise, it is searched for globally.
Return
gdb.Value instance
The type is a pointer.
gdb.TYPE_CODE_ARRAY
The type is an array.
gdb.TYPE_CODE_STRUCT
The type is a structure.
gdb.TYPE_CODE_UNION
The type is a union.
gdb.TYPE_CODE_ENUM
The type is an enum.
gdb.TYPE_CODE_FLAGS
A bit flags type, used for things such as status registers.
gdb.TYPE_CODE_FUNC
The type is a function.
gdb.TYPE_CODE_INT
The type is an integer type.
gdb.TYPE_CODE_FLT
A floating point type.
gdb.TYPE_CODE_VOID
The special type
void.
gdb.TYPE_CODE_SET
A Pascal set type.
gdb.TYPE_CODE_RANGE
A range type, that is, an integer type with bounds.
gdb.TYPE_CODE_STRING
A string type. Note that this is only used for certain languages with language-defined string types; C strings are not represented this way.
gdb.TYPE_CODE_BITSTRING
A string of bits. It is deprecated.
gdb.TYPE_CODE_ERROR
An unknown or erroneous type.
gdb.TYPE_CODE_METHOD
A method type, as found in C++.
gdb.TYPE_CODE_METHODPTR
A pointer-to-member-function.
gdb.TYPE_CODE_MEMBERPTR
A pointer-to-member.
gdb.TYPE_CODE_REF
A reference type.
gdb.TYPE_CODE_RVALUE_REF
A C++11 rvalue reference type.
gdb.TYPE_CODE_CHAR
A character type.
gdb.TYPE_CODE_BOOL
A boolean type.
gdb.TYPE_CODE_COMPLEX
A complex float type.
gdb.TYPE_CODE_TYPEDEF
A typedef to some other type.
gdb.TYPE_CODE_NAMESPACE
A C++ namespace.
gdb.TYPE_CODE_DECFLOAT
A decimal floating point type.
gdb.TYPE_CODE_INTERNAL_FUNCTION
A function internal to GDB. This is the type used to represent convenience functions.
Further support for types is provided in the
gdb.types
Python module (see gdb.types).
Next: Pretty Printing API, Previous: Values From Inferior, Up: Python API [Contents][Index] | https://sourceware.org/gdb/onlinedocs/gdb/Types-In-Python.html | CC-MAIN-2018-43 | refinedweb | 812 | 62.14 |
)
Re:not surprising (Score:5, Insightful).
Re: (Score:2, Funny).
Also 9/10 enjoys group rapes
Re: (Score:3, Funny)
Re:not surprising (Score:5, Interesting)
Someone who just needs to run a browser and word processor probably can't tell Windows 7 from KDE. Someone who needs to configure and administrate systems for an organisation certainly will.
True.
I actually had a long argument with my SO about Linux vs. Windows issue. My main point was this: whenever she experiences any trouble she still complains to me, and for me it is much easier to deal with Linux. So she gave it a try and it all went OK to her own surprise, she had no troubles using FF, Gimp and Pidgin., Insightful)
Dude she just wants your password to check your email and make sure you don't have anyone on the side
:)
Re:not surprising (Score:5, Funny)
Re:not surprising (Score)
Or it would be, if you had a girlfriend.
Re:not surprising (Score:5, Funny)
Or it would be, if you had a girlfriend.
#include <stdyourmom>
Re:not surprising (Score:5, Funny)
Careful, sledgehammers need no passwords.
Re:not surprising (Score:5, Funny)
Re:not surprising (Score:5, Funny)
Nah, the dead can't talk.
You know anything about putting my honda back together?
Set her up on another VT... (Score:5, Interesting)
I set up my wife on my PC on another virtual terminal - (ctl-alt-F8), it automatically logs her in on boot-up, and whenever she needs "her" stuff, it's all there for her. With all her own passwords. Plus, my "stuff" remains untouched - so whether I'm downloading torrents, or in the middle of composing an email, wp, graphic, presentation...it's all still there when she's done (ctl-alt-F7, back to me)
Simple.
cheers,
Re: (Score:3, Informative)
It only works in XP if you aren't connected to a domain, otherwise you get a Windows 2000 style login.
Re:not surprising (Score:4, Interesting)
Actually, this is a point of contention between me and my wife... Ocassionally, when Im on the computer she wants to check her hotmail email or stuff, and becomes angry when I tell her she cannot login into Pidgin unless she does it from her own user account.
"But MSN in Windows allows you to sign out and sign in again with a different username!" She says
"Yeah, but UNIX has a different philosophy, every user should have its own desktop and its own settings!"
"Why? You and all your Linux friends are a bunch of paranoid idiots! What's the point of so many passwords? Who do you think is going to try to hack you?"
"[sigh] You can then reboot into windows when I'm done with this..."
Re:not surprising (Score:4, Funny)
"I am not married" negates your comment. Try that when you are married
:)
Re:not surprising (Score:4, Informative)
Then just give her "a" password, not "that" passord. It's pretty easy to create a user and not make them show up in the login screen
;)
Or just make a TrueCrypt File called "corruptedVideo.mpg" and put all "that" stuff in there.
If there's one thing I've learned from women, it's that the only way to win a fight is to make her think SHE won!
Re:not surprising (Score:5, Funny)
This is why I have no problem with my GF running windows. If it breaks, I don't know what to do with it anyway, so it's not my problem.
Re:not surprising (Score:5, Insightful)
References to Windows are one of the only times I see geeks proudly proclaiming their ignorance....It's just an OS by a company, not some insane enemy to be avoided at all cost =/
Hence "itsatrap" on every article about Microsoft supposed altruism.
They are the enemy, they declared it many, many times! The everyone-not-Windows crowd might not have a problem with Microsoft (and therefor Windows) If they didn't have so much history of UI/feature theft, assimilation of over a hundred of corporations, investing in corporations like SCO to assault the public image of Linux, claiming Open Source is dangerous and will destroy computing... Jesus, it's like a monkey throwing shit at you. You know the monkey's just doing what it does but you'll never in your right mind appreciate it.
Actually, let me make a much more concise attempt at responding to your comment.
References to Windows are one of the only times I see geeks proudly proclaiming their ignorance....It's just an OS by a company, not some insane enemy to be avoided at all cost =/
THE FUCK IT AIN'T.
Re:not surprising (Score:5, Interesting)
My wife does digital scrapbooking. She was using a cheapo scrapbooking app, but started to find it too limiting. She started to insist on a purchase of Photoshop, which I resisted. So she got the free trial version, played with it for 30 days and loved it. I asked her to give gimp the same 30 days, and she did. We never did make that Photoshop purchase - she has managed to find gimp tutorials online and even a dead-tree book that has all sorts of hints, tips, and ideas for gimp. Now she does all her scrapbooking in gimp. Maybe I'll be able to sneak a switch over to Gentoo from XP on her box now.
:-)
She's no techie, she's artistic. (NOT AUtistic, ARtistic.) Took a bit to get over the learning curve to the point where she was productive, but it wasn't terribly worse than the learning curve for Photoshop.
Re:not surprising (Score:5, Interesting)
Same experience here with The Gimp. As long as SO hasn't become entrenched in using a particular non-free application, she grasps new free apps easily. I hadn't expected her to get used to the gimp (as every gimp article on
/. might have you think) as quickly as she did. Perhaps not being English helps in this case :)
Getting her switched from Microsoft Office however is a different story. Having used it for years, she was wary about OOo and balked about not being able to find various options easily.
It goes to show that moving users from what they are comfortable with is a difficult process. If the new app doesn't have a clear win (Firefox + AdBlock for instance) users won't switch easily. But if the user is new to the domain, they will try it with an open mind and learn quickly.
Re:not surprising (Score:5, Funny)
Wait, are you saying that your wife does NOT live in her own little world AND effectively communicates her wants/needs? I'm not sure you appreciate the full magnitude of your discovery, sir. Please.. go on.
Re:not surprising (Score:5, Insightful)
Don't confuse the fact that women remember what shoes another woman was wearing 3 years ago in March with some sort of all-encompassing perception of reality, because it's not. And seriously, when's the last time a woman ever told you exactly what she really wanted or needed? The only time that happens is right before or after a fight/breakup, because they're so upset that you didn't know to begin with. "You should have known I wanted you to vacuum upstairs because I left the vacuum cleaner sitting in the middle of the floor!"
My bad.. I just thought you left the vacuum out.
Re: (Score:3, Insightful)
Gimp isn't hard. All it takes is a try.
What I mean is, you actually have try and look around the menus, get messy, try things.
I know, people don't like to learn. Well, then they shouldn't bitch when they don't know how to do something new, now should they?
Re:not surprising (Score:5, Informative)
I'm sure that most people will see the difference when trying to install a game, sync their PDA (with the instruction on their constructor webpage not matching what they see on their screen) or try to open the crappy humor Powerpoint filling their mailboxes. No need to be a admin to see a subtle difference between linux and windows if you don't have a diligent kid/friend that take care of every single installation problem for you.
This video reminds me of all those "infomercial" showing the latest innovation in carpet cleaning or kitchen robot
...
Re:not surprising (Score:5, Interesting)
From [wikipedia.org]
"Participants weren't asked to work with peripheral devices (such as printers or scanners), nor were they asked about compatibility with older software or hardware.[4] Participants did not have an opportunity to try the software themselves[2], but were only demonstrated certain features by a salesman."
So while calling it Mojave prevented the bad hype from geeks, they still showed it to people in a very limited capacity that didn't actually show any of the things that were being criticized. Mojave proved very little, and this video is sort of analogous to that.
With as much certainty as the Mojave Experiment provided us with, this video demonstrates that Linux and KDE are indeed desktop ready and 100% compatible With windows. It's only when you tell users that it's not Windows that they start believing the M£ propaganda and claim that all of a sudden they can't run GTA4.
Re:not surprising (Score:5, Funny)
that's because 9 out of 10 statistics are made up 73% of the time.
Re:not surprising (Score:4, Funny)
Re:not surprising (Score:5, Funny)
Bucking for some informative karma I've tracked down some visual aids for our comparison:
Windows 7 [redbubble.net]
KDE 4 [photobucket.com]
Your welcome.
Re:not surprising (Score:5, Funny)
I consider myself fairly computer literate and I can't tell the difference between Windows Vista and Windows Vista.
Re:not surprising (Score:5, Insightful)
I think it would be more accurate to say that "people, outside their area of expertise, are generally clueless"
I consider myself somewhat of a Renaissance Man--I program, write, fiddle with electronics, skeet shoot, draw, wrench on my motorcycle, play a musical instrument or two, do carpentry and so forth. I find it moderately amusing to hear geeks who wouldn't know their way around an engine compartment tell auto mechanics that they're clueless--or nerds who can't carry a tune in a bucket tell musicians the same.
It's important to keep in mind (perhaps especially here on
/.) that the average person isn't a computer expert. They use the computer the same way they use a car, or a stereo, or a blender--they don't necessarily understand (or care about) the differences between models, they just want something that works.
Re:not surprising (Score:5, Insightful)
Right, but how many people go around saying "Oh, I never drive a BrandA cars, I've always driven BrandB cars. I wouldn't even know how to drive a BrandA. The buttons might be in the wrong place, or the shift lever might be at the rear passenger door. I just wouldn't know where anything was?" I've heard the strangest reasons for not switching to Linux. One was simply "The Start button looks too different." Yes, the start button was enough to scare them. Heaven forbid that they ever get in a car that had the gear shift on the wheel column instead of the floor.
No, the reason tech people say non-computer people are clueless about computers is because the ones that stick out in our memory are so willfully clueless. They are the ones who would get in any car and find the buttons they need, but change the color of an icon on the computer and they are lost. The blender breaks and they buy what ever one is on sale, but when they need to check their email they "Only know how to use OutLook Express. What is this 'webmail thing' you are talking about?" And stereos, geez, Talk about moving the buttons around, every one I've ever owned had the volume dial in a different place. But the volume icon in KDE is right next to the clock, same as windows normally, and most of these 'clueless' users wouldn't want to find it. They would rather just complain that 'it doesn't look the way I remember it.' I don't know what it is about computers that induces this autistic-like behavior, but that's exactly what it looks like.
I settled the issue with my parents. I told them that unless they could name an application they wanted to use that I could not get them under Linux, then the next time they wanted their computer fixed it was getting Linux installed. A nice windows-like theme and KDE, sure, I'd go ahead and do that for them, but I was not supporting windows. My mother actually asked me to pirate her a windows CD, just because she didn't want to 'learn a whole new computer'. I handed her my laptop and asked her what she thought, and she thought it was a "nice windows theme, but that wasn't linux. I've seen linux, that's where you type away in that little text box with no pictures."
Now they run Kubuntu, and the only problem they've had is that the LTS version hasn't updated firefox in ages. Next time they ask about it, they get moved from LTS to stable, which frightens them. I can't wait till they ask again and get moved to bleeding edge nightly builds.
Re:not surprising (Score.
It should be labeled under "fun", not "kde" (Score:5, Insightful)
I mean; even the editors themselves state that there isn't any conclusion to be drawn here; "we've learned nothing" because there simply are too many factors to consider. People don't know Windows 7 or people don't know KDE. Or people don't really care at all. So; fun movie, move along.
Thats it just show the eye candy. (Score:5, Insightful)
Any OS can look impressive when you find a demo that shows off all the eye candy to its full extent. You could have shown these people DWM configured nicely they would think it would be the next generation OS, UI. Vista got good visual reviews too. The problem is when you start working with it, things change. KDE and GNOME while have a rather niced polished UI, you still need to do things the Unix/Linux way. The same with windows no matter what you do to the UI it is still windows and need to work with it.
What I find really funny comparing Windows/Gnome/KDE with a Mac. The Mac actually has a lot less eye candy, yet perception has it as having more.
Re:Thats it just show the eye candy. (Score:5, Interesting)
My favorite piece of eye candy was the "static" when opening the photo.
When the hell is somebody going to fix that, and whos fault is it?
X? WM? Graphics Driver?
it's getting old.
Re:Thats it just show the eye candy. (Score:4, Informative)
Folgers... (Score:5, Funny)
The two guys' bottom line is nearly correct (Score:5, Interesting)
Still a nice little laugh, that video.
Re: (Score:2)
Totally agree with you - although at the end the ZDnet video they said 'they learnt nothing', that's not quite correct. They learnt that nobody in their (presumably not very scientific sample) has any idea of what KDE4 looks like...
So, as you imply, should be on 'idle' really...
Re:The two guys' bottom line is nearly correct (Score:5, Interesting)
Re:The two guys' bottom line is nearly correct (Score:5, Insightful)
I think I learned quite a bit. I learned that when you get people in front of a camera talking about your product, they don't really pay very much attention to what they are seeing. If you look like a representative of the company, most people are going to say kind things.
Which to me, says an awful lot about the Mojave Experiment. It doesn't really matter what people say they think in that setting. It matters what they think when they install the OS on their own computer, and for Vista that hasn't been very good.
It also makes me question the effectiveness of usability labs I've sat through in the process of developing software for corporations. It's a painful process, and now I wonder if it is very accurate at all.
Good laugh, but misleading (Score:2, Insightful)
It's very misleading, people could have pretended any OS or GUI, including MacOS-X - because the 1-2 min demonstration saying "look how easy it is" could have been a Vista desktop with a different background image, and people would be alike fooled. So the laugh was good, but it just shows how misleading suggestive presentations are, and what people truly value: easy to use, and they believe it (first) when you tell them, and get pissed (later) when it's not so as told (like in case of Vista).
Is KDE4 actually usable yet? (Score:4, Informative)
This isn't a troll - I installed it with Suse 11.0 last year and though it was supposedly a release version it was utterly unusable, unstable and missing important features. I had to install 3.5.4 to actually get some work done. Since then I haven't bothered to check what state 4 is in now as I felt the KDE team (and Suse) had, to be polite, been rather dishonest about it. Is it worthwhile looking at it yet or should I just stick to 3.5 for the forseable future.
Re:Is KDE4 actually usable yet? (Score:5, Informative)
Yeah, 4.2 is far, far better than 4. I use it and love it!
Re: (Score:3, Interesting)
The question is: Is KDE 4.2 better than 3.5.x?
I've found that 4.2
* looks nice,
* is slow to draw things on the screen,
* still has fewer things working than its 3.5.x predecessor.
Although I found that I could alleviate most of the slow screen painting using desktop effects with KWin's composition manager. However, like all the other broken composition managers out there, you get a nice desktop that can't run 3D applications.
Lure them in with spinning cubes and wobbly windows and then break their hearts by te
Re: (Score:3, Informative)
I run X-plane all the time under Gnome and compiz-fusion. It's almost as fast as without compiz running (indirect rendering does take a hit), and it still gets a very reasonable frame rate. Maybe Kwin's compositor prevents #D apps, but in general composite managers should not and do not. Now I do prefer to turn off the effects when I do run a game. But yes, 3D apps certainly do work u
Re:Is KDE4 actually usable yet? (Score:5, Informative)
I blame the distro. They should not have made KDE4 the default so early - they should have stuck with KDE 3 until at least 4.2.
AS far as I can remember KDE 4.0 was well know not to be really ready.
KDE4 user (Score:5, Informative)
I've been using KDE4 since openSUSE started including the previews.
I felt the KDE team (and Suse) had, to be polite, been rather dishonest about it.
I don't know but to me it always seemed clear that the 4.0 was more a "early tester" release.
By now KDE4.2 is starting to get really usable and really configurable and could be used by more casual users.
Sure, if you have tons finely tuned stuff in KDE3.5, you'll really miss them.
But KDE4.2 offers enough basic functionality to be usable by most people.
Is it worthwhile looking at it yet or should I just stick to 3.5 for the forseable future.
If you don't depend on highly specific KDE3.5 customisations,
or if you're ready to spend time re-tuning everything again in a slightly different way,
then KDE4.2 is definitely worth giving a try.
On the other hand if you absolutely require the same level of ultra smooth-polished user experience that KDE3.5 offers, you'd better stick with the KDE3.x branch for now and probably wait until somewhere around the KDE4.5 version. (maybe just giving quick shot to KDE4.3 and 4.4 just to watch progress).
Ditto for KDE5.x in a couple of years : stay with KDE4.5 until that one matures.
;-)
Re: (Score:3, Informative)
Hey.
I have a fix for you.
Add krandrtray to your list of Autostarted applications.
See this bug for more information: [kde.org]
Re:Is KDE4 actually usable yet? (Score:4, Interesting)
It seems to me that the solution to that would have been to call "4.0" "4-alpha". I'm not a big KDE user myself (I'm mostly forced to use Windows machines for my day to day workstations, and as often as not I just SSH into the servers to admin them), so I don't know what the issues are/were beyond what I've seen on
/. comments, but it sure seem like they released a ".0" release without really finishing it. Which is what everyone screams at Microsoft for doing all the time. this little comment war breaks out every so often, and it always come back to "Well they/we admitted it was crap when they/we released it!". So why release it? Release the alpha as an alpha and release what is now 4.2 as the 4.0 release.
Not being either a developer or a (significant) user of the project I don't really have a horse in the race, but it sure seems like if a commercial product had done this kind of thing it would have been held up by the community as an example of why FOSS is better. Granted I don't usually pay $unspecified_large_amount_of_money to use KDE, so I guess that's something, but shouldn't a flag ship FOSS project hold itself to the same standards that it expects from its competitors?
I like the conclusion... (Score:4, Funny)
It is indeed surprising AND unsurprising.
The video ends with the two guys discussing "what have we learned today". FTFV:
Sufficient Reason To Avoid Both (Score:3, Insightful)
If you can't distinguish KDE from Windows, and vice versa, that's reason enough to avoid both.
"I use Windows iMac" (Score:5, Informative)
So what does this experiment show? That people just aren't computer savvy.
Re:"I use Windows iMac" (Score:4, Informative)
Re:"I use Windows iMac" (Score:4, Insightful)
It shows that the supposed problem, "People just can't understand how to use Linux" is bunk. If they can't even tell it from the latest and greatest Windows, how can it be any more confusing for them than Windows is?
Put another way, if the users are going to be confused anyway when upgrading from XP, you might as well upgrade to Linux and get off the treadmill.
What does it show? (Score:4, Interesting)
Anyone staging a demo can find a number of people to say oooh ahhhh.
Seriously. This is the problems with computers today. The perception of "usability" is not actual "usability."
We all know, at the end of the day, "usability" is how easy it is to accomplish one or more tasks, to a certain degree the ease at which you learn how to do these tasks, and lastly the predictability and reliability of accomplishing your tasks.
So, if something is easy to do, easy to learn, and rewards careful execution with consistent outcome, the thing is easy to use.
Now, where does flashy eye candy come in to that picture? It doesn't. That's why military vehicles are all drab colors. The criteria is utility not beauty.
Sure, I do *like* the way KDE 4 looks, but it is less usable than KDE 3.
Re:What does it show? (Score:4, Interesting)
They DO act the same.
Point in fact, they don't. They have different action menus, options, etc. Dragging an icon from konqueror or dolphin creates something "different" and behaves differently than something from within dolphin or konqueror.
You've got it backwards.
Obviously I don't.
the Desktop as a folder AND as an interface is where you get things acting differently in different contexts.
"contexts" are bad things to users. Coming to a system it is difficult to grasp multiple contexts. Even as a regular user, "contexts" are a pain in the ass.
Would you like to write a document in a contextual editor like vim or OpenOffice.org?
The KDE4 desktop makes interface separate from data.
Yes, you've said basically that same thing previously and my response is the same, it is a bad idea.
You have to have a plasmoid to display a folder's contents if you want data on your desktop, which is completely in keeping with the concept.
The "plasmoid" is a cop-out for a well typed system. Why do you need plasmoids for the desktop but not in dolphin or konqueror? The desktop, conceptually, represents a physical space as does file cabinets. Just like your real 3D desk, why would a piece of paper be something different on your desk than in a file cabinet?
This is the foundation of UI design. Our lizard brains want things to be consistent.
They also always act the same... you never see a folder on the desktop in KDE4
And that is something I dislike as well. I *like* and would prefer to use folders on my desktop, because in the real 3d world, I keep things on my desk. Up until Kubuntu 8.10, I used my desktop they way I wanted to use my desktop.
you never see a program icon.
Why not? I keep things like my ipod on my desk, a couple USB drives, etc. By making the desktop artificially restricted -- "different" -- from the rest of the system you make it less easy to use.
It's in a plasmoid or in the file browser
A "file browser" corresponds to a real world entity. A file cabinet. What does a "plasmoid" represent?
What is the perpose of introducing a new concept? What does it answer? How does it make the system more usable? I've read a lot of the KDE discussions about plasmoids and they are all about an aesthetic preference from a few people, but not one discussion about how they are better or easier for end users.
It's doing exactly what you say is good, but you keep claiming that it's bad.
Then you are confused about what I have said.
It's all about the... (Score:4, Insightful)
Apps and games baby...Uhh, uh-huh, yeah.
Seriously thou, the rub comes in with what the Win32/64 platform can run more than anything else these days. Both Mac and GNU desktops are plenty mature enough to deal with what most normal users would want. The main thing is now the sheer force of inertia that the Windows platform has in terms of what it runs natively.
Re: (Score:3, Insightful)
Oddly enough the same group of people who want more Applications for Linux, are also so dead against Web Applications and Cloud Computing, which in essence gives apps to these platforms. Really other then Games, CAD or High Performance Apps. A Well Designed web app can do the job, and work on Linux, Mac, Windows, BSD, Solaris... As most applications are based on Text Input some calculations Text or simple graphic output, Web Based apps are a good choice.
But no punchline... (Score:5, Interesting)
At the end, they should have said:
"Have you ever heard of Linux?"
"What have you heard?"
"What you say if I told you this was Linux and not MS-Windows?"
Re: (Score:3, Insightful)
linux mojave TM
Re: (Score:3, Insightful)
And sad thing is...I can bet that in 2 years or so many MS-fanbots will point to KDE4 while saying "see? Linux doesn't innovate, it just rips off Windows!"...
Re: (Score:3, Insightful)
bait and switch and switch... (Score:5, Informative)
I started the video, and it stuttered, and started over... with an actual demonstration of Windows 7. I had to reload the page to get the KDE4 prank video.
Was that supposed to be some kind of Zen test?
Linux is ready, KDE4.2 is not (Score:3, Insightful)
Explaining tabs in the browser is harder, the vast majority will still shut down the browser instead of just the tab they were in.
Although KDE4.2 is showing great promises it's all but ready for full roll out.
But I sure like the way they are moving, it's nice to look at and the way they are splitting configurations like through widgets is in my view nice if only because it's optional.
But even in this demo we can see one of the issues, while rolling through the windows you notice how a video window is momentarily loosing like what seems sinc.
Now once it'll get snappy like KDE3.5 and robust as the OS underneath...
Eye candy is a superficial metric (Score:3, Interesting)
Exercises like this might be fun, but they have no practical purpose.
Linux desktops aren't marketed, they are judged by their users based on useful metrics: configuration options, stability, tools, etc.
In Windows world, 95, XP, and Vista were all marketed to the public primarily by showing static screens illustrating how pretty they were. Windows' classic interface looks bland today, but it was hip in the 90's. XP's fisher price interface was a hackish step further. Aero is a half-hearted catchup maneuver to Linux and OSX, delivered in a business-minded blandness that only Microsoft thinks is "innovative". Each of those versions were marketed the same, but received differently based on almost everything except their appearance. No one has ever said UAC prompts are pretty, they're too busy being annoyed by them.
Which desktop is more visually attractive has little to do with how much can be done with it, and how efficiently.
Re: (Score:2, Informative)
Re:eye candy (Score:4, Insightful)
Compared to other OS's MacOS is actually quite lite with its eye candy. Oddly enough OS X focuses more of the function of the UI more then how it looks. Every effect has a reason for it, and is used to help people grasp rather abstract concepts better. Vs. Say Wobbly windows in Ubuntu Linux which only hinders usage in order to look fancier aka (Window stuttering when it gets close to an other window)
Re:eye candy (Score:5, Insightful)
Re:eye candy (Score:5, Insightful)
Mod parent up. Almost all attacks against eye candy are based on a false dicothomy between beauty and functionality. Wobbly windows are not useful? Well, probably neither is your wallpaper. Or the painting on your house. Or good-looking clothes. And as much as it may sound surprising, woobly windows do not get in my way, I like them and nowadays I feel unconfortable when I have to use another system that does not have them. Different people, different tastes.
Going all "eh, I prefer functionality" is like ignoring a incredibly hot girl because "since she's beautiful, she's probably dumb". One thing does not exclude the other, specially considering Compiz/KWin are remarkably fine-tunable.
Re: (Score:3, Insightful)
It's when the eye candy gets in the way of the functionality that it becomes a problem. (To stretch your analogy, you can never go out with your beautiful girlfriend, because she takes all night to put her make-up and clothes on.)
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
My argument against moving the windows to be wobbly is the fact in real life we have more experience with solid objects Then Rubbery ones. Moving a windows should stay as a solid feel. Actually if you want to get a more realistic effect you should probably have the window rotate based on the torque that you place on the window when moving it. As for the "slurp" it effect is because the window is doing something that in real life we don't experience Objects shrinking without distortion it also forms an arro
Re: (Score:3, Insightful)
Which isn't a problem except that you use "slurp is good because it helps the metaphor" in your defense of it.
You're quoting me? I never said that. I think the whole document/window/desktop metaphor stuff gets in the way of providing organizational mechanisms that possibly "break" some stupid metaphor. If something works, I don't care if it behaves within the bounds of a "desktop" metaphor. Or if something uses a "slurp" animation when such things don't occur in nature. It's useful organ
Re:eye candy (Score:5, Insightful)
Am I the only one who doesn't want eye candy these days?
Don't get me wrong, I don't want the look of Pre-OSX Mac or early Unix operating systems, or windows 3.1... I don't want things that are painful to look at. Just a simple, quiet appearance that doesn't distract me from what I'm doing.
I can get that in Windows and KDE 3.5. I can get it in Gnome.
Vista screwed the UI, and I can't get it there (I can come close, but they made some things use the same colors, while in earlier versions of windows, they used different colors - such as input fields and non-input page backgrounds. Windows 7 hasn't fixed this.
KDE 4, MacOSX, Windows 7, Windows Vista... Too much bling and not enough customisation in the UI for me.
Re:eye candy (Score:5, Informative)
Xfce is your friend.
I use Xubuntu. Plain, clear, simple and *fast*. 8.10 runs out of the box everything on my ThinkPad laptop including Bluetooth. Get it.
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
I'll second this. Xubuntu or Slackware with Xfce is very nice. It looks good without being distracting. It is very fast compared to the other full desktop/window managers and doesn't get in the way. Being based on Gtk it has similar customizations as gnome. KDE apps still run great under it as well. I keep trying Gnome & KDE but always go back to Xfce when I need to get some work done.
Re: (Score:3, Interesting)
XFCE is nice, but I think Fluxbox is nicer still, especially when used with XFCE apps. It loads in less than a second but still manages to look rather nice [imageshack.us] with transparency and stuff. The best bit though, aside from its fleety-nimbleness, is that it allows user-definable, chained keyboard shortcuts (I have {Alt+x, Alt+z} mapped to 'screen -Rd', for example). It's freaking awesome.
I apologise for evangelizing, but I just love it so damn much.
Re: (Score:3, Informative)
WindowBlinds, huh?
Ahhhhh! My eyes! [draginol.com]
Wow, after seeing that desktop, I see why Ubuntu went with brown instead of bright, fluorescent orange
:)
Re:eye candy (Score:5, Funny)
You'll buy KDE4?!?!
I've got this pirate copy of KDE4.2... It's much cheaper than the original.
Re: (Score:2)
Re:eye candy (Score:5, Funny)
That's a bug in your legal system. I heard you recently voted a new president who may submit a patch.
Then again, your system is so broken you may want to consider a ground up re-write.
Re: :eye candy (Score:5, Informative)
Can I legally play a DVD on a Linux box in the US?
Yes.
Ask Dell. They now include a closed source DVD player app to cover this niggle. The rest of the world uses the free codecs and the libdvdcss library just fine.
Another Linux roadblock gone eh.. Soon people will have to come up with real arguments.
Re:eye candy (Score:5, Insightful)
"Another Linux roadblock gone eh."
How about Blu-Ray?
Re: (Score:3, Interesting)
now all you have to do is reliably and legally run all software that runs on windows
I can tell you right now that I have been using Linux exclusively since 1995. I have not missed *any* Windows software.
I have always had a good office suite. Applix, then Star Office, now OpenOffice. I have always had netscape. I have always had modern tools of the time.
So, why would I want to run Windows software that is inherently more buggy, not designed for my platform of choice, and does not give me the freedom to inspe
Re: (Score:3, Informative)
You're an outlier.
Ad Hominem - You seek to label me in an attempt to diminish my opinion. FUD Warning FUD warning Danger Will Robinson.
It's only been since about 2003 or 2004 that Linux has been good enough for me to consider using it exclusively.
That may be your opinion, and you have every right too it, but Windows has NEVER been good enough for me.
That's just for software quality,
Nice generalization. Which software would that be? Come on, dig deep make up something.
and completely discounting software c
Re: (Score:3, Informative)
And what is RA3? Red Alert 3? If you're wanting to run a program designed for WINDOWS you need Wine installed first. Linux isn't Windows, which is apparently very hard for lots of people (like you) to understand. After that, you just pop the disc in, and install [winehq.org]. You might consider playing a better programmed gam
Re: (Score:3, Insightful)
Linux isn't Windows, which is apparently very hard for lots of people (like you) to understand.
I've never understood why people have no trouble understanding that with a Mac they can't use Windows software. But with a linux distro, they scream that they can't install the free* smiley pack they downloaded. This is the sole reason I haven't moved most of my family to linux and thus freeing myself from having to remove viruses and spyware every month.
*Free to install, and only US $60 to remove all the spyware that program it came with found!
Re:Welcome to Niggerbuntu (Score:5, Insightful)
Re: (Score:3, Funny)
I Sir, will have you know I'm a civilized primate and only fling poo at others during football season.
Re: (Score:3, Interesting)
Re: (Score:3, Interesting)
Actually, what I told my Ubuntu box to do was to look on the network for a printer. Even easier than putting a disk in. Then, when the printer became really flaky and stopped talking in TCP, and only in Appletalk, I told it to look for a CUPS server on my wife's iMac. Still easier than putting a disk in.
I wouldn't hesitate to recommend Ubuntu for somebody who didn't specifically want a specifically Windows program. It's ready to roll from the start. | https://tech.slashdot.org/story/09/02/06/0912213/is-it-windows-7-or-kde-4?sdsrc=nextbtmnext | CC-MAIN-2016-44 | refinedweb | 6,630 | 73.07 |
.
(Sorry for the potato quality of the MacBook camera)
Step 1: Support Me on Patreon
Consider supporting me in Patreon. I'd like to make better quality Instructables and more complex and interesting projects but most of these projects are quite costly and time consuming.
Thank you for your support.
Step 2: Materials
(2) Servos
(1) Arduino Uno
(1) Jumper cables
3D print one
or buy one... (includes the servos)
or make one of your own using scrap wood pieces and glue
Step 3: Installing Python
If you have programmed in Python (2.7) before, you can skip this step.
The language I used for the programs that track the faces and send the coordinates to the Arduino is Python so it must be installed in your computer to be able to run the code.
I personally use PyCharm (download here:) as my main python IDE but you can use other IDE's like WingIDE, or just use a text editor and terminal. Most of the time, I just use Sublime Text and Terminal (built-in command line tool for a Mac). But, if you've never programmed or used Python before, I recommend starting with a simple free IDE like WingIDE (download here:).
My code uses Python 2.7 so download that from here:...
You can also use homebrew (this only applies to Mac users) to install python by typing the following line to terminal:
brew install python
Step 4: Installing OpenCV and NumPy
If you already have OpenCV and NumPy installed in your computer you can skip this step.
The libraries I am using are OpenCV and NumPy so both libraries must be installed in your computer. OpenCV is an open-sourced computer vision library used to find faces in still images. I've adapted my code to work on frames of a live video. While, NumPy is a powerful numeric calculations library for Python.
For Windows users install OpenCV by following this guide:...
For Mac users I often had trouble installing it the way OpenCV recommended so I use homebrew to install it. Follow this set of instructions on how to install homebrew and OpenCV:...
Installing OpenCV can be a hit-or-miss sometimes, so test if you have OpenCV using the snippet of code below.
import cv2
If you do not get any error messages when you run it, then you have installed it successfully
To install NumPy, open your command line tool (Terminal for Mac/Linux users and Command Prompt for Windows users) and type the following
pip install numpy
Pip is a package that comes when you install Python in your computer so it works cross-platform.
Similarly, if you want to check if NumPy has been installed successfully, run the following snippet of code below:
import numpy
Step 5: Install ArduinoIDE
If you already have ArduinoIDE installed in your computer you can skip this step.
You can download the ArduinoIDE for the Arduino website:
Setup your your Arduino by connecting it to your computer and selecting the correct board and com port.
Step 6: Wiring the Servos
The wiring for this project is pretty straight forward. Follow the diagram above.
Step 7: Printing/Assembling the Camera Gimbal
You can 3d print this camera gimbal designed by Jake King:
Make sure that before you place the servos onto the camera gimbal that you centre it first. You can centre a servo by applying +5V VCC to the red pin and GND to the black pin. The servo will automatically rotate to the centre position.
Step 8: Run the Code
You can download the code from my Github. I will update this as I continue to improve the code....
First run the main.py using your Python IDE or if you are using terminal like I am, type python main.py. This will launch the video capture mode of your camera.
Then, compile and upload the Arduino sketch.
To exit the video capture mode, go to the screen that is running the video capture -- Python IDLE -- and hit "q" to quit.
Step 9: Calibrating the Camera
Once you run the code, you will get two prompts. One is the average distance of the person the track from the camera and the other is the field of view of the camera.
Step 10: More...
Change the training data, so if you want to film your cat for example, copy and paste the pictures of your cat to the training folder and the camera will be tracking your cat. You can learn more about training the HaarCascade here:...
Run the code in a Raspberry Pi, which is a pretty much the same steps since I use a Mac and Raspberry Pis are computers that run on Linux. Now, you'll have a more compact and portable camera setup.
If you want to improve the system you can also try playing with the code and controlling Depth and z-axis zoom functionality.
Step 11: Vote for Me!
I've entered this instructables to both the robotics and the photography contest. Please vote for me if you liked it.
6 Discussions
1 year ago
Yo!!!!
1 year ago
Hello, I'm following this tutorial and have found some hiccups.
First off, I fixed a thing in the code because python was giving errors, at the following place:
vid2photos.py - line 7, changed cv2.cv.CV_CAP_PROP_FPS to 5, which I saw in the docs of the newer OpenCV library. The new code therefore is cap.set(5, 1)
Now, if I run python main.py, I get the camera view screen where my camera video is shown. This all seems to work correctly. I also put the Arduino code on the Arduino, which also uploads.
Unfortunately, this is all that happens. I don't see like a square marking my face on the little view and my arduino doesn't do much either. I downloaded the current version of the github repo and am using a Windows PC.
Do you know what I can do about this? Are there steps I missed? Right now I'm at step 8, running python pixel2degree.py returns a few errors (the error at line 44 where == is missing instead of = I already fixed, the current error is that int doesn't have a a get index function, guessing because of (parseList[i])[i]), and before I move on to try and fix that I would like to know what might be going on.
Thanks in advance!
1 year ago
voted
And i must try it ?
Reply 1 year ago
Thank you! Awesome profile pic btw.
1 year ago
Nice I like it, I haven't used openCV before cause I wanted to track humans in general but now I realise that it's actually a neural network haha so I'll put it on the never ending projects to do list. Voted!
Reply 1 year ago
Thanks! If you're interested in training datasets you can also consider Clarifai. It's a really simple machine learning computer vision API, but it doesn't have localization yet. | https://www.instructables.com/id/Face-Tracking-Pan-Tilt-Camera/ | CC-MAIN-2019-09 | refinedweb | 1,183 | 72.26 |
import "github.com/apache/beam/sdks/go/pkg/beam/io/filesystem"
Package filesystem contains an extensible file system abstraction. It allows various kinds of storage systems to be used uniformly, notably through textio.
Read fully reads the given file from the file system.
Register registers a file system backend under the given scheme. For example, "hdfs" would be registered a HFDS file system and HDFS paths used transparently.
ValidateScheme panics if the given path's scheme does not have a corresponding file system registered.
Write writes the given content to the file system.
type Interface interface { io.Closer // List expands a patten to a list of filenames. List(ctx context.Context, glob string) ([]string, error) // OpenRead opens a file for reading. OpenRead(ctx context.Context, filename string) (io.ReadCloser, error) // OpenRead opens a file for writing. If the file already exist, it will be // overwritten. OpenWrite(ctx context.Context, filename string) (io.WriteCloser, error) }
Interface is a filesystem abstraction that allows beam io sources and sinks to use various underlying storage systems transparently.
New returns a new Interface for the given file path's scheme.
Package filesystem imports 5 packages (graph) and is imported by 7 packages. Updated 2018-12-12. Refresh now. Tools for package owners. | https://godoc.org/github.com/apache/beam/sdks/go/pkg/beam/io/filesystem | CC-MAIN-2019-13 | refinedweb | 207 | 52.56 |
From: Dom Lachowicz (doml@appligent.com)
Date: Tue May 14 2002 - 14:45:21 EDT
> > All others --
> > whether defined by Word or by users -- go at top level.
>
> I would rather see them prefixed by 'custom.' or something
> similar. We *may* chose to support other metadata standards in the
> future[1], and IMHO it's better if *all* keys are prefixed.
I'm going to prefix our custom keys with "abiword." as a namespace, of
sorts. User-defined tags will be prefixed with "custom."
> > 1. For those of you who read the above date examples
> > carefully, I'm not sure whether our canonical datetime output
> > should include the timezone offsets or not. For details, see:
> >
> >
>
> I would prefer if they did.
I'm preferring that they have GMT-XXXX included in there too, or
something similar.
> > 3. FWIW, I'm not sure it's all that safe to map Word's
> > company onto DC's publisher. Word actually has a separate
> > publisher keyword in their custom tag.
>
> Then we shouldn't map it, IMO.
MSWord's OLE Summary Streams do not have anything resembling a Publisher
tag (at least as standard). I can show you specs and implementations to
this effect, and you can probably show me a screenshot that proves your
point. I'm leaving things as-is for now until I'm convinced that we
should do otherwise.
I've just committed the proper prefixing stuff to CVS HEAD.
Dom
CVS:
----------------------------------------------------------------------
CVS: Enter Log. Lines beginning with `CVS:' are removed automatically
CVS:
CVS: Committing in .
CVS:
CVS: Modified Files:
CVS: src/text/ptbl/xp/pd_Document.h
CVS:
----------------------------------------------------------------------
This archive was generated by hypermail 2.1.4 : Tue May 14 2002 - 14:49:38 EDT | http://www.abisource.com/mailinglists/abiword-dev/02/May/0513.html | CC-MAIN-2014-35 | refinedweb | 286 | 77.13 |
Markdown Config
An alternative to enabling the Markdown Razor View Engine is to use new
MarkdownConfig API); }
Markdown Razor View Engine
Markdown Razor is the first HTML and Text (i.e. Markdown) view engine built into ServiceStack. The pages are simply plain-text Markdown surrounded by MVC Razor-like syntax to provide its enhanced dynamic functionality.
Configure
Markdown Razor support is available by regitering the
MarkdownFormat Plugin:
Plugins.Add(new MarkdownFormat()); // In ServiceStack.Razor
Extensible with custom base classes and Helpers
Markdown Razor is extensible in much the same way as MVC Razor is with the ability to define and use your own custom base class, Helpers and HtmlHelper extension methods. This allows you to call util methods on your base class or helpers directly from your templates.
You can define a base class for all your markdown pages by implementing MarkdownViewBase and register it in your AppHost with:
SetConfig(new HostConfig { //Replace prefix with the Url supplied WebHostUrl = "", //Set base class for all Markdown pages MarkdownBaseType = typeof(CustomMarkdownPage), //Define global Helpers e.g. at Ext. MarkdownGlobalHelpers = new Dictionary<string, Type> { {"Ext", typeof(CustomStaticHelpers)} } });
If a WebHostUrl is specified, it replaces all ~/ in all static website and Markdown pages with it. The MarkdownGlobalHelpers allow you to define global helper methods available to all your pages. This has the same effect of declaring it in your base class e.g:
public class CustomMarkdownPage : MarkdownViewBase { public CustomStaticHelpers Ext = new CustomStaticHelpers(); }
Which you can access in your pages via @Ext.MyHelper(Model). Declaring instance methods on your custom base class allows you to access them without any prefix.
MarkdownViewBase base class
By default the MarkdownViewBase class provides the following properties and hooks:
public class MarkdownViewBase { //Access Config, resolve dependencies, etc. public IAppHost AppHost; //This precompiled Markdown page with Metadata public MarkdownPage MarkdownPage; //ASP.NET MVC's HtmlHelper public HtmlHelper Html; //Flag to on whether you should you generate HTML or Markdown public bool RenderHtml; /* All variables passed to and created by your page. The Response DTO is stored and accessible via the 'Model' variable. All variables and outputs created are stored in ScopeArgs which is what's available to your website template. The Generated page is stored in the 'Body' variable. */ public Dictionary<string,object> ScopeArgs; //Called before page is executed public virtual void InitHelpers(){} //Called after page is executed before it's merged with website template if any public virtual void OnLoad(){} }
See this websites CustomMarkdownPage.cs base class for an example on how to effectively use the base class to Resolve dependencies, inspect generated variables, generate PagesMenu and other dynamic variables for output in the static website template.
Compared with ASP.NET MVC Razor Syntax
For the best way to illustrate the similarities with ASP.NET MVC Razor syntax I will show examples of the Razor examples in ScottGu's introductory Introducing "Razor" - a new view engine for ASP.NET
Note: more context and the output for each snippet and example displayed is contained in the Introductory Example and Introductory Layout Unit tests. For reference most features of Mardown Razor view engine are captured in the Core Template Unit Tests
Hello World Sample with Razor
The following basic page:
Can be generated in MVC Razor with:
And Markdown Razor with:
# Razor Example ### Hello @name, the year is @DateTime.Now.Year Checkout [this product](/Product/Details/@productId)
Loops and Nested HTML Sample
The simple loop example:
With MVC Razor:
With Markdown Razor:
@foreach (var p in products) { - @p.Name: (@p.Price) }
Parens-Free
At this point I think it would be a good to introduce some niceties in Markdown Razor of its own. Borrowing a page out of BrendanEich proposal for CoffeeScript's inspired Parens free syntax for JS.Next - you can simply remove the parens from all block statements e.g:
@foreach var p in products { - @p.Name: (@p.Price) }
Produces the same output, and to go one step further you can remove the redundant var as well 😃
@foreach p in products { - @p.Name: (@p.Price) }
Which makes the Markdown Razor's version a bit more wrist-friendly then its MVCs cousin 😃
If-Blocks and Multi-line Statements
If statements in MVC Razor:
If statements in Markdown Razor:
@if (products.Count == 0) { Sorry - no products in this category } else { We have products for you! }
Multi-line and Multi-token statements
Markdown Razor doesn't support multi-line or multi-token statements, instead you are directed to take advantage for variable syntax declarations, e.g:
Markdown replacement for Multi-line Statements
@var number = 1 @var message = ""Number is "" + number Your Message: @message
Integrating Content and Code
Does it break with email addresses and other usages of in HTML?
With MVC Razor
With Markdown Razor
Send mail to scottgu@microsoft.com telling him the time: @DateTime.Now.
Both View engines generate the expected output, e.g:
Identifying Nested Content
With MVC Razor
With Markdown Razor
@if (DateTime.Now.Year == 2011) { If the year is 2011 then print this multi-line text block and the date: @DateTime.Now }
Markdown Razor doesn't need to do anything special with text blocks since all it does is look for the ending brace '}'. This means if you want to output a brace literal '{' then you have to double escape it with ' or '.
HTML Encoding
Markdown Razor follows MVC Markdown behaviour where by default content emitted using a @ block is automatically HTML encoded to better protect against XSS attack scenarios.
If you want to avoid HTML Encoding you have the same options as MVC Razor where you can wrap your result in @Html.Raw(htmlString) or if you're using an Extension method simply return a MvcHtmlString instead of a normal string.
Markdown also lets you mix and match HTML in your markdown although any markdown between the tags does not get converted to HTML. To tell Markdown Razor to evaulate the contents inside html <tag>...</tag>'s need to prefixed with
^, e.g. (taken from the /Views/Search.md page):
^<div id="searchresults"> @foreach page in Model.Results { ### @page.Category > [@page.Name](@page.AbsoluteUrl) @page.Content } ^</div>
If we didn't prefix
^ we would see
### @page.Category ... repeating.
Layout/MasterPage Scenarios - The Basics
Markdown Razor actually deviates a bit from MVC Razor's handling of master layout pages and website templates (we believe for the better 😃.
Simple Layout Example
MVC Razor's example of a simple website template
Rather then using a magic method like
@RenderBody() we treat the output Body of View as just another variable storing the output a in a variable called 'Body'. This way we use the same mechanism to embed the body like any other variable i.e. following the place holder convention of <--@VarName--> so to embed the View page output in the above master template you would do:
<!DOCTYPE html> <html> <head> <title>Simple Site</title> </head> <body> <div id=""header""> <a href=""/"">Home</a> <a href=""/About"">About</a> </div> <div id=""body""> <!--@Body--> </div> </body> </html>
By default we use convention to select the appropriate website template for the selected view where it uses the nearest default.shtml static template it finds, looking first in the current directory than up parent directories.
Your View page names must be unique but can live anywhere in your /View directory so you are free to structure your website templates and view pages accordingly. If for whatever reason you need more granularity in selecting website templates than we provide similar options to MVC for selecting a custom template:
Select Custom Template with MVC Razor
With Markdown Razor
Note: In addition to @Layout we also support the more appropriate alias of @template.
Layout/MasterPage Scenarios - Adding Section Overrides
MVC Razor allows you to define sections in your view pages which you can embed in your Master Template:
With MVC Razor:
And you use in your website template like so:
With Markdown Razor:
Markdown Razor supports the same @section construct but allows you to embed it in your template via the standard variable substitution convention, e.g:
@section Menu { - About Item 1 - About Item 2 } @section Footer { This is my custom footer for Home }
And these sections and body can be used in the website template like:
<!DOCTYPE html> <html> <head> <title>Simple Site</title> </head> <body> <div id="header"> <a href="/">Home</a> <a href="/About">About</a> </div> <div id="left-menu"> <!--@Menu--> </div> <div id="body"> <!--@Body--> </div> <div id="footer"> <!--@Footer--> </div> </body> </html>
Encapsulation and Re-Use with HTML Helpers
In order to encapsulate and better be able to re-use HTML Helper utils MVC Razor includes a few different ways to componentize and re-use code with HTMLHelper extension methods and declarative helpers.
Code Based HTML Helpers
HtmlHelper extension methods with MVC Razor:
Since we've ported MVC's HtmlHelper and its Label, TextBox extensions we can do something similar although to make this work we need to inherit from the MarkdownViewBase<TModel> generic base class so we know what Model to provide the strong-typed extensions for. You can do this using the @model directive specifying the full type name:
@model ServiceStack.ServiceHost.Tests.Formats.Product <fieldset> <legend>Edit Product</legend> <div> @Html.LabelFor(m => m.ProductID) </div> <div> @Html.TextBoxFor(m => m.ProductID) </div> </fieldset>
Whilst we ported most of MVC HtmlHelper extension methods as-is, we did rip out all the validation logic which appeared to be unnecessarily complex and too coupled with MVC's code-base.
Note: Just as it is in MVC the @model directive is a shorthand (which Markdown Razor also supports) for:
@inherits Namespace.BaseType<Namespace.ModelType>
Whilst we don't support MVC Razors quasi C# quasi-html approach of defining declarative helpers, we do allow you to on a per instance basis (or globally) import helpers in custom Fields using the @helper syntax:
@helper Prod: MyHelpers.ExternalProductHelper <fieldset> <legend>All Products</legend> @Prod.ProductTable(Model) </fieldset
You can register Global helpers and a custom base class using the MarkdownGlobalHelpers and MarkdownBaseType AppHost Config options as shown at the top of this article.
Summary
Well that's it for the comparison between MVC Razor and Markdown Razor as you can see the knowledge is quite transferable with a lot of cases the syntax is exactly the same.
As good as MVC Razor is with its wrist-friendly and expressive syntax, we believe Razor Markdown is even better! Where thanks to Markdown you can even dispense with most of HTML's boiler plage angle brackets 😃 We think it makes an ideal solution for content heavy websites like this one.
Unlike ASP.NET's MVC Razor, Markdown Razor like all of ServiceStack is completely Open Source and as such we welcome the contribution from the community via new features, Unit and regression tests, etc. | https://docs.servicestack.net/markdown-razor.html | CC-MAIN-2022-05 | refinedweb | 1,791 | 50.57 |
This is my first ever how-to so I apologize in advance for any grammar mistakes or spelling errors.
Introduction
This is what I'm planning on being a series of c++ coding articles. This one in particular are gonna be a series of programs for various password cracking methods. Mind you, if you are gonna be a script kiddie please at least try to bother learning the language.
Planning
Obviously all programs don't just pop out of the air, but take time to plan what's gonna be in the final product.
Features:
- Be able to allow the user to input a hash
- Be able to allow the user to input a text file for dictionary attack
- Be able to take each word in text file and compare to the hash
- Be able to tell the user which password matches with the hash
I should mind you that this type of attack requires a rainbow table like file.
Code Part 1
So now we must do actual code:
char string inHash()
{
char string passHash
cout << "Enter in the password hash: \n" <<
cin >> passHash >>
Firstly we must be able to allow to type any hash that we please and store that hash in a variable for future reference. The 'char string inHash()' is pretty much separating this part of the code from the rest so that we may concentrate on this part individually. Pretty simple, eh?
Code Part 2
Now the main big part of the code:
int main()
ifstream inFile;
string fileName;
cout << "Enter in File: \n" <<
cin >> fileName >>
Obviously for all you educated in this programming language you can tell what I'm doing here. ifstream is part of a declaration that I haven't bothered to mention until now
#include <cstido>
#include <cstdlib>
#include <fstream>
#include <cstring>
#include <string>
These are all the declarations that is part of this program and should be at the top before the rest of the code.
Code Part 3
Ok so now since we got the file saved in a variable we must now be able to read and compare the contents of the file to the hash right?
int nLine = 1
int nNewline
getline (fileName, line nline);
While nLine != passHash
{
nNewline = nLine + 1
nLine = nNewline
getline (fileName, line nNewline);
}
I've actually used a simple equation to check each line by line. The 'getline(fileName, line nNewline);' is where the program takes each line that equals to the equation 'nNewline = nLine + 1' and compares that line to the hash. The while loop was the best option for this type of method, thus this program continues until nLine = passHash.
Code Part 4
cout << "The password is: \n" <<
<< passHash "=" nLline <<
cout << "Press Enter to exit" <<
return 0;
Of course we needed to finish the program up in a nice and tidy way and thus if you can't flatly see, pretty much the program tells what the password is.
Finish
Please leave your comments down below and give me some loveins. I would gladly take any ways to improve in both my how-to's and my coding. Finally, feel free to test this code out and edit the code as you like. :D
The link to the full code is down below, mind you that I did add my own little gizmo to the code, but here it is:
Oh, I didn't test the code out ;)
- Follow Null Byte on Twitter, Flipboard, and YouTube
5 Comments
Nice! +1
As a side note, it's much easier to take screenshots of your source code and upload them instead of typing your code out, the editor isn't very code friendly!
Also, if you could please link the source code through pastebin! It makes the whole process much easier!
-Defalt
Thank you.... and I'm on a chromebook so there's a few issues with the whole screen shots.
Ah, I understand completely!
-Defalt
Good tutorial! +1
Just FYI: You're missing quite a lot of semicolons. Not only did you not test the code, but you must not have compiled it either.
Share Your Thoughts | https://null-byte.wonderhowto.com/how-to/c-hash-cracker-0166701/ | CC-MAIN-2020-05 | refinedweb | 685 | 75.03 |
There's naturally an "I want to go as fast as possible" theme in many of the response. This is good. There's also a naturally a tendency to view version numbers as dates. This is not accurate. Putting more in a release doesn't make it come sooner. Using no version numbers whatsoever, here are the most likely potential release windows based on effort: - Any release of Jakarta EE that is identical to Java EE 8. Achievable this summer or early fall. Low risk. - Any release of Jakarta EE that contains any form of namespace change. Achievable late fall to winter. Medium risk. - Any release of Jakarta EE that contains any new features or pruning. Achievable mid to late next year. High risk. Others can chime in, but I don't believe the rough framing of these dates is way off. -David | https://www.eclipse.org/lists/jakartaee-platform-dev/msg00106.html | CC-MAIN-2019-26 | refinedweb | 143 | 76.62 |
This tip shows the simple way to fill a PDF Form Template programmatically. I know that already lots of articles have been posted by other experts, software developers/programmers which can be easier to adopt for anyone. However, I posted this tip to enhance my programming skills and abilities.
For this, I use Microsoft Visual C# 2010 Express as the programming environment and iText PDF Library to fill PDF form programmatically.
The iText PDF Library is free and open source software, & there is a C# port - iTextSharp, used for creating and manipulating PDF documents programmatically. The iText is available on SourceForge.net.
I have created a sample PDF Form Template to fill it (see the download section above). Before filling a PDF Form, we should be aware of the fields used in the PDF Form so that we can fill the appropriate values in it. The iText PDF Library allows 7 fields to be used programmatically as follows:
From these 7 fields, this tip represents how to fill the Text Box field programmatically. The sample PDF Template contains 5 fields (softwarename, softwaretype, license, category, platforms).
To fill the PDF Form Template programmatically - the most important thing is that - one should know how many fields are present in that PDF form and their associated internal names, so that appropriate values can filled in it.
From the C# Port of iText - iTextSharp - The Dictionary variable Fields of class AcroFields gives a list of field names & their associated values present in PDF Form Template. For demonstrating this, I created a Sample PDF Form Template using Open Office 4.0.1. Editable PDF Forms created through other than Adobe products does not allow user to save data into same PDF file.
Dictionary
Fields
AcroFields
Few lines of code in Actual Code are quite lengthy hence I put here pseudo code for Filling PDF Form (see source code file for actual code). In order to fill the fields programmatically, first we need to include a namespace iTextSharp.text.pdf and then create an object for reading and writing PDF document.
iTextSharp.text.pdf
//PDFReader Class is used to read a PDF file -
//there are 12 methods provided by this class to read PDF file
//Among those methods, the simplest way is to just give a
//file name along with its complete path (as shown below)
PdfReader rdr = new PdfReader("pdf_file_path");
//PDFStamper class allow to manipulate the content of existing PDF file.
//There are 3 methods provided to create an object of PDFStamper Class.
//Among which I choose, is required 2 arguments -
//a PDFReader object and a new Output File as a Stream.
PdfStamper stamper = new PdfStamper(rdr,
new System.IO.FileStream("output_file_path","file_mode"));
The SetField method of class AcroFields is used to assign the values for fields in PDF form. AcroFields exists in both PDFReader and PDFStamper class. In PDFReader, fields retrieved are read-only, whereas in PDFStamper it allows to manipulate the value of fields. Thus we must use the object of class AcroFields along with the object of PDFStamper class.
SetField
PDFReader
PDFStamper
Figure 1 shows a GUI of Application to be filled programmatically into PDF Form. This application allows you to generate Editable and Non-editable form of PDF Form Template. In non-editable form, the PDF file is no longer able to fill the fields in both ways - manually or computerized. But in Editable form, we are still able to manipulate the fields of PDF file.
The following code snippet shows how to assign a particular value to fields of PDF form.
//returns a Type of License choose for a Software
string getLicense(){
string license = "";
if (rbDemo.Checked == true) license = "Demo";
if (rbFreeware.Checked == true) license = "Freeware";
if (rbShareware.Checked == true) license = "Shareware";
return license;}
//Assign value for 'Software Name' Field
stamper.AcroFields.SetField("softwarename", txtSoftware.Text);
//Assign value for 'Software Type' Field
type = "web-based | standalone | cloud-based"
stamper.AcroFields.SetField("softwaretype", type);
//Assign value for 'Licensed As' Field
stamper.AcroFields.SetField("license", getLicense());
//Assign value for 'Category' Field
stamper.AcroFields.SetField("category", cbCategory.SelectedItem.ToString());
//Assign value for 'Supported Platforms' Field
foreach (string item in lbPlatform.SelectedItems) platforms += item + ", ";
stamper.AcroFields.SetField("platforms", platforms);
//Select Output PDF Type
if (rbEdit.Checked == true)
stamper.FormFlattening = false; //Editable PDF Form
else stamper.FormFlattening = true; //Non-Editable PDF Form
stamper.Close();//Close a PDFStamper Object
rdr.Close(); //Close a PDFReader Object
Figure 2 is a screenshot of final output PDF form in non-editable form.
For successful execution of this program, you need to download "iTextSharp.dll" from here.
Using Demo: Place the "iTextSharp.dll" file into the same directory of this program.
Using Source Code: Add the "iTextSharp.dll" into "References" section present in Solution Explorer section of IDE. (I use Microsoft Visual C# 2010 Express as IDE for development of this program.)
The tip describes the simple method for filling fields in PDF Form template programmatically. The iText PDF Library is the powerful tool for creating and manipulating the PDF files programmatically. It provides ease to handle PDF files programmatically.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
// Force each of the non-editable fields to be displayed
AcroFields pdfFormFields = stamper.AcroFields;
pdfFormFields.GenerateAppearances = true;
foreach (KeyValuePair<string, AcroFields.Item> de in pdfFormFields.Fields)
{
pdfFormFields.RegenerateField(de.Key.ToString());
}
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
News on the future of C# as a language | http://www.codeproject.com/Tips/679606/Filling-PDF-Form-using-iText-PDF-Library | CC-MAIN-2014-10 | refinedweb | 928 | 56.96 |
Maybe you have got a mini cable car or any thing else that you want to know it's speed.
There are many ways on HOW-TO mesure speed, but today we will be working with basic speed measuring. I was just building my model train when this strange idea came into my head. I HAVE TO KNOW THE SPEED OF THE TRAIN... Since trains are close to ground and make shadow underneath it will be easy to measure approximate SPEED.
This really works and outputs real speed. It is not the exact speed but close to exact. Let's stop talking and do some DIYing.
What Can This Be Used For?
- It can be used for measuring speed of objects close to surfate that make shadow.
What Does This Guide Include?
- Arduino code
- Processing 2 code
- How to make speed sensor
What Do I Need For This Project
- ARDUINO (I'm using Arduino Mega 2560, but works fine on Arduino Uno ,...)
- LDR (Light Dependent Resistor)
- 500 Ohms Resistor
- Processing (can be downloaded HERE)
You will need some expirience with Arduino and electronics. You don't need exactly 500 Ohms resistor, but if you use resistor with different resistance you will have to change some values in Arduino code.
Step 1: Connecting LDR to Arduino
Connect 5V to one leg of resistor. Connect the other leg (of resistor ) to one leg of LDR . Connect A0 (analog input) to that same leg of LDR. Connect GND to other pin of the LDR.
That is all about the connections. It's not rocket science! :)
Step 2: Programing Arduino
Connect your Arduino to computer and look up for the port (COM port on windows) on which Arduino is connected to. Remember it. You'll need it later. Then just upload this code to Arduino. Just fill in object length you can play with other variables too if you want.
int ldr = 0; int if_val = 1; int ldr_value = 0; unsigned long time; unsigned long time2; float time3; float sped = 0; int val; int start_val = 0; //analog pin to which LDR is connected float object_length = 5.5; //object length in cm int sensitivity = 40;//Less more sensitive, more less sensitive void setup() { Serial.begin(9600); //start serial monitor object_length = object_length * 1000; val = analogRead(ldr); start_val = val + sensitivity; }void loop() { ldr_value = analogRead(ldr); if (ldr_value > start_val){ if (if_val == 1){ if_val = 0; time = millis(); } else { } }else{ if(if_val == 0){ if_val = 1; time2 = millis(); time3 = (time2 - time); sped = (object_length / time3) / 100; Serial.println(sped); } } }
Step 3: Programing Processing
We won't stop at just reading the speed. We want an app that will read the output and display it in different units. There is no easier way than a simple program in PROCESSING. You can download it here.
Just copy and paste this code into Processing, connect Arduino to computer, run code on Arduino and press start button in Processing. And enter your port .
import processing.serial.*; PFont f; float val = 0; Serial port; // The serial port object String Ardport = ""; //Enter the port on which Arduino is connected void setup() { size(200,200); f = createFont("Arial",16,true); // Arial, 16 point, anti-aliasing on // In case you want to see the list of available ports // println(Serial.list()); port = new Serial(this, Ardport, 9600); } void draw() { } // Called whenever there is something available to read void serialEvent(Serial port) { String inString = port.readStringUntil('\n'); if (inString != null) { // trim off any whitespace: inString = trim(inString); // convert to an float println(inString); float val = float(inString); float val1 = val * 3.6; background(255); textFont(f,16); fill(0); text("Speed : " + val + "M/s",10,50); text("Speed : " + val1 + "Km/h",10,75); println( "Raw Input:" + val); } }
Step 4: How Does It Work
The program waits for a shadow and then it measures time between the start of the shadow and the end of the shadow. So enter exact length of your object because if you don't it won't show real speed.
As you know speed is (length of ) PATH / TIME. So PATH is length of object and TIME is time between the start of the shadow and the end of the shadow.
I will make another Instructabe with IR sensor to mesure speed.
WARNING:
This speed meter is not reliable and needs same brightness through whole process.
If you have QUESTIONS about this project FELL FREE TO ASK. :)
You got no reason why not to press that cute FAVOURITE and FOLLOW buttons. :) | http://www.instructables.com/id/Arduino-LDR-Speedometer/ | CC-MAIN-2017-26 | refinedweb | 743 | 72.76 |
The Poisson Deviance for Regression
You’ve probably heard of the Poisson distribution, a probability distribution often used for modeling counts, that is, positive integer values. Imagine you’re modeling “events”, like the number of customers that walk into a store, or birds that land in a tree in a given hour. That’s what the Poisson is often used for. From the perspective of regression, we have this thing called Generalized Poisson Models (GPMs), where the response variable you are modeling is some kind of Poisson-ish type distribution. In a Poisson distribution, the mean is equal to the variance, but in real life, the variance is often greater (over-dispersed) or less than the mean (under-dispersed).
One of the most intuitive and easy to understand explanations of GPMs is from Sachin Date, who, if you don’t know, explains many difficult topics well. What I want to focus on here is something kind of related — the Poisson deviance, or the Poisson loss function.
Most of the time, your models, whether they are linear models, neural networks, or some tree-based method (XGBoost, Random Forest, etc.) are going to use mean squared error (MSE) or root mean squared error (RMSE) as your objective function. As you might know, MSE tends to favor the median, and RMSE and tends to favor the mean of a conditional distribution — this is why people worry about how the loss function works with the outliers in their data sets. However, most of the time, with some normally distributed response, you expect the values of the response (if z-score normalized, especially), to be some kind of Gaussian with mean 0 and unit variance. However, if you are dealing with count data — all positive integers — and you don’t scale or transform the response, then maybe a Poisson distribution is the better description of the data. When the mean of a Poisson is over 10, and especially over 20, it can be approximated with a Gaussian; the heavy tail that you see when the mean (the rate) is low tends to disappear.
Take a look at the formula in the beginning of the post — the y_i is the ground truth, and the mu_i is your model’s prediction. Obviously, if y_i = mu_i, then you have a ln(1), which is 0, canceling out the first term, and the second as well, giving a deviance of 0. What’s interesting is what happens when your model errs on either side of the actual value. In the following snippet, I plot the loss of one example where the ground truth is set at 20 — meaning that we assume the variable is Poisson distributed with mean/rate = 20.
from matplotlib import pyplot as plt
import numpy as npxticks = np.linspace(start = 0.0000, stop = 50, num = int(1e4))
yi = 20
losses = [2 * (yi * np.log(yi / x) - (yi - x)) for x in xticks]
losses = np.array(losses)
plt.scatter(xticks, losses)
Here’s the graph:
So what does this tell you? Well, if your model guesses between 10 and even 50, it doesn’t look like the deviance is that high — at least not compared with what happens if you guess 1 or 2. This is obviously a lot different from other loss functions:
So if your model outputs a 0 when the ground truth as 20, then, if you’re using MSE, the loss is 20² = 400, whereas, for the Poisson deviance, you would get infinite deviance, which is, uh, not good. The model isn’t supposed to really output 0, since there’s really not much probability mass there. If your model outputted something like .0001, your loss would be in the ballpark of >400. If your model outputs 30, and the ground truth was 20, your loss would only be around 3.78, where was in MSE terms, it would be 10² = 100.
So the moral of the story is that yes, the model should favor the mean of the distribution, but using Poisson deviance means you won’t penalize it that heavily for being biased ABOVE the mean — like if your model kept outputting 30 or 35, even if the distribution’s mean is 20. But your model will be penalized very heavily for outputting anything between 0 and 10. Why?
Well, what is the (log)likelihood function of the Poisson distribution?
Someone smarter than me did this:
Now if you want to look at something a little more tractable, take the log of that —
If your model keeps outputting 0, then t = 0, and that t * ln(lambda) expression is going to be 0, leaving you with a large negative number from the first term. In fact, the only way that your LL can even be close to “good” is if t — the sum of the observations — is sufficiently large for the second term to counteract the first term. That’s why, I believe, low values between 0–10 (again, assuming the rate/mean is 20) are penalized so heavily.
OK, so let’s say you’re sufficiently interested in the Poisson deviance. How can you calculate it? Well, from the perspective of the Python user, you have sklearn.metrics.mean_poisson_deviance — which is what I have been using. And you have Poisson loss as a choice of objective function for all the major GBDT methods — XGBoost, LightGBM, CatBoost, and HistGradientBoostingRegressor in sklearn. You also have PoissonRegressor() in the 0.24 release of sklearn…in any case, there are many ways you can incorporate Poisson type loss into training.
Does it help, though? Tune in for my next project, where I will see just how these Poisson options play out on real life data sets. | https://peijin.medium.com/the-poisson-deviance-for-regression-d469b56959ce?source=---------3---------------------------- | CC-MAIN-2021-25 | refinedweb | 952 | 66.98 |
On Wed, Oct 01, 2003 at 02:26:19PM +0100, Matthew Wilcox wrote: > > I reviewed the dependency list for a file this morning to see why it was > being unnecessarily recompiled (a little fetish of mine, mostly harmless). > I was a little discombobulated to find this line:Mmm discombobulation. > $(wildcard include/config/higmem.h) \ > > Naturally, I assumed a typo somewhere. It turns out there is indeed > a CONFIG_HIGMEM in include/linux/mm.h, but it's in a comment. The > fixdep script doesn't parse C itself, so it doesn't know that this should > be ignored.Maybe it should be taught to parse comments? There are zillions of#endif /* CONFIG_FOO */braces in the tree. Why is this one special ? Dave-- Dave Jones unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2003/10/1/133 | CC-MAIN-2017-04 | refinedweb | 150 | 59.3 |
Map in STL is used to hash key and value. We generally see map being used for standard data types. We can also use map for pairs.
For example consider a simple problem, given a matrix and positions visited, print which positions are not visited.
// C++ program to demonstrate use of map // for pairs #include <bits/stdc++.h> using namespace std; map<pair<int, int>, int> vis; // Print positions that are not marked // as visited void printPositions(int a[3][3]) { for (int i = 0; i < 3; i++) for (int j = 0; j < 3; j++) if (vis[{ i, j }] == 0) cout << "(" << i << ", " << j << ")" << "\n"; } int main() { int mat[3][3] = { { 0, 1, 2 }, { 3, 4, 5 }, { 6, 7, 8 } }; // Marking some positions as visited vis[{ 0, 0 }] = 1; // visit (0, 0) vis[{ 1, 0 }] = 1; // visit (1, 0) vis[{ 1, 1 }] = 1; // visit (1, 1) vis[{ 2, 2 }] = 1; // visit (2, 2) // print which positions in matrix are not visited by rat printPositions(mat); return 0; }
Output:
(0, 1) (0, 2) (1, 2) (2, 0) (2, 1)
This article is contributed by Abhishek. | https://www.geeksforgeeks.org/map-pairs-stl/ | CC-MAIN-2018-09 | refinedweb | 183 | 59.16 |
On Mon, 23 Jan 2006, Dave McCracken wrote:> --On Friday, January 20, 2006 21:24:08 +0000 Hugh Dickins> <hugh@veritas.com> wrote:> > I'll look into getting some profiles.Thanks, that will help everyone to judge the performance/complexity better.> The pmd level is important for ppc because it works in segments, which are> 256M in size and consist of a full pmd page. The current implementation of> the way ppc loads its tlb doesn't allow sharing at smaller than segment> size.Does that make pmd page sharing strictly necessary? The way you describeit, it sounds like it's merely that you "might as well" share pmd page,because otherwise would always just waste memory on PPC. But if it'ssignificantly less complex not to go to that level, it may be worth thatwaste of memory (which would be a small fraction of what's now wasted atthe pte level, wouldn't it?). Sorry for belabouring this point, whichmay just be a figment of my ignorance, but you've not convinced me yet.And maybe I'm exaggerating the additional complexity: you have, afterall, been resolute in treating the levels in the same way. It's morea matter of offputting patch size than complexity: imagine splittingthe patch into two, one to implement it at the pte level first, thena second to take it up to pmd level, that would be better.> I needed a function that returns a struct page for pgd and pud, defined in> each architecture. I decided the simplest way was to redefine pgd_page and> pud_page to match pmd_page and pte_page. Both functions are pretty much> used one place per architecture, so the change is trivial. I could come up> with new functions instead if you think it's an issue. I do have a bit of> a fetish about symmetry across levels :)Sounds to me like you made the right decision.> >> +#define pt_decrement_share(page)> >> +#define pt_shareable_vma(vma) (0)> >> +#define pt_unshare_range(vma, address, end)> >> +#endif /* CONFIG_PTSHARE */> > > > Please keep to "#define<space>MACRO<tab(s)definition" throughout:> > easiest thing would be to edit the patch itself.> > Sorry. Done. I thought the standard was "#define<TAB>" like all the other> C code I've ever seen. I didn't realize Linux does it different.No, I can't claim "#define<space>" is a Linux standard: I happen toprefer it myself, and it seems to predominate in the header files I'vemost often added to; but I was only trying to remove the distractinginconsistency from your patch, whichever way round.> >> static inline int copy_pmd_range(struct mm_struct *dst_mm, struct> >> mm_struct *src_mm, pud_t *dst_pud, pud_t *src_pud, struct> >> vm_area_struct *vma,> >> - unsigned long addr, unsigned long end)> >> + unsigned long addr, unsigned long end, int shareable)> >> {> > > > I'd have preferred not to add the shareable argument at each level,> > and work it out here; or is that significantly less efficient?> > My gut feeling is that the vma-level tests for shareability are significant> enough that we only want to do them once, then pass the result down the> call stack. I could change it if you disagree about the relative cost.I now think cut out completely your mods from copy_page_range and itssubfunctions. Given Nick's "Don't copy ptes..." optimization there,what shareable areas will reach the lower levels? Only the VM_PFNMAPand VM_INSERTPAGE ones: which do exist, and may be large enough toqualify; but they're not the areas you're interested in targetting,so I'd say keep it simple and forget about shareability here. Thesimpler you keep your patch, the likelier it is to convince people.And that leads on to another issue which occurred to me today, inreading through your overnight replies. Notice the check on anon_vmain that optimization? None of your "shareable" tests check anon_vma:the vm_flags shared/write check may be enough in some contexts, butnot in all. VM_WRITE can come and go, and even a VM_SHARED vma mayhave anon pages in it (through Linus' strange ptrace case): you mustkeep away from sharing pagetables once you've got anonymous pages inyour vma (well, you could share those pagetables not yet anonymized,but that's just silly complexity). And in particular, the prio_treeloop next_shareable_vma needs to skip any vma with its anon_vma set:at present you're (unlikely but) liable to offer entirely unsuitablepagetables for sharing there.(In the course of writing this, I've discovered that 2.6.16-rcsupports COWed hugepages: anonymous pages without any anon_vma.I'd better get up to speed on those: maybe we'll want to allocatean anon_vma just for the appropriate tests, maybe we'll want toset another VM_flag, don't know yet... for the moment you ignorethem, and assume anon_vma is the thing to test for, as in 2.6.15.)One other thing I forgot to comment on your patch: too many largishinline functions - our latest fashion is only to say "inline" on theone-or-two liners.Hugh-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2006/1/24/145 | CC-MAIN-2014-35 | refinedweb | 847 | 60.35 |
Getting Mutual Funds Net Asset Value via Python using mftools API
6 min read
Net Asset Value (NAV) is one of the most fundamental data points when it comes to the Mutual Funds industry. Today we will learn about NAV and how to fetch NAV details in python using
mftools API.
Net Asset Value(NAV) is the market value of all assets less any liabilities divided by the total shares outstanding of the fund. This figure is published daily by Association of Mutual Funds in India (AMFI) in India. You might have heard the tagline somewhere "Mutual Funds Sahi Hai", Yes, that's by AMFI.
Coming back to calculation of NAV:
\[ NAV = (Assets - Liabilities) / Total_Outstanding_Shares \]
Daily change in NAV of a mutual fund scheme indicates a rise or dip in the scheme's assets. While NAV helps us to analyze how the mutual fund performs daily, it does not indicate how lucrative a fund is, i.e., a fund with higher NAV is not superior to a fund with lower NAV, it is quite just possible that the fund with Higher NAV has less outstanding shares than the fund with lower NAV.
Economic Times has summarized NAV in a good article here. You can collect NAV to build your own database of mutual fund prices, you can then wish to analyze this data over a certain time period which can tell you about the performance of a certain mutual fund.
Let’s look at the Python Implementation Now
Installing the necessary Library.
As we mentioned, we will be using the open-source
mftool library, which is basically a wrapper to scrape data from the AMFIs website in a very easy-to-use manner.
You can install
mftool via
pip install mftool in your Anaconda prompt.
Making the Necessary Imports
import pandas as pd from mftool import Mftool
The documentation of the API can be found by clicking here.
In the next step, we will initialize an instance of the API and use the function
get_scheme_codes to get all available scheme codes in India right now. If you are confused,
scheme_code is basically a unique code assigned by AMFI to each and every Mutual Fund scheme.
mf = Mftool() scheme_codes = mf.get_scheme_codes()
Note:- The output of this code snippet will be in dictionary format, so we will further take the keys from the output.
Taking only keys from the dictionary as our next function only requires the
scheme_code and not
scheme_name
scheme_code_list = [x for x in scheme_codes.keys()] #the above is list comprehension which is a #much efficient way of creating lists.
Before we go and start collecting this data in a loop for all scheme codes, let's see how we can use the function
get_scheme_historical_nav_for_dates to fetch data for AMFI code
119551 between dates
01-01-2021 and
11-05-2021
nav_data = mf.get_scheme_historical_nav_for_dates('119551', '01-01-2021', '11-05-2021') nav_data
Did you notice, how simple this was? It's amazing; you do not have to directly deal with the AMFI website; just call the function and job done. Let's now build our own function,
HistoricalNav, which will take a list of scheme codes, to & from date, and return a dataframe with all price information.
If you prefer, you can directly skip to the Google Collab Notebook by clicking here.
Let us learn some keywords before entering the code:-
Assert:- Used when debugging the code. Raises an assertion value error message if the assertion is false. It is very similar to try and except in python error handling.
Series:- A Pandas Series is like a column in a table. It is a one-dimensional array holding data of any type.
def HistoricalNav(scheme_code_list, start_date, end_date): # Assert keyword is a debugging tool. # Below assert keyword check whehther the scheme_code_list is a list and it is present, if not it raises an assertion failure message. assert (isinstance(scheme_code_list, list) is True), "Arguement scheme_code_list should be a list" assert (isinstance(start_date, str) is True), "start_date must be a str in %d-%m-%Y format" # checks whether start date is present and is in correct format. assert (isinstance(end_date, str) is True), "end_date must be a str in %d-%m-%Y format" # checks whether end date is present and is in correct format main_df = pd.DataFrame() #empty dataframe for schemes in scheme_code_list: data = mf.get_scheme_historical_nav_for_dates(schemes, start_date, end_date) # requesting NAV data from the api. df = pd.DataFrame(data['data']) df['scheme_code'] = pd.Series([data['scheme_code'] for x in range(len(df.index))]) #adding Pandas Series(scheme_code) as a column in Pandas Dataframe. df['scheme_name'] = pd.Series([data['scheme_name'] for x in range(len(df.index))]) #adding Pandas Series(scheme_name) as a column in Pandas Dataframe. df = df.sort_values(by = 'date') # sorting the values of every Scheme code based on Date main_df = main_df.append(df) # appending the data in the main_df dataframe. main_df = main_df[['scheme_code', 'scheme_name', 'date', 'nav']] #creating names of dataframe columns main_df.reset_index(drop = True, inplace = True) return main_df #Returning the required Dataframe.
The
NAV_Data returns the data frame by checking if the values are present or not.
For example when it encounters a day when the market was closed. It returns the
latest Value of NAV when the market last opened.
# Function to return NAV data def NAV_Data(start,end): try: values_df = HistoricalNav(scheme_code_list = scheme_code_list[0:5], start_date= start, end_date= end) #to get the data return values_df except KeyError: #if the data is not available on that date, going on previous date to get latest data start=datetime.strptime(start, '%d-%m-%Y') - timedelta(1) # gets to previous day where the data is available. return NAV_Data(start.strftime("%d-%m-%Y"),end) #returns the required data.
We will call this function and save the output in a variable.
# Calling the function and saving the output in a variable. # To get the latest NAV set the start_date and end_date as the last traded date in 'dd/mm/yyyy' format. # Note:- To get data of a particular date, enter same start_date and end_date. start_date= "01-05-2021" # enter the date in "dd-mm-yyyy" format end_date = "10-05-2021" # enter the date in "dd-mm-yyyy" format values_df = NAV_Data(start_date,end_date) #calling function NAV_Data values_df
The Output of the function will be the data frame.
Note:- Here, I have used only 5 scheme codes to show the output.
If we want NAV for a single scheme code from the above data frame:-
values_df[values_df['scheme_code'] == 119552]
Pro Tip: To get the latest NAV set the start_date and end_date as the last traded date in 'DD-MM-YYYY' format in the HistoricalNav function itself.
We have also uploaded the whole code on Github in the
tradewithpython-blogs repository. You can access the
.py file for this article. If you liked this article, you could also buy me a book to show some appreciation. 😇
Did you find this article valuable?
Support Trade With Python by becoming a sponsor. Any amount is appreciated! | https://tradewithpython.com/getting-mutual-funds-net-asset-value-via-python-using-mftools-api | CC-MAIN-2022-21 | refinedweb | 1,158 | 63.29 |
.
I’m yet to read Matt Harrison’s guide, but to date my favourite explanation of decorators comes in the form of a stackoverflow answer
I agree… that is a good explanation!
What you refer to as “decorator line”/“decorator invocation line”/“annotation line” could simply be called a “decoration”. The decorator is the function doing the work whereas the decoration is simply the thing that the end users sees in the code.
+1
I like it! It is short and precise and easy to use. And the similarity between the words — “decorator” and “decoration” — reflects the close connection between decorators and decorations.
I think it is worth noting that there is some historical justification for the term “decoration”. PEP 318 does contain the term “decoration”, although it seems to use the terms “decorator” and “decoration” interchangeably.
It might be argued that the similarity between the two words would make it too easy for someone (someone who is trying to learn about decorators) to overlook the important difference between the two terms.
But that possibility would make it imperative for anyone writing an introduction to decorators to explicitly point out the importance of the difference in terminology. And that would be a good thing, because it would force anyone writing an introduction to decorators to explicitly distinguish decorators and decorations, and to be very careful with his terminology. And that would be a Very Good Thing indeed.
I think that one implication of such terminological standardization would be that we should stop saying that lines that begin with @ use “decorator syntax”. Instead, we should say that such lines use “decoration syntax“.
As one of the functools maintainers, I’ve definitely wrestled with these terminology problems. The terms I use myself are:
I believe the fact that many people incorrectly use the term “decorator” to refer to decorator factories like functools.wraps helps contribute to the confusion.
+1 on your objection to the term “annotation”.
My first post on this subject was in 2009, when Python 3 was less prominent than it is now. I used the term “annotation” in this post (1) partly for continuity with that earlier post, and (2) partly because now (with the growing use of Python 3) “annotation” is obviously a lousy name. In this post, “annotation” is so in-your-face obnoxious that it is bound to provoke comments with proposals for better terminology.
So thanks for your comment. It is nice to know that I’m not alone in wrestling with these terminological problems. I like your suggestion for “decorator expression”, but I think I like Matt Williams’s suggestion of “decoration” (see above) even better.
But the important thing, of course, is that the Python community move toward using some standard term — any term, as long as it is different from “decorator” — so it will become easier to write clearly about the relationship between decorators and decorations (or decorator expressions).
That’s pretty much how I explain decorators here:
Nice explanation. I have one comment, though.
You write that “a decorator is syntactic sugar”.
Since I’ve taken some pain to distinguish a decorator from a decorator line, I think I would prefer to say that decoration syntax (using the @) or a decoration or a decorator line is syntactic sugar.
Except for that reservation, I think your explanation is pretty good! :-)
As you’re aiming for rigour it’s probably worth changing your fourth step: decorators don’t need to return functions. They can return whatever they want.
This gets to the interesting question of “What (exactly) is a decorator?” or “What is it that makes a function a decorator function?”
Let’s do some thought experiments.
Thought experiment 1
I agree that in Python it certainly is possible to create a function that accepts a function object as an argument and returns something other than a callable.
Suppose, for example, that we write a function that accepts a function object as an argument and returns a string. Maybe something like this.
Now, with just this code available to us, would we want to say that toString is a decorator?
Thought experiment 2
It is also true that we can use a decorator line (a decoration) to invoke such a function.
With this additional code available to us, would we want to say that toString is a decorator? Is being invoked by a decorator line (a decoration) what makes us call a function a “decoration”?
Thought experiment 3
Suppose we have a program that defines and uses toString, but does not invoke it using a decorator line.
In this program, toString is certainly capable of being invoked via a “@toString” decoration, but in fact in this particular program it is not.
In this program, would we want to say that toString is a decorator?
My personal inclination is to say something like this.
We can classify a function as a “decorator” when it meets the following criteria.
So those are my personal intuitions about how we probably want to use the term “decorator”.
I like it because I can explicitly state what seem to me pretty clear, consistent, and intuitive rules for deciding whether or not to call a function a “decorator”. And I like it because it makes it possible to classify a function as a decorator (or as a non-decorator) independently of the technique that might be used to invoke it in any particular program.
Others may of course prefer to use the word “decorator” differently. If they do, I think I would ask that they explicitly state their rules for using the word “decorator”. That would reduce the chances of miscommunication caused by mere differences in terminology.
Finally I think that a Python Decorator Cookbook would be an appropriate place to discuss all functions that we might want to invoke via a decoration line (@ line) — whether or not they return a callable. Maybe we should call it the Python Decorator and Decoration Cookbook. :-)
Steve-
Thanks for your post. It is true, decorator is overloaded. Thanks for pointing that out, I think you are one of the first I recall doing that. I’m in the middle of reviewing the book as I’m preparing for POD and having a pdf/mobi/epub bundle again. You’ve given me some thoughts on some improvements I can make, so thanks!
A decorator cookbook sounds like an interesting idea. I actually don’t write new decorators too often, but I use closures all the time. Decorators just sort of fall out once you have really grasped closures. Sadly I don’t think a closure cookbook would be general enough to serve a wide audience. My closures are usually very specific to the problem at hand (changing interface, generating new functions, etc).
Thanks!
RE: “Decorators just sort of fall out once you have really grasped closures.” Yeah, that’s one of the things that I learned from your book. It really helped me to put decorators into a broader context in which they made a lot more sense.
As you can see from my comments on earlier comments, I’m now inclined to follow Matt Williams’s suggestion (see above) and use the word “decoration” for what I previously called a “decorator line”.
And to replace the expression “decorator syntax” with “decoration syntax”. It is, after all, a specialized syntax for writing decorations, not decorators.
It would be an interesting test and experiment to see what the effect would be of using this terminology in the Guide. I’m guessing that it would make things clearer. But you never know with experiments — sometimes they don’t turn out the way you expect them to! :-)
You should start by saying a decorator is a pattern. That will help by warning people that it’s just one way to do a common task. Next you should explain that python decorators are a specific syntax used to identify (but not necessarily implement) the pattern. In other words, “python decorators” are the little @ annotations that specify the function (not decorator) that is used to implement decorator pattern. You can then explain that the pattern is to “decorate” a function with additional functionality but not break its original contract (e.g. perform the original task and/or return the same value.) That’s about all that’s needed.
Then, except in rare cases where the decorator is already implemented and all people have to do is paste the @ call, they can accomplish the same thing in a simpler way (usually by adding a call to a logging function within their own function.)
Frankly, I think your original post did a better job, primarily because it has a real example.
I don’t know why people think if a term is used in a book they haven’t read that they need to build a big mystique around it. Chances are the book is considered arcane because it’s more obtuse than enlightening. A more generous phrasing would be that the knowledge gained from reading the books reflects better on the brainiac able to sift through the muddle than the profundity of the original author(s).
Yup, that might be yet another way of structuring an explanation of decorators. But I wonder how helpful it would be, especially when trying to help newbies understand decorators. It sort of reminds me of the old joke …
You have a problem — you need to explain decorators.
You decide to start by saying that a decorator is a pattern.
Now you have two problems. :-)
It’s also worth explaining (clearly) the difference between a decorator that takes parameters, and one that doesn’t. I found that confusing at first.
I’m convinced that once one understands closures, decorators come quickly. Parameterized decorators are just another closure around a normal decorator. Again, the key here is to understand closures and how they work
Being just a beginner who is learning python, I find decorators one of several concepts difficult to grasp. At first I thought that all it was is just like you would assign a number 5 to a variable num = 5, is the same with decorators that you assign a function to a variable by using @num = assign the below function and pass it to another function??
If my understanding is correct, I dont know how to actually write it or use it as all the examples I have seen so far are rather complex.
I would say there is one important step missing in the process: you need to explain why decorators are useful. A cookbook would help with that, but having a few practical examples showing exactly why someone would want to use decorators in their own code in the first place, rather than just modifying their functions, is really important, and something that all explanations of decorators that I have seen pretty much completely skip. At least personally it isn’t really obvious why I would want to use a decorator.
I absolutely agree.
In the case of almost all (or all?) current introductions to decorators, once you’ve read the introduction you’ve got a solution looking for a problem. It is like being handed a tool without any explanation of what you can do with the tool. Imagine being handed a stick of dynamite without being given any idea of what you could do with it (blast out mines and tunnels and road cuts and canals, kill enemy troops, blow out oil-well fires, etc.). Pretty useless.
That’s why I think a cookbook can’t just be a list of recipes. I think the recipes have to be organized around problems. Basically, you have to treat each recipe like a design pattern, where each recipe starts with a problem statement (“Pattern 1234 — You’re in situation such-and-such, and you need to do so-and-so. Here is how you can solve your problem using a decorator.”), and group the patterns into general categories.
In a collection of code examples, there is always a tension around whether to organize the examples around the coding techniques used (“Now we’re going to look at how to pass arguments to decorators.”) or around the kinds of problems to be solved(“Now we will look at various ways to use decorators to add logging capabilities to functions and classes in your programs.”). Most introductions to decorators are organized the first way. But now that we have such introductions, what we really need is a cookbook — or a collection of decorator design patterns — organized in the second way, around problems that can be solved (dealt with effectively) using decorators.
Bruce Eckel has an excellent description of decorators in “Python 3 Patterns & Idioms”:
A simple, functioning example was not given. Why not ?
What good is any explanation without a good, simple example ?
This post was not an attempt to explain decorators — it was a discussion of how such an explanation should be structured.
For an actual explanation of decorators, you may want to look at the text of my earlier post post titled Introduction to Python Decorators and the links in that post.
For an explanation that has the structure that I recommend (and includes examples!) I recommend Matt Harrison’s e-book Guide to: Learning Python Decorators.
This seems to be a moot point such as “determining how many angels will fit on the head of a pin”. Why try to clearly explain what is clearly an obfuscation: Whatever decorators can do are better done by using simple functions :
Decorators are not “elegant” – they are unnecessarily confusing. They seem to be like a throwback to the dark days of in-line compiler directives. If a factory function is what is wanted, then write one.
Ah, but decorators are simple functions. In fact, that function you’ve provided as an example of not using decorators *is* a decorator.
You’ve most likely confused decorators themselves—functions which transform a function to a souped-up version—with Python’s decoration syntactic sugar, provided to make decorator usage more convenient.
To expand through example, here’s your function:
This function is a decorator. It may be applied using regular function syntax:
Alternately, it may be applied using Python decoration syntax:
Both versions are equivalent, and both versions use only the simple function you’ve provided.
I liked your explanation including what is wrong with traditional decorator explanations. It seems to me that decorators would be good for logging output before and/or after an original function performs its job, more like allowing the run-time code to change output on the fly.
Great meta-explanation! Please do not name sections of your article after numbers though.
This is awesome, i had been trying to understand decorators for long.
This is the first article, which made me understand decorators in 9 points. Great Work!!!
Rather than a cookbook of decorators, may be attempt at a classification of decorators – as to the various types (intention of usage).
1. To do some standard stuff on entry/exit of a function: Such decorators may be applied to a lots of different functions and is a way of enforcing the DRY principle.
2. To add some functionality to third party/standard library modules: These may be one off.
You write “A decorator is a function (such as foobar in the above example) that takes a function object as an argument, and returns a function object as a return value.”
I believe there is a clearer way to describe this. “A decorator is a function which takes a function as an argument and returns a value – any value. This is often a new function, but it might be the same function, an integer, or even the Python None object. The return value is assigned to a variable with the same name as the function given in the function argument.”
For example, “@apply” “def data(): return 5″ is a round-about way to write “data = 5″. I’ve seen it used in real code as a way to initialize a variable where the initialization uses several other helper variables which shouldn’t clutter the module namespace.
IMHO, step 1 should be why. What problem is solved? It’s only point seven that addresses it and it already comes with a solution. Which confused me for a moment since I didn’t realise decorators was not really a new feature.
tl; dr… your old article was great and I totally got it from that. Thank you.
This is great Steve! I came across decorators while playing around with django and flask and was curious about what was happening behind the scenes. After some unhelpful explanations, I read Bruce Eckel’s article and thought I was getting somewhere. But you really nailed it. I’m really impressed by the way you have deconstructed the concept and structured your explanation. | http://pythonconquerstheuniverse.wordpress.com/2012/04/29/python-decorators/?like=1&source=post_flair&_wpnonce=d95803fda3 | CC-MAIN-2014-15 | refinedweb | 2,818 | 53.51 |
Hi, to everyone.
My program to make program to write the application the input a dollar amount to be printed on a cheque, and then prints the amount with eleven position number are available for printing the cheque. And need to run the word equivalent of the cheque amount e.g.: the amount 123.30 should be written
One hundred twenty three dollars and thirty cents
My problem I don't know how to make to run the number to the word equivalent of the cheque amount. Like this
The amount 123.30 should be written as One hundred twenty three dollars and thirty cents
This is my code:
import javax.swing.*; /** * <p>Title: </p> * <p>Description: </p> * <p>Copyright: Copyright (c) 2005</p> * <p>Company: </p> * @author not attributable * @version 1.0 */ public class Cheque { public static void main (String [] args) { String Q = JOptionPane.showInputDialog(null, "Please Enter the Account.."); int l; int nn=0; int t=0; for (int x=1 ; x<=Q.length() ; x++) { if (Q.length() < 12) { l = Q.length(); t = l - 12; nn = t * ( -1); } } System.out.println( " "+Q +"$"+" t The Cheque amount " ); System.out.println( " " + nn+ " t The positon number"); } }
can anyone tell me how can i do that.Thank you, | https://www.daniweb.com/programming/software-development/threads/35087/run-the-number-to-the-word-equivalent | CC-MAIN-2017-26 | refinedweb | 207 | 76.82 |
hi every body ,
i am in a big trouble here.....i yahoo password got hacked because the question i kept was very easy and someone broke it...is there any chance that while i chat with him from another ID can i get the IP of tht guy's machine is there any tool for tht...please help me in that regard..so tht i can catch tht guy ....he is a known person to me thats for sure
thankx
How about giving this link a try [1] .
[1] Yahoo! Help - Report Abuse Help
Hope that helps ... Maybe some other members might have better suggestions ...
Operation Cyberslam
\"I\'ve noticed that everybody that is for abortion has already been born.\" Author Unknown
Microsoft Shared Computer Toolkit
Proyecto Ututo EarthCam
thankx agent_steal,
thank u for ur quick response...i think that link is probably for those to complain about any abuse of yahoo account..i know tht one of my frends might have cracked that password and i want to catch him redhanded which i can do if i get his IP
thankx
You are probably wasting your time, just change your password and question and get on with life.
The person's IP address may well be dynamic and/or they may be using a proxy (a good bet if they are messing you around). So the information will probably be meaningless or inconclusive.
I am open to correction here but I believe that if you engage someone in IM and persuade them to download/open a file you can use traceroute to find the IP address? However, as I have suggested, this information may well be pretty useless.
yeah nihil your right (afaik) when you are chatting to someone on an IM it is normally like so :
you --msg--> IM Server --msg--> buddy
but when you start a file transfer it skips the server and initiates a direct p2p connection. So a simple netstat -n will show you the other persons ip (unless they are behind a proxy or such)
3MakeStuff
Or why don't you just create a new ID.?
import all your old contacts and just let 'em know that you have changed ID and to ignore the old one..
f2b
i know tht one of my frends might have cracked that password
If it's one of your 'friends' as them for their IP when you go online. Also if it's your 'friend' ask for the account back.
Otherwise follow the other suggestions.
Sheep Rustlin\' Taz Chairman
If your going to try "back-hacking" him I wouldnt suggest it... its more trouble than its worth. just get a new yahoo id and move on.
Git R Dun - Ty
A tribe is wanted
Forum Rules | http://www.antionline.com/showthread.php?268689-my-yahoo-password-got-hacked | CC-MAIN-2017-22 | refinedweb | 459 | 79.5 |
More on Custom Ant Tasks
Handling Specific Nested Types
To support nesting a given type element under your task, you must know the Java class name that corresponds to that type element. Let us consider this hypothetical case: You are writing code for a Java class called "CustomTask". You want to support a nested element <blah> in your Ant script, and that the supporting Java class is named FooTypeClass. (I am intentionally mismatching the element name and the class name to improve clarity.) You must provide one (but only one) of the following three methods to get a reference to the FooTypeClass object:
- public FooTypeClass createBlah() {...}
You assume responsibility to instantiate the FooTypeClass object and pass the reference to Ant.
- public void addBlah(FooTypeClass ftc) {...}
Ant instantiates the FooTypeClass object and passes the reference via this method; then, it configures the object with its attributes and nested elements, if any.
- public void addConfiguredBlah(FooTypeClass ftc) {...}
Ant instantiates the FooTypeClass object and configures it with its attributes and nested elements, if any, and then passes the reference via this method.
Personally, I use the second option, but this is mainly out of laziness. (Larry Wall, creator of the Perl language, cites laziness as the first of a programmer's three great virtues.) More practically, there may be situations where you should use one form instead of the others, but—usually—it doesn't matter.
The Ant manual strongly cautions to only provide one of the above three methods to support a given nested element class:
"What happens if you use more than one of the options? Only one of the methods will be called, but we don't know which, this depends on the implementation of your Java virtual machine."
Example 6: A rot13 task with arbitrary number of nested fileset elements
This shows a custom task that interacts with the standard fileset element. For any matching file that the fileset selects, you will read the file and write a new file whose contents are rot13-translated. The exercise is two-fold: to show how to handle some nested element in your custom task and also to show how to work with a fileset element (which could really be slightly more intuitive, in my opinion).
For those unfamiliar, rot13 is the highly simplistic way of pseudo-scrambling text: alphabet characters in the set [a-m] are replaced with alphabet characters in the set [n-z], respectively and vice-versa. (Case is generally preserved, and numbers and symbols are generally left alone.) Caution: do not mistake rot13 for any form of encryption.
To handle the <fileset> element, you first quickly look through the Ant API (included with the standard documentation in your download) and deduce that this standard type is supported by the org.apache.tools.ant.types.FileSet class. Because I am choosing to allow multiple <fileset> elements to be nested, I will store the FileSet objects into a Vector instance variable. Note I am omitting some of the more mundane details here; feel free to refer to Example Six in the zip file accompanying this article for a full copy of the code.
//Rot13Converter.java package org.roblybarger; // ... imports here ... public class Rot13Converter extends Task { String extension = null; Vector<FileSet> filesets = new Vector<FileSet>(); /* ...init, getter/setter... */ public void addFileSet(FileSet fileset) { if (!filesets.contains(fileset)) { filesets.add(fileset); } } public void execute() { /* ...attribute validation... */ int filesProcessed = 0; DirectoryScanner ds; for (FileSet fileset : filesets) { ds = fileset.getDirectoryScanner(getProject()); File dir = ds.getBaseDir(); String[] filesInSet = ds.getIncludedFiles(); for (String filename : filesInSet) { File file = new File(dir,filename); /* ...handle the matching file... */ filesProcessed++; } } log("Done. "+filesProcessed+" file(s) processed."); } /* ...method(s) called by execute, above... */ }
Remember that this is a custom task because you want the execute() method to fire, so add an entry in your task-based properties file:
rot13=org.roblybarger.Rot13Converter
Update your jar file, then use something of this form in your Ant script:
<project name="ex6" default="demo" basedir="."> <taskdef resource="MyAntTasks.properties" classpath="mytasks.jar"/> <target name="demo"> <rot13 extension=".rot13"> <fileset dir="${basedir}" includes="build.*.xml"/> </rot13> </target> </project>
In the real world, I have put a custom task with nested filesets to more practical use in order to generate an XML summary file with a listing of files that result from an automated nightly build process. (This file listing contributes to post-build verification and summation procedures.)
Handling Arbitrary Nested Types
The fact that one must know the exact Java class name in the above section might not sit well with some people. Notably, if you want to have conditional support, you can't very well be expected to have an addXXX method for every possible conditional type, right? After all, you would be required to maintain this through all of Ant's future upgrades.
Fortunately, there is a way to add types to your type class in a more generic fashion. Ant supports these methods:
public void add(SomeInterface obj) { ... } public void addConfigured(SomeInterface obj) { ... }
This is probably due as much to polymorphism in Java as it is intended design of Ant. This also carries with it the proviso that you have, for example, a common interface to work from, and also requires that all your concrete implementing type classes are declared in the typedef statement. (That is, all implementing type classes are listed in your types properties file and referenced by the typedef statement.) Unfortunately, this requirement also applies to the standard condition types, and you cannot simply re-typedef them with the same name, either.
Page 2 of 4
| http://www.developer.com/lang/article.php/10924_3636196_2/More-on-Custom-Ant-Tasks.htm | CC-MAIN-2016-18 | refinedweb | 927 | 54.52 |
FTMlib and rtcmix~/chuck~ conflict: yyparse
Hey dev-people –
I’ve had several reports now of crashes from people using both [rtcmix~] and IRCAM’s FTMlib. I think this could be a problem for other developers, especially anyone who is dynamically-loading their own dylib code library. The specific issue is that [rtcmix~] was built using yacc (bison) to design the parser. When [rtcmix~] calls the parser yyparse() function, it finds the entry for the FTMlib parser instead.
Here’s what I think is going wrong: When I updated [rtcmix~], I noticed that the default behavior for Apple’s CFBundleLoadExecutable() dynamic loader seems to have changed. In the past, the dylib symbol table was kept ‘private’, none of the loaded functions were exposed unless I did an explicit CFBundleGetFunctionPointerForName() to find the entry-point. I needed to keep individual instances of [rtcmix~] separate so that the parser/queue/etc. from one instance would not interfere with others. After much experimentation, I found that only by using the mach-o NSLinkModule() system (with options NSLINKMODULE_OPTION_PRIVATE | NSLINKMODULE_OPTION_BINDNOW) was I able to get unique and private loading of the rtcmix.dylib.
Apparently the IRCAM FTM objects don’t do this when loading FTMib from the framework. I can’t find the source for FTMlib, but ‘nm’ does show all the range of yyparse() material, and I see from some of the .h files that bison was used in FTMlib.
I could try to go in and rename all of the [rtcmix~] parsing functions, although this will be difficult because bison auto-generates the code. I also suspect that exposing dylib-loaded symbol tables are going to lead to other unwanted namespace conflicts. The ChucK developers have also used bison for their parser, and I have found that [chuck~] + FTMlib will also cause yyparse()-conflict crashes. Plus any inadvertent name conflicts could cause weirdness down the road…
Is there a good solution to this? I’m not a Real Live Programmer, so I may be overlooking something obvious. Any suggestions/advice are welcome!
brad
oops, I forgot to say — I used FTM.2.5.ALPHA.1-Max5 to check on [rtcmix~] and [chuck~], running on a MacBook Intel, Max5.05 (but I think the problem isn’t unique to this combo).
brad | http://cycling74.com/forums/topic/ftmlib-and-rtcmixchuck-conflict-yyparse/ | CC-MAIN-2014-41 | refinedweb | 378 | 62.88 |
Failure to instantiate timezones with well-known names in 12.04
Bug #884890 reported by Olivier Tilloy on 2011-11-01
This bug affects 1 person
Bug Description
The following works in oneiric but fails with pytz.exceptions
import pytz
pytz.
lsb_release -rd
Description: Ubuntu precise (development branch)
Release: 12.04
apt-cache policy python-tz
python-tz:
Installed: 2011k-0ubuntu1
Candidate: 2011k-0ubuntu1
Version table:
*** 2011k-0ubuntu1 0
500 http://
100 /var/lib/
Olivier Tilloy (osomon) on 2011-11-01
Barry Warsaw (barry) on 2011-11-08
This bug was fixed in the package python-tz - 2011k-0ubuntu2
---------------
python-tz (2011k-0ubuntu2) precise; urgency=low
* debian/
patches/ tzdata: Restore patch to use tzdata package for
timezone information. This was lost during the last Debian merge.
(LP: #884890).
-- Barry Warsaw <email address hidden> Tue, 08 Nov 2011 15:47:51 -0500 | https://bugs.launchpad.net/ubuntu/+source/python-tz/+bug/884890 | CC-MAIN-2018-26 | refinedweb | 141 | 54.52 |
choice(sequence)
Returns one random item from the
sequence parameter (a tuple, list, or string). The choice() command is syntactic sugar for the comparatively awkward incantation:
sequence[random(len(sequence)]
words = ["It is certain", "Reply hazy try again", "Outlook not so good"] c = choice(words) # Variable c contains a random item from the words list # It might now be "It is certain"... # or "Reply hazy try again"... # or "Outlook not so good"
files(pattern, case=True)
Retrieves the names of all files at a given path matching a wildcard pattern. By default a case-sensitive search is performed but this can be relaxed by passing
False as the
case parameter.
f = files("~/Pictures/*.jpg") # tilde expands to $HOME image(choice(f), 10, 10)
fonts(like=None, western=True)
Searches the fonts installed on the system and can filter the list in a few ways. When called with no arguments, the list contains all fonts on the system with a latin character set. To retrieve the set of non-latin fonts, pass
False as the
western parameter. To filter the list through a (case-insensitive) font name match, pass a substring as the
like parameter.
romans = fonts() # Get a list of all the text-friendly families on the system jenson = fonts(like="jenson") # returns ["Adobe Jenson Pro"] eastern = fonts(western=False) # all asian-character fonts, math fonts, etc. kozuka = fonts(western=False, like="koz") # the Kozuka Gothic & Mincho families
grid(cols, rows, colsize=1, rowsize=1)
The grid() command returns an iteratable object, something that can be traversed in a for-loop. You can think of it as being quite similar to the range() command. But rather than returning a series of ‘index’ values, grid() returns pairs of ‘row’ and ‘column’ positions.
The first two parameters define the number of columns and rows in the grid. The next two parameters are optional, and set the width and height of one cell in the grid. In each iteration through the for-loop, your counters are updated with the position of the next cell in the grid.
fill(0.2) for x, y in grid(7, 5, 12, 12): rect(10+x, 10+y, 10, 10)
measure(grob) measure(image="path", width=None, height=None) measure("text", width=None, height=None, **fontstyle)
Returns width & height for graphics objects, strings, or images. If the first argument is a string, the size will reflect the current font() settings and will layout the text using the optional
width and
height arguments (as well as any keyword arguments supported by the text() command).
If called with an image keyword argument, PlotDevice will expect it to be the path to an image file and will return its pixel dimensions.
If called with a Bezier, returns its ‘bounding box’ dimensions.
ordered(list, *names, reverse=False)
Creates a sorted copy of a list of objects (similar to Python’s built-in
sorted function). Lists of dictionaries or objects can be sorted based on a common key or attribute if any
name strings are provided. If more than one name is present, they will be used (in order) as primary, secondary, tertiary, etc. sorting criteria.
Lists are returned in ascending order unless the
reverse parameter is
True.
a sorted copy of the original sequence
students = [{"name":"Alice", "grade":95}, {"name":"Bob", "grade":60}, {"name":"Carol", "grade":88}, {"name":"Eve", "grade":95} ] for s in ordered(students, 'grade', 'name'): print s['grade'], s['name'] >>> 60 Bob >>> 88 Carol >>> 95 Alice >>> 95 Eve
random(v1=None, v2=None)
Returns a random number that can be assigned to a variable or a parameter. The random() command is useful for all sorts of operations, from picking colors to setting pen widths.
When no parameters are supplied, returns a floating-point (decimal) number between 0.0 and 1.0 (inclusive). When one parameter is supplied, returns a number between 0 and this parameter. When two parameters are supplied, returns a number between the first and the second parameter.
Note that whether the boundaries are integers or floating point values is significant. See the example code for the differing behaviors that result.
New random values are returned each time the script runs. A particular sequence of random values can be locked-in by supplying a custom random seed:
from random import seed seed(0)
r = random() # returns a float between 0 and 1 r = random(2.5) # returns a float between 0 and 2.5 r = random(-1.0, 1.0) # returns a float between -1.0 and 1.0 r = random(5) # returns an int between 0 and 5 r = random(1, 10) # returns an int between 1 and 10 # sets the fill to anything from # black (0.0,0,0) to red (1.0,0,0) fill(random(), 0, 0)
read(path, format=None, encoding='utf-8', cols=None, dict=dict)
Returns the contents of a file located at
path as a unicode string or format-dependent data type (with special handling for
.json and
.csv files).
The format will either be inferred from the file extension or can be set explicitly using the
format arg. Text will be read using the specified
encoding or default to UTF-8.
JSON files will be parsed and an appropriate collection type will be selected based on the top-level object defined in the file. The optional keyword argument
dict can be set to adict or odict if you’d prefer not to use the standard Python dictionary when decoding pairs of
{}’s.
CSV files will return a list of rows. By default each row will be an ordered list of column values. If the first line of the file defines column names, you can call read() with
cols=True in which case each row will be a dictionary using those names as keys. If the file doesn’t define its own column names, you can pass a list of strings as the
cols parameter.
a unicode string, list of rows (for CSV files), or Python collection object (for JSON)
lipsum = read('lorem.txt') print lipsum[:11] # lorem ipsum print lipsum.split()[:3] # [u'Lorem', u'ipsum', u'dolor']
rows = read('related/months.csv', cols=True) print rows[0]['english'], rows[0]['days'] # January 31 print rows[1]['french'] # Février print rows[2]['german'] # März
oracle = read('8ball.json') prognosis = choice(oracle.keys()) outcome = choice(oracle[prognosis]) print (prognosis, outcome) # (u'neutral', u'Better not tell you now')
data: lorem.txt, months.csv, and 8ball.json
shuffled(sequence)
Scrambles the order of a sequence without modifying the original. The
sequence argument can be a list, tuple, or string and the return value will be of the same type.
seq = range(10) print shuffled(seq) # [7, 4, 8, 3, 9, 2, 0, 5, 6, 1] print shuffled(seq) # [4, 5, 2, 3, 7, 8, 9, 6, 1, 0] print shuffled(seq) # [4, 6, 2, 7, 3, 0, 8, 1, 5, 9] print seq # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
var(name, type, default, min, max)
The variables sheet creates connections between a variable and a user interface control, such as a slider, textfield or check box. Variables created with the var() command become part of the global namespace, meaning you can access them just like any other variable.
The
name parameter must be a string, but thereafter can be referred to as a plain variable (without quoting). The variable’s
type must be
NUMBER,
TEXT,
BOOLEAN, or
BUTTON. Unless a
default value is passed, the variable will be initialized to
50 for numbers,
True for checkboxes, or ‘
hello’ for text. The
min and
max set the range for number-type variables and default to the range 0–100.
var("amount", NUMBER, 30, 0, 100) background(0.1, 0, 0.0) for i in range(int(amount)): rotate(15) image("10.png", 5, 5, alpha=0.25)
Python’s built-in dictionary type has the advantages of being fast, versatile, and standard. But sometimes you’ll come across situations where you wish it worked slightly differently. PlotDevice includes a handful of dict-like classes to make a few common access patterns easier or less verbose.
The
adict() command creates a dictionary whose items may also be accessed with dot notation. Items can be assigned using dot notation even if a dictionary method of the same name exists. Subsequently, dot notation will still reference the method, but the assigned value can be read out using traditional
d["name"] syntax.
d = adict(torches=12, rations=3, potions=0) d.keys = 9 print d.rations # 3 print d.keys() # ['keys', 'torches', 'potions', 'rations'] print d['keys'] # 9
The
ddict() command creates a dictionary with a default ‘factory’ function for lazily initializing items. When your code accesses a previously undefined key in the dictionary, the factory function is run and its return value is assigned to the specified key.
normal_dict = {} normal_dict['foo'].append(42) # raises a KeyError lst_dict = ddict(list) lst_dict['foo'].append(42) # sets 'foo' to [42] num_dict = ddict(int) print num_dict['bar'] # prints '0' num_dict['baz'] += 2 # increments 'baz' from 0 to 2 nest_dict = ddict(dict) nest_dict['apple']['color'] = 'green' print nset_dict # prints ddict{'apple': {'color': 'green'}}
the
odict() command creates a dictionary that remembers insertion order. When you iterate using its keys(), values(), or items() methods, the values will be returned in the order they were created (or, e.g., deserialized from JSON).
Standard
dict objects return keys in an arbitrary order. This poses a problem for initializing an
odict since both
dict-literals and keyword arguments will discard the ordering before the
odict constructor can access them.
To initialize an
odict and not lose the key-order in the process, pass a list of (key,val) tuples:
odict([ ('foo',12), ('bar',14), ('baz', 33) ])
or construct it as part of a generator expression:
odict( (k,other[k]) for k in ordered(other.keys()) )
Point(x, y)
A pair of x & y coordinates wrapped into a single value. You can access its properties by name in addition to using the object as a ‘normal’ tuple. Point objects support basic arithmetic and provide utility methods for geometry calculations.
PlotDevice commands that take separate x & y arguments (e.g., poly(), arc(), text()) will typically also accept a Point object in their place.
pt.x pt.y
pt.angle(point)
Returns the angle from the object to a target Point or pair of (x,y) coordinates. By default, the angle will be represented in degrees, but this can be altered by setting an alternate unit with the geometry() command.
src = Point(25,75) dst = Point(75,25) theta = src.angle(dst) arc(src, 4, fill='red') arc(dst, 4, fill='orange') with pen(dash=5), stroke(.5), nofill(): arc(src, 32, range=theta) print theta >>> -45.0
pt.distance(point)
Returns the linear distance between the object and a second Point or pair of (x,y) coordinates.
src = Point(25,25) dst = Point(100,100) length = src.distance(dst) arc(src, 4, fill='red') arc(dst, 4, fill='orange') with pen(dash=5), stroke(.8): line(src, dst) print length >>> 106.066017178
pt.reflect(point, d=1.0, a=180)
Uses the object as an ‘origin’ and returns a Point containing the reflection of a target Point or pair of (x,y) coordinates. The optional
d parameter is a magnification factor relative to the distance between the mirror and the target point.
The
a parameter controls the angle of reflection. When set to 180° (the default), the reflected point will be directly opposite the target, but other values will allow you to ‘ricochet’ the target point off of the mirror instead.
origin = Point(50,50) src = Point(25,40) dst = origin.reflect(src, d=2.0) arc(src, 4, fill='red') arc(dst, 4, fill='orange') arc(origin, 4, fill=.7)
pt.coordinates(distance, angle)
Uses the object as the ‘origin’ of a polar coordinate system and returns a Point at a given distance and orientation relative to the origin. The
distance parameter uses the current canvas size() unit and the
angle should be expressed in the current geometry() unit.
origin = Point(50,50) dst = origin.coordinates(50, 45) arc(origin, 4, fill=.7) arc(dst, 4, fill='orange') with pen(dash=3), stroke(.5), nofill(): arc(origin, 25, range=45)
If you’d prefer to work with raw numbers rather than Point objects, you can also import the numerical versions of these methods for use in your scripts:
from plotdevice.lib.pathmatics import angle, distance, coordinates, reflect
Region(x, y, w, h) Region(Point, Size)
An origin Point and a Size which together describe a rectangular region. Region objects have properties to read and modify the rectangle geometry. Methods allow you to generate new regions from combinations of old ones or by systematically shifting their values.
PlotDevice commands that take
x/
y/
w/
h arguments (e.g., rect(), oval(), image()) will also accept a Region object in their place.
r.x, r.y # position r.w, r.h # dimensions r.origin # Point(r.x, r.y) r.size # Size(r.w, r.h)
r.top, r.bottom, r.left, r.right # edge positions r.w/h/t/b/l/r # shorthand aliases
r.intersect(region)
return a new Region describing the overlapping portions of the existing Region object and a second Region
r1 = Region(20,20, 40,30) r2 = Region(40,40, 30,40) overlap = r1.intersect(r2) with nofill(), stroke(.7): rect(r1) rect(r2) rect(overlap, stroke='red', dash=3)
r.union(region)
return a new Region that contains the combined bounds of the existing Region object and a second Region
r1 = Region(20,20, 40,30) r2 = Region(40,40, 30,40) union = r1.union(r2) with nofill(), stroke(.7): rect(r1) rect(r2) rect(union, stroke='red', dash=3)
r.shift(point)
return a copy of the Region with its origin shifted by a Point
r = Region(20,20, 40,30) shifted = r.shift(20,15) nofill() rect(r, stroke=.7) rect(shifted, stroke='red', dash=3)
r.inset(dx, dy)
return a copy of the Region whose horizontal dimensions have been shrunk by
dx and vertical by
dy. If
dy is omitted, shrink both dimensions by
dx. Passing a negative value for
dx or
dy will enlarge the dimension rather than shrinking it.
r = Region(30,40, 60,40) shrunk = r.inset(20,5) grown = r.inset(-20) nofill() rect(r, stroke=.7) rect(shrunk, stroke='orange', dash=3) rect(grown, stroke='red', dash=3)
autotext()
fontsize(9) lineheight(0.8) txt = autotext("kant.xml") text(txt, 5, 10, width=110)
imagesize(path)
Returns the dimensions (width and height) of an image located at the given path. Obtaining the size of an image is useful when the image needs to scaled to an exact size, for example. The return value is a list with the image’s width and the image’s height.
Note that these dimensions may also be retrieved from the
size property of the Image object returned by the image() command.
The width & height of any graphical object (including Images) can be retrieved using the measure() command.
w, h = imagesize("superfolia.jpg") print w print h
open(path).read()
The open() command opens a file specified by the path parameter. The open() command can be used in two ways: open(path).read(), which returns the file’s text content as a string, or, alternatively, open(path).readlines(), which returns a list of text lines.
Note that open() is actually a part of the Python language. See the official docs for details on using open() and dealing with the file objects it returns.
readmethod provides its contents as a string
A more convenient way to read the contents of a file into a string (or other data structure) is the new read() command. It can handle text files as well as CSV- or JSON-formatted data.
# Prints the contents of sample.txt as a whole txt = open("sample.txt").read() print txt # Prints the contents line per line txt = open("sample.txt").readlines() for line in txt: print line | https://plotdevice.io/ref/Misc | CC-MAIN-2017-39 | refinedweb | 2,687 | 65.01 |
Overview and Tutorial¶
Welcome to Fabric!
This document is a whirlwind tour of Fabric’s features and a quick guide to its use. Additional documentation (which is linked to throughout) can be found in the usage documentation – please make sure to check it out.
What is Fabric?¶. Let’s take a look.
Hello,
fab¶
This wouldn’t be a proper tutorial without “the usual”:
def hello(): print("Hello world!")
Placed in a Python module file named
fabfile.py in your current working
directory, that
hello!
Task arguments¶
Local commands¶
As used above,
fab only really saves a couple lines of
if __name__ == "__main__" boilerplate. It’s mostly designed for use with
Fabric’s API, which contains functions (or operations) for executing shell
commands, transferring files, and so forth.
Let’s build a hypothetical Web application fabfile. This example scenario is
as follows: The Web application is managed via Git on a remote host
vcshost. On
localhost, we have a local clone of said Web application.
When we push changes back to
vcshost, we want to be able to immediately
install these changes on a remote host
my_server in an automated fashion.
We will do this by automating the local and remote Git commands.") local("git> [localhost] run: git push <git push session, possibly merging conflicts interactively> Done.
The code itself is straightforward: import a Fabric API function,
local, and use it to run and interact with local shell
commands. The rest of Fabric’s API is similar – it’s all just Python.
See also
Operations, Fabfile discovery
Organize it your way¶ push(): local("git push") def prepare_deploy(): test() commit() push()
The
prepare_deploy task can be called just as before, but now you can make
a more granular call to one of the sub-tasks, if desired.
Failure¶.
Failure handling¶:
- The
__future__import required to use
with:in Python 2.5;
- Fabric’s
contrib.consolesubmodule, containing the
confirmfunction, used for simple yes/no prompts;
- The
settingscontext manager, used to apply settings to a specific block of code;
- Command-running operations like
localcan return objects containing info about their result (such as
.failed, or
.return_code);
- And the
abortfunction, used to manually abort execution.
However, despite the additional complexity, it’s still pretty easy to follow, and is now much more flexible.
See also
Context Managers, Full list of env vars
Making connections¶
Let’s start wrapping up our fabfile by putting in the keystone: a
deploy
task that is destined to run on one or more remote server(s), and ensures the
code is up to date:
def deploy(): code_dir = '/srv/django/myproject' with cd(code_dir): run("git pull") run("touch app.wsgi")
Here again, we introduce a handful of new concepts:
- Fabric is just Python – so we can make liberal use of regular Python code constructs such as variables and string interpolation;
cd, an easy way of prefixing commands with a
cd /to/some/directorycall. This is similar to
lcdwhich does the same locally.
run, which is similar to
localbut runs remotely instead of locally. doesn’t know
on which host(s) the remote command should be executed. When this happens,
Fabric prompts us at runtime. Connection definitions use SSH-like “host
strings” (e.g.
user@host:port) and will use your local username as a
default – so in this example, we just had to specify the hostname,
my_server.
Remote interactivity¶
git pull works fine if you’ve already got a checkout of your source code –
but what if this is the first deploy? It’d be nice to handle that case too and
do the initial
git clone:")
Defining connections beforehand¶.
Conclusion¶ commit(): local("git add -p && git commit") def push(): local("git push") def prepare_deploy(): test() commit() push()")
This fabfile makes use of a large portion of Fabric’s feature set:
- defining fabfile tasks and running them with fab;
- calling local shell commands with
local;
- modifying env vars with
settings;
- handling command failures, prompting the user, and manually aborting;
- and defining host lists and
run-ning remote commands.
However, there’s still a lot more we haven’t covered here! Please make sure you follow the various “see also” links, and check out the documentation table of contents on the main index page.
Thanks for reading! | http://docs.fabfile.org/en/1.10/tutorial.html | CC-MAIN-2016-40 | refinedweb | 706 | 53.81 |
“The introduction of new object-oriented programming features in PHP V5 has significantly raised the level of functionality in this programming language. Not only can you have private, protected, and public member variables and functions – just as you would in the Java,.”
Going Dynamic with PHP v5
About The Author
Thom Holwerda
Follow me on Twitter @thomholwerda
35 Comments
2006-02-20 7:39 pmMightyPenguin
I want to know what scripting language doesn’t have these problems? Pretty much all semi-commonly used scripting languages allow these things.
I do agree with you that this is one of the reasons that scripting languages are frowned on in larger projects with many developers because the language doesn’t inforce good coding practices and strong type checking like Java for instance.
>For me, this is a no-no, force people to declare their
>variables and specify a type. Makes poor programmers.
Hmm.. what are you talking about. Even in C you declare variables where you need them. It’s common in many languages to do this. In fact I think it makes the code cleaner beacuse you are not reading code thinking “where the hell did that variable come from” because its declared right there and you know that it is, what type it is and what it is for.
C Example:
int count = 1;
while (count <= 100)
{
printf(“%d
“,count);
count += 1;
}
Your just bitching for no reason.
2006-02-20 6:56 pmkamper
Even in C you declare variables where you need them. It’s common in many languages to do this.
I don’t think he was so much talking about where you declare variables, so much as whether or not you actually have to declare and strongly type them. Sure, you can easily argue in favour of php’s looser model, but both approaches are quite valid. The interesting thing is deciding when to use each one.
2006-02-20 8:17 pmphoenix
You miss his point entirely.
In C, C++, Java, etc you have to declare a variable before you use (give it a name, a type, etc).
In PHP, you just use the variable..
2006-02-20 9:19 pmunoengborg
It is also much easier to create tools to help you write the correct variable names in a typed language. E.g. if you write java code in Eclipse or any other modern java tool pressing the dot after an object name will present you with a popup with all methods and variables available on that object so that you easily can do auto complete.
In some cases (like in Eclipse) it even shows whatever documentation you made for that method.
Another problem with languages like php is that the lack of typed variables, makes testing much more complex if you want to make sure that the program really does what it is supposed to do. In a large project the work of writing such tests takes much more time than declaring variables.
As for the example in the article, there are other ways to do things like this. The article assumes that you mapps the fields in the database to fields in PHP objects.
Why not put more functionality into the database using stored procedures. That way the application will be much faster, if done right you will also have a natural way to get separation between function in the database and presentation written in PHP. By using template systems such as PHPTal, the separation could be even clearer.
The downside is of course that you will get hard ties to whatever database engine you are using. This is usually not a big problem unless you are making some kind of product in PHP, that you intend to sell to others to use as you then would like the same code to run on as many databases as possible. However if you do some coding for your own organization, it is highly likely that the database infrastructure will last a lot longer than the look of the website.
2006-02-21 9:10 amedmnc
.”
You can easely change the error warning level in PHP configuration (or at runtime) and PHP will display Notice for each undeclared variable in your code.
2006-02-20 8:47 pmwerpu
Actually there used to be days, when it was taught to be a good design decision to have explicite variable declaration on top of the algorithm and a clean interface and implementation separation.
And given the fact that I constantly have to switch between non declarative and declartive languages, I at least agree with the declaration part, it makes the code way more readable once the stuff gets bigger.
As for the interface implementation separation, that stuff fortunately can be covered better by autodoc tools.
2006-02-21 2:56 amCrLf
“Even in C you declare variables where you need them.”
That’s just been possible since the ’99 revision of the standard (C99), which not every compiler implements fully (I think gcc if compliant).
Now, some folks might have a distorted view of C, since they really have been compiling stuff with a C++ compiler (which no, C isn’t a subset of C++ anymore).
you can also create objects that bend at runtime, creating new methods and member variables on the fly. You can’t do that with the Java, C++, or C# languages.
Not that I have a problem with people exploring this type of functionality, but I just want to point out that this is not a deficiency in the other languages. They’re aimed quite firmly at a different development model. It would be dead simple to put this sort of stuff into a virtual machine language and it can already be done to a degree with runtime bytecode engineering. It’s just not a native part of the language because it’s too easily abused.
2006-02-20 7:28 pmjayson.knight
I don’t know about Java or C++, but you absolutely can do this in C# using either Reflection.Emit or the System.CodeDom namespace, and this functionality has been in the CLR since v1.0. For an excellent high level primer on this, have a look at this article:….
“…but you can also create objects that bend at runtime, creating new methods and member variables on the fly…”
I can’t wait to see the interesting vulnerabilities this capability will foster when used in conjunction with badly written PHP code LOL.
The security community has it’s research cut out for them.
-Viz
2006-02-20 8:55 pmAnonaMoose
Howdy
Javascript lets you do this kind of thing and it does not really have to lead to security problems.
Remember PHP code is ran on the server not on the client all the client sees is the result, so to add a new method or variable to a running instance would require hacking of the server then gaining access to the running container and instance in the container(container = zend engine etc).
Realistically you`d take over the server then run you own code or server.
2006-02-20 9:10 pmkamper
Javascript lets you do this kind of thing and it does not really have to lead to security problems.
The idea of a security vulnerability due to sloppy javascript writing is kind of dumb. The language, by the nature of where it runs, is simply the least secure thing imaginable. The reason javascript mistakes don’t matter is because javascript should never (ever) be touching sensitive data anyways.
so to add a new method or variable to a running instance would require hacking of the server
No, the point is that a programmer can accidentally add stuff they didn’t mean to and this would allow a cracker to gain access.
2006-02-21 11:54 amdruiloor
> The idea of a security vulnerability due to sloppy
> javascript writing is kind of dumb. The language, by
> the nature of where it runs, is simply the least secure
> thing imaginable. The reason javascript mistakes don’t
> matter is because javascript should never (ever) be
> touching sensitive data anyways.
Right… It’s just a scripting language, the fact it started off at Netscape as a web-browser feature doesn’t mean it can only be used for that:
Comparing it to the Apache mod_php one may have a look at the old mod_js or more recently mod_whitebeam or something –
2006-02-22 8:47 amkamper
Right… It’s just a scripting language, the fact it started off at Netscape as a web-browser feature doesn’t mean it can only be used for that: …
Oh, granted. Along with vbscript, it was also the syntax for asp and I believe it can still be used interchangeably with vbs for general windows scripting.
In my post I was referring to javascript as the interpreters/object models that exist within web browsers. I just figured it was obvious enough that I din’t need to bother specifying :-p
2006-02-22 10:21 amdruiloor
> Along with vbscript, it was also the syntax for asp and
> I believe it can still be used interchangeably with vbs
> for general windows scripting.
Although i don’t use MS-Windows , i’m pretty sure you’re correct. However KDE[0] (kjscmd) and Gnome[1] (mjs) have similar functionality. AKA ECMAScript[2], its probably stock platform agnostic to a greater extent then PHP is.
[0]:
[1]:
[2]:…
2006-02-22 6:06 pmkamper
its probably stock platform agnostic to a greater extent then PHP is.
What’s your point? As I said, I wasn’t talking about abstract javascript and all the places it can be applied. I was talking specifically (and only) about scripting within webpages in a browser. Stop trying to add irrelevant things to the discussion.
2006-02-21 12:36 amAnonaMoose
The idea of a security vulnerability due to sloppy javascript writing is kind of dumb.
Exactly!, although I`m not saying it might not ever happen but it does not need to be a security nightmare with enough thought on the implementation.
No, the point is that a programmer can accidentally add stuff they didn’t mean to and this would allow a cracker to gain access.
I fail to see why adding a method at runtime leads to this, bad programing is bad programing and if they cannot account for what they add and why then their static code would be questionable aswell.
The sheer amount of code may lead to risks but the idiom behind it does not need to.
Sorry, but that PHP is a server-sided language IS the problem – if a PHP developer makes mistakes, it allows hackers (via GET or POST data) to run malicious code on the server like getting passwords or other sensitive data. This is called PHP or SQL Injection.
Failure to validate inputs is called stupidity, GET/POST buffer overflow is a little different but I seriously fail to see why the abilty to dynamicaly add a method suddenly makes this more easily happen if you can tell me an example of this I`ll happily listen.
2006-02-21 4:17 amkamper
if you can tell me an example of this I`ll happily listen.
You have some user input that you’re going to inject into a sql query, a search term or something. You have a function that validates the input and returns a boolean and you rely on this function to make sure the input isn’t malicious. But you accidentally pass $serach_term to the function, which somehow results in a pass while $search_term (what you really wanted to validate) is malicious. You then proceed to build the query using the correct but malicious search term.
Sure, it’s a contrived example and lots of common sense things could prevent this, but the point is that any time your code starts doing something you didn’t expect it to, you have zero guarantee that it’ll be safe.
2006-02-21 6:02 amkamper).
2006-02-21 7:04 amShane.
2006-02-21 9:49 amAnonaMoose.
2006-02-21 12:25 pmmadhatter
I didn’t want to say anything against the dynamic of PHP,
just wanted to say you don’t really need to hack the (whole) server.
Additionaly it’s not that easy to avoid Injection, projects like phpBB have/had a lot of security problems because of Injetion.
P.S.: I’m not against PHP or something – in fact I like and use PHP but PHP has also it’s disadvantages
I was a PHP programmer for a few years, but now I just find it boring and tedious.
I converted to Ruby 6 months ago, its so delicious. Its like C++, Smalltalk & Perl rolled up into this candy bar that is almost perfect. (nothing is perfect)
The rails development just blew me away, and I think PHP will find it hard to mimick a similar framework based purely on their design of the OO methodology.
I won’t go into details as to why, you’ll just have to take a look and see why PHP is so old school.
Checkout and view the 15 minute demo on building a weblog. It doesn’t teach you how powerful Ruby is, it will illustrate the power of rails which sits on top of Ruby.
PHP is not even close to this functionality and its what ultimately sold me to be a complete convert.
2006-02-22 12:51 amsirwally
“I converted to Ruby 6 months ago, its so delicious. Its like C++, Smalltalk & Perl rolled up into this candy bar that is almost perfect. (nothing is perfect)”
I’m sorry, my brain is stuck in a infinite loop repeating “& Perl” intejected with the word “Perfect”. I guess it’s not used to seeing those two words in the same sentence. 😉
I’ll stop trolling/cracking lame jokes now, for this is about PHP, not Perl, although both languages do lend themselves to being horribly tortured by programmers (or are they torturing the programmers, I’m not sure).
FWIW, I wrote PHP & Perl for a number of years. I’m glad those years are far behind me.
Long live C# 😉 … at least until something better come along.
Which hosting providers provide these languages?
I found “dreamhost” for Ruby and PHP5.
I currently use GoDaddy and PHP4. I asked them about the availability of PHP5 and got the “I don’t know when” answer.
PHP4 is okay and I created a nice web architecture in it, but I wouldn’t mind investing either in PHP5 or a more powerful language.
I just need a place to host the websites for a newer PHP or alternative language.
I could just about predicted that a new RoR convert would have to post here in this thread and tell us all that RoR has changed their lives. Newsflash: yes, we’ve all heard about Rails by now. Some of us have also used it too. It’s getting a bit to the point of how much linux gets mentioned in threads about other OSes.
Since Rails *has* been mentioned though, I’ll suggest that people have a look at CakePHP (cakephp.org) if they want to have a similar framework, but one built on top of PHP. CakePHP was inspired by Rails, but the developers are evolving it separately. The framework is pretty young and things are still changing rapidly but I found that the framework is useable already.
There are other Rails-like PHP frameworks around, but I haven’t tried them yet. Maybe someone could give us a comparison?
I’ve been playing with PHP for about 2 or 3 years now. In comparison to the other big boys out there, I find PHP just too, too loose! For example, declaring variables. PHP allows you to create variables almost everywhere inside the code. For me, this is a no-no, force people to declare their variables and specify a type. Makes poor programmers.
Anyone else? | https://www.osnews.com/story/13738/going-dynamic-with-php-v5/ | CC-MAIN-2019-09 | refinedweb | 2,684 | 69.21 |
On 5/29/06, Georg Brandl <g.brandl at gmx.net> wrote: > Fredrik Lundh wrote: > > > > is there some natural and obvious connection between generators and that > > number that I'm missing, or is that constant perhaps best hidden inside > > some introspection support module? > > It seems to be a combination of CO_NOFREE, CO_GENERATOR, CO_OPTIMIZED and > CO_NEWLOCALS. > > The first four CO_ constants are already in inspect.py, the newer ones > (like CO_GENERATOR) aren't. All these constants are declared in compiler.consts. Some are also defined in __future__. It would be better to have them all in a single place. > I wonder whether a check shouldn't just return (co_flags & 0x20), which > is CO_GENERATOR. Makes more sense. n | https://mail.python.org/pipermail/python-dev/2006-May/065410.html | CC-MAIN-2021-49 | refinedweb | 116 | 61.63 |
span8
span4
span8
span4
Hello,
I am trying to develop a startup python script, that will look at an incoming file and list the feature types that are within that file. Then compare against another list of mandatory feature types. Essentially I would like to make sure that required feature types exist in the file. I am fairly new to Python, and still learning, but any guidance would be helpful.
import fmeobjects
import fme
class FMEFeature(object):
value = fme.macroValues['SourceDataset_AUTOCAD_OD']
fme.getFeatureType(value)
#unsure what to do next
Thank you!
Hi @david_prosack88,
You could use the Schema (Any Format) reader to read the feature type from both files and compare, no python needed.
Hope this helps,
Itay
Answers Answers and Comments
16 People are following this question.
ArcPy "import" statement tries to perform import action? 5 Answers
Python Script will not run as Startup Script but runs in IDLE 1 Answer
Is it not possible to use the multiprocessing library within python? 4 Answers
passing a date from python as a published parameter 2 Answers
problem returning python generated value to published parameter 3 Answers | https://knowledge.safe.com/questions/85086/startup-script-that-reads-features-in-file.html | CC-MAIN-2020-05 | refinedweb | 186 | 63.19 |
Stress-Test Rendering to Optimize Performance
In many financial and medical applications, rendering multiple charts on a single screen with near real-time data is a standard practice. You need to ensure that your UI tools can match the demands of these high-stress data scenarios — where multiple charts are updating data in real-time as fast as the eye can see. In this lesson, you’ll see how the Infragistics Ultimate UI for Xamarin data chart performs under this level of stress on standard tablets and phones, and how you can tweak the charts to optimize performance..
Lesson Objectives.
The steps you’ll perform to do this are:
- Define the data source
- Create a single chart
- Test the solution
- Add charts to the solution
- Retest the solution
For more information on the control used in this lesson, see the Xamarin Data Chart Control page.
Step 1: Setting up the Project
You can download the project used in this lesson by clicking here.
Then, to run the projects for the first time after un-zipping, load the entire solution into Visual Studio, right-click on the Solution, and select Restore, browsing a Single Chart
The first step is to open the ChartStressTest.xaml file that is located in Views. This file contains pre-defined code for the Infragistics Charts namespace, as well as a grid with row and column definitions. With this sample, you will meter how much data is used by the app. You could also do this by using the Infragistics Toolbox to drag and drop the chart requirements with XAML.
In the Grid, insert the following code segment to define the category chart.
<igCharts:XamCategoryChart x: </igCharts:XamCategoryChart> <Label Text="{Binding Path=FpsText}" Grid.
This code segment creates a chart using the specified data source. By default, markers are enabled and this code segment disables them. We also create a label using the FpsText data to test the number of frames per second (fps).
Step 3 Test the Solution
After configuring the chart, CHART – STRESS TEST.
The line chart will be generated. The frames per second (fps) and amount fields will be updated at the bottom. In our example, there are approximately 1,000 data points per series being used, and the chart is being displayed at over 40 fps.
Step 4 Adding Charts to Solution
To demonstrate the capabilities of the stress test, we’ll incrementally add more properties to an additional chart in the app. Add the following code segment after chart1.
<igCharts:XamCategoryChart x: </igCharts:XamCategoryChart> <igCharts:XamCategoryChart x: </igCharts:XamCategoryChart> <Label Text="{Binding Path=FpsText}" Grid.
The above code segment adds two charts to the view, and defines the Y-axis and X-axis defaults. The third chart also defines the default values for the Y-axis for the chart, which minimizes the amount of work the chart has to do at runtime. We also need to modify the label text property to be displayed on the third row.
Step 5 Retest the Solution
Deploy the solution to your emulator or device to display the newly created charts. Notice that the first two charts create a Y-axis that is variable to the data in the source, and the third chart is automatically set to 100 from the defined properties. After the chart loads, the frames per second will average approximately 25-30 frames per second, regardless of using portrait or landscape mode.
Conclusion
The XamCategoryChart enables you to use several charts with multiple data sources in a single view without sacrificing performance. | https://www.infragistics.com/products/xamarin/run-fast/stress-test-rendering | CC-MAIN-2017-39 | refinedweb | 590 | 51.78 |
while back I got into a debate with a good friend of mine – Mitch Garvis – President of the Montreal IT Pro user-group. He’s a hands on Small Business Server guy with plenty of practical real world experience with the product and with various customer requirements. We were having a few drinks with a bunch of people after an event one night and I overheard him mention installing Small Business Server with the DNS namespace ending in a .local top level domain. That is to say – if I owned canitpro.ca public DNS namespace, the default SBS install suggests I install Active Directory with a canitpro.local DNS namespace.
This intrigued me – why do this? I’ve been designing AD and DNS namespaces for small, medium and enterprise organisations since the product was in Beta and working with DNS and internet connected systems way before then. It doesn’t make sense… A non routable domain suffix? How could I let Mitch pass out this recommendation to someone without asking – WHY? So I asked, and opened the proverbial can of worms….
Mitch – Because it’s default… it’s simple.. and it works… Why would I change it?
OK. I had to speak up. My main concern for recommending someone use a .local is that it is not routable and never will be routable on the internet. This is both a good thing (security some say) and bad thing (what if I want to talk to someone on the internet directly without jumping through hoops). If you have a server that has a FQDN (Fully Qualified Domain Name) that ends in a .local – someone who needs to get back to you, can not – without extra work. This isn’t a big deal for internet email (if you don’t mind the extra work), since you can edit the MX records to point to a properly formated, internet resolvable name that just happens to correspond to your IP where your firewall / router / ISA server will accept the incoming SMTP request and switch-er-oo the addressing info to the proper info and pass it on through. This isn’t the case for other technologies coming down the pipe… maybe they will work with the extra work, maybe they wont… Here are three things that I foresee as being problems for you if you decide to use .local
My questions back to him (and you) is – do you own your own internet based DNS namespace that ends in a properly resolvable top level domain (like canitpro.ca or canitpro.com)? Why not save yourself all that extra work and set up your SBS server environment in such a way as to future proof yourself for DNS namespace and FQDN name resolution headaches. What could you use? Following the K.I.S.S. (Keep It Simple S{fill in here}) principle – if you own canitpro.ca, name your internal AD name space ad.canitpro.ca or corp.canitpro.ca or whateveryoulike.canitpro.ca… You control the namespace, you call it whatever you like. Sure it will make your user names slightly longer (rick@ad.canitpro.ca) – but you can fix that with a couple of simple post install steps.
After some back and forth amongst Mitch, myself and the other table participants – we all came to the agreement that it made sense to use proper DNS naming conventions that are routable and controllable by yourself in an effort to reduce the extra work that would be coming down the pipe as the company grows. I mean hey – what small business wants to stay small forever, right? Does this mean you should run out and re-install all your SBS installations or reinstall your personal one? You would have to evaluate the Pros and Cons for that one, since it would take a big chunk of planning to determine the impact. Don’t worry – you can continue to live just fine with a .local implementation, provided you are ready for the extra work that lays ahead.
Please don’t take this post as a slight to the SBS community or Development team for using a .local DNS name as a default install choice. IT IS NOT. Likewise – Small Business Server is a True / Real server OS that is an extremely integrated and powerful solution that grows to accommodate up to 75 users. It is not a “lite” version of Windows Server 2003. Trust me – I respect SBS and the user community that supports it – they know their stuff.
Disclaimer: This discussion was over beer, amongst friends, peers and fellow geeks. It was around DNS naming conventions and best practices for DNS namespace with debates on both sides of the house in good faith. Who won and who lost? I think I picked up the tab, but Mitch and others in attendance now use a different approach to namespace design. You make the call.
I just posted a lengthy discussion around the question that comes up when one installs Small Business...
I had discussion with a new client recently about her in-house computer guy. It was actually a conversation about 'where should we go from here' and every time I asked why they were doing something she would say 'because my computer guy told me this is the only way to do it.'
I told her there were a couple of different types of 'computer guys...' the ones who were willing to learn from others and to keep up with the ever-evolving industry in which we earn our living, and the ones who are not willing to learn, and are lucky enough to have jobs where their bosses also don't keep up and are willing to pay them a salary to stay at the level they were at when they were hired.
I always want to be the first guy. I do not want to be like some of my peers of old who think that IPX/SPX is simpler than TCP/IP, and who are happy earning their keep fixing simple setups. When I am unwilling to learn from people who know more than me then it will be time for me to find another profession, because from what I can see the IT industry is not planning on staying in one place for long.
Thanks to Rick for opening this SBS guy's eyes to routable DNS issues, and thanks to my colleagues and peers who challenge me every day to learn more. I only hope that you are able to learn from me from time to time too!
I'm having a hard time understanding what this "extra work" that keeps getting mentioned. The really "cool" thing about SBS is that there is no extra work. There's a wizard for everything. I name the DNS .local, run the CEICW wizard and voila, each user has a .com or .net or whatever is appropriate to match the FQDN, Exchange is all set up, no muss, no fuss. No if you can call, creating the MX record to match your external facing IP "extra work" then how is that different that admittedly extra work the OP mentions are required if you choose the FQDN for your SBS. "Sure it will make your user names slightly longer (rick@ad.canitpro.ca) – but you can fix that with a couple of simple post install steps."
If the SBS AD domain is given the FQDN, then extra work will be required because the Admin will have to create records in the DNS so that internal users can get to the company's externally hosted website. And there is no wizard for that. So actually choosing the FQDN does create extra work for the SBS Admin.
The DNS in SBS is meant for internal name resolution only.
I appreciate the OP's kind words about SBS and its community.
But its the suggestion's made here, i.e. not accepting the default recommedations for the domain, IP, etc, and not using the wizards, that lead to many, many questions for fixing issues that are caused in the many SBS community forums.
Cris Hanna
SBS-MVP
Between the arguments over "IT", and...
The whole topic is a load of BUNKUM. This _has nothing_ to do with SBS, it is a pure AD question.
The OP is wrong. Your AD DNS namespace should in no way be connected to any other namespace and there is no advantage in making it so.
Turn the question upside down. Rather than asking yourself 'I have this namespace, why shouldn't I use it?' ask yourself 'I am about to create a namespace which will be solely used internally to my AD, is there any reason why this namespace should be in any way related to my public namespace?'
I look forward to continuing discussion.
I never mentioned this post or the one on the public newsgroups was solely about SBS. I merely mentioned that when choosing a DNS namespace for your AD design, you need to considered a lot more then just taking a .local because it's default and non routable.
You kind of reinforced my point with your comment "...I am about to create a namespace which will solely used internally to my AD, is there any reason why this namespace should be in any way related to my public namespace?" the answer is YES. If you take the short sighted approach that your AD will never need to be referenced outside your network, you are selling your design choice short and potentially limiting your options.
All my examples in the post related to needing access information and AD stuff outside (ie: SSL certificates, Mobile 5 Push email, Active Directory Federation Services with partners). These are the types of questions I get from SBS professionals who deploy and support SBS and have made the .local choice. There are work arounds, but they could have been avoided if they chose a sub dir option with a split managed DNS zone.
I'm not saying your way is wrong or my way is right - it's all a matter of perspective and what's right to the customer.
Rick
I love reading the intelligent back-and-forth between professionals. I was completely onside with SuperGumby until Rick gave me reason to change my mind. I did not do this solely because I respect Rick and his experience - I do, but I know we disagree on other matters. I changed my mind because Rick made a lot of valid points and touched on some technology pains that I had already encountered - yes this works but it's more work. I am not an SBS-MVP but Microsoft does consider me an SME. As such I know the easy way to do things and I can assure you I know the hard way too. I do not particularly like building systems on workarounds which is why after my conversation with Rick I went back to my lab and built a test environment with a .com extension. Lo and behold some of my pains disappeared. I learned something new and yes, the next time Rick and I were together *I* bought the drinks! M
sorry if I misunderstood but a heading of 'To use a .Local or real internet DNS name for SBS (Small Business Server)' seems, to me, directly aimed at SBS. go figure.
More to the point. What problem with SSL certificates? What problem with Mobile 5 push? You may have an edge on me in the area of ADFS, but what exactly is the problem? All questions asked in light of .local vs FQDN based AD DNS, nothing yet about SBS.
But I'd also like to point out: SBS dev chose .local, it turns out to be a bad choice. Shortly _after_ SBS Dev chose .local other OS's (it's not only OSX, some linux variants are also involved) started treating .local in a non-standard manner. Due to this I have, for some time, used and promoted the use of .lan (hoping that no-one decides this also needs special handling).
But I do go further. I believe using an FQDN based DNS for your AD is so wrong, and that so many people make the mistake, that I want to thrash it out with someone, anyone, who can give me a valid reason for so naming your AD. Though I have been involved in such discussion for several years _not once_ has anyone come up with such 'valid reason'.
If I wish to make something inside my AD available publicy I point a name from my FQDN zone to the IP of my firewall (generally ISA, not that it matters much), the firewall then passes requests to my AD resource. I can, but normally don't, also host the FQDN zone.
As indicated in the newsgroup. I'd really much prefer the discussion to return to that forum, and I'd really like your participation. I hope to persuade you to stop doing what I consider a bad thing but maybe you can change my mind.
The cert issue has nothing to do with AD naming though. Exchange naming is not tied to AD naming.
Again.. you have to get up to R2 EE before you even get ADFS bits to play with.
99.99% of the SBS customers will never see these issues.
I have a self signed cert now that has no relationship with how my domain is named. In fact you don't even have to have it resolve to a proper name at all..it works with an IP address.
The only issue I currently have with self signed certs is that Vista slightly barfs on them, but other than that, there's no restrictions.
If you got a question about ADFS from a SBS customer they are probably also the ones complaining about the fact that SBS 2003 R2 has no real Windows R2 bits and they can't do quotas and DFS.
There's a white paper coming out with Mobile 5 and SBS in fact.. I'll ping it up here when it's live.
SuperGumby is right on the money here. The .local namespace is an issue as he pointed out. Besides that, there is no technical justification for exposing your internal AD or tethering your AD namespace to your external presence.
The fact that your internal AD is named myAD.xyz does not inhibit your ability to present yourself as Rick@mycompany.co.whatever. Your exchange infrastructure is not married to your internal AD namespace in any technically rigid way. If this were so, there would have been no business built around the concept of Hosted Exchange.
So, while your discussion may have been technically fulfilling while it lasted, I am afraid it may have been for nought as it appears to me that there really is no "issue" here to be addressed in the first place.
I got an email from Kenrick Robertson a week or so ago about problems he’s been having trying to get...
Interesting discussion. Here's my 2 cents.
1. By using a registered, globally unique domain suffix you ensure that there are never going to be any interoperability issues with other AD implementations that may have chosen the same name (e.g. activedir.local).
2. If you use a subdomain of your existing, registered domain (e.g. ad.myco.com) then the integration with your existing DNS namespace is going to be easier.
Tony
MVP - Directory Services
(Originally posted March 5, 2006) One of the perks to my position is that I have had the opportunity
One of the perks to my position is that I have had the opportunity to make friends with some great people
PingBack from | http://blogs.technet.com/b/canitpro/archive/2006/03/05/421256.aspx | CC-MAIN-2014-10 | refinedweb | 2,639 | 71.55 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Basic Authentication: Part 33:03 with Naomi Freeman
In this video, we'll continue to build out the HTTP Basic authentication method for our base API controller and apply the current user pattern.
Code Samples
With our authenticate method complete, we're going to apply the
current_user pattern. In the authenticate method, we'll set a
@current_user instance variable on successful authentication:
if user && user.authenticate(password) @current_user = user
We can then access that variable in another method in the base API controller:
def current_user @current_user end
Since these methods are in the base API controller, this makes the
current_user method available in the todo list and todo items api controllers. We use that to scope our queries. For example, here is our new show method in the todo lists controller:
def show *list = TodoList.find(params[:id])* render json: list.as_json(include:[:todo_items]) end
Becomes the following:
def show list = current_user.todo_lists.find(params[:id]) render json: list.as_json(include:[:todo_items]) end
We repeat that pattern in the todo items controller as well.
- 0:00
Our API's authenticate method is almost there.
- 0:03
It currently authenticates the user and writes a log message on success or
- 0:07
failure to log in.
- 0:09
The problem though is that we have no way to access that user outside of the login.
- 0:14
So what we're going to do is repeat the current user pattern established
- 0:18
elsewhere in the application.
- 0:19
If we open up our application controller and
- 0:23
look at the current user method in here, you'll see this is pretty complex.
- 0:28
Ours doesn't need to be this involved, but we will implement something similar.
- 0:33
Now if we go back into our API controller, in our authenticate method,
- 0:37
if the authentication is successful, let's set a current user instance variable.
- 0:43
That will make the current user instance variable available to
- 0:46
all of the methods in our subclass controllers of the API controller.
- 0:50
Now, so we don't have to write that all the time,
- 0:52
let's create a method called current user, which returns that instance variable.
- 0:57
So, above our authenticate method, let's create a current user method.
- 1:03
And place the instance variable inside.
- 1:04
[BLANK_AUDIO]
- 1:08
Even though we already have a current user method defined in the application
- 1:12
controller, our API controller inherits from it.
- 1:16
Therefore, this current user method is going to overwrite that method for
- 1:20
the API controller and its subclasses.
- 1:23
Now, let's do a quick test.
- 1:26
Open up the API ToDoList controller file.
- 1:31
And in the index method, let's just see if we can access the current user.
- 1:36
[BLANK_AUDIO]
- 1:45
Now, let's try and CURL and pay attention to the log.
- 1:49
[BLANK_AUDIO]
- 1:55
So we've returned all the ToDoLists, and
- 1:57
in the log we can see that the correct user is currently logged in.
- 2:03
This will take care of our API returning all ToDoLists and
- 2:06
also fix the bug where the user ID is blank when we create ToDoLists.
- 2:10
Now let's replace all instances of ToDoLists with current user dot ToDoLists.
- 2:14
[BLANK_AUDIO]
- 2:30
And let's try another CURL and make sure it works.
- 2:32
[BLANK_AUDIO]
- 2:41
We can see the response of the ToDoLists that don't belong to me are gone.
- 2:45
Great.
- 2:46
Finally, let's go into the ToDo items controller and
- 2:49
make sure that this one is up to date as well.
- 2:52
In the fine ToDoList method, we'll only need to change that once since
- 2:55
the rest of the queries are already scoped to the list
- 2:58
[BLANK_AUDIO]
- 3:02
Great job. | https://teamtreehouse.com/library/build-a-rails-api/authorization-and-authentication/basic-authentication-part-3 | CC-MAIN-2019-09 | refinedweb | 693 | 71.14 |
I would like to get all residues that are close to any chosen residue in my pose. (I need a list of all residues whose side chains could potentially interact with the side chain of my chosen residue.)
I’m guessing the information is available in `pose.energies().tenA_neighbor_graph()`, but I can’t seem to access it in PyRosetta.
calling:
for l in pose.energies().tenA_neighbor_graph().get_node(1).edge_list_begin():
pass
returns:
TypeError: 'EdgeListIterator' object is not iterable
and the `graph::Node` does not seem to expose an array or list or linked-list of edges of each node.
Creating a Vector1 (as suggested here) gives the same error.
`rosetta.Vector1(pose.energies().tenA_neighbor_graph().get_node(1).edge_list_begin())`
Is it possible to iterate over the edges in a node in PyRosetta?
Thank you for your help & best regards,
Ajasja
Is there any reason you want to use the neighbor graph iteration to do this?
Probably the easier way to do it would be to do a double for-loop iteration over the residues in the pose:
for ii in range(1,pose.total_residue()+1):
for jj in range(ii+1,pose.total_residue()+1):
if not is_neighbor(pose, ii, jj):
continue
#Do whatever you want with neighbor residues
You would have to write the is_neighbor() function yourself, but this allows you to more accurately control how you classify neighbor residues. Rosetta uses a definition that's optimized for the particular use case in the energy function definition, but you may prefer a slightly different one.
I wanted to reuse the existing calculation already performed by PyRosetta. (Also since it’s in C++ it’s probably faster than doing this at the Python side. If this speed difference matters is of course another question.)
Perhaps I will have to “manually” implement neighbourhood detection in the end. Since my next question was how to get a neighbourhood graph at an arbitrary distance not just at 10 or 12 Angstrom :) The custom approach would also enable me to have a separate radius defined for each residue.
I did also find a hacky workaround using ` graph.get_edge_exists`
def get_close_resids(pose, target_resid):
graph = pose.energies().tenA_neighbor_graph()
resids = []
for r in range(1,graph.num_nodes()+1):
if (r <> target_resid) and graph.get_edge_exists(target_resid, r):
resids.append(r)
return resids
Yeah, the edge_list_begin() returns a C++ iterator, which doesn't mesh well with the Python iterators in for loops. There's not really a way exposed to iterate through the edges on the Python level. The workaround you've come up with is probably the best way to do things if you want to extract the information out of the existing tenA_neighbor_graph. | https://rosettacommons.org/node/3803 | CC-MAIN-2022-27 | refinedweb | 442 | 56.05 |
note voj <p>I just wanted to facilitate the use of a Module and avoid exorting into the caller's namespace. In particular, the Module does not just call the method but does some processing to pass it the right parameters. The same could be done with an export:</p> <code> use My::Module qw(call); call(\&foo); sub foo { ... }; </code> <p>I am not that familiar with the processing phases BEGIN,INIT,END, that's why I asked. Is there a way to call the method at the END of execution of the caller?:</p> <code> use My::Module call => 'foo'; sub foo { ... }; # foo is called at the end </code> 992164 992172 | http://www.perlmonks.org/?displaytype=xml;node_id=992256 | CC-MAIN-2015-18 | refinedweb | 113 | 73.78 |
Intro
Recently, after totally falling for Dave Rupert's YouTube thumbnail (on Twitter) experiment, I discovered his bookshelf which I really love!
As a reader (my day job is at a public library) I use Goodreads to keep track of which books I've finished and to give quick ratings to them. So, I thought that if Goodreads has a public API I could use this to practice getting and displaying data on my static, eleventy powered, site 👍.
Getting Started
As I was planning for this to be a public page on my website (which is already a git project), I didn't need to create a new project directory or initialise/initialize it with git.
Instead, I created a new branch on git - by typing:
git checkout -b bookshelf
This command is shorthand and will both create and checkout the new branch (
bookshelf is the name which I assigned to this branch). It is the same as the following two commands:
git branch bookshelf git checkout bookshelf
This way I was ready to work on the new branch, and could commit and push changes without directly affecting my live site.
My site begins life as a JavaScript Node.js project, which uses npm as its package manager.
The API
First, I found that Goodreads does have an API, so I checked the docs and found that I would probably need the reviews.list method. This method will "Get the books on a members shelf."
To do this I needed to get an API key from Goodreads too. As a member already all I needed to do was log in to the site and request a key.
Keeping the API Key Secret
I was also aware that it is best practice to keep API keys a secret in production code. This is so that they cannot be found and potentially abused - the Goodreads key is unlikely to be abused because the API is a free service, but it is still best to adhere to best practices and be in the correct habits.
One way to keep API keys a secret is to use a
.env file which is configured to be ignored by Git. To do this I installed the dotenv package and placed my API key into the
.env file in a key/value format:
// My .env file format: GRKEY='API-Key-goes-here'
To make sure the file is then ignored by Git, I included a reference to it in my
.gitignore file as so:
// My .gitignore file format: node_modules dist .env ...
The intro to the dotenv package says:
Dotenv is a zero-dependency module that loads environment variables from a
.envfile into
process.env.
This means that I could now access the
GRKEY within my project by referring to
process.env.GRKEY.
You do also have to
require the module and call the
.config() method in the file where you'll be accessing it, I think, as so:
const dotenv = require('dotenv'); dotenv.config();
Making a Request to the API
At this point I wanted to make a HTTP request to the API and confirm that it was returning the information I needed for the bookshelf. I have used the node-fetch package once before to make an HTTP request so I used it again in this instance. Essentially the package brings the functionality of the fetch Web API to Nodejs.
The static site generator I use, eleventy, has a great set up for working with data fetched from API calls just like this one. There is more information in the eleventy docs about handling data in an eleventy project.
From reading these docs I knew that I needed to create the file which will make the API call within the
_data folder, and that I needed to use
module.exports to make the data available to use in the rest of the site's files. I created my file:
_data/bookshelf.js and made the API call, with a
console.log to see the response. Like so:
module.exports = async function() { await fetch(`{id}&shelf=read&key=${key}`) .then(res => res.json()) .then(result => { console.log(result) }; }
For the URL you can see that I've used a template literal and included three queries. The
id query and a
key query are dynamic values (they are declared above this
module.exports function).
The
id is my Goodreads id number, like a unique identifier for my Goodreads account - I got this from logging in to my Goodreads account, clicking on 'My Books' in the menu, and then checking the URL. For example my URL at this point looks like this:
So that last part is my Goodreads id.
The
key is referring to my API key.
And the third query is
shelf which I have set to
read, because I only want to return books which I have already read and not those which are on my 'DNF' (Did Not Finish - the shame) or 'TBR' (To Be Read...) shelves.
Now, when I ran the eleventy build command in order to run the code and see the result, the result was not what I expected. There was a error in the log! I don't recall the exact error now, but I could see that it was the
.json() call which I had made to parse the result as a json object which had caused the problem.
After consulting google, I found that the Goodreads API does not respond with json but instead with XML. At this point I also found Tara's post about using the Goodreads API to choose which book to read next, which I'm so glad I found because it really helped me! Tara's HTTP request was a little different from mine because she'd used the request-promise package.
After reading Tara's post I knew that the Goodreads API would be returning XML, and I also learned that I could use the xml2js package to convert the XML response to json! 🎉
After installing and including xml2js, I edited my
bookshelf.js file:
module.exports = async function() { await fetch(`{id}&shelf=read&key=${key}`) .then(res => res.text()) .then(body => { xml2js.parseString(body, function (err, res) { if (err) console.log(err); console.log(body); }; }
When I ran the code again by running the eleventy build command I didn't see an error but a quite a complicated looking object instead! Perfect.
Accessing and Returning the Data
From there I could access the data, iterate over it with a
for loop, assign those parts that I needed for the bookshelf to another object and then push that object onto an array which I would return.
By returning the array of objects I would make this data available to be used in my other project files.
After working out the structure of the data from a few more API calls and
console.logs, my
module.exports inside
bookshelf.js ended up looking like this:
module.exports = async function() { let books = []; await fetch(`{id}&shelf=read&key=${key}`) .then(res => res.text()) .then(body => { xml2js.parseString(body, function (err, res) { if (err) console.log(err); console.log('Getting Book List from GoodReads API'); let bookList = res.GoodreadsResponse.reviews[0].review; for (let i=0; i < bookList.length; i++) { books.push({ title: bookList[i].book[0].title[0], author: bookList[i].book[0].authors[0].author[0].name[0], isbn: bookList[i].book[0].isbn[0], image_url: bookList[i].book[0].image_url[0], small_image_url: bookList[i].book[0].image_url[0], large_image_url: bookList[i].book[0].large_image_url[0], link: bookList[i].book[0].link[0], date_started: bookList[i].date_added[0], date_finished: bookList[i].read_at[0], rating: bookList[i].rating[0] }) } }) }).catch(err => console.log(err)); return books; }
Looking at it again now I think I am error checking twice, which is probably not necessary 🤦♂️.
The result of that code is that I now have access to a global data array:
books, which contains each book I have on my Goodreads 'Read' shelf as an object with title, author and other useful info. An example of the data I now had is below:
[ { title: 'Modern Web Development on the JAMstack', author: 'Mathias Biilmann', isbn: , image_url: , small_image_url: , large_image_url: , link: '', date_started: 'April 28 2020', date_finished: 'May 02 2020', rating: '5' }, { // Another book }, { // Another book }, ... ]
Tidying the Data
You may notice from that example that the entry 'Modern Web Development on the JAMstack' does not have an isbn or any images. Data is rarely perfect, no matter where you get it from, it is likely to have some missing items or anomalies.
In this example - that book is an online published book and so does not have an ISBN number. This also means that, although Goodreads use an image of the cover on their website, for some reason they are unable to provide that image via their API.
This was the case with about 3 or 4 of the ~20 books in my data. Some had ISBN's but no images.
I looked in to other APIs for book covers which are available and found a few:
I have a sneaking suspicion Amazon may be the best bet for image quality. However, to keep the project simple, and because it resonated with me more, I attempted to use the Library Thing API but it didn't seem to work 😭.
At this point I wanted to get the bookshelf up and running, so instead of configure a new API, I decided to instead host any book cover images that weren't returned automatically by the Goodreads API on my own website. This would work for me because the site will only update when I've finished a book and added it to that shelf (so I can always double check an image has come through and then add one if not).
In order to add those images that hadn't come through I needed to decide on a naming convention that could be referred to easily. I decided that I would name my images in 'spinal-case'. To be able to refer to them I would need to add one final item - the title in spinal-case - to the object that I was creating with each API call.
For example, to be able to refer to the image saved for 'Modern Web Development on the JAMstack', I would need the object to include a field called 'spinal_title' which contained the value: 'modern-web-development-on-the-jamstack'. To do this I added the following function to
bookshelf.js:
function spinalCase(str) { str = str.replace(/:/g,''); return str .split(/\s|_|(?=[A-Z])/) .join("-") .toLowerCase(); }
This function also removes any colons (':').
Then in the object within the API call itself I could also add the following field:
spinal_title: spinalCase(bookList[i].book[0].title[0]),
This references the book title but calls the
spinalCase() function so that the title is returned in spinal case.
For this personal project this approach works, but I think another solution would need to be found depending on the project. For example in the above case my
spinalCase() function actually returns
...on-the-j-a-mstack, so I actually had to rename the file to match it properly.
Displaying the Data on the Site
I won't go in to too much detail about how the templating system works. There's a great css-tricks post about nunjucks, which is the templating language I am using here. Eleventy (can't fault it!) is also a great static site generator because you can use any templating language with it, as mentioned, I use nunjucks.
The following code references the data returned from
bookshelf.js as the array
bookshelf, and iterates through it displaying each item as specified in the template. To do that I use the nunjucks
for i in item loop, in my case
{% for book in bookshelf %} - that way I can refer to each object as
book.
<div class="wrapper"> <ul class="auto-grid"> {% for book in bookshelf %} <li> <div class="book"> {% if '/nophoto/' in book.image_url %} <img class="book-cover" src="/images/book-covers/{{ book.spinal_title }}.jpg" alt={{book.title}}> {% else %} <img class="book-cover" src={{book.image_url}} alt={{book.title}}> {% endif %} <p class="font-serif text-300 gap-top-300 low-line-height">{{book.title}}</h2> <p class="text-300">{{book.author}}</p> <p class="text-300"> {% for i in range(0, book.rating) %} ⭐ {% endfor %} </p> <p class="text-300 gap-bottom-base"><a href={{book.link}}>On Goodreads↗ </a></p> </div> </li> {% endfor %} </ul> </div>
As you can see it is a lot like HTML, but with the power to use logic and reference data. That logic and data is processed at build time and the resulting HTML page is used to build the site.
One interesting part is how I rendered the star rating. Nunjucks is super powerful, you can use lots of different techniques with it. In this case I use the range function.
{% for i in range(0, 5) -%} {{ i }}, {%- endfor %} // 12345 // In my own case, where book.rating == 4: {% for i in range(0, book.rating) %} ⭐ {% endfor %} // ⭐⭐⭐⭐
Merging Branch and Pushing to Live Site
In order to complete this project I needed to merge the branch
bookshelf with the
master branch in git. I did this via the GitHub website.
After running my final commit and push in the terminal, I went to the project on GitHub where I created a Pull Request to be able to merge the two branches.
One Last Thing to Do
Before doing this there was one other thing I had to do though. My site is built and hosted by Netlify. If you recall that I was keeping the API key secret, and so git was ignoring it, you might also see that when the site files merge and Netlify tries to build the site, it would not have access to the API key.
Luckily Netlify provides a way to add environment variables right in their dashboard. So I was able to add the API key here, where it will stay a secret but will be accessible during the build of the site.
The Finished Product and Next Steps
You can view the result on the bookshelf page on my personal website. I would love to hear what you think?
As with all projects I think that this can be improved upon and I will likely look for ways to update it soon, or if I receive any feedback from people who see it.
One idea that comes to mind is to configure the site to rebuild each time I add a book to my 'Read' shelf on Goodreads without my own input. To do this I'd likely need to add a build hook in Netlify.
Outro
This has ended up being a longer post than I envisioned, but I guess quite a lot of work goes into getting data from an API and using it or displaying it elsewhere. Thank you if you have read the whole thing! Let me know what you think?
I decided to do this project to learn more about API calls and displaying data, and I think I've achieved that goal. As usual with webdev there is always more to learn!
Discussion (11)
This is great! I did something very similar with Goodreads on my own site, (write-up here).
Yours has a way better level of polish, nice. Nice to know the impulse to get that stuff out of Goodreads and into my own home site isn’t mine alone.
Thank you! Yours is so cool! Really enjoyed reading about how it works, and love that you have a record of books (both read and tbr - you read a lot!), but also films and articles too.
I need to learn more about servers like digital ocean and docker etc. Maybe creating a personal API like yours is the way to do that!
Thanks!
Yeah I've definitely learned a lot by constantly tinkering with this system.
I am pretty surprised! I made a movie recommender just like you(though yours is a book) using TMDb API.
Ah, that's a great idea too! Is the code available on GitHub? I'd love to have a look?
Thanks!
Here it is: github.com/Muhimen123/PyMovies
👍 Thanks! I'll take a look!
great wite-up zachary. thanks for taking the time to do so. the book images aren't showing up for me and i'm guessing they should be? (i tested mobile and dektop on brave and safari)
Thank you! And thanks for taking the time to read it!
Ah - since writing this post I have made a few changes to my live site's bookshelf. I think I was having trouble finding a reliable book covers image API and also thought only titles and my reviews might look a bit cleaner. I may bring the images back again at some point!
Thanks again!
Like your way of story telling
Thanks v much, glad you liked it! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/zgparsons/using-the-goodreads-api-and-11ty-to-create-an-online-bookshelf-han | CC-MAIN-2021-21 | refinedweb | 2,842 | 72.76 |
csBSPTree Class Reference
[Geometry utilities]
This BSP-tree is a binary tree that organizes a triangle mesh. More...
#include <csgeom/bsptree.h>
Inheritance diagram for csBSPTree:
Detailed Description
This BSP-tree is a binary tree that organizes a triangle mesh.
This tree will not split triangles. If a triangle needs to be split then it will be put in the two nodes.
Definition at line 46.
Build the BSP tree given the set of triangles.
Clear the BSP-tree.
The documentation for this class was generated from the following file:
Generated for Crystal Space 2.1 by doxygen 1.6.1 | http://www.crystalspace3d.org/docs/online/api/classcsBSPTree.html | CC-MAIN-2015-14 | refinedweb | 101 | 52.56 |
The simplest possible way to get IP geolocation information.
Project description
The simplest possible way to get IP geolocation information in Python.
Meta
- Author: Randall Degges
- Twitter:
- Site:
- Status: production ready
Prerequisites
To use this library, you’ll need to create a free GeoIPify account:
If you haven’t done this yet, please do so now.
Installation
To install simple-geoip using pypi, simply run:
$ pip install simple-geoip
In the root of your project directory.
Usage
Once you have simple-geoip installed, you can use it to easily find the physical location of a given IP address.
This library gives you access to all sorts of geographical location data that you can use in your application in any number of ways.
from simple_geoip import GeoIP geoip = GeoIP("your-api-key"); try: data = geoip.lookup("8.8.8.8") except ConnectionError: # If you get here, it means you were unable to reach the geoipify # service, most likely because of a network error on your end. except ServiceError: # If you get here, it means geoipify is having issues, so the request # couldn't be completed :( except: # Something else happened (non-geoipify) related. Maybe you hit CTRL-C # while the program was running, the kernel is killing your process, or # something else all together. print.
Changelog
All library changes in descending order.
Version 0.1.0
Released April 26, 2018.
- First release!
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/simple-geoip/ | CC-MAIN-2020-05 | refinedweb | 257 | 54.22 |
Name | Synopsis | Description | Return Values | Errors | Examples | Usage | Attributes | See Also | Notes
#include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> int open(const char *path, int oflag, /* mode_t mode */);
int openat(int fildes,..
If the path names a symbolic link, open() fails and sets errno to ELOOP.
If the link count of the named file is greater than 1, open() fails and sets errno to EMLINK.
These flags can affect subsequent reads and writes (see read(2) and write(2)). If both O_NDELAY and O_NONBLOCK are set, O_NONBLOCK takes precedence. thread opens the file for writing. An open():
If O_NONBLOCK or O_NDELAY is set, the open() function returns without blocking for the device to be ready or available. Subsequent behavior of the device is device-specific.
If O_NONBLOCK and O_NDELAY are clear, the open() function blocks the calling thread until the device is ready or available before returning.
Otherwise, the behavior of O_NONBLOCK and O_NDELAY is unspecified... opening files owned by UID 0 for writing. The {PRIV_FILE_DAC_READ} privilege allows processes to open files for reading regardless of permission bits..); ...
The following example uses the open() function to try to create the LOCKFILE file and open it for writing. Since the open() function specifies); } ....), unlockpt(3C), attributes(5), lf64(5), privileges(5), standards(5), connld(7M), streamio(7I)
Hierarchical Storage Management (HSM) file systems can sometimes cause long delays when opening a file, since HSM files must be recalled from secondary storage.
Name | Synopsis | Description | Return Values | Errors | Examples | Usage | Attributes | See Also | Notes | https://docs.oracle.com/cd/E19253-01/816-5167/open-2/index.html | CC-MAIN-2018-51 | refinedweb | 256 | 58.69 |
Yes I know, it's a strange title, but bear with me, you'll soon see why.
Every so often, you see things that seem to confuse all developers, regardless of time in the industry, or level of skill. The process of performing Boolean operations, unfortunately, seems to be one of these things.
I'm writing this post with the .NET developer in mind, and any code presented will be in C#, but the concepts I'm about to discuss are applicable to any programming language.
So, What Do We Mean by Bit Twiddling?
Well, to clarify that, we first need to take a trip back to Computer Science 101. For many younger developers who've been schooled using a language-based approach, this might actually be something new to them. To those who've done a dedicated CS or electronics course, this will probably just serve as a bit of a refresher.
The chips in your computer communicate using a series of electronic pulses; these pulses travel along "Busses" in groups. The sizes of these groups are what determine the "Bit Size" of your PC.
A 32-bit machine will group these busses into groups of 32 individual wires, each carrying one set of pulses. Likewise, a 64-bit machine will make groups of 64 wires.
I'm not going to deep dive into this subject because that would take a whole book. Instead, we'res interested only in the behaviour of one wire at a time.
Why?
Well, because the behaviour of one wire is perfectly modelled in computer software, when you start looking at performing Boolean operations on your data.
The wires in your PC can either have an electric current in them, or not. This typically manifests itself as +5 volts or 0 volts. When looking at the same thing from a software point of view, we can easily say that +5 volts is equal to "True" and 0 volts is equal to "False."
Once you start to understand this, you begin to understand how numbers are represented by your PC. Take the following example:
Table 1: 8 Bit binary values
In Table 1, I listed the first 8 wires, or as it's more commonly known, a "Byte." Each wire from the right going towards the left is given a power of 2 (because it can only be in one of two states: 1 or 0), and each power of 2 as you go left is multiplied twice by itself.
In the first row, the "1" indicates that wire 1 has +5 volts in it, while the "0"s in the other wires give 0 volts. This is how the computer represents a value of "1" as electrical signals, to mean a data value of "1" in your program.
In the second number, we have "128" because only the wire with value "128" has +5 volts in it.
The 3rd number is 255, because if you add up all the numbers 1 through to 128, you get 255, and all the wires for that number have +5 volts in them.
If you wanted to represent a value of "24," you'd put +5 volts into wires "16" and "4."
When you look at these values in your program code, if you convert them to binary notation, you'll see exactly the same thing, with 0s and 1s in the appropriate columns.
Bigger numbers are represented by adding more wires. In Table 1, we could add a 9th column equal to 256 (128 * 2) and that would give us a 9-bit number, whose maximum value would be "512."
All This Is Interesting Stuff, but if C# Takes Care of All This for Me, Why Do I Need to Care?
You need to care because this is how your if/then statements work, and how your transparent graphics merge pixels without destroying other graphics.
The very fundamentals of how a computer makes its decisions revolve around seven very basic logic operations:
- And
- Nand
- Or
- Nor
- Xor
- Nxor
- Not (Inverse)
These basic logic rules govern pretty much everything your PC does, and are the absolute fundamentals of how the CPU in your PC decides what to do based on what numerical instructions it's given.
It also happens that knowing this stuff has a load of uses in software, too.
The rules are simple to interpret, and each one has a defined set of inputs that give exactly one output. The following tables are the Truth Tables used to describe these rules:
AND
The rule for an AND states that the output is "1" only when all of its inputs are "1."
NAND
The rule for a NAND states that the output is "1" only when all of its inputs are not equal to "1."
OR
The rule for an OR states that the output is "1" only when one or more of its inputs are "1."
NOR
The rule for a NOR states that the output is "1" only when one or more of its inputs are not "1."
XOR
The rule for an XOR states that the output is "1" only when all of its inputs are different to each other.
NXOR
The rule for a NXOR states that the output is "1" only when all of its inputs are not different to each other.
NOT (Inverter)
The rule for a NOT states that the output is to be the opposite of the input.
Enough Theory. Let's See Some Code.
Create a simple console program in Visual Studio, and make sure program.cs has the following code in it:
using System; namespace bit_twiddling { class Program { static void Main() { int num1 = 1; int num2 = 2; int result = num1 & num2; Console.WriteLine("AND"); Console.WriteLine("Input A = {0} [{1}]", num1, Convert.ToString(num1, 2)); Console.WriteLine("Input B = {0} [{1}]", num2, Convert.ToString(num2, 2)); Console.WriteLine("Result = {0} [{1}]", result, Convert.ToString(result, 2)); } } }
If you press F5 and run this, you should see the following:
Figure 1: Output from our program ANDing two numbers
The Binary representation of "1" is "00000001" and the binary representation of "2" is "00000010." If you AND them as per the previous logic rules, you get the following:
Looking at the two columns with a 0 in and referring to the truth table:
1 AND 0 = 0
0 AND 1 = 0
So, our result gives us a 0.
Let's now change the code slightly and make num2 equal to 3
int num2 = 3;
Then run our program again. What do we see this time?
Figure 2: Output from AND operation with the second number changed to 3
You can see from Figure 2 that our result is now equal to input A, and what we've effectively done is used the input in num1 to turn off any bits in num2 that we don't care about.
This comes in handy, for example, when we only want part of a number.
Let's alter our code once more, and this time try an OR operation.
Change your code in program.cs so it looks like the following:
using System; namespace bit_twiddling { class Program { static void Main() { int num1 = 1; int num2 = 2; int result = num1 | num2; Console.WriteLine("OR"); Console.WriteLine("Input A = {0} [{1}]", num1, Convert.ToString(num1, 2)); Console.WriteLine("Input B = {0} [{1}]", num2, Convert.ToString(num2, 2)); Console.WriteLine("Result = {0} [{1}]", result, Convert.ToString(result, 2)); } } }
Now, try running it and you should see the following:
Figure 3: Result of the OR operation
This time you can clearly see that the opposite of the AND operation has happened, and you've set the 1st bit in your result using the input in num1.
Using an OR operation effectively sets parts of a number without changing parts already present.
An OR operation is often used in computer graphics when merging two images together, and ensures that existing information is preserved.
In .NET, the operators you use for these operations are as follows:
- & = AND
- | = OR
- ^ = XOR
- ! = NOT
To get NAND/NOR and NXOR, simply prefix the output with a NOT; for example:
int result = !(num1 | num2);
This will give you the same result as a NOR; whereas
int result = !(num1 & num2);
will equal the output of a NAND.
I'll leave the XOR and NXOR operations as an exercise for the reader to play with.
Got a tricky .NET problem you can't solve, or simply just want to know if there's an API for that? Hunt me down on Twitter as @shawty_ds or come and visit the Lidnug (Linked .NET) user group on the Linked-In platform that I help run, and let me hear your thoughts. It may even make it into a post in this column. | https://mobile.codeguru.com/columns/dotnet/bit-twiddling.html | CC-MAIN-2019-09 | refinedweb | 1,465 | 69.82 |
SD_BUS_PATH_ENCODE(3) sd_bus_path_encode SD_BUS_PATH_ENCODE(3)
sd_bus_path_encode, sd_bus_path_encode_many, sd_bus_path_decode, sd_bus_path_decode_many - Convert an external identifier into an object path and back
#include <systemd/sd-bus.h> int sd_bus_path_encode(const char *prefix, const char *external_id, char **ret_path); int sd_bus_path_encode_many(char **out, const char *path_template, ...); int sd_bus_path_decode(const char *path, const char *prefix, char **ret_external_id); int sd_bus_path_decode_many(const char *path, const char *path_template, ...);
sd_bus_path_encode() and sd_bus_path_decode() convert external identifier strings into object paths and back. These functions are useful to map application-specific string identifiers of any kind into bus object paths in a simple, reversible and safe way. sd_bus_path_encode() takes a bus path prefix and an external identifier string as arguments, plus a place to store the returned bus path string. The bus path prefix must be a valid bus path, starting with a slash "/", and not ending in one. The external identifier string may be in any format, may be the empty string, and has no restrictions on the charset — however, it must always be NUL-terminated. The returned string will be the concatenation of the bus path prefix plus an escaped version of the external identifier string. This operation may be reversed with sd_bus_decode(). It is recommended to only use external identifiers that generally require little escaping to be turned into valid bus path identifiers (for example, by sticking to a 7-bit ASCII character set), in order to ensure the resulting bus path is still short and easily processed. sd_bus_path_decode() reverses the operation of sd_bus_path_encode() and thus regenerates an external identifier string from a bus path. It takes a bus path and a prefix string, plus a place to store the returned external identifier string. If the bus path does not start with the specified prefix, 0 is returned and the returned string is set to NULL. Otherwise, the string following the prefix is unescaped and returned in the external identifier string. The escaping used will replace all characters which are invalid in a bus object path by "_", followed by a hexadecimal value. As a special case, the empty string will be replaced by a lone "_". sd_bus_path_encode_many() works like its counterpart sd_bus_path_encode(), but takes a path template as argument and encodes multiple labels according to its embedded directives. For each "%" character found in the template, the caller must provide a string via varargs, which will be encoded and embedded at the position of the "%" character. Any other character in the template is copied verbatim into the encoded path. sd_bus_path_decode_many() does the reverse of sd_bus_path_encode_many(). It decodes the passed object path according to the given path template. For each "%" character in the template, the caller must provide an output storage ("char **") via varargs. The decoded label will be stored there. Each "%" character will only match the current label. It will never match across labels. Furthermore, only a single directive is allowed per label. If "NULL" is passed as output storage, the label is verified but not returned to the caller.
On success, sd_bus_path_encode() returns positive or 0, and a valid bus path in the return argument. On success, sd_bus_path_decode() returns a positive value if the prefixed matched, or 0 if it did not. If the prefix matched, the external identifier is returned in the return parameter. If it did not match, NULL is returned in the return parameter. On failure, a negative errno-style error number is returned by either function. The returned strings must be free(3)'d by the caller.
sd_bus_path_encode() and sd_bus_path_decode() are available as a shared library, which can be compiled and linked to with the libsystemd pkg-config(1) file.
systemd(1), sd-bus(3),_PATH_ENCODE(3)
Pages that refer to this page: sd-bus(3), systemd.directives(7), systemd.index(7) | http://man7.org/linux/man-pages/man3/sd_bus_path_decode_many.3.html | CC-MAIN-2019-26 | refinedweb | 618 | 54.83 |
Welcome to this new series of blog posts. The series is based on a list of ten recommended practices or tips to help you in your journey with Node.js.
nearForm staff train and consult with teams all over the world. Our experts also host and speak at various conferences throughout America and Europe on a regular basis. This means that I frequently get to speak with developers on the front line.
When developers begin to share their questions, needs and frustrations with me, it gets my attention. This is partly because one of my roles is to curate our training material, but also because I’ve been there, had the same questions, and shared the same frustrations and needs.
In response to these conversations, I’ve compiled the following list of ten Node.js tips:
- Develop debugging techniques
- Avail and beware of the ecosystem
- Know when (not) to throw
- Reproduce core callback signatures
- Use streams
- Break out blockers
- Deprioritize synchronous code optimizations
- Use and create small single-purpose modules
- Prepare for scale with microservices
- Expect to fail, recover quickly
Let’s kick off our series with the first tip on the list.
Develop debugging techniques
Interactive debugging
Let’s begin with the fundamentals.
Node has a built-in debugger. For those of you who have worked with lower-level languages, like C/C++, the built-in Node debugger may feel familiar. It resonates with the likes of GDB or LLDB:
$ node debug myApp.js
Node.js developers tend to have diverse backgrounds. If you’re one of those who is comfortable with debuggers like LLDB, the built-in Node debugger may suit you down to the ground. However, a large number of migrators to Node.js come from the front end or from languages that tend to be accompanied by powerful IDEs. These developers may prefer a more visual debugging experience. Enter node-inspector:
$ npm -g i node-inspector $ node-debug myApp.js
The
node-inspector module hooks up Node’s remote debug API with a now slightly aging version of Chrome Devtools, connecting to it via web sockets. Essentially, we can debug and control a process the same way as we debug a web app. This is manna from heaven for those already familiar with Devtools.
While it is present in the provided Devtools UI,
node-inspector does not provide profiling. However, webkit-devtools-agent covers Devtools-based heap and CPU profiling.
Debug logs
You can enable core API debug information using the
NODE_DEBUG environment variable:
$ NODE_DEBUG=http,net node server.js
Debug information is supported for the following internal mechanisms: * fs, * http, * net, * tls, * module and * timers.
Similarly, the non-core debug module can be used to implement the same pattern as core, with a
DEBUG environment variable:
$ npm install debug --save
var debug = require('debug'); var log = debug('myThing:log'); var info = debug('myThing:info'); var error = debug('myThing:error');
log('Squadron 40, DIVE!!!'); info('Gordon\'s ALIVE!'); error('Impetuous boy!');
Just like internal debug output, the debug module creates log output on a per-namespace basis. You do this by requiring the module, creating logger functions, and passing them a namespace during initialization. Log functions are essentially no-ops unless the
DEBUG environment variable enables a log:
$ DEBUG=myThing:info node app.js
The
DEBUG variable can contain wildcards; for instance,
DEBUG=* turns on all debug and
DEBUG=myThing:* turns on any debug namespaces prefixed with myThing:.
In the example above, we set up
log,
info and
error loggers. However, in larger apps that are subdivided by library modules, we could sub-namespace by the name of each module, affording granular, specific debug output.
Stack traces
The default stack trace limit for Node.js is ten frames. Of course, it requires some internal state management to keep track of stack frames, so the higher the tracing limit, the greater the performance hit.
A good Node app is composed of lots of small modules, which in turn tend to rely on lots of other small modules. This generally means that a full stack trace would consist of more than 10 frames:
$ node -e "function x() { --x.c ? x() : u } x.c=100; x()"
The
-e flag here tells Node to evaluate the subsequent string. The code in that string causes x to be called 100 times. 99 times x is calling itself, which creates a stack frame each time. The result is a stack that is much larger than ten frames.
In production, we would probably prefer to keep the stack trace depth low. However, for debugging purposes we can set the stack trace limit to a higher number, using the
--stack_trace_limit v8 flag:
$ node --stack_trace_limit=200 -e "function x() { --x.c ? x() : u } x.c=100; x()"
The stack trace limit can also be set in process. We do this by setting the
stackTraceLimit property on the native global
Error object:
$ node -e "Error.stackTraceLimit=200; function x() { --x.c ? x() : u } x.c=100; x();"
This can be useful in certain cases where the Node binary is executed indirectly—for example, when a Node script is spawned as a child process or when an executable script is called from the command line (executable scripts use the
#!/usr/bin/env node hashbang, which is essentially indirect execution of
node). In these cases, flags aren’t passed to the
Node executable. A quick and easy solution is to set the
Error.stackTraceLimit property.
Unlike the
--stack_trace_limit flag,
Error.stackTraceLimit can be set to
Infinity for the longest possible stack traces. This can be useful when we don’t or can’t know how long the stack trace will be:
node -e "Error.stackTraceLimit=Infinity; function x() { --x.c ? x() : u } x.c=Math.floor(Math.random()*1e3); x();"
Of course, we could achieve the same with the
--stack-trace-limit by setting it to an arbitrarily super-high number, but setting
Error.stackTraceLimit to
Infinity is more elegant. However, a flag is much easier to keep out of production. If you’re setting trace limits in code, I recommend running it through a build process that strips
Error.stackTraceLimit.
Asynchronous stack traces
The asynchronous nature of JavaScript is made possible by the event loop. The event loop is an infinite loop that processes event queues. Certain actions in one iteration of the loop can lead to events being processed in a future iteration (an iteration is known as a tick). For instance, while an AJAX request would be part of one event queue, the handling of the response would appear in a later tick and would therefore be part of a different queue.
Stack traces are intrinsically tied to the event queue. By default, we can’t trace a series of operations across separate ticks.
Going back to the AJAX example, if the callback handler for that request causes a throw, the stack trace will only lead back to whatever function ultimately triggered the request callback. It won’t transfer to a tick that occurred in the past.
The
longjohn module allows us to trace calls across ticks:
$ npm install longjohn $ node -e "require('longjohn'); function x() { --x.c ? setImmediate(x) : u } x.c=10; x()"
Our evaluated code is similar here to previous examples, except that subsequent calls to the x function are made via
setImmediate. This is an asynchronous operation that means ‘do some work at the beginning of the next tick’.
As with deep stack traces, asynchronous tracing capabilities should be removed before production, as longjohn comes with a significant performance hit.
Cleaner stacks
This is the tiny cute-stack module:
npm install cute-stack --save
cute-stack applies assorted strategies to making stack traces more descriptive and visually clearer. It works for most kinds of trace origins, such as
throw,
console.trace, and
Error().stack,
try{}catch(e) {e.stack}. The only traces that
cute-stack does not catch are during parse stage, where a syntax error occurs before
cute-stack has initialized.
cute-stack supplies a variety of output formats for improved human-parseable output, such as colored output, tables and JSON.
If a function doesn’t have a name, but exists as a method on an object,
cute-stack uses the method name instead. Failing that,
cute-stack displays the function signature so that a developer can more easily recognize anonymous functions that they have written.
cute-stack also normalizes paths to the current working directory, resulting in more readable and shorter file locations.
When traces relate to deeply embedded dependencies, they also tend to have long paths.
cute-stack has a way of shortening these too (see the [Readme] page for info).
Function deanonymizing
Anonymous functions lead to puzzling stack traces. A function is often used as a lambda; that is, a small item of work, or an iterable function or a continuation (a callback). Functions that are used as lambdas and continuations can be hard to name:
setTimeout(function whatDoICallThis() {}, 1000);
There are many other reasons why a code base may contain anonymous functions: bad habits, ignorance and laziness all contribute. The
cute-stack module handles this by supplying function signatures instead. However, what if there are no function parameters? In this case, we’re none the wiser.
The decofun module parses code and names any anonymous functions according to their context. The easiest way to use decofun is as an executable.
Let’s say we have the following code called
app.js:
function gravy () { return function () { return { prop: function () { setTimeout(function () { console.trace('Getting a trace...'); }, 10) } } } } console.log(gravy+''); gravy()().prop();
We can deanonymize any anonymous functions by running this file with the
deco executable instead of the
node binary:
npm -g install decofun deco app.js
Notice that function names appear to contain pipe and space characters, which are illegal JavaScript characters. Yet the code still executes. These are actually characters from Hangul, the Korean alphabet, and they’re legal JavaScript from EcmaScript 5. The space is code point U+FFAO, a “Halfwidth Hangul Filler”(basically half a space in Korean), and the pipe is code point U+3163, the 10th and 21st vowel in North and South Korean Hangul respectively. It’s said by Sejong the Great to represent a mediator between yin and yang. For our purposes, it’s a Javascript-legal character acts a decent visual separator between meta-data.
The
decofun module is integrated with
cute-stack as follows:
deco app.js --cute
The
decofun module can also be required in-process. Check out the github page for details.
Conclusion: comment and subscribe
I hope you found this post useful. Please post your comments and questions below – feedback is very welcome!
Our next Node.js practice is ‘Avail and beware of the ecosystem’. Subscribe to this blog to be notified as soon as that post, and others, are published.
Want to work for nearForm? We’re hiring.
We’re currently seeking applications for a UI developer.
Phone +353-1-514 3545 | http://www.nearform.com/nodecrunch/node-js-develop-debugging-techniques/ | CC-MAIN-2016-50 | refinedweb | 1,830 | 56.66 |
React Component for Pts canvas drawing
react-pts-canvas
Pts Canvas React Component.
This is a React Component for canvas drawing using Pts. Pts is a javascript library for visualization and creative-coding.
Install
npm install --save react-pts-canvas
Examples
- The example folder provides a quick example of using PtsCanvas in a React app
- Take a look at more examples in pts-react-example repo.
Quick Start
import React, { Component } from 'react' import PtsCanvas from 'react-pts-canvas' // Add your own Pts drawing functions class MyCanvas extends PtsCanvas { animate( time, ftime ) { // ... } } // Use the component class Example extends React.Component { render () { return ( <MyCanvas background="#abc" play={true} /> ) } }
Usage
If you are getting started with Pts, take a look at the cool demos and read the guides.
PtsCanvas is a component that you may extend to implement your own drawings and animations on canvas using Pts. Like this:
class MyCanvas extends PtsCanvas { animate (time, ftime, space) { // your code for drawing and animation } start (bound, space) { // Optional code for canvas init } action: (type, x, y, event) { // Optional code for interaction } resize (size, event) { // Optional code for resize } }
There are 4 functions in Pts that you can (optionally) overrides:
animate,
start,
resize, and
action. See this guide to learn more about how these functions work.
Once you have implemented your own canvas, you can use it as a component like this:
class Example extends React.Component { render () { return ( <MyCanvas background="#abc" play={true} /> ) } }
PtsCanvas component provides the following props:
background
- background fill color of the canvas. Default value is "#9ab".
resize
- A boolean value to indicate whether the canvas should auto resize. Default is
true.
retina
- A boolean value to indicate whether the canvas should support retina resolution. Default is
true.
play
- A boolean value to set whether the canvas should animate. Default is
true.
touch
- A boolean value to set whether the canvas should track mouse and touch events. Default is
true.
name
- The css class name of the container
<div>. Default value is "pts-react". Use this class name to set custom styles in your .css file.
style
- Optionally override css styles of the container
<div>.
canvasStyle
- Optionally override css styles of the
<canvas>itself. Avoid using this except for special use cases. | https://reactjsexample.com/react-component-for-pts-canvas-drawing/ | CC-MAIN-2019-35 | refinedweb | 372 | 58.18 |
New submission from Robert Scrimo <whistler11783@...>:
__run__.py script at root level of myscript.jar(copy of jython.jar):
-- __run__.py --
import sys
sys.exit(0)
-- __run__.py --
# java org.python.util.jython -jar myscript.jar
Upon executing the sys.exit(0) call(last line in script) in my main I
recieve the following exception:
Exception in thread "main" Traceback (most recent call last):
File "__run__", line 2, in
SystemExit: 0
# echo $?
1
The script exit code is set to a 1 because an SystemExit exception is
thrown which is expected at exit if it is unhandled but I am calling
sys.exit(0) to set the exit code to 0 and in the main(top level) it
should not be throwing or at least propagating the exception upon
termination.
This behaviour does not occur when executing this same script outside of
the .jar(not as __run__.py)
# /jython2.5.0/jython myscript.py
# echo $?
0
This is the correct behavior
----------
components: Core
messages: 4916
nosy: whistler11783
severity: major
status: open
title: Executing __run__.py from .jar throws exception(SystemExit: 0) in main when sys.exit(0) is called
type: behaviour
versions: 2.5.0
_______________________________________
Jython tracker <report@...>
<>
_______________________________________ | http://sourceforge.net/mailarchive/message.php?msg_id=23088219 | CC-MAIN-2013-48 | refinedweb | 200 | 60.72 |
I repeatedly get this message and I am trying to include the d3.js into my distribution file.
Treating 'd3.js' as external dependency
import includePaths from 'rollup-plugin-includepaths';
var includePathOptions = {
paths: ['node_modules/d3/d3.min.js'],
include: {
d3: 'd3'
},
};
Note: This is for d3js v4 (I'm not sure its possible with v3)
You need to use rollup-plugin-node-resolve to make rollup aware of dependencies in node_modules.
You install it via
npm install --save-dev rollup-plugin-node-resolve
(Note: I'm new to all this, the babel plugin might not be necessary)
rollup.config.js
import babel from 'rollup-plugin-babel'; import nodeResolve from 'rollup-plugin-node-resolve'; export default { entry: 'path/to/your/entry.js', format: 'umd', plugins: [ babel(), nodeResolve({ // use "jsnext:main" if possible // see jsnext: true }) ], sourceMap: true, dest: 'path/to/your/dest.js' };
It's necessary to use
jsnext:main otherwise you will get errors like
Export '<function>' is not defined by 'path/to/node_module/file.js'
Taken from a post on integrate with rollup and es2015
This is also documented in rollup issue #473 (note it refers to the old name of this plugin rollup-plugin-npm)
Then you can run rollup via
rollup -c
You also need to "roll your own" d3 module with just the bits you need. That way you can still use examples from the web with
d3.* prefixes. I was originally importing the relevant bits into my client code but there is no way to merge all these into one namespace.
Start with d3 master module and paste all the exports that you need in your code into a local
./d3.js file
Example roll-your-own d3.js
/* re-export for custom "d3" implementation. Include only the stuff that you need. */ export { ascending } from 'd3-array'; export { nest, map } from 'd3-collection'; export { csv, json } from 'd3-request'; export { hierarchy, tree } from 'd3-hierarchy'; export { select } from 'd3-selection';
Import your hand rolled d3
import * as d3 from "./d3"
As you import more of d3 you only need to keep your
./d3.js in sync and your client code won't care.
Remember you will need to re-run rollup after any changes. | https://codedump.io/share/1I7UArBHBCb4/1/how-to-include-use-of-nodemodules-in-rollup-compile | CC-MAIN-2016-50 | refinedweb | 369 | 54.63 |
[Date Index]
[Thread Index]
[Author Index]
Re: Packages - trivial question
Hi,
> I'm sorry for such a trivial question, but what does mean the
> backapostroph behind the package name, when one wants to get
> package?
it is the character which serves as the separator for context names.
> For example: << PhysicalConstants`
that is just a short form for Get["PhysicalConstants`"]
> I'm trying to find it in the documentation, but I'm not successfull.
> Is it the same symbol as it is used for NumberMarks i.e.
> 0.3333333333333333` for example?
it's not a symbol and serves a completely different purpose in those to
cases...
> I think, that in the case of packages it means something like
> "definition space", because for example: Remove["Global`*"] should
> remove all the symbol definitions in current session.
it removes all symbol definitions in the "Global`" context. If you have
experience with other languages you can think of a context as a
namespace for symbols. If you don't, think of it as a "folder" for
symbol names as you have "folders" for filenames in your computers file
system.
> But I'm still confused of the little but magic `
nothing magic, if you are interested in the details look up
tutorial/ContextsAndPackages in the documentation center and the
reference pages for BeginPackage, Begin, $Context, $ContextPath.
hth,
albert | http://forums.wolfram.com/mathgroup/archive/2012/Feb/msg00503.html | CC-MAIN-2016-30 | refinedweb | 222 | 51.99 |
Created on 2015-06-14 21:33 by nedbat, last changed 2017-03-31 17:12 by Mariatta. This issue is now closed.
This doesn't work on Python 3.4 on a Mac with Yosemite and Chrome installed:
import webbrowser
webbrowser.get("chrome")
This patch makes it work:
```
*** /usr/local/pythonz/pythons/CPython-3.4.1/lib/python3.4/webbrowser.py 2014-09-21 16:37:46.000000000 -0400
--- /Users/ned/foo/webbrowser.py 2015-06-14 17:31:28.000000000 -0400
***************
*** 605,614 ****
--- 605,615 ----
# Don't clear _tryorder or _browsers since OS X can use above Unix support
# (but we prefer using the OS X specific stuff)
register("safari", None, MacOSXOSAScript('safari'), -1)
register("firefox", None, MacOSXOSAScript('firefox'), -1)
+ register("chrome", None, MacOSXOSAScript('chrome'), -1)
register("MacOSX", None, MacOSXOSAScript('default'), -1)
# OK, now that we know what the default preference orders for each
# platform are, allow user to override them with the BROWSER variable.
```
I
Boštjan Mejak the windows issue has been addressed in issue 8232 and recently patched for 3.5.
Well... I created a patch based on Ned's code :)
This now works in the default branch
Python 3.7.0a0 (default:f2204eaba685+, Oct 5 2016, 20:43:44)
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import webbrowser
>>> webbrowser.get("chrome")
<webbrowser.MacOSXOSAScript object at 0x10adc7398>
>>> webbrowser.open_new("")
True
Please review :)
The patch looks good to me.
(The test coverage for chrome browser can be improved. But that seems a like a different change than the current one).
OK, this seems to work for me. I'm, applying this to 3.5, 3.6 and 3.7 (default).
New changeset bd0f502c5eea by Guido van Rossum in branch '3.5':
Issue #24452: Make webbrowser support Chrome on Mac OS X.
New changeset 64a38f9aee21 by Guido van Rossum in branch '3.6':
Issue #24452: Make webbrowser support Chrome on Mac OS X (merge 3.5->3.6)
New changeset 4e2cce65e522 by Guido van Rossum in branch 'default':
Issue #24452: Make webbrowser support Chrome on Mac OS X (merge 3.6->3.7)
I applied this to 3.5, 3.6 and 3.7. I'm not sure we should also apply this to 2.7 -- optinions? Bug or feature?
The documentation seems to indicate that chrome MacOS is supposed to work in 2.7, which makes this a bug.
But... it could also be a documentation bug.
Applying on 2.7 seems alright. Bug fix.
OK will do.
New changeset bc8a4b121aec by Guido van Rossum in branch '2.7':
Issue #24452: Make webbrowser support Chrome on Mac OS X (backport to 2.7)
Thanks everyone! Applied to 2.7, so closing as fixed now.
New changeset 0c8270cbdc62 by Brett Cannon in branch 'default':
Issue #24452: add attribution
I have Windows 10, 64-bit, and Python 3.6.1, 64-bit, and the code still does not work!
>>> import webbrowser
>>> webbrowser.get("chrome")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Program Files\Python 3.6\lib\webbrowser.py", line 51, in get
raise Error("could not locate runnable browser")
webbrowser.Error: could not locate runnable browser
Note: Yes, my Google Chrome browser was running when this command was executed.
Hi Boštjan Mejak, this ticket addresses the change for MacOS.
The windows support is in. Please raise the issue there. Thanks. | https://bugs.python.org/issue24452 | CC-MAIN-2021-25 | refinedweb | 576 | 70.5 |
CGTalk
>
Software Specific Forums
>
Autodesk Mudbox
> getting 'import as layer' to work
PDA
View Full Version :
getting 'import as layer' to work
pavstudio
03-30-2007, 04:00 AM
Hi,
I'm using Mudbox v1.03 and am having trouble using the 'import as layer' feature. Everytime I try it I get an error. I've tried all of the settings and have tried it on multiple computers. Has anyone else had this problem? If someone who has used this successfully could give me a quick run through of how they do it, I'd appreciate it.
Thanks in advance!
oglu
03-30-2007, 07:53 PM
try the newest version... 1.06..
mabe your mesh isnt clean or bad uvs..?
fx81
03-30-2007, 09:01 PM
import as layer was fine in 1.03 aswell. you dont need the new 1.06 upgrade for that.
if you are using maya, check the second tutorial (Maya Mudbox Workflow 2). it should explain.
CGTalk Moderation
03-30-2007,. | http://forums.cgsociety.org/archive/index.php/t-480100.html | CC-MAIN-2014-15 | refinedweb | 168 | 78.25 |
OK, so I have done this before, and it has worked. It should be as simple as copying the code and renaming the FLV player, but I get an "invalid seek" error. Like I have said, this has worked on previous sites I have made, so any explanation would be great. I am trying to get a video to play through, and then loop back to the middle of the video and play to the end in a continuous loop. The only difference I can see is that on my other site, it is streaming from my ftp. This shouldn't make a difference, as apposed to being local, should it? I am also using embedded cue points. Thanks again. Here is the code:
stop();
import fl.video.MetadataEvent;
Movie.addEventListener(MetadataEvent.CUE_POINT, loopFunction);
function loopFunction(e:MetadataEvent):void{
if (e.info.name=="introEnd") {
Movie.seekToNavCuePoint("introMiddle");
Movie.play();
}
}
trace your cuepoints to confirm there is one named introMiddle
I used
and it shows up as introMiddle
i just put it into an html file. let me know if this works for you. Thanks
try seeking to 352.
invalid seek
same as before. Though which code did you use to seek to 352, and what exactly am I seeking to?
After beating my head against the wall, I simply re-rendered the FLV in premiere, but used my home machine, which runs CS 5.5 of everything, and for whatever reason it worked. Thanks for all your help kglad, you are awesome!
you're welcome. | https://forums.adobe.com/thread/1172054 | CC-MAIN-2018-34 | refinedweb | 255 | 74.29 |
SPChangeToken class
Represents the unique sequential location of a change within the change log.
Microsoft.SharePoint.SPChangeToken
Namespace: Microsoft.SharePointNamespace: Microsoft.SharePoint
Assembly: Microsoft.SharePoint (in Microsoft.SharePoint.dll)
Each entry in the change log is represented by an SPChange object. When a change is logged, it is stamped with an identifying token, which is represented by an SPChangeToken object of the ChangeToken property of the SPChange object.. The number of changes that are returned in a single collection is limited for performance reasons, so you should call GetChanges in a loop until you get an empty collection, signifying either that you have reached the end of the log or that there are no more changes that satisfy your query. When you do this, use the token that is returned by the LastChangeToken property of the first batch to get the second batch, and so on until you get a batch with zero changes. The general approach is illustrated by the following code, which retrieves all changes logged for a site collection.
// Get the first batch of changes. SPChangeToken token = null; SPChangeCollection changes = siteCollection.GetChanges(token); while (changes.Count > 0) { foreach (SPChange change in changes) { // Process each change. } // Go get another batch. token = changes.LastChangeToken; changes = siteCollection.GetChanges(token); }
The following example is a console application that uses the SPChangeToken class to return changes that have occurred on a Web site during the past 60 days.
using System; using Microsoft.SharePoint; namespace Test { class ConsoleApp { static void Main(string[] args) { using (SPSite siteCollection = new SPSite("")) { using (SPWeb webSite = siteCollection.RootWeb) { // Display change times as local time. SPTimeZone timeZone = webSite.RegionalSettings.TimeZone; // Create a change token. DateTime startTime = DateTime.UtcNow.AddDays(-60); SPChangeToken startToken = new SPChangeToken(SPChangeCollection.CollectionScope.Web, webSite.ID, startTime); // Retrieve the first batch of changes. SPChangeCollection changes = webSite.GetChanges(startToken); while (changes.Count > 0) { foreach (SPChange change in changes) { // Process the change. Console.WriteLine("\nDate: {0}", timeZone.UTCToLocalTime(change.Time).ToString()); Console.WriteLine("Change subclass: {0}", change.GetType().ToString()); Console.WriteLine("Type of change: {0}", change.ChangeType.ToString()); } // Get another batch. startToken = changes.LastChangeToken; changes = webSite.GetChanges(startToken); } } } Console.Write("\nPress ENTER to continue..."); Console.ReadLine(); } } } | http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spchangetoken(v=office.15).aspx | CC-MAIN-2014-42 | refinedweb | 358 | 53.47 |
In 1988 Martin Gardner offered a $100 prize for the first person to produce a magic square filled with consecutive primes. Later that year, Harry Nelson found 22 solutions using a Cray computer.
Gardner said that the square above is “almost certainly the one with the lowest constant possible for such a square.”
It’s easy to verify that the numbers above are consecutive primes. Here’s a little Python code for the job. The function
nextprime(x, i) gives the next
i primes after
x. We call the function with
x equal to one less than the smallest entry in the square and it prints out all the entries.
from sympy import nextprime for i in range(1,10): print( nextprime(148028128, i) )
If you’re looking for more than a challenge, verify whether Gardner’s assertion was correct that the square above uses the smallest possible set of consecutive primes.
By the way, assuming Harry Nelson’s Cray was the Y-MP model that came out in 1988, here are its specs according to Wikipedia:.
Related posts:
Magic squares from a knight’s tour and a king’s tour.
7 thoughts on “A magic square filled with consecutive primes”
Very cool! There’s a typo in your Python – should be 1480028128 not 1480082128
Thanks!
By my reckoning, the python code to verify the numbers are consecutive primes should be
from sympy import nextprime
for i in range(1,10):
print( nextprime(1480028128, i) )
1. 82128 becomes 28128
2. ‘range’ returns 1 .. 9 instead of 0 .. 8
Thanks. I’ve fixed these errors.
You just pick on me with your blogs because you know I’m a sucker for primes
Great (if small) blog—kudos. I’d forgotten the Gardener article, thanks for the memory. BTW, did you notice that the latest Cray is upgrading to the latest INTEL chip. I’m sure that this is some kind of irony, but I’m not sure just what and how…
If you normalize a magic square to make the lowest number 0, you’re left with 8 unique values.
It should not be too hard to calculate all solutions of a reasonable size, then match rolling sets of consecutive primes against them. | http://www.johndcook.com/blog/2013/09/02/a-magic-square-filled-with-consecutive-primes/ | CC-MAIN-2014-52 | refinedweb | 372 | 80.72 |
class in UnityEngineマニュアルに切り替える
Unity が private フィールドを強制的にシリアライズします。.
シリアライズのシステムは次の条件があります:
- CAN serialize public nonstatic fields (of serializable types)
- CAN serialize nonpublic nonstatic fields marked with the [SerializeField] attribute.
- CANNOT serialize static fields.
- CANNOT serialize properties.
Unity がシリアライズ可能な型である場合のみフィールドはシリアライズされます:
シリアライズ可能な型:
-
- 構造体
Note: if you put one element in a list (or array) twice, when the list gets serialized, you'll get two copies of that element, instead of one copy being in the new list twice.
Hint: Unity won't serialize Dictionary, however you could store a List<> for keys and a List<> for values, and sew them up in a non serialized dictionary on Awake(). This doesn't solve the problem of when you want to modify the dictionary and have it "saved" back, but it is a handy trick in a lot of other cases.
For UnityScript users: Fields in c# is a script variable in UnityScript, and [SerializeField] becomes @SerializeField. [Serializable] on a class becomes @script Serializable in a UnityScript.
using UnityEngine;
public class SomePerson : MonoBehaviour { //This field gets serialized because it is public. public string firstName = "John";
//This field does not get serialized because it is private. private int age = 40;
//This field gets serialized even though it is private //because it has the SerializeField attribute applied. [SerializeField] private bool hasHealthPotion = true;
void Start() { if (hasHealthPotion) Debug.Log("Person's first name: " + firstName + " Person's age: " + age); } } | https://docs.unity3d.com/ja/2018.1/ScriptReference/SerializeField.html | CC-MAIN-2020-24 | refinedweb | 227 | 55.54 |
WorkspaceStore class that can be used to actually reinitialize memory. More...
#include <Teuchos_Workspace.hpp>
WorkspaceStore class that can be used to actually reinitialize memory.
The client can create concrete instances of this type and initalize the memory used. The client should call
initialize(num_bytes) to set the number of bytes to allocate where
num_bytes should be large enough to satisfy all but the largests of memory request needs.
Definition at line 317 of file Teuchos_Workspace.hpp.
Default constructs to no memory set and will dynamically allocate all memory requested.
Definition at line 495 of file Teuchos_Workspace.hpp.
Set the size block of memory to be given as workspace.
If there are any instantiated RawWorkspace objects then this function willl throw an std::exception. It must be called before any RawWorkspace objects are created.
Definition at line 500 of file Teuchos_Workspace.hpp. | http://trilinos.sandia.gov/packages/docs/r10.6/packages/teuchos/doc/html/classTeuchos_1_1WorkspaceStoreInitializeable.html | CC-MAIN-2014-35 | refinedweb | 141 | 50.53 |
Opened 4 years ago
Closed 4 years ago
Last modified 4 years ago
#20257 closed Bug (fixed)
QuerySet that prefetches related object with a ManyToMany field cannot be pickled.
Description
After upgrading from 1.4 to 1.5, exceptions were thrown when trying to pickle certain querysets. This means that the caching framework doesn't work for these querysets.
In 1.4 the following code runs fine. In 1.5 this error occurs:
PicklingError: Can't pickle <class 'people.models.SocialProfile_friends'>: it's not found as people.models.SocialProfile_friends
models.py
from django.db import models class Person(models.Model): name = models.CharField(max_length=200) class SocialProfile(models.Model): person = models.ForeignKey(Person) friends = models.ManyToManyField('self')
tests.py
from django.test import TestCase from people.models import Person import pickle class SimpleTest(TestCase): def test_pickle_failure(self): bob = Person(name="Bob") bob.save() people = Person.objects.all().prefetch_related('socialprofile_set') pickle.dumps(people)
Change History (7)
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
The problem is that fields for automatic models (for example the intermediary m2m table) can't be pickled in 1.5. In master this is fixed by introducing Field.reduce. In 1.4 the field wasn't used directly, only the field.name was used so there was no problems. So, to fix this, one of the following three things needs to be done:
- alter
QuerySet.__getstate__and
__setstate__to do something similar that sql.query:Query's state methods do. That is, store the field's name in getstate and restore back the real field instance in setstate.
- remove direct storage of the field in QuerySet
- backpatch Field.reduce changes from master
I think the Field.reduce changes are too risky to backpatch, and I assume there was a good reason to store the field directly in QuerySet instead of field.name. So, changes to getstate and setstate seem like the best choice to me.
comment:3 Changed 4 years ago by
comment:4 Changed 4 years ago by
comment:5 Changed 4 years ago by
This is currently blocking our team from upgrading to Django 1.5. Just wondering if this will be included in the 1.5.2 release? Thanks for the fix!
comment:6 Changed 4 years ago by
@michaelmior, Yes it will be. You can see the commit above is prefixed with [1.5.x] indicating it was committed to the stable/1.5.x branch.
comment:7 Changed 4 years ago by
Thanks @timo! I noticed that after. I apologize if it's bad form to ask, but any ETA on the 1.5.2 release? I'm happy to pitch in if there are release blockers that need to be resolved.
Hi,
Thanks for the detailed report.
I can reproduce the issue in the
stable/1.5.xbranch but it appears to have been fixed in
master.
The testcase works on the
stable/1.4.xbranch so it is a regression.
Using
git bisect, I found that the problem was introduced by commit 056ace0f395a58eeac03da9f9ee7e3872e1e407b. | https://code.djangoproject.com/ticket/20257 | CC-MAIN-2017-39 | refinedweb | 508 | 70.39 |
2014-11-25
Generating an Ordered Data Set from an OCR Text File
Recommended for Advanced Users
Generating an Ordered Data Set from a Text File
Lesson goals:
This tutorial illustrates strategies for taking raw OCR output from a scanned text, parsing it to isolate and correct essential elements of metadata, and generating an ordered data set (a python dictionary) from it. These illustrations are specific to a particular text, but the overall strategy, and some of the individual procedures, can be adapted to organize any scanned text, even if it doesn’t look like this one.
Table of Contents
- Preliminaries
- Iterative processing of OCR output texts
- Creating the dictionary
- Completed dictionary
Introduction
It is often the case that historians involved in digital projects wish to work with digitized texts, so they think “OK, I’ll just scan this fabulously rich and useful collection of original source material and do wonderful things with the digital text that results”. (Those of us who have done this, now smile ruefully). Such historians quickly discover that even the best OCR results in unacceptably high error rates. So the historian now thinks “OK I’ll get some grant money, and I’ll enlist the help of an army of RAs/Grad students/Undergrads/Barely literate street urchins, to correct errors in my OCR output. (We smile again, even more sadly now).
There is little funding for this kind of thing. Increasingly, projects in the humanities have focused upon NLP/Data Mining/Machine Learning/Graph Analysis, and the like, frequently overlooking the fundamental problem of generating useable digital texts. The presumption has often been, well, Google scanned all that stuff didn’t they? What’s the matter with their scans?
Even if you had such an army of helpers, proof-reading the OCR output of, say, a collection of twelfth century Italian charters transcribed and published in 1935, will quickly drive them all mad, make their eyes bleed, and the result will still be a great wad of text containing a great many errors, and you will still have to do something to it before it becomes useful in any context.
Going through a text file line by line and correcting OCR errors one at a time is hugely error-prone, as any proof reader will tell you. There are ways to automate some of this tedious work. A scripting language like Perl or Python can allow you to search your OCR output text for common errors and correct them using “Regular Expressions”, a language for describing patterns in text. (So called because they express a “regular language”. See L.T. O’Hara’s tutorial on Regular Expressions here at the PM.) Regular Expressions, however, are only useful if the expressions you are searching for are … well … regular. Unfortunately, much of what you have in OCR output is highly irregular. If you could impose some order on it: create an ordered data set out of it, your Regular Expression tools would become much more powerful.
Consider, for example, what happens if your OCR interpreted a lot of strings like this “21 July, 1921” as “2l July, 192l”, turning the integer ‘1’ into an ‘l’. You would love to be able to write a search and replace script that would turn all instances of 2l into 21, but then what would happen if you had lots of occurrences of strings like this in your text: “2lb. hammer”. You’d get a bunch of 21b. hammers; not what you want. If only you could tell your script: only change 2l into 21 in sections where there are dates, not weights. If you had an ordered data set, you could do things like that.
Very often the texts that historians wish to digitize are, in fact, ordered data sets: ordered collections of primary source documents, or a legal code say, or a cartulary. But the editorial structure imposed upon such resources is usually designed for a particular kind of data retrieval technology i.e., a codex, a book. For a digitized text you need a different kind of structure. If you can get rid of the book related infrastructure and reorganize the text according to the sections and divisions that you’re interested in, you will wind up with data that is much easier to do search and replace operations on, and as a bonus, your text will become immediately useful in a variety of other contexts as well.
This is where a scripting language like Python comes very much in handy. For our project we wanted to prepare some of the documents from a 12th century collection of imbreviatura from the Italian scribe known as Giovanni Scriba so that they could be marked up by historians for subsequent NLP analysis or potentially for other purposes as well. The pages of the 1935 published edition look like this.
GS page 110
The OCR output from such scans look like this even after some substantial clean-up (I’ve wrapped the longest lines so that they fit here):
110 MARIO CHIAUDANO MATTIA MORESCO professi sunt Alvernacium habere de i;psa societate lb. .c., in reditu tracto predicto capitali .ccc. lb. proficuum. debent dividere per medium. Ultra vero .cc. lb. capitalis Ingo de Volta lb. .xiv. habet quas cum ipso capitali de scicietate extrahere debet. Dedit preterea prefatus Ingo de Volta licenciam (1) ipsi Ingoni Nocentio portandi lb. .xxxvII. 2 Oberti Spinule et Ib. .xxvII. Wuilielmi Aradelli. Actum ante domum W. Buronis .MCLVII., .iiii. kalendas iulias, indicione quarta (2). L f o. 26 v.] . CCVIII. Ingone Della Volta si obbliga verso Ingone Nocenzio di indennizzarlo di ogni danno che gli fosse derivato dalle societa che egli aveva con i suoi figli (28 giugno 1157). Testes Ingonis Nocentii] . Die loco (3) ,predicto et testibus Wuilielmo Burone, Bono Iohanne Malfiiastro, Anselmo de Cafara, W. de Racedo, Wuilielmo Callige Pallii. Ego Ingo de Volta promitto tibi Ingoni Nocentio quod si aliquod dampnum acciderit tibi pro societate vel societatibus quam olim habueris cum filiis meis ego illud totum tibi restaurato et hoc tibi promitto sub pena dupli de quanto inde dampno habueris. Do tibi preterea licentiam accipiendi bisancios quos ultra mare acciipere debeo et inde facias tona fide quicquid tibi videbitur et inde ab omni danpno te absolvo quicquid inde contingerit. CCIX. Guglielmo di Razedo dichiara d'aver ricevuto in societatem da Guglielmo Barone una somma di denaro che portera laboratum ultramare (28 giugno 1157). Wuilielmi Buronis] . Testes Anselmus de Cafara, Albertus de Volta, W. Capdorgol, Corsus Serre, Angelotus, Ingo Noncencius. Ego W. de Raeedo profiteor me accepisse a te Wuilielmo Burone lb. duocentum sexaginta tre et s. .XIII. 1/2 in societatem ad quartam proficui, eas debeo portare laboratum ultra mare et inde quo voluero, in reditu, (11 Licentiam in sopralinea in potestatem cancellato. (2) A margine le postille: Pro Ingone Nocentio scripta e due pro Alvernacio. (3) Cancellato: et testibus supradictis.
In the scan of the original, the reader’s eye readily parses the page: the layout has meaning. But as you can see, reduced to plain text like this, none of the metadata implied by the page layout and typography can be differentiated by automated processes.
You can see from the scan that each charter has the following metadata associated with it.
- Charter number
- Page number
- Folio number
- An Italian summary, ending in a date of some kind
- A line, usually ending with a ‘]’ that marks a marginal notation in the original
- Frequently a collection of in-text numbered footnote markers, whose text appears at the bottom of each page, sequentially numbered, and restarting from 1 on each new page.
- The Latin text of the charter itself
This is typical of such resources, though editorial conventions will vary widely. The point is: this is an ordered data set, not just a great big string of characters. With some fairly straightforward Python scripts, we can turn our OCR output into an ordered data set, in this case, a python dictionary, before we start trying to proofread the Latin charter texts. With such an ordered data set in hand, we can do proofreading, and potentially many other kinds of tasks, much more effectively.
So, the aim of this tutorial is to take a plain text file, like the OCR output above and turn it into a python dictionary with fields for the Latin text of the charter and for each of the metadata elements mentioned above:
{ . . .. }
Remember, this is just a text representation of a data structure that lives in computer memory. Python calls this sort of structure a ‘dictionary’, other programming languages may call it a ‘hash’, or an ‘associative array’. The point is that it is infinitely easier to do any sort of programmatic analysis or manipulation of a digital text if it is in such a form, rather than in the form of a plain text file. The advantage is that such a data structure can be queried, or calculations can be performed on the data, without first having to parse the text.
A couple of useful functions before we start:
We’re going to borrow a couple of functions written by others. They both represent some pretty sophisticated programming. Understanding what’s going on in these functions is instructive, but not necessary. Reading and using other people’s code is how you learn programming, and is the soul of the Open-Source movement. Even if you don’t fully understand how somebody does it, you can nevertheless test functions like this to see that they reliably do what they say they can, and then just apply it to your immediate problem if they are relevant.
Levenshtein distance
You will note that some of the metadata listed above is page-bound and some of it is charter-bound. Getting these untangled from each other is our aim. There is a class of page-bound data that is useless for our purposes, and only meaningful in the context of a physical book: page headers and footers. In our text, these look like this on recto leaves (in a codex, a book, recto is the right-side page, and verso its reverse, the left-side page)
recto header
and this on verso leaves:
verso header
We’d like to preserve the page number information for each charter on the page, but the header text isn’t useful to us and will just make any search and replace operation more difficult. So we’d like to find header text and replace it with a string that’s easy to find with a Regular Expression, and store the page number.
Unfortunately, regular expressions won’t help you much here. This text can appear on any line of our OCR output text, and the ways in which OCR software can foul it up are effectively limitless. Here are some examples of page headers, both recto and verso in our raw OCR output.
260 11141110 CH[AUDANO MATTIA MORESCO IL CIRTOL4RE DI CIOVINN1 St'Itlltl 269 IL CJIRTOL.%RE DI G:OVeNNl FIM P% 297 IL CIP.TQLIRE DI G'OVeNNI SCI Dt r.23 332 T1uu:0 CHIAUDANO M:11TIA MGRESCO IL CIRTOL.'RE DI G:OV.I\N( sca:FR 339 342 NI .\ßlO CHIAUDANO 9LtTTIA MORESCO
These strings are not regular enough to reliably find with regular expressions; however, if you know what the strings are supposed to look like, you can compose some kind of string similarity algorithm to test each string against an exemplar and measure the likelihood that it is a page header. Fortunately, I didn’t have to compose such an algorithm, Vladimir Levenshtein did it for us in 1965 (see:). A computer language can encode this algorithm in any number of ways; here’s an effective Python function that will work for us:
def lev(seq1, seq2): """ Return Levenshtein distance metric (ripped from) """ oneago = None thisrow = range(1, len(seq2) + 1) + [0] for x in xrange(len(seq1)): twoago, oneago, thisrow = oneago, thisrow, [0] * len(seq2) + [x + 1] for y in xrange]
Again, this is some pretty sophisticated programming, but for our purposes all we need to know is that the
lev() function takes two strings as parameters and returns a number that indicates the ‘string distance’ between them, or, how many changes had to be made to turn the first string into the second. So:
lev("fizz", "buzz") returns ‘2’
Roman to Arabic numerals
You’ll also note that in the published edition, the charters are numbered with roman numerals. Converting roman numerals into arabic is an instructive puzzle to work out in Python. Here’s the cleanest and most elegant solution I know:
def rom2ar(rom): """ From the Python tutor mailing list: János Juhász janos.juhasz at VELUX.com returns arabic equivalent of a Roman numeral """ roman_codec = {'M':1000, 'D':500, 'C':100, 'L':50, 'X':10, 'V':5, 'I':1} roman = rom.upper() roman = list(roman) roman.reverse() decimal = [roman_codec[ch] for ch in roman] result = 0 while len(decimal): act = decimal.pop() if len(decimal) and act < max(decimal): act = -act result += act return result
(run <this little script> to see in detail how
rome2ar works. Elegant programming like this can offer insight; like poetry.)
Some other things we’ll need:
At the top of your Python module, you’re going to want to import some python modules that are a part of the standard library. (see Fred Gibbs’s tutorial Installing Python Modules with pip).
First among these is the “re” (regular expression) module
import re. Regular expressions are your friends. However, bear in mind Jamie Zawinski’s quip:
Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems.
(Again, have a look at L.T. O’Hara’s introduction here at the Programming Historian Cleaning OCR’d text with Regular Expressions)
Also:
from pprint import pprint.
pprintis just a pretty-printer for python objects like lists and dictionaries. You’ll want it because python dictionaries are much easier to read if they are formatted.
And:
from collections import Counter. We’ll want this for the Find and normalize footnote markers and texts section below. This is not really necessary, but we’ll do some counting that would require a lot of lines of fiddly code and this will save us the trouble. The collections module has lots of deep magic in it and is well worth getting familiar with. (Again, see Doug Hellmann’s PyMOTW for the collections module. I should also point out that his book The Python Standard Library By Example is one well worth having.)
A very brief review of regular expressions as they are implemented in python
L.T. O’Hara’s introduction to using python flavored regular expressions is invaluable. In this context we should review a couple of basic facts about Python’s implementation of regular expressions, the
re module, which is part of Python’s standard library.
re.compile()creates a regular expression object that has a number of methods. You should be familiar with
.match(), and
.search(), but also
.findall()and
.finditer()
- Bear in mind the difference between
.match()and
.search():
.match()will only match at the beginning of a line, whereas
.search()will match anywhere in the line but then it stops, it’ll only return the first match it finds.
.match()and
.search()return match objects. To retrieve the matched string you need
mymatch.group(0). If your compiled regular expression has grouping parentheses in it (like our ‘slug’ regex below), you can retrieve those substrings of the matched string using
mymatch.group(1)etc.
.findall()and
.finditer()will return all occurrences of the matched string;
.findall()returns them as a list of strings, but .finditer() returns an iterator of match objects. (read the docs on the method .finditer().)
Iterative processing of text files
We’ll start with a single file of OCR output. We will iteratively generate new, corrected versions of this file by using it as input for our python scripts. Sometimes our script will make corrections automatically, more often, our scripts will simply alert us to where problems lie in the input file, and we will make corrections manually. So, for the first several operations we’re going to want to produce new and revised text files to use as input for our subsequent operations. Every time you produce a text file, you should version it and duplicate it so that you can always return to it. The next time you run your code (as you’re developing it) you might alter the file in an unhelpful way and it’s easiest just to restore the old version.
The code in this tutorial is highly edited; it is not comprehensive. As you continue to refine your input files, you will write lots of little ad hoc scripts to check on the efficacy of what you’ve done so far. Versioning will ensure that such experimentation will not destroy any progress that you’ve made.
A note on how to deploy the code in this tutorial:
The code in this tutorial is for Python 2.7.x, Python 3 is quite a different animal.
When you write code in a text file and then execute it, either at the command line, or from within your text editor or IDE, the Python interpreter executes the code line by line, from top to bottom. So, often the code on the bottom of the page will depend on code above it.
One way to use the code snippets in section 2 might be to have all of them in a single file and comment out the bits that you don’t want to run. Each time you execute the file, you will want to be sure that there is a logical control flow from the
#! line at the top, through your various
imports and assignment of global variables, and each loop, or block.
Or, each of the subsections in section 2 can also be treated as a separate script, each would then have to do its own
importing and assignment of global variables.
In section 3, “Creating the Dictionary”, you will be operating on a data set in computer memory (the
charters dictionary) that will be generated from the latest, most correct, input text you have. So you will want to maintain a single python module in which you define the dictionary at the top, along with your
import statements and the assignment of global variables, followed by each of the four loops that will populate and then modify that dictionary.
#!/usr/bin/python import re from pprint import pprint from collections import Counter # followed by any global variables you will need, like: n = 0 this_folio = '[fo. 1 r.]' this_page = 1 # compiled regular expressions like: slug = re.compile("(\[~~~~\sGScriba_)(.*)\s::::\s(\d+)\s~~~~\]") fol = re.compile("\[fo\.\s?\d+\s?[rv]\.\s?\]") pgbrk = re.compile("~~~~~ PAGE (\d+) ~~~~~") # the canonical file you will be reading from fin = open("/path/to/your/current/canonical.txt", 'r') GScriba = fin.readlines() # then the empty dictionary: charters = dict() # followed by the 4 'for' loops in section 2 that will populate and then modify this dictionary
Chunk up the text by pages
First of all, we want to find all the page headers, both recto and verso and replace them with consistent strings that we can easily find with a regular expression. The following code looks for lines that are similar to what we know are our page headers to within a certain threshold. It will take some experimentation to find what this threshold is for your text. Since my recto and verso headers are roughly the same length, both have the same similarity score of 26.
NOTA BENE: The
lev()function described above returns a measure of the ‘distance’ between two strings, so, the shorter the page header string, the more likely it is that this trick will not work. If your page header is just “Header”, then any line comprised of a six letter word might give you a string distance of 6, eg:
lev("Header", "Foobar")returns ‘6’, leaving you none the wiser. In our text, however, the header strings are long and complex enough to give you meaningful scores, eg:
lev("RANDOM STRING OF SIMILAR LENGTH: 38", 'IL CARTOLARE DI GIOVANNI SCRIBA')
returns 33, but one of our header strings, even badly mangled by the OCR, returns 20:
lev("IL CIRTOL4RE DI CIOVINN1 St'Itlltl 269", 'IL CARTOLARE DI GIOVANNI SCRIBA')
So we can use
lev() to find and modify our header strings thus:
# At the top, do the importing you need and define the lev() function as described above, and then: fin = open("our_base_OCR_result.txt", 'r') # read our OCR output text fout = open("out1.txt", 'w') # create a new textfile to write to when we're ready GScriba = fin.readlines() # turn our input file into a list of lines for line in GScriba: # get a Levenshtein distance score for each line in the text recto_lev_score = lev(line, 'IL CARTOLARE DI GIOVANNI SCRIBA') verso_lev_score = lev(line, 'MARIO CHIAUDANO - MATTIA MORESCO') # you want to use a score that's as high as possible, # but still finds only potential page header texts. if recto_lev_score < 26 : # If we increment a variable 'n' to count the number of headers we've found, # then the value of that variable should be our page number. n += 1 print "recto: %s %s" % (recto_lev_score, line) # Once we've figured out our optimal 'lev' score, we can 'uncomment' # all these `fout.write()` lines to write out our new text file, # replacing each header with an easy-to-find string that contains # the page number: our variable 'n'. #fout.write("~~~~~ PAGE %d ~~~~~\n\n" % n) elif verso_lev_score < 26 : n += 1 print "verso: %s %s" % (verso_lev_score, line) #fout.write("~~~~~ PAGE %d ~~~~~\n\n" % n) else: #fout.write(line) pass print n
There’s a lot of calculation going on in the
lev() function. It isn’t very efficient to call it on every line in our text, so this might take some time, depending on how long our text is. We’ve only got 803 charters in vol. 1. That’s a pretty small number. If it takes 30 seconds, or even a minute, to run our script, so be it.
If we run this script on our OCR output text, we get output that looks like this:
. . . verso: 8 426 MARIO CHIAUDANO MAITIA MORESCO recto: 5 IL CARTOLARE DI GIOVANNI SCRIBA 427 verso: 11 , , 428 MARIO CHIAUDANO MATTIA MORESCO recto: 5 IL CARTOLARE DI GIOVANNI SCRIBA 499 verso: 7 430 MARIO CHIAUDANO MATTIA MORESCO recto: 5 IL CARTOLARE DI GIOVANNI SCRIBA 431 verso: 8 432 MARIO CHIAUDASO MATTIA MORESCO 430
For each line, the output tells us that it’s page verso or recto, the Levenshtein “score”, and then the text of the line (complete with all the errors in it. Note that the OCR misread the pg. number for pg. 429). The lower the Levenshtein “score”, the closer the line is to the model you’ve given it.
This tells you that the script found 430 lines that are probably page headers. You know how many pages there should be, so if the script didn’t find all the headers, you can go through the output looking at the page numbers, find the pages it missed, and fix the headers manually, then repeat until the script finds all the page headers.
Once you’ve found and fixed the headers that the script didn’t find, you can then write out the corrected text to a new file that will serve as the basis for the other operations below. So, instead of
quicquid volueris sine omni mea et (1) Spazio bianco nel ms. 12 MARIO CSIAUDANO MATTIA MORESCO heredum meorum contradicione. Actum in capitulo .MCLV., mensis iulii, indicione secunda.
we’ll have a textfile like this:
quicquid volueris sine omni mea et (1) Spazio bianco nel ms. ~~~~~ PAGE 12 ~~~~~ heredum meorum contradicione. Actum in capitulo .MCLV., mensis iulii, indicione secunda.
Note that for many of the following operations, we will use
GScriba = fin.readlines() so
GScriba will be a python list of the lines in our input text. Keep this firmly in mind, as the
for loops that we will use will depend on the fact that we will iterate through the lines of our text In Document Order.
Chunk up the text by charter (or sections, or letters, or what-have-you)
The most important functional divisions in our text are signaled by upper case roman numerals on a separate line for each of the charters. So we need a regex to find roman numerals like that. Here’s one:
romstr = re.compile("\s*[IVXLCDM]{2,}"). We’ll put it at the top of our module file as a ‘global’ variable so it will be available to any of the bits of code that come later.
The script below will look for capital roman numerals that appear on a line by themselves. Many of our charter numbers will fail that test and the script will report
there's a charter roman numeral missing?, often because there’s something before or after it on the line; or,
KeyError, often because the OCR has garbled the characters (e.g. CCG for 300, XOII for 492). Run this script repeatedly, correcting
out1.txt as you do until all the charters are accounted for.
# At the top, do the importing you need, then define rom2ar() as described above, and then: n = 0 romstr = re.compile("\s*[IVXLCDM]{2,}") fin = open("out1.txt", 'r') fout = open("out2.txt", 'w') GScriba = fin.readlines() for line in GScriba: if romstr.match(line): rnum = line.strip().strip('.') # each time we find a roman numeral by itself on a line we increment n: # that's our charter number. n += 1 try: # translate the roman to the arabic and it should be equal to n. if n != rom2ar(rnum): # if it's not, then alert us print "%d, there's a charter roman numeral missing?, because line number %d reads: %s" % (n, GScriba.index(line), line) # then set 'n' to the right number n = rom2ar(rnum) except KeyError: print n, "KeyError, line number ", GScriba.index(line), " reads: ", line
Since we know how many charters there should be. At the end of our loop, the value of n should be the same as the number of charters. And, in any iteration of the loop, if the value of n does not correspond to the next successive charter number, then we know we’ve got a problem somewhere, and the print statements should help us find it.
Here’s a sample of the output our script will give us:
23 there's a charter roman numeral missing?, because line number 156 reads: XXIV. 25 there's a charter roman numeral missing?, because line number 186 reads: XXVIII. 36 KeyError, line number 235 reads: XXXV1. 37 KeyError, line number 239 reads: XXXV II. 38 there's a charter roman numeral missing?, because line number 252 reads: XL. 41 there's a charter roman numeral missing?, because line number 262 reads: XLII. 43 KeyError, line number 265 reads: XL:III.
NOTA BENE: Our regex will report an error for the single digit Roman numerals (‘I’,’V’,’X’ etc.). You could test for these in the code, but sometimes leaving a known and regular error is a help to check on the efficacy of what you’re doing. Our aim is to satisfy ourselves that any inconsistencies on the charter number line are understood and accounted for.
Once we’ve found, and fixed, all the roman numeral charter headings, then we can write out a new file with an easy-to-find-by-regex string, a ‘slug,’ for each charter in place of the bare roman numeral. Comment out the
for loop above, and replace it with this one:
for line in GScriba: if romstr.match(line): rnum = line.strip().strip('.') num = rom2ar(rnum) fout.write("[~~~~ GScriba_%s :::: %d ~~~~]\n" % (rnum, num)) else: fout.write(line)
While it’s important in itself for us to have our OCR output reliably divided up by page and by charter, the most important thing about these initial operations is that you know how many pages there are, and how many charters there are, and you can use that knowledge to check on subsequent operations. If you want to do something to every charter, you can reliably test whether or not it worked because you can count the number of charters that it worked on.
Find and normalize folio markers
Our OCR’d text is from the 1935 published edition of Giovanni Scriba. This is a transcription of a manuscript cartulary which was in the form of a bound book. The published edition preserves the pagination of that original by noting where the original pages change: [fo. 16 r.] the face side of the 16th leaf in the book, followed by its reverse [fo. 16 v.]. This is metadata that we want to preserve for each of the charters so that they can be referenced with respect to the original, as well as with respect to the published edition by page number.
Many of the folio markers (e.g. “[fo. 16 v.]”) appear on the same line as the roman numeral for the charter heading. To normalize those charter headings for the operation above, we had to put a line break between the folio marker and the charter number, so many of the folio markers are on their own line already. However, sometimes the folio changes in the middle of the charter text somewhere. We want these markers to stay where they are; we will have to treat those two cases differently. For either case, we need to make sure all the folio markers are free of errors so that we can reliably find them by means of a regular expression. Again, since we know how many folios there are, we can know if we’ve found them all. Note that because we used
.readlines(),
GScriba is a list, so the script below will print the line number from the source file as well as the line itself. This will report all the correctly formated folio markers, so that you can find and fix the ones that are broken.
# note the optional quantifiers '\s?'. We want to find as many as we can, and # the OCR is erratic about whitespace, so our regex is permissive. But as # you find and correct these strings, you will want to make them consistent. fol = re.compile("\[fo\.\s?\d+\s?[rv]\.\s?\]") for line in GScriba: if fol.match(line): # since GScriba is a list, we can get the index of any of its members to find the line number in our input file. print GScriba.index(line), line
We would also like to ensure that no line has more than one folio marker. We can test that like this:
for line in GScriba: all = fol.findall(line) if len(all) > 1: print GScriba.index(line), line
Again, as before, once you’ve found and corrected all the folio markers in your input file, save it with a new name and use it as the input to the next section.
Find and normalize the Italian summary lines.
This important line is invariably the first one after the charter heading.
italian summary line
Since those roman numeral headings are now reliably findable with our ‘slug’ regex, we can now isolate the line that appears immediately after it. We also know that the summaries always end with some kind of parenthesized date expression. So, we can compose a regular expression to find the slug and the line following:
slug_and_firstline = re.compile("(\[~~~~\sGScriba_)(.*)\s::::\s(\d+)\s~~~~\]\n(.*)(\(\d?.*\d+\))")
Let’s break down that regex using the verbose mode (again, see O’Hara’s tutorial). Our ‘slug’ for each charter takes the form “[~~~~ GScriba_CCVII :::: 207 ~~~~]” for example. The compiled pattern above is exactly equivalent to the folowing (note the re.VERBOSE switch at the end):
slug_and_firstline = re.compile(r""" (\[~~~~\sGScriba_) # matches the "[~~~~ GScriba_" bit (.*) # matches the charter's roman numeral \s::::\s # matches the " :::: " bit (\d+) # matches the arabic charter number \s~~~~\]\n # matches the last " ~~~~ " bit and the line ending (.*) # matches all of the next line up to: (\(\d?.*\d+\)) # the paranthetical expression at the end """, re.VERBOSE)
the parentheses mark match groups, so each time our regex finds a match, we can refer in our code to specific bits of the match it found:
match.group(0)is the whole match, both our slug and the line that follows it.
match.group(1)= “[~~~~ GScriba_”
match.group(2)= the charter’s roman numeral
match.group(3)= the arabic charter number
match.group(4)= the whole of the Italian summary line up to the parenthesized date expression
match.group(5)= the parenthesized date expression. Note the escaped parentheses.
Because our OCR has a lot of mysterious whitespace (OCR software is not good at parsing whitespace and you’re likely to get newlines, tabs, spaces, all mixed up without rhyme or reason), we want to hunt for this regex as substrings of a great big string, so this time we’re going to use
.read() instead of
.readlines(). And we’ll also need a counter to keep track of the lines we find. This script will report the charter numbers where the first line does not conform to our regex model. This will usually happen if there’s no line break after our charter header, or if the Italian summary line has been broken up into multiple lines.
num_firstlines = 0 fin = open("your_current_source_file.txt", 'r') # NB: GScriba is not a list of lines this time, but a single big string. GScriba = fin.read() # finditer() creates an iterator 'i' that we can do a 'for' loop over. i = slug_and_firstline.finditer(GScriba) # each element 'x' in that iterator is a regex match object. for x in i: # count the summary lines we find. Remember, we know how many # there should be, because we know how many charters there are. num_firstlines += 1 chno = int(x.group(3)) # our charter number is a string, we need an integer # chno should equal n + 1, if it doesn't, report to us if chno != n + 1: print "problem in charter: %d" % (n + 1) #NB: this will miss consecutive problems. # then set n to the right charter number n = chno # print out the number of summary lines we found print "number of italian summaries: ", num_firstlines
Again, run the script repeatedly until all the Italian Summary lines are present and correct, then save your input file with a new name and use it the input file for the next bit:
Find and normalize footnote markers and texts
One of the trickiest bits to untangle, is the infuriating editorial convention of restarting the footnote numbering with each new page. This makes it hard to associate a footnote text (page-bound data), with a footnote marker (charter-bound data). Before we can do that we have to ensure that each footnote text that appears at the bottom of the page, appears in our source file on its own separate line with no leading white-space. And that none of the footnote markers within the text appears at the beginning of a line. And we must ensure that every footnote string, “(1)” for example, appears exactly twice on a page: once as an in-text marker, and once at the bottom for the footnote text. The following script reports the page number of any page that fails that test, along with a list of the footnote strings it found on that page.
# Don't forget to import the Counter module: from collections import Counter fin = open("your_current_source_file.txt", 'r') GScriba = fin.readlines() # GScriba is a list again r = re.compile("\(\d{1,2}\)") # there's lots of ways for OCR to screw this up, so be alert. pg = re.compile("~~~~~ PAGE \d+ ~~~~~") pgno = 0 pgfnlist = [] # remember, we're processing lines in document order. So for each page # we'll populate a temporary container, 'pgfnlist', with values. Then # when we come to a new page, we'll report what those values are and # then reset our container to the empty list. for line in GScriba: if pg.match(line): # if this test is True, then we're starting a new page, so increment pgno pgno += 1 # if we've started a new page, then test our list of footnote markers if pgfnlist: c = Counter(pgfnlist) # if there are fn markers that do not appear exactly twice, # then report the page number to us if 1 in c.values(): print pgno, pgfnlist # then reset our list to empty pgfnlist = [] # for each line, look for ALL occurences of our footnote marker regex i = r.finditer(line) for mark in [eval(x.group(0)) for x in i]: # and add them to our list for this page pgfnlist.append(mark)
Note: the elements in the iterator ‘i’ are string matches. We want the strings that were matched,
group(0). e.g. “(1)”. And if we do eval(“(1)”) we get an integer that we can add to our list.
Our
Counter is a very handy special data structure. We know that we want each value in our
pgfnlist to appear twice. Our
Counter will give us a hash where the keys are the elements that appear, and the values are how many times each element appears. Like this:
>>> l = [1,2,3,1,3] >>> c = Counter(l) >>> print c Counter({1: 2, 3: 2, 2: 1})
So if for a given page we get a list of footnote markers like this
[1,2,3,1,3], then the test
if 1 in c.values() will indicate a problem because we know each element must appear exactly twice:
>>> l = [1,2,3,1,3] >>> c = Counter(l) >>> print c.values() [2, 1, 2]
whereas, if our footnote marker list for the page is complete
[1,2,3,1,2,3], then:
>>> l = [1,2,3,1,2,3] >>> c = Counter(l) >>> print c.values() [2, 2, 2] # i.e. 1 is not in c.values()
As before, run this script repeatedly, correcting your input file manually as you discover errors, until you are satisfied that all footnotes are present and correct for each page. Then save your corrected input file with a new name.
Our text file still has lots of OCR errors in it, but we have now gone through it and found and corrected all the specific metadata bits that we want in our ordered data set. Now we can use our corrected text file to build a Python dictionary.
Creating the Dictionary
Now that we’ve cleaned up enough of the OCR that we can successfully differentiate the component parts of the page from each other, we can now sort the various bits of the meta-data, and the charter text itself, into their own separate fields of a Python dictionary.
We have a number of things to do: correctly number each charter as to charter number, folio, and page; separate out the Italian summary and the marginal notation lines; and associate the footnote texts with their appropriate charter. To do all this, sometimes it is convenient to make more than one pass.
Create a skeleton dictionary.
We’ll start by generating a python dictionary whose keys are the charter numbers, and whose values are a nested dictionary that has fields for some of the metadata we want to store for each charter. So it will have the form:
charters = { . . . 300: { 'chid': "our charter ID", 'chno': 300, 'footnotes': [], # an empty list for now 'folio': "the folio marker for this charter", 'pgno': "the page number in the printed edition for this charter, 'text': [] # an empty list for now }, 301: { 'chid': "our charter ID", 'chno': 301, 'footnotes': [], # an empty list for now 'folio': "the folio marker for this charter", 'pgno': "the page number in the printed edition for this charter, 'text': [] # an empty list for now }, . . . etc. }
For this first pass, we’ll just create this basic structure and then in subsequent loops we will add to and modify this dictionary until we get a dictionary for each charter, and fields for all the metadata for each charter. Once this loop disposes of the easily searched lines (folio, page, and charter headers) and creates an empty container for footnotes, the fall-through default will be to append the remaining lines to the text field, which is a python list.
slug = re.compile("(\[~~~~\sGScriba_)(.*)\s::::\s(\d+)\s~~~~\]") fol = re.compile("\[fo\.\s?\d+\s?[rv]\.\s?\]") pgbrk = re.compile("~~~~~ PAGE (\d+) ~~~~~") fin = open("your_current_source_file.txt", 'r') GScriba = fin.readlines() # we'll also need these global variables with starting values as we mentioned at the top n = 0 this_folio = '[fo. 1 r.]' this_page = 1 # 'charters' is also defined as a global variable. The 'for' loop below # and in the following sections, will build on and modify this dictionary charters = dict() for line in GScriba: if fol.match(line): # use this global variable to keep track of the folio number. # we'll create the 'folio' field using the value of this variable this_folio = fol.match(line).group(0) continue # update the variable but otherwise do nothing with this line. if slug.match(line): # if our 'slug' regex matches, we know we have a new charter # so get the data from the match groups m = slug.match(line) chid = "GScriba_" + m.group(2) chno = int(m.group(3)) # then create an empty nested dictionary charters[chno] = {} # and an empty container for all the lines we won't use on this pass templist = [] # this works because we're proceeding in document order: templist continues to exist as we iterate through each line in the charter, then is reset to the empty list when we start a new charter(slug.match(line)) continue # we generate the entry, but do nothing with the text of this line. if chno: # if a charter dictionary has been created, # then we can now populate it with data from our slug.match above. d = charters[chno] # 'd' is just more convenient than 'charters[chno]' d['footnotes'] = [] # We'll populate this empty list in a separate operation d['chid'] = chid d['chno'] = chno d['folio'] = this_folio d['pgno'] = this_page if re.match('^\(\d+\)', line): # this line is footnote text, because it has a footnote marker # like "(1)" at the beginning. So we'll deal with it later continue if pgbrk.match(line): # if line is a pagebreak, update the variable this_page = int(pgbrk.match(line).group(1)) elif fol.search(line): # if folio changes within the charter text, update the variable this_folio = fol.search(line).group(0) templist.append(line) else: # any line not otherwise accounted for, add to our temporary container templist.append(line) # add the temporary container to the dictionary after using # a list comprehension to strip out any empty lines. d['text'] = [x for x in templist if not x == '\n'] # strip empty lines
Add the ‘marginal notation’ and Italian summary lines to the dictionary
When we generated the dictionary of dictionaries above, we assigned fields for footnotes (just an empty list for now), charterID, charter number, the folio, and the page number. All remaining lines were appended to a list and assigned to the field ‘text’. In all cases, the first line of each charter’s text field should be the Italian summary as we have insured above. The second line in MOST cases, represents a kind of marginal notation usually ended by the ‘]’ character (which OCR misreads a lot). We have to find the cases that do not meet this criterion, supply or correct the missing ‘]’, and in the cases where there is no marginal notation I’ve supplied “no marginal]” in my working text. The following diagnostic script will print the charter number and first two lines of the text field for those charters that do not meet these criteria. Run this script separately against the
charters dictionary, and correct and update your canonical text accordingly.
n = 0 for ch in charters: txt = charters[ch]['text'] # remember: the text field is a python list of strings try: line1 = txt[0] line2 = txt[1] if line2 and ']' not in line2: n += 1 print "charter: %d\ntext, line 1: %s\ntext, line 2: %s" % (ch, line1, line2) except: print ch, "oops" # to pass the charters from the missing page 214
Note: The
try: except:blocks are made necessary by the fact that in my OCR output, the data for pg 214 somehow got missed out. This often happens. Scanning or photographing each page of a 600 page book is tedious in the extreme. It’s very easy to skip a page. You will inevitably have anomalies like this in your text that you will have to isolate and work around. The Python
try: except:pattern makes this easy. Python is also very helpful here in that you can do a lot more in the
except:clause beyond just printing “oops”. You could call a function that performs a whole separate operation on those anomalous bits.
Once we’re satisfied that line 1 and line 2 in the ‘text’ field for each charter in the
charters dictionary are the Italian summary and the marginal notation respectively, we can make another iteration of the
charters dictionary, removing those lines from the text field and creating new fields in the charter entry for them.
NOTA BENE: we are now modifying a data structure in memory rather than editing successive text files. So this script should be added to the one above that created your skeleton dictionary. That script creates the
chartersdictionary in memory, and this one modifies it
for ch in charters: d = charters[ch] try: d['summary'] = d['text'].pop(0).strip() d['marginal'] = d['text'].pop(0).strip() except IndexError: # this will report that the charters on p 214 are missing print "missing charter ", ch
Assign footnotes to their respective charters and add to dictionary
The trickiest part is to get the footnote texts appearing at the bottom of the page associated with their appropriate charters. Since we are, perforce, analyzing our text line by line, we’re faced with the problem of associating a given footnote reference with its appropriate footnote text when there are perhaps many lines intervening.
For this we go back to the same list of lines that we built the dictionary from. We’re depending on all the footnote markers appearing within the charter text, i.e. none of them are at the beginning of a line. And, each of the footnote texts is on a separate line beginning with ‘(1)’ etc. We design regexes that can distinguish between the two and construct a container to hold them as we iterate over the lines. As we iterate over the lines of the text file, we find and assign markers and texts to our temporary container, and then, each time we reach a page break, we assign them to their appropriate fields in our existing Python dictionary
charters and reset our temporary container to the empty
dict.
Note how we construct that temporary container.
fndict starts out as an empty dictionary. As we iterate through the lines of our input text, if we find footnote markers within the line, we create an entry in
fndict whose key is the footnote number, and whose value is another dictionary. In that dictionary we record the id of the charter that the footnote belongs to, and we create an empty field for the footnote text. When we find the footnote texts (
ntexts) at the bottom of the page, we look up the footnote number in our container
fndict and write the text of the line to the empty field we made. So when we come to the end of the page, we have a dictionary of footnotes that looks like this:
{1: {'chid': 158, 'fntext': 'Nel ms. de due volte e ripa cancellato.'}, 2: {'chid': 158, 'fntext': 'Sic nel ms.'}, 3: {'chid': 159, 'fntext': 'genero cancellato nel ms.'}}
Now we have all the necessary information to assign the footnotes to the empty ‘footnotes’ list in the
charters dictionary: the number of the footnote (the key), the charter it belongs to (chid), and the text of the footnote (fntext).
This is a common pattern in programming, and very useful: in an iterative process of some kind, you use an accumulator (our
fndict) to gather bits of data, then when your sentinel encounters a specified condition (the pagebreak) it does something with the data.
fin = open("your_current_source_file.txt", 'r') GScriba = fin.readlines() # in notemark, note the 'lookbehind' expression '?<!' to insure that # the marker '(1)' does not begin the string notemark = re.compile(r"\(\d+\)(?<!^\(\d+\))") notetext = re.compile(r"^\(\d+\)") this_charter = 1 pg = re.compile("~~~~~ PAGE \d+ ~~~~~") pgno = 1 fndict = {} for line in GScriba: nmarkers = notemark.findall(line) ntexts = notetext.findall(line) if pg.match(line): # This is our 'sentinel'. We've come to the end of a page, # so we record our accumulated footnote data in the 'charters' dict. for fn in fndict: chid = fndict[fn]['chid'] fntext = fndict[fn]['fntext'] charters[int(chid)]['footnotes'].append((fn, fntext)) pgno += 1 fndict = {} # and then re-initialize our temporary container if slug.match(line): # here's the beginning of a charter, so update the variable. this_charter = int(slug.match(line).group(3)) if nmarkers: for marker in [eval(x) for x in nmarkers]: # create an entry with the charter's id and an empty text field fndict[marker] = {'chid':this_charter, 'fntext': ''} if ntexts: for text in [eval(x) for x in ntexts]: try: # fill in the appropriate empty field. fndict[text]['fntext'] = re.sub('\(\d+\)', '', line).strip() except KeyError: print "printer's error? ", "pgno:", pgno, line
Note that the
try: except: blocks come to the rescue again here. The loop above kept breaking because in 3 instances it emerged that there existed footnotes at the bottom of a page for which there were no markers within the text. This was an editorial oversight in the published edition, not an OCR error. The result was that when I tried to address the non-existent entry in
fndict, I got a
KeyError. My
except: clause allowed me to find and look at the error, and determine that the error was in the original and nothing I could do anything about, so when generating the final version of
charters I replaced the
pass. Texts made by humans are messy; no getting around it.
try: except: exists to deal with that reality.
NOTA BENE: Again, bear in mind that we are modifying a data structure in memory rather than editing successive text files. So this loop should be added to your script below the summary and marginal loop, which is below the loop that created your skeleton dictionary.
Parse Dates and add to the dictionary
Dates are hard. Students of British history cling to Cheyney as to a spar on a troubled ocean. And, given the way the Gregorian calendar was adopted so gradually, and innumerable other local variations, correct date reckoning for medieval sources will always require care and local knowledge. Nevertheless, here too Python can be of some help.
Our Italian summary line invariably contains a date drawn from the text, and it’s conveniently set off from the rest of the line by parentheses. So we can parse them and create Python
date objects. Then, if we want, we can do some simple calendar arithmetic.
First we have to find and correct all the dates in the same way as we have done for the other metadata elements. Devise a diagnostic script that will iterate over your
charters dictionary, report the location of errors in your canonical text, and then fix them in your canonical text manually. Something like this:
summary_date = re.compile('\((\d{1,2})?(.*?)(\d{1,4})?\)') # we want to catch them all, and some have no day or month, hence the optional quantifiers: `?`. # And we want to make Python speak Italian: ital2int = {'gennaio': 1, 'febbraio': 2, 'marzo': 3, 'aprile': 4, 'maggio': 5, 'giugno': 6, 'luglio': 7, 'agosto': 8, 'settembre': 9, 'ottobre': 10, 'novembre': 11, 'dicembre': 12} import sys for ch in charters: try: d = charters[ch] i = summary_date.finditer(d['summary']) dt = list(i)[-1] # Always the last parenthetical expression, in case there is more than one. if dt.group(2).strip() not in ital2int.keys(): print "chno. %d fix the month %s" % (d['chno'], dt.group(2)) except: print d['chno'], "The usual suspects ", sys.exc_info()[:2]
Note: When using
try/exceptblocks, you should usually trap specific errors in the except clause, like
ValueErrorand the like; however, in ad hoc scripts like this, using
sys.exc_infois a quick and dirty way to get information about any exception that may be raised. (The sys module is full of such stuff, useful for debugging)
Once you’re satisfied that all the parenthetical date expressions are present and correct, and conform to your regular expression, you can parse them and add them to your data structure as dates rather than just strings. For this you can use the
datetime module.
This module is part of the standard library, is a deep subject, and ought to be the subject of its own tutorial, given the importance of dates for historians. As with a lot of other python modules, a good introduction is Doug Hellmann’s PyMOTW(module of the week). An even more able extension library is mxDateTime. Suffice it here to say that the
datetime.date module expects parameters like this:
>>> from datetime import date >>> dt = date(1160, 12, 25) >>> dt.isoformat() '1160-12-25'
So here’s our loop to parse the dates at the end of the Italian summary lines and store them in our
charters dictionary (remembering again that we want to modify our in-memory data structure
charters created above):
summary_date = re.compile('\((\d{1,2})?(.*?)(\d{1,4})?\)') from datetime import date for ch in charters: c = charters[ch] i = summary_date.finditer(c['summary']) for m in i: # remember 'i' is an iterator so even if there is more than one # parenthetical expression in c['summary'], the try clause will # succeed on the last one, or fail on all of them. try: yr = int(m.group(3)) mo = ital2int[m.group(2).strip()] day = int(m.group(1)) c['date'] = date(yr, mo, day) except: c['date'] = "date won't parse, see summary line"
Out of 803 charters, 29 wouldn’t parse, mostly because the date included only month and year. You can store these as strings, but then you have two data types claiming to be dates. Or you could supply a 01 as the default day and thus store a Python date object, but Jan. 1, 1160 isn’t the same thing as Jan. 1160 and thus distorts your metadata. Or you could just do as I have done and refer to the relevant source text: the Italian summary line in the printed edition.
Once you’ve got date objects, you can do date arithmetic. Supposing we wanted to find all the charters dated to within 3 weeks of Christmas, 1160.
# Let's import the whole thing and use dot notation: datetime.date() etc. import datetime # a timedelta is a span of time week = datetime.timedelta(weeks=1) for ch in charters: try: dt = charters[ch]['date'] christmas = datetime.date(1160,12,25) if abs(dt - christmas) < week * 3: print "chno: %s, date: %s" % (charters[ch]['chno'], dt) except: pass # avoid this idiom in production code
Which will give us this result:
chno: 790, date: 1160-12-14 chno: 791, date: 1160-12-15 chno: 792, date: 1161-01-01 chno: 793, date: 1161-01-04 chno: 794, date: 1161-01-05 chno: 795, date: 1161-01-05 chno: 796, date: 1161-01-10 chno: 797, date: 1161-01-10 chno: 798, date: 1161-01-06
Cool, huh?
Our completed data structure
Now we’ve corrected our canonical text as much as we need to to differentiate between the various bits of meta-data that we want to capture, and we’ve created a data structure in memory, our
charters dictionary, by making 4 passes, each one extending and modifying the dictionary in memory.
- create the skeleton
- separate out the
summaryand
marginallines and create dictionary fields for them.
- collect and assign footnotes to their respective charters
- parse the dates in the
summaryfield, and add them to their respective charters
Print out our resulting dictionary using
pprint(charters) and you’ll see something like this:
{ . . .. }
Printing out your Python dictionary as a literal string is not a bad thing to do. For a text this size, the resulting file is perfectly manageable, can be mailed around usefully and read into a python repl session very simply using
eval(), or pasted directly into a Python module file. On the other hand, if you want an even more reliable way to serialize it in an exclusively Python context, look into
Pickle. If you need to move it to some other context, JavaScript for example, or some
RDF triple stores, Python’s
json module will translate effectively. If you have to get some kind of XML output, I will be very sorry for you, but the
lxml python module may ease the pain a little.
Order from disorder, huzzah.
Now that we have an ordered data structure, we can do many things with it. As a very simple example, let’s append some code that just prints
charters out as html for display on a web-site:
fout = open("your_page.html", 'w') # create a text file to write the html to # write to the file your html header with some CSS formatting declarations fout.write(""" <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"> <html> <head> <title>Giovanni Scriba Vol. I</title> <style> h1 {text-align: center; color: #800; font-size: 16pt; margin-bottom: 0px; margin-top: 16px;} ul {list-style-type: none;} .sep {color: #800; text-align: center} .charter {width: 650px; margin-left: auto; margin-right: auto; margin-top: 60px; border-top: double #800;} .folio {color: #777;} .summary {color: #777; margin: 12px 0px 12px 12px;} .marginal {color: red} .charter-text {margin-left: 16px} .footnotes .page-number {font-size: 60%} </style></head> <body> """) # a loop that will write out a blob of html code for each of the charters in our dictionary: for x in charters: # use a shallow copy so that charters[x] is not modified for this specialized purpose d = charters[x].copy() try: if d['footnotes']: # remember, this is a list of tuples. So you can feed them directly # to the string interpolation operator in the list comprehension. fnlist = ["<li>(%s) %s</li>" % t for t in d['footnotes']] d['footnotes'] = "<ul>" + ''.join(fnlist) + "</ul>" else: d['footnotes'] = "" d['text'] = ' '.join(d['text']) # d['text'] is a list of strings <h1>%(chid)s</h1> <div class="folio">%(folio)s (pg. %(pgno)d)</div> <div class="summary">%(summary)s</div> <div class="marginal">%(marginal)s</div> <div class="text">%(text)s</div> <div class="footnotes">%(footnotes)s</div> </div> </div> """ fout.write(blob % d) # `string % dictionary` is a neat trick for html templating # that makes use of python's string interpolation syntax # see: fout.write("\n\n") except: # insert entries noting the absence of charters on the missing pg. 214 <h1>Charter no. %d is missing because the scan for Pg. 214 was ommited</h1> </div> </div> """ % d['chno'] fout.write(erratum) fout.write("""</body></html>""")
Drop the resulting file on a web browser, and you’ve got a nicely formated electronic edition.
html formatted charter example
Being able to do this with your, still mostly uncorrected, OCR output is not a trivial advantage. If you’re serious about creating a clean, error free, electronic edition of anything, you’ve got to do some serious proofreading. Having a source text formatted for reading is crucial; moreover, if your proofreader can change the font, spacing, color, layout, and so forth at will, you can increase their accuracy and productivity substantially. With this example in a modern web browser, tweaking those parameters with some simple CSS declarations is easy. Also, with some ordered HTML to work with, you might crowd-source the OCR error correction, instead of hiring that army of illiterate street urchins.
And, our original problem, OCR cleanup, is now much more tractable because we can target regular expressions for the specific sorts of metadata we have: errors in the Italian summary or in the Latin text? Or we could design search-and-replace routines just for specific charters, or groups of charters.
Beyond this though, there’s lots you can do with an ordered data set, including feeding it back through a markup tool like the brat as we did for the ChartEx project. Domain experts can then start adding layers of semantic tagging even if you don’t do any further OCR error correction. Moreover, with an ordered dataset we can get all sorts of output, some other flavor of XML (if you must) for example: TEI (Text Encoding Initiative), or EAD (Encoded Archival Description). Or you could read your dataset directly into a relational database, or some kind of key/value store. All of these things are essentially impossible if you’re working simply with a plain text file.
The bits of code above are in no way a turn-key solution for cleaning arbitrary OCR output. There is no such magic wand. The Google approach to scanning the contents of research libraries threatens to drown us in an ocean of bad data. Worse, it elides a fundamental fact of digital scholarship: digital sources are hard to get. Reliable, flexible, and useful digital texts require careful redaction and persistent curation. Google, Amazon, Facebook, et alia do not have to concern themselves with the quality of their data, just its quantity. Historians, on the other hand, must care first for the integrity of their sources.
The vast 18th and 19th century publishing projects, the Rolls Series, the Monumenta Germaniae Historica, and many others, bequeathed a treasure trove of source material to us by dint of a huge amount of very painstaking and detailed work by armies of dedicated and knowledgeable scholars. Their task was the same as ours: to faithfully transmit history’s legacy from its earlier forms into a more modern form, thereby making it more widely accessible. We can do no less. We have powerful tools at our disposal, but while that may change the scale of the task, it does not change its nature.
Suggested Citation
Jon Crump , "Generating an Ordered Data Set from an OCR Text File," Programming Historian, (2014-11-25), | http://programminghistorian.org/lessons/generating-an-ordered-data-set-from-an-OCR-text-file | CC-MAIN-2017-26 | refinedweb | 10,469 | 61.36 |
Overview
Atlassian SourceTree is a free Git and Mercurial client for Windows.
Atlassian SourceTree is a free Git and Mercurial client for Mac.
PyFoursquare
PyFoursquare is a Python interface for the Foursquare API.
If you are looking for a complete foursquare-APIv2 reference, go to
<>
How to Install it
After download the source code by:
easy_install pyfoursquare
or
Just execute the command at the terminal:
$ python setup.py install
How to Use it
import pyfoursquare as foursquare # == OAuth2 Authentication == # # This mode of authentication is the required one for Foursquare # The client id and client secret can be found on your application's Details # page located at client_id = "" client_secret = "" callback = '' auth = foursquare.OauthHandler(client_id, client_secret, callback) #First Redirect the user who wish to authenticate to. #It will be create the authorization url for your app auth_url = auth.get_authorization_url() print 'Please authorize: ' + auth_url #If the user accepts, it will be redirected back #to your registered REDIRECT_URI. #It will give you a code as # code = raw_input('The code: ').strip() #Now your server will make a request for #the access token. You can save this #for future access for your app for this user access_token = auth.get_access_token(code) print 'Your access token is ' + access_token #Now let's create an API api = foursquare.API(auth) #Now you can access the Foursquare API! result = api.venues_search(query='Burburinho', ll='-8.063542,-34.872891') #You can acess as a Model print dir(result[0]) #Access all its attributes print result[0].name """ If you already have the access token for this user you can go until lines 1- 13, and then get at your database the access token for this user and set the access token. auth.set_access_token('ACCESS_TOKEN') Now you can go on by the line 33. """ | https://bitbucket.org/dori/pyfoursquare | CC-MAIN-2017-26 | refinedweb | 291 | 54.63 |
1. Create a Paperspace GPU machine! Use promo code MLIIB2 for $5 towards your new machine!
important: you will need to add a public IP address to be able to access to Jupyter notebook that we are creating. Make sure to select that option. If you forgot, you can always add it later through the console
2. Install CUDA / Docker / nvidia-docker
Here's a really simple script. Once you have SSH'ed in to your new machine, just run the script by pasting in the following to your terminal:
wget -O - -q '' | sudo bash
For the curious: you can find the script here
When it is done you will need to restart the machine by typing:
sudo shutdown -r now
3. Run jupyter
When the machine is back up you should be good to go! Type the following to run a docker container that includes Jupyter. It will run a server on port
8888 of your machine.
sudo nvidia-docker run --rm --name tf-notebook -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu jupyter notebook --allow-root
Your notebook will be accessible from any computer but going to a web browser and entering in your machine's public IP address and the port :
You can confirm that the GPU is working by opening a notebook and typing:
from tensorflow.python.client import device_lib def get_available_devices(): local_device_protos = device_lib.list_local_devices() return [x.name for x in local_device_protos] print(get_available_devices())
Get started today with your own Paperspace desktop by signing up today! | https://blog.paperspace.com/jupyter-notebook-with-a-gpu-the-easy-way/ | CC-MAIN-2017-43 | refinedweb | 254 | 64 |
[Data Points]
EF Core in a Docker Containerized App
By Julie Lerman | April 2019 | Get the Code
I’ve spent a lot of time with Entity Framework and EF Core, and a lot of time working with Docker. You’ve seen evidence of this in many past columns. But, until now, I haven’t put them together. Because the Docker Tools for Visual Studio 2017 have been around for a while, I imagined it would be an easy leap. But it wasn’t. Perhaps it’s because I prefer to know what’s going on under the covers and I also like to know and comprehend my options. In any case, I ended up sifting through information from blog posts, articles, GitHub issues and Microsoft documentation before I was able to achieve my original goal. What I hope is to make it easier for readers of this column to find the path (and avoid some of the hiccups I encountered) by consolidating it all in one place.
I’m going to focus on Visual Studio 2017, and that means Windows, so you’ll need to ensure you have Docker Desktop for Windows installed (dockr.ly/2tEQgR4) and that you’ve set it up to use Linux containers (the default). This also requires that Hyper-V be enabled on your machine, but the installer will alert you if necessary. If you’re working in Visual Studio Code (regardless of OS), there are quite a few extensions for working with Docker directly from the IDE.
Creating the Project
I began my journey with a simple ASP.NET Core API. The steps to set up a new project to match mine are: New Project | .NET Core | ASP.NET Core Web Application. On the page where you choose the application type, choose API. Be sure that Enable Docker Support is checked (Figure 1). Leave the OS setting at Linux. Windows containers are larger and more complicated, and hosting options for them are still quite limited. I learned this the hard way.
Figure 1 Configuring the New ASP.NET Core API Project
Because you enabled Docker support, you’ll see a Dockerfile in the new project. Dockerfile provides instructions to the Docker engine for creating images and for running a container based on the final image. Running a container is akin to instantiating an object from a class. Figure 2 shows the Dockerfile created by the template. (Note that I’m using Visual Studio 2017 version 15.9.7 with .NET Core 2.2 installed on my computer. As the Docker Tools evolve, so may the Dockerfile.)
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base WORKDIR /app EXPOSE 80 FROM microsoft/dotnet:2.2-sdk AS build WORKDIR /src COPY ["DataApi/DataApi.csproj", "DataApi/"] RUN dotnet restore "DataApi/DataApi.csproj" COPY . . WORKDIR "/src/DataApi" RUN dotnet build "DataApi.csproj" -c Release -o /app FROM build AS publish RUN dotnet publish "DataApi.csproj" -c Release -o /app FROM base AS final WORKDIR /app COPY --from=publish /app . ENTRYPOINT ["dotnet", "DataApi.dll"]
The first instruction identifies the base image used to create the subsequent images and container for your app is specified as:
Then a build image will be created based on the base image. The build image is solely for building the application, so it needs the SDK, as well. A number of commands are executed on the build image to get your project code into the image, and to restore needed packages before building the image.
The next image created will be used for publishing; it’s based on the build image. For this image, Docker will run dotnet publish to create a folder with the minimal assets needed to run the application.
The final image has no need for the SDK and is created from the base image. All of the publish assets will get copied into this image and an Entrypoint is identified—that is, what should happen when this image is run.
By default, the Visual Studio tools only perform the first stage for the Debug configuration, skipping the publish and final images, but for a Release configuration the entire Dockerfile is used.
The various sets of builds are referred to as multi-stage builds, with each step focused on a different task. Interestingly, each command performed on an image, such as the six commands on the build image, causes Docker to create a new layer of the image. The Microsoft documentation on architecture for containerized .NET applications at bit.ly/2TeCbIu does a wonderful job of explaining the Dockerfile line by line and describing how it’s been made more efficient through the multi-stage builds.
For now, I’ll leave the Dockerfile at its default.
Debugging the Default Controller in Docker
Before heading to Docker debug, I’ll first verify the app by debugging in the ASP.NET Core self-hosted profile (which uses Kestrel, a cross-platform Web server for ASP.NET Core). Be sure that the Start Debugging button (green arrow on the toolbar) is set to run using the profile matching the name of your project—in my case that’s DataAPI.
Then run the app. The browser should open pointing to the URL and displaying the default controller method results (“value1,” “value2”). So now you know the app works and it’s time to try it in Docker. Stop the app and change the Debug profile to Docker. If Docker for Windows is running (with the appropriate settings noted at the start of this article), Docker will run the Dockerfile. If you’ve never pulled the referenced images before, the Docker engine will start by pulling down those images from the Docker hub. Pulling the images may take a few minutes. You can watch its progress in the Build Output window. Then Docker will build any images following the steps in Dockerfile, though it won’t rebuild any that haven’t changed since the last run. The final step is performed by the Visual Studio Tools for Docker, which will call docker build and then docker run to start the container. As a result, a new browser window (or tab) will open with the same output as before, but the URL is different because it’s coming from inside the Docker image that’s exposing that port. In my case, it’s. Alternate setups will cause the browser to launch using.
If Visual Studio Can’t Run Docker
I encountered two issues that initially prevented Docker from building the images. The very first time I tried to debug from Visual Studio targeting Docker, I got a message that, “An error occurred while attempting to run Docker container.” The error referenced container.targets line 256. This was neither informative nor helpful until I later realized that I could have seen the details of the error in the Build Output. After trying a variety of things (including a lot of reading on the Internet, yet still not checking the Build Output window, I ended up trying to pull an image from the Docker CLI, which prompted me to log in to Docker, even though I was already logged in to the Docker Desktop app. Once I did this, I was able to debug from Visual Studio 2017. Subsequently, logging out via the Docker CLI didn’t affect this and I could still debug. I’m not sure of the relationship between the two actions. However, when I completely uninstalled and reinstalled Docker Desktop for Windows, I was, again, forced to log in via the Docker CLI before I could run my app. According to the issue on GitHub at bit.ly/2Vxhsx4, it seems it’s because I logged into Docker for Windows using my e-mail address, not my login name.
I also got this same error once when I had disabled Hyper-V. Re-enabling Hyper-V and restarting the machine solved the problem. (For the curious, I needed to run a VirtualBox virtual machine for a completely unrelated task and VirtualBox requires Hyper-V to be disabled.)
What Is the Docker Engine Managing?
As a result of running this app for the first time, the Docker engine pulled down the two noted images from the Docker Hub (hub.docker.com) and was keeping track of them. But the Dockerfile also created other images that it had cached. Running docker images at the command line revealed the docker4w image used by Docker for Windows itself, an aspnetcore-runtime image pulled from Docker Hub, and the dataapi:dev image that was created by building the Dockerfile—that is, the image from which your application is running. If you run docker images -a to show hidden images, you’ll see two more images (those with no tags) that are the build and publish intermediate images created by Dockerfile, as shown in Figure 3. You won’t see anything about the SDK image, according to Microsoft’s Glenn Condron, “due to a quirk in the way Docker multi-stage build works.”
Figure 3 Exposed and Hidden Docker Images After Running the API
You can look at even more details about an image using the command:
What about the containers? The command docker ps reveals the container created by Docker Tools for Visual Studio 2017 calling docker run (with parameters) on the dev image. I’ve stacked the result in Figure 4 so you can see all the columns. There are no hidden containers.
Figure 4 The Docker Container Created by Running the App from Visual Studio 2017 Debug
Setting Up the Data API
Now let’s turn this into a data API using EF Core as the data persistence mechanism. The model is simplistic in order to focus on the containerization and its impact on your EF Core data source.
Begin by adding a class called Magazine.cs:
Next, you need to install three different NuGet packages. Because I’ll be showing you the difference between using a self-contained SQLite database and a SQL Server database, add both the Microsoft.EntityFrameworkCore.Sqlite and Microsoft.EntityFrameworkCore.SqlServer packages to the project. You’ll also be running EF Core migrations, so the third package to install is Microsoft.EntityFrameworkCore.Design.
Now I’ll let the tooling create a controller and DbContext for the API. In case this is new for you, here are the steps:
- Right-click on the Controllers folder in Solution Explorer.
- Choose Add | Controller | API Controller with actions, using Entity Framework.
- Select the Magazine class as the Model class.
- Click the plus sign next to Data context class and change the highlighted portion of the name to Mag, so it becomes [YourApp].Models.MagContext, and then click Add.
- Leave the default controller name as MagazinesController.
- Click Add.
When you’re done, you’ll have a new Data folder with the MagContext class and the Controllers folder will have a new MagazineController.cs file.
Now I’ll have EF Core seed the database with three magazines using the DbContext-based seeding I wrote about in my August 2018 column (msdn.com/magazine/mt829703). Add this method to MagContext.cs:
Setting Up the Database
To create the database, I need to specify the provider and connection string, and create and then run a migration. I want to start by going down a familiar path, so I’ll begin by targeting SQL Server LocalDB and specifying the connection string in the appsettings.json file.
When you open appsettings.json, you’ll find it already contains a connection string, which was created by the controller tooling when I let it define the MagContext file. Even though both SQL Server and SQLite providers were installed, it seems to have defaulted to the SQL Server provider. This proved to be true in subsequent tests. I prefer my own connection string name and my own database name, so, I replaced the MagContext connection string with MagsConnectionMssql, and added my preferred database name, DP0419Mags:
In the app’s startup.cs file, which includes a ConfigureServices method, the tooling also inserted code to configure the DbContext. Change its connection string name from MagContext to match the new name:
Now I can use EF Core migrations to create the first migration and, as I’m in Visual Studio, I can do that using PowerShell commands in the Package Manager Console:
Migrating the Database
That command created a migration file, but I’m not going to create my database using the migration commands—when I deploy my app, I don’t want to have to execute migrations commands to create or update the database. Instead, I’ll use the EF Core Database.Migrate method. Where this logic method goes in your app is an important decision. You need it to run when the application starts up. A lot of people interpret this to mean when the startup.cs file is run, but the ASP.NET Core team recommends placing application startup code in the program.cs file, which is the true starting point of an ASP.NET Core app. But as with any decision, there may well be factors that affect this guidance.
The program’s default Main method calls the ASP.NET Core method, CreateWebHostBuilder, which performs a lot of tasks on your behalf, then calls two more methods—Build and Run:
I need to migrate the database after Build but before Run. To do this, I’ve created an extension method to read the service provider information defined in startup.cs, which will discover the DbContext configuration. Then the method calls Database.Migrate on the context. I adapted code (and guidance from EF Core team member, Brice Lambson) from the GitHub issue at bit.ly/2T19cbYto create the extension method for IWebHost shown in Figure 5. The method is designed to take a generic DbContext type.
public static IWebHost MigrateDatabase<T>(this IWebHost webHost) where T : DbContext { using (var scope = webHost.Services.CreateScope()) { var services = scope.ServiceProvider; try { var db = services.GetRequiredService<T>(); db.Database.Migrate(); } catch (Exception ex) { var logger = services.GetRequiredService<ILogger<Program>>(); logger.LogError(ex, "An error occurred while migrating the database."); } } return webHost; }
Then I modified the Main method to call MigrateDatabase for MagContext between Build and Run:
As you’re adding all of this new code, Visual Studio should prompt you to add using statements for Microsoft.EntityFrameworkCore, Microsoft.Extensions.DependencyInjection and the namespace for your MagContext class.
Now the database will get migrated (or even created) as needed at runtime.
One last step before debugging is to tell ASP.NET Core to point to the Magazines controller when starting, not the values controller. You can do that in the launchsettings.json file, changing the instances of launchUrl from api/values to api/Magazines.
Running the Data API in Kestrel, Then in Docker
As I did for the values controller, I’m going to start by testing this out in the self-hosted server via the project profile (for example, DataAPI), not the Docker profile. Because the database doesn’t yet exist, Migrate will create the new database, which means there will be a short delay because SQL Server, even LocalDB, has a lot of work to do. But the database does get created and seeded and then the default controller method reads and displays the three magazines in the browser at localhost:5000/api/Magazines.
Now let’s try it out with Docker. Change the Debug profile to Docker and run it again. Oh no! When the browser opens, it displays a SQLException, with details explaining that TryGetConnection failed.
What’s going on here? The app is looking for the SQL Server (defined as “(localdb)\\mssqllocaldb” in the connection string) inside the running Docker container. But LocalDB is installed on my computer and not inside the container. Even though it’s a common choice for preparing for a SQL Server database when you’re in a development environment, it doesn’t work so easily when you’re targeting Docker containers.
This means I have more work to do—and you possibly have more questions. I certainly did.
A Detour to Easy Street
There are some great options, such as using SQL Server for Linux in another Docker container or targeting a SQL Azure database. I’ll be digging into those solutions in the next couple of articles, but first I want you to see a quick solution where the database server will indeed exist inside of the container and your API will run successfully. You can achieve this easily with SQLite, which is a self-contained database.
You should already have the Microsoft.EntityFramework.SQLite package installed. This NuGet package’s dependencies will force the SQLite runtime components to install in the image where the app is built.
Add a new connection string called MagsConnectionSqlite to the appsettings.json file. I’ve specified the file name as DP0419Mags.db:
In Startup, change the DbContext provider to SQLite with the new connection string name:
The migration file you created is specific to the SQL Server provider, so you’ll need to replace that. Delete the entire Migrations folder, then run Add-Migration initSqlite in the Package Manager Console to recreate the folder along with the migration and snapshot files.
You can run this against the built-in server if you want to see the file that gets created, or you can just start debugging this in Docker. The new SQLite database gets created very quickly when the Migrate command is called and the browser then displays the three magazines again. Note that the IP address of the URL will match the one you saw earlier when running the values controller in Docker. In my case, that’s. So now the API and SQLite are both running inside the Docker container.
A More Production-Friendly Solution, Coming Soon
While using the SQLite database certainly simplifies the task of letting EF Core create a database inside the same container that’s running the app, this is most likely not how you’d want to deploy your API into production. One of the beauties of containers is that you can express separation of concerns by employing and coordinating multiple containers.
In the case of this tiny solution, perhaps SQLite would do the job. But for your real-world solutions, you should leverage other options. Focusing on SQL Server, one of those options would be to target an Azure SQL Database. With this option, regardless of where you’re running the app (on your development machine, in IIS, in a Docker container, in a Docker container in the cloud), you can be sure to always be pointing to a consistent database server or database depending on your requirements. Another path is to leverage containerized database servers, such as SQL Server for Linux, as I’ve written about in an earlier column (msdn.com/magazine/mt784660). Microservices introduces another layer of possible solutions given that the guidance is to have one database per microservice. You could easily manage those in containers, as well. There’s a great (and free) book from Microsoft on architecting .NET apps for containerized microservices at bit.ly/2NsfYBt.
In the next few columns, I’ll explore some of these solutions as I show you how to target SQL Azure or a containerized SQL Server; manage connection strings and protect credentials using Docker environment variables; and enable EF Core to discover connection strings at design time, using migrations commands and at run time from within the Docker container. Even with my pre-existing experience with Docker and EF Core, I went through a lot of learning curves working out the details of these solutions and am looking forward to sharing them all with you. experts for reviewing this article: Glenn Condron, Steven Green, Mike Morton
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
Data Points - EF Core in a Docker Containerized App...
Apr 1, 2019 | https://msdn.microsoft.com/magazine/mt833405 | CC-MAIN-2019-22 | refinedweb | 3,328 | 63.49 |
Params
Params is an object used by the http.* methods that generate HTTP requests. Params contains request-specific options like e.g. HTTP headers that should be inserted into the request.
Params.auth
string
The authentication method used for the request. It currently supports
digest,
ntlm, and
basic authentication methods.
Params.cookies
object
Object with key-value pairs representing request scoped cookies (they won't be added to VU cookie jar)
{cookies: { key: "val", key2: "val2" }}
You also have the option to say that a request scoped cookie should override a cookie in the VU cookie jar:
{cookies: { key: { value: "val", replace: true }}}
Params.headers
object
Object with key-value pairs representing custom HTTP headers the user would like to add to the request.
Params.jar
object
http.CookieJar object to override default VU cookie jar with. Cookies added to request will be sourced from this jar and cookies set by server will be added to this jar.
Params.redirects
number
The number of redirects to follow for this request. Overrides the global test option
maxRedirects.
Params.tags
object
Key-value pairs where the keys are names of tags and the values are tag values. Response time metrics generated as a result of the request will have these tags added to them, allowing the user to filter out those results specifically, when looking at results data.
Params.timeout
number
Request timeout to use in milliseconds. Default timeout is 60s.
import http from "k6/http"; export default function() { let params = { cookies: { "my_cookie": "value" }, headers: { "X-MyHeader": "k6test" }, redirects: 5, tags: { "k6test": "yes" } }; let res = http.get("", params); };
A k6 script that will make an HTTP request with a custom HTTP header and tag results data with a specific tag
Here is another example using http.batch() with a
Params argument:
import http from "k6/http"; let url1 = ""; let url2 = "" let apiToken = "f232831bda15dd233c53b9c548732c0197619a3d3c451134d9abded7eb5bb195"; let requestHeaders = { "User-Agent": "k6", "Authorization": "Token " + apiToken }; export default function() { let res = http.batch([ { method: "GET", url: url1, params: { headers: requestHeaders } }, { method: "GET", url: url2 } ]); };
Here is one example of how to use the
Params to Digest Authentication
import http from "k6/http"; import { check } from "k6"; export default function() { // Passing username and password as part of URL plus the auth option will authenticate using HTTP Digest authentication let res = http.get("", {auth: "digest"}); // Verify response check(res, { "status is 200": (r) => r.status === 200, "is authenticated": (r) => r.json().authenticated === true, "is correct user": (r) => r.json().user === "user" }); } | https://docs.k6.io/docs/params-k6http | CC-MAIN-2018-30 | refinedweb | 412 | 55.24 |
Take 40% off Hugo in Action by entering fccjain into the discount code box at checkout at manning.com.
Just like dynamic form submissions, a search widget with real-time results as you type requires JavaScript. Now that we have a skeleton structure of the JS code and JSON-based pseudo-API for the website ready, we can use them to provide client-side search.
Concept of client-side search
In traditional systems, search is server-based, where the keyword supplied by the client maps to values in a search index which provides the best ranking pages for the keyword. A client-side search is a concept where the server supplies this index to the client (or the client builds it dynamically), and this mapping happens on the client.
Client-side search has a bunch of advantages over server-based search:
- The search index is static like everything else in the Jamstack. It can be distributed over a CDN and provide all the advantages of caching and performance that a CDN has to offer.
- The search index is pushed to the client on-demand or even preloaded. Therefore, there is no roundtrip time lost in sending keystrokes to the server, and the search becomes faster.
- There is no additional server to maintain and keep in sync with the database. The user’s machine supplies the resources required to perform the search.
- The search can work even if the user goes offline after loading the initial web page.
The significant limitation of the client-side search is the size of the index. If we have an extensive index, the preload of the search index can prove to be too bandwidth-intensive to be of any practical use. We can split the index and load it in parts on demand but stretching that approach goes back to the world of the entire index maintained by the server.
For most websites, the textual content is not very huge from the eyes of the modern web. 2 MB of data can store 2 million characters. That number is considerable for a text-based search index but not a massive overhead for a web page where we do have images of this size often on websites. While we can create a more optimized and robust search index in Hugo, the amount of data in the Acme Corporation website is so tiny that we supply all of it via the JSON pseudo-API. We can even move the search index creation to JavaScript.
Showing the search box in the header
A search widget consists of an input box and a result dropdown to show results from partial queries. We will be adding it to the website header.
Listing 1. Search form to be added to the website header. (AcmeTheme/layouts/_default/baseof.html)
<header> ... <span id="search"> ❶ <input type="search" placeholder="Search"> ❷ <div></div> ❸ </span> {{ partialCached "menu.html" ... }} ... </header>
❶ Wrapper div to contain the search form and the result list.
❷ Actual search form for the website.
❸ Placeholder for search results.
That is all that is needed. JavaScript will be jumping in to make this search field active.
To fill up the search results in JS, we need to load the website content from the Pseudo API and create a search index. We can use JavasScript’s fetch function to fetch the website data into a variable. ( ).
Listing 2. Loading the website data using window.fetch function in JavaScript (AcmeTheme/assets/search.js)
export default { async init() { try { const response = await window.fetch( ❶ "/index.json"); if (!response.ok) { this.removeSearch(); ❷ return; } let data = await response.json(); ❸ / Just for now. console.log(data); } catch(e) { this.removeSearch(); } }, removeSearch() { document.querySelector("#search")?.remove(); } }
❶ Using the fetch function to download the index file with all the website content.
❷ In case of error, remove the search box.
❸ Get the response data as an object from JSON.
The above code has one problem that will break if the website is hosted within a subfolder like in GitHub Pages, as this code assumes the root
/index.json is where the JSON version of the code lives. We will be passing the
site.BaseURL to the
defines as another variable to fix this. This value needs to be surrounded by quotation marks to be valid JavaScript (we could use params instead of defines which does not have this limitation).
Listing 3. Adding BaseURL to defines parameter (AcmeTheme/layouts/default/baseof.html)
{{ $defines := dict "REMOVE_FORM_ON_SUBMISSION" (default "false" ( site.Param "RemoveFormOnSubmission")) "BASE_URL" (print "\"" site.BaseURL "\"") }} ❶
❶ Surround by quotes to make this a valid JavaScript string.
We will also need to fix our JS code.
Listing 4. Adding BASE_URL to ensure the search always picks up from the right endpoint(AcmeTheme/assets/search.js)
const response = await window.fetch( BASE_URL + "/index.json");
We will be invoking the init method of the search form from the index. Even though the function is async, we can call it without using
await if we do not need to wait for it to return a valid value.
Listing 5. Initializing the search query. (AcmeTheme/assets/index.js)
import Search from "./search" function init() { ... Search.init(); }
The code above should log the entire contents of the website in the browser console.
Code Checkpoint. Live at.
Source
Importing a search library
When the data on the website is small, we can use regular expressions and loop through the content to find results. It may work, but an excellent full-text search library can be helpful when we need features like fuzzy matching (which allows for results with partial terms and autocompletes), properly weighted scoring of search results. The JavaScript ecosystem has many ready-to-use libraries, which are very well maintained and easy-to-use readily available.
Node.js (See) needs to be installed (we can use any version) on the machine for getting community modules. Once node.js is available, we can use the npm (Node package manager) command line.
Before installing a node.js dependency, we need to initialize node.js for our project. We have multiple projects on our website, the Acme Theme project and the Acme Corporation website project. Since the search code lives in Acme Theme and is shared, we need to initialize node.js in the Acme Theme project.
To do that, we will be running
npm init and answering a small questionnaire to get a `package.json file that can list our JavaScript-based dependencies.
Listing 6. Initializing as a npm reposiory (In AcmeTheme/)
npm init
Next, we need to search for and download a node.js module to help users with
fuzzy search. To find a library using
npm, you can use the
npm search command.
Listing 7. Searching for fuzzy search library on npm
npm search fuzzy search
Listing 8. Search results for fuzzy search using npm search fuzzy search.
❯ npm search fuzzy search NAME | DESCRIPTION | AUTHOR | DATE fuse.js | Lightweight… | =krisk | 2021-01-05| fastest-levenshtein | Fastest Levenshtein… | =ka-weihe | 2020-08-07| fuzzy-search | Simple fuzzy search | =wouter2203|2020-02-20| feathers-mongodb-fuzzy-se | hook which adds… | =arve0 | 2020-09-13| arch | | | | minisearch | Tiny but powerful… | =lucaong |2021-06-25| mongoose-fuzzy-searching|Mongoose fuzzy…| =vspallas|2020-11-03| fuzzy-tools | Functions for fuzzy… | =axules | 2021-04-18| fuzzy | small, standalone… | =mattyork | 2016-10-01| leven-match | Return all word… | =eklem | 2021-06-11| fuzzysearch | Tiny and… | =bevacqua | 2015-03-06| mongoose-fuzzy | Mongoose fuzzy… | =pabloc | 2020-07-28| scored-fuzzysearch | Tiny and… | =jhudson | 2020-07-31| neofuzzy | Quick fuzzy search… | =jeanno | 2020-11-26| fuzzy-search-mongoose | Fuzzy sarch | =piotreksl | 2020-09-28| vue-fuse | A Vue.js pluggin… | =shayneosulli… | 2021-07-02| liblevenshtein | Various utilities… | =dylon.edwards | 2015-07-04| fuzzy-pop | Simple fuzzy search… | =yoshokatana | 2015-05-05| fast-fuzzy | Fast and tiny… | =ethanrutherf… | 2021-05-19| react-fuzzy-picker | Search through a… | =1egoman | 2019-09-29|
Here the root command passed to
npm is search, and we are searching for a library that provides
fuzzy search. The top result from npm is
fuse.js. A quick check over the internet shows us that `fuse.js` is Apache-licensed, reasonably small(< 50kB), has no other dependencies, and has been maintained regularly for almost a decade with regular releases along with having a lot of downloads and packages depending on it.
To add a dependency, we can use the
npm install command. The
--save-dev flag saves the development dependency in package.json so that is it is available for use if we do
npm install on a new machine. A development dependency means that it is used only during development and not required in the released website. Since we compile our dependencies, we do not need them at runtime.
I would recommend using version 6 of fuse.js.
Listing 9. Adding fuse.js as a dependency (In AcmeTheme/)
npm install --save-dev [email protected]
This command will generate a file called the
package-lock.json file along with a
node_modules folder. The
package-lock.json is equivalent to
go.sum and holds the checksums to ensure the integrity of our dependencies. The
node_modules folder is similar to the
_vendor folder, which stores our dependencies. npm does not create a hidden folder for the dependencies.
Updating our build systems to support npm
Unless archiving the
node_modules folder does not make sense to be submitted to source control. It is not easy to keep it out either, as we will need to run
npm install inside the
AcmeTheme module to get its contents. Running
npm install inside the AcmeTheme module may not be possible since Hugo, by default, puts modules in a hidden folder.
Therefore we need a way to get the
fuse.js dependency exposed to the top-level AcmeCorporationWebsite project. To perform this task, we need to rename
package.json inside the AcmeTheme module to
package.hugo.json. If there is a
package.hugo.json file present in a Hugo module, Hugo understands that this module depends on npm, and Hugo is allowed to copy its dependencies to the top-level project.
To transfer our dependency to the top-level AcmeCorporationWebsite project, we can run the following command:
Listing 10. Generating the top-level package.json by packing all module packages (In website root folder /)
hugo mod npm pack
Hugo will initialize the top-level AcmeCorporation website as an npm-based project and create a package.hugo.json and a package.json. Now we run
npm install at the top-level AcmeCorporationWebsite project to get node_modules and package-lock.json in that folder. The ones in the AcmeTheme project are redundant, and we can delete them. If ever a new dependency is added to the AcmeTheme project, we need to add it to the
package.hugo.json and run the
hugo mod npm pack command again.
Next, we need to update our build script to install npm-based dependencies. For this to work, we need the
npm install command installed on the build machines. Netlify’s build machines come pre-installed with npm, while for GitHub Actions, we need to add a step. Note that
npm i' is a shorthand to `npm install. There is also an
npm ci command which ensures the dependencies match the package-lock.json. But it does not delete the already installed node_modules may cause builds to take longer.
Note that since we have the exact version of our Hugo Modules-based dependencies via
go.sum, the npm-based dependencies of the Hugo Modules can not change across builds. Therefore we can run
hugo mod npm pack only when we change our modules and check in the generate package.json in source control.
Updating Netlify
To update the build command in Netlify, we can go to the
Site settings > Build & deploy > Continuous deployment > Build command to update the build command. Since the Netlify UI takes only one test box for the build command, we can use the
&& operator to pipe commands and ensure both succeeded.
Listing 11. Build command to setup npm based dependencies and build Hugo.
npm i && hugo --minify --baseURL $DEPLOY_PRIME_URL
Updating GitHub Actions
For GitHub Pages, we need to add a set of build steps in
gh-pages.yml to set up
node.js and then run `npm i’.
Listing 12. Changes to GitHub Actions to install npm and npm based dependencies. (.github/workflows/.gh-pages.yml)
jobs: deploy: steps: ... - name: Use Node.js uses: actions/[email protected] with: node-version: '16.x' - name: Install NPM Dependencies run: npm i
With these changes, we have the fuse.js search library ready to use in our JavaScript code.
Creating a search index
We can import
fuse.js by using the
import statement in JavaScript. After fetching the website data, we need to pass this to
fuse.js to create a search index. We will be making a weighted search index where the title will weigh
20, and a tag will score
5 while the content gets a weight of 1. This scoring allows for having the word in the title given a much higher value than being present in the web page content.
We store the index as a local variable of the module. This way, it can be used by all methods in the modules. Since search is not a class and we expect only one instance, a local variable of the module acts like a private variable not accessible outside of this file.
Listing 13. Importing The fuse.js library to perform a search within JSON-based content. fuse.js provides support for fuzzy matching, weighted search to have a great searching experience. Running this in JavaScript makes the search responsive and fast. (AcmeTheme/assets/search.js)
import Fuse from 'fuse.js' let index = null; ❶ export default { init() { ... let data = await response.json(); ❷ index= new Fuse(data, { keys: [{ ❸ name: 'title', weight: 20 }, { name: 'tag', weight: 5 }, { name: 'content' ❹ }] }); / Just to test. Do not leave in code. console.log(index.search('acme')); ❺ } }
❶ Creating a module variable to store the index to be used in all functions.
❷ Creating a fuse.js index.
❸
title is added with weight 20.
❹ If not provided, weight is treated as
1.
❺ While developing, leaving a test query can help. We log the search results to the browser’s console.
Code Checkpoint. Live at. Source code at
Getting search input and showing results
With the search input box and the search method ready, the next step is to link the two together. The first thing we need to do is listen to the input event on the search box. We will be running a search query as soon as the user enters a single character in the search box and displays the resultant page’s title in the result div. If the user presses the enter key, we will navigate to the first search result. We will also be limiting the number of search results to a reasonable number.
We also need to show the search result dropdown when the user focuses on the search box and remove it when the user clicks outside. The full file after this change is present in chapter resources ()
Listing 14. Showing the search results inline via a dropdown is relatively straightforward. We use the input event to take keyboard and context menu entries. (AcmeTheme/assets/search.js)
import Fuse from 'fuse.js' let index = null; const MAX_SEARCH_RESULTS = 5; export default { init() { ... document.addEventListener("input", this.showResults); ❶ } showResults(event) { const searchBox = document.querySelector( "#search input"); if (event.target !== searchBox) { return; } const result = document.querySelector( "#search div"); result.style. <img src="${x.item.cover || ""}" width= "40" height="40"> <h3>${x.item.title}</h3> <span>${x.item.content.substr( 0,40)}</span> </a>`) ❹ .join(""); } else { result.innerHTML = ''; } }, ... }
❶ The input event is the best one for a text box as it handles uncommon cases like copy paste via mouse and regular keyboard presses.
❷ The innerHTML is used to replace the contents of the dropdown. Note that we can update existing DOM elements instead if performance is a big concern.
❸ Limit the number of search results to MAX_SEARCH_RESULTS
❹ Provide a rich dropdown experience with an image and accompanying text.
Note that the variable MAX_SEARCH_RESULTS could come from the Hugo config as define or a param.
With these changes, we have a working search box inside our website to help users navigate the entire content.
Figure 1. Search with results dropdown showing up in the Acme Corporation Website. Search can be added in the Jamstack based websites using a Pseudo API to get all contents and using JavaScript to filter it.
Code Checkpoint. Live at. Source code at
The GitHub Pages repository with the npm changes is present at hugoinaction/GitHubPagesNpm.
Using Hugo modules with JavaScript
While npm is straightforward to use, we can continue to use Hugo modules to load dependencies. Hugo Modules allow dependencies to provide template code, bundled content, and other Hugo-specific data alongside JavaScript. The assets folder in a Hugo module acts as the
node_modules folder in node.js.
We did not add any keyboard handling in our search handler. We will be importing a Hugo module AcmeSearchSupport ( chapter-10-resources/06) to perform this task.
We start by adding this as a dependency to AcmeTheme.
Listing 15. Adding AcmeSearchSupport as a dependency to AcmeTheme (AcmeTheme/config.yaml)
module: ... imports: ... - path: github.com/hugoinaction/AcmeSearchSupport
Next, we load this module our
search.js and call it during initialization.
Listing 16. Loading JS code from Hugo modules to be compiled by js.Build (AcmeTheme/assets/search.js)
import AcmeSearchSupport from "SearchSupport" ... export default { async init() { ... try { ... AcmeSearchSupport(); } catch (e) { this.removeSearch(); } }, ... }
That’s all for this article. If you want to learn more about the book, check it out on Manning’s liveBook platform here. | https://freecontent.manning.com/enabling-client-side-search/ | CC-MAIN-2022-05 | refinedweb | 2,983 | 66.33 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.