Created on 2010-09-14.12:49:43 by babelmania, last changed 2013-02-26.19:08:35 by fwierzbicki.
Whenever Jython and Java classes are mixed in the same package structure and the Java class is not explicitly mentioned in __init__.py, the module fails to find it in subsequent import calls.
from foo import bar  # OK
from foo import Foo  # ERROR
(Yes, we are able to mix java and jython code within the same structure since Jython 2.1 and managed to make it work under Jython 2.5.1 as well)
This is due to the fact that a PyModule is initialized once but incompletely. Rather than doing a full initialization, one could simply fix this by adding the following code to PyModule:
@Override
public PyObject __findattr_ex__(String name) {
PyObject attr = super.__findattr_ex__(name);
if (attr != null)
return attr;
return impAttr(name);
}
Problem found in 2.5.1 and 2.5.2b2
Raised it to major, as I would /really/ like to see this end up in 2.5.2 :)
Sorry, I did not complete the explanation
The above package structure would have been:
foo/__init__.py # empty, only there primarily for namespace reasons
foo/bar.py
foo/Foo.class
Sounds reasonable, a minimally scoped fix. Marked high to be part of 2.5.2rc1
An apparent duplicate of #1464, which I recently closed because we didn't have a good idea of what to do here in terms of resolving the loading. Now we do. #1464 also has a test case we can potentially leverage.
We will use this issue to track. Moving nosy list here too.
Applied patch in r7151. Marked pending since it still needs a unit test; hopefully I will have time before 2.5.2rc1, but better to get it in now.
Much appreciated!
Hi Jim,
I noticed that the code ended up in Jython 2.5.2.
Thanks!
What is stopping you from closing the issue: the test harness?
cheers - Jorgo
Looks like this can be closed.

http://bugs.jython.org/issue1653
>> The cleanest solution is to let build_annotations_unwind run some
>> buffer-local hook function (e.g. write-region-post-annotate-function),
>> which can either run kill-buffer, and/or re-narrow the buffer, and/or
>> kill previous buffers.  The current code already allows it via
>> kill-buffer-hook, but using that is ugly and will lead to
>> other surprises.

> The annotation functions would need to add functions to that hook when
> they are run.

Not really.  When they switch to another buffer, they may need to adjust
that hook in that buffer.  The default value could be `kill-buffer', so
as to preserve current behavior.

> Another idea: allow a new type of return value for annotation functions,
> and use this to keep track of buffers to be killed.  For example, allow
> annotation functions to return (FUN1 . FUN2), where FUN1 and FUN2 are
> lambda functions.  Then FUN1 is called during annotation, and FUN2 is
> called after other annotations have taken place.

Since they return a buffer already, we may as well store the FUN2 inside
that buffer (as a buffer-local var).

        Stefan
On 11/17/05, Greg Woodhouse <gregory.woodhouse at sbcglobal.net> wrote:
> > Isn't there a potential for confusion with function composition (f . g)?
>
> Perhaps, but I always have spaces on either side when it's function
> composition.

Isn't there already an ambiguity?

-- I bet there's a quicker way to do this ...

module M where

data M a = M a deriving (Show)
data T a = T a deriving (Show)

module M.T where

f = (+1)

import M
import qualified M.T

f = (*2)

v1 = M . T . f $ 5
v2 = M.T.f $ 5

main = do { print v1; print v2; return () }

Fraser.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
One challenge faced by new iOS developers is how to work with the limited memory on Apple's handheld products. Applications must optimize memory use, avoid leaks, and reduce their overall footprint.
This article explains how an iOS application should manage its allocated memory. It describes the lifecycle of a Cocoa object and how that cycle differs on iOS, and then explains how to reduce the memory footprint and to prevent memory leaks. It also explains how to detect and react to a low-memory signal from iOS.
Readers must have a working knowledge of Xcode and Cocoa.
Life of a Cocoa Object
A typical Cocoa object undergoes three distinct stages in its lifecycle (Figure 1).
First, the object is created. This is done by sending an alloc message to the appropriate class. The class reserves the memory needed by the object and returns the object itself as a result. For example, the statement below creates an instance of the NSString class and stores it in the variable tFoo:
tFoo = [NSString alloc];
If the allocation fails due to insufficient memory, the class will return a nil object.
Next, the object is initialized. This is where it assumes a default state and value. It is where the object defines its delegates and prepares its parent. Initialization happens when the object gets an init message. Using our NSString example, this next statement initializes the object to a null string:
[tFoo init];
Now most Cocoa classes have custom methods with which to initialize their respective objects. And many of those methods have two or more arguments to pass data or states to the object. In the case of NSString, for instance, the method initWithString gives the object an initial string value:
[tFoo initWithString:@"foobar"];
Finally, the object reaches a time when it has to be disposed of. It cleans up after itself and frees up all the memory used by its properties and methods. Disposal usually happens when the object gets a dealloc message:
[tFoo dealloc];
But this could cause problems, especially when two or more other objects have to work with the affected object. Thankfully, there is a better way of disposing a Cocoa object, and it involves the use of reference counts.
Objects on Reference
By default, a newly created object has a reference count of 1. When that count hits 0, the object starts disposing of itself. In short, it literally self-destructs.
So to dispose of the object, send it a release message:
[tFoo release];
This will decrease the object's reference count by 1. The object runs its dealloc code, as well as that of its parent. Once the object is disposed of, its variable then points to a "bad" address.
But suppose we want to add the object to a collection, or we want to use the object as a return value. For these cases, we must keep the object from self-destructing on its own.
To prevent disposal, send a retain message to the object as follows:
[tFoo retain];
This will increase the object's reference count by 1. With a count greater than 1, the object stays valid and will be unable to dispose of itself.
Now it is important that the object gets an equal set of retains and releases (Figure 2). If the number of retains exceeds those of releases by at least one, the end result is a memory leak. Conversely, if the number of releases is greater, the result is a bad access error.
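The bookkeeping behind this rule can be sketched in plain C. This is an illustration only, not Objective-C, and the rc_* names are invented here; it just shows how a balanced set of retains and releases walks the count back down to zero:

```c
#include <stdlib.h>

/* Invented illustration of manual reference counting -- not the real
   Cocoa machinery. A new object starts with a count of 1; retain adds
   1; release subtracts 1 and frees the object when the count hits 0. */
typedef struct {
    int refcount;
} RCObject;

RCObject *rc_alloc(void) {
    RCObject *obj = malloc(sizeof *obj);
    obj->refcount = 1;          /* like a freshly alloc'ed object */
    return obj;
}

void rc_retain(RCObject *obj) {
    obj->refcount++;            /* like sending retain */
}

/* Returns 1 when the object disposed of itself, 0 otherwise. */
int rc_release(RCObject *obj) {
    if (--obj->refcount == 0) { /* count hit 0: self-destruct */
        free(obj);
        return 1;
    }
    return 0;
}
```

One extra retain with no matching release leaves the count stuck above zero forever (a leak); one extra release frees memory that is still in use (a bad access).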
An alternative way of disposing of an object is to mark it for autorelease. The marked object goes into an autorelease pool, created just before the application starts its event loop. Periodically, the pool checks its collection of objects, locating those with a reference count of 1, which are out of scope. When such an object is found, the pool disposes of it with a release message.
To place an object into the autorelease pool, send it an autorelease message:
[tFoo autorelease];
To prevent the pool from disposing of the object prematurely, use the retain message as described earlier.
Most Cocoa classes, however, can produce objects already marked for autorelease. This is done by using any one of the factory methods from each class. Consider again our NSString example. Its factory method stringWithString will create and initialize an instance of NSString. Furthermore, that same instance will be slated for autorelease.
tFoo = [NSString stringWithString:@"foobar"];
The iOS Difference
Now the same Cocoa object will have a similar lifecycle on iOS. Creating and initializing an object use the same messages. Retention and disposal are also the same. This is not surprising, of course, since iOS is a variant of Mac OS X, but one optimized for handheld use.
iOS has its own notable quirks. First, it has a smaller amount of physical memory than its desktop cousins. Physical memory can be as small as 256 MB on an iPhone 3GS, or 512 MB on an iPad 2. The system itself takes up at least 64 MB of that memory for its own needs. This leaves a smaller amount for all the applications to share. Flash memory, which figures in the gigabyte range, is used solely for storage. Furthermore, that same physical memory is not user-upgradeable.
Second, though iOS has the same virtual memory engine as OS X, its engine does not page inactive resources out to storage. Instead, it expects each iOS app to dispose of its unneeded resources and free up the occupied memory.
Third, iOS favors the active application session, which has the user's immediate attention. It subjects background tasks and inactive apps to frequent memory purges. In severe cases, iOS may terminate the apps themselves.
Finally, as of this writing, iOS does not have any garbage collection service. Instead, it expects each application to manage its memory share properly and frugally. Again, iOS will terminate those apps that habitually hog the memory store.
On Optimizing Footprint
Now the first step to prepare an application for iOS is to reduce its memory footprint. Too large a footprint can degrade an application's performance and that of its host system. It reduces the amount of available memory and marks the application for immediate termination.
Apple offers four guidelines on how to reduce an app's memory footprint. Follow these guidelines closely when working with your iOS projects.
1. Locate and fix all possible memory leaks.
As stated earlier, a memory leak happens when a Cocoa object fails to dispose of itself properly. It may be due to one too many retains, leaving the object with a reference count of 1 or more. It may even be due to the object using malloc() to allocate itself some memory, then failing to call free() to release said memory.
Consider the sample class in Listing One. This class defines a typical view controller. It declares three properties, which link the controller to three widgets on its window view (lines 5-7). Then it uses the @property and @synthesize keywords to declare and define the accessors for those outlets (lines 11-13, 23-25).
Listing One
// -- CLASS:CONTROLLER:DECLARATION
@interface MyViewController : UIViewController <UITextFieldDelegate>
{
    // -- properties:outlets
    UITextField *textField;
    UILabel *label;
    NSString *string;
}

// -- accessors:outlets
@property (nonatomic, retain) IBOutlet UITextField *textField;
@property (nonatomic, retain) IBOutlet UILabel *label;
@property (nonatomic, copy) IBOutlet NSString *string;

// -- methods:actions
- (IBAction) changeGreeting:(id)sender;
@end

// -- CLASS:CONTROLLER:DEFINITION
@implementation MyViewController

// -- accessors:outlets
@synthesize textField;
@synthesize label;
@synthesize string;

// -- The user has entered new text data
- (IBAction) changeGreeting:(id)sender
{
    // read the entered string
    self.string = textField.text;

    // prepare the name string
    NSString *nameString = string;

    // should a default name be used?
    if ([nameString length] == 0)
    {
        nameString = @"World";
    }

    // prepare the string data
    NSString *greeting = [[NSString alloc]
                          initWithFormat:@"Hello, %@!",
                          nameString];

    // display the string
    label.text = greeting;
    [greeting release];
}

// The user has pressed the <Return> key
- (BOOL)textFieldShouldReturn:(UITextField *)aFld
{
    // check the calling widget
    if (aFld == textField)
    {
        [textField resignFirstResponder];
    }
    return (YES);
}

// -- The view has been disposed
- (void)viewDidUnload
{
    // pass the message to the parent
    [super viewDidUnload];
}

// -- The controller is about to be destroyed
- (void)dealloc
{

    // dispose the following outlets
    [textField release];
    [label release];
    [string release];

    // pass the message to the parent
    [super dealloc];
}
@end
Note the three lines in the controller's dealloc routine (lines 75-77). They send a release message to each of the outlets. But what if we forget to add these three lines? The result will be each outlet remaining valid after the view controller has self-destructed. We then get a small leak, which grows every time the iOS app recreates and disposes of the same controller.
Next, consider the sample method in Listing Two. This method gets an NSString input, which holds a path string. First, the method copies the string into the local tNom (line 9). It uses the instance method pathComponents to separate the path into its component items (line 12). The separated items are then returned in the form of an NSArray. Next, the method reads the last item of that array (line 15). If the array has only one entry (just a root separator), the method prepares an empty string instead (line 17). And it returns the result as an NSString object.
Listing Two
// Return the file or directory name
- (NSString *)getItemName:(NSString *)aPth
{
    NSString *tNom;
    NSArray *tTmp;
    NSInteger tIdx;

    // create a string object
    tNom = [NSString stringWithString:aPth];

    // extract a path item
    tTmp = [tNom pathComponents];
    tIdx = [tTmp count];
    if (tIdx > 1)
        tNom = [tTmp objectAtIndex:(tIdx - 1)];
    else
        tNom = [NSString string];

    // return the extraction result
    return (tNom);
}
Note that all NSString instances are made with calls to factory methods. This means these instances are marked for autorelease. But suppose we send a retain message to the final instance:
[tNom retain];
At first, this looks innocuous. After all, it only increases the reference count by 1. But if we fail to send a matching release message, this NSString instance will remain in the pool, taking up precious memory, thus creating a memory leak.
I am using Entity Framework Code First and have the following POCO that represents a table in my database.
public class LogEntry { public int Id {get; set;} public DateTimeOffset TimeStamp {get;set;} public string Message {get; set;} public string CorrelationId {get; set;} }
The CorrelationId is not unique. There will typically be multiple records in the table with the same CorrelationId and this field is used to track what log entries correspond to what request.
I then have another object that lets me group these log entries by CorrelationId. This object does not map back to any tables in the DB.
public class AuditTrail
{
    public string CorrelationId { get; set; }
    public DateTimeOffset FirstEvent { get; set; }
    public List<LogEntry> LogEntries { get; set; }
}
I want to be able to populate a list of AuditTrail objects. The catch is that I want them to be sorted so that the newest Audit Trail records are at the top. I am also doing paging so I need the order by to happen before the group by so that the correct records get returned. i.e. I don't want to get the results and then sort through them. The sort needs to happen before the data is returned.
I have tried some queries and have ended up with this:
var audits = from a in context.LogEntries
             group a by a.CorrelationId into grp
             select grp.OrderByDescending(g => g.TimeStamp);
This gives me an IQueryable<IOrderedEnumerable<LogEntry>> back that I iterate through to build my AuditTrail objects. The problem is that the records are only sorted within the groups. For example I will get back an AuditTrail for yesterday followed by one from a week ago followed by one from today but within the LogEntries List all those entries are sorted. What I want is for the AuditTrails to come back in descending order based on the TimeStamp column so that new AuditTrails are displayed at the top of my table on the UI.
I have also tried this query (as per Entity Framework skip take by group by):
var audits = context.LogEntries.GroupBy(i => i.CorrelationId)
                               .Select(g => g.FirstOrDefault())
                               .OrderBy(i => i.TimeStamp)
                               .ToList();
This only returns the first LogEntry for each Correlation Id when I want them all back grouped by Correlation Id.
I think what you are looking for is something like this:
var audits = (from a in context.LogEntries
              group a by a.CorrelationId into grp
              let logentries = grp.OrderByDescending(g => g.TimeStamp)
              select new AuditTrail
              {
                  CorrelationId = grp.Key,
                  FirstEvent = logentries.First().TimeStamp,
                  LogEntries = logentries.ToList()
              }).OrderByDescending(at => at.FirstEvent);
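If it helps to sanity-check the intended ordering outside of EF, the same group-then-order shape can be sketched procedurally. This is plain Python, not EF; the dict keys are invented here to mirror the classes above:

```python
from itertools import groupby

def audit_trails(log_entries):
    """Group log entries by CorrelationId, newest trail first."""
    # itertools.groupby needs its input pre-sorted by the group key
    rows = sorted(log_entries, key=lambda e: e["CorrelationId"])
    trails = []
    for cid, grp in groupby(rows, key=lambda e: e["CorrelationId"]):
        # newest entry first inside each trail
        entries = sorted(grp, key=lambda e: e["TimeStamp"], reverse=True)
        trails.append({
            "CorrelationId": cid,
            "FirstEvent": entries[0]["TimeStamp"],  # newest entry's stamp
            "LogEntries": entries,
        })
    # newest AuditTrail at the top, as the question asks
    trails.sort(key=lambda t: t["FirstEvent"], reverse=True)
    return trails

log = [
    {"CorrelationId": "a", "TimeStamp": 3},
    {"CorrelationId": "b", "TimeStamp": 9},
    {"CorrelationId": "a", "TimeStamp": 5},
]
print([t["CorrelationId"] for t in audit_trails(log)])  # ['b', 'a']
```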
#include <Atmosphere.hpp>
The sky brightness is computed with the SkyBright class, and the color with the SkyLight class. Don't use this class directly; use it through the LandscapeMgr.
Set fade in/out duration in seconds.
Get fade in/out duration in seconds.
Define whether to display atmosphere.
Get whether atmosphere is displayed.
Get the actual atmosphere intensity due to eclipses + fader.
Get the average luminance of the atmosphere in cd/m^2. If atmosphere is off, the luminance includes the background starlight + light pollution.
Otherwise it includes the atmosphere + background starlight + eclipse factor + light pollution.
Set the light pollution luminance in cd/m^2.
Get the light pollution luminance in cd/m^2. | http://www.stellarium.org/doc/0.10.5/classAtmosphere.html | CC-MAIN-2015-35 | refinedweb | 113 | 53.37 |
//**************************************
//INCLUDE files for :Bubble Sort
//**************************************
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
//**************************************
// Name: Bubble Sort
// Description: The purpose of this code is to show novice programmers the simple and classic sorting method of Bubble Sort. The code is in C/C++ and should be compiled with DJGPP. The code includes the Bubble Sort algorithm in action.
// By: Joshua Thompson (from psc cd)
//
// Inputs:No Inputs
//
// Returns:No Returns
//
// Assumes: The code is complete. The Bubble Sort algorithm is modularized inside its own function. It is called from main; the current implementation sorts an array of ints, but can be modified to sort varying data types.
//
// Side Effects:No Side Effects
//**************************************
/* Function Prototypes */
void BubbleSort( int Array[], const int Size );
void PrintArray( int Array[], const int Size );
int main( void )
{
    int i;
    const int Size = 20;
    int Array[ Size ];

    /* Fill the Array with random values
       between 0 and 99 */
    for( i = 0; i < Size; i++ )
        Array[i] = random() % 100;

    /* Print the Random Array to Screen */
    clrscr();
    printf( "The Array with random order:\n\n");
    PrintArray( Array, Size );

    /* Wait for key Press... */
    printf( "\nPress any key..." );
    getch();

    /* Sort the Array using Bubble Sort */
    BubbleSort( Array, Size );

    /* Print the Smallest-to-Largest
       Order Array */
    clrscr();
    printf( "The Array after Bubble Sort:\n\n");
    PrintArray( Array, Size );

    /* End the Program */
    printf( "\nPress any key to quit..." );
    getch();
    return 0;
}

/* Uses the classic bubble sort algorithm */
void BubbleSort( int Array[], const int Size )
{
    int i, j, temp;

    for( i = 0; i < Size - 1; i++ )
        for( j = 0; j < Size - i - 1; j++ )  /* "- 1", not "+ 1", to stay in bounds */
            if( Array[j] > Array[j + 1] )
            {
                temp = Array[j];
                Array[j] = Array[j + 1];
                Array[j + 1] = temp;
            }
}

/* Prints an integer Array line by line */
void PrintArray( int Array[], const int Size )
{
    int i;

    for( i = 0; i < Size; i++ )
        printf("Array[%i] = %i\n", i, Array[i] );
}
I have realized that there is some funky stuff going on with the way Tensorflow seems to be managing graphs.
Since building (and rebuilding) models is so tedious, I decided to wrap my custom model in a class so I could easily re-instantiate it elsewhere.
When I was training and testing the code (in the original place) it would work fine, however in the code where I loaded the graph's variables I would get all sorts of weird errors - variable redefinitions and everything else. This (from my last question about a similar thing) was the hint that everything was being called twice.
After doing a TON of tracing, it came down to the way I was using the loaded code. It was being used from within a class that had a structure like so:

    class MyModelUser(object):
        def forecast(self):
            # .. build the model in the same way as in the training code
            # load the model checkpoint
            # call the "predict" function on the model
            # manipulate the prediction and return it

And MyModelUser was used like this:

    def test_the_model(self):
        model_user = MyModelUser()
        print(model_user.forecast())  # 1
        print(model_user.forecast())  # 2
The first call succeeded, but the second raised:

    ValueError: Variable weight_def/weights already exists, disallowed. Did you mean to set reuse=True in VarScope?

with a traceback running through get_variable, reuse_variables, and get_variable again. At other times I instead got:

    tensorflow.python.framework.errors.NotFoundError: Tensor name "weight_def/weights/Adam_1" not found in checkpoint files

So I moved the build and checkpoint loading into __init__:
    class MyModelUser(object):
        def __init__(self):
            # ... build the model in the same way as in the training code
            # load the model checkpoint

        def forecast(self):
            # call the "predict" function on the model
            # manipulate the prediction and return it

    def test_the_model(self):
        model_user = MyModelUser()
        print(model_user.forecast())  # 1
        print(model_user.forecast())  # 2

With the construction moved into __init__, the errors went away, but I would like to understand why.
By default, TensorFlow uses a single global tf.Graph instance that is created when you first call a TensorFlow API. If you do not create a tf.Graph explicitly, all operations, tensors, and variables will be created in that default instance. This means that each call in your code to model_user.forecast() will be adding operations to the same global graph, which is somewhat wasteful.
There are (at least) two possible courses of action here:
The ideal action would be to restructure your code so that MyModelUser.__init__() constructs an entire tf.Graph with all of the operations needed to perform forecasting, and MyModelUser.forecast() simply performs sess.run() calls on the existing graph. Ideally, you would only create a single tf.Session as well, because TensorFlow caches information about the graph in the session, and the execution would be more efficient.
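A minimal sketch of that first option, with the TensorFlow calls stubbed out (the lambda stands in for a restored graph and session, and build_count exists only to demonstrate that construction happens once):

```python
class MyModelUser(object):
    build_count = 0  # how many times the "graph" was constructed

    def __init__(self):
        # Build the graph, restore the checkpoint, and open the session
        # exactly once, here. (Stubbed: the lambda plays the role of a
        # restored model; a real version would keep a tf.Graph/tf.Session.)
        MyModelUser.build_count += 1
        self._predict = lambda x: 2 * x

    def forecast(self, x):
        # Only *run* the already-built model; never add new ops here.
        return self._predict(x)

user = MyModelUser()
print(user.forecast(3), user.forecast(4))  # 6 8
print(MyModelUser.build_count)             # 1 -- built once, run twice
```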
The less invasive—but probably less efficient—change would be to create a new tf.Graph for every call to MyModelUser.forecast(). It's unclear from the question how much state is created in the MyModelUser.__init__() method, but you could do something like the following to put the two calls in different graphs:
    def test_the_model(self):
        with tf.Graph().as_default():  # Create a local graph
            model_user_1 = MyModelUser()
            print(model_user_1.forecast())
        with tf.Graph().as_default():  # Create another local graph
            model_user_2 = MyModelUser()
            print(model_user_2.forecast())
Going back to the "work-around" in "_commit_internal": That "work-around" only works because we
know that the only value it MIGHT skip is the first one and that the other value is some other
"non-pointer" type.
What if THEY had both been pointers of the same type? What then if both were NULL?
My question was supposed to be an example of exactly that. It was also to be taken in the context of
the current "t_output_helper" madness of dropping objects of type "Py_None".
In other words, the Python script would only get ONE return value in my example.
That leads to the question, how would the script know WHICH one was just returned? Maybe it was
the first one... Maybe it was the third one...
In fact if all three pointers are NULL, there will be NO return value from this hypothetical
function.
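To make the ambiguity concrete, here is a toy Python sketch; drop_nones is a made-up stand-in for what t_output_helper reportedly does with Py_None results:

```python
def drop_nones(results):
    # Stand-in for the criticized behavior: None results never reach
    # the caller, so positional information is lost.
    return [r for r in results if r is not None]

print(drop_nones((None, "some pointer", None)))  # ['some pointer'] -- which slot?
print(drop_nones((None, None, None)))            # [] -- no return value at all
```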
It looks to me like one possible culprit, this funky "t_output_helper", is generated by SWIG...
Marshall
--- Karl Fogel <kfogel@newton.ch.collab.net> wrote:
> Marshall White <cscidork@yahoo.com> writes:
> > What if you have a function that returns three object pointers of
> > the same type?
> > If the function tries to return (None, "some pointer", None), how
> > does the Python script know which one was just returned?
> >
> > It almost doesn't look like there is an easy answer here.
>
> Why not just return
>
> (None, SOME_OBJECT, None)
>
> then? We know what number of return values we're expecting, so it
> should be okay if some of them are None, no?
>
> -K
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Fri Feb 28 00:25:07 2003
This is an archived mail posted to the Subversion Dev mailing list.
SmtpMail.send() does _not_ cause button click to block . . .
- From: glaserp@xxxxxxxxxxxxxxx
- Date: 1 Sep 2005 07:44:18 -0700
Hi,
I am using a simple form to test a library I am developing. The library
reads data from a database, puts together a report, and then sends the
report to a number of email recipients. I am initiating this process by
clicking a button on the form.
The smtp server I'm working with is a bit sluggish, so the
SmtpMail.Send() is taking about 5-10 seconds to return (don't ask; this
is out of my control). Stepping through the code in the debugger,
something a bit odd happens: it appears as if Send()returns
immediately, and the form pops back up, without any of the code
following Send() being executed. I wait the amount of time I know that
Send() really takes, and then the debugger pops back into the code at
the line just following Send(). I do not see this behavior if I drive
the library from a console app: here, the blocking behavior of Send()
causes the debugger to sit and wait until Send() returns, as one would
expect.
What's going on here? It almost seems as if the Application is deciding
that the execution of Send() is taking too long and somehow putting it
into the background. I don't understand how that could be happening
given that I never created a new thread for it to execute. Moreover, it
seems to me that this behavior could cause major problems upon
re-entry. For example: the GUI becomes active, and so I click the test
button again, and it ends up re-running code that is not intended to be
thread safe; what if I'm in the middle of re-running that code when
Send() does return?
In production, this library will be driven from from a windows service
application. Might the difference between exercising this code through
a GUI, versus through a command-line app, be the presence of a message
loop? If so, then this problem remains relevant in the service
application, and I need to figure out what to do.
Can this behavior be controlled, or do I have to run SmtpMail.Send() in
its own thread in order to head this problem off?
Thanks.
--Phil
QWebView/QWebPage need help with context menu
(Edit: I changed the title because it is clear that the context menu is the right approach but I can't see how to make it work.)
Hello. I have a simple browser based on QWebView/QWebPage. I would like my user to be able to ask for special treatment by right-clicking (control-clicking) a link. Or maybe shift-clicking, anyway a special click to request special linkage.
When the link is clicked normally, the WebPage should handle the link in the usual way. I want to offer special handling only when the link is differently-clicked (specifically, I would open the link in the system default browser instead of mine).
I can see two approaches: -one, set the link delegation policy to delegate all links, this gives me a signal linkClicked(URL). However I do not see, in the handler for this signal, any way to distinguish the type of click that caused the signal.-
Or, I could override createStandardContextMenu() and provide a menu with my action in it. This would be called on any right/control-click, but then: how do I know what link the mouse is on, or indeed, was it clicked on a link at all?
Any suggestions or different ideas most welcome.
A little more experimenting and I find the context menu is probably the right choice. QWebView knows whether the user has right-clicked a URL or just text! It uses different default context menus.
If I right-click on selected text (not a url) the context menu has only one choice, Copy. If I right-click on unselected text, the word under the cursor is selected(!) and then the menu is Copy.
But when I right-click on a link, I get a context menu with Open Link, Open in New Window, Save Link, Copy Link. That's the menu I need to customize...
But HOW does it know? If I want to provide a custom context menu only when the context menu is invoked over a link, how do I find out? The only input to contextMenuEvent is the reason (mouse or keyboard) and the position in the viewport.
From that, how do I figure out whether the event involves a link, and if so, the contents of the link?
The hint of an answer, from stackoverflow: get an action that assumes a link, and if it is enabled, there's a link. For example (this is python, inside a QWebView object)
@
linkact = self.page().action(QWebPage.OpenLink)
if linkact.isEnabled() :
    # it would appear there's a link involved...
@
Next question: how does one access the URL of that link???
OK, I am going to answer my own question. This can be marked [Solved].
The following code is in Python and PyQt4; PySide should be the same; translation to C++ should be straightforward.
To create a custom context menu in a QWebView all you need to do is re-implement contextMenuEvent. Your method will get control whenever a context menu is requested (typically by a right-click or (mac) control-click).
You receive a QContextMenuEvent object which contains the reason for the call (mouse, keyboard or other) and the global and relative point positions of the event.
@
def contextMenuEvent(self, cx_event) :
    why = cx_event.reason()
    rel_pos = cx_event.pos()
@
In my case I did not care about the reason, but I did want to know whether the user had clicked on a link or not. In order to find out you must test the context of the event. You call your QWebPage to get the web frame of the click, and you ask it for the hit context based on the relative position:
@
main_frame = self.page().mainFrame()
hit_test = main_frame.hitTestContent(rel_pos)
@
What is returned is a QWebHitTestResult object and this can be queried for various things. I wanted to know if it represented a link, and if it did not, I wanted to just pass control to the original context menu handler and exit:
@
hit_url = hit_test.linkUrl()
if hit_url.isEmpty() :
    super(myWebPage, self).contextMenuEvent(cx_event)
    return
@
When the event involved a link I need to supply the custom context menu. To do this, one creates a QMenu object and populates it with QActions, connecting the 'triggered()' signal of each action to an appropriate handler method. One can call self.pageAction(action-name) to get prepared QActions for many different web-related functions listed in the QWebPage::WebAction enum. This makes it easy to populate the QMenu with standard actions. However I did not do this. For demonstration purposes, here is the rest of my code:
@
# save the string form of the clicked URL as python string
self.contextUrl = unicode(hit_url.toString())
# Create the custom two-action menu:
ctx_menu = QMenu()
# Action one is, copy link to clipboard
ctx_copy = QAction(QString(u'Copy link to clipboard'),self)
self.connect(ctx_copy, SIGNAL("triggered()"), self.copyLinkToClipboard)
ctx_menu.addAction(ctx_copy)
# Action two is, open link in default browser
ctx_open = QAction(QString(u'Open in default browser'),self)
self.connect(ctx_open, SIGNAL("triggered()"), self.openInDefaultBrowser)
ctx_menu.addAction(ctx_open)
# Finally show the menu.
ctx_menu.exec_(cx_event.globalPos())
@
That completes the contextMenuEvent method. The two slots connected from the menu actions are:
@
def copyLinkToClipboard(self) :
QApplication.clipboard().setText(self.contextUrl)
def openInDefaultBrowser(self) :
webbrowser.open_new(self.contextUrl)
@
I coded the copy method this way rather than using pageAction(QWebPage::CopyLinkToClipboard) because the latter action seemed to use an internal clipboard: I could paste the copied link within my app, but not in another app. Coded as above, it goes to the system clipboard. The webbrowser.open_new call is a Python library module that finds the system browser in a platform-independent way. | https://forum.qt.io/topic/23736/qwebview-qwebpage-need-help-with-context-menu | CC-MAIN-2018-22 | refinedweb | 948 | 65.73 |
Resource Fields¶
When designing an API, an important component is defining the representation
of the data you’re presenting. Like Django models, you can control the
representation of a
Resource using fields. There are a variety of fields
for various types of data.
Quick Start¶
For the impatient:
from tastypie import fields, utils from tastypie.resources import Resource from myapp.api.resources import ProfileResource, NoteResource class PersonResource(Resource): name = fields.CharField(attribute='name') age = fields.IntegerField(attribute='years_old', null=True) created = fields.DateTimeField(readonly=True, help_text='When the person was created', default=utils.now) is_active = fields.BooleanField(default=True) profile = fields.ToOneField(ProfileResource, 'profile') notes = fields.ToManyField(NoteResource, 'notes', full=True)
Standard Data Fields¶
All standard data fields have a common base class
ApiField which handles
the basic implementation details.
Note
You should not use the
ApiField class directly. Please use one of the
subclasses that is more correct for your data.
Common Field Options¶
All
ApiField objects accept the following options.
attribute¶
A string naming an instance attribute of the object wrapped by the Resource. The
attribute will be accessed during the
dehydrate or or written during the
hydrate.
Defaults to
None, meaning data will be manually accessed.
default¶
Provides default data when the object being
dehydrated/
hydrated has no data on
the field.
Defaults to
tastypie.fields.NOT_PROVIDED.
null¶
Indicates whether or not a
None is allowable data on the field. Defaults to
False.
blank¶
Indicates whether or not data may be omitted on the field. Defaults to
False.
This is useful for allowing the user to omit data that you can populate based
on the request, such as the
user or
site to associate a record with.
readonly¶
Indicates whether the field is used during the
hydrate or not. Defaults to
False.
help_text¶
A human-readable description of the field exposed at the schema level. Defaults to the per-Field definition.
IntegerField¶
An integer field.
Covers
models.IntegerField,
models.PositiveIntegerField,
models.PositiveSmallIntegerField and
models.SmallIntegerField.
Relationship Fields¶
Provides access to data that is related within the database.
The
RelatedField base class is not intended for direct use but provides
functionality that
ToOneField and
ToManyField build upon.
The contents of this field actually point to another
Resource,
rather than the related object. This allows the field to represent its data
in different ways.
The abstractions based around this are “leaky” in that, unlike the other
fields provided by
tastypie, these fields don’t handle arbitrary objects
very well. The subclasses use Django’s ORM layer to make things go, though
there is no ORM-specific code at this level.
Common Field Options¶
In addition to the common attributes for all ApiField, relationship fields accept the following.
full¶
Indicates how the related
Resource will appear post-
dehydrate. If
False, the related
Resource will appear as a URL to the endpoint of
that resource. If
True, the result of the sub-resource’s
dehydrate will
be included in full. You can further control post-
dehydrate behaviour when
requesting a resource or a list of resources by setting
full_list and
full_detail.
full_list¶
Indicates how the related
Resource will appear post-
dehydrate when requesting a
list of resources. The value is one of
True,
False or a callable that accepts a
bundle and returns
True or
False. If
False, the related
Resource will appear
as a URL to the endpoint of that resource if accessing a list of resources. If
True and
full
is also
True, the result of thesub-resource’s
dehydrate will be included in
full. Default is
True
full_detail¶
Indicates how the related
Resource will appear post-
dehydrate when requesting a
single resource. The value is one of
True,
False or a callable that accepts a
bundle and returns
True or
False. If
False, the related
Resource will appear
as a URL to the endpoint of that resource if accessing a specific resources. If
True and
full
is also
True, the result of thesub-resource’s
dehydrate will be included
in full. Default is
True
Field Types¶
ToOneField¶
Provides access to related data via foreign key.
This subclass requires Django’s ORM layer to work properly.
ToManyField¶
Provides access to related data via a join table.
This subclass requires Django’s ORM layer to work properly.
This field also has special behavior when dealing with
attribute in that
it can take a callable. For instance, if you need to filter the reverse
relation, you can do something like:
subjects = fields.ToManyField(SubjectResource, attribute=lambda bundle: Subject.objects.filter(notes=bundle.obj, name__startswith='Personal'))
The callable should either return an iterable of objects or
None.
Note that the
hydrate portions of this field are quite different than
any other field.
hydrate_m2m actually handles the data and relations.
This is due to the way Django implements M2M relationships. | https://django-tastypie.readthedocs.io/en/latest/fields.html | CC-MAIN-2018-09 | refinedweb | 797 | 50.84 |
Jupyter Notebooks are one of the most important tools for data scientists using Python. This is because they're an ideal environment for developing reproducible data analysis pipelines. Data can be loaded, transformed, and modeled all inside a single Notebook, where it's quick and easy to test out code and explore ideas along the way. Furthermore, all of this can be documented "inline" using formatted text, so you can make notes for yourself or even produce a structured report. Other comparable platforms - for example, RStudio or Spyder - present the user with multiple windows, which promote arduous tasks such as copy and pasting code around and rerunning code that has already been executed. These tools also tend to involve Read Eval Prompt Loops (REPLs) where code is run in a terminal session that has saved memory. This type of development environment is bad for reproducibility and not ideal for development either. Jupyter Notebooks solve all these issues by giving the user a single window where code snippets are executed and outputs are displayed inline. This lets users develop code efficiently and allows them to look back at previous work for reference, or even to make alterations.
We'll start the chapter by explaining exactly what Jupyter Notebooks are and continue to discuss why they are so popular among data scientists. Then, we'll open a Notebook together and go through some exercises to learn how the platform is used. Finally, we'll dive into our first analysis and perform an exploratory analysis in the section Basic Functionality and Features.
By the end of this chapter, you will be able to:
- Learn what a Jupyter Notebook is and why it's useful for data analysis
- Use Jupyter Notebook features
- Study Python data science libraries
- Perform simple exploratory data analysis
In this section, we first demonstrate the usefulness of Jupyter Notebooks with examples and through discussion. Then, to cover the fundamentals for beginners, we'll see their basic usage in terms of launching and interacting with the platform. For those who have used Jupyter Notebooks before, this will be mostly a review; however, you will certainly see new things in this topic as well.
Those familiar with R will know about R Markdown. Markdown documents allow for Markdown-formatted text to be combined with executable code, much like lab-style Notebooks. In a lab-style Notebook, it's a good idea to accumulate multiple date-stamped versions as you progress through the analysis, in case you want to look back at previous states.

Deliverable Notebooks are intended to be presentable and should contain only select parts of the lab-style Notebooks. For example, this could be an interesting discovery to share with your colleagues, an in-depth analysis report, or a summary of key findings.
Now, we are going to open up a Jupyter Notebook and start to learn the interface. Here, we will assume you have no prior knowledge of the platform and go over the basic usage.
- Navigate to the companion material directory in the terminal.
Note
On Unix machines such as Mac or Linux, command-line navigation can be done using ls to display directory contents and
cd to change directories. On Windows machines, use
dir to display directory contents and use
cd to change directories instead. If, for example, you want to change the drive from
C: to
D: , you should execute
d: to change drives.
- Start a new local Notebook server here by typing the following into the terminal:
jupyter notebook

A new window or tab of your default browser will open the Notebook Dashboard to the working directory. Here, you will see a list of folders and files contained therein.
- Click on a folder to navigate to that particular path and open a file by clicking on it. Although its main use is editing IPYNB Notebook files, Jupyter functions as a standard text editor as well.
- Reopen the terminal window used to launch the app. We can see the NotebookApp being run on a local server. In particular, you should see a line like this:
[I 20:03:01.045 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/?token=e915bb06866f19ce462d959a9193a94c7c088e81765f9d8a

Going to that HTTP address will load the app in your browser window, as was done automatically when starting the app. Closing the window does not stop the app; this should be done from the terminal by typing Ctrl + C.
- Close the app by typing Ctrl + C in the terminal. You may also have to confirm by entering y. Close the web browser window as well.
- When loading the NotebookApp, there are various options available to you. In the terminal, see the list of available options by running the following:
jupyter notebook --help
- One such option is to specify a specific port. Open a NotebookApp at local port 9000 by running the following:
jupyter notebook --port 9000
- The primary way to create a new Jupyter Notebook is from the Jupyter Dashboard. Click New in the upper-right corner and select a kernel from the drop-down menu (that is, select something in the Notebooks section):
Kernels provide programming language support for the Notebook. If you have installed Python with Anaconda, that version should be the default kernel. Conda virtual environments will also be available here.
Note
Virtual environments are a great tool for managing multiple projects on the same machine. Each virtual environment may contain a different version of Python and external libraries. Python has built-in virtual environments; however, the Conda virtual environment integrates better with Jupyter Notebooks and boasts other nice features. See the Conda documentation for details on managing environments.
With the newly created blank Notebook, click in the top cell and type print('hello world'), or any other code snippet that writes to the screen. Execute it by clicking in the cell and pressing Shift + Enter, or by selecting Run Cells from the Cell menu.

- Click into an empty cell and change it to accept Markdown-formatted text. This can be done from the drop-down menu icon in the toolbar or by selecting Markdown from the Cell menu. Write some text in here (any text will do), making sure to utilize Markdown formatting symbols such as #, and run the cell to render the Markdown.
- Focus on the toolbar at the top of the Notebook:
There is a Play icon in the toolbar, which can be used to run cells. As we'll see later, however, it's handier to use the keyboard shortcut Shift + Enter to run cells. Right next to this is a Stop icon, which can be used to stop cells from running. This is useful, for example, if a cell is taking too long to run:
New cells can be manually added from the Insert menu:
Cells can be copied, pasted, and deleted using icons or by selecting options from the Edit menu:
Cells can also be moved up and down this way:
There are useful options under the Cell menu to run a group of cells or the entire Notebook:
- Experiment with the toolbar options to move cells up and down, insert new cells,and delete cells.
An important thing to understand about these Notebooks is the shared memory between cells. It's quite simple: every cell existing on the sheet has access to the global set of variables. So, for example, a function defined in one cell could be called from any other, and the same applies to variables. As one would expect, anything within the scope of a function will not be a global variable and can only be accessed from within that specific function.
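The shared-memory behavior can be sketched in plain Python, with comments standing in for hypothetical cell boundaries. A top-level name defined in one cell is visible in every later cell, while a name created inside a function is not:

```python
# --- Cell 1: define a global variable and a function ---
message = "hello"

def shout():
    # 'message' is visible here because it lives in the notebook's global scope
    local_note = message.upper()  # 'local_note' exists only inside this function
    return local_note

# --- Cell 2: any later cell can read the same globals ---
result = shout()
print(result)
```

Running "Cell 2" prints the uppercased global, but `local_note` is not defined outside `shout`.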
- Open the Kernel menu to see the selections. The Kernel menu is useful for stopping script executions and restarting the Notebook if the kernel dies. Kernels can also be swapped here at any time, but it is inadvisable to use multiple kernels for a single Notebook due to reproducibility concerns.
- The Notebook name will be displayed in the upper-left corner. New Notebooks will automatically be named Untitled.
- Change the name of your IPYNB Notebook file by clicking on the current name in the upper-left corner and typing the new name. Then, save the file.
- Close the current tab in your web browser (exiting the Notebook) and go to the Jupyter Dashboard tab, which should still be open. (If it's not open, then reload it by copying and pasting the HTTP link from the terminal.)
Since we didn't shut down the Notebook, we just saved and exited, it will have a green book symbol next to its name in the Files section of the Jupyter Dashboard and will be listed as Running on the right side next to the last modified date. Notebooks can be shut down from here.
- Quit the Notebook you have been working on by selecting it (checkbox to the left of the name) and clicking the orange Shutdown button: has many appealing features that make for efficient Python programming. These include an assortment of things, from methods for viewing docstrings to executing Bash commands. Let's explore some of these features together in this section.
Note
The official IPython documentation can be found online; it has details on the features we will discuss here and others.
- From the Jupyter Dashboard, navigate to the chapter-1 directory and open the chapter-1-workbook.ipynb file by selecting it. The standard file extension for Jupyter Notebooks is .ipynb, which was introduced back when they were called IPython Notebooks.
- Scroll down to Subtopic Jupyter Features in the Jupyter Notebook. We start by reviewing the basic keyboard shortcuts. These are especially helpful to avoid having to use the mouse so often, which will greatly speed up the workflow. Here are the most useful keyboard shortcuts. Learning to use these will greatly improve your experience with Jupyter Notebooks as well as your own efficiency:
- Shift + Enter is used to run a cell
- The Esc key is used to leave a cell
- The M key is used to change a cell to Markdown (after pressing Esc)
- The Y key is used to change a cell to code (after pressing Esc)
- Arrow keys move between cells (after pressing Esc)
- The Enter key is used to enter a cell
Moving on from shortcuts, the help option is useful for beginners and experienced coders alike. It can help provide guidance at each uncertain step.
Users can get help by adding a question mark to the end of any object and running the cell. Jupyter finds the docstring for that object and returns it in a pop-out window at the bottom of the app.
- Run the Getting Help section cells and check out how Jupyter displays the docstrings at the bottom of the Notebook. Add a cell in this section and get help on the object of your choice:
Tab completion can be used to do the following:
- List available modules when importing external libraries
- List available modules of imported external libraries
- Function and variable completion
This can be especially useful when you need to know the available input arguments for a module, when exploring a new library, to discover new modules, or simply to speed up workflow. They will save time writing out variable names or functions and reduce bugs from typos. The tab completion works so well that you may have difficulty coding Python in other editors after today!
- Click into an empty code cell in the Tab Completion section and try using tab completion in the ways suggested immediately above. For example, the first suggestion can be done by typing import (including the space after) and then pressing the Tab key:
- Last but not least of the basic Jupyter Notebook features are magic commands. These consist of one or two percent signs followed by the command. Magics starting with %% will apply to the entire cell, and magics starting with % will only apply to that line. This will make sense when seen in an example.
Scroll to the Jupyter Magic Functions section and run the cells containing
%lsmagic and %matplotlib inline:
%lsmagic lists the available options. We will discuss and show examples of some of the most useful ones. The most common magic command you will probably see is %matplotlib inline, which allows matplotlib figures to be displayed in the Notebook without having to explicitly use plt.show().
The timing functions are very handy and come in two varieties: a standard timer
(%time or %%time) and a timer that measures the average runtime of many iterations
(%timeit and %%timeit).
- Run the cells in the Timers section. Note the difference between using one and two percent signs.
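Outside the magic system, the same measurements can be made with Python's standard timeit module. This sketch mimics what %time (a single run) and %timeit (an average over many runs) report:

```python
import timeit

stmt = "sum(range(1000))"  # any small expression to benchmark

# one-shot timing, roughly what %time reports
single_run = timeit.timeit(stmt, number=1)

# averaged timing over many iterations, roughly what %timeit reports
average = timeit.timeit(stmt, number=1000) / 1000

print(single_run, average)
```

The averaged figure is usually more stable, which is why %timeit is preferred for short snippets.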
Even by using a Python kernel (as you are currently doing), other languages can be invoked using magic commands. The built-in options include JavaScript, R, Pearl, Ruby, and Bash. Bash is particularly useful, as you can use Unix commands to find out where you are currently (
pwd), what's in the directory (
ls), make new folders
(mkdir), and write file contents
(cat / head / tail).
- Run the first cell in the Using bash in the notebook section. This cell writes some text to a file in the working directory, prints the directory contents, prints an empty line, and then writes back the contents of the newly created file before removing it:
- Run the following cells containing only ls and pwd. Note how we did not have to explicitly use the Bash magic command for these to work.
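For reference, the same information returned by pwd and ls is also available from Python's standard library, which works on any platform:

```python
import os

cwd = os.getcwd()          # equivalent of pwd
entries = os.listdir(".")  # equivalent of ls

print(cwd)
print(entries)
```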
There are plenty of external magic commands that can be installed. A popular one is
ipython-sql, which allows for SQL code to be executed in cells.
- If you've not already done so, install ipython-sql now. Open a new terminal window and execute the following code:

pip install ipython-sql
- Run the %load_ext sql cell to load the external command into the Notebook:
This allows for connections to remote databases so that queries can be executed (and thereby documented) right inside the Notebook.
- Run the cell containing the SQL sample query:
Here, we first connect to the local sqlite source; however, this line could instead point to a specific database on a local or remote server. Then, we execute a simple
SELECT to show how the cell has been converted to run SQL code instead of Python.
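The same kind of query can also be run without the magic extension by using Python's built-in sqlite3 module. This hypothetical example (the table and values are made up for illustration) creates an in-memory table and selects from it:

```python
import sqlite3

# connect to an in-memory SQLite database (no file is created)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE houses (id INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO houses VALUES (?, ?)",
    [(1, 24.0), (2, 21.6)],
)

# a simple SELECT, analogous to a cell converted by %%sql
rows = conn.execute("SELECT id, price FROM houses ORDER BY id").fetchall()
print(rows)
```

The ipython-sql extension essentially wraps this kind of connection so the query can live directly in a cell.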
- Moving on to other useful magic functions, we'll briefly discuss one that helps with documentation. The command is
%version_information, but it does not come as standard with Jupyter. Like the SQL one we just saw, it can be installed from the command line with pip.
If not already done, install version_information now by running pip install version_information from the terminal.
- Run the cell that loads and calls the version_information command.

Another way to document dependencies is with the pipreqs tool, which generates a requirements.txt file from your .py files. Jupyter Notebooks can first be converted to scripts, for example with jupyter nbconvert --to=python chapter-1-notebook.ipynb. Then, if the .py files are inside a folder called chapter-1, you could do the following:
pipreqs chapter-1/
The resulting
requirements.txt file for
chapter-1-workbook.ipynb looks like this:
cat chapter-1/requirements.txt
matplotlib==2.0.2
numpy==1.13.1
pandas==0.20.3
requests==2.18.4
seaborn==0.8
beautifulsoup4==4.6.0
scikit_learn==0.19.0

These external libraries do not come standard with Python. The external data science libraries we'll be using in this book are NumPy, Pandas, Seaborn, matplotlib, scikit-learn, Requests, and Bokeh. Let's briefly introduce each.
- NumPy offers multi-dimensional data structures (arrays) on which operations can be performed far more quickly than with standard Python data structures.
- Pandas is Python's answer to the R DataFrame. It stores data in 2-D tabular structures where columns represent different variables and rows correspond to samples. Pandas provides many handy tools for data wrangling, such as filling in NaN entries and computing statistical descriptions of the data.
- Seaborn and matplotlib are visualization libraries; Seaborn builds on matplotlib to produce attractive statistical plots with little code.
- Scikit-learn is the most commonly used machine learning library. It offers top-of-the-line algorithms and a very elegant API where models are instantiated and then fit with data. It also provides data processing modules and other tools useful for predictive analytics.
- Requests is the go-to library for making HTTP requests.
- Bokeh is a library for creating interactive visualizations.

With the libraries introduced, let's move on to our first analysis, where we finally start working with a dataset.
- Open up the chapter-1 Jupyter Notebook and scroll to the Our First Analysis: The Boston Housing Dataset section.
- Run the cells to import the external libraries and set the plotting options:
For a nice Notebook setup, it's often useful to set various options along with the imports at the top. For example, the following can be run to change the figure appearance to something more aesthetically pleasing than the
matplotlib and Seaborn defaults:
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()  # apply the Seaborn plot style; see the Seaborn docs for more options

Now, let's do our first analysis together using the Jupyter Notebook.
So far, this chapter has focused on the features and basic usage of Jupyter. Now, we'll put this into practice and do some data exploration and analysis.
The dataset we'll look at in this section is the so-called Boston housing dataset. It contains US census data concerning houses in various areas around the city of Boston. Each sample corresponds to a unique area and has about a dozen measures. We should think of samples as rows and measures as columns. The data was first published in 1978 and is quite small, containing only about 500 samples.
Oftentimes, the first step in an analysis is loading the data into our environment, which is what we'll do now.
- In the chapter-1 Jupyter Notebook, scroll to Subtopic Loading the Data into Jupyter Using a Pandas DataFrame of Our First Analysis: The Boston Housing Dataset. The Boston housing dataset can be accessed from the sklearn.datasets module using the load_boston method.
- Run the first two cells in this section to load the Boston dataset and see the data structures type:
The output of the second cell tells us that it's a scikit-learn Bunch object. Let's get some more information about that to understand what we are dealing with.
- Run the next cell to import the base object from scikit-learn utils and print the docstring in our Notebook:
Reading the resulting docstring suggests that it's basically a dictionary, and can essentially be treated as such.
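To make the "basically a dictionary" point concrete, here is a minimal stand-in for scikit-learn's Bunch (an illustrative sketch, not the library's actual implementation): a dict whose keys can also be read as attributes:

```python
class Bunch(dict):
    """Minimal sketch of a dict with attribute-style access."""

    def __getattr__(self, key):
        # called only when normal attribute lookup fails
        try:
            return self[key]
        except KeyError as exc:
            raise AttributeError(key) from exc

# keys such as 'data' and 'target' mirror the Boston dataset fields
boston_like = Bunch(data=[[6.5], [7.1]], target=[24.0, 21.6])
print(boston_like.data)       # attribute access
print(boston_like["target"])  # ordinary dict access
```

Both access styles return the same objects, which is why the text says the Bunch can be treated as a dictionary.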
- Print the field names (that is, the keys to the dictionary) by running the next cell. We find these fields to be self-explanatory: ['DESCR', 'target', 'data', 'feature_names'].
- ⦠- MEDV Median value of owner-occupied homes in $1000's :Missing Attribute Values: None
Of particular importance here are the feature descriptions (under Attribute Information). We will use this as a reference during our analysis.
- Run the cell where Pandas is imported and the docstring is retrieved for pd.DataFrame:
The docstring reveals the DataFrame input parameters. We want to feed in boston['data'] for the data and use boston['feature_names'] for the headers.
- Run the next few cells to print the data, its shape, and the feature names:
Looking at the output, we see that our data is in a 2D NumPy array. Running the command boston['data'].shape returns the length (number of samples) and the number of features as the first and second outputs, respectively.
- Load the data into a Pandas DataFrame df by running the cell containing the following:

df = pd.DataFrame(data=boston['data'], columns=boston['feature_names'])
- Run the next cell to see the shape of the target:
We see that it has the same length as the features, which is what we expect. It can therefore be added as a new column to the DataFrame.
- Add the target variable to
dfby running the cell with the following:
df['MEDV'] = boston['target']
- To distinguish the target from our features, it can be helpful to store it at the front of our DataFrame. Move the target variable to the front of df by running the cell with the following:

y = df['MEDV'].copy()
del df['MEDV']
df = pd.concat((y, df), axis=1)  # reinsert MEDV as the first column
- Now that the data has been loaded in its entirety, let's take a look at the DataFrame.
We can do
df.head() or
df.tail() to see a glimpse of the data and
len(df) to make sure the number of samples is what we expect. Run the next few cells to see the head, tail, and length of
df:
Each row is labeled with an index value, as seen in bold on the left side of the table.By default, these are a set of integers starting at 0 and incrementing by one for each row.
- Printing df.dtypes will show the data type contained within each column. Run the next cell to see the datatypes of each column.
- The next thing we need to do is clean the data by dealing with any missing data, which Pandas automatically sets as NaN values. These can be identified by running df.isnull(), which returns a Boolean DataFrame of the same shape as df. To get the number of NaN's per column, we can do df.isnull().sum(). Run the next cell to calculate the number of NaN values in each column:
For this dataset, we see there are no NaN's, which means we have no immediate work to do in cleaning the data and can move on.
- To simplify the analysis, the final thing we'll do before exploration is remove some of the columns. We won't bother looking at these, and instead focus on the remainder in more detail. Remove some columns by running the cell that contains the following code:
for col in ['ZN', 'NOX', 'RAD', 'PTRATIO', 'B']: del df[col]
Since this is an entirely new dataset that we've never seen before, the first goal here is to understand the data. We've already seen the textual description of the data, which is important for qualitative understanding. We'll now compute a quantitative description.
- Navigate to Subtopic Data exploration in the Jupyter Notebook and run the cell containing df.describe(). This computes various properties for each column, such as the mean, standard deviation, minimum, and maximum. Going forward with the analysis, we will specify a set of columns to focus on.
- Run the cell where these "focus columns" are defined:
cols = ['RM', 'AGE', 'TAX', 'LSTAT', 'MEDV']
This subset of columns can be selected from df using square brackets. Display this subset of the DataFrame by running df[cols].head():
As a reminder, let's recall what each of these columns is. From the dataset documentation, we have the following:
- RM average number of rooms per dwelling
- AGE proportion of owner-occupied units built prior to 1940
- TAX full-value property-tax rate per $10,000
- LSTAT % lower status of the population
- MEDV Median value of owner-occupied homes in $1000's
To look for patterns in this data, we can start by calculating the pairwise correlations using
pd.DataFrame.corr.
- Calculate the pairwise correlations for our selected columns by running the cell containing the following code:
df[cols].corr()
This resulting table shows the correlation score between each set of values. Large positive scores indicate a strong positive (that is, in the same direction) correlation.As expected, we see maximum values of 1 on the diagonal.
The Pearson coefficient is defined as the covariance between two variables, divided by the product of their standard deviations:

ρ(X, Y) = cov(X, Y) / (σ_X · σ_Y)

The covariance, in turn, is defined as follows:

cov(X, Y) = (1/n) Σᵢ (xᵢ − x̄)(yᵢ − ȳ)

Here, n is the number of samples, xᵢ and yᵢ are the individual samples being summed over, and x̄ and ȳ are the means of each set.
Instead of straining our eyes to look at the preceding table, it's nicer to visualize it with a heatmap. This can be done easily with Seaborn.
- Run the next cell to initialize the plotting environment, as discussed earlier in the chapter. Then, to create the heatmap, run the cell containing code along these lines (the exact palette settings may differ in your Notebook):

ax = sns.heatmap(df[cols].corr(), cmap=sns.cubehelix_palette(20, light=0.95, dark=0.15))
plt.savefig('figures/chapter-1-boston-housing-corr.png', bbox_inches='tight', dpi=300)
We call
sns.heatmap and pass the pairwise correlation matrix as input. We use a custom color palette here to override the Seaborn default. The function returns aÂ
matplotlib.axes object which is referenced by the variable
ax. The final figure is then saved as a high resolution PNG to the
figures folder.
- For the final step in our dataset exploration exercise, we'll visualize our data using Seaborn's pairplot function.
- Visualize the DataFrame using Seaborn's pairplot function. Run the cell containing the following code:
sns.pairplot(df[cols], plot_kws={'alpha': 0.6}, diag_kws={'bins': 30})
Having previously used a heatmap to visualize a simple overview of the correlations, this plot allows us to see the relationships in far more detail. Looking at the histograms on the diagonal, we see the following:
- a: RM and MEDV have the closest shape to normal distributions.
- b: AGE is skewed to the left and LSTAT is skewed to the right (this may seem counterintuitive, but skew is defined in terms of where the mean is positioned relative to the bulk of the distribution).
- c: For MEDV, it looks like we have an upper-limit bin around $50,000. Recall that when we did df.describe(), the min and max of MEDV were 5k and 50k, respectively. This suggests that median house values in the dataset were capped at $50k.

Then, in the next chapter, you'll be more comfortable dealing with relatively complicated models.
- Scroll to Subtopic Introduction to predictive analytics in the Jupyter Notebook and look just above at the pairplot we created in the previous section. In particular, look at the scatter plots in the bottom-left corner.
- Draw scatter plots along with the linear models by running the cell that contains the following:

fig, ax = plt.subplots(1, 2)
sns.regplot('RM', 'MEDV', df, ax=ax[0], scatter_kws={'alpha': 0.4})
sns.regplot('LSTAT', 'MEDV', df, ax=ax[1], scatter_kws={'alpha': 0.4})
- Seaborn can also be used to plot the residuals for these relationships. Plot the residuals by running the cell containing code along these lines (a sketch using sns.residplot; the exact labels in your Notebook may differ):

fig, ax = plt.subplots(1, 2)
sns.residplot('RM', 'MEDV', df, ax=ax[0], scatter_kws={'alpha': 0.4})
sns.residplot('LSTAT', 'MEDV', df, ax=ax[1], scatter_kws={'alpha': 0.4})
Each point on these residual plots is the difference between that sample (y) and the linear model prediction ( ŷ). Residuals greater than zero are data points that would be underestimated by the model. Likewise, residuals less than zero are data points that would be overestimated by the model.
Patterns in these plots can indicate suboptimal modeling. In each preceding case, we see diagonally arranged scatter points in the positive region. These are caused by the $50,000 cap on MEDV. The RM data is clustered nicely around 0, which indicates a good fit. On the other hand, LSTAT appears to be clustered lower than 0.
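The residual computation itself is simple; this sketch makes the sign convention explicit:

```python
def residuals(observed, predicted):
    """Residual = observed - predicted.

    Positive values mean the model underestimates the sample;
    negative values mean it overestimates.
    """
    return [y - y_hat for y, y_hat in zip(observed, predicted)]

print(residuals([3.0, 5.0], [2.0, 6.0]))
```

A residual plot is just these differences scattered against the feature values.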
- Moving on from visualizations, the fits can be quantified by calculating the mean squared error. We'll do this now using scikit-learn. Define a function that calculates the line of best fit and the mean squared error by running the cell that contains the get_mse function definition. Note that the one-dimensional feature array must be reshaped before being passed to scikit-learn; this is only necessary when modeling a one-dimensional feature set.
- Call the get_mse function for both RM and LSTAT by running the cell containing the following:

get_mse(df, 'RM')
get_mse(df, 'LSTAT')

Comparing the MSE values, the error turns out to be slightly lower for LSTAT. Looking back at the scatter plots, however, it appears we might have even better success using a polynomial model for LSTAT. In the next activity, we'll build a third-order polynomial model with scikit-learn.
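Under the hood, fitting a line and scoring it with MSE involves nothing exotic. This pure-Python sketch (not the Notebook's scikit-learn-based get_mse) shows the idea for a single feature:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def mean_squared_error(xs, ys):
    """Fit a line, then average the squared residuals."""
    slope, intercept = fit_line(xs, ys)
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys)) / len(xs)

# a perfectly linear toy example has zero error
print(mean_squared_error([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
```

Lower MSE means the line passes closer, on average, to the observed points.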
Forgetting about our Boston housing dataset for a minute, consider another real-world situation where you might employ polynomial regression. The following example is modeling weather data. In the following plot, we see temperatures (lines) and precipitations (bars) for Vancouver, BC, Canada:
Any of these fields are likely to be fit quite well by a fourth-order polynomial. This would be a very valuable model to have, for example, if you were interested in predicting the temperature or precipitation for a continuous range of dates.
You can find the data source for this here:.
Shifting our attention back to the Boston housing dataset, we would like to build a third order.
Use scikit-learn to fit a polynomial regression model to predict the median house value (MEDV), given the LSTAT values. We are hoping to build a model that has a lower meansquared error (MSE).
- Scroll to the empty cells at the bottom of
Subtopic Introduction to Predictive Analysis in your Jupyter Notebook.These will be found beneath the linear-model MSE calculation cell under the Activity heading.
Note
You should fill these empty cells in with code as we complete the activity. You may need to insert new cells as these become filled up; please do so as needed!
- Given that our data is contained in the DataFrame
df, we will fist pull out our dependent feature and target variable using the following:
y = df['MEDV'].values x = df['LSTAT'].values.reshape(-1,1)
This is identical to what we did earlier for the linear model.
- Check out what x looks like by printing the fist few samples with print(x[:3]) :
Notice how each element in the array is itself an array with length 1. This is what
 reshape(-1,1) does, and it is the form expected by scikit-learn.
- Next, we are going to transform x into "polynomial features". The rationale for this may not be immediately obvious but will be explained shortly.Import the appropriate transformation tool from scikit-learn and instantiate the third-degree polynomial feature transformer:
from sklearn.preprocessing import PolynomialFeatures poly = PolynomialFeatures(degree=3)
- At this point, we simply have an instance of our feature transformer. Now, let's use it to transform the LSTAT feature (as stored in the variable x) by running the
fit_transformmethod.
Build the polynomial feature set by running the following code:
x_poly = poly.fit_transform(x)
x_polylooks like by printing the fist few samples with print(
x_poly[:3]) .
Unlike x, the arrays in each row now have length 4, where the values have been calculated as x0, x1, x2 and x3.
We are now going to use this data to fit a linear model. Labeling the features as a, b, c, and d, we will calculate the coefficients α0, α1, α2, and α3 and of the linear model:
We can plug in the definitions of a, b, c, and d, to get the following polynomial model, where the coefficients are the same as the previous ones:
- We'll import the Linear Regression class and build our linear classification model the same way as before, when we calculated the MSE. Run the following:
from sklearn.linear_model import LinearRegression clf = LinearRegression() clf.fit(x_poly, y)
- Extract the coefficients and print the polynomial model using the following code:
a_0 = clf.intercept_ + clf.coef_[0] #intercept a_1, a_2, a_3 = clf.coef_[1:] #other coefficients msg = 'model: y = {:.3f} + {:.3f}x + {:.3f}x^2 + {:.3f}x^3'\.format(a_0, a_1, a_2, a_3) print(msg)
To get the actual model intercept, we have to add the
intercept_ and coef_[0]attributes. The higher-order coefficients are then given by the remaining values ofÂ
coef_.
- Determine the predicted values for each sample and calculate the residuals by running the following code:
y_pred = clf.predict(x_poly) resid_MEDV = y - y_pred
- Print some of the residual values by running print(
resid_MEDV[:10]) :
We'll plot these soon to compare with the linear model residuals, but first we will calculate the MSE.
- Run the following code to print the MSE for the third-order polynomial model:
from sklearn.metrics import mean_squared_error error = mean_squared_error(y, y_pred) print('mse = {:.2f}'.format(error))
Â
As can be seen, the MSE is significantly less for the polynomial model compared to the linear model (which was 38.5). This error metric can be converted to an average error in dollars by taking the square root. Doing this for the polynomial model, we find the average error for the median house value is only $5,300.
Now, we'll visualize the model by plotting the polynomial line of best fit along with the data.
- Plot the polynomial model along with the samples by running the following:');
Here, we are plotting the red curve by calculating the polynomial model predictions on an array of x values. The array of x values was created using
np.linspace, resulting in 50 values arranged evenly between 2 and 38.
Now, we'll plot the corresponding residuals. Whereas we used Seaborn for this earlier, we'll have to do it manually to show results for a scikit-learn model. Since we already calculated the residuals earlier, as reference by the
resid_MEDV variable, we simply need to plot this list of values on a scatter chart.
- Plot the residuals by running the following:
fig, ax = plt.subplots(figsize=(5, 7)) ax.scatter(x, resid_MEDV, alpha=0.6) ax.set_xlabel('LSTAT') ax.set_ylabel('MEDV Residual $(y-\hat{y})$') plt.axhline(0, color='black', ls='dotted');
Compared to the linear model LSTAT residual plot, the polynomial model residuals appear to be more closely clustered around y - ŷ = 0. Note that y is the sample MEDV and ŷ is the predicted value. There are still clear patterns, such as the cluster near x = 7 and y = -7 that indicates suboptimal modeling.
Having successfully modeled the data using a polynomial model, let's finish up this chapter by looking at categorical features. In particular, we are going to build a set of categorical features and use them to explore the dataset in more detail.
Often, we find datasets where there are a mix of continuous and categorical fields. In such cases, we can learn about our data and find patterns by segmenting the continuous variables with the categorical fields.
Â).
)..
- Scroll up to the pair plot in the Jupyter Notebook where we compared MEDV, LSTAT, TAX, AGE, and RM:
Take a look at the panels containing AGE. As a reminder, this feature is defined as the proportion of owner-occupied units built prior to 1940. We are going to convert this feature to a categorical variable. Once it's been converted, we'll be able to replot this figure with each panel segmented by color according to the age category.
- Scroll down to Subtopic
Building and exploring categorical features and click into the first cell. Type and execute the following to plot the AGEÂ());
Note that we set
kde_kws={'lw': 0} in order to bypass plotting the kernel density estimate in the preceding figure.
Looking at the plot, there are very few samples with low AGE, whereas there are far more with a very large AGE. This is indicated by the steepness of the distribution on the far right-hand.
 We'll use the places where the red horizontal lines intercept the distribution as a guide to split the feature into categories: Relatively New, Relatively Old, and Very Old.
- Setting the segmentation points as 50 and 85, create a new categorical feature by chapters.
- Check on how many samples we've grouped into each age category by typingÂ
df.groupby('AGE_category').size()Â into a new cell and running
Looking at the result, it can be seen that two class sizes are fairly equal, and the Very Old group is about 40% larger. We are interested in keeping the classes comparable in size, so that each is well-represented and it's straightforward to make inferences from the analysis.
Note
It may not always be possible to assign samples into classes evenly, and in real-world situations, it's very common to find highly imbalanced classes. In such cases, it's important to keep in mind that it will be difficult to make statistically significant claims with respect to the under-represented class. Predictive analytics with imbalanced classes can be particularly difficult. The following blog post offers an excellent summary on methods for handling imbalanced classes when doing machine learning:.
Let's see how the target variable is distributed when segmented by our new featureÂ
AGE_category.
- Make a violin plot by running the following code:
sns.violinplot(x='MEDV', y='AGE_category', data=df, order=['Relatively New', 'Relatively Old', 'Very Old']);.
Â
Â
- Redo the violin plot adding the inner='point' argument to the sns.violinplot call:.
- Re-do});
Looking at the histograms, the underlying distributions of each segment appear similar for RM and TAX. The LSTAT distributions, on the other hand, look more distinct. We can focus on them in more detail by again using a violin plot.
- Make a violin plot comparing the LSTAT distributions for each
AGE_category segment:.
In this chapter, you have seen the fundamentals of data analysis in Jupyter.
We began with usage instructions and features of Jupyter such as magic functions and tab completion. Then, transitioning to data-science-specific. | https://www.packtpub.com/product/applied-deep-learning-with-python/9781789804744 | CC-MAIN-2020-40 | refinedweb | 5,837 | 62.48 |
Clock Project
Here is my Clepsydra project, also known as "Mom's Clock". It's based on 3,700-year-old water clocks, in this case using 1" steel balls to tell the time. For example, at 3:00:00pm three balls roll out and skitter noisily across the floor. This project includes 13 ball bearings, 1,474 lines of Arduino C++ code, and several feet of wire.
The two parts of the clock are the control box and the raceway. The menu-driven control box contains all the electronics, and the V-shaped raceway has a servo-controlled ball release mechanism. The control box and raceway are connected by a simple stereo cable.
Software
You may set alarms for any time between now and December 31st, 2099, such as one's nth birthday, announced by your choice of 1 to 13 balls, with an optional repeat interval (for a birthday, one year). This clock can also use ship's bells to announce the half-hours in each 4- or 8-hour shift, or choose its own random times just for fun. There's an optional "quiet time", by default 10pm to 5am, when no balls are released.
See some sample C++ code.
Control Box
These pictures show the control electronics before (components), during (soldered together), and after (in the box!).
Components, row by row:
stereo cable, on/off switch and key, battery,
knob and rotary switch, battery charger,
Arduino Nano,
real-time clock with its own battery, and a two-line display with an RGB backlight.
Here is everything wired together, with all components in their sockets. The kludge board (upper left, beige) is a rectifier circuit to prevent current backflow from the battery through the Nano (correcting a rare design flaw).
Finally assembled box! The rotary switch brings up menu items and values, with click-to-select.
Raceway
The raceway is basically an inclined plane where each ball's potential energy is efficiently converted into kinetic energy with a minimum of friction. The height of the release gate can be changed, and the default 4" is sufficient for at least 10' of rolling across a hard floor.
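As a sanity check on that claim, here is a rough back-of-the-envelope estimate of the ball's exit speed (a sketch, not a measurement from the actual build), assuming a solid steel ball rolling without slipping:

```python
import math

def exit_speed(drop_m, rolling=True):
    """Speed of a ball after descending drop_m meters down a ramp.

    For a solid sphere rolling without slipping, 2/7 of the potential
    energy goes into rotation, so v = sqrt(10*g*h/7); a frictionless
    slide would instead give v = sqrt(2*g*h).
    """
    g = 9.81  # m/s^2
    if rolling:
        return math.sqrt((10.0 / 7.0) * g * drop_m)
    return math.sqrt(2.0 * g * drop_m)

drop = 4 * 0.0254  # the 4-inch default gate height, in meters
print("exit speed ~ %.2f m/s" % exit_speed(drop))  # roughly 1.2 m/s
```

At about 1.2 m/s onto a hard floor with little rolling resistance, 10 feet of travel is quite plausible.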
Sample Arduino C++ Code
Here is a sample of the code, all of which I wrote, with some help from StackExchange & Google:
- All alarms are represented in an array
myTimes, as defined by the
timeElement structure.
- The
checkSetTimes procedure runs every 333 milliseconds and checks whether the current second has reached or passed any non-empty alarm time; if so (and we're not in quiet time), it rolls out the specified number of balls for that alarm type. Any subsequent time for that alarm is then calculated and stored, such as the next bell time.
- The “Existing Timers” menu selection invokes the
showTimesprocedure to display the currently set alarms’ types and times – you scroll through them by rotating the switch, and exit with a click. This procedure also calls a
timeout function, which returns true only after 12 seconds have elapsed since your most recent switch change, so the procedure will eventually exit after you stop scrolling. Since you may be fiddling with the switch for more than one second,
showTimescalls
checkSetTimeseach time through its loop to ensure all alarms are seen.
enum timeTypes { tempty = 0, talarm, trepeat, trandom, tbells, tquietude };

typedef struct timeElement {
  timeTypes xTimeType = tempty;
  DateTime xTime = tZero;
  TimeSpan xTS = tsZero;
};

timeElement myTimes[maxTimes];
void checkSetTimes() {
  boolean oneOfUs = false;
  boolean rollable = false;
  for (int j = 0; j < maxTimes; j++) {
    oneOfUs = // is this an active alarm type?
      (myTimes[j].xTimeType == talarm) ||
      (myTimes[j].xTimeType == trepeat) ||
      (myTimes[j].xTimeType == trandom) ||
      (myTimes[j].xTimeType == tbells);
    if (oneOfUs) {
      if (checkForNow(myTimes[j])) { // is it time for this alarm?
        rollable = !inTimeout(myTimes[j]); // are we in quiet time?
        switch (myTimes[j].xTimeType) {
          case talarm:
            sendBalls(myTimes[j].xTS.totalseconds());
            myTimes[j].xTimeType = tempty;
            break;
          case trepeat:
            if (rollable) { sendOneBall("Repeat"); }
            myTimes[j].xTime = myTimes[j].xTime + myTimes[j].xTS;
            break;
          case trandom:
            if (rollable) { sendOneBall("Random"); }
            myTimes[j].xTime = DateTime(myTimes[j].xTime.unixtime() +
                                        random(rMinMinute, rMaxMinute) * 60);
            break;
          case tbells:
            if (rollable) { sendBellBalls(myTimes[j].xTS.totalseconds()); }
            myTimes[j].xTime = DateTime(myTimes[j].xTime.unixtime() + halfHour);
            myTimes[j].xTS = ((myTimes[j].xTS.totalseconds() + 1) % numBells) + 1;
            break;
          default:
            if (rollable) { sendOneBall("Default"); }
        }
      }
    }
  }
}
char* menuStrings[MENUSIZE] = {"Just the Clock", "Reminder", "Repeat", "Random",
                               "Bells", "Quiet Time", "Settings", "Existing Timers"};

void showTimes() {
  boolean done = false;
  boolean foundOne = false;
  while (!done && !timeout()) {
    checkSetTimes();
    if (count != old_count) { // the switch was rotated
      inc = (count > old_count) ? 1 : -1;
      old_count = count;
      updateCurIndex(inc);
    }
    foundOne = false;
    for (int i = 0; i < maxTimes; i++) {
      if (myTimes[i].xTimeType != tempty) {
        foundOne = true;
        break;
      }
    }
    if (foundOne) {
      if (myTimes[curIndex].xTimeType != tempty) {
        // display the current alarm type & time
        showEvent(myTimes[curIndex]);
        do {
          checkSetTimes();
          if (captureSwitchState()) { // the switch was clicked
            justClick = shortClick = longClick = false;
            done = true;
          }
        } while ((count == old_count) && !timeout() && !done);
      } else {
        updateCurIndex(inc);
      }
    } else {
      lcd.setCursor(0, 1);
      lcd.print("No timers set!");
      delay(comprehensionTime);
      done = true;
    }
  }
}
SpeakLight Project
Here is my SpeakLight project, which uses Android speech recognition to control a lamp's color.
The SpeakLight project has two components, an Android app (because iPhones are a closed system) and a remote Arduino-controlled light source. When you say to the app, for example, "aquamarine", the light and the screen change to that color. The SpeakLight app also recognizes "brighter", "dimmer", "rainbow", and a few other commands.
SpeakLight app screenshots 1-2-3!
The app has a list of 3,214 color names with their RGB color values. It listens for a spoken color name and sends that color’s value through a Bluetooth connection to the light. The light source was initially a simple tri-color LED kit, then a brighter I2C-controlled LED kit, and is now a 75 watt DMX-controlled theater light.
This is the first version of the Arduino lamp and its color pattern on the ceiling -- as you can see, the aquamarine color is there but not uniform.
This is the second SpeakLight output device, a BlinkM MaxM, the brightest LED kit that SparkFun had!
Third version, with the PAR64 can light. This is the color wash I was looking for!
The SpeakLight app required 398 lines of Java code (and weeks of Android research), and the Arduino sketch merely 79 lines of C++.
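The 79-line Arduino sketch isn't listed here. Its core job on the receiving end is to parse the "#RRGGBB" string that arrives over Bluetooth into three channel values; that logic, sketched in Python for brevity (the function name is mine, not from the project), looks like:

```python
def parse_hex_color(msg):
    """Split a '#RRGGBB' string (as sent over Bluetooth) into R, G, B ints."""
    if not msg.startswith('#') or len(msg) != 7:
        raise ValueError('expected #RRGGBB, got %r' % msg)
    # each pair of hex digits is one 0-255 channel value
    return tuple(int(msg[i:i + 2], 16) for i in (1, 3, 5))

print(parse_hex_color('#7FFFD4'))  # aquamarine -> (127, 255, 212)
```

On the Arduino side the same split would be done on the bytes read from the Bluetooth serial port before driving the LED or DMX channels.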
The Android built-in
onActivityResult function (see below) receives the results of all requested events. In my case, if it’s a speech event, this function parses all recognized phrases and compares each phrase to each of the known color names. If there’s a match, that color’s hex value is sent to the Arduino, which magically (using electrons) changes the lamp’s color.
SpeakLight onActivityResult Code
String[] colors = { "Acadia#1B1404", "Acapulco#7CB0A1", "Acid Green#A8BB19",
                    "Acorn#6A5D1B", "Aero Blue#C9FFE5", "African Violet#B284BE", ... };

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    String result = "", firstResult = "", colorName = "";
    boolean colorFound = false;
    int iColor = 0;
    String[] cNameVal;
    button.setText("Working...");
    if (requestCode == SPEECH_REQUEST_CODE) {
        if (resultCode == RESULT_OK) {
            ArrayList<String> matches = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (matches.size() == 0) {
                button.setText("No matches!");
            } else {
                // most likely matched phrase
                firstResult = matches.get(0);
                for (int j = 0; j < matches.size(); j++) { // each phrase in turn
                    result = matches.get(j).toLowerCase();
                    if (result.contains("grey")) {
                        result = result.replace("grey", "gray");
                    }
                    for (int i = 0; i < colors.length; i++) { // color name [0] and value [1]
                        cNameVal = colors[i].split("#");
                        if (result.equals(cNameVal[0].toLowerCase())) {
                            colorFound = true;
                            // prepend octothorpe
                            sColor = "#" + cNameVal[1];
                            iColor = parseColor(sColor);
                            colorName = cNameVal[0];
                            button.setText(colorName);
                            break;
                        }
                    }
                    if (colorFound) { break; }
                }
            }
        } else {
            button.setText("Please try again");
        }
    }
    if (colorFound) {
        // send color hex value bytes to Arduino through
        // previously-established Bluetooth connection
        ArduinoSend(sColor);
    } else {
        button.setText("'" + firstResult + "' is not a color I know!");
        beDoop(errsound);
    }
    super.onActivityResult(requestCode, resultCode, data);
}
XML Project
The task was to collect revision information as XML records from 477 legacy documents (most only in PDF format, some in Word doc or docx) in 44 collections over 5 releases, and also to convert all PDF files to Word files.
The resulting historical information is up on MSDN (example).
Each document (PDF or otherwise) has a Revision Summary table containing a set of release dates, revision numbers, and revision classes.
Rather than do this by hand, I wrote some PowerShell and Word Basic scripts to extract the release information from each file’s Revision Summary table and save it as XML records, one per release per file. For each revision source directory:
- The first PowerShell script (PDFtoDoc.ps1) goes through a given directory and opens each PDF file in Word, then saves it as a doc file.
- A follow-on PowerShell script (WordExtractDir.ps1) opens each new doc or legacy docx file in a given directory, and executes a Word macro (ReturnDocInfo, in ReturnDocInfo.vba) to extract and incrementally write release information records to each document's own XML file.
- The Word macro gets the document short name (MS-XYZ) and its title, then finds the revision table and reads the last row. The Word macro then calls a subroutine (OutputDocInfo) that writes out XML records for each entry to that document's MS-XYZ.xml file.
After all the records were extracted from all documents for all releases, I then used Bash with its sed and awk tools to post-process the 477 document XML files, also creating two summary XML files (collection structure and contents).
I handed the set of 479 post-processed XML files to a subsequent tool writer who converted them into the format seen on MSDN.
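The post-processing and summary-generation step isn't shown. A minimal sketch of what building the contents summary might look like, done here in Python with element and attribute names that are my assumptions rather than the project's actual schema:

```python
import glob
import xml.etree.ElementTree as ET

def build_contents_summary(xml_dir, out_path):
    """Collect the document name and title from each per-document XML file
    into a single contents-summary file (element names are illustrative)."""
    root = ET.Element('Documents')
    for path in sorted(glob.glob(xml_dir + '/MS-*.xml')):
        doc = ET.parse(path).getroot()
        entry = ET.SubElement(root, 'Document')
        entry.set('name', doc.get('name', ''))
        entry.set('title', doc.get('title', ''))
    ET.ElementTree(root).write(out_path, encoding='utf-8',
                               xml_declaration=True)
```

In the real pipeline this role was played by Bash with sed and awk; the point here is only the shape of the merge, not the exact tooling.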
PDFtoDoc.ps1 (PowerShell)
###
# 1. Opens each .pdf file in the given path in Word
# 2. Word saves that file in doc format
#
# Usage: PDFtoDoc directory-path
###
$documents_path = $args[0]  # for example, c:\MyProtocolPDFs\Release2014
$saveasdoc = 0              # SaveAs format identifier
$word_app = New-Object -ComObject Word.Application
Get-ChildItem -Filter *.pdf -Path $documents_path | ForEach-Object {
    $document = $word_app.Documents.Open($_.FullName)
    $doc_filename = "$($_.DirectoryName)\$($_.BaseName).doc"
    Write-Host $doc_filename
    $document.SaveAs([ref] $doc_filename, [ref] $saveasdoc)
    $document.Close()
}
$word_app.Quit()
WordExtractDir.ps1 (PowerShell)
##
# 1. Starts in a directory that contains either doc or docx files
# 2. Determines which type is in this directory
# 3. Opens each doc/docx file in Word
# 4. Calls a Word macro that extracts the revision info and saves it into doc-specific XML files
#
# Usage: WordExtractDir directory-path
##
$dir = $args[0]
echo $dir
$i = 0
$docs = 0
$docxs = 0
$extractMacro = "Normal.NewMacros.ReturnDocInfo"
$word = New-Object -ComObject Word.Application
$word.visible = $false
echo ""
Get-ChildItem -path $dir -recurse -include "*.doc"  | % { $docs = $docs + 1 }
Get-ChildItem -path $dir -recurse -include "*.docx" | % { $docxs = $docxs + 1 }
if ($docs -gt 0) {
    $type = "doc"
    $num = $docs
} else {
    $type = "docx"
    $num = $docxs
}
Get-ChildItem -path $dir -recurse -include "*.$type" | % {
    $doc = $word.documents.open($_.fullname)
    $results = $word.run($extractMacro)
    $doc.close()
    echo ([string] ($num - $i) + " - " + $_.Name)  # counts down
    $i = $i + 1
}
$word.Quit()
ReturnDocInfo.vba (Visual Basic for Applications)
Sub ReturnDocInfo()
    Dim dirDate As String ' Release subdirectory name
    ' ...
    textStream.writeline (sAny)
    textStream.writeline ("  <Releases>")
    End If
    ' ...
    textStream.writeline (sAny)
    sAny = sPDFLoc + quote + sBasePath + "\Downloads\[" + docName + "].pdf" + quote + " />"
    textStream.writeline (sAny)
    sAny = sWordLoc + quote + sBasePath + "\Documents\[" + docName + "].doc" + quote + " />"
    textStream.writeline (sAny)
    sAny = "  </Release>"
    textStream.writeline (sAny)
    If Not more Then
        sAny = " </Releases>"
        textStream.writeline (sAny)
        sAny = "</Protocol>"
        textStream.writeline (sAny)
    End If
End Sub
Unity Project
This tiny game demonstrates the primary features of a video game, where you control the player’s movements and the Unity game engine determines what happens in a collision.
The object of this game is to roll the ball into the cubes, either by tilting the device or pushing the ball with a finger. This version is geared toward an Android phone, but it can run on any device with a Unity driver.
When a cube is touched by the ball, the C# code in
PlayerController.cs performs a simple game action – playing a sound (Homer Simpson saying "Woo-hoo!") and destroying that cube – see the code listing below.
The cubes have a different picture on each face, and the playing field is Easter Island. The ball is actually white with a reddish-purple light source shining on it, and the game camera follows the ball rather than staying put over the game field. The walls are now glued down – in the first iteration, as the ball and each wall weigh 1 Unity mass unit, a ball bounce knocked that wall clean off!
PlayerController.cs
using UnityEngine;
using System.Collections;

public class PlayerController : MonoBehaviour
// this controller code is attached to the player object (in this case the ball)
{
    public float speed = 300.0f;        // overall game running speed factor
    public float touchSpeed = 1.0f;     // touch input speed factor
    public GUIText countTxt;            // display string for cubes-to-go -- GUIText variables map to Unity scene text variables
    public GUIText winTxt;              // display string for winning!
    public int maxcount = 5;            // number of target cubes
    private int count;                  // count of cubes-to-go
    private string xaxe = "Horizontal"; // Unity-specific axis names, for example joystick axes
    private string yaxe = "Vertical";
    private Vector3 gohere;             // next ball movement's vector
    private GameObject go;              // each cube is a game object
    float horiz, vert;                  // incoming movement axis values

    void Start () // runs once at startup
    {
        winTxt.enabled = false;
        SetCountTxt ();
        if (Application.platform == RuntimePlatform.Android) {
            Screen.orientation = ScreenOrientation.LandscapeLeft;
            Screen.sleepTimeout = SleepTimeout.NeverSleep;
            Screen.autorotateToLandscapeLeft = true;
        }
        for (int i = 0; i < maxcount; i++) { // instantiate cubes
            go = (GameObject)Instantiate (Resources.Load ("MyCube"),
                new Vector3 (Random.Range (-9.0F, 9.0F), 1.0f, Random.Range (-9.0F, 9.0F)),
                Quaternion.identity);
        }
    }

    void Update () // runs once per game frame, such as 30fps
    {
        if (Input.GetKey (KeyCode.Escape)) {
            Application.Quit ();
        }
        // check whether there's been a moving touch on the screen; if so, move the ball correspondingly
        if (Input.touchCount > 0 && Input.GetTouch (0).phase == TouchPhase.Moved) {
            Vector2 touchDeltaPosition = Input.GetTouch (0).deltaPosition * touchSpeed;
            moveMe (touchDeltaPosition.x, touchDeltaPosition.y);
        }
    }

    void FixedUpdate () // runs once per physics time tick, by default 0.02 seconds
    {
        if (SystemInfo.deviceType == DeviceType.Handheld) {
            gohere = Input.acceleration; // acceleration vector from mobile device's built-in gyro
            if (gohere.sqrMagnitude > 1)
                gohere.Normalize ();
            horiz = gohere.x;
            vert = gohere.y;
        } else {
            horiz = Input.GetAxis (xaxe); // use the default Unity input device, possibly a mouse or joystick
            vert = Input.GetAxis (yaxe);
        }
        moveMe (horiz, vert);
    }

    void moveMe (float horiz, float vert) // adds the movement vector to the current ball vector
    {
        Vector3 moveme = new Vector3 (horiz, 0.0f, vert);
        GetComponent<Rigidbody> ().AddForce (moveme * speed * Time.deltaTime);
    }

    void OnTriggerEnter (Collider other) // triggered whenever the ball collides with another object
    {
        if (other.name.Contains ("Target")) { // was the object one of the Target cubes?
            GetComponent<AudioSource> ().Play (); // play happy sound (Homer Simpson saying "Woo-hoo!")
            Destroy (other.gameObject); // obliterate that cube
            count++;
            SetCountTxt ();
        }
        // "else" could be one of the walls, if we wanted the walls to actively influence the game
    }

    void SetCountTxt ()
    {
        countTxt.text = (maxcount - count) + " mini-cubes to go!";
        if (count >= maxcount) { // all cubes are gone
            countTxt.enabled = false; // disable counting text visibility
            winTxt.enabled = true;    // show winning text (its value is set in the Unity scene editor, in this case "Nice going!")
        }
    }
}
Predicting Movie Ratings: NLP Tools Are What Film Studios Need
- Predict the success of a new film as well as box offices using Natural Language Processing (NLP) techniques
- Use movie viewers' comments to predict the movie ratings
- List of sources for movie reviews include the social media sources of movie-related data
- Sentiment analysis of movie reviews and opinions shared on social media platforms can help marketers to predict ratings
- Analysis of movie reviews can be also used to classify movies into different genres and to improve the movie recommendation systems
Movie Reviews and Ratings
Everyone knows how profitable the film industry is. According to PwC, global box office revenue amounted to about 38 billion U.S. dollars in 2015. People are now spoiled for choice: North America alone released more than 690 movies in 2015. Yet only a small number of movies have a long lifetime. Most of them climb to the top of the list quickly, but are soon dethroned, as new arrivals are never long in coming.
Studios understand that the competition is high and that their movies may not meet box-office expectations (e.g. "Superman Returns", released in 2006). They work hard to enhance the likelihood of success, and movie industry players are increasingly interested in gaining access to predictions of success and failure. Some studies show that there is a relation between a movie's ratings and its subsequent sales. For instance, Gilad Mishne and Natalie Glance showed that there is a good correlation between references to movies in blog posts and the financial success of those movies.
People tend to rely on the opinions of others when they look for a movie to watch. Given that not only movie critics but also ordinary people share their reviews on the Internet, such reviews can become a fertile source for predicting movies' ratings and box-office returns. Natural language processing (NLP) tools are a good choice for conducting movie review analysis. This article focuses on how these tools can be used for the analysis and on the challenges that their developers face.
Movie Review Data
There are many websites that specialize in the reviews of movies and TV shows. Rotten Tomatoes and IMDb are some of the most popular review hubs. Movie reviews are not limited to these websites: people post their opinions on movie forums, publish their reviews in online magazines and journals. So, researchers get an ocean of extractable data for free.
Posts from social media (e.g. Twitter) should also be considered, since around 6,000 tweets are sent every second. Many of these tweets are movie-related. The role of Twitter as a source of data was demonstrated by Bernard J. Jansen et al. in a study investigating the power of tweets as electronic word of mouth.
It is not hard to find tweets that mention movies, as people use hashtags to make their posts searchable. But researchers don't need to search for tweets manually: they can take full advantage of automatic search thanks to Twitter's Search and Streaming APIs. One more option for getting the desired data is to purchase it from a reseller.
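Once the tweets are collected, the first processing step is usually a simple filter that keeps only movie-related messages. A toy sketch of that filter (the hashtag list and sample data are made up):

```python
# Keep only messages that mention a tracked movie by hashtag.
MOVIE_TAGS = {'#lalaland', '#fiftyshadesdarker', '#moonlight'}

def movie_tweets(tweets):
    """Return tweets containing at least one tracked movie hashtag."""
    return [t for t in tweets
            if MOVIE_TAGS & {w.lower() for w in t.split() if w.startswith('#')}]

sample = ["Loved it! #LaLaLand", "Nice weather today", "Meh. #FiftyShadesDarker"]
print(movie_tweets(sample))  # keeps the first and third tweets
```

A real pipeline would apply the same idea to the JSON payloads returned by the Twitter APIs rather than to bare strings.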
YouTube has the potential to become a rich bank of data for researchers, as well. Users actively express their opinions about movies in comments under movie trailers (official or not). See below for some of the comments posted under the official La La Land trailer.
Once a movie is available in theaters, YouTube vloggers and other YouTubers upload their reviews to their channels. These reviews can also be used by researchers: speech recognition software can transform the speech into text, which can then be analyzed with linguistic tools. And it goes without saying that comments posted under such reviews should be utilized by experts, too.
Why Natural Language Processing
It is clear that all these movie reviews cannot be analyzed without computers. But machines are built to deal with highly structured languages, which is why they cannot understand the context of natural language (the kind spoken by humans) on their own.
Technological advances changed the situation: new approaches and algorithms gave computers a chance to understand natural speech. For instance, machine learning and natural language processing (NLP) make use of different techniques (e.g. Bayesian and hidden Markov model-based ones) to recognize speech and "understand" natural speech.
What is NLP used for now? For instance, it's employed in question-and-answer systems like Cortana and Siri. Summarizers based on NLP can process texts to create short summaries. Text Summarizer is one such solution: users can input the URL of the page they want to summarize or paste text directly into the text box. NLP tools are also used to identify languages, recognize named entities, and search for related facts.
Sentiment analysis is one of the major areas of NLP. It helps machines detect the general sentiment of a text message. Tools can easily detect emotions when a video or an audio recording is analyzed; the task is a little more difficult when it comes to text. Marketers often use NLP tools in opinion mining to learn what people think about a product or service. Evidently, film studios can use sentiment analysis to find out people's views about a film.
Sentiment Analysis Accuracy
When it comes to the automatic classification of movie reviews, researchers may choose one of the existing approaches or combine two or more of them. Each approach is quite precise on its own, and some experts claimed that they could achieve approximately 65% sentiment classification accuracy. They also showed that higher accuracy (67.931%) could be reached by combining statistical-based, bag-of-words-based, content-based, and lexicon-based approaches.
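The cited work doesn't publish its combination code, but the general idea of combining approaches can be illustrated with a simple majority vote over several classifiers (the toy classifiers below are stand-ins, not the actual statistical, bag-of-words, content, or lexicon models):

```python
def ensemble_sentiment(review, classifiers):
    """Combine several sentiment classifiers by simple majority vote.
    Each classifier maps a review string to +1 (positive) or -1 (negative)."""
    score = sum(clf(review) for clf in classifiers)
    return 'positive' if score > 0 else 'negative' if score < 0 else 'neutral'

# Toy stand-ins for lexicon-, statistics-, and content-based classifiers:
lexicon = lambda r: 1 if 'good' in r or 'great' in r else -1
length  = lambda r: 1 if len(r) > 40 else -1
exclaim = lambda r: 1 if '!' in r else -1

print(ensemble_sentiment('A great movie, see it!', [lexicon, length, exclaim]))
# -> 'positive' (two of three votes)
```

Real ensembles typically weight the votes by each model's validation accuracy instead of counting them equally.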
Similar results (75-83% accuracy) were achieved by combining three components (namely, Categorizer, Comparator and Sentiment Analyzer) of Intellexer SDK to analyze hotel and restaurant reviews. You can see how it works here.
Intellexer Sentiment Analyzer is a linguistic tool that utilizes linguistic and statistical information along with a set of semantic rules.
How Sentiment Analyzer Works
Let’s see how Intellexer Sentiment Analyzer can extract sentiments from Rotten Tomatoes' reviews of "Fifty Shades Darker". The sample code is available here. To run it by yourself you need to have access to Intellexer cloud API and a Python interpreter installed.
You should take the following steps to start using the API:
- Read the documentation to choose the method appropriate for your task (the analyzeSentiments method is suitable for the analysis of film reviews).
- Execute a GET/POST HTTP request and parse response results.
Movie reviews are transferred to the POST body in the form of JSON array, where each array item contains ‘id’ - the review ID and ‘text’ - the review text.
There are two types of weight (‘w’):
- the sentiment weight of the opinion (negative or positive values are used for opinion phrases, zero values – for objects or ontology categories);
- the sentiment weight of the review. This parameter is used to classify the whole text of a review as expressing a positive, neutral or negative opinion.
The code below illustrates how Sentiment Analyzer works:
import json import urllib import urllib2 # list of reviews in JSON format reviews = """[ { \"id\": \"snt1\", \"text\": \"I know that “Fifty Shades Darker” isn’t supposed to be good — it’s supposed to be bad, in need of a spanking. This sequel is almost so bad that it’s good, and if only the filmmakers would submit to making campy comedy of E.L. James’ naughty novels, this just might be quality trash cinema.\" }, { \"id\": \"snt2\", \"text\": \"Fifty Shades Darker opens with a smack. Not the erotic sound of palm hitting rump, but of junkies brawling as their 4-year-old son, BDSM-billionaire-to-be Christian Grey, cowers under a table. Months later, his birth mother dies of a heroin overdose. Doing the math, she could have been shooting up with fellow Seattle addict Kurt Cobain. The orphaned boy will be adopted by tycoons and upgrade from grunge to glam. His childhood pain will mutate into a fetish for whips, slaps, and sad-eyed brunettes who look like his mommy — a pathology diagnosed by a college kid who skipped most of Psychology 101. And so, in the film's first five minutes, Fifty Shades author E.L. James sets up the series's strange sanctimony: You're screwed up if you think this sex-torture stuff is hot. 
But hey, isn't it kinda hot?\" } ]""" # set the URL for POST request, specify url, parameters for information processing and API key for authorization purposes (change YourAPIKey to the Intellexer API key) api_url = "" # print categorized opinions def print_tree(node, height): for i in range(0, height): print "\t", print node.get("t"), if node.get('w') != 0: print "\t", node.get('w') else: print "\t" children = node.get('children') height += 1 for child in children: print_tree(child, height) # print response results def print_response(response): print "Sentences with sentiment objects and phrases:"; sentences = response.get('sentences') for sent in sentences: print "Sentence Weight = ", sent.get('w'), "\t", sent.get('text').encode('utf-8') #print categorized opinions print "\nCategorized Opinions with sentiment polarity (positive/negative)" print_tree(response.get('opinions'), 0) # create request to the Sentiment Analyzer API service def request_api(url, data): header = { 'Content-Type' : "application/json" } req = urllib2.Request(url, data, header) conn = urllib2.urlopen(req) try: json_response = json.loads(conn.read()) finally: conn.close() print_response(json_response) # perform the request try: request_api(api_url, reviews) except urllib2.HTTPError as error: print 'HTTP error - %s' % error.read()
Here is the output:
(Click on the image to enlarge it)
(Click on the image to enlarge it)
Film Review Analysis: Challenges
There are some challenges that providers of custom business intelligence applications have to address. I listed below some of the most common ones:
- One review may contain multiple opinions (even about the same entities). Sentence-level approaches, as a rule, are not able to discover opinions about each entity and (or) its aspects. The aspect-based approach is more suitable in such a case since it can evaluate two opinion targets of the same entity.
- Neutral or objective tweets may change the overall rating. Such tweets are believed to be "just a fact, without any sentiment or opinions associated with them".
- Polysemy and homographs. For example, the word "firm" can mean something secure/solid or a business organization/company depending on the context.
- Distinguishing the name from the description. It means that a movie title may include such words as "war" or "monster" that an NLP solution may recognize as negative ones and the total rating may be skewed.
- The use of anaphora. NLP solutions may experience certain difficulties while determining what a pronoun, a noun or a phrase refers to. E.g. "I ate my lunch and watched the movie. It was great".
- Slang is another challenge. People do use slang in their reviews and tweets. For instance, they may say "That's a bad shirt, man" when they mean it as a compliment to a friend.
- Sarcasm and subtlety: People like playing with words; and sarcasm and irony are some of the types of this game. Big data solutions are not always able to recognize a deeply buried meaning. What is more, there are cross-cultural differences pertinent to sarcasm.
- Special characters: Some movie titles contain accents (foreign movies, in particular). That is why the movies that have apostrophes in their titles may cause encoding problems.
- Misspelling: People make mistakes in their reviews and social media posts, and NLP tools may not classify such words correctly. E.g., Google found out that people living in California often confuse "dessert" and "desert", while people from Alaska often misspell "Hawaii".
- Geographic restrictions: A movie may be very popular in one region and panned in other regions. So, ratings may be mixed as only a small number of tweets have geotagging.
How Else Can Sentiment Analysis be Applied?
The role of NLP tools is not limited to classification of reviews into negative and positive ones. Negative and positive reviews can be grouped on the basis of a subject under discussion: script, actors, atmosphere (i.e. a special mood or feeling it creates among viewers. For instance, a film may have a mysterious atmosphere), etc. The reviews can be further analyzed to extract information on what exactly viewers liked and disliked in a movie.
Owners of film review websites will be able to create a more flexible movie rating system, thus offering users a chance to access opinions of others on each aspect of the movie to find out why it has such a rating. For instance, they will be able to learn that other people liked the leading actor due to emotions they experienced while watching the film, but they did not like the soundtrack since it did not correspond to the topic.
Some steps in this direction has already been taken: Subhabrata Mukherjee and Pushpak Bhattacharyya explored how to identify feature-specific expressions of opinion in product reviews describing different features and containing mixed emotions.
Movie Reviews and Genre Classification
At present, movie genres are mainly identified manually by those people who moderate websites. These people may have a strong passion for movies, but they may fail to identify a movie genre correctly.
As I mentioned previously, NLP tools can give researchers a helping hand in the identification of movie genres as reviews of movies belonging to the same genre will have some common features that enable NLP tools to group them together effectively and in a time-saving manner.
Nevertheless, developers of such tools will have to solve one issue: they need to choose a movie genre scheme they are going to use. Nowadays, movies do not belong to one genre, they represent a combination of multiple genres. E.g. IMDb says that "Star Trek Beyond" released in 2016 belongs to the following genres: action, adventure, sci-fi, and thriller. And this is true, as this movie contains features of all these genres (and some others that are not mentioned). This publication explores issues related to the genre classification more deeply (from the machine learning perspective).
NLP and Similar Movies
Movies can belong to different genres, but have an analogous impact on viewers. For instance, you may like "X-Men" (classified as an action, adventure and sci-fi movie by IMDb) thanks to the love story described in it. But if you try to find similar movies using existing review websites, you will be advised another sci-fi movie, not a love story you are looking for.
NLP tools can do much more than just sentiment analysis and movies' classification by genre. NLP solutions like Comparator can compare reviews and set the degree of similarity between them. This case study describes how NLP solutions can help to manage media content.
Conclusion
NLP is a powerful solution that can take the movie review system to the next level. The information obtained by these tools can be used by site owners to create extended movie reviews with a focus on specific aspects, to classify movies on the basis of both genres and their similarity. This can be also used to make targeted advertising work properly.
Do not limit the sources of data for the research to the ones devoted to movie reviews. Social media like Twitter and YouTube can vie with such websites in terms of squeezable data.
References
1. Amolik, A., Jivane, N., Bhandari, M., Venkatesan, M. (2015). Twitter Sentiment Analysis of Movie Reviews using Machine Learning Techniques. International Journal of Engineering and Technology, Volume 7, Issue 6. Retrieved April 26, 2017 from this link.
2. Brennan, M. W. (2016, November). Performance Comparison of 10 Linguistic APIs for Entity Recognition. ProgrammableWeb.
3. EffectiveSoft, Ltd. (2014) Intellexer Sentiment Analyzer SDK WP [White paper]. Retrieved April 26, 2017, from Intellexer.
4. Kitin, Y. (2016, August). Will Google NL kill the market? Linguistic APIs review. LinkedIn. Retrieved April 26, 2017 from this link.
5. Kitin, Y. (2016, November). Online Summarizers overview. LinkedIn. Retrieved April 26, 2017 from this link.
6. Manning, C.D., Raghavan, P., Schütze, H. (2008). Introduction to Information Retrieval.
7. Turney, P. D. (2002, July). Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews.
About the Author
Tatsiana Levdikova is a Tech Journalist at EffectiveSoft. She writes about software development, UI and UX, natural language processing, Big Data, AI, and other IT-related topics.
Rate this Article
- Editor Review
- Chief Editor Action | https://www.infoq.com/articles/predicting-movie-ratings-nlp?utm_source=articles_about_bigdata&utm_medium=link&utm_campaign=bigdata | CC-MAIN-2018-05 | refinedweb | 2,768 | 55.03 |
A large portion of the field of statistics is concerned with methods that assume a Gaussian distribution: the familiar bell curve.
If your data has a Gaussian distribution, the parametric methods are powerful and well understood. This gives some incentive to use them if possible. Even if your data does not have a Gaussian distribution.
It is possible that your data does not look Gaussian or fails a normality test, but can be transformed to make it fit a Gaussian distribution. This is more likely if you are familiar with the process that generated the observations and you believe it to be a Gaussian process, or the distribution looks almost Gaussian, except for some distortion.
In this tutorial, you will discover the reasons why a Gaussian-like distribution may be distorted and techniques that you can use to make a data sample more normal.
After completing this tutorial, you will know:
-.
Let’s get started.
How to Transform Data to Better Fit The Normal Distribution
Photo by duncan_idaho_2007, some rights reserved.
Tutorial Overview
This tutorial is divided into 7 parts; they are:
- Gaussian and Gaussian-Like
- Sample Size
- Data Resolution
- Extreme Values
- Long Tails
- Power Transforms
- Use Anyway
Need help with Statistics for Machine Learning?
Take my free 7-day email crash course now (with sample code).
Click to sign-up and also get a free PDF Ebook version of the course.
Download Your FREE Mini-Course
Gaussian and Gaussian-Like
There may be occasions when you are working with a non-Gaussian distribution, but wish to use parametric statistical methods instead of nonparametric methods.
For example, you may have a data sample that has the familiar bell-shape, meaning that it looks Gaussian, but it fails one or more statistical normality tests. This suggests that the data may be Gaussian-like. You would prefer to use parametric statistics in this situation given that better statistical power and because the data is clearly Gaussian, or could be, after the right data transform.
There are many reasons why the dataset may not be technically Gaussian. In this post, we will look at some simple techniques that you may be able to use to transform a data sample with a Gaussian-like distribution into a Gaussian distribution.
There is no silver bullet for this process; some experimentation and judgment may be required.
Sample Size
One common reason that a data sample is non-Gaussian is because the size of the data sample is too small.
Many statistical methods were developed where data was scarce. Hence, the minimum. number of samples for many methods may be as low as 20 or 30 observations.
Nevertheless, given the noise in your data, you may not see the familiar bell-shape or fail normality tests with a modest number of samples, such as 50 or 100. If this is the case, perhaps you can collect more data. Thanks to the law of large numbers, the more data that you collect, the more likely your data will be able to used to describe the underlying population distribution.
To make this concrete, below is an example of a plot of a small sample of 50 observations drawn from a Gaussian distribution with a mean of 100 and a standard deviation of 50.
Running the example creates a histogram plot of the data showing no clear Gaussian distribution, not even Gaussian-like.
Histogram Plot of Very Small Data Sample
Increasing the size of the sample from 50 to 100 can help to better expose the Gaussian shape of the data distribution.
Running the example, we can better see the Gaussian distribution of the data that would pass both statistical tests and eye-ball checks.
Histogram Plot of Larger Data Sample
Data Resolution
Perhaps you expect a Gaussian distribution from the data, but no matter the size of the sample that you collect, it does not materialize.
A common reason for this is the resolution that you are using to collect the observations. The distribution of the data may be obsecured by the chosen resolution of the data or the fidelity of the observations. There may be many reasons why the resolution of the data is being modified prior to modeling, such as:
- The configuration of the mechanism making the observation.
- The data is passing through a quality-control process.
- The resolution of the database used to store the data.
To make this concrete, we can make a sample of 100 random Gaussian numbers with a mean of 0 and a standard deviation of 1 and remove all of the decimal places.
Running the example results in a distribution that appears discrete although Gaussian-like. Adding the resolution back to the observations would result in a fuller distribution of the data.
Histogram Plot of a Low Resolution Data Sample
Extreme Values
A data sample may have a Gaussian distribution, but may be distorted for a number of reasons.
A common reason is the presence of extreme values at the edge of the distribution. Extreme values could be present for a number of reasons, such as:
- Measurement error.
- Missing data.
- Data corruption.
- Rare events.
In such cases, the extreme values could be identified and removed in order to make the distribution more Gaussian. These extreme values are often called outliers.
This may require domain expertise or consultation with a domain expert in order to both design the criteria for identifying outliers and then removing them from the data sample and all data samples that you or your model expect to work with in the future.
We can demonstrate how easy it is to have extreme values disrupt the distribution of data.
The example below creates a data sample with 100 random Gaussian numbers scaled to have a mean of 10 and a standard deviation of 5. An additional 10 zero-valued observations are then added to the distribution. This can happen if missing or corrupt values are assigned the value of zero. This is a common behavior in publicly available machine learning datasets; for example.
Running the example creates and plots the data sample. You can clearly see how the unexpected high frequency of zero-valued observations disrupts the distribution.
Histogram Plot of Data Sample With Extreme Values
Long Tails
Extreme values can manifest in many ways. In addition to an abundance of rare events at the edge of the distribution, you may see a long tail on the distribution in one or both directions.
In plots, this can make the distribution look like it is exponential, when in fact it might be Gaussian with an abundance of rare events in one direction.
You could use simple threshold values, perhaps based on the number of standard deviations from the mean, to identify and remove long tail values.
We can demonstrate this with a contrived example. The data sample contains 100 Gaussian random numbers with a mean of 10 and a standard deviation of 5. An additional 50 uniformly random values in the range 10-to-110 are added. This creates a long tail on the distribution.
Running the example you can see how the long tail distorts the Gaussian distribution and makes it look almost exponential or perhaps even bimodal (two bumps).
Histogram Plot of Data Sample With Long Tail
We can use a simple threshold, such as a value of 25, on this dataset as a cutoff and remove all observations higher than this threshold. We did choose this threshold with prior knowledge of how the data sample was contrived, but you can imagine testing different thresholds on your own dataset and evaluating their effect.
Running the code shows how this simple trimming of the long tail returns the data to a Gaussian distribution.
Histogram Plot of Data Sample With a Truncated Long Tail
Power Transforms
The distribution of the data may be normal, but the data may require a transform in order to help expose it.
For example, the data may have a skew, meaning that the bell in the bell shape may be pushed one way or another. In some cases, this can be corrected by transforming the data via calculating the square root of the observations.
Alternately, the distribution may be exponential, but may look normal if the observations are transformed by taking the natural logarithm of the values. Data with this distribution is called log-normal.
To make this concrete, below is an example of a sample of Gaussian numbers transformed to have an exponential distribution.
Running the example creates a histogram showing the exponential distribution. It is not obvious that the data is in fact log-normal.
Histogram of a Log Normal Distribution. The method is named for George Box and David Cox.
More than that, it can be configured to evaluate a suite of transforms automatically and select a best fit. It can be thought of as a power tool to iron out power-based change in your data sample. The resulting data sample may be more linear and will better represent the underlying non-power distribution, including Gaussian.
The boxcox() SciPy function implements the Box-Cox method. It, because we know that the data is lognormal, we can use the Box-Cox to perform the log transform by setting lambda explicitly to 0.
The complete example of applying the Box-Cox transform on the exponential data sample is listed below.
Running the example performs the Box-Cox transform on the data sample and plots the result, clearly showing the Gaussian distribution.
Histogram Plot of Box Cox Transformed Exponential Data Sample
A limitation of the Box-Cox transform is that it assumes that all values in the data sample are positive.
An alternative method that does not make this assumption is the Yeo-Johnson transformation.
Use Anyway
Finally, you may wish to treat the data as Gaussian anyway, especially if the data is already Gaussian-like.
In some cases, such as the use of parametric statistical methods, this may lead to optimistic findings.
In other cases, such as machine learning methods that make Gaussian expectations on input data, you may still see good results.
This is a choice you can make, as long as you are aware of the possible downsides.
Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.
- List 3 possible additional ways that a Gaussian distribution may have been distorted
- Develop a data sample and experiment with the 5 common values for lambda in the Box-Cox transform.
- Load a machine learning dataset where at least one variable has a Gaussian-like distribution and experiment.
If you explore any of these extensions, I’d love to know.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
API
- numpy.random.seed() API
- numpy.random.randn() API
- numpy.random.rand() API
- matplotlib.pyplot.hist() API
- scipy.stats.boxcox() API
Articles
- Normal distribution on Wikipedia
- Outlier on Wikipedia
- Log-normal distribution on Wikipedia
- Power transform on Wikipedia
Summary
In this tutorial, you discovered the reasons why a Gaussian-like distribution may be distorted and techniques that you can use to make a data sample more normal.
Specifically, you learned:
-,
Do you have to do any further testing after the data has been transformed?
That is suppose a Box-Cox transformation is performed on the data to have a symmetrical Gaussian appearance. What are the further tests to ensure that the transformed data is Gaussian.
Another question is that the Poisson distribution is a distribution of discrete values rather than continuous values. A Poisson distribution can have a symmetrical histogram. Could machine learning techniques relying on the assumption that the data is guassian apply to Poisson distribution?
Thank you,
Anthony of exciting Belfield
You could use these tests:
Poisson is not always symmetric. In the case that it is, many algorithms that assume gaussian are robust enough to work with other symmetric distributions without failing badly.
Well explained.
Thanks a lot Mr. Jason.
Thanks.
As Ravi said, vert well explained!
Now, if I want to use this method on real world data, do I need to look at every feature distribution and try to transform it into Gaussian using these methods?
Thanks!
Depending on the modeling method, yes.
Mr. Brownlee hello.
I am trying to make an algorithm in Python taking data from a fits file named “NGC5055_HI_lab.fits
and making them another fits file f.e “test.fits”.
So far i can’t do something.
My algorithm so far is the following…
from matplotlib import pyplot as mp
import numpy as np
import astropy.io.fits as af
cube=af.open (‘NGC5055_HI_lab.fits’)[0]
mo=np.mean(cube.data)
s=np.var(cube.data)
σ=np.std(cube.data)
amp=1/(σ*np.sqrt(2*3.14))
cube.data=amp*np.exp(-np.power(cube.data-mo,2.)/(2*np.power(s,2.)))
cube.writeto(‘test.fits, overwrite=True)
Can you help?
Sounds like a programming problem, not a machine learning question. I recommend posting on stackoverflow. | https://machinelearningmastery.com/how-to-transform-data-to-fit-the-normal-distribution/ | CC-MAIN-2018-34 | refinedweb | 2,168 | 53.61 |
I have tried to write a couple programs but haven't really been able to fully grasp how all the pieces fit together. I have tried reading a lot of the documentation and examples online etc... But I quickly get lost... It seems they start with super simple examples but quickly get complicated. Also, I see lots of little differences with how they go about programming something that is basically the same thing. I understand there are lots of ways to skin a cat... But this causes me to just get confused. As far as I am concerned, I don't really care at this point what the "best" way is. Even the "worst" way is better than "no" way at this point which is where I am currently at.
I am trying to keep things simple to start, as I learn and progress, I can "improve" and learn why I should do things one way vs the other.
Ok... So I simply want to make a graph of the "drawdown" for Tesla. At this point, I would like to simply hard code "Tesla" and if i want to see drawdown for apple... Then I can backspace a couple times and type in Apple.
As I understand, I may need to place an order to actually produce a graph of drawdown. If this is the case then I would like to simply say I buy 1 share on X date. For example, I buy 1 share on July 2nd, 2019. Then If I want to change the date... I can simply backspace a few times and type in a new date.
One problem I keep running into is that I find only a piece of the code I need. For example just the piece to show drawdown but I can't never figure out how to add it to my code.
Here is what I have been able to successfully accomplish. Which I am really proud of although it took me a ridiculous number of hours to get this far. Many of the examples use data from a file, I like that this can pull some of the latest data from online by simply changing the dates.
from __future__ import (absolute_import, division, print_function, unicode_literals) from datetime import datetime import backtrader as bt # Create a cerebro enity (cerebro engine) cerebro = bt.Cerebro() # Get TSLA data # ** Easy to change Tesla to another stock like Apple # **** Just backspace and enter new ticker symbol and hit run # ** Easy to change Date # **** Just backspace and enter new date and hit run data = bt.feeds.YahooFinanceData(dataname='TSLA', fromdate=datetime(2019, 1, 1), todate=datetime(2019, 12, 31)) # Add TSLA data to cerebro engine (inject a datafeed) cerebro.adddata(data) # Run over everything cerebro.run() # Plot TSLA data cerebro.plot()
Can someone please help me to add drawdown to this? I know if someone provided the solution, I could study it and begin to understand how all this works. Again, keep it simple for me, just the basic minimum I need to add a plot of drawdown to the code above. Thank you. | https://community.backtrader.com/user/russell-turner/? | CC-MAIN-2022-40 | refinedweb | 517 | 71.95 |
RavenDB… What am I Persisting, What am I Querying? (part 2)
RavenDB… What am I Persisting, What am I Querying? (part 2)
Join the DZone community and get the full member experience.Join For Free
MariaDB TX, proven in production and driven by the community, is a complete database solution for any and every enterprise — a modern database for modern applications.
Part 2 I want to discuss Relationships & References, and the difference between the two.
Taking from part 1’s example, lets add a User to the mix:
public class Order { public string Id { get; set; } public string UserId { get; set; } public string DateOrdered { get; set; } public string DateUpdated { get; set; } public string Status { get; set; } // Other properties… public IEnumerable<OrderLine> Lines { get; set; } } public class OrderLine { public int Quantity { get; set; } public decimal Price { get; set; } public decimal Discount { get; set; } public string SkuCode { get; set; } // Other Properties } public class User { public string Id { get; set; } public string Username { get; set; } public string Password { get; set; } public string FirstName { get; set; } public string Surname { get; set; } // Other Properties }
As you can see I’ve added ‘UserId’ to the Order, not a ‘User’ just the Id part. This is because I don’t want direct access to the User. (It is possible to map a User in RavenDB, but I don’t believe that is always a good idea. Save it for special occasions.)
If we were modelling this in a Relational Database, we would have a relationship between Order and User, add some foreign keys, and if we threw an ORM into the mix we would probably have an Order object looking like:
Where we wire up the User object inside the Order. This in the long run lets to all sorts of problem. Then we would eager load the User when we fetch the order, maybe on the order we need to fetch the product, so on and so forth. It just gets messy and complicated.
So rather than adding the User object to the Order, in RavenDB we would just add the UserId. But why are we doing this? Below I have modelled the Relational Database Table Structure.
As you can see I’ve highlighted two Foreign Keys. But I’ve named them both differently, one is a reference and one is a relationship.
Reference
The reference has no real purpose other than to maintain referential integrity in the database. Not for our sake, but mainly because we want to keep our DBAs happy. The problem with this however, is we don’t actually need it. An order can still exist in the system without a User. We still know who paid for it by the billing information, and we know who it was shipped to from the shipping information.
Maybe the user wanted to specify what email or phone number to contact them. This information isn’t information that belongs to the user. The only reason we have ‘UserId’ is to so when that user logs into the application, we know which orders belong to him, the information on those orders don’t relate to the User other. This is not a relationship, it’s a reference. A reference to the User.
Relationship
The next one is the Relationship, and it has a real purpose beyond referential integrity. An OrderLine really can’t exist without an Order. Without an order it has no meaning or purpose. The problem is because there are multiple Lines to a single Order, we need to persist them in their own table.
An OrderLine might have a Reference to Product, but an OrderLine can exist without the Product. Since an OrderLine relates back to an Order, you don’t have a real need to ever load an OrderLine by itself. You may edit/delete lines, but that will always be done via the Order.
This ultimately creates a Root Aggregate, the Order becomes the Root while the Lines become the children, and an OrderLine is always loaded with an Order, but never on it’s own.
User/Product Data Duplication
First thing you may think by having the First/Last name of the User on the Addresses, or the Addresses data copied into the Order’s Billing/Delivery Address, is duplicating data. Same with taking a Products Price/Name/SKUCode and putting it on the OrderLine.
This isn’t data duplication.
If a user changes his name, you have a Reference to the user still, but at the time he billed his order, he was John Doe, not John Snow. His address may have changed but we captured it at the point of ordering. This is information that belongs to the Order, not to the User. The fact we have the same name in both the User and Order is a mute point, because visually they are the same, but from a business perspective, they are not the same.
Benefit of Duplicating
So we are copying data now. Is this a good thing? Well lets think about it in an Order History screen.
If a user logged in, went to their account history and viewed their previous order:
Using a relational database, no copying data.
In the scenario using a relational database, we would use the selected OrderId to load the Order, eager load the OrderLine. Fetch the User, Addresses, Product.
Fetching all this data could be done multiple different ways, but already we are asking for a lot of data. A lot of which we aren’t using.
Then we have to compose a lot of that data together, or maybe we joined it and created a new object for displaying it all.
Using a document database, copying the data.
In the scenario of using a document database, we would query for the Order using the OrderId. And begin displaying all the data.
We already knew the Product name that was captured and used at the time of purchase, but we would have the ProductId to reference it back to the Product in the system.
We already know who it was shipped to, and who it was billed to.
We don’t need to find the User or the Product or the Addresses or anything like that. We have all the information for that Order.
In my next post I’ll talk about loading References. This one is already long.
Again I hope this makes sense, feel free to comment and ask questions
MariaDB AX is an open source database for modern analytics: distributed, columnar and easy to use.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/ravendb%E2%80%A6-what-am-i-persisting-0 | CC-MAIN-2018-30 | refinedweb | 1,109 | 62.58 |
Hi, did you find a solution for this problem?
Thanks!
Hi, did you find a solution for this problem?
Thanks!
Hi guys,
Those who are testing on android but can’t get firestore to work, I’ve been there too. Here’s the stack overflow question I asked and got a working fix. There’s a deeper insight on the question, if you’re facing the problem.
But now I’m having a different problem, the one that @spettit faced. When I’m trying to enable offline persistence it won’t allow me. I get the following error printed on console:
“Error enabling offline storage. Falling back to storage disabled: FirebaseError: [code=unimplemented]: This platform is either missing IndexedDB or is known to have an incomplete implementation. Offline persistence has been disabled.”
Does anyone know any fix to it?
Thanks in advance
In the end I built my app outside expo using react-native-firebase which implements offline persistence out of the box. I’ve not tried it on android but on iOS react-native-firebase does everything I want it to do.
I’ve used this solution to build my firestore app. However, recently I faced security rules are not working properly on this one. Anyone having same issue?
allow read, write: if request.auth != null;
Always failed getting document with
request.auth != null option
Great to hear that you had such a positive experience.
I’m keen to migrate my app away from Firebase Realtime DB to Firestore, I had a quick crack at it but it’s mostly a rewrite of my actions and cloud functions
, and it didn’t seem like Firestore had solid support for React Native at the time.
Do you have an Expo link for iOS?
Thanks… it’s working (Y)
import firebase from "firebase" require("firebase/firestore"); // Initialize Firebase var config = { apiKey: "XXXX", authDomain: "XXX.firebaseapp.com", databaseURL: "", projectId: "XXX", storageBucket: "XXX.appspot.com", messagingSenderId: "XXXX" }; firebase.initializeApp(config); var db = firebase.firestore(); . . . . componentDidMount(){ var db = firebase.firestore(); var docRef = db.collection("users"); const output = {}; docRef.limit(50) .get() .then(querySnapshot => { querySnapshot.docs.map(function (documentSnapshot) { return (output[documentSnapshot.id] = documentSnapshot.data()); }); this.setState({dataSource: Object.entries(output)}) ; console.log("datasource:", this.state.dataSource ); }); }
Are you getting a massive object as well? Cant work out if I’m doing something wrong or if thats the actual results
Does it work yet? I am not able to make it work. Please help me out.
It gives the error - TypeError: _firebase2.default.firestore is not a function. (In ‘_firebase2.default.firestore()’, ‘_firebase2.default.firestore’ is undefined)
Thanks
Have you imported firebase/firestore where you are calling firebase.firestore()
Declare
require('firebase/firestore');
or
import '@firebase/firestore';
I’m having the same issue as well
TypeError: _firebase2.default.firestore is not a function. (In ‘_firebase2.default.firestore()’, ‘_firebase2.default.firestore’ is undefined)
even when using
require('firebase/firestore');
or
import '@firebase/firestore';
Thanks for the help
Hey did you get the solution for that even I am getting the same error.
Error: firebase.firestore is not a function. (In ‘firebase.firestore()’, ‘firebase.firestore’ is undefined)
P.S.: I have imported;
import * as firebase from ‘firebase’;
import ‘firebase/firestore’;
It works fine when I use firebase.database(), but I need to use firestore… all my data is in firestore could you help?
Hello,
I solved it by using
import * as firebase from 'firebase/app'; import 'firebase/firestore';
I think how you import depends on your firebase version.
So what is the version of both the imports
5.8.1
Did you find a solution for this?
Is firestore working with expo and if yes with which firebase version? Because it’s not working with 7.14
It doesn’t work in 7.9.0 firebase version. I have tried all the solutions given above but still get firestore is not a function error.
I was able to fix doing the following:
import * as firebase from 'firebase' import "firebase/firestore" firebase.initializeApp(firebaseConfig); const db = firebase.firestore();
I had the same problem, but the solution it isn’t downgrading the version for the firebase package. Searching on the internet and other people with the same error, I was found these:
Finally, the solution is changing the rule on the firestore.
rules_version = ‘2’;
service cloud.firestore {
match /databases/{database}/documents {
match /{document=**} {
allow read, write: if TRUE # change FALSE for TRUE
}
}
}
Expo + Firestore has become my go-to stack for all apps. I’ve built a number of libraries/products around it. In case it’s useful for anyone here:
Doorman: Expo + Firebase SMS auth (+ UI components)
swr-firestore: Super easy react hooks that let you query & edit real-time Firestore data
expo-firestore-offline-persistence: Enable offline mode for Firestore without detaching from Expo | https://forums.expo.io/t/open-when-an-expo-firebase-firestore-platform/4126/29 | CC-MAIN-2021-31 | refinedweb | 797 | 59.6 |
Created on 2013-11-18 12:11 by ivan.radic, last changed 2014-10-16 17:37 by paul.moore. This issue is now closed.
shutil.rmtree works nice on Windows until it hits file with read only attribute set. Workaround is to provide a onerror parameter as a function that checks and removes file attribute before attempting to delete it. Can option to delete read_only files be integrated in shutil.rmtree?
Example output in In Python 2.7:
shutil.rmtree("C:\\2")
Traceback (most recent call last):
File "<pyshell#60>", line 1, in <module>
shutil.rmtree("C:\\2")
File "C:\Program Files (x86)\Python.2.7.3\lib\shutil.py", line 250, in rmtree
onerror(os.remove, fullname, sys.exc_info())
File "C:\Program Files (x86)\Python.2.7.3\lib\shutil.py", line 248, in rmtree
os.remove(fullname)
WindowsError: [Error 5] Access is denied: 'C:\\2\\read_only_file.txt'
Example output in In Python 3.3:
shutil.rmtree("C:\\2")
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
shutil.rmtree("C:\\2")
File "C:\Program Files (x86)\Python.3.3.0\lib\shutil.py", line 460, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Program Files (x86)\Python.3.3.0\lib\shutil.py", line 367, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Program Files (x86)\Python.3.3.0\lib\shutil.py", line 365, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 5] Access is denied: 'C:\\2\\read_only_file.txt'
You?
This,.
Well, it would *definitely* need to be a new explicit option whose default value was the current behavior.
>You".
I like the idea of a remove_readonly flag. I was going to say that I'm a bit worried about the fact that shutil.rmtree already has a couple of keyword arguments, but it's nowhere near what, say, copytree has. Call me +0.75.
That's not a good name for the flag. The problem is that 'read-only' means different things on Windows than it does on Unix. Which is why I suggested that the flag control whether or not it acts "posix like" on Windows. So perhaps 'posix_compat'? That feels a bit odd, though, since on posix it does nothing...so it's really about behavioral consistency across platforms....
Hmm. It's really hard to think of a name that conveys succinctly what we are talking about here.
A more radical notion would be something like 'delete_control' and have it be tri-valued: 'unixlike', 'windowslike', and 'native', with the default being native. Bad names, most likely, but you get the idea. The disadvantage is that it would be even more code to implement ;)
T)
I make no claims of being good at naming things :)
The most obvious solution would be if the onerror argument allowed for retries. At the moment, all it can do is report issues, not recover. Suppose that returning True from onerror meant "retry the operation". Then you could do
def set_rw(operation, name, exc):
os.chmod(name, stat.S_IWRITE)
return True
shutil.rmtree('path', onerror=set_rw)
See issue 8523 for a discussion of changing the way onerror behaves. I think it is addressing this same use case, but I didn't reread it in detail..
OK,.
Looks like that works. At least in my case - I just did
def del_rw(action, name, exc):
os.chmod(name, stat.S_IWRITE)
os.remove(name)
shutil.rmtree(path, onerror=del_rw)
Something more robust might check if name is a directory and os.rmdir that - I didn't need it for my case though.
Thanks.
This could be at least part of docs; I found that people tend to avoid shutil.rmtree(...) on Windows because of such issues. Some of them call subprocess("rmdir /S /Q <path>") to get desired behavior.
Ok, so to move this forward we have essentially two proposals:
1) Add a remove_readonly flag
2) Add a doc example which shows how to use the onerror handler to remove a recalcitrant file.
I'm -0.5 on (1) because it feels like Windows-specific clutter; and +0 on (2).
I'm good with just adding an example to the docs, along the lines of Paul's del_rw. I think it would be better to use a more conservative example though, something like:
def readonly_handler(rm_func, path, exc_info):
if issubclass(exc_info[0], PermissionError) and exc_info[1].winerror == 5:
os.chmod(path, stat.S_IWRITE)
return rm_func(path)
raise exc_info[1]
Checking the exact error could be a bit fragile. I have a feeling I recently saw an issue somewhere with code that stopped working on Python 3.4 because the precise error raised for a read-only file changed. I don't recall where the issue was, unfortunately.
It's also worth noting that trapping too broad a set of errors won't actually matter much, because the retry will simply fail again if the actual problem wasn't a read-only file...
The.
Fair point, Paul.
Patch looks good to me, Tim, barring a couple of nits pointed out on Rietveld.
Thanks, Zach. Updated patch.
LGTM!
Thanks. I'll hold off pushing until I've had a chance to run it on a
Unix system. I'm not 100% whether it will operate in the same way there.
New changeset 31d63ea5dffa by Tim Golden in branch 'default':
Issue19643 Add an example of shutil.rmtree which shows how to cope with readonly files on Windows
New changeset a7560c8f38ee by Tim Golden in branch 'default':
Issue19643 Fix whitespace
Although?
I think it is an interesting idea. Probably worth opening a new enhancement request with the suggestion.
Not. | http://bugs.python.org/issue19643 | CC-MAIN-2017-04 | refinedweb | 938 | 68.16 |
1. IPS Design Goals, Concepts, and Terminology
This section defines IPS terms and describes IPS components.
IPS is designed to install packages in an image. An image is a directory tree, and can be mounted in a variety of locations as needed. An image is one of the following three types:
In a full image, all dependencies are resolved within the image itself, and IPS maintains the dependencies in a consistent manner.
Non-global zone images are linked to a full image (the parent global zone image), but do not provide a complete system on their own. In a zone image, IPS maintains the non-global zone consistent with its global zone as defined by dependencies in the packages.
User images contain only relocatable packages.
In general, images are created or cloned by installers, beadm(1M), or zonecfg(1M), for example, rather than by pkg image-create.
Every IPS package is represented by a fault management resource identifier (FMRI) that consists of a publisher, a name, and a version, with the scheme pkg. In the following example package FMRI, solaris is the publisher, system/library is the package name, and 0.5.11,5.11-0.175.1.0.0.2.1:20120919T082311Z is the version:
pkg://solaris/system/library@0.5.11,5.11-0.175.1.0.0.2.1:20120919T082311Z
FMRIs can be specified in abbreviated form if the resulting FMRI is still unique. The scheme, publisher, and version can be omitted. Leading components can be omitted from the package name.
When the FMRI starts with pkg:// or //, the first word following // must be the publisher name, and no components can be omitted from the package name. When no components are omitted from the package name, the package name is considered complete, or rooted.
When the FMRI starts with pkg:/ or /, the first word following the slash is the package name, and no components can be omitted from the package name. No publisher name can be present.
When the version is omitted, the package generally resolves to the latest version of the package that can be installed.
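For example, assuming each result is unambiguous, all of the following forms can refer to the package shown above:

```
pkg://solaris/system/library@0.5.11,5.11-0.175.1.0.0.2.1:20120919T082311Z
pkg://solaris/system/library
pkg:/system/library
system/library
library
```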
A publisher is an entity that develops and constructs packages. A publisher name, or prefix, identifies this source in a unique manner. Publisher names can include upper and lower case letters, numbers, hyphens, and periods: the same characters as a valid host name. Internet domain names or registered trademarks are good choices for publisher names, since these provide natural namespace partitioning.
Package clients combine all specified sources of packages for a given publisher when computing packaging solutions.
Package names are hierarchical with an arbitrary number of components separated by forward slash (/) characters. Package name components must start with a letter or number, and can include underscores (_), hyphens (-), periods (.), and plus signs (+). Package name components are case sensitive.
Package names form a single namespace across publishers. Packages with the same name and version but different publishers are assumed to be interchangeable in terms of external dependencies and interfaces.
Leading components of package names can be omitted if the package name that is used is unique. For instance, /driver/network/ethernet/e1000g can be reduced to network/ethernet/e1000g, ethernet/e1000g, or even simply e1000g. When no components are omitted from the package name, the package name is considered complete, or rooted. If the packaging client complains about ambiguous package names, specify more components of the package name or specify the full, rooted name. Package names should be chosen to reduce possible ambiguities as much as possible.
If an FMRI contains a publisher name, then the full, rooted package name must be specified.
Scripts should refer to packages by their full, rooted names.
FMRIs can also be specified using an asterisk (*) to match any portion of a package name. Thus /driver/*/e1000g and /dri*00g both expand to /driver/network/ethernet/e1000g.
A package version consists of four sequences of integer numbers, separated by punctuation. The elements in the first three sequences are separated by dots, and the sequences are arbitrarily long. Leading zeros in version elements are forbidden, to allow for unambiguous sorting by package version. For example, 01.1 and 1.01 are invalid version elements.
In the following example package version, the first sequence is 0.5.11, the second sequence is 5.11, the third sequence is 0.175.1.0.0.2.1, and the fourth sequence is 20120919T082311Z.
0.5.11,5.11-0.175.1.0.0.2.1:20120919T082311Z
The first sequence is the component version. For components that are developed as part of Oracle Solaris, this sequence represents the point in the release when this package last changed. For a component with its own development life cycle, this sequence is the dotted release number, such as 2.4.10.
The second sequence is the build version. This sequence, if present, must follow a comma. Oracle Solaris uses this sequence to denote the release of the OS for which the package was compiled.
The third sequence is the branch version, providing vendor-specific information. This sequence, if present, must follow a hyphen. This sequence can contain a build number or provide some other information. This value can be incremented when the packaging metadata is changed, independently of the component. See Oracle Solaris Package Versioning for a description of how the branch version fields are used in Oracle Solaris.
The fourth sequence is a time stamp. This sequence, if present, must follow a colon. This sequence represents the date and time the package was published in the GMT time zone. This sequence is automatically updated when the package is published.
The package versions are ordered using left-to-right precedence: The number immediately after the @ is the most significant part of the version space. The time stamp is the least significant part of the version space.
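The ordering rules above can be sketched in a few lines of Python. This is an illustration only, not the actual pkg(5) implementation; it assumes well-formed version strings of the form component[,build][-branch][:timestamp] as described in this section.

```python
# Sketch (not the pkg(5) implementation): compare IPS version strings
# using left-to-right precedence.  The first three sequences are
# dot-separated integers; the timestamp is ISO-8601-like, so plain
# string comparison orders it correctly.
import re

def parse_version(v):
    m = re.match(
        r'^(?P<comp>[\d.]+)'       # component version
        r'(?:,(?P<build>[\d.]+))?' # build version, after a comma
        r'(?:-(?P<branch>[\d.]+))?'# branch version, after a hyphen
        r'(?::(?P<ts>.+))?$', v)   # time stamp, after a colon
    if m is None:
        raise ValueError("not a valid version: %r" % v)
    seqs = []
    for key in ('comp', 'build', 'branch'):
        s = m.group(key)
        seqs.append(tuple(int(x) for x in s.split('.')) if s else ())
    seqs.append(m.group('ts') or '')
    return tuple(seqs)

def version_lt(a, b):
    # Tuple comparison gives left-to-right precedence: the component
    # is most significant, the time stamp least significant.
    return parse_version(a) < parse_version(b)

assert version_lt('0.5.11,5.11-0.175.0.0.0.2.1:20111019T082311Z',
                  '0.5.11,5.11-0.175.1.0.0.2.1:20120919T082311Z')
assert version_lt('2.4.9', '2.4.10')  # no leading zeros, so ints sort cleanly
```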
The pkg.human-version attribute can be used to hold a human-readable version string, however the versioning scheme described above must also be present. The human-readable version string is only used for display purposes, as documented in Set Actions.
By allowing arbitrary version lengths, IPS can accommodate a variety of different models for supporting software. For example, a package author can use the build or branch versions and assign one portion of the versioning scheme to security updates, another for paid versus unpaid support updates, another for minor bug fixes, or whatever information is needed.
A version can also be the token latest, which specifies the latest version known.
Appendix B, How IPS Is Used To Package the Oracle Solaris OS describes how Oracle Solaris implements versioning.
Actions define the software that comprises a package; they define the data needed to create this software component. Package contents are expressed in a package manifest file as a set of actions.
Package manifests are largely created using programs. Package developers provide minimal information, and the manifest is completed using package development tools as described in Chapter 2, Packaging Software With IPS.
Actions are expressed in the following form in package manifest files:
action_name attribute1=value1 attribute2=value2 ...
In the following example action, dir indicates this action specifies a directory. Attributes in the form name=value describe properties of that directory:
dir path=a/b/c group=sys mode=0755 owner=root
The following example shows an action that has data associated with it. In this file action, the second field, which has no name= prefix, is called the payload:
file
In this example, the payload is the SHA-1 hash of the file. This payload can alternatively appear as a regular attribute with the name hash, as shown in the following example. If both forms are present in the same action, they must have identical values.
file hash
Action metadata is freely extensible. Additional attributes can be added to actions as needed. Attribute names cannot include spaces, quotation marks, or equals signs (=). Attribute values can have all of those, although values with spaces must be enclosed in single or double quotation marks. Single quotation marks need not be escaped inside a string enclosed in double quotation marks, and double quotation marks need not be escaped inside a string enclosed in single quotation marks. A quotation mark can be prefixed with a backslash (\) to prevent terminating the quoted string. Backslashes can be escaped with backslashes. Custom attribute names should use a unique prefix to prevent accidental namespace overlap. See the discussion of publisher names in Package Publisher.
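As an illustration of these quoting rules, the following hypothetical set actions (the com.example attribute names are invented for this example) are both valid:

```
set name=com.example.info.note value='Contains spaces and "double quotes"'
set name=com.example.info.note2 value="Contains spaces, 'single quotes', and an escaped \" character"
```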
Multiple attributes with the same name can be present and are treated as unordered lists.
Most actions have a key attribute. The key attribute is the attribute that makes this action unique from all other actions in the image. For file system objects, the key attribute is the path for that object.
The following sections describe each IPS action type and the attributes that define these actions. The action types are detailed in the pkg(5) man page, and are repeated here for reference. Each section contains an example action as it would appear in a package manifest during package creation. Other attributes might be automatically added to the action during publication.
The file action is by far the most common action. A file action represents an ordinary file. The file action references a payload, and has the following four standard attributes:
path
The file system path where the file is installed. This is the key attribute of a file action. The value of the path attribute is relative to the root of the image. Do not include the leading /.

mode
The access permissions of the file. The value of the mode attribute is simple permissions in numeric form, not ACLs.

owner
The name of the user that owns the file.

group
The name of the group that owns the file.
The payload is normally specified as a positional attribute: The payload is the first word after the action name and has no attribute name. In a published manifest, the payload value is the SHA-1 hash of the file contents. If the payload is present in a manifest that has not yet been published, it represents the path where the payload can be found, as explained in the pkgsend(1) man page. The named hash attribute must be used instead of the positional attribute if the payload value includes an equal symbol (=), double quotation mark ("), or space character. Both positional and hash attributes can be used in the same action, but the hashes must be identical.
A file action can also include the following attributes:
preserve
Specifies that the contents of the file should not be overwritten on upgrade if the contents are determined to have changed since the file was installed or last upgraded. On initial installs, if an existing file is found, that existing file is salvaged (stored in /var/pkg/lost+found).

The preserve attribute can have one of the following values:

renameold
The existing file is renamed with the extension .old, and the new file is put in its place.

renamenew
The existing file is left alone, and the new file is installed with the extension .new.

legacy
This file is not installed for initial package installs. On upgrades, any existing file is renamed with the extension .legacy, and then the new file is put in its place.

true
The existing file is left alone, and the new file is not installed.
overlay
Specifies whether the action allows other packages to deliver a file at the same location, or whether it delivers a file intended to overlay another.

The overlay attribute can have one of the following values:

allow
One other package is allowed to deliver a file to the same location. This value has no effect unless the preserve attribute is also set.

true
The file delivered by this action overwrites any other action that has specified allow.
original_name
This attribute is used to handle editable files moving from package to package, from place to place, or both. The value of this attribute is the name of the originating package, followed by a colon, followed by the original path to the file.

Once this attribute is set, do not change its value, even if the package or file are repeatedly renamed. Keeping the same value permits upgrade to occur from all previous versions.

revert-tag
This attribute is used to tag editable files that should be reverted as a set. The file reverts to its manifest-defined state when the pkg revert command is invoked with any of those tags specified. See the pkg(1) man page for information about the revert subcommand.
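For example, a hypothetical package delivering an editable configuration file (the paths and package name are invented) might use:

```
file path=etc/myapp/myapp.conf owner=root group=sys mode=0644 \
    preserve=renamenew original_name=myapp-legacy:etc/myapp.conf
```

Here the file keeps local edits on upgrade (preserve=renamenew), and the original_name value records that the file was previously delivered by the myapp-legacy package at etc/myapp.conf.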
Specific types of files can have additional attributes. For ELF files, the following attributes are recognized:
elfarch
The architecture of the ELF file. This value is the output of uname -p on the architecture for which the file is built.

elfbits
This value is 32 or 64.

elfhash
This value is the hash of the ELF sections in the file that are mapped into memory when the binary is loaded. These are the only sections necessary to consider when determining whether the executable behavior of two binaries will differ.
An example file action is:
file path=usr/bin/pkg owner=root group=bin mode=0755
The dir action is like the file action in that it represents a file system object, except that it represents a directory instead of an ordinary file. The dir action has the same four standard attributes as the file action (path, owner, group, and mode), and path is the key attribute.
Directories are reference counted in IPS. When the last package that either explicitly or implicitly references a directory no longer does so, that directory is removed. If that directory contains unpackaged file system objects, those items are moved into /var/pkg/lost+found.
Use the following attribute to move unpackaged contents into a new directory:
salvage-from
Names a directory of salvaged items. A directory with such an attribute inherits on creation the salvaged directory contents if they exist. For an example, see Moving Unpackaged Contents on Directory Removal or Rename.
During installation, pkg(1) checks that all instances of a given directory action on the system have the same owner, group, and mode attribute values. The dir action is not installed if conflicting values are found on the system or in other packages to be installed in the same operation.
An example of a dir action is:
dir path=usr/share/lib owner=root group=sys mode=0755
The link action represents a symbolic link. The link action has the following standard attributes:
path
The file system path where the symbolic link is installed. This is the key attribute for a link action.

target
The target of the symbolic link. The file system object to which the link resolves.
The link action also takes attributes that allow for multiple versions or implementations of a given piece of software to be installed on the system at the same time. Such links are mediated, and allow administrators to easily toggle which links point to which version or implementation as desired. These mediated links are discussed in Delivering Multiple Implementations of an Application.
An example of a link action is:
link path=usr/lib/libpython2.6.so target=libpython2.6.so.1.0
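Mediated links, described in the section referenced above, add mediator attributes so that several packages can deliver a link at the same path; a hypothetical sketch (the delivered targets are invented):

```
link path=usr/bin/python target=python2.6 mediator=python mediator-version=2.6
link path=usr/bin/python target=python2.7 mediator=python mediator-version=2.7
```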
The hardlink action represents a hard link. It has the same attributes as the link action, and path is also its key attribute.
An example of a hardlink action is:
hardlink path=opt/myapplication/hardlink target=foo
The set action represents a package-level attribute, or metadata, such as the package description.
The following attributes are recognized:
name
The name of the attribute.

value
The value given to the attribute.
The set action can deliver any metadata the package author chooses. The following attribute names have specific meaning to the packaging system:
pkg.fmri
The name and version of the containing package.

info.classification
One or more tokens that a pkg(5) client can use to classify the package. The value should have a scheme (such as org.opensolaris.category.2008 or org.acm.class.1998) and the actual classification (such as Applications/Games), separated by a colon (:). The scheme is used by the packagemanager(1) GUI. A set of info.classification values is provided in Appendix A, Classifying Packages.

pkg.summary
A brief synopsis of the description. This value is shown at the end of each line of pkg list -s output, as well as in one line of the output of pkg info. This value should be no longer than 60 characters. This value should describe what the package is, and should not repeat the name or version of the package.

pkg.description
A detailed description of the contents and functionality of the package, typically a paragraph or so in length. This value should describe why someone might want to install this package.

pkg.obsolete
When true, the package is marked obsolete. An obsolete package can have no actions other than more set actions, and must not be marked renamed. Package obsoletion is covered in Obsoleting Packages.

pkg.renamed
When true, the package has been renamed. The package must include one or more depend actions as well, which point to the package versions to which this package has been renamed. A package cannot be marked both renamed and obsolete, but otherwise can have any number of set actions. Package renaming is covered in Renaming, Merging and Splitting Packages.

pkg.human-version
The version scheme used by IPS is strict and does not allow for letters or words in the pkg.fmri version field. If a commonly used human-readable version is available for a given package, that version can be set here. The value is displayed by IPS tools. This value is not used as a basis for version comparison and cannot be used in place of the pkg.fmri version.
Some additional informational attributes, as well as some used by Oracle Solaris are described in Appendix B, How IPS Is Used To Package the Oracle Solaris OS.
An example of a set action is:
set name=pkg.summary value="Image Packaging System"
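In practice a package typically delivers several set actions together; for example (the description and classification values here are invented):

```
set name=pkg.summary value="Image Packaging System"
set name=pkg.description value="A software delivery system with dependency checking and package verification."
set name=info.classification value="org.opensolaris.category.2008:System/Packaging"
```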
The driver action represents a device driver. The driver action does not reference a payload. The driver files themselves must be installed as file actions. The following attributes are recognized. See add_drv(1M) for more information about these attribute values.
name
The name of the driver. This is usually, but not always, the file name of the driver binary. This is the key attribute of the driver action.

alias
An alias for the driver. A given driver can have more than one alias attribute. No special quoting rules are necessary.

class
A driver class. A given driver can have more than one class attribute.

perms
The file system permissions for the device nodes of the driver.

clone_perms
The file system permissions for the minor nodes of the clone driver for this driver.

policy
Additional security policy for the device. A given driver can have more than one policy attribute, but no minor device specification can be present in more than one attribute.

privs
Privileges used by the driver. A given driver can have more than one privs attribute.

devlink
An entry in /etc/devlink.tab. The value is the exact line to go into the file, with tabs denoted by \t. See the devlinks(1M) man page for more information. A given driver can have more than one devlink attribute.
An example of a driver action is:
driver name=vgatext \
    alias=pciclass,000100 \
    alias=pciclass,030000 \
    alias=pciclass,030001 \
    alias=pnpPNP,900 variant.arch=i386 variant.opensolaris.zone=global
The depend action represents an inter-package dependency. A package can depend on another package because the first requires functionality in the second for the functionality in the first to work, or even to install. Dependencies are covered in Chapter 4, Specifying Package Dependencies.
The following attributes are recognized:
fmri
The FMRI representing the target of the dependency. This is the key attribute of the depend action. The FMRI value must not include the publisher. The package name is assumed to be complete (that is, rooted), even if it does not begin with a forward slash (/). Dependencies of type require-any can have multiple fmri attributes. A version is optional on the fmri value, though for some types of dependencies, an FMRI with no version has no meaning.

The FMRI value cannot use asterisks (*), and cannot use the latest token for a version.

type
The type of the dependency.

require
The target package is required and must have a version equal to or greater than the version specified in the fmri attribute. If the version is not specified, any version satisfies the dependency. A package cannot be installed if any of its require dependencies cannot be satisfied.

optional
The dependency target, if present, must be at the specified version level or greater.

exclude
The containing package cannot be installed if the dependency target is present at the specified version level or greater. If no version is specified, the target package cannot be installed concurrently with the package specifying the dependency.

incorporate
The dependency is optional, but the version of the target package is constrained. See Chapter 4, Specifying Package Dependencies for a discussion of constraints and freezing.

require-any
Any one of multiple target packages as specified by multiple fmri attributes can satisfy the dependency, following the same rules as the require dependency type.

conditional
The dependency target is required only if the package defined by the predicate attribute is present on the system.

origin
Prior to installation of this package, the dependency target must, if present, be at the specified value or greater on the image to be modified. If the value of the root-image attribute is true, the target must be present on the image rooted at / in order to install this package.

group
The dependency target is required unless the package is on the image avoid list. Note that obsolete packages silently satisfy the group dependency. See the avoid subcommand in the pkg(1) man page for information about the image avoid list.

parent
The dependency is ignored if the image is not a child image, such as a zone. If the image is a child image, then the dependency target must be present in the parent image. The version matching for a parent dependency is the same as that used for incorporate dependencies.

predicate
The FMRI that represents the predicate for conditional dependencies.

root-image
Has an effect only for origin dependencies as mentioned above.
An example of a depend action is:
depend fmri=crypto/ca-certificates type=require
The license action represents a license or other informational file associated with the package contents. A package can deliver licenses, disclaimers, or other guidance to the package installer through the license action.
The payload of the license action is delivered into the image metadata directory related to the package, and should only contain human-readable text data. The license action payload should not contain HTML or any other form of markup. Through attributes, license actions can indicate to clients that the related payload must be displayed or accepted. The method of display or acceptance is at the discretion of clients.
The following attributes are recognized:
Provides a meaningful description for the license to assist users in determining the contents without reading the license text itself. This is the key attribute of the license action.
Wherever possible, including the version of the license in the description is recommended as shown above. The license value must be unique within a package.
When true, this license must be accepted by a user before the related package can be installed or updated. Omission of this attribute is equivalent to false. The method of acceptance (interactive or configuration-based, for example) is at the discretion of clients.
When true, the payload of the license action must be displayed by clients during packaging operations. Omission of this attribute is equivalent to false. This attribute should not be used for copyright notices, but only for actual licenses or other material that must be displayed during operations. The method of display is at the discretion of clients.
An example of a license action is:
license license="Apache v2.0"
The legacy action represents package data used by the legacy SVR4 packaging system. The attributes associated with the legacy action are added into the databases of the legacy SVR4 packaging system so that the tools querying those databases can operate as if the legacy package were actually installed. In particular, specifying the legacy action should cause the package named by the pkg attribute to satisfy SVR4 dependencies.
The following attributes are recognized. See the pkginfo(4) man page for description of the associated parameters. the key attribute of the legacy action.
The value for the VENDOR parameter.
The value for the VERSION parameter. The default value is the version from the FMRI of the package.
An example of a legacy action is:
legacy pkg=SUNWcsu arch=i386 category=system \ desc="core software for a specific instruction-set architecture" \ hotline="Please contact your local service provider" \ name="Core Solaris, (Usr)" vendor="Oracle Corporation" \ version=11.11,REV=2009.11.11 variant.arch=i386
Signature actions are used as part of the support for package signing in IPS. Signature actions are covered in detail in Chapter 9, Signing IPS Packages.
The user action defines a UNIX user as specified in the /etc/passwd, /etc/shadow, /etc/group, and /etc/ftpd/ftpusers files. Information from user actions is added to the appropriate files.
The following attributes are recognized:
The unique name of the user.
The encrypted password of the user. The default value is *LK*.
The unique numeric ID of the user. The default value is the first free value under 100.
The name of the user's primary group. This name must be found in /etc/group.
The real name of the user, as represented in the GECOS field in /etc/passwd. The default value is the value of the username attribute.
The user's home directory. The default value is /.
The user's default shell. The default value is empty.
Secondary groups to which the user belongs. See the group(4) man page.
Can be set to true or false. The default value of true indicates that the user is permitted to login via FTP. See the ftpusers(4) man page.
The number of days between January 1, 1970, and the date that the password was last modified. The default value is empty.
The minimum number of days required between password changes. This field must be set to 0 or above to enable password aging. The default value is empty.
The maximum number of days the password is valid. The default value is empty. See the shadow(4) man page.
The number of days before password expires that the user is warned.
The number of days of inactivity allowed for.
Set to empty.
A example of a user action is:
user gcos-field="pkg(5) server UID" group=pkg5srv uid=97 username=pkg5srv
The group action defines a UNIX group as specified in the group(4) file. No support is provided for group passwords. Groups defined with the group action initially have no user list. Users can be added with the user action.
The following attributes are recognized:
The value for the name of the group.
The unique numeric ID of the group. The default value is the first free group under 100.
An example of a group action is:
group groupname=pkg5srv gid=97
A software repository contains packages for one or more publishers. Repositories can be configured for access in a variety of different ways: HTTP, HTTPS, file (on local storage or via NFS or SMB), and as a self-contained package archive file, usually with the .p5p extension.
Package archives allow for convenient distribution of IPS packages, and are discussed further in Publish as a Package Archive.
A repository accessed via HTTP or HTTPS has a server process, pkg.depotd, associated with it. See the pkg.depotd(1M) man page for more information. For an example, see Retrieving Packages Using an HTTP Interface in Copying and Creating Oracle Solaris 11.1 Package Repositories.
In the case of file repositories, the repository software runs as part of the accessing client. Repositories are created with the pkgrepo and pkgrecv commands as shown in Copying and Creating Oracle Solaris 11.1 Package Repositories. | http://docs.oracle.com/cd/E26502_01/html/E21383/pkgterms.html | CC-MAIN-2017-17 | refinedweb | 4,641 | 57.57 |
#include <wchar.h>
#include <string.h>
#include <winpr/winpr.h>
#include <winpr/wtypes.h>
WinPR: Windows Portable Runtime String Manipulation (CR.
Swap Unicode byte order (UTF16LE <-> UTF16BE)
ConvertFromUnicode is a convenience wrapper for WideCharToMultiByte:
If the lpMultiByteStr parameter for the converted string points to NULL or if the cbMultiByte parameter is set to 0 this function will automatically allocate the required memory which is guaranteed to be null-terminated after the conversion, even if the source unicode string isn't.
If the cchWideChar parameter is set to -1 the passed lpWideCharStr must be null-terminated and the required length for the converted string will be calculated accordingly.
ConvertToUnicode is a convenience wrapper for MultiByteToWideChar:
If the lpWideCharStr parameter for the converted string points to NULL or if the cchWideChar parameter is set to 0 this function will automatically allocate the required memory which is guaranteed to be null-terminated after the conversion, even if the source c string isn't.
If the cbMultiByte parameter is set to -1 the passed lpMultiByteStr must be null-terminated and the required length for the converted string will be calculated accordingly.
Notes on cross-platform Unicode portability:
Unicode has many possible Unicode Transformation Format (UTF) encodings, where some of the most commonly used are UTF-8, UTF-16 and sometimes UTF-32.
The number in the UTF encoding name (8, 16, 32) refers to the number of bits per code unit. A code unit is the minimal bit combination that can represent a unit of encoded text in the given encoding. For instance, UTF-8 encodes the English alphabet using 8 bits (or one byte) each, just like in ASCII.
However, the total number of code points (values in the Unicode codespace) only fits completely within 32 bits. This means that for UTF-8 and UTF-16, more than one code unit may be required to fully encode a specific value. UTF-8 and UTF-16 are variable-width encodings, while UTF-32 is fixed-width.
UTF-8 has the advantage of being backwards compatible with ASCII, and is one of the most commonly used Unicode encoding.
UTF-16 is used everywhere in the Windows API. The strategy employed by Microsoft to provide backwards compatibility in their API was to create an ANSI and a Unicode version of the same function, ending with A (ANSI) and W (Wide character, or UTF-16 Unicode). In headers, the original function name is replaced by a macro that defines to either the ANSI or Unicode version based on the definition of the _UNICODE macro.
UTF-32 has the advantage of being fixed width, but wastes a lot of space for English text (4x more than UTF-8, 2x more than UTF-16).
In C, wide character strings are often defined with the wchar_t type. Many functions are provided to deal with those wide character strings, such as wcslen (strlen equivalent) or wprintf (printf equivalent).
This may lead to some confusion, since many of these functions exist on both Windows and Linux, but they are not the same!
This sample hello world is a good example:
wchar_t hello[] = L"Hello, World!\n";
int main(int argc, char** argv) { wprintf(hello); wprintf(L"sizeof(wchar_t): %d\n", sizeof(wchar_t)); return 0; }
There is a reason why the sample prints the size of the wchar_t type: On Windows, wchar_t is two bytes (UTF-16), while on most other systems it is 4 bytes (UTF-32). This means that if you write code on Windows, use L"" to define a string which is meant to be UTF-16 and not UTF-32, you will have a little surprise when trying to port your code to Linux.
Since the Windows API uses UTF-16, not UTF-32, WinPR defines the WCHAR type to always be 2-bytes long and uses it instead of wchar_t. Do not ever use wchar_t with WinPR unless you know what you are doing.
As for L"", it is unfortunately unusable in a portable way, unless a special option is passed to GCC to define wchar_t as being two bytes. For string constants that must be UTF-16, it is a pain, but they can be defined in a portable way like this:
WCHAR hello[] = { 'H','e','l','l','o','\0' };
Such strings cannot be passed to native functions like wcslen(), which may expect a different wchar_t size. For this reason, WinPR provides _wcslen, which expects UTF-16 WCHAR strings on all platforms. | http://pub.freerdp.com/api/string_8h.html | CC-MAIN-2020-05 | refinedweb | 745 | 57.3 |
Agenda
See also: IRC log
<fjh> draft minutes from last meeting
The minutes were not approved because they contain some confidential information.
<fjh> minutes from xml core
<fjh> red line
Frederick reviewed the outcomes of XML Core C14N11 discussion that was held on Tuesday.
<Frederick> XML Core discussion on Tuesday included clarification of XML Base and Appendix A issues.
<klanz2> dialing in ...
Frederick reviewed the "red line".
<fjh>
<fjh> agenda
klanz2 joined on Zakim.
tlr: Append a '/' character to a trailing ".." <ins>segment</ins
<fjh> proposal add "segment" to the end of "Append a “/” character to a trailing “..” "
konrad: not sure whether .. elimination on a relative uri reference might lead to a "/", which would be wrong
<klanz2> no/../ may result in a slash, but I'll double check
<klanz2> in RFC 3986
<tlr> ACTION: klanz2 to double-check on relative-to-absolute resolution [recorded in]
<trackbot-ng> Created ACTION-108 - Double-check on relative-to-absolute resolution [on Konrad Lanz - due 2007-11-15].
<klanz2> the test cases are also in the latest version of the to be removed Appendix
<fjh> Consider this example
<klanz2> The test cases from there will remain in the spec wouldn't they ...
<fjh> <a xml:
<fjh> <b xml:
<fjh> <c xml:
<fjh> <d xml:
<fjh> </d>
<fjh> </c>
<fjh> </b>
<fjh> </a>
<fjh> now consider removing elements b and c
<fjh> incorrect result would be "../x"
<fjh> due to left hand side algorithm
<fjh> should get ../../x
<klanz2>
<fjh> section 5.2.3
<klanz2> That's what I previously said about this
<klanz2> Further in the concourse of these initial tests I also found a potential
<klanz2> ambiguity in the merge_path function in rfc3986
<klanz2>
<klanz2> Which says: " i.e., excluding any characters after the right-most "/" in
<klanz2> the base URI path"
<klanz2> However I don't think this applies if a base URI has two trailing dots
<klanz2> (assuming the optional normalization mentioned in the second paragraph
<klanz2> of was not performed).
<klanz2> So I'm unsure what would happen to an inherited xml:base URI reference
<klanz2> of the form "../.." to be joined with a URI reference of the form "..".
<klanz2> For the least surprising output I would bet on "../../../" as an output
<klanz2> and I think this would also deserve a mention in section 2.4 of C14n 1.1 .
<fjh> from 5.2.3 return a string consisting of the reference's path component
<fjh> appended to all but the last segment of the base URI's path (i.e.,
<fjh> excluding any characters after the right-most "/" in the base URI
<fjh> path, or excluding the entire base URI path if it does not contain
<fjh> any "/" characters).
tlr: This is not an Appendix A problem.
<klanz2> we could also say that trailing .. will be replaced with ../
<klanz2> before processing
<fjh> Add bullet to join-URI-References text "Replace a trailing .. with ../ before processing"
<tlr> Replace a trailing ".." segment with "../" before processing.
<klanz2> no/../
klanz2: If all segments get removed from other .. segments the trailing / must be removed as well.
<fjh> If all segments are removed through .. segment processing, any lone / must be removed as well
<fjh> Three suggested changes
<fjh> 1 Append a '/' character to a trailing ".." <ins>segment</ins
<fjh> 2. insert Add bullet to join-URI-References text "Replace a trailing ..sgment with "../" before processing
<fjh> 3. Add If all segments get removed from other .. segments the trailing / must be removed as well.
<klanz2> join-uri-reference("no/", "../") := ""
<klanz2> and not "/"
tlr: If the path component of both inputs are not absolute then the result should not be absolute.
<klanz2> join-uri-reference("no/", "/") ="/"
<fjh> if path component of both not absolute (e.g. not start with /) then result is not absolute (e.g. no /)
<klanz2> join-uri-reference("no/", "/") :="/"
<klanz2> join-uri-reference("{whatever}", "/{whatever2}") :="/{whatever2}"
<klanz2> I still understand why we would have to this all again
<klanz2> just for the new algorithm
<klanz2> well lets make just the test cases normative
<klanz2> and put the algorithm informative
<tlr> The algorithm is modified to ensure that a combination of two xml:base attribute values that include relative path components (i.e., path components that do not begin with a '/' character) results in an attribute value that is a relative path component.
<klanz2> I still follow the opinion is that the problem is and was sticking to rfc 3986 remove dot function
thomas: add example + language I put in chat
<klanz2> examples are always good +1
<pdatta> I think refering to RFC3986 and putting modifications on it, is a bad idea
<klanz2> +1 to pdatta
<pdatta> because RFC3986 is very complicated
<klanz2> uneccesarily complicated
<klanz2> and the algorithm by bruce
<klanz2> we should just ask them if they give c14n 1.1 to us
<klanz2> and they offerded that may times in the past
<pdatta> a more intuitive algorithm is one that splits up the path into segments with / delimiter. and then use a stack to push the segments in. And a dot dot will pop segments
<klanz2> :
<klanz2> Some applications may find it more efficient to implement the
<klanz2> remove_dot_segments algorithm by using two segment stacks rather than
<klanz2> strings.
<klanz2> We actually need just one stack ;-)
<klanz2> Didn't hear that ;-)
<klanz2> what was it
<tlr> ACTION: tlr to provide example for "isolated .." case [recorded in]
<trackbot-ng> Created ACTION-109 - Provide example for \"isolated ..\" case [on Thomas Roessler - due 2007-11-15].
<klanz2> btw. regrets from jcc, he will join tomorrow morning ...
<fjh> ACTION: frederick update redline and share with xml:core [recorded in]
<trackbot-ng> Created ACTION-110 - Update redline and share with xml:core [on Frederick Hirsch - due 2007-11-15].
<tlr>
<klanz2> sean please speak up, ...
<tlr> Sean describing above message
<klanz2>
<fjh> konrad - be consistent between spec and test cases
<klanz2>
sean: Is it too late to change c14n 1.1?
Frederick: I think we should do it.
RESOLUTION: Update c14n 1.1 spec example 3.8 and also rerun interop test cases accordingly.
<tlr> ACTION: Frederick to review examples in C14N 1.1 and propose detailed changes to use xml:Id [recorded in]
<trackbot-ng> Created ACTION-111 - Review examples in C14N 1.1 and propose detailed changes to use xml:Id [on Frederick Hirsch - due 2007-11-15].
<tlr> ACTION: tlr to prepare interop report template [recorded in]
<trackbot-ng> Created ACTION-112 - Prepare interop report template [on Thomas Roessler - due 2007-11-15].
<tlr> ACTION: sean to update testcase document [recorded in]
<trackbot-ng> Created ACTION-113 - Update testcase document [on Sean Mullan - due 2007-11-15].
<klanz2> ok
<tlr> ACTION: tlr to ensure that result from ACTION-109 goes into test suite [recorded in]
<trackbot-ng> Created ACTION-114 - Ensure that result from ACTION-109 goes into test suite [on Thomas Roessler - due 2007-11-15].
<tlr>
Graham Rong from MIT/Sloan entered the meeting.
<fjh> interop to do: verify that all implementations are now up to date and checked in, verify test caes doc is up to date, run test for xml:id once accpeted by xml core, run new test for .. processing
<fjh> report generation
<klanz2> Seen your mail, thanks ...
<fjh> Donald mentioned RFC 4051
<klanz2> thomas talked in Mountainview about a registry maechanism ...
<fjh> informational RFC
On Break 3:06pm to 3:30pm
Eric E Cohen presents; slides:
<klanz2>
<fjh> semantics of signatures important
hal: What do "different syntaxes" mean?
discussion around policies, semantics.
cohen: as much interoperability as possible
ed, frederick: signatures have to adhere to policies
ed: can xacml be used for specifying these kinds of policies?
how to reconcile XML Signature syntax with XBRL syntax?
<klanz2> yes
<klanz2> not very good
<klanz2> I can hear Ed fine
<klanz2> I could infer from the slides what it was about ...
<klanz2> a lot better
<fjh>
FJH: sign what you see?!
ecohen: content not limited to
presentation
... in Sweden, they have a standard rendering based solution ...
... throwing away the machine-readable structure in turn ...
fjh: make the report a first-class object?
ecohen: does it have to have a visual representation?
EdS: Check out my whitepaper for
mountain view
... might be relevant for your canonicalization use case ...
eric: "what you see is what you sign" model doesn't really work these days
<fjh> tlr: XBRL canonicalization - canonical XML with differences, definition?
<klanz2> tlr, please speak up
tlr: what's XBRL canonicalization?
ecohen: what we need
<klanz2> works again
<rdm> Bede McCall of MITRE entered the meeting.
<klanz2> order is important in XML ....
(discussion about reordering)
<klanz2> use XSLT sort
<klanz2> to sort elements
ecohen: XBRL can live with
reordering
... more losely defined, since some relations defined elsewhere ...
hal: careful about semantics of order
<klanz2> if elment order doesn't matter, just sort it ...
ed: sign structure?
hal: signature software has no
idea what semantics is
... run risk of basically signing something that can't be modified, but moved ...
... so it's good news that order doesn't matter, since that makes stuff more robust ...
... also, what's lifecycle?
eric: there's a workflow business requirement here
hal: layer processing might break for signature
<klanz2> thanks for the insights, looks like you need some research on this ...
fjh: What do we get out of this?
<klanz2> fjh, could you please turn the mic sensitivity higher ...
tlr: known gaps?
<klanz2> ok
hal: support for profiles -- not very desirable to have 27 different ways to do the signature
<klanz2> does some one have this as pdf, can't read odp ...
hal: would rather have a superset document than a lot of individual profiles ...
<klanz2> thx, use old one in the mean time ...
<Donald3>
ed: single-pass validation and creation
etc
<fjh> pdf
<fjh> ed: use SAML assertions to specify signer identity...
<fjh> SAML subject element.
<fjh> (correction)
<fjh> support multiple signers for single SignedInfo
<klanz2> I'm not sure this will all fit into the core, btw. we may want to concentrate on staying as close as possible to the current spec ... yet still enable better streaming ... and extend in places where signature can be extended anyway
<klanz2> cf. ConterSignature in XAdES ...
<tlr> which you could do using the current infrastructure, except for the semantics part
<klanz2> we can put additional semantic stuff into ds:Object s
<klanz2> CounterSignature, vs. Parallell Signatures
<klanz2> let's not mix things here
<fjh> konrad: use extension points in current spec where we can
<klanz2> re: CounterSignatures
<klanz2>
<fjh> discussion need XML C14N for xml, text canonicalization for text, none for binary
<klanz2> Multiple signatures and countersignatures
<klanz2> the ref above
<fjh> proposal - define in XML Signature how to reliably hash SignedInfo without talking about canonicalization in general
<klanz2> one thing really tricky about SignedInfo is indention and whitespace and that there is no transforms allowing to heal differences in parsing and so on ...
<klanz2> Re streaming, for enveloping signatures we can do something easily putting ds:Object just before the digest value ... for enveloped signatures we have to be more creative ... like factoring DigestMethod and Transforms outside the signature
<klanz2> maybe using xi:Include
<klanz2> just brainstorming
frederick: note we might want to
use different word for different concept despite namespaces due
to people's understanding, e.g. ds:Object vs proposed Object to
embedd or include references
... question - is nesting profiles adding run time complexity to processing, perhaps only allow single profile
tlr: could wrap current signature element with new element with profile attribute vs defining new attribute on signature element
pratik: we should explore approach of using existing Signature 1.0 where we can
hal: real backward compatibility requires same signature value, probably not likely if we fix stuff
hal: dislike namespace profiles, agree with rich
<klanz2> Thats what 1.0 looks like (*where we may want to make additions*, changes #):
<klanz2> <Signature ID?>
<klanz2> <SignedInfo>
<klanz2> <CanonicalizationMethod/>
<klanz2> <SignatureMethod/>
<klanz2> (<Reference URI? >
<klanz2> (<Transforms>)? # use xi:include to including the transforms positioned at the beginning of the document
<klanz2> <DigestMethod> # use xi:include to including the DigestMethod positioned at the beginning of the document
<klanz2> (*add <ds:Object> here to stream it*)
<klanz2> <DigestValue>
<klanz2> </Reference>)+
<klanz2> </SignedInfo>
<klanz2> <SignatureValue>
<klanz2> (<KeyInfo>)?
<klanz2> (<Object ID?>)*
<klanz2> </Signature>
<klanz2> such additions and changes would allow new processors to read old signatures
<klanz2> and old processors could x:include first and then verify / doesn't work for a new ds:Object
<fjh> sean: great presentation, agree with Konrad and Pratik re building on existing spec
<fjh> konrad: streaming issue is conflicting needs for signing and verifying with current spec
<fjh> konrad: at beginning, specify DigestMethod and transforms, at end values, ease streaming
<fjh> ed correct presentation, digest ahead of object
<fjh> hal: streaming has issue of node sets vs infoset
<klanz2> navigational commands in XPATH
<klanz2> allowing to go in reverse document order
<fjh> which XPath?
<klanz2> in transforms
<fjh> Thank you to Eric and Ed for excellent presentations.
<klanz2> what we'd need here is that transforms depend on their ancestor context only
<klanz2> re to what hal said
<klanz2> re to what hal said also, transforms should be decoupled as if they had documents in between ...
<klanz2> Hal, that should cover most of whatr I recall
<klanz2> I may have more readable versions of this stuff in a single document availiable by christmas time
<klanz2> thanks everyone, just had to type this rest not to forget ...
<klanz2> bye bye | http://www.w3.org/2007/11/08-xmlsec-minutes.html | CC-MAIN-2015-06 | refinedweb | 2,236 | 62.78 |
Hacking is an overstatement. Fiddling around with COD4. Anyways, here is a code snippet to get information (send the getinfo command) from a COD4 server:
import socket hex = '\xFF\xFF\xFF\xFF\x67\x65\x74\x73\x74\x61\x74\x75\x73' s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) host = '176.9.142.200' port = 29056 s.sendto(hex, (host, port)) d = s.recvfrom(1024)
To list the number of players and their names:
import re print d[0].count('\"')/2 + " players" list_l = re.findall('"([^"]*)"', d[0]); print for pl in list_l: print pl
Advertisements
3 thoughts on “COD4 Hacking”
Olala …. 😎:D😎:D Its only point on screen or ? Hahahaha
Haha no, Duje. It is only used in a script that joins a server when there are empty slots.
😀 ahaaa 😉 oki doki …. 🙂 | https://grocid.net/2013/05/22/cod4-hacking/ | CC-MAIN-2017-17 | refinedweb | 131 | 72.02 |
AutocompletionFuzzy
Sublime anyword completion
Details
Installs
- Total 12K
- Win 7K
- OS X 3K
- Linux 2K
Readme
- Source
- raw.githubusercontent.com
Sublime Autocompletion plugin
This is really glorious plugin that reduce plain typing considerably while coding.
Demo
Installation
- Package Control: plugin is avaiable under
AutocompletionFuzzyin sublime's
package-control.
- GIT: plugin should be cloned under
AutocompletionFuzzyname.
Features
Provides 8 different types of autocompletion:
Complete word - basic completion; completes word that occurenced in text and opened files.
Complete word (fuzzy) - like “complete word” but uses fuzzy match over words.
Complete subword - completes snake_case and CamelCase words parts.
Complete long word - complete long words: class names with namespaces (Class::Name), method calls (object.method), filenames (file/name.py), urls (http://…).
Complete long word (fuzzy) - line “complete long word” but uses fuzzy match over words.
Complete nesting - completes over and into brackets: can complete full method call (method(arg1, arg2)), method arguments (arg1, arg2), array ([value1, value2]) and everything that has brackets over it or after it.
Complete nesting (fuzzy) - like “complete nesting” but uses fuzzy match.
Complete line - competes whole line.
However it maps only 6 types of autocompletion. Not fuzzy completions aren't mapped to keyboard shortcuts by default. See “installation” section if you would like map non-fuzzy completion behavior.
All lists are build in order of first occurence of word. That makes autocomplete very predictable and easy to use.
Words completion works over all files opened. Nesting completion works only in current file (because of performance issues)
Installation
This plugin is part of sublime-enhanced plugin set. You can install sublime-enhanced and this plugin will be installed automatically.
If you would like to install this package separately check “Installing packages separately” section of sublime-enhanced package.
If you don't like fuzzy behavior you should rebind keyboard shortcuts after installation in the “Autocompletion/Default (OSNAME).sublime-keymap” file (non-fuzzy behavior are commented by default).
Usage
Type a one of two characters of the beginning of word. Than hit keyboard shortcut or run command to complete the word. You can run same command again to complete next/previous occurence.
If you like fuzzy completion it is really useful to type a start character and following character from the middle of word to receive more accurate completion. E.g. for complete localvariable type “lv” and hit keyboard shortcut. First character of word should always be character of completion. However if word starts with underscore () it possible to type next character, e.g. for complete _local_variable same “lv” will work. | https://packagecontrol.io/packages/AutocompletionFuzzy | CC-MAIN-2019-51 | refinedweb | 419 | 50.84 |
Using Temperature Sensors with Gertboard and the Raspberry Pi

- Have the TMP36 and LM335z stay in a fixed place
- Log the data locally on the Pi
- Log the data on local media server so other local users can view it
- Put it in a self-refreshing html file so it updates in the browser every few seconds
But this was still insufficient. So the last step, pending the arrival of more sensors, was…
- Log the data to the internet so that everybody can view it.
Needless to say, each step threw up different challenges, and there was some frustration and trouble-shooting along the way.
I’m not going to fully document everything I did this time (for the moment) but I will share the sites I used for info gleaning and give you the edited highlights of the process.
Indoor/Outdoor Temperatures Logged per Minute
I now have indoor and outdoor temperature sensors logging temperature data every minute to the internet. The scientist in me loves measurement and data logging. It seems like a million years ago (must have been summer ’89) that I did a module on electronics as part of my BSc (Analytical Chemistry). I don’t remember much of it, but I do remember I enjoyed it.
So, which sensors worked out?
I can heartily recommend two analogue sensors:
- TMP36 – costs about £1.50 and is a very simple-to-use, Celsius-based sensor
- LM335z – costs about £1, is fairly simple to use but needs an additional resistor, and is Kelvin-based
And the one I had trouble with?
That was the DS18B20 digital thermometer, which I’m sure is a brilliant piece of kit, but I couldn’t get it working in a useful timeframe. I will come back to it when I know what I’m doing. :)
Other time wasters?
After getting frustrated with the DS18B20, I wasted a lot of time trying to get the right resistor on the LM335z to make it read something like the right voltage. Unfortunately there are web sites out there where people give bad advice, so it’s a good idea to stick to good sources like MIT :) I did have a look at the datasheet for the sensor, but didn’t quite understand it. Basically you have to use good old R = V / I to calculate what value resistor to use according to what voltage you will be hooking the sensor up to (3.3V in my case) to get the required current.
MIT and Adafruit to the Rescue
This page over at MIT explains it very nicely in language I could understand. Basically 1000 Ohms for 5 volts, but that’ll be OK for 3v3 as well. And sure enough, it works just as it should (I really need to stop being surprised when things work as they should).
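A quick sanity check of that advice, using nothing more than R = V / I. The 10 mV-per-kelvin output and the rough 0.4–5 mA operating range are my reading of the LM335 datasheet, so treat them as assumptions:

```python
# Quick R = V / I sanity check for the LM335z series resistor.
# Assumed figures (from the LM335 datasheet, so double-check yours):
#   - output is 10 mV per kelvin, so about 2.98 V at 25 degrees C
#   - the sensor is happy with roughly 0.4 mA to 5 mA through it

SENSOR_V_AT_25C = 298.15 * 0.010   # ~2.98 V across the sensor at room temp

def series_current_ma(supply_v, resistor_ohms):
    """mA flowing through the sensor for a given supply and resistor."""
    return (supply_v - SENSOR_V_AT_25C) / resistor_ohms * 1000

print(series_current_ma(5.0, 1000))   # about 2 mA, comfortably in range
print(series_current_ma(3.3, 1000))   # about 0.32 mA, marginal but it works
```

On 3.3 V the current comes out a shade under the nominal minimum, which fits the experience above: marginal on paper, fine in practice.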
Another site which came to the rescue was Adafruit. A brilliantly simple piece of advice for testing analogue temperature sensors can be found here… In fact there’s a really good tutorial on the TMP36 there as well.
I wished I’d found it before I spent so much time piddling about taking advice from bogus sources. It shows you how to hook up your sensor directly to a Voltmeter to check that it is working properly.
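To make sense of the voltmeter reading, here are the two conversions written out in Python. They are the same sums the Atmega sketch ends up doing: the TMP36 formula (a 500 mV offset, then 10 mV per degree C) is the standard one from the Adafruit tutorial, and the LM335z simply reads 10 mV per kelvin.

```python
# Voltage-to-temperature conversions for the two analogue sensors.

def tmp36_celsius(volts):
    # TMP36: 500 mV offset, then 10 mV per degree C
    return (volts - 0.5) * 100.0

def lm335_celsius(volts):
    # LM335z: 10 mV per kelvin, then convert kelvin to Celsius
    return volts / 0.010 - 273.15

# A healthy TMP36 at 25 C should sit at about 0.75 V,
# and a healthy LM335z at the same temperature at about 2.98 V.
print(tmp36_celsius(0.75))              # 25.0
print(round(lm335_celsius(2.9815), 1))  # 25.0
```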
And then there was the software
It was also necessary to find out what Arduino commands to program into a sketch. Adafruit to the rescue again.
Then connect the Gertboard up in the right way to output data to the serial port.
Then write a little Python script to read from the serial port and output the data.
Now I write about this, and see how many steps are involved, I’m not at all surprised it took so long :rotfl:
Then once that was working well, Adafruit helped with the data logging part too…
(Online Data Logging Tutorial)
I pulled a few bits out of that tutorial, but didn’t use their scripts as I was using the Atmega on the Gertboard.
Here’s the result
Without boring you with further details, here’s a link to the RasPi.TV temperature feed on Cosm.
And here are the current graphs with the most up to date data…
Inside Temperature
Outside Temperature
I don’t promise to leave it switched on all the time, as I will need my Pi and Gertboard for other things. But I’m going to try and set it up directly (minus Gertboard) on another Pi permanently in due course. At the moment there’s 2.5 days worth of data on the log. It defaults to 6 hours display. The best view is on 1 week display. Then you can clearly see the daily temperature cycle. Not surprisingly, outside it gets colder at night and warmer in mid afternoon. Inside similar, but more even. We haven’t used the heating much yet this year.
It’s been a lengthy learning process. I’ve enjoyed it. Frustrations are what makes the success all the sweeter. But it’s much easier to say that once you’ve overcome all the difficulties. :)
One more thing
I just learned the hard way that you need to stop logging before you mess about with the circuit. I wanted to move some stuff around on the breadboard to make it easier to demo in class tomorrow. I switched over a couple of wires expecting to be offline only a second. It didn’t go as planned and I ended up logging zero volts on both sensors. On the TMP36 this was equivalent to -48.3 C and on the LM335z it gave -270 C. This totally messed up the scale on the graphs, so I had to find out how to delete data points. It’s a bit tricky and you have to learn how to interact with the software API. Best to switch off logging before fooling with the circuit.
To be honest, you really should switch off everything before tampering with the circuit. We all know this, and yet… ;)
This project is really interesting, could you give more technical details about the implementation?
Thank you
Sure I can elaborate if you can tell me which part you want more detail on :) The sensors, the arduino sketch, the COSM interface? It would take a long time to write in detail about all of it (which is why I haven’t yet done so). I’m happy to fill in some details though.
I have some distance sensors I’m using on a robot. Is it possible to have the arduino convert the raw values into a distance, send this data to the pi and read it in Python? Thanks!
To be honest, I don’t get two things:
1. If you follow this tutorial, how do you connect the second thermometer (and the LDR sensors)? I haven’t used the cobbler yet.
2. Why do you need the Gertboard? Probably, this answers the question 1.
I think that a diagram of your circuit could answer my questions (or generate more :D )!!!
thank you!
Good questions. Fortunately they both have fairly straightforward answers.
1. I wrote a second blog with a photo of my circuit next to theirs. That shows clearly how the second sensor was attached. You might have missed it because I called it Pi Cobbler Review. But I also had to hack the python code a bit to incorporate the second sensor (not too much though)
2. At the time I wrote this blog the Gertboard was all I had. I used the Atmega on the Gertboard to run the sensors. Now I’m using the Cobbler on the semi-permanent setup and the Gertboard is free for other experiments. The permanent setup will be on a PCB that I am still working on populating. :)
hello,
thanks for this article. Maybe you could post a picture of your hardware setup?
thanks
Peter
I’ll see if I’ve got one. I took that setup apart and put it on a breadboard for now, so I could use the Gertboard for other things. Failing that, the hardware setup is quite similar to the next blog article.
[…] From previous blog posts, you’ll know I have a Raspberry Pi set up to read two temperature sensors and two light sensors (inside and outside) and log the data online at COSM Setting up temperature sensors and COSM feed […]
Hi,
i really like your tutorial, but i have one question: Which ports did you use to connect the sensor to your gertboard?
Thanks
Nevermind it was in a previous answer :idk:
Thank you very much :)
I am sorry… but could you explain how you installed the sensors on the Gertboard, because I am not very experienced with the hardware
I got most of my information from the adafruit tutorial but I modified their arduino sketch to suit the pi. I’ll see if I can find it. The main “gotcha” is don’t connect anything to 5V.
Would you mind sharing your code for this? I’m curious as to how you’re reading the serial data from the ATMega, and how it compares with my code (I’m new at this! :) Also, how did you get COSM to display the voltage of your LDR? Looking at the eeml.unit information it appears to only allow Temp, Humidity, and a few others related to environmental. Nothing that will accept raw data in the form of voltage (1023 / 3.3) or just the full 0-1023.
Cheers!
I’ll see if I can dig out that code and match the relevant snippets (I’m not posting it all). I’ve long-since dismantled that project and superseded it with a proto-plate version, so the Gertboard isn’t tied up. I now read an ADC directly using a script from Adafruit, which I have heavily customised (not sharing that at the moment).
Here you go, I found it, warts and all.
import serial
sport = serial.Serial("/dev/ttyAMA0", 9600, timeout=1)
#print sport.portstr # for debug
response = sport.readline(None) # I've forgotten what None refers to
# I altered the arduino sketch to tag all serial output with "ID1"
# as the program was erroring out on me with some random serial noise
# Error checking prevents bombing out with corrupt serial input
if not response.startswith("ID1"):
    sport.close()
    continue
Here’s how to define your own units in the “write to COSM” command
pac.update([eeml.Data(4, volts, unit=eeml.Unit('Volt', type='basicSI', symbol='V'))])
Obviously the important bit is unit=eeml.Unit(‘Volt’, type=’basicSI’, symbol=’V’), but I leave the whole line there for context.
I really appreciate you posting the above. The COSM information was incredibly useful.
Thanks again and keep up the posting. Your site is excellent!
You’re welcome and thanks for your kind words. It makes it all worthwhile. :)
Hi Alex!
Perfect project, I’m in awe. During last days I’m working to create some kind of smart installation in my home and I’m fighting to connect TMP36 temp. sensors to my GERTBOARD. Could you please give me an advice how to connect sensors to my gertboard? I mean which pins on geartboard should I use?
Additionally the question is if it is possible to connect more than 2 temperature sensors to the gertboard to working with Raspberry Pi.
thanks in advance for your answer.
There are two ways to do it with Gertboard, either using the onboard ADC or the ATMega 328p chip. In this blog I used the ATMega, but it was such a long time ago and I’ve done so many things since then that I’ve forgotten the details.
But, having said that, the best way to connect more than two TMP-36 sensors is using an MCP3008 eight channel ADC. There is a tutorial and software for this on the Adafruit site. You can connect up to eight of these temp sensors to this ADC.
I have TMP-36 sensors connected to my gertboard by MCP3008 but unfortunately I’m not able to read temperature values on it. It is showing value 0 all the time. Could you give me any hints how to configure the wiring on the gertboard extension to connect it in a proper way? Additionally, is some extra software needed?
If you’re using a MCP3008 you don’t need the Gertboard at all. Try this link
Ok, it is no problem to connect directly to Raspberry Pi – then it’s working. But I need to use Gertboard to control some another circuits so I suppose it is necessary to connect MCP3008 to Gertboard, not Raspi… Tell me if I’m wrong.
The next doubt is – I have connected 3 TMP36 sensors on one PCB (just for the test) so each of them should show the same value but it’s not happening – there are quite huge differences between them (ex. 1st: 210, 2nd: 180, 3rd: 160). Is it possible to get such big differences?
Is the TMP36 even rated to a temperature that high? It’s a long time since I looked at the data sheet, but I thought it was rated lower than that.
I checked the datasheet… | https://raspi.tv/2012/using-temperature-sensors-with-gertboard-and-the-raspberry-pi-tmp36-and-lm335z?replytocom=1836 | CC-MAIN-2020-29 | refinedweb | 2,223 | 72.26 |
This document shows how to configure Anthos clusters on VMware to use bundled load balancing with the MetalLB load balancer.
In Anthos clusters on VMware, MetalLB runs in layer-2 mode.
Example of a MetalLB configuration
Here is an example of a configuration for clusters running the MetalLB load balancer:
The preceding diagram shows a MetalLB deployment. MetalLB runs directly on the cluster nodes. In this example, the admin cluster and user cluster are on two separate VLANs, and each cluster is in a separate subnet (172.16.20.0/24 for the admin cluster and 172.16.40.0/24 for the user cluster).
admin-cluster.yaml
The following example of an admin cluster configuration file shows the configuration seen in the preceding diagram of:
MetalLB load balancer
VIPs on MetalLB for Kubernetes API server and add-ons of the admin cluster
network:
  hostConfig:
    ...
  ipMode:
    type: "static"
    ipBlockFilePath: "config-folder/admin-cluster-ipblock.yaml"
  ...
loadBalancer:
  kind: "MetalLB"
  ...
  vips:
    controlPlaneVIP: "172.16.20.100"
    addonsVIP: "172.16.20.101"
admin-cluster-ipblock.yaml
The following example of an IP block file shows the designation of IP addresses for the nodes in the admin cluster. This also includes the address for the control-plane node for the user cluster and an IP address to use during cluster upgrade.
blocks:
- netmask: "255.255.255.0"
  gateway: "17
user-cluster.yaml
The following example of a user cluster configuration file shows the configuration of:
Address pools for the MetalLB controller to choose from and assign to Services of type LoadBalancer. The ingress VIP is in one of these pools.
VIP designated for the Kubernetes API server of the user cluster, and the ingress VIP you have chosen to configure for the ingress proxy. The Kubernetes API server VIP is on the admin cluster subnet because the control plane for a user cluster runs on a node in the admin cluster.
A node pool enabled to use MetalLB. MetalLB will be deployed on the nodes in the user cluster that belong to that node pool.
network:
  hostConfig:
    ...
  ipMode:
    type: "static"
    ipBlockFilePath: "config-folder/user-cluster-ipblock.yaml"
  ...
loadBalancer:
  kind: MetalLB
  metalLB:
    addressPools:
    - name: "address-pool-1"
      addresses:
      - "172.16.40.100"
      - "172.16.40.101-172.16.40.112"
      avoidBuggyIPs: true
  ...
  vips:
    controlPlaneVIP: "172.16.20.102"
    ingressVIP: "172.16.40.102"
...
nodePools:
- name: "node-pool-1"
  cpus: 4
  memoryMB: 8192
  replicas: 3
  enableLoadBalancer: true
The configuration in the preceding example specifies a set of addresses available for Services. When an application developer creates a Service of type LoadBalancer in the user cluster, the MetalLB controller will choose an IP address from this pool.
user-cluster-ipblock.yaml
The following example of an IP block file shows the designation of IP addresses for the nodes in the user cluster. This includes an IP address to use during cluster upgrade.
blocks:
- netmask: "255.255.255.0"
  gateway: "172.16.40.1"
  ips:
  - ip: 172.16.40.21
    hostname: user-vm-1
  - ip: 172.16.40.22
    hostname: user-vm-2
  - ip: 172.16.40.23
    hostname: user-vm-3
  - ip: 172.16.40.24
    hostname: user-vm-4
  - ip: 172.16.40.25
    hostname: user-vm-5
Set up MetalLB
Open firewall ports
MetalLB uses the Go memberlist library to do leader election. The memberlist library uses TCP port 7946 and UDP port 7946 to exchange information. Make sure those ports are accessible for incoming and outgoing traffic on all load-balancer nodes.
Enable MetalLB for a new admin cluster
In your admin cluster configuration file, set loadBalancer.kind to "MetalLB".
loadBalancer: kind: "MetalLB"
Fill in the rest of your admin cluster configuration file, and create your admin cluster as described in Create an admin cluster.
Specify address pools
The MetalLB controller does IP address management for Services. So when an application developer creates a Service of type LoadBalancer in a user cluster, they don't have to manually specify an IP address for the Service. Instead, the MetalLB controller chooses an IP address from an address pool that you specify at cluster creation time.
Think about how many Services of type LoadBalancer are likely to be active in your user cluster at any given time. Then in the loadBalancer.metalLB.addressPools section of your user cluster configuration file, specify enough IP addresses to accommodate those Services.
The ingress VIP for your user cluster must be among the addresses that you specify in an address pool. This is because the ingress proxy is exposed by a Service of type LoadBalancer.
If your application developers have no need to create Services of type LoadBalancer, then you don't have to specify any addresses other than the ingress VIP.
Addresses must be in CIDR format or range format. If you want to specify an individual address, use a /32 CIDR. For example:
addresses: - "192.0.2.0/26" - "192.0.2.64-192.0.2.72" - "192.0.2.75/32
If you need to adjust the addresses in a pool after the cluster is created, you can use gkectl update cluster. For more information, see Update MetalLB.
Enable MetalLB for a new user cluster
In your user cluster configuration file:
- Set loadBalancer.kind to "MetalLB".
- Specify one or more address pools for Services. The ingress VIP must be in one of these pools.
- Set enableLoadBalancer to true for at least one node pool in your cluster.
Fill in the rest of your user cluster configuration file, and create your user cluster as described in Create a user cluster.
Manual assignment of Service addresses
If you do not want the MetalLB controller to automatically assign IP addresses from a particular pool to Services, set the manualAssign field of the pool to true. Then a developer can create a Service of type LoadBalancer and manually specify one of the addresses from the pool. For example:
loadBalancer:
  metalLB:
    addressPools:
    - name: "my-address-pool-2"
      addresses:
      - "192.0.2.73-192.0.2.80"
      manualAssign: true
Avoiding buggy IP addresses
If you set the avoidBuggyIPs field of an address pool to true, the MetalLB controller will not use addresses from the pool that end in .0 or .255. This avoids the problem of buggy consumer devices mistakenly dropping traffic sent to those special IP addresses. For example:
loadBalancer:
  metalLB:
    addressPools:
    - name: "my-address-pool-1"
      addresses:
      - "192.0.2.0/24"
      avoidBuggyIPs: true
Create a Service of type LoadBalancer
Here are two manifests: one for a Deployment and one for a Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      greeting: hello
  replicas: 3
  template:
    metadata:
      labels:
        greeting: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:2.0
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    greeting: hello
  ports:
  - name: metal-lb-example-port
    protocol: TCP
    port: 60000
    targetPort: 8080
Notice that the Service manifest does not specify an external IP address. The MetalLB controller will choose an external IP address from the address pool you specified in the user cluster configuration file.
Save the manifests in a file named my-dep-svc.yaml. Then create the Deployment and Service objects:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG apply -f my-dep-svc.yaml
View the Service:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get service my-service --output wide
The output shows the external IP address that was automatically assigned to the Service. For example:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR my-service LoadBalancer 10.96.2.166 192.0.2.2 60000:31914/TCP 28s
Verify that the assigned external IP address was taken from the address pool you specified in your user cluster configuration file. For example, 192.0.2.2 is in this address pool:
metalLB:
  addressPools:
  - name: "address-pool-1"
    addresses:
    - "192.0.2.0/24"
    - "198.51.100.1-198.51.100.3"
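As a side note, that verification is easy to script. A small sketch with Python's ipaddress module, using the pool values assumed from the example above, checks whether an assigned EXTERNAL-IP falls inside a CIDR or range entry:

```python
import ipaddress

pool = ["192.0.2.0/24", "198.51.100.1-198.51.100.3"]

def in_pool(ip, pool):
    """Return True if ip falls inside any CIDR or range entry of the pool."""
    addr = ipaddress.ip_address(ip)
    for entry in pool:
        if "-" in entry:
            low, high = entry.split("-")
            if ipaddress.ip_address(low) <= addr <= ipaddress.ip_address(high):
                return True
        elif addr in ipaddress.ip_network(entry):
            return True
    return False

print(in_pool("192.0.2.2", pool))     # True - assigned from the pool
print(in_pool("203.0.113.9", pool))   # False - not a pool address
```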
Call the Service:
curl EXTERNAL_IP_ADDRESS:60000
The output displays a
Hello, world! message:
Hello, world! Version: 2.0.0
Update MetalLB
After you create your cluster, you can update the MetalLB address pools and the enableLoadBalancer field in your node pools. Make the desired changes in the user cluster configuration file, and then call gkectl update cluster:
gkectl update cluster --kubeconfig ADMIN_CLUSTER_KUBECONFIG --config USER_CLUSTER_CONFIG
MetalLB Pods and ConfigMap
The MetalLB controller runs as a Deployment, and the MetalLB speaker runs as a DaemonSet on nodes in pools that have enableLoadBalancer set to true. The MetalLB controller manages the IP addresses assigned to Services. The MetalLB speaker does leader election and announces Service VIPs.
View all MetalLB Pods:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get pods --namespace kube-system --selector app=metallb
You can use the logs from the MetalLB Pods for troubleshooting.
MetalLB configuration is stored in a ConfigMap in a format known by MetalLB. Do not change the ConfigMap directly. Instead, use gkectl update cluster as described previously. To view the ConfigMap for troubleshooting:
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get configmap metallb-config --namespace kube-system
Benefits of using MetalLB
MetalLB runs directly on your cluster nodes, so it doesn't require extra VMs.
The MetalLB controller does IP address management for Services, so you don't have to manually choose an IP address for each Service.
Active instances of MetalLB for different Services can run on different nodes.
You can share an IP address among different Services.
MetalLB compared to F5 BIG-IP and Seesaw
VIPs must be in the same subnet as the cluster nodes. This is also a requirement for Seesaw, but not for F5 BIG-IP.
There are no metrics for traffic.
There is no hitless failover; existing connections are reset during failover.
External traffic to the Pods of a particular Service passes through a single node running the MetalLB speaker. This means that the client IP address is usually not visible to containers running in the Pod. | https://cloud.google.com/anthos/clusters/docs/on-prem/1.10/how-to/bundled-load-balance-metallb | CC-MAIN-2022-27 | refinedweb | 1,612 | 56.45 |
Situation: When the RPi system sends mail to Gmail, the body of the email keeps bulking up in subsequent emails, which makes each email messy with the same sentences repeated again and again. I'm not sure what to change in the Python code.
Code:
mail['From'] = fromaddr
mail['To'] = toaddr
mail['Subject'] = "[Intruder Alert] Motion Detected!"

body = "Who is the Intruder? Find the attachment for the Intruder's picture!"
Code:
def sendMail(data):
    mail.attach(MIMEText(body, 'plain'))
    print (data)

    # open the file to be sent
    dat = '%s.jpg' %data
    print (dat)
    attachment = open(dat, 'rb')

    # instance of MIMEBase and named as p
    p = MIMEBase('application', 'octet-stream')

    # To change the payload into encoded form
    p.set_payload(attachment.read())
    attachment.close()

    # encode into base64
    encoders.encode_base64(p)
    p.add_header('Content-Disposition', "attachment; filename= %s" %dat)

    # attach the instance 'p' to instance 'mail'
    mail.attach(p)

    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.starttls()
    server.login(fromaddr, "-password-")
    text = mail.as_string()
    server.sendmail(fromaddr, toaddr, text)
    server.quit()
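The duplication happens because mail appears to be a single module-level MIMEMultipart object that sendMail() keeps attaching new parts to: every call adds another body text (and image) on top of the parts from previous calls. A minimal sketch of a fix is to build a fresh message on each send (the names mirror the original script; the SMTP part is unchanged and omitted here):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_mail(fromaddr, toaddr, body):
    # Fresh message object per call, instead of reusing a shared global `mail`
    mail = MIMEMultipart()
    mail['From'] = fromaddr
    mail['To'] = toaddr
    mail['Subject'] = "[Intruder Alert] Motion Detected!"
    mail.attach(MIMEText(body, 'plain'))
    # ...attach the image part here, exactly as in the original sendMail()...
    return mail

# Each message now carries exactly one text part instead of accumulating:
m1 = build_mail('pi@example.com', 'me@example.com', 'Intruder picture attached!')
m2 = build_mail('pi@example.com', 'me@example.com', 'Intruder picture attached!')
print(len(m1.get_payload()), len(m2.get_payload()))   # 1 1
```

In sendMail(), call build_mail(fromaddr, toaddr, body) at the top, attach the image to that local object, and send it; the global mail (and the mail.attach(MIMEText(body, 'plain')) line) can then be removed.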
Gmail screenshot (the congested body):
| https://www.raspberrypi.org/forums/viewtopic.php?t=249007&p=1520063 | CC-MAIN-2019-35 | refinedweb | 188 | 52.66 |
This is part of the Ext JS to React blog series. You can review the code from this article on the Ext JS to React Git repo.
Ext JS introduced routes as an enhancement over the history class used to coordinate browser history and an application UI. What routes offers is the ability to intimately map the views in the UI to the browser’s address bar. React applications may also take advantage of routes to render the correct UI based on the current URL. In this article we’ll look at the leading router solution in the React ecosystem, react-router. We’ll explore how you can define your application to predictably render all aspects of your application using the browser URL, including user navigation.
Note: Full disclosure, Modus Create is a sponsor of react-router so, yeah, we like it. 😃
Note: While not a requirement for React, this article’s code examples assume you’re starting with a starter app generated by create-react-app.
React Router Overview
Install react-router:
npm install --save react-router-dom
The react-router module handles interacting with the browser’s address bar and history manager. In Ext JS the router handled only the portion of the url following the hash (or optionally #! in later versions). Routes with react-router may be handled using either the content within the hash or within the URL itself. You can set up routing using the HashRouter component to handle routes like:
appRoot#view/subview/itemId
However, react-router recommends using the BrowserRouter component instead to handle routes defined in the url itself ahead of any hash:
appRoot/view/subview/itemId
The BrowserRouter component uses the HTML5 history API and will not be compatible with all browsers. Additionally, you’ll need to ensure that your web server is configured to allow real URL navigation versus folder navigation.
react-router Simple Route
Let’s set up a simple route example to demonstrate how react-router works. In this first example, we’ll set up our App with two views that will be rendered depending on the URL. Let’s create the following files:
src/Home.js
import React from 'react';

export default () => <h2>Home</h2>;
src/User.js
import React from 'react';

export default () => <h2>User</h2>;
src/App.js
import React from 'react';
import { BrowserRouter as Router, Route } from 'react-router-dom';
import Home from './Home';
import User from './User';

const App = () => (
  <Router>
    <div>
      <Route exact path="/" component={Home} />
      <Route path="/user" component={User} />
    </div>
  </Router>
);

export default App;
Our App class serves as a container for either Home or User depending on the URL used. If we run npm start in the terminal, we’ll see “Home” in the browser, since by default the URL loaded is the application root. The Home component is shown when navigating to the application root because we stipulate the Route as:

<Route exact path="/" component={Home}/>

The path value of "/" tells react-router that when we’re at the root with no additional pathing in the URL, the Home component is to be rendered. If we want to render the User view instead, we can navigate to the /user path. Since our user route is defined with a path of "/user", the User component, not the Home component, will be rendered.
You can review the sample code on the Git repo.
react-router Nested Routes and Params
Now that we have a simple routing example laid out, let’s add nested routes to it. Most applications won’t be a single layer deep. Our User class, for example, could be a container displaying a list of users by default. Additionally, if a user’s id is appended to the URL, then that user’s info would display in an edit form. Think of clicking a row in a master grid to view record details. Let’s modify the User class and add a UserList and UserForm class as child components of User.
src/User.js
import React from 'react';
import { BrowserRouter as Router, Route } from 'react-router-dom';
import UserForm from './UserForm';
import UserList from './UserList';

export default ({ match }) => (
  <div>
    <UserList />
    <Router>
      <Route path={`${match.url}/:userId`} component={UserForm}/>
    </Router>
  </div>
);
src/UserList.js
import React from 'react';

export default () => <h2>User List</h2>;
src/UserForm.js
import React from 'react';

export default (props) => {
  const { userId } = props.match.params;
  return <h2>User Id: {userId}</h2>;
};
Our User class now displays the UserList as a child and optionally a UserForm component if the URL has a value after “user/” in the URL. For this example, we could assume that that would be a user’s ID and the URL might look like /user/1234. If the URL were only /user, then the User view and the UserList would be rendered. The path attribute of our User class’s Route is {`${match.url}/:userId`}. The use of ":" in the path means that any value supplied in the URL will match and will automatically be passed down to the component specified on the Route. Its value will be stored as a param on the match prop passed to the Route’s component, UserForm in the case of our example. Here we render the value in the UserForm in plain text.
const { userId } = props.match.params; … <h2>User Id: {userId}</h2>
However, this user ID value could easily be used to asynchronously fetch user data to populate the UserForm once mounted.
You can review the sample code on the Git repo.
react-router Route Navigation
Now that we’ve looked at creating the route structure, let’s look at how to populate the UI to activate a route either programmatically or via user interaction.
Programmatic Navigation
First, let’s look at how the route can be activated in the logic of a class. The react-router module wraps its internal history api and can be used to set the URL or url segment programmatically. As an arbitrary example to demonstrate how routing can be navigated let’s make an arbitrary change to our example UserForm class:
const UserForm = (props) => {
  const { userId } = props.match.params;
  const { history } = props;

  if (userId === '1234') {
    history.push('abcd');
  }

  return (
    <div>
      <h2>User Id: {userId}</h2>
    </div>
  );
}
A component rendered via a Route will have history passed as a prop from the Route. We can use history’s push method to swap the current matched portion of the route with another string. The conditional in the example above does just that: navigating to /user/1234 results in the URL changing to /user/abcd as the UserForm is rendered. The push method adds the URL to browser history, which in this case may not be the desired action. The replace method will instead swap the current URL in the browser’s history with the one passed in.
You can review the sample code on the Git repo.
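The push-versus-replace distinction is easiest to see on the history stack itself. Here is a toy mock, not the real react-router history object, just an illustration of the stack behavior:

```javascript
// Minimal stand-in for the history prop, only to show stack behaviour.
function createHistory() {
  const entries = ['/'];
  return {
    entries,
    push(url) { entries.push(url); },                      // adds a new entry
    replace(url) { entries[entries.length - 1] = url; },   // swaps the top entry
  };
}

const h = createHistory();
h.push('/user/1234');      // Back button would return to '/'
h.replace('/user/abcd');   // '/user/1234' is gone from history
console.log(h.entries);    // [ '/', '/user/abcd' ]
```

With the real object, replace is the better fit for the redirect above, since the user can't press Back into the URL that triggered the redirect.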
Simple Navigation
Link components can be used for user navigation as you would use an anchor tag. Let’s modify our example app to have three navigation links at the top of the view. We’ll also add an About class as a peer of User and Home. The links allow a user to select the home, user, or about view to display below the links.
src/About.js
const About = () => (
  <div>
    <h2>About</h2>
  </div>
);

export default About;
src/App.js
import React from 'react';
import { BrowserRouter as Router, Link, Route } from 'react-router-dom';
import About from './About';
import Home from './Home';
import User from './User';

const App = () => (
  <div>
    <Router>
      <div>
        <Link to="/">Home<br/></Link>
        <Link to="/user">User<br/></Link>
        <Link to="/about">About</Link>
        <Route exact path="/" component={Home}/>
        <Route path="/user" component={User}/>
        <Route path="/about" component={About}/>
      </div>
    </Router>
  </div>
);

export default App;
Here we’ve added three
Link components to the parent
Router with
to attributes that mirror the paths defined on each
Route. Clicking on the text from each Link will update the URL which, in turn, updates the view.
You can review the sample code on the Git repo.
Tab Navigation
Lastly, let’s modify our previous
Link example to instead use the
NavLink component. The NavLink is an enhanced Link component that adds a
className and
/ or
style prop to the NavLink instance when the current URL matches the NavLink’s
to prop. The NavLink component makes it easy style the navigation elements based on the current URL resulting in UI’s like we’re used to with Ext JS’s Tab Panel.
src/App.js
import React from 'react';
import { BrowserRouter as Router, NavLink, Route } from 'react-router-dom';
import About from './About';
import Home from './Home';
import User from './User';
import './App.css';

const App = () => (
  <div>
    <Router>
      <div className="nav-links">
        <NavLink exact to="/" activeClassName="active">Home</NavLink>
        <NavLink exact to="/user" activeClassName="active">User</NavLink>
        <NavLink exact to="/about" activeClassName="active">About</NavLink>
        <Route exact path="/" component={Home}/>
        <Route path="/user" component={User}/>
        <Route path="/about" component={About}/>
      </div>
    </Router>
  </div>
);

export default App;
In this example, we add a className of “nav-links” to the div containing the NavLinks so that we can decorate the navigation elements like tabs. The activeClassName prop on each NavLink serves to add “active” as a class name when the Route path and NavLink to props match, which shows an “active tab” decoration. For completeness, here is the CSS you might use for tab styling:
src/App.css
.nav-links a {
  display: inline-block;
  border-bottom: 4px solid transparent;
  padding: 6px 12px;
  text-decoration: none;
  color: #555;
}

.nav-links a.active {
  border-bottom-color: #6597ca;
}
You can review the sample code on the Git repo.
Conclusion
Routing enables your application to operate in the browser’s address bar similar to how you might expect a website to behave. Each view within your site relates to the browser URL, which allows your users to link not only to the application front page, but also to any route-enabled view within the application. In addition to user convenience, routing offers a centralized navigation pattern for your application to follow where navigable views are rendered by the router’s built-in conditional logic. As we’ve said, we’re big fans of react-router. However, it’s not the only router package available. Have you found a router module you’ve fallen in love with? Please share your experiences in the comments below!
#include <cafe/pads/wpad/wpad.h> #define WPAD_CHAN0 0 #define WPAD_CHAN1 1 #define WPAD_CHAN2 2 #define WPAD_CHAN3 3 s32 WPADGetInfo( s32 chan, WPADInfo *info );
Gets the status of the Wii remote for the specified channel. This function registers a status-requesting command with the WPAD library and waits for the WPAD library to complete its processing.
The following status types can be obtained.
WPADGetInfoAsync
WPADInfo
2014/03/05 Fixed
WPADInfo struct.
2013/05/08 Automated cleanup pass.
2009/11/27 Added a caution regarding use while HOME Menu is displaying.
2008/07/08 Added explanations related to lowBat and battery.
2007/09/18 Added nearempty to
WPADInfo.
2007/09/11 Added a note related to Interrupts and Callback Functions.
2006/09/06 Added WPAD_BATTERY_LEVEL_CRITICAL.
2006/08/15 Added
WPADInfo to See Also.
2006/06/19 Initial version.
CONFIDENTIAL | http://anus.trade/wiiu/personalshit/wiiusdkdocs/fuckyoudontguessmylinks/actuallykillyourself/AA3395599559ASDLG/pads/wpad/WPADGetInfo.html | CC-MAIN-2018-05 | refinedweb | 137 | 53.37 |
0
I need help with my output. I have pretty much everything done. I'm just getting an output like:
"The class average is 5.8%" "The class average is 12.4%" "The class average is 13.6%" "The class average is 18.4%" etc. "Highest grade is 58%" "Highest grade is 66%" "Highest grade is 85%" "Highest grade is 96%" etc. "0 out of 10 passed the test" "2 out of 10 passed the test" "4 out of 10 passed the test" etc.
I want to end with the last class average (ex: 18.4%), the last highest grade (ex: 96%) and the greatest number of students who passed the text (ex: 4 out of 10 passed the test).
One other thing I need help with....I believe I did everything right, but I need "getMax" and "countPassing" functions to have "value returning"....I don't know how to do "value returning", so I need help with that.
Here is my code:
#include <iostream> #include <iomanip> using namespace std; const int numofgrades = 10; void getGrades(double grades[]); double computeAverage(double grades[]); double getMax(double grades[]); void countPassing(double grades[]); int main () { double grades[numofgrades]; double avg; double largest; getGrades(grades); computeAverage(grades); getMax(grades); countPassing(grades); system ("PAUSE"); return 0; } void getGrades(double grades[]) { cout<<"Please enter 10 grades"<<endl; cout<<""<<endl; for (int i=0; i<numofgrades; i++) { cout<<"Grade #"<<(i+1)<<endl; cin>>grades[i]; } } double computeAverage(double grades[]) { double avg; double sum = 0; for (int i=0; i<10; i++) { sum+=grades[i]; avg = (sum/10); cout<<"The class average is "<<avg<<"%"<<endl; } } double getMax(double grades[]) { double largest = grades[0]; for (int i=0; i<numofgrades; i++) { if (grades[i] > largest) largest = grades[i]; cout<<"Highest grade is "<<largest<<"%"<<endl; } } void countPassing(double grades[]) { double sum = 0; for (int i=0; i<numofgrades; i++) { sum+=grades[i]; if (grades[i] > 59) i++; cout<<(i++)<<"out of "<<numofgrades<<" passed the test"<<endl; } }
Edited by mike_2000_17: Fixed formatting | https://www.daniweb.com/programming/software-development/threads/145515/grades-with-arrays-and-functions | CC-MAIN-2018-39 | refinedweb | 331 | 59.84 |
Today I'll show how to load an XML file into objects using XLINQ (Language Integrated Query for XML).
Imaging that you have an XML file which contains an Employee list with corresponding information about each employee. Now you want to load this list into memory and work with that data. For these purposes we have XLINQ technology which allows you to load the data from XML directly into business objects.
public class Employee { public int EmployeeID { get; set; } public string EmployeeName { get; set; } public string EmployeePosition { get; set; } public string EmployeeCountry { get; set; } public Project[] Projects { get; set; } } public class Project { public string ProjectCode { get; set; } public int ProjectBudget { get; set; } }
So, each employee has an ID, Name, Position he holds, Country he lives. Each employee can participate several projects. The Project object contains a project Code and project Budget. Ok, here how employee object looks like in XML:
<employee> <id>1001</id> <name>John</name> <position>Developer</position> <country>USA</country> <projects> <project> <code>Orlando</code> <budget>1000</budget> </project> <project> <code>Rocket</code> <budget>7000</budget> </project> </projects> </employee>And finally this is how we loading the XML into our objects:
List<Employee> employeeList = ( from e in XDocument.Load(@"..\..\Employees.xml").Root.Elements("employee") select new Employee { EmployeeID = (int)e.Element("id"), EmployeeName = (string)e.Element("name"), EmployeePosition = (string)e.Element("position"), EmployeeCountry = (string)e.Element("country"), Projects = ( from p in e.Elements("projects").Elements("project") select new Project { ProjectCode = (string)p.Element("code"), ProjectBudget = (int)p.Element("budget") }).ToArray() }).ToList();Download the code for this example (Visual Studio 2010).
Great code -
it solved 1/2 of my problem.
Thanks
Question:
How do I go about getting ALL the data to display in a dataGrid for example.
I can access the List or the Array one @ a time, but I need to merge (if you will) data from both XML nodes (parent/child).
If you could point me the the right direction that would be great.
Thanks
Hi!
You cannot show XML structure in one datagrid unless your XML has only one level. Think of XML as of set of tables, it's actually similar to DataSet.
So if, for instance, your XML has 3 levels - show each level in separate datagrid.
Hi -
Thanks for the reply, I was able to solve my issue.
You are correct, apparently each leave of the XML file is saved to a new table in the dataSet.
I created another dataSet and used LINQ to join each dataTable within my dataSet containing my XML data and moved that data to my new dataSet.
I was then able to display my new dataTable/Set to a dataGrid.
Thanks again
What if the XML structure is the following. (Couldn't write XML in the comment.)
body
employees
employee
...
Thanks. Super helpful as suits just what I wanted to do and I am a newbie to Linq
How to view result as string ? and add to listview ?
Thank you very much for sharing.
Escpecially the Products[] and following ...}).ToArray()
solved a Problem that kept me busy for quite a Long time.
Thanks
Thanks so much for sharing this. It simplified the conversion process for a new project I am working on. | http://www.codearsenal.net/2012/07/c-sharp-load-xml-using-xlinq.html | CC-MAIN-2021-17 | refinedweb | 537 | 66.64 |
A new version, 1.5, of ProGuard has been released on SourceForge. ProGuard is a Java class file shrinker and obfuscator, that can detect and remove unused classes, fields, methods, and attributes. It can then rename the remaining classes, fields, and methods using short meaningless names.
Changes for Version 1.5:
Fixed processing of retrofitted library interfaces
Fixed processing of .class constructs in internal classes targeted at JRE1.2 (the default in JDK1.4)
Fixed -dump option when -outjar option is not present
Updated documentation and examples
Do many of you obfuscate your code?
What obfuscators do you use?
ProGuard 1.5: Open Source Class File Shrinker / Obfuscator (17 messages)
- Posted by: Dion Almaer
- Posted on: January 13 2003 14:22 EST
Threaded Messages (17)
- Yes by Arne Vajh??j on January 14 2003 07:11 EST
- Is it configurable? by James Birchfield on January 14 2003 09:31 EST
- Never by Arne Vajh??j on January 14 2003 10:05 EST
- Obscurity or Size? by Dave Wolf on January 14 2003 10:42 EST
- for obscurity by R S on January 14 2003 11:07 EST
- Why obfuscate ? by Arne Vajh??j on January 15 2003 02:56 EST
- Common Things can Foil Obfuscation by Craig Pfeifer on January 16 2003 09:31 EST
- Thanks for the Leasons Learned. by Mark N on January 16 2003 11:15 EST
- Not my experience by Arne Vajh??j on January 16 2003 12:37 EST
- Class.forName() by Marc Logemann on January 23 2003 08:00 EST
- Is it configurable? by Vlad Ender on January 14 2003 15:19 EST
- with a test suite ... by Domingo Sebastian on January 15 2003 03:04 EST
- J2ME can't breathe without an obfuscator! by Yuri Magrisso on January 14 2003 17:59 EST
- Proguard article by Yuri Magrisso on January 14 2003 18:12 EST
- Incremental Obfuscation by Arnaud Brochard on January 15 2003 03:54 EST
- Other obfuscators: Alphaworks JAX by David Hamilton on January 15 2003 04:53 EST
- ProGuard 1.5: Open Source Class File Shrinker / Obfuscator by Eric Lafortune on January 18 2003 12:13 EST
Yes[ Go to top ]
Yes - I use an obfuscater whenever delivering jar files
- Posted by: Arne Vajh??j
- Posted on: January 14 2003 07:11 EST
- in response to Dion Almaer
to external customers.
I use an old piece of software called JMangle.
Old, but easy to use and apperently does the job well.
Is it configurable?[ Go to top ]
One thing I would worry about with code chrinkers is the removal of methods it 'thinks' are unnecessary. I was using IDEA to inspect some code last night and found several methods in which the IDE thought could be removed. But we are using Digester for some of our code and relying upon reflection to invoke the methods when necessary. I would assume this software would think these methods were not used as well since they are never 'directly' called in code.
- Posted by: James Birchfield
- Posted on: January 14 2003 09:31 EST
- in response to Dion Almaer
Any one have experience with similar situations?
Jim
Never[ Go to top ]
I would never let an obfuscator delete methods.
- Posted by: Arne Vajh??j
- Posted on: January 14 2003 10:05 EST
- in response to James Birchfield
Too many things can go wrong.
Obscurity or Size?[ Go to top ]
Why are people using obfuscators? Is it for security through obscurity or for the size/performance impacts?
- Posted by: Dave Wolf
- Posted on: January 14 2003 10:42 EST
- in response to Arne Vajh??j
Dave Wolf
for obscurity[ Go to top ]
I use them to prevent access to easily understandable source code. Size/performance is not one of the reasons.
- Posted by: R S
- Posted on: January 14 2003 11:07 EST
- in response to Dave Wolf
Why obfuscate ?[ Go to top ]
To make reverse engineering sligthly more difficult !
- Posted by: Arne Vajh??j
- Posted on: January 15 2003 02:56 EST
- in response to Dave Wolf
Common Things can Foil Obfuscation[ Go to top ]
I was taked with adding obfuscation to a medium-sized client/sever pair w/java 1.1 and here were my observations:
- Posted by: Craig Pfeifer
- Posted on: January 16 2003 09:31 EST
- in response to Arne Vajh??j
- any methods that are accesssed via reflection need to be excluded from obfuscation
- any classes that are loaded via Class.forName("foo") need to be excluded from obfuscation
- properly obfuscating swing applications is very difficult due to the dynamic nature of the swing components
- expect to spend several iterations of full regression sweeps to validate the obfuscated product
- if you are adding obfuscation late in the development cycle, expect to produce parallel builds (obfuscated and non obfuscated) so that QA can repro bugs. One big problem with debugging obfuscated code is the fact that you will not get a stack trace that's meaningful.
- obfuscation will drastically reduce your jarsize and your memory footprint
- good luck patching an obfuscated applicaiton in the field. You will probably have to redeliver a new jar instead of just the updated classes.
Thanks for the Leasons Learned.[ Go to top ]
Thanks for sharing Craig.
- Posted by: Mark N
- Posted on: January 16 2003 11:15 EST
- in response to Craig Pfeifer
Not my experience[ Go to top ]
A good obfuscater allows you to specify that public
- Posted by: Arne Vajh??j
- Posted on: January 16 2003 12:37 EST
- in response to Craig Pfeifer
class/member/method names are not to be mangled.
A good obfuscator produces a map of mangled names that can
be used to analyze a stacktrace.
My experience with JMangle is just run it and ship the
result and forget about it.
OK - it has also been our experience that we need
to distribute full jars. But we would have wanted to
do that anyway.
Class.forName()[ Go to top ]
- any classes that are loaded via Class.forName("foo") need to be excluded from obfuscation
- Posted by: Marc Logemann
- Posted on: January 23 2003 08:00 EST
- in response to Craig Pfeifer
This is IMO not true for ProGuard.
Is it configurable?[ Go to top ]
I don't think anything should ever remove public classes and/or methods. You might be deploying your jar as a library and then.. Not to mention reflection etc.
- Posted by: Vlad Ender
- Posted on: January 14 2003 15:19 EST
- in response to James Birchfield
Removing unused private methods is safe though. (if you're doing reflection on private methods I would say something is wrong with your code..)
All the rest is debatable and should be probably configurable.
Regards,
Vlad
with a test suite ...[ Go to top ]
Maybe if you have a good test suite for your application you can remove unused classes more safety.
- Posted by: Domingo Sebastian
- Posted on: January 15 2003 03:04 EST
- in response to Vlad Ender
J2ME can't breathe without an obfuscator![ Go to top ]
I am currently developping J2ME applications and there I can surely say you can't deploy any serious application without obfuscation. And Proguard does a great job there!
- Posted by: Yuri Magrisso
- Posted on: January 14 2003 17:59 EST
- in response to Dion Almaer
In J2ME the main benefit of obfuscation is in shrinking the size of the code.
I have used Retroguard () but from my experience Proguard produces about 5% smaller jars.
The security aspect is also important but it is not gonna be as easy to get the jar file of the J2ME application once OTA capable phones become available as it is with applets.
In J2ME the removal of unused methods and classes is of great help, because otherwise I would have to do it by hand. Without such a tool it won't be possible to write a descent J2ME library. I'll have to rip it off in every application and it will be just easier to write a lot of stuff a new.
Does anybody know if Proguard removes methods that are not called at all, or methods which call stacks are not invoked?
Proguard article[ Go to top ]
There is a useful article about Proguard and MIDlet obfuscation at
- Posted by: Yuri Magrisso
- Posted on: January 14 2003 18:12 EST
- in response to Dion Almaer
that includes a class to integrate Proguard with Sun's Wireless Toolkit
Incremental Obfuscation[ Go to top ]
this software do not manage incremental obfuscation which is essential to incremental jar update in java web start for example (in fact a few does !)
- Posted by: Arnaud Brochard
- Posted on: January 15 2003 03:54 EST
- in response to Dion Almaer
Other obfuscators: Alphaworks JAX[ Go to top ]
What does it do that JAX didn't do in 1998?!
- Posted by: David Hamilton
- Posted on: January 15 2003 04:53 EST
- in response to Dion Almaer
I know that it hasn't been developed (at least in the Alphaworks version) since then, but IIRC, JAX also does method devirtualisation, class merging and other stuff...
I haven't needed to use it for a long time, but the only issue I remember having with it was that it took ages to work through all the features and work out what they did...
/david
ProGuard 1.5: Open Source Class File Shrinker / Obfuscator[ Go to top ]
Thank you all for your interest in ProGuard. Even when processor speeds and memory capacity are increasing all the time, I personally liked the idea of my code being made a bit leaner, without much effort. For simple applications or libraries, processing the code can be trivial. For code that uses a lot of introspection, shrinking and obfuscation can be more complicated.
- Posted by: Eric Lafortune
- Posted on: January 18 2003 12:13 EST
- in response to Dion Almaer
In any case, ProGuard was designed to be easily configurable, so it can scale with increasingly complex code. For example, ProGuard accepts the following option:
-keep public class * extends java.applet.Applet
This would preserve all public applets, including other classes, methods, and fields that are required in the output jar. Note that ProGuard automatically recognizes Class.forName calls with constant string arguments. It also gives hints about other classes that you may want to keep because they might be created dynamically.
Similarly, one can preserve all applications in the input jars:
-keepclasseswithmembers public class * {
public static void main(java.lang.String[]);
}
In some cases, one wants to preserve all classes that implement an interface:
-keep public class mypackage.** implements MyInterface
The templates allow to quickly specify entire groups of classes, methods, and fields, constraining them based on their access flags, names, interfaces or superclasses, and class members. The templates are more resilient against changes than, say, a fixed set of names. This simple and robust configuration is probably the chief advantage of ProGuard compared to other obfuscators. Please feel free to download it and report your experiences.
Eric (author of ProGuard). | http://www.theserverside.com/discussions/thread.tss?thread_id=17383 | CC-MAIN-2015-32 | refinedweb | 1,838 | 62.68 |
Simon Guest
Microsoft Corporation
September 2004
Applies to:
Microsoft .NET Framework 1.1
Microsoft Visual Studio .NET 2003
BEA WebLogic 8.1 SP3 (8.1.3)
Summary: Based on a series of unit tests between Microsoft .NET and BEA WebLogic 8.1.3, this article shows a series of scenarios and recommendations for achieving Web services Interoperability between the two. (13 printed pages)
Contents
Web Services Interoperability: "Is It Going to Work?" in J2EE." I nodded in acceptance. "I'm looking to build an application in .NET and interoperate leading J2EE platforms that support Web services. For example: Passing a message with a Boolean, then a String, then a Long, then a Float. As the tests moved on, they got increasingly complex. Create an array. Nest types within types. Include some null values. Based on this, I then extended the tests to other platforms. BEA WebLogic 8.1 SP3 (8.1.3).
I've divided the recommendations up in to two sections: IDE Recommendations and "On the Wire" Recommendations.
IDE Recommendations covers recommendations for using either Visual Studio .NET 2003 or BEA WebLogic Workshop 8.1.3 using BEA WebLogic 8.1.3 calling Web service in .NET
Scenario: Imagine two Web services, both deployed using Microsoft .NET. The first Web service is used to create new customers. It has a method that looks like the following:
public Response CreateCustomer(Customer customer)
The second Web service is used to display current orders based on a current customer. It has a method signature that looks as follows:
public Orders GetOrders(Customer customer). Based on this, the xsd.exe tool (a utility in the .NET Framework SDK) has been used to generate the class from this XSD document (xsd.exe /c customer.xsd). This is used for the Web services described above.
To generate the client proxies in BEA WebLogic 8.1.3, the clientgen tool is used. Within an ANT script, clientgen is invoked using the following targets.
<target name="Service1">
<clientgen wsdl="{host}:${port}/crm/Service1.asmx?WSDL"
clientJar="${libdir}/${libjar}"
packageName="org.myorg.service1" />
</target>
<target name="Service2">
<clientgen wsdl="{host}:${port}/purchasing/Service1.asmx?WSDL"
clientJar="${libdir}/${libjar}"
packageName="org.myorg.service2" />
</target>
Everything works well and the client can call the two Web services with no problems. BEA WebLogic Workshop shows the content of the generated JAR file containing the proxy files for both services as shown in Figure 1.
Figure 1. JAR file outline contain both services and shared types
The shared Customer type is located in the org.myorg.types.customer.xsd package and is used by both services.
Let's however imagine that the signature for the two .NET Web services changes from:
public Response CreateCustomer(Customer customer)
public Orders GetOrders(Customer customer)
To this:
public Response CreateCustomer(Customer[] customer)
public Orders GetOrders(Customer[] customer)
In the interest of efficiency, it has been decided that the Web services are going to accept multiple customers for each method.
The .NET Web services are modified and the client proxy is regenerated using the clientgen ANT task. Investigating the resulting JAR file shows a newly generated class called ArrayOfCustomerSequenceCodec.class.
Figure 2. The generated ArrayOfCustomer class
When the BEA client is re-run however, both the calls to the Web services fail. The .NET Web services believe that no value (or a NULL value) was passed to the service.
Recommendation:
If we take a look at the SOAP envelope for one of the failing calls to the .NET Web services we can see why things are not working:
<env:Envelope xmlns:env=""
xmlns:xsi=""
xmlns:soapenc=""
xmlns:
<env:Body>
<n1:CreateCustomer xmlns:
<n1:customer xmlns:n1=""
xmlns:n2=""
xsi:
<n3:Customer xmlns:
<n2:Id xmlns:12345</n2:Id>
<n2:Created xmlns:
2005-01-20T00:00:00-08:00</n2:Created>
<n2:Name xmlns:Test customer</n2:Name>
</n3:Customer>
</n1:customer>
</n1:CreateCustomer>
</env:Body>
</env:Envelope>
As can been seen in the trace, the client proxy generation has become confused with the multiple XML namespaces. The array (customer) defines namespaces for both the CRM Web service and the shared schema type, but the element within the array (Customer) defines a namespace that maps to the Purchasing service.
As a result, neither .NET Web service are able to deserialize the request correctly (the CRM Web service believes that the array is empty, the Purchasing Web service believes that the array is not intended for the service).
To correct this, we can explicitly set the generated package for types when the clientgen tool is run. This is done by adding a typePackageName parameter to the ANT task. In our scenario, this looks like the following:
<target name="Service1">
<clientgen wsdl="{host}:${port}/crm/Service1.asmx?WSDL"
clientJar="${libdir}/${libjar}"
packageName="org.myorg.service1"
typePackageName="org.myorg.service1" />
</target>
<target name="Service2">
<clientgen wsdl="{host}:${port}/purchasing/Service1.asmx?WSDL"
clientJar="${libdir}/${libjar}"
packageName="org.myorg.service2"
typePackageName="org.myorg.service2" />
</target>
When the clientgen task is run, the array class for the customer is created specific to the service in question, as shown in Figure 3.
Figure 3. Each service now contains specific array classes
After adjusting the BEA client-side code to use the newly generated type (which involves using org.myorg.service1.customer instead of org.myorg.types.customer.xsd), things work as expected.
If we now re-check the SOAP trace we can see that the XML namespace for the elements within the array match the array construct.
<env:Envelope xmlns:env=""
xmlns:xsi=""
xmlns:soapenc=""
xmlns:
<env:Body>
<n1:CreateCustomer xmlns:
<n1:customer xmlns:n1=""
xmlns:n2=""
xsi:
<n1:Customer xmlns:
<n2:Id xmlns:12345</n2:Id>
<n2:Created xmlns:
2005-01-20T00:00:00-08:00</n2:Created>
<n2:Name xmlns:
Test customer</n2:Name>
</n1:Customer>
</n1:customer>
</n1:CreateCustomer>
</env:Body>
</env:Envelope>
Applies To: Client in .NET calling a Web service written using BEA WebLogic 8.1.3
Scenario: Imagine a Web service created using BEA WebLogic 8.1.3 that exposes the following Web Method:
public Orders GetOrders(Company company)
The Orders and Company type have been generated by first creating the types in XSD and then dropping them into the Schemas folder within BEA WebLogic workshop. This has generated corresponding XML beans for each complex type. The Company type is defined in XSD as follows:
>
Notice how the Company type contains a field for the mailing address. This is of type Address.
Let's assume that this Web service works as expected, however we want to test whether a mailing address has been set for the company passed. You may imagine that the code used to check this could look similar to the following:
if (company.getMailingAddress != null)
{
// read the mailing address and do something
}
When the mailing address is set to null however, the WebLogic Web service still believes it contains some data (i.e. company.getMailingAddress is never asserted to null).
Recommendation
Because the generated customer and mailing address are XML beans, they still contain a structure. Therefore, testing company.getMailingAddress to null is testing whether the structure of the XML Bean exists as opposed to the value. If a valid structure exists, this will always be false.
To test whether the actual mailing address is null, it is recommended to use the IsNil() method on the data type. This will test whether the value of the passed type is null. For our scenario, this would look as follows:
if (!customer.getMailingAddress.IsNil())
{
// read the mailing address and do something
}
Following this recommendation returns the correct result.
Applies To: BEA WebLogic 8.1.3 client calling a .NET Web service
Scenario: Imagine that you have a client created using BEA WebLogic 8.1.3: .NET Client calling Web service in BEA WebLogic 8.1.3
Scenario: Imagine that you have a Web service hosted in BEA WebLogic 8.1.3. The Web service returns an order. The method for this could look similar to the following:
public Order GetOrders()
The order contains a number of items. By creating an XSD document and using the Schema folder in BEA WebLogic Workshop you have generated a series of XML Beans. The XSD for the Order and OrderItem types are defined as follows:
:sequence>
</xs:complexType>
You wish to return an order to the client in .NET. You create the order, add new items but fail to populate them before returning. For example:
// Create a new order from the XML Bean
Order order = Order.Factory.newInstance();
// Create some empty items
OrderItem[] items = new OrderItem[3];
order.setItemsArray(items);
// return the order
return order;
Although this may look correctly formed, when the .NET client calls this Web service an exception will be generated on the BEA WebLogic 8.1.3 Server.
<Sep 1, 2004 3:39:15 PM PDT> <Warning> <WLW> <000000>
<Id=top-level; Method=Service1.GetOrders();
Failure=java.lang.IllegalArgumentException: Array element null [ServiceException]>
<Sep 1, 2004 3:39:15 PM PDT> <Error> <WLW> <000000>
<Failure=java.lang.IllegalArgumentException: Array element null [ServiceException]>
If you plan to return an object that contain multiple items, some of which may be initialized, it is recommended to add them individually. For example:
Order order = Order.Factory.newInstance();
OrderItem i1 = order.addNewItems();
OrderItem i2 = order.addNewItems();
OrderItem i3 = order.addNewItems();
return order;
The above code will correctly return an empty array of three elements to the calling .NET client.
The recommendations listed in this section cover "On the Wire" (or run time) recommendations. They are in no particular order of priority, but some do follow sequentially. Each of the recommendations covers the platform it applies to, an example scenario and the actual recommendation itself.
Applies To: .NET Client and .NET Web service
Scenario: You have a Web service running on BEA WebLogic 8.1.3:
In this article you have seen some of the recommendations for achieving interoperability between Web services created using Microsoft .NET Framework 1.1 and BEA WebLogic 8.1.3.
Through creating this article, it is apparent that interoperability using Web services developed on the Microsoft .NET Framework 1.1 and BEA WebLogic 8.1.3 is most definitely achievable today. The results and observations from running the unit tests confirm this, and this is validated by the number of organizations who are developing applications today that span both platforms.
Overall, the results are testament to how well both Microsoft and BE. | http://msdn.microsoft.com/en-us/architecture/ms998265.aspx | crawl-002 | refinedweb | 1,734 | 50.33 |
Legal syntax, bug or what?
Discussion in 'Ruby' started by Jonathan Maasland, Sep 1, 2006.
Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum.
- Similar Threads
Legal C or bug in gccBoltar, Feb 27, 2008, in forum: C Programming
- Replies:
- 31
- Views:
- 838
- Richard
- Mar 1, 2008
Legal syntax for VHDL expressionrickman, Feb 21, 2010, in forum: VHDL
- Replies:
- 1
- Views:
- 1,054
- rickman
- Feb 21, 2010
Syntax highligth with textile: Syntax+RedCloth ?gabriele renzi, Dec 30, 2005, in forum: Ruby
- Replies:
- 2
- Views:
- 204
- gabriele renzi
- Dec 31, 2005
[ANN] SqlStatement 1.0.0 - hide the syntax of SQL behind familiarruby syntaxKen Bloom, Oct 9, 2006, in forum: Ruby
- Replies:
- 3
- Views:
- 212
Syntax bug, in 1.8.5? return not (some expr) <-- syntax error vsreturn (not (some expr)) <-- fineGood Night Moon, Jul 22, 2007, in forum: Ruby
- Replies:
- 9
- Views:
- 283
- Rick DeNatale
- Jul 25, 2007 | http://www.thecodingforums.com/threads/legal-syntax-bug-or-what.833019/ | CC-MAIN-2014-41 | refinedweb | 183 | 68.7 |
Hi there. Rob Huyett here again. I’m an SDET on the VC Libraries team. One of the things I’ve been working on lately is the new TR1 add-on for Visual Studio 2008. When VS 2008 TR1 support was announced about a month ago, I’m sure that the reaction varied from “Woo-hoo! I can’t wait!” to “Huh? What’s TR1? Do I care about this?” and all sorts of things in between.
This post is mostly aimed at that second category of folks – the ones who hadn’t heard about TR1, and who maybe haven’t found the time or inspiration to seek out what it’s all about. I’ll talk briefly here about just what TR1 is, and I’ll describe just a few of the TR1 features that I think are particularly nifty. This isn’t going to be an in-depth discussion or how-to manual, but it will hopefully inspire someone out there to take a closer look at what’s in TR1 and see if it can help you in your projects.
So what is TR1? Well, TR1 (“Technical Report 1”) is a set of proposed additions to the C++0x standard. Most of it will likely find its way into the new standard, but in the meantime it provides a useful stepping stone toward C++0x. TR1 is full of very useful new utilities such as new types of smart pointers (called “shared_ptr” and “weak_ptr”), new containers (tuples, unordered maps, unordered sets, and a neat STL-like array), reference wrappers, regular expression support, and function wrappers. You might look at that list and think that much of it sounds familiar. Smart pointers and function wrappers, for instance, already exist. This is true, but TR1’s versions try to be easier to use and more useful than the existing stuff… sort of like the next iteration based on a few years of experience finding out what works and what doesn’t work.
Alright, on to some specifics! First, I’d like to mention the new TR1 tuple and array classes. Then I’ll talk just a bit about shared_ptr. I’ll finish up with some information about regex, TR1’s regular expression utility. Again, people who are familiar with TR1 probably won’t get much out of this, but it will hopefully whet the appetite of those who are new to TR1.
Tuple is very much like the existing pair class, except that it can hold up to ten items instead of just two. Just like you can have pair<int, char> p;, you can have something like tuple<int, char, int, double, char*> t;. Handy, no?
TR1’s array class is very much like a fixed-length STL vector. Vector is a very useful class, and it is probably sufficient (even preferable) for most array-type needs. However, there are some situations where the developer is absolutely positive that the array needed will always be a particular size… no more, no less. In these cases, the variable-size feature of vector is not needed and just adds extra overhead. While you could just use a regular old C-style “square-bracket” array, the TR1 array class lets you use all of the STL-type iterators and algorithms. While it lacks some of the flexibility of a vector, it opens up more options than are available with a C-style array.
Shared_ptr is a very easy-to-use tool that greatly simplifies memory management. It all but makes new/delete combinations obsolete. Shared_ptr is a smart pointer class that is pretty easy to use. The syntax is fairly simple… shared_ptr<string> sp(new string("foo")); creates a shared_ptr called sp to a string containing "foo". This shared_ptr will act almost just like any "normal" pointer, except that you don't need to remember to delete it when you're done. And unlike some older smart pointers, you don't need to modify the target class (in this case, string) to include reference counting or anything like that… nearly any class you want will work with shared_ptr as-is. Speaking of reference counting, shared_ptr takes care of that (hence the "shared" in the name). If I were to make another shared_ptr that points to the same thing (like shared_ptr<string> sp2 = sp;), then shared_ptr's reference counting is smart enough to only free up the memory when BOTH sp and sp2 are gone. Of course, this barely scratches the surface of what shared_ptr is all about, but it's a start.
Regex is a class that lets you write complex regular expressions like those commonly used in Perl. While C++ has always had some amount of support for regular expressions, TR1’s regex utilities simplify things by building in the mechanics for parsing, matching, and capture groups. The regex class holds the actual regular expression, and algorithms such as regex_search(), regex_match(), and regex_replace() make it easy to apply that expression to a string. As you can probably deduce from the algorithm names, regex_search() tells the developer if the string contains any substrings that conform to the expression, regex_match() tells if the entire string conforms to the expression, and regex_replace() provides an easy way to change the string to fit a particular format. Regex can do quite a bit more than I’ve outlined here, but this should give you an idea of what regex is all about.
Well, that’s about all I wanted to say here. If any of this kindles some new interest in TR1 in any of the vcblog readers, great! Of course, any comments or questions that you might have will be appreciated.
Rob Huyett, VC Libraries Team
"Most of it will likely find its way into the new standard, but in the meantime it provides a useful stepping stone toward C++0x."
Or in the "meantime", perhaps you could complete C++98 (ISO/IEC 14882:1998) from almost a decade ago. Hint: export (it’s one of the keywords).
Sure, I’m looking forward to future versions of the standard as well, but there are still past standards that are not fully implemented even years after their finalization.
I must agree with the above poster. Please fully implement the previous standards before working on future ones.
Last month it was clarified that Microsoft would be licensing the Dinkumware implementation of TR1. So, what exactly is the Microsoft VC Libraries team doing?
If I really needed TR1, I could always license it from Dinkumware myself. Or I even code it myself. What I absolutely can not do myself is fix / implement non-conformant language issues such as export, two-phase name lookup, or exception specifications. A growing proportion of various customers’ code bases can no longer be compiled with Microsoft’s non-conformant compilers, often forcing me to use more conformant compilers such as those from Comeau and Borland. I really do want to use VC++, so please give me a better option.
Please fully implement existing standards before jumping on future ones.
I just wanted to add some notes regarding export:
IIRC it isn’t a decade old and is an extension to C++98. There are only few compilers supporting this feature and AFAIK all of them are using the EDG front end. IIRC there has been only a beta of an experimental EDG front end based compiler from Borland, the current one doesn’t support export – AFAIK.
It’s been said that to implement export 2-3 man-years of implementation are needed and I think it doesn’t pay off to invest that much time. I rather would see VC support a C++ module concept as soon as possible, when it becomes part of the C++ standard.
IMHO export is rarely used, but this may be only my impression. I would agree to add export support to VC too, if export would be widely used by other compilers / source code.
(Please correct me if I’ve made some wrong statements – I’m not 100% sure about all of them)
Forget about "export" – and do read . Sure, please do fix other bugs etc. before adding new features, but TR1 is welcome.
And Borland compiler having better C++ conformance than Visual C++? Heh, which planet are *you* living on?
I look forward to TR1.
If C++0x was introduced, I am very glad. More, overhaul the C compiler, it is such an antique, still a C89 compiler.
Yeah, boost has had this stuff for years.
And in the end, will Microsoft’s new libraries have significantly better performance, conformance, usability than already found in the boost libraries?
Why not just sign an agreement with boost to officially incorporate them into DevStudio and get back to trying to make C++ a first class language within the DevStudio environment. (WinForms must be the slowest GUI dialogs known to mankind!)
In my opinion, export templates should not be used even if support were available. Export model doesn’t add anything useful. Implementing this idiotic proposal voted for by accident will just slow down compilation speed.
boost tr1 implementation is rather poor according to
boost is a sandbox for c++ extension, not a solid library.
Btw, boost::shared_ptr uses virtual functions inside (to implement custom deleters). This is not compatible with explicit DLL loading (and unloading). shared_ptr holds a pointer with vptr pointing to address space of the DLL where shared_ptr was created. So, shared_ptr::~shared_ptr() crashes with AV if the DLL was unloaded.
Unlike people in comp.lang.c++.moderated, I cannot imagine another way to implement custom deleters, so I won’t be surprised that I won’t be able to use tr1 shared_ptr
"IMHO export is rarely used, but this may be only my impression."
Sure it’s rarely used. If we look hard, we can find the reason. Here’s the biggest reason:
"There are only few compilers supporting this feature and AFAIK all of them are using the EDG front end."
And here’s why people who have one of those compilers still can’t use export:
‘A growing proportion of various customers’ code bases can no longer be compiled with Microsoft’s non-conformant compilers’
(Someone else explained that part of it better in an earlier thread. When some of their customers use Microsoft compilers, they have to give their customers source code that can be compiled by Microsoft compilers.)
"More, overhaul the C compiler, it is such an antique, still a C89 compiler."
I second this, how about implementing C99.
‘A growing proportion of various customers’ code bases can no longer be compiled with Microsoft’s non-conformant compilers’
Hm, which compiler supports export, so that Microsoft’s compiler gets non-conformant with the source code written with this compiler? When I take a look at boost, how many workarounds there are in this library to get it compiled with the various compilers, I doubt that there is really a 100% conforming compiler. Even if there is, then all compilers should support the standard at this level too, to get really portable code without having to implement compiler workarounds.
If export would be easy to implement or add real value, then I think it would have been adopted much faster. But it hasn’t.
I’m still missing essential features in C++ more than export, e.g. delegates. They are now in TR1, but unfortunately only as a library extension.
I guess some details of the post disappeared due to angle brackets in HTML… <>
It’s a simple solution here, avoid the boost libraries and TR1 features until their equivalent have been in VC++ for 1+ years. Try to keep your development tool set free of things not included in your core development tool (VC++). MS has lacked VC++ progress since Visual Studio 6.0, not entirely due to MS, since the C++ standards committee did not progress the language much in the last 10 years. NB: Attempting to port just about anything from the UNIX/Linux open source camp to VC++ will be difficult.
I’ve addressed VC’s non-implementation of export, two-phase name lookup, and exception specifications before. However, I’d like to remind the minority of VC customers clamoring for these features that library developers/testers are not interchangeable with compiler front-end developers/testers. If you think that these features are valuable, and aren’t convinced by arguments otherwise, then you should make your concerns known – but to the relevant people. Commenting on posts by library testers like Rob (or library devs like myself) is highly unlikely to get you anywhere.
[Stephen]
> Last month it was clarified that Microsoft would be licensing the Dinkumware implementation of TR1.
Correct.
> So, what exactly is the Microsoft VC Libraries team doing?
2. We’re making TR1 play nice with /clr and /clr:pure.
3. We’re ensuring that TR1 compiles warning-free at /W4, in all supported scenarios. This includes switches like /clr, /clr:pure, /Za, /Gz, and the like.
4. We’re ensuring that TR1 is /analyze-clean.
6. We’re striving for performance parity with Boost. In some areas, we won’t get there for VC9 TR1 (hopefully, we should for VC10), but we’ve already made good progress. Thanks to MS’s performance testing (which Rob has been in charge of), we identified a performance problem in regex matching, which Dinkumware has sped up by 4-5x. And we’ve achieved performance parity for function.
7. We’re identifying select C++0x features to backport into TR1 – for example, allocator support for shared_ptr and function. While not in TR1, this is important to many customers (including our own compiler).
8. Because TR1 lives alongside the STL, we can. Dinkumware’s standalone TR1, and (to my knowledge) Boost don’t implement this optimization, because it is nonportable.
9. We’re implementing IDE debugger visualizers for TR1 types. I am secretly proud of how shared_ptr’s visualizer switches between "1 strong ref" and "2 strong refs".
Adding TR1 isn’t as simple as dropping new headers into VC\include and calling it a day.
> If I really needed TR1, I could always license it from Dinkumware myself.
That would cost you money and time. The VC9 TR1 patch will be distributed free of charge, with no effort required on your part.
And you really need TR1. Everyone does.
> Or I even code it myself.
You could – but I certainly don’t fool myself into thinking that I could write it from scratch – well, not without dedicating several years to it and nothing else.
[Akira]
> I look forward to TR1.
> If C++0x was introduced, I am very glad.
VC9 TR1 isn’t introducing any compiler changes.
[C0]
> TR1 is nothing to make a fuss. Since boost has these libraries for years.
If you are using Boost, that’s great. However, many companies are reluctant to use open-source code, even given Boost’s extremely permissive license and excellent reputation. Programmers at those companies will be really happy to have VC9 TR1.
[Vyacheslav Lanovets]
> boost is a sandbox for c++ extension, not a solid library.
I must disagree with this – I have found Boost to be extremely high quality, in terms of both correctness and performance. This isn’t too surprising, since the Boost developers are also world experts at library programming.
> Btw, boost::shared_ptr uses virtual functions inside (to implement custom deleters).
> This is not compatible with explicit DLL loading (and unloading).
I’m not sure what you’re saying here. If you create a shared_ptr with a custom deleter that lives inside a DLL that gets unloaded, of course the thing is going to blow up. Nothing can save you from that.
Or are you saying something else?
[Andre]
> When I take a look at boost, how many workarounds there are
> in this library to get it compiled with the various compilers
Boost is special because some of its libraries still attempt to provide limited support for VC6 (the Infinite Enemy of modern C++ programming – GCC 2.x was also bad, but it apparently doesn’t have eternal unlife). This is presumably highly appreciated by those people who still use VC6 despite the fact that Microsoft no longer supports it, and presumably contributes to Boost’s widespread use and reputation for working anywhere – but it certainly makes the implementation of those libraries more complex.
> I doubt that there is really a 100% conforming compiler.
There isn’t – but Comeau comes close. That’s one of their main selling points, after all.
> I’m still missing essential features in C++ more than export, e.g delegates.
> They are now in TR1, but unfortunately only as a library extension.
Unlike other languages, C++ prefers to provide powerful support in the core language for library implementation, instead of providing such functionality directly in the core language. This contributes to C++’s extreme flexibility and generality.
What you call "delegates" in the managed world, I call "bound functors". TR1 binders are psychotically powerful. You just have to learn a different way of looking at things.
[Greg]
> It’s a simple solution here, avoid the boost libraries and TR1
> features until their equivalent have been in VC++ for 1+ years.
I disagree – Boost is highly stable, and TR1 will be immediately usable – if not, Rob and I haven’t done our jobs (and since there will be a TR1 beta, you won’t either).
> Try to keep your development tool set free of things not included in your core development tool (VC++).
I’m trying to imagine how anyone could get anything done in C++ without third-party libraries, and failing.
> MS has lacked VC++ progress since Visual Studio 6.0
This is completely untrue.
VC’s compiler and library conformance has increased MASSIVELY from the blighted VC6, through VC7/VC7.1, to VC8. (VC9 further improved conformance, although not nearly as much as the 6 => 7.1 and 7.1 => 8 leaps).
> Attempting to port just about anything from the UNIX/Linux open source camp to VC++ will be difficult.
Some of that, it is true, is due to VC bugs and quirks. A lot of it is due to Unixisms. Try compiling GNU tar for Windows and see how far you get.
(On the other hand, other programs are equally at home on Unixes and Windows – my usual favorite being bzip2. Portable programming pays!)
Stephan T. Lavavej
Visual C++ Libraries Developer, working on TR1
[Stephan – What you call "delegates" in the managed world, I call "bound functors". TR1 binders are psychotically powerful. You just have to learn a different way of looking at things]
I know. But don’t you think the compiler couldn’t do it more efficiently, regarding performance of the code and compilation ?
Doesn’t mean that boost/TR1 implementation is bad, I only have the feeling that this is a basic feature, which should be implemented in the core language (too). The libraries are growing and growing, basically everything could be exported to a library, even a for loop. Yes, regarding compatibility libraries have true advantages, but TR1 needs highly compliant compilers anyway.
But anyways, very good work guys – keep it up. And (have) a good and happy new year.
[Just in case it happened: sorry for double posting]
[Andre]
> I know. But don’t you think the compiler couldn’t do it more efficiently
I do not think that the compiler could do it more efficiently. There is nothing magical about binding arguments to a functor, and TR1 should generate code which is as efficient as hand-written C++.
It so happens that the STL and TR1 are very high performance, although VC9’s optimizer isn’t perfect (data members frustrate the inliner).
> I only have the feeling that this is a basic
> feature, which should be implemented in the core
> language (too).
C++’s core language is already packed with features, and you want to add *more*?
There are very important things that need to be added to the core language which will improve expressiveness (variadic templates) and performance (rvalue references). Bound functors can already be implemented well in a library. C++ doesn’t need first-class functions like Scheme because classes can imitate functions – in fact, C++ encapsulates modifiable state better. And C++ doesn’t need delegates like managed languages because it has more powerful library support (templates, templates, and more templates).
Now, it turns out that true lambda functions can’t be implemented well in a library (Boost.Lambda was a good try, demonstrating the difficulty), so I’m eager to see them be added to C++0x. Merely binding arguments to existing functions is a simpler case, though.
> The libraries are growing and growing
As they should!
Actually, C++’s standard library is small compared to those of other languages (because the C++ standard library has a different focus). It has room to grow, as TR1 demonstrates.
> basically everything could be exported to a
> library, even a for loop.
Like std::for_each().
There’s a reason that I’m picky about terminology – "delegate" sounds special and fundamental, while "bound functor" sounds like a modification of a special, fundamental thing (and indeed, functors are powered by core language features – operator overloading, templates, and the like).
There *are* compromises in TR1 that really want to be part of the core language, but can’t in a library-only addition. result_of is the best example.
Stephan T. Lavavej
Visual C++ Libraries Developer
[Stephen]
"If you think that these features are valuable, and aren’t convinced by arguments otherwise, then you should make your concerns known – but to the relevant people. Commenting on posts by library testers like Rob (or library devs like myself) is highly unlikely to get you anywhere."
Actually the Visual C++ leadership team reads all the comments on all the vcblog entries. We take these comments into account as one of the factors when we make decisions on the evolution of the product.
Ronald Laeremans, Product Unit Manager
[Stephan]
"Boost is special because some of its libraries still attempt to provide limited support for VC6 (the Infinite Enemy of modern C++ programming"
IMHO, Microsoft did a great job in improving the C++ compiler since VC6 age. I don’t like very much the VC6 C++ compiler, but I do love the VC6 IDE!
I think that one of the reasons of VC6 success is that you in Microsoft developed a *great* IDE for VC6: the IDE is snappy and robust (it was so also in the old days of 64 MB of RAM and Pentium 150 MHz, it’s not that VC6 became fast only with big powerful hardware of these days).
The edit-and-continue does work.
The help system integrated in VC6 does work: you find what you are looking for (it’s not like the help system of VC7 or later… I prefer using Google to find things on MSDN :(
VC6 + Visual Assist X + WndTabs is a great development experience, IMHO.
There was a very interesting thread on Mr. Somasegar’s blog about VC6 and VC++ improvements, and I believe that you already read that:
So, what I just ask of the VC++ Team is to continue the *great* work they have already done with the C++ compiler and libraries improvements, but also work like this on the IDE.
Best wishes for the new year!
Giovanni
[Stephan – I do not think that the compiler could do it more efficiently]
Ok. I’ll take your words ;-).
Currently a simple member function call with 1 parameter in boost needs 30 assembly instructions with 3 jumps.
The same call in C++/CLI needs 5 assembly instructions and 1 single jump.
(And it’s managed code)
[Stephan – it has more powerful library support (templates, templates, and more templates)]
Once I’ve been in love with them and I’ve been a hardcore C++ developer using them even for meta template programming. But since I’ve seen that I can be (much much) more powerful and have a higher abstraction level with code generation on a higher level I don’t find them that >worthwhile< anymore.
Andre
[Ronald Laeremans]
> Actually the Visual C++ leadership team reads all
> the comments on all the vcblog entries.
Aha! I didn’t realize you read everything. That is cool.
(For those following along at home, Ronald is my great-grandboss, the manager for all of VC. Our org actually has a new shiny name now, but that’s basically what it is.)
[Sys64738]
> IMHO, Microsoft did a great job in improving the
> C++ compiler since VC6 age. I don’t like very much
> the VC6 C++ compiler, but I do love the VC6 IDE!
You and everyone else on the planet, as far as I can tell! :-)
The IDE team has definitely gotten your feedback, and they’re working hard to make VC10’s IDE better. We recently got T-shirts with the slogan "10 is the new 6", heh.
[Andre]
> Currently a simple member function call with 1 parameter
> in boost needs 30 assembly instructions with 3 jumps.
> The same call in C++/CLI needs 5 assembly
> instructions and 1 single jump.
Hm, that’s not good. What exactly are you doing? Are you simply binding a member function to an object and invoking it later (this would look like bind(&foo::bar, obj, arg)), or are you introducing a boost::function into the mix?
Send me a self-contained repro at stl@microsoft.com, and I’ll see how VC9 TR1 handles it. We might be able to do something on the libraries side better, or at least I’ll be able to file an optimizer bug. Fundamentally, native code should never be less efficient than managed code.
I do know that the optimizer can’t see through data members (e.g. the one that powers mem_fn()), and I’ve already got a bug open about that. This might be the same thing, or something different – either way, it’ll be good to get a repro.
Stephan T. Lavavej
Visual C++ Libraries Developer
[Stephan]
"The IDE team has definitely gotten your feedback, and they’re working hard to make VC10’s IDE better. We recently got T-shirts with the slogan "10 is the new 6", heh."
That is *great* news! :)
And I do like that slogan!
Thanks,
G
[Stephan – Hm, that’s not good. What exactly are you doing?]
Sorry, my fault. I wasn’t aware that boost::function has that much more overhead than direct binding. Forgot one of the main performance rules: first look at the implementation ;-).
Without boost::function everything is as it should be:
The C++ optimizer even removed the call. Quite hard to write short test/reproduction code, if most of the code is removed ;-).
Thank you for the hint and sorry for the misinterpretation.
Can’t (now) wait to get hands on TR1 and are much excited about the next "10 is the new 6" release *g*.
[Andre – Sorry, my fault]
Sigh, forget my last post. Got myself confused and no bind was involved. I need a function object, to write a generic callback function, correct me if I’m wrong.
Sample:
class test { public: void cb(int i) {} };
test t;
boost::function<void (int)> foo = boost::bind<void>(&test::cb, t, _1);
foo(100);
This C++ code needs the 30 assembly instructions. While the comparable C++/CLI code:
ref class Test { public: void cb(int i) {} };
delegate void CB(int i);
Test^ test = gcnew Test();
CB^ foo = gcnew CB(test, &Test::cb);
foo(100);
Needs "only" 5 for the function call.
Did I get something (again) wrong, or is the managed version really more efficient?
Stephen and the VC++ team,
I advocated keeping your project free, when possible, of dependencies on external libraries to reduce the project’s cost and risk over its life cycle. This is shooting for a 5+ year life cycle between major upgrades or rewrites.
This is much more true for non-GUI code than it is for GUI code.
The compiler and libraries have made significant security improvements in the CRT since 6.0. Much more useful changes include better compiler diagnostics and run time security checks.
I appreciate MS’s effort in VC++ since 6.0.
New features/libraries are not put into production use for about the first year for the same reason why our production machines do not get an OS upgrade until at least service pack 1 is released.
This works out OK because our release cycle is about 6-8 months so we can develop using new features a few months before production use.
Our environment is such that new tools, libraries, build tools, plug-ins, etc. need to get approval from our technical advisory team before being used.
This is the result of the ease of which each developer brought in his favorite set of open source/low cost tools for his application during 1996-2005. This turned over with each developer and about every two years. This proliferation and staff turnover led to us having tens of tools that products relied on but no-one had seriously used, were unsupported or nearly dead. FWIW, we have C++, C#, VB6, VB.NET, .NET 1.x, 2.x, classic ASP, ASP.NET, C as languages for production systems.
[Andre]
> Sorry, my fault. I wasn’t aware that boost::function
> has that much more overhead as direct binding.
This is a fundamental consequence of how boost/tr1::function works. It "forgets type" (wrapping a functor of arbitrary type with a given signature, inside a type that depends only on the signature), as if by using virtual functions, but virtual functions are incompatible with inlining. Even when virtual functions themselves aren’t used, you’re still going to incur some penalty because of other overheads.
> boost::function<void (int)> foo = boost::bind<void>(&test::cb, t, _1);
> Did I got something (again) wrong
Yes, you’re unnecessarily using a boost::function.
(Note that you can say boost::bind(stuff) instead of boost::bind<void>(stuff) here, although it doesn’t make a difference in terms of the generated code. I suggest avoiding explicit return types unless they are necessary.)
To clarify:
boost/tr1::function’s purpose is to provide "insulation" by forgetting type. Ordinarily, the only way to make an algorithm accept functors of arbitrary type is to template the algorithm on the functors, but that can lead to lots of template instantiations. Some things make it even more difficult – consider an object that wants to be constructed from a functor of arbitrary type. Now you’d need to template the object on that functor type, and anything taking that object needs to be a template too (if full generality is desired), etc. (Think of boost::thread here.) Also, containers have to be homogeneous, so you can’t fill a container with functors of different types.
Instead, you can insulate these functor-taking algorithms and functor-containing objects/containers from the exact types of the functors by introducing a boost/tr1::function. Now, your algorithms and containing objects can be non-templates (or, at least, not templated on that functor type), since the type variability is handled by the boost/tr1::function. This introduces a small overhead (after all, you’re forgetting type – "dynamic" languages do this all the time so it’s easy to forget about, but C++ is capable of both compile-time and run-time polymorphism, and only the latter forgets type and introduces overheads), but it can be very useful and lead to efficiency gains elsewhere (e.g. because now you have only one implementation of the algorithm, which fits into instruction cache, etc.).
boost/tr1::function has a second purpose, but this one is not fundamental. It’s a convenient way to store a functor whose type is inconvenient to say. In C++03, the return types of mem_fn() and especially bind() are difficult to say. You could dig into the implementation, but you’d end up with nonportable types (boost::detail, or Dinkumware’s _Bind, etc.). You could use tr1::result_of, but it would be kind of complicated (bind() is a template function, so you’d have to give it explicit template arguments). It’s a lot easier to just store the return value of bind() in a boost/tr1::function. However, that introduces overheads.
C++0x auto/decltype is the true solution for dealing with types that are inconvenient to say. Otherwise, you have a couple of choices:
1. Use result_of. However, if you’re using any placeholders (i.e. you’re doing incomplete binding), I don’t think this will work. As far as I can tell, there’s no way to say the type of _1, _2, etc. portably (without C++0x decltype). TR1 doesn’t provide a placeholder_type<N> (in VC9 TR1, the types are _Ph<N>, but this is nonportable). This is a very minor oversight on the part of TR1 (and since C++0x has auto/decltype, there’s no point in filing a Library Issue).
2. Hand the result of bind() to a template function. This is easy:
template <typename F> void foo(F f) {
// do stuff with f
}
foo(bind(stuff));
foo() will know the exact type of F, whatever it is, so you’ll make the inliner happy by not introducing any yucky virtuals. You may have to take the functor by reference instead of by value in order to avoid copying (C++03 does love copying stuff).
TR1 is, of course, an intermediate step in the evolution of C++ – it makes a lot of things easier to write, but some things require the full power of C++0x. Until then, awkward workarounds will occasionally be necessary.
Stephan T. Lavavej
Visual C++ Libraries Developer
[Stephan]
Thank you for the clarification and a happy new year.
My intention was it to decouple the sources and not using templates, besides for boost::function and to bind the member function.
As you already wrote, templates have a high coupling factor and they restrict other goodies to work, e.g. Intellisense.
Handing the bind result to a template function helps me not that much IMHO, because I cannot store the pointer and since I don’t want to use the additional lambda goodies in this case I could also use a template function without bind:
template <typename F, typename T>
void C(F f, T& p)
{
(p.*f)(100);
}
Auto perhaps won’t help either, because AFAIK it can’t be used as a class member, since it must deduce the type from the return type of a function.
So, I think we agree here, for the "simple task" of storing a member function, not bound to a special class type, in an ordinary non templated class I have to use boost::function ?
Currently, simply speaking, the managed implementation is 6 times more efficient regarding code complexity on the assembly level than the native one.
We will see if variadic templates and other C++0x features will help to make the code more efficient.
I’m still not convinced that a core implementation of a delegate type *and* the library extension in combination couldn’t be more efficient. In the managed version it obviously is. I’m only nitpicking on this, because C++ always claims to be (most) efficient in code generation.
—————————————-
By the way: I had a look at the preprocessor output of my sample code:
Managed: 46 lines
Native: 121000 lines
And the native output has many, many empty lines, just wondering (in case that’s not only for some kind of optimization and other devs responsible for the preprocessor read this post).
Andre
[Andre]
> As you already wrote, templates have a high coupling factor
What?
Templates *decouple* code by generalizing over types.
> and they restrict other goodies to work, e.g. Intellisense.
That’s Intellisense’s deficiency.
> I could also use a template function without bind
This is similar to a handwritten binder, which I suggest as another workaround.
The problem with handwritten binders is that they involve more typing (they aren’t particularly brittle, just tedious). The problem with bind() in C++03 is that it’s difficult to say its return type. (The absence of perfect forwarding is another problem, but not relevant here.) If you can’t use bind(), fall back to handwritten binders.
> Auto won’t perhaps help too
decltype will.
> So, I think we agree here, for the "simple task" of storing
> a member function, not bound to a special class type, in an
> ordinary non templated class I have to use boost::function ?
No.
> Currently simply speaking the managed implementation is 6
> times more efficient regarding code complexity on the
> assembly level than the native one.
That’s because you’re not performing a proper comparison.
> C++ always claims to be (most) efficient in code generation.
Only when you write efficient code to begin with.
> I had a look at the preprocessor output of my sample code
Templates go in headers, and the Standard Library has a lot of templates. Also, for ease of implementation, VC’s Standard Library headers include each other more often than if they tried to avoid doing so.
(I thought about it some more, and I don’t think result_of can be used to work around the absence of auto/decltype here.)
Stephan T. Lavavej
Visual C++ Libraries Developer
I wish compiler diagnostic and debugger support for TR1 will also be on a high level. I don’t want to land in the middle of some macro expansion while debugging some tr1::function/bind code.
Does the TR1 in VC9 use the same preprocessor tricks as boost does?
[Stephan]
[What? Templates *decouple* code by generalizing over types]
Perhaps I’ve not expressed myself correctly. Coupling factor – meaning if I want to store a templated type, which has a template parameter, in a class as a member, I have to make the class containing this type templated too.
Since there’s no real separation between declaration and implementation for templated classes I have to include the implementation too.
Ever tried to separate template-based code via pimpl, or ship libraries of template classes with declaration and implementation separated, where the implementation code is compiled to a library/dll?
In the sense of a template type can be everything you are right.
So let me rewrite my post:
Templated classes tend to make the classes containing them templated too, if the template type is not or cannot be specified directly.
[That’s Intellisense’s deficiency]
Don’t think so. For Intellisense to function, constraints are needed. Otherwise how should Intellisense know which type the template parameter is, when I write the implementation code of the template? It could be any type.
[decltype will]
Not with an ordinary non templated class. Or can you give me an illustrative example – for the code below.
[No]
So please give me an example – rewrite the following code:
class CallbackHolder
{
boost::function<void(int)> myCallback;
};
By replacing boost::function with auto or decltype and without converting the class CallbackHolder to a templated one.
With a delegate type directly supported by the C++ core I could write:
class CallbackHolder
{
delegate<void(int> myCallback;
};
[That’s because you’re not performing a proper comparison]
What would be a proper comparison ? I want to use the style boost::function allows me to use. I want to use it in a simple non templated class and perhaps export this class and ship the implementation code compiled to a dll.
[Only when you write efficient code to begin with]
And if C++ allows me to do ;-).
[Templates go in headers, and the Standard Library has a lot of templates]
Ups. Templates should decouple better ;-)
[I thought about it some more, and I don’t think result_of can be used to work around the absence of
auto/decltype here]
But you could give me / us a sample for my class CallbackHolder anyways, how you would write it by using auto/decltype
*without* using templates.
Andre
Senior C++ software engineer
[peter]
> I wish compiler diagnostic and debugger support for tr1 will be also on high level.
Diagnostics: The compiler errors you’ll get from misusing TR1 will be more or less equivalent to those you’ll get from misusing Boost. (In the cases where compiler errors are triggered at the TR1 interface, e.g. trying to construct a shared_ptr implicitly from a raw pointer, you’ll get identical diagnostics modulo namespaces. Compiler errors that are triggered within the implementation may vary. Without concepts, there is little that can be done at the library level to make template error messages less hideous-looking.)
Debugger: VC9 TR1 will come with extensive IDE debugger visualizers for almost all TR1 types, including some new visualizers for STL types that should be helpful for TR1 users (e.g. plus<T>() will be visualized as "plus", to make bound functors easier to look at).
> I don’t want to land in the middle of some macro expansion during debugging some tr1:function/bind code.
> Does the tr1 in vc9 use the same preprocessor tricks as boost does?
Without looking at the Boost implementation, I can’t perform a direct comparison (you can, when the VC9 TR1 beta is released very soon). I would characterize VC9 TR1 as using "some" but not "a whole lot" of macros. Most of TR1’s implementation is similar to the STL macro-wise. The exception to this is those parts of TR1 that have to simulate variadic templates; this requires lots of preprocessor machinery (the only thing worse than lots of macros here would be no macros). In particular, tuple, bind, and the other functional stuff are powered by non-idempotent headers and the like.
[Andre]
> Coupling factor – meaning if I want to store a templated type,
> which has a template parameter, in a class as member, I have
> to make the class containing this type templated too.
Only if you want to propagate the generality to the containing class. stack<T> contains a deque<T> (by default), but you can also have an Image containing a vector<unsigned char>.
This particular case is special because TR1 leaves some types unspecified. Usually, you can look at a function (even a function template) and easily figure out what it’s going to return.
> Don’t think so. For Intellisense to function constraints are needed.
Okay – let’s call it a consequence of how Intellisense and templates interact (in the absence of concepts).
> Or can you give me an illustrative example – for the code below.
decltype allows you to say "give me the type of this expression", where the expression can be something like a + b, or a function call. This is something that C++03 can’t do, and it’s exactly what you’re wanting to do here.
I don’t have a decltype-supporting compiler, but you can read the decltype papers for more details.
auto allows you to say something similar, "give this thing the same type as its initializer".
> By replacing boost::function with auto or decltype and without converting the class CallbackHolder to a templated one.
That’s asking for type forgetting without any overhead, which can’t be done. Functors in C++ can contain arbitrary state. Even pointers to member functions have to be invoked differently depending on whether the inheritance is virtual or not, etc.
According to my extremely limited understanding of managed delegates, they work by restricting themselves to binding only "pointers to member functions" (even if that’s not what they call them) to "pointers to objects" (also even if that’s not what they call them), and they’ve restricted inheritance so that you can call everything in a uniform way. That way, branches can be avoided. I might be completely wrong about that.
C++ works in a different way – it allows great diversity in functors (pointers to member functions with no inheritance, single inheritance, virtual inheritance, or pointers to data members, or pointers to functions, or full-fledged function objects, stateless or stateful), and relys on inlining being done through templates to produce efficiency equivalent to handwritten C. If you demand that something be a non-template, yet cope with so many different functors, you’re going to introduce overheads.
> What would be a proper comparison ? I want to use the style boost::function allows me to use.
boost::function is extremely simple to use, and permits separate compilation in cases where it was previously difficult or impossible. It does, however, involve small overheads. Here, you are looking at the overheads to the exclusion of everything else, which is why it appears to be a big deal.
By "handwritten binder", I mean a functor that is templated on class type, parameter types, and return type, which stores a pointer to member function, an object, and an argument (you can make the object free, or the argument free, or neither for full binding). You can’t use this for type forgetting – the bound functor type depends on the class type – but it also doesn’t propagate templateness (something storing a bound functor need not be a template).
Stephan T. Lavavej
Visual C++ Libraries Developer
[(in the absence of concepts)]
O.k. C++ names them concepts. Sorry get always confused with C++/CLI, C# and C++.
[decltype allows you to say …]
Well I want to express, store a member function pointer at this location to a function with the signature X.
[binding only "pointers to member functions"]
I’ve not discovered any restrictions of delegates yet. But anyways you are correct, managed code doesn’t support multiple inheritance of classes. Only of abstract interfaces.
So they don’t have the problems, introduced by C++ supporting multiple inheritance.
There are other proprietary delegate extensions of other compiler vendors, which also restrict multiple inheritance, to support delegates. VC also has extensions, though IMHO not directly comparable.
[…you’re going to introduce overheads.]
Yes, at least the delegate would have to be large enough to store the largest member function pointer, which is larger as the most simple member function pointer: this + codeptr.
[of everything else, which is why it appears to be a big deal]
Yes, agreed, it’s not that simple for a (C++) library – it would be rather hard to implement this for all possible cases.
And yet there are some implementations, which come quite near to an ideal C++ delegate implementation, which supports the syntax of boost::function, avoiding it’s runtime overhead.
But they have to use dirty hacks, where a little compiler (C++ core) support could help. E.g. Don C. has written such an implementation and article on the codeproject web site – fast delegate.
Generally std::function will be sufficient for me most times. But, this is where the discussion started, I think with a little standard/compiler help it could be implemented more efficient.
Andre
Stephan T. Lavavej [MSFT],
Sorry for the late response.
I was on vacation for new years.
>>).
As a developer, managing headers and libraries is one of the basic skills needed in our profession. It is not a difficult task. Hopefully you can accept that I can just as easily do that myself. Boost is just one major example.
> 2. We’re making TR1 play nice with /clr and /clr:pure.
/clr is not C++ (ISO/IEC 14882:2003). Nor is it part of the forthcoming C++0x, which TR1 is being targeted for.
While I have given try implementations of C++/CLR a chance, customers continually ask me to do managed development in C#, and mixed interop is essentially non-existent (usually for reasons of portability).
> 3. We’re ensuring that TR1 compiles warning-free at /W4, in all supported scenarios. This includes switches like /clr, /clr:pure, /Za, /Gz, and the like.
At least in regard to standard code (see above), that sounds like Dinkumware’s job. I seriously hope that warnings are not merely being pragma-ed off.
4. We’re ensuring that TR1 is /analyze-clean.
Sounds like Dinkumware’s job.
>.
Good, but again that sounds like Dinkumware’s job.
> 7. We’re identifying select C++0x features to backport into TR1 – for example, allocator support for shared_ptr and function. While not in TR1, this is important to many customers (including our own compiler).
As C++0x does not exist yet, I hope that you are limiting yourself to the currently accepted subset. However, key features such as export have existed in the standard for almost a decade now. Please implement the existing standards first.
>> If I really needed TR1, I could always
>> license it from Dinkumware myself.
> That would cost you money and time.
Yes, that is how business works.
However, while I *can* license Dinkumware’s implementation as needed, no matter how many years I plead and beg for standard compliance, I can *not* get Microsoft to implement the existing C++ standard. The biggest issues for me are 1) export, 2) two-phase name lookup, and 3) exception specifications.
>> Or I even code it myself.
> Are you a world expert at library
> programming?
You make it sound so difficult to build C++ libraries… I am sufficiently confident in my skills to implement required types as necessary for jobs.
I have used VC++ for many, many years now. However, as more and more customers are beginning to use standard C++ features such as export (etc), increasingly I can no longer compiler existing code-bases with non-conformant Microsoft compilers. Perhaps we do not need some of these features, but people *are* using them (for whatever reasons), and they are part of the existing standard. Often when I try to get around this with #defines I am opposed by senior developers who point me to the ISO/IEC 14882:2003 standard and tell me to replace the compiler if it is incompatible with the standard.
I really, really want to continue using VC++. I beg and plead with you for years, but while you do not deny that it is part of the existing standard, you ignore it and implement future standards. You make it so incredibly difficult to support your products.
"""The biggest issues for me are 1) export, 2) two-phase name lookup, and 3) exception specifications."""
This is starting to sound like kid’s quarrel[1], but i would change the order to: 1) two-phase name lookup, 2) exception specifications and 3) export. With export, one can always say: "so don’t use it". Not so with two-phase name lookup (unless you avoid templates completely, it’s hard to guess when it will happen, or when you forgot the keyword "typname", specially if you never use another compiler).
"""
>> If I really needed TR1, I could always
>> license it from Dinkumware myself.
> That would cost you money and time.
Yes, that is how business works.
"""
I appreciate the work that’s being done for TR1, because it’s the kind of library that should be included with compilers. If i were going to buy a library separately, i would forget TR1 and go directly to Qt, for example.
[1] Sorry, English is not my native language, but i think you’ll get the point
[Stephen]
>> 1. We’re integrating TR1 into VC9
> Hopefully you can accept that I can just as easily do that myself.
You misunderstand – adding TR1 to the Visual Studio product, right next to the Standard Library, involves more work than a standalone library (like Boost). There’s the Visual Studio build system to contend with (it took us a little while to get the TR1 separately compiled components exported from msvcp90[d].dll), and then the setup system (getting the new headers and sources picked up by the installer), etc. Unglamorous work, but work nevertheless.
>> 2. We’re making TR1 play nice with /clr and /clr:pure.
> /clr is not C++ (ISO/IEC 14882:2003). Nor is it part of the forthcoming C++0x, which TR1 is being targeted for.
Certainly. I am personally uninterested in /clr[:pure]; however, it is a VC feature, so TR1 must support it. And this took an unbelievable amount of work.
>> 3. We’re ensuring that TR1 compiles warning-free at /W4, in all supported scenarios. This includes switches like /clr, /clr:pure, /Za, /Gz, and the like.
>> 4. We’re ensuring that TR1 is /analyze-clean.
> At least in regard to standard code (see above), that sounds like Dinkumware’s job.
TR1 arrived mostly /W4-clean. However, Dinkumware hadn’t yet thrown (to my knowledge) exotic options like /Za and especially /clr[:pure] at TR1. Our test matrix identified these warnings so Dinkumware could fix them. Similarly, /analyze exposed several warnings.
> I seriously hope that warnings are not merely being pragma-ed off.
Generally, no – we preferred true fixes to workarounds to pragmas, in that order. And as usual, we disable warnings only in the TR1 headers, not in user code.
>> 5. We’re identifying bugs in TR1 and working with Dinkumware to fix them.
> Good, but again that sounds like Dinkumware’s job.
As I said, more eyes find more bugs. It’s our job to find bugs, and Dinkumware’s job to fix them. (Independently, they also find and fix bugs.)
>> 7. We’re identifying select C++0x features to backport into TR1
> As C++0x does not exist yet, I hope that you are limiting yourself to the currently accepted subset.
Anything that has been voted into the Working Paper is almost certainly going to make it into the C++0x standard.
> You make it sound so difficult to build C++ libraries… I am sufficiently
> confident in my skills to implement required types as necessary for jobs.
The more generic the library, the more skill it takes to implement. Application and OS developers aren’t library developers, and they don’t have the same skills. TR1 is an extremely generic library, and extremely difficult to implement.
(Within MS, I’ve seen a half-dozen smart pointer implementations, all deficient in one way or another. My hope is that shared_ptr will sweep them all away.)
[ikk]
> Not so with two-phase name lookup
It’s actually "pretty hard" to trigger two-phase name lookup (such that it’ll make a difference), if you follow a certain style of code organization. Yes, that’s rather vague.
> or when you forgot the keyword "typname"
Requiring "typename" is completely unrelated to two-phase name lookup, and VC implements the "typename" rules pretty well. I’m sure that there are bugs that aren’t coming to my mind right now, but VC definitely enforces this rule in most situations where it should be enforced.
Perhaps you’re thinking of unqualified name lookup reaching into dependent base classes (it shouldn’t, says the Standard) – this rule is related to two-phase name lookup, although not actually part of it. VC actually enforces this rule under /Za.
Stephan T. Lavavej
Visual C++ Libraries Developer
ikk,
> With export, one can always say: "so don’t use it".
Only if one owns all of the code themselves. However, when I work with various customer codebases, increasingly they are already–for whatever reasons–using export. Needless to say, this will not compile with any existing VC++ compiler. When I try to either 1) rewrite it without using export or 2) make a special VC++ version with #ifdef, I am often opposed by other developers who point me to the ISO/IEC 14882:2003 standard and tell me to replace the compiler if it is incompatible with the standard rather than replace good code. And that is precisely what I have had to do. More and more I am being forced to use Comeau. Comeau is a great compiler, but I would like to continue using VC++. However, it is less and less possible.
"so don’t use it" is often not an option.
”’Requiring "typename" is completely unrelated to two-phase name lookup”’
”’Perhaps you’re thinking of unqualified name lookup reaching into dependent base classes”’
(snip)
”’VC actually enforces this rule under /Za”’
Sorry, i wasn’t sure and mixed a few concepts (i wasn’t very clear either, mainly because i didn’t feel the need to be very specific).
But i did think about "unqualified name lookup reaching into dependent base classes" too.
Last time i tried /Za (a long time ago, under VC++.NET 2002), i triggered a documented bug and never tried /Za again.
If /Za is working OK (and if standard headers compile cleanly under /Za), i will give it a try again. Thanks for the info. :-)
[ikk]
> If /Za is working OK (and if standard headers compile
> cleanly under /Za), i will give it a try again.
The Standard and TR1 headers should compile cleanly under /Za. If they don’t, that’s a bug. (Warnings and compiler errors occasionally creep in – the unqualified-name-lookup-reaching-into-dependent-base-classes thing is really easy to forget – but we now have pretty good test coverage.)
However, /Za is an obscure option. It doesn’t get a whole lot of testing (on both the compiler and library sides), doesn’t do a whole lot of stuff, and isn’t really being actively developed. The front-end devs I know recommend against it, as it could do more harm than good.
For portable code, I simply suggest using multiple compilers regularly. Then you’ll get the union of their conformance checks (of course, you’ll also have to deal with the union of their bugs).
Stephan T. Lavavej
Visual C++ Libraries Developer
TR1 is a set of additions to the standard library of C++9x (not 0x as the article states) and should become part of C++0x (as the article states correctly).
TR1 is important as it brings new library features to C++ that don’t require a language (and therefor a compiler) change. Thanks and kudos to Microsoft for being quick to bring TR1 to Visual C++.
However, I should like to add my voice to those clamouring for support for export. The fact that export offers an opportunity to increase separation between the definition and the declaration of code templates means that it is a valuable tool in enabling separation between compilation units and so in managing build times. Even if all the other benefits offered by export turn out to be illusory this one is worth every bit of the development effort that its implementation might require (a couple of lousy man-years is nothing to argue about, really). Yes, Daveed Vandevoorde’s proposal for modules offers more, but we will have some years to wait before that is standardized, let alone available in the compilers on our desktop.
Exception specifications are another matter. The C++ syntax is such that checking of exception specifications at compile time is not possible (a pointer-to-function doesn’t have an exception specification, for example, so a call through a pointer cannot be checked), and automatically checking at runtime violates the C++ convention that you should not pay for what you do not use. Exception specifications are essentially useless in C++ and should be removed from the standard.
> I’m still not convinced that a core implementation of a delegate type *and* the library extension in combination couldn’t be more efficient.
"Delegates" (lambdas) will likely be in C++09, and from the looks of them they are going to be just as efficient as .NET delegates.
Stephen and others – we really do not need "export". Please reade complete N1426 to understand why. If I had a choice, I would vote to remove it entirely from the standard. Not only does not buy us anything, but also comes with difficult to understand problems related to names visibility and makes implementation of other C++ features (present or future) needlessly expensive. Yes, it’s good to have another tool to improve code separation/decoupling, IF that tool works.
If I install this beta, will I be able to uninstall it / update it to final without problem??
PS: I agree with those that say that improving standards conformance — particularly for straight C — is more important to me than other new features. The workarounds in boost give a good indication of how far VC++ is away from standard C++. I am not particularly interested in .NET features.
[jrp]
> If I install this beta, will I be able to uninstall
> it / update it to final without problem??
I am told that uninstallation works, which will get you back to VC9 RTM so that you can apply the final patch. I would suggest NOT trying to apply the final patch over the beta patch.
Stephan T. Lavavej
Visual C++ Libraries Developer
———————–
[Stephan]
> -…
———————–
What an arrogance !!
A good indication why VC is not progressing further and letting the C# and all its peers take precedence.
Well, what if we ask the same thing? "if the world experts, namely the so called Dinkumware, are doing better things that you cannot barely even "understand" – what is Microsoft Doing? Why are you with Microsoft? Why do not you join the "World experts"??
Don’t try to underestimate your readers, and in turn pull-down your own company. Remember that your company, Micrsoft, has its foundations built on top of those very people you are criticizing with questions such as "Are you a world expert at library programming"….
If we are not world experts at library programming, you are not either (because this is not "your" library) !! Let us all praise the Dinkumware !!
So much for an arrogant. Yuck.
Hello managers – what are you doing when one of your reports is criticizing your customers and bringing down your company values in public?? May be its a time you either give him a break or train him to be respectful. Seems he has forgotten the very core values of being open and respectful.
Reading this list, I do wonder what MS’ priorities are.
While it is nice to have a performant IDE, it would be good to deepen standards conformance (for C as well as C++). For example, libsndfile does not compile with VC++.
It would also be good to update OMP support and improve vectorisation (with the advent of multi-core cpus). We’re now on OMP 2.5 and 3.0, whereas VC++ is only 2.0.
More pertinently to this thread, the use of parallelism in the standard library would be most welcome. See what the gcc guys are up to.
Unless the strategy is to leave the parallel field to Intel?
Soma annonce sur son blog le support du TR1 dans Visual Studio 2008 . Vous pouvez télécharger la béta, | https://blogs.msdn.microsoft.com/vcblog/2007/12/26/just-what-is-this-tr1-thing/ | CC-MAIN-2018-09 | refinedweb | 10,484 | 63.19 |
I am able to compare Strings fine, but would like to know how I can rank floating point numbers?
getChange() returns a String. I want to be able to sort descending. How can I do this?
UPDATE:
package org.stocktwits.helper;
import java.util.Comparator;
import org.stocktwits.model.Quote;
public class ChangeComparator implements Comparator<Quote>
{
public int compare(Quote o1, Quote o2) {
float change1 = Float.valueOf(o1.getChange());
float change2 = Float.valueOf(o2.getChange());
if (change1 < change2) return -1;
if (change1 == change2) return 0; // Fails on NaN however, not sure what you want
if (change2 > change2) return 1;
}
}
This method must return a result of type int ChangeComparator.java
Read the javadoc of
Comparator#compare() method.
Compares its two arguments for order. Returns a negative integer, zero or a positive integer as the first argument is less than, equal to or greater than the second.
So, basically:
float change1 = o1.getChange(); float change2 = o2.getChange(); if (change1 < change2) return -1; if (change1 > change2) return 1; return 0;
Or if you like conditional operators:
return o1.getChange() < o2.getChange() ? -1 : o1.getChange() > o2.getChange() ? 1 : 0;
You however need to take account with
Float.NaN. I am not sure how you'd like to have them ordered. First? Last? Equally? | https://codedump.io/share/xqpleHdmxDSK/1/help-comparing-float-member-variables-using-comparators | CC-MAIN-2018-26 | refinedweb | 210 | 69.48 |
After you upgrade a Microsoft Windows NT 4.0 Primary domain controller or member server to Microsoft Window 2000, the Domain Name System (DNS) suffix of the computer name of the new domain controller may not match the name of its domain. When this problem occurs, you may also experience a variety of other symptoms.
Typically, this problem occurs when the original release version of Windows 2000 is installed on a Microsoft Windows NT 4.0 domain controller that has a DNS suffix defined in the Network control panel item.
To resolve this problem, upgrade the domain controller to Windows 2000 with the latest service pack or to Windows Server 2003. Alternatively, you may use one of the other methods that this article describes.
Typically, this problem occurs when the following conditions are true:
- You install the original release version of Windows 2000 on a Microsoft Windows NT 4.0 domain controller.
- A DNS suffix is defined in the Network control panel item of the domain controller.
Symptoms
After you upgrade a Windows NT 4.0 Primary domain controller or member server to Windows 2000, the DNS suffix of the computer name of the new domain controller may not match the name of its domain.
Additionally, you may experience one or more of the following symptoms:
Note After Active Directory has been installed on a member server, you cannot rename the computer on the Network Identification tab of Computer Management properties.
- Active Directory replication does not succeed.
- The File Replication service (FRS) stops responding.
- When you try to join a computer that is running Microsoft Windows XP Professional to the domain, you receive an error message that is similar to the following: A domain controller for the domain DomainName.local could not be contacted. If you click Details in the message window, you see text that is similar to the following: DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain DomainName.local. The query was for the SRV record for _ldap._tcp.dc._msdcs.DomainName.LOCAL
- You cannot log on to the domain.
- When you try to install Active Directory on another member server, you receive an error message that is similar to one of the following messages:
Message 1: The specified domain either does not exist or cannot be contacted
Message 2: A Service Principal Name (SPN) could not be constructed because the provided hostname is not in the necessary format
Message 3: The Directory Service failed to create the server object for CN=NTDS Settings,CN=CLIENT01,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=Contoso,DC=com on server DC01. Please ensure the network credentials provided have sufficient access to add a replica.
Message 4: The operation failed because: failed finding a suitable domain controller for the domain contoso.com. The specified domain either does not exist or could not be contacted.
- You receive the following errors when you try to use any Active Directory MMC snap-in:
Message 1: Naming information cannot be located because: The logon attempt failed
Message 2: Naming information could not be located because the object name has bad syntax
- The following events are logged in the System log of a client, member server, or domain controller:
- The following events are logged in the Application log of a client, member server, or domain controller:
- You receive the following error message when you install the Recipient Update Service (RUS) in Microsoft Exchange Server: Only one instance of the Recipient Update Service can update a Domain Controller and all Domain Controllers on contoso.com are being updated. ID No: c1039c6c.
- In Microsoft Exchange 2000, the Microsoft Exchange System Attendant service does not start, and the following event is logged in the Application log:
- You receive the following error message when you try to use the SetSpn command-line tool: Requested name "contoso\DC01$" not found in directory.
- Pre-Boot Execution Environment (PXE) clients do not authenticate, even when you use valid domain administrator credentials. When this problem occurs, the Logon Error page in the Client Installation Wizard shows the following information: 00004e28.OSC error - The System cannot validate your User Name Password or Domain
The system cannot validate your user name, password, or domain name. Verify that your user name and domain name are correct, and then retype your password. Passwords must be typed using the correct case. Be sure the CAPS LOCK key is not pressed.
- When you set up a Mobile Information Server (MIS) server, you receive an error message after you enter the password for the message processor. Additionally, the following event is logged in the Application log:
- When you run the Active Directory Migration Tool (ADMT), the following error is logged in the Migration.log file:2002-01-23 15:00:34 ERR2:7422 Failed to move object CN=Jsmith, hr=8009030d The credentials supplied to the package were not recognized
- The Domain Controller Diagnostic Tool (Dcdiag.exe) reports the following errors:
- Starting test: NetLogons
* Network Logons Privileges Check
[DC01] An net use or LsaPolicy operation failed with error 1231, The network location cannot be reached
- Starting test: MachineAccount Could not open pipe with
[DC01]:failed with 1231: The network location cannot be reached. For information about network troubleshooting, see Windows Help. Could not get NetBIOSDomainName Failed can not test for HOST SPN
- When you use the Small Business Personal Console or Active Directory Users and Computers to create users, and then you mailbox-enable the user, the following problems occur:
- SMTP addresses are not generated.
- The user does not appear in the global address list (GAL).
- The following event is logged in the directory service event log:
- When you install Windows Services for Unix 2.0, you receive the following error message: error 26065 NIS Schema Upgrade Failed
Cause
These problems may occur when the following conditions are true:

- You install the original release version of Microsoft Windows 2000 on a Microsoft Windows NT 4.0 domain controller.
- A DNS suffix is defined in the Network control panel item of the domain controller.

When you install Windows 2000, the Windows 2000 Setup program automatically unchecks the Change primary DNS suffix when domain membership changes check box. Setup also sets the primary DNS suffix to the first suffix that is listed in the Network control panel item. After Active Directory is installed on a member server, the new domain controller tries to resolve the DNS records in the DNS zone that matches its primary DNS suffix.

This problem does not occur if one or more of the following conditions are true:

- The Windows NT 4.0 domain controller does not have a DNS suffix defined before the upgrade.
- You upgrade the Windows NT 4.0 domain controller to Windows 2000 with Service Pack 1 (SP1) or a later service pack.
- You upgrade the Windows NT 4.0 domain controller to Microsoft Windows Server 2003.

If DNS is correctly configured, Windows 2000 and Windows Server 2003 both support a disjoint namespace as a valid configuration. However, this configuration is frequently unintentional.
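As an illustrative sketch (not part of the original article), the dependency can be made concrete: clients locate a domain controller by querying an SRV record under the domain's DNS zone (the record format below matches the SRV query quoted in the Symptoms section), while the server registers its own host records in the zone named by its primary DNS suffix. The function names and the fabrikam.com suffix are hypothetical.

```python
# Illustrative sketch: why a primary DNS suffix that differs from the domain
# name sends the server's registrations to the wrong DNS zone.

def dc_locator_srv_record(domain: str) -> str:
    """SRV record name that clients query to find a domain controller."""
    return "_ldap._tcp.dc._msdcs." + domain.lower()

def host_record_zone(primary_dns_suffix: str) -> str:
    """DNS zone in which the server tries to register its host records."""
    return primary_dns_suffix.lower()

domain = "contoso.com"           # the Active Directory domain name
primary_suffix = "fabrikam.com"  # leftover suffix from the NT 4.0 settings

# Clients look here for a domain controller:
print(dc_locator_srv_record(domain))      # _ldap._tcp.dc._msdcs.contoso.com
# ...but the upgraded server registers itself here instead:
print(host_record_zone(primary_suffix))   # fabrikam.com
# The two zones differ, so the locator query cannot find the new DC:
print(host_record_zone(primary_suffix) == domain)  # False
```

Because the two zone names disagree, the locator records and the host records end up in different zones, which is why the SRV query quoted in the Symptoms section succeeds while joining the domain still fails.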
Resolution
To resolve this problem, upgrade the domain controller to Windows 2000 with the latest service pack or to Windows Server 2003. For more information about how to obtain the latest Windows 2000 service pack, click the following article number to view the article in the Microsoft Knowledge Base:
Alternatively, use one of the following methods:
Verify whether there is a disjoint namespace, and then fix the namespace. To do this, follow these steps:
If the DNS name has a single label, and your computer is running Windows 2000 Service Pack 4 (SP4), Windows XP, or Windows Server 2003, use the AllowSingleLabelDnsDomain registry entry to resolve the problem. For example, if the domain name is "contoso" and is not "contoso.com," the DNS name has a single label. For more information, click the following article number to view the article in the Microsoft Knowledge Base:
If there is a disjoint namespace, follow these steps to fix it:
Alternatively, use one of the following methods:
Method 1
- When you upgrade your computer to Windows 2000, quit the Active Directory Installation Wizard as soon as it starts.
- Click to select the Change primary DNS suffix when domain membership changes check box.
- Restart the Active Directory Installation Wizard.
Method 2:
Verify whether there is a disjoint namespace, and then fix the namespace. To do this, follow these steps:
- Right-click My Computer, and then click Properties.
- In the Properties dialog box, click the Computer Name tab.
If the DNS suffix of the computer name does not match the domain name, there is a disjoint namespace. The following three examples illustrate disjoint namespaces:
- Full computer name: dc01.fabrikam.com
Domain: contoso.com
- Full computer name: dc01.corp.contoso.com
Domain: contoso.com
- Full computer name: dc01
Domain: contoso.com
- DNS Host Name: dc01.fabrikam.com
DNS Domain Name: contoso.com
- DNS Host Name: dc01.corp.contoso.com
DNS Domain Name: contoso.com
- DNS Host Name: dc01
DNS Domain Name: contoso.com
- Type "ipconfig /all" at a command prompt and examine the DNS suffix to the right of "Connection-specific DNS Suffix."
If the DNS Suffix defined is different or invalid from the Domain: entry seen in the Computer Name tab of Step 2, follow these steps:
- Click Start, click Run, type NCPA.CPL, and then press ENTER.
This opens the Network and Dial-up Connections.
- Right-click Local Area Connection, and then click Properties.
- Highlight "Internet Protocol (TCP/IP)" and click Properties. Then, click Advanced on the General tab.
- Click the DNS tab and modify the suffix in the field to the right of "DNS suffix for this connection in DNS" to match the DNS Suffix of the Domain: entry seen in the Computer Name tab of Step 2. Or, uncheck the box to the left of "Use this connection's DNS suffix in DNS registration."
If there is a disjoint namespace, follow these steps to fix it:
- Log on to the domain controller by using an account that has domain administrator credentials.
- Paste the following code into Notepad. Then, save the file as Fixdomainsuffix.vbs.Note This script automatically modifies the following registry subkey:HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\ParametersThe following table lists the entries in this subkey.
- Double-click the file that you saved in step 2.
- Restart the domain controller.
More Information
To use a disjoint namespace, the DNS servers that are used by domain controllers, member servers, and clients must be able to resolve records in the following DNS zones:
- DNS zones that are the same as the fully qualified domain that the computer account resides in
- The primary DNS suffix zones that are defined in the forest
Properties
Article ID: 257623 - Last Review: Feb 2, 2010 - Revision: 1 | https://support.microsoft.com/en-us/help/257623/the-dns-suffix-of-the-computer-name-of-a-new-domain-controller-may-not-match-the-name-of-the-domain-after-you-upgrade-a-windows-nt-4.0-primary-domain-controller-to-windows-2000 | CC-MAIN-2017-13 | refinedweb | 1,796 | 52.49 |
Bill thanks for your response. The subprocess definitely runs when I don't try to connect to
the broker. Running a HTTPServer in it works too. It just doesn't connect to the qpid broker.
I did a little digging and I believe this is where it is hanging:
(In Connection class)
270 @synchronized
271 -<>
def attach<>(self):
272 """
273 Attach to the remote endpoint.
274 """
275 if not self._connected:
276 self._connected = True
277 self._driver.start<>()
278 self._wakeup()
279 self._ewait(lambda: self._transport_connected and not self._unlinked())
The parent Python process can connect successfully, but the self._transport_connected never
gets set to True for all of the new Connection objects created in the subprocesses that are
trying to connect to the same broker. Where does this get set to True?
Could this have something to do with the predicate because it doesn't return an error it just
times out?
212 -<>
def _ewait<>(self,
predicate, timeout=None):
213 result = self._wait(lambda: self.error or predicate(), timeout<>)
214 self.check_error<>()
215 return result
Thanks,
Taylor
________________________________
From: Bill Freeman [ke1g.nh@gmail.com]
Sent: Wednesday, August 07, 2013 2:00 PM
To: users
Subject: Re: Python Connections Hang from Subprocesses
Subprocesses (using the subprocess module, or even the older exec stuff, as
opposed to threads, or even forked clones) are relatively trouble free in
python (except maybe on Windows, whose process model has that Microsoft
difference). I've certainly made multiple connections to a broker from one
python process, as well as using tools like spout and drain, which are both
written in python, while my main development project is running connected.
Maybe there are broker configuration items which can affect this. If so, I
hope that someone knowledgeable will speak up. But I doubt that this is
the problem.
Are you sure that your subprocess runs? It might be trying to report an
error to you. Of, if you have pipes configured for interaction with the
invoking processor, it might be waiting on one of those.
You could, temporarily, instead of your intended code, have the subprocess
invoke something like BasicHTTPServer, and see if you can interact with it
using your browser. If that also fails, it leaves the broker connection
out as the source of your problems.
Possibly easier is to have it log its arrival at various points, so you can
be sure where it is getting stuck. If you've already confirmed that it's
in the broker connect, forgive me, and wait for a better answer.
Bill
On Wed, Aug 7, 2013 at 12:55 PM, Eagy, Taylor <teagy@blackbirdtech.com>wrote:
> Hello,
>
>
>
>
> | http://mail-archives.apache.org/mod_mbox/qpid-users/201308.mbox/%3CA0B2E669F0D2924A949F6D047BE17C2A1A438B98@RTC-EX-001.BLACKBIRD.BLACKBIRDTECH.com%3E | CC-MAIN-2016-26 | refinedweb | 443 | 66.44 |
Are you a startup?
Get BizSpark cloud access
Got MSDN?
Get up to $3,700 of cloud benefits
Don’t have MSDN?
Here’s cloud access.
We expect that it will be part of the next Visual C++ compiler and fully integrated in the next release of Visual Studio experience.
The announcement was made at AMD Fusion Developer Summit. More information is available on his blog post, Targeting Heterogeneity with C++ AMP and PPL.
Accelerated Massive Parallelism is integrated and supported fully in Visual Studio vNext. Editing, building, debugging, profiling and all the other goodness of Visual Studio work well with C++ AMP. AMP provides an STL-like library as part of the existing concurrency namespace and delivered in the new amp.h header file.
AMP builds on DirectX (and DirectCompute in particular) which offers a great hardware abstraction layer that is ubiquitous and reliable. The architecture is such, that this point can be thought of as an implementation detail that does not surface to the API layer.
For more information, see Daniel Moth’s blot post C++ Accelerated Massive Parallelism.
Soma.
Bruce D. Kyle ISV Architect Evangelist | Microsoft Corporation | http://blogs.msdn.com/b/usisvde/archive/2011/06/16/target-multiple-gpu-architectures-with-new-c-accelerated-massive-parallelism.aspx | CC-MAIN-2014-52 | refinedweb | 190 | 50.53 |
One of the most of the common reason for concurrency issues I often see in web application is due to concurrent access of data stored in variables. Generally in servlets , data in variables are often stored as Local variables, Instance Variables, Class Variables , request attributes, session attributes and context attributes.
Below example simplest I can think of for storing data as local variable and accessing it in a thread safe manner
public class MyServlet extends httpServlet { // mylocalage is localvariable here for this servlet. public void printAge(){ int mylocalage = 0; mylocalageage = mylocalage - 10; System.out.println("My age 10 years earlier was: " + mylocalage); } }
Its considered that by design that data stored in local variable is thread safe.
Every thread accessing the above servlet will have their own values and they will not interface with each other.
Local variables are stored in stack in Java. So data stored in these variables are thread safe.
Tags: Java, local variable, Servlet | http://vasanti.org/blog/?cat=14 | CC-MAIN-2018-09 | refinedweb | 157 | 54.12 |
Qt 5.8 QML Image async long delay
Hi,
after I updated Qt 5.7 to Qt 5.8, I noticed loading of images over HTTP became very slow. I'm running the application on RPi, but the same thing happens on Linux Ubuntu. Reverting to Qt 5.7 the problem is solved. Consider this QML snippet:
main.qml
import QtQuick 2.0 import QtQuick.Controls 1.0 ApplicationWindow { visible: true width: 1350 height: 800 title: qsTr("Qt Quick Controls 1.0") GridView { id: grid width: parent.width height: cellHeight cellWidth: 225 cellHeight: 320 focus: true model: 20 delegate: Card { pic: (index > 5) ? "" : "" } Keys.onReturnPressed: time.running = false; } Timer { id: time interval: 1 running: true repeat: true property double count: 0.0 property double time0: 0.0 onTriggered: { if(time0 == 0.0) time0 = Date.now(); txt.text = "elapsed time (ms): " + (Date.now() - time0); for(grid.currentIndex = 0; grid.currentIndex < 20; grid.currentIndex++) count += grid.currentItem.load; if(count == 20.0) time.running = false; else count = 0.0 } } Text { id: txt anchors.bottom: parent.bottom anchors.right: parent.right font.pixelSize: 30 } }
Card.qml
import QtQuick 2.0 Item { id: cardItem width: 235 height: 320 property string pic property double load: cardImage.progress Image { id: cardImage anchors.fill: cardItem source: pic } }
What I noticed too, if I set
asynchronous: trueon images loaded over qrc, the same thing happens - those have a big delay too. My first guess is that something has changed in how Qt handles async image loading, but I can't find anything in the changelogs.
Any help is appreciated. Thanks.
- SGaist Lifetime Qt Champion
Hi and welcome to devnet,
Looks like a regression. You should take a look at the bug report system to see if it's something known. If not please consider opening a new report providing a minimal compilable example. | https://forum.qt.io/topic/75557/qt-5-8-qml-image-async-long-delay | CC-MAIN-2018-39 | refinedweb | 305 | 63.66 |
/* * "$Id$" * * Private MD5 definitions for CUPS. * * Copyright 2007-2010 by Apple Inc. * Copyright 2005 by Easy Software Products * * _CUPS_MD5_PRIVATE_H_ # define _CUPS_MD5_PRIVATE_H_ /* Define the state of the MD5 Algorithm. */ typedef struct _cups_md5_state_s { unsigned int count[2]; /* message length in bits, lsw first */ unsigned int abcd[4]; /* digest buffer */ unsigned char buf[64]; /* accumulate block */ } _cups_md5_state_t; # ifdef __cplusplus extern "C" { # endif /* __cplusplus */ /* Initialize the algorithm. */ void _cupsMD5Init(_cups_md5_state_t *pms); /* Append a string to the message. */ void _cupsMD5Append(_cups_md5_state_t *pms, const unsigned char *data, int nbytes); /* Finish the message and return the digest. */ void _cupsMD5Finish(_cups_md5_state_t *pms, unsigned char digest[16]); # ifdef __cplusplus } /* end extern "C" */ # endif /* __cplusplus */ #endif /* !_CUPS_MD5_PRIVATE_H_ */ /* * End of "$Id$". */ | http://opensource.apple.com//source/cups/cups-327/cups/cups/md5-private.h | CC-MAIN-2016-44 | refinedweb | 110 | 50.02 |
After going through the basics of QML and Qt Creator in Getting Started with Felgo and Qt Creator, we can now continue to the really hot stuff: We learn the basics how to use the Felgo Games and create a simple game with that knowledge. The game will be a physics objects stacking game: Try to put as many objects on top of each other as possible until the sky is reached.
Images and sounds for this game are available for you to download right here: Download Resources
You will learn where to put them later in this tutorial.
A game entity is an object that interacts with the game and responds to player input or other entities. Other terms for entity are actor, or game object. However, in the Felgo documentation the term entity or game entity is used. Examples for entities are player-controllable objects like cars, power-ups, projectiles or enemy units.
An entity consists of components that define what the entity is actually doing. There are components that get rendered like the Image component. In addition the Felgo Games provide more components for all kind of fields needed for games:
An overview of all components is available in Felgo Games Components Reference.
So an entity itself is only a container of different components that has a unique EntityBase::entityId and an EntityBase::entityType. The
entityId is important for entity removal. The
entityType is used for example for collision checking.
Let's create a new Felgo project from the project wizard like explained in Getting Started with Felgo and Qt Creator and start with the following code:
import Felgo 3.0 import QtQuick 2.0 GameWindow { id: gameWindow EntityManager { id: entityManager entityContainer: scene } Scene { id: scene EntityBase { entityId: "box1" entityType: "box" Image { source: "../assets/img/box.png" width: 32 height: 32 } } } }
This code demonstrates how to define an entity with a single Image component to display an image in the img folder relative to the qml file. The
entityId and
entityType are not needed yet, but are
added for clarity and because it is good practice to always define them for new entities. The EntityManager component is required as soon as an entity is defined, because when new
entities are created at runtime it must know under which parent item the entity should be added. Thus the EntityManager::entityContainer property is set to the scene.
All dynamically created entities are put as children of this item. This topic is handled in more detail in the bottom section Entity Creation & Removal at
runtime. The
box1 entity is created when the scene is loaded at the default position 0/0, so on the top left corner of the scene.
The only thing that we are missing before we can run the app, is the
box.png image that we used in the code above. Download the Resources
if you haven't done so already, and extract the content (
img and
snd folder) into the
assets folder of your project.
Now run the app and you will see the following:
The game is not too interesting so far, so we add some physics to it to please our gamer's soul. It is as simple as that:
GameWindow { id: gameWindow // ... // start physics once the splash screen has disappeared, else the box would fall out of the screen while the splash is shown onSplashScreenFinished: world.running = true Scene { id: scene PhysicsWorld { id: world // physics is disabled initially, and enabled after the splash is finished running: false gravity.y: 9.81 } EntityBase { entityId: "box1" entityType: "box" Image { id: boxImage source: "../assets/img/box.png" width: 32 height: 32 } BoxCollider { anchors.fill: boxImage } } } }
We just have added the physics component BoxCollider to our entity and with
anchors.fill: boxImage it is the same size as the image. Felgo also provides other colliders if
your shape is not rectangular: A CircleCollider and PolygonCollider for arbitrary complex physics shapes. However, the physics shape is often
sufficient to be an assumption of the real object and the player most of the time won't recognize the difference if it's not 100% exact. Thus you can usually try the BoxCollider or CircleCollider at first and only when they are not sufficient use a PolygonCollider.
When you run the game now, you will see the box falling down because we set the
gravity.y property to 9.81 which equals earth gravity. You are not forced to use that gravity setting - in fact in later versions
of the game we want the objects to fall faster and increase the gravity setting. So balance it so the game is most fun to play.
You can change all kinds of physics properties of BoxCollider like the velocity, damping, friction, density or set up collision filters when you want to collide only with some other
collider categories. You can also use the physics system only for collision detection if you do not want to move entities based on physics but with custom behaviors. An example would be to animate the entity with a NumberAnimation and an easing type, or with a MoveToPointHelper to move towards a target point or another entity. In
case you only want to use physics for collision testing but not to modify the entity position from physics calculations, set the ColliderBase::collisionTestingOnlyMode property of the collider components to
true, which is
false by default.
The box is now falling down endlessly because nothing stops it. So let us change that by adding a ground where the box can fall upon:
EntityBase { entityId: "ground1" entityType: "ground" height: 20 anchors { bottom: scene.bottom left: scene.left right: scene.right } Rectangle { anchors.fill: parent color: "blue" } BoxCollider { anchors.fill: parent bodyType: Body.Static // the body shouldn't move } }
So now the ground entity is added to the Scene and the box falls down on it. The most important part here is that the ground is a
static body. That means it is not affected by gravity and will stay at the same
position. The default bodyType is
dynamic. The width of the entity is set to the scene size by anchoring to the left and right of scene - note that setting the width to
scene.width would have the same
effect. When you run the game, you will see the physics shape falling on the ground.
We now add more interesting stuff to our demo game. We want to play a collision sound when the box falls on the ground, and start a smoke particle effect for a couple of seconds. So here is what it looks like:
// ... EntityBase { entityId: "box1" entityType: "box" x: scene.width/2 Image { id: boxImage source: "../assets/img/box.png" anchors.fill: boxCollider } BoxCollider { id: boxCollider width: 32 height: 32 anchors.centerIn: parent fileName: "SmokeParticle.json" } }
The sound and particle effect are available as part of the Felgo Games and are easy to use. The SoundEffect::source points to the relative path of the sound file that we want
to play. It also has an
id to be able to access it in the
onBeginContact handler. As you can see, we changed the size of the box to make it smaller and better match the particle effect size. Mention
that the
anchors.centerIn: parent now shifts the transform point of the entity: when we positioned the entity at 0/0 before, it was positioned in the top left of the scene. Now, as we anchor the image and also the
collider to the center, the center point of the entity is 0/0 which would lead to half of the entity being out of the scene. Thus we position it in the horizontal center of the scene initially.
You can use pre-made particle effects for smoke, fire and splatter effects that ship with Felgo or create custom ones with the Particle component. The best way to choose a particle effect
for your game, is using the Felgo Particle Editor. You can open the Felgo Particle Editor by navigating to the Felgo SDK folder and then to the
demos/ParticleEditor folder. Open the
ParticlEditor.pro file with Qt Creator and you then can choose from a wide range of particle effects and modify them if you like. If you have an iOS or Android
device, you can also search for
Felgo Particle Editor in the app store and try the particle effects on your mobile device, and send you the effect via email once you are happy with the results!
The
SmokeParticle.json file used in this demo is one of the sample particle effects, and you can copy it and the
particleSmoke.png file from the particle editor qml folder. Alternatively, you can
also find them in the Resources zip archive in the
particle folder. Just throw them into the entities folder next to the
Box.qml
file. All that is left, is to point to that file with
fileName: "SmokeParticle.json", if you put the files somewhere else, like e.g. the assets folder, make sure to adapt the path to the json file.
When you run the project now, the smoke particle effect is shown when 2 physics bodies collide:
We are now getting more interactive: the player shall be able to drag the box around. There are many ways to move an entity like using the Animation component or MoveToPointHelper, but for physics-driven games the MouseJoint is the easiest one. Have a look at the code: = world(world) //() } } } }
Here we are using the Component element to put a MouseJoint into it. The Component element is the same as if the MouseJoint was defined in a separate file, and its children (so the MouseJoint) is not created when the Scene is loaded! Instead, we create a new joint every
time the user touches on a box. While the user drags the box around, the
onPositionChanged handler is called where the target position of the MouseJoint is updated. Finally,
when the user releases the touch, the created MouseJoint is removed.
Right now the box is pretty lonesome and the game is not that much fun - so let's change that! After this section, you can stack as many boxes on top of each other as possible, until the first box reaches the top. Therefore we need to create several boxes the longer the game lasts, and add some walls on the side and a top wall for detecting the end of the game.
It will look like this:
We start with the creation of new boxes at random positions. You can use the EntityManager for creating new entities. The EntityManager needs
the QML Component that should be created, which can either be a path to the qml file or the Component item where the entity is defined. Like mentioned above, if
you define an entity within a QML Component element, it is the same as defining an entity in a separate file. To make the code more readable, we put the whole EntityBase definition of box
and wall into two separate files
Box.qml and
Wall.qml and put them into an entities folder relative to our
main.qml file.
Box.qml in entities subfolder:
import QtQuick 2.0 import Felgo 3.0 EntityBase { id: box entityType: "box" // the origin (the 0/0 position of the entity) of this entity is the center, thus we cannot use an anchors.fill: parent in Image and BoxCollider, otherwise it would use the top left corner as origin width: 32 height: 32 // the 0/0 of the entity should be the center of the collider and image // this is required when a width & height are set to the entity! in that case, the rotation should be applied around the center (which is top-left, not the width/2,height/2 Item.Center which is the default value) transformOrigin: Item.TopLeft Image { id: boxImage source: "../../assets/img/box.png" // set the size of the image to the one of the collider and not vice versa, because the physics properties depend on the collider size anchors.fill: boxCollider } BoxCollider { id: boxCollider // the size effects the physics settings (the bigger the heavier) // this is set automatically in any collider - the default size is the one of parent! //width: parent.width //height: parent.height // the collider should have its origin at the x/y of the entity (so the center is in the TopLeft) x: -width/2 y: -height/2 friction: 1.6 restitution: 0 // restitution is bounciness - a wooden box doesn't bounce density: 0.1 // this makes the box more heavy // make the particles float independent from the entity position - this would be the default setting, but for making it clear it is added explicitly here as well positionType: 0 fileName: "SmokeParticle.json" } }
Wall.qml in entities subfolder:
import QtQuick 2.0 import Felgo 3.0 // for accessing the Body.Static type EntityBase { entityType: "wall" // this gets used by the top wall to detect when the game is over signal collidedWithBox // this allows setting the color property or the Rectangle from outside, to use another color for the top wall property alias color: rectangle.color property alias collider: collider Rectangle { id: rectangle color: "blue" anchors.fill: parent } BoxCollider { id: collider anchors.fill: parent bodyType: Body.Static // the body shouldnt move fixture.onBeginContact: collidedWithBox() } }
With these changes, the main qml file can reference the entities by their file name. Because the entities are put in a subfolder, an import to this folder is needed. Mention that the import is put within "". This indicates a relative path from the qml file and allows to structure your code into folders.
import "entities" Scene { // ... // no entityId is required for Box & Wall because they need not be identified uniquely Box { x: scene.width/2 y: 50 } Wall { height: 20 anchors { bottom: scene.bottom left: scene.left right: scene.right } } }
We can now create new boxes randomly after 2-5 seconds with the following code snippet:
Scene { // ... // gets increased when a new box is created, and reset to 0 when a new game is started // start with 1, because initially 1 Box is created property int createdBoxes: 1 // display the amount of stacked boxes Text { text: "Boxes: " + scene.createdBoxes color: "white" z: 1 // put on top of everything else in the Scene } Timer { id: timer interval: Math.random()*3000 + 2000 running: true // start running from the beginning, when the scene is loaded repeat: true // otherwise restart wont work onTriggered: { var newEntityProperties = { // safetyZoneHorizontal = box.width*SQRT(2)/2+leftWall.width -> which is about 50 // vary x between [ safetyZoneHorizontal ... scene.width-safetyZoneHoriztonal] x: Math.random()*(scene.width-2*50) + 50, y: 50, // position on top of the scene, at least below the top wall rotation: Math.random()*360 } entityManager.createEntityFromUrlWithProperties( Qt.resolvedUrl("entities/Box.qml"), newEntityProperties); // increase the createdBoxes number scene.createdBoxes++ // recalculate new interval between 2000 and 5000ms interval = Math.random()*3000 + 2000 // restart the timer timer.restart() } } }
The Timer component is useful for code that should be called delayed. In the
onTriggered handler a new entity is created. As we have our Box in
an own qml file, EntityManager::createEntityFromUrlWithProperties() can be used. Mention that the url could also be a web link! So you could
create your entities on a web server or in a Dropbox account and load the entity remotely. This speeds up the development toolchain because you don't have to re-deploy the game to your phone but just reload the application! In
onTriggered we also calculate a new interval and restart the timer with the new interval.
We are almost done now, all that is left is to place a wall right and left of the scene, and a red-colored one to the top. When the top one is reached, the game is over and will start from the beginning again. And this is how it works:
Scene { // ... Wall { // bottom wall height: 20 anchors { bottom: scene.bottom left: scene.left right: scene.right } } Wall { // left wall width: 20 height: scene.height anchors { left: scene.left } } Wall { // right wall width: 20 height: scene.height anchors { right: scene.right } } Wall { // top wall height: 20 width: scene.width anchors { top: scene.top } color: "red" // make the top wall red onCollidedWithBox: { // gets called when the wall collides with a box, and the game should restart // remove all entities of type "box", but not the walls entityManager.removeEntitiesByFilter(["box"]); // reset the createdBoxes amount scene.createdBoxes = 0; } } }
In here all box entities get removed, and the
createdBoxes counter is reset to 0. Mention that the wall entities are not removed, because they should stay around the scene when a new game starts.
If you are wondering where the
onCollidedWithBox handler comes from: it was added before to the
Wall.qml file, when a collision is detected with the wall:
EntityBase { entityType: "wall" // this gets used by the top wall to detect when the game is over signal collidedWithBox // this allows setting the color property or the Rectangle from outside, to use another color for the top wall property alias color: rectangle.color Rectangle { id: rectangle color: "blue" anchors.fill: parent } BoxCollider { anchors.fill: parent bodyType: Body.Static fixture.onBeginContact: collidedWithBox() } }
You can browse the full source code of this guide at the StackTheBox Demo.
So as you can see, you can design your components & entities to have interfaces to the outside: either properties (with an automatic changed-handler) or signals, which are basically functions that are called when something
of interest happens. You can also forward internal properties of child components to the outside with a property alias. This is used for the color property of the Rectangle, which is then set to
red for the top wall. As you can see, we define a signal called
collidedWithBox in Wall.qml. That allows us to
call
onCollidedWithBox for the top wall to detect a collision between the wall and a box. Alternatively, we could have done the collision detection in the Box entity, together with a further check if the collided
entityType is a wall and if the
entityId is topWall, but that is left as an exercise.
You have now created a simple, physics based stacking boxes game. The final step in the development process is to test and deploy it on your mobile device. Continue to the Deploying Felgo Games & Apps guide for more information about that.
To dig deeper into other examples and full demo games, you can now browse through the Felgo Games Examples and Demos. Examples are smaller code tutorials, whereas demos are more complex and complete demo games of different genres like tower-defense or platformer games.
Voted #1 for: | https://felgo.com/doc/felgo-entity-concept/ | CC-MAIN-2019-39 | refinedweb | 3,105 | 63.8 |
#include <hallo.h> * Thomas Hood [Fri, Jul 08 2005, 04:16:01PM]: > If. Exactly my point. There is really no reason for having a "minor release number after dot" in the Debian version, it justs leads people to pointless discussions like this one. Even labelling the versions with integer numbers and having a release every 18 months, we would have about 10 years to get to a state of "number space polution" that has been reached by commercial distros even now (9.x versions). IMO enough time to do a lot of things. Therefore I suggest dropping the "minor number" and giving numbers as suggested above. In addition, there may be single latin chars to declare minimalistic changes (like a fix in CD images, not really affecting the released version). Then we would have Debian 4.0 for etch, 4.1 for etch stable release 1, 4.2 for etch stable release 2, 4.2a for etch stable release 2 with a minor CD mastering fix (for example), etc.pp. Does the release team agree with this change or do we need another consensus (or even a GR)? Regards, Eduard. -- Susan Ivanova: An expedition to Coronis space found Sheridan's ship a few days later, but they never found him. All the airlocks were sealed, but there was no trace of him inside. Some of the Minbari believe he will come back some day, but I never say him again in my lifetime... -- Quotes from Babylon 5 --
Attachment:
signature.asc
Description: Digital signature | https://lists.debian.org/debian-devel/2005/07/msg00329.html | CC-MAIN-2015-40 | refinedweb | 253 | 74.79 |
Iterators
Introduction
Iterators are used for iterating over the values of a collection, such as an
Array or
HashMap. Typically a programming language will use one of two
iterator types:
- Internal iterators: iterators where the iteration is controlled by a method, usually by executing some sort of callback (e.g. a block).
- External iterators: stateful data structures from which you "pull" the next value, until you run out of values.
Both have their benefits and drawbacks. Internal iterators are easy to implement and usually offer good performance. Internal iterators can not be composed together (easily), they are eager (the method only returns once all values have been iterated over), making it harder (if not impossible) to pause and resume iteration later on.
External iterators do not suffer from these problems, as control of iteration is given to the user of the iterator. This does come at the cost of having to allocate and mutate an iterator, which can sometimes lead to worse performance when compared with internal iterators.
Iterators in Inko
Inko primarily uses external iterators, but various types will allow you to use
internal iterators for simple use cases, such as just traversing the values in a
collection. For example, we can iterate over the values of an
Array by sending
each to the
Array:
import std::stdio::stdout [10, 20, 30].each do (number) { stdout.print(number) }
We can also do this using external iterators:
import std::stdio::stdout [10, 20, 30].iter.each do (number) { stdout.print(number) }
Using external iterators gives us more control. For example, we can simply take the first value (skipping all the others) like so:
let array = [10, 20, 30] array.iter.next # => 10
Because external iterators are lazy, this would never iterate over the values
20 and
30.
Implementing iterators
Implementing your own iterators is done in two steps:
- Create a separate object for your iterator, and implement the
std::iterator::Iteratortrait for it.
- Define a method called
iteron your object, and return the iterator created in the previous step. If an object provides multiple iterators, use a more meaningful name instead (e.g.
keysor
values).
To illustrate this, let's say we have a very simple
LinkedList type that (for
the sake of simplicity) only supports
Integer values. First we define an
object to store a single value, called a
Node:
object Node { def init(value: Integer) { let @value = value # The next node can either be a Node, or Nil, hence we use `?Node` as the # type. We specify the type explicitly, otherwise the compiler will infer # the type of `@next` as `Nil`. let mut @next: ?Node = Nil } def next -> ?Node { @next } def next=(node: Node) { @next = node } def value -> Integer { @value } }
Next, let's define our
LinkedList object that stores these
Node objects:
object LinkedList { def init { let mut @head: ?Node = Nil let mut @tail: ?Node = Nil } def head -> ?Node { @head } def push(value: Integer) { let node = Node.new(value) @tail.if true: { @tail.next = node @tail = node }, false: { @head = node @tail = node } } }
With our linked list implemented, let's add the import necessary to implement our iterator:
import std::iterator::Iterator
Now we can create our iterator object, implement the
Iterator trait for it,
and define an
iter message for our
LinkedList object:
# Iterator is a generic type, and in this case takes a single type argument: the # type of the values returned by the iterator. In this case our type of the # values is `Integer`. object LinkedListIterator impl Iterator!(Integer) { def init(list: LinkedList) { let mut @node: ?Node = list.head } # This will return the next value from the iterator, if any. def next -> ?Node { let node = @node @node.if_true { @node = @node.next } node } # This will return True if a value is available, False otherwise. def next? -> Boolean { @node.if true: { True }, false: { False } } } # Now that our iterator object is in place, let's reopen LinkedList and add the # `iter` method to it. impl LinkedList { def iter -> LinkedListIterator { LinkedListIterator.new(self) } }
With all this in place, we can use our iterator like so:
let list = LinkedList.new list.push(10) list.push(20) let iter = list.iter stdout.print(iter.next.value) # => 10 stdout.print(iter.next.value) # => 20
If we want to (manually) cycle through all values, we can do so as well:
let list = LinkedList.new list.push(10) list.push(20) let iter = list.iter { iter.next? }.while_true { stdout.print(iter.next.value) # => 10, 20 }
Since the above pattern is so common, iterators respond to
each to make this
easier:
let list = LinkedList.new list.push(10) list.push(20) let iter = list.iter # Because of a bug in the compiler () # we need to manually annotate the block's argument for the time being. iter.each do (node: Node) { stdout.print(node.value) # => 10, 20 } | https://inko-lang.org/manual/getting-started/iterators/ | CC-MAIN-2018-51 | refinedweb | 803 | 56.66 |
#include <db.h>
int DB->open(DB *db, const char *file, const char *database, DBTYPE type, u_int32_t flags, int mode);.
The DB->open interface opens the database represented by the file and database arguments for both reading and writing. The file argument is used as the name of an underlying file that will be used to back the database. The database argument is optional, and allows applications to have multiple databases in a single file. Although no database argument needs to be specified, it is an error to attempt to open a second database in a file that was not initially created using a database name. Further, the database argument is not supported by the Queue format.
In-memory databases never intended to be preserved on disk may be created by setting both the file and database arguments to NULL. Note that in-memory databases can only ever be shared by sharing the single database handle that created them, in circumstances where doing so is safe.
The type argument is of type DBTYPE, and must be set to one of DB_BTREE, DB_HASH, DB_QUEUE, DB_RECNO, or DB_UNKNOWN. If type is DB_UNKNOWN, the database must already exist and DB->open will automatically determine its type. The DB->get_type function may be used to determine the underlying type of databases opened using DB_UNKNOWN.
The flags and mode arguments specify how files will be opened and/or created if they do not already exist.
The flags value must be set to 0 or by bitwise inclusively OR'ing together one or more of the following values:
The DB_EXCL flag is only meaningful when specified with the DB_CREATE flag.
The DB_TRUNCATE flag cannot be transaction-protected, and it is an error to specify it in a transaction-protected environment.
On UNIX systems or in IEEE/ANSI Std 1003.1 (POSIX) environments, all files created by the access methods are created with mode mode (as described in chmod(2)) and modified by the process' umask value at the time of creation (see umask(2)). If mode is 0, the access methods will use a default mode of readable and writable by both owner and group. On Windows systems, the mode argument is ignored. The group ownership of created files is based on the system and directory defaults, and is not further specified by Berkeley DB.
Calling DB->open is a reasonably expensive operation, and maintaining a set of open databases will normally be preferable to repeatedly opening and closing the database for each new query.
The DB->open function returns a non-zero error value on failure and 0 on success.
The DB->open function may fail and return a non-zero error for the following conditions:.
The DB->open function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB->open function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way. | http://pybsddb.sourceforge.net/api_c/db_open.html | crawl-001 | refinedweb | 505 | 59.64 |
ChangePassword.UserName Property
Assembly: System.Web (in system.web.dll)
Property ValueThe user name for which to change the password.
The UserName property gets the Web site user name for which to change the password. You can also use the UserName property just to get the user name from within the ChangePassword control, without changing the password. Additionally, the UserName property can be used from within an e-mail message that has been created to send e-mail from the ChangePassword control by using the string "<%UserName%>" in the body of the e-mail message.
To allow the user to type in a user name, set the DisplayUserName property to true. If a user is already authenticated, he or she does not need to enter a user name.
The following code example demonstrates an ASP.NET page that uses a ChangePassword Web control, and includes an event handler for the SendingMail event named SendingMail. attempts to use SMTP to send an e-mail message to the user to confirm the change. This is done in the SendingMail event handler. For information about how to configure an SMTP server, see How to: Configure an SMTP Virtual Server."> void MySendingMail(object sender, MailMessageEventArgs e) {.
#region Using directives using System; using System.Collections.Generic; using System.Text; using System.Diagnostics; #endregion namespace CreateEventSource { class Program { static void Main(string[] args) { try { // Create the source, if it does not already exist. if (!EventLog.SourceExists("MySamplesSite")) { EventLog.CreateEventSource("MySamplesSite", "Application"); Console.WriteLine("Creating Event Source"); } // Create an EventLog instance and assign its source. EventLog myLog = new EventLog(); myLog.Source = "MySamplesSite"; // Write an informational entry to the event log. myLog.WriteEntry("Testing writing to event log."); Console.WriteLine("Message written to event log."); } catch (Exception e) { Console.WriteLine("Exception:"); Console.WriteLine("{0}", e.ToString()); } } } }
The following example code can be used as the ChangePasswordMail.htm file for the previous> | http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.changepassword.username(v=vs.85) | CC-MAIN-2014-52 | refinedweb | 313 | 52.46 |
Jul 04, 2019 07:45 AM|IamGuy84|LINK
Hi folks,
I am using asp.net core. I want to save some photos into a database So I have created seed data for test.
In model:
public class Image { public int ImageId { get; set; } public string ImageName { get; set; } public string ContentType { get; set; } public byte[] Content { get; set; } }
In SeedData:
public static class SeedData { public static void Seed(this ModelBuilder modelBuilder) { modelBuilder.Entity<Image>().HasData( new Image { ImageId=1, Content = "???" }, new Image { ImageId = 2, Content = "???" }, new Image { ImageId = 3, Content = "???" } ); } }
I have saved three photos in webapplication(project) -> Images(Folder)->(cat1.png, cat2.png, cat3.png).
So how do i seed images?
I am waiting for your response.
Thanks in Advance!
Ref:
Participant
1968 Points
MVP
Jul 04, 2019 01:45 PM|maherjendoubi|LINK
Hi,
Which version of ASP.NET Core are you using?
Thank you.
Best regards,
Maher
All-Star
58794 Points
Jul 04, 2019 07:10 PM|bruce (sqlwork.com)|LINK
Jul 05, 2019 06:33 AM|Xing Zou|LINK
Hi, lamGuy84,
You could use the file path to open file in to a filestream and read data in a byte array.
using System.IO; //... public static class SeedData { public static void Seed(this ModelBuilder modelBuilder) { modelBuilder.Entity<Image>().HasData(
new Image { ImageId = 1, Content = ReadFile("images/Cat1.png") },
new Image { ImageId = 2, Content = ReadFile("images/Cat2.png") },
new Image { ImageId = 3, Content = ReadFile("images/Cat3.png") }
); } public static byte[] ReadFile(string sPath) { //Initialize byte array with a null value initially. byte[] data = null; //Use FileInfo object to get file size. FileInfo fInfo = new FileInfo(sPath); long numBytes = fInfo.Length; //Open FileStream to read file FileStream fStream = new FileStream(sPath, FileMode.Open, FileAccess.Read); //Use BinaryReader to read file stream into byte array. BinaryReader br = new BinaryReader(fStream); //When you use BinaryReader, you need to supply number of bytes //to read from file. //In this case we want to read entire file. //So supplying total number of bytes. data = br.ReadBytes((int)numBytes); return data; } }
Best Regards,
All-Star
48920 Points
Jul 05, 2019 11:41 AM|PatriceSc|LINK
Hi,
You have also File.ReadAllBytes which read the file in one go and according to it is available in ASP.NET Core.
6 replies
Last post Jul 05, 2019 11:41 AM by PatriceSc | https://forums.asp.net/t/2157400.aspx?Seed+Data+Image | CC-MAIN-2021-25 | refinedweb | 386 | 60.01 |
- NAME
- CONTENTS
- Downloading, Compiling & Installing
- Is there a binary distribution of Embperl for Unix?
- Is there a binary distribution of Embperl for Win32?
- I want to run Embperl with mod_perl under Apache. In what order should I do the compiling?
- I'm getting:
- I'm trying to build HTML::Embperl, and while running 'make' i get:
- I have a lot of errors in 'make test' from mod_perl when using Embperl
- How can I prevent 'make test' from running some of the tests?
- Running 'make test' fails with an error message at loading of Embperl (even though mod_perl compiled and tested cleanly!)
- I get symbol ap_* undefined/cannot resolve ap_*
- How can I build a statically-linked copy of Embperl with mod_perl support?
- How do I load Embperl at server startup?
- make test fails with a SIGxxxx, how can I obtain a stack backtrace from gdb?
- How do I build Embperl with debugging informations
- make test fails with SIGXFSZ
- Embperl on SCO Unix
-?
- Common Problems
- When I use a module inside a Embperl page, it behaves weired when the source changes.
- Why doesn't the following line work?
- I'm getting: "Glob not terminated at ..."
- My HTML is getting stripped out.
- I _am_ using optRawInput, and my HTML _is_ still being stripped out!
- Help! I got a SIGSEGV! Ack!
- I am having troubles with using Embperl in combination with Apache::Include inside a Apache::Registry script.
- I can't get PerlSendHeader to work under Embperl?
- But how do I customize the header that Embperl is sending?
- I can't figure out how to split a 'while' statement across two [- -] segments
- My HTML tags like '<' '>' and '"' are being translated to <, > !!!
- Netscape asks to reload the document
- I get "Stack underflow"
- Common Questions
- How can I get my HTML files to be converted into Perl code which, as a whole, could then be compiled as function so that I could, for instance, fetch Perl docs from the Formatter table and compile them the way AUTOLOAD does.
- I have an HTML page which is dynamically generated at runtime and should be post-processed by Embperl. How can I do this?
- How can I customise the header that Embperl is sending?
- Can I use Embperl to send cookies?
- Can I do a Redirect with Embperl?
- Can I serve random GIFs with Embperl? (Will Lincoln Stein's GD.pm module work with Embper?
- Can I pass QUERY_STRING information to an HTML::Embperl::Execute call?
- How to include other files into Embperl pages?
- EmbPerl iteration without indexing
- How to display arrays with undef values in it?
- Escaping & Unescaping
- Debugging
- Customizing
- How can I fiddle with the default values? How can I override or alter this or that behavior?
- I'd like to (temporarily) disable some of Embperl's features. What can be customized?
- How can I disable auto-tables?
- How can I change predefined values like $escmode from my Toolbox module?
- How can I customize the header that Embperl is sending?
- How can I use a different character set? ASCII values over 128 are showing up as ? (question marks)!
-?
- In what namespace does Embperl store pre-compiled?
- Additional Help
- SEE ALSO
- AUTHOR
NAME
Embperl FAQ - embed Perl code in your HTML docs
CONTENTS
- "Downloading, Compiling & Installing"
-
- "Common Problems"
-
- "Common Questions"
-
- "Escaping & Unescaping"
-
- "Debugging"
-
- "Customizing"
-
- "Optimizing & Fine Tuning"
-
- "Additional Help"
-
Downloading, Compiling & Installing.
Is there a binary distribution of Embperl for Unix?
No.
Is there a binary distribution of Embperl for Win32?
Win NT/95/98 binarys for Apache/perl/mod_perl/Embperl are available from . A european mirror is at .
I want to run Embperl with mod_perl under Apache. In what order should I do the compiling?
First mod_perl and Apache, then Embperl.
I'm getting:
../apache_1.3.0/src/include/conf.h:916: regex.h: No such file or directory
Try compiling Embperl again, like this:
make DEFS=-DUSE_HSREGEX
I'm trying to build HTML::Embperl, and while running 'make' i get:
cc: Internal compiler error: program cc1 got fatal signal 11 make: *** [epmain.o] Error 1
GCC croaking with signal 11 frequently indicates hardware problems. See
I have a lot of errors in 'make test' from mod_perl when using Embperl
Try recompiling Perl and all modules -- this can sometimes make those annoying error messages disappear!
How can I prevent 'make test' from running some of the tests?
For example, I don't allow CGI scripts, so 'make test' fails at CGI. How do I run just the other tests?
Try:
$ make test TESTARGS="--help" # and for just offline and mod_perl: $ make test TESTARGS="-hoe"
Running 'make test' fails with an error message at loading of Embperl (even though mod_perl compiled and tested cleanly!)
see "I get symbol ap_* undefined/cannot resolve ap_*":
- 1.) make clean
-
- 2.) perl Makefile.PL
NOTE: answer _no_ to mod_perl support. (This is important!)
- 3.) make test
-).
How can I build a statically-linked copy of Embperl with mod_perl support?
- 1.) go to your mod_perl directory, change to src/modules/perl and edit the Makefile so that it contains the line
#STATIC_EXTS = Apache Apache::Constants HTML::Embperl
- 2.) add a definition for EPDIR and change the ONJ= line so that it looks like this:
- 3.) go to the mod_perl directory and run
perl Makefile.PL
- 4.) go to the Embperl directory and do
make clean perl Makefule.PL make
(to compile in mod_perl support)
- 5.) go back to the mod_perl directory and remake Apache by typing:
- 6.) go back to the Embperl directory
-
- 7.) backup the file test/conf/config.pl
-
- 8.) now build Embperl again but _without_ mod_perl support
make clean perl Makefile.PL make
- 9.) restore your saved config.pl to test/conf/config.pl
(without this step, only the offline mode would be tested)
- 10.) run 'make test' for Embperl
-
- 11.) do 'make install' for Embperl
-
NOTE: You should do it in this order, or it may not work.
NOTE: It seems to be necessary to load Embperl at server startup, either by PerlModule or in a PerlScript. See next question on how to do this.
How do I load Embperl at server startup?
You can load Embperl at server startup by PerlModule or in a startup.pl:
- 1.) edit your srm.conf file to read:
PerlModule HTML::Embperl
- 2.) edit your startup.pl file to read:!
make test fails with a SIGxxxx, how can I obtain a stack backtrace from gdb?
How do I build Embperl with debugging informations
- edit the Makefile
-
- search for the line starting with 'CC = ' add the -g switch to the end of the line
-
- search for the line starting with 'LDDFLAGS = ' add the -g switch to the end of the line
-
- type make to build Embperl with debugging infomation
-
now start the gdb as decribed before.
make test fails with SIGXFSZ
This may occur when the filesize limit for the account, either test is running as or the test httpd, is too small. Embperl make test generates a really large logfile! Yu must increase the filesize limit for that accounts.
Embperl on SCO Unix
>From Red Plait
My OS is SCO Unix 3.2v4.2, Apache 1.3.4, perl 5.004_4, mod_perl 1.18 and Embperl-1.1.1
I done following:
- 1)"
- 2)
I installed mod_perl and "perl Makefile.PL", then "make"
- 3)
because I have`nt dynamical loading ( very old and buggy OS ) I had to manually change src/modules/perl/perlxsi.c to insert bootstraps function`s and it`s invocations and also /src/Makefile to manually insert libXXX.a libraries
- In access.conf I insert code:
PerlModule HTML::Embperl <Directory /my_dir> SetHandler perl-script PerlHandler HTML::Embperl::handler </Directory>?:
At least Perl 5.004_04
cc or gcc (your isp must give you access to the gcc compiler)
URI
MIME::Base64
HTML::Parser
HTML::HeadParser
Digest::MD5
libnet
libwww
File::Spec (I believe you may have to install this too if you are using Perl 5.004_04 as it may not be a standard module)
Direction:
Get your copy of EmbPerl (HTML-Embperl-x.x.tar.gz)
% tar -xvzf HTML-Embperl-x.x.tar.gz
% cd HTML-Embperl-x.x
% perl Makefile.PL PREFIX=/to/your/private/dir
% make
% make test
% make install.
Common Problems
The most common problems of all involve Escaping and Unescaping. They are so common, that an entire section on "Escaping & Unescaping" is devoted to them.
When I use a module inside a Embperl page, it behaves weired when the source changes. neccessary and from the moment they are forked, they run on their own and don't know of each other. So if a module is loaded at server startup time (before the fork), it is loaded in all childs childs has loaded different versions of the same module and when you reload your page you hit different childs?
If a module change, simply restart Apache. That's works always.
Use Apache::StatInc. This will do a stat on every loaded module and compare the modification time. If the source has changed the module is reloaded. This works most times (but not all modules can be cleanly reloaded) and as the number of loaded modules increase, your sever will slow down, because of the stat it has to do for every module.
Use
doinstead of
require.
dowill execute your file everytime it is used. This also adds overhead, but this may be accpetable for small files or in a debugging environement. (NOTE: Be sure to check
$@after a
do, because do works like
eval)
Why doesn't the following line work?
[+ $var . "<b>". $foo . "</b>". $bar +]
See what we mean? This is an Escaping & Unescaping problem for sure. You need to escape <b> as ' <b> ' and you probably also need to read the section on "Escaping & Unescaping"...
I'm getting: "Glob not terminated at ..."
This might be a problem with "Escaping & Unescaping" as well.
My HTML is getting stripped out.
Sounds like a problem with Escaping & Unescaping again!
Unless, of course, you have already read the section on Escaping & Unescaping, and it is still happening... Like if you are using optRawInput and your HTML is _still_ being stripped out...
I _am_ using optRawInput, and).
Help! I got a SIGSEGV! Ack!.
I am having troubles with using Embperl in combination with Apache::Include inside a Apache::Registry script.)
I can't get PerlSendHeader to work under Embperl?
You don't need PerlSendHeader when using Embperl - Embperl always sends its own httpd header.
But how do I customize the header that Embperl is sending?
You'll find the answer to this and many other header issues in the "Common Questions" section.
I can't figure out how to split a 'while' statement across two [- -] segments.
My HTML tags like '<' '>' and '"' are being translated to <, > !!!
Hey! Not you again!? I thought we already sent you to the "Escaping & Unescaping" section of the FAQ?!?! ;)
Netscape asks to reload the document.
I get "Stack underflow"
The problem often occurs, when you have a <table> tag in one file and a </table> tag in another file and you both include them in a main page (e.g. as header and footer). There are two workarounds for this problem:
- 1. Set optDisableTableScan.
- 2. Add a <table> as comment
Add the following to the top of the footer document:
<!-- <table><tr><td> -->
This will work also, because Embperl (1.x) will not scan for html comments
Common Questions
The most common questions of all deal with "Escaping & Unescaping" - they are so common that the whole next section is devoted to them. Less common questions are addressed here:
How can I get my HTML files to be converted into Perl code which, as a whole, could then be compiled as function so that I could, for instance, fetch Perl docs from the Formatter table and compile them the way AUTOLOAD does.
Embperl cannot covert your HTML into one piece of Perl-code, but you can wrap the call to Execute into a Perl function and let AUTOLOAD call it.
I have an HTML page which is dynamically generated at runtime and should be post-processed by Embperl. How can I do this?
- 1.) Generate the page within a normal CGI/Apache::Registry script and put the result into a scalar - then you can call HTML::Embperl::Execute to post-process your document. Execute can either send the document to the browser or put it into another scalar for further processing.
-
- 2.) Use EMBPERL_INPUT_FUNC (1.1b1 and above). With this configuration directive, you can specify a custom input function which reads the HTML source from the disk or even from a database. Embperl also provides the function ProxyInput, which allows you to get input from another web server altogether.
-
- 3.) Look at the module Apache::EmbperlChain, which is able to chain multiple modules, including Embperl, together.
-
How can I customise the header that Embperl is sending?') -]
Can I use Embperl to send cookies?
Yes. Embperl sends its own headers, so all you have to do to send cookies is to remember to print an additional header.
Example Code:
- 1.) in documents, add
<META HTTP-
- 2.) or use %http_headers_out
[- $http_headers_out{'Set-Cookie'} = "$cookie=$value" -]
- 3.) or - using mod_perl's functionality - use
[- $req_rec -> header_out("Set-Cookie" => "$cookie=$value"); -]
NOTE: You make also take a look at Embperls (1.2b2 and above) ability to handle sessions for you inside the %udat and %mdat hashes.
Can I do a Redirect with Embperl?
The following way works with mod_perl and as cgi:
[- $http_headers_out{'Location'} = "" -]
the status of the request will automaticly.
Can I serve random GIFs with Embperl? (Will Lincoln Stein's GD.pm module work with Embperl??)
As always, there is more than one way to do this - especially as this is more of a question of how you are coding your HTML than how you are coding your Embperl.
Here are some ideas:
- 1.) You could include an IMG tag which points to your cgi-bin, where a regular CGI script serves the graphics.
-
- 2.) You could be running Apache::Registry, which can generate on-the-fly GIFs using GD. (This is just the same as if you were including the GD image from a static page or from another CGI script, but it allows all of the appropriate logic to live in a single document, which might be appropriate for some Embperl users).
-
If you think of another way, or come up with some sample code, I'd love to hear from you, so that I could add it to the FA?
-
- 2.) If you compiled _everything_ to Perl, you would hold all of the HTML text in memory, and your Apache child processes would grow and grow... But often-accessed documents are still held in memory by your os disk cache, which is much more memory-efficient.
-
- 3.) There is only so far that you can go with precompiling until you reach the point of diminishing returns. My guess is that converting dynamic tables and other HTML processing to Perl at this point in Embperl's development would actually slow down operation.
-
Can I pass QUERY_STRING information to an HTML::Embperl::Execute call?
With Embperl 1.0 and higher, you can do this. QUERY_STRING is set as $ENV{QUERY_STRING} by default. Alternatively, you can use the fdat parameter to pass values to %fdat.
How to include other files into Embperl pages?') -]
EmbPerl iteration without indexing.
How to display arrays with undef values in it?.
Escaping & Unescaping
Escaping & Unescaping Input.
Ways To Escape Input:
- 1. Escape it -> \<H1>
NOTE: Inside double quotes you will need to use \\ (double backslash), since Perl will remove the first Escape itself.
Example: In most cases '\<tr>' but inside double-quotes "\\<tr>"
- 2. Turn off Escaping for all input by setting the optRawInput in EMBPERL_OPTIONS
-
-.
Escaping & Unescaping Output
Embperl will also escape the output - so <H1> will be translated to <H1>
To see the exact steps taken by Embperl to process a Perl-laden document, please see Inside Embperl in the Embperl documentation.
Ways To Escape Output:
- 1.) Escape it -> \\<H1>
(You need a double backslash \\, because the first one is removed by Perl and the second by Embperl.
- 2.) set $escmode = 0 -> [- $escmode = 0 ; -]
-
- 3.) set SetEnv EMBPERL_ESCMODE 0 in your srm.conf
-
Debugging
I am having a hard time debugging Embperl code.
Embperl is running slow.!
How can I improve Embperl's performance?
- 1.) Load Embperl at server startup. This will cause UNIX systems to only allocate memory once, and not for each child process. This reduces memory use, especially the need to swap additional memory.
-
- 2.) Disable all unneeded debugging flags. You should never set dbgFlushLog dbgFlushOutput, dbgMem and dbgEvalNoCache in a production environment.
-
- 3.) You may also want to take a look at the available options you can set via EMBPERL_OPTIONS. For example optDisableChdir, will speed up processing because it avoid the change directory before every request.
-
Customizing
How can I fiddle with the default values? How can I override or alter this or that behavior?
Usually, defaults are set in a way that is likely to make most sense for a majority of users. As of version 1.0, Embperl allows much more flexibility in tweaking your own default values than before. Take a look at EMBERPL_OPTIONS.
I'd like to (temporarily) disable some of Embperl's features. What can be customized?
[+/-/!/$ .... $/!/-/+]
- 2.) optDisableTableScan, optDisableInputScan and optDisableMetaScan can be used to disable individual parts of HTML processing.
You may set these flags in your server config, or at runtime:
[+ $optDisableHtmlScan = 1 +] <table> foo </table> [+ $optDisableHtmlScan = 0 +]
How can I disable auto-tables?
Set optDisableTableScan in EMBPERL_OPTIONS
How can I change predefined values like $escmode from my Toolbox module?
$HTML::Embperl::escmode = 0 ;
Predefined values in Embperl are simply aliases for $HTML::Embperl::foo (for instance, $escmode is an alias for $HTML::Embperl::escmode)
How can I customize the header that Embperl is sending?
You'll find the answer to this and many other header issues in the "Common Questions" section.
How can I use a different character set? ASCII values over 128 are showing up as ? (question marks)! untouch, which is especialy usefull.?
To pre-compile pages, just call Execute once for every file at server startup in your startup.pl file.
In what namespace does Embperl store pre-compiled data?
The cached Perl blocks are stored as a set of subroutines in the namespace of the document. (HTML::Embperl::DOC::_<n> for default) Look at the logfile to see the actual.
Additional Help
Where can I get more help?.
SEE ALSO
some links here
AUTHOR
Gerald Richter <richter@ecos.de>
Edited by Nora Mikes <nora@radio.cz>
1 POD Error
The following errors were encountered while parsing the POD:
- Around line 573:
You forgot a '=back' before '=head2' | http://web-stage.metacpan.org/pod/release/GRICHTER/HTML-Embperl-1.3.1/Faq.pod | CC-MAIN-2019-39 | refinedweb | 3,124 | 67.15 |
Getting/Setting System.properties() in J2EE (3 messages)
- Posted by: Jerome Banks
- Posted on: August 09 2001 21:47 EDT
Folks,
We are running into some problems with our product.
We use System properties to configure several of
our libraries, as do many standard java packages.
We are deploying on weblogic 6.0 and finding that
there isn't a good way to set System properties
( other than a -D on the command line ). Was this
done for a reason ??? Are accessing System properties
considered "bad form" for J2EE applications ??
Why exactly ??? It's not mentioned in the EJB spec
that it shouldn't be done.
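For the record, the command-line approach we're stuck with looks like this (the property names, the path, and the start command are invented for illustration):

```shell
# Set JVM-wide properties at server start. This clearly works, but it
# means editing the server start script for every configuration change.
java -Dcom.acme.resource.dir=/opt/acme/conf \
     -Dcom.acme.parser.implClass=com.acme.parser.SaxParser \
     weblogic.Server
```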
--- jerome
Threaded Messages (3)
- Getting/Setting System.properties() in J2EE by Tony Brookes on August 09 2001 22:23 EDT
- Getting/Setting System.properties() in J2EE by Jerome Banks on August 09 2001 23:35 EDT
- Getting/Setting System.properties() in J2EE by Tony Brookes on August 11 2001 10:23 EDT
Getting/Setting System.properties() in J2EE[ Go to top ]
Have you tried simply listing them in weblogic.properties? I rather thought that worked, but then I never played with it too much.
- Posted by: Tony Brookes
- Posted on: August 09 2001 22:23 EDT
- in response to Jerome Banks
The other solution, which is more effective, is to use a WebLogic startup class to set the properties, by reading them from a properties file. That's about 10 lines of code and a bit cleaner.
In any case, you are better off not using system properties to drive things. They (by definition) apply to the whole VM which can be painful at times (at least it has been for me.)
HTH
Chz
Tony
Getting/Setting System.properties() in J2EE[ Go to top ]
- Posted by: Jerome Banks
- Posted on: August 09 2001 23:35 EDT
- in response to Tony Brookes
Thanks for the advice ...
The problem is that there doesn't seem to
be a weblogic.properties file anymore, just
a config.xml , and there is no place to set
system properties that we can find.
My question is more a philosophical/design issue;
when should you use System properties, when JNDI,
when some other mechanism for J2EE applications.
We used System properties mostly for Singleton/static
configuration ( where is the resource file, what
implementing class do we use for an interface, etc.)
This seems reasonable to me, because they need to be set
regardless of the envirnoment ( J2EE or non-J2EE )
Others may argue that System properties should never be
used for configuration, since your application may be
sharing a VM with other applications, and may squash/be
squashed by applications. I would think this could be
avoided by using a namespace ( like <company>.<subsystem>.<property> ) which seems to be the
de facto standard ( java.xxx, sun.xxx. weblogic.xxx )
I wanted to ask the J2EE community their opinion on this.
Getting/Setting System.properties() in J2EE[ Go to top ]
My own opinion is that they should not be used, as it has two definite consequences.
- Posted by: Tony Brookes
- Posted on: August 11 2001 22:23 EDT
- in response to Jerome Banks
1) It applies to the whole VM (unless you make it ClassLoader specific.)
2) It leads to lots of scripts with -D parameters in them, which is not pretty and prone to error. Configuration belongs in a configuration repository, not in startup scripts.
Didn't realize you were using 6.0, sorry about that. There is no weblogic.properties, but there is probably a way to set system properties in there.
Chz
Tony | http://www.theserverside.com/discussions/thread.tss?thread_id=8408 | CC-MAIN-2014-52 | refinedweb | 594 | 64.41 |
File Length
Calculates the length of a binary file.Controller: CodeCogs
Contents
Interface
C++
FileLength
Returns the length of a file in bytes. Useful when you need to dynamically allocate memory to hold the contents of a file you intend to read. This function was designed to replicated the similarly named filelength function that comes with some distributions of C/C++, particularly on DOS platforms. There doesn't appear to be an equivalent in Unix.
Example 1
#include <stdio.h> #include <codecogs/computing/io/binary/file_length.h> using namespace Computing::IO::Binary; int main () { FILE* stream = fopen("/codecogs/io/binary/file_length.h","rb"); printf("\n File length=%d", fileLength(stream)); fclose(stream); return 0; }
Parameters
Returns
- The length of the file in bytes. Return -1 if an error is encountered.
Authors
- Will Bateman (March 2005)
Source Code
Source code is available when you agree to a GP Licence or buy a Commercial Licence.
Not a member, then Register with CodeCogs. Already a Member, then Login.
Last Modified: 25 Oct 10 @ 13:12 Page Rendered: 2022-03-14 17:19:24 | https://www.codecogs.com/library/computing/io/binary/file_length.php | CC-MAIN-2022-21 | refinedweb | 180 | 60.31 |
06-NBconvert-Doc-Draft
NBconvert has now been merged into IPython itself. You will need IPython 1.0 or above to have this works (asuuming the API have not changed)
In this post I will introduce you to the programatic API of nbconvert to show you how to use it in various context.
For this I will use one of @jakevdp great blog post. I've explicitely chosen a post with no javascript tricks as Jake seem to be found of right now, for the reason that the becommings of embeding javascript in nbviewer, which is based on nbconvert is not fully decided yet.
This will not focus on using the command line tool to convert file. The attentive reader will point-out that no data are read from, or written to disk during the conversion process. Indeed, nbconvert as been though as much as possible to avoid IO operation and work as well in a database, or web-based environement.
The main principle of nbconvert is to instanciate a
Exporter that controle
a pipeline through which each notebook you want to export with go through.
Let's start by importing what we need from the API, and download @jakevdp's notebook.
import requests response = requests.get('') response.content[0:60]+'...'
'{\n "metadata": {\n "name": "XKCD_plots"\n },\n "nbformat": 3,\n...'
We read the response into a slightly more convenient format which represent IPython notebook. There are not real advantages for now, except some convenient methods, but with time this structure should be able to guarantee that the notebook structure is valid.
from IPython.nbformat import current as nbformat jake_notebook = nbformat.reads_json(response.content) jake_notebook.worksheets[0].cells[0]
{u'cell_type': u'heading', u'level': 1, u'metadata': {}, u'source': u'XKCD plots in Matplotlib'}
So we have here Jake's notebook in a convenient for, which is mainly a Super-Powered dict and list nested. You don't need to worry about the exact structure.
The nbconvert API exposes some basic exporter for common format and default options. We will start by using one of them. First we import it, instanciate an instance with all the defautl parameters and fed it the downloaded notebook.
import IPython.nbconvert
from IPython.config import Config from IPython.nbconvert import HTMLExporter ## I use basic here to have less boilerplate and headers in the HTML. ## we'll see later how to pass config to exporters. exportHtml = HTMLExporter(config=Config({'HTMLExporter':{'default_template':'basic'}}))
(body,resources) = exportHtml.from_notebook_node(jake_notebook)
The exporter returns a tuple containing the body of the converted notebook, here raw HTML, as well as a resources dict. The resource dict contains (among many things) the extracted PNG, JPG [...etc] from the notebook when applicable. The basic HTML exporter does keep them as embeded base64 into the notebook, but one can do ask the figures to be extracted. Cf advance use. So for now the resource dict should be mostly empty, except for 1 key containing some css, and 2 others whose content will be obvious.
Exporter are stateless, you won't be able to extract any usefull information (except their configuration) from them.
You can directly re-use the instance to convert another notebook. Each exporter expose for convenience a
from_file and
from_filename methods if you need.
print resources.keys() print resources['metadata'] print resources['output_extension'] # print resources['inlining'] # too lng to be shown
['inlining', 'output_extension', 'metadata'] defaultdict(None, {'name': 'Notebook'}) html
# Part of the body, here the first Heading start = body.index('<h1 id', ) print body[:400]+'...'
<div class="text_cell_render border-box-sizing rendered_html"> <h1 id="XKCD-plots-in-Matplotlib">XKCD plots in Matplotlib<a class="anchor-link" href="#XKCD-plots-in-Matplotlib">¶</a></h1> </div> <div class="text_cell_render border-box-sizing rendered_html"> <p>This notebook originally appeared as a blog post at <a href="...
You can directly write the body into an HTML file if you wish, as you see it does not contains any body tag, or style declaration, but thoses are included in the default HtmlExporter if you do not pass it a config object as I did.
When exporting one might want to extract the base64 encoded figures to separate files, this is by default what does the RstExporter does, let see how to use it.
from IPython.nbconvert import RSTExporter rst_export = RSTExporter() (body,resources) = rst_export.from_notebook_node(jake_notebook)
print body[:970]+'...' print '[.....]' print body[800:1200]+'...'
XKCD plots in Matplotlib ======================== This notebook originally appeared as a blog post at `Pythonic Perambulations <>`_ by Jake Vanderplas. : In[1]: .. code:: python from IPython.display import Image Image('') .. image:: output_3_0.png Sometimes when showing schematic plots, this is the type of figure I want to display. But drawing it by hand is a pain: I'd rather just use matplotlib. The problem is, matplotlib is a bit... [.....] owing It just doesn'...
Here we see that base64 images are not embeded, but we get what look like file name. Actually those are (Configurable) keys to get back the binary data from the resources dict we havent inspected earlier.
So when writing a Rst Plugin for any blogengine, Sphinx or anything else, you will be responsible for writing all those data to disk, in the right place. Of course to help you in this task all those naming are configurable in the right place.
let's try to see how to get one of these images
resources['outputs'].keys()
']
We have extracted 5 binary figures, here
pngs, but they could have been svg, and then wouldn't appear in the binary sub dict.
keep in mind that a object having multiple repr will store all it's repr in the notebook.
Hence if you provide
_repr_javascript_,
_repr_latex_ and
_repr_png_to an object, you will be able to determine at conversion time which representaition is the more appropriate. You could even decide to show all the representaition of an object, it's up to you. But this will require beeing a little more involve and write a few line of Jinja template. This will probably be the subject of another tutorial.
Back to our images,
from IPython.display import Image Image(data=resources['outputs']['output_3_0.png'],format='png')
Yep, this is indeed the image we were expecting, and I was able to see it without ever writing or reading it from disk. I don't think I'll have to show to you what to do with those data, as if you are here you are most probably familiar with IO.
Use case:
I write an awesome blog in HTML, and I want all but having base64 embeded images. Having one html file with all inside is nice to send to coworker, but I definitively want resources to be cached ! So I need an HTML exporter, and I want it to extract the figures !
The process of converting a notebook to a another format with the nbconvert Exporters happend in a few steps:
- Get the notebook data and other required files. (you are responsible for that)
- Feed them to the exporter that will
- sequentially feed the data to a number of
Transformers. Transformer only act on the structure of the notebook, and have access to it all.
- feed the notebook through the jinja templating engine
- the use templates are configurable.
- templates make use of configurable macros called filters.
- The exporter return the converted notebook as well as other relevant resources as a tuple.
- Write what you need to disk, or elsewhere. (You are responsible for it)
Here we'll be interested in the
Transformers. Each
Transformer is applied successively and in order on the notebook before going through the conversion process.
We provide some transformer that do some modification on the notebook structure by default.
One of them, the
ExtractOutputTransformer is responsible for crawling notebook,
finding all the figures, and put them into the resources directory, as well as choosing the key
(
filename_xx_y.extension) that can replace the figure in the template.
The
ExtractOutputTransformer is special in the fact that it should be availlable on all
Exporters, but is just inactive by default on some exporter.
# second transformer shoudl be Instance of ExtractFigureTransformer exportHtml._transformers # 3rd one shouel be <ExtractOutputTransformer>
[<function IPython.nbconvert.transformers.coalescestreams.wrappedfunc>, <IPython.nbconvert.transformers.svg2pdf.SVG2PDFTransformer at 0x10c203e90>, <IPython.nbconvert.transformers.extractoutput.ExtractOutputTransformer at 0x10c20e410>, <IPython.nbconvert.transformers.csshtmlheader.CSSHTMLHeaderTransformer at 0x10c20e490>, <IPython.nbconvert.transformers.revealhelp.RevealHelpTransformer at 0x10c1cbf10>, <IPython.nbconvert.transformers.latex.LatexTransformer at 0x10c203550>, <IPython.nbconvert.transformers.sphinx.SphinxTransformer at 0x10c203690>]
To enable it we will use IPython configuration/Traitlets system. If you are have already set some IPython configuration options, this will look pretty familiar to you. Configuration option are always of the form:
ClassName.attribute_name = value
A few ways exist to create such config, like reading a config file in your profile, but you can also do it programatically usign a dictionary. Let's create such a config object, and see the difference if we pass it to our
HtmlExporter
from IPython.config import Config c = Config({ 'ExtractOutputTransformer':{ resources.keys() print '' print 'Here we have one more field ' print resources_with_fig.keys() resources_with_fig['outputs'].keys()
resources without the "figures" key : ['inlining', 'output_extension', 'metadata'] Here we have one more field ['outputs', 'inlining', 'output_extension', 'metadata']
']
So now you can loop through the dict and write all those figures to disk in the right place...
Of course you can imagine many transformation that you would like to apply to a notebook. This is one of the reason we provide a way to register your own transformers that will be applied to the notebook after the default ones.
To do so you'll have to pass an ordered list of
Transformers to the Exporter constructor.
But what is an transformer ? Transformer can be either decorated function for dead-simple
Transformers that apply
independently to each cell, for more advance transformation that support configurability You have to inherit from
Transformer and define a
call method as we'll see below.
All transforers have a magic attribute that allows it to be activated/disactivate from the config dict.
from IPython.nbconvert.transformers import Transformer import IPython.config print "Four relevant docstring" print '=============================' print Transformer.__doc__ print '=============================' print Transformer.call.__doc__ print '=============================' print Transformer.transform_cell.__doc__ print '============================='
Four relevant docstring =============================. Disabled by default and can be enabled via the config by 'c.YourTransformerName.enabled = True' ============================= Transformation to apply on each notebook. You should return modified nb, resources. If you wish to apply your transform on each cell, you might want to overwrite transform_cell method instead. Parameters ---------- nb : NotebookNode Notebook being converted resources : dictionary Additional resources used in the conversion process. Allows transformers to pass variables into the Jinja engine. ============================= Overwrite if you want to apply a transformation on each cell. You should return modified cell and resource dictionary. Parameters ---------- cell : NotebookNode cell Notebook cell being processed resources : dictionary Additional resources used in the conversion process. Allows transformers to pass variables into the Jinja engine. index : int Index of the cell being processed =============================
We don't provide convenient method to be aplied on each worksheet as the data structure for worksheet will be removed. (not the worksheet functionnality, which is still on it's way)
I'll now demonstrate a specific example requested while nbconvert 2 was beeing developped. The ability to exclude cell from the conversion process based on their index.
I'll let you imagin how to inject cell, if what you just want is to happend static content at the beginning/end of a notebook, plese refer to templating section, it will be much easier and cleaner.
from IPython.utils.traitlets import Integer
class PelicanSubCell(Transformer): """A Pelican specific transformer to remove somme call(self, nb, resources): #nbc = deepcopy(nb) nbc = nb # don't print in real transformer !!! print "I'll keep only cells from ", self.start, "to ", self.end, "\n\n" for worksheet in nbc.worksheets : cells = worksheet.cells[:] worksheet.cells = cells[self.start:self.end] return nbc, resources
# I create this on the fly, but this could be loaded from a DB, and config object support merging... c = Config({ 'PelicanSubCell':{ 'enabled':True, 'start':4, 'end':6, } })
I'm creating a pelican exporter that take
PelicanSubCell extra transformers and a
config object as parameter. This might seem redundant, but with configuration system you'll see that one can register an inactive transformer on all exporters and activate it at will form its config files and command line.
pelican = RSTExporter(transformers=[PelicanSubCell], config=c)
print pelican.from_notebook_node(jake_notebook)[0]
I'll keep only cells from 4 to
All part on figure naming in template removed since many thinfs in API have changed
I think this is enough for now, As you have seen there are a few bugs here and there I need to correct before continuing. Next time I'll show you how to modify template :
{%- extends 'fullhtml.tpl' -%} {% block input_group -%} {% endblock input_group %}
... and you just removed all the codecell by keeping the output and markdown codecell, isn't that wonderfull ? You want to wrap each cell in your own div ?
{%- extends 'fullhtml.tpl' -%} {% block codecell %} <div class="myclass"> {{ super() }} </div> {%- endblock codecell %}
Try to look at what Jinja can do, thenlearn about Jinja Filters and imagine they can magically read your config file.
For example we provide a filter that highlight by presupposing code is Python. Or one that wraps text at a default length of 80 char... Want a rot13 filter on some codecell when doing exercises for student ? See you next time !
One more example from one Pull-Request.
from IPython.nbconvert.filters.highlight import _pygment_highlight from pygments.formatters import HtmlFormatter from IPython.nbconvert.exporters import HTMLExporter from IPython.config import Config from IPython.nbformat import current as nbformat def my_highlight(source, language='ipython'): formatter = HtmlFormatter(cssclass='highlight-ipynb') return _pygment_highlight(source, formatter, language) c = Config({'CSSHtmlHeaderTransformer': {'enabled':True, 'highlight_class':'highlight-ipynb'}}) exportHtml = HTMLExporter( config=c , filters={'highlight': my_highlight} ) (body,resources) = exportHtml.from_notebook_node(jake_notebook)
from jinja2 import DictLoader dl = DictLoader({'html_full.tpl': """ {%- extends 'html_basic.tpl' -%} {% block footer %} FOOOOOOOOTEEEEER {% endblock footer %} """}) exportHtml = HTMLExporter( config=None , filters={'highlight': my_highlight}, extra_loaders=[dl] ) (body,resources) = exportHtml.from_notebook_node(jake_notebook) for l in body.split('\n')[-4:]: print l
<p>This post was written entirely in an IPython Notebook: the notebook file is available for download <a href="">here</a>. For more information on blogging with notebooks in octopress, see my <a href="">previous post</a> on the subject.</p> </div> FOOOOOOOOTEEEEER | https://matthiasbussonnier.com/posts/06-NBconvert-Doc-Draft.html | CC-MAIN-2019-09 | refinedweb | 2,388 | 56.96 |
Saving data in the CSV format is fine most of the time. It is easy to exchange CSV files, since most programming languages and applications can handle this format. However, it is not very efficient; CSV and other plaintext formats take up a lot of space. Numerous file formats have been invented, which offer a high level of compression such as zip, bzip, and gzip.
The following is the complete code for this storage comparison exercise, which can also be found in the
binary_formats.py file of this book's code bundle:
import numpy as np import pandas as pd from tempfile import NamedTemporaryFile from os.path import getsize np.random.seed(42) a = np.random.randn(365, 4) tmpf = NamedTemporaryFile() ...
No credit card required | https://www.oreilly.com/library/view/python-data-analysis/9781783553358/ch05s02.html | CC-MAIN-2019-18 | refinedweb | 125 | 59.7 |
Now that SendGrid joined the Google Cloud Platform Partner Program, it’s extremely simple to integrate email into your Google App Engine apps. In addition to helping get your email delivered, we offer statistics and advanced APIs to send, receive and analyze email. Below you’ll learn five key lessons for using SendGrid with Google App Engine.
1. Get Started
The first thing you need to know is that Google App Engine developers can send a lot of emails through SendGrid for free. Before you dive into any of the examples below, be sure to register your account to claim your free emails.
2. Use SendGrid Client Libraries
There are two main ways to send email through SendGrid. You can change your SMTP server settings or you can use the Web API. Both let you harness the same feature-set, but some of the advanced tools are easier with the Web API. The client libraries simplify the process even more by wrapping the HTTP calls in methods you can reference in popular programming languages.
There are special libraries built specifically for Google App Engine developers running Java or Python.
Python
Copy the SendGrid Python library into your project by placing the files in a sendgrid sub-directory. Then you’ll need a couple import statements and just a few lines of code:
from sendgrid import Message
# make a secure connection to SendGrid
s = sendgrid.Sendgrid(”, ”, secure=True)
# make a message object
message = sendgrid.Message(“from@mydomain.com”, “message subject”, “plaintext message body”, “<strong>HTML message body</strong>”)
# add a recipient
message.add_to(“someone@example.com”, “John Doe”)
# use the Web API to send your message
s.web.send(message)
Java
copy SendGrid.java to the src directory of your app. You’ll import this class so that you can create a SendGrid instance and send mail with simple commands:
// set credentials
Sendgrid mail = new Sendgrid(“”,”");
// set email data
mail.setTo(“foo@bar.com”)
.setFrom(“me@bar.com”)
.setSubject(“Subject goes here”)
.setText(“Hello World!”)
.setHtml(“<strong>Hello World!</strong>”);
// send your message
mail.send();
3. Engage the Event API
Once you start sending email from your app, you’ll want to know more about how it’s performing. The statistics within SendGrid are one of its best features. The Event API lets you see all this data as one giant firehose. There are many different ways you could use the data. Some common uses are to integrate mail stats into internal dashboards or use it to respond immediately to unsubscribes and spam reports. Advanced users of the Event API raise the engagement of their emails by sending only to those who have clicked or opened within the last few months.
Technically, the Event API is a webhook. Whenever an event registers within SendGrid’s system, it fires off a bit of descriptive JSON to your app, which can react or store the data however you want.
The Nine Events
Processed — Message has been received and is ready to be delivered.
Dropped — Message will not be delivered, either by error or the address is suppressed.
Delivered — Message was accepted by receiving server (not necessarily inbox)
Deferred — Recipient’s email server temporarily rejected message.
Bounce — Receiving server could not or would not accept message.
Open — Recipient has opened the HTML message (with images enabled)
Click — Recipient clicked on a link within the message.
Spam Report — Recipient marked message as spam.
Unsubscribe — Recipient clicked on messages’ subscription management link.
Activate the Event API
The nine events provide a fairly complete a picture of your app’s email. To feed this data back into your app, you’ll need to configure the Event API in your account.
While logged into SendGrid enable the Events app
Edit the Event app settings
Choose the events you want to send a notification–why not all nine?
Enter the URL for the endpoint within your Google App Engine app where you want to receive events.
Save the changes and now any of the selected events will be send to your app as JSON.
Events as JSON
Every message may trigger a dozen or more events. Each is sent separately, unless you tell SendGrid to batch events. A single event is just a simple bit of JSON.
For example, here’s how an open event might look:
Some events send additional data, such as a status code or the URL clicked. You’ll find all the potential fields detailed in the Event API documentation.
If you batch event calls, you’ll receive multiple events at a time. Batched events currently post every second, or when the batch size reaches 1MB (one megabyte), whichever occurs first. Batched events look exactly like individual events, but are separated by a newline:
{"timestamp": "1234567890", "email": "eve@example.com", "event": "delivered"}
{"timestamp": "1234567890", "email": "able@example.com", "event": "processed"}
4. Set Categories and Unique Arguments
If the Event API lets you harness the firehose of email data from SendGrid, categories and unique arguments help you filter that firehose. As you send email, you can add categories and unique arguments through either SMTP headers or the Web API. Later, you can view statistics on subsets of email or dive into individual records.
First, it’s a good idea to understand the difference between the two:
- Categories help organize your email analytics by enabling you to tag emails by type. For example, you may want to track analytics separately on password reset messages. Or perhaps you want to perform cohort analysis or otherwise segment your users. You can have unlimited categories, creating an endless number of ways to analyze your email.
- Unique arguments are a used to attach data to individual emails. For example, you may want to include a registered user id in an email so events can quickly be tied back to an account. Where categories are used to group emails together, unique arguments keep them separated.
Both categories and unique arguments are added with x-smtpapi headers by including them as JSON values:
"category": "Example Category",
"unique_args": {
"user_id": 1235,
"first_name": "Pedro"
}
}
Multiple categories can be set by wrapping them in an array:
"category": [
"rocks",
"rivers",
"trees"
]
}
If you use the SendGrid client libraries, you don’t even need to worry about the JSON syntax for adding categories and unique arguments.
Python
# set two categories for this message
message.add_category(["Category 1", "Category 2"])
# set 'Customer' to a value of 'Someone'
message.add_unique_argument("Customer", "Someone")
Java
// set two categories for this message
mail.addCategory("Category 1");
mail.addCategory("Category 2");
// set 'Customer' to a value of 'Someone'
mail.addUniqueArgument("Customer", "Someone");
5. Get Interactive with Inbound Parsing
SendGrid is really good at sending email, but there’s a lesser-known feature to-domain you use for the Parse API.
Set Up DNS
Making a change to your DNS means updating the MX record wherever you manage your DNS. Typically, this would be a domain registrar, hosting company or a DNS service.
Point the MX Record of the domain/hostname or sub-domain to mx.sendgrid.net. Remember, all incoming email will now flow through SendGrid, so don’t use a domain where you are currently receiving email.
DNS can take anywhere from a few minutes to several hours to propagate, so it’s important to set this up right away.
Point to Your Endpoint
While logged into your SendGrid account, navigate to the Parse API settings. To add a new endpoint where emails will be directed, you’ll need to include the following information:
- Hostname is the domain or sub-domain where emails that you want parsed will be sent.
- URL is the endpoint within your Google App Engine app where you want to receive emails as data.
- Spam Check is an optional filter to restrict potential spam from being passed along.
- Click the Add button and you should see your endpoint in the list of Hosts & URLs.
Read Emails as JSON
Once DNS has propagated and your endpoint is enabled, any email sent to the hostname you used will be sent to your endpoint. Developers love reading JSON and with the Parse API that’s exactly what an email becomes.
"from": "adam@example.com",
"to": "eve@example.com",
"subject": "Parse API example",
"html": "<p>Body of email in HTML (if set by sender)</p>",
"text": "Text body of email (if set by sender)"
}
There are more fields that come within the JSON, including attachments. That’s all explained in SendGrid’s Parse API documentation.
Now you’re set up receiving email with SendGrid, in addition to sending.
If you implement all five of these SendGrid Best Practices, you’ll be an email power user. Get started now with SendGrid on Google App Engine by claiming your free emails and downloading the libraries. | http://sendgrid.com/blog/5-best-practices-for-using-sendgrid-with-google-app-engine/ | CC-MAIN-2015-40 | refinedweb | 1,459 | 63.7 |
The typical pattern is to create a task and use a continuation (.then) with some lambdas. In fact, in many cases writing the code itself is not so hard, but the readability is not good.
C++ Coroutines can simplify your async code, and make the code easy to understand, write, and maintain. But rather than give you a 1000-word description, let’s look at an example:
In this code we try to open an image, using PickSingleFileAsync and OpenReadAsync:
void AsyncDemoForBuild::MainPage::PickImageClick(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    using namespace Windows::UI::Xaml::Media::Imaging;
    using namespace Windows::Storage::Pickers;
    using namespace concurrency;

    auto picker = ref new FileOpenPicker();
    picker->FileTypeFilter->Append(L".jpg");
    picker->SuggestedStartLocation = PickerLocationId::PicturesLibrary;

    create_task(picker->PickSingleFileAsync()).then([this](Windows::Storage::StorageFile^ file)
    {
        if (nullptr == file)
            return;

        create_task(file->OpenReadAsync()).then([this](Windows::Storage::Streams::IRandomAccessStreamWithContentType^ stream)
        {
            auto bitmap = ref new BitmapImage();
            bitmap->SetSource(stream);
            theImage->Source = bitmap;
            OutputDebugString(L"1. End of OpenReadAsync lambda.\r\n");
        });

        OutputDebugString(L"2. End of PickSingleFileAsync lambda.\r\n");
    });

    OutputDebugString(L"3. End of function.\r\n");
}
The code introduces complexity because of the async model, but if it was synchronous, it would look a lot nicer:
//Pseudo Code
void ShowImage()
{
    auto picker = ref new FileOpenPicker();
    picker->FileTypeFilter->Append(L".jpg");
    picker->SuggestedStartLocation = PickerLocationId::PicturesLibrary;

    auto file = picker->PickSingleFile();
    auto stream = file->OpenRead();

    auto bitmap = ref new BitmapImage();
    bitmap->SetSource(stream);
    theImage->Source = bitmap;
}
With Coroutines, we can use co_await in C++, but the function still needs to return a task, so the code could be written like this:
#include <experimental\resumable>
#include <pplawait.h>

using namespace Platform;
using namespace concurrency;
using namespace Windows::Storage::Pickers;
using namespace Windows::UI::Xaml::Media::Imaging;

task<void> AsyncDemoForBuild::MainPage::PickAnImage()
{
    auto picker = ref new FileOpenPicker();
    picker->FileTypeFilter->Append(L".jpg");
    picker->SuggestedStartLocation = PickerLocationId::PicturesLibrary;

    auto file = co_await picker->PickSingleFileAsync();
    if (nullptr == file)
        return;

    auto stream = co_await file->OpenReadAsync();

    auto bitmap = ref new BitmapImage();
    bitmap->SetSource(stream);
    theImage->Source = bitmap;

    OutputDebugString(L"1. End of coroutine.\r\n");
}
And we could call it this way:
void AsyncDemoForBuild::MainPage::PickImageClick(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e) { PickAnImage(); }
As you can see in this sample, UWP C++ code can be made much simpler by using co_await; almost as simple as the synchronous form. We see also that co_await can be used with C++/Cx code, meaning you can use ‘^’ references without any ambiguities.
Of course, the code has to be compiled using the /await option in the command line:
It’s important to note that in Visual Studio 2015, Update2 you can also use the /SDL option.
This form of Coroutine (co_await) is the easiest way to use Coroutines. However, Coroutines in C++ can do much more. For example, you can:
· Define new awaitables to customize await for your environment using existing coroutine types.
· Define new coroutine types.
Have a look at this post about customized awaiters. We’ll have more posts to come on other async coding subjects.
Note that Coroutines are not yet part of the C++ standard, but are only found in a TS (Technical Specification) and needs to be seen as experimental (more info here). However, since we removed some compatibility friction with the /RTC and /SDL options in VS2015 Update2, we consider Coroutines ready for production. Please let us know about your experiments, your questions, and any issues you find.
We recorded a video about this for //build 2016.
Join the conversationAdd Comment
Please format. No one’s gonna read it like that.
It’s a little bit late, but I fixed the formatting of this post. Our blogging software is way more capable these days.
You are capturing the “this” pointer by value which may be potentially dangerous if the object’s lifetime is shorter than the lambda execution time.
Really, really sad to see the words “another couple of years” and “a new TS” wrt coroutines… That’s sufficiently scary enough to essentially halt all forward progress on modernizing real, async code for the foreseeable future :-/
It’s good to see the await functionality coming along nicely, but can I suggest that you add the necessary #include and using namespaces with any examples you show to make it easier for us to try out the code snippet examples. When you don’t implicitly know, it’s not trivial to discover what you need to add to reproduce what you’re trying to show us.
Now, is there any chance the C++ debugger will be able to handle the WinRT COM objects in the same elegant way that we see them when debugging the equivalent thing in C#? Currently in C++ I only see the raw object, which is pretty useless.
@Davel, you’re right, here are the include used in this sample.
experimental\resumable // for the Coroutine itself, co-await etc
pplawait.h// for the task, use using namespace concurrency;
ppltasks.h // in the pch.h by default
and don’t forget to use: using namespace Platform;
eric
@DaveL you should see a message about the need to load symbols in the watch window? Once you load symbols for the WinRT COM object you should have a much nicer view as Windows has created Natvis files for most of them. If you still aren’t seeing a nice view let us know which one it is and we’ll work with Windows to make sure that gap gets filled
Andrew,
For this line of code:
auto sf = co_await fp.PickSingleFileAsync();
Hovering over the sf variable I get: “No type information available in symbol file for windows.storage.dll”.
However, the modules pane shows that windows.storage.dll does have symbols loaded.
Expanding the sf variable, shows it as Platform::Object, and below that, 6 interface methods are listed: QueryInterface, AddRef, Release, ….GetTrustLevel`adjustor.
Nothing like the friendly experience when using C# :(
@DaveL
Thanks for the clarification, we definitely currently have a gap currently for file objects because getting a view of the object requires kernel state. C# does this by running code in the target process, something currently Natvis does not support as it has the potential to cause serious side effects in the application. We’ll revisit the experience for file objects and see what we can do to improve it. Thanks for bringing this up
Seriously guys, no decent C++ developer will ever use this managed-extended syntax-garbage collected-runtime aware version of C++.
if I wanted managed, GC language for windows, I’d be using C#. instead of wasting your time and money on products which no developer ever going to use, how about develop some real C++ libraries which utilize idiomatic, standard C++14/17 that developers will ACTUALLY consider to use?
Are you serious? I did use C++/CLI often in the past to interface with C++ libraries seamlessly into my C# projects.
I thinks it’s a quite nice feature ;)
There is some hope there at least, since MS did actually employ someone.
I really think they hurt themselves badly by not having a decent C/C++ binding and treating the desktop as just another mobile device. To be honest, I think even having the native COM objects documented would have helped more than they realise. Keeping the native way of doing it out of the picture, and basically forcing C++ developers to go through a language extension stopped people using other compilers from targeting the store.
This is C++/CX – extended syntax, yes, but it is neither managed nor garbage collected.
Seriously guy, no one will take your ranting seriously if you don’t have even the basic facts straight.
At the same time, doesn’t that hint at if people still see that the syntax is the same then people will assume that it acts the same as being another reason why C++/CX wasn’t the best idea?
There has been major confusion about this extension, I have seen the ^ being referred to as a managed pointer for example. So to be honest, I’m not surprised that the misconception of the extension being managed still exists.
It’s a double-edged sword – MSFT specifically reused the C++/CLI syntax for C++/CX to eliminate the learning curve for people who already knew the former. This is good, a selling-point even, for users who are committed to or stuck with the Microsoft stack regardless. Those who are confused are IMO confused because they’re simply not interested enough to learn the difference, which in turn implies they’re likely not interested in using any extended language anyway. So while I don’t personally use or have any intention of using C++/CX and only very sporadically have occasion to use C++/CLI, I think MSFT made the right move here because if I _needed_ to use C++/CX I could do so with minimal effort. And isn’t ‘minimal effort’ the whole point to begin with? Just my 2¢ :-]
Its a fair criticism that C++ devs just want to write plain-old, standard C++. That being said, that’s the language of the WinRT projection used to demonstrate the utility of the coroutines TS here. Coroutines is the headline here, though, and that’s a straight-up C++ feature — no managed-language, GC magic going on there. Same goes for C++/CX — the syntax looks like C++/CLI but there’s no CLR involved — the hats here are just shorthand for “C++ compiler, please track these objects for me.”
@Andrew B Hall
For this line of code: auto sf = co_await fp.PickSingleFileAsync();
Hovering over the sf variable I get: “0x05d25c08 ”
However, the modules pane shows that windows.storage.dll does have symbols loaded.
Expanding the sf variable, shows it as Platform::Object, and below that, 6 interface methods.
Nothing like the experience when using C# :(
Thanks for all the comments
Its not possible to use co_await in a constructor, right?
It goes beyond me to understand the obsession to async every single API.
On the one hand, it should be the developers’ responsibility to optimize their codes. Android is much better on this: Developer has most freedom to do AsyncTask, as long as they avoids network on UI threads, etc. This causes unreadability problem on the first place; without that stupid obsession, you don’t need to implement co_await at all. That is not to count it forces developer to resort to create_task().get() constantly which is not even efficient.
On the other hand, async task are more difficult to debug as it runs in different context. VS typically tells me something in my code that crashes some ntdll, kernel32, … It fails to give me a stack trace to my code because it is not possible to do that across processes. And debugging becomes a guessing game.
I’m having a problem with the header file that doesn’t like task
“Cannot overload functions distinguished by return type”
Hi, for co_await example:
In MainPage.xaml.h, add these forward declarations to the private section of the MainPage class declaration.
void PickImage_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e);
concurrency::task PickAnImage();
Hi
i’m migrating from VS2013 to VS2017 and convert all .then() method from my cpprestsdk project to co_await but i have some issue while
compiling my solution.
In particular i have issues when compile a project when i include these .h files
#include
#include
Suggestions?
Hi
Can you edit your question, I can’t see the header who cause issue.
i don’t find how to edit btw issue are on pplawait.h and experimental\resumable
Dbl check if you have /await on the compiler cmd line for the platform you target (x64 or x86). What is the error message? Feel free to reach out me at ericmitt@microsoft.com
i sent you an email,btw it works if i open a new project to test co_await but the issue is probably because i’m migrating from a VS2013 Solution to VS2017. | https://blogs.msdn.microsoft.com/vcblog/2016/04/04/using-c-coroutines-to-simplify-async-uwp-code/ | CC-MAIN-2018-26 | refinedweb | 1,983 | 53.31 |
Red Hat Bugzilla – Bug 738193
rhn_check fails when RHN channels are changed
Last modified: 2012-02-21 01:29:14 EST
Description of problem:
rhn_check events from 'packages.' namespace fail, when the caching file
(rhnplugin.repos) contains outdated information.
Version-Release number of selected component (if applicable):
yum-rhn-plugin-0.5.4-22.el5_7.2
How reproducible:
always
Steps to Reproduce:
1. Register system with RHN Hosted to the base channel
(e.g. rhel-i386-server-5)
2. # yum repolist
3. Register system with RHN Satellite without any channel
4. Schedule 'Update Package List' on RHN Satellite webui
5. # rhn_check -vv
Actual results:
Fatal error in Python code occured
yum.Errors.RepoError: Cannot retrieve repository metadata (repomd.xml) for repository: rhel-i386-server-5. Please verify its path and try again
Expected results:
rhn_check tool should not crash, even in case the rhnplugin.repos
contains outdated information.
Additional info:
Created attachment 523112 [details]
snippet from /var/log/up2date
Regression against earlier rhel5 releases (rhel5.5 and older).
Having this issue as well.
I've tried to manually clean up /var/cache/yum to force an update of the rhnplugin.repos but I am unable to get a usable rhnplugin.repo file.
Has anyone managed a work-around?
Thanks,
Henry
Edit:
This is on RHEL6.1
(In reply to comment #5)
> Edit:
>
> This is on RHEL6.1
More debugging -- turning off SSL in /etc/sysconfig/rhn/update lets rhn_check succeed and everything else work as expected.
My issues appear to be duplicates of #692118 and corresponding Fedora bugs:
738566 - python-urlgrabber
738367 - yum
738568 - anaconda
Added RHTS keyword.
QA would like to have an automated test for this issue. Well, currently,
the issue might be exposed by our automation, when series of different
tests make a use of different channel sets. However, having separate test
for this issue, would be a preferred. That way, would would minimize the
risk that test cases gets lost by yum-clean-like workarounds.
Added qa_ack+ as well.
The issue has been already fixed by z-stream errata package
yum-rhn-plugin-0.5.4-22.el5_7.2.noarch.
The issue has been addressed as a part of bug 734965 and bug 735. | https://bugzilla.redhat.com/show_bug.cgi?id=738193 | CC-MAIN-2016-22 | refinedweb | 368 | 61.12 |
Created on 2013-05-23 08:56 by ncoghlan, last changed 2013-07-19 00:05 by python-dev. This issue is now closed.
Another attempt at tackling the "but I want to ensure my enum values are unique" problem that PEP 435 deliberately chose not to handle. My previous suggestion (in issue 17959) was rightly rejected due to the other problems it caused, but this idea is much cleaner and simpler.
All we would need to do is provide the following class decorator in the enum module:
def unique(new_enum):
for name, member in new_enum.__members__.items():
if name != member.name:
msg = "Alias {!r} for {!r} not permitted in unique Enum"
raise TypeError(msg.format(name, member))
return new_enum
Used as so:
>>> @enum.unique
... class MyEnum(enum.Enum):
... a = 1
... b = 2
... c = 1
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "<stdin>", line 6, in unique
TypeError: Alias 'c' for <MyEnum.a: 1> not permitted in unique Enum
This is certainly an effective method, but it places safety off by default. I would rather have a system that was from duplicates by default but had an easy override.
The method I had in place in my original code was something like:
class Color(Enum, options=DUPLICATES):
red = 1
green = 2
blue = 3
grene = 2
Without the DUPLICATES option, the above class would raise an error. Safe(r) by default, easy override.
If my suggestion doesn't fly, we should definitely put Nick's in.
I take Guido's acceptance of the PEP (and the discussion in the previous
issue) as meaning the default behaviour (allowing aliases) is no longer up
for debate. Hence this suggestion to offer a self-documenting way to opt in
to the more restrictive variant.
I'm not giving up hope yet. Plenty of Python features no longer work the way they did when their PEP was accepted. ;)
You don't generally see reversals of decisions where Guido has made an explicit choice based on consistency with the rest of the language. The fact that aliases are permitted in enumerations by default is consistent with the normal behaviour of namespaces and dictionaries in general, so providing a way to opt in to the stricter checks is a better solution.
The idea of passing flags or other configuration options to the metaclass is also rather ugly. Offering permissive behaviour by default with an easy way to opt in to additional restrictions is far more in keeping with the general "consenting adults" ethos of the language.
Oh. Well, I like your decorator. :)
Don't worry, compared to some of the ideas I've had (and rightfully had shot down) over the years, that one was positively sensible :)
+1 for the decorator!
I haven't seen any discouraging words regarding the decorator. If no one has any compelling reasons why it shouldn't be added, I'll craft a version and put it in (only real difference with Nick's would be catching all the duplicates at once instead of one at a time).
unique() added to enum.py; tests added; docs updated.
If no complaints within a few days I'll push them through.
Sent some review comments. I'll be on a short vacation this weekend, so please wait at least until next week so I can review the changes. Also, Nick should definitely review this too :)
Integrated comments.
The documentation still contains an "Interesting example": UniqueEnum. I would prefer to only have one obvious way to get unique enum, so please just drop this example. Or at least, mention the new decorator in the example.
New changeset 2079a517193b by Ethan Furman in branch 'default':
closes issue18042 -- a `unique` decorator is added to enum.py | https://bugs.python.org/issue18042 | CC-MAIN-2021-39 | refinedweb | 624 | 63.8 |
I have 3 things that I need to clarify:
1. I have a MC33HB2001 chip which I wish to set at a 10.7 current limit when I start using it. I want to use an arduino uno's SPI as the master to interface with the chip. However, I am not too sure about the order of things I should send to the chip, and how I should go about sending the commands to the chip since arduino is 8 bit and the chip is 16 bit. Here is the code I have come up with so far:
#include <SPI.h> //SPI library
#define SS 10 // define slave select pin here
const uint16_t limit = 58256; // 1110001110010000 is the 16 bit that fits my specification
SPISettings chip(10000000, MSBFIRST, SPI_MODE0); // 10MHz, MSB first, output on falling, data capture on rising
void setup() {
pinMode(SS,OUTPUT); //setting slave select pin as a digital output
SPI.begin(); //initialise SPI bus
digitalWrite(SS,LOW); //contact MC33HB2001
SPI.transfer16(limit); // send the 16 bits over
digitalWrite(SS,HIGH); //end contact with chip
}
void loop() {
other portion of code that is not related to chip
}
I want to ask if I have to just send the 16 bits over once to the chip and that is the end of story, or must I keep running the commands in the arduino loop()?
2. For the MC33HB2001 chip, to send commands to it, do I just send a single 16 bit transmission, or must I do things in this order:
1. Send a 'Write' command 16 bit
2. Send my specific 'current limit' 16 bit
3. End transmission
3. Under the MC33HB2001 documentation, for the 'control and configuration', if i set bit 2 (active INPUT control mode) to 0, does this mean that bit 1 (virtual input 1) and bit 0 (virtual input 2) can be any value since I am not using virtual input to control my chip? | https://community.nxp.com/thread/454902 | CC-MAIN-2018-34 | refinedweb | 322 | 65.96 |
<am.h> contains the am_cleanup() function which cleans up all internal data structures created by am_sso_init(), am_auth_init(), or am_policy_init(). It needs to be called only once at the end of any calls. After cleanup, the relevant initialize function must be called again before using any of its interfaces.
Any properties passed to the initialization functions am_sso_init(), am_auth_init(), or am_policy_init() should be destroyed only after am_cleanup() is called.
#include "am.h" AM_EXPORT am_status_t am_cleanup(void);
This function takes no parameters.
This function returns one of the following values of the am_status_t enumeration (defined in the <am_types.h> header file):
If successfully cleaned up.
Netscape Portable Runtime (NSPR) error.
If any other error occurred. | http://docs.oracle.com/cd/E19681-01/820-3738/gclym/index.html | CC-MAIN-2014-41 | refinedweb | 112 | 50.84 |
Important: Please read the Qt Code of Conduct -
[SOLVED]Access dynamically created object (created with Qt.createQmlObject in Javascript) from another dynamically created object
- Tory Gaurnier last edited by
OK, so, it seems I'm having one issue after another, and here is my latest. I have a dynamically created ListModel created in Javascript with Qt.createQmlObject(), then I have a dynamically created GridView also created in Javascript with Qt.createQmlObject(), which needs to set it's model to the dynamically created model. Now I can not figure out how to do this for the life of me, because dynamically created objects have no QML id. The model is added to my root element, I've tried 'model: root.myModelID' just to see if it would work, and of course it didn't.
Now from what I understand a dynamically created object can only be accessed from the object returned by Qt.createQmlObject(), so I even tried making making a globalvars.js file, declaring 'var model;' in the file, including it in my QML file, then returning my dynamically created model as such:
globalvars.js:
@
var myModel;
@
QML file:
@
import "globalvars.js" as Vars
// Then later I have this
Component.onCompleted: {
Vars.myModel = createMyModel(); /* The ListModel created by Qt.createQmlObject() is returned /
createMyGridView(); / This is where I use Qt.createQmlObject() to create a GridView, which is then trying to use Vars.myModel as the model */
}
@
Now when I try that, it throws the error: 'ReferenceError: Vars is not defined'
This is getting very frustrating, I need to figure out a way to set the model, I tried to search to see if GridView has a method to set the model, because that would solve my problem as I wouldn't have to have it set the model in the dynamic creation (in Qt.createQmlObject()), I could just pass myModel to createMyGridView(), then call the Javascript method of myGridView, but GridView has no method to set the model.
I hope my question makes sense, if it doesn't please let me know and I'll try to clarify further. Any help is greatly apprecieated.
- JapieKrekel last edited by
I'm not sure if I understand what you are building...
You should be able to simply assign to the model of your created gridview
@var myGrid=createMyGridView();
var myModel = createMyModel();
myGrid.model = myModel;@
Sounds complicated to use a dynamically created ListModel. I would use a JavaScript array in that case.
And a dynamic GridView, I would use a separate QML file for the GridView and stuff and use createComponent and createObject. But I will not doubt your reasons for making it dynamically.
I cooked up a simple example with a dynamic GridView (do not see why you need to make it dynamically, but anyhow).
Rectangle {
id: main
width: 360
height: 360
property variant myModel: [{name: "Apple", cost: 2.45}, {name: "Orange", cost: 3.95}, {name: "Banana", cost: 1.95}, {name: "Ananas", cost: 4.25}] function createMyGridView(theParent) { return Qt.createQmlObject('import QtQuick 1.1; GridView {' + 'anchors.fill: parent;' + 'delegate: Component {' + 'Rectangle {' + 'width: 30;' + 'height: 20;' + 'Text {'+ 'anchors.fill: parent;' + 'text: modelData.name + "= $" + modelData.cost;' + '}' + '}' + '}' + '}', theParent, ""); } Component.onCompleted: { var jsModel = [{name: "Apple", cost: 2.45}, {name: "Orange", cost: 3.95}, {name: "Banana", cost: 1.95}, {name: "Ananas", cost: 4.25}]; var gridview = createMyGridView(main); gridview.model = jsModel; // main.myModel; }
}@
In this example I show two ways of making the data model. One is as a variant property in which you can basically put any JavaScript Object or Array. Since data models want to have a list, I put an Array in it.
If you only need to use it in Javascript you can use the var jsModel example.
Using either of them is simply assigning it to the model of the gridview.
Accessing the data from your model in the delegate of your Gridview is done by using the property modelData.
I hope it helps. Happy coding...
- Tory Gaurnier last edited by
Thank you, you're amazing :D
myGrid.model = myModel; was all I needed, and I don't know why I didn't think of trying that, it just makes sense when I saw it in your post.
And I actually was making my ListModels and GridViews as arrays, I just posted a very very simplified version of what I'm doing, and I thought of having separate QML files for my dynamic objects, but the way I'm making it this made the most sense, I need to be able to add variables, like so:
@
//Pretend this is inside a Qt.createQmlObject()
'//MyDynamicQmlStuff' + i + '//MyDynamicQMLStuffContinued'
@
The way I'm creating my app having these components dynamically created is crucial (at least in my head :P ), it's not too difficult though, I just get stuck on these simple little things. | https://forum.qt.io/topic/27709/solved-access-dynamically-created-object-created-with-qt-createqmlobject-in-javascript-from-another-dynamically-created-object/1 | CC-MAIN-2022-05 | refinedweb | 799 | 57.47 |
ODI - IKM SQL to File Append - Header not Generateduser8948518 Sep 4, 2012 7:13 PM
I'm using ODI IKM SQL to File Append to create a text file, but the header is not being generated. And the GENERATE_HEADER is set to Yes. The file is Tab delimited and the Heading (number of lines) is set to 1.
Seems to only be an issue with HFM files coming from the Unix server.
Any suggestions?
Thanks, Mike
Seems to only be an issue with HFM files coming from the Unix server.
Any suggestions?
Thanks, Mike
This content has been marked as final. Show 7 replies
1. Re: ODI - IKM SQL to File Append - Header not Generated932033 Sep 5, 2012 7:06 AM (in response to user8948518)Execute interface and go to Operator tab.
Expand task node and view execution steps.
Find header-generated step and view code.
If code is empty then step not executed => check IKM code - why.
If code is not empty => check that code correct (not generated empty header).
2. Re: ODI - IKM SQL to File Append - Header not Generatedtluefex Sep 6, 2012 7:22 PM (in response to user8948518)I had similar issue. Setting all field data types to string in the Datastore did the trick to generate the headers, but introduces other issue with double quotes around numbers. Feature request was placed with Oracle
3. Re: ODI - IKM SQL to File Append - Header not Generateduser8948518 Sep 7, 2012 7:11 PM (in response to user8948518)Ok, getting the following error in step 6 - Integration - HFM_EA_Translate - Insert Column Headers
java.lang.NumberFormatException
at java.math.BigDecimal.<init>(BigDecimal.java:459)
at java.math.BigDecimal.<init>(BigDecimal.java:728)
at com.sunopsis.sql.SnpsQuery.updateExecStatement(SnpsQuery.java)
at com.sunopsis.sql.SnpsQuery.addBatch(SnpsQuery.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execColl.h.y(h.java)
at com.sunopsis.dwg.cmd.e.run(e.java)
at java.lang.Thread.run(Thread.java:662)
The last two columns in the file are numeric which seems to be causing the issue. Will change formatting for the two columns (String, Numeric, etc.) to see if I can resolve the header issue.
Thanks,
Mike
4. Re: ODI - IKM SQL to File Append - Header not Generateduser8948518 Sep 7, 2012 9:31 PM (in response to user8948518)Ok, converting the numeric columns to string resolved the issue with the header not being generated.
However I now see the real problem that the numeric column has leading spaces in the number to make the the column 30 in length. Seems like a bug with HFM EA extracts and data values.
Now I need to remove the spaces from the data value. May need to create a table to load the data to to use the string functions.
Any other suggestions would be helpful.
Thanks,
Mike
5. Re: ODI - IKM SQL to File Append - Header not GeneratedmRainey Sep 8, 2012 4:45 AM (in response to user8948518)Have you tried changing the order of the columns? Moving the numeric columns so they are not last, and keeping their datatype as numeric, and instead placing a string column at the end? I remember seeing this as the solution to a similar issue at some point, but cannot recall the exact details. Worth a shot.
Regards,
Michael Rainey
6. Re: ODI - IKM SQL to File Append - Header not Generatedtluefex Sep 11, 2012 8:54 PM (in response to user8948518)if it is only about removing leading or trailing spaces you could try using a Linux/Unix tool like sed or awk on your source system (if your ODI Agent is running there).
To be honest, I actually think it is more a bug than a missing feature when an ETL tool is not capable of creating proper csv files as i.e. described here:
7. Re: ODI - IKM SQL to File Append - Header not GeneratedLuizFilipe Jul 11, 2013 9:16 PM (in response to user8948518)
You can implement the generation in Jython, its quite easy.
import os
vSrc = open('<%=odiRef.getSchemaName( "<YourSchema>" , "D" )%>/<%=snpRef.getTargetTable("RES_NAME")%>', 'w')
try:
vCol = "<%=snpRef.getColList("", "[COL_NAME]", ";", "", "INS") %>" + "\n"
vSrc.write(vCol)
finally:
vSrc.close()
Just add this command to KM.
[]'s | https://community.oracle.com/thread/2437657?tstart=0&messageID=10571407 | CC-MAIN-2016-50 | refinedweb | 699 | 58.48 |
Using derived CListCtrl in CListView - Undocumented
Posted by Zafir Anjum on August 6th, 1998
If you weren't satisfied with the answer to this same issue in the previous topic then maybe this one will. However, there is a big risk involved. It uses some undocumented features of MFC and maybe I haven't got all the angles covered. So use this at your own RISK.
The basic idea to make this work is that we set up a couple of member variables of the CListCtrl class to connect it to the actual control, and we funnel the Windows messages on to the CListCtrl object. The actual list view control is owned by the CListView-derived object, and MFC does not allow a single control to be owned by multiple C++ objects.
Step 1: Derive a new class from CMyListCtrl

There are two reasons for deriving a new class. First, the CListView derived class needs access to some of the protected members of CListCtrl, so this class declares the CListView derived class as a friend. Second, we override AssertValid(). AssertValid() is defined for debug builds only and our overridden function does nothing. The default version would have asserted since our object is not really in a consistent state as far as MFC is concerned.
class CFriendlyListCtrl : public CMyListCtrl
{
    CFriendlyListCtrl() {}
#ifdef _DEBUG
    void AssertValid() const {}
#endif
    friend class CListVw;
};
Step 2: Add member variable in CListVw

Add a protected member of the type CFriendlyListCtrl in CListVw. We will use this object to connect to the list view control and to channel messages to it. Use this member wherever you would use GetListCtrl().
protected:
    CFriendlyListCtrl m_listctrl;
Step 3: Override OnCreate in CListVw

Override the OnCreate() function. The framework calls this function immediately after the window is created. After calling the base class version, we initialize the member variable m_listctrl. We assign the handle of the control to the m_hWnd member. This member is implicitly used by many of the functions of a CWnd-derived class. The next variable is probably unfamiliar. The m_pfnSuper member is a function pointer and it holds the original WndProc of the control before it was sub-classed by MFC. We call PreSubclassWindow() to make sure that any code in there will be executed.
int CListVw::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CListView::OnCreate(lpCreateStruct) == -1)
        return -1;

    m_listctrl.m_hWnd = m_hWnd;
    m_listctrl.m_pfnSuper = m_pfnSuper;
    m_listctrl.PreSubclassWindow();

    return 0;   // 0 tells the framework that creation succeeded
}
Step 4: Override message handling functions

There are three message handling functions that we have to override. In each of these, if the CListVw class does not handle the message, we forward it to the CMyListCtrl class.
BOOL CListVw::PreTranslateMessage(MSG* pMsg)
{
    if( !CListView::PreTranslateMessage(pMsg) )
        return m_listctrl.PreTranslateMessage(pMsg);
    return FALSE;
}

LRESULT CListVw::WindowProc(UINT message, WPARAM wParam, LPARAM lParam)
{
    LRESULT lResult = 0;
    if (!OnWndMsg(message, wParam, lParam, &lResult))
        if (!m_listctrl.OnWndMsg(message, wParam, lParam, &lResult))
            lResult = DefWindowProc(message, wParam, lParam);
    return lResult;
}

BOOL CListVw::OnChildNotify(UINT message, WPARAM wParam, LPARAM lParam, LRESULT* pLResult)
{
    if( !CListView::OnChildNotify(message, wParam, lParam, pLResult) )
        return m_listctrl.OnChildNotify(message, wParam, lParam, pLResult);
    return FALSE;
}
Bug in OnChildNotify
Posted by Legacy on 09/04/2001 12:00am
Originally posted by: Mateo Anderson
I was working on my own control and I needed a way to turn this control into the view (I needed both the control and the view). Then I found this article and it helped me a lot. Well, it solved my problems.

However, when I used this approach within an MDI child window with a splitter and my own views on both sides of the splitter, the application seemed to work slowly (even though I have a 2-CPU Pentium III 800 MHz computer).
Maybe the problem is that I use my own control and Mr. Zafir was talking about CListView ???

I traced the code and I believe the problem is in:
BOOL CMyView::OnChildNotify(UINT message, WPARAM wParam, LPARAM lParam,
                            LRESULT* pLResult)
{
    if( !CView::OnChildNotify(message, wParam, lParam, pLResult) )
        return m_myCtrl.OnChildNotify(message, wParam, lParam, pLResult);
    return FALSE;
    // return TRUE; ???????
}
I believe that this function should return TRUE instead of FALSE.
Why?
The documentation for the function says:
"Return Value
Nonzero if this window is responsible for handling the message sent to its parent; otherwise 0."
Explanation:
Because I intended my classes to be as generic as possible, it is possible to use CMyView as a base class for other classes. Now imagine that in this new view class, the user wants to handle a notification from the control. He will add a handler:

ON_NOTIFY_REFLECT(MYCONTROL_NOTIFICATION, OnMyControl)

This means that the view handled the message and there is no need for further processing, so we should return TRUE (the if clause will be FALSE). If the view won't handle the message, the if clause will be TRUE and the message will be sent to the control.
In my example it seems that whenever control generates a
NOTIFICATION message, both view receives it, not just the
view that the control belongs to.
When I changed the return value to TRUE, only the view that
was responsible for the message received it.
I am not sure, if the return value for the PreTranslateMessage
should be changed from FALSE to TRUE too.
Any ideas?
Regards,Reply
Mateo
Pls help! Problems with OnChildNotifyPosted by Legacy on 08/29/2001 12:00am
Originally posted by: Valeri
Pls help! Problems with OnChildNotify
I tried this code but have assertion on OnChildNotify (unsigned in 78.....)
Any help would be greatly appreciated.
Best way to do it: derive a class from CView and insert a CListBox, here's how to do it...Posted by Legacy on 10/30/2000 12:00am
Originally posted by: Dennis Vriezekolk
I came here to find a way to create a CListBox as a CView, but found nothing usefull. When actually is was soo easy, here's how i did it...
Use the Classwizard to derive a class from CView. Then add a CListBox pointer variable to your class.
private:
CListBox * pListBox;
Overload the OnCreate(...) function and add this code:
//Create a Listbox, but set it's size to 0.
pListBox = new CListBox();
pListBox->Create(WS_CHILD|WS_VISIBLE|WS_VSCROLL|WS_HSCROLL|LBS_NOINTEGRALHEIGHT, CRect(0,0,0,0), this, 1);
Then overload the OnSize(UINT nType, int cx, int cy) function and add this code:
//Set the position of the listbox to 0,0 (relative to the window)
//Set the width and height of the listbox to the width and height of the window
pListBox->MoveWindow(0, 0, cx, cy);
And that's it!!!! Easy huh?
If you want to change the font of the ListBox, Add a CFont member variable to your class:
private:
CFont ListBoxFont;
Then add this code to the OnCreate(...) function:
Font.CreatePointFont(80,_T("MS Sans Serif"));
pListBox->SetFont(&Font);
BTW: It also works for every other control you'd like to use in a view.
Well that's it, I hope you find it as easy as i did, add comments for questions...Reply
It work on Win95 if you add this code!Posted by Legacy on 11/17/1999 12:00am
Originally posted by: Alessandro Arrabito
Using derived CListCtrl in doc-view archPosted by Legacy on 01/27/1999 12:00am
Originally posted by: Tom Phan
If you want to use a derived CListCtrl in a document-view architecture then instead of using the CListView, use a CView and create your own derived CListCtrl. This is what I usually do when I want to embed a control inside a window.Reply
Does not work under Win95Posted by Legacy on 01/21/1999 12:00am
Originally posted by: Ron Birk
Tried this under Win95 OSR1 with IE4.01sp1 on it. When exiting the application I get a GPF in USER.EXE. This same code works fine under NT4sp4.
RonReply | http://www.codeguru.com/cpp/controls/listview/introduction/article.php/c901/Using-derived-CListCtrl-in-CListView--Undocumented.htm | CC-MAIN-2014-35 | refinedweb | 1,290 | 63.29 |
Re: What is "this" ?
- From: "Wessel Troost" <nothing@xxxxxxxxxxxx>
- Date: Sun, 10 Apr 2005 12:31:28 +0200
Normally, you don't need "this", as C# prefixes "this." by default.
However, say you have a member variable and a local variable with the
same name:
class TheUseOfThis : IUntestedExample
{
private string a;
public void MyFunction()
{
string a;
this.a = "assigned to member var";
a = "assigned to local var";
}
}
As you can see you need this to assign to the member variable.
There is no that in C#, of course.
Greetings,
Wessel
-----Original Message-----
From: WJ [mailto:JohnWebbs@xxxxxxxxxxx]
Posted At: Sunday, April 10, 2005 3:50 AM
Posted To: microsoft.public.dotnet.framework.aspnet
Conversation: What is "this" ?
Subject: What is "this" ?
What is "this" that is being used by many asp and asp.net web
applications
and is there "that" object some where ? Googled what is that and was not
satisfied @ all !
I know for sure that in c#, I donot need "this" and the thing still
works
correctly as expected ! Is there a very good reason to use "this"
notation
on every line ?
John
.
- Follow-Ups:
- Re: What is "this" ?
- From: WJ
- References:
- What is "this" ?
- From: WJ
- Prev by Date: Re: my namespace wont reference!
- Next by Date: Re: Separate Webserver and SQL Server -- error when connecting asp.net app to a database
- Previous by thread: Re: What is "this" ?
- Next by thread: Re: What is "this" ?
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2005-04/msg02371.html | crawl-002 | refinedweb | 241 | 76.42 |
This is a Livecoding Recap – an almost-weekly post about interesting things discovered while livecoding. It’s shorter than 500 words, and there are pictures! You can follow my channel here. New content almost every Sunday at 2pm PDT. There’s live chat. Come hang out.
This Sunday was all about rendering React components with canvas and smoothly animating 10k+ circles.
We did it! Well, the canvas part. The smooth animation part… not so much. Turns out that part’s hard.
It all started with some tedious coding to update the React particles experiment to D3 v4. Some idiot (me) had had the bright idea of changing that and not finishing the transition.
With the release candidate version of D3v4, importing the entire library no longer works. From now on, you have to do something like
import { randomNormal } from 'd3' to get specific bits and pieces. This is tedious, but it produces smaller bundles in the end. All in all, it’s better this way.
Our slow implementation was back. \o/
View post on imgur.com
Then we turned to react-konva, “a JavaScript library for drawing complex canvas graphics using React.” In theory, we should be able to render our particles using HTML5 canvas without changing our code too much.
It’s based on the Konva library, which looks like a sort of D3 for canvas. It gives you a bunch of useful abstractions to make 2d graphicsing easier.
To my surprise, the conversion was simple.
We had to change our main render method to use a Konva
Stage instead of an
<svg> node:
<Stage width={this.props.svgWidth} height={this.props.svgHeight}> <Layer> <Particles particles={this.props.particles} /> </Layer> </Stage>
We also wrapped it in a big
<div> to help D3 detect the mouse events we need for particle generation. Yes, we could have moved away from D3 for those, but it was already coded up, so why change?
We had to change the
Particles render method to use Konva’s
Circle component.
<Group> {particles.map(particle => <Circle radius="1.8" x={particle.x} y={particle.y} key={particle.id} )} </Group>
Things Just Worked™. Kind of. Our animation looked less than smooth, even with just 200 particles. With a few thousand, it was comically bad.
View post on imgur.com
Not cool, React. Not cool. Canvas is supposed to be super fast! Maybe this is a bit faster than the SVG approach? It’s hard to tell.
We did some profiling and discovered that calculating a new frame takes only 7 milliseconds. Flushing those changes to React components … heh … that took anywhere from 200ms to 980ms.
Yikes.
The culprit seems to be a function called
updateChildren deep in the bowels of React.
We’ll find a workaround on Sunday the 10th of July. There are several promising venues to explore, anything from using better Konva components (FastLayer is a thing) to avoiding prop updates as the driver of our animation. Somehow. We’ll figure it out.
See you next time.
PS: the edited and improved versions of these videos are becoming a video course. Readers of the engineer package of React+d3js ES6 get the video course for free when it’s ready.
Related
You should follow me on twitter, here.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/livecoding-13-rendering-react-components-with-canv | CC-MAIN-2016-44 | refinedweb | 555 | 69.07 |
Last week, I spent some time solving an odd bug with the WriteFreely client's iOS app.
When you're looking at the list of posts, you can tap the gear button to get to the settings screen, which presents you with a form for logging into your WriteFreely instance:
As you might expect, you can tap the close button in the upper-right (ⓧ) to dismiss the sheet.
Except… well, if you tapped into one of the login form's fields, you end up in a state where tapping on the close button didn't seem to have any effect.
Okay, so that's not entirely true — if you added a print statement to the button's action, you'd find that the first tap does register, toggling presenting view's the
isPresentingSettingsView flag correctly; it just doesn't have any effect.
The workaround, while I'd been testing the app, was to dismiss the sheet is by swiping down on it — a standard (if somewhat undiscoverable) system gesture.
Interestingly, when you'd tap in any form field, you'd also receive the following warning in Xcode's console:
2020-09-11 09:56:01.927435-0400 WriteFreely-MultiPlatform[37593:6860302] [Presentation] Attempt to present <_TtGC7SwiftUI22SheetHostingControllerVS_7AnyView_: 0x7fb24a7297f0> on <_TtGC7SwiftUI19UIHostingControllerGVS_15ModifiedContentVS_7AnyViewVS_12RootModifier__: 0x7fb24c905ac0> (from <_TtGC7SwiftUI19UIHostingControllerGVS_15ModifiedContentVVS_22_VariadicView_Children7ElementGVS_18StyleContextWriterVS_23ContentListStyleContext___: 0x7fb24a711ec0>) which is already presenting <_TtGC7SwiftUI22SheetHostingControllerVS_7AnyView_: 0x7fb24c80eaf0>.
There's a lot of cruft there, but it hints that SwiftUI is trying to present a view that's already being presented. This suggested to me that the hosting view is getting re-rendered when a login form field becomes the first responder, finds that the
isPresentingSettingsView flag is set, and tries to present the sheet again.
Okay! This is something we can test! Here's what the settings view looked like:
import SwiftUI struct SettingsView: View { @EnvironmentObject var model: WriteFreelyModel @Binding var isPresented: Bool var body: some View { VStack { HStack { Text("Settings") .font(.largeTitle) .fontWeight(.bold) Spacer() Button(action: { self.isPresented = false }, label: { Image(systemName: "xmark.circle") }) } .padding() Form { Section(header: Text("Login Details")) { AccountView() } Section(header: Text("Appearance")) { PreferencesView(preferences: model.preferences) } } } } }
(For debugging purposes, I've simplified this a tiny bit: originally that
HStack was in a separate
SettingsHeaderView struct.)
To test the hypothesis, I started by commenting out the entire
Form. Everything then worked fine in presenting and dismissing the sheet, but of course, it's not a very useful sheet without that form. 😅
If I just included the appearance form, that works fine too. That narrows things down here — or so I thought.
There are two ways to dismiss a sheet. The first is to pass the hosting view's presentation state as a binding to the presented sheet, which is what you see in the above listing. Simplified, the
SettingsView is presented from the
PostListView like this:
Button(action: { self.isPresentingSettingsView = true }, label: { Image(systemName: "gear") }) .sheet( isPresented: $isPresentingSettingsView, content: { SettingsView(isPresented: self.$isPresentingSettingsView) } )
You can also use @Environment(.presentationMode) in the
SettingsView to dismiss itself. You declare the property wrapper at the top of the struct like so:
@Environment(\.presentationMode) var presentationMode
…and call its
dismiss() method in a button action, like so:
Button(action: { presentationMode.wrappedValue.dismiss() }, label: { Image(systemName: "xmark.circle") })
Interestingly enough, using this method to dismiss the sheet no longer triggered the console warning when I tapped into any login form field. Could it be? Was the problem solved? 😃
Nope. 😬
If you filled out the form and logged in, then that same warning was logged three times in the console. If you logged out, the warning was logged again. But this looked like progress! It seemed likely that something in the account views was triggering this, so I explored that a little deeper.
The
AccountView swaps between an
AccountLoginView and an
AccountLogoutView based on the state of an
isLoggedIn flag in the
AccountModel. My prime suspect was the
AccountLoginView, which has an
.alert(isPresented:) modifier attached to it. If there's an error logging in, this is triggered and an alert is presented depending on which of the three
AccountError cases are present. Because the
.alert(isPresented:) and
.sheet(isPresented:) modifiers work similarly, maybe some wires were getting crossed there? This is, of course, a beta framework running on a beta operating system in a beta IDE!
So, I started with an easy test: commenting out the
.alert(isPresented:) modifier, and see what happens on login.
You guessed it: this doesn't change the behaviour — the warnings are still logged, and the sheet can't be dismissed.
Digging further and further, setting breakpoints and stepping through code, commenting out blocks to see if they were the culprit, got me nowhere. I finally started searching DuckDuckGo for
SwiftUI "Attempt to present" "which is already presenting" and eventually found this year-old forum comment on Swift.org:
Is it the current recommendation, to put modal views & the triggers outside
NavigationView, or is it only to circumvent an existing bug?
🤦
Yep. Taking the
.sheet(isPresented:) modifier out of the PostListView and attaching it to an EmptyView outside of the NavigationView solved the issue. Nothing in the docs on NavigationView, View Modifiers, or sheet suggests this could be a thing.
So, yeah, the title of this post is a bit misleading — it turns out that I spent a couple of hours trying to figure out what was happening, when an undocumented bug in the framework was the cause.
Again: this is a beta framework, on a beta operating system, and frankly the amount of SwiftUI documentation that's already out there is surprisingly good. But it's a little frustrating to have spent a couple of hours debugging a warning that could have been avoided with a one-line disclaimer in the documentation. Hopefully, this will be helpful to anyone that searches for a similar issue!
For those of you that want to see the code, here's the fix in the app.
Discussion | https://dev.to/writeas/stupid-swiftui-tricks-debugging-sheet-dismissal-1p60 | CC-MAIN-2020-50 | refinedweb | 981 | 54.63 |
To read a text file in C#, you will use a StreamReader object. In this tutorial, you will learn how to use StreamReader to read the contents of an existing file.
For this tutorial, you will start by creating a blank file on your Desktop called
ReadFile.txt. Next, copy and paste the following five lines to the file and save it.
1. Learn C# online at wellsb.com 2. Free C# tutorials 3. C# tutorials for beginners 4. Learn to code in C# 5. Become an expert in C# programming
Using StreamReader to Read a File
Create a new C# project and include the System.IO directive. The System.IO directive allows you to reference such objects as StreamReader and StreamWriter.
using System; using System.IO;
The file that we want to read is located on the Desktop. We can define the path to this file, by providing the directory and filename to the
Path.Combine() method. We will save the file path as a string variable and pass it to the StreamReader object.
string filePath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "ReadFile.txt"); using (StreamReader inputFile = new StreamReader(filePath)) { Console.WriteLine(inputFile.ReadToEnd()); }
As you can see, the code for reading from a text file is similar to the process we used to write to a text file using StreamWriter. We create a StreamReader object called
inputFile on Line 12, and we declare a new instance of that object by passing it the path of the text file we want to read.
Recall that the
using statement on Line 12 is to ensure that the StreamReader object is correctly disposed of, even if an error occurs. We will learn more about error handling later in this tutorial.
On Line 14, we use the
ReadToEnd() method of the StreamReader object to read all the contents of the text file. This method returns a string value. Since we just opened the document, we can think of the cursor or pointer being at the the beginning of the document. The
ReadToEnd() method reads from the current position in the document to the end of the document.
In this case, we can expect to see all five lines of our text document printed to the console. The output of our program should look like the following:
It is worth noting that StreamReader also includes methods for reading individual characters
Read() and for reading line-by-line
ReadLine(). In the next tutorial, you will learn how to use StreamReader to read a specific line in a text file.
Try/Catch Error Handling
Suppose the file that we want to read does not exist? As the program is currently written, if the file does not exist, our application will throw an exception and crash. This is not a pleasant end-user experience. When designing a program, it is important to ensure that these types of potential errors are appropriately handled.
For this exercise, we will learn about
try/
catch error handling. Specifically, we want our application to try to open the file for reading. If the file does not exist, we should present a friendly error message to the user.
To use
try/
catch blocks, consider the following template.
try { //Code goes here } catch (IOException) { //Error handling goes here }
In our example, we will place our file I/O into a
try block, and return any error messages in
catch blocks. Our program might look like the following:
try { string filePath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "ReadFile.txt"); using (StreamReader inputFile = new StreamReader(filePath)) { Console.WriteLine(inputFile.ReadToEnd()); } } catch(Exception e) { Console.WriteLine("There was an error reading the file: "); Console.WriteLine(e.Message); } Console.ReadLine();
On Line 10 of this example, we have used a general
catch block that handles all exceptions, but it is good practice to handle specific errors. For example, the StreamReader file I/O operation could throw a
FileNotFoundException when the file is not found. It could throw a
DirectoryNotFoundException when the directory is not found. Or, it could throw an
IOException if the file cannot be opened for other reasons. Our program could provide custom error messages for each of these three exceptions by using multiple
catch statements as part of a try-catch-catch-catch block.
The Bottom Line
In this tutorial, you learned how to use the
StreamReader object to read a text file in your C# applications. This could be useful for reading a settings file or for restoring default settings from a file. You also learned how to do basic error handling. Specifically, you learned how to catch exceptions using a
try/
catch block. In the next tutorial, you will learn how to use StreamReader to read a specific line from a text file. | https://wellsb.com/csharp/beginners/csharp-read-text-file/ | CC-MAIN-2020-16 | refinedweb | 793 | 64.71 |
Welcome to the Core Java Technologies Tech Tips for April 19, 2005. Here you'll get tips on using core Java technologies and APIs, such as those in Java 2 Platform, Standard Edition (J2SE).
This issue covers:
Thread Handling in Swing
Atomic Variables
These tips were developed using the Java 2 Platform Standard Edition Development Kit 5.0 (JDK 5.0). You can download JDK 5.0 at. where enthusiasts of Java technology can collaborate and build solutions together.
java.com - Hot games, cool apps -- Experience the power of Java technology.
To increase efficiency and decrease complexity, all Swing components are designed not to be thread-safe. This simply means that all access to Swing components needs to be done from a single thread. That thread is called the event-dispatch thread, and it isn't one you create yourself. If you are unsure that your executing code is in the event-dispatch thread, you can query the EventQueue class through its static isDispatchThread() method. Alternatively, you can query the SwingUtilities class through its static isEventDispatchThread() method. The isEventDispatchThread() method acts as a proxy to the isDispatchThread() method.
EventQueue
isDispatchThread()
SwingUtilities
isEventDispatchThread()
To properly execute tasks on the event-dispatch thread, implement the Runnable interface and pass the tasks to the EventQueue class. Use the public static void invokeLater(Runnable runnable) method of EventQueue if you need to execute a task on the event-dispatch thread, but you don't need any results and you don't care when the task finishes. However, if you can't continue what you're doing until the task completes and returns a value, use the public static void invokeAndWait(Runnable runnable) method of EventQueue. With invokeAndWait(Runnable runnable), you need to provide the code to get the return value -- it is not returned by the invokeAndWait() method.
Runnable
public static void invokeLater(Runnable runnable)
public static void invokeAndWait(Runnable runnable)
invokeAndWait(Runnable runnable)
invokeAndWait()
If you're familiar with the SwingUtilities class, you know that it too has invokeLater() and invokeAndWait() methods. However, those two methods simply wrap the call to the EventQueue versions. So, it's better to directly call the EventQueue versions.
invokeLater()
You need to access Swing components from the event-dispatch thread for both realized (visible) and unrealized (invisible) components. It might seem reasonable to access unrealized components from a thread other than the event-dispatch thread. However, because building a Swing GUI can trigger notification of listeners (such as for a property change event or when adding an ancestor component), and that notification is on the event-dispatch thread, it is always best to access Swing components from the event-dispatch thread.
This requirement of all access on the event-dispatch thread makes it interesting to create Swing programs. That's because the first things the main() method of a program does is create a Runnable object, create a JFrame, and put of all the components into that frame:
main()
JFrame
Runnable runnable = new Runnable() {
public void run() {
// build screen
}
}
EventQueue.invokeLater(runnable);
Here's what one such program looks like. It creates a frame with a button. When the button is selected, it prints the message "I was selected."
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
public class ButtonSample {
public static void main(final String args[]) {
Runnable runner = new Runnable() {
public void run() {
String title = args.length == 0 ?
"Hello, World" : args[0];
JFrame frame = new JFrame(title););
}
}
Notice that the title for the frame is supplied in the command line. Because the Runnable object is creating another class, you have to declare the parameter to the main method as final to access the command-line arguments:
public static void main(final String args[]) {
If you forget this, and access args from the inner class, you get a compile time error message:
args
ButtonSample.java:9: local variable args is accessed from
within inner class; needs to be declared final
String title = args.length == 0 ? "Hello, World" : args[0];
^
ButtonSample.java:9: local variable args is accessed from
within inner class; needs to be declared final
"Hello, World" : args[0];
^
2 errors
To avoid the possibility of threading problems when working with Swing interfaces, be sure all access is passed through the event-dispatch thread. For long running tasks, you can fork a new thread. If you choose to use invokeAndWait() instead of invokeLater(), the calling method blocks until the executing thread finishes, and returns control to the caller. In other words, you should only use invokeAndWait() from a thread other than the event dispatch thread. Also if you use invokeAndWait(), when execution returns to the calling thread, you can get a "return value" from a place that both threads know about. Because the calling thread is blocked, no synchronization is necessary.
For more information about using threads with Swing, see the How to Use Threads trail in the Java Tutorial. The trail includes a description of a non-standard helper class called SwingWorker. If you use the worker class, make sure you use version 3 (also known as, SwingWorker 3), from February 2000. Earlier versions were buggy.
SwingWorker
Three previous Tech Tips covered aspects of the new concurrency utility support in JDK 5.0. The February 16, 2005 tip, Getting to Know Synchronizers, discussed synchronizers. The November 16, 2004 Tech Tip, Pooling Threads to Execute Short Tasks, investigated thread pools. And the October 19, 2004 Tech Tip, Queues and Delayed Processing, explored the concurrent collections, including the blocking queue. The following tip examines another facet of the new concurrency support in JDK 5.0: atomic variables.
You should be aware of the risks of sharing variables across different threads. It's important that you restrict access to shared variables by ensuring that only one thread changes a shared variable at a time. This is typically done by wrapping critical code sections with a synchronized block, such that only one thread is in a protected section at time.
This technique works well, however synchronizing code adds to the runtime costs of your program. It takes time to get the synchronized lock, modify the variable, and then release the lock. It is quicker if you can skip the use of locks for simple variable updates, or simply have lock-free algorithms to begin with. But, you can't just remove the synchronized block without replacing it with something else.
JDK 5.0 offers a way to meet these needs through the new java.util.concurrent.atomic package. The classes in the package allow you to atomically access variables of the designated type. They also offer methods for atomic get-and-set type operations.
java.util.concurrent.atomic
The package includes an AtomicInteger class for atomically updating integer values, AtomicLong for atomically updating long values, AtomicBoolean for basic boolean operations, and AtomicReference for atomic object comparisons and settings. There are also classes for special handling of arrays: AtomicIntegerArray, AtomicLongArray, and AtomicReferenceArray.
AtomicInteger
AtomicBoolean
AtomicReference
AtomicIntegerArray
AtomicLongArray
AtomicReferenceArray
To get started with atomic variables, let take a look at the AtomicInteger class. Essentially, this class works like a wrapped integer value. You get the value with the class's get() method, and set it with the set() method. You can also get and set the value in one step with the getAndSet() method -- this eliminates any risk of another thread changing the value between your call to get and your call to set.
get()
set()
getAndSet()
The basic get and set operations on an integer work as follows, where myVariable is the variable to manipulate:
myVariable
// Save off old value
int oldValue = myVariable;
// Change to new value
myVariable = oldValue + 1;
If you don't put these lines of code into a synchronized block, it is possible for the thread scheduler to interrupt in the middle of the two statements. If so, the change to myVariable happens on the original value of myVariable, not the updated version. There is a similar problem in using the ++ auto-increment operator. Short of putting the ++ usage in a synchronized block, there's no way to ensure that the auto-increment operation is atomic.
To prevent these problems, use one of the new atomic methods in AtomicInteger. The methods let you combine set or get operations with one of several different methods:
addAndGet()
getAndAdd()
decrementAndGet()
getAndDecrement()
incrementAndGet()
getAndIncrement()
Why two versions for most of these methods? When "get" is first, the value returned is the original value. When "get" is second, the value returned is the new, adjusted value. So, for an AtomicInteger with a value of 10, getAndIncrement() returns 10, and incrementAndGet() returns 11. In both cases, the value of the AtomicInteger is 11 after the call.
get
Another method in AtomicInteger worth mentioning is compareAndSet(int expect, int update). This method allows you to check if the value of the AtomicInteger is the expected value, and if it is, change the AtomicInteger to the new updated value. In fact, nearly all the methods previously mentioned are internally implemented with compareAndSet().
compareAndSet
int
compareAndSet()
To demonstrate the value of the classes in the atomic package, consider the following. Say you had a property that was protected by a synchronized setter/getter pair of methods:
public class MyLong1 {
private long seed;
public synchronized void setSeed(long seed) {
this.seed = seed;
}
public synchronized long getSeed() {
return seed;
}
}
With the use of AtomicLong and its related classes, you can change this to use an unsynchronized version:
AtomicLong
public class MyLong2 {
private AtomicLong seed;
public void setSeed(long seed) {
this.seed.set(seed);
}
public long getSeed() {
return seed.get();
}
}
Notice that the assignment statement in the setter method changed to a call to the set() method of AtomicLong, and the getter method calls the get() method of AtomicLong. The use of AtomicLong here removes the need to synchronize the methods. In the specific case of the java.util.Random class, the setSeed() method is still synchronized due to other aspects of the method, not for the benefit of the seed property.
java.util.Random
setSeed()
As a simple rule, if you create synchronized blocks for accessing variables of type int, long, or boolean, consider swapping the synchronized block for an atomic variable. For more complex types, create your own synchronized type with the help of the AtomicReference class.
long
boolean
For more information about the atomic package, see the javadoc
for java.util.concurrent.atomic.
atomic | http://java.sun.com/developer/JDCTechTips/2005/tt0419.html | crawl-002 | refinedweb | 1,730 | 52.7 |
/* * : @(#)sprite.h 8.1 (Berkeley) 6/6/93 * $FreeBSD: src/usr.bin/make/sprite.h,v 1.9 1999/08/28 01:03:36 peter */ /* * Functions that must return a status can return a ReturnStatus to * indicate success or type of failure. */ typedef int ReturnStatus; /* * The following statuses overlap with the first 2 generic statuses * defined in status.h: * * SUCCESS There was no error. * FAILURE There was a general error. */ #define SUCCESS 0x00000000 #define FAILURE 0x00000001 /* * A nil pointer must be something that will cause an exception if * referenced. There are two nils: the kernels nil and the nil used * by user processes. */ #define NIL ~0 #define USER_NIL 0 #ifndef NULL #define NULL 0 #endif /* NULL */ /* * An address is just a pointer in C. It is defined as a character pointer * so that address arithmetic will work properly, a byte at a time. */ typedef char *Address; /* * ClientData is an uninterpreted word. It is defined as an int so that * kdbx will not interpret client data as a string. Unlike an "Address", * client data will generally not be used in arithmetic. * But we don't have kdbx anymore so we define it as void (christos) */ typedef void *ClientData; #endif /* _SPRITE */ | http://opensource.apple.com/source/bsdmake/bsdmake-8/sprite.h | CC-MAIN-2016-30 | refinedweb | 199 | 65.73 |
Visual Studio 15.7 Preview 3 has shipped initial support for some C# 7.3 features. Let's see what they are!
System.Enum, System.Delegate and unmanaged constraints
Now with generic functions you can add more control over the types you pass in. More specifically, you can specify that they must be enum types, delegate types, or "blittable" types. The last one is a bit involved, but it means a type that consists only of certain predefined primitive types (such as int or UIntPtr), or arrays of those types. "Blittable" means it has the ability to be sent as-is over the managed-unmanaged boundary to native code because it has no references to the managed heap. This means you have the ability to do something like this:
void Hash<T>(T value) where T : unmanaged
{
    fixed (T* p = &value)
    {
        // Do stuff...
    }
}
I'm particularly excited about this one because I've had to use a lot of workarounds to be able to make helper methods that work with "pointer types."
Ref local re-assignment
This is just a small enhancement to allow you to assign ref type variables / parameters to other variables the way you do normal ones. I think the following code is an example (off the top of my head):
void DoStuff(ref int parameter)
{
    // Now otherRef is also a reference, modifications will
    // propagate back
    ref var otherRef = ref parameter;

    // This is just its value, modifying it has no effect on
    // the original
    var otherVal = parameter;
}
Stackalloc initializers
This adds the ability to initialize a stack allocated array (did you even know this was a thing in C#? I did :D) as you would a heap allocated one:

Span<int> x = stackalloc[] { 1, 2, 3 };
Indexing movable fixed buffers
I can't really wrap my head around this one so see if you can understand it
Custom fixed statement
This is the first I've seen this one, and it is exciting for me! Basically, if you implement an implicit interface (one method), you can use your own types in a fixed statement for passing through P/Invoke. I'm not sure what the exact method is (DangerousGetPinnableReference() or GetPinnableReference()) since the proposal and the release notes disagree, but if this method returns a suitable type then you can eliminate some boilerplate.
Improved overload candidates
There are some new method resolution rules that optimize the way a call is resolved to the correct method. See the proposal for the full list of changes.
Expression Variables in Initializers
The summary here is "Expression variables like out var and pattern variables are allowed in field initializers, constructor initializers, and LINQ queries." but I am not sure what that allows us to do...
Tuple comparison
Tuples can be compared with == and != now!
Attributes on backing fields
Have you ever wanted to put an attribute (e.g. NonSerialized) on the backing field of a property, and then realized that you then had to create a manual property and backing field just to do so?
[Serializable]
public class Foo
{
    [NonSerialized]
    private string MySecret_backingField;

    public string MySecret
    {
        get { return MySecret_backingField; }
        set { MySecret_backingField = value; }
    }
}
Not anymore!
[Serializable]
public class Foo
{
    [field: NonSerialized]
    public string MySecret { get; set; }
}
Discussion (6)
Expression Variables in Initializers
I think it will let us do something like this:
Oh, that could be!
Thanks Jim for the article.
C# 7.3 update gave me an impression that it's for providing optimizations.
About half of C# 7.1 and 7.2 was also optimization. I think they want to focus on making the language less verbose and able to do more in one line!
I am most excited about the Tuple Equality checks :thumbsup:
Saves many keystrokes
Good summary.
Ref local re-assignment should be updated:
var otherRef = ref parameter; => ref var otherRef = ref parameter; | https://practicaldev-herokuapp-com.global.ssl.fastly.net/borrrden/whats-new-in-c-73-26fk | CC-MAIN-2021-25 | refinedweb | 633 | 61.16 |
Problem Statement
Problem “Count pairs from two linked lists whose sum is equal to a given value” state that you are given two linked lists and an integer value sum. The problem statement asked to find out how many total pair has a sum equal to the given value.
Example
LinkedList1 = 11 -> 6 -> 1 -> 8
LinkedList2 = 5 -> 3 -> 1 -> 9
Sum = 9
2
Explanation: There are two pairs, i.e., (6, 3) and (8, 1), that sum to the given value 9.
Algorithm to count pairs from linked lists whose sum is equal to a given value
1. Push all the integer values into the two different linked lists.
2. Declare a set.
3. Set the count to 0.
4. Iterate over the first linked list and put all of its values into the set.
5. Iterate over the second linked list:
   1. Check if the set contains the difference between the sum and the current element of linked list 2.
      1. If true, then increase the value of count by 1.
6. Return count.
Explanation
We are given integer values as input, so we push them all into two linked lists. In C++, we implement the linked list with a small helper function; in Java, the built-in LinkedList class lets us easily push all the integer values into the list. We are then asked to find the pairs across both linked lists whose values sum to the given value.

We use a Set data structure: we traverse the first linked list and store all of its values in the set. A set automatically discards duplicate elements, so repeated values in the first list cause no problems later on.

With all the values of linked list 1 in the set, we traverse the second linked list and check whether the difference between the sum and each value of the second list is present in the set. If it is, we have found a pair and increase the value of count by 1. At the end of the traversal, count holds the number of pairs whose sum equals the given value.
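As a rough sketch of this set-based approach (Python used here just for brevity; the full C++ and Java implementations follow):

```python
def count_pairs(list1, list2, target):
    # Store every value of the first list in a set; duplicates collapse.
    seen = set(list1)
    # For each value in the second list, check whether its complement
    # (target - value) was seen in the first list.
    return sum(1 for value in list2 if target - value in seen)

print(count_pairs([11, 6, 1, 8], [5, 3, 1, 9], 9))  # -> 2
```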
Code
C++ to count pairs from two linked lists whose sum is equal to a given value
#include <iostream>
#include <unordered_set>
#include <cstdlib>
using namespace std;

struct Node {
    int data;
    struct Node* next;
};

void implementLinkedList(struct Node** headReference, int newItem)
{
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = newItem;
    newNode->next = (*headReference);
    (*headReference) = newNode;
}

int getPairOfsum(struct Node* head1, struct Node* head2, int sum)
{
    int count = 0;
    unordered_set<int> SET;

    while (head1 != NULL) {
        SET.insert(head1->data);
        head1 = head1->next;
    }

    while (head2 != NULL) {
        if (SET.find(sum - head2->data) != SET.end())
            count++;
        head2 = head2->next;
    }
    return count;
}

int main()
{
    struct Node* head1 = NULL;
    struct Node* head2 = NULL;

    implementLinkedList(&head1, 11);
    implementLinkedList(&head1, 6);
    implementLinkedList(&head1, 1);
    implementLinkedList(&head1, 8);

    implementLinkedList(&head2, 5);
    implementLinkedList(&head2, 3);
    implementLinkedList(&head2, 1);
    implementLinkedList(&head2, 9);

    int sum = 9;
    cout << "Count = " << getPairOfsum(head1, head2, sum);
    return 0;
}
Count = 2
Java code to count pairs from two linked lists whose sum is equal to a given value
import java.util.Arrays;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;

class PairOfSumInLinkedList1 {
    public static int getPairOfsum(LinkedList<Integer> head1,
                                   LinkedList<Integer> head2, int sum) {
        int count = 0;
        HashSet<Integer> SET = new HashSet<Integer>();

        Iterator<Integer> itr1 = head1.iterator();
        while (itr1.hasNext()) {
            SET.add(itr1.next());
        }

        Iterator<Integer> itr2 = head2.iterator();
        while (itr2.hasNext()) {
            if (!(SET.add(sum - itr2.next())))
                count++;
        }
        return count;
    }

    public static void main(String[] args) {
        Integer arr1[] = {11, 6, 1, 8};
        Integer arr2[] = {5, 3, 1, 9};
        LinkedList<Integer> head1 = new LinkedList<>(Arrays.asList(arr1));
        LinkedList<Integer> head2 = new LinkedList<>(Arrays.asList(arr2));
        int x = 9;
        System.out.println("Count = " + getPairOfsum(head1, head2, x));
    }
}
Count = 2
Complexity Analysis
Time Complexity
O(n1 + n2), where "n1" and "n2" are the numbers of elements in the two linked lists. We achieve linear time complexity because we traverse both linked lists once and use a HashSet for constant-time lookups.
Space Complexity
Since we store the input in two linked lists and use a HashSet, the solution has linear space complexity.
Rake: Deleting or overwriting a task?
Discussion in 'Ruby' started by Michael Schuerig, Sep 5, 2005.
After 2 days of debug, I nailed down my time-hog: the Python garbage collector.
My application holds a lot of objects in memory. And it works well.
The GC does the usual rounds (I have not played with the default thresholds of (700, 10, 10)).
Once in a while, in the middle of an important transaction, the 2nd generation sweep kicks in and reviews my ~1.5M generation 2 objects.
This takes 2 seconds!
The nominal transaction takes less than 0.1 seconds.
My question is what should I do?
I can turn off generation 2 sweeps (by setting a very high threshold - is this the right way?) and the GC is obedient.
When should I turn them on?
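One way to sketch this with the standard gc module (the threshold value here is illustrative, not a recommendation):

```python
import gc

# Defaults are typically (700, 10, 10).
t0, t1, t2 = gc.get_threshold()

# Make automatic generation-2 sweeps effectively never happen,
# while gen-0 and gen-1 collections keep running as usual.
gc.set_threshold(t0, t1, 10_000_000)

# ... handle the transaction / request ...

# At a quiet moment, trigger the expensive full sweep manually.
unreachable = gc.collect(2)   # collect generation 2 (a full collection)
print(gc.get_threshold(), unreachable)
```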
We implemented a web service using Django, and each user request takes about 0.1 seconds.
Optimally, I will run these GC gen 2 cycles between user API requests. But how do I do that?
My view ends with
return HttpResponse()
I believe one option would be to completely disable garbage collection and then manually collect at the end of a request as suggested here: Garbage Collection
I imagine that you could disable the GC in your
settings.py file.
If you want to run GarbageCollection on every request I would suggest developing some Middleware that does it in the process response method:
import gc

class GCMiddleware(object):
    def process_response(self, request, response):
        gc.collect()
        return response
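A slightly fuller sketch (illustrative, not production code) that combines both ideas — turn automatic collection off entirely and sweep once per response:

```python
import gc

class ManualGCMiddleware(object):
    """Disable automatic GC and run one collection per response."""

    def __init__(self):
        gc.disable()          # no automatic collections at all

    def process_response(self, request, response):
        gc.collect()          # pay the GC cost between requests instead
        return response
```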
Laravel Livewire is a great tool to achieve dynamic behavior on the page, without directly writing JavaScript code. And, like any tool, it has a lot of "hidden gems", both in its official docs, and practical extra tips provided by developers. I decided to compile some of them in this article. Let's get into it!
1. No render() needed
A typical
render() method looks something like this:
// app/Http/Livewire/PostsShow.php
class PostsShow extends Component
{
    public function render()
    {
        return view('livewire.posts-show');
    }
}
But if your
render() method is just a one-line to render the default view, you may delete that
render() method from the component and it will all still work, loading the default
render() from the vendor's method.
class PostsShow extends Component
{
    // This empty component will still work and load the Blade file
}
2. Components in Subfolders
If you want to generate a component in a subfolder, like
app/Http/Livewire/Folder/Component.php, you have two ways how to do it:
php artisan make:livewire Folder/Component
or
php artisan make:livewire folder.component
Notice that the first way is with the first letter uppercase, and the second way is lowercase. In both cases, there will be two files generated:
- app/Http/Livewire/Folder/Component.php
- resources/views/livewire/folder/component.blade.php
The subfolders will be created automatically if they don't exist.
3. Components in non-default Folder
If you use some external package with Livewire components, you may have your Livewire component in a different folder than the default
app/Http/Livewire. Then, you may need to bind its name to the actual location.
Typically, it's done in
app/Providers/AppServiceProvider.php (or in any service provider) method
boot():
class AppServiceProvider extends ServiceProvider
{
    public function boot()
    {
        Livewire::component('shopping-cart', \Modules\Shop\Http\Livewire\Cart::class);
    }
}
4. Easily Rename or Move Components
If you made a typo while generating the component with
make:livewire, don't worry. You don't need to rename two files manually, there's a command for that.
For example, if you wrote
php artisan make:livewire Prduct, but instead you want "Product", and also decided to put it into a subfolder, you can follow up with this command:
php artisan livewire:move Prduct Products/Show
The result will be this:
COMPONENT MOVED
CLASS: app/Http/Livewire/Prduct.php
    => app/Http/Livewire/Products/Show.php
VIEW: resources/views/livewire/prduct.blade.php
    => resources/views/livewire/products/show.blade.php
5. Change Default Component Templates
Livewire components are generated using the default templates, so-called "stubs". They are hidden away in the "vendor" folder of the Livewire package, but you can publish them and edit them according to your needs.
Run this command:
php artisan livewire:stubs
You will find a new folder
/stubs with a few files.
Example of a
stubs/livewire.stub:
<?php

namespace [namespace];

use Livewire\Component;

class [class] extends Component
{
    public function render()
    {
        return view('[view]');
    }
}
For example, if you want to generate the components without the
render() method, just remove it from the stub file, and then each time you run
php artisan make:livewire Component, it will take the template from your updated public stub.
6. Don't Create a Method Just to Set Value
If you have a click event that would set some value of some property, you may do something like this:
<button wire:click="showText">Show</button>
And then
class Show extends Component
{
    public $showText = false;

    public function showText()
    {
        $this->showText = true;
    }
}
But actually, you can assign a new value to the Livewire property directly from your Blade file, without having a separate method in the Livewire component.
Here's the code:
<button wire:click="$set('showText', true)">Show</button>
So, you call the
$set and provide two parameters: your property name and the new value.
7. Step Even Further: Set True/False Value Easily
Following up on the last tip, if your property is a boolean variable with true/false values, and you want to have a show/hide button, you can do something like this:
<button wire:click="$toggle('showText')">Show/Hide</button>
Notice: I would personally avoid using Livewire for such simple toggle effects because it adds the additional request to the server.
Instead, it's better to use JavaScript for this, like Alpine.js:
<div x-data="{ open: false }">
    <button @click="open = !open">Expand</button>
    <span x-show="open">Content...</span>
</div>
8. Three Ways to Minimize Server Requests
One of the main criticism of Livewire is the fact that it does too many requests to the server. If you have
wire:model on the input field, each keystroke would potentially call the server to re-render the component. It's very convenient if you have some real-time effects, like "live search". But generally, server requests may be quite expensive, in terms of performance.
However, it's very easy to customize this behavior of
wire:model.
wire:model.debounce: by default, Livewire waits for 150ms after the keystroke on the input, before performing the request to the server. But you can override it:
<input type="text" wire:model.debounce.500ms="search">
wire:model.lazy: by default, Livewire is listening for all events on the input, and then performs the server requests. By providing a
lazydirective, you tell Livewire to listen only to the
changeevent. It means that the user may keep typing and changing the value, and the server request will be fired only when the user clicks away from that field.
wire:model.defer: this will not fire the server requests on the change of the input. It will save the new value internally and will pass it to the next network request, which may come from other input fields or other button clicks.
9. Customize Validation Attributes
Livewire validation works very similarly to the Laravel validation engine, but with a few differences. In Laravel, if you want to customize the names of the attributes, you may define the
attributes() method in a Form Request class.
In Livewire, the approach is different. In the component, you need to define a property called
$validationAttributes and assign the array of key-value there:
class ContactForm extends Component
{
    protected $validationAttributes = [
        'email' => 'email address',
    ];

    // ...
This is useful for common error messages, like "Field [XYZ] is required". By default, that XYZ is replaced with the field name, which may be not a human-friendly word, so it's worth replacing it for the error messages with something clearer.
10. Loading Indicators
Something that is described in the official documentation but quite rarely used, from what I've seen. If some action takes a while on the screen, it's worth showing some loading indicator, like a spinning gif, or just a text of "Loading data..."
In Livewire, it's very easy not only to implement but also to customize.
The most simple example of processing data: when the server request is made, it will show "Processing Payment..." text until the server request is finished and back with the result.
<div>
    <button wire:click="checkout">Checkout</button>

    <div wire:loading>
        Processing Payment...
    </div>
</div>
In practice, I like to show such loading indicators only if it takes a while. No point in re-rendering the DOM every time, in every possible case. What if we do it only if the request takes more than 500ms?
Easy:
<div wire:loading.delay.longer>...</div>
There are also possibilities to play around with CSS classes for loading states, attach them to specific actions, and more: read in the official docs.
11. Offline Indicator
Another documented but less known feature of Livewire is telling the user if their internet connection is lost. It can be beneficial if your application works with real-time data or multiple updates on the screen: you may blur some parts of the webpage and show the "offline" text.
It's as easy as this:
<div wire:offline>You are now offline.</div>
Also, as I mentioned, you may blur some elements, by assigning CSS classes, like this:
<div wire:offline.class="blurred">...</div>
12. Pagination with Bootstrap Framework
Similar to Laravel, Livewire uses pagination styling from the Tailwind framework, by default. Luckily, it's easy to override, just provide the different value to the property:
class ShowPosts extends Component
{
    use WithPagination;

    protected $paginationTheme = 'bootstrap';
You can check the available pagination designs directly in Livewire Github repository. While browsing that, I didn't find any information on whether the Bootstrap 4 or Bootstrap 5 version is used.
13. No Mount: Automatic Route-Model Binding
If you want to pass an object to the Livewire component, this is a typical way to do it, with the
mount() method:
class ShowPost extends Component
{
    public $post;

    public function mount(Post $post)
    {
        $this->post = $post;
    }
}
Then, somewhere in Blade, you have
@livewire('show-post', $post).
But did you know that if you provide a type-hint to the Livewire property, that route-model binding would happen automatically?
class ShowPost extends Component
{
    public Post $post;
}
That's it, no need to have the
mount() method at all.
14. Delete Confirm Modal
If you have a "Delete" button and you want to call the confirm JavaScript modal before taking the action, this code wouldn't work correctly in Livewire:
<button wire:click="delete">Delete</button>
There are a few possible solutions to this, probably the most elegant is to stop the Livewire event before it is even happening:
<button onclick="confirm('Are you sure?') || event.stopImmediatePropagation()"
        wire:click="delete">Delete</button>
That
event.stopImmediatePropagation() will stop the Livewire method from being called, if the confirmation result is false.
You may find a few other possible solutions in this Github issue discussion.
That's it, less-known Livewire features and small tips. Hope it was useful!
A web-developer with 15+ years experience, founder of Laravel QuickAdminPanel generator.
Sharing Laravel lessons on Youtube with channel Laravel Daily. | https://laravel-news.com/laravel-livewire-tips-and-tricks | CC-MAIN-2022-21 | refinedweb | 1,625 | 52.49 |
Read LS-Dyna databases (d3plot) in parallel. More...
#include <vtkPLSDynaReader.h>
Read LS-Dyna databases (d3plot) in parallel.
This filter reads LS-Dyna databases in parallel.
The Set/GetFileName() routines are actually wrappers around the Set/GetDatabaseDirectory() members; the actual filename you choose is irrelevant – only the directory name is used. This is done in order to accommodate ParaView.
Definition at line 131 of file vtkPLSDynaReader.h.
Definition at line 134 of file vtkPLSDynaReader.h.
Return 1 if this class is the same type of (or a subclass of) the named class.
Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h.
Reimplemented from vtkLSDynaReader.
Reimplemented from vtkLSDynaReader.
Methods invoked by print to print information about the object including superclasses.
Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes.
Reimplemented from vtkLSDynaReader.
Determine if the file can be read with this reader.
Reimplemented from vtkLSDynaReader.
Set/Get the communicator object.
By default we use the world controller
This is called by the superclass.
This is the method you should override.
Reimplemented from vtkLSDynaReader.
This is called by the superclass.
This is the method you should override.
Reimplemented from vtkLSDynaReader.
These functions read various parts of the database.
The functions that take a vtkIdType argument must be passed the current timestep. Functions that do not take a timestep must have the read head positioned to the start of their data sections. These functions should only be called from within RequestData() since they require the various output meshes to exist.
Reimplemented from vtkLSDynaReader. | https://vtk.org/doc/nightly/html/classvtkPLSDynaReader.html | CC-MAIN-2021-04 | refinedweb | 269 | 53.98 |
Get ML predictions from scikit-learn or XGBoost models
The AI Platform Prediction online prediction service manages computing resources in the cloud to run your models. These models can be scikit-learn or XGBoost models that you have trained elsewhere (locally, or via another service) and exported to a file. This page describes the process to get online predictions from these exported models using AI Platform Prediction.
Overview
In this tutorial, you train a simple model to predict the species of flowers, using the Iris dataset. After you train and save the model locally, you deploy it to AI Platform Prediction and query it to get online predictions.
Before you begin
Complete the following steps to set up a GCP account, activate the AI Platform Prediction API, and install and activate the Cloud SDK.
Set up your GCP project.

Install frameworks

Create a new virtual environment named aip-env and activate it:

virtualenv aip-env
source aip-env/bin/activate
Within your virtual environment, run the following command to install the versions of scikit-learn, XGBoost, and pandas used in AI Platform Prediction runtime version 2.8:
(aip-env)$ pip install scikit-learn==1.0 xgboost==1.5.1 pandas==1.3.4
By providing version numbers in the preceding command, you ensure that the dependencies in your virtual environment match the dependencies in the runtime version. This helps prevent unexpected behavior when your code runs on AI Platform Prediction.:
Versions of scikit-learn and XGBoost
AI Platform Prediction runtime versions are updated periodically to include support for new releases of scikit-learn and XGBoost. See the full details for each runtime version.
Train and save a model
Start by training a simple model for the Iris dataset.
scikit-learn
Following the scikit-learn example on model persistence, you can train and export a model as shown below:
from sklearn.externals import joblib
from sklearn import datasets
from sklearn import svm

# Load the Iris dataset
iris = datasets.load_iris()

# Train a classifier
classifier = svm.SVC()
classifier.fit(iris.data, iris.target)

# Export the classifier to a file
joblib.dump(classifier, 'model.joblib')
To export the model, you also have the option to use the pickle library as follows:
import pickle

with open('model.pkl', 'wb') as model_file:
    pickle.dump(classifier, model_file)
XGBoost
You can export the model by using the "save_model" method of the Booster object.
For the purposes of this tutorial, scikit-learn is used with XGBoost only to import the Iris dataset.
from sklearn import datasets
import xgboost as xgb

# Load the Iris dataset
iris = datasets.load_iris()

# Load data into DMatrix object
dtrain = xgb.DMatrix(iris.data, label=iris.target)

# Train XGBoost model
bst = xgb.train({}, dtrain, 20)

# Export the classifier to a file
bst.save_model('./model.bst')
To export the model, you also have the option to use the pickle library as follows:
import pickle

with open('model.pkl', 'wb') as model_file:
    pickle.dump(bst, model_file)
Model file naming requirements
For online prediction, the saved model file that you upload to
Cloud Storage must be named one of:
model.pkl,
model.joblib, or
model.bst, depending on which library you used. This restriction ensures that
AI Platform Prediction uses the same pattern to reconstruct the model on import as was
used during export.
This requirement does not apply if you create a custom prediction routine (beta).
If you're using a bucket in a different project, you must ensure that your AI Platform Prediction service account can access your model in Cloud Storage. Without the appropriate permissions, your request to create an AI Platform Prediction Prediction, you must explicitly grant access to the AI Platform Prediction service accounts.
Specify a name for your new bucket. The name must be unique across all buckets in Cloud Storage.
BUCKET_NAME="YOUR_BUCKET_NAME"
For example, use your project name with
-aiplatformappended:
PROJECT_ID=$(gcloud config list project --format "value(core.project)") BUCKET_NAME=${PROJECT_ID}-aiplatform
Check the bucket name that you created.
echo $BUCKET_NAME
Select a region for your bucket and set a
REGIONenvironment variable.
Use the same region where you plan on running AI Platform Prediction jobs. See the available regions for AI Platform Prediction.

Upload your saved model file to the bucket, for example:

gsutil cp ./model.joblib gs://$BUCKET_NAME/model.joblib
You can use the same Cloud Storage bucket for multiple model files. Each model file must be within its own directory inside the bucket.
Format data for prediction
gcloud
Create an
input.json file with each input instance on a separate line:
REST API
Create an
input.json file formatted as a simple list of floats, with each
input instance on a separate line:
{ "instances": [ ).
For XGBoost, AI Platform Prediction does not support sparse representation of
input instances. If the value of a feature is zero, use
0.0 in the
corresponding input. If the value of a feature is missing, use
NaN in the
corresponding input..
You must decide at this time whether you want model versions belonging to this model to use a regional endpoint or the global endpoint. In most cases, choose a regional endpoint. If you need functionality that is only available on the global endpoint, use the global endpoint.
If you don't specify the
--region flag, then the gcloud CLI
prompts you to select a regional endpoint (or to use
us-central on the
global endpoint).
Alternatively, you can set the
ai_platform/region
property to a specific region in
order to make sure the gcloud CLI always uses the
corresponding regional endpoint for AI Platform Prediction, even when
you don't specify the
--region flag. (This configuration doesn't apply
to commands in the
gcloud ai-platform operations
command group.)
If you don't specify the
--regions flag, then the
gcloud CLI prompts you to select a regional endpoint (or to
use
us-central1 on the global endpoint). If you plan to use the model version for batch prediction, then you must use runtime version 2.1 or earlier. The default machine type is n1-standard-2 on regional endpoints and mls1-c1-m2 on the global endpoint.

Google Cloud console:
On the Models page, select the name of the model resource you would like to use to create your version. This brings you to the Model Details page.
Select a Machine type to run online prediction.
If you select "Manual scaling", you must enter the Number of nodes you want to keep running at all times.
Learn how scaling options differ depending on machine type.

gcloud:

gcloud ai-platform versions create $VERSION_NAME \
  --model=$MODEL_NAME \
  --origin=gs://$BUCKET_NAME/ \
  --runtime-version=2.8 \
  --framework=$FRAMEWORK \
  --python-version=3.7 \
  --machine-type=$MACHINE_TYPE

REST API request body:

{
  "version": {
    "name": "[YOUR_VERSION_NAME]",
    "deploymentUri": "gs://[YOUR_BUCKET]/[PATH_TO_MODEL]",
    "runtimeVersion": "2.8",
    "framework": "[YOUR_FRAMEWORK_NAME]",
    "machineType": "[YOUR_MACHINE_TYPE]",
    "pythonVersion": "3.7"
  }
}
Send online prediction request
After you have successfully created a model version, AI Platform Prediction starts a new server that is ready to serve prediction requests.
gcloud
Set environment variables for your model name, version name, and the name of your input file:
MODEL_NAME="iris" Prediction Model is deployed. model (str): model name. instances ([[float]]): List of input instances, where each input instance is a list of floats. version: str, version of the model to target. Returns: Mapping[str: any]: dictionary of prediction results defined by the model. """ # Create the AI Platform Prediction each of these parameters in the AI Platform Prediction API for prediction input.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps. | https://cloud.google.com/ai-platform/prediction/docs/ml-predictions-scikit-xgboost-models?authuser=0 | CC-MAIN-2022-27 | refinedweb | 1,211 | 57.37 |
Over the years debugging XSLT has gotten better and better but one thing continues to be lacking. XSLT debugging with .NET component references in the XSLT. I know there are some tools that do a decent job but something is always lacking.
Well I decided to write an XSLT debugger that uses Visual Studio. It is fairly easy to use and is very robust. I will put pictures and such up later but for now I just wanted to get the app out. So if you are like me and are looking for a debugger that includes .NET capabilities then here it is. If you have questions feel free to ask.
So here it is... (Drum Roll)
The XSLT Debugger for .NET.
So I added an example application that is located in the install directory
C:\Program Files\XSLTDebugger\ExampleApplication
Here are the instructions:
1. Start application - XSLTDebugger.exe located C:\Program Files\XSLTDebugger\ by default.
2. Follow instructions on the screen, adding input, xslt, and output files.
3. The Arguments are the most important part. The arguments represent the .NET component itself. So reference your .NET component and then choose the appropriate "type" or method call. Make sure that the assembly you reference is in the GAC and is strong named.
4. The namespace field is what is referenced in the XSLT. An example is: xmlns:inputHelperObj=
Enjoy.
5. The Type field and the Namespace field basically coorelate to each other thus providing the reference.
6. Make sure you add the arguments to the list and repeat for each argument you need to add.
7. Once done click debug, you will get the screen that says which environment do you want to open. This is normal and is how you debug in visual studio so if you have Visual Studio 2005 choose the Visual Studio 2005 environment.
8. Any questions just ask. I will try to respond quickly.
posted @ Monday, November 10, 2008 8:27 AM
RSS ATOM
© Mark Wiggins
Theme by PiyoDesign. Valid XHTML & CSS. | http://geekswithblogs.net/markw/archive/2008/11/10/126931.aspx | crawl-002 | refinedweb | 337 | 69.48 |
You can create a new macro project by selecting Tools, Macros, New Macro Project in the VS .NET IDE or by selecting New Macro Project from the Macros Explorer context menu. When you're in the Macros IDE, you cannot create a new macro project.
Follow the numbered steps and refer to the listing to create a macro that creates subfolders . (The example creates a subfolder for managing sample projects for this book.)
In the VS .NET IDE, choose Tools, Macros, New Macro Project.
Browse to the location where you would like to store the macro and type MakeDirectory for the Name of the macro project.
By default the module will be named Module1. Rename the module to Directory.
Double-click on the Directory module to open the Macros IDE with focus set on this new module.
Add the code shown in Listing 4.5 to complete the macro.
1: Option Strict On 2: Option Explicit On 3: Imports EnvDTE 4: Imports System.IO 5: 6: Public Module Directory 7: 8: Private Function GetSourceDirectory() As String 9: Return _ 10: "C:\ Books\ Sams\ Visual Basic .NET Unleashed\ Source\ " 11: End Function 12: 13: Private Sub GetPath(ByRef Directory As String) 14: Directory = _ 15: InputBox("Enter subfolder name:", _ 16: "Create Sub-Folder", Directory) 17: End Sub 18: 19: Private Function GetFullPath(_ 20: ByVal Directory As String) As String 21: 22: If (Directory = "") Then GetPath(Directory) 23: Return GetSourceDirectory() & Directory 24: End Function 25: 26: Private Sub DoMakeChapterDirectory(_ 27: ByVal Path As String) 28: Try 29: MkDir(Path) 30: Catch 31: MsgBox(Path & " alread exists", _ 32: MsgBoxStyle.Exclamation) 33: End Try 34: End Sub 35: 36: Public Sub MakeChapterDirectory(_ 37: Optional ByVal Directory As String = "") 38: DoMakeChapterDirectory(GetFullPath(Directory)) 39: End Sub 40: 41: End Module
The sample listing employs Option Strict On and Option Explicit On settings for the same reason as we use them for applications.
The listing also imports System.IO, demonstrating the use of other CLR namespaces within a macro. The query function GetSourceDirectory is a hard-coded method rather than a literal or string by choice. GetPath prompts the user for an input path. The motivation for this decision is that you cannot pass arguments to macros. Macros either have to be parameterless subroutines or all parameters have to have optional parameters with default values. In the sample listing, the macro is MakeChapterDirectory with an Optional parameter, Directory, with a default value of "". If the code is used in some other context, we can pass an argument to the macro procedure.
GetPath prompts the user for a subfolder and concatenates the subfolder to the book's path. DoMakeDirectory attempts to create the subfolder. If the subfolder already exists, the Catch block beginning on line 30 handles the error by telling the operator that the subfolder already exists. | https://flylib.com/books/en/1.488.1.64/1/ | CC-MAIN-2020-34 | refinedweb | 476 | 54.93 |
Description.
Instead of mocks we should use stubs. Mocking frameworks tend to treat them as interchangeable and this makes it hard to tell them apart. So it is good to have a simple definition. Quoting Martin Fowler:
Stubr alternatives and similar packages
Based on the "Testing" category
hound9.8 0.0 Stubr VS houndElixir library for writing integration tests and browser automation.
ex_machina9.8 3.0 Stubr VS ex_machinaFlexible test factories for Elixir. Works out of the box with Ecto and Ecto associations.
wallaby9.7 7.0 Stubr VS wallabyWallaby helps test your web applications by simulating user interactions concurrently and manages browsers.
meck9.6 5.5 Stubr VS meckA mocking library for Erlang.
proper9.6 6.8 Stubr VS properPropEr (PROPerty-based testing tool for ERlang) is a QuickCheck-inspired open-source property-based testing tool for Erlang.
mox9.4 5.1 Stubr VS moxMocks and explicit contracts for Elixir.
faker9.4 7.2 Stubr VS fakerFaker is a pure Elixir library for generating fake data.
espec9.4 3.8 Stubr VS especBDD test framework for Elixir inspired by RSpec.
mix_test_watch9.3 0.7 Stubr VS mix_test_watchAutomatically run your Elixir project's tests each time you save a file.
bypass9.3 6.5 Stubr VS bypassBypass provides a quick way to create a mock HTTP server with a custom plug.
ExVCR9.2 5.5 Stubr VS ExVCRHTTP request/response recording library for Elixir, inspired by VCR.
StreamData9.2 3.8 Stubr VS StreamDataData generation and property-based testing for Elixir. 🔮
mock9.1 4.6 Stubr VS mockMocking library for the Elixir language.
excheck8.4 0.0 Stubr VS excheckProperty-based testing library for Elixir (QuickCheck style).
white_bread8.0 1.5 Stubr VS white_breadStory based BDD in Elixir using the gherkin syntax.
Quixir8.0 0.0 Stubr VS QuixirProperty-based testing for Elixir
amrita7.9 0.0 Stubr VS amritaA polite, well mannered and thoroughly upstanding testing framework for Elixir.
ponos7.7 0.0 Stubr VS ponosPonos is an Erlang application that exposes a flexible load generator API.
power_assert7.6 0.0 Stubr VS power_assertPower Assert in Elixir. Shows evaluation results each expression.
blacksmith7.5 0.0 Stubr VS blacksmithData generation framework for Elixir.
espec_phoenix7.4 0.0 Stubr VS espec_phoenixESpec for Phoenix web framework.
shouldi7.3 0.0 Stubr VS shouldiElixir testing libraries with nested contexts, superior readability, and ease of use.
FakerElixir7.1 0.0 Stubr VS FakerElixirFakerElixir generates fake data for you.
chaperon7.0 2.4 Stubr VS chaperonAn HTTP service performance & load testing framework written in Elixir.
pavlov7.0 0.0 Stubr VS pavlovBDD framework for your Elixir projects.
katt6.7 0.0 Stubr VS kattKATT (Klarna API Testing Tool) is an HTTP-based API testing tool for Erlang.
ex_unit_notifier6.6 0.0 Stubr VS ex_unit_notifierDesktop notifications for ExUnit.
ex_spec6.2 0.0 Stubr VS ex_specBDD-like syntax for ExUnit.
FakeServer6.2 0.9 Stubr VS FakeServerFakeServer integrates with ExUnit to make external APIs testing simpler
blitzy5.9 0.0 Stubr VS blitzyA simple HTTP load tester in Elixir.
Mockery5.9 0.4 Stubr VS MockerySimple mocking library for asynchronous testing in Elixir.
mecks_unit5.1 3.4 Stubr VS mecks_unitA package to elegantly mock module functions within (asynchronous) ExUnit tests using meck.
Walkman4.9 0.1 Stubr VS WalkmanIsolate tests from the real world, inspired by Ruby's VCR.
factory_girl_elixir4.7 0.0 Stubr VS factory_girl_elixirMinimal implementation of Ruby's factory_girl in Elixir.
test_selector4.6 4.6 Stubr VS test_selectorA set of test helpers that make sure you always select the right elements in your Phoenix app.
double4.5 0.0 Stubr VS doubleCreate stub dependencies for testing without overwriting global modules.
definject4.3 8.0 Stubr VS definjectUnobtrusive dependency injector for Elixir.
cobertura_cover3.9 0.0 Stubr VS cobertura_coverWrites a coverage.xml from mix test --cover file compatible with Jenkins' Cobertura plugin.
ex_parameterized3.8 2.6 Stubr VS ex_parameterizedSimple macro for parametarized testing.
exkorpion3.7 0.0 Stubr VS exkorpionA BDD library for Elixir developers.
mix_erlang_tasks3.7 0.0 Stubr VS mix_erlang_tasksCommon tasks for Erlang projects that use Mix.
mix_eunit3.6 0.0 Stubr VS mix_eunitA Mix task to execute eunit tests.
ex_unit_fixtures3.4 0.0 Stubr VS ex_unit_fixturesA library for defining modular dependencies for ExUnit tests.
hypermock3.4 0.0 Stubr VS hypermockHTTP request stubbing and expectation Elixir library.
ElixirMock3.1 2.6 Stubr VS ElixirMock(alpha) Sanitary mock objects for elixir, configurable per test and inspectable
efrisby3.0 0.0 Stubr VS efrisbyA REST API testing framework for erlang.
apocryphal2.9 0.0 Stubr VS apocryphalSwagger based document driven development for ExUnit.
ExopData2.1 0.4 Stubr VS ExopDataA library that helps you to write property-based tests by providing a convenient way to define complex custom data generators.
kovacs2.1 0.0 Stubr VS kovacsA simple ExUnit test runner.
test_that_json2.1 0.0 Stubr VS test_that_jsonJSON assertions and helpers for your Elixir testing needs.
Scout APM: Application Performance Monitoring
Do you think we are missing an alternative of Stubr or a related project?
Popular Comparisons
README
Stubr
Stubr is a set of functions helping people to create stubs and spies in Elixir.
About
Elixir is a functional language, so you should aim to write pure functions. However, sometimes you need to call external API’s or check the current time. Since these actions can have side effects, they make it harder to unit test your system.
Stubr solves this problem by taking cues from mocks and explicit contracts. It provides a set of functions that help people create "mocks as nouns" and not "mocks as verbs":
iex> stub = Stubr.stub!([foo: fn _ -> :ok end], call_info: true) iex> stub.foo(1) iex> stub |> Stubr.called_once?(:foo) true iex> spy = Stubr.spy!(Float) iex> spy.ceil(1.5) iex> spy |> Stubr.called_with?(:ceil, [1.5]) true iex> spy |> Stubr.called_twice?(:ceil) false
Installation
Stubr is available in Hex, the package can be installed as:
Add
stubr to your list of dependencies in
mix.exs:
def deps do [{:stubr, "~> 1.5.1", only: :test}] end
Developer documentation
Stubr documentation is available in hexdocs.
Examples
Random numbers
Use
Stubr.stub! to set up a stub for the
uniform/1 function in the
:rand module. Note, there is no need to explicitly set the module option, however, it is useful to do so because it makes sure the
uniform/1 function exists in the
:rand module.
test "create a stub of the :rand.uniform/1 function" do rand_stub = Stubr.stub!([uniform: fn _ -> 1 end], module: :rand) assert rand_stub.uniform(1) == 1 assert rand_stub.uniform(2) == 1 assert rand_stub.uniform(3) == 1 assert rand_stub.uniform(4) == 1 assert rand_stub.uniform(5) == 1 assert rand_stub.uniform(6) == 1 end
Timex
As above, use
Stubr.stub! to stub the
Timex.now/0 function in the
Timex module. However, we also want the stub to act as a transparent proxy over the
Timex module for all non-stubbed functions. To do this, we just set the
module option to
Timex and the
auto_stub option to
true.
test "create a stub of Timex.now/0 and defer on all other functions" do fixed_time = Timex.to_datetime({2999, 12, 30}) timex_stub = Stubr.stub!([now: fn -> fixed_time end], module: Timex, auto_stub: true) assert timex_stub.now == fixed_time assert timex_stub.before?(fixed_time, timex_stub.shift(fixed_time, days: 1)) end
HTTPoison
In this example, we create stubs of the functions
get and
post in the
HTTPoison module and make them return different values based on their inputs:
setup_all do http_poison_stub = Stubr.stub!([ get: fn("") -> {:ok, %HTTPoison.Response{body: "search", status_code: 200}} end, get: fn("") -> {:ok, %HTTPoison.Response{status_code: 401}} end, post: fn("", _) -> {:error, %HTTPoison.Error{reason: :econnrefused}} end ], module: HTTPoison) [stub: http_poison_stub] end test "create a stub of HTTPoison.get/1", context do {:ok, google_response} = context[:stub].get("") {:ok, nasa_response} = context[:stub].get("") assert google_response.body == "search" assert google_response.status_code == 200 assert nasa_response.status_code == 401 end test "create a stub of HTTPoison.post/2", context do {:error, error} = context[:stub].post("", "any content") assert error.reason == :econnrefused end
Links
TDD in functional languages using stubs:
Mark Seemann's blog post about the difference between Mocks and Stubs in the context of commands and queries. | https://elixir.libhunt.com/stubr-alternatives | CC-MAIN-2020-45 | refinedweb | 1,358 | 52.05 |
@Overrideannotation that instructs the compiler that you intend to override a method in the superclass. If, for some reason, the compiler detects that the method does not exist in one of the superclasses, it will generate an error. This is extremely useful to quickly identify typos or API changes. Let's look at an example.
The first class is Animal that has one class method. Its will be used as the superclass.
package com.as400samplecode; public class Animal { public void myMethod() { System.out.println("The instance method in Animal."); } }
A subclass of Animal, is called Cat. This overrides the method myMethod()
package com.as400samplecode; public class Cat extends Animal { @Override public void myMethod() { System.out.println("The instance method in Cat."); } }
Now if I make a mistake in typing the correct method name then the compiler will give me an error only when i use the @Override annotation - The method myMethod() of type Cat must override a superclass method
NO JUNK, Please try to keep this clean and related to the topic at hand.
Comments are for users to ask questions, collaborate or improve on existing. | https://www.mysamplecode.com/2011/06/java-override-annotations-what-does-it.html | CC-MAIN-2020-29 | refinedweb | 186 | 67.15 |
This is part of a series I started in March 2008 - you may want to go back and look at older parts if you're new to this series.
So far the method_missing implementation has just printed a notice and quit.
During the trip down the rabbit hole
that is
attr_accessor and friends that became a major annoyance.
The problem is that this notice has not included a stack backtrace or any way
to figure out where it occurred. It's also been impossible to override it
and actually figure out what method was being called because we currently
get to
method_missing by jumping straight into the vtable.
We get
method_missing when we hit the pointer that's installed there as
default when nothing has overridden it.
So how do we get better debug output? And how do we support users overriding
method_missing and actually getting a symbol to use?
A "thunk" in terms of a object oriented languages is generally a small piece of compiler generated code that gets inserted to "adjust" a function or method call.
In this specific case, we will generate a separate thunk for each vtable entry.
Instead of inserting a pointer to
method_missing directly, we will insert the
address of a small thunk. The thunk will not even create a full stack frame,
but simply add the address of the
Symbol corresponding to the vtable slot
as the first argument on the stack, and then jump straight into
method_missing,
thereby simulating a "direct" call to method_missing with the symbol as the
first argument.
It's actually very simple - we just need to pop the real return address off the stack, push the symbol onto the stack, and then push the real return address back on.
Ok, so we still cheat a bit. Eventually we need to make our
method_missing
into a real method, but for now it's a function. Here's the code I've added
to create a "base" vtable that is used to initialize the vtable slots of
Object:
def output_vtable_thunks @vtableoffsets.vtable.each do |name,_| @e.label("__vtable_missing_thunk_#{clean_method_name(name)}") # FIXME: Call get_symbol for these during initalization # and then load them from a table instead. compile_eval_arg(GlobalScope.new, ":#{name.to_s}".to_sym) @e.popl(:edx) # The return address @e.pushl(:eax) @e.pushl(:edx) @e.jmp("__method_missing") end @e.label("__base_vtable") # For ease of implementation of __new_class_object we # pad this with the number of class ivar slots so that the # vtable layout is identical as for a normal class ClassScope::CLASS_IVAR_NUM.times { @e.long(0) } @vtableoffsets.vtable.to_a.sort_by {|e| e[1] }.each do |e| @e.long("__vtable_missing_thunk_#{clean_method_name(e[0])}") end end
I hope it's reasonably easy to follow. First it generates a number of functions that will look like this:
__vtable_missing_thunk_to_yaml: subl $4, %esp movl $1, %ebx movl $.L110, (%esp) movl $__get_symbol, %eax call *%eax addl $4, %esp popl %edx pushl %eax pushl %edx jmp __method_missing
This can be optimized a lot, but if you've followed this series, you know
getting something working is higher priority. In this case we generate a
call to
__get_symbol, which was introduced in the last part, and we pass
the string corresponding to the name:
.L110: .string "to_yaml"
Then we adjust the stack as mentioned above.
The next step is to create the
__base_vtable. Here's an excerpt:
__base_vtable: .long 0 .long 0 .long __vtable_missing_thunk_new .long __vtable_missing_thunk___send__ .long __vtable_missing_thunk___get_symbol .long __vtable_missing_thunk___method_missing .long __vtable_missing_thunk_array .long __vtable_missing_thunk___new_class_object .long __vtable_missing_thunk_define_method ...
Then we need to modify
__new_class_object to assign entries from
__base_vtable instead of just blindly assigning a pointer to
__method_missing:
# size <= ssize *always* or something is severely wrong. def __new_class_object(size,superclass,ssize) ob = 0 %s(assign ob (malloc (mul size 4))) # Assumes 32 bit i = 1 %s(while (lt i ssize) (do (assign (index ob i) (index superclass i)) (assign i (add i 1)) )) %s(while (lt i size) (do # Installing a pointer to a thunk to method_missing # that adds a symbol matching the vtable entry as the # first argument and then jumps straight into __method_missing (assign (index ob i) (index __base_vtable i)) (assign i (add i 1)) )) %s(assign (index ob 0) Class) ob end
Finally we make
__method_missing output the symbol, instead of just
spitting out "Method missing":
def __method_missing sym %s(printf "Method missing: %s\n" (callm sym to_s)) %s(exit 1) 0 end | https://hokstad.com/writing-a-compiler-in-ruby-bottom-up-step-22 | CC-MAIN-2021-21 | refinedweb | 725 | 58.62 |
Servlets
.
Anyways, please visit the following links:
servlets
:// why we require wrappers in servlets? what are its uses?
Please explain
These wrappers classes help you to modify request
servlets
: are sessions in servlets what are sessions in servlets
A Session refers to all the request that a single client makes to a server
servlets
://... regarding the user usage and habits. Servlets sends cookies to the browser client
servlets
, visit the following links:
servlets
the following links:
servlets
; Please visit the following link:
How to make a closing eyes, make a closing eyes, closing eyes
How to make a closing eyes
This is a funny example, It is great effort to
make still eye to closing eye. You will learn here how is it made possible
Servlets Programming
visit the following links:
jsp and servlets
jsp and servlets i want code for remember password and forget password so please send as early as possible ............. thanks in advance
Please visit the following link:
Prob. on tutorial () - Framework
Prob. on tutorial Struts part 2 Hello, I am trying to code springframework code with using servlet posted on I compile the servlet and copy all the class
servlets bulk - Java Beginners
servlets bulk Hi,
My problem is " i want to send a one mail to many persons(i.e.,same mail to many persons) using servlets".
Pls help me. ...://
Thanks
servlets mails - Java Beginners
servlets mails Hi,
My problem is " i want to send a one mail to many persons(i.e.,same mail to many persons) using servlets".
Pls help me...://
servlets - Servlet Interview Questions
servlets What would we do with a doGet() method? Hi
Read more Details
servlets - JSP-Servlet
servlets how to upload images in servlets Hi friend,
For solving the problem :
Thanks
servlets - Java Interview Questions
in servlet visit to :... html and processing in servlets and store in DB like ORACLE. And next i want to retrieve this image into web page using servlets.
please try to send the answer
Servlets - JSP-Servlet
to: Hi,im d beginner to learn servlets and jsp.please can u... with the host server.It also allows the servlets to write events to a log file
. Hi friend,
Read for more information.
Thanks
servlets - JSP-Servlet
servlets i want to write a simple program on servlet context listener. Hi Friend,
Please visit the following link:
Hope
servlets - JSP-Servlet
for more information.... servlets link . you can learn more information about servlets structure
servlets - Java Beginners
://
Thanks...servlets what is the difference b/w servlets and JSP,
what servlets... to respond to HTTP requests.
A JSP layered on top of Java Servlets. Whereas execution - JSP-Servlet
://
Thanks.
Amardeep...servlets execution the xml file is web.xml file in which the servlet name,servlet class,mapping etc which has to be done.
What u want
servlets execution - JSP-Servlet
,
To visit this link for solving the problem: execution hi friends,
i wanted to know how to compile and run a servlet which has got an html file with it. this html file
Servlets - Java Interview Questions
information visit to :
Thanks... and
are available to all the servlets within that application.
It represents your web
servlets - JSP-Servlet
an application using Servlets or jsp make the directory structure given below link
Now visit
servlets - Servlet Interview Questions
://... servlet and are unknown to other servlets.
The ServletContext parameters
Servlets - Java Beginners
for more information,
Thanks
Amardeep
Servlets - JSP-Servlet - JSP-Servlet
----------------------------------------------
Read for more information.... servlets. Hi friend,
employee form in servlets...;This is servlets code.
package javacode;
import java.io.*;
import java.sql.
servlets deploying - Java Beginners
.
Thanks
Amardeep...servlets deploying how to deploy the servlets using tomcat?can you...);
}
}
------------------------------------------------------- This - JSP-Servlet
at :
Thanks... servlets link , read more and more information about servlet. how to compile and how to run servlets program.This is running program but you are not able
servlets - Servlet Interview Questions
information.
... for more information.
Servlets
Servlets How to check,whether the user is logged in or not in servlets to disply the home page
servlets - JSP-Servlet
link:
Thanks...servlets how to generate reports in servelts
pls tell me from first onwards i.e., i don't know about reports only i know upto servlets
servlets - JSP-Servlet
("");
}
}
---------------------------------------------------
Read for more information.
Doubt in servlets - JSP-Servlet
the following link:
Thanks - JSP-Servlet
Servlets How can we differentiate the doGet() and doPost() methods in Servlets Hi friend,
Difference between doGet() and doPost()
Thanks
servlets - Servlet Interview Questions
------------------------------------------
Read for more Details
J2me app with servlets
J2me app with servlets Can we send and receive message from our servlet website to mobile? if yes,then how..
without using any router..code plz??
Please visit the following link:
you.
Please visit for more information.
servlets - Servlet Interview Questions
://
Servlets - JSP-Servlet
on JDBC-Mysql visit to :
Thanks
how to execute jsp and servlets with eclipse
how to execute jsp and servlets with eclipse hi
kindly tell me how to execute jsp or servlets with the help of eclipse with some small program... these will be helpful for you
Servlets Vs Jsp - JSP-Servlet
information : Vs Jsp In servlets and Jsp's which one is important? and also tell me the Is Servlets are best or jsp's are best? give me the(reason also
Servlets
servlets
the servlets
SERVLETS
fetching data using servlets - SQL
for fetching data from a ORACLE10g database using SERVLETS. Hi Friend,
Please visit the following links:
Java Servlets - Java Interview Questions
links:
2)Visit the following link:
3)Visit the following link:
http
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/84019 | CC-MAIN-2013-20 | refinedweb | 962 | 73.68 |
0
Hello, I have been stuck on this for awhile, and can not seem to get it to work exactly correct.
I am trying to read in a file ("in_text.txt") and print out it's contents to the screen for now (eventually i will manipulate the data with a block_cipher class). Here is what I have so far...
in_text.txt
your cat
#include <iostream> #include "block_cypher.h" #include <fstream> #include <string> using namespace std; int main() { ifstream ins; long begin,end,length; char letter; { ins.open("in_text.txt", ios::in); if (ins.fail()) { cerr << "Failed to open in_text.txt.\n"; exit(1); } } begin = ins.tellg(); ins.seekg(0, ios::end); end = ins.tellg(); length = ((end - begin)-1); ins >> letter; while (!ins.eof()) { cout << letter; ins >> letter; } cout << endl; cout << "Length = " << length << endl; return (0); }
Thanks | https://www.daniweb.com/programming/software-development/threads/107357/no-idea-why-i-can-t-read-in-this-file | CC-MAIN-2017-26 | refinedweb | 136 | 70.19 |
Web user controls are very much like Windows user controls, with the obvious difference that they're designed for use in Web applications. Like user controls, Web user controls can be composites of other controls, or you can draw them from scratch. (Although your options for drawing them from scratch are more limited in Web user controls because of the limitations of browsers. One option is to make the Web user control display an image that you can change when the control is clicked, activated, and so on.)
Although Web user controls are based on the System.Web.UI.UserControl class, which is very different from the user control System.Windows.Forms.UserControl class, programming them is similar. To demonstrate just how similar, we'll re-create the same user control we just created as a Web user control.
To follow along, create a new Web application called ch11_03 now, and then add a Web user control to this application by using Project, Add Web User Control. When you select this menu item, the Add New Item dialog box opens, as you see in Figure 11.8. Accept the default name for the new Web user control, WebUserControl1 , by clicking Open .
This creates the new Web user control you see in Figure 11.9. The new Web user control's class is WebUserControl1 , and at design time, it looks like a small standard Web page. We've already added the label we'll use in this control to the Web control, as you also see in Figure 11.9.
As far as the C# code goes, programming a Web user control in C# is so close to programming a user control that we can use the same code. All we have to do is to borrow the code from our user control example, ch11_01, and drop it into the code designer for the new user control. Here's what that looks like in WebUserControl1.ascx.cs:
public class WebUserControl1 : System.Web.UI.UserControl { protected System.Web.UI.WebControls.Label Label1; #region Web Form); } }
This implements the DisplayColor property, the DrawText method, and the NewText event in our Web user control. It was as simple as that.
This is the point where things start to differ from user controls. As you recall, all we had to do to make a user control available to other projects was to compile it. The IDE can't work that closely with the Web server, however, which means that you have to follow a different procedure to add our Web user control to the ch11_03 Web application.
Here's what you do: Open the Web application's main form in a form designer, and then drag the WebUserControl1.ascx entry from the Solution Explorer onto that form, adding the Web user control, WebUserControl11 , to the form as you see in Figure 11.10. Because the Web user control has not been compiled, the IDE doesn't know what it will look like at runtime, so it gives it a generic button-like appearance at design time, as you can see in the figure.
Dragging this control to the Web form creates a new Web user control, WebUserControl11 , in the Web application's main Web form, WebForm1.aspx. Here's what this new control looks like in ASP.NET, in WebForm1.aspx:
<%@ PageWeb User Controls</asp:Label> <uc1:WebUserControl1 id=WebUserControl11</uc1:WebUserControl1> </form> </body> </HTML>
On the other hand, because this control will not actually be compiled until runtime, the IDE does not automatically add the user control, WebUserControl11 , to the "code-behind" file for our Web application, WebForm1.aspx.cs. To use this control in code, you have to declare it in WebForm1.aspx.cs like this:
public class WebForm1 : System.Web.UI.Page { protected System.Web.UI.WebControls.Label Label1; protected WebUserControl1 WebUserControl11; . . .
That adds the Web user control to our code designer, which means you can work with the control's properties, methods , and events in code. Note that because the control hasn't been compiled, you can't work with its properties in the properties window at design time. You can, however, set properties (such as setting the DisplayColor property to System.Drawing.Color.aquamarine ) when the Web form containing the control loads, which we'll do in the ch11_03 example like this:
private void Page_Load(object sender, System.EventArgs e) { WebUserControl11.DisplayColor = System.Drawing.Color.Aquamarine; }
As in the user control example we saw earlier today, we can also add a button with the caption Click Me! to call the DrawText method, except this time we'll use that method to display the text "Web User Controls!" (not "User Controls!" ) in the label in our Web user control:
private void Button1_Click(object sender, System.EventArgs e) { WebUserControl11.DrawText("Web User Controls!"); }
The DrawText method will also fire the NewText event in the Web user control. In our Web application, we can connect an event handler, WebUserControl11_NewText , to that event. To do that, add this code to the InitializeComponent method in the Web Form Designer generated code in WebForm1.ascx.cs:
private void InitializeComponent() { this.Button1.Click += new System.EventHandler(this.Button1_Click); this.Load += new System.EventHandler(this.Page_Load); this.WebUserControl11.NewText += new ch11_03.WebUserControl1.NewTextDelegate(WebUserControl11_NewText); }
All that's left is to write the event handler WebUserControl11_NewText . In that event handler, we'll display the new text in our Web user control in a text box this way:
private void WebUserControl11_NewText(object sender, string text) { TextBox1.Text = text; }
And that's all we need. We've been able to duplicate the work we did with the UserControls example earlier, but this time we're using a Web user control.
You can see this example at work in Figure 11.11. When you click the Click Me! button, the new text is displayed in the Web user control and the text box, as you see in that figure.
That rounds off our discussion on user controls and Web user controls. When it comes to code reuse, it's hard to beat these types of custom-built controlsyou write them once and you can use them in dozens of applications. | https://flylib.com/books/en/1.254.1.121/1/ | CC-MAIN-2019-13 | refinedweb | 1,028 | 56.25 |
Decision::ParseTree - Replacing waterfall IF-ELSIF-ELSE blocks
Version 0.041
Death to long if-elsif-else blocks that are hard to maintain and hard to explain to your manager. Here's an overly simplistic example:
if ( $obj->is_numeric ) {
  if ( $obj->is_positive ) {
    print 'Positive Number';
  }
  elsif ( $obj->is_negative ) {
    print 'Negative Number';
  }
  else {
    print 'Looks like zero';
  }
}
else {
  print 'Non-Numeric Value';
}
---
- is_num :
    0 : Non-Numeric Value
    1 :
      - is_pos :
          1 : Positive Number
      - is_neg :
          0 : Looks like zero
          1 : Negative Number
...
package Rules; use Scalar::Util; sub is_num { my ( $self, $obj ) = @_; return (Scalar::Util::looks_like_number($obj->{value})) ? 1 : 0; } sub is_pos { my ( $self, $obj ) = @_; return ($obj->{value} > 0 ) ? 1 : 0; } sub is_neg { my ( $self, $obj ) = @_; return ($obj->{value} < 0 ) ? 1 : 0; }
package Number; sub new { my ( $class, $value ) = @_ my $self = { parse_path => [], value => $value }; return bless $self, $class; }
use Decision::ParseTree q{ParseTree}; my $rules = Rules->new; my $tree = LoadFile('tree.yaml'); print ParseTree( $tree, $rules, Number->new(10) ); # Positive Number print ParseTree( $tree, $rules, Number->new(-1) ); # Negative Number print ParseTree( $tree, $rules, Number->new(0) ); # Looks like zero print ParseTree( $tree, $rules, Number->new('a')); # Non-Numeric Value
To make this all work we need a few parts:
So this all started as a way to make a decision tree thats easy to parse and easy to read for non-programmers. So to do this I looked to YAML, it's easy to read and easy to parse. Though make this work we have some hard and fast rules to follow for the tree construction:
Sometimes you have to make things messy before they can get clean.
Theres a flexibility that comes with breaking things apart in to nice, neat little chunks. By separating the rule logic in to one place you can make very complex rules that do not gunk up your code. You pull the order of these rules in to another place as it's completely possible that you would want to tweak the order. And lastly you need to glue these separate things together, so you have an object that gets passed thru to make this all work. Tada!
It would be nice to whip up a big example here to show all the interesting bits, sadly I can't think of a good example. Ideas?
$obj = Number->new(10); ParseTree( $tree, $rules, $obj ); # $obj->{parse_path} will now look like : # [ { 'is_num' => 1 }, # { 'is_pos' => 1 }, # ]
print $obj->{parse_answer}; # Positive Number
ParseTree is the only thing that can get exported, it's also the only thing in here, so export away.
Runs $obj thru $tree, using $rules as the library of rules.
Returns the first endpoint that you run into as the answer.
ben hengst,
<notbenh at cpan.org>
Please report any bugs or feature requests to
bug-decision-parsetree at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
You can find documentation for this module with the perldoc command.
perldoc Decision::ParseTree
You can also look for information at:
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~notbenh/Decision-ParseTree-0.041/lib/Decision/ParseTree.pm | CC-MAIN-2015-06 | refinedweb | 534 | 59.43 |
A WebCam Class in Visual Basic
WEBINAR: On-demand webcast
How to Boost Database Development Productivity on Linux, Docker, and Kubernetes with Microsoft SQL Server 2017 REGISTER >
The WebCam class will make it easy for your applications to view a webcamera. Additionally, it will make it possible to configure a number of settings on the Webcam, such as setting the FPS value (frames per second).
This class was written quickly to dig into the .NET environment. As such, the coding may not be the best, but it works. Included with the code is a project that uses the class, so that you can see how simple it is.
If you look at the class, you will find that the first three variables that can be changed are as follows:
Private CamFrameRate As Integer = 15 Private OutputHeight As Integer = 240 Private OutputWidth As Integer = 360
CamFrameRate is set to 15 in this code. This is just the initial frame rate that is used. The CamFrameRate sets how much of a gap there is between frames. In this case, 15ms, or about 65 FPS, is the setting. You can change the FPS through a subroutine.
OutputHeight and OutputWidth are fairly self explanatory. You can set these to the dimensions of output.
Before doing much with a WebCam, you should probably make sure it is running. You can use the iRunning variable to check. This variable is defined in the code as follows:
Public iRunning As Boolean
You can call IRunningat any time; it will return true if the camera is running or false if it is not. For example, the following line of code displays a message box letting you know if the camera is running:
Messagebox.show Mycam.Irunning
How to Use the Class
To use the class, you first need to create an iCam object:
Private myCam As iCam Set myCam = New iCam
From here, you can call a range of functions using standard syntax:
mycam.Function Name
Functions
The following provides a bit of information on the key functions:
- initCam(ByVal parentH As Integer): This is where it all starts. You must call this to set up the camera. parentH is where you want to prievew, so if you have a pictureBox on your form, named picoutput, you would call initCam as follows:
myCam.initCam(Me.picOutput.Handle.ToInt32)
myCam.setFrameRate(25)
Me.picStill.Image = myCam.copyFrame(Me.picOutput, _ New RectangleF(0, 0, _ Me.picOutput.Width, _ Me.picOutput.Height))
In Conclusion...
The code might not be the greatest; however, it works well. Hopefully, you will find it useful.
zoomPosted by faizal on 08/03/2016 03:49am
hii the code working perfectly. :) is Is it possible to zoom thanks is advanceReply
MrPosted by arthur on 09/07/2015 01:03am
im working with vb .net 2010. when i run your program no error but the picture box displays only black...
MrPosted by Chris on 10/28/2015 12:33pm
I am using Visual Studio 2015 community and also get a black screen and no errors.
VB userPosted by Marcus on 08/13/2016 03:51pm
It's not the code, it's Windows 10 Right now its not compatible Ive been looking for a fix for this haven't found one yet I'm very upset with Microsoft for leaving this out. I hope it was not intentional but it seems like it I haven't had any trouble with any other code but the webcam. Business as usual Microsoft Forces everyone to go their way or the highwayReply
Web Cam ZoomPosted by Tony Robson on 07/08/2015 06:01am
Hi there, The code does indeed work perfectly! Thank you. Is it possible to zoom in and out? If so - how can this be achieved? Many Thanks in anticipation. :-)Reply
"Not Declared" errorsPosted by Skylar on 10/09/2012 03:20pm
When I try to debug my app, I get these errors: 'Application' is not declared. It may be inaccessible due to its protection level. 'MessageBox' is not declared. It may be inaccessible due to its protection level. 'MessageBox' is not declared. It may be inaccessible due to its protection level. 'MessageBox' is not declared. It may be inaccessible due to its protection level. Type 'PictureBox' is not defined. Type 'RectangleF' is not defined. Type 'Bitmap' is not defined. Type 'Graphics' is not defined. Type 'Bitmap' is not defined. Type 'Graphics' is not defined. Type 'Bitmap' is not defined. 'MessageBox' is not declared. It may be inaccessible due to its protection level. Importing the namespaces do nothing, and are already referenced in my application anyways.Reply
WifiPosted by zspzs on 03/19/2010 09:11pm
Hello, Thanks for the nice article. It works fine when I use an USB webcam. Where can I change the webcam source in the code if I want to use another camera? e.g. wifi camera. Thank you in advance for your help! PeterReply | http://www.codeguru.com/csharp/csharp/cs_misc/graphicsandimages/article.php/c13951/A-WebCam-Class-in-Visual-Basic.htm | CC-MAIN-2017-43 | refinedweb | 824 | 66.13 |
Created on 2008-11-08.18:20:24 by thijs, last changed 2008-12-16.22:04:29 by thijs.
When trying to install jython-elementtree or setuptools trunk, or running our unit tests it reports the
following error:
Traceback (most recent call last):
File "./setup.py", line 53, in <module>
setup(name = "PyAMF",
File "/usr/local/src/jython-trunk/dist/Lib/distutils/core.py", line 151, in setup
dist.run_commands()
File "/usr/local/src/jython-trunk/dist/Lib/distutils/dist.py", line 974, in run_commands
self.run_command(cmd).py", line 56, in run
File "/usr/local/src/jython-trunk/dist/Lib/distutils/command/install.py", line 517, in run
self.run_command(cmd_name)
File "/usr/local/src/jython-trunk/dist/Lib/distutils/cmd.py", line 333, in run_command
self.distribution.run_command(command)_lib.py", line 21, in run
File "/usr/local/src/jython-trunk/dist/Lib/distutils/command/install_lib.py", line 116, in install
outfiles = self.copy_tree(self.build_dir, self.install_dir)
File "/home/buildbot/Buildslaves/pyamf/amd64-ubuntu-jython/build/setuptools-0.6c8-
py2.5.egg/setuptools/command/install_lib.py", line 50, in copy_tree
File "/usr/local/src/jython-trunk/dist/Lib/distutils/cmd.py", line 385, in copy_tree
return dir_util.copy_tree(
File "/usr/local/src/jython-trunk/dist/Lib/distutils/dir_util.py", line 169, in copy_tree
outputs.extend(
File "/usr/local/src/jython-trunk/dist/Lib/distutils/dir_util.py", line 174, in copy_tree
copy_file(src_name, dst_name, preserve_mode,
File "/usr/local/src/jython-trunk/dist/Lib/distutils/file_util.py", line 172, in copy_file
os.utime(dst, (st[ST_ATIME], st[ST_MTIME]))
File "/usr/local/src/jython-trunk/dist/Lib/os.py", line 534, in utime
_posix.utimes(path, long(atime * 1000), long(mtime * 1000))
TypeError: utimes(): 3rd arg can't be coerced to long
For a full issue report see our buildbot log at-
jython/builds/0/steps/jython-install/logs/stdio/text
I'm logging this ticket against 2.5alpha3 because the 2.5beta0 version nr isn't available in the list,
might want to add that as well. Exact revision nr I was using is 5557.
Is this 64 bit java? I know there's an existing issue with utimes and 64
bit java (on OS X, maybe others) -- though oddly enough this doesn't
look like that issue at all.
I wasn't able to reproduce this on OS X 64 bit java (using jython trunk
r5557, pyamf trunk).
Does the following simple case work?
import os
f = '/tmp/jython-foo'
open(f, 'w').close()
s = os.stat(f)
os.utime(f, (s.st_atime, s.st_mtime))
If you have a local Jython build and can reproduce the issue, it'd help
to know what the actual arguments being passed to os.utime are (you
could temporarily hack your os.py to print them out)
Also I'd like to double check what the value of 'os._native_posix' is on
this platform
Yes it's a 64-bit machine. I put your test into a .py file and ran it
with Jython and it didn't report any errors..
The results of the second request:
Jython 2.5b0+ (trunk:5557, Nov 8 2008, 16:33:47)
[Java HotSpot(TM) 64-Bit Server VM (Sun Microsystems Inc.)] on
java1.5.0_06
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os._native_posix
True
>>>
The errors with jython-elementtree went away because they recently
changed the installation procedure (now using an install.py instead of
setuptools-based setup.py). I'll see if I can create a better reproducible test for this issue.
Alright so I'm able to reproduce this with jython-elementtree
() r26 and jython r5587.
running 'jython setup.py install' throws this error. Using setuptools-
0.6c8. Not sure if it's then a jython, jython-elementtree or setuptools
issue, so let me know if I should open a ticket against a different
project, but it all points to jython it seems.
hippyhacker noticed stat results on this platform (64 bit linux) are
occasionally returning huge numbers (for things like st_mtime):
This must be the cause of this problem, I can reproduce the error
message on any platform by passing a huge long such as 7308549900279964261 to os.utime
I have a Java 64/amd64 linux VM setup to work on this, I just need a
file that can reproduce the problem. I'm positive it's the problem
hippyhacker described, so I'm looking for a file that will return a huge
long timestamp from stat
you should be able to tar up the file and send it to me (it'll preserve
its timestamps), though hippyhacker tried and that didn't work (might
have been his fault). It'd be nice if you could double check that the
problem still happens after untarring the file
(and I mean huge, as shown here: )
I tarred up the elementtree-jython trunk on my server using:
tar --atime-preserve -czvf et-jython.tar.gz jython-elementtree-trunk
I then downloaded it and tried to unpack it, which threw the following
warnings:
tar: jython-elementtree-trunk/build/lib/elementtree/ElementTree.py: time
stamp 2030-11-07 16:42:11 is 692845574 s in the future
tar: jython-elementtree-trunk/build/lib/elementtree/TidyTools.py:
implausibly old time stamp 1970-01-01 01:00:00
tar: jython-elementtree-
trunk/build/lib/elementtree/TidyHTMLTreeBuilder.py: implausibly old time
stamp 1970-01-01 01:00:00
tar: jython-elementtree-trunk/build/lib/xml/etree/ElementInclude.py:
time stamp 2026-01-20 09:52:31 is 541448194 s in the future
tar: jython-elementtree-trunk/build/lib/xml/parsers/__init__.py:
implausibly old time stamp 1970-01-01 01:00:00
So there's definitely something weird going on there. I attached that
same tarball, I hope it helps solving this bug.. I have no idea how
those timestamps became so screwed up, the time is and always has been
correct on my server afaik. So I'm guessing Jython did something weird
with the timestamps when trying to build the package? Just guessing..
I'm getting the same problem trying to install django.
jython 5.5b0+ (trunk, Dec 9 2008, 16:54:43)
[Java HotSpot(TM) 64-bit Server VM (Sun Microsystems Inc.)] on
java1.6.0_04
django/trunk r9619
when I typed
# jython setup.py install
there were lots of
"copying django/this/that -> build/lib/django/this/that"
then, build/lib/django/contrib/csrf/tests.py throws up a traceback,
with line 542 of os.py
_posix.utimes(path, long(atime * 1000), long(mtime * 1000))
throwing
TypeError: utimes(): 3rd arg can't be coerced to long
I stuck some print statements into os.py, and
DEBUG: mtime=7310012245875128690
what? Checking with stat
# stat build/lib/django/contrib/csrf/tests.py
File: `build/lib/django/contrib/csrf/moved_tests.py'
Size: 5037 Blocks: 24 IO Block: 4096 regular file
Device: 823h/2083d Inode: 11255892 Links: 1
Access: (0644/-rw-r--r--) Uid: ( XXXX/ XXXX) Gid: ( XXXX/ XXXXX)
Access: 2008-12-10 09:00:48.000000000 +1100
Modify: 7165064483007524707
Change: 2008-12-10 09:00:45.000000000 +1100
Hmm. Other files don't stat like that (XXX overtyped by me by the
way)...
# stat build/lib/django/contrib/gis/measure.py
File: `build/lib/django/contrib/gis/measure.py'
Size: 12528 Blocks: 40 IO Block: 4096 regular file
Device: 823h/2083d Inode: 11256021 Links: 1
Access: (0644/-rw-r--r--) Uid: ( XXXX/ XXXXXX) Gid: ( XXX/ XXXXX)
Access: 2008-12-10 09:00:45.000000000 +1100
Modify: 1970-01-01 13:28:48.000000000 +1000
Change: 2008-12-10 09:00:45.000000000 +1100
That that Modify time is not nonsence, but it's not right either. see
also:
# ls -l build/lib/django/contrib/csrf/tests.py
-rw-r--r-- 1 XXXXXX XXXXX 5037 7165064483007524707
build/lib/django/contrib/csrf/tests.py
# ls -l build/lib/django/contrib/gis/measure.py
-rw-r--r-- 1 XXXXXX XXXXX 12528 Jan 1 1970
build/lib/django/contrib/gis/measure.py
Yeah, right.
# touch build/lib/django/contrib/gis/measure.py
# ls -l build/lib/django/contrib/gis/measure.py
-rw-r--r-- 1 XXXXXX XXXXX 12528 Dec 10 09:25
build/lib/django/contrib/gis/measure.py
OK then,
# touch build/lib/django/contrib/csrf/tests.py
# ls -l build/lib/django/contrib/csrf/tests.py
-rw-r--r-- 1 A01417 eawmd 5037 Dec 10 09:26
build/lib/django/contrib/csrf/tests.py
Another "jython setup.py install" failed the same way further down, in
contrib/webdesign. So I went on a touch rampage all over build/lib,
then "jython setup.py install" completed OK.
But why was "jython setup.py build" making dodgy MTIMES in the build
directory when stat in the source files looks fine?
Why is coreutils giving me nonsence with some of these dodgy MTIMES
(but not all of them)?
I was finally able to reproduce this by manually setting a file to a
huge timestamp. I later realized 64 bit CPython also allowed me to set
these huge timestamps. This seemed to only work on Linux, other OSes I
tried overflow back to smaller values
So the question is, who set the huge mtime value in the first place?
Likely Jython -- but how? We already knew utimes had some issues on 64
bit platforms, but how would it manage to set a timestamp to such a huge
value if it runs into "3rd arg can't be coerced to long" on those huge
values?
I'm thinking maybe that jna-posix utimes bug we already knew about
somehow fudged another value into a much larger timestamp value
So I've a) fixed the 64 bit utimes bug we knew about, and also b) fixed
utimes to allow setting these huge timestamps (as 64 bit CPython also
allows it). So hopefully a) solves fudging timestamps, if we were in
fact doing it before, and b) allows fudged timestamps to just pass on
through. r5768
pjenvey@lamp:~/src/jython$ ll /tmp/foo.test
-rw-r--r-- 1 pjenvey pjenvey 0 7165064483007524864 /tmp/foo.test
pjenvey@lamp:~/src/jython$ stat !$
stat /tmp/foo.test
File: `/tmp/foo.test'
Size: 0 Blocks: 0 IO Block: 4096 regular
empty file
Device: 801h/2049d Inode: 78731 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ pjenvey) Gid: ( 1000/ pjenvey)
Access: 7165064483007524864
Modify: 7165064483007524864
Change: 2008-12-15 00:40:34.000000000 +0000
pjenvey@lamp:~/src/jython$ touch /tmp/foo.test2
pjenvey@lamp:~/src/jython$ ll !$
ll /tmp/foo.test2
-rw-r--r-- 1 pjenvey pjenvey 0 2008-12-15 19:23 /tmp/foo.test2
pjenvey@lamp:~/src/jython$ dist/bin/jython
Jython 2.5b0+ (trunk:5768, Dec 15 2008, 19:21:04)
[Java HotSpot(TM) 64-Bit Server VM (Sun Microsystems Inc.)] on
java1.6.0_07
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.stat('/tmp/foo.test').st_mtime
7165064483007524864L
>>> os.utime('/tmp/foo.test2', (_, _))
>>>
pjenvey@lamp:~/src/jython$ !-2
ll /tmp/foo.test2
-rw-r--r-- 1 pjenvey pjenvey 0 7165064483007524864 /tmp/foo.test2
Confirmed it works, cheers! | http://bugs.jython.org/issue1166 | CC-MAIN-2014-41 | refinedweb | 1,851 | 60.51 |
CSC/ECE 517 Fall 2012/ch1 1w22 an
Introduction[1]
A method that belongs to a class is called by creating an object of the class and passing the method name to the object as a message. The object then looks up the method lookup path and tries to match the called method with the defined methods in the class. On success, the method is executed and the result is returned.
If the object does not find a match in its method lookup, in normal circumstances the NoMethodError Exception is raised .
In cases where the user wants to handle the methods which are not defined but are still called, “method_missing” can be defined and the user can handle the methods as he/she sees fit.
Format for Defining method_missing
=> def method_missing(m,*args,&block)
(i) m-> accepts the symbol/name of the undefined method (ii) *args-> accepts the array of arguments passed in the method call (iii) &block->accepts a block passed to the method
Ruby Method Lookup Flow
When the object of a class receives a method name to be executed, the following steps are carried out for matching and executing the method:
- First, the object looks in its own instance methods.
- Second, it looks in the list of instance methods that all objects of that class share.
- Third, in each of the included modules of that class, in reverse order of inclusion.
- Fourth, it looks in that class’s superclass.
- Fifth, in the superclass’s included modules, all the way up until it reaches the class Object.
- Sixth, if it still can’t find a method, the very last place it looks is in the Kernel module, included in the class Object.
- Finally, it calls method_missing (if defined in the class), else throws up the NOMethodError exception.
This entire tracing that the object does is called the method lookup path.
Examples
Calling Defined and Undefined Methods
class A // creating a class 'A' def say // defining a method 'say' puts " say Hi " // body of method say end end
Creating the object of the class
a=A.new // object of the class => #<A:0x2a082e0> //object id
Calling the defined method
a.say // defined method => say Hi // returned result
Calling the undefined method
a.sayhi // undefined method sayhi NoMethodError: undefined method `sayhi' for #<A:0x2a082e0> // the NoMethodError is raised
method_missing Implementation[2]
class A def say puts " say hi " end def method_missing(m,*args,&block) // defining method_missing puts " This method does not exist" // body of method_missing end end
Calling a method that is not defined
a=A.new a.sayhi // calling the undefined method sayhi with no arguments => This method does not exist // this result returned when method_missing is executed
When the object 'a' traces its method lookup path for a matching method 'sayhi', upon failure it resorts to method_missing and the body of method_missing is executed.
Sometimes when a class has many methods that do generally the same kinds of functionality, and the programmer is not sure in advance which methods the user will call since there are so many of them, and all of them are similar, writing code for all of the methods seems futile. In these situations method_missing can be defined to take care of these cases. The below 'Generic Handler' example implements this.
Passing Parameters to an Undefined Method Call
class A def add(a,b) a+b end def method_missing(name,*args,&block) // the method_missing is defined and the *args parameter accepts all the parameters passed during the method call puts “You have typed the method name wrong and these were the parameters passed ; #{args[0]}, #{args[1]}” end end
The passed parameters are stored in the array 'args' and can be accessed like a normal array
Calling the defined method
a.add(1,2) // calling the defined method add and passing the parameters (1,2) => 3 // result
Calling the undefined method
a.adds(4,2) // calling the undefined method adds and passing the parameter (4,2) => You have typed the method name wrong and these were the parameters passed; 4, 2
The user made a genuine mistake by typing 'adds', but this method is not defined. When the 'adds' method with parameters is called, the object 'a' tries to match the method in the method lookup path. Upon failure it invokes method_missing, the args are passed, stored in the array 'args' and the body of method_missing is executed.
Converting Numbers from Roman Representation to Integer Representation end
Calling the undefined methods
r= Roman.new r.vii r.xxix r.xxiv r.xxvi
The Output
=> 7 => 29 => 24
method_missing to Log Method Calls[4]
Another application that makes use of method_missing could be a simple logger used for debugging purposes. Many times, it may be required to log the trace of called methods and provide information such as: called method-name, arguments, return type. It can be tedious to repeat this part of code in every method. A simple solution to this problem can be obtained using method_missing as:
class SimpleCallLogger def initialize(o) @obj = o end def method_missing(methodname, *args) puts "called: #{methodname}(#{args})" a = @obj.send(methodname, *args) puts "\t-> returned: #{a}" return a end end
This program makes use of method_missing in a way that it wraps around called method to output the logging information on entry and on exit, it logs the return type. Further, method_missing intercepts the method call and forward it to internal object with ‘send’ method of ruby. Hence, this use of method_missing acts as wrapper.
Generic Handler
class NoBar def method_missing(methodname, *args) define_method(:bar) if "bar" == methodname.to_s define_method(:nobar) if "nobar" == methodname.to_s end end
This is an example of using method_missing as a generic handler to handle when a calling method is not exist. You can use missing_method to dynamically create a method at a runtime.
Advantages of method_missing
- In addition to specifying the error messages for the undefined methods, method_missing provides a more dynamic behavior in the programming environment.
- If we are unfamiliar with the usage of the objects we created, then using method_missing is a good technique.
- Handles problems at runtime.
- Define's a generic method_missing and handle's any undefined method, a big advantage over Java. In Java, when you call an undefined method, the program will not compile.
- method_missing falls under the general technique of meta-programming. Employ meta-programming in missing_function to write an another function to handle the call.
Disadvantages of method_missing
- Slower than conventional method lookup. Simple tests indicate that method dispatch with method_missing is at least two to three times as expensive in time as conventional dispatch.
- Since the methods being called never actually exist—they are just intercepted at the last step of the method lookup process—they cannot be documented or introspected as conventional methods can.
- method_missing restricts compatibility with future versions of an API. Introducing new methods in a future API version can break users' expectations.
Key Points
- In the following example, if within method_missing() we define an undefined method, we get a stack level too deep error message.
class A @@i = 0 def method_missing(method_id) puts "In Method Missing #{@@i}" @@i += 1 self.fun end end
a = A.new a.foo
Output
The result is a 'stack level too deep' error.
When the 'foo' method is called, after no method match the method_missing is run and this block has a method 'self.fun' that is undefined. Here when the program execution encounters 'self.fun' it once again calls method_missing. This goes on in an endless loop till the stack memory becomes full.
- Ruby knows method_missing( ) exists, because it's a private instance method of 'BasicObject' that every object inherits. The BasicObject#method_missing( ) responds by raising the NoMethodError. Overriding this method_missing( ) allows you to call methods that don't really exist.
- If method_missing is only looking for certain method names, don't forget to call the super keyword if you haven't found what you're looking for, so that the other superclass' method_missing can handle it.
- obj.respond_to? function returns 'true' if the obj responds to the given method. So if you want to know whether your class will respond to a function you can use respond_to? to know the answer. But if method_missing() is used, the output may not be what you expect.
class A
def method_missing(method_id) puts "In method_missing" end end
a = A.new puts a.respond_to?(:foo) a.foo
Output
false In method_missing
Similar functionality in other languages
Method missing, one of the dynamic features of Ruby, is not a feature that is unique to Ruby. It exists in Smalltalk, Python, Groovy, some Javascripts and most CLOS (Common Lisp Object System)extensions. In this section we look at the few such similar implementations in other languages. The table below gives different ways the functionality related to method_missing is handled in other languages.[5]
Construct Language AUTOLOAD Perl AUTOSCALAR, AUTOMETH, AUTOLOAD... Perl6 __getattr__ Python method_missing Ruby doesNotUnderstand Smalltalk __noSuchMethod__(1) CoffeeScript, JavaScript unknown Tcl no-applicable-method Common Lisp doesNotRecognizeSelector Objective-C TryInvokeMember(2) C# match [name, args] { ... } E the predicate fail Prolog forward Io
(1) supported by Firefox
(2) only for dynamic objects
object.__getattr__(self,name)
This method is.
class Roman(object):
def roman_to_int(self, roman): # implementation here end
def __getattr__(self, name): return self.roman_to_int(name) end
>>> r = Roman()
>>> r.iv
4
In SmalltalkLanguage When a receiver is asked to perform a method that is unknown to it, then a run-time complaint is issued which is #doesNotUnderstand. When a Smalltalk object is sent a message for a method it has not defined, the runtime system turns the message-send into an object and sends #doesNotUnderstand: to the original receiver with this message-send object as argument. By default the #doesNotUnderstand: method raises an exception, but the receiver can override it and implement it in a way that he sees fit.
- Other languages
JavaScript also has a method which has an implementation similar to that of method_missing and that is "noSuchMethod". The limitation of this method is that it is only supported by Firefox/Spidermonkey.
Similarly, Perl has an AUTOLOAD method which works on subroutines & class/object methods.
Patterns of method missing[9]
Now that we have covered all the important areas about method missing including the advantages, the disadvantages and key points related to its functionality, it would be appropriate to know about the different ways method_missing is used and what are the consequences of its use.
- Providing Debug information on Failure
Well as method missing is called when there is no object to handle the method being called, we can use method_missing to include more information about the reasons for it being called, i.e. to say that we can provide users with more information about the error messages and hence make the life of the programmers easy and provide a faster way to solve the bugs.
- Encode parameters in method name
Instead of sending method as explicit parameters, another method is to use the name to encode parameters. Find below a Rails-style find expression: Person.find_by_name_and_age("ABC",30) Another way of writing the same: Person.find_by(:name => "ABC", :age => 30)
The disadvantage of this is that creating such kind of API's make it difficult to debug and maintain the application.
- Builders
The idea of a builder is that you use Ruby’s blocks and method_missing to make it easy to create any kind of output structure. You create a builder object and then send it messages and it responds to the messages by building up a data structure based on those messages. For example, the following code
builder = Builder::XmlMarkup.new("", 2) puts builder.person { name("ABC") phone("12345", "local"=>"yes") address("Raleigh") }
will print
<person> <name>ABC</name> <phone local="yes">12345</phone> <address>Raleigh</address> </person>
Here we have defined a method_missing method and it handles any undefined method and adds the name of the method to the XML markup that is being built. Code blocks are used to capture the nested nature of the XML. The result is a very natural way to programmatically generate XML markup. Also the cumbersome task of closing the tags and escaping rules are taken care of for us.
- Accessors
The inversion of the builder pattern is to use a parser that goes through the XML document and then allow access to the elements by using method_missing.
- Test Helpers
Different kind of test helpers can be created using method_missing. Many of the open source Ruby projects implementations of method missing are found in the tests.
Conclusion
method_missing is a very powerful feature of Ruby and as is the way with powerful things it can help you a great deal if used properly but it can make things a lot harder for you if implemented incorrectly. It is a kind of a feature which should be used sparingly, if not at all. And there are things that needs to be taken into consideration, as mentioned in the Key Points section above, if method_missing is to be implemented.
References
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
- ↑
Further Suggested Reading
-
-
-
-
-
- | http://wiki.expertiza.ncsu.edu/index.php/CSC/ECE_517_Fall_2012/ch1_1w22_an | CC-MAIN-2017-17 | refinedweb | 2,177 | 60.45 |
Ok, I would like to use a regular expression that I create dynamically to search and replace words in a file. For instance:
import re
fl = re.compile('abc|def|ghi')
ts = 'xyz abc mno def'
n = fl.search(ts)
print n
How would I find all matches in the string? So far, "n" only returns the first occurence.
Or, maybe I'm going about this wrong anyway. What I would like to do is find each match that is in my re:compile, and replace it with additional text appended to that match. So, my text string would end up being 'xyz match=abc mno match=def', as 'abc' and 'def' are items in my re. In my real world example, my re:compile string is created dynamically and will have anywhere from 10 to 50 items (this part of the code works fine), but I'm having a hard time making a one to one relationship with the item in the re:compile with what is actually found. Since regular expressions are supposed to be fast, I didn't explore using a loop yet, but maybe that's the only way. Advice? | https://www.daniweb.com/programming/software-development/threads/131590/search-and-replace-with-re-compile | CC-MAIN-2016-50 | refinedweb | 194 | 70.23 |
[
]
Ilya Berezhniuk commented on HARMONY-3864:
------------------------------------------
I've found a cause of a deadlock. I looked through HARMONY-4195 design problems list and did
not found corresponding problem.
So I'll better describe this deadlock.
When deadlock happens, we have the following state:
- There are several (21) thread groups chained together as a list so as every group is a children
to another
- Top thread group contains several (20) threads which are ready to be started
- Bottom thread group contains one thread ready to finish its lifecycle
- Other thread groups contain one subgroup and contain no threads
Then threads execution goes in following order:
- One thread in top group starts and executes setMaxPriority for its group; it involves recursive
call to synchronized nonsecureSetMaxPriority for its group and all subgroups. For example,
it can perform nonsecureSetMaxPriority 10 times - so 10 top groups are loced by top thread
- Last thread in bottom thread group continues to execute and invokes Thread.detach(..) method,
which invokes synchronized group.remove(this). If thread groups are daemon groups, it involves
removing group from its parent group with synchronized parent.remove(..), because group becomes
empty; parent.remove(..) involves removing group from next parent, and so on.
- In some place in group list upper group tries to invoke nonsecureSetMaxPriority for child
group and stops on lock, because underlying group is already locked by detaching thread. Detaching
thread tries to perform parent.remove(..) and stops on lock obtained by top thread. So these
two treads are deadlocked.
- Other top threads are waiting on 'this' lock for top thread group.
Looks like problem appears because of over-synchronization on 'this'. setMaxPriority should
possibly use separate lock, or detaching threads should possibly use synchronized flag preventing
further recursion in nonsecureSetMaxPriority.
> [drlvm][thread] VM sometimes hangs after ThreadGroup.setMaxPriority()
> ---------------------------------------------------------------------
>
> Key: HARMONY-3864
> URL:
> Project: Harmony
> Issue Type: Bug
> Components: DRLVM
> Environment: Windows and Linux
> Reporter: Vera Petrashkova
> Assignee: weldon washburn
> Priority: Minor
>
> The following test demonstrates that sometimes VM hangs on some thread after invocation
of
> the method ThreadGroup.setMaxPriority(int)
> ------------------ThreadGroupTest.java-----------------
> import java.io.*;
> public class ThreadGroupTest {
> public static int nmbTG = 20;
> public static int nmbTH = 20;
> public static boolean isDaemon = false;
> public static boolean setPrior = false;
> public static void main(String[] args) {
> if ((args.length >= 1) && "true".equals(args[0])) {
> isDaemon = true;
> }
> if ((args.length >= 2) && "true".equals(args[1])) {
> setPrior = true;
> }
> for (int t = 0; t < 100; t++) {
> new ThreadGroupTest().test();
> System.err.println("Step: "+t+" finished");
> }
> System.err.println("Test passed");
> }
> public void test() {
> ThreadGroup roottg = new ThreadGroup("root-tg");
> roottg.setDaemon(isDaemon);
> Thread_t [] threads1 = new Thread_t[nmbTH];
> for (int i = 0; i < nmbTH; i ++) {
> threads1[i] = new Thread_t(roottg,"roottg");
> }
> ThreadGroup [] tg = new ThreadGroup[nmbTG];
> Thread_t [][] threads = new Thread_t[nmbTG][nmbTH];
> for (int i = 0; i < nmbTG; i ++) {
> tg[i] = new ThreadGroup(i == 0 ? roottg : tg[i-1], Integer.toString(i));
> for (int j = 0; j < nmbTH; j++) {
> threads[i][j] = new Thread_t(tg[i],Integer.toString(j));
> }
> }
> for (int i = 0; i < nmbTG; i ++) {
> for (int j = 0; j < nmbTH; j++) {
> threads[i][j].setDaemon(tg[i].isDaemon());
> threads[i][j].start();
> }
> }
> for (int i = 0; i < nmbTH; i ++) {
> threads1[i].start();
> }
> for (int i = 0; i < nmbTG; i++) {
> for (int j = 0; j < nmbTH; j++) {
> try {
> threads[i][j].join();
> } catch (Throwable e) {
> e.printStackTrace();
> }
> }
> }
> for (int i = 0; i < nmbTH; i ++) {
> try {
> threads1[i].join();
> } catch (Throwable e) {
> e.printStackTrace();
> }
> }
> }
> }
> class Thread_t extends Thread {
> ThreadGroup tg;
> String id;
> public Thread_t (ThreadGroup tg, String n) {
> super(tg, n);
> this.tg = tg;
> this.id = n;
> }
> public void run() {
> int mp = tg.getMaxPriority();
> if (ThreadGroupTest.setPrior ) {
> tg.setMaxPriority(2);
> }
> String[][] str = new String[10][100];
> for (int i = 0; i < str.length; ++i) {
>
> for (int j = 0; j < str[i].length; ++j) {
> str[i][j] = "" + i + "" + j;
> }
> }
> }
> }
> ------------------------------------------------------------------
> Run ThreadGroupTest several times
> I can not reproduce the issue on Windows for non daemon threads.
> But it is reproducible for daemon threads on Windows and for both kinds of thread on
Linux.
> java -cp . ThreadGroupTest true true
> Apache Harmony Launcher : (c) Copyright 1991, 2006 The Apache Software Foundation or
its l
> icensors, as applicable.
> java version "1.5.0"
> pre-alpha : not complete or compatible
> svn = r537729, (May 14 2007), Windows/ia32/msvc 1310, release build
>
> Step: 0 finished
> Step: 1 finished
> Step: 2 finished
> Step: 3 finished
> Step: 4 finished
> Step: 5 finished
> Step: 6 finished
> Step: 7 finished
> Step: 8 finished
> Step: 9 finished
> Step: 10 finished
> Step: 11 finished
> Step: 12 finished
> java -cp . ThreadGroupTest false true
> Apache Harmony Launcher : (c) Copyright 1991, 2006 The Apache Software Foundation or
its licensors, as applicable.
> java version "1.5.0"
> pre-alpha : not complete or compatible
> svn = r537729, (May 14 2007), Linux/ia32/gcc 3.3.3, release build
>
> Step: 0 finished
> Step: 1 finished
> Step: 2 finished
> This bug causes the failure of the reliability test
> api.kernel.threadgroup.EnumerateTest
> from
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/harmony-commits/200706.mbox/%3C16667633.1182259715867.JavaMail.jira@brutus%3E | CC-MAIN-2015-27 | refinedweb | 836 | 57.16 |
| Join
Last post 06-27-2006 12:23 PM by John de Vashon. 60 replies.
Sort Posts:
Oldest to newest
Newest to oldest
Thanks David, Will it list new content from your Articles Module?
AerosSaga wrote:
...the namespace shuffle;)
Ok, locked and loaded it. Now, here are some suggestions, most of which you probably want and know about, but since you asked
1. Might be nice to be able to use a module options control to allow storage of the module's settings.2. Modules settings that would complement the operation: - Number of Items to display - DateCutoff (don't display items older than this date)3. Allow syndication.4. Include the pubDate of the item.5. Include the author (username) of the item6. I know that the module title is part of the returned title, due to the implementation requirements for the search, but it would be nice to substring it out, and keep it seperate. Then, you could display an item like say this:
DNN Version v3.0.13 Released by Diesel 'Dale' Fuego Published: Mon, 02 May 2005 14:00:00 GMT The newest version of DNN is available here. Go get it! Links provided by MyCoolSite.com
7. Might consider using a custom sql procedure to pull the items out, which would free you to use the module settings at your will.
8. Limit the amount of text allowed in the description to maybe 100 or 150 characters. Also, might want to weed out any images from the description if there are any, it makes the display look better IMHO.
Ok, those are some things. I'm actually doing something similar, but I like your idea, keep working on it, we need this type of module, don't we!
Thanks
Advertise |
Ads by BanManPro |
Running IIS7
Trademarks |
Privacy Statement
© 2009 Microsoft Corporation. | http://forums.asp.net/p/880392/907096.aspx | crawl-002 | refinedweb | 305 | 72.97 |
-- | , next, nextN, rest, closeCursor, isCursorClosed, -- ** (..), Server(..)) import Data.Bson import Data.Word import Data.Int import Data.Maybe (listToMaybe, catMaybes) import Data.UString as U (dropWhile, any, tail, unpack) :: (Server, MonadMVar m) => Access m instance (Context Pipe m, Context MasterOrSlaveOk m, Context WriteMode m, Throw Failure m, MonadIO' m, MonadMVar m) => Access m newtype Action m a = Action (ErrorT Failure (ReaderT WriteMode (ReaderT MasterOrSlaveOk (ReaderT Pipe m))) a) deriving (Context Pipe, Context MasterOrSlaveOk, Context WriteMode, Throw Failure, MonadIO, MonadMVar,) -- | A connection failure, or a read or write exception like cursor expired or inserting a duplicate key. -- Note, unexpected data from the server is not a Failure, rather it is a programming error (you should call 'error' in this case) because the client and server are incompatible and requires a programming change. data Failure = ConnectionFailure IOError -- ^ TCP connection ('Pipe') failed. Make work if you try again on the same Mongo 'Connection' which will create a new Pipe. | CursorNotFoundFailure CursorId -- ^ Cursor expired because it wasn't accessed for over 10 minutes, or this cursor came from a different server that the one you are currently connected to (perhaps a fail over happen between servers in a replica set) | QueryFailure database (if server is running in secure mode). Return whether authentication was successful or not. Reauthentication is required for every new pipe.Access (Database db) col = U.any (== '$') col && db <.> col /= "local.oplog.$main" -- * Selection data Selection = Select {selector :: Selector, coll :: Collection} deriving (Show, Eq) -- ^ Selects documents in collection that match selector = [P) where batchSize' = if batchSize == 1 then 2 else batchSize -- batchSize 1 is broken because server converts 1 to -1 meaning limit 1 queryRequest :: Bool -> 'CursorNotFoundFailure'. 
Note, a cursor is not closed when the pipe is closed, so you can open another pipe to the same server and continue using the cursor.Closed cursor = do CS _ cid docs <- getCursorState cursor return (cid == 0 && null docs) -- ** Group -- | Groups documents in collection by key then reduces (aggregates) each group data Group = Group { gColl :: Collection, gKey :: GroupKey, -- ^ Fields to group by gReduce :: Javascript, -- ^ @(doc, agg) -> ()@. The reduce function reduces (aggregates) the objects iterated. Typical operations of a reduce function include summing and counting. It takes two arguments, the current document being iterated over and the aggregation value, and updates the aggregate value. gInitial :: Document, -- ^ @agg@. Initial aggregation value supplied to reduce gCond :: Selector, -- ^ Condition that must be true for a row to be considered. [] means always true. gFinalize :: Maybe Javascript -- ^ @agg -> () | result@. (@doc -> key@) returning a "key object" to be used as the grouping key. Use KeyFAccess m) => Group -> m [Document] -- ^ Execute group query and return resulting aggregate value for each distinct key group g = at "retval" <$> runCommand ["group" =: groupDocument g] -- ** MapReduce -- | Maps every document in collection to a list of (key, value) pairs, then for each unique key reduces all its associated values to a single result. There are additional parameters that may be set to tweak this basic operation. data MapReduce = MapReduce { rColl :: Collection, rMap :: MapFun, rReduce :: ReduceFun, rSelect :: Selector, -- ^ Operate on only those documents selected. Default is [] meaning all documents. pipe only, however, other pipes may read from it while the original one is still alive. Note, reading from a temporary collection after its original pipe. The function must call @emit(key,value)@ at least once, but may be invoked any number of times, as may be appropriate. type ReduceFun = Javascript -- ^ @(key, [value]) -> value@. 
The reduce function receives a key and an array of values and returns an aggregate result value. The MapReduce engine may invoke reduce functions iteratively; thus, these functions must be idempotent. That is, the following must hold for your reduce function: @reduce(k, [reduce(k,vs)]) == reduce(k,vs)@.Access m) => MapReduce -> m Cursor -- ^ Run MapReduce and return cursor of results. Error if map/reduce fails (because of bad Javascript) -- TODO: Delete temp result collection when cursor closes. Until then, it will be deleted by the server when pipe closes. runMR mr = find . query [] =<< (at "result" <$> runMR' mr) runMR' :: (DbAccess $ "mapReduce error:\n" ++ show doc ++ "\nin:\n" ++ show mr -- * Command type Command = Document -- ^ A command is a special query or action against the database. See <> for details. runCommand' :: Reply) -- ^ Send notices and request as a contiguous batch to server and return reply promise, which will block when invoked until reply arrives. This call will throw 'ConnectionFailure' if pipe fails on send, and promise will throw 'ConnectionFailure' if pipe fails on receive. call ns r = do pipe <- context promise <- mapErrorIO ConnectionFailure (P.call pipe ns r) return (mapErrorIO ConnectionFailure promise) {-. -} | http://hackage.haskell.org/package/mongoDB-0.9/docs/src/Database-MongoDB-Query.html | CC-MAIN-2016-40 | refinedweb | 753 | 55.34 |
In 2018, Ryan Dahl gave a talk titled "10 things I regret about Node.JS" - and at the end he introduced a new runtime called Deno. Before we get into Deno, let's talk about why Ryan might have wanted a new runtime in the first place.
What Node lacked
In the talk, Ryan went over a few regrets he had with the Node ecosystem, and I love how he addressed all of it because with time, technologies change - And in the case of Node, the ecosystem around it had changed drastically. Deno solves a few important issues that Node has, and this is how.
Node has access to essential System Calls
Node Programs can write to Filesystems and related networks because in the original Node, which was built in C++ by building a wrapper (of sorts) around V8 engine, had skipped some important security functions. This, I imagine is because V8 is a secure, solid sandbox, but it is to be used inside of Chrome (or whatever other browsers implement it), but Node can be used as a CLI tool. Node files could have access to a lot of essential system calls and they could, and have resulted in malicious behavior.
crossenv malware on the npm registry
()
The devs dropping out Promises
Node was designed before JS introduced the concept of Promises or Async/Await. Node instead found a way around promises with EventEmitters, and a lot of APIs are built around this - Sockets and HTTP for example. Async/Await is amazing when you consider how ergonomically handy it is to use. Emitters caused a lack of well-defined protocols to deal with backpressures in streams. While this was okay for some streams, in other cases it causes a buildup, like when the receiving process is slower than sending - eg TCP, MQTT. File read/write (Write is slower than Read). In modern JavaScript, Promises provide the delegation in the form of abstraction, but Node did not have this for its own APIs - and much newer Async APIs are becoming less compatible over time.
Node Package Manager is clunky
Package.JSON is a handy, nifty little file that helps you install your NPM packages on a new system in a quick function - But package.JSON has its own problems.
Package.JSON aimed to create a local machine of sorts for Node in a folder, but it took a lot of time, was heavy, and usually ran into problems out the box. Package.JSON is also very cluttered with metadata.
Deno does not have a package manager! Deno relies on URLs to host and import packages, which I am assuming will be through a CDN, thus we can take advantage of caching! Some people in the Deno community are also trying to have a Go-like dependency handling: Compiling the program into an executable that you can run without external dependencies - But it's not a thing yet.
The Node Build System hasn't aged well
Node uses GYP Build System, which is very complicated and somewhat broken. You can read a comparison of GYP to CMake Here -
cMake is essentially a Unix system tool, and it is not cross-platform: Thus GYP made sense at the time. But even Chromium moved from GYP to GN, another build system which was 20x faster for Chromium's use case. This is one of Dahl's biggest regret. Node is one of the only remaining GYP users.
Out of the box TypeScript Support
TypeScript is amazing - Optionally static typing and Type interfaces are two of the best things about TypeScript. But setting up TS with Node is a pain: You need to install dependencies, you need to configure your tsconfig.json, you have to update package.json - It's all too much. With deno, it's out of the box, no additional tooling required.
Explicit is better than implicit
For example, no .JS tags while importing a module!
It is one of my biggest problems with Node, and Ryan mentioned that as well. It is needlessly less explicit. It is also unnatural: Browsers need you to have.JS extensions. I can understand where this came from, but we can also see how it is broken.
Is Node really dead?
No, I was being sensationalist. Node will be alive for years to come since a lot of websites are securely built in Node, it is awesome and has a strong community around it. Small-time projects might see a shift to Deno - Personally, I have a Supply Chain project where I might use Deno.
It is less clunky, lighter, more intuitive, and explicit. I also love how it uses Rust Crates and is not a monolith. I am not sure if Node was, but I think it was a monolith that directly called C++ APIs.
function hello(place: string): string { return `Hello ${place}`} console.log(hello('world'))
That is a simple 'hello world!' that runs like this
./deno hello.ts
Hello world
And a simple URL import would be
import { factorial } from "" console.log(factorial(10))
This is beautiful, don't you think?
🌺 Hey, I hope you enjoyed reading that article. I am Abhinav, editor @ The Crypto Element.making new friends!
Discussion
Just imagine what will happen when in five years time Ryan goes: “10 things I regret from Deno. I present you... ENDO!!”
I see many posts on dev.to about deno these days, but let me tell you one really important thing. Deno does not have all the packages that node has. And until the community duplicates those packages in deno, we will still use node.
Hi, sorry. There are quite a few packages on Deno and most packages can be easily ported from Node to Deno. I'm working on quite a few myself. Hope to see you on this side!
I really dont understand the "The Node Build System hasn't aged well" part, that is the best part of node, the one that make it a big thing.
I just keep thinking what would say PHP is it can speak and read that title. How many time people killed php with something new, and there it is still.
GYP as a build system is complicated :(
It works fine with Node, but there are Go Build System which is pretty solid, but then again, I am no Go Dev.
Of course, Node won't die, it is an essential piece of software. That was just the headline.
Hey Abhinav 👋🏼Appreciate you sharing your opinion! There's been a lot of talk about Deno lately, it's nice to hear what everyone thinks about it
Nothing is dead. Just shapeshifting. You can't imagine millions of projects will change their infrastructure written with NodeJS.
I was admittedly sensationalist, but in retrospect it comes off as annoying. | https://practicaldev-herokuapp-com.global.ssl.fastly.net/abhinavmir/node-is-dead-welcome-deno-4l96 | CC-MAIN-2021-04 | refinedweb | 1,135 | 73.17 |
When we are writing software that is deployed to different environments, we often have to create different configuration files for each environment. If we are using Maven, we can do this by using build profiles.
This blog post describes how we can create a build script that uses different configuration for development, testing, and production environments.
The requirements of our build process are:
- Each profile must have its own configuration file. The name of that configuration file is always config.properties.
- The configuration files must be found from the profiles/[profile name] directory.
- The development profile must be active by default.
Let’s start by taking a quick look at our example application.
The Example Application
The example application of this blog post has only one class that writes ‘Hello World!’ to a log file by using Log4j. The source code of the HelloWorldApp class looks as follows:
import org.apache.log4j.Logger;

public class HelloWorldApp {

    private static Logger LOGGER = Logger.getLogger(HelloWorldApp.class);

    public static void main(String[] args) {
        LOGGER.info("Hello World!");
    }
}
The properties file that configures Apache Log4j is called log4j.properties, and it is found from the src/main/resources directory. Our log4j.properties file looks as follows:
log4j.rootLogger=DEBUG, R
log4j.appender.R=org.apache.log4j.FileAppender
log4j.appender.R.File=${log.filename}
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
Our Log4j configuration file looks pretty ordinary, but it has one noteworthy detail: the value of the log4j.appender.R.File property is the placeholder ${log.filename}. Our build script must replace this placeholder with the actual file path of the log file. Let’s move on and find out how we can do that.
Creating the Profile Specific Configuration Files
Because we have to create a build script that uses different configuration in development, production, and test environments, we have to create three configuration files that are described in the following:
- The profiles/dev/config.properties file contains the configuration that is used in the development environment.
- The profiles/prod/config.properties file contains the configuration that is used in the production environment.
- The profiles/test/config.properties file contains the configuration that is used in the test environment.
These properties files configure the file path of the log file that contains the log of our example application.
The configuration file of the development profile looks as follows:
log.filename=logs/dev.log
The configuration file of the production profile looks as follows:
log.filename=logs/prod.log
The configuration file of the testing profile looks as follows:
log.filename=logs/test.log
We have now created the properties files that specify the location of our log file. Our next step is to create a build script that replaces the placeholder found from the src/main/resources/log4j.properties file with the actual property value. Let’s see how we can do that.
Creating the Build Script
We can create a Maven build script that replaces the placeholder found from the src/main/resources/log4j.properties file with the actual property value by following these steps:
- Configure the development, production, and testing profiles.
- Configure the locations of the properties files that contains the configuration of each Maven profile.
- Configure the location of our resources and enable resource filtering.
First, we have configure the development, production, and testing profiles in our pom.xml file. We can do this by following these steps:
- Create the development profile and configure it to be active by default. Specify a property called build.profile.id and set its value to ‘dev’.
- Create the production profile. Specify a property called build.profile.id and set its value to ‘prod’.
- Create the testing profile. Specify a property called build.profile.id and set its value to ‘test’.
We can finish these steps by adding the following XML to our pom.xml file:
<!-- Profile configuration -->
<profiles>
    <!-- The configuration of the development profile -->
    <profile>
        <id>dev</id>
        <!-- The development profile is active by default -->
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
            <!--
                Specifies the build.profile.id property that must be equal to the name
                of the directory that contains the profile specific configuration file.
                Because the name of the directory that contains the configuration file
                of the development profile is dev, we must set the value of the
                build.profile.id property to dev.
            -->
            <build.profile.id>dev</build.profile.id>
        </properties>
    </profile>
    <!-- The configuration of the production profile -->
    <profile>
        <id>prod</id>
        <properties>
            <!--
                Specifies the build.profile.id property that must be equal to the name
                of the directory that contains the profile specific configuration file.
                Because the name of the directory that contains the configuration file
                of the production profile is prod, we must set the value of the
                build.profile.id property to prod.
            -->
            <build.profile.id>prod</build.profile.id>
        </properties>
    </profile>
    <!-- The configuration of the testing profile -->
    <profile>
        <id>test</id>
        <properties>
            <!--
                Specifies the build.profile.id property that must be equal to the name
                of the directory that contains the profile specific configuration file.
                Because the name of the directory that contains the configuration file
                of the testing profile is test, we must set the value of the
                build.profile.id property to test.
            -->
            <build.profile.id>test</build.profile.id>
        </properties>
    </profile>
</profiles>
Second, we have to configure Maven to load the property values from the correct config.properties file. We can do this by adding the following XML to the build section of our POM file:
<filters>
    <!--
        Ensures that the config.properties file is always loaded from the
        configuration directory of the active Maven profile.
    -->
    <filter>profiles/${build.profile.id}/config.properties</filter>
</filters>
Third, we have to configure the location of our resources directory and enable resource filtering. We can do this by adding the following XML to the build section of our POM file:
<resources>
    <!--
        Placeholders that are found from the files located in the configured
        resource directories are replaced with the property values found from
        the profile specific configuration file.
    -->
    <resource>
        <filtering>true</filtering>
        <directory>src/main/resources</directory>
    </resource>
</resources>
We have now configured our build script to replace the placeholders found from our resources (files that are found from the src/main/resources directory) with the actual property values. Let’s move on and find out what this really means.
What Did We Just Do?
We have now created a build script that replaces the placeholders found our resources with the property values found from the profile specific configuration file.
In other words, if we compile our project by running the command: mvn clean compile -P test at the command prompt, the log4j.properties file found from the target/classes directory looks as follows (note the value of the log4j.appender.R.File property):
log4j.rootLogger=DEBUG, R
log4j.appender.R=org.apache.log4j.FileAppender
log4j.appender.R.File=logs/test.log
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
As we can see, the placeholder that was the value of the log4j.appender.R.File property was replaced with the actual property value that was read from the profiles/test/config.properties file.
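Conceptually, the filtering step is just ${...} token substitution driven by the active profile's property values. The following toy sketch illustrates the idea (it is not Maven's actual implementation; the class and method names are made up for this example):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResourceFilterSketch {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replaces every ${key} token with the matching property value. Unknown
    // keys are left as-is, which matches Maven's behavior for unresolved
    // placeholders.
    public static String filter(String line, Map<String, String> properties) {
        Matcher matcher = PLACEHOLDER.matcher(line);
        StringBuffer result = new StringBuffer();
        while (matcher.find()) {
            String value = properties.get(matcher.group(1));
            matcher.appendReplacement(result,
                    Matcher.quoteReplacement(value != null ? value : matcher.group(0)));
        }
        matcher.appendTail(result);
        return result.toString();
    }

    public static void main(String[] args) {
        // Simulate the properties loaded from profiles/test/config.properties.
        Map<String, String> testProfile = new HashMap<>();
        testProfile.put("log.filename", "logs/test.log");
        System.out.println(filter("log4j.appender.R.File=${log.filename}", testProfile));
        // prints: log4j.appender.R.File=logs/test.log
    }
}
```

Running the sketch against the placeholder line of our log4j.properties file produces exactly the filtered output shown above.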
Let’s move on and summarize what we learned from this blog post.
Summary
This blog post has taught us two things:
- If we need to use different configuration files in different environments, using Maven profiles is one way to solve that problem.
- If we need to replace placeholders found from our resource files with the actual property values, we can solve this problem by using resource filtering.
Moved the example application to GitHub.
very helpful, thanks
I created huge config.prop, works fine
Svetlana,
good to hear that I could help you!
Hi Petri, I don’t know exactly how relevant this is, but I am stuck on a problem where I want to move all the properties used in pom.xml to a pom.properties file. I did that, but Maven is not able to read it.
Hi Sunny,
Use the Properties Maven Plugin. The Usage page describes how you can read the properties of your build from a file.
I created exactly the same project structure but the config.properties file could not be found:
[INFO] Scanning for projects…
[INFO]
[INFO] ————————————————————————
[INFO] Building java_cukes 1.0-SNAPSHOT
[INFO] ————————————————————————
[INFO]
[INFO] — maven-clean-plugin:2.4.1:clean (default-clean) @ java_cukes —
[INFO]
[INFO] — maven-resources-plugin:2.5:resources (default-resources) @ java_cukes —
[debug] execute contextualize
[INFO] ————————————————————————
[INFO] BUILD FAILURE
[INFO] ————————————————————————
[INFO] Total time: 0.656s
[INFO] Finished at: Fri Sep 21 10:13:33 CEST 2012
[INFO] Final Memory: 3M/15M
[INFO] ————————————————————————
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-resources-plugin:2.5:resources (default-resources) on project java_cukes: Error loading property file ‘C:\Documents and Settings\…..\java_cukes\profiles\dev\config.properties’ -> [Help 1]
Any idea ? Regards.
Hi Javix,
as the error states, the Maven Resources plugin tries to look for the properties file from the file path ‘C:\Documents and Settings\…..\java_cukes\profiles\dev\config.properties’ and cannot find it. If the file path is correct, I would move the project to a directory which does not have white space in its path. I am not sure if this is an issue anymore but I remember facing similar problems in the past. Let me know if this did the trick.
Hey Petri,
Just a quick thanks for pointing out this feature. Our Maven POM profiles were starting to make the POM quite full & bloated. Being able to split a lot of that environment specific configuration out into separate files is exactly what we needed (and thought wasn’t possible via Maven).
So thanks a bunch!
Hi Lee,
It is great to hear that I could help you out.
Hello,
quick thx for this example, it helped me further to configured binaries.
I was wondering why you have duplicated each tag in the profiles.
I have moved the build tag out of the profiles.
Was there any particular reason for it?
kind regards
Hi,
Are you referring to the build.profile.id property? The reason why I specify it in each profile is that I think that it is cleaner than the solution described in this StackOverflow question.
If you know another solution, I would love to hear about it!
When the project is a web-app, I like to take the properties out of the application completely and put them inside the e.g. the Tomcat “context.xml” so the the app reads them from its environment, rather than having to remember to compile a different version.
In context.xml:
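A sketch of a Tomcat Environment entry that matches the JNDI lookup below (the type and value attributes are only illustrative):

```xml
<Context>
    <!-- Exposed to the application as java:comp/env/Status. -->
    <Environment name="Status" type="java.lang.String" value="production" override="false"/>
</Context>
```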
Read the values in the app:
JndiTemplate template = new JndiTemplate();
String sts = (String) template.lookup("java:comp/env/Status");
Cheers!
You might be interested in this plugin as an alternative:
It’s very configurable but has sensible defaults. Available from Maven Central:

<plugin>
    <groupId>com.ariht</groupId>
    <artifactId>config-generation-maven-plugin</artifactId>
    <version>0.9.10</version>
    <executions>
        <execution>
            <goals>
                <goal>generate</goal>
            </goals>
        </execution>
    </executions>
</plugin>
That plugin looks very interesting. I will have to take a closer look at it. Thanks for the tip!
Thanks a lot Petri. This is a really helpful page. Thanks for sharing this. I have one question with the described approach: will it be applicable to placeholders in XML files? I have a weblogic properties file where I am putting a placeholder that can be substituted as per the selected profile, but it’s not working.
Yes, you can use this approach with XML files as well. You just have to put your XML files to the correct directory (this example uses the src/main/resources directory). Also, if you include only specific files, you will have to change the configuration to include your XML file.
I have a few questions about your problem:
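If the resource configuration lists explicit includes, any additional file to be filtered has to be added there. A sketch (the weblogic.properties file name is a hypothetical stand-in for the file mentioned above):

```xml
<resource>
    <filtering>true</filtering>
    <directory>src/main/resources</directory>
    <includes>
        <!-- Only the listed files are filtered and copied. -->
        <include>log4j.properties</include>
        <include>weblogic.properties</include>
    </includes>
</resource>
```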
Hi Petri,
I have been reading your articles for the last week they are great! I tried to run this example though but the jar does not contain the log4j jar, do you know if I have to specify something else when I call the command?
Thanks
Hi,
Yes. You need to declare the Log4j dependency in your pom.xml file. Take a look at the dependencies section of this pom.xml file.
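For reference, the Log4j dependency declaration looks like this (the version number is only an example; use whatever version the example project declares):

```xml
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
```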
Thanks, I declared and it compiled, but then to build it you need to also include the assembly plugin as you explain in another of your tutorials, after that it was all good, thanks a lot.
You are welcome. I am happy to hear that you were able to solve your problem.
Hi, thank you for this tutorial, it’s very good. I had a small problem when I was running the project: Maven was throwing an exception saying “FileNotFoundException: /Users/xxxx/Documents/codigo/web/main/resources/profiles/dev/config.properties”. It was fixed by changing this
profiles/${build.profile.id}/config.properties
for this
src/main/resources/profiles/${build.profile.id}/config.properties.
Hi Omar,
You are welcome!
The example assumes that the profiles directory is found from the root directory of the project. If you create the profiles directory to some other directory, you need to change the value of that property.
However, If you create the profiles directory under the src/main/resources directory, Maven will add all profile specific configuration files to the created binary (unless you exclude them).
God bless you for this tutorial, it helped me.
Thank you for your kind words. I really appreciate them.
Since in my case (as in yours) there are only three files, I just configured it as three separate files: profiles/${build.profile.id}.properties, instead of three separate directories.
It works great – very helpful, thanks!
Thank you for your kind words. I really appreciate them. Also, I agree that adding separate directories is not necessary if you have only a few profile specific files.
Hi Petri,
How can I use the property from the config file in a Java class? (${} obviously is not working, and I am not using Spring.)
Thanks in advance.
Hi Mar,
Take a look at this blog post. It explains how you can read property files from the file system and from the classpath.
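If the link ever breaks: the core of that approach is plain java.util.Properties. A minimal sketch (the class name and file layout are illustrative, not from the tutorial):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class ConfigReader {

    // Load a .properties file from the file system.
    public static Properties fromFile(Path path) throws Exception {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(path)) {
            props.load(in);
        }
        return props;
    }

    // Load a .properties file from the classpath, e.g. one packaged into the
    // jar from src/main/resources after Maven filtering has run.
    public static Properties fromClasspath(String resourceName) throws Exception {
        Properties props = new Properties();
        try (InputStream in = ConfigReader.class.getClassLoader()
                .getResourceAsStream(resourceName)) {
            if (in == null) {
                throw new IllegalStateException(resourceName + " is not on the classpath");
            }
            props.load(in);
        }
        return props;
    }
}
```

Once loaded, individual values are read with props.getProperty("key").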
Thank you very much !
Petri,
I am currently implementing the same logic in a new project and now I have a problem: it couldn’t read the property value; it prints ${test}, not the value of the test property.
Do you have an idea what I do wrong or what I have forgotten to do?
Thanks
The problem was the type of the project: I had created the Maven project once with NetBeans and once with Eclipse, which is completely strange… anyway, I resolved the issue.
Hi,
Good to hear that you were able to solve your problem.
Hi,
I have two profiles in my pom.xml and want to build both of them together, i.e. I want my application to build the jboss7 as well as the was8.5 profile. But when I run the command, giving both profiles, only was8.5 gets executed, as it is written at the end of pom.xml.
Is there any way by which I can generate both EARs together?
Hi Petri,
Very nice information on profiles in Maven. I have also one requirement where I have to change pom.xml at run time. Is it possible in any way?
My Scenario is
I have 2 XML files (testng1.xml and testng2.xml) and I am running both XML files via pom.xml, but sometimes I have to run only one of them, so as of now we comment out the other file and run.
So, any idea or solution for this, so that based on a parameter it can run the required XML file?
Any help will be highly appreciated.
Hi,
It’s a bit hard to answer your question because I assume that the plugin you are using to run your tests might have support for your use case. Are you by any chance using the Maven Surefire plugin?
Thanks for this article, helped me in configuring Maven with Selenium.
You are welcome! I am happy to hear that this blog post was useful to you.
I tried using the example you provided, but in the final artifacts the config.properties values replaced the properties files defined in the main/resources folder within the jar created by the pom, not in the zip file as intended in assembly.xml.
I am looking to replace the contents of the properties files defined in my main/resources folder with config.properties, and then have clean install generate a zip file as defined in assembly.xml.
Hi Petri
Thanks a lot for the post. It was very useful to me.
I used same approach as suggested in the post. And tests run in command line as I expect.
But I’m not able to run any test in the IDE (IntelliJ 2016). It gives a “No tests were found. Empty test suite” error. If I comment out filters and testResources in the pom, then the IDE runs the tests.
I use surefire plugin.
Any idea what it could be?
thanks
Is there any way in which we can encrypt this property file?
I am using the above example. Instead of the logger file I am using server.port=8085, different in each environment file. But it’s not working. It always picks up the default port.
It’s hard to say what could be wrong because you didn’t share your POM file and the property files, but you said that your Maven build always uses the default port. Where to do you specify this default port?
Also, does your “real” property file contain the following line:
Hello sir, Your blog is very useful i’m also working on Maven Thanks it will be beneficial for us.
Thank you for your kind words. I really appreciate them.
Petri Kainulainen
Thanks Alot
You are welcome.
Below is my requirement
I have 4 components with the same profile and property file. These 4 components are not dependent on each other. There should be one common place to keep the property files, accessible to all 4 components. The same copy of the property file should be used by the 4 components.
Please suggest different ways to place/access the property files.
thank you
Hi,
I assume that you want to use the properties file for configuring the application (and not for passing properties to the Maven module). If so, you should take a look at this StackOverflow answer.
Hi Petri,
Thanks for quick response.
But I am not looking for application configuration. The property file has database (JNDI) details which are common to all four modules.
Currently I added the property file to my module as below, but cdt-model is not a dependency of my current module. Here ${env} is the profile:
../cdt-model/src/main/resources/${env}/CDT-${env}.properties
<!-- src/main/resources/${env}/CDT-${env}.properties -->
Suggest me different options for adding this property file.
What about the test suite for these property files? How can we mock it?
Hi,
I have typically created a new Maven profile that is activated when I run my tests. This way I can just create a test specific properties file and I don’t have to mock anything.
Also, if you are using Spring Boot and you annotate your tests with the @ActiveProfiles annotation, you can use the properties support provided by Spring Boot. For example, if your active profile is called integrationTest, you can create a properties file called application-integrationTest.properties, and Spring Boot uses this properties file when the integrationTest Spring profile is active (when you run your tests).
what is in the logs/dev.log file and what is it for? | https://www.petrikainulainen.net/programming/tips-and-tricks/creating-profile-specific-configuration-files-with-maven/ | CC-MAIN-2018-47 | refinedweb | 3,190 | 58.89 |
The world's most viewed site on global warming and climate change.
It would appear that GISS is ripe for a full 3rd party audit. Followed by a validation of their methods against actual data for sites where the values were recorded on paper and have been kept. That would seem to be an ideal exercise for some metrologists and statisticians.
Take that one step further: if the original records are not available, the custodians are not to be considered scientists at all.
I recall seeing books in the past here in Australia published by BOM with recorded temperatures for major towns and cities in various states, page after page of nothing but tabulated temperature records. I’d love to find one of these to be able to cross reference it against the ‘temperature records’ of today.
GISS should simply be closed. NASA is being refocused for space exploration, after all.
Every one of those points lies on the curve; exactly. Well that is great. The Temperature should be a curve of some shape or other.
And if it is a real curve and not a fake curve, then of course it MUST go through (exactly) every one of those accurately measured points. (and it does).
So maybe the curve is real after all. Now what if they made measurements more often; let’s say just one more measurement in that total time frame. Well that extra point of course MUST also fall on that curve, because we have now decided it must be a real curve.
Well that would also be true if we took one extra measurement halfway between the points shown on the graph. Well every one of those extra point must also lie on the exact same curve because we just said it is a real curve.
So no matter how many extra Temperature readings we take, the curve will not change. It already records every possible Temperature that occurred during the total time depicted.
But what can we predict from this real, accurate curve of Temperatures over the total time interval?
One thing is apparent; no matter how many measurements we make, we can never predict whether the very next measurement we make will be higher, or lower, or maybe exactly the same, as the measurement we just made. There is NO predictability of ANY future trend; let alone any future value.
Well there is only one problem with this conjecture..
But when we look at the data set of measured accurate Temperatures used to construct this real non fake curve, we find that there is not an infinite number of data points in the set; just a few dozen in fact.
So clearly the data set is not a proper sampled set of the plotted curve function. It is in fact grossly under-sampled, so not only can we not reconstruct what the real function is, we are under-sampled to such a degree, that we cannot even determine any statistical property of the numbers in the data set, such as even the average. It is totally corrupted by aliasing noise, and quite unintelligible.
So I was correct right at the outset.
Both of those curves are fake; and do not represent anything real that anybody ever observed or measured.
What did you say was the name of the chap(ess) who read that thermometer ??
G
This sort of behaviour by GISS is, in my opinion, at about the level of paedophile priests. The data handling equivalent of ‘kiddy fiddling’..
This is a complete fallacy! The frequency response of all glass thermometers band-limits the measured temperature variations to frequencies lower than a few cycles per minute. Furthermore, the decimation of daily mean temperatures into monthly averages shifts the effective cut-off frequency into the cycles per year range. The monthly averages are then further decimated into yearly data, narrowing the baseband range to half a cycle per year. Band-limited signals contain only finite power; thus there ARE limits to how much the slope can change month to month or year to year. And aliasing is by no means the overwhelming factor erroneously claimed here.
The real problem lies in the gross falsification of actual measurements through various ad hoc adjustments and the fragmentation of time series perpetrated in Version 3 of GHCN. Even the unadjusted (olive curve) does not show the average yearly temperature as indicated by actual measurements.
Dementia setting in? I recall that during the first week of March in Dallas sometime in the early ’60s, it hit 106F. Looking at an official record of Dallas Love Field temps the other day, the warmest day I could find for the period was 97F. It’s possible what I remember from the ’60s was actually a record temp at a non-official station, perhaps Meacham Field in Fort Worth. But what I wonder is whether that 106 figure has been reduced in the record books by “adjustments”?
george e. smith February 22, 2017 at 12:40 pm
Hi George, in 1916 it was someone called ‘J.K. Maclein’ (well, Mac-something; his handwriting is a bit difficult to read).
Well I can’t even imagine what 1sky1 is even talking about.
It matters not a jot what sort of thermometers are used to make Temperature measurements so long as they give accurate readings. I took it for granted that whoever made the measurements used to construct those graphs, took accurate readings.
So I have NO PROBLEM with the measured Temperatures, nor with whoever made those measurements.
The problem is with whoever was the total idiot who drew those two colored graphs.
A scatter plot of the original measured Temperature values would have made a wonderful graph.
For the benefit of 1sky1 and others similarly disadvantaged; a scatter plot is a plot of INDIVIDUAL POINTS on an X-Y graph or other suitable graphical axis system.
For the Temperatures measured by whomever it was, the Temperature is the Y value, and the presumed time of measurement is the X value.
Scatter plots can easily be plotted by anyone with a copy of Micro$oft Excel on their computer.
If the total idiot who drew those graphs had used Excel, (s)he could have made a wonderful scatter plot graph of those measurements.
Moreover, that person could even have had Excel construct a CONTINUOUS graphical plotted curve that passes through EVERY ONE of those measured data points EXACTLY; perhaps using some sort of cubic spline or other approximation.
Such a CONTINUOUS curve, would not necessarily be an exact replication of the real Temperature that was the subject of the original measurements; but it would be acceptably close to what that exact curve would have been, and points on such a fitted CONTINUOUS curve intermediate between the measured values, would have been close to what real value could have been measured at that time.
But the total idiot who drew these two graphs, chose to just connect the measured and scatter plotted points with straight line segments, that results in a DISCONTINUOUS NON BAND LIMITED INFINITE FREQUENCY RESPONSE FAKE GRAPH.
Apparently, 1sky1 doesn’t even know the difference between a CONTINUOUS function and a DISCONTINUOUS function.
The first one is band limited, and can be properly represented by properly sampled point data values.
The second one has no frequency limit whatsoever, so it cannot be properly represented by even an infinite number of sampled values.
I suggest 1sky1 that you take a course in elementary sampled data theory, before throwing around words like “fallacy” or “erroneously”, whose meanings you clearly don’t understand.
G
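Setting the rhetoric aside, the aliasing phenomenon itself is easy to demonstrate numerically. A minimal sketch (the frequencies are chosen purely for illustration):

```java
public class AliasDemo {
    public static void main(String[] args) {
        // A 0.9 Hz sine sampled once per second (Nyquist limit: 0.5 Hz)
        // produces exactly the same sample values as a -0.1 Hz sine:
        // the high frequency is "aliased" down and, from the samples alone,
        // the two signals cannot be told apart afterwards.
        for (int n = 0; n < 6; n++) {
            double fast  = Math.sin(2 * Math.PI * 0.9 * n);
            double alias = Math.sin(-2 * Math.PI * 0.1 * n);
            System.out.printf("n=%d  fast=%+.6f  alias=%+.6f%n", n, fast, alias);
        }
    }
}
```

The two columns printed are identical, which is why sampling below twice the highest frequency present loses information irrecoverably.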
Well I can’t even imagine what 1sky1 is even talking about.
At least you’ve got that much right, George! Since temperature is a continuous physical variable, any periodic sampling will generate a discrete time series. The crucial question is how well that sampled time-series represents the continuous signal. That’s what I address in pointing out the various band-limitations imposed by instruments and data averaging. (With well-sampled time-series, band-limited interpolation according to Shannon’s Theorem can always be employed to reconstruct the underlying continuous signal.)
In stark contrast, you’re apparently concerned with the purely graphic device of connecting the discrete yearly average data points by straight lines. That’s looking at the paper wrapping instead of the substantive information content. The notion of using some spline algorithm on an Excel scattergram of the time series (instead of Shannon’s Theorem) to obtain a continuous function speaks volumes. It shows that, despite throwing around the terminology of signal analysis, its well-established fundamentals continue to escape you.
Well I see it is a total waste of time trying to educate the trolls.
1sky1 simply doesn’t understand the fundamental concept the NO PHYSICAL SYSTEM is accurately described by a DISCONTINUOUS FUNCTION.
Ergo; by definition, a discontinuous graph purporting to be an accurate plot of some measured data from a real physical system, is FAKE. No real system can behave that way.
Gathering the data; of necessity sampled, is one issue. I have no problem with how this particular data was obtained.
How it is presented is a totally different issue that 1sky1 does not seem able to grasp.
Shannon’s theorem on information transmission is not even relevant to the issue. It’s a matter of interpolating real measured data with totally phony fake meta-data.
G
Well I see it is a total waste of time trying to educate the trolls.
Being “educated” by amateurs who are patently clueless about the rigorous theory of reconstruction of continuous bandlimited signals from discrete samples provides no intelligent benefit. On the contrary, it prompts senseless fulminations about “fake” graphs of well-sampled time series, based upon nothing more than the superficial impression of how the discrete data points are visually connected.
Well sky see if you can giggle yourself up a book that’s a bit more on the ball, than the one you linked to the contents pages of.
Nowhere in that entire text book, does it teach reconstruction of a band limited continuous function from properly sampled instantaneous samples of the function, by simply connecting the properly sampled data points, with discontinuous straight line segments. Such a process does not recover the original continuous function, which a correct procedure will do.
And your reference text is a bit of a Johnny-come-lately tome anyway.
Well as was Claude Shannon, whose writings on sampled data systems are about 20 years after the original papers by Hartley and Nyquist of Bell Labs, circa 1928.
I’m sure there are precedent papers from the pure mathematicians, preceding Hartley’s Law, and Nyquist’s Sampling Theorem, but then Cauchy and co, weren’t exactly in the forefront of communications technology as was Bell Telephone Laboratories.
G
It always surprises me that past measurements can be read so much better now, after 80 years, than they could at the time.
In 2097 GISS will probably record that we are hunting woolly mammoths today.
“In 2097 GISS will probably record that we are hunting woolly mammoths today.”
LMFAO – glad I didn’t have food in my mouth when I read that!
But ya never can tell:
This is the one thing I have never understood … and in my opinion should be the main sticking point.
Forget accurate temperature measurements….and everything else.
They run a algorithm every time they enter new temp data.
They say any adjustments to the new data…..is more accurate than the old data.
..so the algorithm retroactively changes the old data….every time they enter new data.
And yet every thing published about global warming is based on the temp history today.
…which will be entirely different tomorrow
We will never know what the temp is today…..because in the future it’s constantly changing
oddest thing about it all….the models reflect all of that past adjusting
and inflate global warming at exactly the same rate as all the adjustments
oddest thing about it all….the models reflect all of that past adjusting
and inflate global warming at exactly the same rate as all the adjustments
Not odd at all, they base their adjustments on the same theory used in the models.
Mosher has stated more than once, they create a “temp” field mostly based on Lat, Alt, and distance to large bodies of water, and the rest is just noise.
Remember when Nancy Pelosi said “We have to pass the bill before we can know what’s in the bill!”?? Well, climate science is like that…”We have to adjust the temperatures before we can know what the temperature is/was”.
Makes perfect sense. 🙂 *snark*
But we are hunting them today, there’s one over the way, it’s huge & grey & hairy, it eats children & it’s… oh sorry, it was a big red bus instead, dropping off some school kids!!! By mistake! sarc off!
School buses are sunflower yellow here in the states.
Sounds like your buses have liver damage.
Perhaps they’ve had too much bioethanol?
Actually, it seems that only thermometer data recorded around the year 1970 are accurate, according to GISS. They have reduced the readings prior to that date and increase them after. See climate4you for the details. I am now searching for a circa-1970 thermometer in my box of old stuff in the garage. It seems it is the only device that works ok, according to the world class scientists at GISS.
DHR; is this a fact? Could you show it. alf
brilliant dhr 🙂
Ha, ha, ha
Could someone develop an easy-to-follow description of this concept of “atmospheric energy?”
I understand that a temp reading is a measure of ambient temperature, and also of “atmospheric energy.”
To the degree that a thermometer, properly used, is not so great at measuring atmospheric energy as ambient temperature, it seems that atmospheric energy is measured by the use of at least two indicators: the local thermometer and some other indicator.
Apparently the other indicator or indicators are somehow simply not superior, since the thermometer is also needed. So, this other indicator and the thermometer are both flawed (as all measures are).
And, it seems that some kind of systematic bias in the thermometer can be determined based on this second indicator, although the second is acknowledged as flawed.
How does the logic go? What is the other indicator?
If the thermometer is wrong, isn’t it systematically wrong, and so all values ought to be changed the same amount? If an old thermometer was wrong, wouldn’t it be wrong every day for years? And if the new one is wrong, wouldn’t it be the same amount of wrong every day for years?
Is a thermometer more reliable at some temp ranges than others (barring extremes)?
–All of this does not add up.
Temperature is actually the incorrect metric for atmospheric heat energy as the amount of water vapor in the volume of air alters its ‘enthalpy’. The correct metric is kilojoules per kilogram and can be calculated from the temperature and relative humidity.
A volume of air in a misty bayou in Louisiana at an air temperature of 75F and a humidity of 100% holds twice the energy of a similar volume of air in Arizona at an air temperature of 100F and humidity close to 0%.
It is therefore incorrect to use atmospheric temperature to measure heat content, and a nonsense to average them. Averaging the averages of intensive variables like atmospheric temperature is meaningless. It is like an average telephone number or the average color of cars on the interstate: mathematically simple but completely meaningless.
Ian W, in my view you are right. The Enthalpy of a volume of air is easily calculated from the wet bulb temperature, the dry bulb temperature and the barometric pressure and I would bet that all three have been recorded at weather stations for a long time. What’s more you can safely average Enthalpy. I can’t help but wonder why this is not done.
@IanW: I was about to write a post on my blog on this. Another part of the delta energy content not directly aligned based on simple temperature measurement is the variance in the sensible heat of disparate materials for global surface measurements, and the high phase change heat content for water at constant temperature points as it transitions from solid to liquid to gas and back.
Everything you presented is correct.
The problem is measuring humidity. Imagine this!!
TY, this layman has been saying for many years we CAN’T compute a single temperature for the earth… we don’t have accurate measurements to average for starters, and as you posted an average is meaningless.
When the goal is to influence politicians, and the general populace, it is a bad idea to make them cranky by confusing them. You present your big picture in term they are familiar with, and thus believe they understand.
Here is my reply using only the average letter in the English language:
Mmmmmmmmmmm mmm mm mmmmmmm m mmmmm mmm mmmmmmm m mmmmm mm mmm mm mmmmmmm mmmm m mmmm!
Bill, we could compute such a temperature, the problem is that the error bars would have to be around +/- 20C or so.
@John Mauer
Thank you for opening this can of worms.
Do you have a reference for “GISS decided to modify the temperature data to account for perceived
faults in its representation of atmospheric energy”?
To amplify on thoughts by Forest Gardener, Ian W, RobR, and others, the measure of heat
energy is enthalpy. In joules per kilogram, the Bernoulli principle expression is
h = (Cp * T - 0.026) + q * (L(T) + 1.84 * T) + g * Z + V^2/2
Cp is heat capacity, T is temperature in Celsius, q is specific humidity in kg H2O/kg air, g is gravity,
L(T) is latent heat of water ~2501, Z is altitude, V is wind speed.
An interesting study is
Pielke shows that the difference between effective temperature h/Cp and thermometer temperature can be tens of degrees.
Classical weather reporting does not include the data necessary to calculate enthalpy to an accuracy better than several percent. This inaccuracy is greater than effects attributable to CO2. Hurricane velocity winds add single degrees of effective temperature, but modest winds can add tenths of degrees.
I speculate that part of the reason for the never ending adjustment of temperatures is an attempt to compensate for the inaccuracies inherent in temperature only estimates of energy.
For a more formal discussion of wet enthalpy,
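Leaving out the elevation and wind terms, the moist-air part of the formula above is easy to compute. A sketch using ASHRAE-style constants (the humidity value used below is an assumption, roughly saturation near 24 C at sea level):

```java
public class MoistEnthalpy {

    // Specific enthalpy of moist air in kJ/kg, keeping only the temperature
    // and humidity terms of the formula quoted in the thread:
    //   h ~= Cp*T + q*(L + 1.84*T), with Cp ~ 1.006 kJ/(kg K), L ~ 2501 kJ/kg.
    // tCelsius is in degrees C; q is specific humidity in kg water / kg air.
    public static double enthalpy(double tCelsius, double q) {
        return 1.006 * tCelsius + q * (2501.0 + 1.84 * tCelsius);
    }

    public static void main(String[] args) {
        // The bayou-vs-Arizona comparison from the comments above:
        System.out.printf("Humid 23.9 C (75 F) air: %.1f kJ/kg%n", enthalpy(23.9, 0.019));
        System.out.printf("Dry   37.8 C (100 F) air: %.1f kJ/kg%n", enthalpy(37.8, 0.0));
    }
}
```

Running it shows the humid 75 F air carrying far more energy per kilogram than the hotter but dry 100 F air, which is the point being made about temperature versus enthalpy.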
Absolutely right in my opinion (scientific opinion, might I add). At the least the data should be taken as is, with error bars. Every measurement tool has a measurement error. If it’s decided past measurements aren’t right, then the bars need expanding by some amount to cover the uncertainty. The errors then need carrying forward. If the errors are too large, you’ll end up with some total nonsense at the end that shows errors larger than the signal. That means your data isn’t good enough to draw any conclusions.
There must be a reason that many climate model outputs seldom feature error bars.
“I say that the data is not fit for calculating climatic trends.”
But it is fit for calculating climactic trends. ;/
“….This means that 20C measured 80 years ago is not necessarily the same as 20C measured today…..”
This is probably true, but since the heat island effect was less pronounced 80 years ago, it implies that temperature measurements back then were MORE accurate (or more representative of the “true” temperature) than today’s (concrete, asphalt, brick, glass influenced) measurements.
So if any corrections (i.e., data fudging ) are made, it should be made to TODAY’S temperature readings by LOWERING (artificially high) them !!!
But if this were done, well, there goes the millions of $$$$$ into the AGW scam and it would be game over.
NOAA has built a “climate reference network” of 50 stations out in the “boonies” where they don’t expect urbanization for 50-100 years. They refuse to publish those temps because they don’t fit their theory. The models, by the way, don’t adjust their error bars when they move the data from one model run to the next one, SO, at the end of 100 years of model runs, the ERROR BARS are plus or minus 14-28 degrees. That makes it very hard to find a gain of a few degrees in the 50+ degree error bands. PURE NONSENSE, to be polite.
Data for the NOAA Climate Reference can be found at The network is currently (I think) 114 stations in the lower 48, 18 in Alaska and 2 in Hawaii. On the site click on to “Graph the Data” within “National Temperature Comparisons.” Set the time interval to “previous 12 months”, set the start and end date you want and click “plot.” You will see that for the duration of the CRN program, roughly from 2005 on, US temps have not changed. The chart can be set back as far as 1889, but as we know, the prior temps have been reduced so are not meaningful. I expect the info from the CRN will make or break the warmists in another 10 or 20 years because the data cannot be adjusted.
Are you talking about USCRN?
Is there another reference network?
if the past location was essentially pristine then it should be unadjusted and all adjustments should be applied to the forward records based on the changes driving those adjustments …
I think the point of the article was that it read 20C eighty years ago and is still reading 20C. Exactly what changed so that the two readings of 20C are only adjusted from 80 years ago?
Beyond that, in the past, that 20C was calculated by averaging the daily max and daily min. With modern sensors, it’s the average of 24 hourly readings. (Some are taken more often. A few a little less often.)
Even without any other source of contamination, it still wouldn’t be possible to compare the past number with the current number because they weren’t arrived at in the same fashion.
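The point about (Tmin+Tmax)/2 versus the mean of 24 hourly readings is easy to illustrate with a synthetic diurnal cycle (the temperature curve below is invented purely for illustration; real station data will differ):

```java
public class DailyMean {

    // Synthetic asymmetric diurnal cycle: sharp afternoon peak, long cool
    // night. The shape is an assumption chosen only to make the point.
    static double tempAtHour(int h) {
        return 10 + 12 * Math.exp(-Math.pow((h - 15) / 3.0, 2));
    }

    // Historical convention: daily mean = (Tmin + Tmax) / 2.
    static double minMaxMean() {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (int h = 0; h < 24; h++) {
            double t = tempAtHour(h);
            min = Math.min(min, t);
            max = Math.max(max, t);
        }
        return (min + max) / 2;
    }

    // Modern convention: mean of 24 hourly readings.
    static double hourlyMean() {
        double sum = 0;
        for (int h = 0; h < 24; h++) sum += tempAtHour(h);
        return sum / 24;
    }

    public static void main(String[] args) {
        System.out.printf("(Tmin+Tmax)/2 = %.2f C, 24-hour mean = %.2f C%n",
                minMaxMean(), hourlyMean());
    }
}
```

For this curve the two conventions disagree by several degrees, so a series that silently switches from one to the other is not measuring the same thing before and after the switch.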
It is ALL error, which is why there are no bars.
If it is plotted on a graph it is in error.
g
If each and every WUWT follower would do this for their own area, I bet we could put together a great historical log of these false/unjustified “adjustments”! Considering how many people follow this great blog, it should be a “YUGE” list!!!
Sounds simple. Who would do the training and quality control?
We could follow the NOAA standard, and not require any.
Training and quality control? Once one knows how to download data and create a spread sheet…..We’re not building models, just recording data and its adjustments.
MarkW,
You always make me smile!
“Sheri
February 22, 2017 at 9:54 am
Training and quality control? Once one knows how to download data and create a spread sheet…..We’re not building models, just recording data and its adjustments.”
My experience says very few people know how to make good user documentation. Writers generally assume the user will know various things they themselves know, thus don’t cover, and they frequently, very frequently, use multiple different terms (labels, names, etc.) for the same thing, without ever telling the user that these different words are supposed to be the same.
The point is that no step of obtaining, calculating, or presenting the data would be obvious to the novice. For there to be any chance of getting people to participate it would be necessary to present extremely clear step by step instructions of how. Otherwise, the first result will be that most people quit in frustration when they can’t make the leap from step n to step n+1. The second result will be that many different, and not correct, procedures will be practiced by those who are persistent enough to be able to get SOMETHING done by trial and error.
“The result is an annual “temperature” which is a measure of the energy ”
It’s not a measure of anything. It’s an anti-physical value which, if used as a temperature in a physical calculation, is guaranteed to give the wrong result. Temperature is an intensive value (meaning: you don’t add such values) which cannot be defined for a system that’s not even in a dynamic equilibrium, let alone a thermodynamic one.
Amen, brother! Especially as the “energy” balance we’re looking for is a radiative balance (T^4), linear temperature averages have no meaning.
In the future, none of the current “record” temperatures will be records. They will have been adjusted downwards and the temperatures of the day will be “new” records. Adjusting the past is the worst of all possible methods to deal with correcting data.
What kind of maroon believes that min and max can be used to calculate a daily average temperature?
It’s at least the range of the day, which has more info than the average of the two numbers. Everything they do throws away data they don’t like.
I should have added this; it seems pertinent.
Bingo….
What if all the mins for the month were averaged, and the same for the max data – then average the monthly calculated mins/maxes?
Just asking. Not sure how GISS actually handles the min/max and I don’t know how ‘real science’ would use min/max.
This chap wrote several papers… “Why all models that use Global Mean Temperature as a reference to the air temperatures must be wrong”, Dr Darko Butina…. Sorry, it’s not linked.
There’s more than one type of average – the median is a perfectly respectable choice.
What’s the median of two measurements?
In the Global Summary of Days data set, mean is the average of min and max.
Except the median is not any kind of average, so it doesn’t count as an average; good or bad.
Hint: that’s why they call it the median, and NOT the average.
g
George, there are different types of average: the mean, the median and the mode (not sure if there are more).
What we need is an average of the averages 🙂
GISS is nothing but artefact. As Schmidt said himself, “what we choose to do with the data determines the temperatures”
That is very post-modernist thinking. It is what you think about the data, and how you feel about it, that establishes its identity.
It’s NOAA/NCEI that need to be stopped from making up data for places they have none. That is where the real problem is!
NOAA’s data warms as it moves down latitudes: when they lose stations at higher latitudes, warmer southern data warms up the missing station data further north.
BEST does this too, failing to cool the southern data that is used to make up data further north.
The “real problem(s)” are that it is clear, and was centuries ago when the concept was invented, that “climate” is a summary of weather. It is not a real phenomenon but a reified “idea.” Even paleoclimatologists mistakenly discuss Pleistocene “climate” as if it were real, though they have considerably greater justification. The other “real” problem is that we don’t know in detail what drives weather. The basics are in place, but there are critical shortcomings such as the effect of clouds, and more importantly the manner that storms cool the planet. If you couple the altitudes at which clouds form with the geometry of the average optical path for an LWIR photon at that altitude, most of the energy released during condensation and cloud formation will be radiated away from the planet. If you have ever watched squalls pass with virga dropping, you have seen a hint of this at work. Climate is more attended to because it is already “summarized” and appears effectively simpler, but every single bit of data that addresses “climate” is really weather data at the base.
Duster wrote, “If you have ever watched squalls pass with virga dropping.”
Right! But that’s not a refrigeration “-like” cycle, it is a classic phase-change refrigeration cycle, exactly like your refrigerator, except that the refrigerant is H2O instead of a CFC or HCFC.
Duster also wrote, “If you couple the altitudes at which clouds form with the geometry of the average optical path for an LWIR photon at that altitude, most of the energy released during condensation and cloud formation will be radiated away from the planet. “
I’ve wondered about that. It seems like half would radiate downward, so at most half could initially be headed toward space. But, of course, when the radiation is re-absorbed by other CO2 molecules, it just warms the atmosphere, usually at a similar altitude, so eventually it should get other chances to be radiated toward space. So, can you quantify “most of”?
Talk of an average temperature for the globe is meaningless. The world could be one with 15°C from pole to pole or one with -10°C at the poles and +40°C in the tropics and still have the same average.
We have just one ice sheet over the south pole at present, with a bit of sea ice in winter over the north pole. This is because we are living in a warm interglacial period.
These balmy times usually only last for around 10,000yrs and we are nearing the end of that period.
Within the next few hundred years the northern ice sheet will return, to stay with us for the next 100,000 yrs until the next interglacial, and so on and so on for a few million years until continents shift enough to allow for better penetration of ocean warmth into the polar night.
Water the great moderator of extremes.
More correctly … interstadial should replace interglacial as the term for warm periods within the colder stadials.
NASA Climate are history and so are all their bogus adjustments.
What matters now, is:
1. getting people we trust to compile a metric that really does tell us what is happening.
2. Adjusting for urban heating by REDUCING modern temperatures near settlement
3. Filling in all the gaps around the world
4. Creating a global quality assured system with ownership of the temperature stations so their compliance to required standards can be enforced.
How are the adjustments made? I.e., how is it decided that the adjustment should be x degrees? The graph just looks like someone decided it should be 0.4 deg here and 0.6 deg there. There must be some routine that works it out for each point (though if there were, I’d expect each adjustment to be different, perhaps). I’d love to know why, for instance, an adjustment of 0.9 deg was used in the ’60s and then it jumps to 0.6 deg in the ’70s… what changed to make that a realistic adjustment?
Above all though, it is now not data, is it? It’s some computed numbers which someone’s opinion has had a part in forging. If they want people to take it seriously there needs to be significant background work showing why the adjustments are valid.
Your contentions are totally wrong. GISS doesn’t calculate the monthly average. GISS doesn’t adjust the data as you claim. They get the monthly adjusted (and unadjusted) data from GHCN, a NOAA branch.
The GHCN adjustment is the difference between the yellow and the blue line in this chart:
The adjustment that GISS may do is the UHI adjustment, but that is zero at Falls Village because the site is classed as rural.
You can check this by comparing the black and the blue line; they are identical. The black line is what GISS finally uses.
It’s different in Central Park, NY City:
There, GISS actually reduces the trend of the adjusted GHCN data (to that of nearby rural stations).
Is that a bad thing?
Mark – Helsinki
February 22, 2017 at 6:51 am
GISS is nothing but artefact. As Schmidt said himself, “what we choose to do with the data determines the temperatures”
Averaging temperatures from different stations is the bad thing. Physically meaningless.
It doesn’t have to be: you average the SB flux, and then convert it back to a temp.
That added about 1.2F to temps, but otherwise doesn’t change much. I’m in the process of switching my code to do all temp processing like this.
micro6500 February 22, 2017 at 8:08 am
Do remember to account for enthalpy or you are wasting your time. The hydrologic cycle is what drives climate. That includes the thermohaline currents and clouds as well as humidity and the resulting wet and dry lapse rates.
Yes, I calculate enthalpy, and you should like this too.
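For readers wondering what an enthalpy calculation adds: the standard moist-air formula is h ≈ cp·T + Lv·q. The sketch below is a generic illustration of that formula (constants are textbook values; the station numbers are made up), not micro6500's actual code:

```python
# Moist enthalpy of near-surface air: h = cp*T + Lv*q (relative to 0 C).
# cp: specific heat of dry air, Lv: latent heat of vaporization,
# q: specific humidity (kg water vapor per kg moist air).
# Illustrative numbers only.

CP = 1005.0  # J/(kg K), dry air
LV = 2.5e6   # J/kg, latent heat of vaporization

def moist_enthalpy(temp_c, q):
    """Enthalpy in kJ/kg for air at temp_c (deg C) with specific humidity q."""
    return (CP * temp_c + LV * q) / 1000.0

# Two sites at the same temperature but different humidity carry
# very different energy -- one reason bare temperature averages
# across stations can mislead.
print(round(moist_enthalpy(30.0, 0.020), 2))  # humid: 80.15 kJ/kg
print(round(moist_enthalpy(30.0, 0.005), 2))  # dry:   42.65 kJ/kg
```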
Averaging anything is physically meaningless. Nothing physical pays any attention to it; well nothing physical can even sense an average if and when one happens.
G
If this year resembles 1983 and Powell and Mead fill up, he’ll look even more foolish.
Looks to me like it: “Wettest ’82-’83, it’s time,” he said. Northern California has received so much rain this year that the region is on pace to surpass the record rainfalls of 1982-83.
Rivers feeding Lake Powell are running at 149.53% of the Feb 22nd avg. Click for Details
Multiple wet years from the 1970s to the 1990s filled both lakes to capacity,[10][11] reaching a record high of 1225 feet in the summer of 1983.[11] In these decades prior to 2000, Glen Canyon Dam frequently released more than the required 8.23 million acre feet (10.15 km3) to Lake Mead each year, allowing Lake Mead to maintain a high water level despite releasing significantly more water than for which it is contracted.
I think you meant that comment for another thread
It would be worthwhile doing some absolute basic research before writing these kinds of articles. The adjustments shown have nothing to do with GISS. GISS appear to have made no changes to the GHCN input they use.
If you torture the data long enough, it will say whatever you want!!
Another excellent report by WUWT. Unfortunately this kind of report and discussion would make the average person’s eyes glaze over after about eleven seconds, which is good for the attention span of their postmodern education. Fortunately there are people who can actually outline the detail and further the argument.
Simply put:
Everybody has heard about the non stop stream of Fake News coming from the leftist establishment, so you won’t be surprised to hear that NASA is peddling Fake Data on Global Warming.
The same methodical manipulation of temperature trends by reducing older temperatures and increasing recent temperatures to produce an artificial warming trend, has been reported for many years in many blogs covering all regions of the world. It appears to be quite consistent and systematic.
Since both NOAA and NASA GISS use GHCN temperature data, it is not clear to me whether this systematic manipulation is being done by NOAA, by NASA, or perhaps more likely by both NOAA and NASA GISS in a co-ordinated fashion. Hopefully a congressional inquiry will soon get to the bottom of this.
So what is the average Temperature for this month for Zealandia; I mean the whole continent ??
G
The google earth pic of the site in 1991 does not seem to show two trees next to the weather station. The pic is not too clear but it seems that the trees, if they were there were no where near the size they are in the current pic. This would affect readings in the last 25 years surely?
All is revealed!! NASA has a working time machine!!!! How else can they go back in time and determine accurate temperatures in the past???? I am sure I will find confirmation somewhere on the net of the great cover-up of all time!!!!!!
Oh, they don’t have a time machine? Never mind. . .
I live 20 minutes north of the generating station. One change to the plant was the large changes in the transformer locations, to right to the south of that SScreen. Now the main step-up transformer is in that location. I’ll get a photo.
It’s not obvious to me why the ‘homogenized’ plot would have gaps when the raw does not. Could it be that upward spikes in the raw are clipped off to prevent the homogenized from rising too high too soon?
Just look up the Berkeley Earth results… thousands and thousands of surface temp sites checked, UHI influence eliminated.
Griff,
Please keep commenting here.
Your never failing comic relief is appreciated!
If I recall, they stole AW’s work.
You checked thousands and thousands of sites that quickly? I don’t think that can be true.
As always, Griff defines as good, any “study” that reaches the correct result.
If they claim that they have a 100% perfect method for removing UHI, then they do. After all, they got the correct result, so the methods must be good.
“Eliminated”…so it was quantified for every location and subtracted out? Would love to see the annual values on that for a number of cities.
More like it was allegedly “eliminated” by algorithmic (no pun intended) background processing and hand-waving.
Griffy,
I have serious reservations about the methodology BEST used to ‘prove’ UHI influence is negligible.
Laughably, BEST calculated a UHI cooling from 1950 to 2010. Instead of reaching the conclusion that something was f*d, they said it justified that UHI was minimal.
Thank you. This covers many of the riddles I’ve been puzzling over.
One big point is that the actual raw data are unavailable. The kind of instrument, readings and times of day, missing hours or days or weeks or months are all lost in the black-hole, the bit-bucket, the court-house fire or flood. The means used to interpolate and aggregate to arrive at monthly figures…well, scraps and hints and possibilities tantalize, but are not readily accessible.
But, the watermelons say we should give up our liberty and earnings and property because CATASTROPHE!
The actual data, the handwritten charts are readily available, I linked to one below. Just go here:
That’s roughly what I was seeking. Max, min, reading “at obs” over the 24 hours before observation. Suggests a recording unit, perhaps a circular pen trace, that a human would collect once a day, and transcribe readings to another form, keeping the original in a file for some fixed time before tossing them. We used to have one in computing center “machine room”.
That is real progress.
But then, it has to be digested down to a daily, monthly, quarterly, annual figure…somehow.
Another thought, about siting… though this is in a nice little grass area, it has problems… 1) body of water close by, 2) blacktop roadway and parking lot close by, nearly surrounding it, 3) shade trees too close, 4) building too close, 5) transformers in grid connections too close.
The article states that it is considered to be rural so no UHI adjustment would have been considered necessary.
Figures don’t lie. Liars figure.
Surely someone has done a study on the effect of building/trees/tarmacadam/water? It would be easy to do, just set up say 10 stations at various locations on one test site with building/etc, measure distances and compare results. This could even be done today for a short period, say a week, for an initial indication of variation. Obviously, years would be needed for comprehensive results, but even a week of measurements would show something, especially if the measuring instruments were state of the art. It could all be networked back to central station
A body of water with a 30m radius close to a site will give you temps 1C cooler.
That can hardly be generally true. It would depend on how deep the water is, whether it stands or flows, how far exactly from the site, predominant wind direction etc.
Fake news.
Michael,
How else can they claim that the UHI effect is cooling?
The elevated temperature from pavement/asphalt diminishes within 10-15m of its border (calm day)
diminishes to zero? Surely you jest 🙂
Mosh,
Why can’t you make an equitable comparison? How much is the cooling effect of water on a calm day within 10-15m of its border? How much is the “elevated temperature from pavement/asphalt” within a 30m radius? What about other sources of UHI, i.e. car, AC and jet engine exhaust? Are these assumptions or have they been tested?
At first glance it appears that the cooling effect is greater than the heating effect by a factor of 2 to 3. Your inability to compare apples to apples greatly diminishes your credibility.
Doncha just hate it when that happens. On the other hand, the effects of a lake can carry for miles.
At least according to Steven.
On the other hand, the effects of a lake can carry for miles.
At least according to Steven.
It does, I frequently have to shovel it off my driveway in the winter. And I’m about 30 miles away.
At what altitude above that pavement ??
g
The elevated temperature from pavement/asphalt diminishes within 10-15m of its border (calm day)
Heck, you can go further than that. It diminishes the further away it gets. According to Leroy (2010), the impact of a heat sink from 1-10m distance from the sensor will work out to ~8 times that of the same area of sink from 10-30m.
But there is a sizable number of HCN sensors that are rated as Class 3 (NWS non-compliant) rather than Class 2 (compliant) that have no heat sink exposure within 10m — but do have over 10% exposure within 30m (which works out to ~8 times the same exposure within 10m).
So you are correct to say that heat sink exposure does diminish after 10m distance, as you say. But that also is not to say that heat sink exposure at under 30m is not — very — important, at least when we are dealing in terms of trend differences of tenths/hundredths of a degree C per decade.
Exposure at distances from 30-100m can only make the difference between Class 1 and Class 2. Both of those ratings are NWS-compliant and both of the offsets are zero degrees C. So that is a distinction without much of a practical difference for our (limited) purposes.
But anything within 30m can potentially bump a station into non-compliance.
Well, we can adjust for that! (And we will, too, if the VeeV won’t see the light).
One thing I am very interested in is how our stationset will run using your methods. But for valid results, any homogenization you do cannot, Cannot, CANNOT be cross-class. So no compliant stations being pairwised with non-compliant stations if you please! What that means in plain English is that you can only use Class 1s and 2s for pairwise. Any 3/4/5 stations would have to be adjusted for microsite BEFORE using them to pairwise with Class 1s or 2s.
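To make the ~8:1 band weighting described above concrete, here is a hypothetical scoring sketch. The weights follow the ratio quoted from Leroy (2010), but the scoring function itself is an illustration, not Leroy's published classification scheme:

```python
# Hypothetical distance-weighted heat-sink exposure score.
# Per the discussion, equal areas of heat sink count ~8x more in the
# 1-10m band than in the 10-30m band. The weights and the station
# numbers below are illustrative assumptions, not Leroy's method.

W_NEAR = 8.0  # relative weight, sink area fraction within 1-10 m
W_FAR = 1.0   # relative weight, sink area fraction within 10-30 m

def exposure_score(frac_near, frac_far):
    """Weighted exposure from fractional sink coverage in each band."""
    return W_NEAR * frac_near + W_FAR * frac_far

# Station A: 5% coverage close in, nothing farther out.
# Station B: nothing within 10 m, but 25% coverage at 10-30 m.
a = exposure_score(0.05, 0.00)
b = exposure_score(0.00, 0.25)
print(round(a, 2), round(b, 2))  # 0.4 0.25
```

Under this weighting a small sink close to the sensor outweighs a considerably larger one farther out, which is the point being made about Class 3 stations.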
OK, so the measuring site is in the middle of several square miles of said pavement/asphalt along with other monitoring sites. Yessiree, UHI in my book.
lol a light breeze can carry the heat from a parking lot hundreds of meters
OK, so the measuring site is in the middle of several square miles of said pavement/asphalt along with other monitoring sites. Yessiree, UHI in my book.
Well, sure. And that will increase the baseline temperature. But that is merely an offset. I have found that UHI does not appear to have a heck of a lot of influence on trend (sic).
When I do not grid the data, urban stations show less warming that non-urban. When I grid the data, it shows a bit more warming than non-urban.
Urban stations are less than 10% of the USHCN total, in any case. So any effect on trend is marginalized.
But bad MICROSITE, i.e., the immediate environment of the station, has a huge effect on trend — even if the bad microsite is constant and unchanging over the study period.
UHI is edge-nibbling. Microsite is your prime mover. Microsite is the New UHI.
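Why gridding can flip the comparison, in a toy example (all numbers invented): a plain station mean over-weights whichever region has the most stations, while gridding averages each cell first and only then combines cells.

```python
# Toy illustration of why gridding changes an average: stations cluster
# in one region, so a plain station mean over-weights that region.
# Made-up anomalies in deg C.

# (grid_cell, anomaly): three stations crowd cell A, one sits in cell B
stations = [("A", 1.0), ("A", 1.2), ("A", 0.8), ("B", 0.2)]

plain_mean = sum(v for _, v in stations) / len(stations)

# Gridded: average within each cell first, then average the cells.
cells = {}
for cell, v in stations:
    cells.setdefault(cell, []).append(v)
gridded_mean = sum(sum(vs) / len(vs) for vs in cells.values()) / len(cells)

print(round(plain_mean, 3))    # 0.8 -- dominated by the crowded cell
print(round(gridded_mean, 3))  # 0.6 -- each cell weighted equally
```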
Chad Jessup February 22, 2017 at 7:22 pm
OK, so the measuring site is in the middle of several square miles of said pavement/asphalt along with other monitoring sites. Yessiree, UHI in my book.
Well your book is in error, the site is in the middle of several square miles of woods, and 23 m from the river (likely a cooling influence).
I propose that we dub these important physical discoveries “Mosher’s laws”. Now I’m fully convinced that Berkeley Earth contains nothing but the unvarnished, untarnished truth.
He is correct. But I think he is not following that particular trail closely enough. OTOH, he has said that microsite is a valid subject for study and has the potential to expose a systematic error in the data, i.e., not “nibbling around the edges”. (He doubts it will make much difference, of course, as well he may. But he said he thinks it is a good issue for study.)
He is correct about what? He doesn’t even make complete statements. “It is not only not true, it is not even wrong.”
He is correct that heat sink effect diminishes at 10-15m.
(Although I think he is drawing the wrong conclusions from that: A diminished effect can still be quite significant.)
The heat sink effect surely diminishes in some sort of continuous manner. That means you need at least 2 numbers to describe the decrease as a function of distance.
The heat sink effect surely diminishes in some sort of continuous manner. That means you need at least 2 numbers to describe the decrease as a function of distance.
Yes.
Not only that, but, say, you have a house and the front end is 15m from the sensor and the back end is 25m away. Well, the front end will have more effect than the back end. That makes it next to impossible, and certainly impractical, to calculate the effects precisely.
So Leroy crudes it out so it can actually be measured in this lifetime. It’s not exact, but, as the engineers say, it will do for practical purposes. OTOH, we have always realized that Leroy is a bit of a meataxe (although a very good meataxe), and one of the things we want to look at in followup is re-doing leroy’s method to achieve greater and more uniform accuracy.
Engineers use rules of thumb for estimating the amounts of explosive for blowing up bridges also. After the calculation, they take everything times ten just to be safe. This makes sense, as long as all you care about that the bridge gets blown up. It is not a sufficient basis for constructing bridges.
I think that Trump should offer an amnesty to fraudulent ‘climate scientists’ – Come clean and walk away, if you keep cheating go to jail.
+1
I guess I just don’t understand the whole thing, as the warmists are always telling me…
Hasn’t warming been going on for about 10,000 years now give or take, and sea level rise the same? Don’t any direct measurements we have that could be considered even partially global, go back only several hundred years at most?
What has been Man’s contribution to this warming and sea level rise over that period? Show me, and prove it. Show me, and prove, how the computer climate models accurately portray the climate as best as we can reconstruct it, over that period and over the recent directly measured past. Is that too much to ask?
Why is “climate science” not held to similar high standards as are other scientific disciplines? And God help us if engineers had to meet only the low standards of “climate science”…
I like this analogy that I read on one of these sites: “What is the average color of your television annually?”
How useful is that information?
To show how silly some averages are, I like to ask “If all my darts hit the wall around the board, can I say my average is a bulls eye?”
SR
Reminds me of the Texas sharpshooter fallacy.
Basically fire a bunch of shots at a barn. Find where most of the bullets hit. Paint your bulls eye there.
the triple 20 will still beat your bullseye.
g
The surfacestations project surveyed this site, and rated it a 4 for heat source nearby.
Well, it’s a Class 4 using Leroy (1999). With the upgunned Leroy (2010), it is a Class 3. (There is heat sink within 10m, but covering under 10% of the area within 10m.)
Data revisionism only takes place in climastrology, and it usually happens completely opposite of what logic would dictate, i.e. lowering past temperatures and raising current ones because of UHI effect.
Data adjustment is very necessary. Raw data, writ large, won’t do. But that just means it is all the more important to do the adjustments right. (And clear. And explainable. And replicable.)
Are you accusing Matt Menne and Claude Williams of scientific misconduct?
For the record?
Is the Publisher of this site aware that you are making a charge of scientific misconduct?
The temperature data for a given station (site) appears as monthly averages of temperature. The actual data from each measurement (daily?) is not present.
Check GHCN Daily. duh
Steve, with the facts presented in this piece, are you of the opinion that there would be need for adjusting the temperatures at this particular site? If so, why?
This is an honest question. I’m not a lay person, I am a chemist who has done some study into this protocol. I’m still not sure why adjustment would be needed. Broadening uncertainty I could understand. Shifting data points would contradict much of my training in physical science.
I can think of adjustments that need to be applied.
1.) There is a TOBS flip from 18:00 to 7:00 in 2010. There needs to be a pairwise comparison to adjust for the jump. (NOAA supposedly does this.)
2.) CRS units have a severe problem with Tmax trend because the bulbs are attached to the box itself — the station is carrying its own personal Tmax heat sink around on its shoulders. CRS Tmax trends are more than double any other equipment (either as warming trend OR cooling trend).
But instead of adjusting CRS to conform with MMTS (and ASOS and PRT, and, and, and), NOAA adjusts MMTS trends to match CRS units. In other words, they adjust in exactly the wrong direction. (In our study we account for the jumps, but we include MMTS units in our pairwise.)
So all CRS units need an adjustment to reduce Tmax trend. Instead, MMTS trends are increased by NOAA.
3.) It is a non-compliant Class 3 station. A microsite adjustment needs to be applied, one that will reduce the trend by somewhere between a third and a half.
It’s not that the data does not need adjustment. Unfortunately, it does. But NOAA is doing it wrong, Wrong, WRONG. (As far as I can tell, this is NOT fraud — just error.)
Evan Jones wrote, “There is a TOBS flip from 18:00 to 7:00 in 2010.”
Say what? Surely by 2010 they were automated and recording measurements every few minutes, right? So how can Time of OBServation be an issue?
Heh!
Sure, the readout is on all the time for both MMTS and ASOS, so you could at any time do a reading. And you can get hourly data on ASOS, as it is.
But all NOAA ticks down is max and min. That’s all that goes into USHCN2.5 data, anyway. Yeah, you can get the hourly scoop on the Airport stations from NOAA, but only max and min go into the HCN station data. However, airports typically observe at 24:00, which is a good time to do it.
With MMTS, the hourly data could be recorded, but simply isn’t, so far as I know. Therefore, TOBS adjustment is necessary. We simply drop stations with TOBS flips and don’t adjust. (Bearing in mind that dropping TOBS-flipped stations is, in essence, the logical equivalent of an adjustment.)
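For readers unfamiliar with why observation time matters for a max/min register, here is a toy simulation (synthetic diurnal cycle; all numbers invented). With an afternoon reset, a single hot spell just before the reading can show up in two consecutive daily maxima; with a morning reset it is counted once:

```python
import math

# Toy illustration of time-of-observation (TOBS) bias in min/max records.
# With an afternoon observation time, a hot afternoon just before the
# reading can be counted as the "max" for two consecutive days.

def hourly_temp(h):
    """Simple diurnal cycle peaking mid-afternoon, deg C. Made-up numbers."""
    base = 15.0 + 10.0 * math.sin(2 * math.pi * ((h % 24) - 9) / 24)
    if 13 <= h <= 17:  # hot spell on day 0 afternoon only
        base += 8.0
    return base

def recorded_maxes(obs_hour, days=2):
    """Max recorded at each observation, over the 24 h since the last reset."""
    maxes = []
    for d in range(days):
        end = d * 24 + obs_hour
        window = [hourly_temp(h) for h in range(end - 24, end)]
        maxes.append(max(window))
    return maxes

afternoon = recorded_maxes(17)
morning = recorded_maxes(7)
print([round(t, 1) for t in afternoon])  # [33.0, 31.7] -- spell counted twice
print([round(t, 1) for t in morning])    # [25.0, 33.0] -- spell counted once
```

This carryover double-counting is the bias that a TOBS adjustment (or dropping TOBS-flipped stations, as described above) is meant to address.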
That is amazing and appalling. The amounts of data are very small, by today’s standards. Data storage and transmission are practically free. The stations are expensive. Why on earth would they ever discard any data? SMH.
no, you are the one assuming misconduct … moron …
Accusations of misconduct seem to be an automatic go-to defense by alarmists. That these scientists (sic) might just be incompetent is never considered by these self-proclaimed supremacists.
Misconduct or incompetence… how to decide which it is? Generally speaking, look at the patterns of the errors. Simple incompetence generally makes errors which do not have a long term alignment with some ulterior motive. Misconduct produces patterns of errors which correspond with some desire or purpose.
Remember also that incompetence can only be judged in relation to the documented credentials of the people involved. A medical doctor who even once prescribes cyanide for a headache instead of aspirin cannot claim it was simple error, incompetence on his part. What level of incompetence is believable for a PhD employed by NASA as a climate expert?
For climate scientists, this is not complicated. Are the changes that have been made to the data equally scattered, some cooling, a roughly equal number warming, with warming and cooling adjustments having no unusual patterns related to past or present? Alternatively, do the changes produce new trends which did not exist in the raw data, new trends which support plausible desires or purposes of the people making the adjustments?
I know what it looks like to me.
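The pattern test described above can be sketched numerically. Below, synthetic "adjustments" are regressed against year: scattered errors produce a near-zero slope, while systematically tilted ones add a spurious trend. This illustrates the diagnostic only; it analyzes no real adjustment file.

```python
import random

# Do adjustments look like random errors (no relation to time), or do
# they systematically tilt the trend? Synthetic series; illustrative only.

random.seed(1)
years = list(range(1900, 2000))

# Case 1: scattered errors -- adjustments uncorrelated with time.
scattered = [random.gauss(0.0, 0.2) for _ in years]

# Case 2: tilted -- old years cooled, recent years warmed.
tilted = [0.01 * (y - 1950) for y in years]

def trend_per_century(adjs):
    """Least-squares slope of adjustments vs. year, in deg C / 100 yr."""
    n = len(years)
    my = sum(years) / n
    ma = sum(adjs) / n
    cov = sum((y - my) * (a - ma) for y, a in zip(years, adjs))
    var = sum((y - my) ** 2 for y in years)
    return 100.0 * cov / var

print(round(trend_per_century(scattered), 2))  # near 0
print(round(trend_per_century(tilted), 2))     # 1.0 deg/century of pure artifact
```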
I didn’t bring this up before, Mosh, but I guess you realize there is a big-ass problem with CRS. Tmax trend is crazy-outlier-large (either in a warming OR cooling trend — doesn’t matter which) when compared with any other equipment.
The MMTS, ASOS Hygro, or CRN PRTs all show a radically different story. Either all of them are wrong or CRS is wrong.
And you can imagine what effect THAT will have on MMTS adjustment . . .
Galileo recanted his heretical heliocentrism, but still was right that the earth moves, contrary to Church orthodox doctrine.
Adjusted or no anything that close to the Housatonic River for the last century isn’t useful for climate research.
The Housatonic River runs 140 miles from Pittsfield, MA, through Lee, Great Barrington and many other small communities on its way through Connecticut to the Long Island Sound.
Over that period major manufacturing from Plexiglas to paper had been using the river for dumping industrial pollution. Indeed when the paper companies upstream were producing colored paper you could tell the color even into the 1970s.
Rob, please explain. I grew up down wind from the Naugatuck River, so on the right day the smell was nearly unbearable. Let me know how that may or may not have impacted temperature data. Are you thinking the aerosols? If so, that would have a cooling effect, and therefore temperatures should be adjusted ….. up?
I hit send prematurely …. to explain, the Naugy and the Housy rivers run pretty much parallel to each other. The Naugatuck Rubber Company dumped into the Naugy, and the smell was, well ….. we didn’t like North Winds much ……
From the Climate Explorer. The average raw temperature anomaly in the 20 closest stations to Falls Village (one of these stations goes back to 1793).
There is absolutely no reason to adjust Falls Village based on the pairwise homogeneity algorithm unless it is rigged, or faulty, or so unstable that it just does not work.
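For context, the pairwise idea itself is simple, and a toy version shows why the point matters: differencing a target against a neighbor cancels the shared regional signal, so only a genuinely local step should survive. This sketch is synthetic and is not NOAA's actual PHA code:

```python
# Toy pairwise-homogenization idea: subtract a neighbor's series from
# the target's. The shared (regional) climate signal cancels; a step
# that survives in the difference series points to a local change at
# one station. Synthetic data; not NOAA's actual PHA implementation.

regional = [0.1 * y for y in range(40)]             # shared warming trend
target = [t + (1.0 if y >= 20 else 0.0)             # +1.0 C local jump
          for y, t in enumerate(regional)]
neighbor = list(regional)                           # unaffected neighbor

diff = [a - b for a, b in zip(target, neighbor)]

# Crude break detector: compare the means of the two halves of the
# difference series.
half = len(diff) // 2
jump = sum(diff[half:]) / half - sum(diff[:half]) / half
print(round(jump, 2))  # 1.0 -- the local step, with the shared trend cancelled
```

If a detector fires on a station whose neighbors move in lockstep with it, that points at the algorithm rather than the station.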
Rigged?
Mr Illis, are you charging Matt Menne and Claude Williams with scientific misconduct?
For the record.
And is the publisher of this site standing behind this charge?
1793? really? the 20 closest stations?
there are 20 stations within 25km of this site.
the oldest one goes back to 1884.
IF you are going to accuse people of misconduct show your work.
I’d hate to see you and others who publish this stuff getting sued.
Veiled threats of lawsuits don’t really advance science.
nice try … you really are a nasty piece of work aren’t you …
New Haven Connecticut, 1781-
Steven Mosher, currently running the BEST break-point “adjust temperatures higher” algorithm.
Hopefully, Steven is not one of those who can be prosecuted for running fake temperature adjustment algorithms. I’ve posted on boards with Steven for about 10 years now, for most of which he would have been described as a skeptic. I hope in changing sides he has not sacrificed his integrity, at least not on the prosecutable side.
Let’s look at Falls Village in Berkeley Earth partially managed by Steven Mosher now. This is a good representation of what this adjustment algorithm actually does.
It takes the raw temperature below.
And then finds 13 different “break-points” in this raw data, separates the original record into 13 different sections, and then re-stitches the 13 sections back together into a “new regional record” that goes up by almost +2.0C versus the “no change” in the raw record.
I mean, there are even 3 different “time of observation” breakpoints in the year 1983 alone. As if they changed the time of observation at this station 3 different times in 1983, all of which made the historical records go down, and at a period when this whole time-of-observation problem was supposed to have been sorted out 50 years earlier.
Obviously, this is a “biased algorithm”. How it got so biased I don’t know but I doubt it was an accident because people would fix it after they found just one example like Falls Village when there are 13,000 more just like it in the Berkeley Earth system. They would have already noticed how biased it is.
Sheri: veiled lawsuit threats don’t advance science, but neither do hinted allegations of misconduct. Let’s get it out there: is Illis accusing them of misconduct or not? If not, let’s say it plainly that there is NO accusation of misconduct. Then we all know where we stand.
So Mr Illis: “Steven is not one of those who can be prosecuted for running fake temperature adjustments.” Please tell us who is running fake temperature adjustments, and please clarify that by fake you mean they are deliberately and knowingly publishing false data to mislead the public.
Obviously, this is a “biased algorithm”. How it got so biased I don’t know but I doubt it was an accident because people would fix it after they found just one example like Falls Village…
Spot on! The technical source of the bias of the “break algorithm” lies in its fundamentally erroneous ex ante model of a monotonically declining “red noise” spectral structure for the data. That assumption, which misidentifies many sharp moves due to various quasi-cyclical components as “empirical breaks,” winds up fragmenting intact records into mere snippets, to be shifted upwards or downwards to conform to the model. Because of nearly ubiquitous–but largely unrecognized–UHI in the data base, the shifting of snippets toward the regional mean anomaly surreptitiously transfers the UHI effects to stitched-together non-urban records.
There can be little doubt that the uncritical embrace of this biasing algorithm, devised by statisticians with no expertise in geophysical signal behavior, is no accident on the part of agenda-driven “climate science.”
You realize that NASA GISS do not adjust this GHCN data as you claim.
That NOAA creates the adjusted data for NASA?
Is the publisher of this site aware of the fake news in your piece?
Blaming NASA for NOAA changes in the data looks irresponsible. Are you trying to do a hit job on NASA?
Great points Mosher. An official agency publishing and promoting trillion-dollar policies based on a dataset has no responsibility for the accuracy of its content, whereas commenters on a website must be held to the highest standards.
Steve,
do you realize that GISS in 1961,,was not supposed to be involved in surface temperature data, in the first place.Their original mission was to support NASA on Space exploration:
“Thus an initial emphasis on astrophysics and lunar and planetary science”
It was Dr. Hansen who changed the mission, when he took over the directorship in 1981.
First, the government is responsible. Everyone who touches it has a duty to assure that it is correct regardless of which office creates it. That’s why audit trails are important.
Second, I think the Trump fire has raised temperatures a bit!
Hmm. Don’t overanalyze. Especially when one is in opposition. We haven’t the data to arrive at serious value judgments. And we all have our loyalties. I know I do, and I don’t mince words, not on that subject. Life is an armed truce. But we knew that, anyway.
Besides, I may not agree with all of Mosh’s methodology or any of his conclusions, but BEST is a remarkable piece of work and has a different approach than we do.
In simple terms, he does “jumps”, while we do “gradual”. But I think a gradual, systematic, spurious bias will slip right past BEST, because BEST does jumps, not gradual. And our two biggies — Microsite and CRS bias — are gradual and systematic and not only won’t be picked up by BEST, but will serve just fine to make homogenization crash and burn.
Yet I think that by looking at both the BEST method and at our own method, when we publish — for their strengths — we (or others) might well create a better, more sophisticated method than either alone.
Although the data set is not labeled, these measurements are presumably min/max measurements.
The GHCN site clearly lists the data as being Max/min taken at 6pm (ideally) each day.
The temperature data for a given station (site) appears as monthly averages of temperature. The actual data from each measurement (daily?) is not present.
Here’s some data from 1916, for example:
“The result is an annual “temperature” which is a measure of the energy in the atmosphere coupled with any direct radiation impingement.”
Unless the enthalpy for each data point is calculated then they are not getting a “measure of energy in the atmosphere”.
Unless the enthalpy for each data point is calculated then they are not getting a “measure of energy in the atmosphere”.
I have it here, in these zips
if you want to look and see what it says.
Email me if you have any questions about what it all means.
micro6500 February 22, 2017 at 10:46 am
Thanks a lot of work as I cannot find your email perhaps you would explain how the DayWatts and DayWattsFlat were calculated and what they represent.
(Nice to see Little Risington BTW not been on that hill for a while 🙂 ).
As OA points out above, this is all barking up the wrong tree. GISS does very little in handling this data. They use GHCN adjusted data; that is where to look for the adjustment activity. GHCN’s sheet listing the records and its adjustment is here. The article says that GISS does not have the daily record – again looking in the wrong place. It is in GHCN Daily here. If you really want the raw stuff the handwriiten forms are accessible here. Metadata is accessible here.
“GISS does very little in handling this data.”
So why are they handling it?
Andrew
To calculate regional and national averages and their time history.
“To calculate regional and national averages and their time history.”
Why can’t a computer at NOAA do that?
Andrew
Nick,
GISS was originally created by Dr. Jastrow as a supporting group, to NASA’s SPACE EXPLORATION projects.
They have no justification to do work that other agencies already does,it is a waste of taxpayers money.
“They have no justification to do work that other agencies already does,it is a waste of taxpayers money.”
GISS was doing it first. They have a long record, and their product is well used. It would cost very little to produce (I do a similar calc on my home computer). There is no reason to stop.
“There is no reason to stop.”
Yes there is. You stated it yourself. They do very little, according to your own comment.
Andrew
Nick Stokes,
It sounds like you are making excuses to maintain the status quo. That’s not a very scientific posture.
Andrew
Now you are a liar,Nick!
I just posted that it was originally founded to do this:
“Thus an initial emphasis on astrophysics and lunar and planetary science”
It was not about global warming or climate change at all back in 1961,when GISS was founded by Astronomer Dr. Jastrow.
“They do very little, according to your own comment.”
My comment was that they don’t do the data handling – the adjustments for homogeneiities which is a minor but unavoidable part of calculating a proper global average. One of the ironies of these posts which seem to come every month or so, is that the reason GISS attracts this misplaced attention is that they seem to run a more user-friendly website for accessing the data, although NOAA has more overall. Do you really want to lose that?
Nick: “GISS was doing it first. They have a long record, and their product is well used. It would cost very little to produce (I do a similar calc on my home computer). There is no reason to stop.”
Before we launched all those expensive satellites, NASA was bubbling over with various excellent reasons to stop using surface stations. Those reasons haven’t changed, and the network has only gotten patchier since. It took a team of unpaid volunteers to even bother to document the current site conditions.
If GISS was showing a satellite-era trend significantly cooler than satellites, it would be quietly buried in a field at midnight instead of being kept on life support and implausibly promoted as more accurate than those really expensive satellites NASA wanted from taxpayers.
“they seem to run a more user-friendly website”
Seriously, Nick. How much money does NASA consume to produce “a more user-friendly website”?
Why not have NOAA produce “a more user-friendly website”?
Andrew
“How much money does NASA consume to produce “a more user-friendly website”?”
No use asking me. Why don’t you find out? I expect it is very little, as is the cost of producing Gistemp.
Nick, if GISTEMP costs almost nothing to produce, then there’s no need for taxpayers to fund it.
“then there’s no need for taxpayers to fund it.”
Lots of things that don’t cost much are still worth doing.
Running the internet doesn’t cost that much. So do you think they should stop?
“Why don’t you find out?”
We can’t get these folks to cooperate with FOIA requests or Congressional inquiries, but surely they’ll fall all over themselves in eagerness to help some commenter on a website critical of their work.
“Running the internet doesn’t cost that much.”
Huh?
Andrew
Nick — if you’re referring to the maintenance of Internet namespace databases (and not the trillions of dollars in private infrastructure), that is administered by a nonprofit.
Would you like some help with a Kickstarter campaign?
I say this has introduced me to anew and surprising way of deciding which public projects to finance. Instead of looking at the value of the output and the costs of the input, then funding if the benefits seem to be greater than the costs, this new way is much simpler. You simply look at the costs, and if they are quite low you scrap the program! Such simplicity must be applauded. After all, if the costs are low, what does it matter if the value is huge?
This new policy will ensure that cheap and excellent value for money projects will be scrapped, and only expensive projects will be funded.
“You simply look at the costs, and if they are quite low you scrap the program!”
Seaice, you are being deliberately obtuse. The reason the costs are quite low is because they aren’t really doing anything. Key point for you to try and comprehend.
Andrew
“Nick, if GISTEMP costs almost nothing to produce, then there’s no need for taxpayers to fund it.”
That is the point I was responding to. If it does not cost much there is no need to fund it. Absurd.
The article says that GISS does not have the daily record … and you confirm that by saying that GHCN has them … thanks
Immediately below the graph on the GISS site it explicitly says where the data is kept and provides a link to it:
“Key
Based on GHCN data from NOAA-NCEI and data from SCAR..”
And step 2 is where it all goes badly awry. Reason being that those breaks are fixed by doing pairwise, and the upwards of 80% of the stations used are invalid on the grounds of poor microsite, alone. And 100% of Stevenson Screen records have a Tmax trend that is more than spuriously doubled.
But neither microsite nor CRS bias creates a break, so it just slips through. And since bad microsite alone creates a systematic error that affects four stations out of five (even if non-CRS), NCEI “corrects” the situation not by adjusting the ~80% bad down to conform with the ~20% good stations, but adjusting the 20% good stations to match the 80% bad ones.
And that, folks, is how homogenization bombs. It works as intended if the majority of the data is good. In that case, the bad is conformed to the good. But if most of the data is bad, then it does the exact opposite. An average of good and bad data is not so great. Obviously. But misapplied homogenization takes a bad situation and, rather than making it better, makes it even worse.
“I expect it is very little, as is the cost of producing Gistemp.”
I guess so, since they don’t really do anything.
Andrew
Thanks, Nick. I am exposing my ignorance, but this was the easiest way to ask the question. I live about 20 miles south and was really curious why the old temperature data was modified.
I am at the very beginning of studying the record of an extended and near-complete record of one station in New Zealand. It is an exercise to see how the modern record has been created. One must start with how early raw data we read and averages for months and years established.
My understanding is that nowadays average (mean) daily temperature is established through finding the mean between max and min daily temp. But: when was it that instruments could automatically record max and min? This is what I am researching at the moment. As yet I can find no record of thermometer specs throughout the record.
The most likely scenario before automation (to establish max and min) is that the reader recorded temp at specific times (or time) in the day. It is most unlikely that a reader would attempt to find the max and min on a daily basis. Reading the device at 9 am in winter has very different implications to reading at the same time in Summer. The first (winter) may well record min temp but the second (summer) most probably wont. Many of our stations were situated at hydro power stations, forests and research institutes. I cannot imagine these staff getting out of bed at 5 am in summer to ensure that they record a min temp. Neither would they hang around a station during the afternoon to find the max.
There are people in our system who know what was done to establish the “mean” from early data which I am assuming were recorded at specific times during the day. I am going to keep digging until I get an answer.
This is the most basic of questions
To Michael ,The first thermometer to read the maximum &minimum temperatures was invented in 1780 by James Six.of Canterbury (uk) .it recorded the current temp.&also the max &min since last read ..it had no time clock so the time of the occurrence of the max min temp was not known ,but was probably read every 24 hours .it needed to be reset before recording the next 24 hours temps .
Yep – your right. Found out today through research 🙂
This is like the one I used to use in the early 60’s in my school’s Stephenson’s screen: />
I used to make the measurement at lunchtime every day and reset the indicators.
The link I gave earlier showed that at Falls Village the max/min temp was read at 6pm although the records from 1916 showed the actual time was more variable but still in the afternoon.
Mosher, you are quite litigious today. Taking lessons from your pal Mikey?
Goddard Institute for SPACE STUDIES
Founded by Dr. Jastrow in 1961,Directed for the first 20 years. Note that INITIALLY it was: “Thus an initial emphasis on astrophysics and lunar and planetary science”
.”
Guess who became Director in 1981,who changed the mission to Climate Change?
A surplus of solar energy on a dry surface produces only sensible heat which makes for a very hot surface and an equally hot air above.
Evaporation from an equally solar radiated moist surface replaces some of that sensible heat with latent heat which cools the surface and warms the air somewhere else far away as vapour condenses.
Hence tropical rainforests probably do more to cool the surface and move heat somewhere else than anything else.
Without tropical rain forests rainfall would cease to be as regular, the land would be subject to extremes, ecosystems would be destroyed and mankind would have a tougher time surviving..
I’m not, but I have complained about this for 10+ years. Think what that money wasted on climate change could have done!
have a +1 from me on that .
Bit Chilly. U R a piker. +100 from me!!!!
Not sure if this has been mentioned before and apologies if so; I haven’t had time to read the entire thread.
Berkeley Earth (BE) also analysed Falls Village and came up with results similar to NASA. Here are the data plotted using BE’s breakpoint algorithm:
This suggests that recent temperatures at Falls Village fall below those recorded at nearby stations, suggesting a local discrepancy of some sort, defined as an ‘Empirical Break’.
(Can’t help noticing that the tree beside the screen in the photo is casting a shadow over the screen. Was that always the case, I wonder?)
we don’t live in a pristine environment … there will be local differences driven by alot of factors and averaged out over the globe that is fine … there is no need to adjust every location to a pristine baseline … nobody lives in a pristine baseline …
Perhaps the fact that we don’t live in pristine environments is one good reason why we ‘should’ adjust for non-climatic influences. Shouldn’t we adjust for influences like UHI, for example?
Many statistical tests for significance assume a random distribution of errors. If someone “adjusts” the sample, it makes the tests rather invalid.
DWR54,
how do we know what the UHI effect for EACH temperature recording station are?
That alone is why Surface station data is a big mess.
Hey DWR54! “Shouldn’t we adjust for influences like UHI, for example?”
Yes, if the influences can be justified and quantified. But justification and especially quantification require information. Do we really have some new source of information that allows us to correctly adjust data from the 1880s? No? Then why are the old numbers changing?
Jason Calley on February 23, 2017 at 12:22 pm
No? Then why are the old numbers changing?
This is always the same question, like an eternal refrain.
All anomalies computed in a series out of the average of a given “baseline” period (e.g. 1951-1980 or 1981-2010) will change every time any absolute value within the baseline was changed (for example, to correct an error).
But all the other absolute values in the time series arfe left unchanged.
If a tree has grown and is now shading it why would you adjust data from 100 years ago rather than more current data? Part of the problem is that there is no audit trail for any of this. This is beside the point of just what global temperature really means and what it actually indicates. If you use fudged up data to calculate some fudged up figure, all you have is something that is meaningless.
most rivers and stream run in a valley of some sort . even a relatively small depression over a significant length appears to be colder than the surrounding area, often by a significant amount. in-laws live on the bank of the local river half a mile south of me and around 300 feet lower elevation. i often see temperatures up to 4 c cooler in the morning on the car temperature display than when i left my house 5 minutes previously.
Or a change in predominant wind direction off the river…
Yes, the stevenson screens were adopted when it was discovered that every weather station did not have a handy shade tree, so that the standard “temperature in the shade” might not always be the actuality as the sun hit it. There is shade on the shade, and both the shrubbery and the shelter minimally interfere with breezes.
J.K. Mackie’s (variant spellings in the family include M’Kie, Mackey, MacKay…) record from 1916 March shows that they recorded a daily max, min, the range, and then they took an arithmetical average max per day, and an average min per day each month. Wouldn’t be surprised if they just averaged those 2 to arrive at a monthly aggregate “average”…or at least central tendency. As long as it is consistent, it should be OK, not bias a trend into the mix.
This dataset was supposed to obsoleted decades ago with the advent of satellites and the shutting down or degrading of so many surface stations. Instead, it became a wonderful opportunity to promote a political agenda with adjustment that seem plausible on the surface (haha) but are deeply problematic when delving into the devils of the details.
Really hope Trump’s team just takes an axe to the whole GISS temperature dataset.
But Nick Stokes should feel free to keep publishing it from his PC, at no cost to taxpayers.
I use unadjusted temperatures. I can use adjusted. It makes very little difference.
Homogenization is far from the only adjustment. But make up any rules you want, it’s your dataset.
Forrest,
“See the problem with your line of argument?”
I wrote a code using quite different methods to GISS, described here. It is similar to what BEST later used. I use unadjusted GHCN data, which is a difference, but has little effect on the result. But you don’t know that until you have done it. Where there are clear inhomogeneities, you have to adjust for them, even if it all balances out in the end. I can just see people here taking the other tack if they didn’t (Negligent!).
I use unadjusted GHCN data, which is a difference, but has little effect on the result.
that’s because your processing generates the trends itself. You guys screw with the data so much, it doesn’t matter much.
It interesting, I use the measurements as is, when I scrape off the day to day change of min temp, average out over a year, if I take the last 30 years, invert it, and it’s a good match to satellite temps, which makes me think the satellites are detecting the heat passing through the troposphere.
Soon, Nick Stokes will be running a duplicate version of the internet on his PC. because running the internet doesn’t cost that much. lol
Andrew
talldave2 on February 22, 2017 at 10:50 am
Why should a dataset become obsolete with the advent of satellites when both look so similar?
Here is a chart comparing, exclusively for the GHCN V3 FALLS VILLAGE station, unadjusted and adjusted data together with the UAH6.0 2.5° grid cell just above the station:
You see that all three plots differ by so few that any claim for so called adjustments really sounds a bit paranoid.
But not only the plots show such convergence. Numbers do as well, e.g. highest and lowest temperature anomalies wrt 1981-2010 (in °C) from december 1978 till december 2016.
Highest is december 2015 for all three datasets
– UAH: +5.35
– GHCN unadj: +6.79
– GHCN adj: +7.04
Lowest is february 2015 for both GHCN datasets as well (december 1989 for UAH)
– UAH: -5.32
– GHCN unadj: -8.28
– GHCN adj: -8.33
The similarities between surface and troposphere temperatures at peaks and downs in the chart is sometimes amazing, especially when you look at it in a pdf file.
This of course you see only when looking at anomaly based charts: there are about 24 °C difference between UAH and GHCN, making comparisons of absolute values impossible.
If the data doesn’t fit, adjust it a bit!
Climatologist’s maxim: One fudged data table is worth a thousand weasel words!
Back in 2011 I tried to find out exactly what accounted for the NOAA/GISS U.S. temperature adjustments. I wrote about that attempt in a comment on WUWT in 2012, here:
Approximately all of the reported warming in the U.S. 48-State surface temperature record from the 1930s to the 1990s was due to adjustments. So I “asked the Climate Science Rapid Response Team” (a/k/a, the Defenders Of The Faith, the Congregation for the Doctrine of Anthropogenic Global Warming) to help me locate the old data, and to explain the alterations which had added so much apparent warming to the U.S. surface temperature record.
They were unable to do so, though they did direct me to some interesting material — some of which made me queasy.
In the WUWT conversation, Amos Batto claimed that the “data and software algorithms are publically available.” But when I told him that I couldn’t find it, and I asked him to find it, he went away.
I never did find an explanation for the majority of those temperature adjustments, nor did I ever track down the original data graphed in Hansen’s 1999 paper. Eventually I reconstructed the data, pretty closely, by digitizing Hansen’s graph, using WebPlotDigitizer.
That the author and most commenters seem ignorant of the fact that raw GHCN dailies are available from NOAA is a real head scratcher. Nick Stokes even supplied a link to the raw daily file for Falls Village, which includes TMAX, TMIN, TOBS, precip, snow, and snow depth, plus measurement, quality, and source flags. Data begins Feb 1916. Snow and snow depth measurements stop in 2010, precipitation measurements stop in 2014. (Why?)
When someone says they are using raw data, however, I wonder how they deal with missing days and data flagged for various “quality issues.”
Glancing at Falls Village, the biggest temperature increase appears to be balmier summer nights. No trend in precipitation. Record high temperature of 104F in 2002 beat the previous 103 in 1933. Has anyone asked long-term residents if they feel climatically threatened?
“Snow and snow depth measurements stop in 2010, precipitation measurements stop in 2014. (Why?)”
It’s a coop (volunteer) station, and seems to be fading out. You can read this in the metadata here. Why it is happening in stages is a mystery.
Seems a shame to shut down a station in continuous operation since 1916.
The Norfolk 2SW coop station, 7.8 miles away, has continuous records since 1943, but with many missing days. It shows more warming than Falls Village over the same period. The NWS shut down the hydrological reporting for that station in 2010 and has since been installing automated equipment:
What a difference 7.8 miles makes in extreme highs. Falls Village reached its record high of 104F in 2002, 5F above its 2001 high. Norfolk reached its record high of 98F in 2001, 7F above its 2002 high. Extremely odd.
More on the lack of correspondence between record TMAX days at Falls Village and Norfolk, CT.
Falls Village:
2001-08-03 — 91
2001-08-04 — 85
2001-08-05 — 88
2001-08-06 — 92
…
2002-07-29 — 93
2002-07-30 — 104
2002-07-31 — 91
Norfolk:
2001-08-03 — 87
2001-08-04 — M
2001-08-05 — 98
2001-08-06 — 81
…
2002-07-29 — 75
2002-07-30 — 87
2002-07-31 — 85
An argument against interpolation?
Where is that outflow coming from? Is that the sewage treatment plant?
Ah, never mind, it’s the hydro outlet. Built in 1914..
Susan,
In case you’ve not seen this, look at Item 4.
(4) CLIMATE CHANGE SCIENCE: TIME FOR TEAM “B”?
The American Enterprise Institute, 15 February 2005
Anyone with even a basic view of statistics knows that taking averages of averages etc and then trying to derive even more data from that result is a fools errand. Remember the definition of statistics – “an attempt to derive meaning where there is none.” In general the more manipulation done to historical data the more useless it becomes. In other words, the error bar gets so large that ALL the data is useless.
As far as the GISS manipulations go I’d be very surprised if anyone with a statistical background ever reviewed and approved this approach.
I’ve just sampled a few stations in Aus a few times but the mean of half hour readings usually comes out over a degree (C) different (more or less) than the mean of the official min and max (which is usually half a degree or more greater the the highest half hour reading shown because of a short spike). It shows how off the use of the mean of min and max readings of thermometer for a day is for a guess at how the thermal energy of a hundred cubic kilometers of air changes (especially since its not the same air the next day). Homogenizing the data using a method developed for real intensive properties is not real science.
It is not real science but it provides the right answer for the customer funding the research. I find it amazing that despite the immense amount of money being spent based on these over processed observations, nobody has carried out a formal validation of them. I doubt if any of them would even meet basic quality standards. A rather slap dash approach to processing large numbers of data points is forgivable in an undergraduate project with insufficient research, but for multiple government agencies with funding in the multiple millions to not bother to validate and provide result documentation of that testing is very close to malfeasance. Each site should have a ‘quality record’ that provides supportable reasoning for each and every ‘homogenization’ change to data and that change signed off by an accountable person. The original data for that site must be kept as well as the homogenized (invented?) data to allow replication of the claimed to be justified changes.
If an accountant did what NASA/NOAA/Hadley Centre do to meteorological observations, to a company sales figures it would be criminal.
East coast. Pullitzer Price for fiction.
In business you don’t duplicate. Profit margin is too important. Well so are my tax dollars. Trump the businessman is doing what should have been done the very first time another agency got into the temperature business. He is minding my money. About friggin time. I don’t care who does what as much as I care that several tax funded agencies are doing essentially the same $&@#%damn what! | https://wattsupwiththat.com/2017/02/22/through-the-looking-glass-with-nasa-giss/ | CC-MAIN-2022-05 | refinedweb | 14,860 | 63.9 |
12 September 2012 12:56 [Source: ICIS news]
SINGAPORE (ICIS)--Crude futures rose on Wednesday with ICE Brent gaining more than $1/bbl at one stage, supported by a softer US dollar and a decision by a German court in favour of the eurozone rescue fund.
Expectations that the US Federal Reserve will announce further economic stimulus plans this week added further support.
At 11:21 GMT, October Brent crude on ?xml:namespace>
October NYMEX light sweet crude futures (WTI) were trading at $97.69/bbl, up by 52 cents/bbl from the previous close. Earlier, the
The weaker US dollar made dollar denominated commodities such as crude more attractive to overseas investors.
There are widespread concerns | http://www.icis.com/Articles/2012/09/12/9594970/brent-crude-futures-rise-1bbl-on-softer-dollar-eurozone-hopes.html | CC-MAIN-2014-10 | refinedweb | 117 | 55.64 |
The MySensors library has just received an important update and at the same time goes to version 2. This is an opportunity to discover (or rediscover) this library to create DIY connected objects developed by the Swedish Sensnology AB team. . This article will be followed by a series of tutorials to integrate connected objects in Jeedom and Domoticz.
Contents
Install the MySensors library on the Arduino IDE
There are already many explanations on the principle of the MySensors library on the internet. In this article we will focus on the changes made in version 2 of the library. Before going into the thick of the subject, the MySensors library is now available in the Arduino IDE, you can install it from Library Manager very easily.
The biggest change is under the hood. The Sensnology team did a lot of optimization work that reduced the size of the library by 20%, which is still very significant in an Arduino project. This optimization is accompanied by a change in the writing of the code. For example, it is no longer necessary to create a gw object (MySensor gw) then to call the desired method (for example gw.begin), we now call the desired method directly, for example sendSketchInfo () to send the name and the version of the object.
#define MY_DEBUG.
#define MY_RADIO_NRF24 //#define MY_RADIO_RFM69 //#define MY_RS485
We need to include the SPI.h and MySensors.h libraries. Note the ‘s’ at the end to show the difference with the previous version.
#include <SPI.h> #include <MySensors.h>
Now, we can create a child. It simply a number from 0.
#define CHILD_ID_UV 0
Now we can set an MyMessage object. This object will contain the value (in the correct format). In this case it is a UV Index (V_UV).
MyMessage uvMsg(CHILD_ID_UV, V_UV);
If you need to initialise output for example, you can do that in the standard Arduino function startup(){}..
void presentation() { // Send the sketch version information to the gateway and Controller sendSketchInfo("UV Sensor", "1.2"); // Register all sensors to gateway (they will be created as child devices) present(CHILD_ID_UV, S_UV); }
The standard Arduino loop allows us to make UV measurement
uint16_t uv = analogRead(UV_SENSOR_ANALOG_PIN);
Check if the value change (or every 5 minutes). So, use the send command to send the value. The MyMessage will prepare the value in the correct format. You can define the number of decimal in this case.
send(uvMsg.set(uvIndex,2));
Finally the Arduino can sleep a little. For example 30 x 1000 ms in this case.
sleep(SLEEP_TIME);
#include <MySensors.h>
Erase MySensor gw and gb.begin() in your code.
Erase all gw in front of all the MySensors library calls. For example gw.send() becomes send().
Move the presentation inside the new presentation() function like that
void presentation() { sendSketchInfo("UV Sensor", "1.2"); present(CHILD_ID_UV, S_UV); }
That
- | https://diyprojects.io/mysensors-v2-discover-news-migrate-old-sketchs/ | CC-MAIN-2019-22 | refinedweb | 475 | 58.69 |
Structure that stores all data associated with one map layer. More...
#include <qgslayertreemodel.h>
Structure that stores all data associated with one map layer.
Definition at line 405 of file qgslayertreemodel.h.
Active legend nodes.
May have been filtered. Owner of legend nodes is still originalNodes !
Definition at line 413 of file qgslayertreemodel.h.
A legend node that is not displayed separately, its icon is instead shown within the layer node's item.
May be
nullptr. if non-null, node is owned by originalNodes !
Definition at line 420 of file qgslayertreemodel.h.
Data structure for storage of legend nodes.
These are nodes as received from QgsMapLayerLegend
Definition at line 426 of file qgslayertreemodel.h.
Optional pointer to a tree structure - see LayerLegendTree for details.
Definition at line 428 of file qgslayertreemodel.h. | https://qgis.org/api/structQgsLayerTreeModel_1_1LayerLegendData.html | CC-MAIN-2020-50 | refinedweb | 132 | 63.05 |
Structure of a topology object.
More...
#include <hwloc.h>
NULL
children
Structure of a topology object.
Applications mustn't modify any field except userdata .
Number of children.
[write]
Object type-specific Attributes.
[read]
Children, children[0 .. arity -1].
CPUs covered by this object.
Vertical index in the hierarchy.
Father, NULL if root (system object).
First child.
Last child.
Horizontal index in the whole list of similar objects, could be a "cousin_rank" since it's the rank within the "cousin" list below.
Object description if any.
Next object of same type.
Next object below the same father.
OS-provided physical index number.
OS-provided physical level.
Previous object of same type.
Previous object below the same father.
Index in father's children[] array.
Type of object.
Application-given private data pointer, initialized to NULL, use it as you wish. | https://www.open-mpi.org/projects/hwloc/doc/v0.9.3/structhwloc__obj.php | CC-MAIN-2016-50 | refinedweb | 138 | 56.42 |
Given a string containing only lowercase letters, find the first non-repeating character in it and print its index; if no such character exists, print -1. Print the result in an integer format.
Constraints
- 1<=|s|<=1000000
- 'a' <= s[i] <= 'z'.
Example Input: fwezfwjmevfukwejbfqegwkf
Example Output: 3
Explanation For Find unique character in a string
Store the count of each character in an array. Then traverse the string from beginning to end; the first character whose count is 1 is the answer, so print its index and break out of the loop. Here is the count of each character, listed below.
a- 0, b- 1, c- 0, d- 0, e- 4, f- 5, g- 1, h- 0, i- 0, j- 2, k- 2, l- 0, m- 1, n- 0, o- 0, p- 0, q- 1, r- 0, s- 0, t- 0, u- 1, v- 1, w- 4, x- 0, y- 0, z- 1.
Now, we traverse the string from the beginning and check whether there exists any character whose count is 1. If so, we print the index of that character and terminate the loop; if we don’t find any such character, we print -1. Here z has count 1, so we print the index of z, which is 3.
Implementation For Find unique character in a string
/* C++ implementation of First unique character in a string */
#include<bits/stdc++.h>
using namespace std;

int main()
{
    string s;
    /* take string as input */
    cin >> s;
    int len = s.length();
    int freq[26];
    /* initialize all counts with 0 */
    memset(freq, 0, sizeof(freq));
    for (int i = 0; i < len; i++)
    {
        /* count the characters and store count of 'a' at freq[0],
           count of 'b' at freq[1], ..., count of 'z' at freq[25] */
        freq[s[i] - 'a']++;
    }
    int flag = 0;
    for (int i = 0; i < len; i++)
    {
        /* the first character whose count is 1 is the answer */
        if (freq[s[i] - 'a'] == 1)
        {
            cout << i << endl;
            flag = 1;
            /* leave the loop */
            break;
        }
    }
    /* if no character with count 1 is found, print -1 */
    if (!flag)
    {
        cout << -1 << endl;
    }
    return 0;
}
Example Input 1: dvcaewliabjvwdbgkinshckgfdbfcvd
Example Output 1: 4
Example Input 2: aabbccdddeeefffrrrwwwqqqhhhfffihhi
Example Output 2: -1
Time complexity
O(L) where L is the length of the string.
Space complexity
O(1), because we only create a fixed-size array freq of 26 elements, which is constant regardless of the input length.