Hi Robert,

Robert Millan wrote:
> When it comes to __attribute__((__unused__)), it's just a matter of
> agreeing on whether to call it __attribute_unused__ or __unused.

I don't agree. It's perfectly fine for there to be multiple names for the thing --- the task at hand is dividing up the __* namespace between libbsd and libc.

Would it be possible to collect a near-exhaustive list of identifiers like this (i.e., used in BSD application code and starting with __) that could cause trouble? Then we can propose sed-ing them away at header installation time to the libc-ports maintainers (as Thorsten suggested), and I wouldn't be surprised if such a patch is accepted.

Thanks for moving this forward. Thoughts below about the alternatives you mentioned.

Regards,
Jonathan

> - Propose to BSD folks that they accept the __attribute_* prefix and
>   define those macros (in addition to __unused etc.), then begin
>   accepting patches that replace __unused with __attribute_unused.

If I were in their shoes, I wouldn't be happy about such patches. It sounds like heavy patching without much immediate benefit, with no end in sight (since glibc could start using another keyword the next day).

> - Propose to GCC folks that they define __attribute_* macros in
>   <stddef.h> (they won't define __unused since this would break
>   Linux and glibc), then bring the same proposal to Clang folks. If
>   both accept, FreeBSD is much more likely to backport it to GCC 4.2.

Likewise, in their shoes I wouldn't accept such patches. The macros are not in the C or POSIX standard and are easy to define in terms of the __attribute__() feature, so they're not part of what a C compiler is supposed to do. Making each implementation of the standard headers add these macros would hinder portability between implementations (yes, there are more than two :)).

> - Work with standards bodies (POSIX?) so that they specify either
>   definition (it doesn't matter to us which one; the offending
>   definition will have to adapt).
That sounds like an excellent idea! Presumably the C working group would be likely to consider standardizing __attribute__((__unused__)) if it is proposed, since that syntax is already widely implemented. C++0x has its own attribute syntax which could make something like [[unused]] possible.
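For illustration, a macro of the kind under discussion can be defined in terms of the compiler feature. This is a sketch only; it uses an unreserved name (ATTRIBUTE_UNUSED) rather than any identifier glibc or libbsd actually installs, precisely because the __* namespace is what is being divided up here:

```c
#include <assert.h>

/* Sketch: define an "unused" annotation in terms of the GCC
 * __attribute__ feature, with an empty fallback elsewhere. */
#if defined(__GNUC__)
#define ATTRIBUTE_UNUSED __attribute__((__unused__))
#else
#define ATTRIBUTE_UNUSED /* no-op on compilers without the feature */
#endif

/* 'context' is deliberately unused; the attribute silences
 * -Wunused-parameter without changing behavior. */
static int lookup(int key, void *context ATTRIBUTE_UNUSED) {
    return key * 2;
}
```

This is the sense in which the email argues the macros are "easy to define in terms of the __attribute__() feature" and thus don't belong in the compiler's headers.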
| http://lists.debian.org/debian-kernel/2011/06/msg00468.html | CC-MAIN-2013-48 | refinedweb | 380 | 71.65 |
Fibonacci Python Program
Today in this tutorial, we will learn how to write a Fibonacci program in Python. In Python, the Fibonacci series can be implemented in many ways, for example with a loop, recursion, memoization, or the lru_cache method.
First of all, you should know about the Fibonacci series. In the Fibonacci series, each number is produced by adding the two numbers that come before it.
In the Fibonacci series, the first two numbers are added to produce the next number, as you can see below.
- 0 + 1 = 1
- 1 + 1 = 2
- 1 + 2 = 3
- 2 + 3 = 5
- 3 + 5 = 8 and so on.
Fibonacci Python Program
1) Fibonacci Python Program To Print Fibonacci Series Using Loop
Code
# Take input from the user for the Fibonacci series
n = int(input('Enter the number for Fibonacci Series : '))
# initialize a and b
a = 0
b = 1
# if condition for n = 1
if n == 1:
    print(a)
# and when the value of n > 1
else:
    print(a, end=' ')
    print(b, end=' ')
# loop for producing the Fibonacci series
for i in range(2, n):
    c = a + b
    a = b
    b = c
    print(c, end=' ')
In this Fibonacci Python program, we first take the number of terms from the user. Then we initialize the two variables (a, b) with the values 0 and 1. If n is equal to 1, we print only a; if n is greater than 1, we print both a and b. Finally, the for loop produces the rest of the series by repeatedly shifting the values (a = b, b = c) and printing c.
2) Fibonacci Python Program To Print Nth Fibonacci Number Using Recursion:
Code
# Defining the Fibonacci function fib
def fib(n):
    # return 0 if n = 1
    if n == 1:
        return 0
    # return 1 if n = 2 or 3
    if n == 2 or n == 3:
        return 1
    # use recursion for the Fibonacci number
    return fib(n-1) + fib(n-2)

# take input n from the user
n = int(input("Enter to print nth number : "))
# print the nth Fibonacci number
print(fib(n))
This Fibonacci Python program finds the nth number of the Fibonacci series using recursion.
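To see why naive recursion becomes slow for larger n, one can count how many times fib is invoked. This is an illustrative sketch, separate from the tutorial code, using the same base cases as above:

```python
# Count calls to see the cost of naive recursion: the same subproblems
# are recomputed many times, so the call count grows exponentially.
calls = 0

def fib(n):
    global calls
    calls += 1
    if n == 1:
        return 0
    if n == 2 or n == 3:
        return 1
    return fib(n - 1) + fib(n - 2)

result = fib(10)  # just 10 terms already triggers 67 calls
```

This redundancy is exactly what the memoization approaches in the next section eliminate.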
3) Fibonacci Python Program To Print Nth Term By Using Memoization
i) Normal Method for Memoization
Code
# Use a dictionary as a cache for memoization
fibonacci_cache = {}

# define fibonacci
def fibonacci(n):
    # if we have a cached value, return it
    if n in fibonacci_cache:
        return fibonacci_cache[n]
    # compute the nth term
    if n == 1:
        value = 0
    elif n == 2:
        value = 1
    elif n > 2:
        value = fibonacci(n-1) + fibonacci(n-2)
    # cache the value and return it
    fibonacci_cache[n] = value
    return value

for n in range(1, 11):
    print(fibonacci(n), end=" ")
In the above Fibonacci program, memoization is used in a function to return the nth term of the Fibonacci sequence. Memoization makes the program fast and robust, because previously computed terms are reused instead of recomputed.
ii) By Using the lru_cache Method
Code
from functools import lru_cache

@lru_cache(maxsize=1000)
def fib(n):
    # check that n is a positive integer
    if type(n) != int:
        raise TypeError('n must be a positive integer')
    if n < 1:
        raise ValueError('n must be a positive integer')
    # compute the nth term
    if n == 1:
        return 0
    elif n == 2:
        return 1
    elif n > 2:
        return fib(n-1) + fib(n-2)

for n in range(1, 11):
    print(fib(n), end=" ")
In the above Fibonacci Python program, memoization is provided by the lru_cache decorator, which caches results using a least-recently-used (LRU) eviction policy while producing the Fibonacci sequence.
FAQ for Fibonacci Python Program
What is the Fibonacci Sequence?
0 1 1 2 3 5 8 13 21 34 … is the Fibonacci sequence. The sequence is produced by adding the two preceding numbers to get the next number.
In how many ways can the Fibonacci program be written in Python?
In Python, the Fibonacci sequence program can be written in many ways, for example with a loop, recursion, memoization, or the lru_cache method.
| https://technoelearn.com/fibonacci-python-program/ | CC-MAIN-2020-45 | refinedweb | 802 | 55.37 |
Basic Options
SDKs are configurable using a variety of options. The options are largely standardized among SDKs, but there are some differences to better accommodate platform peculiarities. Options are set when the SDK is first initialized.
Options are passed to the init() method:

import * as Sentry from "@sentry/ember";

Sentry.init({
  dsn: "",
  tracesSampleRate: 1.0, // We recommend adjusting this in production
  maxBreadcrumbs: 50,
  debug: true,
});
tunnel
Sets the URL that will be used to transport captured events, instead of using the DSN. This can be used to work around ad-blockers or to have more granular control over events sent to Sentry. This option requires the implementation of a custom server endpoint. Learn more and find examples in Dealing with Ad-Blockers.
attachStacktrace

autoSessionTracking

In mobile SDKs, when the app goes to the background for longer than 30 seconds, sessions are ended.
initialScope
Data to be set to the initial scope. Initial scope can be defined either as an object or a callback function, as shown below.
Object:
Sentry.init({
  dsn: "",
  debug: true,
  initialScope: {
    tags: { "my-tag": "my value" },
    user: { id: 42, email: "john.doe@example.com" },
  },
});
Callback function:
Sentry.init({
  dsn: "",
  debug: true,
  initialScope: scope => {
    scope.setTags({ a: 'b' });
    return scope;
  },
});
maxValueLength
Maximum number of characters a single value can have before it will be truncated (defaults to 250).

defaultIntegrations

This can be used to disable integrations that are added by default. When set to false, no default integrations are added.
| https://docs.sentry.io/platforms/javascript/guides/ember/configuration/options/ | CC-MAIN-2021-31 | refinedweb | 242 | 59.19 |
Details
Description
When adding an InputStream via addResource(InputStream) to a Configuration instance, if the stream is an HDFS stream, the loadResource(..) method fails with an IOException indicating that the stream has already been closed.
Activity
Integrated in Hadoop-trunk #756
Is this an incompatible change? I hope there is no code that depends on the old behavior.
I committed this to 0.19.1 and on. The fix for 0.18.4 is not straightforward, so I left that out, we can fix it in a new issue if need be.
Thanks. +1 for v3.
If you and/or others feel v2 is better, please go ahead, I would not -1 it.
Yes my eclipse configuration is intentionally this way, since I believe we should have the @Override's and imports corrected for ALL of the code. Doing these in a separate issue is a logical option, however I cannot imagine anyone doing this for all the java classes, so I tend to be practical and fix the ones I change for a patch.
I can remove the [import,@Override] changes, but I'm afraid I could not find time to prepare a patch for them.
On a related issue I guess the eclipse configurations about import statements should be fixed for everybody who develops code for Hadoop, so that the statements will be correct in the first place.
import statements are reordered and * are converted to actual classes again by save actions. As in this case here, the import statements are constantly switched between actual class names or * between patches.
hmm.. I am pretty sure eclipse can be configured not to do that (in fact, by default it may not do that). If every patch includes a lot of corrections like this, it would be pretty hard to track and maintain. There might even be constant flips committed due to minor variations in different eclipse configurations or JDKs used by eclipse environments. Pretty error prone as well.
I am -0.5 on these. I might be biased in this since I make sure my patch is not polluted even by minor white space changes. At least two separate patches would be much better. Note that it should be ok to fix the code just around the actual code changes..
All tests pass except TestSetupAndCleanupFailure, which seems unrelated..
+1. This should have been fixed a long time back.
regd the patch :
- If both the file and DFSClient are closed, it would still throw an exception; checkOpen() should probably be called only if the stream is not closed.
- it has a lot of meta, formatting changes, are those intentional? I know @override is useful, but these changes are spread all over.. one big disadvantage is that it conflicts patches coming from other active branches.
Here is a patch, which checks for the {input|output} streams to be closed and returns w/o throwing IOException. Test case ensures that HDFS and S3 file systems meet the contract.
We should fix HDFS to not complain about twice-closed streams, to be compatible with other InputStream implementations.
+1, I have come across several cases, where this was the problem.
Doug, you are correct, according to the Closeable API: Closes this stream and releases any system resources associated with it. If the stream is already closed then invoking this method has no effect.
We should fix HDFS to not complain about twice-closed streams, to be compatible with other InputStream implementations.
The problem is in the Configuration.loadResource(...) method at:
} else if (name instanceof InputStream) {
  try {
    doc = builder.parse((InputStream)name);
  } finally {
    ((InputStream)name).close();
  }
}
the DocumentBuilder (the builder variable) parse(...) method closes the stream, causing the close() in the finally block to fail.
Note that the failure does not happen with all stream classes, only with those that check that the stream is not closed before closing it (HDFS stream does that)
I don't think so, Closeable.close() explicitly states that closing more than once should have no effect.
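The Closeable contract discussed in this thread (a second close() must have no effect) can be sketched with a minimal, hypothetical stream class — this is not the HDFS code, just the behavior the thread argues HDFS streams should have:

```java
import java.io.Closeable;

// Minimal illustration of an idempotent close(): closing an
// already-closed stream is a no-op rather than an error.
class IdempotentStream implements Closeable {
    private boolean closed = false;

    @Override
    public void close() {
        if (closed) {
            return; // already closed: no effect, per the Closeable contract
        }
        closed = true;
        // release underlying resources here
    }

    public boolean isClosed() {
        return closed;
    }
}
```

With this behavior, the close() in Configuration.loadResource's finally block would be harmless even after DocumentBuilder.parse() has already closed the stream.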
| https://issues.apache.org/jira/browse/HADOOP-4760 | CC-MAIN-2017-51 | refinedweb | 669 | 73.88 |
Mur.
thanks,
alek
>
> Thanks,
>
>
>
> -------- Original Message --------
> Subject: XNode 1.1 API submission
> Date: Tue, 19 Aug 2003 20:33:01 +0100
> From: Murray Altheim <m.altheim@open.ac.uk>
> Organization: Knowledge Media Institute
> To: Brian Behlendorf <brian@collab.net>
> References: <2EDE63C8-C2C5-11D7-86F7-000A9577EA3A@xmldatabases.org>
> <20030730183930.D51544@fez.hyperreal.org>
>
> Brian,
>
> I don't know if you're aware, but while I was still at Sun I wrote an
> API to wrap XML documents in a SOAP-like wrapper to enable the attachment
> of metadata. That API was com.sun.xnode.XNode. I'd originally written it
> as part of a non-Sun project, as I was at the time involved with Doug
> Engelbart's group at SRI, where Lee Iverson (now at UBC) had floated
> his "Nodal" project. I thought XNode was a Nodal-like thing for XML.
> Well,
> I was wrong, but that's where it started.
>
> So anyway, my boss was laid off and I found myself (and the rest of my
> office) under Java management. I suggested using XNode and dbXML as
> part of Sun's Registry project, as we needed an XML database and a means
> of storing metadata. Long story short, the API was intended to be
> submitted
> to Apache but due to office reshuffling and my leaving off to school, it
> never happened. Sun still delivers com.sun.xnode.* as part of their
> Registry project (part of JWSDP), but has never published the API nor
> does
> there seem to be any interest in it. Likely below the radar, really.
>
> I'd submitted the code to Xindice, but somebody pulled it off the server
> because there hadn't been the proper paperwork from Sun releasing the
> code.
> (No criticism -- that's what should have happened) So I sent a message
> off
> to Sun's "feedback" line, and one to Jeff Suttor, but after two weeks
> I've
> not heard anything at all. Like I said, probably under the radar.
>
> Because XNode is a core API within my Ph.D. project Ceryle, I've gone
> ahead and rewritten the API as "XNode 1.1". It is not quite backward
> compatible with XNode 1.0, uses a different package designation (i.e.,
> it uses the intended package of org.apache.xnode.*) and a different
> XML namespace URI ("" rather than
> "", noting the www/xml difference as well
> as the version number). I've altered/improved the method names in a few
> places, added a method to allow embedding of a DOM Element as metadata
> rather than simply name-value pairs, and rewritten the documentation.
>
> Now... what would it take for me to submit XNode 1.1 to Apache? I've
> always intended to do this, and perhaps now it can be done. I realize
> there is some muddiness as to IPR, though several things keep that from
> being a problem for you:
>
> 1. the original API was never published publicly, except by me after
> leaving Sun, hence in theory I own the copyright (but not the IPR
> since I was under contract while at Sun)
> 2. Sun has not patented com.sun.xnode.XNode, so org.apache.xnode.XNode
> is not an infringement on copyright. You can't copyright ideas, only
> patent them, so the ideas in com.sun.xnode.XNode that resurface in
> org.apache.xnode.XNode are protected under Sun's IPR only insofar
> as they are Sun's ideas. In reality, XNode is a combination of Lee
> Iverson's NODAL and SOAP.
> 3. I've rewritten the API so you'd be receiving a *different* API.
> 4. They've never made any public splash about this, nor have they ever
> published anything about it either. They could potentially come
> after me for violating my IPR agreement (since in theory everything
> I've ever done or said is now owned by them, even going back to my
> birth), but there's no violation of copyright since they've not
> published anything (which is required of US copyright laws). As
> I mentioned previously, I actually beat them to it, so I in theory
> own the copyright on XNode 1.0.
> 5. Sun has no history of going after things like this, esp. ones that
> are way below the radar, esp. since I'm an ex-employee with no
> resources, not Microsoft. They would not spend resources bothering
> with such a thing, even if they could figure out where there was
> any infringement (since I've not given away anything demonstrably
> Sun's, having been away from them for over a year and a half).
>
> I've gussied the thing up, do you want it? :-)
>
>
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> For additional commands, e-mail: general-help@incubator.apache.org
>
--
If everything seems under control, you're just not going fast enough. —Mario Andretti
---------------------------------------------------------------------
To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
For additional commands, e-mail: general-help@incubator.apache.org
| http://mail-archives.apache.org/mod_mbox/incubator-general/200309.mbox/%3C3F6A33F2.9080601@cs.indiana.edu%3E | CC-MAIN-2018-22 | refinedweb | 822 | 65.52 |
FinalState QML Type
Provides a final state. More...
Detailed Description
A final state is used to communicate that (part of) a StateMachine has finished its work. When a final top-level state is entered, the state machine's finished() signal is emitted. In general, when a final substate (a child of a State) is entered, the parent state's finished() signal is emitted. FinalState is part of The Declarative State Machine Framework.
To use a final state, you create a FinalState object and add a transition to it from another state.
Example Usage
import QtQuick 2.0
import QtQml.StateMachine 1.0 as DSM

Rectangle {
    DSM.StateMachine {
        id: stateMachine
        initialState: state
        running: true
        DSM.State {
            id: state
            DSM.TimeoutTransition {
                targetState: finalState
                timeout: 200
            }
        }
        DSM.FinalState {
            id: finalState
        }
        onFinished: console.log("state finished")
    }
}
See also StateMachine.
| https://doc.qt.io/archives/qt-5.7/qml-qtqml-statemachine-finalstate.html | CC-MAIN-2019-26 | refinedweb | 136 | 53.98 |
I have a waveshield that I want to activate based on proximity from my parallax 28015 ping sensor. When I run the AF_wave sketch (easier than WaveHC for me to manipulate), the waveshield works fine playing all the files in a loop. When I run the PING))) sketch ( ... oundSensor), I get a nice stream of numbers. However, I am having a hard time combining the two sketches to make them work in unison. The ping sensor only runs once and never activates a wav file. It gives me one numeric value on the serial monitor and then just stops.
I approached it by trying to use the serial numbers as a variable for a "while" statement, thinking that an "if/else" statement would be activating a new wave file every few milliseconds. If you're more experienced at coding, I could use a suggestion or two, I'm sure. I couldn't even manipulate the WaveHC version to make this work, so if that would be better, let me know what I can do.
Here is the code, I commented out unnecessary (or so I think) lines:
#include <AF_Wave.h>
#include <avr/pgmspace.h>
#include "util.h"
#include "wave.h"
AF_Wave card;
File f;
Wavefile wave;
const int pingPin = 7;
void setup() {
Serial.begin(9600); // set up Serial library at 9600 bps
// Serial.println("Wave test!");
// set up waveshield pins
for (byte i = 2; i <= 5; ++i) {
pinMode(i, OUTPUT);
}
// open memory card
//;
// }
}
void loop()
{
// establish variables for duration of the ping,
// and the distance result in inches and centimeters:
long duration, inches;
//);
///////////wave stuff for loop
//sound files need to be named acordingly and distance ranges adjusted
while (inches >= 0 && inches <= 75) {
playfile("play1.WAV");
}
while (inches >= 100 && inches <= 175) {
playfile("play2.WAV");
}
while (inches >= 200 && inches <= 275) {
playfile("play3.WAV");
}
while (inches >= 300 && inches <= 399) {
playfile("play4.WAV");
}
return;
}
////////ping functions below: ... G;
//}
////wave functions below
void playfile(char *name) {
// stop any file already playing
if (wave.isplaying) {
wave.stop();
}
// close file if open
if (f) {
card.close_file(f);
}
// play specified file
f = card.open_file(name);
if (f && wave.create(f)) {
wave.play();
}
}
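One way to restructure the loop (a sketch, not a tested fix): take a fresh ping reading on every pass of loop() and pick the file with if/else rather than while, so the sensor keeps being re-read. The range-to-file mapping could live in a hypothetical helper like this, which mirrors the ranges in the sketch above:

```cpp
#include <cassert>
#include <cstring>

// Hypothetical helper (not part of the original sketch): map a ping
// distance in inches to the WAV file for that range. Returns nullptr
// when the distance falls in one of the gaps between ranges, in which
// case loop() would simply play nothing and ping again.
const char* chooseTrack(long inches) {
    if (inches >= 0 && inches <= 75)    return "play1.WAV";
    if (inches >= 100 && inches <= 175) return "play2.WAV";
    if (inches >= 200 && inches <= 275) return "play3.WAV";
    if (inches >= 300 && inches <= 399) return "play4.WAV";
    return nullptr; // gap between ranges, or out of range
}
```

In loop() one would then call chooseTrack with the latest reading and only call playfile when the result is non-null, instead of entering a while loop that never re-reads the sensor.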
| https://forums.adafruit.com/viewtopic.php?f=31&t=14745&p=180778 | CC-MAIN-2017-22 | refinedweb | 354 | 66.94 |
Print formatted output to a new string
#include <qdb/qdb.h> char *qdb_mprintf( const char* format, ... );
qdb
This function is a variant of sprintf() from the standard C library. The resulting string is written into memory obtained from malloc(), so there's no possibility of buffer overflow. The function implements some additional formatting options that are useful for constructing SQL statements.
You should call free() to free the strings returned by this function.
All the usual printf() formatting options apply. The qdb_mprintf() function adds the %q, %Q, and %z options, which are illustrated below:
Suppose some string variable contains the following text:
char *zText = "It's a happy day!";
You can use this text in an SQL statement as follows:
qdb_mprintf("INSERT INTO table1 VALUES('%q')", zText);
Because the %q formatting option is used, the single-quote character in zText is escaped, and the generated SQL is:
INSERT INTO table1 VALUES('It''s a happy day!')
This is correct. Had the ordinary %s option been used instead of %q, the single quote would not have been doubled, and the generated SQL would have contained a syntax error.
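The escaping rule that %q applies can be sketched in plain C. This is illustrative only, not the qdb implementation:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the %q rule: double every single quote so the text is
 * safe inside an SQL string literal. Caller must free() the result. */
char *escape_sql_quotes(const char *src) {
    size_t len = 0;
    const char *p;
    for (p = src; *p; p++)
        len += (*p == '\'') ? 2 : 1;
    char *out = malloc(len + 1);
    if (out == NULL)
        return NULL;
    char *q = out;
    for (p = src; *p; p++) {
        if (*p == '\'')
            *q++ = '\'';   /* emit the doubling quote first */
        *q++ = *p;
    }
    *q = '\0';
    return out;
}
```

Applied to "It's a happy day!" this yields "It''s a happy day!", matching the generated SQL shown above.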
Suppose you're unsure if your text reference is NULL. You can use this reference as follows:
char *zSQL = qdb_mprintf("INSERT INTO table VALUES(%Q)", zText);
The code above will render a correct SQL statement in the zSQL variable even if the zText variable is a NULL pointer.
The %z option is handy for nested strings:
char id[] = "12345678";
char *nested = qdb_mprintf(
    "SELECT msid FROM mediastores WHERE id = %Q", id);
char *sql = qdb_mprintf(
    "DELETE FROM library WHERE msid = (%z);", nested);
qdb_exec(sql);
free(sql);
The nested string doesn't have to be freed after it gets copied into the formatted string and the SQL code within the formatted string is executed.
QNX Neutrino
| http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.qdb_en.dev_guide/topic/api/qdb_mprintf.html | CC-MAIN-2018-09 | refinedweb | 265 | 60.45 |
May 04, 2010 11:20 PM|franzo|LINK
.Net 4.0 is encoding values when using Attributes.Add. In previous versions it didn't. With the new behaviour it is no longer possible to write attributes containing single quotes.
Here's an example.
<asp:TextBox
txtTest.Attributes.Add("onkeyup", "keyuphandler('hello')");
With the application pool framework version set to 2.0 it produces the desired result:
<input name="txtTest"
type="text" id="txtTest"
onkeyup="keyuphandler('hello')" />
With it set to 4.0 it produces an undesirable result (the single quotes are HTML-encoded):
<input name="txtTest" type="text" id="txtTest" onkeyup="keyuphandler(&#39;hello&#39;)" />
.Net 4.0 needs to be fixed to allow the developer to write attribute values containing single quotes.
encoding attribute .NET 4.0 quote
May 04, 2010 11:48 PM|vcsjones|LINK
&#39; is the HTML character reference for a single quote. The browser will interpret this as a single quote. For example, given this HTML:
<html> <body> <input type="button" value="Click Me" onclick="alert('Hello');" /> </body> </html>
saving that to disk still works and functions as desired. This is a security feature to prevent XSS rather than a bug.
May 05, 2010 02:29 AM|franzo|LINK
Yes, I know that the single quote is encoded. The point is that it shouldn't be! You have not suggested a workaround that works in general for setting attribute values programmatically from code-behind pages. Imagine, for example, if the string passed to the function were to be varied between calls depending on what is occurring server-side.
This problem:
1. is a change in behaviour from 2.0 to 4.0, breaking existing applications
2. has no clear workaround to achieve the desired result.
3. reduces the capability of the ASP.Net developer in building web applications.
It's either a bug or a poorly considered design change. Either way, it needs to be addressed.
May 05, 2010 02:50 AM|vcsjones|LINK
Yes, but it is a security change, and security changes trump backward compatibility.
Regardless, I am sorry for sounding so terse and not offering a work around. Does your web.config have a value for "controlRenderingCompatibilityVersion" attribute of the pages element? I haven't tried it, but if you set it to 3.5 - does it correct the issue for you?
May 05, 2010 04:12 AM|franzo|LINK
As I noted above, this isn't just about backward compatibility, it is also about the capability to do web development. Nailing the doors of your house shut would improve the security of your house but it would make it unreasonably difficult to enter.
No, controlRenderingCompatibilityVersion="3.5" does not resolve the problem. Even if it did it would not be a reasonable long term fix.
We need a way to put single quotes in html attribute values using 4.0.
May 05, 2010 05:23 AM|vcsjones|LINK
After reviewing the code with reflector, I cannot find a reasonable way to do this. This appears to be baked into the framework. The only solution that I can offer other than contacting Microsoft directly is to use a Control Adapter, which would need to be done with every Web Control. Like this:
public class ButtonControlAdapter : WebControlAdapter
{
    protected override void RenderBeginTag(HtmlTextWriter writer)
    {
        writer.WriteBeginTag("input");
        var control = Control as Button;
        writer.WriteAttribute("type", control.UseSubmitBehavior ? "submit" : "button");
        foreach (string attribute in control.Attributes.Keys)
        {
            writer.WriteAttribute(attribute, control.Attributes[attribute], false);
        }
    }

    protected override void RenderEndTag(HtmlTextWriter writer)
    {
        writer.WriteEndTag("input");
    }
}
And then registering it in your browser definition. Given that the last solution was not acceptable even if it worked, I would assume this is not acceptable either.
Maybe we are looking at this the wrong way. Can you provide a specific example of why you absolutely positively need the legacy behavior?
May 07, 2010 10:18 AM|franzo|LINK

The linked article talks about creating custom encoding routines.
You can turn off attribute encoding by creating a class like this:
public class HtmlAttributeEncodingNot : System.Web.Util.HttpEncoder
{
protected override void HtmlAttributeEncode(string value, System.IO.TextWriter output)
{
output.Write(value);
}
}
and adding this to web.config under <system.web>:
<httpRuntime encoderType="HtmlAttributeEncodingNot"/>
This gives me the control I need.
However, now we must worry that new controls may depend on the new standard 4.0 behaviour and not encode single quotes, so it's still imperfect, nay, worse than imperfect: security is even worse, because we don't know what is going on where, so it's not a great workaround really.
I think only Microsoft can fix this properly. Others have suggested the need for an HtmlAttributeString class here:
If there were such a class and Attributes.Add could take an object like this for its value parameter then we would have the control that we need again.
Sep 16, 2010 06:35 PM|kevinmcc|LINK
Franzo,
I just wanted to say thank you for this. Man, why does .net suck so much with JavaScript?
It's like Microsoft couldn't imagine a scenario where you'd want to do some client side stuff outside of their framework.
And how does escaping single quotes that I write out from code behind make this more secure?
I have no clue what the below does, but it works. And that's good enough for me.
;-)
<httpRuntime encoderType="HtmlAttributeEncodingNot"/>
Dec 07, 2010 07:12 PM|webst128|LINK
I found that I also needed to override the HtmlEncode method to handle script blocks that were being written as part of content in the innerHtml of HtmlGenericControls. Not sure if this is necessary, but I decided to let the default Encoding happen, and then just override the result replacing the escaped quotes
protected override void HtmlAttributeEncode(string value, System.IO.TextWriter output)
{
    StringBuilder sb = new StringBuilder();
    StringWriter sw = new StringWriter(sb);
    base.HtmlAttributeEncode(value, sw);
    output.Write(sw.ToString().Replace("&#39;", "'"));
}

protected override void HtmlEncode(string value, System.IO.TextWriter output)
{
    StringBuilder sb = new StringBuilder();
    StringWriter sw = new StringWriter(sb);
    base.HtmlEncode(value, sw);
    output.Write(sw.ToString().Replace("&#39;", "'"));
}
May 04, 2011 04:54 PM|ghudson@assessmentplus.com|LINK
This breaking change just bit me today. I need to add some CSS styling from code behind that contains the single quote character and of course .NET 4 is rendering the escaped version. The CSS does not render correctly when the quotes are escaped. How might one go about using ASP.NET 4 to add valid CSS styles server side that contain single quotes without resorting to the solutions posted? There doesn't seem to be a way.
I agree with others that this is a bug, not a security solution. Who can we report this to and have this reverted back to .NET 1-3 behavior?
May 19, 2011 09:59 AM|dlatikay|LINK
Hi, I have found another elegant solution for your particular problem:
txtTest.Attributes.Add("my_custom_stringliteral", "hello");
txtTest.Attributes.Add("onkeyup", "keyuphandler(this.my_custom_stringliteral)");
I tested it, it works in IE and Mozilla, and it seems to be perfectly "safe for scripting".
The extension attribute "my_custom_stringliteral" is even scoped to txtTest only, so you could use the same attribute name for other controls on the same control/page without having to bother about keeping them unique.
BR,
Daniel Latikaynen
en-software
May 20, 2011 08:53 AM|dlatikay|LINK
There is another, very strange implication: I managed to set up a scenario in a huge ASP.NET 4 web app, which had been ported from 2.0 not long ago, where Microsoft's own framework code fools itself because of this *#$@[ attribute encoding:
A plain dropdown combo with AutoPostback=True
<asp:DropDownList
Renders like this (when viewing client-side in IE7 "View source" pane)
<select name="lstMachine" onchange="javascript:setTimeout('__doPostBack(\'lstMachine\',\'\')', 0)" language="javascript" id="lstMachine" style="width:300px;">
Which seems to prove that AutoPostback is broken. And for which we were unable to find a viable workaround so far. Will again resort to clientside javascript or AJAX rewrite.
Anybody else experiencing this?
Jan 16, 2012 04:47 PM|mike_b_32768|LINK
Yes, that is elegant!
I guess the example doesn't show it working for single quotes, but it really does! You can adapt it (very slightly) to a more general purpose version:
txtTest.Attributes.Add("my_custom_stringliteral", "alert('foo')"); // custom literal, for the code to execute
txtTest.Attributes.Add("onkeyup", "eval(this.my_custom_stringliteral)"); // the eval now executes whatever is in the custom literal
Jan 16, 2012 05:02 PM|dlatikay|LINK
The eval() version is a more universal one for sure, and it renders the originally intended "fix" so utterly futile, that it made me LOL.
Beware of passing user-entered data as values of "my_custom_stringliteral", it would be the perfect gateway for an injection attack.
Jan 18, 2012 02:42 PM|mike_b_32768|LINK
@dlatikay I still haven't found another way to get round the change in ASP.Net 4.0. The fix does gets round it, so I'm missing something, why is it futile? The risk of passing user data in to "my_custom_stringliteral" makes the fix exactly as dangerous - or not - as the original functionality - i.e. it would always have been dangerous to pass user data in to something that was always *meant* to be executed as JavaScript.
Jan 18, 2012 03:04 PM|dlatikay|LINK
@mike_b sorry I was not clear...
the fix we made (my_custom_string_literal, eval()...) is all right.
the remark about the passing of user data is just my disclaimer so nobody will sue me claiming I encouraged people to circumvent a security precaution
the "fix" I had been referring to in my previous post was the "improvement" to the attribute encoding of the ASP.NET framework, which made _our_ fix necessary in first place.
http://forums.asp.net/p/1554455/3818604.aspx
Python Programming/Decorators
Duplicated code is recognized as bad practice in software for lots of reasons, not least of which is that it requires more work to maintain. If you have the same algorithm operating twice on different pieces of data you can put the algorithm in a function and pass in the data to avoid having to duplicate the code. However, sometimes you find cases where the code itself changes, but two or more places still have significant chunks of duplicated boilerplate code. A typical example might be logging:
def multiply(a, b):
    result = a * b
    log("multiply has been called")
    return result

def add(a, b):
    result = a + b
    log("add has been called")
    return result
In a case like this, it's not obvious how to factor out the duplication. We can follow our earlier pattern of moving the common code to a function, but calling the function with different data is not enough to produce the different behavior we want (add or multiply). Instead, we have to pass a function to the common function. This involves a function that operates on a function, known as a higher-order function.
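The refactoring the paragraph describes can be sketched like this (shown in Python 3 syntax; the logged and wrapper names are illustrative):

```python
def log(message):
    print(message)

def logged(operation, name):
    """Wrap a two-argument operation so every call gets logged."""
    def wrapper(a, b):
        result = operation(a, b)
        log(name + " has been called")
        return result
    return wrapper

# The duplicated multiply/add bodies collapse to one-liners:
multiply = logged(lambda a, b: a * b, "multiply")
add = logged(lambda a, b: a + b, "add")
```

The boilerplate now lives in one place, and logged is exactly the higher-order function the text mentions.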
A decorator in Python is syntactic sugar for applying a higher-order function.
Minimal example of property decorator:
>>> class Foo(object):
...     @property
...     def bar(self):
...         return 'baz'
...
>>> F = Foo()
>>> print F.bar
baz
The above example is really just syntactic sugar for code like this:
>>> class Foo(object):
...     def bar(self):
...         return 'baz'
...     bar = property(bar)
...
>>> F = Foo()
>>> print F.bar
baz
Minimal example of a generic decorator:
>>> def decorator(f):
...     def called(*args, **kargs):
...         print 'A function is called somewhere'
...         return f(*args, **kargs)
...     return called
...
>>> class Foo(object):
...     @decorator
...     def bar(self):
...         return 'baz'
...
>>> F = Foo()
>>> print F.bar()
A function is called somewhere
baz
A good use for decorators is to allow you to refactor your code so that common features can be moved into decorators. Consider, for example, that you would like to trace all calls to some functions and print out the values of all the parameters of the functions for each invocation. You can implement this in a decorator as follows:
# Define the Trace class that will be
# invoked using decorators
class Trace(object):
    def __init__(self, f):
        self.f = f

    def __call__(self, *args, **kwargs):
        print "entering function " + self.f.__name__
        i = 0
        for arg in args:
            print "arg {0}: {1}".format(i, arg)
            i = i + 1
        return self.f(*args, **kwargs)
Then you can use the decorator on any function that you defined by:
@Trace
def sum(a, b):
    print "inside sum"
    return a + b
On running this code you would see output like
>>> sum(3, 2)
entering function sum
arg 0: 3
arg 1: 2
inside sum
Alternatively, instead of creating the decorator as a class, you could have used a function.
def Trace(f):
    def my_f(*args, **kwargs):
        print "entering " + f.__name__
        result = f(*args, **kwargs)
        print "exiting " + f.__name__
        return result
    my_f.__name__ = f.__name__
    my_f.__doc__ = f.__doc__
    return my_f

# An example of the trace decorator
@Trace
def sum(a, b):
    print "inside sum"
    return a + b

# If you run this you should see:
>>> sum(3, 2)
entering sum
inside sum
exiting sum
5
Remember it is good practice to return the function or a sensible decorated replacement for the function so that decorators can be chained.
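A side note on the manual __name__/__doc__ copying shown above: the standard library provides functools.wraps for exactly this, so chained decorators keep the original function's metadata (sketch in Python 3 syntax):

```python
import functools

def trace(f):
    @functools.wraps(f)  # copies __name__, __doc__, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        print("entering " + f.__name__)
        result = f(*args, **kwargs)
        print("exiting " + f.__name__)
        return result
    return wrapper

@trace
def add(a, b):
    """Add two numbers."""
    return a + b
```

After decoration, add.__name__ is still "add" and its docstring survives, which matters for introspection and for stacking decorators.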
https://en.m.wikibooks.org/wiki/Python_Programming/Decorators
On Thu, 9 Aug 2018 12:07:00 -0400 Carlos Neira <cneirabus...@gmail.com> wrote:
> Jesper,
> Here is the updated patch.
>
> From 92633f6819423093932e8d04aa3dc99a5913f6fd Mon Sep 17 00:00:00 2001
> From: Carlos Neira <cneirabus...@gmail.com>
> Date: Thu, 9 Aug 2018 09:55:32 -0400
> Subject: [PATCH bpf-next] BPF: helpers: New helper to obtain namespace
>  data from current task
> [...]

Hi Carlos,

This is not how you resubmit a patch. It is documented in both [1] and [2] that:

  "In case the patch or patch series has to be reworked and sent out
   again in a second or later revision, it is also required to add a
   version number (v2, v3, ...) into the subject prefix" [1]

Take a look at [1], whose toplevel doc is the "HOWTO interact with BPF subsystem".

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn:
https://www.mail-archive.com/netdev@vger.kernel.org/msg245941.html
XR constantly losing app connection?
Just got my Xr back from FM to fix the Bluetooth issue. Tested it for few minutes; looks like it works fine now.
I just got mine back. The signal strength seems to have marginally improved. It held connection for about a minute into the ride, then dropped. Still not usable.
- TerminalVelocity last edited by
I just got my board back from @future-motion and it seems like the BT issue is fixed. It connects super fast, and I just went for a nice session and it stayed connected the whole ride with my phone in my back pocket. The board came back with my miles still showing, so I guess the main controller was not switched out. All I know is it works now like it is supposed to. Thanks @future-motion for the quick turnaround: 11 days total from shipping to return. Thanks, Thomas
@rfkilowatt @kanguru007 I'm in Argentina, so sending to the USA is a no-go (customs costs as high as U$S 1000 here).
I'm an electronics engineer and I have the right tools; maybe I will try to solder an external antenna (1/4 lambda). What do you think?
@tlamb2 THIS!
It really works better! I was thinking in opening the board and soldering an external antenna, but this works! I've to test it more time, but great so far!
You've to enter "Developer Mode" in Android.
Go to Settings > System Info (the last on the list), then tap 7 times on the Build Number field (also the last field).
Then go search in settings, and look for AVRCP, then switch to 1.6 as @tlamb2 says.
That's it!
@gasppol Plus it also helps to clear your bluetooth settings in an android phone it seems to get rid of all the old Bluetooth connections making current connections better. In my work van now I am able to see text messages delivered via bluetooth to the vans audio unit which it use to do but suddenly stopped but after resetting my bluetooth that works now also. So it does something.
- namespaced last edited by
For those who are in the market for used boards, what do we need to watch out for? HW version 4208 purchased between August 2018 and December 2018? Were all new boards affected, or only a subset? I can't really tell from skimming through all these posts.
@namespaced Pretty much all 4208 boards. 4206 (very first XRs produced) are not affected but did have the bricking issue.
- namespaced last edited by namespaced
I’ve had a new XR for about a month and have recorded a dozen or more rides with no problems at all. I can walk 60’ away, line of sight, and still communicate with the board. iPhone 6. This issue was the most worrisome for me before buying the board, but doesn’t exist for me. Go figure???
@gasppol I’m an EE too. I’m going to try it with a 50Ohm low gain patch antenna and coax I bought from digikey. I’m going to mount it away from the aluminum cover plate. I’ll let you know if it improves things. That great a software change works with android.
@Kbnikto51 What hardware and firmware are you running? Maybe they fixed it? I’m running Hardware 4208, firmware 4134.
To be clear, I can connect to the board when I’m dismounted. Once I get on the board, the phone and my Apple Watch lose connection. I can’t check my speed. I have to stop to check battery strength. I also can’t track a ride.
Anyway, I’ve got news. I fixed it adding an external antenna. Works great now. It took it for a ride and I get realtime speed on my watch. Never worked before.
I’m trying to find a way to post details on what it looks like. Maybe reddit or imgur.
The antenna is a tiny little patch antenna that I stuck to the inner plastic part of the controller housing. You do have to unsolder and remount a 0402 cap on the Bluetooth module to remove the internal antenna and use an external one. Not easy, need a really good soldering iron like a Metcal with fine tips.
Now I’m pretty sure it's a signal strength issue. Maybe it only affects some boards. Getting it reworked by Onewheel customer service did not fix it. Maybe they replaced the module, but that was not enough.
Here.
Sorry this is blurry. Here you can see the cap is moved 90, connecting it to the external ant pin 1. I connected an insulated 24 awg wire to pin1 and I put the wire through the ground via to reach the other side. This was my attempt to keep the impedance of the wire closer to my 50 Ohm antenna (lower its inductance).
This is the topside of the controller board. That’s the wire popping through from the backside. I soldered in an H.FL connector. I was able to capture two ground points at 12 and 3 o’clock. This important so you can properly ground the coax. After soldering, I added RTV along the edges for stability (not shown).
The antenna I bought, [Taoglas FXP74.07.0100A], has a U.FL connector. I mistook that for being the same as H.FL which I have lying around my lab. It does not mate well to U.FL. When I get a chance I will swap it for the correct U.FL smt connector. I wanted a connector here so you can disconnect the antenna from the board when you assemble/disassemble because the antenna will be attached to the plastic back shell while the board is bolted to the aluminum pan.
This is the antenna I bought. I was just looking for something low gain, that was small and 50 Ohms with a connector. There are hundreds of different ones on digikey that I would think would work.
Here’s were I put the antenna. The adhesive seems really strong. I don’t think it’s coming off. Coax going to the left.
After that, I put the shell back on to the aluminum plate. You have to reconnect all the connectors while its barely open, because the wires are not that long. Everything then goes back together like it came apart.
https://community.onewheel.com/topic/7966/xr-constantly-losing-app-connection/271
Guide to Programming with Python

Chapter Nine: Inheritance

Working with multiple objects

OOP So Far: Object-oriented programming is a programming language model organized around "objects". An object is a software bundle of related attributes and behavior.
class Player(object):
def __init__(self, name = "Enterprise", fuel = 0):
self.name = name
self.fuel = fuel
def status(self):
...
myship = Player("Apollo")
myship.status()
Car
Sports car
Convertible
Van
Animals
Mammals
Fish
Reptile
Dog
Cat
Human
class Animal(object):
def __init__(self, name): # Constructor
self.name = name
def get_name(self):
return self.name
class Cat(Animal):
def talk(self):
return 'Meow!'
class Dog(Animal):
def talk(self):
return 'Woof! Woof!'
animals = [Cat('Missy'), Cat('Mr. Bojangles'),
Dog('Lassie')]
for animal in animals:
print animal.talk() + ' I am ' + animal.get_name()
Base class: A class upon which another is based; it is inherited from by a derived class
Derived class: A class that is based upon another class; it inherits from a base class
class Animal(object):
def __init__(self, name):
self.name = name
def talk(self):
return 'Hello!'
class Cat(Animal):
def talk(self):
return 'Meow!'
class Animal(object):
def __init__(self, name, age):
self.name = name
self.age = age
def get_name(self):
return self.name
def __gt__(self, other):
#override comparison operators
return self.age > other.age
print Animal('Missy', 4) > Animal('Lassie', 3)
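A related note: overriding __gt__ alone leaves the other comparison operators undefined. On Python 2.7+ the standard library's functools.total_ordering can derive them from __eq__ plus one ordering method; a sketch (Python 3 syntax):

```python
import functools

@functools.total_ordering
class Animal(object):
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __eq__(self, other):
        return self.age == other.age

    def __gt__(self, other):
        return self.age > other.age

# total_ordering fills in __lt__, __le__ and __ge__ automatically.
```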
class Card(object):
...
class Positionable_Card(Card):
def __init__(self, rank, suit, face_up = True):
super(Positionable_Card, self).__init__(rank, suit)
#invoke parent’s method by calling super()
self.is_face_up = face_up
class DerivedClass(Base1, Base2, Base3):
...
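With multiple base classes, Python resolves attribute lookups left to right along the method resolution order (MRO). A minimal sketch (class names are illustrative):

```python
class Base1(object):
    def greet(self):
        return "Base1"

class Base2(object):
    def greet(self):
        return "Base2"

class DerivedClass(Base1, Base2):
    pass

# Lookup follows the MRO: DerivedClass, then Base1, then Base2, then object,
# so greet() comes from Base1.
print(DerivedClass().greet())
```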
(We already know that we can exchange information among functions through parameters and return values)
Figure 9.3: Visual representation of objects exchanging a message
hero, a Player object, sends invader, an Alien object, a message.
class Player(object):
def blast(self, enemy): #enemy refers to the Alien object
print "The player blasts an enemy."
enemy.die() # invokes the Alien object's die() method
class Alien(object):
def die(self):
print "Good-bye, cruel universe.”
hero = Player()
invader = Alien()
hero.blast(invader)
# here an object is passed to a function of another object
class Card(object):
""" A playing card. """
RANKS = ["A", "2", "3", "4", "5", "6", "7",
"8", "9", "10", "J", "Q", "K"]
SUITS = ["c", "d", "h", "s"] #class attributes
def __init__(self, rank, suit):
self.rank = rank #object attributes
self.suit = suit
def __str__(self):
reply = self.rank + self.suit
return reply
card1 = Card("A", "d")
# A Card object with rank "A" and suit "d" is the ace of diamonds
print card1
class Hand(object):
""" A hand of playing cards. """
def __init__(self):
self.cards = [] #attribute – a list of Card objects
def __str__(self): #special function, returns string for entire hand
if self.cards:
reply = ""
for card in self.cards:
reply += str(card) + " "
else:
reply = "<empty>"
return reply
def add(self, card): #adds card to list of cards
self.cards.append(card)
def give(self, card, other_hand):
self.cards.remove(card)
other_hand.add(card)
my_hand = Hand()
card1 = Card("A", "c")
card1 = Card("2", "c")
my_hand.add(card1)
my_hand.add(card2)
print my_hand # Ac 2c
your_hand = Hand()
my_hand.give(card1, your_hand)
my_hand.give(card2, your_hand)
print your_hand # Ac 2c
https://www.slideserve.com/giacomo-rollins/guide-to-programming-with-python
sec_rgy_attr_lookup_no_expand - Reads a specified object's attribute(s), without expanding attribute sets into individual member attributes
#include <dce/sec_rgy_attr.h>

void sec_rgy_attr_lookup_no_expand(
    sec_rgy_handle_t context,
    sec_rgy_domain_t name_domain,
    sec_rgy_name_t name,
    sec_attr_cursor_t *cursor,
    unsigned32 num_attr_keys,
    unsigned32 space_avail,
    uuid_t attr_keys[],
    unsigned32 *num_returned,
    sec_attr_t attr_sets[],
    unsigned32 *num_left,
    error_status_t *status);

Only attribute sets that the caller is authorized to see are returned.
- space_avail
An unsigned 32-bit integer that specifies the size of the attrs_sets array.
- attr_keys[]
An array of values of type uuid_t that specify the UUIDs of the attribute sets to be returned. The size of the attr_keys array is determined by the num_attr_keys parameter.
Input/Output
- cursor
A pointer to a sec_attr_cursor_t. As an input parameter, cursor is a pointer to a sec_attr_cursor_t that is initialized by the sec_rgy_attr_cursor_init() routine. As an output parameter, cursor is a pointer to a sec_attr_cursor_t that is positioned past the attribute sets returned in this call.
Output
- num_returned
A pointer to a 32-bit integer that specifies the number of attribute sets returned in the attrs array.
- attr_sets
An array of values of type sec_attr_t that contains the attribute sets retrieved by UUID. The size of the array is determined by space_avail and the length by num_returned.
- num_left
A pointer to a 32-bit unsigned integer that supplies the number of attribute sets that were found but could not be returned because of space constraints in the attr_sets buffer. To ensure that all the attributes will be returned, increase the size of the attr_sets array by increasing the size of space_avail and num_returned.
- status
A pointer to the completion status. On successful completion, the routine returns error_status_ok. Otherwise, it returns an error.
The sec_rgy_attr_lookup_no_expand() routine reads attribute sets. This routine is similar to the sec_rgy_attr_lookup_by_id() routine with one exception: for attribute sets, sec_rgy_attr_lookup_by_id() expands attribute sets and returns a sec_attr_t for each member in the set. This call does not. Instead it returns a sec_attr_t for the set instance only. The sec_rgy_attr_lookup_no_expand() routine is useful for programmatic access.
cursor is a cursor of type sec_attr_cursor_t that establishes the point in the attribute set list from which the server should start processing the query. Use the sec_rgy_attr_cursor_init() function to initialize cursor. If cursor is uninitialized, the server begins processing the query with the first attribute that satisfies the search criteria.
The num_left parameter contains the number of attribute sets that were found but could not be returned because of space constraints of the attr_sets array. (Note that this number may be inaccurate if the target server allows updates between successive queries.) To obtain all of the remaining attribute sets, set the size of the attr_sets array so that it is large enough to hold the number of attributes listed in num_left.
Permissions Required

The sec_rgy_attr_lookup_no_expand() ...

- sec_rgy_server_unavailable
Server is unavailable.
- sec_attr_unauthorized
Unauthorized to perform this operation.
Functions:
sec_rgy_attr_lookup_by_id(), sec_rgy_attr_lookup_by_name().
http://pubs.opengroup.org/onlinepubs/9696989899/sec_rgy_attr_lookup_no_expand.htm
Started a new gaming project and I'm faced with a small problem which I can't think of a clever function.
The issue is the health bar of characters; they're always green in color (I know Sikuli probably doesn't see colors). When the character loses health, the green color depletes from the health bar.
Now these health bars can change positions as the character moves up or down, this is no problem because I can use region.
How can I make sure Sikuli clicks the bar with the lowest amount of health, or the colour green? The health bars can also change in shape, e.g. the one at the very top can look like the 3rd one... Really stumped :S
This is what the health bars look like:
https:/
Any suggestion is greatly appreciated!
Question information
- Language: English
- Status: Solved
- For: Sikuli
- Assignee: No assignee
- Solved: 2019-09-21
- Last query: 2019-09-21
- Last reply: 2019-09-21
Not on a PC atm going to check as soon as I get home, thanks RaiMan. As always, very prompt in your response!
Just got a chance to try this. I think I'm missing the point and require some more explanation. If you have the time, can you show me the function for this scenario:
Let's say I'm the healer character in the game, and I need to scan who has the lowest amount of health comparing all of those 4 health bars and then click it.
The last sentence I should have said comparing who has the least of "GREEN" health bar
ok, but might take till somewhen tomorrow.
thanks very much! This will be amazing
ok, this is all I can do for you - other priorities.
Based on the given image shown in browser Opera
The offsets have been manually evaluated in IDE Preview.
Not all bar images in your shot show the left end, so I decided to evaluate right to left.
def init():
    switchApp(
    pat1R = Pattern(
    pat1L = Pattern(
    offsetR = pat1R.getTargetOffset()
    offsetL = pat1L.getTargetOffset()
    length = offsetR.x - offsetL.x
    hover(pat1R)
    start1 = Mouse.at()
    pat2 = Pattern(
    start2 = start1.
    pat3 = Pattern(
    start3 = start1.
    pat4 = Pattern(
    start4 = start1.
    return (length, start1, start2, start3, start4)

def evalHealth(start):
    for n in range(length + 1):
        color = start.offset(n, 0).getColor().getGreen()
        if color > 50: break
    return 100 - int(n * 100.0 / length)

length, start1, start2, start3, start4 = init()
print evalHealth(start1)
hover(start2)
print evalHealth(start2)
hover(start3)
print evalHealth(start3)
hover(start4)
print evalHealth(start4)
THANK YOU!! This did it!!
You might use the Location.getColor() feature.
a sample with the most left bar in your image:
"img.png" ).targetOffset( -24,1) "img.png" ).targetOffset( 21,1) ffset() offsetR. x * 2, 0) n,0).getColor( ).getGreen( )
switchApp("opera")
img = Pattern(
patR = Pattern(
offsetR = patR.getTargetO
hover(img)
start = Mouse.at()
end = start.offset(
for n in range(offsetR.x * 2 + 10):
print start.offset(
To get specific points in an image (as here the left side middle and the right side middle of the bar) you can use the image Preview feature in the IDE.
There are about 40 pixels to check.
If you use the check-the-middle algorithm, you would have your result latest with 6 checks in a loop:
- test the middle of the bar
- if it is black, test the middle of the left half of the bar
- if it is not black, test the middle of the right half of the bar
- ... hope you got it ;-)
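The check-the-middle idea above is a binary search over pixel positions. A sketch with a stand-in probe function (is_green is hypothetical, standing in for a Location.getColor() check in Sikuli):

```python
def find_health_edge(length, is_green):
    """Binary-search the boundary between green and non-green pixels.

    is_green(x) stands in for probing the pixel color at offset x
    (e.g. via Location.getColor() in Sikuli). Returns the number of
    green pixels, i.e. the remaining health in pixels.
    """
    lo, hi = 0, length  # invariant: pixels < lo are green, pixels >= hi are not
    while lo < hi:
        mid = (lo + hi) // 2
        if is_green(mid):
            lo = mid + 1   # green here: the edge is to the right
        else:
            hi = mid       # not green: the edge is here or to the left
    return lo

# A 40-pixel bar at 60% health: pixels 0..23 are green.
health = find_health_edge(40, lambda x: x < 24)
```

For a 40-pixel bar this converges in about log2(40), roughly 6 probes, matching the estimate above.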
https://answers.launchpad.net/sikuli/+question/684034
throw (C# Reference)

A throw statement can be used in a catch block to re-throw the exception that the catch block caught. In this case, the throw statement does not take an exception operand. For more information and examples, see try-catch (C# Reference) and How to: Explicitly Throw Exceptions.
This example demonstrates how to throw an exception using the throw statement.
public class ThrowTest2
{
    static int GetNumber(int index)
    {
        int[] nums = { 300, 600, 900 };
        if (index >= nums.Length)
        {
            throw new IndexOutOfRangeException();
        }
        return nums[index];
    }

    static void Main()
    {
        int result = GetNumber(3);
    }
}
/*
Output:
The System.IndexOutOfRangeException exception occurs.
*/
See the examples in try-catch (C# Reference) and How to: Explicitly Throw Exceptions.
For more information, see the C# Language Specification. The language specification is the definitive source for C# syntax and usage.
https://msdn.microsoft.com/en-US/library/1ah5wsex(v=vs.120).aspx
Got it! Darn single quotes.....
$env:Path += ";c:\Program Files\lcpython15";
$env:PATHEXT += ";.py";
$arg1 = "Test3"
$arg2 = "Testing"
$arg3 = 'c:\ProgramData\set_cust_attr.py'
python $arg3 $arg1 $arg2
Use subprocess module:
import subprocess
args = [subject, body]
subprocess.call(['python','SendMail.py'] + args)
Inside SendMail.py use sys.argv:
import sys
subject, body = sys.argv[1:]
You should change directory within the same command:
cmd = "/path/to/executable/executable"
outputdir = "/path/to/output/"
subprocess.call("cd {} && {}".format(outputdir, cmd), shell=True).
optout is a command like any other, and so must be preceded by any local
modifications to the environment. The command that optout runs will inherit
that environment.
CC=${BUILD_TOOL_CC} optout ./configure ${ZLIB_CONFIGURE_OPT}
--prefix=${CURR_DIR}/${INSTALL_DIR}
By the way, this is just one of the problems you are likely to encounter
with your optout function. You cannot run arbitrary command lines in that
fashion, only a simple command followed by zero or more arguments (and I
would expect there are some exceptions to even that restricted set).
Use "$@" in your for loop:
for i in "$@"
do
sed -i.bak 's/~/~
/g' "$i"
done
Though I am not really sure if sed is doing what you've described here.
Always quote the parameter expansion. The value of $video_title is being
split into multiple words, which confuses the [ command.
if [ -f "$video_title.$ext1" ]
then
ffmpeg -i "$video_title.$ext1" ...
escapeshellarg() should be used on each argument, not the command as a
whole.
exec(escapeshellarg('/bin/bash') . ' ' .
escapeshellarg("/home/monu/myBash.sh") . ' ' . escapeshellarg(...));"
$*, unquoted, expands to two words. You need to quote it so that someApp
receives a single argument.
someApp "$*"
It's possible that you want to use $@ instead, so that someApp would
receive two arguments if you were to call b.sh as
b.sh 'My first' 'My second'
With someApp "$*", someApp would receive a single argument My first My
second. With someApp "$@", someApp would receive two arguments, My first
and My second.
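The difference is easy to check with a tiny counting function (a sketch; count_args stands in for someApp):

```shell
# count_args stands in for someApp: it reports how many arguments it got.
count_args() { echo "$#"; }

set -- 'My first' 'My second'   # simulate b.sh's two arguments

star=$(count_args "$*")   # joined into one word: "My first My second"
at=$(count_args "$@")     # preserved as two separate words

echo "\"\$*\" -> $star argument(s), \"\$@\" -> $at argument(s)"
```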
while read -r account abelpass entapass
You are reading abelpass & entapass but passing password1 and password2
! Didn't you meant to use
while read -r account password1 password2
?
Just like any other simple command, [ ... ] or test requires spaces between
its arguments.
if [ "$#" -ne 1 ]; then
echo "Illegal number of parameters"
fi
Or
if test "$#" -ne 1; then
echo "Illegal number of parameters"
fi
When in Bash, prefer using [[ ]] instead as it doesn't do word splitting
and pathname expansion to its variables that quoting may not be necessary
unless it's part of an expression.
[[ $# -ne 1 ]]
It also has some other features like unquoted condition grouping, pattern
matching (extended pattern matching with extglob) and regex matching.
The following example checks if arguments are valid. It allows a single
argument or two.
[[ ($# -eq 1 || ($# -eq 2 && $2 == <glob pattern>))
&& $1 =~ <regex pattern> ]]
For pure arithmetic
You can have bash pass all the arguments to the R script using something
like this for a bash script:
#!/bin/bash
Rscript /path/to/R/script --args "$*"
exit 0
You can then choose how many of the arguments from $* need to be discarded
inside of R. want to iterate your list of filenames with shift
after you get your arguments,
shift $(( OPTIND-1 ))
while [ -f $1 ]
do
#do whatever you want with the filename in $1.
shift
done
You can use withArgs() of ScriptBootstrapActionConfig, in Java you can do
as follows, I am sure there is a similar method for C#:
ScriptBootstrapActionConfig bootstrapActionScript = new
ScriptBootstrapActionConfig()
.WithPath(configBootStrapScriptLocation)
.WithArgs(List<String> args);
Add $@ to your call in myscript, like this: ../scripts/myscript.php $@
$@ passes all the command line parameters the calling script had.
Reference:
Instead of simulating an interactive session, I would:
setup ssh key based authentication - this removes the need of entering the password
then use:
ssh root@kroute "logread | grep asd"
MySQL doesn't support session variables in a LOAD DATA INFILE statement
like that. This has been recognized as a feature request for quite some
time (), but the feature has never
been implemented.
I would recommend using mysqlimport instead of doing the complex steps with
mysql that you're doing. The file's name must match the table's name, but
you can trick this with a symbolic link:
#!/bin/bash
for file in *.symbol
do
path="/home/qz/$file"
ln -s -f "$path" /tmp/lasp.txt
mysqlimport -u qz -h compute-0-10 -pabc
--local --columns "score,symbols" /tmp/lasp.txt
done
rm -f /tmp/lasp.txt
PS: No need use `ls`. As you can see above, filename expansion works fine.
Have your script arrArg.sh like this:
#!/bin/bash
arg1="$1"
arg2=("${!2}")
arg3="$3"
arg4=("${!4}")
echo "arg1=$arg1"
echo "arg2 array=${arg2[@]}"
echo "arg2 #elem=${#arg2[@]}"
echo "arg3=$arg3"
echo "arg4 array=${arg4[@]}"
echo "arg4 #elem=${#arg4[@]}"
Now setup your arrays like this in a shell:
arr=(ab 'x y' 123)
arr2=(a1 'a a' bb cc 'it is one')
And pass arguments like this:
. ./arrArg.sh "foo" "arr[@]" "bar" "arr2[@]"
Above script will print:
arg1=foo
arg2 array=ab x y 123
arg2 #elem=3
arg3=bar
arg4 array=a1 a a bb cc it is one
arg4 #elem=5
Note: It might appear weird that I am executing script using . ./script
syntax. Note that this is for executing commands of the script in the
current shell environment.
Q. Why current shell environment and why not a sub shell?
A. Bec
I think you just want to add the -H option to grep:
-H, --with-filename
Print the file name for each match. This is the default when
there is more than one file to search.
you need to use array:
#!/usr/bin/env bash
BACKUP_EXCLUDES=()
function exclude
{
while
(( $# ))
do
BACKUP_EXCLUDES+=(--exclude="$1")
shift
done
}
exclude /proc /dev /mnt /media
exclude "/lost+found"
rsync -ruvz "${BACKUP_EXCLUDES[@]}" / /some/backup/path
the bash explanation: please check @janos answer
the zsh explanation (if it was Zsh): when you used a string variable it was
like you passed --exclude=""/proc" --exclude="/media" ..." - so it was
treated as long path with spaces - which never matched.
If you insist on using Perl, look into File::Find & friends. Though if
you're on a *nix box you should probably be aware of find(1) for tasks this
common.
try: find /release/logs -name test.log-* -mtime +7 -delete
If you want to test it out 1st, leave off the -delete flag & it will
just print a list of the files it would have otherwise deleted.
I would try doing something like this.
os.system("appcfg.py arg1 arg2 arg3")
I would look into this portion of the os documentation.
Good luck.
http://www.w3hello.com/questions/use-bash-script-to-pass-arguments-to-a-python-script
Tcl_DetachPids, Tcl_ReapDetachedProcs, Tcl_WaitPid - manage child processes in background
#include <tcl.h>
Tcl_DetachPids(numPids, pidPtr)
Tcl_ReapDetachedProcs()
Tcl_Pid
Tcl_WaitPid(pid, statPtr, options)
int numPids (in) Number of process ids contained in the
array pointed to by pidPtr.
int *pidPtr (in) Address of array containing numPids
process ids.
Tcl_Pid pid (in) The id of the process (pipe) to wait for.
int* statPtr (out) The result of waiting on a process (pipe).
Either 0 or ECHILD.
int options The options controlling the wait. WNOHANG
specifies not to wait when checking the
process.
_________________________________________________________________
Tcl_DetachPids and Tcl_ReapDetachedProcs provide a mechanism for
managing subprocesses that are running in background. These procedures
are needed because the parent of a process must eventually invoke the
waitpid kernel call (or one of a few other similar kernel calls) to
wait for the child to exit. Until the parent waits for the child, the
child's state cannot be completely reclaimed by the system. If a
parent't necessary for any
Tcl Tcl_DetachPids(3)
http://www.syzdek.net/~syzdek/docs/man/.shtml/man3/Tcl_DetachPids.3.html
Polling from a ui.View (built in timers in ui.Views)
@omz, some feedback on the update feature, all of it positive:
- Very stable and consistent. Stops when the view is closed, no threading hassles.
- Very intuitive to use. Changing the update_interval to 0.0 stops updates, and positive numbers start them again.
With my limited threading skills, I was unable to create stable and predictable UI animations with either ui.delay or custom Threads. With update, no issues. This is my vote for moving the feature out of beta.
@mikael, thanks; all credit to @omz. When I had to do something threaded or in a loop, I never felt confident about it. I have a small thing now where it's just a label update to show when something expires. It's so nice and simple with the update mechanism.
My experience has been same as yours. Very stable.
@mikael, I mentioned something in the GitHub issues area about ui.TableViewCell. @omz said just add your view to the cell. I had forgotten @JonB had helped me with this a long time ago. Anyway, I was playing around. The below maybe is not pretty, but I find it interesting and it shows off a few things, also how well update works. Well, I think it does anyway.
EDIT: to see the cool stuff I think you have to tap a cell and also scroll. You can see how things are getting suspended when you scroll. I think its nice
import ui
from random import choice

_color_list = ['purple', 'orange', 'deeppink', 'lightblue', 'cornflowerblue',
               'red', 'yellow', 'green', 'pink', 'navy', 'teal', 'olive',
               'lime', 'maroon', 'aqua', 'silver', 'fuchsia']

class MyCustomCell(ui.View):
    def __init__(self, parent, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.cell = parent
        self.tableview = None
        self.blink_count = 0
        self.lb = None
        self.frame = self.cell.frame
        self.flex = 'wh'
        self.width -= 10
        self.x = 5
        self.height -= 10
        self.y = 5
        self.alpha = .5
        self.corner_radius = 6
        # this allows the touch events to pass through my subview
        self.touch_enabled = False
        self.update_interval = .2
        lb = ui.Label(frame=(0, 0, 24, 24), bg_color='black',
                      text_color='white', alignment=ui.ALIGN_CENTER)
        lb.center = self.center
        lb.corner_radius = 12
        self.lb = lb
        self.add_subview(lb)

    def rect_onscreen(self):
        '''
        Have to write this method. Would be nice if this was built in,
        like ui.TableView.is_visible for example. I know it's just some
        rect math, but it means you need to save extra references etc.
        to calculate it yourself.
        '''
        return True

    def update(self):
        if not self.tableview:
            return
        # I did not implement this yet. A little drunk and having a
        # party today. But gives the idea...
        if not self.rect_onscreen():
            return
        if self.blink_count == 98:
            self.update_interval = 0
        self.blink_count += 1
        self.lb.text = str(self.blink_count)
        self.bg_color = choice(_color_list)

def create_cell():
    '''
    Create and return a ui.TableViewCell. We add a custom ui.View to the
    TableViewCell.content_view. This means our view is sitting on top of
    the normal TableViewCell contents. All is still there. Also create
    an attr in the cell at runtime that points to our custom class. I
    guess this can be done many ways. I choose this way for the example.
    To me it's at least clear for access.
    '''
    cell = ui.TableViewCell()
    myc = MyCustomCell(cell)
    cell.content_view.add_subview(myc)
    cell.my_cell = myc
    return cell

class MyDataSource(object):
    def __init__(self, data):
        self.data = data
        self.sel_item = 0
        self.cells = [create_cell()]

    # (a method was lost in extraction here; its surviving lines show it
    #  saved the tableview on our custom view before returning the cell)
        # just showing we can access our class from the my_cell attr
        # we added. In this case I want to save the tableview attr
        cell.my_cell.tableview = tableview
        return cell

    def tableview_did_select(self, tableview, section, row):
        # Called when a row was selected.
        self.select_row(row)

# (more of the post's code was lost in extraction; only the tail of a
#  names list and the final call survive)
#  ...', 'Gaew', 'Pete', 'Ole', 'Christian', 'Mary', 'Susan', 'Juile',
#  'Simone', 'Terry', 'Michael', 'James'])
v.present(style='sheet', animated=False)
https://forum.omz-software.com/topic/4109/polling-from-a-ui-view-built-in-timers-in-ui-views/?page=2
SOLVED Handling selection in a Custom Tool
Is there a standard way to handle the selection/unselection of points in a custom tool?
I mean something like the EditingTool behaves. When I subclass my custom tool from the EditingTool, the selection behaviour is there, but I don't want the editing/moving of points in my tool.
If I subclass the BaseEventTool, I know how to find out where clicking/dragging starts and stops, but is there an easy way to find out the affected points to manage the selection myself? Something like CurrentGlyph.getPointsInsideBox((x0, y0), (x1, y1)) perhaps?
there is no standard way to set the point selection
you can use fontTools
pointInRect
from fontTools.misc.arrayTools import pointInRect

for contour in CurrentGlyph():
    for point in contour.points:
        print pointInRect((point.x, point.y), (minx, miny, maxx, maxy))
take a look at the PolygonSelectionTool
good luck!
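As an aside, pointInRect's containment test is simple enough to inline when fontTools isn't available. This plain-Python sketch (the function name is mine, not RoboFont's) mirrors fontTools' (xMin, yMin, xMax, yMax) rectangle convention, with boundary points counting as inside:

```python
def point_in_rect(point, rect):
    # same test as fontTools.misc.arrayTools.pointInRect:
    # rect is (xMin, yMin, xMax, yMax); boundary points count as inside
    x, y = point
    xmin, ymin, xmax, ymax = rect
    return xmin <= x <= xmax and ymin <= y <= ymax

# points falling inside a selection marquee
marquee = (0, 0, 100, 100)
pts = [(10, 10), (150, 50), (100, 100)]
print([p for p in pts if point_in_rect(p, marquee)])  # [(10, 10), (100, 100)]
```

In a tool, the marquee would come from the drag start/stop points the BaseEventTool gives you, and the selected points would then have their selected flag set.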
Great, thanks
https://forum.robofont.com/topic/247/handling-selection-in-a-custom-tool/1
Back to: Java Tutorials For Beginners and Professionals
Thread Life Cycle in Java with Examples
In this article, I am going to discuss Thread Life Cycle in Java with examples. Please read our previous article where we discussed Thread Class in Java. A thread can exist in different states. Just because a thread's start() method has been called doesn't mean that the thread has access to the CPU and can start executing straight away. Several factors determine how it will proceed. So, at the end of this article, you will understand the life cycle of a Thread in Java.
Thread Life Cycle in Java
A thread goes through various stages in its life cycle. According to Sun, there are only 4 states in the thread life cycle in Java: new, runnable, non-runnable, and terminated; there is no running state. But to understand threads better, we explain it using 5 states. The life cycle of a thread in Java is controlled by the JVM.
For better understanding, please have a look at the diagram below. The Java thread states are as follows:
- Newborn
- Runnable
- Running
- Blocked (Non-Runnable)
- Dead (Terminated)
Following are the stages of the life cycle –
New – A new thread begins its life cycle in the new state. It remains in this state until the program starts the thread. It is also referred to as a born thread. In simple words, a thread has been created, but it has not yet been started. A thread is started by calling its start() method.
Runnable – The thread is in the runnable state after the invocation of the start() method, but the thread scheduler has not selected it to be the running thread. A thread starts life in the Ready-to-run state by calling the start() method and waits for its turn. The thread scheduler decides which thread runs and for how long.
Running – When the thread starts executing, then the state is changed to a “running” state. The scheduler selects one thread from the thread pool, and it starts executing in the application.
Dead – This is the state when the thread is terminated. A running thread, as soon as it completes processing, moves to the dead state. Once a thread is in this state, it cannot run again.
Blocked (Non-runnable state):
This is the state when the thread is still alive but is currently not eligible to run. A thread that is blocked waiting for a monitor lock is in this state.
A running thread can transit to one of the non-runnable states depending on the situation. A thread remains in a non-runnable state until a special transition occurs. A thread doesn’t go directly to the running state from a non-runnable state but transits first to the Ready-to-run state.
The non-runnable states can be characterized as follows:
- Sleeping:- The thread sleeps for a specified amount of time.
- Blocked for I/O:- The thread waits for a blocking operation to complete.
- Blocked for Join completion: – The thread awaits completion of another thread.
- Waiting for notifications: – The thread awaits notification from another thread.
- Blocked for lock acquisition: – The thread waits to acquire the lock of an object.
The JVM executes threads based on their priority and scheduling.
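The states above correspond to the constants of java.lang.Thread.State, which you can observe directly with getState(). A small demonstration (the class name is mine, not from the article):

```java
public class StateProbe {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200);        // thread sits in TIMED_WAITING here
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(50);                 // give it time to reach sleep()
        System.out.println(t.getState()); // typically TIMED_WAITING while sleeping
        t.join();                         // wait for it to finish
        System.out.println(t.getState()); // TERMINATED after run() returns
    }
}
```

Note that Thread.State folds the tutorial's "running" and "ready-to-run" into the single RUNNABLE state, and splits "blocked" into BLOCKED, WAITING, and TIMED_WAITING.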
Example: Thread Life Cycle
Here we give a simple example of the Thread life cycle. In this example, we will create a Java class where we create a Thread and then use some of the methods that represent its life cycle.
In this example, we have used the methods and indicated their purposes with comment lines. We have created two Thread subclasses named A1 and B. In both classes we override the run() method and execute some statements. Then we have created a ThreadLifeCycleDemo.java class where we create instances of both classes and use the start() method to move the threads into the running state, the yield() method to hand control to another thread, and the sleep() method to move into a blocked state. After successful completion of the run() method, a Thread automatically moves into the dead state.
A1.java
class A1 extends Thread {
    public void run() {
        System.out.println("Thread A");
        System.out.println("i in Thread A ");
        for (int i = 1; i <= 5; i++) {
            System.out.println("i = " + i);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        System.out.println("Thread A Completed.");
    }
}
B.java
class B extends Thread {
    public void run() {
        System.out.println("Thread B");
        System.out.println("i in Thread B ");
        for (int i = 1; i <= 5; i++) {
            System.out.println("i = " + i);
        }
        System.out.println("Thread B Completed.");
    }
}
ThreadLifeCycleDemo.java
public class ThreadLifeCycleDemo {
    public static void main(String[] args) {
        // Life cycle of a Thread

        // New state
        A1 threadA = new A1();
        B threadB = new B();
        // Both the above threads are in the new state

        // Runnable/running state of thread A
        threadA.start();

        // Hint to the scheduler to let another thread run;
        // yield() is static and always acts on the calling thread
        Thread.yield();

        // Blocked (timed waiting) state of the *main* thread;
        // sleep() is also static and sleeps the calling thread
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        threadB.start();
        System.out.println("Main Thread End");
    }
}
Output:
Thread Scheduler:
The JVM implements one of the following scheduling strategies:
Preemptive scheduling: If a thread with a higher priority than the currently running thread becomes runnable, the currently running thread is preempted (moved to the ready-to-run state) to let the higher-priority thread execute.
Time-sliced or Round-Robin scheduling: A running thread is allowed to execute for a fixed length of time, after which it moves to the Ready-to-run state to wait its turn to run again.
In the next article, I am going to discuss Thread Priority in Java with Examples. Here, in this article, I tried to explain the Thread Life Cycle in Java with examples. I hope you enjoyed this article.
https://dotnettutorials.net/lesson/thread-life-cycle-in-java/
XForms is intended to be used embedded within other markup. The most widely used host language, and the focus of this article, is XHTML. You need to write your XHTML documents following some guidelines to ensure a smooth experience across a variety of browsers such as Microsoft's® Internet Explorer, Mozilla's Firefox, X-Smiles, and Opera, to name a few. As of this writing, the only desktop browser that natively supports XForms is X-Smiles. Therefore, an add-on (sometimes referred to as a plugin) is needed for a browser to process XForms content. There are also solutions that convert XForms markup to ECMAScript and HTML, which are more widely supported in deployed browsers. See the resources for more information.
Though this article attempts to show a thorough solution for a variety of deployment configurations, not all scenarios and configurations can be covered. As new versions of browsers and XForms processors are released, along with new standards support, the solutions outlined in this article may no longer be valid. The approach taken here isolates these differences so that changes can be localized and broadly distributed.
The W3C published XForms 1.0 (Second Edition) as a Recommendation in March of 2006.
Guidelines for serving content
Next, there are certain considerations that need to be addressed when serving form documents from Web servers. This depends directly on what your deployment environment will look like. I'll outline some possibilities in the table below:
Table 1. Some browser and XForms processor options
As you can see, it can get complicated based on your choices. There are also possibilities for using server-side processors, like Chiba, that could work across all these browser platforms. I've also intentionally left off operating systems supported, which is another dimension to consider.
There are advantages and disadvantages to picking one deployment configuration over another. This article is not intended to touch on those but to leave that to you to determine based on your specific needs. This article is intended to focus on building XHTML + XForms content so that the content can be used with most of the configurations outlined above.
Let's take this deployment scenario for the purposes of this article:
- Mozilla Firefox with Mozilla XForms
- Microsoft Internet Explorer with formsPlayer
- X-Smiles
- All others will use FormFaces
In order to ensure this works, there are two requirements on the server side:
- Firefox requires the content to be served with the HTTP response header content type with a value of
application/xhtml+xml. To be technically accurate, it simply needs a content type that invokes its "standards" rendering mode. See the resources for more information on Firefox rendering.
- Internet Explorer requires the content to be served with HTTP response header Content-type with values of text/html or application/xhtml+xml. See the resources for more information on Internet Explorer's handling of content types.
Disclaimer: There are a variety of configuration parameters that are possible on different versions of Internet Explorer, and I'm sure there are combinations that will not work. See the resources for more details.
In order to serve your content from a Web server such as Apache Tomcat, simply
update the
web.xml configuration file as such:
Listing 1. Apache Tomcat web.xml configuration file
There are many other ways to set the Content-type header; it just depends on your Web server. To change the header from a Java™ Servlet, do the following:
Listing 2. Java Servlet setting HTTP header Content-type
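The body of Listing 2 did not survive extraction. As a stand-in sketch of the same idea using only the JDK (class name, path, and markup are mine; in a real Servlet the equivalent call is response.setContentType("application/xhtml+xml")), here is the Content-Type header being set with the built-in com.sun.net.httpserver:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class XFormsServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/form.xhtml", exchange -> {
            byte[] body = "<html xmlns=\"http://www.w3.org/1999/xhtml\"></html>"
                    .getBytes(StandardCharsets.UTF_8);
            // The header that switches Firefox into its XML/standards path:
            exchange.getResponseHeaders().set("Content-Type",
                    "application/xhtml+xml");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("listening on port "
                + server.getAddress().getPort());
        server.stop(0);
    }
}
```

Whatever server you use, the key point is the same: the header must be set before the response body is written.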
To help better understand possible configurations and requirements, see Table 2, below, for how content should be served:
Table 2. HTTP Content-type header settings
Guidelines for authoring content
Guideline 1: Use XHTML's namespace as the default namespace
If you plan on using Microsoft Internet Explorer with a browser add-on like formsPlayer or FormFaces, you'll need to author your XHTML so that its default XML namespace is that of XHTML. Why is this? Internet Explorer doesn't support XHTML, but you can use the HTML processor by making sure you use valid HTML tag names and no XML namespace prefixes. See the example below in Listing 3 for an example of this.
Listing 3. XHTML with default XML namespace
Hint:: Using a schema validator and schema-assisted editor will ease with developing well-formed and valid XHTML+XForms content. See the resources for more information.
Guideline 2: Add XML namespace declarations to XForms instances
Because of some of the limitations with providing XML within HTML, it is necessary to repeat certain namespace declarations on the XForms instance element. It will depend on what features you use and what namespaces your instances use. Namely these are the namespaces used for the XML instance, the XSI prefix (XML Schema instance) and XSD prefix (XML Schema). Keep this in mind if there are issues with some of your forms.
Handling activation and installation of XForms processors
In this section I'll outline a simple ECMAScript that can be included in all your
XHTML files to perform appropriate checks and take your defined action. The
script will be called
xforms-check.js. Using this
technique, form authoring becomes very simple. It only requires following the
guidelines outlined in this article and included in the script download.
So updating the sample we have above to include the script, it now becomes:
Listing 4. XHTML with browser check
NOTE: Before diving into the details of the client-side ECMAScript that is used,
a disclaimer about how the script determines which browser is used. Most sites have
scripts or guidelines for determining this. The scripts below take a very simple approach to identifying the browser by just testing against
navigator.appName.
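The dispatch the script performs can be boiled down to a pure function over navigator.appName. Processor names come from the deployment list above; the function name is mine, and the real xforms-check.js is structured differently (historically, Firefox reports "Netscape" as its appName):

```javascript
// Map navigator.appName to the XForms processor chosen for this deployment.
function pickXFormsProcessor(appName) {
  if (appName === "Netscape") return "Mozilla XForms";          // Firefox
  if (appName === "Microsoft Internet Explorer") return "formsPlayer";
  if (appName === "X-Smiles") return "native";                  // no add-on needed
  return "FormFaces";                                           // Opera, Safari, others
}

console.log(pickXFormsProcessor("Netscape"));  // Mozilla XForms
console.log(pickXFormsProcessor("Opera"));     // FormFaces
```

The exact appName string X-Smiles reports is an assumption here; as the article notes, appName sniffing is deliberately simplistic, and most sites use more robust detection.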
Now we'll look at each individual browser and what is needed to activate the processor. In some cases, we'll even go as far as installing the plug-in if it isn't installed.
First, let's start with Mozilla Firefox. Here is the portion needed:
Listing 5. Browser check for Firefox
First a browser test is performed for Firefox, and next there is a check to see if the Mozilla XForms add-on is already installed. If it is installed, we have nothing else to do so we return; otherwise, we invoke the installer with the appropriate plug-in XPI for XForms.
Microsoft Internet Explorer
Next we'll check for Internet Explorer and enable formsPlayer as such:
Listing 6. Browser check for Internet Explorer
In order to activate formsPlayer, an
<object> must be
used to link the CLSID to the application in the local Windows® registry. We can cause the installation of the plug-in by referencing the formsPlayer cabinet file.
See the resources for specifics on enabling formsPlayer.
Based on our deployment configuration, FormFaces will be used when the browser is
either Opera or Safari. To do this, a check is made for the appName and then the
formsfaces.js implementation is included into the HTML
DOM. Then instead of
xforms-check.js being referenced,
FormFaces.js is referenced. Below shows how the
<script> is dynamically added to accomplish this:
Listing 7. Browser check for Opera and Safari
Since X-Smiles supports XForms natively, by default we'll provide the user an alert or other corrective action such as:
Listing 8. Browser check for X-Smiles
alphaWorks XML Forms generator
If you're a user of this IBM® alphaWorks toolset, you'll realize that part of this article documents some of what the action "Convert for XForms Renderer->" does. It can simply change the default namespace, remove namespace prefixes, and lastly add the "xforms-check.js" script. This tool is also great for developing XForms applications. See the resources for more information.
This article has shown that by taking a few steps in authoring and hosting, it is possible to create XForms-based Web pages that not only work across browsers but can also install the necessary add-ons as needed.
Perhaps there will soon be a day when these browsers support XForms and XHTML natively, therefore simplifying the consumption of these Web applications by end users.
Information about download methods
Learn
- Internet Forms: Content Types: MozzIE handling of content types.
- Handling MIME types in Internet Explorer: Internet Explorer handling of content types.
- Mozilla XForms: Render your standards-compliant forms in Mozilla Firefox using this plug-in.
- MozzIE: an open-source plug-in that allows you to render XForms in Internet Explorer.
- FormsPlayer: a plug-in for Internet Explorer that renders XForms.
- X-Smiles: an open-source browser that natively supports XForms.
- Opera: a standards-based browser from Opera.
- Apple Safari: a standards-based browser from Apple.
- Chiba: a server-side XForms processor.
- FormFaces: a browser-based XForms processor utilizing JavaScript.
- XML Forms Generator: Create functional, standards-compliant forms with a click of the mouse using this Eclipse-based tool from alphaWorks.
- Visual XForms Designer: Check out the home page, with links to installation instructions, prerequisites, and the forum.
- Compound XML Document Toolkit: Explore other open-standard XML markups, including Scalable Vector Graphics (SVG), MathML, VoiceXML, and Synchronized Multimedia Integration Language (SMIL).
Discuss
- Participate in the discussion forum.
- Get answers to your questions about the Visual XForms Designer on its discussion forum.
http://www.ibm.com/developerworks/library/x-xformsxbrowser.html
1.4 ANSI/ISO C++
The American National Standards Institute (ANSI) and the International Standards Organisation (ISO) provide "official" and generally accepted standard definitions of many programming languages, including C and C++. Such standards are important. A program written only in ANSI/ISO C++ is guaranteed to run on any computer whose supporting software conforms to the standard. In other words, the standard guarantees that standard-compliant C++ programs are portable. In practice most versions of C++ include ANSI/ISO C++ as a core language, but also include extra machine-dependent features to allow smooth interaction with different computers' operating systems. These machine-dependent features should be used sparingly. Moreover, when parts of a C++ program use non-compliant components of the language, these should be clearly marked, and as far as possible separated from the rest of the program, so as to make modification of the program for different machines and operating systems as easy as possible. The four most significant revisions of the C++ standard are C++98 (1998), C++03 (2003), C++11 (2011) and C++14 (2014). The next iteration is currently expected in 2017. Of course, it can be a challenging task for software engineers, compiler writers and lecturers (!) to keep track of all the revisions that appear in each major version of the standard. You may wish to read further about the ongoing efforts that are being made to make the widely-used g++ compiler compliant with the C++11 standard and compliant with the C++14 standard.
(BACK TO COURSE CONTENTS)
1.5 An Example C++ Program
Here is an example of a complete C++ program:
// The C++ compiler ignores comments which start with
// double slashes like this, up to the end of the line.

/* Comments can also be written starting with a slash followed
   by a star, and ending with a star followed by a slash. As you
   can see, comments written in this way can span more than one
   line. */

/* Programs should ALWAYS include plenty of comments! */

/* Author: Rob Miller and William Knottenbelt
   Program last changed: 30th September 2001 */

/* This program prompts the user for the current year, the user's
   current age, and another year. It then calculates the age that
   the user was or will be in the second year entered. */

#include <iostream>
using namespace std;

int main()
{
    int year_now, age_now, another_year, another_age;

    cout << "Enter current year then press RETURN.\n";
    cin >> year_now;
    cout << "Enter your current age in years.\n";
    cin >> age_now;
    cout << "Enter the year for which you wish to know your age.\n";
    cin >> another_year;

    another_age = another_year - (year_now - age_now);

    if (another_age >= 0) {
        cout << "Your age in " << another_year << ": ";
        cout << another_age << "\n";
    }
    else {
        cout << "You weren't even born in ";
        cout << another_year << "!\n";
    }

    return 0;
}
This program illustrates several general features of all C++ programs. It begins (after the comment lines) with the statement
#include <iostream>
This statement is called an include directive. It tells the compiler and the linker that the program will need to be linked to a library of routines that handle input from the keyboard and output to the screen (specifically the cin and cout statements that appear later). The header file “iostream” contains basic information about this library. You will learn much more about libraries of code later in this course.
After the include directive is the line:
using namespace std;
This statement is called a using directive. The latest versions of the C++ standard divide names (e.g. cin and cout) into subcollections of names callednamespaces. This particular using directive says the program will be using names that have a meaning defined for them in the std namespace (in this case theiostream header defines meanings for cout and cin in the std namespace).
Some older C++ compilers do not support namespaces. In this case you can use the older form of the include directive (that does not require a using directive, and places all names in a single global namespace):
#include <iostream.h>
Some of the legacy code you encounter in industry may be written using this older style for headers.
Because the program is short, it is easily packaged up into a single list of program statements and commands. After the include and using directives, the basic structure of the program is:
int main()
{
    First statement;
    ...
    ...
    Last statement;

    return 0;
}
All C++ programs have this basic “top-level” structure. Notice that each statement in the body of the program ends with a semicolon. In a well-designed large program, many of these statements will include references or calls to sub-programs, listed after the main program or in a separate file. These sub-programs have roughly the same outline structure as the program here, but there is always exactly one such structure called main. Again, you will learn more about sub-programs later in the course.
At the end of the main program, the line
return 0;
means “return the value 0 to the computer’s operating system to signal that the program has completed successfully”. More generally, return statements signal that the particular sub-program has finished, and return a value, along with the flow of control, to the program level above. More about this later.
Our example program uses four variables:
year_now, age_now, another_year and another_age
Program variables are not like variables in mathematics. They are more like symbolic names for “pockets of computer memory” which can be used to store different values at different times during the program execution. These variables are first introduced in our program in the variable declaration
int year_now, age_now, another_year, another_age;
which signals to the compiler that it should set aside enough memory to store four variables of type “int” (integer) during the rest of the program execution. Hence variables should always be declared before being used in a program. Indeed, it is considered good style and practice to declare all the variables to be used in a program or sub-program at the beginning. Variables can be one of several different types in C++, and we will discuss variables and types at some length later.
(BACK TO COURSE CONTENTS)
1.6 Very Simple Input, Output and Assignment
After we have compiled the program above, we can run it. The result will be something like
Enter current year then press RETURN.
1996
Enter your current age in years.
36
Enter the year for which you wish to know your age.
2011
Your age in 2011: 51
The first, third, fifth and seventh lines above are produced on the screen by the program. In general, the program statement
cout << Expression1 << Expression2 << ... << ExpressionN;
will produce the screen output
Expression1Expression2...ExpressionN
The series of statements
cout << Expression1;
cout << Expression2;
...
...
cout << ExpressionN;
will produce an identical output. If spaces or new lines are needed between the output expressions, these have to be included explicitly, with a " " or a "\n" respectively. The expression endl can also be used to output a new line, and in many cases is preferable to using "\n" since it has the side-effect of flushing the output buffer (output is often stored internally and printed in chunks when sufficient output has been accumulated; using endl forces all output to appear on the screen immediately).
The numbers in bold in the example screen output above have been typed in by the user. In this particular program run, the program statement
cin >> year_now;
has resulted in the variable year_now being assigned the value 1996 at the point when the user pressed RETURN after typing in "1996". Programs can also include assignment statements, a simple example of which is the statement
another_age = another_year - (year_now - age_now);
Hence the symbol = means “is assigned the value of”. (“Equals” is represented in C++ as ==.)
(BACK TO COURSE CONTENTS)
1.7 Simple Flow of Control
The last few lines of our example program (other than “return 0“) are:
if (another_age >= 0) {
    cout << "Your age in " << another_year << ": ";
    cout << another_age << "\n";
}
else {
    cout << "You weren't even born in ";
    cout << another_year << "!\n";
}
The “if … else …” branching mechanism is a familiar construct in many procedural programming languages. In C++, it is simply called an if statement, and the general syntax is
if (condition) {
    Statement1;
    ...
    ...
    StatementN;
}
else {
    StatementN+1;
    ...
    ...
    StatementN+M;
}
The “else” part of an “if statement” may be omitted, and furthermore, if there is just one Statement after the “if (condition)”, it may be simply written as
if (condition) Statement;
It is quite common to find “if statements” strung together in programs, as follows:
... ...; } ... ...
This program fragment has quite a complicated logical structure, but we can confirm that it is legal in C++ by referring to the syntax diagram for “if statements”. In such diagrams, the terms enclosed in ovals or circles refer to program components that literally appear in programs. Terms enclosed in boxes refer to program components that require further definition, perhaps with another syntax diagram. A collection of such diagrams can serve as a formal definition of a programming language’s syntax (although they do not help distinguish between good and bad programming style!).
Below is the syntax diagram for an “if statement”. It is best understood in conjunction with the syntax diagram for a “statement”. In particular, notice that the diagram doesn’t explicitly include the “;” or “{}” delimiters, since these are built into the definition (syntax diagram) of “statement”.
1.7.1 Syntax diagram for an If Statement
The C++ compiler accepts the program fragment in our example by counting all of the bold text in
... ...; } ... ...
as the single statement which must follow the first else.
(BACK TO COURSE CONTENTS)
1.8 Preliminary Remarks about Program Style
As far as the C++ compiler is concerned, the following program is exactly the same as the program in Section 1.5:
#include <iostream>
using namespace std;
int main(){int year_now,age_now,another_year,another_age;cout<<
"Enter current year then press RETURN.\n";cin>>year_now;cout<<
"Enter your current age in years.\n";cin>>age_now;cout<<
"Enter the year for which you wish to know your age.\n";cin>>
another_year;another_age=another_year-(year_now-age_now);
if(another_age>=0){cout<<"Your age in "<<another_year<<": "<<
another_age<<"\n";}else{cout<<"You weren't even born in "<<
another_year<<"!\n";}return 0;}
However, the lack of program comments, spaces, new lines and indentation makes this program unacceptable. There is much more to developing a good programming style than learning to lay out programs properly, but it is a good start! Be consistent with your program layout, and make sure the indentation and spacing reflects the logical structure of your program. It is also a good idea to pick meaningful names for variables; “year_now“, “age_now“, “another_year” and “another_age” are better names than “y_n“, “a_n“, “a_y” and “a_a“, and much better than “w“, “x“, “y” and “z“. Although popular, “temp” and “tmp” are particularly uninformative variable names and should be avoided. Remember that your programs might need modification by other programmers at a later date.
(BACK TO COURSE CONTENTS)
1.9 Summary
We have briefly and informally introduced a number of topics in this lecture: variables and types, input and output, assignment, and conditional statements (“if statements”). We will go into each of these topics more formally and in more detail later in the course. The material here is also covered in more detail in Savitch, Chapter 1, Section 2.4 and Section 2.5.
https://tfetimes.com/an-introduction-to-the-imperative-part-of-c/
. In SBCL, as in CMU CL (or, for that matter, any compiler which really understands Common Lisp types) a suitable default does exist, in all cases, because the compiler understands the concept of functions which never return (i.e. has return type NIL, e.g. ERROR). Thus, as a portable workaround, you can use a call to some known-never-to-return function as the default. E.g. (DEFSTRUCT FOO (BAR (ERROR "missing :BAR argument") :TYPE SOME-TYPE-TOO-HAIRY-TO-CONSTRUCT-AN-INSTANCE-OF)) or (DECLAIM (FTYPE () will compile without complaint and work correctly either on SBCL or on a completely compliant Common Lisp system.. 7: The "byte compiling top-level form:" output ought to be condensed. Perhaps any number of such consecutive lines ought to turn into a single "byte.) 18:)))))) 19: (I *think* this is a bug. It certainly seems like strange behavior. But the ANSI spec is scary, dark, and deep..) (FORMAT NIL "~,1G" 1.4) => "1. " (FORMAT NIL "~3,1G" 1.4) => "1. " 20:)) 22: 27: Sometimes (SB-EXT:QUIT) fails with Argh! maximum interrupt nesting depth (4096) exceeded, exiting Process inferior-lisp exited abnormally with code 1 I haven't noticed a repeatable case of this yet. 29:. This is still present in sbcl-0.6.8. 31:.) 38: DEFMETHOD doesn't check the syntax of &REST argument lists properly, accepting &REST even when it's not followed by an argument name: (DEFMETHOD FOO ((X T) &REST) NIL). on July 25, 2000: a: (fixed in sbcl-0.6.11.25) b:. d: (in section12.erg) various forms a la (FLOAT 1 DOUBLE-FLOAT-EPSILON) don't give the right behavior. 46: type safety errors reported by Peter Van Eynde July 25, 2000:. d: ELT signals SIMPLE-ERROR if its index argument isn't a valid index for its sequence argument, but should signal TYPE-ERROR instead. e: FILE-LENGTH is supposed to signal a type error when its argument is not a stream associated with a file, but doesn't. f: (FLOAT-RADIX 2/3) should signal an error instead of returning 2. g: (LOAD "*.lsp") should signal FILE. 
j: (PARSE-NAMESTRING (COERCE (LIST #\f #\o #\o (CODE-CHAR 0) #\4 #\8) (QUOTE STRING))) should probably signal an error instead of making a pathname with a null byte in it.: a: (DEFCLASS FOO () (A B A)) should signal a PROGRAM-ERROR, and doesn't. b: (DEFCLASS FOO () (A B A) (:DEFAULT-INITARGS X A X B)) should signal a PROGRAM-ERROR, and doesn't. c: (DEFCLASS FOO07 NIL ((A :ALLOCATION :CLASS :ALLOCATION :CLASS))), and other DEFCLASS forms with duplicate specifications in their slots, should signal a PROGRAM-ERROR, and doesn't. d: (DEFGENERIC IF (X)) should signal a PROGRAM-ERROR, but instead causes a COMPILER-ERROR. 48: SYMBOL-MACROLET bugs reported by Peter Van Eynde July 25, 2000: a: (SYMBOL-MACROLET ((T TRUE)) ..) should probably signal PROGRAM-ERROR, but SBCL accepts it instead. b: SYMBOL-MACROLET should refuse to bind something which is declared as a global variable, signalling PROGRAM-ERROR. c: SYMBOL-MACROLET should signal PROGRAM-ERROR if something it binds is declared SPECIAL inside. 49: LOOP bugs reported by Peter Van Eynde July 25, 2000: a: (LOOP WITH (A B) DO (PRINT 1)) is a syntax error according to the definition of WITH clauses given in the ANSI spec, but compiles and runs happily in SBCL. b:. 50: type system errors reported by Peter Van Eynde July 25, 2000: a: (SUBTYPEP 'BIGNUM 'INTEGER) => NIL, NIL but should be (VALUES T T) instead. b: (SUBTYPEP 'EXTENDED-CHAR 'CHARACTER) => NIL, NIL but should be (VALUES T T) instead. c: (SUBTYPEP '(INTEGER (0) (0)) 'NIL) dies with nested errors. d: In general, the system doesn't like '(INTEGER (0) (0)) -- it blows up at the level of SPECIFIER-TYPE with "Lower bound (0) is greater than upper bound (0)." Probably SPECIFIER-TYPE should return NIL instead. e: (TYPEP 0 '(COMPLEX (EQL 0)) fails with "Component type for Complex is not numeric: (EQL 0)." This might be easy to fix; the type system already knows that (SUBTYPEP '(EQL 0) 'NUMBER) is true. 
  f: The type system doesn't know about the condition system, so that e.g. (TYPEP 'SIMPLE-ERROR 'ERROR) => NIL.
  g: The type system isn't all that smart about relationships between hairy types, as shown in the type.erg test results, e.g. (SUBTYPEP 'CONS '(NOT ATOM)) => NIL, NIL.

  b: READ should probably signal READER-ERROR, not the bare arithmetic error, when input a la "1/0" or "1e1000" causes an arithmetic error.

54: The implementation of #'+ returns its single argument without type checking, e.g. (+ "illegal") => "illegal".

56: Attempting to use COMPILE on something defined by DEFMACRO fails:
  (DEFMACRO FOO (X) (CONS X X))
  (COMPILE 'FOO)
  Error in function C::GET-LAMBDA-TO-COMPILE:
    #<Closure Over Function "DEFUN (SETF MACRO-FUNCTION)" {480E21B1}> was defined in a non-null environment.

58: (SUBTYPEP '(AND ZILCH INTEGER) 'ZILCH) => NIL, NIL
  Note: I looked into fixing this in 0.6.11.15, but gave up. The problem seems to be that there are two relevant type methods for the subtypep operation, HAIRY :COMPLEX-SUBTYPEP-ARG2 and INTERSECTION :COMPLEX-SUBTYPEP-ARG1, and only the first is called. This could be fixed, but type dispatch is messy and confusing enough already, and I don't want to complicate it further. Perhaps someday we can make CLOS cross-compiled (instead of compiled after bootstrapping) so that we don't need to have the type system available before CLOS; then we can rewrite the type methods to CLOS methods, and then expressing the solutions to stuff like this should become much more straightforward. -- WHN 2001-03-14

60: The debugger LIST-LOCATIONS command doesn't work properly.

61: Compiling and loading
  (DEFUN FAIL (X) (THROW 'FAIL-TAG X))
  (FAIL 12)
then requesting a BACKTRACE at the debugger prompt gives no information about where in the user program the problem occurred.
62: The compiler is supposed to do type inference well enough that the declaration in (TYPECASE X ((SIMPLE-ARRAY SINGLE-FLOAT) (LOCALLY (DECLARE (TYPE (SIMPLE-ARRAY SINGLE-FLOAT) X)) ..)) ..) is redundant. However, as reported by Juan Jose Garcia Ripoll for CMU CL, it sometimes doesn't. Adding declarations is a pretty good workaround for the problem for now, but can't be done by the TYPECASE macros themselves, since it's too hard for the macro to detect assignments to the variable within the clause. Note: The compiler *is* smart enough to do the type inference in many cases. This case, derived from a couple of MACROEXPAND-1 calls on Ripoll's original test case, (DEFUN NEGMAT (A) (DECLARE (OPTIMIZE SPEED (SAFETY 0))) (COND ((TYPEP A '(SIMPLE-ARRAY SINGLE-FLOAT)) NIL (LET ((LENGTH (ARRAY-TOTAL-SIZE A))) (LET ((I 0) (G2554 LENGTH)) (DECLARE (TYPE REAL G2554) (TYPE REAL I)) (TAGBODY SB-LOOP::NEXT-LOOP (WHEN (>= I G2554) (GO SB-LOOP::END-LOOP)) (SETF (ROW-MAJOR-AREF A I) (- (ROW-MAJOR-AREF A I))) (GO SB-LOOP::NEXT-LOOP) SB-LOOP::END-LOOP)))))) demonstrates the problem; but the problem goes away if the TAGBODY and GO forms are removed (leaving the SETF in ordinary, non-looping code), or if the TAGBODY and GO forms are retained, but the assigned value becomes 0.0 instead of (- (ROW-MAJOR-AREF A I)).. 
65: (probably related to bug #70; maybe related to bug #109) As reported by Carl Witty on submit@bugs.debian.org 1999-05-08, compiling this file (in-package "CL-USER") (defun equal-terms (termx termy) (labels ((alpha-equal-bound-term-lists (listx listy) (or (and (null listx) (null listy)) (and listx listy (let ((bindings-x (bindings-of-bound-term (car listx))) (bindings-y (bindings-of-bound-term (car listy)))) (if (and (null bindings-x) (null bindings-y)) (alpha-equal-terms (term-of-bound-term (car listx)) (term-of-bound-term (car listy))) (and (= (length bindings-x) (length bindings-y)) (prog2 (enter-binding-pairs (bindings-of-bound-term (car listx)) (bindings-of-bound-term (car listy))) (alpha-equal-terms (term-of-bound-term (car listx)) (term-of-bound-term (car listy))) (exit-binding-pairs (bindings-of-bound-term (car listx)) (bindings-of-bound-term (car listy))))))) (alpha-equal-bound-term-lists (cdr listx) (cdr listy))))) (alpha-equal-terms (termx termy) (if (and (variable-p termx) (variable-p termy)) (equal-bindings (id-of-variable-term termx) (id-of-variable-term termy)) (and (equal-operators-p (operator-of-term termx) (operator-of-term termy)) (alpha-equal-bound-term-lists (bound-terms-of-term termx) (bound-terms-of-term termy)))))) (or (eq termx termy) (and termx termy (with-variable-invocation (alpha-equal-terms termx termy)))))) causes an assertion failure The assertion (EQ (C::LAMBDA-TAIL-SET C::CALLER) (C::LAMBDA-TAIL-SET (C::LAMBDA-HOME C::CALLEE))) failed. Bob Rogers reports (1999-07-28 on cmucl-imp@cons.org) a smaller test case with the same problem: (defun parse-fssp-alignment () ;; Given an FSSP alignment file named by the argument . . . (labels ((get-fssp-char () (get-fssp-char)) (read-fssp-char () (get-fssp-char))) ;; Stub body, enough to tickle the bug. (list (read-fssp-char) (read-fssp). 
68: As reported by Daniel Solaz on cmucl-help@cons.org 2000-11-23, SXHASH returns the same value for all non-STRUCTURE-OBJECT instances, notably including all PCL instances. There's a limit to how much SXHASH can do to return unique values for instances, but at least it should probably look at the class name, the way that it does for STRUCTURE-OBJECTs. 69: As reported by Martin Atzmueller on the sbcl-devel list 2000-11-22, > There remains one issue, that is a bug in SBCL: > According to my interpretation of the spec, the ":" and "@" modifiers > should appear _after_ the comma-seperated arguments. > Well, SBCL (and CMUCL for that matter) accept > (ASSERT (STRING= (FORMAT NIL "~:8D" 1) " 1")) > where the correct way (IMHO) should be > (ASSERT (STRING= (FORMAT NIL "~8:D" 1) " 1")) Probably SBCL should stop accepting the "~:8D"-style format arguments, or at least issue a warning. 70: (probably related to bug #65; maybe related to bug #109) The compiler doesn't like &OPTIONAL arguments in LABELS and FLET forms. E.g. (DEFUN FIND-BEFORE (ITEM SEQUENCE &KEY (TEST #'EQL)) (LABELS ((FIND-ITEM (OBJ SEQ TEST &OPTIONAL (VAL NIL)) (LET ((ITEM (FIRST SEQ))) (COND ((NULL SEQ) (VALUES NIL NIL)) ((FUNCALL TEST OBJ ITEM) (VALUES VAL SEQ)) (T (FIND-ITEM OBJ (REST SEQ) TEST (NCONC VAL `(,ITEM)))))))) (FIND-ITEM ITEM SEQUENCE TEST))) from David Young's bug report on cmucl-help@cons.org 30 Nov 2000 causes sbcl-0.6.9 to fail with error in function SB-KERNEL:ASSERT-ERROR: The assertion (EQ (SB-C::LAMBDA-TAIL-SET SB-C::CALLER) (SB-C::LAMBDA-TAIL-SET (SB-C::LAMBDA-HOME SB-C::CALLEE))) failed. 71: (DECLAIM (OPTIMIZE ..)) doesn't work. E.g. even after (DECLAIM (OPTIMIZE (SPEED 3))), things are still optimized with the previous SPEED policy. This bug will probably get fixed in 0.6.9.x in a general cleanup of optimization policy. 72: (DECLAIM (OPTIMIZE ..)) doesn't work properly inside LOCALLY forms. 
74: As noted in the ANSI specification for COERCE, (COERCE 3 'COMPLEX) gives a result which isn't COMPLEX. The result type optimizer for COERCE doesn't know this, perhaps because it was written before ANSI threw this curveball: the optimizer thinks that COERCE always returns a result of the specified type. Thus while the interpreted function
  (DEFUN TRICKY (X) (TYPEP (COERCE X 'COMPLEX) 'COMPLEX))
returns the correct result,
  (TRICKY 3) => NIL
the compiled function (COMPILE 'TRICKY) does not:
  (TRICKY 3) => T

80: (fixed early Feb 2001 by MNA)

81: As reported by wbuss@TELDA.NET (Wolfhard Buss) on cmucl-help 2001-02-14, according to CLHS
  (loop with (a . b) of-type float = '(0.0 . 1.0)
        and (c . d) of-type float = '(2.0 . 3.0)
        return (list a b c d))
should evaluate to (0.0 1.0 2.0 3.0). cmucl-18c disagrees and invokes the debugger: "B is not of type list". SBCL does the same thing.

84: (SUBTYPEP '(SATISFIES SOME-UNDEFINED-FUN) NIL) => NIL,T (should be NIL,NIL) to ask whether it's equal to the T type and then (using the EQ type comparison in the NAMED :SIMPLE-= type method) return NIL. (I haven't tried to investigate this bug enough to guess whether there might be any user-level symptoms.)

90: a latent cross-compilation/bootstrapping bug: The cross-compilation host's CL:CHAR-CODE-LIMIT is used in target code in readtable.lisp and possibly elsewhere. Instead, we should use the target system's CHAR-CODE-LIMIT. This will probably cause problems if we try to bootstrap on a system which uses a different value of CHAR-CODE-LIMIT than SBCL does.

91: (subtypep '(or (integer -1 1) unsigned-byte) '(or (rational -1 7) unsigned-byte (integer -1 1))) => NIL,T
An analogous problem with SINGLE-FLOAT and REAL types was fixed in sbcl-0.6.11.22, but some peculiarities of the RATIO type make it awkward to generalize the fix to INTEGER and RATIONAL. It's not clear what the best fix is. (See the "bug in type handling" discussion on cmucl-imp ca. 2001-03-22 and ca. 2001-02-12.)
93: In sbcl-0.6.11.26, (COMPILE 'IN-HOST-COMPILATION-MODE) in src/cold/shared.lisp doesn't correctly translate the interpreted function (defun in-host-compilation-mode (fn) (let ((*features* (cons :sb-xc-host *features*)) ;; the CROSS-FLOAT-INFINITY-KLUDGE, as documented in ;; base-target-features.lisp-expr: (*shebang-features* (set-difference *shebang-features* '(:sb-propagate-float-type :sb-propagate-fun-type)))) (with-additional-nickname ("SB-XC" "SB!XC") (funcall fn)))) No error is reported by the compiler, but when the function is executed, it causes an error TYPE-ERROR in SB-KERNEL::OBJECT-NOT-TYPE-ERROR-HANDLER: (:LINUX :X86 :IEEE-FLOATING-POINT :SB-CONSTRAIN-FLOAT-TYPE :SB-TEST :SB-INTERPRETER :SB-DOC :UNIX ...) is not of type SYMBOL.)))) 99: DESCRIBE interacts poorly with *PRINT-CIRCLE*, e.g. the output from (let ((*print-circle* t)) (describe (make-hash-table))) is weird, #<HASH-TABLE :TEST EQL :COUNT 0 {90BBFC5}> is an . (EQL) Its SIZE is 16. Its REHASH-SIZE is 1.5. Its REHASH-THRESHOLD is . (1.0) It holds 0 key/value pairs. where the ". (EQL)" and ". (1.0)" substrings are screwups. (This is likely a pretty-printer problem which happens to be exercised by DESCRIBE, not actually a DESCRIBE problem.). 101: The error message for calls to structure accessors with the wrong number of arguments is confusing and of the wrong condition class (TYPE-ERROR instead of PROGRAM-ERROR): * (defstruct foo x y) * (foo-x) debugger invoked on condition of type SIMPLE-TYPE-ERROR: Structure for accessor FOO-X is not a FOO: 301988783 102: As reported by Arthur Lemmens sbcl-devel 2001-05-05, ANSI requires that SYMBOL-MACROLET refuse to rebind special variables, but SBCL doesn't do this. (Also as reported by AL in the same message, SBCL depended on this nonconforming behavior to build itself, because of the way that **CURRENT-SEGMENT** was implemented. As of sbcl-0.6.12.x, this dependence on the nonconforming behavior has been fixed, but the nonconforming behavior remains.) 
103: As reported by Arthur Lemmens sbcl-devel 2001-05-05, ANSI's definition of (LOOP .. DO ..) requires that the terms following DO all be compound forms. SBCL's implementation of LOOP allows non-compound forms (like the bare symbol COUNT, in his example) here. 105: (DESCRIBE 'STREAM-READ-BYTE) 106: (reported by Eric Marsden on cmucl-imp 2001-06-15) Executing (TYPEP 0 '(COMPLEX (EQL 0))) signals an error in sbcl-0.6.12.34, The component type for COMPLEX is not numeric: (EQL 0) This is funny since sbcl-0.6.12.34 knows (SUBTYPEP '(EQL 0) 'NUMBER) => T). 109: reported by Martin Atzmueller 2001-06-25; originally from CMU CL bugs collection: ;;; This file fails to compile. ;;; Maybe this bug is related to bugs #65, #70 in the BUGS file. (in-package :cl-user) (defun tst2 () (labels ((eff (&key trouble) (eff) ;; nil ;; Uncomment and it works )) (eff))) In SBCL 0.6.12.42, the problem is internal error, failed AVER: "(COMMON-LISP:EQ (SB!C::LAMBDA-TAIL-SET SB!C::CALLER) (SB!C::LAMBDA-TAIL-SET (SB!C::LAMBDA-HOME SB!C::CALLEE)))". 111: reported by Martin Atzmueller 2001-06-25; originally from CMU CL bugs collection: (in-package :cl-user) ;;; Produces an assertion failures when compiled. (defun foo (z) (declare (type (or (function (t) t) null) z)) (let ((z (or z #'identity))) (declare (type (function (t) t) z)) (funcall z 1))) The error in sbcl-0.6.12.42 is internal error, failed AVER: "(COMMON-LISP:NOT (COMMON-LISP:EQ SB!C::CHECK COMMON-LISP:T))" 112: reported by Martin Atzmueller 2001-06-25; taken from CMU CL bugs collection; apparently originally reported by Bruno Haible (in-package :cl-user) ;;; From: Bruno Haible ;;; Subject: scope of SPECIAL declarations ;;; It seems CMUCL has a bug relating to the scope of SPECIAL ;;; declarations. I observe this with "CMU Common Lisp 18a x86-linux ;;; 1.4.0 cvs". 
(let ((x 0)) (declare (special x)) (let ((x 1)) (let ((y x)) (declare (special x)) y)))
;;; Gives: 0 (this should return 1 according to CLHS)

(let ((x 0)) (declare (special x)) (let ((x 1)) (let ((y x) (x 5)) (declare (special x)) y)))
;;; Gives: 1 (correct).

The reported results match what we get from the interpreter in sbcl-0.6.12.42.

114: reported by Martin Atzmueller 2001-06-25; originally from CMU CL bugs collection:
  (in-package :cl-user)
  ;;; This file causes the byte compiler to fail.
  (declaim (optimize (speed 0) (safety 1)))
  (defun tst1 ()
    (values (multiple-value-list (catch 'a (return-from tst1)))))
The error message in sbcl-0.6.12.42 is
  internal error, failed AVER: "(COMMON-LISP:EQUAL (SB!C::BYTE-BLOCK-INFO-START-STACK SB!INT:INFO) SB!C::STACK)"

116: The error message from compiling (LAMBDA (X) (LET ((NIL 1)) X)) is

KNOWN BUGS RELATED TO THE IR1 INTERPRETER

(Note: At some point, the pure interpreter (actually a semi-pure interpreter, aka "the IR1 interpreter") will probably go away, replaced by constructs like (DEFUN EVAL (X) (FUNCALL (COMPILE NIL (LAMBDA ..)))), and at that time these bugs should either go away automatically or become more tractable to fix. Until then, they'll probably remain, since some of them aren't considered urgent, and the rest are too hard to fix as long as so many special cases remain. After the IR1 interpreter goes away is also the preferred time to start systematically exterminating cases where debugging functionality (backtrace, breakpoint, etc.) breaks down, since getting rid of the IR1 interpreter will reduce the number of special cases we need to support.)

IR1-1: The FUNCTION special operator doesn't check properly whether its argument is a function name. E.g. (FUNCTION (X Y)) returns a value instead of failing with an error. (Later attempting to funcall the value does cause an error.)
IR1-2: COMPILED-FUNCTION-P bogusly reports T for interpreted functions: * (DEFUN FOO (X) (- 12 X)) FOO * (COMPILED-FUNCTION-P #'FOO) T IR1-3: Executing (DEFVAR *SUPPRESS-P* T) (EVAL '(UNLESS *SUPPRESS-P* (EVAL-WHEN (:COMPILE-TOPLEVEL :LOAD-TOPLEVEL :EXECUTE) (FORMAT T "surprise!")))) prints "surprise!". Probably the entire EVAL-WHEN mechanism ought to be rewritten from scratch to conform to the ANSI definition, abandoning the *ALREADY-EVALED-THIS* hack which is used in sbcl-0.6.8.9 (and in the original CMU CL source, too). This should be easier to do -- though still nontrivial -- once the various IR1 interpreter special cases are gone. IR1-3a: EVAL-WHEN's idea of what's a toplevel form is even more screwed up than the example in IR1-3 would suggest, since COMPILE-FILE and COMPILE both print both "right now!" messages when compiling the following code, (LAMBDA (X) (COND (X (EVAL-WHEN (:COMPILE-TOPLEVEL :LOAD-TOPLEVEL :EXECUTE) (PRINT "yes! right now!")) "yes!") (T (EVAL-WHEN (:COMPILE-TOPLEVEL :LOAD-TOPLEVEL :EXECUTE) (PRINT "no! right now!")) "no!"))) and while EVAL doesn't print the "right now!" messages, the first FUNCALL on the value returned by EVAL causes both of them to be printed. IR1-4:.] IR1-5: (not really a bug, just a wishlist thing which might be easy when EVAL-WHEN is rewritten..) It might be good for the cross-compiler to warn about nested EVAL-WHENs. (In ordinary compilation, they're quite likely to be OK, but in cross-compiled code EVAL-WHENs are a great source of confusion, so a style warning about anything unusual could be helpful.) IR1-6: (another wishlist thing..) Reimplement DEFMACRO to be basically like DEFMACRO-MUNDANELY, just using EVAL-WHEN.
http://sourceforge.net/p/sbcl/sbcl/ci/sbcl_0_6_13/tree/BUGS
I have this Python script to check services like MySQL and Apache and restart any of them that are down:
import subprocess
import smtplib
_process = ['mysql', 'apache2']
for _proc in _process:
    p = subprocess.Popen(["service", _proc, "status"], stdout=subprocess.PIPE)
    output, err = p.communicate()
    print "*** check service "+_proc+" ***\n"
    if 'Active: active (running)' in output:
        print _proc+" is working!"
    else:
        print "Ouchh!! "+_proc+" again is down."
        output = subprocess.Popen(["service", _proc, "start"])
        print "*** The service "+_proc+" was restarted ***\n", output
        print "*** sending email ***\n"
        server = smtplib.SMTP('smtp.gmail.com', 587)
        server.ehlo()
        server.starttls()
        _from = "from@email.co"
        _to = "to@email.co"
        _cc = ["another@email.co"]
        _pass = "xxxxxx"
        server.login(_from, _pass)
        msg = "\r\n".join([
            "From: "+_from,
            "To: "+_to,
            "CC: "+",".join(_cc),
            "Subject: Ouchh!! "+_proc+" again is down.",
            "",
            "Again " + _proc + " is down. The service was restarted!!"
        ])
        server.sendmail(_from, _to, msg)
        server.quit()
        print "*** email send ***\n"
python /home/check-services.py
Traceback (most recent call last):
File "/home/check-services.py", line 77, in <module>
p = subprocess.Popen(["service", _proc, "status"],
The problem is that service isn't in any of the directories listed in the $PATH of the cron process. There are many solutions.

One solution is to specify the complete path to service in your subprocess.Popen() call. On my computer, that is /usr/sbin/service. Try this:
p = subprocess.Popen( ["/usr/sbin/service", _proc, "status"], stdout=subprocess.PIPE)
Another solution is to specify a different $PATH in your crontab:
* * * * * PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python /home/check-services.py
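A third option, along the lines of the first one, is to resolve the absolute path of service at run time instead of hardcoding it. This is only a sketch: the helper name is made up, and it assumes the usual sbin/bin directories are where service lives on your system.

```python
import os

def find_executable(name, search_path="/usr/sbin:/usr/bin:/sbin:/bin"):
    # Search an explicit list of directories, so the result does not
    # depend on the minimal $PATH that cron gives its jobs.
    for directory in search_path.split(":"):
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    raise OSError("%r not found in %s" % (name, search_path))

# Then, in the script above (illustrative usage):
#   p = subprocess.Popen([find_executable("service"), _proc, "status"],
#                        stdout=subprocess.PIPE)
```

This keeps the crontab untouched and fails with a clear error if the binary really is missing.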
https://codedump.io/share/Jg9xoJDi7Omh/1/what-is-wrong-with-my-python-script-and-cron-job
Grace is derived from Xmgr (a.k.a. ACE/gr), originally written by Paul Turner.
From version number 4.00, the development was taken over by a team of volunteers under the coordination of Evgeny Stambulchik. You can get the newest information about Grace and download the latest version at the Grace home page.
When its copyright was changed to GPL, the name was changed to Grace, which stands for ``GRaphing, Advanced Computation and Exploration of data'' or ``Grace Revamps ACE/gr''. The first version of Grace available is named 5.0.0, while the last public version of Xmgr has the version number 4.1.2.
Paul still maintains and develops a non-public version of Xmgr for internal use.
Copyright (©) 1991-1995 Paul J Turner, Portland, OR; Copyright (©) 1996-2003 the Grace Development Team. For certain libraries required to build Grace (which are therefore even included in a suitable version) there may be different Copyright/License statements. Though their License may by chance match the one used for Grace, the Grace Copyright holders cannot influence or change them.
yacc (or, better, bison).

Suppose the sources are in /usr/local/src/grace-x.y.z and the compilation will be performed in /tmp/grace-obj; do the following:

% mkdir /tmp/grace-obj
% cd /tmp/grace-obj
% /usr/local/src/grace-x.y.z/ac-tools/shtool mkshadow \
    /usr/local/src/grace-x.y.z .
The configure shell script attempts to guess correct values for various system-dependent variables used during compilation. It uses those values to create Make.conf in the top directory of the package. It also creates a config.h file containing system-dependent definitions. Finally, it creates a shell script config.status that you can run in the future to recreate the current configuration, a file config.cache that saves the results of its tests to speed up reconfiguring, and a file config.log containing compiler output (useful mainly for debugging configure). If at some point config.cache contains results you don't want to keep, you may remove or edit it.
Run ./configure --help to get a list of additional switches specific to Grace, then run ./configure <options>. Just an example:

% ./configure --enable-grace-home=/opt/grace \
    --with-extra-incpath=/usr/local/include:/opt/include \
    --with-extra-ldpath=/usr/local/lib:/opt/lib --prefix=/usr

would use /usr/local/include and /opt/include in addition to the default include path, and /usr/local/lib and /opt/lib in addition to the default ld path. As well, all stuff would be put under the /opt/grace directory and soft links made to /usr/bin, /usr/lib and /usr/include.
Note: If you change one of the --with-extra-incpath or --with-extra-ldpath options from one run of configure to another, remember to delete the config.cache file!!!
make
If something goes wrong, try to see if the problem has been
described already in the Grace FAQ (in the
doc directory).
make tests
This will give you a slide show demonstrating some nice features of Grace.
make install
make links
The latter (optional) step will make soft links from some files under the Grace home directory to the system-wide default locations (these can be changed by the --prefix option during the configuration, see above).
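Putting the configuration and build steps above together, a typical installation might look like the following transcript (the Grace home location is illustrative):

```
% ./configure --enable-grace-home=/opt/grace
% make
% make tests
% make install
% make links
```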
Not written yet...
For a jump-in start, you can browse the demos ("Help/Examples" menu tree). These are ordinary Grace projects, so you can play with them and modify them. Also, read the Tutorial.
O.k. Here's a VERY quick introduction:
A project file contains all information necessary to restore a plot created by Grace, as well as some of the preferences. Each plot is represented on a single page, but may have an unlimited number of graphs. You create a project file of your current graph with File/Save or File/Save as.
A parameter file contains the detailed settings of your project. It can be used to transfer these settings to a different plot/project. You generate a parameter file with File/Save menu entry selected from the "Plot/Graph appearance popup". You can load the settings contained in a parameter file with File/Open.
Grace understands several input files formats. The most basic one is ASCII text files containing space and comma separated columns of data. The data fields can be either numeric (Fortran 'd' and 'D' exponent markers are supported) or alphanumeric (with or without quotes). Several calendar date formats are recognized automatically and you can specify your own reference for numeric dates formats. Grace also has a command language (see command interpreter), you can include commands in data files using lines having "@" as their first non-blank character. Depending on configuration, Grace can also read NetCDF files (see configuration).
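As an illustration, a minimal ASCII input file mixing data columns and embedded commands might look like the following (the particular @ commands shown here are assumed for the example, not prescribed by this section):

```
@ title "Sample data"
@ s0 legend "measurement"
1.0 0.84
2.0 0.91
3.0 0.14
```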
A graph consists of (every element is optional): a graph frame, axes, a title and a subtitle, a number of sets and additional annotative objects (time stamp string, text strings, lines, boxes and ellipses).
The graph type can be any of:
The idea of "XY Chart" is to plot bars (or symbols in general) of several sets side by side, assuming the abscissas of all the sets are the same (or subsets of the longest set).
A dataset is a collection of points with x and y coordinates, up to four optional data values (which, depending on the set type, can be displayed as error bars or like) and one optional character string.
A set is a way of representing datasets. It consists of a pointer to a dataset plus a collection of parameters describing the visual appearance of the data (like color, line dash pattern etc).
The set type can be any of the following:
Not all set types, however, can be plotted on any graph type. The following table summarizes it:
Regions are sections of the graph defined by the interior or exterior of a polygon, or a half plane defined by a line. Regions are used to restrict data transformations to a geometric area occupied by region.
Real Time Input refers to the ability Grace has to be fed in real time by an external program. The Grace process spawned by the driver program is a full featured Grace process: the user can interact using the GUI at the same time the program sends data and commands. The process will adapt itself to the incoming data rate.
Hotlinks are sources containing varying data. Grace can be instructed a file or a pipe is a hotlink in which case it will provide specific commands to refresh the data on a mouse click (a later version will probably allow automatic refresh).
Grace allows the user to choose between several output devices to produce its graphics. The current list of supported devices is:
Note that Grace no longer supports GIF due to the copyright policy of Unisys. Grace can also be instructed to launch conversion programs automatically based on file name. As an example you can produce MIF (FrameMaker Interchange Format) or Java applets using pstoedit, or almost any image format using the netpbm suite (see the FAQ).
In many cases, when Grace needs to access a file given with a
relative
pathname, it searches for the file along the
following path:
./pathname:./.grace/pathname:~/.grace/pathname:$GRACE_HOME/pathname
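For concreteness, the search order can be sketched in Python (a hypothetical helper, not part of Grace itself):

```python
import os

def find_in_magic_path(pathname, grace_home=None, cwd="."):
    # Candidate locations, in the order given above; the first existing
    # file wins and the rest of the path is ignored.
    home = os.path.expanduser("~")
    candidates = [
        os.path.join(cwd, pathname),
        os.path.join(cwd, ".grace", pathname),
        os.path.join(home, ".grace", pathname),
    ]
    if grace_home is not None:
        candidates.append(os.path.join(grace_home, pathname))
    for candidate in candidates:
        if os.path.isfile(candidate):
            return candidate
    return None
```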
Grace can access external functions present in either system or third-party shared libraries or modules specially compiled for use with it. The term dynamic refers to the possibility Grace has to open the library at run time to find the code of the external function, there is no need to recompile Grace itself (the functions already compiled in Grace are "statically linked").
There are two types of coordinates in Grace: the world coordinates and the viewport coordinates. Points of data sets are defined in the world coordinates. The viewport coordinates correspond to the image of the plot drawn on the canvas (or printed on, say, PS output page). The transformation converting the world coordinates into the viewport ones is determined by both the graph type and the axis scaling.
Actually, there is yet another level in the hierarchy of coordinates - the device coordinates. However, you (as a user of Grace) should not worry about the latter. The mapping between the viewport coordinates and the device coordinates is always set in such a way that the origin of the viewport corresponds to the left bottom corner of the device page, the smallest of the device dimensions corresponds to one unit in the viewport coordinates. Oh, and the most important thing about the viewport -> device transformation is that it is homotetic, i.e. a square is guaranteed to remain a square, not a rectangle, a circle remains a circle (not an ellipse) etc.
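The viewport-to-device mapping described above can be sketched as follows (the function name is hypothetical; Grace's internal code may differ):

```python
def viewport_to_device(vx, vy, page_width, page_height):
    # The origin of the viewport maps to the bottom-left corner of the
    # page, and one viewport unit corresponds to the smaller of the two
    # page dimensions.  A single scale factor is used for both axes, so
    # the transformation is homothetic: squares remain squares.
    scale = float(min(page_width, page_height))
    return (vx * scale, vy * scale)
```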
With respect to the user interface, there are three modes of
operation that Grace can be invoked in. The full-featured GUI-based
version is called
xmgrace. A batch-printing version is
called
gracebat. A command-line interface mode is called
grace. Usually, a single executable is called in all cases,
with two of the three files being (symbolic) links to a "real" one.
Override any parameter file settings
Turn off all toolbars
Execute batch_file on start up
Assume data file is block data
Form a set from the current block data set using the current set type from columns given in the argument
Set the hint for dates analysis
Read data from descriptor (anonymous pipe) on startup
Set canvas size fixed to width*height
Use free page layout
Set the current graph number
Set the type of the current graph
No interactive session, just print and quit
Set default hardcopy device
Install private colormap
Turn the graph legend on
Set the axis scaling of the current graph to logarithmic
Run Grace in monochrome mode (affects the display only)
Assume data file is in netCDF format. This option is present only if the netCDF support was compiled in
If -netcdf was used previously, read from the netCDF file X_var Y_var variables and create a set. If X_var name is "null" then load the index of Y to X. This option is present only if the netCDF support was compiled in
Assume the answer is yes to all requests - if the operation would overwrite a file, Grace will do so without prompting
Don't use private colormap
In batch mode, do not print
Disable safe mode
Don't catch signals
Read data from named pipe on startup
Assume data file is in X Y1 Y2 Y3 ... format
Load parameters from parameter_file to the current graph
Interpret string as a parameter setting
Read data from stdin on startup
file Save print output to file
Remove data file after read
Write results of some data manipulations to results_file
Exchange the color indices for black and white
Run in the safe mode (default) - no file system modifications are allowd through the batch language
Save all graphs to save_file
Integer seed for random number generator
Source type of next data file
Set allowed time slice for real time inputs to delay ms
Add timestamp to plot
Set the type of the next data file
Show the program version
Set the viewport for the current graph
Set the working directory
Set the world coordinates for the current graph
This message
Set the location of Grace. This will be where help files, auxiliary programs, and examples are located. If you are unable to find the location of this directory, contact your system administrator.
Print command. If the variable is defined but is an empty string, "Print to file" will be selected as default.
The editor used for manual editing of dataset values.
The shell command to run an HTML viewer for on-line browsing of the help documents. Must include at least one instance of "%s" which will be replaced with the actual URL by Grace.
These flags control behavior of the FFTW planner (see FFTW tuning for detailed info)
Upon start-up, Grace loads its init file,
gracerc. The file
is searched for in the magic path (see
magic path); once found, the rest of the
path is ignored. It's recommended that in the
gracerc file,
one doesn't use statements which are part of a project file - such
defaults, if needed, should be set in the default template (see
default template).
Whenever a new project is started, Grace loads the default template,
templates/Default.agr. The file is searched for in the magic
path (see
magic path); once found, the
rest of the path is ignored. It's recommended that in the default
template, one doesn't use statements which are NOT part of a project
file - such defaults, if needed, should be set in the
gracerc (see
init file).
The following Grace-specific X resource settings are supported:
It is also possible to customize menus by assigning key accelerators to any item.
You'll need to derive the item's X resource name from the respective menu label, which is easily done following these rules:
For example, in order to make Grace popup the Non-linear curve fitting by pressing Control+F, you would add the following two lines
XMgrace*transformationsMenu.nonLinearCurveFittingButton.acceleratorText: Ctrl+F
XMgrace*transformationsMenu.nonLinearCurveFittingButton.accelerator: Ctrl<Key>f
to your
.Xresources file (the file which is read when an X
session starts; it could be
.Xdefaults,
.Xsession or
some other file - ask your system administrator when in doubt).
Similarly, it may be desirable to alter default filename patterns of file selection dialogs. The recipe for the dialog's name is like for menu buttons outlined above, with "Button" being replaced with "FSB". E.g., to list all files in the "Open project" dialog ("File/Open..."), set the following resource:
XMgrace*openProjectFSB.pattern: *
This section describes interface controls - basic building blocks, used in many popups.
Whenever the user is expected to provide a filename, either for reading in or writing some data, a file selection dialog is popped up. In addition to the standard entries (the directory and file lists and the filter entry), there is a pulldown menu for quick directory change to predefined locations (the current working directory, user's home directory and the file system root). Also, a "Set as cwd" button is there which allows to set any directory as you navigate through the directory tree as the current working directory (cwd). Once defined, it can be used in any other file selection dialog to switch to that directory quickly.
Various selectors are available in several popups. They all display lists of objects (graphs, sets, ...) and can be used to perform simple operations on these objects (copying, deleting, ...). The operations are available from a popup menu that appears when pressing mouse button 3 on them. Depending on the required functionality, they may allow multiple choices or not. The following shortcuts are enabled (if the result of an action would contradict the list's selection policy, it is ignored):
The operations that can be performed on graphs through the graph selector's popup menu are:
Double-clicking on a list entry will switch the focus to that graph.
The operations that can be performed on sets through the set selector's popup menu are:
Double-clicking on a list entry will open the spreadsheet editor (see Spreadsheet data set editor) on the set data.
When the pointer focus is on the canvas (where the graph is drawn), there are some shortcuts to activate several actions. They are:
A single click inside a graph switches focus to that graph. This is the default policy, but it can be changed from the "Edit/Preferences" popup.
Double clicking on parts of the canvas will invoke certain actions or raise some popups:
The double clicking actions can be enabled/disabled from the "Edit/Preferences" popup.
Along the left-hand side of the canvas (if shown) is the ToolBar. It is armed with several buttons to provide quick and easy access to the more commonly used Grace functions.
Draw: This will redraw the canvas and sets. Useful if "Auto Redraw" has been deselected in the Edit|Preferences dialog or after executing commands directly from the Window|Commands interpreter.
Lens: A zoom lens. Click on the lens, then select the area of interest on the graph with the "rubber band". The region enclosed by the rubber band will fill the entire graph.
AS: AutoScale. Autoscales the graph to contain all data points of all visible (not hidden) sets.
Z/z: Zoom in/out by 5%. The zoom percentage can be set in the Edit/Preferences dialog.
Arrows: Scroll active graph by 5% in the arrow's direction. The scroll percentage can be set in the Edit/Preferences dialog.
AutoT: AutoTick Axes. This will find the optimum number of major and minor tick marks for both axes.
AutoO: Autoscale On set. Click the AutoO button, then click on the graph near the set you wish to use for determining the autoscale boundaries of the graph.
ZX,ZY: Zoom along an axis. These buttons work like the zoom lens above but are restricted to a single axis.
AX,AY: Autoscale one axis only.
The following buttons deal with the graph stack and there is a good example under Help/Examples/General Intro/World Stack.
Pu/Po: Push and pop the current world settings to/from the graph stack. When popping, makes the new stack top current.
PZ: Push before Zooming. Functions as the zoom lens, but first pushes the current world settings to the stack.
Cy: Cycles through the stack settings of the active graph. Each graph may have up to twenty layers on the stack.
Exit: Pretty obvious, eh?
The file menu contains all entries related to the input/output features of Grace.
Reset the state of Grace as if it had just started (one empty graph ranging from 0 to 1 along both axes). If some work has been done and not yet saved, a warning popup is displayed to allow canceling the operation.
Open an existing project file. A popup is displayed that allows you to browse the file system.
Save the current work in a project file, using the name that was used for the last open or save. If no name has been set (i.e., if the project has been created from scratch), this acts as "Save as".
Save the current work in a project file with a new name. A popup allows you to browse the file system and set the name, the format to use for saving data points (the default value is "%16.8g"), and a textual description of the project. A warning is displayed if a file with the same name already exists.
Abandon all modifications performed on the project since the last save. A confirmation popup is displayed to allow the user to cancel the operation.
Set the properties of the printing device. Each device has its own set of specific options (see Device-specific settings). According to the device, the output can be sent either directly to a printer or directed to a file. The global settings available for all devices are the sizing parameters. The size of the graph is fixed. Changing the 'Page' settings changes the size of the canvas underneath the graph. Switching between portrait and landscape rotates the canvas. Make sure the canvas size is large enough to hold your graph. Otherwise you get a 'Printout truncated' warning. If your canvas size cannot easily be changed because, for example, you want to print on letter size paper, you need to adjust the size of your graph ('Viewport' in Plot/Graph Appearance).
Print the project using the current printer settings.
Exit from Grace. If some work has been done and not saved, a warning popup will be displayed to allow the user to cancel the operation.
Using the data set popup, you can view the properties of data sets. This includes the type, length, associated comment and some statistics (min, max, mean, standard deviation). A horizontal scrollbar at the bottom gives access to the last two properties, which are not displayed by default. Also note that if some columns are too narrow to show all significant digits, you can drag the vertical rules using Shift+Button 2.
Using the menu at the top of this dialog, you can manipulate existing sets or add new ones. Among the most important entries in the menu are options to create or modify a set using the spreadsheet data set editor (see Spreadsheet data set editor).
The dialog presents an editable matrix of numbers, corresponding to the data set being edited. The set type (and hence the number of data columns) can be changed using the "Type:" selector. Clicking on a column label pops up a dialog allowing you to adjust the column formatting. Clicking on the row labels toggles the respective row state (selected/unselected). The selected rows can be deleted via the dialog's "Edit" menu. Another entry in this menu lets you add a row; the place of the new row is determined by the row containing the cell with the keyboard focus. Also, just typing in an empty cell will add one or several rows (filling the intermediate rows with zeros).
To resize columns, drag the vertical rules using Shift+Button 2.
The set operations popup allows you to interact with sets as a whole. If you want to operate on the data ordering of the sets, you should use the data set operations popup from the Data menu. The popup allows you to select a source (one set within one graph) and a destination and perform some action upon them (copy, move, swap). This popup also gives you quick access to several graph and set selectors if you want to perform some other operation, like hiding a graph or creating a new set from block data.
This entry fires up a popup to lay out several graphs in a regular grid given by M rows and N columns.
The graph selector at the top allows one to select the graphs the arrangement will operate on. If the number of selected graphs isn't equal to M times N, new graphs may be created or extra graphs killed as needed. These options are controlled by the respective checkboxes below the graph selector.
The order in which the matrix is filled with the graphs can be selected (first horizontally then vertically or vice versa, with either of them inverted). Additionally, one may choose to fill the matrix in a snake-like manner (adjacent "strokes" are anti-parallel).
The rest of the controls of the dialog window deal with the matrix spacing: left/right/top/bottom page offsets (in the viewport coordinates) and relative inter-cell distances, vertical and horizontal. Next to each of the vertical/horizontal spacing spinboxes, a "Pack" checkbox is found. Enabling it effectively sets the respective inter-cell distance to zero and alters the axis tickmark settings such that only the bottom/left-most tickmarks are visible.
If you don't want the regular layout this arrangement gives you, you can change it afterwards using the mouse (select a graph and double click on the focus marker, see clicks and double clicks).
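The same layout can also be requested from the command interpreter. A sketch, assuming the ARRANGE command of recent Grace versions with arguments (nrows, ncols, page offset, horizontal gap, vertical gap):

```
# lay out the graphs in a 3x2 grid, with a 0.1 page offset
# and relative inter-cell spacings of 0.15 and 0.2
ARRANGE(3, 2, 0.1, 0.15, 0.2)
```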
You can overlay a graph on top of another one. The main use of this feature is to plot several curves using different scales on what is apparently the same graph. The main difficulty is to be sure you operate on the graph you want at all times (you can hide one for a moment if this becomes too difficult).
Using this entry, you can autoscale one graph or all graphs according to the specified sets only. This is useful if you need truly comparable graphs even though each one contains data of different ranges, or if you want to focus your attention on one set only while it is displayed with other data in a complex graph.
This small popup only displays the current state (type and whether it is active or not) of the existing regions.
You can define a new region (or redefine an existing one), the allowed region types are:
A region can be either linked to the current graph only or to all graphs.
This kills a region.
This popup reports which sets or points are inside or outside of a region.
You can link a set to a file or a pipe using this feature. Once a link has been established, you can update it (i.e., read data again) by clicking on the update button.
Currently, only simple XY sets can be used for hotlinks.
After having selected this menu entry, you can select a point on a graph that will be used as the origin of the locator display (just below the menu bar). The fixed point is taken into account only when the display type of the locator is set to [DX,DY].
This entry is provided to remove a previously set fixed point and use the default again: point [0, 0].
The locator props popup allows you to customize the display of the locator, mainly its type and the format and precision of the display. You can use all the formats that are allowed in the graphs scales.
The preferences popup allows you to set miscellaneous properties of your Grace session, such as GUI behavior, cursor type, date reading hint and reference date used for calendar conversions.
This popup gathers all operations that are related to the ordering of data points inside a set or between sets. If you want to operate on the sets as a whole, you should use the set operations popup from the Edit menu. You can sort according to any coordinate (X, Y, DX, ...) in ascending or descending order, reverse the order of the points, join several sets into one, split one set into several others of equal lengths, or drop a range of points from a set. The set selector of the popup shows the number of points in each set in square brackets, like this: G0.S0[63]; the points are numbered from 0 to n-1.
The transformations sub-menu gives you access to all data-mining features of Grace.
Using evaluate expression allows you to create a set by applying an explicit formula to another set, or to parts of another set if you use regions restrictions.
All the classical mathematical functions are available (cos, sin, but also lgamma, j1, erf, ...). As usual, all trigonometric functions use radians by default, but you can specify a unit if you prefer, as in cos (x rad) or sin (3 * y deg). For the full list of available numerical functions and operators, see Operators and functions.
In the formula, you can use X, Y, Y1, ..., Y4 to denote any coordinate you like from the source set. An implicit loop will be used around your formula so if you say:
x = x - 4966.5
you will shift all points of your set 4966.5 units to the left.
You can use more than one set in the same formula, like this:
y = y - 0.653 * sin (x deg) + s2.y
which means you use both X and Y from the source set but also the Y coordinate of set 2. Beware that the loop is a simple loop over the indices; all the sets you use in such a hybrid expression should therefore have the same number of points, and point i of one set should really be related to point i of the other set. If your sets do not follow these requirements, you should first homogenize them using interpolation.
The histograms popup allows you to compute either standard or cumulative histograms from the Y coordinates of your data. Optionally, the histograms can be normalized to 1 (hence producing a PDF, Probability Distribution Function).
The bins can be either a linear mesh defined by its min, max, and length values, or a mesh formed by abscissas of another set (in which case abscissas of the set must form a strictly monotonic array).
This popup is devoted to direct and inverse Fourier transforms (actually, what is computed is a power spectrum). The default is to perform a direct transform on unfiltered data and to produce a set with the index as abscissa and magnitude as ordinate. You can filter the input data window through triangular, Hanning, Welch, Hamming, Blackman and Parzen filters. You can load magnitude, phase or coefficients and use either index, frequency or period as abscissas. You can choose between direct and inverse Fourier transforms. If you specify real input data, X is assumed to be equally spaced and ignored; if you specify complex input data X is taken as the real part and Y as the imaginary part.
If Grace was configured with the FFTW library (see configuration), then the DFT and FFT buttons really perform the same transform (so there is no speed-up in using FFT in this case). If you want Grace to use FFTW wisdom files, you should set several environment variables to name them.
The running average popup allows you to compute some values on a sliding window over your data. You choose both the value you need (average, median, minimum, maximum, standard deviation) and the length of the window and perform the operation. You can restrict the operation to the points belonging to (or outside of) a region.
The differences popup is used to compute approximations of the first derivative of a function with finite differences. The only choice (apart from the source set of course) is the type of differences to use: forward, backward or centered.
The seasonal differences popup is used to subtract the data of one period from the data of another (namely y[i] - y[i + period]). Beware that the period is entered in terms of index in the set and not in terms of abscissa!
The integration popup is used to compute the integral of a set and optionally to load it. The numerical value of the integral is shown in the text field after computation. Selecting "cumulative sum" in the choice item will create and load a new set with the integral and compute the end value, selecting "sum only" will only compute the end value.
This popup is used to interpolate a set on an array of alternative X coordinates. This is mainly used before performing some complex operations between two sets with the evaluate expression popup.
The sampling array can be either a linear mesh defined by its min, max, and length values, or a mesh formed by abscissas of another set.
Several interpolation methods can be used: linear, spline or Akima spline.
Note that if the sampling mesh is not entirely within the source set X bounds, evaluation at the points beyond the bounds will be performed using interpolation parameters from the first (or the last) segment of the source set, which can be considered a primitive extrapolation. This behaviour can be disabled by checking the "Strict" option on the popup.
The abscissas of the set being interpolated must form a strictly monotonic array.
The regression popup can be used to fit a set against polynomials or some specific functions (y=A*x^B, y=A*exp(B*x), y=A+B*ln(x) and y=1/(A+Bx)) for which a simple transformation of input data can be used to apply linear regression formulas.
You can load either the fitted values or the residuals.
The non-linear fit popup can be used for functions outside the scope of the simple regression methods. With this popup you provide the expression yourself, using a0, a1, ..., a9 to denote the fit parameters (as an example, you can say y = a0 * cos (a1 * x + a2)). You specify a tolerance, starting values and optional bounds, and run several steps before loading the results.
The fit characteristics (number of parameters, formula, ...) can be saved in a file and retrieved as needed using the file menu of the popup.
In the "Advanced" tab, you can additionally apply a restriction to the set(s) to be fitted (thus ignoring points not satisfying the criteria), use one of the preset weighting schemes or define your own (notice that "dY" in the preset "1/dY^2" one actually refers to the third column of the data set; use the "Custom" function if this doesn't make sense for your data set), and choose whether to load the fitted values or the residuals.
This popup can be used to compute autocorrelation of one set or cross correlation between two sets. You only select the set (or sets) and specify the maximum lag. A check box allows one to evaluate covariance instead of correlation. The result is normalized so that C(0) = 1.
You can use a set as a weight to filter another set. Only the Y part and the length of the weighting set are important, the X part is ignored.
The convolution popup is used to ... convolve two sets. You only select the sets and apply.
You can rotate, scale or translate sets using the geometric transformations popup. You specify the characteristics of each transform and the application order.
This popup provides two sampling methods. The first one is to choose a starting point and a step, the second one is to select only the points that satisfy a boolean expression you specify.
This popup is devoted to reducing huge sets (and then saving both computation time and disk space).
The interpolation method can be applied only to ordered sets: it is based on the assumption that if a real point and an interpolation based on neighboring points are closer than a specified threshold, then the point is redundant and can be eliminated.
The geometric methods (circle, ellipse, rectangle) can be applied to any set, they test each point in turn and keep only those that are not in the neighborhood of previous points.
Given a set of curves in a graph, extract a feature from each curve and use the values of the feature to provide the Y values for a new curve.
Read new sets of data in a graph. A graph selector is used to specify the graph where the data should go (except when reading block data, which are copied to graphs later on).
Reading as "Single set" means that if the source contains only one column of numeric data, one set will be created using the indices (from 1 to the total number of points) as abscissas and the read values as ordinates, and that if the source contains more than one column of data, the first two numeric columns will be used. Reading as "NXY" means that the first numeric column will provide the abscissas and all remaining columns will provide the ordinates of several sets. Reading as "Block data" means all columns will be read and stored, and another popup will allow you to select the abscissas and ordinates at will. It should be noted that block data are stored as long as you do not override them by a new read. You can still retrieve data from a block long after having closed all popups, using the set selector.
The set type can be one of the predefined set presentation types (see sets).
The data source can be selected as "Disk" or "Pipe". In the first case the text in the "Selection" field is considered to be a file name (it can be automatically set by the file selector at the top of the popup). In the latter case the text is considered to be a command which is executed and should produce the data on its standard output. On systems that allow it, the command can be a complete sequence of programs glued together with pipes.
If the source contains date fields, they should be automatically detected. Several formats are recognized (see appendix dates in grace). Calendar dates are converted to numerical dates upon reading.
The "Autoscale on read" menu controls which axes of the graph should be autoscaled upon reading in new sets.
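In project files and batch scripts, this setting appears as a parser parameter. A sketch, assuming the "autoscale onread" parameter as found in Grace project files:

```
# autoscale both axes when new sets are read in
AUTOSCALE ONREAD XYAXES
# or disable autoscaling on read altogether
AUTOSCALE ONREAD NONE
```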
This entry exists only if Grace has been compiled with support for the NetCDF data format (see configuration).
Save data sets in a file. A set selector is used to specify the set to be saved. The format to use for saving data points can be specified (the default value is "%16.8g"). A warning is displayed if a file with the same name already exists.
The plot appearance popup lets you set the time stamp properties and the background color of the page. The color is used outside of graphs and also on graphs where no specific background color is set. The time stamp is updated every time the project is modified.
The graph appearance popup can be displayed from both the plot menu and by double-clicking on a legend, title, or subtitle of a graph (see Clicks and double clicks). The graph selector at the top allows you to choose the graph you want to operate on; it also allows certain common actions through its popup menu (see graph selector). Most of the actions can also be performed using the "Edit" menu available from the popup menubar. The main tab includes the properties you will need most often (title, for example), and other tabs are used to fine-tune some less frequently used options (fonts, sizes, colors, placements).
If you need special characters or special formatting in your title or subtitle, see the typesetting section!
You can save graph appearance parameters or retrieve settings previously saved via the "File" menu of this popup. In the "Save parameters" dialog, you can choose to save settings either for the current graph only or for all graphs.
The set appearance popup can be displayed from both the plot menu and by double-clicking anywhere in a graph (see Clicks and double clicks). The set selector at the top allows you to choose the set you want to operate on; it also allows certain common actions through its popup menu (see set selector). The main tab gathers the properties you will need most often (line and symbol properties or legend string, for example), and other tabs are used to fine-tune some less frequently used options (drop lines, fill properties, annotated values and error bar properties, for example).
You should note that although the legend string for a set is entered in the set appearance popup, this is not sufficient to display it. Displaying legends is a graph-level decision, so the toggle is in the main tab of the graph appearance popup.
If you need special characters or special formatting in your legend, see the typesetting section!
The axis properties popup can be displayed from both the "Plot" menu and by double-clicking exactly on an axis (see Clicks and double clicks). The pulldown menu at the top allows you to select the axis you want to operate on. The "Active" toggle globally activates or deactivates the axis (all GUI elements are insensitive for deactivated axes). The start and stop fields depict the displayed range. Three types of scales are available: linear, logarithmic or reciprocal, and you can invert the axis (which normally increases from left to right and from bottom to top). The main tab includes the properties you will need most often (axis label, tick spacing and format, for example), and other tabs are used to fine-tune some less frequently used options (fonts, sizes, colors, placements, stagger, grid lines, special ticks, ...).
If you need special characters or special formatting in your label, see the typesetting section!
Most of the controls in the dialog should be self-explanatory. One that is not (and frequently missed) is the "Axis transform" input field in the "Tick labels" tab. Entering there e.g. "-$t" will make the tick labels show negates of the real coordinates their ticks are placed at. You can use any expression understood by the interpreter (see command interpreter).
Once you have set the options as you want, you can apply them. One useful feature is that you can set several axes at once with the bottom pulldown menu (current axis, all axes current graph, current axis all graphs, all axes all graphs). Beware that you always apply the properties of all tabs, not only the selected one.
This toggle item shows or hides the locator below the menu bar.
This toggle item shows or hides the status string below the canvas.
This toggle item shows or hides the tool bar at the left of the canvas.
Set the properties of the display device. It is the same dialog as in Print setup.
This menu item triggers a redrawing of the canvas.
This menu item causes an update of all GUI controls. Usually, everything is updated automatically, unless one makes modifications by entering commands in the Command tool.
Command driven version of the interface to Grace. Here, commands are typed at the "Command:" text item and executed when <Return> is pressed. The command will be parsed and executed, and the command line is placed in the history list. Items in the history list can be recalled by simply clicking on them with the left mouse button. For a reference on the Grace command interpreter, see Command interpreter.
Not written yet...
Not written yet...
Not written yet...
The console window displays errors and results of some numerical operations, e.g. nonlinear fit (see Non-linear fit). The window is popped up automatically whenever an error occurs or new result messages appear. This can be altered by checking the "Options/Popup only on errors" option.
Click on any element of the interface to get context-sensitive help on it. Only partially implemented at the moment.
Browse the Grace user's guide.
Browse the Grace tutorial.
Frequently Asked Questions with answers.
The list of changes during the Grace development.
The whole tree of submenus each loading a sample plot.
Use this to send your suggestions or bug reports.
Grace licensing terms will be displayed (GPL version 2).
A popup with basic info on the software, including some configuration details. More details can be found when running Grace with the "-version" command line flag.
The interpreter parses its input in a line-by-line manner. There may be several statements per line, separated by semicolons (;). The maximal line length is 4 kbytes (hardcoded). The parser is case-insensitive and ignores lines beginning with the "#" sign.
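For instance, the following fragment is parsed as one comment line followed by a line holding two statements (the commands themselves appear in the library example later in this guide):

```
# switch the first set on and give it a symbol
s0 on; s0 symbol 1
```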
Not finished yet...
In numerical expressions, the infix format is used. Arguments of both operators and functions can be either scalars or vector arrays. Arithmetic, logical, and comparison operators are given in tables below.
Another conditional operator is the "?:" (or ternary) operator, which operates as in C and many other languages.
(expr1) ? (expr2) : (expr3);
This expression evaluates to expr2 if expr1 evaluates to TRUE, and expr3 if expr1 evaluates to FALSE.
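For example, the ternary operator can be used in an evaluate-expression formula to take the absolute value of the ordinates of a set:

```
# y becomes |y| for every point of the set
y = (y < 0) ? (-y) : (y)
```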
Methods of directly manipulating the data, corresponding to the Data|Transformation menu, are described in table transformations. To evaluate expressions, you can directly submit them to the command interpreter as you would in the evaluate expression window. As an example, S1.X = S1.X * 0.000256 scales the x-axis coordinates. The functionality of the 'Sample points' menu entry can be obtained through RESTRICT.
Not finished yet...
For producing "hard copy", several parameters can be set via the command interpreter. They are summarized in table Device parameters.
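A typical batch fragment for producing hard copy might look as follows. This is a sketch; the exact device names and available parameters are those listed in the Device parameters table:

```
# select the PNG driver, name the output file, and print
HARDCOPY DEVICE "PNG"
PRINT TO "plot.png"
PRINT
```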
User-defined variables are set and used according to the syntax described in table User variables.
Not finished yet...
We divide the commands pertaining to the properties and appearance of graphs into those which directly manipulate the graphs and those that affect the appearance of graph elements---the parameters that can appear in a Grace project file.
General graph creation/annihilation and control commands appear in table Graph operations.
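As a sketch of such commands (assuming the graph-selection syntax of the Grace parser; consult the Graph operations table for the authoritative list):

```
# switch graph 1 on and make it the current graph,
# then annihilate graph 2
G1 ON
WITH G1
KILL G2
```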
Setting the active graph and its type is accomplished with the commands found in table Graph selection parameters.
The axis range and scale of the current graph as well as its location
on the plot viewport are set with the commands listed in table
Axis parameters.
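A sketch of a few such commands, in the form they also take in project files (parameter names assumed from the Axis parameters table):

```
# show x from 1 to 1000 on a logarithmic scale,
# and place the graph on the plot viewport
WORLD XMIN 1
WORLD XMAX 1000
XAXES SCALE LOGARITHMIC
VIEW 0.15, 0.15, 1.15, 0.85
```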
The commands to set the appearance and textual content of titles and legends are given in table Titles and legends.
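A sketch of these commands as they appear in project files (see the Titles and legends table for the full set):

```
# title and subtitle of the current graph
TITLE "Sample plot"
SUBTITLE "a subtitle"
# legend string of set 0, and display of legends
S0 LEGEND "measured data"
LEGEND ON
```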
Not finished yet...
Again, as with the graphs, we separate those parser commands that manipulate the data in a set from the commands that determine parameters---elements that are saved in a project file.
Operations for set I/O are summarized in table Set input, output, and creation. (Note that this is incomplete and only lists input commands at the moment.)
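A sketch of reading data from the interpreter, assuming the READ and BLOCK commands listed in that table:

```
# read a simple XY file into the current graph
READ XY "data.dat"
# read a multi-column file as block data,
# then create a set from columns 1 and 3
READ BLOCK "table.dat"
BLOCK XY "1:3"
```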
The parser commands analogous to the Data|Data set operations dialogue
can be found in table
Set operations.
Not Finished yet...
Not written yet...
For all devices, Grace uses Type1 fonts. Both PFA (ASCII) and PFB (binary) formats can be used.
The file responsible for the font configurations of Grace is
fonts/FontDataBase. The first line contains a positive integer
specifying the number of fonts declared in that file. All remaining lines
contain declarations of one font each, composed out of three fields:
Here is the default
FontDataBase file:
14
Times-Roman Times-Roman n021003l.pfb
Times-Italic Times-Italic n021023l.pfb
Times-Bold Times-Bold n021004l.pfb
Times-BoldItalic Times-BoldItalic n021024l.pfb
Helvetica Helvetica n019003l.pfb
Helvetica-Oblique Helvetica-Oblique n019023l.pfb
Helvetica-Bold Helvetica-Bold n019004l.pfb
Helvetica-BoldOblique Helvetica-BoldOblique n019024l.pfb
Courier Courier n022003l.pfb
Courier-Oblique Courier-Oblique n022023l.pfb
Courier-Bold Courier-Bold n022004l.pfb
Courier-BoldOblique Courier-BoldOblique n022024l.pfb
Symbol Symbol s050000l.pfb
ZapfDingbats ZapfDingbats d050000l.pfb
For text rastering, three types of files are used.
.pfa/.pfb files: These contain the character outline descriptions. The files are assumed to be in the fonts/type1 directory; these are the filenames specified in the FontDataBase configuration file.
.afm files: These contain high-precision font metric descriptions as well as some extra information, such as kerning and ligature information for a particular font. It is assumed that the filename of a font metric file has the same basename as the respective font outline file, but with the .afm extension; the metric files are expected to be found in the fonts/type1 directory, too.
.enc files: These contain encoding arrays in a special but simple form. They are only needed if someone wants to load a special encoding to re-encode a font. Their place is fonts/enc.
It is possible to use custom fonts with Grace. One mostly needs to use
extra fonts for the purpose of localization. For many European
languages, the standard fonts supplied with Grace should contain all the
characters needed, but encoding may have to be adjusted. This is done by
putting a
Default.enc file with proper encoding scheme into the
fonts/enc directory. Grace comes with a few encoding files in
the directory; more can be easily found on the Internet. (If the
Default.enc file doesn't exist, the IsoLatin1 encoding will be
used). Notice that for fonts having an encoding scheme in themselves
(such as the Symbol font, and many nationalized fonts) the default
encoding is ignored.
If you do need to use extra fonts, you should modify the
FontDataBase file accordingly, obeying its format. However,
if you are going to exchange Grace project files with other people who
do not have the extra fonts configured, an important thing is to define
reasonable fall-back font names.
For example, let us assume I use Hebrew fonts, and the configuration file has lines like these:
...
Courier-Hebrew Courier courh___.pfa
Courier-Hebrew-Oblique Courier-Oblique courho__.pfa
...
My colleague, who lives in Russia, uses Cyrillic fonts with Grace configured like this:
...
Cronix-Courier Courier croxc.pfb
Cronix-Courier-Oblique Courier-Oblique croxco.pfb
...
Thus, with properly configured national fonts, you can make localized annotations for plots intended for internal use at your institution, while remaining able to exchange files with colleagues from abroad. People who have ever tried to do this with MS Office applications should appreciate the flexibility :-).
The grace_np library is a set of compiled functions that allows you to launch and drive a Grace subprocess from your C or Fortran application. Functions are provided to start the subprocess, to send it commands or data, to stop it or detach from it.
There is no Fortran equivalent for the GracePrintf function; you should format all the data and commands yourself before sending them with GraceCommandF.
The Grace subprocess listens for the commands you send and interprets them as if they were given in a batch file. You can send any command you like (redraw, autoscale, ...). If you want to send data, you should include them in a command like "g0.s0 point 3.5, 4.2".
Apart from the fact that it monitors the data sent via an anonymous pipe, the Grace subprocess is a normal process. You can interact with it through the GUI. Note that no error can be sent back to the parent process. If your application sends erroneous commands, an error popup will be displayed by the subprocess.
If you exit the subprocess while the parent process is still using it, the broken pipe will be detected. An error code will be returned to every further call to the library (but you can still start a new process if you want to manage this situation).
Here is an example use of the library; you will find this program in the distribution.
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <grace_np.h>

#ifndef EXIT_SUCCESS
#  define EXIT_SUCCESS 0
#endif
#ifndef EXIT_FAILURE
#  define EXIT_FAILURE -1
#endif

void my_error_function(const char *msg)
{
    fprintf(stderr, "library message: \"%s\"\n", msg);
}

int main(int argc, char* argv[])
{
    int i;

    GraceRegisterErrorFunction(my_error_function);

    /* Start Grace with a buffer size of 2048 and open the pipe */
    if (GraceOpen(2048) == -1) {
        fprintf(stderr, "Can't run Grace. \n");
        exit(EXIT_FAILURE);
    }

    /* Send some initialization commands to Grace */
    GracePrintf("world xmax 100");
    GracePrintf("world ymax 10000");
    GracePrintf("xaxis tick major 20");
    GracePrintf("xaxis tick minor 10");
    GracePrintf("yaxis tick major 2000");
    GracePrintf("yaxis tick minor 1000");

    GracePrintf("s0 on");
    GracePrintf("s0 symbol 1");
    GracePrintf("s0 symbol size 0.3");
    GracePrintf("s0 symbol fill pattern 1");
    GracePrintf("s1 on");
    GracePrintf("s1 symbol 1");
    GracePrintf("s1 symbol size 0.3");
    GracePrintf("s1 symbol fill pattern 1");

    /* Display sample data */
    for (i = 1; i <= 100 && GraceIsOpen(); i++) {
        GracePrintf("g0.s0 point %d, %d", i, i);
        GracePrintf("g0.s1 point %d, %d", i, i * i);
        /* Update the Grace display after every ten steps */
        if (i % 10 == 0) {
            GracePrintf("redraw");
            /* Wait a second, just to simulate some time needed for
               calculations. Your real application shouldn't wait. */
            sleep(1);
        }
    }

    if (GraceIsOpen()) {
        /* Tell Grace to save the data */
        GracePrintf("saveall \"sample.agr\"");
        /* Flush the output buffer and close Grace */
        GraceClose();
        /* We are done */
        exit(EXIT_SUCCESS);
    } else {
        exit(EXIT_FAILURE);
    }
}
When the FFTW capabilities are compiled in, Grace looks at two environment variables to decide what to do with the FFTW 'wisdom' capabilities. First, a quick summary of what this is. The FFTW package is capable of adaptively determining the most efficient factorization of a set to give the fastest computation. It can store these factorizations as 'wisdom', so that if a transform of a given size is to be repeated, it does not have to re-adapt. The good news is that this seems to work very well. The bad news is that, the first time a transform of a given size is computed, if it is not a sub-multiple of one already known, it takes a LONG time (seconds to minutes).
The first environment variable is GRACE_FFTW_WISDOM_FILE. If this is set to the name of a file which can be read and written (e.g., $HOME/.grace_fftw_wisdom) then Grace will automatically create this file (if needed) and maintain it. If the file is read-only, it will be read, but not updated with new wisdom. If the symbol GRACE_FFTW_WISDOM_FILE either doesn't exist, or evaluates to an empty string, Grace will drop the use of wisdom, and will use the fftw estimator (FFTW_ESTIMATE flag sent to the planner) to guess a good factorization, instead of adaptively determining it.
The second variable is GRACE_FFTW_RAM_WISDOM. If this variable is defined to be non-zero, and GRACE_FFTW_WISDOM_FILE variable is not defined (or is an empty string), Grace will use wisdom internally, but maintain no persistent cache of it. This will result in very slow execution times the first time a transform is executed after Grace is started, but very fast repeats. I am not sure why anyone would want to use wisdom without writing it to disk, but if you do, you can use this flag to enable it.
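For example, to enable a persistent wisdom cache, one might set the first variable in a shell startup file. The variable name comes from the text above; the file path is only an example:

```shell
# Enable a persistent FFTW wisdom cache for Grace.
# GRACE_FFTW_WISDOM_FILE must name a file Grace can read and write.
export GRACE_FFTW_WISDOM_FILE="$HOME/.grace_fftw_wisdom"

# Create the file if it does not exist yet, so Grace can update it:
touch "$GRACE_FFTW_WISDOM_FILE"
```

If the file is later made read-only, Grace will still read the accumulated wisdom but stop adding to it, as described above.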
Grace can access external functions present in either system or third-party shared libraries or modules specially compiled for use with Grace.
One must make sure, however, that the external function is of one
of the types supported by Grace:
The return values of functions are assumed to be of the
double type.
Note, that there is no difference from the point of view of function prototype between parameters and variables; the difference is in the way Grace treats them - an attempt to use a vector expression as a parameter argument will result in a parse error.
Let us consider a few examples.
Caution: the examples provided below (paths and compiler flags) are valid for Linux/ELF with gcc. On other operating systems, you may need to refer to compiler/linker manuals or ask a guru.
Suppose I want to use function
pow(x,y) from the Un*x math
library (libm). Of course, you can use the "^" operator defined
in the Grace language, but here, for the sake of example, we
want to access the function directly.
The command to make it accessible by Grace is
USE "pow" TYPE f_of_dd FROM "/usr/lib/libm.so"
Try to plot y = pow(x,2) and y = x^2 graphs (using, for example, "create new -> Formula" from any set selector) and compare.
Now, let us try to write a function ourselves. We will define
function
my_function which simply returns its (second)
argument multiplied by integer parameter transferred as the
first argument.
In a text editor, type in the following C code and save it as "my_func.c":
double my_function (int n, double x)
{
    double retval;
    retval = (double) n * x;
    return (retval);
}
OK, now compile it:
$gcc -c -fPIC my_func.c
$gcc -shared my_func.o -o /tmp/my_func.so
(You may strip it to save some disk space):
$strip /tmp/my_func.so
That's all! Now let's make it visible to Grace as "myf" - we are too lazy to type the very long string "my_function" many times.
USE "my_function" TYPE f_of_nd FROM "/tmp/my_func.so" ALIAS "myf"
A more serious example. There is a special third-party library available on your system which includes a function that is very important to you yet very difficult to program from scratch, and you want to use it with Grace. But the function prototype is NOT one of the predefined types. The solution is to write a simple wrapper function. Here is how:
Suppose, the name of the library is "special_lib" and the
function you are interested in is called "special_func" and
according to the library manual, should be accessed as
void
special_func(double *input, double *output, int parameter).
The wrapper would look like this:
double my_wrapper(int n, double x)
{
    extern void special_func(double *x, double *y, int n);
    double retval;
    (void) special_func(&x, &retval, n);
    return (retval);
}
Compile it:
$gcc -c -fPIC my_wrap.c
$gcc -shared my_wrap.o -o /tmp/my_wrap.so -lspecial_lib -lblas
$strip /tmp/my_wrap.so
Note that I added
-lblas assuming that the special_lib
library uses some functions from the BLAS. Generally, you have
to add all libraries which your module depends on (and
all libraries those libraries rely upon etc.), as if you wanted
to compile a plain executable.
Fine, make Grace aware of the new function
USE "my_wrapper" TYPE f_of_nd FROM "/tmp/my_wrap.so" ALIAS "special_func"
so we can use it with its original name.
An example of using Fortran modules.
Here we will try to achieve the same functionality as in Example 2, but with the help of F77.
      DOUBLE PRECISION FUNCTION MYFUNC (N, X)
      IMPLICIT NONE
      INTEGER N
      DOUBLE PRECISION X
C
      MYFUNC = N * X
C
      RETURN
      END
As opposed to C, there is no way to call such a function from Grace directly - the problem is that in Fortran, all arguments to a function (or subroutine) are passed by reference. So we need a wrapper:
double myfunc_wrapper(int n, double x)
{
    extern double myfunc_(int *, double *);
    double retval;
    retval = myfunc_(&n, &x);
    return (retval);
}
Note that most f77 compilers by default append an underscore to
function names and convert all names to lower case, hence I refer
to the Fortran function MYFUNC from my C wrapper as myfunc_, but
in your case it may be different!
Let us compile everything:
$g77 -c -fPIC myfunc.f
$gcc -c -fPIC myfunc_wrap.c
$gcc -shared myfunc.o myfunc_wrap.o -o /tmp/myfunc.so -lf2c -lm
$strip /tmp/myfunc.so
And finally, inform Grace about this new function:
USE "myfunc_wrapper" TYPE f_of_nd FROM "/tmp/myfunc.so" ALIAS "myfunc"
In general, the method outlined in the examples above can be used on OS/2, too. However, you have to create a DLL (Dynamic Link Library), which is a bit more tricky on OS/2 than on most Un*x systems. Since Grace was ported by using EMX, we also use it to create the examples; however, other development environments should work as well (be sure to use the _System calling convention!). We refer to Example 2 only. Example 1 might demonstrate that DLLs can have their entry points (i.e. exported functions) callable via ordinals only, so you might not know how to access a specific function without some research. First, compile the source from Example 2 to "my_func.obj":
gcc -Zomf -Zmt -c my_func.c -o my_func.obj
Then you need to create a linker definition file "my_func.def" which contains some basic info about the DLL and declares the exported functions.
LIBRARY my_func INITINSTANCE TERMINSTANCE CODE LOADONCALL DATA LOADONCALL MULTIPLE NONSHARED DESCRIPTION 'This is a test DLL: my_func.dll' EXPORTS my_function
(don't forget about the 8 characters limit on the DLL name!). Finally link the DLL:
gcc my_func.obj my_func.def -o my_func.dll -Zdll -Zno-rte -Zmt -Zomf
(check out the EMX documentation about the compiler/linker flags used here!) To use this new library function within Grace you may either put the DLL in the LIBPATH and use the short form:
USE "my_function" TYPE f_of_nd FROM "my_func" ALIAS "myf"
or put it in an arbitrary path which you need to specify explicitly then:
USE "my_function" TYPE f_of_nd FROM "e:/foo/my_func.dll" ALIAS "myf"
(as for most system APIs, you may use Un*x-like forward slashes within the path!)
Grace permits quite complex typesetting on a per string basis. Any string displayed (titles, legends, tick marks,...) may contain special control codes to display subscripts, change fonts within the string etc.
Example:
F\sX\N(\xe\f{}) = sin(\xe\f{})\#{b7}e\S-X\N\#{b7}cos(\xe\f{})
prints roughly
F_X(ε) = sin(ε)·e^(-X)·cos(ε)
using the string's initial font; ε prints as epsilon from the Symbol font.
NOTE:
Characters from the upper half of the char table can be entered directly
from the keyboard, using appropriate
xmodmap(1) settings, or
with the help of the font tool ("Window/Font tool").
Grace can output plots using several device backends. The list of available devices can be seen (among other stuff) by specifying the "-version" command line switch.
Some of the output devices accept several configuration options. You can set the options by passing a respective string to the interpreter using the "DEVICE "devname" OP "options"" command (see Device parameters). A few options can be passed in one command, separated by commas.
We use two calendars in Grace: the one that was established in 532 by Denys and lasted until 1582, and the one that was created by Luigi Lilio (Aloysius Lilius) and Christoph Klau (Christophorus Clavius) for pope Gregorius XIII. Both use the same months (they were introduced under emperor Augustus, a few years after the Julian calendar was introduced; both Julius and Augustus were honored by having a month named after them).
The leap years occurred regularly in Denys's calendar: once every four years. There is no year 0 in this calendar (the leap year -1 was just before year 1). This calendar did not comply with the earth's motion, and the dates were slowly shifting with regard to astronomical events.
This was corrected in 1582 by introducing Gregorian calendar. First a ten days shift was introduced to reset correct dates (Thursday October the 4th was followed by Friday October the 15th). The rules for leap years were also changed: three leap years are removed every four centuries. These years are those that are multiple of 100 but not multiple of 400: 1700, 1800, and 1900 were not leap years, but 1600 and 2000 were (will be) leap years.
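The leap-year rule just described is simple to express in code. The following is a plain-Python sketch (not Grace code) of the Gregorian test:

```python
def is_gregorian_leap(year):
    """Gregorian rule: every fourth year is a leap year, except
    multiples of 100 that are not multiples of 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The years mentioned above: 1700, 1800 and 1900 are not leap
# years, while 1600 and 2000 are.
print([y for y in (1600, 1700, 1800, 1900, 2000) if is_gregorian_leap(y)])
# [1600, 2000]
```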
We still use the Gregorian calendar today, but we now have several time scales for increased accuracy. The International Atomic Time (TAI) is a linear scale: the best scale to use for scientific reference. The Coordinated Universal Time (UTC, often confused with Greenwich Mean Time) is a legal time that is almost synchronized with the earth's motion. However, since the earth is slightly slowing down, leap seconds are introduced from time to time in UTC (about one second every 18 months). UTC is not a continuous scale! When a leap second is introduced by the International Earth Rotation Service, this is published in advance and the legal time sequence is as follows: 23:59:59, followed one second later by 23:59:60, followed one second later by 00:00:00. At the time of this writing (1999-01-05) the difference between TAI and UTC was 32 seconds, and the last leap second was introduced on 1998-12-31.
These calendars make it possible to represent any date from the mists of the past to the fog of the future, but they are not convenient for computation. Another time scale is possible: counting only the days from a reference. Such a time scale was introduced by Joseph-Juste Scaliger (Josephus Justus Scaliger) in 1583. He decided to use "-4713-01-01T12:00:00" as a reference date because it was at the same time a Monday, the first of January of a leap year; there was an exact number of 19-year Meton cycles between this date and year 1 (for Easter computation); and it was at the beginning of a 15-year Roman indiction cycle. The day number counted from this reference is traditionally called the Julian day, but it really has nothing to do with the Julian calendar.
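Scaliger's day count can be computed directly from a Gregorian calendar date. Below is a common integer formula, given as a plain-Python sketch rather than Grace's internal code; note that the Julian day number labels the day starting at noon:

```python
def julian_day_number(year, month, day):
    """Julian day number of a Gregorian calendar date.

    Day 0 is Scaliger's reference epoch (noon, January 1st,
    4713 BC). Uses a standard integer formula for the Gregorian
    calendar."""
    a = (14 - month) // 12          # 1 for January/February, else 0
    y = year + 4800 - a             # shifted, March-based year count
    m = month + 12 * a - 3          # March = 0, ..., February = 11
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

print(julian_day_number(2000, 1, 1))    # 2451545
print(julian_day_number(1582, 10, 15))  # 2299161, the first Gregorian day
```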
Grace stores dates internally as real numbers counted from a reference date. The default reference date is the one chosen by Scaliger; it is a classical reference for astronomical events. It can be modified for a single session using the Edit->Preferences popup of the GUI. If you often work with a specific reference date, you can set it for every session with a REFERENCE DATE command in your configuration file (see Default template).
The following date formats are supported (hour, minutes and seconds are always optional):
One should be aware that Grace does not allow a space within a data column, as spaces are used to separate fields. You should always use another separator (:/.- or better T) between date and time in data files. The GUI, the batch language, and the command line flags do not have this limitation; you can use spaces there without any problem. The T separator comes from the ISO8601 standard. Grace supports its use in the european and us formats as well.
You can also provide a hint about the format ("ISO8601",
"european", "us") using the -datehint command line flag or the
ref popup of the GUI.
The formats are tried in the following order: first the hint
given by the user, then iso, european, and us (there is no
ambiguity between calendar formats and numerical formats, and
therefore no order is specified for them). By default, years are left untouched, so 99 is a date far
away in the past. This behavior can be changed with the
Edit->preferences popup, or with the
DATE WRAP on and
DATE WRAP YEAR year
commands. Suppose, for example, that the wrap year is chosen as
1950; if the year is between 0 and 99 and is written with two or
fewer digits, it is mapped to the present era as follows:
range [00 ; 49] is mapped to [2000 ; 2049]
range [50 ; 99] is mapped to [1950 ; 1999]
with a wrap year set to 1970, the mapping would have been:
range [00 ; 69] is mapped to [2000 ; 2069]
range [70 ; 99] is mapped to [1970 ; 1999]
this is reasonably Y2K compliant and is consistent with current use. Specifying year 1 is still possible using more than two digits, as follows: "0001-03-04" is unambiguously March the 4th, year 1. The inverse transform is applied to dates written by Grace, for example as tick labels. Using only two digits for years is not recommended; we introduce a wrap year + 100 bug here, so this feature should be removed at some point in the future ...
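The wrap rule can be summarized in a few lines of code. This plain-Python sketch (not Grace's internal implementation) maps a two-digit year given a wrap year such as the one set by DATE WRAP YEAR:

```python
def wrap_two_digit_year(year, wrap_year=1950):
    """Map a year in [0, 99] into [wrap_year, wrap_year + 99].

    Years outside 0..99 (i.e. those written with more digits)
    are left untouched."""
    if 0 <= year <= 99:
        year += wrap_year - wrap_year % 100   # move into the wrap century
        if year < wrap_year:
            year += 100                       # wrap into the next century
    return year

print(wrap_two_digit_year(49))        # 2049
print(wrap_two_digit_year(50))        # 1950
print(wrap_two_digit_year(69, 1970))  # 2069
```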
The date scanner can be used for both Denys's and Gregorian calendars. Nonexistent dates are detected; they include year 0, dates between 1582-10-05 and 1582-10-14, February 29th of non-leap years, months below 1 or above 12, ... The scanner does not take leap seconds into account: you can think of it as working only in International Atomic Time (TAI) and not in Coordinated Universal Time (UTC). If you find yourself in a situation where you need UTC, a very precise scale, and should take leap seconds into account ... you should convert your data yourself (for example to International Atomic Time). But if you bother with that, you probably already know what to do.
This is a very brief guide describing problems and workarounds for reading in project files saved with Xmgr. You should read the docs or just play with Grace to test new features and controls.
The problem I have is there's supposed to be a certain number of 0s in front of the number when the final output is made.
Here's what the output is suppose to look like:
Please enter a positive number (no more than 10 digits): 12345
The number you entered has 5 digits.
Big Number: 0000012345
here's my code:
#include <iostream>
using namespace std;

/*
 Author:  Dominique Pope
 Class:   IS 2043
 Date:    June 25, 2009
 Purpose: To make a program where the user can enter a number, and the
          program will mathematically count how many digits the number is.
*/

int main()
{
    // declare variables
    const long SIZE = 10;
    long num[SIZE] = {0,0,0,0,0,0,0,0,0,0};

    // Ask for a number from user and store it.
    for (long i = 0; i < SIZE; i++)
    {
        cout << "Please enter a positive number (no more than 10 digits): " << endl;
        cin >> num[i];
        cout << "The number you entered has " << num[i]%10 << " digits." << endl;
        cout << "Big Number: " << num[SIZE] << endl;
    }

    if (num[SIZE] < 0 || num[SIZE] > SIZE + 1)
    {
        cout << "Invalid number! Try again." << endl;
        return 0;
    }

    // Individually count the number of digits
    // The individually retrievable digits are placed inside the array.

    system("PAUSE");
    return 0;
}

/* Output of program should be:
   Please enter a positive number (no more than 10 digits): 12345
   The number you entered has 5 digits.
   Big Number: 0000012345
*/
all i need is for someone to explain how i can get those zeros in...
Now that the Wagtail CMS is gearing up for its 1.0 release, I wanted to take some time to introduce you to the all around best and most flexible Django CMS currently available. Wagtail has been around for a while, but doesn’t seem to get the attention I believe it deserves.
We’ve used Wagtail recently on a number of projects, and the overall experience has been great. It strikes the right balance of making the easy things easy and the hard things not only possible, but relatively easy as well.
Feature Highlights
- Non-technical end-user ease. Custom admin with great UI/UX
- Plays nicely alongside any other Django apps on your site
- Easy admin customization and branding
- Flexibility of CMS models for more structured data beyond just having a “Page”
- Built in Form builder system
- Great Image and Document/File support and UI
- StreamField for ultimate flexibility allowing you to define and organize small blocks of content
- Ability to organize admin tabs and field layouts
- Control/Flexibility of what page models can be added below certain URLs
- Hooks into ElasticSearch for searching
- Compatible with Varnish and static site generators to help with performance at scale
Admin Interface
Let’s face it, the Django admin leaves a lot to be desired. It’s very CRUD-oriented and confusing for all but the most technical of users. Even giving it a facelift with packages like Django Suit, or swapping it out entirely for something like Grappelli, isn’t really what your end users want. Don’t get me wrong: both of these packages are great and you should check them out, but they both simply can’t get past all of the hurdles and pain that come with attempting to customize the Django admin beyond a certain point.
Wagtail comes out of the box with its own custom admin interface that is specifically geared toward a typical CMS workflow. Check out this great promo video about Wagtail and you’ll see what I mean. No seriously, go watch it. I’ll wait.
Isn’t that great looking? My first thought when seeing the Wagtail video for the first time was “Nice, but I bet customizing it is a huge pain in the…”. Thankfully, I gave it a whirl anyway and came to find that customizing the Wagtail admin is actually pretty simple.
There is a great editor’s guide in the docs that is all most end users need to get started. So far in our use, the only thing that confuses users is the Explorer, Root pages, and the hierarchical nature of pages in general. Even those are small issues as one quick chat with the user and they grok it and are on their way.
Oh, and a huge bonus: the admin is surprisingly usable on both mobile and tablets!
Ways to customize the Wagtail Admin
There are a few ways you can customize the admin. First off, you can determine what fields are visible to your users and on what tab of the interface with just a bit of configuration. Consider this the bare bones entry level of “customization” you’ll be doing.
Customizing the branding of the admin is also a very frequent need. Techies often don’t see the point, but if you can put on your end user hat for a moment, it seems weird and often confusing to come to a login page that reads “Welcome to the Wagtail CMS Admin”.
If you install django-overextends you can easily customize the logo, login message, and welcome messages used by the CMS to match your user’s expectations.
For me, these two customization options are what I expect from a CMS. However, Wagtail goes a step further and gives you hooks to allow for much richer customizations. You can do things like:
- Add items to the Wagtail User Bar that appears for logged in users on the right side of the page much like Django Debug Toolbar
- Add or remove panels from the main Wagtail admin homepage
- Add or remove summary items (Pages, Documents, Images, etc) from the homepage
- Use hooks for taking behind-the-scenes actions, or if you want your own customized Responses after creating, editing, or deleting a Page
- Add your own admin menu items, which can go to any Django views or offsite URLs you desire.
I used that last ability to add admin menu items with great success on TEDxLawrence.com. We needed a way for our Speaker Committee to view the speaker submissions, vote, and make comments. Instead of attempting to shoe horn all of this into a Django Admin or even Wagtail Admin universe, I simply linked off to entirely customized Class Based Views to give me complete end to end control.
Wagtail Pages
Most content management systems operate around the concept of a page that usually has a title, some sort of short description of the page, and then the page content itself. Many give you nice WYSIWYG editing tools to make things like adding headers, lists, bold, and italics relatively easy.
The problem comes when what you are wanting to represent on a page doesn’t fit cleanly in this data model. Do you just shove it into the content field? Maybe your CMS has some clunky mechanism to relate additional content to a page or via some plugin system. Or maybe you’re just out of luck, punt, and load some stuff with javascript from the template.
With Wagtail you build your own models that inherit from its Page model. This gives you the ability to customize specific fields for specific data and ultimately removes a lot of the usual shenanigans one goes through to fit your data concepts into your CMS’ vision of the world.
This probably works best as an example. Let’s build two different types of pages. A simple blog type page and a more complex Staff Member page one might use for individual staff members.
Our simple page can look like this:
from django.db import models

from wagtail.wagtailcore.models import Page
from wagtail.wagtailcore.fields import RichTextField
from wagtail.wagtailadmin.edit_handlers import FieldPanel


class BlogPage(Page):
    sub_title = models.CharField(max_length=500, blank=True)
    published = models.DateField()
    author = models.CharField(max_length=100)
    summary = RichTextField(blank=True)
    body = RichTextField()
    closing_content = RichTextField(blank=True)

    content_panels = [
        FieldPanel('title'),
        FieldPanel('sub_title'),
        FieldPanel('published'),
        FieldPanel('author'),
        FieldPanel('summary'),
        FieldPanel('body'),
        FieldPanel('closing_content'),
    ]
Wagtail automatically sets up some fields for you, like title, the slug of the page, start/end visibility times, and SEO/meta related fields so you just need to define the fields you want beyond those.
Here we’ve defined some additional structured information we want on a blog post. A possible sub_title and summary information, an author, the date the entry was published, and the usual body field. We’ve also added an additional closing_content field we might use for a ending call to action or other content that we want highlighted and shown below the post.
All you need to do is add this to a Django app’s models.py, run makemigrations and migrate and you’re good to go.
Now let’s make a slightly more complicated Staff Page:
DEPARTMENT_CHOICES = (
    ('admin', 'Administration'),
    ('accounting', 'Accounting'),
    ('marketing', 'Marketing'),
    ('sales', 'Sales'),
    ('engineer', 'Engineering'),
)


class StaffPage(Page):
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)
    active = models.BooleanField(default=True)
    start_date = models.DateField()
    end_date = models.DateField(blank=True, null=True)
    department = models.CharField(max_length=50, choices=DEPARTMENT_CHOICES)
    email = models.EmailField()
    twitter = models.CharField(max_length=50)
    short_bio = RichTextField(blank=True)
    bio = RichTextField(blank=True)
    education = RichTextField(blank=True)
    work_history = RichTextField(blank=True)

    # Panel options left out for brevity
As you can see the StaffPage model has quite a bit more fields, most of them optional, which allows the Staff member to update their information over time and not get strangled into putting ‘Bio coming soon’ into an otherwise required field.
Pretty simple right? You’re probably thinking there is some catch, luckily you’re wrong. It’s really pretty much that simple. Easy things easy right?
Harder things in Wagtail
So what are the hard things in Wagtail? Well it’s mostly just getting familiar with the system in general. A few things that may trip you up are:
- You can’t have a field named url on your derived models, as Wagtail uses that field name in the parent Page model. Unfortunately, if you do add one, which I’ve done more times than I care to admit, you get the not very useful error “can’t set attribute” and not much else to go on.
- On many listing type pages it’s fine to simply show all of the items, with pagination, in some sort of chronological order. Other times users want to be able to manually curate what shows up on given pages. Wagtail makes this relatively easy: you can define a ForeignKey relationship to other pages using a through type model and use a PageChooserPanel to give the user a nice interface for doing this. The user can also manually order them right in the admin, no additional work necessary.
- Limiting which pages can be created as children (aka beneath) a page can be handled by setting a list named parent_page_types on the child model. Then it can only be added below pages of those defined types. On complex sites with lots of different page types this helps keep the Page Choosing and Creation option choices to a manageable level for the users. And it also obviously helps to keep users from creating the wrong types of pages in the wrong parts of the site.
- Wagtail currently doesn’t have a great story around building navigation menus for you, but there are a dozen reusable Django apps to help handle that. Often a site’s menu remains relatively static and isn’t edited day-to-day anyway.
- Supporting multiple sites with the same CMS. This isn’t technically hard, but more conceptually difficult to grok. Wagtail supports having multiple sites, via its wagtailsites app. The way this works is you simply set the Root page for each hostname and it basically takes it from there. However, in most circumstances it’s probably easier and cleaner to just have two different instances of Wagtail and use different databases.
Images and Documents
Documents are files of any type you want to be able to upload into the system. This handles any sort of situations where a user needs to upload a PDF, Excel, or Word document and be able to link to it from any other bit of content.
Images are exactly what you think, however you can define your own base model for this if you choose and attach additional fields for things like copyright, license, attribution, or even something like EXIF data if you wanted.
Both Documents and Images have tagging support via django-taggit and a really nicely designed look and UX in the admin interface.
And yes, before you ask, it has built in support for flexible thumbnails in your templates and the ability for you to manually define the main focal point in the image to avoid cropping things weirdly.
Form Builder interface
Wagtail also has a nice form builder built into it that can easily suffice for your typical contact form scenarios or more complicated collection needs.
Much like Pages, you simply subclass from Wagtail and define what fields you want to collect. On your model you can also override the process_form_submission method to do more complex validation or in a more common case to email the interested parties that there is a new submission.
One great feature of the form builder that is also built in, is the viewing and downloading interface. Viewing the data that has come in is great, but you just know your users are going to want to pull it out and use it for some other purpose. Wagtail smartly anticipates this and allows the user to download the submitted data, by date range, as a CSV file anytime they want.
Snippets
Wagtail Snippets are reusable bits of content that aren’t full web pages. Often these are used for things like sidebar content, advertisements, or calls to action.
Unlike with Pages or Forms, you don’t subclass a model but instead define a model and simply register it as a snippet. You’re then free to give your users the option of attaching snippets of the types you want to other pages. Or, if you just want to give them the ability to edit things like footer content, for example, you can manually include the snippet data and the Snippet admin UI really just becomes their editing interface.
Best Feature? Content Streams
While being able to define your own Page types with their own fields goes a long way, it’s still quite a stretch from truly free form content. New in version 1.0 is Wagtail’s killer feature: the StreamField.
Users want free form content while developers, designers, and even ops want nicely structured data. StreamFields satisfies both camps.
When you define a StreamField on a Page, you set what types of blocks are available to be added into that stream. A block can be something as simple as a CharField of text or as complicated as a Staff-type record like the one above, using structural block types.
The end user can then add, say, a few headers of various types and some rich text content blocks, and have it all interspersed with a few images and code blocks. Each of the block types you define can then be styled differently with CSS and/or have its markup structured entirely differently if needed.
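Conceptually, a StreamField value is an ordered sequence of typed blocks, each rendered according to its type. A plain-Python sketch of that idea (no Wagtail required; the block names and renderer functions are illustrative — real StreamFields use block classes and templates):

```python
# Illustrative sketch of the StreamField idea: an ordered list of
# (block_type, value) pairs, each rendered by a type-specific function.

def render_heading(value):
    return f"<h2>{value}</h2>"

def render_paragraph(value):
    return f"<p>{value}</p>"

def render_code(value):
    return f"<pre><code>{value}</code></pre>"

RENDERERS = {
    "heading": render_heading,
    "paragraph": render_paragraph,
    "code": render_code,
}

def render_stream(stream):
    # Each block keeps its own type, so each can be styled differently.
    return "\n".join(RENDERERS[block_type](value) for block_type, value in stream)

body = [
    ("heading", "Wagtail"),
    ("paragraph", "Free-form content, structured data."),
    ("code", "print('hello')"),
]
```

Because each block remembers its type, the data stays structured even though the user experiences it as free-form editing.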
Prior to this feature being added to 1.0, I had to resort to complicated Page relationships that weren’t actually pages we intended to make visible on the site. We just subverted the Page Choosing features of Wagtail to give the users the flexibility they needed and keep it all in the same admin interface.
Here is what the admin UI looks like for StreamFields. Here we've defined a field named Body that has header, content, and code block types. Each of these lines is a different block, the top and bottom being headers. As you can see, you can simply click the plus icons to add new blocks in between others, or use the arrows on the right to move blocks around. They are currently a bit hard to see due to a CSS bug I anticipate being fixed quickly.
Wagtail’s Future
I think Wagtail has a VERY bright future in general and especially inside the Django community. However, like any 1.0 product there are definitely some things I would like to see in future versions. The two main things I hope to see are:
- A community collection of high quality and flexible common Page and Block types to make most sites more of a Lego exercise than a coding one.
- The ability to more easily customize and control the Publish/Moderate/Save as Draft options that appear at the bottom of the screen while editing content. On many smaller sites or those with a flat workflow it should be trivial to make ‘Publish’ or ‘Submit for Moderation’ be the default action presented to the user.
Source: https://www.revsys.com/tidbits/wagtail-best-django-cms/
Using Auditing
Substance D keeps an audit log of all meaningful operations performed against content if you have an audit database configured. At the time of this writing, "meaningful" is defined as:
- When an ACL is changed.
- When a resource is added or removed.
- When a resource is modified.
The audit log is of a fixed size (currently 1,000 items). When the audit log fills up, the oldest audit event is thrown away. Currently we don't have an archiving mechanism in place to keep around the items popped off the end of the log when it fills up; this is planned.
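The fixed-size, drop-oldest behavior described above is essentially a bounded ring buffer. A plain-Python sketch using collections.deque (illustrative only — not Substance D's actual implementation, and with a tiny bound for demonstration):

```python
from collections import deque

# A bounded log: when full, appending silently discards the oldest entry.
# Substance D's log uses a much larger bound (currently 1,000 items).
audit_log = deque(maxlen=3)

for event in ["acl-changed", "resource-added", "resource-modified", "resource-removed"]:
    audit_log.append(event)

# The first event has been pushed out by the fourth.
print(list(audit_log))  # -> ['resource-added', 'resource-modified', 'resource-removed']
```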
You can extend the auditing system by using the
substanced.audit.AuditLog API to write your own events to the log.
Configuring the Audit Database
In order to enable auditing, you have to add an
audit database to your
Substance D configuration. This means adding a key to your application's
section in the
.ini file associated with the app:
zodbconn.uri.audit = <some ZODB uri>
An example of "some ZODB URI" above might be (for a FileStorage database, if your application doesn't use multiple processes):
zodbconn.uri.audit =
Or if your application uses multiple processes, use a ZEO URL.
The database cannot be your main database. The reason that the audit database must live in a separate ZODB database is that we don't want undo operations to undo the audit log data.
Note that if you do not configure an audit database, real-time SDI features such as your folder contents views updating without a manual refresh will not work.
Once you've configured the audit database, you need to add an audit log object to the new database. You can do this using pshell:
[chrism@thinko sdnet]$ bin/pshell etc/development.ini
Python 3.3.2 (default, Jun 1 2013, 04:46:52)
[GCC 4.6.3] on linux
Type "help" for more information.
Environment:
  app           The WSGI application.
  registry      Active Pyramid registry.
  request       Active request object.
  root          Root of the default resource tree.
  root_factory  Default root factory used to create `root`.
>>> from substanced.audit import set_auditlog
>>> set_auditlog(root)
>>> import transaction; transaction.commit()
Once you've done this, the "Auditing" tab of the root object in the SDI should no longer indicate that auditing is not configured.
Viewing the Audit Log
The root object will have a tab named "Auditing". You can view the currently
active audit log entries from this page. Accessing this tab requires the
sdi.view-auditlog permission.
Adding an Audit Log Entry
Here's an example of adding an audit log entry of type
NailsFiled to the
audit log:
from substanced.util import get_oid, get_auditlog

def myview(context, request):
    auditlog = get_auditlog(context)
    auditlog.add('NailsFiled', get_oid(context), type='fingernails')
    ...
Warning
If you don't have an audit database defined, the
get_auditlog() API will return
None.
This will add a ``NailsFiled`` event with the payload
``{'type': 'fingernails'}`` to the audit log. The payload will also
automatically include a UNIX timestamp as the key ``time``. The first argument
is the audit log type name; audit entries of the same kind should share the
same type name, and it should be a string. The second argument is the oid of the
content object to which this event is related. It may be ``None``, indicating
that the event is global and unrelated to any particular piece of content.
You can pass any number of keyword arguments to
``substanced.audit.AuditLog.add()``; each will be added to the payload.
Each value supplied as a keyword argument must be JSON-serializable. If one
is not, you will receive an error when you attempt to add the event.
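The JSON-serializability requirement can be checked up front. A small, hypothetical helper (not part of the Substance D API) that mirrors the constraint:

```python
import json

def validate_payload(**kw):
    # Audit payloads must be JSON-serializable; reject anything that is not,
    # so the problem surfaces before the event reaches the log.
    for key, value in kw.items():
        try:
            json.dumps(value)
        except TypeError:
            raise ValueError(f"payload key {key!r} is not JSON-serializable")
    return kw

validate_payload(type='fingernails')   # fine: plain strings serialize
# validate_payload(bad=object())       # would raise ValueError
```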
Using the auditstream-sse View
If you have auditing enabled, you can use a view named
auditstream-sse
against any resource in your resource tree using JavaScript. It will return
an event stream suitable for driving an HTML5
EventSource (an HTML 5
feature, see for more
information). The event stream will contain auditing events. This can be used
for progressive enhancement of your application's UI. Substance D's SDI uses
it for that purpose. For example, when an object's ACL is changed, a user
looking at the "Security" tab of that object in the SDI will see the change
immediately, rather than upon the next page refresh.
Obtain events for the context of the view only:
var source = new EventSource( "${request.sdiapi.mgmt_path(context, 'auditstream-sse')}");
Obtain events for a single OID unrelated to the context:
var source = new EventSource( "${request.sdiapi.mgmt_path(context, 'auditstream-sse', query={'oid':'12345'})}");
Obtain events for a set of OIDs:
var source = new EventSource( "${request.sdiapi.mgmt_path(context, 'auditstream-sse', query={'oid':['12345', '56789']})}");
Obtain all events for all oids:
var source = new EventSource( "${request.sdiapi.mgmt_path(context, 'auditstream-sse', query={'all':'1'})}");
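The query parameters in the snippets above combine into ordinary URL query strings. A quick plain-Python illustration (the management path here is made up; in a real app, request.sdiapi.mgmt_path builds it for you):

```python
from urllib.parse import urlencode

base = "/manage/auditstream-sse"  # illustrative management path

# Single oid
single = base + "?" + urlencode({"oid": "12345"})

# Multiple oids: doseq=True repeats the key for each list element
multi = base + "?" + urlencode({"oid": ["12345", "56789"]}, doseq=True)

# All events
all_events = base + "?" + urlencode({"all": "1"})

print(multi)  # -> /manage/auditstream-sse?oid=12345&oid=56789
```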
The executing user will need to possess the
sdi.view-auditstream permission
against the context on which the view is invoked. Each event payload will
contain detailed information about the audit event as a string which represents
a JSON dictionary.
See the
acl.pt template in the
substanced/sdi/views/templates directory
of Substance D to see a "real-world" usage of this feature.
Source: http://substanced.readthedocs.io/en/latest/audit.html
The Exec task stays waiting for the output and error streams to be closed even when
the executed command has already finished.
This bug prevents Ant from executing processes that do not close their out and
err streams correctly on Windows.
A small example is a Java class that simply executes its argument:
public static void main (String args[]) throws Exception {
Runtime.getRuntime().exec(args[0]);
System.out.println("finished");
}
and build.xml containing something like this:
<exec executable="java" >
<arg line=" -cp . test rmid"/>
</exec>
This task starts rmid using the test class, writes "finished" and stays hung on
Windows.
The same code on Linux (or Solaris) starts rmid, writes "finished" and really
finishes.
The main problem is waiting for the error and output streams to be closed in
org.apache.tools.ant.taskdefs.PumpStreamHandler's stop() method, in the code
inputThread.join(); and errorThread.join();
Output with Full thread dump of blocked exec task is:
Buildfile: build.xml
test:
[exec] finished
Full thread dump:
"Thread-1" daemon prio=5 tid=0x8b8ad48 nid=0x604 runnable [0x8f2f000..0x8f2fdbc]
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:166)
at org.apache.tools.ant.taskdefs.StreamPumper.run(StreamPumper.java:99)
at java.lang.Thread.run(Thread.java:484)
"Thread-0" daemon prio=5 tid=0x8b3da98 nid=0x57c runnable [0x8eef000..0x8eefdbc]
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:183)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:186)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:225)
at java.io.BufferedInputStream.read(BufferedInputStream.java:280)
at java.io.FilterInputStream.read(FilterInputStream.java:93)
at org.apache.tools.ant.taskdefs.StreamPumper.run(StreamPumper.java:99)
at java.lang.Thread.run(Thread.java:484)
"Signal Dispatcher" daemon prio=10 tid=0x960620 nid=0x670 waiting on monitor
[0..0]
"Finalizer" daemon prio=9 tid=0x95c880 nid=0x4e8 waiting on monitor
[0x8daf000..0x8dafdbc]
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:108)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:123)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:162)
"Reference Handler" daemon prio=10 tid=0x8af0368 nid=0x4fc waiting on monitor
[0x8d6f000..0x8d6fdbc]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:420)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:110)
"main" prio=5 tid=0x284950 nid=0x60c waiting on monitor [0x6f000..0x6fc34]
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:930)
at java.lang.Thread.join(Thread.java:983)
at org.apache.tools.ant.taskdefs.PumpStreamHandler.stop
(PumpStreamHandler.java:111)
at org.apache.tools.ant.taskdefs.LogStreamHandler.stop
(LogStreamHandler.java:85)
at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:397)
at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:250)
at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:279)
at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:177))
"VM Thread" prio=5 tid=0x8b5e1c0 nid=0x3c8 runnable
"VM Periodic Task Thread" prio=10 tid=0x95f320 nid=0x558 waiting on monitor
"Suspend Checker Thread" prio=10 tid=0x95fc70 nid=0x608 runnable
Created attachment 860 [details]
suggested fix of org.apache.tools.ant.taskdefs.PumpStreamHandler
this bug prevents Ant from running on Windows and I found no workaround
This is an interesting problem, and not one I have seen myself, despite my
extensive use of ant on NT. I wonder if it is showing some interesting side
effects of the call to exec() inside the sub process.
Whatever, your supplied patch [NB, please use diff -u in future] would seem to
ensure ant continues, and given that the stop() method is called after the
process has terminated naturally or been killed by the watchdog should not
affect the sub process.
However, it runs the risk of leaking threads. This may not seem much on a
single ant run, but in an ant-in-gui or automated build system thread leakage
can become an issue. Not as much a one as the build blocking, but still an
issue.
I think therefore that for a patch like this to go into the build, it has to
print out big warning messages to the effect that something is wrong with the
client app. Also we need to see if anyone else has replicated the problem
Created attachment 868 [details]
another proposal how to fix this bug by implementing interruptable read
Created attachment 996 [details]
Another more powerful patch, because the bug still occurs in several cases
This one has been here forever and I'm wondering if it is not related in some
way to #10345 and #8510. Adam, out of curiosity do you have a testcase for
this ?
Oops stupid question. the testcase is here...having a look.
The patch (id=996) did stop problems I had with execute hanging.
Unfortunately the patch also causes the output of my cvs log command to be
prematurely truncated. Before applying the patch, my code would hang the
second time I executed a CVS log command but all of the output made it to my
client code's input buffer.
I made the additional following change:
The patch to PumpStreamHandler makes changes like:
while (inputThread.isAlive()) {
inputThread.interrupt();
inputThread.join(TIMEOUT);
}
I changed these to instead be:
if (inputThread.isAlive()) {
inputThread.join(TIMEOUT);
while (inputThread.isAlive()) {
inputThread.interrupt();
inputThread.join(TIMEOUT);
}
}
From reading the previous patches this seems to have been the intent of Adam
Sotona all along. He started out with something similar to this and then lost
the initial wait in the later version.
Immediately interrupting the thread is more likely to cause premature closing of
the thread, thereby preventing the client code from obtaining all the output
of the executed command (cvs log in my case). At least with my additional
change there is a better chance all the output is pushed into the client's
input buffer.
Am I correct in thinking that a call to Process.waitFor() would work, except
that StreamPumper does not know about such things? The complication
is that the command may not finish if you do not read the output streams but
I am still wondering whether more fundamental changes might eventually be
worthwhile e.g. Ant 2. StreamPumper looks to me like it might be mostly avoided.
Still, the latest proposed fix looks like it would work.
I no longer think StreamPumper is likely to be avoided
but it is a shame that it is not dead simple.
My own software that encountered the same problem under
windoze relied on Process.waitFor(). The nearest thing I have
to an ant task can now detect and interrupt a thread processing a
stream that is never closed. Sorry I don't have time right
now to look into this possibility in the ant case.
Assigning back to ant-dev as Stephane is now jetsetting about
Adam, would you like to try the latest CVS version of ant, where <exec/> seems to be implemented by a
different class org.apache.tools.ant.taskdefs.ExecTask if I read properly the
defaults.properties file.
The code is quite different from the code of the old exec task.
It works for me with :
java version "1.4.1_01"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1_01-b01)
Java HotSpot(TM) Client VM (build 1.4.1_01-b01, mixed mode)
on
Win 2K, Service Pack 2
However, I wonder whether what is really happening is not that rmid has been
fixed to close its stdout and stderr properly.
I tried also with cvs log (which I am doing over ssh), and did not reproduce the
problem.
So the next question is :
how can one create a test class or a shell or Perl script or C program which
does not close properly its stdin/stderr streams, so that the problem can be
"lab studied" ?
Without a possibility to reproduce the problem, this should be closed as WONTFIX
or WORKSFORME.
Hi, I mark this bug as resolved for ant 1.6 since nobody has voiced remarks.
The bug is still present in 1.6 and it causes problems for NetBeans 4.0 execution
(the NetBeans 4.0 build system is now based on Ant).
See bug:
The simple case I've described before is still reproducible, but it's better to try
executing notepad instead of rmid:
public static void main(String args[]) throws Exception {
Runtime.getRuntime().exec("notepad");
System.out.println("finished");
}
Hi Adam,
I do not follow your example with notepad.
The Runtime.getRuntime().exec("notepad") will not
return until the notepad process stops.
Hi Peter,
I do follow, the exec method starts the process and returns. Did you try that?
BTW: part of the Runtime.exec javadoc:
"Executes the specified string command in a separate process."
Notepad is special; it is a GUI app. If you look at how windows execs guis, it
does *odd* things, things that only make sense from a historical perspective.
try on a command line app instead of notepad.
Adam, I still do not follow.
Ant exec is not the same as just calling process.exec().
Its job is to start the process, handle its input
and output <file descriptors|handles>, and wait for it to finish.
One can use the "spawn" attribute to spawn off the process and
not care about it.
Note, the notepad program is not the issue; the following also
works on Unix:
public class Test {
public static void main(String[] args) {
try {
Runtime.getRuntime().exec("emacs");
System.out.println("finished");
Thread.sleep(10 * 1000);
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
The parent process of the emacs process is the java program, and when it
dies, the parent process becomes the init process.
so once again:
- this bug occurs on Windows only
- if you just execute emacs from Java on Linux - the Java process finishes - OK
- if you'll do it using Ant - it finishes - OK
- if you execute notepad (or whatever you want) on Windows - the Java process
finishes - again correct behavior
- but if you'll do it through Ant on Windows it will wait till ALL executed
processes close their streams and that's not correct
Everything was already described here and several patches were proposed.
You just need to stop pumping the streams after some timeout when the process
dies on Windows - that's all
Created attachment 13680 [details]
build file showing the problem
Just do ant in the directory with the build file
It makes a src directory, and populates it with two java files
The files are compiled, and an exec is run "java -cp classes CallHello"
On unix, the build finishes just after the "finished" message from CallHello
On windows, the build finishes about 19 seconds after the "finished" message
Ok, I see what you are saying now.
On Windows, child processes inherit the stdout and stderr file handles of
the master process (or at least Runtime#exec() is implemented in this
way); on Unix this does not happen.
This means that one can start a master process from ant. This master
process can create a number of child processes. The master process
then terminates, but the child processes of the master process are
still running. On Unix, the exec task will end at this time, but
on windows this will not happen, in fact the exec task will wait until
all the children of the master process have terminated - this is *not* good,
especially for something like rmid.
I have added an attachment that shows the problem.
I am reassigning this bug to the whole ant community, because I do not have any
special solution (I think my name was in there since 2003, at a time when the
issue was inactive).
*** Bug 28135 has been marked as a duplicate of this bug. ***
*** Bug 37787 has been marked as a duplicate of this bug. ***
*** Bug 42534 has been marked as a duplicate of this bug. ***
Created attachment 22009 [details]
INTERVIEW EVALUATION FORM
Comment on attachment 22009 [details]
INTERVIEW EVALUATION FORM
this has nothing to do with the bug; marking as obsolete.
a loooooooooooooooong time, I know.
Ant's code has changed a bit, so some extra work has become necessary. The fact
that other classes are now using StreamPumper as well didn't help either.
With the original patch (even if adapted) several unit tests of Ant would hang
and never return - I guess this has been true seven years ago as well.
One major problem I faced was that available() returns 0 on a closed stream
on some VMs (it did on Sun's 1.4.2 for Windows, for example) and thus the
available() trick doesn't work unless you are sure you are going to interrupt
the thread running StreamPumper eventually.
I've also noticed that the approach using available impacts performance considerably, so I've restricted it to the platform (Windows) where it is needed (like the original patch did, but for a different reason).
svn revision 711860
*** Bug 46805 has been marked as a duplicate of this bug. ***
I just want to mention my fix which is posted at bug 46805. (sorry for creating a duplicate)
This bug is caused by closing unowned streams:
new XyzStream() --> close it
getXyzStream() --> don't close it, close/destroy the underlying object
reopened since an unresolved bug is merged into this.
If there really is an issue caused by closing the streams, then bug 46805 is no duplicate of bug 5003.
Using Ant 1.8.2 I still reproduce the error.
(In reply to comment #34)
> Using Ant 1.8.2 I still reproduce the error.
How?
Created attachment 28364 [details]
patch to StreamPumper.run to make it responsive to interrupts
I ran into the same issue and was able to get the ant JVM unstuck by modifying the org.apache.tools.ant.taskdefs.StreamPumper.run() to make it more responsive to interrupt conditions.
Please see my diff to /ant-trunk/org/apache/tools/ant/taskdefs/StreamPumper.java and let me know if it addresses any pending issues.
Created attachment 28365 [details]
revised earlier patch slightly ... does the write more efficiently
revised earlier patch
I had something like this in some of my code.
I found process.join() was returning, but my joins on the workers reading the input streams were not.
you could assume if the process has been dead, or has been dead a certain amount of time, you no longer care about the data on the streams. I ran into something like this when killing an external process. In my case, the choice is easy. I am forcibly terminating it. I don't care about the streams. Basically you get a reference to the streams while the process is still alive. And then at any point after it's dead, (or should be) you close the streams. This worked like a charm for me. I could do this where I join the process as well, instead of where I kill it.
So....
OutputStream outputStream = process.getOutputStream();
InputStream inputStream = process.getInputStream();
InputStream errorStream = process.getErrorStream();
process.destroy();
try {
    outputStream.flush();
} catch (IOException e) {
}
try {
    outputStream.close();
} catch (IOException e) {
}
try {
    errorStream.close();
} catch (IOException e) {
}
try {
    inputStream.close();
} catch (IOException e) {
}
Ha. Strike that. Sometimes the calls to close() still deadlock in a native method.
Without looking at the Ant implementation: Could this be related to
@Jochen: your link explains why my hack (adding timeouts to join()) helped.
Is this still noticed against Java 11 (or Java 16) versions?
Source: https://bz.apache.org/bugzilla/show_bug.cgi?id=5003
Creating and Publishing a Flutter Package
In this tutorial, you’ll learn how to create and publish your own Dart packages to be used in Flutter apps.
Version
- Dart 2.10, Flutter, Android Studio 4.1
Every three minutes, a new Flutter package pops up on pub.dev (according to the same source that revealed 73.6% of all statistics are made up on the spot). True or not, it’s very believable. So what are you waiting for? Why not join the club and become a package publisher? :]
Some advantages of being a package publisher:
- You’ll learn a ton about open source projects: documentation, versions, releases, issues, licenses, CHANGELOG and README files and more!
- It’s a great accomplishment to add to your resume.
- You’ll meet new people who are using your package or are helping you maintain it.
- Your package can help develop many production apps.
- You’ll be giving back to the Flutter community.
By the end of this tutorial, you’ll learn everything you need to go from package user to package creator, including how to:
- Stand out on pub.dev
- Document your package
- Structure your package files
- Create an example project
- Get package ideas
Getting Started
Download the starter project by clicking the Download Materials button at the top or bottom of the tutorial.
Unzip the downloaded file and, with Android Studio 4.1 or later, open the project folder starter/yes_we_scan. You can use Visual Studio Code instead, but you might need to tweak some instructions to follow along.
Download the dependencies with Pub get, then build and run your project. If everything went OK, you should see something like this:
Knowing the Project
The app above could easily be the most uninteresting bar code scanner if it weren’t for two things:
- The name: Yes We Scan
- The file lib/utils/focus_detector.dart
In this file, you’ll find a widget named
FocusDetector.
When you wrap any widget of yours with
FocusDetector, you can register callbacks to know whenever that widget appears or disappears from the screen — which happens, for example, when the user navigates to another screen or switches to another app.
(Not to be confused with Flutter’s FocusNode, which deals with input focus.)
If you’ve done native mobile development, think of
FocusDetector as a Flutter adaptation of Android’s
onResume() and
onPause() or iOS’s
viewDidAppear() and
viewDidDisappear().
Setting the Goal
Yes We Scan leverages
FocusDetector to turn on and off the camera as the user leaves and returns to the main screen. Other applications of
FocusDetector could include:
- Turning on and off the GPS or Bluetooth
- Syncing data with a remote API
- Pausing and resuming videos
Yes We Scan is here to show that even seemingly boring projects have something to offer to the community.
So from now on, your focus (pun intended) will be on transforming focus_detector.dart, the file, into Focus Detector, the package.
And to put you in a good mood, you’ll start by doing something all developers love: documenting.
Documenting
Pub.dev generates a documentation page for every published package. You can find a link to it on the right panel of the package’s pub.dev page:
To enhance this document with your own words, you have to place special comments above your public classes, functions, properties and typedefs in your code. Why special? Because they use three slashes (
///) instead of two.
Using Doc Comments
Open lib/utils/focus_detector.dart and replace the comment:
// TODO: Document [FocusDetector].
with this documentation comment:
/// Fires callbacks every time the widget appears or disappears from the screen.
Doc comments can be as big as you want — you can even include code snippets and hypertext links. The only requirement is that the first paragraph should be a single-sentence descriptive summary like you did above.
To make sure this gets tattooed on your brain, do it once again by replacing:
// TODO: Document [onFocusGained].
with:
/// Called when the widget becomes visible or enters foreground while visible.
You can peek here to see how these comments end up looking in the docs. For more information about documentation comments, check out Effective Dart: Documentation.
It’s time to leave Yes We Scan aside for a while to work on your spinoff project.
Creating the Project
On your Android Studio’s menu bar, click File ▸ New ▸ New Flutter Project. Then, select Flutter Package and click Next.
Now, follow the instructions to fill in the fields:
- Project name: Type in focus_detector.
- Flutter SDK path: Make sure the default value is the right path to your Flutter SDK.
- Project location: Choose where you want to store your project.
- Description: Type in Detects when your widget appears or disappears from the screen.
Click Finish and wait for Android Studio to create and load your new project.
Understanding Packages
You may not know it, but you’ve been creating packages for a while.
A Dart package is nothing but a directory with a pubspec.yaml. Does that remind you of all the apps you’ve created until now? The thing is, there are two types of packages:
- Application Packages: Those you know very well, with a main.dart file.
- Library Packages: Shortened to Packages. These are the subject of this tutorial.
What about Flutter Package vs. Dart Package? Flutter Package is only an alias used by the community when referring to a Dart Library Package for Flutter apps.
This raises the next question: What on Earth is a library?
Understanding Libraries
A library is a collection of functions, classes, typedefs and/or properties.
Every time you create a Dart file, you’re making a library. For example, the public elements of an alert_utils.dart file form an alert_utils library. That’s why your Dart files go under a lib directory.
You can also create a library by creating a file that gathers other libraries. For example, imagine a file called alert_utils.dart with the following content:
export 'src/snackbar_utils.dart';
export 'src/dialog_utils.dart';
The result is an alert_utils library collecting elements from both the snackbar_utils and the dialog_utils libraries.
Wrapping up the definition of Library Packages, you can see its purpose is to define libraries that both types of packages can import and use.
Understanding the library Keyword
Returning to your focus_detector project, open lib/focus_detector.dart that Android Studio generated for you. Look at the first line:
library focus_detector;
This sets the name of the file’s library to focus_detector. The default library name is the filename, making this line unnecessary. So why have a
library keyword? There are two cases in which you might want to specify a library name:
- Adding library-level documentation to your package. The dartdoc tool requires a
library directive to generate library-level documentation. For example:

/// Contains utility functions for displaying dialogs and snackbars.
library alert_utils;

- Giving the library a different name.
When you import a library, you give the uniform resource identifier (URI) to the
import directive, not the library name. So the only place you can actually see the name is in the documentation page. As you see, there’s little point in changing the name.
That said, the Dart documentation recommends you omit the library directive from your code unless you plan to generate library-level documentation.
Enough with the talk! You’re now ready for some action.
Adjusting the Pubspec
Open your new project’s pubspec.yaml file. Notice that the content of the file is grouped into five sections: metadata,
environment,
dependencies,
dev_dependencies, and
flutter. For this tutorial, you’ll need only the first three:
- The Metadata
Everything users need to find your package comes out of here.
You already provided parts of this information when creating the project. Now you need only a few adjustments:
Note: Remember: The focus_detector package is already published on pub.dev. The homepage field here is being set to the existing code repository.
- version: Replace
0.0.1 with
1.0.0. Unless you’re publishing unfinished or experimental code, avoid 0.*.* version numbers. It might scare off some users.
- author: This property is no longer used, so delete it.
- homepage: Put the URL of your package’s repository here. The URL doesn’t have to be from a git repository, but that’s the most common usage.
This should be your result:
name: focus_detector
description: Detects when your widget appears or disappears from the screen.
version: 1.0.0
homepage:
- The Environment

Use this section to restrict the Dart or Flutter SDK versions of your users. This is useful if, for example, your package relies on a feature introduced in an earlier Flutter release or you still haven’t complied with a breaking change in the API. For this tutorial, the defaults are great.

- The Dependencies

Packages can also depend on other packages. This section is no different than the one you find in your apps.
FocusDetector uses the 0.1.5 version of a package published by google.dev called VisibilityDetector. Specify the dependency, then make sure your section looks like this:
dependencies:
  flutter:
    sdk: flutter
  visibility_detector: ^0.1.5
Last, click Pub get at the top of your screen to download your new dependency.
Now that
FocusDetector has a new place to call home, it’s finally time to bring it in.
Bringing the Code
Go back to the yes_we_scan project on Android Studio. Copy lib/utils/focus_detector.dart and paste it under the lib folder of the focus_detector project. When presented with the Copy dialog, click Refactor. Then, on the next dialog, click Overwrite.
By replacing the old lib/focus_detector.dart Android Studio had created for you, you broke the tests. Because testing is outside the scope of this article, delete the test directory as a quick fix.
Analyzing the Code
You can’t build and run a package. What you can do is run the
flutter analyze command on your terminal to analyze your code for issues.
Open Android Studio’s shell by clicking Terminal at the bottom of your screen. Then, type in
flutter analyze and press Enter, as shown below.
The command should give you a No issues found! message. If that’s the case, you’re good to go.
Structuring Packages
Focus Detector is a single-class package, so there isn’t much thinking to do about how to organize it. But what if your next package contains many files? Some of them you might want exposed to your users, while others you might prefer to keep private. When that is the case, the convention tells you to follow these simple rules:
- Keep your implementation files under the lib/src directory. All code inside this folder is private and should never be directly imported by your users.
- To make these implementation files public, use the
export keyword from a file that is directly under lib, as in the earlier alert_utils example.
The main benefit of this approach is being able to change your internal structure later without affecting end users.
An extension of the second rule is that there should be a library file directly under lib with the same name as your project that exports all your public implementation files. Think of it as your “main” library file. The advantage is that users can explore all your functionalities by importing a single file.
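Such a “main” library file is nothing but a list of exports. Here’s a sketch (the file names are hypothetical, not from the tutorial):

```dart
// lib/my_package.dart -- the single file users need to import.
export 'src/widget_a.dart';
export 'src/widget_b.dart';
export 'src/utils.dart';
```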
It isn’t an anatomy class if it doesn’t have a dissected body. :] So take a look at the internal structure of a more complex package, the Infinite Scroll Pagination:
If you want to know more about the Infinite Scroll Pagination package, check out Infinite Scrolling Pagination in Flutter.
Creating an Example Project
Every good package has an example app. The example app is the first thing your users will turn to if they can’t make your package work right off the bat. It should be as concise as possible while still showcasing every feature.
Create your example project by again clicking File ▸ New ▸ New Flutter Project. This time, select Flutter Application and click Next.
Like you did before, follow the instructions to fill in the fields:
- Project name: Type in example. Don’t change this name, or your example project won’t follow the convention of being in the example folder.
- Flutter SDK path: Make sure the default value is the right path to your Flutter SDK.
- Project location: Choose the root folder of your package’s project.
- Description: Here, you can type anything you want or go with Example app of the Focus Detector package.
Click Next and, in the following window, type com.focusdetector.example in the Package name field. Finally, click Finish and wait until Android Studio opens your example project in a new window.
Specifying the Example’s Dependencies
Open your example project’s pubspec.yaml and replace the entire dependencies section with:
dependencies:
  flutter:
    sdk: flutter
  focus_detector:
    path: ../
  logger: ^0.9.4
Look at how you specified the focus_detector dependency. Instead of linking to a specific version like you usually do, you’re specifying the local parent path, where the package code is located. That way, all changes on the package are automatically reflected in your example.
The second dependency is a fancier logger the app will use to print to the console every time
FocusDetector fires a callback.
Don’t forget to click Pub get at the top of your screen to download your dependencies.
Filling the Example
You’re here to learn how to create and publish a package, not how to use
FocusDetector. So to spare you unnecessary details, replace your example project’s main.dart with the one in starter/auxiliary in your downloaded materials. Also, take the opportunity to delete the test folder from the root of your example.
Close the focus_detector project on Android Studio and then reopen it. This will cause Android Studio to locate your example app’s main.dart and let you run it without the need to open the project separately.
Make sure it all works by building and running your package’s project as you usually do with your apps. If everything went OK, you should see something like this:
You’ll also see logger messages in the console like this:
Hacking the Example Tab
Every package with an example project gets an Example tab on pub.dev:
Pub.dev automatically shows the contents of the example/lib/main.dart file if it can find one. That’s one reason why it’s essential to follow the convention of placing your example project in an example folder. Two other good reasons are:
- So users know exactly where to look when they need it
- So Android Studio locates it by itself and enables the Run button for your package’s project
But what if you feel like you can’t do justice to your package displaying only your example’s main.dart file in the Example tab? Tell no one, but there’s a way. If pub.dev finds an example.md file in the example folder, it’ll display that file rather than the main.dart. You can leverage this to create a cookbook for your package, like this.
Your package is already fully functional but still isn’t publishable. For that, you first need to work on three essential files: the README.md, the CHANGELOG.md and the LICENSE.
Crafting an Effective README
Code is not the only thing package creators need to know how to write. Your most valuable lines won’t be in a Dart file but a plain-text file: the README.md. The README.md is a Markdown file that uses a lightweight markup language to create formatted text output for viewing. Markdown supports HTML tags, so you can use a mixture of both. If you’re unfamiliar with Markdown, check out Markdown Tutorial.
Your README is your business card. It’s the first, and in the worst case, the only thing users will see when they find you on GitHub or pub.dev. The quality of your README can make or break your package as easily as the quality of your code can.
Fortunately, writing an effective README is not rocket science. It all boils down to selecting some of the following resources:
- Logo.
- Short description of what the package does.
- Screenshot of the example app — suitable for visual packages only.
- Code snippet showcasing the most common usage.
- Feature list.
- Link for a tutorial.
- Table of all properties along with their type and a brief description.
- Quick fixes to the most common problems users face. This section is usually called Troubleshooting.
- Badges — more on these in a second.
Keep the most relevant information at the top, and remember that your goals are to:
- Make users trust you — package fatigue is a real thing.
- Keep users away from needing your example project — as much as possible.
Displaying Badges
Badges are the little rectangles you find at the top of some READMEs:
Badges signal credibility, mostly because some of them require you to do some work to earn the right — and the image URL — to carry them in your README. Good examples of this are creating a chat room on Gitter or setting up a build workflow with GitHub Actions.
Adding a README File
Your README is ready and waiting for you in the downloaded materials. Replace the README.md under your project’s root, focus_detector/README.md, with the one in the starter/auxiliary folder.
Licensing
Another prerequisite for publishing a package is having an open-source license file. This license allows people to use, modify and/or share your code under defined terms and conditions. What terms and conditions? That depends on the license you choose.
For Focus Detector, you’re going with the MIT license. When choosing a license for your next project, consult the Choose a License website.
Go ahead and replace the empty LICENSE in your project’s root with the one in the starter/auxiliary from the downloaded materials.
Understanding CHANGELOGs
You’ve probably been in the situation of knowing there’s a new version of a package you use, but you:
- Don’t know what the benefits of the upgrade are.
- Don’t know what’s the effort for the upgrade.
- Decided to upgrade but are having trouble with some missed property or function.
When you run into this situation, look at the ChangeLog tab:
But today, you’re the creator, not the user. It’s your job to ensure your users won’t be helpless when going through this same situation. You do that by creating a thoughtful CHANGELOG.md file.
Unlike the freestyle of the README, this one follows a specific format:
- One heading for each published version. The headings can be level 1 or level 2.
- The heading text contains nothing but the version number, optionally prefixed with “v”.
Here’s an example of a typical raw CHANGELOG and here’s what your users will see.
Adding a CHANGELOG File
Replace the empty CHANGELOG.md under your project’s root with the one in the starter/auxiliary folder from the downloaded materials.
Notice that your CHANGELOG contains a single entry: 1.0.0. One concern you can expect to have in your next ventures is knowing how to name your subsequent versions: 1.0.0+1? 1.0.1? 1.1.0? 2.0.0? Technically, you can do whatever you want, but it’s good pub.dev citizenship to follow semantic versioning: bump the first number for breaking changes, the middle one for new features and the last one for bug fixes.
When you increase a number, the others to the right should be zeroed. For example, if you both add a feature and fix a bug, you increase the middle number and set the next to 0.
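As an illustration of that rule, here’s a hypothetical version history (not Focus Detector’s):

```
1.2.3  + bug fix only         ->  1.2.4
1.2.3  + new feature          ->  1.3.0
1.2.3  + feature and bug fix  ->  1.3.0   (middle bumped, last zeroed)
1.2.3  + breaking change      ->  2.0.0
```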
You’re finally ready to claim your piece of real estate on pub.dev. :]
Publishing Your Package
The introduction of this article listed several reasons to create a package. Now, to even things out a bit, take this delicate disclaimer to heart:
If you still want to take this road and become a package parent, the only thing you’ll need is a Google account. If you’re still not sure, don’t fret! Consider the next two steps a rehearsal.
Delete the example/build directory to guarantee your package won’t go over the pub.dev limit of 10 MB after gzip compression. That folder is automatically ignored if your project is in a git repository where the .gitignore includes
build/ — which likely will be your case when publishing a package of your own.
Open your Android Studio’s shell again by clicking Terminal at the bottom of your screen. Run the following command:
pub publish --dry-run
The command then outputs a tree of the files in your package. If the command gave you a Package has 0 warnings. message, it’s the end of rehearsal for you! You can still execute the next steps to try publishing Focus Detector, but expect an error at the end because the package already exists.
For the real thing, run
pub publish (without the
--dry-run part). After outputting the file tree, the command will warn you that publishing is forever and prompts you to confirm before proceeding. Type y and press Enter. Next, it’ll ask you to open a link in your browser and sign in with your Google account. After signing in, the command will automatically verify your authorization and start the upload.
If you’re trying to publish Focus Detector, you’ll receive a Version 1.0.0 of package focus_detector already exists. error message. If you’re publishing a package of your own, wait a couple of minutes – or don’t, if you’re too anxious – and try accessing your new package’s URL:
Welcome to the
club pub!
Understanding Verified Publishers
When you publish a package, your pub.dev page displays your email address to everyone. This not only lacks privacy but may also look unprofessional to users – who knows you’re not some random account? The way around this is to be a Verified Publisher. This works as “your company” on pub.dev, where your Google account is just an employee, an uploader. All you need is a domain address.
Another great advantage is the credibility boost you get from having a verified publisher badge everywhere your name appears on pub.dev:
If you want to become a verified publisher, read more here.
Using the Remote Package
Now for the easy part. Go back to the Yes We Scan project you haven’t opened since the beginning of the article.
Double-click pubspec.yaml in the left panel and replace
visibility_detector: ^0.1.5 with
focus_detector: ^1.1.0+1 — which is the current version of Focus Detector, and not the
1.0.0 you just fake published. Download the new dependency with Pub get.
Open lib/pages/scanner_page.dart, and, at the top of the file, replace:
import 'package:yes_we_scan/utils/focus_detector.dart';
with:
import 'package:focus_detector/focus_detector.dart';
Done! Now you’re depending on the version hosted on pub.dev and can delete the lib/utils/focus_detector.dart.
As your last task, build and run the project to make sure everything is safe and sound.
Where to Go From Here?
Download the completed project files by clicking the Download Materials button at the top or bottom of the tutorial.
You couldn’t be more prepared to create your own package. If you don’t have an idea to invest in, check these issues labeled by the Flutter team as “would be a good package”. They do this to encourage us to develop the features they consider outside of the framework scope.
We hope you enjoyed this tutorial. If you have any questions or comments, please join the forum discussion below!
QWidget objects with Qt::Window flag set do not pass on ignored events
Using Qt 5.6.0 on Windows 10.
After a fair amount of trouble and debugging, I discovered that if a QWidget is a window (has Qt::Window set), then it will not put ignored events back into the event loop.
I am creating a layout editor tool for work. There is the MainWindow, and that window can spawn other QWidget-derived windows, and each of those can have several QOpenGLWidget objects as children. I want to make the ESC key close the program (or do something at the MainWindow level), so I made each of my widgets overload keyPressEvent(...), and everyone ignores it except for MainWindow.
Problem: MainWindow never gets the key events if it has a child window as the focus.
The QOpenGLWidgets will, when ignoring an event, pass it on to their parents. But if the QOpenGLWidget has the Qt::Window flag set, then the parent QWidget-derived window will no longer receive the event. Unset Qt::Window flag and the parent again receives ignored events.
Is this a bug?
Note: I even tried overriding QObject's virtual event(QEvent *e) method, and that didn't get anything.
Also Note: The only way around right now is signals, which is a real pain when MainWindow has a family tree of widgets.
Hi! It's not a bug. Anyways, you can hack around this without using signals, but be careful not to send the events around in an endless loop.
Add this to your MainWindow:
public:
    void dirtyHack(QKeyEvent *ev) { keyPressEvent(ev); }
And in your child widget (the one with the Qt::Window flag set), override keyPressEvent:
#include "mainwindow.h"

void MyWindow::keyPressEvent(QKeyEvent *ev)
{
    MainWindow *mainWindow = dynamic_cast<MainWindow*>(parentWidget());
    mainWindow->dirtyHack(ev);
}
By this you can indirectly call the protected keyPressEvent of the MainWindow. Not sure if that's a great idea but it seems to work. :-)
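For comparison, another common way to handle an app-wide key like Esc, without touching event propagation at all, is an application-context shortcut. This is a sketch (not from the original thread):

```cpp
#include <QShortcut>
#include <QKeySequence>

// In MainWindow's constructor: an application-wide shortcut fires no
// matter which top-level window currently has focus.
auto *esc = new QShortcut(QKeySequence(Qt::Key_Escape), this);
esc->setContext(Qt::ApplicationShortcut);
connect(esc, &QShortcut::activated, qApp, &QCoreApplication::quit);
```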
Or you could use a global event filter:
myfilter.h
#ifndef MYFILTER_H
#define MYFILTER_H

#include <QObject>

class MyFilter : public QObject
{
    Q_OBJECT
public:
    explicit MyFilter(QObject *parent = 0);
protected:
    virtual bool eventFilter(QObject *obj, QEvent *event) override;
};

#endif // MYFILTER_H
myfilter.cpp
#include "myfilter.h"
#include <QEvent>
#include <QKeyEvent>
#include <QDebug>

MyFilter::MyFilter(QObject *parent) : QObject(parent)
{
}

bool MyFilter::eventFilter(QObject *obj, QEvent *event)
{
    if (event->type() == QEvent::KeyPress) {
        QKeyEvent *ke = static_cast<QKeyEvent*>(event);
        if (ke->key() == Qt::Key_Escape) {
            qDebug() << "Escape";
            return true;
        }
    }
    return QObject::eventFilter(obj, event);
}
main.cpp
#include "mainwindow.h"
#include <QApplication>
#include "myfilter.h"

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MyFilter *myFilter = new MyFilter(&a);
    a.installEventFilter(myFilter);
    MainWindow w;
    w.show();
    return a.exec();
}
@Wieland I tried that, but MainWindow will not receive key events when another window widget (that is, one with the Qt::Window flag set), or one of its non-window children, has the focus. Repeated experiments on my system all indicate that the state of being a window prevents any widgets upstream of it from receiving events, even with global event filters installed.
Another explanation?
In another post of mine, you pointed me to the framework developers. Should I try the "Interest" list?
@amdreallyfast Hmm.. the code I posted works for me. Can you post a minimal example? Maybe I haven't fully understood what you're trying to do. Regarding the mailing list: The problem here isn't a bug so no need to annoy the devs (yet) ;-)
@Wieland I'll try to get something put together in the next few days. I can't send code from my work computer to home, but I'll try to get something together.
Sorry for the delay.
Here's the basic program: The main window has a push button that summons a ChildWindow object (QOpenGLWidget derivative) with the Qt::Window flag set. Pressing any key causes the ChildWindow's keyPressEvent(...) to print a message to the console, and then the event is ignored. The main window is the ChildWindow's parent, so the event should pass on to the main window. But it doesn't. The event stops at the ChildWindow.
Did I miss something or is this a bug?
Here's my code.
.pro (note the "CONFIG+= console" at the end)
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
TARGET = WidgetsEventsNotPropogating
TEMPLATE = app
SOURCES += main.cpp \
    mainwindow.cpp \
    childwindow.cpp
HEADERS += mainwindow.h \
    childwindow.h
FORMS += mainwindow.ui
CONFIG += console
mainwindow.ui XML
<widget class="QPushButton" name="pushButton">
 <property name="geometry">
  <rect>
   <x>140</x>
   <y>100</y>
   <width>171</width>
   <height>81</height>
  </rect>
 </property>
 <property name="text">
  <string>PushButton</string>
 </property>
</widget>
mainwindow.h
#include <QMainWindow>

namespace Ui {
class MainWindow;
}

class MainWindow : public QMainWindow
{
    Q_OBJECT
public:
    explicit MainWindow(QWidget *parent = 0);
    ~MainWindow();
private slots:
    void on_pushButton_clicked();
    void keyPressEvent(QKeyEvent *e) override;
private:
    Ui::MainWindow *ui;
};
mainwindow.cpp
#include <qevent.h>
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "childwindow.h"

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
}

MainWindow::~MainWindow()
{
    delete ui;
}

void MainWindow::on_pushButton_clicked()
{
    printf("MainWindow::on_pushButton_clicked()\n");
    ChildWindow *cw = new ChildWindow(this, Qt::Window);
    cw->show();
}

void MainWindow::keyPressEvent(QKeyEvent *e)
{
    printf("MainWindow::keyPressEvent(QKeyEvent *e)\n");
    e->accept();
}
childwindow.h
#include <qopenglwidget.h>

class ChildWindow : public QOpenGLWidget
{
    Q_OBJECT
public:
    ChildWindow(QWidget *parent = 0, Qt::WindowFlags f = Qt::WindowFlags());
protected:
    void keyPressEvent(QKeyEvent *e);
};
childwindow.cpp
#include <qevent.h>
#include "childwindow.h"

ChildWindow::ChildWindow(QWidget *parent, Qt::WindowFlags f) :
    QOpenGLWidget(parent, f)
{
}

void ChildWindow::keyPressEvent(QKeyEvent *e)
{
    printf("ChildWindow::keyPressEvent(QKeyEvent *e)\n");
    e->ignore();
}
- A Former User last edited by mrjj
Hi!
I still don't think it's a bug but intended behaviour. However, in the meantime I came up with another workaround which is much prettier than the ones before: Just do the following:
#include <QApplication>
// ...
void ChildWindow::keyPressEvent(QKeyEvent *e)
{
    printf("ChildWindow::keyPressEvent(QKeyEvent *e)\n");
    QApplication::sendEvent(parentWidget(), e);
    e->ignore();
}
Override
QWidget::event and inspect the state of the event after the call to the superclass’ method. E.g.:
#include <qopenglwidget.h>

class ChildWindow : public QOpenGLWidget
{
    Q_OBJECT
public:
    // ...
    bool event(QEvent *e) override
    {
        bool result = QOpenGLWidget::event(e);
        qDebug() << result << e->isAccepted();
        return result;
    }
};
Then report your findings here. In principle
QWidget::event should propagate the event up the object tree. While I don’t believe this should depend on the window flags, I’m not sure.
Kind regards.
@kshegunov As far as I can tell the docs mention nothing about this. On the other hand google finds a handful of people stuggling with this dating back half a decade or so. Maybe it's time to read some code :-|
@kshegunov And yes, the base class implementation returns true although the event hasn’t been accepted. So, to me, it still looks as if this happens on purpose.
- kshegunov Qt Champions 2017 last edited by
Yes, because of this call and ultimately this. It appears key press events aren't propagated at all. Could someone please confirm that last part, and make sure that the window flags doesn't change behavior?
@Wieland said in QWidget objects with Qt::Window flag set do not pass on ignored events:
As far as I can tell the docs mention nothing about this.
They do[1, 2], but it's really vague.
@kshegunov I meant it doesn't say anything about event propagation stopping on window boundaries.
- kshegunov Qt Champions 2017 last edited by
@Wieland said in QWidget objects with Qt::Window flag set do not pass on ignored events:
I see. Have you confirmed this is because of the flag? I'm not as convinced. I didn't see anything in the code to hint that the flags have any reflection on events propagation. I'd expect the same behavior (key presses not bubbling up) for any window flag that may be passed.
@kshegunov Flags have some effect, e.g. the popup flag makes the window close on key press and then the event is accepted; but otherwise: no. The keyPressEvent() function doesn’t do anything but ignore the event, and then in the end event() returns true. Maybe there’s more magic in the event dispatcher, but the code is quite hard to read.
Edit: Key event propagation actually does work as expected with flag Qt::Widget on the child and when the event is being ignored. But I can’t tell where the magic happens :-/
Sorry, I give up. :-/
Thanks anyway.
BTW, @kshegunov, for the time being I’ll just click on MainWindow and hit Esc to close down the program. I don’t want to deal with connecting everyone to the MainWindow via signal and slot just to tell it to close.
@amdreallyfast said in QWidget objects with Qt::Window flag set do not pass on ignored events:.
My best advice is to ask in the mailing list if this is intentional, and if it's not to file a bug report.
By "mailing list" do you mean the "interest" mailing list? I've already asked one question there and gotten no replies. It is quite busy with "how do I do this?" messages instead of unexpected behavior, so I don't know if I'm asking the right group.
Java program to print all Armstrong numbers between 0 and N
In this Java program, given a number N, we have to print all Armstrong numbers between 0 and N. First, a brief introduction to Armstrong numbers.
An Armstrong number is a number whose sum of the cubes of its digits is equal to the number itself. Under this definition, the first few Armstrong numbers are 0, 1, 153, 370, 371, 407 ...
For example, 153 is an Armstrong number:
153 = 1*1*1 + 5*5*5 + 3*3*3
115 is not an Armstrong number:
115 is not equal to 1*1*1 + 1*1*1 + 5*5*5
How to Generate Armstrong Numbers?
We have to find all Armstrong numbers between 0 and N.
- First, take N as input from the user.
- Then, using a for loop, iterate i from 0 to N.
- For each number i (0 < i < N), find the cubic sum of the digits of i and store it in a sum variable.
- Compare i and sum.
- If both are equal, then i is an Armstrong number; otherwise it is not.
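The steps above can be sketched as a minimal standalone check (a simplified version; the full program below also reads N from the user):

```java
public class ArmstrongCheck {
    // Returns true if n equals the sum of the cubes of its digits.
    static boolean isArmstrong(int n) {
        int sum = 0;
        for (int t = n; t != 0; t /= 10) {
            int d = t % 10;
            sum += d * d * d;
        }
        return sum == n;
    }

    public static void main(String[] args) {
        // Prints: 0 1 153 370 371 407
        for (int i = 0; i < 1000; i++)
            if (isArmstrong(i))
                System.out.print(i + " ");
        System.out.println();
    }
}
```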
Java program to print all Armstrong numbers between 0 and N
In this Java program, we first take N as input from the user and then, using a for loop, iterate from 0 to N. We call the "isArmstrongNumber" function for every number between 0 and N to check whether it is an Armstrong number or not. The isArmstrongNumber function takes an integer as input and returns "true" if it is an Armstrong number, else returns "false".
package com.tcc.java.programs;

import java.util.Scanner;

/**
 * Java Program to print all Armstrong numbers between 0 and N
 */
public class ArmstrongSeries {
    public static void main(String[] args) {
        double N;
        int i;
        Scanner scanner;
        scanner = new Scanner(System.in);
        System.out.println("Enter a Number");
        N = scanner.nextFloat();
        System.out.println("Armstrong Number between 0 to " + (int) N);
        /*
         * Check for every number between 0 to N, whether it is an
         * Armstrong number or not
         */
        for (i = 0; i < N; i++) {
            if (isArmstrongNumber(i)) {
                System.out.println(i + " ");
            }
        }
    }

    /**
     * This method returns true if num is an Armstrong number, otherwise
     * returns false
     */
    public static boolean isArmstrongNumber(int num) {
        int sum = 0, rightDigit, temp;
        temp = num;
        while (temp != 0) {
            rightDigit = temp % 10;
            sum = sum + (rightDigit * rightDigit * rightDigit);
            temp = temp / 10;
        }
        /*
         * If sum is equal to num, then num is an Armstrong number,
         * otherwise it is not
         */
        if (sum == num) {
            // num is an Armstrong number
            return true;
        } else {
            // num is not an Armstrong number
            return false;
        }
    }
}

Output
Enter a Number
1000
Armstrong Number between 0 to 1000
0
1
153
370
371
407
I'd like to propose lockless get_user_pages for merge in -mm. Since last
posted, this includes Linus's scheme for loading the pte on 32-bit PAE
(with comment), and an access_ok() check. I'd like if somebody can verify
both these additions (and the patch in general). It wouldn't be hard to
have a big security hole here if any of the checks are wrong...
I do have a powerpc patch which demonstrates that fast_gup isn't just a
crazy x86 only hack... but it is a little more complex (requires speculative
page references in the VM), and not as well tested as the x86 version.
Anyway, I think the x86 code is now pretty well complete and has no
more known issues.
--
Introduce a new "fast_gup" (for want of a better name right now) which
is basically a get_user_pages with a less general API (but still tends to
be suited to the common case):
- task and mm are always current and current->mm
- force is always 0
- pages is always non-NULL
- don't pass back vmas
This restricted API can be implemented in a much more scalable way when
the ptes are present, by walking the page tables locklessly.
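As used by the converted callsites below, the call boils down to the following signature (a sketch inferred from the diff; the real declaration is added to include/linux/mm.h):

```c
/*
 * Pin up to nr_pages user pages starting at 'start', for the current
 * task/mm only. Returns the number of pages pinned, which may be fewer
 * than nr_pages if a pte is missing or lacks the right permissions.
 */
int fast_gup(unsigned long start, int nr_pages, int write,
	     struct page **pages);
```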
Before anybody asks; it is impossible to just "detect" such a situation in the
existing get_user_pages call, and branch to a fastpath in that case, because
get_user_pages requires the mmap_sem is held over the call, whereas fast_gup does
not.
This patch implements fast_gup on x86, and converts a number of key callsites
to use it.
On x86, we do an optimistic lockless pagetable walk, without taking any page
table locks or even mmap_sem. Page table existence is guaranteed by turning
interrupts off (combined with the fact that we're always looking up the current
mm, means we can do the lockless page table walk within the constraints of the
TLB shootdown design). Basically we can do this lockless pagetable walk in a
similar manner to the way the CPU's pagetable walker does not have to take any
locks to find present ptes.
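In pseudocode, the core of the x86 fast path is roughly the following (a sketch of the idea only, not the actual arch/x86/mm/gup.c):

```c
/*
 * With interrupts disabled, this CPU cannot service the TLB-shootdown
 * IPI, so no page table reachable from current->mm can be freed while
 * we walk it.
 */
local_irq_disable();
/*
 * Walk pgd -> pud -> pmd -> pte for each address in the range; take a
 * reference on each present pte with suitable permissions, and fall
 * back to the regular get_user_pages() on anything unexpected.
 */
local_irq_enable();
```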
Many other architectures could do the same thing. Those that don't IPI
could potentially RCU free the page tables and do speculative references
on the pages (a la lockless pagecache) to achieve a lockless fast_gup. I
have actually got an implementation of this for powerpc.
This patch was found to give about 10% performance improvement on a 2 socket
8 core Intel Xeon system running an OLTP workload on DB2 v9.5
"To test the effects of the patch, an OLTP workload was run on an IBM
x3850 M2 server with 2 processors (quad-core Intel Xeon processors at
2.93 GHz) using IBM DB2 v9.5 running Linux 2.6.24rc7 kernel. Comparing
runs with and without the patch resulted in an overall performance
benefit of ~9.8%. Correspondingly, oprofiles showed that samples from
__up_read and __down_read routines that is seen during thread contention
for system resources was reduced from 2.8% down to .05%. Monitoring
the /proc/vmstat output from the patched run showed that the counter for
fast_gup contained a very high number while the fast_gup_slow value was
zero."
(fast_gup_slow is a counter we had for the number of times the slowpath
was invoked).
The main reason for the improvement is that DB2 has multiple threads each
issuing direct-IO. Direct-IO uses get_user_pages, and thus the threads
contend the mmap_sem cacheline, and can also contend on page table locks.
I would anticipate larger performance gains on larger systems, however I
think DB2 uses an adaptive mix of threads and processes, so it could be
that thread contention remains pretty constant as machine size increases.
In which case, we stuck with "only" a 10% gain.
Lots of other code could use this too (eg. grep drivers/).
The downside of using fast_gup is that if there is not a pte with the
correct permissions for the access, we end up falling back to get_user_pages
and so the fast_gup is just extra work. This should not be the common case
in performance critical code, I'd hope.
Signed-off-by: Nick Piggin <npiggin <at> suse.de>
Cc: shaggy <at> austin.ibm.com
Cc: axboe <at> oracle.com
Cc: torvalds <at> linux-foundation.org
Cc: linux-mm <at> kvack.org
Cc: linux-arch <at> vger.kernel.org
---
arch/x86/mm/Makefile | 2
arch/x86/mm/gup.c | 193 ++++++++++++++++++++++++++++++++++++++++++++++
fs/bio.c | 8 -
fs/direct-io.c | 10 --
fs/splice.c | 41 ---------
include/asm-x86/uaccess.h | 5 +
include/linux/mm.h | 19 ++++
7 files changed, 225 insertions(+), 53 deletions(-)
Index: linux-2.6/fs/bio.c
===================================================================
--- linux-2.6.orig/fs/bio.c
+++ linux-2.6/fs/bio.c
@@ -713,12 +713,8 @@ static struct bio *__bio_map_user_iov(st
const int local_nr_pages = end - start;
const int page_limit = cur_page + local_nr_pages;
- down_read(&current->mm->mmap_sem);
- ret = get_user_pages(current, current->mm, uaddr,
- local_nr_pages,
- write_to_vm, 0, &pages[cur_page], NULL);
- up_read(&current->mm->mmap_sem);
-
+ ret = fast_gup(uaddr, local_nr_pages,
+ write_to_vm, &pages[cur_page]);
if (ret < local_nr_pages) {
ret = -EFAULT;
goto out_unmap;
Index: linux-2.6/fs/direct-io.c
===================================================================
--- linux-2.6.orig/fs/direct-io.c
+++ linux-2.6/fs/direct-io.c
@@ -150,17 +150,11 @@ static int dio_refill_pages(struct dio *
int nr_pages;
nr_pages = min(dio->total_pages - dio->curr_page, DIO_PAGES);
- down_read(&current->mm->mmap_sem);
- ret = get_user_pages(
- current, /* Task for fault acounting */
- current->mm, /* whose pages? */
+ ret = fast_gup(
dio->curr_user_address, /* Where from? */
nr_pages, /* How many pages? */
dio->rw == READ, /* Write to memory? */
- 0, /* force (?) */
- &dio->pages[0],
- NULL); /* vmas */
- up_read(&current->mm->mmap_sem);
+ &dio->pages[0]); /* Put results here */
if (ret < 0 && dio->blocks_available && (dio->rw & WRITE)) {
struct page *page = ZERO_PAGE(0);
Index: linux-2.6/fs/splice.c
===================================================================
--- linux-2.6.orig/fs/splice.c
+++ linux-2.6/fs/splice.c
@@ -1147,36 +1147,6 @@ static long do_splice(struct file *in,;
-
- if (!access_ok(VERIFY_READ, src, n))
- return -EFAULT;
-
-;
-}
-
-/*
* Map an iov into an array of pages and offset/length tupples. With the
* partial_page structure, we can map several non-contiguous ranges into
* our ones pages[] map instead of splitting that operation into pieces.
@@ -1189,8 +1159,6 @@ static int get_iovec_page_array(const st
{
int buffers = 0, error = 0;
- down_read(&current->mm->mmap_sem);
-
while (nr_vecs) {
unsigned long off, npages;
struct iovec entry;
@@ -1199,7 +1167,7 @@ static int get_iovec_page_array(const st
int i;
error = -EFAULT;
- if (copy_from_user_mmap_sem(&entry, iov, sizeof(entry)))
+ if (copy_from_user(&entry, iov, sizeof(entry)))
break;
base = entry.iov_base;
@@ -1233,9 +1201,8 @@ static int get_iovec_page_array(const st
if (npages > PIPE_BUFFERS - buffers)
npages = PIPE_BUFFERS - buffers;
- error = get_user_pages(current, current->mm,
- (unsigned long) base, npages, 0, 0,
- &pages[buffers], NULL);
+ error = fast_gup((unsigned long)base, npages,
+ 0, &pages[buffers]);
if (unlikely(error <= 0))
break;
@@ -1274,8 +1241,6 @@ static int get_iovec_page_array(const st
iov++;
}
- up_read(&current->mm->mmap_sem);
-
if (buffers)
return buffers;
Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h
+++ linux-2.6/include/linux/mm.h
@@ -12,6 +12,7 @@
#include <linux/prio_tree.h>
#include <linux/debug_locks.h>
#include <linux/mm_types.h>
+#include <linux/uaccess.h> /* for __HAVE_ARCH_FAST_GUP */
struct mempolicy;
struct anon_vma;
@@ -830,6 +831,24 @@ extern int mprotect_fixup(struct vm_area
struct vm_area_struct **pprev, unsigned long start,
unsigned long end, unsigned long newflags);
+#ifndef __HAVE_ARCH_FAST_GUP
+/* Should be moved to asm-generic, and architectures can include it if they
+ * don't implement their own fast_gup.
+ */
+#define fast_gup(start, nr_pages, write, pages) \
+({ \
+ struct mm_struct *mm = current->mm; \
+ int ret; \
+ \
+ down_read(&mm->mmap_sem); \
+ ret = get_user_pages(current, mm, start, nr_pages, \
+ write, 0, pages, NULL); \
+ up_read(&mm->mmap_sem); \
+ \
+ ret; \
+})
+#endif
+
/*
* A callback you can register to apply pressure to ageable caches.
*
Index: linux-2.6/arch/x86/mm/Makefile
===================================================================
--- linux-2.6.orig/arch/x86/mm/Makefile
+++ linux-2.6/arch/x86/mm/Makefile
@@ -1,5 +1,5 @@
obj-y := init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
- pat.o pgtable.o
+ pat.o pgtable.o gup.o
obj-$(CONFIG_X86_32) += pgtable_32.o
Index: linux-2.6/arch/x86/mm/gup.c
===================================================================
--- /dev/null
+++ linux-2.6/arch/x86/mm/gup.c
@@ -0,0 +1,243 @@
+/*
+ * Lockless fast_gup for x86
+ *
+ */
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/vmstat.h>
+#include <asm/pgtable.h>
+
+static inline pte_t gup_get_pte(pte_t *ptep)
+{
+#ifndef CONFIG_X86_PAE
+ return *ptep;
+#else
+ /*
+ * With fast_gup, we walk down the pagetables without taking any locks.
+ * For this we would like to load the pointers atoimcally, but that is
+ * not possible (without expensive cmpxchg8b) on PAE. What we do have
+ * is the guarantee that a pte will only either go from not present to
+ * present, or present to not present or both -- it will not switch to
+ * a completely different present page without a TLB flush in between;
+ * something that we are blocking by holding interrupts off.
+ *
+ * Setting ptes from not present to present goes:
+ * ptep->pte_high = h;
+ * smp_wmb();
+ * ptep->pte_low = l;
+ *
+ * And present to not present goes:
+ * ptep->pte_low = 0;
+ * smp_wmb();
+ * ptep->pte_high = 0;
+ *
+ * We must ensure here that the load of pte_low sees l iff
+ * pte_high sees h. We load pte_high *after* loading pte_low,
+ * which ensures we don't see an older value of pte_high.
+ * *Then* we recheck pte_low, which ensures that we haven't
+ * picked up a changed pte high. We might have got rubbish values
+ * from pte_low and pte_high, but we are guaranteed that pte_low
+ * will not have the present bit set *unless* it is 'l'. And
+ * fast_gup only operates on present ptes, so we're safe.
+ *
+ * gup_get_pte should not be used or copied outside gup.c without
+ * being very careful -- it does not atomically load the pte or
+ * anything that is likely to be useful for you.
+ */
+ pte_t pte;
+
+retry:
+ pte.pte_low = ptep->pte_low;
+ smp_rmb();
+ pte.pte_high = ptep->pte_high;
+ smp_rmb();
+ if (unlikely(pte.pte_low != ptep->pte_low))
+ goto retry;
+
+ return pte;
+#endif
+}
+
+/*
+ * The performance critical leaf functions are made noinline otherwise gcc
+ * inlines everything into a single function which results in too much
+ * register pressure.
+ */
+static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
+ unsigned long end, int write, struct page **pages, int *nr)
+{
+ unsigned long mask, result;
+ pte_t *ptep;
+
+ result = _PAGE_PRESENT|_PAGE_USER;
+ if (write)
+ result |= _PAGE_RW;
+ mask = result | _PAGE_SPECIAL;
+
+ ptep = pte_offset_map(&pmd, addr);
+ do {
+ pte_t pte = gup_get_pte(ptep);
+ struct page *page;
+
+ if ((pte_val(pte) & mask) != result)
+ return 0;
+ VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+ page = pte_page(pte);
+ get_page(page);
+ pages[*nr] = page;
+ (*nr)++;
+
+ } while (ptep++, addr += PAGE_SIZE, addr != end);
+ pte_unmap(ptep - 1);
+
+ return 1;
+}
+
+static inline void get_head_page_multiple(struct page *page, int nr)
+{
+ VM_BUG_ON(page != compound_head(page));
+ VM_BUG_ON(page_count(page) == 0);
+ atomic_add(nr, &page->_count);
+}
+
+static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr,
+ unsigned long end, int write, struct page **pages, int *nr)
+{
+ unsigned long mask;
+ pte_t pte = *(pte_t *)&pmd;
+ struct page *head, *page;
+ int refs;
+
+ mask = _PAGE_PRESENT|_PAGE_USER;
+ if (write)
+ mask |= _PAGE_RW;
+ if ((pte_val(pte) & mask) != mask)
+ return 0;
+ /* hugepages are never "special" */
+ VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+
+ refs = 0;
+ head = pte_page(pte);
+ page = head + ((addr & ~HPAGE_MASK) >> PAGE_SHIFT);
+ do {
+ VM_BUG_ON(compound_head(page) != head);
+ pages[*nr] = page;
+ (*nr)++;
+ page++;
+ refs++;
+ } while (addr += PAGE_SIZE, addr != end);
+ get_head_page_multiple(head, refs);
+
+ return 1;
+}
+
+static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
+ int write, struct page **pages, int *nr)
+{
+ unsigned long next;
+ pmd_t *pmdp;
+
+ pmdp = pmd_offset(&pud, addr);
+ do {
+ pmd_t pmd = *pmdp;
+
+ next = pmd_addr_end(addr, end);
+ if (pmd_none(pmd))
+ return 0;
+ if (unlikely(pmd_large(pmd))) {
+ if (!gup_huge_pmd(pmd, addr, next, write, pages, nr))
+ return 0;
+ } else {
+ if (!gup_pte_range(pmd, addr, next, write, pages, nr))
+ return 0;
+ }
+ } while (pmdp++, addr = next, addr != end);
+
+ return 1;
+}
+
+static int gup_pud_range(pgd_t pgd, unsigned long addr, unsigned long end,
+ int write, struct page **pages, int *nr)
+{
+ unsigned long next;
+ pud_t *pudp;
+
+ pudp = pud_offset(&pgd, addr);
+ do {
+ pud_t pud = *pudp;
+
+ next = pud_addr_end(addr, end);
+ if (pud_none(pud))
+ return 0;
+ if (!gup_pmd_range(pud, addr, next, write, pages, nr))
+ return 0;
+ } while (pudp++, addr = next, addr != end);
+
+ return 1;
+}
+
+int fast_gup(unsigned long start, int nr_pages, int write, struct page **pages)
+{
+ struct mm_struct *mm = current->mm;
+ unsigned long end = start + (nr_pages << PAGE_SHIFT);
+ unsigned long addr = start;
+ unsigned long next;
+ pgd_t *pgdp;
+ int nr = 0;
+
+ if (unlikely(!access_ok(write ? VERIFY_WRITE : VERIFY_READ,
+ start, nr_pages*PAGE_SIZE)))
+ goto slow_irqon;
+
+ /*
+ * XXX: batch / limit 'nr', to avoid large irq off latency
+ * needs some instrumenting to determine the common sizes used by
+ * important workloads (eg. DB2), and whether limiting the batch size
+ * will decrease performance.
+ *
+ * It seems like we're in the clear for the moment. Direct-IO is
+ * the main guy that batches up lots of get_user_pages, and even
+ * they are limited to 64-at-a-time which is not so many.
+ */
+ /*
+ * This doesn't prevent pagetable teardown, but does prevent
+ * the pagetables and pages from being freed on x86.
+ *
+ * So long as we atomically load page table pointers versus teardown
+ * (which we do on x86, with the above PAE exception), we can follow the
+ * address down to the the page and take a ref on it.
+ */
+ local_irq_disable();
+ pgdp = pgd_offset(mm, addr);
+ do {
+ pgd_t pgd = *pgdp;
+
+ next = pgd_addr_end(addr, end);
+ if (pgd_none(pgd))
+ goto slow;
+ if (!gup_pud_range(pgd, addr, next, write, pages, &nr))
+ goto slow;
+ } while (pgdp++, addr = next, addr != end);
+ local_irq_enable();
+
+ VM_BUG_ON(nr != (end - start) >> PAGE_SHIFT);
+ return nr;
+
+ {
+ int i, ret;
+
+slow:
+ local_irq_enable();
+slow_irqon:
+ /* Could optimise this more by keeping what we've already got */
+ for (i = 0; i < nr; i++)
+ put_page(pages[i]);
+
+ down_read(&mm->mmap_sem);
+ ret = get_user_pages(current, mm, start,
+ (end - start) >> PAGE_SHIFT, write, 0, pages, NULL);
+ up_read(&mm->mmap_sem);
+
+ return ret;
+ }
+}
Index: linux-2.6/include/asm-x86/uaccess.h
===================================================================
--- linux-2.6.orig/include/asm-x86/uaccess.h
+++ linux-2.6/include/asm-x86/uaccess.h
@@ -3,3 +3,8 @@
#else
# include "uaccess_64.h"
#endif
+
+#define __HAVE_ARCH_FAST_GUP
+struct page;
+int fast_gup(unsigned long start, int nr_pages, int write, struct page **pages);
+
http://article.gmane.org/gmane.linux.kernel.cross-arch/930
In order to test Flask installation, type the following code in the editor as Hello.py
from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello_world():
   return 'Hello World'

if __name__ == '__main__':
   app.run()
Importing flask module in the project is mandatory. An object of Flask class is our WSGI application.
Flask constructor takes the name of current module (__name__) as argument.
The route() function of the Flask class is a decorator, which tells the application which URL should call the associated function.
app.route(rule, options)
The rule parameter represents URL binding with the function.
The options is a list of parameters to be forwarded to the underlying Rule object.
In the above example, ‘/’ URL is bound with hello_world() function. Hence, when the home page of web server is opened in browser, the output of this function will be rendered.
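The binding performed by the route() decorator can be illustrated with a minimal, self-contained sketch. This is only an illustration of how a route()-style decorator can map URL rules to view functions; it is not Flask's actual implementation, and the names url_map and dispatch are made up for the example.

```python
# Hypothetical, simplified sketch of URL-rule binding -- not Flask's code.
url_map = {}

def route(rule):
    def decorator(func):
        url_map[rule] = func      # bind the URL rule to the view function
        return func
    return decorator

@route('/')
def hello_world():
    return 'Hello World'

def dispatch(path):
    # look up the view function bound to the requested path and call it
    return url_map[path]()

print(dispatch('/'))              # -> Hello World
```

When a request arrives, the framework consults such a mapping to decide which function produces the response; the decorator merely records the association at import time.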
Finally the run() method of Flask class runs the application on the local development server.
app.run(host, port, debug, options)
All parameters are optional
The above Python script is executed from the Python shell.
python Hello.py
A message in the Python shell informs you that:
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
Open the above URL (localhost:5000) in the browser. ‘Hello World’ message will be displayed on it.
A Flask application is started by calling the run() method. However, while the application is under development, it should be restarted manually for each change in the code. To avoid this inconvenience, enable debug support. The server will then reload itself if the code changes. It will also provide a useful debugger to track the errors if any, in the application.
The Debug mode is enabled by setting the debug property of the application object to True before running or passing the debug parameter to the run() method.
app.debug = True
app.run()
# or, equivalently
app.run(debug = True)
The key-value pairs of a dictionary passed to a template can be rendered as an HTML table:
<!doctype html>
<html>
   <body>
      <table border = 1>
         {% for key, value in result.items() %}
            <tr>
               <th> {{ key }} </th>
               <td> {{ value }} </td>
            </tr>
         {% endfor %}
      </table>
   </body>
</html>
Here, again the Python statements corresponding to the For loop are enclosed in {%..%} whereas, the expressions key and value are put inside {{ }}.
After the development server starts running, open localhost:5000 in the browser to get the following output.
A web application often requires a static file such as a javascript file or a CSS file supporting the display of a web page. Usually, the web server is configured to serve them for you, but during the development, these files are served from static folder in your package or next to your module and it will be available at /static on the application.
A special endpoint ‘static’ is used to generate URL for static files.
In the following example, a javascript function defined in hello.js is called on OnClick event of HTML button in index.html, which is rendered on ‘/’ URL of the Flask application.
from flask import Flask, render_template
app = Flask(__name__)

@app.route("/")
def index():
   return render_template("index.html")

if __name__ == '__main__':
   app.run(debug = True)
The HTML script of index.html is given below.
<html>
   <head>
      <script type = "text/javascript"
         src = "{{ url_for('static', filename = 'hello.js') }}" ></script>
   </head>
   <body>
      <input type = "button" onclick = "sayHello()" value = "Say Hello" />
   </body>
</html>
hello.js contains sayHello() function.
function sayHello() { alert("Hello World") }
The data from a client’s web page is sent to the server as a global request object. In order to process the request data, it should be imported from the Flask module.
Important attributes of request object are listed below −
form − a dictionary object containing key-value pairs of form parameters and their values.
args − parsed contents of the query string, which is the part of the URL after the question mark (?).
cookies − a dictionary object holding cookie names and values.
files − data pertaining to an uploaded file.
method − the current request method.
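As an illustration of what the args attribute holds, the same parsing can be reproduced with the standard library's urllib.parse. The URL below is made up for the example; Flask performs equivalent parsing internally before exposing request.args.

```python
from urllib.parse import urlsplit, parse_qs

# A hypothetical request URL; everything after the '?' is the query string.
url = 'http://localhost:5000/result?name=Ravi&subject=Physics'

query = urlsplit(url).query            # 'name=Ravi&subject=Physics'
args = parse_qs(query)                 # dict of lists, similar to request.args

print(args['name'][0])                 # -> Ravi
print(args['subject'][0])              # -> Physics
```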
We have already seen that the http method can be specified in the URL rule. The form data received by the triggered function can be collected as a dictionary object and forwarded to a template, which renders it on the corresponding web page.
In the following example, the ‘/’ URL renders a web page (student.html) containing a form. The data filled in it is posted to the ‘/result’ URL, which triggers the result() function.
The template dynamically renders an HTML table of form data.
Given below is the Python code of application −
from flask import Flask, render_template, request
app = Flask(__name__)

@app.route('/')
def student():
   return render_template('student.html')

@app.route('/result', methods = ['POST', 'GET'])
def result():
   if request.method == 'POST':
      result = request.form
      return render_template("result.html", result = result)

if __name__ == '__main__':
   app.run(debug = True)
Given below is the HTML script of student.html.
<html>
   <body>
      <form action = "" method = "POST">
         <p>Name <input type = "text" name = "Name" /></p>
         <p>Physics <input type = "text" name = "Physics" /></p>
         <p>Chemistry <input type = "text" name = "chemistry" /></p>
         <p>Maths <input type = "text" name = "Mathematics" /></p>
         <p><input type = "submit" value = "submit" /></p>
      </form>
   </body>
</html>
Code of template (result.html) is given below −
<!doctype html>
<html>
   <body>
      <table border = 1>
         {% for key, value in result.items() %}
            <tr>
               <th> {{ key }} </th>
               <td> {{ value }} </td>
            </tr>
         {% endfor %}
      </table>
   </body>
</html>
Run the Python script and enter localhost:5000 in the browser.
When the Submit button is clicked, form data is rendered on result.html in the form of HTML table.
Session data is stored on the client, in a cookie that is cryptographically signed by the server. A session variable is set or read through the session object, which behaves like a dictionary.
For example, to set a ‘username’ session variable use the statement −
session['username'] = 'admin'
To release a session variable use pop() method.
session.pop('username', None)
The following code is a simple demonstration of how sessions work in Flask. The URL ‘/’ simply prompts the user to log in, as the session variable ‘username’ is not set.
@app.route('/')
def index():
   if 'username' in session:
      username = session['username']
      return 'Logged in as ' + username + '<br>' + \
         "<b><a href = '/logout'>click here to log out</a></b>"
   return "You are not logged in <br><a href = '/login'><b>" + \
      "click here to log in</b></a>"
As the user browses to ‘/login’, the login() view function, being called through the GET method, opens up a login form.
The form is posted back to ‘/login’ and now the session variable is set. The application is redirected to ‘/’; this time the session variable ‘username’ is found.
@app.route('/login', methods = ['GET', 'POST'])
def login():
   if request.method == 'POST':
      session['username'] = request.form['username']
      return redirect(url_for('index'))
   return '''
   <form action = "" method = "post">
      <p><input type = text name = username /></p>
      <p><input type = submit value = Login /></p>
   </form>
   '''
The application also contains a logout() view function, which pops out ‘username’ session variable. Hence, ‘/’ URL again shows the opening page.
@app.route('/logout')
def logout():
   # remove the username from the session if it is there
   session.pop('username', None)
   return redirect(url_for('index'))
Run the application and visit the homepage. (Ensure to set secret_key of the application)
from flask import Flask, session, redirect, url_for, escape, request
app = Flask(__name__)
app.secret_key = 'any random string'
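The secret_key is required because Flask keeps the session in a client-side cookie that is cryptographically signed, so the client can read it but cannot alter it undetected. Below is a minimal sketch of the signing idea using only the standard library; Flask itself delegates this to the itsdangerous package, and the helper names sign and verify here are made up for the illustration.

```python
import hashlib
import hmac
import json

secret_key = b'any random string'

def sign(data):
    # serialize the session dict and append an HMAC signature
    payload = json.dumps(data, sort_keys=True)
    sig = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    return payload + '.' + sig

def verify(cookie):
    # recompute the signature; reject the cookie if it was tampered with
    payload, sig = cookie.rsplit('.', 1)
    expected = hmac.new(secret_key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError('bad session cookie')
    return json.loads(payload)

cookie = sign({'username': 'admin'})
print(verify(cookie)['username'])      # -> admin
```

This is why the secret key must stay secret and be hard to guess: anyone who knows it can forge a valid session cookie.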
The output will be displayed as shown below. Click the link “click here to log in”.
The link will be directed to another screen. Type ‘admin’.
The screen will show you the message, ‘Logged in as admin’.
Flask class has a redirect() function. When called, it returns a response object and redirects the user to another target location with specified status code.
Prototype of redirect() function is as below −
Flask.redirect(location, statuscode, response)
In the above function −
location parameter is the URL where response should be redirected.
statuscode sent to browser’s header, defaults to 302.
response parameter is used to instantiate response.
The following status codes are standardized −
The default status code is 302, which is for ‘found’.
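The standardized status codes are also available by name in Python's standard library, which is a convenient way to avoid magic numbers when passing a statuscode to redirect():

```python
from http import HTTPStatus

# 302 FOUND is the default status code used by redirect()
print(int(HTTPStatus.FOUND))               # -> 302

# a few of the other standardized redirect codes
print(int(HTTPStatus.MOVED_PERMANENTLY))   # -> 301
print(int(HTTPStatus.TEMPORARY_REDIRECT))  # -> 307
```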
In the following example, the redirect() function is used to display the login page again when a login attempt fails.
from flask import Flask, redirect, url_for, render_template, request

# Initialize the Flask application
app = Flask(__name__)

@app.route('/')
def index():
   return render_template('log_in.html')

@app.route('/login', methods = ['POST', 'GET'])
def login():
   if request.method == 'POST' and request.form['username'] == 'admin':
      return redirect(url_for('success'))
   else:
      return redirect(url_for('index'))

@app.route('/success')
def success():
   return 'logged in successfully'

if __name__ == '__main__':
   app.run(debug = True)
The Flask class has an abort() function that aborts a request with an error code.
Flask.abort(code)
The Code parameter takes one of following values −
400 − for Bad Request
401 − for Unauthenticated
403 − for Forbidden
404 − for Not Found
406 − for Not Acceptable
415 − for Unsupported Media Type
429 − Too Many Requests
Let us make a slight change in the login() function in the above code. Instead of re-displaying the login page when a login attempt fails, we now display an ‘Unauthorized’ page by replacing the redirect with a call to abort(401).
from flask import Flask, redirect, url_for, render_template, request, abort
app = Flask(__name__)

@app.route('/')
def index():
   return render_template('log_in.html')

@app.route('/login', methods = ['POST', 'GET'])
def login():
   if request.method == 'POST':
      if request.form['username'] == 'admin':
         return redirect(url_for('success'))
      else:
         abort(401)
   else:
      return redirect(url_for('index'))

@app.route('/success')
def success():
   return 'logged in successfully'

if __name__ == '__main__':
   app.run(debug = True)
A good GUI based application provides feedback to a user about the interaction. For example, the desktop applications use dialog or message box and JavaScript uses alerts for similar purpose.
Generating such informative messages is easy in a Flask web application. The flashing system of the Flask framework makes it possible to create a message in one request and render it in the view function that handles the next one.
The Flask module contains a flash() method. It passes a message to the next request, which generally is a template.
flash(message, category)
Here,
message parameter is the actual message to be flashed.
category parameter is optional. It can be either ‘error’, ‘info’ or ‘warning’.
In order to remove the flashed messages from the session, the template calls get_flashed_messages().
get_flashed_messages(with_categories, category_filter)
Both parameters are optional. If the first parameter is set, the received messages are returned as (category, message) tuples. The second parameter is useful to display only messages with specific categories.
The following renders the received messages in a template.
{% with messages = get_flashed_messages() %}
   {% if messages %}
      {% for message in messages %}
         {{ message }}
      {% endfor %}
   {% endif %}
{% endwith %}
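Conceptually, flashing is just a message queue kept in the session: one request appends messages, and reading them removes them, so each message is shown exactly once. A minimal sketch of the mechanism follows; it is an illustration only, not Flask's implementation, and the plain dict standing in for the session store is an assumption of the sketch.

```python
session = {}    # stands in for the per-user session store

def flash(message, category='message'):
    # queue the message for the next request
    session.setdefault('_flashes', []).append((category, message))

def get_flashed_messages(with_categories=False):
    flashes = session.pop('_flashes', [])   # reading consumes the messages
    return flashes if with_categories else [msg for cat, msg in flashes]

flash('You were successfully logged in')
print(get_flashed_messages())   # -> ['You were successfully logged in']
print(get_flashed_messages())   # -> [] (messages are shown only once)
```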
Let us now see a simple example, demonstrating the flashing mechanism in Flask. In the following code, a ‘/’ URL displays link to the login page, with no message to flash.
@app.route('/')
def index():
   return render_template('index.html')
The link leads a user to the ‘/login’ URL which displays a login form. When submitted, the login() view function verifies the username and password and accordingly flashes a success message or sets an ‘error’ variable.

@app.route('/login', methods = ['GET', 'POST'])
def login():
   error = None
   if request.method == 'POST':
      if request.form['username'] != 'admin' or \
            request.form['password'] != 'admin':
         error = 'Invalid username or password. Please try again!'
      else:
         flash('You were successfully logged in')
         return redirect(url_for('index'))
   return render_template('login.html', error = error)
In case of error, the login template is redisplayed with error message.
<!doctype html>
<html>
   <body>
      <h1>Login</h1>
      {% if error %}
         <p><strong>Error:</strong> {{ error }}</p>
      {% endif %}
      <form action = "" method = post>
         <dl>
            <dt>Username:</dt>
            <dd><input type = text name = username></dd>
            <dt>Password:</dt>
            <dd><input type = password name = password></dd>
         </dl>
         <p><input type = submit value = Login></p>
      </form>
   </body>
</html>
On the other hand, if login is successful, a success message is flashed on the index template.
<!doctype html>
<html>
   <head>
      <title>Flask Message flashing</title>
   </head>
   <body>
      {% with messages = get_flashed_messages() %}
         {% if messages %}
            <ul>
               {% for message in messages %}
                  <li>{{ message }}</li>
               {% endfor %}
            </ul>
         {% endif %}
      {% endwith %}
      <h1>Flask Message Flashing Example</h1>
      <p>Do you want to <a href = "{{ url_for('login') }}"><b>log in?</b></a></p>
   </body>
</html>
A complete code for Flask message flashing example is given below −
from flask import Flask, flash, redirect, render_template, request, url_for
app = Flask(__name__)
app.secret_key = 'random string'

@app.route('/')
def index():
   return render_template('index.html')

@app.route('/login', methods = ['GET', 'POST'])
def login():
   error = None
   if request.method == 'POST':
      if request.form['username'] != 'admin' or \
            request.form['password'] != 'admin':
         error = 'Invalid username or password. Please try again!'
      else:
         flash('You were successfully logged in')
         return redirect(url_for('index'))
   return render_template('login.html', error = error)

if __name__ == "__main__":
   app.run(debug = True)
After executing the above codes, you will see the screen as shown below.
When you click on the link, you will be directed to the Login page.
Enter the Username and password.
Click Login. A message will be displayed: “You were successfully logged in”.
One of the essential aspects of a web application is to present a user interface for the user. HTML provides a <form> tag, which is used to design an interface. A Form’s elements such as text input, radio, select etc. can be used appropriately.
Data entered by a user is submitted in the form of Http request message to the server side script by either GET or POST method.
The Server side script has to recreate the form elements from http request data. So in effect, form elements have to be defined twice – once in HTML and again in the server side script.
Another disadvantage of using HTML form is that it is difficult (if not impossible) to render the form elements dynamically. HTML itself provides no way to validate a user’s input.
This is where WTForms, a flexible form rendering and validation library, comes in handy. The Flask-WTF extension provides a simple interface to this WTForms library.
Using Flask-WTF, we can define the form fields in our Python script and render them using an HTML template. It is also possible to apply validation to the WTF field.
Let us see how this dynamic generation of HTML works.
First, Flask-WTF extension needs to be installed.
pip install flask-WTF
The installed package contains a Form class, which has to be used as a parent for user-defined forms.
WTforms package contains definitions of various form fields. Some Standard form fields are listed below.
For example, a form containing a text field can be designed as below −
from flask_wtf import Form
from wtforms import TextField

class ContactForm(Form):
   name = TextField("Name Of Student")
In addition to the ‘name’ field, a hidden field for CSRF token is created automatically. This is to prevent Cross Site Request Forgery attack.
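The idea behind the CSRF token can be sketched in a few lines: the server issues an unguessable random token with each form and rejects a submission whose token does not match. This is an illustration only; Flask-WTF's real implementation also ties the token to the session, and the names issued_token and token_is_valid are made up here.

```python
import hmac
import secrets

issued_token = secrets.token_hex(16)   # value placed in the hidden field

def token_is_valid(submitted_token):
    # constant-time comparison against the token the server handed out
    return hmac.compare_digest(issued_token, submitted_token)

print(token_is_valid(issued_token))    # -> True
print(token_is_valid('forged-value'))  # -> False
```

A forged cross-site POST cannot supply the right token because the attacker never saw the form the server rendered.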
When rendered, this will result into an equivalent HTML script as shown below.
<input id = "csrf_token" name = "csrf_token" type = "hidden" />
<label for = "name">Name Of Student</label><br>
<input id = "name" name = "name" type = "text" value = "" />
A user-defined form class is used in a Flask application and the form is rendered using a template.
from flask import Flask, render_template
from forms import ContactForm
app = Flask(__name__)
app.secret_key = 'development key'

@app.route('/contact')
def contact():
   form = ContactForm()
   return render_template('contact.html', form = form)

if __name__ == '__main__':
   app.run(debug = True)
The WTForms package also contains validator classes, which are useful in applying validation to form fields. The following list shows commonly used validators.
We shall now apply ‘DataRequired’ validation rule for the name field in contact form.
name = TextField("Name Of Student",[validators.Required("Please enter your name.")])
The validate() function of the form object validates the form data and raises validation errors if validation fails. The error messages are sent to the template, where they are rendered dynamically.
{% for message in form.name.errors %}
   {{ message }}
{% endfor %}
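The way a validator such as Required reports failures can be sketched as follows: a validator raises an error, and the form collects it into the field's errors list. This is a simplified illustration of the pattern, not WTForms' code; the names required and run_validators are made up for the sketch.

```python
class ValidationError(Exception):
    pass

def required(message):
    # returns a validator that rejects empty input
    def validator(value):
        if not value or not value.strip():
            raise ValidationError(message)
    return validator

def run_validators(value, validators):
    # collect every failure into an errors list, like field.errors
    errors = []
    for check in validators:
        try:
            check(value)
        except ValidationError as exc:
            errors.append(str(exc))
    return errors

name_errors = run_validators('', [required('Please enter your name.')])
print(name_errors)    # -> ['Please enter your name.']
```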
The following example demonstrates the concepts given above. The design of Contact form is given below (forms.py).
from flask_wtf import Form
from wtforms import TextField, IntegerField, TextAreaField, SubmitField, RadioField, SelectField
from wtforms import validators, ValidationError

class ContactForm(Form):
   name = TextField("Name Of Student", [validators.Required("Please enter your name.")])
   Gender = RadioField('Gender', choices = [('M', 'Male'), ('F', 'Female')])
   Address = TextAreaField("Address")
   email = TextField("Email", [validators.Required("Please enter your email address."),
      validators.Email("Please enter your email address.")])
   Age = IntegerField("age")
   language = SelectField('Languages', choices = [('cpp', 'C++'), ('py', 'Python')])
   submit = SubmitField("Send")
Validators are applied to the Name and Email fields.
Given below is the Flask application script (formexample.py).
from flask import Flask, render_template, request, flash
from forms import ContactForm
app = Flask(__name__)
app.secret_key = 'development key'

@app.route('/contact', methods = ['GET', 'POST'])
def contact():
   form = ContactForm()
   if request.method == 'POST':
      if form.validate() == False:
         flash('All fields are required.')
         return render_template('contact.html', form = form)
      else:
         return render_template('success.html')
   elif request.method == 'GET':
      return render_template('contact.html', form = form)

if __name__ == '__main__':
   app.run(debug = True)
The Script of the template (contact.html) is as follows −
<!doctype html>
<html>
   <body>
      <h2 style = "text-align: center;">Contact Form</h2>
      {% for message in form.name.errors %}
         <div>{{ message }}</div>
      {% endfor %}
      {% for message in form.email.errors %}
         <div>{{ message }}</div>
      {% endfor %}
      <form action = "" method = post>
         <fieldset>
            <legend>Contact Form</legend>
            {{ form.hidden_tag() }}
            <div style = "font-size:20px; font-weight:bold; margin-left:150px;">
               {{ form.name.label }}<br>
               {{ form.name }}<br>
               {{ form.Gender.label }} {{ form.Gender }}
               {{ form.Address.label }}<br>
               {{ form.Address }}<br>
               {{ form.email.label }}<br>
               {{ form.email }}<br>
               {{ form.Age.label }}<br>
               {{ form.Age }}<br>
               {{ form.language.label }}<br>
               {{ form.language }}<br>
               {{ form.submit }}
            </div>
         </fieldset>
      </form>
   </body>
</html>
Run formexample.py in the Python shell and visit localhost:5000/contact. The Contact form will be displayed as shown below.
If there are any errors, the page will look like this −
If there are no errors, ‘success.html’ will be rendered.
Python has in-built support for SQLite. The sqlite3 module is shipped with the Python distribution; for details, refer to the standard library documentation of the sqlite3 module. In this section we shall see how a Flask application interacts with SQLite.
Create an SQLite database ‘database.db’ and create a students’ table in it.
import sqlite3

conn = sqlite3.connect('database.db')
print("Opened database successfully")
conn.execute('CREATE TABLE students (name TEXT, addr TEXT, city TEXT, pin TEXT)')
print("Table created successfully")
conn.close()
Our Flask application has three View functions.
First, the new_student() function is bound to the URL rule (‘/enternew’). It renders an HTML file containing a student information form.
@app.route('/enternew')
def new_student():
   return render_template('student.html')
The HTML script for ‘student.html’ is as follows −
<html>
   <body>
      <form action = "{{ url_for('addrec') }}" method = "POST">
         <h3>Student Information</h3>
         Name<br>
         <input type = "text" name = "nm" /><br>
         Address<br>
         <textarea name = "add"></textarea><br>
         City<br>
         <input type = "text" name = "city" /><br>
         PINCODE<br>
         <input type = "text" name = "pin" /><br>
         <input type = "submit" value = "submit" /><br>
      </form>
   </body>
</html>
As it can be seen, form data is posted to the ‘/addrec’ URL which binds the addrec() function.
This addrec() function retrieves the form’s data by the POST method and inserts it into the students table. A message corresponding to success or error of the insert operation is rendered to the ‘result.html’ template.

@app.route('/addrec', methods = ['POST', 'GET'])
def addrec():
   if request.method == 'POST':
      try:
         nm = request.form['nm']
         addr = request.form['add']
         city = request.form['city']
         pin = request.form['pin']
         with sql.connect("database.db") as con:
            cur = con.cursor()
            cur.execute("INSERT INTO students (name, addr, city, pin) VALUES (?,?,?,?)",
               (nm, addr, city, pin))
            con.commit()
            msg = "Record successfully added"
      except:
         msg = "error in insert operation"
      finally:
         return render_template("result.html", msg = msg)
The HTML script of result.html contains an escaping statement {{msg}} that displays the result of Insert operation.
<!doctype html>
<html>
   <body>
      result of addition : {{ msg }}
      <h2><a href = "/">go back to home page</a></h2>
   </body>
</html>
The application contains another list() function represented by the ‘/list’ URL. It populates ‘rows’ as a list of dictionary-like Row objects containing all records in the students table. This object is passed to the list.html template.
@app.route('/list')
def list():
   con = sql.connect("database.db")
   con.row_factory = sql.Row
   cur = con.cursor()
   cur.execute("select * from students")
   rows = cur.fetchall()
   return render_template("list.html", rows = rows)
This list.html is a template, which iterates over the row set and renders the data in an HTML table.
<!doctype html>
<html>
   <body>
      <table border = 1>
         <thead>
            <td>Name</td>
            <td>Address</td>
            <td>City</td>
            <td>Pincode</td>
         </thead>
         {% for row in rows %}
            <tr>
               <td>{{ row["name"] }}</td>
               <td>{{ row["addr"] }}</td>
               <td>{{ row["city"] }}</td>
               <td>{{ row["pin"] }}</td>
            </tr>
         {% endfor %}
      </table>
      <a href = "/">Go back to home page</a>
   </body>
</html>
Finally, the ‘/’ URL rule renders a ‘home.html’ which acts as the entry point of the application.
@app.route('/')
def home():
   return render_template('home.html')
Here is the complete code of Flask-SQLite application.
from flask import Flask, render_template, request
import sqlite3 as sql
app = Flask(__name__)

@app.route('/')
def home():
   return render_template('home.html')

@app.route('/enternew')
def new_student():
   return render_template('student.html')

@app.route('/addrec', methods = ['POST', 'GET'])
def addrec():
   if request.method == 'POST':
      try:
         nm = request.form['nm']
         addr = request.form['add']
         city = request.form['city']
         pin = request.form['pin']
         with sql.connect("database.db") as con:
            cur = con.cursor()
            cur.execute("INSERT INTO students (name, addr, city, pin) VALUES (?,?,?,?)",
               (nm, addr, city, pin))
            con.commit()
            msg = "Record successfully added"
      except:
         msg = "error in insert operation"
      finally:
         return render_template("result.html", msg = msg)

@app.route('/list')
def list():
   con = sql.connect("database.db")
   con.row_factory = sql.Row
   cur = con.cursor()
   cur.execute("select * from students")
   rows = cur.fetchall()
   return render_template("list.html", rows = rows)

if __name__ == '__main__':
   app.run(debug = True)
Run this script from the Python shell. As the development server starts running, visit localhost:5000 in the browser, which displays a simple menu like this −
Click ‘Add New Record’ link to open the Student Information Form.
Fill the form fields and submit it. The underlying function inserts the record in the students table.
Go back to the home page and click the ‘Show List’ link. A table showing the sample data will be displayed.
What is ORM (Object Relation Mapping)?
Most programming language platforms are object oriented. Data in RDBMS servers on the other hand is stored as tables. Object relation mapping is a technique of mapping object parameters to the underlying RDBMS table structure. An ORM API provides methods to perform CRUD operations without having to write raw SQL statements.
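The mapping idea can be sketched with the standard sqlite3 module in a few lines. This is a toy illustration of what an ORM does, not SQLAlchemy's implementation; the add and query_all helpers are made up as analogues of session.add() and Model.query.all().

```python
import sqlite3

class Student:
    def __init__(self, name, city):
        self.name, self.city = name, city

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE students (name TEXT, city TEXT)')

def add(obj):
    # session.add() analogue: object attributes become column values
    con.execute('INSERT INTO students VALUES (?, ?)', (obj.name, obj.city))

def query_all():
    # Model.query.all() analogue: table rows come back as objects
    return [Student(*row) for row in con.execute('SELECT * FROM students')]

add(Student('Ravi', 'Hyderabad'))
print([s.city for s in query_all()])   # -> ['Hyderabad']
```

An ORM generalizes this pattern: the class declaration describes the table once, and the library generates the SQL for every insert, delete, and query.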
In this section, we are going to study the ORM techniques of Flask-SQLAlchemy and build a small web application.
Step 1 − Install Flask-SQLAlchemy extension.
pip install flask-sqlalchemy
Step 2 − You need to import SQLAlchemy class from this module.
from flask_sqlalchemy import SQLAlchemy
Step 3 − Now create a Flask application object and set URI for the database to be used.
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///students.sqlite3'
Step 4 − Then create an object of SQLAlchemy class with application object as the parameter. This object contains helper functions for ORM operations. It also provides a parent Model class using which user defined models are declared. In the snippet below, a students model is created.
Step 5 − To create / use the database mentioned in the URI, run the create_all() method.
db.create_all()
The Session object of SQLAlchemy manages all persistence operations of ORM objects.
The following session methods perform CRUD operations −
db.session.add(model object) − inserts a record into mapped table
db.session.delete(model object) − deletes record from table
model.query.all() − retrieves all records from table (corresponding to SELECT query).
You can apply a filter to the retrieved record set by using the filter_by() method. For instance, in order to retrieve records with city = 'Hyderabad' in the students table, use the following statement −
Students.query.filter_by(city = 'Hyderabad').all()
With this much background, we shall now provide view functions for our application to add student data.
The entry point of the application is the show_all() function, bound to the ‘/’ URL. The record set of the students table is sent as a parameter to the HTML template. The server-side code in the template renders the records as an HTML table.
@app.route('/')
def show_all():
   return render_template('show_all.html', students = students.query.all())
The HTML script of the template (‘show_all.html’) is like this −
<!DOCTYPE html>
<html lang = "en">
   <head></head>
   <body>
      <h3>
         <a href = "{{ url_for('show_all') }}">Comments - Flask SQLAlchemy example</a>
      </h3>
      <hr/>
      {%- for message in get_flashed_messages() %}
         {{ message }}
      {%- endfor %}
      <h3>Students (<a href = "{{ url_for('new') }}">Add Student</a>)</h3>
      <table>
         <thead>
            <tr>
               <th>Name</th>
               <th>City</th>
               <th>Address</th>
               <th>Pin</th>
            </tr>
         </thead>
         <tbody>
            {% for student in students %}
               <tr>
                  <td>{{ student.name }}</td>
                  <td>{{ student.city }}</td>
                  <td>{{ student.addr }}</td>
                  <td>{{ student.pin }}</td>
               </tr>
            {% endfor %}
         </tbody>
      </table>
   </body>
</html>
The above page contains a hyperlink to the ‘/new’ URL, mapped to the new() function. When clicked, it opens a Student Information form. The data is posted to the same URL with the POST method.
<!DOCTYPE html>
<html>
   <body>
      <h3>Students - Flask SQLAlchemy example</h3>
      <hr/>
      {%- for category, message in get_flashed_messages(with_categories = true) %}
         <div class = "alert alert-danger">{{ message }}</div>
      {%- endfor %}
      {# the original snippet was truncated here; the form below is a
         reconstruction based on the field names used elsewhere #}
      <form action = "{{ url_for('new') }}" method = "post">
         <label>Name</label><br><input type = "text" name = "name"><br>
         <label>City</label><br><input type = "text" name = "city"><br>
         <label>Address</label><br><input type = "text" name = "addr"><br>
         <label>Pin</label><br><input type = "text" name = "pin"><br>
         <input type = "submit" value = "Submit">
      </form>
   </body>
</html>
When the HTTP method is detected as POST, the form data is added to the students table and the application returns to the home page showing the added data.
Given below is the complete code of application (app.py).
from flask import Flask, request, flash, url_for, redirect, render_template
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///students.sqlite3'
app.config['SECRET_KEY'] = "random string"

db = SQLAlchemy(app)

# The model and the new() view were truncated in this excerpt; they are
# reconstructed here from the surrounding text (field names per the
# templates above).
class students(db.Model):
   id = db.Column('student_id', db.Integer, primary_key = True)
   name = db.Column(db.String(100))
   city = db.Column(db.String(50))
   addr = db.Column(db.String(200))
   pin = db.Column(db.String(10))

   def __init__(self, name, city, addr, pin):
      self.name = name
      self.city = city
      self.addr = addr
      self.pin = pin

@app.route('/')
def show_all():
   return render_template('show_all.html', students = students.query.all())

@app.route('/new', methods = ['GET', 'POST'])
def new():
   if request.method == 'POST':
      if not request.form['name'] or not request.form['city'] or not request.form['addr']:
         flash('Please enter all the fields', 'error')
      else:
         student = students(request.form['name'], request.form['city'],
            request.form['addr'], request.form['pin'])
         db.session.add(student)
         db.session.commit()
         flash('Record was successfully added')
         return redirect(url_for('show_all'))
   return render_template('new.html')

if __name__ == '__main__':
   db.create_all()
   app.run(debug = True)
Run the script, then open the application's root URL in the browser.
Click the ‘Add Student’ link to open Student information form.
Fill the form and submit. The home page reappears with the submitted data.
We can see the output as shown below.
http://www.tutorialspoint.com/flask/flask_quick_guide.htm
Fredrik Lundh <effbot at telia.com> wrote:
> Randall Hopper <aa8vb at yahoo.com> wrote:
> > Reading BASIC and Pascal code for several years taught me (among other
> > things) that case insensitivity isn't the better way.
> >
> > whY do WE Have RULES fOR capiTALIZation IN lANGUAGES? Because it makes
> > them easier to read and comprehend.
>
> so why not enforce these rules, just like we're enforcing the
> indentation rules:
>
> >>> class foo:
> SyntaxError: Class name should be Foo
>
> (if not else, that should make martijn happy, right? ;-)

Actually, this is exactly what Haskell does; type names *must* begin with
a capital letter and variables *can't* begin with a capital letter.

(For the curious: partitioning type and regular variables into different
namespaces makes life easier for the language designers, who then don't
have to worry about what it means when a type name is shadowed by a value
binding, since Haskell types only have meaning at compile time.)

Neel
https://mail.python.org/pipermail/python-list/2000-May/023957.html
View Tests (6:26) with Kenneth Love
We get to our model data through our views so they're equally important to test. Bonus, we can test our URLs at the same time.
django.core.urlresolvers.reverse() takes a URL name and reverses it to the correct URL string.
self.client acts like a web browser and lets you make requests to URLs, both inside and outside of your Django project.
assertEqual(a, b) checks that a and b are equal to each other.
assertIn(a, b) checks that a is contained in b.
- 0:00
I hope that you took the time to write a test for the step model.
- 0:03
If you did, you probably found that you had to create a course too.
- 0:06
This is a good time to use the setup method in your test cases.
- 0:09
If you're not sure what I mean, check out the attached workspace, or
- 0:12
go watch the Python testing course.
- 0:14
Or just hang on, cuz we're gonna need it for
- 0:16
the test we're going to write in this video.
- 0:18
We've tested our models, so now let's write a test for our CourseListView.
- 0:22
One nice thing about writing the view test, like I'm about to show you.
- 0:25
Is that you don't have to write the test for the URL, and a test for the view.
- 0:29
You can test both at once.
- 0:30
Some people will say that this is messy.
- 0:32
And your tests should only test a single thing.
- 0:34
This is true, but URLs and views are so tightly bound in Django,
- 0:38
that it's more trouble than I think it's worth to separate them at test time.
- 0:42
So testing views, is not the same as testing models.
- 0:45
So, we need to create another test class, of course.
- 0:50
So, class course views tests.
- 0:54
This will also extend the tests case and we're gonna use a setUp method,
- 1:00
so that we can create a couple of things before we start actually writing tests.
- 1:06
So we'll say self.course equals Course.objects.create.
- 1:13
And the title will be Python Testing.
- 1:17
And the description will be Learn to write tests in Python.
- 1:23
Then we'll do self.course2, Course.objects.create.
- 1:30
Title will be New Course.
- 1:34
Description will be Misspelled a new course, exciting.
- 1:42
And then we'll say step is step.objects.create.
- 1:50
Title is introduction to doc tests,
- 1:55
description is learn to write tests in your doc strings and
- 2:03
the course will be self.course.
- 2:08
All right, so the setup method, if you haven't taken Python testing is run at
- 2:13
the beginning of each test and it just creates some things for us.
- 2:24
So, let's write our test.
- 2:27
So we're gonna test the course list view,
- 2:30
and make sure that it shows both of these courses.
- 2:33
Now to do that, we're going to have to import the reverse function.
- 2:38
So, let's come up here, and it's from django.core.urlresolvers, import reverse.
- 2:45
And reverse let's us reverse a URL name.
- 2:47
So we get the name of the URL, we want to go hey, I've got this name,
- 2:52
what's the actual route that this goes to and the view ultimately that goes to.
- 2:58
So, it's able to do that for us.
- 3:01
So, test_course_list_view and it's going to take self.
- 3:06
And if you're wondering that import for reverse, I do have it memorized but
- 3:10
it has taken me years to get it memorized.
- 3:13
Okay so, when we're testing views,
- 3:16
we get a pretty cool thing, which is called self.client.
- 3:19
Self.client is kind of like a web browser, it lets you make web requests to a URL and
- 3:25
then it gives us back the status code and the HTML that come from that URL.
- 3:31
If it goes to a view in our Django app, we get back some other useful info, too.
- 3:35
So let's go ahead and do that.
- 3:36
We're gonna say resp.
- 3:37
R-E-S-P, there's no E-C-T on this one.
- 3:42
Just for response, it's a pretty common abbreviation that I use.
- 3:45
Client.get.
- 3:47
And then we're going to reverse courses:list.
- 3:53
So we're gonna reverse our namespaced list URL.
- 3:57
To start off with, let's just make sure this comes back as a 200.
- 4:01
So we'll do self.assertEqual.
- 4:06
The resp.status_code is equal to 200.
- 4:13
And I wanna make sure that both of our courses are in the view.
- 4:17
And this is one of the handy things that we get to do with a Django view.
- 4:21
Our resp object has an attribute named context.
- 4:26
And this is the dictionary.
- 4:28
Of all the values that we passed into our template when it gets rendered.
- 4:32
Remember doing that in our views.py.
- 4:34
Let's look at our views.py real quick just to see it.
- 4:38
So here's our course list this is what we're gonna hit.
- 4:40
And right here is this dictionary and we have a thing called courses in it.
- 4:44
That's what we're gonna test.
- 4:46
So we can check to see if something is in there So,
- 4:51
we're gonna do self.assertIn, and we're gonna assert
- 4:56
that self.course is in resp.context['courses'].
- 5:05
And then we're going to assertIn self.course2, resp.context.
- 5:13
Courses.
- 5:15
So both of those things should be in there.
- 5:18
We're checking the views context to make sure that it renders the template that
- 5:22
both of those things are there.
- 5:24
And we're also making sure that we get back a successful status code.
- 5:27
So let's run that test.
- 5:30
So python manage.py test.
- 5:34
Three tests pass.
- 5:35
So there's one here.
- 5:38
There's the one for testing the step creation that I wrote in between videos.
- 5:42
I hope you wrote it too.
- 5:44
And the course creation one that we wrote last time.
- 5:46
So that's great.
- 5:47
All of our tests pass.
- 5:49
Remember way back at the beginning of this episode, I said we'd get URL testing for
- 5:53
free with this method?
- 5:54
If you didn't see where that comes into play,
- 5:56
it's when we use the reverse function to get the URL.
- 5:59
Since we go through the URL to get to the view, if the URL ever stops working,
- 6:03
our test will fail.
- 6:04
All right.
- 6:05
One more testing topic to cover.
- 6:07
Before we move on to that though, you should definitely go back and
- 6:09
write tests for the other two views in our app.
- 6:12
Make sure that only a single course shows up on a course detail page.
- 6:16
And make sure that only a single step shows up in the step detail view.
- 6:19
These will require passing keyword arguments to reverse for the URL though.
- 6:23
So they're a little bit trickier.
- 6:24
I'll link to the docs in the teacher's notes.
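Assembled from the dictation above, the test class looks roughly like this. It is a sketch against the course app built earlier in the series — the model and URL names are taken from the video, and note that in Django 2.0 and later the reverse import moved to django.urls:

```python
from django.core.urlresolvers import reverse  # Django >= 2.0: from django.urls import reverse
from django.test import TestCase

from .models import Course, Step


class CourseViewsTests(TestCase):
    def setUp(self):
        # setUp runs before every test, creating fresh records to work with
        self.course = Course.objects.create(
            title="Python Testing",
            description="Learn to write tests in Python")
        self.course2 = Course.objects.create(
            title="New Course",
            description="A new course, exciting")
        self.step = Step.objects.create(
            title="Introduction to Doctests",
            description="Learn to write tests in your docstrings",
            course=self.course)

    def test_course_list_view(self):
        # self.client acts like a browser; reverse() turns the namespaced
        # URL name into a route, so the URL conf gets exercised too
        resp = self.client.get(reverse('courses:list'))
        self.assertEqual(resp.status_code, 200)
        self.assertIn(self.course, resp.context['courses'])
        self.assertIn(self.course2, resp.context['courses'])
```

Because the request goes through the URL conf on its way to the view, a broken URL pattern makes this test fail even though it is nominally a view test.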
https://teamtreehouse.com/library/django-basics/test-time/view-tests
Hi,
I am developing a phone application with the new Flex 4.5 SDK for mobile (Android) applications. I have a mapping component in the app, and I have been easily able to get the 'Geolocation' classes from the Flash.Sensors namespace. Also included is the accelerometer. This is all great, that we can take advantage of some of the phone features.
What I would like to do is monitor the phone's signal strength (i.e., 1 bar or 4 bars, etc.), and when signal strength is low (no cell coverage), alert the user and let them know the live mapping may be slow, or just turn it off. I have not been able to find in the SDK how to get information about the cell signal strength or the battery level (another handy thing to know), for that matter.
If anyone has any ideas, that would be great. I know in the Android SDK (for Java) they do have access to most of the hardware on the phone, but just not sure if that has been exposed in the Flex SDK yet.
Thanks,
Pete
https://forums.adobe.com/thread/859301
Python parentheses primer
If you have children, then you probably remember them learning to walk, and then to read. If you’re like me, you were probably amazed by how long it took to do things that we don’t even think about. Things that we take for granted in our day-to-day lives, and which seem so obvious to us, take a long time to master.
You can often see and experience this when you compare how you learn a language as a native speaker, from how you learn as a second language. I grew up speaking English, and never learned all sorts of rules that my non-native-speaking friends learned in school. Similarly, I learned all sorts of rules for Hebrew grammar that my children never learned in school.
It’s thus super easy to take things for granted when you’re an expert. Indeed, that’s almost the definition of an expert — someone who understands a subject so well, that for them things are obvious.
To many developers, and especially Python developers, it’s obvious not only that there are different types of parentheses in Python, but that each type has multiple uses, and do completely different things. But to newcomers, it’s far from obvious when to use round parentheses, square brackets, and/or curly braces.
I’ve thus tried to summarize each of these types of parentheses, when we use them, and where you might get a surprise as a result. If you’re new to Python, then I hope that this will help to give you a clearer picture of what is used when.
I should also note that the large number of parentheses that we use in Python means that using an editor that colorizes both matching and mismatched parentheses can really help. On no small number of occasions, I’ve been able to find bugs quickly thanks to the paren-coloring system in Emacs.
Regular parentheses — ()
Callables (functions and classes)
Perhaps the most obvious use for parentheses in Python is for calling functions and creating new objects. For example:
x = len('abcd')
i = int('12345')
It’s worth considering what happens if you don’t use parentheses. For example, I see the following code all the time in my courses:
d = {'a':1, 'b':2, 'c':3}

for key, value in d.items:
    print(f"{key}: {value}")
When you try to run this code, you get an error message that is true, but whose meaning isn’t completely obvious:
TypeError: 'builtin_function_or_method' object is not iterable
Huh? What the heck does this mean?
It’s worth remembering how “for” loops work in Python:
- “for” turns to the object at the end of the line, and asks whether it’s iterable
- if so, then “for” asks the object for its next value
- whenever the object says, “no more!” the loop stops
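The steps above can be driven by hand, which is roughly what “for” does under the hood:

```python
d = {'a': 1, 'b': 2, 'c': 3}

# "for" first asks the object for an iterator...
it = iter(d.items())

# ...then repeatedly asks that iterator for its next value
print(next(it))   # ('a', 1)
print(next(it))   # ('b', 2)
print(next(it))   # ('c', 3)

# ...and stops when the iterator raises StopIteration
try:
    next(it)
except StopIteration:
    print("no more!")
```
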
In this case, “for” turns to the method “d.items” and asks if it’s iterable. Note that we’re not asking whether the output from “d.items” is iterable, but rather whether the method itself is iterable.
That’s because there’s a world of difference between “d.items” and “d.items()”. The first returns the method. The second returns an iterable sequence of name-value pairs from the dictionary “d”.
The solution to this non-working code is thus to add parentheses:
d = {'a':1, 'b':2, 'c':3}

for key, value in d.items():
    print(f"{key}: {value}")
Once we do that, we get the desired result.
I should note that we also need to be careful in the other direction: Sometimes, we want to pass a function as an argument, and not execute it. One example is when we’re in the Jupyter notebook (or other interactive Python environment) and ask for help on a function or method:
help(len)
help(str.upper)
In both of the above cases, we don’t want to get help on the output of those functions; rather, we want to get help on the functions themselves.
Prioritizing operations
In elementary school, you probably learned the basic order of arithmetic operations — that first we multiply and divide, and only after do we add and subtract.
Python clearly went to elementary school as well, because it follows this order. For example:
In [1]: 2 + 3 * 4
Out[1]: 14
We can change the priority by using round parentheses:
In [2]: (2 + 3) * 4
Out[2]: 20
Experienced developers often forget that we can use parentheses in this way, as well — but this is, in many ways, the most obvious and natural way for them to be used by new developers.
Creating tuples
Of course, we can also use () to create tuples. For example:
In [8]: t = (10,20,30)

In [9]: type(t)
Out[9]: tuple
What many beginning Python developers don’t know is that you actually don’t need the parentheses to create the tuple:
In [6]: t = 10,20,30

In [7]: type(t)
Out[7]: tuple
Which means that when you return multiple values from a function, you’re actually returning a tuple:
In [3]: def foo():
   ...:     return 10, 20, 30
   ...:

In [4]: x = foo()

In [5]: x
Out[5]: (10, 20, 30)
What surprises many newcomers to Python is the following:
In [10]: t = (10)

In [11]: type(t)
Out[11]: int
“Wait,” they say, “I used parentheses. Shouldn’t t be a tuple?”
No, t is an integer. When Python’s parser sees something like “t = (10)”, it can’t know that we’re talking about a tuple. Otherwise, it would also have to parse “t = (8+2)” as a tuple, which we clearly don’t want to happen, assuming that we want to use parentheses for prioritizing operations (see above). And so, if you want to define a one-element tuple, you must use a comma:
In [12]: t = (10,)

In [13]: type(t)
Out[13]: tuple
Generator expressions
Finally, we can use round parentheses to create “generators,” using what are known as “generator expressions.” These are a somewhat advanced topic, requiring knowledge of both comprehensions and iterators. But they’re a really useful tool, allowing us to describe a sequence of data without actually creating each element of that sequence until it’s needed.
For example, if I say:
In [17]: g = (one_number * one_number
    ...:      for one_number in range(10))
The above code defines “g” to be a generator, the result of executing our generator expression. “g” is then an iterable, an object that can be placed inside of a “for” loop or a similar context. The fact that it’s a generator means that we can have a potentially infinite sequence of data without actually needing to install an infinite amount of RAM on our computers; so long as we can retrieve items from our iterable one at a time, we’re set.
The above generator “g” doesn’t actually return 10 numbers. Rather, it returns one number at a time. We can retrieve them all at once by wrapping it in a call to “list”:
In [18]: list(g) Out[18]: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
But the whole point of a generator is that you don’t want to do that. Rather, you will get each element, one at a time, and thus reduce memory use.
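The laziness is easy to see by pulling values out one at a time with next():

```python
# a generator expression: nothing is computed yet
g = (n * n for n in range(10))

# values are produced only on demand, one per call to next()
print(next(g))   # 0
print(next(g))   # 1
print(next(g))   # 4

# list() then consumes whatever is left
print(list(g))   # [9, 16, 25, 36, 49, 64, 81]
```
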
Something funny happens with round parentheses when they’re used on a generator expression in a function call. Let’s say I want to get a string containing the elements of a list of integers:
In [19]: mylist = [10, 20, 30]

In [20]: '*'.join(mylist)
This fails, because the elements of “mylist” are integers. We can use a generator expression to turn each integer into a string:
In [21]: '*'.join((str(x)
    ...:           for x in mylist))
Out[21]: '10*20*30'
Notice the double parentheses here; the outer ones are for the call to str.join, and the inner ones are for the generator expression. Well, it turns out that we can remove the inner set:
In [22]: '*'.join(str(x)
    ...:          for x in mylist)
Out[22]: '10*20*30'
So the next time you see a call to a function, and a comprehension-looking thing inside of the parentheses, you’ll know that it’s a generator expression, rather than an error.
Cheating Python’s indentation rules
Python is famous for its use of indentation to mark off blocks of code, rather than curly braces, begin/end, or the like. In my experience, using indentation has numerous advantages, but tends to shock people who are new to the language, and who are somewhat offended that the language would dictate how and when to indent code.
There are, however, a few ways to cheat (at least a little) when it comes to these indentation rules. For example, let’s say I have a dictionary representing a person, and I want to know if the letter ‘e’ is in any of the values. I can do something like this:
In [39]: person = {'first':'Reuven', 'last':'Lerner', 'email':'reuven@lerner.co.il'}

In [40]: if 'e' in person['first'] or 'e' in person['last'] or 'e' in person['email']:
    ...:     print("Found it!")
    ...:
Found it!
That “if” line works, but it’s far too long to be reasonably readable. What I’d love to do is this:
In [40]: if 'e' in person['first'] or
    ...:    'e' in person['last'] or
    ...:    'e' in person['email']:
    ...:     print("Found it!")
The problem is that the above code won’t work; Python will get to the end of the first “or” and complain that it reached the end of the line (EOL) without a complete statement.
The solution is to use parentheses. That’s because once you’ve opened parentheses, Python is much more forgiving and flexible regarding indentation. For example, I can write:
In [41]: if ('e' in person['first'] or
    ...:     'e' in person['last'] or
    ...:     'e' in person['email']):
    ...:     print("Found it!")
    ...:
Found it!
Our code is now (in my mind) far more readable, thanks to the otherwise useless parentheses that I’ve added.
By the way, this is true for all parentheses. So if I want to define my dict on more than one line, I can say:
In [42]: person = {'first':'Reuven',
    ...:           'last':'Lerner',
    ...:           'email':'reuven@lerner.co.il'}
Python sees the opening { and is forgiving until it finds the matching }. In the same way, we can open a list comprehension on one line and close it on another. For years, I’ve written my list comprehensions on more than one line, in the belief that they’re easier to read, write, and understand. For example:
[one_number * one_number
 for one_number in range(10)]
Square brackets — []
Creating lists
We can create lists with square brackets, as follows:
mylist = [ ]           # empty list
mylist = [10, 20, 30]  # list with three items
Note that according to PEP 8, you should write an empty list as [], without any space between the brackets. I’ve found that with certain fonts, the two brackets end up looking like a square, and are hard for people in my courses to read and understand. So I always put a space between the brackets when creating an empty list.
(And yes, I’m that rebellious in real life, not just when programming.)
We can use square brackets not just to create lists with explicitly named elements, but also to create lists via list comprehensions:
In [16]: [one_number * one_number
    ...:  for one_number in range(10)]
    ...:
Out[16]: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
The square brackets tell Python that this is a list comprehension, producing a list. If you use curly braces, you’ll get either a set or a dict back, and if you use regular parentheses, you’ll get a generator expression (see above).
Requesting individual items
Many people are surprised to discover that in Python, we always use square brackets to retrieve from a sequence or dictionary:
In [23]: mylist = [10, 20, 30]

In [24]: t = (10, 20, 30)

In [25]: d = {'a':1, 'b':2, 'c':3}

In [26]: mylist[0]
Out[26]: 10

In [27]: t[1]
Out[27]: 20

In [28]: d['c']
Out[28]: 3
Why don’t we use regular parentheses with tuples and curly braces with dictionaries, when we want to retrieve an element? The simple answer is that square brackets, when used in this way, invoke a method — the __getitem__ method.
That’s right, there’s no difference between these two lines of code:
In [29]: d['c']
Out[29]: 3

In [30]: d.__getitem__('c')
Out[30]: 3
This means that if you define a new class, and you want instances of this class to be able to use square brackets, you just need to define __getitem__. For example:
In [31]: class Foo(object):
    ...:     def __init__(self, x):
    ...:         self.x = x
    ...:     def __getitem__(self, index):
    ...:         return self.x[index]
    ...:

In [32]: f = Foo('abcd')

In [33]: f[2]
Out[33]: 'c'
See? When we say f[2], that’s translated into f.__getitem__(2), which then returns “self.x[index]”.
The fact that square brackets are so generalized in this way means that Python can take advantage of them, even on user-created objects.
Requesting slices
You might also be familiar with slices. Slices are similar to individual indexes, except that they describe a range of indexes. For example:
In [46]: import string

In [47]: string.ascii_lowercase[10:20]
Out[47]: 'klmnopqrst'

In [48]: string.ascii_lowercase[10:20:3]
Out[48]: 'knqt'
As you can see, slices are either of the form [start:end+1] or [start:end+1:stepsize]. (If you don’t specify the stepsize, then it defaults to 1.)
Here’s a little tidbit that took me a long time to discover: You can get an IndexError exception if you ask for a single index beyond the boundaries of a sequence. But slices don’t have such problems; they’ll just stop at the start or end of your string:
In [50]: string.ascii_lowercase[500]
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-50-fad7a1a4ec3e> in <module>()
----> 1 string.ascii_lowercase[500]

IndexError: string index out of range

In [51]: string.ascii_lowercase[:500]
Out[51]: 'abcdefghijklmnopqrstuvwxyz'
How do the square brackets distinguish between an individual index and a slice? The answer: They don’t. In both cases, the __getitem__ method is being invoked. It’s up to __getitem__ to check to see what kind of value it got for the “index” parameter.
But wait: If we pass an integer or string (or even a tuple) to square brackets, we know what type will be passed along. What type is passed to our method if we use a slice?
In [55]: class Foo(object):
    ...:     def __getitem__(self, index):
    ...:         print(f"index = {index}, type(index) = {type(index)}")
    ...:

In [56]: f = Foo()

In [57]: f[100]
index = 100, type(index) = <class 'int'>

In [58]: f[5:100]
index = slice(5, 100, None), type(index) = <class 'slice'>

In [59]: f[5:100:3]
index = slice(5, 100, 3), type(index) = <class 'slice'>
Notice that in the first case, as expected, we get an integer. But in the second and third cases, we get a slice object. We can create these manually, if we want; “slice” is in the “builtins” namespace, along with str, int, dict, and other favorites. And as you can see from its printed representation, we can call “slice” much as we do “range”, with start, stop, and step-size arguments. I haven’t often needed or wanted to create slice objects, but you certainly could:
In [60]: s = slice(5,20,4)

In [61]: string.ascii_lowercase[s]
Out[61]: 'fjnr'

In [62]: string.ascii_uppercase[s]
Out[62]: 'FJNR'

In [63]: string.ascii_letters[s]
Out[63]: 'fjnr'

In [64]: string.punctuation[s]
Out[64]: '&*.<'
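A __getitem__ that wants to treat single indexes and slices differently just branches on the type it receives. Here is a small sketch (the class and its behavior are invented for illustration):

```python
class Squares:
    """Pretends to be a sequence of squares: Squares(10)[3] == 9."""
    def __init__(self, n):
        self.n = n

    def __getitem__(self, index):
        # a slice object arrives whenever s[a:b:c] syntax is used
        if isinstance(index, slice):
            return [i * i for i in range(self.n)[index]]
        # otherwise it's a plain index; delegating to range() gives us
        # negative-index support and IndexError out of bounds for free
        return range(self.n)[index] ** 2

s = Squares(10)
print(s[3])      # 9
print(s[2:5])    # [4, 9, 16]
print(s[-1])     # 81
```
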
Curly braces — {}
Creating dicts
The classic way to create dictionaries (dicts) in Python is with curly braces. You can create an empty dict with an empty pair of curly braces:
In [65]: d = {}
In [66]: len(d)
Out[66]: 0
Or you can pre-populate a dict with some key-value pairs:
In [67]: d = {'a':1, 'b':2, 'c':3}
In [68]: len(d)
Out[68]: 3
You can, of course, create dicts in a few other ways. In particular, you can use the “dict” class to create a dictionary based on a sequence of two-element sequences:
In [69]: dict(['ab', 'cd', 'ef'])
Out[69]: {'a': 'b', 'c': 'd', 'e': 'f'}

In [70]: d = dict([('a', 1), ('b', 2), ('c', 3)])

In [71]: d
Out[71]: {'a': 1, 'b': 2, 'c': 3}

In [72]: d = dict(['ab', 'cd', 'ef'])

In [73]: d
Out[73]: {'a': 'b', 'c': 'd', 'e': 'f'}
But unless you need to create a dict programmatically, I’d say that {} is the best and clearest way to go. I remember reading someone’s blog post a few years ago (which I cannot find right now) in which it was found that {} is faster than calling “dict” — which makes sense, since {} is part of Python’s syntax, and doesn’t require a function call.
Of course, {} can also be used to create a dictionary via a dict comprehension:
In [74]: { one_number : one_number*one_number
    ...:   for one_number in range(10) }
    ...:
Out[74]: {0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81}
In the above code, we create a dict whose keys are the numbers 0-9 and whose values are their squares.
Remember that a dict comprehension creates one dictionary, rather than a list containing many dictionaries.
Create sets
I’ve become quite the fan of Python’s sets. You can think of sets in a technical sense, namely that they are mutable, and contain unique, hashable values. (Unlike dicts, which preserve insertion order in modern Python, sets make no ordering guarantee.)
But really, it’s just easiest to think of sets as dictionaries without any values. (Yes, this means that sets are nothing more than immoral dictionaries.) Whatever applies to dict keys also applies to the elements of a set.
We can create a set with curly braces:
In [75]: s = {10,20,30}
In [76]: type(s)
Out[76]: set
As you can see, the fact that there is no colon (:) between the name-value pairs allows Python to parse this code correctly, defining ‘s” to be a set, rather than a dict.
Nearly every time I teach about sets, someone tries to create an empty set and add to it, using set.add:
In [77]: s = {}

In [78]: s.add(10)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-78-721f80ddfefc> in <module>()
----> 1 s.add(10)

AttributeError: 'dict' object has no attribute 'add'
The error indicates that “s” is a dict, and that dicts lack the “add” method. Which is fine, but didn’t I define “s” to be a set?
Not really: Dicts came first, and thus {} is an empty dict, not an empty set. If you want to create an empty set, you’ll need to use the “set” class:
s = set()
s.add(10)
This works just fine, but is a bit confusing to people starting off in Python.
I often use sets to remove duplicate entries from a list. I can do this with the “set” class (callable), but I can also use the “* argument” syntax when calling a function:
In [79]: mylist = [10, 20, 30, 10, 20, 30, 40]

In [80]: s = {*mylist}

In [81]: s
Out[81]: {10, 20, 30, 40}
Note that there’s a big difference between {*mylist} (which creates a set from the elements of mylist) and {mylist}, which will try to create a set with one element, the list “mylist”, and will fail because lists are unhashable.
Just as we have list comprehensions and dict comprehensions, we also have set comprehensions, which means that we can also say:
In [84]: mylist = [10, 20, 30, 10, 20, 30, 40]

In [85]: {one_number
    ...:  for one_number in mylist}
    ...:
Out[85]: {10, 20, 30, 40}
str.format
Another place where we can use curly braces is in string formatting. Whereas Python developers used to use the printf-style “%” operator to create new strings, the modern way to do so (until f-strings, see below) was the str.format method. It worked like this:
In [86]: name = 'Reuven'

In [87]: "Hello, {0}".format(name)
Out[87]: 'Hello, Reuven'
Notice that str.format returns a new string; it doesn’t technically have anything to do with “print”, although they are often used together. You can assign the resulting string to a new variable, write it to a file, or (of course) print it to the screen.
str.format looks inside of the string, searching for curly braces with a number inside of them. It then grabs the argument with that index, and interpolates it into the resulting string. For example:
In [88]: 'First is {0}, then is {1}, finally is {2}'.format(10, 20, 30)
Out[88]: 'First is 10, then is 20, finally is 30'
You can, of course, mix things up:
In [89]: 'First is {0}, finally is {2}, then is {1}'.format(10, 20, 30)
Out[89]: 'First is 10, finally is 30, then is 20'
You can also repeat values:
In [90]: 'First is {0}. Really, first is {0}. Then is {1}'.format(10, 20, 30)
Out[90]: 'First is 10. Really, first is 10. Then is 20'
If you’ll be using each argument once and in order, you can even remove the numbers — although I’ve been told that this makes the code hard to read. And besides, it means you cannot repeat values, which is sometimes annoying:
In [91]: 'First is {}, then is {}, finally is {}'.format(10, 20, 30)
Out[91]: 'First is 10, then is 20, finally is 30'
You cannot switch from automatic to manual numbering in curly braces (or back):
In [92]: 'First is {0}, then is {}, finally is {}'.format(10, 20, 30)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-92-f00f3adf93eb> in <module>()
----> 1 'First is {0}, then is {}, finally is {}'.format(10, 20, 30)

ValueError: cannot switch from manual field specification to automatic field numbering
str.format also lets you use names instead of numbers, by passing keyword arguments (i.e., name-value pairs in the format of key=value):
In [93]: 'First is {x}, then is {y}, finally is {z}'.format(x=10, y=20, z=30)
Out[93]: 'First is 10, then is 20, finally is 30'
You can mix positional and keyword arguments, but I beg that you not do that:
In [94]: 'First is {0}, then is {y}, finally is {z}'.format(10, y=20, z=30)
Out[94]: 'First is 10, then is 20, finally is 30'
f-strings
As of Python 3.6, we have an even more modern way to perform string interpolation, using “f-strings”. Putting “f” before the opening quotes allows us to use curly braces to interpolate just about any Python expression we want — from variable names to operations to function/method calls — inside of a string:
In [99]: name = 'Reuven'

In [100]: f"Hello, {name}"
Out[100]: 'Hello, Reuven'

In [101]: f"Hello, {name.upper()}"
Out[101]: 'Hello, REUVEN'

In [102]: f"Hello, {name.split('e')}"
Out[102]: "Hello, ['R', 'uv', 'n']"
I love f-strings, and have started to use them in all of my code. bash, Perl, Ruby, and PHP have had this capability for years; I’m delighted to (finally) have it in Python, too!
from __future__ import braces
Do you sometimes wish that you could use curly braces instead of indentation in Python? Yeah, you’re not alone. Fortunately, the __future__ module is Python’s way of letting you try new features before they’re completely baked into your current Python version. For example, if you’re still using Python 2.7, you can say
from __future__ import division
and division will always return a float, rather than performing truncating integer division, even if the two operands are integers.
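In Python 3 this is the default behavior, and you can compare the two operators directly:

```python
print(7 / 2)    # 3.5  (true division always returns a float)
print(7 // 2)   # 3    (floor division keeps the old truncating behavior)
print(6 / 3)    # 2.0  (a float even when the result is a whole number)
```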
So go ahead, try:
from __future__ import braces
(Yes, this is part of Python. And no, don’t expect to be able to use curly braces instead of indentation any time soon.)
I wonder what this python structure could be? Is it a tuple, a list, a dictionary? I’m confused.
VIDEOS = {'Channels': [{'name': 'Channel 1', 'thumb': 'special://home/addons/plugin/image.png'},
                       {'name': 'Channel 2', 'thumb': 'special://home/addons/plugin/image.png'}]}
Let’s look at the parentheses:
(1) On the outside, you see {}. This means that it’ll either be a dictionary or a set. If there is a colon (:) between keys and values, then it’s a dictionary. Without colons, it’s a set.
(2) This dictionary has one key-value pair. The key is “Channels”. And the value is a list.
(3) The list contains two elements. Each of those elements is a dictionary.
(4) Each dictionary has two key-value pairs. The keys are identical in both dictionaries (“name” and “thumb”). The values are strings.
Does this make sense?
A wonderful refreshing tutorial for some features that you forget.
What about square braces for Type Hints?
Excellent point! I’ll add that soon.
https://lerner.co.il/2018/06/08/python-parentheses-primer/
Introduction :
The yellow warning box in React Native is helpful if you want to know about any warnings. In particular, if you are using a third-party library and that library relies on a deprecated API, the warning will appear on your phone screen. But it can be annoying if you are already aware of the warning and want to suppress it.
React Native provides an easy way to disable this warning message. In this post, I will show you how to do that. These warnings are called YellowBox warnings.
console.disableYellowBox :
If we set this variable to true, the yellow warning box is removed.
console.disableYellowBox = true;
Put it anywhere you want.
Remove specific warnings :
It is not always a good idea to remove all current and future warning messages. Instead, we can remove specific warnings by providing an array of warning prefixes, like below:
import { YellowBox } from "react-native";

YellowBox.ignoreWarnings(["Warning: ..."]);
It will remove all the warning messages prefixed with any of the provided values in that array.
https://www.codevscolor.com/react-native-remove-yellow-warning-box
The QWSWindow class provides server-specific functionality in Qt/Embedded. More...
#include <qwindowsystem_qws.h>
List of all member functions.
Constructs a new top-level window, associated with the client client and giving it the id i.
Returns the region that the window is allowed to draw onto, including any window decorations but excluding regions covered by other windows.
See also requested().
Returns the window's caption.
Returns the QWSClient that owns this window.
Returns TRUE if the window is completely obscured by another window or by the bounds of the screen; otherwise returns FALSE.
Returns TRUE if the window is partially obscured by another window or by the bounds of the screen; otherwise returns FALSE.
Returns TRUE if the window is visible; otherwise returns FALSE.
Returns the window's name.
Returns the region that the window has requested to draw onto, including any window decorations.
See also allocation().
Returns the window's Id.
This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved.
https://doc.qt.io/archives/3.3/qwswindow.html
One of the problems most .NET developers would have come across is representing an enum in a human-readable format. The problem is that, although you can get the string value by using .ToString(), it's hardly ever human friendly. One normally has to follow strict naming policies (for example Not_Connected), and run the enum through a name mangler that gets the value in a friendlier format. But even that is not always sufficient, since once you've implemented ALL your code using the enums as agreed upon by everyone, business comes along and decides to change a couple, meaning changes in your code. A second strategy involves loosely coupled constants or lookups.
Wouldn't it be nice if you could get the value you want by asking the enum for it? Even better, wouldn't it be nice if all the different representations that you could want for an enum value is all controlled right where you define the enum, so that adding or removing values do not result in a hunt for constants? In this article, I provide a strategy for easily hooking up representation for enum values, right in the enum, and then a way to easily get the representation you want back out again.
I have to give credit to a colleague of mine, Steven Visagie, who wrote a generics version converting enums to string (or other) representations in C# 2.0, on our current project. He got me thinking on how one could solve the problem without the need to define a wrapper class around your enum. My solution differs completely from his, but the objective is the same. Also my version will only work from C# 3.0 and up, since it leverages extension methods, which isn't available in C# 2.0.
There are two key elements to this strategy, an attribute and an extension method. The attribute provides a way to decorate the enum with the string (or other) values you want as the representations of the enum values. We begin by defining an attribute class called EnumStringsAttribute.
[global::System.AttributeUsage(AttributeTargets.Enum, Inherited = false, AllowMultiple = false)]
public class EnumStringsAttribute : Attribute {}
Next, I need two fields. The first will hold the type of the enum that the instance of the attribute applies to. This is unfortunately needed, because there is no "reverse lookup" (that I know of) for finding the item to which the attribute was applied.
The second field is a static dictionary, which is used across all instances of the attribute class as a lookup for values that have already been converted to its string representation.
private Type enumType;

private static Dictionary<string, Dictionary<int, string>> stringsLookup =
    new Dictionary<string, Dictionary<int, string>>();

public Dictionary<int, string> StringsLookup
{
    get
    {
        return stringsLookup[enumType.FullName];
    }
}
The StringsLookup property provides instance access to the static stringsLookup dictionary on the type name of the associated enum. I'll get back to the stringsLookup and its property again later. Next, the attribute needs a constructor, and the constructor is where the representation values are assigned to the attribute.
public EnumStringsAttribute(Type enumType, params string[] strings)
{
this.enumType = enumType;
ProcessStrings(strings);
}
private void ProcessStrings(string[] strings)
{
lock (stringsLookup)
{
string typeName = enumType.FullName;
if (!stringsLookup.ContainsKey(typeName))
{
stringsLookup.Add(typeName, new Dictionary<int, string>());
int[] values = Enum.GetValues(enumType) as int[];
if (values.Length != strings.Length)
    throw new ArgumentException("The number of enum values differs from the number of given string values");
for (int index = 0; index < values.Length; index++)
stringsLookup[typeName].Add(values[index], strings[index]);
}
}
}
The constructor assigns the enumType, and then calls the private method ProcessStrings. ProcessStrings adds a new entry (if it doesn't already exist) to the stringsLookup static field, into which the representations for the enum are stored. It then validates the number of strings that were passed in with the number of values in the enum. If they do not match up, an ArgumentException is thrown, since it's only logical that you need a representation for each value in your enum. I recommend creating your own custom exception class so that when you add or remove an enum value at some stage during your project, it would be clear where it originates from. If all is well, the method adds the representations into the dictionary for the enum type against its corresponding enum integer value. It's important to note that the order of the strings passed to the enum must be the same as the order in which the enum values are defined in the enum.
That completes the attribute. Pretty straightforward. The code below gives an example of how to use it.
[EnumStringsAttribute(typeof(AspectUsage), "Class",
"Method", "Get Property", "Set Property", "Event", "All")]
public enum AspectUsage
{
Class = 0x1,
Method = 0x2,
PropertyGet = 0x4,
PropertySet = 0x8,
Event = 0x10,
All = 0x1F
}
Now for the extension method. Again, it's actually pretty straightforward.
public static String StringValue(this Enum value)
{
    Attribute[] attributes = value.GetType().GetCustomAttributes(
        typeof(EnumStringsAttribute), false) as Attribute[];
    if (!ReferenceEquals(attributes, null) && attributes.Length > 0)
    {
        return (attributes[0] as EnumStringsAttribute).StringsLookup[
            (value as IConvertible).ToInt32(CultureInfo.InvariantCulture)];
}
return value.ToString();
}
The method gets any EnumStrings custom attributes on the enum type passed in. If there is no such attribute on the enum, the normal .ToString() behaviour of the enum is returned. If the attribute is found, the representation is retrieved from the StringsLookup dictionary. Notice that I'm leveraging the fact that Enum implements IConvertible.
The code below shows how the extension method is used in the end:
public void Method1(AspectUsage aspectUsage)
{
System.Diagnostics.Debug.WriteLine(aspectUsage.StringValue());
}
Pretty clean, right? That's why I like it: it doesn't clutter your code with verbose string-mangling methods!
You can extend this technique to more than just strings, for instance Guids. Because the representations are stored in a static dictionary, the only efficiency hit is the first time the enum is loaded. Thereafter, all subsequent usages of the enum or the extension method simply do a lookup from the dictionary.
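The same idea, a label table declared right next to the enum with a fallback to the plain name, can be sketched in Python as well. This is a rough analogue for comparison only; the names are illustrative, not from the article:

```python
from enum import Enum

class AspectUsage(Enum):
    CLASS = 0x1
    METHOD = 0x2
    PROPERTY_GET = 0x4

# label table kept alongside the enum definition,
# playing the role of the C# attribute's string lookup
_LABELS = {
    AspectUsage.CLASS: "Class",
    AspectUsage.METHOD: "Method",
    AspectUsage.PROPERTY_GET: "Get Property",
}

def string_value(value: AspectUsage) -> str:
    # fall back to the member name when no label is registered,
    # mirroring the .ToString() fallback in the C# extension method
    return _LABELS.get(value, value.name)

print(string_value(AspectUsage.PROPERTY_GET))  # Get Property
```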
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/Articles/42375/Extending-Enum-for-Extra-Meta-Data
In this lesson we will be writing a component to manage user input. We will work with Unity’s Input Manager so that your game should work across a variety of input devices (keyboard, controller, etc). The component we write will be reusable so that any script requiring input can receive and act on these events.
Custom Event Args
It is common to use an EventHandler when posting an event. Using this delegate pattern you must pass along the sender of the event, and an EventArgs (or subclass) as well. When we post input events, it is handy to pass along information such as what button was pressed, or what direction is the user trying to apply. Most of the time, all I ever need to pass is a single field of data. Rather than creating a custom subclass of EventArgs for each occasion, we can create a generic version. Create a new folder within the Scripts folder called EventArgs. Then create a script within this folder called InfoEventArgs and use the following implementation:
using UnityEngine;
using System;
using System.Collections;
using System.Collections.Generic;

public class InfoEventArgs<T> : EventArgs
{
    public T info;

    public InfoEventArgs ()
    {
        info = default(T);
    }

    public InfoEventArgs (T info)
    {
        this.info = info;
    }
}
This is a pretty simple class which can hold a single field of any data type named info. I created two constructors: an empty one which initializes itself using the default keyword (this keyword handles both reference and value types), and one which allows the user to specify the initial value.
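For readers more comfortable with Python, the same generic container can be sketched there too (an illustrative analogue, not part of the Unity project; None stands in for C#'s default(T)):

```python
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

@dataclass
class InfoEventArgs(Generic[T]):
    # None plays the role of default(T) for the empty constructor
    info: Optional[T] = None

print(InfoEventArgs(10).info)   # 10
print(InfoEventArgs().info)     # None
```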
Unity’s Input Manager
Unity provides an Input Manager to help simplify the… well, the management of input – that was obvious. From the menu bar choose Edit->Project Settings->Input. Look in the inspector and you will be able to see the various mappings of input concepts to input mechanisms. Expand the Axes (if it isn't already open) and you should see several entries such as: “Horizontal”, “Fire1”, and “Jump”. There are actually several entries for most. One entry for “Horizontal” monitors keyboard input from the arrow keys or the ‘a’ and ‘d’ keys. Another entry for “Horizontal” monitors joystick axis input. In your code, you can check if there is “Horizontal” input from any of those sources with a single reference to that name.
Unity has done most of the heavy lifting for us, however, one of my own personal complaints with this manager (and several of their other systems) is a lack of support for events. You must check the status of Input every frame (through an Update method or Coroutine) in order to make sure you don't miss anything. As you may have guessed, this is not terribly efficient, and can be a bit cumbersome to re-implement everywhere you need input. Therefore, I will do this process only once, and then share the results via events with any other interested script.
Create another subfolder of Scripts called Controller. Inside this folder create our script, InputController.cs and open it for editing.
We will be using the “Horizontal” and “Vertical” inputs for a variety of things such as moving the tile selection cursor around the board (to select a move location or attack target) or to change the selected item in a UI menu. As I mentioned before, we will need to check for input on every frame, so let’s go ahead and take advantage of the Update method. Add the following code to your script:
void Update ()
{
    Debug.Log(Input.GetAxis("Horizontal"));
}
Save your script, attach it to any gameobject in a new scene, and press play. Every frame, a new debug log will print to the console (Make sure Collapse is disabled so they appear in the correct time-wise order). Watch what happens to the value when your press the left or right arrow keys, or the ‘a’ and ‘d’ keys.
Pressing right or ‘d’ causes the output to raise toward positive one, and pressing left or ‘a’ causes the output to lower toward negative one. If you aren’t pressing in either direction, the output will ease back to zero. With this function, Unity has smoothed the input for us. If I were making a game where a character could move freely through the world such as an FPS, then that easing would help movement look a little more natural.
For our game, I don’t want any of the smoothing on directional input. Since we are snapping to cells on a board or between menu options, etc. a very obvious on/off tap of a button will be better for us. In this case there is another method we can try:
void Update ()
{
    Debug.Log(Input.GetAxisRaw("Horizontal"));
}
Save the script and run the scene again. Now the keyboard presses result in jumps immediately from zero to one or negative one depending on the direction you press.
You may have noticed that some games allow input both through pressing, and through holding. For example, as soon as I press an arrow key, the tile might move on the board. If I keep holding the arrow, after a short pause, the tile might continue moving at a semi-quick rate.
I want to add this “repeat” functionality to our script, but since I will need it for multiple axes, it makes sense to track each one as a separate object so that we can reuse our code. I will add another class inside this script – normally I don't like to do that, but this second class is private and will only be used by our input controller, so it is an exception.
class Repeater
{
    const float threshold = 0.5f;
    const float rate = 0.25f;
    float _next;
    bool _hold;
    string _axis;

    public Repeater (string axisName)
    {
        _axis = axisName;
    }

    public int Update ()
    {
        int retValue = 0;
        int value = Mathf.RoundToInt( Input.GetAxisRaw(_axis) );

        if (value != 0)
        {
            if (Time.time > _next)
            {
                retValue = value;
                _next = Time.time + (_hold ? rate : threshold);
                _hold = true;
            }
        }
        else
        {
            _hold = false;
            _next = 0;
        }

        return retValue;
    }
}
At the top of the Repeater class I defined two const values. The threshold value determines the amount of pause to wait between an initial press of the button, and the point at which the input will begin repeating. The rate value determines the speed that the input will repeat.
Next, I added a few private fields. I use _next to mark a target point in time which must be passed before new events will be registered – it defaults and resets to zero, so that the first press is always immediately registered. I use _hold to indicate whether or not the user has continued pressing the same button since the last time an event fired. Finally, I use _axis to store the axis that will be monitored through Unity’s Input Manager. This value is assigned via the class constructor.
After the constructor, I have an Update method. Note that this class is not a MonoBehaviour, so the Update method won't be triggered by Unity – we will be calling it manually. The method returns an int value, which will either be -1, 0, or 1. Values of zero indicate that either the user is not pressing a button, or that we are waiting for a repeat event.
Inside the Update method, I declare a local variable called retValue which is the value which will be returned from the function. It will only change from zero under special circumstances. Next we get the value this object is tracking from the Unity’s Input Manager using GetAxisRaw as we did earlier. I put the method inside of another method which rounds the result and casts it to an int value type.
The if condition basically asks if there is user input or not. When the value field is not zero the user is providing input. Inside this body we do another if condition check which verifies that sufficient time has passed to allow an input event. On the first press of a button, Time.time will always be greater than _next which will be zero at the time. Inside of the inner condition body, we set the retValue to match the value reported by the Input Manager, and then set our time target to the current time plus an additional amount of time to wait. This means that subsequent calls into this method will not pass the inner condition check until some time in the future. Some of you may not be familiar with the conditional operator (?:) used here – it is very similar to an if condition where the condition to validate is to the left of the question mark, the value to the right of the question mark is used when the condition is true, and the value after the colon is used when the condition is false. Finally, I mark _hold as being true.
The first (outer) if condition has an else clause – whenever the user is NOT providing input, this will mark our _hold value as false and reset the time for future events to zero so that they can immediately fire with the next press of the button.
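The timing logic itself is language-agnostic. As a rough sketch (in Python, with the current time passed in explicitly instead of reading Time.time, so it can be stepped by hand), the press/hold/repeat behavior looks like this:

```python
class Repeater:
    THRESHOLD = 0.5   # pause after the first press before repeating begins
    RATE = 0.25       # interval between repeats while the button is held

    def __init__(self):
        self._next = 0.0    # time that must pass before the next event fires
        self._hold = False  # has the button been held since the last event?

    def update(self, value, now):
        # value: -1, 0, or 1 from the input axis; now: current time in seconds
        if value != 0:
            if now > self._next:
                # fire: schedule the next event using threshold on the first
                # press, then the faster rate while holding
                self._next = now + (self.RATE if self._hold else self.THRESHOLD)
                self._hold = True
                return value
        else:
            # button released: reset so the next press fires immediately
            self._hold = False
            self._next = 0.0
        return 0

r = Repeater()
print(r.update(1, 0.1))   # 1 -> first press fires immediately
print(r.update(1, 0.2))   # 0 -> still inside the threshold pause
print(r.update(1, 0.7))   # 1 -> repeat after the threshold elapsed
```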
Now it's time to put our Repeater class to good use. Add two fields inside the InputController class as follows:
Repeater _hor = new Repeater("Horizontal");
Repeater _ver = new Repeater("Vertical");
Whenever our Repeaters report input, I will want to share this as an event. I will make it static so that other scripts merely need to know about this class and not its instances. We will implement this EventHandler using generics so that we can specify the type of EventArgs – we will use our InfoEventArgs and specify its type as a Point. Don’t forget that you will need to add a using statement for the System namespace in order to use the EventHandler.
public static event EventHandler<InfoEventArgs<Point>> moveEvent;
We will need to tie our repeaters into Unity’s Update loop, and actually fire the event we just declared at the appropriate time:
void Update ()
{
    int x = _hor.Update();
    int y = _ver.Update();
    if (x != 0 || y != 0)
    {
        if (moveEvent != null)
            moveEvent(this, new InfoEventArgs<Point>(new Point(x, y)));
    }
}
Next I want to add events which watch for the various Fire button presses. I don't need these to repeat, because I won't consider the input as complete until it is actually released. I will use one Fire button for confirmation, one for cancellation and will add a third, just in case I think of a reason to have it.
The event we send for these will also use InfoEventArgs but instead of passing a Point struct for direction, it will just pass an int representing which Fire button was pressed.
public static event EventHandler<InfoEventArgs<int>> fireEvent;
Add a string array to your class to hold the buttons you wish to check for:
string[] _buttons = new string[] {"Fire1", "Fire2", "Fire3"};
In our Update loop after we check for movement, let’s add the following to loop through each of our Fire button checks:
for (int i = 0; i < 3; ++i)
{
    if (Input.GetButtonUp(_buttons[i]))
    {
        if (fireEvent != null)
            fireEvent(this, new InfoEventArgs<int>(i));
    }
}
Using Our Input Controller
Now that we’ve completed the Input Controller, let’s test it out. Create a temporary script somewhere in your project and add it to an object in the scene. You will also need to make sure to add the Input Controller to an object in the scene. I created a script called Demo in the root of the Scripts folder.
I usually connect to events in OnEnable and disconnect from events in OnDisable. Remember that cleanup is very important – particularly when using static events, because they maintain strong references to your objects. This means they keep the objects from going out of scope and being truly destroyed, and could for example trigger events on scripts whose GameObjects are destroyed.
void OnEnable ()
{
    InputController.moveEvent += OnMoveEvent;
    InputController.fireEvent += OnFireEvent;
}

void OnDisable ()
{
    InputController.moveEvent -= OnMoveEvent;
    InputController.fireEvent -= OnFireEvent;
}
When you have added statements like this, but have not yet implemented the handler, you can have MonoDevelop auto-implement them for you with the correct signatures. Right-click on the OnMoveEvent and then choose Refactor->Create Method. A line will appear indicating where the implementation will be inserted which you can move up or down with the arrow keys, and then confirm the placement by hitting the return key. You should see something like the following:
void OnMoveEvent (object sender, InfoEventArgs<Point> e)
{
    throw new System.NotImplementedException ();
}
I don't want to crash the program, so I will replace the throw statement with a simple Debug.Log indicating the direction of the input.
Debug.Log("Move " + e.info.ToString());
Use the same trick to implement the OnFireEvent handler and use a Debug message to indicate which button index was used.
Debug.Log("Fire " + e.info);
Run the scene and trigger input and watch the console to verify that everything works.
Summary
In this lesson we reviewed making a custom, generic subclass of EventArgs to use with our EventHandler based events. We discussed Unity's Input Manager and how it can provide unified input across multiple devices, and then we wrapped it up with a new Controller class to listen for special input events specific to our game. In the end I showed a simple implementation that would listen to the events and take an action.
42 thoughts on “Tactics RPG User Input Controller”
Love your new series about TRPG. Keep writing 🙂
Suppose we don’t pass in a Point for the moveEvent, but instead want to handle a forward/back/left/right input based on the Unity Input Manager axes (horizontal and vertical). How would that be done?
I suppose I would define an enum with the various directions you listed, and then pass a type of the enum based on the axis values. For example, if the ‘x’ value was greater than a certain threshold, I would pass along “right”, or if the ‘y’ value was greater than a certain threshold, I would pass along “up”.
As an alternative to the enum, you could also pass along a KeyCode value such as “UpArrow”, see the docs:
Just wondering – would declaring bools for each direction and trying to pass them in instead of an enum be bad practice? Sorry, I’m new to events and wondering why you’d choose enums.
I’m also curious where I’m going wrong in trying to pass in an enum.
moveEvent(this, new InfoEventArgs(MovementDirs.Left));
MovementDirs is a public enum.
I wouldn’t declare a bool for each direction because it just feels wasteful. If the idea is that you want to support two directions simultaneously such as Up and Right, then you can use an enum as Flags (see my post here, if you are unfamiliar with that).
When you are using the generic InfoEventArgs, you can help it understand what it is creating like this:
“
new InfoEventArgs LessThan MovementDirs GreaterThan (MovementDirs.Left)” Sorry, wordpress keeps modifying my comment and stripping the generic line out. I am not sure how to display it correctly in a comment so you will have to replace the LessThan and GreaterThan bits with the appropriate character.
Hey @thejon2014. Again, great tutorials.
Here are some suggestions (for the newbies sake):
> you may be interested in adding a note/link to the repository on every post, or maybe at the Project posts list (a different looking one, to make it obvious)
> I could only know for sure where the EventHandler goes because I confirmed it at the Repository, so it might be good if you stated that more clearly.
Hi, I’ve hit a bit of a road block following this tutorial and I think it has to do with my event handlers. I am getting no errors, but nothing is added to the debug log when I give input.
This is my Demo.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Demo : MonoBehaviour {
void OnEnable () {
Debug.Log (“This line is logged properly”););
}
}
Does this look like it should be working? Is the problem elsewhere?
The code looks ok (but don’t forget to also unsubscribe from the fireEvent). Since you are able to see the Debug.Log where you subscribe for events but aren’t receiving anything, your next check should be that you are actually sending something. I would put some Debug.Log statements in the InputController script around your input where you expect the event to be posted.
Your problem could be as simple as forgetting to have an InputController component in your scene.
I noticed that you’re missing the info type in your method declarations. So OnMoveEvent parameter should be InfoEventArgs e, and OnFireEvent should be InfoEventArgs e.
Hope this helps!
Hi =)
I’ve been so sorry for google translate but I can not speak English well.
My question is:
How do I get it to play the game on my mobile?
As I know it is not possible that a UI button simulates a button print or am I wrong with this statement?
How do I best deal with this? Can you help me or do you have an idea how I can manage this?
What maybe still important is I am not really good in the coding I make it only half a year. And have given me everything myself.
Thanks in advance =)
Unity has a new UI system with Unity 5. Their UI Button does respond to touches for a mobile screen in the same way as it would respond to a mouse click. I have a short post that shows how to link button events to your code here:
Note that the architecture is pretty different than the setup for the Tactics RPG project which was geared more toward using a game controller or keyboard so you could highlight menu options and then confirm them etc. With a touch input you don’t need a highlight state.
I once tried a little, and did not come to any sly results. The only thing I could use but not 100% works is the following:
Public float Move = 1.0f;
Void Update ()
{
// Up
If (Input.GetKeyDown (KeyCode.U))
{
Transform.Translate (0, 0, Move);
}
// Down
If (Input.GetKeyDown (KeyCode.J))
{
Etc
If I drag it onto the Tile Selection Indicator, I can use it and modify it using buttons. But when I move two tiles to the left and then select, it does not go there. If I then move one left normally with the 'A' key, it goes one left, although the Tile Selection Indicator is two to the left. Then I remembered that you update the position in the input controller.
Int x = _hor.Update ();
Int y = _ver.Update ();
If (x! = 0 || y! = 0)
{
If (moveEvent! = Null)
MoveEvent (this, new InfoEventArgs (new point (x, y));
Could this be the reason that it does not work?
And how can I rewrite or paste that it works?
I thought I understood what you were asking, but now I am not sure. I thought you wanted to modify the Tactics RPG project so that it would work with a touch-screen interface on mobile rather than the keyboard input it currently uses.
This response makes it sound like you are still working with keyboard input, but I think it wasn’t translated well enough for me to understand what the problem is or what your goal is.
I’m sorry; as I said, I translate everything with Google Translate. You understood me correctly the first time: I would like to play the game on my mobile phone. I wanted to explain that I made some attempts, but I have not found a working solution. With the code above, I can indeed control the Tile Selection Indicator with the buttons. But when I move the Tile Selection Indicator with my code and then confirm the destination, the unit does not move. And if I move one tile to the left with the old keyboard controls and then take another step with my code, it ignores the position of the Tile Selection Indicator (which in the game is now two tiles to the left) and moves only one to the left. Perhaps I am approaching the problem completely wrong; I hope you understand me now. I would also like a button to confirm the destination or to select actions, and likewise a back button, but I have no idea how to do that either. Following the tutorial for this project, I thought I understood step by step what you were doing, but now I am slowly starting to doubt myself. I am quite a newcomer to coding and at the limit of my knowledge, and the project was never adapted for mobile phones. But there is always a first time =) Thanks for your help, and excuse my English.
Ok, that helps clarify. I think your problem is that you are simply moving the tile selection indicator’s transform position. You are controlling where it appears in the scene, but the rest of the game logic also needs to be informed of the change. See the BattleController’s “pos” field. This is the data that the game references to determine what tile is actually selected. The object you moved is just a view that represents that information to the player, but it is up to you to keep them in sync.
I’m not sure how much of a newcomer you are to coding, but it is worth noting that this is a pretty advanced project. Changing the input architecture may “sound” easy, but is actually a pretty involved process and will have a large impact on the flow of the states and look of the screens as well as a host of architectural challenges you’ll have to consider, like what happens if I tap two places on the screen simultaneously – this can be a much larger problem than you might think. I wouldn’t recommend attempting this sort of challenge until you have a solid understanding of programming, and fully understand the code I have already provided.
Until then, I am working on another project which will be touch-screen friendly. I hope to start publishing it in a few weeks. Stay tuned because it will probably be a better starting point for you.
I just noticed I posted the wrong code, since I was still testing with the keyboard. This is the code I use with the UI button:
public class TileSelectionController : MonoBehaviour
{
    public float Move = 1.0f;

    public void UP ()
    {
        transform.Translate(0, 0, Move);
    }
}
So here I am again XD. The last error was stupid, and this one must be too… I am getting this strange error in the “demo” script, which I named InputHelper. The OnEnable/OnFireEvent subscription returns this error and does not let me enter play test mode:
“Assets/Scripts/InputHelper.cs(10,19): error CS0123: A method or delegate `InputHelper.OnFireEvent(object, InfoEventArgs)’ parameters do not match delegate `System.EventHandler<InfoEventArgs>(object, InfoEventArgs)’ parameters”
My code is the following :
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class InputHelper : MonoBehaviour {

    void OnEnable ()
    {
        InputController.fireEvent += OnFireEvent;
    }
}
If I remove the InputController.fireEvent += OnFireEvent; line from the OnEnable method, it lets me play test, but only the move event works…
Unfortunately wordpress always modifies the code in the comments so I can’t verify that you had the correct generic type applied to the InfoEventArgs in the OnFireEvent handler. Make sure that it is “int” instead of “Point”, otherwise it looks correct at my first glance. If necessary, copy a fire event handler from somewhere else in the code that is working and that should help you understand what you missed.
Hi Jon,
First of all, great tutorial, I really like the way you describe your thought process during decisions, so it’s easy to understand why it’s better to do it this way than the other way. Much better than the video tutorials where they just give you the fastest path, although many times the wrong one for a full project.
So, my question is in the Update method for the Repeater class. Why do you use Mathf.RoundToInt instead of a simple cast to int? Since Input.GetAxisRaw will always be 0, 1, or -1, I didn’t understand the reason for that. Is it just a matter of preference?
Thanks!
Glad you are enjoying it! To answer your question, although the project lends itself to keyboard input, the Input class can accept joystick input as well. In that case, the rounding will help pick the value nearest to the intent of the user.
Oh yeah, that makes sense. Thanks!
Hey I’m having real issues with passing the moveEvent as a Point. Apparently Visual Studio can’t find the Point type. I am using the System namespace. After googling the issue I tried System.Drawing, but apparently that doesn’t exist so…I’m lost. Any advice?
The “Point” I am using in this code is not provided by System. We create it in the board generator post. Also, if you ever get stuck, you can refer to the completed project and compare.
Hi, this is a great tutorial, very challenging and professionally made. I’m learning a lot from it!
I was wondering if you have made a more recent tutorial somewhere, could be with another game, with an updated version of the Unity Input controller. I did some research and apparently they fixed the issue about having to put your stuff in the update() method.
Thanks!
Glad you’re enjoying it and learning! I just looked at the updated Unity Documentation (2018.2) for Input and they still show it implemented with a polling style of architecture (via the update method). If however, you are referring to their EventSystem which they use with their new UI, then yes, I do also have projects that make use of that. Feel free to check out either the “Unofficial Pokemon Board Game” or “Collectible Card Game” tutorials from my Project page.
Thanks very much, yes I was referring to their event system, will check that out!
A few people in the comments seems to be removing
InputController.fireEvent -= OnFireEvent;
from OnDisable, but it seems like the example provided by you actually still has it included.
Is there any reason to remove this?
I kept it in my code and it seems to be working fine. Just curious more than anything.
It is wrong to forget to unsubscribe from events. Events maintain a strong reference to objects and therefore could keep something alive when you don’t intend it to be. In other words it could cause your program to crash, or worse, keep running but execute code more times than you intended and get into an unexpected state.
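The same mechanics apply in any event system; here is a small Python sketch (my illustration, not Rx code) of how a publisher's handler list keeps a strong reference to each subscriber, so a handler keeps firing until it is explicitly unsubscribed:

```python
# Minimal event analogue: the handler list is a strong reference,
# so subscribers stay "alive" to the publisher until they unsubscribe.
class Event:
    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        self._handlers.append(handler)

    def unsubscribe(self, handler):
        self._handlers.remove(handler)

    def fire(self, value):
        # Copy the list so handlers may unsubscribe during dispatch.
        for handler in list(self._handlers):
            handler(value)

received = []
event = Event()
handler = received.append

event.fire(1)              # no subscribers yet: nothing recorded
event.subscribe(handler)
event.fire(2)              # handler runs
event.unsubscribe(handler)
event.fire(3)              # handler no longer runs

print(received)            # [2]
```

Forgetting the unsubscribe step would mean the third fire still reaches the handler, which is exactly the unintended-extra-execution hazard described above.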
It occurred to me I never thanked you for this though I initially meant to. Just to clarify, that means I was right to keep the code? That snippet on the on disable is the unsubscribe events code?
Either way, Thanks for this tutorial. It’s the perfect balance of me wanting to know every bit of putting a tactics rpg game together that I can, with the actuality of being able to create something worth showing in a reasonable time. This has really kept me motivated (and hopefully for me, I’ll be able to stay motivated.)
Yes, you were correct to keep the code, and you’re welcome. It always feels nice to know that your work is appreciated and that you are inspiring others!
I have been trying to change this to use on-screen UI buttons for the inputs, but cannot figure it out. It seems I need to set the horizontal value manually for it to work. Or am I missing how to do this?
If I understand what you are saying, then I think you are taking the correct approach. For example, connect a button event for an up button to an “upButtonPressed” method on your Input Controller, and then use that method to fire the appropriate notification (the same as you would had the up arrow keyboard key been pressed).
I think I got it to work. I simply set the x or y directly, and also pass that into the repeater update function.
The movement works fine, however the repeater no longer seems to work correctly. Hmmm
Also, when I tried to directly fire a moveEvent from a function and gave it the values directly, it seemed to not work. (Since the horizontal and vertical raw never changes I think.)
If you want to post your code in the Forum I’d be happy to take a look.
The repeater needs to check for a held key each frame. The UI button only sends a single event (probably on the first frame it detects the mouse down). If you want a UI button that acts like a keyboard key, you will probably need to create your own and that goes outside the scope of this tutorial. I’d be happy to help you further in the Forums if you want.
Thank you for your further offer of help! I actually figured out what I was doing wrong though, and it is a beginner mistake for sure.
For anyone else who tries UI button controls: just make sure you manipulate the input from within the Update function… am I ever blind, lol. Now the repeaters are working as expected, and I am adding camera zoom as well since I have a better understanding of the input control system.
http://theliquidfire.com/2015/05/24/user-input-controller/
The final query expression we will be decomposing today is this:
xpath(xml(variables('count')), '/*[local-name()="lists"]/*[local-name()="list"]/*[local-name()="name" and text()="*Billable"]/..')
This is the XPath function (...).
We assume there is a string-typed variable named count containing the XML payload to query into. The variables function looks up that variable (...).
The XML function (...) parses that string input into an XML document, which the XPath function expects as its first input.
Then comes the XPath itself
/*[local-name()="lists"]/*[local-name()="list"]/*[local-name()="name" and text()="*Billable"]/..
First
/*[local-name()="lists"]
selects all elements named 'lists' at the root level, without consideration for the namespace. Beware: many online XPath test tools ignore the input XML document's namespace, while .NET, and the Logic Apps XPath function built on .NET, honor it. So for a node to match with just
/lists
it needs to be
<lists>
with no namespace assigned. Beware that child nodes inherit the default namespace assigned by their parents (
xmlns="http://..."
).
Second
/*[local-name()="name" and text()="*Billable"]
lets you select all the name nodes whose text value is '*Billable'. It is typical to want to select elements from an XML array which have a specific value for a child node or attribute. This demonstrates selecting the name nodes equal to *Billable.
Third
/..
lets you go back to the parent node of the name node, so you are able to list all the 'list' elements which satisfied the condition on their leaf nodes (having a name element with value *Billable).
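To make the namespace behavior concrete, here is a small Python sketch (my illustration with hypothetical sample data, not from the original post) of the same selection. ElementTree's {*} wildcard (Python 3.8+) plays the role of local-name(), and because we filter the list elements directly, no '/..' parent step is needed:

```python
import xml.etree.ElementTree as ET

# A namespaced payload shaped like the one the expression queries.
doc = """\
<lists xmlns="http://example.com/ns">
  <list><name>*Billable</name><id>1</id></list>
  <list><name>Internal</name><id>2</id></list>
  <list><name>*Billable</name><id>3</id></list>
</lists>"""

root = ET.fromstring(doc)

# '{*}' matches a tag in any namespace, mirroring local-name() above.
# We keep each <list> whose <name> child has the text '*Billable'.
billable = [
    lst for lst in root.findall('{*}list')
    if lst.find('{*}name') is not None
       and lst.find('{*}name').text == '*Billable'
]

print([lst.find('{*}id').text for lst in billable])  # ['1', '3']
```

Without the wildcard (or local-name() in the Logic App), a plain /lists/list path would match nothing here, because every node inherits the default namespace from the root element.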
https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/xpath-to-select-array-of-nodes-satisfying-a-child-property-with/ba-p/1475132
Ruby Array#sort_by
Let’s say we have the following list of very interesting items.
items = [
  { name: "dogfood", price: 4.99 },
  { name: "catfood", price: 5.99 },
  { name: "batfood", price: 4.99 }
]
Our client wants the list sorted by price ascending, and we should break ties in price by then sorting by item name ascending. So with our example list, the result should be:
items = [
  { name: "batfood", price: 4.99 },
  { name: "dogfood", price: 4.99 },
  { name: "catfood", price: 5.99 }
]
Here’s a method that does the job for our client:
def sort_by_price_and_name(items)
  items
    .group_by { |i| i[:price] }
    .map { |_k, v| v.sort { |a, b| a[:name] <=> b[:name] } }
    .flatten
end
It’s a bit unwieldy and inelegant. It turns out there is a perfect method for this in Ruby:
def sort_by_price_and_name(items)
  items.sort_by { |i| [i[:price], i[:name]] }
end
Ruby's Enumerable#sort_by works in this case where we sort by all the fields in ascending order. If we wanted to sort by price ascending, then name descending, sort_by would no longer be the tool we'd want to use.
Hat tip to @joshcheek for the knowledge.
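For that mixed-direction case, one option (my sketch, not from the original post) is to fall back to sort with a comparator, flipping the operands for the descending field:

```ruby
items = [
  { name: "dogfood", price: 4.99 },
  { name: "catfood", price: 5.99 },
  { name: "batfood", price: 4.99 }
]

# Price ascending, then name descending. The comparator returns the
# price comparison unless it's a tie, in which case it compares names
# with the operands swapped to get descending order.
sorted = items.sort do |a, b|
  by_price = a[:price] <=> b[:price]
  by_price.zero? ? (b[:name] <=> a[:name]) : by_price
end

p sorted.map { |i| i[:name] }  # ["dogfood", "batfood", "catfood"]
```

Negating the key also works for numeric fields (sort_by { |i| [-i[:price], ...] }), but strings can't be negated, which is why the comparator form is the general tool.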
https://til.hashrocket.com/posts/mr8cml6htw-ruby-arraysortby
I have created a horizontal bar chart of student exam scores by taking input from a file that contains the count of students on its first line, and a student's last name followed by their score on each subsequent line. I am using the graphics module created by John Zelle, aimed primarily at beginners trying out simple graphics.
The following I have written works:
# horizontal bar chart
# This program creates a horizontal bar chart of student exam scores by taking in input from a file that contains the count of students in the file, and a student's last name followed by their score (from 0 to 100) in each subsequent line.
# by: Luis Henriquez-Perez
from graphics import *

def main():
    # get the input
    # fname = input("Enter filename: ")
    fname = 'class_sample.txt'
    file = open(fname, 'r')
    list1 = file.readlines()

    # get the number of students in the file
    students = int(list1[0])

    # create the window
    # set the coordinates so that the chart is scalable
    win = GraphWin('Grade Chart', 400, 400)
    win.setCoords(0, 0, 500, 500)

    # access the information in the file
    scores = []
    names = []
    for item in list1[1:]:
        scores += [int(item.split()[1])]
        names += [item.split()[0]]

    # initial rectangle
    rect = Rectangle(Point(140, 20), Point(140 + 3.4 * scores[0], 460 / students))
    rect.setFill('green')
    rect.draw(win)

    # initial name
    text = Text(Point(70, 230 / students + 10), names[0])
    text.setStyle('bold')
    text.setSize(10)
    text.draw(win)

    for i in range(students - 1):
        rect = Rectangle(Point(140, rect.getP1().getY() + 460 / students),
                         Point(140 + 3.4 * scores[i + 1], rect.getP2().getY() + 460 / students))
        rect.setFill('green')
        rect.draw(win)

        # create the names next to the bars
        text1 = Text(Point(70, (rect.getP1().getY() + rect.getP2().getY()) / 2), names[i + 1])
        text1.setStyle('bold')
        text1.setSize(10)
        text1.draw(win)

    # wait for mouse click
    win.getMouse()

main()
It creates this image:
using a text file named 'class_sample.txt' with this information.
15
Anderson 10
Moore 30
Ford 50
Smith 96
Harris 67
White 89
Rodriguez 22
Cuban 12
Le 83
Thomas 90
Lewis 95
Wang 45
Brown 52
Williams 34
Taylor 89
I have a couple of concerns about this code, however, that I would like advice on. First, I realize that I am drawing rectangles (bars) in two places: I create the first one outside of the for loop and all the others inside it. Since I do this work twice, whenever I want to change something about the bars I have to change it in two places. Second, I realize that I could make the code a lot easier to read by creating functions for drawing a rectangle, and maybe also for initializing the window and opening the file. As Python programmers, what would you suggest I do to make this code more readable and efficient?
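One possible first step (my sketch, not from the original thread): pull the file parsing into its own function, so the drawing code works from clean lists and can be tested without the graphics window.

```python
def parse_class_file(lines):
    """Parse the chart input: the first line is the student count, and
    each following line is a last name and a score separated by a space."""
    count = int(lines[0])
    names, scores = [], []
    for line in lines[1:count + 1]:
        name, score = line.split()
        names.append(name)
        scores.append(int(score))
    return names, scores

names, scores = parse_class_file(["3", "Anderson 10", "Moore 30", "Ford 50"])
print(names)   # ['Anderson', 'Moore', 'Ford']
print(scores)  # [10, 30, 50]
```

A draw_bar(win, row, name, score) helper could then be called once per student in a single loop, which would also remove the duplicated first-rectangle code.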
http://www.python-forum.org/viewtopic.php?p=15904
I'm new to this site and was looking for help. I'm trying to design a program that will take an input of numbers from a user, store it into an array, and format them correctly into three columns. However, I can't seem to figure out why my inputs aren't formatting correctly.
#include <stdio.h>

int main()
{
    int x = 0;
    float num[100];

    /* for loop for receiving inputs from user and storing it in array */
    for (x = 0; x <= 100; x++)
    {
        scanf("%f", &num[x]);
        printf("%7.1lf%11.0lf%10.3lf", num[x], num[x+1], num[x+2]);
        //printf("%f %f %f\n", num[0], num[1], num[2]);
    }

    return 0;
}
Problems I see:

Use of an incorrect range in the for loop. Given

float num[100];

the maximum valid index is 99. Hence, the for loop needs to be:

for (x = 0; x < 100; x++) // x < 100, not x <= 100

Using array elements before they are initialized. In

printf("%7.1lf%11.0lf%10.3lf", num[x], num[x+1], num[x+2]);

nothing has been read into num[x+1] and num[x+2] yet. Hence, you are going to get garbage values.

Accessing num using out-of-bounds array indices. Accessing num[x], num[x+1], and num[x+2] makes sense only if x is less than or equal to 97 (so that x+2 is at most 99).
My suggestion:
Use two loops. In the first loop, read the data. In the second loop, write out the data.
for (x = 0; x < 100; x++) // Using 100 here.
{
    scanf("%f", &num[x]);
}

for (x = 0; x < 98; x++) // Using 98 here.
{
    printf("%7.1lf%11.0lf%10.3lf", num[x], num[x+1], num[x+2]);
}
Update, in response to comment by OP
Change the print loop to:
for (x = 0; x < 98; x += 3) // Increment x by 3
{
    printf("%7.1lf%11.0lf%10.3lf", num[x], num[x+1], num[x+2]);
}
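The stride-by-three indexing is the crux of the fix; as a language-neutral sanity check (my sketch, not part of the original answer), the same grouping logic in Python:

```python
def rows_of_three(values):
    """Mirror the C loop: start indices 0, 3, 6, ... while index + 2
    stays in bounds, so every emitted row has exactly three entries."""
    return [values[x:x + 3] for x in range(0, len(values) - 2, 3)]

print(rows_of_three([1.5, 2.0, 3.25, 4.0, 5.5, 6.75]))
# [[1.5, 2.0, 3.25], [4.0, 5.5, 6.75]]
```

Note that with 100 values this produces rows starting at 0, 3, ..., 96, exactly matching the x < 98 bound in the C loop.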
https://codedump.io/share/GiHltpLwSCCf/1/output-from-arrays-in-c-won39t-format
Glad to see the blog revived – I’ve been lurking here hoping for that for a looong time!
Does all this mean that the event-based async pattern will get deprecated, and the Begin/End IAsyncResult pattern make a comeback?
The event-based one is very hard to compose and generally a PITA to work with. I recall completely abandoning the WebServices support in VS2005 when the async model for generated interfaces changed between the beta (or was it a CTP) and the final release.
@Barry Kelly
No, the event-based async pattern and the Begin/End IAsyncResult patterns are not deprecated; however, you can chose to write Rx on top of either model if that makes it more palatable.
Good to see you are back again with your blog! I wondered a month ago whether to remove your blog from my Google Reader or not. I'm glad I didn't!
Hope you're back for good, or at least a long time 🙂
I have heard mixed reports about whether Rx is going to be included in .NET 4.0 itself – does the DevLabs release indicate that it *won’t* be in .NET 4.0?
Hi Wes, awesome job on Rx, it’s a real work of art.
Is there any reason why all the assemblies in this drop are not CLSCompliant?
Hey, I’m glad to see you blogging again. Keep it up! I always like what you write.
Hooray! The talented Wes Dyer is blogging once again! Great to have you blogging again, Wes. I hope to see more posts on Rx coming from you.
@jonskeet
Unfortunately, the only things from Rx making it into .NET 4.0 are the two interfaces: IObservable<T> and IObserver<T>.
@Jacob Stanley
No reason. Noted as a bug, we’ll fix it in a minor release.
Hey Wes, great to see you blogging again!! Terrific job on Rx!
@Wes: That’s no problem – I just want to get my facts straight as I’m hoping to write about Rx in the second edition of C# in Depth as an example of an alternative in-process LINQ provider. (I was going to write about my own hobby version – Push LINQ – but obviously Rx is a better choice.)
Please ping me if you’d be interested in tech-proofing the Rx section of the book. I should be writing that section in a few weeks.
Wes, thank you for this article.
Enjoyed the Rx talk at PDC. I’d love to download Rx for .NET 4 Beta 2, but the download link appears to be broken (same on the DevLabs site). The link for the .NET 3.5 SP1 version works, though.
This Interval Sample does not seem to work in Beta 2. Has anyone else had success with this? The Lambda fires every second, but the TextBlock never updates.
I am having lots of trouble getting any of this to work, can we have a sample project with these basic functions in it? Maybe I am missing a reference somewhere, but I followed the above instructions explicitly and q.Subscribe(value => textBlock.Text = value.ToString()); never updates the TextBlock
Thanks, when I figure out how this works, it is going to help trmendously with what we are doing with pushing sensor information to the UI
Actually this appears to be a problem with the VB.Net implementation:
In C# this works fine, everything updates
private void Window_Loaded(object sender, RoutedEventArgs e)
{
Observable.Context = System.Threading.SynchronizationContexts.CurrentDispatcher;
var timer = Observable.Interval(TimeSpan.FromSeconds(1));
timer.Subscribe(value => MyTextBlock.Text = value.ToString());
}
However, the same thing in VB does NOT:
Private Sub MainWindow_Loaded(ByVal sender As Object, ByVal e As System.Windows.RoutedEventArgs) Handles Me.Loaded
Observable.Context = System.Threading.SynchronizationContexts.CurrentDispatcher
Dim timer = Observable.Interval(TimeSpan.FromSeconds(1))
timer.Subscribe(Function(value) MyTextBlock.Text = value.ToString())
End Sub
Is this a problem with Lambdas in VB? Or is it something in Reactive?
VB works fine…
My translation skills didn’t 🙁
Changing timer.Subscribe(Function(value) MyTextBlock.Text = value.ToString())
to
timer.Subscribe(Sub(value) MyTextBlock.Text = value.ToString())
made everything work perfectly.
It looks like you figured it out. If you use Function instead of Sub in VB.NET then "=" means relational equality not assignment.
I’m very new to Rx, so this might be a stupid question.
Does Rx provide a way to transform regular .NET events (which work in a push-mode) into a pull-mode? In other words, I see that you can get an IObservable from an event, but I don’t see if you can get an IEnumerable from an event. Does Rx help with that?
That is a great question.
Rx does support a way to transform .NET events into a pull mode. This is done in two phases:
1. Convert .NET event to Observable
2. Convert Observable to Enumerable
Typically, you can use either FromEvent or Create to do (1) and you can use ToEnumerable, Latest, Next, or MostRecent for (2) depending on which semantics you want for the conversion.
Thanks for the quick answer. I’ve another question.
About the IObservable<T> example shown at MSDN (). The Subscribe function should return an IDisposable object that, when disposed, will remove the observer from the observable.
I don’t believe the implementation on MSDN is compliant with this. It returns `observer as IDisposable`, i.e., it will either return (A) null or (B) the observer parameter.
(A) might be acceptable. Returning a null from Subscriber is the same as telling the user code that there isn’t any "unsubcriber" object available;
(B) won’t do it, because the observer has no knowledge of the objects being observed. It can also cause a defect when one wants to remove a subscription and ends up disposing an observer object;
BTW, the MSDN documentation doesn’t specify anywhere what the Subscribe method is supposed to return. I only knew that after I watched your video with Kim Hamilton.
One correction: the documentation of the Subscribe method () does say what the method is supposed to return, so this is not missing.
Anyways, I think it should be cited in the IObservable<T> doc, and the example there should be fixed.
Please could you update the code so it works with the latest version of Rx? For example, Observable.Context = SynchronizationContexts.CurrentDispatcher; is no longer used.
Ditto::
Please could you update the code so it works with the latest version of Rx? For example, Observable.Context = SynchronizationContexts.CurrentDispatcher; is no longer used.
Hi Wes.
Why cannot I use Observable in a viewmodel?
The compiler throws an error:
Error 1 Cannot convert lambda expression to type ‘System.Collections.Generic.IObserver<int>’ because it is not a delegate type
when I execute the code
int i;
var xs = Observable.Return(42);
xs.Subscribe(value => i = value);
I am guessing I am missing something very fundamental here. This code runs in code behind ,but not in my viewmodels?
Regs.
Ok I figured it out.
It is because I did not have the Generics namespace included.
Now I get an invalid operation exception on things like Observable.Start() when I code rx outside of code behind.
Not easy!
As Howard pointed out, the Observable.Context doesn’t appear to be valid any more. You can get around that by using:
timer = Observable.Interval(TimeSpan.FromSeconds(1)).ObserveOnDispatcher
I don’t see a global context in the current bits. Has it moved somewhere or do we need to handle the dispatcher on each subscription?
Steel, the answer to your issue is that by using Function rather than Sub, the operation was doing a comparison and returning the boolean result rather than doing the intended assignment. This is a downside of not differentiating between = (assignment) and == (comparison) in VB.
Unfortunately this is not an example of useful code. Drag and Drop operations need to capture the mouse. This example does not do this. Just release the mouse while you are off the application window and you'll find you are stuck in drag mode.
How about next time you write, try to assume the reader knows nothing about the subject? You lost me about halfway through; I have no idea what's being returned or what is going on due to the programmer jargon used… It would also help to describe the benefit of each line of code, to show the forest from the trees, etc.
Once you lose someone and bits of information aren't comprehended, the rest goes out the window; even if I do understand the later parts, I'm missing the vital bits needed to understand the rest… This isn't introductory level.
Just as an example, if you write the way Scott Guthrie does, where he explains everything and not just in programmer jargon, there is no possible way anyone familiar with the base language cannot understand. Just my advice, thanks.
https://blogs.msdn.microsoft.com/wesdyer/2009/11/18/a-brief-introduction-to-the-reactive-extensions-for-net-rx/
For background on the Hamiltonian Monte Carlo diagnostics used below, see "A Conceptual Introduction to Hamiltonian Monte Carlo", arXiv:1701.02434.
In this case study I will demonstrate the recommended Stan workflow in Python where we not only fit a model but also scrutinize these diagnostics and ensure an accurate fit.
import pystan
import matplotlib
import matplotlib.pyplot as plot
Unfortunately diagnostics are a bit ungainly to check in PyStan 2.16.0, so to facilitate the workflow I have included a utility module with some useful functions.
import stan_utility
help(stan_utility)
Help on module stan_utility:

NAME
    stan_utility

FILE
    /Users/Betancourt/Documents/Research/Code/betanalpha/jupyter_case_studies/pystan_workflow/stan_utility.py

FUNCTIONS
    check_div(fit)
        Check transitions that ended with a divergence

    check_energy(fit)
        Checks the energy Bayesian fraction of missing information (E-BFMI)

    check_treedepth(fit, max_depth=10)
        Check transitions that ended prematurely due to maximum tree depth limit

    compile_model(filename, model_name=None, **kwargs)
        This will automatically cache models - great if you're just running a
        script on the command line. See

    partition_div(fit)
        Returns parameter arrays separated into divergent and non-divergent transitions
To demonstrate the recommended Stan workflow let's consider a hierarchical model of the eight schools dataset infamous in the statistical literature,

$$\mu \sim \mathcal{N}(0, 5)$$

$$\tau \sim \text{Half-Cauchy}(0, 5)$$

$$\theta_n \sim \mathcal{N}(\mu, \tau)$$

$$y_n \sim \mathcal{N}(\theta_n, \sigma_n),$$

as discussed in "Bayesian Data Analysis" by Gelman et al.
We begin with the centered parameterization of the model, written as a Stan program in its own file,

with open('eight_schools_cp.stan', 'r') as file:
    print(file.read())

data {
  int<lower=0> J;
  real y[J];
  real<lower=0> sigma[J];
}

parameters {
  real mu;
  real<lower=0> tau;
  real theta[J];
}

model {
  mu ~ normal(0, 5);
  tau ~ cauchy(0, 5);
  theta ~ normal(mu, tau);
  y ~ normal(theta, sigma);
}

Keeping the Stan program in its own file, rather than pasting it into the Python environment as a string, has real benefits. Not only does it make it easier to identify and read through the Stan-specific components of your analysis, it also makes it easy to share your models with Stan users exploiting workflows in other environments, such as R and the command line.
Given the Stan program, we then use the compile_model method of our stan_utility module to compile the Stan program into a C++ executable,
model = stan_utility.compile_model('eight_schools_cp.stan')
Using cached StanModel
This is not technically necessary, but it allows us to cache the executable and run this model with Stan multiple times without having to recompile between each run.
data = dict(J = 8,
            y = [28, 8, -3, 7, -1, 1, 18, 12],
            sigma = [15, 10, 16, 11, 9, 11, 10, 18])

pystan.stan_rdump(data, 'eight_schools.data.R')
At the same time, an existing Stan data file can be read into the Python environment using the read_rdump function,
data = pystan.read_rdump('eight_schools.data.R')
With the data in hand we can now fit the model,

fit = model.sampling(data=data, seed=194838)

By default this runs four Markov chains in parallel, each initialized from a diffuse initial condition to maximize the probability that at least one of the chains might encounter a pathological neighborhood of the posterior, if it exists.
Each of those chains proceeds with 1000 warmup iterations and 1000 sampling iterations, totaling 4000 sampling iterations available for diagnostics and analysis.
print(fit)
Inference for Stan model: anon_model_71b609c34d59a40b345b5328a36dbb39.
4 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=4000.

           mean se_mean    sd   2.5%    25%    50%   75% 97.5%  n_eff Rhat
mu         4.28    0.14  2.97  -1.79   2.51   4.38  6.05 10.55  482.0 1.01
tau        3.46    0.27  2.94   0.65   1.26   2.66  4.65 11.19  117.0 1.06
theta[0]   5.89    0.2   5.2   -2.97   3.04   4.93  8.22 18.53  652.0 1.01
theta[1]   4.67    0.15  4.29  -3.76   2.23   3.96  7.03 14.08  780.0 1.0
theta[2]   3.83    0.16  4.75  -6.22   1.31   4.21  6.38 12.71  909.0 1.0
theta[3]   4.66    0.15  4.38  -4.14   2.38   4.74  7.01 14.33  892.0 1.0
theta[4]   3.61    0.16  4.38  -6.53   1.37   3.88  5.91 11.83  742.0 1.01
theta[5]   3.86    0.16  4.54  -6.14   1.62   3.93  6.33 12.83  846.0 1.0
theta[6]   5.96    0.22  4.75  -1.96   2.91   4.84  8.35 17.36  461.0 1.01
theta[7]   4.64    0.14  4.93  -5.22   2.16   4.82  7.18  14.9 1250.0 1.0
lp__     -13.81    1.34  6.55  -25.5 -18.62 -14.24 -9.09 -2.42   24.0 1.18

Samples were drawn using NUTS at Wed Jul 26 23:44:33 2017. For each parameter,
n_eff is a crude measure of effective sample size, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat=1).

Everything here looks reasonable except for the log posterior density, lp__, which should inspire some small hesitation.
stan_utility.check_treedepth(fit)
0 of 4000 iterations saturated the maximum tree depth of 10 (0%)
We're good here, but if our fit had saturated the threshold then we would have wanted to rerun with a larger maximum tree depth,
fit = model.sampling(data=data, seed=194838, control=dict(max_treedepth=15))
and then check whether the chains still saturated this larger threshold with stan_utility.check_treedepth(fit, 15).

Next we check the energy diagnostic,
stan_utility.check_energy(fit)
Chain 2: E-BFMI = 0.177681346951 E-BFMI below 0.2 indicates you may need to reparameterize your model
The stan_utility module can also check for divergent transitions; for a discussion of what divergences indicate, see "A Conceptual Introduction to Hamiltonian Monte Carlo", arXiv:1701.02434.
stan_utility.check_div(fit)
202.0 of 4000 iterations ended with a divergence (5.05%)

One remedy is to force the sampler to take smaller steps by raising the target acceptance statistic, adapt_delta,

fit = model.sampling(data=data, seed=194838, control=dict(adapt_delta=0.9))
Checking again,
sampler_params = fit.get_sampler_params(inc_warmup=False)
stan_utility.check_div(fit)
45.0 of 4000 iterations ended with a divergence (1.125%)

The divergences are reduced but not eliminated, so we examine them visually by separating the samples with partition_div; note that this function works only if your model parameters are reals, vectors, or arrays of reals. More robust functionality is planned for future releases of PyStan.
light = "#DCBCBC"
light_highlight = "#C79999"
mid = "#B97C7C"
mid_highlight = "#A25050"
dark = "#8F2727"
dark_highlight = "#7C0000"
green = "#00FF00"

nondiv_params, div_params = stan_utility.partition_div(fit)

plot.scatter([x[0] for x in nondiv_params['theta']], nondiv_params['tau'],
             color = mid_highlight, alpha = 0.05)
plot.scatter([x[0] for x in div_params['theta']], div_params['tau'],
             color = green, alpha = 0.5)
plot.gca().set_xlabel("theta_1")
plot.gca().set_ylabel("tau")
plot.show()
WARNING:root:`dtypes` ignored when `permuted` is False.
One of the challenges with a visual analysis of divergences is determining exactly which parameters to examine. Consequently visual analyses are most useful when there are already components of the model about which you are suspicious, as in this case where we know that the correlation between the random effects (theta_1 through theta_8) and the hierarchical standard deviation, tau, can be problematic.
Indeed we see the divergences clustering towards small values of tau where the posterior abruptly stops. This abrupt stop is indicative of a transition into a pathological neighborhood that Stan was not able to penetrate.
In order to avoid this issue we have to consider a modification to our model, and in this case we can appeal to a non-centered parameterization of the same model that does not suffer these issues.
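Concretely, instead of sampling each random effect directly from its hierarchical distribution, the non-centered parameterization samples a standardized latent variable and rescales it:

```latex
\theta_j \sim \mathcal{N}(\mu, \tau)
\qquad\Longleftrightarrow\qquad
\theta_j = \mu + \tau \, \tilde{\theta}_j, \quad \tilde{\theta}_j \sim \mathcal{N}(0, 1).
```

This decouples tau from the theta_j in the prior, removing the funnel geometry that produced the divergences above.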
with open('eight_schools_ncp.stan', 'r') as file: print(file.read())
data {
  int<lower=0> J;
  real y[J];
  real<lower=0> sigma[J];
}

parameters {
  real mu;
  real<lower=0> tau;
  real theta_tilde[J];
}

transformed parameters {
  real theta[J];
  for (j in 1:J)
    theta[j] = mu + tau * theta_tilde[j];
}

model {
  mu ~ normal(0, 5);
  tau ~ cauchy(0, 5);
  theta_tilde ~ normal(0, 1);
  y ~ normal(theta, sigma);
}
model = stan_utility.compile_model('eight_schools_ncp.stan') fit = model.sampling(data=data, seed=194838)
Using cached StanModel
print(fit) stan_utility.check_treedepth(fit) stan_utility.check_energy(fit) stan_utility.check_div(fit)
Inference for Stan model: anon_model_b4ca739f9fe7ffcdbf0d530f00d0a587.
4 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=4000.

               mean se_mean   sd   2.5%   25%   50%  75% 97.5%  n_eff Rhat
mu             4.37    0.05 3.39  -2.49  2.1   4.51 6.69 10.94 4000.0  1.0
tau            3.59    0.06 3.19   0.13  1.27  2.79 4.95 11.68 2761.0  1.0
theta_tilde[0] 0.31    0.02 0.96  -1.61 -0.34  0.32 0.95  2.16 4000.0  1.0
theta_tilde[1] 0.1     0.01 0.92  -1.72 -0.51  0.12 0.72  1.9  3766.0  1.0
theta_tilde[2] -0.08   0.02 0.98  -2.0  -0.74 -0.08 0.55  1.91 4000.0  1.0
theta_tilde[3] 0.05    0.01 0.94  -1.86 -0.57  0.07 0.68  1.87 4000.0  1.0
theta_tilde[4] -0.16   0.02 0.95  -2.03 -0.79 -0.17 0.47  1.71 3681.0  1.0
theta_tilde[5] -0.09   0.01 0.92  -1.95 -0.71 -0.1  0.52  1.76 3787.0  1.0
theta_tilde[6] 0.37    0.02 0.97  -1.62 -0.26  0.38 1.01  2.28 4000.0  1.0
theta_tilde[7] 0.09    0.02 0.96  -1.77 -0.56  0.11 0.74  1.92 4000.0  1.0
theta[0]       6.15    0.09 5.53  -3.49  2.76  5.66 8.88 19.12 4000.0  1.0
theta[1]       4.9     0.07 4.56  -4.39  2.04  4.9  7.77 14.09 4000.0  1.0
theta[2]       3.91    0.08 5.37  -7.72  0.98  4.19 7.18 13.58 4000.0  1.0
theta[3]       4.63    0.08 4.8   -5.02  1.7   4.68 7.57 14.11 4000.0  1.0
theta[4]       3.62    0.08 4.79  -7.34  0.89  3.97 6.71 12.27 4000.0  1.0
theta[5]       3.97    0.07 4.73  -6.0   1.22  4.15 6.96 12.7  4000.0  1.0
theta[6]       6.37    0.08 5.07  -2.22  3.04  5.97 9.02 18.49 4000.0  1.0
theta[7]       4.92    0.08 5.32  -5.75  1.82  4.86 7.91 15.68 4000.0  1.0
lp__           -6.91   0.06 2.34 -12.17 -8.29 -6.56 -5.23 -3.32 1754.0 1.0

Samples were drawn using NUTS at Wed Jul 26 23:44:34 2017.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).

0 of 4000 iterations saturated the maximum tree depth of 10 (0%)
0.0 of 4000 iterations ended with a divergence (0.0%)
With this more appropriate implementation of our model all of the diagnostics are clean and we can now utilize Markov chain Monte Carlo estimators of expectations, such as parameter means and variances, to accurately characterize our model's posterior distribution.
https://mc-stan.org/users/documentation/case-studies/pystan_workflow.html
Move Type to Another Namespace refactoring
This refactoring helps you move one or more non-nested types to a new or an existing namespace. Namespace declarations are replaced right in the original files and all usages are updated accordingly.
If you need to move a nested type, you can first apply the Move Type to Outer Scope Refactoring.
To move one or more types to another namespace
- Select types that you want to move in one of the following ways:
- In the editor, set the caret at the name of a type.
- Select one or several types and/or files and/or folders in the Solution Explorer.
- Select one or several types in the File Structure Window.
- Select a type in the Class View.
- Select a type in the Object Browser.
- Select a type or folder(s) in the type dependency diagram.
- Do one of the following:
- Press F6 and then choose Move Type to Another Namespace
- Press Ctrl+Shift+R and then choose Move Type to Another Namespace
- Right-click and choose Refactor | Move Type to Another Namespace on the context menu.
- Choose the corresponding command in the main menu.
- Start typing the name of the target namespace. If you are moving type(s) to an existing namespace, pick it from the drop-down list that displays all namespaces matching your input.
Last modified: 15 December 2016
https://www.jetbrains.com/help/resharper/2016.2/Refactorings__Move__Type_to_Another_Namespace.html
As you may know, we recently brought Rolf Rolles on board the team here at Exodus. We all met at our Austin office and Rolf spent a week working alongside us. Our interview process doesn’t consist of contrived questions intended to observe the interviewee’s capacity for mental acrobatics. Traditionally, when we bring someone in for consideration we are already familiar with their past work and skillset. What we are more interested in is evaluating their capacity to work as part of our team. So, Rolf spent his time auditing code and writing some instrumentation tools for some of the problems we were facing at the time. It went very well, and we’re thrilled that he decided to join us.
One night during that week we were chatting with Rolf about random programming problems and he recalled the story of a past interview whereby he was asked to implement a strlen() function in C that, when compiled, would not contain any conditional branches. He didn’t pose the problem as a challenge but Brandon, Zef, and I all found it intriguing and took a shot at solving it. Leave it to Rolf Rolles to reverse the interview process itself…
Spoiler alert: what follows are our independently created solutions.
Brandon’s Solution:
#include <stdio.h>

#define f(b) ((-b)>>31)&1
typedef unsigned int (*funcptr)(unsigned int x);
funcptr functable[2];
unsigned char *p;
unsigned int done(unsigned int x)
{
return x;
}
unsigned int counter(unsigned int x)
{
return(functable[f(*(p+x+1))](x+1));
}
int main(int argc, char *argv[])
{
unsigned int len;
p = (unsigned char *)argv[argc-1];
functable[0] = (funcptr)&done;
functable[1] = counter;
len = functable[f(*p)](0);
printf("len is %u\n", len);
return 0;
}
Zef’s Solution:
/*
* strlen without conditional branch
* compiles with -Wall -ansi
*/
#include <stdio.h>
int _gtfo(char *s);
int _str_len(char *s);
int (*f[])(char *s) = {_gtfo, _str_len};
int _gtfo(char *s)
{
return -1; /* set to '0' to include trailing null */
}
int _str_len(char *s){
char c = *s;
return f[((c & 0x01))|
((c & 0x02) >> 1)|
((c & 0x04) >> 2)|
((c & 0x08) >> 3)|
((c & 0x10) >> 4)|
((c & 0x20) >> 5)|
((c & 0x40) >> 6)|
((c & 0x80) >> 7)](++s) +1 ;
}
int main(int argc, char *argv[])
{
if(argc > 1) printf("strlen(\"%s\") = %d\n", argv[1], _str_len(argv[1]));
return 0;
}
Zef’s description:
“So, my immediate thought was to use function pointers to ‘conditionally’ execute code without a conditional branch. There are two possible states for each member of a string when performing a ‘strlen’-type operation. ‘Terminator’ and ‘Not Terminator’. In this case the ‘Terminator’ for a C-string is ‘NULL’ (0x00). This of course is the only value with 0 bits set; by masking each bit in the 8 bit value and shifting to the lsb then combining the values with a ‘|’ operation, a binary state is created allowing for the indepedent execution of the two defined states ‘Terminator’ and ‘Not Terminator'”.
Aaron’s Solution:
As I admittedly suck at C, I approached the problem in straight assembly (I know, that’s cheating. And yes, this could be achieved with a rep scasb, but that’s just too easy). However, I was able to solve the problem in 27 bytes:
section .text
global _start
_start:
pop eax
pop eax
xor eax, eax
xor ebx, ebx
pop esi
_continue:
mov al, [esi]
add al, 0xFF
salc
inc al
lea ecx, [0x8048097+eax*4]
jmp ecx
inc ebx
inc esi
jmp _continue
int 0x80
The three pops that occur within _start are to get access to argv[1] (the string to be measured, provided on the command line). The last pop esi puts a pointer to the string into the esi register.
The mov al, [esi] grabs a single byte off the string. Then, the add al, 0xFF is used to determine whether the byte is NULL or not. If the value is non-NULL, the add to the 8-bit register al will set the Carry flag. If it is NULL, it will not set the CF.
The next instruction is actually considered undocumented (even objdump shows the mnemonic as ‘bad’). What the salc instruction does is sets the al register to 0xFF if the Carry flag is set, otherwise it sets it to 0x00. This is the method I used to implement a binary state to determine if the character is NULL or not.
The inc al instruction then increments al, which was either 0xFF or 0x00. After the inc it will either be 0x00 or 0x01.
The lea ecx, [0x8048097+eax*4] instruction loads into ecx either the address 0x8048097 or 0x804809b. These addresses are significant and can be observed by objdump’ing the assembled binary:
strlen_no_conditionals: file format elf32-i386
Disassembly of section .text:
08048080 <_start>:
8048080: 58 pop eax
8048081: 58 pop eax
8048082: 31 c0 xor eax,eax
8048084: 31 db xor ebx,ebx
8048086: 5e pop esi
08048087 <_continue>:
8048087: 8a 06 mov al,BYTE PTR [esi]
8048089: 04 ff add al,0xff
804808b: d6 (bad)
804808c: fe c0 inc al
804808e: 8d 0c 85 97 80 04 08 lea ecx,[eax*4+0x8048097]
8048095: ff e1 jmp ecx
8048097: 43 inc ebx
8048098: 46 inc esi
8048099: eb ec jmp 8048087
804809b: cd 80 int 0x80
$
So, if the character is not NULL, the code will jmp ecx to 0x8048097 which increments the string length counter (ebx) and increments the string pointer (esi) and then branches unconditionally to _continue.
If the value was NULL, the jmp ecx will land directly at the int 0x80. As the size of the inc ebx and inc esi and jmp _continue is exactly 4 bytes, the lea instruction very conveniently can load either the address of the inc ebx or directly at the int 0x80, thus removing the need for any NOP-like instructions.
The last convenient optimization to note is that the int 0x80 will execute the syscall specified by the eax register. Well, because the result of the add/salc/inc condition will set eax to 1 only when a NULL is found, the int 0x80 will execute syscall #1 which on Linux is exit(). Additionally, the exit code is specified by the ebx register. That is why I used the ebx register as my counter to hold the string length. So, upon execution of the interrupt, the exit code will contain the length of the string as can be observed by running the assembled binary and inspecting the return value:
$ ld -o strlen_no_conditionals a.o
$ ./strlen_no_conditionals "ExodusIntel" ; echo $?
11
$ ./strlen_no_conditionals "should return 16" ; echo $?
16
$
Rolf’s Solution:
“Basically, the fundamental problem to overcome with this challenge is to ‘make a decision’ — that is to say, decide when to terminate the iteration upon reaching a NULL character — without using an explicit jcc-style conditional branch. A few minutes’ reflection upon this problem yields that we could use recursion into a function pointer table with 256 entries, where 255 of the entries increased some counter variable, and the entry at 0 terminates the procedure and returns the counter. In doing so, we have replaced all conditional jumps with one indexed, switch jump. Some further reflection provides the reduction of the table size from 256 entries down to two.”
int func(char *);
int func_x(char *c) { return 1+func(c); }
int func_0(char *c) { return 0; }
typedef int (*ctr)(char *);
ctr table[2] = { &func_0, &func_x };
int func(char *c) { return table[!!*c](c+1); }
If you’ve come up with an interesting approach, we’d love to see it. Feel free to leave a comment or some such.
—
Aaron Portnoy
@aaronportnoy
https://blog.exodusintel.com/2012/09/
I'm trying to expand my knowledge and skill of Python by moving from small scripts and web development to desktop applications, and am looking at Deluge's code for examples of how to structure a project. I am especially interested in how Deluge has separated the torrenting functionality from the user interface. Designing the core functionality (which is within Deluge's core module), and having the user interface (with multiple user interfaces within the ui module) written separately and simply calling functions from the core, is something I would like to emulate in my own project.
Looking at Deluge's code, it seems that all imports for its own modules are absolute, for example:
Code:
# deluge/core/core.py
import deluge.common
from deluge.core.eventmanager import EventManager
But interestingly, all its entry points, as defined in `setup.py`, are also deeper within the package:
Code:
# setup.py
entry_points = {
'console_scripts': [
'deluge-console = deluge.ui.console:start',
'deluge-web = deluge.ui.web:start',
'deluged = deluge.core.daemon_entry:start_daemon'
],
'gui_scripts': [
'deluge = deluge.ui.ui_entry:start_ui',
'deluge-gtk = deluge.ui.gtkui:start'
],
'deluge.ui': [
'console = deluge.ui.console:Console',
'web = deluge.ui.web:Web',
'gtk = deluge.ui.gtkui:Gtk',
],
}
From what I can tell, according to the answer to this SO question, this would not normally be possible. To use absolute imports the program would have to be started from a file in the project's root. However, Deluge can be started from several locations, none of which are in the project's root.
As far as my understanding goes, Deluge must be changing its PYTHONPATH somehow, such as with `sys.path`, but searching the code finds only one instance of it, and it seems to be related to the documentation, not the actual software code.
So how would one have an entry point for their Python program deeper within the modules, but use absolute imports from the project root? Deluge seems to have done it, but I don't know how, and I would like to do so in my project.
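For what it's worth, here is a minimal self-contained sketch of the mechanism likely at work (the package name "myapp" is hypothetical): setuptools entry points run after the package has been installed, i.e. after its parent directory is on sys.path, so absolute imports from the project root resolve no matter where the entry-point function lives.

```python
import os
import sys
import tempfile

# Build a throwaway package "myapp" with a nested "core" subpackage,
# mimicking an installed layout like deluge/core.
root = tempfile.mkdtemp()
core_dir = os.path.join(root, 'myapp', 'core')
os.makedirs(core_dir)
open(os.path.join(root, 'myapp', '__init__.py'), 'w').close()
with open(os.path.join(core_dir, '__init__.py'), 'w') as f:
    f.write('VERSION = "0.1"\n')

# Installation effectively does this: the package's parent dir goes on sys.path.
sys.path.insert(0, root)

# Now an "entry point" anywhere can use absolute imports from the project root.
import myapp.core
print(myapp.core.VERSION)  # → 0.1
```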
And as a side question, if you know of any other large desktop apps written in Python that I can look at for examples, I'd love it if you told me about them.
Cheers
https://forum.deluge-torrent.org/viewtopic.php?f=8&t=54145&p=224771
Doing more with less in JUnit 4
JUnit is a popular testing framework for writing unit tests in Java projects. Every software engineer should try to achieve their objectives with less code, and the same is true for test code. This post shows one way to write less test code by increasing code re-use using JUnit 4 annotations.
Ok, so let's set the scene. You have a bunch of tests that are very similar but vary only slightly in context. For example, let's say you have three tests which do pretty much the same thing. They all read in data from a file and count the number of lines in the file. Essentially you want the one test to run three times, but read a different file and test for a specific number of lines each time.
This can be achieved in an elegant way in JUnit 4 using the @RunWith and @Parameters annotations. An example is shown below.
@RunWith(Parameterized.class)
public class FileReadTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][]{
                {"players10.txt", 10},
                {"players100.txt", 100},
                {"players300.txt", 300}});
    }

    private String fileName;
    private int expectedNumberOfRows;

    public FileReadTest(String fileName, int expectedNumberOfRows) {
        this.fileName = fileName;
        this.expectedNumberOfRows = expectedNumberOfRows;
    }

    @Test
    public void testFile() throws IOException {
        LineNumberReader lnr = new LineNumberReader(new FileReader(fileName));
        lnr.skip(Long.MAX_VALUE);
        assertEquals("line number not correct", expectedNumberOfRows, lnr.getLineNumber());
    }
}
With respect to the code above:
- The @RunWith annotation tells JUnit to run with the Parameterized runner instead of the default runner.
- The @Parameters annotation is defined by the JUnit class org.junit.runners.Parameterized. It is used to define the parameters that will be injected into the test.
- The data() method is annotated with the parameters annotation. It defines a two dimensional array. The first array is: "players10.txt", 10. These parameters will be used the first time the test is run. "players10.txt" is the file name. 10 is the expected number of rows.
Enjoy...
Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/doing-more-less-junit-4
IRC log of xmlsec on 2008-10-20
Timestamps are in UTC.
06:43:25 [RRSAgent]
RRSAgent has joined #xmlsec
06:43:25 [RRSAgent]
logging to
06:43:27 [trackbot]
RRSAgent, make logs member
06:43:27 [Zakim]
Zakim has joined #xmlsec
06:43:27 [klanz2]
klanz2 has joined #xmlsec
06:43:29 [trackbot]
Zakim, this will be XMLSEC
06:43:29 [Zakim]
ok, trackbot; I see T&S_XMLSEC()2:30AM scheduled to start 13 minutes ago
06:43:30 [trackbot]
Meeting: XML Security Working Group Teleconference
06:43:30 [trackbot]
Date: 20 October 2008
06:43:57 [tlr]
zakim, who is on the phone?
06:43:57 [Zakim]
T&S_XMLSEC()2:30AM has not yet started, tlr
06:43:59 [Zakim]
On IRC I see klanz2, Zakim, RRSAgent, csolc, pdatta, tlr, bal, fhirsch3, trackbot
06:44:35 [brich]
brich has joined #xmlsec
06:45:29 [klanz2]
Hello everyone and good morning.
06:45:37 [fhirsch3]
Agenda:
06:45:42 [fhirsch3]
Chair: Frederick Hirsch
06:45:43 [kyiu]
kyiu has joined #xmlsec
06:46:05 [fhirsch3]
Scribe: Gerald Edgar
06:46:07 [rdmiller]
rdmiller has joined #xmlsec
06:46:12 [fhirsch3]
zakim, who is here?
06:46:12 [Zakim]
T&S_XMLSEC()2:30AM has not yet started, fhirsch3
06:46:13 [Zakim]
On IRC I see rdmiller, kyiu, brich, klanz2, Zakim, RRSAgent, csolc, pdatta, tlr, bal, fhirsch3, trackbot
06:46:35 [tlr]
zakim, call Executive_6
06:46:35 [Zakim]
ok, tlr; the call is being made
06:46:36 [Zakim]
T&S_XMLSEC()2:30AM has now started
06:46:36 [Zakim]
+Executive_6
06:46:46 [tlr]
zakim, who is on the phone?
06:46:46 [Zakim]
On the phone I see Executive_6
06:47:09 [fhirsch3]
zakim, who is here?
06:47:09 [Zakim]
On the phone I see Executive_6
06:47:10 [Zakim]
On IRC I see rdmiller, kyiu, brich, klanz2, Zakim, RRSAgent, csolc, pdatta, tlr, bal, fhirsch3, trackbot
06:48:50 [csolc]
ScribeNick: csolc
06:52:57 [csolc]
Agenda review
06:53:04 [csolc]
Welcome all
06:54:14 [G_Edgar]
G_Edgar has joined #xmlsec
06:55:57 [bal]
bal has joined #xmlsec
06:59:03 [csolc]
Topic: Liaisons
06:59:21 [Zakim]
+??P4
06:59:23 [Zakim]
-??P4
06:59:23 [Zakim]
+??P4
06:59:29 [tlr]
zakim, ??P4 is klanz2
06:59:29 [Zakim]
+klanz2; got it
07:00:25 [fhirsch3]
07:00:39 [csolc]
TOPIC: Minutes Approval
07:01:07 [csolc]
Resolution: 10/07 minutes are approved
07:02:00 [csolc]
Topic: Best Practices
07:03:16 [csolc]
brich: want to confirm that it will be published as first working working draft
07:03:40 [csolc]
bal: does publishing it start any w3c clock
07:03:51 [csolc]
tlr: no clock will be started
07:04:28 [klanz2]
okay, with me
07:04:54 [csolc]
Resolution: Group agrees to publish the Best Practices doc as first working draft
07:05:11 [tlr]
ACTION: thomas to prepare best practices for publication
07:05:12 [trackbot]
Created ACTION-83 - Prepare best practices for publication [on Thomas Roessler - due 2008-10-27].
07:05:26 [fhirsch3]
rrsagent, where am i?
07:05:26 [RRSAgent]
See
07:07:04 [csolc]
rdmiller: does he wait to send best practice to RSA
07:07:25 [csolc]
r/RSA/NSA
07:08:00 [csolc]
fh: wait to send doc until tlr has doc published
07:09:37 [csolc]
Topic:Requirements updates I
07:11:01 [csolc]
ACTION-73, Title, contents update (Magnus)
07:11:01 [csolc]
07:11:04 [csolc]
ACTION-73, Title, contents update (Magnus)
07:11:04 [csolc]
07:13:11 [csolc]
bal: do we need a section on assumptions?
07:14:31 [csolc]
bal: proposals to add a section between 4 and 5 for opperation assumptions
07:14:47 [fhirsch3]
s/opperation/operational environment/
07:16:03 [csolc]
Resolution: accept the change with the addition of the operational environment assumptions
07:16:25 [csolc]
Proposal for principles section
07:16:25 [csolc],
07:18:57 [fhirsch3]
remove Specialized approaches optimized for specific use cases should be
07:19:10 [fhirsch3]
avoided
07:19:38 [fhirsch3]
change "security layer independent of a security layer" to security layer independent of application layer"
07:21:12 [csolc]
fh: what are first class objects
07:21:38 [klanz2]
With what respect is that important to us, maybe add a this means sentence ...
07:21:56 [csolc]
fh: XML Signature -> XML Security
07:23:10 [csolc]
fh: first class object should be defined in the original security doc
07:24:08 [tlr]
07:24:21 [tlr]
07:24:35 [csolc]
second url is the correct one
07:25:57 [tlr]
I think that Frederick was actually looking for this one:
07:26:45 [csolc]
fh: would like to accept the proposal then edit it after.
07:28:41 [csolc]
Resolution: Accept proposed principles section with the above edits
07:28:54 [gedgar]
gedgar has joined #xmlsec
07:30:28 [csolc]
fh: may need to ensure we define requirements before we look at v.next
07:31:30 [csolc]
ACTION: fh edit proposed principles section
07:31:30 [trackbot]
Created ACTION-84 - Edit proposed principles section [on Frederick Hirsch - due 2008-10-27].
07:32:16 [csolc]
Topic: Byte Range signatures
07:32:29 [klanz2]
TOPIC: Byte Range signatures
07:32:47 [fhirsch3]
07:34:10 [fhirsch3]
csolc: sign byte ranges of binary document since some might change others not
07:36:10 [fhirsch3]
bruce: why bytes not over bits, for binary? Bytes higher level than binary
07:38:23 [klanz2]
like LZW
07:38:41 [tlr]
q+ to note that Transforms are defined in terms of octet-streams, not bitstreams
07:39:06 [jcruella]
jcruella has joined #xmlsec
07:39:48 [Zakim]
+??P7
07:39:59 [tlr]
zakim, ??P7 is jcruella
07:39:59 [Zakim]
+jcruella; got it
07:40:29 [fhirsch3]
pratik: binary can be more complicated, depending on encoding
07:40:39 [fhirsch3]
kelvin: prealocate p7 fill in for binary signing
07:40:43 [klanz2]
@Juan Carlos, are there requirements from XAdES in PDF for ByteRenges?
07:41:01 [klanz2]
s/ByteRenges/ByteRanges/
07:50:03 [klanz2]
.
07:53:33 [klanz2]
fjh: add why it's ByteRange and not BitRange ...
07:55:13 [klanz2]
csolc, pleas add to your proposal ...
07:56:08 [bal]
bal has joined #xmlsec
07:56:17 [tlr]
tlr has joined #xmlsec
07:57:28 [klanz2]
ACTION: csolc to update the proposal on a ByteRange Transform
07:57:28 [trackbot]
Created ACTION-85 - Update the proposal on a ByteRange Transform [on Chris Solc - due 2008-10-27].
07:57:32 [csolc]
csolc has joined #xmlsec
07:59:29 [fhirsch3]
fhirsch3 has joined #xmlsec
07:59:36 [brich]
brich has joined #xmlsec
07:59:37 [fhirsch3]
zakim, who is here?
07:59:37 [Zakim]
On the phone I see Executive_6, klanz2, jcruella
07:59:38 [Zakim]
On IRC I see brich, fhirsch3, csolc, tlr, bal, jcruella, rdmiller, klanz2, Zakim, RRSAgent, trackbot
07:59:43 [pdatta]
pdatta has joined #xmlsec
07:59:54 [csolc]
ScribeNick:csolc
07:59:59 [pdatta]
pdatta has joined #xmlsec
08:02:45 [csolc]
chris will note why we are using byte ranges instead of bit ranges
08:02:59 [gedgar]
gedgar has joined #xmlsec
08:03:51 [klanz2]
q+
08:04:07 [fhirsch3]
tlr: add to requirement clarity on possible attacks with byte ranges
08:04:28 [fhirsch3]
fjh: please include in proposal note on not bit stream, possible limit
08:04:31 [fhirsch3]
ack tlr
08:04:31 [Zakim]
tlr, you wanted to note that Transforms are defined in terms of octet-streams, not bitstreams
08:04:36 [tlr]
q-
08:04:38 [fhirsch3]
ack klanz
08:04:51 [tlr]
that's precisely my question
08:05:00 [fhirsch3]
klanz: how are gaps handled, leave out or fill with 0s?
08:05:12 [tlr]
fill with zeroes, fill with something that's given in the transform, produce output that's byte ranges encapsulated in ASN.1, ...
08:05:16 [jcruella]
q+
08:05:20 [fhirsch3]
csolc: need to consider
08:05:21 [tlr]
(just joking, re ASN.1)
08:05:26 [fhirsch3]
klanz: pls add to proposal
08:05:32 [fhirsch3]
ack jruella
08:05:36 [rdmiller]
rdmiller has joined #xmlsec
08:05:38 [fhirsch3]
ack jcruella
08:06:18 [fhirsch3]
jcreullas: filling with 0s is modifying document, is it not
08:07:46 [fhirsch3]
csolc: transform defined, whether to 0 or compress etc
08:08:09 [gedgar]
gedgar has joined #xmlsec
08:09:33 [klanz2]
q+
08:09:34 [csolc]
klanz2: suggests that we ensure proper defaults are defined
08:09:51 [tlr]
q+
08:10:08 [fhirsch3]
ack klanz
08:10:29 [fhirsch3]
ack tlr
08:10:50 [csolc]
tlr: is there a use case for concat
08:11:08 [fhirsch3]
tls notes signing excerts vs concatenation
08:11:15 [fhirsch3]
s/excerts/excerpts
08:12:08 [fhirsch3]
bal: concat effectively via multiple references
08:12:31 [csolc]
bal: terminal transforms?
08:13:14 [csolc]
Topic: Simple Sign
08:13:16 [csolc]
Simple Signing Strawman requirements
08:13:16 [csolc]
08:16:26 [brich]
q+
08:18:51 [csolc]
bal: lower level os stuff wants the minimal set of dependencies
08:19:30 [csolc]
... so if simple sign needs xpath, the more libraries you will need
08:22:32 [fhirsch3]
kelvin notes want to leverage platform, offer support at low level without pulling in xml libraries, no XPath etc
08:23:24 [csolc]
brich: you may require to set a policy instead of a max length
08:23:52 [csolc]
.. on the amount of data that is signed.
08:24:35 [fhirsch3]
fhirsch3 has joined #xmlsec
08:25:14 [fhirsch]
fhirsch has joined #xmlsec
08:25:19 [fhirsch]
zakim, who is here?
08:25:19 [Zakim]
On the phone I see Executive_6, klanz2, jcruella
08:25:20 [Zakim]
On IRC I see fhirsch, rdmiller, pdatta, brich, csolc, tlr, bal, jcruella, klanz2, Zakim, RRSAgent, trackbot
08:25:46 [fhirsch]
kelvin notes policy can be in doc rather than apps, since apps could differ
08:25:49 [csolc]
kelvin: the application tells the library the max amount of data that is allowed to be processed.
08:26:12 [fhirsch]
kelvin notes shred, dsobhect can have unsigned items added at higher layer, can break signed items already existant
08:26:43 [csolc]
pdatta: asked about text nodes
08:26:50 [fhirsch]
item - policy in signature
08:28:10 [klanz2]
Off Topic: Can someone taking care of our mainpage, take an action to update
and add "public-xmlsec-comments@w3.org"
, the need to do this is indicated by the following comment:
08:28:16 [Geald_Edgar]
Geald_Edgar has joined #xmlsec
08:29:25 [klanz2]
Off Topic continued, maybe also mention the old lists:
08:29:25 [klanz2]
public-xmlsec-discuss@w3.org
08:29:25 [klanz2]
08:29:25 [klanz2]
w3c-ietf-xmldsig@w3.org
08:29:25 [klanz2]
08:31:08 [csolc]
bal: need to keep in mind about older libraries, how can the new format be supported by older processors
08:31:34 [brich]
q-
08:33:08 [csolc]
fh: this does duplicate a number of the xades reqs
08:33:31 [jcruella]
q+
08:33:41 [csolc]
fh: declarative policy as part of the sig?
08:33:50 [klanz2]
@tlr: shall
be forwarded to
08:33:50 [klanz2]
www-xml-canonicalization-comments@w3.org
08:33:50 [klanz2]
08:36:42 [tlr]
bal: Putting policy languages into these requirements is a can of worm.
08:36:47 [tlr]
All: yes, and in the following way
08:37:08 [csolc]
ACTION: Kalvin: Clean up proposal
08:37:08 [trackbot]
Sorry, couldn't find user - Kalvin
08:37:24 [tlr]
ACTION: Kelvin to clean up proposal
08:37:24 [trackbot]
Created ACTION-86 - Clean up proposal
[on Kelvin Yiu - due 2008-10-27].
08:37:48 [tlr]
fjh: need to clarify policy-related requirement and why we don't want to do this
08:38:41 [csolc]
bal: may need to declare what are the capabilities of the application.
08:39:19 [klanz2]
q+
08:39:38 [fhirsch]
bal: will need to declare capabilities, relevant for simple low level, or higher level apps
08:39:44 [fhirsch]
ack jcrella
08:39:48 [fhirsch]
ack jcruella
08:40:47 [fhirsch]
bal: at sig generation time declare as part of sig, that sig adhers to part of std
08:41:10 [fhirsch]
bal: verifiers can declare portions they understand
08:41:55 [fhirsch]
jcc: etsi defined language for signature policy
08:42:31 [klanz2]
@jcc do you have a link or reference ...
08:42:55 [csolc]
bal: would like to see levels for the profiles
08:42:58 [fhirsch]
bal: policy limited to statement adhere to level 0 profile, level 1 profele etc
08:43:03 [fhirsch]
s/profele/profile
08:44:05 [fhirsch]
ack klanz
08:44:33 [csolc]
klanz: public mailing lists are not easy access
08:44:38 [tlr]
ACTION: thomas to add link to comment list to public page
08:44:39 [trackbot]
Created ACTION-87 - Add link to comment list to public page [on Thomas Roessler - due 2008-10-27].
08:46:38 [klanz2]
Can you type, when you reconvene please ...
08:46:46 [Zakim]
-klanz2
08:46:51 [Zakim]
-jcruella
08:52:56 [John_Boyer]
John_Boyer has joined #xmlsec
08:56:09 [Zakim]
+John_Boyer
08:56:11 [Zakim]
-John_Boyer
08:56:11 [Zakim]
+John_Boyer
08:56:48 [Zakim]
+wellsk
09:02:43 [csolc]
csolc has joined #xmlsec
09:04:02 [csolc]
Joint Meeting with XForms (11:00 - 12:30)
09:04:05 [fhirsch3]
fhirsch3 has joined #xmlsec
09:04:16 [csolc]
Topic: Joint Meeting with XForms
09:04:26 [fhirsch3]
zakim, who is here?
09:04:26 [Zakim]
On the phone I see Executive_6, John_Boyer, wellsk
09:04:27 [Zakim]
On IRC I see fhirsch3, csolc, John_Boyer, Geald_Edgar, rdmiller, pdatta, brich, jcruella, klanz2, Zakim, RRSAgent, trackbot
09:05:42 [John_Boyer]
yugma web con session id is 229 481 091
09:05:55 [tlr]
tlr has joined #xmlsec
09:06:37 [fhirsch3]
zakim, who is here
09:06:37 [Zakim]
fhirsch3, you need to end that query with '?'
09:06:43 [fhirsch3]
zakim, who is here?
09:06:43 [Zakim]
On the phone I see Executive_6, John_Boyer, wellsk
09:06:44 [Zakim]
On IRC I see tlr, fhirsch3, csolc, John_Boyer, Geald_Edgar, rdmiller, pdatta, brich, jcruella, klanz2, Zakim, RRSAgent, trackbot
09:07:26 [bal]
bal has joined #xmlsec
09:07:50 [tlr]
zakim, who is on the phone?
09:07:50 [Zakim]
On the phone I see Executive_6, John_Boyer, wellsk
09:08:04 [tlr]
klanz2?
09:08:06 [tlr]
we're back
09:08:23 [fhirsch3]
zakim, who iis here?
09:08:23 [Zakim]
I don't understand your question, fhirsch3.
09:08:28 [fhirsch3]
zakim, who is here?
09:08:28 [Zakim]
On the phone I see Executive_6, John_Boyer, wellsk
09:08:30 [Zakim]
On IRC I see bal, tlr, fhirsch3, csolc, John_Boyer, Geald_Edgar, rdmiller, pdatta, brich, jcruella, klanz2, Zakim, RRSAgent, trackbot
09:09:23 [nick]
nick has joined #xmlsec
09:11:00 [Steeeven]
Steeeven has joined #xmlsec
09:11:23 [unl]
unl has joined #xmlsec
09:11:34 [fhirsch3]
Present+ Steven Pemberton, Ulide Lisse, Nick van den Blecken, Roland Merrick, TV Raman, Charlie Wiecha, Keith Wells, John Boyer
09:12:33 [Steeeven]
s/Bleck/Bleek
09:12:43 [unl]
s/Ulide/Ulrich
09:13:13 [csolc]
John Boyer presentor
09:13:36 [nick]
nick has joined #xmlsec
09:13:45 [Zakim]
+??P5
09:14:01 [Steeeven]
zakim, who is on the phone?
09:14:02 [Zakim]
On the phone I see Executive_6, John_Boyer, wellsk, ??P5
09:14:38 [jcruella]
zakim, ??P5 is jcruella
09:14:38 [Zakim]
+jcruella; got it
09:14:47 [kyiu]
kyiu has joined #xmlsec
09:19:57 [fhirsch3]
presentation in PDF at
09:20:44 [Zakim]
-jcruella
09:21:36 [Zakim]
+??P5
09:21:58 [fhirsch3]
zakim, ??P5 is jcruellas
09:21:58 [Zakim]
+jcruellas; got it
09:24:27 [Zakim]
+??P6
09:24:36 [klanz2]
zakim, ? is klanz2
09:24:36 [Zakim]
+klanz2; got it
09:24:40 [fhirsch3]
s/presentation in PDF/XForms security presentation in PDF/
09:26:07 [fhirsch3]
me at "what if I work offline"
09:26:12 [Steeeven]
"But what if I want to work offline"
09:26:25 [fhirsch3]
s/me at .*//
09:27:52 [fhirsch3]
johnboyer: odf two types, single standalone file or zip file with many resources
09:28:27 [fhirsch3]
odf of presentation at
09:29:28 [fhirsch3]
tlr notes zip issue related to widget signing spec
09:30:03 [fhirsch3]
raman notes xml packaging a generic issue in w3c
09:31:24 [fhirsch3]
john notes content.xml is main xml in document, enveloped signature
09:31:34 [fhirsch3]
raman asks if xml base can be used
09:32:05 [Geald_Edgar]
It sounds as though a detached signature has the potential of signing an information source that no longer exists
09:32:24 [klanz2]
What is the URI scheme for Zip Files, is there one?
09:32:48 [fhirsch3]
detached can only sign as binary opaque reference
09:33:21 [Geald_Edgar]
So XML is detached as a seperate unit witjhin the ODF package, but it is included in the same information resource?
09:33:39 [Geald_Edgar]
s/witjhin/within/
09:34:59 [klanz2]
consider
for referencing inside zip ... also
09:35:22 [Geald_Edgar]
perhaps the XML signature itself is created as a detached signature, but it is attached within the ODF file.
09:36:20 [fhirsch3]
reference refers to instance document not entire xforms environment
09:36:28 [fhirsch3]
s/reference/john boyer: reference/
09:36:48 [fhirsch3]
john boyer: using reference with no uri
09:37:46 [csolc]
john wants to sign the odf doc, and since the xml signature is part of the instance data if uri="" is used it refers to the data not the odf doc
09:38:52 [csolc]
see slides for details
09:46:25 [fhirsch3]
john boyer notes at run time separate dom for recording instance data, separate document
09:46:41 [fhirsch3]
tlr- what is base uri for instance document
09:47:33 [fhirsch3]
john boyer - expect same doc reference, signature in instance document
09:47:35 [nick]
nick has joined #xmlsec
09:47:44 [fhirsch3]
zakim, who is here?
09:47:44 [Zakim]
On the phone I see Executive_6, John_Boyer, wellsk, jcruellas, klanz2
09:47:45 [Zakim]
On IRC I see nick, kyiu, unl, Steeeven, bal, tlr, fhirsch3, csolc, John_Boyer, Geald_Edgar, rdmiller, pdatta, brich, klanz2, Zakim, RRSAgent, trackbot
09:49:46 [fhirsch3]
after run time might be serialized back with initial larger document
09:49:58 [fhirsch3]
s/after/john boyer notes after/
09:52:38 [csolc]
john - there are 3 layers, Instance data, the Model and the instance form.
09:55:00 [csolc]
john - there is a difference between the runtime of the model and the serialized version.
09:57:51 [Zakim]
+Ed_Simon
09:58:10 [csolc]
john - since the signatures are being generated at runtime - the references are relative to the containing data dom.
09:58:45 [fhirsch3]
john - separate dom for instance at run time, not serialized or incorporated until commit, ie. temporary data until accepted
10:06:38 [csolc]
raman - all information except state information is stored in the xforms model.
10:09:00 [csolc]
raman - custom functions must also be signed.
10:10:00 [csolc]
raman - there are custom libraries that can be loaded into xforms.
10:10:22 [fhirsch3]
extensions functions are full XPath
10:12:23 [fhirsch3]
john boyer application can define context for uri
10:13:53 [csolc]
john b - a reference without a uri points to the outer most document.
10:16:11 [fhirsch3]
raman at save time, save original doc plus instance data, to enable restore
10:16:29 [fhirsch3]
john boyer - eg save template and instance data
10:18:44 [csolc]
- instance data can be inline in the doc, fetched once at startup then stored inline, or saved in a remote source
10:21:41 [fhirsch3]
) defined in terms of original document, not data document...
10:21:56 [fhirsch3]
here was defined...
10:23:13 [esimon2]
esimon2 has joined #xmlsec
10:23:28 [klanz2]
maybe XProc would be good way to explain what is going on here ... ;-)
10:28:08 [csolc]
john b - issue with repeating content.
10:28:27 [Zakim]
+ +46.7.09.41.aaaa
10:29:43 [csolc]
john b - section 4.4.3.3.4 xmlsig doc
10:30:56 [csolc]
... input to the first transform should be output of the referencing
10:31:48 [fhirsch3]
zakim, who is here?
10:31:48 [Zakim]
On the phone I see Executive_6, John_Boyer, wellsk, jcruellas, klanz2, Ed_Simon, +46.7.09.41.aaaa
10:31:50 [Zakim]
On IRC I see esimon2, nick, kyiu, unl, Steeeven, bal, fhirsch3, csolc, John_Boyer, Geald_Edgar, rdmiller, pdatta, brich, klanz2, Zakim, RRSAgent, trackbot
10:32:31 [csolc]
... 4.4.3.3.1 confusion on the output of a non same document reference. does it have to be an octet stream
10:32:52 [Steeeven]
zakim, country code 46?
10:32:52 [Zakim]
I don't understand your question, Steeeven.
10:34:19 [Steeeven]
+46 is Sweden
10:34:31 [fhirsch3]
john boyer - support for uri-less reference required, possible errata.
10:34:39 [fhirsch3]
konrad - can you submit test cases?
10:35:29 [fhirsch3]
bal - cannot mandate application specific feature, but should require it to be allowed
10:35:56 [klanz2]
4.3.3.4 ... The input to the first Transform is the result of dereferencing the URI attribute of the Reference element. ... 4.3.3.1 ... If the URI attribute is omitted altogether, the receiving application is expected to know the identity of the object. For example, a lightweight data protocol might omit this attribute given the identity of the object is part of the application context. This attri
10:35:56 [klanz2]
bute may be omitted from at most one Reference in any particular SignedInfo, or Manifest. ...
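The omitted-URI rule quoted above can be illustrated with a minimal sketch, assuming only Python's standard library; the digest method and value below are placeholders, not a real signature:

```python
import xml.etree.ElementTree as ET

DS = "http://www.w3.org/2000/09/xmldsig#"
ET.register_namespace("ds", DS)

def make_reference(uri=None):
    """Build a ds:Reference. uri="" -> same-document reference (the whole
    containing document); uri=None -> omit the URI attribute entirely,
    leaving the identity of the object to the application (the XForms
    case being discussed)."""
    ref = ET.Element(f"{{{DS}}}Reference")
    if uri is not None:
        ref.set("URI", uri)
    ET.SubElement(ref, f"{{{DS}}}DigestMethod", {"Algorithm": DS + "sha1"})
    ET.SubElement(ref, f"{{{DS}}}DigestValue").text = "..."  # placeholder
    return ref

same_doc = make_reference(uri="")   # URI="" is serialized explicitly
app_defined = make_reference()      # no URI attribute at all
```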
10:36:24 [fhirsch3]
bal - what is the interoperability point
10:36:42 [csolc]
bal - could add an imlementation note
10:37:14 [fhirsch3]
klanz you are asking that null can be passed to url resolver
10:37:44 [csolc]
r/imlementation/implementation
10:38:06 [fhirsch3]
klanz - should define own uri scheme in these cases, can then separate from work-arounds
10:41:03 [csolc]
fh - may need a uri identifier for instance data
10:41:32 [csolc]
roman - could define odf:here()
10:41:43 [fhirsch3]
avoid confusion of wether in instance data context or in committed merged document, need to be explicit with explicit URI
10:41:48 [Steeeven]
s/roman/raman/
10:41:53 [fhirsch3]
s/wether/whether
10:43:01 [csolc]
klanz: maybe should use xslt functions
10:43:33 [csolc]
john b - xslt is like poison. too complicated
10:45:12 [csolc]
john b - xslt is also an optional component to xml sigs
10:46:44 [csolc]
klanz - can xinclude be used to resolve the multiple doc issue
10:47:01 [fhirsch3]
is it possible to simplify this
10:47:07 [fhirsch3]
raman notes reflection of interactivity
10:47:25 [klanz2]
q+
10:47:29 [fhirsch3]
john notes even with zip still want node set, still have here issue, even if not here function
10:49:45 [fhirsch3]
concern with complexity, want to make security simpler, sounds complicated to have separate instance and documents, then merge and lose context.
10:50:07 [fhirsch3]
john notes interactive document case works
10:51:01 [fhirsch3]
ack klanz
10:51:11 [klanz2]
q-
10:51:45 [Zakim]
-jcruellas
10:52:21 [csolc]
fh - is it possible for xforms not to use xpath in the signatures
10:52:32 [klanz2]
is it XPath 1.0?
10:52:38 [klanz2]
I'd presume so ..
10:53:31 [fhirsch3]
john offers to summarize use case in terms of instance documents and original, serialization into single document, what is process, issues
10:53:50 [fhirsch3]
also to summarize lessons from implementation and needs regarding here etc.
10:55:00 [Zakim]
-wellsk
10:55:13 [Zakim]
- +46.7.09.41.aaaa
10:55:15 [csolc]
Thanks to the XFORMS folks
10:55:23 [jcruella]
jcruella has joined #xmlsec
10:55:26 [Zakim]
-John_Boyer
10:55:35 [Zakim]
-klanz2
10:55:39 [jcruella]
sorry, had problems with my laptop
10:55:57 [fhirsch3]
zakim, who is here?
10:55:57 [Zakim]
On the phone I see Executive_6, Ed_Simon
10:55:58 [Zakim]
On IRC I see jcruella, esimon2, kyiu, bal, fhirsch3, csolc, John_Boyer, Geald_Edgar, rdmiller, pdatta, brich, klanz2, Zakim, RRSAgent, trackbot
10:56:05 [csolc]
Breaking for 1 hour lunch
10:56:14 [Zakim]
+??P0
10:56:31 [jcruella]
zakim, P0 is jcruella
10:56:31 [Zakim]
sorry, jcruella, I do not recognize a party named 'P0'
10:57:42 [Zakim]
-Ed_Simon
10:57:44 [Zakim]
-??P0
10:57:45 [Zakim]
-Executive_6
10:57:45 [Zakim]
T&S_XMLSEC()2:30AM has ended
10:57:46 [Zakim]
Attendees were Executive_6, klanz2, jcruella, John_Boyer, wellsk, jcruellas, Ed_Simon, +46.7.09.41.aaaa
12:02:30 [Zakim]
T&S_XMLSEC()2:30AM has now started
12:02:37 [Zakim]
+Ed_Simon
12:03:59 [nick]
nick has joined #xmlsec
12:04:24 [nick]
nick has joined #xmlsec
12:04:31 [nick]
nick has left #xmlsec
12:05:42 [unl]
unl has joined #xmlsec
12:06:56 [esimon2]
esimon2 has joined #xmlsec
12:07:05 [unl]
unl has left #xmlsec
12:09:12 [rdmiller]
scribenick: rdmiller
12:09:17 [csolc]
csolc has joined #xmlsec
12:09:43 [Zakim]
+[IPcaller]
12:09:58 [bal]
bal has joined #xmlsec
12:10:51 [fhirsch3]
fhirsch3 has joined #xmlsec
12:10:59 [fhirsch3]
zakim, who is here?
12:10:59 [Zakim]
On the phone I see Ed_Simon, [IPcaller]
12:11:00 [Zakim]
On IRC I see fhirsch3, bal, csolc, esimon2, jcruella, rdmiller, klanz2, Zakim, RRSAgent, trackbot
12:11:18 [jcruella]
zakim, who am I
12:11:18 [Zakim]
I don't understand 'who am I', jcruella
12:11:21 [jcruella]
zakim, who am I?
12:11:21 [Zakim]
I don't understand your question, jcruella.
12:11:22 [pdatta]
pdatta has joined #xmlsec
12:12:39 [fhirsch3]
zakim, please call xmlse
12:12:39 [Zakim]
I am sorry, fhirsch3; I do not know a number for xmlse
12:13:12 [fhirsch3]
zakim, call executive_6
12:13:12 [Zakim]
ok, fhirsch3; the call is being made
12:13:14 [Zakim]
+Executive_6
12:13:28 [fhirsch3]
zakim, who is here?
12:13:28 [Zakim]
On the phone I see Ed_Simon, [IPcaller], Executive_6
12:13:29 [Zakim]
On IRC I see pdatta, fhirsch3, bal, csolc, esimon2, jcruella, rdmiller, klanz2, Zakim, RRSAgent, trackbot
12:13:54 [fhirsch3]
zakim, IPcaller is jcc
12:13:54 [Zakim]
+jcc; got it
12:14:29 [rdmiller]
Topic: Review XForms Discussion
12:15:26 [rdmiller]
bah: We need to clarify the application specific behavior of references that are lacking URIs
12:15:36 [rdmiller]
s/bah/bal
12:16:10 [brich]
brich has joined #xmlsec
12:16:26 [rdmiller]
fhirsch3: We need to confirm that signature verification requires an XForms application
12:17:20 [fhirsch3]
s/an XForms/a running XForms/
12:17:39 [rdmiller]
fhirsch3: John from XForms to clarify the processing model and what he needs from XMLSEC to support his implementation.
12:18:38 [rdmiller]
fhirsch3: Concern that the complexity of the XForms processing model and goals seem to run counter to those of the XMLSEC WG.
12:20:02 [G_Edgar]
G_Edgar has joined #xmlsec
12:20:30 [rdmiller]
Topic: NIST Review
12:20:39 [csolc]
12:21:27 [rdmiller]
bal: Reviewed 2 documents from NIST regarding Randomized Hashing and approved hash algorithms.
12:22:11 [rdmiller]
NIST SP800-106 Randomized Hashing
12:22:48 [rdmiller]
bal: We could use the randomization of content for references only.
12:23:26 [esimon2]
Which schema? XML Signature's or XML schemas in general?
12:24:57 [fhirsch3]
randomized hashing - modification of any hash alg to add randomization, NIST defines only for sig hash, could do for Dsig hashing
12:25:02 [fhirsch3]
of input to content
12:25:13 [fhirsch3]
bal - xml signature schema
12:25:27 [fhirsch3]
currently define hash alg and any, could define element for salt
12:25:29 [fhirsch3]
optional
12:26:34 [rdmiller]
bal: We would need to update ds:SignatureMethod.
12:27:01 [fhirsch3]
group notes oaep only defined for encryption
12:30:35 [rdmiller]
RESOLUTION: Work on randomized hashing is a lower priority for the XMLSEC WG and will be deferred until there is a pressing need.
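For context, a minimal sketch of the general idea behind randomized hashing (salting the input before digesting), assuming Python's standard library; the real NIST SP 800-106 message-randomization transform is more involved, and any salt element in ds:SignatureMethod remains hypothetical:

```python
import hashlib
import os

def randomized_digest(message, salt=None):
    """Return (salt, hexdigest) where digest = SHA-256(salt || message).
    The verifier must be given the salt to recompute the digest."""
    if salt is None:
        salt = os.urandom(16)  # fresh randomization value per signature
    return salt, hashlib.sha256(salt + message).hexdigest()

salt, d1 = randomized_digest(b"signed content")
_, d2 = randomized_digest(b"signed content", salt=salt)
assert d1 == d2  # same salt -> reproducible, so verification still works
```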
12:31:37 [rdmiller]
fhirsch3: At lunch there was a discussion about releasing a 3rd addition to address addition of algorithms.
12:32:17 [rdmiller]
tlr: If it affects conformance then it will need to at least be a minor edition.
12:32:29 [rdmiller]
s/addition/edition
12:33:35 [fhirsch3]
Present+ Xu Guibao
12:33:41 [Zakim]
+??P13
12:33:44 [fhirsch3]
Xu Guibao joined as observer
12:33:48 [klanz2]
zakim, ? is klanz2
12:33:48 [Zakim]
+klanz2; got it
12:35:29 [fhirsch3]
bal notes may want to deprecate sha1 in 1.1, or not but simply introduce new algs in 1.1
12:35:37 [fhirsch3]
bal notes goal not to change namespace in 1.1
12:36:12 [rdmiller]
bal: We may want to recommend in v.next that old algorithms are not used and then deprecate them in a following version.
12:37:58 [tlr]
tlr has joined #xmlsec
12:40:58 [klanz2]
q+
12:42:02 [klanz2]
q?
12:42:15 [fhirsch3]
tlr if a future version, requires versioning, then need a new namespace is a reading of this
12:42:21 [fhirsch3]
ack klanz
12:42:55 [bal]
q+
12:42:57 [fhirsch3]
q+
12:42:58 [tlr]
q+
12:43:05 [fhirsch3]
q-
12:43:24 [fhirsch3]
ack bal
12:43:38 [klanz2]
12:43:47 [klanz2]
12:43:52 [klanz2]
12:44:59 [rdmiller]
bal: If something changes that breaks backward compatibility then it would require a new namespace.
12:45:52 [rdmiller]
tlr: Prepare a working draft for version 1.1 where we add algorithms and clarify the versioning policy.
12:46:03 [fhirsch3]
clarify versioning in wd and see if that is acceptable to constituents, including possibly sha-1 deprecation
12:46:04 [jcruella]
q+
12:46:16 [bal]
konrad, so is your point that additional algorithms have already been defined without revving the namespace?
12:46:20 [fhirsch3]
ack tlr
12:46:25 [fhirsch3]
ack jcruella
12:47:02 [fhirsch3]
jcc notes one doc of sig semantics and one for algorithms
12:48:11 [fhirsch3]
jcc this avoids need to constantly change entire doc for algs
12:48:13 [rdmiller]
TOPIC: Joint Meeting with EXI
12:48:57 [herve]
herve has joined #xmlsec
12:49:05 [youenn]
youenn has joined #xmlsec
12:49:36 [fhirsch3]
Present+ John Schneider, Carine Bournez, Daniel Peintnec, Richard Kantschke
12:49:58 [fhirsch3]
jcc, can you hear?
12:49:59 [dape]
dape has joined #xmlsec
12:50:07 [esimon2]
not well; I'm quite dependent on the IRC
12:50:28 [caribou]
caribou has joined #xmlsec
12:52:10 [fhirsch3]
exi has looked at xml security in more detail
12:52:12 [brutzman]
brutzman has joined #xmlsec
12:52:39 [klanz2]
Re, Algorithm Identifiers (Last Topic) :
12:52:53 [fhirsch3]
some future work is c14n work - use exi to improve performance
12:53:16 [fhirsch3]
reduce what needs to be preserved for verification, e.g. leverage typed values
12:53:52 [fhirsch3]
EXI has parameters , e.g. preserve comments, similar to c14n
12:54:14 [rdmiller]
To improve performance, canonicalization with EXI could require the use of some parameters.
12:54:29 [fhirsch3]
EXI encoding for encryption - need to specify that encrypted content is exi encoded, encoding attribute
12:54:47 [smullan]
smullan has joined #xmlsec
12:55:53 [rdmiller]
URIs for canonicalization algs when using EXI.
12:56:02 [fhirsch3]
possibly using exi for c14n
12:56:15 [jkangash]
jkangash has joined #xmlsec
12:56:34 [rdmiller]
EXI could provide "type aware" canonicalization to improve performance.
12:57:47 [jcruella]
q+
12:58:21 [fhirsch3]
q+
12:58:52 [fhirsch3]
tom - test cases should be considered
12:59:07 [fhirsch3]
zakim, who is here?
12:59:07 [Zakim]
On the phone I see Ed_Simon, jcc, Executive_6, klanz2
12:59:09 [Zakim]
On IRC I see, RRSAgent,
12:59:11 [Zakim]
... trackbot
12:59:17 [esimon2]
q+
13:01:22 [fhirsch3]
tom wanting to integrate xml security testing into exi testing, thinking about what is involved
13:01:36 [fhirsch3]
tom effort involves university development
13:02:13 [tlr]
university development?!
13:02:41 [tlr]
don: do signatures survive EXI round-tripping?
13:03:09 [tlr]
tlr: we do have signatures and signed documents that you could run through your tests.
13:03:14 [rkuntsch]
rkuntsch has joined #xmlsec
13:04:20 [fhirsch3]
bal - do two semantically equivalent xml docs serialize into two exi serializations?
13:04:35 [fhirsch3]
yes, when consideraing parameters
13:04:52 [fhirsch3]
s/yes/steven, yes/
13:05:06 [fhirsch3]
zakim, who is here?
13:05:06 [Zakim]
On the phone I see Ed_Simon, jcc, Executive_6, klanz2
13:05:07 [rdmiller]
s/consideraing/considering
13:05:08 [Zakim]
On IRC I see rkuntsch,,
13:05:10 [Zakim]
... RRSAgent, trackbot
13:05:46 [brutzman]
recap/summary: we have an exi test suite with a corpus of several thousand documents. will be looking to ensure we have sufficient set of encrypted and/or signed documents to properly test round-trip success and interoperability by various EXI processors.
13:06:12 [fhirsch3]
steven, have option to preserve namespace info
13:06:48 [fhirsch3]
q+
13:08:27 [magnus]
magnus has joined #xmlsec
13:08:30 [rdmiller]
tlr: XML Signature is dependent on which EXI parameters are used.
13:08:42 [pdatta]
q+
13:09:17 [tlr]
ha! Excellent news!
13:09:28 [tlr]
(I hadn't realized that EXI had done this piece of work.)
13:09:33 [rdmiller]
EXI is set to work with canonicalization and is documented in a best practices document.
13:10:00 [brutzman]
EXI Best Practices relevant to security:
13:10:25 [tlr]
john: EXI has designed things to be compatible with existing canonicalizations, and there are sets of parameters which will not break XML Security.
13:10:33 [brutzman]
EXI Impacts relevant to security:
13:10:38 [jschneid2]
jschneid2 has joined #xmlsec
13:10:42 [tlr]
... we're now talking about forward-looking work that would permit use of EXI with Signature.
13:12:56 [fhirsch3]
13:13:47 [kyiu]
kyiu has joined #xmlsec
13:13:48 [rdmiller]
Ed Simon looked at the EXI document and not the EXI Best Practices
13:14:15 [rdmiller]
fhirsch3: We should review EXI Best Practices.
13:14:15 [fhirsch3]
ack jcc
13:14:24 [fhirsch3]
ack jcruela
13:14:30 [fhirsch3]
ack jcruellas
13:14:34 [fhirsch3]
ack jcruella
13:16:03 [rdmiller]
jcruella: EXI is not canonicalization, but serialization of canonicalization.
13:16:07 [fhirsch3]
best practice, how to use exi with existing c14n algs, using preserve algs
13:16:24 [fhirsch3]
s/preserve algs/preserve parameters
13:16:42 [fhirsch3]
exi could be used as c14n alg in future, topic for joint discussion
13:17:18 [fhirsch3]
s/exi/john wzi
13:17:26 [fhirsch3]
a/wzi/exi
13:18:04 [fhirsch3]
zakim, who is making noise?
13:18:14 [Zakim]
fhirsch3, listening for 10 seconds I heard sound from the following: jcc (49%), Executive_6 (77%)
13:18:48 [fhirsch3]
john, preserving more in exi makes it larger
13:19:22 [fhirsch3]
john, eg no need to preserve lexical values, gaining efficiency
13:20:01 [klanz2]
q+
13:20:36 [fhirsch3]
jcc should signature cover tables
13:20:56 [fhirsch3]
john, tables are implicit
13:21:11 [fhirsch3]
john, part of stream
13:21:24 [brutzman]
EXI Format 1.0, 6.3 Fidelity Options
lists Preserve.comments Preserve.pis Preserve.dtd Preserve.prefixes Preserve.lexicalValues
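The fidelity options listed above can be captured as a configuration sketch; the `signature_safe` heuristic is illustrative only, not anything the EXI or XMLSEC specifications define:

```python
# Fidelity options from EXI Format 1.0, section 6.3; all off by default.
EXI_FIDELITY_DEFAULTS = {
    "Preserve.comments": False,
    "Preserve.pis": False,
    "Preserve.dtd": False,
    "Preserve.prefixes": False,
    "Preserve.lexicalValues": False,
}

def signature_safe(options):
    """Illustrative heuristic: a signature over the original infoset is
    only likely to survive round-tripping if options preserving the
    signed details (prefixes, lexical values) are switched on."""
    return bool(options.get("Preserve.prefixes")
                and options.get("Preserve.lexicalValues"))

# With the defaults, signed content would not round-trip intact.
assert not signature_safe(EXI_FIDELITY_DEFAULTS)
```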
13:21:28 [fhirsch3]
ack fhirsch
13:22:32 [rdmiller]
fhirsch3: What is the performance hit of using EXI for canonicalization because of having to use the EXI parser?
13:23:38 [klanz2]
Is there something like in memory EXI as well that expands to APIs like DOM or XPATH on the fly?
13:25:27 [fhirsch3]
question is whether the startup and shutdown time for EXI is too expensive when only performing single sign or verify...
13:25:45 [fhirsch3]
answer is that schema load can take time, but exi is also able to save internal compiled form
13:25:48 [rdmiller]
john: Performance would be based on initial load and number of schemas used.
13:25:54 [fhirsch3]
bal asks about memory footprint
13:26:13 [fhirsch3]
john, string tables but can be limited
13:26:46 [fhirsch3]
john, serialize xml doc or set of xml fragments, which are individual elements + attributes
13:28:23 [fhirsch3]
bal cannot replace c14n with it directly due to input requirement, not a nodeset as input
13:28:24 [klanz2]
Ordered NodeSet ....
13:29:06 [fhirsch3]
bal possible issue if unparented nodeset allowed, would need to be considered
13:29:28 [fhirsch3]
ack esimon
13:30:23 [esimon2]
13:30:27 [fhirsch3]
ed asks re importance of native formatted signatures
13:31:41 [fhirsch3]
ed, e.g. sign exi format without converting to xml for signing
13:32:53 [G_Edgar]
G_Edgar has joined #xmlsec
13:34:10 [G_Edgar]
I am wondering what might be the impact of "pluggable codecs" on this? I wonder since "pluggable codecs" are negotiated.
13:34:41 [brutzman]
XBC was predecessor of EXI group. XML Binary Characterization Use Cases
13:34:56 [fhirsch3]
ack pdatta
13:35:23 [fhirsch3]
pratik - does exi preserve ordering
13:35:27 [rdmiller]
ACTION: esimon to look at the EXI use cases.
13:35:27 [trackbot]
Sorry, couldn't find user - esimon
13:35:39 [esimon2]
try esimon2
13:35:42 [fhirsch3]
john, always preserves ordering of elements, not attributes
13:36:13 [rdmiller]
ACTION: esimon2 to lookk at the EXI use cases
13:36:13 [trackbot]
Created ACTION-88 - Lookk at the EXI use cases [on Ed Simon - due 2008-10-27].
13:36:53 [fhirsch3]
john, attribute order can be preserved as part of serialization, might need switch in EXI for writing out attribute order
13:37:13 [G_Edgar]
Are attribute orders part of any Codec?
13:37:14 [fhirsch3]
action 88 is on test cases
13:37:14 [trackbot]
Sorry, couldn't find user - 88
13:37:44 [G_Edgar]
+q
13:37:50 [fhirsch3]
john, also keep track of encoding options
13:38:14 [fhirsch3]
pratik canonicalization often last step to digest, hence space benefits may not be valuable
13:38:27 [anil]
anil has joined #xmlsec
13:39:10 [fhirsch3]
john, can serialize direct from model to exi, not necessarily via xml
13:39:25 [fhirsch3]
... hence faster
13:40:15 [fhirsch3]
ack klanz
13:40:36 [fhirsch3]
exi implementation can offer different modes, e.g. from DOM, SAX etc. open source exists
13:41:55 [rdmiller]
john: EXI is not designed to be an in memory representation but specifici parts of the EXI stream can be referenced using self contained sub-trees.
13:41:55 [fhirsch3]
john, exi is interchange format, can have self-contained subtrees, meeting requirements for random access
13:42:16 [rdmiller]
s/specifici/specific
13:42:26 [fhirsch3]
q?
13:43:14 [rdmiller]
john: If not preserving namespace declarations EXI stores the qualified names directly which is the full identity of the URI.
13:46:34 [fhirsch3]
ack G_Edgar
13:47:20 [fhirsch3]
disallow pluggable codecs
13:47:41 [klanz2]
Could we have some pointers to implementations/Implementation Reports for EXI
13:47:52 [klanz2]
I'd be interested finding one that offers a DOM API
13:49:04 [klanz2]
13:49:21 [Zakim]
-jcc
13:49:32 [Zakim]
-klanz2
14:00:06 [Zakim]
+ +1.781.515.aaaa
14:00:22 [magnus]
zakim, magnus is aaaa
14:00:22 [Zakim]
sorry, magnus, I do not recognize a party named 'magnus'
14:01:12 [klanz2]
zakim, aaaa is magnus
14:01:12 [Zakim]
+magnus; got it
14:09:30 [fhirsch3]
zakim, who is here?
14:09:30 [Zakim]
On the phone I see Ed_Simon, Executive_6, magnus
14:09:31 [Zakim]
On IRC I see anil, G_Edgar, magnus, jkangash, smullan, brutzman, caribou, dape, herve, brich, pdatta, fhirsch3, csolc, esimon2, jcruella, rdmiller, klanz2, Zakim, RRSAgent,
14:09:34 [Zakim]
... trackbot
14:10:18 [bal]
bal has joined #xmlsec
14:11:18 [esimon2]
I'm back.
14:11:34 [tlr]
tlr has joined #xmlsec
14:13:04 [rdmiller2]
rdmiller2 has joined #xmlsec
14:13:45 [Zakim]
+??P15
14:13:56 [jcruella]
juan carlos
14:14:28 [jcruella]
zakim, P15 caller is jcruella
14:14:28 [Zakim]
I don't understand 'P15 caller is jcruella', jcruella
14:14:43 [caribou]
me
14:14:55 [jcruella]
zakim, P15 caller is jcc
14:14:55 [Zakim]
I don't understand 'P15 caller is jcc', jcruella
14:15:05 [bal]
zakim, +P15 is jcc
14:15:05 [Zakim]
sorry, bal, I do not recognize a party named '+P15'
14:15:11 [jcruella]
zakim, +P15 caller is jcc
14:15:11 [Zakim]
I don't understand '+P15 caller is jcc', jcruella
14:15:12 [bal]
zakim, P15 is jcc
14:15:13 [Zakim]
sorry, bal, I do not recognize a party named 'P15'
14:15:28 [jcruella]
zakim, +P15 caller is jcruella
14:15:28 [Zakim]
I don't understand '+P15 caller is jcruella', jcruella
14:15:50 [bal]
zakim, ??P15 is jcc
14:15:50 [Zakim]
+jcc; got it
14:16:11 [jcruella]
thanks!!
14:18:35 [rkuntsch]
rkuntsch has joined #xmlsec
14:19:28 [jkangash]
XBC use cases:
14:19:47 [Zakim]
+??P16
14:19:49 [rdmiller]
john: Use cases are the XBC use cases and are on the EXI webpage as part of the testing framework.
14:19:50 [klanz2]
zakim, ? is klanz 2
14:19:50 [Zakim]
I don't understand '? is klanz 2', klanz2
14:20:04 [klanz2]
zakim, ? is klanz2
14:20:04 [Zakim]
+klanz2; got it
14:20:08 [csolc]
note: for xmlsec to use EXI as a canonicalization alg, EXI would have to add as part of the spec a rule on what order attributes are written out.
14:21:30 [rdmiller]
fhirsch3: We may want to think about using EXI for canonicalization and how it may improve XML Signature performance.
14:25:06 [rdmiller]
fhirsch3: Case number 1 is to increase XMLSec performance for instances that are not aware of EXI.
14:26:43 [youenn]
youenn has joined #xmlsec
14:26:44 [rdmiller]
fhirsch3: Case number 2 is to improve XMLSec within EXI.
14:27:14 [esimon2]
Ideally, want to EXI-encode doc before encrypting.
14:28:00 [anil]
anil has left #xmlsec
14:28:05 [klanz2]
Re Previous Discussion: There might be similarities between "transform primitives" and what EXI calls the things you do not care about ...
14:28:41 [rdmiller]
john: XML Enc may not be able to take advantage of the performance increase of EXI without significant pain.
14:28:58 [Zakim]
-Executive_6
14:29:10 [magnus]
Did we just lose the conference bridge?
14:29:11 [klanz2]
you lost the connection
14:29:14 [esimon2]
the phone line seems dead
14:29:27 [jcruella]
I also lost connection
14:29:29 [fhirsch3]
zakim, call executive_6
14:29:29 [Zakim]
ok, fhirsch3; the call is being made
14:29:30 [Zakim]
+Executive_6
14:29:42 [jcruella]
OK... it works again
14:31:13 [klanz2]
[s1] <EncryptedData xmlns='
'
14:31:13 [klanz2]
Type='
'/>
14:31:13 [klanz2]
[s2] <EncryptionMethod
14:31:13 [klanz2]
Algorithm='
'/>
14:31:13 [klanz2]
[s3] <ds:KeyInfo xmlns:ds='
'>
14:31:14 [klanz2]
[s4] <ds:KeyName>John Smith</ds:KeyName>
14:31:16 [klanz2]
[s5] </ds:KeyInfo>
14:31:18 [klanz2]
[s6] <CipherData><CipherValue>DEADBEEF</CipherValue></CipherData>
14:31:20 [klanz2]
[s7] </EncryptedData>
14:31:27 [klanz2]
Maybe use another Algorith Identifier
14:31:42 [klanz2]
s/Algorith/Algorithm/
14:32:08 [klanz2]
14:32:14 [klanz2]
or similar
14:33:29 [fhirsch3]
consideration of using mimetype attribute
14:33:41 [klanz2]
14:34:05 [fhirsch3]
note two areas, 1st use of exi to improve xml security, here for c14n in signature worth consideration
14:34:19 [fhirsch3]
second, integration with exi tighter
14:34:21 [anil]
anil has joined #xmlsec
14:34:49 [fhirsch3]
main pain point in exi is encryption due to size of cipherdata, from xml, here exi first then encryptoin would help
14:35:01 [fhirsch3]
mimetype
14:35:22 [rdmiller]
s/encryptoin/encryption
14:36:05 [rdmiller]
john: EXI could possibly be used with XML Enc as it is with a minor tweak to identify the encrypted data as EXI.
14:37:28 [esimon2]
q+
14:38:14 [brutzman]
brutzman has joined #xmlsec
14:38:41 [fhirsch3]
ack esimon
14:39:50 [rdmiller]
john: Mapping from XML for XML encryption to EXI is relatively straight forward.
14:41:12 [rdmiller]
john: the work to allow EXI as a canonicalization method should benefit both the XMLSEC and EXI WGs.
14:42:20 [rdmiller]
bal: Supporting XML Enc within EXI will require a change to XML Enc, ref section 4.2.
14:44:11 [rdmiller]
fhirsch3: We understand how to support EXI for XML Enc, but need to be mindful of interoperability.
14:45:06 [rdmiller]
fhirsch3: We also need to work the W3C Rec process.
14:45:49 [rdmiller]
john: No current pressing need for EXI from the XMLSEC WG.
14:46:05 [rdmiller]
pdatta: We cannot use a MIME type directly.
14:46:35 [rdmiller]
fhirsch3: We were discussing using a new type element.
14:47:06 [magnus]
queue+
14:47:40 [pdatta]
bal: EXI could define a new type EXIelement
14:48:03 [klanz2]
14:48:05 [pdatta]
bal: this can be done outside XML Encryption spec
14:48:13 [klanz2]
<attribute name='Type' type='anyURI' use='optional'/>
14:48:14 [klanz2]
<attribute name='MimeType' type='string' use='optional'/>
14:48:14 [klanz2]
<attribute name='Encoding' type='anyURI' use='optional'/>
14:48:59 [fhirsch3]
ack magnus
14:49:31 [klanz2]
q+
14:49:53 [fhirsch3]
discussion, use type attribute, uri defined by EXI team and processing rules
14:49:56 [fhirsch3]
ack klanz
14:49:56 [pdatta]
jakko: does EXI need both EXIelement and EXIContent? probably not, because EXI probably does not support mixed content, so only EXIelement is ok
14:50:10 [rdmiller]
fhirsch3: EXI should define the URI and processing rules for XML Enc support.
14:50:41 [brutzman]
wondering, where is the test/examples corpus for XMLSEC mentioned earlier today?
14:50:50 [klanz2]
Process decrypted data if Type is unspecified or is not 'element' or element 'content'.
14:50:50 [klanz2]
2. Note, this step includes processing data decrypted from an EncryptedKey. The cleartext octet sequence represents a key value and is used by the application in decrypting other EncryptedType element(s).
14:51:00 [fhirsch3]
in this case EXI) to interpret
14:51:38 [fhirsch3]
if not element or elementcontent then exi can interpret
14:52:26 [fhirsch3]
bal exi takes care of decryption, into dom then exi
14:52:36 [pdatta]
bal: XML encryption spec says that if type is not element or content, then hand it back to application, is EXI the application ?
14:53:14 [rdmiller]
fhirsch3: Using EXI for canonicalization will require further work outside of this meeting.
14:55:07 [pdatta]
john: three things a) using EXI for canonicalization, b) define new algorithm URI for EXI canonicalization, c) new type for Encryption EXIelement
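Option c) in this list can be sketched as follows, assuming Python's standard library; the EXI type URI below is hypothetical, since no such identifier had been defined at the time of this discussion:

```python
import xml.etree.ElementTree as ET

XENC = "http://www.w3.org/2001/04/xmlenc#"
EXI_TYPE = "http://example.org/exi#body"  # hypothetical EXI type identifier

# EncryptedData whose Type says the plaintext is EXI, not element or
# content; per XML Encryption section 4.2 the decryptor then hands the
# cleartext octets back to the application (here, an EXI processor).
enc = ET.Element(f"{{{XENC}}}EncryptedData", {"Type": EXI_TYPE})
ET.SubElement(enc, f"{{{XENC}}}EncryptionMethod",
              {"Algorithm": XENC + "aes128-cbc"})
cipher = ET.SubElement(enc, f"{{{XENC}}}CipherData")
ET.SubElement(cipher, f"{{{XENC}}}CipherValue").text = "DEADBEEF"
```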
14:56:01 [rdmiller]
fhirsch3: Performance measurements regarding the use of EXI for canonicalization would be helpful.
14:57:04 [rdmiller]
EXI does have a test framework for measuring compression and decompression that is a Java based framework.
14:57:18 [rdmiller]
It can measure both Java and C++
14:58:21 [tlr]
ACTION: thomas to update homepage with information test suites
14:58:21 [trackbot]
Created ACTION-89 - Update homepage with information test suites [on Thomas Roessler - due 2008-10-27].
14:59:38 [brutzman]
The EXI test corpus is online at
15:01:08 [brutzman]
The EXI test corpus is hosted at Naval Postgraduate School in Monterey
15:01:22 [rdmiller]
fhirsch3: It may make sense to have a joint EXI XMLSEC session at the next XMLSEC F2F (13-14 January 2009).
15:01:53 [brutzman]
The EXI test corpus is based on Japex
- "Japex is a simple yet powerful tool to write Java-based micro-benchmarks."
15:03:47 [pdatta]
john: In the case where the fidelity is not important - e.g. in web services an EXI-based canonicalization will be advantageous
15:04:31 [rdmiller]
fhirsch3: What is the benefit of EXI users to use EXI canonicalization?
15:04:42 [pdatta]
bal: in web services fidelity is important - shred and reconstruct use cases
15:04:56 [rdmiller]
john: We have some information based on a customer experiment that was done in 2006.
15:07:25 [rdmiller]
fhirsch3: Do the benefits of using EXI for canonicalization outweigh the costs for adding everything needed to process EXI?
15:08:38 [pdatta]
fhirsch3: Using EXI for canonicalization adds more dependent libraries - need to evaluate this
15:11:29 [jkangash]
jkangash has left #xmlsec
15:13:03 [dape]
dape has joined #xmlsec
15:13:04 [brutzman]
bye
15:13:27 [dape]
dape has left #xmlsec
15:14:04 [youenn]
youenn has joined #xmlsec
15:14:29 [klanz2]
Please find answer to Sue Hoylen's Question:
15:18:23 [rdmiller]
TOPIC: Hoylen Response
15:18:31 [klanz2]
15:21:39 [caribou]
caribou has left #xmlsec
15:21:41 [klanz2]
15:22:25 [rdmiller]
fhirsch3: The response looks reasonable.
15:23:13 [rdmiller]
RESOLUTION: Konrad's response to Sue Hoylen is fine and Konrad will send it.
15:25:32 [rdmiller]
RESOLUTION: Add Hal's Web Services info into the requirements doc.
15:26:36 [rdmiller]
ACTION: kyiu to provide a draft for the requirements document of the simple signing requirements.
15:26:37 [trackbot]
Created ACTION-90 - Provide a draft for the requirements document of the simple signing requirements. [on Kelvin Yiu - due 2008-10-27].
15:27:47 [rdmiller]
ACTION: jcruella to provide a draft for the requirements document for long term signatures.
15:27:47 [trackbot]
Created ACTION-91 - Provide a draft for the requirements document for long term signatures. [on Juan Carlos Cruellas - due 2008-10-27].
15:33:54 [rdmiller]
TOPIC: Web Apps Prep
15:33:55 [tlr]
15:34:53 [herve]
herve has left #xmlsec
15:35:04 [rdmiller]
tlr: WebApps is writing a profile of XML Signature for signing widgets.
15:36:53 [rdmiller]
tlr: WebApps want to know what set of algorithms should be mandatory?
15:38:32 [klanz2]
15:38:54 [klanz2]
15:39:28 [klanz2]
15:40:11 [klanz2]
RSA-SHA256
15:40:11 [klanz2]
15:40:31 [klanz2]
15:40:34 [klanz2]
ECDSA
15:43:03 [klanz2]
RFC 4051 is PROPOSED STANDARD in
...
15:43:23 [fhirsch3]
15:43:35 [rdmiller]
ACTION: kyiu to make a proposal for Issue 59.
15:43:35 [trackbot]
Created ACTION-92 - Make a proposal for Issue 59. [on Kelvin Yiu - due 2008-10-27].
15:45:03 [klanz2]
15:45:11 [magnus]
For HMAC, there are also some identifiers in RFC 4231
15:46:41 [klanz2]
15:46:49 [klanz2]
that is the expired draft ...
15:47:34 [tlr]
I propose seeking review by the IETF security directorate.
15:48:25 [fhirsch3]
q+
15:48:50 [klanz2]
We MUST not forget about this one that was added to the expired draft ...
15:48:50 [klanz2]
15:48:50 [klanz2]
15:49:21 [rdmiller]
pdatta: I recommend adding a table for recommendations regarding bit strength.
15:49:45 [rdmiller]
bal: I recommend not doing that and pointing to the relevant NIST doc.
15:50:07 [klanz2]
can we make sure all the URIs and references we have here in the minutes, are revisited by the person taking the action of collecting this stuff
15:51:55 [rdmiller]
4051 covers all of the algorithms that are not covered elsewhere, but does not point to the ones that are covered.
15:51:59 [fhirsch3]
summary, can answer widgets re alg identifiers using sha256 uri from encryption for reference hashing and rsa-sha256 from 4051
15:53:10 [rdmiller]
bal: Who will implement the checks for WebApps?
15:55:28 [rdmiller]
TOPIC: Action Review
15:56:13 [rdmiller]
fhirsch3: All actions items can be closed.
15:56:34 [klanz2]
bye everyone ...
15:56:44 [magnus]
q
15:56:52 [rdmiller]
Recessing until tomorrow morning.
15:57:01 [Zakim]
-magnus
15:57:30 [esimon2]
bye
15:57:45 [Zakim]
-Ed_Simon
15:57:50 [jcruella]
bye have a nice dinner !!
15:58:03 [Zakim]
-jcc
15:58:05 [klanz2]
bye every one
15:58:13 [Zakim]
-klanz2
15:58:26 [rdmiller]
Zakim, list participants
15:58:26 [Zakim]
As of this point the attendees have been Ed_Simon, Executive_6, jcc, klanz2, +1.781.515.aaaa, magnus
15:58:33 [fhirsch3]
recess until tomorrow, thank you
15:58:35 [rdmiller]
RRSAgent, make log member
15:58:52 [rdmiller]
RRSAgent, generate minutes
15:58:52 [RRSAgent]
I have made the request to generate
rdmiller
15:59:06 [rdmiller]
Zakim, bye
15:59:06 [Zakim]
leaving. As of this point the attendees were Ed_Simon, Executive_6, jcc, klanz2, +1.781.515.aaaa, magnus
15:59:06 [Zakim]
Zakim has left #xmlsec
16:03:03 [fhirsch3]
Present+ Jaakko Kangasharju, Taki Kamiya, Bede Mccall, Youenn Fabuet, Herve Ruellan, Don Brutzman
16:03:53 [fhirsch3]
Present+ John Boyer, Steven Pemberton, Ultide Lisse, Nick Van den Blecken, Roland Merrick, TV Raman, Charlie Wiecha
16:04:16 [fhirsch3]
Present+ Keith Wells
16:04:42 [fhirsch3]
observers included Bede, Youenn, Herve, Xu
16:05:09 [fhirsch3]
RRSAgenda, generates minutes
16:05:25 [fhirsch3]
RRSAgent, generate minutes
16:05:25 [RRSAgent]
I have made the request to generate
fhirsch3
16:08:26 [bal]
bal has joined #xmlsec
http://www.w3.org/2008/10/20-xmlsec-irc
C++ Programming
What is a palindrome numbers?
Palindrome numbers are numbers that remain the same when their digits are reversed. Think about the number 1221: if we reverse it, the value is still 1221. Such numbers are called palindrome numbers. Some palindrome numbers are 11, 55, 121, 363, and 25652.
In this C++ programming article we will look at a C++ program for palindrome numbers. We have already discussed palindrome numbers in C programming. Now, let’s try to understand the palindrome number program in C++.
C++ program to check palindrome numbers
Let’s see the algorithm we will apply here to check whether a number is a palindrome or not.
- Take the number from the user and store it in a variable.
- Reverse the number and compare it with the given number.
- If they are the same, print that the number is a palindrome.
- Otherwise, print that the number is not a palindrome.
Now, let’s see the C++ source code below:
// C++ program to check whether a number is a palindrome
#include <iostream>
using namespace std;

int main() {
    int main_num, temp_num, rem, sum = 0;
    cout << "Enter the number here : ";
    cin >> main_num;
    temp_num = main_num;
    while (main_num > 0) {
        rem = main_num % 10;    // extract the last digit
        sum = (sum * 10) + rem; // append it to the reversed number
        main_num /= 10;         // drop the last digit
    }
    if (temp_num == sum) {
        cout << "\n" << temp_num << " is a palindrome number." << endl;
    } else {
        cout << "\n" << temp_num << " is not a palindrome number." << endl;
    }
    return 0;
}
Output of palindrome number program:
https://worldtechjournal.com/cpp-programming/print-palindrome-numbers-in-cpp/
Oct 14, 2020 11:50 AM|Idi_idi|LINK
I have a VS 2017 Web Application project.
When the project is copied to my local C drive and I debug it using IIS Express, it runs fine and everything is all well. When I publish the site, it also works fine.
However, when I copy the project to a shared drive on the network, it fails to debug, with an error in the web.config which references my AccountProfile class, which I use to replace Profile since it isn't available in Web Application projects.
<profile inherits="AccountProfile">
    <providers>
        <clear />
        <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="LocalSqlServer" />
    </providers>
</profile>
Here is my AccountProfile class in the AppCode folder
using System;
using System.Web;
using System.Web.Profile;

public class AccountProfile : ProfileBase
{
    static public AccountProfile CurrentUser
    {
        get
        {
            return (AccountProfile)Create(HttpContext.Current.User.Identity.Name);
        }
    }

    [SettingsAllowAnonymous(false)]
    public string MyStaffCode
    {
        get { return (string)base["MyStaffCode"]; }
        set
        {
            base["MyStaffCode"] = value;
            Save();
        }
    }
}
The error I get is: Compiler Error Message: CS0246: The type or namespace name 'AccountProfile' could not be found (are you missing a using directive or an assembly reference?)
Any ideas?
Oct 15, 2020 06:27 AM|Sean Fang|LINK
Hi Idi_idi,
Since you mentioned the AppCode folder, I suspect that the problem is that your classes are not compiled.
If your application is a Web Application project rather than a Web Site project, the code files should not be in the App_Code folder.
VS defaults the namespace for the class to ProjectName.App_Code. In this case Visual Studio defaulted the Build Action to Content, even though the file was .cs code. If you change the Build Action to Compile, you might get past VS IntelliSense, but you might still get "object defined twice" errors on deployment.
Add another folder inside the project and put the file in.
Hope this helps.
Best regards,
Sean
Oct 15, 2020 08:55 AM|Idi_idi|LINK
Hi Sean,
Thanks for your reply.
However the code is not in the "App_Code" folder, it is in the "AppCode" folder. I have renamed the folder to "MyCode" and still get the same error.
Also, the Build Action is set to "Compile" for the AccountProfile class.
Oct 15, 2020 09:54 AM|PatriceSc|LINK
Hi,
And in which namespace is your class? In short, you are now trying to publish source code rather than binaries? Could the problem be a chicken-and-egg situation, where the web.config depends on compiling that very source code?
If you confirm this intent, I would still try to move the code on which web.config depends into its own project and DLL.
Oct 15, 2020 10:01 AM|Idi_idi|LINK
Currently the classes are not in a namespace. I have tried putting the class in a namespace and adding the namespace in the web.config as:
<profile inherits="NamespaceName.AccountProfile">
but still same error. The bit that is throwing me is it works fine when copied to Local C drive.
Oct 15, 2020 03:59 PM|PatriceSc|LINK
Taking things the other way round, the expected approach is to publish your application to a network folder (the target may change, but when publishing from VS on your own workstation it is expected that you will use the "Publish" menu option).
Strictly speaking, the error means the type is not found, so I guess that your manual copy is omitting the binary file that includes this type. Or is your intent to publish the source code on the web server and have your app compiled there? Could you clarify?
Edit: or you want to publish from the command line? For example you could try
Oct 16, 2020 03:28 AM|Sean Fang|LINK
Hi Idi_idi,
One way we could adopt to try to solve the problem is to
Also, you could check this document to see if there is a cause that is similar to your case:
Best regards,
Sean
Oct 16, 2020 10:36 AM|Idi_idi|LINK
Hi Sean. I have tried the above with no luck. I have also tried creating a brand new test project and testing and it still has the same issue. Works on my local C drive but not when moved to a network drive.
Oct 16, 2020 11:33 AM|PatriceSc|LINK
As you have an AppCode folder (not App_Code though), could it be that you are trying to handle your "Web Application Project" as a "Web Site Project"? See for example for differences (edit: and in particular regarding compilation)
If "publish" works, let's maybe start from the beginning: which benefit are you trying to get by "moving" the site (i.e. do you copy just the source code files?)
Oct 16, 2020 02:36 PM|Idi_idi|LINK
It was previously a web site project which I changed over to a web application project, hence why I had App_Code folder. I renamed it to AppCode (or could be anything else) to comply with the Web Application structure.
The site is published to a Server and runs fine from there. The VS application is currently on a shared network drive. When I need to make some changes I copy the project locally to my C drive, make the change, publish to the server, and then copy the new code back to the shared drive. This works fine most of the time. However the project is quite large and takes a few minutes to copy over each time. So when making small changes, such as a small bug fix, I would like to change the code directly on the shared drive, debug and make sure it works, and then publish to server. So basically remove the step of having to copy to C drive. But it doesn't debug when running from the shared drive.
Oct 20, 2020 08:09 AM|Sean Fang|LINK
Hi Idi_idi,
What kind of access do you have on that shared drive?
One direction that we could narrow down the problem is to check the access on the shared drive.
Possibly the compiling process stops when the compiled file cannot be written to the shared drive.
Best regards,
Sean
Oct 22, 2020 07:21 AM|Sean Fang|LINK
Hi Idi_idi,
Sorry that we might need more information.
Just a reminder: I think you should use a proper source control system, for example Azure DevOps or SVN, which will guarantee security and avoid these kinds of problems. That way, you don't need to copy the whole project from the shared drive and copy it back after modification.
Best regards,
Sean
12 replies
Last post Oct 22, 2020 07:21 AM by Sean Fang
https://forums.asp.net/t/2171484.aspx?Web+Application+not+working+when+copied+to+another+drive
In the previous lesson on Composition, we noted that object composition is the process of creating complex objects from simpler ones. In an aggregation, by contrast, teacher is created independently of dept and then passed into dept’s constructor. When dept is destroyed, the m_teacher pointer is destroyed, but the teacher itself is not deleted, so it still exists until it is independently destroyed later; that is left up to an external party to do.
Quiz time
1) Would you be more likely to implement the following as a composition or an aggregation?
1a) A ball that has a color
1b) An employer that is employing multiple people
1c) The departments in a university
1d) Your age
1e) A bag of marbles
2) Update the Teacher/Dept example so the Dept can handle multiple Teachers. The following code should execute:
This should print:
Department: Bob Frank Beth
Bob still exists!
Frank still exists!
Beth still exists!
Hint: Use a std::vector to hold the teachers. Use the std::vector::push_back() to add Teachers. Use the std::vector::size() to get the length of the std::vector for printing.
Hi Alex !
I have 2 code as below:
#1
//----------------------------
#2
//----------------------------
The #1 build have error:
Error C4700 uninitialized local variable ‘test’ used Test e:\cpp\learnc\learnc\test\source.cpp 20
The #2 run as well.
Can you explain why ?
Thank you.
In case 1, the compiler is able to detect that test has not been initialized. In case 2, the compiler is unable to detect that test hasn’t been initialized.
Both are incorrect, even though the second one runs.
I’m running Visual Studio 2017. That means it depends on the compiler.
Thanks you
Yes, modern compilers will try to detect when you’ve done certain things that you should not be doing, like using the value of an uninitialized variable. Sometimes they are able to do a good job of this. Other times, not so much.
Hello, I didn’t see your hint so i did your second quiz task with pointer of pointers, can I ask you a question: Are there any memory leaks in this code? Thanks 🙂 Your tutorials on C++ are the best I have ever seen!
From inspection I don’t see any obvious leaks, but code like this (that uses pointers to pointers) is awfully hard to read.
Thanks for the answer!
I have finally made the program work, and I found my code takes a different approach from the answer provided. One point to note here is that I had to make getName() const, or I get an error on the line (out << ref.getName() << " ";) saying "member function getName not viable: ‘this’ argument has type ‘const Teacher’ but function is not marked…".
Any comment?
Great tutorial! Thanks so much!
#include <string>
#include <iostream>
#include <vector>
class Teacher
{
private:
std::string m_name;
public:
Teacher(std::string name)
: m_name(name)
{
}
std::string getName() const { return m_name; }
};
class Department
{
private:
std::vector<Teacher> m_teacher; // this dept can hold any number of teachers
public:
void add(Teacher *teacher){
m_teacher.push_back(*teacher);
}
friend std::ostream& operator<< (std::ostream &out, const Department &dept) {
out << "Department: ";
for (auto &ref: dept.m_teacher)
out << ref.getName() << " ";
out << "\n";
return out;
}
};
int main()
{
// Create a teacher outside the scope of the Department
Teacher *t1 = new Teacher("Bob"); // create a teacher
Teacher *t2 = new Teacher("Frank");
Teacher *t3 = new Teacher("Beth");
{// Create a department and use the constructor parameter to pass the teacher to it.
Department dept;// create an empty Department
dept.add(t1);
dept.add(t2);
dept.add(t3);
std::cout << dept;
} // dept goes out of scope here and is destroyed
// Teacher still exists here because dept did not delete m_teacher
std::cout << t1->getName() << " still exists!\n";
std::cout << t2->getName() << " still exists!\n";
std::cout << t3->getName() << " still exists!\n";
delete t1;
delete t2;
delete t3;
return 0;
}
In my code I had written an overloaded operator<< for both Teacher and Department because I didn’t setup my m_teacher vector with a pointer data type. My solution still worked but when I saw your solution I went on a few hour look at pointers and reducing redundancy in my code. Any day where I learn more about things I thought I understood is a good day! So thank you again for these great tutorials!
Hello Alex,,,
i try to understand the last question,it’s make me little confuse, i don’t know deference between
" std::vector<Teacher*> m_teacher " and " std::vector<Teacher>*m_teacher; ".
Can you explain me ?Thnks for the answer
std::vector<Teacher*> m_teacher declares a std::vector containing pointers to Teacher objects. This is a common way to implement an aggregation or association of Teachers.
std::vector<Teacher> *m_teacher declares a pointer to a std::vector containing Teachers. I’m struggling to think of a case where you’d ever do this.
Thanks Alex… i learn more and i understand now…
Hello, you should include iostream in the above example.
You are correct. Thanks for pointing that out.
Consider The Following Snippet :-
It Gives Me An Error :-
Can You Tell Me, Why?
Also, It works perfectly if I remove the const keyword !
Thanks in Advance ! Reply ASAP.
temp is a const Department, but you’re trying to call member function getName(), which is non-const. Instead of making temp non-const, you should make member function getName() const.
Oh yes ! So Foolish !
Sorry for Disturbing !
Tysm 🙂
Hi Alex!
In his code he has std::vector<Teacher> m_TrackTeachers; without a pointer, and in yours you have Teacher*. What makes it different is that when he adds he uses m_TrackTeachers.push_back(*temp); and you don’t dereference temp. It’s just something I really have a problem understanding, and the only thing that is different is std::vector<Teacher> m_TrackTeachers;
What is this called that I need to read about to understand this better?
Thanks:)
Basically, you use a non-pointer when you want the array to own (manage the existence of) the object. You use a pointer when you want the array to hold the object, but not manage it.
Hi Alex!
I got one question and wonder: when you create a std::vector<Something> m_something; and do m_something.push_back, you actually make a copy of the object and push it back? While when you do std::vector<Something*> m_something; and just m_something.push_back, you actually push back the pointer to that object, so you don’t make a copy? That’s why he is doing push_back(*temp) and you don’t, because you are already pointing to that object? Am I correct?
Thanks in advance!:)
> when you create a std::vector<Something> m_something; and when you do m_something.push_back you actually make a copy of the object and push it back?
Yes. And this can cause object slicing if you’re not careful.
> while you do std::vector<Something*> m_something; and just m_something.push_back you actually push back the pointer to that object so you don’t make a copy?
Yes, in that case we’re just pushing the pointer value itself.
If you want to point at something that already exists elsewhere, you’ll want to use a pointer so you can point at the object (wherever it lives) rather than making a copy.
In the following:
"You may also run across the term “aggregate class” in your C++ journeys, which is defined as a struct or class that has only default constructors/destructors, no protected or private members, and no inheritance or virtual functions."
A. Did we discuss the concepts of "protected member" "inheritance" or "virtual functions"?
I understand this as follows:
"You may also run across the term “aggregate class” in your C++ journeys, which is defined as a struct or class that has only *a* default *constructor/destructor* and public members *(no other features such as: private members, overloaded constructors/destructors and, to be discussed, protected members, inheritance or virtual functions)*."
Is this correct?
1) We haven’t covered protected members or inheritance yet -- that’s covered next chapter. I’ve updated the wording to speak more towards stuff we have covered.
2) I also updated the text here a bit -- to be an aggregate class, you can’t define any constructors or destructors (not even a default constructor).
Typo?:
"In the lesson Structs, we defined *an* aggregate data types (such as structs and classes) as *a* data types that *groups* multiple variables together."
Perhaps:
"In the lesson Structs, we defined ** aggregate data types (such as structs and classes) as ** data types that *group* multiple variables together."
Typos fixed. Thanks!
Typo?:
"Typically use pointer or reference members that point to or reference objects that *lives* outside the scope of the aggregate class."
Perhaps:
"Typically use pointer or reference members that point to or reference objects that *live* outside the scope of the aggregate class."
hi alex
I just can’t understand OOP at all; is there a solution you can tell me, friend?
I cannot understand OOP; tell me a solution.
http://www.learncpp.com/cpp-tutorial/103-aggregation/
AD7705/AD7706 Library Revisited
About a year ago, I wrote a simple library for interfacing the AD7705/AD7706 with Arduino. The library works, but it requires some decent knowledge of the underlying chip, which made it somewhat difficult to use. Most issues users reported can be resolved by adjusting the timing in user code, but I admit that this is difficult for users who are not familiar with the chip. For a library, I should have made it easier to use to begin with. So, I decided to add a few long-awaited features, and hopefully these tweaks will make the library easier to use.
One of the changes to the original code is the addition of the dataReady function. This function queries the DRDY bit in the communication register and returns true when the data-ready bit is cleared.
bool AD770X::dataReady(byte channel) {
    setNextOperation(REG_CMM, channel, 1);
    digitalWrite(pinCS, LOW);
    byte b1 = spiTransfer(0x0);
    digitalWrite(pinCS, HIGH);
    return (b1 & 0x80) == 0x0;
}
Using this function, we can wait until the converted data is ready before reading out the conversion result (rather than using delay statements):
double AD770X::readADResult(byte channel, float refOffset) {
    while (!dataReady(channel)) {
    }
    setNextOperation(REG_DATA, channel, 1);
    return readADResult() * 1.0 / 65536.0 * VRef - refOffset;
}
In readADResult, I added an optional parameter refOffset. If your Vref- is not tied to ground, you can use this parameter to set the offset voltage to be subtracted from the conversion result. The default operating mode is set to bipolar. For the AD7705 and AD7706, the difference between unipolar and bipolar operation is simply how the input signal is referenced, so by setting the input mode to bipolar you can still measure unipolar voltages. All that is needed is to tie Vref- to ground and leave refOffset at its default value (i.e. 0).
I have also added a reset function. By calling this function first in your setup code, you are guaranteed that the chip is brought to a known state. Some of the difficulty users faced with the original library was that, depending on how the system powered up, the AD770x might not be in a consistent mode, and thus the A/D results seemed random. The chip reset can be achieved either by using the RESET pin or in code. In my opinion, implementing it in code is the preferred method unless you need the highest performance possible. Another benefit is that this implementation requires one less MCU pin.
Finally, I added a few parameters to the alternative constructor. In case you want to fine-tune your setup (e.g. setup different gain/speed), you can use the alternative constructor instead.
The following example shows how to use this library to read ADC results from multiple channels:
#include <AD770X.h>

AD770X ad7706(2.5);
double v;

void setup() {
    Serial.begin(9600);
    ad7706.reset();
    ad7706.init(AD770X::CHN_AIN1);
    ad7706.init(AD770X::CHN_AIN2);
}

void loop() {
    v = ad7706.readADResult(AD770X::CHN_AIN1);
    Serial.print(v);
    v = ad7706.readADResult(AD770X::CHN_AIN2);
    Serial.print(" : ");
    Serial.println(v);
}
Download: AD770X1.1.tar.gz (compatible with Arduino 1.0 IDE)
Hallo Kerry D.Wong!
It’s from Russia again :)
Thank you for the new library! Now it compiles in the Arduino 1.0 IDE! But I have a question: is it possible to increase the channel polling rate? Or can you tell me in which part of your library code I can change the time delay? Then I can experiment with the delay…
Thank you again! With the new library, all channels are working fine!
Yes, there is an init overload that takes the update rate as a parameter:
init(byte channel, byte clkDivider, byte polarity, byte gain, byte updRate)
and you can use the constants defined in the header to set your rates.
The clock divider can be set accordingly based on your oscillator frequency.
Hello Kerry,
Thanks for the library… very useful. I was wondering if it would be possible to use the library with an AD7715, which is pin compatible with the AD7706 but is only a single-channel device. Which parts would need to be changed in order to make it work? I think the byte codes for accessing the different registers are the same??
Thanks again
Alex
Thanks Alex. As you observed, the AD7705 and AD7706 are almost identical. You can use the exact same code for either one. The only change is the interpretation of the input pins used.
For instance, if the channel setting is CH1=0 and CH0=0, for AD7705 AIN1+ and AIN- are used whereas for AD7706 AIN1 and COMMON are used.
Kerry,
I was actually looking at using the AD7715, which is not quite the same as the AD7706 – it is very similar, but this device has only one channel, not two like the AD7706. There are other differences too: there is no clock register in the AD7715. How difficult would it be to modify the AD7706 library to work with the AD7715? Please note that my coding skill is almost zero!!
Cheers
Alex
Sorry I didn’t quite read it right. I thought you were trying to use AD7705 (but actually you were trying to use AD7715).
Kerry,
My sincerest apologies, I have it working and it’s excellent!! I did not actually just try the library! Someday I will have to learn how to code properly and understand how libraries are written! Thank you so much for your library and your response…
Cheers
Alex
First thanks for this library kwong!
Alex, have you really had success with this library and the AD7715? Because I didn’t…
I suppose you modified it because of the differences between the AD770X and the AD7715, like the clock register and channels…
Would you offer us your modified code?
(French man writing…)
Cheers
Damien
Damien,
My code used Kerry’s library without being modified. I’m not a very good programmer so I can’t be certain how it worked with the AD7715. The critical part was adding a 100ms delay to Kerry’s example code. Once I did that the device started to work. I tested it on a breadboard with minimal components as specified in the datasheet with a variable resistor to provide analogue data! It worked first time…best of Luck
Here is my code:
/*
* AD770X Library
* sample code (AD7706)
* Kerry D. Wong
*
* 3/2011
*/
#include <AD770X.h>
AD770X ad7706(2.5f);
unsigned int v;
void setup()
{
Serial.begin(115200);
ad7706.reset();
ad7706.init(AD770X::CHN_AIN1);
// ad7706.init(AD770X::CHN_AIN2);
Serial.println("Here we go…");
}
void loop()
{
v = ad7706.readADResult(AD770X::CHN_AIN1);
Serial.println(v);
delay(100); //100ms delay for AD7715
}
Thanks Alex for replying!
But…
I can’t believe how it worked for you with the AD7715…
I’ve tried you sketch, but it didn’t work for me…
This library doesn’t work for me, but I wrote one for our AD7715. I didn’t spend a lot of time on it, and even though it seems to work quite well (thanks to kwong), there are a lot of differences between the AD770X and the AD7715…
Hi Damien
Would you offer me your modified code for the AD7715?
Best regards
In that case: my email is jkuj@teknologisk.dk :)
Hi Kerry,
i’m using your library for AD7705. I still don’t understand how to get data. To take data from AIN1, this is the code i use:
#define LOOP_DELAY 120
setup:
ad7705.reset();
ad7705.init(AD770X::CHN_AIN1);
ad7705.init(AD770X::CHN_AIN3);
loop:
delay(LOOP_DELAY);
temp1 = ad7705.readADResult(AD770X::CHN_AIN1);
delay(LOOP_DELAY);
temp2 = ad7705.readADResult(AD770X::CHN_AIN3);
total[m]=temp2-temp1;
I’m working with UPDATE_RATE_60. Is there another way to get data? If I only read AIN3 or AIN1, sometimes it doesn’t get data. Depending on the delay time (LOOP_DELAY), accuracy is higher or lower. When I read AIN1, I only have ground, and I supposed that AIN1 and AIN3 should stand for AIN1+/-. Why do I have to read AIN1 and AIN3 for AIN1+/-?
I’m working with a duemilanove ATM328.
Thanks in advance
Hi Bodhi,
You can configure the AD7705 in either differential (bipolar) or single-ended (unipolar) operation. The simple constructor you used defaults to unipolar. If you need to take differential measurement, you can use the overloaded constructor: AD770X::init(byte channel, byte clkDivider, byte polarity, byte gain, byte updRate) , you can take a look at the cpp file to see how it can be used.
I am not sure why you need to include delay in your code though.
Hi kwong,
If i don’t use the delay, sometimes it doesn’t measure anything and accuracy is lower. I have to measure thermocouples and accuracy is very important.
I tried to use bipolar but values were wrong and the only way i could work was the one i wrote in the past post.
Thanks for your response.
Hi again kwong,
I tried to test your test program and I also need a delay. If I change the update rate to any value other than 25, it doesn’t work and only measures 0.0000. The same happens with the gain: it measures something, but far from reality.
I’m using a 4.000 MHz crystal; can this be the reason?
Thanks in advance
This is really strange… I used the same design in one of my later projects, also with a 4 MHz crystal, but I didn’t experience any delays… the code actually checks the status of the data-ready bit and returns only when the conversion results are stable.
I guess as an alternative, you can poll the dataready pin, which should achieve the same result as the implementation in my code.
Hi again Kwong,
How can I poll the data-ready signal? Just by checking pin 12 of the AD7705?
Yes, according to the datasheet, the DRDY pin (12) goes low whenever the data is ready.
Hello,
Can you check your library for an issue:
ad7706.init(AD770X::CHN_AIN1,AD770X::CLK_DIV_1, AD770X::BIPOLAR, AD770X::GAIN_4, AD770X::UPDATE_RATE_25);
Init must set up CHN_AIN1, but it sets CHN_AIN2… The same with AIN2. These settings must be swapped.
Regards,
Linas
Nice library, great job!!
Greetings!
I will try to connect the AD7705 to an Arduino Leonardo. The question is: the datasheet uses DRDY. Should it also be connected to the Arduino, or left free? Also, in the datasheet CS goes to ground, but you have it connected to the Arduino.
Sorry for the horrible English. I’m from Ukraine.
Hi Bogdan,
DRDY can either be read from the register or from the pin. The method I used was reading from the register and thus the pin is left unused. The CS pin is used in the SPI protocol and is controlled within the SPI library.
Hello! It’s me again! Another question: what value of capacitors should connect the quartz crystal to ground? I understand it’s a silly question, but thanks for the answer!)
I used a 2 MHz ceramic oscillator, so the caps were not used. But you can also use a crystal with 2 load capacitors. The value varies based on the crystal you choose, but anything between 18 pF and 33 pF will work.
Hi! One more question on the library. I’m using the Arduino Leonardo. Its SPI pins are different from the UNO’s. Can I change them in the library file for normal operation? Thanks for the fast reply!
I have a quick question, I’m using the AD7705 with the arduino mega 2560 chip. I modified the header file to use the correct pins for the mega 2560: MISO(pin 50), MOSI(pin 51), and SCK(pin 52). I also set pinCS to pin 24 (since that is what the AD7705 CS pin is connected to) and I setup pin 10 to be a 2 MHz clock (using timers) to go into MCLK IN since I don’t have a crystal. My current issue is that I will only get values equal to 0 or whatever I pass in as the reference voltage. If I pass 2.5 as my reference voltage then my output randomly alternates between 0 and 2.5, if I pass in 5, then it flips between 0 and 5. So could this be an issue with the chip or is this an issue because I’m using the mega 2560?
Hard to tell… the first thing I’d do is short the input (or provide a steady voltage) of the ADC and see how the output behave. AD7705/06 does have a minimum recommended clock frequency of 4MHz so not sure if it would work with a 2MHz clock.
Right now I have the AD7705 connected to a simple voltage divider across a 10k pot, if I plug the input into ground or if I pull the input high the data coming in is still 0.00. I setup a 4 MHz clock and my serial output to my screen stopped altogether, any ideas?
Sorry, my bad. AD7705’s minimum operating freq is 400kHz not 4MHz so 2MHz should be fine (and that’s the clock frequency I was using). Could you try a reset() immediately before reading the ADC value? Also, could you add a small delay (e.g. delay(10)) before reading the ADC to see if you observe any difference?
Currently I have a 500ms delay in my loop between readings, I tried the reset just before reading the ADC value and still nothing. At this point I’m inclined to think that I may have a bad chip
Kwong,
Firstly I’d like to thank you for putting together this library, I was wondering if you could help me with a problem I’ve got.
I am in the exact same boat as drewman2000; I’m on a Mega2560, have altered the header file for the Mega pins (50-52), using pin 10 as a 2MHz clock with the AIN connected to a 50k pot and all I get out is 0.00 with about a ~100ms delay between readings. I’ve tried adding the small delay and also the reset but neither work, do you have any ideas?
Is it also normal that it should take about 2-3 mins to complete the setup routine and start giving readings back?
Separately, is it possible to read the analog input as a decimal number rather than as a voltage?
Thanks,
Rob
Assuming you are using the revisited library, which queries the DRDY bit instead of using delays. Since I don't have an Arduino Mega board I can't test for sure, but something is definitely not right.
It shouldn't take that long to get the initial reading, and the only way it could take that long is if the dataReady function keeps returning "false". In this case the readADResult function would be in a wait loop.
Could you try using a crystal oscillator instead of clock generated via Mega? Although it really shouldn’t matter.
Hi kwong,
i tried to run your program and all I read is H1=0.0 H2=0.0 H1=0.0 H2=0.0. I cannot read anything!
#include <LiquidCrystal.h>
#include <AD770X.h>
/****************************************************************************/
LiquidCrystal lcd(8, 9, 4, 5, 6, 7);
/****************************************************************************/
//set reference voltage to 2.5 V
AD770X ad7705(2.5);
float v1;
float v2;
float H1;
float H2;
void setup()
{
//initializes channel 1
ad7705.init(AD770X::CHN_AIN1);
//initializes channel 2
ad7705.init(AD770X::CHN_AIN2);
Serial.begin(9600);
lcd.begin(16, 2);
}
void loop()
{
//read the converted results (in volts)
v1 = ad7705.readADResult(AD770X::CHN_AIN1);
//read the converted results (in volts)
v2 = ad7705.readADResult(AD770X::CHN_AIN2);
H1 =(v1*1);
H2 =(v2*1);
Serial.println("H1");
Serial.println(H1);
Serial.println("H2");
Serial.println(H2);
lcd.setCursor(0,0);
lcd.print( H1);
lcd.setCursor(0,1);
lcd.print(H2);
unsigned int data=0;
delay(500);
}
Could you try reading just one channel and see what values you get?
You must call AD770X::reset() before AD7705 initialisation. The input shift register of the AD7705 must be in a known state before initialising and communicating.
2Keryy: reset routine simplified:
void AD770X::reset() {
digitalWrite(pinCS, LOW);
spiTransfer(0xff);
digitalWrite(pinCS, HIGH);
}
I use cheap TM7705 modules from eBay. After a correct interface reset these fake chips work fine.
Hello Brother,
Your library is the only thing that got my work done. Actually, I am building a data logger, so I need to port this code to an ATmega32 MCU, but I could not do it. Is there any way this library can be ported to an off-the-shelf ATmega32 MCU?
Thanks a lot for your work
Thank you very much for the AD770X library!
Is it possible to use pins other than 11, 12, 13 for MOSI/MISO/SCK?
I am asking because two of my devices do not seem to be compatible with each other (they work well independently with the Arduino), and I have extra digital lines available. I did not see a reference to SPI.h in your library and example, so I am hoping this is possible. Thanks again.
For ATMega328/or 328p those pins are fixed for SPI as this is a hardware function. But if you implement the SPI protocol on your own, you can use pretty much any pins.
Hi to to all,
I’m using the AD7705 and i found this lib very helpful, thanks a lot!
i just have quite a problem on the measuring:
i have connected to the GND the ai1(-)
i’m giving to the ai2(+) a known voltage from the arduino (voltage that i also measure with the multimeter)
The configuration is exactly the one shown in the scheme by kerry wong.
when i run the program, i get from the ad7705 a value lower than the one i expect, for example:
if i set 1 V from the arduino, i measure 1.00V with the multimeter, but i get 0.95V on the AD7705.
if i set 2 V from the arduino, i measure 2.00V with the multimeter, but i get 1.92V on the AD7705.
it seems like I have a gain problem (I set a gain of 1, btw), because the ratio voltage_expected/voltage_measured is about 1.045, which is roughly constant.
Can it be a calibration problem? or am i making some stupid mistake? or is the IC broken?
hope someone can help me,
cheers,
Francesco
sorry,
my mistake i was talking about the ai1(+) not the ai2(+)
cheers,
francesco
Dear Kwong,
Thanks a lot for your library. It works fine when I connect a potentiometer to the channel, with the middle pin connected to +IN and -IN connected to ground. Now, when I try to feed a sine wave as input, I get very unreasonable values. I've tried both adding an offset so that the input stays between 0 and 1V, and no offset so that the input varies between -50 and +50 mV (it is said in the datasheet that there won't be correct readings below -100 mV). Do you have any idea where the problem could be?
Best regards,
A question for anyone using the cheap boards off eBay that use the TM7705 chip (supposedly AD7705 compatible).
I'm using both inputs, and when I do, the data from AIN1 comes out when I request data from AIN2, and the data from AIN2 comes out when I request data from AIN1.
However when I use just AIN1 by itself, reading data from AIN1 works as expected, but when using only AIN2, the data can only be read by reading AIN1.
Does the real AD7705 exhibit this behavior or this is probably just a problem because the chip is (probably) a Chinese knock off?
Jeff
Hi, Jeff.
I am using a TM7705 module with an Arduino MEGA 2560 board; the input is unipolar, and both AIN1(-) and AIN2(-) connect to GND. But I ran into a similar situation as you mentioned above. I do not have any information about the usage of the TM7705 module. Have you made any progress on solving the problem?
Based on MEGA 2560 schematic the pin connections are:
MISO(50), MOSI(51), SCK(50) and CS(53)
Thanks,
John
Sorry for typo, SCK(50) should be sck(52)
John
Wondering how you got data from the tm7705 chip. perhaps you could send me a picture or schematic of your wiring and the code you used. I cannot figure out how to get data using the example code. my email is koryherber@gmail.com Thanks!
Hello Kwong!
I have a module with the AD7705 and an on-board 2.5V reference. I connect it to the Arduino Uno with your library, wiring the outputs in accordance with the library's SPI protocol. I use the simple example sketch for testing. On input AIN1, I tried connecting various signal sources (electret microphone, temperature sensor, 1.5V battery). All values displayed in the serial monitor are zero. Help me understand why it's not working.
I am also having the same problem using the same module listed by Amir-aka. Any ideas?
Hi, I am having the same problem with the TM7705 ( value is always zero). Does anyone know what can be the problem?
Thank you Wong. Your library is very useful.
I am also having problems with the module described by Amir (AD7705 Dual 16 bit ADC Data Acquisition Module Input Gain Programmable SPI Interface TM7705). I have changed the CLK bit to account for the 4 MHz crystal, but it still doesn't work. It seems the module has some sort of hardware problem.
Has anyone had success using the 7705 module?
Hi,
I am getting compilation errors when using the program with the DUE. It has to do with the library. Has anybody made it work with a DUE? What modifications to the library are necessary? Thank you.
Since it was not working for the DUE I first tried with the UNO. It compiles correctly but all my readings are 0 (except for some that seem like noise). Has anyone had a similar problem? Any help? Thank you.
Hi Jorge,
A lot of people seem to have had issues similar to yours. I am not entirely sure whether there was some slight change in the chip or what. The ones I developed my code with are all working fine… So unless I can get a sample of the ones that people are having trouble with, I am not quite sure what's going on.
It works on my TM7705, but I only get two digits after the decimal point. How do I increase this to 4 digits?
Hi Mark, how did you manage to make it work? Can you please tell us, I am desperately facing the same issue of other users…
Hi Kerry,
I’m building an Arduino musical instrument, and I’m using your library and the AD7705.
Will it be OK to publish the code, including your library, as a blog post or as an instructable?
I’m asking because I couldn’t find a license reference in the code.
Thanks,
UriSh
Absolutely! Everything here is open sourced in the hope that it would be useful for other people.
Thanks for your reply!
I sure hope it will work out. I’ve been using the AD7705 to get a precise reading of a 10-turn potentiometer, and it worked perfectly fine, until I added a LCD+keypad shield (DFRobot). For some reason, when I connect both the LCD and the ADC, the ADC hangs. I’ve checked to see if I fried the hardware or something like that, checked if the pinout matches, and most everything else that came to my mind. Nothing works so far, which is very frustrating. If you have some insight on how to get this to work, I would sure like to know.
Thank you very much,
UriSh
For anyone using the TM7705 red boards of ebay.
I had a lot of issues getting mine going until I attached 5v to the reset pin. I’m not great at reading wiring diagrams but I thought that it was already connected, but it wasn’t.
I get readings from a pot using the default code above on Arduino uno with the tm7705 above and the Pot connected to 5v, ain+, ain-.
it reads up until 2.5v.
if you touch a wire to ain- and the REFIN+ (The fourth pin from the dot(ground) on the ad7705 chip), you get readings from 0-5v.
I have a question for Kerry D. Wong. (Thanks so much for this library; it's so great that you did all the work and then shared it!)
I am trying to get a reading from an RTD sensor on the board. While I would love your help with that too my actual question is:
To get a bipolar reading, do I need to read AIN+ and AIN- (by calling ad7705.readADResult(AD770X::CHN_COMM))? Or is that a question that would not even make sense if I knew more about electronics?
(I ask because when I read AIN+ with REF as the excitation for the RTD, I only get the excitation value (2.5) as a reading.)
Thanks for all your help in both writing this library and then reviewing it to make it easier :)!
Hi, to use bipolar mode with AD7705, you need to call the overloaded initialization function (definition below):
void AD770X::init(byte channel, byte clkDivider, byte polarity, byte gain, byte updRate)
and passing in AD770x::BIPOLAR into the polarity variable.
Also, in bipolar mode, the measurable range between ain+ and ain- is between -2.5V to 2.5V if the reference is 2.5V.
Thanks for the great library, but I'm not able to get correct values on the output; it is always the same even if I change the input voltage. My module is the red one from eBay. I also tried connecting the reset pin to 5V but nothing changes.
Hi Kerry,
I'm quite desperate to bring the AD7706 alive. I'm using an Uno board with an ATmega328, all wires are connected correctly, and still I can't get anything but 0.00 in the serial monitor. Would you be so kind as to give me advice on how to fix it? It looks like the AD7706 doesn't communicate with the uC.
Thanks for reply
Hello, if you are interested, i’ve ported this library in python to use it with spidev (for raspi for ex). Here is the link :
Awesome, thanks!
Ps. I also use the red one from eBay connected to a Pi and it works, even without connecting the reset pin. It has a Vref = 2.5V.
Dear Kerry!
In file AD7705.h we see:
static const byte UNIPOLAR = 0x0;
static const byte BIPOLAR = 0x1;
In datasheet DOC000143693.pdf page 12
B/U Bipolar/Unipolar Operation. A "0" in this bit selects Bipolar Operation. A "1" in this bit selects Unipolar Operation.
Where is the mistake?
I also found it, and I suppose the datasheet is right.
Dear Kerry D. Wong, I am facing the same issue of other users. Any updates for TM7705?
Several errors:
1. in AD770x.h
static const byte UNIPOLAR = 0x0; // MUST BE 0X1
static const byte BIPOLAR = 0x1; // MUST BE 0X0
2.
static const byte CLK_DIV_1 = 0x1; // MUST BE 0x00
static const byte CLK_DIV_2 = 0x2; // MUST BE 0x01
Thanks Roman for your reply. However, I still have some issues and my readings are always zero. I think I have figured out a problem: while using the scope for the DRDY pin I noticed that it does not go LOW.
Hi kerry,
thanks for your library. It is really promising.
If I change
AD770X ad7706(2.5);
double v;
to
AD770X ad7706(2.5);
double v1,v2;
the outputs fluctuate randomly. Each channel can be read easily on its own; however, reading two channels gives unexpected outcomes.
could you comment on that? And do we need to change (as Roman said)?
static const byte UNIPOLAR = 0x0; // MUST BE 0X1
static const byte BIPOLAR = 0x1; // MUST BE 0X0
thanks a lot.
thanks
Hi. This is what I see in the serial monitor. Why?
0.00 : 0.00
0.00 : 0.00
0.00 : 0.00
0.00 : 0.00
0.00 : 0.00
0.00 : 0.00
0.00 : 0.00
Did you fix the problem? My AD7705 also shows 0.0.
Hello Sir
Thank you for the library.
I wanted to know whether this code will run on the Arduino Due.
Could you please tell me what the code would be to interface the AD7705 with the Arduino Due?
Thanks
Mark
Hello everybody
first of all thanks for the lib.
I have a problem with the AD7705 conversion. My sketch is the following.
#include
#include
AD770X ad7705(65536);
unsigned int ADCValue1;
unsigned int ADCValue2;
void setup() {
// Open serial communications and wait for port to open:
Serial.begin(9600);
while (!Serial) {
; // wait for serial port to connect. Needed for Leonardo only
}
ad7705.reset();
delay(1000);
ad7705.init(AD770X::CHN_AIN1,AD770X::CLK_DIV_1, AD770X::BIPOLAR, AD770X::GAIN_1, AD770X::UPDATE_RATE_25);
ad7705.init(AD770X::CHN_AIN2,AD770X::CLK_DIV_1, AD770X::BIPOLAR, AD770X::GAIN_1, AD770X::UPDATE_RATE_25);
delay(1000);
}
void loop() {
ADCValue1 = ad7705.readADResult(AD770X::CHN_AIN1);
ADCValue2 = ad7705.readADResult(AD770X::CHN_AIN2);
Serial.print("AD7705 analog 16bit input 1: ");
Serial.println(ADCValue1);
Serial.print("AD7705 analog 16bit input 2: ");
Serial.println(ADCValue2);
Serial.println("-");
delay(1000);
}
but the serial output still zero forever.
AD7705 analog 16bit input 1: 0
AD7705 analog 16bit input 2: 0
–
AD7705 analog 16bit input 1: 0
AD7705 analog 16bit input 2: 0
even if i put 3 volt at the input.
somebody can help me?
thanks.
Hey Miki
I’ve also been there, so in my case what worked was to pull up the rst pin of the ADC using a pullup resistor.
First, your line does not look good to me: "AD770X ad7705(65536);"
you might want to provide the ref voltage like so:
AD770X ad7705(2.5);
Second, you must expect double values from readResult, not unsigned int
Best
Dear,
Please note that I had a reading problem with the AD7705 (DRDY never going to level zero).
Board with PIC16F690, XTAL @ 8 MHz, PortC driving all ADC pins.
Programming with mikroBasic v7 Pro and PICkit2.
Driving the AD7705 pins with my own software and simple functions (not really the SPI protocol).
I noted that reading was sensitive to this: when reading data, the AD7705 DIN input MUST be set to level ONE!
I don't understand why (possibly inputting a command while reading?) but it really WORKED!
All is now OK !
Perhaps this will be a solution for other software and users.
If you are interested, please give me your e-mail address and I can send you my (test) software.
Antonio
Hello everyone, I tried to use the library from Kerry D. Wong with the TM7705 chip from eBay (red breakout board), link:. It has a reference voltage of 2.5V and an oscillator that matches the requirement stated in the AD7705 datasheet.
I have rewritten the library from Kerry D. Wong for the AD7705 and have now added the SPI lib to it, so it can be used with all types of controllers. The library can be found here:. Enjoy it :)
However, like everyone else, I am getting readings of only zeroes. I want to know if anyone else has solved this problem, and if so, how? Furthermore, is there a chance I can get a look into the library you used/made, to understand how you made it work with the TM7705 version? Also, how did you connect the breakout board to the Arduino controller and to whatever sensor you are using? (I am using a load cell.)
My connections to arduino due and load cell:
For Arduino due and TM7705:
GND -> GND
VCC -> 5V
RST -> 5V or nothing (as you can see in the circuit diagram)
CS -> pin 10
SCK -> pin 76
DIN -> pin 75
DOUT -> pin 74
DRDY -> Nothing, since i am using the internal resistor.
For load cell and TM7705:
AIN1 + -> Output data
AIN1 – -> Output data
The two left wires coming from the load cell goes to GND and 5V for excitation.
If you look into my library you can see that I am using SPI_MODE = 3. I am not sure if that is correct, but I think so, since that is what the sample code in the datasheet states.
Furthermore, I am using unipolar mode, which I think is the correct mode, though I am not sure.
Please, if you find any mistakes in the library, do tell me so I can correct them :)
Hope you guys have an answer for my questions :)
Hello
Please… I don't know the C language very well, nor how the SPI library is written… I am only a mikroBasic writer.
Verify that before reading data… the DIN of the AD7705 is SET… DIN = 1
In my program, DIN of AD7705 is connected to Spi_SDO output of my board with PIC16F690 minimum system …
Here are the definitions of my SPI pins for the PIC16F690.
dim Chip_Select as sbit at RC0_bit
Spi_DRDY as sbit at RC1_bit
Spi_RST as sbit at RC2_bit
Spi_CLK as sbit at RC3_bit
Spi_SDI as sbit at RC4_bit // connected to Dout AD7705
Spi_SDO as sbit at RC5_bit // connected to Din AD7705
dim Chip_Select_Direction as sbit at TRISC0_bit
Spi_DRDY_Direction as sbit at TRISC1_bit
Spi_RST_Direction as sbit at TRISC2_bit
Spi_CLK_Direction as sbit at TRISC3_bit
Spi_SDI_Direction as sbit at TRISC4_bit // =1 so Spi_SDI is an input connected to AD7705 Dout output
Spi_SDO_Direction as sbit at TRISC5_bit // =0 so Spi_SDO is an output connected to AD7705 Din input.
So Spi_SDO is an output… and I had to set Spi_SDO = 1 -> DIN = 1 before reading data from the AD7705.
If Spi_SDO = 0… then I only read 0000!
See the changes …
Spi_SDO = 1 The last column is the mean of the first 8 samples
(speed = ~ 100 samples /second, turning slowly input voltage with pot 10 turns)
8BD7 8B90 8B46 8B03 8AC4 8A85 8A46 8A06 00008AE9
89C4 8982 893C 88FA 88BB 887F 8848 8813 000088E2
87E3 87B9 8792 876E 874C 872C 870C 86EC 00008761
86CC 86A9 8684 865F 8638 8615 85F7 85E0 0000864F
85CC 85BD 85B2 85A6 859E 8599 8598 8597 000085A9
8596 8597 8596 8595 8595 8593 8591 8592 00008594
8595 8599 859D 85A6 85B5 85C8 85DE 85F8 000085B8
8612 862A 8643 865D 8675 868B 86A3 86BF 00008668
Spi_SDO = 0
I hope this can help you …
Best Regards to everyone
Contents
- Introduction
- Background
- Converting the existing project
- Extending the project
- Using the code
- Points of Interest
- History
Introduction
One of my most epic moments at CodeProject was the release of the article about Mario5. In that article I describe the making of a game based on web technologies such as HTML5, CSS3 and JavaScript. The article gained a lot of attention and is probably among the ones I am most proud of.
The original article uses something I described as "OOP JavaScript". I wrote a small helper script called oop.js, which allowed me to use a simple inheritance / class pattern. Of course, JavaScript is very object-oriented from the beginning; classes are no direct criterion for OOP. Nevertheless, the pattern helped a lot to make the code both easy to read and easy to maintain. This is obtained by not having to deal with the prototype scheme directly.
With TypeScript we get a unified class construct in JavaScript. The syntax is based on the ES6 version, preparing TypeScript to remain a full superset of JavaScript even with ES6 implementations. Of course, TypeScript compiles down to ES3 or ES5, which means that this class construct will be decomposed to something that is available right now: again, the prototype mechanism. Nevertheless, what remains is code that is readable, safe across implementations (ES3 / ES5) and agrees on a common base. With my own approach (oop.js), no one besides me knew what was going on without reading the helper code. With TypeScript, a broad set of developers uses the same pattern, as it is embedded in the language.
It was therefore only natural to convert the Mario5 project to TypeScript. What makes this worth an article on CodeProject? I think it is a nice case study in how to convert a project. It also illustrates the main points of TypeScript. And finally, it gives a nice introduction to the syntax and the behavior. After all, TypeScript is easy for those who already know JavaScript, and it makes JavaScript easier to approach for those who do not have any experience yet.
Background
More than a year ago Anders Hejlsberg announced his new language called TypeScript. This was a surprise to most people, as Microsoft (and especially Anders) seemed to be against dynamic languages, in particular JavaScript. However, it turned out that Microsoft realized what a big opportunity the centralization of general-purpose programming to web programming is. With JavaScript for Windows Store apps, the ongoing hype around node.js and the NoSQL movement with document stores that use JavaScript for running queries, it is obvious that JavaScript is definitely important.
This realization influenced the design of the new language. Instead of creating a new language from the ground up (like Google did with Dart), Anders decided that any language that might still become established has to extend JavaScript; no solution should be orthogonal to it. The problem with CoffeeScript is that it hides JavaScript. This may be appealing to some developers, but for most developers it is an absolute exclusion criterion. Anders decided that the language has to be strongly typed, even though only an intelligent compiler (or transpiler, to be more precise) will see these annotations.
So what happened? A true superset of ECMAScript 5 has been created. This superset has been called TypeScript to indicate the close relationship with JavaScript (or ECMAScript in general), with the additional type annotations. Every other feature, such as interfaces, enums, generics and casts, follows from these type annotations. In the future TypeScript will evolve. There are two areas:
- Embracing ES6 to remain a true superset of JavaScript
- Bringing in further features to make JS development easier
The primary benefit of using TypeScript is two-fold. On the one side, we can take advantage of being informed of potential errors and problems at compile time. If an argument does not fulfill a given signature, then the compiler will throw an error. This is especially useful when working in larger teams or on a bigger project. The other side is also interesting. Microsoft is known for its excellent tooling with Visual Studio. Giving JavaScript code good tooling support is tedious, due to the dynamic nature of JavaScript code. Therefore, even simple refactoring tasks such as renaming a variable cannot be performed with the desired stability.
In the end, TypeScript gives us great tooling support combined with a much better idea of how our code will work. The combination of productivity plus robustness is the most appealing argument for using TypeScript. In this article we will explore how to convert existing projects. We will see that transforming code to TypeScript can be done incrementally.
Converting the existing project
TypeScript does not hide JavaScript. It starts with plain JavaScript.
The first step in utilizing TypeScript is, of course, to have TypeScript source files. Since we want to use TypeScript in an existing project, we'll have to convert these files. There is actually nothing required here; we just rename our files from *.js to *.ts. This is purely a matter of convention. Nevertheless, as the TypeScript compiler tsc usually considers *.ts files as input and writes *.js files as output, renaming the extension ensures that nothing wrong happens.
The next subsections deal with incremental improvements in the conversion process. We now assume that every file has the usual TypeScript extension *.ts, even though no additional TypeScript feature is used.
References
The first step is to supply references from individual JavaScript files to all other (required) JavaScript files. Usually we would only write single files, which, however, (usually) have to be inserted in a certain order in our HTML code. The JavaScript files do not know about the HTML file, nor do they know the order of these files (not to speak of which files are included at all).
Now that we want to give our intelligent compiler (TypeScript) some hints, we need to specify what other objects might be available. Therefore we place a reference hint at the beginning of each code file. The reference hint declares all other files that will be used from the current file.
For instance, we might include jQuery (used by, e.g., the main.ts file) via its definition file:
/// <reference path="def/jquery.d.ts"/>
We could also include a TypeScript version of the library, or the JavaScript version; however, there are reasons for including only the definition file. Definition files do not carry any logic. This makes the file substantially smaller and faster to parse. Also, such files usually contain much more (and better) documentation comments. Finally, while we would prefer our own *.ts files to *.d.ts files, in the case of jQuery and other libraries the original has been written in JavaScript. It is unclear if the TypeScript compiler would be satisfied with the source code. By taking a definition file, we can be sure that everything works.
There are reasons to write plain definition files ourselves as well. The most basic one is covered by the def/interfaces.d.ts file. It does not contain any code, which makes compilation irrelevant. Referencing this file, on the other hand, makes sense, since the additional type information provided by the file helps in annotating our code.
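To make this concrete, here is a sketch of what such a declaration-only file might contain. The names Point, Size and loadSprites are invented for illustration; the actual def/interfaces.d.ts of the project differs.

```
// Hypothetical contents of a declaration-only file such as def/interfaces.d.ts.
// It contains types and declarations, but no executable logic.
interface Point {
    x: number;
    y: number;
}

interface Size {
    width: number;
    height: number;
}

// States that some global function exists elsewhere, without implementing it.
declare function loadSprites(basepath: string): void;
```

Because nothing here has a runtime representation, the compiler emits no JavaScript for this file at all.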
Annotations
The most important TypeScript feature is type annotations. Actually the name of the language indicates the high importance of this feature.
Most type annotations are actually not required. If a variable is immediately assigned (i.e. we define a variable, instead of just declaring it), then the compiler can infer the type of the variable.
var basepath = 'Content/';
Obviously the type of this variable is a string. This is also what TypeScript infers. Nevertheless, we could also name the type explicitly.
var basepath: string = 'Content/';
Usually we do not want to be explicit with such annotations. It introduces more clutter and less flexibility than we aim for. However, sometimes such annotations are required. Of course the most obvious case appears, when we only declare a variable:
var frameCount: number;
There are other scenarios, too. Consider the creation of a single object that may be extended with more properties. Writing the usual JavaScript code definitely does not give the compiler enough information:
var settings = { };
What properties are available? What is the type of the properties? Maybe we don't know, and we want to use it as a dictionary. In this case we should specify the arbitrary usage of the object:
var settings: any = { };
But there is also another case. We already know what properties might be available, and we only need to set or get some of these optional properties. In that case we can also specify the exact type:
var settings: Settings = { };
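As an illustration, a hypothetical Settings interface with optional properties might look like the following. The article does not show the actual definition, so the property names here are invented:

```typescript
// A hypothetical Settings type: all properties are optional, so an empty
// object literal is a valid Settings value.
interface Settings {
    musicOn?: boolean;
    basepath?: string;
    startLevel?: number;
}

var settings: Settings = {};
settings.basepath = 'Content/';   // OK: a known optional property
// settings.foo = 1;              // would be a compile-time error
```

Unlike the `any` annotation, this still lets the compiler reject typos and unknown property names.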
The most important case has been omitted so far. While the types of variables (local or global) can be inferred in most cases, function parameter types cannot. In fact, function parameters may be inferred for a single usage (such as the types of generic parameters), but not within the function itself. Therefore we need to tell the compiler what types of parameters we have.
setPosition(x: number, y: number) { this.x = x; this.y = y; }
Transforming JavaScript incrementally with type annotations therefore is a process that starts by changing the signatures of functions. So what about the basics of such annotations? We already learned that number, string and any are built-in types that represent elementary types. Additionally we have boolean and void. The latter is only useful for return types of functions. It indicates that nothing useful is returned (as JS functions will always return something, at least undefined).
What about arrays? A standard array is of type any[]. If we want to indicate that only numbers can be used with that array, we could annotate it as number[]. Multi-dimensional arrays are possible as well. A matrix might be annotated as number[][]. Due to the nature of JavaScript we only have jagged arrays for multi-dimensions.
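A few made-up examples of such array annotations (not taken from the Mario5 source):

```typescript
// any[]: elements of any type
var mixed: any[] = [1, 'two', true];

// number[]: numbers only
var scores: number[] = [10, 20, 30];

// number[][]: a jagged two-dimensional array; rows may differ in length
var matrix: number[][] = [
    [1, 2, 3],
    [4, 5]
];

// The parameter annotation lets the compiler reject, e.g., total(mixed).
function total(values: number[]): number {
    var sum = 0;
    for (var i = 0; i < values.length; i++) {
        sum += values[i];
    }
    return sum;
}
```

Calling `total(scores)` compiles fine, while `total(mixed)` would be flagged at compile time.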
Enumerations
Now that we have started annotating our functions and variables, we will eventually require custom types. Of course we already have some types here and there; however, these types may be less annotated than we want, or defined in too special a way.
Sometimes there are better alternatives offered by TypeScript. Collections of numeric constants, for instance, can be defined as an enumeration. In the old code we had objects such as:
var directions = { none: 0, left: 1, up: 2, right: 3, down: 4 };
It is not obvious that the contained elements are supposed to be constants. They could be easily changed. So what about a compiler that might give us an error if we really want to do nasty things with such an object? This is where enum types come in handy. Right now they are restricted to numbers; however, for most constant collections this is sufficient. Most importantly, they are transported as types, which means that we can use them in our type annotations.
The name has been changed to uppercase, which indicates that Direction is indeed a type. Since we do not want to use it like an enumeration flag, we use the singular version (following the .NET convention, which makes sense in this scenario).
enum Direction { none = 0, left = 1, up = 2, right = 3, down = 4, };
Now we can use it in the code such as:
setDirection(dir: Direction) { this.direction = dir; }
Please note that the dir parameter is annotated to be restricted to arguments of type Direction. This excludes arbitrary numbers; callers must use values of the Direction enumeration. What if we have a user input that happens to be a number? In such a scenario we can also get wild and use a TypeScript cast:
var userInput: number; // ... setDirection(<Direction>userInput);
Casts in TypeScript work only if they could work. Since every Direction is a number, a number could be a valid Direction. Sometimes a cast is known to fail a priori. If userInput were a plain string, TypeScript would complain and return an error on the cast.
Interfaces
Interfaces define types without specifying an implementation. They vanish completely in the resulting JavaScript, like all of our type annotations. Basically they are quite similar to interfaces in C#; however, there are some notable differences.
Let's have a look at a sample interface:
interface LevelFormat { width: number; height: number; id: number; background: number; data: string[][]; }
This defines the format of a level definition. We see that such a definition must consist of numbers such as
width,
height,
background and an
id. Also a two-dimensional string-array defines the various tiles that should be used in the level.
We already mentioned that TypeScript interfaces are different from C# interfaces. One of the reasons is that TypeScript interfaces allow merging. If an interface with the given name already exists, it won't be overwritten. There is also no compiler warning or error. Instead, the existing interface is extended with the properties defined in the new one.
The following interface merges the existing
Math interface (from the TypeScript base definitions) with the provided one. We gain one additional method:
interface Math { sign(x: number): number; }
Methods are specified by listing their parameters in round brackets. The usual type annotation then gives the return type of the method. With the provided interface (extension) the TypeScript compiler allows us to write the following method:
Math.sign = function(x: number) { if (x > 0) return 1; else if (x < 0) return -1; return 0; };
Another interesting option in TypeScript interfaces is the hybrid declaration. In JavaScript an object is not limited to being a pure key-value container; an object can also be invoked as a function. A great example of such behavior is jQuery. There are many possible ways to call the jQuery object, each resulting in a new jQuery selection being returned. Additionally, the jQuery object carries properties that represent nice little helpers and more useful stuff.
In case of jQuery one of the interfaces looks like:
interface JQueryStatic { (): JQuery; (html: string, ownerDocument?: Document): JQuery; ajax(settings: JQueryAjaxSettings): JQueryXHR; /* ... */ }
Here we have two possible calls (among many) and a property that is directly available. Hybrid interfaces therefore require that the implementing object is in fact a function, which is then extended with further properties.
We can also create interfaces based on other interfaces (or classes, which will be used as interfaces in this context).
Let's consider the following case. To distinguish points we use the
Point interface. Here we only declare two coordinates,
x and
y. If we want to define a picture in the code, we need two values: a location (offset) where it should be placed, and the string that represents the source of the image.
Therefore we define the interface representing this functionality as a specialization of the
Point interface. We use the
extends keyword to trigger this behavior in TypeScript.
interface Point { x: number; y: number; } interface Picture extends Point { path: string; }
We can extend as many interfaces as we want; we just need to separate them with commas.
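As a small sketch (the Size interface and the concrete property values are invented for this example, and Point and Size are repeated so the snippet is self-contained):

```typescript
interface Point { x: number; y: number; }
interface Size { width: number; height: number; }

// One interface can extend several others, comma-separated:
interface Sprite extends Point, Size {
  path: string;
}

// The combined contract must now be fulfilled completely:
var s: Sprite = { x: 0, y: 0, width: 32, height: 32, path: 'mario.png' };
```

Leaving out any of the five properties would be rejected by the compiler.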
Classes
At this stage we already typed most of our code, but an important concept has not been translated to TypeScript. The original codebase makes use of a special concept that brings class-like objects (incl. inheritance) to JavaScript. Originally this looked like the following sample:
var Gauge = Base.extend({ init: function(id, startImgX, startImgY, fps, frames, rewind) { this._super(0, 0); this.view = $('#' + id); this.setSize(this.view.width(), this.view.height()); this.setImage(this.view.css('background-image'), startImgX, startImgY); this.setupFrames(fps, frames, rewind); }, });
Unfortunately there are a lot of problems with the shown approach. The biggest one is that it is non-normative, i.e. there is no standard behind it. Developers who aren't familiar with this style of implementing class-like objects cannot read or write such code as they usually would. Also, the exact implementation is unknown. All in all, any developer has to look at the original definition of the
Class object and its usage.
With TypeScript, a unified way of creating class-like objects exists. Additionally, it is implemented in the same manner as in ECMAScript 6. Therefore we get portability, readability and extensibility that is easy to use and standardized. Coming back to our original example, we can transform it to become:
class Gauge extends Base { constructor(id: string, startImgX: number, startImgY: number, fps: number, frames: number, rewind: boolean) { super(0, 0); this.view = $('#' + id); this.setSize(this.view.width(), this.view.height()); this.setImage(this.view.css('background-image'), startImgX, startImgY); this.setupFrames(fps, frames, rewind); } };
This looks quite similar and behaves nearly identically. Nevertheless, changing the former definition to the TypeScript variant needs to be done in a single iteration. Why? If we change the base class (just called
Base), we need to change all derived classes (TypeScript classes need to inherit from other TypeScript classes).
On the other hand, if we change one of the derived classes, we cannot use the base class any more. That being said, only classes that are completely decoupled from the class hierarchy can be transformed within a single iteration. Otherwise we need to transform the whole class hierarchy.
The
extends keyword has a different meaning than for interfaces. Interfaces extend other definitions (interfaces or the interface part of a class) by the specified set of definitions. A class extends another class by setting its prototype to the given one. Additionally some other neat features are placed on top of this, like the ability to access the parent's functionality via
super.
The most important class is the root of the class hierarchy, called
Base. It contains quite a few features, most notably:
class Base implements Point, Size {
  frameCount: number;
  x: number;
  y: number;
  image: Picture;
  width: number;
  height: number;
  currentFrame: number;
  frameID: string;
  rewindFrames: boolean;
  frameTick: number;
  frames: number;
  view: JQuery;

  constructor(x: number, y: number) {
    this.setPosition(x || 0, y || 0);
    this.clearFrames();
    this.frameCount = 0;
  }

  setPosition(x: number, y: number) {
    this.x = x;
    this.y = y;
  }

  getPosition(): Point {
    return { x: this.x, y: this.y };
  }

  setImage(img: string, x: number, y: number) {
    this.image = { path: img, x: x, y: y };
  }

  setSize(width: number, height: number) {
    this.width = width;
    this.height = height;
  }

  getSize(): Size {
    return { width: this.width, height: this.height };
  }

  setupFrames(fps: number, frames: number, rewind: boolean, id?: string) {
    if (id) {
      if (this.frameID === id)
        return true;

      this.frameID = id;
    }

    this.currentFrame = 0;
    this.frameTick = frames ? (1000 / fps / setup.interval) : 0;
    this.frames = frames;
    this.rewindFrames = rewind;
    return false;
  }

  clearFrames() {
    this.frameID = undefined;
    this.frames = 0;
    this.currentFrame = 0;
    this.frameTick = 0;
  }

  playFrame() {
    if (this.frameTick && this.view) {
      this.frameCount++;

      if (this.frameCount >= this.frameTick) {
        this.frameCount = 0;

        if (this.currentFrame === this.frames)
          this.currentFrame = 0;

        this.view.css('background-position', '-' +
          (this.image.x + this.width * this.currentFrame) + 'px -' +
          this.image.y + 'px');
        this.currentFrame++;
      }
    }
  }
}
The
implements keyword is similar to implementing interfaces (explicitly) in C#. We basically establish a contract that our class provides the abilities defined in the given interfaces. While we can only extend a single class, we can implement as many interfaces as we want. In the previous example we chose not to inherit from any class, but to implement two interfaces.
Then we define what kind of fields are available on objects of the given type. The order does not matter, but defining them up front (and, most importantly, in a single place) makes sense. The
constructor function is a special function that has the same meaning as the custom
init method before. We use it as the class's constructor. The base class's constructor can be called any time via
super().
TypeScript also provides access modifiers. They are not included in the ECMAScript 6 standard, which is why I prefer not to use them. Nevertheless, we could make fields private (but remember: only from the view of the compiler, not in the JavaScript code itself) and thereby restrict access to such variables.
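As a sketch (the class name and field are invented for this example), a private field is enforced by the compiler only:

```typescript
class Meter {
  // Private for the compiler; still a plain property in the emitted JavaScript.
  private value: number = 0;

  increase(): number {
    this.value++;
    return this.value;
  }
}

var m = new Meter();
m.increase();
// m.value; // compile-time error: 'value' is private
```

At runtime the property still exists on the object, so this is a design-time guard, not real encapsulation.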
A nice usage of these modifiers is possible in combination with the constructor function itself:
class Base implements Point, Size { frameCount: number; // no x and y image: Picture; width: number; height: number; currentFrame: number; frameID: string; rewindFrames: boolean; frameTick: number; frames: number; view: JQuery; constructor(public x: number, public y: number) { this.clearFrames(); this.frameCount = 0; } /* ... */ }
By specifying that the arguments are
public, we can omit the definition (and initialization) of
x and
y in the class. TypeScript will handle this automatically.
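A minimal sketch of the equivalence (both class names are invented for illustration):

```typescript
// Parameter properties: TypeScript declares and assigns x and y for us.
class PointA {
  constructor(public x: number, public y: number) { }
}

// The hand-written long form that produces the same result:
class PointB {
  x: number;
  y: number;

  constructor(x: number, y: number) {
    this.x = x;
    this.y = y;
  }
}

var a = new PointA(3, 4);
var b = new PointB(3, 4);
// a.x === b.x and a.y === b.y
```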
Fat arrow functions
Can anyone remember how to create anonymous functions in C# prior to lambda expressions? Most (C#) devs cannot. The reason is simple: lambda expressions bring expressiveness and readability. In JavaScript, everything revolves around the concept of anonymous functions. Personally, I only use function expressions (anonymous functions) instead of function statements (named functions). It is much more obvious what is happening, more flexible, and brings a consistent look and feel to the code. I would say it is coherent.
Nevertheless, there are little snippets, where it sucks writing something like:
var me = this; me.loop = setInterval(function() { me.tick(); }, setup.interval);
Why this waste? Four lines for nothing. The first line is required, since the interval callback is invoked on behalf of the
window. Therefore we need to cache the original
this in order to find the object again. This closure is effective. Now that we stored
this in
me, we can already profit from the shorter name (at least something). Finally we need to hand that single function over to another function. Madness? Let's use the fat-arrow function!
this.loop = setInterval(() => this.tick(), setup.interval);
Ah well, now it is just a neat one-liner. One line we "lost" by preserving
this within fat-arrow functions (let's call them lambda expressions). Two more lines had been dedicated to preserving style for functions, which is now redundant since we use a lambda expression. In my opinion this is not only readable, but also understandable.
Under the hood, of course, TypeScript does the same thing as we did before. But we do not care. We also do not care about the MSIL generated by a C# compiler, or the assembly code generated by any C compiler. We only care about the (original) source code being much more readable and flexible. If we are unsure about the
this we should use the fat arrow operator.
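A small sketch of the effect (names invented): an arrow function keeps the this of its surrounding scope even when the function is handed around, exactly what a setInterval callback needs.

```typescript
class Counter {
  count: number = 0;

  // The fat arrow captures the lexical `this` of the instance.
  tick = () => { this.count++; };
}

var counter = new Counter();
var callback = counter.tick; // detached, as setInterval would receive it
callback();
callback();
// counter.count is now 2; a plain function expression would have lost `this`
```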
Extending the project
TypeScript compiles to (human-readable) JavaScript. It ends with ECMAScript 3 or 5 depending on the target.
Now that we have basically typed our whole solution, we might go even further and use some TypeScript features to make the code nicer and easier to extend and use. We will see that TypeScript offers some interesting concepts that allow us to fully decouple our application and make it accessible not only in the browser, but also on other platforms such as node.js (and therefore the terminal).
Default values and optional parameters
At this stage we are already quite good, but why leave it at that? Let's place default values for some parameters to make them optional.
For instance the following TypeScript snippet will be transformed...
var f = function(a: number = 0) { }; f();
... to this:
var f = function (a) { if (a === void 0) { a = 0; } }; f();
The
void 0 is basically a safe variant of
undefined. Note that these default values are bound dynamically, unlike default values in C#, which are bound statically. This is a great reduction in code, as we can now omit essentially all default-value checks and let TypeScript do the work.
As an example consider the following code snippet:
constructor(x: number, y: number) { this.setPosition(x || 0, y || 0); // ... }
Why should we ensure that the
x and
y values are set manually? We can place this constraint directly on the constructor function. Let's see how the updated code looks:
constructor(x: number = 0, y: number = 0) { this.setPosition(x, y); // ... }
There are other examples as well. The following shows one function after being altered:
setImage(img: string, x: number = 0, y: number = 0) { this.view.css({ backgroundImage : img ? c2u(img) : 'none', backgroundPosition : '-' + x + 'px -' + y + 'px', }); super.setImage(img, x, y); }
Again, this makes the code much easier to read. Otherwise the
backgroundPosition property would have to handle the default values inline, which looks quite ugly.
Having default values is certainly nice, but we might also have a scenario where we can safely omit the argument without specifying a default value. In that case we still have to do the work of checking whether the parameter has been supplied, but a caller may omit the argument without running into trouble.
The key is to place a question mark behind the parameter. Let's look at an example:
setupFrames(fps: number, frames: number, rewind: boolean, id?: string) { if (id) { if (this.frameID === id) return true; this.frameID = id; } // ... return false; }
Obviously we allow calling the method without specifying the
id parameter. Therefore we need to check whether it exists, which is done in the first line of the method's body. This guard protects the usage of the optional parameter, even though TypeScript allows us to use it freely. Nevertheless, we should be careful: TypeScript won't detect all mistakes - it's still our responsibility to ensure working code in every possible path.
Overloads
JavaScript by its nature does not know function overloads. The reason is quite simple: naming a function only creates a local variable, and adding a function to an object places a key in its dictionary. Both ways allow only unique identifiers; otherwise we would be allowed to have two variables or properties with the same name. Of course there is an easy way around this: we create a super function that dispatches to sub functions depending on the number and types of the arguments.
Nevertheless, while inspecting the number of arguments is easy, getting the type is hard - at least with TypeScript. TypeScript only knows the types during compile time, and then throws the whole constructed type system away. This means that no type checking is possible at runtime - at least not beyond very elementary JavaScript type checking.
Okay, so why dedicate a subsection to this topic when TypeScript does not help us here? Well, compile-time overloads are still possible and required. Many JavaScript libraries offer functions that behave one way or the other depending on the arguments. jQuery, for instance, usually offers two or more variants: one to read and one to write a certain property. When we overload methods in TypeScript, we have only one implementation with multiple signatures.
Typically one tries to avoid such ambiguous definitions, which is why there are no such methods in the original code. We do not want to introduce them now, but let's see how we could write them:
interface MathX { abs: { (v: number[]): number; (n: number): number; } }
The implementation could look as follows:
var obj: MathX = { abs: function(a) { var sum = 0; if (typeof(a) === 'number') sum = a * a; else if (Array.isArray(a)) a.forEach(v => sum += v * v); return Math.sqrt(sum); } };
The advantage of telling TypeScript about the multiple calling variants lies in the enhanced tooling capabilities. IDEs like Visual Studio or text editors like Brackets can show all the overloads including their descriptions. As usual, calls are restricted to the provided overloads, which ensures some safety.
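The same idea works for standalone functions. A sketch (the function name is invented) of how the compiler restricts calls to the declared overloads:

```typescript
// Two public signatures, one implementation that handles both:
function magnitude(values: number[]): number;
function magnitude(value: number): number;
function magnitude(a: any): number {
  var sum = 0;

  if (typeof a === 'number')
    sum = a * a;
  else
    a.forEach((v: number) => sum += v * v);

  return Math.sqrt(sum);
}

magnitude(3);       // 3
magnitude([3, 4]);  // 5
// magnitude('3');  // rejected at compile time: no matching overload
```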
Generics
Generics may be useful to tame multiple (type) usages as well. They work a little differently than in C#, since they are only evaluated at compile time. Additionally, there is nothing special about their runtime representation; there is no template metaprogramming or anything like that here. Generics are just another way to handle type safety without becoming too verbose.
Let's consider the following function:
function identity(x) { return x; }
Here the argument
x is of type
any. Therefore the function will return something of type
any. This may not sound like a problem, but let's assume the following function invocations.
var num = identity(5); var str = identity('Hello'); var obj = identity({ a : 3, b : 9 });
What is the type of
num,
str and
obj? They might have an obvious name, but from the perspective of the TypeScript compiler, they are all of type
any.
This is where generics come to the rescue. We can teach the compiler that the return type of the function matches the exact type of the argument it was called with.
function identity<T>(x: T): T { return x; }
In the above snippet we simply return the same type that entered the function. There are multiple possibilities (including returning a type determined from the context), but returning one of the argument types is probably the most common.
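One step further (a sketch with invented names): constraints on the type parameter tell the compiler which members the type must provide.

```typescript
// T must have a numeric length property (strings, arrays, ...):
function longest<T extends { length: number }>(a: T, b: T): T {
  return a.length >= b.length ? a : b;
}

longest('abc', 'de');        // 'abc', typed as string
longest([1, 2], [1, 2, 3]);  // [1, 2, 3], typed as number[]
// longest(1, 2);            // rejected: number has no length property
```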
The current code does not include any generics. The reason is simple: the code is mostly focused on changing states, not on evaluating input. Therefore we mostly deal with procedures, not functions. If we used functions with multiple argument types, classes with argument-type dependencies or similar constructs, then generics would certainly be helpful. Right now everything was possible without them.
Modules
The final touch is to decouple our application. Instead of referencing all the files, we will use a module loader (e.g. AMD for browsers, or CommonJS for node) and load the various scripts on demand. There are many advantages to this pattern. The code is much easier to test and debug, and it usually does not suffer from wrong load orders, as modules are always loaded after their specified dependencies are available.
TypeScript offers a neat abstraction over the whole module system, since it provides two keywords (
import and
export), which are transformed to some code that is related to the desired module system. This means that a single code base can be compiled to AMD conform code, as well as CommonJS conform code. There is no magic required.
As an example, the file constants.ts won't be referenced any more. Instead, the file will export its contents in the form of a module. This is done via:
export var audiopath = 'Content/audio/'; export var basepath = 'Content/'; export enum Direction { none = 0, left = 1, up = 2, right = 3, down = 4, }; /* ... */
How can this be used? Instead of having a reference comment, we use the
require() method. To indicate that we wish to use the module directly, we do not write
var, but
import. Please note that we can skip the *.ts extension. This makes sense, since the file will later have the same name, but a different ending.
import constants = require('./constants');
The difference between
var and
import is quite important. Consider the following lines:
import Direction = constants.Direction; import MarioState = constants.MarioState; import SizeState = constants.SizeState; import GroundBlocking = constants.GroundBlocking; import CollisionType = constants.CollisionType; import DeathMode = constants.DeathMode; import MushroomMode = constants.MushroomMode;
If we wrote
var, we would actually use the JavaScript representation of the property. However, we want to use the TypeScript abstraction. The JavaScript realization of
Direction is only an object; the TypeScript abstraction is a type that will be realized in the form of an object. Sometimes it does not make a difference; however, with types such as interfaces, classes or enums, we should prefer
import to
var. Otherwise we just use
var for renaming:
var setup = constants.setup; var images = constants.images;
Is this everything? Well, there is much to say about modules, but I'll try to be brief here. First of all, we can use these modules to create interfaces to files. For instance, the public interface of main.ts is given by the following snippet:
export function run(levelData: LevelFormat, controls: Keys, sounds?: SoundManager) { var level = new Level('world', controls); level.load(levelData); if (sounds) level.setSounds(sounds); level.start(); };
All modules are then brought together in a file like game.ts. We load all the dependencies and then run the game. While most modules are just objects bundling several pieces together, a module can also be just one of these pieces.
import constants = require('./constants'); import game = require('./main'); import levels = require('./testlevels'); import controls = require('./keys'); import HtmlAudioManager = require('./HtmlAudioManager'); $(document).ready(function() { var sounds = new HtmlAudioManager(constants.audiopath); game.run(levels[0], controls, sounds); });
The
controls module is an example of a single-piece module. We achieve this with a single statement such as:
export = keys;
This assigns the
export object to be the
keys object.
Let's see what we got so far. Due to the modular nature of our code we included some new files.
We gain another dependency on RequireJS, but in return our code is more robust and easier to extend than before. Additionally, all dependencies are always exposed, which drastically reduces the possibility of unknown dependencies. The module loader system, combined with IntelliSense, improved refactoring capabilities and strong typing, added much safety to the whole project.
Of course not every project can be refactored so easily. This project was small and based on a solid code base that had not rusted much.
In a final step we will break apart the massive main.ts file to create small, decoupled files, which may only depend on some settings object injected at the beginning. However, such a transformation is not for everyone. For certain projects it might add more noise than clarity.
Either way, for the
Matter class we would have the following code:
/// <reference path="def/jquery.d.ts"/>
import Base = require('./Base');
import Level = require('./Level');
import constants = require('./constants');

class Matter extends Base {
  blocking: constants.GroundBlocking;
  level: Level;

  constructor(x: number, y: number, blocking: constants.GroundBlocking, level: Level) {
    this.blocking = blocking;
    this.view = $('<div />').addClass('matter').appendTo(level.world);
    this.level = level;
    super(x, y);
    this.setSize(32, 32);
    this.addToGrid(level);
  }

  addToGrid(level) {
    level.obstacles[this.x / 32][this.level.getGridHeight() - 1 - this.y / 32] = this;
  }

  setImage(img: string, x: number = 0, y: number = 0) {
    this.view.css({
      backgroundImage: img ? img.toUrl() : 'none',
      backgroundPosition: '-' + x + 'px -' + y + 'px',
    });
    super.setImage(img, x, y);
  }

  setPosition(x: number, y: number) {
    this.view.css({
      left: x,
      bottom: y
    });
    super.setPosition(x, y);
  }
}

export = Matter;
This technique would refine the dependencies, and the code base would gain accessibility. Nevertheless, it depends on the project and the state of the code whether further refinement is actually desirable or unnecessary cosmetics.
Using the code
The code is live and available online at GitHub. The repository can be reached via github.com/FlorianRappl/Mario5TS. The repository itself contains some information on TypeScript. Additionally, the build system Gulp has been used; I will introduce this build system in another post. The repository also contains a short installation / usage guide, which should give a jump start to everyone who does not know gulp or TypeScript yet.
Since the origin of the code lies in the Mario5 article, I also suggest everyone who has not read it to have a look. The article is available on CodeProject at codeproject.com/Articles/396959/Mario. There is also a follow-up article on CodeProject which deals with extending the original source. The extension is a level editor, which showcases that the design of the Mario5 game has indeed been quite good, as most parts of the UI can easily be re-used to create the editor. You can access the article at codeproject.com/Articles/432832/Editor-for-Mario. It should be noted that the article also covers a social game platform that combines the game and the editor in a single webpage, which can be used to save and share custom levels.
Points of Interest
One of the most frequently asked questions about the original article was where to acquire the sound and how to set up the sound system. It turns out that the sound might be one of the most interesting parts, yet I decided to drop it from the article. Why?
- The sound files might cause a legal problem (however, same could be said about the graphics)
- The sound files are actually quite big (effect files are small, but background music is O(MB))
- Every sound file has to be duplicated to avoid compatibility issues (OGG and MP3 files are distributed)
- The game has been made independent of a particular sound implementation
The last argument is my key point. I wanted to illustrate that the game can actually work without being strongly coupled to a particular sound implementation. Audio has been a widely discussed topic for web applications. First of all we need to consider a range of formats, since different formats and encodings only work in a subset of browsers. To reach all major browsers, one usually needs at least two different formats (typically one open and one proprietary). Additionally, the current implementation of the
HTMLAudioElement is not very efficient or useful for games. That is what motivated Google to work on another standard (the Web Audio API), which works much better for games.
Nevertheless, you want a standard implementation? The GitHub repository actually contains one. The original JavaScript version is available, as is the Type'd version. Both are just called
SoundManager. One is in the folder Original, the other in the Scripts folder (both are subfolders of src).
History
- v1.0.0 | Initial Release | 18.11.2014
10 November 2010 10:15 [Source: ICIS news]
SHANGHAI (ICIS)--
“Our group cut base oils output by 10% in November as we allocated more feedstock to boost diesel production,” a source at petrochemical giant Sinopec said in Mandarin.
Diesel consumption in
Sinopec subsidiary Jingmen Petrochemical, which has a 250,000 tonne/year base oils capacity, would halve its monthly output in November to less than 7,000 tonnes, the source said.
“Most output was [being] supplied only to Sinopec's lubricant plants and the group doesn’t have available stocks to sell to the market,” he said.
Meanwhile, PetroChina had suspended spot market sales of base oils this month as production would not even hit 60% of its average monthly output, a company source said.
Apart from beefing up its diesel production rates, PetroChina also shut some plants for maintenance this month.
This included a 700,000 tonne/year naphthenic base oil refinery in Karamay, Xinjiang province, and a 150,000 tonne/year Group I base oils refinery in
Most regions in
($1 = CNY
I have to develop new Arduino projects each week for my young pupil, who is passionate about electronics and hardware. I found our latest project creative and interesting, and I bet some of you can use it as a geek experiment.
What we need here are graphite-based pencils (or graphite directly, which works better), an Arduino, resistors, a LED, a metallic clip, and a regular sheet of white paper. How does it work?
First we should understand the principle. On the Arduino, we can use several sensors as INPUTS, such as a button, a light sensor, a humidity sensor, etc. But we can also attach home-made INPUTS using conductive materials. Steel and other metals are common conductors (you can try this experiment with a coin, too) and so is graphite.
To make this work on the Arduino, we use a special library called CapacitiveSensor04. Once the library is added, we can start designing the circuit. This is an example with steel paper; it works the same with graphite. Just draw something (very dense), attach a paper clip to the drawing (be careful, it should be a single connected line) and run a cable from the clip to the sensor pin, which is the one connected to the resistor and pins 4 and 2.
And this is our code:
#include <CapacitiveSensor.h>

// Pin 4 sends the signal, pin 2 reads it back through the drawing
CapacitiveSensor capSensor = CapacitiveSensor(4, 2);

int threshold = 1000;
const int ledPin = 12;

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // Sample the sensor (30 measurement cycles)
  long sensorValue = capSensor.capacitiveSensor(30);
  Serial.println(sensorValue);

  // Light the LED while the drawing is being touched
  if (sensorValue > threshold) {
    digitalWrite(ledPin, HIGH);
  }
  else {
    digitalWrite(ledPin, LOW);
  }

  delay(10);
}
We might have to calibrate the threshold, in which case we only have to open the Serial Monitor and test. And... tadaah! An interactive drawing that lights a LED. You can now do other things. Just experiment!
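One easy extension (a sketch; the class name and threshold values are made up for illustration): instead of a single threshold, use two thresholds (hysteresis), so noisy readings hovering near the limit do not make the LED flicker. The filter is plain C++ so it can be tried off the board; on the Arduino it would simply wrap the sensorValue check.

```cpp
// Two-threshold (hysteresis) touch filter: the state only switches on
// above onThreshold and only switches off below offThreshold, so a
// noisy reading near a single limit cannot toggle the LED rapidly.
class TouchFilter {
  long onThreshold;
  long offThreshold;
  bool state;

public:
  TouchFilter(long on, long off)
    : onThreshold(on), offThreshold(off), state(false) { }

  bool update(long reading) {
    if (!state && reading > onThreshold)
      state = true;
    else if (state && reading < offThreshold)
      state = false;
    return state;
  }
};
```

Inside loop(), the plain comparison would become something like digitalWrite(ledPin, filter.update(sensorValue) ? HIGH : LOW).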
In case you are a tutor, teacher or parent, here's the content of my class in spanish ready for the students (answers, incomplete code for them and complete code with comments for the teachers and activities).
Posted by: Paula
Offensive security, into privacy and digital rights. I give speeches, write articles and founded a digital privacy awareness association called Interferencias in Spain. Japanese style tattooing.
Discussion
Amazing! Thanks for sharing this :D
When I first posted about Django, I said that I'd post the details of how I wrote a blog in Django, without actually writing any real Python code. This will show you how you too can have a simple blog working in no time at all.
I'll assume for this article that you've got Django installed and working, and you know the basics. Create a new project and within it create a 'blog' application. Edit the models file for that application, so it contains the following text:
from django.core import meta

class Tag(meta.Model):
    slug = meta.SlugField(
        'Slug',
        prepopulate_from=("title",),
        help_text='Automatically built from the title.',
        primary_key='True'
    )
    title = meta.CharField('Title', maxlength=30)
    description = meta.TextField(
        'Description',
        help_text='Short summary of this tag'
    )

    def __repr__(self):
        return self.title

    def get_absolute_url(self):
        return "/tag/%s/" % self.slug

    class META:
        admin = meta.Admin(
            list_display = ('slug', 'title',),
            search_fields = ('title', 'description',),
        )

class Post(meta.Model):
    slug = meta.SlugField(
        'Slug',
        prepopulate_from=('title',),
        help_text='Automatically built from the title.',
        primary_key='True'
    )
    assoc_tags = meta.ManyToManyField(Tag)
    title = meta.CharField('Title', maxlength=30)
    date = meta.DateTimeField('Date')
    image = meta.ImageField(
        'Attach Image',
        upload_to='postimgs',
        blank=True
    )
    body = meta.TextField('Body Text')

    def __repr__(self):
        return self.title

    def get_absolute_url(self):
        return "/blog/%s/%s/" % (self.date.strftime("%Y/%b/%d").lower(), self.slug)

    class META:
        admin = meta.Admin(
            list_display = ('slug', 'title', 'date'),
            search_fields = ('title', 'description'),
            date_hierarchy = ('date',),
        )
        ordering = ('-date',)
What this creates for us is three database tables. The first, which will be called 'blog_tags' (a concatenation of the application name and the pluralisation of the class name), will contain three fields. First is a 'slug', which is an alphanumeric representation of the title (and our primary key). This will be automatically generated from the second field in this table, the title. If the title is, for example, "How To Make A Nice Coffee", the slug will be "how-to-make-nice-coffee" - all in lowercase, with hyphens for spaces, and common short words (such as "a" and "at") removed. Lastly is a description field, which we'll use later in tooltips.
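To make the slug rule concrete, here is a rough sketch of such a slug builder in plain Python. The stop-word list here is a guess for illustration; Django's real removal list is longer and lives in its admin JavaScript.

```python
import re

STOP_WORDS = {'a', 'an', 'the'}  # assumed subset of Django's removal list

def make_slug(title):
    # Lowercase, strip punctuation, drop common short words, hyphenate.
    words = re.sub(r'[^a-z0-9\s-]', '', title.lower()).split()
    return '-'.join(w for w in words if w not in STOP_WORDS)

print(make_slug("How To Make A Nice Coffee"))  # how-to-make-nice-coffee
```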
For this tags table, we also define a few functions. The first, __repr__, returns the title whenever a basic representation of the object is asked for - this makes it easier to use this model later on. We also define a function called 'get_absolute_url' - this is used for us to quickly build a link to view this tag outside of the admin - in this case, we will link to /tag/tag-slug-name/. Later on we will build a template that, when called via this URL, will display a list of articles that belong to that tag. Lastly we define some simple Admin stuff, which I won't detail here.
The second table to be created is called 'blog_posts' and will contain a few simple fields. Again, a slug and title, same as the 'tags' table. It will also contain a ManyToMany field, relating it to the 'tags' table. This lets us select multiple tags for each post, providing a quick and easy way to categorise blog postings without having them tied into only one category. There is also a date field (which, in the Django admin, will provide pop-up calendars and time selectors... cool!), an Image field (which lets you attach an image to a post), and a field for the body of the text. Again, we define __repr__ and basic Admin options, and a get_absolute_url function: this time the URL will refer to /blog/year/month/day/slug/, which allows each post to have a simple-to-read URL that will never change.
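The date-based part of that URL comes straight from strftime. A quick sketch of what get_absolute_url produces (note that %b is locale-dependent, so this assumes an English locale; the slug value is made up for illustration):

```python
from datetime import datetime

date = datetime(2006, 1, 23)
slug = "building-blog-django"

# Mirrors the get_absolute_url method in the Post model above.
url = "/blog/%s/%s/" % (date.strftime("%Y/%b/%d").lower(), slug)
print(url)  # /blog/2006/jan/23/building-blog-django/
```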
Save your changes, and then edit your 'urls.py' file. Ensure the admin line is uncommented:
(r'^admin/', include('django.contrib.admin.urls.admin')),
Now visit your /admin/ URL. Log in with your Django username and password, and you'll see 'Tags' and 'Posts' options in the list. Add a few tags, noting that the slug is built automatically. Add a few posts, noting you get nice widgets for selecting multiple tags, picking dates, and building slugfields.
In the next article, find out how we can use Django's generic views to display blog postings, including a date-based archive, by writing practically zero code. Stay tuned, and in the meantime see if you can tweak the above model a little, and learn how objects are represented in the Admin screens.
http://www.rossp.org/blog/2006/jan/23/building-blog-django-1/
read ASCII or binary stereo lithography files
#include <vtkSTLReader.h>
read ASCII or binary stereo lithography files
vtkSTLReader is a source object that reads ASCII or binary stereo lithography files (.stl files). The FileName must be specified to vtkSTLReader. The object automatically detects whether the file is ASCII or binary.
.stl files are quite inefficient since they duplicate vertex definitions. By setting the Merging boolean you can control whether the point data is merged after reading. Merging is performed by default, however, merging requires a large amount of temporary storage since a 3D hash table must be constructed.
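The merging step can be pictured with a small pure-Python analogue (illustrative only; VTK's vtkMergePoints uses a real 3D spatial hash): each vertex's coordinates key a hash table, so duplicated vertex definitions collapse to a single shared point index.

```python
# Simplified analogue of point merging: key a dictionary on vertex
# coordinates so duplicated STL vertex definitions collapse to one
# shared point index.
def merge_points(triangles):
    points, index, cells = [], {}, []
    for tri in triangles:
        cell = []
        for xyz in tri:
            if xyz not in index:
                index[xyz] = len(points)
                points.append(xyz)
            cell.append(index[xyz])
        cells.append(tuple(cell))
    return points, cells

# Two triangles sharing an edge: 6 vertex definitions, 4 unique points.
tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
        ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
points, cells = merge_points(tris)
print(len(points), cells)  # 4 [(0, 1, 2), (1, 3, 2)]
```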
Definition at line 46 of file vtkSTLReader.h.
Definition at line 49 of file vtkSTLReader.h.
Turn on/off merging of points/triangles.
Turn on/off tagging of solids with scalars.
Specify a spatial locator for merging points.
By default an instance of vtkMergePoints is used.
Get header string.
If an ASCII STL file contains multiple solids then headers are separated by a newline character. If a binary STL file is read, the first zero-terminated string is stored in this header; the full header is available by using GetBinaryHeader().
Get binary file header string.
If an ASCII STL file is read then BinaryHeader is not set, and the header can be retrieved using GetHeader() instead.
Create default locator.
Used to create one when none is specified.
Set header string.
Internal use only.
This is called by the superclass.
This is the method you should override.
Reimplemented from vtkPolyDataAlgorithm.
Definition at line 124 of file vtkSTLReader.h.
Definition at line 125 of file vtkSTLReader.h.
Definition at line 126 of file vtkSTLReader.h.
Definition at line 127 of file vtkSTLReader.h.
Definition at line 128 of file vtkSTLReader.h.
https://vtk.org/doc/nightly/html/classvtkSTLReader.html
jGuru Forums
Posted By:
ravi_raj
Posted On:
Monday, March 11, 2002 12:30 AM
I have defined an hierarchy of packages and organized the source files in appropriate subdirectories.
root package : FlexIMA
It has subdirectories :
ByteCodeExtractor
Precompiler
AgentCreation
I have placed a package statement for *.java files belonging to respective packages .
For eg : In Precompiler ..
package FlexIMA.Precompiler ;
I want to use the classes defined in ByteCodeExtractor and Precompiler in the *.java files of AgentCreation.
For eg : I have a source file called Creator.java in which the first few lines are ...
package FlexIMA.AgentCreation ;
import FlexIMA.Precompiler.* ;
import FlexIMA.ByteCodeExtractor.* ;
Whenever I compile the *.java files in the AgentCreation package I get errors of the form: "Package not found".
What is the solution?
Re: Packages
Posted By:
Sean_Ruff
Posted On:
Monday, March 11, 2002 06:03 AM
If the folder FlexIMA is located at c:\JavaProjects\FlexIMA, then you could do the following:
javac -classpath c:\JavaProjects *.java
http://www.jguru.com/forums/view.jsp?EID=790468
best of both worlds
Level: Intermediate
Michael Roberts (michael@vivtek.com), Owner, Vivtek
01 Aug 2001
Extending Python in C is easy once you see how it all works, and an extension of Python is equally easy to package up for Zope. The hard part is wading through the different documentation sets in search of the nuggets of information you need, and Michael has collected them for you in this article.
You might want to extend Zope in C for several reasons. The most likely is that you have an existing C library that already does something you need, and you're
not excited about translating it to Python. Also, since Python is an interpreted language, any
Python code that gets called a lot is going to slow you down. So even if you have some extension you've
written in Python, you may still want to consider moving the most often called parts into C. Either
way, extending Zope starts with extending Python. Furthermore, extending Python gets you other nice benefits because your code is accessible from any Python script, not just Zope. The only caveat here
is that although Python's current version is 2.1 as of this writing, Zope still runs only with Python 1.5.2.
For C extensions, there are no changes between the two versions, but if you get fancy with the Python wrappings
for your library, you need to be careful not to use anything that's newer than 1.5.2 if you want it all to work under Zope.
What is Zope?
Zope stands for "Z Object Publishing Environment", and it's an application server implemented in Python. "Great," you say, "but what exactly is an application server?" An application server is simply a long-running process that provides services for active content. The Web server then makes calls to the application server in order to have pages built at runtime.
Extending Python for fun and profit
To extend Zope, you first extend Python. While extending Python is not brain surgery, it's no walk in the park either. There are two basic components to a Python extension. The first is obviously the C code. I'll cover that in a minute.
The other component is the Setup file. The Setup file describes the module by supplying its module name, the
location of its C code, and any compiler flags you may need. This file is preprocessed to create a makefile (on UNIX)
or MSVC++ project files (on Windows). Before you ask -- Python on Windows is indeed built using the Microsoft compilers.
The folks at Python.org recommend using MSVC++ to build extensions as well. It stands to reason that you should be
able to persuade GNU compilers to do the trick, but I haven't tried that myself.
At any rate, let's define a little module called 'foo'. The 'foo' module will have a function called 'bar'. When we
get things running, we will be able to import this function into a Python script using import foo;, just
like any module. The Setup file is very simple:
# You can include comment lines. The *shared* directive indicates
# that the following module(s) are to be compiled and linked for
# dynamic loading as opposed to static: .so on Unix, .dll on Windows.
*shared*
# Then you can use the variables later using the $(variable) syntax
# that 'make' uses. This next line defines our module and tells
# Python where its source code is.
foo foomain.c
Writing the code
So how do we actually write code that Python knows how to use, you ask? The foomain.c
file (you can call it anything you want, of course) contains three things: a method table, an initialization function, and the
rest of the code. The method table simply associates names with functions, and tells Python what parameter-passing mechanism
each function uses (you have a choice between a regular list of positional arguments, or a mix of positional and keyword arguments).
Python calls the initialization function when the module loads. It does whatever initialization is required for the module,
if any, but most crucially, it also passes a pointer to the method table back to Python.
foomain.c
So let's look at the C code for our little foo module.
#include <Python.h>
#include <string.h>   /* for strlen() */

/* Define the method table. */
static PyObject *foo_bar(PyObject *self, PyObject *args);

static PyMethodDef FooMethods[] = {
    {"bar", foo_bar, METH_VARARGS},
    {NULL, NULL}
};

/* Here's the initialization function. We don't need to do anything
   for our own needs, but Python needs that method table. */
void initfoo()
{
    (void) Py_InitModule("foo", FooMethods);
}

/* Finally, let's do something ... involved ... as an example function. */
static PyObject *foo_bar(PyObject *self, PyObject *args)
{
    char *string;
    int len;

    if (!PyArg_ParseTuple(args, "s", &string))
        return NULL;
    len = strlen(string);
    return Py_BuildValue("i", len);
}
A closer look
Let's look at that code for a moment. First, notice that you have to include Python.h. Unless you have your include
path set up to find that file, you may need to include an -I flag in your Setup file to point to it.
The initialization function must be named init<module name>, in our case initfoo. The initialization function's
name is, of course, all that Python knows about your module when it loads it, and that's why its name is so constrained. The init
function, by the way, should be the only global identifier in the file that isn't declared static. This is more
crucial for static linking than dynamic, because a non-static identifier will be globally visible. This isn't too
much of a problem with dynamic linking, but if you're linking everything at compile-time, you will very likely run into name collisions
if you don't make everything static that you can.
When we get down to the actual code, take a look at how parameters are processed, and how the return value is passed. Everything,
of course, is a PyObject -- an object on the Python heap. What you get in your arguments are a reference to the "this" object (this is
for object methods and is NULL for plain old functions like bar()) and an argument tuple in args. You retrieve
your arguments using PyArg_ParseTuple, then you pass your result back with Py_BuildValue. These functions and
more are documented in the "Python/C API" section of the Python documentation. Unfortunately, there is no simple listing
of functions by name; instead the document is arranged by topic.
Notice also that in case of error, the function returns NULL. A NULL return signals an error; to work with Python even better, though,
you should raise an exception. I'll refer you to the documentation on how that works.
Building the extension
All that remains now is building the module. There are two ways you can do this. The first is to follow the instructions
in the documentation, and run make -f Makefile.pre.in boot, which builds a Makefile using your Setup. Then you use that
to build your project. This route works only on UNIX. For Windows, there is a script called "compile.py" (see Resources later in this article). The original script is hard to
find; I found a highly modified copy in a mailing list posting from Robin Dunn (the man behind wxPython). This script is supposed
to work on UNIX and Windows; on Windows it builds MSVC++ project files starting from your Setup.
To run the build, you'll need to have includes and libraries available. The standard Zope installation of Python doesn't contain these
files, so you'll need to install the regular Python distribution (see Resources). On Windows, you'll also have to get the config.h
file from the PC directory of the source installation; it's a manual version of the config.h that the UNIX installation builds for you.
On UNIX, therefore, you should already have it.
Once all this is complete, the result is a file with the extension ".pyd". Put this file into your Python installation's "lib" directory
(under Zope, Python lives in the "bin" directory, so you want your extension to end up in the "bin/lib" directory, oddly.) Then you can
call it, just like any native Python module!
>>> import foo;
>>> foo.bar ("This is a test");
14
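For reference, the whole C extension is behaviourally equivalent to this one-line pure-Python function, which is useful as a sanity check while developing (illustrative sketch, not part of the article's original code):

```python
# Pure-Python equivalent of the foo.bar C function above: the "s" format
# in PyArg_ParseTuple accepts only strings, and the function returns the
# string's length.
def bar(string):
    if not isinstance(string, str):
        raise TypeError("bar() argument must be a string")
    return len(string)

print(bar("This is a test"))  # 14
```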
My first question when I got this far was to ask myself how I would define a class in C that would be visible from Python. It
turns out I was probably asking the wrong question. From examples I've studied, anything that Python-specific is simply done in
Python and calls the C functions you export from your extension.
Taking it to Zope
Once your Python extension is complete, the next step is getting Zope to work with it. There are a couple of routes you can take, and to
a certain extent, the way you want your extension to work with Zope will influence the way you build it in the first place. The basic
ways to use your Python (and by extension, C) code from inside Zope are as an External Method, as a Zope script, or as a full Product; each is covered below.
Of course, your own application may use some combination of these modes.
Creating an External Method
The simplest way to call Python from Zope is to make your Python code an External Method. External Methods are Python
functions that have been placed into the "Extensions" directory in the Zope installation. Once you have such a Python file there, you
can go to any folder, choose "Add External Method", and add a variable that invokes the function in question. Then you can add a DTML
field into any page in the folder that displays the result of that invocation. Let's look at a quick example, using our foo.bar
Python extension defined above.
First, the extension itself: let's put it into a file named, say, foo.py. Remember, this file goes into the Extensions directory under
Zope. For this to work, of course, our foo.pyd created above has to be in the Python library in bin/lib. This is what a
simple wrapper for that might look like:
import foo

def bar(self, arg):
    """A simple external method."""
    return 'Arg length: %d' % foo.bar(arg)
Simple, right? This defines an external method "bar", which can be attached to any folder using the Zope management interface. Then to
call our extension from any page in that folder, we simply insert a DTML variable reference like this:
<dtml-var bar('This is a test')>
When our page is viewed by the user, the DTML field will be replaced by the text "Arg length: 14". And we've thus extended Zope in C.
Zope scripts: Cliff Notes version
Zope scripts are a new feature in Zope 2.3, and they are intended to supplant External Methods. They can do anything External Methods
can do, but they're better integrated with the security and management system, they offer more flexibility in integration, and they also
have a great deal more access to all the Zope functionality exposed in the Zope API.
A script is basically just a short Python program. It can define classes or functions, but doesn't have to. It's installed as an
object in a Zope folder, and can then be called as a DTML variable or call (like an External Method) or "from the Web", which is a Zopism
meaning that it will be invoked as a page. This implies, of course, that a script can generate the response to a forms submission, just like
a CGI program but without the CGI overhead. A nifty feature indeed. In addition, the script has access to the object it's been invoked on or from
(via the "context" object), the folder that object is in (via the "container" object), and a few other odds and ends of information. For
a great deal more information about scripts, see the chapter "Advanced Zope Scripting" in The Zope Book (see Resources).
You might make the mistake of thinking that you can simply import foo and use foo.bar directly from a script (I know I did). However, this isn't
the case. Due to security restrictions, only Products can be imported, not arbitrary modules. As a general policy, the Zope designers have the idea that access to the file system is required for arbitrary scripting, and since script objects are managed from the Web using
the Zope management interface, they're not fully trusted. So instead of showing you an example script, I'm just
going to cut to the chase and talk about Products and base classes.
Going all out with a Product
The power-tool method of extending Zope is the Product. At the installation level, a Product is just a directory in the "lib/python/Products"
directory under the Zope directory. You can see lots of examples in your own Zope installation, but essentially, a minimal Product consists
of just two files in its directory: an arbitrarily named code file, and a file called __init__.py that Zope calls to initialize the
Product at startup. (Note that Zope only reads Product files at startup, meaning that for testing you must be able to stop and restart the
Zope process.) This article only hints at the vast amount of stuff you can do with Zope Products.
The thing to understand is that a Product packages up one or more classes that can be used from ZClasses, scripts, or directly from URLs over the
Web. (In the last case, of course, instances of the Product are treated as folders; then the last part of the URL names the method to be
called, and it returns arbitrary HTML.) You don't have to treat a Product as an "addable" object, but that's its primary purpose. For a
good real-life example, take a look at the ZCatalog implementation, part of the standard Zope distribution. There you will see a
fairly simple installation script in __init__.py, and in ZCatalog.py you can see the ZCatalog class, which presents a number of methods
for publication. Note that Zope uses an odd convention to determine which methods are accessible via the Web -- if a method has a doc
string, it is Web accessible; otherwise, it's considered private.
At any rate, let's look at a very simple Product that uses our C module defined up above. First, a very simple __init__.py; note
that the only thing this does is to tell Zope what the name is of the class we're installing. More elaborate initialization scripts
can do a lot more, declaring global variables to be maintained by the server, setting up access privileges, and so on. For a
lot more detail, see the Zope Developer's Guide in the online documentation and study the stock Products in your Zope installation. As
you might have guessed, our example Product is called "Foo". Thus you would create a Foo subdirectory in your lib/python/Products
directory.
import Foo

def initialize(context):
    context.registerClass(
        Foo.Foo,
        permission='Add Foo',
        constructors=Foo.manage_addFoo
    )
Now notice that this initialization script not only imports the class in order to make it accessible to other
parts of Zope, it also registers it for "addability". The context.registerClass call does that by naming the class we imported,
then specifying the name of a method that can be used to add an instance (this method must display a management page, and it will
automatically be integrated with the Zope management interface). Cool.
So let's scratch out a simple little Product. It will expose our foo.bar function to scripts and ZClasses, and it will also
have a little interface as an "addable" object, and that's about all.
import foo
from OFS import SimpleItem   # base class for simple Zope objects

class Foo(SimpleItem.Item):
    "A Foo Product"
    meta_type = 'foo'

    def bar(self, string):
        return foo.bar(string)

    def __init__(self, id):
        "Initialize an instance"
        self.id = id

    def index_html(self):
        "Basic view of object"
        return 'My id is %s and its length is %d.' % (self.id, foo.bar(self.id))

def manage_addFoo(self, RESPONSE):
    "Management handler to add an instance to a folder."
    self._setObject('Foo_id', Foo('Foo_id'))
    RESPONSE.redirect('index_html')
This is just barely a Product. It's not quite the absolute tiniest possible Product, but it's close. It does illustrate
a few key insights about Products, though. First, note the "index_html" method; it is called to present an object instance, and it
does so by building HTML. It's effectively a page. The manage_addFoo method is our interface to Zope object management; it was
referenced above in our __init__.py. The "__init__" method initializes the object; all it really must do is record
the instance's unique identifier.
This micro-Product doesn't interact with Zope security. It doesn't do much management. It has no interactive features. So there's
a lot you could add to it (even besides useful functionality, which it also doesn't have.) I hope it's a good start for you.
Where to go from here
This quick introduction to Zope Products has shown you how to take
a C-language function from C code to usability in Zope. To learn how to write Products, you're going
to have to read much more documentation (most of it still in progress) and, frankly, study existing Products to see how they're done.
There is a great deal of power and flexibility in the Zope model, and it's well worth exploring.
I'm currently working on a large C integration project with Zope: I'm integrating my workflow toolkit. By the time this article
is published, I hope it will be in some shape to be read. It's listed in the Resources below, so check it out; there should be an extended example by the time you read this. Wish me luck.
Resources
About the author
Michael Roberts has been coding for money for thirteen years, but has done it in public for only a year or so. Lately it's rumored he even writes articles about it. You can contact Michael
at michael@vivtek.com. He welcomes your comments and questions.
http://www-106.ibm.com/developerworks/library/l-pyzo.html
For those of you not familiar with the rules, they can be found here:
I am sort of new to Java and I am trying to write the Game of Life. I have got the board made and a way to randomly assign each cell a true or false (living or dead) value, but for each cell I now need to write a way to count its neighbors so the rules of the game can be followed. From examples, I know that of the two if statements I have in the countLiveNeighbors function, the first checks for one of the four surrounding neighbors, while the second checks for one of the four diagonal neighbors. I have tried many things and thought about this for a while, but I am not sure what the other if statements should be or what this function should return.
*Also, when I run this program I get an out-of-bounds error on lines 14 and 15; I will star them.*
package gameolife;

import java.util.Random;

public class Life {

    private static final int ROW = 40;
    private static final int COL = 40;

    public static void main(String[] args) {
        Random randGen = new Random();
        boolean[][] nextBoard = new boolean[ROW + 2][COL + 2];
        boolean[][] currBoard = new boolean[ROW + 2][COL + 2];

        // The starred lines below threw ArrayIndexOutOfBoundsException
        // because the original loop conditions were "i <= currBoard.length"
        // and "i <= nextBoard.length" (note: testing i, not j, in the inner
        // loop). Using "<" and testing "j" fixes the error.
        for (int i = 0; i < currBoard.length; i++) {
            for (int j = 0; j < nextBoard[i].length; j++) {
                currBoard[i][j] = false;  // *
                nextBoard[i][j] = false;  // *
            }
        }

        for (int k = 1; k < currBoard.length - 1; k++) {
            for (int l = 1; l < currBoard.length - 1; l++) {
                if (randGen.nextInt(10) == 2) {
                    currBoard[k][l] = true;
                }
            }
        }
    }

    public static int countLiveNeighbors(int row, int col, boolean[][] board) {
        int count = 0;
        if (row - 1 >= 0 && board[row - 1][col]) {
            count++;
        }
        if (row + 1 < COL && board[row + 1][col]) {
            count++;
        }
        // ... the remaining six neighbor checks would go here ...
        return count;
    }
}
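Rather than writing eight separate if statements, the neighbour count can be expressed as two nested offset loops. Here is the idea sketched in Python (illustrative; the same structure translates directly into the countLiveNeighbors method):

```python
def count_live_neighbors(row, col, board):
    rows, cols = len(board), len(board[0])
    count = 0
    for dr in (-1, 0, 1):          # row offset: above, same, below
        for dc in (-1, 0, 1):      # column offset: left, same, right
            if dr == 0 and dc == 0:
                continue           # skip the cell itself
            r, c = row + dr, col + dc
            if 0 <= r < rows and 0 <= c < cols and board[r][c]:
                count += 1
    return count

board = [[False, True,  False],
         [True,  True,  False],
         [False, False, False]]
print(count_live_neighbors(1, 1, board))  # 2
```

The bounds check on r and c replaces the per-statement `row - 1 >= 0` style guards, and the function returns the number of live neighbours, which the caller then compares against the rules of the game.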
https://www.daniweb.com/programming/software-development/threads/192843/conway-s-game-of-life-help
Continuous Integration - Testing
One of the key features of Continuous Integration is ensuring that all code built by the CI server is covered by on-going testing. After the CI server carries out a build, test cases must be in place so that the required code gets tested. Every CI server can run unit test cases as part of the CI suite. In .Net, unit testing is built into the .Net framework, and it can be incorporated into the CI server as well.
This chapter shows how to define a test case in .Net and then have our TeamCity server run it after the build completes. For this, we first need a unit test defined for our sample project.
To do this, follow these steps carefully.
Step 1 − Let’s add a new class to our solution, which will be used in our Unit Test. This class will have a name variable, which will hold the string “Continuous Integration”. This string will be displayed on the web page. Right-click on the Simple Project and choose the menu option Add → Class.
Step 2 − Give a name for the class as Tutorial.cs and click the Add button at the bottom of the screen.
Step 3 − Open the Tutorial.cs file and add the following code in it. This code just creates a string called Name, and in the Constructor assign the name to a string value as Continuous Integration.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace Simple
{
    public class Tutorial
    {
        public String Name;

        public Tutorial()
        {
            Name = "Continuous Integration";
        }
    }
}
Step 4 − Let us make the change to our Demo.aspx.cs file to use this new class. Update the code in this file with the following code. So this code will now create a new instance of the class created above.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace Simple
{
    public partial class Demo : System.Web.UI.Page
    {
        Tutorial tp = new Tutorial();

        protected void Page_Load(object sender, EventArgs e)
        {
            tp.Name = "Continuous Integration";
        }
    }
}
Step 5 − In our demo.aspx file, let us now reference the tp.Name variable, which was created in the aspx.cs file.
<%@ Page ... %>
<html>
<head runat="server">
    <title>TutorialsPoint1</title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
            <%= tp.Name %>
        </div>
    </form>
</body>
</html>
Just to ensure our code works fine with these changes, you can run the code in Visual Studio. You should get the following output once the compilation is complete.
Step 6 − Now it is time to add our Unit tests to the project. Right-click on Solution and choose the menu option Add → New Project.
Step 7 − Navigate to Test and on the right hand side, choose Unit Test Project. Give a name as DemoTest and then click OK.
Step 8 − In your Demo Test project, you need to add a reference to the Simple project and to the necessary testing assemblies. Right-click on the project and choose the menu option Add Reference.
Step 9 − In the next screen that comes up, go to Projects, choose Simple Reference and click OK.
Step 10 − Click Add Reference again, go to Assemblies and type Web in the Search box. Then add a reference of System.Web.
Step 11 − In the Unit Test file, add the following code. This code will ensure that the Tutorial class has a string name variable. It will also assert the fact that the Name should equal a value of “Continuous Integration”. This will be our simple Test case.
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.VisualStudio.TestTools.UnitTesting.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using Simple;

namespace DemoTest
{
    [TestClass]
    public class UnitTest1
    {
        [TestMethod]
        public void TestMethod1()
        {
            Tutorial tp = new Tutorial();
            Assert.AreEqual(tp.Name, "Continuous Integration");
        }
    }
}
Step 12 − Now let’s run our test in Visual Studio to make sure it works. In Visual Studio, choose the menu option Test → Run → All Tests.
After running the test, you will see the Test successfully run on the left hand side of Visual Studio.
Enabling Continuous Testing within TeamCity – Now that all the test cases are in place, it is time to integrate these into our TeamCity server.
Step 13 − For this, we need to create a build step in our Project configuration. Go to your project home and click Edit Configuration Settings.
Step 14 − Then go to Build Step → MS Build and click Add build step as depicted in the following screenshot.
In the next screen that comes up, add the following values −
- Choose the runner type as Visual Studio Tests.
- Enter an optional Test step name.
- Choose the Test Engine type as VSTest.
- Choose the Test Engine version as VSTest2013.
- In the Test files name, provide the location as DemoTest\bin\Debug\DemoTest.dll – Remember that DemoTest is the name of our project which contains our Unit Tests. The DemoTest.dll will be generated by our first build step.
- Click Save which will be available at the end of the screen.
Now you will have 2 build steps for your project. The first is the Build step which will build your application code and your test project. And the next will be used to run your test cases.
Step 15 − Now it is time to check-in all your code in Git, so that the entire build process can be triggered. The only difference is this time, you need to run the git add and git commit command from the Demo parent folder as shown in the following screenshot.
Now when the build is triggered, you will see an initial output which will say that the test passed.
Step 16 − If you click on the Test passed result and go to the Test tab, you will now see that the UnitTest1 was executed and that it is passed.
http://www.mumbai-academics.com/2018/07/continuous-integration-testing.html
Cookies are strings of text that the server can store on the client side. These cookies can then be sent back by the client to the server each time the client accesses that server again. Cookies are commonly used for session tracking, authentication, site preferences and maintaining specific information about users. For example, items a client stores in a shopping cart can be stored on the client side as cookies so that they can leave the online store and return later to check out.
So, how do we set cookies from within a Silverlight application? To accomplish this we turn again to the HtmlPage.Document object. To use this object you must add a using statement to reference the System.Windows.Browser namespace.
To set a cookie we need to call SetProperty() with a string in the following format: “Key=Value;expires=ExpireDate.”
For example:
private void SetCookie(string key, string value)
{
// Expire in 7 days
DateTime expireDate = DateTime.Now + TimeSpan.FromDays(7);
string newCookie = key + "=" + value + ";expires=" + expireDate.ToString("R");
HtmlPage.Document.SetProperty("cookie", newCookie);
}
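The cookie string itself is just "key=value;expires=<RFC 1123 date>". Here is the same construction sketched in Python (make_cookie is a hypothetical helper; .NET's ToString("R") corresponds to format_datetime(..., usegmt=True)):

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def make_cookie(key, value, days=7, now=None):
    # Expire in `days` days, formatted as an RFC 1123 date, matching
    # what DateTime.ToString("R") produces in the C# code above.
    now = now or datetime.now(timezone.utc)
    expires = format_datetime(now + timedelta(days=days), usegmt=True)
    return "%s=%s;expires=%s" % (key, value, expires)

fixed = datetime(2008, 7, 16, tzinfo=timezone.utc)
print(make_cookie("user", "mike", now=fixed))
# user=mike;expires=Wed, 23 Jul 2008 00:00:00 GMT
```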
Now, to get the cookie we split up and iterate through all the cookies returned through the property HtmlPage.Document.Cookies.
private string GetCookie(string key)
{
    string[] cookies = HtmlPage.Document.Cookies.Split(';');
    foreach (string cookie in cookies)
    {
        string[] keyValue = cookie.Split('=');
        if (keyValue.Length == 2)
        {
            if (keyValue[0].ToString() == key)
                return keyValue[1];
        }
    }
    return null;
}
For more details on additional properties you can set when creating cookies please see MSDN.
Thank you,
--Mike Snow
Subscribe in a reader
Mike, firstly I like to thank you for these great Tips you provide. They are very valuable!
Secondly, what would be the advantage of using Cookies v.s. IsolatedStorage, whereas the cookies can be cleared by the user!
Thanks!
..Ben
Wish you'd had this post up a couple of days ago. I just had to use code very much like this to get/set a username cookie for my Tank Wars game. I'm using the AuthenticationServices to handle my Silverlight security, but you can't register through the services. So I had to redirect the users to a new aspx page that uses the registration control, then redirect them back to the Silverlight page. The problem I ran into was that the Silverlight app could see that the user was logged in, but I couldn't find a way to get the username. That's where the username cookie comes in.
BenHayat - Cookies are good for small sets of data since they are more straightforward to use. For larger sets of data I would use IsolatedStorage (IS). I'll do a blog next on the advantages and disadvantages of IS, but one example is that with IS administrators can set quotas, which means there is no guarantee of the size available.
Thanks! But how to set cookies with Unicode character in Silverlight?
So after you use SetProperty to set "cookie", should you see your cookie in HtmlPage.Document.Cookies?
Any idea how you'd be able to see the resulting cookie in a self hosted WCF service?
Itsmallmouse - There is no Unicode support sorry.
Jhoffa - Yes, after you set it it should show up in HtmlPage.Document.Cookies. Make certain you set the expiration date.
From a WCF service, check out the class HttpCookie: msdn.microsoft.com/.../system.web.httpcookie.aspx
Life is better with a little LINQ.
return (from z in cookies
let y = z.Split('=')
where y.Length == 2 && y[0] == key
select y[1]).FirstOrDefault();
Some additional cookie info here as well:
silverlight.net/.../38621.aspx
Is the cookies saved in the browser relating to the current page? for example, what happens if I call HtmlPage.Document.Cookies from a difernet page, will I see the cookies set by the former one?
Thanks,
Tzahi.
The cookie will look like this:
Key1=Value1; Key2=Value2
So you need to call .Trim() on each cookie before you break it into key and value. Or else you'll get a space on the beginning of all keys but your first one, and you won't be able to get those cookies with the key you saved them under.
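In language-neutral terms, the trimmed lookup this commenter describes amounts to the following (a Python sketch of the same split-and-trim logic, not Silverlight code):

```python
def get_cookie(cookies: str, key: str):
    # Browser cookie strings look like "Key1=Value1; Key2=Value2" --
    # note the space after each ';', which is why trimming matters.
    for cookie in cookies.split(";"):
        name, sep, value = cookie.strip().partition("=")
        if sep and name == key:
            return value
    return None

print(get_cookie("Key1=Value1; Key2=Value2", "Key2"))  # prints Value2
```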
http://silverlight.net/blogs/msnow/archive/2008/07/15/tip-of-the-day-18-how-to-set-browser-cookies.aspx
On the server, there are three steps which you can meddle with using OpenSSH:
authentication, the shell session, and the command. The shell is pretty easily
manipulated. For example, if you set the user’s login shell to
/usr/bin/nethack, then nethack will run when they log in. Editing
this is pretty straightforward, just pop open
/etc/passwd as root and set
their shell to your desired binary. If the user SSHes into your server with a
TTY allocated (which is done by default), then you’ll be able to run a curses
application or something interactive.
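Concretely, the edit boils down to changing the seventh colon-separated field of the user's passwd entry. A sketch of what that change looks like (the username and the awk one-liner are illustrative; do the real edit as root, ideally with vipw or chsh):

```shell
# Show what the change to /etc/passwd amounts to for a hypothetical user
line='alice:x:1000:1000:Alice:/home/alice:/bin/bash'
echo "$line" | awk -F: -v OFS=: '$1 == "alice" { $7 = "/usr/bin/nethack" } 1'
# prints alice:x:1000:1000:Alice:/home/alice:/usr/bin/nethack
```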
However, a downside to this is that, if you choose a “shell” which does not
behave like a shell, it will break when the user passes additional command line
arguments, such as
ssh user@host ls -a. To address this, instead of overriding
the shell, we can override the command which is run. The best place to do this
is in the user’s
authorized_keys file. Before each line, you can add options
which apply to users who log in with that key. One of these options is the
“command” option. If you add this to
/home/user/.ssh/authorized_keys instead:
command="/usr/bin/nethack" ssh-rsa ... user
Then it’ll use the user’s shell (which should probably be
/bin/sh) to run
nethack, which will work regardless of the command supplied by the user (which
is stored into
SSH_ORIGINAL_COMMAND in the environment, should you need it).
There are probably some other options you want to set here, as well, for
security reasons:
restrict,pty,command="..." ssh-rsa ... user
The full list of options you can set here is available in the
sshd(8) man
page.
restrict just turns off most stuff by default, and
pty explicitly
re-enables TTY allocation, so that we can do things like curses. This will work
if you want to explicitly authorize specific people, one at a time, in your
authorized_keys file, to use your SSH-driven application. However, there’s
one more place where we can meddle: the
AuthorizedKeysCommand in
/etc/ssh/sshd_config. Instead of having OpenSSH read from the
authorized_keys file in the user’s home directory, it can execute an arbitrary
program and read the
authorized_keys file from its stdout. For example, on
Sourcehut we use something like this:
AuthorizedKeysCommand /usr/bin/gitsrht-dispatch "%u" "%h" "%t" "%k"
AuthorizedKeysUser root
Respectively, these format strings will supply the command with the username
attempting login, the user’s home directory, the type of key in use (e.g.
ssh-rsa), and the base64-encoded key itself. More options are available - see
TOKENS, in the
sshd_config(8) man page. The key supplied here can be used to
identify the user - on Sourcehut we look up their SSH key in the database. Then
you can choose whether or not to admit the user based on any logic of your
choosing, and print an appropriate
authorized_keys to stdout. You can also
take this opportunity to forward this information along to the command that gets
executed, by appending them to the command option or by using the environment
options.
How this works on builds.sr.ht
We use a somewhat complex system for incoming SSH connections, which I won’t go into here - it’s only necessary to support multiple SSH applications on the same server, like git.sr.ht and builds.sr.ht. For builds.sr.ht, we accept all connections and authenticate later on. This means our AuthorizedKeysCommand is quite simple:
#!/usr/bin/env python3
# We just let everyone in at this stage, authentication is done later on.
import sys
key_type = sys.argv[3]
b64key = sys.argv[4]
keys = (f"command=\"buildsrht-shell '{b64key}'\",restrict,pty " +
        f"{key_type} {b64key} somebody\n")
print(keys)
sys.exit(0)
The command,
buildsrht-shell, does some more interesting stuff. First, the
user is told to connect with a command like
ssh builds@buildhost connect <job
ID>, so we use the
SSH_ORIGINAL_COMMAND variable to grab the command line
they included:
cmd = os.environ.get("SSH_ORIGINAL_COMMAND") or ""
cmd = shlex.split(cmd)
if len(cmd) != 2:
    fail("Usage: ssh ... connect <job ID>")
op = cmd[0]
if op not in ["connect", "tail"]:
    fail("Usage: ssh ... connect <job ID>")
job_id = int(cmd[1])
Then we do some authentication, fetching the job info from the local job runner and checking their key against meta.sr.ht (the authentication service).
b64key = sys.argv[1]

def get_info(job_id):
    r = requests.get(f"{job_id}/info")
    if r.status_code != 200:
        return None
    return r.json()

info = get_info(job_id)
if not info:
    fail("No such job found.")

meta_origin = get_origin("meta.sr.ht")
r = requests.get(f"{meta_origin}/api/ssh-key/{b64key}")
if r.status_code == 200:
    username = r.json()["owner"]["name"]
elif r.status_code == 404:
    fail("We don't recognize your SSH key. Make sure you've added it to " +
         f"your account.\n{get_origin('meta.sr.ht', external=True)}/keys")
else:
    fail("Temporary authentication failure. Try again later.")

if username != info["username"]:
    fail("You are not permitted to connect to this job.")
There are two modes from here on out: connecting and tailing. The former logs into the local build VM, and the latter prints the logs to the terminal. Connecting looks like this:
def connect(job_id, info):
    """Opens a shell on the build VM"""
    limit = naturaltime(datetime.utcnow() - deadline)
    print(f"Your VM will be terminated {limit}, or when you log out.")
    print()
    requests.post(f"{job_id}/claim")
    sys.stdout.flush()
    sys.stderr.flush()
    tty = os.open("/dev/tty", os.O_RDWR)
    os.dup2(0, tty)
    subprocess.call([
        "ssh", "-qt",
        "-p", str(info["port"]),
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "StrictHostKeyChecking=no",
        "-o", "LogLevel=quiet",
        "build@localhost", "bash",
    ])
    requests.post(f"{job_id}/terminate")
This is pretty self-explanatory, except perhaps for the dup2 - we just open
/dev/tty and make
stdin a copy of it. Some interactive applications
misbehave if stdin is not a tty, and this mimics the normal behavior of SSH.
Then we log into the build VM over SSH, which with stdin/stdout/stderr rigged up
like so will allow the user to interact with the build VM. After that completes,
we terminate the VM.
This is mostly plumbing work that just serves to get the user from point A to point B. The tail functionality is more application-like:
def tail(job_id, info):
    """Tails the build logs to stdout"""
    logs = os.path.join(cfg("builds.sr.ht::worker", "buildlogs"), str(job_id))
    p = subprocess.Popen(["tail", "-f", os.path.join(logs, "log")])
    tasks = set()
    procs = [p]
    # holy bejeezus this is hacky
    while True:
        for task in manifest.tasks:
            if task.name in tasks:
                continue
            path = os.path.join(logs, task.name, "log")
            if os.path.exists(path):
                procs.append(subprocess.Popen(
                    f"tail -f {shlex.quote(path)} | " +
                    "awk '{ print \"[" + shlex.quote(task.name) + "] \" $0 }'",
                    shell=True))
                tasks.update({ task.name })
        info = get_info(job_id)
        if not info:
            break
        if info["task"] == info["tasks"]:
            for p in procs:
                p.kill()
            break
        time.sleep(3)

if op == "connect":
    if info["task"] != info["tasks"] and info["status"] == "running":
        tail(job_id, info)
    connect(job_id, info)
elif op == "tail":
    tail(job_id, info)
This… I… let’s just pretend you never saw this. And that’s how SSH access to builds.sr.ht works!
https://drewdevault.com/2019/09/02/Interactive-SSH-programs.html
Monitor the NTP daemon and determine its performance
ntpq [-46dinp] [-c command] [host] [...]
Neutrino
The ntpq utility monitors the ntpd daemon's operations and determines its performance. It uses the standard NTP mode 6 control message formats defined in Appendix B of the NTPv3 specification, RFC 1305. The same formats are also used by the NTPv4 specification, which defines more variables; those are discussed here.
You can run this utility either in interactive mode or in command mode. Command mode is controlled using command-line arguments. You can use both raw and pretty-printed options when assembling requests to read or write. You can also obtain and print a list of peers in a common format by sending multiple queries to the server.
When you run the ntpq utility by including one or more requests in the command line, each request is sent to the NTP servers running on each of the hosts. If no request option is given, ntpq attempts to read commands from the standard input and execute them on the NTP server running on the first host, as given on the command line. If no host is mentioned, it always defaults to localhost. The ntpq utility prompts for commands if the standard input is a terminal device.
The ntpq utility uses NTP mode 6 packets to communicate with the NTP server, and hence can be used to query any compatible server on the network that permits it. However, it is somewhat unreliable, especially over long network paths. The ntpq utility makes only one attempt to retransmit requests, and times out if the remote host's response isn't received within a suitable period.
In contexts where a host name is expected, a -4 qualifier preceding the host name forces DNS resolution to the IPv4 namespace, while a -6 qualifier forces DNS resolution to the IPv6 namespace.
Specifying a command line option other than -i or -n causes the specified queries to be sent to the indicated host(s) immediately. Otherwise, ntpq attempts to read interactive format commands from the standard input.
The interactive format commands consist of a keyword followed by zero or more arguments. You can type only enough characters to uniquely identify the command. The output of a command is normally sent to the standard output, but you can send the output to a file by appending a <, followed by a file name, to the command line. A number of interactive format commands are executed entirely within the ntpq utility:
A 16-bit (integer) association identifier is associated with an NTP server. When NTP control messages are sent, this association identifier is always included to identify peers. An association identifier of 0 has special meaning; it indicates that the variables are system variables, whose names are drawn from a separate name space.
Control message commands result in one or more NTP mode 6 messages, which are sent to the server, and data returned is always printed in some format. You will find that most commands send a single message and expect a single response. The current exceptions are the peers command, which sends a preprogrammed series of messages to obtain the required data, and the mreadlist and mreadvar commands, which iterate over a range of associations.
The data returned by the associations command is cached internally in the ntpq utility. The index is useful when you deal with servers whose association identifiers are hard for humans to type. For any subsequent command that requires an association identifier as an argument, the form &index can be used as an alternative.
The character in the left margin of the peers billboard, called the tally code, shows the fate of each association in the clock selection process. Following is a list of these characters, for which the peer is:
The status, leap, stratum, precision, rootdelay, rootdispersion, refid, reftime, poll, offset, and frequency variables are described in RFC 1305 specification. Additional NTPv4 system variables include:
Additional system variables are displayed when the NTPv4 daemon is compiled with the OpenSSL software library.
The status, srcadr, srcport, dstadr, dstport, leap, stratum, precision, rootdelay, rootdispersion, readh, hmode, pmode, hpoll, ppoll, offset, delay, dspersion, and reftime variables are described in the RFC 1305 specification, as are the timestamps org, rec and xmt. Additional NTPv4 peer variables include:
When the NTPv4 daemon is compiled with the OpenSSL software library, additional peer variables are displayed, as follows:
Use the flash code to debug. It is displayed in the peer variables list and shows the results of the original sanity checks defined in the NTP specification RFC 1305 plus additional ones added in NTPv4. There are 12 tests, designated TEST1 through TEST12, which are performed in an order designed to gain maximum diagnostic information while protecting against accidental or malicious errors. The flash variable is initialized to zero as each packet is received. If, after each set of tests, one or more bits are set, the packet is discarded. Use these tests for the following tasks:
The flash bits for each test are defined as follows:
The peers command is nonatomic and may occasionally result in spurious error messages about invalid associations. Also, you may wait a long time for timeouts, because the timeout period is a fixed constant that assumes the worst-case scenario; the program doesn't estimate a suitable timeout as it sends queries to a particular host.
ntpd, ntpdate, ntpdc, ntptrace
https://www.qnx.com/developers/docs/6.4.1/neutrino/utilities/n/ntpq.html
On Wed, Mar 12, 2008 at 07:09:56AM +1100, nscott@xxxxxxxxxx wrote:
> I don't have any immediate plans. I can imagine it could be used to
> stitch parts of the namespace together in a filesystem that supports
> multiple devices (in a chunkfs kinda way) ... or maybe more simply
> just an in-filesystem auto-mounter. *shrug*. But its there, the tools
> support it (once again, I didn't see a userspace patch - hohum), so I
> would vote for leaving it in its current form so some enterprising,
> constructive young coder can try to make something useful from it
> at some point. :)
That kind of automounter really doesn't belong into the low-level
filesystem. If we really wanted it it would go into the VFS, storing
the uuid or other identifier for the mountpoint in an xattr. This is
really just dead junk that should go away.
http://oss.sgi.com/archives/xfs/2008-03/msg00913.html
Archive for July, 2011:
Which led to absolutely nothing happening when run like this!!
Performance.
I can never quite tell which column I need to get so end up doing some exploration with awk like this to find out:
Once we’ve worked out the column then we can add them together like this:
I think that’s much better than trying to determine the total run time in the application and printing it out to the log file.
We can also calculate other stats if we record a log entry for each record:
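The elided awk snippets were presumably along these lines (an illustrative sketch; the actual column and log format in the post are unknown):

```shell
# Sum the last column of some timing log lines (the sample data is made up)
printf 'request one took 120\nrequest two took 80\n' |
  awk '{ total += $NF } END { print total }'
# prints 200
```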
Scala: Prettifying test builders with package object
We have several different test builders in our code base which look roughly like this:
In our tests we originally used them like this:
This works well but we wanted our tests to only contain domain language and no implementation details.
We therefore started pulling out methods like so:
We can then use aFooWith like so:
Scala: Making it easier to abstract code
A couple of months ago I attended Michael Feathers’ ‘Brutal Refactoring’ workshop at XP 2011 where he opined that developers generally do the easiest thing when it comes to code bases.
More often than not this means adding to an existing method or existing class rather than finding the correct place to put the behaviour that they want to add.
Something interesting that I’ve noticed so far on the project I’m working on is that so far we haven’t been seeing the same trend.
Our code at the moment is comprised of lots of little classes with small amounts of logic in them and I’m inclined to believe that Scala as a language has had a reasonable influence on that.
The following quote from ‘Why programming languages?‘ sums it up quite well:
Sometimes the growing complexity of existing programming languages prompts language designers to design new languages that lie within the same programming paradigm, but with the explicit goal of minimising complexity and maximising consistency, regularity and uniformity (in short, conceptual integrity).
It’s incredibly easy to pull out a new class in Scala and the amount of code required to do so is minimal which seems to be contributing to the willingness to do so.
At the moment nearly all the methods in our code base are one line long and the ones which aren’t do actually stand out which I think psychologically makes you want to find a way to keep to the one line method pattern.
Traits
As I’ve mentioned previously we’ve been pulling out a lot of traits as well and the only problem we’ve had there is ensuring that we don’t end up testing their behaviour multiple times in the objects which mix-in the trait.
I tend to pull traits out when it seems like there might be an opportunity to use that bit of code rather than waiting for the need to arise.
That’s generally not a good idea but it seems to be a bit of a trade off between making potentially reusable code discoverable and abstracting out the wrong bit of code because we did it too early.
Companion Objects
The fact that we have companion objects in the language also seems to help us push logic into the right place rather than putting it into an existing class.
We often have companion objects which take in an XML node, extract the appropriate parts of the document and then instantiate a case class object.
In Summary
There’s no reason you couldn’t achieve the same things in C# or Java but I haven’t seen code bases in those languages evolve in the same way.
It will be interesting to see if my observations remain the same as the code base increases in size.
There is also some other logic around how a collection of Foos should be ordered and by using the companion object to parse the XML we can create a test with appropriate bar and baz values to test.
Clojure: Creating XML document with namespaces:
We can make use of lazy-xml/emit to output an XML string from *some sort of input?* by wrapping it inside with-out-str like so:
I was initially confused about how we’d be able to create a map representing name spaced elements to pass to xml-string but it turned out to be reasonably simple.
To create a non namespaced XML string we might pass xml-string the following map:
Which gives us this:
Ideally I wanted to prepend :foo and :bar with ‘:mynamespace” but I thought that wouldn’t work since that type of syntax would be invalid in Ruby and I thought it’d be the same in Clojure.
In fact it isn’t so we can just do this:
As a refactoring step, since I had to append the namespace to a lot of tags, I was able to make use of the keyword function to do so:
Scala: Rolling with implicit:
We didn’t want to have to pass Language to the LanguageAwareString factory method every time we’re going to be calling it in quite a few places.
We therefore create an implicit val at the beginning of our application in the Scalatra entry code:
Maybe we can phase that out as people get used to implicit or maybe we’ll just get rid of implicit and decide it’s not worth the hassle!
http://www.markhneedham.com/blog/2011/07/
IFMEDIA(4) BSD Programmer's Manual IFMEDIA(4)
ifmedia - network interface media settings
#include <sys/socket.h> #include <net/if.h> #include <net/if_media.h>
The ifmedia interface provides a consistent method for querying and setting network interface media and media options. The media is typically set using the ifconfig(8) command. There are currently four link types supported by ifmedia:

IFM_ETHER Ethernet
IFM_TOKEN Token Ring
IFM_FDDI FDDI
IFM_IEEE80211 IEEE802.11 Wireless LAN

The following sections describe the possible media settings for each link type. Not all of these are supported by every device; refer to your device's manual page for more information. The lists below provide the possible names of each media type or option. The first name in the list is the canonical name of the media type or option. Additional names are acceptable aliases for the media type or option.
The following media types are shared by all link types:

IFM_AUTO Autoselect the best media. [autoselect, auto]
IFM_MANUAL Jumper or switch on device selects media. [manual]
IFM_NONE Deselect all media. [none]

The following media options are shared by all link types:

IFM_FDX Place the device into full-duplex mode. This option only has meaning if the device is normally not full-duplex.

The following media types are defined for Ethernet:

IFM_10_T 10BASE-T, 10Mb/s over unshielded twisted pair, RJ45 connector. [10baseT, UTP, 10UTP]
IFM_10_2 10BASE2, 10Mb/s over coaxial cable, BNC connector, also called Thinnet. [10base2, BNC, 10BNC]
IFM_10_5 10BASE5, 10Mb/s over 15-wire cables, DB15 connector, also called AUI. [10base5, AUI, 10AUI]
IFM_10_STP 10BASE-STP, 10Mb/s over shielded twisted pair, DB9 connector. [10baseSTP, STP, 10STP]
IFM_10_FL 10BASE-FL, 10Mb/s over fiber optic cables. [10baseFL, FL, 10FL]
IFM_100_TX 100BASE-TX, 100Mb/s over unshielded twisted pair, RJ45 connector. [100baseTX, 100TX]
IFM_100_FX 100BASE-FX, 100Mb/s over fiber optic cables. [100baseFX, 100FX]
IFM_100_T4 100BASE-T4, 100Mb/s over 4-wire (category 3) unshielded twisted pair, RJ45 connector. [100baseT4, 100T4]
IFM_100_T2 100BASE-T2. [100baseT2, 100T2]
IFM_100_VG 100VG-AnyLAN. [100baseVG, 100VG]
IFM_1000_SX 1000BASE-SX, 1Gb/s over multi-mode fiber optic cables.
The following media types are defined for Token Ring:

IFM_TOK_STP4 4Mb/s, shielded twisted pair, DB9 connector. [DB9/4Mbit, 4STP]
IFM_TOK_STP16 16Mb/s, shielded twisted pair, DB9 connector. [DB9/16Mbit, 16STP]
IFM_TOK_UTP4 4Mb/s, unshielded twisted pair, RJ45 connector. [UTP/4Mbit, 4UTP]
IFM_TOK_UTP16 16Mb/s, unshielded twisted pair, RJ45 connector. [UTP/16Mbit, 16UTP]

The following media options are defined for Token Ring:

IFM_TOK_ETR Early token release. [EarlyTokenRelease, ETR]
IFM_TOK_SRCRT Enable source routing features. [SourceRouting, SRCRT]
IFM_TOK_ALLR All routes vs. single route broadcast. [AllRoutes, ALLR]
The following media types are defined for FDDI:

IFM_FDDI_SMF Single-mode fiber. [Single-mode, SMF]
IFM_FDDI_MMF Multi-mode fiber. [Multi-mode, MMF]
IFM_FDDI_UTP Unshielded twisted pair, RJ45 connector. [UTP, CDDI]

The following media options are defined for FDDI:

IFM_FDDI_DA Dual-attached station vs. Single-attached station. [dual-attach, das]

MEDIA TYPES AND OPTIONS FOR IEEE802.11 WIRELESS LAN

The following media types are defined for IEEE802.11 Wireless LAN:

IFM_IEEE80211_FH1 Frequency Hopping 1Mbps. [FH1]
IFM_IEEE80211_FH2 Frequency Hopping 2Mbps. [FH2]
IFM_IEEE80211_DS1 Direct Sequence 1Mbps. [DS1]
IFM_IEEE80211_DS2 Direct Sequence 2Mbps. [DS2]
IFM_IEEE80211_DS5 Direct Sequence 5

The implementation that appeared in NetBSD 1.3 was written by Jonathan Stone and Jason R. Thorpe to be compatible with the BSDI API. It has since gone through several revisions which have extended the API while maintaining backwards compatibility with the original API. Support for the IEEE802.11 Wireless LAN link type was added in NetBSD 1.5. Host AP mode was added in OpenBSD 3.1.

MirOS BSD #10-current July 19, 2000
https://www.mirbsd.org/htman/sparc/man4/ifmedia.htm
Paperback, First printing, 107 pages
By Robert Eckstein
Published by O'Reilly & Associates, Inc.
ISBN: 1-56592-709-5
Review written: October 18, 2000
By Donald W. Larson
XML is becoming the lingua franca for exchanging information between computer systems. Many Java technologies use XML as a way to establish properties, and XML is a way to disseminate records from databases to XML-aware applications at large. I found the book most helpful; it sits beside me as I work on my computer.
The book provides practical examples and then fully explains those examples line by line in most cases. Overviews give the reader a well-rounded understanding as they proceed. The book's index is extensive and most helpful.
Topics include a complete description of DTDs, elements, entities, and attributes. It cleared up some confusion I had about default namespaces and should make them clear to anyone else too.
http://www.oreillynet.com/cs/catalog/view/cs_msg/35919
13 - Networking and Communications
This week's assignment is very important for my final project, so I decided to try to create the system that the final project will use. We need to create two boards and make them communicate. After doing some research and some tests, I decided to design a little board that will be used in each fret, with four buttons and four LEDs, one for each string. When acting as the student ukulele, the board will show the position of the teacher's fingers with the LEDs, and the teacher's board will detect finger pressure with the buttons.
How will all the boards of the ukulele communicate?
As they are all in the same device, the best way to connect them is a physical link using wires, so I decided to use I2C communication. The idea is for these boards either to send the state of the buttons as inputs or to receive data and show it on the LEDs as outputs.
I2C communication
I2C is a communication bus created by Philips back in the early 80's. It is a synchronous serial bus able to connect a large number of devices as master and slaves, up to roughly 120 devices. It is a physical connection that uses four wires: VCC and GND to power all the devices, plus two signal lines, SDA and SCL. SDA carries the data, while SCL carries the clock signal.
The idea is simple: the master device sends data to and receives data from the slaves, and each slave has a specific address used by the master to establish communication with it. When the master wants to communicate with one slave, it sends the address byte to let the slaves know which one it is talking to, then sends the message, followed by a stop condition to finish the communication.
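The address-then-data framing described above can be sketched abstractly (plain Python just to illustrate the byte sequence on the bus; this is a conceptual model, not device code):

```python
def i2c_write_frame(address: int, data: bytes) -> list:
    """Model the bytes a master puts on the bus for a write transaction."""
    assert 0 <= address < 128  # I2C addresses are 7 bits
    address_byte = (address << 1) | 0  # low bit 0 = write, 1 = read
    # START condition, address byte with R/W bit, payload, then STOP condition
    return ["START", address_byte] + list(data) + ["STOP"]

print(i2c_write_frame(0x01, b"\x0f"))  # prints ['START', 2, 15, 'STOP']
```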
Parts
This assignment is about testing the connection and making two boards communicate. My final project will have a lot of boards communicating with each other, but I will start by milling three boards, one master and two slaves, to test the communication and validate my idea of using I2C. So this assignment has two parts: a master board and a slave board.
- Master Board: This board will have the ISP port to program the board and to run the I2C communication, with the SCK pin as SCL and the MOSI pin as SDA. It will also have an FTDI connection to communicate with the computer over the serial port.
- Slave Board: There will be more than one slave board. They have the ISP connection to program them and to communicate with the master. Each one uses four buttons and four LEDs connected to four pins (one button and one LED per pin). The idea is to use the LED as a pull-down resistor for the button, so we can use the pins as outputs to light the LEDs and as inputs when the buttons are pressed.
Master board
It doesn't have any visual impact on the device, but it is really important. Its function is to communicate with and organize the rest of the boards. Here is the list of components and the drawings that I designed using KiCad, following the explanation I did in Week 06.
- Attiny 84
- 10kΩ as pull-up resistor
- 1uF Capacitor
- FTDI 6 pin header
- 2x3 pin header for the ISP
- Crystal 20MHz
And this is the final board with all the components soldered on it.
Please note that there are a couple of errors in the board. I used a 20MHz crystal, but I ended up working with the 8MHz internal clock of the ATtiny, so it is not needed on the board. Also, I didn't know that I would have to use pull-up resistors for the I2C bus, so they should be added. To test everything before milling a new board, I soldered two 10kΩ resistors onto the ribbon wire directly. These resistors could be added either to the master board or to one slave board.
Slave Board
This board will be replicated multiple times, to be used in each fret of the ukulele. Its function is to sense where someone is playing or to show where to play. This board therefore needs an input and an output, so I decided to use a button as the input for the teacher and an LED as the output. These are the components used:
- Attiny 44
- 4x Buttons
- 4x LED
- 4x 1KΩ for the LED
- 2x3 pin header for the ISP
Crystal 20MHz
For this board, as I need it to be small to fit in the fret spaces, I decided to mill a 2 side board. It was the first time I did it, so I decided to document it in the Week 06.
Like the master board, it has space for an external clock, but in the end I used the internal 8MHz clock of the ATtiny, so it is not necessary to solder it. I also made a mistake and didn't connect a pull-up resistor to the Reset pin, but I didn't have any problem with it; it is only needed for programming, and I was able to program the board correctly.
And here is the result from both sides:
Coding
This was the most difficult part of the assignment. Coding an I2C communication with an Arduino is really easy using the Wire.h library, but with the ATtinys it is not that simple. I did some research before trying to program my boards, and I found some libraries created specifically for the ATtiny: TinyWireM for the master board and TinyWireS for the slave. Everything seemed to be easy, but after a lot of errors and tests I saw that TinyWireM was not working well, at least with the ATtiny 44.
After giving up on the TinyWire libraries, I kept looking for a solution, and after some hours of research and cold sweat, I found a way to make it work! I found another Boards Manager package for the ATtiny that allows you to use the Wire.h library. You only need to add this link: to the Additional Boards Manager URLs in the Arduino IDE Preferences. Then install ATTinyCore by Spence Konde from the Boards Manager, and you will see that you have more options when selecting your board. Using these profiles you will be able to use the Wire library.
Slave Code
With all the setup done, I created a little test script to see if it works. I started with the Slave board; the code is really simple:
#include <Wire.h>

byte pins = 0b00000000;

void setup() {
  Wire.begin(1);                // join the I2C bus with address #1
  Wire.onReceive(receiveEvent);
  Wire.onRequest(requestEvent); // register the events
}

void loop() {
}

void receiveEvent(int howMany) {
  DDRA = DDRA | B00001111;            // set the 4 pins as outputs
  pins = Wire.read();
  PORTA = ((PORTA >> 4) << 4) | pins; // replace only the low 4 bits
}

void requestEvent() {
  PORTA = ((PORTA >> 4) << 4);
  DDRA &= ~B00001111;                 // set the 4 pins as inputs
  Wire.write(PINA & (B00001111));     // send only the 4 button bits
}
What it does is simple: it joins the bus with address 1 by calling Wire.begin(1). Then I created 2 handler functions, one for sending the actual state (requestEvent) and one for reading the new state (receiveEvent).
- receiveEvent(): First, I set the 4 pins as outputs, then I read the value they have to take. Knowing that this value will be a number between 0 and 15, reading it as a byte gives the state of the first 4 bits. Finally, I change only the last 4 bits of the current PORTA state to this reading, because I don't want to change the state of the other 4 bits.
- requestEvent(): This is the inverse method: I set the pins as inputs and send the value of these pins, setting the other four bits to 0 to avoid errors.
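The bit masking in these two handlers is easier to see outside the microcontroller. Here is a quick sketch of the same operations in Python (PORTA and PINA are just plain 8-bit integers here, and the values are made-up examples, not readings from the real board):

```python
# Simulated slave registers: plain 8-bit integers.
porta = 0b10100000          # upper 4 bits belong to other pins
received = 0b0101           # value read from the master (0-15)

# receiveEvent: overwrite only the low 4 bits, keep the high 4 intact.
porta = ((porta >> 4) << 4) | received
assert porta == 0b10100101

# requestEvent: clear the low 4 bits, then report only the 4 button bits.
porta = (porta >> 4) << 4
pina = 0b11111001           # raw pin readings; upper bits are other pins
reply = pina & 0b00001111
assert reply == 0b1001
```

The `(x >> 4) << 4` trick simply zeroes the low nibble, which is why the other four pins are never disturbed.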
Master Code
To test that it was working fine, I created a simple script that reads the state from device number 1 and sends it to device number 2. This is the script:
#include <Wire.h>

byte state;

void setup() {
  Wire.begin(); // join the I2C bus (address optional for master)
}

void loop() {
  Wire.requestFrom(1, 1);    // request 1 byte from device #1
  while (Wire.available()) {
    state = Wire.read();
  }
  Wire.beginTransmission(2); // transmit to device #2
  Wire.write(state);         // send one byte
  Wire.endTransmission();
}
As you can see it is really easy. The only thing I would like to point out is that it is really important to end the transmission between boards to avoid future problems.
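The loop's relay logic (request one byte from device 1, transmit it to device 2) can be sketched without hardware. The FakeSlave class below is only a stand-in for the Wire calls, not part of the real firmware:

```python
class FakeSlave:
    """Stand-in for an I2C slave holding a 4-bit pin state."""
    def __init__(self, state=0):
        self.state = state & 0b1111

    def on_request(self):        # what requestEvent would answer
        return self.state

    def on_receive(self, value): # what receiveEvent would store
        self.state = value & 0b1111

slave1 = FakeSlave(0b0110)  # two buttons pressed on device 1
slave2 = FakeSlave()

# One pass of the master loop: read device 1, forward to device 2.
state = slave1.on_request()
slave2.on_receive(state)
assert slave2.state == 0b0110
```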
Here is a video showing how it worked:
Master Code for final project
During Week 15 I created an interface to communicate with the ukulele. Everything is explained in that week's documentation, but here I will post the master code I used for the boards; the slaves use the same code I just explained. In this code I was using both the SoftwareSerial library and Wire, which was a problem because it was really heavy; that's why I had to change the ATtiny 44 for an 84 with more memory. This was the only problem I had.
#include <SoftwareSerial.h>
#include <Wire.h>

SoftwareSerial mySerial(8, 7);

String mode = "";
int Dev[] = {1, 2, 3, 4, 5};
int nDev = 5; // change this number according to the number of devices
int State[] = {0, 0, 0, 0, 0};
int PrevState[] = {0, 0, 0, 0, 0};

void setup() {
  Wire.begin();         // join the I2C bus (address optional for master)
  mySerial.begin(9600); // start serial for output
  delay(5000);
}

void loop() {
  //mySerial.println("waiting");
  while (mySerial.available() == 0) {}
  if (mySerial.available() > 0) {
    mode = mySerial.readStringUntil('\n');
  }
  if (mode == "r") {
    readU();
  }
  if (mode == "w") {
    writeU();
  }
}

void readU() {
  for (byte i = 0; i < nDev; i++) {
    Wire.requestFrom(Dev[i], 1);
    while (Wire.available()) {
      byte state = Wire.read();
      byte message = Dev[i] << 4; // high nibble: device address
      message = message | state;  // low nibble: pin state
      mySerial.write(message);
    }
  }
}

void writeU() {
  while (mySerial.available() == 0) {}
  for (byte i = 0; i < nDev; i++) {
    byte m = mySerial.parseInt();
    Wire.beginTransmission(Dev[i]);
    Wire.write(m); // send one byte
    Wire.endTransmission();
  }
}
What this code does is:
- Wait until it receives an 'r' for reading or a 'w' for writing.
- If an 'r' is received, it tries to connect with each slave board and asks for its state, storing it in the States array.
- If a 'w' is received, it waits to receive the desired state and sends it to each fret with the function Wire.write().
For this code, I made a special encoding to optimize the way it worked. The main goal was to send the least amount of data possible. Taking into account that I need to send 4 bits per fret (the state of the 4 LEDs), I decided to send one byte per fret, where the first 4 bits are the device address and the last 4 are the state. For example, if device number 1 is fully lit up, the message will be 0001 1111.
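That packing scheme is easy to express and check. A short Python sketch (the device numbers and states below are just examples):

```python
def pack(device, state):
    """High nibble = device address (1-5), low nibble = 4 LED bits."""
    return ((device & 0b1111) << 4) | (state & 0b1111)

def unpack(message):
    """Return (device, state) from one packed byte."""
    return message >> 4, message & 0b1111

msg = pack(1, 0b1111)        # device 1, all four LEDs lit
assert msg == 0b00011111     # the example from the text: 0001 1111
assert unpack(msg) == (1, 0b1111)
```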
Bluetooth
For the final project I also need to connect 2 ukuleles, so I decided to create a Bluetooth serial communication between them. I used the Bluetooth module HC-05 because I had some experience with it and a spare module to use. The communication was easy: I just needed to connect it to the serial RX and TX ports to send the data, but first I had to configure the modules. To do it, I followed this tutorial. The main AT commands are:
- AT+RMAAD: Clear any paired devices
- AT+ROLE=0: To set it as slave
- AT+ROLE=1: To set it as master
- AT+ADDR: To know the Address of the slave as it will be used by the Master
- AT+BIND=xxxx,xx,xxxxxx: Where xxxx,xx,xxxxxx is the address of the slave device to pair with.
- AT+UART=9600,0,0: To set the Baud Rate to 9600
To enter AT mode you have to press the button (if there is one) while you plug in your module, or set the Key pin high. When the LEDs start blinking twice every 2 seconds, they are connected. See the video showing how it worked:
Group Assignment
For this week's group assignment we decided to connect our HelloBoards with an I2C bus, adapting my scripts. This was an easy and simple way to communicate between our boards. We first wanted to use Bluetooth, but we didn't have enough modules, or only BLE modules, and we had some trouble with them.
The main objective was to use two Hello Boards to change the color of the RGB LED on my Hello Board. To establish the I2C communication we followed what I just explained above, installing ATTinyCore by Spence Konde and using the Wire.h library. These are the codes that we used; first the master:
#include <Wire.h>

int State[] = {0, 0, 0};
int but = 8;
int R = 7;
int G = 2;
int B = 3;
int Gustavo = 1;
int Diar = 2;

void setup() {
  Wire.begin();
  delay(5000);
}

void loop() {
  if (digitalRead(but) == HIGH) {
    analogWrite(R, 255);
    analogWrite(G, 0);
    analogWrite(B, 255);
    State[0] = 1;
  } else {
    analogWrite(R, 255);
    analogWrite(G, 255);
    analogWrite(B, 0);
    State[0] = 0;
  }
  readU();
  writeU();
  delay(10);
}

void readU() {
  Wire.requestFrom(Gustavo, 1);
  State[1] = Wire.read();
  Wire.requestFrom(Diar, 1);
  State[2] = Wire.read();
}

void writeU() {
  Wire.beginTransmission(Gustavo);
  Wire.write(State[2]);
  Wire.endTransmission();
  Wire.beginTransmission(Diar);
  Wire.write(State[0]);
  Wire.endTransmission();
}
And here is the slave's code:
#include <Wire.h>

int LED = 1; // change it for your LED pin
int But = 2; // change it for your button pin
int Gustavo = 1;
int Diar = 2;
int LEDState = 0;
int ButState = 0;

void setup() {
  Wire.begin(Diar); // change the address for your board
  Wire.onReceive(receiveEvent);
  Wire.onRequest(requestEvent);
}

void loop() {
  if (digitalRead(But) == HIGH) {
    ButState = 1;
  } else {
    ButState = 0;
  }
  if (LEDState == 1) {
    digitalWrite(LED, HIGH);
  } else {
    digitalWrite(LED, LOW);
  }
}

void receiveEvent() {
  LEDState = Wire.read();
}

void requestEvent() {
  Wire.write(ButState);
}
This is a video showing how it worked:
http://fab.academany.org/2019/labs/barcelona/students/josep-marti/Week13.html
Steven Lehar
2010-07-19
Hi,
I'm trying to add a wx.DirPickerControl from the Buttons controls, and as soon as I click the location on my panel, I get the error:
Error on Accessing Getter for ToolTipString: 'NoneType' object has no attribute 'Get Tip'
I'm using Python 2.5, wxPython 2.8.7.1, Boa Constructor 0.6.1
Werner F. Bruhin
2010-07-19
Hi,
I see the same with wxPython 2.8.10.1, for some reason Boa doesn't set a default value for ToolTipString.
If I read the code correctly this should be done in BaseCompanions.ControlDTC but it doesn't happen for DirPickerCtrl nor FilePickerCtrl.
Hopefully Riaan or someone else can shed some light on why this is happening.
If I ignore the error one can still resize the control.
Werner
Vinicius Carvalho
2010-07-22
Hi.
I have the same problem.
Steven Lehar
2010-07-22
My error is more serious. If I ignore the warning message, the wx.DirPickerControl appears where it should, but it's all screwed up: the text field is too small and off to the left, with a big gap between it and the button, and it doesn't work! Nor does it respond correctly to changes in its Properties.
Worse still, if I cut and paste the Boa-generated code into a simple Python program and import wxPython, it STILL doesn't work!
I'm afraid that's too much for me, I need results right now! I'm turning back to Tkinter, at least I can get that running!
Werner F. Bruhin
2010-07-22
Obviously something wrong with wx.DirPickerControl, you might want to report it on the wxPython list.
Anyhow there is an alternative, see wx.lib.filebrowsebutton.DirBrowseButton, on Boa it is under the Library tab.
Werner
Werner F. Bruhin
2010-07-22
Steven,
Not sure what you did but I can get wx.DirPickerCtrl to work outside the Boa designer, e.g. the following code (note use of sizer) works for me:
#Boa:Dialog:Dialog1
import wx
def create(parent):
return Dialog1(parent)
=
class Dialog1(wx.Dialog):
def _init_sizers(self):
# generated method, don't edit
self.boxSizer1 = wx.BoxSizer(orient=wx.VERTICAL)
self.SetSizer(self.boxSizer1)
def _init_ctrls(self, prnt):
# generated method, don't edit
wx.Dialog.__init__(self, id=wxID_DIALOG1, name='', parent=prnt,
pos=wx.Point(640, 340), size=wx.Size(400, 250),
style=wx.DEFAULT_DIALOG_STYLE, title='Dialog1')
self.SetClientSize(wx.Size(384, 212))
self._init_sizers()
def __init__(self, parent):
self._init_ctrls(parent)
self.dirPickerCtrl1 = wx.DirPickerCtrl(self, -1, 'c:', 'Select dir')
self.boxSizer1.Add(self.dirPickerCtrl1, 0, wx.EXPAND)
self.boxSizer1.Layout()
if __name__ == '__main__':
app = wx.PySimpleApp()
dlg = create(None)
try:
dlg.ShowModal()
finally:
dlg.Destroy()
app.MainLoop()
Steven Lehar
2010-07-22
Ok, I've reverted back to Boa Constructor, thanks guys for helping me out, I've got my UI pretty much done!
But now I'm having trouble with the events. Which event triggers when I change the text in DirBrowseButton control?
Or even my wx.TextCtrl !
I've been putting debug print statements in various event callback functions…
def OnTextCtrl1TextEnter(self, event):
print "Event: OnTextCtro1TextEnter"
and I've found events that trigger for each individual character, but isn't there one that triggers when you finally hit Enter or click outside the text box? Damned if I could find it!
And similarly for my DirBrowseButton.
Werner F. Bruhin
2010-07-23
Steven,
To find what events fire and to work out issues with sizers and ….. there is a really neat tool in wxPython.
Werner
Werner > there's a really neat tool in wxPython
WIDGET INSPECTION TOOL
Wow! That really IS neat!!! Thanks Werner!
Now I know that the event I am looking for is not a DirBrowseButton event, but an event of its text box (Frame1.panel1.dirBrowseButton.text) called EVT_KILL_FOCUS (or an event of the browse button called EVT_SET_FOCUS) to catch the final event when a valid folder has been selected.
But although I can click on the DirBrowseButton in the Frame Designer to access it properties, I cannot click on its component text box or browser button in the Frame Designer, or view their properties, and thus I cannot get Boa to auto-generate a callback function for those components.
How do I generate the callback handler for the text box event?
Oh, NEVER MIND! There it is!
self.dirBrowseButton1.bind(wx.EVT_KILL_FOCUS, self.OnDirBrowseButton1SetFocus)
I'll just write a handler called OnDirBrowseButton1SetFocus( ).
Thanks!
Wrong again - excuse the flip-flopping!
The functions OnDirBrowseButton1SetFocus(self, event) and OnDirBrowseButton1KillFocus(self, event) which would *seem* to be triggered by a change in focus of my DirBrowseButton widget, do NOT in fact do so, so I still can't find a way to generate the callback function for the EVT_KILL_FOCUS event of the text box (or the button) components of my DirBrowseButton.
By rights, there *should* be a way to click on those components and thus explore their properties and set up their events, but when I click on the text box, all I get is the properties of the whole compound DirBrowsebutton widget.
Riaan Booysen
2010-07-23
Hi,
Firstly, I must mention: if you ever have trouble with a control not behaving in the designer, just place a container in its place and create the control yourself after the Boa creation code completes. There is really no reason to give up because of designer issues.
Secondly, I have none of the problems so far mentioned in this thread, so I guess I fixed the tooltip issue some time ago; it will be in the next release.
Third,.
Hope this helps,
Riaan.
Thanks Riaan, thats exactly the kind of advice I need. I'm a rank beginner with Boa.
Another question at that level: When I was experimenting with various events, I auto-generated a bunch of event handlers, most of which are totally irrelevant. How do I remove them again? I'm guessing that I can't just delete the source code, because Boa must have hidden internal copies of them and would get confused. But if I delete an object in the Frame Designer, its source code remains! How do people deal with this in the trial-and-error debugging process?
Also, Riaan, if I start messing around with my code outside of Boa, I can never get it back INTO Boa again, can I?
So do you make the rough interface first with Boa, then you leave Boa never to return, and start tweaking the code outside of Boa?
Werner F. Bruhin
2010-07-26
Steven,
1. Deleting an event handler is simple: click on it in the Inspector/Evts tab; there are options to delete, rename, show all.
2. Boa has no hidden code for the module you generated with Boa's designer, i.e. all code is in the source you can see and edit.
3. You can edit Boa code outside Boa and get it to Boa without any problems, AS LONG as you respect the code convention Boa uses. Any method marked with "# generated method, don't edit" has to respect Boa's convention, e.g.:
def _init_coll_boxSizer1_Items(self, parent):
# generated method, don't edit
parent.AddWindow(self.button1, 0, border=0, flag=0)
Outside Boa the above line could also be:
parent.Add(self.button1, 0, border=0, flag=0)
you could even "run" it from within Boa, but when you try to use the designer it will hiccup, as it expects AddWindow and not Add.
Hope this helps
Werner
Thank you Werner, yes, that is very helpful!
Steve
So for example I have a wx.TextCtrl, and I selected it in the Frame Designer and checked its Evts properties, and picked wx.EVT_TEXT, which relates to the callback function OnTextCtrl1Text(), which was autogenerated, and I added…
def OnTextCtrl1Text(self, event):
print "OnTextCtrl1 Event: " + event.GetString()
event.Skip()
What does the event.Skip() do? Nothing changes when I #comment it out.
When I run that, and highlight the text in textCtrl1 and replace it with "text", I get in the output field:
OnTextCtrl1 Event: t
OnTextCtrl1 Event: te
OnTextCtrl1 Event: tex
OnTextCtrl1 Event: text
Four separate events for each character of "text", but no particular event when I hit "Enter".
Isn't there an event I can search for when the text entry is complete?
Steve
Never mind! I've got it!
def OnTextCtrl1KillFocus(self, event):
print "OnTextCtrl1KillFocus Event: text = " + self.textCtrl1.GetValue()
event.Skip()
Werner >>
To find what events fire and to work out issues with sizers and ….. there is a really neat tool in wxPython.
<< Werner
That looks very useful Werner, but it doesn't seem that it would have helped me.
Now I discover that to catch the event when my textCtrl box's text has just been changed (Enter was hit) I need to catch the wx.EVT_KILL_FOCUS event (with callback function OnTextCtrl1KillFocus(self, event)).
But when I view the Widget Inspection Tool / Events, and type text into my textCtrl and hit Enter, the only events I get are:
EVT_CHAR_HOOK None
EVT_CHILD_FOCUS Panel "panel1" (114)
EVT_CHILD_FOCUS CheckBox "checkBox2" (104)
Why do I not see EVT_KILL_FOCUS ? How would the Widget Inspection Tool / EventWatcher have told me the right event to watch for? Does it only report events for Frame1, and not its children? (checkBox2 is the next widget in sequence after checkBox1, so that's the event of the focus passing from my textCtrl to the checkBox.)
Steve
NEVER MIND! I see it now!
I have to browse the Widget Tree in the Widget Inspection Tool, and select the TextCtrl("textCtrl1") whose events I want to watch.
I'm getting it slowly. Thanks for your patience.
Steve
Riaan >>.
<< Riaan
So after creating my compound control dirBrowseButton I find this auto-generated code:
self.dirBrowseButton1 = wx.lib.filebrowsebutton.DirBrowseButton(…
…
…
…
…)
after which I type in something like this?
self.dirBrowseButton1.text.Bind(wx.EVT_KILL_FOCUS, # bind this event of the dirBrowseButton's text control \
self.OnTextCtrlKillFocus) # to this callback function. And then, same indent level I add…
def OnTextCtrlKillFocus( self.dirBrowseButton1.text, event): # Callback function defined within compound control
<event handling code here>
event.Skip()
Syntax error: invalid syntax
Riaan Booysen
2010-07-26
Steven,
Look closely at all Werner told.
e.g print out the first few controls in the dir control:
print self.dirBrowseButton1.GetChildren()
print self.dirBrowseButton1.GetChildren()
print self.dirBrowseButton1.GetChildren()
Does that give you some hints which controls to access and bind events too?
I haven't tried this, I'm just guessing for you.
Hope it helps,
Riaan.
Hi Stephen,
Couple of very general points. One is, it is not good form to ask many different questions using the same thread. Your thread started out about "DirPickerControl problem" and has wandered all over in a general survey of Boa Constructor and wxPython. Instead, it is better to post a new thread for each new idea. This way when others are trying to learn, they can know how to find the information more easily. It also keeps it straighter for you and for those answering you.
Second, you can learn much by researching how Boa works on your own. There are some good resources to help. One is the Boa tutorial that is included: hit F1 and then read (or rather work through) the Boa Constructor Getting Started Guide and/or the help. There are also some videos I did on Intro to Boa on ShowMeDo here:
You can also search this forum or just Google for the answer. Also, some of your questions are not really Boa questions but wxPython questions, and there's an excellent wxPython forum with good folks who can help with that, as well as lots of wxPython tutorials out there as well. The mailing list is here:
Good luck,
Che
Steven Lehar
2010-07-27
Ria.
<< Riaan
Believe it or not, I HAVE done hours and hours of research on the web. I *never* post a question without first trying hard to find the answer myself. I have gone through the you-do-it tutorials, and many of the help pages, but none of them happen to use or document this compound control and how to catch its events. Yes that code was just made up by guessing - I have no idea what it should be. Maybe I should give up on using that compound dirBrowseButton and just use a button and a text box, which I *do* know how to use. You have to have a working example to "close the loop" before you can do trial-and-error investigation. If you have something that *doesn't* work, there's no way to figure out whats wrong. I know this stuff is all crystal clear to yous guys, but it ain't so clear to me! Not until I see a working example that I can tweak and modify to suit my needs.
Steven Lehar
2010-07-28
I don't know why I am still working on this ridiculous thing!
I got this part to work…
self.dirBrowseButton1 = ….
self.dirBrowseButton1.GetChildren().Bind(wx.EVT_KILL_FOCUS, self.OnDirBrowseButtonText)
And I even get a print working in the event handler!
def OnDirBrowseButtonText(self, event):
print "OnDirBrowseButtonText Event: " # <= this WORKS!
But now how do I retrieve the Value of that text field?
print event.GetValue() # has no attrib. GetValue
print self.GetChildren().GetValue() # has no attrib. GetValue
print self.GetChildren().GetValue() # has no attrib. GetValue
print self.GetChildren.GetValue() # has no attrib. GetValue
print self.text.GetValue() # Frame1 no attrib. 'text'
print self.TextCtrl.GetValue() # Frame1 no attrib. 'TextCtrl'
print self.dirBrowseButton1.text.GetValue() # no attrib. 'text'
print self.dirBrowseButton1.TextCtrl.GetValue()# no attrib. 'TextCtrl'
event.skip() # FocusEvent no attrib 'skip'
If you don't have an example to follow, its just a matter of random guessing!
This really sucks!
http://sourceforge.net/p/boa-constructor/discussion/5483/thread/f17272dd
I need to “publish” pages to static files (in order to transfer to a
server where Rails isn’t available, this is not for caching).
When I do:
require "open-uri"

def publish
  File.open(publish_path, "w") do |file|
    open(base_view_path) { |res| file.write(res.read) }
  end
end
where publish() is a method on a model running in Rails (base_view_path is a method returning a URL to the page to publish, but in "view mode").
The problem here seems to be that WEBrick hangs… (I guess WEBrick can only handle one request at a time or something and it deadlocks.)
Is there another solution to this problem?
Regards,
/Marcus
https://www.ruby-forum.com/t/publish-to-static-pages/102312
heterocephalus
A type-safe template engine for working with popular front end development tools
Heterocephalus template engine
Any PRs are welcome, even for documentation fixes. (The main author of this library is not an English native.)
Who should use this?
If you are planning to use Haskell with recent web front-end tools like gulp, webpack, npm, etc, then this library can help you!
There are many Haskell template engines today. Shakespeare is great because it checks template variables at compile time. Using Shakespeare, it's not possible for the template file to cause a runtime error.
Shakespeare provides its own original ways of writing HTML
(Hamlet),
CSS
(Cassius
/
Lucius),
and JavaScript
(Julius).
If you use these original markup languages, it is possible to use control
statements like
forall (for looping),
if (for conditionals), and
case
(for case-splitting).
However, if you're using any other markup language (like
pug, slim,
haml, normal HTML, normal CSS, etc), Shakespeare only
provides you with the
Text.Shakespeare.Text
module. This gives you variable interpolation, but no control statements like
forall,
if, or
case.
Haiji is another interesting
library. It has all the features we require, but its templates take a very
long time to compile with
GHC >= 7.10.
Heterocephalus fills this missing niche. It gives you variable interpolation along with control statements that can be used with any markup language. Its compile times are reasonable.
Features
Here are the main features of this module.
DO ensure that all interpolated variables are in scope
DO ensure that all interpolated variables have proper types for the template
DO expand the template literal on compile time
DO provide a way to use forall, if, and case statements in the template
Text.Shakespeare.Text.text has a way to do variable interpolation, but no way to use these types of control statements.
DO NOT enforce that templates obey a peculiar syntax
Shakespeare templates make you use their original style (Hamlet, Cassius, Lucius, Julius, etc). The Text.Shakespeare.Text.text function does not require you to use any particular style, but it does not have control statements like forall, if, and case. This makes it impossible to combine Shakespeare with another template engine such as pug on the front end side, so it is not suitable for recent rich front end tools.
DO NOT have a long compile time
haiji is another awesome template library. It has many of our required features, but it takes too long to compile when used with GHC >= 7.10.
DO NOT provide unneeded control statements
Other template engines like EDE provide rich control statements like importing external files. Heterocephalus does not provide control statements like this because it is meant to be used with a rich front-end template engine (like pug, slim, etc).
Usage
You can compile external template files with the following four functions:
compileTextFile: A basic function that embeds variables without escaping and without default values.
compileTextFileWithDefault: Same as compileTextFile but you can set default template values.
compileHtmlFile: Same as compileTextFile but all embedded variables are escaped for HTML.
compileHtmlFileWithDefault: Same as compileHtmlFile but you can set default template values.
For more details, see the latest haddock document.
Checking behaviours in ghci
To check the behaviour, you can test in ghci as follows. Note that compileText and compileHtml are used for checking syntax.
$ stack install heterocephalus # Only first time
$ stack repl --no-build --no-load
Prelude> :m Text.Heterocephalus Text.Blaze.Renderer.String
Prelude> :set -XTemplateHaskell -XQuasiQuotes
Prelude> let a = 34; b = "<script>"; in renderMarkup [compileText|foo #{a} #{b}|]
"foo 34 <script>"
Prelude> let a = 34; b = "<script>"; in renderMarkup [compileHtml|foo #{a} #{b}|]
"foo 34 &lt;script&gt;"
Syntax
The Text.Heterocephalus module provides two major features for use in template files: variable interpolation and control statements.
Variable interpolation
A Haskell variable can be embedded in the template file with the
#{foo}
syntax. The value of the variable will be injected in at run time.
Basic usage
All of the following are correct (this assumes that you have already declared the var variable in your Haskell program and it is in scope):
#{ var } #{var} #{ var} #{var } #{ var }
The variable must be an instance of
Text.Blaze.ToMarkup.
Advanced usage
You can use functions and data constructors as well.
#{ even num } #{ num + 3 } #{ take 3 str } #{ maybe "" id (Just b) }
Control statements
Only two type of control statements are provided.
Forall
%{ forall x <- xs }
  #{x}
%{ endforall }

%{ forall (k,v) <- kvs }
  #{k}: #{v}
%{ endforall }
If
%{ if even num }
  #{num} is even number.
%{ else }
  #{num} is odd number.
%{ endif }
%{ if (num < 30) }
  #{ num } is less than 30.
%{ elseif (num <= 60) }
  #{ num } is between 30 and 60.
%{ else }
  #{ num } is over 60.
%{ endif }
Case
%{ case maybeNum }
%{ of Just 3 }
  num is 3.
%{ of Just num }
  num is not 3, but #{num}.
%{ of Nothing }
  num is not anything.
%{ endcase }
%{ case nums }
%{ of (:) n _ }
  first num is #{n}.
%{ of [] }
  no nums.
%{ endcase }
Why do we not provide maybe and with?
TODO
Discussion about this topic is in issue #9.
Why "heterocephalus"?
"Heterocephalus" is the scientific name of the naked mole-rat.
Changes
Change Log
Version 1.0.5.0 (2017-06-05)
New features
Add settings to be able to change the character used to delimit control statements #18
Document updates
Fixed small spelling/grammars on readme #19
Version 1.0.4.0 (2017-02-07)
New features
Expose htmlSetting and textSetting
Version 1.0.3.0 (2017-01-24)
New features
Add case control statement
Version 1.0.2.0 (2016-12-13)
New features
- Add compileTextFileWith and compileHtmlFileWith to inject extra variables
- Add ScopeM type for specifying extra template variables
- Add setDefault and overwrite for constructing ScopeM
https://www.stackage.org/package/heterocephalus
I'm making a battleship game and so far I have this, but I'm getting an error and I don't know what it means. This is the error:
error C2664: 'strcpy' : cannot convert parameter 1 from 'char' to 'char *'
and this is my code:
#include <iostream.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

struct coord {
    int x;
    int y;
};

int main()
{
    int c;
    int x = 0;
    char view[7][25];
    coord place[2];

    strcpy(view[0], " ___________ ");
    strcpy(view[1], " 1| o o o o o |");
    strcpy(view[2], " 2| o o o o o |");
    strcpy(view[3], " 3| o o o o o |");
    strcpy(view[4], " 4| o o o o o |");
    strcpy(view[5], " 5| o o o o o |");
    strcpy(view[6], " |___________|");
    strcpy(view[7], " 1 2 3 4 5 ");

    cout << view[0] << endl;
    cout << view[1] << endl;
    cout << view[2] << endl;
    cout << view[3] << endl;
    cout << view[4] << endl;
    cout << view[5] << endl;
    cout << view[6] << endl;
    cout << view[7] << endl;
    cout << "" << endl;
    cout << "Welcome to battleship" << endl;
    cout << "First position your ships, you have 3 ships" << endl;
    cout << "Bad idea to put all your battle ships on one coordinate" << endl;
    cout << "" << endl;

    do {
        for (int i = 1; i < 4; i++) {
            c = i;
            cout << "Enter the y coordinate you want to place your " << i << " ship(1-5)" << endl;
            cin >> place[i].y;
            cout << "" << endl;
            cout << "Enter the x coordinate you want to place your " << i << " ship(1-5)" << endl;
            cin >> place[i].x;
            strcpy(view[place[i].y][place[i].x], "s"); // says that the problem is here
            cout << "" << endl;
        }
        system("CLS");
    } while (x > 0);

    cout << view[1] << endl;
    cout << view[2] << endl;
    cout << view[3] << endl;
    cout << view[4] << endl;
    cout << view[5] << endl;
    cout << view[6] << endl;
    cout << view[7] << endl;
    cout << view[8] << endl;
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/113991/error-i-don-t-understand
Range-based for Statement (C++)
Executes statement repeatedly and sequentially for each element in expression.
Syntax
for ( for-range-declaration : expression ) statement
Remarks
Use the range-based for statement to construct loops that must execute through a "range", which is defined as anything that you can iterate through—for example, std::vector, or any other C++ Standard Library sequence whose range is defined by a begin() and end(). The name that is declared in the for-range-declaration portion is local to the for statement and cannot be re-declared in expression or statement. Note that the auto keyword is preferred in the for-range-declaration portion of the statement.
New in Visual Studio 2017: Range-based for loops no longer require that begin() and end() return objects of the same type. This enables end() to return a sentinel object such as used by ranges as defined in the Ranges-V3 proposal. For more information, see Generalizing the Range-Based For Loop and the range-v3 library on GitHub.
This code shows how to use range-based for loops to iterate through an array and a vector:
// range-based-for.cpp
// compile by using: cl /EHsc /nologo /W4
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // Basic 10-element integer array.
    int x[10] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

    // Range-based for loop to iterate through the array.
    for( int y : x ) {
        // Access by value using a copy declared as a specific type.
        // Not preferred.
        cout << y << " ";
    }
    cout << endl;

    // The auto keyword causes type inference to be used. Preferred.
    for( auto y : x ) {
        // Copy of 'x', almost always undesirable
        cout << y << " ";
    }
    cout << endl;

    for( auto &y : x ) {
        // Type inference by reference.
        // Observes and/or modifies in-place. Preferred when modify is needed.
        cout << y << " ";
    }
    cout << endl;

    for( const auto &y : x ) {
        // Type inference by const reference.
        // Observes in-place. Preferred when no modify is needed.
        cout << y << " ";
    }
    cout << endl;
    cout << "end of integer array test" << endl;
    cout << endl;

    // Create a vector object that contains 10 elements.
    vector<double> v;
    for (int i = 0; i < 10; ++i) {
        v.push_back(i + 0.14159);
    }

    // Range-based for loop to iterate through the vector, observing in-place.
    for( const auto &j : v ) {
        cout << j << " ";
    }
    cout << endl;
    cout << "end of vector test" << endl;
}
Here is the output:
1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9 10
1 2 3 4 5 6 7 8 9 10
end of integer array test

0.14159 1.14159 2.14159 3.14159 4.14159 5.14159 6.14159 7.14159 8.14159 9.14159
end of vector test
A range-based for loop terminates when one of these statements is executed: a break, a return, or a goto to a labeled statement outside the range-based for loop. A continue statement in a range-based for loop terminates only the current iteration.
Keep in mind these facts about range-based for:
Automatically recognizes arrays.
Recognizes containers that have .begin() and .end().
Uses argument-dependent lookup begin() and end() for anything else.
See also
auto
Iteration Statements
Keywords
while Statement (C++)
do-while Statement (C++)
for Statement (C++)
Feedback
https://docs.microsoft.com/en-us/cpp/cpp/range-based-for-statement-cpp?redirectedfrom=MSDN&view=vs-2019
Today is the last day to fill out the survey for the Netbeans UML module. So, I thought I would share some early results of the survey. As of Sunday there have been 67 responses. if you have not already taken the survey, please take some time to give us your opinion.
The current results are as followings:
Stability: Excellent 34%, Somewhat 60%, Not Stable 6%
How Often Used: Often 42%, Somewhat 45%, Never 13%
Usability: Excellent 36%, Somewhat 54%, Not Usable 10%
Performance: Excellent 42%, Somewhat 54%, Not Adequate 4%
Documentation: Excellent 46%, Somewhat 52%, Not Adequate 1%
The interesting thing to note is that almost everyone who said the tool is not usable said it was because it does not support generics. I would like to hear more about how UML does not support generics. The reverse engineering and code generation do have generics support, but perhaps there are areas that are not covered so well. I know of one important issue that was fixed in the last couple of days. I think that this issue will make a big difference. Please let me know what you think.
Types of Users
Design/Architect: 84%
Code Generation: 66%
Reverse Engineering: 75%
Diagrams Used
Activity: 34%
Class: 93%
Collaboration: 19%
Component: 30%
Deployment: 19%
Sequence: 58%
State: 33%
Use Case: 64%
Modeling Activities
Report Generation: 40%
Reverse Engineering: 66%
Code Generation: 69%
Design Patterns: 40%
Requirements Gathering: 36%
Generics were a new feature in Java 5. This is not news, because Java 5 has been out for a long time now. Now that NetBeans has UML support, we are getting a lot of questions about UML support for generics. I will make an attempt to demonstrate how UML can represent generics.
In UML every classifier can have template parameters. Classifiers are things like Classes, Interfaces, and even Use Cases. To use a template you have to create an instance of the template. The instance of a template is a new type. Creating an instance of a template is similar to extending a class. This is not too far-fetched: when templates were first added to C++, many C++ compilers would actually create a new class declaration for every new template instantiation, and then compile that new class declaration. So, ArrayList<String> is a completely different type than ArrayList<Integer>, and vice versa. You cannot assign an ArrayList<Integer> to a variable that is declared to be of type ArrayList<String>. UML has the same concepts. UML actually has two different mechanisms to represent generic instances.
The first is called an anonymous classifier (in the NetBeans UML module, we use the term derivation classifier). The anonymous classifier is used to define an instance of a template. The anonymous classifier uses syntax to declare the template name and the arguments being passed into the template. The NetBeans UML module currently does not use the UML notation, but instead uses a more Java-like notation.
The second approach is to use a derivation edge. You can draw a derivation edge between a classifier and a template. A derivation edge has bindings that are used to specify the template arguments. The anonymous classifier is simply a shortcut for the derivation edge approach. A problem with the derivation edge approach is that there is no way to determine whether a derivation is for generalizing from a class or for implementing an interface. Code generation can try to determine if a derivation is for implementation or generalization if classes and interfaces are used. However, if you start to use constructs like DataTypes to represent classifiers declared in a library, things are not so clear. Also, between generalization and implementation links, the picture gets cloudy when you start to use a third relationship to represent a super class relationship (not to mention that the derivation edge looks like an implementation link with bindings). For these reasons the NetBeans UML module does not currently allow derivation edges to be drawn from a class to a template. In the future, when we have better support for libraries, we will probably add this feature to the tool.
Following are a few examples of how the Java generics are represented in UML.
Example 1
The following class declaration declares a class that extends a generic type. The super type is a generic instance of MyGeneric, with the argument of String.
public class MyExtendedGeneric extends MyGeneric<String>
{
}
The NetBeans UML reverse engineering module will create a class that extends a derivation classifier. The derivation classifier will have a derivation to the MyGeneric template, and its binding will specify that the argument is a String.
Example 2
The following class declaration declares a class that implements a generic type. The super interface is a generic instance of MyGenericInterface, with the argument of String.
public class MyExtendedGeneric implements MyGenericInterface<String>
{
}
The NetBeans UML reverse engineering module will create a class that implements a derivation classifier. The derivation classifier will have a derivation to the MyGenericInterface template, and its binding will specify that the argument is a String. For this use case the NetBeans UML module has actually extended UML. In standard UML, an implementation link can only be created between a classifier and an interface. However, in order to better support templates, NetBeans allows an implementation link to be created between classifiers and derivation classifiers as well.
Example 3
The following class declaration declares a class that has one data member. The type of the data member is an instance of the generic MyGeneric, with the argument of String.
public class MyClass
{
MyGeneric<String> myAttr = new MyGeneric<String>();
}
The NetBeans UML reverse engineering module will create a class that has an aggregation association to a derivation classifier. The derivation classifier will have a derivation to the template MyGeneric, and its binding will specify that the argument is a String.
Technorati Tags: Modeling, NetBeans, Generics, Java, UML
Today I saw a very cool movie created from World of Warcraft scenes. Check it out.
After ragging on MarsEdit about its image support, I thought I had better try out Ecto's image support.
The first thing I noticed is that I can drag an image from my filesystem (or even a web page) and drop the image into the editor. I can also resize the image by clicking and dragging a resize handle. Very cool. I can also double-click on the image and get a number of options. For example, I can specify that I want this image to be a thumbnail. That is a nice feature.
I will now try to publish and see what happens.
Now, MarsEdit does have a combo box that lists a number of HTML tags. By using the HTML tags tool, it is easy to add HTML tags. My main concern at this time is how to insert images. The HTML tags combo box does have an insert image item. So maybe it will help. Let's try it out.
So far, so good. I was able to select a picture and the picture's alignment. When I pressed the OK button, MarsEdit took care of sending the image to the server.
The image is a picture of my four children. The picture was taken the same day that my youngest son was born. August 25, 2006.
MarsEdit has a Preview option (so did Ecto). The Preview window has a live preview option. The live preview option allows you to see the entry in WYSIWYG form. The question I have is why not put that in the editor? Also, after adding the image, I noticed that every keystroke I type in the editor causes the preview window to refresh, which causes the image to redraw over and over. It was very annoying, so I had to close the window :-( . I have now turned off the live preview, and I notice that the image still flickers when I scroll the preview window. Very annoying.
Well, now it is time to publish.
Publishing Results: Well, the good thing is that the entry was published and immediately appeared. That is good. However, again the newline characters were not replaced with HTML line break tags. Before editing, I thought that I had set the option to convert line breaks to tags. I will try it again.
When I inserted the image, I specified that I did not want the image to have an alignment. I just realized that MarsEdit defaulted the image to be left aligned. That is not good. I had to update the HTML by hand.
Well, that did not work either. I just gave up and added the paragraph tags myself. All in all, I like Ecto better.
One of my hang-ups with blogging is writing the blog. You have to enter the blog into a text field on a web page. The problem is that you do not get nice editing tools. Well, at least I have not seen any editing tools. So, I decided to see if there are any good blog editing tools.
I am testing out Ecto. What is cool about using Ecto (maybe other blog tools as well) is that the entry editing feels like you are writing any other document.
Well this is my first entry using a blog editor. Pretty cool.
Ok, I learned two things. First, there is a delay from when you publish and when the entry appears in the blog. The reason is that a publication time is set for the entry (why I do not know). Second, you have to use the option edit setting "Convert line breaks in Rich Text mode" to automatically convert newlines to <p> and <br> tags.
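The line-break conversion described above is easy to picture in code. Here is a rough sketch of that kind of newline-to-HTML conversion (this is an illustration of the idea, not Ecto's actual algorithm):

```python
def convert_line_breaks(text):
    # Blank lines separate paragraphs; single newlines inside a
    # paragraph become <br> tags, mirroring the editor option.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return "\n".join(
        "<p>" + p.replace("\n", "<br>") + "</p>" for p in paragraphs
    )

html = convert_line_breaks("first line\nsecond line\n\nnew paragraph")
# html == "<p>first line<br>second line</p>\n<p>new paragraph</p>"
```

Without such a step, the raw newlines are simply ignored by the browser, which is exactly the run-together output described in the publishing attempts above.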
Alright, I have now updated my client with the time zone information. Let's see if this works.
Well, that did not work. As a matter of fact, it seems random when it will decide to actually post the entry. :-(
By no means is this concept new. If you look at the history of software engineering you will see steady progress of moving away from abstract concepts to more logical concepts. First, we started with machine code. I would argue that machine language is a very abstract language to a human, while being the most concrete language to a computer. We then moved to assembly language. Assembly language made developing a little more concrete to us humans by representing op codes (numbers) as names, but assembly was not understandable to a computer. Therefore, the assembly language program had to be assembled to machine language so that the computer could understand it. The assembled program became the artifact. Next came structured and functional languages. At first people said that these languages could never produce code as well as programs written by hand in assembly. Over time, this statement was proven to be incorrect, and the object code became the artifact. Next came concepts like object oriented programming and aspect programming. Over and over the process repeats. We move to a more logical representation of the machine code. Each iteration makes the programming language more abstract to the computer but easier to understand for us humans. Domain specific languages simply repeat the cycle, trying to make it easier to understand the concepts needed to develop software.
The previous statements are all fine and good, but is there any practical use? Yes, there are a number of examples of domain languages out in the world. If you look at NetBeans you will see a number of examples. The one that everyone will be most familiar with is the form editor. It is a language that is designed specifically for designing user interfaces. As such, it is very intuitive to use and very powerful. The reason it is very intuitive is that it uses the concepts of a user interface to describe the user interface. It does not use general programming concepts of classes and components to describe a user interface. The form editor uses buttons, panels, labels, and a host of user interface controls to describe a user interface. To build the same user interface in code is very possible, but it is a lot of work. It takes a lot of work to get the controls into the correct locations, not to mention handling resizing windows. However, when you use the form editor, this becomes a fairly trivial task. Especially a form editor like Matisse. Note that with the form editor a lot of code still needs to be written. It is still up to the developer to write the user interface logic. The fact that the form editor leaves a lot of code to be written by hand is a very important thing to note. A domain specific language does not mean that there will no longer be a need for a good source code editor. The goal of a domain specific language is to start generating more code, not to do away with writing code.
Another example is the orchestration editor. The orchestration editor is a graphical editor that is designed to write BPEL documents. The orchestration editor is an example of a domain specific language that represents not structural information but instead behavioral information. Even though the orchestration editor represents behavioral information, it is still easy to use. It is easy to use because it uses the concepts of the BPEL domain to describe the behavior that is to be executed.
In my previous examples of domain specific languages, I have used graphical languages. Domain specific languages are not limited to graphical languages. Indeed, there are many examples of textual domain specific languages. BPEL is a good example of a domain specific language that is text based instead of graphical.
I can already hear the arguments that BPEL documents can easily be represented in a general purpose language like UML. That is a very true statement. The UML Activity diagram would be a very good fit to represent BPEL documents. As a matter of fact, I have made that argument myself many times. Also, a page flow diagram can easily be represented by a state diagram. The point is that the higher the level of the language, the more intuitive the language. Now, I am not saying that there is no need for general purpose modeling languages. I believe that general purpose modeling languages like UML can be used to represent domain specific languages, but that is a topic for later discussion.
So, where does that leave us? Are domain specific languages by their nature abstract? Well, the designer of a domain specific language has to think about how to design the language. Usually, designing a language will take you into abstraction. I would argue that the level of abstraction required to design a language is no more than it would take to build the concepts into a modern object oriented application. For example, to write an accounting application you would still have to develop the concepts of the accounting domain into your system. You would use the accounting terminology as the abstract concepts, and the object oriented concepts as the concrete concepts. With domain specific languages, the domain concepts become the concrete concepts, therefore putting the focus on the problem domain instead of on the domain of the programming language.
Later I will try to have a more in-depth demo of model element navigation.
We also have a new web report. In previous releases of Java Studio Enterprise, the web report was quite awkward to use. The new web report is much easier. We have chosen to format our reports after JavaDoc.
On a sad note, we have taken out live round trip. We decided to take out live round trip for a number of reasons. One major reason was that it did not work as well as we would have liked. Also, we were using the NetBeans Java Model, and in NetBeans 6.0 the Java model will be taken out of NetBeans. The Java Editor will now be built on top of the Jackpot project. In the end NetBeans will have a much more robust editor (check out Jan Lahoda's blog about improved coloring and about improved code completion), but for the UML team it means that we would have to rewrite our live round trip mechanism. Because of these two reasons, we decided that it would be best to take out the round trip component for now. In later releases we plan to put it back into the tool, but it will most likely take the form of a batch round trip instead of a live round trip.
We have also fixed a number of bugs and worked on a number of usability features. Check out our NetBeans module.
The previous paragraph sounds a lot like the Software Factories approach. The Software Factories approach creates a new language for each domain. The new domain specific language (DSL) is then used to describe an application. The advantage of the domain specific language approach is that you are no longer using classes, interfaces and objects to describe applications, you are now manipulating constructs that are custom fit for the specific domain that is being developed. As you have probably guessed the software factories approach will require the birth of a number of new languages. Each company that employs developers will also have to employ language specialists to develop and maintain languages needed for their specific domain.
In my opinion, the Software Factories approach has a lot of good points and a lot of bad points. One area where I disagree is the break away from UML. I think that the higher constructs can be accomplished by either extending the UML meta model or by using the profile extension mechanism. In order to add the higher level constructs, the profile mechanism will need to be extended. One of the extensions that needs to be made to the profile mechanism is to allow a profile author to specify an alternative representation for a model element. Once the "webservice" stereotype is applied to a component, the user will no longer see a component notation that has the stereotype of "webservice." The model element will instead be represented as a web service. This concept is not too far-fetched. Many UML tools already support the Robustness diagram notation. The Robustness model elements are UML classes that have a stereotype of "boundary," "controller," or "entity." The trick is that the representation of classes with the Robustness stereotypes is not the same as the representation of a class without the stereotype.
The way I see it, most users of modeling tools do not care about how the model element is represented internally. All they care about is how clearly the information is represented, and how quickly they can get their work done. By still using an extension mechanism, the burden is moved from the language designers to the tool providers. The stereotypes that have their own representations can be added to a palette. A tool that adds the stereotypes to its palette can also allow the user to quickly drop model elements with the stereotype. The code generation mechanism (or model transformation mechanism) can handle the issues of transforming the classes with the stereotype to source code. Tools can do whatever it takes to limit the reasons why the user has to physically apply stereotypes to a model element.
Let's say that we have written DSLs for the EJB and web service domains. Now we need an EJB that is also a web service. With the DSL approach, we have to create a new language that merges the two languages, which means the developer now has to wait until the language expert can create the new language. I would rather have the ability to overlay the concept of a web service onto my EJB object or vice versa. That way the developer would not have to wait for a new language to be created. Then, I could also view my object as a web service or an EJB depending on what the diagram is trying to communicate.
A domain specific language is extremely effective for its designed use. However, as soon as you start to go beyond the bounds of the language's concepts, the language starts to slow you down. Because of the limitations of the language, the developer will either fight the language, switch languages, or again go back to the language developer. In any case, the rate of development will be drastically hindered.
I believe that a shift in our thinking needs to be made. It is time for modeling languages to advance from low level languages to more abstract modeling languages. UML should become to modeling languages what assembly language became to modern programming languages. When it became too complex and too painful to develop using assembly languages, the industry started to develop more advanced languages. The higher level language would still compile down to the assembly language. For a long time the higher languages allowed developers to embed assembly language instructions into an application. More recently, programming languages such as Java and C# no longer allow the developers to embed assembly language instructions, and they no longer compile down to assembly language. The same pattern has to occur in the modeling world as well.
We need to build on top of UML to produce higher level modeling languages. We still need to be able to embed some UML constructs into the higher level modeling languages. The meta data of the higher level languages need to be built on top of the UML meta model, but few users of the modeling tools will know or care about how model information is stored. Just like few developers care what assembly instructions are generated when they compile their source files.
There are two reasons why we should build on top of UML. First, UML is already accepted as the modeling standard. Second, UML already has a powerful extension mechanism. The extension mechanism needs to be enhanced, but it is still a very good mechanism.
There has been a dialog between Grady Booch and Alan Cameron Wills on the differences between using UML and DSLs (Domain Specific Languages). After reading the two blogs, I had a lot of thoughts about the conversation, so I thought that I would share some of them here.
Alan asserts that with a DSL you have a model that is designed around the domain for which you are designing. In the example given by Alan, he states that when you retrieve a type from a DSL model, you will get the domain specific type name. In UML, when you retrieve the type, you get the abstract model element type, for example a UML Class. Grady counters the argument by stating that what you get is a stereotype with the domain specific type.
Both arguments are correct. When you read a UML model and ask a model element for its type, you do receive a UML Class. You are then able to retrieve the domain specific type by querying for its stereotypes. Now that I have stated how each argument is correct, I do not understand the point that Alan is trying to make with his argument. In both cases the model element information will have to be translated into some other form to actually produce source code. Whether or not the translation tool has to check for a stereotype does not affect the user of the tool. Alan may be trying to make the argument that it is bad to require the user to add the stereotype to the class in the first place. I see this as a tool issue as well. UML does not restrict tool vendors from creating new representations for the UML model elements. An example is the Robustness diagram.
Most UML tools have some way to support the Robustness diagram. The model elements on the robustness diagram are specializations of a UML class. Since most UML tools provide special visualizations for Robustness diagram, the user is not required to add the Robustness stereotypes to the UML class.
While on the topic of stereotypes, Alan also states that "You have to understand all of UML (stereotypes, etc.) before you can create your language on top of it". I personally cannot imagine how you could avoid having a complete understanding of the domain when you go to write a language. Even if you want to write a new DSL, you would have to be very knowledgeable about modeling languages. Basically, this is not an area for the novice, even if you are writing a domain specific language. So the choice is whether to build upon a system that has been used by a number of people, or to start from scratch.
Alan also makes the point that UML may not be the best choice to represent all domains, such as a visual GUI editor. I tend to agree with him on this point. However, I would also state that I think the authors of MDA would agree as well. OMG has spent a lot of effort designing modeling languages for other domains, like data warehouse applications. In the book MDA Explained: The Model Driven Architecture -- Practice and Promise, the authors talk about other model types. The sample application that is discussed in the book uses ER diagrams. I am not saying that I do not have any problems or concerns with MDA; for example, in MDA Explained the ER diagrams are generated from a Class diagram, which I do not see as a realistic approach. In reality the developer will most likely be given the ER diagram by a database management group. It will be the developer's job to retrieve the information from an existing database instead of designing a database.
http://blogs.sun.com/treyspiva/
There's another issue with these files.

From drivers/scsi/qla2xxx/ql2100_fw.c in kernel 2.6:

<-- snip -->

/******************************************************************************
 * QLOGIC LINUX SOFTWARE
 *
 * QLogic ISP2x00 device driver for Linux 2.6.x
 * Copyright (C) 2003-2004 QLogic Corporation
 * (...)
 *
 *************************************************************************/
/*
 * Firmware Version 1.19.24 (14:02 Jul 16, 2002)
 */
...
#ifdef UNIQUE_FW_NAME
unsigned short fw2100tp_code01[] = {
#else
unsigned short risc_code01[] = {
#endif
	0x0078, 0x102d, 0x0000, 0x95f1, 0x0000, 0x0001, 0x0013, 0x0018,
	0x0017, 0x2043, 0x4f50, 0x5952, 0x4947, 0x4854, 0x2032, 0x3030,
	0x3120, 0x514c, 0x4f47, 0x4943, 0x2043, 0x4f52, 0x504f, 0x5241,
	0x5449, 0x4f4e, 0x2049, 0x5350, 0x3231, 0x3030, 0x2046, 0x6972,
...
<-- snip -->

The GPL says that you must give someone receiving a binary the source code, and it says: "The source code for a work means the preferred form of the work for making modifications to it."

This is perhaps a bit beside the main firmware discussion, and IANAL, but is this file really covered by the GPL?
http://lkml.org/lkml/2004/3/25/119
On 13/08/10 09:39, Nick Coghlan wrote:
On Thu, Aug 12, 2010 at 10:51 PM, Greg Ewing greg.ewing@canterbury.ac.nz wrote:
There are plenty of uses for cofunctions that never send or receive any values using yield.
I provided an example of doing exactly that during the yield-from debate. A full discussion can be found here:....
In the above example, for instance, I define a function sock_readline() that waits for data to arrive on a socket, reads it and returns it to the caller. It's used like this:
line = yield from sock_readline(sock)
or if you're using cofunctions,
line = cocall sock_readline(sock)
The definition of sock_readline looks like this:
def sock_readline(sock):
    buf = ""
    ...
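The quoted definition of sock_readline is cut off in this copy. As a self-contained sketch of the same yield-from pattern the thread describes, here is a version with a fake socket and a trivial trampoline so it runs without a network or scheduler (FakeSock, sock_recv, and run are illustrative stand-ins, not Greg Ewing's original code):

```python
class FakeSock:
    """Delivers canned data in small chunks, like a non-blocking socket."""
    def __init__(self, data, chunk=4):
        self.chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]

    def recv(self, n):
        return self.chunks.pop(0) if self.chunks else ""

def sock_recv(sock, n):
    # In a real event loop this yield would suspend the coroutine until
    # the socket is readable; here it just marks the suspension point.
    yield "waiting-for-read"
    return sock.recv(n)

def sock_readline(sock):
    # Accumulate chunks until a line is complete, delegating each wait
    # to sock_recv via yield from.
    buf = ""
    while not buf.endswith("\n"):
        data = yield from sock_recv(sock, 1024)
        if not data:
            break
        buf += data
    return buf

def run(gen):
    # Trivial trampoline: drive the generator until it returns a value.
    try:
        while True:
            next(gen)
    except StopIteration as e:
        return e.value

line = run(sock_readline(FakeSock("hello\n")))
# line == "hello\n"
```

The key point from the thread is visible here: `line = yield from sock_readline(sock)` delivers the subgenerator's return value directly to the caller, while the trampoline only ever sees the yielded suspension markers.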
https://mail.python.org/archives/list/python-ideas@python.org/message/GSCN42RQM573YDCEINGBVGESSC36XO64/
How to deploy with WSGI
Django's primary deployment platform is WSGI, the Python standard for web servers and applications. Django includes getting-started documentation for the following WSGI servers:
The application object
One key concept of deploying with WSGI is to specify a central application callable object which the webserver uses to communicate with your code. This is commonly specified as an object named application in a Python module accessible to the server.
The startproject command creates a projectname/wsgi.py that contains such an application callable.
Note
Upgrading from a previous release of Django and don’t have a wsgi.py file in your project? You can simply add one to your project’s top-level Python package (probably next to settings.py and urls.py) with the contents below. If you want runserver to also make use of this WSGI file, you can also add WSGI_APPLICATION = "mysite.wsgi.application" in your settings (replacing mysite with the name of your project).
Initially this file contains:
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

# This application object is used by the development server
# as well as any WSGI server configured to use this file.
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
To apply WSGI middleware you can simply wrap the application object in the same file:
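The snippet that originally followed this sentence is missing from this copy. As a minimal sketch of the idea, any callable with the WSGI (environ, start_response) signature can wrap the application object; the middleware class and the demo app below are illustrative, not Django's API:

```python
def demo_app(environ, start_response):
    # Stand-in for the object returned by get_wsgi_application().
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

class PathLoggingMiddleware:
    """Wraps any WSGI callable and records each requested path."""
    def __init__(self, application):
        self.application = application
        self.paths = []

    def __call__(self, environ, start_response):
        self.paths.append(environ.get("PATH_INFO", "/"))
        # Delegate to the wrapped WSGI application unchanged; real
        # middleware could also modify environ or the response here.
        return self.application(environ, start_response)

# In wsgi.py you would wrap the module-level object the same way:
# application = PathLoggingMiddleware(application)
application = PathLoggingMiddleware(demo_app)
```

Because the server only ever sees the name `application`, rebinding it to the wrapper is all that is needed; the original application keeps handling requests underneath.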
https://docs.djangoproject.com/en/1.4/howto/deployment/wsgi/
The Ethereum Name Service (ENS) is so named not because it only supports Ethereum addresses (ENS can support any cryptocurrency address, as well as non-blockchain data like IPFS hashes and Tor .onion addresses), but because it runs on the Ethereum blockchain and uses ETH for payments.
Other blockchain-based naming projects, both recently and in the past, have chosen to launch their own bespoke blockchains and tokens.
This post explains why we think using Ethereum and ETH is the best path for blockchain-based naming.
Some history
The first significant blockchain-based naming project, Namecoin, launched with its own dedicated blockchain in 2011.
To create a new blockchain-based application at that time, one had to launch a new blockchain dedicated to that purpose (since making applications on Bitcoin is difficult). This involved having to be knowledgeable enough to create and maintain a new blockchain layer 1 protocol, bootstrapping a new mining community for security, and finally getting people to actually start using the new blockchain.
Ethereum changed all of this when it launched in 2015. One could now much more easily launch a new blockchain-based application, piggy-backing on the security, userbase, and infrastructure of the already existing Ethereum blockchain.
ENS launched on Ethereum in May 2017 as an open source and non-profit project, taking advantage of these and other benefits (explained below). ENS has quickly become the leading blockchain-based naming project with over 100 wallet and dapp integrations and over 310k registered names.
Other blockchain-based naming projects have chosen to follow the path of Namecoin. For example, the Handshake blockchain and its accompanying token HNS launched recently, and the FIO blockchain and token are also set to launch soon, among others. (They are also unnecessarily creating many new TLDs that are sure to eventually conflict with the DNS namespace. We think this is bad for users and bad for the adoption of blockchain technology for Internet naming, but that's another topic; more here.)
Benefits of a bespoke naming blockchain and token?
There are a few apparent technical benefits to running a naming service on a bespoke blockchain: less blockchain bloat, the speed and cost of transactions, and a smaller attack surface. I will explain and respond to each in turn.
Less Blockchain Bloat
If you want the full security benefits of ENS (or any Ethereum application), you need to run an Ethereum full node, which means not only storing the ENS data but all of the data of everything else running on Ethereum. With a bespoke naming blockchain, a full node only includes the naming data, so the size of the blockchain should be smaller and easier to run.
But the security of a bespoke naming blockchain will actually be lower if there’s less mining security and fewer full nodes compared to Ethereum. Further, the cost of running an Ethereum node, while still affordable for many people, may be reduced in the future by Ethereum light clients and sharding.
Speed and Cost
The benefits of a bespoke naming blockchain versus Ethereum in this regard are negligible. Updating an ENS record on Ethereum usually costs around $0.01 in ETH if you’re okay waiting a few minutes for confirmation, or around $0.04 if you want it to confirm in the next block.
As ENS grows, we plan to leverage L1 and L2 scaling on Ethereum, but for the foreseeable future it’s not a problem for most users.
Attack Surface
This of course depends on the bespoke blockchain. If a bespoke naming blockchain would have all the benefits of programmability available on Ethereum (more on this below), it might be just as complicated; if not, it likely lacks key features.
Further, since Ethereum is far more widely used, it has a large community of devs maintaining, fixing, and improving Ethereum, something a bespoke naming blockchain will have a difficult time reproducing.
Greater benefits to using Ethereum and ETH
There are many clear advantages to running a naming service on Ethereum and using ETH that we think more than compensate for any advantages of having a bespoke naming blockchain and token.
Benefits of Ethereum
Among the most obvious, using Ethereum means ENS gets all of the security, robustness, censorship-resistance, decentralization, and regular protocol improvements of Ethereum.
I’d like to also highlight a few other benefits that might not be as well understood:
- Programmability and interactivity: By being on Ethereum, ENS constitutes another Ethereum “lego.” You can program your names with the Solidity you already know, do cool things like have names owned by Ethereum-based DAOs, or even have your names automatically do things in response to other smart-contracts on Ethereum that have nothing to do with naming. This is a revolutionary feature for naming (one that still needs exploring). Bespoke naming blockchains lack this latter feature entirely.
- Ecosystem and infrastructure: ENS-native .ETH names are ERC721-compliant NFTs, which means that .ETH names can automatically plug in to any NFT market or wallet (e.g. OpenSea, et al). ENS also benefits from being able to easily plug in to the existing infrastructure of the Ethereum ecosystem, like major Ethereum libraries, MetaMask, TruffleSuite, MyEtherWallet, et al.
Benefits of ETH
Using ETH rather than our own token means that users get all of the convenience, wide distribution, supporting infrastructure, and market liquidity of ETH. A bespoke naming token simply adds unnecessary friction.
Conclusion
We share the goals of bespoke naming blockchain projects: we want to bring the benefits of the decentralization and censorship-resistance of blockchains to Internet naming. We are convinced that building on Ethereum and using ETH is the best way to achieve those goals, in addition to allowing for new capabilities that bespoke naming blockchains by their very nature lack, like interactivity with other Ethereum smart-contracts.
That is why we're building ENS on Ethereum and have no plans to change in the foreseeable future.
And in doing so, ENS is taking Ethereum to the rest of the Internet. Each new feature and integration with ENS, particularly with things outside of the Ethereum community (e.g. DNS records and namespace), further entrenches ENS, and therefore Ethereum, as a basic piece of Internet infrastructure.
https://medium.com/the-ethereum-name-service/why-ens-uses-ethereum-and-eth-not-a-bespoke-blockchain-and-token-36f86727e71f?source=post_page-----36f86727e71f----------------------
implementation of rand()
I am writing some embedded code in C and need to use the rand() function. Unfortunately, rand() is not supported in the library for the controller. I need a simple implementation that is fast, but more importantly has little space overhead, that produces relatively high-quality random numbers. Does anyone know which algorithm to use or sample code?
EDIT: It's for image processing, so "relatively high quality" means decent cycle length and good uniform properties.
Check out this collection of random number generators from George Marsaglia. He's a leading expert in random number generation, so I'd be confident using anything he recommends. The generators in that list are tiny, some requiring only a couple of unsigned longs as state.
Marsaglia's generators are definitely "high quality" by your standards of long period and good uniform distribution. They pass stringent statistical tests, though they wouldn't do for cryptography.
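To make "tiny" concrete, here is a sketch of one member of the family: Marsaglia's 32-bit xorshift generator with the well-known shift triple 13/17/5 (from his 2003 "Xorshift RNGs" paper). The function name and the pointer-to-state interface are my choices, not taken from his collection verbatim:

```c
#include <stdint.h>

/* Marsaglia's 32-bit xorshift generator (shift triple 13, 17, 5).
 * The state must be seeded with any nonzero value; the sequence then
 * cycles through all 2^32 - 1 nonzero 32-bit values before repeating. */
static uint32_t xorshift32(uint32_t *state)
{
    uint32_t x = *state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    *state = x;
    return x;
}
```

Three shifts and three XORs per number, and the whole state is a single 32-bit word — hard to beat for space overhead on a small controller.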
Use the C code for LFSR113 from L'Ecuyer:
unsigned int lfsr113_Bits (void)
{
   static unsigned int z1 = 12345, z2 = 12345, z3 = 12345, z4 = 12345;
   unsigned int b;

   b  = ((z1 << 6) ^ z1) >> 13;
   z1 = ((z1 & 4294967294U) << 18) ^ b;
   b  = ((z2 << 2) ^ z2) >> 27;
   z2 = ((z2 & 4294967288U) << 2) ^ b;
   b  = ((z3 << 13) ^ z3) >> 21;
   z3 = ((z3 & 4294967280U) << 7) ^ b;
   b  = ((z4 << 3) ^ z4) >> 12;
   z4 = ((z4 & 4294967168U) << 13) ^ b;

   return (z1 ^ z2 ^ z3 ^ z4);
}
Very high quality and fast. Do NOT use rand() for anything. It is worse than useless.
Here is a link to an ANSI C implementation of a few random number generators.
I recommend the academic paper Two Fast Implementations of the Minimal Standard Random Number Generator by David Carta. You can find a free PDF through Google. The original paper on the Minimal Standard Random Number Generator is also worth reading.
Carta's code gives fast, high-quality random numbers on 32-bit machines. For a more thorough evaluation, see the paper.
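For reference, the "minimal standard" generator itself (the Park–Miller/Lehmer recurrence x = 16807·x mod (2³¹−1)) is tiny; Carta's contribution is computing it without 64-bit arithmetic. Below is a plain sketch that does use a 64-bit intermediate, with the standard fold trick for the Mersenne-prime modulus — the function name is mine, and this is not Carta's optimized version:

```c
#include <stdint.h>

/* Park-Miller "minimal standard" generator: x = 16807 * x mod (2^31 - 1).
 * Seed must be in [1, 2^31 - 2]. Since 2^31 = 1 (mod 2^31 - 1), the
 * product can be reduced by adding its high 33 bits to its low 31 bits,
 * followed by at most one conditional subtraction. */
static uint32_t minstd_next(uint32_t *state)
{
    uint64_t p = (uint64_t)(*state) * 16807u;
    p = (p >> 31) + (p & 0x7fffffffu);   /* fold: fast mod 2^31 - 1 */
    if (p >= 0x7fffffffu)
        p -= 0x7fffffffu;
    *state = (uint32_t)p;
    return *state;
}
```

A classic self-check from Park and Miller's paper: starting from seed 1, the state after 10,000 calls must be 1043618065.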
http://thetopsites.net/article/52353083.shtml
The core of the umtx code in the kernel is this:

umtx_sleep:
    acquire and hold VM page
    tsleep loop using the page's physical address as the sleep address
    release page

umtx_wakeup:
    acquire and hold the VM page
    lookup the physical address
    wakeup based on physical address
    release page

The page is not supposed to be reassigned while a thread is sleeping in
umtx_sleep on that page, so the physical address shouldn't change between
the sleep and the wakeup.  There are two possibilities that I can think of:

* Firefox is remapping the underlying virtual address
* There is a bug in the kernel implementation somewhere that is causing
  the underlying page to get reassigned somehow.

I have two patches for you to try.  Please try each one separately (not
both together) and tell me which ones fix the problem, or that neither
of them fixes the problem.

-Matt

PATCH #1

Index: kern_umtx.c
===================================================================
RCS file: /cvs/src/sys/kern/kern_umtx.c,v
retrieving revision 1.7
diff -u -p -r1.7 kern_umtx.c
--- kern_umtx.c	2 Jul 2007 01:30:07 -0000	1.7
+++ kern_umtx.c	12 Apr 2008 18:12:12 -0000
@@ -105,6 +105,7 @@
 		return (EFAULT);
 	m = vm_fault_page_quick((vm_offset_t)uap->ptr, VM_PROT_READ, &error);
 	if (m == NULL)
 		return (EFAULT);
+	vm_page_wire(m);
 	sf = sf_buf_alloc(m, SFB_CPUPRIVATE);
 	offset = (vm_offset_t)uap->ptr & PAGE_MASK;
@@ -121,6 +122,7 @@
 		error = EBUSY;
 	}
 	sf_buf_free(sf);
+	vm_page_unwire(m, 1);
 	vm_page_unhold(m);
 	return(error);
 }

PATCH #2

Index: vm_object.c
===================================================================
RCS file: /cvs/src/sys/vm/vm_object.c,v
retrieving revision 1.31
diff -u -p -r1.31 vm_object.c
--- vm_object.c	8 Jun 2007 02:00:47 -0000	1.31
+++ vm_object.c	12 Apr 2008 18:16:54 -0000
@@ -1605,12 +1605,15 @@
 	}
 	/*
 	 * limit is our clean_only flag.  If set and the page is dirty, do
+	 * not free it.  If set and the page is being held by someone, do
 	 * not free it.
 	 */
 	if (info->limit && p->valid) {
 		vm_page_test_dirty(p);
 		if (p->valid & p->dirty)
 			return(0);
+		if (p->hold_count)
+			return(0);
 	}
 	/*
https://www.dragonflybsd.org/mailarchive/bugs/2008-04/msg00025.html
Applies to: C166 C Compiler
Information in this article applies to:
What does the strtoul library routine do?
The strtoul library routine converts the contents of a string to an unsigned long. Leading whitespace is ignored. A base may be specified. If the base specified is between 2 and 36, the conversion is performed in that base. If the base specified is zero, the prefix of the value stored in the string determines the base of the conversion: a leading 0 implies an octal value, and a leading 0x or 0X implies a hexadecimal value.
This function is declared as follows:
#include <stdlib.h>

unsigned long strtoul (
  const char *string,   /* string to convert */
  char **endp,          /* ptr to unconverted text */
  int base);            /* the base to convert from */
A pointer to the first character in string that can't be converted is stored in the location pointed to by endp, unless endp is NULL.
For example:
#include <stdlib.h>
#include <stdio.h>

void tst_strtoul (void)
{
  char buf [] = "123456 is a positive number";
  unsigned long val;
  char *p;

  val = strtoul (buf, &p, 10);          /* convert in base 10 */
  printf ("value=%lu, rest=\"%s\"\n", val, p);
}
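The base-0 prefix rules and the endp behaviour described above can be seen in a small standalone sketch (standard C, not C166-specific; the function name is mine):

```c
#include <stdlib.h>
#include <stdio.h>

void demo_strtoul (void)
{
  char *end;
  unsigned long v;

  v = strtoul ("0x1A", NULL, 0);   /* "0x" prefix: hexadecimal */
  printf ("%lu\n", v);             /* 26 */

  v = strtoul ("017", NULL, 0);    /* "0" prefix: octal */
  printf ("%lu\n", v);             /* 15 */

  /* end receives a pointer to the first unconverted character */
  v = strtoul ("123456 is a positive number", &end, 10);
  printf ("%lu \"%s\"\n", v, end); /* 123456 " is a positive number" */
}
```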
Article last edited on: 2005-10-20 07:44:50
http://infocenter.arm.com/help/topic/com.arm.doc.faqs/ka9974.html
On line 193 of "lib/xfile.c" there is a call to x_fflush which may start an infinite loop if something goes wrong during the flush and before f->bufused is updated.
Quick fix: test the return value of x_fflush and act accordingly
Better fix: replace the loop; I've drafted some code that I can mail you if you're interested
Note: this may explain the loop reported in comment #14 of bug #3817
Note: I've assigned this bug to 'ntlm_auth' because in ntlm_auth.c there are a lot of indirect calls to x_fwrite (through x_fprintf). But of course any other component of samba may be impacted.
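The looping hazard described above comes from retrying a write without checking for errors. A generic defensive pattern (a plain POSIX sketch of my own, not the Samba code) bails out on error and loops only on genuine short writes:

```c
#include <unistd.h>
#include <errno.h>
#include <sys/types.h>

/* Write the whole buffer, looping only on short writes.
 * Returns the number of bytes actually written (== len on success);
 * on error it stops and returns the count so far, so the caller can
 * detect failure by comparing against len, much as with fwrite(). */
static size_t write_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    size_t done = 0;

    while (done < len) {
        ssize_t n = write(fd, p + done, len - done);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted: retrying is safe */
            break;          /* real error: stop, do not spin */
        }
        if (n == 0)
            break;          /* no progress: avoid an infinite loop */
        done += (size_t)n;
    }
    return done;
}
```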
Created attachment 3187 [details]
non-looping draft implementation of x_fwrite
This is draft code I've written to implement x_fwrite with no loop. Please note that this code has not been tested, not even compiled.
Note also that this implementation better emulates fwrite in that upon error it returns zero or a positive number smaller than nmemb. Besides, the prototype of x_fwrite forbids returning -1.
Comment on attachment 3187 [details]
non-looping draft implementation of x_fwrite
>/* simulate fwrite() */
>size_t x_fwrite(const void *p, size_t size, size_t nmemb, XFILE *f)
>{
> ssize_t ret;
> size_t total=size*nmemb; // we should check for overflow...
>
> if (!total) return 0; // this is what fwrite must do according to the standard
>
> /* we might be writing unbuffered */
> if (f->buftype == X_IONBF ||
> (!f->buf && !x_allocate_buffer(f))) {
> ret = write(f->fd, p, total);
> if (ret < 0) return 0;
> return ret/size;
> }
>
> if (total <= f->bufsize - f->bufused) {
> memcpy(f->buf + f->bufused, (const char *)p, total);
> f->bufused += total;
> } else {
> size_t buffered = total % f->bufsize; // 0 <= buffered <= total
> if (x_fflush(f) < 0) return 0; // flush old content before writing more bytes
> ret = write(f->fd, (const char *)p, total-buffered); // no-op when (buffered == total); write all when (buffered == 0)
> if (ret < 0) return 0;
> if (ret < total-buffered) return ret/size;
> memcpy(f->buf, total-buffered+(const char *)p, buffered); // no-op when (buffered == 0); copy all when (buffered == total)
> f->bufused = buffered;
> }
> /* when line buffered we need to flush at the last linefeed. This can
> flush a bit more than necessary, but that is harmless */
> if (f->buftype == X_IOLBF && f->bufused) {
> int i;
> for (i=f->bufused-1; i>=0; i--) {
> if (*(i+f->buf) == '\n') {
> if (x_fflush(f) < 0) return (total-f->bufused)/size;
> break;
> }
> }
> }
>
> return nmemb;
>}
Comment on attachment 3187 [details]
non-looping draft implementation of x_fwrite
In the code attached to comment#2
> if (ret < total-buffered) return written/size;
should be
if (ret < total-buffered) return ret/size;
It would be best if you added a patch from "git
format-patch". This way you would also get all the credit in
the author information.
Thanks,
Volker
Created attachment 3243 [details]
Patch
I think this is a simpler patch for the problem. Not as clean as a rewrite, but easier to get into the current codebase. Let me know what you think.
Jeremy.
The patch is not in git am format. I will upload a correct patch next.
Volker
Created attachment 5612 [details]
git am format patch
Patch in git am format against master
xfile.c is gone :-)
https://bugzilla.samba.org/show_bug.cgi?id=5335
What is dispatch?
"dispatch" in the context of object-oriented languages means that calling one of subroutines that have the same names by checking the parameters dynamically. There are multiple and single for dispatch. I'll write about dispatch and its support in Java and other modern JVM languages in this post.
Java
Java is "single dispatch". I'll describe it with an example of handling touch events. Let me assume that there are classes that express touch events and they have parent class named TouchEvent, also, there is TouchEventHandler that handles touch events (the y might have a virtual upper layer in an actual API).
class TouchEvent {}
class TouchStartEvent extends TouchEvent {}
class TouchMoveEvent extends TouchEvent {}
class TouchEndEvent extends TouchEvent {}
class TouchCancelEvent extends TouchEvent {}

class TouchEventHandler {
    void handle(TouchEvent e) {}
}
Assume you want to create a handler that changes behaviour depending on whether the parameter is a TouchEvent or a TouchStartEvent. If you expect subtype polymorphism, I guess you will write code like this:
class MyTouchEventHandler extends TouchEventHandler {
    public void handle(TouchEvent e) {
        System.out.println("... ?:(");
    }
    public void handle(TouchStartEvent e) {
        System.out.println("touch started :)");
    }
}
But the code doesn't work well.
TouchEventHandler h = new MyTouchEventHandler();
h.handle(new TouchStartEvent()); // prints "... ?:("
Java refers to MyTouchEventHandler, but it calls handle(TouchEvent), not handle(TouchStartEvent). In more technical terms, Java resolves the type of the receiver dynamically but resolves the other parameters statically when sending the message. This is called "single dispatch". I'll modify the code to highlight the gap between what the code expects and the compiler's opinion.
class MyTouchEventHandler extends TouchEventHandler {
    @Override public void handle(TouchEvent e) {
        System.out.println("... :(");
    }
    @Override public void handle(TouchStartEvent e) {
        System.out.println("It's my turn! :)");
    }
}
Then the compiler will say:
src/Overloading.java:19: error: method does not override or implement a method from a supertype
    @Override public void handle(TouchStartEvent e) {
    ^
There is a way to simulate "multiple dispatch" in Java code (without tricks in the compilation phase), and I'll describe it in the following section about Xtend.
Xtend
Xtend is a relatively new JVM language that appeared last year ("relatively" because yet another new JVM language has appeared this year too). Xtend is "single dispatch" like Java, but it also supports "multiple dispatch" through the "dispatch" keyword.
class MyTouchEventHandler extends TouchEventHandler {
    def dispatch void handle(TouchEvent e) {
        println("... :(")
    }
    def dispatch void handle(TouchStartEvent e) {
        println("touch started :)")
    }
}

val TouchEventHandler h = new MyTouchEventHandler()
h.handle(new TouchStartEvent()) // prints "touch started :)"
It's easy to grasp what Xtend does because Xtend generates Java files instead of class files. Xtend emits a proxy method that delegates to the methods marked with the "dispatch" keyword. This is the way to simulate "multiple dispatch" in Java that I mentioned before.
public class MyTouchEventHandler extends TouchEventHandler {
    protected void _handle(final TouchEvent e) {
        InputOutput.<String>println("... :(");
    }

    protected void _handle(final TouchStartEvent e) {
        InputOutput.<String>println("touch started :)");
    }

    public void handle(final TouchEvent e) {
        if (e instanceof TouchStartEvent) {
            _handle((TouchStartEvent)e);
            return;
        } else if (e != null) {
            _handle(e);
            return;
        } else {
            throw new IllegalArgumentException("Unhandled parameter types: " +
                Arrays.<Object>asList(e).toString());
        }
    }
}
I think that using a single underscore prefix isn't nice because programmers need to worry about naming collisions, but let's leave that aside for now. My interest is why Xtend is "single dispatch" by default. I asked Sven Efftinge, one of the authors of Xtend, and he answered "because changing method resolution would impose many interop issues with existing Java libs". It seems unfortunate to spoil the writability of a new language to accommodate old Java code that depends on "single dispatch", but that might be because I have never had an awful experience with it. Anyway, Xtend supports "multiple dispatch".
Groovy
Groovy is "multiple dispatch". The example for Java works perfectly in Groovy as expected. It is so intuitive.
TouchEventHandler h = new MyTouchEventHandler();
h.handle(new TouchStartEvent()); // prints "touch started :)"
The types of the parameters are detected dynamically and handle(TouchStartEvent) is called. Let me explain in more detail. Groovy replaces the method call with a CallSite object that holds the caller class, the index of the call, and the method name. Expressed in code, it looks like this:
TouchEventHandler h = new MyTouchEventHandler();
CallSite handleN = // create a call site for the "n:handle"
handleN.call(h, new TouchStartEvent());
CallSite searches for the target method gradually, by listing the metadata of the "handle" methods of MyTouchEventHandler and checking the number and types of the parameters. Of course this dynamic resolution comes at a cost, but CallSite caches the method and calls it immediately the second time. Recent Groovy has @CompileStatic, an annotation for turning off dynamic capabilities. Code where the annotation is used is compiled to bytecode that doesn't use CallSites and calls methods statically. In that case "multiple dispatch" doesn't work.
Others
I checked a bit on "multiple dispatch" support in other JVM languages. It seems that Clojure has the "defmulti" macro, which corresponds to Xtend's "dispatch" keyword. Scala doesn't support it, but Scala folks might say it's unnecessary because of powerful pattern matching. JRuby and Jython don't support it because Ruby and Python don't. I'm also wondering about Kotlin, but I'll end this post here because I'm full :-P
http://nagaimasato.blogspot.jp/2012/12/multiple-dispatch-in-modern-jvm.html
Vadim
Thanks for the sample code - both of these look quite a bit more
complicated than the samples in the XSP Logicsheet, but I will work
through them to try and understand both the grammar and logic.
Please excuse my ignorance, but I am still unclear as to how these are
actually *used*. Is there also a simple.xml file that has has tags that
'call' these sheets? Can I use the original example? Also, what should
the sitemap pipeline be that will enable these files to actually be
processed by Cocoon?
Thanks in advance for help
Derek
D Hohls
CSIR Environmentek
PO Box 17001
Kwa-Zulu Natal
South Africa
4013
>>> vadim.gritsenko@verizon.net 02/03/02 00:03 AM >>>
Try the attached example. Put it under the docs/samples/xsp directory in the Cocoon
sample webapp. Let me know if it was helpful for you.
PS: Note that logicsheet was declared right after <xsp:page> element, no
spaces or tags:
<xsp:page language="java"
xmlns:xsp=""
xmlns:xsp-request=""
xmlns:<xsp:logicsheet
Vadim
> -----Original Message-----
> From: Derek Hohls [mailto:dhohls@csir.co.za]
> Sent: Saturday, February 02, 2002 3:43 PM
> To: cocoon-users@xml.apache.org
> Cc: Derek Hohls
> Subject: C2 Newbie: XSP Logicsheet in sitemap ?
>
> As an ex-Cocoon1 user, I am trying to move all my applications across
to
> C2. I can see that there are a lot of conceptual chnages that I need
to
> understand to make full use of C2's functionality.
>
> Right now I am trying to see how to use XSP/logic sheets. I have
tried
> to implement the examples shown in the XSP Logicsheet Guide, in the C2
> documentation, but have got stuck.
>
> The first point I noticed was that the namespace for XSP was
incorrect;
> its shown as and should actually be
> (maybe someone can update this?)
>
> The second point that I cannot get correct is how to implement the
> logicsheet in the sitemap. While this is straightforward for a
one-step
> case (as in greeting2.xml), it is not clear for the for the two-step
> case (greeting3.xml).
>
> What I have tried is this (and various combinations):
>
> <map:match
> <map:generate
> <map:transform
> <map:transform
> <map:serialize />
> </map:match>
>
> Does anyone know what it should look like ?? (in order to produce the
> 'Hello World' output one gets from the other two examples)
>
> As a final note, maybe there is someone who can also update the
section
> on "Using Logicsheets (Taglibs)" as the discussion revolves around the
> approach used in Cocoon 1 and is now no longer appropriate.
>
> Thanks
> Derek
>
> ---------------------------------------------------------------------
>>
http://mail-archives.us.apache.org/mod_mbox/cocoon-users/200202.mbox/%3Csc5cfde0.042@CS-IMO.CSIR.CO.ZA%3E
Opened 20 months ago
Closed 18 months ago
#20895 closed enhancement (fixed)
Computing ordinary models of plane curves
Description
Given a plane curve, it is possible to transform it into a plane curve with only ordinary singularities via application of a finite sequence of quadratic transformation maps.
Implement a function at the curve class level to apply the standard quadratic transformation (the birational automorphism of P2 sending (x : y : z) to (yz : xz : xy)), and a function to transform a given plane curve into one with only ordinary singularities.
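To make the map concrete: on a monomial x^a y^b z^c the substitution x -> yz, y -> xz, z -> xy sends the exponent vector (a, b, c) to (b+c, a+c, a+b); after substituting, the common monomial factor of all terms is stripped to get the proper transform. A plain-Python sketch of the mechanics (my own illustration, not the Sage implementation on this ticket's branch), with polynomials stored as {(a, b, c): coefficient} dicts:

```python
def quadratic_transform(poly):
    """Apply the standard Cremona transformation (x:y:z) -> (yz:xz:xy)
    to a homogeneous polynomial given as {(a, b, c): coefficient},
    then strip the common monomial factor (the proper transform)."""
    # substitute: x^a y^b z^c -> x^(b+c) y^(a+c) z^(a+b)
    image = {(b + c, a + c, a + b): k for (a, b, c), k in poly.items()}
    # divide out the gcd monomial x^m0 y^m1 z^m2
    mins = [min(e[i] for e in image) for i in range(3)]
    return {tuple(e[i] - mins[i] for i in range(3)): k
            for e, k in image.items()}

# cuspidal cubic z*y^2 - x^3, keys are (exp_x, exp_y, exp_z)
cusp = {(0, 2, 1): 1, (3, 0, 0): -1}
print(quadratic_transform(cusp))  # → {(3, 0, 0): 1, (0, 2, 1): -1}
```

Note that this symmetric example maps back to the same curve (up to sign), which is exactly why the curve must first be moved into excellent position for the transformation to improve its singularities.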
Change History (18)
comment:1 Changed 20 months ago by
- Branch set to u/gjorgenson/ticket/20895
comment:2 Changed 20 months ago by
- Commit set to 54a991e606b87e665bef1ea53c229f5a5df265f8
comment:3 Changed 20 months ago by
Okay, here's my first attempt at the implementation. I improved the affine tangents function to work with QQbar, and added some helper functions to apply the standard Cremona transformation, find the equation of the line between two projective plane points, and to move a curve into excellent position as defined in Fulton's alg. curves book. I haven't yet been able to find a good way to mitigate the dependency on QQbar, so right now I just restrict to QQbar curves entirely. Moving curves to excellent position seems to take > 2 seconds even for basic curves and quickly becomes impractical for more complicated curves. The new functionality does seem to be working correctly on the basic examples I've tested though.
Do you think it's possible to reduce the dependency on QQbar computations?
comment:4 Changed 20 months ago by
Take a look at the .as_number_field_element() method of algebraic numbers, and the number_field_elements_from_algebraics function. They allow you to get a number field to embed your numbers. So you can proceed this way: use QQbar just to compute the roots you need. Once that is done, find a number field where everything you need fits (that is, an extension of your current field that contains the needed elements), and extend your base ring. From there, you can keep working in a concrete number field, which should be much faster than QQbar.
comment:5 Changed 20 months ago by
- Commit changed from 54a991e606b87e665bef1ea53c229f5a5df265f8 to 1c91bf1b7fa839e488e23306acf6dbf78720b77f
Branch pushed to git repo; I updated commit sha1. New commits:
comment:6 Changed 20 months ago by
- Commit changed from 1c91bf1b7fa839e488e23306acf6dbf78720b77f to 132dfb58a02856fb2650d17330866def4ddbb175
Branch pushed to git repo; I updated commit sha1. New commits:
comment:7 Changed 20 months ago by
- Commit changed from 132dfb58a02856fb2650d17330866def4ddbb175 to dd402019c59ce2889d22933d3167079f0faf5997
Branch pushed to git repo; I updated commit sha1. New commits:
comment:8 Changed 20 months ago by
Alright, I experimented with embedding into numberfields, but after doing some timing analysis, I found that the biggest use of time was finding intersection points of a given curve with the lines used to create a change of coordinates map to move the curve into excellent position. I tried revising the excellent_position and ordinary_model functions in the first of the last three commits to reduce the costs of these computations, but they still used a lot of time even for simple examples.
In the last two commits I tried a different approach and gave the excellent_position function an option to accept a list/tuple of three points to use to create the transformation (without checks), and modified the ordinary model function so that it now creates lists of vertices incrementally without explicitly checking that they put the curve into excellent position and passes them to excellent_position. To verify that the nonordinary singularities do become resolved, I'm using that after every application of excellent_position + quadratic_transformation, if the given curve was actually put into excellent position, the resulting curve should either have a smaller apparent genus (arithmetic genus - sum m*(m-1)/2 as m runs over the curve's singular point multiplicities), or should have fewer nonordinary singularities. This gives an upper bound for the number of applications of excellent_position + quadratic_transformation needed to resolve the nonordinary singularities, and so if the nonordinary singularities are not resolved after this number of transformations, a new set of vertices is used.
So far this seems to work pretty quickly for curves of degree < 5, but is somewhat hit-or-miss for higher degree curves. The transformed curves can also have high degrees when multiple transformations are needed to resolve all of the nonordinary singularities, and sometimes don't seem very practically useful. I think the code is a bit too much of a mess right now for this to be ready for review, but does the method of implementation seem okay so far? Do you think there's a way I can make the transformations nicer?
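The "apparent genus" bound used above is easy to state in code (a hypothetical helper of my own for illustration, not part of the ticket's branch):

```python
def apparent_genus(d, multiplicities):
    """Arithmetic genus (d-1)(d-2)/2 of a degree-d plane curve, minus the
    contribution m(m-1)/2 of each singular point multiplicity m. Each
    application of excellent_position + quadratic_transformation should
    decrease this quantity or the number of non-ordinary singularities."""
    return ((d - 1) * (d - 2)) // 2 - sum(m * (m - 1) // 2
                                          for m in multiplicities)

# a nodal cubic: arithmetic genus 1, one double point
print(apparent_genus(3, [2]))  # → 0
```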
comment:9 Changed 19 months ago by
- Commit changed from dd402019c59ce2889d22933d3167079f0faf5997 to 837ddb8b01cb421e7568545f8f2181c29a47a389
Branch pushed to git repo; I updated commit sha1. New commits:
comment:10 Changed 19 months ago by
- Status changed from new to needs_review
I implemented a new approach for excellent_position that no longer requires computing intersection points and works for number fields. I think the computations are faster now and I revised ordinary_model to leave the checks to excellent_position. ordinary_model works for number fields as well by returning a curve defined over an extension if any of the coordinates of the non-ordinary singularities are not contained in the original base field.
I also cleaned up the code and removed unnecessary changes that were implemented in previous commits, such as the line function for projective space.
comment:11 Changed 18 months ago by
- Commit changed from 837ddb8b01cb421e7568545f8f2181c29a47a389 to 71bd9a091149b640499c1d168845035b39fd0dcc
Branch pushed to git repo; I updated commit sha1. New commits:
comment:12 Changed 18 months ago by
- Milestone changed from sage-7.3 to sage-7.4
I made some improvements to the functionality here and I think this should be ready for review.
Changes made in this ticket up to this point have been to add an ordinary_model function along with two helper functions, quadratic_transform and excellent_position, to the projective plane curve class. The is_ordinary_singularity and tangents functions for affine curves were also modified to reduce the need for QQbar computations.
Overall the ordinary_model function appears to be working properly, but still becomes very slow for most curves of degree > 5. I think this slowdown is mainly due to the rate at which the degrees of the quadratic transforms of the curves can grow.
comment:13 Changed 18 months ago by
- Reviewers set to Ben Hutz
- Status changed from needs_review to needs_work
I did not find any functionality issues, but here are some comments
- in affine_curve.py is there a reason you have
from sage.arith.misc import binomial
in the function and not with the rest of the imports
- in affine tangents()
t = T.degree(vars[0])
for monom in T.monomials():
    if monom.degree(vars[0]) < t:
        t = monom.degree(vars[0])
seems to just be doing
min([e1 for e1,e2 in T.exponents()])
is that right? If so, you can simplify both instances.
- in is_ord : projective
C = self.affine_patch(i)
Q = list(P)
t = Q.pop(i)
Q = [1/t*Q[j] for j in range(self.ambient_space().dimension_relative())]
You can just say
C(Q.dehomogenize(i))
to get the affine point
- in excellent position: projective
d = self.defining_polynomial().degree()
is there no C.degree() function for plane curves??
if all([g.degree(PP.gens()[0]) > 0 for g in T.monomials()]):
seems simpler as
if all([e[0] > 0 for e in T.exponents()]):
also this function fails over QQbar since .divides() does not work. You should either fix that or specify in the documentation that you need a number field
- in def ordinary_model(self):
Return an ordinary plane curve model of this curve.
Actually you are returning the morphism from the curve to an ordinary model. You should either match the description of excellent position or return the curve and add a return_mapping parameter. Either way, I think both functions should do the same thing.
I'd also like an example to show that all singularities are now ordinary.
[C.is_ordinary_singularity(Q) for Q in C.singular_points()]
in extension()
not sure why you are doing this:
# make sure the defining polynomial variable names are the same for K, N
N = NumberField(K.defining_polynomial().parent()(F.defining_polynomial()), str(K.gen()))
because
sage: K.<v> = QuadraticField(2)
sage: N = NumberField(K.defining_polynomial().parent()(x^3+2), str(K.gen()))
sage: N.gen() == K.gen()
False
comment:14 Changed 18 months ago by
- Commit changed from 71bd9a091149b640499c1d168845035b39fd0dcc to 6ef31f5e81344759cc6a27941cd30ace24f6de9f
Branch pushed to git repo; I updated commit sha1. Last 10 new commits:
comment:15 Changed 18 months ago by
- Status changed from needs_work to needs_review
Thanks for reviewing this. I was a bit busy with preparing for the semester and wasn't able to finish working on the next update until now.
I made the suggested changes and also merged this with ticket 20790 (has been closed) to fix a conflict with the imports. The projective subscheme multiplicity and intersection_multiplicity functions now use the point dehomogenize function as well for cleaner code. I also added a degree function override for projective plane curves which just returns the degree of the defining polynomial of the curve to avoid using the slower Hilbert polynomial computation.
In
N = NumberField(K.defining_polynomial().parent()(F.defining_polynomial()), str(K.gen()))
I don't think str(K.gen()) is needed and can be replaced with an arbitrary variable name, but the reason I am using K.defining_polynomial().parent()(F.defining_polynomial()) is to make the defining polynomials of N and K have the same variable name. If the names are different the composite_fields function raises an error:
R.<x> = QQ[]
S.<y> = QQ[]
N.<a> = NumberField(x^2 + 1)
M.<b> = NumberField(y^2 - 7)
N.composite_fields(M)
Traceback (click to the left of this block for traceback)
...
sage.libs.pari.handle_error.PariError: inconsistent variables in polcompositum, x != y
comment:16 Changed 18 months ago by
- Status changed from needs_review to positive_review
The updates look fine to me and all tests still pass.
comment:17 Changed 18 months ago by
OUTPUT ?
comment:18 Changed 18 months ago by
- Branch changed from u/gjorgenson/ticket/20895 to 6ef31f5e81344759cc6a27941cd30ace24f6de9f
- Resolution set to fixed
- Status changed from positive_review to closed
Branch pushed to git repo; I updated commit sha1. New commits:
https://trac.sagemath.org/ticket/20895
#include "test.h"

void tst_sig(fork_flag, handler, cleanup)
char *fork_flag;
int (*handler)();
void (*cleanup)();
Tst_sig is used by UNICOS test case programs to set up signal handling functions for unexpected signals. This provides test cases with a graceful means of exiting following an unexpected interruption by a signal. Tst_sig should be called only once by a test program.
The fork_flag parameter is used to tell tst_sig whether or not to ignore the SIGCLD signal caused by the death of a child process that had previously been created by the fork(2) system call (see signal(2) for more information on the SIGCLD signal).
Setting fork_flag to FORK will cause tst_sig to ignore the SIGCLD signal. This option should be set if the test program directly (eg. call fork(2)) or indirectly (eg. call system(3S)) creates a child process.
Setting fork_flag to NOFORK will cause tst_sig to treat the SIGCLD signal just as any other unexpected signal (ie. the handler will be called). This option should be set by any test program which does not directly or indirectly create any child processes.
The handler parameter is a pointer to a function returning type int which is executed upon the receipt of an unexpected signal. The test program may pass a pointer to a signal handling function or it may elect to use a default handler supplied by tst_sig.
The default handler is specified by passing DEF_HANDLER as the handler argument. Upon receipt of an unexpected signal, the default handler will generate tst_res(3) messages for all test results that had not been completed at the time of the signal, execute the cleanup routine, if provided, and call tst_exit. Note: if the default handler is used, the variables TCID and Tst_count must be defined and available to tst_sig (see tst_res(3)).
The cleanup parameter is a pointer to a user-defined function returning type void which is executed by the default handler. The cleanup function should remove any files, directories, processes, etc. created by the test program. If no cleanup is required, this parameter should be set to NULL.
#include "test.h"

/*
 * the TCID and TST_TOTAL variables must be available to tst_sig
 * if the default handler is used. The default handler will call
 * tst_res(3) and will need this information.
 */
char *TCID = "tsttcs01";  /* set test case identifier */
int TST_TOTAL = 5;        /* set total number of test results */

void tst_sig();

/*
 * set up for unexpected signals:
 *   no fork() system calls will be executed during the test run
 *   use the default signal handler provided by tst_sig
 *   no cleanup is necessary
 */
tst_sig(NOFORK, DEF_HANDLER, NULL);

void tst_sig(), cleanup();
int handler();

/*
 * set up for unexpected signals:
 *   fork() system calls will be executed during the test run
 *   use user-defined signal handler
 *   use cleanup
 */
tst_sig(FORK, handler, cleanup);
Tst_sig will output warnings in standard tst_res format if it cannot set up the signal handlers.
http://www.makelinux.net/man/3/T/tst_sig
Since the first iteration of my blog—some time around 2016—I've used highlight.js to highlight code blocks. With highlight.js being so popular, I never really second guessed the idea. It was a given to use JavaScript.
A few weeks ago, TJ Miller introduced me to highlight.php by writing a Parsedown extension for it. Highlight.php does the exact same thing as highlight.js: they both add tags and classes to your code blocks, which enables them to be highlighted with CSS. The difference is, highlight.php does this on the server.
There are two benefits of highlighting code blocks on the server:
Whenever I need to convert markdown to HTML in PHP, I pull in league/commonmark. It parses markdown to an AST before rendering it to HTML which makes it easy to extend, and I did exactly that.
I created a spatie/commonmark-highlighter package that supports highlighting with CommonMark. After you register two custom renderers, all code blocks will receive a set of tags and classes, so they're already prepped to be highlighted by CSS when your content arrives in the browser.
<title>I'm highlighted!</title>
View the above snippet's source in your browser, and you'll see the highlighting has already been done!
One of the best things about Vue templates is the special v-model prop. v-model allows you to quickly map prop getters and setters without breaking unidirectional data flow. Props down, events up.
<!-- People.vue -->
<template>
  <filter v-model="filter"></filter>
  <ul>
    <li v-for="person in filteredPeople">
      {{ person.name }}
    </li>
  </ul>
</template>

<script>
import Filter from "./Filter";

export default {
  data() {
    return {
      filter: "",
      people: [ /* ... */ ]
    };
  },

  components: { Filter },

  computed: {
    filteredPeople() {
      // Return filtered `people` array result
    }
  }
};
</script>
For now, we assume the filter prop is a plain string. The v-model property expands to a :value prop and an @input event listener.
<filter v-model="filter"></filter>

<!-- Expands to... -->

<filter :value="filter" @input="filter = $event"></filter>
Inside the Filter component, we registered an @input listener on an input element, which emits the value to the parent component.
<!-- Filter.vue -->
<template>
  <input
    type="search"
    :value="value"
    @input="$emit('input', $event.target.value)"
  >
</template>

<script>
export default {
  props: {
    value: {
      required: true,
      type: String
    }
  }
};
</script>
This works great for simple inputs, but what if we want to expand our filter to an object?
<!-- People.vue -->
<template>
  <filter v-model="filter"></filter>
  <!-- ... -->
</template>

<script>
export default {
  data() {
    return {
      filter: {
        query: "",
        job: ""
      }
    };
  }
  // ...
};
</script>
We're not allowed to modify props in Vue, so we need to emit a modified copy of the filter prop from our Filter component.
<!-- Filter.vue -->
<template>
  <div>
    <input
      type="search"
      :value="value.query"
      @input="$emit('input', { ...value, query: $event.target.value })"
    >
    <select
      :value="value.job"
      @change="$emit('input', { ...value, job: $event.target.value })"
    >
      <option value="">All</option>
      <option value="developer">Developer</option>
      <option value="designer">Designer</option>
      <option value="manager">Account manager</option>
    </select>
  </div>
</template>

<script>
export default {
  props: {
    value: {
      required: true,
      type: Object
    }
  }
};
</script>
This works, but Filter became a black box: it's not immediately clear what value contains. We need to dive into the component's implementation details to discover it expects query and job keys.
We can solve this by passing down each filter key individually, and emitting multiple, key-specific events.
<!-- People.vue -->
<template>
  <filter
    :query="filter.query"
    :job="filter.job"
    @query-change="filter.query = $event"
    @job-change="filter.job = $event"
  ></filter>
  <!-- ... -->
</template>
<!-- Filter.vue -->
<template>
  <div>
    <input
      type="search"
      :value="query"
      @input="$emit('query-change', $event.target.value)"
    >
    <select
      :value="job"
      @change="$emit('job-change', $event.target.value)"
    >
      <option value="">All</option>
      <option value="developer">Developer</option>
      <option value="designer">Designer</option>
      <option value="manager">Account manager</option>
    </select>
  </div>
</template>

<script>
export default {
  props: {
    query: {
      required: true,
      type: String
    },
    job: {
      required: true,
      type: String
    }
  }
};
</script>
We've greatly improved Filter's public API. Someone using the component now knows it expects two distinct props: query and job.
Unfortunately, by making Filter more explicit, we've punished the consumer. The parent now needs to pass a prop and register an event listener for each filter key.
Compared to how concise v-model was, this feels like a regression. We had to invent our own API.
Vue provides a more generic alternative to v-model: the .sync modifier.
<!-- People.vue -->
<template>
  <filter :query.sync="filter.query" :job.sync="filter.job"></filter>
  <!-- ... -->
</template>
.sync works just like v-model, except it listens for an update:[key] event. The above expands to our two props, and two listeners.
<!-- People.vue -->
<template>
  <filter
    :query="filter.query"
    :job="filter.job"
    @update:query="filter.query = $event"
    @update:job="filter.job = $event"
  ></filter>
</template>
If we rename the emitted events in Filter, we get the best of both worlds: seemingly two-way data binding while maintaining an explicit component API.
<!-- Filter.vue -->
<template>
  <div>
    <input
      type="search"
      :value="query"
      @input="$emit('update:query', $event.target.value)"
    >
    <select
      :value="job"
      @change="$emit('update:job', $event.target.value)"
    >
      <option value="">All</option>
      <option value="developer">Developer</option>
      <option value="designer">Designer</option>
      <option value="manager">Account manager</option>
    </select>
  </div>
</template>

<script>
export default {
  props: {
    query: {
      required: true,
      type: String
    },
    job: {
      required: true,
      type: String
    }
  }
};
</script>
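Stripped of Vue entirely, the contract behind .sync boils down to a naming convention. The sketch below is an assumption for illustration (it is not Vue's internal implementation): the child emits `update:<prop>` events, and the parent writes the event payload back onto the matching key of its state.

```javascript
// Minimal, framework-free sketch of the `update:<prop>` convention.
// `applyUpdateEvent` is a made-up helper name for this example.
function applyUpdateEvent(state, eventName, payload) {
  const match = eventName.match(/^update:(.+)$/);
  if (match) {
    // "update:query" -> write payload to state.query
    state[match[1]] = payload;
  }
  return state;
}

// A parent holding the filter object reacts to the child's events:
const filter = { query: "", job: "" };
applyUpdateEvent(filter, "update:query", "vue");
applyUpdateEvent(filter, "update:job", "developer");
// filter is now { query: "vue", job: "developer" }
```

Events that don't follow the `update:` convention are simply ignored, which is why the modifier stays opt-in per prop.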
This is the first post in a series about removing distractions from an interface to provide a better user experience.
I have a specific pet peeve with user interfaces: things that draw my attention when they don't need to. In any graphical interface, movement is distraction. Our eyes are naturally drawn to anything in motion.
Motion is a powerful tool. We can abuse this distraction to attract our users to a certain place: a notification, an added list item after a background refresh, etc. Let's look into the movement behind a form submission. Below are three dummy forms, each with a different server response time.
See the Pen pZpWQw by Sebastian De Deyne (@sebdd) on CodePen.
In the first two examples, the submit button changed twice in the blink of an eye. We might even think we missed the first state change because it happened so fast. We saw something happen though, and we're probably able to place it into perspective:
“That happened so fast, but everything went well. I suppose what I missed wasn't that important after all.”
It's a near-subconscious train of thought, but the fact that we needed to make that deduction means the interface momentarily distracted us. Let's fix this.
We have a submit button. It's enabled by default. While the form is being submitted, it's temporarily disabled. When the server responds, it's re-enabled. While the form is being submitted, it's in a transient state between unsaved and saved.
The problem is: our app is too fast! That doesn't sound like a problem, but it is in the context of an interface. In the transient state, the interface is trying to communicate something: "I'm busy." However, the application is so fast that the user doesn't care about it being busy. It completed the task so swiftly that the user didn't even get the chance to react to the transient state.
If an interface doesn't need to talk to users, it shouldn't.
Unfortunately, sometimes users are on a slow network connection. In that case, we have to notify them that something is indeed happening. A transient "I'm busy" state makes sense here, or the user thinks something's wrong.
To summarize: we don't want a visible disabled state when the network conditions are in our favor, but we do need one when things are running slow.
Solution: delay the visible transient state.
When the network is fast, the transition is seamless. When the network is slow, the interface will tell us it's working on it.
See the Pen Delayed transient states 2 by Sebastian De Deyne (@sebdd) on CodePen.
Let's build this. We'll be using Vue to build an AjaxForm component, but these concepts can be applied in any other environment. Let's start with a basic AjaxForm implementation.
<template>
  <form @submit.prevent="submit">
    <button type="submit" :disabled="status === 'submitting'">
      {{ buttonText }}
    </button>
  </form>
</template>

<script>
export default {
  data: () => ({
    status: "idle" // or 'submitting' or 'submitted'
  }),

  computed: {
    buttonText() {
      if (this.status === "submitting") {
        return "Busy...";
      }

      if (this.status === "submitted") {
        return "Thanks!";
      }

      return "Submit";
    }
  },

  methods: {
    submit() {
      this.status = "submitting";

      doSubmit()
        .then(() => {
          this.status = "submitted";
        })
        .catch(() => {
          this.status = "idle";
        });
    }
  }
};
</script>

<style scoped>
button[disabled] {
  opacity: 0.5;
}
</style>
Our form has a submit button, with a dynamic text depicting the form status. When the button is clicked, the form gets submitted, and the button will be disabled until the form is idle again.
This implementation will give our users motion sickness: if the form takes 100ms to submit, the button will go from "Submit" to "Busy.." to "Thanks!" in that very short time span, probably also changing visual styles like opacity along the way.
To fix this, we can modify our script to wait a certain amount of time, let's say 400ms, until we disable the button. That way, the "Busy..." state change will never be visible to the user unless the submission takes longer than 400ms.
<script>
export default {
  // ...

  methods: {
    submit() {
      const busyTimeout = window.setTimeout(() => {
        this.status = "submitting";
      }, 400);

      doSubmit()
        .then(() => {
          window.clearTimeout(busyTimeout);
          this.status = "submitted";
        })
        .catch(() => {
          window.clearTimeout(busyTimeout);
          this.status = "idle";
        });
    }
  }
};
</script>
We'll now show "Busy..." after 400ms, only if the form submission hasn't completed (successfully or not) in that time.
Unfortunately, it looks like we just introduced a bug. If a user clicks the button again within those 400ms, the form will be submitted multiple times. We didn't immediately disable the button like in the first example. We're using the "status" property for two concerns: the form status and a network health check of sorts. Let's split it up into two concepts and squash our bug.
<template>
  <form @submit.prevent="submit">
    <button
      type="submit"
      :class="{ 'is-disabled': isSlowRequest }"
      :disabled="status === 'submitting'"
    >
      {{ buttonText }}
    </button>
  </form>
</template>

<script>
export default {
  data: () => ({
    status: "idle", // or 'submitting' or 'submitted'
    isSlowRequest: false
  }),

  computed: {
    buttonText() {
      if (this.isSlowRequest) {
        return "Busy...";
      }

      if (this.status === "submitted") {
        return "Thanks!";
      }

      return "Submit";
    }
  },

  methods: {
    submit() {
      this.status = "submitting";

      const slowRequestTimeout = window.setTimeout(() => {
        this.isSlowRequest = true;
      }, 400);

      doSubmit()
        .then(() => {
          window.clearTimeout(slowRequestTimeout);
          this.isSlowRequest = false;
          this.status = "submitted";
        })
        .catch(() => {
          window.clearTimeout(slowRequestTimeout);
          this.isSlowRequest = false;
          this.status = "idle";
        });
    }
  }
};
</script>

<style scoped>
button.is-disabled {
  opacity: 0.5;
}
</style>
Above, we introduced an isSlowRequest property to take care of how we want to visually indicate the busy state. The status property is now immediately updated, as in the first example, so the button gets properly disabled.
Note that we're also using an is-disabled class, so there's no immediate visual change on the button when it gets disabled in the DOM.
I'm using similar setTimeout techniques in a bunch of projects, and it's a great trick to remove unnecessary distraction.
In my next post about distraction-less interfaces, we'll tackle the same problem with the opposite solution: by always showing the transient state.
In my most recent project at work, I'm experimenting with JSX templates in Vue. Vue offers first-party support for JSX with near-zero configuration, but it doesn't seem to be commonly used in the ecosystem.
Here's the tl;dr. Every one of these is discussed in detail below.
PRO
CON
I'm going to share my initial thoughts on using JSX with Vue. I'll be posting side-by-side examples of Vue templates and their JSX counterparts.
To get the ball rolling, here's a straightforward example of what JSX looks like in a simple Vue component:
<template> <h1>{{ message }}</h1> </template> <script> export default { data: () => ({ message: "Hello, JSX!" }) }; </script>
export default { data: () => ({ message: "Hello, JSX!" }), render() { return <h1>{this.message}</h1>; } };
Vue templates are limited to what's registered in the component's options. With JSX, you can do anything inside the render function.

No need to assign functions to methods, which means a little less boilerplate.
<template>
  <span>{{ formatPrice(price) }}</span>
</template>

<script>
import { formatPrice } from "./util";

export default {
  props: ["price"],
  methods: { formatPrice }
};
</script>
import { formatPrice } from "./util"; export default { props: ["price"], render() { return <span>{formatPrice(this.price)}</span>; } };
Another small quality of life change that reduces boilerplate. You can directly use your components in the render function instead of aliasing them to a string in the components option.
<template>
  <span class="price-tag">
    <formatted-price :price="price"></formatted-price>
  </span>
</template>

<script>
import FormattedPrice from "./FormattedPrice";

export default {
  data: () => ({ price: 100 }),
  components: { FormattedPrice }
};
</script>
import FormattedPrice from "./FormattedPrice"; export default { data: () => ({ price: 100 }), render() { return ( <span class="price-tag"> <FormattedPrice price={this.price} /> </span> ); } };
In Vue templates, we can use v-bind to pass an object as component props. An example from the Vue docs:
<blog-post v-bind="post"></blog-post>

<!-- Will be equivalent to: -->

<blog-post v-bind:id="post.id" v-bind:title="post.title"></blog-post>
This is very similar to JavaScript's spread syntax, which is available to us in JSX.
An added benefit of the spread syntax is that it can be used multiple times per component. Since v-bind is an attribute, it's limited to a single declaration.
<template>
  <!-- This doesn't work! -->
  <blog-post v-bind="post" v-bind="metaData"></blog-post>
</template>

<script>
// ...
</script>
export default { // ... render() { return <BlogPost {...this.post} {...this.metaData} />; } };
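The JSX version leans on plain JavaScript object spread, so its merge rules can be checked in isolation. The data below is hypothetical; it only illustrates that multiple spreads merge left to right, with later keys overriding earlier ones.

```javascript
// Two prop sources, as in the BlogPost example above (made-up data).
const post = { id: 1, title: "Hello" };
const metaData = { title: "Hello, revised", author: "Seb" };

// What `<BlogPost {...this.post} {...this.metaData} />` hands the component:
const mergedProps = { ...post, ...metaData };
// mergedProps is { id: 1, title: "Hello, revised", author: "Seb" }
```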
Casing in Vue is rough. Templates want everything to be kebab-case, while everything in your script is probably camelCase.
From the Vue docs:
HTML attribute names are case-insensitive, so browsers will interpret any uppercase characters as lowercase. That means when you’re using in-DOM templates, camelCased prop names need to use their kebab-cased (hyphen-delimited) equivalents
You can use PascalCase for components and camelCase for props in your .vue files, but then they won't work in in-DOM templates. Oh, and that only applies to component names and props. Events need to be written exactly as-is, no behind-the-scenes case changes there.
<!-- App.vue -->
<template>
  <!-- How it should be: PascalCase components, kebab-cased attributes -->
  <PostList :posts="posts" link-color="blue" @postClick="handlePostClick"></PostList>

  <!-- This also works, but not in in-DOM templates -->
  <!-- Don't forget you can't change event casing! -->
  <PostList :posts="posts" linkColor="blue" @postClick="handlePostClick"></PostList>
</template>

<!-- PostList.vue -->
<script>
export default {
  // Meanwhile, props should be declared with camelCase
  props: ['linkColor'],
}
</script>
All of these issues disappear when using JSX. Since you're writing JavaScript, you simply use PascalCase and camelCase everywhere.
export default { // ... render() { return ( <PostList posts={this.posts} linkColor="blue" onPostClick={() => this.handlePostClick()} /> ); } };
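The translation Vue templates perform between attribute names and prop names can be sketched in two lines. These helpers are hypothetical (they are not Vue's actual implementation); they only show the kebab-case to camelCase mapping JSX lets you skip entirely.

```javascript
// "link-color" -> "linkColor": what Vue does when resolving template attributes.
const kebabToCamel = (name) =>
  name.replace(/-([a-z])/g, (_, letter) => letter.toUpperCase());

// "linkColor" -> "link-color": the spelling templates expect you to write.
const camelToKebab = (name) =>
  name.replace(/[A-Z]/g, (letter) => "-" + letter.toLowerCase());
```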
More added benefits, because we're breaking away from HTML. From the Vue docs:
Additionally, if you want a custom form component, you need to rename it, because it would clash with the existing form tag. Naming things is hard enough as it is!
<template> <my-form></my-form> </template>
import Form from "./Form"; export default { render() { return <Form />; } };
Sometimes you want to write a little component that's only going to be used in the context of another component. With .vue files, you'd need to create two files, even though the second one is trivial and shouldn't be reused anywhere else.
With JSX, you can structure things however you like. I generally stick to one component per file, but it can be useful to extract bits of that to make things more readable.
<template>
  <article>
    <post-title :title="post.title"></post-title>
    <section>{{ post.contents }}</section>
  </article>
</template>

<script>
import PostTitle from "./PostTitle";

export default {
  props: ["post"],
  components: { PostTitle }
};
</script>

<!-- PostTitle can now be imported anywhere, while we only created it for the Post component -->

<template>
  <h1>{{ title }}</h1>
</template>

<script>
export default {
  props: ["title"]
};
</script>
// Since the rest of the application doesn't need PostTitle, // we shouldn't expose it. const PostTitle = { props: ["title"], render() { return <h1>{this.title}</h1>; } }; export default { props: ["post"], render() { return ( <article> <PostTitle title={this.post.title} /> <section>{this.post.contents}</section> </article> ); } };
If someone else is coding your app's design, you're forcing them into scary-JavaScript-territory instead of happy-HTML-land.
It depends on your team and situation if this is a tradeoff worth making.
You might miss the control structures Vue templates offer. I personally don't mind fully embracing JavaScript, for example by using map instead of v-for.
Fewer custom directives mean less abstraction between the template and the code it compiles to.
<template>
  <ul>
    <li v-for="post in posts">{{ post.title }}</li>
  </ul>
</template>
export default { // ... render() { return ( <ul> {this.posts.map(post => ( <li key={post.id}>{post.title}</li> ))} </ul> ); } };
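The map-based approach works outside any framework too. The snippet below uses made-up data to show the control structure JSX borrows from plain JavaScript instead of v-for: an array transformed into markup.

```javascript
// Hypothetical post data.
const posts = [
  { id: 1, title: "First post" },
  { id: 2, title: "Second post" },
];

// Build the list markup with Array.prototype.map, no directive needed.
const listHtml = `<ul>${posts.map((post) => `<li>${post.title}</li>`).join("")}</ul>`;
// listHtml is "<ul><li>First post</li><li>Second post</li></ul>"
```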
If you're using style tags, or want scoped styles in your Vue components, you'll need to look for a different solution. I rarely use these myself, so I can't say what the alternative would look like right now.
While Vue offers a first-party solution for JSX, it doesn't seem to have much traction in the Vue community.
With JSX, you're doing something different. I generally prefer to stick with the herd when it comes to tooling, unless doing otherwise offers a substantial benefit.
I'm working on a project that has quite an amount of low-level components. They contain lots of scripting with small amounts of templating. JSX feels like a breath of fresh air in this scenario.
On the other hand, when building large views that consist of large chunks of html with some custom components and directives, Vue templates are a better fit.
Luckily, we don't need to pick one; we can use both! I'll be writing my low-level components with JSX, and the "views", which will be written by other developers, will use familiar Vue templates.
I suppose I'll see how this all goes; only one way to find out! If I encounter a bunch of tradeoffs in the coming months, expect a follow-up post about why I reverted back to .vue files.
Maintaining open source projects comes with a steady stream of issues. Reporting a good issue will result in a more engaged approach from project maintainers. Don't forget: there's a human behind every project.
An issue describes a single problem or feature. If an issue touches multiple topics, it should be split.
Use code blocks for code. Check indentation for pasted code. Check for syntax errors. When unsure, use a code linter or fixer before reporting. The easier it is for the maintainer to read code, the easier it is for them to understand it.
Stay polite in all circumstances. There's no accountability in free open source software. Respectful communication is paramount.
When reporting an issue, it's not uncommon for you to be in a difficult position. Don't hurl that frustration towards the project maintainer.
Without a person behind an issue, it's hard for the maintainer to feel empathy. Simply saying "Hello", or "I would appreciate some help on this matter" creates a human connection.
Code splitting and prefetching look daunting at first, but once you've grasped the basic concepts, they're quite a performance boost for little effort.
A working example of the code in this post is available on GitHub.
A single page app (commonly known as an SPA) is a client-side rendered app. It's an application that runs solely in your browser. If you're using a framework like React, Vue.js or AngularJS, the client renders your app from scratch.
A browser needs to go through a few steps before a SPA is booted and ready for use.
The user doesn't see anything meaningful until the browser has fully rendered the app, which takes a while! This creates a noticeable delay until the first meaningful paint and takes away from the user experience.
This is where server side rendering (commonly known as SSR) comes in. SSR prerenders the initial application state on your server. Here's what the browser's to-do list looks like with server side rendering:
Since the server provides a prerendered chunk of HTML, the user doesn't need to wait until everything's complete to see something meaningful. Note that the time to interactive is still at the end of the line, but the perceived performance got a huge boost.
Server side rendering's main benefit is an improved user experience. SSR is also a must-have if you're dealing with older web crawlers that can't execute JavaScript. The crawlers will be able to index a rendered page from the server instead of a nearly empty document.
It's important to remember that server side rendering is not trivial. Your application suddenly runs in both browser and server environments. If you rely on DOM access in your app, you need to ensure that those calls won't be fired on the server, because there's no DOM API available.
You've decided to server side render your client-side application. If you're reading this article, you're probably building the majority of your app with PHP. Your server rendered SPA needs to run in a Node.js environment, so you'll need to maintain a second application.
You'll need a bridge between the two apps for them to communicate and share data: you'll need an API. Building a stateless API is hard compared to a stateful application. You'll need to deal with new concepts like authentication via JWT or OAuth, CORS, and REST calls. These are all non-trivial to add to an existing application.
Benefits don't exist without tradeoffs. We've established that SSR enhances your app's user experience, but SSR doesn't come without costs.
There's an extra step on the server. It'll have an increased load and pages will have slightly increased response times. The latter won't affect the user because the first meaningful paint becomes immediate.
You'll probably render your SPA in a Node.js application. If you're not writing your backend in JavaScript, you're introducing infrastructure complexity.
Let's simplify our infrastructure needs. Let's find a way to server side render a client side app in the PHP environment we already have.
We need to gather three key ingredients to render a SPA on the server:
For simplicity's sake, we're gonna build a classic "Hello, world!" example.
Here's what our app looks like without SSR in mind:
// app.js import Vue from "vue"; new Vue({ template: ` <div>Hello, world!</div> `, el: "#app" });
This instantiates a Vue component with a template and renders the app in a container (an empty div with an app id).
If we'd run this script on the server, it would throw an error. We don't have any DOM access, so Vue would try to render the app in an element that can't ever exist.
Let's refactor our script to something we can run on the server.
// app.js import Vue from "vue"; export default () => new Vue({ template: ` <div>Hello, world!</div> ` }); // entry-client.js import createApp from "./app"; const app = createApp(); app.$mount("#app");
We split the previous script in two parts. The app.js file becomes a factory to create new app instances. A second script, entry-client.js, will run in the browser. It creates a new app instance with the factory and mounts it in the DOM.
Now that we can create an app without a DOM dependency, we can write a second script for the server.
// entry-server.js import createApp from "./app"; import renderToString from "vue-server-renderer/basic"; const app = createApp(); renderToString(app, (err, html) => { if (err) { throw new Error(err); } // Dispatch the HTML string to the client... });
We imported the same app factory, but we're using a server renderer to render a plain HTML string. This string will contain a representation of the application's initial state.
We already have two of our three key ingredients: a server script and a client script. Now lets run them in PHP!
The first option that comes to mind to run JavaScript in PHP is V8Js. V8Js is a V8 engine embedded in a PHP extension which allows us to execute JavaScript.
Executing a script with V8Js is pretty straightforward. We can capture the result with output buffering in PHP:

$v8 = new V8Js();

ob_start();

// $script contains the contents of the script we want to execute
$v8->executeString($script);

echo ob_get_contents();
print("<div>Hello, world!</div>");
The drawback of this method is the need for a third-party PHP extension. Extensions could be hard or impossible to install on your system so it would be nice if there was an alternative.
An alternative way to run JavaScript would be with Node.js. We could spawn a Node process that runs our script and capture its output. Symfony's Process component does just what we need.
use Symfony\Component\Process\Process;

// $nodePath is the path to the Node.js executable
// $scriptPath is the path to the script we want to execute
$process = new Process([$nodePath, $scriptPath]);

echo $process->mustRun()->getOutput();
console.log("<div>Hello, world!</div>");
Note that for Node we're calling console.log instead of print.
One of the key concepts of the spatie/server-side-rendering package is the
Engine interface. An engine is an abstraction of the above JavaScript execution.
namespace Spatie\Ssr;

interface Engine
{
    public function run(string $script): string;

    public function getDispatchHandler(): string;
}
The
run method expects a script (script contents, not a path), and returns the execution result.
getDispatchHandler allows the engine to declare how it expects the script to emit its output: a
print call for V8Js, or a
console.log for Node.
A V8Js engine implementation isn't too fancy. It mostly resembles our above proof of concept, with some added error handling.
namespace Spatie\Ssr\Engines;

use V8Js;
use V8JsException;
use Spatie\Ssr\Engine;
use Spatie\Ssr\Exceptions\EngineError;

class V8 implements Engine
{
    /** @var \V8Js */
    protected $v8;

    public function __construct(V8Js $v8)
    {
        $this->v8 = $v8;
    }

    public function run(string $script): string
    {
        try {
            ob_start();
            $this->v8->executeString($script);
            return ob_get_contents();
        } catch (V8JsException $exception) {
            throw EngineError::withException($exception);
        } finally {
            ob_end_clean();
        }
    }

    public function getDispatchHandler(): string
    {
        return 'print';
    }
}
Notice that we rethrow the
V8JsException as our own
EngineError. This way we can catch the same exception with any engine implementation.
A Node engine is a bit more complex. Unlike V8Js, Node needs a file to execute, not script contents. Before executing a server script, it needs to be saved to a temporary path.
namespace Spatie\Ssr\Engines;

use Spatie\Ssr\Engine;
use Spatie\Ssr\Exceptions\EngineError;
use Symfony\Component\Process\Process;
use Symfony\Component\Process\Exception\ProcessFailedException;

class Node implements Engine
{
    /** @var string */
    protected $nodePath;

    /** @var string */
    protected $tempPath;

    public function __construct(string $nodePath, string $tempPath)
    {
        $this->nodePath = $nodePath;
        $this->tempPath = $tempPath;
    }

    public function run(string $script): string
    {
        // Generate a random, unique-ish temporary file path
        $tempFilePath = $this->createTempFilePath();

        // Write the script contents to the temporary file
        file_put_contents($tempFilePath, $script);

        // Create a process to execute the temporary file
        $process = new Process([$this->nodePath, $tempFilePath]);

        try {
            // Strip the trailing newline that console.log appends
            return substr($process->mustRun()->getOutput(), 0, -1);
        } catch (ProcessFailedException $exception) {
            throw EngineError::withException($exception);
        } finally {
            unlink($tempFilePath);
        }
    }

    public function getDispatchHandler(): string
    {
        return 'console.log';
    }

    protected function createTempFilePath(): string
    {
        return $this->tempPath.'/'.md5(time()).'.js';
    }
}
Besides the temporary path steps, the implementation looks pretty straightforward.
Now that we have a solid engine interface, we can write an actual renderer class. The following paragraphs highlight the basics of the
Renderer class from the spatie/server-side-rendering package.
The renderer has one dependency: an
Engine implementation.
class Renderer
{
    public function __construct(Engine $engine)
    {
        $this->engine = $engine;
    }
}
If we were to write a render method, it'd need to execute a script that consists of two parts: a declaration of the dispatch function, and the contents of the application's server entry script.
A simple
render method looks like this:
class Renderer
{
    public function render(string $entry): string
    {
        $serverScript = implode(';', [
            "var dispatch = {$this->engine->getDispatchHandler()}",
            file_get_contents($entry),
        ]);

        return $this->engine->run($serverScript);
    }
}
The method requires an entry path that points to our
entry-server.js file.
We'll need some way to dispatch the prerendered HTML from the script to the PHP environment. A function needs to be loaded before our server script to ensure it's available. The
dispatch variable is assigned the return value of the engine's
getDispatchHandler method.
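In JavaScript terms, the script the renderer assembles can be sketched like this (`buildServerScript` is a hypothetical helper mirroring the PHP implode call, not part of the package):

```javascript
// Sketch of the final script handed to the engine: a dispatch declaration
// followed by the application's entry script, joined with semicolons.
function buildServerScript(dispatchHandler, entryScript) {
  return ["var dispatch = " + dispatchHandler, entryScript].join(";");
}

// For the Node engine the handler is console.log; for V8Js it is print.
const script = buildServerScript("console.log", 'dispatch("<div>Hi</div>")');
console.log(script); // → var dispatch = console.log;dispatch("<div>Hi</div>")
```

Because `dispatch` is just an alias, the same entry script works unchanged on either engine.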
Remember our server's entry script? Let's call that newly added
dispatch function with the prerendered application.
// entry-server.js
import createApp from "./app";
import renderToString from "vue-server-renderer/basic";

const app = createApp();

renderToString(app, (err, html) => {
    if (err) {
        throw new Error(err);
    }

    dispatch(html);
});
The application script itself doesn't need any special treatment. A
file_get_contents call will suffice.
We created a server renderer in PHP! The full
Renderer implementation in spatie/server-side-rendering looks a bit different. There's better error handling and more features, including mechanics to share data between PHP and JavaScript. Browse through the server-side-rendering codebase if you're interested in the nitty gritty details.
We reviewed server side rendering's benefits and tradeoffs. We know SSR adds complexity to an application's architecture and infrastructure. If server side rendering doesn't provide any value to your business, you probably shouldn't bother with it in the first place.
If you do want to get started with server side rendering, read up on application architecture first. Most JavaScript frameworks have an in-depth guide on SSR. Vue.js has an entire site dedicated to SSR documentation. It explains pitfalls like data fetching and managing application state for server-rendered apps.
There are many battle-tested solutions out there that provide a great SSR experience out of the box. Notable projects are Next.js if you're building a React app, or Nuxt.js if you prefer Vue.
Maybe you have limited resources to manage the infrastructure complexity, you want to server-render a component as part of a larger PHP app, or you don't want to build and maintain a stateless API. If any of those reasons resonate with you, server rendering in PHP could be a viable solution.
I published two libraries to enable server side rendering JavaScript from PHP: spatie/server-side-rendering and spatie/laravel-server-side-rendering for Laravel apps. The Laravel package works with near-zero configuration. The generic package requires some setup depending on your environment. Don't be daunted though, everything's thoroughly documented in the readmes.
If you'd rather see the libraries in action first, check out the spatie/laravel-server-side-rendering-examples repository and follow the installation guide.
I hope these packages can be of help if you're considering SSR, and I'm looking forward to any questions or feedback on GitHub!
One of the hardest (and sometimes frustrating) tasks in a programmer's day-to-day workload is naming things. When I have a hard time finding that perfect word, I generally wind up in one of two situations:
Luckily, there are tools out there that can be of help.
Sometimes I'm completely stumped. I know the context of the word I'm looking for, but I can't think of anything that resembles it. Word Associations Network can help here. The results are a lot broader than the ones on Thesaurus.
Laravel 5.6 adds the ability to register alias directives for Blade components. Let's review some background information and examples.
I've been using Blade components since they were added back in Laravel 5.4. For those who don't know what a Blade component is, it's a directive to include views inspired by "components" in JavaScript frameworks like Vue.
{{-- resources/views/components/alert.blade.php --}}
<div class="alert alert-{{ $type }}">{{ $slot }}</div>

{{-- resources/views/page.blade.php --}}
@component('components.alert', [
    'title' => 'Beware!',
    'type' => 'warning',
])
    Here be dragons!
@endcomponent
A component's injected contents—placed in the "main slot"—are available through a
$slot variable. Other data can be passed via an associative array similar to
@include. What's nice about plain variables is that you can easily transform them or fall back to a default value, which is a bit more verbose using
@section in layouts.
We can also share component "properties" via the
@slot directive. This is useful if we need to pass a chunk of html or any other large string to the component.
@component('components.card', ['type' => 'warning'])
    @slot('header')
        <img src="dragon.svg" /> Beware!
    @endslot

    Here be dragons!
@endcomponent
This is pretty cool. We can now build apps by composing components, a more coherent model than traditional layouts & includes.
I had one issue with Blade components—they can be annoyingly verbose at times.
Compare a Vue.js component with the previous
alert example:
<alert type="warning" title="Beware!">
    Here be dragons!
</alert>
This is much leaner than
@component('path.to.component', ...). A similar html-like syntax in Blade would be a bad idea, but that's okay. Blade's has a simple syntax: print things with curly brackets and do all the other things with directives. Let's keep it that way.
What if we could simplify the component syntax to a single directive? This is possible with the new
Blade::component function.
Back to our
alert example:
Blade::component('components.alert');
@alert(['type' => 'warning', 'title' => 'Beware!'])
    Careful!
@endalert
If you don't have any extra slots, you can reduce it even further.
@alert
    Careful!
@endalert
We were able to contract the verbose
@component syntax to something simpler while maintaining readability.
By default, Blade will assume that the last part of the component path is its alias. If we'd prefer a different name for our alias, we can pass a second parameter.
Blade::component('components.alert', 'myAlert');
Component aliases will be part of Laravel 5.6. I hope they'll be of help tidying up your views!
Source: https://sebastiandedeyne.com/feed
I spent some time this week working on building a Docker image using a Dockerfile. In the process I learned a little about networking with Docker that I wanted to record here before I forget about it.
One of the steps in building my image was to update the list of packages using apt-get update. Mysteriously during the build I would get these errors:
sudo docker build -t="build_2013-10-03" .
Uploading context 20480 bytes
Step 1 : FROM colinsurprenant/ruby-1.9.3-p448
 ---> 6d1e62cb5cff
...
Step 5 : RUN apt-get install --assume-yes software-properties-common sudo libmysqlclient-dev vim
 ---> Running in 481577d7acec
...
Err raring/main libapt-inst1.5 amd64 0.9.7.7ubuntu4
  Something wicked happened resolving 'us.archive.ubuntu.com:http' (-11 - System error)
Logging in to the container gave me a pointer in the form of a warning and a confirmation of the problem:
sudo docker run -i -t colinsurprenant/ruby-1.9.3-p448 /usr/bin/env bash
WARNING: IPv4 forwarding is disabled.
root@0836328ec06a:/# ping us.archive.ubuntu.com
ping: unknown host us.archive.ubuntu.com
Since Docker containers are run inside a namespace and AuFS is used to hold their files, the only thing shared between the host OS and the container is the kernel. For IP traffic to move between guest and host the kernel must be set to do IP forwarding.
To enable this I needed to use the sysctl command and then restart the Docker daemon:
sudo sysctl -w net.ipv4.ip_forward=1
That little test solved my problem and so the next step was to ensure that the new setting would survive a reboot. As with almost all things on Linux, it just meant editing a configuration file:
sudo vim /etc/sysctl.conf

# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 1
My next learning about how networking works with Docker came from wanting the app in my container to access the MySQL database on the host. It turns out that docker creates a network interface:
mike@sleepycat:~☺  ifconfig
docker0   Link encap:Ethernet  HWaddr 9e:b5:ca:76:70:c3
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::9cb5:caff:fe76:70c3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:65 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:10675 (10.6 KB)
...
So on the host side I need to get MySQL to bind to 172.17.42.1, and containers (which will all end up on the 172.x.x.x network) will just connect to that. Don't forget that your MySQL user 'x'@'localhost' won't be able to connect when it's logging in from 172.x.x.x, and that you will need to add "host: 172.17.42.1" to your database.yml.
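Put together, the host-side changes might look like this (a sketch, not verbatim config from the post; the bridge address is whatever `ifconfig docker0` reports on your machine, and the username is a hypothetical example):

```
# /etc/mysql/my.cnf (on the host): bind MySQL to the docker0 bridge address
[mysqld]
bind-address = 172.17.42.1

# config/database.yml (in the containerized app): connect via the bridge
production:
  adapter: mysql2
  host: 172.17.42.1
  username: my_docker_user   # hypothetical user, granted access from 172.x.x.x
```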
The only real downside to all this Docker stuff has been that the extra layer of abstraction can make things a little hard on the head. The potential for repeatable, self-documenting app deployments using Dockerfiles is pretty exciting. I'm impressed with what I have seen and I know that what I am doing is still pretty primitive. We'll see what's next.
One thought on “Docker networking”
Thanks for the tip. It is annoying that you cannot connect to the host database using “localhost”.
Just wanted to state that one can comment out the "bind-address" option in the MySQL my.cnf file in order to have your MySQL database accept connections from any IP/interface. However, this would leave you "open" to the world (you still have MySQL user authentication, so set passwords for your MySQL users and make sure there are none without a password). Also, as rightly pointed out, that will not work with "localhost" users, but a user is only @localhost if specified; the default is to not be limited to localhost, so the following would work:
E.g. GRANT ALL ON `my-db`.* TO 'my-docker-user'@'%' IDENTIFIED BY 'password-here';
Source: https://mikewilliamson.wordpress.com/2013/10/06/docker-networking/
A label that can be placed onto a map composition.
#include <qgscomposerlabel.h>
Resizes the widget such that the text fits the item.
Keeps the top-left point.
Get item display name.
This is the item's id if set, and if not, a user-friendly string identifying item type.
Reimplemented from QgsComposerItem.
Returns the text as it appears on screen (with data fields replaced).
Get font color.
Accessor for the horizontal alignment of the label.
Accessor for the margin of the label.
Reimplementation of QCanvasItem::paint.
Sets state from a DOM document.
Reimplemented from QgsComposerObject.
Sets the current feature, the current layer and a list of local variable substitutions for evaluating expressions.
Sets text color.
Mutator for the horizontal alignment of the label.
Mutator for the margin of the label.
Mutator for the vertical alignment of the label.
Returns the correct graphics item type.
Added in v1.7
Reimplemented from QgsComposerItem.
Accessor for the vertical alignment of the label.
Stores state in a DOM element.
Reimplemented from QgsComposerObject.
Source: http://qgis.org/api/classQgsComposerLabel.html
CRect
The CRect class is similar to a Windows RECT structure. CRect also includes member functions to manipulate CRect objects and Windows RECT structures.
A CRect object can be passed as a function parameter wherever a RECT structure, LPCRECT, or LPRECT can be passed.
Note This class is derived from the tagRECT structure. (The name tagRECT is a less-commonly-used name for the RECT structure.) This means that the data members (left, top, right, and bottom) of the RECT structure are accessible data members of CRect.
A CRect contains member variables that define the top-left and bottom-right points of a rectangle.
When specifying a CRect, you must be careful to construct it so that it is normalized — in other words, such that the value of the left coordinate is less than the right and the top is less than the bottom. For example, a top left of (10,10) and bottom right of (20,20) defines a normalized rectangle but a top left of (20,20) and bottom right of (10,10) defines a non-normalized rectangle. If the rectangle is not normalized, many CRect member functions may return incorrect results. (See CRect::NormalizeRect for a list of these functions.) Before you call a function that requires normalized rectangles, you can normalize non-normalized rectangles by calling the NormalizeRect function.
Use caution when manipulating a CRect with the CDC::DPtoLP and CDC::LPtoDP member functions. If the mapping mode of a display context is such that the y-extent is negative, as in MM_LOENGLISH, then CDC::DPtoLP will transform the CRect so that its top is greater than the bottom. Functions such as Height and Size will then return negative values for the height of the transformed CRect, and the rectangle will be non-normalized.
When using overloaded CRect operators, the first operand must be a CRect; the second can be either a RECT structure or a CRect object.
#include <afxwin.h>
See Also CPoint, CSize, RECT
Source: http://msdn.microsoft.com/en-us/library/aa300172(v=vs.60).aspx
S. Somasegar is the corporate vice president of the Developer Division at Microsoft. Learn more about Somasegar.
I’ve had multiple meetings recently with customers and press where the topic of .NET development has come up, particularly as it relates to the cloud and server. They’ve heard about the extensive work we’ve done with Visual Studio 11 to enable the client-side development of Metro style apps for Windows 8 using C#, Visual Basic, C++, and JavaScript, and they’re curious to learn what improvements have been made for server-side development using .NET.
From my perspective, .NET is already the richest and most productive way for developers to create server-side applications that run in the cloud and on premises, and to do so with symmetry across both. With .NET 4 available today on Windows Server and in Windows Azure, developers have in place the languages, libraries, frameworks, and capabilities necessary to create next-generation solutions, whether for the enterprise or for a consumer application.
And things only get better with .NET 4.5. The coming release of .NET is targeted to provide great capabilities for developers working on mobile apps, web apps, and cloud services, while at the same time enabling rapid scalability, fast time to market, and support that spans a gamut of PCs, browsers, and mobile devices. Work in .NET 4.5 has been done at all levels of the stack, from the internals of the Common Language Runtime (CLR), to the Base Class Libraries (BCL), to Entity Framework (EF), to Windows Communication Foundation (WCF), to Windows Workflow Foundation (WF), to ASP.NET.
Given the aforementioned questions I’ve received about server-side development with .NET 4.5, I thought I’d share some highlights for what’s coming in this very exciting release.
Web
For a broad view of what’s new for web development, I recommend reading What’s New for ASP.NET 4.5 and Web Development in Visual Studio 11. For the purposes of this post, I’ll focus on a few highlights.
In our modern world of connected devices and continuous services, a very important aspect of a web site is being able to provide great results for mobile devices. ASP.NET in .NET 4.5 enables just that, with mobile support built into ASP.NET MVC 4 that enables optimized experiences to be delivered for tablets and phones. ASP.NET Web Pages also includes support for Mobile Display Modes, which enable you to create device-specific pages that are rendered conditionally based on the device making the request.
ASP.NET Web Forms has also seen a lot of love in .NET 4.5. Web Forms now supports model binding, which allows data controls to directly bind query parameters to form fields, query strings, cookies, session state, and view state. These data controls can also use strongly-typed binding expressions, rather than using the customary dynamic binding methods where type information is only available at run time. Additional support for HTML5 is included as well, such as support for new HTML5 form type elements like ‘email’, ‘tel’, ‘url’, and ‘search’.
ASP.NET in .NET 4.5 now has support for WebSockets, a protocol that provides bidirectional communication over ports 80 and 443 but still with performance similar to that of TCP, enabling a good alternative to long-polling. It has much improved support for writing asynchronous code. It has support for bundling and minification of JavaScript, which results in less data to be served to clients and faster load times for applications. And it has inherent performance improvements, such as a ~30% reduction in base level memory consumption for an ASP.NET application.
Of course, the improvements go beyond just programming model capabilities and performance. The code editor in Visual Studio has been enhanced with new ASP.NET Smart Tasks, with IntelliSense improvements, and with refactoring support.
As I’ve discussed previously on this blog, JavaScript support in Visual Studio 11 has also been enhanced significantly, and the code editor in Visual Studio 11 has some very handy HTML5 and CSS3 code editor improvements. The new Page Inspector tool in Visual Studio 11 also enables developers to better connect the browser on the client with code on the server, letting developers easily map HTML elements to the source that resulted in their rendering.
Data
The ability to query for data is an integral aspect of many web applications and services, and .NET 4.5 delivers some significant updates in the data space, particularly around the Entity Framework.
Entity Framework 5 (included in .NET 4.5) now supports multiple development workflows, catering to whichever methodology the developer prefers. Some developers prefer to first create their database and then consume that into their application; this is enabled with the “database first” approach. Others prefer a more “model first” approach, where a designer or an XML editor can be used to define entities in an application domain, highlighting how these relate to each other and to concrete representations which can then be used to generate a database. Still other developers prefer a more “code first” approach, where they define their model objects using “plain old classes,” never having to open a designer or define a mapping in XML. With the .NET Framework 4.5, developers don’t have to choose, as all three approaches are supported (with “code first”, Entity Framework now also supports migrations, enabling developers to easily modify and version their database schema without dropping any data).
Entity Framework in .NET 4.5 includes a variety of new features focused on making the developer more productive. For example, .NET enum types can now be mapped to the underlying Entity Data Model (EDM). Table-valued functions (TVF) are now supported, making it possible to use LINQ queries and Entity SQL against them. Spatial data is now supported, with two new primitive EDM types called Geometry and Geography, and with their associated .NET types DbGeometry and DbGeography.
The Entity Framework has also seen some significant performance improvements in .NET 4.5. For example, it now supports automatically compiled LINQ queries. Queries written with the Entity Framework are manifested as expression trees which need to be translated at run time into SQL queries. That transformation can take some measurable time, in particular for very complex queries. .NET 4.5 supports the automatic caching of query compilations, alleviating the need for the developer to do so manually to achieve the same performance benefits. The impact of these performance improvements when combined with others in the system have resulted in throughput improvements on some queries by as much as 600%.
For more information on what’s new in Entity Framework in .NET 4.5, the Data Developer Center on MSDN is a good place to start.
Services
One of the significant new features to surface in .NET 4.5 is the new Web API framework. ASP.NET Web API enables developers to build modern HTTP services that can be consumed by a broad range of clients, including browsers and mobile devices. It supports content negotiation, such that clients and servers can work together to determine the right format for data to be returned. It supports automatic serialization of data to JSON, XML, and Form URL-encoded data. It enables easily exposing REST APIs that map incoming requests to business logic using built-in routing and validation support. It supports query composition, such that the framework can automatically layer support for paging, sorting, and other capabilities on top of IQueryable<T>-returning APIs. It supports testing with mocking frameworks, and much more.
It is also highly productive. The ASP.NET Web API tutorial has a good example of how easy it is to build a simple API; this API just returns a list of available products, which for the purposes of the example has been hardcoded: } };}
This API can then easily be consumed from any client, such as one using JavaScript in a browser. For more information on ASP.NET Web API, I recommend the ASP.NET Web API site.
Windows Communication Foundation (WCF) has also been significantly enhanced for .NET 4.5, making it much easier to develop robust web services. As with ASP.NET, one of the major WCF features in .NET 4.5 is support for WebSockets, exposed via the NetHttpBinding and NetHttpsBinding bindings. UDP support has also been added to WCF in .NET 4.5.
WCF has also been augmented with productivity features. WCF in .NET 4.5 supports contract-first development, such that you can generate service and data contracts from a WSDL document and then provide the backing implementation. The configuration files used by WCF have been greatly simplified, such that the defaults used are much more likely to be appropriate for production use and thus don’t need to be specified.
The WCF support in .NET 4.5 also ties in with the greatly overhauled asynchronous story in .NET 4.5, which I’ve previously written about and which I’ll mention again later in this post. WCF service methods can now be written to return Tasks, and can thus be implemented using the new async and await keywords in C# and Visual Basic. WCF also now has support for true asynchronous streaming, which can lead to important scalability benefits.
The What’s New in Windows Communication Foundation 4.5 page provides a good overview on the breadth of what’s new.
Workflow
Windows Workflow Foundation (WF) is a key technology for implementing long-running processes, the kind that often show up in server and cloud workloads. It includes a programming model, an in-process workflow engine, and a designer, all of which have been improved for .NET 4.5.
Several of the more prominent improvements for WF are visible in this simple screenshot I took after experimenting with some of the features:
This image highlights the state machine workflows that are now possible to create using .NET 4.5. Further, it shows that C# expressions are now supported in the workflow designer (this is in addition to the support for Visual Basic expressions that were already supported in the previous release). The designer showcased here has also been improved significantly, with support for new features like search, click-and-drag support to pan around the workflow, an outline view to make it easier to navigate hierarchical workflows, support for multi-select, and more.
These are just some of the improvements available for WF in .NET 4.5. Others include being able to run different versions of the same workflow side-by-side, better control of how child activities persist in the workflow, and improved debugging support and build-time validation.
For more on what’s new in Windows Workflow in .NET 4.5, I recommend the What's New in Windows Workflow Foundation in .NET 4.5 overview material in the MSDN documentation.
Identity
Windows Identity Foundation (WIF) is a set of classes that support implementing claims-based identity, something particularly relevant to web applications and services. Prior to the .NET Framework 4.5, WIF shipped as a standalone entity, but as of .NET 4.5 its types are integrated into the heart of the Framework. In fact, some types at the center of .NET’s security model now inherit from WIF types; for example, GenericPrincipal (which has been in .NET since v1) now derives from the WIF type ClaimsIdentity. This deep integration makes it much easier to use claims.
Other improvements are also available in WIF as part of .NET 4.5. Session management has been improved for web farms, such that custom code is no longer required. Developers can simulate any incoming claim types and values at development time using Visual Studio 11 support for WIF. You can also use Visual Studio support for Windows Azure Access Control Service (ACS) to easily enable application login using Active Directory, Facebook, Windows Live ID, Google, Yahoo!, or any other OpenID provider.
For more information on WIF, the Windows Identity Foundation 4.5 Overview is a good place to get started.
Scalability
The fewer resources on average a server system uses to process a request, the more requests the system can process concurrently, and the more you can do with less. One key way server and cloud systems enable this kind of scalability is via asynchrony, such that thread-related resources are only consumed when they’re needed for executing code; while waiting for I/O to complete that will yield the data necessary for a computation to continue, asynchronous solutions allow the system’s threading resources to be used for other means.
As has already been alluded to in discussions of ASP.NET, WCF, and WF, up and down the stack improvements have been made for .NET 4.5 around asynchrony. Both C# and Visual Basic have been extended with new “async” and “await” keywords that allow developers to write asynchronous code with the same control flow constructs they use in their synchronous code, avoiding the need for complicated, explicit callback mechanisms (F# already had a similar capability in Visual Studio 2010). Then throughout the Framework, types have been augmented with Task-returning methods that make functionality immediately consumable using the new keywords, and hooks have been added that allow developers to write Task-returning methods and have them hosted by the system in an asynchronous manner. These improvements make it possible for developers to build asynchronous systems much more productively.
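The control-flow benefit is easy to see in JavaScript, which offers the same async/await pattern; this sketch (with a hypothetical `fetchString` standing in for an I/O-bound call like `HttpClient.GetStringAsync`) mirrors the shape of the C# action method shown further below:

```javascript
// fetchString simulates an asynchronous I/O operation with a Promise.
function fetchString(name) {
  return new Promise((resolve) =>
    setTimeout(() => resolve("<" + name + ">"), 10)
  );
}

// Two sequential awaits wrapped in try/catch: the same control flow
// constructs you would use in synchronous code, with no explicit callbacks.
async function index() {
  try {
    const bing = await fetchString("bing");
    const msdn = await fetchString("msdn");
    return { bing, msdn };
  } catch (err) {
    return { error: err.message };
  }
}

index().then((result) => console.log(result));
// → { bing: '<bing>', msdn: '<msdn>' }
```

While each `await` is pending, the runtime is free to do other work, which is the same resource argument made above for server scalability.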
Consider an asynchronous action method written with ASP.NET MVC. In this method, I want to first download the contents of the Bing home page, and then once that's completed, I want to download the contents of MSDN. I also want to report any errors that occur. Doing this prior to .NET 4.5 would have resulted in a page's worth of code, with nested callbacks, complicated error handling, and manual management for tracking outstanding operations. With .NET 4.5, I can instead simply write a straightforward action method like the following:
public async Task<ActionResult> IndexAsync()
{
    var client = new HttpClient();
    try
    {
        ViewBag.BingContent = await client.GetStringAsync("");
        ViewBag.MsdnContent = await client.GetStringAsync("");
    }
    catch (Exception exc)
    {
        ViewBag.Error = exc.Message;
    }
    return View();
}
This example uses some of the new async support in .NET 4.5, including the new keywords, new support in ASP.NET MVC that supports writing such methods, and the new HttpClient class which exposes a wealth of asynchronous methods.
As mentioned in my previous post on Visual Studio 11 Programming Language Advances, this support for asynchrony expands further into Visual Studio 11’s tooling. For example, the Visual Studio 11 debugger has first-class knowledge of async methods (allowing you to step through the method as if it were synchronous), the code editor provides tooltips that show how to consume async methods using the new language keywords, the “Add Service Reference…” dialog automatically generates Task-based endpoints that can be consumed in async methods, the Concurrency Visualizer can highlight suspension points in async methods, and the MSTest unit test framework in Visual Studio supports async unit tests.
You can learn more about this async support on the Visual Studio Async page on MSDN.
Runtime Performance
Improvements for .NET 4.5 extend all the way down into the core of the engine, into the CLR itself, with multiple improvements made specifically with server scenarios in mind.
One key area of investment has been in garbage collection. In .NET 4, we introduced "background GC" for the workstation flavor of the garbage collector. This background mode, which is enabled by default, allows for Generation 2 collections to be performed concurrently with collections of Generation 0 and Generation 1. What this means to developers is that application pause time is reduced significantly, which for client applications with rich UI can be a big deal, minimizing the number of hiccups that occur in the application's responsiveness. This need for reduced pause time is also relevant to servers, however, as servers need to remain responsive to incoming requests and often provide consistency on the latency with which requests are processed. As such, .NET 4.5 sees the introduction of background GC for the server flavor of the garbage collector, as well. As with the workstation GC, the server GC has this new capability enabled by default, so you don't need to do anything special in your applications to reap the benefits.
Another performance-focused improvement in the runtime is the new multi-core JIT compilation support. In .NET 4 and earlier, just-in-time (JIT) compilation results in methods being compiled as they’re needed. With .NET 4.5 and the new multi-core JIT capability, applications can make a few calls to methods on the ProfileOptimization class so as to highlight regions of the app where improved JIT compilation times are important. The system will track what methods are used in that region, and on subsequent runs, those methods may be compiled on additional threads concurrent with execution of the program. ASP.NET itself makes use of these APIs in .NET 4.5, so server apps written with ASP.NET benefit automatically. This can be particularly beneficial for web app startup time, where some apps should see as much as a 30% improvement.
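ASP.NET wires this up for you, but a standalone server process can opt in with a couple of calls early in startup. A minimal sketch using the ProfileOptimization class (the folder path and profile file name here are illustrative choices, not prescribed values):

```csharp
using System.Runtime;

class Program
{
    static void Main()
    {
        // Tell the runtime where to persist JIT profile data between runs.
        // (This directory is an illustrative choice, not a required location.)
        ProfileOptimization.SetProfileRoot(@"C:\MyApp\ProfileData");

        // On the first run this records which methods are JIT-compiled during
        // this region; on subsequent runs, the recorded methods may be compiled
        // on background threads while the program executes.
        ProfileOptimization.StartProfile("Startup.Profile");

        // ... rest of application startup ...
    }
}
```

If the profile root directory doesn't exist or isn't writable, the calls are simply no-ops, so this is safe to leave in place unconditionally.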
You can learn more about such performance improvements in the April 2012 issue of MSDN Magazine.
Base Class Libraries
The support for asynchrony through the .NET 4.5 libraries is quite extensive, with hundreds of new async methods exposed, and with quite a few performance improvements made for existing types in the Framework. These new async methods span mscorlib.dll, System.Xml.dll, System.Data.dll, System.Web.dll, System.Net.Http.dll, and beyond.
Outside of asynchrony, though, there are a plethora of additions made to the core libraries that will positively impact server and cloud scenarios. For example, .NET 4.5 includes the new ZipArchive and ZipFile classes for manipulating .zip files, as well as improvements to the existing compression support in the framework to improve compression ratios and speeds.
ZipFile.CreateFromDirectory(@"C:\MyBlog", @"Somasegar.zip");
The Managed Extensibility Framework (MEF) now includes support for generic types, support for multiple scopes, and a convention-based programming model. The new System.Net.Http namespace (which contains the previously mentioned HttpClient type) provides components that allow for consumption of modern web services over HTTP, components that can be used in both clients and servers. And much more.
The .NET team blog is a good place to learn more about what’s new in the BCL.
Conclusion
I’ve said for a long time now that .NET is a great environment in which to develop server and cloud applications, spanning from the private cloud (datacenter) to the public cloud, and I see this role getting even stronger with .NET 4.5.
.NET 4.5 is a modern enterprise framework. ASP.NET enables developers to build highly interactive server applications that use modern standards like HTML5 and CSS3. Such applications are often built in terms of APIs and services that are also exposed to other consumers; ASP.NET Web API and WCF both provide solid frameworks with which to build these. These applications and services also often need to support credentials, and the integration of Windows Identity Foundation into .NET 4.5 gives developers the tools they need to be successful with authentication and authorization. ADO.NET and the Entity Framework enable developers to easily incorporate data into their server applications, with multiple work styles enabled, and with clean integration of validation and business rules. With Windows Workflow Foundation, developers can create powerful workflow services for authoring distributed and asynchronous applications, and with WCF, these systems can communicate and integrate with any number of others enterprise solutions, from SAP to Oracle to SharePoint to Dynamics. Scalability is achieved with an asynchronous solution that spans the .NET runtime, libraries, and languages, and all of this runs on top of the highly robust, reliable, and efficient engine that is the CLR.
As you can see, there are a lot of new enhancements coming in .NET 4.5 that make .NET a compelling programming environment for building applications for the cloud and server environments.
Namaste!
http://blogs.msdn.com/b/somasegar/archive/2012/05/16/net-improvements-for-cloud-and-server-applications.aspx?PageIndex=1
Making the zfs snapshot service run faster
By user12625760 on Jan 03, 2009
I've not been using Tim's auto-snapshot service on my home server as once I configured it so that it would work on my server I noticed it had a large impact on the system:
: pearson FSS 15 $; time /lib/svc/method/zfs-auto-snapshot \
    svc:/system/filesystem/zfs/auto-snapshot:frequent
real    1m22.28s
user    0m9.88s
sys     0m33.75s
: pearson FSS 16 $;
The reason is twofold. First, reading all the properties from the pool takes time, and second, it destroys the unneeded snapshots as it takes new ones (something the service I used cheats on by deferring until very late at night). Looking at the script there are plenty of things that could be made faster, so I wrote a python version that could replace the cron job. The results, while an improvement, were disappointing:
: pearson FSS 16 $; time ./zfs.py \
    svc:/system/filesystem/zfs/auto-snapshot:frequent
real    0m47.19s
user    0m9.45s
sys     0m31.54s
: pearson FSS 17 $;
still too slow to actually use. The time was dominated by cases where the script could not use a recursive option to delete the snapshots. The problem is that there is no way to list all the snapshots of a filesystem or volume but not its descendants.
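Until the zfs command grows such an option, a script has to post-filter the recursive listing itself. A minimal sketch of that filter in Python (the function name is my own; the input is assumed to be dataset names as printed by `zfs list -H -r -t snapshot -o name`):

```python
def direct_snapshots(names, dataset):
    """Keep only snapshots taken on `dataset` itself, not on its descendants.

    A snapshot of the dataset itself looks like 'tank@snap', while a
    descendant's snapshot looks like 'tank/fs@snap', so matching on the
    'dataset@' prefix is sufficient.
    """
    prefix = dataset + "@"
    return [name for name in names if name.startswith(prefix)]

names = ["tank@frequent-1", "tank/fs@frequent-1", "tank@hourly-1"]
print(direct_snapshots(names, "tank"))  # ['tank@frequent-1', 'tank@hourly-1']
```

The filter is cheap, but of course it doesn't avoid the real cost: the zfs command still has to enumerate every descendant snapshot before the script throws most of them away.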
Consider this structure:
# zfs list -r -o name,com.sun:auto-snapshot tank
NAME         COM.SUN:AUTO-SNAPSHOT
tank         true
tank/backup  false
tank/dump    false
tank/fs      true
tank/squid   false
tank/tmp     false
The problem here is that the script wants to snapshot and clean up "tank" but can't use recursion without also snapshotting all the other file systems that have the false flag set, and set for very good reason. However, if I did not bother to snapshot "tank" itself, then tank/fs could be managed recursively and there would be no need for special handling. The above list does not reflect all the file systems I have, but you get the picture. The results of making this change bring the timing for the service to:
: pearson FSS 21 $; time ./zfs.py \
    svc:/system/filesystem/zfs/auto-snapshot:frequent
real    0m9.27s
user    0m2.43s
sys     0m4.66s
: pearson FSS 22 $; time /lib/svc/method/zfs-auto-snapshot \
    svc:/system/filesystem/zfs/auto-snapshot:frequent
real    0m12.85s
user    0m2.10s
sys     0m5.42s
: pearson FSS 23 $;
While the python module still gets better results than the korn shell script the korn shell script does not do so badly. However it still seems worthwhile spending the time to get the python script to be able to handle all the features of the korn shell script. More later.
What's with the sudden Python fad at Sun?
Are you guys sacrificing high tech in favor of being hip & cool, or what? It will come back to bite you.
Posted by UX-admin on January 04, 2009 at 01:15 AM GMT #
Thanks Chris! A python re-implementation is something I've wanted to have a go at, but I don't have much free time at the moment, with a newborn in the house. As you point out, getting correctness as well as speed is vital: I agree.
There are some performance improvements in the hg repository, over and above what's in 2008.11, but I suspect you're already using the latest bits.
iirc I've logged an RFE to ask for a means to show snapshots of just a given dataset - will try to dig up the CR (and if I haven't, it's certainly one I meant to file)
Posted by Tim Foster on January 04, 2009 at 01:51 AM GMT #
@UX-admin
My reasons for choosing python were three fold:
1) I wanted to learn python due to the "fad" @ sun. I know I will end up needing to know it.
2) The other choice was TCL but that would not have stood such a good chance of making it back into the base OS.
3) I like learning new (computer) languages. I could get to like python a lot.
@Tim,
Thanks for the RFE. On my home server I'm bang up to date, so I think this is as good as it gets.
I do recall writing a java app to download pictures from my first digital camera with my son asleep on my chest all night. If I moved he woke so I just stayed put and wrote code: Obviously I wrote the code to allow me to take pictures as well and hence I have that photo.
So having a new born is no excuse;-)
Posted by Chris Gerhard on January 04, 2009 at 03:51 AM GMT #
BTW: The performance of the original Korn shell script can be *VASTLY* improved - right now it is fork()'ing like mad and most of this stuff can be avoided (and I bet 10 Euro that I can tune this script using ksh93 (from ksh93-integration update1) and make it outperform any Python version =:-) ).
Posted by Roland Mainz on January 04, 2009 at 10:04 AM GMT #
Be my guest Roland! Giving the ksh method code a good kicking would be more than welcome, just be sure to start with the version at the tip of the hg repo:
Posted by Tim Foster on January 04, 2009 at 11:24 AM GMT #
The point is not korn shell v python but that the performance of both the korn shell script and the python script are both dominated by reading the names of the snapshots when the script only needs the names of the snapshots of this filesystem or volume. Instead it gets all the snapshots below this filesystem or volume.
Anything else while useful will not get you significant gains.
Posted by Chris Gerhard on January 04, 2009 at 12:52 PM GMT #
"I wanted to learn python due to the "fad" @ sun. I know I will end up needing to know it."
Sun seems to really like slow programming languages (Java), and Python fits that bill as well.
The stuff which you are doing in the KSH script is trivial to write in AWK, with the additional bonus that, once working, the AWK program can be compiled into a binary executable, for maximum performance.
I am unpleasantly amazed that you did not use the AWK programming language to solve this, since your program is exactly the kind of workload what AWK was designed for.
It's a case of "pick the right tool for the job", and I can't justify Python being the right tool no matter how you slice it and dice it.
Then again, learning something just because it's a fad isn't rational, so my argument completely sinks.
Posted by UX-admin on January 05, 2009 at 02:44 AM GMT #
Hmm, java is not slow; there are just slow java apps, as there are slow apps written in any language.
Even if I agreed that python was a "fad", it would still be rational to learn it, as I will find myself faced with python code to debug.
Awk would be an interesting choice, but while it may be possible to write this in awk, it would not have taken the few hours it took a novice python programmer to do in python, and that is speaking as someone who writes awk. ksh93 could well be one of the right tools for the job, but python or even TCL would also do the job just fine.
However that still misses the point. The way to improve this is to change the way the zfs command works to allow it to do only the things that the script needs. So CR 6352014 would appear to be the place to start and that is written in C.
Posted by Chris Gerhard on January 05, 2009 at 03:14 AM GMT #
BTW, I looked at 6352014, and the workaround for "-c" is trivial:
awk -F'/' '$3 !~ /@/ && $4 == "" {print;}'
And if you want to do it *right*:
#include <sys/types.h>
#include <regex.h>
...and go to town with:
cc -xO4 -xprefetch=auto [-xipo=2 -xlinkopt=2 -xlibmil -xlibmopt ...]
But, being a Sun Microsystems employee, you should already know all the Sun Forte switches "like drinking water", correct?
Posted by guest on January 05, 2009 at 11:19 AM GMT #
was the RFE I'd logged requesting basically the same thing as 6352014, but a bit more useful.
Posted by Tim Foster on January 06, 2009 at 01:33 AM GMT #
@194.158.241.118
While the workaround gets you the same output it does not solve the performance problem.
I do like the idea that all Sun Employees know all the switches to the compiler. You should test Jonathan at the next investor con call;-)
@Tim Foster
Oh the irony. I had that functionality working until I saw bug 6352014 so changed my prototype to implement the -c flag. The results are impressive:
: pearson FSS 1 $; time /usr/sbin/zfs list -r -t snapshot tank | wc
43876 219380 4431485
real 0m12.61s
user 0m4.16s
sys 0m7.11s
: pearson FSS 2 $; time /usr/sbin/zfs list -c -t snapshot tank | wc
370 1850 25539
real 0m0.06s
user 0m0.03s
sys 0m0.03s
: pearson FSS 3 $;
Let me know if you want a binary to play with!
Moving this from a prototype to real code, and putting in the code to provide both the depth and child options, would not be that hard. Getting it through PSARC will be harder;-)
Posted by Chris Gerhard on January 06, 2009 at 02:35 AM GMT #
Provided that 6762432 gets through the PSARC, "--depth" would be a really, really bad idea, because it diverges from the UNIX standard.
For example, it would be much more constructive and consistent to use "-l" for "level" (depth), or if that's not an option, at least use "-depth", to attempt to be consistent with other commands which support this option, like find(1).
"--something" is GNU, and GNU is not UNIX. We're on a System V UNIX here, please remain consistent.
Posted by UX-admin on January 07, 2009 at 01:55 AM GMT #
And in that regard, Mr. Gerhard is on a better track because he went for a single letter switch, "-c".
Posted by UX-admin on January 07, 2009 at 01:58 AM GMT #
--depth would not be my choice, and strangely enough I had chosen -l as well, but in my head it stood for limit until I switched to the other bug report. The important point is not the name of the flag but the functionality that it provides. Which would be better: a flag that allows listing the children, or a flag that allows listing to a fixed depth in the tree? Or should we aim for the same number of options as ls and have both!
Posted by Chris Gerhard on January 07, 2009 at 02:09 AM GMT #
Probably a flag to allow fixed depth.
UNIX is all about the power to choose, so a flag to specify an arbitrary depth puts the consumer (of the technology) behind the steering wheel.
Posted by UX-admin on January 07, 2009 at 10:56 AM GMT #
https://blogs.oracle.com/chrisg/entry/making_the_zfs_snapshot_service
Hi, Java beginner here. I was hoping someone could help me understand what my teacher can't be bothered with. I've got to create a simple program for class that will take 5 random numbers entered by the user, check to see if any of those are divisible by 3, add up any numbers that are divisible by 3, and print a total.
Please help me understand what I'm doing. The instruction was to create a LOOP with an IF statement. I can't quite figure out how to gel all the pieces together so that it produces what I want. Below is what I've got. Any help would be appreciated!!
package javaapplication8;

import java.util.*;

public class JavaApplication8 {
    public static void main(String[] args) {
        System.out.println(" HI! Lets play a game... ");
        System.out.println(" Pick any 5(five) numbers between 1 & 100: ");
        Scanner s = new Scanner(System.in);
        int n1;
        n1 = s.nextInt();
        int n2;
        n2 = s.nextInt();
        int n3;
        n3 = s.nextInt();
        int n4;
        n4 = s.nextInt();
        int n5;
        n5 = s.nextInt();
        int count;
        int total;
        for (int x = 0; x < 100; x += 3) {
            count = x++;
            total = x += 3;
            if (n1 += 3) { System.out.println(total); }
            else {};
        }
        System.out.println(" There are " + count + "numbers divisivible by 3(three) ");
        System.out.println(" The total of those numbers is: ");
    }
}
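For reference, here is one way the pieces could gel: a single loop reads the five numbers, and the remainder operator does the divisibility test, since `n % 3 == 0` is true exactly when `n` is divisible by 3. This is a sketch of one possible solution, not the only one (the class and method names are my own):

```java
import java.util.Scanner;

public class DivisibleByThree {
    // Returns {count, total} for the entries of nums divisible by 3.
    static int[] tally(int[] nums) {
        int count = 0;
        int total = 0;
        for (int n : nums) {
            if (n % 3 == 0) {   // remainder 0 means n is divisible by 3
                count++;
                total += n;
            }
        }
        return new int[] { count, total };
    }

    public static void main(String[] args) {
        System.out.println(" HI! Lets play a game... ");
        System.out.println(" Pick any 5(five) numbers between 1 & 100: ");
        Scanner s = new Scanner(System.in);
        int[] nums = new int[5];
        for (int i = 0; i < 5; i++) {   // one loop instead of five variables
            nums[i] = s.nextInt();
        }
        int[] result = tally(nums);
        System.out.println(" There are " + result[0] + " numbers divisible by 3(three) ");
        System.out.println(" The total of those numbers is: " + result[1]);
    }
}
```

The key differences from the original: `count` and `total` start at 0 and are updated inside the loop, the loop runs once per number entered (not over 0..100), and the IF tests divisibility with `%` rather than the invalid `if (n1 += 3)`.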
https://discuss.codecademy.com/t/idiot-requires-help-with-simple-java-please/52670