text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Scroll down to the script below, click on any sentence (including terminal blocks!) to jump to that spot in the video!
If you liked what you've learned so far, dive in!
video, code and script downloads.
Ok, guys: confession time. This cool little autoloader idea where we make our class name match our file name and our namespace match our directory structure ... well, that was not my idea. In fact, this idea has been around in PHP for years, and every modern project follows it. That's nice for consistency and organization, but it's also nice for a much more important reason: we can write a single autoloader function that can find anyone's code: our code or third-party code that we include in our project.
The idea of naming your classes and files in this way is called
PSR-0. You see,
there's a lovable group called the PHP FIG. It's basically the United Nations of
PHP: they come together to agree on standards that everyone should follow.
PSR-0
was the first standard... called 0 because we geeks start counting, well, at 0.
It simply says that Thou shalt call your class names the same as your filenames plus
.php and you shall have your directory structures match up with your namespaces.
Why do we care? Because instead of having to write this autoloader by hand, you can actually include an outside library that takes care of all of it for us. The library is called Jordi, I mean, Composer: you may have heard of it.
Let's get it: Go to getcomposer.org and hit download. Copy the lines up here: if you're on Windows, you may see slightly different instructions. Then move into your terminal, open a new tab, and paste those in:.phpphp -r "unlink('composer-setup.php');"
This is downloading Composer, which is just a single, executable file. Usually people use Composer to download external libraries they want to use in their project. It's PHP's package manager.
But it has a second superpower: autoloading. When this command finishes, you'll end
up with a
composer.phar file. This is a PHP executable. We'll come back to it in
a second.
To tell Composer to do the autoloading for us, all you need is a small configuration
file called
composer.json. Inside, add an
autoload key, then a
psr-4 key, and
empty quotes set to
lib:
That's it.
Remember how I said this rule is called
PSR-0? Well
PSR-4 is a slight amendment to
PSR-0, but they both refer to the same thing. This tells Composer that we want to
autoload using the
PSR-0 convention, and that it should look for all classes inside
the
lib directory. That's it.
Back in your terminal, run:
php composer.phar install
This command normally downloads any external packages that we need - but we haven't
defined any. But it also generates some autoload files inside a new
vendor/ directory.
To use those, open
bootstrap.php, delete all the manual autoload stuff, and replace
it with just
require __DIR__vendor/autoload.php, which is one of the files that
composer just generated:
That's it.
You also usually don't commit the
vendor/ directory to your Git repository: team
members just run this same command when they download the project.
Let's see if it works! Go back and refresh! It does! And as we add more classes and
more directories to
lib/, everything will keep working. AND, if you guys want to
start downloading external libraries into your project via Composer, you can do that
too and immediately reference those classes without needing to worry about require
statements or autoloaders. Composer takes care of everything. Thanks Jordi! | https://symfonycasts.com/screencast/oo-ep4/composer-autoloading | CC-MAIN-2022-27 | refinedweb | 634 | 75.81 |
So I was struggling for awhile with a binding issue in my Silverlight application, and learned the hard way that I forgot my basics.
The scenario is fairly common. I have a view model that contains the entity I want to edit along with supporting lists. The common example I see on the web is something like this:
public class CityEntity { public int ID { get; set; } public string Name { get; set; } } public class MyEntity { public CityEntity CurrentCity { get; set; } } public class ViewModel { public MyEntity Entity { get; set; } public ObservableCollection<CityEntity> Cities { get; set; } }
This is purely contrived but you get the point ... I have basic "building block" entities that are composed into the entity I wish to edit, so my view model hosts that entity as well as some collections for binding to drop downs so I can change properties on the entity.
My ComboBox (I thought) was straightforward:
... <ComboBox Style="{StaticResource SimpleComboBoxStyle}" ItemsSource="{Binding Cities}" SelectedItem="{Binding Path=Entity.CurrentCity,Mode=TwoWay}" DisplayMemberPath="Name"/> ..
Imagine my chagrin when I'd pop up my window and the combo box ... never ... showed ... the city. What was wrong? I had the right path, the right selected item. Was I missing something?
It turns out, I was.
The framework deals with the lists and bindings as objects, so unless the actual reference to the city on the main entity matches the entity in the list, there is no way to "set" the selected item (the framework doesn't know I intend for them to match).
The solution was quite simple: most of my entites on the Silverlight side derive from a base class, call it
SimpleEntity. By implementing
Equals (and by way of that, the hash code as well), the framework can now understand how to take my entity over here and match it to the entity in the list over there. Once it was implemented, I pushed it out and voila! my combo boxes started populating with the right value.
Can you post the code please for this solution?
A bit of code showing how you solved this would be helpful. This has been a big headache for me. Thanks!
In the above class, I'd add this:
override int GetHashCode()
{
return ID.GetHashCode();
}
override bool Equals(object obj)
{
return obj is CityEntity && ((CityEntity)obj).ID.Equals(ID);
}
This assumes I base equality on my identifier and not the name.
I use a business entity that I created on server side. I use WCF to communicate with my Silverlight application. So How can I do to override my Equals and GetHashCode Method?
If it is from a WCF entity, I'd make a local entity and have an extension method, like this (Foo is the WCF proxy class):
public class FooLocal : Foo
{
override GetHashCode() ...
override bool Equals ...
}
public static class FooExtensions
{
public static FooLocal ToLocal(this Foo incoming)
{
return new FooLocal { Name=incoming.Name, Address = incoming.Address ... etc };
}
}
Then you can simply do:
public FooLocal { get; set; }
public Foo
{
get { return FooLocal; }
set { FooLocal = value.ToLocal(); }
}
or something similar.
Thanks Jeremy | https://csharperimage.jeremylikness.com/2009/10/silverlight-comboboxes-and-importance.html | CC-MAIN-2017-39 | refinedweb | 509 | 64.51 |
Originally posted by Pranob Samani: Hi Champs, I m very new to this web services thing but i want to learn it. please help me out!!! Thanks in advance
Originally posted by ankur rathi:
import javax.jws.*;
@WebService
public class HelloWorldWS {
@WebMethod
public String getMessage() {
return "Hello World!!!";
}
}
It's a web service.
Originally posted by ankur rathi: It really IS a web service.
Originally posted by Ulf Dittmer: It's not. It's a piece of code, the meaning of which -to a WS beginner- is totally opaque. It does nothing unless deployed on a server und accessed by a WS client; since none of these pieces are in place -and are not easily put in place by someone just starting out- it doesn't help. [ March 03, 2008: Message edited by: Ulf Dittmer ] | http://www.coderanch.com/t/224820/Web-Services/java/Guys | CC-MAIN-2015-06 | refinedweb | 135 | 75.2 |
RCDB(3) BSD Programmer's Manual RCDB(3)
rcdb - plain-text key/value pair database
#include <sys/types.h> #include <db.h> DBT * rcdb_alloc(void *item, const size_t length); DBT * rcdb_string(char *text); void rcdb_split(RCDB *handle, int64_t *value, char **key); RCDB * rcdb_open(const char * const dbfile); int rcdb_close(RCDB *handle); int rcdb_rawread(RCDB *handle, recno_t nr, char **bstr); int rcdb_rawrite(RCDB *handle, recno_t nr, char **bstr); int rcdb_delete(RCDB *handle, recno_t nr); recno_t rcdb_lookup(RCDB *handle, const char *const searchkey); int rcdb_retrieve(RCDB *handle, recno_t nr, int64_t *value, char **key); int rcdb_read(RCDB *handle, recno_t nr, int64_t *value); int rcdb_modify(RCDB *handle, recno_t nr, int64_t value, const char *const key); int rcdb_write(RCDB *handle, recno_t nr, int64_t value); recno_t rcdb_store(RCDB *handle, int64_t value, const char *const key);
The rcdb function suite has been developed as an interface to the dbopen(3) suite of functions with a recno(3) back-end and database file. rcdb databases save key/value pairs, with the key consisting of a C string and the value being a signed 64-bit integer. In this implementa- tion, the value is being saved as a hexadecimal value by default, but oc- tal and decimal values can be written into the exposed database manually as well due to usage of the strtoll(3) function to parse the values (called reference counts in former versions of this manual page). The character set to be used is ISO_646.irv:1991, which limits the separator character to be within the range of 1 to 126 decimally, or 0x01 to 0x7E hexadecimal, including the range boundaries, but excluding the newline character (10 or 0x0A by default). If you want to save a more broad range of characters within the key or raw string, you are en- couraged to use the UTF-7 encoding of the recent Unicode character set, because UTF-8 peruses the eigth bit and is not guaranteed to be handled the same on all operating environments. On the other hand, using UTF-8 makes the support for high-bit characters transparent and allows strictly conforming systems to still look up keys which only use plain ascii(7). The rcdb function suite is defined in the <db.h> include file and is drawn around the following structure: typedef struct { DB *database; recno_t key; recno_t currec; recno_t lastrec; DBT dbt_key; DBT dbt_data; char sep; } RCDB; Direct access to the database, circumventing the rcdb routines, is possi- ble via the RCDB->database field, but this should be avoided if possible. You actually can change the newline character, but the resulting database might not be binary-compatible. Optimally, the RCDB structure should be handled as opaque. The separator character, '|' by default, can appear within the key.
Within <db.h>, the _DB_RCDB_API macro is defined to the version of the current application programming interface level. This version of the rcdb manual page describes API version 3. The _DB_RCDB_MAJOR macro is defined as the lowest API supported by that version of <db.h>.
rcdb_alloc(item, length) allocates storage for a DBT structure using malloc(3) and ini- tializes it to point to item and contain length. In case of an error, NULL is returned, else a pointer to the newly allocated DBT structure. This function is __DBINTERFACE_PRIVATE. rcdb_string(text) calls rcdb_alloc(3) with text and its length (without the trail- ing zero byte) as parameters. This function is __DBINTERFACE_PRIVATE. rcdb_split(handle, value, key) splits the contents of the DBT data component into two com- ponents. The first one is assumed being a signed 64-bit integer in decimal notation (octal if starting with a zero, hexadecimal if starting with 0x). Separated by the separator character, the second value until the end of the data string but before the end mark, is assumed to be the key. Appropriate space for the key is allocated using malloc(3). If handle or handle->dbt_data.data was NULL, no action is performed at all. If an error occured, key is guaranteed to contain NULL. If the separator character was found to be invalid, EDOM is stored in the global variable errno, and EFTYPE if the separator character is missing (opaque raw access). This function is __DBINTERFACE_PRIVATE. rcdb_open(dbfile) calls dbopen(3) to open the recno(3) database file dbfile. It al- locates storage for an RCDB structure using malloc(3) and ini- tializes it, then returns a pointer to the structure. If an error occurs, NULL is returned. rcdb_close(handle) takes an RCDB handle and tries to close the database and free(3) the memory associated with it. If the RCDB pointer or the RCDB->database pointer are already NULL, no action is taken on them. If the database was open and the attempt to close it was successful, 0 is returned, else 1 is returned. In any case, the memory associated with the handle is freed. rcdb_rawread(handle, nr, bstr) reads the record nr from the database and stores a copy of its content in *bstr unless bstr is NULL (in which case you can use handle->dbt_data to access the content). On error, -1 is returned and *bstr is set to NULL if possible. rcdb_rawrite(handle, nr, bstr) writes the data contained in the C string *bstr into the data- base, overwriting record nr unless nr is zero, in which case a new record is created and appended at the end of the database text file. On error, -1 is returned. *bstr is never modified. rcdb_lookup(handle, searchkey) searches the database from beginning to the end for a record wirh the key searchkey. In case of an error, (recno_t)-1 is returned. If the record is found, its number is returned, else zero. So- called opaque records, these on which rcdb_split(3) returns EF- TYPE, that is, are skipped during the comparison and do not con- stitute an error. rcdb_retrieve(handle, nr, value, key) retrieves the record with the number nr and writes its key into *key and its value into *value. If an error occurs, -1 is re- turned, the content of *value is undefined, and *key is most likely set to NULL. If the operation was successful, 0 is re- turned. An invalid separator character causes an error of EDOM, and EINVAL is yielded if the record number is zero. rcdb_read(handle, nr, value) retrieves the record with the recno nr and writes its value into *value. If an error occurs, -1 is returned and the content of *value is undefined, else the result is 0. rcdb_delete(handle, nr) tries to delete the record nr and returns 0 on success and -1 on failure. rcdb_modify(handle, nr, value, key) allocates necessary space to handle a record containing a hexade- cimal 64-bit number preceded by 0x and followed by the separator character and key, trailed by the zero character. It then fills the space with content generated from value and key, and writes it into the database. If nr is zero, a new record is appended at the end of the file. If a record to be overwritten does not ex- ist, the behaviour is unspecified due to a limitation in the underlying libdb. Memory allocated during the operation is freed before exit. If the operation fails, -1 is returned, else zero. This function is not intended to be called by an end-user, be- cause rcdb_write(3) and rcdb_store(3) already provide powerful interfaces to the database. However, if you know what you are do- ing, this function is most likely more performant and thus not private. This is the most low-level write interface. rcdb_write(handle, nr, value) can be used to write a new value to a known key/recno into the database. rcdb_store(handle, value, key) writes the key/value pair into the database, overwriting a prob- ably already existing entry with the same key, appending if it did not already exist. It is the high-level write interface to this library. If an error occured, -1 is returned, else the recno of the record just written.
The return values of the functions are already described in the API FUNCTIONS section above. As a rule of thumb, the functions return -1 if an error occured, and 0 or a recno if the operation completed successful- ly. Caveat: rcdb_lookup() and rcdb_store() return -1 casted to recno_t on er- ror, so be sure to compare it to (recno_t)-1 because in the current im- plementation, the value is unsigned.
In most cases, if an error occured, the db, recno or rcdb functions are setting errno to an appropriate error number. In addition to this rule, the recno(3) and dbopen(3) manual pages also describe return values which can be yielded from the rcdb functions. The memory allocation functions can also write to errno.
ascii(7), dbopen(3), recno(3)
rcdb appeared in MirOS MirPorts in 2004.
Copyright (c) 2004 Thorsten Glaser <tg@mirbsd.org> This product includes material provided by Thorsten Glaser.
Probably some. If you encounter one, feedback is highly appreciated. MirOS BSD #10-current August 13, 2011. | http://www.mirbsd.org/htman/i386/man3/rcdb_delete.htm | CC-MAIN-2015-35 | refinedweb | 1,508 | 53 |
Ok, it sounds reasonable to want those caching conditions. But for my own
curiosity, why do you want them? Are you in a tight memory situation or is
it a perfectionist thing?
What's wrong with using JellyContext variables? It's pretty much what
they're there for.
You can think of a JellyContext as a scope. At each level of XML tag, you
get a new JellyContext (a new scope). Export says "when a variable is set
into the current scope, should I automatically put it into the scope above"?
Inherit says "when creating a new scope, should I give it all the variables
from the current scope"?
If a Tag wants to set a variable in a context (scope) that's higher up in
the tree, there are two choices: use export=true or find the parent context
(while ((parent = getParent()) != null)).
To find a variable that's been set into a context above yours, either use
inherit=true or findVariable.
I'd suggest this solution:
Have your Tag find the top level context (while ((parent = getParent()) !=
null)). Use a single variable name in this context, say my_imported_scripts.
That variable will be a single HashMap from uri to parsed Script instance.
As you saw, compilable tags are sort of compiled, but they don't get a
context or their attributes at that point. The problem with compilable tags
(I don't recommend using them) is that they're only compiled with respect to
the thread that compiles them. If you run the same Script in a different
thread, those compiled tags won't be there. This is not true for a Script.
Once a Script is compiled, it stays compiled no matter where you use it.
Hans
-----Original Message-----
From: Arnaud Masson [mailto:arnaud.masson@kikamedical.com]
Sent: Sunday, December 05, 2004 5:06 PM
To: Jakarta Commons Users List
Subject: Re: [Jelly] nested compileScript() with import ?
Yes, I have copied the import tag and added the cache to the new tag.
It works fine, but I would like to reuse compiled scripts
- if several container scripts include the same sub script
- if a container script include the same sub script several times
To do that, I currently use the JellyContext to cache compiled scripts
with a call to setVariable(),
using a variable name based on the script uri... it works, but it's not
really clean.
Maybe a better way would be to override the compileScript() methods of
the JellyContext
and add a cache of compiled scripts inside the context (indexed by
uri/url) ?
Also I am not sure how to handle the 'export' and 'inherit' parameters
when I use compileScript+run instead of a single call to runScript().
You say a Tag isn't compilable, but what's the "CompilableTag" ??
I tried to use it the but the attributes (uri,...) of my tag are empty
when "compile" is called.
Thanks for your help!
.Hans Gilde wrote:
>You could easily cache the imported script the first time it's run. This is
>a simple modification to the current import tag, so that it keeps the
>reference to the script. If you do this, why not add an attribute "cache"
to
>turn caching on/off and then submit it as a patch to the import tag?
>
>It would be a little harder, but not at all impossible, to make it cache
the
>imported script at compile time.
>
>If not, the basic idea is this:
>
>A Tag isn't compilable, it's generated at runtime. A Script is compilable,
>it's generated at compile time. You need a special Script, not a special
>Tag. Scripts are very much like tags except that they need to be thread
>safe. Most of the time, a Script called TagScript is used. By default, this
>Script creates and caches Tag instances when it's run.
>
>Your new Script would compile the import at compile time. The result of
>compiling the import is, it self, a Script instance. At runtime, your
Script
>would simply pass control to the imported Script.
>
>You would also have to implement a custom TagLibrary. A TagLibrary gets to
>create a TagScript (implements Script) for every XML tag in its namespace.
>So, your TagLibrary would create a custom TagScript that would compile and
>keep the imported XML.
>
>-----Original Message-----
>From: Arnaud Masson [mailto:am@kikamedical.com]
>Sent: Saturday, December 04, 2004 6:07 PM
>To: commons-user@jakarta.apache.org
>Subject: [Jelly] nested compileScript() with import ?
>
>hi
>
>in the current version of jelly "import" tag, it seems that imported
>scripts are always parsed and recompiled each time the containing script
>runs,
>even if this script has already been compiled.
>
>the problem is that it isn't optimized if the compiled version of the
>main script is cached.
>
>is it possible to compile all scripts included via <j:import ...> via a
>single call to jellyContext.compileScript() on the containing script ?
>should i write a custom tag to implement that (to replace import) ?
>
>thanks in advance
>
>arna | http://mail-archives.apache.org/mod_mbox/commons-user/200412.mbox/%3C0I8A0010L12C8X@mta10.srv.hcvlny.cv.net%3E | CC-MAIN-2015-06 | refinedweb | 835 | 73.98 |
}
This is a very nice example. The only thing I would personally change is to use the parameterized insertion.
Instead of:
conn.prepareStatement(“INSERT INTO \”TABLE1\” values(‘” + id + “‘, ‘” + val1 + “‘)”).execute();
conn.commit();
Use the parameterized statement to protect against SQL Injection – particularly when using values from the request object:
var st = prepareStatement(“INSERT INTO \”TABLE1\” values(?,?)”);
st.setString(1,id);
st.setString(2,val1);
st.execute();
conn.commit();
Awesome, Thomas.
I’ve updated the document accordingly.
Thanks for the great post. Do you have an example of how credential authentication could be set if the call to the server side js is made in batch without any user interaction, would it be something along the lines of basic authentication setting a name/password pair, or is the HTTPS route requid?
I will also be interested in how AWS Hana will handle 100’s of calls per second to perform realtime updates from external non-sap linux system using this mechanism.
Thanks.
Great post.
I don’t know how many calls a second (using XSJS to perfom insert/update) HANA on AWS can handle. It’s probably difficult to answer because it will depend on many factors particular to your environment.e.g. network connection, table design , Memory availability, other processes running etc.
In your external non-sap linux system you could test it out by writing a small program.
If you are familiar with Java you could included the following logic to connect to HANA on AWS:
URL url = new URL(““);
URLConnection uc = url.openConnection();
String userpass = “SYSTEM” + “:” + “manager”; //Or preferrably a user/password created just for this purpose
String basicAuth = “Basic ” + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes());
uc.setRequestProperty (“Authorization”, basicAuth);
InputStream in = uc.getInputStream();
NOTE: you would need to populate ‘id’ and ‘val’ values in the ‘url’ string with appropriate values.
If it works for you then please let me know how many calls a second your environment handles. 🙂
Thomas-
Should your code read as follows?
var st = conn.prepareStatement(“INSERT INTO \”TABLE1\” values(?,?)”);
as opposed to
var st = prepareStatement(“INSERT INTO \”TABLE1\” values(?,?)”);
prepareStatement is a method of the $.db.getConnection(); correct?
Hey Carl ,
In the above example
var st = prepareStatement(“INSERT INTO \”TABLE1\” values(?,?)”);
If u will write this line it shows an error . so when u write it like
var st = conn.prepareStatement(“INSERT INTO \”TABLE1\” values(?,?)”);
Then the error is getting removed.
So I think Henrique missed that word.
And yes you are right prepareStatement is a method of the $.db.getConnection();
You can see this link in case u have not gone through….
Hi Thomas,
Is it possible to read data from file placed in the application server or external sever using XSJS?
Actually we are building one scenario where customer will place file (.CSV) file in one particular sever and we will run XSJSJOB to access the file on regular interval.
We have prepared period job file and called XSJS in it but we don’t know how the access the file content and update the table.
Please help.
Regards,
Vikas Madaan
There is no API which allows access to the file system of the underlying server.
Hi Vikas Madaan,
Have you checked out HDBTI?
import = [
{
table = “myTable”;
schema = “mySchema”;
file = “sap.ti2.demo:myData.csv”;
header = false;
delimField = “;”;
keys = [ “GROUP_TYPE” : “BW_CUBE”];
}
];
I’m not sure if it’s possible, but here some scenarios I have in mind:
(Maybe Thomas Jung can verify this)
[1]
– Customer uploads CSV file via front end UI into Server
– Prefix file name to match filename in HDBTI
– Create a procedure or XSJS call, making the call to import the file into HANA Tables.
[2]
– Read the CSV file stored in the server
Found this 3rd party libraryto read CSV file using JS: Here
– Parse the information into HANA tables either from XSJS or XSJSLIB calls
Do let us know if it works!
All the best!
Cheers,
Jacob
Hi Thomas, Henrique,
does the JavaScript API provide any methods for mass/bulk update? What scaling options are there for extended application services?
Thanks,
Orel
Hi Orel,
I’ll leave that question to Thomas Jung, since he’s he expert. 😉
>does the JavaScript API provide any methods for mass/bulk update
Yes you use a bulk prepared statement. There is an example in the online help:
var conn = $.db.getConnection();
var BACTHSIZE = 100;
var st = conn.prepareStatement(“insert into mytable values(?)”);
st.setBatchSize(BACTHSIZE);
var i;
for(i=0;i<BACTHSIZE;i++) {
st.setInt(1,i);
st.addBatch();
}
st.executeBatch();
st.close();
conn.commit();
However you could also consider using a stored procedure – particularly if you need to do additional processing to the data before insertion. All your data intensive logic should be in SQL/SQLScript/Views not at the XSJS layer.
>What scaling options are there for extended application services?
More to the previous point, if you are using XS correctly you shouldn’t need much scaling. XSJS should be a lightweight pass through layer. Its always stateless and all the really heavily lifting should be done in SQL/SQLScript/Views – which is processed in the index server not the XS Engine process. There are a few parameters to set the number of JavaScript VM threads and memory per thread but you should really only have to change those if SAP Support tells you to (in cases of very large numbers of requests per second).
Hi Thomas,
thank you for your informative reply. In fact, I opened a message with SAP development on this and did already get an answer.
One further question though on your comment:
>> XSJS should be a lightweight pass through layer. Its always stateless ..
When you say stateless – you refer to the xsjs layer as a client right? Is state ever maintained on the server e.g.when the server is orchestrating the output the client may go thru transitions..?
No I mean XSJS as a server. It is completely stateless. I don’t understand what you mean by the “client may go thru transitions” or what that has to do with the server being stateless.
Hi Thomas,
thank you for clarifying this. I thought for a moment that the server would have to manage the different states of the client. I realize this is incorrect.
Going back to how you characterized XSJS as a lightweight pass thru layer sounds to me that XS is only an interface for getting/setting data from Hana database. no business logic, only call SQL scripted or modelled views or stored procedures (return JSON strings, save data into tables- parsing json input, validate etc…). no transaction support is provided (state less).
The question I have then is: What is the distinct advantage that XS provides to me as an application developer compared to the alternatives?
Thanks + regards,
Orel
Hi ,
Currently I am using trial version eclipse hana, My data is in json api and i want to import into hana. I have also created schema and table with the entries mentioned in code. I was trying to execute below code , to get the details from json request and store in hana database and to display the objects( as mentioned in below)
I have also given call statement in hana which send a request to XS . When I am trying to run the URL from hana xs applications ( HANA Cloud platform) . Facing below error ,
Could you please help me out ,
var body = $.request.body.asString();
var obj = JSON.parse(body);
var id = obj.id;
var name1 = obj.name1;
var name = obj.name2;
var name = obj.name3;
var conn = $.db.getConnection();
var output = {};
output.data = [];
conn.prepareStatement(“SET SCHEMA \”XXXXXX\””).execute();
var st = prepareStatement(“INSERT INTO \”RESULT\” values(?,?)”);
st.setString(name1,name1);
st.setString(name2,name2);
st.setString(name3,name3);
st.execute();
conn.commit();
var record = [];
record.push(name1);
record.push(name2);
record.push(name3);
output.data.push(record);
conn.close();
Also files created : xsapp, xsprivileges, xsjs , .xsaccess, .hdbrole
Please advise..
Basic question (Sorry!) –
How do you open connection?
I see – var conn = $.db.getConnection();
But don’t see any connection parameters such as user name /password/server…
I was expecting connection parameters the way we do in python…
Hi Abhijeet,
That is exactly the awesomeness of having XS Engine as an intrinsic component of SAP HANA! The logon data doesn’t need to be declared in the code level. Once you try to access the .html, .js or .xsjs residing in XS, it will ask for authentication, which can then be a user from HANA DB itself, with both permission to execute JavaScript as well as to query the underlying tables/views.
Best regards,
Henrique.
Thanks for quick reply…
So I write this script (HTML/JS) on any machine…and $.db.getConnection() will prompt me for credentials (server/user/password). and then if successful then continue with the script? Cool..
Actually HANA (XS) is your web server. So while you can write it locally, you do need to commit it back to HANA server.
I had set the following content in xsaccess file.
“authentication :null”
but now I want to access the database in xsjs file for a A simple demo(dont want to use name/password by client).
when i use $.db.getConnection() , I got a error like this
[getConnection: expects an authenticated session].
how can i get the connection with any connection parameters such as user name /password.
Hi yx,
first, this example above was created on SPS4 internal release of XS, so that’s why I didn’t have to create an .xsaccess file and you do (I suppose you’re in SPS5).
Also, you need to have at least basic authentication. I suggest to use at least something like this in your .xsaccess file:
Best regards,
Henrique.
Nice Blog – thanks
simple example… was helpful
Hi Henrique,
your are using var id = $.request.getParameter(“id”); to access the request parameters.
For me this is not working, i get the error that the function “getParamter” is not declared.
Actually instead using $.request.parameters.get(‘id’); does work for me.
Same for:
Instead use:
$.response.contentType = “application/javascript”;
$.response.setBody(JSON.stringify(output));
Best regards,
Alexander
Hi Henrique Pinto
I got the following error
Found the following errors:
===========================
InternalError: dberror(Connection.prepareStatement): 258 – insufficient privilege: Not authorized at ptime/query/checker/query_check.cc:2547 (line 15 position 0 in /p1940812895trial/myhanaxs/audiencemarketing1/javaScriptServices/odataMultiply.xsjs)
Make sure you have insert rights into that table/schema.
Hi Herique
Can you guide me how to add rights under which file?
That needs to happen at your user level (check the Security entry in HANA Studio)=.
Just add the Schema you want to select from and give the user SELECT rights.
I meant, INSERT rights.
its, 2014 if you run into the error $.request.getParameter is not a function
replace getParameter with parameters.get
Hi
i am trying to insert data into a table .
and the following error keeps persisting.
Error isInternalError: dberror(PreparedStatement.executeQuery): 340 – not all variables bound: unbound parameter : 0 of 1 at ptime/query/query_param.h:546
any help is much appreciated.
A
Hi Henrique Pinto & Thomas Jung
Thank you for the solution.
May I please ask, how about inserting records (creating records) into B1 v9.1 tables via utilizing the service layer? I believe that I can’t do INSERT queries.
Then, how do I call the service layer via XSJS / JS through my MVC Methodology?
Looking forward to hear from you.
Thank you.
Regards,
JacobTan
I havent done anything yet with B1 hence I do not know if my answer may help
From my perspective, communication between your .js and your .xsjs file can be done as shown below, all calls are made from the within the .js controller
Basically, call your .xsjs file with ajax
Another method I use is to use JSONModel interact
Will this be possible Importing AngularJS into Hana cloud platform
Dear all,
i have been following this blog post
I think my logic is already right, but i got response
500 – Internal server error
Don’t know how to debug it…
Appreciate your response…
Check the xsengine diagnostics trace file for clues. Also very easy to debug xsjs, especially via the browser based workbench ide.
Hi jon
Could you guide me jon? Thank you
But that would be too easy Yoppie 😉
Hopefully you will have access to the trace files via the administration console or workbench ide. For debugging xsjs in workbench ide simply set breakpoint by clicking in left column next to line number – you should see a red arrow indicating breakpoint, then click the play icon to execute. You might need to create a test wrapper, for example if the xsjs you want to debug exists in an xsjslib.
Hi John,
i find what’s the problem.. but still can’t resolve it…
my query be like
INSERT INTO MYTABLE VALUES(SEQ.NEXTVAL,’VAR1′)
(not work)
when i try to hard code the id :
INSERT INTO MYTABLE VALUES(99,’VAR1′)
it works but it’s not what i want…
Thank you…..
Hi XS champs,
I am very new to hana xs, i have a requirement that i have to create a hana xs application which will display a form in web and when user fill the fields and submit it will store data in hana tables,
Still didn’t find any article on this, kindly help me please
Hello everyone,
Im having this:
var conn = $.hdb.getConnection();
500 Cannot read property ‘getConnection’ of undefined
Some help?
Hi,
Did you add the required hdi resource to your xsjs module ? | https://blogs.sap.com/2012/12/04/how-to-insert-data-in-hana-through-javascript-with-xs-engine/ | CC-MAIN-2018-43 | refinedweb | 2,261 | 66.44 |
I.
InfiniBand has no standard.
libibverbs, developed and maintained by Roland Dreier since 2006, are de-facto the verbs API standard in *nix.
Till now there has been no Verbs API for Python 3.x and the existing libibverbs wrapper for Python 2.x was very complete but at the same time not very pythonic (I didn’t have this batteries-included feel 😉 – please don’t hesitate to compare the two for yourself).
The aim of pyibverbs project is providing a minimalistic wrapper for libibverbs which works in Python 2/3 and allows to exchange NumPy arrays between Infiniband nodes. The style of the API is very pythonic and gives a fully functional application in only a few lines of code. Data rates are easily an order of magnitude higher than using IPoIB.
Installation:
cython ibverbs.pyx gcc -c ibverbs.c -o ibverbs.o gcc -shared ibverbs.o -o ibverbs.so -lpython3.6 -libverbs cp -iv ibverbs.so /usr/lib/python3.6/site-packages/ibverbs.so
Usage:
from ibverbs import IBDeviceList, \ IBAccessFlags as acc import numpy as np A = np.zeros((512, 512, 512)) # # ibport, remote_qpn, remote_psn, remote_lid, # local_psn, remote_vaddr and remote_rkey # must be established using a side-channel # with IBDeviceList() as dlst, dlst[0].open() as ctx, ctx.protection_domain() as pd, pd.memory_region_from_array(A, acc.LOCAL_WRITE | acc.REMOTE_WRITE) as mr, ctx.completion_channel() as chan, ctx.completion_queue() as rcq, ctx.completion_queue(chan) as scq, pd.queue_pair(rcq, scq) as qp: qp.change_state_init(ibport) qp.change_state_ready_to_receive(remote_qpn, remote_psn, remote_lid, ibport) qp.change_state_ready_to_send(local_psn) qp.post_send(mr, 3, remote_vaddr, remote_rkey) scq.wait_complete()
Primitives for side channel data encapsulation and exchange are provided in IBConnSpec class. Examples of usage for data send/receive are placed in test_verbs_cli.py and test_verbs_srv.py. In the examples, ZMQ is used for the exchange of side-channel information but naturally it could be anything.
You will find the code in my GitHub Repository:.
Enjoy! 🙂
Photo by Wilson Ye on Unsplash. | https://adared.ch/pyibverbs-minimalistic-python-api-for-linux-verbs/ | CC-MAIN-2021-10 | refinedweb | 323 | 54.08 |
I have this role:
import { Role } from 'testcafe';
export default Role('', async t => {
await t
.setNativeDialogHandler(() => true)
.typeText('#identification', 'test.user@foo.io')
.typeText('#password', 'mypassword')
.click('button[type=submit]');
});
And this test:
import { Selector } from 'testcafe';
import UserRole from '../roles/userRole';
fixture `Visit Dashboard`
.page ``
.beforeEach(async t => {
await t.useRole(UserRole);
});
test('Dashboard Renders and has two tabs', async t => {
await t.expect(Selector('.dashboard-navigation li').count).eql(2);
});
test('Dashboard Renders and Activity Tab is active', async t => {
const activeTab = Selector('.dashboard-navigation li.active');
await t
.expect(activeTab.count).eql(1)
.expect(activeTab.find('a').innerText).eql('ACTIVITY')
});
Is this the right way to do that?
It definitely logs me in, correctly. But then it goes BACK TO /login for the 2nd test.
This doesn't hurt anything, but slows things down.
Just wondering what the best way is to be able to login before a test (or a giant suite of tests) ONE TIME and then not have it involved over and over again
Hi @Roger_Studner,
Your test looks right. I've tried to reproduce the problem but it all operates properly in a simple sample:
import { Selector, Role } from 'testcafe';
const UserRole = Role('', async t => {
await t
.click(Selector('a').withText('Log in'))
.typeText('input[aria-label="Phone number, username, or email"]', '')
.typeText('input[aria-label="Password"]', '')
.click(Selector('button').withText('Log in'));
});
fixture `fixture`
.page ``
.beforeEach(async t => {
await t.useRole(UserRole);
});
test('test1', async t => {
await t.expect(Selector('a').withText('Profile').exists).ok();
});
test('test2', async t => {
await t.expect(Selector('a').withText('Profile').exists).ok();
});
I think that roles do not work with your site's authentication. Probably, there is an issue with cookies. To find the cause of the problem, I need to reproduce it with your site. Could you please give me a public link?
Hi @churkin / @amoskovkin,
I'm running into an issue where I'm not seeing 'set-cookie' in the response header (see attached screenshots - Expected vs Bug/Issue) when using the Role property. So the Role's feature is not working for me when using TestCafe, but everything works fine when run manually. As @Roger_Studner mentioned above, it logs me in correctly for the first test but then 2nd test fails and goes back to the login page. We are using a Proxy using Express hosted on Heroku to send requests to destination server.
Could this be a TestCafe bug? Sorry, I can't provide login credentials to our app as its still in development phase.
Expected:
Bug/Issue:
Hi @SynaQE,Obviously, it is a TestCafe issue if some scenario works manually but does not work with TestCafe. But I will not investigate and fix the problem without access to the problematic web site. Could you create temporary credentials with limited rights for your web application to us? We can sign an NDA if it is needed.
Thanks @churkin for getting back to me on this. I tried to get you a temporary credential to debug this issue but unfortunately I didn't get the permission to from my higher-up. We can revisit this issue later. No worries.
A related question maybe about roles / using beforeEach
If I have the same test as posted above.
My testcafe chrome opens up... goes to my login page.
Then... it refreshes.. and goes to my login page...
then it logs in fine/proceeds to my test and things work.
Why does it go to the login page "twice". I mean, I can literally just watch it do it.
Hi Roger,
It works in the following way:
beforeEach
useRole
Try to use the preserveUrl option to don't open the initial page twice. You don't need to set the fixture page url in this case.
Hi all, I am dealing with the same problem with one of our site. The first test is goes nicely with the useRole but when it comes to the next test it redirected to the signin page. Have you guys resolve this issue Thank you!
@AndunRanmal,To avoid mixing several issues in a single ticket, I have created a new thread for you:
How to use roles | https://testcafe-discuss.devexpress.com/t/api-how-to-use-the-roles-properly/534 | CC-MAIN-2021-39 | refinedweb | 697 | 58.58 |
OK. I’m getting close to done with all of the combinations and permutations of ways to communicate between PC’s, Raspberry Pi’s, and Arduinos.
Today I’m going to look at using TCP on an Arduino Nano because it is a special case.
This post builds on the prior post TCP Communications between a PC and an Arduino Using Lazarus / Free Pascal.
The Nano Ethernet shield is a very nice little package:
You simply snap your Arduino Nano into the shield. The resulting package integrates into a project quite nicely:
You can purchase Ethernet shields for the Nano on eBay quite inexpensively. Looks like the going price at time of writing is about $11:
The Ethernet shield for the Nano uses a different Ethernet chip than the Uno’s Ethernet shield. The Nano uses the ENC28J60 while the Uno uses the WIZ5100. This means you must use different drivers. The normal <Ethernet.h> include file WILL NOT WORK.
I found this out the hard way when I got my first Nano Ethernet shield. I spent a lot of time learning new drivers the first time I tried this.
There are three different drivers for the ENC28J60 you can choose between. There is a nice explanation of these drivers here:
Prior to this experiment I had used the Ethershield library mentioned in the above article. To be honest, I had a lot of trouble with it. It was several years ago, so perhaps the bugs have been worked out. In looking at my old notes I noted that the driver did not do a good job taking edge conditions into account and I had so much trouble with TCP I gave up and just used UDP.
For this experiment, I used the UIPEthernet library. This was recommended to me when I had issues with Ethershield, but I was already close to done with that project. I decided to use the UIPEthernet library for this experiment partially because of past problems but also because UIPEthernet is advertised as being functionally very close to that of the Uno’s <Ethernet.h> library. And it would be nice not to have to rewrite my own code to take advantage of the ENC28J60.
The UIPEthernet Library can be found here:
In today’s experiment I took the hardware already built and moved it from the Uno to the Nano. All pins end up staying the same:
I then altered the Arduino sketch to remove the Ethernet.h library and use the UIPEthernet.h library instead:
#include <UIPEthernet.h>
I then simply recompiled the Uno’s version of my program and uploaded it to the Nano. It worked on the first try.
Nice!
I did have one major headache I should mention though it probably won’t impact anyone else. I usually use Code::Blocks for Arduino rather than the Arduino IDE. I, admittedly, never use the Arduino IDE. I find it way to hard to read the text. Worst case, I use Textpad and enter the program thru that and then compile with the IDE. But most often I use Code::Blocks for Arduino.
Anyway, for some reason I will never figure out (even after wasting a good 5 hours on it), Code::Blocks refuses to link this particular program. Every once in a while Code::Blocks has an infuriating issue like this. After this problem, I’m on the fence as to whether I will continue to use Code::Blocks. I don’t program C++ for a living so when weird issues happen, I blame my code, and waste a lot of time only to discover the problem is not mine.
Last, but not least. Here is the code for this experiment.
Pingback: Wiring Teensy 3.1 and ENC28J60 Ethernet Module Together | Big Dan the Blogging Man
Thanks for sharing your knowledge! | https://bigdanzblog.wordpress.com/2014/11/19/tcp-communications-between-a-pc-and-an-arduino-nano-using-lazarus-free-pascal/ | CC-MAIN-2017-39 | refinedweb | 640 | 73.07 |
Qt 5 embedded screen rotation
With Qt4 application rotation of 90, 180 and 270 degree was fairly easy with -qws -display transformed:Rot270.
I've been trying to find the equivalent with Qt 5 with Ogles2/eglfs, starting the application with -platform eglfs command line.
I'm starting to dig through the source and will post an answer if I find it. I'm hoping post the question here will result in a quicker and hopefully more correct solution.
Anyone know the Qt5 equivalent to -qws -display transformed:Rot270?
thanks
- peterlin82
Now,I have the same problem too.
Someone can teach me?
thanks a lot.
- jgestevebbri.com
Does anyone know how to rotate screens on QT5 embedded ?
I am looking for this solution as well.
Any help would be appreciated!
I would also be interested in a general solution.
If you are using qml then you can rotate any root items just by inserting the following. This will rotate all children and even handle touch input correctly.
@
transform: Rotation {
angle: 180
origin.x: root.width / 2
origin.y: root.height / 2
}
@
- peterlin82
Thanks for JoelC.
But I use Qt Widgets Application.
Any help would be appreciated!
I also have this problem. We are porting a c++/qml application from qt4.8 to qt5. We have used transformed:Rot90 and it has worked fine without any performance issues. However if we perform the following in our qml code:
@
transform: Rotation {
angle: 90
}
@
the qml part becomes laggy.
Has someone come up with a solution to this?
Dear All,
Just one more information that i want to add this is that if you don't want to pass
@-platform eglfs@
the above argument while running your app each time, you can simply store this information in the environment variable :
@export QT_QPA_PLATFORM=eglfs@
And now you can directly run your app without passing any argument.
Cheers!!
Same problem here. Any information is welcome, even it is a statement that this is not possible.
Btw: is this the wrong use case? Should one use QML?
There is no equivalent of QWS' transforms, neither in linuxfb nor eglfs.
Qt Quick applications are expected to do the transformations themselves.
For widget apps this is a bit unfortunate since they are left out in the cold for now. :/
Hello All,
I have same problem. App works fine on Qt4 with QML 1.1 version on both landscape and portrait mode... I use transformed:Rot270.
But in Qt5 if I try to rotate my app in QML rootitem by transform: Rotation {
angle: 180
origin.x: root.width / 2
origin.y: root.height / 2
}
it does not work properly...only part of the screen is rotated..Any known issues in that area ?
Thanks Kumars
My solution was: (in mainwindow.cpp-constructor):
QGraphicsScene *scene = new QGraphicsScene(); QGraphicsView *view = new QGraphicsView(parent); view->setGeometry(0,0,X,Y); // actual Display size view->setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff); view->setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff); QGraphicsProxyWidget *proxy = scene->addWidget(this); view->setScene(scene); view->show(); view->rotate(90);
- Chad Barbe
@agocs Does this mean that one cannot run a QWidget based application in portrait mode on a natively landscape display? For example, I have a 480x272 display that my hardware team wants to orient in a portrait orientation.
I have tried a number of things including the last suggestion on this thread from 0xFa81 but i end up with garbled text. That's the best way i can explain it. When I run the same code on my development box it works fine and the text is not garbled and show up sideways on my monitor. But if i run it on my embedded device i get text where the letters are switched around.
I started another thread about this here:
Hi,
Does Qt rotation is done as part of Qt library/application or as control to framebuffer (which in this case is expected to support rotation).
I ask because in our HW/fbdev driver there is no support for rotation, so I wander if Qt can do this rotation in software...
Thanks,
Ran
- matthew2011
@0xFa81 I'm curious if this is the method you are still using.
I'm using this on a product I'm helping with, and the performance is rather slow and is negatively impacting product design requirements (graphing updates per second).
I've tried adjusting the QGraphicsView settings (optimization, etc), but no improvement.
If anyone has found a better way to rotate a landscape LCD to portrait, on a platform using linuxFB (all software driven, no hardware graphics cores at all) help is appreciated.
This platform was previously being developed in Qt4, and the QWS system rotated without any noticeable issues...
Thank you!
Matthew
Are there any news about rotating the Display? I do have a Raspberry 7" Display which is landscape native. Using lcd_rotate in the boot config doesnt work (it only works for 180°) and display_rotate misses to rotate the touchscreen. So I need to either rotate the whole thing (touch and display) or just the touchscreen (then I would use the display_rotate). Is there a way to do this?
export QT_QPA_EVDEV_TOUCHSCREEN_PARAMETERS=rotate=90
somehow doesnt work for me (should it?)
thanks!
Though it is a pretty old query, I think answering to this thread is still valid.
The major difference between Qt4.8 and Qt5.x.x is that, they removed the Qt's own windowing system and let it open for user defined windowing system. That means, QWS (Qt Windowing System) is no more part of the Qt and we can use X11, Wayland or FB as the backend for the same.
If someone want to rotate the screen, then basically that is doing with the help of the windowing system but not just with the Qt itself. As many other people stated, Qt4 rotation is done with the QWS.
If you are using X11 windowing system along with Qt5, then X11 have rotation feature and one should make use of that for the Qt rotation.
I hope this helps.
Regards,
Ajith P V
Hi, Is this also valid for the QT embedded since it is using EGLFS? All I found was the workaround using setTransformation, but this (at least thats what it looks like) was removed in QT 5.
@tsaG Yes, you are right. "setTransformation" is part of QWSDisplay and in Qt5, they have removed the complete QWS windowing system itself. So as far as my knowledge, there is no equivalent for "setTransformation". However, instead of QWS windowing system, now Qt5 opens to any other popular windows systems such as x11 which can do this trick.
PS: I'm afraid I can't help you with EGLFS since, my qt work is around the corner of X11 almost all time.
@tsaG There does not seem to be any easy way – unless you are using QML.
However, it is possible that your platform supports rotation. For example, Raspberry Pi (kind of) supports rotation as a boot-up option. The rotation occurs then at a lower level (i.e. below the OpenGL layer). Other embedded platforms may provide similar functionality.
I actually tested both methods (with RPi/EGLFS). Using the QML rotation is relatively fast, at least I did not notice any significant speed difference between landscape and portrait orientations. Using the platform-specific rotation was slow, but that result cannot be generalized to other platforms. (RPi seems to fall into some sort of soft OpenGL when rotated by 90° or 270°.)
Of course, X11 or Wayland may provide a solution here, but at the same time the performance will suffer both at startup and during run time. YMMV.
This is a strange omission indeed, as screen rotation is not uncommon with embedded devices.
I found a solution that works for single touch. First I rotate the display:
sudo nano /boot/config.txt
Add this at the end for a 90 degree rotation:
display_rotate=1
Restart unit.
Then I Just catch the touch event, transforms the coordinate and then send a mouse event instead.
Creating a new class:
myguiapplication.h
#ifndef MYGUIAPPLICATION_H #define MYGUIAPPLICATION_H #include <QGuiApplication> #include <QEvent> class MyGuiApplication : public QGuiApplication { Q_OBJECT public: MyGuiApplication(int &argc, char **argv); virtual bool notify(QObject*, QEvent*); }; #endif // MYGUIAPPLICATION_H
myguiapplication.c
#include "myguiapplication.h" #include <QTouchEvent> #include <QMouseEvent> #include <QDebug> MyGuiApplication::MyGuiApplication(int &argc, char **argv) : QGuiApplication(argc, argv) { } bool MyGuiApplication::notify(QObject* target, QEvent* event) { try { switch (event->type()) { case QEvent::TouchBegin: case QEvent::TouchUpdate: case QEvent::TouchEnd: { QTouchEvent* te = static_cast<QTouchEvent*>(event); if (te->device()->type() == QTouchDevice::TouchScreen) { QList<QTouchEvent::TouchPoint> tps = te->touchPoints(); if(tps.count() != 1) { qDebug() << "Touch points != 1"; return true; } qreal tx = tps.first().pos().x(); qreal ty = tps.first().pos().y(); qreal mx = 480.0/800 * ty; qreal my = 800 - 800.0/480 * tx; qDebug() << tx << "," << ty << " => " << mx << "," << my; QPointF mp(mx, my); switch (event->type()) { case QEvent::TouchBegin: { qDebug() << "BEGIN"; QMouseEvent ee(QEvent::MouseButtonPress, mp, Qt::LeftButton, Qt::LeftButton, Qt::NoModifier); QCoreApplication::sendEvent(target, &ee); break; } case QEvent::TouchUpdate: { qDebug() << "UPDATE"; QMouseEvent ee(QEvent::MouseMove, mp, Qt::LeftButton, Qt::LeftButton, Qt::NoModifier); QCoreApplication::sendEvent(target, &ee); break; } case QEvent::TouchEnd: { qDebug() << "END"; QMouseEvent ee(QEvent::MouseButtonRelease, mp, Qt::LeftButton, Qt::LeftButton, Qt::NoModifier); QCoreApplication::sendEvent(target, &ee); break; } default: qDebug() << "Unhandled touch event"; return true; } return true; } break; } case QEvent::MouseButtonPress: { QMouseEvent *k = (QMouseEvent *)event; qDebug() << "MouseButtonPress:" << k->pos(); break; } case QEvent::MouseMove: { QMouseEvent *k = (QMouseEvent *)event; qDebug() << "MouseMove:" << k->pos(); break; } case QEvent::MouseButtonRelease: { QMouseEvent *k = (QMouseEvent *)event; qDebug() << "MouseButtonRelease:" << k->pos(); break; } default: ; } } catch (...) { } return QGuiApplication::notify(target, event); }
And then using my new class instead of QGuiApplication:
#include "myguiapplication.h" #include <QQmlApplicationEngine> #include <QQmlContext> int main(int argc, char *argv[]) { MyGuiApplication app(argc, argv); //... }
Has anyone found a suitable solution for an embedded device on eglfs? I want to use a natively portrait display in landscape orientation, for displaying Qt5 Widget applications | https://forum.qt.io/topic/22852/qt-5-embedded-screen-rotation | CC-MAIN-2017-39 | refinedweb | 1,641 | 57.37 |
ImportError: matplotlib requires dateutil
I have successfully installed matplotlib with python 2.6 on x64 Windows7. When I try to import matplotlib, it shows the following error. I have also installed numpy following this link: Installing Numpy on 64bit Windows 7 with Python 2.7.3
import matplotlib.pyplot as plt Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> import matplotlib.pyplot as plt File "C:\Python26\Lib\site-packages\matplotlib\__init__.py", line 110, in <module> raise ImportError("matplotlib requires dateutil") ImportError: matplotlib requires dateutil
How can I make it work?
I installed matplotlib-1.3.0.win-amd64-py2.6.exe from
Here's a list of the programs you can install on windows:
And you'll need the following dependencies: Requires numpy, dateutil, pytz, pyparsing, six
★ Back to homepage or read more recommendations:★ Back to homepage or read more recommendations:
From: stackoverflow.com/q/18280436 | https://python-decompiler.com/article/2013-08/importerror-matplotlib-requires-dateutil | CC-MAIN-2019-26 | refinedweb | 152 | 58.79 |
#include <mod_sim_terrain_tile.hh>
List of all members.
/// /// 0 1 = 2 subdivisions per edge /// +--+--+ /// | /| /| /// |/ |/ | /// +--+--+ /// | /| /| /// |/ |/ | /// +--+--+ /// 0 1 2 = 3 vertexs per edge /// ///
Reset state to prepare for the building the next Tile. To be called at start of MakeTile().
Make nodes for local vertexs.
0 1 2 corner-relative tile coords -1 0 +1 center-relative tile coords +--+--+ subdivisionsEdge=2 | /| /| |/ |/ | +--+--+ | /| /| |/ |/ | +--+--+
Make nodes for polygon normals and vertex normals.
Two sets of polygon normals: ---------------------------- For a pair of triangles, mPolygonNormals0 and mPolygonNormals1 are two Array2D objects. They are catenated into one flat array. Therefore, the vixs of the second triangle begin at the middle of the flat array.
Make node for texture.
Make nodes for triangles.
TileGraph::mLocalVertexs2D is needed here, not for the values but rather, for their indexs, to build vixs.
Map texture coordinates of triangle to terrain colormap.
Compute polygon normal (normal vector on polygon). That is, compute polygon normals of the 2 triangles of a LandTile.
vix3 __ vix2 |\ | vix0 |_\| vix1
Compute an average normal vector from 8 surrounding triangles contained in 4 surrounding quads. _____ |\ |\ | |_\._\| |\ |\ | |_\|_\| | http://www.palomino3d.org/pal/doxygen_v2/classmod__sim_1_1TileFactory.html | crawl-003 | refinedweb | 182 | 68.06 |
Qt Location. QML
I have been trying to play with Qt Location, but i couln't make it work.
I get this:: "serialnmea: No known GPS device found. Specify the COM port via QT_NMEA_SERIAL_PORT.
qml:
Failed to create Geoclue client interface. Geoclue error: org.freedesktop.DBus.Error.Disconnected"
I have tried to run several devices as external gps for the pc, but no success. ¿ Any workaround? Help would be appreciated.
I have tried the MAP viewer(QML) example and also this basic block code to test it.
import QtQuick 2.7 import QtQuick.Window 2.2 import QtPositioning 5.6 import QtLocation 5.6 Item{ PositionSource{ active: true onPositionChanged: { console.log(position.coordinate); } } }
Hi,
Did you get your device to work outside of Qt ?
Does your system recognize them properly ?
Is necessary to use any real external device?, is not possible to simulate data?, or gps connection that works with QT?
IIRC there's the
simulatorplugin
Can you tell me a little more please?. where can i get this?
Sorry, my bad, Qt Simulator doesn't apply in your case. It's a bit old and was for the Nokia time.
Maybe the gypsy plugin could be an alternative.
- raven-worx Moderators
Is necessary to use any real external device?, is not possible to simulate data?, or gps connection that works with QT?
You can use NMEA files todo so. In QML you can use PositionSource's nmeaSource property
There a tools available to generate such .nmea files.
I haven't used it yet though. So i can't tell if it works out of the box.
There was a thread about simulated GPS data with QtQuick some time ago:.
- raven-worx Moderators
@Wieland
ah didn't know about the log file position source example yet.
Thats even simpler than messing around with the .nmea files.
I get this error:
"Failed to create Geoclue client interface. Geoclue error: org.freedesktop.DBus.Error.Disconnected"
Do i need another plugin? which one?, i'm using windows.. ..Thank you again
You would need DBus running on Windows.
I'd recommend following @Wieland and @raven-worx suggestions for simulating GPS data.
- ldanzinger
You should be able to directly plug in an nmea file to the source, and it'll work. If you don't have one, you can use this -
Also, ArcGIS Runtime SDK provides a position plugin for Windows, so you can get your current position on Windows, with no need for simulation -
- Mahmoud_batman | https://forum.qt.io/topic/70124/qt-location-qml | CC-MAIN-2018-30 | refinedweb | 411 | 70.6 |
Purpose: This demo introduces the basal ganglia model that the SPA exploits to do action selection.
Comments: This is just the basal ganglia, not hooked up to anything. It demonstrates that this model operates as expected, i.e. supressing the output corresponding to the input with the highest input value.
This is an extension to a spiking, dynamic model of the Redgrave et al. work. It is more fully described in several CNRG lab publications. It exploits the ‘nps’ class from Nengo.
Usage: After running the demo, play with the 5 input sliders. The highest slider should always be selected in the output. When they are close, interesting things happen. You may even be able to tell that things are selected more quickly for larger differences in input values.
Output: See the screen capture below.
import nef import nps D=5 net=nef.Network('Basal Ganglia') #Create the network object net.make_input('input',[0]*D) #Create a controllable input function #with a starting value of 0 for each of D #dimensions net.make('output',1,D,mode='direct') #Make a population with 100 neurons, 5 dimensions, and set #the simulation mode to direct nps.basalganglia.make_basal_ganglia(net,'input','output',D,same_neurons=False, neurons=50) #Make a basal ganglia model with 50 neurons per action net.add_to_nengo() | http://ctnsrv.uwaterloo.ca/docs/html/demos/basalganglia.html | CC-MAIN-2017-47 | refinedweb | 216 | 51.65 |
Register for demo | RBAC at scale, Oracle CDC Source Connector, and more within our Q2 Launch for Confluent Cloud
How do you process IoT data, change data capture (CDC) data, or streaming data from sensors, applications, and sources in real time? Apache Kafka® and Apache Spark®.
Below is a common architectural pattern used for streaming data:
This tutorial shows you how to implement the above architecture outlined in blue.
This step-by-step guide uses sample Python code in Azure Databricks to consume Apache Kafka topics that live in Confluent Cloud, leveraging a secured Confluent Schema Registry and AVRO data format, parsing the data, and storing it on Azure Data Lake Storage (ADLS) in Delta Lake.
Sign in to the Azure portal and search for Confluent Cloud.
If you already have a Confluent organization set up in Azure, you can use it, otherwise select Apache Kafka® on Confluent Cloud™ under the “Marketplace” section.
Choose your desired Subscription and Resource Group to host the Confluent organization, complete the mandatory fields, and then click Review + create. On the review page, click Create.
Wait for the deployment to complete and then click Go to resource. On the overview page, click the Confluent SSO link on the right.
Once you are redirected to Confluent Cloud, click the Create cluster button. Select the cluster type Basic and click Begin Configuration.
Select an Azure region, then click Continue.
Specify a cluster name and click Launch cluster.
Next, click the API access link on the left menu and click Create key.
Select Create an API key associated with your account and then select Next.
Copy the key and secret to a local file, and check I have saved my API key and secret and am ready to continue. You will need the key and secret later for the Datagen Source connector as well as in the Azure Databricks code.
Return to your cluster. Click the Topics link in the left menu and click Create topic.
Type “Clickstreams” as the topic name and select Create with defaults.
To enable Schema Registry, go back to your environment page. Click the Schemas tab and choose an Azure region. Select Azure, choose a region, and click Enable Schema Registry.
Click on the Settings tab. Open the Schema Registry API access section, and click on the Create key button (or Add Key button if you already have some keys created).
Copy the key and secret, and make sure to write them down. Add a description for the key. Check the I have saved my API keys checkbox. Click Continue.
Click on the Connectors link on the left menu. Select the DatagenSource connector (you can also use the search box). Fill in the details as follows:
clickstream_datagen
CLICKSTREAM
Hit Continue.
On the “Test and verify” page, you are presented with the JSON configuration of the connector.
Hit Launch. You can now inspect the messages flowing in. Click the Topics link in the left menu. Select the clickstreams topic. Click on the Messages tab. (As the Datagen connector is provisioned, it may take a few minutes before you start seeing messages.)
Now that your topic is receiving data, you can move to Azure Databricks to see how to leverage it!
If you do not already have an Azure Databricks environment, you will need to spin one up:
When the Azure Databricks instance finishes deploying, you can navigate to it in the Azure Portal and click Launch Workspace. Alternatively, if you already have the URL for an Azure Databricks workspace, you can go to the URL directly in your browser.
Once you’re logged in to the Azure Databricks workspace, you will need a running cluster. If you do not already have a cluster that you would like to use for this example, you can spin one up by following these steps:
Once the cluster is spun up, you will need to add an additional library to it. This example is for Python, but if you need this functionality in Scala, there is also an example Scala notebook that details which libraries are needed, you can find both in the downloadable notebooks section.
confluent-kafka[avro,json,protobuf]>=1.4.2
Once your cluster is spun up, you can create a new notebook and attach it to your cluster. There are multiple ways to do this:
Once you’ve clicked on either Create Notebook or Create > Notebook, the following screen appears:
Give your notebook a name, pick your default language (select Python to follow the example below), and then select the cluster that you just spun up. From there, click Create.
The example for this tutorial uses Python, but there is also a Scala notebook available that enables the same functionality. The example below uses the sample clickstream data provided from Confluent Cloud’s Datagen Source Connector.
Below are the steps needed to successfully read, parse, and store Avro data from Confluent Cloud.
Using the following information, connect to the topic that you created in Confluent Cloud from Azure Databricks:
To connect to the Schema Registry, which you will need in order to pull the schemas for parsing your Avro data, you need the following information:
Finally, to write the result to a Delta table, you need to specify the ADLS Gen2 path, a Data Bricks File System (DBFS) mount pointing to the ADLS Gen2 path, or a local DBFS location (such as
dbfs:/delta/mytable and
dbfs:/delta/checkpoints/mytable, which are not recommended for production jobs; however, they don’t require any additional authentication) where your Delta table and streaming checkpoint will be located.
Below is the set of variables we’ll be using (which you can substitute with your server, key, and path values):
confluentClusterName = "databricks_rocks" confluentBootstrapServers = "YOURBOOTSTRAPSERVERHERE" confluentTopicName = "clickstreams" schemaRegistryUrl = "YOURSCHEMAREGISTRYURLHERE" confluentApiKey = "APIKEYHERE" confluentSecret = "APISECRETHERE" confluentRegistryApiKey = "REGISTRYAPIKEYHERE" confluentRegistrySecret = "REGISTRYAPISECRETHERE" deltaTablePath = "dbfs:/delta/mytable" checkpointPath = "dbfs:/delta/checkpoints/mytable"
Note: While not required for this demo, in the sample notebooks, you will see that the variables for the API keys and secrets are set like this:
confluentApiKey = dbutils.secrets.get(scope = "confluentTest", key = "api-key") confluentSecret = dbutils.secrets.get(scope = "confluentTest", key = "secret") confluentRegistryApiKey = dbutils.secrets.get(scope = "confluentTest", key = "registry-api-key") confluentRegistrySecret = dbutils.secrets.get(scope = "confluentTest", key = "registry-secret")
The syntax uses a mechanism called Azure Databricks secrets, which allows you to create values outside of your code and store them in a manner in which they can be accessed and used, but not displayed. If you were to perform a
print(confluentAPiKey) after retrieving the value with the above syntax, the value will show as
[REDACTED]. This is the recommended method for any values that you don’t want in plain text in your code, such as the key and secret values above. Azure Databricks utilizes Azure Key Vault as the secret store for these values.
For instructions on how to set up an Azure Key Vault backed secret scope in Azure Databricks, please see the Microsoft docs on secret scopes. The page guides you through spinning up Azure Key Vault, adding keys to it, and then creating an Azure Databricks secret scope so that you can access those values in your code.
If you are choosing to write your data to an ADLS Gen2 path, you will need to pass in a storage key to the Spark configuration. If you’re just using a local DBFS path (like in this blog post) or a DBFS mount, then you can skip the following step:
adlsGen2Key = "YOURSTORAGEKEYHERE" spark.conf.set("fs.azure.account.key.achuadlsgen2test.dfs.core.windows.net", adlsGen2Key)
The example notebooks also use Azure Databricks secrets for the
adlsGen2Key.
Once you have the connection information that you need, the next step is to set up a Schema Registry client. Confluent Cloud requires the Schema Registry API key and secret to authenticate—note the use of some of the variables declared above:
from confluent_kafka.schema_registry import SchemaRegistryClient import ssl schema_registry_conf = { 'url': schemaRegistryUrl, 'basic.auth.user.info': '{}:{}'.format(confluentRegistryApiKey, confluentRegistrySecret)} schema_registry_client = SchemaRegistryClient(schema_registry_conf)
Now the Spark ReadStream from Kafka needs to be set up and the data manipulated. Both of these operations are combined into one statement. First, we’ll show the complete statement and then we’ll break it down.
import pyspark.sql.functions as fn from pyspark.sql.types import StringType binary_to_string = fn.udf(lambda x: str(int.from_bytes(x, byteorder='big')), StringType()) clickstreamTestDf = ( spark .readStream .format("kafka") ) .option("startingOffsets", "earliest") .option("failOnDataLoss", "false") .load() .withColumn('key', fn.col("key").cast(StringType())) .withColumn('fixedValue', fn.expr("substring(value, 6, length(value)-5)")) .withColumn('valueSchemaId', binary_to_string(fn.expr("substring(value, 2, 4)"))) .select('topic', 'partition', 'offset', 'timestamp', 'timestampType', 'key', 'valueSchemaId','fixedValue') )
You can manipulate the data using the imports and user-defined functions (UDF). The first part of the above ReadStream statement reads the data from our Kafka topic. First, we specify the format of the ReadStream as
“kafka”:
clickstreamTestDf = ( spark .readStream .format("kafka")
Next, the bootstrap servers, protocol, authentication configuration, and topic need to be specified:
)
The
“kafkashaded” at the front of the
kafka.sasl.jaas.config option is present so that the
PlainLoginModule can be used.
Next, specify the Kafka topic offset at which to start at. By default, a ReadStream from a Kafka topic will use
“latest” for all topic partitions. That means that it will only start pulling data from the time that the stream started reading and not pull anything older from the topic. In our example, we will tell it to use
“earliest”, meaning it will start reading data from the earliest available offset in the topic:
.option("startingOffsets", "earliest")
If data is actively being written to your topic, then you can experiment with both the
“latest” and
“earliest” settings. If, however, there is no new data coming into your topic, then you need to use
“earliest” if you want your ReadStream to pull any data.
You can also specify the
failOnDataLoss option—when set to
“true”, it will stop the stream if there is a break in the sequence of offsets because it assumes data was lost. When set to
“false”, it will ignore missing offsets (and depending on your use case, it can be valid for offsets to be missing):
.option("failOnDataLoss", "false")
Finally, specify load:
.load()
A Kafka message contains a key and a value. Data going through a Kafka topic in Confluent Cloud has five bytes added to the beginning of every Avro value. If you are using Avro format keys, then five bytes will be added to the beginning of those as well. For this example, we’re assuming string keys. These bytes consist of one magic byte and four bytes representing the schema ID of the schema in the registry that is needed to decode that data. The bytes need to be removed so that the schema ID can be determined and the Avro data can be parsed. To manipulate the data, we need a couple of imports:
import pyspark.sql.functions as fn from pyspark.sql.types import StringType
Next, use a UDF to help parse bytes into a string:
binary_to_string = fn.udf(lambda x: str(int.from_bytes(x, byteorder='big')), StringType())
Finally, new columns need to be generated as part of our Spark DataFrame.
The key needs to be cast to a string:
.withColumn('key', fn.col("key").cast(StringType()))
The first five bytes need to be removed from the value:
.withColumn('fixedValue', fn.expr("substring(value, 6, length(value)-5)"))
Bytes 2–5 of the value need to be converted from binary into a string to get the schema ID for each row:
.withColumn('valueSchemaId', binary_to_string(fn.expr("substring(value, 2, 4)")))
And finally, we only select the columns that we want from the dataset:
.select('topic', 'partition', 'offset', 'timestamp', 'timestampType', 'key', 'valueSchemaId','fixedValue') )
The
.select is the final part of the overall statement for reading from the Kafka topic and manipulating the data.
After executing the complete ReadStream statement into the
clickstreamTestDf variable, we can run the following command:
display(clickstreamTestDf)
Here are the first three rows of the results:
Make sure to click the Cancel icon that appears under the cell to stop the streaming display.
Now that the data is being read, it needs to be parsed and written out. For this example, we parse the data and then write it to a Delta table on ADLS Gen2.
Over time, data going through a Kafka topic can change. There are options in Confluent Cloud to restrict how a schema can change, but unless the topic is locked down so that no changes are allowed, the code will need to take changes into account. There could be rows that require parsing by different schemas within the same micro-batch in a Spark stream. Because this is the case, we can’t just pull a schema once from the registry and use it until the stream is restarted—rather, we may have to pull multiple schemas from the registry for each micro-batch.
The
foreachBatch() functionality in Spark Structured Streaming allows us to accomplish this task. With the
foreachBatch() functionality, code can be executed for each micro-batch in a stream and the result can be written out. A
writeStream is still being defined, so you get the advantage of streaming checkpoints.
foreachBatch()function
We’ll start with the function that will be executed for each micro-batch. Again, we’ll show you the complete statement and then break it down:
import pyspark.sql.functions as fn from pyspark.sql.avro.functions import from_avro def parseAvroDataWithSchemaId(df, ephoch_id): cachedDf = df.cache() fromAvroOptions = {"mode":"FAILFAST"} def getSchema(id): return str(schema_registry_client.get_schema(id).schema_str) distinctValueSchemaIdDF = cachedDf.select(fn.col('valueSchemaId').cast('integer')).distinct() for valueRow in distinctValueSchemaIdDF.collect(): currentValueSchemaId = sc.broadcast(valueRow.valueSchemaId) currentValueSchema = sc.broadcast(getSchema(currentValueSchemaId.value)) filterValueDF = cachedDf.filter(fn.col('valueSchemaId') == currentValueSchemaId.value) filterValueDF \ .select('topic', 'partition', 'offset', 'timestamp', 'timestampType', 'key', from_avro('fixedValue', currentValueSchema.value, fromAvroOptions).alias('parsedValue')) \ .write \ .format("delta") \ .mode("append") \ .option("mergeSchema", "true") \ .save(deltaTablePath)
Now for the imports:
import pyspark.sql.functions as fn from pyspark.sql.avro.functions import from_avro
The first import gives us access to the PySpark SQL col function, which we use to reference columns in a DataFrame. The second import is for the
from_avro function. The
from_avro function is what we use to parse the binary Avro data. We can’t use the version of
from_avro that takes a Schema Registry URL, because at this time, there’s no mechanism for passing authentication. Because Confluent Cloud requires authentication for the Schema Registry (which is a best practice), we use the version of
from_avro that takes an Avro schema directly.
A
foreachBatch() function will always have two inputs: a DataFrame containing all of the data in the micro-batch and an
ephoch_id representing the micro-batch number.
def parseAvroDataWithSchemaId(df, ephoch_id):
We’re going to reference the DataFrame multiple times in our code, so let’s cache it to avoid pulling it from the stream multiple times:
cachedDf = df.cache()
Next, let’s specify how we want the
from_avro function to behave when it cannot parse a row. There are two options:
FAILFAST and
PERMISSIVE.
FAILFAST will immediately fail, and processing will stop.
PERMISSIVE will return
NULL for the parsed value and continue. In our case, we’ve chosen to stop on failure:
fromAvroOptions = {"mode":"FAILFAST"}
Next, we define a function that queries the Schema Registry by ID and returns the schema:
def getSchema(id): return str(schema_registry_client.get_schema(id).schema_str)
We don’t want to query the Schema Registry with more than what is necessary, so let’s get the distinct set of schema IDs from the data:
distinctValueSchemaIdDF = cachedDf.select(fn.col('valueSchemaId').cast('integer')).distinct()
Then for each schema ID, we pull the schema from the registry and put it in a broadcast variable so that it is available to all of the workers:
for valueRow in distinctValueSchemaIdDF.collect(): currentValueSchemaId = sc.broadcast(valueRow.valueSchemaId) currentValueSchema = sc.broadcast(getSchema(currentValueSchemaId.value))
Next, we filter the DataFrame only to the rows with that schema ID. Remember that if the schema is changing rapidly, there could be rows with completely different schemas in the DataFrame, so we only want to parse the rows that need the schema that we just pulled from the registry:
filterValueDF = cachedDf.filter(fn.col('valueSchemaId') == currentValueSchemaId.value)
Finally, we parse those rows with the
from_avro function, passing the schema that we pulled from the registry, and write the parsed results out to Delta. Note that this is a batch write—when you’re operating within a
foreachBatch() function, everything you’re doing is batch based.
filterValueDF \ .select('topic', 'partition', 'offset', 'timestamp', 'timestampType', 'key', from_avro('fixedValue', currentValueSchema.value, fromAvroOptions).alias('parsedValue')) \ .write \ .format("delta") \ .mode("append") \ .option("mergeSchema", "true") \ .save(deltaTablePath)
The
mergeSchema option was set to
“true” in this case to allow the schema for the Delta table to change over time. If you want the current schema to be enforced and changes to be prevented, then either remove the option or set it explicitly to
“false”.
writeStream
After defining the
foreachBatch() function, the last task is to define the
writeStream. The
writeStream statement calls the
foreachBatch() function for each micro-batch, specifying the function name, the checkpoint, and a name for the stream:
clickstreamTestDf.writeStream \ .option("checkpointLocation", checkpointPath) \ .foreachBatch(parseAvroDataWithSchemaId) \ .queryName("clickStreamTestFromConfluent") \ .start()
If the checkpoint and Delta table don’t already exist, they will be created automatically. The checkpoint will be created first, followed by the Delta table when the first batch write is performed.
The following is what you see while the writeStream is running—micro-batches of data being processed:
Below is a sample of the final output from the Delta table. You can get this output by querying the destination Delta table. You can run this statement while the
writeStream is still running, and it will give you the latest consistent state of the Delta table:
deltaClickstreamTestDf = spark.read.format("delta").load(deltaTablePath) display(deltaClickstreamTestDf)
When you’re done with the demo, stop the stream by clicking Cancel under the
writeStream cell. You can then navigate to the “Clusters” page and stop the cluster.
If you’d like, you can download the example notebooks:
To avoid incurring unwanted charges, after you’re done with the demo, make sure you delete the resources created as part of this tutorial.
Please note that any work that you’ve done will be lost when you tear down your Databricks workspace. If you’d like to keep your notebook, you can export it with the following steps:
To tear down a Databricks workspace, open the Azure Portal and navigate to the Resource Group that your Azure Databricks instance is located in.
Click on Cluster settings in the left menu. Scroll to the bottom and click on the Delete cluster link.
In the “Confirm deletion” modal, confirm the cluster name and click Continue.
This blog post has guided you through first steps in using Databricks and Confluent Cloud together on Azure. Now you are ready to build your own data pipelines and get the value out of your data leveraging whatever service best suits the specific task at hand. With Confluent Cloud, Databricks, and all the Azure services at your disposal, the possibilities are wide open.
Learn more on the Streaming Audio podcast and try Confluent for free on Azure Marketplace. When you sign up, you receive $400 to spend within Confluent Cloud during your first 60 days, and you can use the promo code
CL60BLOG for an additional $60 of free Confluent Cloud usage.*
Angela Chu is a solution architect at Databricks, responsible for enabling customers to solve the world’s toughest data problems. She has been designing solutions that turn large volumes of data into information for more than 20 years and has experience in everything data related from ingestion to presentation. She enjoys traveling with her family and showing her kids the amazing world that we live in!
Gianluca is partner solution engineer at Confluent, responsible for technical enablement of partners in EMEA. With over 10 years of experience covering different roles (solution engineer, professional services consultant & trainer, and developer) in different countries (Italy, Ireland, and Germany), he has experience across event streaming, big data, business intelligence, and data integration. In his leisure time, he is studying toward his economics degree, reads about tech, plays guitar and enjoys discovering the world again through his daughter’s eyes.
Caio Moreno is a senior cloud solution architect at Microsoft, responsible for helping Microsoft empower every person and organisation on the planet to achieve more using Data and AI. He has experience in artificial intelligence, machine learning, big data, IoT, distributed systems, analytics, streaming, business intelligence, data integration and visualization. He is also a Ph.D. student at Complutense University of Madrid. He enjoys traveling and all kinds of sports. He lives in London with his wife and 3 daughters. | https://www.confluent.io/es-es/blog/consume-avro-data-from-kafka-topics-and-secured-schema-registry-with-databricks-confluent-cloud-on-azure/ | CC-MAIN-2022-21 | refinedweb | 3,484 | 53.81 |
Hello, I am new to R package development. I am working on a package that has in its src folder one (prime) cpp file and some helper cpp (X.Cpp, Y.Cpp) and one c file (Z.C) and their header files (X.h, Y.h and Z.h)
Advertising
I am getting the following error when I do 'Build & Reload' in Rstudio. " Error in dyn.load(dllfile) : unable to load shared object '/Users/abcd/BART/bart_pkg1/src/bartpkg.so': dlopen(/Users/abcd/BART/bart_pkg1/src/bartpkg.so, 6): Symbol not found: __ZN3RNG4nfixElm Referenced from: /Users/abcd/BART/bart_pkg1/src/bartpkg.so Expected in: flat namespace in /Users/abcd/BART/bart_pkg1/src/bartpkg.so Calls: suppressPackageStartupMessages ... <Anonymous> -> load_all -> load_dll -> library.dynam2 -> dyn.load Execution halted Exited with status 1. " I have followed the basic guidelines to build the package. The .R file has directive #' @useDynLib bartpkg in the right place. Also, the prime cpp file has the following tags in the right place. 1. #include <Rcpp.h> using namespace Rcpp; 2. //' @param x A single integer. //' @export // [[Rcpp::export]] And my NAMESPACE file shows 'useDynLib(bartpkg)' correctly. I am able to see the 'bartpkg.so' shared object file in the src directory. I tried in the terminal this command "c++filt -n _ZN3RNG4nfixElm" and was able to see that the symbol in the error ' Symbol not found: __ZN3RNG4nfixElm' is coming from the .C file RNG.C and is because of a function 'nfix'. But even if I remove the function 'nfix' or remove the RNG.C file altogether, the same error ' Symbol not found: __ZN3RNG4nfixElm' comes. can it be a flag issue that my compiler is not able to compile the 'C' file? I am able to see that all the cpp files generate respective object files, but I dont see anything like that for the C file. I am using RStudio is the session info is >] bartpkg_0.1.0 packrat_0.4.8-1 Rcpp_0.12.8 msm_1.6.4 LaplacesDemon_16.0.1 loaded via a namespace (and not attached): [1] roxygen2_5.0.1 lattice_0.20-34 mvtnorm_1.0-5 digest_0.6.10 grid_3.3.1 magrittr_1.5 [7] stringi_1.1.2 Matrix_1.2-7.1 splines_3.3.1 tools_3.3.1 stringr_1.1.0 survival_2.39-5 [13] parallel_3.3.1 rsconnect_0.5 inline_0.3.14 expm_0.999-0 I am stuck at this problem for weeks now. Any help would be highly appreciated. Thank you. -Aarti [[alternative HTML version deleted]] ______________________________________________ R-package-devel@r-project.org mailing list | https://www.mail-archive.com/r-package-devel@r-project.org/msg01173.html | CC-MAIN-2016-50 | refinedweb | 421 | 62.14 |
Style and Behavior of Links
Help links must be associated with the cascading style sheet file, HxLink.css, which is embedded in the HxLink.htc file that is installed with the Help run-time components. By linking to HxLink.css, you guarantee that your Help links will function properly. To associate HxLink.css with a topic, add the following markup to the <head> tag of the source file for the topic:
This markup creates a link to HxLink.htc.
You can use your own style sheet instead of HxLink.css. To do so, add a link to your own style sheet after the link to HxLink.css, as in the following example:
You can also add a reference to an alternate .htc file that implements equivalent behavior. For more information about .htc files, see the HTML Components (HTC) Reference on MSDN.
You can use standard style mechanisms to redefine the look and feel of links. This example demonstrates how to apply alternate styles to a link.
HxLink.css and HxLink.htc are part of a compiled Help (.hxs) file that is installed and registered on your computer in one of two ways:
When you install Microsoft Help Workshop.
The first time you run HxComp.exe.
This .hxs file uses the namespace ms-help://hx.
If this .hxs file is installed, you can view HxLink.css, HxLink.htc, and other files by clicking the the following links:
You can also save these files elsewhere on your computer by right-clicking one of the links and then clicking Save Target As. | http://msdn.microsoft.com/en-US/library/bb164680(v=vs.80).aspx | CC-MAIN-2014-35 | refinedweb | 259 | 77.23 |
A short example of how to utilize various open source library functions that can be used to identify and analyse strongly connected components for a given input image.
In the example I have given here, the image represents microarray sample spots printed to a slide using a Xaar inket printer. Using our robotic equipment, a camera is mounted to the printhead, so that images are taken of the spots, as they are being printed on-the-fly, usually in linear groups of 12 or 32 at a time:
As part of an investigation into how we may improve our quality control (QC) processes, one task (of many) will be to analyse such an input image, checking the spot images for things like misalignment in the x,y axes, spot shape (circularity), missing spots or tiny spots (satellites). It is anticipated that this would greatly speed up QC, which at present rely on manual validation.
In reality, the initial spot images will be subject to a degree of perspective distortion, given that the camera is mounted at an angle of approximately 30 degrees to the perpendicular of the slides. Some corrective matrix transformations would be needed to make the image rectangular, as opposed to quadrilateral. Another time.
For blob extraction, I have used the CvBlobsLib, a library to perform connected component labelling on binary images, available at the OpenCV:
C++ example code is here:
#include "stdafx.h" #include "BlobResult.h" #include <cv.h> #include <cxcore.h> #include <highgui.h> const std::string filepath = "spots.bmp"; int _tmain(int argc, _TCHAR* argv[]) { CBlobResult blobs; CBlob *currentBlob; // Load grayscale version of coloured input image IplImage* original = cvLoadImage( filepath.c_str(), CV_LOAD_IMAGE_GRAYSCALE ); // Make sure image file is available assert ( original ); // Obtain binary (black and white) version of input image IplImage* img_bw = cvCreateImage( cvGetSize( original ), IPL_DEPTH_8U, 1 ); // Threshold to convert image into binary (B&W) cvThreshold( original, // source image img_bw, // destination image 100, // threhold val. 255, // max. val CV_THRESH_BINARY ); // binary type ); // Find the white blobs in the B&W image blobs = CBlobResult( img_bw, NULL, 0 ); // Exclude all white blobs smaller than the given value (80) // The bigger the last parameter, the bigger the blobs need // to be for inclusion blobs.Filter( blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 80 ); // Get the number of blobs discovered int num_blobs = blobs.GetNumBlobs(); // Display the filtered blobs IplImage* filtered = cvCreateImage( cvGetSize( img_bw ), IPL_DEPTH_8U, 3 ); cvMerge( img_bw, img_bw, img_bw, NULL, filtered ); for ( int i = 0; i < num_blobs; i++ ) { currentBlob = blobs.GetBlob( i ); currentBlob->FillBlob( filtered, CV_RGB(255,0,0)); } // Display the input / output windows and images cvNamedWindow( "input" ); cvNamedWindow( "output" ); cvShowImage("input", img_bw ); cvShowImage("output", filtered); // Wait for user key press and then tidy up cvWaitKey(0); cvReleaseImage( &original ); cvReleaseImage( &filtered ); cvDestroyWindow( "input" ); cvDestroyWindow( "output" ); return 0; }
Sample image “spots.jpg” available here.
To make this actually build and work you first need to ensure the following things are in place:
Install OpenCV
Make sure the OpenCV libraries have been properly installed in your Visual Studio environment. Make sure this works all right before proceeding further.
Download and build the cvBlobsLib library
Download and extract the cvBlobsLib library placing the extracted folder in a suitable location.
After downloading and extracting this cvBlobsLib Visual Studio project, you then build it, so that the necessary
cvblobslib.lib file gets created, either within a Debug or Release folder. Any new project you are working on that uses the cvBlobsLib library will need the
cvblobslib.lib file in order to correctly work.
On building this for the first time, you will probably encounter compiler errors like these:
c:\dump\cvblobslib_opencv_v8_3\blobcontour.h(6): fatal error C1083: Cannot open include file: 'cv.h': No such file or directory
1> BlobResult.cpp
1>c:\dump\cvblobslib_opencv_v8_3\blobresult.h(24): fatal error C1083: Cannot open include file: 'cxcore.h': No such file or directory
1> BlobOperators.cpp
Notice that this project comes with the original project settings which would need to be changed:
As with any other project that uses OpenCV, the
cvblobslib VC++ project will also need to be set up so that OpenCV is correctly installed, and it knows where to find the library files, additional includes etc.
See the same OpenCV posting for details, which includes a section for Visual Studio 2010 considerations.
If you still have trouble building and getting the necessary cvblobslib.lib file, here’s one I made earlier.
Set the Visual Studio Project Properties
1. If you’re not using a Windows console application as in my example but are using an empty project instead, then omit the “
#include stdafx.h” bit. “stdafx” is created with either a new Win32 Console Project or a Console Application (.NET) application. (Thanks hiperchelo)
2. Make sure the
cvblobslib.lib file you created earlier is copied into the project folder of the application you are working on.
3. In Project -> Properties -> C/C++ -> Additional Include Directories, add the location of the folder where the cvBlobsLib library was installed. If this isn’t done, the compiler will complain it can’t find the “BlobResult.h” header file.
4. In Project -> Properties -> Linker -> Input, add the
cvblobslib.lib entry, in addition to any existing OpenCV library files.
If you’re using OpenCV2.1 for example, you will also need to include the {cv210, cvaux210, highgui210, cxcore210, etc}.lib files. For OpenCV1.x versions, these will be {cv, cvaux, highgui, cxcore, etc}.lib files.
5. In Project -> Properties -> C/C++ -> Pre-Compiled Headers, select Not use precompiled headers.
6. In Project -> Properties -> C/C++ -> Code generation -> Run-time library, select Debug Multithreaded DLL (for debug version) or Multithreaded DLL (for release version).
7. In Project -> Properties -> General -> Use of MFC, select Use MFC in a shared DLL.
Build the Project
You may get a linker error during building your project that uses cvBlobsLib, similar to this garbage:
error LNK2019: unresolved external symbol "public: virtual __thiscall CBlobResult::~CBlobResult(void)" (??1CBlobResult@@UAE@XZ) referenced in function "public: struct _IplImage * __thiscall OpenCV_Handler::FindConnectedComponents(struct _IplImage *,class ConnComponentSet *,int const &,int const &)" (?FindConnectedComponents@OpenCV_Handler@@QAEPAU_IplImage@@PAU2@PAVConnComponentSet@@ABH2@Z)
If this is the case, then Visual Studio does not yet know about the
cvblobslib.lib library file, against which it should link. In Project Properties -> Linker -> Input -> Additional Dependencies check that this file has been included:
During compilation you might get the following error message:
fatal error C1083: Cannot open include file: 'BlobResult.h': No such file or directory
If this is the case then make sure you have specified the necessary include file for using the cvBlobsLib libraries. In the Project Properties -> C/C++ -> General -> Additional Include Directories, ensure that the path to the includes has been added:
Other Issues: access violation errors when running under Release Mode
I noticed that using CBlobResult objects in Visual Studio’s Release Mode can cause access violation errors similar to the one shown:
This can be corrected by making sure that the proper version of the
cvblobslib.lib file is being used – it needs to be the one built under Release mode, not Debug Mode:
Open the CvBlobsLib Visual Studio project and do a clean and rebuild under Release Mode. Grab hold of the newly built
cvblobslib.lib file contained in the Release folder and copy it into your project that is using the cvBlobsLib library. In other words, replace the old (probably Debug Mode)
cvblobslib.lib file you have been using with the new Release Mode one.
Update: 22 July 2011
The library does the job for real-world instances too. See this posting for tips on how to integrate the OpenCV/cvBlobsLib with the FlyCapture camera, by Point Gray Research. The input image used was a sample subset of microarray spots printed using a Xaar inkjet printer approximately 150 microns in diameter, printed to a 75.0 x 25.0 mm glass slide, with black background, camera approximately 10 degrees to the perpendicular: | http://www.technical-recipes.com/2011/object-detection-using-the-opencv-cvblobslib-libraries/?replytocom=1950 | CC-MAIN-2017-39 | refinedweb | 1,308 | 53.51 |
PIPE(2) Linux Programmer's Manual PIPE(2)
pipe, pipe2 - create pipe
#include <unistd.h> int pipe(int pipefd[2]); #define _GNU_SOURCE /* See feature_test_macros(7) */ #include <fcntl.h> /* Obtain O_* constant definitions */ . Since Linux 4.5, it is possible to change the O_DIRECT setting of a pipe file descriptor using fcntl(2). O_NONBLOCK Set the O_NONBLOCK file status flag on the two new open file descriptions. Using this flag saves extra calls to fcntl(2) to achieve the same result.).
pipe2() was added to Linux in version 2.6.27; glibc support is available starting with version 2.9.
pipe(): POSIX.1-2001, POSIX.1-2008. pipe2() is Linux-specific.), splice(2), tee(2), vmsplice(2), write(2), popen(3), pipe(7)
This page is part of release 4.16 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2017-11-26 PIPE(2)
Pages that refer to this page: eventfd(2), fork(2), getrlimit(2), socketpair(2), statfs(2), syscalls(2), pmda(3), pmdaconnect(3), __pmprocesspipe(3), popen(3), capabilities(7), fifo(7), inode(7), man-pages(7), pipe(7), signal-safety(7) | http://www.man7.org/linux/man-pages/man2/pipe2.2.html | CC-MAIN-2018-51 | refinedweb | 202 | 68.77 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Image Picker Controllers6:24 with Pasan Premaratne
When you want to use a photo in an app, by selecting one in the library or by taking a new photo, this functionality is handled by a single class - UIImagePickerController. In this video, let's allow users to either select an image from the library or take a photo with just a few lines of code.
- 0:00
When you want to use a photo in an app, there are a couple of ways to do it.
- 0:04
Think of the experience on an app like Instagram.
- 0:06
You can either use an image you've taken before from your photo library, or
- 0:10
you can launch the camera.
- 0:12
On iOS,
- 0:13
both of these flows can be handled by a single class UI image picker controller.
- 0:18
An image picker controller offers an interface through which a user
- 0:23
can take pictures or movies from inside an app, select images or
- 0:27
movies that they've taken before, so both flows.
- 0:30
The class can be configured to only include a limited subset of the features
- 0:35
depending on what you want your users to do.
- 0:38
So let's add a new file to the main ImageViewers group.
- 0:43
This will be a swift file and
- 0:48
we'll name this PhotoPickerManager.
- 0:55
UI ImagePickerController is a ViewController subclass and
- 0:59
has a defined interface for taking or picking images.
- 1:03
So we don't need to worry about actually defining this interface, none of that.
- 1:06
It communicates via the delegate pattern.
- 1:09
So we're going to house all this logic in a separate custom object
- 1:13
rather than putting it directly in the ViewController.
- 1:16
So we'll say class PhotoPickerManager.
- 1:20
It's gonna be an NSObject subclass.
- 1:25
And in this class, we'll create a private ImagePickerController.
- 1:29
While this instance is going to do all the work for us, we don't need to modify it or
- 1:32
access it anyway from outside the class, so we can make it private.
- 1:35
Let imagePickerController, and
- 1:40
this is an instance of UIImagePickerController.
- 1:46
Now we know that that did not complete because I need to go ahead and
- 1:50
import UIKit.
- 1:51
Since the picker controller is a ViewController subclass,
- 1:55
to display it on screen, we simply ask the parent view controller,
- 1:59
which at this point, is the PhotoListController over here,
- 2:03
we ask this instance to present it modally by calling presentAnimated.
- 2:09
Now the way we've defined it,
- 2:10
with the picker being a private property, we can't provide it
- 2:14
as an argument to the present method from inside the PhotoListController class.
- 2:19
So instead we're going to employ a pretty useful pattern.
- 2:22
Let's add another stored property to the class.
- 2:24
So we'll say private let presentingController,
- 2:28
this is going to be a UIViewController.
- 2:32
Instead of asking the PhotoListController to present the imagePickerController
- 2:37
directly, we can pass in the PhotoListController instance
- 2:40
as a dependency to the PhotoPickerManager, and
- 2:44
from inside this class, we can handle presentation.
- 2:47
This way we can still present the picker from the ViewController that we want, and
- 2:51
really in this way, you can use any ViewController.
- 2:53
But again, we don't have to expose the imagePickerController in any way,
- 2:57
none of the details have to be exposed outside of this object.
- 3:01
So we'll say init with presentingViewController of type
- 3:06
UIViewcontroller and self.presentingController =
- 3:11
presentingViewController, and super.init, oops.
- 3:19
Now we can define a simple public method that allows us to present
- 3:22
the image picker.
- 3:23
So we'll say func presentPhotoPicker animated and
- 3:29
this is going to be a Boolean parameter.
- 3:34
And the reason we're defining that is so
- 3:36
that we can pass it to the present method that we call on the presentingController.
- 3:42
So here we'll say presentingController.present view
- 3:45
controller to present, and that's the imagePickerController animated,
- 3:50
so we'll pass this through and for the completion we'll say nil.
- 3:56
Let's see if this works.
- 3:57
So navigate to the PhotoListController class, and let's add a lazy
- 4:03
stored property to maintain a reference to an instance of the photoPickerManager.
- 4:08
Lazy var photoPickerManager, this is of type photoPickerManager and
- 4:13
we're going to assign a closure to it that we'll call immediately.
- 4:19
So here inside we'll say, let manager = PhotoPickerManager,
- 4:25
and we'll initialize it with a presentingViewController,
- 4:30
and here we'll say self, since this is the class that is going to present it.
- 4:35
And then we'll return the manager.
- 4:38
Now let's go back to Main.storyboard and
- 4:42
let's wire up this button to actually do something.
- 4:47
So that should when we pull up the assistant editor,
- 4:49
that should bring up the PhotoListController on the side.
- 4:52
So at the bottom here after viewDidLoad, we'll wire this up control drag
- 4:58
over to create an action, not an outlet, and we'll name this launchCamera.
- 5:05
Okay, so when the user taps this button now,
- 5:08
we want to launch the imagePickerController.
- 5:10
So in here let's call the method we just added,
- 5:12
photoPickerManager.present, hm, why is this not auto completing?
- 5:20
Let's see, let's go back to the standard editor.
- 5:23
I'm going to use Open Quickly which is command shift zero,
- 5:26
type photolist controller to go there, okay.
- 5:32
So photoPickerManager, let's jump to the definition
- 5:39
and here we have that method defined.
- 5:40
Let's build this, make sure it's not, okay, and
- 5:45
now let's go back here and start typing so present, there we go.
- 5:48
PresentPhotoPicker animated true, all right and let's run this.
- 5:54
So if you run the app, and now if we tap on the camera button,
- 5:58
the image picker should pop up.
- 6:02
There we go. Now here we see that there's a fully
- 6:04
defined interface at the top including a navigation controller, a cancel button,
- 6:09
and if you tap around, even a master detail interface.
- 6:12
But wait a minute, we want the camera, right, not whatever this is.
- 6:17
So in the next video, let's take a look at how we can customize the interface
- 6:21
to select particular types of media. | https://teamtreehouse.com/library/image-picker-controllers | CC-MAIN-2018-47 | refinedweb | 1,217 | 70.02 |
users can configure the frequencies of AHB bus, high-speed APB2 bus and low-speed APB1 bus through multiple prescalers. The maximum frequency of the AHB and APB2 domains is 72 MHz. The maximum allowable frequency of APB1 domain is 36 MHz. The clock frequency of SDIO interface is fixed as HCLK/2.
The 40 kHz LSI is used by the independent watchdog IWDG. In addition, it can also be selected as the clock source of the Real-Time Clock RTC. In addition, the clock source of Real-Time Clock RTC can also select LSE or 128 frequency division of HSE. The clock source of RTC is selected through RTCSEL[1:0].
STM32 has a full speed USB module, and its serial interface engine needs a clock source with a frequency of 48MHz. The clock source can only be obtained from the PLL output, and can be selected as 1.5 frequency division or 1 frequency division. That is, when the USB module needs to be used, the PLL must be enabled, and the clock frequency is configured as 48MHz or 72MHz.
In addition, STM32 can also select a PLL output frequency division 2, HSI, HSE, or system clock SYSCLK to output to MCO pin (PA8). The system clock SYSCLK is the clock source for most parts of STM32. It can be PLL output, HSI or HSE (PLL frequency doubling to 72Mhz is used in general procedures). Before selecting the clock source, pay attention to judge whether the target clock source has oscillated stably. Max=72MHz, which is divided into two channels. One channel is sent to i2s2clk and i2s3clk used by I2S2 and I2S3; Another channel is divided by AHB frequency divider (1 / 2 / 4 / 8 / 16 / 64 / 128 / 256 / 512) and sent to the following 8 modules for use:
Send SDIOCLK clock used by SDIO.
Send FSMCCLK clock used by FSMC.
HCLK clock for AHB bus, kernel, memory and DMA.
The system timer clock (SysTick) sent to Cortex after 8 frequency division.
The idle running clock FCLK directly sent to Cortex.
To APB1 divider. APB1 frequency divider can select 1, 2, 4, 8 and 16 frequency divisions. One of its outputs is used by APB1 peripherals (PCLK1, maximum frequency 36MHz), and the other is sent to timer (Timer2-7)2, 3 and 4 frequency multipliers. The frequency multiplier can select 1 or 2 frequency multipliers, and the clock output is used by timers 2, 3, 4, 5, 6 and 7.
Send it to APB2 frequency divider. APB2 frequency divider can select 1, 2, 4, 8 and 16 frequency divisions. One of its outputs is for APB2 peripherals (PCLK2, maximum frequency 72MHz), and the other is for timers (Timer1 and Timer8)1 and 2 frequency multipliers. The frequency multiplier can select 1 or 2 frequency multipliers, and the clock output is used by timer 1 and timer 8. In addition, APB2 frequency divider has another output for ADC frequency divider. After frequency division, ADCCLK clock is obtained and sent to ADC module for use. ADC frequency divider can be selected as 2, 4, 6 and 8 frequency division.
2 after frequency division, it is sent to SDIO AHB interface for use (HCLK/2).
Detailed reference:
2. External crystal oscillator as clock source
Next, solve how to configure the 12M external crystal oscillator as the system clock source.
The first step is to modify the HSE in stm32f10x.h_ Value is 12000000
/** * @brief In the following line adjust the value of External High Speed oscillator (HSE) used in your application Tip: To avoid modifying this file each time you need to use different HSE, you can define the HSE value in your toolchain compiler preprocessor. */ #if !defined HSE_VALUE #ifdef STM32F10X_CL #define HSE_VALUE ((uint32_t)25000000) /*!< Value of the External oscillator in Hz */ #else #define HSE_VALUE ((uint32_t)12000000) /*!< Value of the External oscillator in Hz */ #endif /* STM32F10X_CL */ #endif /* HSE_VALUE */
Step 2: modify the system_ For the clock configuration in stm32f10x. C, first find void SystemInit(void) - "SetSysClock()" SetSysClockTo72(), and change the 9-octave frequency to 6-octave frequency, 12*6=72MHz
/** * @brief Sets System clock frequency to 72MHz and configure HCLK, PCLK2 * and PCLK1 prescalers. * @note This function should be used only after reset. * @param None * @retval None */ static void SetSysClockTo72; } else { HSEStatus = (uint32_t)0x00; } if (HSEStatus == (uint32_t)0x01) { /* Enable Prefetch Buffer */ FLASH->ACR |= FLASH_ACR_PRFTBE; /* Flash 2 wait state */ FLASH->ACR &= (uint32_t)((uint32_t)~FLASH_ACR_LATENCY); FLASH->ACR |= (uint32_t)FLASH_ACR_LATENCY_2; /* HCLK = SYSCLK */ RCC->CFGR |= (uint32_t)RCC_CFGR_HPRE_DIV1; /* PCLK2 = HCLK */ RCC->CFGR |= (uint32_t)RCC_CFGR_PPRE2_DIV1; /* PCLK1 = HCLK */ RCC->CFGR |= (uint32_t)RCC_CFGR_PPRE1_DIV2; #ifdef STM32F10X_CL // ... #else /* PLL configuration: PLLCLK = HSE * 9 = 72 MHz */ RCC->CFGR &= (uint32_t)((uint32_t)~(RCC_CFGR_PLLSRC | RCC_CFGR_PLLXTPRE | RCC_CFGR_PLLMULL)); RCC->CFGR |= (uint32_t)(RCC_CFGR_PLLSRC_HSE | RCC_CFGR_PLLMULL6); // 12 #endif /* STM32F10X_CL */ /* Enable PLL */ RCC->CR |= RCC_CR_PLLON; /* Wait till PLL is ready */ while((RCC->CR & RCC_CR_PLLRDY) == 0) { # summary Android Advanced architecture learning is a long and arduous road. It can't be learned by passion or by staying up for a few days and nights. We must develop the habit of studying hard at ordinary times.**So: insist!** The analysis of the company's real interview questions in 2020 shared above. The author also sorted out the main interview technical points of front-line Internet enterprises into videos and PDF(In fact, it takes a lot more energy than expected), including the context of knowledge + Many details. **[CodeChina Open source projects:< Android Summary of study notes+Mobile architecture video+Real interview questions for large factories+Project practice source code]( **Just write this first. The code word is not easy. It is very one-sided. Please point out the disadvantages. If you think it has reference value, you can also pay attention to me** > **①「Android Analysis of real interview questions」PDF Full HD version+②「Android Interview knowledge system」Learning mind map compressed package reading and downloading**,Finally, friends who feel helpful and in need can praise them > > ] > > [External chain picture transfer...(img-hd1AwWIZ-1631074696573)] > > [External chain picture transfer...(img-YW53KYJh-1631074696574)] | https://programmer.help/blogs/619da88f1ec78.html | CC-MAIN-2022-21 | refinedweb | 1,038 | 54.32 |
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- METHODS
- FUNCTION
- THINKING IN ITERATORS
- TUTORIAL
- EXPORTS
- DIAGNOSTICS
- REQUIREMENTS
- SEE ALSO
- THANKS
- AUTHOR / COPYRIGHT
NAME
Iterator - A general-purpose iterator class.
VERSION
This documentation describes version 0.03 of Iterator.pm, October 10, 2005..
DESCRIPTION. See "DIAGNOSTICS".
Note that in many, many cases, you will not need to explicitly create an iterator; there are plenty of iterator generation and manipulation functions in the other associated modules. You can just plug them together like building blocks.
METHODS
- new
$iter = Iterator->new( sub { code } );
Creates a new iterator object. The code block that you provide will be invoked by the "value" method. The code block should have some way of maintaining state, so that it knows how to return the next value of the sequence each time it is called.
If the code is called after it has generated the last value in its sequence, it should throw an exception:
Iterator::X::Am_Now_Exhausted->throw ();
This very commonly needs to be done, so there is a convenience function for it:
Iterator::is_done ();
- value
$next_value = $iter->value ();
Returns the next value in the iterator's sequence. If
valueis called on an exhausted iterator, an
Iterator::X::Exhaustedexception is thrown.
Note that these iterators can only return scalar values. If you need your iterator to return a list or hash, it will have to return an arrayref or hashref.
- is_exhausted
$bool = $iter->is_exhausted ();
Returns true if the iterator is exhausted. In this state, any call to the iterator's "value" method will throw an exception.
- isnt_exhausted
$bool = $iter->isnt_exhausted ();
Returns true if the iterator is not yet exhausted.
FUNCTION
- is_done
Iterator::is_done();
You call this function after your iterator code has generated its last value. See "TUTORIAL". This is simply a convenience wrapper for
Iterator::X::Am_Now_Exhausted->throw();
THINKING IN ITERATORS
Typically, when people approach a problem that involves manipulating a bunch of data, their first thought is to load it all into memory, into an array, and work with it in-place. If you're only dealing with one element at a time, this approach usually wastes memory needlessly.
For example, one might get a list of files to operate on, and loop over it:
my @files = fetch_file_list(....); foreach my $file (@files) ... If C<fetch_file_list> were modified to return an iterator instead of an array, the same code could look like this: my $file_iterator = fetch_file_list(...) while ($file_iterator->isnt_exhausted) ...
The advantage here is that the whole list does not take up memory while each individual element is being worked on. For a list of files, that's probably not a lot of overhead. For the contents of a file, on the other hand, it could be huge.
If a function requires a list of items as its input, the overhead is tripled:
sub myfunc { my @things = @_; ...
Now in addition to the array in the calling code, Perl must copy that array to
@_, and then copy it again to
@things. If you need to massage the input from somewhere, it gets even worse:
my @data = get_things_from_somewhere(); my @filtered_data = grep {code} @data; my @transformed_data = map {code} @filtered_data; myfunc (@transformed_data);
If
myfunc is rewritten to use an Iterator instead of an array, things become much simpler:
my $data = ilist (get_things_from_somewhere()); $filtered_data = igrep {code} $data; $transformed_data = imap {code} $filtered_data; myfunc ($transformed_data);
(This example assumes that the
get_things_from_somewhere function cannot be modified to return an Iterator. If it can, so much the better!) Now the original list is still in memory, inside the
$data Iterator, but everwhere else, there is only one data element in memory at a time.
Another advantage of Iterators is that they're homogeneous. This is useful for uncoupling library code from application code. Suppose you have a library function that grabs data from a filehandle:
sub my_lib_func { my $fh = shift; ...
If you need
my_lib_func to get its data from a different source, you must either modify it, or make a new copy of it that gets its input differently, or you must jump through hoops to make the new input stream look like a Perl filehandle.
On the other hand, if
my_lib_func accepts an iterator, then you can pass it data from a filehandle:
my $data = ifile "my_input.txt"; $result = my_lib_func($data);
Or a database handle:
my $data = imap {$_->{IMPORTANT_COLUMN}} idb_rows($dbh, 'select IMPORTANT_COLUMN from foo'); $result = my_lib_func($data);
If you later decide you need to transform the data, or process only every 10th data row, or whatever:
$result = my_lib_func(imap {magic($_)} $data); $result = my_lib_func(inth 10, $data);
The library function doesn't care. All it needs is an iterator.
Chapter 4 of Dominus's book (See "SEE ALSO") covers this topic in some detail.
Word of Warning
When you use an iterator in separate parts of your program, or as an argument to the various iterator functions, you do not get a copy of the iterator's stream of values.
In other words, if you grab a value from an iterator, then some other part of the program grabs a value from the same iterator, you will be getting different values.
This can be confusing if you're not expecting it. For example:
my $it_one = Iterator->new ({something}); my $it_two = some_iterator_transformation $it_one; my $value = $it_two->value(); my $whoops = $it_one->value;
Here,
some_iterator_transformation takes an iterator as an argument, and returns an iterator as a result. When a value is fetched from
$it_two, it internally grabs a value from
$it_one (and presumably transforms it somehow). If you then grab a value from
$it_one, you'll get its second value (or third, or whatever, depending on how many values
$it_two grabbed), not the first.
TUTORIAL
Let's create a date iterator. It'll take a DateTime object as a starting date, and return successive days -- that is, it'll add 1 day each iteration. It would be used as follows:
use DateTime; $iter = (...something...); $day1 = $iter->value; # Initial date $day2 = $iter->value; # One day later $day3 = $iter->value; # Two days later
The easiest way to create such an iterator is by using a closure. If you're not familiar with the concept, it's fairly simple: In Perl, the code within an anonymous block has access to all the lexical variables that were in scope at the time the block was created. After the program then leaves that lexical scope, those lexical variables remain accessible by that code block for as long as it exists.
This makes it very easy to create iterators that maintain their own state. Here we'll create a lexical scope by using a pair of braces:
my $iter; { my $dt = DateTime->now(); $iter = Iterator->new( sub { my $return_value = $dt->clone; $dt->add(days => 1); return $return_value; }); }
Because
$dt is lexically scoped to the outermost block, it is not addressable from any code elsewhere in the program. But the anonymous block within the "new" method's parentheses can see
$dt. So
$dt does not get garbage-collected as long as
$iter contains a reference to it.
The code within the anonymous block is simple. A copy of the current
$dt is made, one day is added to
$dt, and the copy is returned.
You'll probably want to encapsulate the above block in a subroutine, so that you could call it from anywhere in your program:
sub date_iterator { my $dt = DateTime->now(); return Iterator->new( sub { my $return_value = $dt->clone; $dt->add(days => 1); return $return_value; }); }
If you look at the source code in Iterator::Util, you'll see that just about all of the functions that create iterators look very similar to the above
date_iterator function.
Of course, you'd probably want to be able to pass arguments to
date_iterator, say a starting date, maybe an increment other than "1 day". But the basic idea is the same.
The above date iterator is an infinite (well, unbounded) iterator. Let's look at how to indicate that your iterator has reached the end of its sequence of values. Let's write a scaled-down version of irange from the Iterator::Util module -- one that takes a start value and an end value and always increments by 1.
sub irange_limited { my ($start, $end) = @_; return Iterator->new (sub { Iterator::is_done if $start > $end; return $start++; }); }
The iterator itself is very simple (this sort of thing gets to be easy once you get the hang of it). The new element here is the signalling that the sequence has ended, and the iterator's work is done. "is_done" is how your code indicates this to the Iterator object.
You may also want to throw an exception if the user specified bad input parameters. There are a couple ways you can do this.
... die "Too few parameters to irange_limited" if @_ < 2; die "Too many parameters to irange_limited" if @_ > 2; my ($start, $end) = @_; ...
This is the simplest way; you just use
die (or
croak). You may choose to throw an Iterator parameter error, though; this will make your function work more like one of Iterator.pm's built in functions:
... Iterator::X::Parameter_Error->throw( "Too few parameters to irange_limited") if @_ < 2; Iterator::X::Parameter_Error->throw( "Too many parameters to irange_limited") if @_ > 2; my ($start, $end) = @_; ...
EXPORTS
No symbols are exported to the caller's namespace.
DIAGNOSTICS
Iterator uses Exception::Class objects for throwing exceptions. If you're not familiar with Exception::Class, don't worry; these exception objects work just like
$@ does with
die and
croak, but they are easier to work with if you are trapping errors.
All exceptions thrown by Iterator have a base class of Iterator::X. You can trap errors with an eval block:
eval { $foo = $iterator->value(); };
and then check for errors as follows:
if (Iterator::X->caught()) {...
You can look for more specific errors by looking at a more specific class:
if (Iterator::X::Exhausted->caught()) {...
Some exceptions may provide further information, which may be useful for your exception handling:
if (my $ex = Iterator::X::User_Code_Error->caught()) { my $exception = $ex->eval_error(); ...
If you choose not to (or cannot) handle a particular type of exception (for example, there's not much to be done about a parameter error), you should rethrow the error:
if (my $ex = Iterator::X->caught()) { if ($ex->isa('Iterator::X::Something_Useful')) { ... } else { $ex->rethrow(); } }
Parameter Errors.
Exhausted Iterators
Class:
Iterator::X::Exhausted
You called "value" on an iterator that is exhausted; that is, there are no more values in the sequence to return.
As a string, this exception is "Iterator is exhausted."
End of Sequence
Class:
Iterator::X::Am_Now_Exhausted
This exception is not thrown directly by any Iterator.pm methods, but is to be thrown by iterator sequence generation code; that is, the code that you pass to the "new" constructor. Your code won't catch an
Am_Now_Exhaustedexception, because the Iterator object will catch it internally and set its "is_exhausted" flag.
The simplest way to throw this exception is to use the "is_done" function:
Iterator::is_done() if $something;
User Code Exceptions
diewas invoked.
As a string, this exception evaluates to the stringified
$@.
I/O Errors
$!.
Internal Errors
Class:
Iterator::X::Internal_Error
Something happened that I thought couldn't possibly happen. I would appreciate it if you could send me an email message detailing the circumstances of the error.
REQUIREMENTS
Requires the following additional module:
Exception::Class, v1.21 or later.
SEE ALSO
Higher Order Perl, Mark Jason Dominus, Morgan Kauffman 2005.
The Iterator::Util module, for general-purpose iterator functions.
The Iterator::IO module, for filesystem and stream iterators.
The Iterator::DBI module, for iterating over a DBI record set.
The Iterator::Misc module, for various oddball iterator functions.
THANKS
Much thanks to Will Coleda and Paul Lalli (and the RPI lily crowd in general) for suggestions for the pre-release version.. | https://metacpan.org/pod/release/ROODE/Iterator-0.03/Iterator.pm | CC-MAIN-2017-30 | refinedweb | 1,958 | 52.49 |
Gatsby
At the end of this short tutorial you will learn how to set up the localization process for Gatsby and the ttag library.
Step 1. InstallationStep 1. Installation
Follow these steps to setup gatsby and install ttag dependencies.
npm install --global gatsby-cli gatsby new ttag-gatsby cd ttag-gatsby npm i ttag npm i -D ttag-cli
Step 2. Create .po file for translationsStep 2. Create .po file for translations
At this step, we should create
.po file for the language that we want to translate to.
For this example, we will create
.po file with all appropriate settings for the Spanish language (
es code).
mkdir i18n # create a separate dir to keep translation files npx ttag init uk i18n/es.po
You can find the list of all available language codes here -
Step 3. Wrap strings with tagsStep 3. Wrap strings with tags
Let's edit
src/pages/index.js and wrap the "Hi people" string to practice translating a single string:
import { t } from 'ttag'; //... some jsx code <Layout> <h1>{ t`Hi people` }</h1> <p>Welcome to your new Gatsby site.</p> //... some jsx code </Layout>
Step 4. Update the translation file and add a translationStep 4. Update the translation file and add a translation
In this step, we will use
update command from
ttag-cli to extract translations from the sources.
This will also update references to the translated string and remove strings that aren't present in the source files.
npx ttag update i18n/es.po src/
After this, we should see that the new translation was added to the
i18n/es.po file:
#: src/pages/index.js:11 msgid "Hi people" msgstr ""
Let's add a translation:
#: src/pages/index.js:11 msgid "Hi people" msgstr "¡Hola Amigos!"
Step 5. Setup precompiled translationsStep 5. Setup precompiled translations
In this tutorial we'll only be showing how to setup precompiled translations, as the whole purpose of Gatsby as a static site generator is to generate your website ahead of time in order to ensure the fastest possible final result.
Add a custom Babel configAdd a custom Babel config
To setup Gatsby to work with ttag, you'll need to create a custom babel config in the root
directory of your project. In order to switch based on environment variables, we'll need
to use the
babel.config.js variety of babel config instead of the static
.babelrc.
You'll need to explicitly install and use the
babel-preset-gatsby babel plugin since
we're overriding the default babel config shipped with gatsby.
npm install --save babel-preset-gatsby
Here's a simple babel config that we'll use to precompile the right language at build time:
const { env: { LOCALE } } = process; module.exports = { "presets": [ [ "babel-preset-gatsby", ], ], "plugins": [ [ "babel-plugin-ttag", { "resolve": { "translations": LOCALE === "es" ? "i18n/es.po" : "default", }, }, ], ], }
The key portion here is the line where
babel-plugin-ttag's option object's
resolve.translations value is dynamically set based on the presence of an environment variable.
Build (or develop) by localeBuild (or develop) by locale
The dynamic babel config shown above allows you to pick a translation at build (or develop) time like so:
LOCALE=es npm run build
For convenience, you can make specialized build and develop scripts which take care of setting these environment variables for you:
{ "scripts": { "build:en": "LOCALE=en gatsby build", "develop:en": "LOCALE=en gatsby develop", "build:es": "LOCALE=es gatsby build", "develop:es": "LOCALE=es gatsby develop" } }
Default StarterDefault Starter
If you like, there is a default starter preconfigured with the above outlined steps:
gatsby new my-ttag-site | https://ttag.js.org/docs/gatsby.html | CC-MAIN-2021-25 | refinedweb | 603 | 55.34 |
Python is a fantastic language that continues to help so many businesses and individuals. It offers readable syntax to get started, yet extensive amounts of control and flexibility to move into the more advanced areas of software engineering. Python the number one choice for many because it is packed with the power of unparalleled libraries, it recommended to run them through a python virtual environment.
Conventionally, running a python script from the terminal is as simple as calling it and passing in the script needed to be executed.
python3 my_script.py
Note that we only discuss Python version 3 these days, as Python 2 had it’s “end of life” at the beginning of 2020; long overdue.
Let’s say that in
my_script.py I have the following code.
import pandas as pd def runme():] } df = pd.DataFrame(data) print(df) if __name__ == '__main__': runme()
This prints out a table of five columns, showing some facts about the locations.
If we try and run this as is, we will get the following error:
$ python3 my_script.py Traceback (most recent call last): File "my_script.py", line 1, in <module> import pandas as pd ModuleNotFoundError: No module named 'pandas'
So we will naturally run a
pip install pandas, or a
pip3 install pandas as we are calling the
python3 binary when we run our script.
What this does, is goes to PyPi (Python’s Package Index) and gets the relevant library, then installs it locally to where our Python executable is being run from.
While this will fix our problem, over time, it creates another problem. That is to say that we will end up with a global python directory, full of dependencies that we don’t particularly need for every project.
To fix this, we introduce
virtual environments.
What is a Python Virtual Environment?
A Python Virtual Environment is a directory locally configured to a python project that contains all the necessary things to run python, such as the python binaries, libraries and other tidbits.
To get a python virtual environment setup, you will first need to install the
virtualenv global package; which may or may not be available on your machine already.
The easiest way to get started is to run
pip install virtualenv, or
pip3 install virtualenv. You can read more about it here if required.
Now that you have
virtualenv available to your local machine, you can make use of it within your above application, simply set it up!
$ ls my_script.py
We can see that there is only the one file available in the working directory.
How to Setup a Python Virtual Environment
By running
virtualenv -p python3 venv, we tell Virtual Environment to install Python3 in the
venv local directory. You should see an output similar to the following:
$ virtualenv -p python3 venv Running virtualenv with interpreter /usr/local/bin/python3 Already using interpreter /usr/local/opt/python/bin/python3.7 Using base prefix '/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7' New python executable in /Users/ao/src/tmp/test2/venv/bin/python3.7 Also creating executable in /Users/ao/src/tmp/test2/venv/bin/python Installing setuptools, pip, wheel... done.
If we list all files in the directory now, we will see our additional Virtual Environment is available.
$ ls my_script.py venv
All it takes to use this environment, is to activate it. This can be done by typing
source venv/bin/activate, alternatively, you can also replace the
source keyword with a period
. instead; as following:
. venv/bin/activate.
~ source venv/bin/activate (venv) ~
We can now see the virtual environment’s name within our terminal window. At this stage, any python commands executed are from within our local virtual environment.
Installing python packages into the Virtual Environments
At this stage, we can now run
python my_script.py as we did before. Notice that we are now only running
python, as opposed to
python3 from before. This is because we told the virtual environment to install python as python3 (
virtualenv -p python3 venv).
$ python my_script.py Traceback (most recent call last): File "my_script.py", line 1, in <module> import pandas as pd ModuleNotFoundError: No module named 'pandas'
Unfortunately, we still get the same error, but that is easily fixed by running a
pip install pandas. Which will now install the package to our local virtual environment.
$ pip install pandas Collecting pandas Using cached pandas-0.25.3-cp37-cp37m-macosx_10_9_x86_64.whl (10.2 MB) Collecting numpy>=1.13.3 Downloading numpy-1.18.1-cp37-cp37m-macosx_10_9_x86_64.whl (15.1 MB) |████████████████████████████████| 15.1 MB 9.7 MB/s Collecting pytz>=2017.2 Using cached pytz-2019.3-py2.py3-none-any.whl (509 kB) Collecting python-dateutil>=2.6.1 Downloading python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB) |████████████████████████████████| 227 kB 14.5 MB/s Collecting six>=1.5 Downloading six-1.14.0-py2.py3-none-any.whl (10 kB) Installing collected packages: numpy, pytz, six, python-dateutil, pandas Successfully installed numpy-1.18.1 pandas-0.25.3 python-dateutil-2.8.1 pytz-2019.3 six-1.14.0
python my_script.py now runs our application successfully!
country capital area population0 Brazil Brasilia 8.516 200.40 1 Russia Moscow 17.100 143.50 2 India New Dehli 3.286 1252.00 3 China Beijing 9.597 1357.00 4 South Africa Pretoria 1.221 52.98
Exporting / Freezing Packages for Later
It is good to practise to export – or freeze as it’s called in the python world – any packages you may have used. This helps other developers get your application running with a few commands, as opposed to having to figure out what needed to be installed first.
Running a
pip freeze > requirements.txt will dump all currently used dependencies into a
requirements.txt file. This is the common convention typically followed.
Note that this will dump all the dependencies of the python virtual environment into this file, as we created a new virtual environment at the beginning of this tutorial, only packages used for this script will be exported, or frozen. If you had to do this from your globally installed python/pip, you may find many more unnecessary packages included; yet another reason to use virtual environments.
Taking a look at our requirements.txt file, we can now see the following:
$ cat requirements.txt numpy==1.18.1 pandas==0.25.3 python-dateutil==2.8.1 pytz==2019.3 six==1.14.0
Pingback: How to package a Python app using Nuitka - Learn Software Engineering @ao.gl
Pingback: When your Python code is much faster with PyPy ~ Andrew Odendaal | https://ao.gl/how-to-setup-and-use-the-python-virtual-environment/ | CC-MAIN-2020-10 | refinedweb | 1,112 | 58.08 |
Google.
The service is currently in preview phase being offered to a limited number of current Google service developers (sign-up link), each account receiving 100GB of storage and 300GB of bandwidth. Data is stored as objects organized in a flat hierarchy inside buckets. Buckets are also organized in a flat hierarchy inside an account, all buckets sharing one common namespace across GSD. Each account is allowed to create up to 1,000 buckets, and each object can be as large as 100GB, but those numbers are supposed to increase when the preview phase is over.
GSD provides read-after-write consistency which basically means that an object can be accessed - listed, downloaded, or deleted – right after uploading. Also, a deleted object is no longer accessible right after the erasing command. Also, listing commands are eventually consistent from everywhere on the Internet.
GSD supports access control lists (ACL) based sharing. There are various permissions – read, write, full control - assigned to users, and access granularity established at bucket or object level.
Storage management can be performed through the GS Manager, a browser based application offering support for the most actions: creating/deleting buckets, uploading/downloading/deleting objects, managing ACL lists. The application requests one of the following browser versions to run: Google Chrome 5.0, FireFox 3.6, Safari 4.0 or higher. Another tool is GSUtil, an open source command line tool used to perform the same tasks as GS Manager.
GSD is currently not integrated with Google Docs and Google Apps accounts do not work, only regular Google accounts, but that is going to change in the future.
Pricing is set at $0.17/GB/month, higher than that of the similar Amazon S3 pricing which is set at $0.15/GB/month for 11 nines durability and $0.1/GB/month with 99.99 durability. Uploading and accessing are the same at $0.1/GB and $0.01/1000 HTTP requests. Amazon has progressive discounts for storage in excess of 50 TB, 400TB, 500TB and so on. There is no SLA for GDS yet, but Google promises to provide one when the service will be open to all those interested.
Community comments | https://www.infoq.com/news/2010/06/Google-Storage-for-Developers/ | CC-MAIN-2019-39 | refinedweb | 364 | 57.67 |
How do I add textures to sprites?
I have a sprite called spriteName. Whenever I type something like.
self.spriteName.texture = ('plc:Gem_Blue')
or import my own image into my own folder called Textures and use
self.spriteName.texture = ('Textures/IMG_0469.JPG')
They both result in the error "TypeError: Expected a Texture Object"
I don't know what I'm doing wrong.
@heyguy4 Just use
Textureinstead of
scene.Texture. Most included examples import everything from the
scenemodule (using
from scene import *), but not the module itself. @JonB's code would be correct when you import just the module (
import scene).
Okay now there's no error, but I can't see the sprite when the code compiles.
Here's my setup method
def setup(self): self.background_color = 'green' self.spriteName = SpriteNode() self.spriteName.anchor_point = (5,5) self.spriteName.position = (50, 50) self.spriteName.texture = Texture('plc:Gem_Blue')
You haven't added the sprite to the scene. You have to add something like
self.add_child(self.spriteName). Alternatively, you could also pass
parent=selfto the
SpriteNodeinitializer.
That solves the issue with the blue gem not appearing, but if I use one of my own images that I imported into the folder
self.spriteName.texture = Texture('IMG_0469.JPG')
returns a "Image not found" error and
self.spriteName.texture = Texture(ui.Image.named('IMG_0469.JPG'))
returns a "Could not load image" error
Is the imported image in the same folder as your script?
Yes it is
No, that's not necessary, but the image shouldn't be in a sub-folder (or you'd need to include the sub-folder's name in the image name).
It's not in a sub folder. I have no idea why it's not working.
Try...
import os print(os.listdir(os.curdir)) # see if 'IMG_0469.JPG' is in that list
from scene import * class MyScene (Scene): def setup(self): self.background_color = 'green' self.spriteName = SpriteNode() self.spriteName.position = self.size / 2 self.spriteName.texture = Texture('plc:Gem_Blue') #self.spriteName.texture = Texture('canvas.png') #self.spriteName.texture = Texture('image000.jpg') self.add_child(self.spriteName) run(MyScene())
import os print(os.path.abspath(os.curdir)) print(os.listdir('.'))
at the start of your script. Then, copy and paste the name of the image that gets printed to the console. A common problem is that this is case sensitive, you must type the name exactly as it is printed.
I did what you said but I'm still getting the "Image Not Found" Error
What was printed when you added @JonB and/or my code?!?
Does adding the following code print True or False?!?
import os print('IMG_0469.JPG' in os.listdir(os.curdir)) | https://forum.omz-software.com/topic/3174/how-do-i-add-textures-to-sprites/12 | CC-MAIN-2021-49 | refinedweb | 446 | 62.04 |
The time difference you're looking at is the time it takes the JVM to throw an exception. Jess interprets an expression like this (?string codePointAt 0) as (assuming ?string is the symbol 'a')
Advertising
(call a codePointAt 0) That's actually ambiguous. We might be calling a static method codePointAt on a class named a, or we might want to call the method codePointAt on the String "a". Since it's a SYMBOL, Jess assumes the first case is more likely. The first step in trying the first alternative is to try to load the class 'a'. If we do it and fail, it costs us a ClassNotFoundException. Then we can try plan 'B', which is to call the member method on the String. But that's only what Jess does if ?string is a SYMBOL. If it's an RU.STRING -- i.e., if it appears in the source as a double-quoted String -- then Jess tries the other alternative first. In this case, that turns out to be the right alternative, and we don't need to pay the cost of throwing the exception. I tried changing all the bare 'a's to "a"s in your source -- the time discrepancy completely disappeared. ________________________________ From: owner-jess-us...@sandia.gov [mailto:owner-jess-us...@sandia.gov] On Behalf Of Nguyen, Son Sent: Friday, October 21, 2011 5:14 PM To: jess-users Subject: JESS: Performance, Java static method vs Java oject method calls. Hi, I observed a real dramatic difference in performace when using the following two ways to get the same result. (bind ?value (?stringObject codePointAt 0)) (bind ?value (Helper.stringCodePointAt ?stringObject 0)) I did some not so scientific measurements with the following clp: (deftemplate model1 (slot a)(slot b)) (deftemplate model2 (slot a)(slot b)) (import Helper) (deffunction test1 (?string) (bind ?start (System.nanoTime)) (bind ?var (?string codePointAt 0)) (bind ?stop (System.nanoTime)) (printout t ">> test1 took in nanosec: " (- ?stop ?start) crlf) ) (deffunction test2 (?string) (bind ?start (System.nanoTime)) (bind ?var (Helper.stringCodePointAt ?string 0)) (bind ?stop (System.nanoTime)) (printout t "-- test2 took in nanosec: " (- ?stop ?start) crlf) ) (deffunction compare (?s1 ?s2 ?message) ;(printout t ?s1 ":" ?s2 " - called by " ?message " returns " (eq ?s1 ?s2) crlf) (if (eq ?s1 ?s2) then (test1 a) (test2 a)) (return (eq ?s1 ?s2))) (defrule rule1 (model1(a ?model1a &:(compare ?model1a a "model1 slot a"))(b ?model1b &:(compare ?model1b b "rule1 model1 slot b"))) (model2(a ?model2a &:(compare ?model2a 1 "model2 slot a"))(b ?model2b &:(compare ?model2b 2 "rule1 model2 slot b"))) (test (compare a a a)) => ) .................. (defrule rule16 (model1(a ?model1a &:(compare ?model1a a "model1 slot a"))(b ?model1b &:(compare ?model1b b "rule16 model1 slot b"))) (model2(a ?model2a &:(compare ?model2a 1 "model2 slot a"))(b ?model2b &:(compare ?model2b 2 "rule16 model2 slot b"))) (test (compare a a a)) => ) (assert (model2 (a 1)(b 2))) (assert (model1 (a a)(b b))) The difference drastic in favor of the static implementation. The times are in the 30 to 40 microseconds while the other calls take much longer, usually in the 1000 to 2000 microseconds. Without rules, the difference is barely noticable. The side effect of the performance drop has a significant impact of scalability in a multi-cpu system. In our test environment, using Jmeter with multiple virtual users, the cpu usage of a 4 CPU system barely reach the 40% mark with the 'slow' method. With the static implementation, it can go up to the mid 90s for cpu usage. Any feedback is appreciated. Son Nguyen -------------------------------------------------------------------- To unsubscribe, send the words 'unsubscribe jess-users y...@address.com' in the BODY of a message to majord...@sandia.gov, NOT to the list (use your own address!) List problems? Notify owner-jess-us...@sandia.gov. -------------------------------------------------------------------- | http://www.mail-archive.com/jess-users@sandia.gov/msg11880.html | CC-MAIN-2017-34 | refinedweb | 623 | 68.26 |
:
...via..
I am wondering if we could get a similar drawing for YAML. Better drawn, I
hope.
Ok. More clarity with regard to YAML is emergent
today thanks to Andrew's thoughtful questioning.
YAML serializes a typed and labeled graph which
is both directed and has an origin such that all
nodes in the graph are reachable from the origin.
By typed graph, we mean that each graph is given
a transfer method (a class plus a format). By labeled,
we mean that all branch nodes label their connections
with either integers [sequence] or other nodes [mapping].
By directed with origin, we mean that each document
starts with a top-level node and only nodes reachable
by this top-level node can be serialized. Thus, a structure
such as the one below cannot be serialized:
0 0
N1 ---> N2 <--- N3
Where N1 and N3 are "lists" with one item, N2 stored
in their first position and thus having a label of 0.
Note that the above structure doesn't have an origin.
If you start with N1, you will not serialize N3, etc.
In practice, this reachability constraint is not a problem
as all programming languages that I know of have the same
limitation (indeed, unreachable nodes are usually called
"garbage" and are often discarded by a garbage collector).
Also, the above structure can always be represented by
using a convention of introducing a top level node, which,
at the application level is not considered part of the graph.
N1 ---> N2 <--- N3
^ ^
\ /
\a /b
\ /
ORIGIN
In Python, the role of the ORIGIN is the top-level namespace
which is a mapping called __dict__. In the example above, it
would have variable names "a" and "b". So, assuming that N2
also has the type "list", this would be represented in Python...
a = [[]]
b = [a[0]]
I hope this is somewhat clarifying the situation; so from
an information model perspective, YAML does indeed have
a few limitations; but those are quite pratical in nature.
Best,
Clark
> Ok. More clarity with regard to YAML is emergent
> today thanks to Andrew's thoughtful questioning.
>
I have posted Clark's thoughts on to the Wiki:
> YAML serializes a typed and labeled graph which
> is both directed and has an origin such that all
> nodes in the graph are reachable from the origin.
> [...]
Thanks for the explanation.
-- Steve | https://sourceforge.net/p/yaml/mailman/message/8985398/ | CC-MAIN-2018-17 | refinedweb | 392 | 68.91 |
:
March 5, AKD NOTHING MORE: WOMEN, THEIR RIGHTS AND NOTHING LESS.
VOL. I.NO. 10.
NEW YORK, THURSDAY, MARCH 12, 1868. single^opy^ekts.
C|f Efuolntian.
ELIZABETH CADY STASTOU.)
PARKER PILLSBUBY, j
Editors.
SUSAN B. ANTHONY, Proprietor.
OFFICE 37 PARK ROW (ROOM 17).
IMPEACHMENT OF THE PRESIDENT.
At last a majority in Congress vote for im-
peachment. The proposal came from the peo-
ple almost two and a half years ago, but few
listened, to it. A year ago last autumn the
question had become of such importance
that it was confidently believed the ensuing
session of Congress would bring its consum-
mation. The republican majority was so
overwhelming that presi dential interference
by veto or otherwise, could avail nothing. Every
state was loyal, patriotic, earnest and deter-
mined. The most radical men were returned
at the fall elections, and Gen. Butler of Massa-
chusetts was elected on the strength of his zeal
for impeaching the President, and his well
known ability as a contestant in criminal
prosecutions. But nothing was done by
Congress in all the long sitting to restore
national unity, still less harmony and prosper-
ity. A summer session was equally fruitless.
We are still divided, distracted, deranged in
currency, commerce, diplomacy, with State and
Federal liabilities resting on the people, the
producing people, amounting to not lesi than
four or five thousand millions of dollars, not to
speak of current expenditures which are also
appalling ; with a President (so it is believed)
whose weakness finds no parallel but in his
wickedness, with a Secretary of State who has
become his full counterpart in both, and a
Senate too cowardly, or too corrupt till now to
impeach the former or to seek the removal of
the latter.
The delay to impeach can be accounted for on
only two grounds. Conscience made moral
cowards of Congress, or it feared the-result in
a political point of view. In a body so reck-
less and corrupt as it has proved itself, no won-
der if there should be hesitation about casting
the first stone. The difference between Con-
gress and the President had become so slight in
moral turpitude, that one was reminded when
impeaching the latter was named, of the emi-
nent Dr. Beecher on tr ial before the General
Assembly of the Presbyterian church thirty or
forty years ago, on charges of heresy. A profane
wag said (oath here omitted), it would be more
proper to try the General Assembly before Dr.
Beecher.
Wendell Phillips had already impeached Con-
gress before the people on two grounds, b
which he well sustained; and both under the
circumstances, were high crimes and misde-
meanors. He proved all the-way from Maine
to Mexico and back again, that we had a
dawdling Congress, and a swindling Con-
gress Whether he himself cared anything
for the guilt thus charged and proved or not,,
the people have taken him at his word ; and at
the last elections have begun in good earnest a
change. How he could so earnestly and elo-
quently urge a dawdling and swindling
court to impeach and punish such a criminal
exceeded all power of comprehension. The
Farce of the Forty Thieves sometimes per-
formed at the theatres, might be enlarged and
improved should the nine and thirty pri-
vates undertake to arraign their captain beoause
his policy of plundering differed from their
own.
The democratic party disclaim all responsi-
bility for, or sympathy with the President. It
is a pity they had not better grounds and
reasons for such disclaimer. But the repub-
lican party is responsible for him, and elected
him too with full knowledge that he was a low
born, poor white, slave-breeding and slave-
holding member of the democratic party! That
Hannibal Hamlin should have been sacrificed
for such a Barabbas, at such a time, and by a
party of such pretensions and professions is a
phenomenon without a parallel, at least in the
last eighteen hundred years!
The Senate of the United States knew him
well. None were more active than he at the
opening scenes of the rebellion, and in the
preceding year. He was a senator from Ten-
nessee, and supported every demand of the
slave power with demonaic fierceness. Indeed,
his own demands of the free states as con-
ditions for remaining in the Union were more
monstrous than those from any other quarter.
Take one. On the thirteenth of December,
1860, he proposed the following as a constitu-
tional amendment:
Resolved, That the select committee of thirteen be
instructed to inquire into the expediency of establishing
by Constitutional provision, 1. A lino running through
the territory of the United States, not included within
the States, making an equitable and just division of said
territory ; south of which line, slavery shall be recog-
nized and protected as property by ample and full guar-
antees ; and north of whioh line, it shall be prohibited.
2. The repeal of all aots of Congress in regard to the
restoration of fugitives from labor, and an explicit de-
deration in the constitution that it is the duly of each state
for itself to return fugitive stav.es when demanded by the
proper authority, or pay double their cash value out of the
Treasury of the State. 3. An amendment that slavery shall
exist in Navy Yards, Arsenals, etc., or not, as it may be
admitted or prohibited by the states in which such
arsenals, navy yards, etc., may be situated. 4. Con*
gress shall-never interfere with slavery in the District of
Columbia, so long as it shall exist in the State of Mary-
land, nor even then without the consent of the inhabit*
ants and compensation of the owners, ft. Congress
shall not touch the representation of the three-fifths of
the slaves, nor the inter-state slave trade, coastwise or
inland. 6. These provisions to be unamendablr, like that
which relates to the equality of the States in the Senate
of the United States.
In his memorable speech in the Senate on the
18th and 19th of Dec. of that year, which an
enthusiastic republican on the floor pronounced
Jacksonian in tone, Web&terian in argument,
he declared he did not diffei' muck from his
Southern friends, only as to the mode of redress.
Shall I be so* cowardly, he asked, as to dc^
serfc a noble band at the north who stand by the
south on principle? Instead of acting with
that division of my southern friends who take
the ground of secession, I shall take other
grounds, while I try to accomplish the same end.
I believe the continuance of slavery depends,
upon the preservation of this Union, and a com-
pliance with all the guarantees of the constitution
Of course he meant the constitution as
amended ; for he most distinctly declared there
would be no safety without his own or similar
amendment. And finally, to the amazement of
even Jefferson Davis who bad not then seceded
from the Senate, he exclaimed, whenlhe North
refuses unfLer the constitution to give us what we
consider the needful guarantees for the protection of
our institutions and other interests, i will go as
FAR AS HE WHO QOfcS FARTHEST!
Ever since the war he has been redeeming that
solemn pledge and promise.. Senators heard him
make it, heard Jefferson Davis demand of him
to explain it, and have witnessed his attempts
and determination to redeem it, especially dur-
ing the last two years, since the cowardice and
corruption of the republican party have been so
manifest to the universal world.
And maimer of the impeachment, now
that it is commenced, is even more remarkable
than was its long delay. It is not clear that
the President should not be impeached; but
nothing could be more clear than that this Con-
gress is not a fit tribunal for so important a
transaction. Twice it has been attempted be-
fore. That it is intended to subserve the in-
terests of party is proved by the whole history
of the republican Congress, and the party
leaders, during the last two years and a half.
Pitchforking colored suffrage into fbe South on
the points of federal oayonets, and denying it
in every Northern- and Western state where it
is asked (Utah only excepted, and that, a terri-
tory), shows the interest the party have in the
question as one of justice and right. Changing
unsolicited the constitution (amending,it was
called) so as to leave the colored population
wholly at the mercy of the white so soon us tlio
rebel states are restored, shows the quality and,
extent of republican humanity and philan-
thropy. It was Stephen A. Douglas who said
he didnt care a dnfor the nigger, but he
had other reasons for opposing slavery ex-
tension. The colored man evidently has many
such friends in the republican party.
It does not yet appear that the President has
committed treason against any higher authority
than the party that elected him. No one was
more opposed to the Civil Tenure Act than was
Mr. Stanton himself. None more strongly than
he- advised the President against it. None
could test the constitutionality of that Act but
one holding the appointing power, the Presi-
dent himself. He assumed the responsibility
as did Gen. Jackson in removing a cabinet
officer and the Federal deposits. The people sns-
146
ft* lUvfllttiifltt.
tained the President in his course then and seem
likely to do so again. Congress feels that it has
undertaken a perilous work, and has prudently
preceded it with every cunning forecast
possible. The attempt to subjugate the Su-
preme Court and strip it of its authority, failed.
In the endeavor to cut off debate, and to rush
recklessly through with its purpose, it hopes to
have succeeded better. Jt may be so, but this
also is doubtful.. The protest of the demo-
cratic members of the House was spurned. It
was neither permitted to be read, entered on
the journal of the House, nor printed in the
Washington Globe. Though the constitution
makes it the duty of the .Chief-Justice of the
Supreme Court to preside at the impeachment
trial, his brief and respectful communication to
the Senate on the manner of organizing the
tribunal, awakened' only displeasure, aud was
treated with disrespect, if not absolute con-
tempt. The Senate enacted the law which the
President is charged with violating, and now it
is both judge and jury in the trial of its cul-
prit. In ordinary courts no juryman would be
tolerated who was known to be even prejudiced
against the accused. But the majority of the
Senatorial pannel have been loud and long in
uttering their at least pretended condemnation.
To say that the President is bound by the
law until it is declared unconstitutional, is ab*'
surd ; because (though at his own peril) he has
taken the only possible method to test its con-.
stitutionality. If condemned by the court that
has over and over prejudged him, and spurned
him and his policy together, he must suffer the
consequences as would any other citizen.
The removal of a cabinet officer is no new
wonder in our political heavens, to frighten the
dwellers thereunder, like the natives Columbus
found who were so terrified at an eclipse of the
sun. Officers of bureaus have been dismissed
by presidential fiat; and a large number of
these very senators once waited on President
Lincoln, headed by Mr. Sumner, and prayed
most fervently for the removal of Montgomery
Blair. And the arguments they used were
mainly those by which Mr. Johnson now justi-
fies his course.
The articles of impeachment are themselves
an anomaly in thehistory of civilization. Apoor
fellow with a junk of bread in each hand, to dine
more sumptuously, insisted on calling one beef,
the other bread. The high court of im-
peachment has crumbled its loaf into nearly a
bakers dozen of fragments and determines to
find in them a whole bill of fare, animal, vege-
table and mixed. Afterthought has super-
added some side dishes, but they only make the
case worse, revealing more and more the pov-
erty of the diet. In the absence of face cards
(in card-table parlance), it hopes by a handfull
of small trumps to secure the game.
But at this late day, the whole plot may fail.
Two thieves had planned to steal a neighbors
oalf. The owner heard of it and borrowed a
pet bear kept by a butcher near by, and tied
him where the calf was kept. When the thieves
came, one watched at the door while the other
went in the dark to lead out the calf, not aware
that Bossy was relieved by Bruin. Bruin met his
visitor with a hug more fierce than affectionate.
He at the dpor grew impatient and called,
Why dont you lead the calf out ? The other
answered, I cant get him out; the bear
hugging tighter and tighter. Atlast the watcher
alarmed at a noise said, Well, come out with-
out him, we shall be caught. The other an-
swered, by Qd, I cant do that either.
Republicanism may fare no better in impeaching
Andrew Johnson.
After all, High Treason against the moral
government of the Universe is in every policy
of reconstruction yet proposed. The war might
have been rebellion by the South against the
Federal authority. But while slavery continued
it was murder and treason against high
heaven on the part of the North. It was
heavens thunderbolts hurled at slavery. And
federal protection by the army of that accursed
institution from the moment the war com-
menced, was bold defiance of Omnipotence it-
self. And our army of two millions six hun-
dred and thirty thousand men were as chaff be-
fore the storm until we blew tbe trumpet of
emancipation.
So shall it be still. Until the North and the
nation shall together abandon the tyrannical
schemes and plans of all parties, and accept as
the one only sure basis of reconstruction, in-
telligent, loyal, equal suffrage and citizenship, re-
gardless of race, color, condition or sex, presi-
dential or congressional policies, Freedmens
bureaus, standing army, constitutional amend-
ment, bailing or banging Jefferson Davis and
impeaching Andrew Johnson, will all alike be in
vain! __________________ p. p.
TBE ROUND TABLE
The Round Table is deservedly growing in
favor with the most intelligent readers. An
article in it last week entitled What the Re-
public Needs, contained the following defi-
nition of true patriotism:
True patriotism does not consist in affectation. It
does not make believe, for tbe sake of winning the
affections of tbe people, that all things, tbe people in-
cluded, are as perfect as they can possibly be. It rather
aims at telling tbe truth, regardless of unpopularity,
not only because of tbe intrinsic beauty and righteous-
ness Of truth, but because, in tbe long run, it is sure to
be safest and most wholesome. We do not hesitate to
express the conviction that a great proportion of our ex-
isting national embarrassments and those that threaten
our future, have had their origin in a lack of candor on
the part of those who ought to have been the teachers
of the people instead of their flatterers. The kind of
courage whose absence we deprecate is not that which
enables men to declaim against interests or institutions
whose destruction would cost nothing to their assailants
either in purse or cohscience and whose abuse gains
coveted notoriety. The courage we would' fain see is
that which should lead men to admonish the people of
their conceit, their ignorance, their boastfulness, tbeir
irreverence, their self-indulgence, tbeir adoration of
money, their contempt for modest merit, their pitiful,
shop-keeping way of measuring life, its duties and re-
sponsibilities j in a word, of all those qualities which,
during ihepast generation, have so corrupted the nation,
and which are more menacing to true liberty and a dig-
nified national life than even the overthrow of the Con-
stitution and the rise of a Military Dictator. The latter,
indeed, would be part of the product of the enumerated
vices ; but the vices would continue to poison the .sys-
tem, like a lingering disease, long after the spasmodic
effort was made that haply-might cast off and outlive its
climax.
* * *
The republic needs for the discussion of these grave
questions, not demagogues and coarse-grained partisans,
but the cultivated and high-minded gentlemen of the
land ; men who, having nothing to ask of the people,
will not be fearful about displeasing them ; men who,
for the sake of their country in her hour of need, wi.l
emerge from the political obscurity to which their own
taste as well as that of the majority consigned them in
tbe tame of that countrys prosperity. With the cour-
ageous and disinterested -aid of such men, the perils
that surround us may be surmounted or avoided : with-
out such aid, we have now slender hope of escaping a
catastrophe.
The Round Table italicizes the word gentle-
men as used above. What would its editor
say to an amendment to his proposition,padding
ladies of the same excellent qualities he would
have his gentleman possess to the councils
discussions ? There are plenty of ladies quite
equal to the Victorias, Annes and Elizabeths
of England, the Theresas of Austria, or the
Catharines of Russia.'
The Round lable is certainly favorable to the
equal voice of intelligent women in the gov-
ernmental councils. Even the very discouraging
article in its columns upon Womans Suffrage,
on which we commented in a late Revolu-
tion was a Communication, as perhaps should
have been more definitely shown, and quite un-
like the general character of the editorial col-
umns. p. p.
SUFFRAGE FOR WOMAN.
An auspicious sign of the times, as relates to
extension of suffrage, is the tone of the public
press. East, West and South the demand ife
now making, and the newspaper press, political,
pictorial, literary and religious is beginning not
only to treat the. question with respect, but in
many cases boldly to advocate it. A number
of the Michigan journals are preparing the way
for the extension of the franchise in that state
without distinction of color or sex. To some of
them we have referred before. The last Hud-
son Post announces, in its new prospectus, that
its political principles are founded in a convic-
tion of the necessity and expediency of the es-
tablishment of impartial justice and impartial
suffrage; and our efforts will be devoted to the
advocacy of those principles. With regard to
the franchise, the Post says there are two
courses, either of which is apparently just; one,
the conferring of the right of suffrage upon all,
irrespective of color or sex; the other, the es-
tablishment of certain requirements of educa-
tion which all must comply with to be entitled
to enfranchisement. The Post goes for the for-
mer, believing the latter inconsistent with a
government that derives all its just power from
the consent of the governed. The Revolu-
tion only proposes a slight educational test, not
so hard to attain as are one and twenty years
of age, and accessible' to all. Wo will not
quarrel with our brave Michigan contemporary
even.about this.
CHILD MURDER,
The public attention has been much drawn
to this frightful subject of late. The disclose-
ures made are appalling to the highest degree.
The social system is too corrupt, it would cer-
tainly seem, long to survive. Infanticide is on
the increase to an extent inconceivable. Nor is
it confined to the cities by any means. An-
droscoggin county in Maine is largely a rural
district, but a recent Medical Convention
there unfolded a fearful conditibn of society
in relation to this subject. Dr. Oaks made the
remark that, according to the best estimate he
could make, there were four hundred murders
annually produced by abortion in that county
alone. The statement is made in all possible
seriousness, before a meeting of regular
practitioners in the county, and from the statis-
tics which were as freely exposed to one mem-
ber of the medical fraternity as another.
There must be a remedy even for such a cry-
ing evil as this. But where shall it be found,
at least where begin, if not in the complete en-
franchisement and elevation of woman? Forced
maternity, not out oi legal marriage but with-
in it, the complete power of the stronger over
the weaker sex, must lie at the bottom of a vas
IB ft* JUVfllUtifltt. 147
proportion of such revolting outrages against
the laws of nature and our common humanity.
WE AT TEE PRESS SATS OF OS.
From the Odd Fellow, Boonsboro, Md.
The Revolution is handsomely printed, edited
with genuine female spice, and of course, goes heavily
for female suffrage, and the rights of womankind gene-
rally. It has'a big job on hand, but the proprietresses
tseem to go at it with a will. Of course we wish them
t uccess in their enterprise and shall be glad to receive
* The Revolution regularly.
We need something more than good wishes.
We ask a little male spice from all the odd
fellows in the land.
Woman has indeed a big job on hand to
overcome not only the ordinary obstacles in life
common to all, but the artificial ones that the
usurper man has put in her way. Help us to
pull down these barriers in the state, the
church and the home, that woman may stand
on an even platform with man.
From the Brooklyn Evening Post.
A Thzno of Beauty is a Joy Fobever.So thinks
Mrs, -Anthony, and everyone of our male readers who
po&ress common sense. We have been favored with a
copy of The Revolution, and we must give Mrs.
Anthony and Elizabeth Cady Stanton, not forgetting
barker Pillsbury, and the celebrated G. F. Train, credit
Tor issuing a paper editorially and typographically the
Smartest and neatest sheet we have seen for a long time
'They seem fully determined that the handsomest and.
smartest women and men shall rule this country. If
man is the Lord of creation, woman is the Queen, and
rules the lord, generally speaking, with a despotic
power. Let our females rule the house, train up the
young in the way they should go; and in this sphere'
ithey will have more influence* than by brawling at elec-
tions or serving as members of Congress.
' What a feeble folk these handsome lords
must be, if, with the purse and ballo.tin pocket,
navy behind their back, they are still ruled
with despotic power by women. Now we
submititto the judgment of a candidworld
if such men have the strength to brawl at elec-
tions, or make laws in Congress for thirty mil-
lions qf educated people.
From the Boston Saturday Evening Express.
The Revolution is smart and peppery, filled with
readable articles and goes it strong for woman's rights
and George Francis Train for President. It tells also
some unpalatable truths. The last number says that
Senators Yates of Illinois, and Saulsbury, are confirmed
and habitual drunkards, the editress having recently
seen them at Washington. She also advocates an equal-
ity of wages whether work is done by men or women,
and goes in strong for iemale compositors to get mens
wages. Train has also a letter saying no English hull
ever stopped a Yankee Train, and goes strong for war
with England. Speaking of Judge Chase, she says he
has got a heart as cold as a clam. The Revolu-
tion is replete with live reading.
. No, sir, we have a grander work on hand than
making Presidents. We are trying to educate
the people into the responsible duties of self-
government. If we leave the interests of this
republic wholly to the tender mercies of politi-
cians, our nations decline and death is swift and
sure. The women of this nation demand as
one of their rights sober men, in high places,
and all places, not only in the White House and
Congress, in the pulpit and at the family altar,'
but on our streets and highways, in our steam-
boats and railroads ; for statistics show that
more than haif the accidents, the pauperism,
.the diseases, the crimes that make our Eden
pandemonium, are the result of this whole-
sale drunkenness among those who make and
lead the public sentiment of the country. If
there are no sober men for rulers, then let the
Deborahs lead the armies of the Lord to vic-
tory and judge the nation with wisdom*
SUFFRAGE I2V KANSAS.
The following is only one of many brave
voices constantly reaching us from Kansas. The
. work there is well begun, though a ra ther Hood-
winked correspondent of the Springfield Re-
publican reports otherwise for reasons best
known to himself: *
Lawbenoe, Kansas, Feb. 26, 1868.
Bear Miss Anthony : The watchword of Kansas women
is onwards Revolutions do not go backward, and we
know no such word as fail. Though some of the prominent
republicans rejoice that Woman Suffrage did not succeed
at the November election, we are not in the least dis-
couraged. On the contrary, we are determined to press
our cause to the earliest possible success. To accom-'
plish this, one of our best women (Mrs. Helen M. Starrett)
has already entered the field to plead thee anse of woman.
She delivers her first lecture this evening in Topeka.
Subject, Man mid Woman. [Kansas men, not content
with the able arguments and logical reasoning of im-
ported speakers, have clamored incessantly for home
orators, arguments, and eloquence. Letus hear from
the women of Kansas has been sounded in our ears
since the question of Female Suffrage was first agitated;
thus actually forcing from the quiet seclusion of hpme
the wives and mothers they would so bravely shield and
protect.
To let the .Legislature know that we are not dead and
buried, our widows petitioned that Honorable body
for exemption from taxation, urging its injustice with-
out representation. Their petition was referred to a
committee of five, a majority reporting against it. The
report says, Taxation without representation is
tyranny, rung from Fanenil Hall nearly a century ago ;
but who in tbe land then dreamed that the ladies would
make the sentiment of those old patriots against the
British government applicable-to the women, and espe-
cially the widows of Kansas ? Thus we see that man,
although claiming superior reasoning faculties, could not
foresee the logical sequence of the sentiments.uttered
and earnestly urged by himself. The minority of com-
mittee also made a report, all of which I inclose.
The brave women of Kansas have* nailed their
colors to the mast, and may be relied upon as efficient
workers till a Revolution shall be seen at every hearth-
stone, and woman be recognized the equal of man and
nothing less.
Inclosed find $11.00, for which send six copies of
The Revolution to my address.
With kindest regards,
Mrs. R. S. Tenney.
The following is the petition referred to in the
foregoing letter, and one other .to the samepur-
port; and also the minority report by the
Legislative Committee.
PETITION 037 FORTY-FOUR WIDOWS AND SIXTY-TWO
CITIZENS OF THE CITY OF LAWRENCE, KANSAS,
ASKING EXEMPTION FROM TAXATION FOR WID-
OWS.
To the Honorable, the Representatives of the people of Kan-
sas, now met in the Stale Capital for the purpose of good
and just Legislation :
Gentlemen : We, the undersigned, widows of Law-
rence, do hereby respectfully petition you to onaot a
law that will exempt the widows of the state from tax-
ation.
We appreciate, equally wo think with yourselves, the
fact thatnaxation without representation is unjust, op-
pressive and burdensome ; and, gentlemen, we are sure
you cannot regard it as just to make widows an excepted
class, and impose burdensome taxes on them. Does
any one say we are represented ? Or are you disposed
to set aside the claims of our petition on the theory that
in some latent, though undiscovered way we are repre-
sented? Then, gentlemen, we do respectfully petition
you to enact a law that shall require the payment of the
taxes assessed upon us at the hands of our represen-
tativesself-constituted or otherwisewho impose
them. Make those who represent us in imposing them,
represent us in paying them.
Is it said that, as we are protected by .tho government
and laws, we ought to support them with o;ur means?
This is only the old plea for taxation without represent-
ation. Obligations and benefits are mutual between
the state and citizen. The obligationto pay taxes to the
government corresponds exactly to the right of repre-
sentation in the government; and for the benefit of
governmental system and social order, wegive in return,
equally with other citizens, our moral support, respect
and industry.
That you may be made aware that we do not petition
you, gentlemen, in a matter of abstract principle merely,
we will show you very strikingly that we are heavily
enough burdened to warrant us in crying out for the
removal of the insupportable load that is laid upon us
and our children, and kept on us without our consent
and in spite of us.
One of us whose names are appended, has an income
of $900, and her taxes for this year amount to $736.
Another has, for her support, an income of $200, derived
from an insurance policy on the life of her deceasedhus-
band. Of this it takes $99 to pay the taxes on her house
and lot. Another is now contemplating the sale of her
house and lot, next May, by the sheriff, to pay the
taxes, and it is a matter of impossibility for *her to effect
more by her labor, than a meagre supply oi food and
clothing for herself and children. Another has just
mortgaged her little shed of a house in obtaining a loan'
to pay the taxes and keep her little home from the fate
awaiting tbe one just mentioned. And these instances
only fairly show the average proportion of our taxes to
our incomes, and tbe average stress of difficulties under
which we now suffer, because of this burdensome taxa-
tion imposed upon us by othersmostly for things in
which we take no interest at all.
Gentlemen, surely you will not continue this injustice
and oppression, simply because you and your consti-
tuents are so much stronger than the widows of the
state, who are powerless except in so far as their appeals
to your sense of honor and justice affect you.
We believe, gentlemen, you will do yourselves the
justice to respond unqualifiedly to these appeals, and
we trust that our petition will be grantedalike to your
credit and our relief.
[Signed by 44 Widows.]
The undersigned fully agree with the sentiment ex-
pressed in tbe petition of the iwdows of Lawrence; and
respectfully unite with them in asking the passage of a
law to meet their request.
(Signed by 62 citizens.]
PETITION OF TWENTY-TWO WIDOWS AND 375 CITI-
ZENS OF THE CITY OF TOPEKA, SHAWNEE COUNTY,
KANSAS, ASKING EXEMPTION FROM TAXATION FOR
WIDOWS.
Wc, tho undersigned citizens, join most earnestly in
praying our Senators and Representatives in granting
the petition of the widows of Lawrence, to exempt them
irom taxes. Also the widows of all Kansas.
[Signed by 22 widows and 375 citizens.]
REPORT OF MINORITY OF COMMITTEE ON PETITION
OF CITIZENS OF LAWRENCE AND TOPEKA, ASKING
EXEMPTION FROM TAXATION OF WIDOWS.
Mr. Green made the following minority report from
the Special Committee on petition exempting widows
from taxation :
Mr. President : A minority of yuuv Special Commit-
tee, to whom was referred the petition of forty-four
widows of the oity of Lawrence, and twenty-two widows
of the city of Topeka, indorsed by 400 citizens of the
state, praying for the passage of a law exempting the
property of all the widows of the state of Kansas from
taxation, had the same under consideration, and instruct
me to make the following report:
That while we recognize the existence of heavy and
burdensome taxes upon the property of the petitioners,
yet the passage of auy law by the Legislature exempting
the property of the .widows of the state of Kansas from
taxation would, in the opinion of your Committee, be so
clearly in conflict with section 1, article 11, of the Consti-
tution ot the state, that we are unable to recommend
the passage of an act making the discrimination in favor
of tho widows of tho state of Kansas, as desired in tbe
petitions before us ; and your Committee recognize the
manifest injustice of imposing heavy and^ burdensome
taxes upon uny class of persons without their consent,
and believing that the widows of the state arc entitled
to such civil rights as will enablt them to protect them-
selves, their children and their property, and to remove
all cause of complaint, and to conform to the principles
of free and representative government in accordance
with the principles of natural justice, as enunciated by
the Fathers, that all good governments derive their just
powers from the consent of the governed, your Commit-
tee would recommend that the Constitution oi the state
of Kansas be so amended as to make no discrimination
against persons on account of sex or color.
L. F. Gales,
W. H. Dodge.
The majority of the Committee repotted at
148
tu lUralntta#.
very much greater length. But as Long Meas-
ure only seems to apply to it, wo give only its
concluding periods :
Man is the sentinel around tbe camp of life; be wards
off the approaching danger, and receives the blowa
protection created by Gcd. Within is the family, al-
though deprived of its head and depressed in sorrow, it
is nevertheless within the paradise.
Better far would it be for the females of the state to be
thus dependent on tbe stronger sex, standing on the
outer riog of a boisterous life, than to cut loose and
swing from her orb, and sail through life an indepen-
dent being. ________________________
ter of the bouse came up and remarked that the book
was not suitable reading for ladies, but that here (put-
ting into my baud a email volume) was a work adapted
for tbe use of women and children, which he had pro-
vided for the seuhoras of the family. I opened it and
found it to be a sort of text-book of morals filled with
commonplace sentiments, copy-book phrases, written in
a tone of condescending indulgence for the female in-
tellect. * I could hardly wonder, after seeing
this specimen of their intellectual food, that fhe wife
and daughters of our host were not greatly addicted to
reading. Nothing strikes a stranger more than the ab-
sence of books in Brazilian homos.
The act just passed in the Kansas State Senate allowing
any qualified person, without regard to sex or color,
to praotico law in all the courts, may be fairly pronounced
a point gained by the advocates of Womans Bights.
We incline to think it is a measure that will be promptly
acted on. We do not know how feminine logic might do
on a technical law-point; but, unless Kansas women are
different from other women, and Kansas men from
other men, there must be some admirable jury pleaders
amongst the ladies there.
Neably all of the papers in Kansas, it is said, that are
controlled by men who have never been tinctured with
democracy and anti-abolitionism, are supporting Woman
Suffrage. With such afi array for tbe cause, it cannot be
ridiculed out of existence, and to defeat it will require
something besides the wicked attempt to array the Bible
against it. . _______________________
PROFESSOR AGASSIZ'S BRAZIL.
The journals of Mt. and Mrs. Agassiz in Bra-
zil will b read with great interest by all lovers
of travels. There are not many such travellers,
perhaps none since the period of Humboldt.
The following is apropos to our columns :
EQUAL BIGHTS IN SCHOOLS.
There are establishments in'nearly all the larger cities,
in which the children of the poor are taught a trade. In
these sohools, blacks and whites are, so to speak, in-
dustrially united. Indeed, there is no antipathy oi
race to be overcome in Brazil, either among the laboring
people or in the higher walks o£ life. I was pleased to
see pupils, without distinction of race or color, mingling
in tbe exercises.
The Imperial Library of Bio de Janeiro is very fairly
supplied with books In ail departments of learning, and
is conducted in a very liberal spirit, suffering no limi-
tation from religious or political prejudice.
In fact, tolerance and benevolence are common
characteristics of the institutions of learning in Brazil.
EDUCATION OF WOMEN.
Both authors of this book have passages on
the neglect of the education of women. Mrs.
Agassiz says :
The education of women is little regarded in Brazil,
and tbe standard of instruction for girls in the public
schools is low. * The majority of girls in Brazil
who go to school at all, are sent about seven or eight
years of age, and are considered to have finished their
education at thirteen or fourteen. The next step in their
life is marriage. * There is not a Brazilian sen-
hora who has ever thought upon the subject at all, who
is not aware that her life is one of repression and con-
straint. She cannot go out of her house, except under
certain conditions, without awakening scandal. Her
education leaves her wholly ignorant of the most com-
mon topics of a wider interest, though perhaps with a
tolerable knowledge of French and music. The world
of books is closed to ber ; for there is little Portuguese
literature in which she is allowed to look, and that of
other languages is still less at her command. She knows
little of tbe history of her own country, almost nothing
of that of others, and she is hardly aware that there is
any religious faith except the uniform one of Brazil;
she has probably never heard of the Beformation, nor
does she dream that there is a sea of thought surging in
the world outside, constantly developing new phases of
national and individual life; indeed, of all but her own
narrow domestic existence, she is profoundly ignorant.
Mis. Agassiz tells an incident that illustrates
this condition of society. Staying at a fazenda
one day, she found a book and took it up to
read it. It was a romance. She says :
As I stood turning over the leaves * the mas-
THE ONE THING NEEDFUL.
Boston, Feb., 1868.
The one thing needful for women to learn, is
physiology; not only for its direct value in
teaching how to preserve health, but as the
basis of psychology. But to be of much use it
must be thoroughly and universally taught. This
can only be done by making it the indispensable
part of all school education, not in the super-
ficial way in which it is generally tanght now,
when taught at all, but in its wholeness. Mo-
thers cannot teach their daughters, because
very few of them are competent, and because
parents are, for that very reason, the worst
possible educators of their own children. The
parental relation is exclusively an affectional
and not an educational relation. Of course pa-
rents may educate their own children, and'if
they have a genius for educating, they may do
it tolerably well, but they will not do it so well
for their own children as for others.
The best educators of children are older
children, especially those not of the same fam-
ily ; and in fact we see our best efforts constantly
defeated by the evil influence of the vicious and
ignorant companions of our children. There
is no safety but in the path of justice to all.
One ill-bom, neglected child may ruin a com-
munity. All children are equally Gods, and so-
ciety, as the representative on earth of Gods
providence, should care equally for every one of
its little ones. Up to the present time the in-
fluence of children upon one another has prin-
cipally been felt for evil, but it is just as potent
for good. I am very glad to see the remarks of
Dr. Lozier in The Revolution, about the
teaching of woman. The profession of phy-
sician calls for a noble character, and there are
many noble men in it; but they are human, and
they cannot act against the very life of their
business. If all women were wisely taught
there would be no need of a class of physicians ;
children would be well-bom and well-bred.
There is much, very much, to be said to women
that cannot be printed, that must come from
thoroughly taught women to their sisters ; and
until it is said and the truths acted upon, the
world must continue to suffer. Orilf woman
can save us. People never learn by experience ;
if they did we should all have all the wisdom
we need, women would not be growing weaker
and sustaining a special class of physicians,
undreamed of by their grandmothers. Naturally
women are stronger than men, as steel is
stronger than iron. Men could not stand the
dress and habits of women, and go through
what they do, without utterly breaking down.
Some day the world will learn that the strongest
forces are the finest, the least visible, the most
spiritual; then we shall see why it is that things
go so badly when the lower attempts to govern
the higherwhen the force element incarnated
in man rebels against the love element which
inspires woman. i\ j. c.
THE WORKING WOMEN OF NEW YORK.
ARTICLE in.
THE SEWING GIKLS CONTINUED.
Thebe are, first, the shop girls, who sit in long
rows, up and down the length of great dreary
work rooms, or pile in promiscuously in less
pretentious establishments. Then there are
the dressmakers, the cloak-maker6, tailoresee6,
seamstresses and takers in of slop work. Of the
thirty thousand women now out of employment
in this wilderness of a city, twenty thousand are
said to be sewing girl6. Piles of unsold cloaks
on the shelves, clothing of every description on
hand, although marked down at ruinous
pricesthis tells the story why so many thou-
sands of women are thrown out of employment
during the inclemency of the season. A picture
of one of the vast establishments where shop-
girls work, will do for all the rest. Large, well-
lighted show-rooms, attentive salesmen, watch-
ful floor-walkers, spry little cash boysall these
will the eager buyers find in the lower rooms.
Down stairs- to see evening dresBes, where bril-
liant gas jets flame out to show the effect; up
stairs to see the bargains in cloaks -and shawls;
wherever they go, the same genial light and soft
radiance is thrown. The work room is a very
different place to the show rooms, however. On
the third floor are the first work-rooms. They
are large and well-lighted, though but poorly
ventilated. The impression made upon one's
mind is, that a breath of fresh air has not en-
tered that close atmosphere for a long time, and
yet the windows are throyn up as high as they
will go every night at sweeping hour. But one
must remember, when sixty human beings, some,
of them with diseased lungs and horrid breaths,
work in these rooms for ten hours daily, that
ventilation is almost impossible. These are the
lace workers and muslin finishers. They pre-
pare those delicate articles of lingerie which so
win ones heart from the window or case where
they hang. Infants robes are made, babies
baskets are thoroughly prepared here. The girls
look tired, even at an hour before noon. They
bend over shockingly, and nearly all of them
have sore eyes and sorer hearts, poor things.
Six dollars a week is the average price made
here. Some there are who make nine. The
majority only make five. The fourth floor, on
immense room, running over the whole building
from back to front and from side to side, is oc-
cupied by the cloak makers.' There are four-
long tables down the centre of the room, and
smaller ones placed a tittle to the side. Here,
during the brisk season, ninety-five girls work.
Now the number cannot be more than twenty.
The women working here seemed more oheeriul
than those on the lower floor ; but they, too, are
overtaxed and allow themselves to die by inches,
just because they fancy they are making an hon-
est livelihood. They average more than the mus-
lin workers. Some of them can make ten dollars
a week, but those are old hands at the business.
On the fifth floor is a smaller work-room than
either of the others. It is devoted to the malt-
ing up of plain underwear for ladies and chil-
dren. There are about twenty-five or thirty em-
ployees here, pale, wan and sickly ; but, strange
enough, more contented with their lot than
those of either of the floors below. I asked one
old lady, whose age would surely entitle her to
rest, how she liked to work there ? She replied,
I thank God that I can take care of myself in
my old age! She is seventy-two .years of .age,
and earns three dollars a week, God help her! I
could riot help wondering how in the world she
149
managed to reach the fifth story with her poor,
rheumatic limbs and feeble strength. As if
divining my thoughts, she said, To be sure,
its a good ways up, and I have to come very
slowly ; but after I once get here theres a good
rest for me until night. A little creature of
thirteen, but who looked no more than nine,
was basting hems in a..comer. She was only
learning to sew, she said, and had been there
two weeks, but in a fortnight more she would
be paid tor working. Upon asking her how
much, she answered with a proud inflection of
voice, five dollars a month. The faces of the
employees throughout this establishment gener-
ally wore a shocked, startled expression, as if
they were forever on the rack. A great majority
seemed to be suffering with lung and throat dis-
eases.
With & heavy heart I saw them at their tasks.
Poorly paid, illy clothed and fed, they go on
from one years end to another. Surely there
must be relief for them sometime in the future.
Why not now ? Tupto.
DR. TODD ON WOMANS RIGHTS.
BY MILTON B. SCOTT.
The Rev. Dr. Todd has lately written a little
book on Womans Rights. It is the most
complete illustration that could be devised of
the weakness and softness of the opposition to
the cause of Womans Rights. He neither ap-
preciates the weight nor the delicacy of the sub-
ject which he assumes to discuss. In common
with other lords of creation, he seems not to
realize that women have trains. He addresses
his words-chiefly to women, and although he
compliments and patronizes V the sex con-
siderably, one can find very little worthy the at-
tention or consideration of reasonable crea-
tures.
The Doctor can see nothing higher in the
Womans Rights movement than an aggressive
warfare on the part of women upon the just
prerogatives of men; and having constituted,
himself a champion of the sterner sex, he
enters the lists in full armor. He opposes every-
thing that savors of equality. He would not let
woman share the right of suffrage with man;
he ddes not want her to choose her own em-
ployment, nor receive full pay for work ; he
seeks to limit her education; he forbids her
either to act or grow except in a certain
sphere, and he even considers it necessary
to prescribe the kind of garments she should
wear.
The work is a little one ; but it contains a vast
amount of advice and admonition which will
amuse, if not instruct, the women of our coun-
try. The poor girls of our cities, who are toil-
ing out life for scarcely enough to preserve life,
and are exposed to the most fearful danger and
temptation from day to day, should ponder well
the argument of Dr. Todd against displacing
so many young men and taking away so* many
chances of marriage from themselves. The
young, ladies at our various female seminaries
and colleges should read his earnest words upon
the insufficiency of their physical organiza-
tion to go through the course of study they
have undertaken. The vast number of maidens
in the country who, Dr. Todd seems to think,
are seeking to escape marriage, need his pre-
cious information about the blessedness of the
connubial relation and their dependence on
man. The women who have broken over the
chalk line which Dr. Todd and others have drawn
fUrdutiau.
as the boundary of their sphere, should lis-
ten to his fatherly advice, and no longer expose
themselves to the gross and unchivalrous
charge of seeking to be men.
And then, the menthe men wbo invent,
the men who earn the property,!* the men who
support the families, the men who endure
the pressure of continued and long labor, the
men who kill whales, pull teeth, cut off
legs, dig ore and coal, carry hods, tan
leather,*' groom horses, and perform the
other manly deeds which Dr. Todd strings over
nearly a page of* his bookthese men and all
others should receive the instruction here offer-
ed to them, and be ready evermore to resist the
encroachments which strong-minded women
are making upon their authority and power.
Oh, Dr. Todd, have you ever thought of the
jpooi' girl ?
The revelations -of nature speak through the.
elements of society, foreshowing a better day
for woman. Her social condition is not ade-
quate to her capacity for usefulness; the rivalry
of conservatism prevents her from rising equal
to her true status. Woman alone knows her
sorrows and struggles. She feels the sting of de-
gradation in her heart arousing her to rescue
herself from slavery. Her work should move
with the celerity of thought to compete with
error in its pressure against her elevation. The
course of sundry would-be reformers (who have
turned against Woman Suffrage) is unjust in
the extreme ; their opposition is not sufficient
to rival the living principles that unfold in wo-
manly wisdom. Teach woman that it is not her
province to obey unprincipled man. Man is
not lord of creation; his claims, have fallen,
leaving him on a level with, woman. Her sym-
pathy lives in every reform ; she can open the
field of culture broad enough to develop her
individuality for the elevation of the world.
Close thinking will impress every woman that
the Eternal Principles will deliver her from the
bondage of false customs. Behold, millions de-
sire to carry fonvard this great Revolution for
the enfranchisement of women! Change is the
order of natureprogression is the life of
societyall should yield to the never-varying
round of Providence. m. t.
A MOTHER TO A DAUGHTER.
My Dauohter : Among all types of beauty
and sweetness which have been given us
by poets and painters, stands first of all a
young girl just budding into womanhood, whose
elastic step, delicate bloom, and round, flowing
outline of form express suppleness aiicl vigor.
What an incarnation of hopefulness! what a
reservoir qf all that is lovely and inspiring in
the future woman, is such a picture!
But where do you see the type, save in fancy?
You can count on your fingers the Misses of
your own age whose form and bearing do not
suggest some frail, exquisite piece of porcelain,
too delicate for actual use, and to be handled
with exceeding care when taken from its sup-
port.
In our cities such crops of hot-house exotics
are yearly poured forth to swell the ranks of
human life, that physiologists are sorely per-
plexed in enumerating how long it will take for
the race to die out from sheer delicacy in wo-
man. In the country there is not so much ad-
vantage over the city as one would suppose to
be the case, for there is pure air, the blessed
sunshine, and the fertile bosom of mother-
earth from which to draw vitalitythere are
flowers to cultivate, and woods and fields to ex-
plore, from which to gather botanical and geo-
logical specimens ; for, even in the most pic-
turesque country, girlhood is too much re-
stricted from healthy, out-of-doors activity and
from living out those natural instincts which
should ever be religiously respected by their
parents. So the glory and pomp of sunrise
and sunsets come and go, and the miracles of
reproduction, growth and death pass before the
eyes; so summer breathes in blossoms and
fruitage, and winter in snow and sleet, as the
great world spins forever down the ringing
grooves of change, while myriads of hnman
beings stand stolid and dull, with senses una-
wakened by the palpitating life that pulsates
alike through the granite, the plant and the hu-
man being!
It is time we shook off' our ^Joth, dear girl,
and look life fairly in the face, to see our situa-
tion and our needs, and to devise what must be
done to reform existing evils. There is some
great wrong in the position of women, as you
well know. When shall we begin to point out
the wrong and specify the remedies? There is
balm in Gilead; for every wrong exists a
right; for every evil a cure.
You have asked me if there were two kinds of
human nature ; very indignantly did you ex-
claim that the world must think woman greatly
inferior that it restricted her so much, and de-
barred her from hundreds of pursuits tbat men
considered good and praiseworthy. Your
brother could use his limbs in ways that de-
veloped his whole body, and gratify his curi-
osity, on the street, in the workshop, or in min-
gling with his fellows. As he grew older, every
pursuit that charmed him courted hi9 atten-
tion. You were told, if you attempted to follow
your early playmate, reared by your side from
babyhood, that such things were very improper
for girls, they were coarse and unwomanly.
Again you answered, Is not what is good
for him good for me, and does not what injures
me hurt him also ? Why should we be sepa-
rated now, wlmn we need each other more than
ever? What power is it that has decreed I am
immodest in using faculties that are given to
me as well as to him ? Is it not a decree un-
founded in the nature of things, and made only
by a false view of our capacities, which may be
annulled ? It must be so, for I feel the truth of
what you said in your last, That every faculty
has an inherent right to its natural develop-
ment.
Then, many of your associates are restless
and unhappy. But I will not touch upon this
mental phase of girlhood now. In answering
these queries and a hundred others, my dear-
girl, I commence with the physical life since
in the order of nature it is first developed,
and it is at the base of all intellectual and men-
tal power. Given firm- health and you have a
capital to start with, which will enable you to
strive to attain some noble end and exult in the
strife. You feel that fresh enjoyment of exist-
ence, that exulting sense of power, which
should underlie all effort, and with which a
woman can put forth questions that only the
reorganization of society can answer.
So let me urge upon you again and again to
respect your body, and obey its laws as far a9
you know them. There is no enjoyment, no
vigor, no usefulness without a sound body.
Regard it as your first duty to care for your
health. Let not the sneer of being unfashion-
able tempt you to sell the birthright of nature
150
for a mess of pottage. There is nothing in the
world so demoralizing as to run counter to the
known laws of being. Such a coarse dwarfs
every higher and better faculty ; it aims a blow
at the foundations of morality itself. Thus
saith the Lord is wiitten in the very constitu-
tion of our being, and to disobey is to degrade
our whole nature.
And I charge you, my child, if you have any
love of truth, to remember tbis, that there is
just compensation for every broken law, and
never can one be transgressed with impunity.
Fashion ignores this; hitherto our sex have
been yielding, and disliked the notoriety of be-
ing peculiar, and so we have bowed to her sway
with more than pagan idolatry. Thank Heaven,
my child, that you live in an ora when indivi-
duality is claiming expression and woman feels
that her outward life shall henceforth express
her inner nature.
First of all you must understand well the
outlines of Anatomy and Physiology. The day
lias passed when the body was despised and
called altogether corrupt and vile, and all that
is most natural and sweet was to be tortured
till extinct. That belief belongs to the dark
ages. These wondrous organs by which we
perceive the outer world, and by which all sen-
sations play upon the interior, are like the keys
of some delicate musical instrument, and like
thoss need tuning in perfect harmony from
their lowest to their highest notes. But har-
mony means health, and that is wholeness or
holiness. In perfect health every faculty has a
normal use and gratification. Each one is sa-
cred and beautiful in its true place, and in the
broad fields of human existence there is room
for all to play freely and grandly.
If you understand, my child, that every fac-
ulty you possess is God-implanted and presup-
poses a use for that faculty, you have caught
fast an eternal truth. That the Divine flows
through the human in all ages aud races is a
truth just illuminating humanity. The light
streams upon fewer still who. have learned how
fully and sweetly it flows through woman in her
true development. Affectionately,
h. u. H. p.
MORE WESTERN CORRESPONDENCE.
EXTRACT OP. A PRIVATE LETTER PROM ILLINOIS.
Dear Miss Anthony ; When I first advoeatojl Wo-
mans Suffrage,! did so as a protest against Negro
Suffrage. I now look upon it as nothing more than our
own just right, and I am doiug all that I can to interest
others in the cause. But 1 find with surprise that those
who claim to be reformers, and are loud in their profes-
sions of respect and appreciation of women, when I come
to ask them for some practical demonstration more than
the general gallantries of polite society to substantiate
their claims, they become suddenly indifferent,, or
boldly.d^olare : 0, it is not thus that we desire to see
iadies advanced and elevated! I find in your paper
clear aud concise answers to every objection men offer
to your position, but knowing their own weakness, they
cowardly shrink from even their perusal; and some
consider that I unsex myself, and others that I am a
maniac on the questions of The Social Evil and
Woman's Bights. I am pleased that George Francis
Train can galvanize true democrats into espousing our
cause. All honor, especially to him, who never deceived
a woman, whose name is without this almost uni-
versal reproach! I find our own sex after all our great-
est enemies. They attack our claims with more acute
ridicule and keener sarcasm than man is capable of.
But enough of this. I think very muoh of The Revo-
lution, and am not willing to lose a number, as I in-
tend to have them bound as suggested in the paper it-
self. I inclose a list of persons to whom please send it.
Hoping that the great need of the age may be accom-
plishedthe elevation of our sexI subscribe myself,
with respect, Your friend,
P, W. Raley.
WHAT AN IRISHMAN THINKS.
New Yons, March 2,1869,
Editors of The Revolution.
X favk read all the numbers of your journal so far,
and, am happy to state, that I have received a great deal
of information from your spirited and very intelligent
advocacy of the right ol the slaves, everywhere. But
you, like a great many other well meaning people, I am
afraid, are very apt to make mistakes at the start, which
may estrange a large and very powerful element in the
United States from the good cause of which you are- in-
deed the eloquent exponants.
The Anti-slavery party, from time to time, were too
much given to comparing negroes with Irishmen--
drunken Irishmen and the- party were astonished
at the sensitive Irish, iu not working in harmony with
those who were and are in the habit of thus offending
them. The Irish as a people, are not and never were in
favor of slavery, but the advocates of universal liberty
in this country were for a lorn' period, and are now to a
certain extent; the best friends to England, the deadly
enemy to the Irish people at home and abroad, and, in
fact, the enemy of the human race.
Irishmen, like most American men, do not like to be
associated with negroes ; neither do Irishwomen wish
it to be supposed, that they are to he found only in your
kitchensj although there is oftentimes as much truth and
decency in kitchens, as in parlors and bow-windows.
But I am happy to find that your Revolution is
truly American; not drawing any inspiration from
Exeter Hall and the London Times. So much the better
for the principles wnich you so fairly and squarely put
forward; the franchise for women ; protection for Amer-
ican industry and freedom for all people irrespective
of races and colors. God speed The Revolution.
The right to vote is a great-blessing to an intelligent
aud virtuous people, and'to them only should the gift be
extended. The ignorant, and those who are guilty of
crimes against the state and society at large, should be
prohibited strictly from electing men or women to any
office. Women, certainly, have as good a right to say
who are to make and execute the laws as men. Women
are in many things equal, if not superior to men, in
taste, virtue, wisdom, courage and judgment. I know
two women of -but average intelligence, who, after each
of them had only a short acquaintance with James
Stephens, C. O. I, R., pronounced the great Head-Centre
a very Utile man ; and yet, Stephens was surrounded for
years by men of great minds certainly, who were con-
vinced that the Fenian Chief was a terrible fellow en-
tirely. And -the most remarkable feature in the opinions
of these two women of Stephens is, that they have never
exchanged a word about the man, so far.
Fraternally yours, Eugene OShea.
. A REPLY TO GENERAL
Editors of Revolution :
In your No. 6 Gen.-----, through Mrs. Stan-
ton, asks : What Alfred H. Love would have
had us do in the Revolutions of *76 and *61 ?
Would he have let the red coats come in and
the rebs \ go out ?
Answer : I would have had you simply be
men and women ; and if the highest convictions
of your nature and your duty, and your best
knowledge, after going to school with six thou-
sand years of history behind you, and the ex-
ample and triumphs of Jesus Christ with you
for eighteen hundred years, have taught you no
better than surrender your manhood and wo-
manhood, your spirituality and divinity, and ac-
cept the lowest plane with the uncertain arbi-
trament of the sword, you could not have done
differently and you must still reap as you
sow.
Red coats might have come in and
slavery have gone out, sooner and more cheaply,
for ifl their country complexion is not the price
of liberty. And the women of our land might
have had deoent respect and'Equal Rights, for
they could have pointed to Queen Victoria with
more hope for the rights of ballot and office.
And had we have let the rebs go out, we
should not have had the fearful drain of blood
and treasure to keep them in, and now the im-
peachments and arrests to keep them out.
And as for what I would have doneI did
not live in *76, but did in *61, and though I coun-
sel all the world never to hinge present action
upon the grooves of the past, but to live and act
in the revelation aud inspiration of the moment
and do better, still 1 put on record in *C1:
What a sublime spectacle it would be to find
a people willing to., relinquish their artificial
claims to country for the sake of peace, and
carrying out the principles of Christ. There
has never been a nation willing to relinquish a
single inch of territory. Why not part with
discordant members for the sake of the Union
which means harmony ? Why not be willing to
retreat and retire into such a domain as would
be harmonious, and where the rights of all
Gods creatures would be recognized?
As there was free will in the formation of the
Union, let it be maintained upon this free will
policy, which has been the admiration of the
world.
Secession would not then be mooted for light
and trivial causes, especially if we were to make
the privilege of remaining in the Union a mat-
ter of desert. Let the question beare you good
enough, free enough, patriotic enough for the
Union, rather than what extent of territory or
human authority will, be added. Let it be
known that neither geographical limits nor gov-
ernmental powers comprehend and secure the
highest prosperity or closest unity, and that co-
ercion is not conversion.
1868 endorses this, and I add thereunto for
The Revolution that the old plan has been
tried and failed, mid 1 ask to revolve. Millions
of cannons mark cowards. This radical press
and the outspoken truth that will not serve
two masters, mark the braves of the' day.
With Jesus as our model and the Christ of
our individual'natures as our guide, we shall
know neither limit to country nor end of af-
fection for mankind ; and as for red coat or
reb, learn to hate the sin but never the sin-
ner. And then may we find the term Gen-
I eral defined : One high in the rank of man
impoverishing, enslaving, wounding and kill-
ing.
Inquirer, whoever thou art, resign thy com-
mission. I honor thy noble intentions ; but look
highertrust the testimonies of Jesus, He
who loseth his life for my sake shall find it.
Suffer rather than cause suffering. Die' rather
than kill. Hopefully and fraternally,
Alfred H. Love.
Philadelphia, 2d month 22, 1868.
LB. C. B. BOYNTON,
AND THE FIRST CONGREGATIONAL CHURCH AT
WASHINGTON, D. C.
Washington, Feb. 22,1868.
Editors of the Revolution :
I was astonished to find in the .last number of your
spioy and fearless Revolution, a statement repre-
senting Dr. Boynton, the radical Chaplain of Congress,
as hostile to the admission of colored people to his
churchthat he preached a sermon io sustain bis views,
and that Gen. Hancock (it should be Howard), who had
raised $100,000 from Northern Congregaticnalists for the
erection of the church edifice, had led a respectable
minority protesting against the doctrine and action of
the pastor.
Now, as a member of that church and society, and
President of the Board of Trustees, I desire to inform
you that these statements are in every material point
untrue and calculated to injure the character and posi-
tion of th9 church and pastor, all of whom are thoroughly
anti-slavery and anti-caste,' and would, under no circum-
stances, join or minister to a religious body which would
exclude from the communion and fellowship any person
on account of race or color.
It may be proper, however, to state that Dr. Boynton,
151
§Uv0tMti0tf.
-*- -.... ___;__ ____i.j_________________;_
in November last, delivered a discourse on the subject-of
" races, and while he distinctly claimed equal- civil
and religious and other rights, for all men and women,
and emphatically denied the right to exclude them from
o.ur churches and societies, he expressed the opinion
that, in large communities, the colored people would find
. it for their highest interest to organize and maintain as-
sociations of their own, and thus reach the highest point
of manly and Christian attsdnment by a development of
their own excellencies and peculiarities.
Very respectfully, D. M.Kelsey.
We are constrained to differ entirely with the
Reverend Doctor in his conclusions as to any
form of negro-pew-worship or education. Until
spiritual astronomy discover another heaven
and worship, and God also, for the hereafter of
us, all such fastidiousness as this had better be
overcome. If colored people can sit with their
mistresses as slaves, servants and wet nurses,
suckling young Senators and Presidents, and
the dainty baby mothers of Senators and Presi-
dents, washing, dressing and cooking for the lady
saints of the capital generally, and of Dr. Boyn-
tons church particularly with its peculiarly
loud professions, it would seem to us better
that he should leave all class and caste preach-
ing to the rebel priests and prophets who still
prowl through the South, and sometimes even
' steal into the North.
IS LABOR TO BE DIGNITY OR
DEGRADATION?
Honor and shame from no condition rise ;
Act wellyour part; there all the honor lies.
Tupto, in a late number of The Revolution,
tells the story of a woman who sought work closely
veiled, and says she was of that class called genteel poor,
who would rather die(!) than have it known by any-
body that they would descend to sewing even as a means
of ekelng out a scanty income. We are inclined to
think that the woman who would put forth both such a
plea and complaint as that, wanted the sum earned lor
some purpose not included in lifes necessaries.
Actual need destroys false pride and makes labor hon-
orable ; something to be sought for openly, and not by
stealth.
Then again, the woman who would rather die .than
have it known she worked for her living, shows & sad
disregard tor the reputation she tries to sustain in an un-
derhanded way, by leaving ber friends in doubt as to the
manner in which she procures the means necessary to
sustain it; is it notoar preferable to any right-minded,
thinking woman to be known as an independent worker,
than feel the askance eye of suspicion or doubt?
An unprincipled employer would be more than likely
to lake advantage of a person who would seek honest la-
bor in such a questionable manner, upon the sound sup-
position that the same feeling which governed'such an
action, would secure silence.
Hundreds of poor women in this city, educated and
tenderly nurtured in early life, would be glad to get work
which they could do, not to eke out the pittance of
only a thousand a year, but to be all in all to them by
replacing the crusts with a fresh loaf, or putting the
loaf on an empty shelf; and feel no shame in going for
such work unveiled. We earnestly suggest the propriety
of giving work to such, and rigidly withholding it from
the former class till their need dignifies the labor.
Although the want of good, wholesome independence
amongst women to do any and all just and necessary
things, is a fact to be both deplored and condemned,
still, individuals are not wholly responsible for it. So-
ciety, that something and nothing composed of and sus-
tivind mostly by women, is the hot-bed of rivalry wiiere
principles of false pride, false shame, and false show are
bred and instilled into the minds of each to the end of
prefering death to honest labor.
It leaves the imagination a wide range and correspond-
ing blank in the continunity of our remarks ; but when,
women cease to make the frivolities of dress the horizon
line of their mental range they will then be able to see
the injustice of their'exclusion from the ballot.
S. F. R.
Mb. Greeley furnishes the last illustration of the sage
remark of Josh Billings : When a feller gets to going
down Hill, It duz seem as tho everything had been
greeced for the okashun.
ORDINATION OF A LADY MINISTER AT
HINGE AM, MASS.
-------
Mrs. P. A. Hanapord Was ordained and in-
stalled pastor of the Universalist Society in
Hingham Feb. 19. A correspondent of the Bos-
ton Journal gives the following particulars of
the services :
The church was crowded with spectators, including
very many personal friends of the candidate, some of
whom came from great distances, to be present at the
services. Mrs. Hanafords name is familiar in almost
every household of New England, and to thousands in
all parts of the Union, as the author of several very ex-
cellent works, among which are the Life of Lincoln and
The Soldier Boyboth deservedly popular. The sweet
song, The Empty Sleeve, is also from her pen. She is also
the editor of the Ladies* Repository, a Universalist Maga-
zine ; of The Myrtle, a Sunday School paper, and is
farther well and favorably known as a most talented
lecturer on temperance and reformatory themes. '
Sermon and Right Hand of Fellowship by Rev. Olym-
pia Brown. Ordaining prayer by Rev. Elmer.Hewefct.
Rarely has a more deeply interesting or profitable
occasion been enjoyed by the writer than was experi-
enced this day. Every service was feelingly rendered,
and of the very large congregation present there were
few dry eyes during some of the more impressive and
solemn of the exercises.
WOMAN IN THE POST OFFICES.
The Stale Sentinel, Republican organ at Montgomery,
Alabama, under date of 19th nit., says that there are
more than forty ladies acting as Postmasters in that
state. The agent ot the Post Office department there
says-they are discharging their duties with groat
fidelity and promptness ; in no instance are they ever
behind in making their returns or paying over public
monies. Of cour?e efforts have been made to turn
them out, on account of their sex. Some oi the other
gender are always engaged in -such congenial tasks.
Through the exertions of Judge Gier, the aforesaid
agent, and the kindness of Gov. Randall, they have been
retained. There are a great many women now in charge
of Southern Post Offices. Most of them can take the
test oath, while competent men cannot in very many in-
stances. Did it never suggest itself to the editor of the
Sentinel and others of the party entrusted with the re-
organization of the South, that a person competent to
manage a Post Office may be fully equal to duties of
citizenship ? Recently we noticed that Judge Under-
wood complimented Mrs. Harper, a talented woman of
color who has been lecturing in the South, by saying
that she was doing more work for reconstruction than
any two men who were laboring in the same field. We
have waited with some interest to see the Judge take
steps in the Virginia Constitutional O&nvention, for the
purpose of making Mrs. Harper* the political equal of
at least one man.
EDUCATION OF GIRLS.
It is at least a hopeful sign that the attention
of so many of the best men and women through-
out the civilized world is turned to the subject
of womans education. Every good writer con-
tributes something valuable, and few writers
fail to say something, such is the public interest
already awakened. Some sharp criticisms upon
modern English life, written in a fresh and vig-
orous style, are contained in a book by Professor
DArcy W. Thompson, just published in Edin-
burgh, under the title of Wayside Thoughts.
On the subject of girls and womans reading
and thinking, he says :
The goal to which all a girls thoughts are directed,
from childhood upwards, is matrimony. In every tale
she reads the heroine is followed by her with absorbing
interest, as she pursues a tortuous pathway through two
entire volumes and three-quarters of a third to a Rosa-
mond's bower, in which is standing a clergyman in a
surplice. Now, surely, in the name of all that is logical,
if wedlock is thus to preoccupy all the thoughts of girl-
hood, it should be kept as carefully before the mind ot
boyhood as the goal of all ultimate endeavor, seeing that
wedlock is a condition that affeots one sex as much as the
other. Atall events, a woinan can never be married, but,
from the necessities of the case, a man must be.married
at the same moment. And yet we should regard with
unqualified and merited contempt a wrotch that should
maunder through a sentimental youth into manhood,
wasting his thoughts and energies ufion mawkish antici-
pations of connubial bliss. We feel intuitively that a
man should pursue some definite useful career, inde-
pendently of all connected with marriage ; and ho can
only win respect of himself and his fellows by the prose-
cution of a fixed and honest calling.
Why then should the world of usefulness be closed
against feminine aspirations? Why should all chance of
independence be deuied ? Why should the happiness of
half humanity be staked upon what, in seven cases out
of ten, is a matter of utterest contingency ? Why should
a man be allowed to push his way to fortune, and a wo-
man be compelled to wait until she be pulled into it ? It
would seem as though we had two separate creeds for the
two sexes, and believed in freedom of tbe will for man
and in fatalism for woman. There is an extremely beau-
tiful fairy tale, exquisitely handled by our Poet Laureate,
of a sleeping princess awakened by the true lover's kiss.
The story is thus far true in its suggestions, that warm
and reciprocated love throws a superlative charm into
tbe life of man or woman; but it is false if it suggests
that woman Las no duties or responsibilities of weight
anterior to wedlock, and no subsequent duties and re-
sponsibilities disconnected with her new condition.
MAID SEE VANT DRESSES IN ENGLAND.
The English papers are calling loudly for a
reform in the dress of their servant girls. Ladies
are scandalized at the near approach of these
girls to themselves in dress ; and as there is not
always difference enough in deportment and be-
havior to distinguish the different classes, it is
proposed to label the servants by a costume
that shall leave no room for doubt. The Lon-
don Saturday Review remarks very sensibly, if* a
little impudently, to the upper classes, that they
have a mote in tlieir own eye, and that the re-
form can be brought about m one way only:
The reaction in favor of a neat aud simple
style must come from above, and not from be-
lowin the way of example, not precept. When
ladies of fortune and position in England or
America cease to lavish their thousands on mil-
linery, their copyists in the nursery and kitchen
will cease to spend their wages on a similar ob-
ject. '
Gloucester, Mass.The newspapers tell of
great destitution in that usually flourishing
town, but it did not prevent our receiving from
them an encouraging list of subscribers to
The Revolution one day last week. We
earnestly hope to do something to prevent a
recurrence of the present tide of suffering now
sweeping over the land, for at least a century,
if not forever. Our nation has gone far in evil
doing and now reaps its reward, tho innocent
suffering with the guilty.
One Way to Do It.The New York Ti'ibune
said the other day that to elect a man to office
who deliberately gets drunk is to bring delirium
tremens into our legislation and to make the
preparation and execution of our laws uncertai q ,
wild and spasmodic. Now is the time for the
men who really believe in the virtue of temper-
ance to show their faith by their works. Let
us resolve to vote for no man who has not
strength enough to resist the temptation of
wine. An exchange intimates that this is a
specimen of Mr. Greeleys support of Gen. Grant
for the Presidency.
Kansas.From Lawrence and all parts of the state the
most intelligent, moral and truest women are asking
suffrage. It is fast being demonstrated that it is the ig-
norant, the weak, the vicious, and the caieless who op-
pose.
152
lUvflJtttitfi!.
SUSAN B. ANTHONY, Proprietor.
NEW YORK,. MARCH 12, 1868.
MAN THE USURPER.
In the February number of the Radical is an
a' Mcle by David Cronyn, which wo publish to-
day, under the head Woman as a Mendi-
cant. In many respects the argument is able.
and timely, though founded on two fallacies :
one, that woman does not demand suffrage ; the
other, that her helplessness and degradation are
not enforced like that of serfdom, peasantry,
or slavery, but a defect per se, in and of her-,
self. On the first point the writer says :
In the present New York State Constitutional Con-
vention, an. effort was made to secure to woman the
right of franchise. The committee on suffrage, Horace
Greeley, chairman, reported adversely. A leading, if not
the leading reason given for such report was, women did
not want suffrage, did not ask it The fact alleged is un-
deniable.
In the face of this undeniable fact, let us state
that at least ten thousand of the leading women
of New York, wives of judges, lawyers, editors,
clergymen, and merchants appeared as petition-
ers before the Constitutional Convention, de-
manding the light of suffrage, and many
proudly refused to sign the petitions, because,
said they, we will not humble ourselves to
ask of man what is our right. Among these
petitioners were such women as the sister of
Secretary Seward, the daughter of Thurlow
Weed, the wife of Horace Greeley, wife of Theo-
dore Tilton, wife and daughter of the Hon.
Gerrit Smith, wife and daughters of Judge
Daniel Cady, wife and daughter of the Hoh.
Charles Sedgwick, sisters of Gen. John Coch-
rane, etc., etc., showing that the leading women
in wealth, rank and intelligence in this State
now make the demand. In the very hour that
Horace Greeley read that unworthy report, the
Convention was all in motion with the innumer-
able petitions poured in from every part of the
state, asking suffrage for woman. We repudiate
the assertion, as not only insulting to us, but
opposed to the facts of the century. Woman is
waking up everywhere to the claims of the new
and higher civilization. When, in old monarch-
ical England, where the best minds are in a
measure palsied by the demon of caste, women
are rising up in their dignity, throwing off the
schackles of custom and demanding a voice in
the government, shall it be said that here, un-
der the inspiration of our free institutions,
the most enlightened minds in the country do
not know enough of the machinery of govern-
ment to demand their political rights? No, no ;
all this talk of woman not wanting suffrage is
like the old talk that the black man was con-
tented in slavery.
When New York abolished her prpperty quali-
fication for white men in 1821, did ten thou-
sand of that disfranchised class petition as we
did for the right ? When in 1848 and 1868 it
was proposed to abolish the property qualifica-
tion for black men, did ten thousand of that
class petition for the right? Woman has peti-
tioned more than all these classes put together,
and not in such humble tones either, that the
writer's of this day need complain that the wo-
men who-have fought this battle in New York,
and radically changed her legislation for women,
have not shown a proper pride and seli-respect
and power.
Horace Greeleys assertion was not true, nei-
ther was it his real reason for his action. That
suffrage committee decided in caucus before
giving us a hearing or counting our petitions,
to report just as they did!. The real reason for
their action was that the republican party could
not afford to make a new issue, with all the
other odious measures it had on hand. Wo-
man's apathy, is no greater than was that of
the white men in 1821, nor the black men in
1848, nor the two million plantation hands to-
day. We pray David Cronyn to grant us suffi-
cient intelligence in NeV York to understand
Horace Greeley, if we do not know enough to
demand the right of suffrage.
On the second point the writer says, in regard
to the enforced slavery of woman :
No, let her cease fondly comparing herself with the
negro. The latter is not honored by the comparison.
The cases have few points of analogy. He was helpless,
not for the chains that bound his limbs, but for those
which fettered his intellect, for the prison which walled
in his soul. Given freedom to the latter, the former had
long since been broken and flung to the winds. Woman
has the supreme condition of freedom and justice. That
condition is moral and intellectual liberty. Let her use
thiaT Let her act 1 Let tier act! But she does not act;
she complains. She does not work; she begs. She does
not demand; she supplioates.
In comparing the woman with the negro we
hut assert ourselves' subjects of law. It is not
in fondness but humiliation that we admit
our condition. The old adage, might makes
right, is the one law of violence, war, slavery,
oppression, injustice, that has thus far governed
the world, ^subjugating alike the weaker animal,
race and sex to brute force. In the infancy
of the race, as of the individual, passion and
power rule, until the waking moral nature
holds the animal beneath its feet. This being
the law of life, we by no means make man re-
sponsible for all the blunders and barbarisms
of his ignorance ; we only ask the nineteenth
century to shed the dead skin of the past, and
bring its customs, creeds, and codes into har-
mony with the higher civilization we are now
entering. Whether the negro is honored by-
comparing him with serfs, peasants, or women,
matters little so long as all are equally dishon-
ored in being thrust outside the pale of political
consideration.
The difference in the slavery of the negro and
woman is that of the mouse in the cats paw, and
the bird in a cage, equally hopeless for happi-
ness. One perishes by violence, the other through
repression. If the mouse escapes it is stronger
for the struggle ; if the bird escapes it perishes
in its native element.
There are many points of analogy in the con-
dition of all disfranchised classes. The fact
.that women and negroes have no voice in the
government is one strong point of analogy;
that women and negroes are taught obedience
to their white masters in the Bible is another ;
the fact that women and negroes have ever
been the slaves of white man, the one to his
lust, the other to his avarice, makes too many
points of analogy for woman to contemplate
without a deep feeling of indignation. But if
there are no points of analogy in the condition
of women and negroes, why did the white
man in his wisdom make the same laws for
both classes? Why are women and negroes
shnt ont of the colleges and professions to-
gether if there are no points of analogy in their
condition ? Why do the telegraphic wires bring
the news to-day that in Kansas and Iowa hence-
forth women and negroes are to be permit-
ted to practice law. We have stood together in
the laws and constitutions in our degradation,
why not together in our exaltation ? We rather
think from this passage that the writer is a re-
publican or abolitionist, which is about the
same thing, and wants black men to enter
the kingdom first. Woman, he says, has
the supreme condition of freedom and justice!
with the laws of barbarism on our statute books;
moral and intellectual liberty! shut out of
the world of work, Columbia, Harvard, Yale!
Harriet Hosmer the gifted artist knocked at the
doors of our eastern colleges for a course of
lectures on anatomy, but iu vain until she
reached St. Louis, in a slaveholding state! Lot
her act! She enlisted in the late war ; you dis-
missed her in disgrace, without pay. Let her
work! You will tell her where, and give her
half pay for obeying yon. Do women make the
laws and customs ? Theodore Tilton in his de-
mand is right, David pronyn to the contrary
notwithstanding. Let the usurper make volun-
tary restitution of one-half the universe to its
rightful queen, then talk of womans dutiesto
herself, to God, mid man. Mr. Cronyn says :
We repeat it respectfully and deliberately, there is one
great beggar in the world. It is woman as she is repre-
sented by the conduct of the pending issue.
This is cool talk for the usurper to-day, after
holding woman a victim under his heel for cen-
turies, legislating her property, wages, every-
thing into his own pocketafter all the seli-
denial and sacrifice of mothers, sisters and
daughters that man might be educated and ex-
alted. In your circle of acquaintances, reader,
can yon find one father who has made his sons
all toil that-a daughter might enjoy the advan-
tages of a classical education? left them in pov-
erty that she might be rich ? Can you find one
family of brothers who have voluntarily spent
their lives in drudgery, to give a sister an edu-
cation superior to their own? If there are such
cases they are rare indeed, while facts of life-
long self-denial on the part of mothers, daugh-
ters and sisters stand out at every turn. Where
have we ever seen a society of men formed for
the express purpose of educating poor but
.pious young women?
Yet we have not only done that in the past
for men, but every year our journals herald
many facts of women of wealth giving and be-
queathing large sums of money to boys schools,
colleges and universities, to the utter neglect of
their own sexa proof of womans lack of self-
respect. If women are beggars/they are made
so by the injnstice of men. As we understand
the demand of to-day woman asks no more
than the poor devils in the Scriptures asked.
Let us alone. Blot our names ont of your
statute books. We ask no special laws or con-
stitutions or customs for us. We are willing to
rough it with man, and abide by the same laws
he has made for himself. We have tried the
rights, privileges and immunities accorded to
negroes, and now we are ready to try the white
mans code. We ask no more than Diogenes
in his tub asked the intruder: Stand from be-
tween ns and the sun.
Shakspeare, in his Titus Andronicus, tells of
the kings beautiful daughter, whom rude men
seized, cut out her tongue, cut off her hands,
and then bade her call for water and wash her
hands. Not more unreasonable are the men
of our day, who bid woman go forward to take
the rights denied herto enter the colleges and
professions barred against herto express her
153
itu ijev0lut*0
opinions at the ballot-box and altar and fireside,
when law and Gospel {dike forbid it. No, man
can never know all that it costs every woman
who makes for herself a place to stand. It is
easy for man to go forward, for the universe is
his, by common consent, and woman is his pro-
perty, made for his pleasure. This is the com-
mon idea taught, men say, in the Bible, the
constitution and by the facts of life.
After further berating woman for her frivolity,
Mr. Cronyn says :
When she is serious, every department of effort
flings wide its doors to her. Mrs. Somervilles sex
stands not in the way of generous recognition and honor.
One embodiment of self-respect like Margaret Fuller is
a perpetual burning reproach to the universal effeminacy
of her sex. Anna Dickinsons presence and personality
on the platform, are infinitely more powerful for her
cause than her arguments.
Most magnanimous! You fling wide your
doors after woman is inside the citidel. After
Mrs. Somerville has educated herself outside
your universities and secured a place in the
world of science, and you cannot shut your eyes
to the fact, you give her generous recognition.
Margaret Fuller is a perpetual burning re-
proach to the men of Massachusetts, that the
sphere in which she moved was so narrow and
her labors in life so poorly paid or praised.
Over what a holocaust of wounded hearts and
reputations of noble women, over what labo-
rious years of argument and assertion Anna
Dickinson at last gained the height she holds,
those who have worked and watched and waited
know. Her personality may long make her po-
sition sure, but we need arguments still to show
others less brave, that her shining paths are
free to all*. One fact like Frederick Douglass,
was worth much towards emancipation, yet it
took thirty years of argument and four of
bloody .war, to open the eyes of this nation to
its injustice to his race. And though we have
multitudes of facts, we shall ply the argument,
until all women have a generous recognition
of their rights whether in science,, literature and
art, or the more humble employment of every-
day life. We ask generous recognition for
the pale, weary workers m our school-houses
and factories, in the garrets and cellars .of our
cities ; for the outcast burdened with sorrow
and guilt, and for the caged children of ease,
pining amid luxury for something to do.
Speaking of womans education, the writer
says :
There must be some serious defect iu our domestic
and educational institutions that furnish such an infe-
rior article of woman.
The supply is always suited to the demand.
The women of a nation are always moulded
after mans highest idea. For a quarter of a
century strong-minded women have been the
target for the soom and ridicule of politician,
priest and press in this republic; hence the
harvest of weak-minded ones, we all alike de-
plore to-day.
We fully agree with the writer in his estimate
of our female seminaries, but so long as woman
holds neither the purse nor the ballot, she can-
not bribe or vote the doors of Harvard and Yale'
The writer further says : -
The agitators of Female Suffrage movement are
laboring under a peculiar difficulty. They are trying to
liit a dead weight with a minimum of power. They are
endeavoring to elevate woman against her own volition.
It is not so sure that political suffrage will prove a speedy
remedy for all her ills ; that, the ballot secured, the now
lifeless and inert mass will rouse and tend irresistably
to higher conditions.
Our difficulties are the same that John Bright
labors under in demanding suffrage for ignorant
Britons, the same Wendell Phillips labors un-
der in demanding saffrage for ignorant Africans;
but few of their clients know the priceless value
of the rights their champions claim. But the
cry of liberty is the mightiest power to galva-
nize dead souls to life, and freedom is their
native element; hence, when we work with na-
ture, progress, though slow, is sure. We do not
suppose that suffrage will end simoons, small-
pox or superstition, but it will secure political
equality, which our Fathers, who were wise men,
considered a great blessing. And believing the
old adage, that what is sauce for the gander is
sauce for the goose, we ask the privilege of trying
it, and we do not propose to let these crafty men
like David Cronyn, Wendell Phillips and 0. B.
Frothingham, shirk their responsibility in this
matter, under any plea of the supposed indif-
ference of woman to the question. It is your
business, gentlemen, to take down the barriers
your hands put up. Have you not found lifes
battle hard enough while all its paths to you
were free ? Are not the tasks that Nature gives
to all alike enough for our development, that
man should build his artificial walls to block our
way ?
The writer mourns womans lack of self-re-
spect. Where'shall she go, we ask, to learn the
fitting lesson?. To mans laws and constitu-
tionswhich from, Coke to Kent, degrade her
from a person to a thing ? To the Biblewhere
mans translations of holy words degrade Gods
laws to his desires, and make woman but the
creature of his will? To the facte of lifewhere
woman has reverently conformed herself, her
ways and will and wishes to mans creeds and
codes? Whatever class in life is ostracised, that
class is degraded in its own eyes, for equality is
the first condition of self-respect. When man
recognizes woman everywhere as his peer she
will set new value on herself, and not before.
The line of historical movement lies through Wo-
man's Suffrage. But will she accept it as alms or achieve-
ment? Shall it he a concession to her weakness, or a
victory to her strength ; a propitiation to her affection
or a conquest of. her character ; a deed of chivalry or of
extorted respect and justice? These are not unimport-
ant questions to womanly pride. Let her reflect upon
them. The .ballot is a moral educator even to whom it
comes unsought. But its benefioence is increased ten
fold to those to whom it comes in answer to their own
extraordinary seeking. $VA
We are in an attitude to take it both ways.
Those who have fought for it bravely twenty
years could take it now as an achievement;
those who have accepted the situation with
pious resignation could take it as alms. Neither
David Cronyn, Wendell Phillips, or O. B. Fro-
tlnngham, achieved the ballot by extraordi-
nary seeking; their fathers fought the battle,
they entered into the glory. The strong-
minded women, too, have fought our battle and
it is but just that our weak-minded should reap
the benefit. Why demand a more universal in-
terest of woman in politics than men have ever
manifested. .
But, in spite of his heresies, we are glad to hear
David Cronyn on this subject. We like this
berating and scarifying woman, it is better than
worshipping ns in the clouds as of yoro. Wo
are glad to have woman at last touch terra firm?.
Wendell Phillips bravely led off in this direc-
tion three years ago, and our best thinkers are
falling in line. This change of base is a good
sign. It is a confession of weakness on the part
of the usurpers, and argues a speedy surrender.
They know they are surrounded, cornered.
They cannot answer our arguments; no man of
common sense attempts it. Now, the question
is, shall they stand still and let us fire hot shot
on their devoted heads till they are annihilated
with a sense of their awful responsibilities or
shall t_ey spring to the battlements and turn
their guns on us ? We say, fire away, gentlemen,
but do not load too heavy lest your guns kick
and kill man instead of woman. e. c. s.
Petekboro, Feb. 28, 1863.
My Dear Cousin : I am glad to get your letter, and
to read in The Revolution that you had so pleasant
a time in Johnstown. * * * * *
You are maxing, with the help of my excellent friend
Pillsbury, a pungent and lively paper of The Revolu-
tion." I can but think that Train is a heavy load for you
to oarry. I was sorry you treated Garrison as you did.
He is truly a great and good man.
I am leading a quiet life, &s a man nearly seventy years
of age should.
Your affectionate cousin, Gbrit Smith.
Mrs. E. Cady Stanton.
We do not know what system of locomo-
tion is common in Madison County, but
in our high state of- civilization here in New
York the people do not carry the Train, but the
Train the people. G. F. Trains avoirdupois is
of little consequence to The Revolution so
long as he walks on his own legs, and carries it
on.his shoulders.
But young Hercules will, no doubt, willingly
shift his burthen as soon as our veteran reform-
ers, like Atlas of old, return to their duty.
LETTER FROM MISSOURI.
TWO WEEKS AT THE STATE CAPITAL.
Editors of ike Revolution:
I find that the agitation of the suffrage question daring
the political campaign of Kansas, last autumn, has done
much toward arousing the minds of people here, who
had perhaps never before given the subject a thought.
Missouri at that time wa9 watching the movements of
her sister state with deep interest, anxiously awaiting
the result of her great struggle for Womans Suffrage,
andnota tew felt a sincere regret in the defeat.
The great question now with the dominant party is
power ; all minor considerations are ignored to accom-
plish this object; and while they question the expe-
diency of the negro on their platform, is it surprising
that they shrink from woman ? When we reflect that in
this state all questions of progress are novelties, sprung
upon a people before they can be able to weigh any mat-
ter with proper consideration j we have every reason to
.be sanguine (judging by the present) of future success.
Scarcely one year ago the women of this state joined
themselves into a Womans Suffrage Association, and
shortly after their organization, sent a petition to the
legislature signed by some three hundred, praying that
the word male might be stricken from the state Con-
stitution. This was followed by the introduction of a
bill, in the form of an amendment to another bik then
before the House, which received thirty-nine votes.
This winter the same petition has been renewed, vrith
the addition of eight or nine hundred signatures, and
although a question of policy will probably exclude the
subject from all further consideration this season, stil
we cannot fail to observe the great progress which has
been made during the last twelve months. A writer
who has recently published a work on The New Re-
public, speaks in glowing terms of the brilliant
prospects of Missouri. He notes the influence which
Nature exercises on the souls of men. He assumes that
a lofty, mountainous country has. a tendency to inspire
with noble impulses and develop the higher qualities iu
man. He prophesies for the future of the great West a
high state of cultivation and civilization in the humau
race, which will eclipse, in poets, philosophers, states-
men, all that have ever walked upon the earth. Hence
we have every reason to look for great results in the
legislative halls of Missouri. During the present ses-
sion little attention has been given to anything beyond
the subject of railroads. It has been one of absorbing
interestin both houses at times creating considerable
excitement, and is indeed one ot vital importance to the
state; for on the successful operation of this principal
mode of transit, will depend, to a great extent, its future
prosperity. The Pacific railroad bill is now pending in
the House, and it is to be hoped its final disposition will
be such as to insure those improvements, of whicl^
154
i
there is a palpable need, whemeight hours are necessary
to pass over a distance of one hundred and twenty-five
miles, from the chief city to the capital. In all the state,
a more appropriate site could not have been selected on
which to build the capital. It might almost be called
the city of seveu hills. Although there are no elevations
ol great prominence, the surrounding country is one ol
continued undulation as far as the eye can reach. The
capital 19 beautifully situated, commanding an extensive
view from its dome, and can be seen at a long distance
up anid down the river, whose turbid waters wash tho
foot of its grounds. Nowhere in Missouri do we find
the romantic in scenery. Our Niagara, Hudson and
White Mountains are in no degree reproduced in this
state, but nature, in her freak of sobriety, has compen-
sated for the absence of surface, sublimity and gran-
deur, by an imbedded wealth, which promises to make
this the richest, if not the most flourishing state in the
Union.
Missouri has entered upon a comparatively new life,
shaking off the galling and oppressive shackles with
which slavery had sought to bind her and girding her-
self With noble purposes and fresh resolutions, she has
launched forth as a new state unfetteredfree! If the
men who stand at the helm are true to principle, firm in
their adherauce to the fundamental laws which they pro-
fess to have adopted as their basis, there need be no
fears ior results in the future.
The Radical State Convention for the election of dele-
gates to the Chicago National Convention, was held here
in the House of Representatives, on the 22d, the anni-
versary of Washingtons birtn-day. The assembly was
large and everything passed off harmoniously. They
adopted 'no platform and steered clear of all side issues
contenting themselves with an enthusiastic expression
of preference for Gen. Grant as the Presidential candi-
date.
WOMAN AS A MENDICANT.
BY DAVID CRONYN.
From the Radical.
In the present New York State Constitutional Conven-
tion, an effort was made to secure to woman the right of
franchise. The committee on suffrage, Horace Greeley
chairman, reported adversely. A leading, if not the
leading^eason'given for such report was, women did not
want suffrage, did not ask it. The fact alleged is unde-
niable. But its validity as a reason is questionable. To
our mind, it were wiser for the committee and -the con-
vention to aim to develop a sense of responsibility, a
seeking for it by imposing it. But the world is not up
to that. Constitutional Conventions do not regard it as
their function to educate public sentiment, but rather,
to gratify it. The fact of womans unconcern had its
weight with the committee and the convention, as it has
its weight with the world. The indifference of the great
mass outweighed the interest of a few. The pitiful
fraction of petitioners commanded no influential respect*
This is natural Men are still influenced more by con-
crete facts than by ideal theories ; more by action than
by apathy. Figures are forces ever in reforms.
However, we have to do here with the radical
import and not with the validity or invalidity of the
above reason. The case before the convention is an ex-
act type of the case before the country and the world.
A wide-spread and culpable apathy infficts woman. She
is insensible to her own condition. She does not want
suffrage, and does not want it because not aware of her
want. This is the most grievous fact of all. She is but
feebly interested in her own case. A half dozen cham-
pions are fighting her battles for her, and fighting them
bravely, let us admit. Her army is all generals. Evi-
dently she has more sympathizers and supporters in
the opposite, tbau in her own sex. She tightly cl :sps
the wrongs of which she complains. Hor protest is thus
far futile, because feeble. The old traditionary rule con-
tinues in three in default of her appearance in the court
* of appeal.
The popular idea of mans responsibility for womans
situation, contains only a partial truth. There are two
parties to the guilt. Man is one, woman is the other.
Nay, tiie latter is the greater. For, what extenuation
exists for her criminal inaction, which, more than any
other circumstance, perpetuates her bonds? Is it that
it is not for her to claim her rights, as man originally
usurped them, and should now make voluntary restitu-
tions ? This view involves a false conception of his-
toric facts. But if it were true, it is still, as a reason,
* Theodore Tilton, in Music Hall lecture on Woman
Suffrage.
palpably weak and inadequate. It simply counsels in-
definite submission to injustices which courageous
action might very speedily remove. It counsels an un-
masterly inactivity, Is it that she is rendered helpless
by enforced slavery? No, let her cease fondly compar-
ing herself with the negro. The latter is not honored
by the comparison. The cases have few points of ana-
logy. He was helpless, not for the chains that bound
his limbs, but for those which fettered his intellect, for
tho prison which walled-in his soul. Given freedom to
the latter, the former had long since been broken and
flung to the winds. Woman has the supreme condition
of freedom and justice. That condition is moral and in-
tellechaal liberty. Let us use this. Let her act! Let
her act! But she does not act; she complains. She
does not work ; she begs. She does not demand; she
supplicates. All this, while her own powerful self-
resources lie undeveloped. She appears on the steps of
the world as mendicant, complaining of mans injustice
and womans -wrong ; mans tyranny and woman's
servitude ; mans usurpation and womans helplessness,
and bogging, piteously begging, her rights I
We repeat it respectfully and deliberately, there is
one great beggar in the world. It is woman as she is re-
presented by the conduct of the pending issue. Dear
as her cause is to us, we cannot close our eyes to her
great complicity in the crime of her own personal, so-
cial, and political degradation. The radical difficulty of
her case lies deeper than statute law, than conservatism,
than physical weakness, than sex. It lies simply in her-
self. She invites and perpetuates all that she suffers.
She does this by her weakness of character, her feeble-
ness of intellect, her levity of soul, and, as the result of
all, by her fatal inaction. Doubtless her composition is
the partial product of our institutions. So is that of un-
just man, as for that matter. Yet, 'if there be such a
thing as freedom of will, she cannot he wholly despoiled
of it. In the active exercise of that freedom, lies her
salvation. Not anothers, but her own volition is the
vital need. The help she wants is self-help.
They who would he free,
Themselves must strike the blow. .
This is the divine condition of whatever enfranchise-
ment is worth anything. When it comes to that, woman
will find the world ready to fly to arms in her defence.
When she is just to woman, man will be just to her.
When she is truly respectable, she will be respected.
The fatal obstacle to Romans Amelioration is her want
of self-respect. Indeed, it is hard to resist the conclu-
sion that this is, in the ultimate analysis, the Pandoras
box of her wrongs. She respects everything save her-
self ; yes, respects herself as a personal, social, conven-
tional creature, but not as woman. This devitalizes her,
leaves her weak and impotent, kind, loving, humane it
you will, but yot weak and impotent, a prey to circum-
stances that knead her like a thing of dough, a prey to
accidents which destroy her individuality. In either
sex, self-respect is the condition of force and elevation
of character. It is emphatically so in woman. In any,
it is the surest means to the suffrage and honor of the
world ; it is supremely so in woman. She lacks it and
lacks all. She commands the praise, flattery, admira-
ation, love, and chivalry of men, but not their respect.
She commands man, but not his mauhood.
Various practical forms illustrate the evil of which we
oomplain. It is beheld in the sentimentalism which is
the characteristic and bane of female society ; in the
mean and abject servility to the caprices of fashion; in
herrunning to dress like an uncultivated garden-plot to
weeds ; in her absorption in gallantry ; in her devotion
to heartless artificial conventions ; in her absence of
high intellectual tastes and ambitions ? in her want of
self-masteryin a word, in her appalling and disastrous
disproportion of feeling of thought, of imagination to
judgment. Nut wholly without reason is her name a
synonym for frailty, fickleness aud superficiality. IJot
without reason is she still classed with children, ne-
groes, idiots, mid Indians. lake these, she is the sub-
ject of the sensational. Like these, she has literally a
savage passion for baubles and colors, tinselly and
tawdry ear-rings and finger-rings. With them, her vo-
cabulary is prolific in interjections and exclamations.
She is with them a creature of imitation. Her basis of
respect is external and not internal, is sense, and not
self.
There must be some serious defecj in our domestic
and educational institutions that furnish such an in-
ferior article of woman. They give us beings with all
surface accomplishments, hut being destitute of mental
. strength, thoughtful earnestness, dignified characters.
Our female seminaries are notoriously hot-beds of fe-
male sentimentalism. Our misses and ladies schools
give us too many misses and ladies, too lew wo*
men. The female product of our present educational
methods strikingly illustrate the theory of Prof. Baines
recent article in an English periodical, on the correla-
tion of the mental powers. In the prevailing stamp of
female mind, the will and intcllectare utterly swamped
and hurried away in a Niagara tide of feeling over into
that awful gulfher heart. There must be, we say,
some grave defect in tie instruments employed, that
society fails to get more of a higher type of woman.
But wherever the difficulty lies, whether in curriculum
or system, the great vital necessity still stands. The
characterless condition of female characters must be re-
moved, before any true and permanent amelioration is
possible. Until that time, woman cannot be just to
herself. Until then, sooiety will not be just to her. In
the nature of things, weakness commands love and
pity, not respect and power.
Womans way to empire is'through her will. The
world bears her no malice prepense. Her sex is no
misfortune, despite the drivelling of those who would
bring it into disrepute, or make it an excuse for her
vegetative conditions. When she is sericus, every de-
partment of effort flings wide its doors to hor. Mrs.
Somervilles sex stands notin the way of generous re-
cognition and honor. Physical weakuess proved no
obstacle to Madam Pfeiffers extensive travels afoot.
One embodiment of self-respect like Margaret Fuller is
a perpetual burning reproach to the universal effeminacy
of her sex. Anna Dickinsons presence and personality
on the platform, are infinitely more powerful for her
cause than her arguments. The latter are her propo-
sitions, the former, her demonstrations. She. was shot
at once on a political platform. Had she screamed and
fainted according to the fashion, the index-finger on wo-
mans dial-plate would have gone hack some years. But
she did not do either. Her womans strength was su-
perior to her sexs weakness. As if in contempt of her
sex, a very modest lady acquaintance of ours can beak
bread, shoot a gun. ride a horse, play the piano, solve
problems in calculus, read Demosthenes in the origioal,
write an essay and deliver it with force. Yet she is not an
exception to the radical capacity of her gender, but only
a departure from their ruling conduct. So it is. Aspi-
ration and ambition know no sex. When woman simply
does what she claims she can do, or ought to do, all the
gods are at her service. Despite mans usurpation, in-
justice, and tyranny, when did ever a woman appear
whom society did not honor ? Learning, talent, genius,
character, there in woman, as in man, when did they -
ever fail to command the respect and homage of the
world ? The law of moral and intellectual strength pre-
vails. Let woman prove herself strong, all gilts, rights,
and immunities will speedily gravitate to her.
The agitators of Female Suffrage movement are labor-
ing under a peculiar difficulty. They are trying to lift a
dead weight with a minimum of power. They are en-
deavoring to elevate. woman against her own volition.
It is not sure that political suffrage will prove a speedy
remedy for all her ills ; that, the ballot secured, the now
lifeless and inert mass will rouse and tend irresistably to
higher conditions. But granting this, how long must
the possessor of this instrument be delayed by the pas-
sivmess of woman herself? how long deferred by the
reproachful conduct of woman as a mendicant. In the
pending battle, the strategy of the field commanders
is just here open to criticism. Eagerly intent upon, the
objective point, they overlook the discipline of their
own forces. The real enemy is in their midst. Not
so, says a friend with whom we remonstrated for join-
ing in the clamorous cry o£her sex. Suppose all the Wo-
men in the United Statas should demand the rightof suf-
frage, could they have cast a single vote until man should
be pleased to let them? Reasoned like a woman,
one is tempted to say. It is only the fatal assumption
over again, the assumption of sex prejudice. Snch
reasoning is sophistical and far from broad. Man con-
trols the ballot, but not the conditions of its possession.
His pleasure in the matter is at her earnest kidding.
Let her make a general organized demand for the right,
and enforce the requisition not alone by numerical, but
by proper moral demonstrations. Granted even that he
ought to give the ballot without effort or interest on her
part. Yet if he will not, and the conditional effort is
withheld, where does the fault lie ? Our friend further
insists, with her sex, that man is responsible in this
matter, because it is mens opinions which govern wo-
men, more than womens which govern men. Very
true, this goes near the heart of the issue. It is woman's
degradation and shame that she has no opinions of her
own. There is, in the present constitution of society,
unjust as it is, no natural or inseparable artificial reason
ior her intellectual helplessness and dependency. The
taunt of the organic inferiority of the female brain takes
its rise in her self-faithlessneae. How long will she be
the pantomime of men no better than herself? Dr.
155
Win ship, when helplessly imposed upon by a fellow
student physically stronger than he, obtained justice by
quietly developing his strength, and then giving his
enemy the alternative of apology or chastisement. Is
woman intellectually weak, unable to cope with uDjust
man ? Then let her get strength, develop it, work for it,
aye, dig for it, and no longer be the inferior and depen-
dent she confesses herself. Let her cultivate intellect-
ual courage and independence. The world Is hers.
Books and brain and will are hers. A celebrated female
writer says of herself, that she took revenge on Fortune
by deserving tbe favor which Fortune did not bestow.
Let the woman of to* day take signal revenge on man by
at least deserving the privilege he does not give. To
this end, let the leaders of the woman movement change
their war cry, from the platitudinalphrase of mans
Injustice to the more needed and truthful alarm of
Woman's Apathy! Let them sweep her sex with a
storm of the red hot shell of argumentative indignation
and appeal. The fulcrum of reform is the conscious-
ness of its necessity. Let this consciousness be roused
in woman as well as in man. The line of historical
movement lies through Womans Suffrage. But will she
accept it as alms or achievement? Shall it be a conces-
sion to her weakness, or a victory of her strength; a
propitiation to her affection or a conquest of her cha-
racter ; a deed of chivalry or of extorted respect and
justice ? These are not unimportant questions to wo-
manly pr ide. Let her refleot upon them. The ballot is
a moral educator even to whom it comes unsought. But
its beneficence is increased ten fold to those to whom it
comes in answer to their own extraordinary seeking.
The reader will not mistake us. The original claim is
, granted, is advocated. The unequal applications of law
and custom are unjust. The vice of society here is that
it is striving to confine great natural forces to unnatural
channels. We sin against individual freedom by putting
purely personal tastes, proprieties and conventions into
Organic and arbitrary forms, into social, civil and po-
litical institutions. Societys should not is very well. So-
cietys shalt not is all wrong. Womans education, poli-
tics and profession are not the legitimate objects of
written statutes. Womans destiny? What petty
business! Let every man go to heaven in his own
way, said Frederic the Great. Let every woman go to
her destiny, in her own way. There is no royal road
thither, college charters and Pauline theology notwith-
standing. Let the laws of human nature have generous
scope! The forebodings of womans degeneracy are
puerile and irrelevant to the previous question. Has
she a right to personal freedom ? If so, let her have it
and let God take care of his own as He surely will. Let
her become what time, thought, and wise discussion, in
a word, what the inevitable law of human development
may make her, whether that be politician or parlor-
ticion, kitchen domestic or railroad engineer, weakling
or woman. The all-vital thing is an open field and fair
play. Mature knows no Salic law ; Society must know
none. It is as plain as plain can be that it is womans
right and duty to do
Whatever perfect thing she can,
In life, in art, in science.
But while allowing all this, we must, to the charge of
mans responsibility, return the counter-obarge of wo-
mans responsibility. The greatest obstace to her en-
franchisement, personal or political, is herself. Mo
artificial barrier opposes her which she may not beat
down, if she will, when she will- Mo opinion of mans
can stand before her womanly determination and achieve-
ment. Let her know her capacity and vindicate it.- Let
her know her rights and maintain them. We look with
bitter pain upon her passive sufferance of social shams
and conventions, which disrobe her of her aignity,
strike out her individuality, and consign her to moral
and intellectual impotence. She is the one all-powerful
reserved force of humanity. The time is ripe for the
play of that force. That it is yet comparatively inactive
lies somewhat in mans injustice, but more, far more, in
womans apathy. Let her act ! Let her act I
David Cbonyn.
Sir Walter Scott says, in Ivankoe, that the
youngest reader of romances and romantic bal-
lads, must recollect how often the females, du-
ring the dark ages, as they are called, were ini-
tiated into the mysteries of surgery, and how
frequently the gallant knight submitted the
wounds of his person to her cure, whose eyes
had yet nitre deeply penetrated his heart. If
women were M. D.s in the dark ages, it should
not be thought wrong or revolutionary in this
age.
it Involution,
GEORGE FRANCIS TRAIN ON WOMAN'S
SUFFRAGE AND THE POPE.
AN EVIDENT PAPAL BULL AGAINST TRAINS KANSAS
CAMPAIGN-r-THE great ovation at dungar-
VANTRAIN STARTLING THE MONARCHIES OF
THE OLD WORLD PROM A SLEEP OF AGESWHAT
OASSAR SAIDNATURE AND HUMAN NATURE.
The Augustinian Convent, )
Dungarvan, Feb. 18, 1868. f
Dear Revoluuion : Veni, Vidi, Vici. Na-
ture in volcano. Nature in an earthquake.
Nature in an iceberg floating in mid ocean.
Nature in a tornado in the Gulf stream. Nature
in a typhoon in the China Seas. Nature in a
thunder storm. The lightning bolt striking the
tree under which you have sought shelter. Na-
ture when two great armies meet under the
shock of battle. These all awaken the divinity
in man and inspire his soul with the grandeur
of the Almighty power of creation. Such a
thing as*an Infidel never existed.
human nature grander than nature.
Grand as is nature in the breaking of the
elements, there is nothing so grand, so majestic,
so terrible in its power as the. spontaneous out-
burst of a great soul, the outgushing sentiments
of a grateful people towards a country that
opened wide the door to their outraged kindred,
who, escaping from the despotic clutch of their
enemies, And themselves in the arms of their
friends. This great people love America more
than America loves herself. Read the Herald
and the Examiner to-day. Long letters have no
show in The Revolution. Short articles
only tell. So I refer to the journals for you to
editorialize the most remarkable of the many
ovations received on behalf of my country and
my people.
LETTER NO. THIRTY GOES TO THE WORLD TO-DAY
THE HOLY FATHER AT ROME IS AFTER US.
Those nine thousand Catholic votes, or the
majority of them at least Catholic, for woman,
have startled Rome into making a Bull. Bishop
Dupanloup of France has got the rap intended
for The Revolution. Never mind. Dont
be discouraged. The Pope is a jolly old brick,
and I will talk him out of it when I go to Rome;
and shall buy him a palace on the Hudson for a
Christmas present auyway.
THE POPES BRIEF ON FEMALE EDUCATION.
The Pope has addressed a brief to M. Dupanloup,
Bishop of Orleans, in which he compliments that Pro-
late on the position he has taken up with respect to the
education of girls. In this document his Holiness
says :
One of the plans which these writers in their cynical
daring have adopted is to pervert youth in order tiie
better to attain their object, which is the ruin of religion
and authority. They are now carrying out this plan
more perseveringly either by corrupting education cr by
insidious alterations of history, or exciting wicked pas-
sions, or by all the manoeuvres of a shameless impiety.
As the means employed hitherto affected males more
than females, and as,-for this reason, they did not attain
the object as soon as they wished, they now desire to
attack even woman, to deprive her of her native modesty,
to exhibit her in public, to turn her aside trom domestic
life and its duties, and to puff her up witn false and vain
knowledge ; so that she, who, if properly and religiously
brought up, would be like a pure and brilliant light in
the house, the glory of her husband, the edification of
her family, a fountain of peace and an attraction to piety,
will now, full of pride and arrogance, disdain the cares
and duties that are proper for woman, will be a germ of
division in tbe household, will pervert her children and
become a stumbling block to alL And, what is profoundly
deplorable 1 those who are entrusted with public duties,
disregarding this peril which menaces society no leas
than religion, favor the schemes of impiety by strange
and unheard of projects, and thus with the most ex-
treme imprudence assist in the ruin of society which
has already begun.
WHAT BULLS ACCOMPLISH.
The Bull against the Fenians made the or-
ganization a great power, and the Bull against
woman will only make our cause the more
prolific. The Catholic church itself is based
and holds its power for eighteen hundred years-
on the grand idea of the Immaculate Conception
of a woman. Mary the mother of Christ. Was
it a woman that sold our Redeemer for thirty
pieces of silver ? Was it a man that was first at
the sepulchre? Has the Pope forgotten his
noble mother? Would he have been so good
and great a man had not that exalted lady been
an educated woman ?
The Catholic priests are the best educated
men in the world. Have their mothers nothing
to do with that education ? The Pope has done
us a great service. Nothing so stimulates the
milk of Human Progress as a Bull from the
Papal See. Sincerely,
George Francis Train.
MR TRAIN UP IN THE HOUSE OF
COMMONS.
lord mayo admits that he was arrested by
ORDER OF GOVERNMENT.
Cork, Feb. 19.
Dear Revoluton : Lord Mayo comes to
time. There is one thing about these Dress
Circle men ; they .own up square when in a close
point.
MR. TRAINS ARREST.
In reply to Sir C. OLoghlen,
The Earl of Mayo said that there were persons now
in custody, who had been so since the act was suspended.
The longest period was one year and eleven months,
and of the ninety-six persons now in custody, only six-
teen had been arrested, and only four had returned
from America, after having been released. With respect
to the arrest of Mr. Train, the police had orders to
watch carefully all the arrivals from America, and arrest
all persons whom they believed to have come for the
purpose of promoting sedition or rebellion. In Mr.
Trains baggage was found a number of papers chiefly of
his own speeches, but itmust be recollected that, previous to
his departure, he had delivered at a Fenian meeting very
violent speeches, and there was every reason for thepolice to
believe that he came over to aid the movement. The police
had acted strictly in accordance with their instructions
and their duty. On Mr. Train giving an understanding
that he had not come over to aid the Fenian movement,
he was at once set at liberty. *
After a few words from Mr. Darby Griffiths, the bilk
was read a second time.
Tbe government organ here before said, it was
local police, now see what it says :
From the Constitution.
We aro glad to see that government ate not shifting
the responsibility of Trains arrest on the police. The
police (says Lord Mayo) had acted striotly in accordance
with their instructions and their duty. So we said
ourselves at the beginning, and so Ministers say now.
NAGLE DEFENDED BY MR. TRAIN.
Sligo AssizesThe Fenian Prosecutions.The as-
sizes will commence on the 25th instant. JudgesKeogh
and Fitzgerald. The Attorney-General and Solicitor-
General will attend to prosecute Captain Magle, who
was connected with the treasonable expedition of the
Jacbnell privateer that sailed round our coast in summer
last, two men of the crew having landed at Streeda, and
a General Bourke paying this town a visit at the same
time. Magle, who is an alien, will be tried by a jury
composed of half foreigners. Four others of lesser
note belonging to the same vessel will also he tried here
if the time allowed he sufficient. Accommodation is
being provided here for a troop of dragoons (40 men)
and two additional companies of soldiers (120 men) to-
gether with 100 policemen.Sligo Independent.
Shall defend Nagle if the government per-
mits. I think I can clear him.
'truly, George Francis Train,
156
EXTRACT PRIVATE LETTER FROM
(IEORGEFRAECIS TRAIN.
On the road from Dublin to Cork, )
Sunday, Feb. 23,1868. |
Dear Mrs. Stanton : * Thanks for
kind words on arrest. Your Revolution in
America is making Revolution in Europe.
Great paper. Well edited. P. P. means Powder
Parrot. E. C. *S Erin Columbia Semiramis.
Have Revlutions to No. 5February 5fch.
Hammer away on moralsTemperance, So-
briety, Infanticide, Delirium Tremens. Terrible
broadside that on Garrison. He must have
howled. Fire proof as lam against abuse,
praise, avarice, wine and woman, I must say I
should not wish to be your target. Satire kills'
more than forty yards.
Miss Susan, your school-girl manager and
proprietor, seems to be renewing .her age.
That green above the red idea of the little
Irish girls is indeed Revolution.
Sir Thomas Larcom, General in chief of all
the devils in Ireland, has written Lord Mayo
commander of all the devils in England, under
Derby, for my Revolution pamphlets.
Regards to woman sincerely,
^G. F. T.
FINANCIAL FACTS FOR TEE PEOPLE.
The following article is from a gentleman
whose moral worth and long and large financial
experience (not to speak of his great wealth),
entitle him to attentive and thoughtful perusal:
Property in United States bonds is not taxed. The
present system is most unequal, unjust and oppressive ;
and while the legal tender greenback is a great boon, and
the best currency ever devised, the bond system is most
ruinous, and is operating to dwarf our resources, to
loosen the rivets, and to weaken all the bonds of society.
It will overthrow and politically annihilate the party
which shall perpetuate It, endangering meantime the
very foundations of the Republic.
Look at the facts : Bonds pay six per cent, interest in
gold, and taking the average premium for the past four
years, equals four per cent, j then allow three per cent.,
the average taxation which other property bears, and we
have a total of thirteen per cent. Alarge portion of this
is drawn from the people who hold no bonds. A por-
tion of the United States bonds, purchased when gold
was worth 200, and up to 290, actually pay on the same
principle twenty-five per cent, annually for every gold
dollar invested, and it is proposed to perpetuate this sys-
tem indefinitely. A sop is to be thrown to the people
by reducing this interest one per cent. The Govern-
ment of the United States is run in the interestof bond-
holders. * * * * * *
To-day the Senate is about to enact a system of fund-
ing the debt, giving away the'rights, mortgaging the la-
bor and property of the people on usurious terms. Be-
fore the war, money was glad to remain free from taxes
at about 4X< per cent. ; and on what principle has it a
right to more now ? Shall bondholders bear none of the
burthens incident to a great national calamity; is theirs
the ontyinterest to be saved; is all the opppressive
weight to rest upon shoulders least able to bear it ?
Take the average population in each one hundred75
are clerks, laborers, or otherwise employees, dependent
for support upon bone and muscle, bearing no share in
profits or losses in business. This class have no houses
or lands, and few if any bonds to-day; to them the
question of government i9 such as will give employment
at fair wages and an equal chance in the race for ad-
vancement. Twenty more, making 96 of the one hun-
dred, are in some way men of business, including for-
mers whose ventures and whose opportunity for the
profitable use oi their real property insure employ-
ment to the 76 and ultimate value to the otherwise idle
capital of the other fivefile bondholders.
These twenty have no bonds, the only security which
will command loanstheir property is practically value-
less. The present system ignores them, they are bor-
rowers, not lenders ; their necessities should enter into
every scheme of financeas they prosper so will the
country. Before the war this class received and gave
credit, and thus trebled their business, using thousands
of millions of credits, in the form of book accounts and
notes of band, which kept the machinery of commerce
and labor in motion and gave prosperity to the country.
The war broke up this system, and witn it the entire
system, bad though it was, of State banks, leaving the
country to depend upon gold and silver, which never
can return as circulation while our sixteen hundred
millions of foreign indebtedness remain.
The people, therefore, look only to greenbacks as a
currencya circulating medium. And what is pro-
posed ? Just this : to limit its issue within the bounds
of about half the average direct taxes imposed to carry
on national, state, and county governments ; not enough,
we say, if equally divided among the people, to meet
half their tax bills.
England has a circulation of twenty-five dollars to
each person. Six hundred millions of currency is
fifteen dollars to each one of our population of forty
millions ; not enough for the peoples pocket change and
our enterprising citizens only the same amount as old
England doles out to her impoverished people, and it
will then reach one thousand millions of greenbacksa
sum altogether too small, as we will soon discover when
the incubfis of our present deplorable system is re-
moved. One thousand millions of greenback circula-
tion saves just so much interest as six per cent, in gold
of the retired bonds. Sixty millions in gold or eighty-
four millions in currency is a sum of sufficient magni-
tude in itself to justify an intelligent discussion of the
question, ******
Equalize the now criminally imposed burthen upon
the mass of the people by a just system of taxation.
Compel property in bonds to bear taxation in proportion
equal to that imposed upon other property.
* * * * .
We want an extension of legal-tenders. We want, and
the demand is for, a more equal division, and it must
come. Specie payment is certain to follow in the right
time. We want labor, and money to pay for labor; and,,
as surely as the Lord livth, if it is not granted we shall
see greater desolation and destruction than we have
ever yet witnessed. There is not money enough in the
cotton-growing states to pay for its cultivation alone, to
say nothing of rice, sugar, and other productions, as
well as the other numerous resourcesmining, canals,
railways, etc. *****
Issue a 3.65 convertible bond, as proposed by Silas
M. Stillwell. It wul at all times absorb auy redundancy,
and the thing will regulate itself. Capital, under this
just system, will come out from its hiding places and
enier into the business of the people, who will then and
thereby transform an intolerable oppression into a wel-
come blessing-betting the two million of idle people at
work, and at file same time removing a most dangerous
element in a season of great political excitement.
Seven hundred and fifty thousand idle people in the
North and one and a half million in the South, if em-
ployed, would average one dollar 'a day. This is six
hundred millions a year. Add two hundred millions to
keep these idle people from starving and dying, if we
are to allowthem life at all, and this eight hundred mil-
lions would pay off our whole national debt in less than
four years. Contemplate, besides, eight hundred mil-
lions of yearly taxes. Why should we not inaugurate an
American policyone adapted to our country and form
of government? Let the people no longer be deluded by
the cry of politicians, from either end of the Avenue.
Both political parlies are alike guilty ; both will at-
tempt to shield themselves by charging the crime upon
the other. There is no greater error which men in-
power commit than when they attempt to palm off their
fpiserable selfishness for patriotism. Is there any na-
tional honor or patriotism- in ruining ninety-five of the
people to promote the special interest of the other five,
and thereby sink the nation itself into insignificance ?
* * *. * *
I want the South to have its equal chance with the
North, then we will begin to meet as friends. Impover-
ishing a country or a nation i& by no means a safe way.
Let the East look to this system of unequal circulation
as now existing, when it is the common expression that
there is no money in the South, none in the West, and
ask is it not suicidal to the East It will require no far-
seeing prophet to foretell the result, nor how it came.
The question to-day is money and labor ; and we must
meet it and meet it too, upon the platform of substan-
tial justice. u. h. D.
Washington, Feb. 28,1868.
One dress-making establishment in Boston
has adopted the French fashion, and a male
modiste fits the garments of its lair customers.
LITERARY.
An Appeal for Impartial Suffrage. Bt a Law-
yer of Illinois.
Mankind are all by nature free and equal;
'Tis their consent alone gives just dominion.*
Zturicomb's Junius Brutus.
This is a well printed pamphlet of nearly a hun-
dred pages, and so far as we have had time to examine
it is one of the very best works on impartial suffrage yet
produced. In about twenty brief ohapter9 it disposes of
the whole question, statement of positions, argument,
answer to objections and conclusions. It was published
in Chicago by the Western News Company. Price per
single copy, thirty cents. We have only two regrets
about the work ; one. is that there is not also a much
cheaper form of it for a perfect snow-storm distribution ;
the other that we havnt it for sale at the office of The
Revolution. We earnestly hope for a speedy removal
of both these difficulties. As we are constantly receiv-
ing calls for tracts of this description from every west-
ern and some of the southern states, it affords us great
pleasure to thus announce as well as recommend this
eloquent and able plea for Impartial Suffrage, in The
Revolution meaning of those words.
A Manual of Instruction in the art of Wood En-
graving. By S. E. Fuller. Boston : Joseph Watson.
This too, in a very important sense, is a plea for wo
man, for whose truest and best interests the author
shows file profoundest regard. The mauual is a truly
pretty pamphlet of forty eight pages ; and is a descrip-
tion of the necessary tools and apparatus, and concise
directions for their use, with definitions of the terms
used and the methods employed for producing the vari-
ous classes of wood engravings. There are also numer-
ous fine illustrations by the author of the work. .To
families where there are boys or girls, or both, and a
taste for art, or relish for its culture, this little manual
must be a welcome visitor.
The Northern Monthly is a magazine of general litera-
ture now in its second year and a suitor for public favor.
We are in receipt of the March number containing
among other valuable articles a sketch of the life oi
Benjamin Lundy, by Dr. E. A. Snodgrass. The relations
between Mr. Lundy and liis friend Wm. Lloyd Garrison
are treated by the Doctor at considerable length. The
Northern Monthly is by M. R. Dennis & Co., 132 Nassau
street, New York, and 248 Broad street, Newark.
. Every Saturday. Ticknor & Fields, Boston, keep Every
Saturday strictly a9 do the Jews. We may get tired of
praising, but sha:l not soon oi reading it. $5 a year, ten
cents a single copy.
Editors are having hard times everywhere. They
fine them a dozen or so at a time in France, imprison
them in,Ireland, buffet and kick them out ot conven-
tions in the Southern monarchies, assassinate them in
Mexico, and starve them in Spain. Wonder when our
time for special punishment will arrive.Troy Press.
Our colleague, Mr. Pillsbury, has been neither
fined, starved, buffeted, kicked, imprisoned or
assassinated. No doubt this is partly due to the
protection secured him by Association with
strong-minded women, and partly to the /em-
itting style of his editorials, owing to his early
education in the Troy Female Seminary. Ten-
nysons Princess perchance is a fact of life.
Wendell. Phillipss Great Victory.Wendell Phil-
lips is out in an exultant double-leaded editorial in hie
Anti-Slavery Standard to-day, claiming in effect that the
success of the Impeachment conspiracy, thus far is his
work.
But for the energy of the radical wing of the repub-
lican party, he says, the resolution would never have
gone through the House.
Wendell is right. It is he that supplies the radical
party with brains, and though he boasts that he is usually
about a year in advance of that party they never fail to
come up to his platform.
Negro Political and Social Equality is the next thing
Wendell is after. Unfit that is achieved the rebellion
will never be suppressed. *
It may take a year to get the republicans, as a party,
up to Social Equalitybut as things are going now,
many have doubts.N. F. Evening Express.
You are mistaken, Express, about Wendell be-
157
.Sttfftlttiitftt,
MODERN LADIES OF LYONS.
The New York Evening Fost says : A few
weeks ago a number of ladies in Lyons, France,
sent a printed address of sympathy to Gen. Gari-
baldi. The General replied by a letter, through
the coltuns of the liberal journal of Lyons,
called Le Frogres. Upon this a number of noble
dames, belonging to the Church party, pub-
lished a bitter and insulting article in the cleri-
cal organ of Lyons, the Courier, demand-
ing the names of these revolutionary ladies,
for purposes of social ostracism, probably.
Nothing daunted by the prospect of possible
exclusion, the fair republicans responded by the
following neat epistle ini Le Frogres:
Lyons, February 11, 3868.
Me. Editor : A letter published last Saturday in the
Courier, and signed Many ladies of.Lyons, takes the
signers of the address to Garibaldi insultingly to task for
that document. Be kind enough to lend ns the columns
of Le Progres for our reply, which, we trust, will satisfy
our noble interrogaters.
Ws, mesdames, are the mothers, sisters and daughters
of those men who, in 1868, sent to the Corps Legislatif a
poor lawyer named Jules Favxe, whose French seems to
be purer than that of Vade or VeuiUot. Every one to
his taste, you know, mesdame6. We are, also, the mo-
thers, sisters and daughters of thof e men Who, last au-
tumn, carried the local elections, which your journal no
doubt remembers. - * * *
You bave sent money to the Pope, you say. Well, we
have not been angry with you for that. We have only
thought to ourselves : So much money wasted, and l'elt
sorry for the pockets of your gallant husbands. Foolish
daughters of Eve! you have merely obeyed your in-
stinct of vanity, and, hesitating for a time between the
last new bonnet and the papal demonstration, have
finally decided in favor of the pontifical zouaves. It was
in the fashion, and a nice thing to do; therefore, you
looked no further.
As to your cudgels [it seems the popish ladies had
spoken of personal chastisement], these are, apparently,
the logic of your social system, the only one, doubtless,
within your comprehension. In logic, as in other mat-
ters, people think and act according to their means and
associations.
With us language borrows no force from knotted
clubs, nor from flexible switches; we reply simply upon
common sense.
(Signed) Mesdames Barbet, Berlioz, Dttchene,
Damien, Millet, Nesme, etc. etc.
The defenders of the Pope have not as yet
replied to this pithy episth.
State of Society in Cork.To show the
effect of English oppression in Ireland^ we give
the follow big headings to as many separate ar-
ticles which appeared in the Cork Weekly Her-
ald of February 15th, and all of which relate to
Fenianism: The Recent Arrests in the City.
Conflict With the PoliceOne Man Shot. At-
tempt to Assassinate Policemen. Monday
Nights Disturbances. Stone. Throwing at the
Police. Another Attempt to Shoot Police in the
City. Robbery of Firearms. Terrible Times.
Disturbances in the city. The Feniaii Prose-
cutions.
And yet they tell us Ireland has no wrongs.
OUR AGENTS.
Mbs. B. B. Fischer, 923 Washington et., St. Louie, Mo. -
Mrs. A. L. Quimby, P. O. Box 117, Cincinnati, Ohio.
Mrs. H. M. F. Brown, Chicago, 111.
Mrs. G. L. Hildrkbband, Fond Du Lac, Wie.
Mrs. Julia A. Holmes, Washington, D. C.
Miss Maria S. Page, Lynn, Mass.
8. G. Hammonp, Peterboro, N. Y.
Financial and Commercial.America vei'svs
EuropeGold, like our Coilon, FOR SALE.
Greenbacks for Money. An American System
of Finance. American Products and Labor
Free. Foreign Manufactures Prohibited. Open
doors to Artisans and Immigrants. Atlantic~
and Pacific Oceans for AMERICAN Steam-
ships and Shipping. New York the Financial
Centre of the World. Wall Street emancipated
from Bank of England, or American Cash for
American Bills. The Credit Fancier and Credit
Mobilier System, or Capital Mobilised to Re-
suscitate the South and our Mining Interests,
and to People the Country from Ocean to Ocean,
from Omakato San Francisco. Mwe organized.
Labor, more Cotton, more Gold and Silver
Bullion to sell foreigners at the highest prices.
Ten millions of Naturalized Citizens DEMAND
A PENNY OCEAN POSTAGE, to Strength-
en the Brotherhood of Labot\ If Congress Vote
One Hundred and Twenty-five Millions for a
Standing Army and Freedmanys Bureau for the
Blacks, Cannot they spare One Million for the
Whites ?
THE REVOLUTION.
NO. X.
Talk among the Brokers in Wall Street*
The talk is all about Erie and injunctions, and the
GREAT RAILROAD WAR BETWEEN DREW AND KEEP,
AGAINST THE VANDERBILT PARTY.
The talk is that Drew got rather a sharp point on
Frank Work, in
JUDGE BALCOLMs INJUNCTION
on bim and the Attorney General, Chat Frank Work is
going to set the
ALBANY LEGISLATURE TO INVESTIGATE
the Erie Companys affairs. The talk is that if the same
Legislative Committee that reported on Pacific Mail re-
ports on Erie, where will Erie go to ? that
TACHTC MAIL WAS KTT.T.KT)
dead as mutton by the Albany Legislative Committees
report, that
ALLAN MLANBs AND HOWARD POTTERS OATH,
that it was cheap at 150, that Paoific Mail could not stand
that, and tumbled 60 per cent. The question then is if
the Albany Committee knocked Pacific Mail 60 per coni,
how mudh
WILL IT KNOCK ERIE'?
The talk is that
ing ahead of the republican party ; he has been
on their platform ever since the war begun.
We do not blame him for sitting down to rest
in pleasant, conservative bowers a little while.
This going ahead over untried bridges and
through deep waters alone forever is hard work,
6ven for the noblest and most daring natures.
The Revolution Appreciated.A lady,
writing from Western New York, says : I think
The Revolutions too good to keep. They
are needed ; so I shall send mine to my friends.
Every one wants to read them. I am more than
satisfied. Mr. J. is much pleased with it. He
would read me Garrisons letter and the reply ;
said it was too good. I was surprised, as he
always thought so much ot Garrison.
A number of boys wanted to carry the Troy Daily
We have noticed this advertisement for some
time in^the Press, and infer that boys are scarce
in Troy. Why not advertise for girls ? The
Revolution has half a dozen girls, gaily
dressed, with red and green caps and skirts,
who sell a dozen papers where ragged boys do
one. Madame D£morest will furnish this beau-
tiful costume for twenty-five dollars each, and
we assure you, Mr. Press, it pays to make half
a dozen poor girls comfortable as well as orna-
mental to the city. In all things we need a
Revolution.
NEW HAMPSHIRE WOMEN.
An intelligent correspondent of the New Yorli
Eveniny Post, writing from Concord, N. H., on.
the present political agitation there, says :
Everyman and woman in New Hampshire appears to
have been born a politician. A sketch of the character-
istic features of a political canvass m New Hampshire,
in which no mention was made of the women and the
part they take in it, would be as incomplete as a version
of Hamlet in which that philosophical prince was omit-
ted. The interest felt by the voting population in the
triumph of principle, is scarcely greater than that
evinced by their wives and daughters, whose part in the
contest is restricted to the exertion of a silent but pow-
erful influence. In conversation on general topics, the
New Hampshire women show much intelligence, and
more accurate information than is commonly found
among the representatives of a sex that is elsewhere ac-
cused of jumping at conclusions, rather than of ar-
riving at them by the usual inductive process. Their
political principles are as saored to them as their reli-
gious creed, and most of them are fully able to defend
themselves and then: position against ihe logio or the
sophistry of thosd who differ with them.
At the mass meetings a liberal portion of the hall is.
exclusively devoSed to them, and on occasions of ordi-
nary interest they attend in strong loree, listening atten-
tively and applauding warmly. It is possible that much
of the order and decorum characterizing these gather-
ings is attributable to the restraining influence of their
presence ; and certain it is that what is so fully recog-
nized and countenanced by the ladies must over be free
from much that make political associations so corrupt-
ing and demoralizing In their tendency in many parts
of the country. If the long-sought franchise is ever
given to the women of America, it will be a satisfaction
to know that, in one state at least, they will vote as intel-
ligently and judiciously as many who claim the ballot as
one of their fixed and inalienable rights. t.
Leap Year Forever.Mrs. Oakes Smith,
without distinction of time, and in utter disre-
gard of the old Saxon Leap year law, announces :
I stand to the point, and nail my colors to
the mast in defence of itthat it is right,
proper and delicate for a woman to choose her
husband ; and the man thus distinguished by
her choice will feel himself ennobled and sanc-
tified.
A Salute.Hail, hail, true friends of liberty,
firm advocates of progression! Your Revolu-
tion'is a sound!, practical Reformer, standing
unequalled in the dissemination of justice. Its
integrity for truth is glorious ; may its rising
star never set but in millennial triumph.
M. T.
Twelve newspapers in Michigan, and fourteen in
Wisconsin are advocating Woman Suffrage. The two
leading, ablest and most influential papers m the North-
west, tiie Chicago Republican and St. Louis Democrat, are
doing good worK for the same cause.
CLEWS IS WELL POSTED
in Erie, and hits the bulls eye every time, that he bought
stacks of it at 65 to 67, and the question is
WHEN IS HE GOING TO SELL ?
The faltr is, what is the matter with tho Vanderbilt
stocks ? Why are people selling them and buying West-
ern railroad stocks7
WHAT IS UP IN TOLEDO AND WABASH ?
Why are the Express companiesbrokers buying Tole-
do & Wabath and some of the
VANDERBILT BROKERS ?
The talk is that all the Western roads are a purchase,
that many of them are good for a
158
TWENTY TO THIRTY PER CENT. RISE
this year, that all the national progress is forced into the
Western States as the only outlet until the Southern
States are reconstructed and in a settled condition. The
talk is that
ROCK ISLAND TRACY
has got the inside track of that affair, and will build the
road to Omaha. The talk is that
BILLY MARSTON AND MviCEARS
have given up Ohio and Mississippi as a slow coach, and
h#ve taken to water in
UNION NAVIGATION COMPANY,
which they are running for friend Trask, and the ques-
tion is where will they.run it to ? The talk is that Billy
MARSTON CONSULTED HIS FRIEND JEROME,
t ,
and Jerome told him that he had found, the water
cure, a
SOVEREIGN CURE FOR DEAD DUCES,
and that he had better try it on Union Navigation, that
he had tried it on everybody knd everything he could,
and that he had always found it work well for himself,
but he was not quite so sure about his friends, that
MARSTON TOOK JEROMES ADVICE AND WATERED
UNION NAVIGATION
four times, so that 20 now is as good to Billy as 80 was
last year, and the public wont be a bit the wiser until
THEY DROP THE GREENBACKS.
The talk is that the capitalists
WHO HOLD CUMBERLAND COAL
are going into the Hydropathic business also, that they
intend to
WATER CUMBERLAND COAL
and float the diluted article od the confiding public that
the public may find that coal burn them. The talk is
that Drews broker last week gave
ms check for $5,000,000
to a firm who gave $4,000,000 in one check
FOR PURCHASES OF ERIE,
that Drew is all snug and long in Erie, although
FISK k BELDEN
and all Uncle Daniels other friends arc short on the
point he gave thorn. The talk is that
UNCLE DANIEL SAYS
that them critters aint reasonable in expecting him to
make money for them, that
' AARON WHEN HE STUCK
his Israelitdsh brethren and cornered them on the gold,
en calf, looked out only for himself and not his friends-
that a
CHRISTIAN METHODIST CANNOT
be expected to be anything better than a Hebrew Pa-
triarch, that this ere Erie is a big thing, and that
FRANK works RAPACITY
bothers him. The talk is that
GROESBECK HAD 12,000 AND 18,000
shares of Chicago and North Western common stock
transferred to his name on the companys books last
week, and the question is
WHAT DOES IT MEAN ?
Is Drew going into North West, and
KEEP INTO ERIE?
What is the Vanderbilt party going to do to get a con-
tinuous connection with
NEW YORK CENTRAL TO THE WEST ?
The talk in mining circles is about
JONES, FROM UP THE HUDSON,
better known as
QUARTZ BILL JONES.
This individual stuck all his friends with Quartz Hill
about $2, and then made himself scarce. He is still in
the city, but is evidently
AFRAID TO SHOW HTMftWT.'B*
1 jke a man in stock circles. He bas been a frequent visi-
tor at
JUDGE NELSONS OFFICE
of late, and it is said he is applying for an
OFFICE AT THE CANAL BOARD,
and that the Secretary of State has actually endorsed
him. The talk is that he is after
JUDGE BARNARD FOR HIS ENDORSEMENT,
that both of the Judges had better look out for him, and
inquire into his
RECENT WALL STREET OPERATIONS
%
on Quartz Hill. Chapman knows hire, and a good many
.of
drakes CUSTOMERS,
also, the talk is, Wonder if these individuals would en-
dorse him for any position where he would have to
HANDLE THE CASH ?
The talk is that, Jones would make a first ra te collector
on the
CANALS, THAT IS FOR JONES,
The talk is that everybody hopes the cash belonging
to the state wont stick as fast to his fingers as that
Of his friends if
. JONES GETS HIS CLUTCHES ON IT,
that the canal revenues wont grow any bigger under
Joness administration. That the gentlemen of the Bench
and Canal Board had belter
LOOK OUT FOR JONES.
That Jones is called Bones at home, that he is a sanc-
timonious looking and very pious Bones. That Jones
refers for honesty and integrity to
CHAPMAN, NELSON TAPPAN, RANDALL, DRAKES CUS-
TOMERS, KEEN, GILLEY AND GERMOND, AND
JOE GAY. *
That if any member of the Canal Board inquires about
Jones ot any of the above parties, that he will be sure
of an appointment as
COLLECTOR ON THE CANALS,
or something else. The talk is about the
GREAT BANQUET,
to be given at Delmonicos in 14th street on Thursday,
March 19, by the
SOCIETY FOR THE CENTRALIZATION OF GREENHORNS
SPONDULIX.
The talk is that all iho clique leaders will be there and
that they have
, INVITED TONY MORSE
to join them. The talk is that Tony Morse has accepted
the invitation and will be there, that
THE CHEQUE LEADERS
are in a fix, that the
BANES AND CAPITALISTS ARE FRIGHTENED
at the democratic victories, that they fear' the demo-
cratic party will elect the next President, and that con-
traction and specie payments will be their financial
policy and a
GRAND SMASH
will send prices down with a run. The talk is that the
banks and capitalists have told the cliques that they
MUST SELL AND REALIZE
before the presidential nominating committees meet m
May, that the clique leaders have been trying to
UNLOAD AND STICK THE PUBLIC
for a month past, that they find the public wont he
stuck. The talk is that the
CLIQUE LEADERS IN DESPAIR
have sent for Tony Morse to suggest some plan by which
they can unload and stick the public, and pay their loans
to the banks and capitalists. That
TONY MORSE HAS PREPARED
an elaborate speech which he will deliver next Thursday
at the banquet at Delmonicos to the elite of the cliques
who belong to the
NOBLE AND ANCIENT SOCIETY FOR THE CENTRALI-
ZATION OF GREENHORNS SPONDULIX,
that he will tell his experiences in Wall street in his
usual amusing style, interspersed with
ANECDOTES OF LITTLE INCIDENTS
in the lives of his dear friends, that he will tell how
HE BOUGHT THE SAME CERTIFICATE
for 100 shares of Chicago and Northwestern Common
Stock
TWENTY-THREE TIMES THROUGH ELEVEN DIFFERENT
BROKERS
from his dear friends in Broad street who reported to
him that they
WERE CARRYING TWENTY-THREE HUNDRED SHARES
for him upon their sacred, word of honor. That
THE REVOLUTIONS SPECIAL REPORTER
will be at the banquot, that
THE REVOLUTION OF MARCH 26
will contain
TONY MORSES SPEECH ON WALL STREET STOCK-
JOBBING
and the speeches of the great clique leaders
INVITATION TO TONY MORSE,
New York, March 2, 1868.
Anthony W. Morse, Esq.. St. Paul, Minnesota.
Dear Sir : I am instructed by the Committee on In-
vitations of The Noble and Ancient Society for the Cen-
tralization of Greenhorns Spondullx, of which lhave
the honor to be Secretary, to request that you will
honor us with your company at a
BANQUET (STRICTLY PRIVATE)
to be given at Delmonicos, 14th street, on the evening
of Thursday, March 19, at 6 p. m. I am also instructed
to inform you that the
PRIMARY OBJECT OF INVITING YOU
to meet the Society is to obtain ifom your varied and
enlightened experience some
PRACTICAL PLAN FOR UNLOADING
upon the public the numerous stocks wo have been
carrying for a number of years. As an earnest of the
high value we place on your
ADVICE AND PRACTICAL CO-OPERATION
in this matter we beg to inclose a certificate of deposit
on the bank of * * for $50,000 payable to your
order, and further arrangements will be made satis-
factory to you on arrival here. It is scarcely necessary
to say that all communications are to be strictly private
and confidential. I have the honor to be
Yours respectfully, Napoleon Burb,
Secretary to the Noble and Ancient Society for the Cen-
tralization of Greenhorns Spondhlix.
TONY MORSES REPLY TO THE SECRETARY.
St. Paul, March 9, 1868.
To my Deab Friends in a Fix : I got your Secretarys
letter all right. I shall be on hand. Have plenty of
champagne and
TELL DELMON1CO I AM
to be there, so he will know what to do and have a din-
ner that I can eat. You have pot this business off rather
late. The
DEMOCRATS ARE COMING IN.
They will send -
WALL STREET TO ETERNAL SMASH. SELLER
SIXTY IS THE TICKET.
If any o) your lunkhead customers ask you to carry
stocks .
CARRY THEM LIKE MY FRIENDS
who sold Northwest common just three minutes after
they had bought it for me. The
JIG IS UP FOR THE BULL CLIQUES. .
The public won't bite, they only nibble.
YOU CAN MtT.TT them,
but you cant stick them. My scheme is for a
CHOSEN FEW TO CHEAT
the balance of their oomrades. Somebody has got to be
stuck, and
CHEATING MUST BE THE ORDER OF THE DAY.
So toss up for the insiders, and let the
OUTSIDERS GO TO THUNDER.
Champagne must be well iced, and two quarts -of cream
for me to take before and after dinner. Place fete dt
veau en tortue,
(CALFS HEAD IN A STEW)
all round for everybody, excepting
Yours devotedly,
Anthony W. Mobse.
P. S. Your Spondulix, $50,000 arrived all right Sen.
159
eible thing that. Suits me to a dot. Shell out, and I'm
the boy that
WILL MATne THE FEATHERS FLY.
No slow .coaches for me. Dont forget the iced cham-
pagne, aud cream, mid stewed calls head for the
boys. Tell your committee on finance to learn the fol-
lowing beautiful lines to sing as a
CHORUS AT DELMONICOs BANQUET
to the tune of the
PEA NTJT WALTZ,
which you can get from De Comeau, Phil. Bruns, Tracy,
Arnold, or any of the Mining Board :
TONEY MOUSES CHOEUS FOR THE FINANCE
COMMITTEE.
who believe that the common stock will earn a dividend
of 15 per cent, this year. Offers have been made to de-
liver Michigan Southern shares any time this year for
Toledo mid Wabash. Both Michigan Southern and
Toledo and Wabash are about the same amount of capi-
tal, and the Toledo and Wabash extends more than
double the number of miles that the Michigan Southern
does in a straight line, although the Michigan Southern
with two parallel lines operates abont the same'as the
Toledo and Wabash, 520 miles. The Western Railroad
shares have cut loose from the influence of Erie, and
show a steady advancing tendency in their price. Pacific
Mail is steady in price, but dull and heavy. Atlantic
Mail is steady, Canton isactive and strong at 63% to 64,
Western Union is steady at 34% to 34%. The Express
companies shares are inactive. The general market
closed strong.
He thats got plenty Spondulix,
And wont give to him thats got none ;
Shant have any of our Spondulix,
When his Spondulix are gone.
Next Weeks Revolution will contain a Full ac-
count OP THE DELMONIOO BANQUET AND THE SPEECHES
op Toney Morse and the Clique Leaders, members or
the Noble and Ancient Society for the Centraliza-
tion of the Greenhorns Spondulix.
THE MONEY MARKET
was easy during the week at 5 to 6 per oent. on calk
Prime business paper is discounted at 6 to 7 per cent.
The weekly bank statement shows expansion of loans
and weakening of the bank reserves, the loans being
increased $1,915,958, and the legal tenders decreased
$1,536,563,' and the deposits $914,498. The specie is
decreased $1,377,409. The following is a statement of
the changes in the New York city banks compared with
the preceding week:
Feti. 29th March 7th Differences
Loans, $267,240,678 $269,156,636 Inc. $1,915,958
Specie, 22,081,642 20,714,233 Dec. 1,377,409
Circulation, 34,086,223 34,158,957 Inc. 67,734
Deposits, 208,651,578 207,737,080 Dec. 914,498
Legal tenders, 58,558,607 57,017,044 Dec. 1,536,563
THE GOLD MARKET
was dull and steady throughout the week, but on Satur-
day, after the board adjourned, it declined to 140% to
140%, under the pressure of sales made on a report th a
the government had been selling during the day. The
rates paid for carrying gold during the week ranged from
3 to 7 per cent.
The fluctuations in the'gold market for the week were
a* follows: Opening. Highest Lowest. Closing.
Saturday, 29, 141% 141 % 141% 141%
Monday, 2, 141% 141 % 140% 141
Tuesday, 3, 141 141% 140% 141%
Wednesday, 4, 141 141% 140% 140%
Thursday, 5, 141 141% 141 141
Friday, 6, 141% 141% 141% 141%
Saturday, 7, 141% 141% 149% 140%
Monday, 9, 140% 140% 139% 140
THE FOREIGN EXCHANGE MARKET
was inactive and heavy, especially towards the close of
the week, owing to the limited -demand from importers,
and to an increased supply of produce bills. Rates were
fully % lower, the quotations being 109% to 100% for
bankers. 60 days sterling bills and sight; 109% to 110.
Francs on Paris bankers 60 days, 5.17% to 5.16%, and
sight 5.15 to 5.13%. The produce exports are $1,000,000
morei than last week, being $3,980,200 in currency, equal
to about $2,300,000 against $4,753,533 in gold, of merchan-
dise imports. The receipts of bullion from California
for the week were $1,552,000, aud the exports of specie
were $1,548,290.
Musgrave & Co., 19 Broad street, report the following
quotations :
Canton, 63% to 64 ; Boston W. P., 20 to 21; Cumber-
land, 36 to 36%; Wells, Fargo & Co., 39% to 40%; Ameri-
can Express, 67 to 68 ; Adams Express, 72 to 72%;
United States Express ; 69% to 70; Merchants Union
Express, 32% to 33 ; Quicksilver, 21% to 22%; Mariposa,
7 to 8 ; preferred, 11 to 12 ; Pacific Mail, 111% to 111%;
Atlantic Mail, 99% to 99%; W. U. Tel., 34% to
35; New York Central, 129 to 129% ; Erie, 75 to 75%;
preferred, 80% to 81; Hudson River, 143 to 145 ; Read-
ing, 94% to 94% ; Tel. W. & W., 64% to 54% ; preferred,
73% to 74 ; Mil. & St. P., 54% to 54% ; preferred, 69% to
70 ; Ohio & M. C., 30% to 31; Mich. Central, 113% to
114 ; Mich. South. 91% to 9L%; 111. Central, 138% to 140;
Cleveland & Pittsburg, 96 to 96% ; Cleveland & Toledo,
107% to 108; Rock Island, 98 to 98% ; North West, 68%
to 68%; do. preferred, 75% to 75%; Ft. Wayne, 101 to
101%.
UNITED STATES SECURITIES
have been quiet throughout the. week, but prices closed
a fraction better.
Fisk and, Hatcn, o Nassau street, report the following
quotations:
Registered, 1881, 111 to 111%; Coupon, 1881,110% to
111% ; 5-20 Registered, 1862, 107 to 107% ; 5-20 Coupon,
1862, 110% to 110%; 5-20 Coupon, 1864,107% to 107%; 5-20
Coupon, 1865,108% to 108% ; 5-20 Coupon, Jan. and July,
1865, 106% told ; 5-20 Coupon, 1867, 106% to 107%;
tio-40 Registered, 101% to 101% ; 10-40 Coupon, 101% to
101% ; June, 7-30, 105% to 106; July, 7-30,-105% to
106; May Compounds, 1864, 118 ; August Compounds,
1864,117 ; September ^Compounds, 1864,116% ; October
Compounds, 1864, 116.
. THE CUSTOMS DUTIES
for tbe week were $2,482,946 against $2,321,183, $2,589,-
317 and $2,319,531 for the preceding weeks. The im-
ports of merchandise for the week are $4,753,533 against
$5,111,098, $5,735,486, $4,037,820 and $5,047,004 for the
preceding weeks. The exports, exclusive of specie, are
$3,980,2000 against $2,968,819, $3,686,417, $2,678,180
and $3,218,00*0 for the preceding weeks. The exports of
specie are $1,543,290 against $650,901, $934,364, $864,563
and $1,644,057 for the preceding weeks.
HE POLICIES
OF THE
AMERICAN
POPULAR LIFE INSURANCE CO.
419, 421 BROADWAY, N. Y.,
ARE THE
BEST NEW YEAR PRESENTS
FOR A WIFE, FOR A FAMILY,
THE RAILWAY 8HARE MARKET
FOR A DAUGHTER, FOR A SON, *
has been feverich, owing to tbe frequent fluctuations in
Erie, which has ranged from 74% to 79. On Saturday
the aggregate sales of Erie were over 70,000 shares, of
which 32;C00 shaies were at the first open board, proba.
bly the largest days business on record in any one
stock. The injunctions and threatened litigation in Erie
have caused many influential operators to sell the Erie
and New York Central they held, and they have in their
place taken up some of the leading Western Railroad
shares, the increased earnings ot which have attracted
their attention. Toledo, Wabash and Western and the
North West shares common and preferred were the most
active and strong. The movement in Toledo and Wabash
is attracting the attention of the street, and it is said the
heavy purchases are for account of Western operators'
FOR YOURSELF.
For a wife or Family a whole LIFE POLICY is the best
hing possible.
For a Daughter or Son an ENDOWRY POLICY is the
most desirable, as it is payable at marriage or other speci-
fied time.
For ones own self the best New Year treat is a LIFE
RETURN ENDOWMENT POLICY, which is issued only
by this Company; it gives the person a certain sum if he
lives to a specified time, or to his heirs if he decease be-
fore, with the return of the Endowment Premiums with
interest. It thereiore truly combines all the advantages
of Insurance and a Ravings Bank, whjch Jjaa not before
been done.
y^ECTTJBES AND SPEECHES
OF
GEORGE FRANCIS TRAIN.
CHAMPIONSHIP OF WOMEN.
The Great Epigram Campaign of Kansas of 1807. Pri ce
25 cents.
srx, 1805. Price 25 cents.
Speeches in England on Slavery and Emancipation,
delivered in 1862. Also great speech on the Pardoning
of Traitors. Price 10 cents. .
UNION SPEECHES.
Delivered in England, during the American.War. By
George Francis Train. Price 25 cents.
TRAINS 25 cents.
Copies of the above-named pamphlets sent by mail, at
prices named.
For sale at the office of
THE REVOLUTION,
37 Park Row (Room 17),
New York.
gTAB-B & MARCUS,
22 JOHN STREET.
AN EXTENSIVE STOCK
of the celebrated
GORHAM PLATED WARE
AT RETAIL.
Warranted superior to the Finest Sheffield Plate.
160
The Revolution;
THE ORGAN OF IHE
NATIONAL PARTY OP NEW AMERICA*
PRINCIPLE, NOT POLICYINDIVIDUAL BIGHTS AND
RESPONSIBILITIES.
THE REVOLUTION WILL DISCUSS:
1. In PoliticsEducated Suffrage, Irrespective of
Sex or Color; Equal Pay to Women for Equal Work;
Eight Hours Labor; Abolition of Standing Annies and
Party Despotisms. Down with PoliticiansUp with the
People I
2. In ReligionDeeper Thought; Broader Ideas;
Science not Superstition; Personal Purity; Love to Man
as well as God.
8. In Social Life.Practical Education, not Theo-
retical; Fact, not Fiction; Virtue, not Vice; Cold Water,
not Alcoholic Drinks or Medicines. Devoted to Moral-
ty and Reform, The Revolution will not insert Grose
Personalities and Quack Advertisements, which even
Religious Newspapers introduce to every family.
4. The Revolution proposes.
New Vork the Financial Centre of the World. Wall
Street emancipated from Bank of England, or American
Cash for American Bills. The Credit Foncier and
Credit Mobilier System, or Capital Mobilized to Be-
suse
Brotherhood of Labor. If Congress Vote One Hun-
dred and Twenty-five Millions for a Standing Army and
Freedmans,Bureau for the Blacks, cannot they spare
Ooe Million for the Wljites, to keep bright the chain of
friendship between them and their Fatherland?
Send in your Subscription.. The Revolution, pub-
lished weekly, will be tbe Groat Organ of the Age.
Terms.Two dollars a year, in advance. Ten names
($20) entitle the sendor to one copy free.
ELIZABETH CADY STANTON, j E .
BARKER RILLSBURY, f
SUSAN B. ANTHONY, Proprietor..
87 Park Row (Room 17), New York City,
To whom address all business letters.
RATES OF ADVERTISING:
Single insertion, per line......................20 cents.
One Months insertion, per line.................18 cents.
Three Months insertion, per line...............16 cents.
Orders addressed to
SUSAN B. ANTHONY, Proprietor,
37 Park Row (Room 17), New York.
THE REVOLUTION
may be had of the American News Company, 121 Nas-
au street, New York, and of the large News Dealers
throughout the country.
rjIHE. CREDIT FONCIER OF AMERICA.
GEORGE FRANCIS TRAIN PRESIDENT.
The following are among £he first one hundred share-
holders of the Credit Foncier and owners of Columbus :
Augustus Kountze, [First National Bank, Omaha.]
Samuel E. Rogers, Omaha.
E. Creighton, [President 1st National Bank, Omaha.]
, Thomas C. Durant, V. P. U. P. R. R.
James H. Bowen, [Proof 3rd National Bank, Chicago.]
George M. Pullman.
George L. Dunlap, [Superintendent N. W. B. B.)
John A. Dix, [President U. P. R. R.]
William H. Guion, (Credit Mobilier.)
William H. Macy, (President Leather Manf. Bank.]
Charles A. Lambard, [Credit Mobilier] Director U. P. B. R.
Oakes Ames, M. C., [Ciedit Mobilier.]
John M. S. Williams, [Director Credit Mobilier.]
John J. Cisco, [Treasurer U. P. R. B.)
H. Clews. -
William P. Furntss.
Cyrus H. McCormick, (Director U. P. B. R.]
Hon. Simon Cameron.
John A. Griswold, M. C., [President Troy City National
Bank.]
Charles Tracy.
Thomas Nickerson, (Credit Mobilier,) Boston.
F. Nickerson, [Credit Mobilier,] Boston.
E. H. Baker, Baker & Morrlil, [Credit Mobilier,] Boston.
W. T. Glidden, Glidden & Williams, Boston, [Credit Mo-
bilier.]
H. S. McComb, Wilmington, Del., [Credit Mobilier.]
James H. Ome, [Merchant,] Philadelphia.
George B. Upton, [Merchant,] Boston.
Charles Macalester, [Banker,] Philadelphia.
C. S. BushneU, [Director U. P. R. R.) Credit Mobilier.
A. A. Low, [President Chamber Commorce.J
Leonard W. Jerome.
H. G. Stebbins.
C. C. & H. M. Taber.
David Jones, [Credit Mobilier.)
Ben. Holladay, [Credit Mobilier.]
Hon. John Sherman, U. S. S.
The oities hlong the line of
THE UNION PACIFIC RAILROAD.
Omaha already Sixteen Thousand People.
Columbus the next important agricultural city on
the way to Cheyenne.
A Fifty Dollar Lot may prove a Five Thousand Dollar
Investment.
PARIS to PEKIN in Thirty Days. Two Ocean Fei'ry-
Boats and a Continental Railway. Passengers for China
this way l 18 ro the road will be finished to San Francisco.
Five hundred and thirty miles are already running west
of Omaha to the base of the mountains, north of Denver.
The Iowa Railroad (Chicago and Northwestern) is now
temporary bridge that has been constructed joins you
with tbe Pacific. Here is the time-table :
New York to Chicago (dra -< ing-room car all
the way, without change)...............38 hours.
Chicago to Omaha, without change (Pull-
mans sleeping palaces).................24
Omaha to Cheyenne, or summit of Rocky
* Mountains, (Union Pacific Railroad).......28 44
. 90
Say four days from New York to the Rocky Mountains.
Two thousand two hundred miles without a change of
gauge or car, or the removal of your carpet bag and
shawl from your state-room.
The Credit Foncier of America owns the capitol addi-
tion to Columbus,probably the future capitol of Ne-
braska. What is t&e Credit Foncier ?. Ask the first mil-
lionaire you meet, and the chances are he will tell you
that he was one of Hie one hundred original thousand
dollar subscribers. No other such special copartnership
o1 wealthy men exists on this continent. (A list of these
distinguished names can be seen at *i>a Companys
office.)
Where is Columbus ? Ask the two hundred Union
Pacific Railroad excursionists who encamped there on
the Credit Foncier grounds, is it not the geographical
centre of this nation? Ninety-six mileB due west from
Omaha, the new Chicago ; ninety-six miles from the
Kansas border on the south ; ninety-six gdW from the
Dacotah line on the north, Columbus is situated on the
upper bottom, at the junction of the Platte and the Loup
Fork, and is surrounded by the finest agricultural lands in
the world.
The Credit Foncier lands extend from the railway
station across the railway, and enclose tbe Loup Fork
Bridge; the county road to the Pawnee settlement run-
ning directly through, and some leading generals and statesmen are
also property owners round about. Would yon make
money easy ? Find, then, the site of a city and buy the .
farm it is to be built on. How many regret the non-
purebase lino to California, enriches its
shareholders while distributing its profits by selling
alternate lots at a nominal price to the public.
.The Credit Foncier owns 688 acres at Columbus, di-
vided into 80ft. streets and 20ft alleys.
These important reservations are made : Two ten-acre
parks; one tr n-acre square, for Hie university of Nebras-
ka ; one five-acre triangle, for an agricultural college
one five-acre quadrangle, for a public school; one acre
each donated to the several churches, Episcopal, Catho-
lic, Presbyterian, Lutheran, Methodist, Congregational
and Baptist, and ten acres to the State for the new Capitol
buildings. *
Deducting these national, educational and religious
donations, the Credit Foncier lias over 3,000 lots (44x115)
remaining, 1,500 of which they offer for sale, reserving
the alternate lots for improvements.
ADVANTAGES.
First._It is worth fifty dollars to a young man to be
associated with such a powerful Company.
Second.By buying in Columbus, you purchase the
preference right to be interested in the next town
mapped out by the Credit Fontier; and, as we dig
through the mountains, that town may be a gold mine.
Third.Owning 5,000 feet of land 1,700 miles off by
rail, extends one's geographical knowledge, and suggests
that Massachusetts, South Carolina and Virginia do not
compose the entire American Republic.
When this ocean bottomthis gigantic plateau of the
antediluvian seathis relic of the great inland lake of ten
thousand years ago, between Omaha and Columbus, be-
comes peopled, with corn-fields and villages, a lot at
Columbus may be a handy thing to have about the
house.
The object of the Credit Foncier in selling alternate
lots at such a low figure, is to open up the boundless
resources along tbe line of tbe Union Pacific Railroad to
the young men of the East. Landed proprietsrship
gives a man self-reliance, and may stimulate the om-
employeo to become employer. Fifty dollars invested
ten years ago in Chicago or Omaha, produces many
thousand now.
As this allotment of 1,500 shares is distributed through
New York, Boston, Philadelphia, Baltimore, Washington,
Cincinnati, Chicago and St. Louis, early application
should be made by remitting a check to the Companys
office, 20 Nassau street, when you will receive a deed for
the property.
To save the lot-owner the trouble of writing, the Credit
Foncier pays all taxes for two years.
Do not forget that every mile of road built westward,
adds to the value of property in Omaha and Columbus.
Cheyenne, at the toot of the mountains, four hundred
miles west of' Columbus, is but six months old, and has
three thousand people. Lots there selling for three thou-
sand dollars.
Most of the Directors of the Union Pacific Railroad,
and the Directors and Subscribers of the Credit Mobilier,
are the Shareholders of the Credit Foncier of America.
Call at the office and examine the papers.
Most respectfully,
Your obedient servant,
GEO. P. BEMIS.
Secretary.
Office of the Company, 20 Nassau Street,_00010\ LastIndex | http://digital.auraria.edu/AA00003578/00010 | CC-MAIN-2018-43 | refinedweb | 30,668 | 68.2 |
)... read more
Pd-extended 0.40.3 released, dedicated to Jamie Tittle, aka tigital.
This release is dedicated to Jamie Tittle, aka tigital, who recently died of cancer. He was a long time and key contributor to Gem and Pd in general, even while he was in the hospital undergoing treatment. He is sorely missed in this community, and I am sure by many others.
Some highlights of this release:
* more functional namespace tools ([declare] and [import])
* new appearance designed to enhance readability
* GLSL shader support in Gem
* usability improvements
* on Mac OS X, you can now build "standalone" applications
* standard locations for user-installed externals
* many bug fixes... read more
Finally, Miller Puckette has announced the release of Pure-data 0.41.0
main features:
fully 64bit capable - running Pure-data in a native 64bit environment
improved callback-based audio-api
other features (that have now slipped my mind)
available via sourceforge at
or directly form miller puckette's homepage at
Finally, Pd-0.39.3-extended has been released on all platforms!
Finally, it's done. The most polished release of Pd yet. We are further refining Pd into a truly powerful and usable programming platform.
Some highlights:
* new font and layout is the exact same size on all platforms to the pixel
* PDP/PiDiP work out-of-box on Mac OS X
* Gem has working shader support
* many new libraries: mapping, msd,
* [comport] is robust on all platforms
* -font-face and -font-weight command line options... read more
This is the first unified build of Pd-extended, it runs on GNU/Linux, Mac OS X, and Windows. It also included a very large collection of objects, documentation, examples, and manuals.
A beta version of the Windows installer has been released. It is the product of a unified build system for all of the code in the CVS.
A beta version of the MacOS X installer has been released. It is the product of a unified build system for all of the code in the CVS. | http://sourceforge.net/p/pure-data/news/ | CC-MAIN-2014-52 | refinedweb | 338 | 54.63 |
Template Meta Programming (TMP) was accidentally discovered and was not an intended feature of the language. It has many applications such as generating complex code at compile time and writing compile time checks to see if a type satisfies a given condition. It is a Turing complete language in and of itself. It is purely functional as well. Like all functional languages it can manipulate lists. Lists can be represented as a type which stores 2 other types one of which can be another pair; that can continue until a terminating value is reached. Functions can be represented as template classes.
I present to you a small framework for creating template meta lists and a couple of meta functions which can be applied to lists using a left fold. Your challenge is to write a class (or meta function) that performs a left fold. 2 test cases are in used in main.
#include <iostream> //specil type for ending lists struct meta_null {}; //type to hold value pairs; these pairs can form lists template<class Head, class Tail> struct meta_pair { typedef Head head; typedef Tail tail; }; //a simple type that holds a value template<int x> struct meta_int { static const int value = x; }; template<class I1, class I2> struct add { typedef meta_int<I1::value + I2::value> value; }; template<class I1, class I2> struct mul { typedef meta_int<I1::value * I2::value> value; }; int main() { typedef meta_pair<meta_int<1>, meta_pair<meta_int<2>, meta_pair<meta_int<3>, meta_pair<meta_int<4>, meta_pair<meta_int<5>, meta_null> > > > > test_list; //find the sum of values in the list std::cout<<fold_l<add, meta_int<0>, test_list>::value::value; std::cout<<'\n'; std::cout<<fold_l<mul, meta_int<1>, test_list>::value::value; return 0; }
For added challenge write a...
- right fold function
- a map function
- a sum function (sums a list)
- a product function (product of list).
- something i haven't thought of
Also, if you have any other TMP goodies, It would be nice if you could posts those too
hint 1:
hint 2:
hint 3:
If your struggling with TMP or simply want to know more about it, i recommend reading this article which compares TMP to Haskell. The same author also has this article on TMP
This post has been edited by ishkabible: 06 December 2011 - 03:59 PM | http://www.dreamincode.net/forums/topic/258650-c-challenge-template-meta-programming/page__pid__1504751__st__0 | CC-MAIN-2016-18 | refinedweb | 378 | 55.17 |
Details
- Type:
Sub-task
- Status: Closed
- Priority:
Major
- Resolution: Duplicate
- Affects Version/s: 10.2.2.1
- Fix Version/s: None
-
- Labels:
- Environment:JDK 6
- Urgency:Normal
- Issue & fix info:High Value Fix
Description
Issue Links
- duplicates
DERBY-5880 Move java.sql.Wrapper implementations to base classes in embedded driver
- Closed
Activity
Being new to issues of API specs and all I don't know. The words used are those of the development lead I spoke to about the topic. They are using the metadata call as described. I do not recall if the word compliance was used, however, I don't want that to detract from the problem presented here. The situation for this developer is as described. The metadata call is used to determine what is supported in code using the datasource. His comment upon finding that the datasource did not implement this JDBC 4 interfaces was:
"For non JDBC4.0 providers we expect JDBC3.0 to be returned (or JDBC2.0 depending on the spec version), otherwise, we would break."
Upon looking at the JSR221 sections cited above this seemed like a reasonable use of the call so I raised this issue so others in the community familiar with implementing datasources in complex environments could give there assessment of whether this might be a problem in other development efforts as well.
I see the issue but I'm not sure of a solution. I think compliance is the key point here though, as EmbeddedConnectionPoolDataSource does not claim to be JDBC 4.0 compliant in a JDK 6 environment. So once compliance is not there, any requirement to implement java.sql.Wrapper goes away.
Of course I think the only compliance indicator in JDBC is on Driver and Derby's driver is JDBC 4.0 compliant, but it doesn't really have a direct relationship to DataSource objects.
Maybe the meta-data call could return 3 for getJDBCMajorVersion() but then connection objects within the same JVM would be returning different values.
I.e. Connection objects created by old data source implementations would return 3 while those created by DriverManager or the 40 data source implementations would return 4. Even though it's the same implementation of Connection in all cases, i.e. all Connection objects in a JDK 6 environment are JDBC 4.0 compliant. This would be additional code for the only reason of satisfying this condition, which is assuming compliance for something that isn't.
Another question is what is the Wrapper interface being used for? The Wrapper interface is explicitly for non-standard features provided by the JDBC driver and since Derby does not provide any non-standard features there should be no need to treat any of its JDBC objects as Wrappers.
I can't say that I fully understand the issue, but from what I gather the application in question is receiving some DataSource from somewhere and it (the application) needs to figure out whether or not that DataSource object is JDBC 4. I assume the app itself is not the one creating the datasource (if it was then it should theoretically know whether or not it instantiated the JDBC 4 Derby datasource class, shouldn't it?).
My inclination is to agree with Dan's comment above: "I think the only compliance indicator in JDBC is on Driver and Derby's driver is JDBC 4.0 compliant, but it doesn't really have a direct relationship to DataSource objects."
To me it seems odd to change Derby's connection objects to return a driver version of 3 for DataSources even though the connection and the driver are both JDBC 4.
One possible application-side workaround to this problem (if it's actually a problem with Derby, of which I am not sure) is to use reflection on the data source and then check the declared methods. Since EmbeddedConnectionPoolDataSource40 is a public class that we expect users/apps to be referencing directly, it seems like we should be able to suggest that applications code according to the public javadoc for that class:
Notice that all of the Derby *40 data source classes declare two public methods, both of which are required for JDBC 4:
public boolean isWrapperFor(Class<?> interfaces)
public <T> T unwrap(java.lang.Class<T> interfaces)
So if an application has some generic DataSource object and it wants to determine whether or not that data source is JDBC 4, one possible approach is to iterate through the declared methods and search for either of the above two. If "ds" is some DataSource object, we could do something like:
boolean isJDBC4DataSource = false;
java.lang.reflect.Method [] mA = ds.getClass().getDeclaredMethods();
for (int i = 0; i < mA.length; i++)
{
String methodName = mA[i].getName();
if (methodName.equals("isWrapperFor") || methodName.equals("unwrap"))
}
Maybe this is too naive of an approach, but when I tried it on a simple repro it seems to have done the trick (i.e. isJDBC4DataSource remained false for the JDBC 3 datasource but was set to true for the JDBC 4 datasources). For what that's worth...
Simpler check would be
ds instanceof java.sql.Wrapper
or derby specific
ds instanceof EmbeddedConnectionPoolDataSource40
>Simpler check would be
> ds instanceof java.sql.Wrapper
I don't think this works, as all DataSources are instances of java.sql.Wrapper in a Java 1.6 JVM. At least that's what I noticed when I tried this.
> ds instanceof EmbeddedConnectionPoolDataSource40
I guess that's simple enough
Assumes application is running with Java 1.6 and is Derby-specific, but you're right, that'd be the easies route.
> Assumes application is running with Java 1.6
Err, meant "built", not "running". Or is it possible to build an application that references a *40 class without JDBC 4?
It is also possible to add wrapper methods with these signatures to the old data sources:
public boolean isWrapperFor(Class iface)
public Object unwrap(Class iface)
This will work as long as you access them through the ConnectionPoolDataSource interface (or the Wrapper interface) on Java 1.6, like this:
ConnectionPoolDataSource ds = myEmbeddedConnectionPoolDataSource;
if (ds.isWrapperFor(EmbeddedConnectionPoolDataSource.class)
I'm not saying it's a good solution, but perhaps it could reduce the problem?
Marking normal urgency and HVF so hopefully it will get picked up for the next release. This won't make it for 10.5.2.
EmbeddedConnectionPoolDataSource and friends have implemented the java.sql.Wrapper methods since
DERBY-5880/Derby 10.10.1.1, so I believe this is not an issue anymore. Closing the issue.
Dumb question but what from the DataSource api indicates "JDBC 4 compliance"?
The javadoc for DatabaseMetaData.getJDBCMajorVersion() does not say anything about compliance. | https://issues.apache.org/jira/browse/DERBY-2582?focusedCommentId=12491131&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2014-10 | refinedweb | 1,113 | 55.84 |
Christian Heimes wrote: > I'm sending this mail to Python-dev in the hope to reach more developers. > > GvR likes to rename the __builtin__ to reduce confusing between > __builtin__ and __builtins__. He wanted to start a poll on the new name > but apparently he forgot. > > >From > --- >. > [...] > OK, then we need to agree on a new name. I find __root__ too short, > __rootns__ too cryptic, and __root_namespace__ too long. :-) What else > have we got? > --- > > What name do you prefer? I'm +1 with Raymond on __root__ but I'm still > open for better suggestions. > +1 for '__root_namespace__' (explicit) +0.5 for '__root__' Michael Foord > Christian > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > > Unsubscribe: > > | https://mail.python.org/pipermail/python-dev/2007-November/075389.html | CC-MAIN-2016-36 | refinedweb | 115 | 77.53 |
Understanding C# Async/Await
Back in .Net Framework 4.5, Microsoft introduced two new keywords to make asynchronous programming easy for developers.
These keywords, async & await have been first class citizens of C# since then and have helped developers easily introduce concurrency in their applications.
However, many developers are still confused about the roles these keywords play in everyday development so I’m happy to shed some light on this.
Consider the code below:
public ActionResult Pay(int customerId, int amount)
{
var customer = customerRepository.Find(customerId);
var result = processPayment(customer, amount);
FlashSuccess("Payment successfully made");
return View("PaymentSuccess");
}
This method runs synchronously and in this case on the UI thread. The drawback here is that multiple requests to this method will have to wait till the currently executing request completes and returns.
For low traffic applications, this code will work just fine but when your app begins to grow and traffic increases, users might experience some amount of lag between requests. Here’s where async/await can be of help!
Now let’s modify the code above to use the async/await keywords.
public async Task<ActionResult> Pay(int customerId, int amount)
{
var customer = customerRepository.Find(customerId);
var result = await processPayment(customer, amount);
FlashSuccess("Payment successfully made");
return View("PaymentSuccess");
}
As shown above, the async keyword is included in the method definition and the return type for the method is a generic Task. Also notice that we placed the await keyword just before the processPayment(…) method.
Now what’s going on here? Well the await keyword instructs the compiler to run the processPayment(…) method on a worker thread which it grabs from the thread pool, thus freeing up the UI thread to handle other requests. Once processPayment(…) completes, control is returned to the original thread i.e. the UI thread for the request to continue processing. Typically, methods that take long to return (calls to databases, external APIs, disk etc) are candidates for asynchronous processing.
Finally, asynchronous methods can also return void instead of a Task. This is useful when you care less about whether that method completes or not, essentially performing a fire-and-forget request. One disadvantage here is that your app won’t be notified if an error occurs. | https://plasteezy.medium.com/understanding-async-await-in-c-eaea7c5bc577?source=post_internal_links---------4---------------------------- | CC-MAIN-2022-33 | refinedweb | 372 | 55.44 |
Starting with JSS version 9.6.0, Material-UI supports Content Security Policy headers.
Basically, CSP mitigates cross-site scripting (XSS) attacks by requiring developers to whitelist the sources their assets are retrieved from. This list is returned as a header from the server. For instance, say you have a site hosted at the CSP header
default-src: 'self'; will allow all assets that are located at* and deny all others. If there is a section of your website that is vulnerable to XSS where unescaped user input is displayed, an attacker could input something like:
<script> sendCreditCardDetails(''); </script>
This vulnerability would allow the attacker to execute anything. However, with a secure CSP header, the browser will not load this script.
You can read more about CSP here.
In order to use CSP with Material-UI (and JSS), you need to use a nonce. A nonce is a randomly generated string that is only used once, therefore you need to add a server middleware to generate one on each request. JSS has a great tutorial on how to achieve this with Express and React Helmet. For a basic rundown, continue reading.
A CSP nonce is a Base 64 encoded string. You can generate one like this:
import uuidv4 from 'uuid/v4'; const nonce = new Buffer(uuidv4()).toString('base64');
It is very important you use UUID version 4, as it generates an unpredictable string. You then apply this nonce to the CSP header. A CSP header might look like this with the nonce applied:
header('Content-Security-Policy') .set(`default-src 'self'; style-src: 'self' 'nonce-${nonce}';`);
If you are using Server Side Rendering (SSR), you should pass the nonce in the
<style> tag on the server.
<style id="jss-server-side" nonce={nonce} dangerouslySetInnerHTML={{ __html: sheetsRegistry.toString() } } />
Then, you must pass this nonce to JSS so it can add it to subsequent
<style> tags.
The client side gets the nonce from a header. You must include this header regardless of whether or not SSR is used.
<meta property="csp-nonce" content={nonce} /> | https://material-ui-next.com/guides/csp/ | CC-MAIN-2019-04 | refinedweb | 342 | 65.93 |
Functional library for Dart: dart_slang: ^0.0.4
You can install packages from the command line:
with Flutter:
$ flutter packages get
Alternatively, your editor might support
flutter packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:dart_slang/dart_slang.dart';
We analyzed this package on Apr 16, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries.
Document public APIs. (-1 points)
20 out of 20 API elements have no dartdoc comment.Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API._slang. | https://pub.dartlang.org/packages/dart_slang | CC-MAIN-2019-18 | refinedweb | 122 | 50.02 |
Living Water Church Ministry Training Center
Transcription
1 Living Water Church Ministry Training Center He that believeth on me, as the scripture hath said, out of his belly shall flow rivers of living water. John 7: Hall Valley Drive Bridgeport, WV Office (304) Senior Pastor Jerry Fiscus (304) Catherine Adkins, Administrator
2 Greetings in the wonderful name of Jesus; welcome to the Ministry Training Center of Living Water Church. I m thrilled that you have chosen to invest your time and resources into studying God s Word, and allowing our staff and curriculum to be a vital part of your growth in Christ. Living Water Church desires to make disciples around the world for the Kingdom of God. As Pastor, it gives me no greater joy than to see men and women grow in their knowledge of scripture, putting it into practice in their daily lives and ministry (James 1:22). Our program affords students to gain vital Godly knowledge in a consistent manner without leaving their families, jobs or regular routines for an extended period of time. Whether you will be attending for ministerial status, or to increase your knowledge of God s Word, may God richly bless you in your studies. In His Service, As a division of Living Water Church, the Ministry Training Center is dedicated to providing a biblically based curriculum. Our curriculum is designed to train and equip those called into the five-fold ministry as outlined in Ephesians 4: It is also ideal for individuals who wish to expand and enrich their personal walk with Christ through in-depth study of God s Word. Once our students have completed the fifteen course program, they will be thoroughly prepared and eligible for licensing thru Living Water Church and Ministry Training Center or equipped to seek licensing under their own Church affiliation. Each course is designed to provide our students with a firm foundation upon which to build their ministries. All students will be encouraged to use the gifts that they have received to serve others. (I Corinthians 12:4-7, Romans 12:4-8). Whether God has called you to full-time ministry or you seek to enrich your own personal studies, it is our privilege to offer this program to help you achieve your goals. I look forward to encouraging you in your journey to know Christ more intimately. In Christ, Jerry W. Fiscus Senior Pastor, Living Water Church Catherine Adkins Administrator
3 CONFESSION OF FAITH GOD MISSION/VISION STATEMENT We will teach God s Word to those who are called to service in the Five-fold ministry (Ephesians 4:11-12), Missions and Church leadership. We will use only Biblically based training materials that exalt Christ as King of Kings and Lord of Lord s (Philippians 2:10-12). OUR VISION To build a residential facility that will house students of the Bible school and offer educational facilities that will accommodate a long-term learning environment. We believe in the only true Creator, God The Father, and the Son, and the Holy Spirit, these three are one. This God created the heavens and the earth and everything in them, visible and invisible. God is sustainer of the universe, and governs over all. JESUS We believe in Jesus Christ. He is both God and man. Jesus was conceived by the Holy Spirit and was born to the Virgin Mary. He is the prophesied Savior of the whole human race, to all who accept with full faith the grace He offers. Jesus suffered and died on the cross for our sins, was buried and rose the third day, and ascended to the heavens, where He is our intercessor with God the Father. He will return again in power and glory as King of Kings and Lord of Lords, judging the living and the dead. HOLY SPIRIT We believe in the Holy Spirit, who is equal with the Father and the Son, who is the comforter Jesus promised the faithful. He convicts us of sin, and teaches gifts of the Spirit (I Corinthians 12:13), within scriptural order and discernment. He lives within us as believers, goes everywhere with us, hears all we hear, sees all we see, and guides us into all truth.
4 THE BIBLE We believe the Bible, Old and New Testament, is the inspired Word of God, and teaches the true and only way to eternal life through faith in Jesus Christ alone as Savior. The scriptures are a true record of God s revelation to man and are foundational to the Christian believer, in all aspects of our spiritual and physical life (II Peter 1:3). SALVATION We believe that human beings are brought into salvation by accepting the free gift of God, which is His Son, Jesus Christ. It is by God s grace and or faith in Jesus Christ that makes this possible (Ephesians 2:8-9). We believe salvation is based upon being in Christ and remaining in Christ (John 15: 5-7) (II Corinthians 5:17), that the first Adam fell into sin and by the saving grace of Jesus Christ we are redeemed by his atonement. BAPTISM We believe in the ordinance of baptism by immersion (Matthew 3:16, Jesus example); in the Lord s supper as outlined in Matthew 26:26, the bread and cup of the new covenant with it s observance till His return (I Corinthians 11:23-26) and the washing of the saint s feet as Jesus did as an example in John 13:4-5, 15, 17. STANDARD OF ACADEMIC INTEGRITY AND CODE OF CONDUCT Living Water Ministry Training Center is a Christian institution that desires to equip men and women to fulfill God s calling on their lives and to encourage personal growth in biblical studies. Christian conduct as guided by Scripture is expected from our students. Examples of such principles are found in Romans 12:9-21; Galatians 5:22-23; and Ephesians 4:1-3, Students of the Ministry Training Center will be required to adhere to high standards of academic integrity. The following examples represent basic types of behavior that are unacceptable and will be grounds for disciplinary action: 1. Cheating: the use of notes or unauthorized information while taking an examination, submitting work done by someone else as your own and/or copying or using someone else s work as your own. 2. Plagiarizing: submitting someone else s work and claiming it as your own or neglecting to give appropriate documentation when using any kind of reference materials. 3. Fabricating: falsifying or inventing information, data, or citation.
5 TUITION AND FEES CLASS SCHEDULE Tuition - $60.00 per course/ $30.00 audit fee Textbooks prices vary based upon selected materials. Book Deposit $25; textbooks provided for use by the institution will require a refundable deposit. New Student Application Fee - $30.00 Late Registration Fee - $10.00 Graduation Fee - $25 Diploma Processing Fee - $40.00 PAYMENT POLICIES Required books will be posted for prospective students no later than five (5) weeks prior to the start of a new term. All payments will be required in full before the start of each new ten (10) week term. Students will receive a ten (10) percent discount on tuition for payments received prior to the start of the new term. Payment plans will be available at the full fee. (Please see the administrator for details). 1. Old Testament Survey A comprehensive analysis of the Hebrew canon. The course acquaints the student with key facts in each book of the Old Testament. March 19, 2015-May 21, The Local Church in Evangelism Make evangelism a lifestyle. A practical approach to sharing your faith in everyday circumstances, learn basic skills and techniques in spreading the gospel. May 28, 2015 July 30, New Testament Survey An in-depth analysis of the Greek canon. The course highlights the key facts in each book of the New Testament. Key facts are highlighted in a manner that can be used in preaching, teaching and applying principles to everyday life. August 6, 2015 October 8, Church Ministries A practical guide for launching ministries for children, youth, senior adults and hospital and nursing home patients. October 15, 2015 December 17, 2015 (Nine week class)
6 5. Synoptic Gospels: The Life of Christ A study of the life and teachings of Christ based on Matthew, Mark, Luke and John. Emphasis is given to the context of His teachings and miracles. January 7, 2016 March 10, Cults and World Religions Students will study and receive an overview of major world cults and their belief systems. The students will also gain a greater understanding of the many world religions including Catholicism, Hinduism, Islam, Buddhism and Judaism. March 17, 2016 May 19, Acts: The Holy Spirit at Work in Believers A thorough study of the content, purposes, principles, and applications of Acts. The course emphasizes the role of the Holy Spirit in the early church and today. May 26, 2016 July 28, Church Government and Administration This course will address the roles / responsibilities of Pastor, Music/Worship Leader, Administration, Finance and Law in the life of the church structure. The course will also provide practical helps and skills needed to fulfill the individual s calling to each of these ministries. August 4, 2016 October 6, Corinthian Correspondence/Spiritual Gifts A study of 1 and 2 Corinthians, providing the student with truths for teaching and preaching as well as practical assistance in dealing with the issues facing today s church. The student will learn how Paul instructed the Corinthians to deal with division and difficulty. The operation and working of spiritual gifts in the church will also be addressed in this study. October 13, 2016 December 22, Romans An in-depth study of the book of Romans. The doctrines of sin, salvation, and sanctification will be addressed along with instruction on our union with Christ and the indwelling Holy Spirit which leads to spiritual growth. January 5, 2017 March 9, Counseling This course will cover various issues relevant to Biblical counseling including topics involving faith, life, marriage and family. Role play and counseling scenarios will be used to teach ethical and effective counseling techniques. March 16, 2017 May 18, 2017
7 12. Prison Epistles This study will focus on the New Testament books of Colossians, Philemon, Ephesians and Philippians. A practical study of the principles Paul wrote to the churches during his imprisonment will be applied to both our personal lives and ministries. May 25, 2017 July 27, Prophecy This course will provide an overview of the prophetic books of Daniel and Isaiah. August 3, 2017 October 5, Introduction to Homiletics/Relationships and Ethics A basic study of the principles of Christian preaching. The course emphasizes biblical exposition and deals with practical matters such as the preparation of sermons, the sources of materials, the construction of sermon components, the variety of sermon types and the delivery of sermons. An emphasis will be placed upon relationships and ethics in ministry. October 12, 2017 December 21, Revelation An in-depth study of the book of Revelation. January 4, 2018 March 8, 2018
WITNESS FOR CHRIST PERSONALLY AND AS A PART OF THIS CHURCH FAMILY!
COVENANT MEMBERSHIP HARVEST BIBLE CHAPEL The membership covenant of Harvest Bible Chapel, North Indianapolis is birthed out of a desire to honor Christ and give loving care and spiritual direction
Constitution. Living Water Community Church Harrisburg, Pennsylvania
of Living Water Community Church Harrisburg, Pennsylvania PREAMBLE We, the members of Living Water Community Church of Harrisburg, Pennsylvania, in order to more effectively carry out the commission given
Scripture Memory Program
Scripture Memory Program Psa. 119:11 Your word I have hidden in my heart, That I might not sin against You. Nothing enriches the spiritual life of the believer more than the discipline of Bible memory.
HOW TO BE FILLED WITH THE HOLY SPIRIT
HOW TO BE FILLED WITH THE HOLY SPIRIT The Main Thing: God gives us the Holy Spirit so we can successfully live the Christian life. Be filled with the Holy Spirit by faith, understanding who the Holy Spirit
VWCS Teacher Application
Your interest in Victory World Christian School (VWCS) is appreciated. We invite you to fill out this initial application and return it to the attention of Irene Prue, school administrator at the VWCS
SURE FOUNDATION BIBLE STUDY LESSON 3 INTRODUCTION TO THE BIBLE
SURE FOUNDATION BIBLE STUDY LESSON 3 INTRODUCTION TO THE BIBLE The word Bible comes from the Greek word biblos, which means book. The Bible that we have today is really a collection of books combined into
Big Valley Grace Lay Counseling Ministry Application Process. Requirements and Responsibilities of a Lay Counselor
Big Valley Grace Lay Counseling Ministry Application Process It is a process! Please prayerfully and thoroughly complete this application. Lay Counseling is not for everyone, though you may have a heart
The Doctrinal Statement of Adam Young
The Doctrinal Statement of Adam Young 1. The Inspiration of the Bible: I believe the writings of the Old and New Testaments are fully God-breathed to the extent that the very words were chosen by God from
Evangelical Covenant Church Discipleship/Confirmation
Evangelical Covenant Church Discipleship/Confirmation #1 What do we believe about the Bible? We believe that the Holy Scriptures, Old and New Testaments, are the word of God and the only perfect rule General Council of the Assemblies of God STATEMENT OF FUNDAMENTAL TRUTHS
The General Council of the Assemblies of God STATEMENT OF FUNDAMENTAL TRUTHS The Bible is our all-sufficient rule for faith and practice. This is intended simply as a basis of fellowship among us (i.e.,
We will work and pray for the unity of the Spirit in the bond of peace. (Ephesians 4:3)
Candies Creek Core Doctrines CHURCH COVENANT Having been brought by divine grace to repent and believe in the Lord Jesus Christ and to give up ourselves to him, and having been baptized upon our profession.
FRONTLINE CHURCH MEMBERSHIP COVENANT
FRONTLINE CHURCH MEMBERSHIP COVENANT REQUIREMENTS FOR MEMBERSHIP + A profession of faith in Jesus + Submission to the authority of the Bible + Baptized as a Christian + Ongoing commitment to a life,
OLD TESTAMENT NEW TESTAMENT THEOLOGY CHURCH HISTORY PASTORAL THEOLOGY AND MINISTRY
COURSE DESCRIPTIONS OLD TESTAMENT NEW TESTAMENT THEOLOGY CHURCH HISTORY PASTORAL THEOLOGY AND MINISTRY OLD TESTAMENT O-101 Old Testament Introduction A systematic study of the background against which
Local Church Deaf Bible Institute
International Partnership Ministries Local Church Deaf Bible Institute Leader Handbook Pastor or Deaf Ministry Leader Dear Pastor/Deaf Ministry Leader, Thank you for your interest in the Local-Church Deaf
Excellence in Christian Higher Education Founded 1999
Excellence in Christian Higher Education Founded 1999 American Evangelical Bible College and Seminary Dear Prospective Student, Warmest greetings to you in the name of our Lord, Jesus and welcome to
Know and Use Your Gifts for God's Glory
Know and Use Your Gifts for God's Glory An analysis tool to help you: Know your gifts & abilities Follow your heart & passions Get involved in Kingdom work James C. Denison, Ph.D. President, Denison Forum
CERTIFIED TEACHER APPLICATION FOR EMPLOYMENT
Northshore Christian Academy A Ministry Of Northshore Christian Church 5700-23 rd Drive West, Everett, WA 98203 Phone (425) 407-1119 Fax (425) 425-322-2386 For Office
We will achieve our vision and fulfill our mission by:
Details of Mission/Vision/Core Values & Statement of Faith We will achieve our vision and fulfill our mission by: Basing all teaching, activities and programs upon the principles taught to us through BAPTIST FAITH & MESSAGE
I. The Scriptures THE BAPTIST FAITH & MESSAGE The Holy Bible was written by men who were divinely influenced. The Holy Bible is God's explaining Himself to man. It is a perfect treasure of divine teaching.
1 The Structure of the Bible
1 1 The Structure of the Bible MEMORIZE: 2 Timothy 3:16 All scripture is given by inspiration of God, and is profitable for doctrine, for reproof, for correction, for instruction in righteousness: G od
Induction of Ministers of Word and Sacrament
of Word and Sacrament A meeting of the District/Area Council, the inducting body in the United Reformed Church, shall be constituted. The Moderator of the Synod (or deputy) shall be invited by the Council
Highland Christian Fellowship Statement of Faith 1. Important aspects of the nature and function of confessions of faith:
Highland Christian Fellowship Statement of Faith 1 Preamble Confessions of faith are rooted in historical precedent, as the church in every age has been called upon to define and defend its beliefs. Each
DOCTRINAL TEACHINGS REMOVED, WEAKENED, CHANGED, OR ADDED IN MODERN ENGLISH BIBLE VERSIONS
DOCTRINAL TEACHINGS REMOVED, WEAKENED, CHANGED, OR ADDED IN MODERN ENGLISH BIBLE VERSIONS 1. The doctrine pertaining to the exact purpose, power, and importance of FASTING is removed. 2. The teaching: Fellowship Files (1) Why Church Membership?
The Fellowship Files (1) Why Church Membership? Becoming a member of a local church is not just a good idea, it is God s idea. To be a committed Christian who is committed to Christian people pleases God.
The Challenges of Evangelism. Sharing your faith in the 21 st Century
The Challenges of Evangelism Sharing your faith in the 21 st Century Definitions What is Evangelism? Evangelism, definitions of: zealous preaching and dissemination of the gospel, as through missionary
Subject: New Beginnings A New Creation
Subject: New Beginnings A New Creation Scripture Introduction Commentary Jesus came to introduce a New Covenant so that those who follow him would be able to find salvation through his life, death and
BIBLE CHARACTER STUDIES
BIBLE CHARACTER STUDIES SESSION ONE DEVELOPING CHRISTIAN CHARACTER THROUGH STUDYING GOD S WORD God gave you His Word, the Bible, so that He can have a personal relationship with you. As you get to know
The Mission Church Statement of Faith
The Mission Church Statement of Faith The Scriptures -- God's perfect Word, our authority 1 The Holy Bible was written by men divinely inspired and is God's revelation of Himself to man. It is a
Ministry Questionnaire
Ministry Questionnaire Thank you for your interest in serving at Calvary Chapel Fellowship of Foley. We request that you fill out the questionnaire form completely and as detailed as possible. You
NATIONAL BIBLE INSTITUTE
NATIONAL BIBLE INSTITUTE NBI'S FREE QUIZ... "BIBLE: Basic Information Before Leaving Earth" Please answer all questions contained in the test. You will be contacted within a few days with your results.
What Is Baptism? What Is Baptism A Sign Of?
What Is Baptism? A sign is a promise, a signal, a visible representation of a reality that is yet to be seen. Baptism, a sacrament of the church, is a sign of what God has already done for and continues AS CREATOR, OWNER, AND PERFECT GIFT-GIVER
PART TWO GOD AS CREATOR, OWNER, AND PERFECT GIFT-GIVER I. GOD AS CREATOR. To understand Christian stewardship we must accept that God is the creator of everything, owner of everything, and the perfect
FIRST YEAR. Course Description Session I
FIRST YEAR Course Description Session I Faith God is not ignorant and cannot condone ignorance. You must be informed to be able to change this world. Faith is the power that creates, and brings into being,
Holy Spirit I. Instructor: Wendell Parr
Table of Contents Holy Spirit I Lesson 1 Three Works of the Holy Spirit. 2 Lesson 2 Dispensations of the Holy Spirit... 5 Lesson 3 Changes Produced by the Holy Spirit... 8 Lesson 4 How to Do the Greater
WHAT MAKES BIBLICAL COUNSELING BIBLICAL
WHAT MAKES BIBLICAL COUNSELING BIBLICAL Our mission Course Description Describe Biblical Counseling as will be conducted at EBC and the four aspects of what makes this counseling Biblical. Course Objectives?
Baptism Self-Study Guide
Baptism Self-Study Guide Our Salvation 1. Why am I here? a) God made me to love me! John 3:16 b) We were created to enjoy a personal relationship with God and to manage all of the rest of God s creation.
GCS Goals and Objectives
GCS Goals and Objectives Grace Communion Seminary P.O. Box 5005 (2011 E. Financial Way) Glendora, California 91740 GCS Mission: Equipping the Saints for Pastoral Ministry. We are committed to equip the
Knowing and Using Your Gifts for God's Glory
Knowing and Using Your Gifts for God's Glory How Can I Serve God at Hyde Park United Methodist Church? Know your gifts & abilities Follow your heart & passions Just do it Get involved Used with permission
Leadership and Church Partnership Information
Leadership and Church Partnership Information Thank you for your interest in partnering with Redeemed2Repeat, Inc. to walk alongside, equip and care for those who struggle with addiction. People are lost!
Premarital Sex By Evan Lenow
Premarital Sex By Evan Lenow Pre-Session Assignments One week before the session, students will take the following assignments. Assignment One Read the comments related to Hebrews 13:4 in the section It
VERSE-A-THON TRACKING SHEET
Reference NKJV Section Book Page John 3:16 1 John 4:14 For God so loved the world that He gave His only begotten Son, that whoever believes in Him should not perish but have everlasting life The Father
Statement of Faith of the Faith Missionary Baptist Church Cabot, Arkansas
Statement of Faith of the Faith Missionary Baptist Church Cabot, Arkansas The Scriptures God We believe that the Holy Bible was written by men divinely inspired, and is a perfect treasure of heavenly instruction.? What is a Covenant?
Membership Covenant The Village Church exists to bring glory to God by making disciples through gospel-centered worship, gospel-centered community, gospel-centered service and gospel-centered multiplication.
LEADING A CHILD TO CHRIST
LEADING A CHILD TO CHRIST LEADING A CHILD TO CHRIST Introduction: 1. 2. 3. Things to remember in leading a child to Christ: 1. Memorizing scripture. 2. Establish a key in your Bible. 3. Read from the Bible.
Bible OVERVIEW 1: Promise and Pattern
BIBLE OVERVIEW 1: Promise and Pattern Introduction: THE BIBLE IS ONE BOOK 2 Timothy 3:15-16 One author One subject eg: John 5:39, Luke 24:45-47 NOT A book of quotations NOT A collection of books BUT One
2017 Arkansas Youth SAMPLE Bible Drill CYCLE 3 - GREEN
2017 Arkansas Youth SAMPLE Bible Drill CYCLE 3 - GREEN This SAMPLE Drill has been prepared to: - assist leaders in seeing how a drill is conducted - guide leaders in preparing drills for practice -
_ WE BELIEVE : ESSENTIAL TENETS OF THE REFORMED FAITH
WHAT WE BELIEVE : ESSENTIAL TENETS OF THE REFORMED FAITH What are the essential tenets? The essential tenets are our foundational convictions, drawn from Scripture and contained in our creeds and confessions
Correspondence Program Course Requirements
Course Title Correspondence Program Course Requirements Instructor Course Description 1. A Sure Foundation Andrew Wommack Two subjects are covered in this course. The Integrity of the Word expounds on
Online Program - Suggested Course Schedule & Descriptions
Online Program - Suggested Schedule & Descriptions What classes do I take and when? You have a lot of flexibility in scheduling your classes. There are a few courses that need to follow previous courses
THE GOSPELS Unit 5 (2014 Revised)
THE GOSPELS (2014 Revised) CONCEPT: Each of the New Testament gospels, Matthew, Mark, Luke and John, were written by different people. Each writes about Jesus in his own unique way, but all agree that
Online School Course Descriptions
Course Title: Sure Foundation Two subjects are covered in this class. The Integrity of the Word expounds on the truth, surety, and infallibility of God s Word. Christian Philosophy shows us that we | http://docplayer.net/1187933-Living-water-church-ministry-training-center.html | CC-MAIN-2017-34 | refinedweb | 3,936 | 57 |
/* * File: main.cpp * Author: TBuchli * * Write a program that helps a real estate agent calculate an average price of *up to 20 homes. An array of data type double should be used to contain the *prices. The program should prompt the user to enter the number of prices to *average — a maximum of 20 prices. * * Then, the program should prompt the user to enter each price. Once all of the *prices have been entered into the array, sort them in ascending order. The *output should include a listing of the prices entered, and the average price *should be calculated and displayed. The program must use arrays to store the *prices and should use a loop to process the array... * * Created on November 20, 2013, 12:52 AM */ #include <cstdlib> #include <iostream> #include <algorithm> using namespace std; /* * In the beginning there was a real-estate agent that had to add the prices of *up to 20 house values, print the data in the order that which it was received, *then sort and print the data in ascending order, and finally print the average *of all house prices... * * Along came TBuchli, and the real estate agent had time to shop for the *missus... * */ int main(int argc, char** argv) { // Initialize Variables... int PRICES = 20; int housePrice[PRICES]; int houseNum; int a; double total = 0; double average; houseNum = 0; // Prompt user for house price... cout << "Enter first house price, or type 999 to quit: "; // Store entered data as an array... cin >> housePrice[houseNum]; while(houseNum < PRICES && housePrice[houseNum] != 999) { total += housePrice[houseNum]; ++houseNum; if(houseNum < PRICES) { // Prompt user for next price... cout << "Enter next house price, or 999 to quit: "; // Store entered data as an array... cin >> housePrice[houseNum]; } } // Print data... cout << "The entered house prices are: "; for(a = 0; a < houseNum; a++) // Print data in order it was entered... cout << housePrice[a] << " "; // Sort function... sort(housePrice, housePrice + houseNum); // Print data (\n = on the next line)... cout << "\nThe entered house prices in ascending order are: "; for(a = 0; a != houseNum; ++a) // Print data in ascending order... cout << housePrice[a] << " "; // Initiate average and store for later use... average = total / houseNum; // Print the average... cout << endl << "The average of all house prices is: " << average << endl; return 0; } // And everyone was Happy, Happy, Happy!!!
0
| https://www.daniweb.com/programming/software-development/threads/468042/averaging-home-prices | CC-MAIN-2017-43 | refinedweb | 379 | 73.07 |
C++ Function
A function is a subpart of a program that can be reused when needed. A function is a set of statements that perform specific task.
There are two types of functions: user defined functions and library functions.
Library Functions
C++ has some pre-defined functions in C++.
For ex:- pow(x,y), sqrt(x), floor(x), exp(x), abs(x), cos(x) etc.
Example Of Library Functions:
#include <iostream> #include <cmath> using namespace std; int main() { double num, square; cout << "Enter a number: "; cin >> num; // Sqrt () is a library function to calculate square root. square = sqrt(num); cout << "Square root of " << num << " = " << square; return 0; }
Output :
Enter a number: 26 Square root of 26 = 5.09902
In the example above, the sqrt () library function is used to calculate the square root of a number. For example, if we want to calculate the square root of 26, then the sqrt() is a library function that is used to calculate the square root.
User-Defined Function
C++ allow the user to define their own function. The function is a part of the program which can be executed whenever needed by simply calling its name with proper arguments.
Function prototype (declaration)
A function declaration give the compiler information about the function name, type and its parameters.
Syntax
A function declaration has the following parts.
return_type function_name( parameter list );
For the function add () defined above, the following is the function declaration.
int add(int num1, int num2);
Parameter names are not important in function declaration only their type is required
int add(int num1, int num2);
Function call
If we want to execute the code that is written inside the function then we have to call that function. On calling that function the control of program transfer to that function.
Syntax
Function_call(parameter1,parameter2);
Function Definition
When the function is called, the program transfer his control over the first statement of the program body and all further statements will be executed sequentially. And the control of the program will return to the calling function after successfully executing the code of the called function.
// Function definition int sum(int a,int b) { int add; add = a + b; return add; }
Syntax of User-Defined Function
#include<iostream.h>
void function_name()
{
... ... ...
... ... ...
}
int main()
{
... ... ...
function_name();
... ... ...
}
In the above syntax the program starts with the main() function. When the control of the program reaches the function_name(); then program control moves to void function_name () and executes the code inside the function and returns to the main function.
Example User-Defined Function
C ++ program to add two integers. To add integers, create a function add () and display the sum in the main () function.
#include <iostream>
using namespace std;
int add(int, int);
int main()
{
int x, y, sum;
cout<<"Enters two numbers to add: ";
cin >> x >> y;
// Function call
sum = add(x, y);
cout << "Sum = " << sum;
return 0;
}
// Function definition
int add(int a, int b)
{
int add;
add = a + b;
// Return statement
return add;
}
Output :
Enters two integers: 8
-4
Sum = 4
Exercise:-
1. What is the scope of the variable declared in the user defined function?
View Answer
Explanation:The variable is valid only in the function block.
2. Constant function in C++ can be declared as
View Answer
Explanation:Constant function in a class is used to prevent modification of class member variables inside its body. When we only want to read member variable and use it in function body with no modification then we should use const function.
Program :-
C++ program to find max between two number using function
#include <iostream> using namespace std; // function declaration int max(int num1, int num2); int main () { // local variable declaration: int a = 100; int b = 200; int ret; // calling a function to get max value. ret = max(a, b); cout << "Max value is : " << ret << endl; return 0; } // function returning the max between two numbers int max(int num1, int num2) { // local variable declaration int result; if (num1 > num2) result = num1; else result = num2; }
Output :
Max value is : 200
Visit : | https://letsfindcourse.com/tutorials/cplusplus-tutorials/functions-in-cpp | CC-MAIN-2021-31 | refinedweb | 671 | 58.92 |
Draft Rectangle/ru
Usage
-
Data
- DataLength: specifies the length of the shape in the X axis direction.
- DataHeight: specifies the height of the shape in the Y axis direction.
- DataChamfer Size: specifies the diagonal length of the 45° chamfer at each corner of the rectangle.
- DataFillet Radius: specifies the radius of the 90° fillet at each corner of the rectangle.
- DataRows: specifies the number of equal-sized rows in which the original shape is divided; by default, 1 row.
- DataColumns: specifies the number of equal-sized columns in which the original shape is divided; by default, 1 column.
- DataMake Face: specifies if the shape makes a face or not. If it is True a face is created, otherwise only the perimeter is considered part of the object.
View
- ViewPattern: specifies a Draft Pattern with which to fill the face of the shape. This property only works if DataMake Face is True, and if ViewDisplay Mode is "Flat Lines".
- ViewPattern Size: specifies the size of the Draft Pattern.
- ViewTexture Image: specifies the path to an image file to be mapped on the face of the shape. Blanking this property will remove the image.
- The rectangle should have the same proportions as the image to avoid distortions in the mapping.:
import FreeCAD, Draft Rectangle1 = Draft.makeRectangle(4000, 1000) Rectangle2 = Draft.makeRectangle(1000, 4000) ZAxis = FreeCAD.Vector(0, 0, 1) p3 = FreeCAD.Vector(1000, 1000, 0) place3 = FreeCAD.Placement(p3, FreeCAD.Rotation(ZAxis, 45)) Rectangle3 = Draft.makeRectangle(3500, | https://wiki.freecadweb.org/Draft_Rectangle/ru | CC-MAIN-2020-16 | refinedweb | 245 | 59.9 |
On 25.04.19 08:31, Nathaniel Smith wrote: > You don't necessarily need rpath actually. The Linux loader has a > bug/feature where once it has successfully loaded a library with a given > soname, then any future requests for that soname within the same process > will automatically return that same library, regardless of rpath settings > etc. So as long as the main interpreter has loaded libpython.whatever from > the correct directory, then extension modules will all get that same > version. The rpath won't matter at all. > > It is annoying in general that on Linux, we have these two different ways > to build extension modules. It definitely violates TOOWTDI :-). It would be > nice at some point to get rid of one of them. > > Note that we can't get rid of the two different ways entirely though – on > Windows, extension modules *must* link to libpython.dll, and on macOS, > extension modules *can't* link to libpython.dylib. So the best we can hope > for is to make Linux consistently do one of these, instead of supporting > both. > > In principle, having extension modules link to libpython.so is a good > thing. Suppose that someone wants to dynamically load the python > interpreter into their program as some kind of plugin. (Examples: Apache's > mod_python, LibreOffice's support for writing macros in Python.) It would > be nice to be able to load python2 and python3 simultaneously into the same > process as distinct plugins. And this is totally doable in theory, *but* it > means that you can't assume that the interpreter's symbols will be > automagically injected into extension modules, so it's only possible if > extension modules link to libpython.so. > > In practice, extension modules have never consistently linked to > libpython.so, so everybody who loads the interpreter as a plugin has > already worked around this. Specifically, they use RTLD_GLOBAL to dump all > the interpreter's symbols into the global namespace. This is why you can't > have python2 and python3 mod_python at the same time in the same Apache. > And since everyone is already working around this, linking to libpython.so > currently has zero benefit... in fact manylinux wheels are actually > forbidden to link to libpython.so, because this is the only way to get > wheels that work on every interpreter. extensions in Debian/Ubuntu packages are not linked against libpython.so, but the main reason here is that sometimes you have to extensions built in transition periods like for 3.6 and 3.7. And this is also the default when not configuring with --enable-shared. | https://mail.python.org/pipermail/python-dev/2019-April/157197.html | CC-MAIN-2019-47 | refinedweb | 427 | 65.93 |
Create a Thing Model and Bind to Device
07/10/2019
You will learn
- How to create new thing packages
- How to manage thing properties and thing property sets
- How to use thing types
- How to add a new thing connected to your device
- How to secure thing data using access rights
The Launchpad provides all the tools for creating thing types, properties, things but also persons, companies and KPI’s. In this first step you find out how to access it.
- Go to the SAP Cloud Platform cockpit.
- Drill down into the Global Account and the Cloud Foundry Sub-account where you have configured the subscription to Leonardo IoT.
- From the left-side menu, click Subscriptions.
- Click Go to Application from the
SAP Leonardo IoTtile.
Enter the email address used with your SAP Cloud Platform account and the password if you are asked for one.
The next step is to create a package. A package allows re-use of thing types and properties within your tenant and across tenants. As we want to put the environment sensor that we have to use in the context of a greenhouse condition monitoring application, we call this package
greenhouse.
- Click Go to Application link from the SAP Leonardo IoT tile (more info in Step 2).
- From IoT Thing Modeler, click the Packages tile.
- Make sure no similar package already exists using the top-right search field.
- Create a new package by clicking the
+in the right of the search field.
- Enter your package name as
greenhouse. You can create a namespace using dots (e.g.,
my.first.greenhouse). Take notice on the scope selection: mark your package as private if you do not want to share its content in the tenant.
- Click Save (bottom right grey slice of the page).
Now that we have a package, we can start putting properties reflecting the measurements but also the master data that we need for out things. Let’s assume we are producing this greenhouse for processing warranty claims, we use a serial number to track every individual greenhouse.
Please make sure you use the thing modeler based on OData. You can check this by checking that you choose the package in the thing modeler at the top with a drop down instead of at the bottom. If the later is the case please check your role collection established when you initially configured the tenant to include
Thing_Engineer_Odata and not
Thing_Engineer or
Thing_Engineer_Fiori_Launchpad.
- Go to the Thing Properties Catalog by clicking on the tile with the same name in the home page.
- Select your
greenhousepackage.
- Select the
Defaultproperty set from the left-side list (or create it for Basic Data Properties if it is not there).
- On the Properties list on the right side, click
+just to the right of the search field.
- Enter the new property name as
serialNumber.
- Set the type to
String.
- Set the Length field value (no chars) to 64.
- Click Save (on the grey bar, bottom-right of the page).
If you have sensitive data or personal data (defined as personal by EU GDPR), mark the property set
Sensitivity Levelas
Personal Dataor
Sensitive Data.
Now that our greenhouse has a property set to capture the serial number for business process integration, let’s add the properties required to capture the measurements/time series of data coming from the environment sensor.
Note that we focus on the data we want to use in the business application – the actual physical sensor or its manufacturer does not matter. We could even get the different measurements from different devices across different communication technologies. What matters in the thing model is what you need to know in the applications built on top.
- Click the Home icon at the top-left of the page after the package creation, or click Go to Application from the SAP Leonardo IoT tile (more info in Step 2).
- From the IoT Thing Modeler section, click the Thing Properties Catalog tile. On the left side, you’ll see the list of existing property sets in the selected package that is written on the top of the search field of the list.
- Select your package from the grey slice of the screen found below the list. Click the package icon (first one).
- Search and choose your package by entering
greenhousein the search field and click on the search icon or press Enter.
- Click the package name to select it. Your property sets list (left side) will refresh and you will see the
Defaultproperty set that is meant for master data and that is created along with the package.
- To create a new property set, click
+near the package selection button (second icon on the grey bar, under the property set list).
- Set the name of the property set to
envData.
- Set Property Set Category as
Measured Values(third option).
- Click Save.
- In the Measured Values list (see Step 4 for details about adding a new property) add a property called
temperaturewith type
Integerand unit of measure
DegreeCelsius, symbol
°C, and choose to have an upper and lower threshold for the temperature.
- Add a property called
humidity, type
Integerand unit of measure
Percentage, symbol
%.
- Add a property called
light, type
Integerand without a unit of measure.
- Click
Save(on the bottom grey bar, right side of the screen).
The last step to set up our metadata and required to capture things and measurements is the creation of a thing type. A thing type brings together multiple property sets and adds additional generic properties like location, name, and description to allow to understand the full context of the thing.
- Use the Thing Modeler button in the lower right corner to jump from the Properties Catalog to the Thing Modeler.
- Click
+at the bottom-left to add a new thing type. Call it
greenhouseType.
- Add the 2 property sets you worked on earlier and save your work. The thing type will then look like this:
Now create a new mapping from this thing type to the sensor type you have created earlier using the connectivity tab and the plus sign:
First choose the sensor type from the list in the upper right and then choose the device properties that match the thing properties. In this example the names are the same but they do not have to be the same:
Now we are ready to create a new thing and map it to the device you created earlier.
- Click New Thing from the upper-right corner.
- In the dialog, enter
greenhouse1or
greenhouse2as the name (
1or
2indicates that this is your first or second instance of this type of greenhouse).
- Add a description and select the default authorization group. Note that this authorization group will be something that later, when you automate onboarding of things and you define differentiated access rights to the time series, will be very specific authorization groups you have created to make sure the your application users see only the things and time series data for those things, that they should see.
Lastly we will set the things serial number and the location to make sure we can integrate into business processes and to be able to show it on a map.
- In the Thing Modeler, set a value for the property
serialNumberfor the thing (not for the thing type). See the image below for where to enter it.
- Use HTML5 geolocation lookup example to find your current location or choose any other location using another tool (mobile phone, Google maps).
- Enter the location in the location fields in the Thing Modeler (see image below).
- Enter a lower and upper threshold for the temperature (under Measured Values).
- Then connect the device and sensors created in the earlier Tutorials in the connectivity tab.
- Save your change.
If you are ingesting data you should see this data showing up in the thing in the thing modeler under
measured values. If not please check first in IoT service and then also in the data ingestion error log app in the SAP Fiori launchpad.
You are now ready to build interactive or batch applications on top of your greenhouse things.
Prerequisites
- Tutorials: Assemble and Configure Device Hardware or at least Create a Simple IoT Device Model
- Configuration You or someone else that is an administrator in your global account has walked through the following end-to-end configuration and onboarding guide: Getting Started with SAP Leonardo IoT.
- Step 1: Access Fiori Launchpad with Leonardo IoT apps
- Step 2: Create a package greenhouse
- Step 3: Add property to Default property set
- Step 4: Create new property set
- Step 5: Create a new thing type using Thing Modeler
- Step 6: Map thing type to sensor type
- Step 7: Create new thing of the new thing type
- Step 8: Set thing master data properties and location
- Back to Top | https://developers.sap.com/mena-ar/tutorials/iot-express-4-create-thing-model.html | CC-MAIN-2019-30 | refinedweb | 1,471 | 60.55 |
‘Sup
Look, I’m sorry. Yet again, I’ve not written any blog posts for ages. Let’s all get over it and move on to something more important. Sales. Let’s imagine you’re an organisation selling B2B. You use Salesforce (or any other platform). You’ve got plenty of opportunities and a history of those opportunities. You’ve gone and built a sales pipeline.
Good work. That’s not an easy thing to do.
Now you want to use that pipeline to get better at sales. You want to use the data you’ve got to help forecast what you’ll do in the future. You want to know the value of what you’ve already got in the pipeline. You want to know what the most valuable activities you perform are. I’m not going to be able to fit all of that into one post so I’ll break things up into parts and (I’ve said this before only to underdeliver) FINISH THE SERIES.
However, for part 1 I’m actually only going to focus on generating some dummy data to play with. “What!? That’s none of the things you said you’d do!” No. It’s not. However, if you’re able to find me a B2B company with a small number of sales who are willing to publicly share all their data then fair play to you. Lacking that I’m going to have to create a dummy set of data and make it halfway believable. In doing this I’ve made a few assumptions (that I’m later going to try to show). It’s a bit circular but don’t be that guy. What I’m doing is broadly legit and if you look at the data and don’t think it’s reasonable then I’m providing the code so you can change whichever bit you find egregious. Even better, just use your actual company’s sales data (assuming you’re lucky enough to have it).
I’ll be building a dataframe that resembles a Salesforce pipeline – it’s going to have the following rows:
Stage – this is the ‘Salesforce/Hubspot/<don’t care>’ stage in the pipeline. Measures how far along an opportunity is.
Name – got to keep track of the opportunities using something
Value – how much money are we going to make from this opportunity. Daily, Monthly, Annually. Doesn’t matter.
Days – this is the date the opportunity entered the stage given. Going to be important later for time-dependence stuff.
So, let’s begin (all code also available here)
import numpy as np import random import matplotlib from matplotlib import pyplot as plt import datetime from datetime import datetime as dt from scipy import stats import pandas as pd def weighted_pick(weights, n_picks): t = np.cumsum(weights) s = np.sum(weights) return np.searchsorted(t, np.random.rand(n_picks)*s) pre_stages = [('Contact initiated', 0.8, 10), ('Meeting booked', 0.6, 20), ('Trial booked', 0.4, 15), ('Proposal sent', 0.3, 25), ('Contract sent', 0.2, 10)] closed_stages = ['Closed Won', 'Closed Lost'] success_stages = ['Closed Won']
Here I’m declaring a few things that are going to be useful to me later. I want all of the stages in the pipeline that I care about, the closed stages and the success stages. The code is probably a bit brittle regarding the random addition of closed and success stages but is fine for new ‘pre_stages’. The parameters are the probability that the opportunity will fall out of this stage (rather than move on successfully) and something else that we’ll talk about later.
WORDS = open('/usr/share/dict/words', 'rb').read().splitlines() NUM_POINTS = 400 AVERAGE_SALE_PRICE = 3500 SD_SALE_PRICE = 1000 sales_opportunities = [(entry.title(), np.random.normal(AVERAGE_SALE_PRICE, SD_SALE_PRICE)) for entry in np.random.choice(WORDS, NUM_POINTS, replace=False)]
Here I’m generating a list of ‘company names’, picking words randomly from a dictionary. In all honesty, just looking through the list of company names is pretty fun in itself. I’m also assuming that the revenue I make from my product is a normal distribution with mean and standard deviation given as ‘AVERAGE_SALE_PRICE’ and ‘SD_SALE_PRICE’. Not rocket science. But it is an assumption I’m making – let’s chalk it down. First assumption: revenue/client is normally distributed. Then we build a list of sales opportunities and their value.
start_date = datetime.datetime.now() - datetime.timedelta(days = 365*2) days_range = range(365*2) y = [float(entry)/365. for entry in days_range] indices = weighted_pick(np.exp(y), NUM_POINTS)
Second assumption I’m going to make in generating this data – you’re working for the right kind of start-up/business. Basically, the number of opportunities created are going to broadly follow an exponential distribution. That is, you specify how many opportunities enter the pipeline with ‘NUM_POINTS’ and we’re going to distribute those according to an exponential distribution. I’m saying that the company starts 2 years ago – again, change if you don’t like it.
sales_data = [[pre_stages[0][0], name_value_pair[0], name_value_pair[1], start_date + datetime.timedelta(days = index)] for name_value_pair, index in zip(sales_opportunities, indices)] remaining_opportunities_frame = pd.DataFrame(sales_data) remaining_opportunities_frame.columns = ['Stage', 'Name', 'Value', 'Days'] sales_data_frame = pd.DataFrame(sales_data) sales_data_frame.columns = ['Stage', 'Name', 'Value', 'Days'] finished_list = set([])
OK. Now I’ve got the first set of entries that’ll make up my final dataframe – it’s all of the opportunities with the value (generated from a normal distribution) and the time the opportunity entered the pipeline (generated via an exponential distribution). I’m going to create a few things for later, namely a dataframe containing all of the live opportunities and our final dataframe containing all the rows we’re going to care about.
for stage_index, stage in enumerate(pre_stages[1:]): next_stage = pd.DataFrame([(sales_opp[1], index, np.argmax(entry)) for sales_opp in sales_data for index, entry in enumerate(np.random.multinomial(1, [0.99, (1. - stage[1])/100., stage[1]/100.0], (datetime.datetime.now() - sales_opp[3]).days)) if entry[0] != 1 and sales_opp[1] not in finished_list])
The above line is where it’s all at. Let me explain slowly and then again, even slower. My intuition is this – I think that the probability that an opportunity converts (moves from its current stage to the next stage) is proportional to the negative exponential of the time spent in that stage. Let’s be clearer. I’m going to make the third assumption – that the probability of moving to the next stage broadly follows a negative exponential. What’s more, I think that each stage will have its own characteristic drop off rate (or half-life, for those of you thinking this looks mightily like radioactive decay). You know how before I said I’d added a parameter to ‘pre_stages’ and I’d explain it. That’s what ‘pre_stages[x][2]’ is. So, for a given stage in the sales pipeline, for each opportunity left in the previous stage, for every day between when the opportunity entered the stage and now I run the multinomial line. The multinomial line is going to return a binary array of three elements where exactly one of the elements is filled. The first element will be filled in 99% of cases – I’ve chosen to set this and if you don’t like it then change it to something else. It means that, for every day between the opportunity entering the state and today there’s a 99% the opportunity will still be in that state at the end of the day. If the second element is filled then that means that the opportunity succeeded on that particular day (with probability given by the stage parameter). Finally, if the third element is filled then the opportunity died on that particular day. ‘Index’ gives us the number of days that’ve happened since the opportunity entered the stage and the argmax gives us whether we succeeded or failed (you’ll see we’re ignoring days when we neither succeeded or failed).
next_stage.columns = ['Name', 'Days', 'Status'] meh = next_stage.ix[next_stage.groupby('Name').Days.idxmin()] tempy_frame = meh.merge(remaining_opportunities_frame[['Name', 'Value', 'Days']], how='inner', on='Name') tempy_frame['new_date'] = tempy_frame.apply(lambda x: x.Days_y + datetime.timedelta(days = x.Days_x), axis=1) tempy_frame = tempy_frame[['Name', 'Value', 'new_date', 'Status']] tempy_frame.columns = ['Name', 'Value', 'Days', 'Status'] success_frame = tempy_frame[tempy_frame.Status == 1] success_frame = success_frame.drop('Status', 1) success_frame.insert(0, 'Stage', pre_stages[stage_index + 1][0] if stage_index + 1 < len(pre_stages) - 1 else success_stages[0]) failure_frame = tempy_frame[tempy_frame.Status == 2] failure_frame = failure_frame.drop('Status', 1) failure_frame.insert(0, 'Stage', closed_stages[1]) sales_data_frame = sales_data_frame.append(success_frame).append(failure_frame)
That was a crazy line – but it contained most of the interesting stuff we do. From here on in we grab the first of the days that the opportunity moved (we actually kept all of the days in the above line but we’re only allowing each opportunity to move out of each stage once!), add the number of days to the original date we entered the stage to find the day we move into the next stage and then create the rows that we need.
finished_frame = sales_data_frame.groupby('Name').apply(lambda x: x.Stage.isin(closed_stages).any()) finished_list = set(finished_list).union(set(finished_frame[finished_frame == True].index.values)) remaining_opportunities = remaining_opportunities_frame[~remaining_opportunities_frame.Name.isin(finished_list)]
Finally, there’s a bit of tidying up to make sure that we don’t calculate anything for any of the opportunities that have died
dates = matplotlib.dates.date2num(sales_data_frame[sales_data_frame.Stage == success_stages[0]].sort('Days').Days.astype(dt)) revenue = sales_data_frame[sales_data_frame.Stage == success_stages[0]].sort('Days').Value.cumsum().values plt.plot_date(dates, revenue, 'b-') plt.xlabel('Date') plt.ylabel('Revenue') plt.title('Company revenue over time') plt.show() sales_data_frame.to_csv('generated_data.csv', index=False)
Quite a lot of work, really, just to generate some ‘likely looking’ sales data. Again, if you’ve got your own then use it! However, up till now I’ve just asserted that it’s likely looking. If you play around with it you can actually see some pretty interesting stuff. Firstly, with lots and lots of data point (N = 8000) you see that the company revenue growth looks very exponential:
However, it’s unlikely that you’ve got 8000 B2B transactions in your sales pipeline (if you do, kudos!). Let’s examine the situation where you’ve got 150:
And a once more with 150:
I think it’s interesting that, even though we’ve literally built this whole pipeline using exponential growth – we still look flat in a lot of places. Hopefully that might provide some solace if you’re struggling with sales and think you’re not hitting your exponential growth. Play around with the parameters and you can see what sort of effect increasing your conversion at various stages has on your overall revenue etc. Or just read the company names – they’re also pretty good.
Right, I’m counting that as broadly done. We’ve got sales data that nobody will mind me analysing in a public forum. Stay tuned/subscribe/email me to keep in touch for part 2. We’ll imagine that we’ve started with this data and we’ll try to assign a total value to our pipeline, and maybe even get onto predicting how many opportunities will progress in the next N days. | http://dogdogfish.com/tag/data-analysis/ | CC-MAIN-2018-17 | refinedweb | 1,887 | 57.16 |
If you’ve hit this post it’s probably because you’ve been reading the Service Pack 1 documentation for Data Services and you’ve been trying to find the elusive DataWebKeyAttribute that the docs mention.
As far as I can tell, it’s not there. You need to use DataServiceKey instead and it lives in assembly System.Data.Services.Client under namespace System.Data.Services.Common and you appear to use it like;
[DataServiceKey("FirstName")] public class Person { public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } }
which is slightly different from what you did with the DataWebKeyAttribute. | https://mtaulty.com/2008/05/19/m_10424/ | CC-MAIN-2017-39 | refinedweb | 105 | 61.06 |
Opened 8 years ago
Closed 5 months ago
#11228 closed Cleanup/optimization (wontfix)
FieldFile with check if file exists, and don't raise errors by default
Description %}<a href="{{ MEDIA_URL }}{{ object.file }}">download this file</a>{% endif %} {% if object.image.is_exists %}<img src="{{ MEDIA_URL }}{{ object.image }}" alt="My image" />{% endif %}
Attachments (3)
Change History (12)
Changed 8 years ago by
comment:1 Changed 8 years ago by
If path is stored in database and actual file not exists then data is corrupted and better not create workarounds but fix it immediately.
Anyway os.path.exists will not work with non-local file systems better use self.storage.exists(self.name).
comment:2 Changed 8 years ago by
And "is_exists" seems ugly. Why not just "exists"?
But yeah, I'm dubious as whether this functionality is important enough to include in the api.
comment:3 Changed 6 years ago by
comment:4 Changed 5 years ago by
Like Chris, I doubt this would be generally useful. If you're losing files, you're got bigger problems than broken links in your templates!
Furthermore, it's trivial to subclass FileField to add this method if you need it.
Changed 5 years ago by
Changed 5 years ago by
comment:5 Changed 5 months ago by
Sorry to re-open, but the notion that an error is raised when attempting to access attributes on a missing file is a wildly impractical default.
It assumes that "everyone at all times will have a full, up-to-date copy of the media folder, that is exactly in sync with the database they're running against".
If you're coding in a little bubble, eg pre-launch, against some bollocks test data-- sure, lets raise errors and annoy ourselves. But in the "real world" this is often/usually not the case and means you have to move around massive media libraries and databases just to work on the site at all, without resorting to temporarily commenting stuff out or hacking up a custom field.
The default should be to return None, overridable through the field/model definition (eg silent=False). And there should be an exists() method too, why not? @aaugustin your presumption above "that you have bigger problems" is totally irrelevant in many production scenarios.
comment:6 Changed 5 months ago by
comment:7 Changed 5 months ago by
comment:8 Changed 5 months ago by
comment:9 Changed 5 months ago by
Hi, the correct procedure to reopen a ticket that's closed as wontfix is to start a discussion on the DevelopersMailingList.
In this case, I'm not sure how related your ideas are to the original ticket. Please try to include a demonstration of the behavior that's problematic. We have what sounds like a similar situation for djangoproject.com. The CorporateMember model has an ImageField for the logo, however, I haven't seen any crashes in Django when those files don't exist.
Fix the patch | https://code.djangoproject.com/ticket/11228 | CC-MAIN-2017-09 | refinedweb | 495 | 60.85 |
One of my favorite features in Perl 6 is the NativeCall interface, because it allows gluing virtually any native library into it relatively easily. There have even been efforts to interface with other scripting languages so that you can use their libraries as well.
There have already been a pair of advent posts on NativeCall already, one about the basics in 2010 and one about objectiness in 2011. So this one won’t repeat itself in that regard, and instead be about Native Callbacks and C++ libraries.
Callbacks
While C isn’t quite as good as Perl at passing around functions as data, it does let you pass around pointers to functions to use them as callbacks. It’s used extensively when dealing with event-like stuff, such as signals using
signal(2).
In the NativeCall docs, there’s a short quip about callbacks. But they can’t be that easy, can they?
Let’s take the Expat XML library as an example, which we want to use to parse this riveting XML document:
<calendar> <advent day="21"> <topic title="NativeCall Bits and Pieces"/> </advent> </calendar>
The Expat XML parser takes callbacks that are called whenever it finds and opening or closing XML tag. You tell it which callbacks to use with the following function:
XML_SetElementHandler(XML_Parser parser, void (*start)(void *userdata, char *name, char **attrs), void (*end)(void* userdata, char *name));
It associates the given parser with two function pointers to the start and end tag handlers. Turning this into a Perl 6 NativeCall subroutine is straight-forward:
use NativeCall; sub XML_SetElementHandler(OpaquePointer $parser, &start (OpaquePointer, Str, CArray[Str]), &end (OpaquePointer, Str)) is native('expat') { ... }
As you can see, the function pointers turn into arguments with the
& sigil, followed by their signature. The space between the name and the signature is required, but you’ll get an awesome error message if you forget.
Now we’ll just define the callbacks to use, they’ll just print an indented tree of opening and closing tag names. We aren’t required to put types and names in the signature, just like in most of Perl 6, so we’ll just leave them out where we can:
my $depth = 0; sub start-element($, $elem, $) { say "open $elem".indent($depth * 4); ++$depth; } sub end-element($, $elem) { --$depth; say "close $elem".indent($depth * 4); }
Just wire it up with some regular NativeCallery:
sub XML_ParserCreate(Str --> OpaquePointer) is native('expat') { ... } sub XML_ParserFree(OpaquePointer) is native('expat') { ... } sub XML_Parse(OpaquePointer, Buf, int32, int32 --> int32) is native('expat') { ... } my $xml = q:to/XML/; <calendar> <advent day="21"> <topic title="NativeCall Bits and Pieces"/> </advent> </calendar> XML my $parser = XML_ParserCreate('UTF-8'); XML_SetElementHandler($parser, &start-element, &end-element); my $buf = $xml.encode('UTF-8'); XML_Parse($parser, $buf, $buf.elems, 1); XML_ParserFree($parser);
And magically, Expat will call our Perl 6 subroutines that will print the expected output:
open calendar open advent open topic close topic close advent close calendar
So callbacks are pretty easy in the end. You can see a more involved example involving pretty-printing XML here.
C++
Trying to call into a C++ library isn’t as straight-forward as using C, even if you aren’t dealing with objects or anything fancy. Take this simple library we’ll call
cpptest, which can holler a string to stdout:
#include <iostream> void holler(const char* str) { std::cout << str << "!\n"; }
When you try to unsuspectingly call this function with NativeCall:
sub holler(Str) is native('cpptest') { ... } holler('Hello World');
You get a nasty error message like
Cannot locate symbol 'holler' in native library 'cpptest.so'! Why can’t Perl see the function right in front of its face?
Well, C++ allows you to create multiple functions with the same name, but different parameters, kinda like
multi in Perl 6. You can’t actually have identical names in a native library though, so the compiler instead mangles the function names into something that includes the argument and return types. Since I compiled the library with
g++ -g, I can get the symbols back out of it:
$ nm cpptest.so | grep holler 0000000000000890 T _Z6hollerPKc
So somehow
_Z6hollerPKc stands for “a function called holler that takes a
const char* and returns
void. Alright, so if we now tell NativeCall to use that weird gobbledegook as the function name instead:
sub holler(Str) is native('cpptest') is symbol('_Z6hollerPKc') { ... }
It works, and we get C++ hollering out
Hello World!, as expected… if the libary was compiled with g++. The name mangling isn’t standardized in any way, and different compilers do produce different names. In Visual C++ for example, the name would be something like
?holler@@ZAX?BPDXZ instead.
The proper solution is to wrap your function like so:
extern "C" { void holler(const char* str) { std::cout << str << "!\n"; } }
This will export the function name like C would as a non-
multi function, which is standardized for all compilers. Now the original Perl 6 program above works correctly and hollers without needing strange symbol names.
You still can’t directly call into classes or objects like this, which you probably would want to do when you’re thinking about NativeCalling into C++, but wrapping the methods works just fine:
#include <vector> extern "C" { std::vector<int>* intvec_new() { return new std::vector<int>(); } void intvec_free(std::vector<int>* vec) { delete v; } // etc. pp. }
There’s a more involved example again.
Some C++ libraries already provide a C wrapper like that, but in other cases you’ll have to write your own. Check out LibraryMake, which can help you compile native code in your Perl 6 modules. There’s also FFI::Platypus::Lang::CPP for Perl 5, which lets you do calls to C++ in a more direct fashion.
Update on 2015-12-22: as tleich points out in the comments, there is an
is mangled attribute for mangling C++ function names. So you might be able to call the pure C++ function after all and have NativeCall mangle it for you like your compiler would do – if your compiler is g++ or Microsoft Visual C++:
sub holler(Str) is native('cpptest') is mangled { ... } holler('Hello World');
It doesn’t seem to be working for me though and fails with a
don't know how to mangle symbol error. I’ll amend this post again if I can get it running.
Update on 2015-12-23: the NativeCall API has changed (thanks to jczeus for pointing it out) and now automatically adds a
lib prefix to library names. The code changed from
is native('libexpat') to
is native('expat'). It will also complain that a version should be added to the library name, but I don’t want to weld this code to an exact version of the used libraries.
5 thoughts on “Day 21 – NativeCall-backs and Beyond C”
The section about C++ is not quite correct.
To call into a C++ library you would instead of:
sub holler(Str) is native(‘cpptest’) { … }
do:
sub holler(Str) is native(‘cpptest’) is mangled { … }
Then the symbol to look up will be found because NativeCall has a C++ name mangler installed.
Currently g++ and clang on linux and OSX, as well as g++ and MSVC on windows are supported.
So there is a good chance that calling a C++ lib on your platform will just work, and not ‘extern C’ dance is necessary.
Oh wow, that’s pretty cool, I’ll add it to the post. I seem to get `don’t know how to mangle symbol` though.
Also, do you know of any documentation I can link to? I can’t seem to find anything except the Rakudo source code.
Apparently, the API of NativeCall has changed slightly: for the first example to work correctly, the “lib” part must be omitted and a version should be specified:
is native(‘expat’, v1) { … }
(I’m using Rakudo 2015.11-737-g9a01b4b built on MoarVM version 2015.11-113-gbd56e2e)
Yup, looks like it changed very recently. I’ll fix the post accordingly, thanks for letting me know.
Using a version seems to look for a versioned symbol file and ignore the regular name though, so I’ll leave that out and live with a warning from NativeCall. | https://perl6advent.wordpress.com/2015/12/21/day-21-nativecall-backs-and-beyond-c/ | CC-MAIN-2017-22 | refinedweb | 1,375 | 60.35 |
Hello, I'm new in topics like saving game progress and I have a question. What I need is to save whole GameObject to file on button click (when game is closing) and later when turning game again on I want to Instantiate that GameObject from saved file. Is there any way to do that? What i tried now is:
Doing class with GameObject:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
[System.Serializable]
public class SaveLevel
{
public GameObject Object;
public SaveLevel(GameObject object)
{
Object = object;
}
}
And voids for buttons to save and load it:
public void SaveLevelOnButton()
{
SaveLevel save = new SaveLevel(/*GameObject i want to save. Let's say:*/ ObjectToSave);
string json = JsonUtility.ToJson(save);
File.WriteAllText(Application.persistentDataPath + "/LevelState.json", json);
}
public void LoadLevelOnButton()
{
GameObject LoadedLevel;
string json = File.ReadAllText(Application.persistentDataPath + "/LevelState.json");
LoadedLevel = JsonUtility.FromJson<SaveLevel>(json).gameObject;
Instantiate(LoadedLevel, new Vector3(0, 0, 0), Quaternion.identity);
}
That code is creating the GameObject "ObjectToSave" from begin like it wasn't saved.
Answer by xbassad
·
Apr 17 at 05:59 PM
If you want to save your game progress just use PlayerPrefs.Save()
My GameObject to save has more GameObjects in it as childerns and every of this children has a lot of data. That's why I wanted just to save the GameObject. Do you think when i turn game off and on, will i get the data like they were on quiting the game? CanPlayerPrefs do that?
Check the documentation
and also check this video
So, PlayerPrefs is a good way to save the data. I know now that i can't save whole GameObject. I'm going to change my scripts so I can save the variables. Thanks for your time!
Answer by NikitaDemidov
·
Apr 17 at 07:52 PM
I wouldn't save GameObject. I would save some Attributes. Example: A normal 3D cube: "P-x=" + obj.transform.position.x + "P-y" + obj.transform.position.x ... You can put all that in a txt-File with System.IO;
Hello, Yeah, thanks, I know now that i can't save whole GameObject. I'm going to change my scripts so i can save the attributes/variables by the method with PlayerPref..
Save/Load Objects settings(position, rotation ...)
1
Answer
How to save and load any data type?
1
Answer
JSON - Load Subset of Files Based on Key in File
0
Answers
Can gameobject be exported as unity3d form?
0
Answers
How do I create/load/save Highscores?
1
Answer | https://answers.unity.com/questions/1623194/saving-gameobject-to-file.html | CC-MAIN-2019-39 | refinedweb | 416 | 59.9 |
88633/what-will-be-the-output-of-below-code-and-why-print-x-insert-2-3
What will be the output of the below code and why?
x=[1,2,3,4,5]
print(x.insert(2,3))
If you write x.insert(2,3)
and then print x them it would show [1,2,3,3,4,5]
It will show nothing in your code as insert does not return anything
Try using this question by list comprehension:
a=[4,7,3,2,5,9]
print([x for ...READ MORE
Hey @abhijmr.143, you can print array integers ...READ MORE
It will print concatenated lists. Output would ...READ MORE
xrange only stores the range params and ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
You can simply the built-in function in ...READ MORE
You can go through this:
def num(number):
...READ MORE
Hi. @Nandini,
The above code will give you ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/88633/what-will-be-the-output-of-below-code-and-why-print-x-insert-2-3?show=88635 | CC-MAIN-2021-21 | refinedweb | 185 | 79.46 |
Hi everyone, I’m Jay Ongg, ready to do more with MAPI. Last time, I wrote a function that displays the names of all the message accounts on your device. In this article, I’ll go a little bit further. I’ll talk a bit about how to manipulate folders, messages, and properties. This post is a bit code-heavy, but if you have any questions about it please feel free to ask (note this is sample code with some error checking thrown in), but I know that there are some ISVs who need some help with this. Rather than answering questions one at a time, I figure it’s more efficient to do this post). Hopefully my color-coded comments in the code (alliteration!) can explain what’s going on.
One thing that I really want is a way to back up my text messages. I’m going to present a function that will take an account name and ultimately, pass some message information to a function. This can be the basis of more advanced MAPI property manipulation.
HRESULT SaveSmsMessages(IMAPISession *pSession, LPCTSTR pszFilename)
{
static const SizedSPropTagArray (2, spta) = { 2, PR_DISPLAY_NAME, PR_ENTRYID };
HRESULT hr;
SRowSet *prowset = NULL;
CComPtr<IMAPITable> ptbl;
CComPtr<IMsgStore> pStore;
//);
SPropValue *pval = prowset->aRow[0].lpProps;
ASSERT (pval[0].ulPropTag == PR_DISPLAY_NAME);
ASSERT (pval[1].ulPropTag == PR_ENTRYID);
if (!_tcscmp(pval[0].Value.lpszW, TEXT("SMS")))
{
// Get the Message Store pointer
hr = pSession->OpenMsgStore(0, pval[1].Value.bin.cb, (LPENTRYID)pval[1].Value.bin.lpb, 0, 0, &pStore);
CHR(hr);
SaveMessages(pStore, pszFilename);
}
}
Error:
FreeProws (prowset);
return hr;
}
This function is similar to the function that I wrote last time. Instead of displaying the account names, I instantiate a message store object and navigate into it. The function iterates through each account, searching for SMS. Once the SMS row is found, we know that pval[1] has the ENTRYID of the associated message store object. Every MAPI object has a unique id, called the ENTRYID. The ENTRYID is always stored in the PR_ENTRYID property, which is a binary property (remember, you can tell the type of the property based on the definition of the constant).
We can then call IMAPISession::OpenMsgStore in order to obtain an IMsgStore pointer. From here, I call a new function, SaveMessage(), passing it the IMsgStore pointer along with the filename to write to.
Sorry to have to give you such a big function, but this is somewhat similar to what we did to get the accounts. As above, we have to obtain a table of records, set the columns, and iterate through it. To clarify the code better, I’ll use inline comments to describe the important steps. MAPI requires quite a few stpes to do seemingly simple things. Hopefully this is a good technique for larger functions J.
HRESULT SaveMessages(IMsgStore *pStore, LPCTSTR pszFilename)
{
static const SizedSSortOrderSet(1, sortOrderSet) = { 1, 0, 0, { PR_MESSAGE_DELIVERY_TIME, TABLE_SORT_DESCEND } };
static const SizedSPropTagArray (3, spta) = { 3, PR_SENDER_NAME, PR_SUBJECT, PR_MESSAGE_DELIVERY_TIME };
HRESULT hr = S_OK;
LPENTRYID pEntryId = NULL;
ULONG cbEntryId = 0;
CComPtr<IMAPIFolder> pFolder;
ULONG ulObjType = 0;
// 1 First retrieve the ENTRYID of the Inbox folder of the message store
// Get the inbox folder
hr = pStore->GetReceiveFolder(NULL, MAPI_UNICODE, &cbEntryId, &pEntryId, NULL);
// 2 we have the entryid of the inbox folder, let's get the folder and messages in it
hr = pStore->OpenEntry(cbEntryId, pEntryId, NULL, 0, &ulObjType, (LPUNKNOWN*)&pFolder);
ASSERT(ulObjType == MAPI_FOLDER);
// 3 From the IMAPIFolder pointer, obtain the table to the contents
hr = pFolder->GetContentsTable(0, &ptbl);
// 4 Sort the table that we obtained. This is determined by the sortOrderSet variable
hr = ptbl->SortTable((SSortOrderSet *)&sortOrderSet, 0);
// 5 Set the columns of the table we will query. The columns of each row are determined by spta
hr = ptbl->SetColumns ((SPropTagArray *) &spta, 0);
// now iterate through each message in the table
// 6 Get the three properties we need: Sender name, Subject, and Delvery time.
ASSERT (pval[0].ulPropTag == PR_SENDER_NAME);
ASSERT (pval[1].ulPropTag == PR_SUBJECT);
ASSERT (pval[2].ulPropTag == PR_MESSAGE_DELIVERY_TIME);
LPCTSTR pszSender = pval[0].Value.lpszW;
LPCTSTR pszSubject = pval[1].Value.lpszW;
SYSTEMTIME st = {0};
FileTimeToSystemTime(&pval[2].Value.ft, &st);
// 7 Pass the parameters to a function to archive (this function is not written)
hr = AppendToFile(pszFilename, pszSender, pszSubject, st);
CHR(hr);
MAPIFreeBuffer(pEntryId);
Hope this helps. Next time I’ll talk about creating an Outlook menu extension, as well as some helpful APIs I haven’t talked about yet.
If you would like to receive an email when updates are made to this post, please register here
RSS
Is there any way to access my pocket outlook Email (specifically Activesynced email) through managed code? As far as I could tell digging through MSDN, it's possible to send e-mail but not access my actual messages.
Hi Union,
You can do this type of stuff in managed code. Take a look at the comments in
Hopefully you can get some pointers and things to look up in MSDN from there.
Can you tell us why the SMS message is retrieved through the subject property instead of body?
I would also like to see a topic on sending SMS messages. Last time I tried that the message stayed in the outbox folder. I had to use SMS API to send SMS messages. I am just curious, I too have moved on to the managed world.
Hi, I wasn't around when whoever designed SMS did it, so I don't really know the answer. My guess would be that it's faster to read from PR_SUBJECT (which is just a string) instead of PR_BODY (where you have to open an IStream). Since most SMS's are limited in size, PR_SUBJECT made more sense.
As for how to send SMS's, what did you try that failed?
Hi, I checked some old sources dated back in 2002. Here's what I tried, rough pseudo code:
Get drafts folder (PR_CE_IPM_DRAFTS_ENTRYID). Create message. Set recipient. Set the message subject. And finally SubmitMessage(0) for the message.
As far as I recall the message went to outbox just fine but stayed there until the user opened the outbox and resent the message.
Anyway, all this happened long, long time ago. May be I made some mistake or may be it was a real problem with the device I used back then (O2 XDA PPC 2002 phone-edition). SMS API solved the problem so I did't thought about MAPI anymore for sending messages.
HarriK
Take a look onto this aticle: . Mobile Secretary has been writen by BCross:
I would like to point out that currently Microsoft.WindowsMobile.PocketOutlook has the ability to:
- Enumerate messaging accounts.
- Send Email
- Send SMS
- Intercept SMS
Currently it does not allow you to enumerate folders, or enumerate messages in folders.
That said, you can always use DllImports to get that functionality from native MAPI.
Thanks,
-Luis Cabrera
Surely someone must have gone through the trouble of DllImporting all these functions already? Anyone?
Hi Doug,
I'm not sure if you would want to use DllImport versions of these methods, or use the managed Microsoft.WindowsMobile.PocketOutlook classes. The latter is cleaner in C# and comes installed with Windows Mobile SDKs.
Hi,
This is vishal here, I have written a Mapi code which can able to read subject and sender info etc. the same way using help of your code.
I have a problem in reading body of the mail. its showing some junk content. Could you please post the code which can able to read the content using IStream
in C++.
Please help me
Thanks
Vishal
The managed Microsoft.WindowsMobile.PocketOutlook interface seems very limited in its abilities with messaging accounts. As LuisCa said, "Currently it does not allow you to enumerate folders, or enumerate messages in folders." I would like to be able to add and delete emails from the message stores, just as I can with contacts or appointments.
i have the same problem as Doug has.
i want to delete sms and arrange them based on certain filtering options..
As LuisCa said, "Currently it does not allow you to enumerate folders, or enumerate messages in folders."
so wats the way around for this in managed code??
kapil
How can I retrieve the attachment of the email on windows mobile device?
Hi Eric, look at IMessage::GetAttachmentTable - you can then get attachments from this table.
Hi Jay,
i'm creating a new folder in mapi using createFolder()
and than i try to switch to that folder using MailSwitchToFolder() which returns E_FAIL,,
can we switch to folders created using createFolder()??
Thanks
Hi kapil, MailSwitchToFolder should work. What parameters are you passing to CreateFolder, on which account, and what parameters are you passing to MailSwitchToFolder?
thanks for your prompt reply..
i am doing the following steps:
ULONG cbEntryId = 0;
CComPtr<IMAPIFolder> pFolder;
LPMAPIFOLDER pfld=NULL;
LPTSTR lpszFolderName=_T("TempFolder");
LPSPropValue pPropVals;
//(here pStore is the SMS store)
hr = pStore->GetProps (&propDefaultFolder, MAPI_UNICODE, &cbEntryId, &pPropVals);
SBinary& eidInbox = pPropVals->Value.bin;
hr = pStore->OpenEntry(eidInbox.cb, (LPENTRYID)eidInbox.lpb, NULL, MAPI_MODIFY, NULL, (LPUNKNOWN*)& pFolder);
CHR(hr);
hr=pFolder->CreateFolder(NULL,lpszFolderName,NULL,NULL,MAPI_UNICODE,&pfld);
CHR(hr);
propDefaultFolder.aulPropTag[0] = PR_ENTRYID ;
hr=pfld->GetProps(&propDefaultFolder, MAPI_UNICODE, &cbEntryId, &pPropVals);
eidInbox = pPropVals->Value.bin;
hr = pStore->OpenEntry(eidInbox.cb, (LPENTRYID)eidInbox.lpb, NULL, 0, NULL, (LPUNKNOWN*)& pfld);
hr = MailSwitchToFolder((LPENTRYID)eidInbox.lpb,eidInbox.cb);
//the last statement returns E_FAIL as result
sorry for the lengthy code..
i had to write it to explain my problem :(..
thanks
Another thing to add..
the folders are created as subfolders of the Inbox folder and i can switch to them manually using stylus on emulator.
so only problem is with switching programmatically ..
i guess there's some hierarchy problem..??
anyone there???
i 'm stuck with this mailswitchtofolder()....:(
Jay,
I found this article very informative. You have shown how to access SMS folder. Following a path of logical extension, I tried developing code for accessing the IMAP4 Inbox folder. I could access most of the properties of the emails. Only the Body of the email was what where I got stuck. I used the OpenProperty()method of Messages with PR_CE_MIME_TEXT property. The stream interface which I used with that, returns no data.
It would be great if you could write another small article on how to access the Body of Email, especially IMAP4...
BR,
AL.
Hi AL, to access the body, you have to create a stream. You get the stream from the PR_BODY property.
Look into IMessage:OpenProperty, and output it into an IStream pointer. From there you can read/write to the stream.
hi,
when i try to use MailSwitchToFolder to switch to a folder created using
Createfoder() method on win mob 6 professional emulator i get E_FAIL as
result.
but when i use the same code on win mob 5 pocket pc phone emulator i get
S_OK as result but no switching occurs.
is this a bug??
Hi Kapi,
Try contacting me offline. I'm unsure as to why the folder doesn't work. You may need to set some properties.
hi Jay,
can u please tell me your contact info...
thanks
Hi Kapil, for the Microsoft employees who post, you can find out email addresses by taking our blog name @microsoft.com. I hesitate to post my email explicitly to avoid spam crawlers.
Ok Jay,
i understand ur problem..
i'll try contacting u ..
if feasible than u can drop me a mail on my email id.
Thanks for your response. I have already tried the PR_BODY property and it returns error. From what I have read in the blogs about CEMAPI, is that, the PPC2003 supports the PR_BODY property, but with WM5.0 this property is no longer supported and instead the PR_CE_MIME_TEXT property is used with body of email available as MIME output.
Please have a look at the code snippet below for understanding my issue, I am using WM5.0 Pocket PC SDK.
Anyway, the PR_CE_MIME_TEXT property does return succesfully, but I don't know how to decode the MIME output. Is there any method to decode the MIME output from Stream to text readable format.
//----------------------------------------------------
hr = pMsg->OpenProperty(PR_BODY, &IID_IStream,
STGM_READ, MAPI_MODIFY,
(IUnknown **)&pStm);
if (hr != S_OK)
MessageBox(_T("PR_BODY FAILED"),_T("Warning"), MB_OK);
return;
TCHAR cMsgBody[255];
ULONG NoBytesRead = msgsize;
ULONG cbCount;
hr = pStm->Read(cMsgBody, 254, &cbCount);
contdbgstatic.SetWindowTextW((LPCTSTR)cMsgBody);
Hi AL,
PR_BODY will work with certain messages. as for PR_CE_MIME_TEXT decoding, you probably will have to decode it - the MIME formats are all documented (in non-MSFT documentation).
I found that the "ActiveSync" Email account provides the email-body as a ASCII (CHAR) text stream...
But the IMAP4 appears to be MIME encoded... Now I am researching on MIME parsers, which I can fit into my application.
Thanks for your help.
I create folder using the statement
hr=pFolder->CreateFolder(NULL,lpszFolderName,NULL,NULL,MAPI_UNICODE|OPEN_IF_EXISTS,&pfld);.
where pFolder is the parent folder created using PR_IPM_WASTEBASKET_ENTRYID
Then i get the entry id of the created folder using the following code :
propDefaultFolder.cValues = 1;
propDefaultFolder.aulPropTag[0] = PR_ENTRYID ;
hr=pfld->GetProps(&propDefaultFolder, MAPI_UNICODE, &cbEntryId, &ptempVals);
And, then I do mailswitchtofolder. The code is
hr = MailSwitchToFolder((LPENTRYID)ptempVals[0].Value.bin.lpb,ptempVals[0].Value.bin.cb);
The folder doesnt show. But, if I open up messaging application where the folder tree view is shown, then simply close the messaging appln and go back to my application and again run mailswitchtofolder, It works...my folder shows up in list view. Strange !! Seems I m missing some proprty or step.
any idea what property needs to be set??
Kapil
Hi Kapil, can you send me an email privately? The way you leave comments, I cannot answer you privately, and I'd like to talk about this outside the forum.
Hi Jay,
I try to get the attachment table by GetAttachmentTable(), but I don't know how to save the attachment to a file on the windows mobile device. Could you please tell me how to do this?
Thank you.
Eric
Hi Eric,
Have you looked into IMessage::OpenAttach and finding the properties of the attachment?
I can save the attachment to a file now.
great! Just curious what was the problem (you can email me privately if you want)?
Hey Jay,
I am trying to access the PR_SENDER_NAME property for the OUTLOOK E-Mail account ("ActiveSync"), and it gives me something which doesn't look like SENDER NAME
I checked this,
ASSERT (pval[0].ulPropTag == PR_SENDER_NAME);
the ulPropTag shows 0x000a while the PR_SENDER_NAME = 0x0C1A.
This means its not mapped properly. What could have gone wrong...
Now its 2 things I am not able to get, PR_BODY and PR_SENDER_NAME...
that property does not exist on the message store, only for messages.
You may want to look at the PR_EMAIL_ADDRESS - that is set for message accounts like POP3 and IMAP, am not sure if it's set for Activesync (although if it's not now, it will be in the future).
hi all,
this article really help.
so, how to save sent item or get sent item folder from smartphone?
sorry for my english
eantaru, you can look at the IMessage interface to see how to save a message. to get a sent message, you have to navigate to the Sent Items folder, and enumerate the messages. Then you can find the message.
jayongg,
should i create folder first before read the data from sent item folder? because, i dont know how to open it and read per messages.
like the sample that can read per sms inbox, but for sent item
thanks before
here the code:
HRESULT smsHelper::GetSMSMsgStore(const CComPtr<IMAPISession>& spSession, CComPtr<IMsgStore>& spMsgStore)
// first we get the msgstores table from the session
CComPtr<IMAPITable> spTable;
HRESULT hr = spSession->GetMsgStoresTable(MAPI_UNICODE, &spTable);
if (FAILED(hr))
{
//AfxMessageBox(IDS_SMS_FAIL_MSGSTORE_TABLES);
return FALSE;
}
// next we loop over the message stores opening each msgstore and
// getting its name to see if the name matches SMS.
// If it does then we break out of the loop
while (TRUE)
SRowSet* pRowSet = NULL;
hr = spTable->QueryRows(1, 0, &pRowSet);
// If we failed to query the
// rows then we need to break
if (FAILED(hr))
{
//AfxMessageBox(IDS_SMS_FAILEDTABLE);
break;
}
// if we got no rows back then just exit the loop
//remembering to set an error
if (pRowSet->cRows == 1)
ASSERT(pRowSet->aRow[0].lpProps->ulPropTag == PR_ENTRYID);
SBinary& blob = pRowSet->aRow[0].lpProps->Value.bin;
hr = spSession->OpenMsgStore(NULL, blob.cb, (LPENTRYID)blob.lpb, NULL, 0, &spMsgStore);
if (FAILED(hr))
//AfxMessageBox(IDS_SMS_FAILED_OPENMSGSTORE);
;
else
//AfxMessageBox(IDS_SMS_MSGSTORENOTFOUND);
hr = HRESULT_FROM_WIN32(ERROR_NOT_FOUND);
// now remember to free the row set
FreeProws(pRowSet);
// now get the display name property from the
// message store to compare it against the name
// 'SMS'
SPropTagArray props;
props.cValues = 1;
props.aulPropTag[0] = PR_DISPLAY_NAME;
ULONG cValues;
SPropValue* pProps = NULL;
hr = spMsgStore->GetProps(&props, MAPI_UNICODE, &cValues, &pProps);
if (FAILED(hr) || cValues != 1)
//AfxMessageBox(IDS_SMS_FAILED_GETNAME);
// if the name matches SMS then break and as
// hr == S_OK the current MsgStore smart pointer
// will correctly be set.
if (_tcsicmp(pProps[0].Value.lpszW, _T("SMS")) == 0)
// if we failed for some reason then we clear out
// the msgstore smartpointer and return the error.
spMsgStore.Release();
return hr;
HRESULT smsHelper::GetSMSFolder(const CComPtr<IMsgStore>& spMsgStore, CComPtr<IMAPIFolder>& spFolder)
// Now get the Drafts folder.
SPropTagArray propDefaultFolder;
propDefaultFolder.cValues = 1;
propDefaultFolder.aulPropTag[0] = PR_CE_IPM_DRAFTS_ENTRYID;// PR_CE_IPM_INBOX_ENTRYID;
ULONG cValues;
LPSPropValue pPropVals;
HRESULT hr = spMsgStore->GetProps (&propDefaultFolder, MAPI_UNICODE, &cValues, &pPropVals);
if (FAILED(hr))
//AfxMessageBox(IDS_SMS_FOLDERNOTFOUND);
return hr;
SBinary& eidDrafts = pPropVals->Value.bin;
hr = spMsgStore->OpenEntry(eidDrafts.cb, (LPENTRYID)eidDrafts.lpb, NULL, MAPI_MODIFY, NULL, (LPUNKNOWN*)&spFolder);
//AfxMessageBox(IDS_SMS_FOLDERNOTOPENED);
AfxMessageBox(_T("error"));
return hr;
HRESULT smsHelper::SendSMSMessage(const CComPtr<IMAPISession>& spSession)
// now get the SMS message store
CComPtr<IMsgStore> spMsgStore;
HRESULT hr = GetSMSMsgStore(spSession, spMsgStore);
CComPtr<IMAPIFolder> spFolder;
hr = GetSMSFolder(spMsgStore, spFolder);
CComPtr<IMessage> spMessage;
//here,
//i dont know what i supose to do then
//i want to see each message from sent item folder
return FALSE;
BOOL smsHelper::DoGetSentItem()
HRESULT hr = MAPIInitialize(NULL);
//AfxMessageBox(IDS_SMS_FAIL_MAPIINIT);
else
; // initialized the MAPI subsystem
CComPtr<IMAPISession> spSession;
BOOL bRet = FALSE;
hr = MAPILogonEx(0 ,NULL, NULL, 0, &spSession);
//AfxMessageBox(IDS_SMS_FAIL_MAPILOGON);
bRet = SUCCEEDED(SendSMSMessage(spSession));
spSession->Logoff(0, 0, 0);
spSession.Release();
MAPIUninitialize();
return bRet;
Hi jayongg, i'm a vietnamese, i'm a female final-year student, field of information technology. At present, i meet with difficulties, my theme is programming for pocket pc in visual C++, main function is convert system time into correlative text(example: 1:00 am into "Mot", then write a function convert that text into voice that recorded and processed from my voic( example: from the function's result above, i have a string "Mot", then play a correlative wav file,ex:Mot.wav). My trouble is i have never programed for pocket pc, in addition i see visual C++ is difficult to understand and complex for me. i don't know where begin, i have not material and can't find material, my friends can not help me,hopefully you help me, i'm at a standstill, please send material and give practical advices for me. (i use Visual Studio 2005 tool), thank you very much indeed. hearing from you soon
I have a serivce application which consists of both C# and C++ modules.
What I want's is to catch the email sent event and do some insertion in DB.
That is when ever some body send an email from the device I want's to catch that event.
Any solution in managed or unmanaged can be worth.
Please if any bosy have an idea then let me know
Thanks and Kind regards.
Is it possible to get the notifications in my application from listview created by MailSwitchToFolder() function.
Thanks and Regards,
Sunisha
I need to develop a custom Add in which must capture the Send event from the Pocket Outlook.
Is there any code snippet in C# that I can make use of for the same
Thanks in Advance
My MMS messages are blocked by outbox.
Is there any way to get rid of this issue?
Hi "test", without more info it is tough to tell you what's going on. In addition, Microsoft doesn't make MMS clients - you need to contact your operator/OEM and find out what's going on.
What would have to be done to embed SaveMessages into a .dll that can be invoked from C#?
POOM is nice enough to give the account names, as mentioned here before. It would be really handy with a library function that can store a directory based on those strings.
C++ seems so aliene these days
Hi Lobo,
If you wish to export SaveMessages, it is possible as a standard export in a DLL. Then you can write a C# wrapper to P/Invoke it. Hope that helps.
You may need to write some more C++ to get an IMsgStore (see the first post in this series) and pass it to SaveMessages.
Jay
If you're looking to be notified when a user switches folders, that is not possible (I think).
Hi all,
who know the way to get properties in compose email screen of Pop3/pocket?
i mean that get CC,BCC,FROM...
Thanks.
dzungtran.
Hey Jay, these postings have been very helpful - thanks so much! I'm having a frustrating problem when my app is processing messages *at the same time* as the phone is synchronizing to pull down new activesync mail. When I time it just right it manifests as either a Datatype Misalignment or Access Violation. Last time the debugger landed on the call to IMAPITable QueryRows. All I do is loop thru the message stores and then thru the messages. Using vs2005 with win mobile 5 smartphone sdk on a motorola q. Any ideas? Is there something besides checking return codes that I need to coexist and play nice with MAPI?));
hr is always MAPI_E_NO_RECIPIENTS.
How can we insert a sms entry into the message store table.
selva, you will need to create a new message. Look into CreateMessage().
I tried using the above code.
I am getting a value :
pval[0].ulPropTag=10.
Can you tell me which property this 10 referes to :
One more to add.
I got this above value " pval[0].ulPropTag=10." when i tried to get the Props of the Inbox settingthe SPropTagArray STags = { 1,{ PR_SENDER_NAME}};
How to get the prop of inbox using pStore->GetProps
if i use the displayname, it gives the inbox of the mail. But i want the inbox of sms. can u help .
selva
Hello Everybody,
I need to count all the SMS messages and to read all of them too.
For the moment I have written a code that works fine with all the predefined sms folders:
1. I get the IMsgStore for sms.
2. Then I get the required folder using GetProps with a predefined folder type tag
3. Then OpenEntry to get the IMAPIFolder
4. GetProps with PR_CONTENT_COUNT
But I guess I will encounter problems when a custom folder will be created. I won't be able to get its property tag or even know whether it exists. How can I enumerate all the folders? Is it possible to do using some other interface perhaps a low level one?
Really need help on the subject.
Thanks in advance and Best Regards,
Vladimir
can I put a contextmenu-item on the inbox messages using C#? I'm not a C++ user
any thoughts on why im only getting a few sms and not all of them?
Hello!
Like many here, I need to access the SMS messages in the Inbox folder on my smartphone. Is there any way to do this with managed code? I'm using Compact Framework 2.0 in a Smartphone with Windows Mobile 5.0 Smartphone and doing this project in C#.
The SmsAccount doesn't give me almost any options (Dispose(), Send() and Name), so how can I do this?
Thanks in advance for any help,
Paulo
Hi there
I have exactly the same problem as Paulo above.
Jay - thanks for an excellent, and informative posting - but, unfortunately, when people start asking about managed code examples half way through the comments, you suddenly seem to go a bit quiet!
As has been pointed out, the windowsmobile.pocketoutlook namespace does contain SMSAccount and EmailAccount objects, but they lack functions to enumerate and read messages. Do you have any examples of wrapper functions, or ways to do this from managed code?
Carl
Can you tell me where I can find code sample which is sending e-mail with attachment using managed code.
Jake
On Windows Mobile 5.0, I'm getting a crash when I try to do an OpenEntry() using a PR_ENTRYID obtained from a previous MAPI session. As the documentation indicated, in the previous session I opened the item and used GetProps() to make sure I have the 'persistent' ID rather than the one for this session. I even get a crash if I just try to do an OpenEntry() on the PR_ENTRYID of a store. I believe I have valid values for each of these because if I enumerate through the stores and use CompareEntryIDs(), I get the right match on the store. On the message, I can call MailDisplayMessage() with my message PR_ENTRYID and it works properly. Can you give me some hints on what I might be doing wrong? If MailDisplayMessage() can find the message, there should be a way for my code to do it also, right?
i was sucessfull in reading the SMS that comes to the inbox. Now i am trying to read the MMS. How can i parse the property's of MMS. I can able to get only the PR_SUBJECT of the Message.
Hope some one help me:(
Hi, Very fast help in every aspect. I am making a backup tool for mobile. I have successfully got all the SMS in the sent folder. I got the text in SMS, time of SMS. But I am not getting the number to which the SMS was sent.
PR_SENDER_NAME gives the number for Inbox messages. But it doesn't work for sent messages.
Any help friends. message
appears in the Inbox without the Sender Number now.
Any idea how to go about it?
I use WM 6.0 for Pocket PC.
Is it possible to associate single Message store with two Messaging transport. ex MMS and SMS transport
Yes, it is possible. This has been done by various third party MMS providers in the past. Have your MMS message get created into SMS account, and set the message class to IPM.MMS, and add that to the MsgClasses registry key under SMS.
I have done the same.But MMS Transport DLL(Custom) does not load and Initialize.I think without MMS account its not possible. Do you sujjest any way to load MMS transport DLL with only SMS account?
Can I tell if a message is still actively being received or that synchronization is happening?
you can tell if Activesync is syncing (music, messages, contacts, calendar, tasks) by looking up the following reg value:
HKEY_LOCAL_MACHINE\System\State\ActiveSync
"Synchronizing"
You can't tell if a particular message is currently being downloaded.
You can tell if a message is partially downloaded by getting PR_MSG_STATUS,and checking for MSGSTATUS_HEADERONLY or MSGSTATUS_PARTIAL (look up these constants and its neighbors in cemapi.h).
hi jayongg,
could you sujjest any way to load MMS transport DLL with only SMS account?
Hello Jayongg,
i need help. as per the description negative value of rowcount should traverse back but this returns the error invalid parameters.
my req is i want to traverse forward till end of list then travers back from last till begining.
hr = pTable->QueryRows (-1, 0, &prowset);
Hello Manish, I'm not sure that's possible.
Thankyou Jayongg for prompt reply...
hello jayongg, I would like to know how to read the from/body of an sms. Can you share a pc of code which can guide me regarding this? thanks
Ashish, look at the SaveMessages() function in this blog post. it does that. SMS messages keep the body in PR_SUBJECT, not PR_BODY
Hi Jay! Could please tell me why can I retrieve the PR_CE_SMS_RAW_HEADER property? I would like to check some bytes into the SMS message header he same way the J2ME applications does when using JSR-120 specification.
I'm trying to intercept incoming SMS on a specific port. I've successfully gotten the WM2006-TV_Inbox sample to intercept incoming SMS messages through the custom RuleClient, but I need access to the SMS port information, and don't see it anywhere in the MAPI.
Can you explain how the MAPI relates to the SMS API? Are they compatable? Are they alternate, but non-compatable APIs?
How would I use the SMS API against individual messages in the MsgStore? Can this be done?
Can I retrieve messge entries from the store and cast them as SMS messages? Are there converters?
How would I get at the WCMP_PROVIDER_SPECIFIC_DATA and WDP_PROVIDER_SPECIFIC_DATA from MAPI?
DD
Hi, what port data are you talking about? WDP ports over SMS?
The SMS API deals with managing SMS's, but MAPI deals with managing messages. Windows Mobile uses the SMS APIs to read SMS messages and drop them into MAPI with properties set. SMS API cannot be used against the message store - it has no knowledge of them.
I don't understand what you mean by casting messages as SMS messages? You can't get the WCMP_* data via MAPI, you need to use the SMS API's - probably at the time of receipt. You would then need to replace/shim the existing SMS driver.
I have the same problem ddevine reported. I am trying to port a J2ME application to C#. The J2ME application is capable to intercep text (not WDP) SMS messages based on some port information that is stored on the SMS PDB header. I could retrieve that information when I used the SMS API and according to the documentation, that information should be available into PR_CE_SMS_RAW_HEADER property when we use the MAPI. However, this property is empty. So, is the documentation wrong!? Should we use the SMS API and replace the tmail.exe process as the default SMS interceptor?
Rodrigo.
Rodrigo's point helps to clarify what I meant too.
I'm not fully versed on the J2ME spec, so forgive if some thoughts are muddled... I'm attempting to port a J2ME application to WM6 also. We've got existing handsets and a server/billing/Customer Service system already setup that's driven through port directed SMS messages to the handsets. The purpose of the port directed SMSs is to change the state of our application (subscribed/un-subscribed/update). I would like to leverage our existing system so that we don't need to re-engineer our server transmission system just for WM.
J2ME apps have the ability to listen on specific ports for incoming messages. From my looking this morning at WDP, it appears that this functionality matches, with the J2ME simply providing access to that transport functionality. My understanding is that for ports other than the standard text port, the messages are not visable to the handset user. The notation for setting up a connection looks similar to standard IP syntax..."sms://5551212:1234" In this case, I'm assuming that the port is "1234."
I'll just cut to the chase...How would I setup similar functionality on WM? I've tried using the "SmsSetMessageNotification()," but I don't see any means of locking this notification to a specific port... I don't want to intercept everything, only incoming messages on "sms://5551212:1234." How would I get the same functionality?
Thanks Jay,
Hi, that question is beyond my area of expertise. I recommend looking at the WAP stack, and possibly some code in the SDK. The WM6 SDK has a sample on WDP over SMS, which may be useful. If you need more help then try contacting Microsoft Developer support.
There is no solution available on the SDK. We just need to know if the property PR_CE_SMS_RAW_HEADER works and we are too stupid to retrieve it. Otherwise, we will have to change a server application that works perfectly fine with J2ME applications, but is compatible with Microsoft WDP API. Could you please talk to the guy who coded that property?
By the way, here is the property description:
The PR_CE_SMS_RAW_HEADER property contains the header portion of a RAW SMS message. This is contained in RAW_PROVIDER_SPECIFIC_DATA::pbHeaderData, which is returned from the SmsReadMessage function in its pbProviderSpecificBuffer parameter.
When I used the SMS API, I was able to retrieve that data and checked 4 bytes that represent the SMS port number...
Hi, PR_CE_SMS_RAW_HEADER stores an object of type
RAW_PROVIDER_SPECIFIC_DATA (defined in sms.h of the SDK). I'm unsure if that is what you need.
I think WAP/WDP might be the answer I was looking for.
Thanks for the help, and great job on the Blog!
I am doing a project that can intercept incoming and outgoing messages, check the content and forward it. I understand c# can intercept incoming sms using Microsoft.WindowsMobile.PocketOutlook namespace but i am stucked here since I dont know how to intercept outgoing sms. So I am thinking of retrieving sms from outbox and check then forward. Could you pls help? Thank you so much.
Hi Jeff, I don't work with the C# version of the API, and I'm not sure if we can intercept outgoing SMS. Perhaps some other commenter can help?
I am unable to find any SMS Text Messages in my InBox. e.g. ptbl->GetRowCount(0, &messageCount); always gives me a 0 messageCount.
When I peek into my SMS Inbox on the device using the stock version of Messaging Software from MSFT, I found many SMS Text Messages inside my InBox.
Thanks for your advice in advance
Hi Jay again,
If you are not familiar with C# version of this API, is it possible to retrieve the messages from outbox from time to time and do some content checking? I mean using your normal way. I need to get the outbox messages, check it and forward to numbers. Thanks a lot in advance.
Hi Jeff, you can write an app or service that listens for MAPI notifications. You can then check for messages that get copied to that folder and open/check them.
Thanks for the quick reply. Unfortunately I am not so familiar with MAPI...Is there any portion of code or sample for this? Thanks.
Jeff, look in the SDK for MAPI notifications.
Thanks again Jay, I will look into that and see what I can do. Good Article!
Btw, you from Singapore?
I would like to ask how can i detect how many messages are new to the user from SMS inbox. Can someone give hints.
thanks a lot
i got it , using SnApi.h and RegExt.h
Is the name of the store (SMS) guaranteed to be the same on all versions of Windows Mobile or is there some proper localizable string that we should use??
I mean suppose I need to access the SMS store on some WM which is say chinese? Would I need to do something special for that
Unless the OEM has done something out of the ordinary, the SMS store should be "SMS"
I would liked to call out and see if there was an answer to this question, I am having the same problem for Mobile 2003 SE and Mobile 5.
);
"
How to read the message properties from Compose screen?
I have added a menu option in the compose screen. In the action, I need to read the properties (To, Body etc.).
Kris
look in mapitags.h for a list of properties. from there you can use HrGetOneProp. For the "To" you need to open the recipient table. You can find info in the SDK docs.
jayongg, thanks for the reply.
I saw the example 'Readviewmenuext' in SDK.
HrGetOneProp needs (IMAPIProp*) as first parameter. To get this Prop pointer, we need IMessage pointer.
How to get this 'IMessage' pointer for the message entered in the Compose window?
This message is not yet there in any message folder.
I have the same problem as Johny Alan. I can't read the number to which the SMS was sent in sent items. I've tried almost every PR_XXX :(.
Any help appreciated. Thanks Nick.
Try reading the recipient table to get the list of recipients for the message.
GetRecipientTable works fine - thanks!!
I want to store email in inbox folder by coding, how can I do it? I only do it in draft or outbox folder, but can't store it in inbox folder.
I'm very appreciated that you give me any clue or tip.
Thanks in advance.
The reason is that once I got email info from others channel, I can store it in inbox folder, thus can have the push mail function.
Is there a VB.NET version of it? I need to do a school project in VB that requires this.
I want to know how to get SMS from a Windows Mobile phone's inbox as well as to delete it.
But both need to be done on VB.NET. Anyone knows how?
your article is fine but i am looking for send mail functionality.
I am developing WM application. I have to add email functionality in my application.
I want to use pocket outlook interface for sending mail.I have a button on my userinterface.
I want that after clicking this button outlook interface open..
I am using device emulator.
can anyone show me the way to achive this ?
thanx
- MAPI: the wrong approach (yet interesting sample code) - WebDAV: DELETE command via HTTP - Customize
hi, I want to know can read mms through MAPI in windows mobile 6?
because i'm think SMS MMS box is combined in windows mobile 6
Do you have similar program write on VB?
Hello! I'm trying to install the MapiRule sample from MS on a SmartpHone Mteor with WM5. I've followed all the steps and signed it with the test cert. Still it won't get installed at the device. It says "This software is not authorized...". It works fine with PPC WM6. Any ideas? Any help appreciated. Thanks Nick.
Nice tutorial for beginer. I learnt alot from the steps provide.
is it possible to send SMS to device(self) without using GSM operator with MAPI ?
If I create MAPI Store with CreateMsgStore, why doesnt it appear in list of accounts on wm6 standard emulator? What else should be done?
Hi Jay;
I make thousands of search to know how to extract my sms Messages from Inbox and but the messages received from a particular sender in a text Box;
Actually this is the most helpful page I found.
using of API is very difficult ... I used the C# with windows mobile 2003 SDK.
I used this code
MessageBox.Show(session.SmsAccount.Name);
foreach (EmailAccount account in session.EmailAccounts)
{
MessageBox.Show(account.Name);
}
this shown the accounts name but not the message itself;
My question is how to read those message in sms account.
Thanks a lot in advance
Hallo All!
Can someone tell me is where any flags or properties for IMessage Object to knew if e-mail an new created mail, forward mail or reply mail?
Thanks for any help!
Hi Jay and everyone.
I am struggling with attachment as well. Doing the work as explained before, on a WM5 device works good. But, on the device, when I want to retrieve the attachment with createAttach, the attachment is just invisible. The message icon in the messages list shows attachment is notified, but when I open the message, there is only the attachment row (for editing interface) with no name, no icon and no link on where i can click to open this attachment.
If this message is sent through outlook via activesync, the message contains, effectively , the attachment.
Therefore, what do I miss for MAPI not too completely "see" there is an attachment ??
Thanks a lot for your help,
Pierre
That is answer from my own question.
OK I can now retrieve the attachments.
The problem was I didn't provide anything in PR_ATTACH_SIZE.
By doing so, the attachment appears :)
Hope this can help.
I am working on one SMS application, which has two modules Sender and Receiver. At the sender module, I am programmatically sending the SMS using SmsSendMessage API. And at the receiver end I am using IMailRuleClient’s ProcessMessage to perceive the message. If the message has some specific combination of characters(flags) then I am displaying the message in a message box. On click of either OK or Cancel button in the Message Box, I am again programmatically sending SMS as an acknowledgement to the Sender.
My entire application is working fine in Emulator. But, while I am testing in actual devices (Here, I am using HTC and ASUS Mobiles with Vodafone network); at the receiver end the SMS is not perceived. Instead, the SMS is displayed like “Message from Network” as a notification and even it is not going to Inbox folder.
One more strange thing is, If I send the SMS manually with the same text which I use to send programmatically, then at the receiver end my SMS is perceived and displayed in MessageBox(which I want). However, If I click on OK or Cancel Button in the Message Box then again at the sender’s end the message is not perceiving. It is displayed as “Message from Network”.
I guess there is a problem with SMS sending functionality. But, previously I worked with the SmsSendMessage API and it sent SMS programmatically to the receiver (Here, I am not using Message Interceptor to perceive the message) without any problems.
My question is
• why my application is fine working with Device Emulator and Cellular Emulator and why not with Actual device?
• Why my application is perceiving SMS which have been sent manually and why not with programmatically? Are there any specific restrictions from the Network operator?? And why the ”Message from Network” notification is coming? Am I missing any properties to set in the SmsSendMessage API??
Could you please let me know the reason behind it as soon as possible.
Thanks in Advance.
Regards
Vikanth P
Hey.. I got where I went wrong.
we should use PS_MESSAGE_CLASS2 instead of PS_MESSAGE_CLASS0 for the TEXT_PROVIDER_SPECIFIC_DATA structure.
I want to make an application that encrypts all the sms messages in the inbox. How can i modify your code to be able to alter the contents of the messages?
Best regards
/Rob
Hello. I have read your source codes and done an example these days. When I compile it, there is no mistake. But when I execute it, it always give me a memory fault. I debug it step by step, I found that it was in the end that it produce a mistake . Please give me a favour. I also find another web site in China. It is
When I execute the codes in this site, it can work very well.
Thanks again.
I need to develop some kind of messaging client. which need imap4 (receive) and smtp(sending) support. but my ui requirements are different.
is there some way i can use imap4, smtp stack of MS without accessing the mapi? | http://blogs.msdn.com/windowsmobile/archive/2007/04/23/practical-use-of-mapi.aspx | crawl-002 | refinedweb | 7,245 | 66.13 |
I work on the Entity Framework Team at Microsoft.
The DbContext API introduced in Entity Framework 4.1 exposes a few methods that provide pass-through access to execute database queries and commands in native SQL, such as Database.SqlQuery<T>, DbSet<T>.SqlQuery, and also Database.ExecuteSqlCommand.
These methods are important not only because they allow you do execute your own native SQL queries but because they are right now the main way you can access stored procedures in DbContext, especially when using Code First.
Implementation-wise these are just easier to use variations of the existing ObjectContext.ExecuteStoreQuery<T> and ObjectContext.ExecuteStoreCommand that we added in EF 4.0, however there still seems to be some confusion about what these methods can do and in particular about the query syntax they support.
I believe the simplest way to think about how these methods work is this:
For a stored procedure that returns the necessary columns to materialize a Person entity, you can use syntax like this:
1: var idParam = new SqlParameter {
2: ParameterName = "id",
3: Value = 1};
4: var person = context.Database.SqlQuery<Person>(
5: "GetPerson @id",
6: idParam);
For convenience these methods also allow parameters of regular primitive types to be passed directly. You can use syntax like “{0}” for referring to these parameters in the query string:
1: var person = context.Database.SqlQuery<Person>(
2: "SELECT * FROM dbo.People WHERE Id = {0}", id);
However this syntax has limited applicability and any time you need to do something that requires finer control, like invoking a stored procedure with output parameters or with parameters that are not of primitive types, you will have to use the full SQL syntax of the data source.
I want to share a simple example of using an output parameter so that this can be better illustrated.
Given a (completely useless ) stored procedure defined like this in your SQL Server database:
1: CREATE PROCEDURE [dbo].[GetPersonAndVoteCount]
2: (
3: @id int,
4: @voteCount int OUTPUT
5: )
6: AS
7: BEGIN
8: SELECT @voteCount = COUNT(*)
9: FROM dbo.Votes
10: WHERE PersonId = @id;
11: SELECT *
12: FROM dbo.People
13: WHERE Id = @id;
14: END
You can write code like this to invoke it:
1: var idParam = new SqlParameter {
2: ParameterName = "id",
3: Value = 1};
4: var votesParam = new SqlParameter {
5: ParameterName = "voteCount",
6: Value = 0,
7: Direction = ParameterDirection.Output };
8: var results = context.Database.SqlQuery<Person>(
9: "GetPersonAndVoteCount @id, @voteCount out",
10: idParam,
11: votesParam);
12: var person = results.Single();
13: var votes = (int)votesParam.Value;
There are few things to notice in this code:
Once you have learned that you can use provider specific parameters and the native SQL syntax of the underlying data source, you should be able to get most of the same flexibility you can get using ADO.NET but with the convenience of re-using the same database connection EF maintains and the ability to materialize objects directly from query results.
Hope this helps, Diego:
Some:
And the typical operation of the application goes likes this::
You can use foreign key properties to set associations between objects without really connecting the two graphs. Every time you would do something like this:
model.Make = make;
model.Make = make;
… replace it with this:
model.MakeId = make.Id;
model.MakeId = make.Id;
This is the simplest solution I can think of and should work well unless you have many-to-many associations or other “independent associations” in your graph, which don’t expose foreign key properties in the entities..
//;
This approach should work well even if you have associations without FKs. If you have many-to-many associations, it will be necessary to use the Include method in some queries, so that the data about the association itself is loaded from the database..
It is actually premature to use the word “conclusion”. Mixing EF and TDD in the same pan is something I am only starting to think about. This is a set of scenarios that I want to see among our priorities for future versions.
In order to come to a real conclusion, I need to at least develop a sample application in which I apply and distill the approaches I am suggesting in this post. I hope I will find the time to do it soon....
Visual Studio 2010 and .NET 4.0 were released on Monday! The Self-Tracking Entities template is included in the box, and the POCO Template we released some time ago in Visual Studio Gallery is compatible with the RTM version.
A few days ago we found a small issue in some of our code generation templates. It is really not a major problem with their functionality but rather a case of an unhelpful exception message being shown in Visual Studio when the source EDMX file is not found. So, I am blogging about it with the hope that people getting this exception will put the information in their favorite search engine and will find here what they need to know.
If you open any of our code generation templates you will see near the top a line that contains the name of the source EDMX file, i.e. something like this:
string inputFile = @"MyModel.edmx";
This line provides the source metadata that is used to generate the code for both the entity types and the derived ObjectContext. The string value is a relative path from the location of the TT file to the EDMX file, so if you change the location of either, you will normally have to open the TT file and edit the line to compensate for the change, for instance:
string inputFile = @"..\\Model\\MyModel.edmx";
If you make a typo or for some other reason the template cannot find the EDMX file, you will in general see a System.IO.FileNotFoundException in the Error List pane in Visual Studio:
Running transformation: System.IO.FileNotFoundException: Unable to locate file File name: 'c:\Project\WrongModelName.edmx'
...
Now, the exception message above is the version thrown by the “ADO.NET EntityObject Generator” (which is the default code generation template used by EF), and it is quite helpful actually, because it provides the file name that caused the error.
On the other side, if you are using the “ADO.NET POCO Entity Generator” or the “ADO.NET Self-Tracking Entity Generator”, the exception is going to be wrapped in a reflection exception and therefore you won’t directly get the incorrect file name:
Running transformation: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.IO.FileNotFoundException: Unable to locate file
...
Something very similar happens when you add the template to your project incorrectly. Our code generation templates have been designed to be added through the “Add Code Generation Item…” Option in the Entity Data Model Designer:
When you do it this way, we automatically write the name of your EDMX file inside the TT file. But if you add the template to the project in some other way, for instance, using the standard “Add New Item” option in the Solution Explorer, the name of the EDMX file will not be written in the TT file, and instead a string replacement token will remain:
string inputFile = @"$edmxInputFile$";
When this happens, again, the exception message you get from the EntityObject generator is quite helpful:
Running transformation: Please overwrite the replacement token '$edmxInputFile$' with the actual name of the .edmx file you would like to generate from.
But unfortunately, for the POCO and the Self-Tracking Entities templates you will just get a wrapped System.IO.FileNotFoundException as in the examples above.
In any case, the solution is always the same: open the TT file, and replace the token manually with the name or relative path to the EDMX file. Alternatively, remove the template files from the project and add them again using “Add Code Generation Item…”.
I hope you will find this information helpful.
Diego..
Zlatko has been posting about one LINQ to Entities new feature included in the upcoming Beta 3, so I decided to take revenge and make a 100% Entity SQL post. Here is something I ran against the other day:
Let's assume we need to retrieve the Order with the maximum OrderID, which is a really awful way to get the ID of the order you just inserted! :)
In your everyday store-specific SQL, you can use a MAX() aggregate function in a subquery as a WHERE predicate. In Transact SQL, it should look like this:
SELECT * FROM Products AS p WHERE p.ProductID =(SELECT MAX(p2.ProductID) FROM Products as p2);
SELECT * FROM Products AS p WHERE p.ProductID =(SELECT MAX(p2.ProductID) FROM Products as p2);
So far, so good. If you have been playing a little with Entity SQL, you will probably guess how the equivalent Entity SQL would look like:
SELECT VALUE p FROM Products AS p WHERE p.ProductID =(SELECT MAX(p2.ProductID) FROM Products as p2);
SELECT VALUE p FROM Products AS p WHERE p.ProductID =(SELECT MAX(p2.ProductID) FROM Products as p2);
But if you run this query, what you get is this interesting exception:
System.Data.QueryException: Argument types 'Edm.Int32' and 'Transient.collection[Transient.rowtype[(_##groupAggMax2,Edm.Int32(Nullable=True,DefaultValue=))](Nullable=True,DefaultValue=)]' are incompatible for this operation, near WHERE predicate, line 1, column 60.
System.Data.QueryException: Argument types 'Edm.Int32' and 'Transient.collection[Transient.rowtype[(_##groupAggMax2,Edm.Int32(Nullable=True,DefaultValue=))](Nullable=True,DefaultValue=)]' are incompatible for this operation, near WHERE predicate, line 1, column 60.
The subquery is actually returning a Transient.collection of a Transient.rowtype... Those are internal things, so for illustration purposes, let's turn to the LINQ perspective of life:
var query = from p in context.Products select new { p.ProductID }; int productID = query;
var query = from p in context.Products select new { p.ProductID }; int productID = query;
(Argh, this post is no longer 100% Entity SQL!)
No surprisingly, what you get is a compile-time exception:
Cannot implicitly convert type 'System.Linq.IQueryable<AnonymousType#1>' to 'int'.
Cannot implicitly convert type 'System.Linq.IQueryable<AnonymousType#1>' to 'int'.
Both exceptions are homologous, and for a text-based query language, Entity SQL happens to be very type-safe at its core. Standard SQL makes the basic assumption that it is ok to implicitly convert single-item collections of single-column projections to discrete scalars. We don't.
The basic theme in Version 1.0 of the Entity Framework is to build a solid foundation for the future. As a consequence, one thing we avoid doing is "magic" type conversions except when they make perfect sense (think union of projection queries with exactly the same shape). The motive: magic conversions tend to mine the stability and composability of the language.
That said, this buys us freedom to hand-pick certain implicit behavior in the future, if we find enough feedback and proof that it makes sense.
That's enough on the rationale. Now, how do I make it work? There are two approaches.
First:
SELECT VALUE p FROM Products AS p WHERE p.ProductID = MAX(SELECT VALUE p2.ProductID FROM Products AS p2);
SELECT VALUE p FROM Products AS p WHERE p.ProductID = MAX(SELECT VALUE p2.ProductID FROM Products AS p2);
This one works because:
a) The SELECT VALUE returns the scalar itself, instead of a projection (rowtype) of the scalar.
b) MAX() operates on the collection of scalars returned by the subquery, returning a single maximum value that will be directly comparable (same type) as ProductID.
Second:
SELECT VALUE p FROM Products AS p WHERE p.ProductID = ANYELEMENT( SELECT VALUE MAX(p2.ProductID) FROM Products AS p2);
SELECT VALUE p FROM Products AS p WHERE p.ProductID = ANYELEMENT( SELECT VALUE MAX(p2.ProductID) FROM Products AS p2);
This works because:
a) The subquery will return a single-item collection of a scalar value.
b) ANYELEMENT will retrieve a single element (in this case, the only one) contained in the collection. That element will be directly comparable with ProductID.
In case you are wondering now how efficient this is, don't worry. Entity SQL is still a functional language. So, while understanding the type reasoning is interesting and useful, these queries still express "what you want to get" rather than "how you want the job done".
As a matter of fact, with our current SqlClient implementation, these queries will be translated to some simple, yet unexpected Transact-SQL. But I'll leave that to you as an exercise...
Last month, a question was asked in the ADO.NET Prerelease forum that went more or less like this:
Considering that there are many APIs you can use (Entity SQL, ObjectQuery<T>, LINQ to Entities), is there any guidance that could help me decide when to use each?
Considering that there are many APIs you can use (Entity SQL, ObjectQuery<T>, LINQ to Entities), is there any guidance that could help me decide when to use each?
The best I could do based on my knowledge at the time:
It is matter of taste.
It is matter of taste.
While my answer was partially correct and had the great quality of being easy to look at, I immediately realized I should do a better job in helping people choose the appropriate API for each of their scenarios.
I won’t pretend here to give the definitive and detailed answer, just a head start. You will find more information in our docs and I am sure this topic alone will easily fill a few chapters in upcoming books about the product.
We basically support two distinct programming layers and two different query languages your applications can use:
Service layers and query languages supported
Query language
Entity SQL
LINQ Queries
Service layer
Entity Services
Yes
Object Services
For those coming from the Object/Relational Mapping world, one easy way to look at our stack is to understand that we have two mapping tools layered one on top of the other:
Of course, once you have mapped your relational tables to entities and your entities to objects, what you get is a fully functional O/R Mapper.
But as it is usual in our profession, adding a level of indirection uncovers a lot of power and flexibility :)
The public surface of this layer is the EntityClient component, which is a new type of ADO.NET provider that gives you access to a store agnostic entity-relationship model of your data called Entity Data Model (EDM), and decouples your code from the store specific relational model that lives underneath.
Besides a pair of new classes, the EntityClient contains most of the same types as previous providers: Connection, Command, DataReader, Parameter, Adapter, Transactions and a ProviderFactory.
To be able to use this layer, you typically need three elements:
One advantage of programming against this layer is that being the first public surface intended for application development, it is also the most lightweight.
Moreover, at this level you use full eSQL queries to obtain data readers and not actual entity classes. For this reason, we call EntityClient our “value” oriented programming interface. Neither the columns included in your rows, nor the source of your rows, nor the filtering, grouping or sorting criteria, are fixed at compile time. The query is just a string that we parse at run-time, and the results are just EntityDataReaders.
All this makes Entity Services suitable for applications that today typically exploit the flexibility of writing dynamic SQL queries, like reporting, ad-hoc querying, etc.
Notice however, that even when the EntityClient closely follows the traditional ADO.NET connected object model, you cannot get an ADO.NET DataSet on top. There are two main reasons for this:
Moreover, the Entity Framework currently lacks a string based data manipulation language, so you cannot directly express UPDATE, INSERT and DELETE operations in eSQL. Given this, our EntityAdapter is hardly any similar to the previous DataAdapters. We do not even derive it from the DbDataAdapter class!
Object Services lives immediately on top of the EntityClient, and provides your application an Object Oriented view your data. Many public classes live in this space, but the two most important are ObjectContext and ObjectQuery<T>.
This object’s main role is to encapsulate the underlying EntityConnection, and serve as a porthole for objects performing CRUD operations.
When you choose to use our code generation, you get a type-safe ObjectContext that incorporates some methods specific to your data model.
ObjectQuery<T> and its builder methods let you create queries in an completely object oriented way. It also provides a type-safe way to create queries. Most of the time, the shape and source of your data, the filtering, grouping and sorting criteria are known at compile time. So we call this our object-oriented programming interface.
You can still use fragments of eSQL with many builder methods, but the idea here is that you typically use ObjectQuery<T> in an early-bound manner to build queries that get compiled in your application. Even more important, the results of those queries can be full entity classes or new types created for projections.
Entity-SQL is a text based query language that currently gives you the most expressiveness over the Entity Framework stack on late-bound scenarios. You can use Entity-SQL to get collections of rows in the Entity Services layer, but also instances of entity classes, when used with Object Services.
I highly recommend reading Zlatko Michailov’s Entity SQL post for a head start on the language and on its main differences with traditional SQL.
The Language Integrated Query is a set of strategic language extension Microsoft is including both in C# and VB that facilitate the creation of query expressions using a terse syntax familiar to anyone who has used SQL.
LINQ is very powerful, and it is broadly applicable since it aims to solve the problem of querying any data source, including objects in memory, databases and XML files while maintaining a consistent, object-oriented and type-safe programming interface.
For the Entity Framework, ObjectQuery<T> is the center of our LINQ implementation. This class implements the necessary interfaces to fully support the creation and deferred execution of queries comprehensions against our stack.
We have invested a great amount of work in correctly mapping CLR features that can be useful in queries to our EDM and query capabilities. Still, LINQ and the Entity Framework are built and optimized against different goals and assumptions, and some concepts of LINQ and the Entity Framework simply do not map one-to-one.
We certainly plan to continue investing in better alignment. But right now the reality is that there are some things you can do with Entity SQL that still cannot be expressed in LINQ, and there are a few things you can do with LINQ that still we cannot be translated or compose over in our LINQ implementation.
My original answer stays correct: Using one or other API to create your applications also has to do with a matter of taste. This is specially true thanks to the flexibility of ObjectQuery<T>, which allows you to mix and match start with query building methods that take eSQL fragments inside or LINQ queries. Just be aware that you could run into some corners scenarios in which we cannot completely go from one model to the other and back.
Edit: The assertion that you can mix and match LINQ and ESQL was incorrect. Once you started one way, you have to keep in that route in ObjectQuery<T>.!
This q = context.Customers.Where(
PredicateBuilder.ContainsNonUnicodeString<Customer>(values, a => a.CompanyName));
The translation looks like..
I.
Update 8.11.2010: Broken link to Rowan’s | http://blogs.msdn.com/b/diego/default.aspx?PostSortBy=MostViewed&PageIndex=1 | CC-MAIN-2014-23 | refinedweb | 3,297 | 53.71 |
Download CD Content
The previous chapter introduced you to some basics of C++ programming. This chapter covers another vital part of C++ programming: getting input from the user and displaying the results. This chapter concentrates on getting that input and providing the required output. This chapter focuses on doing this from some command line such as the Windows DOS prompt/command prompt, or a Unix/Linux shell. This means that you will learn how to take in what the user types, and to display results in a text format, back to the user. Obviously a program that does not take in input, or give back output, is ultimately useless, so please pay particular attention to this chapter.
You will often want to provide the user with some type of text. It might be a message to the user, a request for input, or the results of some data that the program has computed. How do you take data from your program and convert it into some representation on the screen? Luckily for you, C++ provides some functions that handle screen output for you. All you have to do is learn how to use these functions. These functions are found in one of the header files that was mentioned in Chapter 1. The particular file you need to include is the iostream file. You include this by simply placing the following line at the top of your source code file.
#include <iostream>
Recall from Chapter 1 that, when you include a header file, you have access to all the functions defined in that file. By including this file you will have access to a number of functions for input and output. The two most important functions are cout and cin. Virtually all your input and output needs (at least for keyboard input and screen output) can be handled by these two functions. The first, cout, is how you display data to the screen. The following example shows the ubiquitous “hello world” program. (It seems like every programming book uses this example, so who are we to argue with tradition?)
Step 1: Enter the following code into your favorite text editor then save the file as example02-01.cpp.
#include <iostream> using std::cout; using std::cin; int main() { // Print hello world on the screen cout << "Hello World"; return 0; }
Step 2: To compile this, simply type the code into a text file and save it as hello.cpp. Then you will run your command line compiler by typing in bcc32 hello.cpp. If you typed in the code properly you will then have a hello.exe, that you can run any time you wish.
Step 3: Run the executable you just created. When you run it, it should look similar to Figure 2.1. (This image is from the command prompt of a Windows 2000 professional machine. A Linux shell would look a little bit different, but would be essentially the same concept.)
Figure 2.1: Hello World.
This may not be a particularly exciting program, but it does illustrate the basics of screen output. Notice that after we included <iostream> we also had two strange-looking lines of code.
using std::cout; using std::cin;
cout and cin are both defined inside of the iostream header file. You have to tell your program which parts of iostream you wish to use. These two lines of code tell the compiler that you wish to use cout and cin. (cin will be described in detail later in this chapter.)
The cout command tells the C++ compiler to redirect the output to the default display device, usually that’s the monitor. cout is short for “console output.” Notice the <<after the cout. The arrows literally point you in the direction the text will be sent. In the case of cout, the code is sent out of the program, thus the arrows point out of the code! This seems pretty simple so far, and it should. Now, what if we wish to format the code that we output in some special way? For example, when a program is done, the Windows 2000 command prompt (and earlier Windows DOS prompts) adds on the phrase “press any key to continue.” Perhaps you would like to place that on a separate line, to separate it from the output you are producing. That would be the logical thing to do because you do not wish to confuse the user into thinking that “press any key to continue” is your program’s output. Fortunately for you, C++ provides several formatting codes that you can add to any string to format it. For example, the \n code tells the C++ compiler to start a new line. Let’s rewrite the “hello world” program with this addition.
Step 1: Type the following code into your favorite text editor and save it as example02-02.cpp.
#include <iostream> using std::cout; using std::cin; int main() { cout << "Hello World \n"; return 0; }
Step 2: Compile the code by running bcc32 example02-02.cpp.
Step 3: Execute the compiled code by typing example02-02. You should see an image like that depicted in Figure 2.2.
Figure 2.2: Hello World 2.
Note that anything after “Hello world” is on a new line. That’s exactly what the \n command means; it means start a new line. There are actually several commands that you can execute in this manner to format the code in any way you like. These codes are often referred to as escape codes. Table 2.1 summarizes most of them for you.
As you can see, there is a plethora of options for formatting output to the screen. Throughout this chapter and the next you will see several of these codes used in examples. This table provides a summary of the escape characters you can use. You should recall from Chapter 1 that C++ is built on the framework of C. These formatting keys (also called escape keys) are a prime example. These keys work exactly the same way in C as they do in C++. The following is an example that illustrates everything covered thus far.
Step 1: Enter the following code into a text editor of your choice and save it as example02-03.cpp.
#include <iostream> using std::cout; using std::cin; int main() { // the following code demo's various escape keys cout << "\" Hello World \" \n"; cout << "C++ is COOL! \n \a"; return 0; }
Step 2: Compile the code by typing in bcc32 example02-03.cpp.
Step 3: Run your code by typing in example02-03.
If you entered the code in properly you will here a beep. (Remember that \a causes a beep.) You will see something similar to the image shown in Figure 2.3.
Figure 2.3: Hello World 3.
You should notice several things about this code. First, notice that you can place more than one escape sequence in order. You should notice that this was done in this example. You can use as many escape characters as is necessary in a given string of characters. You should also notice that the way to place quotes inside a string of characters is to use the proper escape character.
Finally, you should also notice the “beep” provided by \a. It is often useful to provide the user with audio signals in addition to visual signals.
Using these various escape sequences you manipulate the output of your program in a variety of ways. You can also, as you have already seen, create some audio output. The following example should illustrate this to you.
Step 1: Enter the following code into your favorite text editor.
#include <iostream> using std::cout; int main() { cout << "As you can see these \" escape keys \" \n"; cout << "are quite \'useful \' \a \\ in your code \\ \a \n"; return 0; }
Step 2: Compile that code.
Step 3: Execute the code. You should see something similar to Figure 2.4.
Figure 2.4: Using various escape keys.
These keys give you a wide range of formatting options as well as some sound effects. You will probably find these escape keys quite useful in your various programming projects.
In addition to the escape keys you have already seen, there are some other techniques for manipulating your output. For example, you will frequently see C++ programmers choosing the endl command for a new line at the end of screen output, rather than the escape key \n. To use endl just end your quotation marks, then type the << endl command, and terminate it with a semicolon. The following example demonstrates this.
Step 1: Enter the following code into your favorite text editor.
#include <iostream> using std::cout; using std::cin; using std::endl; int main() { cout << "You have previously used the \\n key to get a new line \n"; cout << "However you can also use the endl command"<<endl; return 0; }
Step 2: Compile that code.
Step 3: Execute the code. You should see something similar to what is displayed in Figure 2.5.
Figure 2.5: Using endl.
As you can see, the endl is just as useful in creating a new line. From a technical perspective, it also flushes C++’s buffer stream. What that means is that as you send content to the screen via cout, it is placed in a temporary buffer. The endl command causes that buffer to be emptied. | https://flylib.com/books/en/2.331.1.20/1/ | CC-MAIN-2021-25 | refinedweb | 1,569 | 73.68 |
Created on 2009-01-14 04:42 by javen72, last changed 2015-03-27 06:01 by Emil.Styrke. This issue is now closed.
I encountered a very strange issue in file flush operation in Windows.
Here's the scenario of my application:
1. The main thread of my application will create makefiles sequentially.
2. Once a makefile is generated, launch a separate process calling
nmake.exe to run it in parallell. The main thread then create another
makefile until no more makefiles to create.
3. The number of new processes is limited by command line options.
My application has been running for almost a year without any problem.
But, after I made some changes recently to the content of makefile
generated, "nmake.exe" in separate process sometimes complains that
the makefile was not found. But when I went into the directory, the
makefile was there.
Because I didn't change any thing related to file creation and the new
makefiles are a little bit less than before, I guessed that the
makefile just created hasn't been flushed to disk because of size
change so the new process could not see it in a short time.
So I decided add code to force flush file buffer after writing the
file content (I didn't flush the file explicitly before). I did it
like below:
Fd = open(File, "w")
Fd.write(Content)
Fd.flush()
os.fsync(Fd.fileno())
Fd.close()
The strangest thing happened. The "makefile" missing happened more
frequently than no flush operation. I searched the web but no answer
there.
Finally I tried to use Windows file API to create the file via pywin32
extension. The problem's gone.
import win32file
Fd = win32file.CreateFile(
File,
win32file.GENERIC_WRITE,
0,
None,
win32file.CREATE_ALWAYS,
win32file.FILE_ATTRIBUTE_NORMAL,
None
)
win32file.WriteFile(Fd, str(Content), None)
win32file.FlushFileBuffers(Fd)
win32file.CloseHandle(Fd)
I tried writing small python extension in C to make use Windows API to
create file like above. It also works well, even I removed the
FlushFileBuffers() calling.
I think that there's a bug in Python file buffer mechanism.
I tried to reproduce the issue, interpreting your description, but
failed. It worked fine in my setup. Perhaps you can add more elements
until it fails.
python2.5 uses the functions of the fopen() family: fwrite(), fclose().
Does the problem reproduce if you use these functions in your extension
module?
Are your files located on the local hard drive, or on a network storage?
gagenellina,
My application is a little bit different from your test code. It won't
wait for the exit of new process and there're file writing operations
during makefile running. I changed the test code to be real
multi-process and tried many file sizes. But I cannot reproduce it.
Maybe my application is more complicated situation.
The created files are on local drive. I saw the problem on the laptop
(XP-SP2), desktop(XP-SP3) and server (Win2003). But there's no such
problem on the Linux and Mac boxes.
I tried to use fopen/fwrite in my extension according to your
suggestion. The problem wasn't be reproduced. It seems the bug is more
alike in Python part.
My application is a build system and is also an open source project. Is
it possible for you to download it and try it in your box?
I created temporary user (py123, password: 123py123) for you (just in
case) and here's steps of how to reproduce it.
1. Checkout the build system source code in, for example, c:\test
C:\test> svn co --username py123 --password 123py123 tools
2. Checkout the source code to be built against in c:\test
C:\test> svn co --username py123 --password 123py123 edk2
3. Change the source code between line 222 and line 229 of
c:\test\tools\Source\Python\Common\Misc.py (SaveFileOnChange function)
like below:
Fd = open(File, "wb")
Fd.write(Content)
Fd.flush()
os.fsync(Fd.fileno())
Fd.close()
4. In c:\test\edk2, run
C:\test\edk2> edksetup.bat
C:\test\edk2> set PYTHONPATH=C:\test\tools\Source\Python
C:\test\edk2> python.exe C:\test\tools\Source\Python\build\build.py
-n 2 -p MdeModulePkg\MdeModulePkg.dsc -a IA32 -s
5. If the application stops with message like "makefile not found" or
"AutoGen.h not found" message, that means the problem happened.
Visual Studio 2005 is needed to reproduce it.
There would be more chances to see the problem by doing this:
C:\test\edk2> python.exe C:\test\tools\Source\Python\build\build.py
-n 2 -p IntelFrameworkModulePkg\IntelFrameworkModulePkg.dsc -a IA32 -s
I really tried, but I can't run your script. first PYTHONPATH must be
set to some value (I tried with tools\Source\Python); Also the tool is
complaining about missing WORKSPACE and EDK_TOOLS_PATH env vars, and a
missing file "Conf\target.txt".
This seems very complicated for a test case... furthermore it comes with
binaries already linked with a specific version of python... what if I
want to use a debug build?
Another question: is there a running anti-virus? if yes, can you try to
temporarily disable it?
Thank you very much for the trying. You might miss the step 4 in my
previous message. The step is:
C:\test\edk2> edksetup.bat
C:\test\edk2> set PYTHONPATH=C:\test\tools\Source\Python
C:\test\edk2> python.exe C:\test\tools\Source\Python\build\build.py
-n 2 -p IntelFrameworkModulePkg\IntelFrameworkModulePkg.dsc -a IA32 -s
The Visual Studio 2005 must be in the standard installation directory.
Otherwise the C:\test\edk2\Conf\tools_def.txt needs to be changed to
reflect the real path.
And I tried to disabled all antivirus services and the problem is still
there.
Don't worry about the binary version of build in
edk2\BaseTools\Bin\Win32 (linked against Python 2.5.2). The step I told
you is to execute my application from Python script directly. And I
tried to execute from script source against Python 2.5.4 and the problem
is the same. And no matter running the build from script source or the
freeze-ed binary, the results are the same either.
If it's hard or inconvenient for you to reproduce it, could you please
give me any advice or suggestion on how to debug it (in the interpreter)
and where's most possible place the root cause would be in the Python
interpreter's code? I can try to change something in the Python
interpreter's code, rebuild it and try it on my machine. Although I have
work around for this problem, I'd like to root cause it to avoid further
possible build break of our project.
I patiently waited for all those 150MB to download, modified Misc.py,
run the specified commands and got this error:
build.py...
: error 7000: Failed to start command
C:\Program Files\Microsoft Visual Studio 8\Vc\bin\nmake.exe /
nologo -s t
build [C:\test\edk2\Build\MdeModule\DEBUG_MYTOOLS\IA32\MdePkg\Library
\BasePrintL
ib\BasePrintLib]
That's right - VS is installed in another place. "C:\Program Files"
doesn't even exist in my Spanish version of Windows. edksetup.bat
didn't report any error, and I have nmake.exe in my PATH.
Anyway, trying to hunt a bug in the middle of 150 MB of code is way too
much. You should try to reduce it to the smallest piece that still
shows the problem.
(As a side note, a project so big SHOULD have plenty of unit tests, but
I see they're almost inexistant. Having to write tests forces people to
use a more modular design. In this case, probably it would have been
easier to test this issue, in a more isolated way, without having to
bring the whole framework in).
Multithreaded programs may be tricky. Looking at the
_MultiThreadBuildPlatform method (we're talking of it, right?) it isn't
obvious that there are no race conditions in the code.
For a producer-consumer process like this, I would use a pool of worker
threads, all waiting for work to do from a Queue, and a main (producer)
thread that puts tasks to be done into the Queue. The synchronization
is already done for you, it's almost automatic.
I've modified my previous example to show this usage.
Unless you can bring more evidence that this is a real Python issue and
not a race condition in your code or something, I'd recommend to close
this as invalid.
I agree multithread programming is kind of difficult and tricky. But I
don't think there's a race condition in _MultiThreadBuildPlatform
method, because I do have only one producer. And the consumer consumes
the product only when it's done (i.e. the file is created and closed).
The only race condition, I think, it's in the Python or Windows file
system, in which the file created might not be seen by other process due
to the file buffer mechanism. I think the flush() method of file object
and os.fsync() are to remove the race condition. But it seems that they
are not working as they're supposed to be.
What I know is that os.fsync() calls the _commit() which calls
FlushFileBuffers(). Why no problem if I call the FlushFileBuffers()
directly? That's why I think the most possible race condition is in
Python file buffer operation which is out of the control of my Python code.
I'm sorry that I didn't realize there's 150M code to checkout. Thanks
for your patience. Actually they are not code of my application itself.
They are the code used to test my application because my application is
a build system which needs source code to build. The real code of my
application is in the, for my example, directory of
C:\test\tools\Source\Python with just about 3M source files :-) And I
think I have narrowed down the issue in the file creation in
SaveFileOnChange function in C:\test\tools\Source\Python\Common\Misc.py.
I know it's very hard to reproduce issue in multi-thread context. And I
cannot say absolutely there's no bug in my code. It's OK for you to
close this tracker. But it would be better to let it open for a few days
so that I can do more investigation then. Anyway, thanks for the trying
again.
I narrowed down the root cause in the GIL of Python. I read the source
code of implementing os.fsync and found it's using
Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS to enclose the calling
of _commit() in order to be thread safe. I tried to add them in my C
extension and then the issue was reproduced.
It looks like the thread state operation or GIL caused a race condition
and let os.fsync() not flush the makefile in buffer to disk before it
returns.
The multi-thread scenario in my application is
a. the main thread produces makefile
b. once a makefile is generated, the main thread launches a new
thread in which a separated process is launched to run the makefile
until it exits.
I think there's no issue in this multi-thread scenario. And the Python
2.5.2 Maunal, Python/C API Ch8.1, says GIL must be taken care when doing
some blocking I/O operations, but it caused thread unsafe. So I still
think there's a bug in the Python.
If you start the new thread *after* the file is closed, no race
condition should exist. I'm sorry but I'm unable to follow the code you
use to create threads and write the makefile; but you should verify
that your original assertion "Once a makefile is generated, launch a
separate process..." is true.
Alternatively, when using a Queue like in my last example, make sure
the new job enters the queue *after* closing the.
01 for Module in Pa.Platform.Modules:
02 Ma = ModuleAutoGen(Wa, Module, BuildTarget, ToolChain, Arch,
self.PlatformFile)
03 if Ma == None:
04 continue
05 # generate AutoGen files and Makefile here
06 if self.Target not in ['clean', 'cleanlib', 'cleanall', 'run',
'fds']:
07 # for target which must generate AutoGen code and makefile
08 if not self.SkipAutoGen or self.Target == 'genc':
09 Ma.CreateCodeFile(True)
10 if self.Target == "genc":
11 continue
12
13 if not self.SkipAutoGen or self.Target == 'genmake':
14 Ma.CreateMakeFile(True)
15 if self.Target == "genmake":
16 continue
17
18 # Generate build task for the module which will be launched
separately
19 Bt = BuildTask.New(ModuleMakeUnit(Ma, self.Target)).
It isn't obvious that this is the case, at least to me, due to those "if" and "continue" in the code.
Try instrumenting it a bit and see what happens; a few "print" around lines 14 and 19 should suffice.
Also, try to come with a *short* example that shows the problem. Remove all irrelevant details: if you write a "constant" makefile and still fails, omit all dependences for makefile generation; if you invoke a simple .exe (like the one I posted) and it still fails, replace the call to nmake.exe. And so on, until you can't remove anything. It may happen that you find yourself the problem doing this.
Another strategy is to start with a simple example that works (like the one I posted) and adding more and more things until it doesn't work anymore.
With the code as it is now, it's difficult to say whether this is a Python bug or not; there are tons of other factors involved.
>.
Why not? All I/O operations release the GIL, and that's a good thing. In this case, if (as you assert) the other thread that accesses the file hasn't started yet, it doesn't matter if the GIL is released or not.
Yahoo! Cocina
Recetas prácticas y comida saludable
I did trace the order of file creation and process launch. It shows the
file is indeed created before the process launch.
I did another investigation. I added a new process, "cmd.exe /c copy
Makefile NewMakefile", to copy the the Makefile created, just before
launching the "nmake.exe" process. The strangest thing happened again:
each makefile was copied successful but there's still "makefile not
found" reported by "nmake.exe" process. I tried standalone copy
application "xcopy.exe" and the result was the same. So I guess that
"cmd.exe", "xcopy.exe" and "nmake.exe" might use different families of
API or different mode (sync vs async) to access file.
I decided to try the code you provided. In checkfile.c, fopen is used to
test the file existence. I changed it to the Windows native API
CreateFile and I also added a file write operation in order to make it
more similar to the real case. Eventually, the problem was reproduced in
your code. Following are the successful number of creating 1000 files 5
times in different file sizes:
Create file in 403 bytes,
985, 992, 984, 989, 992 (no flush after creation)
883, 886, 907, 909, 915 (flush after creation)
Create file in 4061 bytes
983, 976, 982, 977, 983 (no flush after creation)
654, 672, 684, 686, 648 (flush after creation)
Create file in 16461 bytes:
970, 967, 963, 963, 971 (no flush after creation)
598, 664, 711, 653, 623 (flush after creation)
In summary:
a. Using fopen to test a file existence in check_file.c will never
report failure, no matter buffer is flushed or not.
b. Using CreateFile (read mode) to test a file existence in
check_file.c will always report failure. The bigger the file size will
cause more failure reported; the flush operation after file creation in
test_file_flush.py will cause more failure reported; the flush operation
after new file creation in check_file.c will cause more failure
reported; no flush operation in both check_file.c and test_file_flush.py
almost cause no failure.
I don't know what's root cause: GIL, Python thread state switch, Python
file buffer or Windows FileCreate API. I'm just certain there's race
condition between Python and Windows.
The test code and script which can reproduce the problem has been uploaded.
Thanks for adapting the smaller example. I think I figured out what's
the problem.
The error reported by checkfile.c is 0x20 = ERROR_SHARING_VIOLATION
"The process cannot access the file because it is being used by
another process."
I modified the subprocess call, adding the parameter close_fds=True --
and now I see no errors.
Please do a similar change in your application and see if it works.
I'm using Python2.5 in which close_fds is not available in Windows. And
I cannot use Python2.6 because I need to redirect the stdout and stderr
and there's performance concern.
I have questions on the root cause:
a. why doesn't fopen() has sharing issue?
b. why don't os.close() and FileObject.close() really close the file?
python2.6 is not so different from python2.5. Which performance concerns
do you have?
I don't have Python2.6 installed. I just have ever read a bench article
comparing the performance between different version of Python, including
2.5, 2.6 and 3.0. That article shows, for the same script, Python2.6 has
longer running time than Python2.5. My application is a build system and
the users are very care about the time spent in build. That's why I have
the performance concern.
What concerns me more is the stdout/stderr redirection. Python2.6 manual
says they cannot be redirected if close_fds is set to True. My
application relies on the I/O redirection to control the screen output
from subprocess.
Actually I have had a work-around for this issue. It works very well so
far. I reported a bug here just because I want the Python to be better.
I learned it one year ago but I love this language. I just hope nobody
else encounter such problem again. If you guys think it won't be fixed
in Python 2.5 or has been fixed in Python 2.6, it's OK to close this
tracker. Thanks for your time.
To be a tracker bug, as opposed to feature request, behavior must disagree with the manual or doc string, not personal expectation. In any case, OP declined to test on 2.6 and suggested closing, so I am. This still remains on tracker to be searched.
Anyone who considers reopening should verify a discrepancy with 2.7 or later current version.
I have experienced this issue with Python 2.7.8 and 2.7.9. It is almost the same issue as the OP experiences as far as I can tell: spawn some Python threads that each create a file, flush, fsync, close, then start a subprocess which uses the file through the Windows API. One thing that may differ is that I create the file and spawn the child from the same thread.
I tried with close_fds=True, and it indeed works then, but like the OP, in my production code I need to get the output from the process, so it is not a usable workaround for me.
Test script and child program code is available at (the file upload button doesn't work for me it seems). Running the script on my machine will print at least one failure most of the time, but not always.
Emil,
Your example child process opens the file with only read sharing, which fails with a sharing violation if some other process inherits the file handle with write access. The `with` block only prevents this in a single-threaded environment. When you spawn 10 children, each from a separate thread, there's a good chance that one child will inherit a handle that triggers a sharing violation in another child.
Using close_fds is a blunt solution since it prevents inheriting all inheritable handles. What you really need here has actually already been done for you. Just use the file descriptor that mkstemp returns, i.e. use os.fdopen(infd, 'wb'). mkstemp opens the file with O_NOINHERIT set in the flags.
eryksun, thank you for the explanation, the penny finally dropped for me. It's the *other* child processes that keep the file open, not the parent. Actually this bug started hitting me after I *stopped* doing exactly what you suggest (fdopen on the temp file), and instead started using file names generated by my program (opened with regular "open") and now I see why that is. If I use os.fdopen with the O_NOINHERIT flag, it works as expected. | https://bugs.python.org/issue4944 | CC-MAIN-2021-25 | refinedweb | 3,458 | 67.45 |
PYTHON
pytest
Overview
pytest is a program/framework for running Python tests.
Writing Tests
pytest looks for tests in Python files which either begin in
test_ or end in
_test, for example:
test_my_module.py # Will be found my_module_test.py # Will be found my_module.py # Won't be found
Inside these files,
pytest will look for either:
- Functions that begin with
test, or
- Functions/methods that begin with
testthat are inside classes that begin with
Test.
For example:
def test_my_function(): # Will be found assert True class TestMyStuff: def test_my_stuff(): # Will be found assert True def my_other_function(): # Won't be found pass
Running Tests
Run All Tests In Module
$ pytest my_file.py
Run All Tests In Directory
This will also run any tests in sub-directories:
$ pytest my_dir/
Test Name Matching
You can use
-k to match against test name substrings. The following command will run all tests that contain the stinrg
hello, e.g.
test_hello_world,
test_slow_hello():
$ pytest -k hello
Marks
Marks (or markers) can be applied to test functions using a decorator in the form
@pytest.mark.name_of_mark.
For example, to apply a mark called
unit (note that
unit should defined as a custom mark before you use it like this, see the below Custom Marks section):
import pytest @pytest.mark.unit def my_test_function(): assert True
You can use the
-m option on the command-line to only run tests with specific marks. The following command will run all tests in the current directory/sub-directory with the mark
my_mark:
$ pytest -m unit
A common use-case is to specify tests with test types such as
unit,
e2e,
performance, e.t.c so that you can easily run quick unit tests during development and longer running tests on merge or nightly.
You can also negatively select tests against a mark. The following command will run all tests to marked with
unit:
$ pytest -m "not unit"
You can get a list of all the marks you can use from the command-line with:
$ pytest --markers
Custom Marks
Custom marks need to be registered before you can use them. They can be registered in your
pytest.ini file, like so:
[pytest] markers = unit: All unit tests (fast tests requiring no external dependancies). e2e: End-to-end tests.
As shown above, a description/comment can be added after the
: symbol.
For more information of marks, see.
conftest.py
conftest.py files are used to specify directory-specific
pytest features. All
conftest.py files that are at the directory level of the test or closer to the root of the file system will be used when executing
pytest. You can have many
conftest.py files per test project.
Common things to include into
conftest.py files include
pytest hooks and fixtures, as well as loading external plugins specific to the tests in same directory.
Plugins
xdist
xdist allows you to provide the
-n option to distribute the tests to multiple CPUs:
$ pytest -n 4
Note however this will prevent all of your
print() statements from working (as well as anything else the prints to
stdout, e.g. log messages). As a workaround, you can redirect
stdout to
stderr:
import sys sys.stdout = sys.stderr
This can be added to a
conftest.py file so that it applies to all tests in it's directory and subdirectories. Be warned that all of the output will be interleaved, so it might make the output somewhat useless!
Jenkins
pytest has the ability to generate
junit.xml files which are used by Jenkins to display the test results.
You can provide the
--junitxml <path> option to
pytest and it will generate the file for you:
$ pytest --junitxml /test_output/results.xml
Related Content:
- An Introduction To Asynchronous Programming In Python
- Installing Python
- scipy
- Python Virtual Environments
- pandas
- programming
- programming languages
- Python
- pytest
- marks
- markers
- Jenkins
- tests
- testing
- unit tests
- conftest.py
- xdist | https://blog.mbedded.ninja/programming/languages/python/pytest/ | CC-MAIN-2020-05 | refinedweb | 642 | 62.48 |
Trying to build pygame with python 2.7 on Mac
Hi all-
I thought I'd try to build pygame with python 2.7 on my Mac running Snow Leopard again. I do have pygame running under python 2.6.
It seems to successfully compile and install, but then when I try to run an game, I get the following:
Traceback (most recent call last):
File "Wiggy.py", line 10,): Symbol not found: _SDL_EnableUNICODE
Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pygame/base.so
Expected in: flat namespace
in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pygame/base.so
Any suggestions/explanations would be appreciated.
Dan | http://permalink.gmane.org/gmane.comp.python.pygame/21581 | CC-MAIN-2014-10 | refinedweb | 118 | 52.36 |
On 1/21/21 11:13 AM, Daniel P. Berrangé wrote: > On Thu, Jan 21, 2021 at 10:11:32AM +0000, Daniel P. Berrangé wrote: >> On Thu, Jan 21, 2021 at 10:56:15AM +0100, Philippe Mathieu-Daudé wrote: >>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> >>> --- >>> Cc: Daniel P. Berrange <berrange@redhat.com> >>> --- >>> meson.build | 34 +++++++++++++++++++--------------- >>> 1 file changed, 19 insertions(+), 15 deletions(-) >>> >>> diff --git a/meson.build b/meson.build >>> index e372b69f163..9274775a81a 100644 >>> --- a/meson.build >>> +++ b/meson.build >>> @@ -2453,19 +2453,8 @@ >>> endif >>> summary(summary_info, bool_yn: true, section: 'Block layer support') >>> >>> +# Crypto >> >> Rather than a comment why not introduce grouping in the output >> so it is visible when reading the summary. >> >> This can be done in meson by calling summary() multiple times >> giving "section: 'Crypto'" arg. > > Sigh, I'm not very good at reading this morning. I see this > is in fact done in this patch, I just couldn't see it in the > diff :-( No worry, I should have describe this better in the commit description. > ... >>> +summary(summary_info, bool_yn: true, section: 'Crypto') > > > Regards, > Daniel > | https://lists.gnu.org/archive/html/qemu-devel/2021-01/msg05384.html | CC-MAIN-2021-49 | refinedweb | 180 | 67.35 |
I can connect to my local mysql database from python, and I can create, select from, and insert individual rows.
My question is: can I directly instruct mysqldb to take an entire dataframe and insert it into an existing table, or do I need to iterate over the rows?
In either case, what would the python script look like for a very simple table with ID and two data columns, and a matching dataframe?
There is now a
to_sql method, which is the preferred way to do this, rather than
write_frame:
df.to_sql(con=con, name='table_name_for_df', if_exists='replace', flavor='mysql')
Also note: the syntax may change in pandas 0.14...
You can set up the connection with MySQLdb:
from pandas.io import sql import MySQLdb con = MySQLdb.connect() # may need to add some other options to connect
Setting the
flavor of
write_frame to
'mysql' means you can write to mysql:
sql.write_frame(df, con=con, name='table_name_for_df', if_exists='replace', flavor='mysql')
The argument
if_exists tells pandas how to deal if the table already exists:
if_exists: {'fail', 'replace', 'append'}, default
'fail'
fail: If table exists, do nothing.
replace: If table exists, drop it, recreate it, and insert data.
append: If table exists, insert data. Create if does not exist.
Although the
write_frame docs currently suggest it only works on sqlite, mysql appears to be supported and in fact there is quite a bit of mysql testing in the codebase. | https://codedump.io/share/g1u9FusqKm9m/1/how-to-insert-pandas-dataframe-via-mysqldb-into-database | CC-MAIN-2017-43 | refinedweb | 239 | 70.94 |
Note: This tutorial uses Team functionality available in the Terraform Cloud Team tier and above, and in Terraform Enterprise. Organization owners can enable a 30-day free trial in their settings under “Plan & Billing”.
The TFE Terraform provider can codify your Terraform Cloud workspaces, teams and processes.
In this tutorial, you will use the TFE provider to automate the creation and configuration of the Terraform Cloud (TFC) workspaces in the Deploy Consul and Vault on Kubernetes with Run Triggers Learn tutorial.
In this tutorial, you will automate the following using the TFE provider:
- Deploy three version-control backed workspaces in Terraform Cloud
- Create three Terraform teams to manage their respective workspaces. This is a new addition to the Deploy Consul and Vault on Kubernetes with Run Triggers Learn tutorial.
- Configure run triggers to each workspace to automate the process.
You will then trigger the deployment of a Consul-backed Vault cluster on Google Kubernetes Engine (GKE).
»Prerequisites
This tutorial focuses on how to leverage the TFE provider to automate your Terraform Cloud workflows and assumes that you are familiar with the standard Terraform workflow, Terraform Cloud, run triggers and provisioning a Kubernetes cluster using Terraform.
If you are unfamiliar with any of these topics, reference their respective tutorials.
- Provision GKE cluster using Terraform — Provision a GKE Cluster (Google Cloud)
- Run Triggers to deploy Consul and Vault on Kubernetes — Deploy Consul and Vault on Kubernetes with Run Triggers
For this tutorial, you will need:
- a Google Cloud (GCP) account with access to Compute Admin and GKE Admin
- a Terraform Cloud with the Team plan or Terraform Enterprise account
- a Terraform Cloud user. Refer to [Manage Permissions in Terraform Cloud] to learn how to invite a user to a Terraform Cloud organization.
- a GitHub account
- Github.com added as a VCS provider to Terraform Cloud. Refer to the Configure GitHub.com Access through OAuth tutorial to learn how to do this.
If you don’t have your GCP credentials as a JSON or your credentials don’t have access to Compute Admin and GKE Admin, reference the GCP Documentation to generate a new service account and with the right permissions.
If you are using a GCP service account, your account must be assigned the Service Account User role.
Note: There may be some charges associated with running this configuration. Please reference the GCP pricing guide for more details. Instructions to remove the infrastructure you create can be found at the end of this tutorial.
»Fork workspace repositories
You will need to fork three GitHub repositories, one for each workspace — Kubernetes, Consul, Vault).
»Fork Kubernetes repository
Fork the Learn Terraform Pipelines K8s repository. Update the organization and workspace values in
main.tf to point to your organization and your workspace name — the default organization is
hashicorp-learn and workspace is
learn-terraform-pipelines-k8s. This is where the Terraform remote backend and Google provider are defined.
# main.tf terraform { backend "remote" { organization = "hashicorp-learn" workspaces { name = "learn-terraform-pipelines-k8s" } } }
»Fork Consul workspace
Fork the Learn Terraform Pipelines Consul repository. Update the
organization and
workspaces values in
main.tf to point to your organization and your workspace name —
learn-terraform-pipelines-consul.
# main.tf terraform { backend "remote" { organization = "hashicorp-learn" workspaces { name = "learn-terraform-pipelines-consul" } } }
The
main.tf file contains the configuration for the Terraform remote backend, Terraform remote state (to retrieve values from the Kubernetes workspace), Kubernetes provider and Helm provider.
»Fork Vault Repository
Fork the Learn Terraform Pipelines Vault repository. Update the
organization and
workspace values in
main.tf to point to your organization and your workspace name (
learn-terraform-pipelines-vault).
# main.tf terraform { backend "remote" { organization = "hashicorp-learn" workspaces { name = "learn-terraform-pipelines-vault" } } }
The
main.tf file contains the configuration for the Terraform remote backend, Terraform remote state (to retrieve values from the Kubernetes and Consul workspaces), and Helm provider.
»Clone repository
Clone the Learn Terraform TFE Provider Run Triggers GitHub repository.
$ git clone
»Review configuration
Navigate to the cloned repository.
$ cd learn-terraform-tfe-provider-run-triggers
This directory contains the configuration to spin up the Terraform Cloud workspaces and teams needed to deploy and manage a Consul-backed Vault on Kubernetes.
Here, you will find the following files.
main.tfdefines the TFE provider and random provider in
required_providerblock.
random.tfcontains the configuration to generate a random value to append to your Terraform Cloud team and workspace name. This is to ensure there are no name conflicts.
variables.tfcontains all the variables used in the configuration. This file has comments, breaking it into 6 sections — Google, GitHub, TFC organization/team names, and workspace names and variables for the Kubernetes, Consul and Vault workspaces.
assets/*contains the list of
csvfiles used to populate each workspace's team members. In addition, this directory will contain your JSON GCP credentials. The
.gitignorefile contains
gcp-creds.jsonand therefore will not commit your GCP credentials to version control.
tfc.tfcontains the configuration to reference users to the Terraform Cloud organization. If these users are not part of the teams defined above, they won't be able to apply any runs in the Kubernetes, Consul or Vault workspaces. In addition, this configuration defines the run triggers between the Kubernetes and Consul workspaces and the Consul and Vault workspaces.
admin.tfcontains the configuration to create the
adminteam and gives it
adminaccess to the Kubernetes, Consul and Vault workspaces.
In addition, the
workspace-k8s.tf,
workspace-consul.tf and
workspace-vault.tf define their respective workspaces and does the following.
- Creates a team to manage its particular workspace.
- Adds members listed in
assets/*.csvto the team, where
*is the workspace.
- Creates the particular workspace, linking it with its respective forked repository. This workspace will not queue runs when it created.
- Gives write permission to particular team.
- Defines the workspace's Terraform and environment variables.
»Customize Configuration
To use this configuration, update the
variables.tf and
assets/*.csv files. In addition, you will need to add your Google Cloud credentials to the
assets directory.
»Update variables
Update the
variables.tf file with your values. These variables currently contain
REPLACE_ME as their default value.
google_project_id- Update the default value with your Google Project ID.
vcs_oauth_token_id- Update the default value with your Terraform Cloud VCS Provider's OAuth Token ID.
k8s_repo_name- Update the default value to point to your forked Kubernetes workspace repository.
consul_repo_name- Update the default value to point to your forked Consul workspace repository.
vault_repo_name- Update the default value to point to your forked Vault workspace repository.
tfc_org- Update the default value with your Terraform Cloud's organization name.
»Update team CSV files
The
assets directory contains
all.csv,
admin.csv,
k8s.csv,
consul.csv and
vault.csv.
all.csv is a superset of
*.csv files and should contain email addresses that exists in Terraform Cloud. The
admin.csv should contain email addresses that will have access to all three workspaces (Kubernetes, Consul, Vault). The
k8s.csv,
consul.csv, and
vault.csv should contain email addresses that will have access to their respective workspaces.
Update the email address for all the
csv files in the
assets directory. The following command will replace the existing email address for all files with your email. Replace
EMAIL_ADDRESS with your email address. This email address must already be a user in your Terraform Cloud organization.
Alternatively, you can update each file with a different email address to test Terraform Cloud team permissions.
$ sed -i '' 's/test@hashicorp\.com/EMAIL_ADDRESS/g' ./assets/*
»Add Google Cloud Credentials
Add your Google Cloud Credentials to the
assets directory and name it
gcp-creds.json.
You must flatten the JSON (remove newlines) before pasting it into Terraform Cloud. The command below flattens the JSON using jq, removes the trailing newline and writes it to
assets/gcp-creds.json.
$ cat <key_file>.json | jq -c | tr -d '\n' > assets/gcp-creds.json
If you don’t have your GCP credentials as a JSON or your credentials don’t have access to Compute Admin and GKE Admin, reference the GCP Documentation to generate a new service account and with the right permissions.
»Apply configuration
Before you can apply your configuration, you need to authenticate to Terraform Cloud.
Go to the Tokens page in Terraform Cloud and generate an API token.
Add the generated API token as an environment variable named
TFE_TOKEN. Replace value after
TFE_TOKEN= with the API token you retrieved.
$ export TFE_TOKEN=UGkrH5Uu3RqTXA.atlasv1...
Initialize your configuration.
$ terraform init
Apply your configuration.
$ terraform apply An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: ## Output truncated Plan: 34 to add, 0 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value:
Remember to confirm your apply with a
yes.
»Deploy Kubernetes cluster
Now that you have successfully configured all three workspaces (Kubernetes, Consul, and Vault), you can deploy your Kubernetes cluster.
Select your Kubernetes workspace and click "Queue Plan". If the plan is successful, Terraform cloud will display a notice that a run will automatically queue a plan in the Consul workspace, and ask you to confirm and apply.
Click "Confirm & Apply" to apply this configuration. This process should take about 10 minutes to complete.
»Deploy Consul
Navigate to the Consul workspace, view the run plan, then click "Confirm & Apply". This will deploy Consul onto your cluster using the Helm provider. The plan retrieves the Kubernetes cluster authentication information from the Kubernetes workspace to configure both the Kubernetes and Helm provider.
This process will take about 2 minutes to complete.
Notice that a plan for the
learn-terraform-pipelines-vault workspace will be automatically queued once the apply completes.
»Deploy Vault
Navigate to the Vault workspace, view the run plan, then click "Confirm & Apply". This will deploy Vault onto your cluster using the Helm provider and configure it to use Consul as the backend. The plan retrieves the Kubernetes namespace from the Consul workspace’s remote state and deploys Vault to the same workspace.
This process will take about 2 minutes to complete.
»Next steps
Congratulations — you have created and configured Terraform Cloud workspaces to deploy a Consul-backed Vault on a GKE cluster using the TFE Provider.
Refer to the Deploy Consul and Vault on Kubernetes with Run Triggers Learn tutorial for instructions on how to verify and view your Consul and Vault deployments.
»Clean up resources
To clean up the resources and destroy the infrastructure you have provisioned in this track, go to each workspace in the reverse order you created them in (Vault, Consul, Kubernetes), queue a destroy plan, and apply it.
For a more detailed guide on destroying resources on Terraform Cloud, reference the Clean up Cloud Resources guide.
NOTE: The TFE provider only manages Terraform Cloud workspaces and teams. It does not queue destroy plans. If you destroy your workspace using
terraform destroy, resources provisioned by that workspace will not be destroyed.
After you've done this, go to your TFE provider configuration.
Destroy the resources. This will remove members and destroy the Terraform Cloud workspaces and teams created in this tutorial. Remember to confirm your apply with a
yes.
$ terraform destroy
»Helpful Links
To learn more about the TFE provider, reference the TFE Provider Registry page.
To learn how to get started with Consul Service Mesh, visit the Getting Started with Consul Service Mesh Learn track.
To learn how to leverage Vault features on Kubernetes, visit the Kubernetes Learn Vault track. | https://learn.hashicorp.com/tutorials/terraform/tfe-provider-run-triggers?in=terraform/kubernetes | CC-MAIN-2020-45 | refinedweb | 1,925 | 57.57 |
Download presentation
Presentation is loading. Please wait.
Published byAnthony Chase Modified over 3 years ago
1
Hybrid Synchronous Languages Vijay Saraswat IBM TJ Watson Research Center March 2004
2
Outline CCP as a framework for concurrency theory Constraints CCP Defaults Discrete Time Hybrid Time Examples Themes Synchronous languages are widely applicable Space, as well as time, needs to be treated.
3
Constraint systems Any (intuitionistic, classical) system of partial information For Ai read as logical formulae, the basic relationship is: A1,…, An |- A Read as If each of the A1,…, An hold, then A holds Require conjunction, existential quantification A,B,D ::= atomic formulae | A&B |X^A G ::= multiset of formulae (Id) A |- A (Id) (Cut) G |- B G,B |- D G,G |- D (Weak) G |- A G,B |- A (Dup) G, A, A |- B G,A |- B (Xchg) G,A,B,G |- D G,B,A,G |- D (&-R) G,A,B |- D G, A&B |- D (&-L) G |- A G|- B G |- A&B (^-R) G |- A[t/X] G |- X^A (^-L,*) G,A |- D G,X^A |- D
4
Constraint system: Examples Gentzen Herbrand Lists Finite domain Propositional logic (SAT) Arithmetic constraints Naïve, Linear Nonlinear Interval arithmetic Orders Temporal Intervals Hash-tables Arrays Graphs Constraint systems are ubiquitous in computer science Type systems Compiler analysis Symbolic computation Concurrent system analysis
5
Concurrent Constraint Programming Use constraints for communication and control between concurrent agents operating on a shared store. Two basic operations Tell c: Add c to the store Ask c then A: If the store is strong enough to entail c, reduce to A. (Agents) A::= c c A A,B X^A (Config) G ::= A,…, A G, A&B G,A,B G, X^A G,A (X not free in G) G, c A G,A (s(G) |- c) [[A]] = set of fixed points of a clop Completeness for constraint entailment
6
Default CCP A ::= c ~~> A Unless c holds of the final store, run A ask c \/ A Leads to nondet behavior (c ~~> c) No behavior (c 1 ~~> c 2, c 2 ~~> c 1 ) gives c1 or c2 (c ~~> d): gives d (c, c~~>d): gives c [A] = set S of pairs (c,d) st S d ={c | (c,d) in S} denotes a clop. Operational implementation: Backtracking search Open question: compile-time analysis Use negation as failure
7
Discrete Timed CCP Synchronicity principle System reacts instantaneously to the environment Semantic idea Run a default CCP program at each time point Add: A ::= next A No connection between the store at one point and the next. Future cannot affect past. Semantics Sets of sequences of (pairs of) constraints Non-empty Prefix-closed P after s =d= {e | s.e in P} is denotation of a default CC program
8
Timed Default CCP: basic results The usual combinators can be programmed: always A do A watching c whenever c do A time A on c A general combinator can be defined time A on B: the clock fed to A is determined by (agent) B Discrete timed synchronous programming language with the power of Esterel Present is translated using defaults Proof system Compilation to automata
9
jcc Implementation of Default Timed cc in Java Uses Java as a host language Full implementation of Timed Default CCP Promises, unification. More CSs can be added. Implements defaults via backtracking. Uses Java GC Saraswat,Jagadeesan, Gupta jcc: Integrating Timed Default CCP into Java, Dec 2003 Very useful as a prototyping language. Currently only backend implemented. Available from sourceforge aswat/jcc.html aswat/jcc.html LGPL
10
The Esterel stopwatch program public void WatchAgent() { watch = WATCH; whenever (watch==WATCH) { unless (changeMode) { print("Watch Mode"); } when (p == UL) { next enterSetWatch = ENTER_SET_WATCH; changeMode=CHANGE_MODE; print("Set Watch Mode"); } when (p == LL) { next stopWatch=STOP_WATCH; changeMode=CHANGE_MODE; print("Stop Watch Mode"); } do { always watch = WATCH; } watching ( changeMode ) unless (changeMode) { beep(); } time.setAlarmBeeps(beeper); }
11
Hybrid Systems Traditional Computer Science Discrete state, discrete change (assignment) E.g. Turing Machine Brittleness: Small error major impact Devastating with large code! Traditional Mathematics Continuous variables (Reals) Smooth state change Mean-value theorem e.g. computing rocket trajectories Robustness in the face of change Stochastic systems (e.g. Brownian motion) Hybrid Systems combine both Discrete control Continuous state evolution Intuition: Run program at every real value. Approximate by: Discrete change at an instant Continuous change in an interval Primary application areas Engineering and Control systems Paper transport Autonomous vehicles… And now.. Biological Computation. Emerged in early 90s in the work of Nerode, Kohn, Alur, Dill, Henzinger…
12
Background: Concurrent Constraint Programming A constraint expresses information about the possible values of variables. E.g. x = 0, 7x + 3y = 21. A cc program consists of a store, which is a set of constraints, and a set of subprograms independently interacting with it. A subprogram can add a constraint to the store. It can also ask if a constraint is entailed by the store, and if so, reduce to a new set of subprograms. The output is the store when all subprograms are quiescent. Example program: x = 10, x = 0, if x > 0 then x = -10 X = 10 X = 0 if x>0 then x=-10 if y=0 then z=1 store x = 0 X = 10 if x>0 then x=-10 if y=0 then z=1 store x = 0, x = 10 if x>0 then x=-10 if y=0 then z=1 store x = 0, x = 10 x=-10 if y=0 then z=1 store x = 0, x =10, x = -10 if y=0 then z=1 Basic combinators: c add constraint c to the store. if c then A if c is entailed by the store, execute subprogram A, otherwise wait. A, B execute subprograms A and B in independently. unless c then A if c will not be entailed in the current phase, execute A (default cc). Output Gupta/Carlson
13
Extending cc to Hybrid cc Basic assupmtion: The evolution of a system is piecewise continuous. Thus, a system evolution can be modeled a sequence of alternating point and interval phases. Constraints will now include time-varying expressions e.g. ODEs. Execute a cc program in each phase to determine the output of that phase. This will also determine the cc program to be run in the next phase. In an interval phase, any constraints asked of the store are recorded as transition conditions. Integrate the ODEs in the store to evolve the time dependent variables, using the store in the previous point phase to determine the initial conditions. The phase ends when any transition condition changes status. The values of the variables at the end of the phase can be used by the next point phase. Example program: x=10,x=0, hence {if x>0 then x=-10, if prev(x)=0 then x=-0.5*prev(x)} cc prog output t = 1.414- x = 10, x = 0 x = 10, x = 0 if x>0 thenx = -10 if prev(x)=0 then x=-0.5*prev(x) x=-10 x=10,x=0 x > 0, prev(x) = 0 x=-10 x=0,x=-14.14 x > 0, prev(x) = 0 if x>0 thenx = -10 if prev(x)=0 then x=-0.5*prev(x) x = 0, x = 7.07 t = 0 t = 0+ t = 1.414 New combinator: hence A execute a copy of A in each phase (except the current point phase, if any) Gupta, Jagadeesan, Saraswat Computing with continuous change, SCP 1998 Gupta/Carlson
14
Hybrid cc with interval constraints n Arithmetic variables are interval valued. Arithmetic constraints are non-linear algebraic equations over these, using standard operators like +, *, ^, etc. Users can easily add their own operators as C libraries (useful for connecting with external C tools, simulators etc.). n Object-oriented system with methods and inheritance. Methods and class definitions are constraints and can be changed during the course of a program. Recursive functions are allowed. n Various combinators defined on the basic combinators e.g. do A watching c --- execute A, abort it when c becomes true when c do A --- start A at the first instant when c becomes true wait N do A --- start A after N time units forall C(X) do A(X) --- execute a copy of A for each object X of class C n Arithmetic expressions compiled to byte code and then machine code for efficiency. Common subexpressions are recognized. n Copying garbage collector speeds up execution, and allows taking snapshots of states. n API from Java/C to use Hybrid cc as a library. System runs on Solaris, Linux, SGI and Windows NT. Carlson, Gupta Hybrid CC with Interval ConstraintsGupta/Carlson
15
The Arithmetic Constraint System Constraints are used to narrow the intervals of their variables. For example, x^2 + y^2 = 1 reduces the intervals for x and y to [-1,1] each. Further adding x >= 0.5 reduces the interval for x to [0.5, 1], and for y to [-0.866, 0.866]. Various interval pruning methods prune one variable at a time. n Indexicals: Given a constraint f(x,y) = 0, rewrite it as x = g(y). If x I and y J, then set x I g(J). Note: y can be a vector of variables. n Interval splitting: If x [a, b], do binary search to determine minimum c in [a,b] such that 0 f([c,c], J), where y J. Similary determine maximum such d in [a,b], and set x [c,d]. n Newton Raphson: Get minimum and maximum roots of f(x,J) = 0, where y J. Set x as above. n Simplex: Given the constraints on x, find its minimum and maximum values, and set it as above. Non-linear terms are treated as separate variables. These methods can be combined to increase efficiency. For example, we use Splitting only to reduce the size of the interval of x, then use Newton Raphson to get the root quickly. Gupta/Carlson
16
Integrating the differential equations Differential equations are just ordinary algebraic equations relating some variables and their derivatives e.g. f = m * a, x + d*x + k*x = 0. We provide various integrators --- Euler, 4th order Runge Kutta, 4th order Runge Kutta with adaptive stepsize, Bulirsch-Stoer with polynomial extrapolation. Others can be added if necessary. All integrators have been modified to integrate implicit differential equations, over interval valued variables. Exact determination of discrete changes (to determine the end of an interval phase) is done using cubic Hermite interpolation. For example, in the example program we need to check if x = 0. We use the value of x and x at the beginning and end of an integration step to determine if x = 0 anywhere in this step. If so, the step is rolled back, and a smaller step is taken based on the estimate of the time when x = 0. This is repeated till the exact time when x = 0 is determined. Gupta/Carlson
17
Example: The Solar System Planet = (m, initpx, initpy, initpz, initvx, initvy,initvz) [px, py, pz, mass]{ px = initpx, py = initpy, pz = initpz, px' = initvx, py' = initvy, pz'=initvz, always { mass := m, px'' := sum(g * P.mass * (P.px - px)/((P.px -px)^2 + (P.py -py)^2 + (P.pz -pz)^2)^1.5, Planet(P), P != Self), py'' := sum(g * P.mass * (P.py - py)/((P.px -px)^2 + (P.py -py)^2 + (P.pz -pz)^2)^1.5, Planet(P), P != Self), pz'' := sum(g * P.mass * (P.pz - pz)/((P.px -px)^2 + (P.py -py)^2 + (P.pz -pz)^2)^1.5, Planet(P), P != Self) } }, always pTg := 8.88769408e-10, //Coordinates, velocities on 1998-jul-01 00:00 Planet(Sun, 332981.78652, -0.008348511782195148, 0.001967945668637627, 0.0002142251001467145, -0.000001148114436325, - 0.000008994958827348018, 0.00000006538635311283), Planet(Mercury, 0.0552765501, -0.4019000379850893, -0.04633361689674035, 0.032392079927509,-0.002423875040887606, - 0.02672168963230259, -0.001959654820981497), Planet(Venus, 0.8150026784, 0.6680247657009936, 0.2606201175567890, -0.03529355196193388, -0.007293563117650372, 0.01879420958390879, 0.0006778739390714113), Planet(Earth, 1.0, 0.1508758612501242, -1.002162526305211, 0.0002082851504420832, 0.01671098890724774, 0.002627047365383169, -0.0000004771611907632339), /* A fragment of a model for the Solar system. The remaining lines give the coordinates and velocities of the other planets on July 1, 1998. The class planet implements each planet as one of n bodies, determining its acceleration to be the sum of the accelerations due to all other bodies (this is defined by the sum constraint). Units are Earth-mass, Astronomical units and Earth days.*/ Results of simulation: Simulated time - 3321 units (~9 years). CPU time = 55 s. Accuracy: Mercury < 4°, Venus < 1°, other < 0.0001° away from actual positions after 9 years. Gupta/Carlson
18
Programming in jcc public class Furnace implements Plant { const Real heatR, coolR, initTemp; public readOnly Fluent temp; public inputOnly Fluent switchOn; public Furnace(Real heatR, Real coolR, Real initT ) { this.heatR = heatR; this.coolR = coolR; this.initTemp = initT; } public void run() { temp = initT; time always { temp=heatR} on switchOn; time always {temp=-coolR} on ~switchOn; }} public class Controller { Plant plant; public void setPlant(Plant p) { this.plant=p;} … } public class ControlledFurnace { Controller c; Furnace f; public ControlledFurnace(Furnace f, Controller c) { this.c = c; this.f = f;} public void run() { c.run(); c.setPlant(f); f.run(); }
19
Systems Biology Develop system-level understanding of biological systems Genomic DNA, Messenger RNA, proteins, information pathways, signaling networks Intra-cellular systems, Inter- cell regulation… Cells, Organs, Organisms ~12 orders of magnitude in space and time! Key question: Function from Structure How do various components of a biological system interact in order to produce complex biological functions? How do you design systems with specific properties (e.g. organs from cells)? Share Formal Theories, Code, Models … Promises profound advances in Biology and Computer Science Goal: To help the biologist model, simulate, analyze, design and diagnose biological systems.
20
Systems Biology Work subsumes past work on mathematical modeling in biology: Hodgkin-Huxley model for neural firing Michaelis-Menten equation for Enzyme Kinetics Gillespie algorithm for Monte-Carlo simulation of stochastic systems. Bifurcation analysis for Xenopus cell cycle Flux balance analysis, metabolic control analysis… Why Now? Exploiting genomic data Scale Across the internet, across space and time. Integration of computational tools Integration of new analysis techniques Collaboration using markup- based interlingua (SBML) Moores Law! This is not the first time…
21
Chemical Reactions Cells host thousands of chemical reactions (e.g. citric acid cycle, glycolis…) Chemical Reaction X+Y 0 –k 0 XY 0 XY 0 –k -0 X+Y 0 Law of Mass Action Rate of reaction is proportional to product of conc of components [X]= -k 0 [X][Y] + k -0 [XY 0 ] [Y]=[X] [XY]=k 0 [X][Y]-K -0 [XY 0 ] Conservation of Mass When multiple reactions, sum mass flows across all sources and sinks to get rate of change. Same analysis useful for enzyme-catalyzed reactions Michaelis-Menten kinetics May be simulated Using deterministic means. Using stochastic means (Gillespie algorithm). At high concentration, species concentration can be modeled as a continuous variable.
22
State dependent rate equations Expression of gene x inhibits expression of gene y; above a certain threshold, gene y inhibits expression of gene x: if (y < 0.8) {x= -0.02*x + 0.01}, If (y >= 0.8) {x=-0.02*x, y=0.01*x} Bockmayr and Courtois: Modeling biological systems in hybrid concurrent constraint programming
23
Cell division: Delta-Notch signaling in X. Laevis Consider cell differentiation in a population of epidermic cells. Cells arranged in a hexagonal lattice. Each cell interacts concurrently with its neighbors. The concentration of Delta and Notch proteins in each cell varies continuously. Cell can be in one of four states: Delta and Notch inhibited or expressed. Experimental Observations: Delta (Notch) concentrations show typical spike at a threshold level. At equilibrium, cells are in only two states (D or N expressed; other inhibited). Ghosh, Tomlin: Lateral inhibition through Delta-Notch signaling: A piece- wise affine hybrid model, HSCC 2001
24
Delta-Notch Signaling Model: VD, VN: concentration of Delta and Notch protein in the cell. UD, UN: Delta (Notch) production capacity of cell. UN=sum_i VD_i (neighbors) UD = -VN Parameters: Threshold values: HD,HN Degradation rates: MD, MN Production rates: RD, RN Model: Cell in 1 of 4 states: {D,N} x {Expressed (above), Inhibited (below)} Stochastic variables used to set random initial state. Model can be expressed directly in hcc. if (UN(i,j) < HN) {VN= -MN*VN}, if (UN(I,j)>=HN){VN=RN-MN*VN}, if (UD(I,j)
25
Controlling Cell division: The p53-Mdm2 feedback loop 1/ [p53]=[p53] 0 –[p53]*[Mdm2]*deg -d p53 *[p53] 2/ [Mdm2]=p1+p2 max *(I^n)/(K^n+I^n)-d Mdm2* [Mdm2] I is some intermediary unknown mechanism; induction of [Mdm2] must be steep, n is usually > 10. May be better to use a discontinuous change? 3/ [I]=a*[p53]-k delay *I This introduces a time delay between the activation of p53 and the induction of Mdm2. There appears to be some hidden gearing up mechanism at work. 4/ a=c 1 *sig/(1+c 2 *[Mdm2]*[p53]) 5/ sig=-r*sig(t) Models initial stimulus (signal) which decays rapidly, at a rate determined by repair. 6/ deg=deg basal -[k deg *sig-thresh] 7/ thresh=-k damp *thresh*sig(t=0) Lev Bar-Or, Maya et al Generation of oscillations by the p53-Mdm2 feedback loop..,2000
26
The p53-Mdm2 feedback loop Biologists are interested in: Dependence of amplitude and width of first wave on different parameters Dependence of waveform on delay parameter. Constraint expressions on parameters that still lead to desired oscillatory waveform would be most useful! There is a more elaborate model of the kinetics of the G2 DNA damage checkpoint system. 23 species, rate equations Multiple interacting cycles/pathways/regulatory networks: Signal transduction MPF Cdc25 Wee1 Aguda A quantitative analysis of the kinetics of the G2 DNA damage checkpoint system, 1999
27
HCC2: Integration of Space Need to add continuous space. Need to add discrete space. Use same idea as when extending CCP to HCC Extend uniformly across space with an elsewhere (= spatial hence) operator. Think as if a default CC program is running simultaneously at each spatial point. Implementation: Move to PDEs from ODEs. Much more complicated to solve. Need to generate meshes. Use Petsc (parallel ANL library). Uses MPI for parallel execution.
28
Generating code for parallel machines There is a large gap between a simple declarative representation and an efficient parallel implementation Cf Molecular dynamics Central challenge: How can additional pieces of information (e.g. about target architecture, about mapping data to processors) be added compositionally so as to obtain efficient parallel algorithm? Need to support round-tripping. Identify patterns, integrate libraries of high-performance code (e.g. petsc).
29
The X10 language X10=Java–Threads+Locales A locale is a region containing data and activities An activity may atomically execute statements that refer to data on current locale. Arrays are striped across locales. An activity may asynchronously spawn activities in other locales. Locales may be named through data. Barriers are used to detect termination. Supports SPMD computations. Load input into array A striped into K locales in parallel; Barrier b = new Barrier(); forall( i : A) { async A[i] { int j = f(A[i]); async atomic A[j]{ A[j]++; } before b; } await b; A language for massively parallel non-uniform SMMPs The GUPS benchmark
30
Integration of symbolic reasoning techniques Use state of the art constraint solvers ICS from SRI Shostak combination of theories (SAT, Herbrand, RCF, linear arithmetic over integers). Finite state analysis of hybrid systems Generate code for HAL Predicate abstraction techniques. Develop bounded model checking. Parameter search techniques. Use/Generate constraints on parameters to rule out portions of the space. Integrate QR work Qualitative simulation of hybrid systems
31
Conclusion We believe biological system modeling and analysis will be a very productive area for constraint programming and programming languages Handle continuous/discrete space+time Handle stochastic descriptions Handle models varying over many orders of magnitude Handle symbolic analysis Handle parallel implementations
32
HCC references Gupta, Jagadeesan, Saraswat Computing with Continuous Change, Science of Computer Programming, Jan 1998, 30 (12), pp 3--49 Saraswat, Jagadeesan, Gupta Timed Default Concurrent Constraint Programming, Journal of Symbolic Computation, Nov-Dec1996, 22 (56), pp 475-520. Gupta, Jagadeesan, Saraswat Programming in Hybrid Constraint Languages, Nov 1995, Hybrid Systems II, LNCS 999. Alenius, Gupta Modeling an AERCam: A case study in modeling with concurrent constraint languages, CP98 Workshop on Modeling and Constraints, Oct 1998.
33
Alternative splicing regulation Alternative splicing occurs in post transcriptional regulation of RNA Through selective elimination of introns, the same premessenger RNA can be used to generate many kinds of mature RNA The SR protein appears to control this process through activation and inhibition. Because of complexity, experimentation can focus on only one site at a time. Bockmayr et al use Hybrid CCP to model SR regulation at a single site. Michaelis-Menten model using 7 kinetic reactions This is used to create an n- site model by abstracting the action at one site via a splice efficiency function. Results described in [Alt], uses default reasoning properties of HCC.
Similar presentations
© 2017 SlidePlayer.com Inc. | http://slideplayer.com/slide/701016/ | CC-MAIN-2017-13 | refinedweb | 3,553 | 54.32 |
The day is 09.11.2015 and for all intents and purposes, it seems to be a fairly routine and run of the mill day. But on this inconspicuous day, was the release of SAP Netweaver 7.40 SPS 13. This SPS stack release comprises many new features to the BW platform and is considered a feature release. There has been posts detailing some of the new features and product road map updates, but I find it interesting that not that many people have picked up on or commented on the new feature, Embedded Consolidation.
Previously, it was fairly straight forward. If you wanted to do planning or use the embedded model. You were always limited to only planning, the consolidation model was only available on the BPC standard / Classic which guided your implementation approach, but the reason for my interest and blog post is to draw some attention to this new feature and capability with SAP BW SPS 13, SAP BPC Embedded Consolidation is now possible..
Reference Links –
Having been involved with an implementation of SAP BPC Embedded Model at a large Financial Institution. I have come to appreciate the performance, flexibility and power of the BPC Embedded model. (Please refer to the reference section below to get links to the various posts outlining and detailing the difference between the two model types) . Having the ability of having both a planning and consolidation application that leverage off of the power and performance of PAK, I can say that I am pretty excited about the fact that organizations can now start to harmonize there planning and consolidation environments and leverage off the power of PAK.
I do however believe that customers will have to carefully consider and understand the drivers for deciding which models to use in their implementation. It will require careful consideration especially in light of the technical requirements and skills required for the implementation of the SAP BPC Embedded models.
I do believe that having both embedded models in the same namespace as SAP BW along with the flexibility that the it provides you, is pretty powerful motivation to consider the embedded consolidation model in itself. There is a multitude of scenarios in which having the planning and consolidations cubes in the same multi / composite provider will facilitate reporting and reduce much of the problems of moving data between models. I can remember scenario’s in which having to write tedious script logic to move data between the different models and conversely how incredibly easy it was to see and move data across different models in the embedded model using FOX and as a result of using Multi / Composite providers in the data model design.
In addition, the ability to call SQL in FOX or SQL Exit CR are pretty compelling reasons in itself for choosing the embedded models in solving a lot of the limitations of the classic BPC models. I look forward to seeing more features and functionality being added to the embedded model, along with hearing some of the customer success stories of the new implementations of using the embedded consolidation model.
SAP Support Product Availability Matrix
Reference Links
SAP Notes –
2240919 – Release Note for Netweaver 7.40 SPS13
Thank you for the info Daniel Jacinto
Hi Daniel
Thank You very much for your information and I am very happy for the importance of Consolidation in Embedded version.
< Planning and Consolidation Functions/Features are heart of the BPC irrespective of Versions and Now Embedded is “All in One Solution”>
I am glad that SAP has been recognized Consolidation and brought into Embedded version
Hello Daniel,
Nice information shared. Thanks.
Hi Daniel,
Thank you for the information.
Regards
Great Info Dan.
Cheers
Nikhil
Thank you so much for valuable information. Sorry if it is basic question , how about Equity pickup and Ownership calculation. Are these features part of other SP levels or i am missing something.
Geeta,
Currently in current version of Embedded Consolidation Equity pickup is not supported but I am sure it would come up later sometime.
Regards
Nikhil
Hi Daniel,
It seems I have to upgrade to Embedded version of BPC.
Very good post.
Thanks
Narsi
Nice one Dan.
Hi Daniel,
Just a note, I see this is only due for release 2016 Q2
Cheers,
Andries
Hi Daniel,
thank you for the valuable info. Is someone in the community aware of an official document or website where the consolidation funcionalities of both, embedded and stadard are compared or litsted? Would really appreciate that.
Kind regads,
Boris
Hi Daniel:
Thanks for the Blog and was very helpful.
Have question related to Embedded vs Real-time Consolidation.
1. Can we follow the Note_2243472 to create Embedded Cons model and later will be able to migrate to S/4 Real-time consolidation?.And where can I find the list Infoobject & other object used for S/4 Real-time consolidation.
2.What is the use of Source Infoprovider in the optional section of creating Consolidation model in Embedded?
Regards
Venkatesh | https://blogs.sap.com/2016/02/23/sap-bpc-embedded-consolidation/ | CC-MAIN-2018-43 | refinedweb | 834 | 50.67 |
Linux Software › Search › when building
Tag «when building»: downloads
Search results for «when building»:
Building Block 1-0-0 by Florian Berger
Building Block ist the Open Source Content Management Software for your website. It is lightweight and has minimal hardware requirements.
At the same time it is powerful and delivers high performance. Building Block's diversity allows you to create and manage professional web projects directly in…
My Own Building System 2.3.2 by Raul Nunez de Arenas Coronado
My Own Building System (a.k.a. mobs) is a GPL'd build system, lightweight and easy to use, with a limited application framework. My Own Building System project gets information from the end-user wanting to build your project and modifies the building process according to such information.
It prov…();
…
Plasma EBG 0.26.22 by Konstanty Bialkowski,…
ModPerl::BuildMM 2.0.2 by ModPerl::BuildMM Team
ModPerl::BuildMM is a "subclass" of ModPerl::MM used for building mod_perl 2.0.
SYNOPSIS
use ModPerl::BuildMM;
# ModPerl::BuildMM takes care of doing all the dirty job of overriding
ModPerl::BuildMM::WriteMakefile(...);
# if there is a need to extend the methods
sub MY::p…;
…
HTML::Embperl 1.3.6 by G. Richter
HTML::Embperl is a Perl module for building dynamic Websites with Perl.
SYNOPSIS
Embperl is a Perl extension module which gives you the power to embed Perl code directly in your HTML documents (like server-side includes for shell commands).
If building more than a single page, you may also…)…
netBPM 0.8.3.1 by Jan-Philipp Bolle
NetBpm is a platform for building, executing, and managing workflows. netBPM is very simple to use and integrate in other .NET applications.
It supports building applications which are able to turn business models into executable software models.
Business analysts are able to use a model drive…
Fle3 1.5.0 by Janne Pietarila
Fle3 project is a Web-based learning environment.
More specifically, it is a server program for computer-supported collaborative learning (CSCL).
The Fle3 Knowledge Building tool allows groups to carry out dialogues, theory building, and debates by storing their thoughts into a shared databas……
AxKit::XSP::WebUtils 1.6 by Matt Sergeant
AxKit::XSP::WebUtils is a Perl module for utilities for building XSP web apps.
SYNOPSIS
Add the taglib to AxKit (via httpd.conf or .htaccess):
AxAddXSPTaglib AxKit::XSP::WebUtils
Add the web: namespace to your XSP tag:
< xsp:page
language="Perl"
xmlns:xsp="http:…
Cross-LFS 1.0.0 by Ryan Oliver and Jim Gifford syste…
Text::NSP::Measures::2D 1.01 by Text::NSP::Measures::2D Team
Text::NSP::Measures::2D is a Perl module that provides basic framework for building measure of association for bigrams.
SYNOPSIS
Basic Usage
use Text::NSP::Measures::2D::MI::ll;
my $npp = 60; my $n1p = 20; my $np1 = 20; my $n11 = 10;
$ll_value = calculateStatistic( n11=>$n11,
…
ATK 1.12.3 by ATK Team
ATK is an accessibility library for GNOME.
Requirements:
GLib-2.0.0 or better
Building:
To configure ATK, run the ./configure script, then 'make'; and 'make install'. If you are installing into a location where you don't have write permission, you'll have to become root before running
'…
XML::XPath::Builder 1.13 by Ken MacLeod
XML::XPath::Builder is a SAX handler for building an XPath tree.
SYNOPSIS
use AnySAXParser;
use XML::XPath::Builder;
$builder = XML::XPath::Builder->new();
$parser = AnySAXParser->new( Handler => $builder );
$root_node = $parser->parse( Source => [SOURCE] );
XML::XPath::Builder…
asyncj 1.4-02 by Thanos Vassilakis.
What's New i…
GOffice 0.3.3 by Jody Goldberg
GOffice is a library of document-centric objects and utilities building on top of GLib and Gtk+ and used by software such as Gnumeric.
What's New in This Release:
Fix combo sizing problem. [#362704]
Fix Save problem. [#365115]
Detect more date/time formats. Part of [#370183]…
XML::Grove::Builder 0.46 Alpha by Ken MacLeod
XML::Grove::Builder is a PerlSAX handler for building an XML::Grove.
SYNOPSIS
use PerlSAXParser;
use XML::Grove::Builder;
$builder = XML::Grove::Builder->new();
$parser = PerlSAXParser->new( Handler => $builder );
$grove = $parser->parse( Source => [SOURCE] );
XML::Grove: | http://nixbit.com/search/when-building/ | CC-MAIN-2015-22 | refinedweb | 690 | 58.08 |
Re: Strange SSL_shutdown() error return (SSL_ERROR_SYSCALL but errno == 0)
Antoine Pitrou wrote: Well, in our case, and unless I'm mistaken, ret == -1, ERR_get_error() == 0 and then errno (the Unix errno) == 0. SSL_shutdown() by virtue of its unique mechanic you will not see ret == 0 (in the way the SSL_get_error man page describes) since that has a different and special meaning. It means the first point that ((SSL_get_shutdown() SSL_SENT_SHUTDOWN) == SSL_SENT_SHUTDOWN) would be true. Unlike for example SSL_read() which can return 0, which does mean EOF. For which you can then do ((SSL_get_shutdown() SSL_RECEIVED_SHUTDOWN) == SSL_RECEIVED_SHUTDOWN) to find out if it was a secure EOF. === RANT MODE If the OpenSSL SSL_shutdown() API could have been made better this is certainly one area that could be better. i.e. make SSL_shutdown() return the current state like SSL_get_shutdown() does (which means non-zero states). Then reuse the return of 0 state to mean EOF on transport and keep -1/WANT_READ/WANT_WRITE/ERROR_SYSCALL as-is. This would mean (simplified understanding) : * old version returned 0, new version returns 1 (SSL_SENT_SHUTDOWN). * old version returned 1, new version returns 3 (SSL_SENT_SHUTDOWN|SSL_RECEIVED_SHUTDOWN). Unfortunately this would have broken historical compatibility; it took quite a while to get the minimum breakage patch in to achieve my goals by the end of that time thinking about improving OpenSSL (rather than bug fixing it) was long out of my mind. I'm all for breaking APIs to make things better, providing its done in a responsible way. A poorly thought out API call can't hog a popular API symbol forever, otherwise the whole product starts to weaken. === RANT MODE Perhaps errno gets cleared by another operation... I may try to investigate if I get some time. Well now I've looked at the Python Module/_ssl.c to understand the context of your usage, you are using standard stuff for BIO. I know that errno==0 is getting set by OpenSSL before it makes the read() system call (openssl-1.0.0/crypto/bio/bss_fd.c:150 function fd_read() calls clear_sys_error() which does errno=0; from openssl-1.0.0/e_os.h). Then (I presume) it gets a read()==0 from kernel (bss_fd.c:151). Of course a read()==0 does not modify errno in libc. So in openssl-1.0.0/ssl/s3_lib.c:3191 inside the SSL_shutdown() implementation you can see the error return is ignored. Since returning 0 from here has a different documented meaning. I think this is the sequence of events you observe. Unfortunately I can't confirm it to be so since I can't get the test cases to run from Python's SVN. Darryl __ OpenSSL Project User Support Mailing Listopenssl-users@openssl.org Automated List Manager majord...@openssl.org
Re: Openssl tarball SHA1 checksum
* Kenneth Goldman wrote on Fri, Apr 09, 2010 at 08:12 -0400: I notice that the tarballs also include a SHA1 digest. What's the point? To have a check whether the FTP download was successful to avoid accidently using corrupt files, a file integrity check with a checksum is quite common. oki, Steffen About Ingenico: Ingenico is a leading provider of payment solutions, with over 15 million terminals deployed in more than 125 countries. Its 2,850 employees worldwide support retailers, banks and service providers to optimize and secure their electronic payments solutions, develop their offer of services and increase their point of sales revenue. More information __ OpenSSL Project User Support Mailing Listopenssl-users@openssl.org Automated List Manager majord...@openssl.org
RE: Problems with DSA 2048-bit keys
From: owner-openssl-us...@openssl.org On Behalf Of Sad Clouds Sent: Saturday, 10 April, 2010 10:56 I'm testing a very simple SSL web server. Everything seems to work OK with RSA and DSA 1024-bit keys. I tried using DSA 2048-bit key and snip Then when I use Firefox to connect to the server I get: Thread starting keylength = 1024 SSL_accept() error error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert decrypt error Any ideas why I'm getting decrypt error with OpenSSL? Is this related to the fact that the tmp_dh_callback() is passed 1024-bit key length, even though the certificate was set up with a 2048-bit key? Why does this happen? This is an alert received by openssl in your server, *from* Firefox. Either openssl is encrypting something improperly so Firefox can't decrypt it, which seems unlikely since you say later s_client works; or FF is decrypting something wrong or perhaps just disliking it, in which case you probably need help from FF support/development. There's no protocol reason the ephDH group has to be the same size as the DSA key/group that authenticates it, although for security good sense you probably want it to. The actual call to the callback is s3_srvr.c uses some macros to enforce 'export' restrictions on strength, which I don't understand in detail but it appears to me can limit your pubkey size to 1024 in at least some cases. Maybe someone else is more familiar with this area. Aside: do you really need this? FIPS 186-3 extended DSA to 2k and 3k, but SP 800-57 no longer approves classic DSA for USgovt use at all, even in the new sizes, it switches to ECDSA instead. __ OpenSSL Project User Support Mailing Listopenssl-users@openssl.org Automated List Manager majord...@openssl.org | https://www.mail-archive.com/search?l=openssl-users%40openssl.org&q=date%3A20100411&f=1 | CC-MAIN-2022-40 | refinedweb | 902 | 63.7 |
table of contents
NAME¶
wl_event_loop - An event loop context.
SYNOPSIS¶
#include <wayland-server-core.h>
Public Member Functions¶
struct wl_event_loop * wl_event_loop_create (void)
void wl_event_loop_destroy (struct wl_event_loop *loop)
void wl_event_loop_dispatch_idle (struct wl_event_loop *loop)
int wl_event_loop_dispatch (struct wl_event_loop *loop, int timeout)
int wl_event_loop_get_fd (struct wl_event_loop *loop)
void wl_event_loop_add_destroy_listener (struct wl_event_loop *loop, struct wl_listener *listener)
struct wl_listener * wl_event_loop_get_destroy_listener (struct wl_event_loop *loop, wl_notify_func_t notify)
Detailed Description¶
An event loop context.
Usually you create an event loop context, add sources to it, and call wl_event_loop_dispatch() in a loop to process events.
See also
Member Function Documentation¶
void wl_event_loop_add_destroy_listener (struct wl_event_loop * loop, struct wl_listener * listener)¶
Register a destroy listener for an event loop context
Parameters
listener The listener with the callback to be called.
See also
struct wl_event_loop * wl_event_loop_create (void)¶
Create a new event loop context
Returns
This creates a new event loop context. Initially this context is empty. Event sources need to be explicitly added to it.
Normally the event loop is run by calling wl_event_loop_dispatch() in a loop until the program terminates. Alternatively, an event loop can be embedded in another event loop by its file descriptor, see wl_event_loop_get_fd().
void wl_event_loop_destroy (struct wl_event_loop * loop)¶
Destroy an event loop context
Parameters
This emits the event loop destroy signal, closes the event loop file descriptor, and frees loop.
If the event loop has existing sources, those cannot be safely removed afterwards. Therefore one must call wl_event_source_remove() on all event sources before destroying the event loop context.
int wl_event_loop_dispatch (struct wl_event_loop * loop, int timeout)¶
Wait for events and dispatch them
Parameters
timeout The polling timeout in milliseconds.
Returns
All the associated event sources are polled. This function blocks until any event source delivers an event (idle sources excluded), or the timeout expires. A timeout of -1 disables the timeout, causing the function to block indefinitely. A timeout of zero causes the poll to always return immediately.
All idle sources are dispatched before blocking. An idle source is destroyed when it is dispatched. After blocking, all other ready sources are dispatched. Then, idle sources are dispatched again, in case the dispatched events created idle sources. Finally, all sources marked with wl_event_source_check() are dispatched in a loop until their dispatch functions all return zero.
void wl_event_loop_dispatch_idle (struct wl_event_loop * loop)¶
Dispatch the idle sources
Parameters
See also
struct wl_listener * wl_event_loop_get_destroy_listener (struct wl_event_loop * loop, wl_notify_func_t notify)¶
Get the listener struct for the specified callback
Parameters
notify The destroy callback to find.
Returns
int wl_event_loop_get_fd (struct wl_event_loop * loop)¶
Get the event loop file descriptor
Parameters
Returns
This function returns the aggregate file descriptor, that represents all the event sources (idle sources excluded) associated with the given event loop context. When any event source makes an event available, it will be reflected in the aggregate file descriptor.
When the aggregate file descriptor delivers an event, one can call wl_event_loop_dispatch() on the event loop context to dispatch all the available events.
Author¶
Generated automatically by Doxygen for Wayland from the source code. | https://manpages.debian.org/bullseye/libwayland-doc/wl_event_loop.3.en.html | CC-MAIN-2022-21 | refinedweb | 488 | 53.92 |
You don't have to set the login window to "name and password". You just have to press option+return if you use the user pictures list. However, on my system (10.3.9), I have to type the first letter of an existing user to highlight (don't press enter or click on it) it before option+return works to show the username and password login.
Ditto (10.3.9)
You can also hit Cntl-Eject (or the power button on older keyboards/powerbooks) to get the Sleep - Shutdown - Restart dialog box.
From that box R restarts, S sleeps, Enter does shutdown, and esc cancels.
Fewer keys to type if you want quick and easy.
To me, the security risk is there.
If, for example, someone has set up the computer to auto-log a specific User and then selects the Login Window... (set to not show the restart button) in the fast user switching menu prior to going to lunch, then someone could restart the computer with >restart while the User is away and the computer would boot right into the User account, gaining access to the computer to the extent the User has access.
PatentBoy
The security risk there is in the fact that you set auto user login. Anyone who wants a secure system should not enable this.
---
-Peter
In order for someone to somehow gain access to the login window (e.g., via FUS) and the type ">restart", they need physical access to the machine. If they have physical access to the machine, they already can reboot it.
Even though I'm sure people will try to claim so, this does NOT represent a "security risk" above and beyond any access you already have by virtue of having physical access.
There is no risk if the computer always boots into the login window.
But, in my opinion, if the computer boots into a User account and that User (who is logged in) selects the login window, while going to lunch for example, another could reboot his machine and the machine would then boot into the User account automatically giving the other user access to the computer.
Am I missing something? perhaps I do not understand the complete situation.
PatentBoy
I finally figured it out...
If someone is logged onto the machine, it will not be able to be restarted unless an admin password is provided to safeguard the current users un-saved work, etc.
sorry for the confusion...
No problem. Hold the power button down for a few seconds. The machine turns off.
Turn machine on and have access to any personal files for an auto login user...
If this is the case then what is the point of having the ability to disable the buttons to perform the corresponding actions? So that only semi-educated users can perform them? Am I missing something here?
thombo
It's to prevent clutter at the login window and to prevent casual users from randomly clicking on restart and shut down; nothing more.
I have a slightly unrelated question. I can't get Tiger to work with >console anymore. Is anyone else having this problem? I'm wondering since they added these options if maybe there are other options out there that use the redirection operator and maybe another option that replaced the >console.
---
Jayson --When Microsoft asks you, "Where do you want to go today?" tell them "Apple."
>console
If you switched to the login window by fast user switching console will not work.
There is *always* some risk associated with allowing a machine to shut down or reboot.
Previous security guidelines advised against enabling these buttons, as a small measure of defense against two general scenarios: in one, the attacker has modified binary code on the system by remote means (buffer overflow, filesystem tricks, etc.), and needs to reboot to apply the changes; in the other, the attacker wants access to protected data on the machine and intends to reboot from a custom hard disk or CD (think: Knoppix), or put the machine in TDM mode with their laptop.
Of course, this is a very small measure of defense. An Open Firmware password is much stronger against this kind of attack. But it's easier now, say, to pull an alley-oop, in which the attacker might install malicious code remotely using a nonauthenticated exploit, and then convince an unauthorized employee over the phone to reboot the server using one of these methods.
Regardless, this is pretty minor with regard to security. I only point these things out because I feel it's somewhat irresponsible *ever* to say "Wrong; there is no risk."
Make no mistake. There is always risk.
Physical access to a computer equals a risk no matter what!
Open Firmware pswd's are easaly disabled via a couple of reboots and removal of RAM
With a tower case, you can padlock the box so people can't get in and remove RAM, etc. But of course few people do. Securing a laptop is harder.
I've got several labs of computers running 10.3.7. This trick works on them.
To get this to work from the Login Window when it displays the List of users w/icons, instead of the Name and password inputs do the following:
1) Select any user using the up/down arrows (Do not click a user -- that will bring up the password input field)
2) Hit control-shift-return (works under 10.4. I recall a different key combination under Panther... maybe option-shift-return?)
The Login window will change to display name and password fields. Proceed as described in the hint.
Please read all of the comments, including the first one, before commenting.
Oops... sorry.
A scenario where you have a kiosk to use for different users could need the login screen, without physical access to the CPU: just Keyboard, Mouse and Screen.
The login screen can be set to ignore >console logins; its a hidden loginwindow.plist setting somewhere. I think bombich.com has some documentation on this. Perhaps the >restart etc logins can also be disabled in this way.
However you will need a third-party security tool to disable the shutdown dialog you get at pressing control-eject (or one of the key combo's that enter sleep/restart/shutdown mode direct).
But I think >sleep etc are unnecessary because there are GUI buttons for this already and if you can use >console you can then use the terminal commands to achieve the same.
As soon as I posted this "hint", I thought I should have posted it as a bug or security risk.
Let us assume this, a server in a locked rack, but I keep the keyboard outside the cage for easy access. Of course the server is at the login window during normal use. and of course the shutdown and restart buttons are disabled. now I can login if needed, and once logged in can restart if needed, but restart and shutdown are not available without password. The control-eject key combo does not work at the login screen and you don't have access to the computer to shut off the power.
Although you could cut the power form outside the building and wait for the UPS to run out of juice and then restor the power.
Security hole or not it's a flaw. Otherwise there would be no point in removing the reboot/shutdown options in the first place. And for the record: being able to reboot as one wants can in fact be regarded as a security concern.
Don't have an account yet? Sign up as a New User
Visit other IDG sites: | http://hints.macworld.com/article.php?story=20050603213256255 | CC-MAIN-2014-15 | refinedweb | 1,288 | 72.26 |
The stdlib C Library function bsearch searches the given key in an sorted array pointed by base and returns a void pointer in the table that matches the search key.
To perform the binary search, the function uses a comparison function(compare) to compare any element of array with key. The elements of the array must be in ascending sorted order for binary search to work properly.
Here is the return value of comparison function int compare(const void *x, const void *y).
- If *x < *y, compare function should return an integer < 0.
- If *x == *y, compare function should return 0.
- If *x > *y, compare function should return an integer > 0.
Function prototype of bsearch
- key : This is the pointer to the element to be searched which serves as key of binary search, type-casted as a void*.
- base : This is a pointer to the first element of the sorted array where the search to be performed, type-casted to a void*.
- num : This is the number of elements in the sorted array pointed by base.
- size : This is size in bytes of each element in the sorted array.
- compare : This is a pointer to a function that compares two elements.
Return value of bsearch
This function returns a pointer to an element in the array that matches the search key otherwise a NULL pointer is returned If key is not found in array. In case of multiple occurrence of key in array this may point to any one of them(not necessarily the first one).
C program using bsearch function
The following program shows the use of bsearch function to search an integer in a sorted integer array.
#include <stdio.h> #include <stdlib.h> int compare(const void *x, const void *y){ return (*(int*)x - *(int*)y); } int main(){ int array[50], counter, n; int toSearch, *ptr; printf("Enter number of elements\n"); scanf("%d", &n); printf("Enter %d numbers in increasing order\n", n); for(counter = 0; counter < n; counter++){ scanf("%d", &array[counter]); } printf("Enter element to search\n"); scanf("%d", &toSearch); /* Binary Search on sorted array*/ ptr = (int*)bsearch(&toSearch, array, n, sizeof (int), compare); if(ptr != NULL) { printf("%d found at index %d\n", toSearch, ptr-array); } else { printf("%d not be found\n", toSearch); } return 0; }
Output
Enter number of elements 5 Enter %d numbers in increasing order 1 3 4 9 10 Enter element to search 9 9 found at index 3 | https://www.techcrashcourse.com/2015/08/bsearch-stdlib-c-library-function.html | CC-MAIN-2020-16 | refinedweb | 409 | 58.52 |
US Government Letterhead?
Discussion in 'Word General' started by Mike, Jan 27, 2006.
Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum.
- Similar Threads
import a word doc. letterhead for e mail letterheadrupet 204, Jun 8, 2006, in forum: Outlook Contacts
- Replies:
- 3
- Views:
- 141
- ernie
- Jun 10, 2006
Can I scan paper letterhead into Word as electronic letterhead?ECTRBob, Dec 18, 2005, in forum: Word Documents
- Replies:
- 2
- Views:
- 218
- Charles Kenyon
- Dec 19, 2005
How do I delete letterhead from letterhead/envelope file?Lonnie, Mar 27, 2009, in forum: Word Documents
- Replies:
- 1
- Views:
- 137
- Suzanne S. Barnhill
- Mar 27, 2009
How do I use my company letterhead as letterhead template in Word?FolsomResident, Aug 20, 2005, in forum: Word Page Layout
- Replies:
- 1
- Views:
- 129
- Doug Robbins
- Aug 20, 2005
Printing onto letterhead, with letterhead watermark for measuremenJulian, Sep 11, 2007, in forum: Word Page Layout
- Replies:
- 1
- Views:
- 142
- Graham Mayor
- Sep 11, 2007 | http://www.office-forums.com/threads/us-government-letterhead.379403/ | CC-MAIN-2015-32 | refinedweb | 193 | 62.68 |
Bin packing involves packing a set of items of different sizes in containers of various sizes. The size of the container shouldn’t be bigger than the size of the objects. The goal is to pack as many items as possible in the least number of containers possible. Generally speaking, this is referred to as an optimization problem.
This article will focus on the bin packing problem. This concept is important to understand due to its applicability in resource allocation in ML orchestration. In computing this is applicable in resource allocations e.g GPU as well as scheduling of processes. Let’s start by discussing that.
GPU utilization with bin packing
Serving Deep Neural Networks (DNNs) efficiently from a cluster of GPUs is a problem that can be addressed via bin packing. This is important in order to ensure high utilization as well as low-cost GPU usage. Achieving this requires cluster-scale resource management; for instance, scheduling of GPUs.The ability to distribute a large workload onto a cluster at a high accelerator utilization and acceptable latency is therefore critical. The application of packing-based scheduling algorithms can be used to address this issue. The schedule can specify three items:
- the needed GPUs
- the distribution of neural nets across them
- their execution order that guarantees high throughput and low latency
Loading models into memory consumes a lot of time. When served at high velocity, the models can first be pre-loaded into a GPU memory. This can then be re-used in future executions. Placing these models requires efficient packing. It is important to note that in most deep learning systems data is passed in batches. This improves GPU utilization but poses problems in the allocation of resources in a cluster. The problems arise from the fact that the process cost of input depends on the size of the batch where the input is being processed. The algorithm that is packing the models in the GPUs has to take this into consideration. Generally, this is known as batching-aware resource allocation.
Another issue that needs to be taken into consideration is where there are few model requests. These requests don’t necessarily need a GPU. In order to optimize resource utilization, these requests can be packed into a single GPU. This is an optimization problem that can be denoted as an integer programming problem. Some of the constraints on this problem include:
- the required latency
- only GPUs that are in use can be assigned
- only one GPU can be assigned to each session
One of the libraries that can be used for this problem is the CPLEX package. Since solving this integer problem is computationally expensive, greedy scheduling algorithms can be considered.
Bin packing can also be applied in training convolutional neural networks with GPUs in order to ensure efficient training. This is achieved by allocating CNN layers to computational units. The objective, in this case, is to minimize the overall training time. The First-Forward Decreasing (FFD) algorithm can be used to map layers to the bins. Since it is a greedy algorithm it will place items in bins in decreasing order of their size.
Before we go any further, it is worthy to briefly mention other optimization problems.
Optimization problems
In optimization, the aim is to obtain the best possible solution out of a huge set of possible options. For example, in the computing world, you may be interested in assigning resources to a certain project. In this case, your goal is to use the optimal number of resources sufficient for the project as well as the most cost-effective. Essentially, you don’t want to purchase resources that will not be used. You are therefore trying to ensure that there is no wastage of resources.
Types of optimization problems
The first step in solving any optimization problem is to identify its type. The next step is to find the best algorithm that will obtain the optimal solution. Let’s now mention a couple of optimization problems.
Routing
This problem involves finding the most optimal route in the delivery of packages to customers. You, therefore, have to assign packages and routes to trucks in a manner that minimizes the total cost of delivery.
Assignment
In this case, workers have to be assigned to tasks that involve a fixed cost. The goal here is to make the assignment that leads to the least cost.
Packing
In packing problems, containers with fixed sizes are provided. Given a set of items, the goal is to find the best way to pack the objects. In a packing problem, the goal is to maximize or minimize something. For instance, one could be interested in maximizing the total value of the packed items or minimizing the cost of shipping the items. There are two main variants of packing; knapsack problems and bin packing. The simple knapsack problem involves just one container. In this case, the goal is to pack the items that result in the total maximum value. A variant of the knapsack problem known as the multiple knapsack problem is concerned with maximizing the total value of the packed items in all knapsacks.
The bin-packing problem
n this case, multiple containers of the same capacity are provided. They are usually referred to as bins. The aim is to compute the least number of bins that can hold all the items. The number of bins is not fixed. This is different from the multiple knapsack problem where the number of containers is fixed. For instance, let’s illustrate this using that shipping problem. In a bin packing problem, you have more than enough trucks to ferry all the items but you want to make sure that you use the least number of trucks to hold all the items. However, in the multiple knapsack problem, you have a fixed number of trucks and your goal is to load a subset of the packages that will result in the maximum weight.
As you have seen so far, optimization involves two main things; the objective and the constraints. The objective is the quantity that you are optimizing, for instance, the number of trucks in the shipping example. The constraints are the set restrictions, for example, the total number of trucks available. In order to solve any optimization problem you first need to identify the objective and the constraints. The optimal solution is the one that meets the objective whereas the feasible solution is the one that satisfies the given constraints even if it’s not optimal. The goal is always to aim for the optimal solution.
Bin packing algorithms
The bin packing problem can be solved by algorithms. These algorithms can either be online or offline heuristic.
Online heuristics
In online heuristics items arrive in order. As they come a decision of whether to pack an item must be made. The algorithm does not have any information about the next item or if there will be a next item.
Next-Fit
In this algorithm, there is only one bin that is partially filled at any time. The algorithm works by considering items in an ordered list. An item is placed in the same bin as the previous item if it fits there. If it doesn’t fit in the bin, that bin is closed. A new bin is opened and the item is placed inside the new bin.
For instance, if you’ve just placed an item in bin 5, then you will never put anything in bin 1-4. If there is a next item it will go to bin 5 if it fits there. If it doesn’t fit into the 5th bin it will go into the 6th
Next-k-Fit
This algorithm works like the above one but keeps the last k bins open and selects the bin in which the item fits.
First-Fit
In this algorithm, items are processed in an arbitrary order. The algorithm tries to place an item in the first bin that can hold the item. In the event that no bin is found, a new bin is opened and the item is placed in the new bin.
So an item will be placed in bin 1 if it fits there, if it doesn’t, it is placed in bin 2. If the item doesn’t fit in a bin that already contains an item, then a new bin is opened.
Best-Fit
This algorithm is similar to the previous one. The difference is that it doesn’t place the next item in the first bin where it fits. If the item fits, it is placed in the bin with the maximum load.
Worst-Fit
This algorithm is similar to the previous one. The difference is that instead of placing the item in the bin that has the maximum load, it is placed in the bin with the minimum load. A new bin is opened if the item doesn’t fit in that bin. If there are two bins with the same minimum load then the one that was opened earliest is used.
Almost Worst-Fit
This algorithm works by examining the order of the list and attempts to place the next item in the second most empty open bin. If the item doesn’t fit the algorithm, then it tries to place the item in the most empty bin. If it doesn’t fit there, the algorithm will open a new bin.
Offline heuristics
Offline heuristic algorithms have complete information about the items before execution. As a result, they have the ability to modify the order of the list of items.
First Fit Decreasing
This one is similar to First-Fit. The difference is that items are sorted in non-increasing order of their sizes before they are placed.
Optimization solvers
There are also Python packages that can be used to solve the bin packing problem. The process of solving the problem entails:
- defining a solver
- declaring the constraints
- defining the objective function
- showing the result
Let’s take a look at solving the problem in Python.
OR-Tools
OR-Tools is an open-source library for combinatorial optimization. It can be used to solve the scheduling problem, vehicle routing as well as the bin packing problem.
Let’s start by importing the library and declaring the solver. The solver is a mixed-integer programming solver.
from ortools.linear_solver import pywraplp solver = pywraplp.Solver.CreateSolver(“SCIP_MIXED_INTEGER_PROGRAMMING”)
`pywraplp` is a Python wrapper for the C++ linear solver wrapper. In this case, the SCIP backend is used.
Next, define the data that you would like to use:
- `weights` is a list containing the weights of the items
- `bin_capacity` is the capacity of the bins
Since the goal is to minimize the number of bins, no value is assigned to the items.
def define_data(): data = {} data['weights'] = [45, 50, 12, 70, 120, 78, 89, 20, 30, 17, 59] data['items'] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] data['bins'] = data['items'] data['bin_capacity'] = 150 return data data = define_data()
The next step is to declare the program’s variables:
- `x[i, j]` is equal to 1 if item i is packed in bin j, otherwise, it’s 0
- `y[j]`is defined as 1 if bin j is used, otherwise, it’s 0
The sum of `y[j]` is the total number of bins used.
x = {} for i in data['items']: for j in data['bins']: x[(i, j)] = solver.IntVar(0, 1, 'x_%i_%i' % (i, j)) y = {} for j in data['bins']: y[j] = solver.IntVar(0, 1, 'y[%i]' % j)
The next step is to define the program constraints. The first constraint is that each item should be in exactly one bin. This is set by ensuring that the sum of `x[i][j]` overall bins j is equal to 1.
for i in data['items']: solver.Add(sum(x[i, j] for j in data['bins']) == 1)
The next constraint is that the total weight packed in each bin should not exceed the capacity of the bin.
for j in data['bins']: solver.Add( sum(x[(i, j)] * data['weights'][i] for i in data['items']) <= y[j] * data['bin_capacity'])
The next step is to define the objective function. The objective, in this case, is to reduce the number of bins. This is enforced by minimizing the sum of the y[j] which is the number of bins used.
solver.Minimize(solver.Sum([y[j] for j in data['bins']]))
Finally, call the solver and display the solution.
status = solver.Solve() if status == pywraplp.Solver.OPTIMAL: num_bins = 0. for j in data['bins']: if y[j].solution_value() == 1: bin_items = [] bin_weight = 0 for i in data['items']: if x[i, j].solution_value() > 0: bin_items.append(i) bin_weight += data['weights'][i] if bin_weight > 0: num_bins += 1 print('Bin number', j) print(' Items packed:', bin_items) print(' Total weight:', bin_weight) print() print() print('Bins used:', num_bins) print('Time = ', solver.WallTime(), ' milliseconds') else: print('Problem has no optimal solution.')
In the solution, you will see the least number of bins needed to pack all the items. It also shows the items packed in each bin that was used as well as the total bin weight.
Binpacking
Another package that you can look at is binpacking. It uses greedy algorithms to solve bin packing problems in two main ways:
- sorting items in a constant number of bins
- sorting items into a low number of bins of constant size
Let’s start by looking at the first scenario. First, import the package and declare the resources. Contributions are defined in a dictionary with the key being the name of the resource and the values being its contribution value.
import binpacking b = { 'a': 12, 'b': 21, 'c':11, 'd':31, 'e': 22,'f':17 }
The items can be sorted into constant bins by using the `to_constant_bin_number` function. The function accepts the dictionary of the resources and the number of bins expected.
bins = binpacking.to_constant_bin_number(b,5) bins
Sorting the items into a low number of bins of constant size is achieved by using the `to_constant_volume`. The function accepts a list containing weights and the maximum expected volume.
b = list(b.values()) bins = binpacking.to_constant_volume(b,33) bins
Final Thoughts
In this article, you have seen that there are various variants of the bin packing problem. You have also learned about the algorithms and optimization solvers that can be used to solve the bin packing problem. Specifically, you have covered:
- various types of optimization problems
- what the bin packing problem is
- online and offline algorithms for solving the bin packing problem
- solving the problem using OR-Tools and the bin packing packages
- GPU utilization of bin packing
just to mention a few. The examples used here can be found in this Google Colab Notebook. | https://cnvrg.io/bin-packing/ | CC-MAIN-2021-43 | refinedweb | 2,474 | 63.59 |
import "go.uber.org/fx/internal/fxreflect"
Caller returns the formatted calling func name
FuncName returns a funcs formatted name
ReturnTypes takes a func and returns a slice of string'd types.
type Frame struct { // Unique, package path-qualified name for the function of this call // frame. Function string // File and line number of our location in the frame. // // Note that the line number does not refer to where the function was // defined but where in the function the next call was made. File string Line int }
Frame holds information about a single frame in the call stack.
Stack is a stack of call frames.
Formatted with %v, the output is in a single-line, in the form,
foo/bar.Baz() (path/to/foo.go:42); bar/baz.Qux() (bar/baz/qux.go:12); ...
Formatted with %+v, the output is in the form,
foo/bar.Baz() path/to/foo.go:42 bar/baz.Qux() bar/baz/qux.go:12
CallerStack returns the call stack for the calling function, up to depth frames deep, skipping the provided number of frames, not including Callers itself.
If zero, depth defaults to 8.
CallerName returns the name of the first caller in this stack that isn't owned by the Fx library.
Format implements fmt.Formatter to handle "%+v".
Returns a single-line, semi-colon representation of a Stack. For a multi-line representation, use %+v.
Package fxreflect imports 8 packages (graph) and is imported by 5 packages. Updated 2019-11-20. Refresh now. Tools for package owners. | https://godoc.org/go.uber.org/fx/internal/fxreflect | CC-MAIN-2020-34 | refinedweb | 254 | 67.15 |
File Systems
This section provides an overview of file systems on Linux and discusses the virtual file system, the ext2 file system, LVM and RAID, volume groups, device special files, and devfs.
Virtual File System (VFS)
One of the most important features of Linux is its support for many different file systems. This makes it very flexible and well able to coexist with many other operating systems. Virtual file system . The Linux Virtual File System layer allows you to transparently mount many different file systems at the same time.
The Linux virtual file system is implemented so that access to its files is as fast and efficient as possible. It must also make sure that the files and their data are maintained correctly.
ext2fs
The first file system that was implemented on Linux was ext2fs. This file system is the most widely used and the most popular. It is highly robust compared to other file systems and supports all the normal features a typical file system supports, such as the capability to create, modify, and delete file system objects such as files, directories, hard links, soft links, device special files, sockets, and pipes. However, a system crash can leave an ext2 file system in an inconsistent state. The entire file system has to be validated and corrected for inconsistencies before it is remounted. This long delay is sometimes unacceptable in production environments and can be irritating to the impatient user. This problem is solved with the support of journaling. A newer variant of ext2, called the ext3 file system, supports journaling. The basic idea behind journaling is that every file system operation is logged before the operation is executed. Therefore, if the machine dies between operations, only the log needs to be replayed to bring the file system back to consistency.
LVM and RAID
Volume managers provide a logical abstraction of a computer’s physical storage devices and can be implemented for several reasons. On systems with a large number of disks, volume managers can combine several disks into a single logical unit to provide increased total storage space as well as data redundancy. On systems with a single disk, volume managers can divide that space into multiple logical units, each for a different purpose. In general, a volume manager is used to hide the physical storage characteristics from the file systems and higher-level applications.
Redundant Array of Inexpensive Disks (RAID) is a type of volume management that is used to combine multiple physical disks for the purpose of providing increased I/O throughput or improved data redundancy. There are several RAID levels, each providing a different combination of the physical disks and a different set of performance and redundancy characteristics. Linux provides four different RAID levels:
RAID-Linear is a simple concatenation of the disks that comprise the volume. The size of this type of volume is the sum of the sizes of all the underlying disks. This RAID level provides no data redundancy. If one disk in the volume fails, the data stored on that disk is lost.
RAID-0 is simple striping. Striping means that as data is written to the volume, it is interleaved in equal-sized "chunks" across all disks in the volume. In other words, the first chunk of the volume is written to the first disk, the second chunk of the volume is written to the second disk, and so on. After the last disk in the volume is written to, it cycles back to the first disk and continues the pattern. This RAID level provides improved I/O throughput.
RAID-1 is mirroring. In a mirrored volume, all data is replicated on all disks in the volume. This means that a RAID-1 volume created from n disks can survive the failure of n–1 of those disks. In addition, because all disks in the volume contain the same data, reads to the volume can be distributed among the disks, increasing read throughput. On the other hand, a single write to the volume generates a write to each of the disks, causing a decrease in write throughput. Another downside to RAID-1 is the cost. A RAID-1 volume with n disks costs n times as much as a single disk but only provides the storage space of a single disk.
RAID-5 is striping with parity. This is similar to RAID-0, but one chunk in each stripe contains parity information instead of data. Using this parity information, a RAID-5 volume can survive the failure of any single disk in the volume. Like RAID-0, RAID-5 can provide increased read throughput by splitting large I/O requests across multiple disks. However, write throughput can be degraded, because each write request also needs to update the parity information for that stripe.
Volume Groups
The concept of volume-groups (VGs) is used in many different volume managers.
A volume-group is a collection of disks, also called physical-volumes (PVs). The storage space provided by these disks is then used to create logical-volumes (LVs).
The main benefit of volume-groups is the abstraction between the logical- and physical-volumes. The VG takes the storage space from the PVs and divides it into fixed-size chunks called physical-extents (PEs). An LV is then created by assigning one or more PEs to the LV. This assignment can be done in any arbitrary order—there is no dependency on the underlying order of the PVs, or on the order of the PEs on a particular PV. This allows LVs to be easily resized. If an LV needs to be expanded, any unused PE in the group can be assigned to the end of that LV. If an LV needs to be shrunk, the PEs assigned to the end of that LV are simply freed.
The volume-group itself is also easily resizeable. A new physical-volume can be added to the VG, and the storage space on that PV becomes new, unassigned physical-extents. These new PEs can then be used to expand existing LVs or to create new LVs. Also, a PV can be removed from the VG if none of its PEs are assigned to any LVs.
In addition to expanding and shrinking the LVs, data on the LVs can be "moved" around within the volume-group. This is done by reassigning an extent in the LV to a different, unused PE somewhere else in the VG. When this reassignment takes place, the data from the old PE is copied to the new PE, and the old PE is freed.
The PVs in a volume-group do not need to be individual disks. They can also be RAID volumes. This allows a user to get the benefit of both types of volume management. For instance, a user might create multiple RAID-5 volumes to provide data redundancy, and then use each of these RAID-5 volumes as a PV for a volume-group. Logical-volumes can then be created that span multiple RAID-5 volumes.
Device Special Files
A typical Linux system has at least one hard disk, a keyboard, and a console. These devices are handled by their corresponding device drivers. However, how would a user-level application access the hardware device? Device special files are an interface provided by the operating system to applications to access the devices. These files are also called device nodes that reside in the /dev directory. The files contain a major and minor number pair that identifies the device they support. Device special files are like normal files with a name, ownership, and access permissions.
There are two kinds of device special files: block devices and character devices. Block devices allow block-level access to the data residing on the device, and character devices allow character-level access to the device. When you issue the ls –l command on a device, if the returned permission string starts with a b, it is a block device; if it starts with a c, it is a character device.
devfs
The virtual file system, devfs, manages the names of all the devices. devfs is an alternative to the special block and character device node that resides on the root file system. devfs reduces the system administrative task of creating device nodes for each device in the system. This job is automatically handled by devfs. Device drivers can register devices to devfs through device names instead of through the traditional major-minor number scheme. As a result, the device namespace is not limited by the number of major and minor numbers.
A system administrator can mount the devfs file system many times at different mount points, but changes to a device node are reflected on all the device nodes on all the mount points. Also, the devfs namespace exists in the kernel even before it is mounted. Essentially, this makes the availability of device nodes independent of the availability of the root file system.
With the traditional solution, a device node is created in the /dev directory for each and every conceivable device in the system, irrespective of the existence of the device. However, in devfs, only the necessary and sufficient device entries are maintained. | http://www.informit.com/articles/article.aspx?p=389712&seqNum=7 | CC-MAIN-2019-18 | refinedweb | 1,539 | 62.48 |
1413/how-can-i-concatenate-str-and-int-objects
How To Concetanate Strings and Integers in Python?
I am trying to execute the following code:
things = 5
print("You have " + things + " things.")
I get the following error in Python 3.x:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: must be str, not int
'+' operator can be used to either add two numeric values or to concatenate sequences.
For example:
A = 1
B = 2
Print (A + B)
Output = 3
Print ([1,2,3,’name’] + [1,2,5, ’name’])
Print (‘string-1’ + ‘string-2’)
Output = [1,2,3,’name’,1,2,5,’name’]
Output = string-1string-2
>>> [1, 2, 3] + [4, 5, 6]
[1, 2, 3, 4, 5, 6]
>>> 'abc' + 'def'
'abcdef'
either: print("You have " + str(things) + " things.") (the old school way)
or: print("You have {} things.".format(things)) (the new pythonic and recommended way)
If you want to concatenate int or floats to a string you must use this:
i = 123
a = "foobar"
s = a + str(i)
You probably want to use np.ravel_multi_index:
[code]
import numpy ...READ MORE
Use , to separate strings and variables while printing:
print ...READ MORE
Hi, good question. What you can do ...READ MORE
You absolutely can use nameko and Flask together.
In that ...READ MORE
if you google it you can find. ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
You can simply the built-in function in ...READ MORE
You can try the below code which ...READ MORE
Context Manager: cd
import os
class cd:
"""Context manager for ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/1413/how-can-i-concatenate-str-and-int-objects?show=1414 | CC-MAIN-2019-47 | refinedweb | 282 | 75.81 |
Grid Sort for formatted date column in FF vs IE6
It looks like the grid sorts by the data model's column value in IE6 and the view's rendered html in FF. Not certain about this. I'll have to investigate further but that's my initial assessment.
As your RSS feed viewer, I parse the date on the way in (converting unix time values to javascript Date objects). I use the same renderer as in the RSS feed viewer. Many of my data rows have the same rendered date (i.e. same minute) but the Date object should be different by a couple seconds. In IE6, the sort is ordered correctly. In FF, it appears that all of the rows with the same rendered date are grouped together.
setup code looks like:
Code:
function formatDate(dateVal){ return (dateVal && dateVal.format) ? dateVal.dateFormat('M j, Y, g:i a') : 'Not Available'; } var myColumns = [ {header: "From", width: 128, sortable: true}, {header: "Status", width: 45, sortable: true}, {header: "Subject", width: 233, sortable: true}, {header: "Date", width: 128, sortable: true, renderer: formatDate} ]; var colModel = new YAHOO.ext.grid.DefaultColumnModel(myColumns); function parseDate(noteUnixTime){ return new Date(noteUnixTime*1000); } var schema = { tagName: 'note', id: 'id', fields: ['from', 'status', 'subject', 'date'] }; this.dataModel = new YAHOO.ext.grid.XMLDataModel(schema); this.dataModel.addPreprocessor(3,, 3, 'ASC');
--Mark
Just wondering, why did you post this as a bug? You will need to adjust your date parsing code. The Date sorting just calls date.getTime() and compares the the results. If the returned long is the same, they are considered equal.
Originally Posted by jacksloc
I'll see if I can find a way to reproduce in a simpler example.
Do you have it set to use date sorting?
Try this small change (paste it in your file somewhere after yui-ext.js):
Code:
YAHOO.ext.grid.DefaultColumnModel.sortTypes.asDate = function(s) { if(s instanceof Date){ return s.getTime(); } return Date.parse(String(s)); };
No wasn't doing that.
Do you have it set to use date sorting?
Sorry for the bogus bug report. This is probably a help query and not a bug report. Next time I'll post to the help section first.
I'll move it over there. Thanks.
Similar Threads
Send sort column number in paged grid in Ext 1.0By JeffHowden in forum Community DiscussionReplies: 3Last Post: 2 Jun 2008, 11:16 AM
column sort makes ajax call for each time grid was recreatedBy lemontree in forum Ext 1.x: Help & DiscussionReplies: 6Last Post: 4 Jul 2007, 2:14 AM
Grid: Post column name (not just id) in remote sortBy brondsem in forum Community DiscussionReplies: 2Last Post: 18 Feb 2007, 4:01 PM
Date string cannot be formattedBy qiuyl in forum Ext 1.x: Help & DiscussionReplies: 2Last Post: 12 Dec 2006, 6:59 AM
grid sort marks all rows by the first columnBy lsmith in forum Ext 1.x: Help & DiscussionReplies: 8Last Post: 22 Nov 2006, 9:13 AM | https://www.sencha.com/forum/showthread.php?363-Grid-Sort-for-formatted-date-column-in-FF-vs-IE6 | CC-MAIN-2015-48 | refinedweb | 497 | 67.04 |
Aflați mai multe despre abonamentul Scribd
Descoperiți tot ce are Scribd de oferit, inclusiv cărți și cărți audio de la editori majori.
Technical Report Writing
Chemical Engineering Department
Dr. Moustapha Salem Mansour First year Spring 2009
Table of contents
1. Introduction
1
1.1.
Types of Technical Reports
2
1.1.1. Technical-background report
1.1.2. Instructions
1.1.3. Feasibility, recommendation, and evaluation reports
1.1.4. Primary research report
1.1.5. Technical specifications
3
1.1.6. Report-length proposal
1.1.7. Business proposed
1.2. Audience and Situation in Technical Reports
1.3. Topics for Technical Reports
4
1.3.1. Editorializing
1.3.2. Fuzzy topics
1.3.3. Tough technical topics
1.4.
General Characteristics of Technical Reports
1.4.1. Graphics
1.4.2. Accurate detail
1.4.3. Information sources
1.4.4. Documentation
5
1.4.5. Realistic audience and situation
1.4.6. Headings and lists
1.4.7. Special format
1.4.8. Production
1.4.9. Length
1.4.10. Technical content
2. Visual Elements
6
2.1. Making a visual aid truly visual
2.2. Deciding when to use a visual aid
7
2.3. Selecting the best type of visual aid in a given situation
2.3.1. Conventions of Visual Perception
8
2.3.2. Some types of visual aids and their uses
2.4.
Designing the visual aid
14
2.4.1.
Making a visual aid relevant
2.4.2.
Making a visual aid clear
16
2.5.
Integrating the Visual Aid into the Test
18
2.5.1. Positioning
2.5.2. Printing
19
2.6.
Formatting Contentions that Make Reading Easier
3. The technical Report
22
3.1. Types of Reports
3.2. Organization of reports
24
3.2.1.
Organization of a design report
3.3. Preparing the report
27
3.4. Presenting the results
3.4.1. Subheadings and Paragraphs
28
3.4.2. Tables
3.4.3. Graphs
3.4.4. Illustrations
29
3.4.5. References to Literature
3.4.6. Sample Calculations
30
3.4.7. Mechanical Details
31
4. Oral Presentations
32
4.1. Topic and Situation for the Oral Presentation
4.2. Contents and Requirements for the Oral Presentation
33
4.3. Preparing for the Oral Report
34
4.4. Delivering an Oral Presentation
4.5. Planning and Preparing Visuals for Oral Presentations
35
4.5.1.
Tips for the preparation off the visuals
36
5. Making Your Writing Readable
37
5.1. Introduction
5.2. Information selection
5.2.1. Establish your Topic and Purpose
5.2.2. Use Keywords Prominently
38
5.2.3. Explain Important Concepts when Writing for Nonspecialist
Readers
5.2.4.
Use
Standard
Terminology
when
Writing
for
Specialist
39
5.2.5.
Structure your Text to Emphasize Important Information
40
5.2.6.
Construct Well Designed Paragraphs
5.2.7.
Field-Test Your Writing
41
5.3.
Information ordering
5.3.1.
Optimal Ordering of Noun Phrases
42
5.4.
Editing For Emphasis
45
5.4.1. Combine Closely Related Sentences
46
5.4.2. Be Concise
49
6. Project Proposal
52
6.1. The contents of project proposal can be structured as follows:
6.2. NATURE OF THE REPORTS:
6.3. Technical-industrial project proposals:
53
7. Checklist for the Technical Report
54
1. INTRODUCTION:.:
Is there hard, specific, factual data for this topic?
Will there be at least one or two graphics?
Is there some realistic need for this report?
Technical Reports Writing (HS x12)
First year Chemical Engineering Department Spring 2009
[
In this course you can choose to write one of the following types of reports
The background report is the hardest to define but the most commonly written. This type of technical report provides background on a topic—for example, solar energy, global warming, CD- ROM technology, a medical problem, or U.S. recycling activity. MS-Word, just a guide on writing macros in MS-Word..
1.1.5..
Why does the audience need this information?
How will readers get access to.
1.3.
Topics for Technical Reports
Just about any topic can be worked into a good technical-report project. Some are a little more difficult than others; that's where your instructor can help. And, that is why some technical writing course includes a proposal assignment: it gives your instructor a chance to see what you want to do and to guide you away from problems such as the following: anybody of specialized knowledge.
1.4. General Characteristics of Technical Reports
You're probably wondering what this technical report is supposed to look like. Ask your instructor to show you a few example reports. In addition to that, here is a brief review of some of the chief characteristics of the technical report:.
The report should be very detailed and accurate. The point of the report is to go into details, the kind of details your specific audience needs..
1.4.4.
Documentation
When you use borrowed information in your technical report, be sure to cite your sources. The style of citing your sources (also called "documenting" your sources). One style commonly used in science and engineering is called the number system.."
The report should use the format for headings that is required for the course, as well as various kinds of lists as appropriate.
The technical report uses a rather involved format including covers, binding, title page, table of contents, list of figures, transmittal letter, and appendixes. These have to be prepared according to a set standard, which will be presented in a later chapter.
The technical report should be typed or printed out neatly. If graphics are taped in, the whole report must be photocopied, and the photocopy handed in (not the original with the taped-in graphics). The report must be bound in some way.
The report should be at least 8 1.5 spaced typed or printed pages (using 3/4 .
There are times when words alone are not the best way to transfer information or points of view. Also, sometimes words need to be combined with visual aids, formatting (the use of white space and indenting), or other visual elements. For example, appropriate formatting can make a technical report much easier to read, so much easier that the formatting becomes necessary given the limitations on the time and attention of an audience. The same can often be said of other visual elements, such as drawings, figures, charts, or graphs, which can quickly summarize an important point or present it in a different way. It is known that you can increase the strength and memorability of a message simply by repeating it or, even better, by repeating it in a different form. Thus, when a visual presentation is added to a verbal one, the combination can produce a much stronger and more easily remembered message than either presentation alone. Further, a visual aid can present a compact summary of the main points of a verbal text. (Have you ever heard the expression "a picture is worth a thousand words"?) Finally, a visual element can often summarize in a more memorable form than words alone can. Given these advantages of visual aids, a communicator ought to be able to use them effectively. This involves knowing
1. How to make a visual aid effective
2. When to use the visual aid
3. How to select the best type of visual element in a given situation (e.g., pie chart, bar graph, line graph)
4. How to integrate the visual aid into the text
Take about 2 to 5 seconds to look at Table 2-1 and then cover it up. Do not look at any of the following tables or discussions. Now try to write .down the main points made by the table. When you have finished this, look at the presentation of the same information in Table 2-2 and see if you can quickly add any more main points to your list. Do this before you continue. Typically, people who read only Table 2-1 note (1) that job satisfaction declines in each of the two main groups of occupations. These readers will sometimes notice (2) that there is a large difference in job satisfaction between the two groups-that is, that most of the first group is relatively satisfied (93 to 82 percent satisfied) whereas most of the second group is much less satisfied (only 52 to 16 percent satisfied). Very few readers of only Table 2-1 will notice (3) that the job satisfaction of skilled printers is higher than that of nonprofessional white-collar workers. These last two observations (points 2 and 3) are very hard to "see" in the format used in Table 2-1. In contrast, most readers of Table 2-2 easily and quickly note all three observations, as well as a few other, more subtle ones, simply because of the format of the table. Notice that Table 2-2 makes it visually quite clear that the job satisfaction ratings of the two groups overlap and that the skilled trade and factory workers as a group are less satisfied than the professionals.
Table 2-1 Proportion of occupational groups who would choose similar work again
Professional occupation
percent
skilled trades occupations
Ubran university professors
93
Skilled printers
Mathematicians
91
Paper workers
Physicists
89
Skilled auto workers
Biologists
Skilled steel workers
Chemists
86
Textile workers
Lawyers
85
Unskilled steel workers
School superintendents
84
21
Journalists
82
White-collar workers
43
Table 2-2 Alternate arrangement for proportion of occupational groups who would choose similar work again
Ubran university professors Mathematicians Physicists Biologists Chemists Lawyers School superintendents Journalists White-collar workers
Skilled printers Paper workers Skilled auto workers Skilled steel workers Textile workers Unskilled steel workers Unskilled steel workers
Communicators often wonder when they should use a visual aid in a communication. Three suggested principles for deciding this are to use a visual aid
1. Where words alone would be either impossible or quite inefficient for describing a concept or an object
2. Where a visual aid is needed to underscore an important point, especially a summary
3. Where a visual element is conventionally or easily used to present data
When you design a particular visual aid, you are consciously or unconsciously making certain decisions. You are deciding-that the particular type of aid yon choose (a line graph, bar chart, pie diagram, and photograph) is the best type to make your point and that the arrangement and highlighting of material on the page is, again, the best to make your point. Unfortunately, there is little information available on which to base such decisions. If you
are like most writers, you probably choose one type of visual aid over another simply because it is the first thing you think of using. The purpose of this section is to sketch out some better or more conscious reasons for choosing. The section will first identify some conventions of visual perception and then examine several common types of visual -aids to see what they do and do not show well.
There are a number of general statements we can make about our expectations of visual information. First, we expect written things to proceed from left to right. Note that in scientific and technical graphs, we place the independent variable on the x-axis so that the more important variable moves from left to right. For instance, we plot time on the x-axis and frequency on the y- axis, as illustrated in Figure 2-1. This pattern is so universal that Figure 2-2 looks at best odd and at worst disturbing.
Figure 2-1 Preferred location of independent variable on a graph
Figure 2-2 Unconventional location of independent variable on a graph
Second, we expect things to proceed from top to bottom, and, third, we expect things in the center to be more important than things on the periphery. Fourth, we expect things in the foreground to be more important than things in the background; fifth, large things to be more important than small things; and sixth, thick things to be more important than thin things. Note that writing that, is larger, thicker, or bolder than the surrounding type is usually more important: a heading, a title, or an especially important word in a passage. Seventh, we expect areas containing a lot of activity and information to contain the most important information. Eighth, we expect that things having the same size, shape, location, or color are somehow related to each other. Lastly, ninth, we see things as standing out if they contrast with their surroundings because of line thickness, type face, or color.
There are six main types of visual aids with which a scientist or engineer should be familiar: (1) line graphs, (2) bar graphs, (3) pie charts, (4) tables, (5) photographs, and (6) line drawings. Each of these types has particular strengths and weaknesses, and to use any one appropriately, you must decide what point you are trying to make and then select the type of visual aid which makes that kind of point well.
LINEGRAPHS Line graphs, show well continuity and direction as opposed to individual or discrete points, direction as opposed to volume, and the importance of a nodal point, if there is one. These characteristics are illustrated in Figure2-3. Line graphs do not show well them importance of one particular point which falls of a node, the relationship of many lines, or the inter section of three or more lines, lf its important to be able to trace each line on a graph, you should probably not put more than three or four on a single graph, especially if they intersect frequently, or you may produce a graph as hard to follow as the one in Figure 2-4.
Figure 2-3 River flow before (1963) and after (1977) construction of Aswan High Dam on the Nile River
Figure 2-4 Reference of families for girls versus boys in six countries
BAR GRAPHS Bar graphs show relatively well the discreteness or separateness of points as opposed to their continuity, volume as opposed to direction, the relationships among more than three or four items at a time, the contrast between large and small numbers, and the similarities and differences between similar numbers. These characteristics are evident in the variant of the bar graph presented in Figure 2-5 and in Figure 2-6. Bar graphs can be arranged with either horizontal (Figure 2-5) or vertical bars (Figure 2-6), depending on the type of information they represent. The bars are normally separated by spaces.
Figure 2-5 Bar Chart Showing annual energy saving
9
Figure 2-6 Vertical bar chart
HISTOGRAMS
A histogram looks like a bar chart, but functionally it is similar to a graph because it deals
with two continuous variables (functions that can be shown on a scale' to be decreasing or increasing). It is usually plotted like a bar chart, as shown in Figure 2-7. The chief visible difference between a histogram and a bar chart is that there are no spaces between the bars
of a histogram.
Figure 2-6 Histogram for failure records
SURFACE CAHRTS A surface chart is shown in Figure 2-8. It may look like a graph, but it is not. To a technical person its' construction may seem so awkward that he might wonder when he would ever need to use one. Yet as a means for conveying Illustrative information to non technical readers, it can serve a very useful purpose.
10
Figure 2-8 Surface chart adds thermal data to hydro data to show total energy resources Like a graph, a surface chart has two continuous variables that form the scales against which the
curves are plotted. But unlike a graph, individual curves cannot be read directly from the scales. The uppermost curve is achieved as follows:
1. The curve containing the most import-ant or largest quantity of data is drawn first, in the normal way. This is the Hydro curve in Figure 2-8.
2. The next curve is drawn in above the first curve, using the first curve as a base (i.e. zero) and adding the second set of data to it. For example, the energy resources shown as being available in 1980 are:
Hydro
15,000 MW
Thermal
7,000
MW
In Figure 2-8, the lower curve for 1980 is plotted at 15,000 MW. The 1980 data for the next curve is 7,000 MW, which is added to the first set of data so that the second curve indicates a total of 22,000 MW. (If there is a third set of data, it is added on in the same way). PIE DIAGRAMS Pie diagrams show relatively well the relationship among three or four items which total 100 percent, the contrast between large and small percentages, and the similarities between relatively similar percentage (they show that well that 27 percent and 29 percent are about equal). Pie diagrams do not show well the small differences between two similar percentages (you can not usually see the differences between 27 and 29 percent). They also do not show well absolute values (unless you label the parts of the pie) or the relationship among more than five or six parts; with too many parts it is hard to see relationships of part to part and part to whole. These strengths and weakness is illustrated in figure 2-9.
11
Figure 2-9 Distribution of fatalities in 181 fatal car-truck crashers
TABLES Tables are convenient for presenting lots of data and for giving absolute values where precision is very important. However, since they present items one at a time in columns, they emphasize the discrete rather than the continuous and make it very difficult to show trends or direction in. the data. Tables are not predominantly visual: the reader's mind must translate each number into a relationship with each other number, as already described in the job satisfaction example at the beginning of this chapter. Thus, for maximum visual impact, tables should probably be a last choice as a visual aid and used only when it is important to provide a great deal of information with precision in a very small space. PHOTOGRAPHS Photographs are useful when you do not have the time, the money, or the expertise to produce a complicated line drawing; when you are trying to produce immediate visual recognition of an item; when you are emphasizing the item's external appearance (as opposed to its internal structure or a cross section); and when you are not concerned with eliminating the abundant detail a photograph provides. While photographs can be air-brushed to eliminate some undesired detail, they still are not preferred when you need to focus on some one aspect by eliminating a lot of detail and when you have the time and resources to produce a good line drawing.
LINE DRAWINGS
The term line drawing includes several types of drawings which focus on external appearance, physical shape, function, or relationship. These include "simplified photos," maps (see Figure 2- 10), anatomical drawings, parts charts, and drawings of models (such as atomic or molecular models as seen in Figure 2-11) or objects from any field of science or engineering. Also included are flow charts, organizational charts, schematic charts, block diagrams, as seen in Figure 2-12, architectural plans, and blueprints. While there are many types of line drawings, all of them share certain functions. They allow you to show things which you can't normally see in a photograph because of size, location, or excessive detail. They also allow you to easily highlight a particular shape, part, or function.
12
Figure 2-10 map Showing UK Standard regions
Figure 2-11 Model for polyethylene
Figure 2-12 flow diagrams for programming sequence
13
2.4. Designing the visual aid
Once you have decided .where a visual aid is needed and what type it should be, you must design it so that it is as relevant, clear, and truthful as possible. This will usually be at least a two-stage process: designing a rough copy and then producing the finished COPY, If you work for a company which has an art or illustration department you may be able to get a technical illustrator to produce the finished copy for you and to counsel you in the design stage. However, even if you have such help, you should be the real designer of the visual aid: you have the best knowledge of the subject and best know the purpose of the aid and the context in which it is being used.
2.4.1. Making a visual aid relevant
Since you place a visual aid in a text to make a point, you should be sure that it makes the point you intend. For instance, suppose that you are discussing expected energy saving from the use of
solar energy in the future. You have posed three possible sources of the savings-residences, total energy systems such as industrial parks and shopping centers, and solar-based electric power plants- and have broken down the specific savings as illustrated in Table 2-3. Table 2-3 Expected annual saving from solar energy Annual Savings (10 15 Btu)
Year
Residences
1985
0.4
1990
1.2
1995
1.9
Total energy
Solar-based electric
systems
power plants
0.24
-
0.92
1.4
Now that you have your data, you want to construct a visual aid to show the growth in savings and the relative contributions of each source. You construct five possible versions of a visual aid, presented in Figures 2-13 through 2-17, and now have to choose the one most appropriate to your point. On what basis do you choose? What are the differences among the five visual aids?
Figure 2-13 Annual energy savings from solar energy, version I
Figure 2-14 Annual energy savings from solar energy, version 2
Figure 2-15 Annual energy savings from solar energy, version 3
Figure 2-16 Annual energy savings from solar energy, version 4
15
Figure 2-17 Annual energy savings from solar energy, version 5 First let us consider the bar graphs. Among the bar graphs, Figure 2-13 presents the most information in the smallest space and the clearest vision of total growth; however, in comparison to the other charts, it obscures the comparisons between items in the same year and between the same item in different years. Figure 2-14 obscures the total growth but makes the comparisons already mentioned much clearer, especially between the same item in different years. On the other hand, Figure 2-15 clarifies the comparison between items in the same year but obscures comparisons between years. The line graphs in Figures 2-16 and2-17 have the same strengths and weaknesses as their respective bar graph counterparts, but in addition they also bring out more strongly the idea of direction and rate of change. So how do you choose one (or two) from among the group? You pick the one which best matches the focus you wish to take in your report or talk. If you are not much concerned about total growth but want to focus on the contribution of each area for savings, then you would probably choose Figure2-14. If you are interested in the growth of the contribution of each area, you would probably choose Figure 2-16. If you are primarily interested in the increase in total savings, you would probably choose Figure 2-13 or 2-17.
2.4.2. Making a visual aid clear
Making a visual aid clear involves two separate activities: making it conceptually clear and making it technically clear. Making it conceptually clear means having a clearly defined and relevant point and a good form for the point. Conceptual clarity is discussed above. Technical clarity is a simpler matter and will be treated here. It involves having an informative title, appropriate headings and labels, and enough white space so that an audience has the best possible chance of finding the "right" meaning for the visual aid. To really see the benefit of proper labeling and sufficient white space, look at the series of graphs presented in Figure 2-18. Graph (a) is an extremely bad example of a visual aid since it has none of the labeling information usually presented. Graphs (b) and (c) present more information, but still not enough to really get the message across. (Notice that graph c lacks enough information even though it provides everything except the title and two critical labels.) Graph (d) provides an adequate title and labels, but the grid in the background is so obtrusive that a reader can hardly see the important lines and labels. Finally, graph (e) provides adequate information and enough white space to let it be seen; from these, a careful and hardworking reader can probably figure out the
message (you should note that version d is typical of most student reports, which are done quickly checked mainly for accuracy rather than readability).
Figure 2-18 The necessity of labels, headings and titles in visual aids
17
2.5. Integrating the Visual Aid into the Test
Once you have decided to use a visual aid in a particular spot in the text, you must incorporate it so that it seems to belong there. The visual aid needs to be tied to the text and explained since it appears in the text and make sense to readers. In addition, if the communicator does NOT EXPLAIN the importance of the visual aid (its main point, limitations, assumptions and implications), then the readers will have to provide these of information for themselves. As a general rule, when readers are put in the position, they will -at least sometimes- see points or implications rein those the communicator's wants them to see or perhaps even completely miss the communicator's point. The easiest way to integrate a visual aid with the text is to explain its main points and any special implications a reader should note.
Try to always put the visual aid after you have mentioned it and not are reverse: in other words, do not put a visual aid in a spot within the text. Before pointing out to it, for example do not put a figure in the text, and then point to it. Note that all illustrations in the present notes are referred to first, then they are inserted into the text. You must not only refer to every illustration in a report, but a real effort must be made to keep the illustration on the same page as the description it supports. This can become problem if the description is long. However, a reader who has to keep flipping back and forth between the text and illustrations will soon tire, and the reason for including the illustrations will be defeated. When reports are typed on only one side of the paper, full page illustrations can become an embarrassment. Try to limit the size of the illustrations so they can be placed beside, above or below the words, and lien to make sure that they are correctly placed. Horizontal full page illustrations may be inserted sideways on a page (landscape), but must always be positioned so that they are read from the right, see Figure 2-19. This holds true whether they are placed on a left- or right-hand page.
Figure 2-19 Page-size horizontal drawings should be positioned so they can read from the right
When an illustration is too large to fit on a normal page, or is going to be referred to frequently, you should consider printing it on a foldout sheet and inserting it at the back of the report. If the illustration is printed only on the extension panels of the foldout, the page can be left opened out for continual reference while the report is being read, see Figure 2-20. This technique is particularly suitable for circuit diagrams, plant layouts and flow charts.
Figure 2-20 large illustrations can be placed on a fold out sheet at rear of report
Always discuss printing methods with the person who will be making copies of your report before you start making reproduction copy. Certain reproduction equipment cannot handle some sizes, materials and colors. For example, heavy blacks and light blues may not reproduce well on some electrostatic copiers, light browns cannot be copied by other types of equipment, and photographs can be reproduced clearly by very few.
2.6. Formatting Contentions that Make Reading Easier
There are many features of technical writing that make it look different from most writing we see in
newspapers, books and personal letters. Look for instance at Figure 2-21, the beginning of a typical engineering report. You will notice that its has some very interesting formatting features:
1. Single-spacing
2. Short paragraphs
3. Lists
4. Headings (underlined titles)
5. Numbers to mark the various paragraphs
6. Liberal use of white space
All of these features occur frequently in scientific and technical writing because they are functional; single-spacing saves space, and the others make a text easier to read, especially for busy and inattentive readers. Headings clearly announce the contents of a section so that, busy readers can skip that section if they don't need details. Short paragraphs and white space make a report easy on the eye, even though it may be single-spaced. The numbering, indentation and lists provide clues to the organization of the report: they allow a reader to skip freely from section to section without reading everything.
Figure 2-21 Formatted version of discussion of technical report
20
To get a good idea of how helpful these simple formatting considerations can be, look at the unformatted version of the Discussion section of the report, presented in Figure 2-22. Do you agree that it is much more difficult to read? Do you agree that formatting makes the version in Figure 2- 21 more functional, that is, easier to read and understand?
Figure 2-22 Unformatted version of discussion of figure 2-21
3. THE TECHNICAL REPORT
A successful engineer must be able to apply theoretical and practical principles in the development
of ideas and methods and also have the ability to express the results clearly and convincingly.
During the course of a design project, the engineer must prepare many written reports which explain what has been done and present conclusions and recommendations. The decision on the advisability
of continuing the project may be made on the basis of the material presented in the reports. The
value of the engineer‟s work is measured to a large extent by the results given in the written reports covering the study and the manner in which these results are presented. The essential purpose of any report is to pass on information to others. A good report writer never forgets the words “to others.” The abilities, the functions, and the needs of the reader should be kept
in mind constantly during the preparation of any type of report. Here are some questions the writer
should ask before starting, while writing, and after finishing a report:
What is the purpose of this report? Who will read it? Why will they read it? What is their function? What technical level will they understand? What background information do they have now? The answers to these questions indicate the type of information that should be presented, the amount of detail required, and the most satisfactory method of presentation.
Reports can be designated as formal and irtfortrrul. Formal reports are often encountered as research, development, or design reports. They present the results in considerable detail, and the writer is allowed much leeway in choosing the type of presentation. Informal reports include memorandums, letters, progress notes, survey-type results, and similar items in which the major purpose is to present a result without including detailed information. Stereotyped forms are often used for informal reports, such as those for sales, production, calculations, progress, analyses, or summarizing economic evaluations. Figures 13-1 through 13-3 present examples of stereotyped forms that can be used for presenting the summarized results of economic evaluations. Although many general rules can be applied to the preparation of reports, it should be realized that each industrial concern has its own specifications and regulations. A stereotyped form shows exactly what information is wanted, and detailed instructions are often given for preparing other types of informal reports. Many companies have standard outlines that must be followed for formal reports. For convenience, certain arbitrary rules of rhetoric and form may be established by a particular concern. For example, periods may be required after all abbreviations, titles of articles may be required for all references, or the use of a set system of units or nomenclature may be specified.
Figure 3-1 Example of form for an informal summarizing report on factory manufacturing cost.
Figure 3-2 Example of form for an informal summarizing report on capital investment.
23
Figure 3-3 Example of form for an informal summarizing report on income and return.
The organization of a formal report requires careful sectioning and the use of subheadings in order
to maintain a clear and effective presentation? To a lesser degree, the same type of sectioning is
valuable for informal reports. The following discussion applies to formal reports, but, by deleting or combining appropriate sections, the same principles can be applied to the organization of any type
of report.
A complete design report consists of several independent parts, with each succeeding part giving
greater detail on the design and its development. A covering Letter of Transmittal is usually the
first item in any report. After this come the Title Page, the Table of Contents, and an Abstract
or Summary of the report. The Body of the report is next and includes essential information,
presented in the form of discussion, graphs, tables, and figures. The Appendix, at the end of the report, gives detailed information which permits complete verification of the results shown in the body. Tables of data, sample calculations, and other supplementary material are included in the Appendix. A typical outline for a design report is as follows:
3.2.1. Organization of a design report
1. Letter of transmittal
Indicates why report has been prepared
Gives essential results that have been specifically requested
2. Title page
Includes title of report, name of person to whom report is submitted, writer‟s name and
organization, and date
3. Table of contents
Indicates location and title of figures, tables, and all major sections
4. Summary
Briefly presents essential results and conclusions in a clear and precise manner
5. Body of report
A. Introduction
Presents a brief discussion to explain what the report is about and the reason for the report;
no results are included
B. Previous work
Discusses important results obtained from literature surveys and other previous work
C. Discussion
Outlines method of attack on project and gives design basis
Includes graphs, tables, and figures that are essential for understanding the discussion
Discusses technical matters of importance
Indicates assumptions made and their justification
Indicates possible sources of error
Gives a general discussion of results and proposed design
D. Final recommended design with appropriate data
Drawings of proposed design
a. Qualitative flow sheets
b. Quantitative flow sheets
c. Combined-detail flow sheets
Tables listing equipment and specifications
Tables giving material and energy balances
Process economics including costs, profits, and return on investment
E. Conclusions and recommendations
Presented in more detail than in Summary
F. Acknowledgment
Acknowledges important assistance of others who are not listed as preparing the report
G. Table of nomenclature
Sample units should be shown
H. References to literature (bibliography)
Gives complete identification of literature sources referred to in the report
I. Appendix i. Sample calculations
One example should be presented and explained clearly for each type of calculation
ii. Derivation of equations essential to understanding the report but not presented in detail in the main body of the report
iii. Tables of data employed with reference to source
iv. Results of laboratory tests 1. If laboratory tests were used to obtain design data, the experimental data, apparatus and procedure description, and interpretation of the results may be included as a special appendix to the design report.
3.2.1.1. Letter of Transmittal
The purpose of a letter of transmittal is to refer to the original instructions or developments that have made the report necessary. The letter should be brief, but it can call the reader‟s attention to certain pertinent sections of the report or give definite results which are particularly important. The writer should express any personal opinions in the letter of transmittal rather than in the report itself. Personal pronouns and an informal business style of writing may be used.
25
3.2.1.2.
Title Page and Table of Contents
In addition to the title of the report, a title page usually indicates other basic information, such as the
name and organization of the person (or persons) submitting the report and the date of submittal. A table of contents may not be necessary for a short report of only six or eight pages, but, for longer reports, it is a convenient guide for the reader and indicates the scope of the report. The titles and subheadings in the written text should be shown, as well as the appropriate page numbers. Indentations can be used to indicate the relationships of the various subheadings. A list of tables, figures, and graphs should be presented separately at the end of the table of contents.
3.2.1.3. Summary
The summary is probably the most important part of a report, since it is referred to most frequently and is often the only part of the report that is read. Its purpose is to give the reader the entire contents of the report in one or two pages. It covers all phases of the design project, but it does not go into detail on any particular phase. All statements must be concise and give a minimum of
general qualitative information. The aim of the summary is to present precise quantitative
information and final conclusions with no unnecessary details. The following outline shows what should be included in a summary:
1. A statement introducing the reader to the subject matter
2. What was done and what the report covers
3. How the final results were obtained
4. The important results including quantitative information, major conclusions, and
recommendations An ideal summary can be completed on one typewritten page. If the summary must be longer than
two pages, it may be advisable to precede the summary by an abstract, which merely indicates the subject matter, what was done, and a brief statement of the major results.
3.2.1.4. Body of the Report
The first section in the body of the report is the introduction. It states the purpose and scope of the
report and indicates why the design project originally appeared to be feasible or necessary. The relationship of the information presented in the report to other phases of the company‟s operations can be covered, and the effects of future developments may be worthy of mention. References to previous work can be discussed in the introduction, or a separate section can be presented dealing with literature-survey results and other previous work.
A description of the methods used for developing the proposed design is presented in the next
section under the heading of disczmion. Here the writer shows the reader the methods used in reaching the final conclusions. The validity of the methods must be made apparent, but the writer should not present an annoying or distracting amount of detail. Any assumptions or limitations on the results should be discussed in this section. The next section presents the recommended design, complete with figures and tables giving all
necessary qualitative and quantitative data. An analysis of the cost and profit potential of the proposed process should accompany the description of the recommended design.
26
The body of a design report often includes a section giving a detailed discussion of all conclusions and recommendations. When applicable, sections covering acknowledgment, table of nomenclature, and literature references may be added. 3.2.1.5. Appendix In order to make the written part of a report more readable, the details of calculation methods, experimental data, reference data, certain types of derivations, and similar items are often included as separate appendixes to the report. This information is thus available to anyone who wishes to make a complete check on the work, yet the descriptive part of the report is not made ineffective because of excess information.
The physical process of preparing a report can be divided into the following steps:
1. Define the subject matter, scope, and intended audience
2. Prepare a skeleton outline and then a detailed outline
3. Write the first draft
4. Polish and improve the first draft and prepare the final form
5. Check the written draft carefully, have the report typed, and proofread the final report
In order to accomplish each of these steps successfully, the writer must make certain the initial work on the report is started soon enough to allow a thorough job and still meet any predetermined deadline date. Many of the figures, graphs, and tables, as well as some sections of the report, can be prepared while the design work is in progress.
Accuracy and logic must be maintained throughout any report. The writer has a moral responsibility to present the facts accurately and not mislead the reader with incorrect or dubious statements. If approximations or assumptions are made, their effect on the accuracy of the results should, be indicated. For example, a preliminary plant design might show that the total investment for a proposed plant is $5,500,000. This is not necessarily misleading as to the accuracy of the result, since only two significant figures are indicated. On the other hand, a proposed investment of $5554,328 is ridiculous, and the reader knows at once that the writer did not use any type of logical reasoning in determining the accuracy of the results. The style of writing in technical reports should be simple and straightforward. Although short sentences are preferred, variation in the sentence length is necessary in order to avoid a disjointed staccato effect. The presentation must be convincing, but it must also be devoid of distracting and unnecessary details. Flowery expressions and technical jargon are often misused by technical writers in an attempt to make their writing more interesting. Certainly, an elegant or forceful style is sometimes desirable, but the technical writer must never forget that the major purpose is to present information clearly and understandably.
3.4.1.
Subheadings and Paragraphs
The use of effective and well-placed subheadings can improve the readability of a report. The sections and subheadings follow the logical sequence of the report outline and permit the reader to become oriented and prepared for a new subject. Paragraphs are used to cover one general thought. A paragraph break, however, is not nearly as definite as a subheading. The length of paragraphs can vary over a wide range, but any thought worthy of a separate paragraph should require at least two sentences. Long paragraphs are a strain on the reader, and the writer who consistently uses paragraphs longer than 10 to 12 typed lines will have difficulty in holding the reader‟s attention.
The effective use of tables can save many words, especially if quantitative results are involved. Tables are included in the body of the report only if they are essential to the understanding of the written text. Any type of tabulated data that is not directly related to the discussion should be located in the appendix. Every table requires a title, and the headings for each column should be self-explanatory. If numbers are used, the correct units must be shown in the column heading or with the first number in the column. A table should never be presented on two pages unless the amount of data makes a break absolutely necessary.
In comparison with tables, which present definite numerical values, graphs serve to show trends or comparisons. The interpretation of results is often simplified for the reader if the tabulated information is presented in graphical form. If possible, the experimental or calculated points on which a curve is based should be shown on the plot. These points can be represented by large dots, small circles, squares, triangles, or some other identifying symbol. The most probable smooth curve can be drawn on the basis of the plotted points, or a broken line connecting each point may be more appropriate. In any case, the curve should not extend through the open symbols representing the data points. If extrapolation or interpolation of the curve is doubtful, the uncertain region can be designated by a dotted or dashed line. The ordinate and the abscissa must be labeled clearly, and any nomenclature used should be defined on the graph or in the body of the report. If numerical values are presented, the appropriate units are shown immediately after the labels on the ordinate and abscissa. Restrictions on the plotted information should be indicated on the graph itself or with the title. The title of the graph must be explicit but not obvious. For example, a log-log plot of temperature versus the vapor pressure of pure glycerol should not be entitled “Log-Log Plot of Temperature versus Vapor Pressure for Pure Glycerol.” A much better title, although still somewhat obvious, would be “Effect of Temperature on Vapor Pressure of Pure Glycerol.” Some additional suggestions for the preparation of graphs follow:
1.
The independent or controlled variable should be plotted as the abscissa, and the variable that is being determined should be plotted as the ordinate.
2. Permit sufficient space between grid elements to prevent a cluttered appearance (ordinarily, two to four grid lines per inch are adequate).
3. Use coordinate scales that give good proportionment of the curve over the entire plot, but do not distort the apparent accuracy of the results.
4. The values assigned to the grids should permit easy and convenient interpolation.
5. If possible, the label on the vertical axis should be placed in a horizontal position to permit easier reading.
6. Unless families of curves are involved, it is advisable to limit the number of curves on any one plot to three or less.
7. The curve should be drawn as the heaviest line on the plot, and the coordinate axes should be heavier than the grid lines.
Flow diagrams, photographs, line drawings of equipment, and other types of illustrations may be a necessary part of a report. They can be inserted in the body of the text or included in the appendix. Complete flow diagrams, prepared on oversize paper, and other large drawings are often folded and inserted in an envelope at the end of the report.
The original sources of any literature referred to in the report should be listed at the end of the body of the report. References are usually tabulated and numbered in alphabetical order on the basis of the first author‟s surname, although the listing is occasionally based on the order of appearance in the report. When a literature reference is cited in the written text, the last name of the author is mentioned and the bibliographical identification is shown by a raised number after the author‟s name or at the end of the sentence. An underlined number in parentheses may be used in place of the raised number, if desired. The bibliography should give the following information:
1. For journal articles:
(a) Authors‟ names, followed by initials,
(b) Journal, abbreviated to conform to the “List of Periodicals” as established by Chemical Abstracts,
(c)
volume number,
(d)
issue number, if necessary,
(e)
page number, and
(f)
year (in parentheses).
The title of the article is usually omitted. Issue number is omitted if paging is on a yearly basis. The
date is sometimes included with the year in place of the issue number. McCormick, J. E., Chem. Eng., 9503175-76 (1988). McCormick, J. E., Chem. Eng., 95:75-76 (Sept. 26, 1988).
Gregg, D. W., and T. F. Edgar, AKhE J., 24753-781 (1978).
2. For single publications, as books, theses, or pamphlets:
(a)
authors‟ names, followed by initials,
(b)
title (in quotation marks),
edition (if more than one has appeared),
volume (if there is more than one),
publisher,
place of publication, and
(g)
year of publication.
The chapter or page number is often listed just before the publisher‟s name. Titles of theses are often omitted. Peters, M. S., “Elementary Chemical Engineering,” 2d ed., p. 280, McGraw-Hill Book Company, New York, 1984.
Heaney, M., PhD. Thesis in Chem. Eng., Univ. of Colorado, Boulder, CO. 1988.
3. For unknown or unnamed authors:
(a) alphabetize by the journal or organization publishing the information.
Chem. Eng., 9.5(13):26 (1988).
4. For patents:
(a) patentees‟ names, followed by initials, and assignee (if any) in parentheses,
country granting patent and number, and
date issued (in parentheses).
Fenske, E. R. (to Universal Oil Products Co.), U.S. Patent 3,249,650 (May 3, 1986).
5. For unpublished information:
(a) “in press” means formally accepted for publication by „the indicated journal or publisher; (b) the use of “private communication” and “unpublished data” is not recommended unless absolutely necessary, because the reader may find it impossible to locate the original material. Morari, M., Chem. Eng. Progr., in press (1988).
The general method used in developing the proposed design is discussed in the body of the report, but detailed calculation methods are not presented in this section. Instead, sample calculations are given in the appendix. One example should be shown for each type of calculation, and sufficient detail must be included to permit the reader to follow each step. The particular conditions chosen for the sample calculations must be designated. The data on which the calculations are based should be listed in detail at the beginning of the section, even though these same data may be available through reference to one of the tables presented with the report.
The final report should be submitted in a neat and businesslike form. Formal reports are usually bound with a heavy cover, and the information shown in title page is repeated on the cover. If paper fasteners are used for binding in a folder, the pages should be attached only to the back cover. The report should be typed on a good grade paper with a margin of at least 1 in. on all sides. Normally, only one side of the page is used and all material, except the letter of transmittal, footnotes, and long quotations, is double-spaced. Starting with the summary, all pages including graphs, illustrations, and tables should be numbered in sequence. Written material on graphs and illustrations may be typed or lettered neatly in ink. If hand lettering is required, best results are obtained with an instrument such as a LeRoy or Wrico guide. Short equations can sometimes be included directly in the written text if the equation is not numbered. In general, however, equations are centered on the page and given a separate line, with the equation number appearing at the right-hand margin of the page. Explanation of the symbols used can be presented immediately following the equation. Proofreading and Checking Before final submittal, the completed report should be read carefully and checked for typographical errors, consistency of data quoted in the text with those presented in tables and graphs, grammatical errors, spelling errors, and similar obvious mistakes. If excessive corrections or changes are necessary, the appearance of the report must be considered and some sections may need to be retyped. Nomenclature If many different symbols are used repeatedly throughout a report, a table of nomenclature, showing the symbols, meanings, and sample units, should be included in the report. Each symbol can be defined when it first appears in the written text. If this is not done, a reference to the table of nomenclature should be given with the first equation. Ordinarily, the same symbol is used for a given physical quantity regardless of its units. Subscripts, superscripts, and lower- and upper-case letters can be employed to give special meanings. The nomenclature should be consistent with common usage.
4. ORAL PRESENTATIONS
One of the assignments in this technical writing course is to prepare and deliver an oral presentation. You might wonder what an oral report is doing in a writing class. Employers look for course work and experience in preparing written documents, but they also look for some experience in oral presentation as well.
For the oral report, imagine that you are formally handing over your final written report to the people with whom you set up the hypothetical contract or agreement. For example, imagine that you had contracted with the Govemorate of Alexandria to write a visitor's guide to the city of Alexandria. Once you had completed it, you'd have a meeting with the officers in charge to formally deliver the guide. You'd spend some time orienting them to the guide, showing them how it is organized and written, and discussing some of its highlights. Your goal is to get them acquainted with the guide and to prompt them for any concerns or questions. Here are some brainstorming possibilities in case you want to present something:.
Place or situation: You can find topics for oral reports or make more detailed plans for them by thinking about the place or the situation in which your oral report might naturally be 'given: at a neighborhood association? at the parent teachers' association meeting? at a religious meeting? at the gardening club? at a city council meeting? at a meeting of the board of directors or high-level executives of a company? Thinking about an oral report this way makes you focus on the audience, their reasons for listening to you, and their interests and background..
• Instructional purpose: An oral report can be primarily instructional. Your task might be to train new employees to use certain equipment or to perform certain routine tasks.
•
• Plan to explain to the class what the situation of your oral report is. who you are, and who they should imagine they are Make sure that there is a clean break between this brief explanation and the beginning of your actual oral report.
• Make sure your oral report lasts no longer than few minutes.
• Pay special attention to the introduction to your talk Indicate the purpose of your oral report, give an overview of its contents, and find some way to interest the audience
• Use at least one visual- preferably a transparency for the overhead projector. Flip charts and objects for display are okay Bui please avoid scribbling stuff on the chalkboard or relying strictly on handouts
• Make sure you discuss key elements of your visuals Don't just throw them up there and ignore them. Point out things about them; explain them to the audience
• Make sure that your speaking style and gestures are okay Ensure that you are loud enough so that everybody can hear, that you don't speak too rapidly (nerves often cause that).
• Plan to explain any technical aspect of your topic very clearly and understandably Don't race through complex, technical stuff--slow down and explain it carefully so that we understand it.
• Never present large a large body of information orally without summarizing its main points (on a transparency, for example)
• Use "verbal heading"- by now, you've gotten used to using headings in your written work. There is a analogy in oral reports with these, you give your audience a very clear signal you are moving from one topic or part of your talk to the next.
•.
•.
• As mentioned above, be sure your oral report is carefully timed to few minutes. Some ideas on how to do this are presented m the next section.
4.3.
Preparing for the Oral Report
Pick the method of preparing for the talk that best suits your comfort level with public speaking and with your topic. However, do some sort of preparation or rehearsal—some people assume that they can just jump up there and ad Mb for few minutes and be relaxed, informal. It doesn't often work that way- drawing a mental blank is the more common experience. Here are the obvious possibilities for preparation and delivery:
• Write a script, practice it, keep it around for quick-reference during your talk.
• Set up an outline of your talk, practice with it, bring it for reference.
• Set up cue cards, practice with them, use them during your talk.
• Write m script and read from it Of course, the spontaneous or impromptu methods are also out there for the brave and the adventurous. However, please bear in mind that many people will be listening to you—you owe them a good presentation, one that is clear, understandable, well-planned, organized, and informative..
When you give an oral report, focus on common problem areas such as these:
• Timing-Make sure you keep within the time limit. Anything under the limit is also a problem. Do some rehearsal, write a script, or find some other way to get the timing just right. It should take about two minutes to go through a single transparency in the talk.
• Posing, speed— sometimes, speakers who are a bit nervous talk too fast. That makes it hard for the audience to follow. In general, it helps listeners to understand you better if you speak a bit more slowly and deliberately than you do in normal conversation. Slow down, take it easy, be clear.
• Make sure your watch is visible and check it occasionally to see how the time is running. If you see you are running short or long, try to adjust the speed of your presentation to compensate.
• Volume-Obviously, you must be sure to speak loud enough so that all of your audience can hear you. You might find some way to practice speaking a little louder in the days before the oral presentation.
• Gestures and posture-Watch out for nervous hands flying all over the place. This too can be distracting and a bit comical. Plan to keep your hands clasped together or holding onto the podium and only occasionally making some gesture, and make sure that your gestures and
posture are okay. For example, don't slouch on the podium or against the wall, and avoid fidgeting with your hands.
• A verbal crutches- As for speaking style, consider slowing your tempo a bit-a common
tendency is to get nervous and talk too fast. Also, be aware of how much you say things like "uh," "you know," and "okay."eehhh" and other lands of nervous verbal habits. Instead of saying "uh" or "you know" every three seconds, just don't say anything at all. In the days before your oral presentation, exercise speaking without these verbal crutches. The silence that replaces them is not a bad thing- -it gives listeners time to process what you are saying
• Never read directly from prepared text, there is nothing more deadly to an audience
• Make frequent eye contact with your audience throughout the talk Do not stare at your notes or at the screen. Do not direct your talk to one or two individuals, leaving the rest of the audience isolated
• Sound enthusiastic about your subject, or at least interested in it If you seem bored by your
material, you can be guaranteed your audience will follow the lead!
Prepare at least one visual for this report. Here are some ideas for the medium" to use for your visuals.
•.
• Pasteboard size charts-Another possibility is to get some posterboard and draw and letter what you want your audience to see. If you have a choice, consider transparencies~-it's hard to make charts look neat and professional.
• Handouts- You can ran.
• Objects-If you need to demonstrate certain procedures, you may need to bring in actual physical objects. Rehearse what you are going to do with these objects; sometimes they can take up a lot more time than you expect. Please avoid just scribbling your visual on the chalkboard. Whatever you can scribble on the chalkboard can be neatly prepared and made into a transparency or posterboard-size chart, for example. Take some time to make your visuals look sharp and professional-use a straightedge, good dark markers, neat lettering or typing. Do your best to ensure that they are legible to the entire audience.
As for the content of your visuals consider these ideas:
• Outline of your talk, report, or both If you are at a loss for visuals to use in your oral presentation, or if your presentation is complex, have an outline of it that you can show at various points during your talk.
• Drawing or diagram of key objects-If you describe or refer to any objects during your talk, try to get visuals of them so that you can point to different components or features.
• Tables charts, graphs—If you discuss statistical data, present it in some form or table, chart, or graph. Many members of your audience may have trouble "hearing" such data as opposed to seeing it.
• Key terms and definitions A good idea for visuals (especially when you can't think of any others) is to set up a two-column list of key terms you use during your oral presentation with their definitions in the second column.
• Key concepts or points similarly, you can list your key points and show them in visuals. (Outlines, key terms, and main points are all good, legitimate ways of incorporating visuals into oral presentations when you can't think of any others.) During your actual oral report, make sure to discuss your visuals, refer to them, guide your listeners through the key points in your visuals. It's a big problem just to throw a visual up on the screen and never even refer to it.
4.5.1. Tips for the preparation off the visuals
• Lay-out, try to always present your transparencies in the Landscape position rather that the Portrait position
• Do not present more than about eight lines on a single transparency. Transparencies crowded with information are useless.
• Use large-type fonts on transparencies. Ordinary size type does not look good.
• If you hand-write the transparency, use large Mock, lettering with horizontal guidelines to keep your lines straight
• If you show a process flowchart, make sure the units and streams are labeled. A bunch of unlabeled boxes and lines with arrows is worthless to the audience.
• If you show data plots, be sure the axes are clearly labeled.
• Do not over fill your transparency with mixed, unmatched colors. Some of the best color combinations are: white on blue, yellow on blue, Black on white , black on yellow, red on yellow.
• Do not crowd your visuals with too many mixed font types/sizes
5. MAKING YOUR WRITING READABLE
Most readers of scientific or technical writing do not have as much time for reading as Hey would like to have and therefore, must read selectively. This is especially true for managers, supervisors, executives, senior scientists and other busy decision makers, who often skim-read for main points and ideas. However, it is also true for professionals who often need TC read more closely and slowly, for thorough understanding, and it is true for technicians. workers and consumers who may need to read and follow operating instructions. These different types of readers are selective in different ways: the skim-reading decision maker may be looking for bottom-line cost figures and performance data; the professional may be looking for the main thread of an argument: Ac technician. worker or consumer may need to use operating instructions only as a checklist. For such readers, writing is readable to the extent that it provides the information they need, located where they can quickly find it, in a form in which they can easily use it. This lakes considerable effort on the w riter's part. If you can make your writing readable, you will greatly increase its chances of being read an used: i.e. you will increase its effectiveness. How can you make your writing readable? Unfortunately, there is no simple formula to follow. There are steps that you can take, however, that should be of some help; these are discussed in what follows. First we make suggestions for selecting appropriate information and for making this information accessible to the reader. Then we suggest a number of things you can do to make it easier for the reader to absorb details.
Make it clear whet the main topic of the report of the section is. Then state your purpose explicitly, so that your readers can anticipate how you will be dealing with the topic. Readers of scientific and technical writing are typically purpose-directed and pressed for time. So. rather than reading word for word and cover to cover, they often prefer to merely "consult" a document, looking only for the information they need. When you define your topic and state your purpose, you make it easier for the reader to determine right away how to process the document; whether to read it closely, skim-read it. pass it on to someone else, or disregard it. A clear statement of topic and purpose allows the reader to form certain expectations about the rest of the text, specifically, how the topic is likely to be developed. It is a well-known fact that we process information most quickly and efficiently when it accords with our preconceptions, this is why it is important to create the right preconceptions in the reader's mind in the first place Scientific and technical writing genres customarily have various features signed to announce
the topic and set up initial expectations, titles, abstracts, summaries, overviews etc
full advantage by loading them with keywords and main ideas instead of vague phrases if you are writing a report dealing with some problematic issue as is the case with most reports be sure to include a well written problem statement at the beginning Engineering and other applied sciences
Use these to
are fundamentally problem-oriented, and so as discussed in chapter 6. a good problem statement usually has important orientation value.
Build sections and paragraphs around keywords related to the main topic If possible, make these keywords visually prominent by using them in headings, subheadings, topic statements and sentence subjects Once you have established a conceptual framework at the beginning of your text, you can turn your attention by filling it in with appropriate details To make sure that your discussion is a coherent one. you should strive to link these details as directly as possible to the
main topic the best way to do this is to establish a hierarch) of intermediate topics and subtopics for the various units and subunits of vour text with each being directly related to the immediately higher topic or subtopic These intermediate topics and subtopics should consist of appropriate keywords as discussed above
A well-structured discussion is highly functional in at least two respects First it builds on the
basic framework established at the beginning of the text, allowing for easier interpretation and promoting greater coherence at the same time As new information is progressively added to the initial framework, it is interpreted in terms of this framework and integrated into it As such, this new information is transformed into given information and can then be used to help interpret succeeding pieces of new information. Second, a hierarchically structured text facilitates selective reading. Since the sections and subsections are arranged in a general-to-specific order, the reader can quite easily zero in on desired levels of details - specially, if the respective topics of these sections and subsections are made visually prominent through the use of headings and subheadings.
5.2.3. Explain Important Concepts when Writing for Nonspecialist Readers
When writing for nonspecialists be sure to clarify the important technical concepts in your text by using examples, analogies, visual aids, or other forms of verbal or visual illustration. Research by information theorists in the past few decades suggests that communication proceeds best when there is a fairly even balance between given information and new information. This is what you should strive for in your own writing this means that you must have some idea of who your readers are and what sort of background knowledge they have. For example if you are describing the function of a refinery distillation column the terms "bubble cap trays" would be perfectly comprehensible to a chemical engineer, to anyone else it would not. Therefore, if for some reason you had to communicate with such technical information to a nonspecialist reader, you would have to insert some background information more familiar to the reader to provide a proper framework for interpreting the new information
In technical writing, it frequently happens that the writer feels it necessary to introduce key
concepts that may be unfamiliar to the reader In general it is important to define such concepts, not necessarily with a formal definition but rather with some kind of illustration How is the concept used? What is t similar to? What does it look like'' If technical terminology is used, what is a non- technical way of saying more or less the same thing Not only will answering such questions with the reader's needs in mind help the reader understand that particular concept but more important specially if the concept is a typical one it will enrich and sharpen the reader's interpretation of the
text as a whole It will provide some of the given information that a specialist reader would automatically and implicitly associate with that particular concept but which a nonspecialist reader would not. There are several ways to illustrate and explain unfamiliar concepts for the nonspecialist reader. Visual aids, of course, should be used whenever the concept is suited to visual presentation. Often, however, a concept is too abstract to be presented visually In such cases; specific examples of the concept are usually the most powerful means you can use to help the nonspecialist reader. Analogies help explaining an unfamiliar concept b} showing that it is similar in certain ways to a familiar concept: they are useful in situations where the concept is so unfamiliar that you simply cannot think of any ordinary examples of it. Paraphrases, on the other hand, are useful in precisely the opposite situation: where the concept is familiar to the reader but only if restated in more recognizable terms. Paraphrases have a distinct advantage over examples and analogies in that they usually take up less space: sometimes even a one-word paraphrase will accomplish the purpose. Definitions, of course are a familiar way of explicating new concepts. Here is an example of an extended definition, explaining what the technical term "Remark Coefficient" means:
The Remark Coefficient In the production of powdered detergents, spray drying is the icchn que used to evaporate the solvent from the liquid reaction mixture and physically form the finished powder product. In spray drying, the liquid is sprayed into the top of a tall tower and allowed to fall freely in the bottom of the tower, where it is removed as a dry powder. The solvent evaporates during the course of the fall. Particles dried in this fashion have an unusual shape, like that of a saddle (or a potato chip), and
Analogy
consequently, fail through the air in an unusual manner. Rather than
Paraphrase
falling in a vertical path, the particle fall in a helical (spiral) path. The shape of the helical path is described by the Remark coefficient, which is the ratio of the diameter of the helix to the height required for one
Definition
passage of the particle around the perimeter of the helix. The coefficient, which is a function of drying conditions, is sought to be
maximized, so that the length of flight of the panicle is made much greater than the actual height of the spray-drying tower.
5.2.4. Use Standard Terminology when Writing for Specialist Readers
When writing for specialists, on the other hand, do not overexplain. That is. do not exemplify, define, illustrate, paraphrase, or otherwise explain concepts the reader is likely to already be familiar with. Instead, simply refer to such concepts with the standard terminology of the field. Technical terms permit efficient and precise communication between specialists who know the concepts that such terms refer to. They should be used for that purpose, and used freely, even if they appear to be incomprehensive jargon to an outsider. When used among specialists, standard
technical terms are not only comprehensible, but arc often "information-rich" in the sense that they may trigger a host of associated concepts in the reader's memory. These associated concepts then become part of the "given information" in the message. Adding more given information in the form
of examples, analogies, etc
for that type of reader. What do you do, though, if you are writing to a mixed audience of specialists and nonspecialists? This is always a very challenging sometimes impossible'- situation, but there are a few things you can do. First, you might divide and conquer" produce two separate pieces of writing, or a single piece with two parts to it. so that each group of readers can be addressed with appropriate terminology. Alternatively, you might stick to a single text but briefly define the technical terms as you go along. The least objectionable way of doing this, usually, is to insert a short familiar paraphrase immediately after each technical term: in the Remark coefficient example, for instance, notice how the writer has inserted the paraphrase (spiral) after the less familiar term
helical.
would only produce a disproportionate and inefficient give/new ratio
5.2.5. Structure your Text to Emphasize Important Information
Structure the different parts of the text so as to give greatest prominence to the information you expect the reader to pay most attention to For mam ideas, use a hierarchical structure, for details, use a listing structure A hierarchical text structure allows the reader to move quickly through the text seeing what the mam ideas are. how they arc linked together and what kind of detailed support they have many readers, specifically busy decision makers habitually read this way Thus, if you are writing for that type of reader you should try to organize and present vour information in a highly hierarchical pattern, with main levels of subordination On the other hand, if you are writing for a reader who will be focusing more on details try to use a more coordinate structure, i.e. with the details arranged in list A list-like structure whether it is formatted as a list or not, draws the reader's attention to all of the items making up the list. Instead of one statement being subordinated to another, as in a hierarchical structure the statements in a list are all on the same level and thus share equal prominence Perhaps the most obvious examples of this phenomenon are lists of instructions, which are expected to be read and followed step by step The same phenomenon can also be seen in carefully reasoned arguments and explanations, which are often cast in the form of a list-like sequence of cause-and-effect statements Chronological sequences, too. As found m descriptions of test procedures or in progress reports, are often presented as lists
5.2.6. Construct Well Designed Paragraphs
Make sure that each paragraph has a good topic statement and a clear pattern of organization the paragraph is a basic and highly functional unit of discourse in scientific and technical writing. By definition a paragraph is a group of sentences focusing on one main idea If vou use a topic statement to capture the main idea and a clear pattern of organization to develop it. you make it easy for die reader to either read the paragraph in detail or read it selectively. The topic statement, of course, should be presented within the first two sentences of the paragraph, and it should contain one or more keywords for readers to focus their attention on. The pattern of organization you select
for the remaining sentences in die paragraph should (1) be consistent with expectations likely to be raised by the topic statement. (2) be appropriate to the subject matter and the most important (3) be appropriate to die anticipated use of the paragraph by the reader. If you adhere to these principles with all your paragraphs, you will greatly enhance the overall readability of your writing.
5.2.7. Field-Test Your Writing
Field-test your manuscript with its intended users or with representative substitutes Up to tins point you have had to make guesses about whether or not you arc providing your readers with a proper mix of given information and new information for their purposes Your decisions about what kind of terminology to use. What kind of structure to use, when to use verbal or visual illustrations, and so on. have been made on the basis of guesswork about the background knowledge of your readers and the reasons they will have for reading your writing This is why field-testing is an important part of making any manuscript maximally useful Field-testing allows you to see whether the assumptions you have made about your readers are accurate or not. This is so important that you should not put it off until the final stage; as soon as you have finished writing a good first or second draft, try it out with few intended users. Have them read it as if it were the final draft submitted for actual use Tell them to mark it up raise questions about it, criticize it. Talk to them about it. ask them for their comments Docs it leave anything out? Does it mislead them'' Does it raise unanswered questions'' if they are using it for Reference purposes, can they easily find what they need? If they are skimming it for main points, can they easily locate and understand them? If you are writing a research proposal or article, for example, you might want to show your draft to other researchers in that area, so as to guard against the possibility that you have overlooked something important, misrepresented someone else's research, or to make sure that nothing you have written is substantively wrong. If you arc writing a progress report for a group project, this would be a good time to show it to other members of the team.
5.3. Information ordering
One of the most important parts of speech in scientific and technical writing is the noun phrase (NP). It can be defined as any noun or noun-plus-modifier combination (or any pronoun) that can function as the subject or object of a sentence. Some examples are tables, water, y/e. a potential buyer, the growing demand for asphalt, and strict limitations on the size of plates that can be handled. Note that each of these NPs can serve as the subject of a sentence:
Tables usually have four legs. Water can be dangerous. We have an emergency. A potential buyer has arrived. The growing demand for asphalt is obvious. Strict limitations on the size of plates that can be handled have been established. By contrast, a singular countable noun, such as table, is not a NP, because it cannot function by itself as die subject or object of a sentence. We cannot say:
Table usually has four legs.
Instead, we would have to say A table usually has four legs
or
The table usually has four legs. Samir's table has four legs.
5.3.1. Optimal Ordering of Noun Phrases
In English, NPs are expected to occur in certain orderings according to grammatical and functional criteria. These will be discussed in order of importance, beginning with the most important
A) Put Given Information Before New Information
As will all languages. English sentences typically contain a mixture of given information and new information. That is, some NPs in a sentence refer to concepts or objects that have already been discussed or that are presumed to be understood from the context, this is given information. Other NPs refer to concepts or objects that have not set been discussed and are not presumed to be understood from the context; this is new information Let us consider a specific example of the optimal ordering of NPs. The 5-year plan does not indicate a clearly defined commitment to long-range environmental research For instance, where the plan docs address long-range research, it discusses the development of techniques rather than the identification of important Song- range issues. The key NPs in both sentences are in italics By the time the first sentence has been read and understood, the phrases the 5-vear plans and long-range environmental research have been mentioned and are part of the given information possessed by the reader Notice that the words "the given information" come at the beginning of the second sentence and that the new nour phrases "the new information", come at the end of the second sentence Tins ordering of given before new is desirable because the given information of the second sentence serves as a kind of glue between the information presented in the first sentence and the new information presented in the second sentence. Such an ordering allows a reader to more easily fit the new information into a meaningful context and to see the connection between the two sentences.
B) Put Topical Information in Subject Position
Often, more than one NP in a sentence carries given information. In that case, which of these NPs should be promoted to subject position? Ideally. the NP that carries information most closely related to the paragraph topic - call it "Topical Information" - should go there Consider the following example:
Not all investors will benefit from Saving Certificates of the Investment authority Investors exceeding a deposit of LE 26886 (LE 53768 joint return) would have an after-tax yield far lower than with alternative investments, such as money market funds, or Treasury bills. Alternative investments would also yield better after-tax yieldsand no penalty if the certificate was redeemed within the one-) car maturity period.
The last sentence in this paragraph has three definite NPs which contain given information Alternate investments, after-tax yields and the certificate. Of these, the last seems to come closest to
being thought of as topical information, the word Certificate, after all, does appear in the topic statement. But what is the real topic of this paragraph? Isn't it different kinds of investors! Notice for example, that the word investors appears not only in the topic statement but in the subject position of the next sentence. Notice also that investors are referred to b\ implication as the delegated agent of the passive mam verb: was redeemed (by investors) ideally, then, we should try to insert the word investors in the subject position of the third sentence, too. if it is all possible Indeed it is.
Not all investors will benefit from Saving Certificates of the Investment Authority. Investors exceeding a deposit of LE 26886 (LE 53768 joint return) would have an after-tax yield far lower than with alternative investments, such as money market funds, or Treasury bills. Investors redeeming their certificates within the one-year maturity period would also have a lower after- tax yield and would pay a penalty besides.
Not only does this rewritten version keep the focus on the topic of the paragraph and thus contribute to paragraph unity- it also establishes parallelism between the second and thud sentences, thus making it much clearer to the reader that we are talking about two different classes of investors: those who exceed a deposit of LE 26886 (LE 33768 joint return) and thos. who redeem their certificate early
C) Put "Light" NPs Before "Heavy" NPs As seen earlier. NPs vary considerably in length, complexity, preciseness. etc
If we use the
word heavy to describe NPs which are long and complex and the word light for NPs which are short and simple, the preferred stylistic ordering is light NPs before heavy NPs For instance, consider the
following passage:
We have received and acted upon requests for equipment from several branch offices We have sent the research, development and testing office in Alexandria a gas analyzer,
The second sentence of this passage is awkward and difficult to read. It has a very heavy indirect object - the research, development and testing office in Alexandria - and a very light direct object -a gas analyzer. Thus the ordering of NPs in this sentence, as it stands, is heavy Light. A more readable version of the second sentence and thus a better version, would
order the NPs light
We have sent a gas analyzer (Direct Object) to the research, development and testing office in Alexandria (Object of preposition) Notice that in moving the heavy NP to the end, we have to insert the preposition The following represents a flowchart for editing sentences in paragraphs:
heavy as follows
44
5.4. Editing For Emphasis
Although some readers may prefer to skim-read, others have to read more closely and thoroughly, concentrating on details. For these readers, there is a danger of getting lost in the details, of overlooking main points and "not seeing the forest for the trees", so to speak. Consequently the details themselves begin to loose significance; the reader cannot sec exactly how they fit into the larger picture and thus cannot evaluate their importance. The reading process as a whole bogs down at this point, and the reader is forced to stop and start over. When the readers get bogged down in detail like this, it is often the water's fault. Many writers make little effort to organize details in a coherent, unified way, preferring instead to have the reader do all the work. But this invites the kind of failure just described Readers are often pressed for time or, tired or have other things on their mind. Many readers lack the kind of background knowledge the writer has. Still others have poor reading techniques and arc unable to decipher poor writing, no matter how hard they work at it. In general readers are at the mercy of the writer, they depend on the writer to present details in such a way that the role of these details in support of main points is readily apparent. If the writer fails to do this, there is little the reader can do except try to figure things out. It thus fells on the writer to mold the details of a text so that they reinforce the main points in unified fashion. This is somewhat similar, actually, to the demands made on a speaker engaged in a serious conversation. Face-to-face conversation is an intensive form of communication in which the speaker is acutely aware of the listener and vice versa. Because of this close speaker-listener relationship, conversations are governed by certain unwritten rules; say what you mean, don't beat around the bush, get to the point, be honest, etc. If the speaker violates any of these rules, the conversation will begin to break down unless the listener rescues it with a corrective comment such as "I don't see what you are driving at" or "What's your point?" The possibility of such immediate feedback from the listener forces the speaker to make every detail relevant to the conversation, most listeners are simply intolerant of irrelevant details and will either intervene or break the conversation off if the speaker strays too far from the topic of discussion. Good conversationalists, of course, are aware of such constraints and employ various techniques to make it clear to the listener that they are observing the rules. For one thing, they use emphatic intonation, physical gestures, inverted sentence structure, intensifiers and other devices to signal important words- key words, topical words, words earning new information. Conversely they use none of these devices for the less important words- those that cany given information or redundant information. As for empty meaningless words that serve no communicative purpose at all, they are simply omitted. In general, both by giving prominence to important words and by subordinating or omitting unimportant ones, good conversationalists emphasize those aspects of a detailed discussion that link the discussion to the main point or purpose of the conversation. As a result, the listener not only absorbs those details but also sees just how they support the main point. Writers should do the same kinds of things as good conversationalists. They may not be in close touch with their audience as speakers are and so they may not have such immediate demands placed on them, and they cannot, of course, use intonation and gestures in their writing. But writers
do have an audience, and this audience needs to know, just as listeners do. how the details of a discussion are related to the main points. Furthermore, writers have as many devices as speakers do for helping the reader sec how details support main points. In short the use of emphasis is as appropriate and indeed necessary to good writing as it is to good conversation. In what follows we will describe the most common and useful devices used by good writers to create emphasis within individual sentences. These fall into three categories: devices used to highlight important words and phrases, devices used to subordinate relatively unimportant words and phrases, and devices used to eliminate unnecessary words and phrases
Combine closely related sentences unless there is a compelling reason not to (such as maintaining independent steps in a list of instructions or avoiding extreme sentence length): put main ideas in main clauses. Many inexperienced writers have a tendency to use nothing but short, simple sentences, producing a very choppy style of writing which irritates the reader with its sing- song rhythm and, worse fails to put emphasis on important ideas. This tendency derives, probably, from two principal sources: (1) An overemphasis in many quarters on the need to avoid dangling modifiers, comma splices, and other problems associated with complex sentence structures, and (2) Erroneous belief, promoted by readability formulas, short sentences make reading easier. Dangling modifiers, comma splices, and other errors of sentence structure and punctuation should, of course be avoided - but not at the expense of emphasis, unity, and coherence. And although a short sentence by itself may be easier to read than a long sentence, the repeated use of short sentences may have just the opposite effect The best approach to take regarding sentence length is to let the form reflect the content. If an idea is complex enough to require qualification, the best way to qualify it may be with a relative clause, an adverbial phrase, or some other complex modifier. On the other hand, if an idea is simple and straightforward, a simple sentence may be the best way to represent it. Often, these choices can be made properly only within the context of an entire paragraph. For example consider the following paragraph from a student report:
ORIGINAL VERSION At the present time electric car utilization is not possible. The problems holding it back are satisfactory performance and costs. Performance problems of lack of speed, short mileage range, and lack of acceleration are present. Cost problems are the price of battery replacement and the base price of the electric car. It is possible though, with research and development, that these problems can be solved in the future. Each of the first two sentences, taken in isolation, is grammatically correct and easy to read. When you look at them together, however, you notice that there excessive overlap between them:
sentence 2, in other words, contains too much given information (The problems holding it back). This unnecessary redundancy can be eliminated by combining these sentences; At the present time electric car utilization is not possible because of performance and cost problems.
Not only does this move reduce the wordiness of the first two sentences, it also creates a better topic statement: it is more unified and emphatic, and it introduces the key terms performance problems and cost problems, (notice how these terms are the subjects of the next two sentences). If we also change sentence 3 to satisfy given-new and light-heavy criteria, we can reduce the wordiness of the paragraph and increase its readability still further. The overall result is this:
FIRST REWRITE At the present time electric car utilization is not possible because of performance and cost problems. The performance problems are lack of speed, short mileage range, and lack of acceleration. The cost problems are the price of batten replacement and the base price of the car. It is possible, though with research and development, that these problems can be solved in the future This is a significant improvement, but we have other options that might improve it even more. For example, now that we have converted the original sentence 2 into a prepositional phrase, we can shift it into presubject position in place of the time adverbial originally there:
Because of performance and cost problems, electric CAN utilization is not possible at the present lime. This puts more focus on the key terms performance problems and cost problems and less focus on the less time important time adverbial. Another change we could make, though not as compelling a one as thos just described, would be to combine the two sentences in the middle with a semicolon These two sentences are closely related in function; linking them formally would reflect this relatedness FINAL VERSION Because of performance and cost problems, electric car utilization is not possible at the present time. The performance problems are lack of speed, short mileage range, and lack of acceleration: the cost problems are the price of battery replacement and the base price of the car. It is possible. though, with research and development that these problems can be solved in the future. ln general, combining sentences is often a good way to create emphasis in your writing. By making it easy for your readers to see the relatedness of ideas, you make it easier for them to absorb these ideas. You can also show explicitly that one idea is logically subordinate to another by putting the more important idea in the main clause of the sentence and the less important idea in a subordinate clause. For example, suppose you wanted to combine the two sentences in italics in the following paragraph:
NEGATIVE EXAMPLE Electric cars must be able to meet the same safety standards that gasoline cars must meet as set up by the Ministry of Environmental Affairs. These standards are derived from an established crash test. In the crash test, the car is propelled against a solid wall at 30 mph. The data obtained from the crash test are analyzed for fuel spillage, fuel system integrity, windshield retention, and zone intrusion. In combining the two italicized sentences, we could subordinate the more detailed sentence to the more general first one:
47
These standards are derived from an established car test in which the car is propelled against a solid wall at 30 mph. Alternatively, we could maintain prominence on the details and subordinate instead the idea that the crash test is an established one:
These standards are derived from propelling the car against a solid wall at 30 mph. which is an established car test. Clearly the first option is the more appropriate one in this context: the fact that the crash test is an established one underscores the main idea of the paragraph, as stated in the topic sentence. REVISED VERSION Electric cars must be able to meet the same safety standards that gasoline cars must meet as set up by the Department of Transportation. These standards are derived from an established crash test in which the car is propelled against a solid wall at 30 mph. The data obtained from the crash test are analyzed for fuel spillage, fuel system integrity, windshield retention, and zone intrusion. There are times when it is best not to combine sentences. For example, if you are giving a list of instructions and want to emphasize independent steps in accordance with how the user might carry out the instructions, you might want to state these steps in independent sentences. To see how this might apply in a specific case, consider the following set of instructions for replacing a brake line an automobile:
1. Disconnect the union nuts at both ends
2. Unclip the line from the chassis
3. Pull the line out
4. Install the new line in the chassis clips
5. Moisten the ends in brake fluid
6. Tighten the union nuts.
You could leave these set of instructions as is in the form of a formatted list Or you could combine some of the steps ( 2 with 3. 5 with 6) to create more realistic four-step sequence of disconnect-remove-install-reconnect. as is done in tins excerpt from a repair manual To replace a brake line, disconnect the union nuts at both ends. Unclip the line from the chassis and pull it out. install the new line in the chassis clips Moisten the ends in brake fluid, then tighten the union nuts. To combine sentences beyond this however, would be a mistake because it would destroy the emphasis we want to maintain on certain individual steps. For example, if we were to combine sentences 2 and 3 in the repair manual version, this would be the result:
NEGATIVE EXAMPLE To replace a brake line, disconnect the union nuts at both ends Unclip the line from the chassis, pull it out. and install the new line in the chassis clips. Moisten the ends in brake fluid, then tighten the union nuts. By lumping together the remove and install steps like this (Unclip the line from the chassis, pull it out. and install the new line in the chassis clips), we would be creating an imbalance in the
48
sequence: no mechanic would consider this to be a single step, as the form of the description implies. It is also best not to combine sentences when the result would be too long a sentence Suppose, for example, you have been writuig a proposal for a computer-aided design system and have included this paragraph in your summary. The proposed system is required to alleviate the increase in demand. The system will do that by removing the burden of data entry from the present system. CADDS. This is accomplished by utilizing the microcomputer as a stand-alone data entry system. The microcomputer has all of the graphics and software capabilities required to implement this concept. As it stands, this paragraph is a nicely written one. with an adequate topic statement, a clear general-to-specific pattern of development, and properly constructed sentences satisfying the given- new. light-heavy and topical criteria. The result is a highly readable paragraph with appropriate emphasis on the main ideas and key words. If you were to combine the sentences into one. on the other hand, much of this emphasis would be destroyed:
NEGATIVE EXAMPLE The proposed system is required to alleviate the increase in demand by utilizing the microcomputer as a stand-alone entry system with all the necessary graphics and soft ware capabilities to remove the burden of data entry from the present system. CADDS. This is a more economical version, no doubt, insofar as it contains 16 fewer words than tic original. But is it more readable? Absolutely not! In fact it is a perfect example of the kind of incomprehensive gobbiedygook that so many readers of technical writing complain about. the lesson to be learned from this example . then, is this: do not combine sentences just for the sake of doing so; do it only when it serves a purpose.
While the more important words and phrases of a text should be highlighted, the less important ones should be subordinated - or perhaps even eliminated altogether Unnecessary words and phrases will only detract from the emphasis you have carefully tried to build up through the use of combined sentences, signal words and identifiers. A bloated, wordy style can submerge your readers in a sea of empty terms, making it next to impossible for them to follow. your main points and be persuaded to your point of view. In' fact, foggy language is more likely than not to rum readers against you. Inexperienced writers sometimes think that they must use a wordy, bloated style of writing in order to create a certain professional image. They seem to believe that by using pretentious language, they will enhance their image as experts in their field. Actually, what evidence there is suggests just the opposite: pretentious, wordy language is less likely to promote one's credibility as an expert than is concise, direct, simple language. For example, consider the following two abstracts presented in a conference; one version (Version 1) being noticeably wordier than the other (Version
2).
Version 1 IN the experiment of the series using mice, it was discovered that total removal of the adrenal glands effects reduction or aggressivenss and that aggressheness in adrenalectomised mice is restorable to the level of intact mice by treatment with corticosterone. These results point to the indispensability of the adrenals for the full t expression of aggression. Nevertheless, since adrenalectomy is followed by an increase in the release of adrenocorticotrophic hormone (ACTH). and since ACTH has been . reported (P. Brain. 1972) to decrease the aggressiveness of intact mice, it is possible that the effects of adrenalectomy on aggressiveness of intact mice, it is possible that the effects of adrenalectomy on aggressiveness are a function of the concurrent increased levels of ACTH. However, high levels of ACTH. in addition to causing increases in glucocorticoids (which possibly accounts for the depression of aggression in intact mice by ACTH). also result in decreased androgen levels. In view of the fact that animals with low androgen levels are characterized by decreased aggressiveness the possibility exists that adrenalectomy, rather than affecting aggression directly, has the effect of reducing aggressiveness by producing an ACTH- mediated condition of t decreased androgen levels. Version 2 The experiment in our series with mice showed that the total removal of the adrenal glands reduces aggressiveness. Moreover, when treated with corticosterone. mice that had their adrenals taken out became as aggressive as intact animals again. These findings suggest that the adrenals are necessary for animals to show full aggressiveness. But removal of the adrenals raises the levels of adrenocorticotrophic hormone (ACTH). and P. Brain found that ACTH lowers the aggressiveness of intact mice. Thus the reduction of aggressiveness after this operation might be due to the higher levels of ACTH which accompany it. However. high levels of ACTH have tow effects. First the level of glucocorticoids rise, which might account for P. Brain's results. Second, the levels of androgen fall. Since animals with low levels of androgenare less aggressive it is possible that removal of the adrenals redues aggressiveness only drrectly by ratsing the levels of ACTH it cuases androgen levels to drop . Obviously, Version 2 is easier to read, and its style is more appropriate therefore the more concise abstract of Version 2 (155 words versus 179 for Version 1) is definmtely preferred This style is not so "noun-heavy", it has higher percentage of verbs and adectives than Version 1. For example, instead of saying effects reduction of it simply says reduces lnstead of point to the indispensability of the adrenals . it has suggests that the adrenals are necessary instead of producing a condition of decrease androgen levels . it has couses androgen levels to drop Second the Version 2 style has simpler sentence structure with fewer and shorter adverbial phrases before the sentence subject This means that the reader reaches the main verb of the sentence sooner, making it easier to process the sentence as a whole Thirdly, the Version 2 style avoids unnecessary
50
technical terms in favor of more comrnooplace equivalents, even when it requires more words to make the substitution In place of adrenalectmised mice, for example. Version 2 has mice that had their adrenals taken out instead of are aunction of. there is are due to. Finally, the style of Version 2 uses more pronouns and demonstrative adjectives: their in sentence 2. these in sentence 3. this in sentences5 and it in the last part of sentence 9. By contrast, the Version 1 style has only one demonstrative These, leading off sentence 2 Pronouns and demonstrative adjectives, in general, help make a text more cohesive - provided, of course, that it is clear to the reader what they refer to.
This last point deserves some discussion before we end. Scientists, engineers, and other technical people sometimes use full nouns phrases repeatedly to avoid being "imprecise". They have heard of cases, perhaps, where a single misinterpretation of a pronoun by a single reader has led to some accident or mishap, which in turn has led to the writer's company being sued for damages. Therefore, they tend to avoid pronouns and demonstratives altogether, preferring instead to repeat full noun phrases over and over. This strategy is certainly a safe one. and indeed it should be used in appropriate circumstances (such as when writing operating instructions for a potentially hazardous macliine or when writing a legally binding contract). There are many circumstances, however, where such caution is uncalled for. and where in feet it simply disrupts the coherence of the text. Consider this example NEGATIVE EXAMPLE In order to keep from delaying the construction phase of the Office Building, the Technical Division needs to know the loads that will be placed upon the footings. I have investigated the proposed use of the structure and various footing systems to determine the loads that will be placed upon the footings. This report gives the loads of the footings and explains how these loads were derived There is no reason to describe the loads every time they are referred to Pronouns and demonstratives can be used instead without any real risk of misinterpretation, and the result will be more coherent and more concise text. REVISED VERSION In order to keep from delaying the construction phase of the Office Building, the Technical Division needs to know the loads that will be placed upon the footings. 1 have investigated the proposed use of the structure and various footing systems to determine these loads. This report gives the loads and explains how they were derived In general when you have to refer repeatedly to some object or concept that has first been introduced with a long noun phrase, you can usually use a shortened version of tins noun phrase and a demonstrative adjective or definite article without muck if any. risk of ambiguity
51
6. PROJECT PROPOSAL
A project proposal deals with work plans of a certain subject. Project proposals usually serve the following purposes with respect to the different functional types of projects.
A)
Institution building projects:
They help in the institutional building up, its approaches and capabilities, set standards of performance and help continuing staff development.
B)
Direct support projects:
Provide data, information and analysis of a certain idea and in some cases embody the technical details and findings of a certain project.
C)
DIRECT TRAINING PROJECTS.
D)
Upgrading of the efficiency of certain institutions in industry, administration and other activities.
E)
Experimental and pilot projects:
Provide data, information and analysis on different aspects of experimental research or pilot activities and the results thereof, in detailed support of the findings and the recommendations of the project.
F)
SPECIAL SUPPORT PROJECTS. Which provide development support of communication, documentary services, e.g. CAD, computer services
a)
Title page.
b)
An abstract of the documentary output or a list of KEYWORDS reflecting the principal subject fields of the project.
c)
An introduction providing information on:
1)
Project activity or subacthity related to the project proposal.
2)
Project staff responsible for the production.
3)
Specific purposes the project is intended to serve.
4)
Different means and methods which could be utilized to achieve the goals of the project.
5)
Future expected results on implementation of the included study.
d)
A summary of findings and recommendations.
e)
Substantive sections or chapters.
f)
Annexes as appropriate.
They may be:
Technical (production and upgrading).
Administrative.
Investment potential.
Training activities.
These proposals may deal mainly with.
Erection of completely new production line for a certain commodity, e.g. a fertilizer plant.
Upgrading the efficiency of already working industrial plants, e,g, pulp and paper, oil, leather tanning factories. Implementation of new production technologies and application of new machinery (research and pilot plant projects) . Such project proposals should include the following MAIN POINTS:
1- Present situation of the unit or state-of-art including.
Description of the commodity.
Raw materials required or used in daily and annually consumed amounts.
Production line chemicals, machinery, additives
etc.
d) Services, water, electricity, man power, environmental conditions of the unit and
its suitability.
Cost of production, deficits, benefits, wages
Proposed capacity in case of installation of a completely new factory.
g)
Pre-feasibility study of poini f.
2- Critical discussion of the present situation and proposed steps required for upgrading the efficiency (not required in case of installation of new factories).
3-
Recommendations for better production (technical and mechanical) and development of the required steps to achieve the required targets.
4-
A time schedule for implementation of the proposed project.
5- In case of new factory installation, study of foreign markets should be included, export-
import prices, foreign and local currency required
6- Different expenditure items required, total budget of the project
7. CHECKLIST FOR THE TECHNICAL REPORT
Use the following questions to ensure that your technical report is structured properly according to common expectations:.)
Mult mai mult decât documente.
Descoperiți tot ce are Scribd de oferit, inclusiv cărți și cărți audio de la editori majori.Anulați oricând. | https://ro.scribd.com/doc/55482200/Technical-Report-Writing | CC-MAIN-2020-50 | refinedweb | 18,464 | 50.67 |
Toward
pyvenv-3.4 ~)
from flask import Flask app = Flask(__name__) app.debug = True @app.route('/') def hello_world(): return '<h1>Hello World!</h1>' @app.route('/user/<name>') def hello_user(name): return '<h1>Hello {0}</h1>'.format(name) if __name__ == '__main__': app.run()
Now, this is not anywhere close to our previous example (yet), but this short program gives us plenty to go on. The first thing we want to do is to run this and give it a test. It is really easy to do. If you save the code above to a file
helloflask.py all you need to do is run
python3 helloflask.py. This command starts up a web server and will respond to requests on port 5000 by default. In your browser try the following: You should see Hello World! Now try, in this case you should see
Hello Me.
The two pages you just viewed are an example of URL Routing, and is a fundamental aspect of any web development framework. The URL / maps to the function
hello_world and the the URL /user/<your name here> maps to the hello_user function. The key to this is the
@app.route decorator. By adding this decorator before a function you can set up any function to be called in response to a user submitting a URL.
Wait, what’s a decorator?
Thanks for asking! We will cover decorators in the next section, but the truth is you could live a happy productive life using Flask if you only understood that By placing
@app.route('/path/to/something') on a line by itself before a function definition will cause that function to be called in response to a URL matching the pattern in parenthesis!
Also you should notice that the functions do not use print, but rather return an iterable. In this case a string. This is because Flask is one of many frameworks built around Python’s WSGI standard. We’ll cover WSGI in another section, but you should know that any function that returns an iterable can be used as a response to a GET request.
Now lets look at a Flask-ified version of our hello program.
from flask import Flask, request app = Flask(__name__) app.debug = True # need this for autoreload as and stack trace @app.route('/') def hello_world(): return 'Hello World!' @app.route('/user/<name>') def hello_user(name): return '<h1>Hello {0}<h1>'.format(name) @app.route('/hello') def hello_form(): if 'firstname' in request.args: return sendPage(request.args['firstname']) else: return sendForm() def sendForm(): return ''' <html> <body> <form method='get'> <label for="myname">Enter Your Name</label> <input id="myname" type="text" name="firstname" value="Nada" /> <input type="submit"> </form> </body> </html> ''' def sendPage(name): return ''' <html> <body> <h1>Hello {0}</h1> </body> </html> '''.format(name) if __name__ == '__main__': app.run()
Here is where things start to get better. We no longer have to worry about environment variables, instead Flask provides us with a
request object. The request object contains pre-processed attributes that contain all of the information we could possibly want from a form submission. The
args attribute is a dictionary containing keys for all of the names in a submitted form. | https://runestone.academy/runestone/static/webfundamentals/Frameworks/frameworkintro.html | CC-MAIN-2018-17 | refinedweb | 532 | 67.35 |
Frank Sommers: Does a JCP JSR represent an "official" Java standard? What would prevent developers from favoring a non-JCP-developed solution to a problem over a relevant JSR?
Rob Gingell: The JCP reserves the namespaces
java.* and
javax.*. Other than namespace use, there's nothing about the JCP that requires anyone to
use the APIs created through it. If there were, say, a
javax.toaster.* family of classes, there's
nothing that stops anyone from working outside the JCP to create
an
org.othertoasters.* family.
The JCP would tend to resist having a competing
javax.othertoasters.* activity under its
roof, but couldn't do anything about someone setting up a completely different thing in competition.
One reason we don't see instances of many competing APIs is that there's a general appreciation that Java's value lies mostly in the over 3 million developers who see that an investment in a single set of skills gives them a wide market in which to work. Fragmentation would be inconsistent with the value proposition perceived by those developers, and thus counter-productive to the motivations that would lead one to want to make toaster-based APIs in the first place.
Ultimately any community is defined by what makes up its members' common self-interest, and while that self-interest might be codified into agreements and practices and process rules, what really makes it work is the shared set of values behind it. If you're building something you want developers to target, you're not well-served by fragmenting that developer pool. | https://www.artima.com/intv/standards3.html | CC-MAIN-2017-51 | refinedweb | 262 | 50.87 |
We are excited to release new functionality to enable a 1-click import from Google Code onto the Allura platform on SourceForge. You can import tickets, wikis, source, releases, and more with a few simple steps.
Hi Tony,
./smispy_lsmplugin -u
smispy://9.126.140.140:5988/?namespace=root/LsiArray13
--create-volume=from-lsm --size=1G --pool 1
error: 13 msg: Error: volume_create rc= 4096
But then ...
./smispy_lsmplugin -t " | " -u
smispy://9.126.140.140:5988/?namespace=root/LsiArray13 -l VOLUMES
600A0B800017DCEC00000F324A4AB9EF | 2 | | 512 | 419430400 | 1 | 214748364800
600A0B800017DCEC00000F334A4ABA67 | 3 | | 512 | 419430400 | 1 | 214748364800
600A0B800017DCEC00000F344A4ABA75 | 4 | | 512 | 419430400 | 1 | 214748364800
600A0B800017DCEC00000F364F95DD7F | from-lsm | | 512 | 2097152 | 1 |
1073741824
From the storage manager software also I could verify that the 1G LUN
was created successfully on the array.
1) Where can i see any debug logs for smispy, if any ?
2) Would you know of any way to look for the SMI-S provider logs, I
could not find any info about it in the provider documentation.
3) In -l VOLUMES output, the header ( ID Name vpd etc) is missing
when used with -t option.
thanx,
deepak
On 04/24/2012 03:26 AM, Deepak C Shetty wrote:
> ./smispy_lsmplugin -u
> smispy://9.126.140.140:5988/?namespace=root/LsiArray13
> --create-volume=from-lsm --size=1G --pool 1 error: 13 msg: Error:
> volume_create rc= 4096
4096 is a return code that a job has been created (async).
Unfortunately I introduced a bug in the last release when I added
additional constants. Do a pull and you should be good. I will add an
automated test case against the simulator. I can control what it
returns easier (to exercise more paths).
> 1) Where can i see any debug logs for smispy, if any ?
Logging is done to syslog.
> 2) Would you know of any way to look for the SMI-S provider logs, I
> could not find any info about it in the provider documentation.
This is vendor specific. You could use lsof or strace or some other
tool to find files the provider has open.
> 3) In -l VOLUMES output, the header ( ID Name vpd etc) is
> missing when used with -t option.
The -t option is for terse output so the header information is
intentionally absent. This was done to make it easier for people that
want to script the command line interface.
Regards,
Tony | http://sourceforge.net/p/libstoragemgmt/mailman/libstoragemgmt-devel/thread/4F96B755.1060006@redhat.com/ | CC-MAIN-2014-10 | refinedweb | 391 | 73.47 |
Related Titles
- Full Description.
What youll
- 55:Nuget.config should be (removed one ../):
<settings>
<repositoryPath>../../../libs</repositoryPath>
</settings>
(This change is needed for downloaded Nuget packages to get stored in the lib folder in the project folder structure.)
On page 91:
There is no Browser class in the CassiniDev namespace.
Also, the latest version of Cassini that you say you're using is "using Cassini" it should be "using CassiniDev"
On page 103:
I followed everything to the T in the setup leading to running the first SpecFlow scenario on page 103. I'm getting the error in my NUnit test runner of:
***** KojackGames.Blackjack.Acc.Tests.Features.GetInvolvedInAGameFeature.MakeABet
Given I have navigated to the game play screen to play a hand
-> error: The CurrentThread needs to have it's ApartmentState set to ApartmentState.STA to be able to automate Internet Explorer.
When I click on the bet button
-> skipped because of previous errors
Then I should see the deal button
-> skipped because of previous errors
I set the app.config of the Acc.Tests according to page 102. Still getting this error even though these are set in the app.config. | http://www.apress.com/microsoft/c/9781430235330 | CC-MAIN-2016-22 | refinedweb | 193 | 57.77 |
A minimalist Flutter game engine.
Any help is appreciated! Comment, suggestions, issues, PR's! Give us a star to help!.
Just drop it in your
pubspec.yaml:
dependencies: flame: ^0.8.4
And start using it!
The complete documentation can be found here.
Bellow is an overview that should suffice to build a simple game, and work your way up from there.
The flame-example game has been updated to use the newer APIs (0.8.2) on a new branch.
There is a very good QuickStart tutorial for version
0.6.1 here. The API has changed a lot, so refer this documentation for updated information. Soon I plan to release an updated tutorial.
The modular approach allows you to use any of these modules independently, or together, or as you wish. = new Sprite('player.png'); // in your render loop sprite.render(canvas, width, height);
Note that the render method will do nothing while the image has not been loaded; you can check for completion using the
loaded method./component.dart'; Sprite sprite = new Sprite('player.png'); const size = 128.0; var player = new SpriteComponent.fromSprite(size, size, sprite); // width, height, sprite // screen coordinates player.x = ... // 0 by default player.y = ... // 0 by default player.angle = ... // 0 by default:
AnimationComponenttakes an
Animationobject and renders a cyclic animated sprite (more details about Animations here)
ParallaxComponentcan render a parallax background with several frames
Box2DComponent, that has a physics engine built-in (using the Box2D port for Dart)
Complete Components Guide
The Game Loop module is a simple abstraction over the game loop concept. Basically most games are built upon two methods: = new } }
In order to handle user input, you can use the libraries provided by Flutter for regular apps: Gesture Recognizers.
However, in order to bind them, use the
Flame.util.addGestureRecognizer method; in doing so, you'll make sure they are properly unbound when the game widget is not being rendered, and so the rest of your screens will work appropriately.
For example, to add a tap listener ("on click"):
Flame.util.addGestureRecognizer(new TapGestureRecognizer() ..onTapDown = (TapDownDetails evt) => game.handleInput(evt.globalPosition.dx, evt.globalPosition.dy));
Where
game is a reference to your game object and
handleInput is a method you create to handle the input inside your game.
If your game doesn't have other screens, just call this after your
runApp call, in the
main method.
Add this to your package's pubspec.yaml file:
dependencies: flame: "^0.8.4"
You can install packages from the command line:
with Flutter:
$ flutter packages get
Alternatively, your editor might support
flutter packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:flame/flame 9 hints.
Run
flutter formatto format
lib/audio.dart.
Run
flutter formatto format
lib/box2d/box2d_component.dart.
Similar analysis of the following files failed:
lib/components/animation_component.dart(hint)
lib/components/component.dart(hint)
lib/game.dart(hint)
lib/images.dart(hint)
lib/position.dart(hint)
lib/sprite.dart(hint)
lib/util.dart(hint)
Maintain an example.
Create a short demo in the
example/directory to show how to use this package. Common file name patterns include:
main.dart,
example.dartor you could also use
flame.dart. | https://pub.dartlang.org/packages/flame | CC-MAIN-2018-26 | refinedweb | 541 | 52.05 |
Iseries odbc failed jobs
You can see the error here: [kirjaudu nähdäksesi URL:n] Site uses Yii, PHP and Mysql...
.. populate
[kirjaudu nähdäksesi URL:n] is getting a "secure connection failed" error for some browsers: see: [kirjaudu nähdäksesi URL:n]://[kirjaudu nähdäksesi URL:n] I need help to fix this issue.
.."
I am currently working on an excel app that pulls data from Quickb.. are almost done.
Rekisteröidy tai kirjaudu sisään nähdäksesi tiedot.
... i. Working Hive SQL on Hadoop ii. ODBC access iii. Validate software installed thru Ambari portal
We need best recommendations, on how to create a devops culture for a team of IBM I-Series server environment [kirjaudu nähdäksesi URL:n] Developers..
...required & will be disclosed to shortlisted parties
Error: /project/gradlew: Command failed with exit code 1 Error output: FAILURE: Build failed with an exception. A problem occurred configuring root project 'project'. Here is the exact error: Could not find [kirjaudu nähdäksesi URL:n] ([kirjaudu nähdäksesi URL:n]:play-services-auth-base-license:11.8.0). Searched in the following locations with
...let us know if you can deliver a ODBC connector for [kirjaudu nähdäksesi URL:n] Details of a similar ODBC connector below. We need to get data from our account on Hubspot and currently this one ODBC driver is working wonderfully well but is expensive. Currently available ODBC Connector [kirjaudu nähdäksesi URL:n].
fix failed disk in FreeNas . .. freenas error down
..
I need someone to configure my email server.
.. ho... is set by option the caller selects in the IV...
.. "fin...
.. compliant text and image
I need to build XML3 Files from ERP System using ODBC Connection. The structure is for Bank Payment files. Expecting 1 hour quick job as everything is in place already.)
Hi I have tried to upgrade my joomla to current version unfortunatelly it has failed. I need someone to repair and update to lates joomla version.
I'm working on a test case where I'm trying to test async action creator for a failed response. But the error is a promise and when the unit test is running the error promise is not resolving and the test is failing. eg: // [kirjaudu nähdäksesi URL:n] export const fetchData = () => { return (dispatch) => { return fetchJSON(URL).then((data) =&...
fix asap issue with our email not delivered "550 Reverse DNS lookup failed for IP...."
I got a MS Access db that is connect to mysql via odbc I now would like to track the last change to a record and display the user that performed the changes. Each User has their own ODBC access that need to be recored when a record is saved.
[kirjaudu nähdäksesi URL:n] visual basic [kirjaudu nähdäksesi URL:n] sap crystal 2013 [kirjaudu nähdäksesi URL:n] msql install [kirjaudu nähdäksesi URL:n] MySQL ODBC Driver [kirjaudu nähdäksesi URL:n] inno setup
Need help configuring ODBC and JDBC driver in SAP Business Object with Hadoop HBASE DB in Linux
.. receiving Encrypted Data in an xml file. I need to pass that data and a key to Capicom on a Windows Server and receive the Decrypted Data on the Iseries..
I have an Access application that is old and outdated. It runs with Oracle DB using ODBC. I want to convert it to Java/intranet based. Also would like to add features to it. | https://www.fi.freelancer.com/job-search/iseries-odbc-failed/ | CC-MAIN-2018-30 | refinedweb | 553 | 65.62 |
Compaq: Alpha is Better Than IA-64 373
Compaq released a document (it's in PDF format) that states that their Alpha is better then IA-64 (Intel next generation Itanium Processor). The document compares Alpha (and future generations of Alpha) against the IA-64 (I hate this "Itanium" name - where do they get these names anyway?). Certainly worth a read. What do you think, folks?
IA 64 vs Alpha (Score:1)
yeah but alpha has gone the way of the Mac. (Score:1)
operating systems (Score:1)
What else is new? (Score:1)
Alpha's been the fastest for a long time... (Score:1)
Objectivity (Score:2)
I would like to point out that this document is from Compaq, so we must suspect that the document was written with a Pro-Alpha slant to begin with. Its like Intel coming out with a paper debating the merits of the Pentium III vs. the Athlon Processor.
Manung
Re:operating systems (Score:1)
Soon we'll have another 64 bit platform! For ultra-stud-muffins only.
Re:operating systems (Score:1)
Finkployd
Alpha = speed, cost (Score:4)
However, they cost too much for anyone except a supercomputing hound. If Compaq would drop Dec's insanely idiotic OS and component licensing scheme and aid linux on alphas, they might stand a chance of making a LOT of money selling hardware. As is, people buy ten times more alphas one chip generation late and run linux instead of OSF.
Anyone interested should see the linux alpha compilers available. cc is a small improvement, and ForTran is a LARGE improvement.
But still, Itanium will come out, and an Itanium box will offer slightly less than half the floating point speed, and it will cost about 1/4th of the fast alpha box from Compaq. And the alpha motherboards will still make it tough to support third party peripherals. And Itanium will dominate the 64 bit market. And Alpha will own the supercomputing market.
faster, cheaper, more powerful (Score:1)
faster
cheaper
*and* more powerful than Merced.
Not to mention that the Alpha, anyways, is proven technology.
Re:IA 64 vs Alpha (Score:1)
"Its"? (Score:1)
Bah on Alpha. (Score:3)
It's going to be tough for Digital to edge into Intel's market, mainly because nearly all consumers have been brainwashed to look for the "Intel Inside" Logo.
"Excuse me sir, is this an Itanium?"
"No, Ma'am. This is an Alpha processor by Digital corporation."
"Well Shit, I've never heard of THEM. Where are your Itanium machines?"
Not only that, but Alphas have never really been geared toward the general consumer. Most have been high-end server machines. Also, as far as I know, Alpha won't run x86 code because it uses a different architecture. (Please correct me if I'm wrong.)
"Alpha, huh?
"No Ma'am, this machine runs a Unix variant, and has a different architecture than Intel processors."
"Well Shit, I NEED those programs. Where are your Itanium machines?"
-- Give him Head? Be a Beacon?
Best feature (Score:2)
I've wanted an Alpha for a while now because (for various geeky reasons: fun, supposed speed, fun, assembly programming, and fun) but I've never been able to find a reasonably priced machine (even for auction) OR good instructions on how to build them.
If Compaq were smart (note the use of a counterfactual conditional) they'd hype Linux on Alpha like all get out. What better way to screw MS than to give geeks hardware that Windows can't touch (anymore)?
But does Compaq want to screw MS? If they're smart they do: Compaq produces an ostensibly competing OS.
---
Re:operating systems (Score:1)
One is the ability to address more than 4GB of physical ram (which 32bit addressing is limited to). Linux already does this on alpha's.
The ability to seek more than 4GB into a file (ie: fseek takes a 64bit offset, not a 32bit one). I'm pretty sure alpha linux has this too.
I'm not sure what areas (if any at all) that alpha linux is 32bit where it should be 64, but I've been led to belive it is a full 64bit os (with exceptions of things that *have* to be done in smaller word sizes, like ipv4 addresses.. which must be 32 bit).
Got to love the EV8--cluster on a chip (Score:1)
This has the potential, along with a big cache, to really boost the performance of a box, as well as drop the price per bang down. SMP circuitry's not cheap or simple, and definitely non-trivial to design. But with the EV8, it's all been done for you...
Re:Bah on Alpha. (Score:1)
I could be wrong. Its happened before.
Desktop Alpha? (Score:1)
I know a lot of people who would absolutely adore having an Alpha box, but they're just so expensive... We have a variety of free high-quality OSes working on Alpha and we've got millions of people who are now re/entering the land of *nix. Put 1 and 1 together and you get a large potential market for low-end, moderate-cost alpha boxes... My question is, where can we find them and what's holding up the market from bringing them to us at a sane price? Are we not looking hard enough or are they not there?
Re:Sheep my ass. (Score:1)
Bus size (Score:2)
---
Re:Best feature (Score:2)
What probably will happen (Score:1)
IA64 v Alpha (Score:1)
Re:Bah on Alpha. (Score:2)
Not only that, but Alphas have never really been geared toward the general consumer. Most have been
high-end server machines. Also, as far as I know, Alpha won't run x86 code because it uses a different
architecture. (Please correct me if I'm wrong.)
You're right. You can run WinNT on Alpha's if you get a special Alpha-only copy of WinNT. But then you still need Alpha-only copies of everything else... For the most part this doesn't happen with Linux, FreeBSD etc. as everything is availailable in source form.
From deep within Intel Corporation (Score:5)
#!/bin/perl
# This is a proprietary Intel perl script.
@prefix = ( "Pent", "It", "Max", "Ath", "Cort", "Trit" );
@suffix = ( "ium", "alon", "ex", "anium", "oricon", "agon",
"on", "eres", "obos", "ymede", "itan", "erion" );
@tag = ( "II", "III", "IV", "Pro", "MMX", "Deluxe" );
srand;
printf( "%s%s %s\n", $prefix[rand 6], $suffix[rand 12], $tag[rand 6] );
So if we run this script, we can see where the names come from:
sg1 237%
Cortium II
sg1 238%
Pentalon IV
sg1 239%
Penteres III
sg1 240%
Athalon Pro
sg1 241%
Pentitan II
sg1 242%
Maxymede MMX
Please show discretion when you refer this script to others. It is, after all, an Intel proprietary secret and should therefore only be shared with others on a "need-to-know" basis.
Re:Best feature (Score:1)
Most places that carry alpha's are proberly willing to sell you a bare boned system, just board, cpu and box, and you should be able to build a system pretty cheap.
Re:IA 64 vs Alpha (Score:2)
Not anymore. Check this [theregister.co.uk] out.-Brent
Re:"Its"? (Score:1)
Why can't we comment on the article, rather than pick at HeUnique's grammar?
Re:Bah on Alpha. (Score:2)
Oh...and I work(ed) for Compaq on the Visual C++ for Alpha/NT product(I'm leaving at the end of the month, for obvious reasons).
--GnrcMan--
A Better Chip: who cares if they're not available (Score:2)
As long as you can't go to an average computer store and pick up a PPC, Alpha or Sparc chip and build your own computer from it, the general population will not even know they exist. Don't get me wrong: I would like it if all of a sudden the availability of these chips were equal to the Intel chips, but that's just not the reality of the marketplace. With the switch to the 64-bit architecture there may be an opening in the market which will allow these chips to become a more available product in the eyes of the average consumer. But as long as the Intel/MS duopoly (which is showing signs of fracturing) is as dominant as it is now, that's just not going to happen.
Re:Best feature (Score:1)
Excellent!!!
Who would buy an alpha CPU? (Score:1)
Oh,
Don't forget VMS (Score:2)
Re:Objectivity (Score:1)
As we all know, product quality is only one (small) factor in product success. Aside from the usual Marketing and FUD wars, the real test will be software support. What OSes, what Apps, what real world uses will run on these chips?
If a CPU shits in the woods, but no one writes native code, does it make a sound?
Say macintosh? (Score:3)
Macin... Linux.
Ma... Linux
Linux.
Linux.
Nope, it just doesn't seem to come out.
(the irony is I'm posting this from a mac)
Moderate this up! (Score:2)
-----------
"You can't shake the Devil's hand and say you're only kidding."
Re:IA 64 vs Alpha (Score:1)
--GnrcMan--
Re:operating systems (Score:1)
Conscience is the inner voice which warns us that someone may be looking. [lemuria.org]
Hmm... (Score:2)
Re:"Its"? (Score:2)
2. The blurb clearly implied that the PDF format was Compaq's.
3. I had nothing good to say about the article
Re:Bah on Alpha. (Score:2)
--GnrcMan--
Re:operating systems (Score:2)
Compaq could learn a thing or two from Intel (Score:2)
If they want some exposure or to compete for the desktop market (Which is where the money is) they need to slash the price on the chips and sell them near cost. Sure they take a hit for R&D but the volume of sales should go up. If they don't have a good strong presence by the time IA64 hits, they may as well close up their doors and go home.
Re:From deep within Intel Corporation (Score:2)
Intel
Merced: first IA-64 / Itanium (2000)
Willamette: 0.18-micron cut-down version of Itanium (2000/1)
McKinley: 1GHz IA-64 / 2MB on-chip cache (2001)
Madison: 0.13-micron IA-64 high-end workstation/appl. server (2002)
Deerfield: better price/performance
Northwood: 3GHz barrier broken (2003)
AMD:
SledgeHammer: 64bit K8 ??
Minor programming style nitpicks (Score:2)
First of all with current versions of Perl the srand call is not needed.
Secondly I would recommend using qw() because it is more legible for lists.
Thirdly a little information hiding works well. There is no need to have to synchronize the length of the list with the argument to rand.
And -w is always worthwhile
So rewritten we get
#!
@prefix = qw(Pent It Max Ath Cort Trit);
@suffix = qw(ium alon ex anium oricon agon on eres obos ymede itan erion);
@tag = qw(II III IV Pro MMS Deluxe);
printf ("%s%s %s\n", &rand_elt(@prefix), &rand_elt(@suffix), &rand_elt(@tag));
sub rand_elt {
return $_[rand(scalar @_)];
}
Not that it matters in this case, but good habits are good habits...
:-P
Cheers,
Ben
PS To get the code to look like code use the TT tag, and to get indents use . Warning, IE may mess up the indented space on a cut-and-paste...
Re:The love & hate relationship of Intel & Compaq (Score:2)
--GnrcMan--
Re:I hate PDF! (Score:2)
Besides which, readers are free for just about every platform (Linux included, I believe... and if the official version isn't available - there's surely an opensource reader....)
So, get over it!
:)
Re:yeah but alpha has gone the way of the Mac. (Score:2)
Model No.: PBPSMIATA
Tested to comply with FCC Standards
For Home or Office Use
This Class B digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations.
(repeated in French)
--GnrcMan--
Come on guys! :) (Score:2)
port (Score:2)
Let's port this to all other languages like LISP et al.
I will do the easy one and port it to C.
// This is a proprietary Intel C program.
static const char * prefix [ ] =
{ "Pent", "It", "Max", "Ath", "Cort", "Trit" };
static const char * suffix [ ] =
{ "ium", "alon", "ex", "anium", "oricon", "agon", "on", "eres", obos", "ymede", "itan", "erion" };
static const char * tag [ ] =
{ "II", "III", "IV", "Pro", "MMX", "Deluxe" };
int
main ( void )
{
srand ( 0 );
printf ( "%s%s %s\n",
prefix [ rand ( ) % ( sizeof ( prefix ) / sizeof ( prefix [ 0 ] ) ],
suffix [ rand ( ) % ( sizeof ( suffix ) / sizeof ( suffix [ 0 ] ) ],
tag [ rand ( ) % ( sizeof ( tag ) / sizeof ( tag [ 0 ] ) ] );
return 0;
}
P.S. Yeah, I know, I should write a perl to C printer, but then the post would be too long.
For I am not master coder yet who can code a super short compressed one-line self-compiling compiler to fit as a post.
Any challenger care to respond with one?
P.P.S. Back to doing some real coding.
Corrinne Yu
3D Game Engine Programmer
3D Realms/Apogee
Corrinne Yu
3D Game Engine Programmer
Re:operating systems (Score:2)
--GnrcMan--
Re:operating systems (Score:2)
--GnrcMan--
Re:operating systems (Score:2)
--GnrcMan--
Re:"Its"? (Score:2)
the possessive form does not possess an apostrophe.
so, the possessive form isn't.
Re:"orders of magnitude improvements" (Score:2)
Main Entry: order of magnitude
Date: 1875
: a range of magnitude extending from some value to ten times that value
and
Main Entry: magnitude
Pronunciation: 'mag-n&-"tüd, -"tyüd
Function: noun
Etymology: Middle English, from Latin magnitudo, from magnus
Date: 15th century
Seems like they have an inkling.
--GnrcMan--
Re:Bah on Alpha. (Score:2)
It's the other way around, really, with Intel trying to get into Alpha's market. I say this mainly because I don't believe that either chip is going to be aimed at anything but the server market for a long time...
Code bloat (Score:2)
33% ??
I guess Intel really is a 'Microsoft Strategic Partner! They're helping them with code bloat!!
Re:Good god, GRAMMAR, GUYS! (Score:3)
Re:port (Score:2)
Re:Code bloat (Score:2)
Funny name (Score:2)
"welcome to the Plani-arium"
Vs
"Our new chip will be called: I-anium!"
x86 emulation for NT and Linux (Score:3)
32 bit x86 code no less... Also, there is support for 32 bit x86 Linux binaries available (in Linux of course.) How well it actually works is best left for someone else to answer. I'm suprised that so many people thought there was no x86 emulation available.
Of course, the emulation isn't quite as important under Linux as it is under Windows since most software for Linux is open source and able to be compiled natively. Note that I am NOT implying that it's always as easy as simply recompiling the source...
BTW, doesn't seem like a great idea to go with an Alpha/NT combo these days anyway. Microsoft ceased development of NT5/Win2k/whatever for the Alpha. Presumably because they need to focus on rigging it to work with the IA64 first. I wonder if Windows for the IA64 will end up being enough 64 bit code to call it a 64 bit OS and as much of the old 32 bit code as they can get away running under emulation. Any guesses?
numb
Re:Bah on Alpha. (Score:2)
I should hope I know something about this. I helped write the damn Alpha compiler for Visual C++.
--GnrcMan--
Re:Come on guys! :) (Score:2)
Is WOrdperfect for Linux available for AlphaLinux, LinuxPPC, and UltraLinux (is that the sparc version)?
Oracle8
Informix
Sybase
and mostly every other non-opensource program for linux will probably target x86...
Thanks for the info, though!
Re:Alpha = speed, cost (Score:2)
Compare apples to apples. A full system with a PIII 500MHz is around $2500-$3k including a complement of RAM and hard disk space. The top alpha is ONLY available from Compaq. The fast motherboard is the XP-1000 for about $10k without the RAM or monitor. And add-ons for the Compaq alphas machines are EXPENSIVE. One generation late you can get a 21164 processor which is still fast in a machine for about $3k. Video card support under OSF or alpha linux is very poor. Unless you really need the supercomputing, it just doesn't make a lot of sense to buy an alpha. Now, if Compaq somehow made third party peripherals supportable for alphas they would sell a WHOLE lot more alphas without losing the number crunching crowd.
Re: 99%? (Score:2)
Sure, the chips are expensive, but what Alpha processors need is marketing not necessarily pricing.
And 99% of computers are shipping with Intel processors? Guess you missed AMD having the majority of computer sales a couple months ago (at over 40%) or the iMac's surprising success.
Advantage point missed: binary compatibility (Score:3)
The implementation of simultaneous multithreading is something I very much would like to see. I'm impressed that they're able to do it as simply as this paper seems to imply.
One Alpha advantage (one that I think falls in the irreducible category) that I've never seen Digital/Compaq play up is the angle of binary compatibility of the Alpha instruction stream across different implementaions of Alpha. A binary executable that the compiler has tuned/targeted to a specific implementation of Alpha will still run, perhaps not quite optimally, on a later implementation.
Out-of-order execution is key, here. Because the programmer (or compiler) have to be explicit (with memory barrier instructions) about dependencies that might otherwise be hidden, the instruction stream in the binary executable file documents an idealized instruction execution order -- but any execution order that achieves the same result is also acceptable.
More outstanding data fetches, larger out-of-order instruction queue and wider simultaneous issue all work together to transparently make the old code work better. I haven't seen where increasing the VLIW bundle from 3 instructions to 6 instructions, for instance, would be as transparent -- so there's a much stronger need to recompile and maintain separate binaries targeting the various implementations of IA64.
Depends... (Re:Is alpha + linux to be recommended? (Score:3)
Other stuff (disk I/O, etc) is not faster than x86, and some hardware (e.g. many recent 3D graphics boards) can't be used in alphas.
Also, you should be aware of the fact that most closed-source Linux software (StarOffice, Netscape, Civ3,
Re:Alpha = speed, cost (Score:5)
Maybe you'd have a leg to stand on if Linux supported the enterprise features that Digital UNIX does.
Unfortunately, it doesn't.
Example: High performance, dynamically resizable, journalling filesystem.
Does Linux have it? No. I'm familiar with the efforts that exist to address this, I work with one of the authors of a major project for this. He'll admit that ext3/reiserfs doesn't touch ADVFS.
Example: Advanced high availability clustering solution with a shared filesystem among nodes, cluster aliasing, and context-dependant symlinks for a SINGLE disk image shared amoung up to 8 cluster nodes.
Does Linux have it? No. Be aware that Beowulf is NOT an HA solution - it's a distributed computing cluster.
Perhaps you should do some more research before blindly bashing an OS that has features that Linux has yet to dream of.
As a side note, the Alpha isn't only used for supercomputing. I'm part of a group that runs 3 clusters of AlphaServers for everything from mail, web, and database serving. Only recently did DEC/Compaq enter into the supercomputer arena with the ``SC'' series of Alphaserver.
Your typical DS/ES/GS series AlphaServer may not be meant for your average joe-blow computer enthusiast, but 14 processors does not constitute a supercomputer. The new ``SC'' series AlphaServer that DEC recently released is a 64-512 Alpha CPU model. THAT is a supercomputer.
I've been using Linux since 1995, and Digital UNIX since 1996, so I've got a pretty good feeling on the comparisons between them.
-Jeff
Moderate this down as flame bait if you like - but I have a feeling that most readers have never used Digital UNIX/Tru64, and don't have enough knowledge of it to form a good opinion.
IA-64 will loose out to vanilla x86. (Score:2)
I seriously doubt a consumer is going to want an Itanium. Or even an Alpha. These chips are designed as server and technical computing workhorses.
Like with the Alpha, all the operating systems and applications will need to be ported to the new IA-64 architecture to see any useful speed gain. All reports indicate that the on-board x86 compatibility is dog slow, with no appreciable performance gain over Pentium or Athalon chips. Why should gran'ma buy a $5000 Itanium box when the $999 iMac will run rings around it when running Quicken or MS Office?
Then there is the issue of native software: Linux, and NetBSD are gimmies. HP-UX is going to be forced marched to IA-64 (HP originally developed EPIC for the HP9000). IRIX and SCO are "definite maybes".
Sun and Microsoft, on the other hand, will probably port their OS to the platform in hopes of killing it. Microsoft had ports of NT on x86, PowerPC, MiPS and Alpha. Only x86 remains. Like with the older RISC architectures, MS will port and support the platform for a little while, but won't port it's applications, and won't promote their OS on anything other than x86. This way, Microsoft can keep control of their hardware market, and deny competitors popular support for their primary platform. And, when the market drops out, MS can quietly discontinue NT for IA-64, and place the blame squarely on Intel; just as they've blamed Compaq, Apple, and SGI for the failure of NT on RISC. Sun has a cross-platform strategy with similar goals: get them hooked on Solaris, and then entice them over to SPARC, where the applications are.
MS likes x86 becuase it -owns- x86. Linux will always be an also-ran on x86: merely a "Hobbyist's OS". The blind loyalty to intel and x86 I find expressed here is disconcerting. The only thing that will allow Linux to overcome proprietary systems is -ubiquity-, and that means cross-platform parity. Use the fastest and the best when available. That, more often than not, means Alpha.
SoupIsGood Food
`Where do these names come from?' (Score:2)
It's very simple really. Check out this article [salon.com] in Salon for details.
cjs
Re:Alpha = speed, cost (Score:4)
Previous generation PC164 motherboards were (maybe still are) selling for around $250, including a 500MHz 21164A. Just add an ATX power supply, case, 4 or 8 72 pin parity SIMMs, a hard drive and you have yourself a computer
:) (I guess video and network would be nice... it's got ISA and PCI slots). I got myself one of those in May:
Re:operating systems (Score:3)
P6's can use various tricks to access more than 4GB, but only by using yucky segmentation techniques. At any one moment, only 4GB can be addressed because that's all 32 bits allow. You can't mmap in a 5GB file, or use an array of 550 million doubles. A 64-bit processor can access many petabytes-- directly. Not something useful on most app servers, but the database, video and science folks sure like it.
Re:x86 emulation for NT and Linux (Score:2)
Yes, it's called em86.
I dug up the info from the alphalinux faq. I've used it myself, however I had no luck with icecast when I tried running it emulated. There were a lot of other issues so I'm not sure that em86 was at fault. That's about the extent of my personal experience with it. [alphalinux.org]
numb
Ports (Re:From deep within Intel Corporation) (Score:3)
* Scheme
(let ((rand-elt
___________(lambda (l)
________________(list-ref l (round (rand (length l))))))
______(prefix '(Pent It Max Ath Cort Trit))
______(suffix '(ium alon ex anium oricon agon))
______(tag '(II III IV Pro MMX Deluxe)))
_____(begin
__________(display (rand-elt prefix))
__________(display (rand-elt suffix))
__________(display (rand-elt tag))
__________(newline)))
* Python
def rand_elt(list):
____list[int(rand(len(list)))]
prefix = ["Pent", "It", "Max", "Ath", "Cort", "Trit"]
suffix = ["ium", "alon", "ex", "anium" "oricon", "agon"]
tag = ["II", "III", "IV", "Pro", "MMX", "Deluxe"]
s = rand_elt(prefix) + ' ' + rand_elt(suffix) + ' ' + rand_elt(tag) + '\n'
print s
That's all for now... I seem to have run out of creativity
Re:Code bloat (Score:2)
Re:Use your head (Score:2)
If x86 Linux is headed towards the mainstream, then it's RISC cousins need to be able to have a mechanism in order to use all the software available to x86, otherwise they'll always be treated as second rate to x86.
And yeah, running Oracle in emulation would be just dumb... but for something not as performance hungry, like WordPerfect or Opera, it'd be nice to have the option, i'd think...
Re:We need a new architecture (Score:2)
Almost certainly not. You do not have to use the segmented addressing more to access more than 4GB of physical memory - you may not be able to have all of it mapped in at the same time, but you could map it in and out dynamically, or give 4GB-or-less chunks to various processes.
In fact, the x86 segmented mode - which is not new in the P6 processors (PPro, PII, PIII), but has been around in its current form since the 386, and existed with smaller addresses in the 286 - doesn't even help. The x86 MMU maps 48-bit segmented addresses (which any OS running in protected mode uses, although most of them set up "trivial" segments and, unless they're running 16-bit programs that use 286-style segmentation to boost their address space size, don't really make use of it) to 32-bit linear addresses; those 32-bit linear addresses are then translated to physical addresses through the page table, if paging has been enabled (which it is, in most x86 OSes, e.g. Windows OT and NT, OS/2, Solaris, Linux, BSD, etc., etc.).
What's new in the P6 processors is the ability to specify page tables that generate 36-bit physical addresses rather than 32-bit physical addresses (an ability that at least some other 32-bit processors, e.g. SPARCs with the SPARC Reference MMU, have had). You need that ability and you need a memory bus that puts out more than 32 bits of physical address; I think some 32-bit platforms have had that, and some high-end "PC" platforms may have it.
Re:Bah on Alpha. (Score:2)
On the upside, Starting in the next month or so (After I officially leave the VC/Alpha project) I plan on fooling around independently with EGCS on my Alpha box at home. I'll certainly contribute what I can.
--GnrcMan--
in awk: (Score:2)
srand()
split("Pent It Max Ath Cort Trit", PRE)
split("ium alon ex anium oricon agon on eres obos ymede itan erion", SUF)
split("II III IV Pro MMX Deluxe", T)
b=rand()*100; c=rand()*100; d=rand()*100
CONVFMT = "%2i"
a=b ""
x=c ""
y=d ""
printf "%s%s %s\n", PRE[a%6 + 1], SUF[x%12 + 1], T[y%6 + 1]
}
--
"One World, one Web, one Program" - Microsoft promotional ad
Re:HTML is limited? (Score:2)
CSS is relatively new - yes, the spec's been out for a while, but only now do most browsers have somewhat decent support for it. And Mozilla I wouldn't even consider, being that it's pretty much Alpha software... I haven't ventured to try it with Linux (because at this point i feel Linux is best suited as a server OS), but on the Mac, Win 9x, and Win NT, I've found it to be horribly unstable. I also haven't fully investigated Opera, which leaves us with just Netscape and IE...
Of those two, IE seems to more fully implement CSS... version 5 is much better than 4.5, but 5 is Windows only where as 4.5 was also available for the Mac.
Acrobat reader is available for probably as many or more platforms as Netscape Communicator/Navigator. It also (Thanks to QuarkXPress) has MUCH better typographical control than CSS. That's probably because the programs being used to generate PDF's are much more mature than those used to genertate HTML, XML, etc...
You can't generate ligatures with CSS... nor can you have nearly as wide a choice of fonts... With PDF, I don't care what font's you have available, because i can embed them within my document. And also, using PDF's, you're extremely unlikely to need to tinker with the file/code at all, where as anything where detail is that much an issue in HTML, you always need to wade through the code.
So, to summarize... The tools to generate PDF's are much more advanced than the ones to make CSS/HTML. The tools to view PDF's are also much more advanced than those currently available for HTML, in that the designer/author has so much more control over the final appearance of their document than can ever be achieved with CSS/HTML... Yes, I could specify that i'd like this font to appear as Adobe Bembo, but if that's not available on your machine, you may end up with Times, or whatever generic Serif font is available.
That's all in my opinion, of course..
Re:We need a new architecture (Score:2)
That should've been "You do not have to use the segmented addressing mode".
As per my parenthetical note, you do, in a sense, have to use it to use paging, but you don't have to use it in a non-trivial fashion; most (if not all) x86 OSes don't use full 48-bit addresses, they just use 32-bit addresses with implicit segment numbers, and set up the segments to overlap so that those 32-bit addresses translate trivially to 32-bit linear addresses.
As for FreeBSD vs. NetBSD, I think none of the BSDs map the entire kernel address space into virtual memory; I don't think Solaris does, either - I think they added support for >4GB of physical memory in 2.6 or 7.
Re:We need a new architecture (Score:3)
MIPS does; however, POWER and Alpha may not. The first POWER and Alpha processors were superscalar (PowerPC being a descendant of POWER).
You're confusing (as per my followup to the person who responded to you) the support for 36-bit physical addresses in the P6 processors (PPro, PII, PIII) with the support for 48-bit segmented virtual addresses, which dates back to the 386 (and which is a 32-bit-segment-offset version of the 286's segmentation). You don't need to use 48-bit virtual addresses, in their full shining glory, to get more than 32 bits of physical address.
There's code in the 2.3 kernel from, if I remember correctly, Siemens, to do exactly that.
I don't know offhand whether any of the BSDs support it; I think either Solaris 2.6 or Solaris 7 do.
Re:IA 64 vs Alpha (Score:2)
Re:Alpha = speed, cost (Score:2)
Lay down the crack pipe and take a deep breath and read what I wrote again. Insanely idiotic OS and component licensing scheme. The licensing is insane, not the OS.
Maybe you'd have a leg to stand on if Linux supported the enterprise features that Digital UNIX does. Unfortunately, it doesn't. Example: High performance, dynamically resizable, journalling filesystem.
AdvFS cannot be fscked. This in turn has fscked me. Get the picture. If the FS breaks, you keep both pieces. Thank you Digital for such a wonderful advance in computing. Bad inodes just get to live indefinitely on your system until you copy all the files to a different partition (dump and tar choke), and reformat. We haven't had such advances since, well, DOS. Needless to say, we are going back to UFS for our OSF needs, which is only quite a bit slower than ext2. But at least when it breaks we can fix it.
OSF also doesn't ship with a reasonably modern interface. CDE, MWM, and TWM are simply not enough.
It certainly has its niche for ultra high end computing where user interfaces are just not viewed as important. But the OS is chock full of holes. And I run into them from time to time.
Example. glibc call for wordexp does a complete shell-like expansion in C. Libc shipped with OSF does the expansion by shelling a command to ksh. Why should libc depend on ksh for its integrity ??
Example. Recursive scripts fill up the process table and lock the system in OSF. Not so in linux.
There are also good things. CC is a really fast compiler. So is the ForTran compiler. If you want to run processes real fast, OSF is a good choice. If you want a large number of CPUs, OSF is a good choice. If you want a decent user interface, reasonable speed, and a journaled file system that can be fixed, linux is a good choice.
Re:IA 64 vs Alpha (Score:2)
--GnrcMan--
Re:We need a new architecture (Score:2)
Bzzzt..wrong. From the Alpha Architecture Reference Manual, preface, first edition:
We concluded that the remaining factor of 100 would have to come from other design dimentions. If you cannot make the clock faster, the next dimension is to do more work per clock cycle. So the Alpha architecture is focused on allowing implementations that issue many instructions every clock cycle.
down the page a little:
These three dimensions therefore formed part of our design framework:
* Gracefully allow fast cycle time implementations
* Gracefully allow multiple-instruction-issue implementations
* Gracefully allow multiple-processor implementations
It goes on to list specific design decisions made to meet these goals. When they designed the Alpha, they had a 25 year design horizon. BTW, that preface was written in 1992.
--GnrcMan--
Re:What good are 64 bits anyawy? (Score:2)
Re:Lobby TacoHemo for <pre></pre> (Score:2)
But yeah, it would be nice for code posting. Oh well, just another example of a few sh*t-heads screwing things up for everyone else...
________________________
Re:operating systems (Score:3)
The segmentation tricks don't help much, if at all; the x86 MMU maps 48-bit segmented addresses to 32-bit linear addresses, and then maps 32-bit linear addresses to 32-bit physical addresses or, on P6, 36-bit physical addresses if that feature is being used by the OS. Thus, you can't get at more than 4GB of linear address space at any one time - you'd have to map segments into and out of the linear address space, although I guess you could do that on demand, so that it's somewhat transparent (although still potentially slow).
However, all that does is, as you note, prevent you from addressing more than 4GB at any one time; stuff can be mapped into or out of the address space (which I guess could be considered a "yucky segmentation technique" - you're an old-timer like me, so you may remember the use of that on some versions of PDP-11 UNIX and various PDP-11 OSes from Digital), and you can have more than one address space by having more than one process.
More than 4GB of physical memory is more useful on machines that let you get at it all at once, in a single address space, but it probably has some use even on platforms such as x86, SPARC V7/V8 with SPARC Reference MMU, etc. that have only 32-bit linear addresses but support more than 32 bits of physical address.
Re:Bus size (Score:2)
You can do that on 32-bit machines as well; most memory fetches tend to turn into cache-line fills, which can use the wider-than-32-bit data buses available on most if not all general-purpose-computer 32-bit processors these days.
You typically can't process all 64 bits of that word - at least not with integer instructions - but you at least get all 64 bits (or more, if your memory bus is wider) at once.
Re:operating systems (Score:3)
...in a single instruction. You can do 64-bit arithmetic on 32-bit platforms (for example, most if not all modern C compilers for 32-bit platforms support long long int or some equivalent 64-bit integral data type), but the operations generally have to be synthesized from multiple instructions (typically done inline for most operations, although multiplication and division, and possibly others, might be done in a subroutine), with each instruction working on 32 bits at a time, and may require more registers, as the non-floating-point registers on a 32-bit platform are typically 32 bits wide.
Re:operating systems (Score:2)
32-bit addressing is limited to 4GB of physical RAM at any instant, but you can handle more than 4GB of physical memory in multiple 4GB-or-less process address spaces, or handle it by mapping pages in and out of a given address space, on a platform with 32-bit virtual addresses.
...and there's a patch from, as I remember, Siemens, which, as I remember, was accepted for the 2.3 kernel, to do that on x86, presumably in the fashion I described.
...as does x86 {Free,Net,Open}BSD, versions of NetBSD and OpenBSD on other 32-bit platforms, Solaris 2.6 and Solaris 7 on x86 and 32-bit SPARC, etc., etc., etc....
...and on Linux, with the right patch; I think that patch has also been accepted for the 2.3 kernel.
A 64-bit architecture run in 64-bit mode isn't necessary for handling more than 4GB of memory, or for seeking more than 4GB into a file - fseek() may take a 32-bit offset on those platforms, but fsetpos() could take a 64-bit offset, as could llseek(), say - but it does make it a bit more convenient (no need to muck around with mapping stuff into or out of an address space, no need to use on UNIX all the extra stuff from the Large File Summit, or to use the somewhat clumsy support in Win32 for file offsets >32 bits).
Re:MicroFlaw (Score:2)
That's how it runs on Alpha - 32-bit address space and, I think, 32-bit page table entries. On Alpha you can do 32-bit page-table entries by doing NT PALcode, as, on all existing Alpha processors, TLB misses are handled in software (well, PALcode, but that's just software loaded into memory from a ROM, running in a special mode that lets it get at processor-specific internal registers), so the software (PALcode) can control what PTEs look like.
I don't know whether IA-64 will do that or not.
Re:Alpha = speed, cost (Score:3)
...but it does need a salvager; I infer from the name that salvage is a salvager for AdvFS, just as fsck is a salvager for, for example, UFS.
So the original poster was right in his belief that file systems without salvagers are bad (anybody who believes otherwise either believes that restoring from a backup tape is Always The Right Answer, a claim of which I'm rather skeptical, or believes that disks, disk firmware, and file system software breaks sufficiently rarely that it's not an issue, another claim of which I'm rather skeptical), but wrong, apparently, in his belief that AdvFS lacks one.
Re:Ports (Re:From deep within Intel Corporation) (Score:2)
Re:operating systems (Score:2)
Yup. You don't need a 64-bit processor to do 64-bit integer operations; however, you're likely to be able to do it faster on a 64-bit processor.
Re:Got to love the EV8--cluster on a chip (Score:2)
Re:What good are 64 bits anyawy? (Score:2)
On another thread here, regarding the mistaken notion that a machine with 32-bit pointers can't hold more then 2**32 memory, it seems to me that an 11/70 would allow more than 64k of memory per machine, but would only let you map in 64k per process.
Re:operating systems (Score:2)
Yes, 2.4 BSD allowed for overlays, as did RSX/11. This was a matter of the OS reloading some segmentation registers on request. Pentia could do similar things via the page table, should someone care to add the necessary OS calls. But it was yucky even back then. I've recomitted those brain cells to other tasks these days. | https://slashdot.org/story/99/12/28/0946225/compaq-alpha-is-better-than-ia-64 | CC-MAIN-2018-05 | refinedweb | 6,902 | 70.63 |
Introduction: How to Make a Drawing Arm With Arduino
This Instructables will show you how to create your very own drawing arm. The drawing arm draws by using the hole in the up most piece of wood as a writing utensil holder. This drawing arm works by taking input from the potentiometers and making them a servo reading. The Drawing arm has 2 DOF and each DOF is controlled by a potentiometer.
Step 1: Gather Materials
You will need the following to make a drawing arm:
Electronics:
1 - Arduino Uno
2 - Mini Servos
1 - Bread Board (Any Size will do)
A couple jumper cables
For the Non- Electronics:
1 - Piece of wood to screw the Arduino on and hot glue a servo on (you could use acrylic, I didn't)
1 - Popsicle Stick / Craft Stick to hot glue a Servo horn onto
1 - Small piece of wood to hot glue a Servo horn and make a hole to put a writing utensil in
Step 2: The Circuit
Step 3: The Code
The code uses a map function that ties each potentiometer to its servo it controls.
#include <Servo.h>
Servo myservo1; // "Calls" a Servo
Servo myservo2;
int potpin1 = 0; // analog pin used to connect the potentiometer
int potpin2 = 1; int value; // variable to read the value from the analog pin
void setup() {
myservo1.attach(3); // attaches the servo on pin3
myservo2.attach(4); // attaches the servo on pin 4
}
void loop() {
value = analogRead(potpin1); // reads the value of the potentiometer
value = map(value, 0, 1023, 0, 179); // scale it between 0 and 180
myservo1.write(value); // sets servo according to the scaled value
value = analogRead(potpin2); // reads the value of the pot
value = map(value, 0, 1023, 0, 179); // scale it between 0 and 180
myservo2.write(value); // sets servo according to the scaled value
delay(15); // waits for the servo to get there
}
Step 4: The Non-Electronics
Using the largest piece of wood (or acrylic) screw the Arduino to the board. Hot glue one servo to the top left corner of the board. Hot glue the breadboard wherever you have room.
Hot glue one servo horn to a Popsicle stick. When laying the Popsicle horizontally the servo horn should be on the left side. On the complete opposite side of the servo horn should be the place in which you hot glue another servo.
On the small piece of wood drill a hole big enough to place a writing utensil in. Hot glue the last servo horn on the opposite side. You should be able to place the servo horn on the Popsicle stick in the servo on the base, and place the servo horn on the small piece of wood on the servo that is on the Popsicle stick. Enjoy ! Please comment questions, or things you would like for me to make Instructables on.
Recommendations
We have a be nice policy.
Please be positive and constructive.
3 Comments
Hello! I need a drawing robot very simple, your project is fantastic because is very cheap and ingenious!! But i don't know if it is suitable for my idea...Can I find a video where it works?
Thanks
Any idea on how you would program set images for it to draw? For example a smiley face.
I thing you should use a clothspin instead of plywood at the and :D All in all: nice, cheap and easy project!!! | http://www.instructables.com/id/How-to-Make-a-Drawing-Arm-with-Arduino/ | CC-MAIN-2018-17 | refinedweb | 570 | 68.4 |
Important: Please read the Qt Code of Conduct -
Displaying integer value
HI
I have to take 2 bytes from Qbytearray and display it as signed int in Line edit. is it possible.
for eg: if my byte 0 is 03 and byte1 is FF--> my data =03ff which is 1023.
i want to display in line edit as 1023.
i tried like this
ui-> value-> setText(QByteArray().append(Buffer[2]).append(Buffer[1]).toHex()); the value in line edit is 0a17.
In addition i have to do some calculations on the data 03ff and finally display a signed integer value in line edit. please help.
Hello again!
I assume:
Buffer[1] = 0x03; Buffer[2] = 0xff;
then you can get the interger like this
#include <QtEndian> qint16 integer= qFromBigEndian<qint16>(Buffer.constData() + 1);
then you can convert it to text
ui->value->setText(QString::number(integer));
If you want to use toHex() you should use like this
int integer = Buffer.mid(1, 2).toHex().toInt(nullptr, 16);
Buffer.mid(1, 2)means to get a new byte array from index 1, and for length = 2
toInt(nullptr, 16)means convert a string to int using base 16, aka, hex.
But this seems a little unnecessary, it is basiclly convert number to string and then convert back to number again.
@Bonnie said in Displaying integer value:
@Bonnie
it worked perfectly. thank u for the support | https://forum.qt.io/topic/114331/displaying-integer-value/1 | CC-MAIN-2021-21 | refinedweb | 233 | 65.12 |
Adopted for extjs v4.0.7 and nodejs v0.6.2
Adopted for extjs v4.0.7 and nodejs v0.6.2
I found that menuAlign is applied correctly. But when menu render it use menu element size and for the first time this size is wrong (because menu isn't completely rendered at that moment).
ExtJS 4.0.2 only.
Ext.onReady(function(){
Ext.create('Ext.button.Button', {
menuAlign: 'tr-br', // <-- ignored for the first time (used default 'tl-bl?')
renderTo:...
This happens because js works in this way and extjs doesn't specially cloning objects when instance of any class created. Maybe cloning impossible or has a very difficult implementation. We should...
I think that all data defined in the class body copied to the each instance of this class. And if this data is changed in instance it isn't changed in class prototype. Another example:
...
Code illustrated bug:
Ext.define('Test', {
data: {}, // <-- isn't cloned for instances.
constructor: function(config) {
var me = this;
...
Add to requires Ext.form.field.Radio for Ext.form.RadioGroup
hmm, disable error handling is a strange way to fix this issues :)
This is my overrides for grid, scroller and store after that buffered store and grid work.
Ext.require('Ext.panel.Table',...
all issues are reproduced in 4.0.1
I made simple nodejs script for getting app dependencies list. It's useful for backend server integration without using Sencha SDK(e.g. rails with jammit gem).
var appDir = process.argv[2];...
Yes, but inside extjs code buttons config converts to dockedItems and extjs 4 has no any simple way to define buttonAlign.
Instead of buttons config, add something like:
dockedItems: [{
xtype: 'toolbar',
dock: 'bottom',
items: [
{ xtype:...
I found several issues with buffered store and grid with PagingScroller:
1) guranteeRange throw error "Start (0) was greater than end (-1) (Store.js:1545)" when server return zero items (store is...
present in B3
Ext.regModel('Test', {
fields: [
{name: 'name' , type: 'string' }
],
proxy: {
type: 'localstorage',
id: 'test-store'
}
});
Ext.define('TestStore', {
I understood the problem. But how to use singletons with the Application in the same namespace?
Ext.define("app.core.Test", {
singleton: true
});
Ext.define("app.Application", {
extend: "Ext.Applicaton",
name: "app",
singleton: true,
requires: [ "app.core.Test"],
...
So i only want to generate ids for new records on the client-side (UUID) not on the server side.
Ok, it's not a phantom, but it's a new record and request for creating this record on the server side must be POST with url: and id must be put as post params (RESTful way),...
Why? It a new record, it doesn't present on the sever. If id present in the url, server try to find resource by id. For creating new record with custom id, id must be in the POST params not in the...
my backend server based on the Rails framework, so it has mechanism for generating RESTful urls for resources and auto-generated url for create some resource is POST to...
I create new record, but i generate id for this new record from js and put this id as param to post request. RestProxy add this id to url e.g. ...
If create some model(proxy: rest) with auto-generated id and invoke save for it, RestProxy build url with ID in the url but id must be in the POST params not in the url.
when click recreate grid show loading and js throw error | https://www.sencha.com/forum/search.php?s=1ecd945aea7f9d9e5eff77a40446a3b5&searchid=18401117 | CC-MAIN-2016-50 | refinedweb | 580 | 67.76 |
Tutorial for: django-selectable
Requirements:
Do you have a Django website with a large Knowledgebase? Do you want to make this knowledgebase easily searchable from any page on your site with a fancy jQuery auto-complete box? This tutorial has your solution!
This tutorial assumes that you know how to use Django and have an existing website and a compatible data model.
The first thing you will want to do is install django-selectable, you can click the package link above to locate it's download page or simply use pip. Your model will need to have a field which stores the entire content of the article you would like to be searchable. Installing django-selectable is as simple as adding a simple entry to your urls.py:
urlpatterns = patterns('', # Other patterns go here (r'^selectable/', include('selectable.urls')), )
In order to make this all work, you will need to create a couple files and edit your base.html template to add additional javascript:
<link type="text/css" href="/css/redmond/jquery-ui-1.8.16.custom.css" rel="stylesheet" /> <link type="text/css" href="/css/dj.selectable.css" rel="stylesheet" /> <script type="text/javascript" src="/js/jquery-1.6.2.min.js"></script> <script type="text/javascript" src="/js/jquery-ui-1.8.16.custom.min.js"></script> <script type="text/javascript" src="/js/jquery.dj.selectable.js"></script>
If you would like your users to be-able to click on the options in the drop down to automatically move to the article, use this jQuery code:
$(function(){ $('[id=id_q]').bind('autocompleteselect', function(event, ui){ $(this).val(ui.item.value); $(this).parents("form").submit(); }); });
In order for django-selectable to work, there needs to be a lookups.py file in your apps directory. This file contains a class which describes how the queryset is generated to perform a lookup. Here is a sample lookups.py to get you started:
from selectable.base import ModelLookup from selectable.registry import registry, LookupAlreadyRegistered from kbase.models import Entry class EntryLookup(ModelLookup): model = Entry search_field = 'content__contains' try: registry.register(EntryLookup) except LookupAlreadyRegistered: pass
There is a very strange bug in django-selectable, where it attempts to import the lookups.py file twice, and I'm not entirely sure why. Next we need a form, this can live in your apps directory forms.py module, here is a working example to get you started:
from django import forms from selectable.forms import AutoCompleteWidget from kbase.lookups import EntryLookup class KbaseForm(forms.Form): q = forms.CharField( label='', widget=AutoCompleteWidget(EntryLookup), required=False, )
I'll explain what this does, it creates a form which we will render later. It assigns a widget to the CharField, which is the django-selectable auto-complete jQuery widget. We tell the widget the location of the lookup we created earlier. The next thing we need to do is render the form for the user can actually interact with it. Since we would like to be-able to reference this on any page we would like, and not necessarily have it on every single page, we will create a template tag. In order to do this, you will need to create a new directory in your apps directory called templatetags, and inside here, touch a file called __init__.py to make the directory a python package. I have a file called kbase_tags.py, which I stored this template tag inside of, and here is the code for it:
from django import template from kbase.forms import KbaseForm register = template.Library() @register.inclusion_tag('kbase/kbase_lookup.html') def kbase_lookup(): return {'form': KbaseForm()}
Almost done, now all we need to create is the template for the form:
<form action="{% url kbase_search %}" method="get"> {{form}} </form>
The template for the form is very simple and very portable throughout your website. Here is a sample view to get you started with adding the finishing touches to make it all come together nicely:
def search(req): if 'q' in req.GET: entry_list = Entry.objects.filter( Q(title__contains=req.GET['q']) | Q(content__contains=req.GET['q']) ) if entry_list.count() == 1: return redirect(entry_list[0]) return render_to_response("kbase/search.html", {'entry_list':entry_list}) else: return render_to_response("kbase/search_error.html", {'error':'Search query missing.'})
It is good to be backwards compatible and support older browsers too, or users who just have JavaScript disabled. This view will also check to see if a single result is being returned, if so, then just redirect to that result instead of a showing a list with one result.
That's all there is to it to using django-selectable in a nutshell. This definitely adds an element of interactivity to a website, and also allows users to perform searches on your content with very little effort and instant results.
The double import/LookupAlreadyRegistered error is in part related to the magic that manage.py does to your Python path (see). It allows you to import a module which is inside your project with both `from myproject.myapp import models` or `from myapp import models`. Thankfully this hack is being removed in the upcoming 1.4 release. I really enjoyed this tutorial. I'm also glad to see django-selectable working side by side with Twitter Bootstrap. Best, Mark
Thank you for your comment Mark. I just realized that you are actually the maintainer of django-selectable. Thank you for creating such a great django app, and keeping it very plug-able into projects. It works very nicely alongside of Twitter Bootstrap. The only problem I ran up against was that I cannot add custom html tag properties using the AutoCompleteWidget which Twitter Bootstrap requires. So instead of rendering the form django-style, I manually put in the INPUT tag with the appropriate tag properties. The extra properties are "class" and "placeholder". It would work fine without, but wouldn't keep the consistent look. Is there a way to add custom tag properties using django.forms? I would hope at least adding a "class" property should be doable. Anyways, thank you again.
Yep it's me. I was cruising around looking for people complaining about my project and stumbled onto your very nice post. You can add attributes to the widgets (this isn't specific to django-selectable) by passing them as attrs such as selectable.AutoCompleteWidget(FruitLookup, attrs={'class': 'foo', 'placeholder': 'bar'}). Here are the official docs It's kind of ugly to me because you are basically writing HTML in Python. There are some other projects out there that are made to help with rendering forms to work with Bootstrap like. They should work just fine with django-selectable and if they don't just open and issue.
Thanks for sharing such a valuable method. I have been using selectable autocomplete for a while. I decided to use your method for search with selectable. But when use whole method above in terminal i get error such "[01/Jul/2012 18:29:14] "GET /selectable/tanim-descriptionlookup/?term=yyyyyyy×tamp=1341185354816 HTTP/1.1" 500 103788" I have created a yyyyyyy object. But having trouble to see the object in the autocomplete search bar. Do you have any idea? Should I use a different view or template? Thanks Tuna | http://pythondiary.com/tutorials/add-ajax-powered-quick-search-any-django-site.html | CC-MAIN-2019-22 | refinedweb | 1,204 | 58.99 |
The Samba-Bugzilla – Bug 12469
CTDB lock helper getting stuck trying to lock a record
Last modified: 2017-02-13 15:59:09 UTC
This is due to a bug related to robust mutex scheduling in Linux/glibc.
This bug has been reported to redhat bugzilla.
Created attachment 12830 [details]
Patches for v4-6
Created attachment 12831 [details]
Patches for v4-5
Hi Karolin,
This is ready for 4.5 and 4.6.
Thanks...
(In reply to Martin Schwenke from comment #4)
Pushed to autobuild-v4-{6,5}-test.
(In reply to Karolin Seeger from comment #5)
Pushed to both branches.
Closing out bug report.
Thanks!
Patch seems to brake build on e.g. SUSE 11.1:
[2788/3997] Compiling ctdb/tests/src/test_mutex_raw.c
../ctdb/tests/src/test_mutex_raw.c: In function 'main':
../ctdb/tests/src/test_mutex_raw.c:205: error: 'PTHREAD_MUTEX_ROBUST' undeclared (first use in this function)
../ctdb/tests/src/test_mutex_raw.c:205: error: (Each undeclared identifier is reported only once
../ctdb/tests/src/test_mutex_raw.c:205: error: for each function it appears in.)
Waf: Leaving directory `/root/build/4.5.5-13/BUILD/samba-4.5.5/bin'
Build failed: -> task failed (err #1):
{task: cc test_mutex_raw.c -> test_mutex_raw_110.o}
make: *** [all] Error 1
Created attachment 12866 [details]
skip build of test_mutex_raw if robust mutexes are not available
With the attached patch waf skips the build of the test if robust mutexes are not available.
(In reply to Björn Baumbach from comment #8)
Hi Björn,
can you change this pass
enabled=bld.env.HAVE_ROBUST_MUTEXES
to bld.SAMBA_BINARY() ?
Thanks!
metze
Comment on attachment 12866 [details]
skip build of test_mutex_raw if robust mutexes are not available
Does not work, sorry.
(In reply to Björn Baumbach from comment #10)
Why does the patch not work?
Created attachment 12870 [details]
Extra patches for master
Hi Karolin,
Can you check if the extra patches fix the issue?
(In reply to Amitay Isaacs from comment #13)
The test should use the following includes
#include "replace.h"
#include "system/filesys.h"
#include "system/wait.h"
#include "system/threads.h"
and don't use the _np() functions directly.
Created attachment 12874 [details]
Extra patches for master
(In reply to Stefan Metzmacher from comment #14)
Yes, I wrote this code for standalone testing.
I see that system/threads.h takes care of defining PTHREAD_MUTEX_ROBUST and pthread_mutexattr_setrobust(). I have updated the patches.
(In reply to Amitay Isaacs from comment #16)
You should also remove the pthread_mutex_consistent_np() prototype
and use pthread_mutex_consistent() instead.
Created attachment 12879 [details]
Extra patches for master
(In reply to Stefan Metzmacher from comment #17)
Done.
(In reply to Amitay Isaacs from comment #19)
Thanks! Pushed to autobuild.
Created attachment 12892 [details]
Extra patches for v4-6
Created attachment 12893 [details]
Extra patches for v4-5
Karolin,
The extra patches should fix the build on suse 11.1 or any other older glibc distro.
They are ready for 4.5 and 4.6.
(In reply to Amitay Isaacs from comment #23)
Pushed to autobuild-v4-{5,6}-test.
(In reply to Karolin Seeger from comment #24)
Pushed to both branches.
Closing out bug report.
Thanks! | https://bugzilla.samba.org/show_bug.cgi?id=12469 | CC-MAIN-2018-09 | refinedweb | 518 | 60.51 |
Optimistic.
What is optimistic locking?
Before defining optimistic locking, I'll first describe it's opposite, pessimistic locking. Suppose you need to update a record in your database, but you cannot do so atomically -- you'll have to read the record, then save it in a separate step. How do you ensure that, in a concurrent environment, another thread doesn't sneak in and modify the row between the read and update steps?
The answer depends on the database you are using, but in Postgres you would issue a
SELECT ... FOR UPDATE query. The
FOR UPDATE locks the row for the duration of your transaction, effectively preventing any other thread from making changes while you hold the lock. SQLite does not support
FOR UPDATE because writes lock the entire database, making row-level (or even table-level) locks impossible. Instead, you would begin a transaction in
IMMEDIATE or
EXCLUSIVE mode before performing your read. This would ensure that no other thread could write to the database during your transaction.
This type of locking is problematic for, what I hope are, obvious reasons. It limits concurrency, and for SQLite the situation is much worse because no write whatsoever can occur while you hold the lock. This is why optimistic locking can be such a useful tool.
Optimistic locking
Unlike pessimistic locking, optimistic locking does not acquire any special locks when the row is being read or updated. Instead, optimistic locking takes advantage of the database's ability to perform atomic operations. An atomic operation is one that happens all at once, so there is no possibility of a conflict if multiple threads are hammering away at the database.
One simple way to implement optimistic locking is to add a
version field to your table. When a new row is inserted, it starts out at version 1. Subsequent updates will atomically increment the version, and by comparing the version we read with the version currently stored in the database, we can determine whether or not the row has been modified by another thread.
Implementation
Here's the code for the example implementation included in the documentation:
from peewee import * class ConflictDetectedException(Exception): pass class BaseVersionedModel(Model): version = IntegerField(default=1, index=True) def save_optimistic(self): if not self.id: # This is a new record, so the default logic is to perform an # INSERT. Ideally your model would also have a unique # constraint that made it impossible for two INSERTs to happen # at the same time. return self.save() # Update any data that has changed and bump the version counter. field_data = dict(self._data) current_version = field_data.pop('version', 1) field_data = self._prune_fields(field_data, self.dirty_fields) if not field_data: raise ValueError('No changes have been made.') ModelClass = type(self) field_data['version'] = ModelClass.version + 1 # Atomic increment. query = ModelClass.update(**field_data).where( (ModelClass.version == current_version) & (ModelClass.id == self.id)) if query.execute() == 0: # No rows were updated, indicating another process has saved # a new version. How you handle this situation is up to you, # but for simplicity I'm just raising an exception. raise ConflictDetectedException() else: # Increment local version to match what is now in the db. self.version += 1 return True
Here’s a contrived example to illustrate how this code works. Let’s assume we have the following model definition. Note that there’s a unique constraint on the
username – this is important as it provides a way to prevent double-inserts, which the
BaseVersionedModel cannot handle (since inserted rows have no version to compare against).
class User(BaseVersionedModel): username = CharField(unique=True) favorite_animal = CharField()
We'll load these up in the interactive shell and do some
update operations to show the code in action.
Example usage
To begin, we'll create a new
User instance and save it. After the save, you can look and see that the version is 1.
>>> u = User(username='charlie', favorite_animal='cat') >>> u.save_optimistic() True >>> u.version 1
If we immediately try and call
save_optimistic() again, we'll receive an error indicating that no changes were made. This logic is completely optional, I thought I'd include it just to forestall any questions about how to implement it:
>>> u.save_optimistic() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "x.py", line 18, in save_optimistic raise ValueError('No changes have been made.') ValueError: No changes have been made.
Now if we make a change to the user's favorite animal and save, we'll see that it works and the version is now increased to 2:
>>> u.>> u.save_optimistic() True >>> u.version 2
To simulate a second thread coming in and saving a change, we'll just fetch a separate instance of the model, make a change, and save, bumping the version to 3 in the process:
# Simulate a separate thread coming in and updating the model. >>> u2 = User.get(User.username == 'charlie') >>> u2.>> u2.save_optimistic() True >>> u2.version 3
Now if we go back to the original instance and try to save a change, we'll get a
ConflictDetectedException because the version we are saving (2) does not match up with the latest version in the database (3):
# Now, attempt to change and re-save the original instance: >>> u.>> u.save_optimistic() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "x.py", line 30, in save_optimistic raise ConflictDetectedException() ConflictDetectedException: current version is out of sync
And that's all there is to it!
Thanks for reading
Thanks for reading this post, I hope that this technique and the example code will be helpful for you! Don't hesitate to add a comment if anything is unclear or if you have any questions.
Commenting has been closed, but please feel free to contact me | http://charlesleifer.com/blog/optimistic-locking-in-peewee-orm/ | CC-MAIN-2020-50 | refinedweb | 953 | 55.74 |
I
So I am programming an application using Java Swing, and I am wondering how to make it so that when the entire application gets resized only certain panels get resized in various ways. Take a look at the two pictures below: I have three vertical pane'm trying to position some JButton objects, but I need the coordinates to position them in terms of the center of the button, instead of the upper left hand corner as default, is there a way to do this? //Button L1 (Left 1) buttonL1 = new JButton( "
Trying to understand how the GridBagLayout for Java works. Never used it before so its probably a stupid error that I've made. My objective is to place a JLabel at the top center of the page. I've been using the java tutorials on Oracle but have had
I didn't find good title I guess sorry for this, As you will see bottom I've called ssframe.add(new JScrollPane(table),BorderLayout.CENTER); 3 times and if I click the typeButton then unitButton it makes 2 table on the screen. I want to have one tabl
I'm really struggling with creating a complicated layout. Here is a picture of what I want in the end: I've attempted to divide and conquer, creating small panels and putting those in other panels. At first I figured a borderlayout for the main conta
I need to make a program that ends up looking like this: My code so far is what's posted and I can't seem to wrap my head around how to make this look exactly like the image I provided. I just don't really know what else I can do, I've tried looking
i am trying to make a simple interface in java,but i have a problem with adding multiple panels into a frame.I want it to be a cafe software so there will be multiple tables.Here is my code public class CafeView extends JFrame{ private JButton takeMo
In my swing application I have to create a number panel for taking user input. The required numbers can change according to requirement (For example at one place it contain numbers 1-90, at another place it contains 1-36 numbers). My problem is that
Question can be incorrect, but i don't know how to ask correctly. I am sorry for that. Here the problem: My JFrame has CardLayout as the layout manager. I have three JPanel's and I switching between them. Everything was good until i had to add 2 JPan
I'm trying to display a different JFrame after the user does something in the same window they are using, similar to a login feature. Haven't been able to figure out how to do that. The workaround I have now is to just hide the current JFrame and the
I have issues with the GridLayout i think. I tried to place 25 buttons to the Jpanel "lightJpanel" but all the buttons are only on one at the top left of my lightJpanel. I tried many things but i don't find the issue... The buttons are created but th
I have a problem because I want to put a small JPanel inside a different JPanel, but I can't get the small JPanel to show. What am I missing? this.setLayout(new BorderLayout(5,5)); this.cardsPanel= new JPanel(); this.cardsPanel.setBackground(Color.DA
The following method has the end result I need. The problem is, when the method is called, the starting pnlMain stays visible until the new pnlMain is created and replaces the original. The point of this method is to change the panel by creating a ne
Right now I have the following code which adds the JLabel to the top center of the Panel, which I assume is the default imageLabel = new JLabel(); ImageIcon customer1 = new ImageIcon("src/view/images/crab.png"); imageLabel.setIcon(customer1); storePa
I have a somewhat simple GUI and I am trying to create buttons and controls for the left side of the window. The right side has a text area which will eventually display content. The left side contains buttons and controls for the user to manipulate.
As the title said, I'm trying to put 4 different JLabels at each of the corners of a JFrame. I want them to stay there forever even if I try to resize the JFrame I've tried using a layout manager but I just can't get it right. ImageIcon icon; JLabel
I am having an issue with the alignment of components in my JPanel which has a GridBagLayout. The JLabel is on the top, but not centered, and the JButtons underneath are positioned all the way to the right. Is there any way I can position them both i | http://www.dskims.com/tag/layout-manager/ | CC-MAIN-2018-22 | refinedweb | 813 | 67.89 |
Here are some notes I made -- I hope it might save someone a bit of time.
So, I finally tried out a few other backends on mac os x. I had been recommending and using TkAgg, as this works out of the box on mac os x. However, it seems unsnappy sometimes, and there was this strange issue with the first window not giving control back to the command line.
QT4 takes *forever* to compile, but it seems to compile easier now than previous versions that needed a small library hack. The default configuration compiles and installs fine. The other tools (PyQt4 and SIP) also compile and install painlessly with the default configuration.
I initially forgot to set the -q4thread for ipython (since the other -pylab flag is hidden in a launching script). After that it worked mostly fine.
I found that the correct threading was sensitive to how I started ipython. I have been using terminal, and I was starting ipython like this:
bash -l -c /path/to/ipython -q4thread
from within a terminal .term file (i.e., the terminal starts running ipython automatically). This seems to not work great. However, when I put this command in a script, and run the script like
bash -l -c pylab_start_script
thinks work as expected. This is also true when just typing these commands in on the command line. I almost always start ipython from a terminal .term file from quicksilver. This gives me a dedicated (color coded) ipython window instantly that does not take away my shell. This is all pretty slick, and I am pleased with the setup now.
Developers: Finally, I had to make some small changes to the qt4 backend so that things worked right. One is an essential change -- the latin1() method no longer exists in the newer qt. The other is a cosmetic change so that I can see the cursor position in the toolbar better. Diff below.
-Rob
Index: backend_qt4.py
···
--- backend_qt4.py (revision 2999)
+++ backend_qt4.py (working copy)
@@ -148,7 +148,7 @@
def _get_key( self, event ):
if event.key() < 256:
- key = event.text().latin1()
+ key = str(event.text())
elif self.keyvald.has_key( event.key() ):
key = self.keyvald[ event.key() ]
else:
@@ -290,7 +290,7 @@
# The automatic layout doesn't look that good - it's too close
# to the images so add a margin around it.
- margin = 4
+ margin = 12
button.setFixedSize( image.width()+margin, image.height()+margin )
QtCore.QObject.connect( button, QtCore.SIGNAL( 'clicked()' ),
@@ -301,7 +301,7 @@
# The stretch factor is 1 which means any resizing of the toolbar
# will resize this label instead of the buttons.
self.locLabel = QtGui.QLabel( "", self )
- self.locLabel.setAlignment( QtCore.Qt.AlignRight | QtCore.Qt.AlignVCenter )
+ self.locLabel.setAlignment( QtCore.Qt.AlignRight | QtCore.Qt.AlignTop )
self.locLabel.setSizePolicy(QtGui.QSizePolicy(QtGui.QSizePolicy.Ignored,
QtGui.QSizePolicy.Ignored))
self.layout.addWidget( self.locLabel, 1 )
Rob Hetland, Associate Professor
Dept. of Oceanography, Texas A&M University
phone: 979-458-0096, fax: 979-845-6331 | https://discourse.matplotlib.org/t/notes-on-switching-backends-to-qt4-on-mac-os-x/6758 | CC-MAIN-2022-21 | refinedweb | 493 | 59.7 |
Pythonista 1.6 Beta
Found a bug. In the UI Editor when delete enabled is off, you can still delete rows in a table.
- TutorialDoctor
Will dialogs allow me to create tables with cells (rows and columns)?
This would be most useful, otherwise I don't see an advantage to dialogs over UI.
Pinch to zoom in the UI editor would be nice too, as well as saving UI control presets.
The beta doesn't seem to ask for permission to use Location Services when using the location module for the first time.
I thought it might be for all required permissions, but the Reminders seems to work fine.
Anyone have any ideas on this?.
@andymitchhank Thanks, I've been able to reproduce the issue. Apparently this has to do with some new requirements for location permissions in iOS 8, looking into it.
- wradcliffe
I am still working on reproducing an issue that occurs in one of the callbacks in the cb module. What happens is some kind of heap corruption when I call functions in other modules. I was writing some code that used str.append() and getting all kinds of wierd behavior. It looks like either heap corruption or blown stack.
Trouble started when I modified the code trying to print out returned values in a characteristic. I used (print c.value.encode('hex')) in two callbacks and it did not "work". The program just printed nothing but also just returned from the callback and stopped working.
I could use a few hints on how to get a good repro case. I could post my code, but the effect is random and the code needs to access a specific device. Any ideas on how to stress the heap or stack in this callback environment would be appreciated.
- polymerchm
And how does one find OMZ's e-mail address? happy to help.
Click on the word "email" in the very first post in this thread... It is a hyperlink.
- Gcarver166
ui.TableView.row_height seems to always be -1 for me. This is new behavior. I can provide repro code if you need it.
Trying to make a table action so when you select a row it checks off the reminder, is this how that logic would work?
def picked(sender): r.title = sender.data_source.items[row] if r.title in reminders.get_reminders(completed=False): print 'good' r.completed = True
ui.TableView.row_height seems to always be -1 for me. This is new behavior. I can provide repro code if you need it.
Thanks, should be fixed in the build after the next one (already uploaded that).
Trying to make a table action so when you select a row it checks off the reminder, is this how that logic would work?
I can't see where
ris coming from in that example and why you would set its title when you actually want to set its completion state.
- misha_turnbull
Sounds great! Just send you an email (or two--sorry)
@techteej Here's a very simple example of a table view UI to check off reminders:
import reminders import ui def table_action(sender): item = sender.items[sender.selected_row] r = item['reminder'] r.completed = True r.save() del sender.items[sender.selected_row] def main(): v = ui.TableView() v.frame = (0, 0, 500, 500) v.name = 'To Do' all_reminders = reminders.get_reminders(completed=False) items = [{'title': r.title, 'reminder': r} for r in all_reminders] data_source = ui.ListDataSource(items) data_source.action = table_action v.data_source = data_source v.delegate = data_source v.present('sheet') main()
Has anyone else seen this.
I can't install 1.6 because when I click on "Open in Testflight" when reading the apple invite email in gmail it insists on opening iTunes on the Testflight page. When I click on "OPEN" Testflight then says I have to click the link in the email. Has anyone else been able to install from gmail? Is this a gmail bug? I haven't configured apple email so I can't click the link from there.
Any suggestions?
Edit: Solved it. It was Chrome. Gmail launches Chrome to resolve urls. It required cut and paste of the link from Crome to Safari.
Still reading the doc. (Actually I connected with the SensorTag and my heart rate monitor first ;-) ) Thanks for Reminders. I didn't realize how much useful functionality (Calendars, Alarms, even geo-location alarms) that included.
Reminders question: This may be a limitation in the apple framework but would it be possible to add an action_url to reminder alarms like there is in the notifications module? It would be nice to implement custom behavior when an alarm happens, from snooze to marking a reminder as done or even changing the reminder contents. | https://forum.omz-software.com/topic/1946/pythonista-1-6-beta/?page=3 | CC-MAIN-2020-34 | refinedweb | 786 | 77.84 |
File Zipper : Visual C#
File Zipper is a Visual C# program for compressing and decompressing files. It creates .zip files and can decompress .zip files as well. The support for Zip files was a very useful feature introduced in .NET Framework 4.5. The DLL named System.IO.Compression.dll provides two new classes named ZipArchive and ZipFile. In this program, we have used the ZipFile class for accomplishing the compression and decompression of data.
If you just want to try the program, you can download File Zipper here or read more about it below. This program requires .NET Framework 4.5
File Zipper
As mentioned above, we have used the ZipFile class in this program for compressing and decompressing files. This program uses two static methods of the ZipFile class, namely, CreateFromDirectory() and ExtractToDirectory(). The CreateFromDirectory() method creates a .zip file by compressing the contents of the specified directory. The ExtractToDirectory() method extracts the contents of .zip to the specified directory.
Since the ZipFile class is a part of the System.IO.Compression namespace, we have included the statement using System.IO.Compression; in our program. Also, a reference to System.IO.Compression.dll has been added. We have also added using System.IO; because we had to use some methods of the Path class, which is a part of System.IO.
As we mentioned above, the actual task of compression and decompression is pretty simple. We just need to call the methods CreateFromDirectory() and ExtractToDirectory() with the appropriate arguments. The real difficulty lies in figuring out the appropriate arguments for these methods. As you may see in the screenshots below, we have chosen to create a .zip file using the directory “C:\Users\Gaurav\ADB” and the output .zip file would be at “C:\Users\Gaurav\ADB.zip”. Also, we have chosen to decompress the “C:\Users\Gaurav\ADB.zip” and extract its contents to “C:\Users\Gaurav\ADB”. Simple as these things may seem, a fair bit of messing with strings is required.
This is where the Path class plays its part. We have used the following static methods of the Path class : GetFullPath(), GetDirectoryName() and GetFileNameWithoutExtension(). The process of compression and decompression and the role of these methods in these tasks have been described below :
Compression
First of all, a FolderBrowserDialog is displayed and the users selects the folder which is to be compressed. After this, the following tasks are performed :
- The full directory path i.e. “C:\Users\Gaurav\ADB” is retrieved using the GetFullPath() method
- The container directory path i.e. “C:\Users\Gaurav” is retrieved using the GetDirectoryName() method. This is crucial because the resultant .zip file would have to be stored at the same location as the directory that was compressed.
- The name of the directory to be compressed is retrieved by removing “C:\Users\Gaurav” from “C:\Users\Gaurav\ADB”. This is done by using the Replace() method of the string class.
- Finally, the CreateFromDirectory() method is called and the task of compression is performed.
Decompression
First of all, an OpenFileDialog is displayed and the user selects the .zip file which is to be decompressed. After this, the following tasks are performed :
- The full path of the .zip file is retrieved using the GetFullPath() method.
- The name of the .zip file without the extension is retrieved using the GetFileNameWithoutExtension() method. This is very important since the newly created directory should have the same name as the .zip file.
- The container directory path is retrieved using the GetDirectoryName() method.
- The full path of the resultant directory is generated by adding the name of the .zip file (without extension) to the container directory path.
- Finally, the ExtractToDirectory() method is called and the task of decompression is performed.
Screenshots of File Zipper
We suppose you have understood the above mentioned processes. To fully understand the process, have a look at the screenshots below :
File Zipper – Idle
Compression
File Zipper – Compression Step 1
File Zipper – Compression Step 2
Decompression
File Zipper – Decompression Step 1
File Zipper – Decompression Step 2
Download File Zipper
We believe you like the concept of this program. You can download the source and/or the executable of File Zipper by clicking on the links below :
Click here to download the source code of File Zipper (99 KB)
Click here to download only the executable of File Zipper (16 KB). This program requires .NET Framework 4.5
If you like this File Zipper program, help others find it by sharing it…
It is a nice project! I was looking for a C# Project on File Compression and I landed here. It’s great 🙂
Stay tuned for more COOL projects.. | https://www.wincodebits.in/2016/02/file-zipper-visual-c.html | CC-MAIN-2018-34 | refinedweb | 777 | 59.6 |
AWS Cloud Operations & Migrations Blog metrics to power dashboards, alarms, and other tools that rely on accurate and timely metric data.
You can use metric streams to send metrics to partner solutions, including Datadog, Dynatrace, New Relic, Splunk, and Sumo Logic. Alternatively, you can send metrics to your data lake built on AWS, such as on Amazon Simple Storage Service (Amazon S3). You can continuously ingest monitoring data and combine billing and performance data with the latest CloudWatch metric data to create rich datasets. You can then use Amazon Athena to get insights into cost optimization, resource performance, and resource utilization. Metric streams are fully managed, scalable, and easy to set up.
In this post, I will show you how to store data from metric streams in an S3 bucket and then use Amazon Athena to analyze metrics for Amazon Elastic Compute Cloud (Amazon EC2). I will also show you how to look for opportunities to correlate the EC2 metric data with the AWS Cost and Usage Report.
Figure 1 shows the architecture for the solution I discuss in this post.
Figure 1: Solution architecture
The workflow includes the following steps:
- Amazon CloudWatch metrics data is streamed to a Kinesis Data Firehose data stream. The data is then sent to an S3 bucket.
- AWS Cost and Usage Reports publish the AWS Billing reports to an S3 bucket.
- AWS Glue crawlers are used to discover the schema for both datasets.
- Amazon Athena is used to query the data for metric streams and AWS Cost and Usage Reports.
- (Optional) Amazon QuickSight is used to build dashboards.
Prerequisite
Enable AWS Cost and Usage Report for your AWS account.
Walkthrough
Create a metric stream (All metrics)
To get started, in the left navigation pane of the Amazon CloudWatch console, expand Metrics, choose Streams, and then choose Create metric stream button.
Alternatively, you can use the CloudWatch API, AWS SDK, AWS CLI, or AWS CloudFormation to provision and configure metric streams. Metric streams support OpenTelemetry and JSON output formats.
Figure 2: CloudWatch metric streams in the CloudWatch console
By default, metric streams send data from all metrics in the AWS account to the configured destination. You can use filters to limit the metrics that are being streamed. On the Create a metric stream page, leave the default selections.
Figure 3: Create a metric stream
A unique name will be generated for the stream. The console will also create an S3 bucket with a unique name to store the metrics, an AWS Identity and Access Management (IAM) role for writing to S3, and a Kinesis Data Firehose data stream to stream metrics to S3.
Figure 4: Resources to be added to the account
Choose Create metric stream.
Figure 5: Metric streams tab
Create a metric stream (Selected namespaces)
You can also create a metric stream to capture CloudWatch metrics specific to a service like Amazon EC2. On the Create a metric stream page, under Metrics to be streamed, choose Selected namespaces and then select EC2, as shown in Figure 6:
Figure 6: Creating a metric stream for EC2 metrics only
You now have two metric streams: one that has data for all metrics and one that has data for EC2 metrics only.
Figure 7: Metric streams displayed in the console
Set up AWS Glue and Amazon Athena
AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all the capabilities required for data integration so that you can start analyzing your data and putting it to use in minutes.
Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 using standard SQL. With just a few actions in the AWS console, you can point Athena at your data stored in Amazon S3 and start using standard SQL to run ad-hoc queries and get results in seconds. Athena natively supports querying datasets and data sources that are registered with the AWS Glue Data Catalog.
You will use AWS Glue to connect to your metric data sources in S3.
- In the AWS Glue console, choose Crawlers, and then choose Add crawler.
- For Crawler name, enter
metric-stream-full.
- On Specify crawler source type, leave the default options.
- On Add a data store, for Choose a data store, choose S3. Include the path to the bucket that has data for all metrics.
- On Add another data store, choose No.
- Create an IAM role that will be used by AWS Glue. Make sure that the IAM role has access to the AWSGlueServiceRole managed policy and the S3 bucket.
- Create a schedule with an hourly frequency.
- On Configure the crawler’s output, for Database, choose default.
- Create the crawler, and then choose Run crawler.
When you’re done, you can look at the discovered table schema from the AWS Glue Data Catalog. Make a note of the location of the S3 bucket that has metrics from all namespaces.
Figure 8: Table details in the AWS Glue Data Catalog
Now create a table in the AWS Glue Data Catalog for EC2 metrics. Repeat steps 1-9. In step 4, make sure you use the S3 bucket location for EC2 metrics.
Figure 9: Table details in the AWS Glue Data Catalog
For both AWS Glue Data Catalog tables, the timestamp column is recognized as
bigint by default. Later in the post, you’ll write Athena queries. To make that task easier, you can manually change the data type to
timestamp. For each AWS Glue Data Catalog table, choose Edit schema and change the timestamp column to the
timestamp data type.
Figure 10: Table definition and schema details
Edit and run the crawlers again to update all the partitions with the new schema. Under Configuration options, choose Ignore the change and don’t update the table in the data catalog and then select the Update all new and existing partitions with metadata from the table checkbox, as shown in Figure 11.
Figure 11: Configure the crawler’s output
Run the AWS Glue crawler again with these settings. You’re now ready to start analyzing your metric streams data with Amazon Athena.
Run queries in Athena
Open the Athena console and start running queries on metric streams data.
Figure 12: Previewing data in metric streams table using Amazon Athena
Query 1: Find average and max CPU utilization for a given instance ID
Figure 13: Using Athena to find average and max CPU utilization of an EC2 instance
Query 2: Find average CPU utilization with data aggregated across five-minute intervals
Figure 14: Using Athena to find average CPU utilization of an EC2 instance aggregated over five minutes
Query 3: Find average CPU utilization of all EC2 instances in an AWS Region. Run this query against the Athena table with EC2-only metrics.
Figure 15: Average CPU utilization of all EC2 instances in a Region
Correlate with AWS Cost and Usage Reports
The AWS Cost and Usage Report contains the most comprehensive information available on your costs and usage. For information about how you can quickly and easily enable, configure, and query your AWS cost and usage information using Athena, see the Querying your AWS Cost and Usage Report using Amazon Athena blog post and Querying Cost and Usage Reports using Amazon Athena in the AWS Cost and Usage Report User Guide.
Query 4: Find EC2 instances running in your account in the us-east-2 Region and the amount you’re paying on an hourly basis
When you run this query in Athena, you see multiple t3.small instances are running every hour in us-east-2. Note the line_item_blended_cost (0.0208 USD/hour).
Figure 16: Analyze EC2 hourly charges using Athena
You can aggregate the cost data from this query across all EC2 instances running in a Region and compare it with the average CPU utilization of EC2 instances.
Query 5: Aggregate cost data across all EC2 instances and compare it with average CPU utilization of instances
From the result of query, you can see that you’re spending roughly 0.25 USD every hour on EC2 instances with an average CPU utilization across all instances of approximately 2%.
Figure 17: Use Athena to compare EC2 resource utilization and costs
Because overall CPU utilization for all EC2 instances is fairly low at 2%, there might be an opportunity for you to right-size some of the instances.
Query 6: Find average and max CPU utilization of each instance running in this Region
Figure 18: Average and max CPU utilization for all instances in a Region
You can look at this query result to find out which of the EC2 instances need to be downsized. Some of the EC2 instances in this example aren’t being used to capacity, which means you can use smaller-sized instances instead. You can also do ORDER BY CPU utilization instead of datetime, which is shown in Figure 18.
After you have completed the right-sizing, you can execute Query 5 to see if the hourly cost numbers are trending down. In Figure 19, the cost column shows a decrease in overall spend.
Figure 19: Use Athena to compare EC2 resource utilization and costs
Cleanup
To avoid ongoing charges to your account, delete the resources you created in this walkthrough.
- Delete the metric streams, the corresponding Kinesis Data Firehose data streams, and the IAM role.
- Delete the AWS Glue crawlers and tables in the AWS Glue Data Catalog.
- Delete the S3 buckets where the metric data is stored.
Conclusion
In this post, I showed you how to use metric streams to export CloudWatch metrics to S3. You used Athena to look at metrics like EC2 CPU utilization. You learned how to use Athena to combine data from the AWS Cost and Utilization Report and metric streams to examine historical trends and perform right-sizing and cost optimization. You might want to use Amazon QuickSight to create dashboards from queries you have written in Athena.
Metric streams can help you run reports on other AWS services, including Amazon RDS, Amazon Elastic Block Store, S3, and AWS Lambda. | https://aws.amazon.com/blogs/mt/cost-optimization-aws-amazon-cloudwatch-metric-streams-aws-cost-and-usage-reports-and-amazon-athena/ | CC-MAIN-2022-40 | refinedweb | 1,694 | 59.53 |
Concise programming 2022-06-24 07:34:46 阅读数:561
linux System :utf-8
windows:gbk
mac:utf-8
When the decoding and encoding methods are different, there will be garbled code !
We recommend popularizing utf-8 code
An abstract class for reading character streams . The only way Subclasses must implement read(char[], int, int) and close(). most however , Subclasses will override some of the methods defined here in order Provide higher efficiency 、 Additional functions or both .
Abstract classes for writing character streams . The only way Subclasses must implement write(char[], int, int)、flush() and close(). However , Most subclasses will override some of the methods defined here To provide more efficiency 、 Additional functions or both .
Read text from a character file using the default buffer size . Decode from bytes to characters Use specified Character set Or platform Default character set .
this FileReader Used to read a character stream . For reading Raw byte stream , Consider using FileInputStream.
import java.io.FileNotFoundException; import java.io.FileReader; import java.io.IOException; public class demo9 { public static void main(String[] args) throws IOException { String path = "C:\\Users\\Syf200208161018\\Desktop\\ New text document .txt"; FileReader fileReader = new FileReader(path); int count = 0; char[] strings = new char[1024]; while ((count = fileReader.read(strings))!=-1){ System.out.println(new String(strings,0,count)); } fileReader.close(); } }
Use the default buffer size to write text to a character file . From character encoding to bytes Use specified Character set Or platform Default character set .
Whether a file is available or can be created depends on Bottom platform . Especially on some platforms , Allow files Only one person is open to writing FileWriter( Or other files object ) once . under these circumstances , Constructors in this class If the file involved has been opened , It will fail .
this FileWriter For writing character streams . Writing function Raw byte stream , Consider using FileOutputStream.
import java.io.FileWriter; import java.io.IOException; public class demo10 { public static void main(String[] args) throws IOException { String path = "C:\\Users\\Syf200208161018\\Desktop\\neww.txt"; FileWriter fileWriter = new FileWriter(path); fileWriter.write(" Anhui Normal University subsea tunnel "); fileWriter.flush(); fileWriter.close(); } } | https://en.javamana.com/2022/175/202206240734394031.html | CC-MAIN-2022-33 | refinedweb | 350 | 51.24 |
i wrote over my project and had to start again, i cannot get my report to loop 10 times and add 1 to year, year 1, year 2, etc, been working at this all day. please help...
/Define a class Hammurabi with a member function that takes a parameter;
//Create a Hammurabi object and call its displayMessage function.
#include <iostream> //Header library for the input, output stream
#include <cstdlib> //Header library defines general purpose functions including random number generation
#include <time.h> //Header library allows the change of numbers per certain length of time
using namespace std;
//Hammurabi class definition
class Hammurabi
{
public:
//function that displays a message to the Hammurabi user
//Print out the introductory message
void displayMessage (int year, int starved, int immigrants, int population, int land, int harvest, int rats, int storage, int trade)
{
cout << "Hammurabi: I beg to report to you that in Year " << year << endl << endl;
cout << starved << " people starved;" << endl;
cout << immigrants << " immigrants came to the city" << endl;
cout << "The city population is " << population << endl;
cout << "The city now owns " << land << " acres" << endl;
cout << "You harvested " << harvest << " bushels per acre;" << endl;
cout << "Rats ate " << rats << " bushels;" << endl;
cout << "You now have " << storage << " bushels in storage;" << endl;
cout << "Land is trading at " << trade << " bushels per acre" << endl;
cout << endl;
}//end function displayMessage
};//end class Hammurabi
//function main begins program execution
int main()
{
//variables to store the values
int year = 0;
int starved = 0; //people who starved, population loss
const int immigrants = 5; //people who came to the city, population gain
int population = 100;
int land = 1000; //amount of land, acres owned by the city
const int harvest = 3; //amount of bushels harvested per acre planted
const int rats = 10; //amount of bushels destroyed by rats
int storage = 2500; //amount of bushels in storage
int trade = 15; //price land is trading, how many bushels per acre
while (year <=11 && population > 0) {
year = year + 1;
srand((unsigned)time(NULL));
//trade = 15 + (rand() % 5) + 1;
Hammurabi myHammurabi; //create a Hammurabi object named my Hammurabi
//call my Hammurabi displayMessage function
//and pass values as an argument
myHammurabi.displayMessage(year, starved, immigrants, population, land, harvest, rats, storage, trade);
int buy; //amount of acres to buy
int sell; //amount of acres to sell
int food; //amount of bushels to feed the population
int plant; //amount of acres to plant with bushels
cout << "How many acres of land do you want to buy? " << endl; //amount of bushels to to trade for land
cin >> buy;
land += buy; //assignment by sum and difference, (land = land + buy)
storage -= buy * trade;
cout << "How many acres of land do you want to sell? " << endl;
cin >> sell;
land -= sell;
storage += sell * trade;
cout << "How many bushels do you want to feed to the people? (each needs 20) " << endl;
cin >> food;
storage -= food;
cout << "How many acres do you want to plant with seed? (each acre takes one bushel) " << endl;
cin >> plant;
storage -= plant;
cout << endl;
population += immigrants;
storage -= rats;
storage = storage + (harvest * plant);
system("pause");
return 0;
}//end main
}
You have to properly indent your code. Then you will perhaps see were the problem is.
Victor Nijegorodov
...and also use code tags. Go Advanced, select the code and click '#'.
Hint. Look at the order of the last 4 lines of code.
Also, you don't need srand() within the while loop. Executing it once at the start of the program is fine.
Why have the class Hammurabi? At the moment all this class has is one function that displays the contents of the function parameters.
Last edited by 2kaud; January 15th, 2014 at 05:47 | http://forums.codeguru.com/showthread.php?542955-loop-report-10-times&p=2144801 | CC-MAIN-2018-09 | refinedweb | 597 | 55.2 |
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
I believe I've found the problem. Before gcse we had (insn 4062 9281 4065 383 0x20000053188 (set (reg/v:DI 82 [ swapped ]) (eq:DI (reg/v:DI 82 [ swapped ]) (const_int 0 [0x0]))) 149 {*setcc_internal} (nil) (nil)) (jump_insn 4065 4062 9282 383 0x20000053188 (set (pc) (if_then_else (eq (reg/v:DI 82 [ swapped ]) (const_int 0 [0x0])) (label_ref 4223) (pc))) 174 {*bcc_normal} (nil) (expr_list:REG_BR_PRED (concat (const_int 20 [0x14]) (const_int 7000 [0x1b58])) (nil))) Note that reg 82 is modified and then tested. Our use of get_condition looked back through the first instruction to return (ne:DI (reg:DI 82) (const_int 0)). So we wound up inferring the wrong condition. My find_reloads test case is fixed by the following patch. I'm going to start a full bootstrap shortly. This also brings up an interesting point: swapped is boolean (I don't know whether it's declared bool or not, but that's not relevant at the moment). Ideally, we'd infer equality with one on the other side of the branch. Dunno how often that would make a difference. In this case I think we were able to make substitutions only because we had swapped = 0; label: A goal_alternative_swapped = swapped; B swapped = !swapped; if (swapped) { C goto label; } with the inferrence at C of swapped == 0, we've got two identical sets that together dominate the copy to goal_alternative_swapped. r~ Index: gcse.c =================================================================== RCS file: /cvs/gcc/gcc/gcc/gcse.c,v retrieving revision 1.232 diff -c -p -d -r1.232 gcse.c *** gcse.c 27 Jan 2003 11:30:35 -0000 1.232 --- gcse.c 7 Feb 2003 01:37:46 -0000 *************** struct ls_expr *** 479,484 **** --- 479,487 ---- rtx reaching_reg; /* Register to use when re-writing. */ }; + /* Array of implicit set patterns indexed by basic block index. */ + static rtx *implicit_sets; + /* Head of the list of load/store memory refs. */ static struct ls_expr * pre_ldst_mems = NULL; *************** static int load_killed_in_block_p PAR *** 614,619 **** --- 617,624 ---- static void canon_list_insert PARAMS ((rtx, rtx, void *)); static int cprop_insn PARAMS ((rtx, int)); static int cprop PARAMS ((int)); + static rtx fis_get_condition PARAMS ((rtx)); + static void find_implicit_sets PARAMS ((void)); static int one_cprop_pass PARAMS ((int, int, int)); static bool constprop_register PARAMS ((rtx, rtx, rtx, int)); static struct expr *find_bypass_set PARAMS ((int, int)); *************** record_last_set_info (dest, setter, data *** 2470,2476 **** Currently src must be a pseudo-reg or a const_int. - F is the first insn. TABLE is the table computed. */ static void --- 2475,2480 ---- *************** compute_hash_table_work (table) *** 2532,2537 **** --- 2536,2547 ---- note_stores (PATTERN (insn), record_last_set_info, insn); } + /* Insert implicit sets in the hash table. */ + if (table->set_p + && implicit_sets[current_bb->index] != NULL_RTX) + hash_scan_set (implicit_sets[current_bb->index], + current_bb->head, table); + /* The next pass builds the hash table. */ for (insn = current_bb->head, in_libcall_block = 0; *************** cprop (alter_jumps) *** 4478,4483 **** --- 4488,4604 ---- return changed; } + /* Similar to get_condition, only the resulting condition must be + valid at JUMP, instead of at EARLIEST. + + This differs from noce_get_condition in ifcvt.c in that we prefer not to + settle for the condition variable in the jump instruction being integral. + We prefer to be able to record the value of a user variable, rather than + the value of a temporary used in a condition. This could be solved by + recording the value of *every* register scaned by canonicalize_condition, + but this would require some code reorganization. */ + + static rtx + fis_get_condition (jump) + rtx jump; + { + rtx cond, set, tmp, insn, earliest; + bool reverse; + + if (! any_condjump_p (jump)) + return NULL_RTX; + + set = pc_set (jump); + cond = XEXP (SET_SRC (set), 0); + + /* If this branches to JUMP_LABEL when the condition is false, + reverse the condition. */ + reverse = (GET_CODE (XEXP (SET_SRC (set), 2)) == LABEL_REF + && XEXP (XEXP (SET_SRC (set), 2), 0) == JUMP_LABEL (jump)); + + /* Use canonicalize_condition to do the dirty work of manipulating + MODE_CC values and COMPARE rtx codes. */ + tmp = canonicalize_condition (jump, cond, reverse, &earliest, NULL_RTX); + if (!tmp) + return NULL_RTX; + + /* Verify that the given condition is valid at JUMP by virtue of not + having been modified since EARLIEST. */ + for (insn = earliest; insn != jump; insn = NEXT_INSN (insn)) + if (INSN_P (insn) && modified_in_p (tmp, insn)) + break; + if (insn == jump) + return tmp; + + /* The condition was modified. See if we can get a partial result + that doesn't follow all the reversals. Perhaps combine can fold + them together later. */ + tmp = XEXP (tmp, 0); + if (!REG_P (tmp) || GET_MODE_CLASS (GET_MODE (tmp)) != MODE_INT) + return NULL_RTX; + tmp = canonicalize_condition (jump, cond, reverse, &earliest, tmp); + if (!tmp) + return NULL_RTX; + + /* For sanity's sake, re-validate the new result. */ + for (insn = earliest; insn != jump; insn = NEXT_INSN (insn)) + if (INSN_P (insn) && modified_in_p (tmp, insn)) + return NULL_RTX; + + return tmp; + } + + /* Find the implicit sets of a function. An "implicit set" is a constraint + on the value of a variable, implied by a conditional jump. For example, + following "if (x == 2)", the then branch may be optimized as though the + conditional performed an "explicit set", in this example, "x = 2". This + function records the set patterns that are implicit at the start of each + basic block. */ + + static void + find_implicit_sets () + { + basic_block bb, dest; + unsigned int count; + rtx cond, new; + + count = 0; + FOR_EACH_BB (bb) + /* Check for more than one sucessor. */ + if (bb->succ && bb->succ->succ_next) + { + cond = fis_get_condition (bb->end); + + if (cond + && (GET_CODE (cond) == EQ || GET_CODE (cond) == NE) + && GET_CODE (XEXP (cond, 0)) == REG + && REGNO (XEXP (cond, 0)) >= FIRST_PSEUDO_REGISTER + && CONSTANT_P (XEXP (cond, 1))) + { + dest = GET_CODE (cond) == EQ ? BRANCH_EDGE (bb)->dest + : FALLTHRU_EDGE (bb)->dest; + + if (dest && ! dest->pred->pred_next + && dest != EXIT_BLOCK_PTR) + { + new = gen_rtx_SET (VOIDmode, XEXP (cond, 0), + XEXP (cond, 1)); + implicit_sets[dest->index] = new; + if (gcse_file) + { + fprintf(gcse_file, "Implicit set of reg %d in ", + REGNO (XEXP (cond, 0))); + fprintf(gcse_file, "basic block %d\n", dest->index); + } + count++; + } + } + } + + if (gcse_file) + fprintf (gcse_file, "Found %d implicit sets\n", count); + } + /* Perform one copy/constant propagation pass. PASS is the pass count. If CPROP_JUMPS is true, perform constant propagation into conditional jumps. If BYPASS_JUMPS is true, *************** one_cprop_pass (pass, cprop_jumps, bypas *** 4496,4503 **** --- 4617,4633 ---- local_cprop_pass (cprop_jumps); + /* Determine implicit sets. */ + implicit_sets = (rtx *) xcalloc (last_basic_block, sizeof (rtx)); + find_implicit_sets (); + alloc_hash_table (max_cuid, &set_hash_table, 1); compute_hash_table (&set_hash_table); + + /* Free implicit_sets before peak usage. */ + free (implicit_sets); + implicit_sets = NULL; + if (gcse_file) dump_hash_table (gcse_file, "SET", &set_hash_table); if (set_hash_table.n_elems > 0) | http://gcc.gnu.org/ml/gcc-patches/2003-02/msg00468.html | CC-MAIN-2019-04 | refinedweb | 1,005 | 55.54 |
By the way...
Wow, I'm quite busy these days, haven't been writing (or reading, for that matter!) much...
Mostly, it's to blame on the quest for a place to live in that's going on. I'd like to buy, this time around, so this makes it a couple of notches more complicated than what I'm used to (I've never been an owner, so this is all new to me). The numbers bandied around are making me quite dizzy! Hopefully, we should come out of this with a nice place, but in the meantime, it's time for "let's save up money like crazy for the cash down", so on top of being busy with this stuff, it'll also make me less visible than I usually am (well, uh, it should still be better than the last year!).
In other more geeky news, I think I am succumbing to the coding style of the C++ standard library with regard to naming. For method names, there's more than a few people who are going to think "finally!" (I used to favour a Java-style interCap, like "readUntil", now I tend to prefer "read_until"). This makes a lot of sense, since this is also more common in C and Perl code. But the more controversial part is that the standard library uses all lowercase for class names (it's "unordered_set", not "UnorderedSet"), and I'm getting a crush on those too... Perl, Ruby and Python are using FullyCapitalized style for those, and so are a number of C++ programmers I know, but I'm finding that there is something to be said for adopting the style of the language. I'm also using namespaces and exceptions (mostly in constructors and object-returning methods) more, these days.
So either I'm becoming stylish, or I'm becoming senile. Oh well.
Also, it would seem that the giant jackhammers are following me.
Syndicated 2007-08-24 14:58:03 (Updated 2007-08-24 15:14:07) from Pierre Phaneuf | http://www.advogato.org/person/pphaneuf/diary.html?start=323 | CC-MAIN-2016-36 | refinedweb | 343 | 76.76 |
Ted Neward wrote an article in 2004 comparing Object Relational Mapping to the American debacle in Vietnam. Jeff Attwood revived the article in 2006. He pasted a giant photo at the top of his block (Vietnam grunts-in-the-fields-with-helicopters), copied Neward's conclusions verbatim, and pronounced his favorable opinion in a few sentences. The revival provoked some extended commentary and breathed life back into an article that might justly have been forgotten. Apparently it has struck a chord with some recent readers and so I feel compelled to review it.
See Neward's article here on his blog and a brief follow-up here. See Attwood's article here.
The first three pages are dedicated to thoughts on the Vietnam War. His remarks are irrelevant to computer science or ORM and the comparison, to my mind, is hyperbolic. Rhetorically, it intends to prepare for a horrifying tale of ORM disaster — carnage, squalor, and waste. In his apocalyptic vision:
."
From this beginning, surely we can expect tales of ORM disaster — some woeful story of failed projects, misspent millions, and broken careers. Nope. We get plenty of theory but not a single account of any organization suffering this fate.
Not one example.
Perhaps some of the folks who commented on his blog (or Jeff Attwood's blog) had such experiences. I found one engineer who tried ORM a few times (not our product) and hated it (Rob Conery — for no concrete reasons) and a couple of sane voices who have used ORM successfully (Wesley Shephard, Martin Marconcini).
There is no evidence — not from Neward, not from Atwood, not from Conery — that anything bad has actually happened.
With one exception. Neward himself claims that he "built three ORM's in his lifetime, and refuses to build another one because they all faced the same end, despite very different beginnings." To what fateful "end" we never learn. I'll take him at his word and suggest he refrain from writing another ORM.
On the plus side of this ledger stands solid evidence for success. I can't speak for other ORM vendors but I can report that our files are full of praise for ORM.
Our product has been on the market since 2002. There have been many successes and there have been some project failures and projects not yet finished. Never … not once … has there been a hint of disappointment with the ORM approach. No developer — not even of a failed project — has ever blamed ORM. In fact, we get letter after letter saying how ORM has been responsible for faster development and improved application quality. Developers associated with flops are not shy about laying blame and casting aspersions. In five years, surely someone would have said something about the "quagmire". Not yet.
I'm not suggesting that everything is sweetness and light. As with all software tools, we've had bugs to fix. Developers make design errors. It's easy to forget performance consequences when you are so well insulated from the persistence operations, particularly by client-side caching. Such adverse consequences are easily detected and easily addressed.
ORM is not ideal for every project. If we have to grind through simple calculations on millions of records, with no user experience to worry about, we gain little from expressing the data as objects for a few microseconds; the cost of transformation from raw bytes to business objects is never repaid. We're better off with the fastest, native data manipulation techniques available.
An object-oriented approach is better suited to highly interactive, line-of-business applications. Data records are not nearly so numerous in such applications. They live in client space for minutes or hours, not microseconds. During their unpredictable stay, they appear in various guises on multiple screens. Users change them. They're roiled in complex business logic. The consensus of architects is that such data records are best represented as business objects. The open question is how best to shuttle data between database and business object form.
Our answer is ORM. Neward vehemently disagrees.
Toward the end of the piece, Neward recommends five alternatives. However, these alternatives are so briefly considered — a paragraph each — that it is difficult to see meaningful contrasts. Neward offers a sixth suggestion — "Acceptance of ORM limitations" — a choice he derided relentlessly unto this very moment.
While some [choices] are more attractive to others [sic], which are "better" is a value judgment that every developer and development team must make for themselves. It is conceivable that the object/relational problem can be "won" through careful and judicious application of a strategy that is clearly aware of its own limitations.
Where did that quagmire Go?
So it is "conceivable" that ORM might work. Of course we'll have to be "careful" and "judicious" and have a "strategy that is clearly aware of its own limitations." This is sound generic advice but impractical in the absence of details. Neward offers none as the article races to a close. What do other experts say?
There is much written about representing data within an application — about the application's "domain." Among the most respected voices is Martin Fowler. Let's turn to the Fowler book that Neward references approvingly: Patterns of Enterprise Application Architecture (PEAA).
Early on, Fowler contrasts two fundamental approaches to "Domain Logic": Transaction Script and Domain Model.
Transaction Script organizes all [application] logic primarily as a single procedure, making calls directly to the database or though a thin database wrapper. Each transaction will have its own Transaction Script … [110]
This is the approach recommended for processing a large volume of records with simple calculations … as one might do in a batch payroll run.
The line of business application is a different animal.. [116]
How do you choose between approaches?. …
It certainly takes practice and coaching to get used to a Domain Model, but once used to it I've found that few people want to go back to a Transaction Script for any but the simplest problems. [119]
Ok .. that sounds like a call for Domain Model. Fowler continues:
If you're using Domain Model, my first choice for database interaction is Data Mapper (165). This will help keep your Domain Model independent from the database and is the best approach to handle cases where the Domain Model and database schema diverge. [119]
Data Mapper (which is ORM) isn't the only way to support a Domain Model architecture.
If the domain model is pretty simple, and the database is under the domain model developers' control, then it's reasonable for the domain objects to access the database directly with Active Record (106) [a simpler pattern]. Effectively this puts the mapper behavior discussed here into the domain objects themselves. [171]
These words deserve commentary:
Data Mapper appears to be the safe choice. Why not use a Data Mapper?
The price .. is the extra layer that you don't get with Active Record (160). [170]
Neward will try to make a big deal out of this; but Fowler continues:
Remember that you don't have to build a full-featured database-mapping layer. It's a complicated beast to build, and there are products available that do this for you. For most cases I recommend buying a database-mapping layer rather than building one yourself. [171]
There is no shame in this. A Data Mapper is a "complicated beast" to build. It's harder still to build a mapper that 'both' does a great job 'and' is easy to learn and use. We are wise to take heed of Neward's multiple failures. Such caution need not dissuade us from using a good one.
The analysis begins in earnest with Neward's recap of the widely known "Object-Relational Impedance Mismatch." One can learn more about this topic (and learn it more clearly) from other sources but Neward's rendition is unobjectionable.
Class-to-Table mapping — the fundamental act of object mapping — works well most of the time but poorly models certain business object families in which several concrete objects share have common characteristics and common state. This is the "inheritance" or "generalization hierarchy" problem discussed in plain and neutral language by Fowler (PEAA, p.45-7).
Neward blames the object world whereas the fault — if there is fault — lies more obviously on the side of the SQL. In any case, there is certainly a mismatch to be resolved.
In Neward's example, there exists a base class, Person, with a cascade of subclasses: Student inherits from Person, GraduateStudent inherits from Student. Other derived classes follow.
This is actually a terrible example (as is Fowler's which suffers from the same fault). We should never model persons this way for the simple reason that the people we want to model are not pinned to any of the subtypes. A student can become a non-student or become a graduate student in the course of an application session. Unfortunately, an object can't change its type; once instantiated, it is what it is. An instance of Student can't suddenly become an instance of GraduateStudent or stop being a Student altogether. Type assignment is permanent.
Moreover, a Person could belong to more than one of the subtypes; he could be both a Student and an Employee.
Student, GraduateStudent, Employee … these are "roles" or "facets" of a person. A person can have them or not. A person may have more than one facet. A person may gain and lose a facet.
Neward asserts "it's only natural that a well-trained object-oriented developer will seek to leverage inheritance in the object system, and seek ways to do the same in the relational model."
Actually, this is the natural inclination of the "poorly-trained" object-oriented developer. The justly famous Design Patterns book (Design Patterns, Gamma, et al) states: Favor object composition over class inheritance [20]
Inheritance is clearly the wrong mechanism for this use case; we should turn instead to a compositional approach.
Perhaps more objectionable is the assertion that the object-oriented developer will try to pervert the relational schema to satisfy ORM. We will see that this is not so.
Let me substitute another example that might be more plausibly represented through inheritance. Imagine that the application concerns groceries of two basic kinds, Fruit and Vegetable.
We'll suppose that Fruits and Vegetables have many common Produce properties and a significant number of distinct properties of their own. We'd like to make Produce a base class and extend it with derived classes called Fruit and Vegetable.
I do not favor a base Produce class and derivative Fruit and Vegetable subclasses but at least the example does not suffer from the concern that a piece of produce might transform itself from an apple to a carrot nor is there much chance that a given item will be both apple and carrot.
Neward correctly observes that SQL doesn't support inheritance. If we choose to represent produce in a class inheritance hierarchy, there is going to be a mismatch between the business objects and the tables in the database.
There are three typical approaches:
Neward writes as if ORM developers are free to choose from among these alternatives. That's rare in my experience. We get what we're given and deal with it.
Neward claims that ORM developers always prefer the second or third because he thinks these (theoretically inferior) designs are easier for ORM developers to handle. The implication is that they will be at war with the DBAs and antagonistic to good database design.
ORM practitioners are more nuanced. Here's Fowler:
There's no clear-cut winner here. You need to take into account your own circumstances and preferences ... My first choice tends to be [Table per type family] as it is easy to do and resilient … I tend to use the other two as needed to help solve the inevitable issues with irrelevant and wasted columns. Often it is best to talk to the DBAs; they often have good advice as to the sort of access that makes the most sense for the database.[PEAA, 47]
Neward's argument rests on three false claims.
Generalization hierarchies aren't just problematic for ORM developers; they are trouble for everyone — ORM and non-ORM developers alike. I suspect that a survey of actual databases would show that the second and third designs were promoted by non-ORM developers! It seems they too find these alternatives easier to handle. The folks who write reports, for example, are notoriously averse to normalized databases.
Any developer who must represent inheritance hierarchies in a relational database is going to have to make some uncomfortable design decisions and is going to have to write some code to access, present, and save data relating to these groceries.
I invite you to pause for a moment and think about how you addressed this situation in the past — without an ORM. If it was challenging without ORM, why are we alarmed when the situation provokes some discomfort for ORM developers?
Second, consider if this problem is common or rare. Look at your own data and take an informal count of the number of generalization hierarchies. Most of us will have none; some of us will have one; and a few of us will have more than one. The larger the database, the smaller the percentage of tables involved in generalization hierarchies.
How can this be a crisis if the one-class-per-table mapping is perfectly satisfactory for more than 90% of our data?
Perhaps those few cases are so critical that the application will fail catastrophically if we don't find a solution. We wouldn't fly with an airline if it had even 1% failures.
We've already seen that generalization hierarchies are tricky … for any developer. Do they break ORM development? We'll let Neward try to make this case when the data are normalized database as in the "Table per type" arrangement.
Relating these [Produce, Fruit, and Vegetable] tables together … requires each to have an independent primary key (one whose value is not actually stored in the object entity) so that each derived class can have a foreign key.
This is not correct. Each table must have a primary key but the three tables can share the same primary key. An apple may be represented by a row in the Apple table with id=123 and a row in the Produce table with id=123. If the Apple table row has its own independent primary key, it should still have a foreign key column (value = 123) that refers to the matching Produce row; this foreign key column is stored in the Apple object.
Foreign keys are artifacts of relational designs; their existence and value are independent of their utility in support of object oriented domain models whose objects always store foreign keys.
This means that when querying for a particular instance at the relational level, at least three JOINs must be made in order to bring all of the object's state into the object program's working memory.
Why would I want to get all object state for all groceries? If I'm interested in all Produce items, regardless of their Fruit or Vegetable nature, I retrieve only the Produce data. There is no reason for me to go after the Fruit and Vegetable tables. When I need to know about them as Fruits or as Vegetables, I'll get those data.
Suppose I bring all grocery data into memory anyway. I don't have to do a three-way join. I could make three separate queries: one for selected Produce rows and one for each of the related Fruit and Vegetable rows. Our testing proves that three queries can be as efficient as a single query with three outer joins. Multiple queries are faster, producing less data, as the number of related tables increases.
Round trips to the database might be expensive if bandwidth is poor. But I don't have to make three trips to the server if my object persistence layer stacks those queries and sends all query results over the wire in a single bundle.
Finally, if I am caching the results on the client, I won't have to make those queries to the database again.
That doesn't sound so bad. Is raw ADO really better than ORM? If I need to get all grocery data into memory and I don't have ORM, what would I do differently?
If you can think of something, rest assured that you can apply such wizardry within any high quality ORM product. DevForce, for example, offers several alternative persistence mechanisms. PassThru Query, Stored Procedure Query, Dynamic Entities, and Remote Procedure Call (RPC). They all return (or can return) first class business objects that are resident in cache. When the queries complete, the unpleasantness is behind us. Our in-memory model is in good shape.
In a postscript Neward claims without evidence that workarounds such as I've described won't use the cache. That's not true. The retrieved entities are merged into the entity cache regardless of query technique.
We cover generalization hierarchies in our training class, in our documentation, and in our tutorials. This is not an exotic topic and nothing to be afraid of. There is some extra care and work involved … but no more than you'd expend in some rival paradigm. The effort is probably less and you'll have a nicely encapsulated method to hide the details from the UI developers.
Most ORM products can handle data stored according to one of the other two formats: Table-per-concrete-type and Table-per-type-family. DevForce has an optional "where clause" feature that is particularly useful for Table-per-type-family designs.
At heart, many object-relational mapping tools assume that the [database] schema is something that can be defined according to schemes that help optimize the O/R-M's queries against the relational data.
This assertion is not supported by either our experience or that of our clients and consultants.
Most quality ORM products assume the opposite; they assume that the relational database schema is inviolate. The schema doesn't adapt to us. We adapt to it.
In the real world, the database can often be extended but rarely can it be radically transformed. There are too many external forces (reports for example) that resist change. Fortunately, commercial object mapping products offer features for working around inconvenient and non-performant database designs.
All database schemas (like all code) tend to rot with time. Refactoring the database typically improves database integrity and performance for everyone, not just the ORM developers. ORM and DBA interests are aligned; there is not merit to the insinuation that ORM developers wish to contort the schema for their own purposes.
I agree that application domain schemas evolve more rapidly than relational database schemas. Business requirements are often easier to satisfy by changing the application than by changing the database. Neward seems to think this difference is a special problem for ORM.
In fact, ORMs tend to mitigate the problem, which is far more severe for non-ORM applications that are tightly coupled to the database schema. ORM frees the application's domain model from a lock-step dependence on database schema as Fowler explained earlier and on many pages of PEAA. When Neward writes "before too long, the schema must be "frozen", thereby potentially creating a barrier to object model refactoring" he has it exactly backwards. ORM helps us overcome the limitations of a database that is frozen in some unsupportive state.
Databases schemas are not permanently frozen. Changing business requirements eventually break the ice. The problem is that database changes usually wreck several applications at once. The developer who needs the database change for his application module is happy while everyone else suffers. The widespread suffering chills the schema again. The DBAs keep it cool … until business pressures crack it once more.
ORM-based applications are insulated from this dynamic.
An ORM application is far more resilient than a non-ORM application that has fixed the database schema in its DNA … as data structures dependent upon outdated table structures or as SQL DML commands with obsolete column names hidden in strings.
Finally, contrary to Neward's claim, ORMs do not promote turf wars with DBAs by trying to impose an ORM-favorable structure on the database. In our experience, they do just the opposite — they protect existing applications from DBA whims such as the sudden compulsion to impose a new naming convention across the tables and columns.
A related issue to the question of schema ownership is that, in an O/R-M solution, the metadata to the system is held fundamentally in two different places: once in the database schema, and once in the object model.
Of course there are two schemas. This verity has nothing to do with ORM. When you develop a line-of-business application, you have some kind of data model. That model may mimic the database schema at first but it is sure to diverge over time. The only serious question is whether the application schema is tacit and unmanaged or if it is explicit and well managed.
Non-ORM applications have implied application schemas. Their schemas are hidden in the data structures that hold data retrieved from the database. Their schemas are hidden in the SQL commands that fetch the data. There is no central authority. Commands and containers are invented and reinvented in all corners of the application. The compiler can't help you find them; you'll have to search for them, line by line.
The ORM mapping file declares an explicit application schema and describes precisely how that schema corresponds to the database schema. The Data Mapper tool helps you manage the mapped application schema. Every ORM-driven persistence operation conforms to it. Every business object conforms to it.
When the database schema changes, which approach has a problem?
The ORM developer has two options: he can make the application schema conform to the database or he can adjust the mapping to preserve the application schema. The non-ORM developer has only the first option; he must modify the application to conform to the database because he is tightly coupled to it.
The ORM developer gets help from the Data Mapper, refactoring tools, and the compiler. The Data Mapper tells him what has changed. The mapper regenerates domain model code. The compiler catches most mistakes because there is much greater use of strong-typing.
The non-ORM developer must rely predominantly on text search and unit testing. He has no tool to tell him both what changed in the database schema and how the changes affect his application. The compiler can't help because there is little strong typing; field value indexers are usually strings as are the table and column names buried in query commands. Unit testing can catch omissions … if there is any unit testing.
Clearly the explicit and managed application schema improves maintainability. Yet Neward writes about it as if it were sinister:
As the system grows over time, there will be increasing pressure on the developers to "tie off" the object model from the database schema, such that schema changes won't require similar object model refactorings, and vice versa.
What is wrong with that? We call that reduced dependency. Most architects think it's a good thing..
I am unaware of an ORM product with this nearly-fatal limitation.
ORMs didn't create the problem of database schema change. If the database schema is evolving in one direction while the application schema is moving in another, the blame lies not with ORM. ORM is a solution that can facilitate these divergent tectonic shifts without suffering an earthquake.
Object systems use an implicit sense of identity.
Two objects that contain precisely identical bit patterns in two different locations of memory are in fact separate objects. (This is the reason for the distinction between "==" and ".equals()" in Java or C#.)
This is a genuine problem for many ORM products. In fact, it's a concern for all applications, whether they use ORM or not. But ORMs with some form of caching can resolve it comfortably.
Business objects built with a caching ORM are identified by the same primary key that identifies the corresponding row in its database table. An application does not use the object's reference — its location in memory — to identify the object. It identifies an object by its primary key.
It is critically important that there is one object and only one object with a given primary key. This is what we mean by "Entity Identity". My company's ORM product, DevForce, is an example of a caching ORM. Every business object read into or created in memory resides in an entity cache. The consumer of a business object holds a reference to an object in the cache, not some free-floating object. In a given entity cache, there will be one instance of an entity of a particular type and its primary key is the same as the primary key of its corresponding database table row.
For example, suppose we query for the employee whose name is "Nancy Davolio." We then query for the employee whose id = 1. We'll get the same employee object instance because the employee named "Nancy" also has id=1. If she appears in two lists, she appears in each list as a reference to the one and only "Nancy" object in the cache. If we change her name to "Sally" in one list, her name is "Sally" in the all lists.
To be precise, there can be only one object with a given primary key in a particular entity cache.Most applications only need one entity cache. The one cache is shared throughout the application among all screens. The employee appearing on an HR form is the same employee appearing in a grid on the Company Contacts form.You are free to create as many caches as you like and there are reasonable scenarios for doing so, in which case, you make a conscious and controlled decision to enable multiple versions of the same "thing" in your application session.
Entity identity is a difficult to achieve, whether you take an ORM approach or use something else such as native ADO. It's not an exclusively ORM issue. It is a critical ORM feature.
The lesson: only use an ORM that supports entity identity.
In a related diatribe, Neward faults the object-oriented system's supposed inability to support concurrency and ACID transactions. He must have been thinking about the ORMs he built without object identity. Concurrency and ACID transactions (including distributed transactions) should be fully supported by the object persistence layer and should be largely transparent to the developer.
Neward makes some claims that have no bearing on ORM systems with client-side caching schemes.
When does the actual "flush" to the database take place, and what does this say about transactional integrity if the application code believes the write to have occurred when in fact it hasn't?
The "flush" takes place when the application tells the object manager to save. The save is transactional by default. If the save fails, the objects remain in their current cached state, exactly as they were before the save attempt. If the save succeeds, the saved objects are adjusted to reflect (a) their currently "unmodified" state and (b) any property changes resulting from database triggers (e.g., updates to modification timestamps within the objects). From the application's perspective, the database reality and the session reality are the same at this moment.
The balance of his caching critique raises concerns about a caching strategy unknown to me. I suspect he is talking about some kind of server-side caching.
In all honesty, a purely object-oriented approach would make use of object approaches for retrieval, ideally using constructor-style syntax identifying the object(s) desired
A purely object-oriented approach prefers a Factory method. See Robert Martin's many lessons on dependency inversion in Agile Principles, Patterns and Practices in C#.
I agree that a purely object-oriented syntax for retrieval is strongly preferable. It isn't achievable without changing the language syntax because SQL and most object-oriented languages are incompatible in this respect. Or, rather, they were incompatible. Microsoft LINQ (Language Integrated Query), due in early 2008, brings a strongly-typed, SQL-like syntax to the major .NET languages. Let's return to the present.
Neward describes three object-oriented query syntaxes: My company's product, DevForce, offers an Object Query Language (OQL) that approximates what he calls Query-by-API.
Using OQL, the developer defines a query object, adds elements to it (in the manner of StringBuilder), passes the finished object to a PersistenceManager (PM), and the PM returns a specialized collection of (cached) business objects.
StringBuilder
PersistenceManager
Simple OQL queries are easy to understand:
RdbQuery q = new RdbQuery(typeof(Person));
q.AddClause(Person.LastNameEntityColumn, EntityQueryOp.EQ, "Smith");
EntityList<Person> oc = aPM.GetEntities<Person>(q);
The query object, "q", is pinned to the Person business object type. We add a clause that restricts the query to returning persons whose last name equals "Smith." We ask a PersistenceManager to perform the query described by "q"; it returns a strongly typed collection of Person objects.
The query can be made more complex by adding additional information to the query object as in:
q.AddClause(Person.LastNameEntityColumn, EntityQueryOp.EQ, "Jones");
q.AddOperator(EntityBooleanOp.Or);
q.AddClause(Person.FirstNameEntityColumn, EntityQueryOp.EQ, "John");
Now the query will return persons named either "John Smith" or "John Jones."
I agree that this approach is "much more verbose than the traditional SQL approach."
The developer could write object queries in raw SQL via the "PassThru SQL" facility. I recommend only limited use of this feature; PassThru should be reserved for queries that cannot be expressed in OQL.
Why the strong OQL preference?
Why would I care about the ability to compose a query?
We tend to think of queries as static expressions known by the developer at design time. Many of them are. But many are not.
We often ask the user to supply search criteria. What is the last name we want? Should the name be exactly as specified, begin with certain characters, or be "like" a given string?
Oh … there's more than one last name?
Oh … you want to limit the result to persons living in a particular zip code?
What's that? Role-based security says that this user can only search for Persons in his own department?
Unfortunately this business requirement surfaced in a module downstream from the UI that gathered the user's search criteria. How do we merge the security restriction into the previously prepared query?
Constructing correct SQL dynamically in response to user input is simple with OQL. The query object is easy to inspect and modify as it moves along a pipeline to its execution point. Inserting the security restriction outside the search view (but prior to query execution) is just another operation on the query object.
On the other hand, constructing and merging into raw SQL strings is nontrivial.
We've made the case in favor of OQL but there is no denying that OQL syntax is "unnatural" for most of us. We don't want to write a lot of these queries.
We have to write some queries. While we can wrap our heads around the simple queries, the complex queries are hard to read and write.
Fortunately, static analysis of existing applications reveals that ORM applications contain far fewer explicit queries than non-ORM applications. Fewer queries; less to go wrong.
Several ORM vendors will be able to support something closer to Neward's Query-by-Language syntax with their LINQ-based products. Thanks to Microsoft's control over language definition and the .NET common runtime, LINQ syntax is both strongly-typed and much closer to SQL.
Examine the typical SQL-laced application and you'll find hundreds of handcrafted SQL statements sprinkled throughout the UI. Many queries appear to select for the same thing … but who can be sure.
Do we know if a query will still work when the database schema changes? ADO query commands are strings, often constructed from shorter pieces of string. There's no type-safety in strings and no easy way to determine its assumptions about the schema. We probably won't find out if the query still works until crash time.
Notice that there are a lot of joins in those queries. Most joins exist solely to flesh out the query result with column values from multiple tables.
For example, suppose we intend to display a grid of orders with columns for order number, customer name, order status, order date, shipping date, the shipping company name, and the ship-to street, city and state. In typical SQL fashion, we'll write a five-way join of Order, Customer, OrderStatus, Shipper, and Address.
On another page we'll display a different order grid, this time with order number, customer name, order status, and delivery date. We don't need the address fields so even though we want to see the same orders we'll write a different query with four joins.
With ORM, we need only a simple query for Orders to support both displays. We'll acquire the Customer, Status, Shipper, and Address attributes when we need them … if we need them … as we need them.
Need to show the customer name? We get it from anOrder.Customer.Name.
anOrder.Customer.Name
This syntax is known as "object navigation." Calling the order's Customer property causes the object persistence layer to fetch the order's related Customer object just in time. The first time, we have to go to the database; subsequent requests for the customer will be served from the cache.
Neward will raise objections to such "lazy loading" in a few pages; we'll tackle the objections then.
Since our two displays rely on a simple order query, maybe we can use one query — and one query result — to drive both screens. We won't have to issue new queries as we jump between the screens. We won't have to make extra trips to the database either.
In sum,
Certain styles of queries (particularly the more unconventional joins, such as outer joins) are much more difficult — if not impossible — to represent in the QBA approach.
Joins are usually unnecessary in an ORM world because we rely on object navigation to deliver data from related objects.
Of course we must be able to query related objects for reasons other than displaying the data from those related objects. For example, we might need to query for orders shipped to California. Such a query depends upon "joining" the Order table to the Address table.
Most OQLs support such queries with syntax that, while different from SQL, is logical and easy to learn.
There are less common types of queries that OQLs may not support. Some OQLs don't support the fancier grouping queries and the proprietary constructs that search XML and geographical data.
That's why ORM vendors offer alternative query mechanisms. Our product, for example, has PassThru, StoredProcedure, Web Service, and Dynamic Entity queries. The programmer can create any kind of query at all using a feature dubbed "Remote Procedure Call (RPC)."
These alternatives break with the usual ORM paradigm. Is that bad? It is bad if we resort to them frequently. But occasional use is bearable. I think it is ridiculous to chastise ORM when it both acknowledges and facilitates workarounds for edge conditions.
The O/R layer has now lost an important "selling point", that of the "objects and only objects" mantra that begat it in the first place; using a SQL-like language is almost just like using SQL itself, so how can it be more "objectish"?
We're faced with the basic problem that greater awareness of the logical — or physical — data representation is required on the part of the developer — instead of simply focusing on how the objects are related to one another … the developer must now have greater awareness of the form in which the objects are stored, leaving the system somewhat vulnerable to database schema changes.
True, every query requires some awareness of a persistent data repository. We can specify selection criteria for some properties (e.g., Person.LastName) but not others (Person.Age). We make that manifest by reference to special objects representing the persistable properties (e.g., Person.LastNameEntityColumn).
Person.LastName
Person.Age
Person.LastNameEntityColumn
It might be nice to write something like:
aPM.GetEntities.Where.Order.OrderDetails.Include(someProducts);
We can't do that kind of thing yet. To retrieve objects, we must pull back the curtain and expose some of the machinery.
True, this essential awareness marks the querying apparatus as a point in which the "system is somewhat vulnerable to database schema changes."
Is this the quagmire we've been waiting for? I say it is merely inconvenient. We dedicate a tiny fraction of the application code to retrieving business objects — perhaps less than 100th of a percent of all handwritten lines of code concern queries. The rest of the time, we are indeed "focusing on [the objects themselves and] how the objects are related to one another."
Strongly-typed OQL improves our ability to detect problems caused by schema changes and to address them quickly. Although ORM systems are not invulnerable, they are less vulnerable to schema changes than non-ORM systems.
Neward's critique turns to performance. He alleges that ORM's commitment to business objects of fixed shape ensures inferior performance.
He argues that we should "optimize" our application by retrieving only the subset of table columns that we actually need "right now".
Sounds reasonable doesn't it? Is it true? Is there really a measurable penalty to retrieving more columns than you need at the moment you issue the query?
The answer: no one knows and no one could know.
The problem with Neward's argument — as with most performance arguments — is that it lacks context. There is no abstract quantity called "performance." There is only measured performance coupled to a judgment about whether the measured performance is good or bad.
Consider his example:
SELECT id, first_name, last_name FROM person;
Neward argues that this will perform better than the object system equivalent:
EntityList<Person> persons = aPM.GetEntities<Person>();
How does he know? Surely it matters what other fields define Person, how many persons are in the database, and what it costs to communicate between client and server.
Let's be generous and grant that the size of a full person is 10 times that of the {id, firstname, lastname} tuple. Let's be generous again and grant that there are 10,000 persons. Does the SQL query deliver the data any faster?
{id, firstname, lastname}
I submit that we don't know until we test under conditions that approximate production use.
Let's stipulate that the object system query takes four times as long as the straight SQL query. Does it matter? If the SQL query completes in 2/10th second and the object query took 8/10ths, I submit that it does not matter. If the times were 2 seconds versus 8 seconds it might matter — depending upon how often you made this query. If the timings are 20 versus 80, we want to reconsider grabbing so much data at once, regardless of the approach.
In any case, it is foolish to judge a system based on the performance of a single query. The application will make many person queries. Most of us care about how users judge the responsiveness of the application over the course of a typical session.
Following Neward, we turn next to a two query application:
SELECT id, first_name, last_name FROM person;
// time passes
SELECT * FROM person WHERE id = 1;
Here's the object system equivalent:
EntityList<Person> persons = aPM.GetEntities<Person>();
// time passes
Person aPerson = aPM.GetEntity<Person>(new PrimaryKey(typeof(
Employee),1)));
Assume that our object system takes 8 seconds to fetch all persons and Neward's takes just two seconds. So what is the speed of the second query?
The object-oriented client system cached all Persons so it takes no measurable time to get the employee.
Neward's client system could not have cached so it goes to the database. How long did that take? Let's say we measured and discovered that the latency for any trip to the server is one second regardless of the number of records returned.
That's not bad. But it's not as good as the object system. In fact, for all Person queries, the object system will always be faster than Neward's system after the first 8 seconds. The seconds lost to lots of small queries are going to add up. Neward's system will seem sluggish by comparison. There could be end user productivity consequences.
Can Neward add caching? Not easily. A query can return any shape. It's virtually impossible to cache queries that differ both in shape and criteria.
The simplicity of reliable business object shape makes caching possible which in turn wins back the performance lost to fetching supposedly unnecessary data columns.
Notice that I haven't said anything about the benefits of encapsulating data and behavior in business objects. We don't get those benefits unless the object shapes are fixed. Free form queries can only return raw data. Logic we would like to see inscribed inside the business objects must be reproduced in code surrounding the query result. Does anyone really want to go back to procedural programming?
Neward claims:
[M]ost SQL experts will eschew the "*" wildcard column syntax, preferring instead to name each column in the query, both for performance and maintenance reasons--performance, since the database will better optimize the query, and maintenance, because there will be less chance of unnecessary columns being returned as DBAs or developers evolve and/or refactor the database table(s) involved.
It just isn't so. There are no SQL experts who claim that the wildcard selection is faster than naming the columns. Column selection is only faster if it results in a significant reduction of data transmitted to the client. "Significant data reduction" cannot be determined apriori.
There are no maintenance savings to be had. Good object persistence systems do not break when the query returns "unnecessary" columns; they don't care if there are unrecognized columns.
The application could throw an exception if asked to insert a new record. This is sure to happen when one of the new, unrecognized columns is required and the database cannot supply a default value. This is a problem for everyone, not just ORM systems.
Neward's fundamental point is that we want to change the column selection from one query to the next and we may want to return just a few of the table columns. This is not the normal practice of an object relational approach and he is almost correct in saying that
An object-oriented system … cannot return just "parts" of an object — an object is an object, and if the Person object consists of 12 fields, then all 12 fields will be present in every Person returned.
It would be more accurate to say that object-oriented systems 'prefer' to return the complete Person object with all twelve fields, exactly as it was mapped.
Many object systems can return Person-like objects with fewer columns. We can map the Person table to a PersonInBrief type that includes only some of the Person columns.
PersonInBrief
This is a different type than the Person object; a Person object with the same primary key as a PersonInBrief object is not the same object (even though their primary key values are the same). Clearly the developer must exercise caution with this technique. But if the application demands both a brief form and an expanded form of an object, it can be done and there are ways to prevent entity identity mistakes.
We are not limited to design time decisions about the shapes of our business objects. The DevForce dynamic entity query can return an object of any shape, the shape can be determined at runtime, and the object resides in cache just like any other business object.
There are also schemes for lazy loading fields — for retrieving them only when an object consumer asks for them. Such schemes are difficult to implement. We've not tried — because years of experience demonstrate that there is little need and, in those rare cases of need, our recommended workarounds suffice.
So the question is not "can" the object-system support partial objects, but is it "desirable" to do so in other than special circumstances.
Special circumstances do arise. The developer may improve performance dramatically by the careful and tactical use of "partial object" techniques.
It's a good thing that these techniques are available. ORM would be a quagmire if the developer was unable to step outside of the ORM paradigm. However, the developer should only step outside the paradigm when measured bottlenecks justify that step.
Neward's final shot aims at how ORM solutions manage the object graph. An object graph consists of a root object and the set of all objects related to it by some form of association. In practice, we usually want a pruned graph consisting of some subset of the most often needed related objects.
Neward claims
[O]bjects are frequently associated with other objects, in various cardinalities (one-to-one, one-to-many, many-to-one, many-to-many), and an O/R mapping has to make some up-front decisions about when to retrieve these associated objects.
This is true. The safe decision is to never retrieve related objects automatically. Is that a bad decision? For Neward it's a trick question. He doesn't actually care what we decide.
[T]here.
Neward asserts that there is a paradox in the impossibility of predetermining whether objects should always be loaded or only loaded when needed. A paradox is a false and self-contradictory proposition. There is no paradox here but there is futility: the futility of attempts to predetermine load behavior.
Maybe some ORMs are trapped in this way, but others enable the developer to vary the retrieval policy on a situational basis by means of span queries.
For example, an OQL query within the DevForce product returns one or more objects of a 'single' type; it doesn't try to load related objects. However, the query can be decorated with spans that instruct the persistence layer to fetch related objects at the same time.
Suppose we know that, on a particular page after retrieving a select group of customers, we will display each customer's orders, the line items on those orders, and the sales rep who placed those orders.
We don't have to issue separate queries for each customer's order, order detail, and sales rep objects. We can retrieve all of the selected customers with all of their related objects in a single shot by decorating the customer query with the "spans" that identify these associative relationships. The object persistence machinery will acquire these related objects for us. It will get all of the orders of these customers (but only their orders). It will get all the order details of those orders (but only of those orders). And so on.
In a distributed application, the server sends all of these objects to the client in one package. The persistence machinery pours them into the client's entity cache where subsequent navigation expressions (e.g., aCustomer.Order[0].OrderDetails) will be satisfied entirely from the cache.
aCustomer.Order[0].OrderDetails
This multi-entity, object graph fetching happens because the developer made a conscious decision to retrieve the auxiliary information at the same time that he was querying for customers.
Now suppose there is another "Customer Browser" page displaying only the customer names. The query driving this view won't include spans and will retrieve only customers. It's the developer's choice.
Every one of Neward's critical observations concerns an edge condition. There are problems to surmount but they lurk in the rarely visited corners of our application.
Generalization hierarchies, for example, are uncommon and present difficult design choices for everyone. ORM has a decent if not wonderful answer. The "partial object" issue is a bogeyman we need not fear when we have caching to restore performance. There doesn't have to be an entity identity problem for ORM systems — and it's just as serious an issue for non-ORM systems.
If the Vietnam comparison made sense, he should have found disaster and devastation everywhere, not just discomfort at the periphery. I contend that a 95% solution with a safe landing for the remaining 5% is astonishingly good technology — as good as it gets. Neward thinks anything less than 100% satisfaction is a crushing defeat for ORM.
"[ORM] developers simply accept that there is no way to … close the loop on the O/R mismatch."
At the end of the day, does Neward produce any evidence to substantiate his charge that ORM adoption is a Slippery Slope leading to a Vietnam-like quagmire?
No. Not a single case study. Fortunately thousands of happy ORM users are unperturbed by Neward's dark fears.
I close with an extended quote from one of the blog responses:
I read, and even understood the article. However, I find it overblown because I'm one of those who have accepted a data centric universe. In a large enterprise, if you accept a data centric universe (and for interoperability, that would seem wise) then you don't try to shoehorn objects into relations. You use objects generated by your ORM to encapsulate your relations and build your business logic objects on top of those building blocks.
I guess I'm confused by the commentary that makes this seem "unsolved". Systems I have designed are used by tens of thousands of users every day, with high concurrency and various access patterns (reporting, transactions and multi-screen edits) and it just hasn't become a problem. Neither web products nor desktop products have made me curse my choice of ORM tool, nor have I encountered situations where I have been unable (or even hard pressed) to create a solution. Nor do I feel like I'm writing tedious and buggy code, since the only code I write at the business layer is to handle, you know, business rules.
Wesley Shephard on June 29, 2006
Ted Neward and Oren Eini squared off on a "Dot Net Rocks" show on 24 May 2007. The show was billed as "The ORM Smackdown" and there were plenty of verbal punches thrown.
The transcript can be found here
Neward again failed to produce a serious example of disaster. Eini, on the other hand, managed to refer to several extremely challenging situations in which an ORM solution worked well.
Neward identified one ORM failure wherein the developers used a tool to generate a database from the object model. The resulting relational schema was unworkable. This proves little other than that we should be wary of database generation tools. Schema generation is, at most, an optional ORM-related feature.
Neward restated his belief that object-oriented developers and DBAs are in some kind of war. They each think they own the data. There is some truth to this and, being a developer, I side with the developers. But no sensible minds in either camp seriously believe they can — or should — dictate the database schema to the other.
There was an interesting exchange about object-oriented databases in which Eini observed — correctly I think — that they succeed only in relocating the object relational mapping to a different tier: a replication layer that translates the object database into a relational database for reporting and analytic tools.
At the end of the show, Neward arrives at a startlingly sensible conclusion:
I guess my big thing would simply be, an ORM is not gonna save you from the object/relational impedance mismatch; It may make things easier for you to manage; but this is a leaky abstraction. We need to just accept that. You need to basically decide where you wanna be on the continuum. And, I mean, to a large degree, you can use an ORM, NHibernate or Hibernate or whatever, to simplify your development life, to take the easiest 80% and then use the remaining 20% to just write straight SQL. And do those hard things that an HQL or something else may not be able to accomplish for you. But don't expect that an ORM is going to completely close the loop and remove the relational database from view. It's never going to go away. You just have to accept that it's a leaky abstraction.
I can live with the "leaky abstraction." I might quarrel with the arithmetic but I fundamentally agree. ORM is a great tool that carries you most of the way home. You can never forget that there is a relational database behind it and you will have to compromise the object model and break the object-oriented paradigm on occasion.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
select ProductName , Quantity , Order.OrderDate as OrderDate , Order.Customer.Name as CustomerName<br />
from OrderDetail<br />
where<br />
Order.Customer.Name like 'A%'
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/20849/ORM-as-Vietnam-of-Computer-Science-A-Response?msg=2275362 | CC-MAIN-2020-34 | refinedweb | 9,024 | 55.44 |
OSMF with Flex examplesttehify Aug 10, 2010 6:11 AM
Hi there,
I'm looking for a Flex flash video player that can be skinned using CSS and was hoping OSMF might be the answer. Somehow, I can't seem to find any examples of it? They all seem to be Flash related.
Thanks
1. Re: OSMF with Flex examplesScreenName1710b Aug 10, 2010 7:03 AM (in response to ttehify)
I haven't seen that OSMF has any CSS parsing/skinning capabilities built in, but it does have an example of a FlexUI component:
MediacontainerUIComponent under apps\samples\framework - you can use that as a base Container for OSMF and build a larger Flex component on top of it.
I think you'd have to construct your own system for reading in CSS, while you can use the Chrome skinning interface (example is the OSMFPlayer under apps\samples\framework) - that only seems to allow for a fixed external fonts. You might be able to pull in an assign different fonts to different assets, but I haven't experimented with that.
-Will
2. Re: OSMF with Flex examplesttehify Aug 10, 2010 8:54 AM (in response to ScreenName1710b)
Hi there,
Thanks for your reply! Ouch.. well that's not really gonna work for me. I was hoping that I can use a CSS file to be shared for my Flash players and HTML 5 players. Anyway, I'm having compilation issues when building a simple media player using OSMF and Flex 4.
1. The OSMF.swc bundled with Flex 4 just wasn't able to recognize the MediaPlayerSprite type
2. I copied the OSMF.swc from the source code and it wouldn't compile my mxml file because the of some signed-digest thing that wasn't found in the catalog. It requested me to compile the file again but I have no idea what to do. I tried compc but am completely lost at what options to use. Please help?
3. Re: OSMF with Flex examplesScreenName1710b Aug 10, 2010 9:41 AM (in response to ttehify)
There is a new OSMF.swc available from Adobe on the OSMF downloads page, do not use the OSMF.swc that is packaged with Flash Builder 4 as it is not the most recent.
I have had the most success in compiling from the raw AS3 source files found in the SVN trunk of the OSMF project, as there are also updated changes in those files as well. However, if you have need for the SWC, then re-compiling it yourself is an option.
Also, make sure that you have added your additional compiler settings for OSMF:
-define CONFIG::LOGGING false -define CONFIG::DEBUG true -define CONFIG::FLASH_10_1 true
FLASH_10_1 set to true is required for HTTP streaming - but all of these settings are required to be present in the compiler build statement.
-Will
4. Re: OSMF with Flex examplesttehify Aug 10, 2010 11:25 AM (in response to ScreenName1710b)
How would I be able to compile from the raw AS 3 source files and integrate it? Specifically, what would a sample command of compiling look like? I'm fairly new to Flex and have only ever compiled mxml files.
I get an error mentioning that MediaContainer type was not found - would that relate to my library?
I followed this code but was not able to compile:
5. Re: OSMF with Flex examplesrshin Aug 10, 2010 12:05 PM (in response to ttehify)
That example appears to be based on the old version of OSMF. Since then there were API changes that won't complie with OSMF1.0.
Have you imported MediaContainerUIComponent and it says no type MediaContainer found? What version of OSMF.swc or library are you using?
Are you using Flex builder (Flash builder)?
Ryan
6. Re: OSMF with Flex examplesScreenName1710b Aug 10, 2010 12:18 PM (in response to ttehify)
To get to the compiler in for the "define settings"
Click on the Project top folder> Right Click > Properties > Flex Compiler
Add the above info to "Additional compiler arguements:" space.
Download the OSMF Trunk via SVN to a directory.
In the project properties select "Flex Build Path" and under Source Path add that folder.
Remove in the Library under flex 4.1 (or whatever SDK) the OSMF.swc (as that will cause a conflict).
Make sure you have downloaded the most recent "playerglobal.swc" from AdobeLabs, then you'll want to "Add SWC" and link that swc to you project.
Maybe not the best example link because some of the tags aren't closed and seem to be misplaced, could be used with some corrections, but I don't have the time to make them for you. Highly recommend the the media component above for flex, but it still will require you to add some code to get it to work.
-Will
7. Re: OSMF with Flex examplesttehify Aug 10, 2010 1:00 PM (in response to rshin)
I'm using the command line compiler. I downloaded the swc off of the downloads page linked from the OSMF website. Here's what I do and here's what I get back:
I tried to compile it with:
mxmlc osmf.mxml
and I get back:
Error: No signed digest found in catalog.xml of the library, /opt/flex/frameworks/libs/osmf.swc. Compile the library with -create-digest=true and try again
Code is shown below
<?xml version="1.0"?>
<!-- mxml\HellowWorld.mxml -->
<mx:Application xmlns:
<mx:Script>
import flash.display.Sprite;
import org.osmf.media.MediaPlayerSprite;
import org.osmf.media.URLResource;
public function init():void {
var sprite:MediaPlayerSprite = new MediaPlayerSprite();
videoContainer.addChild(sprite);
sprite.resource = new URLResource(" lv");
}
</mx:Script>
<mx:Panel
<mx:UIComponent
</mx:Panel>
</mx:Application>
8. Re: OSMF with Flex examplesttehify Aug 10, 2010 12:58 PM (in response to ScreenName1710b)
Would there be any chance of doing this through command line?
I'm hoping everything will all build and compile on Solaris as well.
9. Re: OSMF with Flex examplesScreenName1710b Aug 10, 2010 1:31 PM (in response to ttehify)
Unfortunately, using the command line compiler isn't a forte of mine, and I'd have to defer to someone else with more experience with it.
Basic overview of what I think you'll need to do - add in the "source path" of the osmf source files (-source-path ?), add in the definition statements, (-define CONFIG::DEBUG false etc.).
Since there's a bunch of different settings that you'll have to set, then looking up how to build a configuration xml file would probably be best for expediency - that information can be found online with the mxmlc compiler docs.
-Will
10. Re: OSMF with Flex examplesrshin Aug 10, 2010 3:11 PM (in response to ScreenName1710b)
Here's some feeback from Flex folk
What build of Flex are you compiling with? Where did you get osmf.swc?
The quick solution to your problem is to turn off RSLs by compiling with the "-static-rsls=true" option.
They think that this sounds like rsl so can you try this?
Ryan
11. Re: OSMF with Flex examplesAndrian Cucu Aug 11, 2010 12:29 AM (in response to ScreenName1710b)
You can also try to use an ant task for compiling your project. It might be easier then using the command line directly.
Here is a sample build.xml that might get you started:
It sets the necessary compile options, and builds both the OSMF library and a player that uses it. You can target either 10.0 or 10.1 by changing the properties files.
Hope this helps,
Andrian
12. Re: OSMF with Flex examplesttehify Aug 11, 2010 6:48 AM (in response to rshin)
Thanks Ryan.
I got the osmf.swc from
I used both the source.zip and the the standalone one.
I'd like to try compiling it but I don't know how to. I mean I have tried looking at compc but it looks like I also have to specify every single class I want to compile? Given the amount of classes in the osmf framework, I reckon it would probably take a while for me to find all the ".as" files and specify them. Furthermore, I am unfamiliar with all the other options required so it'd be nice if you can show me how you guys compiled the existing osmf.swc?
Thanks!
13. Re: OSMF with Flex examplesttehify Aug 11, 2010 6:50 AM (in response to Andrian Cucu)
Thanks! This sounds perfect.. I'll try it out!
14. Re: OSMF with Flex examplesttehify Aug 11, 2010 7:38 AM (in response to rshin)
Hi Ryan,
I managed to compile it indeed with that solution. Does this mean that all the related libraries are within the swf?
The swf file is unfortunately quite large.
15. Re: OSMF with Flex examplesdaslicht Nov 4, 2010 10:20 AM (in response to ttehify)
Hello,
I am looking for a Flex Player example which can play .f4m files and listen for Cue points.
My first try, but I get :
Error: The specified capability is not currently supported
<fx:Script> <![CDATA[ import mx.events.FlexEvent; import org.osmf.containers.HTMLMediaContainer; import org.osmf.elements.HTMLElement; import org.osmf.elements.VideoElement; import org.osmf.media.MediaElement; import org.osmf.media.MediaPlayer; import org.osmf.media.URLResource; import org.osmf.metadata.TimelineMetadata; [Bindable] private var player:MediaPlayer = new MediaPlayer(); [Bindable] private var stream:String = ""; //private var embeddedTimelineMetadata:TimelineMetadata; private var media:MediaElement = new VideoElement(new URLResource(stream)); protected function application1_creationCompleteHandler(event:FlexEvent):void { media = new VideoElement( new URLResource(stream)); player.media = media player.play(); } ]]> </fx:Script>
Cheers
Marc
16. Re: OSMF with Flex examplesMarioVieira.net Nov 13, 2010 1:51 PM (in response to ttehify)
Hiya,
I got OSMF 1.5 up for Flex 3 and 4. It's the actual playback and controls, you would be free to add your buttons as you want.
Hope it helps!
M
17. Re: OSMF with Flex examplesdaslicht Nov 13, 2010 3:22 PM (in response to MarioVieira.net)
What about *.f4m File Playback ?
18. Re: OSMF with Flex examplesMarioVieira.net Nov 14, 2010 12:16 AM (in response to daslicht)
just addded this player type
select "flashMediaManifestF4M" for playerType in the OSMFPlayer | https://forums.adobe.com/message/3044254 | CC-MAIN-2015-27 | refinedweb | 1,711 | 65.52 |
Web Developer
Welcome! Happy to see you in the last part of my JSWorld Conference 2022 summary series, in which I share a summary of all the talks with you.
You can read the first part here, the second part here, and the third part here, where I summarized the first ten talks of the first day that were about::
So, let’s start with the last part of the first day.
Gert Hengeveld - Principal software engineer at Chromatic
Storybook 6.4 will bring interaction testing to Storybook, enabled by CSF 3. In 6.5, we're going all-in on accessibility testing and will enable addon authors to provide additional testing methods. With Storybook 7.0, stories and their components become the source of truth for the entire UI development lifecycle.
You working on a web app and in order to do that you have to first Spin up the whole platform, Recompile with every change you make, when designers review your stuff 2 weeks later they always find something that you have to tweak or change, and that means you have to rework stuff that’s already shipped.
But now days Modern UIs are
built assembled from components because for these reasons:
Efficiency: Reuse existing components
Speed: Parallelize development across people and teams
Quality: Verify that UIs work in different scenarios
Maintenance: Pinpoint bugs at the component level.
It’s easier to work on isolated components and not have to consider the entire context of a running app. This is where Storybook comes in.
Storybook is a frontend tool for building UI components faster and easier in isolation. It gives you a catalog of all components in different states, and you can document your component library.
The general concept in Storybook is a story:
import Badge from './Badge' // Metadata export default { component: Badge, args: { label: 'Hello world' }, } // First Story export const Small = { args: { size: 'small', }, } // Second Story export const Large = { args: { size: 'large', }, }
Storybook is framework agnostic, so you can choose your favorite framework.
What would you define a story for?
Different states:
Edge cases:
Context:
And it gets more complicated because there are countless combinations of these situations.
Sharing is essential in the workflow because:
And in Storybook, there are so many features that allow you to share your stories and your components with other people.
Storybook provides
npm run build-storybook command which gives you a statically exported version of your story and you can upload it on Github pages, netlify, or on chromatic, and then you can share a link to that with for example your stakeholders so that they can play around with them and verify that is what they had in mind.
Chromatic is a service on top of Storybook. You can upload your stories to chromatic and it gives you a library of your components.
There is a Storybook Connect for Figma, in which you can Inspect stories in Figma, Link components to stories, Play with interactive stories in Figma, compare the design with implementation, or inspect the sizing of the component to verify whether or not that is developed pixel perfect or you can use accessibility add-on in Storybook to verify what the component looks like when you have blurred vision or you are color blind.
You can simply Paste a Storybook URL in Medium or Notion. It Automatically converts them to iframe embed and Dynamically adjusts the height of that iframe.
It’s more convenient to extract the Docs from Storybook and put them into a custom build website, and that’s Storybook Docs 2.0 (alpha) which allows you to take those MDX pages that you write in Storybook and throw them into your custom Website.
Components break in unexpected places, Testing all scenarios is a lot of work, and Reproducing a specific state is tricky. On the other hand, customers have high expectations these days because they are used to the way that Apple builds their software which always works the same way.
Testing is crucial if you are building software that needs to be reliable and always works.
It was a top request from the community and what it allows you to do is to Define a
play function on your story, which is a function that gets executed as soon as stories run on your browser and you can use that to build Stories for complex components & pages.
You can now Simulate user behavior in the browser. That is Powered by Testing Library and they didn’t reinvent the wheel.' })); } }
This is familiar to people written end-to-end tests and integration tests in the past with other tools, but it’s now possible directly in your Storybook as you are working on your stories.
This gives you Assertions powered by Jest and gives you an In-browser debugger. They also built a Node.js test runner so that we can run all of those tests in one go in a headless browser. And when something went wrong, Storybook can give you a URL which you can click, and it opens up your Storybook exactly at that state and then you can check and figure out what was wrong.
import { expect } from '@storybook/jest'' })); // Assertion | Automatic spy on actions await expect(args.onSubmit).toHaveBeenCalledWith('query'); } }
All of this is possible in Storybook 6.5.
Wim Selles - Lead solution architect at saucelabs
Building a mobile app nowadays is pretty easy. If you have some JavaScript/HTML/CSS skills you can use technologies like Ionic, NativeScript, or React Native to release your first app to the App Stores. Building your app might seem easy but releasing and maintaining your app when it’s in production can be difficult. During this talk, Wim will walk us through most of the hurdles of releasing and maintaining a Mobile app in production.
According to Wim himself, the actual title that he chose was “The problems or difficulties that must be overcome to allow our app to move freely to the phones of our end users and give ourselves the necessities to keep this app running.” but it was too long!
Imagine a situation where you want to build your own app and release it in stores, and to do so, you need to start with an idea.
Then you need some skills. You are maybe a master in HTML, CSS, and JavaScript/TypeScript, but to develop an app and publish it in app stores you need to have some ios skills (Swift) and android skills (Java/Kotlin). But maybe it’s easier for Web developers to work with tools like React Native, NativeScript, and Ionic.
So, now grab your Coffee and your mac - if you wanna develop something for iOS you need to have a mac - and open up your favorite IDE!
Based on tools of your choice like React Native or the other tools mentioned, Start reading Docs, and you’ll see that you need some extra tools. For example, you need Xcode for iOS or Android Studio for android.
Depending on the type of framework that you selected, you might have a challenge, and the only thing you need to do is use your best friend, Google.
A couple of days later, when you built your first screens, you tested your app manually on emulators, you maybe want to test it on your real device. On Android, you need to do some signings and when you got your key, you can push an android application to your android phone.
But when you start with apple, the first thing you need to do is use your credit card! You need to have a membership and there are two types of them. There is a free one but you can not push your app into the store. There is a default option that costs you 99$ but then you have the option to push your application to your own phone but also in the end into the app store so that you make some money from your app, but remember, They will still take 30%.
There is one thing that you need to be aware of. If you start with the developer certificate, there is a limitation on the number of friends that you can invite to push the application to their phones, especially iOS phones. Apple can also make it a little bit easier for you as long as you pay 299$ and you will get the enterprise certificate.
Building your app — which is better, cooler, and faster with m1 macs — and deploying it is so slow in comparison to the web.
In addition to that, you need to start testing your app with unit tests and UI tests. But when you start to run your test cases and again compare that to what you are used to, you’ll see that it’s much slower.
Now, maybe it’s time to push your app to the app store. Android would be the easier one, but for that, we also need to use our credit card and pay 25$ to be able to push our app into the store.
But when you are then looking to the apple or the android app stores, you’ll see that you need to wait for a review, which will take between 1 and 7 days. The other problem is, that after a couple of days you may receive an email that tells you: “Your application has been rejected” maybe because you forgot to provide credentials or another reason. Even after acceptance it can take up to 1 day before the app or your update is in the stores, and it will be the same for every release!
After a while, and after those hard days and so much waiting, you’ll get your first review from a user, a 1.0 Star review!
Why this story?
These are the things that could happen if you compare this with your web app experiences.
Let's take a look at Software development cycles and compare each step for web applications and hybrid/native applications.
Why is it hard you may ask? If we’re talking about mobile, first thing is that we could have 20 different versions out there on the phones of our end-users, especially if we do not implement a proper way to do some version management, and unlike with web apps, a mobile native app can’t be reverted/removed from a user’s phone. This can result in having more business impact when a Native app contains bugs.
Secondly Mobile is remote by default, meaning an issue with your app is always happening somewhere else, not at your desk or your screen. it happens in the hand of a person whom you cannot easily communicate with to understand why and how the issue happened.
Last but not least, fragmented market. There is a massive variety of android and ios devices across hardware and software configurations, different levels of os versions, and locals. and this is before we get to the mobile device itself that needs reception, wifi, battery, and more.
If we go back to those pain points, the first thing that we should be aware of is Deploy. If we want to release something quickly, we are limited here and can not roll back. You can not fix something fast in production. it will take time and that time will annoy your customer.
Also, you need to focus on getting the right feedback from the devices. If you know what happens and what your customers doing, it could help you to replicate what was causing that issue.
Next, do not forget to test properly. Do not test only the web content in the case of hybrid. Your user is using his fingers. he is not using JavaScript executors to scroll something into the view, no, we are all swiping through our apps. test that, manually or automated.
And last but not least in all three parts of this process the debugging information is important. If you already have the right debugger in your test build, you might be able to retrieve information that could be valuable for you before releasing it to the stores.
Samuel Snopko - head of developer relations at Storyblok
The time flies fast. We spent more than the last two years in the online bubble. Our work and life merged slowly together. We don't commute, and we are so much more effective! But are we? Did we change? Let's stop for a second. Let's put our heads together. Let's question our creativity and productivity. What is the best developer experience? Are the answers in SDKs, Relations, Love, or Thunder?
This talk is mostly about the experience, and about having a good developer experience to enhance the final user experience.
If you work in an agency, stakeholders, clients, salespersons, and managers are asking not to create the best backend experience, but to create something for customers that can quickly use, something powerful which will make them better sales, better search optimization ratings, better accessibility and which will look cool. To do that, we as developers need one thing, and that’s time.
We spend most of our time usually on the monolithic CMSs and building and shaping them to the state that we can achieve some cool new frontend stuff because they are not ready. SDKs are here to save you time, and to use them to build a better user experience at the end.
With better SDKs → you as a developer have more time → to build a better app.
Developer relations, which i’m currently head of, is not about developers, it’s about building the relations between all the people.
You need relations because you can not scale your knowledge in every aspect of something. On the other hand, relations are like a guide on your journey and will help you to build better products. Those relations will shape your future of you.
I can backtrace my relations and go to my high school. I had a math teacher who lead me to the point that I started coding. My first job as a frontend developer where I met someone and he opened for me the word of CSS and HTML and he showed me some magazines and books and conferences where I got all the inspiration. where then I made some other relations with others who show me the beauty of HTML and CSS. I really loved what I was doing and that’s also the reason why I headed to Vue and then Nuxt. And then after the next, I met the CEO of Storyblok, and that's the reason why I’m here as head of developer relations and speaking right now. If you think about it, it's only one person and one relation that I was building that could change the directions that I was going.
So relations are important because your success depends on them, and not only it can help you to succeed, it can save you when you have some problems, and not only the people from your bubble or your team but also the people from other teams, from the community and ambassadors.
When we have different views on the same topic, we can inspire each other.
We all need to practice it and it takes time.
Those relations will also inspire you and give you the creative boost, the boost that we can not get if we are sitting alone in the room and just coding all the time.
On the other hand, you need to get bored and then get rest to let the creativity take over.
Next time when you face a bug, don’t drink the 20th cup of coffee and try to work 12 hours on it, just do something else or take a rest or go for a walk so that your brain has time to think and you will come up with the solution more easily.
Love what you do, your community, and your time. if you don’t like it it doesn’t matter how much money you make.
Struggles are good. We have different opinions, but the point is don’t take them personally.
There will be always some decisions that you don’t like, and there will be also some decisions that the community doesn’t like. But it’s not personal, it’s not about us. It’s because we all want to move to the next level and make it better.
Last but not least, we all need to learn to say sorry.
This is the end of my JSWorld Conference summary series. Thank you for coming so far, I hope you enjoyed the journey and it can be as valuable to you as it was to me.
You can read the previous parts here:
Over the next few days, I’ll do the same for Vue Amsterdam Conference which was held on 2. and 3. June right after JSWorld Conference. Stay tuned…
Encode, Stream, and Manage Videos With One Simple Platform | https://hackernoon.com/jsworld-conference-2022-part-iv?ref=hackernoon.com | CC-MAIN-2022-33 | refinedweb | 2,853 | 68.7 |
CSPICE_WNELMD determines whether a point is an element
of a double precision window.
For important details concerning this module's function, please refer to
the CSPICE routine wnelmd_c.
Given:
point a double precision scalar point, which may or
may not be contained in one of the intervals
in window.
window scalar, double precision window, containing
zero or more intervals.
The user must create 'window' using
cspice_celld.
the call:
boolean = cspice_wnelmd( point, window )
returns:
a scalar boolean, TRUE if the input 'point' is an element of
the input window---that is, if
a(i) < point < b(i)
- -
for some interval [ a(i), b(i) ] in window---and returns FALSE
otherwise.
Any numerical results shown for this example may differ between
platforms as the results depend on the SPICE kernels used as input
and the machine specific arithmetic implementation.
;;
;; Create a cell containing a double precision
;; 8-vector.
;;
win1 = cspice_celld( 8 )
;;
;; Define a
test_array = [ 0.d, 1.d, 9.d, 13.d, 29.d ]
for i=0, n_elements(test_array) -1 do begin
if( cspice_wnelmd( test_array[i], win1) ) then begin
print, test_array[i], " - an element of the window"
endif else begin
print, test_array[i], " - not an element of the window"
endelse
endfor
IDL outputs:
0.0000000 - not an element of the window
1.0000000 - an element of the window
9.0000000 - an element of the window
13.000000 - not an element of the window
29.000000 - not an element of the window)
element of a d.p. window
Wed Apr 5 17:58:04 2017 | https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/IDL/icy/cspice_wnelmd.html | CC-MAIN-2019-35 | refinedweb | 252 | 65.62 |
I have a DLL I need to use, and it came with a header file and a DEF file. Judging by the header file, it looks like the DLL was written in C. The documentation says I should make a LIB file from the DEF file before the DLL is useable. I just got Visual Studio and am learning C#, but don't currently intend to learn C++ any more than is necessary to get this DLL working (I've been programming in FoxPro for several years). I tried using the LIB command with the /DEF switch, but I must not be using it correctly. I'm very much a novice in Visual Studio and don't know how to create C++ projects. I tried adding a command like this into the sample Hello World project, but it created an unknown error.
LIB /DEF: "c:\Data\MyDLL.def";
I tried running the same command in the Command Window, but it doesn't run code like FoxPro's Command Window does. I also tried running it in a Windows Command Prompt window, and it gave the error "MyDll.def : fatal error LNK1106: invalid file or disk full: cannot seek to 0x5059". The disk isn't full, and I doubt that the file is invalid.
I also found instructions to modify the header file, compile to get an OBJ file, and then use the LIB command to create a LIB file. This is beyond me, and anyway the header file in the sample has a class declaration, whereas my header file doesn't.
Apparently once I have the LIB file, I compile that into my EXE and don't use the DLL at all.
I hope there are a few simple steps you can tell me that will create the LIB file. If not, I would appreciate as much information as you can give me. I'll also be interested to know if the LIB file will be useable by C++ only or also by C#, Visual Basic, or FoxPro.
Here's the code from the header file and the DEF file. I cut out the comments and most of the arguments and changed the DLL name. Thanks for the help.
Code:#ifndef MYDLL_H #define MYDLL_H #if defined(CALLED_DLL) # define DLL_IMPORT __declspec(dllexport) #else # define DLL_IMPORT __declspec(dllimport) #endif #define STORE_SUCCESS 0 #define STORE_FILE_NOT_FOUND 1000 extern "C" int DLL_IMPORT __stdcall StoreX( char* FilePath, char* ServerAddress ); #endifCode:LIBRARY MYDLL.DLL EXPORTS StoreX@80 @1 ; StoreX | http://cboard.cprogramming.com/cplusplus-programming/42907-making-lib-file-def-file-dll.html | CC-MAIN-2015-32 | refinedweb | 412 | 71.14 |
- comments yet. People are still trying to read the code snippet.
Admin
The saddest part is, it probably isn't even any more efficient than if it'd been written legibly.
Admin
well, at least the return type os named correcly.
it surely did result in damage to the reader
Admin
The only thing missing is adding a recursive call to progress() for further confusion with the value progress.
Admin
Is there an IDE add-on that converts Ternaries to properly nested IF THEN ELSE ?
Admin
To add to the horror, once you replace all those ternary by if / else, you can see the author probably didn't even tried to make a truth table
DamageResult progress(UUID uuid, double progress, boolean auto) { if(this.isLooted()) { return DamageResult.ALREADY_LOOTED; } else { if(this.thief == null) { return this.makeProgress(uuid, progress); } else { if(this.challenger == null) { if (this.thief.equals(uuid)) { if (auto) { if (this.ticksSinceLastThiefHit >= this.getConfig().getLootTickRate()) { return this.makeProgress(uuid, progress); } else { return DamageResult.AUTO_NOT_YET; } } else { if (this.ticksSinceLastThiefHit >= this.getConfig().getClickRateCap()) { return this.makeProgress(uuid, progress); } else { return DamageResult.TOO_QUICK; } } } else { return this.makeChallengeProgress(uuid, progress, true); } } else { if (this.thief.equals(uuid) || this.challenger.equals(uuid)) { if (auto) { return DamageResult.NO_AUTO_CHALLENGE; } else { if (( this.thief.equals(uuid) ? this.ticksSinceLastThiefHit : this.ticksSinceLastChallengerHit ) >= this.getConfig().getChallengeClickRateCap() ) { return this.makeChallengeProgress(uuid, progress, false); } else { return DamageResult.TOO_QUICK; } } } else { return DamageResult.CHALLENGE_IN_PROGRESS; } } } } }
Admin
There's your WTF. ^^^^ Right there
Admin
In all fairness to the original programmer, we should probably note that this probably began as:
DamageResult progress() { return this.isLooted() ? DamageResult.ALREADY_LOOTED : DamageResult.CHALLENGE_IN_PROGRESS; }
Admin
Assembler programming wasn't so bad, after all.
Admin
Tried to make sense of it, at this point i think it's impossible to guess what this does and where it's supposed to go. I'm also pretty sure i messed up trying to unwind all the ternary hell, but here it goes
//I'm still not sure if this is a battle or a lockpicking minigame... DamageResult progress(UUID uuid, double progress, boolean auto) { if(this.isLooted()) { return DamageResult.ALREADY_LOOTED; }
}
Admin
Here's my attempt at a rewrite. It still utilizes a few ternaries but in a much saner way. It also negates a few of the conditionals to supply early ejection points and avoid pyramid programming. This should honestly be refactored into at least two separate methods but I'm done here.
DamageResult progress(UUID uuid, double progress, boolean auto) { if (this.isLooted()) return DamageResult.ALREADY_LOOTED;
}
Admin
Credit to the developer, they did a good job naming their variables.
Admin
Here is one:
Here are a few other solutions:
An online one:
Admin
And your expansion shows that it's not the use of ternaries that makes this a WTF, but the complete mess of logic therein.
Admin
I want to know* how those extensions for turning ternaries into if-elses go on this shambles. They should at least be using this as a test case.
Admin
Admin
When did TDWTF become a forum for submitting bad code and asking members to fix it?
Admin
Visual Studio 2017 and later, especially together with ReSharper from JetBrains has many such auto convert helpers to change between different ways to write the same code.
Admin
I used to work at a games studio. I recognize this type of quality code that is produced weeks into a 996-type crunch cycle! (12+ hrs/day, 6 days/wk)...
Admin
Hello, everyone. The code snippet in this article is actually one of my own.
I noticed that a method I had didn't have anything stopping me from turning it into a large ternary and, once done, shared it on a public Discord chat with some friends as a joke.
I can assure you that this code never made it into production!
If you want to view it in it's "unfucked" form, here's the gist I made when showing my friend:
Admin
It could be worse. In PHP
return this.isLooted() ? DamageResult.ALREADY_LOOTED : this.thief == null ? this.makeProgress(uuid, progress) : this.challenger == null ? ..
it evaluates as
return (( (this.isLooted() ? DamageResult.ALREADY_LOOTED : this.thief == null) ? this.makeProgress(uuid, progress) : this.challenger == null )) ? .. | https://thedailywtf.com/articles/comments/whose-tern-is-it-to-play | CC-MAIN-2019-26 | refinedweb | 696 | 60.21 |
I would like to set up an FTP service on a non-standard port (i.e. not 21) using the FTP service in IIS 6 on a Windows 2008 Server.
I have set it up and tested it locally - it all works.
However I am having issues when accessing it remotely. I can Telnet to the new port and see an FTP response, but I cannot create a true FTP connection.
So I think the firewall port for the connection from my remote PC to the server is open, but the response from the server to my PC occurs on a random port.
In order to limit the return (Outbound) ports used in the IIS 6 FTP service, I have followed the steps detailed here: Event ID 16 — IIS FTP Service Configuration (although adsutil.vbs was not on the server so I downloaded it it from another source and used that).
Then I used the command cscript.exe adsutil.vbs set /MSFTPSVC/PassivePortRange "6000-7000" which ran okay. In then
Then I ran net stop msftpsvc, net start msftpsvc and sc query msftpsvc.
Everything ran okay, but when I test using Wireshark, I can see that the ports 6000-7000 are not being used.
Any idea what might be wrong?
See MS KB article 555022
This article describes the ports used by FTP. The connection is made over the control port but the data transfer occurs over a different port. Try configuring a limited set of data transfer ports in IIS and configure your firewall to allow those ports for your FTP server IP.
I use network sniffers to troubleshoot these kinds of problems. The firewall is a likely culprit, so analyzing the network traffic on the outside and inside of the firewall can reveal what specifically is going wrong with the FTP connection.
One thing to check on the firewall policy is that the non-standard port is actually configured to handle FTP traffic. It seems that the policy has been configured to enable a TCP connection to the non-standard port. Has the policy been configured to allow specifically FTP traffic?
If you are using a very low-end firewall that can only perform NAT and basic packet inspection (i.e., not a TCP state aware firewall), then you will need to configure the FTP server to only allow passive mode, configure the FTP server to only allow DATA connections on some small range of ports (10 or 20?), and then configure the firewall to allow inbound connections to the FTP server for those 10-20 ports.
I found the full details12 times
active | http://serverfault.com/questions/16761/how-to-limit-ftp-ports-when-using-iis6-ftp-service-on-alternate-ftp-port | crawl-003 | refinedweb | 436 | 71.24 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.