In this case, your best bet may be to come up with an ID structure for these messages that
incorporates (leads with) the timestamp; then have Lucene use that as the key when retrieving
any given message. For example, the ID could consist of:
{timestamp} + {unique id}
(Beware: if you're going to load data with this schema in real time, you'll hot-spot one region
server; there are well-known considerations related to this.)
Then, you can either scan over all data from one time period, or GET a particular message
by this (combined) unique ID. There are also types of UUIDs that work in this way. But, with
that much data, you may want to tune it to get the smallest possible row key; depending on
the granularity of your timestamp and how unique the "unique" part really needs to be, you
might be able to get this down to < 16 bytes. (Consider that the smallest possible unique
representation of 100B items is about 36 bits - that is, log base 2 of 100 billion; but because you
also want time to be a part of it, you probably can't get anywhere near that small.)
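To make the idea concrete, here is a small sketch (in Python, with assumed field widths; nothing HBase-specific) of such a composite row key: a fixed-width big-endian timestamp followed by a unique suffix, so byte-wise key order matches time order and a time period maps to a contiguous key range.

```python
import struct
import uuid

def make_row_key(timestamp_ms, unique_suffix):
    # 8-byte big-endian timestamp: byte-wise (lexicographic) order == time order
    return struct.pack(">Q", timestamp_ms) + unique_suffix

# Two messages one millisecond apart; truncating the UUID to 8 bytes is the
# kind of size/uniqueness trade-off discussed above.
k1 = make_row_key(1354000000000, uuid.uuid4().bytes[:8])
k2 = make_row_key(1354000000001, uuid.uuid4().bytes[:8])

assert k1 < k2        # keys sort by time first
assert len(k1) == 16  # 8 bytes timestamp + 8 bytes unique suffix
```

With keys shaped like this, fetching one hour of messages becomes a scan from `pack(">Q", start_ms)` to `pack(">Q", end_ms)`; the hot-spotting caveat above still applies when loading in real time.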
If you need to scan over LOTS of data (as opposed to just looking up single messages, or small
sequential chunks of messages), consider just writing the data to a file in HDFS and using
map/reduce to process it. Scanning all 100B of your records won't be possible in any short
time frame (by my estimate that would take about 10 hours), but you could do that with map/reduce
using an asynchronous model.
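For what it's worth, the "about 10 hours" figure is easy to sanity-check with a back-of-envelope calculation; the per-node scan rate assumed below is illustrative, not a measured number.

```python
records = 100e9                  # 100B messages
rows_per_sec_per_node = 200000   # assumed sequential scan throughput per node
nodes = 14                       # ~15 machines minus one for master services

seconds = records / (rows_per_sec_per_node * nodes)
hours = seconds / 3600
assert 9 < hours < 11            # ~10 hours at these assumed rates
```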
One table is still best for this; read up on what Regions are and why they mean you don't
need multiple tables for the same data.
There are no secondary indexes in HBase.
If you use Lucene for this, it'd need its own storage (though there are indeed projects that
run Lucene on top of HBase).
Ian
On Dec 5, 2012, at 9:28 PM, tgh wrote:
Thank you for your reply.
I want to access the data with the Lucene search engine, that is, to retrieve any message by
key, and I also want to get one hour of data together. So I am thinking of splitting the data
into one table per hour. Or, if I store it in one big table, is that better than storing it in
365 tables or 365*24 tables? Which one is best for my data access schema? I am also confused
about how to build a secondary index in HBase if I use a keyword search engine such as Lucene.
Could you help me
Thank you
-------------
Tian Guanhua
-----Original Message-----
From: user-return-32247-guanhua.tian=ia.ac.cn@hbase.apache.org<mailto:user-return-32247-guanhua.tian=ia.ac.cn@hbase.apache.org>
[mailto:user-return-32247-guanhua.tian=ia.ac.cn@hbase.apache.org] on behalf of Ian
Varley
Sent: December 6, 2012 11:01
To: user@hbase.apache.org<mailto:user@hbase.apache.org>
Subject: Re: how to store 100billion short text messages with hbase
Tian,
The best way to think about how to structure your data in HBase is to ask
the question: "How will I access it?". Perhaps you could reply with the
sorts of queries you expect to be able to do over this data? For example,
retrieve any single conversation between two people in < 10 ms; or show all
conversations that happened in a single hour, regardless of participants.
HBase only gives you fast GET/SCAN access along a single "primary" key (the
row key) so you must choose it carefully, or else duplicate & denormalize
your data for fast access.
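To illustrate the "duplicate & denormalize" point with a toy model (plain Python dicts standing in for HBase tables; all names are made up): instead of a secondary index, each message is written twice, once per row-key shape, so each access pattern becomes a plain key or range lookup.

```python
by_conversation = {}  # row key shape: ((user_a, user_b), timestamp)
by_hour = {}          # row key shape: (hour_bucket, timestamp, msg_id)

def store(msg_id, sender, recipient, ts_ms, body):
    pair = tuple(sorted([sender, recipient]))          # canonical participant order
    by_conversation[(pair, ts_ms)] = body              # fast "conversation" lookups
    by_hour[(ts_ms // 3600000, ts_ms, msg_id)] = body  # fast "by hour" scans

store(1, "alice", "bob", 1354000000000, "hi")
store(2, "bob", "alice", 1354000000500, "hello")

# Both access patterns are now simple key/range lookups:
conv = [b for k, b in by_conversation.items() if k[0] == ("alice", "bob")]
hour = [b for k, b in by_hour.items() if k[0] == 1354000000000 // 3600000]
assert conv == hour == ["hi", "hello"]
```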
Your data size seems reasonable (but not overwhelming) for HBase. 100B
messages x 1K bytes per message on average comes out to 100TB. That, plus 3x
replication in HDFS, means you need roughly 300TB of space. If you have 13
nodes (taking out 2 for redundant master services) that's a requirement for
about 23TB of space per server. That's a lot, even these days. Did I get all
that math right?
On your question about multiple tables: a table in HBase is only a namespace
for rowkeys, and a container for a set of regions. If it's a homogenous data
set, there's no advantage to breaking the table into multiple tables; that's
what regions within the table are for.
Ian
ps - Please don't cross post to both dev@ and user@.
On Dec 5, 2012, at 8:51 PM, tgh wrote:
Hi
I am trying to use HBase to store 100 billion short text messages. Each
message has less than 1000 characters plus some other items; that is,
each message has fewer than 10 items.
The whole data is a stream spanning about one year, and I want to create
multiple tables to store it. I have two ideas: one is to store one hour of
data per table, which gives 365*24 tables for one year; the other is to
store one day of data per table, which gives 365 tables for one year.
I have about 15 computer nodes to handle the data, and I want to know
how to deal with it: the 365*24-table layout, the 365-table layout, or
some better idea.
I am really confused about HBase; it is powerful yet a bit complex for
me, isn't it?
Could you give me some advice on HBase data schema design and related issues?
Could you help me,
Thank you
---------------------------------
Tian Guanhua
Form::Sensible::Reflector - A base class for writing Form::Sensible reflectors.
    my $reflector = Form::Sensible::Reflector::SomeSubclass->new();
    my $generated_form = $reflector->reflect_from($data_source, $options);
A Reflector in Form::Sensible is a class that inspects a data source and creates a form based on what it finds there. In other words it creates a form that 'reflects' the data elements found in the data source.
A good example of this would be to create forms based on a DBIx::Class result_source (or table definition). Using the DBIC reflector, you could create a form for editing a user's profile information simply by passing the User result_source into the reflector.
This module is a base class for writing reflectors, meaning you do not use this class directly. Instead you use one of the subclasses that deal with your data source type.
    my $reflector = Form::Sensible::Reflector::SomeSubclass->new();
    my $generated_form = $reflector->reflect_from($data_source, $options);
By default, a Reflector will create a new form using the exact fields found within the data source. It is possible, however, to adjust this behavior using the $options hashref passed to the reflect_from call.

    my $generated_form = $reflector->reflect_from($data_source, {
        form => {
            name       => 'profile_form',
            validation => { code => sub { ... } }
        }
    });
If you want to adjust the parameters of the new form, you can provide a hashref in $options->{form} that will be passed to the Form::Sensible::Form->new() call.
    $reflector->reflect_from($data_source, { form => $my_existing_form_object });
If you do not want to create a new form, but instead want the fields appended to an existing form, you can provide an existing form object in the options hash ($options->{form}).

    $reflector->reflect_from($data_source, {
        additional_fields => [
            {
                field_class  => 'Text',
                name         => 'process_token',
                render_hints => { field_type => 'hidden' },
            },
            {
                field_class => 'Trigger',
                name        => 'submit'
            }
        ]
    });
This allows you to add fields to your form in addition to the ones provided by your data source. It also allows you to override your data source, as any additional field with the same name as a reflected field will take precedence over the reflected field. This is also a good way to automatically add triggers to your form, such as a 'submit' or 'save' button.
NOTE: The reflector base class used to add a submit button automatically. The additional_fields mechanism replaces that functionality. This means your reflector call needs to add the submit button, as shown above, or it needs to be added programmatically later.
    $reflector->reflect_from($data_source, {
        ## sort fields alphabetically
        fieldname_filter => sub {
            return sort(@_);
        },
    });
If you are unhappy with the order your fields are displaying in, you can adjust it by providing a subroutine in $options->{'fieldname_filter'}. The subroutine takes the list of fields as returned by get_fieldnames() and should return an array (not an array ref) of the fields in the new order. Note that you can also remove fields this way. Note also that no checking is done to verify that the fieldnames you return are valid; if you return any fields that were not in the original array, you are likely to cause an exception when the field definition is created.
    $reflector->reflect_from($data_source, {
        ## change 'logon' field to be 'username' in the form
        ## and other related adjustments.
        fieldname_map => {
            logon          => 'username',
            pass           => 'password',
            address        => 'email',
            home_num       => 'phone',
            parent_account => undef,
        },
    });
By default, the Form::Sensible field names are exactly the same as the data source's field names. If you would rather not expose your internal field names or have other reason to change them, you can provide a $options->{'fieldname_map'} hashref to change them on the fly. The fieldname_map is simply a mapping between the original field name and the Form::Sensible field name you would like it to use. If you use this method you must provide a mapping for ALL fields, as a missing field (or a field with an undef value) is treated as a request to remove the field from the form entirely.
Creating a new reflector class is extraordinarily simple. All you need to do is create a subclass of Form::Sensible::Reflector and then create two subroutines: get_fieldnames and get_field_definition.
As you might expect, get_fieldnames should return an array containing the names of the fields that are to be created. get_field_definition is then called for each field to be created and should return a hashref representing that field, suitable for passing to Form::Sensible::Field's create_from_flattened method.

Note that in both cases, the contents of $datasource are specific to your reflector subclass and are not inspected in any way by the base class.
    package My::Reflector;
    use Moose;
    use namespace::autoclean;
    extends 'Form::Sensible::Reflector';

    sub get_fieldnames {
        my ($self, $form, $datasource) = @_;
        my @fieldnames;
        foreach my $field ($datasource->the_way_to_get_all_your_fields()) {
            push @fieldnames, $field->name;
        }
        return @fieldnames;
    }

    sub get_field_definition {
        my ($self, $form, $datasource, $fieldname) = @_;
        my $field_definition = {
            name => $fieldname
        };
        ## inspect $datasource's $fieldname and add things to $field_definition
        return $field_definition;
    }
Note that while the $form that your field will likely be added to is available for inspection, your reflector should NOT make changes to the passed form. It is present for inspection purposes only. If your module DOES have a reason to look at $form, be aware that in some cases, such as when only the field definitions are requested, $form will be null. Your reflector should do the sensible thing in this case, namely, not crash.
If you need to customize the form object that your reflector will return, there are two methods that Form::Sensible::Reflector will look for. You only need to provide these in your subclass if you need to modify the form object itself. If not, the default behaviors will work fine. The first is create_form_object, which Form::Sensible::Reflector calls in order to instantiate a form object. It should return an instantiated Form::Sensible::Form object. The default create_form_object method simply passes the provided arguments to Form::Sensible::Form's new call:
    sub create_form_object {
        my ($self, $handle, $form_options) = @_;
        return Form::Sensible::Form->new($form_options);
    }
Note that this will NOT be called if the user provides a form object, so if special adjustments are absolutely required, you should consider making those changes using the finalize_form method described below.

The second method is finalize_form. This method is called after the form has been created and all the fields have been added to the form. This allows you to do any final form customization prior to the form actually being used. This is a good way to add whole-form validation, for example:
    sub finalize_form {
        my ($self, $form, $handle) = @_;
        return $form;
    }
Note that the finalize_form call must return a form object. Most of the time this will be the form object passed to the method call. The return value of finalize_form is what is returned to the user calling reflect_from.
This is a base class for writing reflectors for things like configuration files or, my favorite, a database schema.
The idea is to give you something that creates a form from some other source that already defines form-like properties, i.e. a database schema that already has all the properties and fields a form would need.
I personally hate dealing with forms that are longer than a search field or login form, so this really fits into my style.
Devin Austin <dhoss@cpan.org>
Jay Kuri <jayk@cpan.org>
Jay Kuri <jayk@cpan.org> for his awesome Form::Sensible library and helping me get this library in tune with it.
Form::Sensible
Form::Sensible Wiki
Form::Sensible Discussion
Investors eyeing a purchase of Stericycle Inc. (Symbol: SRCL) stock, but cautious about paying the going market price of $86.17/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular is the January 2019 put at the $60 strike, which has a bid at the time of this writing of $2.55. Collecting that bid as the premium represents a 4.2% return against the $60 commitment, or a 2.5% annualized rate of return (at Stock Options Channel we call this the YieldBoost).
Selling a put does not give an investor access to SRCL's upside potential the way owning shares would. Unless Stericycle Inc. sees its shares decline 30.4% and the contract is exercised (resulting in a cost basis of $57.45 per share before broker commissions, subtracting the $2.55 from $60), the only upside to the put seller is from collecting that premium for the 2.5% annualized rate of return.
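The article's numbers can be reproduced with a few lines of arithmetic; the exact expiration date used for annualizing below is an assumption based on the article date and the January 2019 expiry.

```python
from datetime import date

premium, strike, share_price = 2.55, 60.0, 86.17

static_return = premium / strike * 100                # ~4.25%, the quoted "4.2% return"
days = (date(2019, 1, 18) - date(2017, 4, 24)).days   # assumed expiration Friday and article date
annualized = static_return * 365 / days               # ~2.5% annualized

cost_basis = strike - premium                         # $57.45 if the put is exercised
decline_to_strike = (share_price - strike) / share_price * 100  # ~30.4%

assert abs(static_return - 4.25) < 0.01
assert 2.3 < annualized < 2.6
assert abs(cost_basis - 57.45) < 1e-9
assert abs(decline_to_strike - 30.4) < 0.1
```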
Below is a chart showing the trailing twelve month trading history for Stericycle Inc., and highlighting in green where the $60 strike is located relative to that history:
The chart above, and the stock's historical volatility, can be a helpful guide in combination with fundamental analysis to judge whether selling the January 2019 put at the $60 strike for the 2.5% annualized rate of return represents good reward for the risks. We calculate the trailing twelve month volatility for Stericycle Inc. (considering the last 251 trading day closing values as well as today's price of $86.17) to be 37%. For other put options contract ideas at the various available expirations, visit the SRCL options page.
I am building a recommendation system inspired by YouTube’s “Deep Neural Networks for YouTube Recommendations” paper. I will need to execute recommendations in real time so I structured it with low latency predictions in mind. The structure is the following
|User Features|               |Item Features|
       |                             |
|Fully Connected NN_user|   |Fully Connected NN_item|
            \                       /
       |Concatenated output of both NNs|
                      |
            |Fully Connected NN|
                      |
                  |output|
This is all one network built using two sub-networks.
The reason I did it this way is to create rich embeddings for the user and item based on their features which I could then store. At prediction time, I can retrieve the stored embeddings, then only the top NN needs to be executed and is therefore very fast. In testing, the model gives good results.
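A toy sketch of that serving pattern (plain Python standing in for the PyTorch modules; the "towers" are trivial placeholder functions just to show the data flow): tower outputs are precomputed and cached, so a request only runs the small top network.

```python
user_cache = {}  # user_id -> precomputed user embedding
item_cache = {}  # item_id -> precomputed item embedding

def user_tower(features):   # stand-in for the user sub-network
    return [sum(features), max(features)]

def item_tower(features):   # stand-in for the item sub-network
    return [sum(features), min(features)]

def top_network(x):         # stand-in for the top fully connected NN
    return sum(x)

def precompute(user_features, item_features):
    for uid, f in user_features.items():
        user_cache[uid] = user_tower(f)
    for iid, f in item_features.items():
        item_cache[iid] = item_tower(f)

def score(user_id, item_id):
    # At request time only the cheap top network runs; the towers are skipped.
    x = user_cache[user_id] + item_cache[item_id]  # list concat mimics torch.cat
    return top_network(x)

precompute({"u1": [1, 2]}, {"i1": [3, 4]})
assert score("u1", "i1") == 15  # sum([3, 2, 7, 3])
```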
My question is about decreasing the time it takes to train this model. Is there a way for PyTorch to execute the sub-networks in parallel? Using DataParallel splits the data and trains it in parallel, but I think that the two sub-NNs are trained one after the other, even though they don't need to be. The forward section of the model has the following structure:
def sub_network(features, **params):
    ...

def forward(user_features, item_features):
    user_embedding = sub_network(user_features)
    item_embedding = sub_network(item_features)
    x = torch.cat([user_embedding, item_embedding], 1)
    ...
What is a good strategy for parallelizing the execution of the sub-network functions? | https://discuss.pytorch.org/t/optimize-training-speed-for-recommendation-system-using-subnetworks/50384 | CC-MAIN-2019-30 | refinedweb | 231 | 53.31 |
Before discussing the new methods introduced for Nest-Based Access Control, we should first know what Nest-Based Access Control is. For that, you can refer to this blog.
From the above blog, we can see that a new access control context, the nest, is added in Java 11; a nest has a NestHost and NestMembers. To check the NestHost and NestMembers of a class, new methods have been added to java.lang.Class. We will discuss those methods in this blog.
Let’s have a look at those methods in java.lang.Class:
- Class getNestHost()
- Class[] getNestMembers()
- boolean isNestmateOf(Class)
Let’s check how they are useful to us one by one.
For reference we can use this example:

public class Sample {

    public class Nest1 {
    }

    public class Nest2 {

        public class Nested1ClassA {
        }

        public class Nested1ClassB {
        }
    }
}
Class getNestHost()
As its name suggests, it is used to find the host of the nest. We can see the NestHost in the compiled class file as well.
private static void getNestHost() {
    System.out.println("Nest Host:");
    System.out.println(((Class<?>) Sample.class).getNestHost().getSimpleName());
}

public static void main(String[] args) {
    getNestHost();
}
Output:
Nest Host:
Sample
Class[] getNestMembers()
As its name suggests, it is used to get the nest members. Every nested class is a member of its nest host's nest, so this method helps to check the nest members of any class.
private static void getNestMembers() {
    Class<?>[] nestMembers = Sample.class.getNestMembers();
    System.out.println("Nest Members:\n" + Arrays.stream(nestMembers).map(Class::getSimpleName)
            .collect(Collectors.joining("\n")));
}

public static void main(String[] args) {
    getNestMembers();
}
Output:
Nest Members:
Sample
Nest2
Nested1ClassB
Nested1ClassA
Nest1
boolean isNestmateOf(Class)
This method returns a boolean value. It can be used to check whether two classes are members of the same nest, i.e. whether the nest host is the same for both. It returns true if they are nestmates and false if they are not.
private static void getIsNestmateOf(Class<?> cls1, Class<?> cls2) {
    System.out.printf("%s isNestmateOf %s = %s%n",
            cls1.getSimpleName(), cls2.getSimpleName(), cls1.isNestmateOf(cls2));
}

public static void main(String[] args) {
    getIsNestmateOf(Sample.class, Nest1.class);
    getIsNestmateOf(Nest1.class, Nest2.Nested1ClassA.class);
}
Output:
Sample isNestmateOf Nest1 = true
Nest1 isNestmateOf Nested1ClassA = true
Hope this blog will help you.
Happy Coding !!!
{-# LANGUAGE RecursiveDo #-}
{-# LANGUAGE Rank2Types #-}

-- | Convenience functions on top of "Yogurt.Mud".
module Network.Yogurt.Utils (
    -- * Hook derivatives
    mkTrigger, mkTriggerOnce, triggerOneOf, mkAlias, mkArgAlias, mkCommand,
    -- * Timers
    Timer, Interval, mkTimer, mkTimerOnce, rmTimer, isTimerActive,
    -- * Sending messages
    receive, sendln, echo, echoln, echorln, bell,
    -- * Logging
    Logger, startLogging, stopLogging,
    -- * Triggering multiple hooks
    -- | By default, when a message causes a hook to fire, the message is stopped and
    -- discarded unless the hook decides otherwise. These functions provide ways to give
    -- other hooks with lower priorities a chance to fire as well.
    matchMore, matchMoreOn, matchMoreOn'
  ) where

import Network.Yogurt.Mud
import Control.Concurrent
import Control.Monad
import Data.Time.Format (formatTime)
import System.Locale (defaultTimeLocale)
import Data.Time.LocalTime (getZonedTime)

-- Hook

-- | For each pair @(pattern, action)@ a hook is installed. As soon as one of the hooks
-- fires, the hooks are removed and the corresponding action is executed.
triggerOneOf :: [(Pattern, Mud ())] -> Mud ()
triggerOneOf pairs = mdo
  hs <- forM pairs $ \(pat, act) -> do
    mkTrigger pat (forM hs rmHook >> act)
  return ()

-- | The command's arguments are available as 'group' 1.
mkCommand :: String -> Mud a -> Mud Hook
mkCommand pat = mkHook Remote ("^" ++ pat ++ "($| .*$)")

-- Section: Timers.

-- | The abstract Timer.

withNewline :: String -> String
withNewline = (++ "\r\n")

-- | Sends a message to the terminal, triggering hooks.
receive :: String -> Mud ()
receive = trigger Local

-- | Sends a message appended with a newline character to the MUD, triggering hooks.
sendln :: String -> Mud ()
sendln = trigger Remote . withNewline

-- | Sends a message to the terminal, without triggering hooks.
echo :: String -> Mud ()
echo = io Local

-- | Sends a message appended with a newline character to the terminal, without triggering hooks.
echoln :: String -> Mud ()
echoln = echo . withNewline

-- | Sends a message appended with a newline character to the MUD, without triggering hooks.
echorln :: String -> Mud ()
echorln = io Remote . withNewline

-- |
  suffix <- liftIO $ fmap (formatTime defaultTimeLocale "-%Y%m%d-%H%M.log") getZonedTime
  let filename = name ++ suffix
  let record dest = mkPrioHook 100 dest "^" $ do
        line <- matchedLine
        lift. If no other hooks match,
-- the message is sent on to its destination.
In our last post about REST APIs, we learned the basics of how REST APIs function. In this post, we will see how we can develop our own REST APIs. We will use Python and Flask for that. If you are new to Python, we have you covered with our Python: Learning Resources and Guidelines post.
Python / Flask code is pretty simple and easy to read / understand. So if you just want to grasp the best practices of REST API design but lack Python skills, don’t worry, you will understand most of it. However, I would recommend you try out the code hands-on. Writing code by hand is a very effective learning method. We learn more by doing than we learn by reading or watching.
Installing Flask and Flask-RESTful
We will be using the Flask framework along with Flask-RESTful. Flask-RESTful is an excellent package that makes building REST APIs with Flask both easy and pleasant. Before we can start building our app, we first need to install these packages.
pip install flask
pip install flask-restful
Once we have the necessary packages installed, we can start thinking about our API design.
RESTful Mailing List
You see, I just recently started this Polyglot.Ninja() website and I am getting some readers to my site. Some of my readers have shown very keen interest in receiving regular updates from this blog. To keep them posted, I have been thinking about building a mailing list where people can subscribe with their email address. These addresses get stored in a database and then, when I have new posts to share, I email them. Can we build this mailing list “service” as a REST API?
The way I imagine it, we will have a “subscribers” collection with many subscribers. Each subscriber will provide us with their full name and email address. We should be able to add new subscribers, update them, delete them, list them and get individual data. Sounds simple? Let’s do this!
Choosing a sensible URL
We have decided to build our awesome mailing list REST API. For development and testing purposes, we will run the app on the local machine. So the base URL would be http://127.0.0.1:5000. This part will change when we deploy the API on a production server, so we probably don’t need to worry about it.
However, for an API, the url path should make sense. It should clearly state its intent. A good choice would be something like /api/ as the root url of the API. Then we can add the resources; for subscribers, it can be /api/subscribers. Please note that it's acceptable to have the resource part either singular (ie. /api/subscriber) or plural (ie. /api/subscribers). However, most of the people I have talked to and the articles I have read prefer the plural form.
API Versioning: Header vs URL
We need to think about the future of the API beforehand. This is our first iteration. In the future, we might want to introduce newer changes. Some of those changes can be breaking changes. If people are still using some of the older features which you can’t break while pushing new changes, it’s time you thought about versioning your API. It is always best practice to version your API from the beginning.

The first version of the api can be called v1. Now there are two common methods of versioning APIs: 1) passing a header that specifies the desired version of the API, or 2) putting the version info directly in the URL. There are arguments and counter arguments for both approaches. However, versioning using the url is easier and more often seen in common public APIs.

So we accommodate the version info in our url and make it /api/v1/subscribers. Like discussed in our previous REST article, we will have two types of resources here: “subscriber collection” (ie. /subscribers) and “individual subscriber” elements (ie. /subscribers/17). With the design decided upon and a bigger picture in our head, let’s get to writing some codes.
RESTful Hello World
Before we start writing our actual logic, let’s first get a hello world app running. This will make sure that we have got everything set up properly. If we head over to the Flask-RESTful Quickstart page, we can easily obtain a hello world code sample from there.
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app)


class HelloWorld(Resource):
    def get(self):
        return {'hello': 'world'}


api.add_resource(HelloWorld, '/')

if __name__ == '__main__':
    app.run(debug=True)
Let’s save this code in a file named main.py and run it like this:
python main.py
If the code runs successfully, our app will launch a web server at http://127.0.0.1:5000. Let’s break down the code a bit:
- We import the necessary modules (Flask and Flask-RESTful stuff).
- Then we create a new Flask app and wrap it in Api.
- Afterwards, we declare our HelloWorld resource, which extends Resource.
- On our resource, we define what the get HTTP verb will do.
- Add the resource to our API.
- Finally run the app.
What happens here: when we write our Resources, Flask-RESTful generates the routes and the view handlers necessary to represent the resource over RESTful HTTP. Now let’s see, if we visit the url, do we get the message we set?
If we visit the url, we would see the expected response:
{ "hello": "world" }
Trying out REST APIs
While we develop our api, it is essential that we can try out / test the API to make sure it’s working as expected. We need a way to call our api and inspect the output. If you’re a command line ninja, you would probably love to use
curl. Try this on your terminal:
➜ curl -X GET { "hello": "world" } ➜
This would send a GET request to the URL and curl would print out the response on the terminal. It is a very versatile tool and can do a lot of amazing things. If you would like to use curl on a regular basis, you may want to dive deeper into its options, features and use cases.
However, if you like command line but want a friendlier and easier command line tool, definitely look at httpie.
Now what if you’re not a CLI person? And we can agree that sometimes GUI can be much more productive to use. Don’t worry, Postman is a great app!
If you are developing and testing a REST API, Postman is a must have app!
Back to Business
We now have a basic skeleton ready and we know how to test our API. Let’s start writing our mailing list logic. Let’s first layout our resources with some sample data. For this example, we shall not bother about persisting the data to some database. We will store the data in memory. Let’s use a list as our subscriber data source for now.
from flask import Flask
from flask_restful import Resource, Api

app = Flask(__name__)
api = Api(app, prefix="/api/v1")

users = [
    {"email": "[email protected]", "name": "Masnun", "id": 1}
]


class SubscriberCollection(Resource):
    def get(self):
        return {"msg": "All subscribers"}

    def post(self):
        return {"msg": "We will create new subscribers here"}


class Subscriber(Resource):
    def get(self, id):
        return {"msg": "Details about subscriber {}".format(id)}

    def put(self, id):
        return {"msg": "Update subscriber {}".format(id)}

    def delete(self, id):
        return {"msg": "Delete subscriber {}".format(id)}


api.add_resource(SubscriberCollection, '/subscribers')
api.add_resource(Subscriber, '/subscribers/<int:id>')

if __name__ == '__main__':
    app.run(debug=True)
What changes are notable here?
- Note we added a prefix to the Api for versioning reasons. All our urls will be prefixed by /api/v1.
- We created a list named users to store the subscribers.
- We created two resources: SubscriberCollection and Subscriber.
- Defined the relevant http method handlers. For now the response just describes the intended purpose of that method.
- We add both resources to our api. Note how we added the id parameter to the url. This id is available to all the methods defined on Subscriber.
Fire up the local development server and try out the API. Works fine? Let’s move on!
Parsing Request Data
We have to accept, validate and process user data. In our case, that would be the subscriber information. Each subscriber would have an email address, a full name and an ID. If we used a database, this ID would have been auto generated. Since we are not using a database, we would accept this as part of the incoming request.
For processing request data, the RequestParser can be very helpful. We will use it in our POST calls to /api/v1/subscribers to validate incoming data and store the subscriber if the data is valid. Here’s the updated code so far:
from flask import Flask
from flask_restful import Resource, Api
from flask_restful.reqparse import RequestParser

app = Flask(__name__)
api = Api(app, prefix="/api/v1")

users = [
    {"email": "[email protected]", "name": "Masnun", "id": 1}
]

subscriber_request_parser = RequestParser(bundle_errors=True)
subscriber_request_parser.add_argument("name", type=str, required=True, help="Name has to be valid string")
subscriber_request_parser.add_argument("email", required=True)
subscriber_request_parser.add_argument("id", type=int, required=True, help="Please enter valid integer as ID")


class SubscriberCollection(Resource):
    def get(self):
        return {"msg": "All subscribers"}

    def post(self):
        args = subscriber_request_parser.parse_args()
        return {"msg": "Subscriber added", "subscriber_data": args}


class Subscriber(Resource):
    def get(self, id):
        return {"msg": "Details about subscriber {}".format(id)}

    def put(self, id):
        return {"msg": "Update subscriber {}".format(id)}

    def delete(self, id):
        return {"msg": "Delete subscriber {}".format(id)}


api.add_resource(SubscriberCollection, '/subscribers')
api.add_resource(Subscriber, '/subscribers/<int:id>')

if __name__ == '__main__':
    app.run(debug=True)
Here we have made two key changes:
- We created a new instance of RequestParser and added arguments so it knows which fields to accept and how to validate those.
- We added the request parsing code in the post method. If the request is valid, it will return the validated data. If the data is not valid, we don't have to worry about it; the error message will be sent to the user.
Testing the request parser
If we try to pass invalid data, we will get error messages. For example, if we request without any data, we will get something like this:
{ "message": { "email": "Missing required parameter in the JSON body or the post body or the query string", "id": "Please enter valid integer as ID", "name": "Name has to be valid string" } }
But if we pass valid data, everything works fine. Here's an example of valid data:

    {"name": "John Smith", "email": "[email protected]", "id": 3}

This will get us the following response:
{ "msg": "Subscriber added", "subscriber_data": { "email": "[email protected]", "id": 3, "name": "John Smith" } }
Cool, now we know how to validate user data 🙂 Please remember – never trust user input. Always validate and sanitize user data to avoid security risks.
Next, we need to implement the user level updates.
Subscriber Views
We went ahead and completed the code for the rest of the methods. The updated code now looks like this:
from flask import Flask
from flask_restful import Resource, Api
from flask_restful.reqparse import RequestParser

app = Flask(__name__)
api = Api(app, prefix="/api/v1")

users = [
    {"email": "[email protected]", "name": "Masnun", "id": 1}
]


def get_user_by_id(user_id):
    for x in users:
        if x.get("id") == int(user_id):
            return x


subscriber_request_parser = RequestParser(bundle_errors=True)
subscriber_request_parser.add_argument("name", type=str, required=True, help="Name has to be valid string")
subscriber_request_parser.add_argument("email", required=True)
subscriber_request_parser.add_argument("id", type=int, required=True, help="Please enter valid integer as ID")


class SubscriberCollection(Resource):
    def get(self):
        return users

    def post(self):
        args = subscriber_request_parser.parse_args()
        users.append(args)
        return {"msg": "Subscriber added", "subscriber_data": args}


class Subscriber(Resource):
    def get(self, id):
        user = get_user_by_id(id)
        if not user:
            return {"error": "User not found"}
        return user

    def put(self, id):
        args = subscriber_request_parser.parse_args()
        user = get_user_by_id(id)
        if user:
            users.remove(user)
        users.append(args)
        return args

    def delete(self, id):
        user = get_user_by_id(id)
        if user:
            users.remove(user)
        return {"message": "Deleted"}


api.add_resource(SubscriberCollection, '/subscribers')
api.add_resource(Subscriber, '/subscribers/<int:id>')

if __name__ == '__main__':
    app.run(debug=True)
What did we do?
- We added a helper function to find a user in the list by its id
- The update view works – we can update the user data. In our case we're deleting the old data and adding the new data. In real life, we would use an UPDATE on the database.
- Delete method works fine!
Feel free to go ahead and test the endpoints!
HTTP Status Codes
Our mailing list is functional now. It works! We have made good progress so far. But there's something very important that we haven't done yet: our API doesn't use proper HTTP status codes. When we send a response back to the client, we should also give it a status code. This code helps the client better interpret the results.
Have you ever visited a website and seen a "404 Not Found" error? Well, 404 is the status code; it means the document / resource you were looking for is not available. Seen any "500 Internal Server Error" lately? Now you know what that 500 means.
We can see the complete list of HTTP status codes online.
Also depending on whether you’re a cat person or a dog enthusiast, these websites can explain things better:
So let's fix our code and start sending appropriate codes. We can return an optional status code from our views. So when we add a new subscriber, we can send 201 Created like this:
return {"msg": "Subscriber added", "subscriber_data": args}, 201
And when we delete a user, we can send 204 No Content:
return None, 204
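Python's standard library enumerates these status codes, which is a handy way to avoid magic numbers in view code; a small illustration:

```python
from http import HTTPStatus

# HTTPStatus members are IntEnums: they compare equal to the numeric code
# and carry the standard reason phrase.
print(int(HTTPStatus.CREATED))      # 201
print(int(HTTPStatus.NO_CONTENT))   # 204
print(HTTPStatus.NOT_FOUND.phrase)  # Not Found
```

So a view could return, e.g., `{"msg": "Subscriber added"}, HTTPStatus.CREATED` instead of a bare 201.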
What’s next?
We have made decent progress today. We have designed and implemented a very basic API. We chose a sensible URL, considered API versioning, did input validation, and sent appropriate HTTP status codes. We have done well. But what we have seen here is a very simple implementation, and there is a lot of scope for improvement. For example, our API is still open to the public; there is no authentication enabled, so anyone with malicious intentions can flood / spam our mailing list database. We need to secure the API in that regard. We also don't have a home page that uses HATEOAS to guide the clients. We don't yet have documentation. Always remember: documentation is very important. We developers often don't feel like writing documentation, but well-written documentation helps the consumers of your API consume it better and with ease. So do provide excellent docs!
I don't know when, but in our next post on REST APIs we shall explore more of the wonderful world of API development. And maybe we shall also talk about some microservices? If you would like to know when I post that content, do subscribe to the mailing list. You can find a subscription form on the sidebar.
And if you liked the post, do share with your friends 🙂
20 thoughts on “REST API Best Practices: Python & Flask Tutorial”
Great article.
For me, this is a really useful template!
I like your style: you put a lot of interesting links to expand the subject! Nice.
Thanks
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"email": "[email protected]", "name": "John Smith", "id": 3}'
Just what I wanted to know! Thanks a lot! 😀
Nice tutorial! I almost wrote my first Flask RESTful API; I'm looking forward to more insight. Good job, Polyglot!
Thank you! I’m happy that it helped!
Nice article, it really helped scaffold my API, but the separation between the collection and the model made me uncomfortable. Isn't there a way to join the two in only one resource? Is that a bad thing?
I am not familiar with any built-in way to do that. You could create your own "router" and "viewset" like Django REST Framework does.
Really nice tutorial. Got a great grasp of REST and the benefits of using the flask_restful library over just base Flask. I love that the flask_restful library simplifies not having to know how to code all the routes your self. This simplifies my ability to pass this code on to others to expand functionality without having to teach them how to provide HTTP routes. I am looking forward to going through your JWT authentication tutorial next.
I just started with Flask. This is useful, maybe you should submit this tutorial on Hackr.io. I’ve been using that website for quite a while in order to find recommended programming resources.
Really Nice Tutorial. Good job Polyglot!
Hello, this is very helpful. I finished building a coding interview API by following this tutorial's guidance.
Note: This document is for an older version of GRASS GIS that will be discontinued soon. You should upgrade, and read the current manual page.
Information about the GRASS GIS core GIS library can be printed with the -r flag.
Version numbers of additional libraries like PROJ, GDAL/OGR, or GEOS are printed with the -e flag.
See also the version() function from the Python Scripting Library:
import grass.script as gcore
print(gcore.version())
g.version
GRASS 7.8.dev (2019)

g.version -r
GRASS 7.8.dev (2019)
libgis Revision
libgis Date

g.version -rge
version=7.8.dev
date=2019
revision=d4879d401
build_date=2019-08-04
build_platform=x86_64-pc-linux-gnu
build_off_t_size=8
libgis_revision=060163d17
libgis_date="2017-04-04 09:43:02 +0200 (Tue, 04 Apr 2017) "
proj4=5.2.0
gdal=2.3.2
geos=3.7.1
sqlite=3.26.0
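The -rge output is shell-style key=value pairs (note the quoted libgis_date), so it can be parsed with the standard shlex module. A sketch, assuming the output has been captured from g.version -rge:

```python
import shlex

output = ('version=7.8.dev date=2019 revision=d4879d401 '
          'build_platform=x86_64-pc-linux-gnu proj4=5.2.0 gdal=2.3.2 '
          'libgis_date="2017-04-04 09:43:02 +0200 (Tue, 04 Apr 2017) "')

# shlex honors the shell quoting, so the quoted date stays one token
info = dict(token.split("=", 1) for token in shlex.split(output))
print(info["version"])             # 7.8.dev
print(info["libgis_date"].strip())
```

This mirrors what the Python Scripting Library's version() call returns as a dictionary.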
Available at: g.version source code (history)
Latest change: Fri Sep 25 15:56:38 2020 in commit: 449cc678b8c2bc7f36898c7a1cb0c41ff064d804
© 2003-2022 GRASS Development Team, GRASS GIS 7.8.8dev Reference Manual | https://grass.osgeo.org/grass78/manuals/g.version.html | CC-MAIN-2022-40 | refinedweb | 209 | 61.83 |
Here is a list of what I had gotten wrong on an exam:
boolean a = false, b = true, c = true, d = false, e = true;
1. System.out.println(a != c);
I had converted it to a == c and had answered FALSE. Checking the answer again: since a = false and c = true, the result of a == c would be FALSE. FALSE was my answer.
But I got the answer wrong. Did I read too much into it by converting the not-equals into == when I should have left it alone? Left alone, a != c is TRUE, because a (false) doesn't equal c (true).
2. This was a question that asked for showing the EXACT output that will be created by execution of the following program.
Code :
package javaapplication1;

public class JavaApplication1 {
    public static void main(String[] args) {
        int manny = 40, moe = 23, jack = 26;
        System.out.printf("%3d%3d%3d\n", manny, moe, ++jack);
        if (manny > moe)
            if (moe > jack) {
                moe = 40;
                // ...
            }
        if (manny > moe)
            if (moe > jack) {
                moe = 31;
                // ...
            }
    }
}
Here is how I tracked my variables; I would cross them out as I read through the program, and the bottom-most value would be the newest:
manny   moe   jack
   40    23     26
    0    22     27
                28
                29
My output looked like this (everything had to be exact; spacing is indicated by an underscore _):
OUTPUT
_40_23_27
_28_23
__0_22_29
Please help me learn what I did wrong. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/36163-please-see-multiple-questions-i-got-wrong-exam-what-i-did-wrong-printingthethread.html | CC-MAIN-2015-06 | refinedweb | 245 | 73.41 |
Odoo Help
How to use dynamic values in domain filter?
Hi,
The following is my line of code, and it works:
'user_id': fields.many2one('res.users', 'Responsible', track_visibility='onchange', domain="[('emp_department_id', '=', 10)]"),
Now I want to replace that employee department id with the value returned by a function,
say something similar to this
def test(self):
    # returns the department id of the logged-in user
    ...

'user_id': fields.many2one('res.users', 'Responsible', track_visibility='onchange',
                           domain="[('emp_department_id', '=', self.test())]"),
I know the above statement is not valid, since the whole thing is a dict and domain is one of the keys in the dict whose value is a string. I want this concept to be done somehow.
I am even OK with some crooked ideas. Thanks for your time.
Hi,
The below is working for me to give dynamic values for domain filters for a particular field.
I've placed the code below in the fields_view_get() method:

def fields_view_get(self, cr, uid, view_id=None, view_type='form',
                    context=None, toolbar=False, submenu=False):
    # my_model stands for your model class; etree is lxml.etree
    res = super(my_model, self).fields_view_get(cr, uid, view_id, view_type,
                                                context, toolbar, submenu)
    eview = etree.fromstring(res['arch'])
    login_user_dpt_id = self.pool.get('res.users').browse(
        cr, SUPERUSER_ID, uid, context=context).emp_department_id
    for node in eview.xpath("//field[@name='user_id']"):
        if login_user_dpt_id:
            user_filter = "[('emp_department_id', '=', " + str(login_user_dpt_id.id) + ")]"
            node.set('domain', user_filter)
    res['arch'] = etree.tostring(eview)
    return res
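A side note on the string concatenation used to build user_filter: the filter can instead be built as a Python list and serialized, which avoids quoting mistakes. A sketch (emp_department_id is the custom field from the question; the id value is illustrative):

```python
department_id = 10  # e.g. the logged-in user's department id
domain = [('emp_department_id', '=', department_id)]

# repr() of the list gives exactly the string form used in the view attribute
user_filter = repr(domain)
print(user_filter)  # [('emp_department_id', '=', 10)]
```

The serialized string can then be set on the node with node.set('domain', user_filter), as above.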
Thanks for all your replies and time.
vivak, i had a long path to git to this and now my problem start from here, what if 'user_filter' depends on other field value in the same form (eg. 'status')?, how i can get it?
In python, you can access all the fields' values obay. And so u use them in the place where i have login_user_dpt_id and set something like this st = self.browse(cr,uid,id,context) and then your operation and finally the conditino like this user_filter = "[('emp_department_id', '='," + str(your final constructed condition) + " )]". Please check and clarify. Sorry for late reply as i didn't notice it.
Hi Vivek, I am from Madurai. I also have the same problem; could you explain briefly? I tried the code you posted but nothing happened. Please help.
Hey vivek
just do like below
remove the domain from the field below (in the .py), like this:

'user_id': fields.many2one('res.users', 'Responsible'),
For a dynamic domain, you should have another field. In the view, the code will be like this:
<field name="myfirst_field" on_change="get_domain_useer_id()" /> <field name="user_id" />
After this you should add a function to your model object:
def get_domain_useer_id(self, cr, uid, ids, context=None):
    mach = []  # unused in this example
    lids = self.pool.get('res.users').search(cr, uid, [('active', '=', True)])
    return {'domain': {'user_id': [('id', 'in', lids)]}}
This is just an example that shows all active users; for your dynamic domain you can adapt the code as you want.
good luck
Thanks
Thanks, a good idea Sandeep, but what happens if the user has not changed the first field? In my case I do not have any first field to select: I just have the user field for selection, and the values must be filtered even before doing anything in the form. Thanks!!
If you don't have a first field, then where is the need for a dynamic domain? I think then there is no need for a dynamic domain.
:) Yeah, I don't need a dynamic domain; I have to use dynamic values in the domain filter. As I've narrated in the question, I want to take the login user's department id and place it in the domain filter. That should apply when loading the form itself. I am open to clarification.
We can get the login user directly, you know: just put uid in the domain. But without a method we can't get his department id. Give me time... what can I do for you... :)
but in domain we can't just say: domain = "[('emp_department_id', '=', uid.department_id)]" ?!
Can we add date range fields and print its value in the reports too? I added two fields date_from and date_to and tried to print its value. Please help if someone knows. Here is the link to my question
why did u add mach=[] in function get_domain_useer_id? Is there any specific reason for that?
You can redefine the fields_view_get method, build your own domain, and set it on the view by modifying the view architecture with lxml.
Regards,
Thanks for your reply, Cristian. Is it possible to give some reference links?
Hi, you can read the doc here: and some example in code:
You can use on_change. Write an onchange on emp_department_id, and in that onchange method you can return a domain for your user_id field.
Have a look at the example of onchange which returns value, domain and warning.
Thanks Sudhir. It's a many2one field (res.users) I've used in the Project -> Task form. All the users are listed there, and I want to show users whose department is equal to the login person's department. Hope you understand exactly what I want and can guide me to do that. Expecting your reply. Thanks
Up arrow button
How do I change the label of a JButton to show an up arrow?
Hi. Why aren't the images working for me? I tried to specify an image of my choice and it wasn't working. What is the problem?
JSlider: disabling sliding through arrow keys
I want the JSlider not to respond to the arrow keys.
Swing Button Example
Hi, how do I create an example of a Swing button in Java? Thanks
Hi, check the example at How to Create a Button on a Frame. Thanks
How to Hide a Button using Java Swing
Hi, I have just begun to learn Java programming. How do I hide a button in Java? Please suggest or provide an online example reference.
Regards
Hi, in Java Swing you can hide a button by calling its setVisible(false) method.
Java Swing button click event
public void doClick() programmatically performs a click on the button.
Java Swing Tutorials
Add button to the frame - Swing AWT
I want to add a button at the bottom of the frame...

    class ButtonFrame extends JFrame implements ActionListener {
        JButton button = new JButton("Button");
        // ...
    }
How to set an image on a button using Swing? - Swing AWT
How do I set an image on a button using Swing?
Hi friend,

    import java.awt.*;

Thanks
These tutorials illustrate how to change the label of a button in Java Swing, note that a radio button is like a check box, and show how to create a JSpinner component (the JSpinner provides up and down arrow buttons).
Setting an icon on the button in Java Swing
This program sets the icon on the button in Java Swing. Following is the output of the program.
Code description: setIcon(...) sets the icon on the button.
Swing - Applet
Hello, I am creating a Swing GUI applet which is trying to output all the numbers between given numbers and add them up. For example
Search city name and INSEE code by zip code.
Based on the official postal codes database from La Poste and fixed by Christian Quest.
In the browser (the _html variant):

import 'package:code_postaux/code_postaux_html.dart';

main() async {
  List<City> cities = await find("31000"); // a list of cities corresponding to zip code 31000
}

On the server or command line (the _io variant):

import 'package:code_postaux/code_postaux_io.dart';

main() async {
  List<City> cities = await find("31000"); // a list of cities corresponding to zip code 31000
}
Add this to your package's pubspec.yaml file:
dependencies: code_postaux: "^1.0.2"
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:code_postaux/code_postaux_html.dart';
import 'package:code_postaux/code_postaux_io.dart';
double cannot be dereferenced

public class Test {
    public static void main(String[] args) {
        int x = 5;
        int y = 1;
        System.out.println(x.compareTo(y)); // simple program, won't compile
    }
}

compareTo() is a method of the Integer class. A primitive int has no methods, so x.compareTo(y) fails to compile with "int cannot be dereferenced". You don't need the compareTo() method here: just use the normal <, >, and == operators to compare ints. (To be honest, I think it's good that they are looking at compareTo so early.)

If you do want the method, wrap the primitive in its object type first; Java can also autobox/unbox primitives to/from their object types automatically. Please get into the habit of using Double.valueOf(...) rather than the constructor, as this allows Java to reuse existing objects; such caching is actually implemented for integers and longs between -128 and 127. Like all compareTo() methods, it returns a negative integer, zero, or a positive integer as the first value is less than, equal to, or greater than the second.

A related pitfall is integer division. You will get the int value of the division, so if Mins = 43, then double hours = Mins / 60; evaluates Mins / 60 as an int first and yields 0.
Work item: "Support for dealing with class files generated by external Java compilers like javac and jikes from an Ant script."
Here's the crux of one problem (from WSAD, via John W.):
In some environments the client has limited flexibility in how they structure their Java projects. Sources must go here; resource files here; mixed resource and class files here; etc.
The Java builder was designed under the assumption that it "owns" the output folder. The work item, therefore, is to change the Java builder to give clients and users more flexibility as to where they place their source, resource, library class, and generated class files.
The following proposal involves:
When the output folder does not coincide with a source folder, the Java builder owns the output folder and everything in it. The output folder is taken to contain only files that are "expendable" - either generated class files or copies of files that live in a source or library folder.
Users or clients that add, remove, or replace files in the output folder can expect unpredictable results. If the user or client does tamper with files in the output folder, the Java builder does not attempt to repair the damage. It is the responsibility of the user or client to clean up their mess (by manually requesting a full build).
When the output folder coincides with a source folder, the Java builder only owns the class files in the output folder. Only the class files in the output folder are considered expendable. Users or clients that add, remove, or replace class files in the output folder can expect unpredictable results.
(N.B. This is a restatement of the current behavior. [Verify that damage to output folder is not triggering builds.])
Output folder resource file consolidation The Java builder provides resource file consolidation, for resource files stored in source folders.
When the output folder does not coincide with a source folder, the Java builder can also be used to consolidate resources files needed at runtime in the output folder. In some cases, this consolidation may be preferred over the alternative of including additional runtime classpath entries for source folders containing resources files.
By flagging a source folder as copied, all non-source, non-class files become eligible to be copied to the output folder. When there are multiple entries on the build classpath specifying copying, eligible files for earlier classpath entries take precedence over ones for later entries.
When the output folder coincides with a source folder, the Java builder cannot perform any resource file consolidation (resource files in the output folder belong to the user, not to the Java builder). It is considered an error to specify copying from other source folders.
(N.B. This is different from current behavior in a couple of regards:
The Java builder can also be used to consolidate class files in the output folder, regardless of whether the output folder coincides with a source folder. In some cases, this consolidation may be preferred over the alternative of including additional runtime classpath entries for library folders. Note, however, that this works only when the library folder contains no important resource files needed at runtime (resource files are not copied from library folders, because resource files in the output folder belong to the user rather than to the Java builder).
By flagging a library folder as copied, all class files become eligible to be copied to the output folder. Class files generated in the output folder always take precedence over class files copied from library folders.
(N.B. This is new behavior. Files are not copied from library folders by the current Java builder.)
Output folder invariant:
A full build must achieve the output folder invariant from arbitrary initial conditions. When output and source folders do not coincide, a full build should scrub all existing files from the output folder, regardless of how they got there. When output and source folders do coincide, a full build should scrub all existing class files from the output folder, but leave all other files alone.
Assuming that a user or client is only adding, removing, or changing files in source or library folders, but not tampering with any of the files in the output folder that the Java builder owns, then an incremental build should re-achieve the output folder invariant.
Algorithm:
Full build:
Scrub all class files from the output folder.
if performing resource consolidation (requires output folder != source folder)
Scrub all resource files from the output folder.
Compile all source files into class files in the output folder.
Infill/copy eligible class files from library folders into the output folder (no overwriting).
if performing resource consolidation (requires output folder != source folder)
Infill/copy eligible resource files from source folders into the output folder.
Incremental build:
(phase 1) process changes to library folders:
for add or remove or change file p/x.class in one of the library folders
if p/x.class in the output folder was not generated by compiler then
scrub p/x.class from the output folder
remember to compile source files that depend on p/x
remember to infill p/x.class
(phase 2) process changes to source folders:
for add p/y.java in one of the source folders
remember to compile source file at path p/y.java
for remove or change p/y.java in one of the source folders
scrub any class file p/x.class from the output folder that some p/y.java compiled into last time
remember to infill p/x.class
remember to compile source file at path p/y.java
for add or remove or change resource p/x.other in one of the source folders
if performing resource consolidation (requires output folder != source folder)
scrub p/x.other from the output folder
remember to infill p/x.other
(phase 3) recompile:
compile all remembered source files into the output folder (and any dependent source files)
(phase 4) infill:
for each hole p/x.class to infill
copy first-found file p/x.class in a library folder to p/x.class in the output folder (no overwriting)
if performing resource consolidation (requires output folder != source folder)
for each hole p/x.other to infill
copy first-found file p/x.other in a source folder to p/x.other in the output folder
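Phase 4's "copy first-found, no overwriting" rule can be sketched in Python (for illustration only; the folder layout and paths are hypothetical):

```python
import os
import shutil

def infill(holes, search_folders, output_folder):
    """For each missing relative path, copy the first match found in the
    ordered search folders into the output folder, never overwriting a
    file the builder already generated there."""
    for rel in holes:
        dest = os.path.join(output_folder, rel)
        if os.path.exists(dest):      # generated files take precedence
            continue
        for folder in search_folders:
            src = os.path.join(folder, rel)
            if os.path.exists(src):
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copyfile(src, dest)
                break                 # first-found wins
```

The same routine covers both class-file infill from library folders and resource-file infill from source folders; only the search list differs.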
WSAD would include their classes/ folder on the build classpath as a library folder with class file copying turned on. Doing so means that the pre-compiled class files in the library are available to build against, and will be used whenever there is no corresponding source code in a source folder. By turning class file copying on for that library folder (programmatically - there is no UI), the class files in the library folder are automatically consolidated with the generated class files.
Resource files can always be kept in the same folder as the source files. When the source and output folders do not coincide, the source folder on the classpath could have copying turned on to ensure that resource files were copied to the output folder. When the source and output folders do coincide, further resource file consolidation is not required (or possible) and the source folder on the classpath would have copying turned off. The resource files that normally live in the source folder would automatically be included in the output folder (without copying).
WSAD has a special problem. They have class files in a classes/ folder which they obtain from unzipping a WAR file. They have a folder of source code; some of the source code may be brand new; some of the source code may correspond to class files in the classes/ folder. They need to prune from the classes/ directory those class files for which corresponding source is available. This allows them to save only those class files which they actually need.
The heart of this operation is identifying the class files which could have come from a given source file. A source file can be lightly parsed to obtain fully qualified names for all top-level types declared within; e.g., a source file com/example/acme/app/Foo.java might contain types named com.example.acme.app.Foo and com.example.acme.app.FooHelper. Such type names map directly to corresponding class file name patterns; e.g., com.example.acme.app.FooHelper would compile to com/example/acme/app/FooHelper.class and possibly other class files matching com/example/acme/app/FooHelper$*.class.
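The mapping from a top-level type name to its class-file name patterns is mechanical; a Python sketch (fnmatch stands in for whatever matching a real implementation would use):

```python
import fnmatch

def class_file_patterns(qualified_name):
    """Class files a top-level type like com.example.acme.app.FooHelper can
    compile to: FooHelper.class plus any nested-class FooHelper$*.class."""
    base = qualified_name.replace(".", "/")
    return [base + ".class", base + "$*.class"]

def prune(class_files, type_names):
    """Keep only class files with no corresponding declared source type."""
    patterns = [p for name in type_names for p in class_file_patterns(name)]
    return [f for f in class_files
            if not any(fnmatch.fnmatch(f, p) for p in patterns)]

kept = prune(
    ["com/example/acme/app/FooHelper.class",
     "com/example/acme/app/FooHelper$1.class",
     "com/example/acme/app/Main.class"],
    ["com.example.acme.app.FooHelper"])
print(kept)  # ['com/example/acme/app/Main.class']
```

Here FooHelper.class and its nested class are pruned, while Main.class survives because no source declares it.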
This basic operation can be implemented with the existing JDOM API (or the proposed AST API): simply open the compilation unit and read off the names from the package declaration and top-level type declarations.
Given this basic operation, it is straightforward to walk any set of source files and use it to prune a given set of class files. Source files in some folder in the workspace can be monitored with a resource change listener. It is trivial to delete corresponding class files incrementally as new source files are added.
Conclusion: New API is not required.
The Java model has 2 primitive kinds of inputs: Java source files, and Java library class files. The Java builder produces one primary output: generated Java class files. Each Java project has a build classpath listing what kinds of inputs it has and where they can be found, and a designated output folder where generated class files are to be placed. The runtime classpath is computed from the build classpath by substituting the output folder in place of the source folders.
Java "resource" files, defined to be files other than Java sources and class files, are of no particular interest to the Java model for compiling purposes. However, these resource files are very important to the user, and to the program when it runs. Resource files are routinely co-located with library class files. But it is also convenient for the user if resource files can be either co-located with source code, or segregated in a separate folder.
Ideally, the Java model should not introduce constraints on where inputs and outputs are located. This would give clients and users maximum flexibility with where they locate their files.
The proposal here has 4 separate parts. Taken in conjunction they remove the current constraints that make it difficult for some clients to place their files where they need to be.
Java project p1/
src/com/example/ (source folder on build classpath)
Bar.java
Foo.java
Quux.java
bin/com/example/ (output folder)
Bar.class {SourceFile="Bar.java"}
Foo.class {SourceFile="Foo.java"}
Foo$1.class {SourceFile="Foo.java"}
Internal.class {SourceFile="Foo.java"}
Main.class {SourceFile="Main.java"}
From this arrangement of files (and looking at the SourceFile attributed embedded in class files), we can infer that:
In this situation, the Java builder deletes the class files corresponding to Bar.java (i.e., Bar.class), to Foo.java (i.e., Foo.class, Foo$1.class, and Internal.class), and to Quux.java (none, in this case). The remaining class files (Main.class) must be retained because it is irreplaceable.
The Java builder takes responsibility for deleting obsolete class files in order to support automated incremental recompilation of entire folders of source files. Note that standard Java compilers like javac never ever delete class files; they simply write (or overwrite) class files to the output folder for the source files that they are given to compile. Standard Java compilers do not support incremental recompilation: the user is responsible for deleting any obsolete class files that they bring about.
If the Java builder is free to assume that all class files in the output folder are ones that correspond to source files, then it can simply delete all class files in the output folder at the start of a full build. If it cannot assume this, the builder is forced to look at class files in the output folder to determine whether it has source code for them. This is clearly more expensive that not having to do so. By declaring that it "owns" the output folder, the current builder is able to makes this simplifying assumption. Allowing users and clients to place additional class files in the output folder requires throwing out this assumption.
If the user or client is free to manipulate class files in the output folder without the Java builder's involvement, then the builder cannot perform full or incremental builds without looking at and deleting the obsolete class files from the output folder corresponding to source files being compiling.
Under the proposed change, the Java builder would need to look at the class files in the output folder to determine whether it should delete them. The only files in the output folder that the Java builder would be entitled to overwrite or delete are class files which the Java builder would reasonably generate, or did generate, while compiling that project.
There is another facet of the obsolete class file problem that the Java builder is not in a position to help with.
If the source file Foo.java were to be deleted, its three class files become obsolete and need to be deleted immediately. Why immediately? Consider what happens if the class files are not deleted immediately. If the user requests a full build, the Java builder is presented with the following workspace:
Java project p1/
src/com/example/ (source folder on build classpath)
Bar.java
Quux.java
bin/com/example/ (output folder)
Bar.class {SourceFile="Bar.java"}
Foo.class {SourceFile="Foo.java"}
Foo$1.class {SourceFile="Foo.java"}
Internal.class {SourceFile="Foo.java"}
Main.class {SourceFile="Main.java"}
Since a full build is requested, the Java builder is not passed a resource delta tree for the project. This means that the Java builder has no way of knowing that Foo.java was just deleted. The Java builder has no choice but to retain the three class files Foo.class, Foo$1.class, and Internal.class, just as it retains Main.class. This too is a consequence of allowing the Java builder to share the output folder with the user's class files.
If the obsolete class files are not deleted in response to the deletion of a source file, these class files will linger around. The Java builder will be unable to get rid of them.
The proposal is to have the Java model monitor source file deletions on an ongoing basis and identify and delete any corresponding obsolete class files in the output folder. This clean up activity must handle the case of source files that disappear while the Java Core plug-in is not activated (this entails registering a Core save participant).
Since deleting (including renaming and moving) a source file is a relatively uncommon thing for a developer to do, the implementation should bet it does not have to do this very often. When a source file in deleted, its package name gives us exactly which subfolder of the output folder might contain corresponding class files that might now be obsolete. In the worst case, the implementation would need to access all class files in that subfolder to determine whether any of them have become obsolete. In cases where there is more than one source folder on the builder classpath, and there is therefore the possibility of one source file hiding another by the same name, it is necessary to consult the build classpath to see whether the deleted source file was exposed or buried.
[Revised proposal: The Java builder remembers the names of the class files it has generated. On full builds, it cleans out all class files that it has on record as having generated; all other class files are left in place. On incremental builds, it selectively cleans out the class files that it has on record as having generated corresponding to the source files that it is going to recompile. There is no need to monitor source file deletions: corresponding generated class files will be deleted on the next full build (because it nukes them all) or next incremental build (because it sees the source file deletion in the delta). The Java builder never looks at class files for their SourceFile attributes. A full build always deletes generated class files, so there's no need to a special UI action.]
The proposed change is to consistently allow the same folder to be used in multiple ways on the same build classpath.
This change is not a breaking change; it would simply allow some classpath configurations that are currently disallowed to be considered legitimate. The API would not need to change.
[Revised proposal: Many parts of the Java model assume that library folders are relatively quiet. Allow a library folder to coincide with the output folder would invalidate this assumption, which would tend to degrade performance. For instance, the indexer indexes libraries and source folders, but completely ignores the output folder. If the output folder was also a library, it would repeatedly extract indexes for class files generated by the builder.
N.B. This means that the original scenario of library class files in the output folder is cannot be done this way. It will need to be addressed in some other way (discussed later on).
The identity criteria for package fragment root handles are based on resources/paths and do not take kind (source vs. binary) into account. This means that a source folder and a library folder at the same path map to the same package fragment root handle! Thus allowing a source folder to coincide with a library folder cannot be supported without revising Java element identity criteria (which is due for an overhaul, but that's a different, and bigger, work item).
The current Java builder copies "resource" files from source folders to the output folder (provided that source and output do not coincide). Once in the output folder, the resource files are available at runtime because the output folder is always present on the runtime class path.
This copying is problematic:
The proposal is to eliminate this copying behavior. The proper way to handle this is to include an additional library entry on the build classpath for any source folders that contain resources. Since library entries are also included on the runtime classpath, the resource files contained therein will be available at runtime.
We would beef up the API specification to explain how the build classpath and the runtime classpath are related, and suggests that one deals with resource files in source folders using library entries. This would be a breaking change for clients or users that rely on the current resource file copying behavior.
The clients that would be most affected are ones that co-locate their resource files with their source files in a folder separate from their output folder. This is a fairly large base of customers that would need to add an additional library entry for their source folder.
It would be simple to write a plug-in that detected and fixed up the Java projects in the workspace as required. By the same token, the same mechanism could be built in to the Java UI. If the user introduces a resource files into a source folder that had none and there is no library entry for that folder on the build classpath, ask the user whether they intend this resource file to be available at runtime.
(JW believes that WSAD will be able to roll with this punch.)
[Revised proposal: Retain copying from source to output folder where necessary.
The Java compiler should minimize the opportunity for obsolete class files to have bad effects.
Consider the following workspace:
Java project p1/
src/com/example/ (source folder on build classpath)
C1.java {package com.example; public class C1 {}}
C2.java {package com.example; public class C2 extends Secondary {})
lib/com/example/ (library folder on build classpath)
C1.class {from compiling an old version of C1.java
that read package com.example; public class C1 {}; class Secondary {}}
C2.class {from compiling an old but unchanged version of C2.java}
Secondary.class {from compiling an old but unchanged version of C2.java}
Quux.class {from compiling Quux.java}
Assume the source folder precedes the library folder on the build classpath (sources should always precede libraries).
When the compiler is compiling both C1.java and C2.java, it should not satisfy the reference to the class com.example.Secondary using the existing Secondary.class because the SourceFile attributes shows that Secondary.class is clearly an output from compiling C1.java, not an input. In general, the compiler should ignore library class files that correspond to source files which are in the process of being recompiled. (In this case, only Quux.class is available to satisfy references.) The Java builder does not do this.
Arguably, the current behavior should be considered a bug. (javac 1.4 (beta) has this bug too.) Fixing this bug should not be a breaking change.
When the SourceFile attribute is not present in a class file, there is no choice but to use it.
[Revised proposal: Maintain current behavior.]
The proposal is to arrange to copy class files from a certain library folder into the output folder. The library folder would have to be represented by a library classpath entry so that the compiler can find any class files it needs to compile source files. Copying the class files to the output folder would unite them with the class files generated by the compiler. Since there may be source code in the source folder corresponding to some of the classes in the library folder, the builder should only use a class file when source is available.
Desired semantics:
S (source folder)
L (library folder)
O (output folder)
Invariant:
x.class in O =
if some y.java in S generates x.class then
x.class from compiling x.java in S
else
if x.class in L then
x.class in L
else
none
endif
endif
Full builds achieve invariant.
Incremental builds maintain invariant.
Full build:
Scrub all class files from O.
Compile all source files in S into class files in O.
Infill/copy all class files from L to O (no overwriting).
Incremental build:
(phase 1) process all changes to L:
for delete or change x.class in L
if x.class in O was not generated by compiler then scrub x.class from O
for add or change x.class to L
remember to infill x.class
(phase 2) process negative changes to S:
for delete or change y.java from S
scrub any class file x.class from O that y.java compiled into
remember to infill x.class
(phase 3) process positive changes to S:
for add or change y.java from S
compile y.java into O
(phase 4) Infill/copy indicated class files from L to O (no overwriting).
We will look at ways to implement the above behavior that do not involve changing the Java builder. This would mean that a customer (such as WSAD) that requires library copying would be able to add it themselves; otherwise, we will need to complicate the Java builder (which is complex enough as it is) and integrate the mechanism into JDT Core.
Could the copying of class files from the library folder L to the output folder O be accomplished in a separate incremental project builder that would run before the Java builder?
Assume the Java builder manages its own class files in the output folder and knows nothing of the pre-builder. Conversely, assume that the pre-builder has no access to the insides of the Java builder.
Pre-copying of class files to the output folder cannot handle the case where a source file gets deleted and a pre-existing class file in the library folder should now take its place. The Java builder, which runs last, deletes the class file; the pre-builder has missed its chance and does not get an opportunity to fill that hole. When this happens on a full build, the full build does not achieve the invariant. This is unacceptable.
Here's the nasty case:
S (source folder): Bar.java (but recently has Foo.java as well)
L (library folder): Foo.class
On a full build
Pre-builder runs first:
Scrubs Foo.class and Bar.class from O.
Copies in Foo.class from L to O.
Java Builder runs second:
Scrubs Foo.class from O (generated by Java builder from Foo.java on last build).
Compile Bar.java into Bar.class O (Foo.java is no longer around).
The output folder should contain a copy of Foo.class from L since there is no equivalent source file that compiles to Foo.class. It doesn't.
Could the copying of class files from the library folder to the output folder be accomplished in a separate incremental project builder that would run after the Java builder?
Again, assume the Java builder manages its own class files in the output folder and knows nothing of the post-builder, and conversely.
Post-copying of class files to the output folder (no overwriting) cannot handle the case where library class files are changed or deleted since the last build, because the post-builder is never in a position to delete or overwrite class files in the output folder (they might have been generated by the Java builder). Once lost, the invariant cannot be reachieved no matter how many full builds you do (you're stuck with stale or obsolete class files). This is unacceptable.
Could the copying of class files from the library folder to the output folder be accomplished by a pair of separate incremental project builders that run on either side of the Java builder?
Assume the Java builder manages its own class files in the output folder and knows nothing of the pre-builder and post-builder, and the pre- and post-builders have no access to the insides of the Java builder.
Full build:
Pre-builder runs first:
Scrubs all class files from O.
Java Builder runs second:
Scrubs all class files from O generated by Java builder.
Compiles all source files into O.
Post-builder runs third:
Infill/copy class files from L to O (no overwriting).
Incremental build when L changes:
Pre-builder runs first:
For delete or change x.class in L
Does nothing (FAILs if no corresponding source file)
For add x.class to L
Infill/copy Foo.class from L to O (no overwriting).
Java Builder runs second:
Recompiles classes that depend on affected class files in L.
Post-builder runs third:
Infill/copy class files from L to O (no overwriting).
Incremental build - changes to source folder:
Pre-builder runs first:
Does nothing since library did not change.
Java Builder runs second:
Compiles source files into O.
Post-builder runs third:
Infill/copy class files from L to O (no overwriting).
An incremental build may fail in the case of a library class file being changed or deleted, leading to stale or obsolete class files in the output folder. Fortunately, a full build always achieves the invariant, and can be used to repair the damage due to changes to the library.
So while the combination of pre- and post-builders is not perfect, it does work in many cases. If the user could do a full build after making changes to the library folder, they would avoid all the problems. The solution has the advantage of not requiring anything special from the Java Core (i.e., WSAD should be able to implement it themselves). An example implementation is available here
When the source folder and output folder coincide, there is no problem keeping resource files in the output folder since they are not at risk of being overwritten (no with the proposed change to disable resource copying when the source folder and output folder coincide).
When the source folder and output folder do not coincide, keeping resource files in the output folder on a permanent basis encounters two issues:
(1) The first issue is that output folder has no presence in the packages view. Any resources that permanently resided in the output folder would therefore be invisible during regular Java development. One would have to switch to the resource navigator view to access them.
The packages view only shows resource files in source and library folders. Changing the packages view to show resources in the output folder is infeasible. Including the output folder on the classpath as a library folder was discussed at length above and is out of the question. Including the output folder on the classpath as a source folder is an option (in fact, it's exactly what you get when your source and output folders coincide).
(2) The second issue is that resource files in the output folder are in harm's way of resources of the same name being copied from a source folder.
If resources existing in the output folder are given precedence over the ones in source folders, then the ones from source folders would only be copied once and nevermore overwritten. Copies in the output folder would get stale or obsolete; automatic cleanup would not be possible.
On the other hand, if resources existing in source folders are given precedence over the ones in the output folders, then one that exists only in the output folders would be permanently lost if a resource by the same name was ever to be created in a source folder. It is a dangerous practice to allow the user to store important data in a place that could be clobbered by an automatic mechanism that usually operates unseen to the user.
Conclusion: Keeping resource files in the output folder on a permanent basis is not well supported at the UI, and should only be done if the resource files can be considered expendable. | http://www.eclipse.org/jdt/core/r2.0/output%20folder/output-folder.html | CC-MAIN-2016-40 | refinedweb | 4,990 | 63.7 |
Current project statusCurrent project status
PySynth is no longer being actively developed by me and has therefore been removed from PyPI.
If you would like to take over as maintainer of the project, please fork it now!
This repo may be deleted in the future.
OverviewOverview
PySynth is a simple music synthesizer for Python 2 or 3. The goal is not to produce many different sounds, but to have scripts that can turn ABC notation or MIDI files into a WAV file without too much tinkering.
The current release of the synthesizer can only play one note at a time. (Although successive notes can overlap in PySynth B and S, but not A.) However, two output files can be mixed together to get stereo sound.
Synthesizer scriptsSynthesizer scripts
InstallationInstallation
LinuxLinux
Clone the repository:
git clone git@github.com:mdoege/PySynth.git
or
git clone
Enter the directory (
cd PySynth) and run
python3 setup.py install
Sample usageSample usage
Basic usage:
import pysynth as ps test = (('c', 4), ('e', 4), ('g', 4), ('c5', -2), ('e6', 8), ('d#6', 2)) ps.make_wav(test, fn = "test.wav")
More advanced usage:
import pysynth_b as psb # a, b, e, and s variants available ''' (note, duration) Note name (a to g), then optionally a '#' for sharp or 'b' for flat, then optionally the octave (defaults to 4). An asterisk at the end means to play the note a little louder. Duration: 4 is a quarter note, -4 is a dotted quarter note, etc.''' song = ( ('c', 4), ('c*', 4), ('eb', 4), ('g#', 4), ('g*', 2), ('g5', 4), ('g5*', 4), ('r', 4), ('e5', 16), ('f5', 16), ('e5', 16), ('d5', 16), ('e5*', 4) ) # Beats per minute (bpm) is really quarters per minute here psb.make_wav(song, fn = "danube.wav", leg_stac = .7, bpm = 180)
Read ABC file and output WAV:
python3 read_abc.py straw.abc
DocumentationDocumentation
More documentation and examples at the PySynth homepage. | https://libraries.io/pypi/music-syn | CC-MAIN-2020-24 | refinedweb | 314 | 64 |
Below you’ll find the basic Regex patterns you can use to match, edit and replace strings. I am using the Java/Perl Regex flavor, so the patterns might be slightly different if you are using another programming language or platform.
Java Implementation
Here’s a simple Java sample that uses the Regex library. It will try to match the given pattern on a string as many times as possible.
import java.util.regex.*; public class Text{ public static void main(String[] args){ boolean found = false; String str1 = "Washington is located in the United States."; Pattern pattern = Pattern.compile("[Ww]ashington"); Matcher matcher = pattern.matcher(str1); while(matcher.find()){ found = true; System.out.println("Found: "+matcher.group()); System.out.println("Start index: "+matcher.start()); System.out.println("End index: "+matcher.end()); } if(!found) System.out.println("No match found"); return; } }
Regex Patterns
1. “string” (String literals)
The most basic pattern you can use is a string literal. It will basically try to match the exact pattern on the target string, as many times as possible.
“test” will be matched twice on the string “testtest”
2. . (metacharacter)
The . will match any given character, so the pattern “box.” will match “boxe” as well as “boxp”. The pattern “.” will match “a”, “b” and “c” on the string “abc”.
Remember that you can escape metacharacters with a backslash, so “\.” will match a dot in the target string.
3. [] (character class)
You can use brackets to create a disjuntion on your pattern (i.e. a separated part), which is also called a character class. The matcher will look for any of the characters inside the brackets. For instance, this can be used to match a string with or without a capital letter.
“[Ww]ashington” will match either “Washington” or “washington”
4. ^ (negation)
The ^ metacharacter negates the characters inside a character class. So “[^abc]ice” will match “dice” but not “bice”.
5. [a-d] (range)
If you want to include many characters or numbers on your character class you can use the hyphen to form a range. For instance, “[a-d]” will match any character from a through d.
6. Unions and Intersections
You can compose a character class from the union of two different classes. You achieve that by nesting the classes: “[a-c[d-f]]” will match any character from a through f.
For the intersection you use the && symbol before the nested element. For instance, “[a-c&&[d-f]]” won’t match anything because the intersection is empty.
7. Predefined Classes
There are several predefined classes that will make your job easier. The most used ones are:
. (any character)
\d (any digit)
\D (any character except digits)
\s (whitespace)
\S (anything except whitespace)
\w (any alphanumeric character)
\W (anything except alphanumeric characters)
8. Quantifiers
You can use quantifiers to specify how many times or in which sequences the characters you are looking for should appear.
“a*” means the character ‘a’ appearing zero or more times
“a+” means the character ‘a’ appearing one or more times
“a?” means the character ‘a’ appearing once or not at all
“a{5}” means the character ‘a’ appearing exactly five times
“a{2,}” means the character ‘a’ appearing at least twice
“a{2,3}” means the character ‘a’ appearing at least twice but at most three times
9. Specifying Locations
If needed you can specify exactly where your pattern should be matched on the target string.
^ means at the beginning of the line
$ means at the end of the line
\b means word boundary
\G means the end of the previous match | https://www.programminglogic.com/basic-regex-patterns-in-javaperl/ | CC-MAIN-2018-17 | refinedweb | 594 | 66.03 |
- The Test-First Technique
- Tests as Specifications
- Building Good Specifications
- Summary
This chapter gives you a crash course in test-driven development (TDD) in case you are not familiar with the discipline.
A staple of the TDD process is the test-first technique. Many people who are new to test-driven development actually confuse it with the test-first technique, but they are not the same thing. Test-first is one tool in the TDD toolbelt, and a very important one at that, but there is a lot more to TDD.
The chapter then covers a test’s proper role in your organization. Tests are best thought of as executable specifications. That is, they not only test something but they also document what that thing should do or how it should look.
One very powerful benefit of cyclically defining and satisfying executable specifications is that it forces your design to emerge incrementally. Each new test you write demands that you revisit and, if necessary, revise your design.
Following that discussion, I cover what you actually want to specify and, probably at least as important, what you do not want to specify. In a nutshell, the rule is “specify behaviors only.” Deciding what a database’s behavior should be can be a little difficult, and I cover that topic in Chapters 6, “Defining Behaviors,” and 7, “Building for Maintainability.” This chapter deals with the behaviors inherent in tables.
Finally, an important piece of test-driven development is to drive behaviors into a database from outside, not the other way around. Again, you can find a lot more advice on how a database should actually be structured later in the book. This chapter deals only with traditional design concepts.
The Test-First Technique
If I were in an elevator, traveling to the top of a building with a software developer I would never see again who had never heard of TDD or the test-first technique, I would try to teach him the test-first technique. I would choose that because it is so easy to teach and it is so easy to get people to try. Also, if done blindly, it creates problems that will force someone to teach himself test-driven development.
The technique is simple, and the following is often enough to teach it:
- Write a test.
- See it fail.
- Make it pass.
- Repeat.
There’s nothing more to test-first. There’s a lot more to test-driven development, but test-first really is that simple.
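Before walking through the loop in C#, it can help to see one complete red-green cycle compressed into a single runnable sketch. The following uses Python and an in-memory SQLite database purely so it runs with no setup; it is not this chapter's C#/SQL Server infrastructure, and the table and function names are illustrative only:

```python
import sqlite3

def upgrade_to_latest_version(connection):
    # Step 3, "make it pass": create the structure the test demands.
    connection.execute(
        "CREATE TABLE Users(ID INTEGER PRIMARY KEY, Email TEXT)")

def insert_a_user(connection):
    connection.execute("INSERT INTO Users VALUES(1, 'foo@bar.com')")

# Steps 1 and 2: write the test and see it fail for a meaningful reason.
connection = sqlite3.connect(":memory:")
try:
    insert_a_user(connection)
    first_run = "green"
except sqlite3.OperationalError:    # fails: no such table yet
    first_run = "red"

# Steps 3 and 4: make it pass, then repeat the loop.
upgrade_to_latest_version(connection)
insert_a_user(connection)
second_run = "green"

print(first_run, second_run)    # red green
```

The failing first run is the point of the exercise: it proves the test can fail, so a later pass actually means something.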
Write the Test
The first step in the technique is to write your test. If you’ve never done this before it might be a little bit uncomfortable at first. You might be thinking “How do I know what to test if there’s nothing there?” That’s a pretty normal feeling that I want you to ball up really tightly and shove down into your gut while you do this a few times. Later you will discover that the best way to determine what should be tested is to write the test for it, but convincing you of that is hard; you’ll have to convince yourself by way of experience.
Anyway, start out by writing a test. Let’s say that I want a database that can store messages sent between users identified by email addresses. The first thing I would do is write a test that requires that ability to be there in order to pass. The test is going to need to create a database of the current version, connect to it, and insert a record. This test is shown in the following listing as one would write it using NUnit and .NET:
[TestFixture]
public class TestFirst
{
    private Instantiator instantiator;
    private IDbConnection connection;

    [SetUp]
    public void EstablishConnectionAndRecycleDatabase()
    {
        instantiator = Instantiator.GetInstance(
            DatabaseDescriptor.LoadFromFile("TestFirstDatabase.xml"));
        connection = DatabaseProvisioning.CreateFreshDatabaseAndConnect();
    }

    [TearDown]
    public void CloseConnection()
    {
        connection.Close();
    }

    [Test]
    public void TestTables()
    {
        instantiator.UpgradeToLatestVersion(connection);
        connection.ExecuteSql(
            "INSERT INTO Users VALUES(1, 'foo@bar.com')");
        connection.ExecuteSql(
            "INSERT INTO Messages " +
            "VALUES(1, 'Hey!', 'Just checking in to see how it''s going.')");
    }
}
That code, as is, won’t compile because I delegate to a little bit of infrastructure that has to be written. One such tool is the DatabaseProvisioning class, which is responsible for creating, tearing down, and connecting to test databases. This class is shown in the following example code, assuming I wanted to test against a SQL Server database:
public class DatabaseProvisioning
{
    public static IDbConnection CreateFreshDatabaseAndConnect()
    {
        var connection = new SqlConnection(@"Data Source=.\sqlexpress;" +
            "Initial Catalog=master;Integrated Security=True");
        connection.Open();
        connection.ExecuteSql("ALTER DATABASE TDDD_Examples SET " +
            "SINGLE_USER WITH ROLLBACK IMMEDIATE");
        connection.ExecuteSql("DROP DATABASE TDDD_Examples");
        connection.ExecuteSql("CREATE DATABASE TDDD_Examples");
        connection.ExecuteSql("USE TDDD_Examples");
        return connection;
    }
}
The other piece of infrastructure (following) is a small extension class that makes executing SQL statements—something I’m going to be doing a lot in this book—a little easier. For those of you who aren’t C# programmers, what this does is make it look like there is an ExecuteSql method for all instances of IDbConnection.
public static class CommandUtilities
{
    public static void ExecuteSql(
        this IDbConnection connection, string toExecute)
    {
        using (var command = connection.CreateCommand())
        {
            command.CommandText = toExecute;
            command.ExecuteNonQuery();
        }
    }
}
The next step is to see a failure.
Stub Out Enough to See a Failure
I like my failures to be interesting. It’s not strictly required, but there’s not a really good reason to avoid it, so assume that making a failure meaningful is implied in “see the test fail.” The main reason you want to see a test fail is because you want to know that it isn’t giving you a false positive. A test that can’t fail for a good reason is about as useful as a test that cannot fail for any reason.
The test I have would fail because there is no database to make, which isn’t a very interesting reason to fail. So let’s create a database class and make it so that the database gets created.
<Database>
  <Version Number="1">
  </Version>
</Database>
With that change in place, my test would fail for an interesting reason: The table into which I was trying to insert doesn’t exist. That’s a meaningful enough failure for me.
See the Test Pass
Now that a test is giving me a worthwhile failure, it’s time to make it pass. I do that by changing the class of databases to create the required table. If I had committed the most recent version of the database class to production, I would create a new version to preserve the integrity of my database class. As it stands, because this new database class hasn’t ever been deployed in an irreversible way, I’ll just update the most recent version to do what I want it to do.
<Database>
  <Version Number="1">
    <Script>
      <![CDATA[
        CREATE TABLE Users(ID INT PRIMARY KEY, Email NVARCHAR(4000));
        CREATE TABLE Messages(
            UserID INT FOREIGN KEY REFERENCES Users(ID),
            Title NVARCHAR(256),
            Body TEXT);
      ]]>
    </Script>
  </Version>
</Database>
That update causes my database class to create the message table in version 1. When I rerun my test, the database gets rebuilt with the appropriate structures required to make the test pass. Now I’m done with a test-first programming cycle.
Repeat
After the cycle is complete, there is an opportunity to start another cycle or to do some other things, such as refactoring. I’m going to go through one cycle just to show you how a design can emerge incrementally. After thinking about the design I created, I decided I don’t like it. I don’t want the email addresses to be duplicated.
How should I handle that? I’ll start by adding a test.
[Test]
public void UsersCannotBeDuplicated() {
    instantiator.UpgradeToLatestVersion(connection);
    connection.ExecuteSql(
        @"INSERT INTO Users(Email) VALUES('foo@bar.com')");
    try {
        connection.ExecuteSql(
            @"INSERT INTO Users(Email) VALUES('foo@bar.com')");
    }
    catch {
        return;
    }
    Assert.Fail("Multiple copies of same email were allowed");
}
After I get that compiling, I’ll watch it fail. It will fail because I can have as many records with a well-known email address as I want. That’s an interesting failure, so I can go on to the next step: adding the constraint to the new version of my database.
<Database>
  <Version Number="1">
    <Script>
      <![CDATA[
        CREATE TABLE Users(ID INT PRIMARY KEY, Email NVARCHAR(4000));

        ALTER TABLE Users ADD CONSTRAINT OnlyOneEmail UNIQUE (Email);

        CREATE TABLE Messages(
          UserID INT FOREIGN KEY REFERENCES Users(ID),
          Title NVARCHAR(256),
          Body TEXT);
      ]]>
    </Script>
  </Version>
</Database>
Recompiling and rerunning my test shows me that it passes. Had that new behavior caused another test to fail, I would update that test to work with the new design constraint, rerun my tests, and see everything pass. After I’ve done that, I decide I’m done with this phase of updating my database class’s design and move on to other activities. | http://www.informit.com/articles/article.aspx?p=2026252&seqNum=2 | CC-MAIN-2017-26 | refinedweb | 1,517 | 55.44 |
Sessions Enhancement Request
- Greg Heffernan
Hi – First, Thanks!
Second – Can I request that when a session is saved, the unnamed new documents be saved with the session too? It would be extremely useful/helpful. That way, you could have a group of unsaved notes for a particular project and quickly save them all under a Session, create a new Session, do the same, and then return to the original Session and have all of its unnamed new documents restored as well.
Most of the time I find myself working on a project with lots of small unnamed notes that aid in the project's development, but then I need to work on another unrelated project. It would be nice to save all the open files under a Session, including the unnamed/unsaved temporary files, so I can return to them later when the Session files are reloaded.
Currently, only the named open files are recorded in the Session file.
- Scott Sumner
Who really wants a bunch of “new X” pseudo-files hanging around?
“Let’s see, where did I put that important thing I wanted to save…was it in the new 16 file…no…maybe it was new 12…no…well…hmmm…it’s here somewhere…”
If it is important enough to type, it’s important enough to save properly, I always say.
Anyway, I suggest you look at the Take Notes plugin, as it offers a better way than the unsaved “new X” file method. Alternatively, a few lines of Pythonscript yield the same functionality; I have this tied to an entry in my right-click context menu. I call it
TempFileYyyyMmDdHhMmSs.py:
import os
import time

def TFYMDHMS__main():
    temp_dir = os.environ['TEMP']  # or whatever...
    ymdhms = time.strftime("%Y%m%d-%H%M%S_", time.localtime())
    new_file_fullpath = temp_dir + os.sep + ymdhms + '.txt'
    notepad.new()
    editor.insertText(0, '?'); editor.deleteRange(0, 1)  # modify the file so it can be saved
    notepad.saveAs(new_file_fullpath)

TFYMDHMS__main()
When run, this script creates a new (saved) editor tab with a timestamped filename of the form yyyymmdd-HHMMSS_.txt.
And, obviously, once a file is named, it is saved along with other files making up a Session. | https://notepad-plus-plus.org/community/topic/15011/sessions-enhancement-request | CC-MAIN-2019-39 | refinedweb | 356 | 58.11 |
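Outside Notepad++, the same timestamped-temp-file idea can be sketched with just the Python standard library (a generic sketch, not the Pythonscript plugin code above; the function name here is made up for illustration):

```python
import os
import tempfile
import time

def make_timestamped_temp_file(suffix=".txt"):
    """Create an empty temp file whose name starts with a
    yyyymmdd-HHMMSS_ timestamp and return its full path.
    (Hypothetical helper; the forum script does the equivalent
    through the Notepad++ Pythonscript API instead.)"""
    prefix = time.strftime("%Y%m%d-%H%M%S_", time.localtime())
    fd, path = tempfile.mkstemp(prefix=prefix, suffix=suffix)
    os.close(fd)  # we only need the named file, not the open handle
    return path

print(make_timestamped_temp_file())
```

Using `tempfile.mkstemp` (rather than concatenating the path by hand) avoids name collisions if two files are created within the same second.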
09 April 2012 08:33 [Source: ICIS news]
SINGAPORE (ICIS)--
MTBE supply in
Panjin Heyun New Materials shut in early April its 200,000 tonne/year MTBE unit at Panjin in
Zhanjiang Dongxing Petrochemical plans to shut its 80,000 tonne/year MTBE unit at
Few import cargoes are expected to arrive in
MTBE demand is expected to recover in April as the weather grows warmer and boosts gasoline consumption. MTBE is a major blending feedstock for gasoline and approximately 90% of
A Chinese MTBE producer said the number of buying enquiries is increasing.
However, MTBE prices are unlikely to see any significant rise this month as the already-high prices are weakening buying interest, industry sources said.
MTBE prices in south China rose to yuan (CNY) 9,400/tonne ($1,490/tonne) on 5 April, the highest level since May 2011, according to C1 Energy, an ICIS service in
In addition, sentiment in the gasoline market is bearish because of weakening crude prices. A bearish gasoline market usually causes the MTBE market to be soft.
WTI closed at $103.31/bbl on 5 April, down by $1.92/bbl from 2 Apr | http://www.icis.com/Articles/2012/04/09/9548418/chinas-mtbe-prices-likely-to-remain-steady-in-april-on-weak-crude.html | CC-MAIN-2015-22 | refinedweb | 195 | 59.74 |
* Adam Borowski <kilobyte@angband.pl> [2007-06-28 23:49]:
> README.Debian says:
> | You might also like to use a shell script to wrap up this
> | funcationality, e.g.
> | place in /usr/local/bin/gcc-snapshot and chmod +x it
> but I see no reason why it couldn't simply be shipped in the package
> outright. It's not like it invades anyone's namespace, etc. It would be
> also consistent with all other gcc packages, all having the executable
> named the same as the package. At least after having tested my stuff
> with gcc-4.2 in the past, I didn't even suspect gcc-snapshot could be
> any different until ./configure failed :p I know, it's like driving by
> memory, but in most cases like this meeting people's assumptions is nice.
> There's little reason to force people to some cut&paste work just to use
> this package...

doko, what do you think about this?

--
Martin Michlmayr
Hello,
I posted this info to the tomcat users mailing list and am following Mark Thomas' advice to open a bug report:
My setup is a RedHat 5 server (32 bit) running Tomcat 6.0.20 with Tomcat Native 1.1.16 libraries and Sun JDK 1.6.0_14. I've built and installed Tomcat Native as described in
The server.xml file has been modified to add enableLookups="true" to the HTTP Connector entry:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
enableLookups="true"
redirectPort="8443" />
Now, when the client exists in the DNS, reverse lookups via HttpServletRequest.getRemoteHost() work fine whether or not I'm using APR.
The problem is that when attempting a reverse lookup for a client that is not found in the naming service, the behaviour of getRemoteHost() depends on whether or not APR is being used. Specifically, without APR, the method returns the dotted-string form of the IP address (consistent with the documentation). However, when APR is enabled, the method returns NULL.
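To spell out the documented contract, here is a short Python sketch (a hypothetical helper mirroring the Servlet API behaviour for illustration only, not actual Tomcat code):

```python
import socket

def get_remote_host(ip_address, resolver=socket.gethostbyaddr):
    """Return the client hostname when a reverse DNS lookup succeeds,
    or the dotted-string IP address when it fails (never None).
    Hypothetical helper illustrating the documented fallback."""
    try:
        hostname, _aliases, _addresses = resolver(ip_address)
        return hostname
    except OSError:  # covers socket.herror/gaierror: no PTR record
        return ip_address

# Simulating a client that is not found in the naming service:
def unknown_host_resolver(ip):
    raise socket.herror(1, "Unknown host")

print(get_remote_host("10.1.2.3", unknown_host_resolver))  # 10.1.2.3
```

The bug report below amounts to the APR code path skipping this fallback and returning NULL instead of the IP string.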
I can reproduce the problem using a simple test servlet:
# cat GetAddress.java

import java.io.*;
import java.util.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class GetAddress extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<b><font color='red'>Hostname of request : </font></b>"
                    + request.getRemoteHost() + "<p>");
        out.println("<b><font color='blue'>IP Address of request : </font></b>"
                    + request.getRemoteAddr());
    }
}
If LD_LIBRARY_PATH is set to $CATALINA_HOME/lib, catalina.out confirms APR is enabled:
05-Jun-2009 11:09:01 org.apache.catalina.core.AprLifecycleListener init
INFO: Loaded APR based Apache Tomcat Native library 1.1.16.
05-Jun-2009 11:09:01 org.apache.catalina.core.AprLifecycleListener init
INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
05-Jun-2009 11:09:02 org.apache.coyote.http11.Http11AprProtocol init
From my client unknown to the DNS, the web page shows "Hostname of request: null ... IP Address of request: <client IP address>"
Now, after simply unsetting LD_LIBRARY_PATH and restarting Tomcat (catalina.out confirms APR is not used), a request from the same client correctly shows "Hostname of request: <client IP address>... IP Address of request: <client IP address>"
This behaviour with APR is causing problems for a third-party application that relies on identifying the client IP/host for authentication: because the code does not expect NULL from getRemoteHost(), it denies access to the client (coming from another company via a LAN-to-LAN VPN).
Any ideas on how to debug this further? Nothing is logged to catalina.out when the error occurs. I also had a quick look in the APR source but couldn't find any reference to getRemoteHost or enableLookups so I'm not sure where this side effect is coming from.
Thanks in advance for any feedback,
Best regards
- Paul.
In fact that needs to be fixed in java/org/apache/coyote/http11/Http11AprProcessor.java
Fixed in 7.0.x and will be included in 7.0.6 onwards.
Proposed for 6.0.x
This has been fixed in 6.0.x and will be included in 6.0.30 onwards.
Proposed for 5.5.x as well.
Fixed in 5.5.x and will be in 5.5.32 onwards. | https://bz.apache.org/bugzilla/show_bug.cgi?id=47319 | CC-MAIN-2020-24 | refinedweb | 552 | 60.41 |
In this tutorial, we are going to see how to create an Iterable class in Python.
Python Iterable Class:
An iterator is an object that lets you step through all the values in a collection, one at a time. In Python, lists, tuples, dictionaries, and sets are all iterable objects: each of them can be iterated over to produce its values.

The built-in iter() function returns an iterator from a list, tuple, dictionary, or set object, and the items are then retrieved one by one through that iterator.
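For example, calling iter() on a built-in list returns a list iterator, and next() pulls out its items one at a time:

```python
numbers = [10, 20, 30]

it = iter(numbers)   # get an iterator from the list
print(next(it))      # 10
print(next(it))      # 20
print(next(it))      # 30
# calling next(it) once more would raise StopIteration
```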
How to create Python Iterable class?
By default, Python classes are not iterable, so we cannot traverse/iterate the values of a class instance. To make a class iterable, you have to implement the __iter__() and __next__() methods in that class.

The __iter__() method performs any initialization needed and returns an iterator object.

The __next__() method performs any per-step operations and returns the next value of the sequence.
Iterable class in Python Example:
To keep the example simple, I am taking a basic use case: creating a class that allows iterating over a range of provided dates, producing one day at a time on each loop iteration.
from datetime import timedelta, date

class DateIterable:
    def __init__(self, start_date, end_date):
        # initializing the start and end dates
        self.start_date = start_date
        self.end_date = end_date
        self._present_day = start_date

    def __iter__(self):
        # returning the iterator object
        return self

    def __next__(self):
        # comparing present_day with end_date;
        # if present_day reaches end_date, stop the iteration
        if self._present_day >= self.end_date:
            raise StopIteration
        today = self._present_day
        self._present_day += timedelta(days=1)
        return today

if __name__ == '__main__':
    for day in DateIterable(date(2020, 1, 1), date(2020, 1, 6)):
        print(day)
Output:
2020-01-01 2020-01-02 2020-01-03 2020-01-04 2020-01-05
The above example iterates over the days between the start and end dates. The for loop creates an iterator object and calls its __next__() method on each pass, which is why each day appears on a separate line.
Note: StopIteration is used to stop the iteration; usually, we raise it when a specific condition is met.
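To make the stopping condition concrete, here is a minimal extra counter class (an additional illustration, not part of the tutorial's date example) that raises StopIteration and catches it manually instead of relying on a for loop:

```python
class CountUpTo:
    """Iterates from 1 up to (and including) limit."""
    def __init__(self, limit):
        self.limit = limit
        self._current = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self._current >= self.limit:
            raise StopIteration  # the condition that stops iteration
        self._current += 1
        return self._current

values = []
it = iter(CountUpTo(3))
while True:
    try:
        values.append(next(it))
    except StopIteration:
        break
print(values)  # [1, 2, 3]
```

A for loop does exactly this try/except dance behind the scenes, which is why the DateIterable example above never needs to catch StopIteration itself.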
Re: [racket-users] db: nested transactions
Yes, that should be fine. One note about your sample code: the isolation mode of inner transactions must be #f (the default); you can't change isolation levels once you've started an outer transaction. Also keep in mind that nested transactions are not supported for ODBC connections. Ryan
[racket-users] Racket v6.2
Re: [racket-users] db: nested transactions
On 06/24/2015 07:46 AM, George Neuner wrote: Hi Ryan, On 6/23/2015 12:20 PM, Ryan Culpepper wrote: Yes, that should be fine. One note about your sample code: the isolation mode of inner transactions must be #f (the default); you can't change isolation levels once you've started an outer
Re: [racket-users] Macro-introducing macros with inter-macro communication channel
On 06/19/2015 03:07 PM, Thomas Dickerson wrote: Hi All, I'm trying to figure out how best to implement the following pattern of macro behavior: Let's say we are writing Loop macro that implements a looped computation over a specified body. I would like to then be able to (a) introduce
Re: [racket-users] stops in Macros that Work Together
On 07/06/2015 10:04 PM, Anthony Carrico wrote: I've been working through Macros that Work Together (on my way to working through Sets-of-Scopes). I've come across something that is slightly unclear to me in the section on local-expand: E ::= a mapping from name to transform I don't believe
Re: [racket-users] problems with sql-timestamp and date*
On 08/22/2015 06:18 PM, George Neuner wrote: On 8/22/2015 5:50 PM, Jon Zeppieri wrote: On Sat, Aug 22, 2015 at 4:36 PM, George Neunergneun...@comcast.net wrote: The latter code using date works properly (modulo the time zone field) and gives consistent results, but the former using date*
[racket-users] Racket v6.2.1
Re: [racket-users] inconsistency/bug in math/array
I think I've run into this problem before. The type of array-slice-ref is (Array A) (Listof Slice-Spec) -> (Array A) where Slice-Spec = (U (Sequenceof Integer) Integer ) The problem is that integers are also sequences, so the contract generated for Slice-Spec just discards the Integer
Re: [racket-users] interface to unix sockets
I think someone was working on listener code for unix sockets a year or so ago, but I don't remember who or how far they got. Pull requests are welcome. Ryan On 8/31/15 2:36 PM, qwe-te...@yandex.ru wrote: Racket provides unix-socket-connect and lacks listener. Is it going to be added in
Re: [racket-users] Lost in ellipsis depths
Here's one more solution, using "template metafunctions" (inspired by Redex's metafunctions). And yes, most of the point of template metafunctions is to have something that cooperates with ellipses like you want. > (require syntax/parse syntax/parse/experimental/template
Re: [racket-users] Iterating Through Database
On 09/29/2015 12:28 PM, Tim Roberts wrote: I'm coming to Racket after many decades of programming in other languages. One of the things that still gives me trouble is being able to know exactly what type of "thing" I have at any given point. Let me give you an example, which is actually quire
[racket-users] Racket v6.3
Racket version 6.3 is now available from - Racket's macro expander uses a new representation of binding called "set of scopes". The new binding model provides a simpler explanation of how macros preserve binding, especially across module boundaries and in
[racket-users] Racket v6.4
Racket version 6.4 is now available from - We fixed a security vulnerability in the web server. The existing web server is vulnerable to a navigation attack if it is also enabled to serve files statically; that is, any file readable by the web server is
Re: [racket-users] Top Level Variables or List of Shared Libraries
The openssl library uses scheme_register_process_global to make sure it initializes the openssl foreign library only once. See the end of openssl/mzssl.rkt. Ryan On 01/28/2016 02:33 PM, Leif Andersen wrote: Since a lot of people were at POPL last week, I think it's worth pinging this list
Re: [racket-users] Store value with unsupported type in Postgres?
On 01/17/2016 06:35 PM, Alexis King wrote: The DB docs for SQL type conversions[1] note that not all Postgres types are supported by Racket, and it recommends using a cast to work around this. It even uses the inet type as an example right at the start of the page. However, I want to store an
Re: [racket-users] Debug help with Unix Write
Unix socket ports are block-buffered, so after writing to them you need to flush the output. Something like (write-json data port) (flush-output port) Your code on github has calls to flush-output without the port argument. That doesn't flush the unix socket port; it flushes the current
Re: [racket-users] Re: how to transform syntax post-expansion?
On 02/14/2016 11:07 PM, Nota Poin wrote: I suppose I could do something like this: (define-syntax (transform-post-expansion stx) (syntax-case (expand stx) () (...))) The macro should use `local-expand` rather than `expand`. See the docs for `local-expand`, since it takes more
Re: [racket-users] Re: using ryanc's oauth2 package with Google?
On 02/14/2016 12:02 PM, Fred Martin wrote: So... even though I chose "Other" as the client type, my API credentials were created with a secret. I had to copy the secret into my client constructor request. From my reading of the oauth 2 API docs, I thought "installed app" clients weren't
Re: [racket-users] Re: Debug help with Unix Write
On 02/12/2016 07:33 PM, Ty Coghlan wrote: Both of you were correct, I had to flush my output and newline terminate it. The final result looks like: (define (broadcast source destination type message port) (let ([h (hash 'source source 'dest destination 'type type 'message
Re: [racket-users] Which html-parsing package?
On 02/17/2016 10:39 AM, Brian Adkins wrote: On Wednesday, February 17, 2016 at 10:35:44 AM UTC-5, Brian Adkins wrote: On Wednesday, February 17, 2016 at 10:20:21 AM UTC-5, Neil Van Dyke wrote: Brian Adkins wrote on 02/17/2016 10:04 AM: takes me
Re: [racket-users] Setup/teardown for unit testing
I think it would be more (Racket-)idiomatic to make account a parameter (as in make-parameter) and have something that can update the parameter around a test case. Here is my preferred extension, by example: (define account (make-parameter #f)) (define (call/open-bank proc) (parameterize
Re: [racket-users] Calling a procedure from mysql
On 03/24/2016 06:17 PM, Ty Coghlan wrote: I have the following simple code: (require db) (define mdb (mysql-connect #:user user #:password password)) (query-exec mdb "use starwarsfinal") (query mdb "CALL track_character(?)" "Chewbacca") (disconnect mdb). Where track_character is a procedure
Re: [racket-users] drracket / postgresql error when opening connection for syntax
The problem was a use of (system-type 'machine) in racket/unix-socket. I've pushed a fix. Ryan On 04/06/2016 08:06 AM, WarGrey Gyoudmon Ju wrote: I met this problem before. (system-type 'machine) uses the output of `uname`. On Wed, Apr 6, 2016 at 5:47 PM, Tim Brown
[racket-users] Racket v6.5
Re: [racket-users] Re: for/list in-query
You might find this function helpful (from the implementation of in-query): (define (in-list/vector->values vs) (make-do-sequence (lambda () (values (lambda (p) (vector->values (car p))) cdr vs pair? #f #f Ryan On 05/04/2016 09:46 AM, Denis
Re: [racket-users] Typo in syntax-class documentation?
On 05/11/2016 03:16 AM, Tim Brown wrote: I found this in the documentation for syntax-class (Link to this section with @secref["stxparse-attrs" #:doc '(lib "syntax/scribblings/syntax.scrbl")]): Consider the following code: (define-syntax-class quark (pattern (a b ...)))
Re: [racket-users] Is my model of (values) accurate?
On 07/22/2016 07:58 PM, David Storrs wrote: Thanks Jon, I appreciate the clear explanation. I'm using call-with-values in database code in order to turn a list into an acceptable set of bind parameters. Here's an example: (query-exec conn "insert into foo (bar, baz) values ($1, $2)"
Re: [racket-users] equivalence relation for hash keys
See `define-custom-hash-types` in `racket/dict`. Note that you'll need to use `dict-ref` instead of `hash-ref`, etc. Ryan On 08/04/2016 12:34 PM, Jos Koot wrote: Hi As far as I can see a hash has three options only for the equivalence relation comparing keys: eq?, eqv? and equal?. Would it
Re: [racket-users] Typed Racket is Unsound because of module->namespace
It seems that a typed module needs to be as *protected* as TR itself, but not as *powerful* (or privileged) as TR. Otherwise one could define a malicious typed module that uses the power TR grants it to break sandboxing. Is the combination of protected but not powerful possible in the current
Re: [racket-users] Fun with keyword-apply
On 02/08/2017 04:41 PM, Dan Liebgold wrote: Hi all - I have an odd syntax I'm trying to maintain backward compatibility with, but I'd like to take advantage of keyword parameters to accommodate the presence/absence/ordering of those parameters. Here's an example of what I'm trying to do:
Re: [racket-users] Detecting EOT when reading a TCP port
read-line returns eof when the port is closed, which is completely different from sending byte 04 (or any other byte or sequence of bytes) over the port. ;; set up {client,server}-{in,out} ports (write-byte 4 client-out) (flush-output client-out) (read-byte server-in) ;; => 4
Re: [racket-users] syntax-properties, local-expand, and a struct
On 9/28/16 12:04 PM, William J. Bowman wrote: On Fri, Sep 23, 2016 at 02:58:23PM -0400, Ryan Culpepper wrote: It appears that the constructor macro (implemented by self-ctor-transformer in racket/private/define-struct.rkt) transfers the syntax properties from the macro use to its expansion (see
Re: [racket-users] syntax-parse #:with and error messages
On 09/28/2016 02:33 PM, 'William J. Bowman' via Racket Users wrote: I recently ran into a problem that took me hours to diagnose. It turns out that a `#:with` clause in a syntax-parse was not matching, but I would never have guessed that from the error message I got. Here is a simplified
Re: [racket-users] Constructing unicode surrogates
Does one of the `string-normalize-*` functions do what you want? Ryan On 10/08/2016 01:06 PM, Jens Axel Søgaard wrote: Hi All, The following interaction shows how the reader can be used to construct a surrogate character: > (string-ref "\ud800\udc00" 0) #\ Given the two
Re: [racket-users] Using schemas with the 'db' module
In PostgreSQL, clusters contain databases contain schemas. I think the answer is to use "SET SCHEMA" or the more general "SET search_path". See Ryan On 10/06/2016
Re: [racket-users] How to get non-prefab struct values from manually-expanded syntax?
On 10/03/2016 06:38 PM, Jack Firth wrote: So I'm reading a file in as code and expanding it, then looking for values in a certain syntax property that macros in the expanded code attach. This works fine for prefab structs, but I can't seem to get it to work with transparent structs. The issue
Re: [racket-users] syntax-properties, local-expand, and a struct
On 09/23/2016 02:43 PM, 'William J. Bowman' via Racket Users wrote: Under certain conditions, the value of a syntax property is duplicated. I can't figure out why, or if this is a bug, and any advice would be appreciated. I've attached the smallest program that generates this behavior that
Re: [racket-users] Re: racket command line parameter
On 10/25/2016 04:57 PM, Dan Liebgold wrote: On Tuesday, October 25, 2016 at 1:43:28 PM UTC-7, Alexis King wrote: bound... You need to put the -i flag first, so the command should look like: racket -iI -l Hmm... that give the REPL the proper language but no access to the contents of
Re: [racket-users] Identifier equality of identifiers stashed in preserved syntax properties
On 10/25/2016 08:04 PM, Alexis King wrote: That makes sense; thank you for your quick reply. It might be possible to do something like what you describe, but I do have a little more context that makes this sort of tricky. I’m trying to not just store identifiers but also store prefab structs
Re: [racket-users] Re: racket command line parameter
On 10/25/2016 06:16 PM, Dan Liebgold wrote: On Tuesday, October 25, 2016 at 2:09:59 PM UTC-7, Ryan Culpepper wrote: racket -e '(enter! "your-module.rkt")' -i BTW, any luck putting a line like this in csh shell script, alias, or windows batch file? For scripting, if your in
Re: [racket-users] find-seconds daylight saving
On 11/08/2016 11:24 PM, George Neuner wrote: [...] - I need to turn the UTC datetimes on all the results back into local times with the right time zone Does the following do what you want? (require srfi/19) ;; date-at-tz : Date Integer -> Date ;; Returns date of equivalent instant in
Re: [racket-users] postgresql sql-timestamp problem - test.rkt (0/1)
On 11/06/2016 09:42 PM, George Neuner wrote: [...] The following in Racket gets it wrong. e.g., [...] => #(struct:sql-timestamp 2016 5 1 5 0 0 0 0) -> "2016-05-01 05:00:00Z" "2016-05-01 00:00:00-05" -> #(struct:sql-timestamp 2016 5 1 0 0 0 0 -18000) #(struct:sql-timestamp 2016 6 12 5 0 0 0 0)
Re: [racket-users] degenerate performance in syntax-parse
On 10/21/2016 05:50 PM, Dan Liebgold wrote: Hi all - In the process of putting together a somewhat complex application using syntax-parse, I discovered that when I specified a repeated pattern in a syntax-class (which was incorrect) AND I had a certain usage of the syntax transformer with an
Re: [racket-users] unit testing syntax errors
See `convert-compile-time-error` and `convert-syntax-error` from the `syntax/macro-testing` library. I should fix the docs to say that the type of the exception can change, so they work best for testing the contents of the exception message. For examples, there are tests for invalid uses of
Re: [racket-users] degenerate performance in syntax-parse
On 10/24/2016 02:15 PM, Dan Liebgold wrote: On Sunday, October 23, 2016 at 1:14:56 PM UTC-7, Ryan Culpepper wrote: [...] 1. A term like `(a <- blend)` will match the first pattern and treat `blend` as a `remap:id`. If you don't want that to happen, there are two ways to prevent it.
Re: [racket-users] syntax-class composed of literals
On 11/16/2016 06:11 PM, Dan Liebgold wrote: Hi, A couple questions about literals in syntax-parse: 1. I'd like to make a syntax-class that is just a set of literals (with a clear error for something not matching any literal). Is there a better way than this:
Re: [racket-users] syntax-class composed of literals
On 11/16/2016 07:42 PM, Dan Liebgold wrote: Literal sets can include datum-literals: (define-literal-set lits #:datum-literals (a b c) (d e)) Ah, oops I missed that keyword parameter. For question 1, that's probably the best way. If you want to suppress the printing of all of the
Re: [racket-users] syntax-class composed of literals
On 11/16/2016 07:51 PM, Vincent St-Amour wrote: FWIW, Eric Dobson wrote a very nice `define-literal-syntax-class` macro that is used extensively inside TR. Its companion
Re: [racket-users] syntax-class composed of literals
On 11/16/2016 08:24 PM, Dan Liebgold wrote: FWIW, Eric Dobson wrote a very nice `define-literal-syntax-class` macro that is used extensively inside TR. Hmm... I can't quite
[racket-users] server downtime
We will be performing maintenance on one of the PLT servers Monday afternoon. Some services will be unavailable during that time, including PLaneT, mailing list archives, and the old bug database. The main web pages, the package server, and the user and dev mailing lists should be unaffected.
Re: [racket-users] Re: How to change text alignment in a text-field% and how to make text-field% read-only but still copy-able
On 12/03/2016 02:15 PM, Winston Weinert wrote: I managed to resolve the alignment question with the following lines: (define e (send my-text-field get-editor)) (send e auto-wrap #t) (send e set-paragraph-alignment 0 'center) However, I'm still at a loss how to make the text-field% read-only
[racket-users] Re: prepared queries
On 3/15/17 12:41 AM, George Neuner wrote: Hi Ryan, Hope you enjoyed the snow day. Lost power here for a while, but fortunately no damage. I did :) My neighborhood didn't get much snow by volume, but what it did get was then rained/sleeted nearly into ice. On 3/13/2017 11:09 PM, Ryan
Re: [racket-users] Reporting exceptions as though they came from the caller
On 03/31/2017 04:00 PM, David Storrs wrote: Imagine I have the following trivial module (ignore that things are defined out of sequence for clarity): #lang racket (define (foo arg) (_baz arg) ; do some checking, raise an exception if there's a problem ...do stuff... ) (define (bar
Re: [racket-users] Virtual connections, threads, and DSN extraction, oh my!
[racket-users] Re: prepared queries
On 03/13/2017 06:30 PM, George Neuner wrote: Hi Ryan, On 3/13/2017 5:43 PM, Ryan Culpepper wrote: Racket's db library always prepares a statement before executing it, even if there are no query parameters. When allowed, instead of closing the prepared statement immediately after executing
Re: [racket-users] Virtual connections, threads, and DSN extraction, oh my!
On 03/13/2017 03:16 PM, David Storrs wrote: [...] On Mon, Mar 13, 2017 at 2:49 PM, George Neuner
> wrote: - It's also fine to pass the VC into other threads. It will be shared state between the threads, but the CP will keep their
Re: [racket-users] Virtual connections, threads, and DSN extraction, oh my!
On 03/13/2017 04:56 PM, George Neuner wrote: On 3/13/2017 3:41 PM, David Storrs wrote: On Mon, Mar 13, 2017 at 2:04 PM, Ryan Culpepper <ry...@ccs.neu.edu <mailto:ry...@ccs.neu.edu>> wrote: If you are using `prepare` just for speed, it might help to know that most base conn
Re: [racket-users] chaining operations on bignum not scalable
On 07/29/2017 02:48 PM, rom cgb wrote: Hi, Probably due to all operations not being in-place, chaining operations on bignums is very costful. for example, using bitwise-bit-field[1] on bignums is atrocious. I also tried (define (reverse-bits n) (for/fold ([reversed 0])
Re: [racket-users] Struct declaration conflict if a file is required implicitly
On 07/23/2017 07:26 AM, Alejandro Sanchez wrote: Hello everyone, I am working on this project: I am writing test cases and I ran into a problem with my ‘ext’ structure. It is declared in the file ‘msgpack/main.rkt’, which is required in the file
Re: [racket-users] How do db handles interact with threads and parameters
On 7/24/17 9:11 AM, George Neuner wrote: Hi David, On 7/24/2017 8:18 AM, David Storrs wrote: What happens in the following code? (define dbh (postgresql-connect ...)) ;; Use the DBH in a new thread (thread (thunk (while ...some long-running condition... (sleep 1) ; don't flood the DB
Re: [racket-users] Catching syntax errors
Use `convert-syntax-error` from the `syntax/macro-testing` module:(form._((lib._syntax%2Fmacro-testing..rkt)._convert-syntax-error)) Ryan On 06/30/2017 04:47 PM, Sam Waxman wrote: Hello, I'm trying to test whether or not certain
Re: [racket-users] Setting parameters between files does not work as expected
You might be interested in `dsn-connect` and the `data-source` structure (). Ryan On 4/25/17 8:18 PM, David Storrs wrote: Great. Thanks, Phillip! On Tue, Apr 25, 2017 at 2:14 PM, Philip McGrath
Re: [racket-users] Anyone using MongoDB 3.2.15 with DrRacket 6.9?
On 08/06/2017 05:49 PM, Cecil McGregor wrote: [...] What are other people using for a NoSQL racket experience? Are you looking for no schema or no ACID? If the latter, the git version of the db library now has experimental support for Apache Cassandra. Ryan -- You received this message
Re: [racket-users] Generate function defintions at compile time
On 08/22/2017 05:29 PM, hiph...@openmailbox.org wrote: Hello, I am writing a Racket library which will make it possible to control the Neovim text editor using Racket. People will be able to use Racket to control Neovim, as well as write plugins for Neovim in Racket.
Re: [racket-users] break-thread + thread-wait can't be handled
On 5/3/17 10:41 PM, Eric Griffis wrote: Hello, I'm having trouble catching "terminate break" exceptions when combining break-thread with thread-wait. MWE 1: (with-handlers ([exn:break:terminate? writeln]) (let ([t (thread (lambda () (thread-wait (current-thread])
Re: [racket-users] syntax-parse attributes in macro-generated macros
On 09/17/2017 01:00 AM, Philip McGrath wrote: [...] I have a macro like `example-macro`, but more complicated and with many, many more potential keyword arguments, so I wanted to write a macro that would let me define `example-macro` with a more declarative syntax, like this:
Re: [racket-users] Re: code reflection
On 10/14/2017 05:01 AM, George Neuner wrote: On 10/14/2017 3:00 AM, Jack Firth wrote: So is there a way ... from normal code ... to get at the locals of functions higher in the call chain? Or at least the immediate caller? Some reflective capability that I haven't yet
Re: [racket-users] okay for the stepper to blanket disarm all syntax?
On 10/16/2017 12:38 PM, 'John Clements' via users-redirect wrote: I’m in the process of trying to update the stepper to handle check-random, and I’m somewhat baffled by the amount of difficulty I’m running into in and around ‘syntax-disarm’. It occurs to me that it would probably be simpler
Re: [racket-users] Efficient & "nice" communication mechanism between Racket and other languages
On 09/07/2017 12:11 PM, Brian Adkins wrote: I'm considering having a group of programmers create micro-services in various programming languages to be glued together into a single application. I would like a communication mechanism with the following characteristics: * Already supported by
Re: [racket-users] "Test did not clean up resources" message from GUI test runner
On 08/20/2017 09:28 PM, Alex Harsanyi wrote: I just noticed that the GUI test runner displays "test did not clean up resources" messages on my tests, but it is not clear to me what resources are not being cleaned up. I tried to reproduce the problem in the following test case: #lang
Re: [racket-users] confused about raco check-requires error
On 11/21/17 2:57 AM, Alex Harsanyi wrote: I'm trying to use the "raco check-requires" command to determine which requires I should remove from my source files, and the command fails when I include one of my files (the application compiles and runs fine). I managed to reproduce the case as
Re: [racket-users] ssax:make-parser
On 05/15/2018 11:36 PM, John Clements wrote: Interestingly, it looks like this change is a deliberate one, made by Ryan Culpepper back in 2011. Here’s the relevant commit: commit 738bf41d106f4ecd9111bbefabfd78bec8dc2202 Author: Ryan Culpepper <ry...@racket-lang.org> Date: Tue Nov 22 02
Re: [racket-users] Jupyter Racket Kernel - iracket
On 05/01/2018 09:33 PM, Graham Dean wrote: PR submitted :) PR merged. Thanks! Ryan -- You received this message because you are subscribed to the Google Groups "Racket Users" group. To unsubscribe from this group and stop receiving emails from it, send an email to
Re: [racket-users] Cryptography routines in Racket
I have a crypto library in progress that I haven't released yet. The repo is at, but the repo is missing a lot of local commits and I won't be able to fix it until mid-January. If you want to try out the old version, you'll also need asn1-lib from
Re: [racket-users] Generating automatic testsuites using a macro
On 2/23/18 3:36 PM, 'Paulo Matos' via Racket Users wrote: On 23/02/18 15:13, 'Paulo Matos' via Racket Users wrote: That's true, thanks for pointing it out. I only just noticed you could generate test-suites and test-cases at runtime with make-testcase and make-testsuite. Therefore I will
Re: [racket-users] Confusion about attributes of struct-id
On 08/16/2018 06:04 PM, David Storrs wrote: The struct-id syntax class from syntax/parse/class/struct-id is puzzling me. Given this preamble: (require (for-syntax syntax/parse/class/struct-id syntax/parse syntax/parse/experimental/template
Re: [racket-users] FFI Trouble (file streams, variadic functions)
On 08/27/2018 02:13 PM, Philip McGrath wrote: I am hoping for some help debugging a problem I'm having writing FFI bindings for libxml2. I am trying to use the function `xmlValidateDtd`, which (predictably) validates an XML document against a DTD. To support error reporting, the first
Re: [racket-users] FFI Trouble (file streams, variadic functions)
. -Philip On Mon, Aug 27, 2018 at 8:03 AM Ryan Culpepper <mailto:ry...@ccs.neu.edu>> wrote: On 08/27/2018 02:13 PM, Philip McGrath wrote: > I am hoping for some help debugging a problem I'm having writing FFI > bindings for libxml2. > > I am tryin
Re: [racket-users] Where to put scribblings in 'multi package?
On 08/29/2018 12:37 PM, Erich Rast wrote: I have a preliminary scribbling for the manual of a multi source package, but it doesn't show up in Racket's main documentation when I install the package locally. Here is the directory structure: appy | |--info.rkt |--appy | |--
Re: [racket-users] Using match on hash tables with optional keys
On 8/31/18 4:28 AM, Greg Hendershott wrote: A general trick for optional values with match is something like (or pat (app (λ _ default-value) pat)). But that doesn't work for hash-table which uses [pat path] for each mapping. (At least I couldn't see how.) Here's _a_ way you could write this as
Re: [racket-users] whither `splicing-parameterize`? or am I doing it wrong?
It might make sense to `(set! new-parameterization #f)` at the end so that the parameterization (and the values it holds) can be GC'd sooner when splicing-parameterize is used at top level or module level. Ryan On 1/24/18 6:00 AM, Alexis King wrote: Here is an implementation of a version of
Re: [racket-users] Output Port Shenanigans
On 03/14/2018 06:16 PM, Lehi Toskin wrote: On Wednesday, March 14, 2018 at 10:10:20 AM UTC-7, Matthew Butterick wrote: probably it requires a combination of peek + read, or copying the port. That may be true, but I've been messing around getting *anything* to print from inside that
Re: [racket-users] how to match unbound identifier as literal in `syntax-parse`?
Here's one way: (~and z:id (~fail #:unless (free-identifier=? #'z #'zeta) "expected the identifier `zeta`")) Another way is to make a syntax class (either specifically for `zeta` or parameterized by the identifier) that does the same check. Ryan On 4/3/18 8:33 AM,
Re: [racket-users] Behavior of nested ellipses
On 03/27/2018 11:46 PM, Ryan Culpepper wrote: On 03/27/2018 10:01 PM, Justin Pombrio wrote: I'm surprised by the behavior of using a pattern variable under one set of ellipses in the pattern, and under two sets of ellipses in the template: [...] BTW, it looks like Macro-By-Example[1
Re: [racket-users] Behavior of nested ellipses
On 03/27/2018 10:01 PM, Justin Pombrio wrote: I'm surprised by the behavior of using a pattern variable under one set of ellipses in the pattern, and under two sets of ellipses in the template: | #lang racket (require(for-syntax syntax/parse)) (define-syntax (test stx) (syntax-parse stx [(_
Re: [racket-users] Storing JSON into PostgreSQL 10
On 03/16/2018 11:28 PM, David Storrs wrote: I'm noticing that when I store jsexpr?s into PostgreSQL 10 I end up with them as strings, not as actual JSONB data. I've read the docs and tried every combination of typecasting / methods of writing that I can think of but nothing ends up working.
Re: [racket-users] syntax/parse is not hygienic
On 03/04/2018 09:40 PM, Alexis King wrote: [... context ...] Still, with all this context out of the way, my questions are comparatively short: 1. Is this lack of hygiene well-known? I did not find anything in Ryan’s dissertation that explicitly dealt with the question, but I
Re: [racket-users] Creating truly unique instances of structure types?
On 11/6/18 11:31 AM, Alexis King wrote: On Nov 5, 2018, at 20:01, Ryan Culpepper wrote: You could use a chaperone to prohibit `struct-info` Good point! I had forgotten that `struct-info` is a chaperoneable operation. This isn’t ideal, though, since I don’t think `struct-info` is ever
Re: [racket-users] Creating truly unique instances of structure types?
On 11/5/18 5:26 PM, Alexis King wrote: To my knowledge, there are two main techniques for creating unique values in Racket: `gensym` and structure type generativity. The former seems to be bulletproof — a value created with `gensym` will never be `equal?` to anything except itself – but the
Re: [racket-users] Compilation/Embedding leaves syntax traces
On 9/25/18 1:11 PM, Alexis King wrote: [] Personally, I would appreciate a way to ask Racket to strip all phase ≥1 code and phase ≥1 dependencies from a specified program so that I can distribute the phase 0 code and dependencies exclusively. However, to my knowledge, Racket does not
Re: [racket-users] How do I (de)serialize PKI keys for storage?
On 12/18/18 23:36, David Storrs wrote: I'm trying to persist public/private keys to our database and having some trouble: Welcome to Racket v6.11. > (require crypto crypto/gcrypt) > (crypto-factories gcrypt-factory) > (define key (generate-private-key 'rsa)) > key (object:gcrypt-rsa-key%
Re: [racket-users] Help with evaluation of examples in Scribble
On 3/10/19 4:16 PM, Matt Jadud wrote: Oh! Thank you, Matthew. I see. So, I'm running into the sandbox... as in, the sandbox is doing what it should, and as a result, it is preventing the networked accesses that I've added to my documentation. That's awfully obvious (now that it is put that
Re: [racket-users] exercise 1, racket school 2018
On 6/17/19 4:20 PM, Robert Girault wrote: I was able to do the first half of exercise 1 (see sources at the end of this message) --- write a macro computing some information at run-time. The second half is to write the same macro, but computing some information at compile-time --- I couldn't do
Re: [racket-users] db module + SQLite driver + 'ON CONFLICT' = syntax error
Use "INSERT OR IGNORE" instead. See, 3rd paragraph. Ryan On 4/18/19 23:28, David Storrs wrote: On Thu, Apr 18, 2019 at 4:48 PM Jon Zeppieri
> wrote: It might well be the SQLlite version. This is a pretty new feature.
Re: [racket-users] db module + SQLite driver + 'ON CONFLICT' = syntax error
On 4/18/19 23:53, David Storrs wrote: On Thu, Apr 18, 2019 at 5:42 PM Ryan Culpepper <mailto:ry...@ccs.neu.edu>> wrote: Use "INSERT OR IGNORE" instead. See, 3rd paragraph. Yep. Unfortunately, this was a simplified case of w
Re: [racket-users] Racket SIGSEGV during FFI call
On 6/26/19 6:34 AM, Christopher Howard wrote: Hi, I have a project going to make Racket bindings to the libhackrf C library installed on my Debian 9 system. I have successfully made and used bindings to around a dozen procedures in the library. However, when I get to the first really important
Re: [racket-users] db module 'query' does not return insert-id
On 4/22/19 20:36, David Storrs wrote: > (require db) > (define db (postgresql-connect ...args...)) > (simple-result-info (query db "insert into collaborations (name) values ('foojalskdsfls')")) '((insert-id . #f) (affected-rows . 1)) From the docs on the 'simple-result' struct:
Re: [racket-users] how do I clear this FFI-related build error?
On 4/23/19 23:14, Matthew Butterick wrote: Some code relies on the `harfbuzz` library, like so: (define-runtime-lib harfbuzz-lib [(unix) (ffi-lib "libharfbuzz" '("1" ""))] [(macosx) (ffi-lib "libharfbuzz.0.dylib")] [(windows) (ffi-lib "libharfbuzz-0.dll")]) Though this works on my
Re: [racket-users] Getting JSON to work with the DB module
It is not possible, unfortunately. You must do the conversion to and from strings yourself. I've thought about adding a hook for additional conversions based on declared types, but there's no declared type information at all for parameters, and the declared type for results is fragile: a
[racket-users] CFP: 4th Workshop on Meta-Programming Techniques and Reflection (Meta'19), Co-located with SPLASH 2019
Scholliers, Ghent University ### Program Committee Nada Amin, University of Cambridge, UK Edwin Brady, University of St Andrews, UK Andrei Chis, Feenk, Switzerland David Thrane Christiansen, Galois, Portland, Oregon, USA Tom Van Cutsem, Bell Labs, Belgium Ryan Culpepper, Czech Technical University | https://www.mail-archive.com/search?l=racket-users@googlegroups.com&q=from:%22Ryan+Culpepper%22 | CC-MAIN-2021-31 | refinedweb | 5,780 | 55.98 |
I'm creating a vertical line graph of data that was collected at various station points along a pipeline. In some of the areas, there are null values for the data at a particular station or multiple stations. I would like the vertical line to stop at the null points, and continue again when there is data. (like a dashed line) Currently, the null valued data is being ignored, and a line is being drawn from the last point with data to the next point with data. I have looked all through the advanced properties of the graph, and can't find anything that will do what I need. I don't what to have to create multiple series within the graph to create this broken line.
Does anyone know how to accomplish what I need? I've attached a picture of what I need it to look like.
thanks in advance!
You have to treat each break in the dataset as a new pair of data. So split your data where a null value occurs and treat it as a pair. Implementing this depends on the format of the data of course
Thanks. That's exactly what I was hoping I didn't have to do. I just can't believe that when you create a graph, it doesn't recognize null values in data.
of course you don't have to stick with arcmap, since matplotlib is builtin.
import matplotlib.pyplot as plt
x = np.arange(10)
y = [ 1, 1, 1, 1.25, None, 1.5, 1.25, 1, 1, 1]
plt.plot(x, y, linestyle='-', marker='o')
[<matplotlib.lines.Line2D at 0x17de70d9ac8>]
plt.show()
yields | https://community.esri.com/t5/data-management-questions/graphing-vertical-lines/m-p/687807 | CC-MAIN-2021-10 | refinedweb | 278 | 84.47 |
WebPush publication library
Project description
Webpush Data encryption library for Python
This is a work in progress. This library is available on pypi as pywebpush. Source is available on github.
Installation
You'll need to set up a python virtualenv. Then:

bin/pip install -r requirements.txt
bin/python setup.py develop
Usage
In the browser, the promise handler for registration.pushManager.subscribe() returns a PushSubscription object. This object has a .toJSON() method that will return a JSON object that contains all the info we need to encrypt and push data.
As illustration, a subscription_info object may look like:
{"endpoint": "...", "keys": {"auth": "k8J...", "p256dh": "BOr..."}}
How you send the PushSubscription data to your backend, store it referenced to the user who requested it, and recall it when there’s a new push subscription update is left as an exercise for the reader.
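As a starting point for that exercise, here is a minimal sketch of the server side. The function names (`save_subscription`, `get_subscription`) and the in-memory dict are hypothetical illustrations, not part of pywebpush; a real application would persist to a database and tie the record to an authenticated user.

```python
import json

# In-memory store mapping user id -> subscription_info dict.
# A real application would persist this in a database instead.
subscriptions = {}

def save_subscription(user_id, subscription_json):
    """Parse the JSON produced by PushSubscription.toJSON() and store it."""
    info = json.loads(subscription_json)
    # Sanity-check the fields that a later webpush() call will need.
    if "endpoint" not in info or "keys" not in info:
        raise ValueError("not a valid subscription object")
    subscriptions[user_id] = info
    return info

def get_subscription(user_id):
    """Recall the stored subscription_info for a later webpush() call."""
    return subscriptions.get(user_id)
```

When a push update arrives from the browser, you would call `save_subscription` again for the same user, replacing the stale record.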
Sending Data using webpush() One Call
In many cases, your code will be sending a single message to many recipients. There’s a “One Call” function which will make things easier.
from pywebpush import webpush

webpush(subscription_info,
        data,
        vapid_private_key="Private Key or File Path[1]",
        vapid_claims={"sub": "mailto:YourEmailAddress"})
This will encode data, add the appropriate VAPID auth headers if required and send it to the push server identified in the subscription_info block.
Parameters
subscription_info - The dict of the subscription info (described above).
data - can be any serial content (string, bit array, serialized JSON, etc), but be sure that your receiving application is able to parse and understand it. (e.g. data = "Mary had a little lamb.")
content_type - specifies the form of Encryption to use, either 'aesgcm' or the newer 'aes128gcm'. NOTE that not all User Agents can decrypt 'aes128gcm', so the library defaults to the older form.
vapid_claims - a dict containing the VAPID claims required for authorization (See py_vapid for more details). If aud is not specified, pywebpush will attempt to auto-fill from the endpoint.
vapid_private_key - Either a path to a VAPID EC2 private key PEM file, or a string containing the DER representation. (See py_vapid for more details.) The private_key may be a base64 encoded DER formatted private key, or the path to an OpenSSL exported private key file.
e.g. the output of:
openssl ecparam -name prime256v1 -genkey -noout -out private_key.pem
Example
from pywebpush import webpush, WebPushException

try:
    webpush(
        subscription_info={
            "endpoint": "",
            "keys": {
                "p256dh": "0123abcde...",
                "auth": "abc123..."
            }},
        data="Mary had a little lamb, with a nice mint jelly",
        vapid_private_key="path/to/vapid_private.pem",
        vapid_claims={
            "sub": "mailto:YourNameHere@example.org",
        }
    )
except WebPushException as ex:
    print("I'm sorry, Dave, but I can't do that: {}", repr(ex))
    # Mozilla returns additional information in the body of the response.
    if ex.response and ex.response.json():
        extra = ex.response.json()
        print("Remote service replied with a {}:{}, {}",
              extra.code, extra.errno, extra.message)
Methods
If you expect to resend to the same recipient, or have needs beyond quickly sending data, you can create a pusher directly with wp = WebPusher(subscription_info). This returns a WebPusher object.
The following methods are available:
.send(data, headers={}, ttl=0, gcm_key="", reg_id="", content_encoding="aesgcm", curl=False, timeout=None)
Send the data using additional parameters. On error, returns a WebPushException
Parameters
data Binary string of data to send
headers A dict containing any additional headers to send
ttl Message Time To Live on Push Server waiting for the client to reconnect (in seconds)
gcm_key Google Cloud Messaging key (if using the older GCM push system) This is the API key obtained from the Google Developer Console.
reg_id Google Cloud Messaging registration ID (will be extracted from endpoint if not specified)
content_encoding ECE content encoding type (defaults to “aesgcm”)
curl Do not execute the POST, but return as a curl command. This will write the encrypted content to a local file named encrypted.data. This command is meant to be used for debugging purposes.
timeout timeout for requests POST query. See requests documentation.
Example
to send from Chrome using the old GCM mode:
WebPusher(subscription_info).send(data, headers, ttl, gcm_key)
.encode(data, content_encoding="aesgcm")
Encode the data for future use. On error, returns a WebPushException
Parameters
data Binary string of data to send
content_encoding ECE content encoding type (defaults to “aesgcm”)
Example
encoded_data = WebPush(subscription_info).encode(data)
## 0.7.0 (2017-02-14)
feat: update to http-ece 0.7.0 (with draft-06 support)
feat: Allow empty payloads for send()
feat: Add python3 classifiers & python3.6 travis tests
feat: Add README.rst
bug: change long to int to support python3

## 0.4.0 (2016-06-05)
feat: make python 2.7 / 3.5 polyglot

## 0.3.4 (2016-05-17)
bug: make header keys case insensitive

## 0.3.3 (2016-05-17)
bug: force key string encoding to utf8

## 0.3.2 (2016-04-28)
bug: fix setup.py issues

## 0.3 (2016-04-27)
feat: added travis, normalized directories

## 0.2 (2016-04-27)
feat: Added tests, restructured code
## 0.1 (2016-04-25)
Initial release
(2013-05-22, 12:54)slinuxgeek Wrote: [ -> ](2013-05-17, 20:27)ace5342 Wrote: [ -> ](2013-05-16, 11:29)slinuxgeek Wrote: [ -> ]I could not find source for greasemonkey and I think it may not be related to youtube plugin code for xbmc.
I just want that search should start on click of a button in video library window, instead of by clicking on an item in panel control.
Right now there is a search icon item in panel if we click it, it displays all previous searches and another icon for new search clicking on that displays Dialog keyboard to enter search string to search.
I want to achieve same on click of a button.
I have been looking for a similar thing (one call or two at the most to go from the xbmc menu to the youtube search menu with the dialog box ready for input) using JSON RPC.
can you post here if you do find a solution as i could prob make what you want work for me.
Hi ace5342,
I did it with a small script:
On click of the button I am calling this script:
So basically I am opening a virtual folder whose path has the same format that a YouTube search result uses:
import xbmc
import CommonFunctions as common

def Main():
    print "*************************Begin YouTube Search*****************************"
    query = common.getUserInput('Search YouTube', '')
    if query:
        xbmc.executebuiltin('ReplaceWindow(video,plugin://plugin.video.youtube/Explore Youtube/?feed=search&path=/root/search&search=' + query + ',return)')
    print "******************************END********************************"

if __name__ == '__main__':
    Main()
That will take a moment to draw the result, and there will not be any user feedback (such as a busy dialog) while it loads, but if you know a little scripting you can add that too.
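One caveat worth noting: the script above pastes the raw query straight into the plugin:// path, so a search containing spaces or an '&' can break the URL. A safer variant URL-encodes the query first. The helper names below (`build_search_url`, `build_builtin`) are just illustrative, and the sketch uses Python 3 syntax even though the XBMC script above is Python 2:

```python
from urllib.parse import quote_plus

def build_search_url(query):
    """Build the plugin:// path the YouTube add-on expects, with the
    search string URL-encoded so spaces and '&' survive the round trip."""
    base = 'plugin://plugin.video.youtube/Explore Youtube/'
    return base + '?feed=search&path=/root/search&search=' + quote_plus(query)

def build_builtin(query):
    # The string that would be handed to xbmc.executebuiltin() in the script above.
    return 'ReplaceWindow(video,' + build_search_url(query) + ',return)'
```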
Thank you for the reply slinuxgeek, I will give it a go when I get home.
Hi,

On Wed, Apr 18, 2012 at 10:01:03PM +0200, Andreas Tille wrote:
> ;-)).

I certainly think this is a great goal, but I have some reservations about the process. You are using debian/upstream, which in no way is in some science-related namespace and appears to be for general-purpose upstream metadata. Therefore, I think before wide adoption, this should be discussed on debian-devel and/or be a DEP. Presenting this as a fait accompli to -devel after x packages have been converted might result in some irritation or confusion. Possibly I have just missed that conversation though, so excuse me in that case.

Cheers,
Michael
The Complete Guide To Validation In ASP.NET MVC 3 - Part 1
Created on March 09, 2011 at 20:30 by Paul Hiles
The latest release of ASP.NET MVC (version 3) has a number of new validation features that significantly simplify both the validation code AND the html outputted to the client. This two-part article will attempt to cover all common validation scenarios and introduce all the new MVC3 validation features along the way. We will provide detailed explanations as well as full code examples that you can adapt for your own needs.
In part one of the article we will give an overview of validation in ASP.NET MVC 3. We will look at the built-in validators including the new CompareAttribute and RemoteAttribute and see what has changed from MVC 2, particularly on the client-side. In Part Two, we will write several custom validators that include both client and server-side validation. We will also look at an alternative to using data annotations - the IValidatableObject interface.
So let's get started...
Setting up Visual Studio
We will be adding all code to the 'Internet Application' template that comes with ASP.NET MVC 3, so if you wish to follow along you, just open up Visual Studio, select new ASP.NET MVC 3 Web Application and pick the Internet Application template when prompted.
Figure 1: The Visual Studio 2010 New Project Dialog
Figure 2: The Visual Studio 2010 New ASP.NET MVC3 Internet Application Template
If you look at solution explorer, you will find a skeleton application with two controllers and a single file in the Models folder (AccountModels.cs) containing all view models. Open up this file, expand the Models region and you will find three models for registration, logon and change password. All examples in this article will revolve around the registration process and thus use the RegisterModel class.
Figure 3: Initial Solution Explorer for Internet Application Template
Inspecting the model
public class RegisterModel
{
    [Required]
    [Display(Name = "User name")]
    public string UserName { get; set; }

    [Required]
    [DataType(DataType.EmailAddress)]
    [Display(Name = "Email address")]
    public string Email { get; set; }

    [Required]
    [ValidatePasswordLength]
    [DataType(DataType.Password)]
    [Display(Name = "Password")]
    public string Password { get; set; }

    [Required]
    [DataType(DataType.Password)]
    [Display(Name = "Confirm password")]
    [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
    public string ConfirmPassword { get; set; }
}
If you take a look at this model, you will find four properties decorated with a number of attributes. If you have worked with ASP.NET MVC version 2, you will probably recognise many of these as System.ComponentModel.DataAnnotations attributes. Some of these are used to affect appearance such as [Display] and [DataType]. The remaining attributes are used for validation. In the RegisterModel, three distinct validation atributes are used: [Required], [Compare] and [ValidatePasswordLength].
Given that throughout this series, we are dealing with data annotations based validation that exclusively use .NET attributes, I will refer to validators and attributes synonymously. So for example, [Required], RequiredAttribute and the Required validator all refer to the same System.ComponentModel.DataAnnotations.RequiredAttribute class.
RequiredAttribute is not new, having been present in MVC 2, but is by far the most commonly used validator. As the name implies, any field with a Required validator needs to have a value if it is to pass validation. The required attribute does not have any configuration properties other than the three common error message ones that we will discuss later in this article.
CompareAttribute is a new, very useful validator that is not actually part of System.ComponentModel.DataAnnotations, but has been added to the System.Web.Mvc DLL by the team. Whilst not particularly well named (the only comparison it makes is to check for equality, so perhaps EqualTo would be more obvious), it is easy to see from the usage that this validator checks that the value of one property equals the value of another property. You can see from the code, that the attribute takes in a string property which is the name of the other property that you are comparing. The classic usage of this type of validator is what we are using it for here: password confirmation.
ValidatePasswordLengthAttribute is a custom validator that is defined within the Internet Application template. If you scroll to the bottom of the AccountModels.cs file, you will see the source code for this validator which is very useful to look at when you come to write your own custom validators. The validator subclasses ValidationAttribute and implements the new IClientValidatable interface that is used to output validation-specific markup to the rendered view. In Part Two of this article, we will discuss much of the structure of ValidatePasswordLengthAttribute and use this class as a base when we come to write several custom validators of our own.
The Controller and View
If these are your first steps with MVC 3, take a look at the AccountController and Register.cshtml View. In terms of validation, you can see that there are no significant changes to either file from MVC 2. This is good and what we would expect - validation logic like all business logic belongs in the model.
If you are particularly observant, you may have noticed a change to the javascript libraries that we are using in our view. We will discuss this below.
What has changed for the client?
Before we run the application for the first time, let's make a few changes:
public class RegisterModel
{
    [Required(ErrorMessage = "You forgot to enter a username.")]
    [Display(Name = "User name")]
    public string UserName { get; set; }

    [Required(ErrorMessage = "Email is required (we promise not to spam you!).")]
    [DataType(DataType.EmailAddress)]
    [Display(Name = "Email address")]
    public string Email { get; set; }

    // ... Password and ConfirmPassword properties unchanged ...
}
Here, we have just changed the error messages for some of the [Required] validators. For an explanation of all the options for customising error messages, see below.
All data annotations validators share three common properties related to the error message that is displayed on validation failure. You can use these properties in three different ways:

Do not specify any property
[Required]
Leaving all three properties blank, the default error message built in to the validator will be used. As an example, for the RequiredAttribute on the UserName property, the default error message is The User name field is required.. Note that it is displaying 'User name' rather than UserName because the validator uses the Name property of the DisplayAttribute if present.

Set the ErrorMessage property
[Required(ErrorMessage="Email is required (we promise not to spam you)")]
If you do not need to support multiple languages, you can simply set your error message directly on the attribute and this will replace the default error message. You can also make the error message a format string. Depending on the validator, you can specify one or more placeholders that will be replaced with the name of the property (or the Name property of the DisplayAttribute if present). The definition displayed below would result in the message 'Email address is required (we promise not to spam you)'
[Display(Name = "Email address")]
[Required(ErrorMessage="{0} is required (we promise not to spam you)")]
Set the ErrorMessageResourceName and ErrorMessageResourceType properties

[Required(ErrorMessageResourceName="Login_Username_Required", ErrorMessageResourceType=typeof(ErrorMessages))]
If you prefer to put your error messages in a resource file, you can instruct the validator to retrieve the error message from there using these properties. Again you can use format strings in your resource file in the same way that you do when you are setting the ErrorMessage.
OK, now it is time to run the application and see what has changed on the client-side with MVC 3. Navigate to /Account/Register and try to submit the form with missing or invalid data. You may be surprised to find client-side (javascript) validation firing and using the error messages that you just modified in the model.
Figure 4: The Initial Registration Form
Client-side validation and the option to use unobtrusive javascript (see below) is controlled by two settings in the web.config. These will both be present in all new MVC 3 projects, but you will need to add them yourself if upgrading an MVC2 application. As an aside, I cannot say that I am a fan of using appSettings in this way. I don't know why the team didn't use a custom config section but I am sure there was probably a good reason for it.
<appSettings>
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
</appSettings>
View the page source and you will see lots of attributes on each form control starting with data-. This is unobtrusive javascript in action. Instead of outputting inline javascript or a JSON blob as in previous releases, MVC3 by default uses this much cleaner syntax. These new attributes are a feature of HTML5, but are fully backward compatible with all modern browsers (including IE6). We will talk more about unobtrusive javascript later in the article.
<div class="editor-label">
    <label for="UserName">User name</label>
</div>
<div class="editor-field">
    <input data-val="true" data-val-required="You forgot to enter a username." id="UserName" name="UserName" type="text" value="" />
    <span class="field-validation-valid" data-valmsg-for="UserName" data-valmsg-replace="true"></span>
</div>
You will also see that we are no longer required to reference MicrosoftAjax.js, MicrosoftMvc.js or MicrosoftMvcValidation.js. Instead, we are using jquery.validate.js and an adapter library built by the MVC team: jquery.validate.unobtrusive.
Since the announcement back in 2008, Microsoft has committed to using jQuery as part of their official development platform. Now in version 3 of MVC, we can see the effects of this commitment. Instead of duplicating functionality between jQuery and Microsoft Ajax libraries, jQuery has become the primary client-side library. All custom Microsoft functionality is built on top of jQuery, with jquery.validate.unobtrusive containing validation adapter logic and jquery.unobtrusive.ajax containing the necessary javascript for ajax forms and links. Note that the older libraries are still present in the Scripts folder, but there is absolutely no reason to use them for new projects.
In order to created some separation between MVC and jQuery (validate), the MVC developers have decided to use HTML5 data-* attributes instead of jQuery validate's method of using css class names. Not only is this more semantically correct, but this approach also allows you to use completely different javascript libraries in the future without any change to the mvc framework itself. The downside is that this separation does necessitate the creation of another javascript file to bridge these differences. Microsoft's jquery.validate.unobtrusive contains all the adapters necessary to convert the HTML5 attributes that are outputted by the validators into jQuery.validate compatible syntax. When you come to write your own custom validation, you can make use of methods in this file to register you own adapters.
Exploring the other data annotations validation attributes
If you are already familiar with MVC 2, feel free to skip this section and move on to the next section where we talk about the Remote validator.
The Range and RegularExpression validators
So we can take a look at the other built-in validators, let's add a new Age property to our model.
[Required] [Range(18, 65, ErrorMessage = "Sorry, you must be between 18 and 65 to register.")] public int Age { get; set; }
We will also need to add an age field to the Register.cshtml view. Just copy the following somewhere between the fieldset tags.
<div class="editor-label"> @Html.LabelFor(m => m.Age) </div> <div class="editor-field"> @Html.TextBoxFor(m => m.Age) @Html.ValidationMessageFor(m => m.Age) </div>
Note that we have added a RequiredAttribute and RangeAttribute to the age property. The Range validator does exactly what you would expect and in this case restricts the property to values between 18 and 65. If you re-run the code and enter an out of range age, you'll get error message we defined. Now try entering 'xyz'. You'll see a different error message 'The field Age must be a number.'. This is not the result of a ValidationAttribute. It is simply because we defined the age property as an integer and as such, we cannot assign a string to it. We have no control over this message, so if it is not to your liking, you will need to change your model slightly. To get over these model binding issues, many people advocate the user of strings for all view model properties. Model binding will always be successful in this way.
Strictly speaking, it is possible to change the type conversion error message, but it is not straightforward to do so and involves overriding framework components. Changing the type to a string is far simpler and has the same effect.
[Required] [Range(18, 65, ErrorMessage = "Sorry, you must be between 18 and 65 to register.")] [RegularExpression(@"\d{1,3}", ErrorMessage = "Please enter a valid age.")] public string Age { get; set; }
We have changed the age property to a string so model binding always suceeds. In addition, we have added a regular expression validator that checks that the string is a valid number. We are now able to specify any error message we want here.
While we are here, lets also add a RegularExpressionAttribute to the Email property:
[RegularExpression("", ErrorMessage = "Please enter a valid email address.")] public string Email { get; set; }
Please no comments about the efficacy of the email regular expression. Yes, there are better expressions that you can use. If you are interested, I would recommend looking here.
The StringLength validator
Finally, let's add a StringLengthAttribute to username:
[StringLength(12, MinimumLength = 6, ErrorMessage = "Username must be between 6 and 12 characters.")] public string UserName { get; set; }
As you would expect, StringLength validates the number of characters that you can have in a string type property. You must specify a maximum number, but a minimum is optional.
[StringLength(12, ErrorMessage = "Username must be a maximum of 12 characters.")] public string UserName { get; set; }
In a standard text input, you can limit the number of characters by using the maxlength attribute, so you might not see any point in specifying it on the model as well, but remember that maxlength is just client-side html and as such can be manipulated or ignored by anyone that chooses to do so. StringLength like all built-in data annotations validators enforces validation logic on both the client and server side.
Viewing the model changes on the UI
Below is the fully modified version of the RegisterModel
public class RegisterModel { [Display(Name = "User name")] [Required(ErrorMessage = "You forgot to enter a username.")] [StringLength(12, MinimumLength = 6, ErrorMessage = "Username must be between 6 and 12 characters.")] public string UserName { get; set; } [Display(Name = "Email address")] [DataType(DataType.EmailAddress)] [Required(ErrorMessage = "Email is required (we promise not to spam you!).")] [RegularExpression("", ErrorMessage = "Please enter a valid email address.")] public string Email { get; set; } [Display(Name = "Password")] [DataType(DataType.Password)] [Required] [ValidatePasswordLength] public string Password { get; set; } [Required] [Range(18, 65, ErrorMessage = "Sorry, you must be between 18 and 65 to register.")] [RegularExpression(@"\d{1,3}", ErrorMessage = "Please enter a valid age.")] public string Age { get; set; } [Display(Name = "Confirm password")] [DataType(DataType.Password)] [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")] public string ConfirmPassword { get; set; } }
Re-run the application and confirm that your form has client-side validation for the new validators that you have applied.
Figure 5: The Modified Registration Form with an Age field
If you turn off JavaScript in your browser, you can test out the server side validation. You will find that the logic is exactly the same on client and server as you would expect. The server side validation will always fire ensuring that invalid data can never make it through to your business logic. For those users with JavaScript enabled, the client-side validation will provide immediate feedback resulting in a better experience for them and less load on the server for you.
<div class="editor-label"> <label for="UserName">User name</label> </div> <div class="editor-field"> <input data- <span class="field-validation-valid" data-</span> </div> <div class="editor-label"> <label for="Age">Age</label> </div> <div class="editor-field"> <input data- <span class="field-validation-valid" data-
If you look at the html source, you can see all the HTML5 data attributes. It is interesting to look at the way these data attributes are constructed. Each form element that has validation has the following attributes:
The data- prefix is defined by the HTML5 standard. The specification states 'Custom data attributes are intended to store custom data private to the page or application, for which there are no more appropriate attributes or elements'. In the past, people tended to use hacks such as embedding data in css classes to add such data, but thankfully this is no longer necessary.
The Remote validator
Let's take a look at a brand new MVC 3 validator - RemoteAttribute. The Remote validator is very simple to use and yet extremely powerful. Remote is for situations where it is impossible to duplicate server side validation logic on the client, often because the validation involves connecting to a database or calling a service. If all your other validation uses javascript and responds to the user's input immediately, then it is not a good user experience to require a post back to validate one particular field. This is where the remote validator fits in.
In this example, we are going to add remote validation to the username field to check that it is unique.
[Remote("ValidateUserName", "Account", ErrorMessage = "Username is not available.")] public string UserName { get; set; }
Remote has three constructors allowing you to specify either a routeName, a controller and action or a controller, action and area. Here we are passing controller and action and additionally overriding the error message.
Because the remote validator can be used to validate any field in any manner, the default error message is understandably vague. In this case, the error message would be 'UserName is invalid' which is not particularly useful to your end users, so always remember to override the default error message when using the RemoteAttribute.
The remote validator works by making an AJAX call from the client to a controller action with the value of the field being validated and optionally, the value of other fields. The controller action then returns a Json response indicating validation success or failure. Returning true from your action indicates that validation passed. Any other value indicates failure. If you return false, the error message specified in the attribute is used. If you return anything else such as a string or even an integer, it will be displayed as the error message. Unless you need your error message to be dynamic, it makes sense to return true or false and let the validator use the error message specified on the attribute. This is what we are doing here.
Our controller action is displayed below. To keep the example simple, we are just checking if the username is equal to 'duplicate' but in a real scenario, you would do your actual validation here making use of whatever database or other resources that you may require.
public ActionResult ValidateUserName(string username) { return Json(!username.Equals("duplicate"), JsonRequestBehavior.AllowGet); }
There are a few different arguments that can be passed to RemoteAttribute. Let's look at an alternative definition:
[Remote("ValidateUserNameRoute", HttpMethod="Post", AdditionalFields="Email", ErrorMessage = "Username is not available.")] public string UserName { get; set; }
This time, we are passing a named route instead of controller and action. We have also changed the http method from the default (GET) to POST. Most interestingly, we are specifying that we need an additional field in order to perform validation. Just by specifying the AdditionalFields property on our model, the remote validator will automatically pass the values of these fields back to our controller action, with no further coding required in JavaScript or in .NET.
In our example, having email in addition to username is obviously of no use but you can imagine complex validation where multiple fields are required to validate successfully. When we include the AdditionalFields property in our validator declaration, we need to change our controller action to take in the additional field as arguments, so our controller action would become:
[HttpPost] public ActionResult ValidateUserName(string username, string email) { // put some validation involving username and email here return Json(true); }
Run the application and try completing the completed form with valid and invalid usernames. As with all the built-in validators, client-side validation is enabled by default so as soon as you change the username and tab out of the field, an ajax request will be made to the controller action you specified in your remote validator declaration. This level of immediate feedback provides a great user experience and given the trivial amount of code required, I can see this validator being very popular.
Figure 6: The RemoteAttribute validator in action
It is also important to note that you must ensure that the result of the controller action used by the remote validator is not be cached by the browser. How do we do this? Simply by decorating our controller action with the OutputCache attributes and explicitly setting it not to cache. Thanks to Ryan for this important step. You can read more about the remote validator on MSDN.
[OutputCache(Location = OutputCacheLocation.None, NoStore = true)]
Conclusion
We have examined the use of validators in the Internet Application template of ASP.NET MVC 3. We have reviewed the usage of many of the in-built validators including the new RemoteAttribute and CompareAttribute. We have also looked at the significant changes to client-side validation including HTML5 data-* attributes and the move from MicrosoftAjax to jQuery and jQuery Validate.
In the Second Part of this article we will create several custom validators from scratch that implement validation logic on both the client and back-end. We will also investigate an alternative to data annotations, the IValidatableObject interface.
Sharing
If you found this article useful then we would be very grateful if you could help spread the word. Linking to it from your own sites and blogs is ideal. Alternatively, you can use one of the buttons below to submit to various social networking sites.
Added on March 14, 2011 at 21:37 by Ryan | Permalink
Hey,
This is a really good article. I will be checking out the remote validator soon.
There is one piece of information that people might want to know about though for the remote validator
According to msdn
"The OutputCacheAttribute attribute is required in order to prevent ASP.NET MVC from caching the results of the validation methods."
Cheers
Added on March 17, 2011 at 16:33 by Paul Hiles | Permalink
Thanks Ryan, I did not know that. I have updated the article.
Added on May 10, 2011 at 22:00 by Theja | Permalink
How can I achieve group validation in mvc3
Added on May 11, 2011 at 15:03 by Paul Hiles | Permalink
@Theja - Well, you can write custom data annotations or use IValidatableObject to do group validation at the server side but I am guessing that you are taking about client-side integration and in particular, integration with jquery.validate's validation groups functionality. If this is the case then unfortunately, out of the box, this is not possible.
Having said that, a UX colleague of mine is in the process of hacking together some support, but it involves hundreds of lines of javascript, replacing a good deal of Microsoft's library. It is not pretty and it will be a while before I would even consider using it for a client project.
Added on August 11, 2011 at 13:14 by Henk Jan | Permalink
I discovered an error in the validation client handling. When using a viewmodel that contains an entity the compare validation always returns the validation message.
Added on August 12, 2011 at 20:38 by Michael | Permalink
Great article! Things can get a bit overwhelming sometimes with MS releasing new versions and ways of doing things quite often, so it's nice when someone clears things up.
Added on September 26, 2011 at 17:59 by snp | Permalink
Hi, I really liked this post.
This is very informative & well explained.
I tried to use 'NotEqualTo' attribute in default sample MVC project in VS 2010.
I added this attribute to Password property in RegisterModel as per this post part # 2. Also added customValidation.js with the closure script as told in post # 2 & included this script file in Register.cshtml.
But when i ran the app, only server side validation is happening, no client side validation.???
I am not able to find what is wrong i did.
Your help will be helpful.
Added on October 02, 2011 at 16:44 by Paul Hiles | Permalink
@snp - I have just created a new mvc3 application and got it working ok, so not sure what is wrong at your end. Here are the steps I took. Hopefully, this will help:
(1) New MVC3 application - Internet Aplication template.
(2) Downloaded the zip file from
(3) Extracted the c# project and added it to the solution, adding a reference to the project from the internet appliction MVC project.
(4) Added the devtrends.validation.js to the scripts folder.
(5) Referenced the script from Register.cshtml using:
<script src="@Url.Content("~/Scripts/devtrends.validation.js")" type="text/javascript"></script>
(6) Added the NotEqualTo attribute to the password property of the RegisterModel in AccountModels.cs:
[Required]
[NotEqualTo("Email,UserName")]
[ValidatePasswordLength]
[DataType(DataType.Password)]
[Display(Name = "Password")]
public string Password { get; set; }
I can bundle this up in a NuGet package if people want me to. That way, you can just install the package and go. Let me know.
Added on October 17, 2011 at 14:56 by Rana | Permalink
Thank you very much for this tutorial, it really helped me make since of validation options.
One question on remote validation: ive implemented same concept as your "ValidateUserName", its working as expected but how can exclude it for editing?
when I edit a record I would get the error that this username is already there?
Thanks again :)
Added on October 27, 2011 at 01:15 by Paul Hiles | Permalink
@Rana - if you are editing then you will have some form of key stored either in the URL or as a hidden field so you can update the existing record. You can check for this key to determine whether you are adding or editing a record and change your logic accordingly. You will not want to skip the username check completely for an edit though. Instead, you want to check if the username has changed and if not, skip the check.
If the key is a hidden field, you can use the AdditionalFields property of the RemoteAttribute and then just add an extra parameter to your remote validation method. Just change the example above where we pass the additional email field and use your id/key instead. If you are using the URL, you can just access this directly from your validation method.
Added on November 05, 2011 at 22:38 by Steve | Permalink
If all tutes were this polished the internets would suck much less. Highly appreciated and looking for a way to tip you ;)
Added on November 14, 2011 at 08:14 by Sen K. Mathew | Permalink
Hi
I wan to check the username while i am editing. so I passed username and userid as AdditionalFields.
But the mvc3 not allowing to edit the field after it displayes the first error message. It gets stuck.
Added on November 15, 2011 at 09:12 by Paul Hiles | Permalink
@Sen K Mathew - I have not experienced this issue with the Remote validator, so without code, it is very difficult to determine the cause. I would suggest you put a cut down example on stackoverflow.
Added on November 22, 2011 at 03:51 by Sen K. Mathew | Permalink
Hi sir
I Added a comment on November 14, 2011 at 08:14.
I am repeating the same once agin for your easy reference.
I wan to check the username while i am editing. so I passed username and userid as AdditionalFields.
But the mvc3 not allowing to edit the field after it displayes the first error message. It gets stuck.
the problem was with IE8. Remote or all validator will get stuck if you use
IE 8. You need to convert the browser to compatiability mode.
This issue is related to IE 8 bug.
Thanks a lot.
Sen K. Mathew
Added on November 22, 2011 at 09:55 by Paul Hiles | Permalink
@Sen K Mathew - I did some research and it turns out that there are some compatibility problems with IE when using certain versions of jQuery and jQuery.validate. Updating to jQuery 1.6.1 and jQuery.validate 1.8.1 should fix the problem.
More information can be found in this bug report:
Added on November 23, 2011 at 23:00 by Dpak | Permalink
Hi,
Q1) Can you have more than 1 RemoteAttribute specified for a field?
Q2) Has anyone tried extending the "RemoteAttribute" and creating custom "RemoteAttributes" ?
Added on November 28, 2011 at 14:25 by Manu | Permalink
I try to implement a "Forgot password" mechanism and I thought it would be better to check the user name and the e-mail address before resetting the password and sending the email. I did this:
-- model:
public class ForgotPasswordModel
{
[Required(ErrorMessageResourceType = typeof(AccountResources), ErrorMessageResourceName = "RequiredUserNameMessage")]
[Display(ResourceType = typeof(AccountResources), Name = "UserNameDisplay")]
[Remote("ValidateUser", "Account", ErrorMessage = "Unknown user")]
public string UserName { get; set; }
[Required(ErrorMessageResourceType = typeof(AccountResources), ErrorMessageResourceName = "RequiredEmailMessage")]
[DataType(DataType.EmailAddress)]
[Remote("ValidateEmailAddress", "Account", AdditionalFields = "UserName", ErrorMessage = "Wrong password"]
[Display(ResourceType = typeof(AccountResources), Name = "EmailDisplay")]
public string Email { get; set; }
}
-- validation controller ("Account"):
public ActionResult ValidateUser(string username)
{
bool result=false;
MembershipUser currentUser = Membership.GetUser(username);
result = (currentUser != null) && (currentUser.UserName.Equals(username));
return Json(result, JsonRequestBehavior.AllowGet);
}
public ActionResult ValidateEmailAddress(string username, string email)
{
bool result = false;
MembershipUser currentUser = Membership.GetUser(username);
result = (currentUser != null) && (currentUser.Email.Equals(email));
return Json(result, JsonRequestBehavior.AllowGet);
}
-- in the _Layout.cshtml view I included
<script src="@Url.Content("~/Scripts/jquery-1.5.1.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js files")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
<script src="" type="text/javascript"></script>
<script src="" type="text/javascript"></script>
-- in the Web.config file I have:
<appSettings>
<add key="webpages:Version" value="1.0.0.0" />
<add key="ClientValidationEnabled" value="true" />
<add key="UnobtrusiveJavaScriptEnabled" value="true" />
</appSettings>
I can't get the remote validation to fire, neither client-side, nor on submit. The "Required" validation is performed, though.
I'm stuck, please suggest a solution, thank you in advance.
Added on November 28, 2011 at 20:48 by Manu | Permalink
Hello again, I just solved the remote validation problem: it was because of the jquery includes in the _Layout.cshtml view, so you may discard my previous post.
Thank you.
Added on January 04, 2012 at 00:28 by Dharmesh | Permalink
I want to pass some extra information which is not part of model where i am applying remote attribute.
Is it possible?
I have one viewmodel which contain list of x model which is bound to telerik grid. now when user enter same name again i want to give error message that name already exist.
Added on January 09, 2012 at 07:43 by ARUN RANA | Permalink
How can i put updaterprogress image beside username when post request is being executed , in case of remote validation ?
Added on February 14, 2012 at 14:49 by David | Permalink
Is it possible to use a checkbox as AdditionalFields in the Remote attribute? I tried that, but it always posts 'true'to the server for the field value. Doesn't matter if it's checked or not.
Added on March 07, 2012 at 23:14 by Georgios | Permalink
Awesome guide, thanks for posting it.
Added on April 19, 2012 at 13:54 by Daniel | Permalink
Thank you so much for this article. I did learn a lot more from it about validation than from MVC books I have.
Added on August 03, 2012 at 14:49 by Rocky royalson | Permalink
Hi Paul,
How to validate multiple email addresses entered in the To/CC fields,
Using MVC3 I want to implement this client requirement.
Multiple email address should be comma separated, and
rocky.royalson@abc.com, abc@gmail.com => this is right
rocky,royalson@abc.com => this is wrong bcoz after rocky there is comma which should not be allow in the email id.
Please help me to achieve this i have written some regular expression as follows, please correct me to implement this :-
[RegularExpression((@"^(([A-Za-z0-9]+_+)|([A-Za-z0-9]+\-+)|([A-Za-z0-9]+\.+)|([A-Za-z0-9]+\++))*[A-Za-z0-9]+@((\w+\-+)|(\w+\.))*\w{1,63}\.[a-zA-Z]{2,6},?$"), ErrorMessage = "Please enter a valid Email Address.")]
Added on August 16, 2012 at 15:39 by sharmila | Permalink
It doesnt seem to work in IE7. but works in chrome and firefox
Added on September 14, 2012 at 13:04 by Zeeshan Umar | Permalink
This is really a comprehensive tutorial, I was really looking for quite some time something like it. You did an excellent job.
Added on March 22, 2013 at 00:04 by Stephen | Permalink
In 'The Range and RegularExpression Validators' section you note 'we have no control over the message' (for typeof int)
An alternative is to modify the message in the unobtrusive adaptors
e.g. for a data property with the [DataType(DataType.Date)] attribute
$.validator.unobtrusive.adapters.add('date', function (options) {
if (options.message) {
options.messages['date'] = 'The date must be in the format dd/mm/yyyy';
}
});
Added on May 13, 2013 at 07:54 by Deepanshu | Permalink
Great Article... very Helpful Thanks a lot.
But i am looking where the text field support multiple language. I mean user should be able to enter multiple language.
What regularexpresssions should be used in that case directly in the model.?
I read something over unicode and used p{L}, something like this, but din't worked.
Can some one please help me out.
Added on July 10, 2013 at 01:31 by Jason Presley | Permalink
Does the The Remote validator work in MVC 4? I have seen this described in several blog posts but I'm trying it and I get nothing. Either I'm missing something that all of the posts are assuming or there is something broken in MVC 4. Any help of suggestions would be appreciated. Thanks!
This article has been locked and further comments are not permitted. | http://www.devtrends.co.uk/blog/the-complete-guide-to-validation-in-asp.net-mvc-3-part-1 | CC-MAIN-2014-15 | refinedweb | 5,759 | 54.52 |
Table of Contents
- Introduction
- List of Dollar Control Options
- Dollar Control Options Affecting the Input Comment Format
- Dollar Control Options Affecting the Input Data Format
- Dollar Control Options Affecting the Output Format
- Dollar Control Options Affecting the Listing of Reference Maps
- Dollar Control Options Affecting Program Control
- Dollar Control Options for GDX Operations
- Dollar Control Options for Compile-Time Variables and Environment Variables
- Dollar Control Options for Macro Definitions
- Dollar Control Options for Compressing and Encrypting Source Files
- Detailed Description of Dollar Control Options
- Conditional Compilation
  - Conditional Compilation: General Syntax and Overviews
  - Conditional Compilation: Examples
  - File Operation Test
  - Conditional Compilation and Batch Include Files
  - Testing Whether an Item Has Been Defined
  - Testing Whether an Item May Be Used in an Assignment
  - Testing Whether an Identifier May Be Declared
  - Error Level Test
  - Solver Test
  - Command Line Parameters in String Comparison Tests
  - System Attributes in String Comparison Tests
  - Conditional Compilation with $ifThen and $else
  - Type of Identifiers
- Macros in GAMS
- Compressing and Decompressing Files
- Encrypting Files
Introduction
Dollar control options are used to indicate compiler directives and options. Dollar control options are not part of the GAMS language and must be entered on separate lines marked with the symbol $ in the first column. A dollar control option line may be placed anywhere within a GAMS program and it is processed during the compilation of the program. The symbol $ is followed by one or more options separated by spaces. Since the dollar control options are not part of the GAMS language, they do not appear on the compilation output in the listing file unless an error has been detected or the user has requested them to be shown (with the option $onDollar). Note that dollar control option lines are not case sensitive and a continued compilation uses the previous settings.
This chapter is organized as follows. First an overview of the dollar control options will be given in section List of Dollar Control Options, where the options will be presented in groups reflecting their major functional categories. Section Detailed Description of Dollar Control Options will contain a reference list of all dollar control options in alphabetical order with detailed description for each.
We will conclude this chapter with separate sections on four important topics: Conditional Compilation, Macros in GAMS, Compressing and Decompressing Files, and Encrypting Files.
Syntax
In general, the syntax in GAMS for dollar control statements is as follows:
$option_name argument_list {option_name argument_list}
The symbol $ in the first column indicates that this is a dollar control statement. It is followed by the name of the dollar control option option_name and the list of arguments argument_list of the option. Depending on the particular option, the number of arguments required can vary from 0 to many. More than one dollar control option may be activated in one statement. Note that in this case the symbol $ is not repeated. Observe that some dollar control options require that they be the first option on a line.
- Note
- No blank space is permitted between the character $ and the first option that follows.
- The effect of the dollar control option is felt immediately after the option is processed.
- Dollar control options are not part of the GAMS language; they instruct the compiler to perform some task. Therefore, dollar control options are not terminated with a semicolon like real GAMS language statements.
A simple example of a list of dollar control options is shown below:
$title Example to illustrate dollar control options $onsymxref onsymlist
Note that there is no blank space between the character $ and the option that follows. The first dollar control option $title sets the title of the pages in the listing file to the text that follows the option name. In the second line of the example above, two options are set: $onSymXRef and $onSymList. These options turn on the echoing of the symbol cross reference table and symbol listing in the compilation output in the listing file.
Observe that it is also permitted to place a dollar control statement in a column other than column 1. However, in this case the statement must begin with the symbols $$, like in this example:
$$title Example showing that dollar control option can start in any column with an extra $ added
List of Dollar Control Options
The dollar control options are grouped into nine major functional categories affecting
- the input comment format
- the input data format
- the output format
- reference maps
- program control
- GDX operations
- compile-time variables and environment variables
- macro definitions
- compressing and encrypting source files
The following subsections briefly describe the options in each of the categories.
Dollar Control Options Affecting the Input Comment Format
Note that comments in GAMS are introduced in section Comments.
Dollar Control Options Affecting the Input Data Format
Dollar Control Options Affecting the Output Format
Dollar Control Options Affecting the Listing of Reference Maps
Dollar Control Options Affecting Program Control
Note that conditional compilation in GAMS is discussed in section Conditional Compilation below.
Dollar Control Options for GDX Operations
Note that GDX facilities and utilities are introduced in chapter GAMS Data eXchange (GDX).
Dollar Control Options for Compile-Time Variables and Environment Variables
See also sections Compile-Time Variables and Environment Variables in GAMS.
Dollar Control Options for Macro Definitions
Note that macros are introduced in section Macros in GAMS below.
Dollar Control Options for Compressing and Encrypting Source Files
Detailed Description of Dollar Control Options
In this section we will describe each dollar control option in detail. Note that the dollar control options are listed in alphabetical order for easy reference. Note further, that in each entry the default value, if applicable, is given in parentheses.
$abort

Syntax:
$abort[.noError] [text]
If used as $abort, this option will issue a compilation error and abort the compilation. It may be followed by text.
Example:
$if not %system.fileSys% == UNIX $abort We only do UNIX
This stops compilation if the operating system is not Unix. Running the example above on Windows will result in the compilation being aborted and the following lines in the listing file:
2  $abort We only do UNIX
****   $343
Error Messages
343  Abort triggered by above statement
This option has a variant: $abort.noError. If the extension .noError is used, the compilation will be aborted as well, but there will be no error. If a save file is written, all remaining unexecuted code will be flushed. This allows effective reuse of the save file.
Note that there is also an abort statement in GAMS, which is used to terminate the execution of a program.
See also $exit, $error, $stop, and $terminate.
Syntax:
$batInclude external_file {arg}
The $batInclude facility performs the same task as the $include facility: it inserts the contents of the specified file external_file at the location of the call. However, in addition, the option $batInclude also passes on arguments arg which may be used inside the include file.
External_file is the name of the batch include file; it may be quoted or unquoted. The arguments arg are passed on to the batch include file. These arguments are treated as character strings that are substituted by number inside the included file. The arguments may be single unbroken strings (quoted or unquoted) or quoted multi-part strings.
Note that the syntax has been modeled after the DOS batch facility. Inside the batch file, a parameter substitution is indicated by using the character % followed immediately by an integer value corresponding to the order of parameters on the list, where %1 refers to the first argument, %2 to the second argument, and so on. If an integer value is specified that does not correspond to a passed parameter, then the parameter flag is substituted with a null string. The parameter flag %0 is a special case that will substitute a fully expanded file name specification of the current batch included file. The flag %$ is the current $ symbol (see $dollar). Observe that parameters are substituted independent of context and the entire line is processed before it is passed to the compiler. There is one exception: parameter flags that appear in comments are not substituted.
- Attention
- GAMS requires that processing the substitutions results in a line that does not exceed the maximum input line length.
- The case of the passed parameters is preserved; thus it may be used in string comparisons.
Example:
$batInclude "file1.inc" abcd "bbbb" "cccc dddd"
Note that file1.inc is included with abcd as the first parameter, bbbb as the second parameter and cccc dddd as the third parameter.
Parameter a,b,c ;
a = 1 ; b = 0 ; c = 2 ;
$batInclude inc2.inc b a
display b ;
$batInclude inc2.inc b c
display b ;
$batInclude inc2.inc b "a+5"
display b ;
The external file inc2.inc contains the following line:
%1 = sqr(%2) - %2 ;
The echo print in the corresponding listing file follows:
1  Parameter a,b,c ;
2  a = 1 ; b = 0 ; c = 2 ;
BATINCLUDE C:\tmp\inc2.inc
4  b = sqr(a) - a ;
5  display b ;
BATINCLUDE C:\tmp\inc2.inc
7  b = sqr(c) - c ;
8  display b ;
BATINCLUDE C:\tmp\inc2.inc
10  b = sqr(a+5) - a+5 ;
11  display b ;
Note that the option $batInclude appears three times with different arguments. GAMS interprets the contents of the batch include file in turn as:
b = sqr(a) - a ;
b = sqr(c) - c ;
b = sqr(a+5) - a+5 ;
Note that the third call is not interpreted as sqr(a+5)-(a+5), but instead as sqr(a+5)-a+5. The results of the display statements shown at the end of the listing file are given below:
----  5 PARAMETER b  =  0.000
----  8 PARAMETER b  =  2.000
---- 11 PARAMETER b  =  40.000
Observe that the third call leads to b = sqr(6)-1+5, thus the final value of b is 40. Suppose the statement in the batch include file is modified to read as follows:
%1 = sqr(%2) - (%2) ;
With this modification the output generated by the display statements will be as follows:
----  5 PARAMETER b  =  0.000
----  8 PARAMETER b  =  2.000
---- 11 PARAMETER b  =  30.000
Note that the third call leads to b = sqr(6)-6, which results in b taking a value of 30.
See also $include, $libInclude, $sysInclude.
Syntax:
$call [=]command
This option passes a command to the current operating system command processor and interrupts compilation until the command has been completed. If the command string is empty or omitted, a new interactive command processor will be loaded.
Example:
$call dir
This command creates a directory listing on a PC.
Note that the command string may be passed to the system and executed directly without using a command processor by prefixing the command with an '=' sign. Compilation errors will be issued if the command or the command processor cannot be loaded and executed properly.
$call gams trnsport
$call =gams trnsport
The first call will run the model [TRNSPORT] in a new command shell. The DOS command shell does not send any return codes from the run back to GAMS. Therefore any errors in the run are not reported back. The second call, however, will send the command directly to the system. The return codes from the system will be intercepted correctly and they will be available to the GAMS system through the errorLevel function.
- Attention
- Some commands (like copy on a PC and cd in Unix) are shell commands and cannot be spawned off to the system. Using these in a system call will create a compilation error.
$call 'copy myfile.txt mycopy.txt'
$call '=copy myfile.txt mycopy.txt'
The first call will work on a PC, but the second will not. The copy command may be used only from a command line shell; the system itself is not aware of this command. (Try this command after clicking Run under the Start menu in Windows; you will find that it does not work.)
See also $call.Async, $hiddenCall.
Syntax:
$call.Async[NC] command
$call.Async works like $call but allows asynchronous job handling. This means users may start a job command without having to wait for the result; they may continue in their model and collect the return code of the job later. The function jobHandle may be used to get the process ID (pid) of the last job started. The status of the job may be checked using the function jobStatus(pid). An interrupt signal may be sent to a running job with the function jobTerminate(pid). With the function jobKill(pid) a kill signal may be sent to a running job.
The difference between $call.Async and $call.AsyncNC is that the latter starts processes in a new console, rather than sharing the console of the parent process.
- Note
- On non-Windows platforms $call.AsyncNC and $call.Async are synonyms.
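As a minimal sketch of how an asynchronous call combines with the compile-time functions named above (trnsport.gms is assumed to have been retrieved via gamslib; the variable name PID is arbitrary):
$call =gamslib trnsport
$call.Async =gams trnsport
$eval PID jobHandle
$log started asynchronous GAMS job with process id %PID%
The job then runs in the background while compilation continues; jobStatus(%PID%) could later be used to check whether it has finished.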
Syntax:
$clear ident {ident}
This option resets all data for the identifiers ident to their default values. Note that only the following data types may be reset: sets, parameters, variables and equations. Note further that the clearing is carried out at compile time and not when the GAMS program executes.
Example:
Set i / 1*20 /;
Scalar a / 2 /;
$clear i a
display i, a;
The option $clear resets i and a to their default values: an empty set for i and zero for a. The output generated by the display statement follows:
----  4 SET i  ( EMPTY )
----  4 PARAMETER a  =  0.000
- Attention
- The two-pass processing of a GAMS file may lead to seemingly unexpected results. Both the dollar control options and the data initialization are done in the first pass, and assignments in the second, irrespective of their relative locations. This is an issue particularly with $clear since data can be both initialized and assigned.
Scalar a / 12 /;
a = 5;
$clear a
display a;
Note that the scalar data initialization statement is processed during compilation and the assignment statement a = 5; during execution. In the order that it is processed, the example above is read by GAMS as:
* compilation step
Scalar a /12/ ;
$clear a
* execution step
a = 5;
display a ;
Therefore the result is that a takes the value of 5. The output from the display statement is as follows:
----  4 PARAMETER a  =  5.000
Compare also $kill and the execution time option clear.
Syntax:
$clearError[s]
This option ($clearError and $clearErrors are synonyms) clears GAMS awareness of compiler errors and turns them into warning messages instead.
Example:
Scalar z / 11 /;
$eval x sqrt(-1)
$clearError
$log %x%
Display z;
Note that without the use of $clearError the program above would not continue with the execution after line 2.
Syntax:
$comment char
This option changes the symbol indicating a single line comment from the default * to the single character char. Note that after this option is used, the new comment character char cannot be used in column 1 as before, since it has a special meaning. Note further that the case of the character does not matter when it is used as a comment character. This option should be used with great care and we recommend resetting the symbol to the default quickly.
- Attention
- The case of the start-of-line comment character does not matter when being used.
Example:
$comment c
c now we use a FORTRAN style comment symbol
$comment *
* now we are back to the default
See also section Comments.
Syntax:
$compress source target
This option causes the file source to be compressed into the packed file target.
Example: Consider the following example where the well-known model [TRNSPORT] is used:
$call gamslib trnsport
$compress trnsport.gms t2.gms
$include t2.gms
The first command retrieves the file trnsport.gms and the second command compresses it. Note that a compressed GAMS file is treated like any other GAMS file; therefore it may be included and executed as usual. Large data files that do not change often can be compressed this way to save disk space.
The following example serves as a little utility to compress and decompress files:
$ifthen set decompress
$ if not set input $set input file_c.gms
$ if not exist %input% $abort No file input file %input% exist
$ if not set output $set output file.gms
$ log Decompressing %input% into %output%
$ decompress %input% %output%
$else
$ if not set input $set input file.gms
$ if not exist %input% $abort No file input file %input% exist
$ if not set output $set output file_c.gms
$ log Compressing %input% into %output%
$ compress %input% %output%
$endif
The program (saved to a file called compress.gms) can be used as follows:
> gams compress.gms --input myfile.gms --output myfile_c.gms
> gams compress.gms --decompress=1 --input myfile_c.gms --output myfile.gms
See also $decompress. Further details are given in chapter Compressing and Decompressing Files.
Syntax:
$decompress source target
This option causes the compressed file source to be decompressed into the unpacked file target.
Example: Consider the following example where the well-known model [TRNSPORT] is used:
$call gamslib trnsport
$compress trnsport.gms t2.gms
$decompress t2.gms t3.gms
$call diff t3.gms trnsport.gms
$if errorlevel 1 $abort t3.gms and trnsport.gms are not identical!
The first command retrieves the file trnsport.gms, the second command compresses it and the third command decompresses the compressed file. Note that the resulting file, t3.gms, is identical to the original file trnsport.gms, which is verified via the diff program.
See also $compress. Further details are given in chapter Compressing and Decompressing Files.
Syntax:
$dollar char
This option changes the current 'dollar' symbol to the single character char.
- Note
- The special %$ substitution symbol can be used to get the current 'dollar' symbol.
Example:
$dollar #
#log now we can use '%$' as the '$' symbol
Syntax:
$double
The lines following this option will be echoed double spaced to the echo print in the listing file.
Example:
Set i / 1*2 / ;
Scalar a / 1 / ;
$double
Set j / 10*15 / ;
Scalar b / 2 / ;
The resulting echo print in the listing file looks as follows:
1  Set i /1*2/ ;
2  Scalar a /1/ ;

4  Set j /10*15/ ;

5  Scalar b /2/ ;
Note that lines before the option $double are listed single spaced, while the lines after the option are listed with double space.
Syntax:
$drop VARNAME
This option destroys (removes from the program) the scoped compile-time variable VARNAME that was defined with the dollar control option $set.
Example:
$set NAME my name
$if set NAME $log Scoped compile-time variable NAME is set to "%NAME%"
$drop NAME
$if not set NAME $log Scoped compile-time variable NAME is not available anymore
See also $set, $dropGlobal, and $dropLocal.
Syntax:
$dropEnv VARNAME
This dollar control option destroys (removes from the program) the operating system environment variable VARNAME.
Example:
$if setEnv GDXCOMPRESS $dropEnv GDXCOMPRESS
See also $setEnv, and $if setEnv.
Syntax:
$dropGlobal VARNAME
This option destroys (removes from the program) the global compile-time variable VARNAME that was defined with the dollar control option $setGlobal.
Example:
$setGlobal NAME my name
$if setGlobal NAME $log Global compile-time variable NAME is set to "%NAME%"
$dropGlobal NAME
$if not setGlobal NAME $log Global compile-time variable NAME is not available anymore
See also $setGlobal, and $drop.
Syntax:
$dropLocal VARNAME
This option destroys (removes from the program) the local compile-time variable
VARNAMEthat was defined with the dollar control option $setLocal.
$setLocal NAME my name
$if setLocal NAME $log Local compile-time variable NAME is set to "%NAME%"
$dropLocal NAME
$if not setLocal NAME $log Local compile-time variable NAME is not available anymore
See also $setLocal, and $drop.
Syntax:
$echo text >[>] external_file
This option allows writing the text text to a file external_file. The text and the file name may both be quoted or unquoted. The file name is expanded using the working directory. The option $echo tries to minimize file operations by keeping the file open in anticipation of another $echo to be appended to the same file. The file will be closed at the end of the compilation or when an option $call or any variant of the option $include is encountered. The redirection symbols > and >> have the usual meaning of starting at the beginning of a file or appending to an existing file, respectively.
Example:
$echo > echo.txt
$echo The message written goes from the first non blank >> echo.txt
$echo 'to the first > or >> symbol unless the text is' >> echo.txt
$echo "is quoted. The input File is %gams.input%. The" >> echo.txt
$echo 'file name "echo.txt" will be completed with' >> echo.txt
$echo %gams.workdir%. >> echo.txt
$echo >> echo.txt
The content of the resulting file echo.txt is the following:
The message written goes from the first non blank
to the first > or >> symbol unless the text is
is quoted. The input File is C:\tmp\echoTest.gms. The
file name "echo.txt" will be completed with
C:\tmp\.
See also $on/offEcho, and $echoN.
Syntax:
$echoN text >[>] external_file
This option sends a text message text to a file external_file like $echo but writes no end-of-line marker, so the line is repeatedly appended to by subsequent commands. The redirection symbols > and >> have the usual meaning of starting at the beginning of a file or appending to an existing file, respectively. Note that the text and the file name may be quoted or unquoted. By default the file will be saved in the working directory.
Example:
$echoN 'Text to be sent' > 'aaa.txt'
$echoN 'More text' >> aaa.txt
$echoN And more and more and more >> aaa.txt
$echo This was entered with $echo >> 'aaa.txt'
$echo This too >> aaa.txt
The created file aaa.txt contains the following text:
Text to be sentMore textAnd more and more and moreThis was entered with $echo
This too
See also $on/offEcho, and $echo.
Syntax:
$eject
This option advances the echo print to the next page.
Example:
$eject
Set i,j ;
Parameter Data(i,j) ;
$eject
Scalar a;
a = 7;
The statements following the first $eject will be listed on one page in the echo print of the listing file and the statements following the second $eject will be listed on the next page.
Syntax:
$ifThen[E|I] cond ... { $elseIf[E|I] cond ... } [ $else ... ] $endIf
This option always appears together with the option $ifThen[E/I]. It is followed by an instruction which is executed if the conditional expression of the matching option $ifThen[E/I] is not true. For an example, see section Conditional Compilation with $ifThen and $else.
See also $ifThen, $elseIf and section Conditional Compilation.
Syntax:
$ifThen[E|I] cond ... { $elseIf[E|I] cond ... } [ $else ... ] $endIf
This option always appears together with the option $ifThen[E/I]. It is followed by another condition and instruction. For an example, see section Conditional Compilation with $ifThen and $else.
See also $ifThen, $else, $elseIfE, $elseIfI and section Conditional Compilation.
Syntax:
$ifThen[E|I] cond ... { $elseIf[E|I] cond ... } [ $else ... ] $endIf
This option does the same as $elseIf but evaluates numerical values of the control variables.
See also $elseIf and section Conditional Compilation.
Syntax:
$ifThen[E|I] cond ... { $elseIf[E|I] cond ... } [ $else ... ] $endIf
This option does the same as $elseIf but it is case insensitive.
See also $elseIf and section Conditional Compilation.
Syntax:
$encrypt source target
This option causes a file to be converted into an encrypted file. Here source is the name of the source file to be encrypted and target is the name for the resulting encrypted file. Note that encryption requires the secure option to be licensed and is available for commercial licenses only. The command line parameter pLicense specifies the target license to be used for encryption. The encrypted file can only run on a system licensed with the license file used for encryption. No special action is required on the executing system since GAMS recognizes whether a file is encrypted and will process it accordingly. There is no option to decrypt an encrypted file, so make sure to keep the original unencrypted file.
Further details and examples are given in chapter Encrypting Files.
Syntax:
$ifThen[E|I] cond ... { $elseIf[E|I] cond ... } [ $else ... ] $endIf
This option must be matched with one of the options $ifThen, $ifThenE or $ifThenI. For an example, see section Conditional Compilation with $ifThen and $else.
See also $ifThen and section Conditional Compilation.
Syntax:
$eolCom char[char]
This option redefines and activates the end-of-line comment symbol, which may be one character or a sequence of two characters. By default, this is initialized to !!, but is not active. The option $onEolCom is used to activate end-of-line comments. If $eolCom is used, $onEolCom is set automatically.
Example:
$eolCom //
Set i /1*2/ ;      // set declaration
Parameter a(i) ;   // parameter declaration
Here the character sequence // serves as the end-of-line comment indicator.
- Attention
- It is not allowed to reset the end-of-line comment symbol to the current end-of-line comment symbol. This would cause a compilation error, as in the following example:
$eolCom //
$eolCom //
Some end-of-line comment settings can cause confusion. The widely used character sequence // is also legal GAMS syntax in put statements to indicate two line breaks:
file fx; put fx;
put 'first line' // 'second line' //;
$eolCom //
put 'third line' // 'fourth line';
results in a put file with the following content:
first line

second line

third line
This can also confuse syntax highlighting in editors (or on this web page). Other popular end-of-line comment characters like # and @ are also used in GAMS syntax; see Controlling the Cursor On a Page.
See also section Comments for more about comments in GAMS.
Syntax:
$error [text]
This option will issue a compilation error and will continue with the next line.
Example:
$if not exist myfile $error File myfile not found - will continue anyway
Note that the first line checks whether the file myfile exists. If the file does not exist, an error with the comment File myfile not found - will continue anyway will be generated and then the compilation will continue with the next line.
See also $abort, $exit, $terminate, and $stop.
Syntax:
$escape character
This option allows users to work with text sequences containing % without substitution. It causes all subsequent occurrences of the form %symbol% to not have parameter substitution done for them. As a consequence, no parameter substitutions are performed in GAMS statements (mostly useful in display and put statements) and the outcome of such statements where %symbol% is used is just %symbol%.
Note that the effect of the option $escape may be reversed with the option $escape %.
Example:
$set tt DOIT
file it; put it;
display "first %tt%"; display "second %&tt%&";
put "display one ", "%system.date%" /;
put "display two " "%&system.date%&"/;
$escape &
display "third %tt%"; display "fourth %&tt%&";
put "display third ", "%system.date%" /;
put "display fourth " "%&system.date%&"/;
$escape %
display "fifth %tt%"; display "sixth %&tt%&";
put "display fifth ", "%system.date%" /;
put "display sixth " "%&system.date%&"/;
The output generated by the display statements follows:
----  6 first  DOIT
----  7 second %&tt%&
---- 12 third  DOIT
---- 13 fourth %tt%
---- 18 fifth  DOIT
---- 19 sixth  %&tt%&
The file it.put will contain the following lines:
display one 08/10/17
display two %&system.date%&
display third 08/10/17
display fourth %system.date%
display fifth 08/10/17
display sixth %&system.date%&
Note that this option was introduced to facilitate writing GAMS code (or command.com/cmd.exe batch scripts) from GAMS including unsubstituted compile-time variables. Text can also be written at compile-time without parameter substitution via option $on/offEchoV and at run-time via $on/offPutV.
- Note
- In GAMS the escape character follows the character (%) that needs to be escaped. In many other languages the escape character precedes the character to be escaped.
Syntax:
$eval VARNAME expression
This option evaluates a numerical expression at compile time and places it into a scoped compile-time variable. In turn the option $ifE may be used to do numeric testing on the value of this variable.
VARNAME is the name of a compile-time variable and expression is an expression that consists of constants, functions, operators and other compile-time variables with numerical values. Note that no whitespace is allowed in the expression; this restriction can be worked around with additional parentheses.
Example:
$eval b1 ifthen(uniform(0,1)<0.5,0,1)
$eval b2 ifthen(uniform(0,1)<0.5,0,1)
$eval b3 (%b1%)xor(%b2%)
$log b1=%b1% b2=%b2% b1 xor b2=%b3%
The first two lines use the uniform function to generate a random number between 0 and 1 and, via the ifthen function, assign 0 if this number is less than 0.5 and 1 otherwise to the scoped compile-time variables b1 and b2. In the third line we apply the logical xor operator to b1 and b2 and store the result in b3. The parentheses are required because the more natural expression %b1% xor %b2% contains spaces. In the fourth line we print the values and the result to the log:
b1=1 b2=1 b1 xor b2=0
The expressions are evaluated using IEEE nonstop arithmetic, so no evaluation errors are triggered, as demonstrated in the following example:
$eval OneDividedByZero 1/0
$log 1/0=%OneDividedByZero%
This produces the following log:
1/0=+INF
The $eval and related dollar control options give access to a reduced set of GAMS functions: abs, card, ceil, cos, errorlevel, exp, fact, floor, frac, gamsrelease, gamsversion, gday, gdow, ghour, gleap, gmillisec, gminute, gmonth, gsecond, gyear, ifthen, jdate, jnow, jobhandle, jobkill, jobstatus, jobterminate, jstart, jtime, log, log10, log2, max, min, mod, numcores, pi, power, round, sameas, sign, sin, sleep, sqr, sqrt, tan, trunc, and uniform. The available operators are: +, -, *, /, ** and even ^ (integer power), which is not available in regular GAMS expressions and requires the use of the function ipower. The comparison relations are <, >, <=, >=, <>, and =. The logical operators are not, and, or, xor, imp, and eqv.
The expression also allows the use of a dollar condition on the right. In the following example we replace the ifthen function by a dollar condition on the right:
$eval b1 1$(uniform(0,1)>=0.5)
$eval b2 1$(uniform(0,1)>=0.5)
$eval b3 (%b1%)xor(%b2%)
$log b1=%b1% b2=%b2% b1 xor b2=%b3%
Moreover, $eval has access to data available at compile time. The expression can access the value of scalars, and for other symbols we can use the card function to access the cardinality (at this point) of the symbol. Here is an example:
Scalar ac 'Avogadro constant' / 6.0221409e+23 /;
$eval log_ac round(log10(ac))
$log round(log10(ac))=%log_ac%
Set d / d0*d%log_ac% /;
$eval card_d card(d)
$log card(d)=%card_d%
Access to individual records of symbols is not possible. The embedded code facility allows access to symbol records at compile time.
See also $evalGlobal, $evalLocal, $ifE, and $set.
Syntax:
$evalGlobal VARNAME expression
This option evaluates a numerical expression at compile time and places it into a global compile-time variable. The syntax and behavior otherwise is identical to $eval.
Syntax:
$evalLocal VARNAME expression
This option evaluates a numerical expression at compile time and places it into a local compile-time variable. The syntax and behavior otherwise is identical to $eval.
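As a small sketch combining the three variants (the variable names are arbitrary; pi is taken from the function list given under $eval):
$eval HALF 1/2
$evalGlobal TWOPI 2*pi
$evalLocal HALFPI pi/2
$log HALF=%HALF% TWOPI=%TWOPI% HALFPI=%HALFPI%
The three options differ only in the scope of the resulting compile-time variable, as described under $set, $setGlobal and $setLocal.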
Syntax:
$exit
This option will cause the compiler to exit (stop reading) from the current file. This is equivalent to having reached the end of file.
Example:
Scalar a ;
a = 5 ; display a ;
$exit
a = a+5 ; display a ;
Note that the lines following the option $exit will not be compiled.
Observe that there is a difference to the dollar control option $stop. If there is only one input file, $stop and $exit will have the same effect. If the option $exit occurs within an include file, it acts like an end-of-file on the include file. However, if the option $stop occurs within an include file, GAMS will stop reading all input.
See also $abort, $error, $terminate, and $stop.
Syntax:
$expose all | ident1 ident2 ...
This option removes all privacy restrictions from identifiers.
With explicit identifiers the privacy restrictions are removed only for the listed identifiers, and with all the restrictions are removed for all identifiers. The privacy restrictions may be set with the dollar control options $hide or $protect. Note that a special license file is needed for this feature to work and that $expose only takes effect in subsequent restart files. For further information, see chapter Secure Work Files.
Syntax:
$FuncLibIn InternalLibName ExternalLibName
This option makes extrinsic function libraries available to a model. InternalLibName is the internal name of the library in the GAMS code and ExternalLibName is the name of the shared library in the file system. See Using Function Libraries for more information.
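A sketch of the usual pattern (the library names stolib and stodclib and the function cdfnormal follow the stochastic distribution library shipped with GAMS; treat the exact names and signature as assumptions here):
$FuncLibIn stolib stodclib
Function cdfn / stolib.cdfnormal /;
Scalar p;
p = cdfn(0,0,1);
display p;
The first line binds the shared library to the internal name stolib; the Function statement then maps a GAMS identifier to one of the library's functions.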
Syntax:
$gdxIn [GDXFileName]
This option is used in a sequence to load specified items from a GDX file. Here GDXFileName denotes the name of the GDX file (with or without the extension .gdx) and the command opens the specified GDX file for reading. The use of $gdxIn without a file name closes the currently open GDX file. The command is used in conjunction with the option $load or one of its variants.
Example:
set i,j;
parameters a(i), b(j), d(i,j), f;
$gdxIn mydata.gdx
$load i j a b d f
$gdxIn
See also $load, and $gdxOut.
Syntax:
$gdxOut [GDXFileName]
This option is used in a sequence to unload specified items to a GDX file at compile time. Here GDXFileName denotes the name of the GDX file (with or without the extension .gdx) and the command opens the specified GDX file for writing. The use of $gdxOut without a file name closes the currently open output GDX file. The command is used in conjunction with the dollar control option $unLoad.
Example:
set i /i1*i3/;
parameters a(i) /i1 3, i2 87, i3 1/;
$gdxOut mydata.gdx
$unLoad i a
$gdxOut
See also $unLoad, and $gdxIn.
Syntax:
$goto id $label id
This option will cause GAMS to search for a line starting with $label id and then continue reading from there. This option can be used to skip over or repeat sections of the input files. In $batInclude files the target labels or label arguments can be passed as parameters because of the manner in which parameter substitution occurs in such files. In order to avoid infinite loops, jumps to the same label are restricted to a maximum of 100 times by default. This maximum may be changed with the option $maxGoto.
Example:
Scalar a ;
a = 5; display a ;
$goto next
a = a+5 ; display a ;
$label next
a = a+10 ; display a ;
Note that GAMS will continue from the line $label next after reading the line $goto next. Observe that all lines in between are ignored. Therefore the final value of a in the example above will be 15.
- Attention
- The lines $goto and $label have to be in the same file. If the target label is not found in the current file, an error will be issued.
See also $label, $maxGoto.
$hidden
Syntax:
$hidden text
A line starting with this option will be ignored and will not be echoed to the listing file. This option is used to enter information only relevant to the person manipulating the file.
Example:
$hidden You need to edit the following lines if you want to:
$hidden
$hidden 1. Change from a to b
$hidden 2. Expand the set
The lines above serve as comments to the person who wrote the file. However, these comments will not be visible in the listing file and are therefore hidden from view.
$hiddenCall
Syntax:
$hiddenCall [=]command
This option does the same as $call but the statement is neither shown on the log nor the listing file.
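For example, the following retrieves a library model without the call appearing in the log or listing file (a minimal sketch, reusing the gamslib utility shown under $compress):
$hiddenCall gamslib trnsport
$include trnsport.gms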
$hiddenCall.Async[NC]
Syntax:
$hiddenCall.Async[NC] command
This option does the same as $call.Async[NC] but the statement is neither shown on the log nor the listing file.
Syntax:
$hide all | ident1 ident2 ...
This option hides identifiers so they cannot be displayed or computed, but they may still be used in model calculations (i.e. when the solve statement is executed). With explicit identifiers the listed identifiers are hidden and with all all identifiers are hidden. These restrictions may be removed with the dollar control options $expose or $purge. Note that a special license file is needed for this feature to work.
For further information, see chapter Secure Work Files.
Syntax:
$if [not] conditional_expression new_input_line
This dollar control option provides the greatest amount of control over conditional processing of the input file(s).
For more information on the conditional expressions allowed, details on the new_input_line, and examples, see section Conditional Compilation below.
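Two common uses are guarding an include against a missing file and providing a default for an unset compile-time variable (a sketch; myfile.inc and N are placeholder names):
$if exist myfile.inc $include myfile.inc
$if not set N $set N 10
$log N=%N%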
See also $ifE, $ifI, $ifThen.
Syntax:
$ifE [not] conditional_expression new_input_line
This dollar control option does the same as the option $if but allows constant expression evaluation. The conditional_expression may take two different forms:
expr1 == expr2   TRUE if (expr1-expr2)/(1+abs(expr2)) < 1e-12
expr             TRUE if expr <> 0
Example:
Scalar a;
$ifE (log2(16)^2)=16 a=0; display a;
$ifE log2(16)^2 == 16 a=1; display a;
$ifE NOT round(log2(16)^2-16) a=2; display a;
$ifE round(log2(16)^2-16) a=3; display a;
$ifE round(log2(16)^2-17) a=4; display a;
This will create the following output:
----  3 PARAMETER a  =  1.000
----  4 PARAMETER a  =  2.000
----  6 PARAMETER a  =  4.000
See also $if and section Conditional Compilation.
Syntax:
$ifI [not] conditional_expression new_input_line
This option works like the option $if. The only difference is that $if makes comparisons involving text in a case sensitive fashion, while $ifI is case insensitive.
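For instance (a minimal sketch; SYS is a placeholder variable name):
$set SYS unix
$if %SYS% == UNIX $log case-sensitive comparison: no match
$ifI %SYS% == UNIX $log case-insensitive comparison: match
Only the second log line would appear, since the case-sensitive comparison of unix and UNIX fails.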
See also $if and section Conditional Compilation.
Syntax:
$ifThen[E|I] cond ... { $elseIf[E|I] cond ... } [ $else ... ] $endIf
This option is a form of the option $if that controls whether a number of statements are active. The syntax for the condition is generally the same as for the option $if. Like $if, it is case sensitive. Often it is followed by one or more of the following dollar control options: $else, $elseIf, $elseIfI, $elseIfE. The option $ifThen must be matched with the option $endIf that marks the end of the construct. An example is given in section Conditional Compilation with $ifThen and $else.
Note that users may add a tag to the $ifThen and $endIf. For example, $ifThen.tagOne has to match with $endIf.tagOne.
Example:
$ifThen.one x == y
display "it1";
$elseIf.one a == a
display "it2";
$ifThen.two c == c
display "it3";
$endIf.two
$elseIf.one b == b
display "it4";
$endIf.one
The resulting listing file will contain the following lines:

---- 2 it2
---- 4 it3
Note that the first condition (x == y) is obviously not true and the fourth condition (b == b) is not tested because the second condition (a == a) was already true.
See also $if, $ifThenE, $ifThenI, $else, $elseIF and section Conditional Compilation.
Syntax:
$ifThen[E|I] cond ... { $elseIf[E|I] cond ... } [ $else ... ] $endIf
This option does the same as the option $ifThen but evaluates numerical values of the control variables.
See also $ifThen and section Conditional Compilation.
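A minimal sketch combining $ifThenE with a numerically evaluated condition (assuming the control variable n has been set with $eval):

```gams
$eval n 2+3
$ifThenE %n%==5
Scalar a / 1 /;
$else
Scalar a / 0 /;
$endIf
display a;
```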
Syntax:
$ifThen[E|I] cond ... { $elseIf[E|I] cond ... } [ $else ... ] $endIf
This option does the same as the option $ifThen but it is case insensitive.
See also $ifThen and section Conditional Compilation.
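A minimal sketch of the case-insensitive variant (the control variable os is a hypothetical name):

```gams
$set os WINDOWS
$ifThenI %os% == windows
$log matched despite different capitalization
$else
$log not matched
$endIf
```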
Syntax:
$include external_file
This option inserts the contents of a specified text file at the location of the call. external_file is the name of the file that is included. It may be quoted or unquoted. Note that include files may be nested.

The include file names are processed in the same way as the input file. The names are expanded using the working directory. If the file cannot be found and no extension is given, the standard GAMS input extension is tried. However, if an incomplete path is given, the file name is completed using the include directory. By default, the include directory is set to the working directory. The default directory search path may be extended with the command line parameter InputDir.
Note that the start of the include file is marked and the include file is echoed to the echo print in the listing file. This reference to the include file may be omitted by using the option $offInclude.
Example:
$include myfile
$include "myfile"
Both statements above are equivalent and the search order for the include file is as follows:
- myfile in the current working directory
- myfile.gms in the current working directory
- myfile and myfile.gms (in that order) in directories specified by the command line parameter InputDir.
- Attention
- The current settings of the dollar control options are passed on to the lower level include files. However, the dollar control options set in the lower level include file are passed on to the parent file only if the option $onGlobal is set.
Note that details on the compilation output of include files are given in section The Include File Summary.
See also $batInclude, $libInclude, $sysInclude.
$inlineCom (/* */)
Syntax:
$inlineCom char[char] char[char]
This option redefines and activates the in-line comment symbols. These symbols are placed at the beginning and the end of the in-line comment and are one character or a two character sequence. By default, the system is initialized to '/*' and '*/', but is not active. The option $onInline is used to activate the in-line comments. If $inlineCom is used, $onInline is set automatically.
Example:
$inlineCom {{ }}
Set {{ this is an inline comment }} i / 1*2 / ;
Note that the character pairs {{ and }} serve as the indicators for in-line comments.
- Attention
- It is not allowed to reset the option $inlineCom to the current symbol for in-line comments. This would cause a compilation error as in the following example:

$inlinecom {{ }}
$inlinecom {{ }}
- Note
- The option $onNestCom enables the use of nested comments.
See also section Comments.
Syntax:
$kill ident {ident}
This option removes all data for the identifiers ident; only the type and dimension are retained (this means that these identifiers will be declared but no longer defined). Note that only the data of the following data types may be removed: sets, parameters, variables and equations. Note further that the data removal is carried out at compile time and not when the GAMS program executes.
Example:
Set i / 1*20 /;
Scalar a / 2 /;
$kill i a
Note that the effect of the third line above is that all data from a and i is removed, so the set i and the scalar a are declared, but not initialized or assigned to. Note that after i and a have been killed, a display statement for them will trigger an error. However, new data may be assigned to identifiers that were previously killed. Thus the following statements are valid if appended to the code above:
Set i / i1*i3 /;
a = 7;
Observe that this option needs to be distinguished from the dollar control option $clear, which resets the data to the default values.
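The contrast with $clear can be sketched as follows:

```gams
Set i / 1*3 /;
$clear i
* i is still initialized, just empty: a display is legal
display i;
$kill i
* after $kill, i is only declared; a display here would be an error,
* but new data may be assigned
Set i / a, b /;
display i;
```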
Syntax:
$label id
This option marks a line to be jumped to by a dollar control option $goto. Any number of labels may be used in files and not all of them need to be referenced. Re-declaration of a label identifier will not generate an error and only the first occurrence encountered by the GAMS compiler will be used for future $goto references.
Example:
Scalar a ;
a = 5 ; display a ;
$goto next
a = a+5 ; display a ;
$label next
a = a+10 ; display a ;
When GAMS reaches the line $goto next, it continues from the line $label next. All lines in between are ignored. Therefore in the example above, the final value of a is 15.
- Attention
- If several dollar control options appear in one line and $label is one of them, then $label must be listed first.
See also $goto, $maxGoto.
Syntax:
$libInclude external_file {arg}
This option is mostly equivalent to the option $batInclude. However, if an incomplete path is given, the file name is completed using the library include directory. By default, the library include directory is set to the directory inclib in the GAMS system directory. Note that the default directory may be reset with the command line parameter ldir.
Example:
$libInclude abc x y
This call will first look for the include file [GAMS System Directory]/inclib/abc. If this file does not exist, GAMS will look for the file [GAMS System Directory]/inclib/abc.gms. The arguments x and y are passed on to the include file and are interpreted as explained in the detailed description of the option $batInclude.
See also $include, $batInclude, $sysInclude.
Syntax:
$lines n
This option starts a new page in the listing file if fewer than n lines are available on the current page.
Example:
$hidden Never split the first few lines of the following table
$lines 5
Table io(i,j) Transaction matrix
...
;
This will ensure that if there are less than five lines available on the current page in the listing file before the next statement (in this case, the table statement) is echoed to it, the contents of this statement are echoed to a new page.
Syntax:
$load [sym1[,] sym2=gdxSym2[,] sym3<[=]gdxSym3[.dimI][,] ...]
This option is preceded and succeeded by the option $gdxIn, which opens and closes the GDX file for reading. The option $load loads specified items from the GDX file. Note that more than one instance of $load may occur. A listing of the GDX file contents will be created if the option $load is not followed by arguments.
Examples:

Consider the following example, where transsol is the GDX file of the transportation model [TRNSPORT]:

$gdxIn transsol
$load
Sets i, j;
Parameters a(i), b(j), d(i,j), f;
$load i j a b d f
$gdxIn
A comma between the symbols is optional. The following example works identically:

$gdxIn transsol
$load
Sets i, j;
Parameters a(i), b(j), d(i,j), f;
$load i, j, a, b, d, f
$gdxIn
The $load without any arguments produces a table of contents of the GDX container in the listing file:

Content of GDX C:\Users\default\Documents\gamsdir\projdir\transsol
6 Parameter 0 1 f freight in dollars per case per thousand miles
7 Parameter 2 6 c(i,j) transport cost in thousands of dollars per case
8 Variable 2 6 x(i,j) shipment quantities in cases
9 Variable 0 1 z total transportation costs in thousands of dollars
10 Equation 0 1 cost define objective function
11 Equation 1 2 supply(i) observe supply limit at plant i
12 Equation 1 3 demand(j) satisfy demand at market j
Symbols may be loaded with new names with the following syntax: $load i=gdx_i j=j_gdx. The universal set may be loaded using $load uni=*.

$gdxIn transsol
Sets i, jj, uni;
Parameters a(i), bb(jj), d(i,jj), f;
$load i jj=j uni=* a bb=b d f
$gdxIn
display uni;
This results in a display of all used labels:

---- 5 SET uni
Seattle , San-Diego, New-York , Chicago , Topeka
The syntax sym<[=]GDXSym[.dimI] allows loading a one-dimensional set from a symbol in the GDX file that has an even higher dimensionality. GAMS tries to find the set sym as a domain in the symbol GDXSym and uses the labels from this index position (with < the first matching domain set from the right and with <= from the left). If no domain information is stored in the GDX file or the domain information does not match, the suffix .dimI allows picking a fixed index position.
In the following we work with a GDX file created by the following code:

set i / i1*i3 /, ii(i,i) / i1.i2, i2.i3 /;
$gdxOut ii
$unLoad i ii
$gdxOut
Now we use this GDX file to load the first and the second index from ii:

set i, i1;
$gdxIn ii
* Load first index from ii as i
$load i<=ii i1<ii.dim1
display i, i1;
The display lists all labels from the first index of ii:

---- 5 SET i Domain loaded from ii position 1
i1, i2
---- 5 SET i1 Domain loaded from ii position 1
i1, i2
Now we match from the right and get the second index of ii:

set i, i2;
$gdxIn ii
* Load second index from ii as i
$load i<ii i2<ii.dim2
display i, i2;
The resulting listing file will contain the following lines:

---- 5 SET i Domain loaded from ii position 2
i2, i3
---- 5 SET i2 Domain loaded from ii position 2
i2, i3
This type of projection loading can be useful to extract the domain sets from a single parameter that is stored in a GDX file:
set i,j,k;
parameter data(i,j,k);
$gdxIn data
$load i<data.dim1 j<data.dim2 k<data.dim3 data
- Attention
- Loading an item that was already initialized will cause a compilation error.
For example, the following code snippet will cause a compilation error:
Set j / 1*5 /;
$gdxIn transsol
$load j
$gdxIn
Note that GAMS offers variants of $load that do not cause a compilation error in such a case: $loadM and $loadR.
Syntax:
$loadDC [sym1[,] sym2=gdxSym2[,] sym3<[=]gdxSym3[.dimI][,] ...]
This option is an alternative form of $load. It performs domain checking when items are loaded. Any domain violations will be reported and flagged as compilation errors. All other features are the same as discussed under $load.
Example: Consider the following example where transsol is the GDX file of the transportation model [TRNSPORT].

Set i, j;
Parameter b(i), a(j);
$gdxIn transsol
$load i b
$loadDC j a
$gdxIn
Note that in contrast to the example above, the parameter a is indexed over the set i and the parameter b is indexed over the set j in the file transsol. While $load i b does not generate an error and b is just empty, the option $loadDC j a triggers a domain violation error, because in transsol a is indexed over i, and produces a list of errors in the listing file:

--- LOAD a = 3:a
**** Unique domain errors for symbol a
Dim Elements
1 seattle, san-diego
5 $loadDC j a
**** $649
Syntax:
$loadDCM [sym1[,] sym2=gdxSym2[,] sym3<[=]gdxSym3[.dimI][,] ...]
This option combines the functionality of merging as in $loadM and domain checking as in $loadDC.
Example:
Consider the following example where transsol is the GDX file of the transportation model [TRNSPORT].

Set i, uni 'all labels';
Parameter abFail(i), ab(uni) 'capacity and demand';
$gdxIn transsol
$load i abFail=a
$loadDCM abFail=b
$loadDCM uni=i uni=j ab=a ab=b
$gdxIn
display uni, ab;
Here we try to merge parameters a and b together into one parameter. The first attempt (to merge them into parameter abFail) fails because of line 5 and results in a domain violation report as described with dollar control option $loadDC. In the second attempt we first merge the sets i and j into set uni and then merge the parameters a and b into ab. If one comments out line 5, the resulting display looks as follows:

---- 8 SET uni all labels
seattle , san-diego, new-york , chicago , topeka
---- 8 PARAMETER ab capacity and demand
seattle 350.000, san-diego 600.000, new-york 325.000
chicago 300.000, topeka 275.000
Syntax:
$loadDCR [sym1[,] sym2=gdxSym2[,] sym3<[=]gdxSym3[.dimI][,] ...]
This option combines the functionality of replacing data as in $loadR and domain checking as in $loadDC.
Example:
Consider the following example where transsol is the GDX file of the transportation model [TRNSPORT].

Set uni 'all labels';
Parameter ab(uni) 'capacity and demand';
$gdxIn transsol
$loadM uni=i uni=j ab=a
$loadDCR ab=b
$gdxIn
display uni, ab;
Here we read twice into the parameter ab. First, GDX symbol a is read into ab. Then GDX symbol b is read with replace, and hence the parameter ab contains the elements of b only.
Syntax:
$loadM [sym1[,] sym2=gdxSym2[,] sym3<[=]gdxSym3[.dimI][,] ...]
This option is an alternative form of $load. Instead of replacing an item or causing a symbol redefined error if the item was already initialized, it merges the contents. Records that would result in domain violations will be ignored.
Example:
Consider the following example where transsol is the GDX file of the transportation model [TRNSPORT].

Set i, uni 'all labels';
Parameter ab(uni) 'capacity and demand';
$gdxIn transsol
$loadM uni=i uni=j ab=a ab=b
$gdxIn
display uni, ab;
Here we merge parameters a and b together into one parameter ab. We first merge the sets i and j into set uni and then merge the parameters a and b into ab. The resulting display looks as follows:

---- 6 SET uni all labels
seattle , san-diego, new-york , chicago , topeka
---- 6 PARAMETER ab capacity and demand
seattle 350.000, san-diego 600.000, new-york 325.000
chicago 300.000, topeka 275.000
Syntax:
$loadR [sym1[,] sym2=gdxSym2[,] sym3<[=]gdxSym3[.dimI][,] ...]
This option is a variant of the option $load. With $loadR we can have multiple loads into the same symbols, and the data stored in GAMS will be replaced with the data from the GDX container.
Example:
Consider the following example, where transsol is the GDX file of the transportation model [TRNSPORT]:

Sets i / 1*3 /
     j / 1*2 /;
$gdxIn transsol
$loadR i j
$gdxIn
display i, j;
The resulting listing file will contain the following lines:

---- 6 SET i canning plants
Seattle , San-Diego
---- 6 SET j markets
New-York, Chicago , Topeka
Syntax:
$log text
This option will send a message text to the log file. Recall that by default, the log file is the console. The default log file may be reset with the command line parameters logOption and logFile.
- Attention
- Leading blanks are ignored when the text is written out to the log file as a result of using the $log option.
- All special % symbols will be substituted before the text passed through the $log option is sent to the log file.
Example:
$log
$log The following message will be written to the log file
$log with leading blanks ignored. All special % symbols will
$log be substituted before this text is sent to the log file.
$log This was line %system.incLine% of file %system.incName%
$log
The log file that results from running the lines above will contain the following lines:

The following message will be written to the log file
with leading blanks ignored. All special % symbols will
be substituted before this text is sent to the log file.
This was line 5 of file C:\tmp\logTest.gms
Note that %system.incLine% is replaced by 5, which is the line number where the string replacement was requested. Note further that %system.incName% is substituted with the name of the file completed with the absolute path. Observe that the leading blanks on the second line of the example are ignored.
Syntax:
$macro name(arg1,arg2,arg3, ...) macro_body
This option defines a macro in GAMS. Here name is the name of the macro, arg1,arg2,arg3,... are the arguments and macro_body defines what the macro should do. The macro names follow the rules for identifiers. The macro name cannot be used for other symbols. For further details and examples, see section Macros in GAMS below.
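A minimal sketch of a macro definition and use (the macro name ratio is hypothetical):

```gams
$macro ratio(a,b) ((a)/(b))
Scalar x / 6 /, y / 3 /, z;
* the call expands textually to ((x)/(y)) before compilation
z = ratio(x,y);
display z;
```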
Syntax:
$maxCol n
This option restricts the valid range of input columns at the right margin. Note that all input after column n is treated as a comment and is therefore ignored.
Example:
$maxCol 30
Set i / vienna, rome /;        set definition
Scalar a / 2.3 /;              scalar definition
Observe that the text strings set definition and scalar definition are treated as comments and are ignored since they begin on or after column 31.

Any changes in the margins via $maxCol or $minCol will be reported in the listing file with a message that gives the valid range of input columns. For example, the dollar control option $minCol 20 maxCol 110 will trigger the following message:

NEW MARGINS: 20-110
See also $on/offMargin and section Comments.
Syntax:
$maxGoTo n
This option sets the maximum number of jumps to the same label and is used in the context of the options $goTo and $label. Once the maximum number is reached a compilation error is triggered. Such a limit has been implemented to avoid infinite loops at compile time.
Example:
Scalar a / 1 /;
$maxGoTo 5
$label label1
a = a+10; display a ;
$goTo label1
Note that a compilation error is triggered if $goTo label1 is called for the fifth time.
Syntax:
$minCol n
This option restricts the valid range of input columns at the left margin. Note that all input before column n is treated as a comment and is therefore ignored.
Example:
$minCol 30
Set definition               Set i / vienna, rome /;
Scalar definition            Scalar a / 2.3 /;
Observe that the text strings Set definition and Scalar definition are treated as comments and are ignored since they are placed before column 30.

Any changes in the margins via the option $maxCol or $minCol will be reported in the listing file with a message that gives the valid range of input columns. For example, the dollar control option $minCol 20 maxCol 110 will trigger the message:

NEW MARGINS: 20-110
- Attention
- GAMS requires that the left margin set by the option $minCol is smaller than the right margin set by the option $maxCol.
See also $on/offMargin and section Comments.
$[on][off]Delim ($offDelim)
Syntax:
$onDelim $offDelim
This option controls whether data in table statements may be entered in comma delimited format.
Example:
Sets plant  'plant locations' / NEWYORK, CHICAGO, LOSANGELES /
     market 'demands'         / MIAMI, HOUSTON, PORTLAND /;
Table dist(plant,market)
$onDelim
,MIAMI,HOUSTON,PORTLAND
NEWYORK,1300,1800,1100
CHICAGO,2200,1300,700
LOSANGELES,3700,2400,2500
$offDelim
;
Display dist;
The resulting listing file will contain the following output:

---- 12 PARAMETER dist

               MIAMI   HOUSTON  PORTLAND
NEWYORK     1300.000  1800.000  1100.000
CHICAGO     2200.000  1300.000   700.000
LOSANGELES  3700.000  2400.000  2500.000
$[on][off]Digit ($onDigit)
Syntax:
$onDigit $offDigit
This option controls the precision check on numbers. Computers work with different internal precision. To have the same behavior on all supported platforms, GAMS does not accept numbers with more than 16 significant digits on input. Sometimes one needs to work with input values with more digits, e.g., if the data is generated from some source which is out of the user's control. Instead of changing numbers with too much precision, the option $offDigit instructs GAMS to use as much precision as possible and ignore the rest of the number.
Example:
Parameter y(*) / toolarge 12345678901234.5678
$offDigit
  ignored 12345678901234.5678 /;
The resulting listing file will contain the following lines:

1 Parameter y(*) / toolarge 12345678901234.5678
**** $103
3 ignored 12345678901234.5678 /
Error Messages
103 Too many digits in number ($offdigit can be used to ignore trailing digits)
Note that the error occurs in the 17th significant digit of y("toolarge"). However, after the line containing the option $offDigit, y("ignored") is accepted without any errors even though there are more than 16 significant digits.
$[on][off]Dollar ($offDollar)
Syntax:
$onDollar $offDollar
This option controls the echoing of dollar control option lines in the listing file.
Example:
$hidden This line will not be displayed
$onDollar
$hidden This line will be displayed
$offDollar
$hidden This line will not be displayed
The compilation output of the resulting listing file will contain the following lines:

2 $onDollar
3 $hidden This line will be displayed
Note that all lines between the option $onDollar and the option $offDollar are echoed in the listing file. Note further that the effect of this option is immediate: the line $onDollar is echoed in the listing file, while the line $offDollar is not.
$[on][off]DotL ($offDotL)
Syntax:
$onDotL $offDotL
This option activates or deactivates the automatic addition of the attribute .L to variables on the right-hand side of assignments. It is most useful in the context of macros. For further information, see section Macros in GAMS below.
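A minimal sketch, assuming $onDotL also applies to ordinary assignments and not only to macro bodies:

```gams
Variable v;
Scalar s;
v.l = 2;
$onDotL
* with $onDotL active, v on the right-hand side is read as v.l
s = v + 1;
$offDotL
display s;
```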
$[on][off]DotScale ($offDotScale)
Syntax:
$onDotScale $offDotScale
This option activates or deactivates the automatic addition of the attribute .Scale to variables and equations on the right-hand side of assignments. As with $on/offDotL, it is most useful in the context of macros. For further information, see section Macros in GAMS below.
Syntax:
$onEcho[S|V] >[>] external_file
text {text}
$offEcho
This option is used to send one or more lines of text to a file external_file. The text and the file name may be quoted or unquoted. The external file is not closed until the end of the compilation or until the option $call or any variant of the option $include is encountered. Note that the redirection symbols > and >> have the usual meaning: > creates a new file and writes to it or, if a file with the respective name already exists, overwrites it; >> appends to a file. Note further that parameter substitutions are permitted with $onEcho. The option $onEcho has two more variants: $onEchoS and $onEchoV. $onEchoS allows parameter substitutions like $onEcho, so it is just a synonym that makes it more obvious that parameter substitution is allowed with the appended S. The option $onEchoV does not allow parameter substitutions but writes the text verbatim.
Example:
$set it TEST
$onEchoS > externalfile1.txt
send %it% to external file
line 2 to send
$offEcho
$onEchoV > externalfile2.txt
send %it% to external file
line 2 to send
$offEcho
The file externalfile1.txt will contain the following lines:

send TEST to external file
line 2 to send

The file externalfile2.txt will contain these lines:

send %it% to external file
line 2 to send
Observe that in the first case %it% is substituted with TEST, but in the second case there is no substitution.
Note that by default the external file will be placed in the current working directory if there is no path specified.
See also options $echo, and $echoN.
$[on][off]Embedded ($offEmbedded)
Syntax:
$onEmbedded $offEmbedded
This option enables or disables the use of embedded values in parameter and set data statements. If enabled, the explanatory text for set elements is concatenated with blank separators. For parameters, the embedded values get multiplied.
Example:
Set k / a,b /
    l / a /;
Set i(k,l) / a.a 'aaaa cccc dddd'
             b.a 'bbbb cccc dddd' /;
Parameter m(k,l) / a.a 12
                   b.a 24 /;
$onEmbedded
Set j(k,l) / (a aaaa, b bbbb).(a cccc) dddd /;
Parameter n(k,l) / (a 1, b 2) .(a 3) 4 /;
Note that the explanatory text of the set elements in i and j as well as the values of the parameters m and n are identical.
$[on][off]EmbeddedCode[S][V]
Syntax:
$onEmbeddedCode[S|V] Python: [arguments]
Python code
{Python code}
$offEmbeddedCode {symbol[<[=]embSymbol[.dimX]]}
This option is used to execute one or more lines of Python code while GAMS stays alive. The Python code has access to GAMS symbols and can read and change them.

Note that parameter substitutions are permitted with $onEmbeddedCode. The option $onEmbeddedCode has two more variants: $onEmbeddedCodeS and $onEmbeddedCodeV. $onEmbeddedCodeS allows parameter substitutions like $onEmbeddedCode, so it is just a synonym that makes it more obvious that parameter substitution is allowed with the appended S. The option $onEmbeddedCodeV does not allow parameter substitutions but passes the code verbatim to the Python interpreter. The optional arguments given to $onEmbeddedCode[S|V] can be accessed in the Python code.
$offEmbeddedCode can be followed by a GAMS symbol or a list of GAMS symbols. If GAMS symbols are specified, they are updated in the GAMS database after the Python code has been executed. The syntax symbol<[=]embSymbol[.dimX] allows loading a one-dimensional set from a symbol which was set in the embedded code and that has an even higher dimensionality (here we call <[=] the projection operator). GAMS tries to find the set symbol as a domain in the symbol embSymbol and uses the labels from this index position (with < the first matching domain set from the right and with <= from the left). If no domain information is stored or the domain information does not match, the suffix .dimX allows picking a fixed index position (X needs to be replaced by the desired index position).
Example:

mccCountry = []
mccCity = []
country = set()
city = set()
for cc in gams.get("cc"):
    r = str.split(cc, " - ", 1)
    mccCountry.append((cc,r[0]))
    mccCity.append((cc,r[1]))
    country.add(r[0])
    city.add(r[1])
gams.set("country",list(country))
gams.set("city",list(city))
gams.set("mccCountry",mccCountry)
gams.set("mccCity",mccCity)
$offEmbeddedCode country city mccCountry mccCity
Option mccCountry:0:0:1, mccCity:0:0:1;
Display country, city, mccCountry ,mccCity;
This will be in the listing file:

---- 28 SET country
Spain , USA , France , Germany
---- 28 SET city
Washington DC, Toulouse , Berlin , Munich
Houston , Madrid , New York , Seville
Paris , Bilbao , Lille , Bonn
Cordoba
---- 28
---- 28
Using the projection operator the same task could be done like this:

$offEmbeddedCode country<mccCountry city<mccCity mccCountry mccCity
Option mccCountry:0:0:1, mccCity:0:0:1;
Display country, city, mccCountry ,mccCity;
See also chapter Embedded Code Facility for more details.
$[on][off]Empty ($offEmpty)
Syntax:
$onEmpty $offEmpty
Setting $onEmpty allows empty data statements in list or table format. Note that by default, empty data statements will cause a compilation error.
Example:
Set i / 1,2,3 / ;
Set j(i) / / ;
Parameter x(i) "empty parameter" / / ;
Table y(i,i) "headers only"
   1 2 3
;
$onEmpty
Set k(i) / / ;
Parameter xx(i) "empty parameter" / / ;
Table yy(i,i) "headers only"
   1 2 3
;
The resulting listing file will contain the following lines:

1 Set i / 1,2,3 / ;
2 Set j(i) / / ;
**** $460
3 Parameter x(i) "empty parameter" / / ;
**** $460
4 Table y(i,i) "headers only"
5    1 2 3
6 ;
**** $462
8 Set k(i) / / ;
9 Parameter xx(i) "empty parameter" / / ;
10 Table yy(i,i) "headers only"
11    1 2 3
12 ;
Error Messages
460 Empty data statements not allowed. You may want to use $ON/OFFEMPTY
462 The row section in the previous table is missing
Empty data statements are most likely to occur when data is entered into the GAMS model by an external program. This problem may be overcome with the option $onEmpty.
- Note
- The empty data statement may only be used with symbols which have a known dimension. If the dimension is also derived from the data, the option $phantom should be used to generate 'phantom' set elements.
The option $onEmpty in conjunction with the option $onMulti and the save and restart feature may be used to set up a model and add data later.
Syntax:
$onEnd $offEnd
This option offers an alternative syntax for flow control statements. The option $onEnd causes the following words to be regarded as keywords: do, endLoop, endIf, endFor and endWhile. They are used to close the language constructs loop, if, for and while respectively.
Example:
- Note
- The standard syntax is given as an end-of-line comment.
Set i / 1*3 /;
Scalar cond / 0 /;
Parameter a(i) / 1 1.23, 2 2.65, 3 1.34 /;
$eolCom //
$onEnd
loop i do               // loop (i,
   display a;           //    display a;
endLoop;                // );
if (cond) then          // if (cond,
   display a;           //    display a;
else                    // else
   a(i) = a(i)/2;       //    a(i) = a(i)/2;
   display a;           //    display a;
endIf;                  // );
for cond = 1 to 5 do    // for (cond = 1 to 5,
   a(i) = 2 * a(i);     //    a(i) = 2 * a(i);
endFor;                 // );
while cond > 3 do       // while (cond > 3,
   a(i) = a(i) / 2;     //    a(i) = a(i) / 2;
   cond = cond-1;       //    cond = cond-1;
endWhile;               // );
Observe that the alternative syntax is more in line with the syntax used in some of the popular programming languages.
- Attention
- Setting the option $onEnd will make the alternative syntax valid, and at the same time it will make the standard syntax invalid. Therefore the two forms of the syntax will never be valid simultaneously.
$[on][off]EolCom ($offEolCom)
Syntax:
$onEolCom $offEolCom
This option acts as a switch to control the use of end-of-line comments. Note that by default, the end-of-line comment symbol is set to !! but the processing is disabled.
Example:
$onEolCom
Set i /1*2/ ;        !! set declaration
Parameter a(i) ;     !! parameter declaration
Observe that after the option $onEolCom has been specified, comments may be entered on the same line as GAMS code.
See also section Comments.
Syntax:
$onEps $offEps
This option is used to treat zero as EPS in a parameter or table data statement. This can be useful if the value of zero is overloaded with an existence interpretation.
Example:
Set i / one, two, three, four /;
Parameter a(i) /
$oneps
one 0
$offeps
two 0
three EPS /;
Display a ;
The outcome generated by the display statement follows:

---- 8 PARAMETER a
one EPS, three EPS
Note that only those entries specifically entered as 0 are treated like EPS.
$[on][off]Expand ($offExpand)
Syntax:
$onExpand $offExpand
This option changes the processing of macros that appear in the arguments of a macro call. The default operation is not to expand macros in the arguments. The switch $onExpand enables the recognition and expansion of macros in the macro argument list and $offExpand will restore the default behavior.
Example:
variable x(*,*);
$macro f(i) sum(q, x(i,q))
$macro equ(x) equation equ_&x; equ_&x.. &x =e= 0;
equ(f(i))
The macro expansion of the code above will result in an equation definition that reads as follows:
equation equ_f(i);
equ_f(i).. f(i) =e= 0;
If we compile the code under $onExpand, the argument f(i) is expanded before the macro equ() gets expanded, resulting in the following (incorrect) code:

equation equ_sum(q, x(i,q));
equ_sum(q, x(i,q)).. sum(q, x(i,q)) =e= 0;
For further information, see section Macros in GAMS below.
$[on][off]Global ($offGlobal)
Syntax:
$onGlobal $offGlobal
When an include file is inserted, it inherits the dollar control options from the higher level file. However, the dollar control option settings specified in the include file do not affect the higher level file. This convention is common among most scripting languages or command processing shells. In some cases, it may be desirable to break this convention. This option allows an include file to change the options of the parent file as well.
Example:
$include 'inc.inc' $hidden after first call to include file $onGlobal $include 'inc.inc' $hidden after second call to include file
The file inc.inc contains the following lines:

$onDollar
$hidden text inside include file
The echo print of the resulting listing file follows:

INCLUDE D:\GAMS\INC.INC
2 $onDollar
3 $hidden text inside include file
INCLUDE D:\GAMS\INC.INC
7 $onDollar
8 $hidden text inside include file
9 $hidden after second call to include file
Note that the dollar control option $onDollar inside the include file does not affect the parent file until $onGlobal is set. The text following the option $hidden is then echoed to the listing file.
$[on][off]Include ($onInclude)
Syntax:
$onInclude $offInclude
This option controls the listing of the expanded include file name in the listing file.
Example:
$include 'inc.inc'
$offInclude
$include 'inc.inc'
We assume that the file inc.inc contains the following lines:

$onDollar
$hidden Text inside include file

The resulting listing file will contain the following lines:

INCLUDE C:\tmp\inc.inc
2 $onDollar
3 $hidden Text inside include file
6 $onDollar
7 $hidden Text inside include file
Note that the include file name is echoed the first time the include file is used. However, the include file name is not echoed after $offInclude has been set.
$[on][off]Inline ($offInline)
Syntax:
$onInline $offInline
This option acts as a switch to control the use of in-line comments. Note that by default, the in-line comment symbols are set to the two character pairs /* and */ but the processing is disabled. In-line comments may span several lines until the end-of-comment characters are encountered.
Example:
$onInline
Set i
/* The default comment symbols are now active. These comments
   can continue to additional lines till the closing comments are found. */
/ i1*i3 / ;
- Note
- The option $inlineCom automatically sets $onInline.
- Nested in-line comments are illegal unless the option $onNestCom is set.
See also section Comments.
$[on][off]Listing ($onListing)
Syntax:
$onListing $offListing
This option controls the echoing of input lines to the compilation output of the listing file. Note that suppressed input lines do not generate entries in the symbol and reference sections that appear at the end of the compilation output. Lines with errors will always be listed.
Example:
Set i /0234*0237/
    j /a,b,c/ ;
Table x(i,j) "very long table"
     a b c
0234 1 2 3
$offListing
0235 4 5 6
0236 5 6 7
$onListing
0237 1 1 1 ;
The resulting listing file will contain the following lines:
1 Set i /0234*0237/
2     j /a,b,c/ ;
3 Table x(i,j) very long table
4      a b c
5 0234 1 2 3
10 0237 1 1 1
Note that the lines in the source file between the options $offListing and $onListing are not echoed to the listing file.
- Note
- For some projects the listing file can become huge and can take significant time to be written. This time can be saved by setting $offListing at the beginning of the input file and $onListing just before the parts one is interested in, or not at all, if one does not look at the listing file anyway.
$[on][off]Local ($onLocal)
Syntax:
$onLocal $offLocal
The suffix .local attached to the name of a controlling set will use an implicit alias within the scope of the indexed operation or on the left-hand side of an assignment statement. This feature is particularly useful in the context of nested macros.
Example:
Set i /1*3/;
alias(i,j);
Parameter xxx(i,j) / 1.1 1, 2.2 2, 3.3 3, 1.3 13, 3.1 31 /;
display xxx;
Parameter p(i);
p(i.local) = sum(j, xxx(i,j));
display p;
Note that in the assignment statement the set i on the right-hand side is controlled by i.local on the left-hand side. Thus we have the following values for the two parameters:
---- 3 PARAMETER xxx
            1           2           3
1       1.000                  13.000
2                   2.000
3      31.000                   3.000
---- 7 PARAMETER p
1 14.000, 2 2.000, 3 34.000
In the example above, the suffix .local appeared once on the left-hand side. The option $onLocal allows the suffix .local to appear more than once attached to the same symbol. Consider the following example that extends the example above:
Parameter g(i,i);
g(i.local-1,i.local) = xxx(i,i);
display g;
Note that in the assignment statement of g the suffix .local attached to the set i appears twice on the left-hand side. The question arises whether the reference to the set i on the right-hand side refers to the first or the second instance of .local on the left-hand side. The assignment statement may alternatively be written in the following way using an explicit alias statement:
alias (i,i1,i2); g(i1-1,i2) = xxx(i2,i2);
Thus it becomes clear that the symbol on the right-hand side refers to the controlling index that enters last (here the second one). The output generated by the display statement follows:
---- 10 PARAMETER g
        1      2      3
1   1.000  2.000  3.000
2   1.000  2.000  3.000
Observe that the multiple use of the suffix .local on the same symbol is considered an error with the option $offLocal.
Note that it is also allowed to combine the original index with an index suffixed with .local. Consider the following alternative formulation:
g(i.local-1,i) = xxx(i,i);
Note that in this case the index suffixed with .local takes precedence and the reference of i on the right-hand side refers to the index i.local even though i is entered last. Observe that this statement even works with $offLocal as the suffix .local appears only once.
See also section Macros in GAMS below.
$[on][off]Log ($onLog)
Syntax:
$onLog $offLog
This option acts as a switch that controls logging information about the line number and memory consumption during compilation. This is scoped like the option $on/offListing, applying only to include files and any subsequently included files but reverting to the $on/offLog setting of the parent file (if it was not changed there as well).
Example:
Set i /i1*i20000000/;
$include inc.inc
Set l /l1*l20000000/;
The file inc.inc looks like this:
Set j /j1*j20000000/;
$offLog
Set k /k1*k20000000/;
The generated log will contain the following lines:
--- test.gms(1) 1602 Mb 5 secs
--- test.gms(2) 1602 Mb
--- .inc.inc(1) 3122 Mb 6 secs
--- test.gms(3) 6161 Mb 14 secs
Note that the first line of both the parent and the include file got logged, but not the third line of the include file, after $offLog was set. The last line of the parent file got logged again.
$[on][off]Macro ($onMacro)
Syntax:
$onMacro $offMacro
Enables or disables the expansion of macros defined by $macro.
Example:
$macro oneoverit(y) 1/y
$offMacro
y = oneoverit(x1);
display y;
causes an error because the macro oneoverit in line 3 cannot be expanded.
$[on][off]Margin ($offMargin)
Syntax:
$onMargin $offMargin
This option controls margin marking, that is, whether the margins set by the options $minCol and $maxCol should be marked in the listing file.
Example:
$onmargin mincol 20 maxcol 51
Now we have        Set i "plant" / US, UK /;   This defines I
turned on the      Scalar x / 3.145 /;         A scalar example.
margin marking.    Parameter a, b;             Define some parameters.
$offmargin
The lst file will contain this:
2 Now we have      |Set i "plant" / US, UK /;  |This defines I
3 turned on the    |Scalar x / 3.145 /;        |A scalar example.
4 margin marking.  |Parameter a, b;            |Define some
5                  |                           |parameters.
Note that any statements between columns 1 and 19 and any input beyond column 52 are treated as comments. These margins are marked with | on the left and right.
See also section Comments.
$[on][off]Multi ($offMulti)
Syntax:
$onMulti $offMulti
This option controls multiple data statements or tables. By default, GAMS does not allow data statements to be redefined. If this option is activated the second or subsequent data statements are merged with entries of the previous ones. Note that all multiple data statements are performed before any other statement is executed.
Example:
Consider the following slice of code. The list after the end of line comment describes the complete content of the symbol x after the data statement has been processed:
$eolCom //
Set i / i1*i10 /;
Parameter x(i) / i1*i3 1 /   // /i1 1,i2 1,i3 1/
$onMulti
Parameter x(i) / i7*i9 2 /   // /i1 1,i2 1,i3 1,i7 2,i8 2,i9 2/
Parameter x(i) / i2*i6 3 /   // /i1 1,i2 3,i3 3,i4 3,i5 3,i6 3,i7 2,i8 2,i9 2/
Parameter x(i) / i3*i5 0 /   // /i1 1,i2 3,i6 3,i7 2,i8 2,i9 2/
$offMulti
display x;
Note that the repeated parameter statements would have resulted in a compilation error without the presence of the option $onMulti. The result of the display statement in the listing file follows:
---- 8 PARAMETER x
1 1.000, 2 3.000, 6 3.000, 7 2.000, 8 2.000, 9 2.000
Note that x("i1") is assigned the value of 1 with the first data statement and is not affected by any of the subsequent data statements. x("i3") on the other hand is reset to 3 by the third data statement and wiped out with 0 in the fourth data statement.
- Attention
- The two-pass processing of a GAMS file may lead to seemingly unexpected results. Dollar control options and data initialization are both done in the first pass and assignments in the second, irrespective of their relative locations. This is an issue particularly with the option $onMulti since it allows data initializations to be performed more than once. See section GAMS Compile Time and Execution Time Phase for details.
Consider the following example:
Scalar a /12/;
a=a+1;
$onMulti
Scalar a /20/;
display a;
Note that the two scalar data initialization statements and the option $onMulti are processed before the assignment statement a=a+1. As a result, the final value of a will be 21. The output of the display statement follows:
---- 5 PARAMETER a = 21.000
Observe that the option $onEmpty in conjunction with the option $onMulti and the save and restart feature may be used to set up a model and add data later. See example in section Advanced Separation of Model and Data for details.
$[on][off]NestCom ($offNestCom)
Syntax:
$onNestCom $offNestCom
This option controls nested in-line comments. It makes sure that the open-comment and close-comment characters match.
Example:
$inlineCom { }
$onNestCom
{ nesting is now possible in comments { braces have to match } }
See also $inlineCom, $onInline and section Comments.
$[on][off]Order ($onOrder)
Syntax:
$onOrder $offOrder
Lag and lead operations and the ord function require the referenced set to be ordered and constant. In some special cases users might want to use those operations on dynamic and/or unordered sets. The option $on/offOrder has been added to locally relax the default requirements. The use of this option comes with a price: the system will not be able to diagnose odd and incorrect formulations and data sets.
Example:
Set t1 / 1987, 1988, 1989, 1990, 1991 /
    t2 / 1983, 1984, 1985, 1986, 1987 /;
Parameter p(t2);
$offOrder
p(t2) = ord(t2);
display t2,p;
Without the $offOrder the compilation of the line p(t2) = ord(t2); would have triggered a compilation error. The ordinal numbers assigned here are probably not what one expects. The element 1987 gets ordinal number 1 although it seems to be last in the set. The ordinal numbers are assigned in the order the set is stored internally in GAMS. This order is also used when displaying the set t2:
---- 6 SET t2
1987, 1983, 1984, 1985, 1986
---- 6 PARAMETER p
1987 1.000, 1983 2.000, 1984 3.000, 1985 4.000, 1986 5.000
$[on][off]Put[S|V]
Syntax:
File myputfile; put myputfile; $onPut[S|V] text {text} $offPut
The pair $onPut[S|V] - $offPut causes a block of text to be placed in a put file at run-time. This is one of the few dollar control options that operate at run time. The $ in the first column usually indicates action at compile time.
Note that parameter substitutions are not permitted with $onPut. The option $onPut has two more variants: $onPutS and $onPutV. $onPutS allows parameter substitutions, while $onPutV, like $onPut, does not; it is just a synonym whose appended V makes it more obvious that the text is written verbatim.
Example:
$set it TEST
File myputfile;
put myputfile;
$onPutS
Line 1 of text "%it%"
Line 2 of text %it%
$offPut
This code generates the put file myputfile.put with the following content:
Line 1 of text "TEST"
Line 2 of text TEST
Note that the compile-time variable %it% was replaced by TEST. However, if the option $onPutV is used instead, then %it% will not be substituted:
$set it TEST
File myputfile;
put myputfile;
$onPutV
Line 1 of text "%it%"
Line 2 of text %it%
$offPut
The resulting file myputfile.put will contain the following lines:
Line 1 of text "%it%"
Line 2 of text %it%
$[on][off]Recurse ($offRecurse)
Syntax:
$onRecurse $offRecurse
This option controls whether it is permitted for a file to include itself.
Example:
The following GAMS program result in a recursive inclusion of the program itself:
$onRecurse $include "%gams.input%"
Note that the maximum include nesting level is 40 and if it is exceeded an error is triggered.
In the following example that prints a string and then the reversed string, the nesting level is less than 40 and one gets some kind of recursion at compile time:
$onEchoV > reverse.gms
$ifthene %1=%3+1
put ' '
$ exit
$endif
loop(map(chars,code)$(code.val=ord("%2",%1)), put chars.tl:0);
$eval posPlus1 %1+1
$batInclude reverse %posPlus1% %2 %3
loop(map(chars,code)$(code.val=ord("%2",%1)), put chars.tl:0);
$offEcho
set chars / A*Z /, code / 65*90 /, map(chars,code) / #chars:#code /;
file fx /''/; put fx;
$onRecurse
$batInclude reverse 1 RACECAR 7
put /;
$batInclude reverse 1 LAGER 5
The log will print the following lines:
--- Starting execution: elapsed 0:00:00.067
RACECAR RACECAR
LAGER REGAL
*** Status: Normal completion
$[on][off]StrictSingleton ($onStrictSingleton)
Syntax:
$onStrictSingleton $offStrictSingleton
If the option $onStrictSingleton is active, a compilation error is triggered if a data statement for a singleton set contains more than one element. After activating the option $offStrictSingleton GAMS will take the first element of a singleton set that was declared with multiple elements as the valid element; the other elements are disregarded and there is no error. The option to control this behavior at runtime is strictSingleton.
Example:
The first element is not always the one that appears in the data statement first as the following example shows:
set i /1,2/
$offStrictSingleton
singleton set ii(i) /2,1/;
display ii;
The set ii contains the element 1 because it is the first in the GAMS label order, as the display statement shows:
---- 4 SET ii
1
$[on][off]SymList ($offSymList)
Syntax:
$onSymList $offSymList
This option controls whether the symbol listing map appears in the compilation output of the listing file. The symbol listing map contains the complete listing of all symbols that have been defined and their explanatory text. The entries are in alphabetical order and grouped by symbol type.
Example:
The symbol listing map generated by running [TRNSPORT] with $onSymList is as follows:
Symbol Listing
SETS
 i canning plants
 j markets
PARAMETERS
 a capacity of plant i in cases
 b demand at market j in cases
 c transport cost in thousands of dollars per case
 d distance in thousands of miles
 f freight in dollars per case per thousand miles
VARIABLES
 x shipment quantities in cases
 z total transportation costs in thousands of dollars
EQUATIONS
 cost define objective function
 demand satisfy demand at market j
 supply observe supply limit at plant i
MODELS
 transport
This serves as a simple description of the symbols used in a model and may be used in reports and other documentation. For further information, see section The Symbol Listing Map.
$[on][off]SymXRef ($offSymXRef)
Syntax:
$onSymXRef $offSymXRef
This option controls the following:
- Collection of cross references for symbols like sets, parameters, variables, acronyms, equations, models and put files.
- Symbol cross reference report of all collected symbols in the compilation output of the listing file. For details, see section The Symbol Reference Map.
- Listing of all referenced symbols and their explanatory text by symbol type in listing file. This listing may also be activated with the option $onSymList.
Example:
$onSymXRef
Set i / 1*6 /, k;
$offSymXRef
Set j(i) "will not show" / 1*3 /;
$onSymXRef
k('1') = yes;
The resulting listing file will contain the following symbol reference map and symbol listing map:
SYMBOL  TYPE  REFERENCES
i       SET   declared 2  defined 2
k       SET   declared 2  assigned 6
SETS
 i
 k
Note that the set j does not appear in these listings because the listing was deactivated with the option $offSymXRef in line 3 of the code above.
$[on][off]Text
Syntax:
$onText $offText
The pair $onText - $offText encloses comment lines. Line numbers in the compiler listing are suppressed to mark skipped lines.
Example:
* Standard comment line
$onText
Everything here is a comment
until we encounter the closing $offText
like the one below
$offText
* Another standard comment line
The echo print of the resulting listing file will contain the following lines:
1 * Standard comment line
Everything here is a comment
until we encounter the closing $offText
like the one below
7 * Another standard comment line
- Attention
- GAMS requires that every $onText has a matching $offText and vice versa.
See also section Comments.
$[on][off]UElList ($offUElList)
Syntax:
$onUElList $offUElList
This option controls the complete listing of all set elements that have been entered in the compilation output of the listing file. For details see section The Unique Element Listing Map.
Example:
The unique element listing in the listing file generated by running the model [TRNSPORT] with $onUElList follows:
Unique Element Listing
Unique Elements in Entry Order
1 seattle san-diego new-york chicago topeka
Unique Elements in Sorted Order
1 chicago new-york san-diego seattle topeka
Note that the sorted order is not the same as the entry order. For more information, see section Ordered and Unordered Sets.
$[on][off]UElXRef ($offUElXRef)
Syntax:
$onUElXRef $offUElXRef
This option controls the collection and listing of cross references of set elements in the compilation output. For more information, see section The Unique Element Listing Map.
Example:
Set i "set declaration" / one, two, three /, k(i);
$onUElXRef
k('one') = yes;
$offUElXRef
k('two') = yes;
$onUElXRef
k('three') = yes;
The resulting listing file will contain the following unique element reference report:
Unique Element Listing
ELEMENT  REFERENCES
one      index 3
three    index 7
Note that the element two does not appear in this listing because the listing was deactivated with the option $offUElXRef in line 4 of the code above.
$[on][off]UNDF ($offUNDF)
Syntax:
$onUNDF $offUNDF
This option controls the use of the special value UNDF which indicates a result is undefined. For details see section Extended Range Arithmetic. By default, UNDF is not permitted to be used in assignments. This may be changed with the option $onUNDF.
Example:
Scalar x;
$onUNDF
x = UNDF;
Display x;
The output of the display statement follows:---- 4 PARAMETER x = UNDF
Note that an error would have been triggered without the use of $onUNDF. The option $offUNDF will return the system to the default, where UNDF may not be used in assignments.
$[on][off]Uni ($offUni)
Syntax:
$onUni $offUni
This controls whether the compiler checks the referential integrity (see section Domain Checking) of the code. This is an essential part of good GAMS programming and it is highly recommended to declare symbols with proper domains. With the universe as a domain the compiler does not help the user with easy-to-make mistakes, like swapping indexes, a(i,j) versus a(j,i). By default something like this would generate an error, if a was declared as a(i,j). Such an error could be ignored by setting $onUni, which can be useful in a few situations, when accessing a symbol with a set that is not the domain or a subset of the domain. For example, we could read data of a union of sets that already exist. We could use the universe as the domain for that symbol, but perhaps we need to protect the referential integrity of this symbol too.
Example:
Set fruit / apple, pear /
    veggie / carrot, pea /
    produce / #fruit, #veggie /;
Parameter produceCalories(produce) "per 100g" / apple 52, pear 57, carrot 41, pea 81 /
          fc(fruit) "calories per 100g"
          vc(veggie) "calories per 100g";
$onUni
fc(fruit) = produceCalories(fruit);
vc(veggie) = produceCalories(veggie);
$offUni
display fc, vc;
So when assigning fc we only access produceCalories with fruit. We could reverse the order of declaration of fruit, veggie and produce and use a proper subdomain, but sometimes data flow and input don't allow that.
- Attention
- When the GAMS compiler operates under $onUni it treats all symbols as being declared over the universe. So all domain checking is gone. We can set elements in a symbol that normally can't be entered. This can also lead to strange effects:
set i / 1*2 /
    j / a,b /;
parameter pi(i);
$onuni
pi(j) = 1;
$offuni
* We will see elements from j in pi
Display pi;
* The following should only clear the i-elements from pi, but it clears the
* entire symbol, because GAMS knows it's doing this to the entire domain and
* takes a shortcut.
pi(i) = no;
Display pi;
$[on][off]Verbatim ($offVerbatim)
Syntax:
$onVerbatim $offVerbatim
These options are used in conjunction with the GAMS command line parameter DumpOpt to suppress the input preprocessing for input lines that are copied to the dmp file. This feature is mainly used to maintain different versions of related models in a central environment.
Note that the options $on/offVerbatim are only recognized for DumpOpt \(\geq\) 10 and apply only to lines in the file between the two options.
Observe that the use of the options $goto and $on/offVerbatim are incompatible and may produce unexpected results.
Example:
$set f 123
$log %f%
$onVerbatim
$log %f%
$offVerbatim
$log %f%
The corresponding dmp file will contain the following lines:
$log 123
$onVerbatim
$log %f%
$offVerbatim
$log 123
See also command line parameter DumpOpt.
$[on][off]Warning ($offWarning)
Syntax:
$onWarning $offWarning
This option acts as a switch for data domain checking. In some cases it may be useful to accept domain errors in data statements that are imported from other systems and report warnings instead of errors. Data will be accepted and stored, even though it is outside the domain.
- Attention
- This switch affects three types of domain errors usually referred to as error numbers 116, 170 and 171, see example below.
- This may have serious side effects and we recommend exercising great care when using this feature.
Example:
Set i / one, two, three /
$onWarning
    j(i) / four, five /
    k / zero /;
Parameter x(i) "Messed up Data" / one 1.0, five 2.0 /;
x('six') = 6; x(j) = 10; x('two') = x('seven');
j(k) = yes;
$offWarning
display i,j,x;
Note that the set j, although specified as a subset of i, contains elements not belonging to its domain. Similarly, the parameter x contains data elements outside the domain of i. The skeleton listing file that results from running this code follows:
1 Set i / one, two, three /;
3     j(i) / four, five /
**** $170 $170
4     k / zero /;
5 Parameter x(i) "Messed up Data" / one 1.0, five 2.0 /;
**** $170
6 x('six') = 6; x(j) = 10; x('two') = x('seven');
**** $170 $116,170
7 j(k) = yes;
**** $171
9 display i,j,x;
Error Messages
116 Label is unknown
170 Domain violation for element
171 Domain violation for set
**** 0 ERROR(S) 7 WARNING(S)
E x e c u t i o n
---- 9 SET i
one, two, three
---- 9 SET j
four, five, zero
---- 9 PARAMETER x Messed up Data
one 1.000, four 10.000, five 10.000, six 6.000
Observe that the domain violations are marked like normal compilation errors but are only treated as warnings and it is permitted to execute the code.
For an introduction to domain checking in GAMS, see section Domain Checking.
$phantom
Syntax:
$phantom id
This option is used to designate id as a phantom set element. Syntactically, a phantom element is handled like any other set element. Semantically, however, it is handled like it does not exist. This is sometimes used to specify a data template that initializes the phantom records to default values.
Example:
$phantom null
Set i / null /
    j / a, b, null /;
display i,j;
The output generated by the display statement is shown below:
---- 4 SET i
( EMPTY )
---- 4 SET j
a, b
Note that null does not appear in the listing file.
- Attention
- Statements that assign values to phantom labels are ignored.
Consider the following extension to the previous example:
Parameter p(j) / a 1, null 23 /;
display p;
The output generated by the display statement is shown below:
---- 6 PARAMETER p
a 1.000
The system attribute system.empty is an implicitly defined phantom element. The following code works even without specifying $phantom:
Set i / system.empty /
    j / a, b, system.empty /;
display i,j;
Another way to specify empty data statements makes use of $on/offEmpty. The following example produces the same data as the data statement with the phantom label. In contrast to the example with $phantom, we need to provide the dimensionality of the symbol i explicitly via the (*):
$onEmpty
Set i(*) / /
    j / a, b /;
display i,j;
$prefixPath
Syntax:
$prefixPath directoryPath
This option augments the search path in the PATH environment variable. The effect is that the text directoryPath is added to the beginning of the search path.
Example:
$log %sysenv.PATH% $prefixPath C:\somewhereelse\anotherpath $log %sysenv.PATH%
The log contains the following two relevant lines:
C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0
C:\somewhereelse\anotherpath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0
The option $setEnv and %sysEnv.VARNAME% allow modifying system environment variables, but the length of the environment variable value is limited in GAMS to 255 characters. The PATH environment variable is often much longer and therefore this special $prefixPath option exists.
This works on all platforms but the path separator depends on the operating system (; for Windows and : for Unix).
$protect
Syntax:
$protect all | ident1 ident2 ...
This option creates a privacy setting: it freezes all values of identifiers with the result that modifications are no longer allowed but the parameters may still be used in model calculation (for example, equation definitions). Here ident1 and ident2 are specific GAMS identifiers previously defined in the program and the keyword all denotes all identifiers.
Note that this option is mainly used in the context of secure work files. The privacy restrictions may be removed with the options $expose or $purge.
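A schematic sketch of the syntax (the identifier name cost is chosen for illustration; in practice $protect is used together with secure work files as noted above):
Parameter cost / 10 /;
$protect cost
* cost may still be referenced in computations and equation definitions,
* but an assignment to cost itself would now be rejected
$expose cost
* modifications to cost are allowed again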
$purge
Syntax:
$purge all | ident1 ident2 ...
This option removes the identifiers and all associated data in a privacy setting. With explicit identifiers the listed identifiers are removed, and with the keyword all, all identifiers are removed.
Note that this option is used in the context of secure work files. A special license file is needed for this feature to work; the removal only takes effect in the restart files.
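A schematic sketch of the syntax (hypothetical identifier names; as noted above, the actual removal only takes effect in the restart files of a secure work file setup):
Parameter a / 1 /, b / 2 /;
$purge b
* b and its associated data are removed from the restart file; a remains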
$remark
Syntax:
$remark text
This option performs a parameter substitution and writes a comment text to the compilation output of the listing file. Note that the line numbers of the comment are suppressed.
Example:
$set it TEST
$remark Write %it% to the listing file
The resulting listing file will contain the following line:
Write TEST to the listing file
$set
Syntax:
$set VARNAME text
This option establishes or redefines the contents of a scoped compile-time variable that is accessible in the code where the command appears and all code included therein. Here VARNAME is any user chosen variable name; text is optional and may contain any text, including spaces. The text cannot be longer than 255 characters, otherwise a compilation error is triggered. Observe that scoped compile-time variables may be destroyed (removed from the program) with the option $drop.
Note that in contrast to the option $eval the option $set does not evaluate the expression at compile time.
Note that GAMS allows scoped, local and global compile-time variables to be defined with the same name and therefore in some cases needs to prioritize. When referencing a compile-time variable via %VARNAME%, a local variable hides scoped and global variables and a scoped variable hides the global variable, as the following example demonstrates.
Example:
$setLocal myvar this is a local variable
$set myvar this is a scoped variable
$setGlobal myvar this is a global variable
$log %myvar%
$droplocal myvar
$log %myvar%
$drop myvar
$log %myvar%
The log will look as follows:
this is a local variable
this is a scoped variable
this is a global variable
If one wants to set a compile-time variable in an include file that is visible to the program after the $include, one needs to use $setglobal:
$onEchoV > setvar.gms
$setArgs varname varvalue
$setglobal %varname% %varvalue%
$offEcho
$batInclude setvar MYVAR one
$log %MYVAR%
The log will show:
one
An inventory of all defined compile-time variables and their type (local, scoped, and global) is available with the option $show.
See also $setGlobal, $setLocal, and section Compile-Time Variables.
$setArgs
Syntax:
$setArgs id1 id2 id3 ...
With this option, the parameters passed to an include file are given names and defined as GAMS compile-time variables. Note that $setArgs may only be used in external files that are included with the options $batInclude, $libInclude, and $sysInclude.
Example:
Scalar a /2/, b /4/, c /5/;
$batInclude test3 a b c
The file test3.gms contains the following lines:
Scalar x;
x = %1 + %2 * %3 ;
display x;
$setArgs aa bb cc
x = %aa% - %bb% * %cc% ;
display x;
x = %1 + %2 * %3 ;
display x;
The option $setArgs allows the batInclude file to use the more descriptive compile-time variables %aa% instead of %1, %bb% instead of %2 and %cc% instead of %3. Note that the use of %1, %2 etc. is still allowed. The program listing looks as follows:
1 Scalar a /2/, b /4/, c /5/;
BATINCLUDE C:\Users\default\Documents\gamside\projdir\test3.gms
3 Scalar x;
4 x = a + b * c ;
5 display x;
7 x = a - b * c ;
8 display x;
9 x = a + b * c ;
10 display x;
and the output generated by the display statements follows:
---- 5 PARAMETER x = 22.000
---- 8 PARAMETER x = -18.000
---- 10 PARAMETER x = 22.000
See also $set, $batInclude.
$setComps
Syntax:
$setComps perioddelimstring id1 id2 id3 ...
This option establishes or redefines compile-time variables so they contain the components of a period delimited string.
Here perioddelimstring is any period delimited string, like the set specification of a multidimensional parameter; id1 is the name of a scoped compile-time variable that will contain the name of the set element in the first position, id2 the one in the second position, and id3 the one in the third position. The items may be recombined back into the original string by using %id1%.%id2%.%id3%.
Example:
$setComps period.delim.string id1 id2 id3
$log id1=%id1%
$log id2=%id2%
$log id3=%id3%
$set name %id1%.%id2%.%id3%
$log name=%name%
The resulting log file will contain the following lines:
id1=period
id2=delim
id3=string
name=period.delim.string
$setDDList
Syntax:
$setDDList id1 id2 id3 ...
This option causes GAMS to look for misspelled or undefined double dash GAMS parameters.
Example: Consider the following example where four double dash GAMS parameters are defined on the command line:
> gams mymodel.gms --one=11 --two=22 --three=33 --four=44
The corresponding GAMS file follows:
$log %one% $log %two% $setDDList three $log %three% $log %four%
Note that the option $setDDList three checks if all double dash parameters have been used so far, except for three. An error is triggered because four has not been used so far; the log file will contain the following:
*** 1 double dash variables not referenced
--four=44
See also section Double Dash Parameters.
$setEnv
Syntax:
$setEnv VARNAME value
This option defines an operating system environment variable. Here VARNAME is a user chosen environment variable name and value may contain text or a number. Note that system environment variables are destroyed (removed from the program) with the option $dropEnv or when GAMS terminates.
Example:
$ondollar
$set env this is very silly
$log %env%
$setenv verysilly %env%
$log %sysenv.verysilly%
$if not "%env%"=="%sysenv.verysilly%" $error "$setEnv did not work"
$dropenv verysilly
$if setenv verysilly $error should not be true
The following output is echoed to the log file:
--- Starting compilation
this is very silly
this is very silly
See also $dropEnv and section Environment Variables in GAMS.
$setGlobal
Syntax:
$setGlobal VARNAME text
This option establishes or redefines the contents of a global compile-time variable that is accessible in the code where the command appears, all code included therein, and all parent files. Here VARNAME is any user chosen variable name; text is optional and may contain any text, including spaces. The text cannot be longer than 255 characters, otherwise a compilation error is triggered. Observe that global compile-time variables may be destroyed (removed from the program) with the option $dropGlobal.
The difference between local, scoped, and global compile-time variables is explained with the option $set.
See also $set, $setLocal, $dropGlobal and section Compile-Time Variables.
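A small sketch (the file name inc.inc and the variable name myglobal are chosen for illustration): because the variable is set with $setGlobal inside the include file, it is still visible in the parent file after the $include:
$onEchoV > inc.inc
$setGlobal myglobal some text
$offEcho
$include inc.inc
$log %myglobal%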
Syntax:
$setLocal VARNAME text
This option establishes or redefines the contents of a local compile-time variable that is accessible only in the code module (source file) where it is defined. Here VARNAME is any user chosen variable name; text is optional and may contain any text, including spaces. The text cannot be longer than 255 characters, otherwise a compilation error is triggered. Observe that local compile-time variables may be destroyed (removed from the program) with the option $dropLocal.
The difference between local, scoped, and global compile-time variables is explained with the option $set.
See also $set, $setGlobal, $dropLocal and section Compile-Time Variables.
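As a small illustration (the include file name inc2.gms and the variable name tmp are hypothetical; the sketch also assumes the setLocal conditional expression of $if), a variable set with $setLocal inside an include file is not visible in the parent file:

```gams
* Write a throw-away include file that sets a local compile-time variable
$onEcho > inc2.gms
$setLocal tmp 42
* %tmp% resolves here, inside the include file where it was set
$log inside the include: tmp=%tmp%
$offEcho
$include inc2.gms
* back in the parent file the local variable is gone
$if not setLocal tmp $log tmp is not visible in the parent file
```

Had inc2.gms used $setGlobal instead, %tmp% would still be defined after the $include.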
Syntax:
$setNames file filepath filename fileextension
This option establishes or redefines three scoped compile-time variables so that they contain the drive/subdirectory, file name and extension of a file given with a full path. Here file is any file name, filepath is the name of a scoped compile-time variable that will contain the name of the subdirectory where the file is located, filename is the name of a scoped compile-time variable that will contain the root name of the file, and fileextension is the name of a scoped compile-time variable that will contain the extension of the file.
Example:
$setNames "%gams.input%" filepath filename fileextension
$set name %filepath%%filename%%fileextension%
$log %name%
The log will show:

C:\Users\default\Documents\gamside\projdir\ Untitled_1 .gms
C:\Users\default\Documents\gamside\projdir\\Untitled_1.gms
Note that file is separated into its three components, placing C:\Users\default\Documents\gamside\projdir\ into filepath, Untitled_1 into filename and .gms into fileextension. The three items may be recombined back into the original file name by using %filepath%%filename%%fileextension% as shown in the example.
If the file is missing a path, name, or extension, the corresponding variable is defined but remains empty, as demonstrated in the following example:
$onEchoV > showfileparts.gms
$setNames "%1" filepath filename fileextension
$log path=%filepath%
$log name=%filename%
$log ext=%fileextension%
$offEcho
$batInclude showfileparts "C:\tmp\"
$batInclude showfileparts "Untitled_1"
$batInclude showfileparts "Untitled_1.gms"
$batInclude showfileparts "Untitled_1.gms.txt"
The log shows:

--- Untitled_1.gms(7) 2 Mb
path=C:\tmp\
name=
ext=
--- .showfileparts.gms(4) 2 Mb
--- Untitled_1.gms(8) 2 Mb
path=
name=Untitled_1
ext=
--- .showfileparts.gms(4) 2 Mb
--- Untitled_1.gms(9) 2 Mb
path=
name=Untitled_1
ext=.gms
--- .showfileparts.gms(4) 2 Mb
--- Untitled_1.gms(10) 2 Mb
path=
name=Untitled_1.gms
ext=.txt
Note that if a file name contains multiple dots, the last one marks the extension assigned to fileextension, as shown in the example with Untitled_1.gms.txt.
Syntax:
$shift
This option is similar to the command.com/cmd.exe shift operator (see en.wikipedia.org/wiki/COMMAND.COM::Batch_file_commands). It shifts the order of all parameters passed once to the left. This effectively drops the lowest numbered parameter in the list.
Example:
Scalar a, b, c ; a = 1 ;
$batInclude inc.inc a b c
display a, b, c ;
The batch include file inc.inc follows:

%2 = %1 + 1 ;
$shift
%2 = %1 + 1 ;
The resulting listing file will contain the following echo print:

   1  Scalar a, b, c ; a = 1 ;
BATINCLUDE C:\Users\default\Documents\gamsdir\projdir\inc.inc
   3  b = a + 1 ;
   5  c = b + 1 ;
   6  display a, b, c ;
Note that in the first statement of the include file, %1 is the first argument of the $batInclude call, in this case interpreted as a, and %2 is the second argument, interpreted as b. This leads to the first assignment being interpreted as b = a + 1. The dollar control option $shift shifts the arguments to the left. As a result, %1 is now interpreted as b and %2 as c, so the second assignment is interpreted as c = b + 1.
Therefore the outcome generated by the display statement in the input file is as follows:

----      6 PARAMETER a  =  1.000
            PARAMETER b  =  2.000
            PARAMETER c  =  3.000
See also $batInclude.
Syntax:
$show
This option causes current values of the compile-time variables plus a list of the macros and active input and include files to be shown in the compilation output.
Example:
$set it 1
$setLocal yy
$setGlobal gg what
$include myinclude
$show
The file myinclude.gms follows:

$set inincs
$setLocal inincsl
$setGlobal inincsg
$show
The resulting listing file will contain the following environment reports in the compilation output:

---- Begin of Environment Report
LEVEL TYPE    LINE FILE NAME
----------------------------------
    1 INCLUDE    5 C:\Users\default\Documents\gamside\projdir\myinclude.gms
    0 INPUT      4 C:\Users\default\Documents\gamside\projdir\\Untitled_1.gms
Level SetVal  Type   Text
-----------------------------------------------------
    1 inincsl LOCAL
    1 inincs  SCOPED
    0 yy      LOCAL
    0 it      SCOPED 1
    0 gg      GLOBAL what
    0 inincsg GLOBAL
---- macro definitions
$macro multx(x) x*x
---- End of Environment Report
and

---- Begin of Environment Report
LEVEL TYPE    LINE FILE NAME
----------------------------------
    0 INPUT      6 C:\Users\default\Documents\gamside\projdir\\Untitled_1.gms
Level SetVal  Type   Text
-----------------------------------------------------
    0 yy      LOCAL
    0 it      SCOPED 1
    0 gg      GLOBAL what
    0 inincsg GLOBAL
---- macro definitions
$macro multx(x) x*x
$macro addx(x) x+x
---- End of Environment Report
Note that only the macros and the item defined with the option $setGlobal in the included file carry over. Observe that the name "Environment Report" is unfortunate, since the reported values are compile-time variables, not environment variables.
See also section Compile-Time Variables.
Syntax:
$single
The lines following this option will be echoed single spaced in the compilation output. Note that this is the default. The option is only useful as a switch to deactivate the option $double.
Example:
Set i / 1*2 / ;
Scalar a / 1 / ;
$double
Set j / 10*15 / ;
Scalar b / 2 / ;
$single
Set k / 5*10 / ;
Scalar c / 3 / ;
The echo print in the resulting listing file will look as follows:

   1  Set i / 1*2 / ;
   2  Scalar a /1/ ;

   4  Set j / 10*15 / ;

   5  Scalar b /2/ ;

   7  Set k / 5*10 / ;
   8  Scalar c /3/ ;
Note that the lines between the options $double and $single are listed double spaced, while the lines after the option $single revert back to being listed single spaced.
Syntax:
$splitOption KEYVALPAIR optname optvalue
Establishes or redefines two scoped compile-time variables so that they contain the name and the value of an option key/value pair specified in various formats. Here KEYVALPAIR is a string formatted as -opt=val or -opt val (instead of - one can also use /), optname is the name of a scoped compile-time variable that will contain the name of the option, and optvalue is the name of a scoped compile-time variable that will contain the value of the option. This is particularly useful in combination with $batInclude files.
Example:
$onEchoV > myinclude.gms
* Default values for named arguments
$setGlobal a1 1
$setGlobal a2 2
$setGlobal a3 3
$setGlobal positionalArgs
$label ProcessNamedArguments
$ splitOption "%1" key val
$ if x%key%==x $goto FinishProcessNamedArguments
$ ifThenI.NamedArguments %key%==a1
$   setGlobal a1 %val%
$ elseIfI.NamedArguments %key%==a2
$   setGlobal a2 %val%
$ elseIfI.NamedArguments %key%==a3
$   setGlobal a3 %val%
$ else.NamedArguments
$   error Unknown named argument "%key%"
$ endIf.NamedArguments
$ shift
$goTo ProcessNamedArguments
$label FinishProcessNamedArguments
$setGlobal positionalArgs %1 %2 %3
$offEcho
$batInclude myinclude -a3=0 -a2=3.14 i j k
$log Using named arguments -a1=%a1% -a2=%a2% -a3=%a3% positionalArgs=%positionalArgs%
Now, when calling this piece of code as a batInclude, one can optionally specify some named arguments (in any order) right after the name of the batInclude file and before the positional arguments, as demonstrated by the log output:

Using named arguments -a1=1 -a2=3.14 -a3=0 positionalArgs=i j k
Syntax:
$stars char[char][char][char]
This option is used to redefine the **** marker in the GAMS listing file. By default, important lines like those that denote errors and the solver and model status are prefixed with ****. A new marker consists of one to four characters.
Example:
$stars *##* garbage
The resulting listing file follows:

   2  garbage
*##*          $140
*##* $36,299 UNEXPECTED END OF FILE (1)

Error Messages
 36  '=' or '..' or ':=' or '$=' operator expected rest of statement ignored
140  Unknown symbol
299  Unexpected end of file
Syntax:
$sTitle text
This option sets the subtitle in the page header of the listing file to
text. Note that the next output line will appear on a new page in the listing file.
Example:
$sTitle Data tables for input/output
Syntax:
$stop [text]
This option stops program compilation without creating an error. Note that there is a difference to the option $exit: if there is only one input file, $stop and $exit have the same effect. In an include file the option $exit acts like an end-of-file on the include file. However, the option $stop in an include file will cause GAMS to stop reading all input but continue with the execution phase of the program compiled so far. The text following $stop is ignored.
Example:
$ifthen not set EXPORTEXCEL
$ stop No export to Excel
$else
$ call gdxxrw ...
$endif
See also $abort, $error, $exit, and $terminate.
The syntax of this dollar control option is equivalent to the syntax of $batinclude:
Syntax:
$sysinclude external_file arg1 arg2 ...
However, if an incomplete path is given, the file name is completed using the system include directory. By default, the system include directory is set to the GAMS system directory. Note that the default directory may be reset with the command line parameter sysIncDir.
Example:
The only relevant include file in the GAMS system directory is mpsgeset for MPSGE models; see for example [HARMGE]:
$sysInclude mpsgeset KAMIYA
Note that this call will first look for the include file [GAMS System Directory]/mpsgeset. If this file does not exist, it will look for [GAMS System Directory]/mpsgeset.gms. The argument KAMIYA is passed on to the include file and is interpreted as explained for the dollar control option $batInclude.
Consider the following example:
$sysInclude C:\Users\default\Documents\mpsgeset KAMIYA
This call will first look specifically for the include file C:\Users\default\Documents\mpsgeset and next for C:\Users\default\Documents\mpsgeset.gms.
See also $batInclude.
Syntax:
$terminate [text]
This option terminates compilation without creating an error and, unlike $stop, does not execute the program compiled so far.
Example:
$if set JUSTTERMINATE $terminate
See also $abort, $error, $exit, and $stop.
Syntax:
$title text
This option sets the title in the page header of the listing file to
text. Note that the next output line will appear on a new page in the listing file.
Example:
$title Production Planning Model $sTitle Set Definitions
Syntax:
$unLoad [sym1[=gdxSym1] sym2[=gdxSym2] ...]
This option unloads the specified items to a GDX file. Note that $unLoad must be used in conjunction with the option $gdxOut: $gdxOut must precede $unLoad, and more than one $unLoad may appear in between. Symbols can be renamed via the sym=gdxSym syntax. A $unLoad without arguments unloads the entire GAMS database into the GDX file.
Example: Consider the following slice of code:

;
$gdxOut tran
$unLoad i j
$unLoad b=dem a=sup
$unLoad d
$gdxout tranX
$unLoad
Note that these lines create a file named tran.gdx that contains the sets i and j and the parameter d, as well as the parameters b and a, which are now named dem and sup. The $unLoad in the very last line creates a GDX file tranX.gdx with all symbols (with their original names). The table of contents (via $gdxIn and $load without parameters) of these two files looks as follows:

Content of GDX C:\Users\default\Documents\gamsdir\projdir\tran.gdx
5 UELs
Number Type      Dim Count Name
     1 Set        1      2 i       canning plants
     2 Set        1      3 j       markets
     3 Parameter  1      3 dem(j)  demand at market j in cases
     4 Parameter  1      2 sup(i)  capacity of plant i in cases
     5 Parameter  2      6 d(i,j)  distance in thousands of miles
Content of GDX C:\Users\default\Documents\gamsdir\projdir\tranX
Both listings show domain information for the various symbols, but only the file tranX.gdx created with $unLoad without arguments has the real domain sets that can be used for domain matching when loading with $load sym<[=]symGDX; see $load for details.
Syntax:
$use205
This option sets the GAMS syntax to the syntax of Release 2.05. This is mainly used for backward compatibility. New keywords have been introduced in the GAMS language since Release 2.05. Models developed earlier that use identifiers that have since become keywords will cause errors when run with the latest version of GAMS. This option allows such models to run.
Example:
$use205
Set if /1.2.3/;
Scalar x ;
The word "if" is a keyword in GAMS that was introduced with the first version of Release 2.25. Setting the option $use205 allows "if" to be used as an identifier, since it was not a keyword in Release 2.05.
Syntax:
$use225
This option sets the GAMS syntax to the syntax of the first version of Release 2.25. This is mainly used for backward compatibility. New keywords have been introduced in the GAMS language since the first version of Release 2.25. Models developed earlier that use identifiers that have since become keywords will cause errors when run with the latest version of GAMS. This option allows such models to run.
Example:
$use225
Set for /1.2.3/;
Scalar x ;
The word "for" is a keyword in GAMS that was introduced with the later versions of Release 2.25. Setting the option $use225 allows "for" to be used as an identifier, since it was not a keyword in the first version of Release 2.25.
Syntax:
$use999
This option sets the GAMS syntax to the syntax of the latest version of the compiler. Note that this setting is the default.
Example:
$use225
Set for /1.2.3/;
Scalar x ;
$use999
for (x=1 to 3, display x) ;
Note that the word "for" is used as a set identifier after setting the option $use225; later, the keyword for is used in a looping construct after the language syntax has been reset to that of the latest version with the option $use999.
Syntax:
$version n
This issues a compilation error if
nis greater than the current GAMS version. This can be useful to ensure that a model is run only with new versions of GAMS, because, e.g., a particular feature which did not exist in older versions is needed.
Example:
* With GAMS 24.8.1 the function numCores was added to the system.
* Make sure that we use this GAMS version or newer.
$version 248
Scalar nc "Number of cores";
nc = numCores;
Display nc;
Syntax:
$warning text
This dollar control option issues a compilation warning to the log and listing but continues compilation and execution.
Example:
$ifthen not set INPUTFILE
$ set INPUTFILE default.txt
$ warning Using default INPUTFILE "default.txt". Use --INPUTFILE=myfile.txt to overwrite default.
$endif
The GAMS log file will issue a warning:

*** Error 332 in C:\Users\default\Documents\gamsdir\projdir\myinput.gms
    $Warning encountered - see listing for details
with the details in the listing file:

   3  $ warning Using default INPUTFILE "default.txt". Use --INPUTFILE=myfile.txt to overwrite default.
**** $332
Conditional Compilation
GAMS offers several dollar control options that facilitate conditional compilation. In this section we will first introduce the general syntax, present an overview of all relevant options and list the conditional expressions that may be used to perform tests. Then we will give several examples to illustrate how these options are used and to demonstrate their power. This section is meant as an introduction to conditional compilation in GAMS and complements the detailed descriptions of the dollar control options listed in Table 1 below.
Conditional Compilation: General Syntax and Overviews
The dollar control option
$if and its variants provide a great amount of control over conditional processing of the input file(s). The syntax in GAMS is similar to the
IF statement of the DOS Batch language:
$if [not] <conditional expression> new_input_line
The dollar control statement begins with
$if. Note that
$if may be replaced by one of its variants that are listed in Table 1 below. The operator
not is optional and makes it possible to negate the
conditional
expression that follows. The conditional expression may take various forms; a complete list is given in Table 2. The result of the conditional test is used to determine whether or not to process the remainder of the line,
new_input_line, which may be any valid GAMS input line.
- Attention
- The first non-blank character on the line following the conditional expression is considered to be the first column position of the GAMS input line. Therefore, if the first character encountered is a comment character the remainder of the line is treated as a comment line. Likewise, if the first character encountered is the dollar control character, the line is treated as a dollar control line.
Alternatively, the
new_input_line may be placed in the next line. The corresponding syntax follows:
$if [not] <conditional expression>
new_input_line

Note that in this version the remainder of the line after the conditional expression is left blank. If the conditional is found to be false, either the remainder of the line (if any) will be skipped or the next line will not be processed.
The overviews in Table 1 and Table 2 conclude this subsection. Examples are given in the next subsection.
Table 1:
$if and Related Dollar Control Options
Table 2: Conditional Expressions in Conditional Compilation
Conditional Compilation: Examples
File Operation Test
The operator
exist may be used to test whether a given file name exists. Consider the following example:
$if exist myfile.dat $include myfile.dat
Observe that the effect of this dollar control statement is that the file
myfile.dat is included if it exists. Note that the character
$ at the beginning of the option $include is the first non-blank character after the conditional expression
exist myfile.dat and therefore it is treated as the first column position. The statement above may also be written as follows:
$if exist myfile.dat
$include myfile.dat
Conditional Compilation and Batch Include Files
In the next example we will illustrate how the option
$if is used inside a batch include file where parameters are passed through the option $batInclude from the parent file:
$if not "%1a" == a $goto labelname
$if exist %1 file.ap=1;
Note that in the first line the $if condition uses the string comparison "%1a" == a to check whether the parameter is empty. This test may also be done as %1 == "". If the parameter is not empty, the option $goto is processed.
- Note
- The option $label cannot be part of the conditional input line. However, if the option $label appears on the next line, the condition decides once whether the label is placed or not, and subsequent instances of $goto will find the label without reevaluating the condition.
The second line illustrates the use of standard GAMS statements if the conditional expression is valid. If the file name passed as a parameter through the $batInclude call already exists, GAMS will execute the file.ap=1; statement, which will append to the file.
The next example demonstrates how an unknown number of file specifications may be passed on to a batch include file that will include each of them if they exist. The batch include file could look as follows:
* Batch Include File - inclproc.gms
* include and process an unknown number of input files
$label nextfile
* Quote everything because file name might have blanks
$if exist "%1" $include "%1"
$shift
$if not "%1a" == a $goto nextfile
The call to this file in the parent file could take the following form:
$batInclude inclproc "file 1.inc" file2.inc file3.inc file4.inc
Testing Whether an Item Has Been Defined
The next example shows how to test if a named item was declared and/or defined.
Set i;
$if defined i $log First: set i is defined
$if declared i $log First: set i is declared
Set i /seattle/;
$if defined i $log Second: set i is defined
$if declared i $log Second: set i is declared
Note that after the first declaration of i only declared i evaluates to true, while after the second declaration with a data statement both defined i and declared i are true.
Testing Whether an Item May Be Used in an Assignment
The expression readable id tests whether data were assigned to an item and therefore whether the item may be used on the right-hand side of an assignment statement. Consider the following example:
Scalar f;
$if not readable f $log f cannot be used on the right
Scalar f /1/;
$if readable f $log f can be used on the right
$kill f
$if not readable f $log f cannot be used on the right after clear
f = 1;
$if readable f $log f can be used on the right after assignment
Note that in the first test the scalar f was declared, but there was no data statement, hence it is not readable. After a declaration with a data statement the test readable f evaluates to TRUE. With $kill we can revert f to a data-less state, hence not readable f is TRUE after the $kill. The assignment statement f = 1; makes the scalar f readable again.
Testing Whether an Identifier May Be Declared
In programming flow control structures like if statements or loop statements, declaration statements are not permitted. The test decla_ok may be used to check whether the current environment allows declaration statements. Consider the following example:
$if decla_ok $log declarations are possible
if(1,
$ if not decla_ok $log declarations are not allowed
);
Note that the conditional expressions in both $if tests will evaluate to TRUE. However, the second test of decla_ok itself will be FALSE because it is processed while compiling an if statement; with the not, the entire expression evaluates to TRUE. For more information, see chapter Programming Flow Control Features.
In-line and end-of-line comments are stripped out of the input file before processing the
new_input_line. If either of these forms of comments appear, they will be treated as blanks. Consider the following example:
Parameter a ; a=10 ;
$eolCom // inlineCom /* */
$if exist myfile.dat /* in line comments */ // end of line comments
a = 4 ; display a;
Note that the comments on line 3 are ignored and the fourth line with the assignment statement will be processed if the conditional expression is true. Hence the outcome generated by the display statement will list
a with a value of 4 if the file
myfile.dat exists and a value of 10 if the file does not exist.
Error Level Test
Consider the following example:
$call gams mymodel.gms lo=2
$if errorlevel 1 $abort one or more errors encountered
Note that the errorlevel is retrieved from the previous system call via $call. The conditional expression errorlevel 1 is true if the returned errorlevel is equal to or larger than 1. In the case of calling GAMS this means that something was not quite right with the execution of GAMS (either a compilation or execution error or other more exotic errors, see GAMS return codes). If this is the case, this GAMS program will be aborted immediately at compilation time.
Usually programs return 0 on success and non-zero on failure. The
$if errorlevel 1 checks for strictly positive return codes. There are rare cases with failures and negative return codes (e.g. on Windows if some DLL dependencies of the program can't be resolved). In such a case
$if errorlevel 1 will evaluate to false and not continue with the
$abort instruction. It might be better to access the program return code via the errorLevel function in the following way:
$call gams mymodel.gms lo=2
$ifE errorLevel<>0 $abort one or more errors encountered
Solver Test
The following example illustrates how to check if a solver exists.
$if solver ZOOM
Note that the conditional expression is false since the solver named ZOOM does not exist in the GAMS system (anymore).
Command Line Parameters in String Comparison Tests
Assume we include the following dollar control statements in a GAMS file called
myfile.gms:
$if not '%gams.ps%'=='' $log Page size set to %gams.ps%
$if not '%gams.pw%'=='' $log Page width set to %gams.pw%
$if not '%gams.mip%'=='' $log MIP solver default is %gams.mip%
Then we run the program with the following call:
> gams myfile pageSize=60 pageWidth=85 mip=cbc
Note that we specified values for the command line parameters pageSize, pageWidth, and MIP. We can use either the short or the long name on the command line and in the compile-time variable. If we do not specify an option on the command line, we get the default values for page size and page width, and the MIP solver line will not show because %gams.mip% remains empty. The log with the option settings above on the command line will include the following lines:
Page size set to 60
Page width set to 85
MIP solver default is cbc
Command line parameters are introduced in chapter The GAMS Call and Command Line Parameters.
System Attributes in String Comparison Tests
Compile-time system attributes may also be used in string comparison tests. The system attribute that is most useful in this context is .fileSys. It identifies the name of the operating system being used. Consider the following example:
$ifthen not %gams.logOption%==3
$ ifi %system.fileSys%==UNIX $set nullFile > /dev/null
$ ifi %system.fileSys%==MSNT $set nullFile > nul
$ if not set nullFile $abort %system.fileSys% not recognized
$else
$ set nullFile
$endif
$call gamslib trnsport %nullFile%
These dollar control statements allow the definition of a NULL file destination that depends on the operating system being used. Note that the compile-time variable nullFile is set to the operating system dependent name. This is useful to silence an external program that writes to STDOUT in case the GAMS log does not go to STDOUT (logOption=3). This example could also use the system attribute %system.nullFile%, which holds the operating system dependent NULL file destination:
$set nullFile
$if not %gams.logOption%==3 $set nullfile > %system.nullFile%
$call gamslib trnsport %nullfile%
System attributes in general are introduced in chapter System Attributes.
Conditional Compilation with $ifThen and $else
Consider the following example which illustrates the use of
$ifThen,
$elseIf,
$else and
$endif:
$set x a $label test test
Note that the resulting log file will contain the following lines:
$ifthen with x=a
$elseif 2 with x=c
$elseif 1 with x=b
$else with x=k
Observe that the options $else and $endIf are not followed by conditional expressions and that the instruction following the option $endIf contains a dollar control statement. Moreover, note that in $set x 'c' the text to be set is in quotes: GAMS needs to know where the text ends and where the next dollar control option (in this case $log) starts.
Type of Identifiers
The type of a symbol can be retrieved via
$if ...Type. Consider the following example:
Set diag / 1*3 /;
Parameter p(diag) / 1 1, 2 4, 3 8 /;
$if setType diag $log diag is a set
$if not varType diag $log diag is not a variable
$if preType diag $log diag is a predefined type
$if parType p $log p is a parameter
$if setType sameAs $log sameAs is a set
$if preType sameAs $log sameAs is a predefined type
Note that for predefined symbols more than one type applies (e.g. sameAs is of both set and predefined type). Please also note that diag is a set even though there is a predefined symbol named diag: the predefined symbol becomes invisible when a user defined symbol with the same name is declared.
Normally there is no way to get a symbol into the GAMS symbol table without a proper type. However, if the command line parameter multiPass is set to a value larger than zero, the compiler performs only some integrity checks and tries to deduce the symbol type from the context. If it is not able to do so, the symbol type remains unknown. For example, compiling the following lines with multiPass=1

display x;
$if xxxType x $log x is of unknown type

results in the line x is of unknown type in the GAMS log.
Macros in GAMS
Macros are widely used in computer science to define and automate structured text replacements. The GAMS macro processor functions similarly to the popular C/C++ macro preprocessor. Note that the GAMS macro facility was inspired by the GAMS-F preprocessor for function definition developed by Michael Ferris, Tom Rutherford and Collin Starkweather, 1998 and 2005. The GAMS macro facility incorporates the major features of the GAMS-F preprocessor into the standard GAMS release as of version 22.9. GAMS macros act like a standard macro when defined; however, their recognition for expansion is driven by GAMS syntax.
Syntax and Simple Examples
The definition of a macro in GAMS takes the following form:
$macro name macro_body
$macro name(arg1,arg2,arg3,...) macro_body with tokens arg1, ...
The dollar symbol $ followed by macro indicates that this line is a macro definition. The name of the macro has to be unique, similar to other GAMS identifiers like sets and parameters. The macro name is immediately followed by a list of replacement arguments arg1,arg2,arg3,... that are enclosed in parentheses. The macro body is not further analyzed after removing leading and trailing spaces.
The recognition and following expansion of macros is directed by GAMS syntax. The tokens in the macro body to be replaced by the actual macro arguments follow the standard GAMS identifier conventions. Consider the following simple example of a macro with one argument:
$macro reciprocal(y) 1/y
Here the name of the macro is
reciprocal,
y is the argument and the macro body is
1/y. This macro may be called in GAMS statements as follows:
$macro reciprocal(y) 1/y
scalar z, x1 /2/, x2 /3/;
z = reciprocal(x1) + reciprocal(x2);
As GAMS recognizes
reciprocal(x1) and
reciprocal(x2) as macros, the assignment statement will expand to:
z = 1/x1 + 1/x2;
The next example illustrates macros with multiple arguments:
$macro ratio(x,y) x/y
scalar z, x1 /2/, x2 /3/;
z = ratio(x1,x2);
The assignment above will expand to:
z= x1/x2;
Note that the macro definition may extend over several lines with the symbol
\ acting as a continuation string. Consider the following example:
$macro myxor(a,b) (a or b) \
                  and (not a or not b)
scalar z;
z = myxor(1,0);
display z;
The z assignment expands to:

z = (1 or 0) and (not 1 or not 0);
Note that although the macro has been defined over two lines, the expansion happens by combining the lines after stripping the leading white space of the second line, as demonstrated in the next example (because and has a higher precedence than or, we can omit the parentheses):
$macro myxor(a,b) not a and b \
                  or a and not b
scalar z;
z = myxor(1,0);
display z;
The
z assignment expands to this:
z = not 1 and 0 or 1 and not 0;
The ampersand &, explained in more detail in the next section, can be used to preserve (some of) the leading white space (but not the line breaks) if that is desired:
$macro myxor(a,b) not a and b \
&                 or a and not b
Nested Macros
Macros may be nested. Consider the following example:
$macro product(a,b) a*b
$macro addup(i,x,z) sum(i,product(x(i),z))
set j /j1*j10/;
Parameter a1(j) / #j 1 /, z, x1 /5/;
z = addup(j,a1,x1);
Observe that the macro
product is nested in the macro
addup. The assignment will expand to:
z = sum(j,a1(j)*x1);
Note that nested macros may result in an expansion of infinite length. An example follows:

$macro a b,a
display a;
This will expand into:
display b,b,b,b,b,b,b,b,b,b,b,b,b,b,b,b,b,...
In such a case GAMS will eventually refuse to do more substitutions and will issue a compilation error:
732 Too many edits on one single line - possible recursion in macro calls compilation will be terminated
Ampersands in Macro Definitions
The expansion of arguments may be more carefully controlled by the use of ampersands
& in the macro body. A single ampersand
& is used as a concatenation or separation symbol to indicate tokens that are to be replaced. Consider the following example:
$macro f(i) sum(j, x(i,j))
$macro equ(q) equation equ_&q; \
              equ_&q.. q =e= 0;
set i /i/, j /j/;
variable x(i,j);
equ(f(i))
This will expand into:
equation equ_f(i);equ_f(i).. sum(j, x(i,j)) =e= 0;
Note that without the ampersand notation, GAMS would have recognized only the third occurrence of
q and hence the expansion would have been:
equation equ_q;equ_q.. sum(j, x(i,j)) =e= 0;
Two ampersands
&& immediately preceding a token will drop the most outer matching single or double quotes of the replacement argument. This makes it possible to include expressions with spaces, commas and unbalanced parentheses. The latter one is something users should really avoid doing. An example follows.
$macro d(q) display &&q; $macro ss(q) &&q) set i /i/, k /k/; parameter a1(i) / i 1/, z; d('"here it is" , i,k') d('"(zz"') z=ss('sum(i,a1(i)'); z=ss('prod(i,a1(i)');
Note that the expressions
d contain quotes, spaces and commas and the expression
ss has unbalanced parentheses within the quoted parts. In turn these expand to become:
display "here it is" , i,k; display "(zz"; z=sum(i,a1(i)); z=prod(i,a1(i));
Additional Macro Features
Deeply nested macros may require aliased sets in indexed operations like
sum and
prod. A minor syntax extension allows the implicit use of aliases. The suffix
.local on a controlling set will use an implicit alias within the scope of the indexed operation. Consider the following example:
$macro ratio(a,b) a/b $macro total(q) sum(i,q(i)) set i /i1*i15/; parameter a(i) / #i 1 /, b(i) / #i 2 /, r(i), asum; asum = total(a); r(i) = ratio(total(a), b(i));
The assignment statement will expand to:
asum = sum(i,a(i)); r(i) = sum(i,a(i))/b(i);
The second line will not compile because the
i in the sum is already controlled from the
i on the left. The intention was the
total macro is to add up the elements of a parameter indexed over
i. As in the
r(i) assignment the macro might be used in a statement where
i is already controlled hence when doing the
sum in the macro we want to use an alias of
i. If we change the macro definition to
$macro total(q) sum(i.local,q(i))
The code works as expected because the
i in the sum refers to the
i.local and not the outside
i.
Note that the the modifier
.local is not limited to macros and may be used in any context. For further details and more examples, see the detailed description of the dollar command option $on/offLocal.
Another feature of macros is the implicit use of the suffix
.L in report writing and other data manipulation statements. This allows using the same algebra in model definitions and assignment statements. The following code illustrates this feature:
$macro sumIt(i,term) sum(i,term) cost .. z =e= sumIt((i,j), (c(i,j)*x(i,j))) ; supply(i) .. sumIt(j, x(i,j)) =l= a(i) ; demand(j) .. sumIt(i, x(i,j)) =g= b(j) ; Model transport /all/ ; solve transport using lp minimizing z ; Parameter tsupply(i) total demand for report tdemand(j) total demand for report $onDotL tsupply(i)=sumIt(j, x(i,j)); tdemand(j)=sumIt(i, x(i,j));
The option $onDotL enables the implicit suffix
.L for variables. This feature was introduced for macros with variables to be used in equation definitions as well as assignment statements. The matching option $offDotL will disable this feature. Similarly, $offDotScale will access the
.scale suffix of a variable or equation in an assignment statement.
Three more switches are relevant to macros. The option $show will list any GAMS macros defined. The option $on/$offMacro will enable or disable the expansion of macros; the default is
$onMacro. Finally, the option $on/offExpand will change the processing of macros appearing in the arguments of a macro call. The default operation is not to expand macros in the arguments. The switch
$onExpand enables the recognition and expansion of macros in the macro argument list. The option
$offExpand will restore the default behavior.
Note that macro definitions are preserved in a save/restart file and are available again for a continued compilation.
Summarizing, macros shares the name space of GAMS symbols, like sets, parameters, variables, etc. Macros are recognized and expanded anywhere a proper GAMS identifier may be used. This may be suppressed with the option $on/offMacro. The body of macros is only used during expansion. Hence, macro definitions are not order dependent. Variables in macro bodies will have an implicit suffix
.L when they are used in assignment statements. This GAMS feature needs to be activated with the option $onDotL.
Compressing and Decompressing Files
GAMS provides two dollar control options for compressing and decompressing GAMS input files:
- Attention
- Spaces are interpreted as separators between the source and target file names, hence quotes (single or double) have to be used if the file names contain spaces.
Note that GAMS will recognize whether a file is compressed and will processes it accordingly.
- Note
- Like any other GAMS input files, all compressed files are platform-independent.
Compressing and Decompressing Files: A Simple Example
We use the well-known transportation model [TRNSPORT] to illustrate. First we copy the model from the GAMS Model Library and then we create a compressed version of the original:
> gamslib trnsport > echo $compress trnsport.gms t1.gms > t2.gms > gams t2
Alternatively, the following code snippet may be used from within a GAMS file:
$call 'gamslib trnsport' $compress trnsport.gms t1.gms $include t1.gms
Note that the compressed input file
t1.gms can be treated like any other GAMS input file. If it is executed, the listing file will be identical to the listing file of the original input file
trnsport.gms, since a decompressed input is reported in the echo print. As usual, the parts of the model that are marked with the dollar control option $on/offListing will not appear in the echo print.
The compressed file
t1.gms can be decompressed as follows:
> echo $decompress t1.gms. t3.gms > t4.gms > gams t4
Alternatively, from within a GAMS file:
$decompress t1.gms t3.gms
Observe that the decompressed file
t3.gms is identical to the original file
trnsport.gms. This can easily be tested with the following command:
> diff trnsport.gms t3.gms
Compressing and Decompressing Files: The Model CEFILES
The following more elaborate example is self-explanatory. It is adapted from model [CEFILES] and can easily be modified to test the use of compressed files.
* --- get model $call gamslib -q trnsport * --- compress and run model $compress trnsport.gms t1.gms $decompress t1.gms t1.org $call diff trnsport.gms t1.org > %system.nullFile% $if errorLevel 1 $abort files trnsport and t1 are different * --- check to see if we get the same result $call gams trnsport gdx=trnsport lo=%gams.lo% $if errorLevel 1 $abort model trnsport failed $call gams t1 gdx=t1 lo=%gams.lo% $if errorLevel 1 $abort model t1 failed $call gdxdiff trnsport t1 %system.reDirLog% $if errorLevel 1 $abort results for trnsport and t1 are not equal * --- also works with include files $echo $include t1.gms > t2.gms $call gams t2 gdx=t2 lo=%gams.lo% $if errorLevel 1 $abort model t2 failed $call gdxdiff trnsport t2 %system.reDirLog% $if errorLevel 1 $abort results for trnsport and t2 are not equal $terminate
Encrypting Files
When models are distributed to users other than the original developers, issues of privacy, security, data integrity and ownership arise. To address these concerns, secure work files may be used and GAMS input files may be encrypted. Note, that the encryption follows the work file security model and requires special licensing.
- Note
- Like any other GAMS input files, all compressed and encrypted files are platform-independent.
Encryption is only available if a system is licensed for secure work files and usually requires a target license file which will contain the user or target encryption key. Note that once a file has been encrypted it cannot be decrypted any more. GAMS provides the following dollar control option to encrypt an input file:
$encrypt <source> <target>
Here the name of the input file to be encrypted is
source and the name of the resulting encrypted file is
target.
Encrypting Files: A Simple Example
We use again the transportation model [TRNSPORT] to illustrate. First we copy the model from the GAMS Model Library and then we create an encrypted version of the original:
> gamslib -q trnsport > echo $encrypt trnsport.gms t1.gms > t2.gms > gams t2 pLicense=target lo=%gams.logOption%
Note that the first two lines are similar to the directives that we have used to compress the model above. In the third line, the command line parameter pLicense specifies the target or privacy license to be used as a user key for encrypting. Thus the new encrypted file
t1.gms is locked to the license key
target and it can only be executed with the license file
target:
> gams t1 license=target dumpOpt=11
Note that the command line parameter license is used to override the default GAMS license file
gamslice.txt that is located in the system directory. Note further that the command line parameter dumpOpt is usually used for debugging and maintenance. The value 11 causes a clean copy of the input to be written to the file
t1.dmp, where all include files and macros are expanded. Observe that if some lines have been marked with the dollar control options $on/offListing in the original file, then these lines will be suppressed in the file
t1.dmp.
- Note
- Once a file has been encrypted, it cannot be decrypted any more. There is no inverse mechanism to recover the original file from the encrypted file. An attempt to decompress it using $decompress will fail.
Observe that encrypting is done on the fly into memory when the GAMS system files are read. GAMS will recognize if a file is just plain text or compressed and/or encrypted and will validate and process the files accordingly.
Encrypting Files: The Model ENCRYPT
The following more elaborate example is self-explanatory; it is model [ENCRYPT] from the GAMS Model Library.
Note that the option
license=demo is used. This overrides the license that is currently installed with a demo license that has the secure file option enabled.
$ontext To create an encrypted file, we need a license file which has the security option enabled. To allow easy testing and demonstration a special temporary demo license can be created internally and will be valid for a limited time only, usually one to two hours. In the following example we will use the GAMS option license=demo to use a demo license with secure option instead of our own license file. Also note that we use the same demo license file to read the locked file by specifying the GAMS parameter pLicence=license. $offtext * --- get model $ondollar $call gamslib -q trnsport * --- encrypt and try to decrypt $call rm -f t1.gms $echo $encrypt trnsport.gms t1.gms > s1.gms $call gams s1 license=demo pLicense=license lo=%gams.logOption% $if errorLevel 1 $abort encryption failed $eolCom // $if not errorFree $abort pending errors $decompress t1.gms t1.org // this has to fail $if errorFree $abort decompress did not fail $clearError * --- execute original and encrypted model $call gams trnsport gdx=trnsport lo=%gams.logOption% $if errorLevel 1 $abort model trnsport failed * Although this reads license=demo, this license file is the one * specified with pLicense from the s1 call $call gams t1 license=demo gdx=t1 lo=%gams.logOption% $if errorLevel 1 $abort model t1 failed $call gdxdiff trnsport t1 %system.reDirLog% $if errorLevel 1 $abort results for trnsport and t1 are not equal * --- use the encrypted file as an include file $onEcho > t2.gms $offListing * this is hidden option limRow=0,limCol=0,solPrint=off; $include t1.gms $onListing * this will show $offEcho $call gams t2 license=demo lo=%gams.logOption% $if errorLevel 1 $abort model t2 failed * --- protect against viewing * now we will show how to protect parts of an input * file from viewing and extracting original source * via the gams DUMPOPT parameter. 
We just need to * encrypt again * --- encrypt new model $call rm -f t3.gms $echo $encrypt t2.gms t3.gms > s1.gms $call gams s1 license=demo pLicense=license lo=%gams.logOption% $if errorLevel 1 $abort encryption failed $call gams t3 license=demo gdx=t3 dumpOpt=11 lo=%gams.logOption% $if errorLevel 1 $abort model t3 failed $call gdxdiff trnsport t3 %system.reDirLog% $if errorLevel 1 $abort results for trnsport and t3 are not equal * --- check for hidden output $call grep "this is hidden" t3.lst > %system.nullFile% $if not errorLevel 1 $abort did not hide in listing $call grep "this is hidden" t3.dmp > %system.nullFile% $if not errorLevel 1 $abort did not hide in dump file | https://www.gams.com/latest/docs/UG_DollarControlOptions.html | CC-MAIN-2019-04 | refinedweb | 24,629 | 54.32 |
Not long ago I went to the US Postal Service's site where they have an online address validation service. If you aren't familiar with this, here is a direct link to their page. They also offer a service where you are supposed to be able to submit addresses in bulk via a WebService call, but you have to apply for it.
On the questionnaire form, in typical bureaucratic fashion, you have to answer some questions, one of which I seem to remember being something about whether it was for commercial use. Now if the rules say that commercial use is disallowed, common sense would dictate that they could simply tell you this up front, and save you and me some of our hard-earned tax dollars, right?
Nope. They actually make you go through the whole process, believing you'll be approved to get a "key" allowing you to use the service, and then, some three days later after some clerk with a sub - room temperature IQ has reviewed your request, you get back an email saying that "Commercial use isn't allowed". Folks, this is the U.S. Government; don't fight it.
So what I did is simply raise the middle finger of my right hand at the screen, calm down, and proceed to write my own web-scraped version of what I needed. The code I'll show you is cross-browser and it's all client-side javascript, and I'll point you to an online version you can try out, but I must warn you that if you are using Firefox, even though this code is 100% Firefox - compatible, it won't work.
The developers of the Firefox browser, in their wisdom, crippled the XMLHTTP Request object to only work with requests to the same domain that the page originated from. In Internet Explorer, as long as the site is in your Trusted Sites list, it is no problem. Firefox, you'll need to write a completely separate server side page in your favorite language, have your XMLHTTP Request make a call to this page (on the same domain), do the XMLHTTP scraping at the server, and return the results to the client page. Actually, you can "digitally sign" your script to get around this, but then it won't be compatible with other browsers. DOH! You call this "standards"?
Personally, I find this approach quite insulting. The Firefox people are essentially saying, "Mr. User, we think you are so dumb that we are gonna prevent you from doing this even if you know what you are doing and want to override the behavior. And that's the end of that, Mr. Firefart Surfer FanBoy." To be fair, IE7 will also have a native (non-ActiveX) implementation of XMLHttpRequest that behaves exactly the same way. However, you can still use ActiveX, as you can see in the sample code below.
In actual fact, );
A discussion of JSON is beyond the scope of this article, but essentially it is a shorthand way of representing objects that when "Eval-ed" in Javascript, translates into an instance of the object. It's not pretty, but it is more compact than XML..
The XMLHTTP fiasco is not my only beef with Firefox, it also has a burdeningly strict interpretation of Javascript and DOM that can make writing cross-browser script a royal pain in the butt, but that's another story. Otherwise, its a good browser and I commend them for their efforts. Some people seem to have gotten all religious about this Firefox vs IE thing, personally I couldn't care less. Since we get about 20% of our traffic on Firefox, I guess I have to at least try to make more of my stuff accomodate it's quirks.
(Now you see, the above statement would probably start a religious war from the Firefox afficionados about how terrible Microsoft is and what a miserable browser "Internet Exploder" is, that it's IE which has all the quirks, etc. ad-nauseum) Fact of the matter is, I think Internet Explorer is pretty damn good browser, and IE 7.0 looks like it will be even better. You have to remember that from a security standpoint, if you are number one, you're the guy they all take their pot shots at. Standards are a good thing, but like any other good thing, you can get so carried away that your ultra-strict (and perhaps not entirely accurate) interpretation does more harm than good from a usability standpoint. Devout Penguinistas and FireFartians should keep in mind that inevitably, as their alternative offerings grow and change, so too will the targets of the potshot - takers.
So here is the code, commented (I hope) sufficiently for you to see what it does:
<HTML>
<HEAD>
<TITLE>USPS Address Validation Example</TITLE>
<script>
var url1="";
// 2 Broadway
var url2="&address1=&city="; //NEW YORK
var url3="&state=";//NY
var url4="&urbanization=&zip5=";
function getAddress( street, city, state)
{
if(street=="" || city=="" || state =="")
document.getElementById("result").innerText="Please fill in all fields.";
return;
}
var fullurl=url1+street+url2+city+url3+state+url4;
var x=createXMLHttp();
x.open("GET",fullurl, false);
x.Send(null);
var res=x.responseText;
try{
// strip off everything before where the result starts
var startpos = res.indexOf("<td headers=\"full\"")+124;
res=res.substring(startpos)
if(res.toUpperCase().indexOf("<HTML")>0)
document.getElementById('result').innerText="Address Not valid";
return;
// strip off everything after the result
var endpos=res.indexOf("</td>")-10;
res=res.substring(0,endpos);
//clean up line breaks
res=res.replace("<br />","");
res=res.replace("<br/>","");
res=res.toUpperCase();
// clean off HTML Spaces
res=res.replace("&NBSP;", " ");
document.getElementById('result').innerText=res;
catch(e)
//Generic cross-browser XMLHttp object
function createXMLHttp() {
// NOTE Here I am trapping for IE7 to use ActiveXObject since it also now has a native
// XMLHttpRequest object without COM that behaves just like "other browsers"
if (typeof XMLHttpRequest != "undefined" && !window.ActiveXObject) {.");
</script>
</HEAD>
<BODY>
<BASEFONT FACE="Tahoma">
<div align="center"><h3>USPS Address Validation</H3></div>
<input type=text id=streetStreet<BR/>
<input type=text id=cityCity<BR/>
<input type=text id=stateState Abbrev <input type=button<BR/>
<textarea id=result style="width:400px;" rows=10></textarea>
</BODY>
So basically, we take the street, city and state of the address you want to validate against the Postal Service database, format them into a GET Url, and use The XmlHTTPRequest object to make the call. The result, which is an HTML Page, comes back in the responseText property on a synchronous call. There's no need to do it asynchronously because its very fast, so a blocking call is the order here. The US Postal service always gets the request through. At least, that's what Al Gore told me.
At that point, I just use simple string manipulation to chop off the junk HTML before and after the results, then clean up extraneous HTML Tags such as <BR /> and , and I've got my result for display. If the result comes back with your full USPS - adjusted address including ZIP+4 zipcode, your address is valid. If not, we just display the appropriate error message. Yes, I know -- I could have used Regex, but this job is so simple. And besides, if they change the page, your Regex can break too.
Here's a link to a test page, you can simply View Source and you have everything:
It is relatively easy to massage the above Javascript into a nice .NET Class library. Here is some example code:
using System;
using System.Net;
namespace PAB.Utils.USPS
public class AddressValidator
{
private AddressValidator() {}
public static string ValidateAddress(string street, string city, string state)
{
string url1=
"
1&pagenumber=0&firmname=&address2="; // 2 Broadway
string url2="&address1=&city="; //NEW YORK
string url3="&state=";//NY
string url4="&urbanization=&zip5=";
if(street=="" || city=="" || state =="")
{
throw new ArgumentException("missing or invalid parameter.", "street:city:state");
}
string fullurl=url1+street+url2+city+url3+state+url4;
WebClient cln = new WebClient();
try
{
string res=System.Text.Encoding.UTF8.GetString(cln.DownloadData(fullurl));
// strip off everything before where the result starts
int startpos = res.IndexOf("<td headers=\"full\"")+124;
res=res.Substring(startpos);
if(res.ToUpper().IndexOf("<HTML")>0)
{
return "Invalid Address";
}
// strip off everything after the result
int endpos=res.IndexOf("</td>")-10;
res=res.Substring(0,endpos);
//clean up line breaks
res=res.Replace("<br />","");
res=res.Replace("<br/>","");
res=res.ToUpper();
// clean off HTML Spaces
res=res.Replace("&NBSP;", " ");
return res;
catch(Exception ex)
return ex.Message;
I've put the above, along with a nice ASPX web page to test it, into a Visual Studio.NET 2003 Solution that you can download and try right away:
Download the code that accompanies this article
You can thank the Postal Service for their incredible efficiency. If you are a Firefox user, please note that I've taken care to ensure that this page looks the same in Firefox as it does in Internet Explorer. I even spend some time fixing up the "Search" Button at the upper right of our pages to allow you to use the enter key instead of having to click the submit button, just as it already functioned for IE users. It was originally only about 3 lines of Javascript code for IE, and now it's about 20 lines of code, but its "cross browser". So there. I wish you great peace and happiness with either Firefart or Internet Exploder. May all your methods return, etc. | http://www.nullskull.com/articles/20060430.asp | CC-MAIN-2014-35 | refinedweb | 1,587 | 60.85 |
Guillaume,
You're right that there's generally no server-wide JNDI context. It's
possible to look up any resource in the server at runtime using
Geronimo-specific APIs (such as Kernel.listGBeans, or using the JSR-77
management APIs). For J2EE apps, the standard practice leans towards
binding everything at deployment time, so we have a deployment
descriptor that maps the declared resources to actual resources in the
server during deployment. Then we create the component-specific JNDI
space during deployment containing the resources to be made avaliable
to the component, because that's what J2EE dictates. But it's not the
only possible way things could work.
I'm having trouble wrapping my head around how resources should
normally work in ServiceMix. Is there some place in jbi.xml or other
standard JBI deployment information for, say, a service unit to
declare that it needs resources like a JMS connection factory and
destination? If not, how is ServiceMix/Geronimo supposed to know what
resources to provide? Is there just an assumption that a global JNDI
namespace will be present containing every resource in the server and
each component can look up whatever it wants to?
Thanks,
Aaron
On 4/4/06, Guillaume Nodet <guillaume.nodet@worldonline.fr> wrote:
> Thanks to dain and djencks advises, I have began to write a real
> ServiceMix integration for Geronimo.
> However I am facing a number of problems.
>
> The problem is that the JBI spec needs some things to be done when
> undeploying jbi artifacts so I will be in need of an event fired before
> undeployment (and not after as this is the current case). Let me
> explain the use case for this.
> The JBI container is a server (like a web server or EJB server): it
> accepts three kind of deployment artifacts: component, shared libraries
> and service assemblies. A shared library is a collection of jars to be
> added to the classpath of a component. A component is also a container,
> like a BPEL engine for example. A service assembly is a package
> containing service units. These service units are given to a target
> component upon deployment. A service unit could a BPEL process.
> When deploying a BPEL process onto a BPEL engine, the engine may have to
> store the process in a database at deployment time and remove the clean
> the database when undeploying the service unit. The JBI spec has all
> the needed interfaces to perform these deployment / undeployment steps.
> The only problem is that I have not found any way to know when a
> configuration is being undeployed.
> Looking at the kernel, it seems it should be quite easy to do, so I
> think I will raise a JIRA for that and attach a patch at a later time.
>
> The next problem, which is IMHO more important, is how to access managed
> resources. In the previous BPEL engine example, the component has to
> access a database. A JMS component would access a JMS connection
> factory. These resources should be accessed via JNDI. I have browsed
> the naming / deployer code these past days and AFAIK, there is no
> server-wide JNDI context. When a web app is deployed, a specific JNDI
> context is created (and bound to the thread with interceptors), that
> includes all the bindings referenced in the web deployment descriptor.
> This leads me to think that I have to create a geronimo-jbi.xml
> deployment descriptor which will contain resource references and / or
> additional gbeans for the configuration.
> I fear this will lead to another problem, which is the fact that these
> resources are usually deployed inside an EAR and JBI artifacts can not...
> So the main questions is: did I miss something ? Is there any easier way
> to access server-wide resources or do I really have to create a specific
> deployment plan of some kind ?
>
> Cheers,
> Guillaume Nodet
>
>
> | http://mail-archives.apache.org/mod_mbox/geronimo-dev/200604.mbox/%3C74e15baa0604040708u366aedeev15da7ffc2ee328d2@mail.gmail.com%3E | CC-MAIN-2014-52 | refinedweb | 644 | 63.29 |
Definitions for a simple block device interface. More...
#include <sys/cdefs.h>
#include <stdint.h>
#include <sys/types.h>
Go to the source code of this file.
Definitions for a simple block device interface.
This file contains the definition of a very simple block device that is to be used with filesystems in the kernel. This device interface is designed to abstract away direct hardware access and make it easier to interface the various filesystems that we may add support for to multiple potential devices.
The most common of these devices that people are probably interested in directly would be the Dreamcast SD card reader, and that was indeed the primary impetus to this device structure. However, it could also be used to support a file-based disk image or any number of other devices.. | http://cadcdev.sourceforge.net/docs/kos-2.0.0/blockdev_8h.html | CC-MAIN-2018-05 | refinedweb | 135 | 56.96 |
Ask A Question
How do I access an attribute from a background worker (Stripe charge_id)?
I managed to get Stripe working and processing Jobs, where a user must pay a one-time charge in order to create a Job record.
After putting the call to `Stripe::Charge.create` in a background job I can't manage to figure out how to pass the `charge.id` from `Stripe::Charge.create` to an Order object.
I planned to move the Order.create call into the sidekiq worker and access the `charge.id` directly, but I can't access the @job within the worker because a stripeToken can't be used more than once. Any idea on how I can still save the `charge.id` to an `Order`? *(separate from the main Job model)*
JobsController
def create ... if @job.create_with_stripe(params[:stripeToken]) if @job.save Order.create( # Can't figure out how to pass the charge.id from StripePaymentJob :charge_id => @charge.id, :job_id => @job.id ) end ... end
Job Model
def create_with_stripe(token) Stripe.api_key = Rails.application.secrets.stripe_secret_key if valid? StripePaymentJob.perform_later(token, SecureRandom.uuid) else ... end
Stripe Worker
class StripePaymentJob < ApplicationJob queue_as :default def perform(token, idempotent_key) @charge = Stripe::Charge.create({ ... }, { idempotency_key: idempotent_key }) end end
Phil gave me a hand over on StackOverFlow. In short, he recommended I consider this approach:
- Before requesting the job be performed, create the order record with a nil charge_id
- After the Stripe transaction has been completed in the job, update the order with the returned charge_id | https://gorails.com/forum/how-do-i-access-an-attribute-from-a-background-worker-stripe-charge_id | CC-MAIN-2021-17 | refinedweb | 250 | 60.92 |
See man lcb_make_http_request for more info about doing RESTful queries to
Couchbase. You can also find the doc sources in the repo.
const char *docid = "_design/test";
const char *doc = "{\"views\":{\"all\":{\"map\":"
    "\"function (doc, meta) { emit(meta.id, null); }\"}}}";
lcb_http_cmd_t cmd;
lcb_http_request_t req;
memset(&cmd, 0, sizeof(cmd)); /* zero the fields we do not set */
cmd.version = 0;
cmd.v.v0.path = docid;
cmd.v.v0.npath = strlen(docid);
cmd.v.v0.body = doc;
cmd.v.v0.nbody = strlen(doc);
cmd.v.v0.method = LCB_HTTP_METHOD_PUT;
cmd.v.v0.content_type = "application/json";
lcb_error_t err = lcb_make_http_request(instance, NULL,
LCB_HTTP_TYPE_VIEW,
&cmd, &req);
I was able to figure out how to solve this problem by changing this line in
env.rb from:
Capybara.app = MainSinatra
to:
Capybara.app = eval "Rack::Builder.new {( " +
File.read(File.dirname(__FILE__) +
'/../../config.ru') + "
)}"
And this runs the application from the rack up file (config.ru) and loads
all the middleware that wasn't otherwise loading.
I found the answer in this blog.
See Monitoring Shape Based Region in the Location Awareness Programming
Guide.
- (BOOL)registerRegionWithCircularOverlay:(MKCircle*)overlay
andIdentifier:(NSString*)identifier
{
// Do not create regions if support is unavailable or disabled
if ( ![CLLocationManager regionMonitoringAvailable])
return NO;
// Check the authorization status
if (([CLLocationManager authorizationStatus] !=
kCLAuthorizationStatusAuthorized) &&
([CLLocationManager authorizationStatus] !=
kCLAuthorizationStatusNotDetermined))
return NO;
// Clear out any old regions to prevent buildup.
if ([self.locManager.monitoredRegions count] > 0) {
for (id obj in self.locManager.monitoredRegions)
[self.locManager stopMonitoringForRegion:obj];
}
// If the overlay's radius is too large, registration fails automatically,
// so clamp the radius to the max value.
CLLocationDegrees radius = overlay.radius;
if (radius > self.locManager.maximumRegionMonitoringDistance)
    radius = self.locManager.maximumRegionMonitoringDistance;
// Create the geographic region to be monitored.
CLRegion* geoRegion = [[CLRegion alloc]
    initCircularRegionWithCenter:overlay.coordinate
    radius:radius
    identifier:identifier];
[self.locManager startMonitoringForRegion:geoRegion];
return YES;
}
Try calling replace on the datetime instead of passing the tzinfo into the
__init__ of the datetime. Taken from the django docs:
import datetime
from django.utils.timezone import utc
now = datetime.datetime(9999, 1, 1).replace(tzinfo=utc)
You need a gemspec, the correct directory structure, and (most importantly)
a script that launches your app (with run, probably) placed into the bin/
directory.
A few more details on gem binaries here
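As a rough sketch (all names here are illustrative, not from the original
answer), the gemspec side of this might look like the following — the key
part is executables, which tells RubyGems to install bin/<name> onto the
user's PATH at gem install time:

```ruby
require "rubygems"

# Hypothetical agent.gemspec for a gem that wraps a Sinatra app.
gemspec = Gem::Specification.new do |s|
  s.name        = "agent"
  s.version     = "0.0.1"
  s.summary     = "Example gem wrapping a Sinatra app"
  s.authors     = ["example"]
  s.files       = ["lib/agent.rb"]
  s.bindir      = "bin"
  s.executables = ["test"]   # expects an executable script at bin/test
  s.add_dependency "sinatra"
  s.add_dependency "rack"
end

puts gemspec.executables.first   # => test
```

With this in place, gem install agent puts a wrapper for bin/test on the
PATH, so users can run it directly.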
UPDATE
An example as requested. I have made a gem called agent which depends on
sinatra (it also depends on rack). It has this definition of Agent::Server:
module Agent
# Your code goes here...
class Server < ::Sinatra::Base
get '/sync' do
[200, "yahoo!"]
end
end
I also created file called test with following contents:
#!/usr/bin/env ruby
require "rubygems"
require "agent"
Rack::Handler::WEBrick.run(
Agent::Server.new,
:Port => 9000
)
Then, if I run chmod 0755 test and ./test after that, I can go to and see yahoo!.
I suggest printing the 'env' variable by writing a simple program.
require "rubygems"
require "rack"
def pp(hash)
hash.map {|key,value| "#{key} => #{value}"}.sort.join("<br/>")
end
Rack::Handler::WEBrick.run lambda {|env| [200, {}, [pp(env)]]},
:Port => 3000
Then open http://localhost:3000 in your browser.
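Incidentally, you don't even need a server to see what a Rack app does with
env. A Rack app is just an object with a call method that returns
[status, headers, body], so you can invoke it directly with a hand-built
env hash (the keys shown are the standard CGI-style ones):

```ruby
# Calling a Rack app directly: no server, just #call(env).
app = lambda do |env|
  [200, {"Content-Type" => "text/plain"},
   ["method=#{env['REQUEST_METHOD']} path=#{env['PATH_INFO']}"]]
end

status, headers, body = app.call(
  "REQUEST_METHOD" => "GET",
  "PATH_INFO"      => "/demo"
)

puts status      # => 200
puts body.join   # => method=GET path=/demo
```

This is handy in tests, since it exercises the app without booting WEBrick.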
You need to tell Rails what to do when a Rack::Timeout error is thrown.
If you ignore it, execution will stop after 15 seconds (or whatever you
configure it to be).
If you want to show the user a nice error, you'll need to rescue from that
exception (like below).
You could do something like this
class ApplicationController < ActionController::Base
rescue_from Timeout::Error, with: :handle_timeout
protected
def handle_timeout
render "shared/timeout"
end
end
@padde makes a good point, you need to give us more information. However,
one easy way to selectively run things is to use environment variables. The
obvious classic use is to run some things in production and some things in
development etc, e.g.
if ENV["RACK_ENV"] == "production"
# do this
elsif ENV["RACK_ENV"] == "staging"
# do something almost the same
else
# do something quite different
end
Rack will generally set those vars for you, but you could use a different
one and if you wanted to run it from the commandline you could use env
MYVAR=1 bin/rackup config.ru.
Consider @padde's request and tell us your goal, not the implementation you
believe is best (considering you don't really know what's best, or else you
wouldn't be asking ;)) and perhaps you'll get a better answer.
You need to stub and set expectations on whatever method your custom logger
is calling to write its output.
So, for example, if your custom logger writes directly to STDERR with a
single argument, you would use:
STDERR.stub(:write) {|arg| expect(arg).to_not match(/password/)}
If your custom logger writes to the default logger (i.e.
env['rack.errors']) instead of STDERR, then you would stub the write method
of that object instead. I would have shown that example, but I can't figure
out how to get a hold of the Rack environment within an RSpec test.
A couple of notes on this implementation:
The use of any_number_of_times, as in
STDERR.should_receive(:write).any_number_of_times is deprecated in
favor of stub
There is no reasonable regex to match strings which
do not contain a particular substring.
Had exactly the same (rather annoying) thingie; ended up doing this on
my end:
In a before do block:
header 'Access-Control-Allow-Origin', '*'
header 'Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT'
(You may want to tweak according to RACK_ENV or such...)
Seems to work. Note this is not a full CORS implementation (of course it's
not) but I'll wait for that rack-cors gem fix...
HTH
It uses RAILS_ENV instead of RACK_ENV.
In my env.rb file I just had it output the ENV variable. Do this to double
check, but in mine I clearly see:
"RAILS_ENV"=>"test"
I had a similar experience, though using pjax-rails; the reason should be
the same on the JS side.
A very likely reason is that it takes a bit too long for your development
server to respond. Pjax won't wait that long by default, so it falls back
to a normal HTTP request.
The solution is to add a timeout option on the JS side:
$('.local-nav a').pjax('[data-pjax-container]', {timeout: 2000})
You can just have a default app that produces the error message if the
request doesn’t match any of the other mappings:
map '/one' do
run app1
end
map '/two' do
run app2
end
run default_app
Alternatively you could have a mapping for /, since “URLMap dispatches in
such a way that the longest paths are tried first, since they are most
specific”. (This is actually pretty much equivalent to having a default
app like above).
map '/one' do
run app1
end
map '/two' do
run app2
end
map '/' do
  run default_app
end
This is an old question but I ran into a similar problem with Sidekiq Web.
To get around this we can use route constraints.
In your routes file constrain the route to the mounted application:
require "sidekiq/web"
mount Sidekiq::Web, at: "/sidekiq", constraints: AdminConstraint.new
Then, in the AdminConstraint, you need to place the checks from your
before_filter that apply to your application's authentication. For
example, if I had the following in my ApplicationController:
class ApplicationController < ActionController::Base
before_filter :authenticate
def authenticate
user = User.find_by(email: params[:email])
if user && user.authenticate(params[:password])
session[:user_id] = user.id
end
end
end
I would put the corresponding session check in the AdminConstraint.
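For example, a sketch of an AdminConstraint along those lines. This assumes your authentication stores user_id in the session, as the filter above does; a real version would likely also load the user and check an admin flag:

```ruby
# Sketch: Rails calls matches?(request) on the constraint object for each
# request to the mounted route; returning false makes the route 404.
class AdminConstraint
  def matches?(request)
    # The session is reachable through the request object.
    !request.session[:user_id].nil?
  end
end
```

With this in place, mount Sidekiq::Web, at: "/sidekiq", constraints: AdminConstraint.new only admits requests whose session was populated by your login flow.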
Try the following:
class Pre
def initialize(app)
@app = app
end
def call(env)
# To be safe, reset the ARGV and rebuild, add any other items if needed
ARGV.clear
ARGV << "--debug"
ARGV << "--host" << "localhost"
if some_env_related_logic
ARGV << "--test"
end
Somecommand.new.call(env)
end
end
require 'Somecommand'
# Note the change: Somecommand is no longer run directly here.
run Pre.new(nil) # Pre ignores the wrapped app and dispatches to Somecommand itself
Apparently this is not possible. Rack-compatible servers usually support
this feature, but Rack::Server "interface" does not make use of it (at
least the latest version as of October 2013). Why not, is beyond me.
The rack-coffee README describes one way to do it:
If you want to serve stock javascript files from the same directory as
your coffeescript files, stick a Rack::File in your middleware stack after
Rack::Coffee.
Although Rack::File does not appear to be a middleware but a standalone
Rack app class, so I would instead use Rack::Static before your code above:
use Rack::Static,
:root => File.join(Dir.pwd, 'assets'),
:urls => ["/javascripts"]
use Rack::Coffee,
:root => File.join(Dir.pwd, 'assets'),
:urls => ["/javascripts"]
I presume you mean testing whether it's changing env...
A middleware goes something like:
class Foo
def initialize(app)
@app = app
end
def call(env)
# do stuff with env ...
status, headers, response = @app.call(env)
# do stuff with status, headers and response
[status, headers, response]
end
end
You could initialize it with a bogus app (or a lambda, for that matter)
that returns a dummy response after doing some tests:
class FooTester
attr_accessor :env
def call(env)
# check that env == @env and whatever else you need here
[200, {}, '']
end
end
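Putting those two pieces together, a self-contained sketch of the pattern (the middleware here just tags env so there is something to assert on):

```ruby
# Minimal, self-contained version of the pattern described above.
class Foo
  def initialize(app)
    @app = app
  end

  def call(env)
    env["foo.seen"] = true # stand-in for "do stuff with env"
    status, headers, response = @app.call(env)
    [status, headers, response]
  end
end

class FooTester
  attr_accessor :env

  def call(env)
    @env = env # capture what the middleware passed down
    [200, {}, [""]]
  end
end

tester = FooTester.new
status, _headers, _body = Foo.new(tester).call({})
```

Your real assertions would then inspect tester.env (and the returned triple) for whatever the middleware is supposed to have done.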
I've been able to get this to work by writing my own middleware which just
adds the Rails.logger to the Rack environment.
module Something
class UseRailsLogger
def initialize(app)
@app = app
end
def call(env)
env['rack.logger'] ||= Rails.logger
@app.call(env)
end
end
end
If you stash that in lib/something/use_rails_logger.rb, then you can add it
to your middleware stack and the logger will be available to every layer
that comes after it.
Note: I wanted to add it to config/application.rb since there's no reason
for this setting to be environment-dependent, but for some reason require
'something/use_rails_logger' wouldn't work from that file. Adding it to
config/environment/*.rb worked just fine. Besides the require, all you need
is:
config.middleware.use Something::UseRailsLogger
You are using both
enable :sessions
which makes Sinatra setup cookie based sessions, and
use Rack::Session::Cookie, ...
which also adds sessions to your app, so you end up with two instances of
Rack::Session::Cookie in your middleware stack.
The warning is being generated by the session middleware included by
Sinatra. By default Sinatra doesn’t create a session secret when running
in the development environment (in classic mode at least, it does for
modular apps), and so Rack generates the warning in development.
You should only need one of the two ways of enabling sessions, using two
together could result in them interacting in unexpected ways.
To avoid the warning, you can explicitly set a secret for the Sinatra
session with the session_secret option:
enable :sessions
set :session_secret, 'some long random string'
ActiveRecord provides a method for clearing connections manually -
ActiveRecord::Base.clear_active_connections!. Update the call method in the
middleware to clear the active connections after the changes are made in
the database.
def call(env)
# ... prepare in memory storage for what needs to change
return_value = @app.call(env)
# ... commit changes to the database
ActiveRecord::Base.clear_active_connections! # fixes the connection leak
return_value
end
According to "FormToken
lets through xhr requests without token."
So if you were relying on form tokens and had not taken extra steps to
protect against xhr requests, then this might be considered a security
risk. You might assume a request was genuine (since it's protected by
FormToken - right!?) when in fact it was a forgery. By forcing you install
FormToken explicitly, the developers are hoping that you will examine what
it does and take the necessary steps.
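If you do install it explicitly and also want XHR requests covered, the usual move is to stack another strategy next to the token check. A sketch using rack-protection's middleware names (verify them against the version you run):

```ruby
require "rack/protection"

use Rack::Protection::AuthenticityToken # session token check on unsafe requests
use Rack::Protection::HttpOrigin        # additionally vets the Origin header
```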
I've used PHP 5.4.x, specifically I used the Bitnami MAPP stack, which
comes with the couchbase.so already built for 5.4 (although you can do that
yourself easily from the source). Then connected it to my local Couchbase
instance (in my case I used the Laravel framework).
But now the couchbase.so is already in the Bitnami stack... so you can skip
that part.
Bitnami MAPP:
Bitnami MAMP: (also can be installed via Mac
App Store)
A long time ago the Couchbase engineers intended to build out a concept of
having pools similar to zfs pools, but for a distributed database. The
feature isn't dead, but just never got much attention compared to other
database features that needed to be added. What ended up happening was that
the pools/default just ended up being a placeholder for something that the
engineers wanted to build in the future. In the old days the idea was that
a pool would be a subset of buckets that was assigned to a subset of nodes
in the cluster and that this would help with management of large clusters
(100+ nodes).
So right now I would say don't worry about the whole pools concept because
in the current (2.x releases) this is a placeholder that doesn't have any
special meaning. In the future though there
If the Ruby runtime is booted - which in this case it seems to be - you
should be able to configure a (minimal) Rack error application: just set
something Rack-y (require 'my_error_app'; run MyErrorApp) as the
jruby.rack.error.app context parameter (e.g. in your web.xml with Warbler).
There's quite a lot of questions molded into one, I think.
The middleware itself would look something like this (I haven't checked it,
but it feels right):
class AntiHijackingMiddleware
  def initialize(app)
    @app = app
  end
  def call(env)
    status, headers, body = @app.call(env)
    if env["HTTP_X_REQUESTED_WITH"] == "XMLHttpRequest" &&
        headers['Content-Type'].to_s.include?("application/json")
      pieces = []
      body.each { |part| pieces << part } # Rack bodies respond to each, not to +
      new_body = "while(1);" + pieces.join
      body = [new_body]
      headers['Content-Length'] = Rack::Utils.bytesize(new_body).to_s
    end
    [status, headers, body]
  end
end
You can add additional conditions on env["REQUEST_URI"] to do url matching.
Adding it to Rails' middleware stack is boilerplate.
Assuming you have a non-Rack application and are following the instructions
in the second answer which references,
it seems to me that the hack/workaround is only valid for the simplest of
use cases, namely when you're not attempting to pass in parameters to the
app. I assume that either Rack or Capybara is attempting to invoke your app
to pass the parameters and it's failing because you app is just a String
and not a callable object.
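One way to sidestep the hack entirely: don't hand Capybara an app at all - run the non-Rack app yourself and point Capybara at it over HTTP. A sketch (the port and driver choice are assumptions):

```ruby
require "capybara"

Capybara.run_server = false                 # don't try to boot a Rack app
Capybara.app_host = "http://localhost:9000" # the app you started externally
Capybara.default_driver = :selenium         # a driver that speaks real HTTP
```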
You have already set up the cookie in your question, so I am not sure if you
mean something else by "setup".
Instead of env['rack.session'] you can use session[KEY] for simplicity.
session[:key] = "value" # will set the value
session[:key] # will return the value
Simple Sinatra example
require 'sinatra'
set :sessions, true
get '/' do
session[:key_set] = "set"
"Hello"
end
get "/sess" do
session[:key_set]
end
Update
I believe it wasn't working for you because you had set an invalid domain,
so I had to strip off :domain => 'foo.com',. BTW Sinatra wraps the Rack
cookie session and exposes the session helper, so the code above worked
fine for me. I believe the following code should work as expected.
require 'sinatra'
use Rack::Session::Cookie, :key => 'rack.session',
                           :expire_after => 2592000
Probably not, because AppCache is meant for static resources. Best to use a
static HTML page, and use JavaScript to load in dynamic content.
I encountered the same error then I renamed the name of the checkbox array
to something else other than the name of the model. It worked.
For example, you have @dashboard_banners as your model and you call your
checkbox dashboard_banners[]. I think that causes the error. Name it
something else, like d_banner[], then fetch it in the controller as
params[:d_banner]. Then you can loop over it in the controller or do
anything else you like.
The source for encrypted_cookie shows that it generates different encrypted
output every time it is called regardless of the input. There are 2 reasons
for this:
The library would have to know what the session value was during the last
request. It doesn't, all it does is accept a single input, the given
session. If you wished to circumvent this and just rewrite the cookie (I
suppose) you could, since you have the extra information available higher
up in the Sinatra app.
It's more secure. It doesn't leak information (if the cookie doesn't change
then an observer of the cookie knows nothing changed during the request),
and it gives an attacker less time to try and get to a meaningful value.
Your post endpoint must parse the posted JSON body itself, which I assume
you already do. Can you post how your endpoint works, plus the rack-test,
rack, ruby and sinatra version numbers? Please also mention how you test
whether the server is receiving anything - the test mockup may confuse
your detection.
post '/user' do
json_data = JSON.parse(request.body.read.to_s)
# or # json_data = JSON.parse(request.env["rack.input"].read)
...
end
As the error suggests ("No such middleware to insert before"), the issue is
with the middleware you are trying to insert before (and not the middleware
you are trying to insert, which was my initial assumption).
In Rails4, threading is enabled by default which removes Rack::Lock.
To find a replacement, you can run rake middleware from your rails project
directory, and look for something near the start of the stack. I'm going to
pick Rack::Runtime as it is early in the stack, and seems pretty standard.
So the rewrite config is now:
config.middleware.insert_before(Rack::Runtime, Rack::Rewrite) do
r301 %r{^/(.*)/$}, '/$1', :headers => {'Cache-Control' => 'public,
max-age='+2.week.to_s}
end
It's very common to use a hash of arrays so try:
headers = {
"Access-Control-Allow-Origin" => %w[
]
}
I've got a guess that it should be
{ "Access-Control-Allow-Origin" => [ 'a', 'b' ] * "\n" }
Looking at the RFC, the pertinent part is "5.1 Access-Control-Allow-Origin
Response Header" which points to:
So, try:
[ 'a', 'b' ] * ";"
Or, for the uninitiated:
[ 'a', 'b' ].join(";")
The library itself comes with manpages, which are the most up-to-date
documentation. The index page is man 3 libcouchbase; the page you need is
man 3 lcb_make_http_request. You can also find docs in asciidoc format in
the repo itself.
Between the 1.x and 2.x releases we changed the API a lot, so it is mostly
not backward compatible. The function libcouchbase_make_couch_request
was only accessible in the "developer preview" version (like a beta); it
was eventually renamed lcb_make_http_request, because you can use the same
call to create design documents and also perform admin tasks, like
creating/flushing/deleting buckets, etc.
Here is the code example from man page above:
lcb_http_request_t req;
I have recently done a short review of the respective merits of MongoDb and
CouchBase.
The most important thing is that there are more similarities than
differences, and both are good products that would work well in most cases.
I will sum up the differences by saying that MongoDB is generally easier to
install, use, and query (plus it has a larger community, as you say),
whereas Couchbase goes the extra mile on performance (memcache and insert
throughput), auto-scaling, and recovery from failure.
Personally, given a situation like yours of a (presumably) new app with a
query requirement and virtually no writes, I would go with MongoDB. It would
be faster to get working, and there would be solutions for optimizing reads
down the road if you needed them.
Introduction: Raspberry Pi Home Monitoring With Dropbox
This tutorial will show you how to create a simple and expandable home monitoring system using a Raspberry Pi, a webcam, a few electrical components and your Dropbox account. The finished system will allow you to remotely request and view images from your webcam while also using an off-the-shelf digital temperature sensor to monitor the temperature of your home over the internet, all using Dropbox.
This was the first project I thought up after receiving a Raspberry Pi 2 model B. My aim was to create a Python-based monitoring system that I could control and receive data from over the internet. While there are many different ways of doing this, I decided to use Dropbox as the interface between the Pi and the internet as they have a simple Python API which allows you to upload, modify and search for files in specific folders using a few lines of code.
I also wanted my solution to be lightweight and simple, and to avoid cluttering my Pi with unnecessary libraries and programs. The software component of this project consists of a single Python script, meaning that you can continue to use your Pi as normal, even when the monitoring system is running.
For this project you will need:
- A Raspberry Pi. Any model should work; I used an all-in-one starter kit, but you may only need the central unit.
- A USB webcam. I bought a cheap ADVENT AWC72015, which happened to work fine. It may be a good idea to consult this list of webcams which are confirmed to work with the Pi. Note that some require a powered USB hub (mine works fine without).
- A Dropbox account. I use my standard free account as this project does not require much storage space.
- A DS18B20 digital temperature sensor and a 4.7k resistor. You can buy the sensor here, and it might be worth grabbing a pack of various resistors too.
Step 1: Set Up the Hardware
The first step is to ensure that your Pi and the associated peripherals are set up.
First, connect your Pi to the internet. This is necessary to ensure that the monitoring program can receive your requests and upload data to Dropbox. I use an ethernet connection to ensure reliability, but a Wi-Fi connection should work fine too, while also having the advantage of improved portability. If you choose Wi-Fi, I'd recommend this USB dongle for the Pi.
Next, connect your webcam to the Pi by plugging it into one of the USB ports. While my Advent webcam's instructions did not explicitly say that it would work with Linux, all I had to do was plug it in and boot up the Pi. No further installation was needed. Other webcams may vary. You can check whether your webcam has been detected by Linux using the following command:
lsusb
In the above image, my webcam is listed as '0c45:6340 Microdia'
Finally, you can connect your DS18B20 temperature sensor to the Pi's GPIO header. I use my breadboard to make the process of creating circuits easier, and I'd recommend you do the same, especially as the DS18B20 requires a 4.7k resistor to be placed between two of its three pins. This link provides a good wiring diagram showing how a breadboard can be used to connect to this temperature sensor.
The next page of the above tutorial also covers the steps needed to read data in from the DS18B20, and shows you how to check that it is working. It is important to perform these setup steps before you can use the DS18B20 for this project. We will also be integrating the sample Python script from the tutorial into our monitoring program, so you may want to have a quick skim over this code.
Please also make note of your DS18B20's unique number. It is the number beginning with '28-' that you come across during the setup tutorial. You will need to enter it into the upcoming Python program to allow it to read in the temperature.
Step 2: Set Up Dropbox
In order for your Pi to interface with Dropbox, you need to set up a new Dropbox app. This will provide you with details needed for your Pi to perform online file management using Python. Assuming you have created a Dropbox account and logged in, you can create a new app using the 'Developers' menu option. See the above image for a summary of the important steps.
Within the 'Developers' menu, select 'My apps', then press the 'Create app' button. To fill out the resulting form, select 'Dropbox API' followed by 'App Folder'. Finally, you can choose a unique name for your app within Dropbox. Click 'Create app'.
You will then be taken to your app's settings page within Dropbox. There is only one further thing you need to do here - generate yourself an Access Token. To do this, scroll down to the 'OAuth 2' section and under 'Generated access token', click the 'Generate' button.
This will present you with a long string of characters which are needed to access your Dropbox account using Python. Make a note of this Access Token as you will need to specify it later in your code. If you lose the token, you can navigate back to your app's settings by clicking 'My apps' in the Dropbox 'Developers' section and generate a new token.
You can leave the other settings as they are. To confirm that your app has created the necessary folders on your Dropbox account, navigate to your storage homepage and look for the 'Apps' folder. Within this folder should be a sub-folder with the name you chose for your new app. This is where all files for your monitoring system will be placed.
Step 3: Preparing Your Dropbox App Folder
Once you have set up your Dropbox app, it's time to think about how you will use the resulting folder in your Dropbox account to interact with your Pi. This is accomplished quite simply. The Python script which will run on the Pi will use a subset of commands from the Dropbox API to search and modify the names of some empty, extension-less files in your app folder. We will call these files 'parameter files' as each one will allow you to control a different aspect of the monitoring system's behaviour. The image above shows the four parameter files which need to be present in your Dropbox app folder for this project. Creating them is simple:
Starting with your app folder completely empty, open a text editor program on your computer. While this could be done using the Pi, I found it easier to use my Windows laptop for this setup phase. Once the text editor is open (I used Notepad on Windows 7), all you need to do is save a completely empty text file anywhere on your computer. As our first example, we will create the first parameter in the header image. Name the file 'delay=10' when you save it.
To recap, you should now have an empty text file stored on your computer with the name 'delay=10'. The file will also have a '.txt' extension which may or may not be visible.
The next step is to upload this file to your Dropbox app folder. This is just like any other Dropbox upload. Simply navigate to your app's folder and click 'Upload' and choose your 'delay=10' file.
When this file has uploaded, you must remove the '.txt' extension which should now be visible in the filename. To do this, simply right click the file and select 'Rename'. Remove the '.txt' part of the filename. You should now be left with a file called 'delay=10' with no file extension, as shown in the header image.
The 'delay' parameter file is one of four which will be used by the monitoring program. To create the others, you can just copy and rename your 'delay' file by right clicking it. Once you have created three copies, name them as shown in the header image so that your app folder is identical to that shown at the beginning of this step.
Step 4: Getting Started With the Code
As discussed, the core of our monitoring system will consist of a single Python script which will interface with Dropbox. In order for the monitoring program to be active, this script will have to run in the background on your Pi. I guess it is most accurately described as a 'daemon' script, meaning you can just set it running and forget about it. The script is attached to this step, so there is no sense in repeating the code here. Now may be a good time to download it and familiarise yourself with it.
Before you will be able to run the script, it is important to ensure you have the relevant Python libraries installed. The ones you need are listed at the top of the attached script. They are:
import dropbox
import pygame.camera
import os
import time
The Python installation on my Pi already included pygame, os and time so the only one I had to install was Dropbox. I did this using their very simple installation instructions with pip.
Once your libraries are set up, you will need to edit the top two lines of the attached script to match your Dropbox Access Token and your DS18B20 temperature sensor's unique identifier. These are the two lines which need to be edited:
APP_ACCESS_TOKEN = '**********'
THERMOMETER_FILE = '/sys/bus/w1/devices/28-**********/w1_slave'
Just replace the ****s with the correct values. At this point, you are actually ready to start using the monitoring program! Instead of just jumping in, I'd recommend that you continue to the next step for a general overview of the code.
IMPORTANT: When you run this script, you want it to run in the background so that a) you can continue to use the Pi, and b) when you close your SSH session, the script will continue to run. This is the command I use when I run the script:
nohup python DropCamTherm.py &
This accomplishes three things: It will run the script ('python DropCamTherm.py'), it will return control to the command line immediately so you can continue to use the Pi ('&'), and it will send Python outputs that would normally be displayed on the command line into a file called 'nohup.out'. This can be read using a Linux text editor (my favourite is nano), and will be created automatically in the directory from which the script is being run.
Attachments
Step 5: Digging Deeper Into the Code
When you open the script, you will notice that it consists of three functions along with a block of code which implements these functions when the script is run. The functions use the Dropbox API and access the DS18B20's temperature log file in order to listen for commands from Dropbox and upload the latest temperature reading. Below is an overview of what the functions do, and how they are used to make the monitoring system work:
- poll_parameter():
This function shows the purpose of the Dropbox parameter files we created in step 3. It searches the Dropbox app folder for a file containing the text 'param='. It then extracts the text after the '=' and tries to convert it into an integer. You can see that this allows us to control the program by appending relevant numbers to the end of the parameter files manually. The next step will contain a brief instruction manual showing you how to use each of the parameter files to control an aspect of the program.
- set_parameter():
This function allows the program to rename a parameter file from within Python. It does this on a few occasions, mainly to reduce the need for excessive manual renaming of the files.
- set_latest_temp():
This function makes use of set_parameter() to upload the latest temperature to the Dropbox app folder by appending it to the 'temperature' parameter file. The function reads the latest temperature from the DS18B20's log file (which is available on Linux at the path pointed to by the THERMOMETER_FILE variable).
The final part of the program contains the code which will execute when the script is run. After some setup steps required for the DS18B20 sensor, it opens a Dropbox session using your Access Token and uses pygame to search out your webcam. If a webcam is found, it will enter a loop where it uses poll_parameter() to extract information from Dropbox and act on it.
IMPORTANT: You will notice the following line of code:
cam = pygame.camera.Camera(cam_list[0], (864, 480))
...this attempts to create a usable camera interface from the first webcam that pygame detects. The resolution may need to be changed to match your webcam. Experiment with a number of values to find what works best.
Step 6: Using the Dropbox Parameter Files
So now you should have a working script which, when run using the instructions from step 4, will allow your Pi to start monitoring the app folder for your inputs. On your first run, the app folder should contain the following parameter files:
delay=10
exitprogram=0
imagerequest=0
temperature=0
Interaction with the program is achieved by manually renaming the parameter files via Dropbox. To do this, just right-click one of the files and select 'rename'. Each parameter file has a different function:
- delay:
This file tells the monitoring program how many seconds to wait between each iteration of the monitoring loop. When I know that I won't be interacting with the program much, I set it to 60 or 120. When I know that I want to request data from the Pi often, I set it to 10.
- exitprogram:
This should be set to 1 or 0. If the program detects that it is set to 1, it will end the script. If you set it to 1 and the script exits, you will need to log in to the Pi again to start it back up. This parameter exists so that you can gracefully end the monitoring program when you no longer need it to be running (for example, if you have returned home and no longer want to monitor the webcam remotely).
- imagerequest:
This is perhaps the most important parameter. This should be set to 1 or 0. If the program detects that it is set to 1, it will request an image from the webcam and upload it into the app folder (with the title 'image.jpg'). If another 'image.jpg' exists, it will overwrite it.
- temperature:
This is the DS18B20 temperature reading set by the set_latest_temp() function. You should never need to edit this parameter file - it is automatically set by the program.
Note that if you set 'exitprogram' or 'imagerequest' to 1, the program will automatically return them to 0 before executing the relevant code. This is for convenience. You may also notice that the code contains a lot of 'try' and 'except' blocks surrounding many of the critical functions. This is to ensure that the script will not throw exceptions (and hence stop running) if something goes wrong (such as an internet connectivity problem preventing Dropbox access).
Step 7: Conclusion
This project has presented a way to control a Raspberry Pi using Python and Dropbox. While the hardware used in this project is a temperature sensor and a USB webcam, there are many other applications for this method of controlling the Pi. In fact, any hardware component that is accessible via GPIO can be controlled using a similar program structure, making the system very easy to expand.
As a next step, you could also use a GUI library such as Tkinter along with the Dropbox API to create a client program which would allow you to modify the parameter files without even needing to log in to Dropbox.
I hope that this tutorial has been clear, and if you have any questions or would like me to clarify anything, please post a comment!
Participated in the
Raspberry Pi Contest 2016
Be the First to Share
Recommendations
10 Comments
Question 1 year ago on Step 7
I'm a beginner in the python programming language. Could u please tell which command or function is actually responsible for updating the value in the dropbox?
5 years ago
Hi George.
Thanks for a great example! I am a total beginner with Raspberry Pi and python and was stopped first time with the 'pip install dropbox' command, but managed to find out that adding 'sudo' in front of installation command did the trick.
However, second stop came when I was trying to run the python script. I got a syntax error: "EOL while scanning string literal" on line 29.
I'm pretty sure you can help with this.
Reply 5 years ago
Hey,
Thanks very much for pointing that out - I've uploaded a fixed version of the script. Some of the lines ended with a '$' character when they shouldn't have. This was probably from when I copied the script from my Pi. The new version was downloaded from the Pi using FTP so it shouldn't have any corruption :)
Good job, thanks!
Reply 5 years ago
Ok, looks like lines 29 and 40 should end: 'and value $'
Works fine. Thanks!
5 years ago
nice job George.
will it be possible to use a raspicam instead of a webcam?
thanks a lot
Reply 5 years ago
Hey, thank you very much :) Sure, as long as you can get the camera to save an image to upload, it should work fine. You'd have to update the script to take a photo using the raspicam instead of using Pygame to access the USB webcam. Perhaps this is a good place to start:
5 years ago
neat! will try to use it alongside my pi which controls my 3d printer to get images of the work in progress while afk...
may add a few switches to interact with the printer as well: pause, abort,... call fire department. ;)
(last one was a joke)
the ambient temperature sensor may get swapped to the 2nd sensor on my hotend of the printer...
clearly and understandably written 'ible. thanks for the ideas and sharing. :)
Reply 5 years ago
Thanks for the comment, good to see you've come up with an even more interesting use for it so quickly! I might expand mine by adding another webcam so that I can look out the window too :)
5 years ago
Great idea for a monitoring system.
Reply 5 years ago
Thanks very much for reading! Yeah, it's fun to be able to check up on my house during the day, even if it's just to see whether it's sunny or not! | https://www.instructables.com/Raspberry-Pi-Home-Monitoring-With-Dropbox/ | CC-MAIN-2021-43 | refinedweb | 3,157 | 69.82 |
I'm either missing something small or theres a BIG problem somewhere. The following code compiles sucessfully with g++ 4.1.0 under SuSE 10.1 i586 (Linux 2.6.16.13-4-default i686) but it gives a Segmentation Fault when I run it.
#include <stdio.h>
#include <list>

typedef struct
{
    std::list<int> b;
} data;

int main(int argc, char **argv)
{
    data *tmp = (data*)malloc(sizeof(data));
    printf("Hello1\n");
    tmp->b.push_back(3);
    printf("Hello2\n");
    return 0;
}
Output:
Hello1
Segmentation fault
In fact, any operation on the list (and only the list, everything else works fine, even other members inside the same struct) results in some kind of segmentation fault as if the list needs initialization.
Any help will be appreciated (a lot). | https://www.daniweb.com/programming/software-development/threads/51712/big-problem-with-stl-list-container | CC-MAIN-2017-17 | refinedweb | 127 | 66.74 |
import "go.chromium.org/luci/common/runtime/paniccatcher"
Package paniccatcher exposes a set of utility structures and methods that support standardized panic catching and handling.
Example is a very simple example of how to use Catch to recover from a panic and log its stack trace.
Code:
Do(func() {
    fmt.Println("Doing something...")
    panic("Something wrong happened!")
}, func(p *Panic) {
    fmt.Println("Caught a panic:", p.Reason)
})
Output:
Doing something...
Caught a panic: Something wrong happened!
Catch recovers from panic. It should be used as a deferred call.
If the supplied panic callback is nil, the panic will be silently discarded. Otherwise, the callback will be invoked with the panic's information.
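A stand-in sketch of what that deferred usage looks like — this is illustrative code mimicking the documented behaviour, not the package's actual source:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// Panic mirrors the package's Panic struct for this sketch.
type Panic struct {
	Reason interface{}
	Stack  string
}

// catch is a stand-in for paniccatcher.Catch: deferred by the caller,
// it recovers any in-flight panic and hands a snapshot to the callback.
func catch(cb func(*Panic)) {
	if reason := recover(); reason != nil && cb != nil {
		cb(&Panic{Reason: reason, Stack: string(debug.Stack())})
	}
}

func riskyWork() {
	defer catch(func(p *Panic) {
		fmt.Println("Caught a panic:", p.Reason)
	})
	panic("boom")
}

func main() {
	riskyWork()
	fmt.Println("still running") // the panic was contained to riskyWork
}
```

Because recover only works when called directly from a deferred function, the `defer catch(...)` form is essential; calling it mid-function would do nothing.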
Do executes f. If a panic occurs during execution, the supplied callback will be called with the panic's information.
If the panic callback is nil, the panic will be caught and discarded silently.
type Panic struct {
    // Reason is the value supplied to the recover function.
    Reason interface{}

    // Stack is a stack dump at the time of the panic.
    Stack string
}
Panic is a snapshot of a panic, containing both the panic's reason and the system stack.
Package paniccatcher imports 1 package (graph) and is imported by 22 packages. Updated 2020-01-18.
How do you know when a page is being rendered as the result of a Server.Transfer, rather than a Response.Redirect or the user browsing directly to a page?
Actually it’s quite easy, assuming you’re using the default ASP.NET pipeline. In reality the “thing” that is responsible for handling an HTTP request is aptly called an HttpHandler – that is, they implement the IHttpHandler interface. Of course, you can create your own handlers if you just want to return a document, or your own manually rendered content, or similar.
But... the Page class also implements IHttpHandler, and ASP.NET therefore uses some cleverness to work out which page should be rendered, and then uses an instance of that Page-derived class as the HttpHandler.
So the bottom line is that if you do a Server.Transfer, the HttpHandler for the request will be the page that was originally being rendered... Therefore, the following code when executed from within a page will display “true” if a Server.Transfer has occurred:
TransferredLabel.Text = (Context.Handler != this).ToString();
Easy huh? Try the attached if you want to see this in action.
DetectingServerTransfer.zip
And how do you know if it is redirected?
There is no magic way I’m afraid, as a redirect is processed as a completely separate HTTP request from the button click (or whatever initiated the Response.Redirect).
I should also say that I think this isn’t such a bad thing – trying to customise the ASP.NET pipeline or behaviour too much often leads to headaches and maintainability problems. I’ve seen people use all sorts of tokens in the URL or cookies to indicate it was Redirected… but if you *need* to know this in the page there’s a good chance you’ve taken an approach that doesn’t fit well with ASP.NET.
The only reasons why detecting a Server.Transfer can be so useful are to try and avoid doing expensive tasks twice (if they were done in the transferring page already, for example), or to pick up on data posted to the previous page’s controls.
Hope that makes sense!
Simon
Thank you – this is just what I was looking for.
I know this is an old thread, but this is the top Google result for "detect server transfer."
I devised a much cleaner way of doing this using the HTTP response headers. I was using this for recaptcha validation, so I implemented the following on my page base class:
private const string cCaptchaHeaderKey = "X-Captcha";
private const string cCaptchaAcceptedValue = "Accepted";

public bool CaptchaAccepted
{
    get
    {
        return (Response.Headers.Get(cCaptchaHeaderKey) == cCaptchaAcceptedValue);
    }
    set
    {
        if (value)
            Response.Headers.Set(cCaptchaHeaderKey, cCaptchaAcceptedValue);
        else if (Response.Headers.Get(cCaptchaHeaderKey) != null)
            Response.Headers.Remove(cCaptchaHeaderKey);
    }
}
@ Captain (and lol at your screen name!)
I've re-read that 5 times and I'm still not sure I see what you're doing. You're storing a flag in the response headers collection to indicate you've already done something?
I'm not convinced that is "cleaner" than:
public bool Transferred
{
    get
    {
        return (Context.Handler != this);
    }
}
Surely that's simpler??!!
Simon | https://blogs.msdn.microsoft.com/simonince/2009/07/13/detecting-server-transfer/ | CC-MAIN-2017-47 | refinedweb | 523 | 56.76 |
Computing some simple statistics for the jobs that run on your Hadoop cluster can be very useful in practice. Data collection systems like Chuckwa probably allow you to do this, but if you don’t have hundreds of nodes, simply running the following shell script daily on your master might be all you need:
find /path/to/hadoop/logs/history/ -daystart -ctime 1 | \
grep -v 'xml$' | grep -v 'crc$' | while read FILE; do
  NAME="`basename $FILE`"
  sed 's/ *$//' $FILE | sed "s/\$/ JOBNAME=\"$NAME\"/"
done > /tmp/joblogs.txt
DATE="`date -d yesterday +"%Y/%m/%d"`"
/path/to/hadoop/bin/hadoop dfs -mkdir /user/hadoop/joblogs/$DATE
/path/to/hadoop/bin/hadoop dfs -put /tmp/joblogs.txt \
  /user/hadoop/joblogs/$DATE/joblogs.txt || exit 1
rm /tmp/joblogs.txt
This script takes all of yesterday’s job logs, adds a JOBNAME=”<filename>” field to each line, puts everything in a single file, and uploads this file to the DFS. Once you’ve got this set up, you can use Hadoop to analyse your job logs. Here’s an example in the form of a Dumbo program:
from dumbo import sumsreducer, statsreducer, statscombiner, main

class Mapper1:
    def __init__(self):
        from re import compile
        self.fieldselector = compile('([A-Z_]+)="([^"]*)"')

    def __call__(self, key, value):
        logtype = value.split(" ", 1)[0]
        if logtype.endswith("Attempt"):
            try:
                fields = dict(self.fieldselector.findall(value))
                jobname = fields["JOBNAME"]
                taskid = fields["TASK_ATTEMPT_ID"]
                if "START_TIME" in fields:
                    start = int(fields["START_TIME"])
                    yield (logtype, jobname, taskid), (start, 0)
                elif "FINISH_TIME" in fields and not "ERROR" in fields:
                    stop = int(fields["FINISH_TIME"])
                    yield (logtype, jobname, taskid), (0, stop)
            except KeyError:
                self.counters["Broken loglines"] += 1

# ... (reducer and runner definitions elided) ...

def starter(prog):
    date = prog.delopt("date")
    if date:
        prog.addopt("input", "/user/hadoop/joblogs/" + date)
        prog.addopt("name", "jobstats-" + date)

if __name__ == "__main__":
    main(runner, starter)
From the output of this program, you can easily generate a few charts that show you which jobs are slowest. We recently started playing with this at Last.fm, mainly because such charts allow us to identify the jobs on which we should focus our optimization efforts.
[…] Posts Simple job logs analysisRandom samplingHADOOP-1722 and typed bytes Indexing typed […]
[…] this on the job logs for one of our clusters (which are gathered by the shell script discussed in this previous post) led to the following […]
[…] thing you’ll need to do is follow Klaas’ tip for collecting job logs into HDFS using a simple cron job. If you don’t already have Dumbo and want to keep your dev environment clean, you can follow […]
Many.
The debugger is called HAP which stands for the Humongous Addition to Python. It was written (mostly by Neal Josephson) while at Humongous Entertainment, and Ms. Hap was a character in one of our in-development games, so the name is an inside joke.
The debugger’s user interface was designed to be as Visual Studio compatible as possible – F5 to start or continue debugging, F10 to single-step, F9 to toggle breakpoints. It has the expected set of windows (call stack, locals, watch window, output window, etc.), a syntax highlighting editor, project files, Perforce integration, remote debugging support, and a bunch of other features that I’ve forgotten.
The screen shot below shows a bunch of the features during a typical debugging session:
There are a lot of other Python debuggers available now, including debugging extensions for Visual Studio, but I have a sentimental attraction to this one, and it works quite well.
Features to look for
You can create projects that contain multiple Python source files – handy when working on large projects – and it is through project settings that you set things like the startup directory and command line arguments.
The ‘Both’ window (Error and Output) lets you type arbitrary Python commands. I find this an excellent way to prototype regular expressions when I’m running through early versions of scripts.
Ctrl+F7 does a compile (syntax check) of the current Python file.
The default layout is missing many of the useful debugger windows. Set a breakpoint with F9, launch your script with F5, then use the View menu to show the debug windows you want. They will then show up automatically whenever you are debugging.
Hap uses sockets for its debugging protocol so the client can be on any machine. With a bit of work the client could be compiled for a game console, Linux, etc.
Known limitations and gotchas
The watch window is supposed to let you type in an arbitrary expression, but this doesn’t work. So, I add items to the watch window by selecting them in the source code and typing Shift+F9. Somebody should fix that…
The watch window also starts out with the ‘Name’ field being zero pixels wide. You need to click the left edge of the Value field and drag it to the right to expose the name field. You only have to do this once per machine and then it remembers it. Somebody should fix that…
The locals window displays variables by asking them to convert themselves to text and can hang on every single-step if you have large lists. The most common cause of this is from writing code like this:
lines = open(file).readlines()
for line in lines:
    # Do stuff…
This is reasonable Python code, but the mini-hangs when processing huge files are annoying enough that I tend to use this style instead:
for line in open(file).readlines():
    # Do stuff…
A few simple heuristics in the watch window code (look for large lists, dictionaries, or strings) would avoid most of these mini-hangs. Somebody should fix that…
The HAP client gets compiled to a specific version of Python – it currently references python27.dll – and using it with different versions requires a recompile. Somebody should fix that…
Bug fixes for this release
After many years of neglect Hap needed a bit of work to get it back into shape and stronger than before. Some of the main fixes include:
- Removed a requirement that the HapClient be built with the same version of VC++ as Python, caused by passing FILE pointers to Python
- Got everything to build mostly warning free with VS 2010
- Added VS 2008 and VS 2010 solution files (VS 2005 no longer supported). The VS 2010 solution is recommended for future development as the VS 2008 solution may be out of date already
- Fixed erroneous use of namespace aliases
- Fixed some const-correctness errors
- Added pre-build checks for the existence of $PYTHON_ROOT\include\python.h to make build setup easier
- Fixed the function declarations for OnNcHitTest to match the new correctness
- Added and used a HAP_MIN template function to avoid namespace problems with min and MIN
- Added _CRT_SECURE_NO_WARNINGS to the preprocessor definitions to suppress lots of warnings, which should probably be individually addressed at some point
- Got debug builds to work by changing _DEBUG to NDEBUG and specifying the release CRT. The only difference now between debug and release should be optimization. This change was needed because a true debug build of the debugger would require a debug build of Python, which is unwieldy. The P4ApiWrapper project doesn’t build in debug for some reason, but the others all do
- Rewrote makerelease.bat significantly
- Fixed bug with files not in folders not getting saved, by forcing them into folders
- Fixed an infinite loop when dragging folders to the project root
The Hap Python Debugger for Windows can be downloaded (source and binaries) from here. Give it a try and let me know what you think.
The mini-hangs in the Python code you mention are caused by readlines() reading the entire file up front. The style you’re using as a solution (removing the variable) doesn’t actually solve that; readlines() is still being called. Instead, you’d want to use something like this:
for line in open(file):
# Do stuff…
This will read from the file line by line.
The hangs that I see are definitely not from the reading of the lines. That is a one-time thing and I’m generally okay with it. The mini-hangs are because every time you stop in the debugger (on every breakpoint, after every single step) HAP displays all of the locals and globals. Displaying a ten million entry list takes a while.
That said, I’ll try your suggested change — it seems like a good idea.
Pingback: Bugs I Got Other Companies to Fix in 2013 | Random ASCII | https://randomascii.wordpress.com/2013/04/11/python-debugger-update/ | CC-MAIN-2022-40 | refinedweb | 989 | 66.78 |
I'm in an environment where user/group information is maintained in /etc/passwd and /etc/group files, which are NFS mounted. This is nice because we can just edit flat files to change user/group information. However, the OS X machines in our setup don't like this very much, because Directory Services doesn't pick up on when these files change.
Therefore, I'm planning on setting up a cron job to run something like this once a day or so:
dsimport -g /etc/group /Local/Default O -T xDSStandardGroup -u $ADMIN_USER -p $ADMIN_PASS
The problem is those two last arguments at the end: user and password. I want to avoid writing out passwords in scripts, to reduce the risk of them being compromised. Is there any way of using dscl or dsimport without having to provide a password, but instead having them simply use the privileges of the user running the command? (You know, the way every standard Unix command does.) Or is there some other way of accomplishing this without writing out passwords in cleartext?
Just browsing through my notes on dscl, which I've scripted fairly extensively. I'm fairly sure that the answer is no, there is no way to avoid supplying the password. The only exception might be if you were root on the local box (which, in your example, does appear to be the case). [I've almost exclusively done changes over the network.]
If you use expect or pexpect, you can encode the password in a script (in a reversible manner), and then call into the program you need. [I've come up with a method to encode/decode something that looks like gobbledygook, but it is security through obscurity, I'm afraid.]
For using pexpect, something along these lines would work [note that this example uses dscl, and not dsimport! (I imagine it could be simplified a fair bit for your purposes; turning on the logging command for the dscl child helps when setting things up)]:
#!/usr/bin/env python
import pexpect
# If you don't have pexpect, you should be able to run
# 'sudo easy_install pexpect' to get it

### Fill in these variables
diradmin = "diradmin"
host = "host"
directory = '/Local/Default'  # '/LDAPv3/127.0.0.1'
# Note: it is possible to encode the data here so it is not in plain text!
password = "password"

DSCL_PROMPT = " > "  # Don't change this (unless the dscl tool changes)


def ReplyOnGoodResult(child, desired, reply):
    """Helps analyze the results as we try to set passwords.

    child = a pexpect child process
    desired = The value we hope to see
    reply = text to send if we get the desired result (or None for no reply)

    If we do get the desired result, we send the reply and return true.
    If not, we return false."""
    expectations = [pexpect.EOF, pexpect.TIMEOUT, '(?i)error', desired]
    desired_index = len(expectations) - 1
    index = child.expect(expectations)
    if index == desired_index:
        if reply:
            child.sendline(reply)
        return True
    else:
        return False


def RunDSCLCommand(dscl_child, command, invalid_path_is_success=False):
    """Issues one dscl command; returns if it succeeded or failed.

    command = the command to be sent to dscl, such as 'passwd Users/foo newpword'
    """
    assert dscl_child is not None, "No connection successfully established"
    # We should be logged in with a prompt awaiting us
    expected_list = [pexpect.EOF, pexpect.TIMEOUT,
                     '(?i)error', 'Invalid Path', DSCL_PROMPT]
    desired_index = len(expected_list) - 1
    invalid_path_index = desired_index - 1
    dscl_child.sendline(command)
    reply_index = dscl_child.expect(expected_list)
    if reply_index == desired_index:
        return True
    # Find the next prompt so that on the next call to a command like this
    # one, we will know we are at a consistent starting place.
    # dscl_child.before will likely contain
    # the error that occurred, but for now:
    dscl_child.expect(DSCL_PROMPT)
    if invalid_path_is_success and reply_index == invalid_path_index:
        # The item doesn't exist, but we will still count it
        # as a success. (Most likely we were told to delete the item).
        return True
    # one of the error conditions was triggered
    return False


# Here is the part of the program where we start doing things
prompt = DSCL_PROMPT
dscl_child = pexpect.spawn("dscl -u %s -p %s" % (diradmin, host))
# dscl_child.logfile = file("dscl_child.log", "w")  # log what is going on
success = False
if (ReplyOnGoodResult(dscl_child, "Password:", password) and
        ReplyOnGoodResult(dscl_child, prompt, "cd %s" % directory) and
        ReplyOnGoodResult(dscl_child, prompt, "auth %s %s" % (diradmin, password)) and
        ReplyOnGoodResult(dscl_child, prompt, None)):
    success = True

if success:
    # Now issue a command
    success = RunDSCLCommand(dscl_child, 'passwd Users/foo newpword')

dscl_child.close()
I have posted some of the code that I am using here; I'm afraid it is woefully unsupported (and posted to the pymacadmin group about it here). Unfortunately, it doesn't look like I wrote up anything on how to use it :(
Any help given would be much appreciated. I normally like to figure things out for myself, but this one has me stumped.
Thanks,
PS I only know basic coding commands, we haven't done extensive work on inbuilt functions and all, so my code will probably look a little, well basic, and that is because that is what I know.
#include <iostream>
#include <String>
using namespace std;

//Function prototype
bool palindrome(string, int, int);

int main()
{
    string word;
    int b, e;

    //call the function.
    if (palindrome(word, b, e) == true)
        cout << word << " Is a palindrome." << endl << endl;
    else if (palindrome(word, b, e) == false)
        cout << word << " Is not a palindrome." << endl << endl;

    system("PAUSE");
    return 0;
}

bool palindrome(string test, int b, int e)
{
    b = 0;
    e = test.size();
    if (b >= e)
        return true;
    if (test[b] != test[e])
        return false;
    if ((test[b]) == (test[e]))
    {
        return palindrome(test, b + 1, e - 1);
    }
    return true;
}
Unused CSS is one of the issues most web applications suffer from when it comes to performance and page load time.
Apart from option one in this post, most of the time you will need a tool and some manual intervention to be able to safely eliminate unused CSS. These tools are great in a sense that they will let you know what you don’t know; which classes in your template files are not used.
These tools can’t work with complex scenarios like when you have JavaScript adding a DOM element in the template.
They will have problems with dynamic templates in Single Page Applications (SPA) or when your template changes based on a state on your server side code.
As I said, unused CSS is one of the issues most web applications suffer from when it comes to performance and page load time.
This gets even worse when you use a CSS library like Tailwind, or older versions of CSS frameworks like Bootstrap or Material Design.
Note: most CSS frameworks have moved to a modular structure where you can import only the part you need without having to include the whole bundle.
In my opinion, the best and safest approach is to be careful and get rid of any CSS file or part you remove from your HTML or template files. Do not be lazy or ignorant when it comes to tech debts like this. If you're involved in a green field project, make sure you do not copy-paste large chunks of CSS from somewhere you're looking into without realising which parts are actually used.
You can manually find your unused CSS using the DevTools in Google Chrome by following these steps:
- Open the DevTools and bring up the Command Menu (Cmd + Shift + P on macOS, Ctrl + Shift + P elsewhere)
- Type "coverage" and select "Show Coverage"
- Click the reload button in the Coverage tab to record which rules are used while the page loads
Any CSS rule which has a solid green line on the left side is used. Those with a red line are not:
Warning: Just because a rule isn’t used on this page doesn’t mean it’s not used elsewhere. You would ideally check the coverage on all pages and combine the result for a better overview.
Sometimes you cannot adhere to ☝🏼 point because of various reasons, such as when you got involved in a brown field project or the code base is too large to be able to refactor and fix the issue in a timely fashion.
In this case you might be looking at some tool to automate the process and do the clean-up systematically during build time. Fortunately there are many tools available which help you with this. I will cover some famous ones and mention a short list at the end for good measure 😉.
PurifyCSS is a tool that can analyse your files, go through code, and figure out what classes are not used. Most of the time when you have static HTML files this tool can eliminate nearly all your unused CSS.
Apart from that, it can also work to a degree with Single Page Applications (SPA).
Standalone
You can install this package via npm:
npm i -D purify-css
And some basic usage:
import purify from 'purify-css'
const purify = require('purify-css')

let content = ''
let css = ''

let options = {
  output: 'filepath/output.css',
}

purify(content, css, options)
If you’re wondering well, this is not gonna help me, you might be right. But this is the simplest form. In fact in the
purify command,
content and
css parameters can be an
Array of glob file patterns. Now you see the bigger picture and how this can help.
Now let’s make it a bit more complex.
via Grunt
First you need to install the grunt package:
npm install grunt-purifycss --save-dev
And then use it:
grunt.initConfig({
  purifycss: {
    options: {},
    target: {
      src: ['path/to/*.html', 'path/to/*.js'],
      css: ['path/to/*.css'],
      dest: 'tmp/purestyles.css',
    },
  },
})
This will handle even scenarios when you have a class added using JavaScript 😍. So this will be picked up:
<!-- html -->
<!-- class directly on element -->
<div class="button-active">click</div>
Or in JavaScript:
// javascript
// Anytime your class name is together in your files, it will find it.
$(button).addClass('button-active')
Or even a bit more complex scenario:
// Can detect if class is split.
var half = 'button-';
$(button).addClass(half + 'active');

// Can detect if class is joined.
var dynamicClass = ['button', 'active'].join('-');
$(button).addClass(dynamicClass);

// Can detect various more ways, including all Javascript frameworks.
// A React example.
var classes = classNames({
  'button-active': this.state.buttonActive
});
return <button className={classes}>Submit</button>;
Warning: The Webpack plugin for purifycss is deprecated. You will need to use Purgecss which I will go through later on.
CLI
Install the CLI:
npm install -g purify-css
And you can see the help using the -h param:
purifycss -h

purifycss <css> <content> [option]

Options:
  -m, --min        Minify CSS                                 [boolean] [default: false]
  -o, --out        Filepath to write purified css to          [string]
  -i, --info       Logs info on how much css was removed      [boolean] [default: false]
  -r, --rejected   Logs the CSS rules that were removed       [boolean] [default: false]
  -w, --whitelist  List of classes that should not be removed [array] [default: []]
  -h, --help       Show help                                  [boolean]
  -v, --version    Show version number                        [boolean]
This library can help you big time. It's pretty effective and works on complex scenarios. But as I mentioned before, you will need to have good tests to be able to find out if anything is messed up after clean-up.
In terms of Bootstrap, PurifyCSS can reduce up to ~ 33.8% of unused CSS.
Purgecss is another powerful tool to remove unused CSS. It can be used as part of your development workflow. It comes with a JavaScript API, a CLI, and plugins for popular build tools.
You can install Purgecss globally or use npx.
npm i -g Purgecss
And use it:
Purgecss --css <css> --content <content> [option]
As you can see, the API is very similar to PurifyCSS, but of course the options are different.
You should install the plugin first:
npm i -D Purgecss-webpack-plugin
And then add it to your Webpack config:
const path = require('path')
const glob = require('glob')
const ExtractTextPlugin = require('extract-text-webpack-plugin')
const PurgecssPlugin = require('Purgecss-webpack-plugin')

const PATHS = {
  src: path.join(__dirname, 'src'),
}

module.exports = {
  entry: './src/index.js',
  output: {
    filename: 'bundle.js',
    path: path.join(__dirname, 'dist'),
  },
  module: {
    rules: [
      {
        test: /\.css$/,
        use: ExtractTextPlugin.extract({
          fallback: 'style-loader',
          use: 'css-loader?sourceMap',
        }),
      },
    ],
  },
  plugins: [
    new ExtractTextPlugin('[name].css?[hash]'),
    new PurgecssPlugin({
      paths: glob.sync(`${PATHS.src}/**/*`, {
        nodir: true,
      }),
    }),
  ],
}
For more information please refer to their documents.
UnCSS is another tool to help you remove your unused CSS. But this tool is a bit different in the sense that it will load your files in jsdom, then parse all the stylesheets with PostCSS.
Once finished, document.querySelector will filter out selectors that are not found in the HTML files. And finally, the remaining rules are converted back to CSS.
Note: The best thing about this library, which I am in love with, is their unofficial server, where you can paste your HTML and CSS and it will show you the shortened CSS online.
Again you can use npm to install it, or use npx:
npm i -g uncss
You can use UnCSS with node:

var uncss = require('uncss')

var files = [/* your HTML files */],
  options = {
    banner: false,
    uncssrc: '.uncssrc',
    userAgent: 'Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X)',
    inject: function (window) {
      window.document
        .querySelector('html')
        .classList.add('no-csscalc', 'csscalc')
    },
  }

uncss(files, options, function (error, output) {
  console.log(output)
})

/* Look Ma, no options! */
uncss(files, function (error, output) {
  console.log(output)
})

/* Specifying raw HTML */
var rawHtml = '...'
uncss(rawHtml, options, function (error, output) {
  console.log(output)
})
Or use their build flows, which support Gulp, Grunt and Broccoli (unfortunately no Webpack 😔). For more information about how to set up those tools refer to their documentation.
Now let’s compare these tools and see their pros and cons.
Some people believe the biggest problem with PurifyCSS is lack of modularity. Some think it’s also its biggest benefit 🤷🏽♂️. It can work with any file type not just HTML. It can also find selectors which are added using JavaScript.
Unfortunately, since every word is considered a selector, this can result in a lot of false positives and make the resulting CSS a bit larger than it should be.
Purgecss fixes the above issue by providing the possibility to create an extractor. This is simply a function which takes the content of a file and extracts the list of CSS selectors used in it.
The extractor can be used as a parser that returns an AST (abstract syntax tree) and looks through it to find any CSS selectors. This is the way purge-from-html works. You can specify which selectors you want to use for each file type, allowing you to get the most accurate results. Additionally, you can use the default or legacy extractor, which will mimic PurifyCSS's behaviour.
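At its core, an extractor is just a function from file content to a list of candidate selectors. A minimal sketch — the regex below is an assumption modelled on the default extractor's behaviour, and the config wiring follows the v1-era API shape, so treat both as illustrative rather than Purgecss's exact code:

```javascript
// A Purgecss "extractor" boils down to: take a file's content,
// return every token that could be a CSS selector.
function defaultStyleExtractor(content) {
  return content.match(/[A-Za-z0-9_-]+/g) || []
}

// How it could be wired into a Purgecss config (a sketch):
const config = {
  content: ['**/*.html'],
  css: ['**/*.css'],
  extractors: [
    {
      extractor: { extract: defaultStyleExtractor },
      extensions: ['html'],
    },
  ],
}

console.log(defaultStyleExtractor('<div class="button-active">hi</div>'))
// → [ 'div', 'class', 'button-active', 'hi', 'div' ]
```

A framework-specific extractor would replace the regex with a real parse of the template syntax, which is exactly what purge-from-html does for HTML.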
That said, Purgecss has some drawbacks too. First and foremost, you will need to write a custom extractor for frameworks like Tailwind. Another problem is when you use a syntax highlighter like Prism.js, in which case you will have to whitelist your token classes using a property called whitelistPatternsChildren.
Another point to consider is that Purgecss doesn’t have an extractor for JavaScript files. But because of its modularity, developers can create custom extractors for frameworks like Angular, React or Vue.
Because of its HTML emulation and JavaScript execution, UnCSS is effective at removing unused selectors from web applications. However, its emulation can have a cost in terms of performance and practicality.
At this point in time, UnCSS is probably the most accurate tool to remove unused CSS if you do not use server-side rendering.
Here is a list of other tools you can consider:
Although these tools are really helpful in terms of finding unused CSS, each has its own drawbacks and you will need to be careful to not end up with a broken UI.
Hope you’ve gained just a tiny bit insight on how to find your unused CSS and deploy them to space 😁👇.
Deploy... to Space?!? 🚀🔥🤯— StackBlitz (@stackblitz) May 2, 2019 | https://yashints.dev/blog/2019/05/07/unused-css/ | CC-MAIN-2019-35 | refinedweb | 1,675 | 64.51 |
Windows.UI.Xaml.Markup Namespace
Classes
Structs
Interfaces
Remarks
Many of the types in this namespace are infrastructure or types that support uncommon scenarios. But there are two types in this namespace that apps might use in more typical app scenarios.
- XamlParseException is the specialized exception that is thrown by the Windows Runtime XAML parser in cases where it attempts to load XAML but can't generate the expected run-time object tree from that XAML. Most of the time any problems with XAML are detectable at design-time, but it's still possible for problems to occur that would only be known at run-time, in which case you get a XamlParseException. XamlParseException is only thrown if your app is written using C# or Microsoft Visual Basic (Visual C++ component extensions (C++/CX) uses Platform::COMException instead).
- XamlReader is a static class that can parse XAML and produce object trees. This class enables run-time access to the Windows Runtime XAML parser, the same parser that's used when XAML UI definition pages are parsed into object representations when an app starts. You can then connect the generated object tree to other existing UI elements and make the new objects appear in your UI.
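As a sketch of that run-time parsing (C#; `XamlReader.Load` takes a XAML string and returns the root object, per the documentation — the markup string itself is illustrative):

```csharp
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Markup;

// Parse a XAML fragment at run time and get back the corresponding object tree.
string xaml =
    "<TextBlock xmlns='http://schemas.microsoft.com/winfx/2006/xaml/presentation' " +
    "Text='Created at run time'/>";
TextBlock block = (TextBlock)XamlReader.Load(xaml);
// 'block' can now be added to an existing panel to appear in the UI.
```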
| https://docs.microsoft.com/en-us/uwp/api/Windows.UI.Xaml.Markup | CC-MAIN-2019-22 | refinedweb | 227 | 55.37 |
Thanks. I went with James Clark's code. Once I taught it to use namespaces it worked just fine for me.

--Marty

----- Original Message ----
From: Michael Kay <mike@...>
To: Mailing list for SAXON XSLT queries <saxon-help@...>
Sent: Friday, September 29, 2006 2:30:20 PM
Subject: Re: [saxon] Attribute Order

Well, you could put Saxon's output through an XML canonicalizer: some of my test suites use a canonicalizer produced years ago by James Clark. Or you could tweak the Saxon serializer. With the new features in 8.8 this shouldn't be too hard:

* create a subclass of XMLEmitter in which you intercept the calls on attribute() and namespace() and startContent() to buffer the attributes and sort them

* subclass SerializerFactory, overriding the newXMLEmitter() method to instantiate your subclass of XMLEmitter

* register this subclass of SerializerFactory with the Configuration object

Michael Kay

From: saxon-help-bounces@... [mailto:saxon-help-bounces@...] On Behalf Of Martin Wegner
Sent: 29 September 2006 19:54
To: saxon-help@...
Subject: [saxon] Attribute Order

Whenever I see someone write into this list about having trouble with the order of attributes in a serialized XML message, I always laugh. But now I am laughing at myself. I have found myself in the difficult spot where I need two different machines, with the same JARs, JRE and classpath, to produce the same sequence of bytes for a given DOM. Is there any solution to this aside from writing my own serializer?

And yes, you can start laughing.

--Marty

_______________________________________________
saxon-help mailing list
saxon-help@...
| https://sourceforge.net/p/saxon/mailman/message/13251323/ | CC-MAIN-2018-17 | refinedweb | 346 | 54.32 |
Red Hat Bugzilla – Bug 107865
Panel does not respect %f for launchers
Last modified: 2007-04-18 12:58:44 EDT
Creating a launcher for a small program...
When the command includes %u, the file gets passed to the program correctly as a
URL. However, using %f instead, which according to the Freedesktop.org spec
should pass the full path to the file, fails and passes _nothing_.
Correction, does not fail, simply launches the program with no argument where it
should be passing the path to the file.
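For context, the field codes come from the launcher's Exec line; a minimal desktop-entry sketch (all values here are illustrative):

```ini
[Desktop Entry]
Type=Application
Name=ArgPrinter
Exec=/path/to/program %f
Terminal=true
```

Per the freedesktop.org Desktop Entry Specification, %f should expand to a single local file path and %u to a single URL.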
Is this still an issue with FC 1?
Dan: just tried this out and it works fine for me. Any more details?
Mark, how are you testing it?
#include <stdio.h>
#include <stdlib.h> /* for exit() */
#include <unistd.h> /* for sleep() */
int main( int argc, char *argv[] )
{
fprintf( stderr, "args: %d %s %s\n", argc, argv[0], argv[1] );
sleep( 5 );
exit( 0 );
}
Then, using a launcher with the command "/path/to/program %f" and
specifying "Run in Terminal", drag a document onto the launcher. The
program pauses after printing its args. Note that %f is (null) while
a %u actually works.
%f: args: 1 /home/boston/dcbw/thing (null)
%u: args: 2 /home/boston/dcbw/thing
gnome-panel-2.5.3.1-6 currently, but has existed since FC1 betas at
least, probably earlier
Hmm, I added a launcher to the panel which pointed at as script:
#!/bin/bash
echo $@ > /tmp/t.tmp
and then tried dragging a file onto it with both %f and %u and it worked.
Could you confirm that works for you ?
The actual bug is because the gnome-desktop library uses
gnome_vfs_uri_is_local(), which causes files from NFS-mounted
home dirs to be skipped over. Upstreaming this bug: gnome.org #135629 | https://bugzilla.redhat.com/show_bug.cgi?id=107865 | CC-MAIN-2017-30 | refinedweb | 282 | 74.79 |
On Tuesday, 7 October 2008, 22:09, Andrew Coppin wrote:
> Daniel Fischer wrote:
> > On Tuesday, 7 October 2008, 20:27, Andrew Coppin wrote:
> >> Basically, the core code is something like
> >>
> >> raw_bind :: (Monad m) => [[x]] -> (x -> m (ResultSet y)) -> m (ResultSet y)
> >> raw_bind [] f = return empty
> >> raw_bind (xs:xss) f = do
> >>   rsYs <- mapM f xs
> >>   rsZ <- raw_bind xss f
> >>   return (foldr union (cost rsZ) rsYs)
> >>
> >> As you can see, this generates all of rsZ before attempting to return
> >> anything to the caller. And I'm really struggling to see any way to avoid that.
> >?
> > If I'm doing this right, it seems that
> >
> > rsZ <- raw_bind xss f
> > ...
> >
> > desugars to
> >
> > raw_bind xss f >>= \rsZ -> ...
> >.)
| http://www.haskell.org/pipermail/haskell-cafe/2008-October/048906.html | CC-MAIN-2014-42 | refinedweb | 112 | 70.33 |
Lab 7: Midterm Review
Starter Files
Download lab07.zip. Inside the archive, you will find starter files for the questions in this lab, along with a copy of the OK autograder.
Submission
This lab will not be graded. You do not need to submit anything.
Control
Question 1: Abundant
Implement a function
abundant that takes a positive integer
n. It prints
all ways of multiplying two positive integers to make
n. It returns
whether
n is an abundant number, meaning that the sum of its proper
divisors is greater than
n. A proper divisor of
n is an integer smaller
than
n that evenly divides
n.
Hint: To print
1 * 2, use the expression
print(1, '*', 2)
def abundant(n):
    """Print all ways of forming positive integer n by multiplying two
    positive integers together, ordered by the first term. Then, return
    whether the sum of the proper divisors of n is greater than n.
    A proper divisor of n evenly divides n but is less than n.

    >>> abundant(12)  # 1 + 2 + 3 + 4 + 6 is 16, which is larger than 12
    1 * 12
    2 * 6
    3 * 4
    True
    >>> abundant(14)  # 1 + 2 + 7 is 10, which is not larger than 14
    1 * 14
    2 * 7
    False
    >>> abundant(16)
    1 * 16
    2 * 8
    4 * 4
    False
    >>> abundant(20)
    1 * 20
    2 * 10
    4 * 5
    True
    >>> abundant(22)
    1 * 22
    2 * 11
    False
    >>> r = abundant(24)
    1 * 24
    2 * 12
    3 * 8
    4 * 6
    >>> r
    True
    >>> r = abundant(25)
    1 * 25
    5 * 5
    >>> r
    False
    >>> r = abundant(156)
    1 * 156
    2 * 78
    3 * 52
    4 * 39
    6 * 26
    12 * 13
    >>> r
    True
    """
    d, total = 1, 0
    while d*d <= n:
        if n % d == 0:
            print(d, '*', n//d)
            total = total + d
            if d > 1 and d*d < n:
                total = total + n//d
        d = d + 1
    return total > n
Use OK to test your code:
python3 ok -q abundant
Question 2:
Use OK to test your code:
python3 ok -q same_hailstone
Higher-Order Functions
Question 3: Piecewise

def piecewise(f, g, b):
    """Returns the piecewise function h where:

    h(x) = f(x) if x < b,
           g(x) otherwise

    >>> def negate(x):
    ...     return -x
    >>> identity = lambda x: x
    >>> abs_value = piecewise(negate, identity, 0)
    >>> abs_value(6)
    6
    >>> abs_value(-1)
    1
    """
    def h(x):
        if x < b:
            return f(x)
        return g(x)
    return h
Use OK to test your code:
python3 ok -q piecewise
Question 4: Smoothing
The idea of smoothing a function is a concept used in signal
processing among other things. If
f is a one-argument function and
dx is some small
number, then the smoothed version of
f is the function whose value at
a point
x is the average of
f(x - dx),
f(x), and
f(x + dx).
Write a function
smooth that takes as input a function
f and a
value to use for
dx and returns a function that computes the smoothed
version of
f. Do not use any
def statements inside of
smooth; use
lambda expressions instead.
def smooth(f, dx):
    """Returns the smoothed version of f, g where

    g(x) = (f(x - dx) + f(x) + f(x + dx)) / 3

    >>> square = lambda x: x ** 2
    >>> round(smooth(square, 1)(0), 3)
    0.667
    """
    return lambda x: (f(x - dx) + f(x) + f(x + dx)) / 3
Use OK to test your code:
python3 ok -q smooth
It is sometimes valuable to repeatedly smooth a function (that is,
smooth the smoothed function, and so on) to obtain the
n-fold
smoothed function. Show how to generate the
n-fold smoothed function,
n_fold_smooth, of any given function using your
smooth function and
repeated function:
def repeated(f, n):
    """Returns a single-argument function that takes a value, x, and
    applies the single-argument function f to x n times.

    >>> repeated(lambda x: x*x, 3)(2)
    256
    """
    def h(x):
        for k in range(n):
            x = f(x)
        return x
    return h
As with
smooth, use
lambda expressions
rather than
def statements in the body of
n_fold_smooth.
def n_fold_smooth(f, dx, n):
    """Returns the n-fold smoothed version of f

    >>> square = lambda x: x ** 2
    >>> round(n_fold_smooth(square, 1, 3)(0), 3)
    2.0
    """
    return repeated(lambda g: smooth(g, dx), n)(f)
The
repeated function takes in a single-argument function
f and
returns a new single-argument function that repeatedly applies
f to
its argument. We want to repeatedly apply
smooth, but
smooth is a
two-argument function. So we first have to convert it to a one-argument
function, using a
lambda expression. Then
repeated returns a
function that repeatedly smooths its input function, and we apply this
to
f to get an
n-fold smoothed version of
f.
Use OK to test your code:
python3 ok -q n_fold_smooth
Lambdas
Question 5: Lambda the Plentiful
Try drawing an environment diagram for the following code and predict what Python will output.
Note: This is a challenging problem! Work together with your neighbors and see if you can arrive at the correct answer.
You can check your work with the Online Python Tutor, but try drawing it yourself first!
>>> def go(bears):
...     gob = 3
...     print(gob)
...     return lambda ears: bears(gob)
>>> gob = 4
>>> bears = go(lambda ears: gob)
______
3
>>> bears(gob)
______
4
Hint: What is the parent frame for a lambda function?
Question 6:
Linked Lists
Question 7: Deep Linked List Length
A linked list that contains one or more linked lists as elements is called a
deep linked list. Write a function
deep_len that takes in a (possibly deep)
linked list and returns the deep length of that linked list, which is the sum
of the deep length of all linked lists contained in a deep linked list.
def deep_len(lnk):
    """Returns the deep length of a possibly deep linked list.

    >>> deep_len(link(1, link(2, link(3, empty))))
    3
    >>> deep_len(link(link(1, link(2, empty)), link(3, link(4, empty))))
    4
    >>> deep_len(link(link(link(1, link(2, empty)), \
            link(3, empty)), link(link(4, empty), link(5, empty))))
    5
    """
    if not is_link(lnk):
        return 1
    elif lnk == empty:
        return 0
    else:
        return deep_len(first(lnk)) + deep_len(rest(lnk))
Use OK to test your code:
python3 ok -q deep_len
Question 8: Linked Lists as Strings
Marvin and Brian like different ways of displaying the linked list
structure in Python. While Marvin likes box and pointer diagrams,
Brian prefers a dotted, parenthesized representation.

def make_to_string(front, mid, back, empty_repr):
    """Returns a function that turns linked lists into strings.

    >>> marvins_to_string = make_to_string("[", "|-]-->", "", "[]")
    >>> brians_to_string = make_to_string("(", " . ", ")", "()")
    >>> lst = link(1, link(2, link(3, link(4, empty))))
    >>> marvins_to_string(lst)
    '[1|-]-->[2|-]-->[3|-]-->[4|-]-->[]'
    >>> marvins_to_string(empty)
    '[]'
    >>> brians_to_string(lst)
    '(1 . (2 . (3 . (4 . ()))))'
    >>> brians_to_string(empty)
    '()'
    """
    def printer(lst):
        if lst == empty:
            return empty_repr
        else:
            return front + str(first(lst)) + mid + printer(rest(lst)) + back
    return printer
Use OK to test your code:
python3 ok -q make_to_string
Trees
Question 9:

def tree_map(fn, t):
    """Maps the function fn over the entries of tree t
    and returns the result in a new tree.

    >>> numbers = tree(1,
    ...               [tree(2,
    ...                     [tree(3),
    ...                      tree(4)]),
    ...                tree(5,
    ...                     [tree(6,
    ...                           [tree(7)]),
    ...                      tree(8)])])
    >>> print_tree(tree_map(lambda x: 2**x, numbers))
    2
      4
        8
        16
      32
        64
          128
        256
    """
    if children(t) == []:
        return tree(fn(entry(t)), [])
    mapped_subtrees = []
    for subtree in children(t):
        mapped_subtrees += [tree_map(fn, subtree)]
    return tree(fn(entry(t)), mapped_subtrees)

# Alternate solution
def tree_map(fn, t):
    return tree(fn(entry(t)), [tree_map(fn, t) for t in children(t)])
Use OK to test your code:
python3 ok -q tree_map
Question 10:

def add_trees(t1, t2):
    """Returns a tree in which each entry is the sum of the corresponding
    entries of t1 and t2; branches present in only one tree are kept."""
    if t1 is None:
        return t2
    if t2 is None:
        return t1
    new_entry = entry(t1) + entry(t2)
    t1_children, t2_children = children(t1), children(t2)
    # Pad the shorter list of children with None so zip does not drop branches
    while len(t1_children) < len(t2_children):
        t1_children = t1_children + [None]
    while len(t2_children) < len(t1_children):
        t2_children = t2_children + [None]
    return tree(new_entry, [add_trees(child1, child2)
                            for child1, child2 in zip(t1_children, t2_children)])
Use OK to test your code:
python3 ok -q add_trees | http://inst.eecs.berkeley.edu/~cs61a/su16/lab/lab07/ | CC-MAIN-2018-17 | refinedweb | 1,247 | 58.96 |
Hello
It seems it might be a good idea if it were possible to uninstall GNU Radio properly.
I currently have two systems failing (hard) using the new build.
My gentoo box (configured using cmake in another thread)
gives me the error :
ImportError: libgruel-3.4.2git.so.0: cannot open shared object file: No
such
file or directory
Whenever i try
from gnuradio import digital.
funny part is: I never succeeded in installing 3.4.2, so I don’t blame
it
for not finding it.
I tried doing a manual ldconfig, but it didn’t seem to do the trick.
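For anyone debugging the same thing, the usual checks look something like this (paths are illustrative and depend on your install prefix):

```sh
sudo ldconfig                         # rebuild the dynamic linker cache
ldconfig -p | grep libgruel           # check whether the library is registered
ls /usr/local/lib | grep libgruel     # confirm where 'make install' actually put it
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH   # per-shell fallback
```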
On an ubuntu machine (xubuntu to be specific) using the build-gnuradio
script, most of the digital schemes fail
due to the reallocation of packets to digital. This includes stuff that
should be updated.
Is it possible that the python stuff does not get properly updated and
is
there any way to fix this?
Downgrading, by adding a “git checkout v3.4.2”, makes the build run
fine again.
On both systems, the build itself completes without problems. | https://www.ruby-forum.com/t/trouble-with-multiple-installs-or-how-i-learned-to-love-make-uninstall/213170 | CC-MAIN-2021-49 | refinedweb | 182 | 66.64 |
When I ran PyRosetta, it could not be used because of an error caused by a system incompatibility. I am using Red Hat Enterprise Linux Client release 5.5 (Tikanga), with glibc-2.5-42.e15_4.3.
The error is like this:
Traceback (most recent call last):
File "./loops.py", line 1, in <module>
from rosetta import *
File "/src/pyrosetta11/rosetta/__init__.py", line 14, in <module>
import utility, core
File "/src/pyrosetta11/rosetta/utility/__init__.py", line 1, in <module>
from _rosetta_utility_000 import *
ImportError: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by /src/pyrosetta11/rosetta/libmini.so)
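The error means libmini.so was built against a newer libstdc++ than the system provides. One way to confirm (standard GNU tools; the output depends on your system):

```sh
# List the GLIBCXX symbol versions the system libstdc++ actually provides
strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX
# GLIBCXX_3.4.9 ships with newer GCC releases; if it is absent from the
# output, this libstdc++ is too old for the prebuilt PyRosetta binaries.
```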
Has anybody faced this type of problem, and is there a known solution? | https://www.rosettacommons.org/node/1890 | CC-MAIN-2019-43 | refinedweb | 117 | 53.78 |
Deleting a user via the REST API does not delete their user preferences
Bug Description
The on_delete function in rest/users.py appears to not delete user preferences when the user is deleted, resulting in an accumulation of orphaned preferences data.
def on_delete(self, request, response):
"""Delete the named user, all her memberships, and addresses."""
if self._user is None:
return
for member in self._user.
for address in self._user.
Deleted the preferences of the user being deleted before deleting him and added test for it.
Thanks for the patch and the test, it's a great contribution. I am going to apply it with some modifications. Here is some feedback for the future:
Please read PEP 8 and the Mailman style guide for our coding standards.
Because the fix is in the model, there should be a test in the model. It's okay to also have a test in the REST layer because that's where the problem was observed. I adapted your REST test and added one to the model.
Be sure to run the full test suite with `tox` both before and after applying your fix. It's a good idea to add the test, run tox to validate that the test fails, then add your fix to validate that it succeeds. Also, running the full suite before submitting a patch ensures that there are no regressions elsewhere in Mailman.
When attributing fixes by community members, I use the name given in your Launchpad id. If you want your full name to appear in the NEWS file, please contact me directly.
See r7307 for the full, applied patch.
Thanks, Barry, for the suggestions. I will read PEP 8 and the Mailman style guide. I tried pushing the branch for the merge proposal, but from inside the college I can't SSH outside and was getting an error, so I submitted the patch instead.
I looked at the r7307 and wanted to ask something:
In rest/tests/
230 with transaction():
231 preferences = config.
232 id=anne.
233 self.assertEqua
On Mar 20, 2015, at 05:41 PM, Abhishek wrote:
>In rest/tests/
>config.
>usage of it in line 221 and also in the file model/tests/
>which are not inside with block. Any specific reason for the
>difference..?
>
>230 with transaction():
>231 preferences = config.
>232 id=anne.
>233 self.assertEqua
Yes. The ids don't get assigned until after the commit, so they have to be
within the with-statement in order for the subsequent query to work. But
after that, the delete does not require a commit for the query to succeed.
I don't think this was the case with Storm, but it appears to be so with
SQLAlchemy.
Deleted the preferences of the user being deleted before deleting him. | https://bugs.launchpad.net/mailman/+bug/1418276 | CC-MAIN-2019-09 | refinedweb | 465 | 73.88 |
Beginning Windows Presentation Foundation.
Introduction
WPF, an acronym for Windows Presentation
Foundation is a subsystem of class libraries for WinFX and it enables the
user to get a richer experience bringing together UI, Documents, media etc. A
XAML (Extensible Application Markup Language) file which is at the heart of a
WPF project can be created in several ways that includes the Notepad text
editor, the Expression Blend which requires another download from Microsoft, but
may not provide a easy to use XAML file to use in VS, and the Visual Studio
editions except the express edition. XAML is presently specific to windows
platform and is a XML formatting language and not an application programming
interface. I will be mostly showing how to get some hands-on experience with a
WPF project using the Visual Studio 2005 interface and the template files that
you may access with the Windows SDK installed.
Creating a WPF Project
From File | New | Project click open the New
Project window as shown in the next figure. Click on Visual Basic and expand
its contents. Under .NET 3.0 FrameWork (It is assumed that you have installed
NET 3.0 Framework) choose the Windows Application (WPF).
Now highlight the Windows Application (WPF) and
change the name of the application to some name of your choice. For this article
it is changed to AppWPF. Click on the OK button after typing a
name of your choice. This creates the necessary file/folders for the application
as shown in the next figure.
There are two XAML files created in the project. The
App.xaml and the Windows1.xaml file. Delete the
Windows1.xaml and add a new item as shown with the name
BasicControls.xaml.
With this new item added you may need to change the
App.xaml file as shown below.
<Application x:Class="App"
xmlns=""
xmlns:
<Application.Resources>
</Application.Resources></Application>
The StartupUri has been changed from the original
Windows1.xaml to BasicControls.xaml. With this change made you can now display
the BasicControls.xaml file together with its design as shown in the next
figure.
This represents a 300 X 300 window which can be used as
a container for other controls. You also notice the reference to the namespaces
that are required and the XML syntax with the attribute of the project for the
window.
Placing Controls on the Window
Placing Controls automatically creates XAML
code.
Placing controls on this window is as easy as dragging
from the Tools and dropping on to this window. The next picture shows a
button and a textbox dragged and dropped onto this window.
The necessary code for these controls gets automatically
added as the controls are placed. After the two controls are added, the xaml
file gets changed as shown. The Button and Textbox properties are
the defaults which may be modified as will be seen later in the article.
<Window x:Class="BasicControls"
xmlns=""
xmlns: <Grid> <Button
Height="23" Margin="94,0,123,39" Name="Button1"
VerticalAlignment="Bottom">Button</Button> <TextBox
HorizontalAlignment="Left" Margin="43, 126, 0,115"
Name="TextBox1" Width="100"></TextBox>
</Grid></Window>
Adding code automatically updates the window design.
Inserting a declarative code into the BasicControls.xaml
file will automatically add the control defined by that code to the design
window.
Add this code to the XAML file:
<TextBox Name="TextBox2" Height="20"
Margin="89.5,96.5,0,0" VerticalAlignment="Top"
HorizontalAlignment="Left" Width="50"></TextBox>
The property window for the TextBox2 shown can
also be used to make changes. You can also move, or adjust the dimensions of the
controls using the mouse. The various controls provide a very rich interface for
the designer in manipulating the controls.
Event Handling
All 'Hello World' programs used a button click to
demonstrate the workings of the code or how the events were handled. In this
tutorial, the click event is likewise demonstrated. In the
Solution Explorer only a few items are seen, but there are a lot more files
in the project. Click on the middle toolbar just above the project as shown in
the next figure.
This will allow you to see all the files / folders in
the project displayed (every folder expanded out) as shown.
This is vastly different from a legacy windows project.
The references to the Presentation Foundation are all in the three references,
PresentationCore, PresentationDesignDeveloper and
PresentationDesignFramework.
In order to appreciate the rich designer support you
have to go to the Object Browser and look at the references. For example
just the PresentationCore has the following namespaces shown in the next
figure.
The BasicControls.xaml file also has the code behind
file, BasicControls.xaml.vb, as shown in the next figure.
In the code page, the drop-down control displaying
BasicControls presently has all the objects on this window listed in its
menu. You can find the Button as well. With the button chosen you can use
the second drop-down to access all the events of the Button in the second
drop-down (presently showing Declarations). In this manner the button click
event was chosen from the second drop-down. Here the Button1_Click has been set
to display "Click is registered" in Textbox1 when the button is clicked. You can
find the reference to this in the Object Browser as shown in the next
figure.
Object Browser is an extremely valuable resource
that you should seek out to understand the underlying logic, the arguments of a
function call, etc.
When you build and execute the program and click on the
button this is what you will see displayed. The top part is the design window
and the bottom is the window when clicked.
At this point you might be wondering how to improve the
look and feel. Indeed the form looks drab since none of the properties have been
used except for the most basic. The next figure shows how you may change the
appearance by inserting the property attributes directly into the XMAL file. You
will be better off using the intellisense rather than trying to guess the
property based on your previous 'Windows' experience as shown in the next
figure. You may also add attributes from the property window of the object which
you can view when the object is highlighted (or clicked) in the design pane.
The variety of attributes is just too many and when in
doubt you will be able to drill down to the one you want to use in the Object
Browser.
The next code listing shows a few more attributes added
to the Textbox1. As you might have seen in the intellisense pop-up windows,
there is a large number of properties that you can tweak and events that you can
trigger. Notice the [.] notation for the TextElement in the code listing,
FontFamily being the child of the parent TextElement.
Listing 1
<TextBox HorizontalAlignment="Left"
Margin="43,126,0,115" Name="TextBox1" Width="150"
TextElement. </TextBox>
When the program is executed you will see the following
displayed.
Summary
The article describes the steps to create a WPF project.
The Design <-->Declarative Code interactivity is also described. The
placing of controls and adding event handling code to the code behind page is
explained with an example. While testing, the "AutoWordSelection" property did not
function as its definition says it should. You may look this up in the 'Help'.
This article is based on the book Programming
Windows Workflow Foundation: Practical WF Techniques and Examples using XAML and
C# | http://www.codedigest.com/Articles/WPF/253_Beginning_Windows_Presentation_Foundation_Project.aspx | CC-MAIN-2018-17 | refinedweb | 1,252 | 64.3 |
Composable CSS Animation In Vue With AnimXYZ
About The Author
Ejiro Asiuwhu is a Software Developer with rock-solid experience in building complex interactive applications with JavaScript, TypeScript, Vue.js, NuxtJS, …
Most animation libraries, like GSAP and Framer Motion, are built purely with JavaScript or TypeScript. AnimXYZ, by contrast, is labelled “the first composable CSS animation toolkit” and is built mainly with SCSS. Although it is a simple library, it can be used to achieve a lot of awesome animation on the web in a short amount of time and with little code.
In this article, you will learn how to use the AnimXYZ toolkit to create unique, interactive, and visually engaging animations in Vue.js and plain HTML. By the end of this article, you will have learned how adding a few CSS classes to elements in Vue.js components can give you a lot of control over how those elements move in the DOM.
This tutorial will be beneficial to readers who are interested in creating interactive animations with few lines of code.
Note: This article requires a basic understanding of Vue.js and CSS.
What Is AnimXYZ?
AnimXYZ is a composable, performant, and customizable CSS animation toolkit powered by CSS variables. It is designed to enable you to create awesome and unique animations without writing a line of CSS keyframes. Under the hood, it uses CSS variables to create custom CSS properties. The nice thing about AnimXYZ is its declarative approach. An element can be animated in one of two ways: when entering or leaving the page. If you want to animate an HTML element with this toolkit, adding a class of
xyz-out will animate the item out of the page, while
xyz-in will animate the component into the page.
This awesome toolkit can be used in a regular HTML project, as well as in a Vue.js or React app. However, as of the time of writing, support for React is still under development.
Why Use AnimXYZ?
Animation with AnimXYZ is possible by adding descriptive class names to your markup. This makes it easy to create complex CSS animation without writing CSS keyframes. Animating an element into the page is as easy as adding a class of
xyz-in to the element and declaring a descriptive xyz attribute.
<p class="xyz-in" xyz="fade">Composable CSS animation with AnimXYZ</p>
The code above will make the paragraph element fade into the page, while the code below will make the element fade out of the page. Just a single class with a lot of power.
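The out-animation referenced above is the same markup with only the class changed (a minimal sketch):

```html
<p class="xyz-out" xyz="fade">Composable CSS animation with AnimXYZ</p>
```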
For simple animations, you can use the out-of-the-box utilities, but AnimXYZ can do so much more. You can customize and control AnimXYZ to create exactly the animations you want by setting the CSS variables that drive all AnimXYZ animations. We will create some custom animations later on in this tutorial.
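A small sketch of that customization; the --xyz-* custom properties below follow the naming convention in the AnimXYZ docs, but treat the exact names and values as assumptions:

```html
<style>
  /* Custom values for the CSS variables that drive the animation */
  .slide-in {
    --xyz-opacity: 0;         /* start fully transparent */
    --xyz-translate-x: -100%; /* start one full width to the left */
    --xyz-duration: 0.8s;     /* how long the animation runs */
  }
</style>

<p class="slide-in xyz-in">Fades and slides in from the left</p>
```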
With AnimXYZ, you can create powerful and smooth animations out of the box, and its size is only 2.68 KB for the base functionality and 11.4 KB if you include the convenient utilities.
Easy to Learn and Use
AnimXYZ works perfectly with regular HTML and CSS, and it can be integrated in a project using the content delivery network (CDN) link. It can also be used in Vue.js and React, although support for React is still under development. Also, the learning curve with this toolkit is not steep when compared to animation libraries such as GSAP and Framer Motion, and the official documentation makes it easy to get started because it explains how the package works in simple terms.
Key Concepts in AnimXYZ
When you want a particular flow of animation to be applied to related groups of elements, the
xyz attribute provides the context. Let’s say you want three
divs to be animated in the same way when they enter the page. All you have to do is add the
xyz attribute to the parent element, with the composable utilities and variables that you want to apply.
<div class="shape-wrapper xyz-in" xyz="fade flip-up flip-left">
  <div class="shape"></div>
  <div class="shape"></div>
  <div class="shape"></div>
</div>
The code above will apply the same animation to all
divs with a class of
shape. All child elements will fade into the page and flip to the upper left, because the attribute
xyz="fade flip-up flip-left" has been applied to the parent element.
See the Pen [Contexts in AnimXYZ]() by Ejiro Asiuwhu.
AnimXYZ makes it easy to animate a child element differently from its parent. To achieve this, add the
xyz attribute with a different animation variable and different utilities to the child element, which will reset all animation properties that it has inherited from its parent.
See the Pen [Override Parent contexts in AnimXYZ]() by Ejiro Asiuwhu.
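A minimal sketch of such an override (class names are illustrative):

```html
<!-- The parent context fades every shape in from below... -->
<div class="shape-wrapper xyz-in" xyz="fade up">
  <div class="shape"></div>
  <!-- ...but this child declares its own xyz attribute, which resets the
       inherited animation and flips it in from the left instead. -->
  <div class="shape" xyz="flip-left"></div>
</div>
```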
AnimXYZ comes with a lot of utilities that will enable you to create engaging and powerful CSS animations without writing any custom CSS.
xyz="fade up in-left in-rotate-left out-right out-rotate-right"
For example, the code above has a
fade up utility, which will make the element fade from top to bottom when coming into the page. It will come in and rotate from the left. When the element leaves the page, it will go to the right and rotate out of the page.
With the out-of-the-box utilities, you can, say, flip a group of elements to the right and make them fade while leaving the page. The possibilities of what can be achieved with the utilities are endless.
The
stagger utility controls the
animation-delay CSS property for each of the elements in a list, so that their animations are triggered one after another. It specifies the amount of time to wait between applying the animation to an element and beginning to perform the animation. Essentially, it is used to queue up animation so that elements can be animated in sequence.
<div class="shape-wrapper" xyz="fade up-100% origin-top flip-down flip-right-50% rotate-left-100% stagger">
  <div class="shape xyz-in"></div>
  <div class="shape xyz-in"></div>
  <div class="shape xyz-in"></div>
  <div class="shape xyz-in"></div>
</div>
By adding the
stagger utility, each element in a parent
div will animate one after another from left to right. The order can be reversed by using
stagger-rev.
With
stagger:
See the Pen [Staggering with AnimXYZ]() by Ejiro Asiuwhu.
Without
stagger:
See the Pen [!Staggering Animation – AnimXYZ]() by Ejiro Asiuwhu.
Using AnimXYZ With HTML and CSS
Let’s build a card and add some cool animation with AnimXYZ.
See the Pen [Animxyz Demo]() by Ejiro Asiuwhu.
First, we need to add the AnimXYZ toolkit to our project. The easiest way is via a CDN. Grab the CDN, and add it to the
head of your HTML document.
Add the following lines of code to your HTML.
<p class="intro xyz-in" xyz="fade">Composable CSS Animation with Animxyz</p>
<div class="glass xyz-in" id="glass" xyz="fade flip-down flip-right-50% duration-10">
  <img src="" alt="" class="avatar xyz-in">
  <p class="des xyz-in">Image by Jordon Cheung</p>
</div>
This is where the magic happens. At the top of the page, we have a paragraph tag with a class of
xyz-in and an
xyz attribute with a value of
fade. This means that the
p element will fade into the page.
Next, we have a card with an
id of
glass, with the following
xyz attribute: xyz="fade flip-down flip-right-50% duration-10"
The composable utilities above will make the card fade into the page. The
flip-down value will set the card to flip into the page from the bottom, and the
flip-right value will flip the card by 50% when leaving the page. An animation duration of
10 (i.e. 1 second) sets the length of time that the animation will take to complete one cycle.
Integrating AnimXYZ in Vue.js
Scaffold a Vue.js Project
Using the Vue.js command-line interface (CLI), run the command below to generate the application:
Install VueAnimXYZ
This will install both the core package and the Vue.js package. After installation, we will have to import the
VueAnimXYZ package into our project and add the plugin globally to our Vue.js application. To do this, open your
main.js file, and add the following block of code accordingly:
import VueAnimXYZ from '@animxyz/vue' // import AnimXYZ Vue package
import '@animxyz/core' // import AnimXYZ core package

Vue.use(VueAnimXYZ)
The
XyzTransition Component
The XyzTransition component is built on top of Vue.js' transition component. It's used to animate individual elements into and out of the page.
Here is a demo of how to use the
XyzTransition component in Vue.js.
<div id="app">
  <button @click="isAnimate = !isAnimate">Animate</button>
  <XyzTransition appear>
    <div class="square" v-if="isAnimate"></div>
  </XyzTransition>
</div>
Notice how the element that we intend to transition is wrapped in the `XyzTransition` component. This is important because the child element will inherit the utilities that are applied to the `XyzTransition` component. The child element is also conditionally rendered when `isAnimate` is set to `true`. When the button is clicked, the child element with a class of `square` is toggled into and out of the DOM.
#### `XyzTransitionGroup`
The `XyzTransitionGroup` component is built on top of Vue.js’ [`transition-group` component](). It is used to animate groups of elements into and out of the page.
Below is an illustration of how to use the `XyzTransitionGroup` component in Vue.js. Notice here again that a lot of the complexity that comes with Vue.js’ `transition-group` component has been abstracted away in order to reduce complexity and increase efficiency. All we need to care about when using the `XyzTransitionGroup` component are `appear`, `appear-visible`, `duration`, and `tag`. The following is taken [from the documentation]():
<XyzTransitionGroup
  appear={ boolean }
  appear-visible={ boolean | IntersectionObserverOptions }
  duration={ number | 'auto' | { appear: number | 'auto', in: number | 'auto', out: number | 'auto' } }
  tag={ string }
>
  <child-component />
  <child-component />
  <child-component />
</XyzTransitionGroup>
### Build an Animated Modal With AnimXYZ and Vue.js
Let’s build modal components that will animate as they enter and leave the DOM.
Here is a demo of what we are going to build:
<section class="xyz-animate"> <div class="alerts__wrap copy-content"> <div class="alert reduced-motion-alert"> <p> AnimXYZ animations are disabled if your browser or OS has reduced-motion setting turned on. <a href="" target="_blank"> Learn more here. </a> </p> </div> </div> <h1>Modal Animation With AnimXYZ and Vue.js</h1> <button class="modal-toggle modal-btn-main" data- <span class="invisible">Close this window</span> </span> <div role="dialog" class="simple-modal__wrapper" aria- <XyzTransition duration="auto" xyz="fade out-delay-5"> <section id="modal1" aria- <div class="modal_top flex xyz-nested" xyz="up-100% in-delay-3"> <header id="modal1_label modal-title" class="modal_label xyz-nested" xyz="fade right in-delay-7" > Join our community on Slack </header> <button type="button" aria- <svg viewBox="0 0 24 24" focusable="false" aria- <path fill="currentColor" d="M.439,21.44a1.5,1.5,0,0,0,2.122,2.121L11.823,14.3a.25.25,0,0,1,.354,0l9.262,9.263a1.5,1.5,0,1,0,2.122-2.121L14.3,12.177a.25.25,0,0,1,0-.354l9.263-9.262A1.5,1.5,0,0,0,21.439.44L12.177,9.7a.25.25,0,0,1-.354,0L2.561.44A1.5,1.5,0,0,0,.439,2.561L9.7,11.823a.25.25,0,0,1,0,.354Z" ></path> </svg> </button> </div> <div class="modal_body xyz-nested" xyz="up-100% in-delay-3"> <div class="modal_body--top flex justify_center align_center"> <img src="../assets/slack.png" alt="slack logo" class="slack_logo" /> <img src="../assets/plus.png" alt="plus" class="plus" /> <img src="../assets/discord.png" alt="discord logo" class="discord_logo" /> </div> <p><span class="bold">929</span> users are registered so far.</p> </div> <form class="modal_form" autocomplete> <label for="email" ><span class="sr-only">Enter your email</span></label > <input id="email" type="email" placeholder="johndoe@email.com" autocomplete="email" aria- Get my invite </button> <p>Already joined?</p> <button type="button" aria- <span ><img src="../assets/slack.png" alt="slack logo" role="icon" /></span> Open Slack </button> </form> </section> 
</XyzTransition> </div> </section>
In our modal, we use the `v-if="isModal"` directive to specify that the modal should be hidden from the page by default. When the button is clicked, we open the modal by calling the `open()` method, which sets the `isModal` property to `true`. This reveals the modal on the page and also applies the animation properties that we specified using AnimXYZ's built-in utilities.
We’ve gone through the basics of AnimXYZ and how to use it with plain HTML and Vue.js. We’ve also implemented some demo projects that give us a glimpse of the range of CSS animations that we can create simply by adding the composable utility classes provided by this toolkit, and all without writing a single line of a CSS keyframe. Hopefully, this tutorial has given you a solid foundation to add some sleek CSS animations to your own projects and to build on them over time for any of your needs.
The final demo is on GitHub. Feel free to clone it and try out the toolkit for yourself.
That’s all for now! Let me know in the comments section below what you think of this article. I am active on Twitter and GitHub. Thank you for reading, and stay tuned.
Resources
retitle 650667 kfreebsd-9: problems with reading of /proc/self/maps
--
Hi,

I looked into a ktrace of midori; we have a problem with reading of /proc/self/maps.

The kfreebsd-8 kernel handles it sufficiently. It is possible to read /proc/self/maps in blocks, and it is possible to read more blocks when /proc/self/maps is bigger than one block.

The kfreebsd-9 kernel allows only a read of one block. If the size of /proc/self/maps is bigger, even the first read fails.

Test code is below; try to play with BS. Or even try:

dd if=/proc/self/maps bs=200
dd if=/proc/self/maps bs=2000

Petr

**************************
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mman.h>

#define BS 128

int main()
{
    int i, fd;
    int sum;
    char buf[BS + 1];

#if 0   /* enable to enlarge /proc/self/maps */
    fd = open("/etc/hosts", O_RDONLY);
    for (i = 0; i < 100; i++) {
        mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    }
    close(fd);
#endif

    fd = open("/proc/self/maps", O_RDONLY);
    sum = 0;
    while ((i = read(fd, buf, BS)) > 0) {
        sum += i;
        buf[i] = 0;
        printf("%s", buf);
    }
    printf("\n\nTotal %d bytes\n", sum);
    return 0;
}
**************************
Decoding HTML entities with Python
I'm trying to decode HTML entities from NYTimes.com and I cannot figure out what I am doing wrong.
Take for example:
"U.S. Adviser’s Blunt Memo on Iraq: Time ‘to Go Home’"
I've tried BeautifulSoup, decode('iso-8859-1'), and django.utils.encoding's smart_str without any success.
Answers
Try this:
import re

def _callback(matches):
    id = matches.group(1)
    try:
        return unichr(int(id))
    except:
        return id

def decode_unicode_references(data):
    return re.sub("&#(\d+)(;|(?=\s))", _callback, data)

data = "U.S. Adviser&#8217;s Blunt Memo on Iraq: Time &#8216;to Go Home&#8217;"
print decode_unicode_references(data)
Actually what you have are not HTML entities. There are THREE varieties of those &.....; thingies -- for example &#160;, &#xA0;, and &nbsp; all mean U+00A0 NO-BREAK SPACE.

&#160; (the type you have) is a "numeric character reference" (decimal). &#xA0; is a "numeric character reference" (hexadecimal). &nbsp; is an entity.
Further reading:
Here you will find code for Python2.x that does all three in one scan through the input:
This does work:
from BeautifulSoup import BeautifulStoneSoup
s = "U.S. Adviser&#8217;s Blunt Memo on Iraq: Time &#8216;to Go Home&#8217;"
decoded = BeautifulStoneSoup(s, convertEntities=BeautifulStoneSoup.HTML_ENTITIES)
If you want a string instead of a Unicode object, you'll need to decode it to an encoding that supports the characters being used; ISO-8859-1 doesn't:
result = decoded.encode("UTF-8")
It's unfortunate that you need an external module for something like this; simple HTML/XML entity decoding should be in the standard library, and not require me to use a library with meaningless class names like "BeautifulStoneSoup". (Class and function names should not be "creative", they should be meaningful.)
>>> from HTMLParser import HTMLParser
>>> print HTMLParser().unescape('U.S. Adviser&#8217;s Blunt Memo on Iraq: '
...                             'Time &#8216;to Go Home&#8217;')
U.S. Adviser’s Blunt Memo on Iraq: Time ‘to Go Home’
The function is undocumented in Python 2. It is fixed in Python 3.4+: it is exposed as html.unescape() there.
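In Python 3, the same decoding is a one-liner with the standard library; the sample string is the one from the question:

```python
import html

s = "U.S. Adviser&#8217;s Blunt Memo on Iraq: Time &#8216;to Go Home&#8217;"
print(html.unescape(s))
# U.S. Adviser’s Blunt Memo on Iraq: Time ‘to Go Home’
```

Unlike the regex approach above, `html.unescape` also handles hexadecimal references and named entities.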
The following sections discuss the Oracle Database Lite ADO.NET provider for Microsoft .NET and Microsoft .NET Compact Framework. The Oracle Database Lite ADO.NET provider resides in the
Oracle.DataAccess.Lite namespace.
A
DataException is thrown if synchronization fails. Also, you must close all database connections before doing a synchronization.
Section 14.1, "Discussion of the Classes That Support the ADO.NET Provider"
Section 14.2, "Running the Demo for the ADO.NET Provider"
Section 14.3, "Limitations for the ADO.NET Provider"
To use the Oracle Database Lite ADO.NET provider from your own project, add a reference to
Oracle.DataAccess.Lite_wce.dll. This section describes the following classes for the Oracle Database Lite ADO.NET provider.
Section 14.1.1, "Establish Connections With the OracleConnection Class"
Section 14.1.2, "Transaction Management"
Section 14.1.3, "Create Commands With the OracleCommand Class"
Section 14.1.4, "Maximize Performance Using Prepared Statements With the OracleParameter Class"
Section 14.1.5, "Large Object Support With the OracleBlob Class"
Section 14.1.6, "Data Synchronization With the OracleSync Class"
The
OracleConnection interface establishes connections to Oracle Database Lite. This class implements the
System.data.IDBConnection interface. When constructing an instance of the
OracleConnection class, implement one of the following to open a connection to the back-end database:
Pass in a full connection string as described in the Microsoft ODBC documentation for the
SQLDriverConnect API, which is shown below:
OracleConnection conn = new OracleConnection(
    "DataDirectory=\\orace;Database=polite;DSN=*;uid=system;pwd=manager");
conn.Open();
Construct an empty connection object and set the
ConnectionString property later.
With an embedded database, we recommended that you open the connection at the initiation and leave it open for the life of the program. When you close the connection, all of the
IDataReader cursors that use the connection are also closed.
By default, Oracle Database Lite connection uses the autocommit mode. Alternatively, you can start a transaction with the
BeginTransaction method in the
OracleConnection object. Then, when finished, execute either the
Commit or
Rollback methods on the returned
IDbTransaction, which either commits or rolls back the transaction. Once the transaction is completed, the database is returned to autocommit mode.
Within the transaction, use SQL syntax to set up, remove and undo savepoints.
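Put together, the transaction flow described above might look like the following sketch. This is illustrative only: conn is assumed to be an open OracleConnection, the table t1 and its values are made up, and depending on the provider the command's Transaction property may also need to be set.

```csharp
IDbTransaction txn = conn.BeginTransaction();   // leaves autocommit mode
try
{
    IDbCommand cmd = conn.CreateCommand();
    cmd.CommandText = "insert into t1 values(1, 'Hello', NULL)";
    cmd.ExecuteNonQuery();
    txn.Commit();        // commit the work; connection returns to autocommit mode
}
catch (Exception)
{
    txn.Rollback();      // undo the work; connection returns to autocommit mode
    throw;
}
```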
For Microsoft Pocket PC-based devices, Oracle Database Lite supports only one process to access a given database. When a process tries to connect to a database that is already in use, the
OracleConnection.Open method throws an
OracleException. To avoid this exception being thrown, close a connection to allow another process to connect.
The
OracleCommand class implements the
System.Data.IDBCommand interface. Create any commands through the
CreateCommand method of the
OracleConnection class. The
OracleCommand has constructors recommended by the ADO.NET manual, such as
OracleCommand(OracleConnection conn, string cmd).
However, if you use the
OracleCommand constructors, it is difficult to port the code to other platforms, such as the ODBC provider on Windows 32. Instead, create the connection and then use interface methods to derive other objects. With this model, you can either change the provider at compile time or use the reflection API at runtime.
Parsing a new SQL statement can take significant time; thus, use prepared statements for any performance-critical operations. Although,
IDbCommand has an explicit
Prepare method, this method always prepares a statement on the first use. You can reuse the object repeatedly without needing to call
Dispose or change the
CommandText property.
Oracle Database Lite uses ODBC-style parameters in the SQL string, such as the
? character. Parameter names and data types are ignored by the driver and are only for the programmer's use.
For example, assume the following table:
create table t1(c1 int, c2 varchar(80), c3 data)
You can use the following parameters in the context of this table:
IDbCommand cmd = conn.CreateCommand();
cmd.CommandText = "insert into t1 values(?,?,?)";
cmd.Parameters.Add("param1", 5);
cmd.Parameters.Add("param2", "Hello");
cmd.Parameters.Add("param3", DateTime.Now);
cmd.ExecuteNonQuery();
The
OracleBlob class supports large objects. Create a new
OracleBlob object to instantiate or insert a new BLOB object in the database, as follows:
OracleBlob blob = new OracleBlob(conn);
Since the BLOB is created on a connection, you can use the
Connection property of
OracleBlob to retrieve the current
OracleConnection.
Functions that you can perform with a BLOB are as follows:
Section 14.1.5.1, "Using BLOB Objects in Parameterized SQL Statements"
Section 14.1.5.2, "Query Tables With BLOB Columns"
Section 14.1.5.3, "Read and Write Data to BLOB Objects"
You can use the BLOB object in parameterized SQL statements, as follows:
OracleCommand cmd = (OracleCommand)conn.CreateCommand();
cmd.CommandText = "create table LOBTEST(X int, Y BLOB)";
cmd.ExecuteNonQuery();
cmd.CommandText = "insert into LOBTEST values(1, ?)";
cmd.Parameters.Add(new OracleParameter("Blob", blob));
cmd.ExecuteNonQuery();
You can retrieve the
OracleBlob object using the data reader to query a table with a BLOB column, as follows:
cmd.CommandText = "select * from LOBTEST"; IDataReader rd = cmd.ExecuteReader(); rd.read(); OracleBlob b = (Blob)rd["Y"];
Or you can write the last line of code, as follows:
OracleBlob b = (OracleBlob)rd.GetValue(1);
The
OracleBlob class supports reading and writing to the underlying BLOB, and retrieving and modifying the BLOB size. Use the
Length property of
OracleBlob to get or to set the size. Use the following functions to read and write to the BLOB, as follows:
public long GetBytes(long blobPos, byte [] buf, int bufOffset, int len);
public byte [] GetBytes(long blobPos, int len);
public void SetBytes(long blobPos, byte [] buf, int bufOffset, int len);
public void SetBytes(long blobPos, byte [] buf);
For example, the following appends data to a BLOB and retrieves the bytes from position five forward:
byte [] data = { 0, 1, 2, 3, 4, 5, 6, 7, 8 };
blob.SetBytes(0, data);                             // append data to the blob
byte [] d = blob.GetBytes(5, (int)blob.Length - 5); // get bytes from position 5 up to the end
blob.Length = 0;                                    // truncate the blob completely
Use the
GetBytes method of the data reader to read the BLOB sequentially, but without accessing it as a
OracleBlob object. You should not, however, use the
GetBytes method of the reader and retrieve it as a
OracleBlob object at the same time.
You can perform a synchronization programatically with one of the following methods:
Section 14.1.6.1, "Using the OracleSync Class to Synchronize"
Section 14.1.6.2, "Using the OracleEngine to Synchronize"
To programmatically synchronize databases, perform the following:
Instantiate an instance of the
OracleSync class.
Set relevant properties, such as username, password and URL.
Call the
Synchronize method to trigger data synchronization.
This is demonstrated in the following example:
OracleSync sync = new OracleSync();
sync.UserName = "JOHN";
sync.Password = "JOHN";
sync.ServerURL = "mobile_server";
sync.Synchronize();
The attributes that you can set are described in Table 14-1.
If you want to retrieve the synchronization progress information, set the
SyncEventHandler attribute of the
OracleSync class before you execute the sync.Synchronize method, as follows.
sync.SetEventHandler (new OracleSync.SyncEventHandler (MyProgress), true);
You pass in your implementation of the
MyProgress method, which has the following signature:
Void MyProgress(SyncStage stage, int Percentage)
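For example, a progress handler matching this signature could simply log each stage to the console (an illustrative sketch, not from the Oracle documentation):

```csharp
void MyProgress(SyncStage stage, int percentage)
{
    // Invoked by the sync engine as synchronization advances through its stages
    Console.WriteLine("{0}: {1}%", stage, percentage);
}
```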
You can synchronize with the same engine that performs the synchronization for the
msync tool. You can actually launch the GUI to have the user enter information and click Synchronize or you can enter the information programmatically and synchronize without launching the GUI.
You can launch the
msync tool, so that the user can modify settings and initialize the synchronization, by executing the following:
OracleEngine.Synchronize(false)
Providing false as the input parameter tells the engine that you are not providing the input parameters, but to bring up the msync GUI for the user to input the information. Alternatively, supply the username, password and server directly, as follows:
OracleEngine.Synchronize("S11U1", "manager", "myserver.mydomain.com")
Alternatively, you can configure a string that contains the options listed in Table 14-2 with a single
String input parameter and synchronize, as follows:
OracleEngine.Synchronize(args)
In the above example, the
String
args input parameter is a combination of the options in Table 14-2.
String args = "S11U1/manager@myserver.mydomain.com /save /ssl /force"
Include as many of the options that you wish to enable in the
String.
In a non-production environment, you may want to create a database to test your ADO.NET application against. In the production environment, the database is created when you perform the
OracleEngine.Synchronize method (see Section 14.1.6.2, "Using the OracleEngine to Synchronize" for more information). However, to just create the database without synchronization, you can use the
CreateDatabase method of the
OracleEngine class. To remove the database after testing is complete, use the
RemoveDatabase method. These methods are only supported when you install the Mobile Development Kit (MDK).
The following is the signature of the
CreateDatabase method:
OracleEngine.CreateDatabase (string dsn, string db, string pwd)
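For example, a test setup might call it as follows; the DSN, database name and password values here are illustrative:

```csharp
// CreateDatabase(string dsn, string db, string pwd), per the signature above
OracleEngine.CreateDatabase("POLITE", "polite", "manager");
```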
This release comes with sample code that illustrates the Oracle Database Lite ADO.NET provider. The demo is a timecard application for a cable technician who might install, remove, or repair service and keep track of the hours worked. To use the Oracle Database Lite ADO.NET provider from your own project, add a reference to
Oracle.DataAccess.Lite_wce.dll.
Perform the following to run the demo:
If you have not already done so, install the .NET Compact Framework on your device using
netcfsetup.msi.
Install Oracle Database Lite on your device—such as the
olite.us.pocket_pc.arm.CAB—from the following directory:
<
ORACLE_HOME>
\Mobile\Sdk\wince\Pocket_PC\cabfiles
Open the
ClockIn_wce.csdproj from the
ADO.NET\ADOCE\Clockin_wce directory with Visual Studio.NET 2003. Make sure that the
Oracle.DataAccess.Lite reference in the project points to the DLL in the
ADO.NET\ADOCE directory.
Choose Deploy Application from the
Project menu to install the ClockIn sample application on your Pocket PC device.
Use the file manager to launch
msql in the
\OraCE directory on your device. Go to the
Tools tab and click
Create to create the
POLITE database and its corresponding ODBC data source. Exit
msql.
Use the file manager to start the
ClockIn demo in the
\Program files directory.
Choose the job type and time from the drop-down lists at the bottom of the screen and click Add to enter a new work item and update the summary on the title bar. Click an existing work item row to remove it. You can also navigate to a different date to review past work.
Examine the
MainForm.cs in the
ClockIn subdirectory. Notice the following items, which demonstrate the functionality discussed in this chapter:
Creating an Oracle Database Lite connection.
Using prepared statements and cleaning up at program exit.
Using LiteDataAdapter to retrieve data into disconnected ResultSet and delete an existing row.
Using DataGrid to display data on screen.
You can make some changes to become familiar with ADO.NET development, such as:
Add checking for overlapping work items and give an appropriate error.
Add an ability to edit an existing work item and give arbitrary start/end times and description by clicking on a row.
Add sync support to ClockIn. You need to define a primary key on the ClockIn table using a sequence.
The following are limitations to the Oracel Database Lite ADO.NET provider:
Section 14.3.1, "Partial Data Returned with GetSchemaTable"
Section 14.3.2, "Creating Multiple DataReader Objects Can Invalidate Each Other"
Section 14.3.3, "Calling DataReader.GetString Twice Results in a DbNull Object"
Section 14.3.4, "Thread Safety"
The Oracle Database Lite ADO.NET provider method—
GetSchemaTable—only returns partial data. For example, it claims that all of the columns are primary key, does not report unique constraints, and returns null for
BaseTableName,
BaseSchemaName and
BaseColumnName. Instead, to retrieve Oracle Database Lite meta information, use
ALL_TABLES and
ALL_TAB_COLUMNS.
The Oracle Database Lite ADO.NET provider does not support multiple concurrent
DataReader objects created from a single
OracleCommand object. If you need more than one active
DataReader objects at the same time, create them using separate
OracleCommand objects.
The following example shows how if you create multiple
DataReader objects from a single
OracleCommand object, then the creation of
reader2 invalidates the
reader1 object.
OracleCommand cmd = (OracleCommand)conn.CreateCommand();
cmd.CommandText = "SELECT table_name FROM all_tables";
cmd.Prepare();
IDataReader reader1 = cmd.ExecuteReader();
IDataReader reader2 = cmd.ExecuteReader(); // invalidates reader1
Calling the
GetString method of
DataReader twice on the same column and for the same row results in a
DbNull object. The following example demonstrates this in that the second invocation of
GetString results in a
DbNull object.
IDataReader dr = cmd.ExecuteReader();
String st = null;
while (dr.Read())
{
    st = dr.GetString(1); // first call returns the column value
    st = dr.GetString(1); // second call returns a DbNull object
}
Qt Splash screen won't disappear
Hi all,
I have created a splash screen image to show when the application starts. I tried to set it on a timer to disappear automatically, but it just stays on screen until I click the centre of the splash screen; then it disappears.
Can anyone help me? I'm a complete newbie to Qt.
@#include "mainwindow.h"
#include <QApplication>
#include <QSplashScreen>
#include <QTimer>
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QSplashScreen *splash=new QSplashScreen; splash->setPixmap(QPixmap("/Users/Marcus/Calc/Images/Optics.jpg")); splash->show(); MainWindow w; QTimer::singleShot(2500,splash,SLOT(close())); QTimer::singleShot(2500,splash,SLOT(show())); w.show(); return a.exec();
}
@
- sierdzio Moderators
Remove line 16 and it should work fine.
Please refer to the errata for this document, which may include some normative corrections.
See also translations.
Copyright © 2008 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
Comments on this document should be sent to www-xml-canonicalization-comments@w3.org which is an automatically archived public email list.
The implementation report details CR implementation feedback from several implementations. It should be noted that this IR reflects results implemented against the CR as clarified based on issues raised during the CR period and subsequently reflected in the wording of this Recommendation.
The XML 1.0 Recommendation [XML] specifies the syntax of a class of resources called XML documents. The Namespaces in XML 1.0 Recommendation [Names] specifies additional syntax and semantics for XML 1.0 documents.
Canonical XML Version 1.1
is a revision to Canonical XML Version 1.0 [C14N10] to address
issues related to
inheritance of attributes in the XML namespace when canonicalizing
document subsets, including the requirement not to inherit
xml:id, and to treat
xml:base URI path processing
properly. See also the Working Group Notes on [C14N-Issues] and [DSig-Usage] for
further discussion of the relationship of Canonical XML Version 1.1 to Canonical
XML Version 1.0.
Canonical XML Version 1.1 is applicable to XML 1.0 and defined in terms of the XPath 1.0 data model. It is not defined for XML 1.1.
xml:base attributes [C14N-Issues] is performed or supplying a base URI through
xml:base).
Since the XML 1.0 Recommendation [XML] and the Namespaces in XML 1.0, other
than the
xml:id attribute,:nsfor the text of the local name in place of the empty local name (in XPath, the default namespace node has an empty URI and local name).
The string value of each attribute node is modified by replacing all ampersands (&) with &amp;, all open angle brackets (<) with &lt;, all quotation mark characters with &quot;, and the whitespace characters #x9, #xA, and #xD, with character references. The character references are written in uppercase hexadecimal with no leading zeroes (for example, #xD is represented by the character reference &#xD;).
The string value of each text node is rendered, except all ampersands are replaced by &amp;, all open angle brackets (<) are replaced by &lt;, all closing angle brackets (>) are replaced by &gt;, and all #xD characters are replaced by &#xD;.
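A non-normative Python sketch of these two escaping rules (the function names are illustrative, not from the specification):

```python
def c14n_attr_value(s: str) -> str:
    """Escape an attribute value: &, <, ", and whitespace #x9/#xA/#xD."""
    s = s.replace('&', '&amp;').replace('<', '&lt;').replace('"', '&quot;')
    return s.replace('\t', '&#x9;').replace('\n', '&#xA;').replace('\r', '&#xD;')

def c14n_text(s: str) -> str:
    """Escape a text node: &, <, >, and #xD."""
    s = s.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')
    return s.replace('\r', '&#xD;')

print(c14n_attr_value('a<b "q"\n'))   # a&lt;b &quot;q&quot;&#xA;
print(c14n_text('1 < 2 & 3 > 2\r'))   # 1 &lt; 2 &amp; 3 &gt; 2&#xD;
```

Note that the ampersand must be escaped first, so that the `&` characters introduced by the other replacements are not escaped again.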
This is necessary because omitted nodes SHALL not break the inheritance rules of inheritable attributes [C14N-Issues],
any simple inheritable attributes that are already in E's attribute axis (whether or not they are in the node-set) are removed. E's attribute axis needs to be enhanced further. A "join-URI-References"
function is used for
xml:base fix up. It incorporates
xml:base attribute values from omitted
xml:base attributes and updates the
xml:base attribute value
of the element being fixed up.
An
xml:base fixup
is performed on an element E as follows. Let E be an element in the node set whose ancestor axis contains
successive elements En ... E1 (in reverse
document order) that are omitted and E=En+1 is included. (It is important to note that En ... E1 is for contiguously omitted elements, for example
only e2 in the example in Section 3.8.) The fix-up is only
performed if at least one of E1 ... En had an
xml:base attribute. In that case let X1 ... Xm be the values of the
xml:base attributes on E1 ... En+1 (in document
order, from outermost to innermost, m <= n+1). The sequence of values is reduced in reverse document order
to a single value by first combining Xm with Xm-1, then the result with Xm-2, and so on by calling
the "join-URI-References" function until the new value for E's
xml:base attribute remains. The result may also
be null or empty (
xml:base="") in which case
xml:base MUST NOT be rendered.
Note that this
xml:base fixup is only performed if an element with an
xml:base attribute is removed. Specifically, it is not performed if the element
is present but the attribute is removed.
The join-URI-References
function takes an
xml:base attribute value from an omitted
element and combines it with other contiguously omitted values to
create a value for an updated
xml:base attribute. A simple
method for doing this is similar to that found in sections 5.2.1,
5.2.2 and 5.2.4 of RFC 3986 with the following
modifications:
xml:base attribute values that include relative path components (i.e., path components that do not begin with a '/' character) results in an attribute value that is a relative path component.
Then, lexicographically merge this fixed up attribute with the nodes of E's attribute axis that are in the node-set. The result of visiting the attribute axis is computed by processing the attribute nodes in this merged attribute list.
Attributes
in the XML namespace other than
xml:base,
xml:id,
xml:lang, and
xml:space MUST be processed
as ordinary attributes.
The following examples illustrate the modification of the "Remove Dot Segments" algorithm:
"abc/"and
"../"should result in
""
"../"and
"../"are combined as
"../../"and the result is
"../../"
".."and
".."are combined as
"../../"and the result is
"../../"
To illustrate the last example, when the elements b and c are removed from the following sample XML document,
the correct result for the
xml:base attribute on element d would be
"../../x":
<a
xml:
<b xml:
<c xml:
<d xml:
</d>
</c>
</b>
</a>).
ancestor-or-self path of the context node (such that ancestor-or-self
stays the same size under union with the element identified by E3).
Note: The canonical form contains no line delimiters. An informative table outlines example results of the modified Remove Dot Segments algorithm described in Section 2.4.
A case for FlatMap:
Let's say we wanted to implement an AJAX search feature in which every keypress in a text field will automatically perform a search and update the page with the results. How would this look? Well we would have an
Observable subscribed to events coming from an input field, and on every change of input we want to perform some HTTP request, which is also an
Observable we subscribe to. What we end up with is an
Observable of an
Observable.
By using
flatMap we can transform our event stream (the keypress events on the text field) into our response stream (the search results from the HTTP request).
app/services/search.service.ts
import {Http} from '@angular/http';
import {Injectable} from '@angular/core';

@Injectable()
export class SearchService {
  constructor(private http: Http) {}

  search(term: string) {
    return this.http.get('' + term + '&type=artist')
      .map((response) => response.json());
  }
}
Here we have a basic service that will perform a search query against Spotify via a get request with a supplied search term. This search function returns an Observable that has had some basic post-processing done (turning the response into a JSON object).
OK, let's take a look at the component that will be using this service.

).flatMap(term => this.searchService.search(term))
.subscribe((result) => {
  this.result = result.artists.items
});
}}
Here we have set up a basic form with a single field, search, which we subscribe to for event changes. We've also set up a simple binding for any results coming from the SearchService. The real magic here is flatMap, which allows us to flatten our two separate subscribed Observables into a single cohesive stream we can use to control events coming from user input and from server responses.
Note that flatMap flattens a stream of Observables (i.e. an Observable of Observables) to a stream of emitted values (a simple
Observable), by emitting on the "trunk" stream everything that will be emitted on "branch" streams.
Any thoughts from more knowledgeable folks here about 4G/LTE speeds in NZ?
Is any provider able to actually make good use of the new iPhone XS hardware?
Thanks!
Spark's '4.5G' would probably be the best thing currently I'd think. That is essentially 4G using Carrier Aggregation, 4x4 MIMO and a higher modulation rate.
However, unless you're paying $$$$$ for a lot of data, what is the point other than boasting about a speedtest result?
Absolutely, all of the Spark 4.5 and 4.9G towers can do this quite easily (tower load depending, of course).
In many of the areas where Spark has these, the towers regularly peak over 1 Gbit, be it from end users running a quick test or purely areas with a lot of wireless broadband connections.
Regardless of provider there are always a few gotchas.
Deploying the hardware to do these speeds in some places is simply infeasible.
Take a tower out in the wops surrounded by trees: it's hardly going to be reasonable to run Band 7 or Band 40 there, as the trees will just gobble that up, while Band 28, Band 5 and Band 8 will often go far (albeit with their very limited bandwidth in comparison). Quite often Band 3 also goes well in these situations to offload the closer traffic.
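A gigabit-class peak from carrier aggregation, 4x4 MIMO and higher-order modulation is plausible on paper. As a rough sketch (every figure below is an idealized assumption for illustration, not anything a carrier publishes):

```typescript
// Back-of-envelope LTE peak rate: carriers × MIMO layers × subcarriers ×
// OFDM symbols/second × bits/symbol, minus a rough overhead allowance.
function peakRateMbps(carriers: number, mimoLayers: number, bitsPerSymbol: number): number {
  const resourceBlocks = 100;        // per 20 MHz LTE carrier
  const subcarriers = resourceBlocks * 12;
  const symbolsPerSecond = 14000;    // 14 OFDM symbols per 1 ms subframe
  const overhead = 0.25;             // control/reference-signal overhead (guess)
  const bitsPerSecond =
    carriers * mimoLayers * subcarriers * symbolsPerSecond * bitsPerSymbol * (1 - overhead);
  return bitsPerSecond / 1e6;
}

// Three aggregated 20 MHz carriers, 4x4 MIMO, 256-QAM (8 bits/symbol):
// peakRateMbps(3, 4, 8) → 1209.6 Mbps, i.e. past the "over 1 Gbit" mark.
```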
#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
beans-msg - 2/2/15
Medieval beans. Fava beans. Garbanzo beans. Recipes.
NOTE: See also the files: fava-beans-msg, peas-msg, vegetables-msg, vegetarian-msg, salads-msg, seeds-msg, soup-msg, grains-msg, at motorola.com stefan at florilegium.org
************************************************************************
From Jeff.Peck at hubert.rain.com Mon Feb 26 12:21:04 1996
Date: Thu, 22 Feb 1996 01:23:00 -0800
From: Jeff Peck <Jeff.Peck at hubert.rain.com>
To: antir at mail.orst.edu
Subject: Re: Hummos recipe<musical fruit>
I have found in the past that if you use dried beans, and soak
overnight in water with 1tbs of baking soda (rinse before cooking)
it takes away a LARGE portion of the gassiness.
Lyulf
Date: Sun, 4 May 1997 18:06:24 -0700 (PDT)
From: Catherine deSteele <desteele at netcom.com>
To: sca-arts at raven.cc.ukans.edu
Subject: Re: Beans are period...sort of.
Based on our research, there were a couple of period beans - fava
beans, which were known in Roman times and are still eaten in the
Mediterranean today. The other period bean was a now-extinct version of
the broad bean - you can substitute the Italian broad bean for it. Be
careful serving fava beans - some people have adverse reactions to it.
They also consumed the pods of fenugreek, known in period as "greek hay",
and still used extensively in Mediterranean and Afghani cooking today.
Vegetarianism in the Middle Ages was a risky practice - few beans or
legumes, no corn, so options for protein were seriously limited - mainly
nuts, eggs, and dairy products. With lack of refrigeration, not a good
lifestyle choice...then.
Catherine deSteele
From: nweders at mail.utexas.edu (ND Wederstrandt)
Date: Tue, 3 Jun 1997 16:29:58 -0500 (CDT)
Subject: Re: SC - Mediterranean Feast
I grew them as a project this year to see how they would do in our somewhat
warm and humid climate (Central Ansteorra) but got them into the ground
late for a heavy crop. They are still producing so I will have a small but
hearty seed crop for next year, since I started with just a few. There is
not as much information on growing them as regular dry beans or green
beans. Most of what I found were British publications. They are a very
pretty plant -- the flowers are white and purplish black. I have a couple
of catalogs at home that sell seeds if you want to try them. Taste-wise, the dry beans have a floury texture, which I like. There are several Roman
recipes featuring favas that are very good so you might check them out.
You can get fava beans at health food stores as well as specialty
and eastern markets.
Clare St. John
From: david friedman <ddfr at best.com>
Date: Sat, 7 Jun 1997 01:36:23 -0700 (PDT)
Subject: Re: SC - Period Recipes
At 2:59 PM -0500 6/6/97, Peters, Rise J. wrote:
>What other sorts of beans were available in Europe? (I don't guess I could
>possibly be lucky enough that pinto beans were .... or any kind of "brown
>beans"?)
Fava beans, garbanzos, lentils. I don't think any of our standard
beans--pinto, lima, kidney, etc.--are old world.
David/Cariadoc
From: david friedman <ddfr at best.com>
Date: Sat, 7 Jun 1997 01:36:14 -0700 (PDT)
Subject: Re: SC - Period Recipes
>There is a good recipe out of 700 Years of English Cooking. Since I don't
>have it here and you need it now, I'll summarize and you can experiment if
>you want to. Its Fried Beans and Onions. Saute onions in oil, add kidney
>beans, ginger, cinnamon and another sweet spice. Heat. The onions, beans
>and sweet spices make a tasty mix and the dish is good hot or cold.
1. Kidney beans are from the new world.
2. I don't know 700 years of English Cooking well enough to identify the
recipe, but here are some somewhat similar things; the last of the three is
the closest to what you describe. All three recipes are from the
_Miscellany_, available online.
Makke

2 large onions
1/2 c red wine
enough oil to fry the onions
1 t salt
- ------.
1 c dried fava beans
6-8 T lard
1/2 c+ onions
1/2 t salt
2/3 c figs (cut in about 8 pieces)
1/2 t sage
pot herbs: 1 1/2 c spinach, packed
1 1/2 c parsley, packed
1 1/2 c mustard greens, packed
1 1/2 c turnip greens
Spices for sprinkling on top: 1/4 t ginger, 1/2 t cinnamon, 1/4 t pepper
Bring beans to a boil in 2 1/2 c water, leave to soak about 1/2 hour, then
simmer another hour, until soft. Drain the beans, mix the whole mess
together and fry it in the lard for 10 minutes, then serve it forth with
spices sprinkled on it. This is also good with substantially less greens.
- ------
Benes yfryed
Curye on Inglysch p. 141 (Forme of Cury no. 189)
Take benes and seeth hem almost til they bersten. Take and wryng out the
water clene. Do þerto oynouns ysode and ymynced, and garlec therwith; frye
hem in oile oþer in grece, & do þerto powdour douce, & serue it forth.
2 15-oz cans fava beans
1 small onion, chopped
3 cloves garlic (1 oz), smashed & minced
3 T olive oil
poudre douce (2 t sugar, 3/8 t cinnamon, 3/8 t ginger)
Drain and wash the beans well, draining thoroughly. Chop onions, crush and
mince garlic. Simmer onions and garlic in 1/2 c water for 3 minutes,
drain. Heat the frying pan with oil on a medium heat, add onions and garlic
and beans (will splatter--be careful), cook, stirring frequently, 10
minutes. Then add pouder douce, mix well, cook 2 more minutes, and serve.
Remember to keep stirring.
David/Cariadoc
From: Uduido at aol.com
Date: Sun, 8 Jun 1997 16:33:57 -0400 (EDT)
Subject: Re: SC - Period Recipes
<< What is a pea bean? >>
According to my currently limited resources, pea beans are navy beans. It also says they "have been grown in Europe and elsewhere since the discovery of America." I would suspect that these were one of the 1st beans introduced to Europe after America's discovery, although I do not have verification of that hypothesis.
Lord Ras (Uduido at aol.com)
From: zarlor at acm.org (Lenny Zimmermann)
Date: Mon, 09 Jun 1997 16:26:52 GMT
Subject: Re: SC - Period Recipes
On Fri, 6 Jun 1997 2:59 PM -0500, Peters, Rise J wrote:
>What other sorts of beans were available in Europe? (I don't guess I could
>possibly be lucky enough that pinto beans were .... or any kind of "brown
>beans"?)
I believe it has already been mentioned that the beans known to have
existed throughout most of our studied time period are fava, garbanzo
and lentils. In the 16th century there are a few more that were added
by import from the New World, so you'll have to decide when and where
your recipe comes from.
The best source I have on what was available in beans is (again)
Castelvetro's "The Fruit, Herbs and Vegetables of Italy". Here he
lists Broad Beans (or Fava/Faba), Turkish beans (these are not from
Turkey, but Castelvetro terms them such to mean they are "foreign",
mainly New World in origin) which are described as "white or flecked
with pink and tan." They also "grow very tall" and " have [a]...lovely
green foliage". The translator, Gillian Riley, proclaims in her glossary that these are Runner Beans, "which we also call French Beans".
He also lists another kind of bean, unnamed, that are "smaller, white
or faintly pinkish with a black spot in the middle." Kind of like a
black-eyed pea, apparently. Then he lists Dwarf Beans, which he states are native or domestic to Italy and are sown in large
quantities in wheat-fields after the harvest. "They do not grow high"
and he states they eat "the cooked tender green pods as a salad, and
do the same with the shelled fresh beans."
Chickpeas are mentioned as coming in white and red forms, the red being considered the better variety. Lentils are
also mentioned and he proclaims them as "one of the most, if not the
most, unhealthy vegetables one can eat, except for the broth, which,
they say, is a miraculous drink for children with smallpox. In general
lentils are only eaten by the lowest of the low." Those Italians sure
have a way with words, eh? ;-)
As a side note he also mentions peas (no further explanation as to any
particular kind or description of pea) and the Grass Pea, or vetch,
which, he says, tastes rather like Chickpeas. He does state of these
that "they are considered a rather common food, for they generate
wind, bad blood and considerable melancholy." Gillian Riley notes of
Grass peas that they "grew wild in Italy and were eaten a lot by the
Romans, but have fallen out of use, which is just as well, as they are
poisonous, even after a preliminary roasting, which is no doubt why
they were said to generate 'wind, bad blood and considerable
melancholy'."
Also of note is that Castelvetro discusses Lupin beans, but I do not
know if this is an actual bean or not. He mainly talks about
sweetening the bean by putting it in clear running water for 2 or 3
days. They are then "peeled and salted and nibbled more as a snack
than anything else, the sort of thing that appeals to pregnant women
or silly children. Dried lupins are used to fatten pigs and other
animals." Gillian Riley states these have been grown in Italy and the
Middle East since the times of the Romans.
So runner beans could probably be used, at least after the mid-late
16th Century. I'm not sure what the black-eyed pea looking bean is. It
could be a black-eyed pea, for all I know. (Not like I have this great
horticultural knowledge, or anything. I know diddley about such
things).
Honos Servio,
Lionardo Acquistapace, Barony of Bjornsborg, Ansteorra
(mka Lenny Zimmermann, San Antonio, TX)
zarlor at acm.org
Date: Mon, 4 Aug 1997 00:22:30 -0400 (EDT)
From: "Sharon L. Harrett" <afn24101 at afn.org>
Subject: Re: SC - Green Beans
On Sun, 3 Aug 1997, Terry Nutter wrote:
> Hi, Katerine here. Ceridwen quotes John Gerard on kidney beans. Sounds
> interesting! I haven't tripped over any references to them in 13th to 15th
> Century cuisine, but maybe I'm looking too early. Can you tell us what his
> dates are?
>
> -- Katerine/Terry
Hi Katerine,
Gerard's Herbal was first published in 1597, late for us but still within the realm of Renaissance cookery, by my standards anyway. I have the
facsimile edition published by Dover, and have spent hours trying to figure
out some of his sources and see if I can get any time frame as to the
import or common use of the plants he describes. Those from the New World he
usually specifies when and where they came from, but not always. There is an
introductory chapter in which he describes many Herbals preceding this one,
by date and author, but no indication if he quotes from these.
I won't be so bold as to hold up this book as documentation for
anything before the lifetime of Gerard,whose book was based on the Dodoens
herbal of 1583, and was updated and revised by Thomas Johnson in 1633. I do
not have to hand any horticultural encyclopedia which would tell me
definitively whether the beans he refers to were actually favas, or kidney,
or some other .
I have seen mention in Le Menagier and a couple others of preparing
beans in their "cods", though and deduce from that , that the people of the
Late Middle Ages ate beans fresh from the plant at times, and not always
ripe or dried. Though this does not allow me to assume those beans are the
same as our "green beans", they may have been similar.
My gardening experience and the seed catalogs I receive lead me to
believe that even what we know as "heirloom" vegetables, (open-pollinated,
old varieties) cannot be traced back more than 75-100 years. Our modern
varieties have been bred for tenderness, appearance, selective harvest
times, tolerance to adverse weather, resistance to disease and insects, etc.
For a definate answer I suppose we would have to look to archaeology, or
plant historians.
OOHHH!... Just looked in "Medieval English Gardens". In a treatise on necessities for the country man, he says that one needs a small table on which to mince or cut up vegetables, including beans in the pod! (12th c) along with shelled beans, cabbage, leeks, onions, lentils, peas, and millet. (Neckham) Hmmmmmmmm......
Ceridwen
Date: Sun, 3 Aug 1997 02:01:05 -0400 (EDT)
From: "Sharon L. Harrett" <afn24101 at afn.org>
Subject: Re: SC - Green Beans
Greetings All from Ceridwen
First I'd like to tell you all how much I have enjoyed this past
week's postings! The challenges, whether they be simple or complex, have
something for all of us! They have been wonderful!!!!.
Comment on the Green Bean thing... John Gerard mentions 11 different
types of "Kidney Beans", with different characteristics of growing and
flowering, Fruiting, etc. He says that 9 of those are common in English
gardens and are eaten both shelled (ripe) and " the friut and cods of Kidney
Beans boiled together before they be ripe, and buttered, and so eaten with
their cods, are exceeding delicate meat, and do not engender winde as the
other Pulses do" In the next paragraph, he goes on to describe the
praparation of the unripe beans, including de-stringing them after being
parboiled.
As for the Botanical evidence, I'm not entirely sure when and by
whom Latin classification was standardized, but Gerard names those beans as follows:
1. Phaseolus Albus - Garden or White kidney bean
2. Phaseolus Niger - Black Kidney bean
3. Smilax hortensus rubra - Red Kidney bean
4. Smilax hortensus flava - Pale yellow Kidney Bean
5. Phaseolus peregrinus fructu minore alba - Indian Kidney Bean with
a small white fruit
6.Phaseolus peregrinus fructa minore frutescens - Indian Kidney Bean
with a small red fruit
7. Phaseolus peregrinus angustifolius - Narrow leafed Kidney bean
(with a small red fruit)
8.Phaseolus Brasilianus - Kidney Bean of Brazil
9. Phaseolus Egyptanicus - Parti-coloured bean of Egypt.
As an aside, he says that there is a bean called the "scarlet bean" which is
grown in a garden he knows of, that the pods have little hairs on them that
sting like nettles, possibly from the East Indies, but not eaten.
He also discusses Lupines (boiled till the bitterness is gone, and eaten with pickle), peas and lentils, garden beans (fava major hortensis)
and black beans (not eaten)
Anyone care to take a stab at comparing Gerard's beans to ours,
horticulturally or otherwise?
Ceridwen
Date: Thu, 09 Oct 1997 13:11:49 -0400
From: "Sharon L. Harrett" <ceridwen at commnections.com>
Subject: Re: SC - Cassoulet
Yep, I have Gerard's... and we did discuss this a few months back, but
anyway, here goes.
Gerard states that there are 9 kinds of "kidney bean" known to him (and
quotes from other sources as well). These include some from India,
Egypt, and Brazil, as well as those grown in earlier times in the
Mediterranean. His illustrations resemble our lima bean far more than a
kidney bean, being flat ovals, and the pods are flat also with a
distinct string along the straight side. He says they come in several
colors, white, black, red, purple, and orange. The plants and flowers
resemble our lima bean much more than a string or shell bean, having
narrow leaves well apart on the stalks.
Among the other legumes, he has lentils(2 kinds) garden peas (6 kinds)
several edible vetches, and the "garden bean" or fava, with 3 kinds
being known (white, yellow, and black)- the black being grown
ornamentally only, not eaten.
There are no references to what we have now... string beans, although
he says that the favas and "kidney" beans may be cooked immature, in
their pods, and dressed with vinegar and salt as a "daintie meat"
Ceridwen
Date: Thu, 09 Oct 1997 16:06:55 -0400
From: Philip & Susan Troy <troy at asan.com>
Subject: Re: SC - Cassoulet
Certainly there must have been beans of various kinds imported from
places like India and China to the Middle East, other than the chick
pea, the lentil, and the fava. The soy bean certainly was cultivated in
Asia very early in our period, and even earlier. Other candidates are things
like mung beans (more or less a tiny variety of soybean) and several
varieties of chick pea that appear to have been more or less unknown to
most Europeans.
However, we don't really know that the kidney beans Gerard refers to,
are the String Bean Group from South America. Kidney bean is a perfectly
natural nomenclature based on shape, and it would be perfectly
acceptable to call even favas by that name.
As is often the case, the more you dig, the more confusing things
become...
Adamantius
Date: Sat, 8 Nov 1997 00:35:22 -0800
From: "Melinda Shoop" <mediknit at nwinfo.net>
To: "SCA Arts" <SCA-arts at raven.cc.ukans.edu>
Subject: Beans in a Period Recipe
I am looking at recreating a recipe from Thomas Dawson's "The Good
Huswife's Jewell", published in London in 1596.
In a recipe titled, "To Defend Humors" the reader is instructed:
"Take beanes, the rinde or the upper skin being pulled of, & bruse them and
mingle them with the white of an Egge, and make it sticke to the Temples,
it keepeth backe humors flowing to the eyes."
I want to know what type of bean available to the sho.
Thank you in advance for your help!
In Gratitude,
Lady Fiametta La Ghianda/Melinda Shoop
Date: Sat, 8 Nov 1997 11:54:37 -0500 (EST)
From: DianaFiona at aol.com
To: sca-arts at raven.cc.ukans.edu
Subject: Re: Beans in a Period Recipe
<<
I want to know what type of bean available to the sh.
>>
Well the bean part is easy---they were using fava beans. These are one
of the few old-world bean varieties, along with lentils and garbanzos
(chickpeas), plus the peas that our modern green peas descended from. Favas
look rather like limas, and tend to have a rather thick, tough skin that
fastidious cooks will often remove. It's not hard, just rather tedious---you
cook the beans lightly, cool them enough to handle, and squirt them out of
the skins. Then finish cooking and seasoning. This process is for the fresh
ones, if you can find them (Look in gourmet markets and stores that cater to
a Middle Eastern or Mediteranian community.), but with the dried ones the
pre-cooking soak will often loosen the skins enough to let you remove them.
That said, I rarely bother, since the skins don't usually offend my tastes.
The exception was some fresh ones that I helped prepare for a feast last
summer. The feastcrat had managed to find a source for frozen fresh favas,
that we used to make the Benes Yfryed from Forme of Cury (Boil the beans,
drain, fry with chopped onions and garlic, sprinkle with powder douce [sweet
spices]). But either the variety was particularly tough or the frying caused
the problem, but they were a bit much even for me. And microwaving the
leftovers I got to take home *really* didn't help............ ;-)
I can find several varieties of canned or dried favas in my local Indian
market, so I don't imagine they are *too* hard to get these days if you live
in a large enough place to have ethnic groceries. Now, if I can just manage
to get across town soon---I'm running low on several things from there! ;-)
Ldy Diana Fiona O'Shera
Vulpine Reach, Meridies
(Chattanooga, TN)
Date: Tue, 18 Nov 1997 09:36:06 -0600 (CST)
From: Todd Lewis <telewis at comp.uark.edu>
To: SCA-ARTS list <sca-arts at raven.cc.ukans.edu>
Subject: Re: Re- Beans in a Period Recip
I came across an interesting passage in a chronicle entitled
L'Estoire de la Guerre Sainte, printed in Edward Noble Stone, trans.,
Three Old French Chronicles of the Crusades (Seattle: University of
Washington, 1939). The chronicle details the campaign of King Richard in
the Third Crusade. Describing a period of famine, the passage reads,
"Back he came and they ate beans, being well-nigh mad with
hunger . . . A certain thing was sold in the host of God which they called
carob-beans. These were sweet to the taste, and a man could get a mess of
them for one silver penny; and they were well worth the seeking. With
these and with little nuts were many folk kept alive. . ." (p.65)
A note in the text describes "carob-beans" as "Saint John's bread,
Ceratonia Siliqua." I don't have much experience in medieval cooking, but
perhaps this is what is referred to in medieval recipes calling for beans.
Lord Henry Percivale Kempe
Shire-March of Grimfells
Calontir
Date: Wed, 19 Nov 1997 09:56:11 -0500 (EST)
From: LrdRas at aol.com
Subject: Re: SC - green beans
<< why would you say green beans were one of the items quickly
and extensively used after discovery? >>
First, I would refer you to the posts from this list a while back when we
were having the great bean debate. ;-)
Secondly, we have a date, according to Toussaint-Samat, of 1528, when seeds were given to Canon Piero Valeriano by Pope Clement VII, who received them as a gift from the New World.
The Canon planted the beans in pots and carefully noted germination rates,
growth patterns, etc. He commented specifically on how productive they were.
Some of the resulting crop was used to prepare a dish which usually used
favas. The result was pronounced delicious and the beans were called fagioli.
The use of these beans swept throughout N. Italy.
At this time the Canon persuaded Catherine de Medici to include a bag of bean
seed in her dowry. The bean was loved by all and, due to its productivity, was only a fleeting "exotic", soon being grown all over Provence and other regions where it ultimately (My Note: probably within 10 years) was known as "poor man's food". Quote: "Its reputation as a cheap stomach filler guaranteed its popularity".
IMHO, other sources and conjecture from eating habits support the idea that green beans, as opposed to dried beans per se, were
eaten rather extensively because a handful of green beans is one serving.
Those same beans shelled as dry would amount to a mere taste. As researchers
into food history we, as moderns, must be ever vigilant to remember that
until recently in history man's society was agricultural. Thusly, the quick
dispersal of a food product that was prolific and good for eating in several
stages of growth would have been, and indeed was, quickly accomplished.
Ras
Date: Wed, 19 Nov 1997 10:14:53 -0500 (EST)
From: LrdRas at aol.com
Subject: Re: SC - green beans
<< Pineapple, I can see. They are sweet which was craved. They are
unusual which makes them ideal for gardens of exotics.
Same goes for peppers and perhaps for Turkeys. They fill a percieved
need.
But green beans?>>
As noted in my previous post, the perceived need was filling bellies. The planting of a single seed and the harvest of multiple seeds only a few weeks later would have assured its place in the garden. With an average household
(including servants) of 20 mouths to feed this shouldn't be too hard to
grasp. :-)
To add to the green bean post: Jane Grigson in her "Vegetable Book" (as does Toussaint-Samat) clearly states that the word "haricot" as used by the English meant dried beans while the SAME word in France denoted "green beans".
Such a dual purpose food which had the advantage of looking very similar to
an already known product, favas, would not have had the problem of acceptance
that such foods as tomatoes or potatoes would have (and did).
In storage dried beans keep very well while dried favas lose their flavor and become rather insipid. As green beans they could have been eaten throughout the growing season and yet would have provided a crop of seed for next year.
Add the ability to be substituted for favas in any recipe, thereby producing a far more palatable product, and it is not at all surprising that it
rapidly gained acceptance. When climate is taken into account, the use of
dried beans by the English and green beans by the French is readily apparent
as it would have been easy for the French to produce two or even three crops
a year where England would have produced one.
<< Stefan li Rous >>
Ras
Subject Re: SC - green beans
Date: Wed, 19 Nov 97 18:03:24 MST
From: DUNHAM Patricia R <Patricia.R.DUNHAM at ci.eugene.or.us>
To: "Mark.S Harris" <rsve60 at msgphx1>, sca-cooks at Ansteorra.ORG
Having grown pole beans last summer and favas and Jacob's cattle beans
(one of the kinds you dry and make soup from) this summer...
The seeds you plant pole beans from look about a quarter the size, but
the same general bean-shape as a fava... (about 1.5 times the size of a
seed pea-- we also had regular peas and sugar pods, both years). I
don't think pole-bean seeds are sold for anything besides growing more
pole beans, to eat the flesh of, but that's a very casual opinion. The
seeds you would see in frozen or canned green beans would be of an
immature size. I think the kinds of beans you use for baked beans and
chili and so forth are not mature green-bean seeds, but types that are
grown specifically for the dried seeds, like the Jacob's cattle (an old
variety, name comes from they're brown and white speckled). (Yeah, I
got inspired by John Thorne's latest book; I MAY have enough for one
batch of baked beans 8-).)
To a casual observer (me), green beans and favas appear quite similar
when growing. We didn't stake the favas because we didn't understand
they'd try to grow to 6 feet! I think the leaves are generally similar
and the favas and pole beans both have climbing tendrils... the pole
beans' tendrils seem to be much sturdier and more active than the favas.
The fava pods are about twice the size of a green bean, same length,
but, well, --broader-- , and flatter rather than green-bean round...
they -look- like there'd be lima-shaped beans in them... And before the
fava pods mature and start to dry, they're green.
The foliage of the pole beans as I recall stay brighter green for
longer. The favas started to fade (paler and paler green) sooner,
didn't seem nearly as vibrant as the other two types. The real
difference is in harvesting... you pick the pole beans whole and eat
them out of hand clear thru the growing season 8-), or can or freeze or
whatever. The drying beans stay on the bush while the pod goes tan and
papery as it and the beans dry. (Then you pick and shell and winnow
the pod scraps out...) And the Fava pods dry BLACK and withered looking
around the beans... a very odd effect. And you sort of pry the pod off
in hard solid chunks.
So there's a lot of visual similarity between favas and green beans when
young and growing, and by the time you get the big harvest difference,
you've already eaten enough green beans to know a good thing!
Especially cause there is edible produce there from an early stage, on
the pole/green beans, which isn't available with the favas or
drying-beans (well, I didn't try either of those when they were little,
'cause I was pre-programmed to go for the storage end-product...)
Chimene
Date: Thu, 20 Nov 1997 23:19:58 -0500 (EST)
From: LrdRas at aol.com
Subject: Re: SC - green beans
<< But do favas and green beans look alike? >>
No. They look similar. They are both legumes and have the basic
characteristics of all legumes. This includes but is not restricted to
flower structure, podded seeds, root nodules and, in the case of favas and
New World beans, SIMILAR leaf structure. As noted in a previous post the
growing season is longer in New World types. Favas generally require cooler
growing temperatures and finish producing before hot weather sets in.
<<I thought favas were big, tan colored things similar to lima beans. In that
case, I don't think they look like or would be substituted for fava beans.
But I may not be right on what fava beans look like and will look for some.>>
You are right for the most part except you are forgetting that the fava is
surrounded by a darker colored sheath which is usually removed. The resulting
bean is SIMILAR in shape to N.W. beans, that is more or less kidney shaped.
Cooking times and techniques are almost identical for dried beans of both
families and mouth feel and texture are almost identical
<<The only green beans I know have seeds a bit smaller than green peas and
are encased in a little green sack or tube, fresh, canned or frozen. >>
There are many varieties of beans: Red kidney beans, Great Northern, Lima,
Black beans, white kidney beans and my absolute favorite "horticultural"
beans which are white with burgundy markings, just to name a few. All of
these varieties vary in size and to a lesser extent shape. All can be
consumed in the green, immature state pod and all. All can be grown until
mature and used as a dried bean. Most are definitely NOT smaller than peas
with the notible exception of black eyed peas, black beans and the miniature
form of Great Northern (a name I can't recall) which is used in the Current
Middle Ages for the making of real Boston Baked Beans. And, yes, the beans
you are to that come in a "small" green tube including the tube is
collectively called a "green bean". The tiny seeds you notice are embryonic
forms of what would have matured into the familiar dry bean you are familiar
with.
<<If this is the immature seed, are the more mature seeds sold today? Perhaps
under a different name?>>
Generally, yes. See the above varieties mentioned. For the most part, whether
beans are grown for eating when immature and encased in green tube-like
structures or whether they will be allowed to mature into seeds and shelled
out is a decision of the gardener depending on whether food needs are
immediate or not.
Ras
Date: Thu, 12 Mar 1998 12:24:50 -0800
From: david friedman <ddfr at best.com>
Subject: Re: SC - Hummus and falafel
At 9:32 AM -0600 3/12/98, Decker, Terry D. wrote:
>I took note of a comment in an earlier message that there is no period
>documentation for hummus.
>
>I am considered serving hummus and falafel as vegetable dishes in a future
>feast. I would appreciate any input about the history of these two dishes.
>
>Bear
"Hummus" means "chickpeas," and is a period ingredient. Hummus bi Tahini is
the familiar chickpea dip, and I have not found it in any period cookbook.
Sesame seeds are common in period Islamic cooking, but I don't think I have
seen anything that looks like tahini.
There are, however, period dips, or things that work as dips, of which my
favorite (also vegetarian) is badinjan muhassa; the recipe is in the
(webbed) _Miscellany_.
Is falafel made from chickpea flour? If so, you might want to consider
"counterfeit Isfiriya of Garbanzos" in _Manuscrito Anonimo_ as the closest
period equivalent, and try working on that instead. The recipe is:
Date: Thu, 12 Mar 1998 22:32:47 -0800
From: david friedman <ddfr at best.com>
Subject: Re: SC - Hummus and falafel
At 3:45 PM -0600 3/12/98, jeffrey stewart heilveil wrote:
>On Thu, 12 Mar 1998, david friedman wrote:
>
>> Counterfeit (Vegetarian) Isfîriyâ
>>
>
>Cariadoc,
>I was wondering what spices might have been used at the time, as this does
>not sound far from what I generally use to make falafel.
- ---.
- --
The last two appear just before the counterfeit isfiriya recipe. So it
looks as though pepper, coriander, saffron, cumin, plus maybe cinnamon,
lavender, ginger, cloves, garlic and murri, would be the appropriate
spicing.
David/Cariadoc
Date: Tue, 31 Mar 1998 15:55:31 -0800
From: david friedman <ddfr at best.com>
Subject: Re: SC - European Grain/Legume combo?
At 12:56 PM -0800 3/28/98, Konstanza von Brunnenburg wrote:
>
>I am searching for any documented European dish that combined a grain (i.e.
>cereal grass) product with a legume (e.g. beans, peas) product -- the
>trusty vegetarian "complete protein" combo. So far I've only found this in
>a couple of Arabic recipes -- Caradoc's translations of "Khichri" and
>"Counterfeit (Vegetarian) Isfîriyâ of Garbanzos". I'd like to try
>substituting a grain/legume combo for meat in appropriate European recipes,
>and it would be great to be able to somehow *document* that a grain/legume
>combination was at least actually used in Period in (for example) England
>or Germany. (Extra points for grain/legume documented as a Lenten
>substitute!)
As far as I can tell, they did not substitute grain/legume combinations for
meat in order to do meatless meals. Fish is the usual substitute--which
probably isn't much help to you. They did have pea and bean dishes, but
they aren't versions of meat dishes. Note also that bread would have been
served with every meal--so you are getting a grain along with whatever else
is part of the meal. Here are some bean dishes (original only; references
below). The funny letter is meant to be a thorn: single letter for th.
Longe Wortes de Pesone
Two Fifteenth Century p. 89 ... †at potte with the drawen pesen, and late
hem boile togidre til they be all tendur, And then take faire oile and
fray, or elles fressh broth of some maner fissh, (if †ou maist, oyle a
quantite), And caste thereto saffron, and salt a quantite. And lete hem
boyle wel togidre til they ben ynogh; and stere hem well euermore, And
serue hem forthe.
Benes yfryed
Curye on Inglysch p. 141 (Forme of Cury no. 189)
Take benes and see† hem almost til †ey bersten. Take and wryng out the
water clene. Do †erto oynouns ysode and ymynced, and garlec †erwith; frye
hem in oile o†er in grece, & do †erto powdour douce, & serue it forth.
Two Fifteenth Century Cookery Books (1430-1450), Thomas Austin Ed., Early
English Text Society, Oxford University Press, 1964. Page numbers given
herein are from the Falconwood edition.
Curye on Inglysch: English Culinary Manuscripts of the Fourteenth Century
(Including the Forme of Cury), edited by Constance B. Hieatt and Sharon
Butler, published for the Early English Text Society by the Oxford
University Press, 1985.
Elizabeth of Dendermonde/Betty Cook
Date: Tue, 31 Mar 1998 21:36:47 -0800
From: "Anne-Marie Rousseau" <acrouss at gte.net>
Subject: SC - SC-reconstructions of medieval grain and legume dishes
Hi all from Anne-Marie
as promised, here's my reconstructions for medieval dishes that can be used
to combine grains and legumes. As Cariadoc has pointed out, this is not a
medieval concept, but these are reconstructions of medieval dishes, so I
guess it's better than sneaking in your veggie burger cuz there's nothing
else to eat.
Once again, formatting didn't transfer over well, and so if you need
citations, etc, let me know. And, of course, as always, if you choose to
use my recipes, that's great, just let me know and please cite me
appropriately.
Thanks, and enjoy!
- --AM
<snip of pea recipes. See the file peas-msg>
BENES YFRYED from Forme of Curye.
189 Benes yfryed. Take benes and Seeth hem almost til they bersten. Take
and wryng out the water clene. Do thereto oynouns ysode and ymynced, and
garlec therwith; frye hem in oile other in grece, and do therto powdour
douce, and serve it forth.
8T butter
2 large onions, chopped
4 cloves garlic.
Caramelize. Divide into two.
27 oz can Fava beans or 2x15oz cans garbanzos. Drain and rinse.
Fry the benes in 2T melted, bubbling hot butter or olive oil over medium
hi heat until crunchy looking, about 10 minutes. Sprinkle with * tsp.
poudre douce.
Reconstruction notes: YUM!!!! Fava way tastier than garbanzos. Definitely
need to serve hot. Way to go Celeste!
Date: Wed, 1 Apr 1998 11:53:42 -0800
From: david friedman <ddfr at best.com>
Subject: Re: SC - Need recipe ?beans?
Niamh of Wyvern Cliffes gave a recipe for pinto bean pie and wrote:
>Okay so its OOP thought you might like to try it. It is actually
>surprisingly good.
>PINTO BEAN PIE:
>1/2 c hot mashed beans
>1/2 stick oleo
>1 1/2 c sugar
>2 whole eggs
>1 c coconut
>1 c pecans
>1 (9-inch) unbaked crust...
Well, the pinto beans, coconut, and pecans are OOP but the basic idea, as
it happens, is period.
To Make a Tarte of Beans
A Proper Newe Book of Cookery p. 37/C11 (16th c. English)
1/2 c curds (cottage cheese)
6 T butter
4 egg yolks
4 T sugar
4 t cinnamon
Crust:
6 threads saffron crushed in 1 t cool water
5-6 T very soft butter
1 c flour
2 egg yolks
Put beans in 2 1/2 c of water, bring to boil and let sit, covered, 70
minutes. Add another cup of water, boil about 50 minutes, until soft. Drain
beans and mush in food processor. Cool bean paste so it won't cook the
yolks. ... Roll smooth and place in 9" pie plate. Crimp edge.
Pour into raw crust and bake at 350° for about 50 minutes (top cracks).
Cool before eating.
Elizabeth/Betty Cook
Date: Thu, 21 May 1998 21:17:58 EDT
From: LrdRas <LrdRas at aol.com>
Subject: Re: SC - Wanted: recipes for Jacob's cattle beans
rsve60 at email.sps.mot.com writes:
<< like the
Jacob's cattle (an old variety; the name comes from their brown and white
speckles). (Yeah, I got inspired by John Thorne's latest book; I MAY have
enough for one batch of baked beans 8-).)
>>
Jacob's Cattle beans are identical to "horticultural beans" which is what
they are. When cooked they lose the speckles and are all white. They can be
used in any bean recipe that calls for Great Northerns or Navy Beans. They are
New World. Hope this helps.
Ras
Date: Tue, 16 Jun 1998 15:43:33 -0400
From: mermayde at juno.com (Christine A Seelye-King)
Subject: SC - Fave dei Morti (Beans of the Dead)
Ok, here you go, a recipe and everything. This is from the book
"Feast-Day Cakes From Many Lands" by Dorothy Gladys Spicer
copyright 1960, Holt, Rinehart and Winston, NY
'Fave dei Morti (Beans of the Dead) - Italy
Fave dei Morti, beans of the dead, are the little bean-shaped
cakes that Italians eat on November 2, Il Giorno dei Morti, or All Soul's
Day. These small cakes, made of ground almonds and sugar combined with
egg, butter, flour, and subtle flavorings, are traditionally eaten
throughout Italy on the day that everyone decorates the graves with
flowers and says masses for departed souls.
<snip explanation of church decorations, graveside florals>
<snip explanation of other holiday observances>
Fave dei Morti, beans of the dead, are rich and delicate little
cakes. Despite their macabre origin, you will want them often. Color
them orange and serve them at Halloween or Thanksgiving parties with ice
cream goblin or pumpkin molds. Or leave them white and store in tightly
closed tins, to serve with coffee or tea to unexpected guests.
FAVE DEI MORTI
1/2 cup sugar
3 tablespoons butter
1/2 cup finely ground almonds (unblanched)
1 egg
2 tablespoons all purpose flour
1 tablespoon grated lemon rind
Vegetable coloring, if desired
... them into kidney-shaped pieces about as large as lima beans.
Bake on greased cookie sheet in moderate oven (350 degrees) about 15-20
minutes, or until golden brown. Cool 5 minutes before removing them from
pan with spatula. Yield: about 2 dozen small cakes. '
I would infer from the "Add flour and flavoring" line that you should add
whatever flavor you wish at this stage, such as cocoa powder, lemon, etc.
Hope this is what your autocrat had in mind!
Good Luck,
Mistress Christianna MacGrain
Date: Tue, 16 Jun 1998 16:12:35 -0400 (EDT)
From: Gretchen M Beck <grm+ at andrew.cmu.edu>
Subject: Re: SC - Fave dei Morti (Beans of the Dead)
Excerpts from internet.listserv.sca-cooks: 16-Jun-98 SC - Fave dei Morti
(Beans .. by C. Seelye-King at juno.com
> Ok, here you go, a recipie and everything. This is from the book
> "Feast-Day Cakes From Many Lands" by Dorothy Gladys Spicer
> copyrite 1960, Holt, Rinehart and Winston, NY
Looks like the name has transferred since 1614 -- in Castelvetro, Fava
del Morte is actually a sort of fava bean paste.
toodles, margaret
Date: Thu, 18 Jun 1998 00:52:05 -0400
From: "Robert Newmyer" <rnewmyer at epix.net>
Subject: Re: SC - Fave dei Morti (Beans of the Dead)
I found the following recipe thru a friend. Pretty basic but tasty. I have
no idea of the origin of this version but I thought a Fave de Morti recipe
that actually contains beans would be of interest.
Fava de Morti
(Fava Beans)
1 lb. broad beans, dried
5 large garlic cloves, mashed
2 bay leaves
salt
pepper
olive oil, extra virgin
Soak the beans in water overnight. Next morning drain and put in pot with
fresh water, the garlic, and the bay leaves, and simmer until tender. This
may take two to three hours, depending on the age of the beans. Add water,
if necessary, but aim for a thick rather than runny sauce at the end.
Season with salt, pepper, and plenty of really good olive oil. Serve with
lemon and parsley. This dish is good tepid or at room temperature, and is
even better the next day.
from "Painters & Food - Renaissance Recipes" by Gillian Riley
Griffith Allt y Genlli
Bob Newmyer
Date: Thu, 18 Jun 1998 21:17:44 EDT
From: LrdRas at aol.com
Subject: SC - Fava alert
In a message dated 6/18/98 2:41:08 AM Eastern Daylight Time, allilyn at juno.com
writes:
<< Does that mean the dried seeds inside the fava case, or does it mean food
processing cooked fava beans, as we usually eat them--green? >>
I know that I have said this before but people of European descent can have
severe allergic reactions to fava beans. Please be cautious if you are of
European descent, especially Mediterranean ancestry. The offending part of
the bean is the gelatinous stuff between the pod and the bean in green fava
beans for the most part.
Ras
Subject: RE: ANST --..Historical references to beans...
Date: Tue, 08 Sep 98 16:54:25 MST
From: "Decker, Terry D." <TerryD at Health.State.OK.US>
To: "'ansteorra at Ansteorra.ORG'" <ansteorra at Ansteorra.ORG>
> Okay Bear, it's renaissance...but here's what you requested...Reference
> for your beans: fourteenth century.
>
>
>
> Rayah
Thank you for the information. I don't have the Viandier in my library, but
I will probably add it. The reference is almost certainly to favas and I
have never come across it. Wonder if his unstated recipe for tripe is
similar to modern menudo?
Bear
Subject: Re: ANST --..Historical references to beans...
Date: Tue, 08 Sep 98 12:03:17 MST
From: peerage1 <peerage1 at flash.net>
To: ansteorra at Ansteorra.ORG
More windy talk *grin*
> Phaseolus vulgaris, the New World string bean.
Yes and no, that particular name covers a very broad
category...please go and read this site:
and
> To my knowledge, there is no record of these having been consumed in
>Europe within the SCA period.
In answer to that from that site:
The four major cultivated species of Phaseolus bean all originated in
central and S. America. Ancient seeds of cultivated forms
have been found in Peru (dated to 6000 BC) and Mexico (dated to 4000
BC). Bean cultivation spread into N. America; finds
in New Mexico have been dated to around 300 BC. French beans were
brought to Europe in the early 16th century. Early varieties were all
climbers, and dwarf French beans were not commonly grown until the 18th
century.
another similar reference:
Hernando Cortes is guilty of bringing "green" beans to Europe. Date of
Cortes is 1485-1547.
Main Entry: har·i·cot
Pronunciation: '(h)ar-i-"kO
Function: noun
Etymology: French
Date: 1653
: the ripe seed or the unripe pod of any of several beans (genus
Phaseolus and especially P. vulgaris)
>
Subject: RE: ANST --..Historical references to beans...
Date: Tue, 08 Sep 98 13:10:49 MST
From: "Decker, Terry D." <TerryD at Health.State.OK.US>
To: "'ansteorra at Ansteorra.ORG'" <ansteorra at Ansteorra.ORG>
> > To my knowledge, there is no record of these having been consumed in
> >Europe within the SCA period.
>
> Hernando Cortes is guilty of bringing "green" beans to Europe. Date of
> Cortes is 1485-1547
To be precise, I know of no use of unshelled New World beans in period
(which is what the menu that kicked this off suggested).
Introduction and cultivation does not equate to culinary use. Tomatoes were
brought back to the Old World early on and known to be in Italy in 1534 and
in England by 1596, but they were used as ornamentals rather than food
plants. Sweet potatoes were in common use early on, but the white potato
was generally ignored. There is evidence that the white was imported into
Spain in 1573 as some form of emergency food and there is a German recipe
from the very late 16th Century for a potato dish, but as a general
foodstuff white potatoes didn't take off until the 18th Century.
>.
To my knowledge, this is apocryphal. Catherine was 14 in 1533, her family
was in dire straits financially having been on the wrong side of a bad civil
war, and her Uncle, Pope Clement, used her to cement a political alliance
with the French. Her retinue belonged to the Pope and all those wondrous
Italian cooks went back to Italy with him. She was a very small player in
French history until 1560, when she became Regent for her son. She spent
the next 29 years making up for lost time, changing France's culinary tastes
in the process. Unless there is primary evidence that she did receive
haricots from Canon Piero Valeriano, I would consider the story
questionable.
> Main Entry: har·i·cot
> Pronunciation: '(h)ar-i-"kO
> Function: noun
> Etymology: French
> Date: 1653
> : the ripe seed or the unripe pod of any of several beans (genus
> Phaseolus and especially P. vulgaris)
Yes, and how were they served? The best evidence I've seen is a late 16th
Century painting called "The Bean Eater," which shows a peasant eating a
bowl of shelled beans. The recipes I've seen would not work well with unshelled
beans.
> >
The dried pod is green around the edges and brown on the sides. I haven't
seen a fresh pod or the growing plant.
To my knowledge, the pod is not used in medieval cooking; at least, I
haven't seen a primary source recipe or description to that effect. If you
have one, I would be interested in the source.
One of the reasons for not using the pod (in fact for not serving favas at a
feast) is that a number of people, usually of Southern European extraction,
display an allergic reaction to the fava. This is commonly very mild, but
there is a small percentage who have an anaphylactic reaction. Some
authorities believe Pythagoras died from an anaphylactic reaction to fava
beans after avoiding arrest by hiding in a bean field.
At any rate, I would not serve what we in the U.S. call "green beans" at an
"authentic" Medieval feast. They would be Renaissance at best.
Bear
Subject: Re: ANST --..Historical references to beans...
Date: Tue, 08 Sep 98 22:27:44 MST
From: RAISYA at aol.com
To: ansteorra at Ansteorra.ORG
I've been listening in on the discussion of period beans with interest. I
have an interest in plants, not as much as a cook but as a gardener. New
world shell beans were available before 1600 in Europe; whether or not they
were common, they were known in Europe within our period. I haven't found a
description of snap beans; I'd be interested in that.
>peas, frenched beans, mashed beans, sieved beans or beans in their shell
In the TACUINUM, the author recommends favas cooked in water and
vinegar and eaten unshelled to treat dysentery. I generally get an impression
that the pods aren't considered too tasty, though, so this reference interests
me <G>.
I don't really care one way or another about the inclusion of New World foods,
that's the discretion of the cooks, or should be. I just found this part of
the discussion intriguing. It's amazing what we can learn when we share
information.
However, my husband is deathly allergic to all legumes, and we had a bad scare
a while back when someone used the same spoon to stir several pots,
accidentally adding some peas to a dish that wasn't supposed to have any.
Luckily, he spotted a pea in his bowl. Now, we rarely eat feasts that include
legumes, which means we won't be eating this one. We don't eat pot-luck
feasts for the same reason.
Subject: RE: ANST --..Historical references to beans...
Date: Wed, 09 Sep 98 07:02:06 MST
From: "Decker, Terry D." <TerryD at Health.State.OK.US>
To: "'ansteorra at Ansteorra.ORG'" <ansteorra at Ansteorra.ORG>
>.
You also have fasoles, which are an African variety of Vigna sinensis and
are the ancestor of the modern black-eyed pea. Another variety commonly
called the cowpea has its origins in India.
Vetchlings are members of genus Lathyrus, but I haven't taken the time to
chase down the appropriate species.
>
A little casual reading last night suggests that there are a couple of varieties of
fava. The chief difference appears to be the size of the seed. There were
no comments on the difference in taste. I think the seed you are describing
is the large seed variety.
Bear
Date: Sat, 5 Dec 1998 08:22:15 -0600
From: "Decker, Terry D." <TerryD at Health.State.OK.US>
Subject: RE: SC - Fava beans?? (and thanks)
> All the stuff
> I have on period beans tells me that favas are the most period variety,
> which isn't helping much... :>
>
> Melisant
Take a look at cowpeas and black-eyed peas. My understanding is that these
are variants of the same species, which originated in India, was brought to
Africa, and entered Europe from Africa in the late Medieval period.
The black-eyed pea was presumably imported into the US as part of the slave
trade.
Bear
Date: Fri, 04 Dec 1998 17:43:06 -0800
From: Anne-Marie Rousseau <acrouss at gte.net>
Subject: RE: SC - Fava beans?? (and thanks)
Hey all from Anne-Marie
re: nom de plums for fava beans....
see also broad beans, and "horse beans" of all things. The bins at our
middle eastern market show them to come in a wide variety of colors and
shapes and sizes, but the most common is either like a large browny green
lima bean with a thick leathery skin, or else the canned variety, which
resembles a brownish garbanzo bean with a thick skin.
As far as I know, "black eyed peas" and "cowpeas" are new world beans. They
may have been introduced to colonial America by the slave trade, but
several other New World foods like sweet potatoes and peanuts were as well
(amazing how things move so quickly, no? The Portuguese see 'em here, and
bring 'em home and use them and next thing you know, the Africans are using
them, and then they come back home...) Anything with the genus Phaseolus
is New World. Fava, garbanzos and lentils are in the pea family. If you get a chance
to look at the plants, you can easily tell the difference, and if you wanna
do a bit of dissection, the way the seed is assembled can tell the
difference too. Kidney beans are Phaseolus, and they have a "belly button"
in a certain place. Fava and friends have their "belly buttons" in a
different place.
- --AM, who is very angry with Mr Vehling for interpreting Apicius as being
for "french beans". Sheesh!
Date: Sat, 5 Dec 1998 17:49:31 EST
From: LrdRas at aol.com
Subject: Re: SC - Fava beans?? (and thanks)
TerryD at Health.State.OK.US writes:
<< Cowpeas are Vigna unguiculata and are of Old World origin.
Bear >>
Correct. The Chinese yard long bean is also a Vigna. The unique thing about the
CYL bean is that we have what is apparently a very close to life-like
illumination of it in a manuscript dating from before the discovery of the New World.
Using that illumination as a reference point, I planted these beans in my
garden this year. They work in all the period recipes we have for beans that
do not specify fava specifically. Oh, one other interesting thing about them
is that they come from the area that most of the Oriental spices (e.g.
cinnamon, etc.) come from, and the dried bean looks like a miniature red
kidney bean, which is mentioned in period sources, IIRC.
Mind you, I'm not saying that these were known in Europe but all the
circumstantial evidence adds up to the probability that they were known. If
they were known it would explain a lot about why Europeans accepted Phaseolus
beans so extraordinarily quickly. CYL beans are long and green, have kidney
shaped beans and most importantly they taste like Phaseolus beans in both
the green state and mature dried form.
Date: Sat, 5 Dec 1998 15:25:49 -0800
From: david friedman <ddfr at best.com>
Subject: Re: SC - Fava beans?? (and thanks)
At 9:40 AM +0200 12/5/98, Jessica Tiffin wrote:
>Please, can one of you American cooks give me an alternative name for fava
>beans?
Broad beans. I think I've also seen them labelled "fabiolo" or something
similar in Italian or Spanish.
David/Cariadoc
Date: Sat, 5 Dec 1998 18:36:12 -0600
From: "Decker, Terry D." <TerryD at Health.State.OK.US>
Subject: RE: SC - Fava beans?? (and thanks)
>
There is a 16th Century (IIRC) painting entitled The Bean Eater which shows
what appears to be a farmer eating a bowl of beans. The beans are kidney
shaped, white with a black spot at the inside of the bend. I haven't been
able to identify them, but I think they are some form of Phaseolus. The
painting may support your contention of early adoption.
Bear
Date: Sun, 6 Dec 1998 01:15:51 EST
From: LrdRas at aol.com
Subject: SC - Broad (fava) beans more info
Bean, Broad -- Vicia faba L.
James M. Stephens
... Arabic world, which are large and flat. Seeds
are variable in size and shape, but usually are nearly
round and white, green, buff, brown, purple, or black. Pods are large and
thick, but vary from 2-12 inches in length. The plant is an
erect, stiff-stemmed, leafy legume reaching 2-5 feet when mature. They are
quite different from common beans in appearance
because the leaves look more like those of English peas than bean leaves.
Small white flowers are borne in spikelets.
CULTURE
Broad bean is a long, cool season crop, requiring 4 ...
USE
The parts of the plants used are the seeds. ... the USA do not carry them.
The varieties `Long Pod' and `Giant Three-seeded' are often advertised.
Other varieties of fava beans:
Aquadulce
Banner
Bell
Bonnie Lad
Broad Windsor
Brunette
Bunyard's Exhibition
Colossal
Express
Fava
Hava
Ipro
Ite
Masterpiece
Minica
Primo
Relon
Suprifin
Tezieroma
Toto
Windsor
Witkiem Major
Date: Sun, 6 Dec 1998 10:12:59 EST
From: LrdRas at aol.com
Subject: Re: SC - Fava beans??
melisant at iafrica.com writes:
<< We do also get the little red kidney beans, which Ras suggested are also
mentioned in period sources - which ones? Could you post some recipes?? :>
>>
The 'little red kidney beans' I mentioned are the dried seeds of Chinese Yard
Long beans. These beans are very small averaging only about 1/3 of an inch
long. The product labeled 'kidney beans' in the supermarket are 2 to 3 plus
times larger and, SFAIK, are a species of Phaseolus and therefore New World.
Chinese Yard Long Beans are not Phaseolus. And as indicated in my previous
post, their use in the Middle Ages is merely conjecture on my part. Until I
can find some evidence that clearly shows their use in medieval times, I
would be very hesitant about serving them at feast or claiming them as 'period'
for western cultures.
Ras
Date: Sun, 6 Dec 1998 11:14:55 EST
From: LrdRas at aol.com
Subject: Re: SC - Black-eyed peas
phlip at bright.net writes:
<< Are you sure about that, Ras? I was always told that it was the other way
around, that black-eyed peas were actually beans. >>
Sorry for the confusion. Black-eyed peas are a member of the Vigna spp. They
are all commonly referred to as cowpeas. Technically they are, in fact,
beans. The legumes have many terms used for their several categories
including beans, cowpeas, peas, lupines and other terms depending on the
individual characteristics.
While black-eyed peas are in fact a bean, they are more accurately cowpeas
when a descriptive term is applied to them. My apologies for the confusion
but I was trying to distinguish them from Phaseolus and specific other Vigna
spp. at the time.
My error :-(.
Ras
Date: Sun, 6 Dec 1998 11:15:55 -0600
From: "Decker, Terry D." <TerryD at Health.State.OK.US>
Subject: RE: SC - Black-eyed peas
> TerryD at Health.State.OK.US writes:
> << Since there are more varieties of beans than I have encountered, I
> leave the question of precise identification open for further research.
>
> Bear >>
>
> Was there any accompanying text with the illustration that you cited which
> could shed any light on the matter? My possible illumination of a long
> green bean was merely a decorative element on the page and completely
> unrelated to the text. :-(
>
> Ras
It was being used as a decorative illustration. The particular piece is The
Bean Eater by Annibale Carracci (1560-1609).
Looking at a better reproduction from the wife's collection, the colors run
more toward tan, so it could be black-eyed peas which are being eaten.
I think the Italian title may be Mangiafagioli. If so, according to Root,
the fagioli refers specifically to haricot beans. Unfortunately, we still
have the problem of artistic license.
Thanks for passing on the information about the coloration of cowpeas.
While rooting around in my stacks, I came across the information that your
Yard Long Beans are Vigna unguiculata sesquipedalis and are also commonly
named asparagus beans or Goa beans.
Bear
Date: Sat, 19 Dec 1998 04:43:29 -0600
From: allilyn at juno.com (LYN M PARKINSON)
Subject: Re: SC - Bean experiments
Tonight's Play in the Kitchen dealt with some bean experiments. I don't
have any Fava beans, so the experiments still have a great gap, but
having washed, soaked, rinsed and cooked pea beans, pinto beans, great
northern beans, navy beans, chick peas and lentils I don't find much
taste difference in any of them. What little there might be would be
covered with the onions and garlic that seem ubiquitous to period
preparations. Once brayed, they'd look almost the same, too, except for
a bit of color difference, and that could be changed with the recommended
saffron.
Allison
allilyn at juno.com, Barony Marche of the Debatable Lands, Pittsburgh, PA
Kingdom of Aethelmearc
Date: Sat, 19 Dec 1998 16:10:27 EST
From: LrdRas at aol.com
Subject: Re: SC - Bean experiments
allilyn at juno.com writes:
<<.
>>
Please don't take this personally but I find there is a very great difference
in flavor between all the varieties that you mentioned, especially favas and
the other beans. Also there is circumstantial evidence that suggests that
several other beans may have been grown in period besides favas, such as
yard long beans and black-eyed peas.
The gist of your post, if I read it correctly, is that you feel the
similarities warrant their use. You also feel that the supposed difficulty in
obtaining them, coupled with a rare allergic reaction to favas, warrants
their exclusion. These insignificant factors alone then warrant the
substitution of Phaseolus species for known Old World species. Am I correct?
If so, my position is that ease of attaining ingredients should not be a
factor. Simply use other recipes which do not call for the product, grow
your own or, most significantly, have your grocer order them for you.
Similarities with New World products sound like a reasonable reason. However,
this observation is based on your personal taste. I can tell the difference
between different varieties of green beans, potatoes and tomatoes among other
things. To my palate those differences are real enough to cause me to not
prepare certain dishes if the variety necessary for the dish is not available.
The flavor difference between favas, lentils, chickpeas and New World beans
is so glaring to a trained palate that they are as different as licorice,
oranges, walnuts and grapes.
In addressing the allergy angle, the reaction to favas is EXTREMELY rare and
is limited to persons descended from ancestors that come from a very narrow
Mediterranean region. If we were to use this argument we would have to leave
every known food out of feasts, especially since allergies to nuts, assorted
fruits, alliums, dairy products, seafood, fish and wheat are more widespread
than fava allergies.
When we come across rare or unusual ingredients in recipes the far better
route, IMO, would be to try to obtain the ingredient or forego using the
recipe rather than compromise the truth by degrading cookery from a
respected art/science to the level of 'slopping the hogs'.
Ras
Date: Fri, 22 Jan 1999 23:02:57 EST
From: LrdRas at aol.com
Subject: Re: SC - Lupini Beans??
TerryD at Health.State.OK.US writes:
<< I can't place them, but it is possible that you are talking about lupine
seeds. Lupine or lupin is a generic name for members of the genus Lupinus
in the pea family. Lupines have been cultivated since the Bronze Age, so it
is very likely they were known in period.
Bear >>
Lupini are EXTREMELY poisonous if eaten raw and must be thoroughly cooked
which removes the poisons.
Ras
Date: Sat, 23 Jan 1999 21:52:04 EST
From: LrdRas at aol.com
Subject: SC - Lupini Beans-update
CONRAD3 at prodigy.net writes:
<< Yes these do look similar to what I saw, but the ones I saw were dried
beans. >>
In my previous post I said that lupines were poisonous and must be cooked
before eating to render the poison harmless. This is only partially correct.
Of the 100+ species of lupines, the white lupine has been bred to produce a
few non-poisonous varieties. The others are still grown, however, so caution
would be the best
route when using these beans because variety is not usually listed on the
package.
Historical additions:
Although these legumes grew wild in Italy and Greece and were collected and
used by both cultures, they were not cultivated until the Roman empire. They
were considered a food for the poor and great cauldrons of them were prepared
for the poor on certain festival days. By the beginning of the Italian
Renaissance, they disappear from culinary tomes and are not mentioned
again until after that period.
Toussaint-Samat in History of Food talks about them a little. Poisonous
properties and a minor amount of history was mentioned in The Visual Food
Encyclopedia.
Although considered by many in the Current Middle Ages to be at best an
Italian ethnic food, the vast majority of gardeners today grow them for their
beautiful white, mauve and pink flowers, for which they have been known
throughout history.
Ras
Date: Sun, 24 Jan 1999 12:30:13 -0500 (EST)
From: Gretchen M Beck <grm+ at andrew.cmu.edu>
Subject: Re: SC - Lupini Beans-update
Excerpts from internet.listserv.sca-cooks: 23-Jan-99 SC - Lupini
Beans-update by LrdRas at aol.com
> for the poor on certain festival days. By the beginning of the Italian
> Renaissance , they disappear from culinary tomes and are not mentioned
> again until after that period.
That is not entirely correct -- both Platina and Castelvetro discuss
lupines. Castelvetro says: Our womenfolk and little children nibble at
lupin beans between meals during the hottest summer days. They are very
bitter but can easily be sweetened by putting them in a canal or deep
stream of clear running water, in a tightly fastened bag secured to a
pole or hook, so that the current flows right through them. The lupins
are left there for two or three whole days, until they have lost their
bitterness and become sweet. Then they are peeled and salted and
nibbled more as a snack than anything else, the sort of thing that only
appeals to pregnant women or silly children. Dried lupins are used to
fatten pigs and other animals. (He also mentions that lupin beans can
be used to drive away moles and enrich poor soil)
Platina doesn't talk about the beans, but does advise cooking and eating
the stalks like you would asparagus. From the description, "harsh" and
"they are very bitter", it is likely the same plant.
toodles, margaret
Date: Wed, 1 Dec 1999 16:43:56 EST
From: LrdRas at aol.com
Subject: Re: SC - Lab Rat Redux (cooking exper.)
Alia Atlas writes:
<< Boil green beans (This probably refers to something like fava beans. These
are no string beans. String beans are a New World food.) >>
Correct about the phaseolus green beans. But as I posted sometime ago, using
a picture of an illuminated manuscript I found in a book (source unknown now
but when found will be posted), I still am of the opinion that either Chinese
yard long beans or, possibly, young black-eyed peas were the actual 'green
beans' referred to during period. The yard long beans look EXACTLY like the
illustration when a photo is placed side by side and in real life. Also the
dried beans of the yard long beans are perfect miniatures of what we know of
as 'kidney' beans. So there is a possibility that when 'kidney bean' is
mentioned in period manuscripts the yard long in a dried state is also meant.
I know that this is all circumstantial evidence but I would bet my money that
yard longs are the elusive period 'green' and 'kidney' beans.
Ras
Date: Thu, 02 Dec 1999 17:30:43 -0800
From: "Laura C. Minnick" <lcm at efn.org>
Subject: Re: SC - Lab Rat Redux (cooking exper.)
Valoise Armstrong wrote:
> Just one quick note. I believe gruene can refer to fresh beans as well
> as green beans.
> Instead of dried beans, you might try this with fresh ones.
The Middle English 'grene' also means 'new', 'untested/untried', even
'raw'. And holds hints of the supernatural.
'Lainie
Date: Thu, 2 Dec 1999 22:09:08 EST
From: LrdRas at aol.com
Subject: Re: SC - Period green beans
ringofkings at mindspring.com writes:
<< Could the asparagus pea or winged lotus (Tetragonolobus purpureus) have been
what was described as 'green beans'? You eat them pod and all and they do look
more like beans than peapods. It is listed in Gerard as the four square
velvet pea.
Akim >>
Absolutely. I only referenced yard long beans because I grew them a couple of
years in a row and they look so much like the 13th century illumination that
it is downright eerie :-) The period recipes for 'green beans' and 'kidney'
beans also work extremely well with this variety in my experience. Is there a
source for a picture of the beans that you mention? I looked in my seed
catalogs and can't find them. :-(
Ras
Date: Tue, 22 Feb 2000 17:40:18 -0500 (EST)
From: Gretchen M Beck <grm+ at andrew.cmu.edu>
Subject: Re: SC - Suggestions for a mushroom dish?
Excerpts from internet.listserv.sca-cooks: 22-Feb-100 Re: SC -
Suggestions for a .. by Bronwynmgn at aol.com
> Did I miss something? I can't see anything in the original that suggests the
> bean paste should be put into pastry and fried. It looks to me like you
> should serve the pureed beans hot with olive oil, pepper or cinnamon, and
> raisins. More like refried beans.
No, you didn't, I think I did. I've loaned out my copy of the
manuscript, but somewhere in the recipe for favetta it says to wrap
them in paste and fry them, and that ladies keep these in little boxes
for delicate nibbling.
Sorry about that.
toodles, margaret
Date: Wed, 19 Apr 2000 09:31:59 -0500
From: "Decker, Terry D." <TerryD at Health.State.OK.US>
Subject: RE: SC - Lamb recipe and vegetable stew recipe request
> I used dried chickpeas at a feast I did a couple years ago. They never
> completely softened up and I soaked them overnight and then cooked them for
> several hours. I've been reluctant to use them ever since.
>
> Mercedes
I've had this problem with dried legumes which have been stored for extended
periods and have not discovered a satisfactory answer. It may be they need
to be soaked longer or be cooked for an extended period or both. I have not
had the problem with dried legumes purchased shortly before use from my
local health food store which sells them in bulk and has a high turnover.
Bear
Date: Thu, 20 Apr 2000 00:41:04 +1000
From: "Lee-Gwen Booth" <piglet006 at globalfreeway.com.au>
Subject: Re: SC - Lamb recipe and vegetable stew recipe request
From: Mercedes
> I used dried chickpeas at a feast I did a couple years ago. They never
> completely softened up and I soaked them overnight and then cooked them
> for several hours. I've been reluctant to use them ever since.
Simple solution! Invest in a pressure cooker - I would not be without mine.
It does amazing things to dry beans and makes the most wonderful brown rice
imaginable (what is more, do it properly and you don't even have to drain
it). Ready, cooked, soft and delicious, and in its own serving dish in about
20 minutes!
Gwynydd of Culloden
Date: Wed, 19 Apr 2000 11:09:11 -0400
From: "Alderton, Philippa" <phlip at morganco.net>
Subject: Re: SC - Lamb recipe and vegetable stew recipe request
Mercedes skrev:
>I used dried chickpeas at a feast I did a couple years ago. They never
completely softened up and I soaked them overnight and then cooked them for
several hours. I've been reluctant to use them ever since. <
I ran into the same problem a few years ago with black beans- I boiled them
off and on for almost a week before they were soft enough to eat. It turned
out that they were from a very old batch, and they'd just dried far more
than we're used to dealing with from the store. I suspect this might be what
had happened to your chick peas, as I've had it happen to a lesser degree
with other dried beans.
My suggestion would be to try another package from a store where you're
reasonably sure that they have a good turnover- either a ME store, or a
chain in an area where you have either a lot of Mediterranean ethnic groups
or upscale Yuppie types, and see how they work. Another alternative is to
buy the canned variety- just keep in mind, that they're already well-salted,
and you don't need to add more.
Phlip
Philippa Farrour
Caer Frig
Southeastern Ohio
Date: Thu, 20 Apr 2000 01:12:12 +1000
From: "Lee-Gwen Booth" <piglet006 at globalfreeway.com.au>
Subject: Re: SC - Lamb recipe and vegetable stew recipe request
One other hint - do not add salt to the cooking water - it means tough
beans!
Gwynydd
Date: Wed, 19 Apr 2000 17:44:55 EDT
From: Etain1263 at aol.com
Subject: Re: SC - Lamb recipe and vegetable stew recipe request
phlip at morganco.net writes:
<<.
>>
Even "normal" dried beans are several years old! I learned this when I lived
in Michigan..where they grow a great many of the "navy" and "great northern"
beans for market! someone gave me a large bag of "fresh" dried beans...from
that year - and they cooked up almost immediately! Wow! What a difference!
The farmers sell to the grain elevators...who store until they have enough to
transport to the packagers....who package and store until the prices are
"right"...sometimes it's a year or more.
Etain
Date: Wed, 19 Apr 2000 17:08:27 -0400
From: Christine A Seelye-King <mermayde at juno.com>
Subject: SC - Tough Beans?
> One other hint - do not add salt to the cooking water - it means
> tough beans!
> Gwynydd
This was something my late husband used to say, and I never understood
it. If your beans are 'tough', then they aren't cooked enough. If you
don't salt the water as the beans are soaking it up, you will never get
the salt into the beans, just in the fluid surrounding them. So, how do
you end up with tough beans? Sounds more like "tough noogies" to me.
Christianna
Date: Wed, 19 Apr 2000 23:04:53 EDT
From: LrdRas at aol.com
Subject: SC - Chickpeas
mercedes at geotec.net writes:
<< I used dried chickpeas at a feast I did a couple years ago. They never
completely softened up and I soaked them overnight and then cooked them for
several hours. I've been reluctant to use them ever since.
Mercedes >>.
Ras
Date: Thu, 20 Apr 2000 04:04:23 EDT
From: CBlackwill at aol.com
Subject: Re: SC - Lamb recipe and vegetable stew recipe request
piglet006 at globalfreeway.com.au writes:
> One other hint - do not add salt to the cooking water - it means tough
> beans!
> Gwynydd
Actually, this is little more than a widespread myth, I'm happy to say.
Adding salt to beans while they are cooking does not affect their tenderness
in any appreciable way. There may be some tiny chemical reaction, but it is
unnoticeable in the finished product. Salt away, and eat the beans when they
are tender.
Balthazar of Blackmoor
Date: Thu, 20 Apr 2000 14:50:13 -0500
From: "catwho at bellsouth.net" <catwho at bellsouth.net>
Subject: Re: SC - Chickpeas
>
Date: Fri, 21 Apr 2000 00:32:26 EDT
From: LrdRas at aol.com
Subject: Re: SC - Chickpeas
catwho at bellsouth.net writes:
<< >>
I can buy #10 cans for $2.29 (a gallon). I would say that in view of quantity
and fuel costs that is a pretty good deal. :-)
Ras.
PIECE ONE
Ciurons Tendres Ab Let de Melles
(from Sent Sovi)
ORIGINAL: Si vols apperellar ciurons tendres ab let de amelles, se ffa
axi: Prin los ciurons, e leva'ls be. E ages let de amelles, e mit-los
a coura ab la let e ab holi e ab sal; e met-hi seba escaldade ab
aygua bulent. E quant deuran esser cuyt, met-hi jurvert e alfabegua
e moradux e d'altres bones epicies [should be 'erbes'] e un poc de
gingebre e de gras. E quant hi metras los ciurons, sien levats ab
aygua calda, que tentost son cuyts.
TRANS: If you want to prepare tender chickpeas with almond milk, do
it thus: take the chickpeas and wash them well. And take almond milk
and set them to cook with the milk and with oil and with salt; and
put in it onion scalded with boiling water. And when they should be
cooked, put in them parsley and basil and marjoram and other good
spices [should be 'herbs'] and a little ginger and verjus. And when
you add the chick peas, wash them with hot water that they should
cook more quickly.
[NOTE: the insert "should be 'herbs'" is from Santich's book, i didn't
add it. I cooked the recipe with herbs and no additional spices.]
WHAT IT DID:
(1) I used canned garbanzos, rather than soaking and boiling my own.
I've cooked garbanzos from scratch, and while they are, hmmm, mealier
(a good quality) than canned, which are sometimes a bit slimy (i
usually rinse them), i haven't noticed a vast difference in the
quality of a dish made with one or the other.
(2) I bought organic, whole, unroasted almonds to make almond milk,
but i didn't have time to make it. I was going to make it Thursday
night and bring it in a bottle, but I was appliqueing and
embroidering my consort's fighting surcote as well as hand-sewing a
couple wool tunics for myself, so i didn't get around to it. When it
was time to cook, i used boxed organic almond milk that i'd bought to
drink - it has a little, very little brown rice sweetener and some
vegetable thickeners (guar, xanthan, carrageenan, and locust bean
gums). But not so much that it is a vastly different creature from
homemade almond milk, which I would have preferred, but i don't think
the dish suffered greatly.
I dumped the drained garbanzos into my kettle, then poured in enough
almond milk to cover (i wasn't trying to make soup) and added some
salt and a little olive oil. While it was beginning to heat, i finely
chopped a small onion and added it without first scalding, as i
didn't bring enough pans.
After warming and stirring, i began to add other seasonings. I added
white pepper (for personal reasons i don't use black pepper) and
dried ginger powder. It's an amazingly good dried ginger powder that
i bought at the health food store. When i tasted the liquid it seemed
as if i'd used too much ginger and white pepper - it was quite "hot"
- - and while that doesn't bother me, i know some people at the feast
don't like food that's too "piquant". But after i let it cook a bit,
then tasted again with chickpeas in my tasting spoon, it was fine. I
cooked it until the onion was tender and mild.
I had bought fresh organic herbs. At this point i added lots of
chopped flat-leaf parsley and fresh basil. I didn't see fresh
marjoram at the store, so i tossed in fresh thyme and oregano, going
easy on the oregano so it wouldn't take over. When the herbs were
cooked and the broth was well flavored, i added the verjus, stirred
to distribute, then left it to warm for a minute, and removed the pot
from the fire. Personally, i'd like to have added more verjus, as i
like strong flavors. But it was fine, adding a bit of tang to the
dish.
PIECE TWO
Cauli Verdi con Carne
(from Libro della Cocina)
<snip of cabbage recipe. See vegetables-msg>
PIECE THREE
On Preparing a Salad of Several Greens
(from de Honesta Voluptate)
<snip of salad recipe - see salads-msg>
- ---------------
I picked these dishes because they were relatively quick and easy to
prepare at a busy event, yet authentic. I was actually done cooking
before the others who cooked on site. (i mention this because i'm
usually still cooking when everyone is already eating)
Anahita al-shazhiyya
Date: Sun, 7 May 2000 07:31:21 -0500
From: "Decker, Terry D." <TerryD at Health.State.OK.US>
Subject: RE: An Test was Re: SC - Truck Crops
> Bear wrote:
> >BTW, Root also says, "...fagioli refers specifically to the New World bean."
> >Fagioli also refers to the black-eyed pea, which is definitely Old World in
> >origin.
>
> When were black eyed peas introduced, if ever, to Europe? Were they
> eaten in North Africa in Medieval times, i.e., would they have been
> eaten in North Africa after 600 and before 1600? I've got this bag of
> 'em in the freezer...
The term "phaseolus" from which "fagioli" is derived appears in
Roman writings. From context it appears to refer to kidney shaped beans
which are distinct from "faba" or fava beans (Vicia faba). While this does
not preclude some variety of fava being the bean referenced, it does
demonstrate that the Romans acknowledged a difference. The term appears in
Roman writings after the beginning of major trade with Africa which
increases the probability that they were writing about some form of the
black-eyed pea.
The black-eyed pea (Vigna unguiculata var. sinensis (IIRC)) is a bean of
Asian origin with several major varieties, including the yard-long bean (V.
unguiculata var. sesquipedalis). Phaseolus likely refers to any of these
related plants.
For visual evidence of their use at the end of the 16th Century, take a look
at Annibale Carracci's The Bean Eater (Il Mangiafagioli).
It should be noted that while the black-eyed pea was eaten in Italy within
period, and probably before, there is no evidence I have encountered to show
it being used elsewhere in Europe.
> Anahita al-shazhiyya
Bear
Date: Mon, 8 May 2000 09:30:34 -0500
From: "Decker, Terry D." <TerryD at Health.State.OK.US>
Subject: SC - Phaseolus recipes
Platina 7.14
On the Kidney Bean
There is the kidney bean, phaseolus or phasellus, which Virgil calls lowly.
Apuleius writes that this name comes from the island of Phasellus, not far
from Mt. Olympus. Kidney beans have warm and damp force. Their use
lubricates the bowels and is fattening, moves the urine, and is good for
chest and lungs but fills the head with gross and bad humors and brings on
dreams, and indeed bad ones. Its cold and harmfulness can be reduced to
some degree by sprinkling with marjoram, pepper, and mustard. After [eating]
kidney beans, it is necessary to drink pure wine.
Platina 7.33
Dish Made from Peas
Let peas come to a boil with carob. When they are taken from the water, put
in a frying pan with bits of salt meat, especially that balanced between
lean and fat. I would wish, however, that the bits had been fried a little
beforehand. Then add a bit of verjuice, a bit of must, or some sugar and
cinnamon. Cook broad beans in the same way.
Recipe 7.33 is problematical. The Latin text in Milham states, "Hoc item
modo et phaseolos coquito." Milham translates this as "cook broad beans in
the same way." Elsewhere in the text, broad beans appear as "fabam" and
kidney beans appear as "phaseolus." The pattern of translation suggests
that the "broad beans" of this recipe should be translated as "kidney
beans." As the two preceding recipes are for broad beans, it is possible
that this apparent translation error is a printer's typographical error.
According to a footnote, the recipe is taken from Martino and was entitled
in his work, "Per fava li piselli fritti nella fava menata." Said title
suggests that broad beans are meant rather than kidney beans.
Bon Chance
Bear
Date: Tue, 8 Apr 2003 00:21:08 -0600
To: sca-cooks at ansteorra.org
From: James Prescott <prescotj at telusplanet.net>
Subject: Re: [Sca-cooks] Translation issue
At 22:57 -0400 2003-04-07, Patrick Levesque wrote:
> This is a very basic question: I'm wondering about the exact meaning of
> 'grams'. (The french-english dictionaries I have here are not of much
> help, unfortunately, being stuck on the metrical measurement).
>
> Do they only refer to pulses and legumes in general, or does the term
> indicate a narrower selection therein? Webster's definition is 'any of
> several beans' which quite frankly doesn't lead one very far...
>
> It seems safe to assume that chickpeas would be included, but I want to
> verify this before I adapt a new recipe.
If you are referring to Indian cuisine, then 'gram' generally
refers only to the chick pea and very close relatives, such as
'channa' which is like a small chick pea with its skin removed,
and with the pea split. 'Gram' flour is made from 'channa'.
Nevertheless, for our greater confusion, 'gram' is occasionally
used to refer to some other legumes, such as moong beans and
horse gram.
Thorvald
From: "Decker, Terry D." <TerryD at Health.State.OK.US>
To: "'sca-cooks at ansteorra.org'" <sca-cooks at ansteorra.org>
Subject: [Sca-cooks] Translation issue
Date: Tue, 8 Apr 2003 08:57:50 -0500
Gram refers to a number of plants including chickpea (Bengal gram) whose
seeds are used for food in Asia. It derives through Portuguese from the
Latin "granum" (grain), suggesting a 16th Century origin for the usage.
The mung bean (Vigna radiata, green gram or golden gram) and the urd (Vigna
mungo, black gram) are also among the grams. Cowpeas, black-eyed peas, and
yard-long beans (V. unguiculata), pigeon pea (Cajanus cajan), soybeans
(Glycine max) and lentils (Lens culinarius) are sometimes included in
the grain legumes.
Gram also refers to 1/1000 of a kilogram (standard metric measure). From
the French "gramme" (small weight), derived from the Latin "gramma" (small
weight), derived from Greek.
And one must not forget Gram, the sword of Sigmund, broken by Odin,
repaired by Regin, and used by Sigurd to kill Fafnir.
Bear
Date: Tue, 29 Jul 2003 02:23:25 -0400
From: "Christine Seelye-King" <kingstaste at mindspring.com>
Subject: [Sca-cooks] Duh
To: "SCAFoodandFeasts" <SCAFoodandFeasts at yahoogroups.com>, "SCA Cooks"
<Sca-cooks at ansteorra.org>
Ok, if I'd just scrolled down the page, I would have seen them. My bad.
Here is the recipe for anyone whose curiosity I've piqued:
Date: Tue, 29 Jul 2003 09:07:23 +0200
From: Ana Valdés <agora at algonet.se>
Subject: Re: [Sca-cooks] Duh
To: Cooks within the SCA <sca-cooks at ansteorra.org>
Ana
Christine Seelye-King wrote:
<<<
Ok, if I'd just scrolled down the page, I would have seen them. My bad.
Here is the recipe for anyone whose curiosity I've piqued:
Counterfeit (Vegetarian) Isfîriyâ of Garbanzos
Andalusian p. A-1
>>>
Date: Tue, 29 Jul 2003 07:34:48 -0400
From: "Phil Troy/ G. Tacitus Adamantius" <adamantius at verizon.net>
Subject: Re: [Sca-cooks] Duh
To: Cooks within the SCA <sca-cooks at ansteorra.org>
Also sprach Ana Valdés:
>>>
The socca recipes I've seen also call for water, along with the olive
oil. Authorities seem to differ on whether it should be paper thin or
slightly thicker. Usually the cooking method is like that of a pizza,
except the dough would be referred to in English as a batter. If you
can pour it, and cannot pick it up in your hands without tools,
that's a batter. With a couple of exceptions, but generally...
On an only marginally related note, the other big Provencale
chick-pea-based street food (you generally don't see these on
restaurant menus) would be panisse, which is a thick boiled porridge
of ground chick peas, which is spread on a plate to cool and
solidify, after which it is cut into strips and fried like French
fries, in olive oil...
Adamantius
Date: Wed, 08 Oct 2003 22:40:00 -0400
From: Tara Sersen Boroson <tara at kolaviv.com>
Subject: Re: [Sca-cooks] garbanzos/chickpeas
To: Cooks within the SCA <sca-cooks at ansteorra.org>
jenne at fiedlerfamily.net wrote:
> Does one soak garbanzo beans/chickpeas prior to cooking, or not?
>
> -- Pani Jadwiga Zajaczkowa, Knowledge Pika jenne at fiedlerfamily.net
I assume you mean dried ones - yes, you need to soak them. They are too
big to cook down like lentils.
-Magdalena
Date: Mon, 15 Nov 2004 15:02:07 -0600
From: "Terry Decker" <t.d.decker at worldnet.att.net>
Subject: Re: [Sca-cooks] Gunthar Updates
To: "Cooks within the SCA" <sca-cooks at ansteorra.org>
> Really! Black-eyed peas? I always thought they were of African origin.
> Not that I doubt your extensive knowledge, but do you have references
> on hand? I imagine I may have to defend this one if I use them! :)
>
> Aoghann
Consider the Latin "phaseolus" which is distinct from "faba". Phaseolus is
the term for kidney bean. Its Italian derivative is fasoli. Both
phaseolus and fasoli predate Columbus and the arrival of the New World
kidney beans in genus Phaseolus. While it is not certain that phaseolus
referred to the black-eyed pea, it is a generally accepted opinion. Fasoli
still includes the black-eyed pea in modern usage.
Apicius has a recipe for "Faseoli" and Platina has recipes for "phaseolus"
(IIRC) translating from Martino's Italian. Modern confusion occurs because
of the work of taxonomists in the 16th and 17th Centuries using Phaseolus as
the genus name for the New World string-beans.
There are a number of members of genus Vigna, which are of Asian and African
origin, and commonly referred to as black-eyed peas, cowpeas, asparagus
beans, yard long beans, etc. These are found in long pods which resemble
the string-beans. It was this resemblance which caused Columbus to identify
some of the New World beans as "faxones."
I've got no hard and fast dates on when the Vigna arrived in Europe, but it
was certainly no later than the 1st Century CE and it may have been brought
to Europe during the prehistoric migrations. I tend to think it may have
come from Asia with Alexander's armies.
The best evidence of black-eyed peas being eaten in Europe is fairly late.
It is a 16th Century painting by Annibale Carracci, The Bean Eater, which
shows a peasant eating a bowl of black-eyed peas.
Bear
Date: Fri, 12 Aug 2005 16:43:07 -0700
From: lilinah at earthlink.net
Subject: Re: [Sca-cooks] Beans, beans...
To: sca-cooks at ansteorra.org
I wrote:
> Also in the 14th C. Tuscan cookbook are recipes for "fasoli", which
> is "beans", but since most of what we call "beans" are New World, and
> favas have their own name, what does "fasoli" mean?
In response to several posts:
Fasoli are not fava beans. Favas have their own *sections*, one for
fresh and one for dried.
Fasoli are not chick peas. Chickpeas have their own section.
Fasoli are not red beans. Those are New World and the 14th C. is way
prior to Columbus...
Could they be black-eyed peas or a relative? Field peas (which are
grey) or are these the peas? Something else?
In the order in which they appear:
7 Chickpea [ceci] recipes
5 Pea [pesi] recipes
5 Fresh Fava [fave sane] recipes
- - "fave sane" means "whole favas" but it's clear from the recipes
that they are fresh.
2 Dried Fava [fave infrante] recipes
- - "fave infrante" means "split favas" but it's clear from the
recipes that they are dried.
2 Lentil [lenti] recipes
3 Fasoli recipes - it's not entirely clear from the recipes if they
are fresh or dried, although i lean toward dried, since they are
boiled first before adding them to the recipes.
Here are the originals and Vittoria's translations:
[57]
De' fasoli. Fasoli bene lavati e bulliti, metti a cocere con oglio e
cipolle, con sopradette spezie, cascio grattato, et ova dibattute.
Beans well cleaned and boiled, set them to cook with oil and onions,
with aforementioned spices, grated cheese, and beaten eggs.
[58]
Altramente al modo trivisano. Metti fasoli bulliti, descaccati, a
cocere con carne insalata, e con pepe, e zaffarano. E possonsi dare
soffritti con oglio, postovi dentro un poco d'aceto, amido e sale.
Another preparation in the style of Treviso. Put boiled beans,
shelled, to cook with salted meat, and with pepper and saffron. And
this can be served fried in oil, put in a bit of vinegar, starch, and
salt.
[59]
Altramente. Tolli i fasoli bulliti, e gittatane via l'acqua, mettili
a cocere con carne di castrone, di porco, o di bue, o qualunche
vuoli, e molto pesta, e un poco di zaffarano e sale, e da' mangiare.
Another preparation. Take boiled beans, and throw away the water,
set them to cook with mutton, pork, or beef, or whatever you like,
and grind it well, and a bit of saffron and salt, and serve it.
--
Urtatim (that's err-tah-TEEM)
the persona formerly known as Anahita
Date: Fri, 12 Aug 2005 22:54:39 -0400
From: Johnna Holloway <johnna at sitka.engin.umich.edu>
Subject: Re: [Sca-cooks] Beans, beans...
To: Cooks within the SCA <sca-cooks at ansteorra.org>
> lilinah at earthlink.net wrote:
> Also in the 14th C. Tuscan cookbook are recipes for "fasoli", which
> is "beans", but since most of what we call "beans" are New World, and
> favas have their own name, what does "fasoli" mean?
> --
> Urtatim (that's err-tah-TEEM)
The Medieval Kitchen by Redon, Sabban & Serventi
talks about "fasole or faseole. This was an African legume
belonging to the family Vigna and was very similar to the New
World Phaseolus vulgaris. The fasole has more or less
disappeared, but you can easily find its descendant: the black-eyed
pea." page 94
Johnnae
Date: Fri, 12 Aug 2005 22:14:45 -0500
From: "Terry Decker" <t.d.decker at worldnet.att.net>
Subject: Re: [Sca-cooks] Beans, beans...
To: "Cooks within the SCA" <sca-cooks at ansteorra.org>
Fasoli (phaseolus) refers to kidney beans. Any bean that looks like a
kidney, not just the red ones. The word appears in Pliny, so it obviously
applied to a type of legume before the New World beans arrived. Most of the
authorities I've checked believe that phaseolus refers to members of genus
Vigna although some suggest that it may have originally been some form of
fava bean.
If you look up the painting "The Bean Eater," you'll find the poor fellow
eating black-eyed peas.
Bear
Date: Mon, 15 Aug 2005 10:34:26 -0500 (GMT-05:00)
From: Christiane <christianetrue at earthlink.net>
Subject: [Sca-cooks] Re: Sca-cooks Digest, Vol 27, Issue 41
To: sca-cooks at ansteorra.org
I honestly believe that "fasoli" is a variant of fava, from the Roman
Phaseoli mentioned by Pliny.
Today, "fasoli" is the Greek word for fava, and the popular bean stew
of Southern Italian origin — which gets slurred into pastafazool —
initially was pasta fasoli in some Southern dialects, notably
Sicilian and Neapolitan, the strongly Greek-influenced regions of the
country (where also today "fasoli" means beans, but it seems to mean
beans in general; however, I believe that which bean "fasoli"
referred to would vary from region to region and village to village).
There were "white" favas and "black" favas; undoubtedly there were
other varieties, heirloom types that no longer exist today. Fasoli
could very well refer to one of these specific fava variants.
Considering how many different types of favas were cultivated in 18th
century Williamsburg, I have no doubt there were just as many
varieties being cultivated in medieval Tuscany.
Gianotta
Date: Sun, 4 Dec 2005 20:32:10 -0800
From: David Friedman <ddfr at daviddfriedman.com>
Subject: Re: [Sca-cooks] Uses for fava beans....
To: Cooks within the SCA <sca-cooks at ansteorra.org>
I don't think so. One of them has a name that refers to
pistachios--because the green favas look like pistachios.
But I haven't tried it.
> Would using reconstituted (soaked) dried ones work in the fresh-beans
> recipes?
> --Maire
Date: Thu, 7 Sep 2006 06:27:31 -0700 (PDT)
From: Louise Smithson <helewyse at yahoo.com>
Subject: [Sca-cooks] Green beans was My Next Feast
To: sca-cooks at lists.ansteorra.org
Actually there are period (prior to 1600) recipes for Green beans
and a whole host of other new world foods. You just have to look.
Here are some of the ones I found from Italian sources either in or
post period (this is taken from the class I gave at Pennsic).
Recipes from Scappi [5], Messisbugo [6] and Castelvetro [7]
This is where it gets tricky. How do you tell the difference
between an old bean recipe and a new bean recipe when the same name
is used for each? This is the one situation where the appellation
"of India" or "of Turkey" was not added to the name of the plant to
distinguish it from what came before. The one exception to this is a
description from Castelvetro [7]. Capatti & Montanari [8] indicate
that both Scappi and Messisbugo have recipes for green beans; however,
this is the one occasion where no end note is given. Judging from
the recipes themselves, however, these are the ones calling for "fresh
beans" which are replacing black eyed peas or cow peas as a fresh
bean type vegetable.
Per far minestra di piselli, & fave fresche con brodo di carne Cap
CLXXXVIII secondo libro, Scappi
Piglinosi li piselli freschi nella sua stastione, laqual comincia
in Roma dal fin di Marzo, & dura per tutto Giugno, come sanno ancho
le fave fresche, sgraninosi li detti piselli, & ponganosi in un vaso
di terra, o di rame con brodo grasso, & gola di porco salata,
tagliata in fette, et faccianosi bollire fin'a tanto che siano quasi
cotti, & pongavisi una brancata d'aneci, & petrosemolo battuto, &
facciano si finir di cuocere; et volendo fare piu spesso il brodo,
pestisi un poco di essi piselli cotti, & passinosi per lo setaccio, &
mescolinsi con li piselli intieri giungendovi pepe, & cannella, &
servanosi con le tagliature della gola di porco. Si potrebbeno
cuocere con li detti piselli teste de capretti pelate, &
pollastrelli, piccioni, paparini, & anatrine ripiene. Si può fare
ancho in un'altro modo, cioè cotto che sarà il pisello con il brodo,
si potrà maritare con uova, cascio, e spetierie. In tutti li sudetti
modi si possono cuocere le fave fresche.
To make a dish of peas and fresh beans with meat broth, Chapter
188, 2nd book Scappi.
Take fresh peas in their season, which starts in Rome at the end
of March and lasts through all of June, which is also that of fresh
beans. Shell the said peas and put them into an earthenware pot or
copper pot with fat broth and salted pork jowls cut into slices let
them boil until they are almost cooked. Then add a handful of dill
and parsley chopped and let it finish cooking. And if you want to
make the broth more dense grind a few of the cooked peas, pass them
through a strainer and mix them with the intact peas, adding pepper
and cinnamon. Serve them with the cut pieces of pork jowl. One can
also cook the said peas with skinned goat heads, and pullets,
pigeons, doves and ducks stuffed. One can also make it in another
way, that is when the peas are cooked with the broth one can enrich
it with eggs, cheese and spices. In all these described ways one can
also cook fresh beans.
Per fare minestra di Piselli, & Fave fresche Cap CCXLIX, terzo
libro, Scappi.
Piglinosi i piselli o baccelli, sgraninosi, & ponganosi in un vaso
con oglio d'olive, sale, & pepe, & faccianosi soffriggere pian piano,
aggiungendovi tanta acqua tinta di zafferano, che stiano coperti di
due dita, & come saranno poco men che cotti, pestisene una parte nel
mortaro, e stemperisi con il medesimo brodo, & mettasi nel vaso con
una branchata d'herbuccie battute, e faccianosi levare il bollo, e
servanosi caldi. In questo medesimo modo si può accommodare il cece
fresco, havendolo prima fatto perlessare, & fatto stare per un quarto
d'hora nell'acqua fresca. In questo modo ancho si cuoce il fagiolo
frescho.
To make a dish of peas and fresh beans, chapter 249, 3rd book,
Scappi.
Take the peas or beans, pod them and put them in a pot with olive
oil, salt and pepper, and let them fry very slowly. Then add enough
water, which has been colored with saffron, that the beans are
covered by two fingers. When they are a little bit less than fully
cooked, grind a few and mix them with the same broth, and put them
back into the pot with a handful of chopped herbs and bring back to
the boil and serve hot. In this same way one can cook fresh chick
peas, having first parboiled them and let them soak for a quarter of
an hour in fresh water. In this same way one can also cook fresh beans.
A fare fasoletti freschi in tegola. Page 113 Messisbugo
Pigliarai le tegole de fasoletti quando sono tenerini, e tagliarai
il picollo, poi le porrai a cuocere in'acqua bogliente, e subito si
cuoceranno, & cotte che seranno le porrai a scolare col sale sopra,
poi le frigerai in olio overo butiro, e frigendole nella patella, li
porrai un poco di Aceto, e Pevere, e poi li imbandirai.
To cook fresh beans in the pod, page 113 Messisbugo
Take the pods of the beans when they are tender, and cut off the
stalk; then put them to cook in boiling water, and they will
be cooked almost immediately. And when they are cooked drain them
and sprinkle them with salt, then fry them in olive oil or butter in
a frying pan. Add a little bit of vinegar and pepper before serving
them.
De' fagiuoli turcheschi, Castelvetro
Nella passata stagione ho a pieno ragionato della fava fresca e
secca; or qui mi convien ragionare de' fagiuoli, frutto o legume
molto simigliante a quelle di gusto; e di due spezie ne abbiam noi,
né di niuna crudi mangiamo. L'una è de' men communi e più grossi, li
quali son tutti o bianchi over macchiati di rosso e di nero. L'altra
spezie è de' più minuti e tutti bianchi con un occhio nero nel
ventre. I primi si nominano turcheschi, li quali ascendono molto in
alto; però chi non gli pianta vicino alle siepi conviene, volendone
aver molto frutto, piantarvi a canto de' rami di fronde secchi, a'
quali appiccandosi possano in alto montare; e perché portano una
bella foglia verde, le donne in Italia e spezialmente in Vinezia, ove
son molto vaghe dell'ombra e della verdura e ancora per poter dalle
finestre loro vagheggiare i viandanti senza da coloro esser esse
vedute, usano di porre su le finestre delle camere loro alcune
cassette di legno lunghe quanto è larga la finestra,
né più larga d'una buona spanna e piene d'ottima terra; in quella
piantano dieci o dodici di que' fagiuoli a luna crescente di febraio
o di marzo o d'aprile, e poi con bastoncin bianchi vi formano una
vaga grata alla quale essi s'attaccano, sì che d'una piacevole ombra
tutta la finestra adombrano. Gli ortolani ancora ne' colti loro fanno
siepi di canne o di bastoni bianchi della canape, a canto alle quali
piantano quantità di simile legume, e così vengono alla vista a
rendere i loro orti più vaghi e maggior coppia di fagiuoli
raccolgono. I baccelli adunque di questo legume, mentre son verdi e
teneri...
On turkish beans.
In the past season I have given a full account of the fava bean,
fresh and dried; now I shall give an account of the fagiuoli (beans),
a fruit or legume very similar to them in taste; and we have two
kinds of them, neither of which we eat raw. The one kind is the less
common and larger, all white or spotted with red and black. The other species is much
smaller and is all white with a black eye in the middle. The first
we call Turkish, it grows very tall, so you should grow them against
a trellis, or if you want a lot of fruit (a good crop) plant them
against dried sticks or branches, to which they fasten themselves
so they can climb up. Because they have
beautiful green leaves, the women in Italy, especially in Venice,
where they are very fond of shade and greenery, and also so that from
their windows they can gaze at passers-by without being seen by them,
place around the windows of their rooms several wooden
boxes, as long as the width of the window and no
more than a good span wide, and full of good earth. In these they plant
ten or twelve of these beans at the waxing moon of February or March or
April. Then with white sticks they make a pretty trellis to which
these attach themselves, and this creates a pleasant shade over all
the windows so shaded. The market gardeners likewise in their plots
make hedges of canes or of white hemp stalks, beside which they plant
a quantity of this same legume; and thus they make the view of
their gardens more pleasing and also gather a greater crop of beans.
<<< --Anne-Marie:..if you want a period veggie instead of the new world haricots, I highly
> recommend the "new peas in the pod", <> you can often find frozen sugar
> snap peas (much tastier than the snow
> peas) in the frozen veggie section.<
Yeah, sounds better too! I happened to find "enough" bags of green beans on
sale {2 lbs./$1.00} at a store going out of business. And have been
worrying myself about their quality ever since. Since the seating capacity
of the hall is being limited to 60 diners, I am willing to keep the green
beans for myself. Shoot, I should open a bag tonight . . . . Caointiarn >>>
Date: Mon, 12 May 2008 14:04:36 -0400
From: Johnna Holloway <johnnae at mac.com>
Subject: Re: [Sca-cooks] Vegetables and are you all still there?
To: Cooks within the SCA <sca-cooks at lists.ansteorra.org>
And here we can once more tell everyone that the award winning
BEANS A History by Ken Albala is well worth the price. Old world beans
versus new world ... it's all in there.
Johnnae
Date: Mon, 12 May 2008 13:35:00 -0500
From: "Terry Decker" <t.d.decker at worldnet.att.net>
Subject: Re: [Sca-cooks] Vegetables and are you all still there?
To: "Cooks within the SCA" <sca-cooks at lists.ansteorra.org>
<<<
While it appears that some New World beans were adopted by Europeans in
the 16th C. (i'm not sure which ones... Bear? Adamantius? Anyone else?)...
--
Urtatim (that's err-tah-TEEM)
the persona formerly known as Anahita >>>
This research paper at the Colonial Williamsburg website is worth looking
over . It's in the
Gardening > Research area of the site. There are a number of other plants
covered in other papers.
You might also find Bermejo and Leon's "Neglected Crops from 1492 a
different perspective" of interest.
Bear
Date: Mon, 11 Aug 2008 12:59:07 -0400
From: euriol <euriol at ptd.net>
Subject: Re: [Sca-cooks] Wanted: Bean Recipes
To: Cooks within the SCA <sca-cooks at lists.ansteorra.org>
Here
Date: Thu, 14 Aug 2008 11:26:10 -0400
From: "Nick Sasso" <grizly at mindspring.com>
Subject: Re: [Sca-cooks] Wanted: Bean Recipes
To: "Cooks within the SCA" <sca-cooks at lists.ansteorra.org>
#41 Fasoli cipolle fritte cum pipero he canella he zaffrano; poi
lassali reposare sopra las cinere calda uno peza; et poi fa le menestre cum
specie bone de sopra.
Kidney Beans (#41)
Cook the kidney beans in pure water or good broth; wheny.
ORIGINAL TEXT & TRANSLATION
Scully, T. (2000). Cuoco Napoletano - The Neapolitan Recipe
Collection: a critical edition and English translation.
Ann Arbor: University of Michigan Press.)
Niccolo's Recipe
Serves 6 to 8
1 pound field peas, crowder peas, black-eyed peas, or similar
1 medium onion, sliced thin
1 tsp black pepper
1 1/2 tsp cinnamon
10 strands saffron, crushed and steeped in 1/4 cup very warm broth
Cook kidney beans in water or broth until just tender (or use high-quality
canned). Fry sliced onions in a pan with oil; add saffron and remove from
heat immediately. Put the beans in a single layer in a shallow casserole;
sprinkle the top with black pepper and cinnamon, and then spread the
onions/saffron evenly on top. For larger quantities, layer beans and
onions alternately. Bake at 350F for about 30 minutes.
Date: Thu, 14 Aug 2008 13:18:13 -0400
From: "Sharon R. Saroff" <sindara at pobox.com>
Subject: Re: [Sca-cooks] Pinto bean recipe
To: Christiane <christianetrue at earthlink.net>, Cooks within the SCA
<sca-cooks at lists.ansteorra.org>
Traditional
Date: Thu, 26 Feb 2009 22:58:00 -0500
From: Robin Carroll-Mann <rcarrollmann at gmail.com>
Subject: Re: [Sca-cooks] A Question of Dried Beans
To: Cooks within the SCA <sca-cooks at lists.ansteorra.org>
On Thu, Feb 26, 2009 at 10:49 PM, Mairi Ceilidh <jjterlouw at earthlink.net> wrote:
<<< Can someone point me toward period sources that discuss or describe
soaking dried beans or peas prior to cooking? >>>
Here's one.
From the Menagier de Paris:
"OLD BEANS which are to be cooked with their pods must be soaked and
put on the fire in a pot the evening before and all night; then throw
out that water, and put to cook in another water..."
--
Brighid ni Chiarain
My NEW email is rcarrollmann at gmail.com
Date: Tue, 12 Apr 2011 13:35:59 -0500
From: Sayyeda al-Kaslaania <samia at idlelion.net>
To: Cooks within the SCA <sca-cooks at lists.ansteorra.org>
Subject: [Sca-cooks] converting to gluten free
I wonder if someone more experienced with gluten free cooking can talk
about how best to make this gluten free? Could I substitute xanthan gum
and water for the sourdough? I figure I need to play with it, but I'm
hoping not to re-invent a wheel. :)
Sayyeda al-Kaslaania
*******************
Counterfeit (Vegetarian) Isfîriyâ.
1 c chickpea flour
4 t cinnamon
? c sourdough
? c cilantro, chopped
4 eggs
? t salt
2 t pepper
garlic sauce:
2 t coriander
3 cloves garlic
16 threads saffron
2 T oil
2 t cumin
2 T vinegar
[snipped] Crush the garlic in a garlic press, combine with vinegar and
oil, beat together to make sauce. Combine the flour, sourdough, eggs,
spices and beat with a fork to a uniform batter. Fry in about ? c oil in
a 9" frying pan at medium-high temperature until brown on both sides,
turning once. Add more oil as necessary. Drain on a paper towel. Serve
with sauce. Note: The ingredients for the sauce are from "A Type of
Ahrash [Isfîriyâ]" (p. 96), which is from the same cookbook. What is done
with them is pure conjecture.
How to Milk an Almond Stuff an Egg And Armor a Turnip: A Thousand Years
of Recipes
By David Friedman and Elizabeth Cook ISBN: 978-1-460-92498-3
Date: Tue, 12 Apr 2011 18:40:05 +0000
From: yaini0625 at yahoo.com
To: "Cooks within the SCA" <sca-cooks at lists.ansteorra.org>
Subject: Re: [Sca-cooks] converting to gluten free
Does this recipe require a sourdough starter or actual sourdough bread?
If it calls for sourdough bread and you don't want to make a loaf of gluten free bread try Udi's brand bread- not Rudi's.
For sourdough starter we have experimented with rice flour, xanthan gum, yeast and water. It doesn't have the same traditional "sour" taste as sourdough bread but it was good.
Aelina the Saami
Date: Sat, 8 Oct 2011 22:07:19 -0400
From: Sharon Palmer <ranvaig at columbus.rr.com>
To: Cooks within the SCA <sca-cooks at lists.ansteorra.org>
Subject: Re: [Sca-cooks] cannellini beans
<<< Anyone know if these [cannellini beans]
Date: Sat, 08 Oct 2011 23:12:00 -0400
From: Johnna Holloway <johnnae at mac.com>
To: Cooks within the SCA <sca-cooks at lists.ansteorra.org>
Subject: Re: [Sca-cooks] cannellini beans
Actually
Date: Sun, 9 Oct 2011 18:09:18 -0700 (PDT)
From: Honour Horne-Jaruk <jarukcomp at yahoo.com>
To: Cooks within the SCA <sca-cooks at lists.ansteorra.org>
Subject: Re: [Sca-cooks] cannellini beans
Respected
Date: Mon, 10 Oct 2011 00:58:21 -0500
From: "otsisto" <otsisto at socket.net> original
recipe that got me curious has no claims of historic origins but when I
looked in other sources of a similar recipe, they claimed to be adaptations
of a renaissance recipe. Of coarse, no citations.
I figured that it is modern but I wanted to see if by chance it could have
been in SCA period. The original recipe that started my quest has cocoa and
vanilla, the "renaissance ones have cinnamon and almonds instead but
everything else is the same.
As weirdness would have it, I received my "La Cucina Italiana magazine
today. It has an article on beans and bean recipes. Their cannellini torta
is called "Flan dolce di cannellini con ricotta e cacao" :)
Thank you again for the help. I will be making both recipes one day just to
see what they taste like.
De
Date: Mon, 10 Oct 2011 14:34:29 -0400
From: Sharon Palmer <ranvaig at columbus.rr.com>. >>>
Navy and great northern are also New World beans, which are (as
Johnna corrected me) only period in a few places, late in period, as
novelties. Certainly not Roman.
I've never seen any history for the various varieties of new world
beans or how old they are. I suspect that they date to before the
beans came to Europe. I don't think anyone knows what variety the
earliest beans in Europe were, and doubt there is any reason to
consider one New World bean as more period than another.
<<< new beans were given the same name as the old ones, and used in
the same recipes. It is *possible* that your recipe is old and
was originally made with one of the Old World beans.
I checked the index of Apicius for "bean" and "torta" and don't see
anything like this. The notes say that one word now associated
with beans actually meant peas then. Apicius isn't the only Roman
cookbook, and it would help to know the exact title of the original
recipe.
Ranvaig
Date: Thu, 13 Oct 2011 01:36:49 -0400
From: Sharon Palmer <ranvaig at columbus.rr.com>?
===================
Date: Fri, 21 Oct 2011 18:05:00 -0500
From: "Terry Decker" <t.d.decker at att.net>?
Stefan
============
I'm far removed from my references at the moment, but I would suggest taking
a look in Pliny's Natural Histories for a basic take on beans and peas in
Antiquity. That being said, the terms "pea" and "bean" are not
scientifically precise and may through usage apply to various seeds that are
not taxonomically peas or beans. The black-eyed pea, for example, is called
a pea in English, but is placed with beans in Italian, and being a member of
genus Vigna is truly neither a pea nor a bean but is related to both.
For differentiation in most of period, I believe you will find that most of
the peas available were of the sort that divide in two producing split peas,
while the beans retained their unity.
If you want to duck the entire issue, divide the collection of messages by
age and label them "legumes-1-msg", "legumes-2-msg", etc.
Bear
Date: Tue, 25 Oct 2011 00:09:22 -0400
From: Sharon Palmer <ranvaig at columbus.rr.com>
To: Cooks within the SCA <sca-cooks at lists.ansteorra.org>
Subject: Re: [Sca-cooks] peas vs. beans
<<< The black-eyed pea, for example, is called a pea in English, but is
placed with beans in Italian, and being a member of genus Vigna is
truly neither a pea nor a bean but is related to both. >>>
Old world Aduki and mung "beans" are also Vigna.
<<< For differentiation in most of period, I believe you will find that
most of the peas available were of the sort that divide in two
producing split peas, while the beans retained their unity. >>>
I'm not sure this is a valid distinction. Split peas are the result
of a milling operation, but beans can be split too. I'm not sure how
common it was to have peas milled in period.
Rumpolt has numerous recipes for peas, all of them for unmilled peas,
because they tell you to remove the hull: sometimes by soaking in lye
and washing the hulls off, some by cooking with the hull and pressing
through a sieve, leaving the hull behind. I tried this once, and it
looked and tasted exactly like common split peas.
There is a bean recipe that tells you to remove the hull too.
Rumpolt's "Bonen" is likely black eyed peas.
Ranvaig
Date: Fri, 27 Jan 2012 22:45:35 +0000
From: Gretchen Beck <cmupythia at cmu.edu>
To: "yaini0625 at yahoo.com" <yaini0625 at yahoo.com>, Cooks within the SCA
<sca-cooks at lists.ansteorra.org>, Donna Green <donnaegreen at yahoo.com>
Subject: Re: [Sca-cooks] A Bean is a bean
<<< Are lima beans New World or Old?
Aelina >>>
New World, I believe. "Lima" in an agricultural product name was, at least in the 19th C, a reference to Lima, Peru.
toodles, margaret
Date: Fri, 03 Feb 2012 13:13:24 -0800
From: "Laura C. Minnick" <lcm at jeffnet.org>
To: Cooks within the SCA <sca-cooks at lists.ansteorra.org>
Subject: Re: [Sca-cooks] Old World beans
Favas, black-eyed peas, lentils, garbanzos, and peas are all in the
Carolingian capitularies I'm working from. So is salt pork and chard and
kale and mustard greens... cooking without cookbooks isn't actually all
that hard...
Liutgard
Date: Fri, 03 Feb 2012 16:34:20 -0500
From: Johnna Holloway <johnnae at mac.com>
To: Cooks within the SCA <sca-cooks at lists.ansteorra.org>
Subject: [Sca-cooks] soooooo...
On Feb 3, 2012, at 12:58 PM, Honour Horne-Jaruk wrote:
<<< Please list them! I joined in the days when "only Favas survive" was
Gospel. If that's wrong I _so_ want the names of the others! >>>
Maybe this entry will help:
mentions the Mediterranean legumes are: carob or St. John's bread
(Ceratonia siliqua); grasspea or India pea (Lathyrus sativus);
chickpea (Cicer arietinum) ; and lentils (Lens esculenta).
Then fava beans (Vicia faba), also known as the broad bean, Windsor
bean, horse bean,
Scotch bean, and English bean.
The lupine bean (Lupinus albus); bittervetch (Vicia ervilia) ; the
cultivated pea (Pisum sativum); the field pea, used mostly for dried
peas and forage, and the garden pea with its high sugar content.
Then legumes from Africa, such as the hyacinth bean (Lablab niger),
and the cowpea (Vigna unguiculata), also known as the asparagus bean or
yard-long bean, native to West Africa.
Johnnae
Date: Fri, 3 Feb 2012 16:42:49 -0600
From: "Terry Decker" <t.d.decker at att.net>
To: "Cooks within the SCA" <sca-cooks at lists.ansteorra.org>
Subject: Re: [Sca-cooks] Old World beans (was: soooooo...)
From Leonard Fuchs Herbal of 1545:
White Horse Bean (Lupinus albus, white lupine) used in the Mediterranean
world and still cultivated in Georgia (US) until recently.
Common Bean (Vicia faba, Faba vulgaris, fava bean, horse bean)
Large (or great) Pea (Pisum sativa, Pisum maius) looks to be a garden pea
French (or foreign) Bean AKA Wild Bean (Smilax hortensis, possibly the Old
World phaseolus or the New World Phaseolus vulgaris) what Fuchs says about
it can be found here:
The image can be found here:
And for fun, here is the Phaseolus entry from William Turner's New Herball:
Bear
Date: Sat, 4 Feb 2012 11:37:57 -0600
From: "Terry Decker" <t.d.decker at att.net>
To: "Cooks within the SCA" <sca-cooks at lists.ansteorra.org>
Subject: Re: [Sca-cooks] Old World beans (was: soooooo...)
What you are after are members of the genera Lathyrus, Vigna and Vicia.
Probably all members of these genera have been used for human consumption,
but most were considered "famine foods" by the Middle Ages and consigned to
being ground cover and animal fodder, a purpose many still serve today. I
am listing only those I can demonstrate have been used by humans
Grass pea (Lathyrus sativa, Lathyrus sphaericus)
Red pea (Lathyrus cicera)
Sea pea (Lathyrus japonica)
These are the vetchlings that have commonly been used for human consumption,
but other members of these genera have been consumed by humans. You should be
careful with members of genus Lathyrus. The seeds are toxic in quantity
causing symptoms that are referred to as lathyrism.
Bitter vetch (Vicia ervilia) -- evidence of 12th Century use in European
famine
Common vetch (Vicia sativa)
Hairy vetch (Vicia villosa)
Fava beans are placed in genus Vicia (Vicia faba) or in their own
genus, Faba (Faba sativa). They are not the survivor of a class of bean but
are monotypic with cultivar differentiation based on the size of the bean.
Besides arguing over genus, the botanical taxonomists are trying to decide
if the group has varietals.
Black gram (Vigna mungo) -- probably Asian only in period
Adzuki bean (Vigna angularis) -- Asian only in period
Mung bean (Vigna radiata) -- Asian in period, East African after 10th
Century
Cowpea (Vigna unguiculata or V. unguiculata ssp. dekindtiana) -- the name is
used for the general group or the specific subspecies
Catjang (Vigna unguiculata ssp cylindrica) -- probably Asian in period
Black-eyed pea (Vigna unguiculata ssp unguiculata) -- the common European
member of the species
Yardlong bean (Vigna unguiculata ssp sesquipidalis) -- probably Asian in
period
Bear
Date: Thu, 25 Apr 2013 20:51:22 -0400 (EDT)
From: lilinah at earthlink.net
To: sca-cooks at lists.ansteorra.org
Subject: [Sca-cooks] Medieval Beans, was Delights From The Garden of
Eden
David Friedman wrote:
<< In her translation of al-Warraq, she identifies one of the kinds of
beans mentioned as kidney beans, which according to other sources I have
seen are New World. So I am a little concerned that she may be too
willing to assume that knowledge of current practice can be projected
back to period practice. >>
I replied:
<<< Yeah, i've been meaning to write her about that. The original Arabic says, literally, "red beans", and, IIRC, elsewhere in the book she mentions adzuki beans, which like kidney beans are red and unlike kidney beans are Old World (i don't have the book with me at the moment, so i can't find the page #), so i wonder if she may have confused them. >>>
SNIP
So i looked through the glossary last night. I misremembered several things.
First, on page 798 Nasrallah discusses lubya, "beans". In the generic heading, she equates them to kidney beans (New World Phaseolus) and black-eyed peas (Old World Vigna). She mentions that the Arabic word "fasulya" was used in medieval times, although rarely. In modern times, Phaseolus is the New World bean genus. This may be where some of her confusion comes from. However, the genus Vigna, which is Old World, includes a large number of different beans, some of which were formerly included in the genus Phaseolus. And Nasrallah does go on to mention some of them.
When discussing "lubya bayda", p. 798, literally "white beans", Nasrallah equates them to haricot or kidney beans. Then she quotes another medieval Arabic author, Ibn Baytar, comparing these particular beans to kidneys and saying some may be tinged with black or red. It seems to me that these WHITE beans are not our modern red kidney beans, which Nasrallah does not make clear.
As far as my adzuki bean comment, i could not find them mentioned in the glossary of "Annals of the Caliphs' Kitchens". However, on p. 799 Nasrallah lists "lubya hamra", literally "red beans", and she says they are like Hindu red chori. Red chori ARE adzuki beans, although Nasrallah doesn't say so. These are, for a change, actually Old World beans.
Also on p. 799, she lists "lubya sawda", literally "black beans", which Nasrallah equates with turtle or black beans. Obviously the medieval bean was black, given its name. However, modern black turtle beans are Phaseolus, so what this was in Ibn Sayyar's time she does not make clear.
And to add to the problems she lists "lubya Yamaniyya", "Yemenite beans", which Nasrallah says are white soy beans. I am skeptical that they are soy, although at least soy are Old World. Again, there are many spp. of Vigna, so rather than soy these may be one of them.
Urtatim (that's oor-tah-TEEM)
the persona formerly known as Anahita
Date: Thu, 25 Apr 2013 21:48:05 -0400
From: "Jim and Andi Houston" <jimandandi at cox.net>
To: <lilinah at earthlink.net>, "'Cooks within the SCA'"
<sca-cooks at lists.ansteorra.org>
Subject: Re: [Sca-cooks] Medieval Beans, was Delights From The Garden
of Eden
Urtatim,
Madhavi
Date: Fri, 26 Apr 2013 16:51:41 -0400 (EDT)
From: lilinah at earthlink.net
To: SCA-Cooks <sca-cooks at lists.ansteorra.org>
Subject: Re: [Sca-cooks] Medieval Beans, was Delights From The Garden
of Eden
Madhavi wrote:
<<<. >>>
Cow peas are in the genus Vigna, species unguiculata, of which there are four chief varieties. However, Vigna includes around 2 dozen varieties of Old World beans, with and without black eyes :-) Many species of Vigna are known in India as "gram". Other species of Vigna are originally native to Africa.
Lubya is a somewhat generic Arabic word for beans, especially Old World beans and other pulses. Nasrallah lists quite a few varieties, most, but not all, Vigna species. Those she lists are not necessarily mentioned in Ibn Sayyar al-Warraq's compendium, but appear in other works. Other species of Vigna are not classified as "lubya" but have individual names, such as the Arabic "mast", which is the mung bean, known in India as green gram or moong dal, which is Vigna radiata. Confusingly, the Vigna mungo is not the mung bean, but urad dal, which has a black skin and is white inside.
As i mentioned, in her expansive glossary Nasrallah includes white beans (lubya bayda), black beans (lubya sawda), red beans (lubya hamra). The problem with what Nasrallah writes is that she does not seem to differentiate between Old World beans - such as black beans in genus Vigna - and New World beans - such as black turtle beans in genus Phaseolus, which she lists as equivalent to lubya sawda.
Just for fun, here is the list of beans / peas in genus Vigna published in wikipedia
and this list is not all-inclusive(!!).
Vigna aconitifolia – Moth Bean, Mat Bean, Turkish Gram
Vigna angularis – Azuki Bean, Red Bean
Vigna caracalla – Snail Bean, Corkscrew Vine, Snail Vine
Vigna debilis Fourc.
Vigna dinteri Harms
Vigna lanceolata – Pencil Yam, merne arlatyeye (Arrernte)
-- Vigna lanceolata var. filiformis
-- Vigna lanceolata var. lanceolata
-- Vigna lanceolata var. latifolia
Vigna luteola
Vigna marina (Burm.f.) Merr. – beach pea, mohihihi, nanea (Hawaiian)
Vigna maritima
Vigna mungo – Urad Bean, Black Matpe Bean, Black Gram, White Lentil, "black lentil"
Vigna o-wahuensis Vogel – Hawaii Wild Bean
Vigna parkeri
Vigna radiata – Mung Bean, Green Gram, Golden Gram, Mash Bean, Green Soy
Vigna speciosa (Kunth) Verdc. – Wondering Cowpea
Vigna subterranea – Bambara Groundnut, Jugo Bean, njugumawe (Swahili)
Vigna trilobata (L.) Verdc. – Jungle Mat Bean, African Gram, Three-lobe-leaved Cowpea
Vigna umbellata – Ricebean, "red bean"
Vigna unguiculata – Cowpea, Crowder Pea, Southern Pea, Southern Field Pea
-- Vigna unguiculata ssp. cylindrica – Katjang
-- Vigna unguiculata ssp. dekindtiana – Wild Cowpea, African Cowpea, Ethiopian Cowpea
-- Vigna unguiculata ssp. sesquipedalis – Yardlong Bean, Long-podded Cowpea, Asparagus Bean, Snake Bean, Chinese Long Bean
-- Vigna unguiculata ssp. unguiculata – Black-eyed Pea, Black-eyed Bean
Vigna vexillata (L.) A.Rich. – Zombi Pea
-- Vigna vexillata var. angustifolia
-- Vigna vexillata var. youngiana
--- End List ---
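For readers tracking the nesting in the list above, the genus → species → subspecies structure can be held in a small mapping. This is purely an illustrative data-structure sketch using a few entries copied from the list, not a botanical authority:

```python
# Illustrative sketch: a few Vigna entries from the list above,
# keyed species -> common names and subspecies. Not exhaustive.
vigna = {
    "unguiculata": {
        "common": ["Cowpea", "Crowder Pea", "Southern Pea"],
        "subspecies": {
            "cylindrica": ["Katjang"],
            "sesquipedalis": ["Yardlong Bean", "Asparagus Bean"],
            "unguiculata": ["Black-eyed Pea", "Black-eyed Bean"],
        },
    },
    "radiata": {"common": ["Mung Bean", "Green Gram"], "subspecies": {}},
    "mungo": {"common": ["Urad Bean", "Black Gram"], "subspecies": {}},
}

# Every name that denotes some form of V. unguiculata:
names = vigna["unguiculata"]["common"] + [
    n for sub in vigna["unguiculata"]["subspecies"].values() for n in sub
]
print(names)
```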
On the other hand, i really really really enjoy reading / studying Nasrallah's glossary, despite its potential problems, since in it she draws information not just from Ibn Sayyar's compendium, but also from a wide range of other medieval writers in Arabic on agriculture, medicine, and trade, as well as cuisine; and many of these books are not available in English or other Western European languages.
I'm currently on a quest for the books that have been translated into French and Spanish, so i can read them. I studied Arabic for about half a year and will probably go back to study some more. I now own a copy of the 13th c. "Kanz al-fawa'id fi tanwi' al-mawa'id", in Arabic transcribed and annotated by David Waines and Manuela Marin, but the pages haven't been cut and i am shy to start slashing them open. Paulina B. Lewicka, in her book "Food and Foodways of Medieval Cairenes" which i'm currently reading, frequently refers to recipes in the Kanz, but it hasn't been translated. So once i get over my fear of biblio-abuse, i want to translate some recipes. There appear to be some very tasty cheese recipes in it...
More languages, more better!
Urtatim (that's oor-tah-TEEM)
the persona formerly known as Anahita
Date: Wed, 14 Aug 2013 00:26:59 -0400 (EDT)
From: JIMCHEVAL at aol.com
To: sca-cooks at lists.ansteorra.org
Subject: Re: [Sca-cooks] Black-eyed peas recipes?
To a French speaker, all of these would be haricots.
It's worth pointing out, I think, that the main words for beans used to be
fasiolus (phaseolus) and faba. The word haricoq - as students of French
medieval food will know - originally applied to a mutton dish:
"To make haricoq, take sheep bellies and brown them on the grill. When they are browned, cut up them into pieces, and put in a pot. Take peeled onions, and chop them up fine. Put in the pot with the meat. Take white ginger, cinnamon and assorted spices, that is, clove and seed. Moisten with verjuice and add to the pot. Salt to taste."
Note that there are no greens in this recipe, though the TLF says that the
dish was later made with string beans. The name apparently referred originally to something cut up:
". 1 déverbal de l'anc. verbe harigoter « déchiqueter, mettre en lambeaux »
(1176-81, CHR. DE TROYES, Chevalier Lion, éd. M. Roques, 831), lequel
est prob. un dér. en -oter* (cf. tapoter) de l'a. b. frq. *hariôn « gâcher »,
prononcé *harijôn (d'où l'all. verheeren « dévaster, détruire ») et
entré en Gallo-Romania sous la forme *harigôn. Hericot est peut-être dû à
l'infl. d'écot* « rameau élagué imparfaitement, chicot d'arbuste », le
rapprochement de ces deux mots s'expliquant sans doute par le fait que la viande du haricot de mouton est découpée en morceaux irréguliers."
But as a word for beans, it came along fairly late.
Faba is less complicated, vicia faba having been found often in
archeological digs.
Jim Chevallier
Date: Wed, 30 Oct 2013 11:32:14 -0400 (EDT)
From: JIMCHEVAL at aol.com
To: sca-cooks at lists.ansteorra.org
Subject: [Sca-cooks] Phaseolus et al
Back in April we had a discussion of the various meanings of "bean" in period terminology. I was thinking at the time there must be images we could consult.
It turns out there are. Dalechamp's book on plants came out just as people
were becoming aware of the Americas and has a chapter on the European
version (in the Latin edition) and another on the various "foreign" versions (in the French edition):
Historia generalis plantarum...
By Jacques Dalechamps
472 Phasiolvs Lib IV: Cap. XLIX
Histoire generale des plantes, contenant XVIII. livres egalement departis
en deux tomes : tirée de l'exemplaire latin de la bibliotheque de Me
Jacques Dalechamp, puis faite françoise par Me Jean des Moulins ... avec un
indice ... ensemble les tables des noms en diverses langues. Derniere edition,
reveuë, corrigée, & augmentée ... & illustrée...
Auteur : Dalechamps, Jacques (1513-1588)
735 Phasiol d'Indie, du Bresil, etc.
The one glitch is that I cannot make out any edible-sized seeds in the
European variety. But maybe someone else can do better.
Oh, and if you download Google's version, be warned that it has a quirk -
the scanned pages themselves are a reasonable size, but for some reason they
have been scanned onto an ENORMOUS background.
Jim Chevallier
<the end> | http://www.florilegium.org/files/FOOD-VEGETABLES/beans-msg.html | CC-MAIN-2017-47 | refinedweb | 22,363 | 72.05 |
On Wed, Mar 4, 2009 at 8:43 AM, Chris Anderson <jchris@apache.org> wrote:
> On Wed, Mar 4, 2009 at 8:34 AM, Jason Davies <jason@jasondavies.com> wrote:
>> I also prefer _render. How about doing the analogous to what we do for JSON
>> docs and views, i.e. something like:
>>
>> /db/_design/foo/_render/renderfun/docid
>> /db/_design/foo/_render/renderfun/_view/viewname
>>
>
> I do believe this works, but I'm not convinced it is more elegant than
> having one name for rendering views and one name for rendering
> documents. For one thing, it doesn't take advantage of the
> [httpd_design_handlers] extension point, and for another, it's just
> plain long!
>
> Not totally against it, but to me it's like making an origami
> paper-crane, and then adding an elephant leg to it.
>
That said, let me be clear that I'm flexible and if a consensus
emerges that something that doesn't fit the httpd_design_handlers
extension point is preferred, I'm happy to help change it to that.
Another disadvantage to the deeper URLs required by stacking the doc
and view rendering namespaces is that links from rendered views to
rendered docs start to look like "../../../docrenderfun/docid"
Chris
--
Chris Anderson
The Gmail API uses Thread resources to group email replies with their original message into a single conversation or thread. This allows you to retrieve all messages in a conversation, in order, making it easier to have context for a message or to refine search results.
Like messages, threads may also have labels applied to them. However, unlike messages, threads cannot be created, only deleted. Messages can, however, be inserted into a thread.
Retrieving threads
Threads provide a simple way of retrieving messages in a conversation in order. By listing a set of threads you can choose to group messages by conversation and provide additional context. You can retrieve a list of threads using the threads.list method, or retrieve a specific thread with threads.get. You can also filter threads using the same query parameters as for the Message resource. If any message in a thread matches the query, that thread is returned in the result.
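For example, a thin wrapper over threads.list with the q parameter looks like this (the helper name and the sample query are illustrative, not part of the API):

```python
def list_matching_threads(service, query, user_id='me'):
    """Return the threads whose messages match a Gmail search query."""
    resp = service.users().threads().list(userId=user_id, q=query).execute()
    return resp.get('threads', [])

# e.g. list_matching_threads(service, 'from:alice@example.com is:unread')
```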
The code sample below demonstrates how to use both methods in a sample that displays the most chatty threads in your inbox. The threads.list method fetches all thread IDs, then threads.get grabs all messages in each thread. For those with 3 or more replies, we extract the Subject line and display the non-empty ones plus the number of messages in the thread. You'll also find this code sample featured in the corresponding DevByte video.
Python
def show_chatty_threads(service, user_id='me'):
    threads = service.users().threads().list(userId=user_id).execute().get('threads', [])
    for thread in threads:
        tdata = service.users().threads().get(userId=user_id, id=thread['id']).execute()
        nmsgs = len(tdata['messages'])
        if nmsgs > 2:  # skip if <3 msgs in thread
            msg = tdata['messages'][0]['payload']
            subject = ''
            for header in msg['headers']:
                if header['name'] == 'Subject':
                    subject = header['value']
                    break
            if subject:  # skip if no Subject line
                print('- %s (%d msgs)' % (subject, nmsgs))
Adding drafts and messages to threads
If you are sending or migrating messages that are a response to another email or part of a conversation, your application should add that message to the related thread. This makes it easier for Gmail users who are participating in the conversation to keep the message in context.
A draft can be added to a thread as part of creating, updating, or sending a draft message. You can also add a message to a thread as part of inserting or sending a message.
In order to be part of a thread, a message or draft must meet the following criteria:
- The requested threadId must be specified on the Message or Draft.Message you supply with your request.
- The References and In-Reply-To headers must be set in compliance with the RFC 2822 standard.
- The Subject headers must match.
Take a look at the creating a draft or sending a message examples. In both cases, you would simply add a threadId key paired with a thread ID to a message's metadata, the message object.
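As a sketch, a reply that satisfies the criteria above can be assembled like this before passing it to users.messages.send (the addresses, IDs, and helper name here are made up for illustration):

```python
import base64
from email.mime.text import MIMEText

def make_threaded_reply(thread_id, orig_msg_id, subject, body,
                        to='recipient@example.com'):
    """Build a Gmail API message body that joins an existing thread."""
    mime = MIMEText(body)
    mime['To'] = to
    mime['Subject'] = subject            # must match the thread's Subject
    mime['In-Reply-To'] = orig_msg_id    # RFC 2822 threading headers
    mime['References'] = orig_msg_id
    raw = base64.urlsafe_b64encode(mime.as_bytes()).decode()
    return {'raw': raw, 'threadId': thread_id}

# The result is passed to users.messages.send, e.g.:
# service.users().messages().send(userId='me',
#     body=make_threaded_reply('189ab...', '<msg-id@example>', 'Re: Hi', 'Thanks!')
# ).execute()
```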
Handle and raise events
Events in .NET are based on the delegate model. This article describes the major components of the delegate model, how to consume events in applications, and how to implement events in your code.
An event is a message sent by an object to signal the occurrence of an action. The object that raises the event is called the event sender; the object that captures the event and responds to it is called the event receiver. The event sender doesn't know which object or method will receive (handle) the events it raises. The event is typically a member of the event sender; for example, the Click event is a member of the Button class, and the PropertyChanged event is a member of the class that implements the INotifyPropertyChanged interface.
To define an event, you use the C# event or the Visual Basic Event keyword in the signature of your event class, and specify the type of delegate for the event. Delegates are described in the next section.
Typically, to raise an event, you add a method that is marked as protected and virtual (in C#) or Protected and Overridable (in Visual Basic). Name this method OnEventName; for example, OnDataReceived. The method should take one parameter that specifies an event data object, which is an object of type EventArgs or a derived type. You provide this method to enable derived classes to override the logic for raising the event. A derived class should always call the OnEventName method of the base class to ensure that registered delegates receive the event.
The following example shows how to declare an event named ThresholdReached. The event is associated with the EventHandler delegate and raised in a method named OnThresholdReached.
class Counter
{
    public event EventHandler ThresholdReached;

    protected virtual void OnThresholdReached(EventArgs e)
    {
        EventHandler handler = ThresholdReached;
        handler?.Invoke(this, e);
    }

    // provide remaining implementation for the class
}
Public Class Counter
    Public Event ThresholdReached As EventHandler

    Protected Overridable Sub OnThresholdReached(e As EventArgs)
        RaiseEvent ThresholdReached(Me, e)
    End Sub

    ' provide remaining implementation for the class
End Class
Delegates
A delegate is a type that holds a reference to a method. A delegate is declared with a signature that shows the return type and parameters for the methods it references, and it can hold references only to methods that match its signature. A delegate is thus equivalent to a type-safe function pointer or a callback. A delegate declaration is sufficient to define a delegate class.
Delegates have many uses in .NET. In the context of events, a delegate is an intermediary (or pointer-like mechanism) between the event source and the code that handles the event. You associate a delegate with an event by including the delegate type in the event declaration, as shown in the example in the previous section. For more information about delegates, see the Delegate class.
.NET provides the EventHandler and EventHandler<TEventArgs> delegates to support most event scenarios. Use the EventHandler delegate for all events that do not include event data. Use the EventHandler<TEventArgs> delegate for events that include data about the event. These delegates have no return type value and take two parameters (an object for the source of the event, and an object for event data).
Delegates are multicast, which means that they can hold references to more than one event-handling method. For details, see the Delegate reference page. Delegates provide flexibility and fine-grained control in event handling. A delegate acts as an event dispatcher for the class that raises the event by maintaining a list of registered event handlers for the event.
For scenarios where the EventHandler and EventHandler<TEventArgs> delegates do not work, you can define a delegate. Scenarios that require you to define a delegate are very rare, such as when you must work with code that does not recognize generics. You mark a delegate with the C# delegate and Visual Basic Delegate keyword in the declaration. The following example shows how to declare a delegate named ThresholdReachedEventHandler.
public delegate void ThresholdReachedEventHandler(object sender, ThresholdReachedEventArgs e);
Public Delegate Sub ThresholdReachedEventHandler(sender As Object, e As ThresholdReachedEventArgs)
Event data
Data that is associated with an event can be provided through an event data class. .NET provides many event data classes that you can use in your applications. For example, the SerialDataReceivedEventArgs class is the event data class for the SerialPort.DataReceived event. .NET follows a naming pattern of ending all event data classes with EventArgs. You determine which event data class is associated with an event by looking at the delegate for the event. For example, the SerialDataReceivedEventHandler delegate includes the SerialDataReceivedEventArgs class as one of its parameters.
The EventArgs class is the base type for all event data classes. EventArgs is also the class you use when an event does not have any data associated with it. When you create an event that is only meant to notify other classes that something happened and does not need to pass any data, include the EventArgs class as the second parameter in the delegate. You can pass the EventArgs.Empty value when no data is provided. The EventHandler delegate includes the EventArgs class as a parameter.
When you want to create a customized event data class, create a class that derives from EventArgs, and then provide any members needed to pass data that is related to the event. Typically, you should use the same naming pattern as .NET and end your event data class name with EventArgs.
The following example shows an event data class named ThresholdReachedEventArgs. It contains properties that are specific to the event being raised.
public class ThresholdReachedEventArgs : EventArgs
{
    public int Threshold { get; set; }
    public DateTime TimeReached { get; set; }
}
Public Class ThresholdReachedEventArgs
    Inherits EventArgs
    Public Property Threshold As Integer
    Public Property TimeReached As DateTime
End Class
Event handlers
To respond to an event, you define an event handler method in the event receiver. This method must match the signature of the delegate for the event you are handling. In the event handler, you perform the actions that are required when the event is raised, such as collecting user input after the user clicks a button. To receive notifications when the event occurs, your event handler method must subscribe to the event.
The following example shows an event handler method named c_ThresholdReached that matches the signature for the EventHandler delegate. The method subscribes to the ThresholdReached event.
class Program
{
    static void Main()
    {
        var c = new Counter();
        c.ThresholdReached += c_ThresholdReached;

        // provide remaining implementation for the class
    }

    static void c_ThresholdReached(object sender, EventArgs e)
    {
        Console.WriteLine("The threshold was reached.");
    }
}
Module Module1
    Sub Main()
        Dim c As New Counter()
        AddHandler c.ThresholdReached, AddressOf c_ThresholdReached
        ' provide remaining implementation for the class
    End Sub

    Sub c_ThresholdReached(sender As Object, e As EventArgs)
        Console.WriteLine("The threshold was reached.")
    End Sub
End Module
Static and dynamic event handlers
.NET allows subscribers to register for event notifications either statically or dynamically. Static event handlers are in effect for the entire life of the class whose events they handle. Dynamic event handlers are explicitly activated and deactivated during program execution, usually in response to some conditional program logic. For example, they can be used if event notifications are needed only under certain conditions or if an application provides multiple event handlers and run-time conditions define the appropriate one to use. The example in the previous section shows how to dynamically add an event handler. For more information, see Events (in Visual Basic) and Events (in C#).
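For instance, a dynamically registered handler can later be detached with the -= operator (C#) so that further events are ignored. This sketch reuses the Counter class from the earlier examples:

```csharp
var c = new Counter();
EventHandler handler = (sender, e) => Console.WriteLine("Threshold reached.");

c.ThresholdReached += handler;   // activate the handler
// ... conditional program logic runs while the handler is active ...
c.ThresholdReached -= handler;   // deactivate it; subsequent events are not delivered
```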
Raising multiple events
If your class raises multiple events, the compiler generates one field per event delegate instance. If the number of events is large, the storage cost of one field per delegate may not be acceptable. For those situations, .NET provides event properties that you can use with another data structure of your choice to store event delegates.
Event properties consist of event declarations accompanied by event accessors. Event accessors are methods that you define to add or remove event delegate instances from the storage data structure. Note that event properties are slower than event fields, because each event delegate must be retrieved before it can be invoked. The trade-off is between memory and speed. If your class defines many events that are infrequently raised, you will want to implement event properties. For more information, see How to: Handle Multiple Events Using Event Properties.
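A common way to implement this pattern is sketched below with System.ComponentModel.EventHandlerList as the storage data structure; the class layout and key names are illustrative, not taken from the article:

```csharp
using System;
using System.ComponentModel;

class ChattyComponent
{
    // One shared store instead of one delegate field per event
    private readonly EventHandlerList events = new EventHandlerList();
    private static readonly object thresholdKey = new object();

    public event EventHandler ThresholdReached
    {
        add { events.AddHandler(thresholdKey, value); }       // event accessor: add
        remove { events.RemoveHandler(thresholdKey, value); } // event accessor: remove
    }

    protected virtual void OnThresholdReached(EventArgs e)
    {
        // The delegate must be retrieved before it can be invoked,
        // which is why event properties are slower than event fields.
        var handler = (EventHandler)events[thresholdKey];
        handler?.Invoke(this, e);
    }
}
```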
Recently, I brought up Qt 5.5 on a Freescale i.MX35, which has an ARM11 CPU but no OpenGL support. Despite the missing OpenGL, I wanted to write the HMI with QML. The additional challenge was that the cross-compilation toolchain was 32-bit, but I wanted to use my standard 64-bit Ubuntu. I’ll show in this post how to set up the 32-bit toolchain and rootfs on my 64-bit Ubuntu machine, how to configure and build Qt 5.5 from the sources, and how to run a hello-world application written in QML on the i.MX35.
The Challenges
I was recently tasked to bring up Qt 5.5 on Wachendorff’s display computer OPUS A3s and build a QML demo for it. The OPUS A3s is based on the Freescale i.MX35 system on chip (SoC), which has an ARM11 CPU but no OpenGL acceleration.
My first thought – and the first challenge – was that QML requires OpenGL to work and that the QPA plugin for software rendering is only available under the commercial Qt license. But then my personal Qt historian Dario Freddi reminded me that in the olden days QML (Qt 4.8) used to run just with the software renderer and that this feature still lives on in Qt 5.5 with QtQuick 1 and QtDeclarative. In short, QtQuick 2 requires OpenGL but QtQuick 1 does not. Phew, no Qt commercial license needed and lots of euros saved!
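To make the distinction concrete: a scene written against QtQuick 1 imports the 1.x module and is painted by the raster (software) engine, so it runs without any GPU. A minimal sketch (the file contents are mine, not from the original app):

```qml
// main.qml - note the 1.1 import; "import QtQuick 2.0" would require OpenGL
import QtQuick 1.1

Rectangle {
    width: 320; height: 240
    color: "steelblue"

    Text {
        anchors.centerIn: parent
        text: "Hello i.MX35"
        color: "white"
    }
}
```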
The second challenge was that Wachendorff provided a toolchain with 32-bit tools and a 32-bit Ubuntu 10.10. As Ubuntu 10.10 is not supported any more, I wanted to use at least the latest 64-bit Ubuntu with long-term support (v14.04) or, even better, the very latest 64-bit Ubuntu (v15.10). Unfortunately, the 32-bit versions of the gcc compilers do not simply run on a 64-bit system. This can be solved by installing some libraries including the standard C and C++ libraries on the 64-bit Ubuntu development machine. I describe the installation of the toolchain and root file system in the section Setting Up the Development Environment.
The third challenge is to create an "mkspec" or "make spec" for a new device, the i.MX35, and to configure and build Qt 5.5 from the GitHub sources for this device. The make specs of the Raspberry Pi (another ARM11 SoC) and of the i.MX53 (another Freescale SoC) will be a good reference for the make spec. Section Configuring and Building Qt 5.5 from Sources gives the details.
The fourth and final challenge is to run a sample QML app on the target device, the OPUS A3s. We must tell the app which device files send the events for the function keys, the rotary knob and the touches, and whether to rotate the touch coordinate system. I explain these things in section Building and Running a QML App on the Target.
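On a typical Qt 5.5 embedded setup, telling the app about input devices boils down to a handful of environment variables read by the linuxfb and evdev plugins. The device nodes below are assumptions for illustration; check /proc/bus/input/devices on your board for the real ones:

```shell
# Launch script sketch for the target (device paths and app name are examples only)
export QT_QPA_PLATFORM=linuxfb                                          # redundant if linuxfb is the configured default
export QT_QPA_EVDEV_TOUCHSCREEN_PARAMETERS=/dev/input/event0:rotate=90  # touch controller, rotated 90 degrees
export QT_QPA_EVDEV_KEYBOARD_PARAMETERS=/dev/input/event1               # function keys and rotary knob
./my-qml-app
```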
Although my target device is a Wachendorff OPUS A3s with an i.MX35 SoC, most of my explanations apply to every SoC. It does not make much of a difference whether we bring up Qt on a Freescale i.MX35/i.MX53/i.MX6, Texas Instruments Jacinto 5/6 or an Nvidia Tegra 2/3. Once we have understood the essential steps, it just takes small adjustments like the path to the C++ compiler, the sysroot path, the Qt modules that can or cannot be built for the target device or the environment variables needed for running the QML app.
Above is a photo of the home screen of the harvester HMI I have been building for the OPUS A3s (i.MX35). The needles of the two dials change every 50ms (20 times per second). All the other information – the diesel gauge, the gear info, the speed and the up to five warning indicators at the top – change once a second. Although the i.MX35 does not have OpenGL acceleration, the CPU load comes in at an average of only 27% – with peaks up to 35%. This is pretty good for a low-end device like the i.MX35.
Setting Up the Development Environment
Installing Packages Needed for Building Qt
My development machine is a 64-bit Ubuntu 15.10 virtual machine hosted by VmWare Fusion on my Macbook Pro. A well-equipped Windows laptop or PC with VmWare or VirtualBox would do as well. When starting with a fresh Ubuntu installation, many packages needed for development are missing.
A good starting point is to install all the packages needed to build Qt from sources. As Qt is quite a kraken, this covers most of what we need more often than not. And – we need all the Qt dependencies anyway, because we will build Qt both for our Ubuntu machine and for the target hardware. The documentation page Building Qt 5 from Git lists the required packages neatly.
Ubuntu has a nifty command to install all the packages needed to build a package – qt5-default in our case:
$ sudo apt-get build-dep qt5-default
The next command pulls in tools needed for building software. The perl and python packages are typically already installed in a standard Ubuntu image. Then, they get updated by this command.
$ sudo apt-get install build-essential perl python git
Especially if you have been around Qt for some years, it is hard to believe that these two innocuous commands do all the heavy lifting. But they do! No more cumbersome figuring out which packages are needed.
If we want to build some special Qt modules like WebKit, WebEngine and Multimedia or if we need the latest version of some packages like XCB (the default QPA plugin for X11), we must install some more packages. I don’t care about web things at the moment, but the other two could be interesting.
// For XCB:
$ sudo apt-get install "^libxcb.*" libx11-xcb-dev libglu1-mesa-dev libxrender-dev libxi-dev

// For Multimedia:
$ sudo apt-get install libasound2-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev
Installing the Toolchain and Root File System
Wachendorff, the manufacturer of the OPUS A3s display computer, gives us a tarball linux-opusa3-2.0.3.tgz. When we unpack this tarball, we find a tarball for the toolchain in the subdirectory linux-opusa3-2.0.3/toolchain and a tarball for the root file system in linux-opusa3-2.0.3/rootfs. These two tarballs are all we need.
First, we unpack the tarball for the toolchain.
$ cd /
$ sudo tar xzf /path/to/linux-opusa3-2.0.3/toolchain/tc_arm_gcc-4.6.2-glibc-2.13-linaro-multilib-2011.12-1.tar.gz
This unpacks the toolchain into the directory /opt/freescale. As I work with different SoCs, different toolchains and even different versions of the same toolchain, I renamed this directory /opt/imx35-gcc-4.6.2-glibc-2.13 and created a symbolic link /opt/imx35 to it for convenience.
$ cd /opt
$ sudo mv freescale imx35-gcc-4.6.2-glibc-2.13
$ sudo ln -s imx35-gcc-4.6.2-glibc-2.13 imx35
From now on, the toolchain with the C++ compiler and the linker is located at /opt/imx35.
Second, we unpack the tarball for the root file system.
$ mkdir -p ~/Wachendorff/OpusA3
$ cd ~/Wachendorff/OpusA3
$ tar xzf /path/to/linux-opusa3-2.0.3/rootfs/rootfs-opusa3_2.0.3.tar.gz
This unpacks the root file system into the directory ~/Wachendorff/OpusA3/rootfs. The root file system contains all the files that are needed to run our applications on the target device.
By the way, it does not matter where we install the toolchain and the root file system. We only need to pass these two locations to Qt's configure command.
Running a 32-bit GCC on a 64-bit Ubuntu Machine
The installed toolchain comes with 32-bit executables made for cross-compiling from an i386 development PC to an ARM11 target device. If we try to cross-compile a simple hello-world app
#include <iostream>

int main(int argc, char** argv)
{
    std::cout << "Hello World!" << std::endl;
}
with the command
$ /path/to/arm-fsl-linux-gnueabi-g++ -o hello main.cpp
the answer will be something along these lines:
bash: /path/to/arm-fsl-linux-gnueabi-g++: No such file or directory
The error message is not really helpful. It took me some time to understand that I had run into a 32-bit versus 64-bit problem. Running ldd on the executable of the cross-compiler finally revealed that the libraries libc.so.6 and /lib/ld-linux.so.2 were missing. The second one gives away that the linker was looking for a 32-bit version, because the 64-bit version is /lib64/ld-linux-x86-64.so.2.
After a bit of "duckduckgoing", I came across the answer on the AskUbuntu forum. We must install libc6, libstdc++6 and libncurses5 for the i386 architecture. The following three commands do the trick.
$ sudo dpkg --add-architecture i386
$ sudo apt-get update
$ sudo apt-get install libc6:i386 libncurses5:i386 libstdc++6:i386
Now, cross-compiling our hello-world app works fine.
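A quick sanity check that the output really is an ARM binary: the ELF header's e_machine field encodes the target architecture. Here is a small, self-contained sketch (my own helper, not part of the toolchain):

```python
import struct

def elf_machine(path):
    """Return a human-readable architecture name for an ELF file."""
    with open(path, 'rb') as f:
        header = f.read(20)
    if header[:4] != b'\x7fELF':
        raise ValueError('not an ELF file')
    # byte 5: 1 = little endian, 2 = big endian; e_machine is a u16 at offset 18
    endian = '<' if header[5] == 1 else '>'
    (machine,) = struct.unpack_from(endian + 'H', header, 18)
    return {0x28: 'ARM', 0x03: 'x86', 0x3e: 'x86-64'}.get(machine, hex(machine))

# elf_machine('hello') should report 'ARM' for our cross-compiled binary
```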
Configuring and Building Qt 5.5 from Sources
Getting the Qt Sources from Git
Getting the Qt source code from Git is described in the Qt documentation at Building Qt 5 from Git - Getting the source code.
We clone the top-level Qt5 git repository in a directory of our choice, say ~/Qt.
$ mkdir ~/Qt
$ cd ~/Qt
$ git clone
We clone all the Qt submodules like qtbase, qt3d and qtwebengine by running the init-repository script. As we do not need qtwebkit, we skip cloning it by the option --no-webkit.
$ cd ~/Qt/qt5
$ perl init-repository --no-webkit
Now, the complete Qt5 repository is locally available in our Ubuntu machine. Qt 5.5.1 was the latest released version of Qt at the time of writing. So, we check out the tag "v5.5.1" into the new branch qt-5.5.1.
$ cd ~/Qt/qt5
$ git checkout -b qt-5.5.1 v5.5.1
In ~/Qt/qt5, we have the versions of the Qt sources that went into the Qt release v5.5.1.
Instead of cloning the Qt sources from the git repository, we could download the source tarball for Qt 5.5.1 from the Download page. However, I have found out over the years that nearly every project needs some changes, extensions or bug fixes to Qt. I prefer to have these modifications under version control, which is easy with the git repository.
Defining the Make Spec and Configure Command Iteratively
The keyword in the section title is "iteratively". I have never been able to come up with a make spec and configure command first time right - despite having had plenty of practice over the years. Creating a make spec and figuring out the right options for the configure command are an iterative process.
First Iteration
We find the make specs for several common SoCs in the directory ~/Qt/qt5/qtbase/mkspecs/devices. There is no make spec for the i.MX35 though. The closest SoC is the i.MX53, the big brother of the i.MX35. Its make spec ~/Qt/qt5/qtbase/mkspecs/devices/linux-imx53qsb-g++/qmake.conf looks as follows.
# qmake.conf for i.MX53
include(../common/linux_device_pre.conf)

QMAKE_LIBS_EGL += -lEGL
QMAKE_LIBS_OPENGL_ES2 += -lGLESv2 -lEGL
QMAKE_LIBS_OPENVG += -lOpenVG -lEGL

IMX5_CFLAGS = -march=armv7-a -mfpu=neon -DLINUX=1 -DEGL_API_FB=1 -Wno-psabi
QMAKE_CFLAGS += $$IMX5_CFLAGS
QMAKE_CXXFLAGS += $$IMX5_CFLAGS

include(../common/linux_arm_device_post.conf)
load(qt_config)
As the i.MX35 does not support OpenGL, we can remove the three lines about QMAKE_LIBS_*.
The i.MX35 is part of the ARM11 family. A look at the list of ARM microarchitectures reveals that the ARM11 family uses the ARMv6 architecture. In contrast to the i.MX35, the i.MX53 with a Cortex-A8 core uses the ARMv7-A architecture. Hence, the CFLAGS for the i.MX35 differ from those of the i.MX53.
At this point, we are looking for a sample make spec based on the ARMv6 architecture. We either know that the Raspberry Pi is part of the ARM11 family as well or we just search all make specs for "armv6".
$ cd ~/Qt/qt5/qtbase/mkspecs/devices
$ find . -name "qmake.conf" | xargs grep -i armv6
./linux-rasp-pi-g++/qmake.conf:    -march=armv6zk \
The relevant lines of the Raspberry Pi's make spec read as follows:

QMAKE_CFLAGS += \
    -marm -march=armv6zk -mtune=arm1176jzf-s \
    -mfpu=vfp -mabi=aapcs-linux
We figure out the correct values for the above machine options by checking out the datasheet of the i.MX35. Section "2.4 ARM11 Microprocessor Core" gives us the needed information. The i.MX35 uses an ARM1136JF-S core (hence, -mtune=arm1136jf-s), which has a vector floating point co-processor (hence, -mfpu=vfp). Looking up the ARM1136JF-S core in the list of ARM microarchitectures tells us that its architecture is neither ARMv6Z (no TrustZone support) nor ARMv6K (no multi-core) but simply ARMv6 (hence, -march=armv6). Using the "ARM Architecture Procedure Call Standard" (AAPCS) for Linux as the ABI sounds reasonable (hence, -mabi=aapcs-linux).
We can check the admissible values of the machine options on the page "ARM Options" of the GCC documentation. All is fine!
We have just finished the first version of qmake.conf of our make spec for the i.MX35.
# qmake.conf for i.MX35
include(../common/linux_device_pre.conf)

IMX35_CFLAGS += \
    -marm \
    -mfpu=vfp \
    -mtune=arm1136jf-s \
    -march=armv6 \
    -mabi=aapcs-linux

QMAKE_CFLAGS += $$IMX35_CFLAGS
QMAKE_CXXFLAGS += $$IMX35_CFLAGS

include(../common/linux_arm_device_post.conf)
load(qt_config)
Besides the qmake.conf file, the make spec contains another file, qplatformdefs.h. This one is easy as it is the same for all Linux devices. So, we can simply copy it, say, from the make spec of the i.MX53. It contains one line:
#include "../../linux-g++/qplatformdefs.h"
All we need now is a configure command. I'll show you my first version and show you how I came up with all the options.
$ ../qt5/configure -v -opensource -confirm-license -release \
    -prefix /opt/qt-5.5.1-imx35 \
    -device linux-imx35-g++ \
    -sysroot ~/Wachendorff/OpusA3/rootfs \
    -device-option CROSS_COMPILE=/opt/imx35/.../bin/arm-fsl-linux-gnueabi- \
    -linuxfb -qpa linuxfb -no-eglfs -no-directfb -no-kms -no-opengl \
    -nomake examples -nomake tests -skip <unneeded modules> ...
I always run configure in verbose mode (-v) to get more information why configure failed. I use Qt under LGPLv3 (-opensource). I don't want configure to stop and ask me to confirm the license (-confirm-license).
I typically start with a release build (-release), because it builds much faster than a debug build and it shows how fast our HMI runs on the target device. Building Qt as fast as possible is paramount, because the Qt build is likely to fail. If the build fails, we want it to fail fast. The option -prefix /opt/qt-5.5.1-imx35 specifies that Qt will be installed in the directory /opt/qt-5.5.1-imx35 on the target device.
The option -device linux-imx35-g++ tells the configure command to use the make spec we just created.
The options -sysroot and -device-option tell the configure command and later qmake commands where to find the root file system and the toolchain, respectively. The value of -sysroot is the directory ~/Wachendorff/OpusA3/rootfs, where we unpacked the tarball with the root file system. The option -device-option defines the environment variable CROSS_COMPILE as the prefix of the path to tools like gcc, g++, objcopy and strip in the toolchain. These tools are located in the directory

/opt/imx35/usr/local/gcc-4.6.2-glibc-2.13-linaro-multilib-2011.12/fsl-linaro-toolchain/bin/

and start with the prefix arm-fsl-linux-gnueabi-. For example,

/opt/imx35/usr/local/gcc-4.6.2-glibc-2.13-linaro-multilib-2011.12/fsl-linaro-toolchain/bin/arm-fsl-linux-gnueabi-g++

is the C++ compiler for cross-compiling Qt.
CROSS_COMPILE is used in the configuration file linux_device_pre.conf included by our make spec file. It used to define qmake's tool variables:
QMAKE_CC = $${CROSS_COMPILE}gcc QMAKE_CXX = $${CROSS_COMPILE}g++ # More ...
We should never hard-code the root-fs path or the cross-compile path prefix into any configuration file or any Qt project file. As true believers of the DRY principle ("don't repeat yourself"), we should define these paths only once - in the options of the configure command. If we need to define more environment variables for the configuration of Qt, we can add another
-device-option option with the definition of another environment variable.
The options
-linuxfb -qpa linuxfb -no-eglfs -no-directfb -no-kms -no-opengl
specify that we want to use the Linux framebuffer (
-linuxfb) as the window-system backend (a.k.a. QPA plugin) and that the Linux framebuffer is the default QPA plugin (
-qpa linuxfb). Defining the default QPA plugin saves from passing the command-line option
-qpa linuxfb when we start our application. We do not build the QPA plugins eglfs, directfb and kms and turn off OpenGL support, as none of these is supported by our target device.
The final three lines of options
The final lines of options exclude every feature and module that we don't need for the initial run of our application. Remember that we want our build to fail fast - and fail it will. Once we have a working build and a running sample app, we can add features and modules as we need them.
Now we are finally ready to run the configure command. We will perform a shadow build outside the Qt sources, because it is pretty likely that we will build multiple versions of Qt (for example, a 64-bit version for our Ubuntu machine or a debug version for the i.MX35).
// Create directory for shadow build
$ cd ~/Qt
$ mkdir build-qt-5.5.1-imx35
$ cd build-qt-5.5.1-imx35
$ ../qt5/configure ...
Bad news! As expected the configuration command fails. All feature tests fail because the C++ compiler cannot find system headers. It cannot even find a header like stdio.h. Something is fundamentally wrong. Let us have a closer look at the first feature that fails.
Determining architecture... ()
/opt/imx35/.../bin/arm-fsl-linux-gnueabi-g++ \
  -c -pipe -marm -mfpu=vfp -mtune=arm1136jf-s -march=armv6 -mabi=aapcs-linux \
  -mfloat-abi=softfp --sysroot=/home/burkhard/Wachendorff/OpusA3/rootfs \
  -g -Wall -W -fPIC -I../../../../qt5/qtbase/config.tests/arch -I. \
  -I../../../../qt5/qtbase/mkspecs/devices/linux-imx35-g++ -o arch.o \
  ../../../../qt5/qtbase/config.tests/arch/arch.cpp
../../../../qt5/qtbase/config.tests/arch/arch.cpp:37:19: fatal error: stdio.h: No such file or directory
compilation terminated.
Makefile:207: recipe for target 'arch.o' failed
make: *** [arch.o] Error 1
Unable to determine architecture!
Searching for the file stdio.h in the root file system yields nothing. Searching for it in the toolchain yields three hits. The relevant hit is
/opt/imx35/.../arm-fsl-linux-gnueabi/multi-libs/default/usr/include/stdio.h
Running the command
/opt/imx35/.../bin/arm-fsl-linux-gnueabi-g++ -print-sysroot
yields
/opt/imx35/.../arm-fsl-linux-gnueabi/multi-libs/default/
as the standard sysroot of the compiler. By passing a different sysroot to the compiler, we override the standard sysroot. The compiler looks for system headers in the new sysroot. And we know that it cannot find these headers there. Let us test our theory and run just the failing test for determining the architecture again - but this time without the sysroot option.
$ cd ~/Qt/build-qt-5.5.1-imx35/qtbase/config.tests/arch
$ /opt/imx35/.../bin/arm-fsl-linux-gnueabi-g++ \
  -c -pipe -marm -mfpu=vfp -mtune=arm1136jf-s -march=armv6 -mabi=aapcs-linux \
  -mfloat-abi=softfp \
  -g -Wall -W -fPIC -I../../../../qt5/qtbase/config.tests/arch -I. \
  -I../../../../qt5/qtbase/mkspecs/devices/linux-imx35-g++ -o arch.o \
  ../../../../qt5/qtbase/config.tests/arch/arch.cpp
Now the compile command works. Tweaking the command line of a feature test in isolation is a common trick for solving configuration problems. So, add it to your bag of tricks.
We must tell configure somehow that it should not use our sysroot but the compiler's sysroot. We apply wishful thinking and hope that configure provides some option for that already.
$ cd ~/Qt/build-qt-5.5.1-imx35
$ ../qt5/configure -help | grep sysroot
 -sysroot <dir> ...... Sets <dir> as the target compiler's and qmake's sysroot and also sets pkg-config paths.
 -no-gcc-sysroot ..... When using -sysroot, it disables the passing of --sysroot to the compiler
And bingo! It worked. The second option
-no-gcc-sysroot is exactly what we wished for.
Second Iteration
We add the option
-no-gcc-sysroot to the configure command from the first iteration and try our luck again.
$ configure command runs to the end successfully. On first glance, the summary looks pretty reasonable. LinuxFB is listed as the only QPA backend. OpenGL is disabled. Evdev, which handles touch, keyboard and rotary-encoder output, is enabled. The other features are disabled or enabled as specified in the configure command.
On second glance, there are few minor things that need at least an explanation and possibly a fix. Here are the suspicious lines.
Image formats: GIF .................. yes (plugin, using bundled copy) JPEG ................. yes (plugin, using bundled copy) PNG .................. yes (in QtGui, using bundled copy) tslib .................. no zlib ................... yes (bundled copy)
Since the arrival of the Linux v3.x kernels, tslib is not needed any more to handle touch input. Touch input is now handled by evdev. Hence, tslib need not be built and the "no" is OK.
It is a bit strange that the Qt build wants to use the copies of the jpeg, png and zlib libraries bundled with Qt. It should use the system libraries, which are available on every proper Linux system. A quick search through our root file system shows that the Linux system provided by Wachendorff is proper. The headers and libraries exist.
With the
-no-gcc-sysroot option, we told g++ to ignore our sysroot, which points to our root file system. Obviously, the jpeg, png and zlib headers and libraries are not contained in the toolchain. As we do not want these libraries twice on the target system - once provided by the system and once by Qt, we make the configure command pick up the system versions. We do this by adding include and library search directories to our make spec. The modified qmake.conf looks as follows.
include(../common/linux_device_pre.conf) IMX35_CFLAGS += \ -marm \ -mfpu=vfp \ -mtune=arm1136jf-s \ -march=armv6 \ -mabi=aapcs-linux QMAKE_INCDIR += $$[QT_SYSROOT]/usr/include QMAKE_LIBDIR += $$[QT_SYSROOT]/usr/lib QMAKE_CFLAGS += $$IMX35_CFLAGS QMAKE_CXXFLAGS += $$IMX35_CFLAGS include(../common/linux_arm_device_post.conf) load(qt_config)
$$[QT_SYSROOT] refers to the value of the
-sysroot option defined in the configure command. It is time for the third iteration.
Third Iteration
We run the same configure command as for the second iteration. The command will use our modified qmake.conf file.
$ summary looks better now, but not yet perfect. The critical lines from the second iteration have changed as follows.
Image formats: GIF .................. yes (plugin, using bundled copy) JPEG ................. yes (plugin, using system library) PNG .................. yes (in QtGui, using bundled copy) tslib .................. yes zlib ................... yes (system library)
The enabling of tslib is collateral damage from adding the include directory in our root file system. If we want to disable tslib and gif, we can add the options
-no-tslib -no-gif to the configure command.
The png library remains stubborn. A quick look into the verbose output of configure shows that the linker complains:
$ /opt/.../bin/arm-fsl-linux-gnueabi-g++ -mfloat-abi=softfp -Wl,-O1 -o libpng libpng.o -L/home/burkhard/Wachendorff/OpusA3/rootfs/usr/lib -lpng /opt/.../arm-fsl-linux-gnueabi/bin/ld: warning: libz.so.1, needed by /home/.../OpusA3/rootfs/usr/lib/libpng.so, not found (try using -rpath or -rpath-link)
The executable
libpng links directly against
libpng.so, which links directly against
libz.so.1. Hence,
libpng links indirectly or implicitly against
libz.so.1. We could fix this problem quick and dirty by adding
-lz at the end of the linker command (after
-lpng). This would only solve the linking problem for
libz, but not for any other library linked against implicitly.
Fortunately, the error message gives us a hint how to solve this problem generally: "try using -rpath or -rpath-link". Whereas the option
-rpath is for finding libraries at runtime, the option
-rpath-link is for finding libraries at linktime. We insert the line
QMAKE_LFLAGS += -Wl,-rpath-link,$$[QT_SYSROOT]/usr/lib
after the definition of
QMAKE_LIBDIR into our qmake.conf file. It is time for the fourth iteration.
Fourth Iteration
We run configure with gif and tslib disabled on the modified make spec.
$-gif -no-tslib
Finally, the configure summary is perfect. The features gif and tslib are disabled and png uses the system library. It is high time to kick off the build.
$ cd ~/Qt/build-qt-5.5.1-imx35 $ make -j4
Surprise! The build succeeded. It is not unusual for the build to fail, because some headers or libraries cannot be found or because we forgot to skip some modules or to disable some features. Then we must fix the problem either in the make spec or in the configure command and re-configure and re-build Qt again.
Now we are good to install Qt.
$ make install
Qt is installed into <sysroot>/<prefix>, that is, into ~/Wachendorff/OpusA3/rootfs/opt/qt-5.5.1-imx35.
You can download the complete make spec from here. Just unpack the ZIP archive in the directory ~/Qt/qt5/qtbase/mkspecs/devices.
Building and Running a QML App on the Target Device
Connecting the Target Device with the Development Computer
We need a way to transfer files like Qt and the HelloWorld app from the development computer to the target device. Obviously, the OPUS A3 also needs power for that.
We stick the red plug "Clamp 15 Ignition" into the side of the red plug "Clamp 30 Plus". We stick the red plug "Clamp 30 Plus" into the Plus socket of the power supply and the black plug "Clamp 31 CarGND" into the Minus socket. The plug with the big cable bundle goes into the back of the OPUS A3. We connect the USB-to-serial adapter with the USB port of the computer and the RS232 cable of the OPUS A3. That's all the wiring we need.
We set the power supply to 12V and power up the OPUS A3. The result should look as shown in this photo.
As I have never managed to get a terminal emulator like putty or screen running in a Ubuntu virtual machine, I run it on my host computer, a Macbook Pro laptop. I start the Terminal application and run the screen terminal emulator in it.
$ screen /dev/cu.usbserial 115200
Then, the OPUS A3 asks me for the login name ("root") and the password, which we find in Wachendorff's developer documentation. Then, we see the prompt from the OPUS A3 system and can run any Linux commands. For example:
root@rosi ~$ uname -a Linux rosi 3.0.35-rt56-opusa3-2.0.3-1 #5 PREEMPT Tue Apr 7 08:42:30 CEST 2015 armv6l GNU/Linux
We will use a USB drive to transfer data from the development machine to the OPUS A3. It is not as elegant a solution as NFS but it always works. NFS needs a network connection over Ethernet, WLAN or USB. I haven't yet got around to set this up.
Setting Up QtCreator
We can download QtCreator from the Qt Download page. "Qt Creator 3.6.0 for Linux/X11 64-bit" is the one we want. Download the installer qt-creator-opensource-linux-x86_64-3.6.0.run, make it executable, run it, and follow the installation instructions. You can install QtCreator where you want. Mine is installed in ~/Qt/qtcreator-3.6.0, for example.
We need to make QtCreator aware of three things: the Qt version we just built, the i.MX35 toolchain, and that the two should be used together when we build our own applications.
First, we specify the Qt version we just built. Open the "Tools | Options" and select the tab page "Build & Run | Qt Versions". Press the "Add" button. QtCreator opens a file dialog, in which we select the qmake executable from our Qt build. The qmake executable is located in /home/.../rootfs/opt/qt-5.5.1-imx35/bin/qmake. The tab page "Qt Versions" should look similar to this screenshot.
The version name "Qt 5.5.1 (qt-5.5.1-imx35)" is pretty telling. So, we keep it. We accept the new Qt version by pressing the "Apply" button.
Second, we make QtCreator aware of the cross-compiler we installed as part of the toolchain. On the dialog "Tools | Options", we select the tab page "Build & Run | Compilers". We press the "Add" button and select "GCC" from the dropdown menu. This gives us an empty form.
Change the "Name" to something more telling, say, "GCC 4.6.2 (armv6)". Press the "Browse" button and select the C++ cross-compiler from the toolchain in the file dialog. The C++ cross-compiler is located at /opt/imx35/.../fsl-linaro-toolchain/bin/arm-fsl-linux-gnueabi-g++. QtCreator fills out the last line "ABI" of the form automatically. The form should look similar to this screenshot.
We accept the new compiler by pressing the "Apply" button.
Finally, we must tell QtCreator that it should use the compiler "GCC 4.6.2 (armv6)" when it builds an application for the Qt version "Qt 5.5.1 (qt-5.5.1-imx35)". We do this by adding the compiler and the Qt version to a so-called kit. On the dialog "Tools | Options", we select the tab page "Build & Run | Compilers". When we press the "Add" button, we get the following empty form.
We fill out this form such that it reflects the following screenshot at the end.
Building the HelloWorld App for the Target Device
You can download the HelloWorld app from my website. This QML app provides just a yellow button on a blue background. By tapping on the button or by pressing the F1 function key, the button toggles its colour between yellow and orange. The app prints the key code on the console. This is enough to figure out whether touch and key input are working. Additionally, we can figure out the codes for the different keys of the OPUS A3. If this simple app works fine, we can be pretty sure that any QML app will work fine as well. When we run the app on our Ubuntu machine, it looks as follows.
When you open the HelloWorld project for the first time in QtCreator, QtCreator asks you to select the relevant kits.
Check the box left to "Opus A3 (Qt 5.5.1)" and then click the "Configure Project" button. This gives us a Qt kit for the target device "Opus A3 (Qt 5.5.1)" and a kit for the Ubuntu machine "Qt 5.5.1 Debug 64bit". This is exactly what we want. We will develop our apps on the Ubuntu machine first and deploy and run them on the target device then. If you forget to check the kit for the OPUS A3, you can make up for it later. Go to the settings page "Projects | Build & Run" of the HelloWorld project and press the "Add Kit" button at the top left.
In the "Build & Run" configurator near the left bottom corner of QtCreator, we select Project "HelloWorld", Kit "Opus A3 (Qt 5.5.1)", Build "Release" and Run "HelloWorld (on Remote Device)".
We build the HelloWorld app by pressing "Ctrl+B" or by selecting "Build | Build Project "HelloWorld"". The build should work fine and use the cross-compiler from our toolchain. The executable, which also contains the only QML file main.qml, is written to the shadow build directory ../build-HelloWorld-Opus_A3_Qt_5_5_1-Release/HelloWorld (relative to HelloWorld's project file). We are ready to deploy Qt and our HelloWorld app to the target device.
Deploying Qt and HelloWorld App to Target Device
We deploy Qt and the HelloWorld app to the target device, the OPUS A3, the old-fashioned way. We copy Qt and the app from the Ubuntu machine to a USB drive, plug the USB drive into the USB jack of the OPUS A3, and copy Qt and the app from the USB drive to the /opt directory of the OPUS A3.
First, we create a tarball for Qt. The Qt installation directory /home/.../rootfs/opt/qt-5.5.1-imx35 has the following subdirectories
$ cd ~/Wachendorff/OpusA3/rootfs/opt $ ls -1 qt-5.5.1-imx35/ bin doc imports include lib mkspecs plugins qml translations
We only need the subdirectories given in bold face for running apps on the target device. The other subdirectories are only interesting for building these apps. This saves a lot of space on the target device. We create the Qt tarball containing the relevant subdirectories on a USB drive. Don't forget to plugin the USB drive into your Ubuntu machine before executing the next command.
$ cd ~/Wachendorff/OpusA3/rootfs/opt $ tar czf /media/burkhard/DISK_IMG/qt-5.5.1-imx35.tgz \ qt-5.5.1-imx35/imports/ qt-5.5.1-imx35/lib/ qt-5.5.1-imx35/plugins/ \ qt-5.5.1-imx35/qml/ qt-5.5.1-imx35/translations
Next, we copy the HelloWorld executable from the shadow build directory to the USB drive.
$ cd ~/Wachendorff/OpusA3/build-HelloWorld-Opus_A3_Qt_5_5_1-Release $ cp ./HelloWorld /media/burkhard/DISK_IMG
Eject the USB drive properly from your Ubuntu machine and plug it into the USB port of the OPUS A3. If not done, power up the OPUS A3 and log into it via a terminal emulator. The USB drive is mounted under /disk/usbsda on the OPUS A3. Install Qt and the HelloWorld app at the right place. Don't forget to make the HelloWorld app executable.
imx35$ cd /opt imx35$ tar xzf /disk/usbsda/qt-5.5.1-imx35.tgz imx35$ cp /disk/usbsda/HelloWorld . imx35$ chmod a+x ./HelloWorld
Running HelloWorld App on Target Device
The big moment is very near. We are about to run the HelloWorld app for the very first time. Drum roll, please!!!
imx35$ cd /opt imx35$ ./HelloWorld &
5, 4, 3, 2, 1 ... and our fantastic HelloWorld app shows up on the target device.
The F1 function key is the top button at the left side of the OPUS A3. Pressing this button or any other hard button on the OPUS A3 doesn't yield any response - not even debug message in the terminal.
Tapping the yellow on-screen button doesn't have any effect either. But don't give up with touch too early. Touch could be decalibrated or the touch coordinate system could be rotated. Tap all over the screen and see whether the button ever turns from yellow to orange. It does, when we tap in the right bottom corner of the screen. So, the touch screen is rotated by 180 degrees.
The Qt documentation page "Qt for Embedded Linux" comes to our rescue. The sections about linuxfb and evdev are relevant for us. In the section "Input on eglfs and linuxfb", we find the sentence: On some touch screens the coordinates will need to be rotated. This can be enabled by setting QT_QPA_EVDEV_TOUCHSCREEN_PARAMETERS to rotate=180.
Terminate the HelloWorld app and then start it again with the touch screen rotated by 180 degrees.
imx35$ killall HelloWorld imx35$ cd /opt imx35$ export QT_QPA_EVDEV_TOUCHSCREEN_PARAMETERS="rotate=180" imx35$ ./HelloWorld &
This time the "Hello, A3" button toggles nicely between yellow and orange when tapped. The touch problem is solved.
Time to look into the "broken" keys. The section "Keyboard" of "Qt for Embedded Linux" helps us out this time. We must tell Qt, from which input device files it gets the keyboard events. The directory /dev/input offers the following device files.
imx35$ ll /dev/input drwxr-xr-x 2 root root 0 Jan 2 07:59 by-path lrwxrwxrwx 1 root root 6 Jan 2 07:59 encoder0 -> event1 crw-r----- 1 root root 13, 64 Jan 2 07:59 event0 crw-r----- 1 root root 13, 65 Jan 2 07:59 event1 crw-r----- 1 root root 13, 66 Jan 2 07:59 event2 lrwxrwxrwx 1 root root 6 Jan 2 07:59 keyboard0 -> event2 lrwxrwxrwx 1 root root 6 Jan 2 07:59 ts0 -> event0
The input events from the function keys come from /dev/input/keyboard0 and those from the rotary knob come from /dev/input/encoder0. We add these two device filenames to the environment variable
QT_QPA_EVDEV_KEYBOARD_PARAMETERS and run the app again.
imx35$ killall HelloWorld imx35$ cd /opt imx35$ export QT_QPA_EVDEV_KEYBOARD_PARAMETERS="/dev/input/encoder0:/dev/input/keyboard0" imx35$ export QT_QPA_EVDEV_TOUCHSCREEN_PARAMETERS="rotate=180" imx35$ ./HelloWorld &
Pressing the F1 key toggles the colour of the "Hello, A3" button. Pressing the other keys or rotating the rotary knob prints the key code to the terminal. The key codes are mapped to symbolic names in the enum Qt::Key. For example, 16777264 or 0x1000030 in hex is the key code for Qt::Key_F1.
We are done!!! Our HelloWorld runs on the OPUS A3. Touch, keyboard and encoder input work. This is all we need to build more complex QML applications like an AM/FM radio, a music or video player or the control terminal of harvesters, tractors, excavators or cranes.
Downloads
- linux-imx35-g++ - Make spec for the i.MX35. Unpack ZIP archive in directory ~/Qt/qt5/qtbase/mkspecs/devices.
- HelloWorld - Project of HelloWorld app. Unpack ZIP archive in directory of your choice. Open project file /path/to/HelloWorld/HelloWorld.pro into QtCreator set up for i.MX35. | http://www.embeddeduse.com/2016/02/05/running-a-qml-hmi-on-an-arm11-without-opengl/ | CC-MAIN-2017-39 | refinedweb | 6,288 | 67.15 |
A Developer.com Site
An Eweek.com Site
Type: Posts; User: Cakkie
Well, what do you considder a project manager?
In my opinion, it really doesn't matter as it is not the function/role of the project manager to be dealing with the technical details. The program...
You can also use the Microsoft.Office.Interop namespace to interact with Excel directly, which will give you a much greater control on how you can transfer the data. You will need to add a reference...
You can use the SHGetFolderPath API to get the desktop folder.
Private Const CSIDL_DESKTOPDIRECTORY As Long = &H10
Private Declare Function SHGetFolderPath Lib "shfolder" _
Alias...
Perfect, works like a charm :)
Thanks!
A ParamArray is used to define a unknown amount of parameters, not to pass a parameter.
The declaration of the function should be this:
Public Function max(a() as Integer) As Integer
I'm trying to call a C++ API function, which takes two parameters. The function will fill these 2 parameters. The first parameter will be a list of a specific structure, the second will be a number...
Ok, that clears up a lot.
I found an example on PSC to call cdecl dll's from VB, but it keeps complaining about a wrong parameter type i'm passing (the WRAPI_NDIS_DEVICE).
I'll play with it some...
You are returning an object, zso you will need to assign the return value using the Set command
public Function Executequery(sql As String) As Object
dim Rs As New Adodb.recordset
Set...
Dir() returns wrapi.dll
As for the special characters when copy/pasting, since I only had the c++ header file to go on, I had to type the declaration myself, so that rules out any invalid...
Tried that, same result.
It would be rather strange if it did, since that way I would have to make sure the dll had to be in that exact location if installed on another machine.
Dim ie(2) As Object 'whatever datatype
Dim i As Integer
For i = 0 To UBound(ie)
Set ie(i) = CreateObject("InternetExplorer.Application")
ie(i).navigate address2
ie(i).Visible =...
I'm trying to use WRAPI in VB, but I can't manage to get it working.
I downloaded the dll and lib from
The source files contain a header file,...
Maybe you can use an application like BGInfo
Well, if you need to be able to take a screenshot, you will have to have a program running on that computer (preferable one that you can connect to through winsock or so). In that case, you could...
Another option is to have your program start hidden, and have it check to see if the other program is running. Once you find the program running, you unhide your program and start regular execution...
That is indeed the way to do that in VB.
In order to be able to sort on a date correctly, you will need to store it in the form YYYYMMDD.
AFAIK that isn't possible. SQL Server does know a datatype table, which can be used to pass in memory tables from 1 proc to another, but there's no way you can pass them from VB to SQL Server in 1...
vbNullString is slightly faster than "", since vbNullString is not actually a string, but a constant set to 0 bytes, whereas "" is a string (consuming at least 4-6 bytes for just existing).
You can't make a portion of the form transparent, so you will have to change the shape of the form to fit the shape of the image. This can be done throught hte use of some API functions.
This...
If your dates are stored in 1 field (which I presume will be the case), you can use the DATEPART function.
SELECT Sum(Amount)
FROM TableName
WHERE DATEPART(Year, DateField) = 2004
AND...
Make sure the files are writable. VSS will save the files readonly.
The Kill function in VB is used to delete a file, not to terminate a process.
To terminate a process, you will need to make use of API functions. Here's an example:
Private Declare Function...
Indeed, basically if you don't need to pass them ByRef, then pass them ByVal.
Unlike one would think, ByVal is actually faster than ByRef because it does not need to pass back the data in the...
I think the problem lies in the declaration of the SetControls function you wrote.
Make sure you explicitly declare the parameter that accepts the frame to either Frame or Object. Not declaring the...
First of, the First If statement will always be True, You cannot pass more than 1 value to an =, so the second sckConnecting will be evaluated by itself, and since it's not 0, this will always. | http://forums.codeguru.com/search.php?searchid=20184039 | CC-MAIN-2019-43 | refinedweb | 801 | 72.56 |
25 November 2011 10:57 [Source: ICIS news]
By Samuel Wong
SINGAPORE (ICIS)--South Korea’s petrochemical exports in October grew 17.6% year on year to $3.6bn (€2.7bn), thanks to the weak won and higher exports to China, an analyst said on Friday.
Data from the Ministry of Knowledge Economy (MKE) showed that exports grew across most key sectors.
“Weak won currency led to an increase in exports to China,” the analyst said.
Overseas shipments of most petrochemical products last month increased by volume on a year-on-year basis, data from Korea International Trade Association (KITA) showed.
Ethylene exports grew 48.2% to 68,571 tonnes, while propylene shipments nearly tripled to 121,411 tonnes. (Please see table below)
For aromatics, exports of benzene fell by 17.4% to 99,430 tonnes, while shipments of toluene increased 18.1% to 79,139 tonnes, the data showed.
Paraxylene (PX) exports jumped by 61.3% to 159,230 tonnes.
MKE said that the country has remained in trade surplus for 21 consecutive months.
Exports of most of Korea’s key export items, including petroleum products, general machinery, automobiles, steel, automobile parts, household appliances, petrochemicals, computers and textiles, showed good growth in October compared with the same period in 2010, according to the MKE.
However, business optimism in the country slightly weakened for November, with the South Korean manufacturers’ confidence index slipping to 82 from 86 for October, according to the Bank of Korea.
“The European debt crisis negatively affected exporters and there was a time lag, which is a reason why the November Manufacturers’ confidence index fell,” said Lee of Shinhan Investment.
A survey of 2,000 manufacturers by the Korean Chamber of Commerce and Industry showed business confidence rapidly cooling and anxiety building over next year’s business planning and investments on concerns of a double-dip in the world economy.
Petrochemical demand is expected to weaken because of the debt crisis in the eurozone and the weak global economy.
Source: KITA ($1 = €0.75)
Source: KITA
E2E (End-to-End) Testing in Ionic: Structuring Tests with Page Objects
By Josh Morony
In the first post in this series I wrote about how to create E2E (End-to-End) tests for Ionic applications. This was a very basic introduction that focused on the general concept and getting a bare bones example up and running.
I intend to create a few more tutorials that will go a little more in-depth into E2E testing and discuss how to better integrate them into your testing strategy. In this tutorial, I will be discussing how you can use page objects in your E2E tests to make them more maintainable and easier to write.
We will discuss what a page object actually is in a little more depth soon, but in short, a page object is a class that represents a page of your application. It adds a level of abstraction between your tests and the specific implementation of a page. If that explanation doesn’t make any sense at all, don’t worry, it’s much more easily demonstrated with an example.
Why Page Objects?
When creating E2E tests we will often grab an element by referring to it by a CSS selector. That might be a class name like `profile-information`, or perhaps it will be through a CSS combinator like `ion-list button` to select any `<button>` elements inside of an `<ion-list>`. This might look something like this:

```typescript
element(by.css('.profile-information'))
```
Let’s suppose you’ve got your tests set up, and everything is working fine, but then you need to make a change to the structure of your template. Perhaps you have renamed some classes or you have changed the level of nesting of an element. Now the CSS selector you are using for your E2E test is incorrect and all of the tests related to that will be broken. You will need to find every instance of that selector in the tests and update it.
This is one issue that page objects can solve for us, another is that of navigation.
Protractor is designed to work well with the browser and provides methods for navigating about an application through the URL. Although an Ionic application is technically a website that runs through a browser, it doesn’t behave like one – it behaves more like a native mobile application and so does its navigation. Rather than relying on routes in the URL to determine the page, Ionic 2 follows a push/pop style of navigation.
UPDATE: This post has been updated since initially being published, and the current version of Ionic/Angular uses Angular routing by default rather than push/pop navigation. This means that the way you structure your tests may be a little different, but we are going to stick with the push/pop example in this tutorial to help illustrate the benefit of page objects.
This push/pop style navigation complicates our E2E tests a bit. Generally, the first thing you would do in an E2E test is to point the browser to wherever it needs to go to start the test, e.g:
```typescript
browser.get('')
```
which would direct the browser to the index page of the application (which isn’t an issue for Ionic), or something like this:
```typescript
browser.get('/products')
```
to direct the browser to the products page. This is an issue for an Ionic application relying on push/pop because, as I just mentioned, this method of navigation does not rely on a URL being provided to the browser to navigate to a specific page.
In order to navigate to the page that we need to test with Protractor when using push/pop navigation, we need to direct the browser to the index page, and then trigger a series of clicks to navigate to the page that we want to start our test on. Although using a page object does not entirely provide a solution to this problem, it does make it easier to manage.
Using a Page Object
In order to demonstrate how a page object can help deal with the issues I talked about above, I am going to use an example from an application that I was building when I first published this tutorial. We will take a look at an E2E test that I have written, we will see what it looks like without using page objects, then we will see what it looks like using page objects.
Before Using a Page Object
Here’s what the E2E test would look like without using page objects:
```typescript
import { browser, element, by } from 'protractor';

describe('Module Tests', () => {

  beforeEach(() => {
    browser.get('');
    element(by.css('.module-list button')).click();
    browser.driver.sleep(500);
  });

  it('a user can select a lesson from a module and view the lesson content', () => {
    let lessonToTest = element.all(by.css('.lesson-list button')).get(2);

    // Trigger a click to navigate to module
    lessonToTest.click();

    // Wait
    browser.driver.sleep(500);

    // Check if there is content
    expect(element(by.css('.lesson-content')).getText()).toBeTruthy();
  });

});
```
On the surface, this E2E test looks pretty sane. We set the test up by first navigating to the relevant page by triggering a `click` on the module list, and then we test whether or not a user is able to view lesson content by entering into a lesson and checking the content.
This test will perform the test that is required of it well, but it has some serious maintainability issues. Let’s consider a couple of scenarios:
- The location of the page changes. Right now the page can be accessed by pointing the browser to the root of the application and performing one click. But what if the application becomes more complex than that? Perhaps a login page is added, or there may even be several pages added before the user can arrive at this one. This is going to make the `beforeEach` setup for each test quite messy.
- The CSS selectors change. We are grabbing references to elements using CSS selectors, but it's not too hard to imagine that at some point those selectors may be changed (especially if we rely on combinators) - this would mean that we need to change our E2E tests to reflect those changes. That wouldn't be too much of an issue for this simple example, but if you are referring to the same element in multiple different tests, you are going to have to go and update every single reference to it every time you make a change.
If you use page objects, this won’t be as much of an issue.
Creating a Page Object
Now let’s take a look at how to create a page object. The general approach is to create the page object as a separate file, and then import it wherever you need it. This is the structure that I am using:
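For concreteness, here is a hypothetical layout. The file names are inferred from the import paths used later in this post, not taken from a real project:

```text
e2e/
  module.e2e-spec.ts
  page-objects/
    lesson.page-object.ts
    module.page-object.ts
    select-module.page-object.ts
```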
UPDATE: You will now find that page objects are included in the default E2E tests generated with an Ionic/Angular application (e.g. `app.po.ts`). You may wish to follow this default structure, but you can set your files and folders up however you like.

I have a folder specifically for page objects, and name them `[page-name].page-object.ts`. You can use whatever structure you prefer. Let's take a look at what one of the page objects actually looks like.
module.page-object.ts
```typescript
import { browser, element, by } from 'protractor';
import { SelectModulePageObject } from './select-module.page-object';

export class ModulePageObject {

  selectPage: SelectModulePageObject = new SelectModulePageObject();

  getLessonList() {
    return element.all(by.css('.lesson-list button'));
  }

  browseToPage() {
    browser.get('');
    this.selectPage.getModuleElement().click();
    browser.driver.sleep(500);
  }

}
```
This page object is just a simple class that performs a couple of tasks. Instead of referring to:

```typescript
element.all(by.css('.lesson-list button'));
```

in our tests, we create a function in the page object to access it, and our tests reference that function instead. This way, we are only ever referencing the CSS selector in one place. If we ever need to update the CSS selector later, that means we will only need to make a single update to this page object, rather than multiple updates across different tests.
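To see why this matters without spinning up a browser, here is a framework-free sketch of the same idea. The `Finder` type is an assumption made for this example - it stands in for Protractor's `element.all(by.css(...))` so the code can run on its own:

```typescript
// Stand-in for Protractor's element lookup; an assumption made so this
// sketch can run without a browser.
type Finder = (selector: string) => { selector: string };

class ModulePageObject {
  constructor(private find: Finder) {}

  // The only place in the whole suite that knows this selector.
  getLessonList() {
    return this.find('.lesson-list button');
  }
}

// A test asks the page object for the element; it never mentions the
// raw selector, so a template change means updating one line up top.
const find: Finder = (selector) => ({ selector });
const modulePage = new ModulePageObject(find);
console.log(modulePage.getLessonList().selector); // prints: .lesson-list button
```

If the class in the template is ever renamed, only `getLessonList` changes; every test that calls it keeps working untouched.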
I have also added a `browseToPage` method. The purpose of this method is to specify the steps to navigate to the page that the page object represents. This way we can just call the `browseToPage` method in our tests whenever we need to navigate to this page.
Also, notice that in this page object I am importing another page object. To navigate to this page we need to click an element from another page, so again, instead of manually referencing it here (which may need to be updated later) we just grab the element from that page's page object instead.
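The `SelectModulePageObject` itself isn't shown in this post, so the sketch below guesses at its shape: the `getModuleElement` method and the `.module-list button` selector are assumptions inferred from the "before" test and from how `browseToPage` uses it. As before, a fake `Finder` replaces Protractor so the composition can be exercised directly:

```typescript
// Fake element and finder, standing in for Protractor's element(by.css(...)).
type FakeElement = { selector: string; click(): void };
type Finder = (selector: string) => FakeElement;

// Inferred sketch of the page object for the module-selection page.
class SelectModulePageObject {
  constructor(private find: Finder) {}

  getModuleElement(): FakeElement {
    return this.find('.module-list button');
  }
}

class ModulePageObject {
  selectPage: SelectModulePageObject;

  constructor(find: Finder) {
    this.selectPage = new SelectModulePageObject(find);
  }

  // Navigation delegates to the other page's page object, so the
  // '.module-list button' selector is never repeated here.
  browseToPage(): void {
    this.selectPage.getModuleElement().click();
  }
}

// Record which selectors get clicked during navigation.
const clicked: string[] = [];
const find: Finder = (selector) => ({
  selector,
  click: () => clicked.push(selector),
});

new ModulePageObject(find).browseToPage();
console.log(clicked); // [ '.module-list button' ]
```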
After Using a Page Object
Now let’s take a look at what the E2E test looks like after using page objects:
```typescript
import { browser } from 'protractor';
import { ModulePageObject } from './page-objects/module.page-object';
import { LessonPageObject } from './page-objects/lesson.page-object';

describe('Module Tests', () => {

  let modulePage: ModulePageObject;
  let lessonPage: LessonPageObject;

  beforeEach(() => {
    modulePage = new ModulePageObject();
    lessonPage = new LessonPageObject();
    modulePage.browseToPage();
  });

  it('a user can select a lesson from a module and view the lesson content', () => {
    let lessonToTest = modulePage.getLessonList().get(2);

    // Trigger a click to navigate to the lesson
    lessonToTest.click();

    // Wait
    browser.driver.sleep(500);

    // Check if there is content (this assumes LessonPageObject exposes a
    // getLessonContent() helper wrapping the '.lesson-content' selector)
    expect(lessonPage.getLessonContent().getText()).toBeTruthy();
  });

});
```
The test looks pretty much the same, but it’s a little neater and simpler now, and won’t suffer from those maintainability issues we discussed earlier.
Summary
This was a pretty basic example, the benefits of using page objects may be more obvious on a more complex test suite, but I think that this example shows why page objects can be useful to help structure your tests.
In the next part of this tutorial series, we will go into a little more depth about what kinds of E2E tests you should write and when you should write them. | https://www.joshmorony.com/e2e-end-to-end-testing-in-ionic-2-structuring-tests-with-page-objects/ | CC-MAIN-2020-10 | refinedweb | 1,626 | 58.82 |
Introduction
One of the best ways to update an application with a tired two-dimensional (2D) graphical user interface (GUI) is to update its legacy look and feel with some three-dimensional (3D) effects to get more of an Apple* iPhone*–like user experience. By exploiting the Khronos* OpenGL* ES accelerator on Intel® Atom™ processors, you can make such a change without degrading the responsiveness of the UI. But rewriting a 2D application from scratch to use OpenGL* ES is usually not practical. Instead, update your 2D application to use a combination of 2D and 3D rendering by making OpenGL* ES coexist with the legacy 2D application programming interface (API) you already use. This way, your 2D GUI can still be rendered by the legacy 2D API, but then be animated in 3D with transition effects that OpenGL* ES handles well, like rotation, scaling, blending, and lighting effects.
Even when a new application is built on OpenGL* ES from the start, 2D objects—such as GUI widgets and text fonts—are often required that OpenGL* ES does not provide, so mixing 2D and 3D APIs makes more sense than you might think. In fact, the combination of 2D and 3D rendering is powerful, especially when using an application processor that offers accelerators for both, like Intel® Atom™ processors. The trick is to make them play together nicely.
2D and 3D are really different paradigms with important architectural design trade-offs that developers must face to avoid some of the limitations of OpenGL* ES on embedded systems; making efficient use of the limited resources on embedded systems is important if you want a responsive user experience. This article details and contrasts several proven solutions to combining OpenGL* ES with legacy 2D APIs that work on most embedded systems, including Linux* and Google Android. The architectural trade-offs of each approach are explained, and some important pitfalls are identified. These concepts work with either OpenGL* ES 1.1 or 2.0 on embedded Linux systems, with or without a windowing system, such as X11, Qt, or Android. Some code examples are specific to Android, which supports OpenGL* ES through both its framework API and the Native Development Kit (NDK). The API framework supports OpenGL* ES 2.0 beginning with Android 2.2.
The Legacy 2D Problem
Typical legacy applications build 2D screen images piece by piece using BitBlt operations through a 2D API, which may be accelerated by a BitBlt engine. BitBlts typically involve raster operations, transparency, brushes, clipping rectangles, and other features that do not map well to OpenGL* ES. Even worse, BitBlts are typically layered heavily. There may be hundreds of BitBlts to construct a typical screen in a 2D GUI. Also, a typical screen update usually only renders the pixels that have actually changed. In contrast, OpenGL* ES always renders screen frames whole. If your application relies on a 2D API to render GUI widgets such as buttons, scroll bars, icons, and text fonts, don’t plan on moving to OpenGL* ES exclusively, because OpenGL* ES doesn’t provide those elements.
Some examples of the most widely used legacy 2D APIs on Linux* systems are Cairo, GTK+, Motif, FreeType, Pango, DirectFB, and Qt Frameworks—although there are many more. These APIs are used for rendering scalable vector graphics (SVGs), BitBlts, text fonts, GUI widget components, windows, or some combination. All of these APIs produce 2D images that OpenGL* ES can animate on a Linux* or Android platform, but a mechanism is needed to exchange images between these 2D and 3D APIs efficiently.
The Hybrid 2D/3D Solution
Think of your legacy application as producing 2D images in which each screen update through the 2D API produces a new 2D image. These images can then be copied into an OpenGL* ES texture to allow OpenGL* ES to display it on the screen. OpenGL* ES can then animate the movement of the entire image as a texture by applying a transform to create a transition effect from one screen image to the next. This animated transition effect can be a rotation, scale up or down, translate, fade in or out, or any combination. The geometry for the texture to achieve these effects can be as simple as a pair of triangles to form a rectangle that matches the shape of the display. The time duration of animated effects is typically just a fraction of a second—just long enough for the user to visualize the animation and provide a 3D experience but not long enough to impede the UI. When animations are complete, the cycle repeats, with the 2D API providing the next texture image to load. OpenGL* ES is efficient at animating textures after they have been loaded, because the 3D accelerator actually does most of the work.
The code example in Listing 1 shows the major steps required to initialize OpenGL* ES 2.0 to perform an animated transition (scale up and rotate) of a 2D texture image. First, a GL Shading Language ES shader program is selected for use by the ShaderHandle, and the locations of its uniforms are retrieved. Next, two matrices are created for a simple perspective projection: the projection matrix and the model view matrix. Then, a texture is created and loaded with a 2D image using the conventional
glTexImage2D() method. The image will be mapped onto a pair of triangles that form a rectangle, so the pointers to the vertex and texture coordinate attributes are passed to the shader with
glVertexAttribPointer(). These same arrays for the triangle pair will be reused for every frame of the animation, but the position of the rectangle on the display is recalculated in each iteration of the loop.
The loop begins by clearing the frame buffer to black. Then, the
fModelViewMartix is recalculated for the next frame of the animation and passed to the shader with
glUniformMatrix4fv(). The same
fProjectionMatrix is used for every frame. Finally, the call to
glDrawArrays() initiates the rendering of the texture image onto the triangle pair by the OpenGL* ES accelerator. The call to
eglSwapBuffers() makes the new rendered frame visible on the display.
Listing 1. Example of an animated texture transition
#include "GLES2/gl2.h"
// Define the vertices for a rectangle comprised of two triangles.
const GLfloat fPositions[] =
{
-1.0f, -1.0f,
1.0f, -1.0f,
1.0f, 1.0f,
-1.0f, 1.0f,
};
// Define the coordinates for mapping the texture onto the triangle pair.
const GLfloat fTexCoords[] =
{
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f
};
// Initialize interface with the shader program.
GLuint ShaderHandle;
glUseProgram(ShaderHandle);
GLint ModelViewMatrixLocation = glGetUniformLocation(ShaderHandle, "ModelViewMatrix");
GLint ProjectionMatrixLocation = glGetUniformLocation(ShaderHandle, "ProjectionMatrix");
GLint TextureLocation = glGetUniformLocation(ShaderHandle, "Texture");
// Initialize the projection and model view matrices.
GLfloat fProjectionMatrix[16];
GLfloat fModelViewMatrix[16];
Identity(fProjectionMatrix);
Frustum(fProjectionMatrix, -0.5f, 0.5f, -0.5f, 0.5f, 1.0f, 100.0f);
glUniformMatrix4fv(ProjectionMatrixLocation, 1, 0, fProjectionMatrix);
glUniformMatrix4fv(ModelViewMatrixLocation, 1, 0, fModelViewMatrix);
// Create and load a texture image the conventional way.
EGLint TextureHandle;
glGenTextures(1, &TextureHandle);
glBindTexture(GL_TEXTURE_2D, TextureHandle);
glTexImage2D(GL_TEXTURE_2D,0, GL_RGBA, Width, Height, 0, GL_RGBA, GL_UNSIGNED_BYTE,pImage);
glUniform1i(TextureLocation, 0);
// Initialize pointers to the vertices and texture coordinates of the triangle fan.
glEnableVertexAttribArray(VERTEX);
glEnableVertexAttribArray(TEXCOORD);
glVertexAttribPointer(VERTEX, 2, GL_FLOAT, 0, 0, &fPositions[0]);
glVertexAttribPointer(TEXCOORD, 2, GL_FLOAT, 0, 0, &fTexCoords[0]);
// Animation loop which scales and rotates the texture and maps it to the triangle pair.
for (float fAnimationTime = 0.0f; fAnimationTime < 1.0f; fAnimationTime += 0.01)
{
// Clear the frame buffer to black.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Create matrix to scale and rotate the texture image.
Identity(fModelViewMatrix);
Translate(fModelViewMatrix, 0.0f, 0.0f, -2.0f);
Scale(fModelViewMatrix, fAnimationTime, fAnimationTime, 1.0f);
Rotate(fModelViewMatrix, fAnimationTime * 360.0f, 0.0f, 0.0f, 1.0f);
glUniformMatrix4fv(ModelViewMatrixLocation, 1, 0, fModelViewMatrix);
// Render and display the new frame.
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
eglSwapBuffers(EglDisplayHandle, EglSurfaceHandle);
}
Making 2D and 3D Graphics APIs Coexist
When adding OpenGL* ES to a legacy 2D application, the main point of contention is ownership of the frame buffer. The frame buffer is special, because it is the memory area that is actually shown on the display (as opposed to off-screen buffers). OpenGL* ES expects to acquire the frame buffer from the EGL* driver, which acquires pointers to the frame buffer from the Linux* frame buffer device. Your 2D API is probably rendering to that same frame buffer memory allocated from that same Linux frame buffer device. That is the conflict. Android is a good example of this: It owns the frame buffer. Android has 2D features that are already integrated with OpenGL* ES, which is great if you are writing a new app from scratch. Otherwise, the challenge is to make your legacy 2D API run without owning the frame buffer.
Note that the term frame buffer is really a convenient over-simplification, because there are typically several frame buffers to prevent screen tearing artifacts, plus depth buffers, and so on. The actual number of frame buffers available typically depends on the amount of memory allocated in your system for that purpose when the Linux kernel boots. I conveniently refer to all of these buffers as the frame buffer; regardless of how many frame buffers you actually have, most graphics APIs consider all of them to be their private property to use at the exclusion of all other graphics APIs, 2D or otherwise. Making OpenGL* ES coexist with a 2D API requires resolving this basic conflict of frame buffer ownership.
The solution is to either make your 2D API share the real frame buffer with OpenGL* ES or to redirect the rendering of either of the two APIs into an off-screen (fake) frame buffer, which can then be read and used by the other API. OpenGL* ES can be made to share the frame buffer. So, the first question to ask is, will your 2D API share the frame buffer? In other words, can your application’s legacy 2D rendering be redirected into an off-screen buffer that OpenGL* ES can then read and use as a texture image? If not, then your decision on a rendering order is limited to the first option in the next section.
Deciding on the Rendering Order
There are three solutions to this problem, each with its associated trade-offs. They are:
- OpenGL* ES rendering through an existing 2D GUI;
- Rendering an existing 2D GUI through OpenGL* ES; and
- Using a shared frame buffer for 2D and 3D rendering
OpenGL* ES Rendering Through an Existing 2D GUI
You can configure OpenGL* ES never to render directly to the real frame buffer but rather to an off-screen buffer (Figure 1). Then, the 2D API must copy each frame to the frame buffer with a BitBlt operation. This approach is usually the easiest to implement, because the legacy GUI retains ownership of the frame buffer and operates without modification. The disadvantage of this approach is that the copy operation slows the 3D rendering somewhat. The loss in 3D performance should be acceptable on systems with a BitBlt accelerator, because that accelerator can be used to perform the operation. Note that this is how OpenGL* ES works in typical windowing system environments like X11 or Qt when it is restricted to rendering into a window that is smaller than the display. The off-screen buffer to which OpenGL* ES renders will either be a frame buffer object (FBO), a pixel buffer, or a pixmap.
Figure 1. 3D rendering through a 2D API
Rendering an Existing 2D GUI Through OpenGL* ES
The opposite solution is to give OpenGL* ES ownership of the frame buffer and adapt the legacy 2D GUI to render through OpenGL* ES (Figure 2). This means that every time the GUI alters a 2D image, the image must be updated on the screen by copying it into an OpenGL* ES texture. Obviously, this approach reduces the performance of the 2D GUI but maximizes 3D rendering performance. This option represents the design when a legacy 2D GUI is ported to Android, because OpenGL* ES has ownership of the frame buffer and provides acceleration of both 2D and 3D. With this design, it is critical that you use fast texture-loading capabilities, such as the EGL* image extension, to minimize the loss of performance, because loading texture images into OpenGL* ES is inherently a slow operation.
Figure 2. 2D rendering through OpenGL ES
Using a Shared Frame Buffer for 2D and 3D Rendering
It is possible to have the best 2D and 3D rendering performance without sacrificing either: The trick is to allow the two APIs to share direct access to the frame buffer. This approach requires configuring OpenGL* ES and the 2D API for the same frame buffer (Figure 3). If OpenGL* ES is running in a different execution thread than the 2D GUI, you must use a mutex to control which API has ownership of the frame buffer at any particular time to prevent one from rendering over the other.
Figure 3.Sharing the frame buffer for rendering
This approach also requires paying particular attention to how the two APIs advance the frame buffer display sequence. With OpenGL* ES, the frame buffer typically consists of three actual frames, so that the 3D accelerator can always render the next frame while the previous frame is displayed without tearing artifacts. OpenGL* ES applications advance the frame buffer display sequence by calling
eglSwapBuffers(), which then calls the Linux frame buffer device through the
FBIOPAN_DISPLAY ioctl() method. For a 2D API to share the same set of frame buffers, it too must call the same frame buffer device. It is critical that the presentation order of the frame buffers be maintained when rendering is switched between the 2D and 3D APIs, or the API will periodically render to a buffer that is currently displayed (the front buffer), which causes ugly rendering artifacts. The solution is always to read the current value of the yoffset parameter to determine which frame buffer is currently displayed before rendering the next frame. The EGL* and
eglSwapBuffers() method already do this for OpenGL* ES rendering, so your 2D rendering must do the same, as shown in Listing 2.
Listing 2. Example of advancing the frame buffer display sequence
#include <sys/ioctl.h>
#include <linux/fb.h>
struct fb_var_screeninfo varinfo;
// Open the linux frame buffer device.
int fbDeviceHandle = open("/dev/fb0", O_RDWR);
// Get the variable screen information from the fb device.
ioctl(fbDeviceHandle, FBIOGET_VSCREENINFO, &varinfo);
// Determine which framebuffer is currently displayed by the EGL.
int FrameIndex = varinfo.yoffset / FrameHeight;
// Advance to the next framebuffer.
if (++FrameIndex > 2)
FrameIndex = 0;
// Flip displayed framebuffer to display new rendering.
varinfo.xoffset = 0;
varinfo.yoffset = FrameIndex * FrameHeight;
ioctl(fbDeviceHandle, FBIOPAN_DISPLAY, &varinfo);
Using FBOs, Render Buffers, and Pixmaps for Off-screen Rendering
Exact terminology is important here, because the Khronos Group has defined several ways for OpenGL* ES to render into off-screen buffers. The most widely used solution is the FBO with an attached texture. Pixel buffers (or pbuffers) are obsolete and have performance problems. Pixmaps are useful if your 2D API is compatible with your EGL* driver, which is usually not the case. However, Android does support pixmaps, but they are called native GraphicBuffers. This is the preferred way to exchange 2D images between Android and OpenGL* ES.
Attaching a texture to an FBO is typically done to implement render-to-texture techniques, where the rendered output from OpenGL* ES is reused as a texture for the finished scene, such as a reflection or mirror effect. But it is also useful for passing rendered 3D frames to your 2D API, because you can retrieve the address of the texture map using the EGL* image extension. If you can obtain the physical address of a texture buffer, you can use an accelerated 2D API to BitBlt rendered frames between the 2D and 3D APIs quickly. But even a
memcpy() method is still typically faster than using
glTexImage2D() to load textures.
Another approach worth mentioning is to use an FBO with an attached render buffer and the
glReadPixels() method to copy the rendered frames. However, the performance of
glReadPixels() will be poor unless it is accelerated by your OpenGL* ES driver.
Typically, it’s a good idea to use compression and mip maps when creating textures for OpenGL* ES. However, that is for static images and way too expensive for dynamic images. The texture compression algorithms implemented in 3D accelerators are asymmetrical, meaning that it is much more compute intensive to compress an image than to decompress the same image. So, for good performance loading dynamic images, use an uncompressed common RGB format without mip maps, such as RGB_565 (16 bit), RGB_888 (24 bit), or ARGB_8888 (32 bit).
A common problem is that your 2D API might use a nonstandard pixel format that is not supported directly by OpenGL* ES. You can usually handle this issue for images coming into OpenGL* ES 2.0 as textures by writing a custom pixel shader that swaps the red, green, blue, or alpha pixel color components as needed, as shown in Listing 3.
Listing 3. Example fragment shader to convert pixel formats
void main()
{
vec3 color_rgb = texture2D(Texture, TexCoord).bgr; // Swap red and blue components
gl_FragColor = vec4(color_rgb, 1.0); // Append alpha component
}
However, custom shader code cannot change the format with which OpenGL* ES renders its output, so if that is being directed into an FBO to be read by your 2D API, it must be able to handle one of the output formats that the OpenGL* ES driver supports—either RGB_565 (16 bit) or ARGB_8888 (32 bit).
Figure 4 illustrates the preferred mechanisms for exchanging 2D images between a 2D API and OpenGL* ES. An EGL* image is allocated and associated with each texture so that pointers to the texture buffers can be obtained. These pointers can then be used to transfer rendered images between the APIs with either a software copy or an accelerated BitBlt. OpenGL* ES can render into a texture that is attached to an FBO. This is an off-screen buffer that can also be read through its associated EGL* image.
Figure 4. Exchanging images between a 2D API and OpenGL* ES
Using the EGL* Image Extension
The conventional way to copy an image into a texture is with either the
glTexImage2D() or
glTexSubImage2D() methods, but these methods are slow because of how they convert the format of the image data as it is copied. These are really intended for loading static images, not dynamic ones. Moving images between OpenGL* ES textures and another graphics API quickly requires direct access to the memory in which the texture image is stored. Ideally, the image should be copied by an accelerated 2D BitBlt, but that requires the physical address of the image. Otherwise, you can use a
memcpy() method instead, which only requires the virtual address of the image.
The EGL* image extension is an extension to the EGL* standard defined by the Khronos Group that provides the virtual or physical addresses of an OpenGL* ES texture. With these addresses, images can be copied to or from OpenGL* ES textures quickly. This technique is so fast that it is possible to stream uncompressed video into OpenGL* ES, but doing so typically requires converting the pixels from the YUV to RGB color space, which is beyond the scope of this article.
The official name of the EGL* image extension is GL_OES_EGL_image. It is widely supported on most platforms, including Android. To confirm which extensions are available on any platform, use the functions provided in Listing 4 to return strings that list all of the available extensions by name for your OpenGL* ES and EGL* drivers.
Listing 4. Checking for available OpenGL* ES and EGL* extensions
glGetString(GL_EXTENSIONS);
eglQueryString(eglGetCurrentDisplay(), EGL_EXTENSIONS);
The header file
eglext.h defines the names of the rendering surface types that the EGL* and OpenGL* ES drivers for your platform support. Table 1 provides a summary of the EGL* image surface types that are available for Android. Note that Android lists support for the EGL_KHR_image_pixmap extension, but it is actually the
EGL_NATIVE_BUFFER_ANDROID surface type that you must use, not
EGL_NATIVE_PIXMAP_KHR.
Table 1. Surface types for EGL* images on Android
The code in Listing 5 shows how to use the EGL* image extension in two ways. First, on the Android platform, a native
GraphicBuffer surface is created and locked. This buffer can be accessed for rendering while it is locked. When this buffer is unlocked, it can be imported into a new EGL* image with the ClientBufferAddress parameter to
eglCreateImageKHR(). This EGL* image is then bound to GL_TEXTURE_2D with
glEGLImageTargetTexture2DOES(), to be used as any texture can be used in OpenGL* ES. This is accomplished without ever copying the image, as the native
GraphicBuffer and the OpenGL* ES texture are actually sharing the same image data. This example demonstrates how images can be exchanged quickly between OpenGL* ES and Android or any 2D API on the Android platform. Note that the GraphicBuffer class is only available in the Android framework API, not the NDK.
If you are not using Android, you can still import images into OpenGL* ES textures in the same way. Set the
ClientBufferAddress to point to your image data, and set the SurfaceType as
EGL_GL_TEXTURE_2D_KHR. Refer to your eglext.h include file for a complete list of the surface types that are available on your platform. Use
eglQuerySurface() to obtain the address, pitch (stride), and origin of the new EGL* image buffer after it is created. Be sure to use
eglGetError() after each call to the EGL* to check for any returned errors.
Listing 5. Example of using the EGL* image extension with Android
#include <EGL/eglext.h>
#include <GLES2/gl2ext.h>
#ifdef ANDROID
GraphicBuffer * pGraphicBuffer = new GraphicBuffer(ImageWidth, ImageHeight, PIXEL_FORMAT_RGB_565, GraphicBuffer::USAGE_SW_WRITE_OFTEN | GraphicBuffer::USAGE_HW_TEXTURE);
// Lock the buffer to get a pointer
unsigned char * pBitmap = NULL;
pGraphicBuffer->lock(GraphicBuffer::USAGE_SW_WRITE_OFTEN,(void **)&pBitmap);
// Write 2D image to pBitmap
// Unlock to allow OpenGL ES to use it
pGraphicBuffer->unlock();
EGLClientBuffer ClientBufferAddress = pGraphicBuffer->getNativeBuffer();
EGLint SurfaceType = EGL_NATIVE_BUFFER_ANDROID;
#else
EGLint SurfaceType = EGL_GL_TEXTURE_2D_KHR;
#endif
// Make an EGL Image at the same address of the native client buffer
EGLDisplay eglDisplayHandle = eglGetDisplay(EGL_DEFAULT_DISPLAY);
// Create an EGL Image with these attributes
EGLint eglImageAttributes[] = {EGL_WIDTH, ImageWidth, EGL_HEIGHT, ImageHeight, EGL_MATCH_FORMAT_KHR, EGL_FORMAT_RGB_565_KHR, EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE};
EGLImageKHR eglImageHandle = eglCreateImageKHR(eglDisplayHandle, EGL_NO_CONTEXT, SurfaceType, ClientBufferAddress, eglImageAttributes);
// Create a texture and bind it to GL_TEXTURE_2D
EGLint TextureHandle;
glGenTextures(1, &TextureHandle);
glBindTexture(GL_TEXTURE_2D, TextureHandle);
// Attach the EGL Image to the same texture
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImageHandle);
// Get the address and pitch (stride) of the new texture image
eglQuerySurface(eglDisplayHandle, eglImageHandle, EGL_BITMAP_POINTER_KHR, &BitmapAddress);
eglQuerySurface(eglDisplayHandle, eglImageHandle, EGL_BITMAP_PITCH_KHR, &BitmapPitch);
eglQuerySurface(eglDisplayHandle, eglImageHandle, EGL_BITMAP_ORIGIN_KHR, &BitmapOrigin);
// Check for errors after each call to the EGL
if (eglGetError() != EGL_SUCCESS)
break;
// Delete the EGL Image to free the memory when done
eglDestroyImageKHR(eglDisplayHandle, eglImageHandle);
Conclusion
One of the best ways to update an application with a tired 2D GUI is to exploit the accelerated OpenGL* ES features of Android on the Intel® Atom™ platform. Even though 2D and 3D are really different paradigms, the combination of the two is powerful. The trick is to make them cooperate by either sharing the frame buffer or sharing images through textures and the EGL* image extension. Use of this extension with OpenGL* ES is essential for achieving a good user experience, because the conventional method of loading textures with
glTexImage2D() is too slow for dynamic images. Fortunately, this extension is well supported on most embedded platforms today, including Android.
For More Information
- Khronos standard document on the EGL* image extension, OES_EGL_image:
- Khronos standard document on EGL* image, KHR_image_base:
- Mark Callo, Khronos API Implementers Guide:
- James Willcox, “Using Direct Textures on Android”
- Android Developer | https://software.intel.com/es-es/articles/using-opengl-es-to-accelerate-apps-with-legacy-2d-guis | CC-MAIN-2015-32 | refinedweb | 3,960 | 50.06 |
(2013-06-12 14:40)matbor Wrote: * I can't run any python script from the commandline (via SSH), but I think that is because I use ATV2's (correct me if i'm wrong here)
chmod +x xbmc_play.py
./xbmc_play.py
#!/usr/bin/env python
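Putting those pieces together, a minimal callback script could look like this (a sketch — the file name and the argv handling are illustrative, not taken from the addon):

```python
#!/usr/bin/env python
# xbmc_play.py -- hypothetical example. The shebang on the first line plus
# `chmod +x` are what let subprocess.Popen() exec the file directly.
import sys

def describe(argv):
    # the addon passes the playing type (e.g. "video" or "audio") as argv[1]
    return "player started, type: " + (argv[1] if len(argv) > 1 else "unknown")

if __name__ == "__main__":
    print(describe(sys.argv))
```

Without the shebang the kernel has no idea how to execute the file, which is exactly what the "OSError: [Errno 8] Exec format error" traceback further down this thread complains about.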
(2013-06-13 03:52)matbor Wrote: Yeah, no python installed as it doesn't work from the cli, but how come all my scripts work from within XBMC then? is this something to do with how your plugin works?
Quote:I wonder if changing your code from using;
subprocess.Popen([script_player_starts,self.playing_type()])
to;
xbmc.executebuiltin('XBMC.RunScript([script_player_starts,self.playing_type()])')
might fix it, will have a play tonight.
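As an aside, RunScript() takes the script path and its arguments as comma-separated values inside the parentheses, so the call string has to be built from the variables rather than embedding a Python list inside the quoted string. A sketch (the helper name is mine, not the addon's):

```python
def build_runscript(script_path, playing_type):
    # RunScript wants "XBMC.RunScript(/path/to/script.py,arg1)" --
    # a Python list embedded in the string would be passed through literally
    return 'XBMC.RunScript(%s,%s)' % (script_path, playing_type)

# then, inside the callback:
# xbmc.executebuiltin(build_runscript(script_player_starts, self.playing_type()))
```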
def onPlayBackStarted(self):
    log('player starts')
    global script_player_starts
    if script_player_starts:
        log('Going to execute script abcdefghij= "' + script_player_starts + '"')
        #subprocess.Popen([script_player_starts,self.playing_type()])
        xbmc.executebuiltin('XBMC.RunScript(special://masterprofile/scripts/xbmc_play.py)')
(2013-03-29 19:45)pilluli Wrote: .
Sorry don't really understand. You don't want to use the "screensaver start" callback but another one which is related to the "put display to sleep"??
I don't know if xbmc publishes that callback to addons though... But I'll check anyway...
(2013-06-25 20:19)sIRwa2 Wrote: Great addon! I'm using it together with UsbIrToy to shut down my amp on screensaver, but I'd love it if I could choose for "display to sleep" to invoke a script. Did you manage to look into that yet, pilluli?
(2013-06-26 09:54)ctshh Wrote: Do you think it would be possible to add "when entering a favorite" to the script; eg. call the script when a certain favorite is called?
(2013-06-26 20:40)sIRwa2 Wrote: thanks for your answer,
Interesting idea. I could put a timer in my py script, so it runs irtoy after, say, 10 minutes, but then I need the possibility to cancel the timer on screensaver off...
Can this be done? Like firing a second script to cancel the first? I'm not a programmer.
#!/bin/sh
# screensaver stops: leave a marker so any pending timed action is cancelled
touch /tmp/screensaver_off
#!/bin/sh
# screensaver starts: wait 10 minutes, then act only if the screensaver
# was not deactivated (marker not re-created) in the meantime
rm -f /tmp/screensaver_off
sleep 10m
if [ ! -e /tmp/screensaver_off ]; then
    : # Do whatever you need here (e.g. send the amp power-off IR code)
fi
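The same marker-file trick translates directly to Python if you'd rather keep the timer in a py script. This is an illustrative sketch (function names are mine, not part of the addon):

```python
#!/usr/bin/env python
# Illustrative Python version of the marker-file timer.
import os
import time

FLAG = "/tmp/screensaver_off"

def arm(flag_path=FLAG):
    # call from the "screensaver starts" script: clear any stale cancel marker
    if os.path.exists(flag_path):
        os.remove(flag_path)

def cancel(flag_path=FLAG):
    # call from the "screensaver stops" script: leave the cancel marker behind
    open(flag_path, "w").close()

def fire_if_not_cancelled(action, delay_seconds, flag_path=FLAG):
    # wait, then run the action only if nobody cancelled in the meantime
    time.sleep(delay_seconds)
    if not os.path.exists(flag_path):
        action()
```

E.g. the screensaver-start script would run `arm()` followed by `fire_if_not_cancelled(power_off_amp, 600)`, and the screensaver-off script just calls `cancel()`.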
(2013-06-27 03:40)matbor Wrote: Thanks for the script, have got my 4x XBMC's publishing their status using MQTT to an MQTT broker, screenshot here... more to come....
21:12:13 T:2774530880 NOTICE: Previous line repeats 2 times.
21:12:13 T:2774530880 ERROR: EXCEPTION Thrown (PythonToCppException) : -->Python callback/script returned the following error<--
- NOTE: IGNORING THIS CAN LEAD TO MEMORY LEAKS!
Error Type: <type 'exceptions.OSError'>
Error Contents: [Errno 8] Exec format error
Traceback (most recent call last):
File "/home/xbmc/.xbmc/addons/service.xbmc.callbacks-0.2/default.py", line 108, in onScreensaverActivated
subprocess.Popen([script_screensaver_starts,self.get_player_status()])
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1259, in _execute_child
raise child_exception
OSError: [Errno 8] Exec format error
-->End of Python script error report<--
-rwxrwxr-x 1 xbmc xbmc 131 Jul 1 22:10 power.sh and
-rwxrwxr-x 1 xbmc xbmc 33 Jul 1 22:01 delete_screen_tmp_file.sh | http://forum.xbmc.org/showthread.php?tid=151011&page=3 | CC-MAIN-2014-41 | refinedweb | 528 | 58.99 |
Details
- Type:
Bug
- Status: Resolved
- Priority:
Major
- Resolution: Won't Fix
- Affects Version/s: None
- Fix Version/s: None
- Component/s: Database Core
- Labels:None
- Environment:
All
- Skill Level:Regular Contributors Level (Easy to Medium)
Description.
Activity
Pretty sure we have the compactor fix like such:
Completely untested but it sure looks like a fix.
That looks like a fix to me.
I wonder if we should make some macros for element/2 in places where we have tuples like this. Doing it as a macro would mean that it'd still expand to something suitable for a guard but it scares me how easy it is to typo an index and screw everything up.
This is by design. Deleted documents are supposed be able to contain meta information about who deleted them, etc, because they replicate. The problem might be a documentation issue, as clients need to make sure the document body is empty when bulk deleting.
Heh, if by documentation issue you mean eleven of twelve committers had no idea that it was intentional.
Though it brings up the question, is this information accessible over the HTTP API in any manner? I can't think of anything but I can't say it's something I've checked.
This is proper behavior. (Or – what Damien said...)
The intent is to allow users the option to save audit data like deleted_at and deleted_by when they set _deleted=true.
To read these documents you have to fetch them with their rev num.
Now, I haven't figured out how to find their rev num except via the changes feed, and THAT might be an improvement we should make.
This is fixed on trunk but needs porting to 1.1.x branch once 1.1 has shipped.
The http layer now cleans out #doc.body if the doc is deleted. Additionally, the compactor cleans out the body too for all deleted documents (thanks to davisp for that piece). The compactor piece will need minor modification to apply cleanly to 1.1.x
Well, that teaches me not to commit without reading my email.
This is horrible behavior, imo. Since no one knew that this information was preserved other than Damien and Chris isn't it safe to assume no one is storing tombstone information deliberately? Instead, most people are surprised that whatever they post along with their deletion is preserved forever (but forever invisible).
I think the default should be to clean out, as trunk does, perhaps we add a flag to bypass it for the case Damien mentions?
I remember when this intentional behaviour was added, and it's fine for me.
well even if it' sintentional, would be good to document it somewhere so we make sure library writers remove the doc content before adding _deleted: true
ALos agree, doc content should be at least removed at doc compaction.
To add another client issue about this behaviour, if people want to make sure the doc content is emptied on bulk doc it will imply we remove all properties except _id, and _rev or create a new new empty doc before sending this doc, which would slow down the process a lot.
Hm, I don't recall a JIRA ticket related to that change in July 2010; did it just get changed without a ticket or discussion, or am I just forgetful?
That commit looks like it's just the ability to see the body, not the enabling part. Unless I'm mistaken, this feature has always existed; just no one has used it.
I see the benefits of having such a capability but seeing as there's such a lack of awareness even by people that have worked with CouchDB internals, it seems like we need to have a behavior change so that it becomes a bit more obvious why things work the way they are.
The simplest change that would make this fairly obvious is to just return the doc body in the 404 response to a GET request. So instead of {"not_found": "deleted"}
we would return (usually) {"_id": "foo", "_rev": "1-234223424", "_deleted": true}
And then if people had extra tombstone info it would just be in the doc body. Though we still haven't addressed whether attachments should be removed or kept.
The second bit of functionality that seems necessary is being able to DELETE a deleted doc somehow. As it is, the only way to get rid of deleted doc bodies is to resurrect them and then DELETE them again. Being able to issue a DELETE against the 404 seems better (though it feels un-HTTP; I can't think of a clause in the RFC that addresses this directly. Also, the RFC does distinguish between unknown resources and resources known to not be available, so maybe there's precedent?)
Anyone have thoughts? Open a new ticket?
This change must not be applied to 1.1.1 or earlier.
From IRC:
<tilgovi> huh. is 1141 actually an issue?
<tilgovi> it's certainly a lil weird
<tilgovi> but like, delete is via DELETE
<davisp> tilgovi: _bulk_docs?
<tilgovi> ohhhhhhhhhh snappppp
I think a proper fix should include detecting this during compaction and removing the contents at that point. | https://issues.apache.org/jira/browse/COUCHDB-1141?focusedCommentId=13025525&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2014-15 | refinedweb | 881 | 70.84 |
Introduction
Assertions were added in Java 1.4 to help create programs that are correct and robust. Assertions are Boolean expressions used to test and validate code, primarily during the testing and development phases. Programmers use them to double-check conditions they believe must always be true.
Declaration of Assertions (syntax)
assert Expression1;
In Java we declare assertions with the help of the assert keyword. In this form, Expression1 is a Boolean expression that is checked when the program is executed. If Expression1 returns true, the assertion holds and the program runs without interruption. If Expression1 returns false, an AssertionError is thrown and the program fails.
assert Expression1 : Expression2;
In this form Expression1 has the same meaning as explained above, and Expression2 supplies a value (typically a String) that you want shown as the error message. This value is passed to the constructor of the AssertionError object, so if Expression1 returns false, Expression2 appears in the resulting error message. The no-argument constructor public AssertionError() is used for the single-expression form. The AssertionError class extends java.lang.Error, which in turn extends java.lang.Throwable.
NOTE: Assertions must be enabled explicitly; they are disabled by default. Using the -ea and -da options of the java command, we can enable or disable assertions.
Example

class MyAssertion {
    static void ErrorCheck(int i) {
        assert i > 0 : "value must be a positive number";
        System.out.println("You entered a valid number = " + i);
    }
    public static void main(String arg[]) {
        ErrorCheck(Integer.parseInt(arg[0]));
    }
}
OUTPUT
Step 1: When -ea is used and the value entered through the command line is -1, the assertion fails and an AssertionError is thrown with the message above.
Step 2: When -ea is used and the value entered through the command line is 5, the program prints the valid-number message.
Step 3: Without -ea, the output is the same for both numbers, -1 and 5: the assertion is ignored.
Example
This example uses the assert keyword in its single-argument form (assert Expression1):
public class MyAssertion1 {
    static int maxmarks = 100;
    static int changes(int mark) {
        maxmarks = maxmarks - mark;
        System.out.println("maxmark:= " + maxmarks);
        return maxmarks;
    }
    public static void main(String args[]) {
        int g;
        for (int i = 0; i < 5; i++) {
            g = changes(15);
            assert maxmarks >= 70.00;
        }
    }
}
Without -ea the loop prints all five values of maxmarks (85, 70, 55, 40, 25); with -ea an AssertionError is thrown on the third iteration, once maxmarks drops below 70.
Resources
Use of Assertions in Java
Accessing Private Fields and Private Methods (Hacking a Class) in Java | http://www.c-sharpcorner.com/UploadFile/433c33/use-of-assertions-in-java/ | crawl-003 | refinedweb | 414 | 53.92 |
RadListView: LoadOnDemandBehavior
If your list contains a lot of items, it may not be necessary to load of them at the start. Here the LoadOnDemandBehavior can be useful to allow the end user to request the loading of more items if needed. There are two modes for this behavior: manual and automatic. The manual is represented by a button at the end of the list that can be clicked to load more items. The automatic mode, as the name suggests, automatically requests the loading of more items when the list is scrolled to a position close to the end of the currently loaded items.
Getting Started
If you have read the Getting Started page, you already have a project with RadListView which is populated with items of type City. In the Behaviors Overview we introduced the behaviors and now we will go into more details about the LoadOnDemandBehavior. Here's how to add the LoadOnDemandBehavior to your list view instance:
LoadOnDemandBehavior loadOnDemandBehavior = new LoadOnDemandBehavior (); listView.AddBehavior (loadOnDemandBehavior);
This will only show a button at the end of the list, but in order to actually load items, you need to add a LoadOnDemandListener.
LoadOnDemandListener
The LoadOnDemandListener should be used to get notification that loading is requested. Here's one simple implementation:
public class LoadListener : Java.Lang.Object, LoadOnDemandBehavior.ILoadOnDemandListener { private ListViewAdapter listViewAdapter; public LoadListener(ListViewAdapter adapter) { listViewAdapter = adapter; } public void OnLoadStarted () { City city = new City("Naples", "Italy"); listViewAdapter.Add(city); listViewAdapter.NotifyLoadingFinished (); } public void OnLoadFinished () { } }
That will result in addition of a new city at the end of the list every time the load more button is pressed. Note that you need to manually notify that you have finished the loading process. This can be done with either ListViewAdapter's notifyLoadingFinished() method or with LoadOnDemandBehavior's endLoad(). The result is the same, so you can use the one that is more suitable for you. Now what's left to do is to add the listener to the behavior:
LoadListener loadListner = new LoadListener (cityAdapter); loadOnDemandBehavior.AddListener (loadListner);
Automatic mode
The LoadOnDemandBehavior also provides an automatic mode that allow loading of items in a way in which the end user may not even realize that all items were not loaded at the start. Here's how we can switch the mode to automatic:
loadOnDemandBehavior.Mode = LoadOnDemandBehavior.LoadOnDemandMode.Automatic;
What happens now is that whenever the user is near the end of the list, new items are requested. What exactly counts as 'near the end' is defined by the max remaining items. You can get the current value with getMaxRemainingItems() and set a new value with setMaxRemainingItems(int). The default value is 10. What this means for the behavior is that when the remaining items (the items that are not yet visible to the user) become fewer than that number, new items will be requested.
Let's add some numbers to make things clearer. Say the max remaining items are set to 3 and 10 items are loaded at the beginning. When the user scrolls the list and reaches the position of the 7th item (which means there are 3 more items for him to see from the currently loaded list), the request is issued and more items are loaded.
In this mode, again, you are expected to notify either the adapter or the behavior that you have finished the loading so that the indicator's state can be correctly updated.
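The trigger condition described above can be sketched as a tiny predicate; `shouldRequestMore` is a made-up name, not part of the Telerik API, and since the worked example fires when exactly maxRemaining items are left, the comparison here is <= rather than a strict "less than":

```javascript
// Sketch of the "max remaining items" trigger used by the automatic mode.
// remaining = items already loaded but not yet seen by the user.
function shouldRequestMore(lastVisibleIndex, loadedCount, maxRemaining) {
  const remaining = loadedCount - (lastVisibleIndex + 1);
  return remaining <= maxRemaining;
}
```

With 10 items loaded and a max of 3, reaching the 7th item (index 6) leaves 3 unseen items and triggers the request, matching the worked example.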
Customization
The LoadOnDemandBehavior has one more constructor that allows you to use your own custom Views for the indicators in both modes: LoadOnDemandBehavior(View manualDemandView, View automaticDemandView). If you add your custom views, you will need to handle changes in their state manually (for example, you can disable the button while the loading process is in progress). In that case you will need to use the startLoad() method to initiate the load (for example, when the button is clicked). | https://docs.telerik.com/devtools/xamarin/nativecontrols/android/listview/behaviors/listview-behaviors-loadondemand.html | CC-MAIN-2019-18 | refinedweb | 658 | 50.67
Weather Station with Scene Activator!!!
- BulldogLowell Contest Winner last edited by BulldogLowell
you will have a lot of fun building this one.
ADDED: Check out the youtube video.
This Weather Station will display your indoor and outdoor environmental conditions but features some hot capabilities...
Send a Short Message to the LCD from Vera by adding another variable and populating it with whatever you like on Vera.
Using it in the bedroom and you want to turn the backlight off to get a good night sleep? It has that. Just use scenes or PLEG to toggle a variable on the hygrometer device to turn it on or off at specified times.
Want to trigger a scene or initiate some other action from PLEG? It has that. Just use PLEG or Luup to monitor a variable on the hygrometer device to do cool things like:
Put your house into Night mode
Turn off all your lights
Open your window coverings
Turn on your Stereo, TV, or even a tea kettle
Close your Garage door...
in fact... this little button can be set up to do whatever you can setup in Vera!!!
I have attached the sketch, so you are just a few tiny components away from having this all on your nightstand or on your desk.
1: You will need to hardware-debounce your switch, but this can be done with two extra components available for just a few shekels at your local electronics supplier; diagram attached.
2: Arduino Nano, Uno or Pro Mini.
3: Hygrometer/thermometer sensor
4: LCD display I got mine on ebay
5: A few wires
6: A Button
7: Moisture/Temperature sensor like this
If you plan on building a simple hygrometer/thermometer... build this one instead, and make it really cool and powerful.
Have Fun!
*6-May EDIT
I was still having a couple of small issues with the debounce, so I added it in the sketch. I took the humidity and temp readings and put them into a function, and added the call to setup(). I hadn't noticed that getting T&H was delayed quite a bit.
WeatherDisplayPBSceneController.ino
Great to see you got it all to work.
A few comments came to mind.
Is the hardware debounce really necessary? Did you try any software solutions?
Is the sleep mode really useful for this sketch? I'm guessing you do not have any plans to run this one on battery so I see no reason to put it to sleep to save power.
thanks for the feedback.
I found that it needed debouncing. It was getting into a funny state once in a while where it would lock up. It was so easy for me to add the capacitor on my breadboard, so I did it. A software debounce with a timeout on the button press would work too.
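That software alternative (a timeout on the button press, like the 200 ms guard that later appears in the sketch's interrupt handler) can be outlined language-neutrally. This JavaScript sketch is illustrative only:

```javascript
// Sketch of a software debounce: accept a press only if more than
// debounceMs have passed since the last accepted press.
function makeDebouncer(debounceMs) {
  let lastAccepted = -Infinity;
  return function press(nowMs) {
    if (nowMs - lastAccepted > debounceMs) {
      lastAccepted = nowMs;
      return true;  // real press
    }
    return false;   // contact bounce, ignored
  };
}

const press = makeDebouncer(200);
const first = press(0);    // accepted
const bounce = press(50);  // ignored, within 200 ms of the last accepted press
const second = press(300); // accepted again
```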
I started with the sketch that had the radio sleep, never sure if I was going to make it battery version or not, so it was left in. It can be yanked out, do you want to try that?
I really wanted to do two things, get data in, and push data out. I am working on a project that is reliant on both of those so building this was a major step for me.
I thought it had a nice practical use given the display and the scene trigger. Plus it is so easy to build, I thought i could get some more people to look at it and help make improvements.
I'm not a software guy, as you can tell from the sketch...
FYI I added the debounce to the sketch. Even with the cap I had a few lockups. I noticed I could get multiples and needed it to go away.
I thought I would post a few photos of the finished device. While I like all things tech, my wife does not. So you can see, I have added to the list of making 'stuff' have a purpose.
Hey @BulldogLowell, I'm trying to make use of your sketch for my own project but I ran into some issues.
I had trouble finding the right library. I used the "LiquidCrystal_V1.2.1.zip" from here ->
But I got some errors when compiling the sketch. Then I found the library link under the eBay item you referred to, and now it seems to work.
You could mention this in the first post if possible, so others don't fall into the same ditch.
cheers
Now I ran into the next issue. How do I know which pin the SDA and SCL should be connected to??
Is it this line that defines it?
LiquidCrystal_I2C lcd(0x27,20,4);
So what should it be if I'm using a Pro Mini?
I think I found it now... For the Pro Mini it is like this:
"I2C: A4 (SDA) and A5 (SCL). Support I2C (TWI) communication using the Wire library."
I was expecting it to be declared somehow in the sketch so I got confused... Well I learned something once again...
@korttoma hey, glad you have it figured out.
I have been very busy and haven't been able to check in here for a while:(
all working now?
Yepp, all working now. Just need to find some time to make my device
- korttoma Hero Member last edited by korttoma
Hi @BulldogLowell (or anybody else), I'm hoping you could help me with one last thing to finish this project of mine.
The issue I'm facing is that I do not know how to send a "string" from Vera using luup. Or basically collecting a string from a variable of one device and sending it to a MySensors node. Here is what I tried:
local text = luup.variable_get("urn:empuk-net:serviceId:SimpleAlarm1","StatusLabel", 319) luup.call_action("urn:upnp-arduino-cc:serviceId:arduino1", "SendCommand", {radioId="11;3", variableId="VAR_3", value=text}, 276)
If I give the variable "text" a numeric value it is sent and received without problems but how the h**l can I send a string??
Grateful for any help
- korttoma Hero Member last edited by korttoma
One step closer:
local text = "Test" luup.call_action("urn:upnp-arduino-cc:serviceId:arduino1", "SendCommand", {radioId="11;3", variableId="VAR_3", value=text}, 276)
Instantly prints the text "Test" on my display
Seems like my issue is that I can not get the text from the SimpleAlarm Variable "StatusLable".
Any suggestions?
Hey korttoma,
Did you finish this project?
I have a bit of time and thought I would convert my weather station up to the new mySensors version... was wondering if you could share your work...
Hi Jim, yeah I finished it. There are a few posts about it over in hek's scene controller thread ->
Tomas,
My business has kept me away from this, I guess I didn't notice how far along you took it... brilliant job!
I'll check in when it's done.
Thanks mate.
jim
Tomas,
Again, thanks for your example. I have to say, it helped a lot.
I am however struggling with radio communication it seems and I am wondering at this point what I am doing wrong, having spent a couple hours trying to find my errors.
Would you mind taking a look for anything you may see:
#define STATES 7 #define HUMIDITY_SENSOR_DIGITAL_PIN 4 #define DEBUG #ifdef DEBUG #define DEBUG_SERIAL(x) Serial.begin(x) #define DEBUG_PRINT(x) Serial.print(x) #define DEBUG_PRINTLN(x) Serial.println(x) #else #define DEBUG_SERIAL(x) #define DEBUG_PRINT(x) #define DEBUG_PRINTLN(x) #endif #include <Wire.h> #include <Time.h> #include <SPI.h> #include <MySensor.h> #include <LiquidCrystal_I2C.h> #include <DHT.h> // #define RADIO_ID 11 #define CHILD_ID_SCENE 3 // LiquidCrystal_I2C lcd(0x27,16,2); // set the LCD address to 0x20 for a 16 chars and 2 line display // void (*lcdDisplay[STATES])(); // byte state = 0; byte lastState; byte timeCounter = 0; unsigned long lastTime; unsigned long refreshInterval = 3000UL; unsigned long lastClockSet; boolean isMessage = false; float insideTemperature; float humidity; int OutdoorTemp = -99; int OutdoorHumidity = -99; int todayHigh = -99; int todayLow = -99; String conditions = "Not yet Reported"; String FreeMessage = "No Message recieved"; //***** int ledStatus = 1;// to toggle LCD backlight led //int ledLevel = 254; boolean buttonPushed = false; // MySensor gw; DHT dht; // MyMessage msgOn(CHILD_ID_SCENE, V_SCENE_ON); MyMessage msgOff(CHILD_ID_SCENE, V_SCENE_OFF); MyMessage msgVAR1(CHILD_ID_SCENE, V_VAR1); MyMessage msgVAR2(CHILD_ID_SCENE, V_VAR2); MyMessage msgVAR3(CHILD_ID_SCENE, V_VAR3); MyMessage msgVAR4(CHILD_ID_SCENE, V_VAR4); MyMessage msgVAR5(CHILD_ID_SCENE, V_VAR5); // void setup() { DEBUG_SERIAL(115200); DEBUG_PRINTLN(F("Serial started")); attachInterrupt(1, PushButton, CHANGE); // lcdDisplay[0] = lcdDisplay0; lcdDisplay[1] = lcdDisplay1; lcdDisplay[2] = lcdDisplay2; lcdDisplay[3] = lcdDisplay3; lcdDisplay[4] = lcdDisplay4; lcdDisplay[5] = lcdDisplay5; lcdDisplay[6] = lcdDisplay6; // dht.setup(HUMIDITY_SENSOR_DIGITAL_PIN); gw.begin(TempStatus, RADIO_ID); gw.sendSketchInfo("WeatherClock", "1.0"); gw.present(CHILD_ID_SCENE, S_SCENE_CONTROLLER); int clockTimer; while(timeStatus() != timeSet && clockTimer < 10) { 
gw.requestTime(receiveTime); Serial.println("getting Time"); delay(500); clockTimer++; } // lcd.init(); lcd.clear(); lcd.backlight(); lcd.setCursor(0, 0); lcd.print("Hello World!!!"); delay(2000); lcd.clear(); lastTime = millis(); } void loop() { gw.process(); if (millis() - lastClockSet >= 60000UL) { gw.requestTime(receiveTime); lastClockSet = millis(); } if (millis() - lastTime >= refreshInterval) { state++; if (state > STATES - 1) state = 0; DEBUG_PRINTLN(F("State:")); DEBUG_PRINTLN(state); lastTime += refreshInterval; getTempHumidity(); } if (state != lastState) { fastClear(); lcdDisplay[state](); } lastState = state; if (buttonPushed) { activateScene(); } } void fastClear() { lcd.setCursor(0,0); lcd.print(" "); lcd.setCursor(0,1); lcd.print(" "); } // void lcdDisplay0() { lcd.setCursor(0,0); lcd.print(F("Time: ")); if (hourFormat12() < 10) lcd.print("0"); lcd.print(hourFormat12()); lcd.print(":"); if (minute() < 10) lcd.print("0"); lcd.print(minute()); DEBUG_PRINT(F("Time:")); DEBUG_PRINTLN(hourFormat12()); lcd.setCursor(0,1); lcd.print(F("Date: ")); if (month() < 10) lcd.print("0"); lcd.print(month()); lcd.print("/"); if (day() < 10) lcd.print("0"); lcd.print(day()); lcd.print("/"); lcd.print(year()); DEBUG_PRINTLN(F("Date: 01.11.2014")); } void lcdDisplay1() { lcd.setCursor(0,0); lcd.print(F("Indoor Temp:")); lcd.print(int(insideTemperature)); lcd.print(char(223)); DEBUG_PRINT(F("Indoor Temp:")); DEBUG_PRINT(int(insideTemperature)); DEBUG_PRINTLN(F("F")); lcd.setCursor(0,1); lcd.print(" Humidity:"); lcd.print(int(humidity)); lcd.print(F("%")); DEBUG_PRINT(" Humidity:"); DEBUG_PRINT(int(humidity)); DEBUG_PRINTLN(F("F")); } void lcdDisplay2() { lcd.setCursor(0,0); lcd.print("Outdoor Temp:"); lcd.print(OutdoorTemp); lcd.print(char(223)); DEBUG_PRINT(F("Outdoor Temp:")); DEBUG_PRINTLN(OutdoorTemp); lcd.setCursor(0,1); lcd.print(F(" Humidity:")); lcd.print(OutdoorHumidity); lcd.print(F("%")); DEBUG_PRINT(F(" Humidity:")); DEBUG_PRINTLN(OutdoorHumidity); } void 
lcdDisplay3() { lcd.setCursor(0,0); lcd.print(F("Today's HI:")); lcd.print(todayHigh); lcd.print(char(223)); DEBUG_PRINT(F("Today's HIGH")); DEBUG_PRINTLN(todayHigh); lcd.setCursor(0,1); lcd.print(F(" LO:")); lcd.print(todayLow); lcd.print(char(223)); DEBUG_PRINT(F("Today's LOW")); DEBUG_PRINTLN(todayLow); } void lcdDisplay4() { lcd.setCursor(0,0); lcd.print(F("Today's Weather is")); DEBUG_PRINTLN(F("Today's Weather:")); lcd.setCursor(0,1); lcd.print(conditions); DEBUG_PRINTLN(F("EXAMPLE")); } void lcdDisplay5() { if (isMessage) { lcd.setCursor(0,0); lcd.print(F("****Message****")); DEBUG_PRINTLN(F("****Message****")); lcd.setCursor(0,1); lcd.print(F("Custom Message")); DEBUG_PRINTLN(F("Custom Message")); } else { lcd.setCursor(0,0); lcd.print(F("****Message****")); DEBUG_PRINTLN(F("****Message****")); lcd.setCursor(0,1); lcd.print(F("Have a Nice Day")); DEBUG_PRINTLN(F("Have a Nice Day")); } } void lcdDisplay6() { lcd.setCursor(0,0); lcd.print(F(" Weather & Time ")); DEBUG_PRINTLN(F(" Weather & Time ")); lcd.setCursor(0,1); lcd.print(F("by BulldogLowell")); DEBUG_PRINTLN(F("by BulldogLowell")); } // void getTempHumidity() { insideTemperature = dht.toFahrenheit(dht.getTemperature()); if (isnan(insideTemperature)) { DEBUG_PRINTLN(F("Failed reading temperature from DHT")); } humidity = dht.getHumidity(); if (isnan(humidity)) { DEBUG_PRINTLN(F("Failed reading humidity from DHT")); } } // void receiveTime(unsigned long time) { DEBUG_PRINTLN(F("Time value received: ")); DEBUG_PRINTLN(time); setTime(time); } // void PushButton() { static unsigned long last_interrupt_time = 0; unsigned long interrupt_time = millis(); if (interrupt_time - last_interrupt_time > 200) { buttonPushed = true; } last_interrupt_time = interrupt_time; } // void activateScene() { DEBUG_PRINTLN(F("ButtonPushed")); fastClear(); for (byte i = 0; i < 10; i++) { lcd.noBacklight(); delay(50); lcd.backlight(); delay(50); } lcd.setCursor(0,0); lcd.print(F(" A/C Boost Mode ")); lcd.setCursor(0,1); 
lcd.print(F("**** ACTIVE ****")); delay(2000); buttonPushed = false; lastTime = millis(); //Reset the timer to even out display interval } // void TempStatus(const MyMessage &message) { if (message.type == V_VAR1) { OutdoorTemp = atoi(message.data); DEBUG_PRINTLN(F("OutdoorTemp recieved:")); DEBUG_PRINTLN(OutdoorTemp); } if (message.type == V_VAR2) { OutdoorHumidity = atoi(message.data); DEBUG_PRINT(F("OutdoorHumidity recieved:")); DEBUG_PRINTLN(OutdoorHumidity); } if (message.type == V_VAR3) { todayLow = atoi(message.data); DEBUG_PRINT(F("Today's LOW:")); DEBUG_PRINTLN(todayLow); } if (message.type == V_VAR4) { todayHigh = atoi(message.data); DEBUG_PRINT(F("Today's HIGH:")); DEBUG_PRINTLN(todayHigh); } if (message.type == V_VAR5) { conditions = String(message.data); } }
my error looks like this:
Serial started sensor started, id 11 send: 11-11-0-0 s=255,c=0,t=17,pt=0,l=3,st=fail:1.4 send: 11-11-0-0 s=255,c=3,t=6,pt=1,l=1,st=fail:0 send: 11-11-0-0 s=255,c=3,t=11,pt=0,l=12,st=fail:WeatherClock send: 11-11-0-0 s=255,c=3,t=12,pt=0,l=3,st=fail:1.0 send: 11-11-0-0 s=3,c=0,t=25,pt=0,l=3,st=fail:1.4 send: 11-11-0-0 s=255,c=3,t=1,pt=0,l=3,st=fail:1.4 getting Time send: 11-11-0-0 s=255,c=3,t=1,pt=0,l=3,st=fail:1.4 send: 11-11-255-255 s=255,c=3,t=7 State: 1 Indoor Temp:71F Humidity:46F State: 2 Outdoor Temp:-99 Humidity:-99 State: 3 Today's HIGH-99 Today's LOW-99 State:
My Lua code server-side is this:
I added delays in the Lua code and that seemed to help a lot :) with a luup.sleep(750) between the SendCommand calls.
still struggling with this, however:
if (message.type == V_VAR5) { conditions = String(message.data); DEBUG_PRINT(F("Received today's Conditions:")); DEBUG_PRINTLN(conditions); }
not returning the present conditions....
- BulldogLowell Contest Winner last edited by BulldogLowell
also noticed that variable5 (V_VAR5) does not show up here (in device 85):
336) | https://forum.mysensors.org/topic/93/weather-station-with-scene-activator/2 | CC-MAIN-2018-09 | refinedweb | 2,316 | 51.04 |
CFD Online Discussion Forums - FLUENT - Parallel UDF problem
Lindsay
November 7, 2008 04:31
Parallel UDF problem
Hi,
Im having a couple of problems with running a UDF in parallel. When I compile the UDF on 2 processors, the library's are built and loaded but the simulation does not start. I receive a couple of warning signs during the build.
-In the host build:
modified_drag.c: In function `modified_drag_EMMS':
modified_drag.c:5: warning: 'k_g_s' might be used uninitialized in this function
-In the node build:
modified_drag.c: In function `modified_drag_EMMS':
modified_drag.c:80: warning: control reaches end of non-void function
modified_drag.c:10: warning: 'w' might be used uninitialized in this function
modified_drag.c:10: warning: 'cd' might be used uninitialized in this function
Firstly, I have initialised 'k_g_s', 'w' and 'cd' which confused me a bit and secondly I thought the 'warning: control reaches end of non-void function' might be why the simulation seems to stop before producing the first line of iteration.
The UDF is given as follows:
#include "udf.h"
DEFINE_EXCHANGE_PROPERTY(modified_drag_EMMS,cell,m ix_thread,s_col,f_col) { real k_g_s; #if !RP_HOST
Thread *thread_g, *thread_s; real x_vel_g, x_vel_s, y_vel_g, y_vel_s, abs_v, slip_x, slip_y,
rho_g, mu_g, Re, vf_g, vf_s, dp, w, cd;
/* find the threads for the gas (primary) */ /* and solids (secondary phases) */
thread_g = THREAD_SUB_THREAD(mix_thread, s_col); /* gas phase */ thread_s = THREAD_SUB_THREAD(mix_thread, f_col); /* solid phase*/
/* find phase velocities and properties*/
x_vel_g = C_U(cell, thread_g); y_vel_g = C_V(cell, thread_g); x_vel_s = C_U(cell, thread_s); y_vel_s = C_V(cell, thread_s); slip_x = x_vel_g - x_vel_s; /* velocity slip in the x direction */ slip_y = y_vel_g - y_vel_s; /* velocity slip in the y direction */
rho_g = C_R(cell, thread_g); /* gas density */
mu_g = C_MU_L(cell, thread_g); /* viscosity of gas */ dp = C_PHASE_DIAMETER(cell, thread_s); /* particle diameter */
vf_g = C_VOF(cell, thread_g); /* gas volume fraction */ vf_s = C_VOF(cell, thread_s); /* solid volume fraction */
/* Absolute slip velocity */ abs_v = sqrt(slip_x*slip_x + slip_y*slip_y);
/* Reynold's number */ Re = vf_g*rho_g*abs_v*dp/mu_g;
/* Reynolds conditions */
if (Re < 960)
cd = (24./(vf_g*Re))*(1+0.15*pow(vf_g*Re,0.687));
if (Re > 960)
cd = 0.44;
/* compute drag coefficient for dilute region */
if (0.74 < vf_g <= 0.82)
w = -0.1680+(0.0679/(4*pow(vf_g-0.7463,2)+0.0044));
if (0.82 < vf_g <= 0.97)
w = -0.8601+(0.0823/(4*pow(vf_g-0.7789,2)+0.0040));
if (0.97 < vf_g)
w = -31.8295+32.9895*vf_g;
k_g_s = 0.75*cd*((rho_g*vf_s*abs_v)/dp)*w;
/* drag coefficient for the dense region */
if(vf_g <= 0.74)
k_g_s = 150*((pow(vf_s,2)*mu_g)/(pow(vf_g,2)*pow(dp,2)))+1.75*((vf_s*rho_g*abs_v)/(vf_g*dp));
node_to_host_real_1(k_g_s);
#endif
#if !RP_NODE
return k_g_s;
#endif
}
I would really appreciate some guidance as to why the iterations are not starting, as I am new to all this.
Thanks in advance,
Lindsay
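A side note on those compiler warnings: in C, a chained comparison such as 0.74 < vf_g <= 0.82 parses as (0.74 < vf_g) <= 0.82, i.e. a 0-or-1 result compared against 0.82, not a range test; and when Re is exactly 960, neither branch assigns cd. Both are consistent with the "might be used uninitialized" messages. The dilute-region logic with explicit range guards is sketched here in JavaScript for illustration (diluteW is a made-up name):

```javascript
// Sketch of the dilute-region piecewise w(vf_g) from the UDF, written with
// explicit range guards instead of C-style chained comparisons (which do
// not perform a range test in C).
function diluteW(vf_g) {
  if (vf_g > 0.74 && vf_g <= 0.82)
    return -0.1680 + 0.0679 / (4 * Math.pow(vf_g - 0.7463, 2) + 0.0044);
  if (vf_g > 0.82 && vf_g <= 0.97)
    return -0.8601 + 0.0823 / (4 * Math.pow(vf_g - 0.7789, 2) + 0.0040);
  if (vf_g > 0.97)
    return -31.8295 + 32.9895 * vf_g;
  return null; // dense region, handled by the separate k_g_s branch
}
```

In the C code, the equivalent fix is to write each range as two comparisons joined with &&, and to cover Re == 960 (e.g. by using if/else with <= on one side).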
. | http://www.cfd-online.com/Forums/fluent/49729-parallel-udf-problem-print.html | CC-MAIN-2014-41 | refinedweb | 478 | 50.12 |
When I run this it works just fine, but when I look at the memory consumed, if I pass the input as 8 it has taken 320 KB of memory!
using System;

class Program {
    public static void Main(string[] args) {
        int N = Convert.ToInt32(Console.ReadLine());
        long sum = 1;
        for (int i = 1; i <= N; i++) {
            sum *= i;
        }
        Console.WriteLine("{0}", sum);
    }
}

If you can't reproduce the results (that an input of 8 always consumes 320 KB while values 5, 7, and 10 take only 64 KB), you may forget the above. It simply could be due to some timeouts that caused the debugger to save its current status and create a new, larger debug space.
If an input of 8 always causes significantly more memory to be consumed, that is stranger. Perhaps it depends on how many loop cycles are performed. You could test whether you get the same results for 4, 16, or 32 as well, and post the results.
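For reference when testing with other inputs, the computation itself is just N! accumulated in a loop (here sketched in JavaScript, outside C#):

```javascript
// Same computation as the C# loop: N! accumulated iteratively.
function factorial(n) {
  let sum = 1;
  for (let i = 1; i <= n; i++) sum *= i;
  return sum;
}
```

The loop does a handful of multiplications and allocates nothing per iteration, which is why memory usage should not scale with N for these small inputs.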
Sara | https://www.experts-exchange.com/questions/29076105/Memory-issue-for-a-small-program.html | CC-MAIN-2018-43 | refinedweb | 193 | 76.15 |
Getting Started With Microsoft's New XML Processor
Last week, Microsoft released its first MSXML "Parser Technology Preview Release." This marks a shift for Microsoft, who previously had been notably slow in supporting emergent XML standards. The "technology preview" release cycle is intended to be in "Web time" in order for Microsoft to gain feedback from incremental releases to the development community.
The new release adds much support for current and developing XML standards such as XPath, XSLT, and XLink (subsequent releases will add support for XML Schemas). However, even support for XPath and XSLT, which are stable recommendations, is not complete. Instead, Microsoft is aiming at the parallel development of all facets of the XML processor. Special attention has been paid to speed-related issues, with the emphasis on server-side performance. Features such as style sheet and schema caching support these aims.
The MSXML Technology Preview SDK itself contains documentation for the updated parser, including a list of which XSLT elements are implemented. Additionally, the SDK provides some useful tables of the GUIDs and ProgIDs for the old and new DLL files, allowing developers the flexibility to use both parsers side by side, if desired.
The new processor's XSLT engine is backwards-compatible, so old IE5 "MSXSL" style sheets should still work fine. Significantly, the new XSLT implementation does not yet support named templates, or control over whitespace handling. The XPath support omits the "namespace," "preceding-sibling," and "following-sibling" functions, among others. These omissions make it unlikely that anyone using a more complete XSLT processor, such as XT, will switch to MSXML at the moment.
It should be understood, however, that the new release makes no claim to completeness. That said, it is a step in the right direction from Microsoft. If you currently use MSXML for your XML processing, this new release is going to significantly enhance your applications. These releases are not meant to replace shipping products, but instead to provide a preview for early adopters building prototype applications.
For purposes of applications, the new processor can be installed side-by-side with the old MSXML. However, if you want to use the new facilities from IE5, you will need to replace the old MSXML parser. Here's how:
Download the "msxmlwr.exe" file from Microsoft's web site. (This is the processor only, not the SDK). To install, run the downloaded file. This will create an "xmlinst.exe" tool in the Windows System folder.
You'll need to launch a DOS command line to configure the parser to run automatically in IE5. If you want to replace the old version of the MSXML parser with the new preview version, open a DOS command line and type:
C:\WINDOWS\SYSTEM> xmlinst -u C:\WINDOWS\SYSTEM> regsvr32 msxml2.dll C:\WINDOWS\SYSTEM> xmlinst
Here's a description of these steps (Figure 1 shows these performed in a DOS window):
This step is optional. It will remove old registry entries added by MSXML.DLL and/or MSXML2.DLL. Don't use if you wish to use both old and new side-by-side.
This registers the new DLL "side-by-side" with the original version of the parser, without actually taking over any of the old settings. These new parser features can then be explicitly called using the new ClassIDs and ProgIDs, leaving MSXML.DLL as the default parser for IE5 and any other MSXML-enabled applications.
This overrides all the current registry entries so that IE5 and other applications that use MSXML, etc will use the new MSXML2.DLL. Don't use if you wish IE5 to use the old processor.
In addition to the processor and SDK, Microsoft has created several other tools for use with the new processor.
The Microsoft XSL ISAPI Extension enables several features when used with an IIS server: the automatic execution of XSL style sheets on the server, choosing alternate style sheets based on browser type, style-sheet caching for improved server performance, the capability to specify output encodings, and customizable error messages. This release also includes full source code.
Microsoft's XSL to XSLT converter converts from "MSXSL" to XSLT. It uses an XSLT style sheet to perform a number of changes on "MSXSL-compliant" style sheets: changing the XSL namespace value to the correct URI, adding the new required XSLT version attribute, and converting the outdated XSL pattern syntax implemented from over a year ago to an XPath-compliant syntax.
However, the Microsoft documentation warns that vendor-specific elements may be generated. This means that not all parsers will be able to process the elements, so the resulting style sheets will not necessarily be completely XSLT-compliant.
A Microsoft newsgroup has been created for early adopters: microsoft.public.xml.msxml-webrelease. Comments can also be sent to: xmlfeedback@microsoft.com.
XML.com Copyright © 1998-2006 O'Reilly Media, Inc.
Add push notifications to your Windows app.
This topic shows you how to use Azure Mobile Services with a JavaScript backend to send push notifications to a universal Windows app. In this tutorial you enable push notifications using Azure Notification Hubs in a universal Windows app project. When complete, your mobile service will send a push notification from the JavaScript backend to all registered Windows Store and Windows Phone Store apps each time a record is inserted in the TodoList table. The notification hub that you create is free with your mobile service, can be managed independent of the mobile service, and can be used by other applications and services.
Note:
This topic shows you how to use the tooling in Visual Studio 2013 with Update 3 to add support for push notifications from Mobile Services to a universal Windows app. The same steps can be used to add push notifications from Mobile Services to a Windows Store or Windows Phone Store 8.1 app. To add push notifications to a Windows Phone 8 or Windows Phone Silverlight 8.1 app, see this version of Get started with push notifications in Mobile Services.
This tutorial walks you through these basic steps to enable push notifications:
- Register your app for push notifications
- Update the service to send push notifications
- Test push notifications in your app
To complete this tutorial, you need the following:
- An active Microsoft Store account.
- Visual Studio 2013 Express for Windows with Update 3, or a later version
Register your app for push notifications
The following steps register your app with the Windows Store, configure your mobile service to enable push notifications, and add code to your app to register a device channel with your notification hub. Visual Studio 2013 connects to Azure and to the Windows Store by using the credentials that you provide.
In Visual Studio 2013, open Solution Explorer, right-click the Windows Store app project, click Add then Push Notification....
This starts the Add Push Notification Wizard.
Click Next, sign in to your Windows Store account, then supply a name in Reserve a new name and click Reserve.
This creates a new app registration.
Click the new registration in the App Name list, then click Next.
In the Select a service page, click the name of your mobile service, then click Next and Finish.
The notification hub used by your mobile service is updated with the Windows Notification Services (WNS) registration. You can now use Azure Notification Hubs to send notifications from Mobile Services to your app by using WNS.
Note:
This tutorial demonstrates sending notifications from a mobile service backend. You can use the same notification hub registration to send notifications from any backend service. For more information, see Notification Hubs Overview.
When you complete the wizard, a new Push setup is almost complete page is opened in Visual Studio. This page details an alternate method to configure your mobile service project to send notifications that is different from this tutorial.
The code that is added to your universal Windows app solution by the Add Push Notification wizard is platform-specific. Later in this section, you will remove this redundancy by sharing the Mobile Services client code, which makes the universal app easier to maintain.
6. Browse to the \Services\MobileServices\your_service_name project folder, open the generated push.register.cs code file, and inspect the UploadChannel method that registers the device's channel URL with the notification hub.
7. Open the shared App.xaml.cs code file and notice that a call to the new UploadChannel method was added in the OnLaunched event handler. This makes sure that registration of the device is attempted whenever the app is launched.
8. Repeat the previous steps to add push notifications to the Windows Phone Store app project, then in the shared App.xaml.cs file, remove the extra UploadChannel call and the remaining
#if...#endif conditional wrapper. Both projects can now share a single call to UploadChannel.
Note that you can also simplify the generated code by unifying the
#if...#endif wrapped MobileServiceClient definitions into a single unwrapped definition used by both versions of the app.
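For orientation, the UploadChannel method generated by the wizard typically looks something like the following sketch. This is hedged: the App.MobileService property, the PushRegistration class name, and the error handling here are assumptions for illustration, not a verbatim copy of the generated file.

```csharp
using System;
using Microsoft.WindowsAzure.MobileServices;
using Windows.Networking.PushNotifications;

internal static class PushRegistration
{
    public async static void UploadChannel()
    {
        // Ask WNS for this device's channel URI...
        var channel = await PushNotificationChannelManager
            .CreatePushNotificationChannelForApplicationAsync();
        try
        {
            // ...then register it with the notification hub through
            // the mobile service client.
            await App.MobileService.GetPush().RegisterNativeAsync(channel.Uri);
        }
        catch (Exception exception)
        {
            System.Diagnostics.Debug.WriteLine(exception.ToString());
        }
    }
}
```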
Now that push notifications are enabled in the app, you must update the mobile service to send push notifications.
Update the service to send push notifications
The following steps update the insert script registered to the TodoItem table. You can implement similar code in any server script or anywhere else in your backend services.
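The script body itself isn't reproduced in this copy of the tutorial; below is a minimal sketch of what such an insert script can look like on the JavaScript backend. The sendToastText04 toast template and the item.text field are assumptions based on the TodoItem table described above.

```javascript
// Hypothetical insert script for the TodoItem table: run the insert,
// respond to the client, then broadcast a WNS toast through the hub.
function insert(item, user, request) {
    request.execute({
        success: function () {
            // Respond to the client first, then push asynchronously.
            request.respond();
            push.wns.sendToastText04(null, {
                text1: item.text
            }, {
                success: function (pushResponse) {
                    console.log('Sent push:', pushResponse);
                },
                error: function (pushResponse) {
                    console.log('Error sending push:', pushResponse);
                }
            });
        }
    });
}
```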
Test push notifications in your app
In Visual Studio, right-click the Windows Store project, click Set as StartUp Project, then press the F5 key to run the Windows Store app.
After the app starts, the device is registered for push notifications.
Stop the Windows Store app and repeat the previous steps to run the Windows Phone Store app.
Next steps.
Learn how to authenticate users of your app with different account types using mobile services.
What are Notification Hubs?
Learn more about how Notification Hubs works to deliver notifications to your apps across all major client platforms.
How to use a .NET client for Azure Mobile Services
Learn more about how to use Mobile Services from C# Windows apps.
On Monday, Oct 13, 2003, at 19:24 America/New_York, Andrew Straw wrote:

> Recently, the maintainer of PyOpenGL has indicated that he's dropping
> Togl support (for Windows?) in the next release due to high
> maintenance requirements and limited usage. Not to discourage you
> from working on Togl, but I therefore think Tk may not be your best
> long-term cross-platform solution for OpenGL + GUI unless someone is
> willing to take over python/togl maintenance in general (and on the
> Mac in specific).

Well forget that then :) I'm not a big fan of Tk anyways.

Here's what I recall seeing from when I looked at it last night, if it helps:

* It doesn't use distutils (ugh)
* There are a whole bunch of paths that need to be changed to get it to work
* You probably don't want to use the -DLOCAL_TK_HEADERS (from memory.. probably wrong, but the name is similar) cflag, since the version they have locally isn't the patched AquaTk version
* Tcl/Tk plugins end up living in /Library/Tcl
* Tcl/Tk plugins are dylibs, not bundles
* There's X11 support, and AGL (OS9) support (enabled with -Dmacintosh). The AGL stuff should work on OS X, but you'd probably have to rewrite a bunch of the code that shoves the context into a window. The X11 stuff is a lost cause, cause _tkinter with MacPython is compiled to link against AquaTk, not an X11 Tk. Though the AquaTk does have a subset of X11 headers inside it, so you may need to -I those when compiling. Not sure whether they're standard or not.

I've attached the Makefile that I have so far (from Togl-1.6, looks like I checked it out from CVS at some point)
Here's the changes I had made to togl.c, just headers at this point:

[crack:PyOpenGL2/src/Togl-1.6] bob% cvs diff -u togl.c
Index: togl.c
===================================================================
RCS file: /cvsroot/pyopengl/PyOpenGL2/src/Togl-1.6/togl.c,v
retrieving revision 1.2
diff -u -r1.2 togl.c
--- togl.c      4 Aug 2003 13:55:01 -0000       1.2
+++ togl.c      13 Oct 2003 23:43:42 -0000
@@ -37,11 +37,8 @@
 /*** Mac headers ***/
 #elif defined(macintosh) && !defined(WIN32) && !defined(X11)
-#include <Gestalt.h>
-#include <Traps.h>
-#include <agl.h>
-#include <tclMacCommonPch.h>
-
+#include <Carbon/Carbon.h>
+#include <AGL/AGL.h>
 #else /* make sure only one platform defined */
 #error Unsupported platform, or confused platform defines...
 #endif
@@ -60,6 +57,9 @@
 #endif

 #ifdef WIN32
+# include <tkPlatDecls.h>
+#endif
+#ifdef macintosh
 # include <tkPlatDecls.h>
 #endif

Have fun! :)
-bob
Bugs item #1824894, was opened at 2007-11-02 17:18
Message generated for change (Comment added) made by scantor
Submitted by: Scott Cantor (scantor)
Assigned to: Daniel Stenberg (bagder)
Summary: Recent addition of ws2tcpip.h to curl.h breaks C++ apps
Initial Comment:
Bit of a mess, I think. This must have to do with the whole socklen_t mess I've run into with the older Windows compiler, but this recent addition of ws2tcpip.h to fix that issue is causing a C++ build problem with the new 7.17.1 release.
That header pulls in the Microsoft header <wspiapi.h> at the bottom of the file. That in turn includes an actual C++ template (do NOT ask me why) and because curl.h is wrapping itself in extern "C", that breaks the build with this error:
c:\program files\microsoft visual studio 8\vc\platformsdk\include\wspiapi.h(44) : error C2894: templates cannot be declared to have 'C' linkage
I think the fix for this is to stop wrapping the #includes in curl.h inside the extern "C" block. When you pull in headers, you really should refrain from assuming it's C linkage in case somebody does something like this.
You'll need to carefully wrap your own declarations in the extern "C" block, but pull in headers outside them. This will take quite a bit of testing, I suspect.
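To illustrate the idea with a toy header layout (hypothetical names, not the real curl.h):

```cpp
#include <cstring>   // system headers stay OUTSIDE any extern "C" block

// Toy illustration only: the library's own declarations get C linkage,
// while the #includes are pulled in with normal C++ linkage rules.
#ifdef __cplusplus
extern "C" {
#endif

int toy_curl_strlen(const char *s);   /* hypothetical C-linkage API */

#ifdef __cplusplus
}
#endif

// A redeclaration inherits the previously declared linkage, so this
// definition still has C linkage.
int toy_curl_strlen(const char *s) { return (int)std::strlen(s); }
```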
Another possibility MIGHT be to refrain from including that ws2tcpip.h header on anything other than the old VS6 compiler, but that may be more trouble, I didn't follow the socklen issue closely.
----------------------------------------------------------------------
>Comment By: Scott Cantor (scantor)
Date: 2007-11-07 11:07
Logged In: YES
user_id=96701
Originator: YES
Appears to work, thanks.
Comment By: Daniel Stenberg (bagder)
Date: 2007-11-06 16:26
Logged In: YES
user_id=1110
Originator: NO
Yang Tse committed a fix to this problem today, it'd be great if you could
get tomorrow's daily snapshot or the current CVS and verify that it builds
fine!
Thanks for the report, case is closed.
Comment By: Scott Cantor (scantor)
Date: 2007-11-02 17:26
As a quick fix, this change to curl.h in that spot might be an option:
#if !(defined(_WINSOCKAPI_) || defined(_WINSOCK_H))
/* The check above prevents the winsock2 inclusion if winsock.h already was
   included, since they can't co-exist without problems */
#ifdef __cplusplus
}
#endif
#include <winsock2.h>
#include <ws2tcpip.h>
#ifdef __cplusplus
extern "C" {
#endif
#endif
A bit ugly, but it would save us testing the impact of a larger fix for
now.
Received on 2007-11-07
Odoo Help
Won't write new value
Guys,
In my class, if str(tot_qty[0][0]) == 'none' I call

self.write(cr, uid, ids, {'cntnt': 'occupied'})

Everything seems to work fine, but it doesn't write the new value according to the if statement. It prints to the screen correctly, but the value is never written into the cntnt field.

Why?
<pre>
class stock_location(osv.osv):
    _inherit = 'stock.location'
    _columns = {
        'cntnt': fields.char('Empty')
    }

    def name_get(self, cr, uid, ids, context=None):
        res = super(stock_location, self).name_get(cr, uid, ids, context=context)
        res1 = []
        product_id = 0
        if context.has_key('prod_id'):
            product_id = context.get('prod_id', 0)
        for obj in self.browse(cr, uid, ids):
            cr.execute("""SELECT sum(qty) from stock_quant where location_id = %s""" % obj.id)
            tot_qty = cr.fetchall()
            if product_id:
                cr.execute("""SELECT sum(qty) from stock_quant where location_id = %s and product_id = %s""" % (obj.id, product_id))
                tot_qty = cr.fetchall()
            if str(tot_qty[0][0]) == 'None':
                self.write(cr, uid, ids, {'cntnt': 'empty'})
                print 'State:', str(tot_qty[0][0]), ' Empty '
            if str(tot_qty[0][0]) != 'None':
                self.write(cr, uid, ids, {'cntnt': 'occupied'})
                print 'State:', str(tot_qty[0][0]), ' Ocuppied '
            res1.append((obj.id, obj.name + ' (' + str(tot_qty[0][0]) + ' in Stock)'))
        return res1
</pre>
or how can I do it!
This scenario describes a three-component Job that uses a DLL library containing a class called Test1.Class1 and invokes a method on it that processes a value and outputs the result to the console.
Before replicating this scenario, you need first to build up your runtime environment.
Create the DLL to be loaded by tDotNETInstantiate
This example class built into .NET reads as follows:
using System;
using System.Collections.Generic;
using System.Text;

namespace Test1
{
    public class Class1
    {
        string s = null;

        public Class1(string s)
        {
            this.s = s;
        }

        public string getValue()
        {
            return "Return Value from Class1: " + s;
        }
    }
}
This class reads the input value and adds the text Return Value from Class1: in front of this value. It is compiled using the latest .NET.
Install the runtime DLL from the latest .NET. In this scenario, we use janet-win32.dll for the 32-bit version of Windows and place it in the System32 folder.
Thus the runtime DLL is compatible with the DLL to be loaded.
Drop the following components from the Palette to the design workspace: tDotNETInstantiate, tDotNETRow and tLogRow.
Connect tDotNETInstantiate to tDotNETRow using a Trigger On Subjob OK connection.
Connect tDotNETRow to tLogRow using a Row Main connection.
Double-click tDotNETInstantiate to display its Basic settings view and define the component properties.
Click the three-dot button next to the Dll to load field and browse to the DLL file to be loaded. Alternatively, you can fill the field with an assembly. In this example, we use:

"C:/Program Files/ClassLibrary1/bin/Debug/ClassLibrary1.dll"
Fill the Fully qualified class name field with a valid class name to be used. In this example, we use:
"Test1.Class1"
Click the plus button beneath the Value(s) to pass to the constructor table to add a new line for the value to be passed to the constructor.
In this example, we use:
"Hello world"
Double-click tDotNETRow to display its Basic settings view and define the component properties.
Select Propagate data to output check box.
Select Use an existing instance check box and select tDotNETInstantiate_1 from the Existing instance to use list on the right.
Fill the Method Name field with a method name to be used. In this example, we use "getValue", a custom method.
Click the three-dot button next to Edit schema to add one column to the schema.
Click the plus button beneath the table to add a new column to the schema and click OK to save the setting.
Select newColumn from the Output value target column list.
Double-click tLogRow to display its Basic settings view and define the component properties.
Click Sync columns button to retrieve the schema defined in the preceding component.
Select Table in the Mode area.
Save your Job and press F6 to execute it.
From the result, you can read that the text
Return Value from Class1
is added in front of the retrieved value
Hello world.
Are you the kind of person who likes a good storm? Staying inside and warm while lightning flashes and the wind lashes the rain around is something I have always enjoyed. What if you could capture that feeling in a sphere that you could keep on your desk?
SEEING TURBULENCE
Rheoscopic fluids allow the currents and turbulence in liquids to be seen. They’re typically made with mica, a mineral that forms small, shiny flat plates that easily move within a fluid. The reflection of light off the mica turns the turbulence into a mesmerizing display.
You can find large spinnable round vessels filled with rheoscopic fluids in science museums (such as the Glasgow Science Centre shown in the video above) to demonstrate atmospheric flows over the surface of a planet, or as art displays (most famously, the Kalliroscopes of Paul Matisse), or even in your living room (the Rheoscopic Disc Coffee Table by Ben Krasnow, Make: Volume 47).
Making a rheoscopic fluid is simple, and it’s easy and inexpensive to buy the mica flakes, because they’re used in manufacturing soap, bath bombs, and makeup. I thought it might be interesting to try to shine a light from behind a rheoscopic fluid display, rather than just relying on reflected light to flash off the mica.
A quick test showed this to be an interesting effect.
The turbulence reminded me of a storm, so I thought it would be fun to bottle some lightning as well!
BOTTLE YOUR OWN STORM
MATERIALS
- Circuit Playground Bluefruit microcontroller board Adafruit #4333, adafruit.com. Other Circuit Playground boards should work with minor code modifications.
- Micro-USB cable
- Fan, 40mm×40mm×10mm
- DIY Snow Globe Kit Adafruit’s 108mm version, #3722, fits the Circuit Playground perfectly, and was also included in AdaBox014.
- Colored mica Typically sold for makeup, soap-making, or bath bombs — a very little will go a long way!
- Rare-earth magnets, 3mm×1mm round (2) Any small strong magnets would likely work fine.
- Magnetic stir bar, 15mm round aka stirring flea. Other shapes and sizes should work as well or better.
- Screws and spacers, M2.5 or M3, non-ferrous (brass or plastic)
- Cyanoacrylate glue
TOOLS
- Computer
- Wire cutters and strippers
- Marker
- Glue
- Soldering iron (optional)
1. MAKE A MAGNETIC STIRRER
Take two small magnets and stick them together. Since like poles repel, and opposites attract, this will let you identify the opposite poles of the two magnets. Use a marker to mark the opposite sides of each magnet.
Then glue the magnets down to the edges of the middle of the fan hub. Be sure to do this on the side that spins (the one without the label).
2. WIRE UP THE BLUEFRUIT
We’re going to just use the USB voltage (5V) to power our fan. There are better ways to do this, but let’s keep it simple! Cut off any connector on the fan wires, and strip the wires back a bit. Then attach the black wire to GND, and the red wire to VOUT (which is 5V, if you’re powering the board from USB). You can solder these if you like, or just make sure the wire is wrapped around the edge of the hole tightly to make a good electrical connection.
3. ASSEMBLE THE STIR PLATE
Attach the fan to the Bluefruit using non-ferrous M2.5 or M3 screws and standoffs. On a 40mm fan, the diagonal holes line up perfectly with two of the holes on the Bluefruit, which is convenient. You may need to vary the length of standoffs or screws to get an offset between the fan and the base of the snow globe, but the height of the screw heads should be enough space to allow the fan and magnets to spin.
4. PREPARE THE SNOW GLOBE
Fill the snow globe with water, and add a magnetic stir bar — the little white lozenge shown above. A small bowl can help you keep the snow globe in place, or you can have a friend hold it for you.
It is very easy to add too much mica to your snow globe, which will make a nice swirly effect, but will be completely opaque to light. If you wet the end of a toothpick, dip it in the mica, then into the snow globe, and repeat that process a few times, you’ll have the right amount of mica. Some experimentation may be required, depending on the mica you use.
Place the plastic stopper on the snow globe. If you overfill it a bit, and tilt it while you slowly squeeze the lid on, you can get all the air out to avoid any bubbles.
It’s likely there won’t be enough room to attach the screw-on base; I didn’t use it at all. If you’re worried about leakage, you can glue the plastic stopper on.
5. PREPARE A BASE (OPTIONAL)
Your Storm Globe will work fine without a base. But I decided to make a Mars-themed version, so I designed a base from the 3D model of the Block Island rock mapped by the Opportunity rover on Mars, and 3D printed it. The STL file is available at Github.
6. CODE THE BLUEFRUIT
There’s an excellent Adafruit Learning Guide that explains everything about getting a Bluetooth Snow Globe up and running. Read it at Adafruit. Then download the Storm Glass code from Github; it’s a lightly modified version of their demo code.
The stormy section of the code lights up the LEDs on the Bluefruit in such a way that it looks like a lightning flash. This has some random-ness included in it, so no two lightning flashes are the same.
def lightning(config):
    start_time = time.monotonic()
    last_update = start_time
    while time.monotonic() - start_time < config['duration']:
        if time.monotonic() - last_update > config['speed']:
            for _ in range(random.randint(1, 8)):
                pixels.fill(0)
                pixels.fill(config['color'])
                time.sleep(0.02 + (.001 * random.randint(1, 70)))
                pixels.fill(0)
                time.sleep(.01)
            time.sleep(5 + random.randint(1, 5))
            last_update = time.monotonic()
7. START YOUR STORM!
Magnetic stir plates typically include a speed control, which allows you to start the stirring process gradually. This one doesn’t, so you must be careful to get your stir bar stirring correctly.
If you look underneath the snow globe, you should be able to see your stir bar. Tilt the globe around until the stir bar is more or less in the middle, and then place it on top of the assembled base. Then plug the Bluefruit into a USB port to start the stirring effect. If the stir bar flings off to one side or just rattles around, you may need to try again or adjust the distance slightly between the fan and the base of the snow globe.
GOING FURTHER
There are a lot of ways you can experiment with your Storm Globe and make it your own. Can you make a better lightning animation or more interesting LED lighting effects? Mount one on the end of a staff for an outstanding wizard costume effect? Maybe put waterproof lights inside the sphere? Connect it to a Lightning Detector circuit (Make: Volume 71, page 105) or trigger it with real-time lightning notifications from lightningmaps.org/apps?
I’ve noticed that the mica tends to eventually settle up against the walls of the globe. Easily fixed with an occasional shake and reset, but perhaps maybe a different stir bar would create enough turbulence to prevent it? (There’s a good overview here.)
Unfortunately we can’t control speed directly from the Bluefruit, since the current draw of most fans is too high for it. However, you could certainly build a small circuit with a transistor to control the speed with the Bluefruit. This could be used to add a speed control knob or to slowly ramp up the stirrer.
I’d love to see what you can do with rheoscopic fluids and lighting effects; feel free to show off on Twitter and let me know @grajohnt! | https://makezine.com/projects/rheoscopic-storm-globe/ | CC-MAIN-2022-33 | refinedweb | 1,351 | 71.04 |
In this post, Tim Ewald talks about using Doc/Literal/Bare for your web service. There are several benefits he ticks off, but one seems to be the aesthetic effect of cleaning up the format of the XML within your SOAP message. In SOAP, the XML sent back and forth is just the wire format. As a typical developer, why should you care what the wire format is? In general, you shouldn't. If you have the tools to generate WSDL and generate a proxy off of a WSDL to call a web service, you're all set.
Unfortunately for me, it's not that easy. My job right now is to expose my company's platform to clients running cell-phones, set-top boxes, etc... These platforms are running J2ME, BREW, C, etc... Potential future clients are interested in SOAP, but our first client is dead set against it because they say it's too verbose for their tiny devices and there is scant tool support for them.
So I went and took some sand-paper to our SOAP services and shaved off every bit I could, smoothing out the edges, shortening the namespaces I have control over, making everything so "Doc/Literal/Bare" you'd blush just looking at it. Still, no go. They weren't having it. They have their own proprietary XML format they want to send to us over HTTP with a roll-our-own authentication scheme. I was hoping to take advantage of all the plumbing VS.NET and the .NET Web Services provide.
I recently watched a video in which Don Box and Doug Purdy discuss Indigo and SOA. They hope that most developers will not have to become plumbers and understand how it all works under the hood. You just use Indigo and it automagically takes care of it for you. You just focus on your business rules.
The problem I see arising is that as Microsoft takes web services and SOA to the next level, not everybody is keeping up. How will I get these people on mobile devices to interoperate with my service if they are lacking the tools to even generate simple SOAP messages? These guys didn't want to use XML until I showed them their format required very little change to make it XML compliant. As much as I don't want to know what's going on under the hood, I'm afraid I am forced to hike my pants down a bit and expose some butt crack to become a plumber.
In my next post, I'll talk about my solution to this problem and a problem I ran into.
The .NET Compact Framework combines Windows Forms controls with Pocket PC controls and components to provide a rich development experience for developing smart device projects.
Describes how to implement and use Form elements on the Pocket PC.
Describes how controls and core elements operate on the Pocket PC.
Describes the InputPanel class for using the soft input panel (SIP).
Describes how to trap key input from device hardware.
Shows to send and respond to a Notification control.
Shows how to select a input method for user input from the collection of input methods on a Pocket PC.
Shows how to use a DocumentList to provide for file management tasks in your application.
Shows how to use a HardwareButton to activate applications from a physical hardware buttons on a Pocket PC.
Shows how to position controls on your form when an InputPanel is enabled on your Pocket PC.
This namespace contains classes for programming device applications using the .NET Compact Framework.
mail::ACCOUNT::getFolderIndexInfo.3x man page
Cone©
mail::ACCOUNT::getFolderIndexInfo — Return message status
Synopsis
#include <libmail/sync.H>

mail::ACCOUNT *mail;

mail::messageInfo msgInfo=mail->getFolderIndexInfo(size_t messageNum);
Usage
mail::ACCOUNT::getFolderIndexInfo returns a structure that contains a message's unique identifier, and the message's current status flags. messageNum must be between zero and one less than the return code from mail::ACCOUNT::getFolderIndexSize(3x).
Return Codes and Callbacks
This function returns an object with the following fields:
- std::string uid
A unique ID that's assigned to each message in a folder. Applications should consider this unique ID as a completely opaque string, with no particular interpretation. The only assumption that applications may make is that no two messages will ever have the same uid in the same folder. A message copied to another folder will receive a different unique ID in the destination folder (the copy in the original folder is not affected).
- bool draft
This is a draft message.
- bool replied
A reply was previously sent to this message.
- bool marked
This message is marked for further processing.
- bool deleted
This message is marked for deletion.
- bool unread
The contents of this message have not been read.
- bool recent
This is the first time the folder was opened with this message in the folder.
Note
This message flag is considered obsolete, and should only be used by IMAP-based clients that absolutely need this flag. Applications that absolutely require this flag should be evaluated for correctness, since the IMAP specification indicates that this flag's setting is not defined in situations where the same mail folder is opened by multiple applications at the same time. Since this is nearly always the case, it seems that this flag's usability is rather limited. For this reason, the recent flag was not reimplemented in SMAP, and will not be set for accounts that are accessed via SMAP.
Note
Not all types of mail accounts support every message status flag. Unsupported message status flags will be automatically emulated, where possible. Specifically, POP3 mail accounts do not have a concept of message status flags at all. Each time a POP3 mail account is opened, the status of all messages in the POP3 account will be reset to the default status (unread message, no other flags set).
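As an illustration only, the fields documented above can be mirrored in a stand-alone struct to show how an application might test status flags. This hypothetical struct is not the real mail::messageInfo, which comes from libmail's <libmail/sync.H>.

```cpp
#include <string>

// Illustration only: a stand-alone mirror of the documented fields.
struct messageInfo {
    std::string uid;
    bool draft;
    bool replied;
    bool marked;
    bool deleted;
    bool unread;
    bool recent;
    messageInfo() : draft(false), replied(false), marked(false),
                    deleted(false), unread(true), recent(false) {}
};

// Example of the kind of predicate an application might build
// from these flags:
bool needsAttention(const messageInfo &m) {
    return m.unread && !m.deleted;
}
```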
See Also
mail::ACCOUNT::getFolderIndexSize(3x), mail::ACCOUNT::getFolderKeywordInfo(3x), mail::ACCOUNT::getMessageEnvelope(3x), mail::ACCOUNT::getMessageStructure(3x), mail::ACCOUNT::getMyRights(3x).
Author
Sam Varshavchik
This is the mail archive of the gdb@sourceware.cygnus.com mailing list for the GDB project.
p 'Foo::version'

I'm trying to remember if i made it possible to just do p Foo::version, i know i did it with templates. If you don't want to have to quote it, try the latest CVS version of gdb, and tell me if i fixed that (Bell Atlantic DSL is down right now, or else i'd do a checkout myself).

On 28 Jun 2000, Alexander Zhuckov wrote:

> Hi!
>
> I use Linux, C++ and gdb 5.0.
> Suppose I have a simple program:
>
> namespace Foo {
>
> const char version[] = "0.1";
>
> }
>
> int main()
> {
> return 0;
> }
>
> How I can eximine a value of the Foo::vesrion varibale?
>
> --
> Alexander Zhuckov zuav@int.spb.ru 2:5030/518.50
Object-oriented design for a game
A game is a good domain to study object-oriented design. In my game, there are one or more human players (the Player class), and some grues (the Grue class; a grue is a scary monster that eats people, but is never present in lighted rooms, for what it’s worth). The Game class tells the grues and players to make their moves (the grues’ moves are random, the players’ moves come from asking the person at the keyboard to make a move).
Rooms are connected to each other by arbitrary exits (e.g. “south,” “out the
window”). Each room keeps track of the things inside it (using a
set). Also,
there are generic things (the Thing class) such as statues and forks and
knives, lying around the maze. Since Players and Grues are Agents and Agents
are Things, each room is keeping track of the things, players, and grues inside
of it.
The solid-line arrows in the diagram indicate inheritance relationships. The dotted-line arrows represent “dependency,” which means, for example, that the Player and Grue classes may need to be modified if the Room class changes its definition.
Private variables/methods start with a dash (-), protected variables/methods start with a hash (#), and public variables/methods start with a plus (+). Pure virtual methods are bold and italicized.
Here is my main() function:

int main() {
    Game game;

    Room *entrance = new Room("Entrance", "A wide open entrance...", 100);
    Room *hallway = new Room("Hallway", "A long hallway...", 50);
    Room *ballroom = new Room("Ballroom", "A huge ballroom...", 200);

    entrance->link(hallway, "south");
    hallway->link(entrance, "north");
    hallway->link(ballroom, "east");
    ballroom->link(hallway, "west");

    Player *josh = new Player("Josh", "A prince", 50);
    Player *tracy = new Player("Tracy", "A princess", 40);
    josh->moveTo(entrance);
    tracy->moveTo(entrance);
    game.addAgent(josh);
    game.addAgent(tracy);

    Grue *napolean = new Grue("Napolean");
    Grue *kafka = new Grue("Kafka");
    napolean->moveTo(ballroom);
    kafka->moveTo(ballroom);
    game.addAgent(napolean);
    game.addAgent(kafka);

    Thing *liberty = new Thing("Statue of Liberty",
                               "A miniature Statue of Liberty", 5);
    Thing *hoop = new Thing("Hoop", "A basketball hoop", 30);
    liberty->moveTo(entrance);
    hoop->moveTo(ballroom);

    cout << "Welcome!" << endl;

    // the step() function in the Game class will eventually
    // return false, when a player chooses to quit;
    // this tiny "while" loop keeps asking the game.step()
    // function if it is false or true; the effect is
    // that the step() function is called repeatedly
    // until it returns false
    while(game.step());

    return 0;
}
Here is thing.h:

#ifndef THING_H
#define THING_H

#include <string>
#include "room.h"

using namespace std;

class Thing {
private:
    string name, description;
    int size;

protected:
    Room *cur_room;

public:
    Thing(string _name, string _description, int _size);
    bool moveTo(Room *r);
    string getName();
    string getDescription();
    int getSize();
};

#endif
… and thing.cpp:

#include "thing.h"

// this constructor just sets all the Thing class's properties;
// it uses the ":" syntax for convenience
Thing::Thing(string _name, string _description, int _size)
    : name(_name), description(_description), size(_size), cur_room(NULL) {}

bool Thing::moveTo(Room *r) {
    // moving to a room will fail if the room is full;
    // the add() function (part of the Room class) returns
    // true or false depending on whether the thing can
    // fit into the room (rooms have capacities and
    // things have sizes)
    if(r->add(this)) {
        if(cur_room != NULL)
            cur_room->remove(this);
        cur_room = r;
        return true;
    }
    return false;
}

string Thing::getName() { return name; }
string Thing::getDescription() { return description; }
int Thing::getSize() { return size; }
Here is game.cpp:

#include <cstdlib>
#include "game.h"

Game::Game() {
    // initialize the random number generator (for Grue movements)
    // when a new game is started
    srand(time(NULL));
}

void Game::addAgent(Agent *agent) {
    agents.push_back(agent);
}

bool Game::step() {
    // give each agent a chance to move;
    // if any return false (e.g. a player quits), also return false
    for(unsigned int i = 0; i < agents.size(); i++) {
        if(!agents[i]->act())
            return false;
    }
    return true;
}
Read variables between more than 2 forms
Hello,
I know many people have asked this before, but I tried to follow their answers without success... The question is how I can send data between forms. I tried with signals and slots; the problem I have is this:

I have 3 forms:

- In form1 I create form2.
- In form2 I create form3.

And now in form3 I want to read one variable from form2. The problem is that from form3 I can't do something like:

form2.get_variable();

because form2 is created in form1 and the call I write is already in form3. As far as I saw, this should be solved with signals and slots; when I try that, the program compiles but crashes when it tries to send the data...

I don't know if I explained it well; I hope so, hehe.
Thanks!!
avmg
There are many ways to solve this. It depends on the details.
In the simplest case you don't need signals/slots at all. Just pass the stuff from Form2 to Form3 as the constructor param or a setter function.
If it has to go the other way around then pass a pointer (this) in the constructor of Form3 and get data from Form2 through it, just like you wrote:
form2ptr->get_variable()
Example:
@
class Form3 : public QDialog {
public:
...
void setF2Ptr(Form2 * ptr) { f2 = ptr; }
void whatever() { f2->get_stuf(); }
private:
Form2 * f2;
};
void Form2::foo {
Form3 f3;
f3.setF2Ptr(this);
f3.exec();
}
@
The same goes for Form1 and Form2 and however many links in chain you need.
Signals are usually used to signal that something occurred. They are not a means to retrieve a value (they are void). Slots on the other hand are used to react to these signals.
Hi,
I tried that, but it shows the error: 'f2' does not name a type (replacing f2 with my form's name, obviously).
Thanks for the reply.
This was an example. I omitted some stuff for brevity. I thought you could figure out that you need the #includes, that you shouldn't implement the functions in the header, etc.
@
//Form3.h
class Form2; //forward declaration
class Form3 {
... //as before except those:
void setF2Ptr(Form2 * ptr);
void whatever();
};
//Form3.cpp
#include "Form3.h"
#include "Form2.h"
void Form3::setF2Ptr(Form2 * ptr) { f2 = ptr; }
void Form3::whatever() { f2->get_stuf(); }
@
OK, I still have some problems, and I think they're related to where I create the forms. For example:
//Form1.h
public:
Form2 * f2;
//Form1.cpp
//In the constructor:
Form2 *f2= new Form2();
I want to show f2 when I press a button; in the button action I write:

f2.show();

But when I press the button it crashes... Before, I was creating f2 in the button action instead of the constructor, but I changed it because I thought it was better to create it once and then show and hide it.

What is the recommended way to create and show forms?

OK, it was crashing because I had something wrong in the constructor, but anyway I'd still like to know the proper way to do it.

Thanks.
There's no "one size fits all". Proper way depends on what those forms are.
For example if there's gonna be one of these shown and hidden all the time then it's pretty much like you did it, except you should give your forms a parent so that you don't need to manually delete them.
If it's some sort of dialog then it's better not to keep it in memory and create/run it locally with its own event loop (via exec) eg.
@
void Form2::showSomeDialog() {
Form3 dialog(this);
dialog.exec();
}
@
If there are gonna be lots of these forms, like in a cam monitoring app where each cam has a window, then a factory pattern with some managing object might be better.
You don't find "the one" pattern and try to force your app into it. It's the other way around. Know what it's meant to do and then choose a pattern that fits.
Hi,
I tried what Chris explained, but without success... The program doesn't compile; maybe I mixed up what Chris suggested in his two answers and did something wrong. I also tried with connect and signals, and even though the program compiles, nothing happens when the "emit" instruction executes; it's as if nothing is received on the other side. Anyway, if I want to read vectors from other forms, maybe this isn't the best way...

So basically, I want to learn how to read data between forms, but the problem may also be in how I create them and their pointers.

What I did was: I created form2 in form1.cpp, and when I press a button I show form2. But to do this, form2 must be declared in form1.h, because otherwise it says it wasn't declared in this scope. What I'm not clear on is how to create the form pointers or instances to access the other forms.

Thanks.
What is the intended lifespan of form2? If it's a modal dialog then you just need it in the local scope of execution of the button press, not in form1.h eg.
@
//form1.cpp
#include "form1.h"
#include "form2.h"
...
void Form1::someButton_clicked() {
Form2 dialog(this);
auto result = dialog.exec();
...//do something with result
}
@
If it's not a dialog, but a window that should be opened "in parallel" to form1 then this is one way:
@
//form1.h
class Form2; //forward declaration
class Form1 {
... //constructor and other stuff
private:
Form2* window = nullptr;
};
//form1.cpp
#include "form1.h"
#include "form2.h"
void Form1::someButton_clicked() {
if(!window)
window = new Form2();
window->show();
}
Form1::~Form1() {
delete window; //remember to clean up
}
@
OK, I went with the second option, because the form is used a lot and I prefer to hide and show it when needed; therefore I create Form2 in the Form1 constructor and show it when a button is clicked. It's the same as you did, but creating Form2 at the "beginning" instead of each time the button is clicked.

And how do I add the instances to access Form1 from Form2?

Thanks!
It's more a matter of design than anything else. Should form2 "know" what form1 is, or should it accept data from anywhere?
It's often a good idea to decouple classes from each other i.e. if form1 creates form2 then form2 should know nothing about form1.
It's also a matter of design whether form2 should reach out to form1 for data or form1 should set some data in form2. As before, the second option is often preferable.
So for example this way only form1 knows form2, not the other way around:
@
//form2.h
class Form2 {
...
public:
inline void setSomeData(SomeType data) { someData = data; }
private:
SomeType someData;
};
//form1.cpp
void Form1::someButton_clicked() {
auto foo = ...; //get it from wherever
form2->setSomeData(foo);
form2->show();
}
@
If on the other hand you really need to couple form1 and form2 so that they know about each other here's an example:
@
//form2.h
class Form1; //forward declaration
class Form2 {
...
public:
void setForm1Ptr(Form1* ptr);
private:
Form1* f1 = nullptr;
};
//form2.cpp
#include "form2.h"
#include "form1.h"
void Form2::setForm1Ptr(Form1* ptr) {
f1 = ptr;
}
void Form2::doSomeStuff() {
if(f1) {
auto stuff = f1->getSomeStuff(); //here you access form1 from form2
}
}
//form1.cpp
void Form1::someButton_clicked() {
form2->setForm1Ptr(this); //you can put this anywhere, eg. in constructor
form2->show();
}
@
The second one is a bit messier. Usually a design where everything knows about everything gets complicated fast and is harder to maintain. You should strive for a directional, tree-like design, where a class only reaches down, not up to the class that created the instance of it.
Hi,
I tried the second option, but the program crashes when it executes f1 = ptr. It says:
The inferior stopped because it received a signal from the Operating System.
Signal name :
SIGSEGV
Signal meaning :
Segmentation fault
Also, I added the following line to the .pro file to use nullptr:
QMAKE_CXXFLAGS += -std=c++0x
What do you think?
[quote author="avmg" date="1403203760"]Also i add the next line to the .pro file to use the nullptr:
QMAKE_CXXFLAGS += -std=c++0x
[/quote]
This is not portable and will cause an error when used with a compiler like MSVC. Use this instead:
@
CONFIG += c++11
@
As for the segfault - well then you're "doing something wrong"™ ;) Most likely calling setForm1Ptr on an uninitialized form2 pointer. Start your debugger and check. I'm not really good at "guess debugging" ;)
OK, I will check more deeply, but I found this problem while debugging; if I don't debug, the program crashes and shows in the application output:
The program has unexpectedly finished.
Does the f1 name have to be the same as the one created at the beginning, or doesn't it matter?
[quote author="avmg" date="1403204829"]Does the f1 name have to be the same as the one created at the beginning, or doesn't it matter?[/quote]
I don't think I understand. f1 is the pointer to the first form. It's a member of Form2 class. I'm just using placeholder names. You can (and should) name them something meaningful.
The sequence of events should be as follows:
- you create a Form1 instance (form1)
- somewhere (eg. in the constructor of Form1) you create instance of Form2 (form2)
- somewhere(eg. right after the above) you use form2->setForm1Ptr to set pointer to form1 (f1) inside form2
- somewhere in form2 you use that pointer f1 to get data from form1
If you do it in any other order it will likely result in a segfault.
For example call to form2->setForm1Ptr when form2 is not yet created will segfault.
Using f1 before it was set will segfault.
Using f1 after form1 was destroyed will segfault.
Calling form2->setForm1Ptr after form2 was destroyed will segfault.
You get the picture ;)
[quote author="avmg" date="1403204829"]If I don't debug, the program crashes and shows in the application output: The program has unexpectedly finished.[/quote]
That means you've got a bug. That's what you use a debugger for ;)
Solved!
The problem was that I was creating the instance in the constructor like this:
Form1* form1 = new Form1;
instead of:
form1 = new Form1();
Stupid mistake...

Thank you so much, Chris, for helping me!!
Hi again,
Finally I understood how to send data between forms and could continue with my program. But now I'm stuck on another problem related to the same thing. I want to get a struct declared in another form. I did it like this:
// Form1.h
Public slots:
mystruct return_struct(void);
Private:
struct mystruct{
...
}
mystruct new_mystruct;
// Form1.cpp
mystruct return_struct(void){
return new_mystruct;
}
I've declared new_mystruct in Form1.h to have access to this data in every function of this form. The problem is that for the return_struct function in Form1.h it says:

'mystruct' does not name a type.

I think I should create new_mystruct in the Form1.cpp constructor, but if I do that, I don't have access to it in the other functions...

What do you recommend?
Thanks
Please use code tags when posting code (last button on the right). It's easier to read.
You haven't posted entire code but from what I see you declared your struct in the private section of Form1. As so it is not visible to outside classes and the compiler is right - there is no 'mystruct' outside of Form1.
You need to declare it in a public section and refer to it as mystruct when inside Form1 members and as Form1::mystruct anywhere outside.
Btw. It's a good convention to start type names with a capital letter. That way people know right away what is what.
As for where you create an instance of it. You don't say what the struct contains but if you return a local variable like you did you don't get chance to fill it with any meaningful data. So either fill it up with something before you return or make it a class member and fill it elsewhere.
Example:
@
//Form1.h
class Form1 {
public:
struct MyStruct {
...
};
MyStruct getStruct() const;
private:
MyStruct structInstance;
};
//Form1.cpp
Form1::MyStruct getStruct() const { return structInstance; }
//Somewhere else
Form1* form1 = ... //form1 is some instance of Form1
Form1::MyStruct data = form1->getStruct();
@
You can use structInstance in any of the Form1 members this way.
Edit: Oh, and it's not an error, but it's unusual for a getter function to be a slot. Slots are meant as functions that modify your object in a reaction to a signal. They're not meant to return values as there is no object to give that value to in a connect statement. Getters like yours should not be slots.
Hi Chris,
I tried that but now the compiler says that 'structInstance' was not declared in this scope in the line:
return structInstance;
Ah! Thanks for the advice!
Sorry, getStruct is a class member function. I missed class specifier:
@
Form1::MyStruct Form1::getStruct() const { return structInstance; }
@
Done!
Thank you so much again!
Hi again!
Before, I asked how to get a struct from another form. Now I'm trying to set values in a struct located in another form. The thing is that the struct is too big to make a set function for each variable... so I was thinking of these two options, considering that in Form2 I have a private struct that I want to change from Form1:

1. Try to send the complete struct. I declare a second struct in Form1, modify the values, and send it to Form2; once it is received there, I change the values. But I don't have the struct declared in Form1, and I also think it's a very ugly solution.

2. Try to make a pointer in Form1 to the struct located in Form2 and modify it through that.

Anyway, both solutions are ugly, and I'd like to know a good way to do it.
Thanks.
I solved it using the first method.
Thanks
You mean you have 2 structs in both forms and they look the same? Why not just declare it once outside of both classes?
From design standpoint if both forms are reading and writing to it then it clearly doesn't belong to any of them.
I have a private struct in a form, and if I want to change it from another form I have to implement a public function like "set_value". But if the struct is too big and I don't want to add, let's say, 200 functions, I need to do something different. In the end I create a temporary struct in form1; after I modify the values I want, I send this struct to form2, where the struct I wanted to modify in the first place is declared.

If I declare it outside, I can't access it because it is private.

For sure there are many better ways, but this is the best I could do for now...
TEMPLATE_TOP from your old application config/RELEASE file. In many cases the modules you actually use under R3.14 will be different from the R3.13 modules, but the old module names here give you a starting-point for what their replacements will be.
If you wish, you can now delete this directory (unless you have other App/src directories still to convert):

rm -rf ../..

If you delete your application's local copy of base.dbd, the base.dbd file from $(EPICS_BASE)/dbd will be used instead. If you only want to load a subset of the record definitions from base you can keep a local edited copy of the base.dbd file, but you should copy it from $(EPICS_BASE)/dbd and edit that rather than trying to re-use the R3.13 version from your old application area.
Add the following header file inclusion after all other #include statements:
#include "epicsExport.h"
The struct rset is now available as a typedef so change
struct rset recordnameRSET = { ... };
to
rset recordnameRSET = { ... };
and add the following line immediately after that definition:
epicsExportAddress(rset, recordnameRSET);
Add the following header file inclusion after all other #include statements:
#include "epicsExport.h"
and add the following line after every dset definition struct { ... } devname = { ... }; in the file.
epicsExportAddress(dset, devname);
Add the following header file inclusion after all other #include statements:
#include "epicsExport.h"
and add the following line after the drvet drvname definition:
epicsExportAddress(drvet, drvname);
Registration code for application specific functions, e.g. subroutine record init and process functions, must be changed as follows.

Add these header file inclusions after all other #include statements:

#include "registryFunction.h"
#include "epicsExport.h"

For example, for subroutine record functions declared as

static long mySubInit(subRecord *precord)
static long mySubProcess(subRecord *precord)

add the following registration lines to the C file:

epicsRegisterFunction(mySubInit);
epicsRegisterFunction(mySubProcess);

and add these lines to an application database definition (dbd) file:

function("mySubInit")
function("mySubProcess")
It may be necessary to add one or more of the following header file inclusions to any C source file if you get warnings or errors from the compilation process. The most likely file missing is errlog.h.
The ld command in vxWorks 5.5.2 doesn't clean up its standard input
stream properly, so we now recommend passing the filename to it as an argument
instead. Change
ld < nameLib

to

ld 0,0, "name.munch"

If some of these changes are missed, your code may not compile properly, or on vxWorks you could see the load-time error:

undefined symbol: _recGblSetSevr.
The steppermotor, scan, and pid records are no longer in base. If these record types are used at your site, their unbundled modules should be downloaded from the EPICS website and built with base R3.14 by your EPICS administrator. To use these record types in your application you must add them to the application just like any other external support module. Most modules provide instructions on how to use them in an IOC application.
Consider changing any existing old steppermotor records to the EPICS motor record module supported by the Beamline Controls and Data Acquisition group at APS.
recDynLink.o and devPtSoft.o are no longer in EPICS base and now exist as separate unbundled EPICS modules. As with the three record types described above, these must now be built separately and added as support modules to any applications that need them. If you do not use them, remove any references to them from your dbd files.
Hardware support now exists as separate EPICS modules. The hardware support modules used at your site should be downloaded and built with base R3.14 by your EPICS administrator. To use them, add the appropriate module full path definitions to your application configure/RELEASE file, and make the documented changes to your Makefile to link their binaries into your IOC executable.
For example, remove
LIBOBJS += $(EPICS_BASE_BIN)/symb
from baseLIBOBJS and add
LIBOBJS += $(SYMB_BIN)/symb
to your application src/Makefile, and add the line
SYMB = <full path definition for the built module SYMB>
into your application configure/RELEASE file.
The host tool dbLoadTemplate has been replaced by a new EPICS extension called msi, which should be downloaded and built with base R3.14 by your EPICS administrator. dbLoadTemplate is still supported on IOCs. If the msi executable is not in your default search path and your application's db files are created from template and substitution files, you should add the definition
MSI = <full path name to msi executable>
to your application's configure/RELEASE file.
Review and optionally modify site build settings. | http://www.aps.anl.gov/epics/base/R3-14/12-docs/ConvertingR3.13AppsToR3.14.html | CC-MAIN-2014-52 | refinedweb | 689 | 56.96 |
If you’ve used any programming language for a long enough time, you’ve found things about it that you wish were different. It’s true for me with Python. I have ideas of a number of things I would change about Python if I could. I’ll bore you with just one of them: the syntax of class definitions.
But let’s start with the syntax for defining functions. It has this really nice property: function definitions look like their corresponding function calls. A function is defined like this:
def func_name(arg1, arg2):
When you call the function, you use similar syntax: the name of the function, and a comma-separated list of arguments in parentheses:
x = func_name(12, 34)
Just by lining up the punctuation in the call with the same bits of the definition, you can see that arg1 will be 12, and arg2 will be 34. Nice.
OK, so now let’s look at how a class with base classes is defined:
class MyClass(BaseClass, AnotherBase):
To create an instance of this class, you use the name of the class, and parens, but now the parallelism is gone. You don’t pass a BaseClass to construct a MyClass:
my_obj = MyClass(...)
Just looking at the class line, you can’t tell what has to go in the parens to make a MyClass object. So “def” and “class” have very similar syntax, and function calls and object creation have very similar syntax, but the mimicry in function calls that can guide you to the right incantation will throw you off completely when creating objects.
This is the sort of thing that experts glide right past without slowing down. They are used to arcane syntax, and similar things having different meanings in subtly different contexts. And a lot of that is inescapable in programming languages: there are only so many symbols, and many many more concepts. There’s bound to be overlaps.
But we could do better. Why use parentheses that look like a function call to indicate base classes? Here’s a better syntax:
class MyClass from BaseClass, AnotherBase:
Not only does this avoid the misleading punctuation parallelism, but it even borrows from the English we use to talk about classes: MyClass derives from BaseClass and AnotherBase. And “from” is already a keyword in Python.
BTW, even experts occasionally make the mistake of typing “def” where they meant “class”, and the similar syntax means the code is valid. The error isn’t discovered until the traceback, which can be baffling.
I’m not seriously proposing to change Python. Not because this wouldn’t be better (it would), but because a change like this is impractical at this late date. I guess it could be added as an alternative syntax, but it would be hard to argue that having two syntaxes for classes would be better for beginners.
But I think it is helpful to try to see our familiar landscape as confused beginners do. It can only help with explaining it to them, and maybe help us make better choices in the future.
What about the metaclass keyword argument you can use at class definition level in Python3?
class Thing(Parent, metaclass = Whatever):
Good point. In ECMAScript 2015 they followed this idea, using "extends" instead of "from". From the existing keywords, I think "is" might even be better than "from".
Simple: make Whatever Thing from Parent.
In that parlance, class is simply a shortcut for "make type".
Yes, make should be a new keyword. But at least we could write the full name of first arguments to classmethods. :-D
class Thing from Parent as Whatever
A related thing that bothers me about Python is the overloading of __call__. If it wasn't for old-style types (str(), int(), etc, that aren't even old-style anymore), it could have an object.new() method, eliminating type.__call__() . That would reduce the class confusion you mention as well.
I don't like the way attributes for a class are defined. Seems too much work to define an __init__, and then mixes declaration of attributes with other code. (Data classes can be an improvement.)
Maybe because these are not class attributes? :-)
The syntax for defining class attributes is perfectly fine. It's the instance attributes that are the problem, mostly because you don't have an instance until creation time.
PyMake has the ability to call out to Python modules/methods natively instead of invoking separate Python interpreters. We should define $(NSINSTALL) in config.mk to do this when running under PyMake. FWIW, $(NSINSTALL) is executed ~1650 times during a fresh build of the browser profile. As we know, processes on Windows are expensive (compared to other OS's), so a savings of 1650 extra processes should help bring down Windows build times. By how much, I have no clue.
Created attachment 554613 [details] [diff] [review] Execute nsinstall.py natively under PyMake This patches config.mk to run nsinstall.py natively under PyMake. I've tested it on my local Linux VM and it seems to work fine in both make and PyMake. The patch includes a one-liner to nsinstall.py to change a 'sys.exit()' to 'return' from within the method. (The method should always return, never exit.) I have not yet performed a Try build. That should definitely be done before this lands anywhere. I created the patch from Git, so that's why the format is screwy. I can repost in Mercurial format if requested.
Comment on attachment 554613 [details] [diff] [review] Execute nsinstall.py natively under PyMake >+# We prefer the PyMake native invokation about all others because it will not >+# spawn a new process. >+ifdef .PYMAKE You probably meant "pymake invocation above all others". IMO we don't need to explain why pymake native commands are preferred. The patch looks good to me, but I'll defer to khuey for definitive review.
(In reply to Mitchell Field [:Mitch] from comment #2) > You probably meant "pymake invocation above all others". Coming from PyMake land, "native" means "native Python" (see pymake/data.py:getcommandsforrule() ). But, we aren't inside PyMake here, so I can see how the terminology is confusing. I'll change it on the next patch or for the commit if this one is r-conditional.
I wouldn't bother with the comment at all, honestly. We haven't had any comments like that elsewhere.
Oh, I think that the reason I hadn't proposed this yet is that nsinstall.py doesn't support creating symlinks currently. Not that anyone is going to use pymake on Linux/OS X, but it would be nice if building with pymake didn't have subtle differences like that.
Comment on attachment 554613 [details] [diff] [review] Execute nsinstall.py natively under PyMake I think we can live with the difference for the moment, but please find or file a followup on finishing nsinstall.py's implementation.
Also, in the future, 8 lines of context would be better.
Created attachment 554913 [details] [diff] [review] Execute nsinstall.py as Python module under PyMake The content is identical to the previous patch except it is formatted properly and has the comment removed per comment from ted.
This is technically ready for committing. However, I have yet to perform a Try build. If you want me to perform a Try build, just ping me here or on IRC. The only reason I haven't done a Try build yet is I haven't performed a Try build before and I want to grok the documentation before I perform one.
Try doesn't run pymake so you're not going to learn that much.
Try run for 6f63ee0f78d4 is complete. Detailed breakdown of the results available here: Results (out of 5 total builds): exception: 5 Builds available at
This patch causes my local pymake build to break: Mozconfig used: Build script used: (ie autoconf-2.13 then configure rather than client.mk) My directory structure is such that objdir isn't inside srcdir, don't know if that would make any difference. (ie: c:\mozilla\repos\inbound\ and c:\mozilla\repos\obj-inbound\). Using VC2010, Win SDK 7.0A, MozillaBuild 1.6rc.
In response to comment #13, it appears the problem is $(NSINSTALL) is being used within a larger script. It is now obvious that after this change, $(NSINSTALL) would not be safe to use outside of single-line commands because the shell would not know how to invoke using the Python syntax. So, to properly implement this patch, we'll need to audit the code base for usages of $(NSINSTALL) inside command/shell blocks. This change isn't as trivial as I hoped. Ugh.
Backed out of inbound:
Ugh, that's pretty nasty :-/
So, what do we do for the PyMake built-in/native-Python commands (like rm, mkdir, pythonpathy, etc)? Do we just not utilize them in multiline recipes that are invoked under a single shell? If so, perhaps a useful feature of PyMake would be to error upon detection of these commands.
Yeah, we shouldn't be trying to use them in those situations. It's an unfortunate case I hadn't really thought of when I implemented native command support. Ideally we would just rewrite the Makefiles to remove complex shell invocations, replacing them with simpler Makefile constructs or calls to Python scripts containing the logic.
Created attachment 628064 [details] [diff] [review] handle a large subset of nsinstall invocations (m-c) This patch goes on top of the one in bug 757252.
Created attachment 628066 [details] [diff] [review] handle a large subset of nsinstall invocations (c-c)
Created attachment 633535 [details] [diff] [review] now with proper unicode support (m-c) I pushed this to try and didn't see any issues.
Created attachment 633536 [details] [diff] [review] now with proper unicode support (c-c)
Created attachment 635871 [details] [diff] [review] patch updated to fix bitrot (m-c)
Comment on attachment 633536 [details] [diff] [review] now with proper unicode support (c-c) Review of attachment 633536 [details] [diff] [review]: ----------------------------------------------------------------- ::: config/config.mk @@ +575,5 @@ > PWD := $(CURDIR) > endif > > NSINSTALL_PY := $(PYTHON) $(call core_abspath,$(MOZILLA_SRCDIR)/config/nsinstall.py) > +# For Pymake, whereever we use nsinstall.py we're also going to try to make it "wherever" @@ +611,5 @@ > > endif # WINNT/OS2 > > +# The default for install_dist is simply INSTALL > +install_dist ?= $(INSTALL) $(1) install_dist seems like a confusing name since it doesn't actually install to DIST. I might just call this "install_cmd" or something.
Comment on attachment 635871 [details] [diff] [review]
patch updated to fix bitrot (m-c)

Review of attachment 635871 [details] [diff] [review]:
-----------------------------------------------------------------

::: config/config.mk
@@ +637,5 @@
> +# For Pymake, whereever we use nsinstall.py we're also going to try to make it
> +# a native command where possible. Since native commands can't be used outside
> +# of single-line commands, we continue to provide INSTALL for general use.
> +# Single-line commands should be switched over to install_dist.
> +NSINSTALL_NATIVECMD := %nsinstall nsinstall_native

Kind of a bummer that you had to name it "nsinstall_native" instead of just using "nsinstall".

@@ +657,5 @@
>
> ifeq (,$(CROSS_COMPILE)$(filter-out WINNT OS2, $(OS_ARCH)))
> +INSTALL = $(NSINSTALL) -t
> +ifdef .PYMAKE
> +install_dist = $(NSINSTALL_NATIVECMD) -t $(1)

Same comment as the other patch.
Created attachment 638097 [details] [diff] [review]
what I'm going to check in (m-c)

> Kind of a bummer that you had to name it "nsinstall_native" instead of just using "nsinstall".

Turns out switching it over isn't a problem at all.
Created attachment 638098 [details] [diff] [review] what I'm going to check in (c-c) The two patches are blocked on the ones in bug 757252. I'll check them in together. | https://bugzilla.mozilla.org/show_bug.cgi?id=680636 | CC-MAIN-2017-13 | refinedweb | 1,211 | 65.83 |
Guardian is a Vapor 3 based middleware that limits the number of requests from a client based on its IP address plus the accessed URL. It works by adding the client's IP address to a cache and counting the number of requests that client makes within the lifecycle defined when the GuardianMiddleware is added; it returns HTTP 429 (Too Many Requests) when the limit is reached. After the time limit expires, requests can be made again. Custom response data is also supported. Guardian exists because gatekeeper only supports Vapor 2. Many thanks to the original author! 🍺
Note that if several clients share one public IP address (for example, a LAN behind NAT), consider raising the per-interval limit accordingly.
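Guardian itself is written in Swift, but the fixed-window counting described above is easy to sketch in a language-agnostic way. The following Python snippet is illustrative only (not Guardian's actual implementation): requests are counted per IP + URL key and rejected (Guardian would answer HTTP 429) once the limit is reached, until the interval expires.

```python
import time

class FixedWindowLimiter:
    """Toy fixed-window rate limiter keyed on (ip, path)."""

    def __init__(self, limit, interval):
        self.limit = limit        # max requests per window
        self.interval = interval  # window length in seconds
        self.windows = {}         # (ip, path) -> (window_start, count)

    def allow(self, ip, path, now=None):
        now = time.time() if now is None else now
        start, count = self.windows.get((ip, path), (now, 0))
        if now - start >= self.interval:   # window expired: start a fresh one
            start, count = now, 0
        if count >= self.limit:
            return False                   # caller would answer HTTP 429
        self.windows[(ip, path)] = (start, count + 1)
        return True

limiter = FixedWindowLimiter(limit=3, interval=60)
print([limiter.allow("1.2.3.4", "/welcome", now=t) for t in (0, 1, 2, 3)])
# [True, True, True, False]
print(limiter.allow("1.2.3.4", "/welcome", now=61))  # new window -> True
```

A real middleware would also need an eviction policy for stale keys; Guardian delegates that to the cache you configure (or its own memory cache).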
Installation 📦
To include it in your package, add the following to your
Package.swift file.
let package = Package(
    name: "Project",
    dependencies: [
        ...
        .package(url: "", from: "3.0.0"),
    ],
    targets: [
        .target(name: "App", dependencies: ["Guardian", ... ])
    ]
)
Usage 🚀
There are two ways to use:
- Global use:
Guardian Configurable fields: Maximum number of visits, time units, and cache to use.
If you do not provide your own cache, Guardian will create its own memory cache.
// Each api URL is limited to 20 times per minute
let guardian = GuardianMiddleware(rate: Rate(limit: 20, interval: .minute))
or, in configure.swift:
- Import header files
import Guardian
- Join before services
var middlewares = MiddlewareConfig()
middlewares.use(GuardianMiddleware(rate: Rate(limit: 25, interval: .minute), closure: { (req) -> EventLoopFuture<Response>? in
    let view = ["result":"429","message":"The request is too fast. Please try again later!"]
    return try view.encode(for: req)
}))
services.register(middlewares)
Method Two:
- Routing group use:
Adding Middleware to a Routing Group
let group = router.grouped(GuardianMiddleware(rate: Rate(limit: 25, interval: .minute)))
group.get("welcome") { req in
    return "hello,world !"
}
Support custom return data 📌
Guardian adds support for custom return data, as in the following example:
Return a JSON object:
middlewares.use(GuardianMiddleware(rate: Rate(limit: 20, interval: .minute), closure: { (req) -> EventLoopFuture<Response>? in
    let view = ["result":"429","message":"The request is too fast. Please try again later!"]
    return try view.encode(for: req)
}))
or return a Leaf/Html web page:
middlewares.use(GuardianMiddleware(rate: Rate(limit: 25, interval: .minute), closure: { (req) -> EventLoopFuture<Response>? in
    let view = try req.view().render("leaf/busy")
    return try view.encode(for: req)
}))
or Custom returns other types of data...
Rate.Interval Enumeration types
Currently supported setup intervals are:
case .second
case .minute
case .hour
case .day
Contacts
If you have any questions or suggestions, you can open an issue or contact me:
Twitter : @Jinxiansen
License 📄
Guardian is released under the MIT license. See LICENSE for details.
If we, in our program, have just one class, without extending any class. For example
public class Point {
int x, y;
}
The compiler creates a default constructor and inserts a call to super(), like this:
public class Point {
int x, y;
public Point() {
super();
}
}
Q: As I understand it, super(); calls the default constructor of the superclass, but in this case we do not have a superclass, so what is super() calling here?
All Java classes extend from Object.
You do have a super class. All classes in Java automatically extend java.lang.Object, regardless of whether you specify it or not.
See here:
To take one snippet from that link:
The default constructor calls the constructor of Object, which all Java objects inherit from.
In Java, every class has a superclass. If none is explicitly given, then it's Object.
All Java classes extend from Object, so if you are not extending any class explicitly, super() calls the constructor of the Object class.
Object is the supertype of everything in Java. super() will call the constructor of the Object class.
Each class in Java implicitly extends the Object class, so you can always call super() from the constructor of any class. Again, in the Object class there is no explicit constructor; the compiler creates a default one, and the default constructor of the Object class creates the object itself.
Description
As reported in KMeans can be surprisingly slow, and it's easy to see that most of the time spent is in kmeans|| initialization. For example, in this simple example...
import org.apache.spark.mllib.random.RandomRDDs
import org.apache.spark.mllib.clustering.KMeans

val data = RandomRDDs.uniformVectorRDD(sc, 1000000, 64, sc.defaultParallelism).cache()
data.count()
new KMeans().setK(1000).setMaxIterations(5).run(data)
Init takes 5:54, and iterations take about 0:15 each, on my laptop. Init takes about as long as 24 iterations, which is a typical run, meaning half the time is just in picking cluster centers. This seems excessive.
There are two ways to speed this up significantly. First, the implementation has an old "runs" parameter that is always 1 now. It used to allow multiple clusterings to be computed at once. The code can be simplified significantly now that runs=1 always. This is already covered by
SPARK-11560, but just a simple refactoring results in about a 13% init speedup, from 5:54 to 5:09 in this example. That's not what this change is about though.
By default, k-means|| makes 5 passes over the data. The original paper at actually shows that 2 is plenty, certainly when l=2k as is the case in our implementation. (See Figure 5.2/5.3; I believe the default of 5 was taken from Table 6 but it's not suggesting 5 is an optimal value.) Defaulting to 2 brings it down to 1:41 – much improved over 5:54.
Lastly, small thing, but the code will perform a local k-means++ step to reduce the number of centers to k even if there are already only <= k centers. This can be short-circuited. However this is really the topic of
SPARK-3261 because this can cause fewer than k clusters to be returned where that would actually be correct, too.
Issue Links
- relates to
SPARK-3261 KMeans clusterer can return duplicate cluster centers
- Resolved
SPARK-11560 Optimize KMeans implementation / remove 'runs' from implementation
- Resolved
TickerHandler
One way to implement a dynamic MUD is by using “tickers”, also known as “heartbeats”. A ticker is a timer that fires (“ticks”) at a given interval. The tick triggers updates in various game systems.
About Tickers
Tickers are very common or even unavoidable in other mud code bases. Certain code bases are even hard-coded to rely on the concept of the global ‘tick’. Evennia has no such notion - the decision to use tickers is very much up to the need of your game and which requirements you have. The “ticker recipe” is just one way of cranking the wheels.
The most fine-grained way to manage the flow of time is of course to use Scripts. Many types of operations (weather being the classic example) are however done on multiple objects in the same way at regular intervals, and for this, storing separate Scripts on each object is inefficient. The way to do this is to use a ticker with a “subscription model” - let objects sign up to be triggered at the same interval, unsubscribing when the updating is no longer desired.
Evennia offers an optimized implementation of the subscription model -
the TickerHandler. This is a singleton global handler reachable from
evennia.TICKER_HANDLER. You can assign any callable (a function
or, more commonly, a method on a database object) to this handler. The
TickerHandler will then call this callable at an interval you specify,
and with the arguments you supply when adding it. This continues until
the callable un-subscribes from the ticker. The handler survives a
reboot and is highly optimized in resource usage.
Here is an example of importing
TICKER_HANDLER and using it:
# we assume that obj has a hook "at_tick" defined on itself
from evennia import TICKER_HANDLER as tickerhandler
tickerhandler.add(20, obj.at_tick)
That’s it - from now on,
obj.at_tick() will be called every 20
seconds.
You can also import function and tick that:
from evennia import TICKER_HANDLER as tickerhandler
from mymodule import myfunc
tickerhandler.add(30, myfunc)
Removing (stopping) the ticker works as expected:
tickerhandler.remove(20, obj.at_tick)
tickerhandler.remove(30, myfunc)
Note that you have to also supply
interval to identify which
subscription to remove. This is because the TickerHandler maintains a
pool of tickers and a given callable can subscribe to be ticked at any
number of different intervals.
The full definition of the
tickerhandler.add method is
tickerhandler.add(interval, callback, idstring="", persistent=True, *args, **kwargs)
Here
*args and
**kwargs will be passed to
callback every
interval seconds. If
persistent is
False, this subscription
will not survive a server reload.
Tickers are identified and stored by making a key of the callable
itself, the ticker-interval, the
persistent flag and the
idstring (the latter being an empty string when not given
explicitly).
Since the arguments are not included in the ticker’s identification, the
idstring must be used to have a specific callback triggered multiple
times on the same interval but with different arguments:
tickerhandler.add(10, obj.update, "ticker1", True, 1, 2, 3)
tickerhandler.add(10, obj.update, "ticker2", True, 4, 5)

Note that when we want to send arguments to our callback through the ticker handler, we need to specify idstring and persistent first, unless we pass our arguments as keywords, which is often more readable:
tickerhandler.add(10, obj.update, caller=self, value=118)
If you add a ticker with exactly the same combination of callback, interval and idstring, it will overload the existing ticker. This identification is also crucial for later removing (stopping) the subscription:
tickerhandler.remove(10, obj.update, idstring="ticker1")
tickerhandler.remove(10, obj.update, idstring="ticker2")
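To make the pool-and-identification model concrete, here is a minimal, illustrative sketch (not Evennia's actual implementation) of a ticker pool keyed on (interval, callback, idstring). Real tickers fire on timers; this toy version advances time manually so the behavior is easy to follow:

```python
class TickerPool:
    """Toy subscription-model ticker pool, keyed by (interval, callback, idstring)."""

    def __init__(self):
        self.subscriptions = {}  # (interval, callback, idstring) -> (args, kwargs)
        self.elapsed = 0

    def add(self, interval, callback, idstring="", *args, **kwargs):
        # same key -> overwrites the existing subscription, as in TickerHandler
        self.subscriptions[(interval, callback, idstring)] = (args, kwargs)

    def remove(self, interval, callback, idstring=""):
        self.subscriptions.pop((interval, callback, idstring), None)

    def advance(self, seconds):
        # step one second at a time, firing every subscription whose interval is due
        for _ in range(seconds):
            self.elapsed += 1
            for (interval, _cb, _id), (args, kwargs) in list(self.subscriptions.items()):
                if self.elapsed % interval == 0:
                    _cb(*args, **kwargs)

counts = {"a": 0}

def bump(key):
    counts[key] += 1

pool = TickerPool()
pool.add(10, bump, "ticker1", "a")
pool.advance(30)       # fires at t=10, 20, 30
print(counts["a"])     # 3
pool.remove(10, bump, "ticker1")
pool.advance(30)
print(counts["a"])     # still 3
```

Note how removal needs the same interval and idstring used at add time; the arguments themselves are not part of the key.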
The
callable can be on any form as long as it accepts the arguments
you give to send to it in
TickerHandler.add.
Note that everything you supply to the TickerHandler will need to be pickled at some point to be saved into the database. Most of the time the handler will correctly store things like database objects, but the same restrictions as for Attributes apply to what the TickerHandler may store.
When testing, you can stop all tickers in the entire game with
tickerhandler.clear(). You can also view the currently subscribed
objects with
tickerhandler.all().
See the Weather Tutorial for an example of using the TickerHandler.
When not to use TickerHandler
Using the TickerHandler may sound very useful but it is important to consider when not to use it. Even if you are used to habitually relying on tickers for everything in other code bases, stop and think about what you really need it for. This is the main point:
You should never use a ticker to catch changes.
Think about it - you might have to run the ticker every second to react to the change fast enough. Most likely nothing will have changed at a given moment. So you are doing pointless calls (since skipping the call gives the same result as doing it). Making sure nothing’s changed might even be computationally expensive depending on the complexity of your system. Not to mention that you might need to run the check on every object in the database. Every second. Just to maintain status quo …
Rather than checking over and over on the off-chance that something changed, consider a more proactive approach. Could you implement your rarely changing system to itself report when its status changes? It’s almost always much cheaper/efficient if you can do things “on demand”. Evennia itself uses hook methods for this very reason.
So, if you consider a ticker that will fire very often but which you expect to have no effect 99% of the time, consider handling things things some other way. A self-reporting on-demand solution is usually cheaper also for fast-updating properties. Also remember that some things may not need to be updated until someone actually is examining or using them - any interim changes happening up to that moment are pointless waste of computing time.
The main reason for needing a ticker is when you want things to happen to multiple objects at the same time without input from something else. | http://evennia.readthedocs.io/en/latest/TickerHandler.html | CC-MAIN-2018-13 | refinedweb | 1,023 | 53.81 |
KDEUI
#include <kwordwrap.h>
Detailed Description
Word-wrap algorithm that takes into account beautifulness ;)
That means:
- not letting a letter alone on the last line,
- breaking at punctuation signs (not only at spaces)
- improved handling of (), [] and {}
- improved handling of '/' (e.g. for paths)
Usage: call the static method, formatText, with the text to wrap and the constraining rectangle etc., it will return an instance of KWordWrap containing internal data, result of the word-wrapping. From that instance you can retrieve the boundingRect, and invoke drawing.
This design allows to call the word-wrap algorithm only when the text changes and not every time we want to know the bounding rect or draw the text.
Definition at line 49 of file kwordwrap.h.
Member Enumeration Documentation
Use this flag in drawText() if you want to fade out the text if it does not fit into the constraining rectangle.
Definition at line 56 of file kwordwrap.h.
Constructor & Destructor Documentation
Destructor.
Definition at line 155 of file kwordwrap.cpp.
Member Function Documentation
- Returns
- the bounding rect, calculated by formatText. The width is the width of the widest text line, and never wider than the rectangle given to formatText. The height is the text block. X and Y are always 0.
Definition at line 300 of file kwordwrap.cpp.
Draws the string t at the given coordinates; if it does not fit into maxW, the text will be faded out.
- Parameters
-
Definition at line 191 of file kwordwrap.cpp.
Draw the text that has been previously wrapped, at position x,y.
Flags are for alignment, e.g. Qt::AlignHCenter. Default is Qt::AlignAuto.
- Parameters
-
Definition at line 244 of file kwordwrap.cpp.
Draws the string t at the given coordinates; if it does not fit into maxW, the text will be truncated.
- Parameters
-
Definition at line 238 of file kwordwrap.cpp.
Main method for wrapping text.
- Parameters
-
Definition at line 40 of file kwordwrap.cpp.
- Returns
- the original string, truncated to the first line. If dots was set, '...' is appended in case the string was truncated. Bug: note that the '...' comes out of the bounding rect.
Definition at line 174 of file kwordwrap.cpp.
- Returns
- the original string, with '\n' inserted where the text is broken by the wordwrap algorithm.
Definition at line 159 of file kwordwrap.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2016 The KDE developers.
Generated on Sat Dec 3 2016 01:28:56 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/4.x-api/kdelibs-apidocs/kdeui/html/classKWordWrap.html | CC-MAIN-2016-50 | refinedweb | 430 | 60.41 |
Difference between revisions of "Snd"
Latest revision as of 16:57, 26 August 2011.
Contents.
Most common usage documentation is embedded on the application. The rightmost menu on SND's interface is the "Help" menu which contains information for most tasks needed to run SND properly. For example if you go top the Help Menu and choose "Play", A new window will show the task's description and pointers to related information and links to more in-depth information. "Play Help" on SND's Help menu reads as:
To play a sound, click the 'play' button. If the sound has more channels than your DAC(s), Snd will (normally) try to mix the extra channels into the available DAC outputs. While it is playing, you can click the button again to stop it, or click some other file's 'play' button to mix it into the current set of sounds being played. To play from a particular point, set a mark there, then click its 'play triangle' (the triangular portion below the x axis). (Use control-click here to play all channels from the mark point). To play simultaneously from an arbitrary group of start points (possibly spread among many sounds), set syncd marks at the start points, then click the play triangle of one of them. The Edit menu 'Play' option plays the current selection, if any. The Popup menu's 'Play' option plays the currently selected sound. And the region and file browsers provide play buttons for each of the listed regions or files. If you hold down the control key when you click 'play', the cursor follows along as the sound is played. In a multichannel file, C-q plays all channels from the current channel's cursor if the sync button is on, and otherwise plays only the current channel. Except in the browsers, what is actually played depends on the control panel. Use the play function to play any object." but not required, in order to accomplish these guidelines).
cd /zap
cp /usr/ccrma/lisp/src/snd/bird.scm /zap/.
Now you can open SND directly on the shell as stated before by issuing the 'snd' command:
snd &
SND's window interface should open and show only pull-down menus. Go to the 'View' menu and select 'Show Listener' option. SND listener opens only showing a prompt ">". On this prompt you type "Scheme" code which is processed and compliled by SND's s7. In order to have SND synthesize a sound with "bird.scm", you need to load the file into the interpreter and then run a function (sub-program), which in turn generates the soundfile. You do this by typing on the listener the function to load the file, and then running the returned name of the function as follows:
> (load "/zap/bird.scm")
- Note that a prompt ">" means that the listener is ready to accept another function call.
- After a carriage return (entering) '(load "/zap/bird.scm")', the listener returns:
make-birds >
Now the listener shows there is a function in SND called 'make-birds'. To get the bird's songs you need to issue a function call (run the sub-program) '(make-birds)' on SND's listener:
> (make-birds)
The soundfile window should open showing a waveform just created. To listen to this waveform you can click on the play button or alternatively use the function call '(play)' on the listener.
> (play)
A screen shot of SND with the procedures outline might look like:
Get familiar with SND's waveform interface: move the horizontal scrollbars to change the depth of the waveform, and the vertical scrollbars to change the amplitude viewing scale. The [w] button shows the waveform while the [f] button shows the spectra. Notice that both waveform and spectra change while you move the focus on your soundfile. Try them out; SND key combinations and more are also explained in SND's Help menu.
References
- Last seen on Aug 26 2011, MStation.ORG has an interview of Bill Schottstaedt and the history of SND.
- Users at PlanetCCRMA:SND has several tutorials on using SND for diverse applications as: | https://ccrma.stanford.edu/mediawiki/index.php?title=Snd&diff=12243&oldid=12215 | CC-MAIN-2018-05 | refinedweb | 686 | 71.04 |
Execute a file
#include <process.h>

int execl( const char * path,
           const char * arg0,
           const char * arg1,
           ...
           const char * argn,
           NULL );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The execl() function replaces the current process image with a new process image, specified by path, and executes it.
If you call this function from a process with more than one thread, all of the threads are terminated and the new executable image is loaded and executed. No destructor functions are called.
Upon successful completion, the st_atime field of the file is marked for update. If the exec* function fails but is able to locate the process image file, whether the st_atime field is marked for update is unspecified.
When execl() is successful, it doesn't return; otherwise, it returns -1 (errno is set).
abort(), atexit(), errno, execle(), execlp(), execlpe(), execv(), execve(), execvp(), execvpe(), _exit(), exit(), getenv(), main(), putenv(), spawn(), spawnl(), spawnle(), spawnlp(), spawnlpe(), spawnp(), spawnv(), spawnve(), spawnvp(), spawnvpe(), system() | http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/e/execl.html | crawl-002 | refinedweb | 142 | 57.27 |
Contextual fragments are fragments that are added to the composites during assembly time. That means that they are not present in the composite declarations, but a start-up decision what should be added. Once the application instance is created, it is no longer possible to modify which fragments are attached.
Typical use-case is tracing and debugging. Other potential uses are additional security or context interfaces needing access to internal mixins not originally intended for, such as GUI frameworks doing reflection on certain composites. We strongly recommend against using this feature, as it is not needed as commonly as you may think.
Constraints are not supported to be contextual at the moment.
If you want to reproduce what’s explained in this tutorial, remember to depend on the Core Bootstrap artifact:
At runtime you will need the Core Runtime artifact too. See the Depend on Polygene™ tutorial for details.
The mixins, sideeffects and concerns are added during the bootstrap phase. It is very straight-forward;
public class TraceAll
{
    public void assemble( ModuleAssembly module )
    {
        ServiceDeclaration decl = module.addServices( PinSearchService.class );
        if( Boolean.getBoolean( "trace.all" ) )
        {
            decl.withConcerns( TraceAllConcern.class );
        }
    }
}
In the example above, we add the TraceAllConcern from the Logging Library if the system property "trace.all" is true. If the system property is not set to true, there will be no TraceAllConcern on the PinSearchService.
Concerns that are added in this way will be at the top of the method invocation stack, i.e. will be the first one to be called and last one to be completed.
SideEffects that are added in this way will be the last one’s to be executed. | https://polygene.apache.org/java/3.0.0/howto-contextual-fragments.html | CC-MAIN-2022-40 | refinedweb | 274 | 56.76 |