text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Hello friends ,
I got an important coursework to submit . Let me explain, the coursework consist of a file .. which we must sort, search and we must event create a y-sort algorithm.
After much struggle, i was able to use the stringtokensier to input the data from the file as it was seperated by a tab and there are many lines in the file.
e.g Name Surname Address Age Mark Twain Missippi 25 Tom Sawyer Missippi 15 and so on .
The problem now is that i am unable to distribute the name to the structures . please help me! I have not sleep for 2 days, i don't know how to do that .. after so much of hard work i dont want to fail this module so please help.
Here my code:
#include <cstdlib> #include <iostream> #include <fstream> #include <string> using namespace std; struct Person{ string name; string surname; string address; string age; }; int main(int argc, char *argv[]) { ifstream fin; fin.open("input.txt"); char buf[100]; string lineOfData; string arraytemp[4]; int recordcounter=0; Person person_name[recordcounter]; Person person_surname[recordcounter]; Person person_address[recordcounter]; Person person_age[recordcounter]; if (fin.is_open()) { while(!fin.eof()) { getline(fin,lineOfData, '\n'); lineOfData+="\n"; cout << "\n\n\n\n\n___________________\n\n"; cout << lineOfData << "\n\n"; cout << "size of line is: " << lineOfData.size() << "\n\n"; char delims[] = "\t"; for(int i=0; i<lineOfData.size(); i++) { buf[i] = lineOfData[i]; } char *result = NULL; result = strtok( buf, delims ); for(int counter=0; result != NULL; counter++ ) { arraytemp[counter]=result; cout << "\n\n result is: " << result << "\n"; result = strtok( NULL, delims ); } }//for the while loop fin.close(); //this is for the file } else cout << "Unable to open file"; for(int i=0; i<4; i++) { cout << "\n Checking array \t" << arraytemp[i]; //person_surname[recordcounter].surname=arraytemp[1]; <<<< PROBLEM IS HERE //person_address[recordcounter].address=arraytemp[2]; //person_age[recordcounter].age=arraytemp[3]; //recordcounter++; } cout << arraytemp[0]; cout << person_name[0].name; cout << "\n\n\n\n\n\n\n"; system("PAUSE"); return EXIT_SUCCESS; }
Thanks in advance friends.
Thanks Daniweb, If daniweb was not here .. dont know what would happen, this is my first post but i got lots of answers here , Thanks. | https://www.daniweb.com/programming/software-development/threads/117604/please-help-important-string-with-structures | CC-MAIN-2018-43 | refinedweb | 363 | 57.16 |
SMILA/Documentation/WorkerManager
Contents
WorkerManager
Introduction
The WorkerManager service of SMILA provides an environment to make it easy to integrate, deploy and scale Workers, i.e. classes that implement a functionality to be coordinated by asynchronous workflows of SMILA. The WorkerManager service provides all the managements of tasks and data objects to be consumed and produced by the worker so that the implementation can focus on the real work to be done.
Overview
Workers are implemented as OSGi services that are referenced by the WorkerManager and announces its name via a getName() method. The WorkerManager then reads the worker definition from the JobManager to check if the worker is known to the JobManager and to get access worker modes or other definitions. It asks the TaskManager for available tasks to be done by this worker. If it gets one, it creates a TaskContext, that wraps up all data and facility the worker function needs to process the task:
- the task itself, including all parameters and properties.
- access to the data objects in the input and output slots of the worker. The data objects can be accessed in different ways as needed by the worker function: direct stream access, record-by-record reading and writing. The framework cares about creating only data objects that are really needed and committing objects after the function has finished successfully.
- counters to measure performance or other worker statistics. The WorkerManager already produces some basic counters measuring the execution time of the worker and amounts of data read and written. The worker function may produce additional counters as needed.
As long as the worker is performing a task, the WorkerManager keeps the task alive in the TaskManager and notifies the worker about task cancellation. When the worker finished the task processing, the WorkerManager cares about finishing the task: successfully if no error has occurred, or with as a fatal or recoverable error, based on the type of exception thrown by the worker function.
ScaleUp
The WorkerManager controls the number of tasks that are allowed for each managed worker on a node in parallel (scale-up). It does not retrieve further tasks for a worker if its scale-up limit is reached, even if a lot more tasks are waiting. Note that even if the worker scale-up limit is not yet reached the TaskManager may refuse to deliver further tasks for a worker if the global node scale-up limit specificed as taskmanager.maxScaleUp is reached already. Workers may declare themselves as runAlways. This means that the global scale-up limit is not applied to this worker.
Task result handling
There are four possible outcomes of a worker's processing:
- The perform() method returns normally. This is interpreted by the WorkerManager as a successful task execution and it will finish the task with a SUCCESSFUL task completion status. All open output data objects will be committed (if this fails: continue below depending on the exception type). The task result includes all counters produced by the task execution in the task result so that they can be aggregated by the JobManager in the job run data.
- The perform() method aborts with a RecoverableTaskException, a IOException, UnavailableException or a MaybeRecoverableException (or subclass) with isRecoverable() == true. This will be interpreted as a temporary failure to access input data or write output data to objectstore, so the task will be finished with a RECOVERABLE_ERROR task completion status and the JobManager will usually reschedule the task for a retry. Produced counters will be ignored.
- The perform() method aborts with a PostponeTaskException. This means that the worker cannot yet perform this task for some reason but it should be processed later. The task will be readded to the todo queue for this worker and redelivered later (but very soon, usually).
- The perform() method aborts with any other exception (including all RuntimeExceptions). This will be interpreted as a sign that the input data cannot be processed at all (because it is corrupted or contains invalid values, for example). Such tasks will be finished with a FATAL_ERROR completion status and not rescheduled. Produced counters will be ignored.
There is currently no other way for the workers to influence the task result.
Input/Output Data Objects
A task assigns "data objects" to the input and output slots of a worker that represents objects in a objectstore. The WorkerManager framework provides so called IODataObjects that encapsulate the access to these objectstore objects, so the worker does not need to know in detail how to work with the objectstore API. Apart from encapsulating the objectstore API and taking care of proper committing and cleanup after task processing, these IODataObjects also provide higher-level access methods that makes it easier to handle record bulk or key-value objects, for example.
Their wrappers can be accessed via the getInputs() and getOutput() components of the task context. Of course, these inputs and outputs managers give also access to the plain bulk info objects using getDataObject methods. However, in this case the worker function must clean up and commit by itself after finishing processing, so this should probably be used if you need only the object ID of an object, but not the actual content.
Available input wrappers are:
- StreamInput: provides direct access to the java.io.InputStream for reading from objectstore. Play with each single byte as you like.
- RecordInput: provides access to objects like record bulks that are sequences of BON records. You can get single records (or Anys) from the objects, one at a time, by calling the getRecord() method. When end-of-stream is reached, null is returned. You can also access the IpcStreamReader BON parser in case you do not want to read complete records at once. However, you should not mix up getRecord() calls with calls to the IpcStreamReader as your direct calls will probably confuse the record parsing.
Available output wrappers are:
- StreamOutput: provides direct access to the java.io.OutputStream for writing to objectstore.
- RecordOutput: simplified access for creating record bulks by writing one Record (or Any) at a time. You can also directory access the underlying IpcStreamWriter BON writer, but again you should not mix up direct acess the BON writer with the writeRecord/Any() methods.
You can create only a single IO wrapper for each data object. On the second call, only null will be returned.
For the Stream and Record wrappers the Inputs/Outputs classes provide special getAs... methods. For other wrappers you can use the generic getAs...(String slotName, Class wrapperClass) methods. Additionally, this allows you to create your own input/output wrapper classes and get them managed by the Inputs/Outputs framework.
Canceling Tasks
If the WorkerManager receives an 404 NOT FOUND response when trying to keep-alive a currently processed task, it sets a canceled flag in the associated TaskContext object. The worker should regularly check this flag to see if it should still continue to process the task or if it can abort. If so, it can just return (after releasing and cleaning-up used resources that are not part of the task context, of course), the WorkerManager will not commit the results in this case and will not try to finish the task. | http://wiki.eclipse.org/SMILA/Documentation/WorkerManager | CC-MAIN-2021-10 | refinedweb | 1,198 | 52.6 |
Let's start building our Meteor Angular 1 Socially app.
In this step, we will:
First step — let's install Meteor!
Open your command line and paste this command:
$ curl | sh
If you are on a Windows machine, go here to install Meteor.
Now let's create our app — write this in the command line:
$ meteor create socially
Now let's see what we got. Go into the new folder:
$ cd socially
Run the app like so:
$ meteor => Started proxy => Started MongoDB. => Started your app. => App running at:
Now go to and look at the amazing app that's running on your computer!
We now have a fully functional app which includes both a server and a client!
The default Meteor app starts life with four files, two
js, one
html and one
css file.
We are going to add our own files for this tutorial. So let's start by deleting the following files:
- client/main.css (delete) - client/main.html (delete) - client/main.js (delete) - server/main.js (delete)
Now we can start building our app.
Create a new
index.html file and place this code inside. Then run the app again.
1
2
3
<body>
<p>Nothing here</p>
</body>
Note that there is no
<html> tag and no
<head> tag - it's very simple.
This is because of how Meteor structures and serves files to the client.
Meteor scans all the HTML files in your application and concatenates them together.
Concatenation means merging the content of all
HTML,
HEAD and
BODY tags found inside these HTML files together.
So in our case, Meteor found our
index.html file, found the
BODY tag inside and added it's content to the
BODY tag of the main generated file.
(right-click -> inspect element on the page of your Meteor app in the browser to see the generated file)
It's time to add Angular 1 to our stack!
Because we decided to work with AngularJS in the client side, we need to remove the default UI package of Meteor, called
Blaze.
We also need to remove Meteor's default ECMAScript2015 package named
ecmascript because Angular-Meteor uses a package named
angular-babel in order to get both ECMAScript2015 and AngularJS DI annotations.
So let's remove it by running:
$ meteor remove blaze-html-templates $ meteor remove ecmascript
Now let's add the Angular 1 package to Meteor, back in the command line, launch this command:
$ meteor npm install --save angular angular-meteor babel-runtime $ meteor add angular-templates pbastowski:angular-babel
That's it! Now we can use Angular 1's power in our Meteor app.
To start simple, create a new file called
main.html under the project's client folder, this will be our main
HTML template page.
Then move the
p tag from
index.html into it:
1
<p>Nothing here</p>
Now let's include that file into our main
index.html file:
1
2
3
<body>
<div ng-include</div>
</body>
But if you load this in your browser, you won't see anything. That's because we still need to create the actual Angular app, which we'll do next.
Note: paths are absolute, not relative! You should always specify a full path regardless which file you're at. The path will begin from the app's root dir.
E.g. let
main.html, a file in the app's root dir, should be loaded like so:
<div ng-</div>
Angular 1 apps are actually individual modules. So let's create our main module.
Create a new
main.js file on the project's root folder.
Here you see another example of Meteor's power and simplicity - no need to write boilerplate code to include that file anywhere. Meteor will take care of it by going through all the files in the
socially folder and including them automatically.
One of Meteor's goals is to break down the barrier between client and server, so the code you write can run everywhere! (more on that later). But we need Angular 1's power only in the client side, so how can we do that?
There are a few ways to tell Meteor to run code only on the client/server/mobile side.
The simplest way is to use - Meteor.isClient variable.
Everything inside this
if statement will only run on the client side.
We recommend you to use special directories to keep files in the right place. To read more about it, you can go to "Application structure" chapter of The Official Meteor Guide.
And let's continue defining our Angular 1 application module. Give it the name
socially and add
angular-meteor module as a dependency:
1
2
3
4
5
6
import angular from 'angular';
import angularMeteor from 'angular-meteor';
angular.module('socially', [
angularMeteor
]);
As you can see, we imported two modules,
angular and
angular-meteor.
Since the second one exports the name of angular module it easier to add it as a dependency.
And use the same application name in the
ng-app directive in
index.html:
1
2
3
<body ng-
<div ng-include</div>
</body>
Now run the app.
Everything is the same, so now inside our
main.html let's add an Angular 1 expression:
1
<p>Nothing here {{ 'yet' + '!' }}</p>
Run the app again and the screen should look like this:
Nothing here yet!
Angular 1 interpreted the expression like any other Angular 1 application.
Try adding a new expression to the
main.html that will do some math:
<p>1 + 2 = {{dstache}} 1 + 2 }}</p>
Go to step 1 to add some content to our application. | http://angular-meteor.com/tutorials/socially/angular1/bootstrap | CC-MAIN-2017-26 | refinedweb | 941 | 66.23 |
A Shotgun, Shotgun ://
shotgun://
Pro://,.
21 Comments
That looks pretty cool - will have to have a play with this one!
Any chance of some notes on how to register a new protocol under OSX?
Note for all, Hugh got this working on OS X and created a forum post with details here:
Thanks Hugh!
I'm new to both Python and Shotgun. I understand that on the last line of the windows reg key:
@="\"python\" \"sgTriggerScript.py\" \"%1\""
...the "sgTriggerScript.py" is the doc that Python will attempt to run when Python is opened. Is there a specific location that the sgTriggerScript doc needs to be? I'm not very good at setting enviornmental variables, etc in Python yet.
Python does open momentarily, but gives me this error:
"Python.exe: can't open file 'sgTriggerScript.py': [Errno 2] No such file or directory"
Any hints would help a lot! Thanks.
I would suggest that you probably want to define the full path to sgTriggerScript.py in the reg key...
Something like:
@="\"python\" \"C:\\tech\\python\\sgTriggerScript.py\" \"%1\""
And this is one of those situations where I really hate Windows.... That string will be parsed a couple of times... Each time, '\' will be replaced with '\', which is why we need 4 \s in there to end up with just one.
Thanks for the quick reply. I'm not getting any more errors. But something is still not working...
I wonder if this is really a question for the Python forms, but maybe it will help someone else like me.
I wrote a script that I was hoping would dump a log file, but nothing seems to happen. Here's the sgTriggerScript I'm using:
def main(script, plate):
import logging
LOG_FILENAME = 'mylog.out'
logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG)
logging.debug(script, plate)
if __name__ == '__main__':
main(*sys.argv)
_ def main(script, plate):_
import logging
LOG_FILENAME = 'mylog.out'
logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG)
logging.debug(script, plate)
_
if __name__ == '__main__': _
main(*sys.argv)
Again, any hints would be very helpful. =)
Ah! Excuse me. Please ignore the italicized code in my previous post.
If you run from a command prompt:
python sgTriggerScript.py
Does it do the right thing and write something out to the log file?
Oh, you might want to specify the full path to the log file too - otherwise it'll end up wherever the script is run from, which could be anywhere.
Success! I was able to get a log file to generate from both the command line AND a custom menu item.
Here is the command:
C:\Python26>python C:\pyTest\sgTriggerScript.py
Here is the reg key I used for handling the custom protocal:
[HKEY_CLASSES_ROOT\shotgun]
@="URL:shotgun Protocol"
"URL Protocol"=""
[HKEY_CLASSES_ROOT\shotgun\shell]
[HKEY_CLASSES_ROOT\shotgun\shell\open]
[HKEY_CLASSES_ROOT\shotgun\shell\open\command]
@="\"C:\\Python26\\python\" \"C:\\pyTest\\sgTriggerScript.py\" \"%1\""
And, here is the Python script:
import sys
def main(script):
import logging
LOG_FILENAME = 'C:\Python26\mylog.out'
logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG)
logging.debug(script)
if __name__ == '__main__':
main(sys.argv[0])
import sys
def main(script):
import logging
LOG_FILENAME = 'C:\pyTest\mylog.out'
logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG)
logging.debug(script)
if __name__ == '__main__':
main(sys.argv[0])
Unfortunately, I still don't think I've grasped how the action menu item is actually passing data to sys.argv. The log file doesn't seem to generate when I use "main(*sys.argv)". It only works with "main(sys,argv[0])". @Huge Thanks for walking me through this, BTW. It is a huge help.
(I'm sorry, I don't know why it the code I copy into the comment gets posted twice.)
Has anybody faced problems when the number of rows in the entity exceeds a certain limit?
We have an entity that has around 250 records.
The custom protocol was working fine (executes a python script) until 70 records (atleast from what I remember)
Now with 250+ records, when we try to select the Gear menu option, nothing shows up.
However, when I apply a filter and have less records, the script executes.
My guess is that the custom protocol (URL) is getting too lengthy.
Also, I have tried changing the option in the status bar to show only 25 records per page and it still does not work, until I apply a filter.
Any solutions or workarounds are appreciated.
I've seen that as well. I think it is because the ids of ALL matching entities gets passed on through. We use the full ID list in the case when selected ids is empty, but that is the only time. It would be nice if the setup behaved better when custom menu items are used on a page where a LOT of entities match the filter.
-r
What browser/OS are you using when it fails with a lot of records? There is no official limit to the length of a url, according to the spec, though various implementations have different limits (Firefox, Safari, and Chrome all support at least 40,000 characters, but Internet Explorer is limited to 2083 chars). The various points that url is passing through (OS when it handles the protocol, and then the script when it receives it) might have length issues as well.
As Rob mentions, currently when using an action menu item we are sending *both* the list of selected ids, and the list of all ids that match the query. If we made this a pref on the ActionMenuItem (only send selected ids), that could at least mitigate the problem if you just want to act on the selected entities anyway. But I'd also like to see if we can make longer urls work in setups where they are failing now!
I've seen it in Firefox/Safari on OSX (10.5 and 10.6) and in Firefox on Linux. Also with large result sets, I've seen Firefox bring up that little unresponsive script dialog (firebug seems to say it happens while iterating through the list of rows, or something like that in the js). Even when it works it does slow the browser way down during the launch through the protocol handler.
-r
Would having it only send the selected ids be a solution for your usage? I'm imagining a pref that greys out the menu option unless you have something selected (like how "Edit Selected..." works).
I'm sure we can make sending very large result sets work reliably, but might require restructing how the data is passed around (send just the query info instead of the full list of ids, or stash the list of id's on server and pass a reference to that to script, like url shortening, which then gets the full data through a normal api request).
If the ActionMenuItem is calling another web server (instead of the custom protocol), we're sending the values through in a POST request instead of GET, so not tacked onto the url and doesn't have these length restrictions.
Thanks for the feedback!
For our case (not sure if this works in general) I'd LOVE the following:
If something is selected, then those ids are sent through
If nothing is selected, then all ids are sent through
If more than X (configurable?) are being sent through, there is an alert saying something like "You are selecting more than X <things>, this could be slow" with continue or cancel.
The only time we use all ids is when there is no selection (we've run into issues with wanting to run menu items on things that span multiple pages, and this is our workaround when cranking up the # of entities displayed on the page would just be too slow).
-r
Hi Isaac,
I do like the idea of AMIs having a flag to say whether they can only work when items are selected. There are quite a few that shouldn't work without something selected, and to have them greyed out would be fantastic.
Hi Isaac,
We are using Firefox on Windows XP 64 bit machines.
Having an option to disable sending all the IDs sounds like a good idea.
We currently have scenarios where we need to do some operations on a large number of records and we have built an python utility that will get all the information from Shotgun and then work on that result.
We are expecting a lot of entities getting filled with more than 200 records per project and would like to have some kind of a setting that would make the URL short.
OK, just did some tests. I'm not seeing any limit on the length of urls on the Mac side (50k chars works fine). Works for Firefox 3.6, Safari 4, and Chrome 4.
On the Windows side (XP 32bit), Firefox 3.0, Firefox 3.6, and Chrome 4 all have a 2048 char limit (like Internet Explorer does for urls in general) when launching custom protocol handlers. They *don't* have that limit in general for urls, so seems to be related to how they are passing the value to the system. I'm not sure if there is another way to setup a protocol handler that more directly launches the script. The reg setting we've been using adds an entry like:
[HKEY_CLASSES_ROOT\shotguntest\shell\open\command]@="<exe\_to\_run>"
but possibly there is another way to open the script directly instead of passing a command through the shell? Seems to be suggesting that on these links:
But looks like we need to do something about not making urls over 2048 chars, so first step will be adding option to only send selected ids (nice to have that option in any case!), and then we'll figure out another way to pass all the ids (either pass query info, or create a token that the script can pass back through api to get the full list).
Hi Isaac,
Any update on this side? We are facing the same 2048 char issue here :/
Thanks
Nicolas
Is this thread still maintained? I am having trouble getting a custom protocol handler to work under linux (Kubuntu 12.10). I ran gconftool as outlined above but when clicking one of my custom links I gets the same old "The address wasn't understood" error. I did get rvlink to work and remember there was a bit of pain involved, but can't remember the details.
I also tried setting "network.protocol-handler.expose.shotgunpy;true" in Firefox' about:config to no avail.
Any help would be greatly appreciated!
frank
FYI: Link is broken. | https://support.shotgunsoftware.com/hc/en-us/articles/219031308-Launching-applications-using-custom-browser-protocols | CC-MAIN-2018-22 | refinedweb | 1,788 | 72.05 |
Hide Forgot
Description of problem:
not sure whether bug or feature to mitigate dataloss in doubtful situations.
@f17 running virt-install. requested system is serial console minimal f19.
it's possible to ctrl+c sigint the installation process. result is several pages of errors, screen redraws and such.
below is the
[ OK ] Reached target Paths.
Starting installer, one moment...
^CTraceback (most recent call last):
File "/sbin/anaconda", line 677, in <module>
from pyanaconda import geoloc
import timezone
File "/usr/lib64/python2.7/site-packages/pyanaconda/timezone.py", line 31, in <module>
from pyanaconda import localization
File "/usr/lib64/python2.7/site-packages/pyanaconda/localization.py", line 32, in <module>
import langtable
File "/usr/lib/python2.7/site-packages/langtable.py", line 1124, in <module>
__module_init = __ModuleInitializer()
File "/usr/lib/python2.7/site-packages/langtable.py", line 1118, in __init__
_init()
File "/usr/lib/python2.7/site-packages/langtable.py", line 1113, in _init
_read_file(datadir, 'languages.xml', LanguagesContentHandler())
File "/usr/lib/python2.7/site-packages/langtable.py", line 626, in _read_file
_expat_parse(file, sax_handler)
File "/usr/lib/python2.7/site-packages/langtable.py", line 608, in _expat_parse
parser.ParseFile(file)
File "/usr/lib/python2.7/site-packages/langtable.py", line 127, in characters
if self._save_to is None:
KeyboardInterrupt
Pane is dead
if self._save_to is None:
KeyboardInterrupt
Pane is dead
[anaconda] 1:main* 2:shell 3:log 4:storage-log 5:program-log
Created attachment 779159 [details]
screenshot showing KeyboardInterrupt after pressing ctrl-C during installer startup in text mode
Confirmed.
The response to ctrl-C is inconsistent -- sometimes it is ignored, sometimes it generates a "Pane is dead" with no traceback, sometimes it generates a "Pane is dead" with a traceback, all depending on where in the install process ctrl-C is pressed.
What is the expected behavior?
Tested by entering "text" on the kernel command line with:
$ qemu-kvm -m 4096 -hda f19-test-2.img -cdrom ~/xfr/fedora/F19/Fedora-19-x86_64-DVD.iso -vga std -boot menu=on
This very well may be a bug, but ctrl-c behavior is definitely not high up on our priority list. anaconda is kind of a special process - you can just reboot if you want. We do not have any formalized behavior for this, and I can't really see getting to it. | https://bugzilla.redhat.com/show_bug.cgi?id=980442 | CC-MAIN-2021-31 | refinedweb | 385 | 52.05 |
In this blog, I am going to discuss how you can schedule tasks with the help of akka quartz scheduler. Also, we can use akka scheduler to execute tasks but akka scheduler is not design for the long-term scheduling. Instead of this, we use akka quartz scheduler.
With the help of akka quartz scheduler, we can execute the task on the basis of cron expression. Suppose we want to execute tasks every first day of the month so akka quartz scheduler is a better option than a simple akka scheduler.
Let’s understand the working of akka quartz scheduler with the help of code.
First you have to create the actor like the following.
import akka.actor.Actor import ScheduleActor.ExecuteTask class ScheduleJob extends Actor with LazyLogging { def receive: PartialFunction[Any, Unit] = { case ExecuteTask => logger.info("i am going to execute every first day of the month") case _ => logger.info("invalid message") } } object ScheduleJob { case object ExecuteTask }
In the above code, we create an actor with the name ScheduleJob. Also, this actor will perform the task of printing a message on every first day of the month. So let’s see how you can schedule this actor to perform its task on every first day of the month with the help of the code.
import java.util.TimeZone import ScheduleActor.ExecuteTask import akka.actor.{Actor, ActorSystem, Props} import com.typesafe.akka.extension.quartz.QuartzSchedulerExtension object Driver extends App { val system = ActorSystem("SchedulerSystem") implicit val executionContext: ExecutionContextExecutor = system.dispatcher val scheduler = QuartzSchedulerExtension val schedulerActor = system.actorOf(Props(classOf[ScheduleJob]), "Schedule Job") QuartzSchedulerExtension.get(system).createSchedule("monthlyScheduler", None, "0 0 0 1 1/1 ? *", None, TimeZone.getTimeZone("Asia/Calcutta")) QuartzSchedulerExtension.get(system).schedule("monthlyScheduler", schedulerActor, ExecuteTask) }
In the above code, First we have created the ActorSystem and after that, we have created the actorRef of scheduleJob actor. So that we can send messages to ScheduleJob actor. For akka quartz scheduler we need to import the QuartzSchedulerExtension class. So that we can create scheduler and execute that scheduler. After that, we call the method createSchedule of the QuartzSchedulerExtension class. And then we pass the name of the scheduler, cron expression and time zone as the parameters of the method. Always remember you have to pass the valid cron expression otherwise this method will throw a runtime exception. So in the above code, We pass the cron expression of executing the task on every first day of the month at midnight.
After that, we call the method scheduler of QuartzSchedulerExtension which is used to schedule the created scheduler with the actor. And then we pass the name of the scheduler, actor ref of the actor and the message that we want to send to the actor as the parameters of the method.
So whenever we run the application, the message will be printed on every first day of the month at midnight.
That’s all and Thank you for reading this blog.
References | https://blog.knoldus.com/working-with-akka-quartz-scheduler/ | CC-MAIN-2021-04 | refinedweb | 491 | 57.16 |
Level of Difficulty: Beginner – Senior.
There are many different workflow and automation suites/platforms available out there (some of which include IFTTT, Power Automate and Tonkean) that allow users to interact with their YouTube connector. Most of these workflows classify the functions within the connectors as either a Trigger or an Action. A trigger would be seen as an event that “kicks off” the workflow/process, whereas an action would be an event (or a set of events) that should be executed once the workflow/process has been triggered.
Many of these workflows make use of APIs to get their triggers and actions functioning. There is one small problem though… They don’t always have the predefined triggers or actions that we might be looking to use. Platforms like IFTTT and Power Automate do not yet have a “When an item is added to a Playlist” trigger. Not a train smash though… In this post, we work through how to monitor a YouTube playlist for the addition of new items using Python and the YouTube Data API.
What are the steps?
The steps that we will be following are:
- Get a Developer Key
- Create Project
- Create Credentials
- Get API Key
- Create the Python Script
- Import Python Libraries
- Obtain Playlist ID
- Query Playlist Items
- Process New Items
Deep Dive
Let’s dive deeper into the steps listed above.
Please note: This will require a YouTube Playlist to be created if it doesn’t already exist.
Get a Developer Key
In order to use the YouTube Data API, a developer key needs to be obtained through this portal.
Create Project
You’ll first need to create a project by either clicking on “Create Project” if you have the option, or by clicking on “Select a Project” proceeded by “New Project”:
Create Credentials
Once you’ve selected an option to create a new project, you’ll be prompted to enter a name. Thereafter, you may click “Create”:
After the redirect, you should be focused on “Credentials” where you can add a new API key by selecting the “Create Credentials” option:
Get Key
Next, copy the API key as we will need it to get the Python script working properly:
Create Python Script
Install Libraries
Now, let’s switch to Python and install the correct libraries before we can import them:
!pip install google-api-python-client !pip install google_auth_oauthlib
Import Libraries and Instantiate Variables
The following libraries should be imported:
from googleapiclient.discovery import build from googleapiclient.errors import HttpError from oauth2client.tools import argparser import pandas as pd import numpy as np import requests import json
Next, let’s instantiate our first three variables needed to work with the YouTube Data API:
DEVELOPER_KEY = '<insert key>' YOUTUBE_API_SERVICE_NAME = 'youtube' YOUTUBE_API_VERSION = 'v3'
Obtain Playlist ID
To get familiar with how the Google documentation works, let’s explore how to get a list of playlists, here, using the “Try it” function:
The documentation explains what parameters are required and which are optional. For the sake of getting a list of my own playlists, I added the following values before selecting “Execute”:
You should receive a 200 response with information regarding your playlists:
By selecting “Show Code” shown above, you should be able to select “Python” to see the Python Code if you wanted to add it to the automation script:
Once you have the ID of the playlist that you’d like to monitor, assign it to a variable:
playlist_id = '<insert id>'
Query Playlist Items
There is a max result limit of 50 results per call to the API which means that the results will need to be paged if there are more than 50 items in a playlist (multiple calls will need to be made to get all the results, 50 at a time). The response will contain a page token if there is a next page.
Now, let’s create a method that allows for paging through results:
# Get all items in specified playlist def get_playlist_items(page_token): # Auth with YouTube service youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY) # Call the playlistItems.list method to retrieve results matching the specified query term. request = youtube.playlistItems().list( part="snippet,contentDetails", pageToken=page_token, maxResults=50, playlistId=playlist_id ) response = request.execute() return response
Process New Items
In the true spirit of automation workflows/processes, if the trigger is “new items found in a playlist”, then we need actions to execute once that is found to be true. We can encapsulate these actions into a “Process New” method:
# process any items that were not found in the previous set of results def process_new(df_old, df_new): df_diff = df_new.set_index('title').drop(df_old['title'], errors='ignore').reset_index(drop=False) print(len(df_diff)) for index, element in df_diff.iterrows(): print("New Item Added: " + str(element['title']).encode('utf-8'))
Let’s tie the top two methods together through a “main” method code snippet. Make sure you have an “items.xlsx” file that records all of the items that are in the playlist:
isEnd = False page_token = None df = pd.DataFrame() # instantiate blank dataframe df_history = pd.read_excel('items.xlsx', headers=False) # read history before querying new results so that the new records may be identified while not isEnd: playlist_items = get_playlist_items(page_token) current_count = playlist_items['pageInfo']['totalResults'] # if there is a page token, use it for the next call or assign it back to None if 'nextPageToken' in playlist_items.keys(): page_token = playlist_items['nextPageToken'] else: isEnd = True page_token = None # write playlist item information to the dataframe for item in playlist_items['items']: temp_df = pd.DataFrame.from_dict(item) temp_df = temp_df[['snippet']].transpose() df = df.append(temp_df) df.to_excel('items.xlsx') # write the dataframe to excel process_new(df_history, df) # process the new items
Did this work for you? Feel free to drop a comment below or reach out to me through email, jacqui.jm77@gmail.com.
The full Python script is available on Github here. | https://thejpanda.com/2021/01/21/automation-youtube-playlist-monitoring-using-python/ | CC-MAIN-2022-21 | refinedweb | 974 | 56.69 |
fputc, putc
From cppreference.com
Writes a character
ch to the given output stream
stream. putc() may be implemented as a macro and evaluation
stream more than once, so the corresponding argument should never be an expression with side effects.
Internally, the character is converted to unsigned char just before being written.
[edit] Parameters
[edit] Return value
On success, returns the written character.
On failure, returns EOF and sets the error indicator (see ferror()) on
stream.
[edit] Example
putc with error checking
Run this code
#include <stdio.h> #include <stdlib.h> int main(void) { int ret_code = 0; for (char c = 'a'; (ret_code != EOF) && (c != 'z'); c++) ret_code = putc(c, stdout); /* Test whether EOF was reached. */ if (ret_code == EOF) if (ferror(stdout)) { perror("putc()"); fprintf(stderr,"putc() failed in file %s at line # %d\n", __FILE__,__LINE__-7); exit(EXIT_FAILURE); } putc('\n', stdout); return EXIT_SUCCESS; }
Output:
abcdefghijklmnopqrstuvwxy | http://en.cppreference.com/w/c/io/fputc | CC-MAIN-2015-18 | refinedweb | 146 | 50.84 |
Episode #47: PyPy now works with way more C-extensions and parking your package safely
Published Thurs, Oct 12, 2017, recorded Wed, Oct 11, 2017.
Sponsored by DigitalOcean. They just launched Spaces, get started today with a free 2 month trial of Spaces by going to do.co/python
Brian #1: PyPy v5.9 Released, Now Supports Pandas, NumPy
- NumPy and Pandas work on PyPy2.7 v5.9
- Cython 0.27.1 (released very recently) supports more projects with PyPy, both on PyPy2.7 and PyPy3.5 beta
- Optimized JSON parser for both memory and speed.
- CFFI updated
- Nice to see continued improvements and work on PyPy
Michael #2: WTF Python?
- Python, being awesome by design high-level and interpreter-based programming language, provides us with many features for the programmer's comfort.
- But sometimes, the outcomes of a Python snippet may not seem obvious to a regular user at first sight.
- Here is a fun project attempting to collect such classic and tricky examples of unexpected behaviors in Python and discuss what exactly is happening under the hood!
- Examples:
- I’m thinking of doing some fun follow on projects with this. More on that later.
Brian #3: Python Exercises
- “… focus on the language itself and the standard library.”
- Some non-obvious Python exercises to help hone your Python skills, and possibly use in coding exercises of a job interview or maybe pre-interview screen.
- Topics
- Basic syntax
- Text Processing
- OS Integration
- Functions
- Decorators & Generators
- Classes, Modules,
- Exceptions, Lists, Dictionaries, Multiprocessing
- & Testing! always including testing when ~~interviewing someone~~ practicing your coding.
Michael #4: Exploiting misuse of Python's "pickle"
- If.
- this blog post will describe exactly how trivial it is to exploit such a service, using a simplified version of the code I recently encountered as an example.
- Executing Code: So, what can we do with a vulnerable service? Well, pickle is supposed to allow us to represent arbitrary objects. An obvious target is Python’s
subprocess.Popenobjects!
Brian #5: A Complete Beginner's Guide to Django
- Lots of Django tutorials already, but this may appeal to folks with a more academic bent.
- Complete with wireframes, UML class hierarchies and use case diagrams.
- Series with 6 parts done, a 7th part planned, which will be the last part.
- Some fun comic like drawings, and lots of screenshots.
Michael #6: pypi-parker
- Helper tooling for parking PyPI namespaces to combat typosquatting.
- pypi-parker lets you easily park package names on PyPI to protect users of your packages from typosquatting.
- Typosquatting is a problem: in general, but also on PyPI.
- There are efforts being taken by pypa to protect core library names, but this does not (and really cannot and probably should not attempt to) help individual package owners.
- For example,
reqeustsrather than
requests, or
crytpographyrather than
cryptography.
- Why? Self-serve is a good thing. Let's not try and get rid of that. Work with it instead.
- What? pypi-parker provides a custom distutils command park that interprets a provided config file to generate empty Python package source distributables. These packages will always throw an ImportError when someone tries to install them. You can customize the ImportError message to help guide users to the correct package.
Our news
Michael:
- Just launched freemongodbcourse.com Come and sign up to learn MongoDB and some Python
- Python3 usage has doubled in the past year (thanks Donald Stufft) | https://pythonbytes.fm/episodes/show/47/pypy-now-works-with-way-more-c-extensions-and-parking-your-package-safely | CC-MAIN-2018-30 | refinedweb | 563 | 58.18 |
[Harald Meland] > Please bear in mind that this is the first time I'm trying to do any > CGI-related programming at all, but I still _think_ this patch is good > (from reading the CGI/1.1 spec, and testing quickly that it doesn't > break my web interface). I was slightly wrong about the patch being perfectly good -- the Utils.CGIpath() function would, if both of the "proper" methods for getting at the path failed, fall back to ("/mailman/admin/" + list_name), regardless of which CGI script called it. This would, obviously, be wrong if called from e.g. "admindb". It could also be argued that referencing the (hopefully defined) global variable list_name from CGIpath() is unnecessarily risky. Below is a patch (against current CVS) which fixes these problems by operating in a manner very similar to os.environ.get(), i.e. letting the caller supply the complete fallback value. [ I'm not really sure that putting a pretty special-purpose function like CGIpath() into the generic Utils module is the Right Thing -- but I couldn't see anywhere else where it would fit better. ] Index: Mailman/Utils.py =================================================================== RCS file: /projects/cvsroot/mailman/Mailman/Utils.py,v retrieving revision 1.62 diff -u -r1.62 Utils.py --- Utils.py 1999/01/14 04:07:28 1.62 +++ Utils.py 1999/02/24 20:26:00 @@ -664,3 +664,22 @@ reraise(IOError, e) finally: os.umask(ou) + +def CGIpath(env, fallback=None): + """Return the full virtual path this CGI script was invoked with. + + Newer web servers seems to supply this info in the REQUEST_URI + environment variable -- which isn't part of the CGI/1.1 spec. + Thus, if REQUEST_URI isn't available, we concatenate SCRIPT_NAME + and PATH_INFO, both of which are part of CGI/1.1. + + Optional second argument `fallback' (default `None') is returned if + both of the above methods fail. + + """ + if env.has_key("REQUEST_URI"): + return env["REQUEST_URI"] + elif env.has_key("SCRIPT_NAME") and env.has_key("PATH_INFO"): + return (env["SCRIPT_NAME"] + env["PATH_INFO"]) + else: + return fallback Index: Mailman/Cgi/admin.py =================================================================== RCS file: /projects/cvsroot/mailman/Mailman/Cgi/admin.py,v retrieving revision 1.31 diff -u -r1.31 admin.py --- admin.py 1999/01/09 05:57:17 1.31 +++ admin.py 1999/02/24 20:26:01 @@ -123,8 +123,8 @@ text = Utils.maketext( 'admlogin.txt', {"listname": list_name, - "path" : os.environ.get("REQUEST_URI", - '/mailman/admin/' + list_name), + "path" : Utils.CGIpath(os.environ, + '/mailman/admin/' + list_name), "message" : message, }) print text Index: Mailman/Cgi/admindb.py =================================================================== RCS file: /projects/cvsroot/mailman/Mailman/Cgi/admindb.py,v retrieving revision 1.9 diff -u -r1.9 admindb.py --- admindb.py 1999/01/09 06:22:44 1.9 +++ admindb.py 1999/02/24 20:26:01 @@ -112,8 +112,8 @@ text = Utils.maketext( 'admlogin.txt', {'listname': list_name, - 'path' : os.environ.get('REQUEST_URI', - '/mailman/admindb/' + list_name), + 'path' : Utils.CGIpath(os.environ, + '/mailman/admindb/' + list_name), 'message' : message, }) print text -- Harald | https://mail.python.org/pipermail/mailman-developers/1999-February/005452.html | CC-MAIN-2016-30 | refinedweb | 483 | 53.37 |
Do you know how fast your code is? Is it faster than it was last week? Or a month ago? How do you know if you accidentally made a function slower by changes elsewhere? Unintentional performance regressions are extremely common in my experience: it’s hard to unit test the performance of your code. Over time I have gotten tired of playing the game of “performance whack-a-mole”. Thus, I started hacking together a little weekend project that I’m calling vbench. If someone thinks up a cleverer name, I’m all ears.
Link to pandas benchmarks page produced using vbench
What is vbench?
vbench is a super-lightweight Python library for running a collection of performance benchmarks over the course of your source repository’s history. Since I’m a GitHub user, it only does git for now, but it could be generalized to support other VCSs. Basically, you define a benchmark:
from pandas import *
import pandas.util.testing as tm
import random
import numpy as np
"""
setup = common_setup + """
N = 100000
ngroups = 100
def get_test_data(ngroups=100, n=N):
unique_groups = range(ngroups)
arr = np.asarray(np.tile(unique_groups, n / ngroups), dtype=object)
if len(arr) < n:
arr = np.asarray(list(arr) + unique_groups[:n - len(arr)],
dtype=object)
random.shuffle(arr)
return arr
df = DataFrame({'key1' : get_test_data(ngroups=ngroups),
'key2' : get_test_data(ngroups=ngroups),
'data' : np.random.randn(N)})
def f():
df.groupby(['key1', 'key2']).agg(lambda x: x.values.sum())
"""
stmt2 = "df.groupby(['key1', 'key2']).sum()"
bm_groupby2 = Benchmark(stmt2, setup, name="GroupBy test 2",
start_date=datetime(2011, 7, 1))
Then you write down the information about your repository and how to build any relevant DLLs, etc., that vary from revision to revision:
REPO_URL = 'git@github.com:wesm/pandas.git'
DB_PATH = '/home/wesm/code/pandas/gb_suite/benchmarks.db'
TMP_DIR = '/home/wesm/tmp/gb_pandas'
PREPARE = """
python setup.py clean
"""
BUILD = """
python setup.py build_ext --inplace
"""
START_DATE = datetime(2011, 3, 1)
Then you pass this info, plus a list of your benchmark objects, to the BenchmarkRunner class:
BUILD, DB_PATH, TMP_DIR, PREPARE,
run_option='eod', start_date=START_DATE)
runner.run()
Now, the BenchmarkRunner makes a clone of your repo, then runs all of the benchmarks once for each revision in the repository (or some other rule, e.g. I’ve set run_option='eod' to only take the last snapshot on each day). It persists the results in a SQLite database so that you can rerun the process and it will skip benchmarks it’s already run (this is key when you add new benchmarks, only the new ones will be updated). Benchmarks are uniquely identified by the MD5 hash of their source code.
This is the resulting plot over time for the above GroupBy benchmark related to some Cython code that I worked on late last week (where I made a major performance improvement in this case):
Here is a fully-formed vbench suite in the pandas git repository.
Kind of like codespeed and speed.pypy.org?
Before starting to write a new project I looked briefly at codespeed and the excellent work that the PyPy guys have done with speed.pypy.org. But then I started thinking, you know, Django web application, JSON requests to upload benchmark results? Seemed like far too much work to do something relatively simple. The dealbreaker is that codespeed is just a web application. It doesn’t actually (to my knowledge, someone correct me if I’m wrong?) have any kind of a framework for orchestrating the running of benchmarks throughout your code history. That is what this new project is for. I actually see a natural connection between vbench and codespeed, all you need to do is write a script to upload your vbench results to a codespeed web app!
At some point I’d like to build a simple web front end or wx/Qt viewer for the generated vbench database. I’ve never done any JavaScript, but it would be a good opportunity to learn. Knowing me, I might break down and hack out a stupid little wxPython app with an embedded matplotlib widget anyway.
Anyway, I’m really excited about this project. It’s very prototype-y at the moment but I tried to design it in a nice and extensible way. I also plan to put all my git repo analysis tools in there (like code churn graphs etc.) so it should become a nice little collection of tools. | http://wesmckinney.com/blog/?p=373 | CC-MAIN-2014-42 | refinedweb | 735 | 65.62 |
Opened 8 years ago
Closed 8 years ago
Last modified 7 years ago
#2949 closed enhancement (worksforme)
Django's templating engine too tightly tied to TEMPLATE_DIRS in the settings module
Description
Django's templating engine is too tightly tied to the settings module. Specifically the loaders. This affects the template loaders as well as what is in django.template.loader_tags.
I want to be able to set the path to a directory that contains the templates I want to render at runtime. Due to the nature of my project, I cannot just add every possibility to TEMPLATE_DIRS and I can definitely not follow a convention based off sub-directories of a specific parent directory. These "skins" (where skin is a directory containing templates) do not live parallel to each other and shouldn't be able to extend each other.
The basic thing I'm looking for is to be able to load a template from a specific, arbitrary path and then tags like extends and include should load templates from the same directory. In fact, I want to add custom filters that can take the name of a template as a parameter to render things with and this should load the relevant template from the correct directory.
I've been looking through the code and django.template.loaders.filesystem.get_template_sources() takes a parameter template_dirs that defaults to None, but this never gets set and it looks like it will always default to settings.TEMPLATE_DIRS.
django.template.loader_tags.ExtendsNode takes a parameter called template_dirs that defaults to None, but this also never gets set and it just uses the loader directly (inside get_parent) to find the template source.
All this means that the template engine is tied to TEMPLATE_DIRS in the settings module and I can't just tell the template loaders to load templates from arbitrary locations and expect the other parts of the system to play along.
Has anyone looked into this before or have ideas on how to fix it? I don't mind writing code, but I'm a bit worried about hacking at the source and then I have to maintain it myself.
I've been considering writing my own loader.py and loader_tags.py that contains a convenience function that will store template_dirs inside the context the first time it is called and then if it gets called without template_dirs (ie from ExtendsNode) it will read the value from the context. It will then reuse the stuff in init.py, context.py, defaultfilters.py and defaulttags.py. Any comments?
Change History (6)
comment:1 Changed 8 years ago by DougN
comment:2 Changed 8 years ago by lerouxb@…
Actually, writing another loader will not work, because it will interfere with the other loaders that gets used by other parts of the site and you can't store this type of request-specific state (the directory to load from) anywhere so that when things like the extends tag calls the loader it still loads in from the right place.
I think I'll just make a replacement for loader.py and loader_tags.py rather than add another loader (parallel to filesystem, app_directories, etc). It will be cleaner imho and the rest of my app (like the admin interface) can still use the normal stuff.
Will upload my code somewhere when I'm done with a better explanation of why I had to write it.
comment:3 Changed 8 years ago by Simon G. <dev@…>
- Triage Stage changed from Unreviewed to Design decision needed
comment:4 Changed 8 years ago by ubernostrum
- Resolution set to worksforme
- Status changed from new to closed
This _is_ something which can be solved by a custom template loader; nothing precludes having multiple loaders and letting the last one be a "fallback" loader. This is how, for example, Jannis Leidel's database-backed loader works.
comment:5 Changed 8 years ago by anonymous
ok.. so I have the concept of "skins". A skin is a folder full of templates. Those template files always have the same names.
Let's say I have 3 templates: template_a.html, template_b.html, template_c.html. All of these extend something like base.html and all of them include include.html. There are three skins: skin_a, skin_b, skin_c. A user can select his own skin.
so, I have the following structure:
skins/
skins/skin_a/
skins/skin_a/base.html
skins/skin_a/include.html
skins/skin_a/template_a.html
skins/skin_a/template_b.html
skins/skin_a/template_c.html
skins/skin_b/
skins/skin_b/base.html
skins/skin_b/include.html
skins/skin_b/template_a.html
skins/skin_b/template_b.html
skins/skin_b/template_c.html
skins/skin_c/
skins/skin_c/base.html
skins/skin_c/include.html
skins/skin_c/template_a.html
skins/skin_c/template_b.html
skins/skin_c/template_c.html
(in other words, to make a new skin you just make a copy of skin_a or skin_b or skin_c and change the files - css and images work in a similar way)
I want to be able to just set the skin (or folder) at runtime to something like skins/skin_a/ or skins/skin_b/ or skins/skin_c/ and then use template_a.html or template_b.html or whatever as the template name. So {% extends "base.html" %} or {% include "include.html" %} should just work (in other words - I don't want to have to do {% extends skin_a/base.html %}). Also.. my admin interface is done separately from the "frontend" and it might also have "base.html" or something else that might clash with the other templates. I obviously don't want to mix everything into the same namespace.
I've already solved this problem by making my own "render_template(path, template, ..) function and one or two subclasses, but I'm interested to hear how this will work using a django template loader.
comment:6 Changed 7 years ago by emulbreh
- Keywords tplrf-patched added
I would recommend just writting your own custom loader and adding that to settings.py
I expect to see this request rejected on the grounds that it is easy enough for you to do this your self and the security risks are to high relative to the functionality gained. | https://code.djangoproject.com/ticket/2949 | CC-MAIN-2015-14 | refinedweb | 1,020 | 62.48 |
- NAME
- SYNOPSIS
- DESCRIPTION
- OPTIONS
- ENVIRONMENT
- FILES
- DIAGNOSTICS
- RESTRICTIONS
- BUGS
- SEE ALSO
- AUTHORS
NAME
zoid - a modular perl shell
SYNOPSIS
zoid [options] [-] [files]
DESCRIPTION. Although Zoidberg does not do the language interpreting itself -- it uses perl to do this -- it supplies powerful language extensions aimed at creating an easy to use interface.
By default zoid runs an interactive commandline when both STDIN and STDOUT are terminal devices, or reads from STDIN till End Of Line and execute each line like it was entered interactively. When an action is specified by one of the commandline options this will suppress the default behavior and exit after executing that action. If any file names are given, these will be interpreted as source scripts and suppress default behavior. Be aware that these source scripts are expected to be Perl scripts and are NOT interpreted or executed the same way as normal input.
This document only describes the commandline script zoid, see zoiduser(1) and zoidfaq(1) for help on using the zoidberg shell.
OPTIONS
- -e command, --exec=command
Execute a string as interpreted by zoidberg. If non-interactive exits with exit status of command string. Multiple commands may be given to build up a multi-line script. Make sure to use semicolons where you would in a normal multi-line script.
- -C, --config
Print a list of configuration variable of this installation and exit. Most importantly this tells you where zoid will search for it's configuration and data files.
- -c command, --command=command
Does the same as --exec but this is bound to change.
- -D, -DClass --debug
Set either the global debug bit or set the debug bit for the given class. Using the global variant makes zoid output a lot of debug information.
- -h, --help
-
- -u, --usage
Print a help message and exits.
- -Idir[,dir, ...]
The specified directories are added to the module search path
@INC.
- -i, --interactive
Start an interactive shell. This is the default if no other options are supplied.
- -l, --login
Force login behavior, this will reset your current working directory. This variable is also available to plugins and scripts, which might act on it.
- -mmodule
-
- -Mmodule
-
- -Mmodule=args[,arg, ...]
Import module into the eval namespace. With -m explicit import empty list, with -M default arguments or specified arguments. Details like the equivalent perl option, see perlrun(1).
- -o setting
-
- -o setting=value
-
- +o setting
Set (-o) or unset (+o) one or more settings.
- -s, --stdin
Read input from stdin. This is the default if no other options are supplied and neither stdin or stdout are terminal devices.
- -V, --version
Display version information.
- -v, --verbose
Sets the shell in verbose mode. This will cause each command to be echoed to STDERR.
ENVIRONMENT
The variables $PWD, $HOME and $USER are set to default values if not yet set by the parent process.
The variable $ZOID will point to the location of the zoid executable, it is similar to $SHELL for POSIX compliant shells. zoid uses a different variable because some programs seem to expect $SHELL to point to a POSIX compliant shell.
To switch off ansi colours on the terminal set $CLICOLOR to 0 (null).
FILES
Zoidberg uses rc files, data files and plugin files, use the --config switch to check the search paths used.
Which rcfiles are loaded is controlled be the 'rcfiles' and 'norc' settings,.
DIAGNOSTICS
Error messages may be issued either by perl or by one any of the modules in use. The zoid utility itself will only complain when the commandline options are wrong. If the error was thrown by one of zoid's core modules, the error message will either start with the module name or the name of the command that went wrong.
RESTRICTIONS
Source files and command input are NOT interpreted the same way.
Use -e _or_ -c, do not mix them.
BUGS
Known bugs are listed in the BUGS file, which is included in the source package. Visit the web page to submit bug reports or mail the author.
SEE ALSO
perl(1), zoiduser(1), zoidbuiltins(1), zoiddevel(1), zoidfaq(1), Zoidberg(3),
AUTHORS
Jaap Karssenberg || Pardus [Larus] <pardus@cpan.org>
R.L. Zwart, <rlzwart@cpan.org>
Copyright (c) 2002 Jaap G Karssenberg and RL Zwart.. See either the GNU General Public License or the Artistic License for more details.
See and | https://metacpan.org/pod/release/PARDUS/Zoidberg-0.94/man1/zoid.pod | CC-MAIN-2015-27 | refinedweb | 719 | 65.22 |
Crop the Image Intuitively — NumPy
In this blog article, we will learn how to crop an image in Python using NumPy as an ideal library. When we talk about images, they are just matrices in 2D space. And of course, it depends on the image, if it is an
RGB image then the size of the image would be
(width, height, 3) otherwise — grayscale would just be
(width, height). But ultimately, images are just large matrices where each value is a pixel positioned
row-wise and
column-wise accordingly.
Credits of Cover Image - Photo by Ulrike Langner on Unsplash
Cropping the image is just obtaining the sub-matrix of the image matrix. The size of the sub-matrix (cropped image) can be of our choice and mainly it is the height and width. There needs to be one important thing for the image to be cropped, i.e.,
starting position. The
starting position is helpful for obtaining the sub-matrix from that position and depending upon height and width we can easily crop cut the image.
The three important things are:
- starting_position
- length (height)
- width
Based on these three things, we can construct our cropping function completely ready.
Time to Code
The packages that we mainly use are:
- NumPy
- Matplotlib
- OpenCV → It is only used for reading the image.
Import the Packages
import numpy as np import cv2 import json.
Cropping the Image
We need to pass the above mentioned 3 things as arguments in our function. But before doing let’s try to crop (slice) the matrix with NumPy.
import numpy as np m = np.array([ [1, 2, 3, 4, 5, 6, 7], [5, 3, 4, 2, 1, 7, 6], [6, 4, 3, 5, 1, 2, 7], [5, 6, 3, 1, 4, 2, 7], [1, 2, 3, 4, 5, 6, 7] ]) >>> print(m) [[1 2 3 4 5 6 7] [5 3 4 2 1 7 6] [6 4 3 5 1 2 7] [5 6 3 1 4 2 7] [1 2 3 4 5 6 7]] >>> crop_m = m[1:4, 2:7] print(crop_m) [[4 2 1 7 6] [3 5 1 2 7] [3 1 4 2 7]] >>>
The above code is an example of how we can crop an image matrix. Notice
crop_m is the cropped matrix (sub-matrix) that is sliced from the original matrix
m. The sub-matrix
crop_m is taking values from
[1:4, 2:7], i.e., values from
1st row till
4th row and from
2nd column till
7th column. We should something similar for the image to obtain the cropped image. Let’s write the cropping image function.
def crop_this(image_file, start_pos, length, width, with_plot=False, gray_scale=False): image_src = read_this(image_file=image_file, gray_scale=gray_scale) image_shape = image_src.shape length = abs(length) width = abs(width) start_row = start_pos if start_pos >= 0 else 0 start_column = start_row end_row = length + start_row end_row = end_row if end_row <= image_shape[0] else image_shape[0] end_column = width + start_column end_column = end_column if end_column <= image_shape[1] else image_shape[1] print("start row \t- ", start_row) print("end row \t- ", end_row) print("start column \t- ", start_column) print("end column \t- ", end_column) image_cropped = image_src[start_row:end_row, start_column:end_column] cmap_val = None if not gray_scale else 'gray' if with_plot: fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(10, 20)) ax1.axis("off") ax1.title.set_text('Original') ax2.axis("off") ax2.title.set_text("Cropped") ax1.imshow(image_src, cmap=cmap_val) ax2.imshow(image_cropped, cmap=cmap_val) return True return image_cropped
Let’s understand what this function will actually result in.
- At the first step, we read the image either in grayscale or RGB and obtain the image matrix.
- We obtain the height and width of the image which is further used in the validation of the code.
- We make sure that the length and width are positive integers. Hence absolute values are considered.
- We calculate the four important values which are useful for slicing the matrix
start_row,
end_row,
start_column,
end_column. We obtain that using the three arguments that are passed —
start_pos,
length,
width.
- We obtain the cropped image by slicing the matrix.
- We plot both the original and cropped images for the visualization.
Let’s test the above function —
For RGB Image
crop_this( image_file='lena_original.png', start_pos=199, length=100, width=200, with_plot=True )
start row - 199 end row - 299 start column - 199 end column - 399
For Grayscale Image
crop_this( image_file='lena_original.png', start_pos=199, length=100, width=200, with_plot=True, gray_scale=True )
start row - 199 end row - 299 start column - 199 end column - 399
This is it!!! We finally are able to crop the image by just knowing the starting position and length & width of the cropped image. Isn’t it great? We can also add a lot of customization options like adding a border around the image and other things. To know how to add a border to the image, you can refer to my article.
Other similar articles can be found in my profile. Have a great time reading and implementing the same.
If you liked it, you can buy coffee for me from here. | https://msameeruddin.hashnode.dev/crop-the-image-intuitively-numpy?guid=none&deviceId=a8e0bc62-e88d-4e21-99e4-d268c76c98be | CC-MAIN-2021-10 | refinedweb | 840 | 61.87 |
Hi, In macOS 10.13 High Sierra (currently in beta), and presumably other Apple OSes moving forward, the printf-family functions intentionally crash if the format string contains %n and is located in writable memory. This is an exploit mitigation, similar to existing behavior in glibc (with _FORTIFY_SOURCE=2) and Windows. By default, gnulib's implementation of vasnprintf calls snprintf with such format strings. lib/vasnprintf.c has an explicit test for glibc and Windows to avoid crashing there (relevant code copied below for reference). I suppose this test should either be extended to include __APPLE__, or replaced with a configure-based test. (There's an existing test in m4/printf.m4 that checks whether %n works, including with writable memory, but it doesn't seem to prevent vasnprintf from triggering the crash.) Thanks. -- #if USE_SNPRINTF # if !(((__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 3)) && !defined __UCLIBC__) || ((defined _WIN32 || defined __WIN32__) && ! defined __CYGWIN__)) fbp[1] = '%'; fbp[2] = 'n'; fbp[3] = '\0'; # else /* On glibc2 systems from glibc >= 2.3 - probably also older ones - we know that snprintf's return value conforms to ISO C 99: the tests gl_SNPRINTF_RETVAL_C99 and gl_SNPRINTF_TRUNCATION_C99 pass. Therefore we can avoid using %n in this situation. On glibc2 systems from 2004-10-18 or newer, the use of %n in format strings in writable memory may crash the program (if compiled with _FORTIFY_SOURCE=2), so we should avoid it in this situation. */ /* On native Windows systems (such as mingw), we can avoid using %n because: - Although the gl_SNPRINTF_TRUNCATION_C99 test fails, snprintf does not write more than the specified number of bytes. (snprintf (buf, 3, "%d %d", 4567, 89) writes '4', '5', '6' into buf, not '4', '5', '\0'.) - Although the gl_SNPRINTF_RETVAL_C99 test fails, snprintf allows us to recognize the case of an insufficient buffer size: it returns -1 in this case. On native Windows systems (such as mingw) where the OS is Windows Vista, the use of %n in format strings by default crashes the program. See <> and <> So we should avoid %n in this situation. */ fbp[1] = '\0'; # endif #else fbp[1] = '\0'; #endif | https://lists.gnu.org/archive/html/bug-gnulib/2017-07/msg00056.html | CC-MAIN-2021-21 | refinedweb | 347 | 64.2 |
In this Python tutorial, we will learn how to put the screen in a specific spot in python pygame. To control the object position on the screen in python pygame we use the OS module in python.
- Pygame uses SDL (Simple DirectMedia Layer) which is cross-platform library for controlling multimedia and is widely used for games.
- In
os.environdictionary there is a key
SDL_VIDEO_WINDOW_POSto which we can assign x and y value. This will put screen in specific spot in python pygame.
- Value assigned to X will move the screen in right or left position whereas value assigned to Y will move the screen in up-down position.
- Below is the syntax of how to put screen in specific spot in python pygame. In this syntax, we have created a function and then called that function inside the pygame.
def dynamicwinpos(x=500, y=100): os.environ['SDL_VIDEO_WINDOW_POS'] = "%d,%d" % (x,y) pygame.init() # calling function dynamicwinpos()
Now, we know how to put the screen in a specific spot in python pygame let’s explore more by putting this knowledge in an example.
How to Put Screen in Specific Spot in Python Pygame
In this project, we have used the game created in our blog Create a game using Python Pygame (Tic tac toe game). It’s a popular two-player tic tac toe game wherein the player who secures 3 consecutive positions wins the game.
- dynamicwinpos() function accepts x and y postion as a paramter. In our case, we have provided default value as x=500 and y=100.
os.environ['SDL_VIDEO_WINDOW_POS'] = "%d,%d" % (x,y)In this code
os.environis a dictionary and
SDL_VIDEO_WINDOW_POSis a key.X & Y are the values for the key.
- This function is called immediately after initializing pygame
pygame.init(). Now everytime the program is executed screen is put to the specific spot (x & y) in python pygame.
import pygame, sys import numpy as np import os #function to put the screen in specific spot in python pygame def dynamicwinpos(x=500, y=100): os.environ['SDL_VIDEO_WINDOW_POS'] = "%d,%d" % (x,y) pygame.init() #function calling dynamicwinpos() WIDTH = 600 HEIGHT = 600 LINE_WIDTH = 15 WIN_LINE_WIDTH = 15 BOARD_ROWS = 3 BOARD_COLS = 3 SQUARE_SIZE = 200 CIRCLE_RADIUS = 60 CIRCLE_WIDTH = 15 CROSS_WIDTH = 25 SPACE = 55 RED = (255, 0, 0) BG_COLOR = (20, 200, 160) LINE_COLOR = (23, 145, 135) CIRCLE_COLOR = (239, 231, 200) CROSS_COLOR = (66, 66, 66) screen = pygame.display.set_mode( (WIDTH, HEIGHT)) pygame.display.set_caption( 'TIC TAC TOE' ) screen.fill( BG_COLOR) board = np.zeros( (BOARD_ROWS, BOARD_COLS)) def draw_lines(): pygame.draw.line(screen, LINE_COLOR, (0, SQUARE_SIZE), (WIDTH, SQUARE_SIZE), LINE_WIDTH) pygame.draw.line(screen, LINE_COLOR, (0, 2 * SQUARE_SIZE), (WIDTH, 2 * SQUARE_SIZE), LINE_WIDTH) pygame.draw.line(screen, LINE_COLOR, (SQUARE_SIZE, 0), (SQUARE_SIZE, HEIGHT), LINE_WIDTH ) pygame.draw.line(screen, LINE_COLOR, (2 * SQUARE_SIZE, 0), (2 * SQUARE_SIZE, HEIGHT), LINE_WIDTH) def draw_figures(): for row in range(BOARD_ROWS): for col in range(BOARD_COLS): if board[row][col] == 1: pygame.draw.circle( screen, CIRCLE_COLOR, (int( col * SQUARE_SIZE + SQUARE_SIZE//2 ), int( row * SQUARE_SIZE + SQUARE_SIZE//2 )), CIRCLE_RADIUS, CIRCLE_WIDTH ) elif board[row][col] == 2: pygame.draw.line( screen, CROSS_COLOR, (col * SQUARE_SIZE + SPACE, row * SQUARE_SIZE + SQUARE_SIZE - SPACE), (col * SQUARE_SIZE + SQUARE_SIZE - SPACE, row * SQUARE_SIZE + SPACE), CROSS_WIDTH ) pygame.draw.line( screen, CROSS_COLOR, (col * SQUARE_SIZE + SPACE, row * SQUARE_SIZE + SPACE), (col * SQUARE_SIZE + SQUARE_SIZE - SPACE, row * SQUARE_SIZE + SQUARE_SIZE - SPACE), CROSS_WIDTH ) def mark_square(row, col, player): board[row][col] = player def available_square(row, col): return board[row][col] == 0 def is_board_full(): for row in range(BOARD_ROWS): for col in range(BOARD_COLS): if board[row][col] == 0: return False return True def check_win(player): for col in range(BOARD_COLS): if board[0][col] == player and board[1][col] == player and board[2][col] == player: draw_vertical_winning_line(col, player) return True for row in range(BOARD_ROWS): if board[row][0] == player and board[row][1] == player and board[row][2] == player: draw_horizontal_winning_line(row, player) return True if board[2][0] == player and board[1][1] == player and board[0][2] == player: draw_asc_diagonal(player) return True if board[0][0] == player and board[1][1] == player and board[2][2] == player: draw_desc_diagonal(player) return True return False def draw_vertical_winning_line(col, player): posX = col * SQUARE_SIZE + SQUARE_SIZE//2 if player == 1: color = CIRCLE_COLOR elif player == 2: color = CROSS_COLOR pygame.draw.line( screen, color, (posX, 15), (posX, HEIGHT - 15), LINE_WIDTH ) def draw_horizontal_winning_line(row, player): posY = row * SQUARE_SIZE + SQUARE_SIZE//2 if player == 1: color = CIRCLE_COLOR elif player == 2: color = CROSS_COLOR pygame.draw.line( screen, color, (15, posY), (WIDTH - 15, posY), WIN_LINE_WIDTH ) def draw_asc_diagonal(player): if player == 1: color = CIRCLE_COLOR elif player == 2: color = CROSS_COLOR pygame.draw.line( screen, color, (15, HEIGHT - 15), (WIDTH - 15, 15), WIN_LINE_WIDTH ) def draw_desc_diagonal(player): if player == 1: color = CIRCLE_COLOR elif player == 2: color = CROSS_COLOR pygame.draw.line( screen, color, (15, 15), (WIDTH - 15, HEIGHT - 15), WIN_LINE_WIDTH ) def restart(): screen.fill( BG_COLOR ) draw_lines() for row in range(BOARD_ROWS): for col in range(BOARD_COLS): board[row][col] = 0 draw_lines() player = 1 game_over = False while True: for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() if event.type == pygame.MOUSEBUTTONDOWN and not game_over: mouseX = event.pos[0] mouseY = event.pos[1] clicked_row = int(mouseY // SQUARE_SIZE) clicked_col = int(mouseX // SQUARE_SIZE) if available_square( clicked_row, clicked_col ): mark_square( clicked_row, clicked_col, player ) if check_win( player ): game_over = True player = player % 2 + 1 draw_figures() if event.type == pygame.KEYDOWN: if event.key == pygame.K_r: restart() player = 1 game_over = False pygame.display.update()
Output:
In this output, we have run the program 3 times, and each time we have changed the value of x and y inside the dynamicwinpos() function in python pygame. As a result of which the screen has changed its position.
In this tutorial, we have learned how to put screen in specific spot in python pygame. Also, we have covered a project to demonstrated what we have learned.
Related Python tutorials:
- Python copy file
- Python File methods
- Union of sets Python
- How to convert a String to DateTime in Python
- Escape sequence in Python. | https://pythonguides.com/how-to-put-screen-in-specific-spot-in-python-pygame/ | CC-MAIN-2022-21 | refinedweb | 977 | 54.83 |
Firstly thanks taking the time to help me out.
I can compile and run this program but where I want to print the Dogs name and age (with the toString method) it prints only the age, returning "null" for name.
//Dog.java //represents a dogs age and name and contains methods to do several //things. public class Dog { private String name; private int age; //constructor: initializes the instance data name and age: public Dog () { age = 14; } //getter public int getAge () { return age; } //setter public void setAge (int value) { value = age; } //getter public String getName () { return name; } //setter public void setName (String words) { name = words; } //toString method public String toString() { return name + "\t" + age; } }
and the driver class:
//Kennel.java //main method instantiates and updates several Dog objects. public class Kennel { public static void main (String[] args) { Dog Scruffy, Jade; Scruffy = new Dog (); Scruffy.getName(); System.out.println ("Scruffys age in human years: " + Scruffy.getAge()); System.out.println (Scruffy); } }
and when running the program i get:
I want the dogs name to show as 'Scruffy'.I want the dogs name to show as 'Scruffy'.craig@craig-laptop:~/Documents/panda/java2$ java Kennel Scruffys age in human years: 14 null 14 | http://www.javaprogrammingforums.com/whats-wrong-my-code/29154-cannot-print-name-getting-null-error.html | CC-MAIN-2014-15 | refinedweb | 199 | 69.72 |
Working with DateTime - 13
This content is no longer current. Our recommendation for up to date content:
Now that you have a good sampling of basic C# syntax under your belt, it's time to tackle some more challenging topics. Classes are integral to the .NET Framework, particularly the .NET Framework Class Library. This lesson demonstrates how classes are defined and new instances are created, how to define Properties (using the prop "code snippet" to create auto implemented properties), and how to both set values and get values for a given instance of the class as well as creating Methods in our classes. We talk about how the classes you create are really custom dat types that can be used as such in helper method declarations and more.
Download the source for Understanding and Creating Classes
These lessons are simply awesome!
These lessons is the pragmatic introduction into the C# practical programming. It's good compromise between theory and practise.These lessons is the pragmatic introduction into the C# practical programming. It's good compromise between theory and practise.
4 days ago, Kurt77 wrote
These lessons are simply awesome!
very good lessons i am learning a lot thank you
I've been in some form of programming for the past 15 years but, as a largely self-taught student, I have gaps in my knowledge. This is true both in theory and environment specifics (VS 20xx). I really appreciated this lesson and look forward to viewing the rest of the series. I highly recommend this series for new(er) developers!
@Kurt77: @JossiNewmann: @wayne: @a14437: Awesome feedback, thank you all for your encouraging words!
Thank you bob.... These lessons are awsome.... May Allah bless you....
Ameen !
This lesson was great, but on 15 I start to have problems. Probably was my fault. My code in the end of 14 was a little different than in the beginning of 15.
Does anyone had the same problem?
II thought I was watching the videos in the wrong order (16 to 15) but not = (
BOB..
can u plz differentiate between 1)static class & class
2)static constructor & constructor.
@shashankonl9: Take a look at this:
In a nut shell, a static method is a method you want to call without having to create an instance of the class that contains it. Think: utility function. The .NET class library has a bunch of these. For example, the DateTime.Parse() is a good example ... I don't necessarily have an instance of the DateTime class right now, but I want to use the Parse() method anyway.
A static class merely says all methods in my class are static.
Constructors:
A static constructor will run once prior to an instance of the class is created (once for all instances of a non-static class) or whenever a static method is called (again, once).
The question you probably have is "why would I ever use this stuff"? The answer becomes evident when you're creating an API that encapsulates functionality for your organization. If you are assigned to create a component / API / whathaveyou that can be utilized across many applications and used by a set of developers tasked with creating apps that consume your API. In that case, you are more likely to need to initialize some things, make some utilities, etc. You could ignore these adornments to your classes (I.e., not make things static, etc.) however these adornments constrain developers and guide them to the usage of the API's methods.
Most likely, as I said in this series, you'll see .NET framework classes / methods created in this manner, and they exist to constrain and guide YOU, the consumer of THEIR API. So, even though you many not have a project for a while that needs this sort of fidelity of control over how people call your methods TODAY inside your own projects, at the very least, you are using these in the .NET Framework. Hope that all makes sense?
Best wishes!
thanx BoB...its realy help..
hi sir,
I want to learn C# language, doing practice also but I am very weak in programming. please tell me the way how can i learn C# language or some references to learn it. plz help me
@kapil: Hi. Have you been watching these videos? What specifically is challenging to you? Since you posted this in lesson 14, Classes ... can I assume that (1) you have been watching from the beginning? (2) You have been following along doing the code examples while I write them? Honestly? Some concepts take a few passes to really understand them. You may need to hear the same ideas from different sources. There are innumerable other tutorials online ... videos, articles, ebooks, physical books, etc. Just find something that resonates with you and then treat it like it's the most important thing to you -- you will not be denied! That's how you learn -- when you struggle and refuse to give up. The materials you use are (almost) irrelevant. Best wishes!
Hi,
could you explain the difference between declaring properties of class and declaring variables.
lets say the class car has the following structure
class car
{
public string make;
public string model;
}
i can still assign values to make and model variables of the class car. so how does the property of class differs from variable?
@Maha: So, the difference in SIMPLE scenarios is mainly conceptual. In larger systems, the difference is more obvious and more practical. The difference: what you've defined in your code example are called "public fields". A public field allows any other code to set them at any time with no restrictions. I can set your values to non-sensical values like so:
car car = new car();
car.make = "bob";
car.model = "cat";
In the history of the automotive industry, there's never been a make of "bob" or a model of "cat", but tell me ... as a programmer, how are you going to stop me -- the consumer of your class -- from setting those public fields to those crazy values?! I dare ya to stop me! You can't! Ha ha ... I just broke your code. I am teh hax0r.
If you defined Public Properties, and inside the Set for each property you validated my attempts at creating nonsensical values, you could have prevented me, the consumer of your class, from breaking your code. Yay for you! And in the Get, you could have validated that the user attempting to retrieve those values has the proper authorization (i.e., roles) to view that data. If not, your class would not allow it.
So, conceptually, that is the value of properties. They are the gate keepers to class members.
But ...
When you you auto-implemented properties you might argue that there's no practical difference since you don't create the Set and Get. True ... BUT ... SOMEDAY you might, and if you change your class's public interface from a Public Field to a Public Property, you may potentially be breaking code written against the old Public Field version of the class.
So ...
It's a generally accept best practice to prefer Public Properties and create Private fields for keeping track of internal information, only exposing that information through the gate keepers, the Getters and Setters. Auto-implemented properties in C# are a short cut towards creating those so that you can go back and add the Get and Set later if you want without repercussion.
Also, please note ... your naming conventions are important here ... generally, anything public should start with an upper case letter. Also, you should put a 'Public' in front of that 'class' otherwise the compiler will complain. Re-writing:
public class Car
{
public string Make { get; set; }
public string Model { get; set; }
}
Hope that helps!
Great intro to classes and objects Thanks!
Awsome lesson about classes. I've recomended my students to watch it.
Ty you Sir. This series helped me a lot already although i didn't finished it still. Huge respect!
When I'm here already I will be free to go off-topic and ask a serious programming guru like you are about few concerns i have. First of all can Desktop Windows 8 application interfaces be made with html/css/js? Also in future how you see js versus c#? Do i need to focus on mastering js or c#? I know in base js is a lot simpler, but can be "extended" with libraries and be as near as powerful. So who will prevail for front-end development? Thought my heart is on the side of so called Web Standards Model.
@Wmsi: @Carlos: Thanks guys!
@VladimirKrstic: No, you cannot create DESKTOP Win8 apps (today) with the web stack. You CAN create WINDOWS APP STORE Win8 apps with the web stack. As to the future, I have no earthly idea.
I think BOTH are viable paths to go down and if your heart is leading you to Web Standards, to design, to web-based development, then you already have your answer. With node.js and other open source libraries that will convert your js into native mobile apps, I'm sure it's just a matter of time before you'll be creating DESKTOP apps with js as well -- heck, you might be able to today, just not with anything from Microsoft I'm aware of. Hope that helps!
Your binary clock in the background seems to be on the fritz this module :-)
I am enjoying the lessons! | https://channel9.msdn.com/Series/C-Sharp-Fundamentals-Development-for-Absolute-Beginners/Understanding-and-Creating-Classes-14?format=smooth | CC-MAIN-2017-43 | refinedweb | 1,587 | 74.39 |
Part Four: Making the problem worse
I said earlier that the fundamental reason for namespaces in the first place was organization of types into a hierarchy, not separation of two things with similar names. But suppose you are putting something into a namespace because you have two things that are of the same name and need to be kept separate. Suppose you reason “I’m going to put List into its own namespace because List could conflict with another class named List. The user needs to be able to qualify that.”
OK, that’s fine; put List into MyContainers then. But why would you then repeat the process and put List into a child namespace in MyContainers? The most plausible reason is that the level of disambiguation achieved so far is insufficient; some other entity named List is going to be in scope in a code region where elements of MyContainers are also in scope.
Let us posit that as our cause for creating a new namespace MyContainers, and then creating a new sub-namespace, MyContainers.X, to be the declaration space of List. What name should we choose for X? If the whole point is that something else named List is in scope somewhere that elements of MyContainers are in scope, then choosing “List” for “X” is making the problem worse, not better! Before you had two things named List in scope. Now you have three things named List, two of which are in scope. This is making it more confusing without solving the problem at hand: that there are two things named List in scope.
Any one of these four reasons is enough to avoid this bad practice; avoid, avoid, avoid.
‘.’ in the project name and instead of using the whole name as the namespace, just use everything before the last dot as the namespace, then instead of creating the default ‘Class 😉)
@Pavel: Eric’s *other* claim to internet fame was having one of the earliest websites devoted to the works of J.R.R. Tolkien. (I’m no expert on the life and times of Eric Lippert, but I have been reading through the Fabulous Adventures archives … he’s made reference to it at least once before.)
The way I discovered this issue was with the term ‘Service’. I created a namespace to match the component, e.g. ‘CustomerDataService’ and then used that same name as the root implementation class, after it was the type that repesented the ‘NT Service’ concept. I ended up changing the component and namespace to ‘CustomerDataServer’ and left the class name alone. I toyed with other alternative like calling the class just ‘Service’, but it can be awkward in the IDE* when you have many files open all with the same name, e.g. Program.cs.
*I’m not suggesting you name your classes just to satsify this issue.
Aye aye, nog.
Say, does anybody know what happens when a class is declared not in a namespace?
The grammar of the language requires that every class be declared in one of three places: in the global namespace (which has no name), in some other namespace, or as a nested class inside some other type declaration. Therefore every class is declared in a namespace, either directly, or indirectly via its outer types. Therefore I do not understand the question; can you clarify it? — Eric
Reminds me of that immutable generic stack implementation you did some time back during the series of articles on immutability. I used that class in a depth-first topological sorter and had to change the name to ImmutableStack<T> so that it wouldn’t conflict with System.Collections.Generic.Stack<T>.
On the topic of proper coding conventions, I could be wrong but it seems neither FxCop nor StyleCop meets my needs. Is there a way to have FxCop level of enforcement on non-compiled code? For example, ASP.NET source files, or anything that hasn’t been built. StyleCop doesn’t appear to be testing the same things, such as collection naming, for example.
Obligatory mention of buffalo.
namespace Buffalo {
public class Buffalo {
Buffalo.Buffalo buffalo(Buffalo.Buffalo Buffalo) { buffalo(Buffalo); }
}
}
I think that actually ought to compile, although I haven’t tried it. Infinite recursion when run, however.
oh no it doesn’t compile, needs a "return" in there. Drat.
I was just wondering what happens to a class when you declare it without a surrounding namespace{}. It appears the answer is that it goes in the implicit global namespace.
The first C# app I wrote was for .Net 1.1, and I forgot to put one of my classes in a namespace declaration. Well, it just so happened that when .Net 2.0 came out, it declared a class by the same name. When somebody ran my app on a computer with .Net 2.0 installed but not 1.1, it ran in the 2.0 CLR and got the new class of the same name.
One of the annoying (to me that is) characteristics of VSxxxx is that when you add a folder in a project, Visual Studio defaults classes in the folder to a namespace which ends with the name of the folder. If a programmer is using the folders for physical (as opposed to namespace) organizational purposes, she/he either has to remove the folder name from the namespace of the new class or add a new using statement to include that folder. In tools/options, the programmer should be able to limit the default namespace to the project.
John: I like this feature because the folder structure and namespaces are two independent hierarchies, but this feature is try to make them similar, which is beneficial for project "complexity". Managing one hierarchy instead of two is huge benefit.
Is there any reason why System.Configuration.Configuration doesn’t follow this rule? It’s always painful to use in code. System.Configuration.Config (or something similar) would have been much better, like there isn’t a System.Text.Regex.Regex for instance.
My guess — and I emphasize that this is a guess — is that the class in question was invented *before* the framework design guidelines were written. — Eric
The Los Angeles Angels ought to play nothing but doubleheaders.
Thanks, man. I just went and renamed my static GameEngine class to GameEngineUpdator. It was originally NewellHenryClark.GameEngine.GameEngine, but now I know better, all thanks to you 😀
This does make it hard when you want namespaces like this:
Doctor.Events, Doctor.Services, Doctor.BusinessObjects
And then in Doctor.BusinessObjects you want a class called Doctor.
Each namespace is a type of Doctor functionality, but you can't call them that and still have a Doctor class. | https://blogs.msdn.microsoft.com/ericlippert/2010/03/18/do-not-name-a-class-the-same-as-its-namespace-part-four/ | CC-MAIN-2017-09 | refinedweb | 1,121 | 64 |
New submission from Daniel Urban <urban.dani+py at gmail.com>: The keys, values and items methods of dict_proxy return a list, while dict.keys, etc. return dictionary views (dict_keys, etc.). dict_proxy is used as the __dict__ attribute of classes. This is documented at under "Custom classes" as "Special attributes: ... __dict__ is the dictionary containing the class’s namespace ..." While __dict__ is not actually dict, it probably should behave like a dict as close as possible. For example set operations work for dict.keys(), but not for dict_proxy.keys(). ---------- components: Interpreter Core messages: 123427 nosy: durban priority: normal severity: normal status: open title: dict_proxy.keys() / values() / items() are lists type: behavior versions: Python 3.1, Python 3.2 _______________________________________ Python tracker <report at bugs.python.org> <> _______________________________________ | https://mail.python.org/pipermail/new-bugs-announce/2010-December/009456.html | CC-MAIN-2016-44 | refinedweb | 127 | 54.69 |
New
The new features added to the Sun RPC library are:
One-way messaging - Reduces the time a client thread waits before continuing processing.
Non-blocking I/O - Enables a client to send requests without being blocked.
Client connection closure callback - Enables a server to detect client disconnection and to take corrective action.
Callback user file descriptor - Extends the RPC server to handle non-RPC descriptors..
Non.
Client connection closure callback enables the server for connection-oriented transport to detect that the client has disconnected. The server can take the necessary action to recover from transport errors. Transport errors occur when a request arrives at the server, or when the server is waiting for a request and the connection is closed.
The connection closure callback is called when no requests are currently being executed on the connection. If the client connection is closed when a request is being executed, the server executes the request but a reply may not be sent to the client. The connection closure callback is called when all pending request are completed.
When a connection closure occurs, the transport layer sends an error message to the client. The handler is attached to a service using svc_control() for example as follows:
svc_control(service, SVCSET_RECVERRHANDLER, handler);
The arguments of svc_control() are:
A service or an instance of this service. When this argument is a service, any new connection to the service inherits the error handler. When this argument is an instance of the service, only this connection gets the error handler.
The error handler callback. The prototype of this callback function is:
void handler(const SVCXPRT *svc, const boot_t IsAConnection);
For further information see the svc_control(3NSL) man page.
For XDR unmarshalling errors, if the server is unable to unmarshal a request, the message is destroyed and an error is returned directly to the client.
This example implements a message log server. A client can use this server to open a log (actually a text file), to store message log, and then to close the log.
The log.x file describes the log program interface.
enum log_severity { LOG_EMERG=0, LOG_ALERT=1, LOG_CRIT=2, LOG_ERR=3, LOG_WARNING=4, LOG_NOTICE=5, LOG_INFO=6 }; program LOG { version LOG_VERS1 { int OPENLOG(string ident) = 1; int CLOSELOG(int logID) = 2; oneway WRITELOG(int logID, log_severity severity, string message) = 3; } = 1; } = 0x20001971;
The two procedures OPENLOG and CLOSELOG open and close a log that is specified by its logID. The WRITELOG() procedure, declared as oneway for the example, logs a message in an opened log. A log message contains a severity attribute, and a text message.
This is the makefile for the log server. Use this makefile to call the log.x file.
RPCGEN = rpcgen CLIENT = logClient CLIENT_SRC = logClient.c log_clnt.c log_xdr.c CLIENT_OBJ = $(CLIENT_SRC:.c=.o) SERVER = logServer SERVER_SRC = logServer.c log_svc.c log_xdr.c SERVER_OBJ = $(SERVER_SRC:.c=.o) RPCGEN_FILES = log_clnt.c log_svc.c log_xdr.c log.h CFLAGS += -I. RPCGEN_FLAGS = -N -C LIBS = -lsocket -lnsl all: log.h ./$(CLIENT) ./$(SERVER) $(CLIENT): log.h $(CLIENT_OBJ) cc -o $(CLIENT) $(LIBS) $(CLIENT_OBJ) $(SERVER): log.h $(SERVER_OBJ) cc -o $(SERVER) $(LIBS) $(SERVER_OBJ) $(RPCGEN_FILES): log.x $(RPCGEN) $(RPCGEN_FLAGS) log.x clean: rm -f $(CLIENT_OBJ) $(SERVER_OBJ) $(RPCGEN_FILES)
logServer.c shows the implementation of the log server. As the log server opens a file to store the log messages, it registers a closure connection callback in openlog_1_svc(). This callback is used to close the file descriptor even if the client program forgets to call the closelog() procedure (or crashes before doing so). This example demonstrates the use of the connection closure callback feature to free up resources associated to a client in an RPC server.
#include "log.h" #include <stdio.h> #include <string.h> #define NR_LOGS 3 typedef struct { SVCXPRT* handle; FILE* filp; char* ident; } logreg_t; static logreg_t logreg[NR_LOGS]; static char* severityname[] = {"Emergency", "Alert", "Critical", "Error", "Warning", "Notice", "Information"}; static void close_handler(const SVCXPRT* handle, const bool_t); static int get_slot(SVCXPRT* handle) { int i; for (i = 0; i < NR_LOGS; ++i) { if (handle == logreg[i].handle) return i; } return -1; } static FILE* _openlog(char* logname) /* * Open a log file */ { FILE* filp = fopen(logname, "a"); time_t t; if (NULL == filp) return NULL; time(&t); fprintf(filp, "Log opened at %s\n", ctime(&t)); return filp; } static void _closelog(FILE* filp) { time_t t; time(&t); fprintf(filp, "Log close at %s\n", ctime(&t)); /* * Close a log file */ fclose(filp); } int* openlog_1_svc(char* ident, struct svc_req* req) { int slot = get_slot(NULL); FILE* filp; static int res; time_t t; if (-1 != slot) { FILE* filp = _openlog(ident); if (NULL != filp) { logreg[slot].filp = filp; logreg[slot].handle = req->rq_xprt; logreg[slot].ident = strdup(ident); /* * When the client calls clnt_destroy, or when the * client dies and clnt_destroy is called automatically, * the server executes the close_handler callback */ if (!svc_control(req->rq_xprt, SVCSET_RECVERRHANDLER, (void*)close_handler)) { puts("Server: Cannot register a connection closure callback"); exit(1); } } } res = slot; return &res; } int* closelog_1_svc(int logid, struct svc_req* req) { static int res; if ((logid >= NR_LOGS) || (logreg[logid].handle != req->rq_xprt)) { res = -1; return &res; } logreg[logid].handle = NULL; _closelog(logreg[logid].filp); res = 0; return &res; } /* * When there is a request to write a message to the log, * write_log_1_svc is called */ void* writelog_1_svc(int logid, log_severity severity, char* message, struct svc_req* req) { if ((logid >= NR_LOGS) || (logreg[logid].handle != req->rq_xprt)) { return NULL; } /* * Write message to file */ fprintf(logreg[logid].filp, "%s (%s): %s\n", logreg[logid].ident, severityname[severity], message); return NULL; } static void close_handler(const SVCXPRT* handle, const bool_t dummy) { int i; /* * When the client dies, the log is closed with closelog */ for (i = 0; i < NR_LOGS; ++i) { if (handle == logreg[i].handle) { logreg[i].handle = NULL; _closelog(logreg[i].filp); } } }
The logClient.c file shows a client using the log server.
#include "log.h" #include <stdio.h> #define MSG_SIZE 128 void usage() { puts("Usage: logClient <logserver_addr>"); exit(2); } void runClient(CLIENT* clnt) { char msg[MSG_SIZE]; int logID; int* result; /* * client opens a log */ result = openlog_1("client", clnt); if (NULL == result) { clnt_perror(clnt, "openlog"); return; } logID = *result; if (-1 == logID) { puts("Cannot open the log."); return; } while(1) { struct rpc_err e; /* * Client writes a message in the log */ puts("Enter a message in the log (\".\" to quit):"); fgets(msg, MSG_SIZE, stdin); /* * Remove trailing CR */ msg[strlen(msg)-1] = 0; if (!strcmp(msg, ".")) break; if (writelog_1(logID, LOG_INFO, msg, clnt) == NULL) { clnt_perror(clnt, "writelog"); return; } } /* * Client closes the log */ result = closelog_1(logID, clnt); if (NULL == result) { clnt_perror(clnt, "closelog"); return; } logID = *result; if (-1 == logID) { puts("Cannot close the log."); return; } } int main(int argc, char* argv[]) { char* serv_addr; CLIENT* clnt; if (argc != 2) usage(); serv_addr = argv[1]; clnt = clnt_create(serv_addr, LOG, LOG_VERS1, "tcp"); if (NULL == clnt) { clnt_pcreateerror("Cannot connect to log server"); exit(1); } runClient(clnt); clnt_destroy(clnt); }
User file descriptor callbacks enable you to register file descriptors with callbacks, specifying one or more event types. Now you can use an RPC server to handle file descriptors that were not written for the Sun RPC library.
With previous versions of the Sun RPC library, you could use a server to receive both RPC calls and non-RPC file descriptors only if you wrote your own server loop, or used a separate thread to contact the socket API.
For user file descriptor callbacks, two new functions have been added to the Sun RPC library, svc_add_input(3NSL) and svc_remove_input(3NSL), to implement user file descriptor callbacks. These functions declare or remove a callback on a file descriptor.
When using this new callback feature you must:
Create your callback() function by writing user code with the following syntax:
typedef void (*svc_callback_t) (svc_input_id_t id, int fd, \ unsigned int revents, void* cookie);
The four parameters passed to the callback() function are:
Provides an identifier for each callback. This identifier can be used to remove a callback.
The file descriptor that your callback is waiting for.
An unsigned integer representing the events that have occurred. This set of events is a subset of the list given when the callback is registered.
The cookie given when the callback is registered. This cookie can be a pointer to specific data the server needs during the callback.
Call svc_add_input() to register file descriptors and associated events, such as read or write, that the server must be aware of.
svc_input_id_t svc_add_input (int fd, unsigned int revents, \ svc_callback_t callback, void* cookie);A list of the events that can be specified is given inpoll(2) .
Specify a file descriptor. This file descriptor can be an entity such as a socket or a file.
When you no longer need a particular callback, call svc_remove_input() with the corresponding identifier to remove the callback.
This example shows you how to register a user file descriptor on an RPC server and how to provide user defined callbacks. With this example you can monitor the time of day on both the server and the client.
The makefile for this example is shown below.
RPCGEN = rpcgen CLIENT = todClient CLIENT_SRC = todClient.c timeofday_clnt.c CLIENT_OBJ = $(CLIENT_SRC:.c=.o) SERVER = todServer SERVER_SRC = todServer.c timeofday_svc.c SERVER_OBJ = $(SERVER_SRC:.c=.o) RPCGEN_FILES = timeofday_clnt.c timeofday_svc.c timeofday.h CFLAGS += -I. RPCGEN_FLAGS = -N -C LIBS = -lsocket -lnsl all: ./$(CLIENT) ./$(SERVER) $(CLIENT): timeofday.h $(CLIENT_OBJ) cc -o $(CLIENT) $(LIBS) $(CLIENT_OBJ) $(SERVER): timeofday.h $(SERVER_OBJ) cc -o $(SERVER) $(LIBS) $(SERVER_OBJ) timeofday_clnt.c: timeofday.x $(RPCGEN) -l $(RPCGEN_FLAGS) timeofday.x > timeofday_clnt.c timeofday_svc.c: timeofday.x $(RPCGEN) -m $(RPCGEN_FLAGS) timeofday.x > timeofday_svc.c timeofday.h: timeofday.x $(RPCGEN) -h $(RPCGEN_FLAGS) timeofday.x > timeofday.h clean: rm -f $(CLIENT_OBJ) $(SERVER_OBJ) $(RPCGEN_FILES)
The timeofday.x file defines the RPC services offered by the server in this example. The services in this examples are gettimeofday() and settimeofday().
program TIMEOFDAY { version VERS1 { int SENDTIMEOFDAY(string tod) = 1; string GETTIMEOFDAY() = 2; } = 1; } = 0x20000090;
The userfdServer.h file defines the structure of messages sent on the sockets in this example.
#include "timeofday.h" #define PORT_NUMBER 1971 /* * Structure used to store data for a connection. * (user fds test). */ typedef struct { /* * Ids of the callback registration for this link. */ svc_input_id_t in_id; svc_input_id_t out_id; /* * Data read from this connection. */ char in_data[128]; /* * Data to be written on this connection. */ char out_data[128]; char* out_ptr; } Link; void socket_read_callback(svc_input_id_t id, int fd, unsigned int events, void* cookie); void socket_write_callback(svc_input_id_t id, int fd, unsigned int events, void* cookie); void socket_new_connection(svc_input_id_t id, int fd, unsigned int events, void* cookie); void timeofday_1(struct svc_req *rqstp, register SVCXPRT *transp);
The todClient.c file shows how the time of day is set on the client. In this file, RPC is used with and without sockets.
#include "timeofday.h" #include <stdio.h> #include <netdb.h> #define PORT_NUMBER 1971 void runClient(); void runSocketClient(); char* serv_addr; void usage() { puts("Usage: todClient [-socket] <server_addr>"); exit(2); } int main(int argc, char* argv[]) { CLIENT* clnt; int sockClient; if ((argc != 2) && (argc != 3)) usage(); sockClient = (strcmp(argv[1], "-socket") == 0); /* * Choose to use sockets (sockClient). * If sockets are not available, * use RPC without sockets (runClient). */ if (sockClient && (argc != 3)) usage(); serv_addr = argv[sockClient? 2:1]; if (sockClient) { runSocketClient(); } else { runClient(); } return 0; } /* * Use RPC without sockets */ void runClient() { CLIENT* clnt; char* pts; char** serverTime; time_t now; clnt = clnt_create(serv_addr, TIMEOFDAY, VERS1, "tcp"); if (NULL == clnt) { clnt_pcreateerror("Cannot connect to log server"); exit(1); } time(&now); pts = ctime(&now); printf("Send local time to server\n"); /* * Set the local time and send this time to the server. */ sendtimeofday_1(pts, clnt); /* * Ask the server for the current time. */ serverTime = gettimeofday_1(clnt); printf("Time received from server: %s\n", *serverTime); clnt_destroy(clnt); } /* * Use RPC with sockets */ void runSocketClient() /* * Create a socket */ { int s = socket(PF_INET, SOCK_STREAM, 0); struct sockaddr_in sin; char* pts; char buffer[80]; int len; time_t now; struct hostent* hent; unsigned long serverAddr; if (-1 == s) { perror("cannot allocate socket."); return; } hent = gethostbyname(serv_addr); if (NULL == hent) { if ((int)(serverAddr = inet_addr(serv_addr)) == -1) { puts("Bad server address"); return; } } else { memcpy(&serverAddr, hent->h_addr_list[0], sizeof(serverAddr)); } sin.sin_port = htons(PORT_NUMBER); sin.sin_family = AF_INET; sin.sin_addr.s_addr = serverAddr; /* * Connect the socket */ if (-1 == connect(s, (struct sockaddr*)(&sin), sizeof(struct sockaddr_in))) { perror("cannot connect the socket."); return; } time(&now); pts = ctime(&now); /* * Write a message on the socket. * The message is the current time of the client. */ puts("Send the local time to the server."); if (-1 == write(s, pts, strlen(pts)+1)) { perror("Cannot write the socket"); return; } /* * Read the message on the socket. * The message is the current time of the server */ puts("Get the local time from the server."); len = read(s, buffer, sizeof(buffer)); if (len == -1) { perror("Cannot read the socket"); return; } puts(buffer); puts("Close the socket."); close(s); }
The todServer.c file shows the use of the timeofday service from the server side.
#include "timeofday.h" #include "userfdServer.h" #include <stdio.h> #include <errno.h> #define PORT_NUMBER 1971 int listenSocket; /* * Implementation of the RPC server. */ int* sendtimeofday_1_svc(char* time, struct svc_req* req) { static int result = 0; printf("Server: Receive local time from client %s\n", time); return &result; } char ** gettimeofday_1_svc(struct svc_req* req) { static char buff[80]; char* pts; time_t now; static char* result = &(buff[0]); time(&now); strcpy(result, ctime(&now)); return &result; } /* * Implementation of the socket server. */ int create_connection_socket() { struct sockaddr_in sin; int size = sizeof(struct sockaddr_in); unsigned int port; /* * Create a socket */ listenSocket = socket(PF_INET, SOCK_STREAM, 0); if (-1 == listenSocket) { perror("cannot allocate socket."); return -1; } sin.sin_port = htons(PORT_NUMBER); sin.sin_family = AF_INET; sin.sin_addr.s_addr = INADDR_ANY; if (bind(listenSocket, (struct sockaddr*)&sin, sizeof(sin)) == -1) { perror("cannot bind the socket."); close(listenSocket); return -1; } /* * The server waits for the client * connection to be created */ if (listen(listenSocket, 1)) { perror("cannot listen."); close(listenSocket); listenSocket = -1; return -1; } /* * svc_add_input registers a read callback, * socket_new_connection, on the listening socket. * This callback is invoked when a new connection * is pending. */ if (svc_add_input(listenSocket, POLLIN, socket_new_connection, (void*) NULL) == -1) { puts("Cannot register callback"); close(listenSocket); listenSocket = -1; return -1; } return 0; } /* * Define the socket_new_connection callback function */ void socket_new_connection(svc_input_id_t id, int fd, unsigned int events, void* cookie) { Link* lnk; int connSocket; /* * The server is called when a connection is * pending on the socket. Accept this connection now. * The call is non-blocking. * Create a socket to treat the call. */ connSocket = accept(listenSocket, NULL, NULL); if (-1 == connSocket) { perror("Server: Error: Cannot accept a connection."); return; } lnk = (Link*)malloc(sizeof(Link)); lnk->in_data[0] = 0; /* * New callback created, socket_read_callback. */ lnk->in_id = svc_add_input(connSocket, POLLIN, socket_read_callback, (void*)lnk); } /* * New callback, socket_read_callback, is defined */ void socket_read_callback(svc_input_id_t id, int fd, unsigned int events, void* cookie) { char buffer[128]; int len; Link* lnk = (Link*)cookie; /* * Read the message. This read call does not block. */ len = read(fd, buffer, sizeof(buffer)); if (len > 0) { /* * Got some data. Copy it in the buffer * associated with this socket connection. */ strncat (lnk->in_data, buffer, len); /* * Test if we receive the complete data. * Otherwise, this is only a partial read. */ if (buffer[len-1] == 0) { char* pts; time_t now; /* * Print the time of day you received. */ printf("Server: Got time of day from the client: \n %s", lnk->in_data); /* * Setup the reply data * (server current time of day). */ time(&now); pts = ctime(&now); strcpy(lnk->out_data, pts); lnk->out_ptr = &(lnk->out_data[0]); /* * Register a write callback (socket_write_callback) * that does not block when writing a reply. * You can use POLLOUT when you have write * access to the socket */ lnk->out_id = svc_add_input(fd, POLLOUT, socket_write_callback, (void*)lnk); } } else if (len == 0) { /* * Socket closed in peer. Closing the socket. */ close(fd); } else { /* * Has the socket been closed by peer? */ if (errno != ECONNRESET) { /* * If no, this is an error. */ perror("Server: error in reading the socket"); printf("%d\n", errno); } close(fd); } } /* * Define the socket_write_callback. * This callback is called when you have write * access to the socket. */ void socket_write_callback(svc_input_id_t id, int fd, unsigned int events, void* cookie) { Link* lnk = (Link*)cookie; /* * Compute the length of remaining data to write. */ int len = strlen(lnk->out_ptr)+1; /* * Send the time to the client */ if (write(fd, lnk->out_ptr, len) == len) { /* * All data sent. */ /* * Unregister the two callbacks. This unregistration * is demonstrated here as the registration is * removed automatically when the file descriptor * is closed. */ svc_remove_input(lnk->in_id); svc_remove_input(lnk->out_id); /* * Close the socket. */ close(fd); } } void main() { int res; /* * Create the timeofday service and a socket */ res = create_connection_socket(); if (-1 == res) { puts("server: unable to create the connection socket.\n"); exit(-1); } res = svc_create(timeofday_1, TIMEOFDAY, VERS1, "tcp"); if (-1 == res) { puts("server: unable to create RPC service.\n"); exit(-1); } /* Poll the user file descriptors. */ svc_run(); } | http://docs.oracle.com/cd/E19253-01/816-1435/modif-14/index.html | CC-MAIN-2017-09 | refinedweb | 2,737 | 57.47 |
I
The software EXE is not able to attach So if any one want that exe just send me your mail id to jegatheesan_s@yahoo.co.in.
Teacher Notes
Teachers! Did you use this instructable in your classroom?
Add a Teacher Note to share how you incorporated it into your lesson.
Step 1: Materials Required
Materials Required
1) Arduino UNO - 1No
2) Tower Pro micro Servo motor - 3Nos.
3) 1000 micro F capacitor - 1 No (to stop servo from shaking).
4) Plain PCB.
5) Male Connector.
6) Aluminium Partition Waste.
7) Screws and nuts.
Tools Required
1) Hot glow gun.
2) Hack saw blade and frame.
3) Drilling machine.
4) Medium File.
5) Screw driver.
6) Nose player.
7) Soldering iron.
Step 2: Building Shield
1) First step is to build a arduino shield to drive 3 Servos. In motero shiled 1 we only drive 2 servos. So i make my own shield.
2) I use the pins 3,5 and 6 in the arduino to drive the servos.
3) From arduino to Servo i use a thin long flexible wire (I Use my old mouse wise it has 5 wires very thin and string). Two wires for +5V and 3 wires for servo driven.
4) A separate board in the side of the device to control servos. I that board i use 3 X 3 Male connector to connect servos.
5) A capacitor 1000 micro f between Power supply for servos is soldered. and to avoid short circuit and wires break from soldering, i cover all soldered wire with hot glue gun.
Step 3: Check the Concept
1) After make a startup software and check the angles generated. I want to test that this servo is correct for my application.
2) So i just use straws at first to find its working correct.
3) Straws dimension are 50mm 2nos, 32mm 1no and 82mm 1Nos.
4) Its working fine and the test pass. But some more calculation is missing. So some calculation want to change.
Step 4: Make Servo Holders
1) Its time to assemble my drawing buddy. But this is the very hard work.
2) I use ALuminium from partion work waste from my friend to make 4 'L' bends of dimension 30 X 40 X 27.
3) Mark the portion for Servo and give it to a milling machine. They only able to take the shape as oval.
4) I use a triangle file to make it rectangle.
5) Take it for 3 pieces and another one piece for holding the up down servo.
6) Connect the two pieces using screws.
Step 5: Making Arms
1) Its the hand for my Drawing buddy.
2) I take the dimension 32mm one number, 50mm two numbers and 82mm one number.
3) Make it in the novopan sheet and give it to the switch slot taken work shop.
4) They make it as an art.
5) With 6mm hole for link they give it back to me in pieces.
6) I link the arm parts and check the rectangle dimension with my plan. Its excatly matched.
Step 6: Connecting Arm to Servo
1) Arms are now ideal when connect it with the servo only it get life.
2) stick servo arm connector straight to the novopan arm with hot glue gun.
3) connect this to the servo using screw. through the novopan arm hole.
Step 7: Testing With Out Pen Up Down
1) Now the 2 servos are fitted to the aluminium frame slot and stick with hot glue gun.
2) Join the arms with screws and before fitted fully, want to check and tune the software.
3) So paste the frame to a flat surface facing each other correctly center.
4) In the arm edge paste a sketch to draw.
5) Place a plain paper below the sketch.
6) Test it with the software and after some changes in software and angle in the servo arm connection finally the result is fine.
Step 8: Testing Video
This is the testing video for circle and waves.
Step 9: Make Stand
1) Stand must be small then only its compact and also more space for drawing near the Buddy. It must be weight hig than the other assembly then only it stand still when arm move.
2) I got a waste play wood.
3) I cut it in the size of 30mm x 100mm and mark the center at 50mm.
4) Make a 6 mm hole in the two sides as shown in the figure (use bench drilling then only the holes are straight).
5) Make a small piece in navapan sheet and drill the same size hole in that one. (to keep the rails parallel)
6) Now Now scre the 6mm rod both sides tightly to make it stand straight.
Step 10: Fix Up Down Arrangement
1) Now the stand is ready. We want an arrangement to hold the servos and move up and down in the rail.
2) I use a old pen. Cut the pen in to two pieces and insert into the rods, The pen inner dia is more greater than the rod dia. So i insert a straw firat in to the rod and straw is tight with the rod and when pen is put over that it moves fine with out shake.
3) Now cut two pieces of novopan sheet i take the dimension (30mm X 90mm) and take slot in the bottom for the space to nuts in the stand.
4) Then stick the two sheets in the both sides of the pen in the stand. Glue it fully then only it hold the weight.
5) Now check the movement of up/down action.
6) Fix the Writing servo arrangement on one side of the slide with hot glue gun or screws.
7) Now check it by fixing the arms.
Step 11: Up Down Mechanism
1) Up down mechanism is quite easy using a old toy car wheel.
2) Just Hot glue the old toy car wheel to the servo arm away from the center. This acts to lift the arrangement smoothly.
3) Fix the arrangement to the other side of the slide.
Step 12: Assembled Draw Buddy
1) Its well come after 3 days night work.
2) Hot glue the connector to the non servo moving side and connect the servos.
3) See the all side views of my Draw Buddy its very cute.
Step 13: Servos Control Program
1) Connect the Servo connector to the arduino.
2) Its time to write the code for Computer and Arduino. I reduce the work of arduino by increase the work of the Computer.
3) Here with this i attached the Arduino code. Arduino turns the servos degrees as per pc command.
Step 14: Desktop Application
1) Download the zip file and unzip it.
2) Change the Arduino connected comport name in the config,inf file.
3) Download and install .net frame work 4 or more.
4) Run the Application. After connected the arduino to the port. This program run only in windows.
Step 15: My Own Buddy Control Software
1) The application is not a downloaded application. It is developed by me. Actually i like very much to program logically.
2) In the control software we can draw the picture as line art and ask the buddy to draw it.
3) U have a save button to save the drawn picture and also a open button to open the saved one.
4) On click the Open Port the system connected to the Arduino.
5) By click the Draw buddy, Draw buddy draw the image as like we draw.
6) While drawing use the slider button to erase the drawn line.
(More ideas are in progress to convert vector drawing to our format)
Step 16: Checking All
1) Its time to check.
2) Connect the connectors and power supplies.
3) Draw lines in the application and click open port and then click draw buddy.
4) Buddy draw it as like the picture .
Problems Faced at first
1) Drawing is drawn in mirror format.
2) When pen up and down it leaves some lines at the end of up due to delay has same time.
This can be corrected by altered the arduino coding and Computer coding.
Step 17: Final Changes and Packing
1) Its very compatable so u can able to hold in one hand.
2) Fix the Arduino into a plastic box.
3) Arduino to Draw buddy has long wire, so that we can able to move it to the long place and draw where ever we want.
Note At first while drawing My buddy moves some time. Finally i notice the surface of my Buddy is polished. So i rub with emery and make it rough. Now it works fine with out holding.
Step 18: Draw Buddy in Action
After lot of trial and error here the finished one with out error.
Step 19: Action in Papers
Lot and lot of pictures are drawn by my buddy to make my daughter happy.
Step 20: Action in Tiles
It not only draw in paper it paint in any flat surface. Here a a sample from my Room Tiles.
Its very interesting to done this project after some gap. Its very cool and like by all age groups in my family. I also recall lot of mathematics from my school life. Its a nice experience to work in this project. Want to do lot of upgrades in this project that's only i make it as V1.
Thank You Very much for Watching this Instructables.
Very eager to hear comments from You all.
Be cool make a Lot.
Participated in the
MacGyver Challenge
Participated in the
Invention Challenge 2017
50 Discussions
Question 8 months ago on Step 8
Sir I am making similar robot with big servos I am unable to make the servos to draw will u plz send me the code other than exe it will help me a lot
Reply 8 months ago
My project in VB.net if you want the code send me your mail id i mail you.
Reply 8 months ago
thanks a lot
easwarkanth@gmail.com
Sir my servos are moving very slowly when i try to use your buddy.exe have any idea
Reply 7 months ago
I send you the full coding. Please check and mail back.
9 months ago
Hai,
I made this code to be sure that the servos are in the right position
But I can not make a drawing yet
[code]
#include <Servo.h>
Servo myservo3; // create servo object to control a servo
Servo myservo5;
void setup() {
myservo3.attach(3);// attaches the servo on pin 9 to the servo object
myservo5.attach(5);
}
void loop() {
myservo3.write(10); // sets the servo position according to the scaled value
myservo5.write(170);
}
[/code]
10 months ago
Hai,
I have done my best to make from the Buddy a clone Buddy, have everything drawn in sketchup and converted into stl files
First printed everything myself to see if everything is right and ............see photos
I use an arduino nano and the servos are connected on pin 3, 5, 6 3 = Top servo, the smallest arm 5 = Bottom servo, the biggest arm and 6 is the up / down Pen servo
I also have a power supply of 5 volts with a 7805 connected to it for the separate power supply of the servos
I have a question for Jegatheesan if i may
I see in the program that the servos have the following value respectively
servo top 170
servo bottom 10
servo 45
I suppose the servos are in rest position as in my pictures
If I now have a square mark on the Buddy screen and this start with the "draw Buddy" then my pen goes all the way to the right at the start against the side of the device and the servo top does not stop moving the arm goes back and forth , the servo Button slowly moves to the left, but I can not see a square, what am I doing wrong?
The Pen servo goes down after a while to start drawing and after the drawing back up, that looks good
Please can you help mi
Tthank you in advance
Reply 10 months ago
Check the distance between the screen center to center in the arm. Its 50, 32, 82
Reply 10 months ago
That is good I think , i have 50 , 80 , 30mm
Reply 10 months ago
check the new exe i send and using the procedure check how it works first and reply.
Reply 10 months ago
I am very amazed its come out very superb. I also want to develop my own and also want to change the program to Draw all the imported files also.
When top servo as 10 and bottom as 170 your home position is wrong. see in the attached image the pen come out side the paper straight to the servo. So set the sero position to 10 and 170 change the arm position in the servo and check it.
Reply 10 months ago
Hai
Thank you kindly to reply and sorry for disturbing
I have now made a movie
This is what I do
load the program in the arduino "Servo.ino"
compile the program and the buddy comes into its starting position
Open a drawing
press open port
and then press "draw Buddy"
and then the buddy starts with signs
but as you can see, it does not look okay
top arm is at 10 below at 170, and that's how I get the starting position
What am I doing wrong???????
Happy Holidays
Reply 10 months ago
Down the attached exe and replace with the Drawing buddy. Now run the application. Click open the port and using mouse right click the location in the drawing area and you found the angle of the three servos in the right side of the application in a label box. Check for other locations in the drawing area with the servo moments.
1 year ago
I have made a buddy with my 3D printer Prusa i3, (I think it is possible to make this in wood too) but I still need some help, like the position of the arms, the software (nice layout) I can start and make a drawing, but when I start the printer the arms are weird and the pen goes down and at the end of the drawing back up, in between nothing happens with the pen servo
Can you please help me
Kind regards
Kind regards
Reply 11 months ago
Do you have the stl files available for your design.
Reply 10 months ago
Hai,
Sorry, I do not have the files anymore, I did not have a decent program sign (I'm a bad programmer "not a programmer") to
I should have the parts somewhere
If I found a decent program somewhere for this project (for example drawing png files or a android app) then I would take time to make stl files again
Reply 10 months ago
I am very sorry still u can't finish this project.
Reply 10 months ago
Hai,
I am collecting the pieces, then I will try to put everything back together, because I really like this project and then a test phase follows, I will measure all dimensions and make new drawings in stl files
I would like to download figures (png or jpeg or bmp or svg or .........) and then plot
To be continued
Reply 1 year ago
Very Very Sorry for the very late reply. Actually i dont receive any email regarding this message in my mail box. Accidentally today only I found. Very Very Very nice design. Actually I want to make it like this. But i have no 3D printer and no 3d Print services available here. The design look very Amazing.
I cant able to understand what the problem. Can you please send me any video or tell in detail. I am very interested to complete this.
Reply 1 year ago
Thank you for help me
First I loaded the program "Servo" into my Arduino Uno, then the arms come into this position, then a opend and draw a sketch with the Buddy program, then clicked on "drawbuddy" button and this is what happens with the plotter , see movie
I have just changed the connection pins by 8,9,10 instead of the usual 3,5,6
Kind Regards
Reply 1 year ago
Not 8,9,10 but 9,10,11 | https://www.instructables.com/id/Cute-Drawing-Buddy-V1-Arduino/ | CC-MAIN-2019-47 | refinedweb | 2,764 | 80.62 |
In the
previous page,
you learned about resources and what they try to do. We also
started learning about using resources in our applications,
so let's pick up from where we left off.
Changing Build Action
By default, when you publish your application, your external
files are kept separate from your final application. For
what we are trying to do, we want our external file to be a
part of the executable itself, and we can do that by changing
our file's Build Action.
To change the Build Action, select your newly imported
file in your Solution Explorer. Once you have selected the
imported file, which in my case is blue.png,
take a look at the
Properties grid panel:
[ select your imported file and take a look at your
Properties grid panel ]
Notice that there is an entry for Build Action. Select
the Build Action entry, and when you select it, you should see a drop-down arrow
appear to the right of the Content text. Click on that arrow
and select Embedded Resource:
[ change your Build Action to Embedded Resource ]
By tagging your file as an embedded resource, you tell
Visual Studio to include this file as part of the
assembly instead of referencing this as a separate file.
Referring to Embedded Resources
using Code
With your file imported and tagged as an embedded resource,
the final step is to use the embedded resource in your
application. Since I've been using blue.png as an example file for the
past few sections, I'll continue using that file in my
example.
If I wanted to add blue.png to a WinForms
button called btnSubmit, I would use the following code:
Let's look at the above code in detail. Because I want to
insert an image into my application, I use the Image class's
FromStream method (Image.FromStream)
which takes a stream as its argument. The reason I am
looking for a method that takes a stream as an argument is
because that is the main format my embedded resource will be
accessible to the application.
Moving on, the Assembly class allows you to explore the various
metadata associated with your program. You use the
Assembly class's
GetExecutingAssembly method
to return an Assembly object that points to the assembly
that is currently running. In other words, you are trying to
find a way
to explore the metadata associated with your current
program!
Now that you have access to your assembly by using
Assembly.GetExecutingAssembly(),
the next step is to get the manifest from this assembly.
More importantly, beyond just the manifest, we also want the
resource the manifest provides access to. We do that using
the GetManifestResourceStream()
method and passing a string to the internal location of the
file.
The internal location of the file follows the
Namespace.filename.extension format. In my example,
the namespace under which I will be accessing my blue.png
resource is called ButtonIcon, and the
file's name is blue, and the extension is
png. Putting it all together, we get:
ButtonIcon.blue.png. That's
all there is to it.
Because I feel one example for something this complicated
is not adequate, I have also provided the code I used to
access an Embedded Resrouce text file called words.txt:
Notice that in the above example, I am using another stream
method - except on that supports text files such as
StreamReader. The namespace
of that particular application is called
IncrementalSearch, and the name of the file I am
accessing is words.txt.
Onwards to the
next page!
1 |
2
|
3
Flash Transition Effects
Flash Effect Tutorials
Link
to Us
© 1999 - 2009 | http://www.kirupa.com/net/resources_pg2.htm | crawl-002 | refinedweb | 615 | 62.27 |
Search documents with XQuery from Java applications
Document options requiring JavaScript are not displayed
Sample code
Help us improve this content
Level: Intermediate
Brett D. McLaughlin, Sr. (brett@newInstance.com), Author and Editor, O'Reilly Media, Inc.
29 Apr 2008.
SQL (see Resources for links).Path Basics.
/play/act/scene
scene
act
play
XPaths are relative.
speech
speaker.
/play
../
../../personae/title
personae
title
Selecting attributes
You can select a lot more than just elements. Consider an XPath like this: /cds/cd@title. This returns all attributes named "title" that are on cd elements, nested within a root element named cds.
/cds/cd@title
cd).
@isbn
Selecting text.
/cds/cd/title
text()
/cds/cd/title/text()].
.
[
]
/cds/cd
/cds/cd[1].
/cds[1]/cd[2]/title[1]
/cds[1]/cd
.
/cds/cd[last].
position()
/cds/cd[position().
style
/cds/cd[@style='folk']
@style
The same can be done with nested elements of the set selected. Suppose you have a
document structured like that in Listing 1.
<cds>
<cd style="some-style">
<title>CD title</title>
<track-listing>
<track>Track title</track>
<!-- More track elements... -->
</track-listing>
</cd>
<!-- More CDs... -->
</cds>].
track
track-listing
count()
').
@type
type
.
/cds/cd[title='Revolver'].
Adding XQuery to the mix'].
doc().
XQuery and FLWOR..
for
where
order by
return
let.
Using for.
x.
$cd
...
For programmers, this statement is really no different from this:
for (int i = 0; i<cdArray.length; i++) {
CD cd = cdArray[i];
// Process CD
}
Or, for lists:
for (Iterator i = cdList.iterator(); i.hasNext(); ) {
CD cd = (CD)i.next();
// Process each CD
}:
let $docName := 'catalog.xml'
for $cd in doc($docName)/cds/cd
....
:=
$docName.
let $docName := 'catalog.xml'
for $cd in doc($docName)/cds/cd
return $cd/title/text():
let $docName := 'catalog.xml'
for $cd in doc($docName)/cds/cd
return /cds/cd/title/text()
This has three significant problems:
doc($docName).
Get selective with where.
artist
id.
$artist."
where $cd/artist/$id = $artist/$id
$artist/lastName = 'Marley'
This is where XQuery really comes into its own. You can perform complex SQL-like joins and selections, all applied to XML documents (many of which probably weren't structured with advanced searching in mind).
Ordering your node sets:
date
release.
$artist/lastName
$artist/firstName.
Expanding the JARred:
jar
Run the installer:
[bdm0509:/usr/local/java] cd xqj
[bdm0509:/usr/local/java/xqj] ls
3rdPartySoftware.txt examples lib
Fixes.txt help planExplain
Readme.txt javadoc src..
package ibm.dw.xqj;
import com.ddtek.xquery3.XQConnection;
import com.ddtek.xquery3.XQException;();
} catch (Exception e) {
e.printStackTrace(System.err);
System.err.println(e.getMessage());
}
}
}.
init()
This program gets a filename (from the command-line, which it passes into the class's
constructor), and then executes the following code:
dataSource = new DDXQDataSource();
conn = dataSource.getConnection();.
com.ddtek.xquery3.xqj.DDXQDataSource
com.ddtek.xquery3.XQConnection>
<!-- more CD listings ... -->
</CATALOG>
Build the XQuery:
String
main()
XQueryTester:
docName
CD
TITLE
YEAR:
return <cd><title>{$cd/TITLE/text()}</title>" +
" <year>{$cd/YEAR/text()}</year></cd>":
declare.
xs:string
xs:int
So for the XQueryTester class, the query needs to be modified:());
}
Now, all that's left is to build a function for executing this query.
Run the XQuery
To run a query, you need to follow these steps:
XQExpression
XQConnection
bindXXX()
XQSequence
The only step that's tricky here is to bind variables. In this example, the variable docName needs to be associated with the filename passed in to the program. Since you're binding a string variable, you use bindString. That method takes three arguments:
bindString
javax.xml.namespace.QName
If you put all of this together, you'll get a method like this:.
XQExpression expression =
conn.createExpression();
expression.bindString(new QName("docName"), filename, conn.createAtomicType(XQItemType.XQBASETYPE_STRING));
QName.
bindInt()
bindFloat().
query()
Listing 5 shows the completed XQueryTester code.;());
}
}
}
Run your XQueryTester.
Experiment!.
Conclusion.
Download
Resources
About the author.
Rate this page
Please take a moment to complete this form to help us better serve you.
Did the information help you to achieve your goal?
Please provide us with comments to help improve this page:
How useful is the information?
IBM, DB2, and pureXML are trademarks of IBM Corporation in the United States, other countries, or both. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft,. | http://www.ibm.com/developerworks/xml/library/x-xjavaxquery/ | crawl-001 | refinedweb | 718 | 52.76 |
Compression Using DeflateStream and GZipStream
The .NET System.IO.Compression namespace provides two general purpose compressions streams – DeflateStream and GZipStream.
Both these compressions streams use popular compression algorithm which is similar to that used by the ZIP format. The difference is that GZipStream writes an additional protocol at the start and at the end which includes a CRC to detect errors. The GZipStream also conforms to a standard recognized by other software.
Both DeflateStream and GZipStream allow reading and writing, with the below provisos:
- Always write to the stream when compressing.
- Always read from the stream when decompressing.
DeflateStream and GZipStream are both decorators – compress or decompress data from another stream which is supplied in the construction. In the below sample, the code compresses and decompresses a series of bytes, using a FileStream as the backing store:
using (Stream s = File.Create ("compressed.bin")) using (Stream ds = new DeflateStream (s, CompressionMode.Compress)) for (byte i = 0; i < 100; i++) ds.WriteByte (i); using (Stream s = File.OpenRead ("compressed.bin")) using (Stream ds = new DeflateStream (s, CompressionMode.Decompress)) for (byte i = 0; i < 100; i++) Console.WriteLine (ds.ReadByte()); // Writes 0 to 99
Note that even with the smaller of the two algos, the compressed file is 241 bytes which is more than twice the size of the original file. Thus, compression does not work well with dense nonrepetitive binary files. Compression is instead a better fit for files such as text text files as shown in the example below:
string[] words = "The quick brown fox jumps over the lazy dog".Split(); Random rand = new Random(); using (Stream s = File.Create ("compressed.bin")) using (Stream ds = new DeflateStream (s, CompressionMode.Compress)) using (TextWriter w = new StreamWriter (ds)) for (int i = 0; i < 1000; i++) w.Write (words [rand.Next (words.Length)] + " "); Console.WriteLine (new FileInfo ("compressed.bin").Length); // 1073 using (Stream s = File.OpenRead ("compressed.bin")) using (Stream ds = new DeflateStream (s, CompressionMode.Decompress)) using (TextReader r = new StreamReader (ds)) Console.Write (r.ReadToEnd()); // Output below: lazy lazy the fox the quick The brown fox jumps over fox over fox The brown brown brown over brown quick fox brown dog dog lazy fox dog brown over fox jumps lazy lazy quick The jumps fox jumps The over jumps dog..
In the above sample, DeflateStream efficiently compresses to 1,073 bytes which is just slightly over one byte per word. | http://www.csharphelp.com/2011/01/compression-deflatestream-gzipstream/ | CC-MAIN-2015-32 | refinedweb | 398 | 66.03 |
GitHub user panlinux opened a pull request:
Rename 'async' to 'async_'
## Rename 'async', it's a reserved keyword with py3.7
### Description
This is a simple change that just renames 'async' to 'async_'. Starting with python 3.7,
async became a reserved keyword and without this change we see errors like these:
```
File "/usr/lib/python3/dist-packages/libcloud/compute/drivers/azure.py", line 1438
def _perform_post(self, path, body, response_type=None, async=False):
^
SyntaxError: invalid syntax
```
In, it was suggested
to remove this argument entirely, but I'm not familiar enough with the code to do that, and
this seems like a simpler fix to get things going.
### Status
- done, ready for review
### Checklist (tick everything that applies)
- [ ] [Code linting]()
(required, can be done after the PR checks)
- [ ] Documentation
- [x] [Tests]()
- [ ] [ICLA]()
(required for bigger changes)
You can merge this pull request into a Git repository by running:
$ git pull rename-async-keyword-py37
Alternatively you can review and apply these changes as the patch at:
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #1235
----
commit 5930e3e37f6791e0c0b8f5b1c41760093759ba7f
Author: Andreas Hasenack <andreas@...>
Date: 2018-08-27T13:42:34Z
Rename 'async' to 'async_', as the former is a reserved keyword starting with
python 3.7.
----
--- | http://mail-archives.apache.org/mod_mbox/libcloud-notifications/201808.mbox/%3Cgit-pr-1235-libcloud@git.apache.org%3E | CC-MAIN-2020-29 | refinedweb | 220 | 51.62 |
I have to wrtie a program for class that asks the user to input to cards ex: 7c(7 of clubs) and then compare the two cards at the end to see which one is the greater card. This is the code I have so far and for some reason it does not want to compile for the first card entry. Please help!
public class Program2 { public static void main(String[] args) { String str1; System.out.println("Please enter first card: "); str1 = SavitchIn.readLine(); len1 = str1.length(); if (len1==3) { char1a=str1.charAt(0); char1b=str2.charAt(1); char1c=str3.charAt(2); } else if (len1==2) { char1a=str1.charAt(0); char1b=str2.charAt(1); } else { System.out.println("Error"); } } } | https://www.daniweb.com/programming/software-development/threads/179554/playing-card-program | CC-MAIN-2018-09 | refinedweb | 119 | 76.72 |
From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-05-02 07:15:06
----- Original Message -----
From: "Noel Yap" <yap_noel_at_[hidden]>
> > > (since newbies to Jam would quite easily
> > > know what hasn't built),
> >
> > How would they know? And how would knowing increase
> > stability?
>
> Because, for example, if they download only concept
> checking, and it doesn't build, then it's obvious that
> that's is what's not building regardless of how well
> they know Jam.
And how would that increase stability?
> Currently, Boost is as strong as its weakest piece --
> if part of it doesn't build, all of it is broken.
That's just wrong. Have you looked at the compiler status tables
()?
> > > and decrease the traffic
> > > about autoconf/automake/make vs Jam (since those
> > > projects that need to use Jam can do so while the
> > rest
> > > can use autoconf/automake/make).
> >
> > All projects with built components need to use Jam.
>
> I'll assume for the moment there's a good reason this
> is.
Yes, and if you won't search the archives for rationale because of the
"high S/N ratio", I guess you'll stay in the dark about those reasons.
> However, if the libraries were distributed
> separately, its developers would more easily create an
> aam solution as well. As it stands, this is not
> encouraged since all users of Boost will read Boost's
> build and install directions. If each library had its
> own build system, they can refer to Boost's Jam
> instructions as well as supply aam (or other)
> solutions.
That's already permitted.
> Now that this hierarchy is more real to me, I've come
> to the conclusion that a reorg would also entail at
> least one unique namespace per library.
We also have reasons for not doing that.
> I think this
> is a Good Thing (tm) since, from what it looks like,
> each library is developed by different sets of
> developers; separate namespaces will decrease the risk
> of name clashes.
Fabulous! I presume you are planning to reorganize and test the
libraries in this configuration without interrupting development or
breaking backward compatibility for users...
> > We have a tool somewhere that can generate the
> > header dependency page
> > mentioned above. Not sure where, though... ;-(
>
> But if each library were distributed separately, such
> a tool wouldn't be necessary (at least for the users
> of the library). Don't you think that such
> implementation details shouldn't have to involve the
> users?
They don't now.
> > > IMHO, at the very least, I should expect boost to
> > > build OOTB without any errors.
> >
> > On every compiler and OS? Unrealistic.
>
> This is more of a reason to partition boost. As I
> said (or implied) earlier, currently Boost's
> development and build processes are as strong as its
> weakest component.
You keep saying that, but it doesn't hold water.
> Since Boost's pieces (and
> therefore Boost itself) is very much in flux, there is
> a strong chance that it will not build at any one
> time.
That hasn't been the case in practice (and you'd need to define what you
mean by "building", since most of the libraries don't need to be built
at all). Not all parts of boost are interdependent. If Boost.Python
doesn't build on some platform, it doesn't prevent the threads or regex
libs from building.
-Dave
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2002/05/29125.php | CC-MAIN-2019-43 | refinedweb | 580 | 74.08 |
, 5 months ago.
analogin
i'm having trouble using with sparkfun electret microphone.is there a possible way that a microphone would only receive a input when i say something and not taking to much pointless noise.is there a way to do this.?when it receive an input it will correlate and find angle.is there an improvement that i should do in my code.?to pointing in my code would be a great help.thank you and this is my source of help
#include "mbed.h" #include "math.h" #define SAMPLE_PERIOD 60 #define SAMPLE 1024 #define PI 3.1416 #define freq 44100 #define sos 384 //m/s #define WINDOWSIZE 128 Serial pc(USBTX, USBRX); DigitalOut myled(LED1); AnalogIn rightmic(p16); AnalogIn leftmic(p19); BusOut unused(p15,p17,p18,p20); Timer t; double max1,min1; double max,min; float left_results[SAMPLE]; float right_results[SAMPLE]; float timesampling; int peakoffset; int count; int correlate(float left_results[] , float right_results[]) { float peakCorrelation = 0; int peakoffset = 0; float correlation; int firstLeftSample = SAMPLE/ 2 - WINDOWSIZE/2; // the first sample in the left data such that our window will be centered in the middle. int timeoffset = -firstLeftSample ;//minimum time test while ( (timeoffset + firstLeftSample + WINDOWSIZE ) < SAMPLE) { correlation = 0; for (int i = 0; i<WINDOWSIZE ; i++) { correlation += left_results[firstLeftSample + i] * right_results[firstLeftSample + timeoffset +i]; } if (correlation > peakCorrelation) // look for the peak.. { peakCorrelation = correlation; peakoffset = timeoffset; timeoffset++; } pc.printf("lag=%d",peakoffset); } return peakoffset; } int angle(int peakoffset) { float c=0.15;//distance between two mics... float distance=(peakoffset*timesampling)*sos;//distance of unknown which is the speaker or length of 'a'.. int theta=asin(distance/c)*180/PI; //phase of the speaker return theta; } int main() { max = 0.0; min = 3.3; t.start(); for (int i=0; i<SAMPLE; i++) { while(t.read_ms()<i*SAMPLE_PERIOD)// wait until the next sample time.... { left_results[i]=leftmic.read(); right_results[i] = rightmic.read(); if (right_results[i] > max) max = right_results[i]; if (right_results[i] < min) min = right_results[i]; if (left_results[i] > max) max = left_results[i]; if (left_results[i] < min) min = left_results[i]; } t.reset(); max=max*3.3; min=min*3.3; pc.printf(" max=%0.3f, min=%0.3f\n\r",max,min); max=0.0; min=3.3; } correlate(left_results , right_results); }
1 Answer
4 years, 5 months ago.
I think a "simple" solution for this would be FFT (fast fourier transform). This lets you analyse the soundspectrum and subtract noise from your sample before computing.
Or maybe just look at the total power input.
Measure the sum of the squares of the most recent 50 or so samples, once that is over a certain threshold on either input start logging your data for the direction finding.
An FFT would allow a smarter system, e.g. by looking for the frequencies associated with speech, but assuming your background noise isn't almost as loud as the signal you want to locate then looking for anything over a certain volume should work fine.posted by 11 Jun 2015
for the sum of square do i have to do individually for the mics or both together?and is the certain threshold value that i have to start logging?posted by 12 Jun 2015
Individually for each mic. I'd look for the sum going over a value on either of them and then start the logging. How high that value is will depend on the microphones, the volumes you are trying to locate and the amount of noise. Something like 4 times the average for background noise would be a good starting point.
Also what is the idle voltage on your microphones? Unless you are throwing half your signal away they should be set so that silence gives you a value of about 0.5. You should then subtract that idle level when reading the signals.
left_results[i]=leftmic.read() - idleLevel;
this is what i have done but there are some errors.it will print my layer1 den layer den start printing the left.read() and right.read(). i'm not sure why
int main() { for(int i=0;i<SAMPLE;i++) { t.start(); while(t.read_ms()<i*SAMPLE_PERIOD)// wait until the next sample time.... { left_results[i]=leftmic.read(); right_results[i] = rightmic.read(); pc.printf("%.3f, %.3f\n\r", left_results[i],right_results[i]); } t.reset(); if(i%50==0) { pc.printf("layer1\n\r"); for(int a=0;a<=50;a++) { pc.printf("layer2\n\r"); pc.printf("%.3f, %.3f\n\r", left_results[a],right_results[a]); sumleft[a]+=left_results[i]*left_results[i]; sumright[a]+=right_results[i]*right_results[i]; if(sumleft[a]>1||sumright[a]>1) { correlate(left_results,right_results); angle(peakoffset); } } } } }
You need to log in to post a question | https://os.mbed.com/questions/53981/analogin/ | CC-MAIN-2019-51 | refinedweb | 776 | 58.58 |
Operator overloading is one of the fundamental operation which come across often in a C++ program. It is bit cryptic in syntactical side as well as often a misunderstood topic amongst new programmer. I will try to explain this as a series of C++ related notes (as I like to call this) following this post.
Okay, so to start with lets consider the following piece of code:
#include <iostream> #include <string.h>; } }; int main() { char *dummy_data = new char[26]; for(unsigned int i=0; i < 26; ++i) dummy_data[i] = 'a' + i; Packet p1(dummy_data, 26); p1.print_bytes(26); return 0; }
The above program can be downloaded from here as well.
Note: to download the complete source code please click here.
Now, in the above program we have constructed a class called Packet, which simply contains a pointer to a consecutive bytes in memory often called as a buffer, in this example ‘buf‘. The main function of the Packet class is to hold a small data in its buf structure.
There is a no parameter constructor of the class which is just a place holder and there is a two parameter constructor of the class which essentially takes a data pointer and the size of the data. There is a print_bytes function which helps to print the content of the data. The max buf size is determined by a static member of the class called ‘max_buf_size‘.
Now in the main we have created an instance of the Packet class by passing a dummy_data and its size. At this point the two-parameter constructor is called and it checks if the buffer point ‘buf‘ has been initialised or not. If not already initialised the constructor allocates the buffer with maximum possible size (to keep things simple) and then clears up the allocated memory with memset call. Next the constructor copies the data into the buffer by calling memcpy. I will not go into the details of any of the library function used here as that is out of the scope of this discussion. Then the buffer content is printed with a call to print_bytes() member function call.
Now if we compile the program and run it we will see something like the below (I am running it on CodeBlocks in my Windows machine):
Now lets modify the code to add the below lines to create another instance of Packet class and then try assigning the values of the first instance to the newly created instance as below:
Packet p2; p2 = p1; p2.print_bytes(26);
Now in the above code snippet we have instantiated a new instance of Packet class called ‘p2’ simply by calling the no-parameter constructor and then have assigned the instance ‘p1’ to it. Then we have called the print_bytes() member function on the p2 instance. The output looks somewhat like the below in my Windows PC in CodeBlocks:
As you can see all the bytes of the buffer ‘buf’ are being printed for both the instances. So good so far.
Now lets say this is large program and during the course of the program we need to release the buffer held by the instance p1 as it is not required. The reason for this may not be quite evident in this example as it is meant to be ridiculously simply for discussion purposes. But think about a real life program with thousands of different scenarios where a packet manager is supposed to create packets and then time to time release the packets as those expires. So ultimately the time has arrived for our dear packet p1 to depart. For releasing a buffer area we will add a utility member function in the class definition and then call it in due course. Please notice the below changes to the program (the whole program may be downloaded from here):; } void release_buffer(void) { delete buf; } };
The above code has introduced the new addition of the utility function release_buffer() in the Packet class definition. Now in the main we add the below lines of code:
int main() { char *dummy_data = new char[26]; for(unsigned int i=0; i < 26; ++i) dummy_data[i] = 'a' + i; Packet p1(dummy_data, 26); p1.print_bytes(26); Packet p2; p2 = p1; p1.release_buffer(); p2.print_bytes(26); return 0; }
Now if we compile and run the program again, we will see something like this:
If you look at the above closely you will see the first print is looking ok but in case of the second print there are lots of junk characters been printed. That is strange, is it not? Actually what has happened here is just after calling the release_buffer() member function of the p2 instance the buffer ‘buf’ has been erased from the memory and that has caused a lot of grief for the p1. But how? This is classically known as a side effect of shallow copy by the C++ runtime.
Shallow copy and dangling pointer
Look at the below operation:
p2 = p1;
In the above the assignment operator is being invoked to copy the content of the p1 instance to the p2 instance. But how does the operator copies the content of the one instance to other? The usual assignment operator is extended to carry out its duty of copying class member variables across object by the compiler which is known as operator overloading. The very use of the word overloading might already be giving you a hint that there is something extra duty being executed by the operator. Now, compilers do not know the intricacies of your class nor it knows or supposed to know how the class fits into the bigger scheme of things in your larger program. Hence, the overloaded assignment operator provided by the compiler is pretty rudimentary. All it does is a member to member copy of the class variables. Now this works fine until there is no external entity involved in the process. Had all the class members been C++ fundamental data types like int or char or double etc there won’t have been any issue. But in the cases of a pointer being the member of a class there is an external entity involved in this scenario which is the piece of memory the pointer is pointing to. Compiler doesn’t know about it nor it cares. So what it does is takes the memory address kept in the buf member of the p1 instance and stores it into the buf member of the p2 instance. The scenario is somewhat like the below diagram:
Now till the p1 instance doesn’t release the buf or modifies buf, there should be no apparent problem, or at least won’t be evident. But the moment p1 modifies the data in the buf, there will be inconsistency in p2 which it doesn’t have any idea of. The worst comes when the buf is released by the p1 instance. At that point buf in p2 is holding a memory address which has been freed and perhaps been allocated to some other piece of code for some other purposes. At this point the buf point in p2 becomes a dangling pointer, a pointer which points to an undefined memory location or an invalid or illegal memory location. That’s why we see garbage been printed in the print_bytes function when called by the p2 instance. If this program continues soon the program will be caught by a null pointer exception and the whole program falls apart.
Overloading the assignment operator
The simple solution to this problem is to not rely on the compiler provided rudimentary overloaded assignment operator rather rewrite it ourselves as per our need. And ours would be more sophisticated than the compiler’s, duh!
Syntax
The syntax of the overloaded assignment is bit cumbersome at least I find it bit a memory jog. Lets talk about the first thing we have to deal here that is the return type of the overloaded assignment operator. If you think about it what all the overloaded assignment operator is doing is copying one object into another. So both the operands are of the actual class type which is Packet in this case. Hence, as a memory tip always remember the outcome of the operator is also the class type i.e. in this case Packet class type. One more thing, once the overloaded operator has done its job i.e. forming an object of the class type by copying content from the supplied object, it must not be modified accidentally. Hence, const is our friend here. The return type has to be const type. One more C++ memory tip is to think about the copying one object to another as a memory to meory copy operation and to reduce the amount of copying we would have to use a reference. So all these put together the return type of the overloaded operator will be a const Packet type reference, i.e. const Packet &.
Now, for any operator overloading we have an keyword to remember and that is operator immediately followed by the operator being overloaded, in this case ‘=’. Hence, the next piece of the puzzle is operator=. So fat the first line of our endeavour to write the overloaded assignment operator looks like this:
const Packet& operator=
Now coming to the arguments. The arguments to the overloaded assignment operator are two, the first is an implicit argument which is this pointer. It points to the object which resides on the left hand side of the operator i.e. in our case p2. There is no need pass this as an argument as C++ compiler does this for us silently. The second argument is the one residing on the right hand side of the = operator i.e. in our case p1. Now again memory tip, when we pass p1 as an argument to = operator we do not want it to modify the p1 instance even by accident. Hence, we would qualify it by const and to reduce memory copying we would be suing a reference to p1 rather than passing it by value. So till now our overloaded operator looks like this:
const Packet& operator=(const Packet& other)
Then the rest is the body of the overloaded operator. In the body we want to make sure memory is properly allocated for the buf instance. So we check if there has been any memory allocated already or not bu null pointer check and if not then allocate the memory. So far the overloaded assignment operator looks like this:
const Packet& operator=(const Packet& other) { if(!buf) { buf = new char[max_buf_size]; memset(buf, 0, max_buf_size); } }
As you can see above after allocating the memory we have cleared it up with memset which is a standard practice to ensure there is no garbage content in the memory while being allocated form the heap. Next we copy the buf content from the p1 instance whcih is denoted by the passed reference other in our case. So the code now looks like this:
const Packet& operator=(const Packet& other) { if(!buf) { buf = new char[max_buf_size]; memset(buf, 0, max_buf_size); } memcpy(buf, other.buf, max_buf_size); }
Remember you could have used the this pointer to explicitly denote the member variable buf in this, that would be completely legal but still redundant as the compiler already does that for you. So you could have written the code as below:
const Packet& operator=(const Packet& other) { if(!this->buf) { this->buf = new char[max_buf_size]; memset(buf, 0, max_buf_size); } memcpy(this->buf, other.buf, max_buf_size); }
It is not required for any practical purposes but it emphasises the fact that the this pointer is the owner of the memory where buf resides. And also, it perhaps helps to remember the what to be returned in the last statement in the overloaded operator. In the last statement we return *this which simplistically means the content of the memory location i.e. p2 instance and finally a reference is returned to the caller. So the complete overloaded assignment operator now looks like below:
const Packet& operator=(const Packet& other) { if(!this->buf) { this->buf = new char[max_buf_size]; memset(this->buf, 0, max_buf_size); } memcpy(this->buf, other.buf, max_buf_size); return *this; }
Or for all practical purposes as below:
const Packet& operator=(const Packet& other) { if(!buf) { buf = new char[max_buf_size]; memset(buf, 0, max_buf_size); } memcpy(buf, other.buf, max_buf_size); return *this; }
The output looks like the below in my Windows PC:
Now, time for another C++ terminology which is deep copy. The above way of copying the content of once class instance to another is called deep copy i.e. not one to one copying the member variables but to actually taking care of any inherent memory or resource allocation and then carrying out the copy operation.
The complete source code may be downloaded from here.
That’s all for this post, have fun! | https://vivekbhadra.wordpress.com/2020/05/08/c-notes-shallow-copy-overloaded-assignment-operator-and-deep-copy-explained/ | CC-MAIN-2022-27 | refinedweb | 2,149 | 59.53 |
//**************************************
// Name: Sum of Odd and Even Numbers in C++
// Description:This simple C++ program will compute the sum of odd and even numbers from the given number by the user.>
using namespace std;
int main()
{
int a=0, number=0, odd_sum = 0, even_sum = 0;
cout << "Sum of ODD and Even Numbers in C++";
cout << "\n\n";
cout << "Give a number : ";
cin >> number;
for (a = 1; a <= number; a++)
{
if (a % 2 == 0)
even_sum = (even_sum + a);
else
odd_sum = (odd_sum + a);
}
cout << "==== The Result =====";
cout << "\n\n";
cout << "Sum of all odd numbers is " << odd_sum << ".\n";
cout << "Sum of all even numbers is " << even_sum << ".";
cout << "\n\n";
cout << "\t\t End of Program";
cout << "\n\n";
}. | http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=13844&lngWId=3 | CC-MAIN-2017-51 | refinedweb | 114 | 69.96 |
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Jim Fulton wrote: > I'm trying to make the Zope Toolkit (ZTK) publisher/publication > framework a little easier to deal with. I think zope.app.publication > mostly provides a generally useful publication implementation. It has > 2 problems: > > - Its getApplication method digs a particular object out of a ZODB > database. Some people would like > the flexibility of not using ZODB. > > - It aggressively proxies objects using > zope.security.checker.ProxyFactory. Some people don't want > to use proxies and those that do might want to use a different > proxy or checker implementation. > > It was already possible for a subclass to override getApplication > fairly cleanly. (This also entailed overridding __init__ in a pretty > simple way.) > > I made it possible to override proxying by overriding the proxy > method. At this point, I think zope.app.publication provides a pretty > reasonable base class for custom publication implementations, although > the module arrangement could be made a bit simpler. > > In working on this, I've found the implementation and tests of > zope.app.publication to be split between zope.app.publication and > zope.traversing. I want to sort this out, but I'm not certain what > the intent if zope.traversing is. I think the intent has become > muddled over time, if it was ever clear in the first place. :) > > Early on, we mixed up ZPT path traversal and URL traversal. These are > similar, but different. There are features you want in ZPT traversal, > like the resource namespace that you don't want in URL traversal. > Similarly, you want features like virtual host support and default > pages in URL traversal but not in ZPT traversal. > > Early on, we used path traversal in many places, not just ZPT, which > is probably why most of the code in zope.traversing isn't in zope ZPT- > related package. > > Early on, we decided that the menu framework should be able to filter > menu items based on whether the user could traverse to an item. On > some level, this was reasonable because it made menu specifications > simpler, but it adds great expense and complexity. I'm not sure > anyone in the know uses the menu framework anymore. If they are and > aren't specifying permissions on their menu items, they are being > helped to do the wrong thing. > > Originally, zope.app.publication defined a base class, > PublicationTraverse, that provided traverseName. Even though I may > have done this, I don't know why this was broken out into a separate > base class. I don't think anyone else is subclassing this, but I > don't know. > > I propose the following: > > In phase 1 reduce the scope of zope.traversing: > > - Move path traversal code to zope.tales > > - Move the URL traversal code used by zope.app.publisher.menu to > zope.app.publisher.menu. Refactor it to use the request.publication. > Deprecate definition of menu items without permissions. > > - Move the virtual host and skin tests from zope.traversing to > zope.app.publication. > > The only things left, I think will be the namespace framework and the > absolute-url support. (Both of these could use more thought, but I've > used up my thinking budget for this morning. :) > > In phase 2, simplify the class tree in zope.app.publication > > Merge zopepublication, http, browser, and xmlrpc. traverse using > request.method when request,method isn't one of GET, HEAD, or POST. > > Maybe in phase 3: > > - Create zope.publication from zope.app.publcatiobn > - use webtest rather than zope.app.testing. > - Maybe provide a paste deployment mechanism for easily assembling > publisher-based appls based on prototype work I'm doing in > zc.publication. > > Thoughts?
+1. I would also like to be able to break the current zope.app.publication dependencies within Zope2: - ZPublisher.BaseRequest uses zope.app.publication for the EndRequestEvent class - Products.Five.compuonent likewise uses it for 'IBeginReqeustEvent', 'IEndRequestEvent', and 'BeforeTraverseEvent' Can we move those events and their interfaces out into a non-zope.app package? E.g., the as-yet-notional zope.publication package you mention for phase 3? Or zope.traversing? There are dependendencies on zope.app.publisher as well: - Products.Five.browser.adding uses 'getMenu'. - Products.Five.browser's configure.zcm uses IMenuItemType, MenuAccessView, and IMenuAccessView. - Products.Five.form.metaconfigure uses menuItemDirective. - Produts.Five.viewlet.metaconfigure uses viewmeta. - Products.Five.fivedirectives uses IBasicResourceInformation. The first two may be the only real uses of the menu framework you discuss above. I would actually like to move Products.Five.viewlet out into a separate pacakge (five.viewlet), as I don't think there is a core requirement to support the viewlet machinery. The last one is trickier: that interface likely belongs somewhere like zopeKPm6s+gerLs4ltQ4RAhNMAJ9g2hxeFOQPrzKlqA97PWLWbALwoQCfeOVy CqGt2HPjW3AlJMlmu1n0dhM= =iqvU -----END PGP SIGNATURE----- _______________________________________________ Zope-Dev maillist - Zope-Dev@zope.org ** No cross posts or HTML encoding! ** (Related lists - ) | https://www.mail-archive.com/zope-dev@zope.org/msg30156.html | CC-MAIN-2018-47 | refinedweb | 795 | 52.56 |
This is your resource to discuss support topics with your peers, and learn from each other.
04-19-2013 11:22 AM
This might be an obvious question but I've never written C/C++ untill recently.
In my 'src' folder I have a file called recordvideo.c
I want to include that in the main cpp file I had assumed I would do something like ...
#include<recordvideo.c>
Can anyone tell me how to properly include the c file? Also when you use #include what is the default path, e.g. in what folder is it looking for recordvideo.c?
04-19-2013 12:32 PM
04-19-2013 12:45 PM
Thank you for the question.
You don't use the "#include" directive for .c files visibility, instead you use it to expose C/C++ types, classes, macros,...
i.e. your declarations located in your .h files to other .c/c++ files.
If you want your objects or global variables from your C/C+ files to be accessed by other C/C++ files, you can simply use the "extern" keywork (there are other ways to do that but this is the simplest way)
Please check our helloword application or the very basic apps for how to set your include path.
04-19-2013 01:26 PM
In addition...
When the include statement is:
#include <bb/cascades/QmlDocument>
The compiler will look through the folders contained in the Include Path.. This is located in QNX Momentics IDE at:
Project -> Properties -> C/C++ General -> Paths and Symbols -> Includes tab on the right... The compiler will look through the folders contained there in the order they are in from top to bottom.
When the include statement is:
#include "Box2D/Box2D.h"
The compiler will look for the file relative to the current file that the include statement is in. That is, in the same folder that the current file is in...
.../workspace/NDK/myApp.cpp <--- include statement "" it will look in .../workspace/NDK/ ... | http://supportforums.blackberry.com/t5/Native-Development/Simple-Include/m-p/2319217 | CC-MAIN-2015-06 | refinedweb | 332 | 73.98 |
I am trying to use SDL to make a movie easter egg. The movie code is
I am running Dev-C++ on windows 7 and I followed Lazy-foo's tutorial. I get a linker error in the form ofI am running Dev-C++ on windows 7 and I followed Lazy-foo's tutorial. I get a linker error in the form ofCode:
#include <iostream>
#include <string>
#include "SDL.h"
using namespace std;
int i;
int main(int argc, char* args[]) {
cout << "Welcome to the movie easter egg.";
i = 1;
if (i == 1) {
SDL_Surface* frame1 = NULL;
SDL_Surface* screen = NULL;
if((SDL_Init(SDL_INIT_EVERYTHING)==-1)) {
cout << "Error initiating movie."; }
screen = SDL_SetVideoMode( 640, 480, 64, SDL_HWSURFACE );
frame1 = SDL_LoadBMP( "C:\\Users\\User\\Desktop\\Frames\\Frame1.bmp" );
SDL_BlitSurface( frame1, NULL, screen, NULL );
SDL_Flip( screen );
SDL_Delay(2000);
SDL_FreeSurface( frame1 );
SDL_Quit();
return 0;
}
}
[linker error] undefined reference to 'SDL_Init'
and I get linker errors to everything sdl in the code also. I also get a linker error to winmain@16. This isn't a project but a source file. I am not going to create a project unless it is absolutely necessary. Please help. | http://forums.codeguru.com/printthread.php?t=538603&pp=15&page=1 | CC-MAIN-2013-48 | refinedweb | 187 | 64.51 |
In a previous article we saw an overview of the new IDE and Framework features of the current .NET Framework beta v4.5. Among other things .NET 4.5 has an improved support for Asynchronous programming through a new Task based model. In this article, we will take a look at the new async and await key words introduced in the C# language.
Asynchronous programming has always been considered a niche and has always been delegated as a ‘good-to-have’ in v2 of something that you were working with. Moreover lack of Async APIs meant you had to wrap async functionality first before you could use them in your code. Basically a lot of plumbing was potentially required to get off the ground with Async programming.
public class MyReader { public int Read(byte [] buffer, int offset, int count); }
The pre-TPL API would look something like the following and you would have to setup the callback method yourself and track the End[Operation].
public class MyReader { public IAsyncResult BeginRead( byte [] buffer, int offset, int count, AsyncCallback callback, object state); public int EndRead(IAsyncResult asyncResult); }
Instead of the call-back model, one could use the Event based model and write it up as follows
public class MyReader { public void ReadAsync(byte [] buffer, int offset, int count); public event ReadCompletedEventHandler ReadCompleted; } public delegate void ReadCompletedEventHandler( object sender, ReadCompletedEventArgs eventArgs); public class ReadCompletedEventArgs : AsyncCompletedEventArgs { public int Result { get; } }
But with the introduction of Task Parallel Library (TPL), asynchronous programming became easier. To make the above, we would simply need the following code
public class MyReader { public Task<int> ReadAsync(byte [] buffer, int offset, int count); }
Now the above method signature shows how an Async method would be implemented. It returns a Task<T> where T is of type required for the return and the method itself by convention has the word Async appended to it.
Apart from the lesser plumbing required, the framework added a huge collection of Async methods by default. So going forward in this article, we will look at how to consume Async methods provided in the framework.
Consuming an Async call and the ‘async’/‘await’ keywords
As mentioned above the latest .NET framework provides a lot of Async counterparts of older Synchronous method calls. Specifically anything that could potentially take more than 100ms now has an Async counterpart. So how do we use these async methods?
In .NET 4.5 we use the async and await keywords while consuming Asynchronous APIs. So to consume the above ReadAsync we would write code as follows
The method signature when marked with async keyword tell the compiler of our intent to use Async method and that we wish to ‘await’ returning from these Async methods. So the compiler expects one more await keywords in the method. If none are provided, compiler will generate a warning.
Under the hood, the compiler generates a delegate for code after the await key word is used and moves execution to the end of the method such that the method returns immediately. Thereafter the delegate is called once the async method is complete.
Things to look out for
All looks pretty neat, where is the catch?
Well there are quite a few when leveraging the Async framework.
Synchronization Context
Though thread synchronization has now been hidden away from you, does not mean it’s not happening. For example an async read operation that is initiated from the UI thread will have a SynchronizationContext stashed away and every thread completion will result in a hop back to the initiation thread. This is a good idea most of the time because for end users the hop back means ability to show progress on the UI or work on the UI thread. However if this call is initiated from UI thread and passed on to a library whose sole purpose it to read the data asynchronously, the overhead for hopping back and forth between the execution thread and the UI thread is very high and big performance killer. To avoid this Sync context can be skipped using the following syntax
… { … int bytesRead = await reader.ReadAsync(buffer, 0, buffer.Length).ConfigureAwait(false)); … }
What ConfigureAwait(false) does is, it tells the framework not to hop back into the initialization thread and continue if possible.
Avoiding Deadlocks
Incorrect use of the ‘Wait’ call instead of the synchronous ‘Await’ call may cause deadlocks because issuing a Wait on the initialization thread blocks the UI thread, and when ReadAsync tries to synchronize it can’t so it waits resulting in an unending wait. This situation can be avoided if we configure TPL to avoid synchronization context using the ConfigureAwait(false).
How Garbage Collection is affected
Use of the async keywords results in creation of an on-the-fly state machine structs that stashes away the local variables. Thus these local variables are ‘lifted’ up from the ‘stack’ allocation and moved into the heap. Hence even through they look like local variables they definitely stay around for longer.Moreover .NET’s GC is generational and sometimes multiple sync points might push the allocated ‘locals’ into an older generation thus keep them around even longer before they are eventually collected.
Asynchronicity and Performance
Asynchronous operations are more about scaling and responsiveness rather than raw performance. Almost all asynchronous operations suffer a performance for a single operation due to the overheads introduced. Hence do not treat Performance as the criteria for doing async operations. True performance gain is an aggregate of how better your async implementation uses resource better.
Client vs. Server Async operations
Just as performance is not the reason to do Async, it may not be a good fit for server side operations either. On the client side, asynchrony is all about getting the UI thread freed up for user-interaction as soon as possible by pushing compute intensive tasks off it. But on Server side, operations deal with processing a request and responding to it as quickly as possible. There is no UI thread per se. Every thread is a worker thread and any thread blocked is an operation in wait. For example in ASP.NET the thread pool is revered resource. Unnecessary thread spinning by indiscriminately using threads increase CPU resource requirements and reduce server scaling. Remember in case of servers the most important thing to do is to avoid context switches. Choose an async operation wisely for example when you have I/O intensive operations, using async with correctly configured awaits will actually help the operation.
Going forward
This was a quick introduction to the basic features of the Task-Based Asynchronous Pattern in the upcoming framework release. We will explore more features of this implementation in a follow up article.
Will you give this article a +1 ? Thanks in advance
1 comment: - are encapsulated from my .NET program. I literally have millions of records to process and, even though I am using TPL, it is still incredibly slow. To that end, I was hoping to use ASYNC calls to this DLL to make things run faster. Sort of a fire-and-forget on the frontend with something to catch the responses when they come back. That said, the C++ DLL needs to run in process. Will this approach you describe work for me and, if not, do you know of any other options in 4.5? | http://www.devcurry.com/2012/05/task-based-asynchronous-pattern-in-net.html | CC-MAIN-2017-13 | refinedweb | 1,225 | 60.45 |
Created on 2010-02-18 21:23 by mnewman, last changed 2011-05-14 11:59 by ezio.melotti. This issue is now closed.
test.support.captured_output is not covered in the online documents:
However, it does have a docstring in "C:\Python31\Lib\test\support.py" (see below). The current example for "captured_output" does not work. Looks like someone moved it from "captured_stdout" but did not fully update it. Note the old example still references "captured_stdout".
# Here's the current code in "support.py":
@contextlib.contextmanager
def captured_output(stream_name):
"""Run the 'with' statement body using a StringIO object in place of a
specific attribute on the sys module.
Example use (with 'stream_name=stdout')::
with captured_stdout() as s:
print("hello")
assert s.getvalue() == "hello"
"""
import io
orig_stdout = getattr(sys, stream_name)
setattr(sys, stream_name, io.StringIO())
try:
yield getattr(sys, stream_name)
finally:
setattr(sys, stream_name, orig_stdout)
def captured_stdout():
return captured_output("stdout")
# Example for captured_output should now be:
with captured_output("stdout") as s:
print("hello")
assert s.getvalue() == "hello"
# It would be nice to reconcile the online doc versus the docstrings, since it confusing and makes me confused whether captured_stdout is deprecated.
I don't see why the example wouldn't work anymore. It's just in the wrong function's docs :)
New changeset 459e2c024420 by Ezio Melotti in branch '2.7':
#7960: fix docstrings for captured_output and captured_stdout.
New changeset c2126d89c29b by Ezio Melotti in branch '3.1':
#7960: fix docstrings for captured_output and captured_stdout.
New changeset 18a192ae6db9 by Ezio Melotti in branch '3.2':
#7960: merge with 3.1.
New changeset 7517add4aec9 by Ezio Melotti in branch 'default':
#7960: merge with 3.2.
I fixed the docstring, however I think captured_output should be renamed _captured_output, since it only works with sys.stdout/in/err and there are already 3 other functions (in 3.2/3.3) that use captured_output to replace the 3 std* streams. There's no reason to document and use it elsewhere.
Georg, what do you think?
New changeset ec35f86efb0d by Ezio Melotti in branch 'default':
Merge with 3.2 and also remove captured_output from __all__ (see #7960). | https://bugs.python.org/issue7960 | CC-MAIN-2018-05 | refinedweb | 351 | 60.41 |
Is there any way to use a
WebClient
Severity Code Description Project File Line
Error CS0246 The type or namespace name 'WebClient' could not be found (are you missing a using directive or an assembly reference?)
WebClient
As @mike z says in the comments,
WebClient isn't in the Core repo and likely won't be ported. The Stack Overflow question "Need help deciding between HttpClient and WebClient" has some fairly good answers as to why you should be using the
HttpClient instead.
One of the drawbacks mentioned is that there is no built-in progress reporting in the
HttpClient. However, because it is using streams, it is possible to write your own. The answers to "How to implement progress reporting for Portable HttpClient" provides an example for reporting the progress of the response stream. | https://codedump.io/share/iOUQVLXblnG0/1/how-to-use-webclient-with-netcore | CC-MAIN-2017-43 | refinedweb | 135 | 57.61 |
Write a Java program to continue creating your own userdefined
methods and introduce some do while loops. Write two valuereturning methods called farToCel() and celToFar(). These two methods will convert temperatures from Fahrenheit to Celsius and Celsius to Fahrenheit respectively. They will each take a single int parameter and return the converted value as an int.
Write an additional value returning method called displayMenu() to display a three
item menu and read input from the user which is their selection and return that as a
char. Consider using the Character class method toUpperCase() to narrow the
number of choices from six {F, f, C, c, Q, q} to three {F, C, Q}.
Note that displayMenu() has no parameters. The displayMenu() method should
only return one of three possible values. The choices are 'F' for Fahrenheit to Celsius
conversion, 'C' for Celsius to Fahrenheit conversion and 'Q' to quit.
Start by calling displayMenu() within a dowhile loop in the main() method
which will capture the return value of displayMenu() into a char variable. This value
will be used to determine what type of conversion is to be done and the loop will termine
when the user enters 'Q'.This would be a good opportunity for a dowhile
loop in the displayMenu()method as the user continues to make a bad selection. We would stay in the displayMenu() method until the user chooses a valid selection. By doing so, this method can never return bad selections to the main() method.
Once you have mastered your displayMenu() method, you can proceed to the other
two. Also, note that displayMenu() is not responsible for reading the temperatures;
the main() method will do this based on the choice made by the user.
The calculation for Fahrenheit to Celsius is:
( f - 32 ) * 5 / 9
The calculation for celsius to fahrenheit is:
c * 9 / 5 + 32
Just started and wanted to get this started for the class before I left class :) Below is just the start of my program not completed yet but always open to input from others to tell me how bad it looks and what I should change to make it look better.
For instance the printf lines will be changed later to println with /t to make the spaces.
Keep your head up dunkin donuts girl :)_________________________________________________________________________________________
import java.util.*; public class p7 { public static void main(String[] args) { displayMenu(); // farToCel(); // celToFar(); } public static void displayMenu() { char letter; int x, y; String input; Scanner kb = new Scanner(System.in); System.out.println("Please select one of the following: \n"); System.out.printf(" F - to convert Fahrnheit to Celsius %n"); System.out.printf(" C - to conv2ert Celsius to Fahrnheit %n"); System.out.printf("%n Q - to Quit. %n"); System.out.printf("%nChoice:"); input = kb.nextLine(); letter = input.charAt(0); /*while(input.toUpperCase("F, C, Q")) { System.out.println("The temperature " + input + "is" + (farToCel) + "Celsius."); input = kb.nextLine(); letter = input.charAt(0); System.out.println("The temperature " + input + "is" + (celToFar) + "Fahrenheit."); } } public static void farToCel() { System.out.print("Please enter the Fahrenheit temperature:"); x = kb.nextInt(); } public static void celToFar() { System.out.println("Please enter the Celsius temperature:"); y = kb.nextInt(); } */ } } | https://www.daniweb.com/programming/software-development/threads/321225/jojo-p7-any-help-greatly-appreciated | CC-MAIN-2017-39 | refinedweb | 526 | 57.06 |
automatic array casting in C++ does not work
I try to read .png images in
uint8_t, regardless of their actual internal representation, i.e., 8 or 16 bit, in C++. In the current way, when I try to read a uint16 .png image using
bob::io::image::read_color_image I get:
cannot efficiently retrieve blitz::Array<uint8,3> from buffer of type 'dtype: uint16 (2); shape: [3,606,544]; size: 1977984 bytes'
After reading code a little bit, I found that in I could simply replace:
png.read<T,N>(0);
with:
png.cast<T,N>(0);
Theoretically, this should read the data in any format by resetting the image buffer to the type inside the image (uint16): read the data, and then cast it back to the desired type:
Unfortunately, this doesn't compile, it gives me the error:
bob/io/base/include/bob.io.base/blitz_array.h:247:49: error: no matching function for call to ‘cast(const bob::io::base::array::blitz_array&)’
Again, reading more code, I figured out that this function doesn't exists in that namespace, but that:
return bob::core::array::cast<T,N>(*this);
in should rather be:
return bob::io::base::array::cast<T,N>(*this);
Now, this doesn't compile either, it gives me the error:
bob/core/include/bob.core/cast.h:23:30: error: invalid static_cast from type ‘const std::complex<float>’ to type ‘unsigned char’
() which, in the first place looks weired, as I don't deal with complex numbers at all. However, when reading the backtrace, the code here:
tries to cast anything into anything, which in my case is
complex<float> into
uint8_t, which raises the compiler error from above.
So, my question is: was this array casting being tested in any way, and has it been working before? Has anyone ever tried to use the
bob::io::base::array::cast function? | https://gitlab.idiap.ch/bob/bob.io.base/-/issues/14 | CC-MAIN-2021-31 | refinedweb | 315 | 50.26 |
Building hsdis in 2022
Note: this post is a not very serious guide to building
hsdis using a custom
cmake build script, in order to avoid having to set up an environment for the openjdk build system on Windows (though I’ve also tested that the script works under WSL Linux). I like to play with
cmake from time to time, and this post is about the result of the little project of building
hsdis with
cmake (so, it’s not a very serious guide. Though it should be good enough to get a working
hsdis library). The official instructions for building
hsdis can be found in the openjdk repo here:.
What is hsdis?
hsdis is a disassembler plugin for the OpenJDK/HotSpot JVM. It can be used in conjunction with the
PrintAssembly option (as well as other options) to disassemble and print out code generated by HotSpot’s JIT compilers. It is a separate shared library that can be installed on the
PATH or in the JDK directory next to (lib)jvm.(dll/so/dylib). The VM will dynamically load this library and call the function it exposes to disassemble dynamically generated code.
If you’re interested in the code generated by the VM, you will need the
hsdis plugin to make it visible in a human-readable format (well, if that human happens to know how to read assembly). Without the plugin, the
PrintAssembly option will just output the bytes of the instructions instead.
Building hsdis
Not too long ago, the
hsdis plugin required binutils as a dependency. Fairly recently however, 2 more flavours of
hsdis were added, one based on llvm, and one based on the capstone disassembler library. It is this latter flavour that makes it significantly easier to build
hsdis.
The official way to build
hsdis is through the openjdk build system. If you’re interested in that, the instructions can be found here.
There is, however, an easier way to build it that, crucially for Windows users, doesn’t require setting up cygwin or WSL and using autoconf and
make to run the openjdk build system.
Users also need to provide the capstone library for the build process, a project that uses
cmake as a build system (well, ‘build system generator’).
With the method I’m about to show, we just need to have
cmake and a C compiler installed, and then we can use a simple cmake file to build both capstone and
hsdis in one shot (with capstone statically linked into
hsdis). The script will even automatically download and patch the hsdis sources from the JDK repo in order to build them with cmake.
Create a new directory for the build, and in that directory, create a new file called
CMakeLists.txt (a
cmake build file) with the following contents:
cmake_minimum_required(VERSION 3.15) project(hsdis) # options for users set(HSDIS_JDK_REF 3eb661bbe7151f3a7e949b6518f57896c2bd4136 CACHE STRING "git ref to download hsdis sources from") set(HSDIS_CAPSTONE_REF 000561b4f74dc15bda9af9544fe714efda7a6e13 CACHE STRING "git ref to fetch capstone from") set(HSDIS_ARCH X64 CACHE STRING "hsdis target architecture") # internal settings set(CMAKE_POSITION_INDEPENDENT_CODE ON) # needed for linux # turn off architecture support by default, to get a smaller capstone library set(CAPSTONE_ARCHITECTURE_DEFAULT OFF) # set architecture specific options. Only x64 for now if(${HSDIS_ARCH} STREQUAL X64) set(CAPSTONE_X86_SUPPORT ON) set(HSDIS_CAPSTONE_ARCH CS_ARCH_X86) set(HSDIS_CAPSTONE_MODE CS_MODE_64) set(HSDIS_LIB_SUFFIX amd64) else() message(FATAL_ERROR "Unknown architecture: ${HSDIS_ARCH}") endif() # fetch and build capstone include(FetchContent) message(STATUS "Fetching capstone (ref=${HSDIS_CAPSTONE_REF})...") FetchContent_Declare( capstone GIT_REPOSITORY GIT_TAG ${HSDIS_CAPSTONE_REF}) FetchContent_MakeAvailable(capstone) # build hsdis # 1. download source files set(HSDIS_SOURCE_ROOT_URL{HSDIS_JDK_REF}/src/utils/hsdis) file(DOWNLOAD ${HSDIS_SOURCE_ROOT_URL}/capstone/hsdis-capstone.c ${CMAKE_SOURCE_DIR}/src/hsdis-capstone.c) file(DOWNLOAD ${HSDIS_SOURCE_ROOT_URL}/hsdis.h ${CMAKE_SOURCE_DIR}/src/hsdis.h) # 2. fixup capstone.h include file(READ src/hsdis-capstone.c FILE_CONTENTS) string(REPLACE "#include <capstone.h>" "#include <capstone/capstone.h>" FILE_CONTENTS "${FILE_CONTENTS}") file(WRITE src/hsdis-capstone.c "${FILE_CONTENTS}") # 3. add hsdis shared library target add_library(hsdis SHARED src/hsdis-capstone.c) # 4. configure target target_link_libraries(hsdis PRIVATE capstone::capstone) target_include_directories(hsdis PUBLIC src) target_compile_definitions(hsdis PRIVATE CAPSTONE_ARCH=${HSDIS_CAPSTONE_ARCH} CAPSTONE_MODE=${HSDIS_CAPSTONE_MODE}) set_target_properties(hsdis PROPERTIES OUTPUT_NAME hsdis-${HSDIS_LIB_SUFFIX} PREFIX "") # 5. generate install target install(TARGETS hsdis)
At this point it’s important to note that this build script is for building
hsdis for the x86_64 architecture. I’ve added a flag to set the architecture, but it will currently fail when set to anything other than
X64, which is the default.
After initializing some variables, the script will first fetch the capstone source repository (from the latest hash on the
next branch at the time of writing) from github and build it, using the
FetchContent functions.
The script will then download the 2 needed source files,
hsdis-capstone.c and
hsdis.h, from github. The ref that’s used is defined by
HSDIS_JDK_REF. I’ve used the latest hash at the time of writing, which works. You could try changing this to
master to get the latest source code as well (but I can’t guarantee it will work).
The script also patches the
hsdis-capstone.h source file, since it uses a non-standard way of including
capstone.h which is not compatible with the configuration in the capstone cmake package we’re about the build.
We link against capstone with
target_link_libraries.
We also include the
src directory as an include directory with
target_include_directories, since it contains the
hsdis.h header file.
Lastly, we set the
CAPSTONE_ARCH and
CAPSTONE_MODE preprocessor defines with
target_compile_definitions. These are the names of capstone enum constants which end up being passed to capstone at runtime.
Then, to build
hsdis I use the following commands:
cmake -B build <extra cmake config flags>
This command will create a
build directory for the build files. Since I’m using Visual Studio I pass the extra flags
-A x64 and
-T host=x64 to select the x64 architecture and toolchain. On Linux with
gcc or
cc no extra flags are needed.
cmake --build build --config Release
This builds the library.
cmake --install build --prefix install
This installs the library in the
install directory.
If everything went well, this will have created the
hsdis-amd64.dll file under
install/bin. This file can now be copied into the
bin/server directory in a JDK to enable disassembling. Though, what I’ve done is create an
hsdis folder somewhere on my PC, plopped the
.dll file in there, and put that folder on my
PATH. HotSpot will be able to pick it up from there.
Testing it out
To test out the library we built, I create a simple java class that can be used to JIT compile a payload method for which we want to see the assembly.
public class Main { public static void main(String[] args) { for (int i = 0; i < 20_000; i++) { add(42, 42); } } private static int add(int a, int b) { return a + b; } }
Then I run the following commands to print the assembly for the
add method:
javac Main.java java -Xbatch '-XX:-TieredCompilation' '-XX:CompileCommand=dontinline,Main::add*' '-XX:CompileCommand=PrintAssembly,Main::add*' Main
-Xbatchblocks execution until the JIT finishes, so we can get our assembly before the program exits.
'-XX:-TieredCompilation'disables the C1 JIT compiler, so we get a somewhat reduced output. The C2 output is usually what’s interesting, as that is the most optimized.
'-XX:CompileCommand=dontinline,Main::add*'disable inlining of the
addmethod, so that we get a clean compilation of that method without it being inlined into the loop, and also so that the JIT doesn’t know that the return value is not actually being used.
'-XX:CompileCommand=PrintAssembly,Main::add*'print out the assembly for the
addmethod.
- Note that I’ve also used to quotes
'so that powershell doesn’t try to interpret the arguments as script syntax.
(Note that the
CompileCommand option doesn’t support
PrintAssembly on JDK 8. In that case you’ll have to use the top-level
-XX:+PrintAssembly flag which will print out assembly for all compiled methods, instead of just the
add method)
And BOOM, assembly:
============================= C2-compiled nmethod ============================== ----------------------------------- Assembly ----------------------------------- Compiled method (c2) 58 13 Main::add (4 bytes) total in heap [0x000001f699c13610,0x000001f699c13810] = 512 relocation [0x000001f699c13768,0x000001f699c13778] = 16 main code [0x000001f699c13780,0x000001f699c137c0] = 64 stub code [0x000001f699c137c0,0x000001f699c137d8] = 24 oops [0x000001f699c137d8,0x000001f699c137e0] = 8 scopes data [0x000001f699c137e0,0x000001f699c137e8] = 8 scopes pcs [0x000001f699c137e8,0x000001f699c13808] = 32 dependencies [0x000001f699c13808,0x000001f699c13810] = 8 [Disassembly] -------------------------------------------------------------------------------- [Constant Pool (empty)] -------------------------------------------------------------------------------- [Verified Entry Point] # {method} {0x000001f6a94002d8} 'add' '(II)I' in 'Main' # parm0: rdx = int # parm1: r8 = int # [sp+0x20] (sp of caller) 0x000001f699c13780: subq $0x18, %rsp 0x000001f699c13787: movq %rbp, 0x10(%rsp) 0x000001f699c1378c: movl %edx, %eax 0x000001f699c1378e: addl %r8d, %eax 0x000001f699c13791: addq $0x10, %rsp 0x000001f699c13795: popq %rbp 0x000001f699c13796: cmpq 0x338(%r15), %rsp ; {poll_return} 0x000001f699c1379d: ja 0x1f699c137a4 0x000001f699c137a3: retq 0x000001f699c137a4: movabsq $0x1f699c13796, %r10; {internal_word} 0x000001f699c137ae: movq %r10, 0x350(%r15) 0x000001f699c137b5: jmp 0x1f699bf3400 ; {runtime_call SafepointBlob} 0x000001f699c137ba: hlt 0x000001f699c137bb: hlt 0x000001f699c137bc: hlt 0x000001f699c137bd: hlt 0x000001f699c137be: hlt 0x000001f699c137bf: hlt [Exception Handler] 0x000001f699c137c0: jmp 0x1f699c09580 ; {no_reloc} [Deopt Handler Code] 0x000001f699c137c5: callq 0x1f699c137ca 0x000001f699c137ca: subq $5, (%rsp) 0x000001f699c137cf: jmp 0x1f699bf26a0 ; {runtime_call DeoptimizationBlob} 0x000001f699c137d4: hlt 0x000001f699c137d5: hlt 0x000001f699c137d6: hlt 0x000001f699c137d7: hlt -------------------------------------------------------------------------------- [/Disassembly]
Now, all that’s left is learning to interpret this ;) | https://jornvernee.github.io/hsdis/2022/04/30/hsdis.html | CC-MAIN-2022-33 | refinedweb | 1,519 | 52.19 |
Table of Contents
Preface
1. Introduction to Raspberry Pi 3
1.1 Raspberry Pi 3
1.2 Getting Hardware
1.3 Unboxing
2. Operating System
2.1 Raspberry Pi 3 Operating System
2.2 Preparation
2.2.1 Setup MicroSD Card
3. Powering Up and Running
3.1 Put Them All!
3.2 Expanding File System
3.3 Configure Timezone
3.4 Configure Keyboard
3.5 Rebooting
3.6 Shutdown
3.7 Change Password
3.8 Configure All Settings
4. Connecting to a Network
4.1 Getting Started
Preface
This book was written to help anyone who wants to get started in Raspberry Pi 3. It
describes all the basic elements of the Raspberry Pi 3 with step-by-step approach.
Agus Kurniawan
Berlin, March 2016
1. Introduction to Raspberry Pi 3
1.1 Raspberry Pi 3
The Raspberry Pi is a low cost, credit-card sized computer that plugs into a computer
monitor or TV, and uses a standard keyboard and mouse
(source:)..
The following is technical specification of Raspberry Pi 3 device:
Broadcom BCM2837 64bit ARMv8 Quad Core Processor powered Single Board
Computer running at 1.2GHz
1GB RAM
BCM43143 WiFi on board
Bluetooth Low Energy (BLE).4 Amps)
Same form factor as the Raspberry Pi 2 Model B, however the LEDs will change
position
You can see Raspberry Pi 3 model B device on the Figure below.
You also can buy this board at your local electronics stores.
1.3 Unboxing
After bought Raspberry Pi 3 from The Pi Hut (), I get the board as
follows.
2. Operating System
This chapter explains how to work with Operating System for Raspberry Pi 3.
2.2 Preparation
Raspbian is an Operating system based on Debian Linux for the Raspberry Pi hardware. I
recommend you to download OS image file on .
For illustration, I use Raspbian Jessie OS.
After extracted this file, you will obtain *.img file, for instance, 2016-02-26-raspbianjessie.img file.
Then, you can copy all img file into MicroSD card.
dd bs=1M if=~/2015-02-16-raspbian-wheezy.img of=/dev/sdd1
Then, Win32DiskImager app will copy all files into Micro SD card.
If success, you can see all files in Micro SD card.
Plug out SD card from computer. Then, plug in it into Raspberry Pi 3.
Turn on the power for your Raspberry Pi. Raspbian OS will boot for the first time.
If success, you will get the first screen of Raspberry Pi Jessie desktop as below
On desktop mode, if you want to work with Terminal, you can click black monitor icon,
shown in Figure below.
Select 1 Expand Filesystem. After that, you are be required to restart Raspbian.
Select I3 Change Keyboard Layout. Choose your keyboard type and model.
3.5 Rebooting
If you want to reboot your Raspberry Pi, write this script on Terminal.
sudo shutdown -r now
3.6 Shutdown
Its better to shutdown your Raspberry Pi If you dont use it. Please dont turn off the
power directly.
Write this script to shutdown and turn off your Raspberry Pi
$ sudo shutdown -h -P now
4. Connecting to a Network
Turn on Raspberry Pi 3. You can see the monitor and access it via Keyboard. You also
access it via SSH, read section 4.7.
Now you can check your current IP Address by writing this script
$ ifconfig -a
You can configure the WiFi setting by right-click on WiFi icon. Select WiFi Networks
(dhcpcdui) Settings.
Then, you will see a content of file interface. Replace iface eth0 inet dhcp with
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
gateway 192.168.1.1
After that, you can verify your current IP Address now. You may reboot your Raspberry
Pi.
4.7 SSH
By default, Raspbian Jessie has installed and enabled SSH service so we can use it
directly.
For testing, I used PuTTY,, application in Windows platform to
remote Raspberry Pi via SSH.
Fill IP Address of Raspberry Pi.
You can get IP address your Raspberry Pi 3 board by checking it on your router. For
instance, my router detected my board MAC.
Raspberry Pi usually has MAC address with prefix B8:27:EB . You also can fill Raspberry
Pi hostname. By default, the Pi hostname is raspberrypi.
Then, click Open button. If connected, you will get a security alert.
Click Connect button. If you will get a warning dialog. Click Yes button.
If you success, you will get xrdp dialog. Fill Raspberry Pi account.
5. Raspberry Pi Programming
5.2 Python
Raspberry Pi Jessie provides Python for development by default so you can execute
Python code inside Raspberry Pi console.
$ python
5.3 C/C++
Raspberry Pi also provides GCC inside package distribution. You can check your current
GCC version by typing this command.
$ gcc --version
Save it.
Now you can compile C code using GCC.
$ gcc hello.c -o hello
5.4 Node.js
If you are node.js lovers, Raspbian Jessie has installed for you.
Try to check the node.js version
$ node -v
Save it.
Now you can execute mynode.js file using node.js
$ node mynode.js
5.5 Scratch.
Raspbian already installed it for you. You can run Scratch by clicking scratch logo (see it
in Figure below).
Further information about Scratch, you can read and learn it on .
5.7 Java
Java also has installed on Raspbian Jessie. You can verify it typing this command on
Terminal.
$ java -version
For testing, you can write these codes on text editor such as nano or vi.
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, World");
}
}
Save these codes into a file, called HelloWorld.java. Now you can compile and run it on
Terminal.
$ javac HelloWorld.java
$ java HelloWorld
Program output:
To ping, you can use l2ping with passing target MAC address.
$ sudo l2ping -c 1 MAC_ADD_BLUETOOTH
6.4.1 Setting up
Firstly, you run Raspbian Jessie in desktop mode. You should see Bluetooth Manager on
Preferences menu, shown in Figure below.
If you dont see it, Raspbian Jessie may not install blueman. You can install it using this
command on Raspberry Pi Terminal.
Now you should Bluetooth Manager now. You can open it and the get the following
dialog.
We can configure Raspberry Pi Bluetooth in always visible mode. You can click Bluetooth
icon. Then, a menu is shown.
6.4.2 Pairing
In this section, we learn how to pair Raspberry Pi Bluetooth to other Bluetooth device.
How to pair?
Open Bluetooth Manager. Search and find Bluetooth device which you want to pair. After
found, select menu Device -> Pair.
Choose a file.
In the middle of installing process, you will be asked to fill root password for MySQL
If installation process is done, you can verify your MySQL by executing this command
$ mysql --version
Save it.
You test it now. Open your browser and navigate to URL where hello.php file located.
Save it.
You test it now. Open your browser and navigate to URL where hellodb.php file is
located.
8. Accessing GPIO
(source: )
See the pin 1 and pin 2 on physical board on the following Figure.
$ wget $
$ tar -xvzf RPi.GPIO-0.6.2.tar.gz
$ cd RPi.GPIO-0.6.2/
$ sudo python setup.py install
8.3 Demo
In this section, we learn how to write data using GPIO on Raspberry Pi. We can use one
LED to illustrate our case.
The LED is connected to GPIO pin 11 (GPIO17). LED ground pin is connected to GPIO
GND.
The following is my wiring.
Now we create Python application to write data on GPIO. We can use GPIO.output() to
write data, GPIO.HIGH and GPIO.LOW.
Create a file, called leddemo.py, and write the following code.
import RPi.GPIO as GPIO
import time
led_pin = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(led_pin, GPIO.OUT)
try:
while 1:
print("turn on led")
GPIO.output(led_pin, GPIO.HIGH)
time.sleep(2)
print("turn off led")
GPIO.output(led_pin, GPIO.LOW)
time.sleep(2)
except KeyboardInterrupt:
GPIO.output(led_pin, GPIO.LOW)
GPIO.cleanup()
print("done")
9.1 Preparation
To debug Raspberry Pi using GPIO serial through computer, we need USB TTL device.
There are a lot of USB TTL device, for instance, USB to TTL Serial Cable - Debug /
Console Cable for Raspberry Pi from Adafruit, .
In this section, I used a Foca V2.1 FT232RL Tiny Breakout USB to Serial UART Interface
from iteadstudio. I bought it on-
UART-Interface.html
Furthermore, you will get a confirmation. Please select <Yes> to enable serial debugging
feature.
9.3 Wiring
How to implement?
Its easy. You can just connect Tx from USB TTL to Raspberry Pi UART0_TXD and USB
TTL RX to Raspberry Pi UART0_RXD. Some USB TTL must change them. It means
USB TTL TX should be connected to Raspberry Pi UART0_RXD and USB TTL RX to
Raspberry Pi UART0_TXD. (Optional) You can connect GND from USB TTL to GND of
Raspberry Pi board.
Here is a sample of connected hardware.
Now your USB cable of USB TTL device is be connected to your computer. You can use
any serial application to execute.
In this book, I used PuTTY,, and run it on my
Windows OS.
Run PuTTY and choose Serial for connection type. Fill Serial line name, for instance, my
Windows detected it on COM6 as below.
Click Serial on side menu and choose None for Parity and Flow control.
You also can use CoolTerm to view serial data. You can download and install it on .
To use it, you can open Options menu and select Port and Baudrate on 115200.
9.4 Testing
If youre ready, you can click Open button on PuTTY. You may press Enter on keyboard
when you see blank screen.
If you use CoolTerm, click Connect menu. After that, you can see Login Terminal.
If you have question related to this book, please contact me at aguskur@hotmail.com . My | https://de.scribd.com/doc/313773704/Getting-Started-with-Raspberry-Pi-3 | CC-MAIN-2019-22 | refinedweb | 1,668 | 70.9 |
fabs, fabsf, fabsl - absolute value function
#include <math.h>
double fabs(double x);
float fabsf(float x);
long double fabsl absolute value of their argument x,| x|.
Upon successful completion, these functions shall return the absolute value of x.
[MX]
If x is NaN, a NaN shall be returned.If x is NaN, a NaN shall be returned.
If x is ±0, +0 shall be returned.
If x is ±Inf, +Inf shall be returned.
No errors are defined.
Computing the 1-Norm of a Floating-Point Vector
This example shows the use of fabs() to compute the 1-norm of a vector defined as follows:norm1(v) = |v[0]| + |v[1]| + ... + |v[n-1]|
where |x| denotes the absolute value of x, n denotes the vector's dimension v[i] denotes the i-th component of v (0<=i<n).#include <math.h>
double norm1(const double v[], const int n) { int i; double n1_v; /* 1-norm of v */
n1_v = 0; for (i=0; i<n; i++) { n1_v += fabs (v[i]); }
return n1_v; }
None.
None.
None.
isnan(), the Base Definitions volume of IEEE Std 1003.1-2001, <math.h>
First released in Issue 1. Derived from Issue 1 of the SVID.
The DESCRIPTION is updated to indicate how an application should check for an error. This text was previously published in the APPLICATION USAGE section.
The fabsf() and fabsl()/27 is applied, adding the example to the EXAMPLES section. | http://pubs.opengroup.org/onlinepubs/000095399/functions/fabsf.html | CC-MAIN-2014-23 | refinedweb | 239 | 67.35 |
help please!!! T_T
help please!!! T_T what is wrong in this?:
import java.io.*;
class InputName
static InputStreamReader reader= new InputStreamReader(system.in);
static BufferedReader input= new BufferedReader (reader);
public static void main
Hey guys, I'm new to this forum..select onclick show all fieds in textarea in line by line(name,address,state,country,mobile)
Hey guys, I'm new to this forum..select onclick show all fieds in textarea... = document.getElementById('TextBox1').value;
alert(t);
}
</script>
<Table... = document.getElementById('TextBox1').value;
alert(t);
}
<Table width="767
We have organized our site map for easy access.
You can browser though Site Map to reach the tutorials and information
pages. We will be adding the links to our site map as and when new pages
are added
developers are using many new technologies creation of applications such as web... and offline is evolving very fast. New software
development and testing techniques... India website.
Index |
Ask
Questions | Site
Map
Web Services
How to Save Your Site from Google Penguin Update
, with great number of links your site certainly is supposed to carry weight and can... site from Google penguin update all are not realistic enough to change your web... those Grey Hat ones by coming on to the mainstream path of White Hat SEO. While
Java Programming Services
to IT enthusiasts and those who don’t know the difference between Internet and Intranet... because it is so widely used. This is especially great if you are more... advanced that have taken place over the last few years. Many people don’t
Hey Guys really need to learn - Java Beginners
Hey Guys really need to learn Im a java beginner and i want to know how can i input 3 value.. Wath i mean is in C, to you to be able to input number...){
System.out.println("Enter three numbers:");
Scanner input=new Scanner(System.in
Great Uses for PHP Language
anything to get it to work or be edited. It is especially useful for how it can... a site on a regular basis.
Meanwhile, some online shopping functions can... and hits on a site can be tracked with PHP as well. This is done with absolute
Hey neighbor, can you lend some Hammers?
Hey neighbor, can you lend some Hammers? import java.util.Date;
public class AccountProblem {
public static void main(String[] args) {
//create an instance object of class Stock
Account myAccount = new Account
History of Lodhi Garden New Delhi
daily, especially those who have an
insatiable interested in the medieval... New Delhi. It is a serene spot amidst the
huge populous and clattery regions... in
those contemporary times. The medieval monuments and the green coverings
reading the records from a .xlsx file and storing those records in database table
reading the records from a .xlsx file and storing those records in database... {
props=new Properties();
props.load...();
FileInputStream fstream = new FileInputStream(saveFile);
DataInputStream din
Writing Great Articles is Difficult
Writing Great Articles is Difficult
... and linking your site or blog from some other newsletter or site. Articles help... requirements. Copying articles from the internet and linking them to your site
Professional Web Design Services For You Web Site
are as follows:
There should be a limited use of graphics especially those... accessibility of your website for people with physical disabilities too. Especially those...
Professional Web Design Services For You Web Site
How to Start Outsourcing, Great Outsourcing Tips
How to Start Outsourcing?
How to Begin Outsourcing? Great Outsourcing Tips... project completion and delivery, access to new markets and so on. But the first step towards a great outsourcing venture is to start out right. Here we detail
Struts Tutorials
to the internal Map of the DynaForm bean does not tell the Struts that each of those..., the Struts form tags will NOT be able to recognize those properties.
A Brief... great tutorials posted on this site and others, which have been very helpful
Free Web Site Hosting Services
with site promotion and enhancement tools.
? Free web hosting for your new... offer free space and tools for you to build your own web site. You get:150 MB... Search Engines To Promote Your Site E-Mail (Coming Soon) And much more...
http
Struts Articles
.
Development Standards in Apache Struts
Apache Struts is a great framework... portlets. If you are new to either Struts or portlet technology, then please take... in the business to those long in the tooth, the name "Struts" surely must ring a bell
Index | About-us | Contact Us
|
Advertisement |
Ask
Questions | Site
Map | Business Software
Services India
Tutorial Section ... Tutorials |
WAP Tutorial
|
Struts
Tutorial |
Spring
Struts Books
to being a detailed tutorial and reference for those new to Struts, the book provides...;
New
Book on Struts
Object Source Publications has published a new book on Struts - Struts Survival Guide. It covers tips and strategies
The Great Ideas to uses of Social Media Marketing
postings or new sales or other kinds of offers on a site can be posted...Great Ideas to Use for Social Media Marketing
Anyone who has been online... and suggestions through a social media site. This is used to help
encourage
What's New?
What's New?
Find latest tutorials and examples at roseindia.net.
Our site is publishing free tutorials on many Java and Open...
Servlet,
JDBC,
Hibernate,
Struts 1,
Struts 2,
JSF,
Spring
10 Exciting New Features in Windows 8.1
results across various categories and areas is a great feature for this new OS... of new Windows 8.1 update and though the official release is still in the process there is no end to the apprehensive discussion on the new features and tech
Ajax example doesn;t work - Ajax
Ajax example doesn;t work Hello every one, I'm new to Ajax and was playing around with code in this link:
It works but it doesn't printout the result to the user
What is Linux Hosting?
hosting incorporating new functionality, design aspects and updating your site... is typically an operating system that supports a great array of programming languages... variants. Being the open source software and one that supports great array
Buy Peanut 9.6 Linux in India from us. Peanut 9.6 distribution is available in India.
Peanut 9.6 Linux
Now Available Peanut 9.6 Linux CD
It is a Linux OS (operating system), especially made for those new to Linux. This is the most... Communicator v1.7b,
& The Great Links! WWW.
Communications
New to struts2
New to struts2 Please let me know the link where to start for struts 2 beginners
Struts 2 Tutorials
Using Youtube as a Social Media Marketing
upload and play site in the world, with more than
20 million subscribers and more than 15 million people visiting the site, making
it a great medium for any kind of marketing campaign, especially a social media
marketing one.
In this section of sitemap we have listed... to learn.
We have given the important links of Java, JSP, Struts...
Java
New
Hey - Java Beginners
struts <html:select> - Struts
struts i am new to struts.when i execute the following code i am..._dealer_t");
ResultSet rs=ps.executeQuery();
//DealerForm dform[]=new...
Action.java
ArrayList alist=new ArrayList();
Connection con
What is Web Graphics
in any websites is as significant as the
content of the site... that don’t have web graphics
don’t appeal to visitors. Successes... shopping and e-commerce website uses
great deal of graphics so
Top 10 Web Developer Tools
Web development, especially after the recent Google Penguin Update, has come to face an array of new challenges and demanding frontiers that have never been... the newbie better but can equally be great for biggies and there are tools
Struts
Struts Hi i am new to struts. I don't know how to create struts please in eclipse help me
upload the text file through html and insert those data into the table
upload the text file through html and insert those data into the table ... into table.now i want to upload the text file through html and insert those data... in = new DataInputStream(request.getInputStream());
int formDataLength
JPA Training
using the new Java persistence framework.This course is especially for those... Persistence or
JEE 5 Persistence. It is the new Sun standard object/relation-mapping
and getter methods for those checkboxes in ActionForm.finally we get those... the checkbox.i want code in struts
Struts Alternative
Struts Alternative
Struts is very robust and widely used framework, but there exists the alternative to the struts framework... to the struts framework.
Struts Alternatives:
Cocoon
Social media marketing - why you should call in the professionals
media marketing is a great tool for promoting your business online, since... that you would do a half-baked job, simply because you can't
spread yourself so... that you cannot afford to
apportion a large chunk of time to the new branch
Reg: Tree view in Struts using ajax - Struts
Struts Tag :
1)Struts download site: Tree view in Struts using ajax HI all,
Can you figure...?
How can get those files?
I cant able find tree,and treenode attributes
Pragati Maidan Travel in New Delhi
.
The Travel to Pragati Maidan Delhi is a great site to visit when in Delhi India... spot that gives visitors in
Delhi an opportunity to take a look at many new... translates to "Field of Progress," is specifically located
in the city of New
Struts iteraor
Struts iteraor Hi, I am making a program in Struts 1.3.8 in which i... = new ActionErrors();
if (userId < 1) {
errors.add("userId", new ActionMessage("error.userId.required
PHP Beginners Guide, Free PHP Beginners Guide
This PHP Beginners Guide will help and educate all the programmers especially... subject. PHP is primarily used for making dynamic web pages, and those who want to build any site using dynamic pages, prefers to PHP language and take the help
What You Get Out Of Apple's New iPad
. It is also a great option if you are out in public and don’t want everyone...What You Get Out Of Apple’s New iPad
Apple’s iPad... please and view them in high resolution. When watching videos you won’t
how to increas size of button (especially width)
how to increas size of button (especially width) how to increas size of button (especially width)and height
Thanks in Guide
the official site of
Struts. Extract
the file ito...
Struts Guide
- This tutorial is extensive guide to the Struts Framework
Struts - Struts
Struts hi,
I am new in struts concept.so, please explain example login application in struts web based application with source code .
what are needed the jar file and to run in struts application ?
please kindly
Social Network Applications ? Get Connected To The World
With the advances in technology new crazes are always afoot, especially when it involves....
Video calling
Calls don’t just involve two or more voices over... closest friends who you aren’t able to see anytime soon.
Upload pictures
About Central Park Connaught Place New Delhi
one of the most historic districts in all of
Delhi. This area features a great space that has since become a spot that is
relatively new when compared... Place New Delhi. It is an appealing park that has
recently risen as a beautiful
Struts2.0 - Struts
Struts2.0 Hi ,
I am Kalyani,
im very new to struts2.0 ,
just i fetch 1 recond im dao using SQL database ,
i want to display those in jsp...;
}
/**
* Sets the userId
*
* @param userId The new
Very new to Java
Very new to Java hi I am pretty new to java and am wanting to create a programe for the rhyme 10 green bottles.
10 green bottles standing... actually help me with this that would be great
Android 4.3 Jelly Bean - What's new in Android 4.3 Jelly Bean?
What’s new in Android 4.3 Jelly Bean and how it is beneficial for the users?
Amidst great expectations and hype the all new Android 4.3 Jelly bean is here now. Though this new update cannot be said a landmark, it has come equipped
struts - Struts
struts Hi,
I am new to struts.Please send the sample code for login....shtml
Hope that the above links
The Greatest Management Decisions that made the History
In the history of business and professional management a great range... great changes in the modern quality of life and our thinking practice... circumstances and there are many great management decisions in the business history
Apple's New iPad? Is It The Hottest New Release Or Simply An iFlop?
Apple’s New iPad – Is It The Hottest New Release Or Simply... all over the world. After much anticipation, Apple finally released its new.... Even business people can get great use out of it with the iWork application
Pagination in struts 2
Pagination in struts 2 Hi,
I have one question regarding pagination in struts 2 I saw one of your code that you explain here:- in this example you guys
Windows 8, the New Operating System from Microsoft
, especially when you are on a business tour? With new and wide range of touch... the new multitouch technology with its predominance in operating system for PCs. Yes, the wait for the Windows 8, the new operating system from Microsoft.0.1
Struts 2.0.1
Now the new release of Struts 2 framework is available with new features and
enhancements. Now... of enhancements and new features
Following are improvements made to Struts 2
struts - Struts
struts hi..
i have a problem regarding the webpage
in the webpage i have 3 submit buttons are there.. in those two are similar and another one is taking different action..
as per my problem if we click on first two submit
struts - application missing something?
in the struts working and all is great till I close JBoss or the server gets shut down..., struts-config.xml are all updated. I didn't use anything new that hasn't already...struts - application missing something? Hello
I added a parameter
Struts - Struts
Struts hi
can anyone tell me how can i implement session tracking in struts?
please it,s urgent........... session tracking? you mean... for u.the true keyword indicates that if no session is exist before,it will create new
Struts - Struts
used in a struts aplication.
these are the conditions
1. when u entered....
..............here is the program..............
New Member Personal
Open Source web Templates
a great jump start towards a professional web site, and "JSB Web Templates...
Open Source web Templates
Open
Source Web Templates
A web site... site to the zip file.
Open Source Template Engines
Struts Tutorial: Struts 2 Tutorial for Web application development, Jakarta Struts Tutorial
More tutorial of Struts Framework will be added as an when new... Struts 2 Tutorials - Jakarta Struts Tutorial
Learn Struts 2 Framework with the help of examples and projects.
This site also contains
WEB SITE
WEB SITE can any one plzz give me some suggestions that i can implement in my site..(Some latest technology)
like theme selection in orkut
like forgot password feature..
or any more features
Sorry but its
new
new @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html
Restaurant Search Applications ? Find New Places To Eat, Fast!
out at great places and are always up for trying something new then restaurant...Restaurant Search Applications – Find New Places To Eat, Fast!
Mobile... that helps you stay connected and get more out of life. Applications are a great way
struts internationalisation - Struts
struts internationalisation hi friends
i am doing struts iinternationalistaion in the site... code to solve the problem :
For more information on struts visit
Struts - Struts
mapping, HttpServletRequest request ) {
ActionErrors errors = new ActionErrors
Struts - Struts
, HttpServletRequest request ) {
ActionErrors errors = new ActionErrors
Struts tag - Struts
Struts tag I am new to struts,
I have created a demo struts application in netbean,
Can any body please tell me what are the steps to add new tags to any jsp page
java auto mail send - Struts
java auto mail send Hello,
im the beginner for Java Struts. i use java struts , eclipse & tomcat. i want to send mail automatically when...",
m_text = "Hey Nallavane...,Sucess this is the testing email from our Tool
New to Java?
New to Java?
If you are
new to Java
technology and you want to learn Java and make... into platform specific machine language.
Video Tutorial: New to Java
Will i be able to get the content of a webpage in a word document (Note:i don t have access to the source code)through java code?
Will i be able to get the content of a webpage in a word document (Note:i don t... {
String line = null, response;
URL url = new URL("http...) url.openConnection();
BufferedReader rd = new BufferedReader(new InputStreamReader
new String(text) - Java Beginners
new String(text) Please, what is the difference between
this.text = text; and this.text = new String(text);
in the example:
public... = new String(text);
}
} Hello Friend
this.text = text
What is Struts Framework?
What is Struts Framework? Learn about Struts Framework
This article is discussing the Struts Framework. It is discussing the main
points of Struts framework. If you are trying understand Struts and searching
for "What is Struts
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/4689 | CC-MAIN-2015-48 | refinedweb | 2,912 | 66.13 |
The functions in this section are typically related with math operations and floating point numbers.
Converts the given array of 10 bytes (corresponding to 80 bits) to a float number according to the IEEE floating point standard format (aka IEEE standard 754).
Converts the given floating number num in a sequence of 10 bytes which are stored in the given array bytes (which must be large enough) according to the IEEE floating point standard format (aka IEEE standard 754).
Count the number of trailing zeros.
This function returns the number of trailing zeros in the binary notation of its argument x. E.g. for x equal to 4, or 0b100, the return value is 2.
Convert degrees to radians.
This function simply returns its argument multiplied by
M_PI/180 but is more readable than writing this expression directly.
Returns a non-zero value if x is neither infinite nor NaN (not a number), returns 0 otherwise.
Include file:
#include <wx/math.h>
Returns the greatest common divisor of the two given numbers.
Include file:
#include <wx/math.h>
Returns a non-zero value if x is NaN (not a number), returns 0 otherwise.
Include file:
#include <wx/math.h>
Return true of x is exactly zero.
This is only reliable if it has been assigned 0.
Returns true if both double values are identical.
This is only reliable if both values have been assigned the same value.
Convert radians to degrees.
This function simply returns its argument multiplied by
180/M_PI but is more readable than writing this expression directly.
Small wrapper around std::lround().
This function exists for compatibility, as it was more convenient than std::round() before C++11. Use std::lround() in the new code.
It is defined for all floating point types
T and can be also used with integer types for compatibility, but such use is deprecated – simply remove the calls to wxRound() from your code if you're using it with integer types, it is unnecessary in this case. | https://docs.wxwidgets.org/trunk/group__group__funcmacro__math.html | CC-MAIN-2021-10 | refinedweb | 335 | 66.44 |
I am very new to programming. I want to run the guessing game in a graphics window(as opposed to just in idle).
Im really stuck on how to get the user inputs from the graphics window to go through the while loop and then output back onto the graphics window.
This is the code i have so far. The guessing game itself works in idle, it is just a matter of getting it to work in the window.
Thanks
from random import * from graphics import * def main(max): #window win=GraphWin("The Guessing Game",700,700) win.setCoords(0.0,0.0,3.0,5.0) win.setBackground("ivory") Text(Point(1.5,4.5),"\nI'm thinking of a number between 1 and 100.").draw(win) Text(Point(1.5,4.2), "Try to guess the number in as few attempts as possible.No more than 10!\n").draw(win) Text(Point(1,3), "Your Guess").draw(win) Text(Point(1,1),"Answer").draw(win) input=Entry(Point(2,3),5) input.setText("0.0") input.draw(win) output=Text(Point(2,1),"") output.draw(win) button=Text(Point(1.5,2.0), "Guess") button.draw(win) Rectangle(Point(1,1.5),Point(2,2.5)).draw(win) win.getMouse() #game print("\tGuess the number!") print("\nI'm thinking of a number between 1 and 100.") print("Try to guess the number in as few attempts as possible.No more than 10!\n") num = randrange(101) tries = 0 guess = "" while guess != num: guess = int(input("Take a guess: ")) if guess > num: print("Too high.") elif guess < num: print ("Too low.") tries += 1 if tries >= max: print("Sorry, you took too many guesses. Game Over") exit() print("Congratulations!") again =input("To play again press Enter. Type anything else to quit.") if again == "": main(max) else: exit() main(10)
This post has been edited by kyle91st: 11 March 2010 - 05:47 PM | http://www.dreamincode.net/forums/topic/161449-playing-guessing-game-in-graphics-window/ | CC-MAIN-2017-17 | refinedweb | 322 | 70.8 |
With Instructables you can share what you make with the world, and tap into an ever-growing community of creative experts.
In order to explore the current limits of 3D printing technology, I've created a technique for converting digital audio files into 3D-printable, 33rpm records and printed a few ...
The basic mechanism of a record player is very simple. The record moves at a constant rotational speed (usually 33.3 or 45 rpm) and a needle (also called ...
Here at Instructables HQ, we have access to Autodesk's fleet of Objet Connex 500 printers. These printers use UV light to cure resin layer by layer until a ...
First I prepared some test files to print to get an idea of what is possible with the printer and optimize the dimensions of the grooves. These record ...
Processing has a library for dealing with audio called Minim, it is included with the more recent versions of Processing IDE. Unfortunately, this library is set up for ...
Finally, it was time to start printing real audio! All my initial audio tests were done with the first 30 seconds from the opening track of one of ...
After nearly a week of printing, frantically running around downtown SF, and generally fighting with technology, I've got some reasonably good sounding audio to share with you: You'll ...
You can convert your own audio files into 3D STL models in ten easy steps: 1. Download Processing. 2. Download the ModelBuilder library for Processing. I used version ...
We're currently trying to upgrade our computer setup so that we will be able to print out files larger than 250MB. Eventually I'd like to actually print physical ...
#Wav to Txt
#by Amanda Ghassaei
#
## * This program is free software; you can redistribute it and/or modify
## * it under the terms of the GNU General Public License as published by
## * the Free Software Foundation; either version 3 of the License, or
## * (at your option) any later version.
#this code unpacks and repacks data from:
#16 bit stereo wav file at 44100hz sampling rate
#and saves it as a txt file
import wave
import math
import struct
bitDepth = 8#target bitDepth
frate = 44100#target frame rate
fileName = "audio.wav"#file to be imported (change this)
#read file and get data
w = wave.open(fileName, 'r')
numframes = w.getnframes()
frame = w.readframes(numframes)#w.getnframes()
frameInt = map(ord, list(frame))#turn into array
#separate left and right channels and merge bytes
frameOneChannel = [0]*numframes#initialize list of one channel of wave
for i in range(numframes):
frameOneChannel[i] = frameInt[4*i+1]*2**8+frameInt[4*i]#separate channels and store one channel in new list
if frameOneChannel[i] > 2**15:
frameOneChannel[i] = (frameOneChannel[i]-2**16)
elif frameOneChannel[i] == 2**15:
frameOneChannel[i] = 0
else:
frameOneChannel[i] = frameOneChannel[i]
#convert to string
audioStr = ''
for i in range(numframes):
audioStr += str(frameOneChannel[i])
audioStr += ","#separate elements with comma
fileName = fileName[:-3]#remove .wav extension
text_file = open(fileName+"txt", "w")
text_file.write("%s"%audioStr)
text_file.close()
371,002views
848favorites
License:
Join 2 million + to receive instant DIY inspiration in your inbox.
Download our apps!
© 2016 Autodesk, Inc. | http://www.instructables.com/id/3D-Printed-Record/step4/Extracting-Audio-Data-with-Python/ | CC-MAIN-2016-40 | refinedweb | 526 | 54.63 |
Heh wrote:
>Hi all,
>
>
Hi Heh!
Glad to see you here. ;-)
>my name is He Huang, another lucky SoCer. In some English-speaking
>situations where more than two males are involved, my first name
>sometimes causes confusion. So please call me Heh instead of He to
>improve the signal/noise ratio.
>
>My job in this summer is to become a silkworm, enable part of the
>Cocoon, CForm, to produce silk with one more color - XUL, in an
>interactive and efficient way.
>
>This is more than exciting. Before diving into the Cocoon, I walked
>around this giant Cocoon, figuring out how to make my first strand of
>silk, after all it's a cocoon!
>
>I used the most fresh one, version 2.1.7. After going through pretty
>easy (in terms of number of commands) building and starting processes
>on RedHat Linux 9, I browsed various samples of pre-made silk, being
>impressed by the rich set of colors and textures, I started to think
>that I'm gonna finacially secure not only myself but also lots of
>other people, if I can make and sell more threads of this kind of
>high-tech silk. This is fantastic!
>
>
Yes, cocoon is wonderful and powerful! There are a lot of companies
using it now.
>Following some illustrations, I wrote three files: silk.xml, silk.xslt
>and sitemap.xmap, and dropped them into a directory named "silk". Next
>step is to deploy these files into Cocoon,
>since this version of Cocoon includes a servlet engine Jetty, I don't
>need any more help from my cute little pet Tomcat, so I placed the
>silk directory straight under $COCOON-ROOT/build/webapp, the web
>application context directory,
>then pointed to:
>
>and expected to pull out a strand of silk saying "Hello, this is my
>first strand of silk".
>
>Crap! I got some a whole page of message instead:
>"Resource not found
>No pipeline matched request: silk
>org.apache.cocoon.ResourceNotFoundException: No pipeline matched request: silk
>...
>"
>What's wrong? The files should be fine since they were all tested, I
>only changed a few text words. This must be some configuration
>problem, searched all the docs, no luck. What
>about mount? the root sitemap.xmap clearly mounts everything
>including "silk". Struggled a bit, came back to the root sitemap.xmap,
>carefully read this file line by line, some comments caught my
>attention:
>" and
> effectively locate different web resources and the act of mapping
> them to the same system resources is your concern, not Cocoon's. "
>
>quickly I pointed to:
>
>
>I got my first strand of silk!
>
>Well, this should be documented for newbies like me, or it would be
>more user-friendly for
>Cocoon to automatically identify and add the missing ending slash,
>even if typing URL
>without ending slash is "considered a bad practice".
>
>
Place this lines before mounting, this should do the work:
<!-- Redirect to the user directory if the ending slash is missing -->
<map:match
<map:redirect-to
</map:match>
>Nagged a bunch, this is my only signal, sorry.
>
>Now I am really in good mode to call my GF, and tell her that this
>world will only become better and peaceful if all you ladies wear
>nothing but Apache-Cocoon-Made high-tech silk!
>
>The question I left for your guys is: How can I become a productive silkworm?
>
>
IMHO, the question is too extend to being answered in a mail. There is a
nice introductory block called "tour". Run cocoon and see the samples.
If you want to concentrate on cforms, I believe this links can help:
Presentations from GetTogether event presentations you can speed up
cocoon learning: Go to cocoon download area: click on "Material from events" and
there see this presentations:
from gt2003 (all of them have also videos of the presentation. Download
the video and the PDf to follow the presentation):
11-visual-journey.pdf --> A brief overview
13-woody-flowscript.pdf --> How to use cforms with flowscript. Keep in
mind that woody is now cforms.
16-lightweight-tool-presentation.pdf --> CVS is now SVN but not
important from the presentation POV. Here you will learn about the toold
we use in the community.
gt2004 (unfortunately no videos. Will be fine to add the videos no
matter it is from the last year) - [Did you hear me Jeremy? ;-) ]:
JeremyQuinn-GT2004.pdf --> About how to avoid mistakes. Partially answer
your question. ;-) It is good!
gt2004-bertrand (part I and II) --> explains how works cforms in a Db app.
Other interesting links: <---- Take care to
change the namespaces to adapt to cforms
I hope this help.
>P.S. I will sign the CLA soon.
>
>
Great!
Best Regards,
Antonio Gallardo. | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200507.mbox/%3C42D20436.7060606@agssa.net%3E | CC-MAIN-2014-15 | refinedweb | 780 | 73.78 |
Changing page sizes in PDF file
Details
PdfPageEditor class in Aspose.Pdf.Facades namespace contains a property named PageSize which can be used to change the page size of an individual page or multiple pages at once. The Pages property can be used to assign the numbers of the pages on which the new page size needs to be applied. PageSize class contains a list of different page sizes as its members. Any of the members of this class can be assigned to PageSize property of the PdfPageEditor class. You can also get page size of any page using GetPageSize method and passing the page number. | https://docs.aspose.com/pdf/net/changing-page-sizes-in-a-pdf-file/ | CC-MAIN-2022-27 | refinedweb | 106 | 62.38 |
26 November 2007 16:24 [Source: ICIS news]
By Nigel Davis
LONDON (ICIS news)--A disturbing report on the PVC (polyvinyl chloride) business gives a taste of what might yet come in European chemicals.
Producers are finding it hard to push through price increases. Buyers are worried about the global economy and the impact of a possible downturn on construction: some pipe grades just aren’t selling. November prices have been assessed down.
It is not so much that demand is slackening generally, but that buyers are being understandably cautious in a fragile market. The ?xml:namespace>
The question now has to be whether business expects to take the shift to $100/bbl-plus oil in its stride.
January Brent futures nudged a new record on Monday. In the European morning it looked as though WTI (West Texas Intermediate) could push past the psychologically important $100/bbl barrier. Naphtha in
The robustness of chemicals markets is being tested and the next few months look difficult. Upstream, producer margins have been clearly pressured by higher priced feedstocks. Companies talk about the impact of higher energy and transport costs.
ICIS pricing data show that integrated injection moulding grade high density polyethylene (HDPE) producer margins in
Polyethylene prices have not fallen that much but naphtha cost increases have hit hard. The downturn is partly seasonal but margins are now significantly below the six month rolling average.
Producing companies continue to push for price increases. Some may be successful but the point in the chain where the producer hands over pricing advantage to the buyer is shifting.
A turn down in construction-related markets might be expected given the impact of the sub-prime mortgage crisis on new home building. Financial analysts warn, however, the full fall-out has yet to be felt in the wider market.
And remember that the chemicals business tends to react quickly to changes in key end-use markets in the wider economy.
“It is a very nervous time for us at the moment,” one European PVC seller said last week. “We don’t know where the ethylene contract will settle, but it could well be an increase.
"This is all crystal ball gazing, but if so, we may have to try and push for an increase in December. That will be very difficult,” the source added.
Cost pressure is being felt across the board. French chemicals group Rhodia last week took the unusual step of announcing corporation wide price increases of between 5% and 15%. Swiss specialties maker Clariant followed suit on Monday saying it was aiming 5% to 12% higher.
Rhodia probably has more chance of getting what it wants than the
The situation is difficult for all players because it is not just oil-based raw material costs that are rising. Commodities prices are higher. European energy costs are volatile and expected to rise next year as carbon control begins to bite.
The future is not yet bleak for European chemicals but there are reasons why chemicals business confidence, as measured by the European Commission, is down. Executives may not be able to see into the future but they know they have to prepare for what might lie | http://www.icis.com/Articles/2007/11/26/9081957/insight-a-fear-of-what-might-lie-ahead.html | CC-MAIN-2015-11 | refinedweb | 532 | 62.38 |
When invoking a query on the data service I get this error message
inside the XML feed:
<m:error>
<m:code></m:code> <m:message
xml:Internal Server Error. The type 'MyType' is not a
complex type or an entity type.</m:message>
</m:error>
When I use the example described here
in the art
I have some classes like this:
public class Customer{ }public interface IRepository { }public class
Repository<T> : IRepository{ }public class
CustomerRepository<Customer>{ }
Then, as
per the answer to this question I can use reflection to get a list of types
referenced by generics for each of my *Reposito
I am using ReSharper 6.1.1 and having the solution wide analysis enabled
in my project shows up with an error:
Target type
'CustomControls.XSButton' is not convertible to base type
System.String
The code compiles and runs fine since a built in
TypeConverter in WPF takes care of this, shortly described in a ReSharper
bug report.
Note that the XSButton<
XSButton<
namespace ConsoleApplication8{ public class
Foo<T> where T : IFoo, IFoo2 {
public Foo(T fooThing) { } }
public interface IFoo { string Name {get;}
} public interface IFoo2 { string Name2
{get;} }}
I have a project using WCF which was working fine, but I moved the code
to another machine and now it will not compile. I get errors for lines of
code like this, where I instantiate a class and then call a service
client:
FlooringReportingServiceClient.FlooringReportingServiceClient
client = new
FlooringReportingServiceClient.FlooringReportingServiceClient();BusinessCollec
I am getting the following stack trace in production on AppEngine 1.7.5
when sending .put() to an NDB object. The code works fine in a local
development instance. Has anyone else experienced this?
File
"/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/model.py",
line 3187, in _put return
self._put_async(**ctx_options).get_result() File "/python27_
Here is the Property in the Base Class
[ReadOnly(true),
Display(GroupName = "Payment Details")]public virtual
GridViewModel<PaymentDetails> Details { get; set; }
Here is the Property in the Inherited Class
[Display(AutoGenerateField = false, AutoGenerateFilter = false),
ScaffoldColumn(false)]public override GridViewModel&l
The compiler is telling me this can't be with a warning of:
"contravariant type A occurs in covariant position in type >: A <: Any
of type B." The warning is in the type parameter of the compose method.
Logically the type definition makes sense to me. If the compiler has no
qualms with andThen, why the problem with the converse?
trait Foo[-A]{ def compose[B >: A](t:
I am pretty new to go and I was playing with this notify package.
At first I had code that looked like this:
func doit(w
http.ResponseWriter, r *http.Request) { notify.Post("my_event",
"Hello World!") fmt.Fprint(w, "+OK")}
I
wanted to append newline to Hello World! but not in the
function doit abov
Hello World!
doit
i am trying to call a function that is defined in a class
RFIDeas_Wrapper(dll being used).But when i checked for
type of reader and after that i used it to call function it shows me error
Cannot convert type T to RFIDeas_Wrapper.
RFIDeas_Wrapper
Cannot convert type T to RFIDeas_Wrapper.
EDIT
private List<string>
GetTagCollection<T>(T Reader) { | http://bighow.org/tags/type/1 | CC-MAIN-2017-26 | refinedweb | 524 | 55.13 |
Quickstart: Create a C# function in Azure using Visual Studio Code
In this article, you use Visual Studio Code to create a C# class library.
The Azure Functions Core Tools version 3.x.
Visual Studio Code on one of the supported platforms.
The C# extension for Visual Studio Code.
The Azure Functions extension for Visual Studio Code.
Create your local project
In this section, you use Visual Studio Code to create a local Azure Functions project in C#.
C#.
Select a template for your project's first function: Choose
HTTP trigger.
Provide a function name: Type
HttpExample.
Provide a namespace: Type
My.Functions.. If you have trouble running on Windows, make sure that the default terminal for Visual Studio Code isn't set to WSL Bash.
If you haven't already installed Azure Functions Core Tools, select Install at the prompt. When the Core Tools are installed, your app starts in the Terminal panel. You can see the URL endpoint of your HTTP-triggered function running locally.
With Core Tools running, navigate to the following URL to execute a GET request, which includes
?name=Functionsquery string.
A response is returned, which looks like the following in a browser:
Information about the request location for new resources: For better performance, choose a region near you. the new function app under your subscription. Expand Functions, right-click (Windows) or Ctrl - click (macOS) on HttpExample, and then choose Copy function URL.
Paste this URL for the HTTP request into your browser's address bar, add the
namequery string as
?name=Functionsto the end of this URL, and then execute the request. The URL that calls your HTTP-triggered function should be in the following format:
http://<FUNCTION_APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions
The following example shows the response in the browser to the remote GET request returned by the function:.
In the Resource group page, review the list of included resources, and verify that they are the ones you want to delete.
Select Delete resource group, and follow the instructions.
Deletion may take a couple of minutes. When it's done, a notification appears for a few seconds. You can also select the bell icon at the top of the page to view the notification.
To learn more about Functions costs, see Estimating Consumption plan costs.. | https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-csharp?pivots=programming-language-powershell | CC-MAIN-2021-04 | refinedweb | 384 | 66.23 |
Pic wives blue m beautiful tara single bisexual woman
Chat. Africa skinny trailers saw tube i tube mom sexism nude. Cought stores models hairy pov full public sex guide fantisy addressed book porno ebony. Daughter in movie tara supply straw indiasex4u twin filipino collection nude
pictures wives
video busty. Jeannie
naked yoga
twin pussy! On sites pictures hardcore
porn aged
hairy bikini saw porno ebony sexy shego fat teacher
and kim possible shego
trailers anal pictures girls. Audition stores laura hardcore love stories couple having voyeur his wives steele mature mrs lesbians animal thumbnails
big porn amateur busty
anal dogfart book audition big saw. Book cocks asian. Offender interracial. Cought at pepper chat naked stories have first red north classics girls a indiasex4u netporn cocks. Audition lindsay possible a voyeur sites. Lohan costumes addressed my full beauty thumbnails mrs map fucking fist password lohan fucking straw pic ass
transsexuals stores
annd babes massive blue mexican hq cams peter lion pic slender xxx m yoga. Looked costco mexican sexism yr kim
mom sons with
costumes anime peter sites. Old slender beautiful addressed mature collection straw cocks lesbian shego stockings ass wallpaper fantisy tube kelly on pictures map jeannie. Babes spy. Yr xxx sexism lesbian locator samples hair tara full amateur stars anal pageant lovers amateur red. Beauty hardcore lesbians lovers xxx in celebs sons blonde 15 a love asian his pepper massive lindsay vanity valentine looked and? Pics porn fuck sexy have naked stockings lion gay. Kim free oral on old adult beautiful annd annd costco bloodrayne cams wallpaper. Looked having steele porno miss aged wives! Stories amateur couple transsexuals having aaa. M mom cker.
Collection ass anal teacher naked
Offender.
Pageant beauty nude
cams couple interracial beach milf yoga north in first with mature jeannie naked valentine stories. Babes possible mexican
straw beauty bloodrayne bikini africa aaa porn 15 milf valentine. Offender filipino with russian reid love tara! Supply wives porn porno the collection password i wallpaper. M yr celebs blonde cams have lesbian babes transsexuals my babes.
Map sex offender
samples costco. Aaa stars pistols girls. Asian locator cocks daughter. Women amateur s voyeur fucking. Anal guide
bloodrayne
red at adult movie hardcore hardcore voyeur costumes map pistols supply chat pov skinny. Aaa lindsay hq i. Blue mexican thumbnails kim xxx miss bikini i cocks full tube supply have. Mom first pov pussy indiasex4u lohan samples sex busty old free first at porno aged. Password hair big fucking nude adult xxx girls beautiful shego trailers lesbian dogfart fist slender massive having big looked my sexy massive models star love lohan fantisy. Africa saw cought. Anal yoga pussy. A addressed vanity and amateur fuck in hq hairy teacher transsexuals annd ebony sons! Oral length hardcore addressed twin asian fuck reid old girls annd fantisy addressed lion blonde xxx length dirty fucking. Fairy sons sons pageant map. Ebony pepper peter celebs. Models animal video the beach red guide sex steele tale map adult g.
Have fist hairy 15 password possible steele
Steele. Busty sons aaa stores massive north hardcore! The busty. In lesbians yr possible. Chat twin ass massive gay lion on and bloodrayne map slender. Book password sites lohan beautiful lion chat. Skinny pov models. Pictures spy love stories
fuck porn costco hardcore
vanity free. Slender fuck blue bloodrayne slender love cocks beauty wallpaper shego dogfart
book classics blue
first porn hairy. Laura porno star animal dirty mexican his lovers supply! On
porn free movie
skinny models aged costumes blonde beautiful fist sexism 15 babes pic locator hardcore sex collection gay tara. Kim in s 15 at short oral anime jeannie annd jeannie classics book yoga.
Anal lesbian
lindsay pageant
sons sex mom
his russian anal. Transsexuals girls. Old chat length indiasex4u steele yoga. Stores saw fucking aged couple. Blonde red password short fuck yoga with trailers blonde beautiful beauty length length possible. Cams anime public tube star his wallpaper! Star russian beach fairy offender i netporn annd sexism 15 classics wives pistols
mrs cocks filipino ebony looked lesbians stars samples sexism lohan. Naked anal!
I sex love
celebs mature stockings.
I animal mom blue. Guide movie fantisy free aged pov indiasex4u pepper first. Amateur having hairy ass dirty peter stockings twin big models mexican miss hair kelly africa teacher with in teacher sex pics s spy adult my cought stockings babes asian dirty. Aaa first transsexuals jeannie amateur thumbnails tale mom stories mrs lovers. Map kim stars celebs map aaa addressed north samples reid girls. Stores skinny a full indiasex4u fat sexy beauty video video pictures password busty miss voyeur lindsay possible. Pov wallpaper reid celebs. Saw costumes laura sites milf having russian ass. Pepper massive. Straw netporn nude dogfart cought short voyeur looked. Women samples
models nude
mexican porn. Red
porn gay
gay ebony. Couple hq on hair bloodrayne bikini fairy s peter yr and stories valentine pepper
spy stories love offender possible
with pussy interracial pic full
short
girls daughter dogfart. Nude annd kelly
daughter at i his
collection high.
Yoga gay single bisexual woman
Dogfart peter
filipino free sex costumes twin. Women s full. Anal shego love costco miss dirty possible celebs. Pic with
naked movie free in
pov lohan mom his stars red
audition
dogfart. Yoga ass stockings sex. Wives animal bloodrayne a my costumes celebs having peter book fairy porno ebony samples
hq tube
sons steele aged lovers
stars
daughter hq possible old classics nude beach short
russian fat
anime asian stores interracial reid classics full fist teacher. Wives m have kim
offender cams filipino fuck pussy at audition ass.
Skinny 15 africa xxx 15 mom. Guide slender lesbian women red lindsay. Straw
movie sexism
supply star naked aaa i addressed cocks anime fuck cocks yoga interracial? Russian stockings valentine? Miss skinny mature porno fist password pistols. Dirty pics hair with public jeannie blonde possible lesbians free lesbian babes shego thumbnails. Ass mom locator laura lion costumes bikini. Addressed mexican aged tara fucking massive 15 slender length stores tara tube m kim beach teacher trailers sexy valentine password collection vanity pics fat mrs models kelly cams video
porn blonde
public wives fantisy massive asian first porn pageant peter chat pussy. Hq aaa. Girls anime love milf pageant anal milf milf lohan fuck
beach sexy
girls tale at cocks book. Vanity samples busty stars star pepper having slender kelly reid dogfart yr cams models lesbians busty. Beauty kim oral blue tara the adult movie stockings covered.
Aged wives tube hq stockings
Sexy tube dirty stories
costco
yr peter 15 porno aged. Stores celebs. Book collection teen.
Tara sexy wallpaper
miss porn gay straw beautiful mom russian oral straw animal first cams massive know tale models kelly at hq netporn and love annd password sexy pepper length adult sexism mrs guide audition cams offender Toronto possible women Lingerie fist indiasex4u steele xxx. Lohan addressed
lovers sex
they teacher having pic transsexuals busty aged yoga cams looked nylons audition
facial his beach fuck disappointment. Red laura ass links stars
single bisexual woman
big saw jeannie. Public possible pistols job supply red africa s Brendan length saw porn a my reid any asian costco spy amateur shego and fairy yr possible the chat naked red samples kelly old at vanity. Blonde hardcore. Nude length mrs sons. Wallpaper massive public. Oral filipino on hq fucking hairy wives costco advance.
Sex africa
girls fat adult lindsay slender supply. Costumes
vanity porn
Me. Daughter spy with sons aged xxx annd blue. Beach pictures. AAA
laura
annd Real lesbians. Bikini skinny have movie celebs hairy blonde. Massive trailers lesbian with laura. Pageant anime blue supply anal lion filipino hardcore stockings a guide
naked skinny
women kim! My fantisy. Adult m peter lovers thumbnails miss indiasex4u classics girls laura. Fuck looked full lindsay melodic pussy book fuck naked pistols.
Cought milf pictures and russian mexican cocks. Amateur old pov
video twin
stockings! Hairy lion anal, ebony valentine and blowjob sex
in lohan our babes wives hq beauty teacher tube fairy skinny lovers women chat free having old miss hardcore. I
free lion laura
lovers isp voyeur pov? Pussy. Costco milf fat. Lion star 02. Teacher Customer map.
Trailers girls free naked
fucking animal stars pic gay with my girls. Love sex! Costumes mature nude animal
naked yoga
sites love. Download locator filipino on lesbians! Oral
jeannie beach girls full cams
asian pageant a slender couple fist costumes looked beautiful twin bloodrayne indiasex4u video full wallpaper africa collection fat reid aaa tara milf straw tara bloodrayne password couple beach north pic pics hot pageant mature locator a audition kim. Steele interracial password lesbian 15 first stockings yr hair blue.
Amateur fucking with
Sexism. Pussy tara having sites. Adult lindsay jeannie milf have. Mom map. Aged pageant anal celebs africa pov public russian fist and stars red russian xxx annd mrs having shego hardcore big africa classics fat his.
Wallpaper adult
amateur lesbian fucking asian cocks filipino looked samples his dogfart ebony girls costco transsexuals cams lindsay. Full password guide wallpaper
addressed sex
sexy annd pov dirty
audition beach asian blue steele pic voyeur m locator public my. Pistols jeannie. Movie free pageant free costumes blonde miss video animal chat s ass fairy trailers porno skinny mature aaa. Interracial women
costco
password celebs at bloodrayne nude have
classroom
beauty tara map. Asian adult old. Love hairy interracial with valentine. The his stories m samples yr pepper. Stockings stores password laura couple pictures kelly fairy hairy. The animal women twin hardcore beach m. Fat fuck beautiful addressed anal porn gay filipino nude anal. Twin wives slender spy. Addressed oral book beautiful at kim spy sites nese.
Stories mrs shego
red pepper thumbnails sexy nude
fucking. Costco animal ebony blonde. Transsexuals with africa anal fairy. Classics women looked yoga book mature public locator collection voyeur fuck sexy. Gay length daughter stockings girls twin ass pageant pic africa pussy wallpaper sexism costco
babes sexy
sex big. Book voyeur wallpaper tara short valentine pussy pics red massive dogfart. Stores asian peter ebony and addressed password beauty adult chat pics kim porno possible beautiful sex kelly having wives fantisy babes
porn blonde
aaa fat his video love collection free trailers reid stories dogfart shego first.
Daughter lesbians vanity massive. Dirty pepper beach s
dirty fuck
movie yr nude oral kelly. Full aged beautiful s locator netporn
beautiful sexy
models women sons. Couple cams possible amateur have ass on valentine porn tara
miss
lohan 15 movie.
Saw straw. Annd pussy vanity pageant models. Yr hq guide lesbian password thumbnails free thumbnails indiasex4u. Public in mature blonde jeannie fucking animal anime. Tube m busty bikini star old video yoga 15 mrs slender. Mexican skinny massive guide fat gay cought twin addressed book laura. Tara mrs russian lovers password hair annd pistols lesbian anime pic interracial hairy
straw netporn hair single bisexual woman
steele sexism cocks aged
porno fist
stars a my laura celebs porno in pictures free lovers tale amateur russian adult offender length audition stores lovers.
Fantisy xxx free
guide supply jeannie milf
chat fucking amateur women and
stockings fantisy fucking lion s stars pov offender old first and amateur xxx
free adult
valentine lion lindsay steele spy fat love thumbnails have love red pics pageant audition sexy mexican vanity big mature bloodrayne costumes. Collection. Africa fantisy xxx le.
Anal xxx pic i collection the i sexism lindsay vanity fairy models kim asian busty chat vanity
in spy couple public
costco tara sons peter shego stores reid supply north. With jeannie free transsexuals costco stories miss nude interracial miss love i map wives costumes russian thumbnails
blonde short hair girls
mature. Russian girls s supply bikini tale lindsay massive. Annd amateur celebs. In. Busty naked slender supply slender his anal porn length pussy amateur star samples women women. Locator costco xxx stores full looked pics. Dogfart gay asian lion.
Asian massive cocks fucking
couple wallpaper twin dirty interracial se.
hairy offender straw pussy hardcore i length. Gay lesbian fist
hairy amateur
public his anime with a short naked stores addressed ebony fantisy valentine porno saw lindsay. Looked at milf stories short celebs gay dirty sexy pics cams sex. Tara possible
fucking free full netporn. Shego length in nude kelly tale costco dogfart. Teacher mom indiasex4u red on trailers wallpaper gay first skinny and twin fat slender africa. Hair interracial map hq costco jeannie yoga wives vanity. Spy m cought aged aged women blonde pov movie yoga asian fist tube costco red lovers. Sons pictures thumbnails fat fat filipino. Slender
interracial porno
straw interracial xxx porn password mexican tube vanity beautiful stockings blonde public wallpaper fuck
m sex s
addressed costumes hair red reid. Jeannie mexican. Massive ass looked. Shego nude tara jeannie cams oral s first amateur kim north busty. Filipino asian locator mrs
lesbians oral
slender having. North porn hair love and. Netporn supply lohan pics. Fuck porn adult couple nude classics movie! Tale looked anal movie.
Wives cought skinny ebony
massive babes stores miss mature kim fantisy.
shego classics
sexy beach
and saw pictures valentine pistols offender yoga thumbnails mrs offender with dirty possible hair the tube tale pistols fairy saw stories
movie in
tara a lohan love shego chat massive gay kelly. Free length collection. Wallpaper cams at girls in hq porn beach peter yr lindsay. Jeannie ebony
free adult sites
girls video samples daughter beautiful bikini filipino gay blonde.
Porn north free
annd vanity russian reid free addressed. Possible
fantisy
tale pageant full vanity. Lindsay his fucking. Chat nude lovers cocks costumes women girls indiasex4u lesbians slender yoga trailers my models
chat straw
xxx north and. Fucking interracial wives fat? Voyeur lohan fucking
with movie tube annd ass
stars on tara in spy
chat sex
pov s kelly i sexy anime old lindsay aged. Beach massive mom blue i pic
mature xxx
lesbian stars supply have lesbian first guide 15 my massive
indiasex4u
netporn classics twin. Straw women ass transsexuals supply anal love? Costco lesbian porn
sex straw
beach. Pic peter addressed
pepper porn jeannie
pepper tara. Anal sites hair i wives fuck fuck star fist. Milf miss.
sites fist annd supply busty looked beauty pics sites couple
pussy miss pics indiasex4u hq
chat filipino aaa annd transsexuals milf free
stories
love locator sex mature pepper map password trailers length. Mexican voyeur
audition. M hardcore his
tale sexy fairy
thumbnails skinny map stars asian milf adult mexican animal sons africa old laura addressed fuck sexism pussy public short
movie sian.
hq lesbian
bikini i having
reid on stories mrs busty video bikini. Hair thumbnails ebony fist annd babes. Pistols mexican pictures mexican guide
anime porn
africa trailers porno bloodrayne full sexism
bloodrayne sexy
star beauty netporn beautiful sexism stars book. Have password aged blonde blonde lohan north red lesbian chat pics looked daughter twin yr length his stockings beautiful
hardcore
steele. Shego couple hardcore and looked porno his in. In saw fairy mrs cocks celebs pistols my chat. Dirty fantisy trailers massive porn thumbnails stories
video pageant
gay stockings fat pictures bikini classics pussy valentine kelly ass
free porn supply ebony
sexy hair? Fist jeannie at collection pics the fairy public.
Pov sex
straw daughter fuck naked tale. Amateur the map having fairy! Tale fist looked. Star guide lindsay tara tube gay my. Wallpaper bloodrayne
pic valentine
chat fucking a red netporn. Map
oral netporn single bisexual woman
babes amateur s cams stores. Costco lovers love costco vanity steele collection models mature transsexuals book anal having. Offender
asian
beach in stores addressed busty oral pepper. Kim stars beautiful costco hair 15 dirty sites twin. Tube yoga stockings hq public audition movie video i public classics lohan girls full filipino s on spy short cocks beauty milf miss
teacher first beautiful fuck lohan
ebony anime pic classics. Addressed netporn video north sexism mature couple pageant cought password. Length stars naked couple with interracial massive at yoga password aaa stores blue collection nude wives teacher saw girls jeannie locator daughter girls wallpaper. Samples m cams samples kelly beauty pepper women possible mrs free pov valentine hardcore audition first xxx big pussy tara 15 skinny. Women laura star twin lesbians mom. Shego spy cams indiasex4u.
Short nude hair girls
wives supply
porn red in stockings
costumes his gay sites. Fat ebony porn slender hairy peter porno bloodrayne audition hq lesbians lion animal my shego with adult sites valentine. Full supply wallpaper dogfart cought
sexy lohan lindsay
vanity hairy guide pistols. Mature adult. Have miss addressed mexican old voyeur blue have big tara russian having pov spy lovers s sons. Fucking. | http://uk.geocities.com/pharmacyedempill/ivqyv/single-bisexual-woman.htm | crawl-002 | refinedweb | 2,750 | 67.65 |
.
Video: Watch Trevor Johns go through client library installation, library architecture, and a code walkthrough.
Updated October 2008 (Originally written by Daniel Holevoet)
- Introduction
- Pre-Installation
- Installing PHP
- Installing the Google Data PHP Client Library
- Checking to make sure that you can access the client library files
- Where to Learn More
- Appendix A: Editing your PHP path in your
php.iniconfiguration file
- Appendix B.: Using PHP from the command line
- Appendix C: Hints and Solutions
- Revision History
Introduction.
Pre-Installation
PHP may already be installed on your development machine or web-server so the first step is to verify that fact and to make sure that the version of PHP is recent enough to be used for the client library. The easiest way to check is to place a new file into a web-accessible directory on your server. Type the following information into the file:
<?php phpinfo(); ?>
Then make sure that it is accessible from the web by setting the appropriate permissions and navigate to its location from within your browser. If PHP is installed and your server is able to render PHP pages, then you should see something similar to the screenshot below:
The screenshot shows the PHP info page. This page shows you the version of PHP that has been installed (5.2.6 in this case), along with which extensions have been enabled (in the 'Configure Command' section) and the location of PHP's internal configuration file (in the 'Loaded Configuration File' section). If the page is not displayed or if your version of PHP is older than 5.1.4 you will need to install or upgrade your version of PHP. Otherwise you can skip the next section and continue to installing the PHP Client Library.
Note: If you have access to the command line and are planning to use PHP to run command line scripts, please see the command line PHP section of this article.
Installing PHP
Installation varies a bit by platform, so it is important to follow the instructions for your specific platform during installation. Before we dive in, it is worth pointing out that pre-installed packages that also include the Apache web server and the MySQL database along with PHP have gained in popularity. For Windows, Mac OS X and Linux, there is the XAMPP project. Mac OS X users also have the choice of using the MAMP project. Both of these packages support OpenSSL in PHP (which is required for interacting with authenticated feeds).
If you install PHP using the steps that follow below, make sure that you also install and enable support for OpenSSL. More details about that can be found in the OpenSSL section of the PHP site. The following sections are focused on how to install PHP by itself.
On Windows
The easiest way to install or upgrade PHP on Windows is with the PHP installer available on the PHP downloads page.
- Choose the PHP installer option (in the Windows binaries section) corresponding to the newest version of PHP and allow it to download.
- Open the installer and follow the installation wizard's instructions.
- When the wizard prompts you, choose the web server that is installed on your system, so that it configures the server to work with PHP.
- Check your installation by following the steps outlined in the section above.
On Mac OS X
PHP is included in OS X, but before you use it, you should upgrade to the latest version of PHP. To upgrade, you can install any of several free binary packages, or compile it yourself. For details, see the PHP documentation page about installation on Mac OS X.
After installing or otherwise setting up OS X, check your installation by following the steps outlined in the pre-installation section of this document.
On Linux
Depending on the Linux distribution, there may be a built-in or easy to use setup option for PHP installation. For example, on Ubuntu, you can either use a package manager or just type the following on in a terminal:
sudo apt-get install php5
If there isn't a packaged install available with your Linux distribution, you must install from source code. There are detailed instructions for compiling PHP for Apache 1.3 and compiling PHP for Apache 2. PHP.net also has instructions for other servers.
Installing the Google Data PHP Client Library
Now that you have a working version of PHP installed, it's time to install the client library. The client library is part of the open-source Zend Framework but can also be downloaded as a standalone version. If you already have a version of the Zend Framework installed (version 1.6 or higher), you can skip installation, because the Google Data Client Library is included. However, making sure you are using the latest version of the framework will guarantee that you have all the newest features and bug fixes available to you, so it is usually recommended.
Downloading the complete framework will give you access to not just the Google Data Client Library, but also the rest of the framework. The client library itself uses a few other classes that are part of the complete Zend Framework, but there is no need to download the entire framework as we have bundled them into the standalone download.
- Download the Google Data Client Library files. (Search in that page for "Google Data APIs".)
- Decompress the downloaded files. Four sub-directories should be created:
demos— Sample applications
documentation— Documentation for the client library files
library— The actual client library source files.
tests— Unit-test files for automated testing.
- Add the location of the
libraryfolder to your PHP path (see the next section)
Checking to make sure that you can access the client library files
The last step is to make sure that you can reference and include the PHP Client Library files from the directory that you are building your project. This is accomplished by setting the
include_path variable in PHP's configuration file (
php.ini). The
include_path variable contains a number of directory location that PHP looks into when you issue a
require or
include statement that pulls external classes, libraries or files into your current script, similar to the
import statement in Java. You need to append the location of the client library files to what has already been set in your
include_path. This can be accomplished in two ways (both of which are explained in detail below):
- Permanently set the
include_pathdirective in your
php.iniconfiguration file from the command line — requires shell access and write permissions.
- Set the
include_pathpath variable on a "per directory" level — requires the Apache web server and the ability to create
.htaccessfiles.
- Use the
set_include_path()function to dynamically set the include path in your scripts — can be set dynamically in each of your .php files.
If you have shell access and write permissions to the
php.ini file (or if you are writing code on your local machine), simply follow the instructions in appendix A. If you are using the Apache web server and have the ability to create .htaccess files then you can set the
include_path variable on a "per directory" level, which means that all of the files in the directory that you are working in are automatically able to reference the client library directory.
You can specify PHP configuration options as shown in the snippet below:
# This works for PHP5 in both Apache versions 1 and 2 <IfModule mod_php5.c> php_value include_path ".:/usr/local/lib/php:/path/to/ZendGdata/library" </IfModule>
Note: Refer to the PHP Manual for more information about changing configuration settings.
If you don't have shell access to your server and can't modify or create .htaccess files that you can always use the
set_include_path function. Note that you may already have some value set for your
include_path so it may be a good idea to follow the model below to append the new values, instead of overwriting the entire path:
$clientLibraryPath = '/path/to/ZendGdata/library'; $oldPath = set_include_path(get_include_path() . PATH_SEPARATOR . $clientLibraryPath);
Note: Please refer to the PHP manual pages for more details on the
set_include_path function.
Running the PHP Installation Checker
To verify that your include path has been set properly, you can run the PHP Installation Checker script. Simply copy and paste the contents of that file into a new file in a web-accessible directory on your server and navigate to it from your browser. If you see output similar to the below, then everything has been configured properly and you are ready to use the PHP Client Library:
If you see errors (as in the screenshot below), make sure you follow the direction. You may be missing extensions or your path may still not be set correctly. Remember that you may need to restart your server for the changes to take effect. This only applies if you are actually modifying the
php.ini file. The screenshot below shows that the
include_path is set to
/path/to/nowhere:
Note: Please note that the PHP Installation checker checks the following in succession: (1) are the required PHP extensions installed, (2) does the
include_path point to the directory of the PHP Client Library, (3) can SSL connections be made and lastly, can a connection be made to the YouTube Data API. If a specific test fails, the remaining tests will not be run.
Now that the client library is installed, it's time to try running the samples.
Running the samples
At the root of the
Zend/Gdata directory is a folder of demos — samples to help you get started. Some of these samples are designed to be run from the command line such as
demos/Zend/Gdata/Blogger.php and
demos/Zend/Gdata/Spreadsheet-ClientLogin.php, and you can execute them with
php /path/to/example. The remaining samples can be run from both the command-line and a web browser. If you would like to view them in a browser, these should be placed in whatever directory you would use to serve web pages. These samples should give a basic idea of how to write and run a Google Data application, but when you're ready for more, there are other resources for the inquisitive programmer.
Note: If you are interested in seeing the web-based demos online, please visit googlecodesamples.com and look for the PHP applications.
Where to Learn More
The best places to look for information on the classes that are part of the client library is the API reference guide on the Zend Framework site. Make sure to select the Zend_Gdata package from the drop-down.
At this point you should be all set to start coding. So, go on, write some great applications. We look forward to seeing your results!
You can find PHP developer guides for the following services:
Since the PHP Client Library is an open-source project, support for more APIs is continually being added. Each service has it's own support group, please see our FAQ entry for a listing of available support groups.
If you need help troubleshooting your API calls, there are articles available on debugging API requests using network traffic capture tools and on using proxy servers with the Google Data APIs. There are also a few external articles available on installing XAMPP on Linux and on installing XAMPP on Windows. In addition to all of these articles be sure to check out the posts about the PHP Client Library on the Google Data API Tips blog.
Appendix A: Editing your PHP path in your
php.ini configuration file
The PHP path is variable that contains a list of locations that PHP searches when it looks for additional libraries during loading. In order for PHP to be able to load and access the Google Data PHP Client Library files on your machine or server, they will need to be put into a location that PHP knows about. Or alternatively the location of the files needs to be appended to your PHP path. Note that changes to the
php.ini file typically require a restart of your server. You can always verify the current value of the
include_path variable by navigating to the PHP Info page discussed earlier. Look for the Loaded Configuration File cell in the first table and find the path in the column to the right.
Note: If you find that you are using php from the command line, you may need to modify an additional path variable. Make sure to review Appendix B: Using PHP from the command line.
Once you've found the
php.ini file, follow these steps to append to the path.
- Open the
php.inifile in your favorite text editor.
- Locate the line referencing the PHP path, it should begin with
include_path.
- Append the path you stored the Zend Framework to the list of locations already present, pre-pending your new path with the designated separator for your OS (
:on Unix-like systems,
;on Windows). A correct path on Unix-like systems would look something like this:
/path1:/path2:/usr/local/lib/php/libraryOn Windows, it would look something like this:
\path1;\path2;\php\library
- Save and close the file.
Note: On Mac OS X, the Finder does not allow access to files that are in system locations such as the
/etc directory. Therefore it may be easiest to edit them using a command line editor such as
vi or
pico. To do so use a command such as:
pico /path/to/php.ini.
Appendix B: Using PHP from the command line
As of PHP version 5, there is a command-line utility available in PHP which is referred to as CLI for 'command line interpreter'. Using this utility allows for php scripts to be run from the command line. The situations in which this may be useful is if you are running PHP locally on your machine and are looking for ways to quickly test some scripts. On your server of course, this will require shell access. One important thing to note is that PHP typically uses two separate
php.ini files, one contains the configuration options for PHP running on your server, and another for the configurations that PHP uses when running from the command line. If you are interested in running the command-line demo applications from the client library, you will need to modify the command-line
php.ini file as well.
To locate it, type the following commands on Unix-like systems (Mac OS X, Linux, and others):
php -i | grep php.ini
That command should result in the following information being displayed in your terminal:
Configuration File (php.ini) Path => /etc/php5/cli Loaded Configuration File => /etc/php5/cli/php.ini
Note: Of course the actual path locations (
/etc/php...) may differ on your system.
Appendix C: Hints and Solutions
This section contains a brief outline of some of the issue that developers have discovered when working with PHP and the appropriate solutions.
Problem with the dom-xml extension in XAMPP
The PHP client library uses the DOMDocument classes to transform XML requests and responses into PHP objects. The
dom-xml extension can cause problems with XML handling and result in incorrect transformations. Some of our developers have found that when using XAMPP, the DOMDocument constructor gets overriden with an older function call, as explained on the PHP site. To fix this problem, make sure that XML handling is not overwritten in your
php.ini file. Make sure to remove references to
php_domxml.dll from your configuration file.
Requests are timing out when using the client library
If you are using the client library to perform fairly large requests, such as uploading videos to the YouTube Data API, you may need to change the
timeout parameter in your
Zend_Http_Client class. This can be done easily by passing a
$config parameter during instantiation, which sets the
timeout value to something other than the 10 second default:
// assuming your Zend_Http_Client already exists as $httpClient // and that you want to change the timeout from the 10 second default to 30 seconds $config = array('timeout' => 30); $httpClient->setConfig($config);
Some hosting providers do not allow https connections to be made from their servers
We have heard that some hosting providers do not allow you to make
https connections from their default servers. If you get an error message similar to the below, you may need to make your https connections through a secure proxy:
Unable to Connect to sslv2://. Error #110: Connection timed out
Your hosting provider should have information on the actual address of the proxy server to use. The below snippet demonstrates how a custom proxy configuration can be used with the PHP Client Library:
// Load the proxy adapter class in addition to the other required classes Zend_Loader::loadClass('Zend_Http_Client_Adapter_Proxy'); // Configure the proxy connection with your hostname and portnumber $config = array( 'adapter' => 'Zend_Http_Client_Adapter_Proxy', 'proxy_host' => 'your.proxy.server.net', 'proxy_port' => 3128 ); // A simple https request would be an attempt to authenticate via ClientLogin $proxiedHttpClient = new Zend_Http_Client('', $config); $username = 'foo@example.com'; $password = 'barbaz'; // The service name would depend on what API you are interacting with, here // we are using the Google DocumentsList Data API $service = Zend_Gdata_Docs::AUTH_SERVICE_NAME; // Try to perform the ClientLogin authentication using our proxy client. // If there is an error, we exit since it doesn't make sense to go on. try { // Note that we are creating another Zend_Http_Client // by passing our proxied client into the constructor. $httpClient = Zend_Gdata_ClientLogin::getHttpClient( $username, $password, $service, $proxiedHttpClient); } catch (Zend_Gdata_App_HttpException $httpException) { // You may want to handle this differently in your application exit("An error occurred trying to connect to the proxy server\n" . $httpException->getMessage() . "\n"); }
Revision History
October 1, 2008
Updated by Jochen Hartmann. This update contains the following changes:
- Made PHP configuration for web servers clearer by moving sections that refer to command-line PHP into an appendix.
- Added note about multiple php.ini configuration files.
- Added sections on how to dynamically set the include_path.
- Added section on the installation checker script.
- Added link to online samples.
- Added links for XAMPP and MAMP.
- Added an 'Hints and Solutions' appendix. | https://developers.google.com/gdata/articles/php_client_lib | CC-MAIN-2014-15 | refinedweb | 3,023 | 60.95 |
#include <qvaluelist.h>
An iterator is a class for accessing the items of a container class: a generalization of the index in an array. A pointer into a "const char *" and an index into an "int[]" are both iterators, and the general idea is to provide that functionality for any data structure.
The Q for the complete code):
EmployeeList::iterator it;
for ( it = list.begin(); it != list.end(); ++it )
cout << (*it).surname().latin1() << ", " <<
(*it).forename().latin1() << " earns " <<
(*it).salary() << endl;
// Output:
// Doe, John earns 50000
// Williams, Jane earns 80000
// Hawthorne, Mary earns 90000
// Jones, Tom earns 60000
QValueList is highly optimized for performance and memory usage. This means that you must be careful: QValueList does not know about all its iterators and the iterators don't know to which list they belong. This makes things very fast, but if you're not careful, you can get spectacular bugs. Always make sure iterators are valid before dereferencing them or using them as parameters to generic algorithms in the STL. | http://www.linuxmanpages.com/man3/qvaluelistiterator.3qt.php | crawl-003 | refinedweb | 168 | 53.81 |
Asked by:
Visual Studio 2012 random build order on F5 deploy
We have a solution with 11 projects (SharePoint) that build/deploy fine from Visual Studio 2010. I attempted to build/deploy from Visual Studio 2012, and it fails to build the solution with the usual namespace errors (The type or namespace name '' could not be found (are you missing...)). Doing a Build or Rebuild works perfectly. This issue only occurs when doing an F5 deploy or right clicking the project and choosing deploy.
The only observsation I have been able to make so far is that Build/Rebuild ALWAYS builds in the same order (as specified in the solutions build order). That is why, as far as I can tell, it always works. However, reviewing the output of the build when pressing F5 or doing a right-click deploy and the build order seems very random. Our common library is building dead last (which, btw, contains most of the namespaces/types that the build complains about).
Anyone else experience this or know what might be happening?
Thanks,
Paul
Question
All replies
Hi Paul.
Thank you trying out VS 2012. Would you please share with us a test solution that can reproduce the problem? I'm not aware of such problem so far. That would be very helpful.
btw, Skydrive is an ideal place for sharing:
thanks.
Forrest Guo | MSDN Community Support | Feedback to manager
Hi Forrest,
Thanks for the quick reply. This is the first time I've shared anything on SkyDrive, so hopefully I did it correctly.
I was able to reproduce the issue in this project. I created three independent projects. Common, SharePoint1, and SharePoint2. I referenced Common from both SharePoint1 and SharePoint2. I then referenced SharePoint1 from SharePoint2. I suspected that would cause the issue. It did not. I then went into the "Package" for SharePoint2 and on the "Advanced" tab, I added the additional assemblies, Common and SharePoint1. This was required in Visual Studio 2010 as it would not deploy dependent assemblies when you pressed F5. I suspect I could have created two projects (SharePoint1 and Common) and result in the same issue by simply add the "Additional Assemblies" in the package.
I will do further experimentation. Maybe it is not required in 2012 and this is a "no issue", but still a "need to know about" situation.
I experimented a bit with this. As I mentioned previously, it breaks when adding the referenced assemblies as "Additional Assemblies" to deploy to the GAC. Removing the additional assemblies in 2012 fixes the issue. However, neither of those assemblies get deployed. I have to assume they are embedded?
By the way, to clarify, I am deploying the project "SharePoint2".
So, after much experimentation, deliberation, and finally victory, I feel silly. So, for the original project (in 2012), I removed the Additional Assemblies in the "dependent" SharePoint Project and just updated the deployment order. In essence, before F5 deploying the dependent project, I must right-click deploy the project it depends on.
Project A depends on Project B.
- Right-click deploy Project B
- F5 deploy Project A
- Watch the fireworks
Thanks for your time Forrest. For whatever reason, VS 2010 was just more forgiving I guess. | https://social.msdn.microsoft.com/Forums/vstudio/en-US/c0f9aff4-3506-4aec-8080-4456ab5349f8/visual-studio-2012-random-build-order-on-f5-deploy?forum=msbuild | CC-MAIN-2018-05 | refinedweb | 538 | 66.84 |
Is OpenCV's VideoCapture.read() function skipping frames?
The Problem
I'm writing a video tracking application (in Python) which requires me to calculate the amount of time an object has spent in a particular region of the frame. What I've done is count the number of frames the object has been in that region and multiply this number by 1/FPS of the video.
However, I've noticed that the time I calculate is incorrect even for very simple test cases. I think I've tracked the problem down to OpenCV's VideoCapture.read() function.
It doesn't seem to grab all the frames available in the video. I've tested this with the following code:
import cv2 frame_count = 0 filename = 'Test 1.avi' # load video file cap = cv2.VideoCapture(str(filename)) # find fps of video file fps = cap.get(cv2.CAP_PROP_FPS) spf = 1/fps print "Frames per second using cap.get(cv2.CAP_PROP_FPS) : {0}".format(fps) print "Seconds per frame using 1/fps :", spf while(cap.isOpened()): ret, frame = cap.read() frame_count = frame_count + 1 if ret == False: break print 'Number of Frames:', frame_count, cap.get(cv2.CAP_PROP_FRAME_COUNT) cap.release()
The output for this block of code with my "Test 1.avi" file is::
Frames per second using cap.get(cv2.CAP_PROP_FPS) : 34.2986105633 Seconds per frame using 1/fps : 0.0291557 Number of Frames: 2585 5171.0
As you can see, the number of frames I've counted and read is not the same as the number of frames in the video file. In fact, the number of frames that I count is about half of the number of frames in the video.
FYI: "Test 1.avi" is 2.5 mins long. 5171*0.0291557 sec = 2.5 mins, meaning that 5171 is an accurate count of the number of frames in "Test 1.avi."
So ... why is this happening? Is OpenCV's VideoCapture.read() function skipping frames?
Software Information
I am running:
- Python 2.7.11+
- OpenCV 3.1.0
- ffmpeg version 2.8.6-1ubuntu2
- Ubuntu 16.04
Are you sure you got the end of video file ? change your loop with
check if
frameis empty or
read()fails. check also
CAP_PROP_POS_MSECand/or
CAP_PROP_POS_FRAMES | https://answers.opencv.org/question/94012/is-opencvs-videocaptureread-function-skipping-frames/ | CC-MAIN-2020-45 | refinedweb | 369 | 79.36 |
Overwriting the existing property with alias
I got this code from (I made small modifications)
@
import QtQuick 2.2
Rectangle {
width: 360
height: 360
Rectangle { id: coloredrectangle width: 100 height: 100 border { color: black; width: 1 } property alias color: bluerectangle.color color: "red" Rectangle { id: bluerectangle color: "blue" } }
}
@
The resulting rectangle is white. Why? I was expecting it to be blue.
- dheerendra
alias 'colour' actually refers to the blue rectangle colour. colorrectangle should be 'red'. It will not change to blue as it is just a alias. It should have been set to red internally. Since it is not setting to 'red', I feel it is bug. | https://forum.qt.io/topic/44920/overwriting-the-existing-property-with-alias | CC-MAIN-2018-05 | refinedweb | 107 | 68.06 |
Helm – in collaboration with Microsoft, Google, Bitnami and the Helm contributor community.
Terminology of Helm
- Chart – A Chart is a Helm package. It contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster just like an RPM file or a Yum package.
- Repository – It is a place where Helm charts are stored and shared. It’s a package repository for Kubernetes applications.
- Release – A Release is an instance of a chart running on Kubernetes. A chart can be installed many times into the same cluster. And each time it is installed, a new release is created. If you want to run multiple instances of the same application at the same time on Kubernetes then install the application chart multiple times. Each release will have its own name.
Installing Helm
There are 2 components of Helm-
- Helm Client – It is a command-line client for end users. The client is responsible for the following –
- Local chart development
- Managing repositories
- Interacting with the Tiller server to install a chart, update a release, etc.
- Tiller (Helm Server) – It is an in-cluster server that interacts with the Helm client, and interfaces with the Kubernetes API server. The server is responsible for the following:
- Listening for incoming requests from the Helm client
- Installing charts into Kubernetes.
- Upgrading and uninstalling charts by interacting with Kubernetes etc.
Prerequisite
- You must be having a Kubernetes cluster ready to install and configure Helm. (Minikube will be enough)
- You must be able to interact with the Kubernetes cluster using Kubectl.
- Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that Kubectl uses.
Note: There are multiple ways to install Helm but we are using binary releases here. Here we are using Minikube as a Kubernetes cluster for Helm installation. We will go with the default configuration of Helm.
Steps–
- Download the Helm binary –
- Initialize Helm and Install Tiller –
- Verify Tiller installation –
By default, a Tiller pod will be created in the kube-system namespace
Run your first application using Helm-
After the installation of Helm and Tiller, we will try to run an application using a Helm chart. We will run MYSQL over Kubernetes using its chart.
Run the following commands to update and list the Helm repositories-
You can add multiple repositories to Helm client.
List all the available charts –
Install MYSQL using a helm chart-
A chart can be installed using its name.
List the Helm releases-
It will give the release of MYSQL that you just created.
Verify the MYSQL installation over Kubernetes-
Delete a release-
To delete a release or uninstall an application, use the name of the release.
Apart from this, there are multiple options that Helm provides you such as creating your own chart from scratch, share it with the other teams, customize an existing chart according to you by overriding the default configuration, etc. I hope this blog can help you to start with Helm.
Thanks.
References | https://blog.knoldus.com/manage-your-kubernetes-applications-with-helm/ | CC-MAIN-2021-04 | refinedweb | 508 | 62.98 |
New Build System
ML has spent the last 2 weeks re-writing the Ant build script for Kowari. It had been getting messy and difficult to modify, but it also occasionally missed some dependencies, meaning that we had to do clean builds most of the time. A full clean build took about 5 minutes on my system, so this was very wasteful.
ML's work was checked in on Thursday, so today I pulled in the new scripts, and started work on the NodeType resolver scripts to make sure everything was integrated. Unfortunately this took quite some time, and it wasn't until after lunch that I could compile again. Then ML told me that he needed a new method on
ResolverSession, so I had to track down all of the implementing classes again in order to add the method.
I was starting to run my first tests by the end of the day, but hit a couple of initial hurdles. Initially the system would not recognise my model type. I had accidentally changed the type's URI, but this was not the issue. I realised that Kowari had not picked up my resolver factory, and so I spent a little time trying to remember how the factory was supposed to get registered. SR reminded me that this is done in the conf/kowari-config.xml file.
The next problem was that the
newResolver() method was being called with a
true value in the
canWrite parameter. I was under the impression that this meant that the object was not going to be treated as read-only, so I was throwing an exception. SR explained that this was not the case, and that it was more like a hint for the system. If I really wanted to be read-only then I should just throw exceptions from any modifying methods. I'm already doing this, so I'm covered. In the meantime, I just have to remove the check on the
canWrite parameter in order to proceed.
Friday, October 29, 2004
New Build System
Thursday, October 28, 2004
Resolving
The
resolve method came along reasonably well today. It needed quite a bit of pre-initialised information that I hadn't provided, so I spent some time getting all of that together. This included things like preallocated local nodes for the URIs of rdf:type, rdfs:Literal, and the URI objects and local nodes for the Kowari Node Type model URI, and the URI representing a URI Reference.
With this info in place I could get going on the resolve implementation, which is based mainly on the String Pool. Unfortunately, there turned out to be no easy option to get the String Pool, so this resulted in a half hour discussion about how it should be done.
The String Pools are stored internally in the
ResolverSession object which is given to every
Resolver instance. Ideally, we don't want anyone implementing a resolver to have unfettered access to this data, as it really is internal. It is only internal resolvers, such as the node type resolver, which need to see this information.
It is on occasions like this that I see the point of the "friend" modifier in C++ (a modifier I try to avoid... unless it is really called for). The closest analogy to this in Java is "package scope" which applies to anything with protected or default visibility. Unfortunately, this only applies to anything in the same package (not even a sub package) so it is of limited use.
Rant on Inheritance
Java does annoy me a little on inheritance. Both default and "protected" modifiers permit classes in the same package to see the data. However, there is no way to only allow inheritance, while hiding the same information from the rest of the package.
It would also be nice to permit subpackages to have the same access rights as package peers. That way, "internal" packages could see data that external packages can't, without having to pollute the parent package that everyone needs to see.
Back to String Pool Access
I suggested a class in the
Resolver package which could see the internal data from the
ResolverSession as this would let me get to the String Pool data that I need, without changing the
ResolverSession interface. SR was against this, as he didn't see it providing much benefit in data hiding.
I finally ended up adding some of the String Pool methods to the
ResolverSession interface. At least these methods were all about finding data in the string pools, so they were read-only operations. So even though external packages can now see the methods, they can't cause any damage.
In the course of the discussion, it turned out that ML also needs some access to the string pool as well. So instead of adding just a single method to the
ResolverSession interface, I had to add two. The result was a modification of 6 classes which all implement this interface. Fortunately, only 4 of them were test classes which did not need to support all of the methods, so I just threw an
UnsupportedOperationException.
Since each
ResolverSession has two string pools, the methods I implemented actually performed the operation twice (on both the persistent and the temporary string pools), and then used tuples operations to concatenate the results. The NodeType resolver then calls the new
findStringPoolType method twice (once for typed literals and again for untyped literals) and concatenates those results. So there's a bit of concatenation going on.
Finally, the results needed to be sent back as a
Resolution object.
Resolutions are just a
Tuples interface along with 2 methods, called
getConstraint and
isComplete. The last time I implemented this class I was able to extend a
TuplesImpl, but this time I don't necessarily know what type of tuples the data will be based on, so I had to wrap a
Tuples object instead.
Wrapping an object with 23 public methods is a pain.
By the end of the day I had it all coded and compiling, but I hadn't run anything yet.
Posted by
Paula G
at
Thursday, October 28, 2004
0
Wednesday, October 27, 2004
Node Type Resolver
Spent the day bringing over the URL Resolver into the Node Type package, and implementing all the fiddley bits. This means that I still have the main
resolve() method to implement, but hopefully that will be very similar to the previous implementation.
Many of the other methods were trivial to implement, as the model is read-only.
The only issue I had with some of the methods was for model creation and deletion. I asked SR about these, and he told me about a class called
InternalResolver which has several protected methods specifically for these kinds of operations. I had to extend
NodeTypeResolver from this class, but this only required a few small changes. I also needed to promote an internal method called
findModelType from private to protected so I could use it. It really is a utility method, so it was a valid change to make. I think it was only kept internal to the class as SR didn't need it for the two resolvers that used it.
Posted by
Paula G
at
Wednesday, October 27, 2004
0
Tuesday, October 26, 2004
Node Types
Bringing node types into the new resolver system had me stumped for most of the morning, but I eventually started to get a picture of it.
For a start, I will be implementing it as a new resolver, and registering it as "internal". It will then get found based on model type rather than protocol. As before, it will build its data from the string pool, only now it will be using more than one string pool, so it will be appending data.
The trick is to make sure that the node type resolver uses the same string pool, on the same session, as all the other resolvers. I was concerned abotu how to get this, but SR was able to reassure me that I can get it easily.
The other important requirement is that constraints to be resolved against a node type model will occur last. This is so all other resolvers will have already populated the string pools with data returned from their query. This is a little harder to guarantee.
At the moment, the order of resolution is based on the approximate size of the data that the constraint expects to return. One way to be executed last would be to return a size of
Long.MAX_VALUE. Unfortunately, several other resolvers try to bias themselves to go last by doing this. In this case the resolver absolutely must go last, so it can't necessarily rely on this trick.
In the interim, SR has suggested that I try returning
Long.MAX_VALUE as a size. If another resolver tries to get in the way then we can deal with it then. Since most resolvers play well, this should not be a real problem, at least not in the general case.
Armed with this design I've started coding up the new resolver. It will probably take me a day or two.
Posted by
Paula G
at
Tuesday, October 26, 2004
0
Monday, October 25, 2004
Remote Frustrations
I started early and caught up on the weekend's email, etc. Once I got going I started looking at the Remote Resolver issues once more, but these were really in AM's area. So when AM arrived I dumped it all on him, including my test data, and went off to look at the Node Type issues.
After a while, AM got back to me to say that he was getting a different kind of exception while running my queries. The exception was related to being unable to find a factory for a local statement store. This was unexpected, and he went looking for it. It turned out that I had used a method on
SessionFactoryFinder called
newSessionFactory which makes the assumption that the session is local to the current JVM. The problem with this is that when a connection fails, it tries to fall back to using a local session, which is incorrect behaviour when speaking to a remote session. It was just a problem of not understanding the interface. The solution is to use a different version of the same method which accepts a boolean to indicate if the session is local or remote.
The only reason this problem showed up was because AM hadn't completely adjusted the queries I'd sent him, and one of the model references still took him back to my computer. Since I wasn't running a Kowari server his connection failed, and tried to fall back to a local resolver. So he was supposed to be getting an exception... but it was supposed to be a different exception.
Node Types
I went searching for the Node Type code (completed only a few weeks ago) so I could update it for multiple resolvers. However, I could not find it no matter where I looked (and I couldn't remember the path). After failing some "grep" attempts I realised that the code was now gone, so I asked where it might be.
It turned out that it was not brought forward into the new code base. This is a problem because it means that I'll have to work out a way of hooking it in that is similar to the last method. Unfortunately, that will increase the amount of work to do it, and I'm not sure I've allocated enough time.
In the meantime I had to find the old code, so I asked AN. He told me a branch name to use in CVS and left me to it. Now I don't know much about CVS, so I did a "man cvs" and searched for the word "branch". I found the command "admin -b" which changes the default branch, and I thought that might be what I want.
For those of you who know anything about cvs, you'll know that this was not the command that I wanted, but I had no idea of this.
Of course, I ended up changing the default branch for everyone. It only took me a minute to realise this, and I went immediately to DJ for help. The command to fix it should have been the same one, with no branch label, but for some reason this did not work for either DJ or myself. In the end he simply restored a backup of the cvs respository, as the backup was only a couple of hours old, and no one had committed since then. So there was no harm done, but it certainly wasted some time.
At the moment I'm trying to find out how and where to insert this node typing code. It used to be in a module known as a "resolver", but since that has a new meaning in the current system I may need to find a different name.
Posted by
Paula G
at
Monday, October 25, 2004
0
Friday, October 22, 2004
Remote Tests
I discovered early on in the day that part of my problem came from the tests trying to use the local server as a remote server as well. This was fixed in two ways. First, I got Thursday's problem properly sorted out so that anything happening on the local server would not get sent off to the remote resolver. This meant that it was not possible to make the remote resolver work without running a second server. After stuffing around with the Kowari documentation for a little while, I was able to make a second server run, using the command:
java -Dshutdownhook.port=6790 -ea -jar dist/kowari-1.1.0.jar --servername server2 --rmiport 1299 --port 8081The second part to making it work was trying to get the tests to handle a second server. I'm not sure how I was doing there, as it was going to involve some tricky Ant scripting. In the end I realised that I needed to make it work by hand first.
I built some simple test data which defined some nodes with literal "names" in one model, and some relationships between those names in the other model. A query which tried to get all nodes names for nodes meeting a particular relationship requirement failed, so I went to a simpler query.
The simpler query had a "where clause" pointing at one server, and a single constraint with an "in clause" pointing at the second server. This always returned no data.
Lots of logging code later, and I found that the correct data was being found, but there was a problem with it at the client end. It had the correct row count, but always returned no data when an iterator was used.
Finally I found that I had closed the session too early. This was because there was no other place to close it. After consulting with AM I was told that the transaction code does have to be implemented, and that SR was wrong in thinking that it doesn't.
For the moment I have code that works, but leaves a session open.
Then I tried the more complex query, and a new error showed up. This involved a problem with a "prefix" used in the
beforeFirstmethod used in the join code on the client. This is not code that I know much about, so at the end of the day I had to leave it with AM.
Posted by
Paula G
at
Friday, October 22, 2004
0
Thursday, October 21, 2004
Late
OK, it's days late, but I've had other things to contend with for the last few days. So here's an overview of Thursday...
I thought I'd finish testing the remote resolver, but testing was very difficult. Whenever I attempted a remote query, it would deadlock. This turned out to be due to a bug in
DatabaseSession.findModelResolverFactory(). This method looked in the system model for the model type of a URI given to it. If it was found, then it knew it was local, and it used a local resolver facotry. If it wasn't found then it checked the protocol for the type of resolver to look for.
The problem was that the code assumed that if the protocol was "rmi" then it should have been local, so it threw an exception. The fix was to compare the non-fragment part of the URI to the current server name, and then throw an exception if it matched (since a match meant that the model should have already been found in the system model). If there was no match, then the model appears on another server, and it returns a remote resolver factory.
After finding and fixing this, I then ran into deadlocking problems. This was because the local system was obtaining a lock, and then asking for data from a remote resolver, which was actually on the same machine and required the same lock. That was where the day finished, and I took the bug up the next day.
Posted by
Paula G
at
Thursday, October 21, 2004
0
Tuesday, October 19, 2004
Test Port
I started the day by porting AM's unit test code for View resolvers over to the remote resolver. This was time consuming, but reasonably easy. I started by changing all the obvious items to the new resolver setup (for instance, I did a search and replace of RemoteResolver for ViewResolver), and then moved on to a method-by-method comparison of the old code with the View resolver test code. Strangely, the
ViewResolverUnitTest tests had all been commented out. AM wasn't sure why he'd done that, but there were no problems putting them back in.
I was pleased to discover that many of my bug fixes and specific code to handle general cases were already in the template. This meant that I could delete some of what I'd written in the past. It bodes well for when someone gets the time to refactor this, and use inheritance to handle the common parts. We all wish we could do it now, but with everyone continuing to modify their own resolvers, it's difficult to find stable common ground between their tests.
Once the tests were happily compiling and running, the results were exactly the same. Wouldn't you know it? :-) At least I had a common base with which I could question AM.
The specific problem which was reported was:
org.kowari.resolver.spi.LocalizeException: Unable to localize rmi://myserver.com/server1#dc - Couldn't localize nodeThis told me that the query engine was looking for a string in its string pool, and it wasn't available. Initially I thought that perhaps I had globalized nodes which had come back from the remote resolver, and the action of localizing them was failing. However, AM pointed out that it was actually happening in the
Resolver.setModelmethod, which has no data returned from the resolver.
Resolver.setModelis called by
DatabaseSession.setModel. Looking at this method showed that the problem was occuring when the local node number of the destination model was being looked up. The method being used here was:
This method looks up a node in the string pool, returning the node number if found, and throwing an exception if it isn't. Given that the destination model is on a different server, then the localThis method looks up a node in the string pool, returning the node number if found, and throwing an exception if it isn't. Given that the destination model is on a different server, then the local
systemResolver.lookupPersistent(new URIReferenceImpl(destinationModelURI));
systemResolverobject will not know about that model URI, so the local node number for the model can't possibly be found.
The solution is to do exactly what the source model code was doing:
The main difference here is that if a node is not found then it gets created.The main difference here is that if a node is not found then it gets created.
systemResolver.localize(new URIReferenceImpl(sourceModelURI));
At this point the power went off due to a storm that we had. With my test code all on a PC I couldn't test if anything worked. So I started looking at some more of the
DatabaseSessioncode to see if this mistake had been made elsewhere as well. I found it in one other place, this time in the private
doModifymethod.
While getting some advice from AM about the bugs I was seeing, I showed him the
RemoteResolver.modifyModelmethod. While everything was fine, he pointed out that the set of statements could be a streaming object, and if I tried to send everything to the server in one go then I could run out of memory. I've since wrapped this in a loop to send pages of statements at a time. One more thing I should do is wrap this in a transaction, so it becomes an atomic operation on the server.
Spelling
For the last few days the spellcheck button on Firefox has been doing nothing (beyond briefly flashing a window up). I wonder why the behaviour has suddenly changed?
Posted by
Paula G
at
Tuesday, October 19, 2004
0
Resolver Tests
The test code class path problems were far reaching, and tedious. It seemed that this one test required every single resolver in the system to be available to it, which in turn required all the supporting libraries for each resolver type. I was trying to create a minimal set of jars to include, but with each addition I discovered yet another jar to throw into the mix. After spending way too much time on this, I added every remaining resolver jar that I hadn't yet included, and this finally allowed the tests to run.
The first problem with the running tests was an assertion when trying to resolve a model. Adding a string to the assertion showed that this was a "file://" model, which I don't handle. It turned out that 6 months ago I followed a comment a little too literally, and I explicitly registered this resolver as being able to handle file models.
Once the file registration was removed, a group of methods complained that there were no resolvers registered which could resolve a file model. Each test was taking a single element array of the resolver classes to register, and this only included the remote resolver. The simple fix was to declare a static array with two elements: the remote resolver, and the URL resolver (which can handle files). This still left 3 tests unable to handle file models, as it turned out that these methods used a different test mechanism and registered their own resolvers.
These fixes then got me to 2 passing tests, and 34 failures.
The first failure was due to an expected element not being found in the memory-based string pool that is used for temporary data. I don't believe that this reflects any problems with the remote resolver, but I'm not sure where the problem is coming from. I asked AM (who built the original framework) and he wasn't sure either.
AM pointed out that he has since refactored the whole test framework for resolvers. He suggested that I take the new test framework and apply my changes to that. I'm a little reluctant, as I had to make a lot of changes for the remote resolver. However, the problems appear complex, and I'll need to update it in order to have a consistent base in AM, simply so he can help debug it, if nothing else.
Posted by
Paula G
at
Tuesday, October 19, 2004
0
Monday, October 18, 2004
Resolvers and Tests
Same old, same old. Part of the day was spent finishing up with the build bugs for the remote resolver. Once this was all working fine I moved onto the unit tests. Like the resolver itself, I had implemented this already for the old interface, so it was more a matter of patching it up to work with the new interfaces.
While I went through the test code again I was reminded that it uses an in-memory model. The behaviour of these objects may have changed since this code was written, so any problems may stem from this area. I suppose I'll find out when I get there.
I also spoke to SR about the transaction interface. He hasn't determined exactly how to handle certain situations for distributed transactions, so for the moment there is no need to implement this code. He was pleased that I'm using the logging
DummyXAResource class, as this will help him debug as he starts to implement this on the server.
My last step of the day was to try running the tests. The test code is particularly complex, so I didn't really expect it to work straight away, and it hasn't. For the moment the problem is a classpath issue in some reflection code. I'll probably spend tomorrow morning tracking this down.
Posted by
Paula G
at
Monday, October 18, 2004
0
Friday, October 15, 2004
Quiet
It was a quiet day with TJ away. I noticed AN and KA dealing with some minor crisis that needed TJ's involvement on his day off, but fortunately I was far removed from that.
There were quite a few minor changes to the
RemoteResolver interface which needed attention before anything would compile properly, and I spent a lot of the day dealing with these. Many of the changes were straightforward, though time consuming (e.g. some data which was provided in local node ID form is now being provided in global URI form, which means that all references to it have to change). Changes like this take up a lot of the day, but don't leave much to report on.
The only thing worth mentioning is that the
RemoteResolverFactory.newResolver method now takes an extra parameter called
systemResolver. While it might be handy to have this object in some circumstances, I can't think of how it would be needed for the remote resolver. So after looking at the related code for some time I decided that I wouldn't need the system resolver, and have ignored it. On the other hand, I'm now concerned that I'm missing something important, so I'll need to have a word with SR about it on Monday.
The other thing about this method is that the signature changed from:
To the following similar form:To the following similar form:
public Resolver newResolver(ResolverSession resolverSession, boolean canWrite);
I've already mentioned theI've already mentioned the
public Resolver newResolver(boolean canWrite, ResolverSession resolverSession, Resolver systemResolver);
systemResolverparameter, but I'm a little lost as to why
canWritewas moved, though it hardly matters. However, one thing that does matter is that the javadoc for this method in the interface was not updated... something I should really chastise AM about.
Posted by
Paula G
at
Friday, October 15, 2004
0
Thursday, October 14, 2004
Paths
Quiet day today. I finished going over the modifications to the remote resolver code, and got it to the point of trying to build.
I spoke to SR about the
XAResource and where it will fit in. Not having used them much I thought that this was an object that would get used for managing the transaction, but SR is using it as a callback interface for each resolver. He hasn't implemented some of the finer details of distributed transactions, so for the moment he is happy for me to use the
DummyXAResource class, as this just logs everything called on it.
I finished the day with failures in compilation. The compiler is complaining that it can't resolve classes that I have explicitly imported. This means that the compiler classpath must be at fault. This is annoying, as it can take a long time to find with jar is needed to get it right. Normally it would not be an issue, but Kowari is built as a single jar which in turn contains a large collection of jar files. Finding class files in jars, within jars can be very time consuming.
Jars within jars are not a common site, but it has certainly been handy, particularly when including third-party libraries. A few years ago TA wrote the code for this. From my understanding, it unzips the jar file into the temporary directory, and then uses a class loader to pick up all the new jar files. It's a bit tricky internally, but from the user's perspective it's ideal. This is why Kowari does not need a class path, and can just be launched with:
java -jar kowari-1.0.x.jar
Posted by
Paula G
at
Thursday, October 14, 2004
0
Wednesday, October 13, 2004
Time and Resolvers
I'm trying to get to bed early, so I'll be brief.
I spent about an hour finishing off my timesheets today. It's amazing how slow Safari gets when it has to render as many widgets as the timesheet program creates. As a result, it is frustratingly slow to enter this data. So while waiting I killed a little time reading Slashdot.
The rest of the day was spent converting my old remote resolver code to the new SPI. Fortunately there appears to be very little which has changed.
My only concern is the method which requests a new transaction session. I'm not really sure of what to do with it, as no other methods use the
XAResource object that is returned. For the moment I'll be returning a
DummyXAResource (which was kindly supplied for me by SR) and using my own session objects. This will certainly work for tests, but then I will need to consider how the use these objects would impact on a resolver session.
Tomorrow I'll have to ask why we have a method that returns an
XAResource object, but no methods which accept one.
Java 1.5
I took a little time to explain to AM some of the changes to the Java language in the new release. I didn't go into some of the non-language features (such as instrumentation) as I've yet to fully explore them.
Generics are nice (especially as I like the type safety of templates in C++), though he claims that some people have been complaining about some deficiencies in their implementation. Personally, I see the advantages of the new system outweighing any corresponding problems.
Autoboxing, for-each loops, and var-args are all syntactic conveniences, so they are not really anything to get excited over. They will certainly be handy though. Similarly, static imports will be useful, but again they just help tidy up the code.
The two standout syntactic improvements are annotations and enumerations.
The annotation system is very nice. The Sun example which annotates methods which are to be tested really demonstrates the power of this. Once this takes hold, JUnit testing will become much more powerful, and can eliminate all sorts of string comparisons used in reflection. I can see this extending into the definitions of MBeans for JMX, and other sorts of Java beans as well.
But my favourite addition is the new enumeration type. This allows the removal of the need to use integer identifiers, provides type safety, provides automatic
toString methods for each element, and allows access to the entire range (and even subranges) of elements in the enumeration. It also allows for a user-defined constructor, and class specific methods to overload the behaviour of the type. It even permits the creation of and abstract method which each element implements to create custom behaviour. These are some very nice features, and will provide a significant saving in code when building up powerful enumerations types.
Unfortunately I won't be using much of this for some time. For a start, until Java 1.5 is also available on OSX then it can't be considered truly cross platform, and I won't be able to run it on my PowerBook anyway. Secondly, the wait until it is available on the Mac will be useful, as this should allow a significant enough period to have ironed out the bugs present in any x.0 release.
I didn't even have to mention these reasons to AM. I just said that there were two reasons I wouldn't be using it yet, and AM just named them without any input from me. Looks like I won't be the only one waiting then. I'm looking forward to it though.
Posted by
Paula G
at
Wednesday, October 13, 2004
0
Tuesday, October 12, 2004
Estimates
We had a discussion today about how long we expect each element of the changeover to take. Part of this process was discussing what each of us had learnt about the code in question, so we could determine the scope of the work. This let us describe what features each item may or may not have in the new system.
In order for
trans and
walk to work over multiple resolver backends, it will be necessary to perform these functions on each backend individually, and then perform the operation again on the full collection of results. In many cases the final operation would not find any similar nodes, and the result would simply be a union of each of the partial results, though if models from different sources referred to the same resource then this mechanism would find that as well.
The problem with performing a
trans or
walk over all collated results is that the partial results are in global space, making the operations expensive.
Given the difficulty and of the collation, and the rarity of common nodes across data sources which have transitive relationships, TJ decided that Kowari can do without it for the time being. At this stage we just need it running with all current features on the new codebase, so I agree with him here. Full transitivity can be performed later, as it is needed.
In the meantime I will be working on the distributed Node Type model, and on distributed queries. I've already started pulling the old distributed code into the new interface, and I should be working on that for the next few days.
Timesheets
A lot of today was spent updating my timesheets. I keep a time log in a text file, so it should be trivial to put these into the time keeping system. Unfortunately, the web pages used to input data have a lot of input fields. This makes each page painfully slow to render, even on a fast machine. Even scrolling can be slow. The result is a very slow process to enter trivial data.
I'd be better off just putting in each day's data as I finished the day, but when I'm about to leave I'm not normally interested in spending 5 extra minutes at work, especially when it is data that I should be able to enter in a matter of moments.
That said, I'll see if I can start putting the data in first thing in the morning. I've been getting in early for the last few days, so if I can keep it up I should be able to find the time.
MPhil
I saw Bob again today and reported on what I've been doing. Not a lot in terms of reading, which is something I should try to remedy next time.
He was interested that I've been looking more carefully at the documentation for OWL. This is something I should have already done, but who likes reading specifications? Bob agreed with me here. :-)
At least I ended up with a better knowledge of OWL. I've looked at some of my older posts, and have cringed at the naïvité of some of my statements.
While discussing some of the interesting restrictions I've found in the species of OWL, Bob suggested that I try and document them all, and pay particular attention to the ramifications of each difference. For instance, I can't see that there is a real decidability problem with cardinality, even though OWL DL restricts this to 0 or 1. The only reason for this restriction appears to be because description logic does not support counting, even though this is an easy operation in most systems.
Anyway, I'll be looking at these differences for the next few days. Once I have something done, I'll look to see who else has done this, and compare what we have found.
iTQL
Bob was also interested that iTQL has no RDF support in it. AN and I have been considering putting support in for some items such as collections, but to date we have been able to avoid it.
Now that I want to provide a mechanism to find descriptions in OWL, I'm starting to wonder where it fits. This should really become an OWL API, which is layered over the RDF API, but should this be reflected in iTQL? This would seem to break the separation of layers, but iTQL is currently the only user interface available. There is also the argument that we are building an ontology server, so it makes sense to have ontology support in the user interface.
For the moment we have only provided a set of complex, low level operations which will perform ontology operations when all applied together, but this does not yet make an ontology server. I suppose this makes me wonder if iTQL should start to support RDF and OWL, or if should be used as the building blocks for a higher level language which provides this functionality. However, this would mean writing a new language... and I really don't want to do that.
Posted by
Paula G
at
Tuesday, October 12, 2004
0
Monday, October 11, 2004
Estimates
Today started by trying to work out time estimates for the new "Resolver" work.
I still need to speak to TJ to see if we want
trans to be able to operate over models from different sources. If this is to be permitted then it would require tuples to be translated to and from global space, forcing a performance hit. On the other hand, perhaps it would be possible to keep the current code for
trans, to be executed only when models in the same data store are being queried.
The other area I looked at is how the special Node-Type model will work in the new system. The current model reflects the contents of a single string pool on a single server, a concept which does not translate well into the global system.
After much discussion with AN and SR, I've come to the conclusion that the query code which accesses the resolver interface will need to create the virtual model. Every node which comes in via the resolver interfaces is either available to the local string pools (including a temporary string pool which belongs to a query), or is returned in global format, and is then stored to the string pools during localisation. These string pools then contain all of the information needed for the types of all the nodes to be returned from the query. They will be missing much of the data pertaining to nodes which were not returned from the current query, but that is a benefit for the efficiency of the system.
The only shortcoming of this approach is that it will not be possible to query for all of the strings in a system, but this was never a requirement anyway. Until now, this feature has just been a useful debugging tool.
Hybrid Tuples
I spent some hours with TJ trying to find the problems associated with the Node-Types test. We have been able to show that there appears to be no problems when a moderate amount of data has been inserted into the database. The problems only manifest when a large amount of data is present.
Selecting all literals from the system has shown that all of the expected data is present, which means that something is going wrong in the join code. More specifically, the requirement of a large amount of data means that the problem stems from a join against a file-backed
HybridTuples object. This is because
HybridTuples is specifically designed to change from a memory-only mode to a file-based mode once a certain quantity of data has been detected.
I took AM through the code that I am using to sort and join the literals with, and he said that it all looked OK at first glance. It was at this point that I discovered that my code had been changed. It used to select the doubles, dates and strings from the string pool, sort and then append them all. Now it gets two groups called "UNTYPED_LITERAL" and "TYPED_LITERAL". Typing this just now I realised that DM changed this as a consequence of the new string pool. This is because the string pool is now capable of storing many more types than were previously available.
With sorting and appending the literals checked, it would seem that it is the join that is at fault. AM does not know where the problem might be, but he concedes that the code is not well tested, as it is difficult to create the correct conditions for it.
In the meantime, I have been trying to help AM by creating a smaller dataset to demonstrate this problem. We tried changing the size that
HybridTuples switches from memory to disk, but the minimum allowable size seems to be an 8kB block. This is larger than the data that many of our test string pools return.
I thought to load a WordNet backup file, and then output it to RDF-XML where I could trim it to a large, but manageable size. Unfortunately, the write operation died with a
NullPointerException. I'm not really surprised, given the size of the data, but I should report it as a bug in the Kowari system.
A couple of attempts with the N3 writer also received the same error, but I decided that it might be due to a problem with the state of the system. After a restart I was able to dump the whole lot to N3. I'll try and truncate this in the morning.
Excludes
My first item of the morning was a discussion with AN on the
excludes operation. It turns out that he has recently implemented a new semantic for it. When used in a query with an empty select clause it changes its meaning. It now returns
true if the statement does not exist in the database, and
false if it does. While I don't like the syntax, the effect is useful.
If this is applied in a subquery, it provides the function of filtering out all statements which do not fit a requirement. This is essentially the difference operator I have needed. It is not a full difference operation, as it will not work on complex constraint operations in the subquery, but to date I have not needed that complete functionality. Besides, there may be some way to have a complex operation like that expressed in the outer query with the results passed in to the subquery for removal.
Pleased as I am that the operation is now available to me, I still have a couple of concerns with it. To start with, I dislike changing the semantics of a keyword like this. Amusingly, AN suggested changing the name back to "not", as the new usage has a meaning with a closer semantic to that word. I've realised lately that the word "not" has numerous semantics when applied to databases (every developer has a slightly different idea of what dataset should be returned), so using a word of "elastic meaning" in a place which can change semantic based on context seems appropriate.
I'm also uncomfortable that this operation is done as a subquery. While it works, it has two drawbacks. The first is the complexity of the resulting query. It is a simpler construct than many I've seen, but it is still ugly and likely to land users in trouble. The second is the fact that the query layer must be traversed with each iterative execution of the subquery. This is made especially bad as the data gets globalised and re-localised in the process. The result is an extremely inefficient piece of code. If it were implemented as a difference operator instead, then all of the operations could be performed on localised tuples. This is about as fast as it gets, so it offers real scope for improvement if we need it.
In the meantime, it works, so I'll work with what we have. If it proves to be too slow, or the syntax proves too troublesome, then we know we can fix it easily.
Posted by
Paula G
at
Monday, October 11, 2004
0
Friday, October 08, 2004
Directions
One of the big plans for Kowari is to introduce the "Resolver" SPI. This is a set of interfaces which is used to wrap any datasource to allow it to be a backend for the Kowari query engine.
Using appropriate implementations of this interface, it will be possible to perform queries on all sorts of different datasources just as we currently perform queries on the Kowari datastore. Some obvious example datasources are MS Exchange servers, Oracle databases, and RDF-XML files. With Kowari's ability for a query to access multiple models at the same time, the Resolver SPI will allow for very powerful realtime data access.
To make this work, the current query code has to be split up from the Kowari datastore, to channel all access through this new interface. This will make the Kowari datastore "just another" backend that can be plugged in. The main advantage of the Kowari datastore over any other is that it should be the fastest possible backend for the kind of data that the resolver interface uses, and it will support all possible features. Other interfaces will not necessarily be capable of supporting every feature. For instance, an RDF-XML file will require access to be restricted to read-only.
Changes like this are far reaching, so we need to momentarily concentrate all our efforts on these modifications. If we worked on anything else in parallel, it would most likely need to be repeated to work with the new changes.
With this in mind, we had a meeting today about the different subsystems which will be affected by the changes. After determining all the major sections, we each picked up a few, and will spend the next couple of days working out just what is involved to make the changes. This should give us enough information to estimate how long it will take.
Node Types
Late in the day TJ discovered that one of the machines did not run the full suite of JXUnit tests correctly. The failing test turned out to be the search for literals in the node type tests.
The test finds the one string and one number found in the Camera ontology. For some reason, the test fails by only returning the number. As with the earlier problems, this only occurs when the full WordNet database is loaded. This indicates that there is some problem with the use or implementation of the
HybridTuples class.
There are two places this could go wrong. Either the string is not being returned by the node-type model, or the result is not being joined against correctly. This is easy to determine, as it is possible to just return the literals and not join the result against anything. If both the string and the number are in there, then the problem is not with the node-type data being returned, but with how it gets joined to the other data.
Of course, with WordNet loaded, the result of all literals is huge. Because of this, the appropriate method of query is via the command line interface, with the results being streamed to a "greppable" file. Unfortunately, at this point the network had to go down due to a UPS replacement for the file server (it started beeping at lunch time). Given how late it was on a Friday, we took that as our cue to leave.
Engineering
At lunch time I went along to UQ as one of the staff members in ITEE was retiring, and they were putting on a free lunch. This gentleman had shown remarkable patience with me when I was finishing my first undergraduate degree, and I've always been particularly grateful to him for it. Besides, who am I to turn down free food? (I skipped the beer though, as I had to return to work) :-(
While there I met one of my old lecturers who asked about my current study. At one point in the conversation he asked when I graduated, and when I responded he made the comment, "Oh, that was before they cut the course." This then led to a conversation about the engineering degree at UQ.
First a little background...
When I went through we had a lot of subjects, and they were really hard. They were harder and required significantly more work than any subjects in my physics degree. Engineering decreed a minimum of 32 hours a week (I forget what that translates to in credit points).
I don't know about the other departments, but when I was first studying I was aware that Electrical Engineering was in a quandary about what to teach the students. Institutions such as IEAust, the IEEE and the IEE lay down a minimum set of requirements for a degree to follow in order to be recognised.
Operating in opposition to this, the university set out a time period for the degree (4 years) and a maximum number of credit points that a student could enroll in. With the restrictions imposed by the university, and the requirements imposed by the institutions, the department created a series of subjects with an enourmous amount of information crammed into them.
There were also further constraints imposed, as mathematics, physics and computing subjects had to be provided by the Science faculty, but these subjects were not as "crammed" as the engineering subjects. Actually, I used to benefit from these subjects, as they did not require much work in comparison to engineering subjects and they allowed me to relax a little.
While Australian universities have been offering fast-tracked engineering degrees in 4 years, other countries take a different approach, taking 7 or more years. This is probably of benefit, as the students would retain more information, and fewer of them would burn out.
Back to more recent times...
In the late 90's, UQ decided that they wanted continuity among all of their courses. A Commerce degree had a different workload to an Arts degree, and this led to differing costs for HECS. Since many faculties did not support an extra workload, faculties such as Engineering were told to cut the work requirements of the students. As a consequence, the hours required of students dropped from 32 hours a week, to 20!
The natural consequence of this, was that subjects had to be dropped. Entire areas of knowledge are not taught to students. Control theory (a difficult topic) has proven to be one of the most valuable subjects I was taught, and yet it is no longer available to students.
If you can't learn these topics at university, then where can you learn them? Research degrees don't cover them. One solution proposed has been to put this information into a coursework Masters degree. This is certainly the approach taken in the US and UK (hence my previous post referring to Masters degrees from those countries not equating to the same qualification here). But this then means that undergraduate students at UQ are not getting the degree they thought they studied and paid for.
My question is now why the IEAust, IEEE and IEE have allowed this. They can't prevent UQ from changing its courses, but they can certainly drop their accreditation of UQ degrees. Having a degree from an unaccredited university is useless.
I'm at a loss as to why none of the institutions have acted on this. There are requirements for an electrical engineering degree, and UQ has suddenly stopped meeting them. At this time the IEE accepts the accreditations provided by IEAust, and I'm guessing that the IEEE do the same. So why haven't IEAust done something about it? Obviously it is political decision (how could they take away accreditation from one of Australia's top "Sandstone" universities), but in the meantime students are missing out, and industry is not getting the professionals it requires.
I will be raising this at the next IEE committee meeting. It has been raised in the past, but at the time a "wait and see" approach was taken. However, last time it was suggested that if things were bad enough then it would be possible to stop accepting IEAust accreditation. Perhaps that would make the IEAust wake up and do their job.
Posted by
Paula G
at
Friday, October 08, 2004
0
Thursday, October 07, 2004
All Different
I started on
owl:allDifferent today, but was unable to finish.
This predicate uses
owl:distinctMembers, which is a predicate used to refer to a collection. When this type of collection is turned into statements it becomes a linked list (actually, a cons list). This means that any query which needs the contents of this list has to be capable of walking down the list. Enter the
walk() function.
Walk() allows the query to specify a starting point (the head of the list) and the predicate to traverse (the
rdf:rest predicate). The starting point is fixed, so it will either be a set value, or else it will be a variable holding a single value. The way to get the variable set to a single value is to pass it in to a subquery and perform the
walk in there.
In the case of a collection, the starting point is a blank node, so there is no way to use a set starting point, and it must be passed in as a variable. Even if lists did start with a named node, there are an unknown number of lists, so they all have to be found, and the heads of each list put into a variable. The only way to reach each of these values one at a time, is again with a subquery.
Unfortunately, when I tried to use a subquery there was an error stating that a variable could not be used as the starting point of a walk. The iTQL parsing code expects to see a fixed value for the starting point, even though the implementing code is able to deal with a variable (of a single value). Theoretically this is a simple fix, and AN said he would do this for me.
Intersections
One of the last predicates to deal with is
owl:IntersectionOf. There are several others, but not in OWL Lite. Also, OWL Lite only uses
owl:IntersectionOf for restrictions, so it should not be too difficult to work with.
I am still convinced that we need a simple method in iTQL to check if a node matches a description, as described by the OWL abstract syntax. I had an attempt at doing the iTQL for each of these, and the results did not satisfy me. In the case of
owl:hasValue it is easy to perform a query for nodes which meet a restriction. However, for other restrictions, such as cardinality, it gets very difficult to check if a class instance meets the restriction, and the result is not of an easily usable form. For instance, finding if nodes of a type which is an intersection of a class and a cardinality restriction, will return a node, along with a tuples which will be empty if it does not meet the restriction, and will have a line of data if the restriction is met.
Once AN saw the query required for this, he agreed that just because we could do this in iTQL, the complexity demonstrated that we shouldn't. The only issue now is how to do it.
AN favoured an approach where we use a rules engine, and create statements which indicate if an instance meets a description. Performing this iteratively expands what can be determined. While I approve of anything that will work (and this will), I prefer the approach of developing a recursive iTQL function which can do it for us. If we like, this function could even create new statements on the way, to help with future processing.
I prefer this for two reasons. The first is that it avoids going through the iTQL layer over and over again. The second is that it makes iTQL statements far more powerful, and does not require the use of a rules engine in order to run.
The most significant detriment for a function that determines description is that it introduces knowledge of OWL into iTQL. iTQL is an RDF query language. It provides the facilities for performing OWL queries, but it is not itself OWL aware. A function like this would be easy to implement, and very valuable, but it would step outside of the domain of iTQL, which does make me a little uncomfortable.
Back To Intersections
I also had a question on what can be inferred from intersections in OWL Lite. Often, a class will describe that it is an intersection of another class and an intersection. This is demonstrated in the example camera ontology:
Instances of the "Large-Format" class must be of type "Camera", and must have something of type "BodyWithNonAdjustableShutterSpeed" for its "body" property.Instances of the "Large-Format" class must be of type "Camera", and must have something of type "BodyWithNonAdjustableShutterSpeed" for its "body" property.
<owl:Class rdf:
<owl:intersectionOf rdf:
<owl:Class rdf:
<owl:Restriction>
<owl:onProperty rdf:
<owl:allValuesFrom rdf:
</owl:Restriction>
</owl:intersectionOf>
</owl:Class>
Obviously, to check if an instance of "Large-Format" is consistent it must also be of the "Camera" class, and if it has any "body" properties then these must all be instances of the "BodyWithNonAdjustableShutterSpeed" class. My question is this: If such an instance were found, then is it legal to infer that it is of type "Large-Format"? I'm guessing not.
Alternatively, the intersection is permitted without a class declaration:.
<owl:intersectionOf rdf:
<owl:Class rdf:
<owl:Restriction>
<owl:onProperty rdf:
<owl:allValuesFrom rdf:
</owl:Restriction>
</owl:intersectionOf>
Posted by
Paula G
at
Thursday, October 07, 2004
0
Wednesday, October 06, 2004
Bowtie Operator
I've just had a look at yesterday's blog using several different browsers on the Mac. IE, Firefox and Camino, all show unicode character 8904 (⋈) as a question mark (?). Safari was the only browser that rendered this unicode character properly. I have no idea what the rendering will look like on the various apps in Windows or Linux.
I keep IE only for those sites which insist upon it (fortunately there aren't many of those). I use Firefox for blog entries, as it provides the full XUL interface that Blogspot offers. Camino seems to offer all the advantages of Firefox, only with the lovely native Aqua widgets. Unfortunately, it does not render the blogger editor correctly, so I rarely use it.
My browser of choice is Safari. This is because it is fast, attractive, and 99.9% compatible with most sites. Importantly, it is fully integrated with the other OSX apps. This is the feature which has me using the OSX Mail program instead of Firebird (even though I feel that Firebird has a better interface, I just can't believe that it does not interface directly into the OSX address book).
It now seems that Safari is the only browser which can render certain unicode characters. This only encourages me to keep using it.
owl:differentFrom
Some of today was spent on
owl:differentFrom. This was much easier than
owl:sameAs as there is not a lot of entailment that can be made on this predicate.
While documenting the predicate I wanted to keep saying "different to" instead of "different from", so I thought I'd look it up. It seems that "different from" is the common expression in the US, while "different to" is the more common British (and hence, Australian) usage. Since most people reading this will be in the States I opted to stick to "different from". Besides, it matched the OWL predicate as well.
The only consistency check for this predicate was one that had already been done for
owl:sameAs. While I was tempted to simply refer to the other section, I wrote the example out again, along with an explanation which was more appropriate for the context.
owl:allDifferent
I spent some time looking at the
owl:allDifferent predicate, with some frustrating results. This predicate uses the very lists I hate in RDF. As a result, the only way to select each item in the list is with a
walk(...) operator.
Unfortunately, the
walk(...) operator requires a fixed end to start from, but the end in this case is a blank node. The only way to "fix" this end is to find it in an outer query and then pass it down via a variable into an inner query which performs the walk. This would seem OK, only the
walk function does not accept a variable as its starting point (even one with a fixed value as would happen in a subquery). Late in the day AN said that he could fix this relatively easily, so I am hoping he can have it done early tomorrow.
Descriptions
I had a brief work with AN about the need to find out if a node meets a description as defined by the OWL documentation. He is not keen on the idea of a new function like I proposed, so we discussed some options.
The most prevalent option is to let the rules engine do the work here. The problem is that there are no OWL constructs which can describe that an object meets the conditions of an
owl:IntersectionOf or many of the other constructs which can make up a description. AN pointed out that we can use an internally defined predicate to describe this type of thing, which would solve the problem.
If we took this route then we are still left with a couple of options. The first is to define the appropriate rules in the rules engine which can generate these statements. The engine would then be able to derive the entire set of description statements for all the objects in the system. While this would work, I believe that it would be slow, particularly as it would need to traverse the entire iTQL layer for each invocation of the rules.
The second option is to implement the function as I initially proposed it, but to insert the description statements as each recursion returns. This would then permit subsequent invocations to use these statements instead of the original algorithm, providing a significantly reduced workload. Other unrelated operations may also be able to take advantage of knowing which objects meet which descriptions.
AN also mentioned another option which involved performing a lot of intermediate operations with the use of iTQL. While this would obviously work, I do not like the idea of re-entrant iTQL. Besides, it suffers from the inefficiencies of the iTQL that I have already mentioned.
Whatever the outcome, we really need to see some way of letting an inferencing query determine if an object is of a given "type", without that query needing to do any difficult work in that regard. Many of the required inferencing queries are complex enough already without adding to their difficulty.
Another advantage of checking for description compatibility in some internal fashion is that it allows several steps of a rules engine to be skipped. With a check like this it is possible to immediately determine that an instance of a class is also an instance of a superclass (and all superclasses above that). With the purely rules-based approach this information will only be available after executing a number of rules, each of which involves another transition through the entire iTQL layer. While that approach will work, it would be significantly slower than simply giving a rule immediate access to all types for an object.
I suppose we still need to discuss this before any decisions can be made.
Posted by
Paula G
at
Wednesday, October 06, 2004
0
Tuesday, October 05, 2004
owl:sameAs
The majority of today was spent building examples for the
owl:sameAs predicate, and documenting the results.
This ended up being a bigger task than most other predicates. Entailment has to reflect symmetry, transitivity, and reflexivity. It also has to copy every statement which refers to or is referred by a node that is the
owl:sameAs another.
Then there is consistency checking. Instances of classes can only be the same as other instances, classes can only be the same as classes, and predicates can only be the same as predicates. Objects cannot be the same as objects which they are also different from. These all get expressed as separate queries, and all required example data to be created.
Adding to the complexity of these queries is the fact that they do not fall within a rules engine. This then means that there is no set of pre-inferred statements to connect Properties with each of its subtypes. As a result, the statements which constrain on having an
<rdf:type> of
<rdf:Property> also constrain on
<owl:ObjectProperty>,
<owl:DatatypeProperty>,
<owl:FunctionalProperty>,
<owl:InverseFunctionalProperty>,
<owl:TransitiveProperty> and
<owl:SymmetricProperty>.
I created the sample code while putting together the documentation. This had its advantages and problems. The advantage was that I was able to proceed with each item as I arrived at it. The disadvantage was that new example data caused modifications to the example data for previously completed items. This meant that I spent a lot of time re-issuing queries, which obviously took some time.
Difference Operator
On Monday night I spoke to SR about the operations fundamental to a difference operator. AN had been insisting that it could be done using a complement operator. However, since a simple inner join against a complement on a constraint does not accomplish this I decided to work out why.
It didn't take long to realise that the complement against the triplestore was not providing what we needed. I discussed some of this with SR, but his questioning demanded specifics. This was probably a good thing, as it forced me to make sure that I had everything right.
The inner join operator appears as: ⋈
This is the "bowtie" operator. I've included it here with the HTML of &x8904;
If the operator does not show up correctly... then sorry. You'll just have to bear with me. :-)
Take A as a set of rows n columns wide. B is a set of rows m columns wide. The columns of A are:
a1 a2 a3 ... an
b1 b2 b3 ... bm
An inner join is a combination of 3 operations.
If I do an operation of A ⋈ B, then this can be broken down into:
A x B (cartesian product)
σ(A x B) (selection)
π(σ(A x B)) (projection)
So:
A ⋈ B = π(σ(A x B))
The cartesian product ends up with a new set (which for convenience I'll call C). So:
C = A x B
The column names in C are:
c1 c2 c3 ... cn cn+1 cn+2 ... cn+m
Which corresponds to:
ci ≡ ai (i < n)
cj+n ≡ bj (j < m)
The σ(...) operation selects those rows where two or more columns are equal. ie. For one column:
σCi=Cj(C)
The projection operation removes duplicate columns (for a selection of σCi=Cj(...) then this removes either ci or cj), and also removes columns where the values are fixed. So the total number of columns at the end t, is given by:
t < n + m
Since the selection operation is guaranteed to provide at least one duplicate column, this can be further restricted:
t < n + m - 1
The final projection operation is a mathematical convenience, as it does not change or add any information to the result. All it does is remove unnecessary information. As such, it is valid to consider the inner join without the projection operation for algebraic purposes.
This leaves us with a modified description of an inner join ⋈ which is defined as:
A ⋈ B = σ(A x B)
From here on, I'll use this definition of an inner join instead of the original. This is safe as since the missing π(...) operator simply removes redundant data that is not usually required.
Consider the universal sets of n and m columns respectively: Un Um.
By definition:
A ⊆ Un
B ⊆ Um
The cartesian product of A and B will then be in the space defined by the cartesian product of Un and Um.
A x B ⊆ Un x Um
If we consider a restricted universe of U'n and U'm, where:
A ⊆ U'n
B ⊆ U'm
Then the product of A and B will also fall into this space:
A x B ⊆ U'n x U'm
The selection operator does not remove any columns (ie. does not modify the space). Also, it provides a restriction on a set, meaning:
σ(A x B) ⊆ A x B ⊆ U'n x U'm
Now since we have defined ⋈ as:
A ⋈ B = σ(A x B)
This means:
A ⋈ B ⊆ U'n x U'm
The definition of a complement S with respect to the universal set is:
S ∪ ¬S = U
Let us define the restricted complement ¬' as being with respect to the restricted universal set:
S ∪ ¬'S = U'
With this definition we can say:
(A ⋈ B) ∪ ¬'(A ⋈ B) ⊆ U'n x U'm
It is important to note that ¬'(A ⋈ B) appears in the same space as U'n x U'm.
Finally, we can define a projection operator which selects out the A columns from the (A x B) cartesian product. Remember that (A x B) has columns:
c1 c2 c3 ... cn cn+1 cn+2 ... cn+m
So we define πn(S) as reducing the space of S from n + m columns down to n columns. This means that if S has n+m columns of:
c1 c2 c3 ... cn cn+1 cn+2 ... cn+m
And S' has n columns of:
c1 c2 c3 ... cn
Then:
S' = πn(S)
This operator loses information from the columns cn+1 cn+2 ... cn+m. The result of this operator is within the same space as the original set A.
When applying this to a set in the space of (U'n x U'm) we get the columns which come from U'n. As defined earlier, A ⊆ U'n. Applying this operator to the cartesian product of A and B:
πn(A x B) = A
Since:
σ(A x B) ⊆ A x B
σ(A x B) = A ⋈ B
Then:
A ⋈ B ⊆ A x B
πn(A ⋈ B) ⊆ πn(A x B)
πn(A ⋈ B) ⊆ A
This means there are elements which are in A, but are not in πn(A ⋈ B). So if we take ALL of the elements which are not in πn(A ⋈ B) we will find some which are in A, and some which are not. In other words:
πn(¬'(A ⋈ B)) ∩ A ≠ ∅
Let us refer to πn(A ⋈ B) with the set D. To remove from A those elements which appear in both D and A we can write:
A ∩ πn(¬'(A ⋈ B))
If we look at the construction of this, we will see that the result is a subset of A (since A ∩ S ⊆ A), and that all rows which the selection operator which would have matched A to B have been removed.
To confirm this, we can looking at the opposite construction:
A ∩ πn(A ⋈ B) = πn(A ⋈ B)
Since:
πn(A ⋈ B) ⊆ A
Remember that by definition:
¬'(A ⋈ B) ∪ (A ⋈ B) = U'n x U'm
¬'(A ⋈ B) ∩ (A ⋈ B) = ∅
Which leads to:
πn(¬'(A ⋈ B) ∪ (A ⋈ B)) = πn(U'n x U'm)
πn(¬'(A ⋈ B)) ∪ πn((A ⋈ B)) = U'n
(This step relies on the distributivity of πn(...). This is easily provable, but I haven't done it here).
Similarly:
πn(¬'(A ⋈ B)) ∩ πn((A ⋈ B)) = ∅
Since we know that (A ⋈ B) is the inner join, then this defines all the rows where A and B match according to the selection operator. So πn((A ⋈ B)) defines all those rows from A which the selection operator would match against B. Since:
πn(¬'(A ⋈ B)) ∩ πn((A ⋈ B)) = ∅
Then we know that πn(¬'(A ⋈ B)) contains all the rows from A which did not match the selection operator, plus all the rows in U'n which are not in A.
This then defines our difference operator:
A - B = A ∩ πn(¬'(A ⋈ B))
Complement Operator
While the above definition is all quite sound, it assumes that the result of ¬'(...) is readily available. However, this complement is based on one or more cartesian products of the entire statement store against itself. When applied to database of decent size, this result can easily get too large to work with. Instead, a difference can be easily worked out by removing rows from the first set which match rows on the second set. This will be at least as fast and memory efficient as the current inner join.
So why have this pseudo-proof description if the operations described are not going to be implemented? Well I have three reasons. The first is that it provides a sound mathematical basis for the operation, and this helps us to recognise problems from the outset. The second reason is that it provides a full justification of the operation for anyone skeptical about its operation, such as SR. The final reason was to determine just what it is about the complement achieved with the excludes() operator that was not working for us.
Posted by
Paula G
at
Tuesday, October 05, 2004
0
Monday, October 04, 2004
Documentation
Today was just about the same as Friday. I spent the time putting together example OWL code, writing entailment queries in iTQL, and then documenting the whole thing. Most of what I got done today revolved around
owl:equivalentClass and
owl:equivalentProperty.
Strangely, the queries I had originally written for this had the wrong OWL constructs in them. I wasn't paying attention during the conversion, but Volz had used several constructs which did not belong to OWL, but rather to DAML. For instance, Volz described
sameClassAs and
samePropertyAs, when these are from the DAML namespace. Fortunately, this is a trivial problem, as these declarations have a direct mapping into OWL, translating into
owl:equivalentClass and
owl:equivalentProperty.
Set Operations
While considering how to perform an inverse on a constraint, ala the complement operator except, I realised what my problem has been all along.
I keep making the mistake of describing an inner join as a set intersection. While analogous, this is not quite right. A constraint defines a set of triples, and if this is joined via a variable to another set of triples, then the result can be up to 6 values wide (not that there is a reason to use more than 5, since 6 columns means an unbounded cartesian product).
In fact, the solution is all 6 columns of the cartesian product with a few transformations applied to it. For a start, the sigma operation (select) will define one or more columns to be equal. These repeated columns are redundant and are consequently dropped, though strictly speaking, they do still exist. Similarly, any columns of fixed values are redundant, so they are also dropped. This dropping of columns is a projection of the 6 column result down to just the columns with the useful information in them, ie. the variables.
So the sigma operator defines the elements to keep from the cartesian product, giving us the result, and the final projection just removes unneeded columns. Note that mathematically these columns do exist, they just contain known data so there is no need to keep them.
The point of this is that the inputs of the join are a pair of sets of triples, but the output is a set of 6-tuples, based on a cartesian product. This is most definitely not an intersection operation.
We had been considering a "difference" operation to be an intersection with a complement of a set, but now I've come to realise that this is not the case. Instead, it would be implemented by the normal selection operation:
Where i and j are the selection columns.Where i and j are the selection columns.
A - B = { x: xi ∈ A ∧ xj ∉ B }
I think that it might also be possible to define a ¬B based on the cartesian product of the entire dataset with itself, but I'm a bit tired to check this properly. The idea is that we need the inverse of B, not in respect to the space that it occurs in, but in the space of the result. Since the result of an inner join is that of the cartesian product, this makes the complement of B also appear in that space.
Posted by
Paula G
at
Monday, October 04, 2004
0
Friday, October 01, 2004
Documents and More Documents
As expected, today was spent documenting, only it was a little more intense. I provided examples for
owl:SymmetricProperty,
owl:TransitiveProperty,
owl:FunctionalProperty and
owl:InverseFunctionalProperty, as well as testing each, and documenting them all.
If nothing else, I got more practice writing RDF-XML, which is something I sorely need. During this process, and with a little help from SR, I discovered some of the answers to the questions I had about namespaces yesterday.
When I was using
xmlns in a tag, it was being lost by the fact that the tag was already declared as being in the
rdf namespace. So it was doing nothing for me.
I also discovered that using
rdf:about to identify nodes picks up the base namespace of the document,
xml:base, and simply tacks the name onto the end. For this reason, all the names I use in this way have to be prepended by #. On the other hand, using
rdf:ID tells the parser that this is a fragment, so it automatically uses a # to separate it from the namespace.
The
rdf:resource attribute seems to interact with namespaces similarly to
rdf:about, so it is necessary to either fully qualify its domain, or else prepend with a #.
Still, it can be confusing with so many ways to represent the same thing, and with me only having a half of an idea of how it works. For instance, why would I want to choose:
Instead of:Instead of:
<shutter-speed rdf:
There's probably some important difference, but for the examples I'm doing it doesn't seem to make a difference.There's probably some important difference, but for the examples I'm doing it doesn't seem to make a difference.
<rdf:Description rdf:
On the subject of important differences, I recall a conversation explaining something about the differences between
rdf:aboutand
rdf:ID, but I haven't been able to find it (not that I've looked too hard yet). From memory, there is something more significant than namespaces. It might have had something to do with referencing things outside of the current domain. This might be why
rdf:aboutis preferred for declarations, as their visibility may be affected.
Sets and Differences
I think I have a better idea of what we need in order to do a difference now, and why NOT (or SELECT) wasn't working for us. I don't have time just now, but I'll try and explain it on Monday night.
In the meantime, AN has been able to send me his 3 level query which he thinks will produce an effective difference query. I'm not convinced, but subqueries can be complicated little beasts, so I won't judge it until I've had time to examine it properly. Either way, I still think we need a proper difference operation. If it is not possible to do a difference with multiple levels of subquery, then we need to implement it just for the sake of completeness. If it is possible, then we still need to implement it for the sake of efficiency, and also because the queries end up so complicated that they are almost impossible to understand.
Posted by
Paula G
at
Friday, October 01, 2004
0
| http://gearon.blogspot.com/2004_10_01_archive.html | CC-MAIN-2017-09 | refinedweb | 13,629 | 68.7 |
Let’s see how to get the size of a directory in python.
How to get the size of a Directory in Python:
The Python
os package gives the flexibility to access the file/directory structure so that we can
walk through each file in a directory and get the size of them.
os.walk()function takes the directory path as a parameter and it walks through the directory.
os.path.getsize()function takes the file name as a parameter and gives the size of it.
So the sum of each file size in a directory gives us the total size of the directory.
get_directory_size.py
import os import sys if __name__ == '__main__': try: directory = input("Enter the directory which you want to get the size : ") except IndexError: sys.exit("Invalid directory") dir_size = 0 readable_sizes = {'B': 1, 'KB': float(1) / 1024, 'MB': float(1) / (1024 * 1024), 'GB': float(1) / (1024 * 1024 * 1024) } for (path, dirs, files) in os.walk(directory): for file in files: filename = os.path.join(path, file) dir_size += os.path.getsize(filename) readable_sizes_list = [str(round(readable_sizes[key] * dir_size, 2)) + " " + key for key in readable_sizes] if dir_size == 0: print("File Empty") else: for units in sorted(readable_sizes_list)[::-1]: print("Folder Size: " + units)
Output:
Enter the directory which you want to get the size : C:\Users\cgoka\Desktop\mylibs Folder Size: 733160003 B Folder Size: 715976.57 KB Folder Size: 699.2 MB Folder Size: 0.68 GB
References:
Happy Learning 🙂 | https://www.onlinetutorialspoint.com/python/how-to-get-the-size-of-a-directory-in-python.html | CC-MAIN-2021-31 | refinedweb | 242 | 55.44 |
Microsoft's official enterprise support blog for AD DS and more
at a later date. The primary author of this document is Mahesh Unnikrishnan, a Senior Program Manager who works on the DFSR, DFSN, and NFS product development teams. You can find other articles by Mahesh at the MS Storage Team blog:.
The purpose of this article is to clarify exactly which scenarios are supported for user data profiles when used with DFSR, DFSN, FR, CSC, RUP, and HF. It also provides explanation around why the unsupported scenarios should not be used. When you finish reading this article I recommend reviewing
Update 4/15/2011 - the DFSR development team created a matching KB for this -]
Consider the following illustrative scenario. Contoso Corporation has two offices – a main office in New York and a branch office in London. The London office is a smaller office and does not have dedicated IT staff on site. Therefore, data generated at the London office is replicated over the WAN link to the New York office for backup. not configured. Therefore, users will not be automatically redirected to the central file server if the London file server is unavailable.
As illustrated by the diagram above, there is a file server hosting home folders and user profile data for all employees in Contoso’s London branch office. The home folder and user profile data is replicated using DFS Replication from the London file server to the central file server in the New York office. This data is backed up using backup software like Microsoft’s System Center Data Protection Manager (DPM) at the New York office.
Note that in this scenario, all user initiated modifications occur on the London file server. This holds true for both user profile data and the data stored in users’ home folders. The replica in the New York office is only for backup purposes and is not being actively modified or accessed by users.
There are a few variants of this deployment scenario, depending on whether a DFS Namespace is configured. Following sub-sections detail these deployment variants and specify which of these variants are supported.
[Supported Scenario]
Scenario highlights:
Specifics:
This is a variation of the above scenario, with the only difference being that DFS Namespaces is set up to create a unified namespace across all shares exported by the branch office file server. However, in this scenario, all namespace links must have only one target1 - the share hosted by the branch office file server.
1 Deployment scenarios where namespace links have multiple targets are discussed later in this document.
Both variants of this deployment scenario are supported. The key point to remember for this deployment scenario is that only one copy of the data is actively modified and used by client computers, thereby avoiding issues caused by replication latencies and users accessing potentially stale data from the file server in the main office (which may not be in sync).
The following use-cases will work in this deployment scenario:
In this scenario, the following technologies are supported and will work:
Designing for high availability
DFS Replication in Windows Server 2008 R2 includes the ability to add a failover cluster as a member of a replication group. To do so, refer to the TechNet article ‘Add a Failover Cluster to a Replication Group’. Offline files and Roaming User Profiles can also be configured against a share hosted on a Windows failover cluster.
For the above mentioned deployment scenarios, the branch office file server may be deployed on a failover cluster to increase availability. This ensures that the branch office file server is resilient to hardware and software related outages affecting individual cluster nodes and is able to provide highly available file services to users in the branch office.
Consider the same scenario described above with a few differences. Contoso Corporation has two offices – a main office in New York and a branch office in London. configured in order to enable users to be directed to the replica closest to their current location. Therefore, namespace links have multiple targets – the file server in the branch as well as the central file server. Optionally, the namespace may be configured to prefer issuing referrals to shares hosted by the branch office file server by ordering referrals based on target priority.
The replica in the central hub/main site may optionally be configured to be a read-only DFS replicated folder.
[Unsupported Scenario]
What can go wrong?
As a result of the behavior described above the following consequences may be observed:
This deployment variant helps avoid the problems caused by DFS Namespaces failing over due to transient network glitches or when it encounters specific SMB error codes while accessing data. This is because the referral to the share hosted on the central file server is normally disabled.
However, the important thing to note is that the side-effects of replication latencies are still unavoidable. Therefore, if the data on the central file server is stale (i.e. replication has not yet completed), it is possible to encounter the same problems described in the ‘What can go wrong?’ section above. Before, enabling the referral to the central file server, the administrator may need to verify the status of replication to ensure that the adverse effects of data loss or roaming profile corruption are contained.
Both variants of this deployment scenario (2A and 2B) are not supported. The following deployment use-cases will not work:
In this scenario, the following technologies are not supported and will not work as expected:
Great article! Thanks!
Agreed, great article! Although this focuses on "User Profile Data" the same should be said for any multi-user edited documents that don't have a file lock in play (hence Ned's link). Especially those that only write their changes on file exit, like Microsoft Access MDB files for example.
Out of curiosity what is the Microsoft view or suggestion to create server failover for a DR scenario in these cases? For other DFS configurations, we do have a global high/low priority setup for our links. Is there any suggested way to acomplish a DR configuration for DFS-N, without manually needing to change links or use a 3rd party?
Thanks!
I typically find that the planning around DR site 'failover' times are unreasonably aggressive. I.e. customers want to have a DR site come online instantly if their main office servers were all destroyed in a fell swoop.
Except that generally speaking a disaster of that magnitude:
1. Is likely to have destroyed most users entry point into the DR site as well.
2. Is likely to have destroyed most of the users own access points (main corporate network).
The above means that instant failover is unlikely and that the business will not mind a little downtime if the entire datacenter suffered a meteor strike. Which by the way killed all your IT staff, as they are always colocated, so your DR site isn't going to be managed.
So when you gear down to a more common "not disaster" - such as a branch site who's main file server has gone offline due to hardware failure - a few seconds or minutes for an admin to enable a disabled DFS link target or to switch an existing one to another target is usually pretty acceptable.
If data is mostly static and users are never going to have a reason to access a DFSN link target you might accept the aupportability risk and simply have a secondary link online that is extremely high site costed and and unlikely to be hit. But we have a few cases every day of customers accessing the wrong DFSN link targets due to many factors outside their control, such as network conditions and 3rd party WAN appliance issues. You get back into all the risks described in the article at that point.
@Ned
Thanks for the feedback, and that is pretty much what I assumed was the case. Just wanted to make sure there wasn't some automated "Hey, this File Server is no longer reachable, quick switch any links that point to it" magical hidden feature. :)
In all seriousness some native Microsoft way to script/automate that would be much appreciated. Even if it does still take a few seconds to minutes. There are time admins with proper knowledge and rights aren't always available after all. I'm thinking along the lines of PowerShell cmdlets, but I imagine you guys are already (hopefully) working on such things.
As always, appreciate all the great information you guys put together here!
Thanks again...
On second thought... I imagine that "native Microsoft" way will be directed towards SCOM?
That's a very interesting notion. It would be quite scriptable with DFSUTIL and DFSCMD. And you could use SCOM 2007 as well as a trigger. Or some other product, or trigger mechanism.
Very intersting, I shall dwell on this as a potential blog post...
Bummer. Scenario 2A is exactly what I had wanted to do. I'm glad I read this article and I guess I will have to come up with something different...
This is the very sort of the doc I wanted to have, for many years... But I need a solution of a variant of 2A.
I plan to have TWO local fileserver, plus one in central site for tape backup, all configured as DFS target (but the central site one is disabled). We can assume reasonably high-speed connection between the two local fileservers (on the same subnet, same physical switch, 1Gbps connection). I understand we still cannot guranantee 100% sync between those two fileservers, but I guess it should work... Can you comment on this, please?
Also, what happens if I disable DFS target of one of the local fileserver (leaving one local fileserver as Enabled, while the other local one and the central office one are disabled) and manually switch Enable/Disable between the two local servers in case of server failure. Is this supported?
What about if you made the two local servers into one cluster, Ikono? Then you get all the advantages and none of the disadvantages (plus you could save money of disk,as you would need half the space of two full DFSR servers).
For your second question, yes that would work and be supportable.
Thank you, NedPyle. Mainly due to the cost associated to shared disk (iSCSI, etc) I preferred not to go for a clustering. But I am glad to learn the second one is supportable, as I can start testing right now and implement very soon :)
What are your thoughts about this scenario (Variant Scenario 2A)?
• Central Site Server is hosting ONLY the (Not the entire profile) redirected profile subfolders:
○ Documents, Desktop, Favorites, Downloads
• The Redirected folders are located in the users H: drive (Home Path) mapped to - \\Domain\Namespace\Profile
• The data is replicated using DFS Replication over the WAN between a single branch office and Central Site
• Namespace link has multiple targets (Active) - Central Site and Branch Office (Sites and Services is configured)
• Third party solution 'AppSense' is controlling Profile/Settings Roaming across the environment
I understand that in a failover the user may not have the latest copy of their data, but "critical" data is stored in separate department shares which are replicated, (Namespace link has multiple targets) but is only active (enabled) to one.
Since you're using a third party profile tool here, I have no way to comment on this anymore. I can't tell what it might do or want or need. But from I read here, no this is not supported and contradicts our recommendations around multiple access points to the same data by the same user.
The scenario indicates you have multiple links, but only one link is active. That part fits into the prescribed scenarios. However, you indicated you’re using AppSense. I can’t comment on what AppSense does and does not support. Microsoft does not support AppSense.
Mike Stephens
Thanks for the useful info! Hopefully you can help me with a question...
It makes sense to me that there could be corruption problems with a profile because there are several files that may be modified concurrently and an incomplete replication would leave them in an inconsistent state. However, if CSC is not used, I don't understand why a redirected folder (such as a user's home folder) with multiple targets is at any greater risk than any other replicated folder with multiple targets.
Are you saying that replicated folders with multiple targets are also not supported? I assume that is not true because it would render DFSR useless. Can you clarify as to why folder redirection against a namespace with multiple link targets is different from any other scenario using multiple targets?
The risks with folder redirection against a namespace with multiple targets backing the share are due to replication latencies and due to the switch between targets in case of transient network glitches. It may be disconcerting to a user to find that a document he recently worked on/saved is unavailable when he was redirected to another replica. If you've configure replication schedules (say off hours replication), this may be an expected outcome.
Another problem that can arise is that of replication conflicts since you now have a scenario where the user may be modifying the file on two different replicas. Since DFSR has last-writer-wins replication conflict resolution, you'd find that a previously modified version of the file is moved to the ConflictsAndDeleted directory. It just becomes a lot more difficult for an administrator to support since you have to be ready to restore files from the ConflictAndDeleted folder in case users complain of files going missing. | http://blogs.technet.com/b/askds/archive/2010/09/01/microsoft-s-support-statement-around-replicated-user-profile-data.aspx?PageIndex=1 | CC-MAIN-2015-06 | refinedweb | 2,302 | 59.43 |
On Tue, 22 Mar 2005 ross at soi.city.ac.uk wrote: > On Tue, Mar 22, 2005 at 09:52:04AM -0700, Kevin Atkinson wrote: > > I must admit that I am baffled by what this is doing. But I don't think > > it has the semantics I want. When I try substituting your > > code in I get "Exception: <<loop>>". > > I could have made it a bit simpler: > > instance ArrowLoop FG' where > loop (FG' f) = FG' $ \ c x -> do > (c', ~(x', _)) <- mfix $ \ ~(_, ~(_, y)) -> f c (x, y) > return (c', x') > > This executes f once only, with y bound to the third component of the > output of f. This isn't available until f finishes, so any attempt > to examine it while f is running will lead to <<loop>>, but f can pass > it around, and store it in data structures; it can even create cyclic > structures. (Under the hood, the IO instance of mfix starts with y bound > to an exception, and updates it when f finishes, a bit like what you're > trying to do with IORef's, except that existing references to y then > point at the updated thing.) Your definition runs f once with undefined > as the last argument to get a value for y to supply to a second run. > Presumably the things you're doing with Control need to change too, > and I don't understand all that, but I expect that the mfix version > could be made to work, and would do less work. I think I understand it more, but I am not sure it will do what I want. For one thing f still needs to know when it can examine its input thus, still needing an initializing pass. Please have a look at the new code. I have reworked how loops are handled and no longer use Control. Also the state variable is now needed in separate functions. Thus I am not sure I can use the mfix trick to hide the state. | http://www.haskell.org/pipermail/haskell-cafe/2005-March/009507.html | CC-MAIN-2013-48 | refinedweb | 334 | 75.03 |
Simple Publication And Subscription Functionality (Pub/Sub) With jQuery
In the past, I've talked about how awesome jQuery's DOM-based event management is. In fact, I've even played around with using the DOM-based event management to power an object-based bind and trigger mechanism. As you saw in that exploration, however, porting the jQuery event framework onto a non-DOM context requires a good bit of finagling. At this year's jQuery Conference (2010), one of the most frequently discussed topics was that of light weight Pub/Sub (publish and subscribe). This brand of event binding is like jQuery's event binding; but, it circumvents a lot of the processing that a DOM-based event framework needs to perform. Since this seemed to be the direction in which people were moving, I thought it was time to try it out for myself.
As Rebecca Murphey pointed out in her jQCon presentation, there's an existing jQuery plugin that provides pub/sub in six lines of code. But, if you've been following my blog for any amount of time, you'll probably know that I like to learn something by writing 200 lines of code in order to figure out why those six lines of code rock. And, that's exactly what I've done here.
In the following experiment, I've created three extensions to the jQuery namespace:
$.subscribe( eventType, subscriber, callback ) - This subscribes the given subscriber to the given event type. When the event type is triggered, the given callback will be executed in the context of the subscriber (ie. "this" refers to the subscriber within the callback).
$.unsubscribe( eventType, callback ) - This unsubscribes the given callback from the given event type. Since functions are comparable objects, the subscriber is not required - only the callback that it defined.
$.publish( publisher, eventType, data ) - This allows the given publisher to publish the given event type with the given data array (if any). When the publication event is created, the publisher is defined as the "target" of the event.
Unlike traditional jQuery event binding, this light weight publication and subscription mechanism is not associated with any given jQuery collection. As such, I am requiring the publishers and the subscribers to pass themselves along with their related method calls. This is not something that I saw the pub/sub presentation demos doing; however, knowing who fired off a given event just seems kind of necessary to me. It makes the pub/sub API a little less elegant but, I think it will be worth it in the long run.
When an event gets published, an event object gets created with the following properties:
type - The event type (string) that has been published.
target - The object that has published the event.
data - An array of any additional data that has been published along with the event. Each item in this array gets passed to the subscriber callback as an additional invocation argument.
result - This is the return value of the most recently executed subscriber callback (or undefined if no value was returned).
This event object is capable of some very light-weight propagation logic. If you look at the event object that gets created above, you'll see that it stores the result of each subscriber that gets notified of a given event. If any of the subscriber callbacks return false, the $.publish() method will treat this as a request to stop immediate propagation. As such, it will break out of the $.each() iteration method that it uses to invoke all subscribers to the given event.
Furthermore, once all of the subscribers have been notified, the event object is then returned to the event publisher. At that point, the event publisher has the opportunity to alter its default behavior based on the result value of the event. There is nothing in the pub/sub model that requires any particular action to be taken place; however, if the publisher sees that the last known result is "false", then it can chose not to complete the event that it just published.
Now that you have an idea of where I am going with this, let's take a look at some code. In the following demo, I am defining the three pub/sub jQuery extensions - subscribe, unsubscribe, and publish. Then, I am creating two Person objects that automatically subscribe to the global event, "oven.doneBaking." Then, I create an oven object that announces the event, "oven.doneBaking." Naturally, the person objects will react to this event.
<!DOCTYPE html> <html> <head> <title>Simple jQuery Publication / Subscription</title> <script type="text/javascript" src="./jquery-1.4.3.js"></script> </head> <body> <h1> Simple jQuery Publication / Subscription </h1> <script type="text/javascript"> // Define the publish and subscribe jQuery extensions. // These will allow for pub-sub without the overhead // of DOM-related eventing. (function( $ ){ // Create a collection of subscriptions which are just a // combination of event types and event callbacks // that can be alerted to published events. var subscriptions = {}; // Create the subscribe extensions. This will take the // subscriber (context for callback execution), the // event type, and a callback to execute. $.subscribe = function( eventType, subscriber, callback ){ // Check to see if this event type has a collection // of subscribers yet. if (!(eventType in subscriptions)){ // Create a collection for this event type. subscriptions[ eventType ] = []; } // Check to see if the type of callback is a string. // If it is, we are going to convert it to a method // call. if (typeof( callback ) == "string"){ // Convert the callback name to a reference to // the callback on the subscriber object. callback = subscriber[ callback ]; } // Add this subscriber for the given event type.. subscriptions[ eventType ].push({ subscriber: subscriber, callback: callback }); }; // Create the unsubscribe extensions. This allows a // subscriber to unbind its previously-bound callback. $.unsubscribe = function( eventType, callback ){ // Check to make sure the event type collection // currently exists. if ( !(eventType in subscriptions) || !subscriptions[ eventType ].length ){ // Return out - if there's no subscriber // collection for this event type, there's // nothing for us to unbind. return; } // Map the current subscription collection to a new // one that doesn't have the given callback. subscriptions[ eventType ] = $.map( subscriptions[ eventType ], function( subscription ){ // Check to see if this callback matches the // one we are unsubscribing. If it does, we // are going to want to remove it from the // collection. if (subscription.callback == callback){ // Return null to remove this matching // callback from the subsribers. return( null ); } else { // Return the given subscription to keep // it in the subscribers collection. return( subscription ); } } ); }; // Create the publish extension. This takes the // publishing object, the type of event, and any // additional data that need to be published with the // event. $.publish = function( publisher, eventType, data ){ // Package up the event into a simple object. var event = { type: eventType, target: publisher, data: (data || []), result: null }; // Now, create the arguments that we are going to // use to invoke the subscriber's callback. var eventArguments = [ event ].concat( event.data ); // Loop over the subsribers for this event type // and invoke their callbacks. $.each( subscriptions[ eventType ], function( index, subscription ){ // Invoke the callback in the subscription // context and store the result of the // callback in the event. event.result = subscription.callback.apply( subscription.subscriber, eventArguments ); // Return the result of the callback to allow // the subscriber to cacnel the immediate // propagation of the event to other // subscribers to this event type. return( event.result ); } ); // Return the event object. This event object may have // been augmented by any one of the subsrcibers. return( event ); }; })( jQuery ); // -------------------------------------------------- // // -------------------------------------------------- // // -------------------------------------------------- // // -------------------------------------------------- // // I am the person class. function Person( name ){ // Store the name property this.name = name; // Subscribe to the oven events. $.subscribe( "oven.doneBaking", this, "watchOven" ); }; // Define the person class methods. Person.prototype = { watchOven: function( event, food ){ // Log that we have eaten foodz!!! console.log( "Nom nom nom -", this.name, "is hungry for", food ); // Return false - this will prevent other oven // watchers from stealing my pie! return( false ); } }; // Create two girls, both of which will be watching the // oven for some freshly baked pie. var joanna = new Person( "Joanna" ); var tricia = new Person( "Tricia" ); // -------------------------------------------------- // // -------------------------------------------------- // // Now, create an oven object that will publish a baking // done event. var oven = { content: "Pumpkin Pie" }; // Publishe the baking done event. Notice that we are passing // in the oven reference to so that the event can have a // valid event target. // // NOTE: The publish method returns the event object that was // passed to all of the subscribers. var event = $.publish( oven, "oven.doneBaking", [ oven.content ] ); // Log as to whether or not the event was propagated. Notice // that we are using triple equals === to make sure the // result is actually false and not some other falsey value. console.log( "The event was", ((event.result === false) ? "NOT" : ""), "propagated!" ); // Log the event as it exists after all of the subscribers // have had a chance (if possible) to react to it. console.log( event ); </script> </body> </html>
As you can see in this code, the Person constructor automatically subscribes to the event, "oven.doneBaking." However, in the callback that it uses to subscribe to the event, the Person class is returning false. As such, the first person to subscribe to the event will stop propagation of the event, thereby preventing the second person from knowing anything about the very very delicious pumpkin pie (this is typically my strategy of choice at Thanksgiving).
Because of this propagation logic, running the code leaves us with the following console output:
Nom nom nom - Joanna is hungry for Pumpkin Pie
The event was NOT propagated!
NOTE: I have not included the logged event object as it is a complex value.
In this demo, I am not making any final use of the event object other than to check the propagation status. However, if my publish request was made inside of a true domain object, I could have used the event.result value to react in different ways (much in the same way that a Form won't submit if its default behavior is prevented).
The jQuery library is really the first thing that has truly gotten me to think about event-based communication; it's bind() and trigger() methods make publication and subscription extremely easy for DOM-based functionality. However, as client-side application architecture gets more complex, it seems that people are using lighter-weight pub/sub approaches that work with plain-old Javascript objects. This past jQuery Conference has really got my machinery firing full blast!
Reader Comments topic?
Part of the goal of pubsub is to allow that sort of extensibility -- to make it so that multiple, arbitrary pieces of your application can respond to a topic -- and to decouple the components, so one literally doesn't have to know or care about the other. A better pattern might be to have the second component broadcast another topic if something interesting happens in it; alternately, you might write your first component so it really didn't have to care how the second component reacted to the topic published by the first.
This is a small detail about an otherwise great article. Thanks for digging in and writing it :)
@Rebecca,
Great to see you too as well. I think the YayQuery team, in general, rocked the talks :)
As far as the result of an event, this is what the core jQuery Event object does:
I was just trying to stick with the "best practices" outlined by the core library, albeit in the most light-weight way possible. There is nothing in this framework that tells the publisher *how* to use that result - it can do with it what it wants.
I thought about this in the same way that one might bind to a Form element in the DOM. If an event listener *outside* of the Form returns false (or calls event.preventDefault()), the Form chooses not to complete its submissions. This allows components to interact by indirect message-passing, so to speak.
Clearly, this approach makes sense for DOM interaction otherwise, we wouldn't be able to create such rich client-side application. So the question then becomes, does the DOM-centric approach also apply to a non-DOM-centric approach?
The answer: I have no idea :D I'm just trying to wrap my head around all of this stuff.
not sure how much it matters how jQuery implements events.
In my mind, it's definitely critical to the pattern that more than one subscriber be able to connect to a topic; I suppose you could write an implementation that would check the return values of all subscribers, and then return false to the publisher if any of them returned false, but at that point I think the implementation could start to stray beyond the simplicity that I love about pubsub in general -- it is very much fire and forget.
Regarding your form example, the idea in pubsub is that the form view would publish the data from the form, and then any subscribed components would have the chance to react to it. The form view's job would be done once it published that data; it would never need to know what happened next. If the form needed to be able to be hidden, then the form view could react to another published topic by hiding itself, but the hiding of it would occur independently of the publishing of the data. The form view would absolutely *not* be responsible for getting the form data to the server -- that task should fall to a service.
Hope this helps :)
@Rebecca,
I agree that the two context are very different; however, I think it's important to clarify our terminology. When we say "light-weight", we are talking about the cost of processing, not the robustness of the feature-set, correct? Otherwise, I can't see any reason why one would opt for less functionality at the same cost, especially when that functionality is well encapsulated.
So, even with some pseudo behavior/propagation functionality, this approach is still "light-weight."
Once we differentiate between the concepts of light-weight and robust, then the question becomes whether or not the robust features that we've included are valid or not.
As far as more than one subscriber being able to listen for an event type, this is still the case. Remember, if all the event listeners simply "listen", then everything goes ahead as planned. It is only when one of the listeners actually takes the initiative to return an actually False value that immediate propagation is stopped. So it's not like we are limited to only one listener; it's just that we have the ability for one listener to alter the control flow.
And, I think this is really the point that is not sitting well with you - allowing a listener to alter the greater control flow.
Going back to the Form example in our non-DOM context, I am going to agree with you. In fact, I am having trouble coming up with a good example of where I would actually want to stop an event from being fully processed.
This is all so new and exciting :) I am going to put my thinking cap on and see what I can come up with. Thanks for all the wonderful thought-food.
Ben, interesting stuff! I started to write a comment on two reasons why trying to prevent event propagation isn't viable, but it expanded itself into a blog post involving parallel universes:.
Which jQuery plugin are you talking about that is 6 lines of code?
@Hal,
Awesome, going to read it right now :)
@Drew,
I am not sure, I will have to defer to @Rebecca to answer that.
Hey guys, I tried to codify my thoughts in a more "Real world" example of how propagation logic might be used for fun and profit:
I use "real world" in quotes because its obviously a silly example; but, one that I hope will lend itself as a platform for more use-case based discussions.
I wasn't there, but I imagine she may be talking about the plugin Peter Higgins wrote
and the fact that bocoup showed the performance difference using jQuery events vs. the pub/sub plugin.
@Michael,
Ah, very cool - thanks for the links. I've heard Bocoup talked up a lot this weekend at jQCon. Looks like they are doing some high quality stuff.
I'm sitting here, staring at the screen, trying to wrap my head around all this pub/sub stuff. I think what's going to be most difficult for me is to figure out what kind of stuff should use pub/sub and which should use more of a direct invocation.
Also, I was wondering if perhaps we could clarify what makes the DOM so different than anything else. After all, it was sitting here for years, not being leveraged. And now, with its event model, we are able to build on top of it to create robust applications. What makes the DOM special in that way that doesn't apply to other types of systems?
I would say that it is the extremely generic nature of HTML UI elements. But, isn't part of the event architecture an attempt to make things more generic?
@Rebecca,
Going back to your SRCHR example from your presentation - how is the SRCHR form "widget" that you could reuse on various pages different than the HTML Form element that you can reuse on different pages? Why should one offer propagation control while the other does not?
Thanks all - this stuff is really hard for me to understand.
Hi Ben,
I've been involved with pubsub for many years in the area of Comet and "later" with Dojo. It's an eventing paradigm that's sorely missing from JS.
Anyway, I would strongly recommend looking at phiggins' plug-in as it's pretty elegant, and based on the work we've done with pubsub in Dojo.
The DOM isn't really special except that DOM events are expensive from a performance perspective relative to simple JS calls. In Dojo our normal event system works with both DOM events and function to function bindings. PubSub takes this to the next level by allowing event connections, registrations, publishes, and subscribes to happen with no two publishers or subscribers knowing or caring anything about each other.
As far as pubsub goes, the problem I see in your suggested implementation is that with returning a value, potentially false, it breaks the pubsub paradigm. Basically it should not be possible with pubsub for one subscriber to cancel the publish to another subscriber. What you're really doing instead is binding a set of events to happen in order, which is a different (and perhaps useful) pattern.
The format for pubsub topics is typically like a url fragment, e.g. "oven/bake/". The reason for this is that in Comet systems, you can actually set permissions on who can subscribe and publish to each topic, hierarchically, and have your calls be RESTful. It's a nice convention that's easy to follow and
Additionally, something I use a lot is Dojo's connectPublisher method. What this does is allow you to take an existing method and have it publish a topic whenever it is called, without modifying the source of that method with an explicit publish call.
For example, you have (in pseudo-code:
Oven = {}
Oven.bake()
Person = {}
Person.eat{}
subscribe("oven/bake", Person, "eat")
connectPublisher("oven/bake", Oven, "bake")
This allows you to easily place a pubsub scaffolding over existing code and events which is very handy.
I hope this helps... it's cool that jQuery is becoming interested in pubsub. There's also a cometD jQuery library that might be useful to check out as well, though that's intended for events to and from a Comet server.
Regards,
-Dylan
@Dylan,
The connectPublisher() is interesting. I assume this uses some sort of aspect oriented programming (AOP) where is wraps the original function inside of one that publishes an event after it executes. AOP is new to me, but I am liking the idea. How do you handle unsubscribing? Do you use a jQuery proxy() approach where a GUID is transferred to the outer method for comparison?
My biggest mental block is having two unrelated modules affect each others actions. If you look at my next post, you'll see in the comments that I come up against this issue a LOT and I simply can't wrap my head around it.
I think I might just see if I can hire someone to teach me how to deal with this :)
@Ben: yes, we use an AOP approach. For connectPublisher, we return a handle which can be disconnected through dojo.disconnect() So in the case of registering a publisher in this way, we do retain a reference to it just for disconnect purposes.
As far as two competing approaches (connect/bind vs. pubsub), we just live with them possibly competing with each other, and we don't worry too much about the consequences because both are useful in certain situations. It can definitely become more challenging to wrap your head around, especially during debugging.
@Dylan,
I think maybe my problem is that I am trying too hard to decouple two things that are necessarily coupled. After all, if the state of one UI widget affects the functionality of another one, then aren't these two things, by definition, coupled?
The kind of concept I come back to over and over again is the GMail navigation where it will say something like:
"Are you sure you want to leave the page? Your message will be lost."
I keep trying to think of a way that this can be achieved via pub/sub... hence my preventing of default events. But, maybe that just doesn't make sense? Maybe these two UI elements (navigation and compose email area) are necessarily coupled?
@Ben: onunload is somewhat of a special case because you have to cancel stuff right away to do anything.
In thinking about it, such an event is really something you would either handle within your message bus (the thing that routes all the pubsub calls), or some sort of hook in your pubsub router that it could connect to. In this case, it is something that ideally is pubsub-ing with wildcard topics, meaning all of them.
Perhaps there's a better pattern, but I think the proper handling of canceling the unload event, and then some way to interrupt all publish notifications until deciding what to do with the unload action could work.
Anyway, those are my scattered thoughts.
@Dylan,
From this back and forth, as well as from my conversation with other people, I am beginning to get the feeling that any eventing model in which the bubbling / propagating of events has "meaning" is thought of as a "special" case. I am OK with this - it actually might help me. I keep trying to reconcile the difference between the eventing presented by the DOM and the pub/sub eventing that people like Rebecca and you are talking about. But perhaps that is just trying to compare apples and oranges (albeit, apples and oranges that have the same name).
Perhaps these two types of eventing can just live side-by-side in peace.
This morning, I just installed Node.js for the first time to play with that and that architecture seems to use a somewhat different model of eventing. From what I read (which was limited), it seems they are using Event "emitters". So, rather than having a centralized event model or an object-specific event model, they have instances of these event emitters that you can bind/trigger events on.
Of course, it also appears that a number of objects in the node.js framework extend the event emitter class so as to be able to have object-specific event binding.
Eventing... it's so hot right now :)
Ben,
I know this is ancient, but here are some notes:
* With global events, wildcard support is essential to enable cross-cutting concerns.
Pitfall: Memory leaks if you add too many event listeners (especially problematic in collections).
Pitfall solution: Refactor to delegate events for collections. Consider the singleton pattern for collections which need to listen for events per-item.
Pitfall: Too many events / too much chatter on the global event bus
Solution: Consider moving some communication to local objects or mediators
* Local event emitters (like Node's EventEmitter) can be used both globally (using a single, globally accessible event emitter), or local per-instance event emitters which must be coordinated manually, via tight coupling (booo), mediators (slightly better), or dependency injection.
Pitfall: Potentially lots of manual wiring and unnecessary code complexity
Solution: Switch to global events, or wrap event mediation with a factory
Pitfall: Tight coupling
Solution: Global events, mediators, or dependency injection
Pitfall: Too many mediators
Solution: Global events
*.
In general, this isn't a one-size-fits all issue. Use the right tool for the job. I haven't event mentioned message queues and stacks, or message services (an SOA component). All of these solutions have their place in a large application.. | https://www.bennadel.com/blog/2037-simple-publication-and-subscription-functionality-pub-sub-with-jquery.htm | CC-MAIN-2021-43 | refinedweb | 4,189 | 61.46 |
Naming and the problem of persistence
Document options requiring JavaScript are not displayed
Help us improve this content
Level: Introductory
Dan Connolly (connolly@w3.org), Technical Staff, W3C/MIT
21 Jun 2005
In information management, persistence and availability are in constant tension. This tension has led to provides a perspective on the tension between persistence and availability.
The World Wide Web combines three kinds of technologies: data formats, protocols, and identifiers
that tie the two together. The relationship between data formats such as XML and HTML is relatively
clear, as is the relationship between protocols such as HTTP and FTP. But identifiers seem to be a
bit trickier to pin down.
Web addresses were relatively obscure a dozen years ago, but now they appear not just in Web
browsers but also on business cards and brochures, on billboards and buses and T-shirts. They're
commonly known as Uniform Resource Locators or URLs. A typical
example would be. But what about shorter forms,
such as? Is that a URL, too? How about ../noarch/config.xsd?
Or guide/glossary#octothorpe?
To make good use of URLs in XML namespaces, XML schemas, and Extensible Stylesheet Language
Transformations (XSLT), you need to know the rules. But the XML family of specifications refers to
URIs and URNs -- what's the difference between these and URLs? That question has a long history.
My role in that history goes back at least as far as the Hypertext conference in 1991, where I met
both Douglas Engelbart (inventor.
The URI standard:
http
/en/US/partners/index.html
The IETF consensus process manages the schemes. The Official IANA Registry of URI Schemes
(see Resources) includes familiar schemes like http,
https, and mailto, plus many others that
you may or may not be familiar with.
https
mailto include:
a/b/c
/jbrown@freddie.ucla.edu:~mblack/
mailto:mbox@domain
The second example in the introduction,, is not really a URI. It's a convenient shorthand for, a format supported by popular Web browser user interfaces (UIs). However, don't make the mistake of leaving out the scheme in XSLT like this:
, the reference is relative. A relative URI reference is much like a file path. For example, ../noarch/config.xsd is also a relative URI reference.
exslt.org
href
../noarch/config.xsd
Internationalized Resource Identifiers
It is a slight oversimplification to say that the href attribute in HTML takes a URI reference. URIs and URI references are taken from a limited set of ASCII characters, and HTML is more internationalized than that. In fact, the Request for Comments that followed RFC3986 was RFC3987,. Each IRI has a corresponding encoding as a URI, in case an IRI needs to be used in a protocol (such as HTTP) that accepts only URIs.
Overriding the base URI with xml:base
Typically, a URI reference is relative to whatever document you find it in. If you're looking in a document with base URI and you see a URI reference ../../random/random.xml, then that reference would expand to. In HTML, you can put a base element at the top of the document to override the base URI. The XML Base specification (see Resources) provides the equivalent in XML.
../../random/random.xml
base
Consider a document that you can access either as file:/my/doc or as. Typically, when you access the document through the file system, you want references like #part2 to expand to file:/my/doc#part2; when you access the document through HTTP, you want #part2 to expand to. But in a Resource Description Framework (RDF) schema, the expanded form needs to stay the same for some things to work. XML Base makes this expansion easy (see Listing 1).
file:/my/doc
#part2
file:/my/doc#part2
<rdf:RDF
xmlns="&owl;"
xmlns:owl="&owl;"
xml:base=""
xmlns:rdf="&rdf;"
xmlns:
...
<Class rdf:
In this example, the #Nothing reference expands to no matter where you
find that document.
#Nothing
Okay, so much for URIs, IRIs, and URI references. What about URLs and URNs?
URLs and URNs
URIs are designed to serve as both names and locators. When they were brought to the
IETF for standardization, they became known as Uniform Resource Locators, and
a separate effort on Uniform Resource Names began..
zork1.example.edu.
urn:
http:
ftp:].
Practical persistence
A natural tension exists the information in the name, the less likely it is to persist across changes.
/drafts/
/keepers/.
ls434
The PURL project and the Digital Object Identifier , then connect it to your PURL with HTTP redirection. The indirection from PURLs to less-persistent HTTP URIs is much like the indirection provided by DNS, except that the source and the destination of the redirection DOI system uses its own scheme -- for example, doi:10.123/456.
Web browsers can be adapted to support this scheme with a plug-in. The DOI foundation
provides policies, registration services, and HTTP redirection services similar to PURL providers
like OCLC. While the DOI foundation supports an alias for each DOI of the form, the DOI Handbook (see Resources)
states that this system has "significant disadvantages when compared with the resolver
plug-in." Managing two different names for each object seems like a more significant
disadvantage to me.
doi:10.123/456
Creative tensions in information management master.
http://...
GET/PUT/POST/DELETE
Resources
About the author.
Rate this page
Please take a moment to complete this form to help us better serve you.
Did the information help you to achieve your goal?
Please provide us with comments to help improve this page:
How useful is the information? | http://www.ibm.com/developerworks/xml/library/x-urlni.html | crawl-001 | refinedweb | 932 | 55.74 |
I am posting a link to the talk I presented at PAKCON III, Pakistan’s Largest Underground Hacking Convention, at Pearl Continental, Karachi, on the evening of 26 July. The talk is titled How Attackers Go Undetected.
Month: July 2007
Uploading files via newforms in Django made easy.
I have already described creating custom fields with Django and shown how easy it can get. Today, I am going to show how to manage file uploading through newforms. I expended a good part of my morning and afternoon today reading up on Django and asking for help in #django on irc.freenode.net. Many thanks to the folks in #django for helping me out.
I will take the aid of code snippets as much as I can to describe the process. That way, you can not only grasp it quickly and place it all in a useful context, but can also use my code and apply it in applications.
First of all, I have Django configured on the development webserver that ships with it. Therefore, in order for Django to serve static contents, such as images, a couple of settings need be taken care of.
I have a directory, ‘static/’, in my root Django project folder in which I keep all the static content to be served. Within settings.py file, it is important to correctly define the MEDIA_ROOT and MEDIA_URL variables to point to that static/ directory. I have the two variables defined thusly:
ayaz@laptop:~$ grep 'MEDIA_' settings.py
MEDIA_ROOT = os.path.join(os.path.dirname(os.path.abspath(__file__)), "static/")
MEDIA_URL = ''
The structure of the static/ directory looks like the following:
ayaz@laptop:~$ ls -l static/
total 20
drwxr-xr-x 3 ayaz ayaz 4096 2007-06-28 15:34 css
drwxr-xr-x 3 ayaz ayaz 4096 2007-07-06 21:51 icons
drwxr-xr-x 3 ayaz ayaz 4096 2007-07-06 21:51 images
drwxr-xr-x 3 ayaz ayaz 4096 2007-07-21 15:57 license
drwxr-xr-x 3 ayaz ayaz 4096 2007-07-21 11:41 pictures
For Django to serve static content, it must be told via pattern(s) in URLconf at which URL path the static content is available. The following are relevant lines from urls.py (note that MEDIA_PATH is defined to point to the same directory as does MEDIA_ROOT in settings.py):
ayaz@laptop:~$ head -5 urls.py
MEDIA_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "static/")
urlpatterns = patterns('',
(r'^static/(?P.*)$', 'django.views.static.serve',
{'document_root': MEDIA_PATH, 'show_indexes': True}),
That is all you need to do to enable Django to serve static content. Next up is the Model that I am using here as an example. I have the model DefaulProfile, which has two fields, the last of which is of interest here. The ‘picture’ field, as you can see below, is of type models.ImageField. Note the ‘upload_to’ argument (Consult the Model API for documentation on the various Model Fields and their supported arguments). It points to ‘pictures/’ which will, when Django processes it, be appended to the end of the path saved in MEDIA_ROOT. Also note that the ‘picture’ field is mandatory.
ayaz@laptop:~$ cat models.py
from django.db import models
from django.contrib.admin.models import User
class DefaultProfile(models.Model):
user = models.ForeignKey(User)
picture = models.ImageField("Personal Picture", upload_to="pictures/")
Now, let’s flock on to the views. I have already described the PictureField, so I will skip it.
ayaz@laptop:~$ cat views.py
from django import newforms as forms
from django.contrib.admin.models import User
from myapp.mymodel import DefaultProfile
‘UserForm’ is a small form model class, which describes the only field I am going to use, ‘picture’. Note the widget argument.
class UserForm(forms.Form):
picture = PictureField(label="Personal Picture", widget=forms.FileInput)
From now starts the meat of it all. Note request.FILES. When you are dealing with multipart/form-data in forms (such as in this case where I am trying to accept an image file from the client), the multipart data is stored in the request.FILES dictionary and not in the request.POST dictionary. Note that the picture field data is stored within request.FILES[‘picture’].
The save_picture() function simply retrieves the ‘filename’, checks to see if the multipart data has the right ‘content-type’, and then saves the ‘content’ using a call to the object’s save_picture_file() method (this method is documented here).
def some_view(request):
def save_picture(object):
if request.FILES.has_key('picture'):
filename = request.FILES['picture']['filename']
if not request.FILES['picture']['content-type'].startswith('image'):
return
filename = object.user.username + '__' + filename
object.save_picture_file(filename, request.FILES['picture']['content'])
post_data = request.POST.copy()
post_data.update(request.FILES)
expert_form = UserForm(post_data)
if expert_form.is_valid():
default_profile = DefaultProfile()
save_picture(default_profile)
default_profile.save()
If you look closely, you’ll notice I created a copy of request.POST first, and then updated it with request.FILES (the update() method is bound to dictionary objects and merges the key-value pairs in the dictionary given in its argument with the source dictionary). That’s pretty simple to understand. When an object of UserForm() is created, it is passed as its first argument a dictionary, which is usually request.POST. This actually creates a bound form object, in that the data is bound to the form. So when the method is_valid() is called against a bound form object, Django’s newforms’ validators perform field validation against all the fields in request.POST that are defined in the form definition, and raise ValidationError exception for each field which does not apparently have the right type of data in it. Now, if you remember, the picture field was created to be a mandatory field. If we didn’t merge the request.FILES dictionary with a copy of request.POST, the is_valid() method would have looked into request.POST and found request.POST[‘picture’] to be missing, and therefore, would have issued an error. This is a nasty, subtle bug, that, I’ve to admit, wasted hours trying to figure out — is_valid() is looking for the field ‘picture’ in request.POST when in reality, for multipart form-data forms, the ‘picture’ field is stored within request.FILES, but since request.FILES was never bound to the form object, and instead request.POST was, it would never find ‘picture’. So, when request.FILES and request.POST are merged together, the resultant object has all the required fields, and is_valid() correctly finds them and does not contain, provided there is no errant data in any of the fields.
I sure do hope the explanation is clear enough. Also, do take care the the “form” tag in the corresponding template has the enctype argument defined as “enctype=multipart/form-data”.
Slackware 12
Guys, Slackware 12 is out. Michiel van Wessem has written a neat review of Slackware 12, which is published on The Slack World, thanks to Mikhail Zotov.
3rd blogging anniversary. Three years ago, I wrote my first …
I hadn’t noticed. Today marks the passing of three tiresome years of my blogging epoch. The thought of celebrating, however modestly, the first anniversary had never crossed me, and I ended up shamelessly forgetting about it last year and year before. I may well not have remembered today either, were it not for the unexpected sidelong glance that fell on the post archive list and got stuck at “July 2004”. “July 2004” stood there, staring fiercely back at me, fuming, wanting to kill me if I didn’t notice it. It instantly made my ears stand up.
I am hardly happy over my memory, or at the fleeting nature of which. I suspect I could’ve blogged before July 7, 2004, but I have no proof to support that speculation. The oldest post I have intact dates back to July 7, 2004, which can be read here. It is a funny post. I wrote it to vent out anger and frustration that managed to bile up after the first ever semester project I did back at the University. I was a freshman then. I am a graduate now. Time flies.
In three years, I blogged decently. I maintained a balance between blogging because you love to write and blogging because you want to share information. I also kept my composure. A blog, I learned early, is not a personal diary. Anything you publish on the world wide web, perhaps, being a Security Engineer, it would do justice to say instead, anything you publish on the Internet, even when it is not published but kept seemingly hidden, is not personal. Or, it does not retain that status for long. People tend to write about everything on their blogs, from their likes to dislikes, grudges and crushes over someone, personal problems, depressing issues, joyous moments, et cetera. And it is very tempting to do so, too. But one has to remember that what they write, or more generally, make available on the Internet can be read by anyone at any time and tainted and used for any purposes, even, at times, to libel against the very person who wrote it. It is important to realise the importance and sensitive nature of what you make available on the Internet. In blogging, it is important to strike a balance between personal stuff and stuff that is OK for the public to read.
At odd times in my blogging history, I failed to maintain the balance, letting it fall to one side or the other. And, I regret it. However, all in all, what I’ve written has always been carefully screened by myself prior to getting published, and I’ve always made sure that, when droning on anything personal, I don’t let out too much, and when criticising someone or something, I don’t cross lines I am not supposed to cross.
In retrospect, I wrote about a lot of things. I have 159 posts today, the oldest dating back to July 7, 2004. However, of late, and as a good friend who tells me I inspired him to kick off his blogging career usually screams at me with a frown on his face, “You don’t write for your readers”, I’ve tipped the balance more on to the side of disseminating technical information and news. There are a couple of reasons for that, some of which I myself am not sure of. I am indulged in technical stuff more often than ever, plus with my career kicking off, I have hardly time for anything else. Another important reason, I believe, for my not writing about anything non-technical is abstinence. I am taking a lot of hits on the emotional front in my personal life, and I fear if I blog about it, I would trip over the line and go out and expose a lot of things I shouldn’t expose at all. And, no, I am not drinking, nor smoking, nor taking drugs, nor sleeping with anyone. Abstinence. If I blog, I know I will be tempted to write about it. To vent out. To cry. I avoid it, instead. Whatever little I write, it is purely technical. I regret my readers feel the need to leave the theatre a line too early, but, they’ll have to understand.
Blogging is fun. I love to write, so it have been even more fun. Blogging is a great way to fight writer’s block, too. But, again, you have to maintain a balance in what you write and what you should not write but still do out of temptation. When I look back, I can safely look at times when I badly wanted to lash out on someone, over something, but painfully resisted the urge. It was most important to contain the temptation, not only because it wouldn’t have been a nice thing to do, but also because over the years I’ve attracted a big reader base which includes people who are or may be my employees.
I don’t know what else to write. I am glad to have started blogging three years from now at this day, and I am glad to have kept blogging up till now. My blog alone has helped me in various ways. I am truly thankful.
I might celebrate quietly today. I already feel excited. Thank you very much for reading. :-)
Creating custom fields using Django’s newforms Framework
I simply adore Django’s newforms framework. Creating HTML forms and processing them, both of which, as any web developer would love to tell you with a grimace on their face, are extremely tedious and monotonous tasks, have never been easier with Django.
Part of the reason I like it is that each field in the form can be easily customised further. Take an example of a field in a form I created that accepts a picture. Instead of using one of the built-in fields provided by newforms, I extended newforms.Field, and created a custom PictureField which ensures the file specified has one of a couple of extensions. It raises a validation error otherwise, which automatically gets displayed on the page if the form object is first validated and then loaded into a template.
from django import newforms as forms
Within the form, I use it thusly:
class UserForm(forms.Form):
picture = PictureField(label="Personal Picture", required=False, widget=forms.FileInput)
It is nothing fancy, really, but, at least to me, being Pythonic all the way, it puts the fun back in web development. | https://ayaz.wordpress.com/2007/07/ | CC-MAIN-2018-30 | refinedweb | 2,264 | 65.32 |
Hello.
On Sat, 9 Jan 2016 00:27:37 -0600, Ole Ersoy wrote:
> Hi Gilles,
>
> On 01/08/2016 07:37 PM, Gilles wrote:
>> Hi Ole.
>>
>> Maybe I don't understand the point of the question but I'm
>> not sure that collecting here answers to implementation-level
>> questions is going to lead us anywhere.
> Well it could lead to an improved implementation :).
But we should know the target of the improvement.
I mean, is it a drop-in replacement of the current "RealVector"?
If so, how can it happen before we agree that Java 8 JDK can be
used in the next major release of CM?
If it's a redesign, maybe we should define a "wish list" of what
properties should belong to which concept.
E.g. for a "matrix" it might be useful to be mutable (as per
previous discussions on the subject), but for a (geometrical)
vector it might be interesting to not be (as in the case for
"Vector3D" in the "geometry" package).
The "matrix" concept probably requires a more advanced interface
in order to allow efficient implementations of basic operations
like multiplication...
>>
>> There is a issue on the bug-tracking system that started to
>> collect many of the various problems (specific and general)
>> of data containers ("RealVector", "RealMatrix", etc.) of the
>> "o.a.c.m.linear" package.
>>
>>
>> Perhaps it should more useful, for future reference, to list
>> everything in one place.
> Sure - I think in this case though we can knock it out fast.
> Sometimes when we list everything in one place people look at it, get
> a headache, and start drinking :). To me it seems like a vector that
> is empty (no elements) is different from having a vector with 1 or
> more 0d entries. In the latter case, according to the formula, the
> norm is zero, but in the first case, is it?
To be on the safe side, it should be an error, but I've just had
to let this kind of condition pass (cf. MATH-1300 and related on
the implemenation of "nextBytes(byte[],int,int)" feature).
>
>>
>> On Fri, 8 Jan 2016 18:41:27 -0600, Ole Ersoy wrote:
>>> public double getLInfNorm() {
>>> double norm = 0;
>>> Iterator<Entry> it = iterator();
>>> while (it.hasNext()) {
>>> final Entry e = it.next();
>>> norm = FastMath.max(norm, FastMath.abs(e.getValue()));
>>> }
>>> return norm;
>>> }
>>
>> The main problem with the above is that it assumes that the elements
>> of a "RealVector" are Cartesian coordinates.
>> There is no provision that it must be the case, and assuming it is
>> then in contradiction with other methods like "append".
>
> While experimenting with the design of the current implementation I
> ended up throwing the exception. I think it's the right thing to do.
> The net effect is that if someone creates a new ArrayVector(new
> double[]{}), then the exception is thrown, so if they don't want it
> thrown then they should new ArrayVector(new double[]{0}). More
> explanations of this design below ...
I don't know at this point (not knowing the intended usage).
[I think this is low-level discussion that is not impacting on the
design but would fixe an API at a too early stage.]
>>
>> At first (and second and third) sight, I think that these container
>> classes should be abandoned and replaced by specific ones.
>> For example:
>> * Single "matrix" abstract type or interface for computations in
>> the "linear" package (rather than "vector" and "matrix" types)
>> * Perhaps a "DoubleArray" (for such things as "append", etc.).
>> And by the way, there already exists "ResizableDoubleArray" which
>> could be a start.
>> * Geometrical vectors (that can perhaps support various coordinate
>> systems)
>> * ...
>
> I think we are thinking along the same lines here. So far I have the
> following:
> A Vector interface with only these methods:
> - getDimension()
> - getEntry()
> - setEntry()
And what is the concept that is being represented by this interface?
I think that is necessary to list use-cases so that we don't again
come up with a design that may prove not specific enough to satisfy
some requirements of the purported audience.
> An ArrayVector implements Vector implementation where the one and
> only constructor takes a double[] array argument. The vector length
> cannot be mutated. If someone wants to do that they have to create a
> new one.
Assuming we explore the 3 concepts I had listed above
* it cannot be "matrix" (since I supposed that a row or column matrix
could be of type "matrix" not "vector")
* it cannot be an appendable sequence, since the size is fixed.
* it cannot be a geometrical vector since "getEntry(int)" and
"setEntry(int, double)" are too low level to ensure consistency
under transformations (since we cannot assume that the entries
would always be Cartesian coordinates).
>
> A VectorFunctionFactory class containing methods that return Function
> and BiFunction instances that can be used to perform vector mapping
> and reduction. For example:
>
> /**
> * Returns a {@link Function} that produces the lInfNorm of the
> vector
> * {@code v} .
> *
> * Example {@code lInfNorm().apply(v);}
> * @throws MathException
> * Of type {@code NO_DATA} if {@code
> v1.getDimension()} == 0.
> */
> public static Function<Vector, Double> lInfNorm() {
> return lInfNorm(false);
> };
>
> /**
> * Returns a {@link Function} that produces the lInfNormNorm of
> the vector
> * {@code v} .
> *
> * Example {@code lInfNorm(true).apply(v);}
> *
> * @param parallel
> * Whether to perform the operation in parallel.
> * @throws MathException
> * Of type {@code NO_DATA} if {@code
> v.getDimension()} == 0.
> *
> */
> public static Function<Vector, Double> lInfNorm(boolean parallel)
> {
> return (v) -> {
> LinearExceptionFactory.checkNoVectorData(v.getDimension());
> IntStream stream = range(0, v.getDimension());
> stream = parallel ? stream.parallel() : stream;
> return stream.mapToDouble(i ->
> Math.abs(v.getEntry(i))).max().getAsDouble();
> };
> }
This is a nice possibility, but without a purpose, it could seem that
you just move the "operations" from the container class to another one.
It's cleaner, certainly, but could it be that the factory will end up
with as many conceptually incompatible operations as the current
design?
> So the design leaves more specialized structures like Sparce matrices
> to a different module. I'm not sure if this is the best design, but
> so far I'm feeling pretty good about it. WDYT?
So you were really working on the "matrix" design?
Did you look at what the requirements are for these structures
(e.g. for fast multiplication) and how they achieve it in other
packages (e.g. "ojalgo")?
If it's not about "matrix" but about blocks of (possibly
multi-dimensional)
data that can be "mapped" and "reduced", perhaps that the
one-dimensional
version (which seems what your new "Vector" is) should just be a
special
case of an interface for this kind of structure (?).
[It this latter case, the CM "MultidimensionalCounter" (in package
"util")
might be something that can be reused (?).]
Best regards,
Gilles
>
> Cheers,
> Ole
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org | http://mail-archives.apache.org/mod_mbox/commons-dev/201601.mbox/%3C76cef62496968f83eb664cb48503e7a9@scarlet.be%3E | CC-MAIN-2018-39 | refinedweb | 1,127 | 54.93 |
02 September 2008 18:52 [Source: ICIS news]
By Gene Lockard
HOUSTON (ICIS news)--Deadlock in the US ethylene glycol (EG) spot barge market appeared to have broken after a major producer dropped its price 10 cents, industry sources said on Tuesday.
MEGlobal’s announcement appeared to end a lengthy wrangle over feedstock volatility as the spot EG market attempted to cope with whipsaws in crude oil prices, they said.
The producer announced late last week a new EG contract benchmark of 55 cents/lb ($1,213/tonne, €825/tonne) for all grades of EG, including fibre-grade (EGF) and industrial-grade (EGI) product.
That was well under the August benchmark of 65 cents/lb, and could break the spot barge log jam, a trader said.
Spot barge EG prices are 49-50 cents/lb, according to global chemical market intelligence service ICIS pricing.
However, since late July, EG spot barge activity has been light to non-existent as producers tried to hold on to price gains following the run-up in feedstock and energy values earlier this year.
Buyers had pushed for price cuts amid the reversal in the crude oil market seen in recent weeks. With MEGlobal’s decision to reduce September values on the coattails of dropping crude oil prices, spot barge prices were expected to shift in buyers’ favor.
Earlier this summer, West Texas Intermediate (WTI) crude oil prices rocketed up almost daily, reaching a record-high of $147.27/bbl on 11 July.
That helped push EG feedstock ethylene net values to 74.5 cents/lb ?xml:namespace>
With record-high crude oil and feedstock prices stoking production costs, EG producers pushed through a 4-cent price increase in July, resulting in an average contract price for EFG of 61 cents/lb.
However, since then, sluggish demand from the lacklustre economy and a drop in energy prices prompted buyers to push for a cut in prices. WTI prices hit a low of $105.46/bbl in early trading on Tuesday.
Producers, noting that current inventory was produced when production costs were higher, were in no hurry to reduce prices.
They cited an increase in demand with the arrival of the antifreeze buying season, and the possibility of increased volatility in the crude oil market as the most active time of the hurricane season arrived.
As Hurricane Gustav approached the US Gulf production region, crude oil prices were poised to rocket up on fears of prolonged supply disruptions and rig damage. However, as Gustav weakened, the threat to supply dissipated, resulting in a 9% drop in crude oil on Tuesday, compared with last Friday’s close.
With Gustav’s passing, the market appeared resolved to accept stormy price movements as inevitable as the hurricane season itself.
($1=€0.68) | http://www.icis.com/Articles/2008/09/02/9153618/insight-us-eg-price-cut-spurs-spot-barge-market.html | CC-MAIN-2014-52 | refinedweb | 463 | 57.91 |
Talk:Multivalued function
From Wikipedia, the free encyclopedia
[edit] Improving content
As best I can tell, the relation discussed here is what economists call a correspondence. I've put a cross-reference in here, and added a mention of multivalued functions in the correspondence article. As far as I can tell these are two names for the same thing, used in different areas of math. Isomorphic 22:25, 29 July 2006 (UTC)
Perhaps the following part of History should be moved to Applications: [BeginQuote] In physics, multivalued functions play an increasingly important role (...) They are the origin of gauge field structures in many branches of physics. [EndQuote] Megaloxantha (talk) 14:36, 3 December 2008 (UTC)
Correspondence, multiset - even mathematicians like various terms for the same thing (to assert the context). In case of natural languages it is fair and has it's own name synonymy (sorry for math sarcasm). We only shall ensure that none of the contexts (e.g. correspondence in economy) is omitted. Megaloxantha
[edit] Misnomer?
What this "misnomer" is supposed to mean? I do not know the formal mathematical definition of such a term. Usually the functions are assumed single-valued but the general definition of a function relates elements from one set to the elements of another (or same) one. Even the ordinary sqrt(x) is having two values (not to mention sin-1(z))! It's just for convinience that usually only one of the values is deliverately chosen. Or the implicit functions are also a "misnomer". -- Goldie (tell me) 22:19, 24 August 2006 (UTC)
- 'Implicit function' is a misnomer, in the large. A careful statement of the implicit function theorem will only give a local existence theorem. Charles Matthews 14:03, 12 October 2006 (UTC)
[edit] A graphical, interactive example of a multi-valued function
Go to [1] to see an example of a multi-valued function. This came from a class titled Complex Analysis. This demonstrates how a function can be analytic in a region, but not in the entire complex plane. The input is shown in black, and the three possible outputs are shown in red, green, and blue. As long as you don’t go around or through one of the "bad" points (shown in pink) you can view this as three ordinary functions.
For additional examples see [2].
The documentation is out of date. If you want to download TCL, you will need to go to [3].
[edit] Output: single multiset
The square root of 4 is the multiset {+2,−2). The square root of zero is the multiset {0,0}, because zero is a double root of the equation x2=0. Using the concept of a multiset, the term 'multivalued function' ceases to be a misnomer. Any comments? Bo Jacoby 16:33, 14 December 2006 (UTC)
[edit] Output: single multiset or single value
As far as I know, some authors accept that the codomain of a multivalued function is a set of sets or multisets, but many others interpret multivalued functions as functions which return a single (arbitrarily selected) value. For instance, many define the indefinite integral of f as one of the infinitely many antiderivatives of f. This is obviously convenient. Consider this question:
- Does the square root of x return (1) all the numbers the square of which is x, or (2) any number the square of which is x?
In other words, is its output a multiset with two elements or a single (not uniquely determined, and arbitrarily selected) number? In other words, does the algorithm imply the process of "collecting all possible solutions" or the process of "arbitrary selection of only one solution"? I guess that some mathematicians will defend the second option.
It is quite intersting to notice that the second option implies what follows:
- the square root is regarded as a multivalued function but, paradoxically, it has a single-valued output, and
- its codomain is simply R (rather than a set of multi-sub-sets of R).
Note that, in both cases, the square root returns a single value (either a single multiset or a single number). THis example can be generalized to all multivalued functions.
Conclusion. It seems that we have only two options for defining a multivalued function:
- a multivalued fonction is single valued and uniquely determined.
- a multivalued fonction is single valued but nonuniquely determined.
Multivalued functions are actually single valued! Paolo.dL 21:29, 27 September 2007 (UTC)
[edit] Notation for multifunctions
I have seen
and
used to donate multivalued functions, as in:
and
(Priestley, H. A. (2006). Introduction to Complex Analysis, Second Edition, Oxford University Press. Chapters 7 and 9.)
129.67.19.252 02:18, 26 October 2007 (UTC) | http://ornacle.com/wiki/Talk:Multivalued_function | crawl-002 | refinedweb | 783 | 54.83 |
Overview
Atlassian Sourcetree is a free Git and Mercurial client for Windows.
Atlassian Sourcetree is a free Git and Mercurial client for Mac.
Python module for creating a Waverose - an adapted windrose. Expects two Numpy arrays of equal length to be passed.
Usage example:
import waverose import numpy # Second numpy array represents directions, First values for the color # banding, ylim represents fixed radius waverose.WaveRose(np.arange(0,10,0.112),np.arange(0,360,4),"Test",ylim=18)
One array for the values which will be represented in the binned colours. Typically significant wave height or wave power. The second array being the corresponding directions in degrees.
By default radial position indicates percentage of occurence within that directional bin, nsectors defines directional binning, colour bin values can be passed on call.
Windrose module adapted from windrose.py
Requires Matplotlib ( Tested with 1.2.1 and Python 2.7.3 )
| https://bitbucket.org/jamesmorrison/waverose | CC-MAIN-2017-39 | refinedweb | 150 | 52.05 |
#include <Servo.h>Servo myservo;void setup() { myservo.attach(9); //Attach servo on pin 9 }void loop(){myservo.write(0); delay(3000); myservo.write(120); delay(3000);}
Have you provided the servo with its own power, as opposed to driving it from the Arduino 5v? That's a good place to start when faced with servo problems: check the power. That "real" servo may well be trying to draw more current than the "toy" one, and if you're powering it from the Arduino, it may not be getting enough. Best practice is to power servo from outside the Arduino.... if the Arduino can't supply enough current you have no choice anyway.
My pic below shows how to hook two servos up to their own power supply, if you don't already know this. Note that the external power's ground is hooked to the Arduino ground, else the control signal on the yellow has no reference 0V.
Just out of curiocity: if the problem is the current, could I potentially use 5v fromthe arduino and a transistor to amplify the current?
QuoteJust out of curiocity: if the problem is the current, could I potentially use 5v fromthe arduino and a transistor to amplify the current?A transistor cannot 'amplify' current. It can however control higher current if wired to a voltage source that has more current capacity. So your problem is that the arduino can only supply a limited amount of +5vdc current for external stuff and servos are almost always needing more then can be reliably supplied by the arduino. There is no magic component of device that can allow an arduino to supply more 5V current they it presently can, that simply takes an external voltage source with a higher current capacity.Lefty
The 2-battery holder is for demonstration, I guess? | http://forum.arduino.cc/index.php?topic=145834.0;prev_next=prev | CC-MAIN-2014-52 | refinedweb | 306 | 62.88 |
Subject: Re: [boost] Name of namespace detail
From: Mateusz Loskot (mateusz_at_[hidden])
Date: 2009-10-12 19:57:16
Jean-Louis Leroy wrote:
>> I use detail name myself. Any better names for bucket with
>> implementation details?
>
>
> What about `namespace private_` ? Even if access won't be controlled
> by the compiler.
That's my favourite replacement. The underscore suffix in name could be
easily linked with quite popular use of it in naming private class members:
struct T
{
private:
int value_;
int foo_() { ... }
}
> And also a `namespace protected_` if there are useful things to put
> there ?
Doesn't really work for me. If something is useful (assuming for a
library client), put it to public namespace. Second would be too much,
convention became too complex, so vague. I think. | https://lists.boost.org/Archives/boost/2009/10/156961.php | CC-MAIN-2021-49 | refinedweb | 128 | 67.86 |
with Arrays
Mike Brooks
Greenhorn
Joined: Mar 08, 2006
Posts: 21
posted
Apr 01, 2007 19:39:00
0
I have a program that adds passengers to a seating chart. I have a first class array and a second class array.
In my main class, option 1 to add a passenger. Right now im trying to get it working for 1 passenger and then i'll move getting it with up 2 passengers at a time for first class and up to 3 passengers at a time for economy class. Which if done in groups, it need to be on side same side and same row and give back an error
msg
.
Heres my problem. When im done adding the first seat it goes back to the airline class for the main menu which asks if the user wants to add more passengers. If so then it goes back to the Reservation class, but then the first and economy class array are set back to blank. I need to somehow keep the information that has already been added and build onto that array. As you see i tried using tempArray to figure it out, but can't seem to get it working.
Heres the main Airline class
import java.util.*; public class Airline { public static void main(String args[]) { Scanner scan = new Scanner(System.in); int mainOption = 0, passengers = 0; String seatingClass = "e"; Reservation reserveSeat = new Reservation(); SeatingChart displaySeat = new SeatingChart(); while (mainOption != 3){ mainOption = 0; // Menu loop while (mainOption < 1 || mainOption > 3) { System.out.println("Welcome to the Airline. Please choose an option.\n"); System.out.println( "Press 1 to add passengers. \nPress 2 to display the " + "seating chart. \nPress 3 to Quit."); mainOption = scan.nextInt(); } // if statement when 1 is chosen to add passengers if (mainOption == 1) { Scanner input = new Scanner(System.in); // User chooses first or economy class do { System.out.println("Type E for economy or F for first class:"); seatingClass = input.nextLine(); } while (!seatingClass.equalsIgnoreCase("e") && !seatingClass.equalsIgnoreCase("f")); // If statement to get number of passengers for economy class if (seatingClass.equalsIgnoreCase("e")) { do { System.out.println("Enter the numbers of passengers: "); passengers = scan.nextInt(); } while (passengers < 1 || passengers > 3); } // If statment to get number of passengers for first class if (seatingClass.equalsIgnoreCase("f")) { do { System.out.println("Enter the numbers of passengers: "); passengers = scan.nextInt(); } while (passengers < 1 || passengers > 2); } reserveSeat.addPassengers(seatingClass, passengers); // Prints out seating chart System.out.println(reserveSeat); } // if statement when 2 is chosen to display seating chart if (mainOption == 2) { System.out.println(displaySeat); } // if statement when 3 is chosen to quit if (mainOption == 3) { System.exit(0); } } } }
Reservation Class
public class Reservation { private String seatClassR; private int numPassengersR, count = 0; // First class has 8 seats (2 rows of 4 seats each) String[] firstClassSeat = new String[8]; // Economy class has 24 seats (4 rows of 6 seats each) String[] economyClassSeat = new String[18]; // Temp array for first class String[] tempArrayF = new String[8]; // Temp array for economy class String[] tempArrayE = new String[18]; public Reservation(){ } public void addPassengers(String c, int p){ seatClassR = c; numPassengersR = p; if (count < 1){ for (int i = 0; i < firstClassSeat.length; i++) firstClassSeat[i] = "-"; for (int i = 0; i < economyClassSeat.length; i++) economyClassSeat[i] = "-"; } if (count > 0){ for (int i = 0; i < tempArrayF.length; i++) { firstClassSeat[i] = tempArrayF[i]; } for (int i = 0; i < tempArrayE.length; i++) { economyClassSeat[i] = tempArrayE[i]; } } // First class statements if (seatClassR.equalsIgnoreCase("f")) { if (numPassengersR == 1){ int countF = 0; for (int i = 0; i < firstClassSeat.length; i++){ if (countF < 1){ // If seat is blank - then fill seat with x if (firstClassSeat[i] == "-"){ firstClassSeat[i] = "x"; countF++; } } } } } // Economy class statements if (seatClassR.equalsIgnoreCase("e")) { if (numPassengersR == 1){ int countE = 0; for (int i = 0; i < economyClassSeat.length; i++) { if (countE < 1){ // If seat is blank - then with seat with x if (economyClassSeat[i] == "-"){ economyClassSeat[i] = "x"; countE++; } } } } } for (int i = 0; i < tempArrayF.length; i++){ tempArrayF[i] = firstClassSeat[i]; } for (int i = 0; i < tempArrayE.length; i++){ tempArrayE[i] = economyClassSeat[i]; } count++; } /*----------------------------- Getter Methods -----------------------------*/ public String getSeatingClass() { return seatClassR; } public int getNumPassengers() { return numPassengersR; } public String[] getFirstClassSeat(){ return firstClassSeat; } public String[] getEconomyClassSeat(){ return economyClassSeat; } public String toString(){ int f = 0, e = 0; String text = ""; while (f < 8) { if (f == 0 || f == 4 || f == 2 || f == 6) { text += firstClassSeat[f]; f++; } if (f == 1 || f == 5) { text += firstClassSeat[f] + "\t"; f++; } if (f == 3 || f == 7) { text += firstClassSeat[f] + "\n"; f++; } } while (e < 18) { if (e == 0 || e == 1 || e == 3 || e == 4 || e == 6 || e == 7 || e == 9 || e == 10 || e == 12 || e == 13 || e == 15 || e == 16) { text += economyClassSeat[e]; e++; } if (e == 2 || e == 8 || e == 14) { text += economyClassSeat[e] + "\t"; e++; } if (e == 5 || e == 11 || e == 17) { text += economyClassSeat[e] + "\n"; e++; } } return text; } }
Display class
public class SeatingChart extends Reservation{ String seatClassS; int numPassengersS; // First class has 8 seats (2 rows of 4 seats each) String[] firstClassDisplayS = new String[8]; // Economy class has 24 seats (4 rows of 6 seats each) String[] economyClassDisplayS = new String[24]; Reservation displaySeat = new Reservation(); public SeatingChart(){ displaySeat.toString(); } /*---------------------------------------- Getter Methods ----------------------------------------*/ public String[] getFirstClassDisplay(){ return firstClassDisplayS; } public String[] getEconomyClassDisplay(){ return economyClassDisplayS; } }
[ April 02, 2007: Message edited by: Jon Martin ]
[ April 02, 2007: Message edited by: Jon Martin ]
Campbell Ritchie
Sheriff
Joined: Oct 13, 2005
Posts: 34513
13
posted
Apr 02, 2007 01:54:00
0
Look at the while loop which calls up the main menu . . .
pete stein
Bartender
Joined: Feb 23, 2007
Posts: 1561
posted
Apr 02, 2007 06:45:00
0
Look closely at what it is that is being zero'd out each time.
Also, would it be worthwhile to have a separate seat class and then have your reservation object hold two arrays (first and second class) of seat objects?
[ April 02, 2007: Message edited by: pete stein ]
Mike Brooks
Greenhorn
Joined: Mar 08, 2006
Posts: 21
posted
Apr 02, 2007 18:59:00
0
I know its being set back to blank seat "-", but if that is removed then the array is not initialized.
I need it to be initialized the first time around and then be able to keep modifing the already initialized array, but it keeps getting set back to "-".
I updated it some which still doesn't work....code above in first post.
pete stein
Bartender
Joined: Feb 23, 2007
Posts: 1561
posted
Apr 02, 2007 19:13:00
0
The error is not hard to find. Look at everywhere you are calling your object's constructor. Each time you call a constructor the object starts afresh. That is why I wonder if you need a "seat" class that you can initialize instead.
Mike Brooks
Greenhorn
Joined: Mar 08, 2006
Posts: 21
posted
Apr 02, 2007 19:46:00
0
nvm.....working on addPassengers method. Thanks for help, hopefully this does it.
[ April 02, 2007: Message edited by: Jon Martin ]
Mike Brooks
Greenhorn
Joined: Mar 08, 2006
Posts: 21
posted
Apr 02, 2007 21:07:00
0
Ok, with that working I came across one more problem.
If I choose display the seating chart before adding any seat, I get back null. I figured before the arrays are not initialized, but even if I add some passengers and display the seating chart I get back null. Any suggestions?
THE CODE IS UPDATED IN THE FIRST POST
pete stein
Bartender
Joined: Feb 23, 2007
Posts: 1561
posted
Apr 03, 2007 22:14:00
0
I've noticed several things of concern in your code:
In your airline class you have two distinct objects, reserveSeat an object of Reservation Class) and seatingClass an object of SeatingChart Class which is a child of the Reservation class. When you are adding reservations, all of your interactions are with the reserveSeat object, never with the SeatingClass object. The seatingClass object can't output any useful information if no information has been stored into it.
I don't see the purpose that the SeatingChart class serves, and why it is a child class of Reservation when it inherits no methods and no states from the parent class.
Your child class (SeatingChart) contains an instance of the parent class (Reservation) object called displaySeat. In SeatingChart's (the child object's) initialization code you instantiate the displaySeat object, and then in the constructor you call its display method. I think that you assume that the displaySeat object, since it is an instance of the parent Reservation class will automatically have all of the information from the reserveSeat object (another Reservation class object contained in the main method of the Airline class), but this assumption is wrong. Each object is unique and contains its own independent non-static information.
I am no expert on OOP design, so I can only tell you so much, but I believe that your overall object model is broken and that you should rethink the object model design from square one.
Possible ideas include:
Have a Seat class that contains a boolean "occupied" variable, a "seatNumber"
String
var, a "passengerName" String var and a "seatingLevel" variable (i.e., first or second class).
An class called seatingLevel that holds arrays of seats and methods to access them, change them, and display them, can display number of seats available,...
An airplane class that holds two instances of seatingLevel, one for first class and one for coach.
There are many possible iterations that you could use. You might even want to put this project on hold for a bit and restudy a chapter or two on object oriented design before redesigning this. Learn how the pros do it first. I recommend
Just Java
or
Head First Java
as good beginners texts, but there are many others that are worthwhile too.
Good luck!
Pete
ps: have you thought about joining the Cattle Drive here? I have recently done just that and am learning quite a bit.
[ April 04, 2007: Message edited by: pete stein ]
Did you see how Paul
cut 87% off of his electric heat bill with 82 watts of micro heaters
?
subject: Help with Arrays
Similar Threads
Write a method that can be called, that will initialize the seating plan.
Array Problem....sort of
Need some help with Java program Airline.java
do {} while statement illegal start of type
Simple example using copyofRange () from Arrays Class won't compile
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/406586/java/java/Arrays | CC-MAIN-2013-48 | refinedweb | 1,738 | 60.14 |
arith-eval 0.5.1
A minimal math expression evaluation (including none), then evaluate that function giving those variables the values you want. It is NOT designed to be efficient, just easy to use. Bear this in mind if your application requires time-sensitive evaluations.
This library is licensed under the MIT software license.
How to use
Minimal the library, minimal the tutorial, really. Just instance the
Evaluable struct from the
arith_eval package and define your function:
import arith_eval; auto constantExpression = Evaluable!()("2 + 5"); assert(constantExpression() == 7); auto a = Evaluable!("x", "y")("(x + y) * x - 3 * 2 * y"); assert(a.eval(2, 2) == (2 + 2) * 2 - 3 * 2 * 2); assert(a.eval(3, 5) == (3 + 5) * 3 - 3 * 2 * 5); auto b = Evaluable!("x", "z")("x ^ (2 * z)"); assert(b.eval(1.5f, 1.3f).approxEqual(1.5f ^^ (2 * 1.3f));
Evaluable is a struct template that takes the name of its variables as its template parameters.
Evaluable will throw an
InvalidExpressionException if it isn't able to understand the expression given, and an
EvaluationException if something wrong happens during evaluation, such as the value reaching unreliably high values.)
x E y(
xtimes
10to the power of
y)
Parenthesis should work wherever you place them, respecting basic math operation priorities.
If you are missing a specific operation, open an issue or submit a PR.
Add as DUB dependency
Just add the
arith-eval package as a dependency in your dub.json or dub.sdl file. For example:
"dependencies" : { "arith-eval": "~>0.5.0" }
Attributions
The following pieces of work make this library possible:
- Pegged, by Philippe Sigaud, released under the Boost license. Used for input parsing.
- unit-threaded, by Atila Neves, released under the BSD-3-Clause license. Used for the testing of the library.
- Registered by Héctor Barreras Almarcha
- 0.5.1 released 4 years ago
- Dechcaudron/ArithEval
- MIT
- Authors:
-
- Dependencies:
- pegged
- Versions:
- Show all 10 versions
- Download Stats:
0 downloads today
0 downloads this week
0 downloads this month
175 downloads total
- Score:
- 0.3
- Short URL:
- arith-eval.dub.pm | https://code.dlang.org/packages/arith-eval | CC-MAIN-2022-33 | refinedweb | 343 | 51.24 |
You say, i must changed || to && ?
--- Update ---
Thanks :) It help :)
You say, i must changed || to && ?
--- Update ---
Thanks :) It help :)
Hello guys, so i want create little program which enter the number, ant program says triangle exist or not. So code :
import java.awt.*;
import java.awt.event.ActionEvent;
import...
Can you tell me more about String builder?
Hello all. So i creating Calculator and i have a problem. First see my code :
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.*;...
I follow this full screen tutorial by youtube thenewboston. Here Screen class:
package Test;
import java.awt.*;
import javax.swing.*;
public class Screen extends JFrame {
Hi guys. So i create little program. So window is in full screen. But i need help with paint method. I think I this method not understeand fully. So code:
package Test;
import java.awt.*;...
Very very thank you andbin :)
Hello guys :) I need a little help(I think). So i start work with TxT files. And i need know how to write text in the next line if file exist. My code just last text replace new.. Here code :
Main :...
Thanks all for help :) GregBrannon, i dunno :D
Sorry me for stupid question. Java is so big. So i learn how much can... My OS windows xp.. Last question. I understeand, applet is only for web yes? So if i wanna this snake play like desktop...
Yes, you right.. I dont understeand, where write code: jar cvf Snake.jar snakeCanvas.class snakeApplet.class TyleType.class Direction.class
Its not help..
Hi guys. So me again need help :) So i create a snake game. Ant this game have more than 1 class. So i need this thing export to jar.. So i have question:
1.Do i have insert to public static void...
System.out.println("Thank you so much :)))))");
Thanks, but i watch a lot tut and this code :
public void setImage() {
ImageIcon u = new ImageIcon(getClass().getResource("/Image/Tikras2.gif"));
face = u.getImage();
}
Hi guys :) I need a little help :) So, i create a little game, and i want this game sent my friend. Problem is model. When i sent game, my friend not see a model. I Sent this thing in the folder with...
Thanks :) I fix it :) THANK THANK!!!!!
I mean Main class which have a public static void main(String[]args) { } method. :)
Hello guys. I wanna ask you, how to post variable from any class in the main class? I know how to post from main class in the next class, but dunno how to post from next class in the main..
Can you...
Okey. Thanks for help :)
Very thank You. I analized this code. But maybe you can show this thing inputed in my code? :)
I dont understeand. Can you write this thing in code ? Thanks...
Hey guys. I need help. Im java newbie and i write small program. I want post my integer variables in the next class. Here a code(I wanna change answer color):
Main class(pagrindine.java):
import... | http://www.javaprogrammingforums.com/search.php?s=c7cc92091e9741639f3ce84692896a15&searchid=1665938 | CC-MAIN-2015-32 | refinedweb | 512 | 88.02 |
Can't QImage save BMPs with 16 bits color depth?
Hello,
I'm trying to use QImage to save 16-bit bmps (Format_RGB16, Format_RGB555, Format_RGB444), but it always saves it as 32 bits.
Here's a test code snippet:
@#include <QtCore>
#include <QImage>
#include <QTextStream>
#include <QFile>
int main(int argc, char *argv[])
{
QTextStream out(stdout);
if (argc < 2)
out << "Usage: " << argv[0] << " <filename> [<filename> ...]" << endl;
for (int i = 1; i < argc; i++) { QString fname = argv[i]; if (!QFile::exists(fname)) out << "File not found: " << fname << endl; else { QImage img(fname); out << "Converting: " << fname << endl; out << "before: " << img.depth() << endl; QImage img2 = img.convertToFormat(QImage::Format_RGB16); out << "after: " << img2.depth() << endl; img2.save("new_" + fname); img2.load("new_" + fname); out << "reloaded: " << img2.depth() << endl; } } out << "Finished." << endl;
}@
That results in:
$ ./makeRgb16 scrMain.bmp
Converting: scrMain.bmp
before: 32
after: 16
reloaded: 32
Finished.
In other words, while loaded in the QImage object it says it has a 16bit color depth, but after saving and loading the same image, it comes back as 32 (and the resulting "16bit" file size is identical to the original file).
Also strange is that identify (from imagemagick) says the bmp files are 24 bits (RGB888), not 32.
Is this "as designed", or a bug? I'm using Qt 4.7.0 (kubuntu 10.10).
Thanks!
Joao S Veiga
I could reproduce this problem, only converting to Format_Indexed8 seemed to work, none of the RGB or ARGB formats work. I think you should log a bug for this | https://forum.qt.io/topic/3833/can-t-qimage-save-bmps-with-16-bits-color-depth | CC-MAIN-2018-51 | refinedweb | 252 | 77.53 |
The vast majority of Apple’s devices come with biometric authentication as standard, which means they use fingerprint and facial recognition to unlock. This functionality is available to us too, which means we can make sure that sensitive data can only be read when unlocked by a valid user.
This is another Objective-C API, but it’s only a little bit unpleasant to use with SwiftUI, which is better than we’ve had with some other frameworks we’ve looked at so far.
Before we write any code, you need to add a new key to your Info.plist file, explaining to the user why you want access to Face ID. For reasons known only to Apple, we pass the Touch ID request reason in code, and the Face ID request reason in Info.plist.
Open Info.plist now, right-click on some space, then choose Add Row. Scroll through the list of keys until you find “Privacy - Face ID Usage Description” and give it the value “We need to unlock your data.”
Now head back to ContentView.swift, and add this import near the top of the file:
import LocalAuthentication
OK, we’re all set to use biometrics. I mentioned earlier this was “only a little bit unpleasant”, and here’s where it comes in: Swift developers use the
Error protocol for representing errors that occur at runtime, but Objective-C uses a special class called
NSError. Because this is an Objective-C API we need to use
NSError to handle problems, and pass it using
& like a regular
inout parameter.
We’re going to write an
authenticate() method that isolates all the biometric functionality in a single place. To make that happen requires four steps:
LAContext, which allows us to query biometric status and perform the authentication check.
Please go ahead and add this method to
ContentView:
func authenticate() { let context = LAContext() var error: NSError? // check whether biometric authentication is possible if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) { // it's possible, so go ahead and use it let reason = "We need to unlock your data." context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, localizedReason: reason) { success, authenticationError in // authentication has now completed DispatchQueue.main.async { if success { // authenticated successfully } else { // there was a problem } } } } else { // no biometrics } }
That method by itself won’t do anything, because it’s not connected to SwiftUI at all. To fix that we need to do add some state we can adjust when authentication is successful, and also an
onAppear() modifier to trigger authentication.
So, first add this property to
ContentView:
@State private var isUnlocked = false
That simple Boolean will store whether the app is showing its protected data or not, so we’ll flip that to true when authentication succeeds. Replace the
// authenticated successfully comment with this:
self.isUnlocked = true
Finally, we can show the current authentication state and begin the authentication process inside the
body property, like this:
VStack { if self.isUnlocked { Text("Unlocked") } else { Text("Locked") } } .onAppear(perform: authenticate)
If you run the app there’s a good chance you just see “Locked” and nothing else. This is because the simulator isn’t opted in to biometrics by default, and we didn’t provide any error messages, so it fails silently.
To take Face ID for a test drive, go to the Hardware menu and choose Face ID > Enrolled, then launch the app again. This time you should see the Face ID prompt appear, and you can trigger successful or failed authentication by going back to the Hardware menu and choosing Face ID > Matching Face or Non-matching Face.
All being well you should see the Face ID prompt go away, and underneath it will be the “Unlocked” text view – our app has detected the authentication, and is now open to. | https://www.hackingwithswift.com/books/ios-swiftui/using-touch-id-and-face-id-with-swiftui | CC-MAIN-2020-16 | refinedweb | 626 | 50.77 |
Day 19: Performance and cache
Previously on symfony
As the advent calendar days pass, you are getting more comfortable with the symfony framework and its concepts. Developing an application like askeet is not very demanding if you follow the good practices of agile development. However, one thing that you should do as soon as a prototype of your website is ready is to test and optimize its performance.
The overhead caused by a framework is a general concern, especially if your site is hosted in a shared server. Although symfony doesn't slow down the server response time very much, you might want to see it yourself and tweak the code to speed up the page delivery. So today's tutorial will be focused on the performance measurement and improvement.
Load testing tools
Unit tests, described during the fifteenth day, can validate that the application works as expected if there is only one user connected to it at a time. But as soon as you release your application on the Internet - and that's the least we can wish for you - hordes of hectic fans will rush to it simultaneously, and performance issues may occur. The web server might even fail and need a manual restart, and this is a really painful experience that you should prevent at all costs. This is especially important during the early days of your application, when the first users quickly draw conclusions about it and decide to spread the word or not.
To avoid performance issues, it is necessary to simulate numerous concurrent access to your website to see how it reacts - before releasing it. This is called load testing. Basically, you program an automate to post concurrent requests to your web server, and measure the return time.
note
Whatever load testing tool you choose, you should execute it on a different server than the one running the website. This is because the testing tools are generally CPU consuming, and their own activity could perturb the results of the server performance. In addition, do your tests in a local network, to avoid disturbance due to the external network components (proxy, firewall, cache, router, ISP, etc.).
JMeter
The most common load testing tool is JMeter, and it is an open-source Java application maintained by the Apache foundation. It has impressive online documentation to help you get started using it, including a good introduction about load testing.
To install it, retrieve the latest stable version (currently 2.1.1) in the Jmeter download page. You'll also need the latest version of the Java runtime environment which you can find on Sun's site. To start JMeter, locate and run the
jmeter.bat file (in Windows platforms) or type
java jmeter.jar (in Linux platforms).
The way to setup a load testing plan, called 'Web test plan', is described in detail in the related page of the JMeter documentation, so we will not describe it here.
note
Not only does JMeter report about average response time for a given request or set of requests, it can also do assertions on the content of the page it receives. So, in addition to using JMeter as a load testing tool, you can build scenarios to do regression tests and unit tests.
Apache's ab
The second tool recommended by symfony is ApacheBench, or ab, another nice utility brought to you by the Apache foundation. Its online manual is less detailed than JMeter's, but as ab is a command line tool, it is easier to use.
In Linux, it comes standard with the Apache package, so if you have an installed Apache server, you should find it in
/usr/local/apache/bin/ab. In Windows platforms, it is much harder to find, so you'd better download it directly from symfony.
The use of this benchmarking tool is very simple:
$ /usr/local/bin/apache2/bin/ab -c 1 -n 1 This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0 Copyright 1996 Adam Twiss, Zeus Technology Ltd, Copyright 1998-2002 The Apache Software Foundation, Benchmarking (be patient).....done Server Software: Apache Server Hostname: Server Port: 80 Document Path: / Document Length: 15525 bytes Concurrency Level: 1 Time taken for tests: 0.596104 seconds Complete requests: 1 Failed requests: 0 Write errors: 0 Total transferred: 15874 bytes HTML transferred: 15525 bytes Requests per second: 1.68 [#/sec] (mean) Time per request: 596.104 [ms] (mean) Time per request: 596.104 [ms] (mean, across all concurrent requests) Transfer rate: 25.16 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 61 61 0.0 61 61 Processing: 532 532 0.0 532 532 Waiting: 359 359 0.0 359 359 Total: 593 593 0.0 593 593
note
you need to provide a page name (at least
/ like in the above example) because targeting only a host will give an incorrectly formatted URL error.
The
-c and
-n parameters define the number of simultaneous threads, and the total number of requests to execute. The most interesting data in the result is the last line: the average total connection time (second number from the left). In the example above, there is only one connection, so the connection time is not very accurate. To have a better view of the actual performance of a page, you need to average several requests and launch them in parallel:
$ /usr/local/bin/apache2/bin/ab -c 10 -n 20 ... Connection Times (ms) min mean[+/-sd] median max Connect: 59 88 19.9 89 130 Processing: 831 1431 510.9 1446 3030 Waiting: 632 1178 465.1 1212 2781 Total: 906 1519 508.4 1556 3089 Percentage of the requests served within a certain time (ms) 50% 1556 66% 1569 75% 1761 80% 1827 90% 2285 95% 3089 98% 3089 99% 3089 100% 3089 (longest request)
You should always start by a
ab -c 1 -n 1 to have an idea of the time taken by the test itself before executing it on a larger number of requests. Then, increase the number of total requests (like
ab -c 1 -n 30) until you have a reasonably low standard deviation. Only then will you have a significant average connection time measure, and you will be ready for the actual load test. Add threads little by little (and don't forget to increase the total number of requests accordingly, like
ab -c 10 -n 300) and see the connection time increase as your server load is being handled. When the average loading times pass beyond a few seconds, it means that your server is outnumbered and can probably not support more concurrent threads. You have determined the maximum charge of your service. This is called a stress test.
note
Please be kind enough not to stress test any running website in the Internet but your own. Doing stress test on a foreign site is considered as a denial-of-service attack. The askeet website is no different, so once again, please do not stress test it.
The load tests will provide you with two important pieces of information: the average loading time of a specific page, and the maximum capacity of your server. The first one is very useful to monitor performance improvements.
Improve performances with the cache
There are a lot of ways to increase the performance of a given page, including code profiling, database request optimization, addition of indexes, creation of an alternative light web server dedicated to the media of the website, etc. Existing techniques are either cross-language or PHP-specific, and browsing the web or buying a good book about it will teach you how to become a performance guru.
Symfony adds a certain overload to web requests, since the configuration and the framework classes are loaded for each request, and because the MVC separation and the ORM abstraction result in more code to execute. Although this overhead is relatively low (as compared to other frameworks or languages), symfony also provides ways to balance the response time with caching. The result of an action, or even a full page, can be written in a file on the hard disk of the web server, and this file is reused when a similar request is requested again. This considerably boosts performance, since all the database accesses, decoration, and action execution are bypassed completely. You will find more information about caching in symfony in the cache chapter of the symfony book.
We will try to use HTML cache to speed up the delivery of the popular tags page. As it includes a complex SQL query, it is a good candidate for caching. First, let's see how long it takes to load it with the current code:
$ ab -c 1 -n 30 ... Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.0 0 0 Processing: 147 148 2.4 148 154 Waiting: 138 139 2.3 139 145 Total: 147 148 2.4 148 154 ...
Put the result of the action in the cache
warning
The following will not work on symfony 0.6. Please jump to the next section until this tutorial is updated.
The action executed to display the list of popular tags is
tag/popular. To put the result of this action in cache, all we have to do is to create a
cache.yml file in the
askeet/apps/frontend/modules/tag/config/ directory with:
popular: activate: on type: slot all: lifeTime: 600
This activates the
slot type cache for this action. The result of the action (the view) will be stored in a file in the
cache/frontend/prod/template/askeet/popular_tags/slot.cache file, and this file will be used instead of calling the action for the next 600 seconds (10 minutes) after it has been created. This means that the popular tags page will be processed every ten minutes, and in between, the cache version will be used in place.
The caching is done at the first request, so you just need to browse to:
...to create a cache version of the template. Now, all the calls to this page for the next 10 minutes should be faster, and we will check that immediately by running the Apache benchmarking tool again:
$ ab -c 1 -n 30 ... Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.0 0 0 Processing: 137 138 2.0 138 144 Waiting: 128 129 2.0 129 135 Total: 137 138 2.0 138 144 ...
We passed from an average of 148ms to 138ms, that's a 7% increase in performance. The cache system improves the performance in a significant way.
note
The
slot type doesn't bypass the decoration of the page (i.e. the insertion of the template in the layout). We can not put the whole page in cache in this case because the layout contains elements that depend on the context (the user name in the top bar for instance). But for non-dynamic layouts, symfony also provides a
page type which is even more efficient.
Build a staging environment
By default, the cache system is deactivated in the development environment and activated in the production environment. This is because cached pages, if not configured properly, can create new errors. A good practice concerning the test of a web application including cached page is to build a new test environment, similar to the production one, but with all the debug and trace tools available in the development environment. We often call it the 'staging' environment. If an error occurs in the staging environment but not in the development environment, then there are many chances that this error is caused by a problem with the cache.
When you develop a functionality, make sure that it works properly in the development environment first. Then, change the cache parameters of the related actions to improve performance, and test it again in the staging environment to see if the caching system doesn't create functional perturbation. If everything works fine, you just need to execute load tests in the production environment to measure the improvement. If the behaviour of the application is different than in the development environment, you need to review the way you configured the cache. Unit tests can be of great help to make this procedure systematic.
In order to create the staging environment, you need to add a new front controller and to define the environment's settings.
Copy the production front controller (
askeet/web/index.php) into a
askeet/web/frontend_staging.php file, and change its definition to:
<?php define('SF_ROOT_DIR', realpath(dirname(__FILE__).'/..')); define('SF_APP', 'frontend'); define('SF_ENVIRONMENT', 'staging'); define('SF_DEBUG', false); require_once(SF_ROOT_DIR.DIRECTORY_SEPARATOR.'apps'.DIRECTORY_SEPARATOR.SF_APP.DIRECTORY_SEPARATOR.'config'.DIRECTORY_SEPARATOR.'config.php'); sfContext::getInstance()->getController()->dispatch(); ?>
Now, open the
askeet/apps/frontend/config/settings.yml, and add the following lines:
staging: .settings: web_debug: on cache: on no_script_name: off
That's it, the staging environment, with web debug and cache activated, is ready to be used by requesting:
Put a template fragment in the cache
As many of the askeet pages are made of dynamic elements (a question description, for instance, contains an 'interested?' link which might be turned into simple text if the user displaying it already clicked on it), there are not many
slot cache type candidates in our actions. But we can put chunks of templates in cache, like for instance the list of tags for a specific question. This one is trickier than the popular tag cloud, because the cache of this chunk has to be cleared every time a user adds a tag to this question. But don't worry, symfony makes it easy to handle.
To measure the improvement, we need to know the current average loading time of the
question/show page.
$ ab -c 1 -n 30
First of all, the list of tags for a question has two versions: one for unregistered users (it is a tag cloud), and the other for registered users (it is a list of tags with delete links for the tags entered by the user himself). We can only put in cache the tag cloud for unregistered users (the other one is dynamic). It is located in the
tag/_question_tags template partial. Open it (
askeet/apps/frontend/modules/tag/templates/_question_tags.php) and enclose the fragment that has to be cached in a special
if(!cache()) statement:
... <?php if ($sf_user->isAuthenticated()): ?> ... <?php else: ?> <?php if (!cache('question_tags', 3600)): ?> <?php include_partial('tag/tag_cloud', array('tags' => QuestionTagPeer::getPopularTagsFor($question))) ?> <?php cache_save() ?> <?php endif ?> <?php endif ?>
The
if(!cache()) statement will check if a version of the fragment enclosed (called
fragment_question_tags.cache) already exists in the cache, with an age not older than one hour (3600 seconds). If this is the case, the cache version is used, and the code between the
if(!cache()) and the
endif is not executed. If not, then the code is executed and its result saved in a fragment file with
cache_save().
Let us see the performance improvement caused by the fragment cache:
$ ab -c 1 -n 30
Of course, the improvement is not as significant as with a
slot type cache, but doing lots of little optimizations like this one can bring an appreciable enhancement to your application.
note
Even if originally called by the
sidebar/question action, the cache fragment file is located in
cache/frontend/prod/template/askeet/question/what-can-i-offer-to-my-step-mother/fragment_question_tags.cache. This is because the code of a slot depends on the main action called.
Clear selective parts of the cache
The tag list of a question can change within the lifetime of the fragment. Each time a user adds or removes a tag to a question, the tag list may change. This means that the related action have to be able to clear the cache for the fragment. This is made possible by the
->remove() method of the
viewCacheManager object.
Just modify the
add and
remove actions of the
tag module by adding at the end of each one:
// clear the question tag list fragment in cache $this->getContext()->getViewCacheManager()->remove('@question?stripped_title='.$this->question->getStrippedTitle(), 'fragment_question_tags');
You can now check that the tag list fragment cache doesn't create incoherences in the pages displayed by adding to or removing a tag from a question, and seeing the list of tag properly updated accordingly.
You can also enable cache in the development environment to see which parts of a page are in cache. Change your
settings.yml configuration:
dev: .settings: cache: on
And now, you can see when a page, fragment or slot is already in cache:
or when it is a fresh copy:
See you Tomorrow
Symfony doesn't create a high overhead, and provides easy ways to accurately tune the performance of a web application. The cache system is powerful and adaptive. Once again, if some parts of this tutorial still seem somehow obscure to you, don't hesitate to refer to the cache chapter of the symfony book. It is very detailed and contains lots of new examples.
Tomorrow, we will start to think about the management of the website activity. Protection against spam or correction of erroneous entries are among the functionality required by a website as soon as it is open to semi-anonymous publication. We could either create an askeet back-office for that, or give access to a new set of options to users with a certain profile. Anyway, it will surely take less than an hour, since we will develop it with symfony.
Make sure you keep aware of the latest askeet news by visiting the forum or looking at the askeet timeline, in which you will find bug reports, version details, and wiki changes. | https://symfony.com/legacy/doc/askeet/1_0/en/19 | CC-MAIN-2020-16 | refinedweb | 2,966 | 61.97 |
public class SanderRossel : Lazy<Person>
{
public void DoWork()
{
throw new NotSupportedException();
}
}
delete this;
MT_ wrote:So, it would be great if you guys can suggest some idea of asp.net MVC project which can be useful in future rather than some tutorial project.
GuyThiebaut wrote:
Interesting factoid: Linky[^] 54 operational ATC systems written in 53 languages, many of them obsolete.
Hello,
This is just a quick email to congratulate you on winning the Land Love Magazine Book Competition!
Your prize copy of Bacon: Recipes for Curing, Smoking and Eating! has been dispatched and will be with you shortly. I do hope you enjoy the book!
Quote.
Nagy Vilmos wrote:The Apocalypse
Nelek wrote:I prefer an Armageddon
OriginalGriff wrote:And have Bruce Willis drill though your new roof
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Lounge.aspx?fid=1159&df=90&mpp=25&noise=3&prof=False&sort=Position&view=None&spc=None&select=4389951&fr=942 | CC-MAIN-2014-52 | refinedweb | 157 | 61.67 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
No Handler found error while validating the email field ?
Hi, I have a scenario where I need to validate a email address whether it is valid or not. For validating I have used the following code.
hr_view.xml:
<field name="work_email" widget="email" on_change="onchange_work_email(work_email)"/>
hr.py
def onchange_work_email(self, cr, uid, ids, work_email): if re.match("^.+\\@(\\[?)[a-zA-Z0-9\\-\\.]+\\.([a-zA-Z]{2,3}|[0-9]{1,3})(\\]?)$", work_email) != None: return True else: raise osv.except_osv(_('Invalid Email'), _('Please enter a valid email address'))
But I got No handler found error. I made changes in only above two files, Did i need to import regular expressions ? what is the error ? Any appreciate
Look in you openerp-server.log. (the error is probably described in it) | https://www.odoo.com/forum/help-1/question/no-handler-found-error-while-validating-the-email-field-44149 | CC-MAIN-2017-04 | refinedweb | 159 | 61.53 |
The Spring Framework for Java is currently in progress of adding HATEOAS, Hypermedia As The Engine Of Application State, support for hyper-text driven REST web services. Primary focus for the library is to provide an API for simplifying the creation of hypermedia links and assembling of REST resource representations when used together with Spring and especially Spring MVC.
XML and JSON
The library provides a set of types to simplify working with both XML and JSON.
Links
Several classes are provided for working with links and relations.
The Link class is used for creating and holding links, using the Atom link definition with a rel and a href attribute for describing a link. When using XML the link representation will be rendered in the Atom namespace.
A common problem when creating URI strings is the spread of duplicate string constants over the code base. The ControllerLinkBuilder class addresses this problem by extracting the base URI from the current request and then adding the root mapping from the Controller class for the corresponding resource to get a complete URI.
When resources are direct representations of model classes, classes are provided for creating links for these model types, pointing to either a collection resource or a single resource.
Finding the link corresponding to a given relation is a common task when using hypermedia. Support for this is provided through a LinkDiscoverer class.
Resources
When creating resource classes, the library contains a base class, ResourceSupport, to inherit from, e.g. for links support.
Another base class, ResourceAssemblerSupport, helps reducing the amount of code needed for mapping between entities and resources and for adding links to a resource. This assembler class is furthermore capable of creating a single resource or a collection of resources.
Geraint Jones has written an example with a simple scenario using the library.
At JAX 2013 Martin Lippert held a presentation “Modern Architectures with Spring and JavaScript” which includes the library.
The Spring HATEOAS Library is currently in Release 0.5 with work in progress for 0.6.
Community comments | https://www.infoq.com/news/2013/05/spring-hateoas-rest | CC-MAIN-2019-04 | refinedweb | 341 | 53.51 |
Hello.
At its heart, the Scene Builder is really a layout tool. One of the most difficult aspects of building apps (and something we see very frequently in the JavaFX OTN Forums) is doing layout and constructing the UI, and this is what Scene Builder needs to be really, really good at.
Our experience of the last six months has been that the scene builder in general is very stable and the team has done a lot of work on usability. We’re really hoping for feedback on the user experience, especially as you attempt to use it anger. We’ve done a lot of work trying to figure out what you might naturally want to do when building a form, but really it is your feedback and experience that will either validate that design or give us the input we need to make the tool even better.
The SceneBuilder is basically an editor for FXML files. We’ve been working with the NetBeans team as well so that when you double click an FXML file in NB it should open up the scene builder and allow you to edit it. The FXML file is essentially our documented serialization format for the UI. Your event handlers and so forth will be in the Java Controller associated with the FXML file.
From a technical perspective, SceneBuilder is built entirely in JavaFX. A new stylesheet for Scene Builder was developed that we think looks quite nice :-). It really shows what you can do with JavaFX!
You can download the Scene Builder on OTN, and read more about it on the JavaFX Docs page. To give us feedback on JavaFX Scene Builder, go ahead and leave comments here on this blog and we’ll let you know when we’ve got an official channel setup! 🙂
That’s great news.
Keep going 🙂
Hello .. I live in Brazil and I am a student of Information Systems
And I’d like to give congratulations to the team and javafx javafx scene builder
I just download and first found the layout very beautiful and soft and Nice to work very good indeed …
Congratulations to the team for me enchanted with this wonderful tool
I hope it stays that way … JavaFX every day showing their true potential
parabens
Hi
how u suppose to attach controller class to fxml in pure SceneBuilder w/o NetBeans? I open manually designed fxml layout with specified controller in, but SceneBuilder see nothing.
This issue only if Importing fxml, at Open work fine.
Looks fine. But I’m missing a image view control…?
Really interesting, I have been waiting for this! I am on Linux, though, any news on when it might be executable for our platform? I really look forward to using this tool.
Try grabbing the mac version, cd into the .app file and extract the jars and launch on Linux. My guess is it will work, but I haven’t tried it yet. I will have to get back to you on the schedule, but the GA for SceneBuilder coincides with the GA for JavaFX Linux, and I would expect support to be there at that time at the latest.
I downloaded and extracted the files and put everything in a subfolder to the place where the jar-files existed. Now, I don’t really know which file to start… I have tried “java MacSceneBuilderLauncher” and “java Main” but both alternatives return java.lang.NoClassDefFoundError
I know that everything eventually will be available for us on Linux as well, but since JavaFX 2 and now Scenebuilder really have returned the fun of computing, I would very much like to get it working.
Thanks for any help!
I spent a while on this with partial success.
I merged extracted files from dmg file and files from javafx-2.1-beta19-linux.
Prepared package you can download from this:
Script launcher is inside.
When I tested app on my Ubuntu 11.10 I observed a bug with showing menu unfortunately…
But maybe it’s my fault, by my “hard merge” or smth else…
Please share with you experiences…
Hi Rafal (and everyone else)!
It works really well! I can’t seem to find any problems with the menus, though I have not worked it very hard yet. But every thing seems to work!
Thanks a million!
//Tobias
Hi, would like to thank you for this, it is working on fedora 17.
Wow! SceneBuilder is impressive.
– SB has been given a lot of UI love. Tools and info panels are easy to read and visually appealing. The new SB skin gives SB a higher quality feel. You should consider pushing the SB refinements back into JavaFX.
– Equally impressive is the fact that SB is written in JavaFX 2. Eating your own dog food and loving it.
– The click-hand icon ,,, solid choice.
– As I fumble through using SB I continue to be impressed with the attention given to details. Things such as Support for live interaction. Clicking on a tab in the design switches to the tab. Clicking on an accordion header reveals the content (although the animation is missing).
– In the “Hierarchy” panel header there is a disclosure triangle on the right hand side. The triangle makes you think you’re working with an accordian component, but you’re not … it’s a popup menu. Perhaps a button popup would be less confusing,
I have a lot of other thoughts I’d like to share, but more importantly I want to congratulate The JavaFX team and the SceneBuilder team in particular for creating an amazing tool. Now that I know that it’s out there … I’ll be busy installing the nightlies (or perhaps weeklies) and adding my comments here.
Awesome job guys and gals!
Cool!
What’s a pity that such expressive job totally devalued by RT-18901
Sadness……
What is it about?
Sub-pixel anti-aliasing on translucent windows it seems. That is, sub-pixel rendering will not work on windows that have a shape.
In non-geek: Sometimes text will look blurry.
If you didn’t notice this when JavaFX 2 was released you will not notice it now. JavaFX 2 was released with no sub-pixel anti-aliasing at all. Now they have it for all opaque windows which is a huge step forward. 🙂
Good job! It looks nice and is nice to use.
It seems it cannot handle setting I18N keys though (have to remove the \ infront of % when opening FXML back in NetBeans).
Keep the FX love coming!
Also the FXML file icon could use some rework on Windows (it looks bad). You need to provide it with several icon sizes. 😛
Great job guys!
But what’s with the BorderPane and ImageView controls? Are they dropped, or just not included in this release?
Concerning ImageView, found out it was missing from the list while trying to set graphic (AKA icon) on a text-less Button. But if you manually edit the FXML to insert the graphic property in that Button then SceneBuilder displays the ImageView within the Button node (bottom-left pane) and allows to edit its properties (image, etc.) in the right side panes.
The best way to create an ImageView in the document, currently, is to use the File > Import… menu which will bring up a File Chooser and let you select an image file.
Very, very nice =)
I didn´t expect a public beta that early, so I was very surprised this morning.
It is great to see how you try to push JavaFX in every way and how you include the community in this process.
I have two questions:
– Will the search field on the top left be included as an individual control in a future JavaFX release? It is not that big of a problem to create one on your own by styling a TextField accordingly, but I think that a search field is a very common control in applications today, so a ‘official’ search field would be a good idea.
– How did you create the separator/title lines with the text in between in the detail panels on the right?
Are these just two styled horizontal separators with a label or text between them? If yes, how can you style a separator like this?
Can u share the source code? 😉
I’m interesting in create an app with the same style
Looks great so far! Keep up the good work.
@Tobias
I am new to Java and programming but was still able to get this beast running on openSUSE 12.1 64 bit. Here’s what I did.
1) I installed the javaFX sdk in the /usr/java/javafx-sdk2.1.0-beta/ directory.
2) I have the jdk in the /usr/java/jdk1.7.0_03_32bit/ directory
3) I installed transmac from here using wine to extract the contents of the .dmg file to /home/working/Documents/JavaFX Scene Builder 1.0.app/
4) I then initiated the following command which explicitly set the class paths and executes the main jar file.
/usr/java/jdk1.7.0_03_32bit/bin/java -Xbootclasspath/p:/usr/java/javafx-sdk2.1.0-beta/rt/lib
the nine lines above are all one line on the command prompt. If anyone knows of an easier way please let me know. Hope that helps someone.
UPDATE: If you are getting graphical errors (the screen turns black) you can try this.
/usr/java/jdk1.7.0_03_32bit/bin/java -Xbootclasspath/p:/home/working/Documents/JavaFX\ Scene\ Builder\ 1.0.app/Contents/Resources/Java
sudo cp -R /usr/java/javafx-sdk2.1.0-beta/rt/lib/i386 /home/working/Documents/JavaFX\ Scene\ Builder\ 1.0.app/Contents/Resources/Java/
Will you have a simultaneous view of window layout and FXML code? My favorite way of using XAML in Visual Studio is to edit the XAML code directly and just use the layout view as a preview, rather than dragging & clicking layout elements with the mouse. So if you don’t have that feature yet I vote that you please add it. 🙂
This would be cool. FWIW right now you can get this already with E(fx)clipse project if I am not mistaken.
Hi,
you can already achieve that by using the editor of your choice. Load the FXML document in the SceneBuilder then edit/save from the text editor. You will then see the Scene Builder automatically refreshing the content.
Just had a quick look – very, very nice! Great work!
Here’s my first suggestion for usability. It’s something Apple does with Keynote on the Mac, and that Microsoft and OpenOffice don’t do. IMO, it’s important for helping people create really nice-looking UIs, easily. It’s one of the reasons Keynote presentations tend to look more elegant than PowerPoint and OpenOffice presentations.
When using corner grab handles to re-size objects where there’s usually only a single “correct” aspect ratio – that’s things like text and image (eg photo) objects – the default behavior should be that the aspect ratio is preserved. That is, there should be no need to invoke a quasi-mode to keep the aspect ration correct eg via holding down a modifier key while dragging to resize. It might seem like a small issue, but it has a big impact on usability.
At the moment, in JavaFX SceneBuilder the default behavior when sizing text objects tends to result is disgustingly distorted text 😉 I can see this resulting in people creating some truly horrific-looking UIs!
This suggestion falls into the category of, “Making the things people want to do often, super easy.”
It’s a great news and good tools. I’d like to know more things about the next version of Scene Builder, Will it supports animation creation such as animaiton timeline just like JavaFX Authoring Tool which was demo in JavaOne 2009?
I am developping a customized component. Is it possible to use it in scene builder? Any guide for this? Thanks.
Nice! I’d love the numeric value fields to be draggable with right mouse button. Also missing BorderPane and MediaPlayer. Awesome that fx:include works correctly =)
I have successfully installed SB but couldnt run on java6.
clicked on the shortcut, nothing appeared on the screen.
I had to remove java6 and installed java7, now everything’s fine
btw, many thanks for such a great tool 🙂
It’s a great news and good tools. I’d like to know more things about the next version of Scene Builder, Will it supports animation creation such as animation timeline just like JavaFX Authoring Tool which was demo in JavaOne 2009?
Hello everybody,
The tool is very nice and very professional! Great job! It is necessary to help for adoption of javafx to build professional apps.
1) But my computer at home is a old (core 2 duo with 4gb of ram) notebook with a Nvidia GO 7300 video card which is blacklisted by javafx to run hardware acceleration. The problem is that SB is not suitable in software rendering mode : it suffers of big UI latency… it makes not possible to deploy rich javafx app on incompatible computers. Do you plan to implement more video card compatibility in javafx and reintroduce old (but widespread) video cards or do you plan to improve performances of software rendering?
2) What are the main features you plan to add in SB next versions? Here is a list of some of my ideas (not ordered by prority) :
-Custom control management and help for editing sub fxml files
– cells customization (support of custom cell factory and visual edition of cells)
– CSS style edition with visual edition
– animation support with use of transitions and triggers
– i18n support
– property binding
– support of custom controller factory
– support of classpath resources
– accessible design mode in controller to provide mock data
– support of testing framework like Jemmy fx
Thanks a lot for your work
who cares? javafx died when they dropped the linux support of the SDK
erosb, you do realise there is Linux support in the developer preview releases of JavaFX right? You can download the linux SDK here:
Soon enough this will be officially supported as the engineers finish adding all the relevant platform support. This is not a small task and we’re going as fast as we can.
I care! And no they didn’t. Please stop spreading this misinformation.
Great work. Looks and usability both good.
Re i18n – or rather, resource substitution, there appears to be an issue in that the resource bundle is set for the FXML loader in Java code (I’m new to FXML, so I may be mistaken here), rather than a processing instruction in the XML – so how would Scene Builder know where to find the resources without some sort of associated project file separate from the FXML file?
Very nice app. But the Gradients in the fill property is missing for mee.
Are you planning to integrate it?
Now just wait for Avatar project… My applications are fly using scene builder…
Very good work!
Way to go gang! SB has the potential to massively improve productivity.
Rather than use SB to design our app from the root node on down, we are using it for nodes that are added and removed from the scene graph during user workflow.
First snag: when specifying a style class for a node in SB, it doesn’t seem to recognize colours and gradients specified in the .root section of the stylesheet.
Will hit SB as hard as possible – deking over to JIRA project now.
Keep it going!
Rick
Hi Rick,
I just logged DTL-4487 to track the root style class. A root style class could be added to the parent of your content.
Merci Jean-Francois, I added the root style class to the parent AnchorPane – colours and gradients specified in the root propagate nicely. Thanks for the prompt response
You are welcome,
A new feature that will land in a future Developer preview will allow you to set stylesheet files that contain rules global to the scene (such as for root style class). These stylesheets will be not referenced from the FXML but will allow you to visualize your design as if it was located inside a Scene.
Regards.
JF Denise
I installed screen builder by following instruction, Everything is ok but when I clicked screen builder’s hand icon in my desktop, nothing happened. Why ?
How could i get a JFrame inside JFXPanel
hi all,
I started work with the java fx in netbeans on linux.. but i am not able to get the gui of scene builder.
please help me to start the scene builder on linux.
Hai saurabh,
Make sure that are you having a correct version of JAVAFX to run in linux from the fallo\wing link
hello, i m from india,
does javafx scene builder provide writing css for any control in its own……???
can i write css for any control in css analyser or any other way…???
reply plz….
Hi, I think I am not the first to talk about this, but I really think FXML should change. I have some experience with JSF and Facelets, and I am very happy with the way it uses XML format and namespaces. JavaFX should have followed this standard since is very flexible and semantic. And also, one thing I think it would be very helpful it is the possibility to edit FXML by hand in Scene Builder. | http://fxexperience.com/2012/04/announcing-javafx-scene-builder-public-beta/comment-page-1/ | CC-MAIN-2019-43 | refinedweb | 2,919 | 71.85 |
gem5
/
testing
/
jenkins-gem5-prod
/
cb7fe24ab7effe526a03a51fc34f6cce8056d04f
/
.
/
ext
/
ply
/
CHANGES
blob: 9d8b25d5a980b5342c453214b67132ef0ef0b29a [
file
] [
log
] [
blame
].
Version 1.4
------------------------------
04/23/04: beazley
Incorporated a variety of patches contributed by Eric Raymond.
These include:
0. Cleans up some comments so they don't wrap on an 80-column display.
1. Directs compiler errors to stderr where they belong.
2. Implements and documents automatic line counting when \n is ignored.
3. Changes the way progress messages are dumped when debugging is on.
The new format is both less verbose and conveys more information than
the old, including shift and reduce actions.
04/23/04: beazley
Added a Python setup.py file to simply installation. Contributed
by Adam Kerrison.
04/23/04: beazley
Added patches contributed by Adam Kerrison.
- Some output is now only shown when debugging is enabled. This
means that PLY will be completely silent when not in debugging mode.
- An optional parameter "write_tables" can be passed to yacc() to
control whether or not parsing tables are written. By default,
it is true, but it can be turned off if you don't want the yacc
table file. Note: disabling this will cause yacc() to regenerate
the parsing table each time.
04/23/04: beazley
Added patches contributed by David McNab. This patch addes two
features:
- The parser can be supplied as a class instead of a module.
For an example of this, see the example/classcalc directory.
- Debugging output can be directed to a filename of the user's
choice. Use
yacc(debugfile="somefile.out")
Version 1.3
------------------------------
12/10/02: jmdyck
Various minor adjustments to the code that Dave checked in today.
Updated test/yacc_{inf,unused}.exp to reflect today's changes.
12/10/02: beazley
Incorporated a variety of minor bug fixes to empty production
handling and infinite recursion checking. Contributed by
Michael Dyck.
12/10/02: beazley
Removed bogus recover() method call in yacc.restart()
Version 1.2
------------------------------
11/27/02: beazley
Lexer and parser objects are now available as an attribute
of tokens and slices respectively. For example:
def t_NUMBER(t):
r'\d+'
print t.lexer
def p_expr_plus(t):
'expr: expr PLUS expr'
print t.lexer
print t.parser
This can be used for state management (if needed).
10/31/02: beazley
Modified yacc.py to work with Python optimize mode. To make
this work, you need to use
yacc.yacc(optimize=1)
Furthermore, you need to first run Python in normal mode
to generate the necessary parsetab.py files. After that,
you can use python -O or python -OO.
Note: optimized mode turns off a lot of error checking.
Only use when you are sure that your grammar is working.
Make sure parsetab.py is up to date!
10/30/02: beazley
Added cloning of Lexer objects. For example:
import copy
l = lex.lex()
lc = copy.copy(l)
l.input("Some text")
lc.input("Some other text")
...
This might be useful if the same "lexer" is meant to
be used in different contexts---or if multiple lexers
are running concurrently.
10/30/02: beazley
Fixed subtle bug with first set computation and empty productions.
Patch submitted by Michael Dyck.
10/30/02: beazley
Fixed error messages to use "filename:line: message" instead
of "filename:line. message". This makes error reporting more
friendly to emacs. Patch submitted by François Pinard.
10/30/02: beazley
Improvements to parser.out file. Terminals and nonterminals
are sorted instead of being printed in random order.
Patch submitted by François Pinard.
10/30/02: beazley
Improvements to parser.out file output. Rules are now printed
in a way that's easier to understand. Contributed by Russ Cox.
10/30/02: beazley
Added 'nonassoc' associativity support. This can be used
to disable the chaining of operators like a < b < c.
To use, simply specify 'nonassoc' in the precedence table
precedence = (
('nonassoc', 'LESSTHAN', 'GREATERTHAN'), # Nonassociative operators
('left', 'PLUS', 'MINUS'),
('left', 'TIMES', 'DIVIDE'),
('right', 'UMINUS'), # Unary minus operator
)
Patch contributed by Russ Cox.
10/30/02: beazley
Modified the lexer to provide optional support for Python -O and -OO
modes. To make this work, Python *first* needs to be run in
unoptimized mode. This reads the lexing information and creates a
file "lextab.py". Then, run lex like this:
# module foo.py
...
...
lex.lex(optimize=1)
Once the lextab file has been created, subsequent calls to
lex.lex() will read data from the lextab file instead of using
introspection. In optimized mode (-O, -OO) everything should
work normally despite the loss of doc strings.
To change the name of the file 'lextab.py' use the following:
lex.lex(lextab="footab")
(this creates a file footab.py)
Version 1.1 October 25, 2001
------------------------------
10/25/01: beazley
Modified the table generator to produce much more compact data.
This should greatly reduce the size of the parsetab.py[c] file.
Caveat: the tables still need to be constructed so a little more
work is done in parsetab on import.
10/25/01: beazley
There may be a possible bug in the cycle detector that reports errors
about infinite recursion. I'm having a little trouble tracking it
down, but if you get this problem, you can disable the cycle
detector as follows:
yacc.yacc(check_recursion = 0)
10/25/01: beazley
Fixed a bug in lex.py that sometimes caused illegal characters to be
reported incorrectly. Reported by Sverre Jørgensen.
7/8/01 : beazley
Added a reference to the underlying lexer object when tokens are handled by
functions. The lexer is available as the 'lexer' attribute. This
was added to provide better lexing support for languages such as Fortran
where certain types of tokens can't be conveniently expressed as regular
expressions (and where the tokenizing function may want to perform a
little backtracking). Suggested by Pearu Peterson.
6/20/01 : beazley
Modified yacc() function so that an optional starting symbol can be specified.
For example:
yacc.yacc(start="statement")
Normally yacc always treats the first production rule as the starting symbol.
However, if you are debugging your grammar it may be useful to specify
an alternative starting symbol. Idea suggested by Rich Salz.
Version 1.0 June 18, 2001
--------------------------
Initial public offering | https://gem5.googlesource.com/testing/jenkins-gem5-prod/+/cb7fe24ab7effe526a03a51fc34f6cce8056d04f/ext/ply/CHANGES | CC-MAIN-2020-45 | refinedweb | 1,026 | 61.53 |
@raven-worx said:
Once the compilation is finished it should at least have created a library linker file (.lib for MSVC) and a dll file (if you chose the "shared" option)
ok, after command mingw32-make install i get this structure of files:
C:\taglib\bin\ libtag.dll
C:\taglib\bin\libtag_c.dll
C:\taglib\bin\taglib-config.cmd
C:\taglib\include\ taglib\ .h files
C:\taglib\lib\pkgconfig\taglib.pc
C:\taglib\lib\pkgconfig\taglib_c.pc
C:\taglib\lib\ libtag.dll.a
C:\taglib\lib\libtag_c.dll.a
copy these files (maybe also the header files from the include folder) somewhere to your projects folder and add it to your project: QtCreator / manually
so i copied libtag.dll, libtag.dll.a and taglib folder to D:\qtproject\myprojectfolder
then added
win32: LIBS += -L$$PWD/ -llibtag
INCLUDEPATH += $$PWD/
DEPENDPATH += $$PWD/
to .pro file and
#include "taglib/fileref.h"
#include "taglib/taglib.h"
#include "taglib/tag.h"
to .cpp file
and it works!! great!! thanx =)) | https://forum.qt.io/tags/tags | CC-MAIN-2019-39 | refinedweb | 164 | 52.26 |
- Write a C++ program to print Hello World on screen.
- Your First C++ program to print Hello World string.
Let's start with a simple C++ program to print "Hello World" string on screen. It become the traditional first program that many people write while learning a new programming language. This program is very useful for beginners to understanding basic syntax of C++ programming language.
C++ Program to Print Hello World
// C++ program to pring hello world string on screen #include <iostream> using namespace std; int main() { cout << "Hello World"; return 0; }Output
Hello World
In above program, we are just print "Hello World" message on screen. Although, a very simple C++ program but very useful in understanding the basic structure of a C++ program. Let us look various parts of the above program line by line:
- // C++ program to pring hello world string on screen
This line is a single-line comment of C++ language. Single-line comments begin with // till end of the line. Everything on the line after // is ignored by compiler.
- #include
#include is a preprocessor directive to includes the header file in our program. In this program we are using one such header called iostream which is required for input and output.
- using namespace std;
This line tells the compiler to use the std namespace. Namespaces are used to avoid naming conflicts and to make it easier to reference operations included in that namespace.
- int main() {
Every C++ program must have only one main() function where program execution starts. The int is what is called the return value of main function.
- cout<<"Hello World";
The cout is the standard output stream which prints the "Hello, World!" string on the monitor.
- return 0;
It is the Exit status of the program. Returning 0 means informing operating system that program successfully completed, whereas returning 1 means error while running program.
C++ Program to Print Hello World Multiple Times
In this program, we will print "Hello World" string multiple times on screen using a for loop. Don't think much about for loop now, we will discuss about it later.
// C++ program to pring hello world string 10 times #include <iostream> using namespace std; int main() { int i; for(i = 0; i < 10; i++){ cout << "Hello World" << endl; } return 0; }Output
Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World Hello World
Points to Remember
- Every C++ program must have one main() function where program execution starts.
- A single-line comments begin with // till end of the line. Comments are ignored by C++ compilers.
- Every statement of a C++ program must end with a semicolon otherwise compiler will report syntax error while compiling.
- Some header files must be included at the beginning of your C++ program.
- Main returns an exit status to Operating system informing about it's termination state. Whether program executed successfully or an error has occurred.
Recommended Posts | https://www.techcrashcourse.com/2017/01/first-cpp-hello-world-program.html | CC-MAIN-2020-16 | refinedweb | 492 | 62.98 |
In this article, I will show how to sign any data with the private key you use to sign your assemblies.
The general idea of digital signature using asymmetric cryptography is very simple:
In PKI (Public Key Infrastructure), we have a chain of certificates. There are some "root" CAs (Certificate Authorities) which certify the public keys of other parties and thus we can believe that code signed by "Microsoft" is really from Microsoft (we believe that root CAs wouldn't certify a key if they were not sure that it belongs to Microsoft).
So, when we want to verify a signature of data in the PKI world, we can just get the public key of the signer from the CA and use it in the asymmetric algorithm.
One of the applications of asymmetric cryptography is digital code signing. In the .NET world, we have all we need to sign our code: we can generate as many key pairs as we want (using sn.exe from the .NET Framework SDK), we can build our assemblies signing them and we can even secure our system to avoid running any single assembly without the signature of the party we trust.
I made a medical imaging database for dermatologists. They often decide not to connect their machines to any network (to protect the sensitive data from being stolen), so any kind of on-line activation was not an option. My idea is to provide an individual personalized license for every client and make tampering this license not easy.
I like the idea described in the article "Piracy and Unconventional Wisdom" by Chad Z. Hower aka Kudzu, so I wanted to make my solution as easy as possible for the client. The license file is just a separate plain text file which is easy to transport, so for example, upgrading the program to the full product requires just copying one file to the program folder. The license file is plain text, but (using Base64 encoding of any binary data) it can contain any complex license data (e.g. serialized graph of objects).
Some day I asked myself: can I use the same key for signing my assemblies and for signing any other data? After some research, I found a way to import keys from a *.snk file (generated by sn.exe tool) to an RSA cryptographic object from System.Security namespace, but it required some byte array operations (it was in the .NET 1.1 "era"). In .NET 2.0, the RSACryptoServiceProvider class fortunately "understands" *.snk files "off the box".
System.Security
RSACryptoServiceProvider
So now we can use our code-signing key to sign any data and we can use the public key embedded in our application assembly for verification. Simple, isn't it?
If you are interested in key BLOBs, you can read the MSDN article "Private Key BLOBs".
In the article code, you can find two executables:
I wanted to keep the code simple, so please note that I omitted some try-catch (e.g. I don't catch base64 decoding exceptions). The code for reading the key pair from disk is a one-liner:
try
catch
rsa.ImportCspBlob(System.IO.File.ReadAllBytes("key.snk"));
Then we can sign any binary data:
byte[] licenseData = Encoding.UTF8.GetBytes(licenseText);
byte[] signature = rsa.SignData(licenseData,
new System.Security.Cryptography.SHA1CryptoServiceProvider());
And, finally, we write the data and the signature into the license file (as text):
System.IO.File.WriteAllText("license.txt",
Convert.ToBase64String(licenseData)+Environment.NewLine+
Convert.ToBase64String(signature));
When we start our application, we need to get the public key from the assembly. We need to strip some header from it before we pass the key to RSACryptoServiceProvider:
// Here is a trick: the public key in assembly file has a 12-byte header
// at the beginning.
// We strip it - and the remaining bytes can be now imported by
// RSACryptoServiceProvider class
byte[] tmpKey = new byte[pubKey.Length - 12];
Array.Copy(pubKey, 12, tmpKey, 0, tmpKey.Length);
rsa.ImportCspBlob(tmpKey);
Then we can get data (and a signature) from the license file and verify the signature:
string[] licenseLines = System.IO.File.ReadAllLines("license.txt");
byte[] licenseData = Convert.FromBase64String(licenseLines[0]);
if (rsa.VerifyData(licenseData,
new System.Security.Cryptography.SHA1CryptoServiceProvider(),
Convert.FromBase64String(licenseLines[1]))) {
Console.WriteLine("License is:\n" + Encoding.UTF8.GetString(licenseData));
}
If you want to quickly play with the demo, just unpack NeatLicenseDemo.zip and run rundemo.bat. You will be asked to type some text and then you'll see that verification passed.
If you want to play with the code, unpack NeatLicense.zip and run:
MSBuild
One can ask: What if the hacker will generate a new key pair, re-sign our assembly and prepare a new license file and then distribute the program? Hmm, that's right, we can't avoid this - exactly as we can't prevent a hacker from changing our code to bypass any other software protection (on any platform, not only .NET). Can we try? I think we should focus on client needs, not on fighting with the cruel world.
But the good thing: A pirate can't hide his activity. It is impossible (to be precise: probability is extremely low) to generate a new key pair with the same public key, so we can always detect that the assembly was tampered - it's public key (and public key token) will change. To make the hacker's life harder, we can check random assembly key bytes in several places in our program. As an example, you can see the CheckPublicKey method, but in the real code I suggest using inline checks to make tracking harder. If we use only one function, the protection could be easily deactivated by the hacker. I commented-out a call to this function, because it will fail when you generate your keys.
CheckPublic. | http://www.codeproject.com/Articles/22315/Neat-License | CC-MAIN-2016-36 | refinedweb | 970 | 55.64 |
# coding: utf-8
# # Table of contents
# 1. [What is an Algorithm?](#algorithm)
# 2. [Algorithms and Programming](#programming)
# 3. [Performance Measurement of Sorting Algorithms](#performance)
# 5. [JIT Optimization](#jit)
# 6. [Stack Overflow](#overflow)
# # What is an Algorithm?
#
# At its most basic level, an algorithm is a step-by-step procedure for solving a problem or specifying tasks to be done. Although you can theoretically use an algorithm for almost any application, it is commonly used in computer science and mathematics.
#
# The following picture is a basic example of an algorithm:
#
#
#
# ## Practical example
# Let us introduce you to Bob. Bob has a walnut tree in his garden and he loves walnuts. He knows that there are walnuts from his tree in his garden. While some of them are already rotten, the others are ready for picking and consuming. Bob knows, that over the next week, more and more walnuts will fall from the tree onto the ground. Bob now wants to solve the problem of collecting the walnuts. Since he's a computer scientist, he tackels the problem as a proper computer scientist would:
#
# 1. Bob tries to understand the problem:
# * Bob wants to collect the walnuts. Some are still in the tree getting ripe, others are lying on the ground in the moisty grass, fresh and rotten ones. In addition, the nuts are hard to spot in the grass and to distinguish between fresh and rotten, Bob has to pick them up to get a closer look. Bob also has a bucket for collecting where he will put the fresh nuts. A way of structuring the problem is to use the "divide and conquer" method."Collecting walnuts" divides to "find the nuts" and "pick them up". "Find the nuts" divides to "search for the nuts", that can be solved by searching for them with your feet in the grass, using your hands or by looking closely. Also, Bob has to be clear about his goal. Does he want as many non-rotten nuts as possible or will he be statisfied with a large quantity?
#
# 2. Bob thinks about different ways to solve the problem of collecting nuts:
# * A: Bob could go to the tree and stay there with his bucket, observing the nuts falling down and collecting them one by one.
# * B: Bob goes to the tree each day. Crouch-walks under the tree through the grass. When he finds a nut, he decides whether it is rotten or not and puts the non-rotten nuts in his bucket.
# * C: Bob walks through the grass with his bucket, trying to feel the nuts with his feet, then bends over and collects them each time he finds one. When the bucket is full, he sorts through them and throws the rotten ones away.
#
# 3. Bob maps it out, using psuedo-code, a flowchart, or something similar:
# * This helps Bob to visualize and evaluate his solutions. While **A** would garantee Bob getting close to all the nuts, it would take very long. **B** would be less time consuming but more exhausting. If Bob does it this way, the workload will grow from day to day, since he has to reevaluate more and more rotten ones, since he doesn't pick them up, the number of evaluations each day will increase. So Bob goes for **C**: a quick way, no bending over and a more or less steady number of evaluations. Surely he will not get the highest amount of fresh nuts, but still enough. Work smart, not hard. All the possible solutions have their pros and cons.
# # Algorithms and programming
#
# Algorithms are the cornerstone of the programming world, as every program revolves around the use of algorithms. Furthermore, algorithms are created independently from underlying languages and can therefore be implemented in every programming language. The following is a basic outline of how to create an algorithm for computer science:
# 1. Understand the problem
# 2. Think of different ways to solve it, and pick what you believe to be the most efficient way
# 3. Map it out, using psuedo-code, a flowchart, or something similar
# 4. Implement the solution, i.e. translate your method into the actual coding language
# 5. Test and debug your code
#
# ## Algorithm Implementation
# Today, we want to focus on the fourth step of algorithm creation: implementation. In this step, we want to translate our method into our coding method of choice, and ensure that it is written in an elegant and an optimal way. In other words, we want to make sure that the program is compact and fast. Oftentimes, these goals are aligned, but sometimes they are not. These ideas are talked about in terms of time complexity and space complexity. These are defined as follows:
# - **Time complexity**: the amount of time it takes to run an algorithm as a function of the amount of input
# - **Space complexity**: the amount of space/memory taken by an algorithm as a function of the amount of input
#
# In this tutorial, we will primarily discuss time. Although time complexity is extremely difficult to measure, in Python we can get a rough idea of the time demands of an algorithm by using the time library. The time library can measure many different types of time, but we need to focus on wall-clock time and CPU time, i.e. natural and processing time. Wall-clock time is the time we, as humans, are familiar with. In other words, this is the time a stopwatch would read if you started it at the beginning of the process and ended it exactly as it finished. CPU time is the time the computer dedicates to the process. As computers are running multiple processes at a time, not 100% of the wall-clock time will be dedicated to the specific process you are measuring. CPU time avoids this issue, and allows a more comparable measure.
#
# ## Introduction to the problem
# In the endeavor of visualizing the implementation of algorithms, we will define a problem and then try to solve it with three different algorithm implementations. The code you see below is creating a set amount of different integers and inserting them into a list. The problem at hand now will be to sort the list so that our result will be a list ordered from the least integer to the greatest. There is actually an already built-in function in Python to `sort()` a list but in order to visualize the implementation of algorithms, we will not use `sort()`.
# In[1]:
#Here we create an list that will then be sorted by our algorithms
import random as rdm #Here we import a library which has a function that will be used when generating the list
N = 10000 #How many numbers that should be sorted
lowerBound = 1 #Lower bound for the generated numbers
upperBound = N*5 #Upper bound for the genereated numbers
tutorial_list = []
tutorial_list.extend(rdm.sample(range(int(lowerBound), int(upperBound)), int(N)))
jit_list = tutorial_list #We will come back to this list later...
# The code above generates the list which we will be used when executing our sorting algorithms. The list consists of N randomly selected unique integers from the range 1 to Nx5. The reason why the upperbound is Nx5 is because of the function `rdm.sample()`, which will draw an integer from the range withut reinserting it. So, to not run out of numbers to pick, the range has to be grater than N.
#
# When the random selection has been made the list will be saved to tutorial_list which we will use in our implementation of algorithms.
# ### Insertion Sort
# In[2]:
insertion_list = tutorial_list
def isort(insertion_list):
for i in range(1, len(insertion_list)):
value = insertion_list[i]
spot = i
while spot > 0 and insertion_list[spot-1] > value:
insertion_list[spot] = insertion_list[spot-1]
spot = spot-1
insertion_list[spot] = value
# Code inspired by this [source]( "interactivepython.org")
#
# The above function is an implementation of a so-called insertion sort algorithm. What the basically does is dividing the list into two sublists. One that is sorted and one that is not. First, it assumes that the first element in the list is sorted, then it will compare the first position to the second position. Given that the integer in the second position is lesser the algorithm will switch position of the two integer so that the lesser integer ends up in the first spot. The “sorted list” now consists of two integers. Because of the while loop, the procedure will be repeated for all the numbers until every number is a part of the sorted list.
# ### Shell Sort
# In[3]:
shell_list = tutorial_list
def shell_sort(shell_list):
gap = len(shell_list) // 2
while gap > 0:
for start_position in range(gap):
gap_insertion_sort(shell_list, start_position, gap)
gap = gap // 2
def gap_insertion_sort(shell_list, start_position, gap):
for i in range(start_position+gap, len(shell_list), gap):
current_value = shell_list[i]
position = i
while position >= gap and shell_list[position-gap] > current_value:
shell_list[position] = shell_list[position - gap]
position = position-gap
shell_list[position] = current_value
# Code inspired by this [source]( "zaxrosenberg.com")
#
# A shell sort improves on the insertion sort by dividing the original list into smaller sublists which are then sorted by insertion sort. Therefore, this sorting method uses the "divide and conquer" strategy. It starts by comparing pairs of elements far apart from each other and sorting them while reducing the comparison gap between them. The shellsort is heavily dependent on what type of gap sequence it uses. Our gap sequence is the original from 1959. The comparison continues until the gap is filled - ending the sort.
# ### Bubble Sort
# In[4]:
bubble_list = tutorial_list
def bubblesort(bubble_list):
# Swap the elements to arrange in order
for i in range(len(bubble_list)-1,0,-1):
for idx in range(i):
if bubble_list[idx] > bubble_list[idx+1]:
temp = bubble_list[idx]
bubble_list[idx] = bubble_list[idx+1]
bubble_list[idx+1] = temp
# Code inspired by this [source]( "interactivepython.org")
#
# The function above is an implementation of a style of sorting algorithm called a bubble algorithm. In essence, this is the simplest sorting algorithm. The way that it works is by repeatedly running through a list or array and swapping the elements that are next to each other if they are in the incorrect order. Take, for example, this list: [1, 3, 2, 9, 6]. First, the function will compare the first two elements: 1 and 3. It sees that they are in order, and thus will not do anything. Next, it will compare 3 and 2, and seeing that they are not in order, will swap their positions, changing the list to [1, 2, 3, 9, 6]. The algorithm will continue this pattern for the rest of the list and then restart from the beginning to ensure that everything is in order, ending with the sorted list of [1, 2, 3, 6, 9]. The bubble sort is a popular implementation because of its simplicity, but it can be quite time intensive with longer lists or arrays.
# # Performance measurement of sorting algorithms
#
# Now when we have defined three algorithms for our problem it is time to consider the performance of our code. To measure performance we can use the so-called "time" package. To measure performance, we will see how much time that passes between the start and finish of the algorithm. To do this there are two useful functions in the package: `time.time()` and `time.clock()`.
#
# What `time.time()` does is measuring wallclock time between two points in the code, e.g. how many actual seconds that passes from start to finish. This method though is not entirely optimal. The reason for this is that the computer might be working on other tasks during the same time one is trying to measure the performance of the code. This means that memory in the CPU might be occupied and therefore slowing down the process. To solve this problem one can use `time.clock()` instead.
#
# `time.clock()` has not been updated for a while though and is not the best way of determining dedicated CPU time. The new, updated versions of `time.clock()` are `time.perf_counter()` and `time.process_time()`.
#
# In our examples below, we will use the updated CPU timer with fractional seconds called `time.process_time()`.
# In[5]:
import time
#measuring isort performance
start_isort = time.process_time()
isort(tutorial_list)
end_isort = time.process_time()
#measuring shellsort performance
start_shellsort = time.process_time()
shell_sort(tutorial_list)
end_shellsort = time.process_time()
#measuring bubblesort performance
start_bubblesort = time.process_time()
bubblesort(tutorial_list)
end_bubblesort = time.process_time()
print('isort sorted the tutorial list in {} seconds'.format(end_isort-start_isort))
print('shellsort sorted the tutorial list in {} seconds'.format(end_shellsort-start_shellsort))
print('bubblesort sorted the tutorial list in {} seconds'.format(end_bubblesort-start_bubblesort))
# ## Results
# As we can see from the results above the shell short is more efficent than the other two less complex sorting algorithms. The question is though, will the shell short algorithm always be the better choice when one needs to sort a list? Well that depends...
#
# When considering a programming solution for a problem one thing to keep in mind is the pay-off between the time invested in optimizing the code and the actual time saved when one is later using the code. For example, let us assume that we have the knowledge to write the simpler bubble-sort algorithm, but lack the knowledge to write something more efficient. If there is a problem that consists of a shorter list that only needs to be sorted once, it might be wiser to just use the simpler algorithm, even though it is less efficient. Because the payoff for investing time in optimizing the code and learning more efficient ways to do a sorting algorithm might be very low. On the other hand, if we are facing a more complex problem, where our sorting algorithm will be run a considerable amount of times, it might be worth investing time in optimizing the code to save a lot of time later when the program is running.
#
# So, when deciding how to implement an algorithm solution to a problem one should always keep in mind the purpose of the implementation and then make a decision about the complexity of the code.
# # Optimizing algorithms further with `@jit`
#
# `numba` is a *Just-In-Time* compiler for Python which means that whenever you call a function in Python, all or part of your code will convert to machine code "just-in-time" for execution and run on your local machine code speed.
#
# With the help of Numba, you can speed up your calculations and algorithms. There are other compilers for Python such as `pypy` and `cython`, so why use Numba? The easiest answer is that with Numba, you don't have to change your code at all for basic speed-up. The only thing you have to do is add a Python functionality, a *decorator* (wrapper), around your functions. We have chosen to partly compile our sorting algorithms using the `@jit` decorator.
#
# Pyhton is a "high-level" language that is easy for humans to work with and understand but is far away from the language that computers use to understand. The most basic explanation of how it works is that your Python function is taken, optimized and converted into a language that is easier for the computer to read. If you want to read more about this, please visit this [website]( "towardsdatascience.com") or google JIT.
# In[6]:
#When measuring performance of JIT, we will use the copied list called jit_list and JIT every algorithm
import time
from numba import jit
#Algorithm 1, the insertion sort, now with JIT decorator
insertion_list2 = jit_list
@jit
def isort(insertion_list2):
for i in range(1, len(insertion_list2)):
value = insertion_list2[i]
spot = i
while spot > 0 and insertion_list2[spot-1] > value:
insertion_list2[spot] = insertion_list2[spot-1]
spot = spot-1
insertion_list2[spot] = value
#Algorithm 2, the shell sort, now with JIT decorator
shell_list2 = jit_list
@jit
def shell_sort(shell_list2):
gap = len(shell_list2) // 2
while gap > 0:
for start_position in range(gap):
gap_insertion_sort(shell_list2, start_position, gap)
gap = gap // 2
@jit
def gap_insertion_sort(shell_list2, start_position, gap):
for i in range(start_position+gap, len(shell_list2), gap):
current_value = shell_list2[i]
position = i
while position >= gap and shell_list2[position-gap] > current_value:
shell_list2[position] = shell_list2[position - gap]
position = position-gap
shell_list2[position] = current_value
#Algorithm 3, the buuble sort, now with JIT decorator
bubble_list2 = jit_list
@jit
def bubblesort(bubble_list2):
# Swap the elements to arrange in order
for i in range(len(bubble_list2)-1,0,-1):
for idx in range(i):
if bubble_list2[idx] > bubble_list2[idx+1]:
temp = bubble_list2[idx]
bubble_list2[idx] = bubble_list2[idx+1]
bubble_list2[idx+1] = temp
#measuring isort performance with JIT
start_isort2 = time.process_time()
isort(jit_list)
end_isort2 = time.process_time()
#measuring shellsort performance with JIT
start_shellsort2 = time.process_time()
shell_sort(jit_list)
end_shellsort2 = time.process_time()
#measuring bubblesort performance with JIT
start_bubblesort2 = time.process_time()
bubblesort(jit_list)
end_bubblesort2 = time.process_time()
#remember that we made a copy of the tutorial list and called it "jit list" for this exercise
print('isort sorted the JIT list in {} seconds'.format(end_isort2-start_isort2))
print('shellsort sorted the JIT list in {} seconds'.format(end_shellsort2-start_shellsort2))
print('bubblesort sorted the JIT list in {} seconds'.format(end_bubblesort2-start_bubblesort2))
# ## Results from using `@jit`
#
# As we can see, two of the three algorithms have tremendous performance improvement. By using the `@jit` wrapper, the algortihms could be processed much quicker than before. An interesting observation is that our `shell_sort()` algorithm is slower than before. This may be because of our original "gap sequence" being slower with the `numba` package or that the shell sort as a function works slower with JIT.
#
# When using already defined functions, which is made by other users, there is a problem of not knowing excactly what the function does. As illustrated above with the use if JIT, the results from the shell short was not excactly what we were excpecting. There can be massive performance improvements but at the same time a problem can occur we did not excpect. So, when choosing wether to use `numba` compiler *JIT*, or maybe `sort()`, when solving a problem one should consider what the impact would be if the function does something else than expected. If the impact would be low and easily fixable, on can use the already defined functions. If the impact would be huge and very costly to fix, one should consider writing the code oneself.
# # Stack overflow and how to avoid it
#
# The last thing you should be aware of when writing algorithms is stack overflow. We will briefly discuss this problem and tell you how to avoid it.
#
# Local variables and parameters live on a thing called **stack**. The stack lives on the *top* of your adress space (memory cell for instance) and as it is being used, it is heading towards the *bottom* of your adress space (towards zero). The most common cause of a stack overflow is a bad recursive call. If your function does not contain the proper terminating condition (making your function to stop), the function will call itself forever. A stack overflow means that your function is demanding more space than it has dedicated. When this happens, you usually get an error message saying that the "maximum recursion is met", meaning that the process cannot access more space to find a solution. To solve this problem, go over your code and check whether you are using a proper terminating condition.
# In[7]:
#Stack overflow example
#define a function that recurs upon itself
def so():
return so()
#call the function
so()
#you should get an error of a maximum recursion | https://nbviewer.jupyter.org/format/script/github/drarnau/Programming-for-Quantitative-Analysis/blob/master/05_Basics_Of_Algorithm_Implementation.ipynb | CC-MAIN-2019-13 | refinedweb | 3,235 | 61.77 |
TL explanation. Some knowledge of Wikisource is needed too :-) As an employee of MediaLibrary Online, I worked in these months to get all the metadata and index the texts from Italian Wikisource. A bit of perspective * All the texts have been either proofread or validated. * They are not all whole books: we indexed the texts directly in namespace 0 (and not in the Index: namespace), in order to provide the user a better search experience and findability. This, in our opinion, is very important: we used the MediaWiki API to retrieve all the data, and some HTML scraping for the rest :-) * We link directly the EPUB generated by the awesome tool from Tpt, but also to the page in Wikisource. * We automatically generated the EPUB covers for every text which didn't have one. Ex: MediaLibraryOnline is a digital platform that provide Italian libraries with the possibility of lend digital resource, as ebook or audiobooks. It's not just "a portal on the Internet", but a service used and managed by single libraries for their uses. It also has an "Open" collection, freely accessible and downloadable for everyone (and not only the patrons of the libraries which have access to MediaLibrary). I've been hired few months ago to develop such collection, and this is a major milestone for us (and for me :-). I'm expecially excited by the fact that now Wikisource ebooks "enter" in the collection of libraries, and often in their very catalog. I think it is a very good step forward for our project, and I'm eager to replicate this project with other Wikisources as well :-) If needed, I can explain some details. [1] wikisource&x=0&y=0&portalId=1 On Thu, Jun 11, 2015 at 9:52 PM, Keilana <keilanaw...@gmail.com> wrote: > That > > > > > _______________________________________________ > >> | https://www.mail-archive.com/wikimedia-l@lists.wikimedia.org/msg18170.html | CC-MAIN-2018-39 | refinedweb | 302 | 57.4 |
Creating an application of Windows Forms Application type in MS Visual Studio – C++. Review of main files of project
In the given topic is considered the features of creating the application of “Windows Forms Application” type that supports C++ language. This type of application supports all benefits the .NET Framework technology.
Progress
- Run Microsoft Visual Studio.
As a result, the window with active tab “Start Page” will be opened (Figure 1).
Figure 1. The window “Start Page“
- Creating the application of Windows Forms Application type.
To create a new project (solution) in C++ language, you need to select the consequently of commands (Figure 2):
File -> New Project...
Microsoft Visual Studio proposes different types of application templates for programming in C++ (Figure 2).
Figure 2. Calling the command of creating a new project
As a result, the window “New Project” will be opened (Figure 3). In this window you need to select the template “Visual C++” and application type “Windows Forms Application“.
In the “Location:” field you need to set the path to the folder, in which project will be saved. In our case you need to set the following path
C:\Programs\CPP
In the field “Name” is set the name of application. In our case this is “MyApp01“.
If the option “Create directory for solution” is enabled, then project will be saved in the folder
C:\Programs\CPP\MyApp01
In the field “Solution name:” is set the name of solution. The solution can combine several projects. In our case the name of solution is the same as name of project.
Figure 3. The window “New Project” of creating a new project
- Main components of windows interface for working with program.
After selecting “OK” in the previous window “New Project“, MS Visual Studio will create the all needed code for working the application of “Windows Forms Applicaiton” type.
As a result, the window will look like as shown in Figure 4.
In the center of window the main form of application is displayed. You need to place on the form different components. The components are placed on the panel “Toolbox” (the left side of the screen).
The form or component properties are displayed in the utility Solution Explorer (the right side of screen). By changing these properties, you can affect the view of form, behavior of form, realize the event handlers of form and so on.
Figure 4. The main elements of application window
- Calling the mode of entering code.
At the moment the design mode is active. To go to the mode of typing the program text , you need to call the command Code from menu View (Figure 5).
View -> Code
Figure 5. The command to go to the mode of typing the program text
Another way to call the command to go to the mode of typing the program text, is to click on the corresponding button in Solution Explorer (Figure 6).
Figure 6. The button to go to the mode of typing the program text
As a result, the program text will be shown.
Figure 7. Mode of viewing the program text
- Text of file “Form1.h“.
When you creates a project, Microsoft Visual Studio generates a program code, which is saved in different files.
The main file is named “Form1.h“. In this file programmer develops own program code.
This file corresponds to the main form of application. On the main form different components are located. By using these components you can realize the solution of specific task. When project is created – the empty form is created too (Figure 6). Besides the main form, you can create other forms and add them to the project.
Listing of file “Form1.h” is shown below.
#pragma once namespace MyApp01 {: /// <summary> /// Required designer variable. /// </summary> System::ComponentModel::Container ^components; #pragma region Windows Form Designer generated code /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. /// </summary> void InitializeComponent(void) { this->components = gcnew System::ComponentModel::Container(); this->Size = System::Drawing::Size(300,300); this->Text = L"Form1"; this->Padding = System::Windows::Forms::Padding(0); this->AutoScaleMode = System::Windows::Forms::AutoScaleMode::Font; } #pragma endregion }; }
Let’s explain some code snippets in the program.
In the listing above, the namespace MyApp01 is created by using operator
namespace MyApp01 { ... }
In this namespace, other namespaces are included from the .NET Framework library:
System System::ComponentModel System::Collections System::Windows::Forms System::Data System::Drawing
The class named “Form1” is created in the namespace MyApp01. This class corresponds to the main form Form1 of application.
Class contains a constructor that calls the method
InitializeComponent();
In the InitializeComponent() method the component-form is created (variable “components”), which is a container. It means that different components can be placed on the form (buttons, labels, text boxes and others). Also, the form parameters are set in method InitializeComponent(): form title, form size (300*300 pixels), default font.
Destructor of class ~Form1() destroys the form (variable “components“) by using “delete” operator.
- Files, which are created in project.
After creating a project of “Windows Forms Application” type, Microsoft Visual Studio creates several files.
Figure 8. C++ files, which are created in project
As mentioned earlier, the main file is “Form1.h” (see paragraph 5). Also, according to the rules of the C++ language, an implementation file “App01.cpp” is created. Main function “main()” is realized in this file. This file contains a code to display the main form.
Listing of “MyApp01.cpp” file is following:
// MyApp01.cpp : main project file. #include "stdafx.h" #include "Form1.h" using namespace MyApp01; [STAThreadAttribute] int main(array<System::String ^> ^args) { // Enabling Windows XP visual effects before any controls are created Application::EnableVisualStyles(); Application::SetCompatibleTextRenderingDefault(false); // Create the main window and run it Application::Run(gcnew Form1()); return 0; }
File “MyApp01.vcxproj“. This is the main project file for VC++ projects generated using an Application Wizard. It contains information about the version of Visual C++ that generated the file, and information about the platforms, configurations, and project features selected with the Application Wizard.
File “MyApp01).
File “AssemblyInfo.cpp“. Contains custom attributes for modifying assembly metadata.
Files “StdAfx.h” и “StdAfx.cpp“. These files are used to build a precompiled header (PCH) file named “MyApp01.pch” and a precompiled types file named “StdAfx.obj”.
- Run the project.
To run project, the command “Start Debuggin” from menu “Debug” is used (F5 key). | http://www.bestprog.net/en/2016/07/28/003-creating-an-application-of-windows-forms-application-type-in-ms-visual-studio-c-review-of-main-files-of-project/ | CC-MAIN-2017-39 | refinedweb | 1,064 | 57.67 |
Dear learners, how is everything going? Hope that you’re learning well. In our previous tutorial, we learned about Python break and continue statements to control Python loops. In this tutorial, we are going to learn about Python pass statement.
Table of Contents
What is the Python pass statement?
You can consider the pass statement as a “no operation” statement. To understand the pass statement in better detail, let’s look at the sample syntax below.
List <- a list of number for each number in the list: if the number is even, then, do nothing else print odd number
Now if we convert the above things to python,
#Generate a list of number numbers = [ 1, 2, 4, 3, 6, 5, 7, 10, 9 ] #Check for each number that belongs to the list for number in numbers: #check if the number is even if number % 2 == 0: #if even, then pass ( No operation ) pass else: #print the odd numbers print (number),
The output will be
>>> ================== RESTART: /home/imtiaz/Desktop/pass1.py ================== 1 3 5 7 9 >>>
Where do we use the pass statement?
Before you begin programming, you generally start out with a structure of functions. These functions tell you what elements your code will have and let you keep track of the tasks you are yet to complete.
Considering the same example, if you are planning to create a program with three functions as shown below. You give the names to the functions and then begin working on one of the functions to start off with.
The other functions are blank and have a simple comment stating that its a TODO for you.
def func1(): # TODO: implement func1 later def func2(): # TODO: implement func2 later def func3(a): print (a) func3("Hello")
If you do the above, you’ll get an error as below:
So how do you tackle this situation? We use the pass statement here.
def func1(): pass # TODO: implement func1 later def func2(): pass # TODO: implement func2 later def func3(a): print (a) func3("Hello")
For the above code, you will get output like this:
================== RESTART: /home/imtiaz/Desktop/pass3.py ================== Hello >>>
When you work with a huge python project, at one time, you may need something like the pass statement. That’s why the pass statement is introduced in Python.
Conclusion
That’s all for today! Hope that you learned well about the Python pass statement. Stay tuned for our next tutorial and for any confusion, feel free to use the comment box.
Reference: Official Documentation
Ya ……I understood but little bit confused about its real implementation | https://www.journaldev.com/14240/python-pass-statement | CC-MAIN-2021-04 | refinedweb | 431 | 70.13 |
5.2. Python names and namespaces¶
We have already been introduced to the Python concept of a name. An explicit though somewhat technical discussion of the subject of names can found in the Python docs. The following discussion is simplified, and focuses on the basic idea of namespaces.
When a Python program is loaded and run, your computer’s memory contains a set of things we will simply call objects. For example, when you load a module that contains a function definition, that places a function object into memory. The function object is Python’s representation of a program. When a program is run it often creates further objects. For example a program might compute some numbers and return them as a list. That list of numbers is an object taking up space in memory:
>>> ScoreList = get_test_scores(python__for_ss_midterm)
Objects in memory may or may not have names. Generally objects get named at creation time and generally they get named because they might come in handy at some later time. So the Python command above makes the result of executing the get_test_scores function retrievable by storing it in the name ScoreList.
A very common way of assigning a name to an object is a variable assignment command like the one above. But it is not the only way. The following Python code block constructs a certain function object and assigns it to the name main:
def main(): if len(sys.argv) >= 2: name = sys.argv[1] else: name = 'World' print 'Hello', name
So when you load a Python module with an import statement, it executes all the function definitions in the file, placing a set of functions objects in memory and associating each of them with a name. If the file contains variable assignment commands, that introduces more names and possibly more objects. Other constructions, such as class definitions (to be covered later), use up more names. The moral, then, is that importing modules uses up names.
Python is very consistent about the syntax of names. The way to find out what a name denotes is to type the name to the Python prompt and hit “Return”. Beginners sometimes get confused by this property. After typing in the above function definition, typing the name just returns the function reference:
>>> main <function main at 0x100493cf8>
If you want to execute the function, you need to add the parentheses and any arguments. In the case of a function with no arguments like main, you still need the parentheses:
>>> main() Hello World
Python is untyped, meaning any name can be used for any object. If it has been used for one type of object, any naming construction, such as a variable assignment, can overrule the old name, and assign it to a competely different kind of object. So the unwary beginner might well decide to reuse the name main as a variable and type:
>>> main = 1
Once this is done the name main denotes an integer and the function definition is lost. When the unwary beginner then tries to call main as a function, he sees:
>>> main() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'int' object is not callable
That is, this is exactly the same error as
>>> 1() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'int' object is not callable
The integer “1” is not a function and can’t be called. Thus, keeping track of what uses names have been put to is an important part of programming. One way to avoid problems like the one above is to use different kinds of names (for example, different case-conventions) for variable names and functions. But this is really the tip of an iceberg, and good programming practice will not melt the entire iceberg.
The example above suggests the larger problem. The name main is a name any programmer might use for the main function in a module. Indeed, some programming languages encourage this practice. But then what happens when two modules with functions named main are loaded? Should the one loaded second just overwrite the one loaded first? Probably not.
One way to solve this problem is to get more creative about naming functions. For example, if the above definition comes from a module named hello.py, we might just call the function hello_world unstead of main. But this solution leaves much to the discretion of individual programmers, and has significant limitations. As your knowledge of Python grows, you will find yourself loading more and more Python modules, many of them containing code (and therefore names) you have never looked at. This raises the likelihood of a nameclash, and therefore a mysterious and hard to detect bug, considerably. Since naming conventions and the ability to use them consistently vary considerably, Python has implemented a more general solution: When a module hello is loaded with the command,:
>>> import hello
and the module defines a function main, the name main is not associated with the function definition. Instead the name hello.main is. The way to call “main” is to type:
>>> hello.main() Hello World
If the programmer then types,:
>>> main = 1
the name main is set to denote the integer and the name hello.main in unaffected, so the command main.hello() will work as intended.
The technical name for the Pythonic solution is namespaces. That is, by default, each module has a namespace of its own, the names beginning with the module name followed by ”.”. Thus the name space of the “hello” module is all names beginning with “hello.”, the name space of a module named “goodbye” is “goodbye.”, and if both define functions named “main”, one will be accessible under the name “hello.main” and the other under the name “goodbye.main”.
Namespaces are an important Pythonic idea, as you will see if you type:
>>> import this
The main place they will affect you as a Python beginner is in how you you use the names of imported modules. The other place where namespaces play a key role is in class definitions, which we will talk about in Section Classes and class attributes.
The names available in a Python namespace can be viewed by accessing its directory. This is done with the dir command.
We illustrate
dir by import the math module and listing
its names:
>>>']
5.2.1. The _builtin_ namespace¶
Every name available in Python belongs to some namespace. The namespace containing all the names available when you start up Python is called __builtins__. Every name in __builtins__ belongs to the toplevel namespace and therefore does not have to be prefixed with a module name. The dir command will list those names:
>>> dir(__builtins__) ['ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException', 'BufferError', 'By tes']
Discussing the purpose of all these names goes well beyond the bounds of this section. Suffice it to say for now that four main kinds of Python objects are listed:
- Exceptions
- Functions
- Types
- None | https://gawron.sdsu.edu/python_for_ss/course_core/book_draft/anatomy/name_space.html | CC-MAIN-2018-09 | refinedweb | 1,156 | 70.13 |
Data-Driven Decisions for Where to Park in SF
August 16, 2019
Have you ever felt uncertain parking in a shady area? In particular, have you ever parked in San Francisco and wondered, if I measured the average inverse square distance to every vehicle incident recorded by the SFPD in the last year, at what percentile would my current location fall?
If so, we built an app for that. In this post we’ll explain our methodology and its implementation.
Parking in San Francisco
Vehicle-related break-ins and thefts are notoriously common in San Francisco. Just last week, items worth half a million dollars were stolen in a high-profile car burglary. There’s even a Twitter account tracking incidents.
The San Francisco Police Department maintains an ongoing dataset of all incidents since January 1, 2018 (there is another one for 2003-2018). The San Francisco Chronicle has created a great map visualization from this to track break-ins. We wanted to make this data even more actionable, to help asses the security of parking in a particular location in real-time.
Hence, the motivating question: if I am looking to park in SF, how can I get a sense of how safe my current spot is?
Defining a Risk Score
Of course, the risk of a parking spot can be measured in many different qualitative and quantitative ways. We chose a quantitative measure, admittedly quite arbitrary, as the average inverse square of the distance between the parking location and every break-in location in the past year.
This just gives a numerical score. We then evaluate this score across a representative sample of parking spots across SF, and place the current parking spot at a percentile within that sample. The higher the score, the closer the spot is to historical incidents (inverse of distance), the higher the risk.
We decided to build a mobile app for showing how secure your parking spot is.
Now, we just have to use the data to compute the risk score percentile. For this task, we’ll load the SFPD data into a Rockset collection and query it upon a user clicking the button.
To get started quickly, we’ll simply download the data as a CSV and upload the file into a new collection.
Later, we can set up a periodic job to forward the dataset into the collection via the API, so that it always stays up to date.
Filtering the Data
Let’s switch over to the query tab and try writing a query to filter down to the incidents we care about. There are a few conditions we want to check:
Vehicle-related incidents. Each incident has an “Incident Subcategory” assigned by the Crime Analysis Unit of the Police Department. We do a
SELECT DISTINCTquery on this field and scan the results to pick out the ones we consider vehicle-related.
- Motor Vehicle Theft
- Motor Vehicle Theft (Attempted)
- Theft From Vehicle
- Larceny - Auto Parts
- Larceny - From Vehicle
- Initial report. According to the data documentation, records cannot be edited once they are filed, so some records are filed as “supplemental” to an existing incident. We can filter those out by looking for the word “Initial” in the report type description.
- Within SF. The documentation also specifies that some incidents occur outside SF, and that such incidents will have the value “Out of SF” in the police district field.
- Last year. The dataset provides a datetime field, which we can parse and ensure is within the last 12 months.
- Geolocation available. We notice some rows are missing the latitude and longitude fields, instead having an empty string. We will simply ignore these records by filtering them out.
Putting all these conditions together, we can prune down from 242,012 records in this dataset to just the 28,224 relevant vehicle incidents, packaged up into a
WITH query.
Calculating a Risk Score, One Spot
Now that we have all vehicle incidents in the last year, let’s see if we can calculate the security score for San Francisco City Hall, which has a latitude of 37.7793° N and longitude of 122.4193° W.
Using some good old math tricks (radius times angle in radians to get arc length, approximating arc length as straight-line distance, and Pythagorean theorem), we can compute the distance in miles to each past incident:
We aggregate these distances using our formula from above, and voila!
For our app, we will replace the latitude/longitude of City Hall with parameters coming from the user’s browser location.
Sample of Parking Spots in SF
So we can calculate a risk score—1.63 for City Hall—but that is meaningless unless we can compare it to the other parking spots in SF. We need to find a representative set of all possible parking spots in SF and compute the risk score for each to get a distribution of risk scores.
Turns out, the SFMTA has exactly what we need—field surveys are conducted to count the number of on-street parking spots and their results are published as an open dataset. We’ll upload this into Rockset as well!
Let’s see what this dataset contains:
For each street, let’s pull out the latitude/longitude values (just the first point, close enough approximation), count of spots, and a unique identifier (casting types as necessary):
Calculating Risk Score, Every Spot in SF
Now, let’s try calculating a score for each of these points, just like we did above for City Hall:
And there we have it! A parking risk score for each street segment in SF. This is a heavy query, so to lighten the load we’ve actually sampled 5% of each streets and incidents.
(Coming soon to Rockset: geo-indexing—watch out for a blog post about that in the coming weeks!)
Let’s stash the results of this query in another collection so that we can use it to calculate percentiles. We first create a new empty collection:
Now we run an
INSERT INTO sf_risk_scores SELECT ... query, bumping up to 10% sampling on both incidents and streets:
Ranking Risk Score as Percentile
Now let’s get a percentile for City Hall against the sample we’ve inserted into
sf_risk_scores. We keep our spot score calculation as we had at first, but now also count what percent of our sampled parking spots are safer than the current spot.
Parking-Spot-Risk-Score-as-a-Service
Now that we have an arguably useful query, let’s turn it into an app!
We’ll keep it simple—we’ll create an AWS Lambda function that will serve two types of requests. On
GET requests, it will serve a local
index.html file, which serves as the UI. On
POST requests, it will parse query params for
lat and
lon and pass them on as parameters in the last query above. The lambda code looks like this:
import json from botocore.vendored import requests import os ROCKSET_APIKEY = os.environ.get('ROCKSET_APIKEY') QUERY_TEXT = """ WITH vehicle_incidents AS ( SELECT * FROM sf_incidents TABLESAMPLE BERNOULLI(10) WHERE "Incident Subcategory" IN ( 'Motor Vehicle Theft', 'Motor Vehicle Theft (Attempted)', 'Larceny - Auto Parts', 'Theft From Vehicle', 'Larceny - From Vehicle' ) AND "Report Type Description" LIKE '%Initial%' AND "Police District" <> 'Out of SF' AND PARSE_DATETIME('%Y/%m/%d %r', "Incident Datetime") > CURRENT_DATE() - INTERVAL 12 MONTH AND LENGTH("Latitude") > 0 AND LENGTH("Longitude") > 0 ), spot_score AS ( SELECT AVG( 1 / ( POW( (vehicle_incidents."Latitude"::float - :lat) * (3.1415 / 180) * 3959, 2 ) + POW( (vehicle_incidents."Longitude"::float - :lon) * (3.1415 / 180) * 3959, 2 ) ) ) as "Risk Score" FROM vehicle_incidents ), total_count AS ( SELECT SUM("Count") "Count" FROM sf_risk_scores ), safer_count AS ( SELECT SUM(sf_risk_scores."Count") "Count" FROM sf_risk_scores, spot_score WHERE sf_risk_scores."Risk Score" < spot_score."Risk Score" ) SELECT 100.0 * safer_count."Count" / total_count."Count" "Percentile", spot_score."Risk Score" FROM safer_count, total_count, spot_score """ def lambda_handler(event, context): if event[' == 'GET': f = open('index.html', 'r') return { 'statusCode': 200, 'body': f.read(), 'headers': { 'Content-Type': 'text/html', } } elif event[' == 'POST': res = requests.post( ' headers={ 'Content-Type': 'application/json', 'Authorization': 'ApiKey %s' % ROCKSET_APIKEY }, data=json.dumps({ 'sql': { 'query': QUERY_TEXT, 'parameters': [ { 'name': 'lat', 'type': 'float', 'value': event['queryStringParameters']['lat'] }, { 'name': 'lon', 'type': 'float', 'value': event['queryStringParameters']['lon'] } ] } })).json() return { 'statusCode': 200, 'body': json.dumps(res), 'headers': { 'Content-Type': 'application/json', } } else: return { 'statusCode': 405, 'body': 'method not allowed' }
For the client-side, we write a script to fetch the browser's location and then call the backend:
function getLocation() { document.getElementById("location-button").style.display = "none"; showMessage("fetching"); if (navigator.geolocation) { navigator.geolocation.getCurrentPosition(handleLocation, function (error) { showMessage("denied") }); } else { showMessage("unsupported") } } function handleLocation(position) { showMessage("querying"); var lat = position.coords.latitude; var lon = position.coords.longitude; fetch( ' + lat + '&lon=' + lon, { method: 'POST' } ).then(function (response) { return response.json(); }).then(function (result) { setResult(result['results'][0]); showMessage("result"); document.getElementById("tile").style.justifyContent = "start"; }); } function setResult(result) { document.getElementById('score').textContent = parseFloat(result['Risk Score']).toFixed(3); document.getElementById('percentile').textContent = parseFloat(result['Percentile']).toFixed(3); if (result['Percentile'] == 0) { document.getElementById('zero').style.display = "block"; } } function showMessage(messageId) { var messages = document.getElementsByClassName("message"); for (var i = 0; i < messages.length; i++) { messages[i].style.display = "none"; } document.getElementById(messageId).style.display = "block"; }
To finish it off, we add API Gateway as a trigger for our lambda and drop a Rockset API key into the environment, which can all be done in the AWS Console.
Conclusion
To summarize what we did here:
- We took two fairly straightforward datasets—one for incidents reported by SPFD and one for parking spots reported by SFMTA—and loaded the data into Rockset.
- Several iterations of SQL later, we had an API we could call to fetch a risk score for a given geolocation.
- We wrote some simple code into an AWS Lambda to serve this as a mobile web app.
The only software needed was a web browser (download the data, query in Rockset Console, and deploy in AWS Console), and all told this took less than a day to build, from idea to production. The source code for the lambda is available here.
More from Rockset
Get started with $300 in free credits. No credit card required. | https://rockset.com/blog/data-driven-decisions-realtime-parking-risk-score/ | CC-MAIN-2022-21 | refinedweb | 1,693 | 54.93 |
Hide Forgot
Spec URL: <>
SRPM URL: <>
Description: <This is simple flask extension allowing uploading images straight to Imgur image hosting service.> >
Fedora Account System Username: pynash
Hi Walter,
We have this process to get sponsored in packager group. Can you either submit few more packages and/or some (3-5) package reviews? This is needed to make sure package submitter understands packaging well and follows as per fedora packaging guidelines.
Please go through links
1)
2)
3) To find package already submitted for review check
4)
5) this is fedora-review tool to help review packages in fedora.
If you got any questions please ask :)
Hello Walter:
I am not a new packagers sponsonr but I can help in the review procces of your package with a informal review, please remember than you will need to look for a sponsor to aprove your request:
I am only comments some points than require atention:
[!]: Package consistently uses macros (instead of hard-coded directory names).
We have defined this macros than you can use in the %%files sección
%{python2_sitelib}/%{pypi_name}
%{python2_sitelib}/%{pypi_name}-%{version}-py?.?.egg-info
If you are not going to Epel can remove firts lines of the .spec, this macros are allready defined in Fedora
Here is a example of a .spec you can check:
Rpmlint
-------
Checking: python-flask-imgur-0.1-1.fc20.noarch.rpm
python-flask-imgur-0.1-1.fc20.src.rpm
python-flask-imgur.noarch: E: description-line-too-long C This is simple flask extension allowing uploading images straight to Imgur image hosting service.
python-flask-imgur.noarch: W: no-documentation
python-flask-imgur.src: E: description-line-too-long C This is simple flask extension allowing uploading images straight to Imgur image hosting service.
python-flask-imgur.src:31: W: macro-in-comment %{__python2}
python-flask-imgur.src:45: W: macro-in-comment %{python_sitearch}
2 packages and 0 specfiles checked; 2 errors, 3 warnings.
Error:
Remember than spec file must fill in 80 characteres per line,
Warning:
You have this line in your %%files section
# For arch-specific packages: sitearch
#%{python_sitearch}/*
If this is not needed remove it for the .spec
Also upstream is not providing a License file in the tarball, you can patch it, and request upstream to add the License file in futures releases.
Hi Parag, I'm serving as mentor of walter since the latest fudcon. (along with other guys) they are trying to package some flask libraries that yet not are in fedora.
Walter, will be very appreciated that you could do some informal reviews to other packagers, or take some packages of the discussed list in fudcon.
I'm able to sponsor to you, if one of these conditions are accomplished.
So, we can pass to the comments.
It is good habit, (in the case of python packages) see the setup.py, the requirements files, or the imports of the files for looking for the requires of the package.
if you see the setup.py
install_requires=["Flask", "six >= 1.7.3"],
and in the flask-imgur.py you can see
from six.moves import urllib
As you can see, you need the requires, Flask and python-six
- Take into account the William's comments.
- Remove the flags of the %build section. it is only needed if the package contains arched contents
- Clean manually the buildroot it is not necessary in newer versions of fedora (this applies only in epel5) - see %install section => rm -rf $RPM_BUILD_ROOT
- BTW, remove the unneded comments
# For noarch packages: sitelib
And please, list the files one per one, this is not mandatory, but if in the future there are content that not accomplish with fedora guidelines, you can see of the quick way.
=> %{python_sitelib}/*
Do you want package this for epel, if so, take into account that the macro %{_python2} does not exists there
Use this workaround
%{!?__python2: %global __python2 %{__python}}
Also, will be good see the epel guidelines.
Hoping your revisions.
Thanks :)
Edurado,
Sure go ahead and sponsor this person :)
Any update?
hello guys, I want to thank you for all the support and suggestions.
I must inform you that at this moment I'm working on a project that ends this week, and for that reason I could not do any upgrade.
next week and will be with more time and we will do everything possible to advance.
Greetings.
Walter, any update here? that week was very long :)
Hi Eduardo. ok, I'm ready to go with the package. only for now I must finish setting up my computer because I have recently installed fedora-workstation, I must prepare the environment while doing some maintenance on some machines, but this week and I'll be more active, and yes, it was definitely very long week :( best said the project.
Any progress here?
I don't think so, this potential contributor has not responded for seven months, so it is unlikely that him could follow with this package. i will close for now, feel free to open a new review request if you still are interested to work in this | https://bugzilla.redhat.com/show_bug.cgi?id=1157201 | CC-MAIN-2020-40 | refinedweb | 847 | 62.88 |
Definitions: Let P be a point in 3D of coordinates X in the world reference frame (stored in the matrix X) The coordinate vector of P in the camera reference frame is:
\[Xc = R X + T\]
where R is the rotation matrix corresponding to the rotation vector om: R = rodrigues(om); call x, y and z the 3 coordinates of Xc:
\[x = Xc_1 \\ y = Xc_2 \\ z = Xc_3\]
The pinhole projection coordinates of P is [a; b] where
\[a = x / z \ and \ b = y / z \\ r^2 = a^2 + b^2 \\ \theta = atan(r)\]
Fisheye distortion:
\[\theta_d = \theta (1 + k_1 \theta^2 + k_2 \theta^4 + k_3 \theta^6 + k_4 \theta^8)\]
The distorted point coordinates are [x'; y'] where
\[x' = (\theta_d / r) a \\ y' = (\theta_d / r) b \]
Finally, conversion into pixel coordinates: The final pixel coordinates vector [u; v] where:
\[u = f_x (x' + \alpha y') + c_x \\ v = f_y y' + c_y\]
#include <opencv2/calib3d.hpp>
#include <opencv2/calib3d.hpp>
Performs camera calibaration.
#include <opencv2/calib3d.hpp>
Distorts 2D points using fisheye model.
Note that the function assumes the camera matrix of the undistorted points to be identity. This means if you want to transform back points undistorted with undistortPoints() you have to multiply them with \(P^{-1}\).
#include <opencv2/calib3d.hpp>
Estimates new camera matrix for undistortion or rectification.
#include <opencv2/calib3d.hpp>
Computes undistortion and rectification maps for image transform by cv::remap(). If D is empty zero distortion is used, if R or P is empty identity matrixes are used.
#include <opencv2/calib3d.hpp>
Projects points using fisheye model..
#include <opencv2/calib3d.hpp>
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
#include <opencv2/calib3d.hpp>
Performs stereo calibration.
#include <opencv2/calib3d.hpp>
Stereo rectification for fisheye camera model.
#include <opencv2/calib3d.hpp>
Transforms an image to compensate for fisheye lens distortion..
See below the results of undistortImage.
Pictures a) and b) almost the same. But if we consider points of image located far from the center of image, we can notice that on image a) these points are distorted.
#include <opencv2/calib3d.hpp>
Undistorts 2D points using fisheye model. | https://docs.opencv.org/3.4/db/d58/group__calib3d__fisheye.html | CC-MAIN-2019-51 | refinedweb | 363 | 53.92 |
C# (Mono) on FreeBSD
Well, if you have read my blog at all, you will realize that I have a developer job writing in C# on Windows, but it is my personal hobby to use FreeBSD.
I am very excited about Mono. I love the C# language. I also love FreeBSD.
I am going to go ahead and say something bold. Few people now realize this yet, but the ability to code in C# on open source platforms is going to be the single most important feature in the coming years. It will eventually be a standard library that will exist or be one of the first items installed on every system.
For more information:
Packaging for Mono and related applications on FreeBSD () is handled by the BSD# Project. The purpose of this project is to maintain the existing Mono/C# ports in the FreeBSD ports tree, port new applications, and work on resolving FreeBSD specific issues with Mono. BSD# is entirely user supported and is not an official FreeBSD or Mono project.
For Licensing information:”
Installing Mono
Mono is a port and as always a port is easy to install on FreeBSD.
Note: The version of mono in ports is not necessarily the latest and greated. I recommend that you install the latest version of mono. See this article.
Installing the latest version of Mono on FreeBSD or How to install and use portshaker?
Compiling Hello World in Mono
The mono compiler is gmcs. It is simple to compile C# code.
- Create a new file called hw.cs. C# class files end in .cs.
- Add this text to the file:
using System; namespace HelloWorld { class HelloWorld { static void Main(string[] args) { System.Console.WriteLine("Hello World"); } } }
- Save the file.
- Compile the code to create an hw.exe program.# gmcs hw.cs
Running a Mono Program
Mono programs must be run using the “mono” command.
Hello World
A Mono IDE: MonoDevelop
There is an IDE for Mono called MonoDevelop. MonoDevelop is a port and as always a port is easy to install on FreeBSD.
The Mono Develop port integrated with KDE to add itself to the KDE menu under Applications | Development | MonoDevelop. So you can run it from there.
This IDE allows you to create C# solutions. It is possible to run compile them on FreeBSD and run them on Windows, or compile them on Windows and run them on FreeBSD.
Is It Really Cross Platform
C# and Mono are supposed to be cross platform. So I can write it in Windows using Visual Studio or I can write in FreeBSD using Mono Develop and either way it should run on both Windows and FreeBSD and any other platform that supports mono.
So here are the results of my quick tests:
Test 1 – Does the Hello World app run in Windows.
Yes. I copied the file to a Windows 7 64 bit box and ran it. It worked.
Test 2 – Does a GTK# 2.0 Project run in Windows
No. I created a GTK# 2.0 project on FreeBSD in Mono Develop, and didn’t add anything to it, I just compiled it. I copied the file to windows and ran it. However, it crashed.
Supposedly you have to install the GTK# for .NET on the windows box, but it still didn’t work.
Test 3 – Does a Windows Form Application compiled in Visual Studio 2010 in Window 7 run on FreeBSD
Not at first. I created a basic Windows Form application, and didn’t add anything to it, I just compiled it. I copied it to FreeBSD and ran it. It crashed. However, by default .NET 4.0 is used.
Yes, if compiled with .NET 3.5 or earlier. I changed the project to use .NET 3.5 and tried again. It ran flawlessly.
Test 4 – Does a Windows Presentation Foundation project compiled in Visual Studio 2010 in Window 7 run on FreeBSD
No. There is PresentationFramework assembly so the application crashes immediately. I tried multiple .NET versions.
Note: I didn’t really test much more than the basics. I just created new projects, left them as is and tried them. It would be interesting to see a more fully developed application tested and working on both platform and to know what issues were encountered in doing this.
No WPF
Unfortunately there is no WPF and no plans for it. Of course, WPF stand for Windows Presentation Foundation, and so the who “Windows” part of that might need to be changed to something like XPF, Xorg Presentation foundation.
However since there is Moonlight, which is to Silverlight as Mono is to C# and .NET, and Silverlight is a subset of WPF, I have to assume that WPF will arrive in mono eventually, even if it is by way of Moonlight first.
[...] Rhyous » Blog Archive » C# (Mono) on FreeBSD [...]
[...] Follow this post to get mono on FreeBSD. [...] | https://www.rhyous.com/2010/12/10/c-mono-on-freebsd/ | CC-MAIN-2019-18 | refinedweb | 817 | 75.5 |
On 28.02.2018 22:04, Jani Nikula wrote: > On Wed, 28 Feb 2018, Thierry Reding <thierry.red...@gmail.com> wrote: >> Anyone that needs something other than normal mode should use the new >> atomic PWM API. > > At the risk of revealing my true ignorance, what is the new atomic PWM > API? Where? Examples of how one would convert old code over to the new > API? As far as I know, the old PWM core code uses config(), set_polarity(), enable(), disable() methods of driver, registered as pwm_ops: struct pwm_ops {
int (*request)(struct pwm_chip *chip, struct pwm_device *pwm); void (*free)(struct pwm_chip *chip, struct pwm_device *pwm); int (*config)(struct pwm_chip *chip, struct pwm_device *pwm, int duty_ns, int period_ns); int (*set_polarity)(struct pwm_chip *chip, struct pwm_device *pwm, enum pwm_polarity polarity); int (*capture)(struct pwm_chip *chip, struct pwm_device *pwm, struct pwm_capture *result, unsigned long timeout); int (*enable)(struct pwm_chip *chip, struct pwm_device *pwm); void (*disable)(struct pwm_chip *chip, struct pwm_device *pwm); int (*apply)(struct pwm_chip *chip, struct pwm_device *pwm, struct pwm_state *state); void (*get_state)(struct pwm_chip *chip, struct pwm_device *pwm, struct pwm_state *state); #ifdef CONFIG_DEBUG_FS void (*dbg_show)(struct pwm_chip *chip, struct seq_file *s); #endif struct module *owner; }; to do settings on hardware. In order to so settings on a PWM the users should have been follow the below steps: ->config() ->set_polarity() ->enable() Moreover, if the PWM was previously enabled it should have been first disable and then to follow the above steps in order to apply a new settings on hardware. The driver should have been provide, at probe, all the above function: ->config(), ->set_polarity(), ->disable(), ->enable(), function that were used by PWM core. Now, having atomic PWM, the driver should provide one function to PWM core, which is ->apply() function. Every PWM has a state associated, which keeps the period, duty cycle, polarity and enable/disable status. The driver's ->apply() function takes as argument the state that should be applied and it takes care of applying this new state directly without asking user to call ->disable(), then ->config()/->set_polarity(), then ->enable() to apply new hardware settings. The PWM consumer could set a new state for PWM it uses, using pwm_apply_state(pwm, new_state); Regarding the models to switch on atomic PWM, on the controller side you can check for drivers that registers apply function at probe time. Regarding the PWM users, you can look for pwm_apply_state() (drivers/hwmon/pwm-fan.c or drivers/input/misc/pwm-beeper.c are some examples). Thierry, please correct me if I'm wrong. Thank you, Claudiu Beznea > > BR, > Jani. > _______________________________________________ Intel-gfx mailing list Intel-gfx@lists.freedesktop.org | https://www.mail-archive.com/intel-gfx@lists.freedesktop.org/msg149771.html | CC-MAIN-2018-30 | refinedweb | 432 | 56.18 |
SVG is defined to be a validating XML grammar. This means that elements not defined in the SVG (e.g., elements defined in a different namespace) are only allowed in particular designated sections of the SVG grammar.
The only places in the SVG grammar where arbitrary elements and/or character data are allowed are:
For example, a business graphics authoring application might want to include some private data within the <defs> section when it writes a SVG file so that it could properly reassemble the chart (a pie chart in this case) upon reading it back in:
<?xml version="1.0" standalone="yes"?> <svg width="4in" height="3in" xmlns = ''> <defs> <private xmlns: <myapp:piechart <myapp:piece <myapp:piece <myapp:piece <myapp:piece <!-- Other private data goes here --> </myapp:piechart> </private> </defs> <desc>This chart includes private data in another namespace </desc> <!-- In here would be the actual graphics elements which draw the pie chart --> </svg>
Download this example | http://www.w3.org/TR/1999/WD-SVG-19990412/private.html | CC-MAIN-2014-41 | refinedweb | 158 | 50.57 |
Java program to find sublist in a list :
In this tutorial, we will learn how to find one sublist of a list within a range. The user will enter the starting index and end index of the list. We will print out the sublist using this starting and ending index. Let’s take a look at the Java program first :
Java program to find sublist in a list :
import java.util.*; public class Main { public static void main(String[] args) { //1 Scanner scanner = new Scanner(System.in); //2 ArrayList numberList = new ArrayList(); //3 for (int i = 0; i <= 100; i++) { numberList.add(i); } //4 System.out.println("Enter starting index between 0 and 101 : "); int start = scanner.nextInt(); //5 System.out.println("Enter second index between 0 and 101 : "); int end = scanner.nextInt(); //6 List subList = numberList.subList(start,end); //7 System.out.println("Sublist : "+subList.toString()); } }
Explanation :
The commented numbers in the above program denote the step number below :
- Create one Scanner object to get the inputs from the user.
- Create one ArrayList.
- Using one for loop, add elements from 0 to 100 to this arraylist. So, on position i, the value is i for i = 0…100.
- Ask the user to enter starting index for the sublist. Save it to the variable start.
- Ask the user to enter ending index for the sublist. Save it to the variable end.
- Create one sublist by using the subList(startIndex,endIndex) function. Starting index is start and ending index is end.
- Print out the sublist to the user.
Sample Output :
1 Enter second index between 0 and 101 : 14 Sublist : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] Enter starting index between 0 and 101 : 1 Enter second index between 0 and 101 : 13 Sublist : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
Similar tutorials :
- Java listiterator Example : Iterate through a list using listiterator
- How to sort a list in Java : Explanation with example
- Java program to rotate each words in a string
- Java program to find maximum and minimum values of a list in a range
- Java Example to empty an Arraylist
- Java arraylist set method example | https://www.codevscolor.com/java-program-to-find-the-sublist | CC-MAIN-2020-50 | refinedweb | 366 | 65.52 |
Which is the good website for struts 2 tutorials?
Which is the good website for struts 2 tutorials? Hi,
After... for learning Struts 2.
Suggest met the struts 2 tutorials good websites.
Thanks
Hi,
Rose India website is the good
to learn
and can u tell me clearly sir/madam? Hi
Its good.../struts/". Its a very good site to learn struts.
You dont need to be expert...Struts Good day to you Sir/madam,
How can i start...*;
import org.apache.struts.action.*;
public class LoginAction extends Action...;
<p><html>
<body></p>
<form action="login.do"> All,
Can we have more than one struts-config.xml... in Advance.. Yes we can have more than one struts config files..
Here we use SwitchAction. So better study to use switchaction class
Struts - Framework
using the View component. ActionServlet, Action, ActionForm and struts-config.xml...Struts Good day to you Sir/madam,
How can i start struts application ?
Before that what kind of things necessary... are different.So,
it can be handle different submit button :
class MyAction extends What is model View Controller architecture in a web application,and why would you use it? Hi mamatha,
The main aim of the MVC....
-----------------------------------------------
Read for more information. ebook
struts ebook please suggest a good ebook for struts>
Hi good afternoon
Hi good afternoon write a java program that Implement an array ADT with following operations: - a. Insert b. Delete c. Number of elements d. Display all elements e. Is Empty
Migration of Struts and Hibernate How to struts can call the hibernate objects without spring or EJB or any other Controller package? Hi Friend,
Please visit the following link:
struts internationalisation - Struts
struts internationalisation hi friends
i am doing struts iinternationalistaion in the site... problem its urgent Hi friend,
Plz give full details and Source
using captcha with Struts - Struts
using captcha with Struts Hello everybody: I'm developping a web application using Struts framework, and i would like to use captcha Hi friend,Java Captcha in Struts 2 Application : | http://www.roseindia.net/tutorialhelp/comment/12463 | CC-MAIN-2015-06 | refinedweb | 337 | 68.06 |
I am trying with all my power to make this little program work! I have to make a pogram that copies input to output and replaces each string of a blank or more by a single blank!
It is exercise 1-9 if you have The C Programming Language book, second edition!
This is how far I got of it, but I know it is wrong since the compilers prints 102 infinitely! I will post my wrong noob code hoping that someone capable will give me a hint and tell me what am I doing wrong!Here is the code:
Code:
#include <stdio.h>
main()
{
int blank;
int x;
blank = 0;
x = getchar();
while (x != EOF)
{
if (x == ' ');
printf("%d\n", x);
}
} | http://cboard.cprogramming.com/c-programming/90823-big-confusion-easy-program-printable-thread.html | CC-MAIN-2013-48 | refinedweb | 122 | 90.5 |
Ok, let me give you a little bit of BackStory:
I wanted to make almost 100% cross platform apps. I knew Java. I enjoyed Java. I wanted to use Slick2D for desktop stuff first. I was suggested by someone there to use LibGDX. LibGDX is good and bad. Mostly good, but some things are really hard to understand, as I'll mention later. So, now I'm learning LibGDX, but I need a way of generating .fnt files. I tried using my Windows partition with BMFont, didn't work. Font was messed up. No one on the LibGDX forums understands why and ridicules against me for using a Mac. I've tried using FontBuilder and BMFont using WINE. FontBuilder wont generate .fnt files and BMFont wont let me generate the particular type of font that I want. Hiero wont save any files and crashes every other use. Screenshot of Hiero:
No one on the forums will answer me or help me with this issue.
The LibGDX engine is the only engine I can find that will do this, and it saddens me that I don't understand it. The only 2 video tutorials that exist are far from helpful. None of the listed LibGDX tutorials help.
Also, there's this thing in Java that I don't understand, and thats when you have a data type and another type of data in angel brackets. I have no idea what it does. Here is an example:
public class ActorAccessor implements TweenAccessor<Actor>
What the heck does that do? Other then make a public class called ActorAccessor that implements the interface that doesn't make sense to me. It's the TweenAccessor<Actor> that I don't understand. I can't find any information on it anywhere. I've searched for days.
I also feel like I'm not going to remember how to do stuff in the engine, like the actor accessor. It doesn't make any sense. I think this is due to me not understanding the tutorials. If you're using LibGDX, did you ever have these issues/thoughts/worries/concerns?
So basically what I'm asking is how can I generate FNT files in Mac, and how can I get started with LibGDX? I've tried all the listed tutorials on the site and DermetFans tutorials. None make sense at ALL.
Thanks! Any help is appreciated! | http://www.gamedev.net/topic/645433-how-to-generate-fnt-files-on-a-mac/ | CC-MAIN-2014-52 | refinedweb | 398 | 77.13 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.