Description of problem:
libmongoclient.so in libmongodb-2.4.6-1.el6.x86_64 is built with scons --use-system-all, which results in linking tcmalloc from libmongoclient.so. This is 1) undesirable because libraries should leave the choice of allocator up to the user of the library and 2) causes an application that uses libmongoclient.so to crash if it is prelinked, apparently due to a static initialization order issue.
The appropriate flag to use when building the library is --use-system-boost rather than --use-system-all. Note that linking tcmalloc when --use-system-all is specified is itself a bug in the mongodb build, tracked by a separate bug; however, --use-system-boost is the appropriate flag to use when building the library and will also resolve this problem in the EPEL RPM.
Note that the above applies to the libmongoclient.so library *only*. The mongod server process should be built using --use-system-all and should link tcmalloc.
Version-Release number of selected component (if applicable):
libmongodb-2.4.6-1.el6.x86_64
How reproducible:
deterministic
Steps to Reproduce:
1. create simple hello world test.cpp
2. compile with: g++ -o test test.cpp -lmongoclient
3. run with: ./test
Actual results:
aborts with
*** glibc detected *** /root/prelink-test/test: free(): invalid pointer: 0x0000000000e31200 ***
Expected results:
prints "hello world"
Additional info:
What's the proper fix, then? Patch mongodb 2.4.6 so that --use-system-all actually works? Because the current build is done with a single scons command for both the library and the server, so just changing the parameter would affect both.
Would it be easier to change your build to use two separate scons commands, one for the library and one for the server? The downside of a patch is that it may create more work to maintain as you move to future versions of MongoDB. I think either fix would work, it just depends on what you feel more comfortable with.
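As a rough illustration of the split-build idea, the spec could simply run scons twice. The target names and flags below are invented for the sketch and are not taken from the actual EPEL spec:

```sh
# Sketch only -- target names and paths are illustrative:

# 1) build the client library against system boost only
scons --use-system-boost install-mongoclient

# 2) build the server against all system libraries (so mongod links tcmalloc)
scons --use-system-all install-mongod
```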
It sounds feasible as starting with version 2.6.0, the libmongoclient library is completely separated from mongodb (and will be provided as separate package - now I'm starting to work on it).
Currently I'm running out of time though to patch this old EPEL version to use two separate scons commands. Any volunteers?
We already have to maintain an SConstruct patch, because in the 2.4 version of mongodb there were several libraries that did not work with the local versions. (Note: those were fixed in 2.6, thank you.)
I was going to take this on, but I can't seem to reproduce the error. So I wouldn't know if I fixed it.
[user@64RHEL6 test]$ cat test.cpp
// 'Hello World!' program
#include <iostream>
int main()
{
std::cout << "Hello World!" << std::endl;
return 0;
}
[user@64RHEL6 test]$ g++ -o test test.cpp
[user@64RHEL6 test]$ ./test
Hello World!
[user@64RHEL6 test]$ g++ -o test1 test.cpp -lmongoclient
[user@64RHEL6 test]$ ./test1
Hello World!
[user@64RHEL6 test]$ echo "Just did prelink"
Just did prelink
[user@64RHEL6 test]$ g++ -o test2 test.cpp -lmongoclient
[user@64RHEL6 test]$ ./test2
Hello World!
## Rebooted machine in case of cache ##
[user@64RHEL6 test]$ g++ -o test3 test.cpp -lmongoclient
[user@64RHEL6 test]$ ./test3
Hello World!
Can you verify that you only have one libmongoclient.so installed?
Here's a more detailed repro script, including prelink commands to force the prelink; does this help? Also included some commands to show which version of libmongoclient.so is being linked, just to verify.
+ cat test.cpp
#include <iostream>
int main() {
std::cout << "Hello World" << std::endl;
return 0;
}
+ g++ -o test test.cpp -lmongoclient
+ sudo /usr/sbin/prelink -u --all
+ sudo /usr/sbin/prelink test
+ ./test
*** glibc detected *** ./test: free(): invalid pointer: 0x0000000002a99200 ***
======= Backtrace: =========
/lib64/libc.so.6[0x3001476166]
/usr/lib64/libmongoclient.so(_ZN5mongo23InputStreamSecureRandomD0Ev+0x25)[0x30004d91a5]
/usr/lib64/libmongoclient.so(_ZN5mongo3OID16genMachineAndPidEv+0x4c)[0x300046303c]
/usr/lib64/libmongoclient.so[0x3000463142]
/usr/lib64/libmongoclient.so[0x3000517566]
...
+ ls -l /usr/lib64/libmongoclient.so
-rwxr-xr-x. 1 root root 1404104 May 15 16:13 /usr/lib64/libmongoclient.so
+ md5sum /usr/lib64/libmongoclient.so
65ed5698505f5178bbbb4d6af9068cdc /usr/lib64/libmongoclient.so
+ rpm -qf /usr/lib64/libmongoclient.so
libmongodb-2.4.6-1.el6.x86_64
This should now be fixed in el6.
If this issue still occurs, feel free to reopen.
mongodb-2.4.12-3.el6 has been submitted as an update for Fedora EPEL 6.
Package mongodb-2.4.12-3.el6:
* should fix your issue,
* was pushed to the Fedora EPEL 6 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=epel-testing mongodb-2.4.12-3.el6'
as soon as you are able to.
Please go to the following url:
then log in and leave karma (feedback).
mongodb-2.4.12-3.el6 has been pushed to the Fedora EPEL 6 stable repository. If problems still persist, please make note of it in this bug report.
This Python GUI tutorial covers the basics of how to create GUI's in Python using PyQt5. You'll learn how to install PyQt5, how to find the designer and basic usage, converting .ui to .py, running the output file and connecting events to methods.
For this tutorial you will need a version of Python equal to or above 3.5. I do however recommend getting a more recent version than 3.5, as I had issues with 3.5.2; a simple upgrade to 3.6.6 fixed this. If you don't know where to get Python, look here.
Now you will want to install pyqt5 and pyqt5-tools. To do this, we can use pip. Call the two commands:
python -m pip install pyqt5
python -m pip install pyqt5-tools
After each command completes, make sure to verify that it installed properly by checking that pip says the packages were installed successfully.
You should now be able to execute the following in Python with no errors:
import PyQt5
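If you want a slightly more robust check than eyeballing pip's output, a small snippet like the following reports whether Python can find the packages. (The module name pyqt5_tools for the tools package is an assumption; it can vary between releases.)

```python
import importlib.util

def is_installed(name):
    """Return True if Python can locate the given module."""
    return importlib.util.find_spec(name) is not None

# These are the packages installed above; what this prints
# depends on your environment.
for name in ("PyQt5", "pyqt5_tools"):
    print(name, "->", "installed" if is_installed(name) else "missing")
```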
Using the Designer
If you have issues installing PyQt5 for some reason, you could try the installers at its sourceforge page.
When you ran the pip install pyqt5-tools command, it installed tools that we can use to help with building GUIs.
First we need to locate where they are now. To do this we can run the following code in Python:
import site
print(site.getsitepackages())
This will print out a couple of paths, for me it prints:
['C:\\Python36', 'C:\\Python36\\lib\\site-packages']. Here we can see my Python distribution location and the site-packages folder. Open up a file explorer and navigate to the site-packages folder. In this folder you then want to locate the
pyqt5-tools folder and open it. In this folder you should find a few files, one named designer.exe.
After recent updates to
pyqt5-tools, designer.exe has been moved to the 'Scripts' folder in your Python distribution. To find this using the paths we found before, go to the first path returned ('C:\Python36' in my case) and then go into the 'Scripts' folder; you should find designer.exe in here.
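Both candidate locations described above can be checked from Python itself. This is just a convenience sketch; the pyqt5_tools folder name and the designer.exe file name are assumptions based on the layout described in the text:

```python
import os
import site
import sys

# The two places mentioned above where designer.exe may live.
candidates = [os.path.join(p, "pyqt5_tools") for p in site.getsitepackages()]
candidates.append(os.path.join(os.path.dirname(sys.executable), "Scripts"))

for folder in candidates:
    exe = os.path.join(folder, "designer.exe")
    print(exe, "->", "found" if os.path.isfile(exe) else "not here")
```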
designer.exe is the program that we will use to make designing GUIs a lot easier; remember the location of this or create a shortcut to it and then run it (double click if you wish). You will be welcomed with a selection window; select "Main Window" and then click "Create".
You will now be shown an empty window in the middle of the designer's window with widget options on the left and some other windows to the right.
On the left you can drag widgets from the drop-downs onto the MainWindow object in the center. On the right in the Object Inspector you can see the current widgets in your window. Right below that is the Property Editor that allows you to modify the widget currently selected.
For this example, we will be just putting a button that prints "Hello World!" when pressed in the Window; soon we will work with a larger example when the basics are out of the way.
First make your window a bit smaller by clicking and dragging the bottom right of the window (or any part on the edge) to change the size as you would do on any other window. Now find the "Push Button" widget under the Buttons section (on the left) and drag it into the window anywhere. You can double click on the button now to rename the text in it; set it to "Print Message".
You may now want to change the size of the button. To do this, click on the button until there are blue squares on each of its corners and edges. Use these blue squares to re-size the button to a size you want.
When you are happy with what you have done (this is just a simple practice), click File -> Save in the top left and save it somewhere you will remember; I will save it as 'printMessage.ui' on my desktop.
Converting .ui to .py
Now that we have our .ui file, we can convert it into Python code so we can modify and run it.
To do this we need to locate pyuic5. pyuic5 converts .ui to .py in one simple command and is located in the 'Scripts' folder in your Python distribution. If you don't know where this is, execute the following:
import sys
import os
print(os.path.dirname(sys.executable))
You will be provided a path of where your python distribution is located; mine is at
C:\Python36 (we saw this before when looking for the designer). Now go to this directory and then open the 'Scripts' folder in there.
In this folder you will see a few executables to make Python in the terminal easier, you should also be able to see a file named
pyuic5.exe; this is what we are going to use. Open cmd and then use the cd command to get to the directory; for example I will execute
cd C:\Python36\Scripts in cmd. Now if you type
pyuic5 you should be provided a small error message saying "Error: one input ui-file must be specified"; this is good, we are in the right place.
To convert we now need to pass in the .ui file we created before and decide on an output filename and location. My .ui file is on my desktop currently so I will put the created .py file on the desktop also to make it easier. Now we can convert using
pyuic5 -x <.ui file> -o <output.py file>, for example I will use
pyuic5 -x "C:\Users\Brent\Desktop\printMessage.ui" -o "C:\Users\Brent\Desktop\printMessage1.py". This should not take very long at all and show no errors; the python script you declared after -o will now exist - for me a file called printMessage1.py is now on my desktop.
Now that you have the .py file containing the ui code, open it in IDLE or whatever IDE you use. Your code should look something like this:
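For reference, the output of pyuic5 -x generally has the following shape. This is a trimmed, illustrative sketch: the geometry values are invented, real output also includes a menu bar and status bar, and it needs PyQt5 and a display to actually run.

```python
# -*- coding: utf-8 -*-
from PyQt5 import QtCore, QtGui, QtWidgets

class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(320, 240)                      # size is illustrative
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(90, 90, 141, 41))
        MainWindow.setCentralWidget(self.centralwidget)
        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.pushButton.setText(_translate("MainWindow", "Print Message"))

if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())
```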
Run the code and then wait for the GUI to appear; it currently won't do anything when you press the button so we will set that up.
Go back into the code for this GUI and look in the setupUi definition. You should see a class variable called
self.pushButton which later has its text set to "Print Message" in the retranslateUi definition below. This is the object of our button that we need to make do something.
First we need to create our method. This method can be in the global scope or in the class scope; for simplicity I will put it in the class scope. After the retranslateUi definition, create a new method called printMessage that takes self as an argument, since it is part of the class now. Inside this put what you want the script to do when you press the button; for this example I will print "Hello World!".
def printMessage(self):
    print("Hello World!")
Now that you have created the method to run, we can attach it to the button. Using the button variable we found before, we can call .clicked.connect() on it passing in the name of our method we just created (make sure to remember
self. for class variables). Put this line at the bottom of the setupUi definition so it will be executed when the GUI is built.
self.pushButton.clicked.connect(self.printMessage)
Remember to not include () when passing the method here or it will not work.
Your script should now look something like this, don't worry if it is a bit different.
Run the code again and now when you press the button you should see that the script runs the code that you defined in the method you attached to the button.
More Advanced Example
In this next example I will show you how to put images in the GUI, add a selection dialog box and add data to a table.
Open the designer back up and create a new MainWindow. Now create and position the 5 widgets described below:
- Label (in Display Widgets): This label will be the widget we use to show the image
- PushButton (in Buttons): This button will allow us to select the image to show in the Label
- ListWidget (in Item Widgets): This will hold and display entries we add to it
- LineEdit (in Input Widgets): This will allow us to enter what we want into the table
- PushButton (in Buttons): This will add the item in the LineEdit to the table
If you click on a widget, for example the button that will be selecting the image, the Property Editor on the right will display all the options for that object. In there you can edit things like the exact size/position, borders (frames), tooltips and fonts.
First we will want to edit the Label that is holding the image; click on the label to open it up in the Property Editor. Find "text" under the QLabel section and delete the text; this is the same as double clicking and removing the text. Now go to the QFrame heading and change "frameShape" to Box; this will put a border around the image.
Now click on the select image button and in the QWidget header go to "font". There are many options in here, so play around to get a font you like (you can leave it if you want). Your GUI should now look something like this:
To help you later on, you can also change the "objectName" under QObject in the Property Editor for all widgets so you can identify them easier later. Here is what I named mine:
- Select Image Button: selectImageBtn
- Image Label: imageLbl
- LineEdit: lineEdit
- Add button: addBtn
- ListWidget: listWidget
Like we did before, convert this .ui file (after saving it) to a .py file.
Now we need to connect all the actions together. The select image button will need to be connected to a method that displays an image selection dialog which will then put an image in the label. Also we will need to make the 'add' button add the text in the lineEdit object into the listWidget object.
If you open your .py file in IDLE, you can see that the names we gave each widget have carried through to the names of the actual objects. I also recommend running the code now to make sure it works.
First you will want to create a method definition called setImage; make sure this takes self, as it is defined in the class. In this method we want to ask the user for an image (not just any file). If they provide a file, we then need to create a pixmap object and set it on the image label. Before we set it, it is ideal to scale the image to fit inside the label without stretching, and to align it to the centre. Your method will look like this:
def setImage(self):
    fileName, _ = QtWidgets.QFileDialog.getOpenFileName(
        None, "Select Image", "",
        "Image Files (*.png *.jpg *jpeg *.bmp)")  # Ask for file
    if fileName:  # If the user gives a file
        pixmap = QtGui.QPixmap(fileName)  # Setup pixmap with the provided image
        pixmap = pixmap.scaled(self.imageLbl.width(), self.imageLbl.height(),
                               QtCore.Qt.KeepAspectRatio)  # Scale pixmap
        self.imageLbl.setPixmap(pixmap)  # Set the pixmap onto the label
        self.imageLbl.setAlignment(QtCore.Qt.AlignCenter)  # Align the label to center
In the QtWidgets.QFileDialog.getOpenFileName method call, you can see I have passed the string
"Image Files (*.png *.jpg *jpeg *.bmp)". This declares the type of files I will accept. If you want to accept any file, remove this string completely. If you want to make it so the user could switch it to all files themselves, set it to
"Image Files (*.png *.jpg *jpeg *.bmp);;All Files (*)"; play around with this to get the idea of what is going on, you can add more file extensions and selections if wanted.
Now we need to attach the button to the definition. This can simply be done using the connect method we used previously; make sure to put it at the bottom of the setupUi definition like we did before:
self.selectImageBtn.clicked.connect(self.setImage)
Run the script and make sure that your button works. Initially the label will be clear but each time you select a file it should change to the selected file.
We will now need to create another method definition called addItem, like before. In this method we need to get the value of the lineEdit object and then clear it. We can then simply add the value we got to the listWidget. After you have done that, the method should look like this:
def addItem(self):
    value = self.lineEdit.text()  # Get the value of the lineEdit
    self.lineEdit.clear()  # Clear the text
    self.listWidget.addItem(value)  # Add the value we got to the list
And then like before, we need to connect the button to the method just under the last one we did:
self.addBtn.clicked.connect(self.addItem)
Running the script now, you should be able to type into the line edit and then press Add for the item to go into the list.
Hey all,
Hopefully this is quick for someone. I'm just trying to create a function to round a value up to the nearest integer. I'm using WFA v2.2. I've created this:
def roundup (num){Math.ceil(num);}
My problem is that it returns a decimal value, and ONTAP needs an integer value to increase the size of a volume. For example, if I enter 12, I get 12.0; if I enter 12.5, I get 13.0.
I've tried using ((int) Math.ceil(num)); to return an integer, but I get an error that it's an Illegal expression.
Thanks,
Roger
Roger,
Instead of Math.ceil(num), try Integer.valueOf(num). If you look at the "actualVolumeSize" function, it appears that "return (int)(Math.ceil(num))" may also work, or just "(int)(num)" if you really just need the integer component and are not that concerned about rounding up.
Mike
Thanks Mike,
Just putting the int on the return worked. I ended up with this.
def roundup(num) { answer = Math.ceil(num); return (int)(answer) }
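For anyone trying this outside WFA, the same idea in plain Java looks like the following. The class name is invented for the example:

```java
public class RoundupDemo {
    // Same idea as the WFA function: round up, then truncate to an int.
    static int roundup(double num) {
        return (int) Math.ceil(num);
    }

    public static void main(String[] args) {
        System.out.println(roundup(12.0)); // prints 12, not 12.0
        System.out.println(roundup(12.5)); // prints 13
    }
}
```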
DBIx::Class::Tutorial::Part3
If you missed the previous parts, go and start reading at DBIx::Class::Tutorial::Part1.
Stay here if you're just after ways to deal with the data you're getting out of DBIx::Class, or adding your own accessors.
The act of converting data coming out of the database into a useful object, in order to call methods on it, is called Inflation.
This is usually done for the data from one column of a Row.
The classic example is a field containing data representing a specific date and time. The data type for these fields is usually datetime or timestamp. An Inflator can turn the datetime data into a DateTime object.
Deflation is the opposite. Turning a supplied object back into a string or other piece of data suitable for inserting into the database.
Most databases provide a way to auto-increment a numeric column, usually an integer column, to use as a primary key. Some allow you to create a sequence which can be queried for its next value, to use as the primary key. The difficulty with creating new rows using auto-increment primary keys is retrieving the value of the key after a new row has been inserted.
An oft-asked question is: How can I add accessors to my Result classes to store non-column data?
We can easily add new accessors to Row objects to set and retrieve data not stored in the database.
DBIx::Class automatically fetches the primary key from the database for you, and stores it in your new row object.
## Fetch a new path and print its ID:
my $path = $schema->resultset('Path')->create(
    {
        path => '/support',
    }
);
print $path->id;
This is done using last_insert_id.
Tables using sequences for their primary keys should be updated using a trigger to update the value. The name of the sequence can be set in the add_columns so that the last value can be fetched by DBIx::Class::PK::Auto.
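A column definition along those lines might look like this. This is a schema-definition sketch; the column and sequence names are invented:

```perl
__PACKAGE__->add_columns(
    id => {
        data_type         => 'integer',
        is_auto_increment => 1,
        sequence          => 'path_id_seq',   # name of the database sequence
    },
);
```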
Just retrieving raw data from the database is only half the battle. You likely want to also do something useful with it, or manipulate it and re-insert. For example, many tables have a field containing the date and time the row was created, or last modified.
If we want to take that date and display, for example, how long since the row was modified in days/hours/minutes, it would be useful to have that datetime value as a DateTime object; then we can use DateTime::Duration or similar to display the elapsed time.
To do this, we can add a new component to the Result classes. In lib/Breadcrumbs/Schema/Path.pm you'll notice a line that says:
__PACKAGE__->load_components(qw/ Core/);
It may contain other components as well. Add the InflateColumn::DateTime component in front of the existing ones.
__PACKAGE__->load_components(qw/ InflateColumn::DateTime Core /);
Order is important. Core must go last.
The accessors of any columns of type datetime, timestamp and date will now return DateTime objects when called.
## Print last_modified as an ISO formatted string
my $dt = $path->last_modified();
print $dt->iso8601;
You can now also set the value of last_modified using a DateTime object.
## Set last_modified
my $dtnow = DateTime->now();
$path->last_modified($dtnow);
$path->update();
To see how to create more inflators and deflators for other types of objects, read DBIx::Class::InflateColumn.
DBIx::Class creates standard getter/setter accessors for you, for all your values. If you would like to change or manipulate the value of a particular column on the way into or out of the database, you can write your own accessors.
To do this you will first have to edit the Result class, adding the accessor key to the column definition in your "add_columns" call (see DBIx::Class::ResultSource).
## Add accessor for path column, in
## Breadcrumbs::Schema::Path
__PACKAGE__->add_columns(
    ...
    path => {
        data_type => 'varchar',
        size      => 255,
        accessor  => '_path',
    },
    ...
);
DBIx::Class will now create this accessor with the name _path. We can now write our own path method.
## Clean extra slashes off paths
sub path {
    my ($self, $value) = @_;
    if (@_ > 1) {
        $value =~ s{^/}{};
        $value =~ s{/$}{};
        $self->_path($value);
    }
    return $self->_path();
}
You can add your own accessors for non-column (database) data to your Result classes quite easily. Just edit the Result classes.
## Add accessor for storing whether a path has been checked,
## to Breadcrumbs::Schema::Path
__PACKAGE__->mk_group_accessors('simple' => qw/is_checked/);

$path->is_checked(1);
Or, just add an entire method to do the work and return the result.
## Add method to check if the path exists:
sub check {
    my ($self, $root) = @_;
    return 1 if (-d catfile($root, $self->path));
    return 0;
}
Putting methods in your Result classes will make them available to the Row objects. To add methods to entire resultsets, you will first need to subclass DBIx::Class::ResultSet.
package Breadcrumbs::ResultSet::Path;

use base 'DBIx::Class::ResultSet';

sub check_paths {
    ## $self is a resultset object!
    my ($self, $root) = @_;
    my $ok = 1;
    foreach my $path ($self->all) {
        $ok = 0 if (!-d catfile($root, $path->path));
    }
    return $ok;
}
To make this module your default resultset class for all Path resultsets, call resultset_class in your Result class.
## Set the new resultset class, in Breadcrumbs::Schema::Path:
__PACKAGE__->resultset_class('Breadcrumbs::ResultSet::Path');
Make sure you don't create the new ResultSet class in the namespace/directory underneath the existing Schema. This will cause "load_classes" in DBIx::Class::Schema to attempt to load it as if it were a Result class. The result will not be good.
Jess Robinson <castaway@desert-island.me.uk> | http://search.cpan.org/dist/DBIx-Class-Tutorial/lib/DBIx/Class/Tutorial/Part3.pod | CC-MAIN-2017-30 | refinedweb | 918 | 54.32 |
Hi, I am a beginner in Java and I need help completing this program...
yasuodancez -3
What have you gotten so far for your code?
You have the definition fundamentals down for what you need to do; now you can search the Java API for that. Searching the API for tokenizing will bring you to StringTokenizer. However, you could also read in the file with a BufferedReader that takes a FileReader as an argument, which in turn takes a File. Then you can read each line of the file by calling the readLine() method on the buffered reader, and you can split each line into words by invoking the split() method on a String.
The split method can take a delimiter as an argument and returns an array of the pieces split on that delimiter.
Then you can iterate through each word and do what you please with it.
I hope that all made sense to you. If not, I can elaborate.
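As a runnable sketch of the approach described above. A StringReader stands in for the FileReader so the example is self-contained, and the class name is invented:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class SplitDemo {
    public static void main(String[] args) throws IOException {
        // In the real program this would be:
        // new BufferedReader(new FileReader(new File("input.txt")))
        BufferedReader reader = new BufferedReader(
                new StringReader("one two\nthree four"));

        String line;
        int words = 0;
        while ((line = reader.readLine()) != null) {   // read each line
            for (String word : line.split(" ")) {      // split on spaces
                System.out.println(word);
                words++;
            }
        }
        System.out.println("total: " + words);
    }
}
```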
I'm making use of the Scanner to read the text from a file. But if my input file is an HTML page and I have to read the contents within the tag <text></text>, how do I do it? I've done this much so far:
package test;

import java.io.*;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class Main {

    private static void readFile(String fileName) {
        try {
            File file = new File(fileName);
            Scanner scanner = new Scanner(file);
            while (scanner.hasNext()) {
                System.out.println(scanner.next());
            }
            scanner.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("usage: java TextScanner1" + "file location");
            System.exit(0);
        }
        readFile(args[0]);
    }
}
javaAddict 900
Try to read line by line. Use the methods hasNextLine and nextLine, and save each line into a String.
Once you have the String, use the method indexOf(String) and find where the "<text>" and "</text>" are found. Then use the substring method.
All the above methods can be found at the java.lang.String API.
Remember:
<text>aaaa</text>
0123456789...
The indexOf method will return 0 when you search for "<text>", so in order to get the "aaaa" you will need to do substring(0+6, 10)
Where 0 and 10 would be the values that the indexOf method will return
Of course, you don't know where the tag starts or how long the text is, so
substring(s.indexOf("<text>") + "<text>".length(), s.indexOf("</text>"));
would be closer to it. If that's difficult to read, take it apart piece by piece:
"<text>" is a string, so it has the String methods, so "<text>".length() returns 6. Why do we use the longer form? Because it makes it clear what it is we're measuring, and because you're likely to want to generalize in the future, so "<text>" might become a String variable called, say, tag, and you'd have tag.length() - but it would still work.
s.indexOf(arg) returns the initial index of the String arg within the String s - in this case, it's our friend "<text>". Again, if you generalized it, you might find that you used tag in place of the explicit String.
The second argument is just indexOf() again, which you know. So this works out to
"get me the String composed of the characters starting just after the first instance of "<text>" in my string, right up to the first instance of "</text>".
(by the way, this is not a great parsing method, though it works for simple input, and when nobody's trying to break it - see if you can come up with a few ways to break it)
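Putting the pieces together as a runnable sketch. The helper name between is invented, and it also guards against the tag being absent on a line:

```java
public class TagDemo {
    // Return the text between the first open/close pair, or null if absent.
    static String between(String s, String open, String close) {
        int start = s.indexOf(open);
        int end = s.indexOf(close);
        if (start == -1 || end == -1) {
            return null;   // tag not found on this line
        }
        return s.substring(start + open.length(), end);
    }

    public static void main(String[] args) {
        System.out.println(between("blah <text>hello</text> blah",
                                   "<text>", "</text>"));  // hello
        System.out.println(between("no tags here",
                                   "<text>", "</text>"));  // null
    }
}
```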
javaAddict 900
Another good thing would be to check if the line you are trying to parse has that tag.
First call the indexOf method. If it returns -1 then the line doesn't have the <text> tag, so continue with the next line.
An in case that the line has more than one tag:
Line = <text>aa</text><text>bbb</text>
You can put that in a loop and take the next indexOf that tag. There is also a method that takes an extra int argument indicating where you want to start searching: indexOf("string", int).
But better leave that for last and handle simple cases first.
Checking the indexOf() value is probably a good idea. If you don't, you'll be getting the substring from (-1 +6 = 5) to (-1) on any line that doesn't have the </text> tag.
I wouldn't worry too much about the looping, though. If you're really trying for a robust parsing method, you want to get into stacks and regular expressions. If you're not, you have to decide just what cases you want to be able to handle.
My suggestion is, get the subString stuff running for the simplest case, something like:
blah blah <text> Here is the text to return </text> blah blah
and then determine what else you need to handle.
Thank you one and all... :-) I completed it. I'm overwhelmed by the response.
Glad to help. Hope it was fun to write. | https://www.daniweb.com/programming/software-development/threads/308566/function-that-tokenizes-the-text-from-a-file | CC-MAIN-2019-04 | refinedweb | 900 | 80.72 |
I have seen the command void in some people's code. From what I understand, it would allow you to jump to another part of the program. I have tried this, but could not figure it out. The main idea of what I am trying to get is something along the following lines:
If anyone can help me with this code, please do. Thanks for the help.
Code:
#include <iostream>
using namespace std;
int main()
//I want to define a place that can be refered to here.
{
cout<<"Hello.\n";
cin.get();
cout<<"Press return to restart the program.\n";
cin.get();
//I want it to refer to the point above when it reaches this point.
}
Razorblade Kiss | http://cboard.cprogramming.com/cplusplus-programming/59901-i-need-help-void-command-printable-thread.html | CC-MAIN-2014-42 | refinedweb | 137 | 92.22 |
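What the poster seems to want - jumping back to an earlier point in the program - is what a loop expresses. A minimal sketch (goto and labels do exist in C++, but a while loop is the safer way to say the same thing; the function name and the 'runs' counter are invented stand-ins for "until the user quits"):

```cpp
#include <iostream>

// Execution "refers back" to the top of the while loop on every pass,
// which is the effect the question describes. 'runs' replaces the
// interactive cin.get() so the sketch is deterministic.
int greetRepeatedly(int runs)
{
    int shown = 0;
    while (runs-- > 0) {    // control returns here each time through
        std::cout << "Hello.\n";
        std::cout << "Press return to restart the program.\n";
        ++shown;
    }
    return shown;           // how many times the greeting ran
}
```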
This article originally appeared in C/C++ Users Journal, and is reproduced by kind permission.
Embedded programmers traditionally use C as their language of choice. And why not? It's lean and efficient, and allows you to get as close to the metal as you want. Of course C++, used properly, provides the same level of efficiency as the best C code. But we can also leverage powerful C++ features to write cleaner, safer, more elegant low-level code. This article demonstrates this by discussing a C++ scheme for accessing hardware registers in an optimal way.
Embedded programming is often seen as black magic by those not initiated into the cult. It does require a slightly different mindset; a resource constrained environment needs small, lean code to get the most out of a slow processor or a tight memory limit. To understand the approach I present we'll first review the mechanisms for register access in such an environment. Hardcore embedded developers can probably skip ahead; otherwise here's the view from 10,000 feet.
Most embedded code needs to service hardware directly. This seemingly magical act is not that hard at all. Most devices are controlled through registers mapped into the processor's address space[1]: reading or writing a register is simply reading or writing a particular memory location, just as for any other memory mapped device.
Each device has a data sheet that describes (amongst other things) its register set, presented in a manner similar to Figure 1.
So what does hardware access code look like? Using the simple example of a fictional UART line driver device presented in Figure 1, the traditional C-style schemes are:
Direct memory pointer access. It's not unheard of to see register access code like Listing 1, but we all know that the perpetrators of this kind of monstrosity should be taken outside and slowly tortured. It's neither readable nor maintainable.
Pointer usage is usually made bearable by defining a macro name for each register location. There are two distinct macro flavours. The first macro style defines bare memory addresses (as in Listing 2); with it, every use must cast the value to an appropriate pointer type. The second style, in Listing 3, is to include the cast in the macro itself; far nicer in C. Unless there's a lot of assembly code this latter approach is preferable.
We use macros because they have no overhead in terms of code speed or size. The alternative, creating a physical pointer variable to describe each register location, would have a negative impact on both code performance and executable size. However, macros are gross and C++ programmers are already smelling a rat here. There are plenty of problems with this fragile scheme. It's programming at a very low level, and the code's real intent is not clear- it's hard to spot all register accesses as you browse a function.
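A sketch of the two macro flavours, run against a fake in-RAM "device" so it can execute anywhere; on real hardware the address would come from the data sheet, and every name here is invented:

```cpp
#include <cstdint>

// Stand-in for the memory-mapped UART; on hardware this would be a fixed
// physical address such as 0xFFE00004, not an array in RAM.
volatile std::uint32_t fakeDevice[4];

// Flavour 1 (Listing 2 style): a bare address; callers cast on every use.
#define UART_TXBUF_ADDR  ((std::uintptr_t)&fakeDevice[1])
// Flavour 2 (Listing 3 style): the cast lives inside the macro.
#define UART_TXBUF       ((volatile std::uint32_t *)&fakeDevice[1])

std::uint32_t demo(std::uint32_t v)
{
    *(volatile std::uint32_t *)UART_TXBUF_ADDR = v;  // flavour 1: cast at call site
    return *UART_TXBUF;                              // flavour 2: no cast needed
}
```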
Deferred assignment is a cute technique that allows you to write code like Listing 4, defining the register location values at link time. This is not commonly used; it's cumbersome when you have a number of large devices, and not all compilers provide this functionality. It requires you to run a flat (non virtual) memory model.
Use a struct to describe the register layout in memory, as in Listing 5. There's a lot to be said for this approach - it's logical and reasonably readable. However, it has one big drawback: it is not standards-compliant. Neither the C nor C++ standards specify how the contents of a struct are laid out in memory. You are guaranteed an exact ordering, but you don't know how the compiler will pad out non-aligned items. Indeed, some compilers have proprietary extensions or switches to determine this behaviour. Your code might work fine with one compiler and produce startling results on another. Before moving on, let's see how to manipulate registers containing a bitset. Conventionally we write such code by hand, something like Listing 6. This is a sure-fire way to cause yourself untold grief, tracking down odd device behaviour. It's very easy to manipulate the wrong bit and get very confusing results.
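A sketch of the mask-based helpers that such hand-written code usually turns into; 'fakeStatusReg' stands in for a real memory mapped register, and all names are invented:

```cpp
#include <cstdint>

volatile std::uint32_t fakeStatusReg;  // stand-in for a real device register

// Centralising the read-modify-write in one place makes "wrong bit"
// mistakes rarer than open-coding |= and &= at every call site.
void bitSet(volatile std::uint32_t &reg, std::uint32_t mask)   { reg |= mask; }
void bitClear(volatile std::uint32_t &reg, std::uint32_t mask) { reg &= ~mask; }
bool bitTest(volatile std::uint32_t &reg, std::uint32_t mask)  { return (reg & mask) != 0; }

std::uint32_t exerciseBits()
{
    fakeStatusReg = 0;
    bitSet(fakeStatusReg, (1u << 3) | (1u << 0));  // raise bits 3 and 0
    bitClear(fakeStatusReg, 1u << 0);              // drop bit 0 again
    return fakeStatusReg;                          // only bit 3 remains
}
```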
So having seen the state of the art, at least in the C world, how can we move into the 21st century? Being good C++ citizens…
Step one is to junk the whole preprocessor macro scheme, and define the device's registers in a good old-fashioned enumeration. For the moment we'll call this enumeration Register. We immediately lose the ability to share definitions with assembly code, but this was never a compelling benefit anyway. The enumeration values are specified as offsets from the device's base memory address. This is how they are presented in the device's datasheet, which makes it easier to check for validity. Some data sheets show byte offsets from the base address (so 32-bit register offsets increment by 4 each time), whilst others show 'word' offsets (so 32-bit register offsets increment by 1 each time). For simplicity, we'll write the enumeration values however the datasheet works.
The next step is to write an inline regAddress function that converts the enumeration to a physical address. This function will be a very simple calculation determined by the type of offset in the enumeration. For the moment we'll presume that the device is memory mapped at a known fixed address. This implies the simplest MMU configuration, with no virtual memory address space in operation. This mode of operation is not at all uncommon in embedded devices. Putting all this together results in Listing 7.
The missing part of this jigsaw puzzle is the method of reading/writing registers. We'll do this with two simple inline functions, regRead and regWrite, shown in Listing 8. Being inline, all these functions can work together to make neat, readable register access code with no runtime overhead whatsoever. That's mildly impressive, but we can do so much more.
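Putting the pieces described so far together, a minimal sketch (the register names are invented, and a fake in-RAM array stands in for the device so the code can run anywhere; on hardware regAddress would add the offset to the device's fixed base address):

```cpp
#include <cstdint>

volatile std::uint32_t fakeUart[8];   // stand-in for the memory-mapped device

enum Register       // word offsets, written as the data sheet presents them
{
    STATUS = 0,
    TXBUF  = 1,
    RXBUF  = 2
};

// Convert a register enumerator to its physical address.
inline volatile std::uint32_t *regAddress(Register reg)
{
    return fakeUart + reg;            // real code: baseAddress + reg
}

inline std::uint32_t regRead(Register reg)            { return *regAddress(reg); }
inline void regWrite(Register reg, std::uint32_t val) { *regAddress(reg) = val; }

std::uint32_t roundTrip(Register reg, std::uint32_t v)
{
    regWrite(reg, v);
    return regRead(reg);
}
```

Being inline, all of this collapses to a single memory access in an optimised build.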
Up until this point you could achieve the same effect in C with judicious use of macros. We've not yet written anything groundbreaking. But if our device has some 8-bit registers and some 32-bit registers we can describe each set in a different enumeration. Let's imaginatively call these Register8 and Register32. Thanks to C++'s strong typing of enums, now we can overload the register access functions, as demonstrated in Listing 9.
Now things are getting interesting: we still need only type regRead to access a register, but the compiler will automatically ensure that we get the correct width register access. The only way to do this in C is manually, by defining multiple read/write macros and selecting the correct one by hand each time. This overloading shifts the onus of knowing which registers require 8 or 32-bit writes from the programmer using the device to the compiler. A whole class of error silently disappears. Marvellous!
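A sketch of the width-overloading idea; the register names and the fake backing arrays are invented for illustration:

```cpp
#include <cstdint>

volatile std::uint8_t  fakeRegs8[4];   // stand-in for the 8-bit registers
volatile std::uint32_t fakeRegs32[4];  // stand-in for the 32-bit registers

enum Register8  { CONTROL = 0 };       // 8-bit register set
enum Register32 { DMAADDR = 0 };       // 32-bit register set

// Because enums are distinct types, overload resolution picks the right
// access width from the argument - the programmer can't get it wrong.
inline std::uint8_t  regRead(Register8 r)  { return fakeRegs8[r]; }
inline std::uint32_t regRead(Register32 r) { return fakeRegs32[r]; }
inline void regWrite(Register8 r,  std::uint8_t v)  { fakeRegs8[r] = v; }
inline void regWrite(Register32 r, std::uint32_t v) { fakeRegs32[r] = v; }

std::uint32_t demoWidths()
{
    regWrite(CONTROL, std::uint8_t(0x12));  // resolves to the 8-bit overload
    regWrite(DMAADDR, 0xDEADBEEFu);         // resolves to the 32-bit overload
    return regRead(CONTROL) + (regRead(DMAADDR) & 0xFFu);  // 0x12 + 0xEF
}
```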
An embedded system is composed of many separate devices, each performing their allotted task. Perhaps you have a UART for control, a network chip for communication, a sound device for audible warnings, and more. We need to define multiple register sets with different base addresses and associated bitset definitions. Some large devices (like super I/O chips) consist of several subsystems that work independently of one another; we'd also like to keep the register definitions for these parts distinct.
The classic C technique is to augment each block of register definition names with a logical prefix. For example, we'd define the UART transmit buffer like this:
#define MYDEVICE_UART_TXBUF ((volatile uint32_t *)0xffe0004)
C++ provides an ideal replacement mechanism that solves more than just this aesthetic blight. We can group register definitions within namespaces. The nest of underscored names is replaced by :: qualifications - a better, syntactic indication of relationship. Because the overload rules honour namespaces, we can never write a register value to the wrong device block: it's a syntactic error. This is a simple trick, but it makes the scheme incredibly usable and powerful.
Namespacing also allows us to write more readable code with a judicious sprinkling of using declarations inside device setup functions. Koenig lookup combats excess verbiage in our code. If we have register sets in two namespaces DevA and DevB, we needn't qualify a regRead call, only the register name. The compiler can infer the correct regRead overload in the correct namespace from its parameter type. You only have to write:
uint32_t value = regRead(DevA::MYREGISTER); // note: not DevA::regRead(...)
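A sketch of the namespace arrangement; DevA and DevB mirror the names used above, and the fake backing arrays are invented so the code can run anywhere:

```cpp
#include <cstdint>

// Two independent register blocks, each sealed in its own namespace.
// Argument-dependent (Koenig) lookup finds the regRead/regWrite that lives
// beside the enum, so call sites stay unqualified and can never mix devices.
namespace DevA {
    volatile std::uint32_t regs[4];
    enum Register { MYREGISTER = 0 };
    std::uint32_t regRead(Register r)         { return regs[r]; }
    void regWrite(Register r, std::uint32_t v) { regs[r] = v; }
}

namespace DevB {
    volatile std::uint32_t regs[4];
    enum Register { MYREGISTER = 0 };
    std::uint32_t regRead(Register r)         { return regs[r]; }
    void regWrite(Register r, std::uint32_t v) { regs[r] = v; }
}

std::uint32_t demoNamespaces()
{
    regWrite(DevA::MYREGISTER, 1u);   // ADL selects DevA::regWrite
    regWrite(DevB::MYREGISTER, 2u);   // ADL selects DevB::regWrite
    return regRead(DevA::MYREGISTER) * 10 + regRead(DevB::MYREGISTER);
}
```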
Not every operating environment is as simplistic as we've seen so far. If a virtual memory system is in use then you can't directly access the physical memory mapped locations - they are hidden behind the virtual address space. Fortunately, every OS provides a mechanism to map known physical memory locations into the current process' virtual address space.
A simple modification allows us to accommodate this memory indirection.
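A sketch of such a modification using POSIX mmap; note that an anonymous mapping stands in here for the real device range (real code would map physical memory, for example via /dev/mem under Linux, which needs privileges), so everything except the mmap calling pattern is an invented stand-in:

```cpp
#include <cstdint>
#include <sys/mman.h>

// Map the device range into our address space once, and cache the result.
volatile std::uint32_t *deviceBase()
{
    static volatile std::uint32_t *base = 0;
    if (!base) {
        // Real code would map the device's physical range, e.g.
        //   mmap(0, size, PROT_READ|PROT_WRITE, MAP_SHARED, devMemFd, physBase);
        // An anonymous mapping stands in so this sketch runs anywhere.
        void *p = mmap(0, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p != MAP_FAILED)
            base = static_cast<volatile std::uint32_t *>(p);
    }
    return base;    // cached: the mapping is made only once
}

inline volatile std::uint32_t *regAddress(unsigned wordOffset)
{
    return deviceBase() + wordOffset;
}

std::uint32_t mappedRoundTrip(unsigned off, std::uint32_t v)
{
    *regAddress(off) = v;
    return *regAddress(off);
}
```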
Here are a few extra considerations for the use of this register access scheme:
Just as we use namespaces to separate device definitions, it's a good idea to choose header file names that reflect the logical device relationships. It's best to nest the headers in directories corresponding to the namespace names.
A real bonus of this register access scheme is that you can easily substitute alternative regRead/regWrite implementations - for example, diagnostic versions that trace every access, compiled in for debug builds and replaced by the direct versions in optimised builds. Although it's remarkably rare to not optimise your code, writing the accessors once as functions in this way also allows us to avoid repeated definition of bitRead/bitWrite. This is shown in Listing 11.
OK, this isn't rocket science, and there's no scary template metaprogramming in sight (which, if you've seen the average embedded programmer, is no bad thing!) But this is a robust technique that exploits a number of C++'s features to provide safe and efficient hardware register access. Not only is it supremely readable and natural in the C++ idiom, it prevents many common register access bugs and provides extreme flexibility for hardware access tracing and debugging.
I have a number of proto-extensions to this scheme to make it more generic (using a healthy dose of template metaprogramming, amongst other things). I'll gladly share these ideas on request, but would welcome some discussion about this.
Do Overload readers see any ways that this scheme could be extended to make it simpler and easier to use? | https://accu.org/index.php/journals/281 | CC-MAIN-2018-30 | refinedweb | 1,700 | 54.83 |
This article contains the following executables: DEVLOD.ARC
Jim has published more than a dozen books and hundreds of magazine articles, and has been Primary Forum Administrator of Computer Language magazine's forum on CompuServe since 1985. Most recently, he is responsible for the revised editions of Que's DOS Programmer's Reference and Using Assembly Language, and is a coauthor of Undocumented DOS, edited by Andrew Schulman, from which this article has been adapted.
Ever have an MS-DOS program that required the presence of a device driver, and wish you had a way to install the driver from the command-line prompt, rather than having to edit your CONFIG.SYS file and then reboot the system?
Of course, you can be thankful that it's so much easier to reboot MS-DOS than it is to rebuild the kernel, which must be done to add a device driver to UNIX. While DOS 2.x borrowed the idea of installable device drivers from UNIX, it's often forgotten that DOS in fact improved on the installation of device drivers by replacing the building of a new kernel with the simple editing of CONFIG.SYS.
But still, most DOS users occasionally wish they could just type a command line to load a device driver and be done with it.
Also, developers of device drivers often wish they had a way to debug the initialization phase of a device driver. This type of debugging usually requires either a debug device driver that loads before your device driver, or a hardware in-circuit emulator. But if only we could load device drivers after the normal CONFIG.SYS stage....
Well, wish no more. Command-line loading of MS-DOS device drivers is not only possible, it's relatively simple to accomplish, once you know a little about undocumented DOS. This article presents such a program, DEVLOD, written in a combination of C and assembly language. All you have to type is DEVLOD, followed by the name of the driver to be loaded, and any parameters needed, just as you would supply them in CONFIG.SYS. For example, instead of placing the following in CONFIG.SYS: device=c:\dos\ansi.sys, you would instead type the following on the DOS command line: C:\>devlod c:\dos\ ansi.sys.
There are several ways to verify that this worked, but perhaps the simplest is to write ANSI strings to CON and see if they are properly interpreted as ANSI commands. For example, after a DEVLOD ANSI.SYS, the following DOS command should produce a DOS prompt in reverse video: C:\>prompt $e[7m$p$g$e[0m.
DEVLOD loads both character device drivers (such as ANSI.SYS) and block device drivers (drivers that support one or more drive units, such as VDISK.SYS), whether located in .SYS or .EXE files.
How DEVLOD Works
To install a device driver, a program must first locate the driver and determine its size, then reserve space for it. Because this space is almost certain to be at a higher memory address than the loader itself, the loader moves itself up above the driver area, so that memory space will not be unduly fragmented. Once the space is set up, DEVLOD loads the driver file and links it into the chain of drivers that MS-DOS maintains. Next, the program calls the driver's own initialization code, and finally returns to DOS, leaving the driver resident but releasing all space that is no longer needed. The basic structure of the DEVLOD program is shown in Figure 1.
Figure 1: Basic structure of DEVLOD
startup code (C0.ASM)
  main (DEVLOD.C)
    Move_Loader
      movup (MOVUP.ASM)
    Load_Drvr
      INT 21h Function 4B03h (Load Overlay)
    Get_List
      INT 21h Function 52h (Get List of Lists)
      based on DOS version number:
        get number of block devices
        get value of LASTDRIVE
        get Current Directory Structure (CDS) base
        get pointer to NUL device
    Init_Drvr
      call DD init routine
        build command packet
        call Strategy
        call Interrupt
    Get_Out
      if block device:
        Put_Blk_Dev
          for each unit:
            Next_Drive
              get next available drive letter
            INT 21h Function 32h (Get DPB)
            INT 21h Function 53h (Translate BPB -> DPB)
            poke CDS
          link into DPB chain
      Fix_DOS_Chain
        link into dev chain
      release environment space
      INT 21h Function 31h (TSR)
DEVLOD loads device drivers into memory using the documented DOS function for loading overlays, INT 21h Function 4B03h. An earlier version read the driver into memory using DOS file calls to open, read, and close the driver, but this made it difficult to handle .EXE driver types. By instead using the EXEC function, DEVLOD makes DOS take care of properly handling both .SYS and .EXE files.
DEVLOD then calls undocumented INT 21h Function 52h (Get List of Lists) to retrieve the number of block devices currently present in the system, the value of LASTDRIVE, a pointer to the DOS Current Directory Structure (CDS) array, and a pointer to the NUL device. The location of these variables within the LoL, (List of Lists) varies with the DOS version number. (See Table 1, which explains LoL, CDS, and similar alphabet soup in more detail.)
Table 1: DOS and BIOS data structures
Data Structure   Description
-----------------------------------------------------------------------
BPB    The BIOS uses the BPB (BIOS Parameter Block) to learn the format
       of a block device. Normally, the BPB is part of a physical disk's
       boot record, and contains information such as the number of bytes
       in a sector, the number of root directory entries, the number of
       sectors taken by the File Allocation Table (FAT), and so on.

CDS    The CDS (Current Directory Structure) is an undocumented array of
       structures, sometimes also called the Drive Info Table, which
       maintains the current state of each drive in the system. The array
       is n elements long, where n equals LASTDRIVE.

DPB    For every block device (disk drive) in the system, there is a DPB
       (Drive Parameter Block). These 32-byte blocks contain the
       information that DOS uses to convert cluster numbers into Logical
       Sector Numbers, and also associate the device driver for that
       device with its assigned drive letter.

LoL    Probably the most commonly used undocumented DOS data structure,
       the List of Lists is the DOS internal variable table, which
       includes, among other things, the LASTDRIVE value, the head of the
       device driver chain, and the CDS (Current Directory Structure).
       A pointer to the LoL is returned in ES:BX by undocumented DOS
       INT 21h Function 52h.
DEVLOD requires a pointer to the NUL device because NUL acts as the "anchor" to the DOS device chain. Because DEVLOD's whole purpose is to add new devices into this chain, it must update this linked list. The other variables from the List of Lists are needed in case we are loading a block device (which we won't know until later, after we've called the driver's INIT routine).
If the DOS version indicates operation under MS-DOS 1.x, or in the OS/2 compatibility box, DEVLOD quits with an appropriate message. Otherwise, a pointer to the name field of the NUL driver is created, and the 8 bytes at that location are compared to the constant "NUL" (followed by five blanks) to verify that the driver is present and the pointer is correct.
Next, DEVLOD sends the device driver an initialization packet. This is straightforward: The function Init_Drvr( ) forms a packet with the INIT command, calls the driver's Strategy routine, and then calls the driver's Interrupt routine. As elsewhere, DEVLOD merely mimicks what DOS does when it loads a device driver.
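The two-step calling sequence can be sketched in miniature as a host-side simulation - no far pointers or real driver involved, every name invented - just to show the Strategy-then-Interrupt protocol:

```cpp
#include <cstdint>

// The DOS driver protocol in miniature: the loader fills a command packet,
// hands its address to the driver's Strategy routine, then calls Interrupt
// to execute it. This mirrors what DEVLOD's Init_Drvr() does.
struct Packet {
    std::uint8_t  command;   // 0 = INIT
    std::uint16_t status;    // driver sets 0x0100 (DONE) or 0x8000 (error)
};

static Packet *pending;      // set by Strategy, consumed by Interrupt

void fakeStrategy(Packet *p) { pending = p; }   // step 1: record the packet
void fakeInterrupt()                            // step 2: act on it
{
    pending->status = (pending->command == 0) ? 0x0100 : 0x8000;
}

std::uint16_t initDriver()
{
    Packet pkt = { 0, 0 };   // command 0: initialise
    fakeStrategy(&pkt);      // first call: pass the packet address
    fakeInterrupt();         // second call: execute the command
    return pkt.status;       // 0x0100 means the INIT succeeded
}
```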
If the device driver INIT fails, there is naturally nothing we can do but bail out. It is important to note that we have not yet linked the driver into the DOS driver chain, so it is easy to exit if the driver INIT fails. If the driver INIT succeeds, DEVLOD can then proceed with its true mission, which takes place (oddly enough) in the function Get_Out( ).
It is only at this point that DEVLOD knows whether it has a block or character device driver, so it is here that DEVLOD takes special measures for block device drivers, by calling Put_Blk_Dev( ). For each unit provided by the driver, that function calls undocumented DOS INT 21h Function 32h (Get DPB -- see Table 1) and INT 21h Function 53h (Translate BPB to DPB), alters the CDS entry for the new drive, and links the new DPB into the DPB chain. In short, in Put_Blk_Dev( ), DEVLOD takes information returned by a block driver's INIT routine and produces a new DOS drive.
DEVLOD pokes the CDS in order to install a block device, and needs a drive letter to assign to the new driver. The function Next_Drive( ) is where DEVLOD determines the drive letter to assign to a block device (if there is an available drive letter). One technique for determining the next letter, #ifdefed out within DEVLOD.C, is simply to read the "Number of Block Devices" field (nblkdrs) out of the LoL. However, this fails to take account of SUBSTed or network-redirected drives. Therefore, we walk the CDS instead, looking for the first free drive. In any case, DEVLOD will update the nblkdrs field if it successfully loads a block device.
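The first-free-drive walk can be sketched like this (a host-side stand-in: 'usedMask' plays the role of the per-drive flags words in the CDS, and the function name is invented):

```cpp
// Walk the drive table, CDS-style: bit i of usedMask set means drive
// 'A'+i is already taken (including SUBSTed or redirected drives).
// Returns the first free letter, or 0 when LASTDRIVE is exhausted.
char nextDrive(unsigned usedMask, int lastdrive)
{
    for (int i = 0; i < lastdrive; ++i)
        if (!(usedMask & (1u << i)))
            return char('A' + i);   // first free slot wins
    return 0;                       // no drive letters left
}
```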
Whether loading a block or character driver, DEVLOD also uses the "break address" (the first byte of the driver's address space which can safely be turned back to DOS for reuse) returned by the driver. Get_Out( ) converts the break address into a count of paragraphs to be retained.
The function copyptr( ) is called three times in succession to first save the content of the NUL driver's link field, then copy it into the link field of the new driver, and finally store the far address of the new driver in the NUL driver's link field. The copyptr( ) function is provided in MOVUP.ASM, described later in the text. Note again that the DOS linked list is not altered until after we know that the driver's INIT succeeded.
At last, DEVLOD links the device header into DOS's linked list of driver headers and saves some memory by releasing its environment. (The resulting "hole in RAM" will cause no harm, contrary to popular belief. It will, in fact, be used as the environment space for any program subsequently loaded, if the size of the environment is not increased!) Finally, DEVLOD calls the documented DOS TSR function INT 21h Function 31h to exit, so as not to release the memory now occupied by the driver.
The Stuff DEVLOD's Made of
Before we look at how this dynamic loader accomplishes all this in less than 2000 bytes of executable code, let's mention some constraints.
Many confusing details were eliminated by implementing DEVLOD as a .COM program, using the tiny memory model of Turbo C. The way the program moves itself up in memory became much clearer when the .COM format removed the need to individually manage each segment register.
In order to move the program while it is executing, it's necessary to know every address the program can reach during its execution. This precludes using any part of the libraries supplied with the compiler. Fortunately, in this case that's not a serious restriction; nearly everything can be handled without them. Two assembly language listings take care of the few things that cannot easily be done in C itself.
The one readily available implementation of C that makes it easy to completely sever the link to the runtime libraries is Borland's Turbo C, which provides sample code showing how. (Microsoft also provides such a capability, but the documentation is quite cryptic.)
Thus the main program, DEVLOD.C (Listing One, page 90), requires Turbo C with its register pseudovariables and geninterrupt( ) and __emit__( ) features. Register pseudovariables such as _AX provide a way to directly read or load the CPU registers from C and both geninterrupt( ) and __emit__( ) simply emit bytes into the code stream; neither are actually functions.
The smaller assembler module MOVUP (Listing Two, page 94) contains two functions used in DEVLOD: movup( ) and copyptr( ). Recall that in order not to fragment memory, DEVLOD moves itself up above the area into which the driver will be loaded. It accomplishes this feat with movup( ).
The function copyptr( ) is located here merely because it's written in assembler. It could have been written in C, but using assembly language to transfer 4 bytes from source to destination makes the function much easier to understand.
Finally, startup code appears in C0.ASM (Listing Three, page 96), which has been extensively modified from startup code provided by Borland with Turbo C. This or similar code forms part of every C program and provides the linkage between the DOS command line and the C program itself. Normal start-up code, however, does much more than this stripped-down version: It parses the argument list, sets up pointers to the environment, and arranges things so that the signal( ) library functions can operate.
Because our program has no need for any of these actions, our C0.ASM module omits them. What's left just determines the DOS version in use, saving it in a pair of global variables, and trims the RAM used by the program down to the minimum. Then the module calls main( ), PUSHes the returned value onto the stack, and calls exit( ). If the program succeeds in loading a device driver, it will never return from main( ).
This sample program includes two assembly language modules in addition to the C source, so a MAKEFILE (Listing Four, page 98) for use with Borland's MAKE utility greatly simplifies its creation.
How Well Does DEVLOD Work?
Figure 2 shows the use of the utilities MEM (which displays owners of allocated memory) and DEV (which lists the names of the installed device drivers) to see what our system looks like after we've loaded up a large number of device drivers with DEVLOD. (MEM and DEV come from the book Undocumented DOS, but MAPMEM.COM from TurboPower or a number of other utilities could also be used.)
Figure 2: Loading device drivers
C:\UNDOC\KYLE>devlod \dos\smartdrv.sys 256 /a
Microsoft SMARTDrive Disk Cache version 3.03
  Cache size: 256K in Expanded Memory
  Room for 30 tracks of 17 sectors each
  Minimum cache size will be 0K

C:\UNDOC\KYLE>devlod \dos\ramdrive.sys
Microsoft RAMDrive version 3.04 virtual disk D:
  Disk size: 64k
  Sector size: 512 bytes
  Allocation unit: 1 sectors
  Directory entries: 64

C:\UNDOC\KYLE>devlod \dos\vdisk.sys
VDISK Version 3.2 virtual disk E:
  Buffer size adjusted
  Sector size adjusted
  Directory entries adjusted
  Buffer size: 64 KB
  Sector size: 128
  Directory entries: 64

C:\UNDOC\KYLE>devlod \dos\ansi.sys

C:\UNDOC\KYLE>mem
 Seg   Owner   Size                                                  Env
-------------------------------------------------------------------------
 09F3  0008    00F4 (  3904)  config  [15 2F 4B 67 ]
 0AE8  0AE9    00D3 (  3376)  0BC1  c:\dos33\command.com  [22 23 24 2E ]
 0BBC  0000    0003 (    48)  free
 0BC0  0AE9    0019 (   400)
 0BDA  0AE9    0004 (    64)
 0BDF  3074    000D (   208)
 0BED  0000    0000 (     0)  free
 0BEE  0BEF    0367 ( 13936)  0BE0  \msc\bin\smartdrv.sys 256 /a  [13 19 ]
 0F56  0F57    1059 ( 66960)  0BE0  \msc\bin\ramdrive.sys  [F1 FA ]
 1FB0  1FB1    104C ( 66752)  0BE0  \dos33\vdisk.sys
 2FFD  2FFE    0075 (  1872)  0BE0  \dos33\ansi.sys  [1B 29 ]
 3073  3074    1218 ( 74112)  0BE0  C:\UNDOC\KYLE\MEM.EXE  [00 ]
 428C  0000    7573 (481072)  free  [30 F8 ]

C:\UNDOC\KYLE>dev
NUL
CON
Block: 1 unit(s)
Block: 1 unit(s)
SMARTAAR
QEMM386$
EMMXXXX0
CON
AUX
PRN
CLOCK$
Block: 3 unit(s)
COM1
LPT1
LPT2
LPT3
COM2
COM3
COM4
The output from MEM shows quite clearly that our device drivers really are resident in memory. The output from DEV shows that they really are linked into the DOS device chain (for example, "SMARTAAR" is SMARTDRV.SYS). Of course, the real proof for me is that after loading SMARTDRV, RAMDRIVE, VDISK, and ANSI.SYS, my disk accesses went a bit faster (because of the new 256K SMARTDRV disk cache in expanded memory). I also had some additional drives (created by RAMDRIVE and VDISK), and programs that assume the presence of ANSI.SYS (for shame!) suddenly started producing reasonable output. Of course, I had less free memory.
One other interesting item in the MEM output is the environment segment number displayed for the four drivers. Recall that, in order to save some memory, DEVLOD releases its environment. The MEM program correctly detects that the 0BE0h environment segment, still shown in the PSP for each resident instance of DEVLOD, does not in fact belong to them. The name "DEVLOD" does not precede the names of the drivers, because program names (which only became available in DOS 3+) are located in the environment segment, not in the PSP. Each instance of DEVLOD has jettisoned its environment, so its program name is gone too.
Who then does this environment belong to? Actually, it belongs to MEM.EXE itself. Because each instance of DEVLOD has released its environment, when MEM comes along there is a nice environment-sized block of free memory just waiting to be used, and MEM uses this block of memory for its environment. The reason 0BE0 shows up as an environment, not only for MEM.EXE, but for each instance of DEVLOD as well, is that when DEVLOD releases the environment, it doesn't do anything to the environment segment address at offset 2Ch in its PSP. Probably DEVLOD (and any other program which frees its environment) ought to zero out this address.
It should be noted that some device drivers appear not to be properly loaded by DEVLOD. These include some memory managers and some drivers that use extended memory. For example, Microsoft's XMS driver HIMEM.SYS often crashes the system if you attempt to load it with DEVLOD. Furthermore, while DEVLOD VDISK.SYS definitely works in that a valid RAM disk is created, other programs that check for the presence of VDISK (such as protected-mode DOS extenders) often fail mysteriously when VDISK has been loaded in this unusual fashion. In the MEM display, note that the INT 19h vector is not pointing at VDISK.SYS as it should.
For another perspective on loading drivers, see the .EXE Magazine article, "Installing MS-DOS Device Drivers from the Command Line" by Giles Todd. For background on DOS device drivers in general, two excellent books are the classic Writing MS-DOS Device Drivers by Robert S. Lai, and the recent Writing DOS Device Drivers in C by Phillip M. Adams and Clovis L. Tondo.
References
Adams, Phillip and Clovis L. Tondo. Writing DOS Device Drivers in C. Englewood Cliffs, N.J.: Prentice Hall, 1990.
Lai, Robert S. Writing MS-DOS Device Drivers. Reading, Mass.: Addison-Wesley, 1987.
Todd, Giles. "Installing MS-DOS Device Drivers from the Command Line." .EXE Magazine (August 1990).
LOADING DEVICE DRIVERS FROM THE DOS COMMAND LINE
by Jim Kyle

[LISTING ONE]
/********************************************************************
 * DEVLOD.C - Copyright 1990 by Jim Kyle - All Rights Reserved      *
 * (minor revisions by Andrew Schulman)                             *
 * Dynamic loader for device drivers                                *
 * Requires Turbo C; see DEVLOD.MAK also for ASM helpers.           *
 ********************************************************************/

#include <stdio.h>
#include <stdlib.h>
#include <dos.h>

typedef unsigned char BYTE;

#define GETFLAGS __emit__(0x9F)     /* if any error, quit right now */
#define FIXDS    __emit__(0x16,0x1F)    /* PUSH SS, POP DS */
#define PUSH_BP  __emit__(0x55)
#define POP_BP   __emit__(0x5D)

unsigned _stklen = 0x200;
unsigned _heaplen = 0;

char FileName[65];          /* filename global buffer */
char * dvrarg;              /* points to char after name in cmdline buffer */
unsigned movsize;           /* number of bytes to be moved up for driver */
void (far * driver)();      /* used as pointer to call driver code */
void far * drvptr;          /* holds pointer to device driver */
void far * nuldrvr;         /* additional driver pointers */
void far * nxtdrvr;
BYTE far * nblkdrs;         /* points to block device count in List of Lists */
unsigned lastdrive;         /* value of LASTDRIVE in List of Lists */
BYTE far * CDSbase;         /* base of Current Dir Structure */
int CDSsize;                /* size of CDS element */
unsigned nulseg;            /* hold parts of ListOfLists pointer */
unsigned nulofs;
unsigned LoLofs;

#pragma pack(1)

struct packet {             /* device driver's command packet */
    BYTE hdrlen;
    BYTE unit;
    BYTE command;           /* 0 to initialize */
    unsigned status;        /* 0x8000 is error */
    BYTE reserv[8];
    BYTE nunits;
    unsigned brkofs;        /* break adr on return */
    unsigned brkseg;        /* break seg on return */
    unsigned inpofs;        /* SI on input */
    unsigned inpseg;        /* _psp on input */
    BYTE NextDrv;           /* next available drive */
} CmdPkt;

typedef struct {            /* Current Directory Structure (CDS) */
    BYTE path[0x43];
    unsigned flags;
    void far *dpb;
    unsigned start_cluster;
    unsigned long ffff;
    unsigned slash_offset;  /* offset of '\' in current path field */
    // next for DOS4+ only
    BYTE unknown;
    void far *ifs;
    unsigned unknown2;
} CDS;

extern unsigned _psp;       /* established by startup code in c0 */
extern unsigned _heaptop;   /* established by startup code in c0 */
extern BYTE _osmajor;       /* established by startup code */
extern BYTE _osminor;       /* established by startup code */
void _exit( int );          /* established by startup code in c0 */
void abort( void );         /* established by startup code in c0 */
void movup( char far *, char far *, int );      /* in MOVUP.ASM file */
void copyptr( void far *src, void far *dst );   /* in MOVUP.ASM file */

void exit(int c)            /* called by startup code's sequence */
{
    _exit(c);
}

int Get_Driver_Name ( void )
{
    char *nameptr;
    int i, j, cmdlinesz;

    nameptr = (char *)0x80;     /* check command line for driver name */
    cmdlinesz = (unsigned)*nameptr++;
    if (cmdlinesz < 1)          /* if nothing there, return FALSE */
        return 0;
    for (i=0; i<cmdlinesz && nameptr[i]<'!'; i++)   /* skip blanks */
        ;
    dvrarg = (char *)&nameptr[i];   /* save to put in SI */
    for ( j=0; i<cmdlinesz && nameptr[i]>' '; i++)  /* copy name */
        FileName[j++] = nameptr[i];
    FileName[j] = '\0';
    return 1;                   /* and return TRUE to keep going */
}

void Put_Msg ( char *msg )      /* replaces printf() */
{
#ifdef INT29                    /* gratuitous use of undocumented DOS */
    while (*msg) {
        _AL = *msg++;           /* MOV AL,*msg */
        geninterrupt(0x29);     /* INT 29h */
    }
#else
    _AH = 2;                    /* doesn't need to be inside loop */
    while (*msg) {
        _DL = *msg++;
        geninterrupt(0x21);
    }
#endif
}

void Err_Halt ( char *msg )     /* print message and abort */
{
    Put_Msg ( msg );
    Put_Msg ( "\r\n" );         /* send CR,LF */
    abort();
}

void Move_Loader ( void )       /* vacate lower part of RAM */
{
    unsigned movsize, destseg;

    movsize = _heaptop - _psp;  /* size of loader in paragraphs */
    destseg = *(unsigned far *)MK_FP( _psp, 2 );    /* end of memory */
    movup ( MK_FP( _psp, 0 ), MK_FP( destseg - movsize, 0 ),
            movsize << 4 );     /* move and fix segregs */
}

void Load_Drvr ( void )         /* load driver file into RAM */
{
    unsigned handle;
    struct {
        unsigned LoadSeg;
        unsigned RelocSeg;
    } ExecBlock;

    ExecBlock.LoadSeg = _psp + 0x10;
    ExecBlock.RelocSeg = _psp + 0x10;
    _DX = (unsigned)&FileName[0];
    _BX = (unsigned)&ExecBlock;
    _ES = _SS;                  /* es:bx point to ExecBlock */
    _AX = 0x4B03;               /* load overlay */
    geninterrupt ( 0x21 );      /* DS is okay on this call */
    GETFLAGS;
    if ( _AH & 1 )
        Err_Halt ( "Unable to load driver file." );
}

void Get_List ( void )          /* set up pointers via List */
{
    _AH = 0x52;                 /* find DOS List of Lists */
    geninterrupt ( 0x21 );
    nulseg = _ES;               /* DOS data segment */
    LoLofs = _BX;               /* current drive table offset */

    switch( _osmajor )          /* NUL adr varies with version */
    {
    case 0:
        Err_Halt ( "Drivers not used in DOS V1." );
    case 2:
        nblkdrs = NULL;
        nulofs = LoLofs + 0x17;
        break;
    case 3:
        if (_osminor == 0) {
            nblkdrs = (BYTE far *) MK_FP(nulseg, LoLofs + 0x10);
            lastdrive = *((BYTE far *) MK_FP(nulseg, LoLofs + 0x1b));
            nulofs = LoLofs + 0x28;
        }
        else {
            /* ... */
        }
        CDSsize = 81;
        break;
    case 4:
    case 5:
        /* ... */
        CDSsize = 88;
        break;
    case 10:
    case 20:
        Err_Halt ( "OS2 DOS Box not supported." );
    default:
        Err_Halt ( "Unknown version of DOS!");
    }
}

void Fix_DOS_Chain ( void )     /* patches driver into DOS chn */
{
    unsigned i;

    nuldrvr = MK_FP( nulseg, nulofs+0x0A );     /* verify the drvr */

[LISTING TWO]
<a name="025c_000e"> NAME movup ;[]------------------------------------------------------------[] ;| MOVUP.ASM -- helper code for DEVLOD.C | ;| Copyright 1990 by Jim Kyle - All Rights Reserved | ;[]------------------------------------------------------------[] _TEXT SEGMENT BYTE PUBLIC 'CODE' _TEXT ENDS _DATA SEGMENT WORD PUBLIC 'DATA' _DATA ENDS _BSS SEGMENT WORD PUBLIC 'BSS' _BSS ENDS DGROUP GROUP _TEXT, _DATA, _BSS ASSUME CS:_TEXT, DS:DGROUP _TEXT SEGMENT BYTE PUBLIC 'CODE' ;----------------------------------------------------------------- ; movup( src, dst, nbytes ) ; src and dst are far pointers. area overlap is NOT okay ;----------------------------------------------------------------- PUBLIC _movup _movup PROC NEAR push bp mov bp, sp push si push di lds si,[bp+4] ; source les di,[bp+8] ; destination mov bx,es ; save dest segment mov cx,[bp+12] ; byte count cld rep movsb ; move everything to high ram mov ss,bx ; fix stack segment ASAP mov ds,bx ; adjust DS too pop di pop si mov sp, bp pop bp pop dx ; Get return address push bx ; Put segment up first push dx ; Now a far address on stack retf _movup ENDP ;------------------------------------------------------------------- ; copyptr( src, dst ) ; src and dst are far pointers. ; moves exactly 4 bytes from src to dst. ;------------------------------------------------------------------- PUBLIC _copyptr _copyptr PROC NEAR push bp mov bp, sp push si push di push ds lds si,[bp+4] ; source les di,[bp+8] ; destination cld movsw movsw pop ds pop di pop si mov sp, bp pop bp ret _copyptr ENDP _TEXT ENDS end <a name="025c_000f"> <a name="025c_0010">[LISTING THREE]
<a name="025c_0010"> NAME c0 ;[]------------------------------------------------------------[] ;| C0.ASM -- Start Up Code | ;| based on Turbo-C startup code, extensively modified | ;[]------------------------------------------------------------[] _TEXT SEGMENT BYTE PUBLIC 'CODE' _TEXT ENDS _DATA SEGMENT WORD PUBLIC 'DATA' _DATA ENDS _BSS SEGMENT WORD PUBLIC 'BSS' _BSS ENDS DGROUP GROUP _TEXT, _DATA, _BSS ; External References EXTRN _main : NEAR EXTRN _exit : NEAR EXTRN __stklen : WORD EXTRN __heaplen : WORD PSPHigh equ 00002h PSPEnv equ 0002ch MINSTACK equ 128 ; minimal stack size in words ; At the start, DS, ES, and SS are all equal to CS ;/*-----------------------------------------------------*/ ;/* Start Up Code */ ;/*-----------------------------------------------------*/ _TEXT SEGMENT BYTE PUBLIC 'CODE' ASSUME CS:_TEXT, DS:DGROUP ORG 100h STARTX PROC NEAR mov dx, cs ; DX = GROUP Segment address mov DGROUP@, dx mov ah, 30h ; get DOS version int 21h mov bp, ds:[PSPHigh]; BP = Highest Memory Segment Addr mov word ptr __heaptop, bp mov bx, ds:[PSPEnv] ; BX = Environment Segment address mov __version, ax ; Keep major and minor version number mov __psp, es ; Keep Program Segment Prefix address ; Determine the amount of memory that we need to keep mov dx, ds ; DX = GROUP Segment address sub bp, dx ; BP = remaining size in paragraphs mov di, __stklen ; DI = Requested stack size ; ; Make sure that the requested stack size is at least MINSTACK words. ; cmp di, 2*MINSTACK ; requested stack big enough ? 
jae AskedStackOK ; yes, use it mov di, 2*MINSTACK ; no, use minimal value mov __stklen, di ; override requested stack size AskedStackOK: add di, offset DGROUP: edata jb InitFailed ; DATA segment can NOT be > 64 Kbytes add di, __heaplen jb InitFailed ; DATA segment can NOT be > 64 Kbytes mov cl, 4 shr di, cl ; $$$ Do not destroy CL $$$ inc di ; DI = DS size in paragraphs cmp bp, di jnb TooMuchRAM ; Enough to run the program ; All initialization errors arrive here InitFailed: jmp near ptr _abort ; Set heap base and pointer TooMuchRAM: mov bx, di ; BX = total paragraphs in DGROUP shl di, cl ; $$$ CX is still equal to 4 $$$ add bx, dx ; BX = seg adr past DGROUP mov __heapbase, bx mov __brklvl, bx ; ; Set the program stack down into RAM that will be kept. ; cli mov ss, dx ; DGROUP mov sp, di ; top of (reduced) program area sti mov bx,__heaplen ; set up heap top pointer add bx,15 shr bx,cl ; length in paragraphs add bx,__heapbase mov __heaptop, bx ; ; Clear uninitialized data area to zeroes ; xor ax, ax mov es, cs:DGROUP@ mov di, offset DGROUP: bdata mov cx, offset DGROUP: edata sub cx, di rep stosb ; ; exit(main()); ; call _main ; the real C program push ax call _exit ; part of the C program too ;---------------------------------------------------------------- ; _exit() ; Restore interrupt vector taken during startup. ; Exit to DOS. 
;---------------------------------------------------------------- PUBLIC __exit __exit PROC NEAR push ss pop ds ; Exit to DOS ExitToDOS: mov bp,sp mov ah,4Ch mov al,[bp+2] int 21h ; Exit to DOS __exit ENDP STARTX ENDP ;[]------------------------------------------------------------[] ;| Miscellaneous functions | ;[]------------------------------------------------------------[] ErrorDisplay PROC NEAR mov ah, 040h mov bx, 2 ; stderr device int 021h ret ErrorDisplay ENDP PUBLIC _abort _abort PROC NEAR mov cx, lgth_abortMSG mov dx, offset DGROUP: abortMSG MsgExit3 label near push ss pop ds call ErrorDisplay CallExit3 label near mov ax, 3 push ax call __exit ; _exit(3); _abort ENDP ; The DGROUP@ variable is used to reload DS with DGROUP PUBLIC DGROUP@ DGROUP@ dw ? _TEXT ENDS ;[]------------------------------------------------------------[] ;| Start Up Data Area | ;[]------------------------------------------------------------[] _DATA SEGMENT WORD PUBLIC 'DATA' abortMSG db 'Quitting program...', 13, 10 lgth_abortMSG equ $ - abortMSG ; ; Miscellaneous variables ; PUBLIC __psp PUBLIC __version PUBLIC __osmajor PUBLIC __osminor __psp dw 0 __version label word __osmajor db 0 __osminor db 0 ; Memory management variables PUBLIC ___heapbase PUBLIC ___brklvl PUBLIC ___heaptop PUBLIC __heapbase PUBLIC __brklvl PUBLIC __heaptop ___heapbase dw DGROUP:edata ___brklvl dw DGROUP:edata ___heaptop dw DGROUP:edata __heapbase dw 0 __brklvl dw 0 __heaptop dw 0 _DATA ENDS _BSS SEGMENT WORD PUBLIC 'BSS' bdata label byte edata label byte ; mark top of used area _BSS ENDS END STARTX <a name="025c_0011"> <a name="025c_0012">[LISTING FOUR]
<a name="025c_0012"> # makefile for DEVLOD.COM # can substitute other assemblers for TASM c0.obj : c0.asm tasm c0 /t/mx/la; movup.obj: movup.asm tasm movup /t/mx/la; devlod.obj: devlod.c tcc -c -ms devlod devlod.com: devlod.obj c0.obj movup.obj tlink c0 movup devlod /c/m,devlod if exist devlod.com del devlod.com exe2bin devlod.exe devlod.com del devlod.exe | http://www.drdobbs.com/architecture-and-design/loading-device-drivers-from-the-dos-comm/184408654 | CC-MAIN-2016-07 | refinedweb | 4,825 | 56.89 |
Overview
The Configurable List widget requires a little more to set up, but it’s not hard at all. There are a few things we need to consider in order to understand how it works and get the widget up and running:
- The Widget dynamically adds a set of fields on your form as a list
- You can add as many rows as possible
- You can define the type of data or information that can be collected for each dynamically added row on the list
- To restrict the type of information that can be collected, you need to define field types
- The following field types are allowed in the configuration for each row on the dynamic list:
How to Configure the Widget
The Configuration Dialog opens when you add the widget to your form. You can always access it by clicking the widget’s wizard wand from the form builder:
We have prepared a table that will guide you in setting up your dynamic list through this widget.
Let’s begin with the list of field configurations:
Field Types
text
Accepts plain text.
{label}:text:{placeholder}
Example:Name : text : Enter your name
number
Accepts numbers only.
{label}:number:{placeholder}
Example: Age : number : Enter your age
textarea
Accepts more text in paragraphs or narrations.
{label}:textarea:{placeholder}
Example: Comments : textarea : Enter your comment
dropdown
A list of options in a drop-down list.
{label}:dropdown:{option1},{option2},{...}:{placeholder}
To have one of the options selected by default, just replace the {placeholder} with one of the options on your list.
Example: Fruits : dropdown : Apple, Banana, Mango, Orange : Banana
radio
Single Choice (radio button) – select one of the available options.
{label}:radio:{option1},{option2},{...}
Example: Accept Terms? : radio: Yes, No
checkbox
{label}:checkbox:{option1},{option2},{...}
Multiple Choice – check off available options.
Example:Payment : checkbox : Full, Partial
date
A date selector with a pop-out calendar.
{label}:date:{format}:{range}
If the date {format} is undefined or invalid, it defaults to y/m/d. You can interchange the letters as you wish.
The year {range} is formatted as start-end, e.g. 2005-2015. If undefined or invalid, defaults to a range of ten years ago from now to next year from now.
Example:Date of Arrival : date : m/d/y : 2014-2020
time
A time selector.
{label}:time:{format}[,now]
The allowed {format} values are 12 (with AM/PM selector) and 24. If undefined or invalid, it defaults to 12.
To set the current time as default time, append “, now” to the config.
Example:Arrival Time: time : 12, now
static
Display a message or text.
{label}:static:{text}
Example:Important Message : static : Click on the ‘+’ button to add a new row.
The list above looks a little scary, however, it’s quite easy. Here is a sample configuration:
This is what the configuration will show on your form:
After completing each field configuration, hit the return (enter) key to add a new field configuration on a different line so that the widget can treat it as a new field setting.
Other Configurations
Enter the number in the Maximal rows number input box to limit the number of rows that will be added to your form.
To automatically load a number of rows, enter the number in the Minimal rows number field.
To change the buttons’ text (add and delete row), set them in the following input boxes:
Set a Field or Input as “Required”
In most forms, some fields should not be skipped, like names and email fields. To prevent the form user from skipping these mandatory fields, you need to set the field as required by adding an asterisk (*) before the field configuration as shown below:
Your form users will only be allowed to submit the form if those fields are completed:
Change How the List Looks
Its often necessary to style your form to look exactly how you would like – perhaps to match your product image or corporate identity among other reasons.
With this widget, you can further customize how the list looks by adding custom CSS.
And here is an example form. You are welcome to clone or copy the form; take a closer look or modify the form as your own.
The Custom CSS part can look a little too technical – but not to worry – we would love to help – just let us know.
That’s all it takes to configure the widget.
If you want to make the Configurable List widget mobile responsive, check this guide: How to Make the Configurable List Widget Mobile Responsive.
We would love to hear what you think, or share your experience so that we can continually improve the tools that make your form awesome!
Send Comment:
6 Comments:
Can you use this to get a total?
Hello,
Not possible to proceed to field translation using the menu in jotform, why ?
I need it.
Thank you.
Hi I have somehow added a hover text to a configurable list and I have no idea how to remove it. When the form is displayed the hover text appears over one of the form fields and the user is unable to enter a value in the field.
Possible to manage dependent dropdown in this widget ?
Hi, How it is possible to changer month word to translate it in french ?
Thank you.
have to have total of the qty it will solve many problems and more calculation | https://www.jotform.com/help/282-how-to-set-up-the-configurable-list-widget/ | CC-MAIN-2021-43 | refinedweb | 900 | 60.24 |
02 August 2012 06:15 [Source: ICIS news]
TOKYO (ICIS)--?xml:namespace>
Tosoh had to shut its 550,000 tonnes/year No 2 VCM line at Nanyo in Yamaguchi prefecture in November 2011 after an explosion.
The shutdown of the plant contributed to an ordinary loss of Y3.61bn, registered in the three months to 30 June 2012, Tosoh said.
The company also recorded an operating loss of Y1.76bn during the first quarter, compared with an operating profit of Y10.4bn a year earlier, while net sales were down 18% year on year to Y150.5bn.
In the chlor-alkali segment, which includes VCM and PVC operations, an operating loss of Y5.05bn was recorded in the first quarter, while net sales fell 25% year on year to Y51.4bn, according to Tosoh.
( | http://www.icis.com/Articles/2012/08/02/9583140/japans-tosoh-posts-y2.74bn-net-loss-in-q1-on-vcm-unit.html | CC-MAIN-2014-10 | refinedweb | 134 | 76.01 |
FriendlyThings for RK3399
查看中文
Note: the steps and methods presented here apply to all FriendlyElec's RK3399 based boards. For steps and methods that apply to other platforms refer to FriendlyThings
Contents
- 1 Introduction
- 2 Android Versions
- 3 List of Applicable Boards
- 4 Quick Start
- 5 APIs in libfriendlyarm-things.so Library
- 6 Access RK3399 Based Boards' Hardware under Android
- 7 Download Links to Code Samples
1 Introduction
FriendlyThings is an Android SDK developed by FriendlyElec to access hardware. Users can use it to access various hardware resources Uart, SPI, I2C, GPIO etc on a FriendlyElec ARM board under Android. This SDK is based on Android-NDK. Users can use it to develop popular IoT applications without directly interacting with drivers.
2 Android Versions
The Android BSPs provided by FriendlyElec already have FriendlyThings SDK(libfriendlyarm-things.so) and currently have two version:
- Android 7.1.2-rk3399
BSP source code download link:
BSP source code location on the network disk: sources/rk3399-android-7.git-YYYYMMDD.tgz
Latest ROM download link:
- Android 8.1-rk3399
BSP source code download link:
BSP source code location on the network disk: sources/rk3399-android-8.1.git-YYYYMMDD.tgz
Latest ROM download link:
3 List of Applicable Boards
FriendlyThings SDK(libfriendlyarm-things.so) works with the following FriendlyElec RK3399 based boards:
- NanoPC-T4
- NanoPi M4 (an external eMMC module is needed)
- NanoPi NEO4 (an external eMMC module is needed)
FriendlyThings might also work with other FriendlyElec boards such as Samsung S5P4418/S5P6818, Samsung S5PV210, Allwinner H3/H5 etc. For more details refer to FriendlyThings
4 Quick Start
4.1 Step 1: Include libfriendlyarm-things.so in APP
Clone the following library locally:
git clone
Copy all the files under the libs directory to your working directory and create a "com/friendlyarm" directory in your Android project's src directory, copy the whole "java/FriendlyThings" to your newly created "com/friendlyarm" directory. The whole project directory will look like this(Note:AndroidStudio's project may be a little bit different):
YourProject/ ├── AndroidManifest.xml ├── libs │ ├── arm64-v8a │ │ └── libfriendlyarm-things.so │ └── armeabi │ └── libfriendlyarm-things.so ├── src │ └── com │ └── friendlyarm │ ├── FriendlyThings │ │ ├── BoardType.java │ │ ├── FileCtlEnum.java │ │ ├── GPIOEnum.java │ │ ├── HardwareControler.java │ │ ├── SPIEnum.java │ │ ├── SPI.java │ │ └── WatchDogEnum.java
Import the following components and the major APIs are included in the HardwareControler.java file:
import com.friendlyarm.FriendlyThings.HardwareControler; import com.friendlyarm.FriendlyThings.SPIEnum; import com.friendlyarm.FriendlyThings.GPIOEnum; import com.friendlyarm.FriendlyThings.FileCtlEnum; import com.friendlyarm.FriendlyThings.BoardType;
4.2 Step 2: Give APP System Right
Your app needs the system right to access hardware resources;
Give your app the system right by making changes in the AndroidManifest.xml file and the Android.mk file;
It is better to include your app in your Android source code and compile them together. If your app is not compiled together with your Android source code you have to go through tedious steps to compile your app and sign your app to give it the system right.
4.2.1 Modify AndroidManifest.xml
Add the following line in the manifest node in the AndroidManifest.xml file:
android:sharedUserId="android.uid.system"
4.2.2 Modify Android.mk
Create an Android.mk file(the simplest way is to copy a sample Android.mk file), modify the Android.mk file by adding a line LOCAL_CERTIFICATE := platform :
LOCAL_PATH:= $(call my-dir) include $(CLEAR_VARS) LOCAL_SRC_FILES := $(call all-subdir-java-files) LOCAL_PACKAGE_NAME := Project Name LOCAL_CERTIFICATE := platform LOCAL_MODULE_TAGS := optional LOCAL_CFLAGS := -lfriendlyarm-hardware include $(BUILD_PACKAGE)
4.3 Final Step: Compile Your APP Together with Android Source Code
Go to Android source code's root directory and run "setenv.sh" to export environmental variables, enter your app's directory and run "mm" to compile:
For example: compile the GPIO_LED_Demo on RK3399:
cd rk3399-android-8.1 . setenv.sh cd vendor/friendlyelec/apps/GPIO_LED_Demo mm
5 APIs in libfriendlyarm-things.so Library
Refer to this wiki site:FriendlyThings APIs
6 Access RK3399 Based Boards' Hardware under Android
6.1 Serial Port
Currently the only available serial port for users is UART4 and its device name is "/dev/ttyS4". Other serial ports are already taken. Here is a list of the serial ports and their functions. If you need more serial ports you can convert USB ports to serial ports:
6.1.1 APIs for Accessing Serial Ports
HardwareControler.openSerialPortEx //opens a serial port. HardwareControler.select //polls a serial port's status and checks if it has data to be read or if data can be written to it. HardwareControler.read //reads data from a serial port. HardwareControler.write //writes data to a serial port. HardwareControler.close //closes a serial port.
For more details refer to :FriendlyThings APIs
6.2 GPIO
You can access GPIO by calling sysfs APIs. You need to access the "/sys/class/gpio" directory, write a GPIO index number you want to access to the export file, and set the GPIO's direction and value.
Here is a list of GPIOs FriendlyElec's RK3399 boards support:
- NanoPC T4
- NanoPi M4和NanoPi NEO4
6.2.1 APIs for Accessing GPIO
HardwareControler.exportGPIOPin //exports a GPIO. HardwareControler.setGPIODirection //sets a GPIO's direction. HardwareControler.getGPIODirection //gets a GPIO's direction. HardwareControler.setGPIOValue //sets a GPIO's value. HardwareControler.getGPIOValue //gets a GPIO's value HardwareControler.unexportGPIOPin //unexports a GPIO.
For more details refer to:FriendlyThings APIs
6.2.2 Testing GPIO
You can use a FriendlyElec's LED module to test GPIOs. Set a HIGH to turn on the LED and a LOW to turn off the LED.
6.3 ADC
RK3399 populates three ADC channels 0, 2 and 3 and here is a list of the channels and their corresponding nodes:
You can access GPIOs like accessing files under Android.
6.4 PWM
Note: The PWM interface is already used by the fan by default. If you want to control the PWM by yourself, you need to disable the fan first,
Please refer to: Template:RK3399 Android PWMFan
You can access PWMs by calling sysfs APIs. You can access the nodes under the "/sys/class/pwm/pwmchip1" directory. Here is code sample to control a PWM fan:
6.4.1 APIs for Accessing PWM
- Export PWM0 to users
echo 0 > /sys/class/pwm/pwmchip1/export
- Control a PWM fan's speed by setting the PWM's period and duty_cycle.
6.4.2 Testing PWM
Connect a PWM fan(3 pins) to a NanoPC-T4's fan port to test it.
6.5 I2C
To test I2C we connected a FriendlyElec's LCD1602 module to a NanoPC-T4 and ran the I2C demo program:
Here is a hardware setting:
6.6 RTC
You can access RTC by calling APIs under the "/sys/class/rtc/rtc0/" directory. For instance you can check the current RTC time by running the following commands:
cat /sys/class/rtc/rtc0/date # 2018-10-20 cat /sys/class/rtc/rtc0/time # 08:20:14
Set power-on time. For instance power on in 120 seconds:
#Power on in 120 seconds echo +120 > /sys/class/rtc/rtc0/wakealarm
6.7 Watch dog
It is quite straightforward to access the watch dog. You can simply open the "/dev/watchdog" device and write characters to it.If for any reason no characters can be written to the device the system will reboot in a moment:
mWatchDogFD = HardwareControler.open("/dev/watchdog", FileCtlEnum.O_WRONLY); HardwareControler.write(mWatchDogFD, "a".getBytes());
6.8 SPI
6.8.1 Enable SPI
The SPI and UART4 share the same pins. You need to modify the kernel's DTS file to enable the SPI by running the following commands:
Edit the DTS file: arch/arm64/boot/dts/rockchip/rk3399-nanopi4-common.dtsi:
cd ANDROID_SOURCE/kernel vim arch/arm64/boot/dts/rockchip/rk3399-nanopi4-common.dtsi
You need to replace "ANDROID_SOURCE" to your real source. In our system it is either Android7's or Android8's source code directory.
Locate spi1's definition:
&spi1 { status = "disabled"; // change "disabled" to "okay"
Locate uart4's definition in the rk3399-nanopi4-common.dtsi file:
&uart4 { status = "okay"; // change "okay" to "disabled"
Compile kernel:
cd ANDROID_SOURCE/ ./build-nanopc-t4.sh -K -M
Use the newly generated "rockdev/Image-nanopc_t4/resource.img" image file to update your system.
6.8.2 Testing SPI
By default the SPI is not enabled in your system. You need to manually enable it by making changes to your Android source code:
vendor/friendlyelec/apps# vi device-partial.mk
Remove the comment:
# PRODUCT_PACKAGES += SPI-OLED
Compile Android source code
We connected a FriendlyElec's OLED module which had a 0.96" LCD with a resolution of 128 x 64 and a SPI interface to a NanoPC-T4 and tested it.
The LCD module had 7 pins and here is hardware setup:
6.8.3 APIs for Accessing SPI
For more details refer to:FriendlyThings APIs
7 Download Links to Code Samples
All the code samples are included in FriendlyElec's Android source code and are under the "vendor/friendlyelec/apps" directory in Android7.1.2 and Android8.1. Or you can download individual code samples. Here is a list of code samples and their download links:
7.1 Android8.1
7.1.1 Applicable Boards
- NanoPC-T4/NanoPi-M4/NanoPi-NEO4
7.2 Android7.1.2
7.2.1 Applicable Boards
- NanoPC-T4/NanoPi-M4/NanoPi-NEO4 | http://wiki.friendlyarm.com/wiki/index.php?title=FriendlyThings_for_RK3399&direction=next&oldid=19098 | CC-MAIN-2020-40 | refinedweb | 1,546 | 50.12 |
Postpone is a polyfill for delaying the downloading of media associated with an element when the element is not visible. This polyfill is modelled after the W3C's draft specification for resource priorities.
Install with component(1):
$ component install lsvx/postpone
$ bower install postpone
To postpone an element, simply specify a
postpone attribute on it and move the url for the resource you would like to load to a data attribute.
Postpone is written using a UMD pattern so it can be used in CJS-style environments, with AMD loaders, or as a browser global. To start using postpone, import the
postpone module and create a new instance; this instance will automatically start watching your document.
// If you are using postpone as a CJS module, require it like so:var Postpone = ;var postpone = ; // Creates a new instance and starts watching the document.
Note: If you are using postpone in a large project or as a dependency for a library you would like to distribute publicly, then it is advisable to import it as a module rather than use it as a global variable to avoid polluting the global namespace. = 50 ; // Postpone will set the threshold to 50vh, or half a viewport.
Optionally, you can manually change the threshold at any point in your code by calling the
.setThreshold() method.
var postpone = // Threshold defaults to 0vh... // Do something with our code, object,.
Stop all of postpone's functionality. This means postpone will stop watching the document for changes and will unbind any scroll events associated with postponed elements.
If you have paused postpone's watcher and unbound its scroll events using
.stop(), you can start it all back up with
.start().
Check if your
element is somewhere in the browser's viewport, where
element and
scrollElement are DOM nodes. If
scrollElement is not specified, postpone assumes that
element scrolls with respect to the document.
Check if your
element is visually hidden, where
element is a DOM node. This method checks if
element or any parent element is hidden with the CSS style
display: none;.
Stop postponing the download of your
element by manually telling postpone to load it..
MIT | https://www.npmjs.com/package/postpone | CC-MAIN-2018-13 | refinedweb | 358 | 62.17 |
Hi Enthusiastic Learners! In this article we will learn about many of NumPy's Universal Functions, or built-in functions. Universal Functions play a very crucial role in getting the best performance out of NumPy, and if you want to know the advantages and performance effects of using Universal Functions (UFuncs), you should go through our article Why use Universal Functions or Built in Functions in NumPy ?
And to learn the basics of NumPy you can go through these 2 detailed articles; they will help you get a good understanding of creating & traversing NumPy arrays.
In this article we will be covering the following topics:
- Arithmetic Universal Functions
- Trigonometric Universal Functions
- Exponent & Logarithmic Universal Functions
Arithmetic Universal Functions
The biggest advantage of using UFuncs for arithmetic operations is that they all look the same as our standard mathematical operators; that is, you can simply use '+', '-', '/' & '*' for their meaningful mathematical operations: Addition, Subtraction, Division & Multiplication respectively.

Let's create an Array and try out these operations. Just remember one thing: when we add or subtract a scalar value to or from an Array, the operation is applied to every element of that array.
import numpy as np

# Our Base Array
x = np.arange(1, 20, 3)
x
array([ 1, 4, 7, 10, 13, 16, 19])
print("x + 4 = " + str(x + 4))
print("x - 4 = " + str(x - 4))
print("x / 4 = " + str(x / 4))
print("x * 4 = " + str(x * 4))

x + 4 = [ 5 8 11 14 17 20 23]
x - 4 = [-3 0 3 6 9 12 15]
x / 4 = [0.25 1. 1.75 2.5 3.25 4. 4.75]
x * 4 = [ 4 16 28 40 52 64 76]
Note: One interesting thing to note here is that NumPy automatically selects the data type of the result of any operation. In the above example, when we divided the integer values of the array we got Float (decimal) values in the output; NumPy automatically chooses the more general (higher) data type.
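We can check this type promotion directly by inspecting the dtype attribute of each result (a small sketch; the exact integer dtype, e.g. int64 vs int32, can vary by platform):

```python
import numpy as np

x = np.arange(1, 20, 3)      # integer array
print((x + 4).dtype)         # stays an integer dtype (e.g. int64)
print((x / 4).dtype)         # promoted to float64 by true division
```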
You can also perform following operations:
- Negate all values of an array
- Finding modulus of all values of array (remainder of values)
- Finding power of all numbers
print("Negate all values of array")
print("-x \t= " + str(-x))
print("\nModulus of all numbers with 4")
print("x % 4 \t= " + str(x % 4))
print("\nCalculating power of all number with 3")
print("x ** 3 \t=" + str(x ** 3))

Negate all values of array
-x = [ -1 -4 -7 -10 -13 -16 -19]

Modulus of all numbers with 4
x % 4 = [1 0 3 2 1 0 3]

Calculating power of all number with 3
x ** 3 =[ 1 64 343 1000 2197 4096 6859]
Corresponding to the above mathematical operators we also have standard NumPy functions, which are called internally whenever an operator is used. The list of these functions is as follows:
- "+" : np.add
- "-" : np.subtract
- "/" : np.divide
- "*" : np.multiply
- "-val" (unary negation) : np.negative
- "%" : np.mod
- "**" : np.power
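Each operator is simply shorthand for its corresponding UFunc, so both call styles produce identical results. A quick check:

```python
import numpy as np

x = np.arange(1, 20, 3)

# '+' dispatches to np.add, '**' to np.power
print(np.array_equal(x + 4, np.add(x, 4)))     # True
print(np.array_equal(x ** 3, np.power(x, 3)))  # True
```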
Trigonometric Universal Functions
We can use trigonometric functions to compute both standard trigonometric results and inverse trigonometric results.

Let's begin by creating an array of different angles (in radians).
angle = np.arange(0, 15, 4)
angle
array([ 0, 4, 8, 12])
print("tan(angle) = " + str(np.tan(angle)))
print("\nsin(angle) = " + str(np.sin(angle)))
print("\ncos(angle) = " + str(np.cos(angle)))

tan(angle) = [ 0. 1.15782128 -6.79971146 -0.63585993]

sin(angle) = [ 0. -0.7568025 0.98935825 -0.53657292]

cos(angle) = [ 1. -0.65364362 -0.14550003 0.84385396]
Let's look at the Inverse Trigonometric Functions too.

First, create an array of values to which we will apply the inverse trigonometric functions to get the corresponding angles.
values = [0, 1, -1] values
[0, 1, -1]
print("arctan(values) = " + str(np.arctan(values)))
print("\narcsin(values) = " + str(np.arcsin(values)))
print("\narccos(values) = " + str(np.arccos(values)))

arctan(values) = [ 0. 0.78539816 -0.78539816]

arcsin(values) = [ 0. 1.57079633 -1.57079633]

arccos(values) = [1.57079633 0. 3.14159265]
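Note that all of these results are in radians. If you prefer degrees, NumPy's np.degrees() (or np.rad2deg()) and np.radians() (or np.deg2rad()) convert between the two. A small sketch:

```python
import numpy as np

values = [0, 1, -1]
angles_rad = np.arcsin(values)       # results in radians
angles_deg = np.degrees(angles_rad)  # convert to degrees
print(angles_deg)                    # 0, 90 and -90 degrees
```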
NumPy also provides us with a set of Hyperbolic Trigonometric functions.
Here is an example for them.
print("tanh(angle) = " + str(np.tanh(angle)))
print("\nsinh(angle) = " + str(np.sinh(angle)))
print("\ncosh(angle) = " + str(np.cosh(angle)))

tanh(angle) = [0. 0.9993293 0.99999977 1. ]

sinh(angle) = [0.00000000e+00 2.72899172e+01 1.49047883e+03 8.13773957e+04]

cosh(angle) = [1.00000000e+00 2.73082328e+01 1.49047916e+03 8.13773957e+04]
One thing to note here is that to get the hyperbolic functions, all you had to do was add an 'h' at the end of each function name. So, it's very easy to remember.
Similarly, for the Inverse Hyperbolic Functions, add 'arc' before the function name and 'h' at the end: arctanh(), arcsinh() and arccosh().
That makes them easy to remember too.
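For example, a small illustrative check of the naming pattern (values chosen here just for demonstration):

```python
import numpy as np

# Inverse hyperbolic functions: 'arc' prefix plus 'h' suffix.
print("arcsinh(1)   = " + str(np.arcsinh(1.0)))
print("arctanh(0.5) = " + str(np.arctanh(0.5)))
print("arccosh(1)   = " + str(np.arccosh(1.0)))
```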
Exponent & Logarithmic Universal Functions
Following is the list of Exponential Functions
- exp(x) — e^x
- expm1(x) — e^x − 1 — Used when 'x' is very small, since it provides more accuracy than computing exp(x) − 1 directly; however, it is a little bit slower. So, try using it only when you have very small values.
- exp2(x) — 2^x — Used only when calculating powers of the scalar value '2'
- power(n,x) — n^x — Any number 'n' raised to the power 'x'
Let’s see them in action.
# base array
x = np.arange(1, 8, 2)
x
array([1, 3, 5, 7])
print("-- e^x --")
print(np.exp(x))
print("\n-- 2^x --")
print(np.exp2(x))
print("\n-- 5^x --")
print(np.power(5, x))

-- e^x --
[ 2.71828183 20.08553692 148.4131591 1096.63315843]

-- 2^x --
[ 2. 8. 32. 128.]

-- 5^x --
[ 5 125 3125 78125]
x_small = np.array([0.01, 0.001, 0.0001, 0.00001])
x_small
array([1.e-02, 1.e-03, 1.e-04, 1.e-05])
print("EXP() -- Standard Function")
print(np.exp(x_small))
print("\nEXPM1() -- High Precision")
print(np.expm1(x_small))

EXP() -- Standard Function
[1.01005017 1.0010005 1.00010001 1.00001 ]

EXPM1() -- High Precision
[1.00501671e-02 1.00050017e-03 1.00005000e-04 1.00000500e-05]
As you can see, we get more precision while using expm1().
Following is the list of Logarithmic Functions
- log(x) — Natural Log
- log1p(x) — Natural Log of (1 + x), computed with high precision. Use it when the value of 'x' is very small.
- log2(x) — Log with Base 2
- log10(x) — Log with Base 10
Let’s see how they work.
# base array
x = np.arange(1, 8, 2)
x
array([1, 3, 5, 7])
print("-- log(x) --")
print(np.log(x))
print("\n-- log2(x) --")
print(np.log2(x))
print("\n-- log10(x) --")
print(np.log10(x))

-- log(x) --
[0. 1.09861229 1.60943791 1.94591015]

-- log2(x) --
[0. 1.5849625 2.32192809 2.80735492]

-- log10(x) --
[0. 0.47712125 0.69897 0.84509804]
print("LOG() -- Standard Log Function")
print(np.log(x_small))
print("\nLOG1P() -- High Precision")
print(np.log1p(x_small))

LOG() -- Standard Log Function
[ -4.60517019  -6.90775528  -9.21034037 -11.51292546]

LOG1P() -- High Precision
[9.95033085e-03 9.99500333e-04 9.99950003e-05 9.99995000e-06]
From the results it is clear that for very small numbers we get higher precision when we use the log1p() function.
In our next tutorial we will be learning Aggregation Functions in depth, as we have covered only a few of them here. There are a lot more functions that we need to explore yet.
So stay tuned & Keep Learning!!
And don’t forget to check our YouTube Channel ML For Analytics.
You can also follow us on Facebook!!
plotly-scala
Scala bindings for plotly.js
plotly-scala is a Scala library able to output JSON that can be passed to plotly.js. Its classes closely follow the API of plotly.js, so that one can use plotly-scala by following the documentation of plotly.js. These classes can be converted to JSON, which can be fed directly to plotly.js.
It can be used from almond, from scala-js, or from a Scala REPL like Ammonite, to plot things straightaway in the browser.
It runs demos of the plotly.js documentation during its tests, to ensure that it is fine with all their features. That allows it to reliably cover a wide range of the plotly.js features - namely, all the examples of the supported sections of the plotly.js documentation are guaranteed to be fine.
It is published for both Scala 2.12 and 2.13.
Table of content
Quick start
From almond
Add the org.plotly-scala::plotly-almond:0.8.1 dependency to the notebook. Then initialize plotly-scala, and use it, like
import $ivy.`org.plotly-scala::plotly-almond:0.8.1`

import plotly._
import plotly.element._
import plotly.layout._
import plotly.Almond._

val (x, y) = Seq(
  "Banana" -> 10,
  "Apple" -> 8,
  "Grapefruit" -> 5
).unzip

Bar(x, y).plot()
JupyterLab
If you're using JupyterLab, you have to install jupyterlab-plotly to enable support for rendering Plotly charts:
jupyter labextension install jupyterlab-plotly
From scala-js
Add the corresponding dependency to your project, like
libraryDependencies += "org.plotly-scala" %%% "plotly-render" % "0.8.1"
From your code, add some imports for plotly,
import plotly._, element._, layout._, Plotly._
Then define plots like
val x = (0 to 100).map(_ * 0.1)
val y1 = x.map(d => 2.0 * d + util.Random.nextGaussian())
val y2 = x.map(math.exp)

val plot = Seq(
  Scatter(x, y1).withName("Approx twice"),
  Scatter(x, y2).withName("Exp")
)
and plot them with
val lay = Layout().withTitle("Curves")
plot.plot("plot", lay) // attaches to div element with id 'plot'
From Ammonite
Load the corresponding dependency, and some imports, like
import $ivy.`org.plotly-scala::plotly-render:0.8.1`
import plotly._, element._, layout._, Plotly._
Then plot things like
val labels = Seq("Banana", "Banano", "Grapefruit")
val valuesA = labels.map(_ => util.Random.nextGaussian())
val valuesB = labels.map(_ => 0.5 + util.Random.nextGaussian())

Seq(
  Bar(labels, valuesA, name = "A"),
  Bar(labels, valuesB, name = "B")
).plot(
  title = "Level"
)
Rationale
Most high-level Javascript libraries for plotting have well designed APIs, enforcing immutability and almost relying on typed objects, although not explicitly. Yet, the few existing Scala libraries for plotting still try to mimic matplotlib or Matlab, and have APIs requiring users to mutate things in order to do plots. They also tend to lack a lot of features, especially compared to the current high-end Javascript plotting libraries. plotly-scala aims at filling this gap, by providing a reliable bridge from Scala towards the renowned plotly.js.
Internals
plotly-scala consists of a set of definitions, mostly case classes and sealed class hierarchies, closely following the API of plotly.js. It also contains JSON codecs for those, allowing them to be converted to JSON that can be passed straightaway to plotly.js.
Since these classes can be converted to JSON, the codecs can also go the other way around: from plotly.js-compatible JSON to plotly-scala Scala classes. This reverse direction is used by the tests of plotly-scala, to ensure that the examples of the plotly.js documentation, illustrating a wide range of the features of plotly.js, can be represented via the classes of plotly-scala. Namely, the Javascript examples of the documentation of plotly.js are run inside a Rhino VM, with mocks of the plotly API. These mocks capture the Javascript objects passed to the plotly.js API, and convert them to JSON. These JSON objects are then validated against the codecs of plotly-scala, to ensure that all their fields can be decoded by them. If these checks pass, this proves that all the features of the examples have a counterpart in plotly-scala.
Internally, plotly-scala uses circe (along with custom codec derivation mechanisms) to convert things to JSON, then render them. The circe objects don't appear in the plotly-scala API - circe is only used internally. The plotly-scala API only returns JSON strings, that can be passed to plotly.js. In subsequent versions, plotly-scala will likely try to shade circe and its dependencies, or switch to a more lightweight JSON library.
Supported features
plotly-scala supports the features illustrated in the following sections of the plotly.js documentation:
- Scatter Plots,
- Bubble Charts,
- Line Charts,
- Bar Charts,
- Horizontal Bar Charts,
- Filled Area Plots,
- Time Series,
- Subplots,
- Multiple Axes,
- Histograms,
- Log Plots,
- Image.
Some of these are illustrated in the demo page.
Adding support for extra plotly.js features
The following workflow can be followed to add support for extra sections of the plotly.js documentation:
- find the corresponding directory in the source of the plotly.js documentation. These directories can also be found in the sources of plotly-scala, under plotly-documentation/_posts/plotly_js, if its repository has been cloned with the --recursive option,
- enable testing of the corresponding documentation section examples in the DocumentationTests class, around this line,
- run the tests with sbt ~test,
- fix the possible Javascript typos in the plotly-documentation submodule in the plotly-scala sources, so that the enabled JS snippets run fine with Rhino from the tests, then commit these fixes, either to or,
- add the required fields / class definitions, and possibly codecs, to have the added tests pass.
About
Battlefield tested since early 2016 at Teads.tv
Released under the LGPL v3 license, copyright 2016-2019 Alexandre Archambault and contributors.
Parts based on the original plotly.js API, which is copyright 2016 Plotly, Inc.
DEPRECATION WARNING: The Modular framework is being deprecated in favor of the Session Framework.
Requirements
To configure the modular framework, you will need to create a JSON file defining the required configurations for basemgr and sessionmgr as detailed below.
The configuration file should be packaged via the build rule modular_config, which will validate your file against a schema. You must then include the modular_config() target in the product's base packages.
The file may contain (non-standard JSON) C-style comments (/* block */ and // inline).
Example
{
  /* This is a block comment. Comments are ignored. */
  // This is an inline comment. Comments are ignored.
  "basemgr": {
    "enable_cobalt": false,
    "use_session_shell_for_story_shell_factory": true,
    "base_shell": {
      "url": "fuchsia-pkg://fuchsia.com/dev_base_shell#meta/dev_base_shell.cmx",
    },
    "session_shells": [
      {
        "url": "fuchsia-pkg://fuchsia.com/ermine_session_shell#meta/ermine_session_shell.cmx",
        "display_usage": "near",
        "screen_height": 50.0,
        "screen_width": 100.0
      }
    ]
  },
  "sessionmgr": {
    "use_memfs_for_ledger": true,
    "cloud_provider": "NONE",
    "startup_agents": [
      "fuchsia-pkg://fuchsia.com/startup_agent#meta/startup_agent.cmx"
    ],
    "session_agents": [
      "fuchsia-pkg://fuchsia.com/session_agent#meta/session_agent.cmx"
    ],
    "component_args": [
      {
        "uri": "fuchsia-pkg://fuchsia.com/startup_agent#meta/startup_agent.cmx",
        "args": [ "--foo", "--bar=true" ]
      }
    ],
    "agent_service_index": [
      {
        "service_name": "fuchsia.modular.SomeServiceName",
        "agent_url": "fuchsia-pkg://fuchsia.com/some_agent#meta/some_agent.cmx"
      }
    ]
  }
}
Basemgr fields
base_shell: object (required)
url: string (required)
- The fuchsia component url for which base shell to use.
keep_alive_after_login: boolean (optional)
- When set to true, the base shell is kept alive after a log in. This is used for testing because current integration tests expect base shell to always be running.
- default:
false
args: string[] (optional)
- A list of arguments to be passed to the base shell specified by url. Arguments must be prefixed with --.
session_shells: array (required)
- Lists all the session shells with each shell containing the following fields:
url: string (required)
- The fuchsia component url for which session shell to use.
display_usage: string (optional)
- The display usage policy for this session shell.
- Options:
handheld: the display is used well within arm's reach.
close: the display is used at arm's reach.
near: the display is used beyond arm's reach.
midrange: the display is used beyond arm's reach.
far: the display is used well beyond arm's reach.
screen_height: float (optional)
- The screen height in millimeters for the session shell's display.
screen_width: float (optional)
- The screen width in millimeters for the session shell's display.
story_shell_url: string (optional)
- The fuchsia component url for which story shell to use.
- default:
fuchsia-pkg://fuchsia.com/mondrian#meta/mondrian.cmx
enable_cobalt: boolean (optional)
- When set to false, Cobalt statistics are disabled.
- default:
true
use_minfs: boolean (optional)
- When set to true, wait for persistent data to initialize.
- default:
true
use_session_shell_for_story_shell_factory: boolean (optional)
- Create story shells through StoryShellFactory exposed by the session shell instead of creating separate story shell components. When set,
story_shell_url and any story shell args are ignored.
- default:
false
Sessionmgr fields
cloud_provider: string (optional)
- Options:
LET_LEDGER_DECIDE: Use a cloud provider configured by ledger.
FROM_ENVIRONMENT: Use a cloud provider available in the incoming namespace, rather than initializing and instance within sessionmgr. This can be used to inject a custom cloud provider.
NONE: Do not use a cloud provider.
- default:
LET_LEDGER_DECIDE
enable_cobalt: boolean (optional)
- When set to false, Cobalt statistics are disabled. This is used for testing.
- default:
true
enable_story_shell_preload: boolean (optional)
- When set to false, StoryShell instances are not warmed up as a startup latency optimization. This is used for testing.
- default:
true
use_memfs_for_ledger: boolean (optional)
- Tells the sessionmgr whether it should host+pass a memfs-backed directory to the ledger for the user's repository, or to use /data/LEDGER.
- default:
false
startup_agents: string[] (optional)
- A list of fuchsia component urls that specify which agents to launch at startup.
session_agents: string[] (optional)
- A list of fuchsia component urls that specify which agents to launch at startup with PuppetMaster and FocusProvider services.
component_args: array (optional)
- A list of key/value pairs to construct a map from component URI to arguments list for that component. Presence in this list results in the given arguments passed to the component as its argv at launch.
uri: The component's uri.
args: A list of arguments to be passed to the component specified by
uri. Arguments must be prefixed with --.
agent_service_index: array (optional)
- A list of key/value pairs to construct a map from service name to the serving agent's URL. Service names must be unique, so only one agent can provide a given named service.
service_name: The name of a service offered by a session agent.
agent_url: A fuchsia component url that specifies which agent will provide the named service.
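Note that the real configuration file may contain C-style comments, which plain JSON parsers reject. As a rough illustration of the kind of validation involved (a hypothetical helper, not the actual modular_config schema validator), here is a sketch that checks just two of the required fields described above:

```python
import json

def validate_modular_config(text):
    """Toy check of two required fields; NOT the real `modular_config` schema."""
    cfg = json.loads(text)
    basemgr = cfg.get("basemgr", {})
    errors = []
    # base_shell.url is required.
    if "url" not in basemgr.get("base_shell", {}):
        errors.append("basemgr.base_shell.url is required")
    # Every session shell entry needs a url.
    for i, shell in enumerate(basemgr.get("session_shells", [])):
        if "url" not in shell:
            errors.append(f"session_shells[{i}].url is required")
    return errors

config = """
{
  "basemgr": {
    "base_shell": { "url": "fuchsia-pkg://fuchsia.com/dev_base_shell#meta/dev_base_shell.cmx" },
    "session_shells": [ {} ]
  }
}
"""
print(validate_modular_config(config))  # ['session_shells[0].url is required']
```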
CONSOLE
itself, representing standard output, is an instance of java.io.PrintStream class. Standard output is, on most operating systems, console output.
format...: It is a class made available by Java to let you manipulate various operating output in Console - JSP-Servlet
the output on console.
Thanks...JSP output in Console Q:An input text should be read and the same should be printed in the CONSOLE.
Actually i was able to do it in the browser
Input From Console
-based console device
associated with the current Java virtual machine... Input From Console
The Console Class inherits from Java.io.console
HIBERNATE IN CONSOLE & SERVLET
the basic
theoretical ideas of java source file of the class representing the data...
HIBERNATE IN CONSOLE & SERVLET
( part-3... for using Hibernate in a console application & a servlet.
ClearScreen in Console Java
ClearScreen in Console Java How can I perform Clear Screen Operation in Java.
As we used to do in C++ using clrscr();
Please guide
java console programming - Java Beginners
java console programming how to clear d screen in java console programming
i want to type the intput in the output console box how?
i want to type the intput in the output console box how? i want to type the intput in the output console box how
Need help writing a console program
Need help writing a console program I need help cant seems to figure... will replace all sequences of 2 spaces with 1 space.
The program will read a file...):
This is a test, this is only a test!
This test is a test of your Java programming
input output
;
Introduction
The Java I/O means Java Input/Output and is a part... the output
stream in machine format.
File
This class...
It uses for writing data to a file and also implements an
output
Java Redirect output to file
Java Redirect output to file
This section illustrates you how to redirect the output to the file.
It may be comfortable to a programmer if they are able... to the
file.
In the given example, we have set the output stream of System
Spring Console
Spring Console
The Spring Console is a FREE standalone Java
Swing application... Library files.
The Spring Console also plugs into multiple, popular Java IDEs
Struts Console
Struts Console
The Struts Console is a FREE standalone Java Swing
application for managing Struts-based applications. With the Struts Console you
can
Faces Console
Faces Console
The Faces Console is a FREE standalone Java Swing
application... Library files.
The Faces Console also plugs into multiple, popular
Java IDEs
java coding for creating table in the console
java coding for creating table in the console write a java program to create table
Console Appender in Log4j
you will get following
output on your console.
16:22:51,406...
Console Appender in Log4j
In this log4j console appender tutorial you
Display Calendar On Console
Display Calendar On Console
In this section, we are going to create a calendar and display it on the console. For this, user is allowed to enter the year...;
}
day %= 7;
System.out.print("\n\n");
}
}
}
Output
Input And Output
;
Introduction
The Java I/O means Java Input/Output and is a part of java.io...;}
}
Filter Files in Java: The
Filter File Java example...;}
}
}
Java read file line by line
console application - text-based menu - Java Beginners
console application - text-based menu Im doin a text-based menu console application. I have created five classes: namely:
1. addproduct
2. updateproduct
3. deleteproduct
4. view product
5. productmain
i used Scanner
How to read password from the console
Description:
Console class was introduced in jdk 1.6. This class help in taking the input from
the console using its readPassword method . Here..., ' ');
}
}
}
Output
Command Line Standard Output In Java
be written at the console, in a file, or at any
output source. Java provides... output data. These output data can be displayed at the
console,
file or any...Command Line Standard Output In Java
In this section we will discuss about
Core Java Interview Question Page 1
direct program messages to the system console, but error messages, say to a file..., they both point at the system console. This how the standard output could be re...
Core Java Interview Question Page 1
Hibernate generated SQL statements on console
the Hibernate generated SQL statements on console, what should we do?
If you want to see the Hibernate generated SQL statements on console just add property in hibernate configuration file-
<property name=?show_sql? >true<
executing a batch file which depends on jar files from a system tray but console should not display.
to see the frame outside of system tray and i want to see the console with output...executing a batch file which depends on jar files from a system tray but console should not display. Hi all,
I got following requirement,
I have
How to read and display password from the console
Description:
Console class was introduced in jdk 1.6 This class help in taking the input from
the console using its readPassword method . Here...; }
}
}
Output
Java Input/Output Examples
Java Input/Output Examples
In this tutorial you will learn about how the Inputs and outputs are managed in java. To understand the Java Input
& Output..., serialization, and the file system.
Here we are giving a long list of examples which
ReadLine(String fmt,Object... args) of Console class - Java Beginners
ReadLine(String fmt,Object... args) of Console class How to reload the string and objects of console class in Java? Hi friend,import... ConsoleExample{ public static void main(String[] args){ Console console
Show Hibernate generated SQL statements on console
Show Hibernate generated SQL statements on console How can i see Hibernate generated SQL statements on console?
In Hibernate configuration file add the following:
<property name="show_sql">true</property>
soap message console - WebSevices
soap message console hi friends i have one doubt that is iam using MyEclipse 6.0 in that how do i get soap message console
Need help with console program?
Need help with console program? Write a console program that repeatedly prompts the user to enter data until they type done (any case, Upper, Lower, or Mixed).
thanks in advance.
Here is an example that repeatedly
input output
input output java program using fileinputstream and fileoutputstream
Hi Friend,
Try the following code:
import java.io.*;
class...();
System.out.println("File is copied");
}
}
Thanks
Java program to get the Command Line Arguments
Java program to get the Command Line Arguments
In java we can get the command line arguments as
they are provided on the console. In this example program our
File Reader example in JRuby
on
your console as output. Here is the JRubyFile.txt as follows: .... For reading whole text file we have used while-loop in our
program.
Here is the output... File Reader example in JRuby
Java Read Lines from Text File and Output in Reverse order to a Different Text File
Java Read Lines from Text File and Output in Reverse order to a Different Text... to another text file. When that is done the output values of that file need... to display the names and path on the console. But I can not get the file to read line
How to read from the console
the input from
the console. Here in this sample program it will take one word input
from the console and display it.
Code:
import ...;System.out.println(sc.next());
}
}
Output
should print in console
Reading Value From console
input output in java
input output in java java program using filereader and filewriter...();
System.out.println("File is copied");
}
}
Thanks
Hi...);
}
out.close();
System.out.println("File is copied");
}
}
Thanks
Why do the slashes change when the console gives me the error?
Why do the slashes change when the console gives me the error? The string input as the filename looks like this:
String file = "http...";
The console gives me back an error saying:
java.io.FileNotFoundException: http
java - Java Beginners
-user-input.shtml... output. Therefore in Java suggest how to accept input from the user and display
Input and Output package
Input and Output package Hi guys,
My professor used input and output... she used
in.readint()
out.writeln() commands to read input and print output.
she created two new objects directly to use this statements.
/* input and output
Printing a Stack Trace to the Server Console
Printing a Stack Trace to the Server Console... the Stack Trace is
printed on the Console.
When you generate an exception... that we can
get more information about the error process.
Output
"Hello World" program in Swing and JRuby
Tutorials you have studied how to use Java classes in JRuby examples to show results on your
console. Now we are going to describe you how to use Swing in JRuby... are
making our JRuby program enabled to use java classes.
frame
How to Read Excel file Using Java
and every cell and stored the excel file values into the
vector.This Vector data is then used to display file values on the console.
Here is the code...How to Read Excel file
In this section,you will learn how to read excel file
output java - Java Beginners
;
}
a. what is the output of the ff. java statement:?
i. System.out.println (secret...output java public static int secret(int one)
{
int i;
int...? Hello
Are you beginner?
Ok, The first Output is 125
How to read properties file in Java?
the
java.util.Properties class for reading a property file in Java program....
Here is the video tutorial of reading a property file in Java: "How to read properties file in Java?"
Here is the data.properties the properties
Need in desperate help in writing a console program
Need in desperate help in writing a console program Write a console program that repeatedly prompts the user to enter data until they type done (any case, Upper, Lower, or Mixed). As they enter the data, assign it to a two
unable to see the output of applet. - Applet
unable to see the output of applet. Sir,
I was going through the following tutorial
but the problem
Reverse String
is the output:
C:\Examples>java Reverse roseindia
aidniesor... it in its backward direction on the console.
Description of program... and getting the necessary path ,iam clicking on the JSP file i am not getting
Show output as a xml file using Velocity
Show output as a xml file using Velocity
This
Example shows you how
to show output as a xml file...;to produce
the output.#foreach( $stu in $stuList ): This works same
Data input & output Stream
write primitive Java data types to an output stream in a portable way...Data input & output Stream Explain Data Input Stream and Data Output... stream lets an application read primitive Java data types from an underlying input
PHP SQL Output
need to show the output to the console
or web page of the application... PHP SQL Output
This example illustrates how to display the output fetched as a result
Output of null value
Java Factorial Program I want to create a factorial program in Java. Please help me.
public class NullOutPut{
public static void...(str);
}
}
Output value
Null
Description:- In the given example
How to check long buffer is direct or not in java.
How to check long buffer is direct or not in java.
In this tutorial, we will discuss how to check long buffer is direct or not
in java.
LongBuffer API...;}
}
}
Output
C:\>java BufferIsDirect
Output of flex
Output of flex hi.......
please provide the name of the output file.
What is the output of flex applications?
please rply ASAP........
Input and Output problems - Java Beginners
Input and Output problems 1) what is Difference between InputStreamReader and FileReader?
2) what is buffer?
Hi friend.... This link will help you.
Convert Text To PDF
into pdf file
Output of The Example...;
Here we are discussing the convertion of a text file into
a pdf file by using an example. In
this example we take a string from console, pass
Java File - Java Beginners
.
Output
Print the file names and size of the file to the console.
Please...Java File Hi Friend,
Thank you for the support I got previously...
Anyone please send me the Java Code for scanning a directory and print
Java char to int
Java char to int
We are going to describe How to convert char to int. First of all we have create class name
"CharToInteger" and take a char on the console...();
}
}
}
Output
Download Source Code
Java programming 1 - Java Beginners
:// class examples...Java programming 1 Thx sir for reply me ^^ err how bout if using
Convert Object To XML
file with the help of an example. We are taking ten strings from console... into xml file as child node values. Convert Object To XMLTo create a xml file pass the name of xml file
with .xml extension into an object of FileOutputStream
Java get GMT Time
Java get GMT Time
....
The following example helps you to obtain the IST and GMT time on the console.
The method...;}
}
Output will be displayed as:
Download Source Code
NSLog examples
Programmers uses the NSLog function to debug the application. NSLog function is used to display the debug messages on the console. The NSLog function is very useful in debugging the iPhone applications.
In this NSLog tutorial series we
Output Previous or Next Day of the Month
Output Previous or Next Day of the Month Please Help! I need... the if...then...else construct and (2) to start using some of the Java API's, by using the String... or previous day:
The program should output the next day's or previous day's date
Java I/0 Examples
Core Java Hello World Example
for writing the output on the console.
In the sixth line a '}' (closing... Java editor to
write the above Java code and then save this file as File->Save/Save As then go
to your directory where you want to save your java file
File Upload Tutorial With Examples In JSP
File Upload Tutorial With Examples In JSP
... such types of examples
with the complete code in JSP. This tutorial also provides the output of
each and every examples for cross checking for your
Java I/O Examples
; Characters.
File Output Stream
As we discussed earlier, java has... Java I/0 Examples
... Java Input/Output. It is provided by the java.io
package. This package hassf form output - Java Server Faces Questions
the output with all the details whatever i provide.
It is urgent for me please give
jsf form output - Java Server Faces Questions
button it goes to hello.jsp page and gives the output with all the details whatever
Java coding for beginners
compiler you have for the purposes.
Echoing words in Java to output words...This article is for beginners who want to learn Java
Tutorials of this section are especially for the beginners who want to incept
the Java program from
Java file from url
urlstring = "";
URL u = new URL(urlstring...Java file from url
In this section, you will learn how to convert url to file... have
displayed the file on the console.
URL: This class represents a Uniform
WriterAppender in Log4j
in simple file (plain/text) file and to the console also. If we want
to write... of WriterAppender and
since it requires Writer or OutputStream we are using a file "... to logger and all logging events will be added to this HTML
file.
Here
While loop break causing unwanted output
it will output that I was correct but after that it will output something that I'm... as I'm just beginning learning java.
import java.util.Scanner;
public class
Object Output Stream
Object Output Stream Can anyone decode the data from a file which is written by ObjectOutputStream??
I want to know if this is a secure way to write protected data without encryption
How to copy a file in java
How to copy a file in java
In this section you will learn about how to copy a content of one file to
another file. In java, File API will not provide any direct way to copy a file.
What we can do is, read a content of one
Read File in Java
Read File in Java
This tutorial shows you how to read file in Java. Example discussed here
simply reads the file and then prints the content on the console... is the complete example of Java program that reads a character file and
prints
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/87695 | CC-MAIN-2015-40 | refinedweb | 2,791 | 64.2 |
The string format is like this
"a.b.c = 1" "a.b.d = 2" => its hash will be => {'a'=> {'b' => {'c'=>1, 'd'=>2 } } }
It gets tricky with arrays
"a.e[0].f = 1" "a.e[0].g = 2" "a.e[1].h = 3" => its hash will be => {'a' => {'e' => [{'f'=>1, 'g'=>2}, {'h'=>3}] } }
I wrote a version which doesn't handle arrays, with too many if-else checks
def construct
  $output[$words[0]] = {} unless $output.has_key?($words[0])
  pointer = $output[$words[0]]
  $words[1..-2].each do |word|
    pointer[word] = {} unless pointer.has_key?(word)
    pointer = pointer[word]
  end
  pointer[$words[-1]] = 1
end

$output = {}
$words = ['a', 'b', 'c']
construct
$words = ['a', 'b', 'd']
construct
p $output
The array version is even worse. Is there a better way of solving this in Ruby?
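One possible cleaner approach (the helper names deep_set and parse_assignments are mine, and values are assumed to be integers) walks the key path recursively, choosing a Hash or an Array for each child based on whether the next key is an index:

```ruby
# Set a value at a nested key path, creating Hashes for string keys
# and Arrays for integer keys along the way.
def deep_set(node, keys, value)
  key, *rest = keys
  if rest.empty?
    node[key] = value
  else
    node[key] ||= (rest.first.is_a?(Integer) ? [] : {})
    deep_set(node[key], rest, value)
  end
end

def parse_assignments(lines)
  lines.each_with_object({}) do |line, out|
    path, value = line.split("=", 2).map(&:strip)
    keys = path.split(".").flat_map do |part|
      m = part.match(/\A(\w+)\[(\d+)\]\z/)
      m ? [m[1], m[2].to_i] : [part]   # "e[0]" -> ["e", 0]
    end
    deep_set(out, keys, Integer(value))
  end
end

p parse_assignments(["a.b.c = 1", "a.b.d = 2"])
p parse_assignments(["a.e[0].f = 1", "a.e[0].g = 2", "a.e[1].h = 3"])
```

This avoids the global variables and handles both examples from the question with the same code path.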
From what I understand, the Arduino Uno does not have control over slave select.
Note about Slave Select (SS) pin on AVR based boards: All AVR based boards. For example, the Arduino Ethernet shield uses pin 4 to control the SPI connection to the on-board SD card, and pin 10 to control the connection to the Ethernet controller.
...DomandaCome faccio a mantenere lo slave select basso e inviare un altro byte??Grazie
P.S: questa foto fa intendere che lo SS è un'uscita
•SS/OC1B/PCINT2 - Port B, Bit 2SS: Slave Select input. When the SPI is enabled as a Slave, this pin is configured as an input regardless of the setting of DDB2.As a Slave, the SPI is activated when this pin is driven low.When the SPI is enabled as a Master, the data direction of this pin is controlled by DDB2. When the pin is forced by the SPI to be an input, the pull-up can still be controlled by the PORTB2 bit.
#include "SPI.h"void setup() { pinMode(SS-pin, OUTPUT); SPI.begin(); SPI.setBitOrder(MSBFIRST); //oppure SPI.setBitOrder(LSBFIRST); }void setup() { ... digitalWrite(SS-pin, LOW); SPI.transfer(Byte1); SPI.transfer(Byte2); digitalWrite(SS-pin, HIGH); ...
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=154537.msg1158636 | CC-MAIN-2016-40 | refinedweb | 238 | 67.35 |
Item Bucket Settings: Creating awesome rules to rule your folder structure
In Sitecore 7.5 there are great new features available like the settings for Item Buckets and especially conditions and actions for resolving the Bucket Folder Path. This way we could influence the way our Bucket Folder Paths are generated based on Sitecore's great Rules Engine.
Bucket Folder Paths that make sense
In Sitecore 7.5 there are great new features available like the settings for Item Buckets and especially conditions and actions for resolving the Bucket Folder Path. This way we could influence the way our Bucket Folder Paths are generated based on Sitecore's great Rules Engine. We can create different Bucket structures for bucketable items that are based on specific templates, or when the name of our Bucket matches certain rules. Out of the box we can base the Bucket Folder Path on the creation date of the new bucketable item in a specific format, or based on the ID of the new bucketable item (the item ID will be used and will generate folder structures like “A/3/F/E/2”, etc.), or just use the name of the item in de same way to get some structure that is interpriteable to humans “s/i/t/e/c/o” etc.
Publications in categories
This is cool and even better when we implement our own actions for these rules. In our solution we have a bucket that holds our publication items. The template on which these publications are based on contain a field in which the category is set. This field is shared, so in all languages this field holds the same value. We also have a publication date field available in our template, this field is also shared.
The reason we use this Bucket is that we have a lot of publication items to store and we’re not really interested in how our publication items are stored in Sitecore. There are some cons and pros about Item Buckets but that would go far beyond this blog.
But to get some grip on the structure we want to get the following:
With the actions that came out of the box we weren’t able to do this so we need to create our own actions. We could make an action that is a tight fit with our Publication Item template but that will be a quick and dirty solution. We could also make two generic actions which you can use to make this work. But hey, why the rush? This blog will cover both situations.
Tight fit action
For creating and using our tight fit solution we need to do three things:
- Create a piece of code
- Create the item for our action
- Make the action work with our Item Bucket Settings
We need to create an action that does the following. First we stop our action if we’re not dealing with an item that is not based on our Publication Item template. Next we need to get our item from the database because our actual item isn’t passed into our action. If we’re dealing with an item that has to be created then we’re not getting our item from the database because it isn’t created yet. If we’re working with an item that has already been created, for example the item is moved into the bucket, then we can get the item from the database. If the item is new we use DateTime.Now as our value, if the item already existed then we take the value of the DateField “pubDate”. If that field has no value then we take DateTime.Now. Next we need to determine the category where would like to see in our Bucket Folder Path. For this value we get the item our category field is pointing at and when have that Category Item we take the Category Name field from that item. If no category had been set we take a default value “Uncategorized”. After this we assemble our path by adding the values to a simple list of strings and set the outcome of the string.Join method on our parts to the ResolvedPath property of the ruleContext.
namespace Sdp.Modules.Publications.Buckets.Rules.Bucketing.Actions { using System; using System.Collections.Generic; using System.Linq; using Sdp.Modules.Publications.Business.Entities; using Sitecore.Buckets.Rules.Bucketing; using Sitecore.Buckets.Util; using Sitecore.Data.Fields; using Sitecore.Data.Items; using Sitecore.Diagnostics; using Sitecore.Rules.Actions; public class CreatePublicationItemBasedPath
: RuleAction where T : BucketingRuleContext { public CreatePublicationItemBasedPath() { } public override void Apply(T ruleContext) { Assert.ArgumentNotNull(ruleContext, "ruleContext"); IList pathparts = new List (); Item item = ruleContext.Database.GetItem(ruleContext.NewItemId); DateTime date = DateTime.Now; if (!ruleContext.NewItemTemplateId.Equals(PublicationItem.TemplateID)) { Log.Warn("CreatePublicationItemBasedPath: Cannot resolve path by this rule", this); return; } else if (item == null) { pathparts.Add("Uncategorized"); } else if (item != null) { date = ((DateField)item.Fields["__created"]).DateTime; if (item.TemplateID.Equals(PublicationItem.TemplateID)) { Item categoryItem = ruleContext.Database.GetItem(item[PublicationItem.FieldIDs.PubCategory]); pathparts.Add(categoryItem == null ? "Uncategorized" : categoryItem[Category.FieldIDs.CatName]); DateTime pubDate = ((DateField)item.Fields[PublicationItem.FieldIDs.PubDateTime]).DateTime; if (pubDate > DateTime.MinValue) { date = pubDate; } } } pathparts.Add(date.ToString("yyyy")); pathparts.Add(date.ToString("MM")); pathparts.Add(date.ToString("dd")); ruleContext.ResolvedPath = string.Join(Constants.ContentPathSeperator, pathparts).ToLowerInvariant(); } } }
Next we define the action in Sitecore. We create an item of the Template “/sitecore/templates/System/Rules/Action” and set the text with “create the folder structure based on the publication item” and the Type to our newly create action.
Now we have our action and we need to setup our Item Bucket Settings.
If we go to our Bucket and hit the sync button then we see our structure working like a charm.
Different rules per language?!?
But there is one thing you should be aware of, the “Rules for Resolving the Bucket Folder Path” isn’t a shared field. You will need to set this rules for every language.
Just one “Item Buckets Settings” item to rule them all?
Out of the box in Sitecore 7.5 there can be only one Item Buckets Settings item, but of you really would like to get the ability of creating multiple settings items you should take a look at the Sitecore.Buckets.Util.BucketFolderPathResolver. You should be able to get all the defined rules in the Item Buckets Settings items you create, combine them to one ruleslist and get them executed against the itemArray. But like Louis van Gaal said earlier “That’s another ‘biscuit’”.
Generic actions for Bucket Folder Path
It would be more efficient if we create generic actions to make this Bucket Folder Path possible. Like we mentioned earlier we needed the value of a field of a referenced item and we needed the value of a DateField from the Bucketable item. We can do this with two separate actions that work based on prepending.
Prepend the value of a field
First we create an action that gets the value of a certain field from our Bucketable item. I our situation we only act when the field is a shared field and in this case we use the value of a DateTime Field. The code we need is:
using System; using System.Collections.Generic; using System.Linq; using Sitecore.Buckets.Rules.Bucketing; using Sitecore.Buckets.Util; using Sitecore.Data.Fields; using Sitecore.Data.Items; using Sitecore.Diagnostics; using Sitecore.Rules.Actions; namespace Sdp.Modules.Publications.Buckets.Rules.Bucketing.Actions { public class PrependFieldValueBasedPath
: RuleAction where T : BucketingRuleContext { public string FieldId { get; set; } public PrependFieldValueBasedPath() { } public override void Apply(T ruleContext) { Assert.ArgumentNotNull(ruleContext, "ruleContext"); Item item = ruleContext.Database.GetItem(ruleContext.NewItemId); IList pathparts = new List () { ruleContext.ResolvedPath }; if (item == null || string.IsNullOrWhiteSpace(FieldId) || string.IsNullOrWhiteSpace(item[FieldId]) || !item.Fields[FieldId].Shared) { return; } if ((item.Fields[FieldId].Type.Equals("date", StringComparison.InvariantCultureIgnoreCase) || item.Fields[FieldId].Type.Equals("datetime", StringComparison.InvariantCultureIgnoreCase))) { DateTime date = ((DateField)item.Fields["__created"]).DateTime; var dateField = (DateField)item.Fields[FieldId]; if (dateField.DateTime > DateTime.MinValue) { date = dateField.DateTime; } pathparts.Add(date.ToString("yyyy")); pathparts.Add(date.ToString("MM")); pathparts.Add(date.ToString("dd")); } else { pathparts.Insert(0, item[FieldId]); } if (pathparts.Count > 1) { ruleContext.ResolvedPath = string.Join(Constants.ContentPathSeperator, pathparts).ToLowerInvariant(); } } } }
For our action definition we use the following:
Now we have our specific Date Field value from our item we still need the value of the Category Name field of the Category Item that our Item is referencing. This is a relatively simple action.
using System; using System.Collections.Generic; using System.Linq; using Sitecore.Buckets.Rules.Bucketing; using Sitecore.Buckets.Util; using Sitecore.Data.Items; using Sitecore.Diagnostics; using Sitecore.Rules.Actions; namespace Sdp.Modules.Publications.Buckets.Rules.Bucketing.Actions { public class PrependTargetItemFieldValueBasedPath
: RuleAction where T : BucketingRuleContext { public string TargetItemFieldId { get; set; } public string FieldId { get; set; } public PrependTargetItemFieldValueBasedPath() { } public override void Apply(T ruleContext) { Assert.ArgumentNotNull(ruleContext, "ruleContext"); Item item = ruleContext.Database.GetItem(ruleContext.NewItemId); if (item == null || string.IsNullOrWhiteSpace(FieldId) || string.IsNullOrWhiteSpace(item[FieldId]) || string.IsNullOrWhiteSpace(TargetItemFieldId)) { return; } Item targetItem = ruleContext.Database.GetItem(item[FieldId]); if (targetItem == null || string.IsNullOrWhiteSpace(targetItem[TargetItemFieldId]) || !targetItem.Fields[TargetItemFieldId].Shared) { return; } IList pathparts = new List () { ruleContext.ResolvedPath }; pathparts.Insert(0, targetItem[TargetItemFieldId]); if (pathparts.Count > 1) { ruleContext.ResolvedPath = string.Join(Constants.ContentPathSeperator, pathparts).ToLowerInvariant(); } } } }
The action definition of this item looks a lot like the action for the Date Field.
Now we can replace the action in our Item Buckets Settings with our two new generic actions.
If we run the sync command on our Bucket we’ll see it work like a charm.
Conclusion
We’re quite happy with the new Item Buckets Settings. There are some things we would like to mention.
- The rules you set are language dependent, the field is simply not shared.
- Bucket strategies do need to be aware of field that are not shared. If you have setup the exact same rule for two different languages and your action uses a field that isn’t shared you could end up with different folder paths depending on which version you are working with. Working with Item ID, Item Name or values of Shared Fields is quite safe.
- There is only one Item Buckets Settings item. We would like to see an out of the box solution where we can use multiple settings items with a certain hierarchy (conditions could interfere with each other).
- If we insert a new Publication Item in our bucket the item isn't created yet when we jump into our actions, this results in the "uncategorized" mode. We needed to add a item:saved event that acts on items based on the Publication Item template and in that action we move the item, again, back into the Bucket. This results in the conditions and actions being checked and performed and we end up with the correct path.
- | https://www.suneco.nl/blogs/item-bucket-settings-creating-awesome-rules-to-rule-your-folder-structure | CC-MAIN-2018-13 | refinedweb | 1,805 | 50.33 |
Used to bring the name of a source code file in line with the type declared in this file. The best coding practices require each type to be declared in a separate file named after the type.
Available when the cursor is on a type name, assuming the file name differs from it.
Place the caret on a class name.
The blinking cursor shows the caret's position at which the Refactoring is available.
After execution, the Refactoring removes the source code file from the project, renames the file and adds the new file to the project.
//Filename: Customer.cs public class Customer { private int CustomerID; public string FirstName { get; set; } public string LastName { get; set; } } | https://docs.devexpress.com/CodeRushForRoslyn/115571/refactoring-assistance/rename-file-to-match-type | CC-MAIN-2019-43 | refinedweb | 116 | 80.31 |
Comment on Tutorial - Write to a file in Java - Sample Program By Dorris
Comment Added by : sa
Comment Added at : 2012-08-02 07:09:30
Comment on Tutorial : Write to a file in Java - Sample Program By Dorris
import java.io.File;
import java.util.Scanner;
public class FileExists {
public static void main(String[] s) {
Scanner scanner = new Scanner(System.in);
System.out.print("Enter the url of a file: ");
System.out.flush();
String filename = scanner.nextLine();
File file = new File(filename);
if(file.exists()){System.out.println("exists");}
else{System.out.println("not exists");}
}
}
output1:
Enter the url of a file: C:\Users\298564\Desktop\sa.txt
exists
output2:
Enter the url of a file: C:\Users\298564\Desktop\asdfs.txt
not think this ist perfectly fine :) if you don't kn
View Tutorial By: julian at 2011-03-07 05:54:08
2. wow........thanks a lot :)
View Tutorial By: shiva at 2015-08-11 07:47:08
3. thanks for this tutorial
View Tutorial By: d3ptzz at 2009-08-03 23:38:57
4. This site is an Excellent Choice for lerners.Thank
View Tutorial By: Sravanthi at 2011-06-28 06:20:35
5. I am geting following exception
javax.comm
View Tutorial By: Raviteja at 2014-06-23 12:46:06
6. i want to open pdf file from sd card and i dont wa
View Tutorial By: Karan Mavadhiya at 2013-01-30 12:02:30
7. method overloading possible across classes????
View Tutorial By: Rohit at 2011-01-26 20:23:02
8. i like this site.good explanation,but it is good f
View Tutorial By: mamata at 2010-08-16 05:22:53
9. hi please help me . i want the full code to send t
View Tutorial By: Prashant at 2008-04-16 06:08:23
10. sir there is no startapp(),destroyapp() and pause
View Tutorial By: Sathish at 2012-08-20 08:49:33 | https://java-samples.com/showcomment.php?commentid=38225 | CC-MAIN-2020-29 | refinedweb | 327 | 60.82 |
Wikiversity:Colloquium/archives/June 2008
Contents
- 1 Short notice - Live astronomy presentation -- NOW!
- 2 Do you want to join the next cycle of Composing free and open online educational resources ?
- 3 Ping bot
- 4 Meeting on IRC about 'Wikiversity learning'
- 5 Support for developing projects
- 6 Categories for templates...
- 7 blocked IP preventing access to good information.
- 8 Sidebar changes
- 9 What shall we do with Wikipedia?
- 10 Guide to Tertiary Education
- 11 Can anyone suggest how to approach these subjects through Wikiversity?
- 12 Custodial flag for Remi
- 13 Student essays on Wikiversity
- 14 freedomdefined.org + free cultural work seal
- 15 Colloquium header
- 16 Wikimedia Foundation's board election (see also sitenotice)
- 17 MediaWiki:Deletereason-dropdown
- 18 Help Wanted: Posting student assignments automatically
- 19 Journals on Wikia
- 20 Finding a reference
- 21 Proposal for better standards
- 22 No licensed images
- 23 Produce clock: opinions please
- 24 extension testing on sandbox coming to a stand still?
- 25 How to watch different posts
- 26 Expert commentaries
- 27 Medical Advice Proposal
Short notice - Live astronomy presentation -- NOW!
Actual feed --Remi 20:27, 1 June 2008 (UTC)
- watching :-) ----Erkan Yilmaz uses the Wikiversity:Chat (try) PS: Tag a learning project with completion status !! 20:30, 1 June 2008 (UTC)
- Had from time to time bandwidth drop outs, but could see the presentation with second life. I am wondering if Mike also watched it ? ----Erkan Yilmaz uses the Wikiversity:Chat (try) PS: Tag a learning project with completion status !! 21:12, 1 June 2008 (UTC)
Do you want to join the next cycle of Composing free and open online educational resources ?
If someone is interested, please join here. Why should you join? The course has already been run once (70 people joined), so we can profit from users who are still available and from old blog posts (20 people stayed until the end). We also have feedback about highs and lows during the 10-week course. ----Erkan Yilmaz uses the Wikiversity:Chat (try) PS: Tag a learning project with completion status !! 22:54, 1 June 2008 (UTC)
Ping bot
I have a question: would it be possible to develop something like a ping bot script? What I mean: I don't have much time to take care of all the projects I am registered in, and I usually browse my accounts only occasionally. But sometimes people reply to you there on a very important matter, and because you are not present you can miss it. I know that there is something like an e-mail announcement system, but hundreds of e-mails to open, hundreds of e-mails to read - uf. So what about a bot which will ping me here on e.g. Wikiversity? Look at this, it was very helpful: [1].--Juan 10:11, 1 June 2008 (UTC)
- As I know it, email notices are turned off outside meta:, due to server load. But it is not too difficult (if it hasn't been written already) to write such a script in the meta:python wikipedia framework for yourself. You can run the bot every day and scan all the projects to check if your user talk page has been changed. I can help, and if you have any more problems, you may ask filnik or other folks on botwiki:. Hillgentleman|Talk 14:56, 1 June 2008 (UTC)
OK, thank you for your help. It looks like my first bot.:-)--Juan 06:53, 2 June 2008 (UTC)
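A minimal sketch of what such a ping bot could look like, using only Python's standard library and the public MediaWiki API (the site and page names below are placeholders, not real configuration): it fetches the timestamp of the latest revision of each watched talk page and compares it with the timestamps recorded on the previous run.

```python
import json
import urllib.parse
import urllib.request

def fetch_last_edit(site, title):
    """Ask the MediaWiki API of `site` (e.g. "en.wikiversity.org") for
    the timestamp of the latest revision of the page `title`."""
    params = urllib.parse.urlencode({
        "action": "query", "prop": "revisions", "titles": title,
        "rvprop": "timestamp", "format": "json",
    })
    url = "https://%s/w/api.php?%s" % (site, params)
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # The result is keyed by page id; we asked for a single page.
    page = next(iter(data["query"]["pages"].values()))
    return page["revisions"][0]["timestamp"]  # e.g. "2008-06-01T10:11:00Z"

def changed_pages(last_seen, current):
    """Compare the timestamps saved on the previous run against freshly
    fetched ones.  ISO-8601 timestamps compare correctly as plain strings.
    Pages never seen before are reported as changed, too."""
    return [title for title, ts in current.items()
            if ts > last_seen.get(title, "")]
```

The fetching half needs network access; the comparison half is pure, so the bot can be dry-run against saved timestamps. A page reported by changed_pages() would then trigger the actual "ping" - for example, an edit to your local talk page.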
Meeting on IRC about 'Wikiversity learning'
Hi all - I'd like to rekindle a meeting that never happened - see Wikiversity:Meetings#New suggested time and date - perhaps this Tuesday @ 21:00 UTC? Please indicate if you would be available, or would prefer a different time/date. Cormaggio talk 14:39, 1 June 2008 (UTC)
- The meeting today is at 16:00 UTC, and more feedback is needed to confirm another one for Saturday - see above links. Cormaggio talk 12:31, 3 June 2008 (UTC)
- A further update. The meeting on Tuesday was, I think, very interesting and productive - thanks to all who participated! You can see a summary of the meeting here and the full log here. There's another meeting planned for Saturday at 18:00 UTC (for roughly 1-1.5 hours) - if you could indicate whether this is ok for you on Wikiversity:Meetings, or whether we should change the time/date, that would be great. One of the outcomes of Tuesday's meeting was to develop a page on Collective learning (thanks to User:Απεργός for initiating this). There were also other themes that need to be further discussed - we can plan an agenda on the meetings page. Hope to talk to you then. Thanks, Cormaggio talk 15:53, 5 June 2008 (UTC)
Support for developing projects
Just now I have read this: "On Wikiversity, there are too many startups of only one person who is waiting for others to join." ([2], Daanschr). I think it would also be nice to support developing projects - and why not do it on the Main page, or close to the MP? Right now the Main page has the Featured content box, which is of course advertising for the projects, and also for Wikiversity. There is also a higher possibility that someone will join those projects. But with my idea I would like to invoke Liebig's Law of the Minimum: that the limiting nutrient (i.e. the nutrient least available) determines the yield. This principle was described for plant nutrition, but today it is generally applied in more areas. It is usually there, where you need to develop something, that you receive the fruits.--Juan 07:07, 2 June 2008 (UTC)
- Thanks for your contribution, Juan. You are correct that we need more means of promoting content development than just the featured project box. For example, to respond to the problem you raise, we would need a development drive (of the week, month...), much as other Wikimedia projects have. Obvious initial candidates for development drives would be French and Spanish language resources, which are greatly desired. Such drives might help promote the collective development of resources. To go into the wider implications of your post, experience shows us that Wikiversity content development is very much more individualistic (and less collectivist) than those who dreamed up the project were hoping. This is an experiential observation: content development on Wikiversity is in fact driven by individuals (even when these individuals are proponents of collectivism). There are a number of causes of this. One is the ability to productively fork, which reduces the pressure on people with similar interests to cooperate. Another is the size of the community. Because development is largely individualistic, unless those individuals are prepared to push their own projects forward with their own time/classes/employees, then projects will tend to remain in a state of suspension. But this is all part of the wiki way - we observe how the wiki functions, and then we react to this and find a positive way to build on these observed foundations of human behaviour. --McCormack 07:25, 2 June 2008 (UTC)
- A random analogy with the financial industry: wikiversity is still improving her efficiency as a marketplace of knowledge; even with the recent work of many folks (e.g. McCormack) we are still not very efficient in matching the seller (editors, and writers and resources) with the buyer (those who are - or may be - interested in the resource). Hillgentleman|Talk 07:46, 2 June 2008 (UTC)
- These are all very interesting comments. I would say that, to extend Hillgentleman's point, we need a way of advertising not just pages/projects which need content development (just as WP, WB etc do), but also learning projects that need co-learners. (Actually, I think that was precisely Juan's point, and Daan's original frustration.) It might be a worthwhile project to take up McCormack's point, and see how 'productive forking' might lead to individualism - and explore whether there might be other models for working that foster collaboration (while all the time asking how both 'individualism' and 'collaboration' can support learning). Cormaggio talk 09:10, 2 June 2008 (UTC)
- We might repurpose both the "development" and "community" boxes on the main page for "development drives" and "collaboration drives" (the latter promoting projects in need of co-learners), possibly using the same kind of rotation systems the POTD and featured project boxes do, but possibly working on a different time scale - i.e. weekly changes rather than daily ones. We could also start tagging resources with "co-learners needed" (bare categories, or project boxes, depending on taste) in preparation for setting up such systems. --McCormack 11:31, 2 June 2008 (UTC)
- Sounds good to me.--Juan 15:13, 4 June 2008 (UTC)
- The purpose of the sample box at the right is that it categorises the resource into Category:projects needing co-learners, which in turn can be used to select content for collaboration drives. The project box can be developed further with a hyperlink to a help page which describes to new users how to get involved with existing learning projects. [Remember: if you don't like the box, the icon and text can be modified. It's just a quick sample.] --McCormack 12:00, 2 June 2008 (UTC)
- Well, I am not sure if this is a good start. Remember that Wikipedia started with a 5-level categorization according to content quality: Substub - stub - article - Good article - Featured article. And what happened? The stub categories came to include thousands of pages, so recently some Wikipedias have been deleting the stub status. Is a stub on wp equal to "alone" on wv? Is it possible that the same will happen here?Juan 15:13, 4 June 2008 (UTC)
- It's a really good start - thanks! We could also start to organise this by subject, in much the same way as the stub-sorting mechanism on Wikipedia becomes an invitation for people to contribute to underdeveloped articles in areas that they are interested in. Cormaggio talk 12:12, 2 June 2008 (UTC)
- That's a question: does the stub system help and motivate people to work on it? I don't know if it works on English wp, but many small-scale projects find it useless; as I heard recently, the folks on de are leaving this system. And remember, even though we are en, there is still a lower number of participants - so the conditions are similar to the small-scale projects of wp.--Juan 15:13, 4 June 2008 (UTC)
- I would like to help set up a development drive, as McCormack suggested. The development drive on Wikipedia has mostly collapsed, so we need to find ways to make it an engine that can grow, instead of one that will slowly pass away.--Daanschr 08:33, 7 June 2008 (UTC)
Categories for templates...
How do I create a category for templates which will not be included in the pages that I use the templates in? I need to have a list of my templates, not the templates plus the pages that they are used in. See Category:Film School Templates. It is very hard to find the templates inside the list of the pages. Robert Elliott 12:41, 8 June 2008 (UTC)
- Hi Robert. You can prevent the category from appearing on pages where the template is used by using:
- <noinclude>[[Category:X]]</noinclude>.
- Likewise you can make the category appear only on the page the template is used on but not on the template itself by using
- <includeonly>[[Category:Y]]</includeonly>
- I added noninclusion tags to one template here, as an example. --SB_Johnny | talk 12:57, 8 June 2008 (UTC)
- Got it. Thanks! Robert Elliott 13:16, 8 June 2008 (UTC)
blocked IP preventing access to good information.
[blah]45.173/pls/portal30/CATALOG.GRANT_PROPOSAL_DYN.show - Shall we remove this ip from the filter list? --Emesee 21:25, 8 June 2008 (UTC)
- That particular IP is not blocked. There is a block on typing just any random IP as an url. You can use instead. I don't know why the link to that page from the CFDA home page uses the raw IP address. It is not very good form. --mikeu talk 02:27, 9 June 2008 (UTC)
Sidebar changes
There has been a lot of discussion of the sidebar over the last few months, especially the random link. The main problem with the random link is that most of Wikiversity's content consists of subpages - i.e. well-developed projects tend to have something between 10 and 100 subpages, sometimes up to 700 or so. That means that for every 1 project homepage, there are dozens or even hundreds of subpages. This has a significant effect on the value of a random link. The random link was originally programmed into Mediawiki for projects like Wikipedia which do not have subpages. The random link (and its variants) make no sense on a wiki which has subpages. The upshot of a lot of technical discussion has been that a switch function applied to a (wide) selection of project homepages might be the best solution. Implementing this solution was delayed, partly because a sufficiently long list of reasonably good content was lacking. However the work done on Wikiversity:Featured has thrown light into many dusty, dark and forgotten corners of Wikiversity, so I have begun to put together a list from which a random project homepage can be drawn. The outcome of this has been Wikiversity:Random. It's not ideal, but it is a relatively good cludge given the restrictions of the Mediawiki software. I've put this onto the sidebar so that people can really experience what it is like. Compare with Special:Random/Topic (click many times) to get a feel of the difference between new and old. I've also add in the guided tours to the sidebar, which have been under development for some time now. Comment is invited. --McCormack 09:20, 15 June 2008 (UTC)
- Fantastic, nice work; I think this will raise the quality of the user experience. -- Jtneill - Talk 10:44, 15 June 2008 (UTC)
- BTW, do you think the criteria for selection into "Random" is a bit lower than the selection for "Featured"? -- Jtneill - Talk 10:45, 15 June 2008 (UTC)
- I deliberately lowered the bar. By the end of the year I'd hope we have about 20-30 featured resources for the main page, and about 100-150 semi-featured ones which can go into Wikiversity:Random. How we deal with this long-term depends on whether or not we can move into exponential rather than linear growth. --McCormack 11:49, 15 June 2008 (UTC)
What shall we do with Wikipedia?
The community may like to think about What shall we do with Wikipedia? --McCormack 05:02, 22 June 2008 (UTC)
Guide to Tertiary Education
This box on the front page of the Wikiversity tertiary education portal is very confusingly structured, with some pages (like level_of_measurement) being completely out of context on the front page. I'd edit it, but it said I'd have to edit the entire structure of something or other, and that didn't sound like something I wanted to mess up. (The preceding unsigned comment was added by 128.114.250.29 (talk • contribs) 04:03, 15 June 2008.)
- This is a good point - I think what is meant is the Portal:Tertiary Education page, which lists contents of the Category:Tertiary Education. I have been (perhaps over diligently?) adding Template:Tertiary which adds Category:Tertiary Education to pages, but am thinking perhaps I shouldn't (at least) be adding subpages, just non-subpage pages. Plus, perhaps I shouldn't be adding stub-like pages? Appreciate any suggestions. -- Jtneill - Talk 00:36, 15 June 2008 (UTC)
- There's an issue of how best to subdivide automatically generated categories like Category:Tertiary Education. Possible courses of action: (1) Template:Tertiary actually adds stuff to a subcategory of Category:Tertiary Education such as Category:Tertiary Resources in Alphabetical Order; (2) One uses the titleparts parser function to identify subpages and place them in categories named after their parent pages, rather than placing them in a major category; (3) we do some better manual reorganisation of Category:Tertiary Education. Of course, if Wikiversity was able to have its Mediawiki extensions activated by the Wikimedia Foundation, then we could programme a few little widgets to help, but, ummm, "that's just not going to happen", as they say. --McCormack 05:58, 15 June 2008 (UTC)
- These ideas all sound promising. I'm not sure how this works in with templating/project boxes, but some maybe useful "categories" might include:
- Tertiary courses
- Tertiary learning resources
- Tertiary learning projects
- Is there any "smart" (magic) way of combining two categories e.g., Tertiary education + Courses? I haven't really thought through this at all clearly, but wanted to reply. -- Jtneill - Talk 13:16, 17 June 2008 (UTC)
- The smart or magic way would be a bot. For example, a bot trawls through all WV pages looking for pages which are both "tertiary" and "psychology", and then adds an additional "tertiary psychology" category to each of those page (it could also remove the basic tags at the same time). However I don't do bots. Try User:Darklama or User:Hillgentleman. We probably won't need bots like this for a few years yet, though. But it's worth noting now that it is retrospectively doable, so we don't need to worry now about restructuring how we do things. --McCormack 09:53, 23 June 2008 (UTC)
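The decision half of such a bot is simple enough to sketch in pure Python - a hypothetical illustration only (the category names are taken from the discussion above; the actual page reading and saving via a bot framework is omitted):

```python
def combined_category(categories, base="Tertiary Education", subject="Psychology"):
    """Return the combined category a bot should add when a page carries
    both base categories, or None if the page does not qualify."""
    if base in categories and subject in categories:
        # e.g. "Tertiary Education" + "Psychology" -> "Tertiary Psychology"
        return "Tertiary " + subject
    return None

def plan_edits(pages):
    """pages maps page titles to their current category lists.  Return
    the edits one bot run would make: title -> category to add."""
    edits = {}
    for title, categories in pages.items():
        combo = combined_category(categories)
        if combo is not None and combo not in categories:
            edits[title] = combo
    return edits
```

A real bot would wrap plan_edits() around the page reads and writes (e.g. via the python wikipedia framework mentioned earlier in this archive) and, as noted above, could also remove the basic tags in the same pass.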
Can anyone suggest how to approach these subjects through Wikiversity?
Newbie: Just got into the site and am interested in discussions / forums on the real estate business, market, mortgages, etc.... Can anyone suggest how to approach these subjects through Wikiversity? (The preceding unsigned comment was added by Foolsgold (talk • contribs) 16:33 2008-06-23.)
- Hi, a warm welcome to you. You can start by using the search, e.g. Special:Search?search=real+estate&go=Go. Then you can find pages like: Topic:Real estate. So, find what already exists here, look at it, contribute to it, look at the version history to see who else has contributed, and contact them - Be bold. Besides that you will now get a welcome msg with more info about Wikiversity. I hope this reply is satisfying for you? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 16:41, 23 June 2008 (UTC)
Custodial flag for Remi
Hi, I would like to give Remi another custodial flag. What do you say ? ping, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 16:52, 23 June 2008 (UTC)
- I read the discussion so far about this. I'm a bit confused about why a custodian might use multiple accounts. For anti-vandalism activity I can understand might be one good reason. Are there others? Is there precedent? And how do WP and other sister projects handle such requests, and what is their policy? I am really just asking here for my own understanding, not offering opinion/viewpoint because I don't know enough. -- Jtneill - Talk 04:25, 24 June 2008 (UTC)
- I believe it is conventional for custodians/sysops on Wikimedia projects to have an additional non-sysop account if they wish. Often the non-sysop account is used for content contribution and opinion-mongering (talk pages), while the sysop-account is used for sysop-actions only. The idea behind this (at least on Wikipedia, where goofing up or overreacting can be taken very seriously) is that misbehaviour as an editor doesn't get mixed up with misbehaviour as a sysop, the latter being something which could be used to have sysop status withdrawn. Sysops are open about the existence of these accounts (if they are good). On Wikiversity, some sysops have additional accounts because they log on from different locations and forget their password (written on the wall at home). --McCormack 07:16, 24 June 2008 (UTC)
Student essays on Wikiversity
I'm also looking for a format for running my social psychology class next semester. Last year I had students present their essays via blogs. This year, I'm thinking of having them present their essays here on WV. There are ~80 students, so I need to find a way for wiki novices to author academic essays here. They will all be on different topics, and they can help one another, etc. Appreciate any ideas or examples. -- Jtneill - Talk 03:26, 19 June 2008 (UTC)
- My guess is that you've already looked at Help:essay and followed up links to User:SRego's project. There have been a few others like it, all with slightly differing approaches and volumes. One thing that still needs to be done is to create some essay boilerplate, which specific teachers can then develop into a course-specific boilerplate. With 80 or so students, your biggest problem will probably be class control, but User:SRego has over 100, so it's been done. How wiki-savvy will your 80 be when they first hit WV? --McCormack 08:37, 19 June 2008 (UTC)
- No, I haven't looked at those links yet (closely)... thanks! Will check out. I'm wondering whether the essays might be better developed as course subpages or user subpages... any thoughts? -- Jtneill - Talk 05:29, 20 June 2008 (UTC)
- I'd follow User:SRego and use the main namespace. Set up a course page, add a whole bunch of (red) links to it for individual essays, and then give your class pretty tight instructions about what to do. You can always clear up the mess afterwards, but I reckon the better your initial instructions are, the less clearing up you'll have to do. Oh - and afterwards, write an experience report for us! User:SRego didn't leave us with a report of how he organised things on the ground. It would be good if we had a little collection of class wiki-outing reports for future course instructors to look at. --McCormack 14:45, 20 June 2008 (UTC)
- That's all helpful advice, thanks; mainspace ... now why didn't I think of that? :). I like it. I'm underway with setting this up here: Social_psychology_(psychology)#Essay topics. -- Jtneill - Talk 16:05, 20 June 2008 (UTC)
- Don't forget my question "How wiki-savvy will your 80 be when they first hit WV?" And here's another thought: if you have 80 people simultaneously editing WV, then recent changes will look like a riot, so you'd better have some way of identifying your students (common username prefix/suffix?). --McCormack 16:13, 20 June 2008 (UTC)
- It is safest to assume they will be complete wiki novices - I am expecting the unexpected :). But I will be encouraging them to do a "tutorial" / guided tour - although I haven't quite worked out where/which ones (open to suggestions and to creating a custom tour/tutorial). And I hope to mentor along the way. With "taking over" recent changes... hmmmmmm.... Do you think a common username prefix/suffix would help much? I was thinking to let them register any user name, and perhaps 5% of them might become longer-term contributors. -- Jtneill - Talk 03:08, 21 June 2008 (UTC)
- In the User:SRego case, although each of 100+ students probably did way over 100 edits within 2 or 3 days, not a single one (to my knowledge) ever returned. It would be interesting from a wiki-research point-of-view to be able to track user retention from class wiki-outings - i.e. produce confirmation of a %-retention level. To do this, a prefix/suffix would make things easier - alternatively, a page on WV which lists your students' usernames. As regards tutorials for novices, we really need good ones, divided by level rather than topic, as other Wikimedia projects do. Can you make one? You could steal from: --McCormack 04:48, 21 June 2008 (UTC)
- It occurred to me while reading this that one way to distinguish students of your class while still giving them freedom of expressive nomenclature is to have them all add a color to their name. You could give them the copy-paste code to make all their usernames show up in green or something. Simple, but also effective. Comments? -Serge76.126.218.12 06:57, 25 June 2008 (UTC)
- Nice idea, but colours only show when signing. One really needs identifiable class usernames for things like "recent changes", page histories and contribution logs - and colouring doesn't work there. One class already uses the prefix technique - some of the Technical writing people use the prefix TW. --McCormack 07:04, 25 June 2008 (UTC)
Maybe you want to check out also this. Another note: Your students' essays will be published under the GFDL licence, so they have to agree to it. It may seem a minor detail, but you should clearly explain this side as well.
To that wiki-novice tutorial... You could explain the absolute basics in 10 minutes of live presentation so they might ask some questions (maybe supported with a PowerPoint pps?). You know, just how to register, create internal links and headings, and put the essay into the correct category... Other technical things they will learn along the way, or ask via talk pages or instant messaging clients...--Gbaor 05:17, 23 June 2008 (UTC)
More material at: w:Wikipedia:Instructional material/MediaWiki training videos + I found this on YouTube :D --Gbaor 05:26, 23 June 2008 (UTC)
freedomdefined.org + free cultural work seal
I'm curious about. Is this something for us to explore, embrace, use? Or are there some underlying politics or issues here? Like I say, curious to know more. Also, is the free cultural works seal meaningful and useful here or elsewhere? How does it relate to GFDL and CC-A? (). Since it's for CC and public domain, is it also for GFDL? Otherwise, it may be nice to have something equivalent for GFDL? If I added something like this colourful seal to unit materials, I think it would help to accelerate cultural change amongst my colleagues and students towards open educational resources. What do you think? -- Jtneill - Talk 04:32, 24 June 2008 (UTC)
- The site main page is old and hasn't been edited in a long time. I'd rather we invented our own "seal" ("these course materials featured on WV" with pretty pic). As regards Wikiversity development, we should have our sights set on projects which are much larger than ourselves and which can help us expand - such as Wikipedia. See: What shall we do with Wikipedia? We are few, so we must focus. --McCormack 07:21, 24 June 2008 (UTC)
- Hi Jameses :-), the definition of "free cultural works" has been explicitly recognised by the WMF - see the licensing policy resolution. On GFDL and CC licences, there is ongoing work between WMF, CC, and FSF on their harmonisation or practical integration - but we are still awaiting any official notice. On seals/buttons/logos, I'd prefer them to link to something on Wikiversity which explains free content/culture, eg Open Educational Resources, and which can then link to the above definition. Cormaggio talk 10:36, 24 June 2008 (UTC)
- Thanks for the comments guys; maybe we could tweak something like [3] to use the WV logo?? -- Jtneill - Talk 11:25, 24 June 2008 (UTC)
- Well, if you're thinking of borrowing and modifying their images, you'll note that they have failed to attach any licence details to the button and logo images. In Wikimedia projects, that would be reason for deletion. In general, it means you can't use them. They have to explicitly licence the images under an open licence before they can be re-used. Note for comparison that just about the only thing which is copyright at Wikimedia are the logos! --McCormack 09:02, 25 June 2008 (UTC)
- Ach. You could at least assume that it's free content - and the default licence on the site is CC-BY-2.5. If you really feel particularly twitchy about it, you could contact the author. Cormaggio talk 09:14, 25 June 2008 (UTC)
- ;) It's true that I am a stickler for proper licencing declarations. Mind you, sites which propagate any form of licencing should themselves set a high standard. Anyway, a useful page for jtneill would be commons:Category:Wikimedia logos (and its subdirectories and parent directories). There's loads of useful artwork relating to Wikimedia on Commons, all superbly tagged with permissions status. --McCormack 09:42, 25 June 2008 (UTC)
Colloquium header
I got a bit bold and shifted static content from the top of this page into Wikiversity:Colloquium/Header. Could some people check this edit. In particular, I'm a bit concerned about whether the autoarchive code will work properly when transcluded? -- Jtneill - Talk 11:19, 24 June 2008 (UTC)
- Yes - you asked before ;-) --McCormack 07:27, 25 June 2008 (UTC)
- Oops - I reckon every 20th or so "save" nothing happens; so I re-save - anyway, duplicate version of this message removed. -- Jtneill - Talk 08:49, 25 June 2008 (UTC)
Wikimedia Foundation's board election (see also sitenotice)
Last chance ! Voting is still possible until 23:59 21 June 2008 (UTC):
Special:Boardvote (see the requirements), ----Erkan Yilmaz uses the Wikiversity:Chat (try) 23:27, 20 June 2008 (UTC)
- Hi Erkan, I never voted before, so thanks for your encouragement. I'm not quite sure how? Is there a sense of which candidates might be good for Wikiversity and WMF? -- Jtneill - Talk 03:17, 21 June 2008 (UTC)
- Hello James, you can see candidates' views about Wikiversity here. At least the ones who gave a response. But unfortunately I cannot tell you which of them has Wikiversity's best interests at heart. The future will show.
- But at least I think people who are able to vote should not waste their possibility to do so. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 07:56, 21 June 2008 (UTC)
- OK, I did it! Interesting process, always wondered how it worked. When I tried to vote from Wikiversity, I got the message "You are not qualified to vote in this election. You need to have made at least 600 contributions before 00:00, 1 March 2008, and have made at least 50 contributions between 00:00, 1 January 2008 and 00:00, 29 May 2008." So, I voted instead from wikipedia. Based on the candidates' statements about Wikiversity I prefer: Craig Spurrier[4], Jussi-Ville Heiskanen[2], Paul Williams[7], Matthew Bisanz[6], Ad Huikeshoven[10], Ray Saintonge[5], Alex Bakharev[8], Gregory Kohs[7], Ryan Postlethwaite[100], Kurt M. Weber[100], Dan Rosenthal[1], Samuel Klein[100], Steve Smith[100], Ting Chen[15], Harel Cain[100] -- Jtneill - Talk 09:27, 21 June 2008 (UTC)
Hello James, when I look at our top5/6, there is a pretty good match :-), ----Erkan Yilmaz uses the Wikiversity:Chat (try) 11:02, 21 June 2008 (UTC)
- btw: here you can see who voted from which project (results are not published, this is based purely on your free will), ----Erkan Yilmaz uses the Wikiversity:Chat (try) 11:25, 21 June 2008 (UTC)
- I just revoted, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 18:01, 21 June 2008 (UTC)
meta:Board elections/2008/Results/en, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 20:38, 26 June 2008 (UTC)
- A nice WP Signpost report on the results: w:Wikipedia:Wikipedia Signpost/2008-06-26/Board elections. -- Jtneill - Talk - c 00:30, 28 June 2008 (UTC)
MediaWiki:Deletereason-dropdown
I just discovered this page, which I think was introduced in a MW upgrade at the end of last year. We might be just about the only project which has failed to add to the default reasons. I've added some of the most obvious missing ones. Comments and suggestions? --McCormack 03:03, 28 June 2008 (UTC)
Help Wanted: Posting student assignments automatically
I need some programming help. I need a way to accept and post student assignments.
The film school has been accepting students for one year now (even while the course is being completed.)
Already, I have more students submitting homework than I can post. Manually, it is a very slow process.
What I need is an automated exam and submission system, sort of like a cross between a Quiz and a very simple learning management system.
As a test, I have set up one student's page with a possible format for homework assignments which hopefully can be programmed into Wikiversity. Look at Tonto Silver's user page. Each box is a completed assignment. (I have to upload all of this data for each student. Tonto Silver is only 1/3rd of the way through the course.) You can see the huge amount of work for the instructor if this is not automated. Robert Elliott 14:59, 2 June 2008 (UTC)
- Quick comment: It depends on the format of your input data. If it is in a text file and simply formatted, there is a chance that somebody could write a script to automate the process. In any case, try using templates (substituted or not) more to reduce having to do everything over and over again. Hillgentleman|Talk 20:33, 3 June 2008 (UTC)
- Quick Question -- Date vs. Current date
- I am still not very familiar with templates. I tried some features but when I created a template with a date, the final result always showed the current date, not the date that the OGG file was uploaded. How do I use a DATE in a template to get the date when the template was used, not the current date? Robert Elliott 09:59, 6 June 2008 (UTC)
- This is a complex project. Just to get my head around it, how would it be automated (generally, not technically)? Would it be automated from the student's side (ie "click here to upload second assignment"), or from the tutor's? What processes would it need to include (eg student submitting assignment sends message to tutor)? Also, thinking broadly, would it help to bring in other people to help you with the evaluation and other administration of the course? I have some experience with filmmaking (and I'll be teaching a Master's course in educational video next year) - I could offer some time (though I'd have to rationalise it somewhat!) But maybe you could elicit help from other websites, institutions..? Back to the programming side, I'd really like to find ways of mobilising a developer community around Wikiversity's technical needs in general - whether inside Wikiversity, or in the wider developer community - this seems like something which, with some clearly defined needs, we could start such a process... Cormaggio talk 10:18, 5 June 2008 (UTC)
- Creating an example
- To make this clearer, I will create an example of "before" (the lesson page) and "after" showing the final box placed in the student's about me page and on the finished assignment page.
- I have very specific needs but I think I can create a general format that might work for any lesson which an instructor/coordinator/facilitator assigns to students. Robert Elliott 10:32, 6 June 2008 (UTC)
- The method
- Basically, I am trying to use the QUIZ feature to upload answers to a student's about me page as well as the completed assignment page for that lesson.
- Currently the existing QUIZ feature assumes that there are correct answers and wrong answers.
- However, for my courses, each student's answer is always correct (i.e. "Which picture do you prefer? -- A, or B, or C?") and all I need to do is record the answer and show it to the other students.
- Currently, the QUIZ function does not show the answers from one student to the other students. That is what I need changed.
- I want a new QUIZ function so people can share their ideas, not just to test their knowledge of facts.
- As you see in the quiz above, the current QUIZ function does not allow ideas to be recorded and viewed by others. I need to find a way that each answer can be shared. Robert Elliott 10:46, 6 June 2008 (UTC)
Horrible experience. The quiz box told me "every answer is a correct answer". I thought I was sending my opinion somewhere.--Juan 18:48, 6 June 2008 (UTC)
- Reply
- Yes, you are correct. The Quiz function does not send your opinion somewhere. I want to correct this. Rather than a single answer (as in this example above), it would be better to allow the student to enter an answer and have it posted automatically. (Currently, 95% of my students do not know how to post an answer using the Wiki language. The Quiz function does not post answers.) Robert Elliott 21:25, 6 June 2008 (UTC)
- Robert Elliott, From what I understand, what you want is possible with the option called "preload". There are several ways to do it; the easiest is the input box: Try
- (See meta:help:inputbox for more details. )
- -- Hillgentleman|Talk 05:20, 20 June 2008 (UTC)
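- A hedged illustration of the inputbox-plus-preload approach described above (the template names and page prefix here are hypothetical placeholders that a course would need to create, not existing pages):

```wikitext
<inputbox>
type=create
preload=Template:Film assignment preload
editintro=Template:Film assignment intro
buttonlabel=Submit your assignment
prefix=Film School/Submissions/
</inputbox>
```

When a student types a title and presses the button, a new page opens pre-filled with the boilerplate from the preload template, so students need no knowledge of wiki markup to post a correctly formatted assignment.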
- Help wanted
- Yes, I am always looking for help from people who know both film scoring and dialog editing (editing picture based on the words of a script, with emphasis added by using "L-Cuts"). All of my courses deal exclusively with editing scripted dramatic conversations while using symphonic musical sound effects/melodies/motifs to create the mood. Robert Elliott 09:59, 6 June 2008 (UTC)
- If students are taking a filmmaking class, they are probably pretty bright to begin with. Perhaps they could just upload/convert assignments themselves? Maybe they could upload the assignments as jpegs. How might that be? --Emesee 22:57, 6 June 2008 (UTC)
Journals on Wikia
I'm trying to understand the reality and possibilities for Wiki Journals, and more specifically what the role/relationship of Journals on Wikia compared to Wikiversity is? e.g., I was looking at. I think I'd prefer to contribute to and to promote such Wiki Journals here on Wikiversity. Any suggestions for how I might learn more? -- Jtneill - Talk 03:13, 19 June 2008 (UTC)
- Absolutely - the idea was always to encourage journals to flourish here on Wikiversity - the Wikia versions had simply been established before Wikiversity was launched, or as its launch was seemingly endlessly in discussion. We have Wiki Journal and Portal:Wiki Scholar, which could both use some adaptation and expansion. One of these, or a new page, could act as a hub for a number of journals, each focused on a particular theme. So, when you ask to learn more, is it enough to simply say: "be bold"? Cormaggio talk 09:56, 25 June 2008 (UTC)
- Well, I was thinking of writing something for. For me it is more international to write about issues of, let's say, cs on academia.wiki than here on en.--Juan 12:30, 28 June 2008 (UTC)
Finding a reference
I am writing a lesson and I want to include a fact that I know to be true but I was hoping to include a reference for it. I have been searching but I can't seem to find one. Is there somewhere on wikiversity that I could post a request for a reference? Go raibh mile maith agaibh 14:08, 20 June 2008 (UTC)
- How about right here? :) Otherwise maybe try the page's talk page, or the associated School: or Topic: talk pages. -- Jtneill - Talk 15:19, 20 June 2008 (UTC)
- We post references here, or you might try Wikiversity:Featured. Wikiversity:News is probably for something different.--Juan 12:37, 28 June 2008 (UTC)
Proposal for better standards
I have found it very difficult recently to navigate my way around the site. I think I am right in saying that the main shared goal in this community is that it becomes and remains a free and easy-to-use educational resource for all. The problem is, however, there are a lot of pages with namespaces, such as portal and school, that, by their definitions, don't really belong to them, which just confuses things even more. There are a lot of pages unlinked, uncategorized and unnecessary. I would like the community's opinion on an internal audit in which the goals of the community will be identified and compared against our current position. Problems can then be identified and solutions proposed. Then we can set up a system to maintain the new high standards without jeopardizing the ethical principles upon which the community is founded. There is a lot to be done, and everyone can play their role. We need someone with credible expertise to lead the audit and a lot of volunteers to play a part in the process if we want it done quickly. After this process I would envisage a system of quick navigation and functional links, allowing for higher productivity among editors and admins. --Go raibh mile maith agaibh 02:00, 22 June 2008 (UTC)
- Wikiversity:Vision 2009 is a form of internal audit, and is definitely the right place to find out what we are doing about this problem. It is a problem which has been at a centre of interest for about 3 to 6 months now, with a lot of activity. --McCormack 05:01, 22 June 2008 (UTC)
- I just had a read, it's exactly what I was talking about but beyond my expertise. I am willing to lend a hand in any area. I know it's not very "wiki" to give and receive orders but I want to help lighten the load, so please give me orders! You guys have been here longer than me and have been working on the vision 2009 for some time too. I am currently working on the school of medicine and want to implement the ideas of the overall community but it is difficult for me to understand all the language that is being used. I want to help the overall community as well as help develop learning resources in my own area of interest. If there is anything I can do to help, please let me know. Go raibh mile maith agaibh 14:34, 22 June 2008 (UTC)
- Hi Donek, and thanks for your enthusiasm. You can also talk to me on my talk page if you want to get down to details. If you want to do something to help, the best way is to find whatever you'd most like to fix, and start fixing it. Make a short list of what you'd most like to see, and then post it on my talk page. If you don't know how to do any of it, then I'll show you and help you. If you make a mess, I can clear it up, so don't be afraid of making a mess - just keep me posted on your progress. --McCormack 09:48, 23 June 2008 (UTC)
- We were recently talking about similar problems on cs. See: cs:Wikiverzita:Koncepce (in English-GT) and cs:Wikiverzita diskuse:Koncepce (in English-GT).--Juan 12:47, 28 June 2008 (UTC)
No licensed images
What to do with non-licensed images such as Image:Cost Analysis of IGCC.JPG?--Juan 12:34, 28 June 2008 (UTC)
- [4], add a comment on user's talk page. Do you want to continue checking the user's other media files ? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 12:43, 28 June 2008 (UTC)
Well, I have noticed others on his user page. I am just posting this example here, because I think it could be risky to host non-licensed images and files.--Juan 13:32, 28 June 2008 (UTC)
- Indeed - I will then check the user's other files. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 13:38, 28 June 2008 (UTC)
- Also other files had no licence. User was also emailed, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 13:46, 28 June 2008 (UTC)
Produce clock: opinions please
I'm hoping to have time to set up a "Produce Clock", vaguely following the bloom clock's schema, but this will be much less complicated (since it presumably won't need identification keys and so on). See User:SBJ/Produce clock for initial notes.
The idea is for people to be able to track when "fresh, local produce" is available (see w:Locavore), either as crops in gardens or on farms. I think this would be useful for people trying to cut carbon footprints, gardeners planning a long-season veggie garden, or just people who go gaga for very fresh veggies.
I guess I have a couple questions about doing it though. First, I'm not sure this clock is really so much research as it is "a learning project with homework", where the homework assignment is to report what fresh local produce is in the garden or at the farmstand. Should it be characterized as a research project?
Second, I'd like to actively recruit farmers as participants. One "enticement" might be to have a template on the contributors' page that adds information about their farm (type of farm, growing method (e.g., "organic", etc.), location, website, perhaps even phone numbers). Would that be problematic?
Third, I'm just curious whether this would be a more attractive way to introduce the whole "clock" concept, as opposed to the somewhat more esoteric bloom clock. There will still be some template usage involved, but not nearly to the extent that's required for the bloom clock (since, again, there will be less worry about identification keys).
BTW, speaking of keys: I'm interested to see if they're working for non-plant people. Try going outside, look for a flower that you don't know the name of, and see if you can use the Late Spring key to find it! --SB_Johnny | talk 14:42, 28 June 2008 (UTC)
extension testing on sandbox coming to a standstill?
As wikiversity promotes "learning by doing", couldn't we just go out and randomly test mediawiki extensions for short periods of time and encourage the wikiversity community to come and try their hands on them? That would promote both wikiversity and the sandbox server. I know that, for example, the semantic wiki is cool but to run it we would need to extend the database (but it is worth it...); on the other hand, there are simpler things which we could play around with. - Hillgentleman|Talk 09:33, 23 June 2008 (UTC)
- As my "trial" with the subpagelist extension showed, the Wikimedia Foundation is currently unable to administer extension approval for Wikiversity. I do not know why. I do know that most of the main developers are very helpful and prompt. I do know that we have people willing to do programming. I have about 4 Mediawiki installations of my own, some public, and I can provide and perform extensive testing opportunities for everyone myself if asked. None of that is a problem. The problem is at the "approval" stage, where an extension is green-lighted for deployment on Wikiversity. This is an administrative step. The Wikimedia Foundation cannot take this administrative step. There is no explanation. So at the current time, I see no point in looking at the many available extensions on mediawiki.org, because we can only have what the larger projects also have. --McCormack 09:43, 23 June 2008 (UTC)
- As I know it (e.g. from StringFunctions), new extensions are usually stuck at the security checks by Tim Starling. However, we may accept that a site as large as wikiversity has little flexibility. And for participants in the MediaWiki learning project, having the chance to try different extensions together would be useful. Hillgentleman|Talk 09:50, 23 June 2008 (UTC)
- Agree, to just play at the sandbox server this is good enough. The problem with the Sandbox Server at the moment is that the person(s) who can install the extensions are not quickly reachable. So a solution for this - on the sandbox server - would be to grant more people rights to install extensions. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 18:12, 23 June 2008 (UTC)
- I am keen to try out the RSS Reader, but I'm guessing there's no way WMF is going to be letting RSS feeds onto its sites anytime soon. At the very least, it would be nice to be able to show a list of recent changes for a set of pages to display e.g., on a course homepage. I also want to show the feed from a course discussion list on the course homepage, i.e., I want a more dynamic homepage e.g., I am trying to replicate something like this: at Social psychology (psychology). To do this, I probably need to use the sandbox server as my unit homepage, but that probably comes with more hassles (esp. with confusing participants). So, I may need to stick with the university's wiki (Confluence - but you can't edit :() or try WikiEducator. I just wanted to share my wee dilemma as an example. -- Jtneill - Talk 10:22, 23 June 2008 (UTC)
- See the case with McCormack's extension - some need motivation to do something. So one way is to find a way to increase their motivation: I propose to let the ideas flow at this page: How to increase motivation so more extensions can be used at Wikiversity ?. E.g. another way would be to find something which can be done without others. An idea: someone could program a bot which reads a RSS feed locally and then the bot edits the new entries (in a certain form) automatically by a normal edit into the WV page you want ? That could give the feeling of an automatic RSS reader. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 18:19, 23 June 2008 (UTC)
So is there a place right now where I can freely test extensions (and possibly new tags)? Or a place where I can receive sysop rights to play with extension set-up? Is that place open internationally, or are English speakers preferred? Or is it better that I install MediaWiki on my computer? How? Does it work on Windows Vista?--Juan 13:25, 28 June 2008 (UTC)
- To Juan: I think that the sandbox is intended to be this spot. Unfortunately I haven't had the time to track down the person(s) running it, but I do believe that we can use it for this very purpose.
- As far as approvals go, we definitely have to figure out how to motivate things. We're a smaller wiki, a smaller community, and while it's important that we don't destabilize the servers, it's also important that things that will help the community get made available to it. Let me know if I can assist. Historybuff 18:52, 28 June 2008 (UTC)
- Extension installing can be done by User:Draicone + User:Darklama. Requests should be placed at "Testing Mediawiki extensions" on the Sandbox server (don't forget to verify your email so you can edit the page). ----Erkan Yilmaz uses the Wikiversity:Chat (try) 21:36, 29 June 2008 (UTC)
How to watch different posts
Recently, I have found a small imperfection. If I am watching changes on the page Wikiversity:Colloquium, it shows me changes globally. For someone who is watching the Colloquium twice a day, that is not a problem. But I watch it only once or twice a week. Then having WV:C on my watchlist is useless, because I don't see whether someone replied to or commented on my post. I have to go to the history, or better, scroll down. It is user-unfriendly. So how to solve it? Is it possible to watch recent changes in just some posts on the page? If not, I would recommend sending new posts in the format {{/New post}} rather than the format == New post ==. Then users may watch recent changes of just their preferred posts on the same page.
Or another question, is there someone, who would be able and who would enjoy to programme new extension, which would allow to whatch recent changes in the posts titled == name ==? Me not, I do not know any programme language.--Juan 14:27, 28 June 2008 (UTC)
- Juan -- I think there was a discussion about things like this a while back on the Colloquium. I think it was about liquid threads, but I might not be getting the tech name right. I think it's a great idea. Historybuff 19:06, 28 June 2008 (UTC)
Well, I think I saw Liquid threads discussion, but I havent understood what is it all about.--Juan 18:04, 29 June 2008 (UTC)
Expert commentaries
People may like to read and comment on Wikiversity:What shall we do with Wikipedia?/Expert commentaries. --McCormack 14:40, 29 June 2008 (UTC)
Medical Advice Proposal
I have a policy proposal for dealing with medical advice posted on wikiversity. I would like everyone's opinion. Is there anywhere else I should post it so everyone may have a say? Donek (talk) - Go raibh mile maith agaibh 00:40, 30 June 2008 (UTC) | http://en.wikiversity.org/wiki/Wikiversity:Colloquium/archives/June_2008 | CC-MAIN-2015-18 | refinedweb | 8,896 | 70.73 |
To with Cryptol posts such as ZUC, Skein, Simon, SPECK, Sudoku, and N-Queens. Complementing these small, single-algorithm, efforts we wrote a Cryptol specification for the popular file encryption tool Minilock.
Minilock allows users to encrypt files such that only select few people are capable of decryption using their respective private keys. To accomplish this, user passwords and SCrypt are used to generate Curve25519 asymmetric keys. These asymmetric keys and ECDH allow each party to derive symmetric keys which are used to decrypt the Salsa20+Poly1305 encrypted file. A key-wrapping scheme is used to avoid duplicating ciphertext for each recipient.
Our executable Minilock specification, called Crylock, allows us to produce encrypted files compatible with Minilock, analyze the algorithms, and formally verify functional equivalence of implementations. In building this specification we developed Cryptol implementations of Blake2s, Curve25519, HMAC, SHA256, PBKDF2, SCrypt, Salsa20, base64 and 58 encodings, and NaCl’s CryptoBox primitive.
Armed with this body of specifications and evidence of functional correctness due to the interoperability testing, we can now use our SMT solvers of choice as a hammer and treat any publication or bit of code as a nail to be struck. It’s hammer time.
Salsa20 and TweetNacl
One issue in implementing cryptographic algorithms is knowing it is done right, without corner cases. Was shift accidentally used instead of rotate? Have all operations accounted for overflow correctly? SAW can prove equivalence of implements in a straight-forward manner. For example, using a couple dozen lines of SAW script and our Salsa20 Cryptol implementation we can quickly verify the core of Salsa20 from TweetNaCl:
import "Salsa20.cry"; let sym = "crypto_core_salsa20_tweet"; let main : TopLevel () = do { print "Proving equivalence between spec and tweet nacl.";
Next, we declare symbolic values of the nonce, key and output. The quoted symbol names must match what is found in the bytecode, while the (non-quoted) SAW variables can be arbitrary but match for readability sake.
out <- fresh_symbolic "out" {| [64][8] |}; n <- fresh_symbolic "n" {| [16][8] |}; k <- fresh_symbolic "k" {| [32][8] |}; let allocs = [ ("out", 64), ("in", 16) , ("k", 32), ("c", 16) ]; let inits = [ ("*out", out, 64) , ("*in", n, 16) , ("*k", k, 32) , ("*c", {{ "expand 32-byte k" }}, 16) ]; let results = [ ("*out", 64) ];
In the above one particular variable stands out, the final ‘inits’ binding for variable ‘c’. In this verification we’re only interested in how the core function matches when used in the context of the Salsa20 stream cipher. As a result we do not need to prove equivalence for every possible 16 byte value ‘c’ and instead declare the static input. In the syntax,
{{ "..."}}, double curly braces indicates a term in the Cryptol language; the term is a string constant from the Salsa20 algorithm.
Having specified inputs and outputs, the meat of the work is loading the module, extracting a symbolic representation of the code, obtaining an And-Inverter Graph (AIG) of both code and specification, then comparing these graphs for equivalence.
print "\tLoading tweetnacl llvm byte code."; tnacl <- llvm_load_module "tweetnacl.bc"; print "\tExtracting the Salsa20 encryption function."; nacl_salsa20 <- time (llvm_symexec tnacl sym allocs inits results); print "\tBit blasting the NaCl and Cryptol terms."; nacl_as <- abstract_symbolic nacl_salsa20; let cry_f = {{ \key nonce -> Salsa20_expansion `{a=2} (key,nonce) }}; let nacl_f = {{ \key nonce -> nacl_as nonce key }}; naclAIG <- time (bitblast nacl_f); cryAIG <- time (bitblast cry_f); print "\tUsing CEC to prove equivalence."; res <- time (cec naclAIG cryAIG); print res; };
The ABC-backed ‘cec’ solver is too good. We don’t have time to get coffee.
Proving Salsa20_encrypt equivalent between Cryptol spec and tweet nacl. Loading tweetnacl llvm byte code. Extracting the Salsa20 encryption function. Time: 3.143s Bit blasting the NaCl and Cryptol terms. Time: 1.183s Time: 0.204s Using CEC to prove equivalence. Time: 0.069s Valid
The exact same steps can successfully show equivalence with Salsa20 encryption. A caveat is the SAW engine works over monomorphic types, so while one might desire to show the Salsa20 encryptions from Cryptol and NaCl are identical for all possible input sizes, SAWScript requires a static size prior to validation.
Cryptographic Properties
The 2008 paper “On the Salsa20 Core Function” highlighted seven theorems relating to a part of Salsa20. These are exactly the type of properties Cryptol and the underlying solvers are intended to quickly verify, allowing the user to more efficiently explore the algorithm.
The first few properties are about invariant inputs for transformations, for example:
property theorem1 a = quarterround [a, -a, a, -a] == [a,-a,a,-a] property theorem2 a b c d = rowround val == val where val = [a,-a,a,-a ,b,-b,b,-b ,c,-c,c,-c ,d,-d,d,-d] property theorem3 a b c d = columnround val == val where val = [a,-b,c,-d ,-a,b,-c,d ,a,-b,c,-d ,-a,b,-c,d] property theorem4 a = doubleround val == val where val = [a,-a,a,-a ,-a,a,-a,a ,a,-a,a,-a ,-a,a,-a,a]
That is, for the Salsa sub-functions of doubleround, columnround, rowround, and quarterround there exists inputs such that
f x = x. These theorems all can be handled by Cryptol directly:
Salsa20> :set prover=any Salsa20> :prove theorem1 Q.E.D. Salsa20> :prove theorem2 Q.E.D. Salsa20> :prove theorem3 Q.E.D. Salsa20> :prove theorem4 Q.E.D.
The seventh, and last, theorem of the paper does not terminate in a timely manner. This is unfortunate but not unexpected – it is a slightly more complex theorem that leverages the prior six. In order to make such compositional proofs painless, SAW returns proof objects which can be used to enhance future proof tactics in a simple manner. This is a middle ground between the full power of manual theorem prover and the “all-or-nothing” system exposed by Cryptol. The SAWScript is:
import "../src/prim/Salsa20.cry"; let main : TopLevel () = do { print "Proving Salsa20 hash theorems."; let simpset x = addsimps x empty_ss; t1_po <- time (prove_print abc {{ \a -> quarterround [a, -a, a, -a] == [a,-a,a,-a] }}); t2_po <- time (prove_print abc {{ \a b c d -> (rowround val == val where val = [a,-a,a,-a ,b,-b,b,-b ,c,-c,c,-c ,d,-d,d,-d])}}); t3_po <- time (prove_print abc {{ \a b c d -> (columnround val == val where val = [a,-b,c,-d ,-a,b,-c,d ,a,-b,c,-d ,-a,b,-c,d]) }}); t4_po <- time (prove_print abc {{ \a -> (doubleround val == val where val = [a,-a,a,-a ,-a,a,-a,a ,a,-a,a,-a ,-a,a,-a,a]) }}); let ss = simpset [ t1_po, t2_po, t3_po, t4_po ]; print "Proving Theorem 7"; time (prove_print do { simplify ss; abc ; } {{ \a -> ((Salsa20Words a == Salsa20Words (a ^ diff)) where diff = [ 0x80000000 | _ <- [0..15]) }}); print "Done"; };
SAW yields the result quickly:
Proving Salsa20 hash theorems. Valid Time: 0.145s Valid Time: 0.232s Valid Time: 0.218s Valid Time: 0.245s Proving Theorem 7 Valid Time: 0.411s Done
This task took SAW under two seconds while proving theorem 7 without the simplification rules, as in Cryptol, takes over a week of computation time.
Download
You can download SAW and many examples from our github page. | https://galois.com/blog/2015/06/cryptol-saw-minilock/ | CC-MAIN-2019-43 | refinedweb | 1,178 | 53.92 |
thank u.
Well why dont you use a library function from math.h.
#include <math.h>
...
...
sqrt( 4 );
ssharish
If this isn't a trivial question, you might want to look at using the Newton Raphson method for finding the square root.
Or other iterative methods.
The simplest way is to use Newton method:
#define TOLERANCE 0.0001
double myabs(const double x){
if (x > 0.0)
return x;
return -x;
}
double mysqrt(const double x){
double assumption = 1.0;
if (x < 0.0)
return 0.0;
while (myabs(assumption * assumption - x) > TOLERANCE){
assumption = 0.5 * (assumption + x / assumption);
}
return assumption;
}
I dont want to use library function sqrt().
have you ever heard of google ?
Pretty harsh reply by AncientDragon but a very good one...
Just google it and you will find methods and even the source code.
Yes, but well deserved. The OP gave absolutely no indication at all (in either post) of what he was looking for. We are simply left to guess.
Pretty harsh reply by AncientDragon [...]
Harsh? That is not being harsh. That's practicing at being a jerk without style and heavy on the unoriginal side of the trend.
After all, it wasn't very long ago that they used to write “RTFM”. And no; the F doesn't stand for Read The Fine Manual. Nowadays, it is some form of “include Google apathetic sentence here”. Like if there were not any other search engine in the whole Internet.
Many C programming boards count amount its assets, with one or two talented individuals, in the understanding of the intricacy of the language. By virtue of regular visits, they often answer or read the same questions time and time again developing a syndrome of cynicism or boredom, due to the lack of intelligent questions. To combat the occasional spell of this syndrome; posting witty or sharp answers, with the intent to stir up some form of self reliance in the OP is often common. Which for the most part is celebrated by their acolytes.
On the other hand, there's plenty of copycats. Creating an atmosphere of unfriendliness, and fomenting the typical judgmental post where the underling excuse its always: “You, unworthy lazy poster, if you get any answer from me it is because I am so magnanimous. Better start jumping as many hoops as I tell you and prepare for paying your dues.”
I don't agree that post #4 did anything to help the OP. It just gave him the answer on the | https://www.daniweb.com/programming/software-development/threads/137153/plz-give-me-code-to-find-the-squareroot-of-a-number | CC-MAIN-2018-30 | refinedweb | 419 | 75.61 |
This tutorial introduces attribute variables. It builds on the tutorial about a minimal shader and the RGB-cube tutorial about varying variables.
This tutorial also introduces the main technique to debug shaders in Blender: false-color images, i.e. a value is visualized by setting one of the components of the fragment color to it. Then the intensity of that color component in the resulting image allows you to make conclusions about the value in the shader. This might appear to be a very primitive debugging technique because it is a very primitive debugging technique. Unfortunately, there is no alternative in Blender.
Where Does the Vertex Data Come from?Edit
In the RGB-cube tutorial you have seen how the fragment shader gets its data from the vertex shader by means of varying variables. The question here is: where does the vertex shader get its data from? Within Blender this data is specified for each selected object by the settings in the Properties window, in particular the settings in the Object Data tab, Material tab, and Textures tab. All the data of the mesh of the object is sentEdit
In Blender, // (usually normalized; also in object coordinates) Blender:
import bge cont = bge.logic.getCurrentController() VertexShader = """ varying vec4 color; attribute vec4 tangent; // this attribute is specific to Blender // and has to be defined explicitly void main() { color = gl_MultiTexCoord0; // set the varying to this attribute // other possibilities to play with: // color = gl_Vertex; // color = gl_Color; // color = vec4(gl_Normal, 1.0); // color = gl_MultiTexCoord0; // color = gl_MultiTexCoord1; // color = tangent; gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } """ FragmentShader = """ varying vec4 color; void main() { gl_FragColor = color; } """ mesh = cont.owner.meshes[0] for mat in mesh.materials: shader = mat.getShader() if shader != None: if not shader.isValid(): shader.setSource(VertexShader, FragmentShader, 1) shader.setAttrib(bge.logic.SHD_TANGENT)
Note the line
shader.setAttrib(bge.logic.SHD_TANGENT)
in the Python script which tells Blender to provide the shader with tangent attributes. However, Blender will only provide several of the attributes for certain settings in the Properties window, in particular something should be specified as UV Maps in the Object Data tab (just click on the “+” button), a material should be defined in the Material tab, and a texture (e.g. any image) should be defined in the Textures tab.
In the RGB-cube tutorial we have already seen, how to visualize the
gl_Vertex coordinates by setting the fragment color to those values. In this example, the fragment color is set to
gl_MultiTexCoord0 such that we can see what kind of texture coordinates Blender provides for certain settings in the Properties window.
How to Interpret False-Color ImagesEdit
When trying to understand the information in a false-color image, it is important to focus on one color component only. For example, if the attribute
gl_MultiTexCoord0 is written to the fragment color then the red component of the fragment visualizes the
x coordinate of
gl_MultiTexCoord0, i.e. it doesn't matter whether the output color is maximum pure red or maximum yellow or maximum magenta, in all cases the red component is 1. On the other hand, it also doesn't matter for the red component whether the color is blue or green or cyan of any intensity because the red component is 0 in all cases. If you have never learned to focus solely on one color components, this is probably quite challenging; therefore, you might consider to look only at one color component at a time. For example by using this line to set the varying in the vertex shader:
color = vec4(gl_MultiTexCoord0.x, 0.0, 0.0, 1.0);
This sets the red component of the varying variable to the
x component of
gl_MultiTexCoord0 but sets the green and blue components to 0 (and the alpha or opacity component to 1 but that doesn't matter in this shader).
The specific texture coordinates that Blender sends to the vertex shader depend on the UV Maps that is specified in the Object Data tab and the Mapping that is specified in the Textures tab._Normal is a three-dimensional vector. Black corresponds then to the coordinate -1 and full intensity of one component to the coordinate +1.
If the value that you want to visualize is in another range than 0 to 1 or -1 to +1, you have to map it to the range from 0 to 1, which is the range of color components. If you don't know which values to expect, you just have to experiment. What helps here is that if you specify color components outside of the range 0 to 1, they are automatically clamped to this range. I.e., values less than 0 are set to 0 and values greater than 1 are set to 1. Thus, when the color component is 0 or 1 you know at least that the value is less or greater than what you assumed and then you can adapt the mapping iteratively until the color component is between 0 and 1.
Debugging PracticeEdit
In order to practice the debugging of shaders, this section includes some lines that produce black colors when the assignment to
color in the vertex shader is replaced by each of them. Your task is to figure out for each line, why the result is black. To this end, you should try to visualize any value that you are not absolutely sure about and map the values less than 0 or greater than 1 to other ranges such that the values are visible and you have at least an idea in which range they are. Note that most of the functions and operators are documented in “Vector and Matrix Operations”.
color = gl_MultiTexCoord0 - vec4(1.5, 2.3, 1.1, 0.0);
color = vec4(1.0 - gl_MultiTexCoord0.w);
color = gl_MultiTexCoord0 / tan(0.0);
The following lines work only with spheres and, gl_Vertex), 1.0);
Does the function
radians() always return black? What's that good for?
color = radians(gl_MultiTexCoord0);
Consult the documentation in the “OpenGL ES Shading Language 1.0.17 Specification” available at the “Khronos OpenGL ES API Registry” to figure out what
radians() is good for.
Special Variables in the Fragment ShaderEdit the tutorial on cutaways.
SummaryEdit
Congratulations, you have reached the end of this tutorial! We have seen:
- The list of built-in attributes in Blender:Edit
If you still want to know more
- about the data flow in vertex and fragment shaders, you should read the description of the “OpenGL ES 2.0 Pipeline”.
- about operations and functions for vectors, you should read “Vector and Matrix Operations”.
< GLSL Programming/Blender | http://en.m.wikibooks.org/wiki/GLSL_Programming/Blender/Debugging_of_Shaders | CC-MAIN-2014-15 | refinedweb | 1,094 | 52.8 |
Business events consist of message data sent as the result of an occurrence in a business environment. When a business event is published, other service components can subscribe to it.
In this chapter, you learn how to subscribe to a business event using Oracle Mediator. At a high-level, you perform the following tasks:
Create a business named
NewOrderSubmitted.
Create
OrderPendingEvent mediator service component to subscribe to the
NewOrderSubmitted business event and initiate the
OrderProcessor BPEL process through a routing rule to process the order through a routing rule.
This chapter contains the following sections:
Section 7.1, "Task 1: Create the NewOrderSubmitted Business Event"
Section 7.2, "Task 2: Create Mediator Service Component to Subscribe to NewOrderSubmitted Business Event"
Section 7.3, "Task 3: Route OrderPendingEvent Mediator Service Component to OrderProcessor BPEL Process"
To create the
NewOrderSubmitted business event:
Click the composite.xml tab to view the SOA Composite Editor.
Launch the Event Definition Creation wizard in either of two ways:
In the SOA Composite Editor, click the Event Definition Creation icon above the designer:
From the File main menu, select New > SOA Tier > Service Components > Event Definition.
The Event Definition Creation dialog appears.
In the Event Definition Name field, enter
OrderEO. Oracle JDeveloper saves the
NewOrderSubmitted event to the
orderEO.edl file.
Leave the default settings for the Namespace field.
Click the Add an Event icon to add an event.
The Add an Event dialog appears.
Enter the following values.
Click OK.
In the Event Definition Creation dialog, click OK.
The Business Events Editor displays with the
NewOrderSubmitted event.
Select Save All from the File main menu to save your work.
Click X in the OrderEO.edl tab to close the definition file.
The business event is published to MDS and you are returned to the SOA Composite Editor.
To subscribe to the
NewOrderSubmitted business event and initiate the
OrderProcessor BPEL process:
Drag a Mediator service component into the SOA Composite Editor. This service component enables you to subscribe to the business event.
In the Name field, enter
OrderPendingEvent.
From the Templates list, select Subscribe to Events.
The window refreshes to display an events table.
Click the Subscribe to a new event icon to display the Event Chooser dialog.
With the NewOrderSubmitted event selected, click OK.
You are returned to the Create Mediator dialog.
one and only one specifies that events are delivered to the subscriber in its own global (that is, JTA) transaction. Any changes made by the subscriber within the context of that transaction are committed when the event processing is complete. If the subscriber fails, the transaction is rolled back. Failed events are retried a configured number of times before being delivered to the hospital queue.
$publisher specifies the event requires a security role.
In the Create Mediator dialog, click OK.
The
OrderPendingEvent mediator displays in the SOA Composite Editor. The icon on the left side that indicates that mediator is configured for an event subscription.
Click Source.
The following source code provides details about the subscribed event of the mediator service component.
<component name="OrderPendingEvent"> <implementation.mediator <business-events> <subscribe xmlns: </business-events> </component>
To create a routing rule from the
OrderPendingEvent mediator service component to the
OrderProcessor BPEL process:
Back in the Design tab, drag a wire from OrderPendingEvent to the OrderProcessor reference handle.
Double-click OrderPendingEvent tab to see the rule in the Mediator Editor:
Modify the transformation used for the
OrderPendingEvent mediator service component so that the
OrderProcessor BPEL process receives input from the
NewOrderSubmitted business event:
Click the transformation icon next to the Transform Using field.
The Event Transformation Map dialog displays.
Select Create New Mapper File, use the default name
NewOrderSubmitted_To_WarehouseRequest.xsl in the accompanying field, and then click OK.
The Data Mapper displays.
On the Source:OrderEO.xsd (left) side, click and drag OrderID to ns1:WarehousreRequest > ns1:orderId on the XSLT File: OrderProcessor.wsdl (right) side. The namespace number values (for example, ns1, ns2) can vary.
The Data Mapper dialog should look like the following now.
Select Save All from the File main menu to save your work.
Click X in the NewOrderSubmitted_To_WarehouseRequest.xsl tab to close the Data Mapper.
With the OrderPendingEvent.mplan tab back in focus, in the Routing Rules section, you should see the transformation updated as follows: | http://docs.oracle.com/cd/E12839_01/integration.1111/e10275/orderpend.htm | CC-MAIN-2015-14 | refinedweb | 714 | 50.84 |
Ever wondered which devices support Windows Hello, Fingerprint verification, and critical biometric data – and where they store that data? Storing this data on your computer or phone can be risky. This is where TPM or Trusted Platform Module comes into the picture. In this post, we will learn about the Trusted Platform Module and learn how to check if you have a TPM chip.
What is Trusted Platform Module
Trusted Platform Module or TPM is a specialized and dedicated chip which stores cryptographic keys. It acts as endpoint security for the devices which support it.
When someone owns a device, it generates two keys —
- Endorsement Key
- Storage Root Key.
These keys can only be accessed on the hardware level. No software program can access those keys.
Apart from these keys, there is another key called as Attestation Identity Key or AIK. It protects the hardware from unauthorized firmware and software modification.
Related: How to clear and update TPM firmware.
How to check if you have TPM chip
There are multiples ways to check TPM chip availability. However, you should know that it should be enabled at the hardware level so that security software security like Bitllocker can use it.
- Using TPM Management
- Enable it in BIOS or UEFI
- Using the Security Node in Device Manager
- Using WMIC command.
1] Open Trusted Management Module Management
Type tpm.msc in the Run prompt, and hit enter. It will launch the Trusted Management Module Management.
If it says:
Compatible TPM cannot be found on this computer. Verify that this computer has 1.2 TPM or later and it’s turned on in the BIOS.
or anything similar, then you do not TPM on the computer.
If it says:
The TPM is ready to use
You have it!
You can use TPM Diagnostics Tool in Windows 11 to find out the Trusted Platform Module chip information of your system.
2] Check-in BIOS or UEFI
Restart the computer and boot into BIOS or UEFI. Locate the security section, and check if there is a setting similar to TPM Support or Security Chip or anything else. Enable it, and restart the computer after saving the settings.
Read: TPM vs PTT: What are the main differences?
3] Check with Device Manager
Use Win+X+M to open the Device Manager. Find if there is a Security devices node. If yes expand it and TPM with module number.
4] Use WMIC in the Command Prompt
In an elevated command prompt, execute the command:
wmic /namespace:\\root\cimv2\security\microsofttpm path win32_tpm get * /format:textvaluelist.xsl
It will display a list of key-value pairs.
If you see True in the result, it means that TPM is enabled; else you will see No instances available.
We hope the guide was straightforward and easy enough for you to figure out if the computer has TPM chipset.
Read: How to bypass TPM requirement and install Windows 11?
| https://www.thewindowsclub.com/trusted-platform-module-check-if-you-have-tpm-chip | CC-MAIN-2022-05 | refinedweb | 487 | 66.54 |
Matplotlib for Python Developers
Read Part One of Advanced Matplotlib.
Plotting dates
Sooner or later, we all have had the need to plot some information over time, be it for the bank account balance each month, the total web site accesses for each day of the year, or one of many other reasons.
Matplotlib has a dedicated plotting function for dates, plot_date(), that treats the data on the X axis, the Y axis, or both as dates, labeling the axes accordingly.
As usual, we now present an example, and we will discuss it later:
In [1]: import matplotlib as mpl
In [2]: import matplotlib.pyplot as plt
In [3]: import numpy as np
In [4]: import datetime as dt
In [5]: dates = [dt.datetime.today() + dt.timedelta(days=i)
...: for i in range(10)]
In [6]: values = np.random.rand(len(dates))
In [7]: plt.plot_date(mpl.dates.date2num(dates), values, linestyle='-');
In [8]: plt.show()
First, a note about the linestyle keyword argument: without it, no line is drawn connecting the markers, which are displayed on their own.
We created the dates array using timedelta(), a datetime function that helps us define a date interval—10 days in this case. Note how we had to convert our date values using the date2num() function. This is because Matplotlib represents dates as float values corresponding to the number of days since 0001-01-01 UTC.
Also note how the X-axis labels, the ones holding the date values, are badly rendered, overlapping each other.
Matplotlib provides ways to address the previous two points—date formatting and conversion, and axes formatting.
Date formatting
Commonly, in Python programs, dates are represented as datetime objects, so we have to first convert other data values into datetime objects, sometimes by using the dateutil companion module, for example:
import datetime
date = datetime.datetime(2009, 3, 28, 11, 34, 59, 12345)
or
import dateutil.parser
datestrings = ['2008-07-18 14:36:53.494013',
               '2008-07-20 14:37:01.508990',
               '2008-07-28 14:49:26.183256']
dates = [dateutil.parser.parse(s) for s in datestrings]
Once we have the datetime objects, in order to let Matplotlib use them, we have to convert them into floating point numbers that represent the number of days since 0001-01-01 00:00:00 UTC.
To do that, Matplotlib itself provides several helper functions contained in the matplotlib.dates module:
- date2num(): This function converts one or a sequence of datetime objects to float values representing days since 0001-01-01 00:00:00 UTC (the fractional parts represent hours, minutes, and seconds)
- num2date(): This function converts one or a sequence of float values representing days since 0001-01-01 00:00:00 UTC to datetime objects (or a sequence, if the input is a sequence)
- drange(dstart, dend, delta): This function returns a date range (a sequence) of float values in Matplotlib date format; dstart and dend are datetime objects while delta is a datetime.timedelta instance
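A quick round trip shows how these helpers behave. (Note that recent Matplotlib releases count days from a different epoch than 0001-01-01, so the absolute float values may vary between versions; differences between dates and round-trip conversions are stable.)

```python
import datetime as dt
import matplotlib.dates as mdates

d0 = dt.datetime(2009, 3, 28, 12, 0, 0)
d1 = d0 + dt.timedelta(days=1, hours=6)

# date2num(): datetime -> float days; the fraction encodes hours/minutes/seconds
x0 = mdates.date2num(d0)
x1 = mdates.date2num(d1)
print(x1 - x0)  # 1.25 (one day and six hours)

# num2date(): float -> datetime; the round trip recovers the calendar date
back = mdates.num2date(x0)
print(back.year, back.month, back.day)  # 2009 3 28
```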
Usually, what we will end up doing is converting a sequence of datetime objects into a Matplotlib representation, such as:
dates = list of datetime objects
mpl_dates = matplotlib.dates.date2num(dates)
drange() can be useful in situations like this one:
import matplotlib as mpl
from matplotlib import dates
import datetime as dt
date1 = dt.datetime(2008, 9, 23)
date2 = dt.datetime(2009, 4, 12)
delta = dt.timedelta(days=10)
dates = mpl.dates.drange(date1, date2, delta)
where dates will be a sequence of floats starting from date1 and ending before date2, with a time step of delta between consecutive items of the list.
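A small check of that claim, reusing the names from the snippet above: the spacing between consecutive entries matches delta, and the sequence stops before date2.

```python
import datetime as dt
import matplotlib.dates as mdates

date1 = dt.datetime(2008, 9, 23)
date2 = dt.datetime(2009, 4, 12)
delta = dt.timedelta(days=10)

dates = mdates.drange(date1, date2, delta)

# consecutive entries are exactly ten (float) days apart,
# and the sequence starts at date1 and stops before date2
print(dates[1] - dates[0])                # 10.0
print(mdates.num2date(dates[0]).date())   # 2008-09-23
```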
Axes formatting with axes tick locators and formatters
As we have already seen, the X labels on the first image are not that nice looking. We would expect Matplotlib to provide a better way to label the axis, and indeed, it does.
The solution is to change the two parts that form the axis ticks—locators and formatters. Locators control the tick's position, while formatters control the formatting of labels. Both have a major and minor mode: the major locator and formatter are active by default and are the ones we commonly see, while minor mode can be turned on by passing a relative locator or formatter function (because minors are turned off by default by assigning NullLocator and NullFormatter to them).
While this is a general tuning operation and can be applied to all Matplotlib plots, there are some specific locators and formatters for date plotting, provided by matplotlib.dates:
- MinuteLocator, HourLocator, DayLocator, WeekdayLocator, MonthLocator, YearLocator are all the locators available that place a tick at the time specified by the name, for example, DayLocator will draw a tick at each day. Of course, a minimum knowledge of the date interval that we are about to draw is needed to select the best locator.
- DateFormatter is the tick formatter that uses strftime() to format strings.
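Since DateFormatter just wraps strftime(), you can try a format string outside of a plot: formatter objects are callable with a tick value and a tick position. (The month abbreviation below depends on the locale.)

```python
import datetime as dt
import matplotlib.dates as mdates

fmt = mdates.DateFormatter('%b %d, %Y')  # strftime()-style format codes
x = mdates.date2num(dt.datetime(2008, 9, 23))

# formatters are callable with (tick value, tick position)
label = fmt(x, 0)
print(label)  # e.g. 'Sep 23, 2008' in an English locale
```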
The default locator and formatter are matplotlib.dates.AutoDateLocator and matplotlib.dates.AutoDateFormatter, respectively. Both are set by the plot_date() function when called. So, if you wish to set a different locator and/or formatter, we suggest doing that after the plot_date() call in order to avoid plot_date() resetting them to the default values.
Let's group all this up in an example:
In [1]: import matplotlib as mpl
In [2]: import matplotlib.pyplot as plt
In [3]: import numpy as np
In [4]: import datetime as dt
In [5]: fig = plt.figure()
In [6]: ax2 = fig.add_subplot(212)
In [7]: date2_1 = dt.datetime(2008, 9, 23)
In [8]: date2_2 = dt.datetime(2008, 10, 3)
In [9]: delta2 = dt.timedelta(days=1)
In [10]: dates2 = mpl.dates.drange(date2_1, date2_2, delta2)
In [11]: y2 = np.random.rand(len(dates2))
In [12]: ax2.plot_date(dates2, y2, linestyle='-');
In [13]: dateFmt = mpl.dates.DateFormatter('%Y-%m-%d')
In [14]: ax2.xaxis.set_major_formatter(dateFmt)
In [15]: daysLoc = mpl.dates.DayLocator()
In [16]: hoursLoc = mpl.dates.HourLocator(interval=6)
In [17]: ax2.xaxis.set_major_locator(daysLoc)
In [18]: ax2.xaxis.set_minor_locator(hoursLoc)
In [19]: fig.autofmt_xdate(bottom=0.18) # adjust for date labels display
In [20]: fig.subplots_adjust(left=0.18)
In [21]: ax1 = fig.add_subplot(211)
In [22]: date1_1 = dt.datetime(2008, 9, 23)
In [23]: date1_2 = dt.datetime(2009, 2, 16)
In [24]: delta1 = dt.timedelta(days=10)
In [25]: dates1 = mpl.dates.drange(date1_1, date1_2, delta1)
In [26]: y1 = np.random.rand(len(dates1))
In [27]: ax1.plot_date(dates1, y1, linestyle='-');
In [28]: monthsLoc = mpl.dates.MonthLocator()
In [29]: weeksLoc = mpl.dates.WeekdayLocator()
In [30]: ax1.xaxis.set_major_locator(monthsLoc)
In [31]: ax1.xaxis.set_minor_locator(weeksLoc)
In [32]: monthsFmt = mpl.dates.DateFormatter('%b')
In [33]: ax1.xaxis.set_major_formatter(monthsFmt)
In [34]: plt.show()
The result of executing the previous code snippet is as shown:
We drew the subplots in reverse order to avoid some minor overlapping problems.
fig.autofmt_xdate() is used to nicely format date tick labels. In particular, this function rotates the labels (through the rotation keyword argument, with a default value of 30°) and gives them more room (through the bottom keyword argument, with a default value of 0.2).
We can achieve the same result, at least for the additional spacing, with:
fig = plt.figure()
fig.subplots_adjust(bottom=0.2)
ax = fig.add_subplot(111)
This can also be done by creating the Axes instance directly with:
ax = fig.add_axes([left, bottom, width, height])
while specifying the explicit dimensions.
The subplots_adjust() function allows us to control the spacing around the subplots by using the following keyword arguments:
- bottom, top, left, right: Controls the spacing at the bottom, top, left, and right of the subplot(s)
- wspace, hspace: Controls the horizontal and vertical spacing between subplots
We can also control the spacing by using these parameters in the Matplotlib configuration file:
figure.subplot.<position> = <value>
Custom formatters and locators
Even if it's not strictly related to date plotting, Matplotlib allows us to define custom tick formatters too:
...
import matplotlib.ticker as ticker
...
def format_func(x, pos):
return <a transformation on x>
...
formatter = ticker.FuncFormatter(format_func)
ax.xaxis.set_major_formatter(formatter)
...
The function format_func will be called for each label to draw, passing its value and position on the axis. With those two arguments, we can apply a transformation (for example, divide x by 10) and then return a value that will be used to actually draw the tick label.
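As a concrete sketch of this mechanism (the helper name and the scale factor are our own, not from the text), a formatter that displays tick values in thousands might look like:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so the sketch runs headless
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

def thousands(x, pos):
    """Called for each tick label: x is the tick value, pos its position."""
    return '%.1fk' % (x / 1000.0)

fig, ax = plt.subplots()
ax.plot([0, 1000, 2000, 3000], [0, 1, 4, 9])
ax.xaxis.set_major_formatter(ticker.FuncFormatter(thousands))
```

With this formatter in place, a tick at 2500 would be drawn as 2.5k.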
Here's a general note on NullLocator: it can be used to remove axis ticks by simply issuing:
ax.xaxis.set_major_locator(matplotlib.ticker.NullLocator())
Text properties, fonts, and LaTeX
Matplotlib has excellent text support, including mathematical expressions, TrueType font support for raster and vector outputs, newline separated text with arbitrary rotations, and Unicode.
We have total control over every text property (font size, font weight, text location, color, and so on) with sensible defaults set in the rc configuration file. Specifically for those interested in mathematical or scientific figures, Matplotlib implements a large number of TeX math symbols and commands to support mathematical expressions anywhere in the figure.
We already saw some text functions, but the following list contains all the functions which can be used to insert text with the pyplot interface, presented along with the corresponding API method and a description:
All of these commands return a matplotlib.text.Text instance. We can customize the text properties by passing keyword arguments to the functions or by using matplotlib.artist.setp():
t = plt.xlabel('some text', fontsize=16, color='green')
We can do it as:
t = plt.xlabel('some text')
plt.setp(t, fontsize=16, color='green')
Handling objects allows for several new possibilities, such as setting the same property on all the objects in a specific group. Matplotlib has several convenience functions that return the objects of a plot. Let's take the example of the tick labels:
ax.get_xticklabels()
This line of code returns a sequence of object instances (the labels for the X-axis ticks) that we can tune:
for t in ax.get_xticklabels():
t.set_fontsize(5.)
or else, still using setp():
plt.setp(ax.get_xticklabels(), fontsize=5.)
setp() can take a sequence of objects and apply the same property to all of them.
To recap, all of the properties such as color, fontsize, position, rotation, and so on, can be set either:
- At function call using keyword arguments
- Using setp() referencing the Text instance
- Using the modification function
Fonts
Where there is text, there are also fonts to draw it. Matplotlib allows for several font customizations.
The most complete documentation on this is currently available in the Matplotlib configuration file, /etc/matplotlibrc; we summarize that information here.
There are six font properties available for modification.
The list of font names, selected by font.family, in the priority search order is:
The first valid and available (that is, installed) font in each family is the one that will be loaded. If the fonts are not specified, the Bitstream Vera Sans fonts are used by default.
As usual, we can set these values in the configuration file or in the code accessing the rcParams dictionary provided by Matplotlib.
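For example, a minimal sketch of setting font properties through rcParams (the values chosen here are arbitrary):

```python
import matplotlib as mpl

# These keys mirror the font.* entries in the matplotlibrc file
mpl.rcParams['font.family'] = 'sans-serif'
mpl.rcParams['font.size'] = 12.0   # default font size, in points
```

All text created after these assignments will use the new defaults, exactly as if the values had been set in the configuration file.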
Using LaTeX formatting
If you have ever used LaTeX, you know how powerful it can be at rendering mathematical expressions. Given its roots in the scientific field, Matplotlib allows us to embed TeX text in its plots. There are two ways available:
- Mathtext
- Using an external TeX renderer
Mathtext
Matplotlib includes an internal engine to render TeX expressions, mathtext. The mathtext module provides TeX-style mathematical expressions using FreeType 2 and the default font from TeX, Computer Modern.
As Matplotlib ships with everything it needs to make mathtext work, there is no requirement to install a TeX system (or any other external program) on the computer for it to be used.
The markup character used to signal the start and the end of a mathtext string is $; encapsulating a string inside a pair of $ characters triggers the mathtext engine to render it as a TeX mathematical expression.
We should use raw strings (preceding the quotes with an r character) and surround the mathtext with dollar signs ($), as in TeX. The use of raw strings is important so that backslashes (used for TeX symbols escaping) are not mangled by the Python interpreter.
Matplotlib accepts TeX equations in any text expressions, so regular text and mathtext can be interleaved within the same string.
An example of the kind of text we can generate is:
In [1]: import matplotlib.pyplot as plt
In [2]: fig = plt.figure()
In [3]: ax= fig.add_subplot(111)
In [4]: ax.set_xlim([1, 6]);
In [5]: ax.set_ylim([1, 9]);
In [6]: ax.text(2, 8, r"$ \mu \alpha \tau \pi \lambda \omega \tau \lambda \iota \beta $");
In [7]: ax.text(2, 6, r"$ \lim_{x \rightarrow 0} \frac{1}{x} $");
In [8]: ax.text(2, 4, r"$ a \leq b \leq c \Rightarrow a \leq c$");
In [9]: ax.text(2, 2, r"$ \sum_{i=1}^{\infty} x_i^2$");
In [10]: ax.text(4, 8, r"$ \sin(0) = \cos(\frac{\pi}{2})$");
In [11]: ax.text(4, 6, r"$ \sqrt[3]{x} = \sqrt{y}$");
In [12]: ax.text(4, 4, r"$ \neg (a \wedge b) \Leftrightarrow \neg a \vee \neg b$");
In [13]: ax.text(4, 2, r"$ \int_a^b f(x)dx$");
In [14]: plt.show()
The preceding code snippet results in the following:
The escape sequence is almost the same as that of LaTeX; consult the Matplotlib and/or LaTeX online documentation to see the full list.
External TeX renderer
Matplotlib also allows us to manage all the text layout using an external LaTeX engine. This is limited to the Agg, PS, and PDF backends and is commonly needed when we want to create graphs to be embedded into LaTeX documents, where rendering uniformity is desirable.
To activate an external TeX rendering engine for text strings, we need to set this parameter in the configuration file:
text.usetex : True
or use the rcParams dictionary:
rcParams['text.usetex'] = True
This mode requires LaTeX, dvipng, and Ghostscript to be correctly installed and working. Also note that usually external TeX management is slower than Matplotlib's mathtext and that all the texts in the figure are drawn using the external renderer, not only the mathematical ones.
There are several optimizations and configurations that you will need when dealing with TeX and PostScript output; we invite you to consult the official documentation for additional information.
When the previous example is executed and rendered using an external LaTeX engine, the result is:
Also, look at how the tick label's text is rendered in the same font as the text in the figure, as in this real world example:
In [1]: import matplotlib as mpl
In [2]: import matplotlib.pyplot as plt
In [3]: mpl.rcParams['text.usetex'] = True
In [4]: import numpy as np
In [5]: x = np.arange(0., 5., .01)
In [6]: y = [np.sin(2*np.pi*xx) * np.exp(-xx) for xx in x]
In [7]: plt.plot(x, y, label=r'$\sin(2\pi x)\exp(-x)$');
In [8]: plt.plot(x, np.exp(-x), label=r'$\exp(-x)$');
In [9]: plt.plot(x, -np.exp(-x), label=r'$-\exp(-x)$');
In [10]: plt.title(r'$\sin(2\pi x)\exp(-x)$ with the two asymptotes $\pm\exp(-x)$');
In [11]: plt.legend();
In [12]: plt.show()
The preceding code snippet results in a sinusoidal line contained in two asymptotes:
Contour plots and image plotting
We will now discuss the features Matplotlib provides to create contour plots and display images.
Contour plots
Contour lines (also known as level lines or isolines) for a function of two variables are curves where the function has constant values. Mathematically speaking, a contour plot shows the curves:
f(x, y) = L
with L constant. Contour lines often have specific names beginning with iso- (from Greek, meaning equal) according to the nature of the variables being mapped.
There are a lot of applications of contour lines in several fields such as meteorology (for temperature, pressure, rain precipitation, wind speed), geography, oceanography, cartography (elevation and depth), magnetism, engineering, social sciences, and so on.
By far the most common examples of contour lines are those seen in weather forecasts, where isobars (lines of constant atmospheric pressure) are drawn over terrain maps. In particular, those are contour maps, because contour lines are drawn above a map in order to add specific information to it.
The density of the lines indicates the slope of the function. The gradient of the function is always perpendicular to the contour lines, and when the lines are close together, the length of the gradient is large and the variation is steep.
Here is a contour plot from a random number matrix:
In [1]: import matplotlib.pyplot as plt
In [2]: import numpy as np
In [3]: matr = np.random.rand(21, 31)
In [4]: cs = plt.contour(matr)
In [5]: plt.show()
where the contour lines are colored from blue to red in a scale from the lower to the higher values.
The contour() function draws contour lines, taking a 2D array as input (in this case, a matrix of 21x31 random elements). The number of level lines to draw is chosen automatically, but we can also specify it as an additional parameter, N:
contour(matrix, N)
This tells Matplotlib to draw N automatically chosen level lines.
There is also a similar function that draws a filled contour plot, contourf():
In [6]: csf = plt.contourf(matr)
In [7]: plt.colorbar();
In [8]: plt.show()
contourf() fills the spaces between the contour lines with the same color progression used in the contour() plot: dark blue is used for low-value areas, while red is used for high-value areas, fading in between for the intermediate values.
Contour colors can be changed using a colormap, a set of colors used as a lookup table by Matplotlib when it needs to select more colors, specified using the cmap keyword argument.
We also added a colorbar() call to draw a color bar next to the plot to identify the ranges the colors are assigned to. In this case, there are a few bins where the values can fall, because the NumPy rand() function returns values between 0 and 1.
Labeling the level lines is important in order to provide information about what levels were chosen for display; clabel() does this by taking as input a contour instance, as returned by a previous contour() call:
In [7]: cs = plt.contour(ellipses)
In [8]: plt.clabel(cs);
In [9]: plt.show()
Here, we draw several ellipses and then call clabel() to display the selected levels. We used the NumPy meshgrid() function to get the coordinate matrices, X and Y, from the two coordinate vectors, x and y. The output of this code is shown in the following image:
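The setup code for the ellipses was elided above; a minimal sketch of what it might have looked like, using meshgrid() as described (the coefficients are our own):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

# Coordinate vectors, turned into coordinate matrices by meshgrid()
x = np.arange(-2.0, 2.0, 0.1)
y = np.arange(-2.0, 2.0, 0.1)
X, Y = np.meshgrid(x, y)

# A function of two variables whose level lines are ellipses
ellipses = X * X / 9.0 + Y * Y / 4.0

cs = plt.contour(ellipses)
plt.clabel(cs)  # label the selected levels on the lines themselves
```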
Image plotting
Matplotlib also has basic image plotting capabilities provided by the functions: imread() and imshow().
imread() reads an image from a file and converts it into a NumPy array; imshow() takes an array as input and displays it on the screen:
import matplotlib.pyplot as plt
f = plt.imread('/path/to/image/file.ext')
plt.imshow(f)
Matplotlib can only read PNG files natively, but if the Python Imaging Library (usually known as PIL) is installed, then this library will be used to read the image and return an array (if possible).
Note that when working with images, the origin is in the upper-left corner. This can be changed using the origin keyword argument, origin='lower' (which is the only other acceptable value, in addition to the default 'upper'), which will set the origin on the lower-left corner. We can also set it as a configuration parameter, and the key name is image.origin.
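A quick sketch of the two origin conventions (the array content here is arbitrary):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

a = np.arange(12).reshape(3, 4)  # row 0 is [0, 1, 2, 3]

plt.subplot(121)
plt.imshow(a)                       # default origin='upper': row 0 at the top
plt.subplot(122)
im = plt.imshow(a, origin='lower')  # row 0 drawn at the bottom
```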
Just note that once the image is an array, we can do all the transformations we like. imshow() can plot any 2D set of data, and not just those read from image files. For example, let's take the ellipses code we used for the contour plot and see what imshow() draws:
In [7]: plt.imshow(ellipses);
In [8]: plt.colorbar();
In [9]: plt.show()
This example creates a full spectrum of colors starting from deep blue of the image center, slightly turning into green, yellow, and red near the image corners:
Summary
We've come a long way in this article, so let's recap the topics we covered:
- Object-oriented interfaces and the relationship with pyplot and pylab
- How to draw subplots and multiple figures
- How to manipulate axes, so that they can be shared between subplots or twinned within the same plot
- Logarithmic scaled axes
- How to plot dates, and tune tick formatters and locators
- Text properties, fonts, and LaTeX typesetting, both with the internal mathtext engine and with an external renderer
- Contour plots and image plotting
With the information that we have gathered so far, we are ready to extract Matplotlib from a pure script or interactive usage inside the Python interpreter and learn how we can embed this library in a GUI Python application.
About the Author: Sandro Tosi (Packt)
This library's functionality is implemented in C. Methods for accessing the machine representation are provided, including the ability to import and export buffers. This allows creating bitarrays that are mapped to other objects, including memory-mapped files.
The bit endianness can be specified for each bitarray object, see below.
- Sequence methods: slicing (including slice assignment and deletion), operations ``+``, ``*``, ``+=``, ``*=``, the ``in`` operator, ``len()``
- Fast methods for encoding and decoding variable bit length prefix codes.
- Bitwise operations: ``~``, ``&``, ``|``, ``^``, ``<<``, ``>>`` (as well as their in-place versions ``&=``, ``|=``, ``^=``, ``<<=``, ``>>=``).
- Packing and unpacking to other binary data formats, e.g. ``numpy.ndarray``.
- Bitarray objects support the buffer protocol (both importing and exporting buffers).
- ``frozenbitarray`` objects which are hashable
- Pickling and unpickling of bitarray objects.
- Sequential search
- Extensive test suite with over 400 unittests.
- Utility module ``bitarray.util``:
If you have a working C compiler, you can simply:
.. code-block:: shell-session
pip install bitarray
If you rather want to use precompiled binaries, you can:

- ``conda install bitarray`` (both the default Anaconda repository as well as conda-forge support bitarray)
- `Chris Gohlke <>`__
Once you have installed the package, you may want to test it:
.. code-block:: shell-session
   $ python -c 'import bitarray; bitarray.test()'
   bitarray is installed in: /Users/ilan/bitarray/bitarray
   bitarray version: 2.3.4
   sys.version: 2.7.15 (default, Mar 5 2020, 14:58:04) [GCC Clang 9.0.1]
   sys.prefix: /Users/ilan/Mini3/envs/py27
   pointer size: 64 bit
   sizeof(size_t): 8
   sizeof(bitarrayobject): 80
   PY_UINT64_T defined: 1
   DEBUG: 0
   .........................................................................
   .........................................................................
   ................................................................
   ----------------------------------------------------------------------
   Ran 407 tests in 0.483s

   OK
You can always import the function test, and ``test().wasSuccessful()`` will return ``True`` when the test went well.
As mentioned above, bitarray objects behave very much like lists, so there is not too much to learn. The biggest difference from list objects (except that bitarrays are obviously homogeneous) is the ability to access the machine representation of the object. When doing so, the bit endianness is of importance; this issue is explained in detail in the section below. Here, we demonstrate the basic usage of bitarray objects:
.. code-block:: python
   >>> from bitarray import bitarray
   >>> a = bitarray()         # create empty bitarray
   >>> a.append(1)
   >>> a.extend([1, 0])
   >>> a
   bitarray('110')
   >>> x = bitarray(2 ** 20)  # bitarray of length 1048576 (uninitialized)
   >>> len(x)
   1048576
   >>> bitarray('1001 011')   # initialize from string (whitespace is ignored)
   bitarray('1001011')
   >>> lst = [1, 0, False, True, True]
   >>> a = bitarray(lst)      # initialize from iterable
   >>> a
   bitarray('10011')
   >>> a.count(1)
   3
   >>> a.remove(0)            # removes first occurrence of 0
   >>> a
   bitarray('1011')
Like lists, bitarray objects support slice assignment and deletion:
.. code-block:: python
   >>> a = bitarray(50)
   >>> a.setall(0)            # set all elements in a to 0
   >>> a[11:37:3] = 9 * bitarray('1')
   >>> a
   bitarray('00000000000100100100100100100100100100000000000000')
In addition, slices can be assigned to booleans, which is easier (and faster) than assigning to a bitarray in which all values are the same:
.. code-block:: python
   >>> a = 20 * bitarray('0')
   >>> a[1:15:3] = True
   >>> a
   bitarray('01001001001001000000')
This is easier and faster than:
.. code-block:: python
   >>> a = 20 * bitarray('0')
   >>> a[1:15:3] = 5 * bitarray('1')
   >>> a
   bitarray('01001001001001000000')
Note that in the latter case, we have to create a temporary bitarray whose length must be known or calculated. Another example of assigning slices to booleans is setting ranges:
.. code-block:: python
   >>> a = bitarray(30)
   >>> a[:] = 0       # set all elements to 0 - equivalent to a.setall(0)
   >>> a[10:25] = 1   # set elements in range(10, 25) to 1
   >>> a
   bitarray('000000000011111111111111100000')
Bitarray objects support the bitwise operators ``~``, ``&``, ``|``, ``^``, ``<<``, ``>>`` (as well as their in-place versions ``&=``, ``|=``, ``^=``, ``<<=``, ``>>=``). The behavior is very much what one would expect:
.. code-block:: python
   >>> a = bitarray('101110001')
   >>> ~a                 # invert
   bitarray('010001110')
   >>> b = bitarray('111001011')
   >>> a ^ b
   bitarray('010111010')
   >>> a &= b
   >>> a
   bitarray('101000001')
   >>> a <<= 2
   >>> a
   bitarray('100000100')
   >>> b >> 1
   bitarray('011100101')
The C language does not specify the behavior of negative shifts and of left shifts larger or equal than the width of the promoted left operand. The exact behavior is compiler/machine specific. This Python bitarray library specifies the behavior as follows: the length of the bitarray is never changed by any shift operation, blanks are filled by 0, and shifting by a negative amount raises ``ValueError``.
Unless explicitly converting to machine representation, using the ``.tobytes()``, ``.frombytes()``, ``.tofile()`` and ``.fromfile()`` methods, as well as using ``memoryview``, the bit endianness will have no effect on any computation, and one can skip this section.
Since bitarrays allow addressing individual bits, while the machine represents 8 bits in one byte, there are two obvious choices for this mapping: little-endian and big-endian.
When dealing with the machine representation of bitarray objects, it is recommended to always explicitly specify the endianness.
By default, bitarrays use big-endian representation:
.. code-block:: python
   >>> a = bitarray()
   >>> a.endian()
   'big'
   >>> a.frombytes(b'A')
   >>> a
   bitarray('01000001')
   >>> a[6] = 1
   >>> a.tobytes()
   b'C'
Big-endian means that the most-significant bit comes first.
Here, ``a[0]`` is the lowest address (index) and most significant bit, and ``a[7]`` is the highest address and least significant bit.
When creating a new bitarray object, the endianness can always be specified explicitly:
.. code-block:: python
   >>> a = bitarray(endian='little')
   >>> a.frombytes(b'A')
   >>> a
   bitarray('10000010')
   >>> a.endian()
   'little'
Here, the low-bit comes first, because little-endian means that increasing numeric significance corresponds to an increasing address. So ``a[0]`` is the lowest address and least significant bit, and ``a[7]`` is the highest address and most significant bit.
The bit endianness is a property of the bitarray object. The endianness cannot be changed once a bitarray object is created. When comparing bitarray objects, the endianness (and hence the machine representation) is irrelevant; what matters is the mapping from indices to bits:
.. code-block:: python
   >>> bitarray('11001', endian='big') == bitarray('11001', endian='little')
   True
Bitwise operations (``|``, ``^``, ``&=``, ``|=``, ``^=``, ``~``) are implemented efficiently using the corresponding byte operations in C, i.e. the operators act on the machine representation of the bitarray objects. Therefore, it is not possible to perform bitwise operators on bitarrays with different endianness.
When converting to and from machine representation, using the ``.tobytes()``, ``.frombytes()``, ``.tofile()`` and ``.fromfile()`` methods, the endianness matters:
.. code-block:: python
   >>> a = bitarray(endian='little')
   >>> a.frombytes(b'\x01')
   >>> a
   bitarray('10000000')
   >>> b = bitarray(endian='big')
   >>> b.frombytes(b'\x80')
   >>> b
   bitarray('10000000')
   >>> a == b
   True
   >>> a.tobytes() == b.tobytes()
   False
As mentioned above, the endianness cannot be changed once an object is created. However, you can create a new bitarray with a different endianness:
.. code-block:: python
   >>> a = bitarray('111000', endian='little')
   >>> b = bitarray(a, endian='big')
   >>> b
   bitarray('111000')
   >>> a == b
   True
Bitarray objects support the buffer protocol. They can both export their own buffer, as well as import another object's buffer. To learn more about this topic, please read `buffer protocol <>`__. There is also an example that shows how to memory-map a file to a bitarray: `mmapped-file.py <>`__
The ``.encode()`` method takes a dictionary mapping symbols to bitarrays and an iterable, and extends the bitarray object with the encoded symbols found while iterating. For example:
.. code-block:: python
   >>> from bitarray import bitarray
   >>> d = {'H': bitarray('111'), 'e': bitarray('0'),
   ...      'l': bitarray('110'), 'o': bitarray('10')}
   >>> a = bitarray()
   >>> a.encode(d, 'Hello')
   >>> a
   bitarray('111011011010')

The bitarray can then be decoded back into the list of symbols:
.. code-block:: python
>>> a.decode(d) ['H', 'e', 'l', 'l', 'o'] >>> ''.join(a.decode(d)) 'Hello'
Since symbols are not limited to being characters, it is necessary to return them as elements of a list, rather than simply returning the joined string. The above dictionary ``d`` can be efficiently constructed using the function ``bitarray.util.huffman_code()``. I also wrote `Huffman coding in Python using bitarray <>`__ for more background information.
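For instance, a sketch of building a prefix code with ``huffman_code()`` (the symbol frequencies below are made up):

```python
from bitarray import bitarray
from bitarray.util import huffman_code

# huffman_code() maps symbols to bitarrays, based on their frequencies
freq = {'H': 1, 'e': 1, 'l': 2, 'o': 1}
code = huffman_code(freq)

a = bitarray()
a.encode(code, 'Hello')
decoded = ''.join(a.decode(code))  # round-trips back to 'Hello'
```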
When the codes are large, and you have many decode calls, most time will be spent creating the (same) internal decode tree objects. In this case, it will be much faster to create a ``decodetree`` object, which can be passed to bitarray's ``.decode()`` and ``.iterdecode()`` methods, instead of passing the prefix code dictionary to those methods themselves:
.. code-block:: python
   >>> from bitarray import bitarray, decodetree
   >>> t = decodetree({'a': bitarray('0'), 'b': bitarray('1')})
   >>> a = bitarray('0110')
   >>> a.decode(t)
   ['a', 'b', 'b', 'a']
   >>> ''.join(a.iterdecode(t))
   'abba'
The ``decodetree`` object is immutable and unhashable, and its sole purpose is to be passed to bitarray's ``.decode()`` and ``.iterdecode()`` methods.
A ``frozenbitarray`` object is very similar to the bitarray object. The difference is that a ``frozenbitarray`` is immutable and hashable, and can therefore be used as a dictionary key:
.. code-block:: python
   >>> from bitarray import frozenbitarray
   >>> key = frozenbitarray('1100011')
   >>> {key: 'some value'}
   {frozenbitarray('1100011'): 'some value'}
   >>> key[3] = 1
   Traceback (most recent call last):
       ...
   TypeError: frozenbitarray is immutable
bitarray version: 2.3.4 -- `change log <>`__

In the following, ``item`` and ``value`` are usually a single bit - an integer 0 or 1.
bitarray(initializer=0, /, endian='big', buffer=None) -> bitarray

Return a new bitarray object whose items are bits initialized from the optional initial object, and endianness. The initializer may be of the following types:

- ``int``: Create a bitarray of given integer length. The initial values are uninitialized.
- ``str``: Create bitarray from a string of ``0`` and ``1``.
- ``iterable``: Create bitarray from iterable or sequence of integers 0 or 1.

Optional keyword arguments:

- ``endian``: Specifies the bit endianness of the created bitarray object. Allowed values are ``big`` and ``little`` (the default is ``big``). The bit endianness affects the buffer representation of the bitarray.
- ``buffer``: Any object which exposes a buffer. When provided, ``initializer`` cannot be present (or has to be ``None``). The imported buffer may be readonly or writable, depending on the object type.

New in version 2.3: optional ``buffer`` argument.
A bitarray object supports the following methods:

all() -> bool
Return True when all bits in the array are True.
Note that ``a.all()`` is faster than ``all(a)``.

any() -> bool
Return True when any bit in the array is True.
Note that ``a.any()`` is faster than ``any(a)``.

append(item, /)
Append ``item`` to the end of the bitarray.
buffer_info() -> tuple
Return a tuple containing:
bytereverse(start=0, stop=<end of buffer>, /)
Reverse the bit order for the bytes in range(start, stop) in-place. The start and stop indices are given in terms of bytes (not bits). By default, all bytes in the buffer are reversed.
Note: This method only changes the buffer; it does not change the endianness of the bitarray object.
New in version 2.2.5: optional ``start`` and ``stop`` arguments.
clear()
Remove all items from the bitarray.
New in version 1.4.
copy() -> bitarray
Return a copy of the bitarray.
count(value=1, start=0, stop=<end of array>, /) -> int
Count the number of occurrences of ``value`` in the bitarray.
New in version 1.1.0: optional ``start`` and ``stop`` arguments.

decode(code, /) -> list
Given a prefix code (a dict mapping symbols to bitarrays, or a ``decodetree`` object), decode the content of the bitarray and return it as a list of symbols.

encode(code, iterable, /)
Given a prefix code (a dict mapping symbols to bitarrays), iterate over the iterable object with symbols, and extend the bitarray with the corresponding bitarray for each symbol.
endian() -> str
Return the bit endianness of the bitarray as a string (``little`` or ``big``).

extend(iterable, /)
Append all items from ``iterable`` to the end of the bitarray. If the iterable is a string, each ``0`` and ``1`` is appended as a bit (ignoring whitespace and underscores).
fill() -> int
Add zeros to the end of the bitarray, such that the length of the bitarray will be a multiple of 8, and return the number of bits added (0..7).

find(sub_bitarray, start=0, stop=<end of array>, /) -> int
Return the lowest index where sub_bitarray is found, such that sub_bitarray is contained within ``[start:stop]``. Return -1 when sub_bitarray is not found.
New in version 2.1.

frombytes(bytes, /)
Extend the bitarray with raw bytes. That is, each appended byte will add eight bits to the bitarray.

fromfile(f, n=-1, /)
Extend the bitarray with up to n bytes read from the file object f. When n is omitted or negative, reads all data until EOF. When n is provided and positive but exceeds the data available, ``EOFError`` is raised (but the available data is still read and appended).

index(sub_bitarray, start=0, stop=<end of array>, /) -> int
Return the lowest index where sub_bitarray is found, such that sub_bitarray is contained within ``[start:stop]``. Raises ``ValueError`` when the sub_bitarray is not present.
insert(index, value, /)
Insert ``value`` into the bitarray before ``index``.

invert(index=<all bits>, /)
Invert all bits in the array (in-place). When the optional ``index`` is given, only invert the single bit at that index.
New in version 1.5.3: optional ``index`` argument.

iterdecode(code, /) -> iterator
Given a prefix code (a dict mapping symbols to bitarrays, or a ``decodetree`` object), decode the content of the bitarray and return an iterator over the symbols.

itersearch(sub_bitarray, /) -> iterator
Search for the given sub_bitarray in self, and return an iterator over the start positions where sub_bitarray matches self.

pack(bytes, /)
Extend the bitarray from bytes, where each byte corresponds to a single bit. The byte ``b'\x00'`` maps to bit 0, and all other bytes map to bit 1. This method, as well as ``unpack()``, is meant for efficient transfer of data between bitarray objects and other Python objects (for example NumPy's ndarray) which have a different memory view.
pop(index=-1, /) -> item
Return the i-th (default last) element and delete it from the bitarray. Raises ``IndexError`` if the bitarray is empty or the index is out of range.

remove(value, /)
Remove the first occurrence of ``value`` from the bitarray. Raises ``ValueError`` if the value is not present.
reverse()
Reverse all bits in the array (in-place).
search(sub_bitarray, limit=<none>, /) -> list
Search for the given sub_bitarray in self, and return the list of start positions. The optional argument limits the number of search results to the integer specified. By default, all search results are returned.

setall(value, /)
Set all elements in the bitarray to ``value``.
Note that ``a.setall(value)`` is equivalent to ``a[:] = value``.
sort(reverse=False)
Sort the bits in the array (in-place).
to01() -> str
Return a string containing '0's and '1's, representing the bits in the
bitarray.
tobytes() -> bytes
Return the byte representation of the bitarray.
tofile(f, /)
Write the byte representation of the bitarray to the file object f.
tolist() -> list
Return a list with the items (0 or 1) in the bitarray.
Note that the list object being created will require 32 or 64 times more
memory (depending on the machine architecture) than the bitarray object,
which may cause a memory error if the bitarray is very large.
unpack(zero=b'\x00', one=b'\x01') -> bytes
Return bytes containing one character for each bit in the bitarray,
using the specified mapping.
frozenbitarray(initializer=0, /, endian='big', buffer=None) -> frozenbitarray
Return a frozenbitarray object, which is initialized the same way a bitarray
object is initialized. A frozenbitarray is immutable and hashable.
Its contents cannot be altered after it is created; however, it can be used
as a dictionary key.
New in version 1.1.
decodetree(code, /) -> decodetree
Given a prefix code (a dict mapping symbols to bitarrays), create a binary tree object to be passed to ``.decode()`` or ``.iterdecode()``.
New in version 1.6.
Functions defined in the ``bitarray`` module:

bits2bytes(n, /) -> int
Return the number of bytes necessary to store n bits.

get_default_endian() -> string
Return the default endianness for new bitarray objects being created. Unless ``_set_default_endian()`` is called, the return value is ``big``.
New in version 1.3.

test(verbosity=1, repeat=1) -> TextTestResult
Run self-test, and return a unittest.runner.TextTestResult object.
Functions defined in the ``bitarray.util`` module:

This sub-module was added in version 1.2.

zeros(length, /, endian=None) -> bitarray
Create a bitarray of length, with all values 0, and optional endianness, which may be 'big' or 'little'.

urandom(length, /, endian=None) -> bitarray
Return a bitarray of ``length`` random bits (uses ``os.urandom``).
New in version 1.7.

pprint(bitarray, /, stream=None, group=8, indent=4, width=80)
Prints the formatted representation of object on ``stream``, followed by a newline. If ``stream`` is ``None``, ``sys.stdout`` is used. By default, elements are grouped in bytes (8 elements), and 8 bytes (64 elements) per line. Non-bitarray objects are printed by the standard library function ``pprint.pprint()``.
New in version 1.8.
make_endian(bitarray, /, endian) -> bitarray
When the endianness of the given bitarray is different from ``endian``, return a new bitarray, with endianness ``endian`` and the same elements as the original bitarray. Otherwise (endianness is already ``endian``) the original bitarray is returned unchanged.
New in version 1.3.

rindex(bitarray, value=1, start=0, stop=<end of array>, /) -> int
Return the rightmost (highest) index of ``value`` in the bitarray. Raises ``ValueError`` if the value is not present.
New in version 2.3.0: optional ``start`` and ``stop`` arguments.
strip(bitarray, /, mode='right') -> bitarray
Return a new bitarray with zeros stripped from left, right or both ends.
Allowed values for mode are the strings:
left,
right,
both
count_n(a, n, /) -> int
Return lowest index
i for which
a[:i].count() == n.
Raises
ValueError, when n exceeds total count (
a.count()).
parity(a, /) -> int
Return the parity of bitarray
a.
This is equivalent to
a.count() % 2 (but more efficient).
New in version 1.9.
count_and(a, b, /) -> int
Return
(a & b).count() in a memory efficient manner,
as no intermediate bitarray object gets created.
count_or(a, b, /) -> int
Return
(a | b).count() in a memory efficient manner,
as no intermediate bitarray object gets created.
count_xor(a, b, /) -> int
Return
(a ^ b).count() in a memory efficient manner,
as no intermediate bitarray object gets created.
subset(a, b, /) -> bool
Return
True if bitarray
a is a subset of bitarray
b.
subset(a, b) is equivalent to
(a & b).count() == a.count() but is more
efficient since we can stop as soon as one mismatch is found, and no
intermediate bitarray object gets created.
ba2hex(bitarray, /) -> hexstr
Return a string containing the hexadecimal representation of
the bitarray (which has to be multiple of 4 in length).
hex2ba(hexstr, /, endian=None) -> bitarray
Bitarray of hexadecimal representation. hexstr may contain any number
(including odd numbers) of hex digits (upper or lower case).
ba2base(n, bitarray, /) -> str
Return a string containing the base
n ASCII representation of
the bitarray. Allowed values for
n are 2, 4, 8, 16, 32 and 64.
The bitarray has to be multiple of length 1, 2, 3, 4, 5 or 6 respectively.
For
n=16 (hexadecimal),
ba2hex() will be much faster, as
ba2base()
does not take advantage of byte level operations.
For
n=32 the RFC 4648 Base32 alphabet is used, and for
n=64 the
standard base 64 alphabet is used.
See also:
Bitarray representations <>__
New in version 1.9.
base2ba(n, asciistr, /, endian=None) -> bitarray
Bitarray of the base
n ASCII representation.
Allowed values for
n are 2, 4, 8, 16, 32 and 64.
For
n=16 (hexadecimal),
hex2ba() will be much faster, as
base2ba()
does not take advantage of byte level operations.
For
n=32 the RFC 4648 Base32 alphabet is used, and for
n=64 the
standard base 64 alphabet is used.
See also:
Bitarray representations <>__
New in version 1.9.
ba2int(bitarray, /, signed=False) -> int
Convert the given bitarray to an integer.
The bit-endianness of the bitarray is respected.
signed indicates whether two's complement is used to represent the integer.
int2ba(int, /, length=None, endian=None, signed=False) -> bitarray
Convert the given integer to a bitarray (with given endianness,
and no leading (big-endian) / trailing (little-endian) zeros), unless
the
length of the bitarray is provided. An
OverflowError is raised
if the integer is not representable with the given number of bits.
signed determines whether two's complement is used to represent the integer,
and requires
length to be provided.
serialize(bitarray, /) -> bytes
Return a serialized representation of the bitarray, which may be passed to
deserialize(). It efficiently represents the bitarray object (including
its endianness) and is guaranteed not to change in future releases.
See also:
Bitarray representations <>__
New in version 1.8.
deserialize(bytes, /) -> bitarray
Return a bitarray given the bytes representation returned by
serialize().
See also:
Bitarray representations <>__
New in version 1.8.
vl_encode(bitarray, /) -> bytes
Return variable length binary representation of bitarray.
This representation is useful for efficiently storing small bitarray
in a binary stream. Use
vl_decode() for decoding.
See also:
Variable length bitarray format <>__
New in version 2.2.
vl_decode(stream, /, endian=None) -> bitarray
Decode binary stream (an integer iterator, or bytes object), and return
the decoded bitarray. This function consumes only one bitarray and leaves
the remaining stream untouched.
StopIteration is raised when no
terminating byte is found.
Use
vl_encode() for encoding.
See also:
Variable length bitarray format <>__
New in version 2.2.
huffman_code(dict, /, endian=None) -> dict
Given a frequency map, a dictionary mapping symbols to their frequency,
calculate the Huffman code, i.e. a dict mapping those symbols to
bitarrays (with given endianness). Note that the symbols are not limited
to being strings. Symbols may may be any hashable object (such as
None). | https://openbase.com/python/bitarray-hardbyte | CC-MAIN-2021-39 | refinedweb | 3,426 | 50.23 |
The QDomNamedNodeMap class contains a collection of nodes that can be accessed by name. More...
#include <QDomNamedNodeMap>
Note: All the functions in this class are reentrant.
The QDomNamedNodeMap class contains a collection of nodes that can be accessed by name..
Constructs an empty named node map.
Constructs a copy of n.
Destroys the object and frees its resources.
Returns true if the map contains a node called name; otherwise returns false.
Returns the number of nodes in the map.
This function is the same as length().
Retrieves the node at position index.
This can be used to iterate over the map. Note that the nodes in the map are ordered arbitrarily.
See also length().
Returns the number of nodes in the map.
See also item().
Returns the node called name.
If the named node map does not contain such a node, a null node is returned. A node's name is the name returned by QDomNode::nodeName().
See also setNamedItem() and namedItemNS().
Returns the node associated with the local name localName and the namespace URI nsURI.
If the map does not contain such a node, a null node is returned.
See also setNamedItemNS() and namedItem().
Removes the node called name from the map.
The function returns the removed node or a null node if the map did not contain a node called name.
See also setNamedItem(), namedItem(), and removeNamedItemNS().().
Returns true if n and this named node map are not equal; otherwise returns false.
Assigns n to this named node map.
Returns true if n and this named node map are equal; otherwise returns false. | http://doc.trolltech.com/4.0/qdomnamednodemap.html | crawl-001 | refinedweb | 265 | 87.31 |
<oXygen/> comes with DocBook DTDs, XML catalog, XSL stylesheets and document
templates so that one can start creating DocBook documents right away. The articles
from our site are written using DocBook and their sources are available for download.
You can use them as samples to start with! DocBook documents can be converted to HTML,
PDF or PostScript and supports intelligent XML editing, validation, content
completion.
<oXygen/> is able to recognize the DocBook documents based either on the root
element name or namespace. When you switch to the Author mode, the
editor loads both the set of CSS files and the available actions that were associated
in the DocBook configuration.
The action set include operations for emphasising text, creating lists, tables,
sections and paragraphs.
More than this, you can create your own operations for inserting or deleting XML
document fragments.
You can create CALS or HTML tables, join or split cells, add or remove rows
easily. <oXygen/> will create all the column specifications for you.
A CALS and HTML table example. The caret is positioned between two cells.
You can edit the DocBook files using the text/source editing mode of <oXygen/>
XML Editor. The "as you type" validation and the powerful content completion are
always on your side.
A new module file was added in the DocBook DTD distribution adding XInclude
support to the DocBook DTD. There are also document templates (New from templates
action) that allows easy creation of a DocBook document with XInclude support. An
XInclude sample is provided.
<oXygen/> adds by default a root catalog that refers the built-in catalogs for
DocBook documents. These are located in the frameworks/docbook subdirectory of the
installation.
Before transforming the current edited XML document in <oXygen/> one must define
a transformation scenario to apply to that document. A scenario is a set of values for
various parameters defining a transformation.
<oXygen/> has two predefined scenarios for DocBook, one using the XSLs for FO-PDF
output and the other using the stylesheets for HTML. These scenarios can be reused for
any DocBook document. If the predefined settings are not exactly what you need you can
easily change them.
The Apache FOP is bundled inside <oXygen/> and it does not require any special
configuration. Thus you can convert DocBook to PDF just by selecting a scenario and
pressing a button.
Each <oXygen/> XML Editor release bundles the latest DocBook schemas and XSL
stylesheets. At this time, both DocBook 4 and DocBook5 are included. | http://www.oxygenxml.com/docbook_editor.html | crawl-001 | refinedweb | 412 | 56.66 |
What is a transient variable?
What is a transient variable? Hi,
What is a transient variable...) with transient keyword becomes transient variable in java. It will be beneficiary... to transient variable. The Serialization is a process in which the object's state
transient keyword
Friend,
The transient keyword is used to indicate that the member variable should not be serialized when the class instance containing that transient variable... into the persistent but if the variable is declared as transient
java Transient variable - Java Beginners
containing that transient variable is needed to be serialized.
For example if a variable is declared as transient in a Serializable
class and the class... of the variable becomes null
public class Class1{
private transient String
Java Transient Variable
Java Transient Variables
Before knowing the transient variable you should firs... mark that object as
persistent. You can also say that the transient variable is a variable whose
state does not Serialized.
An example of transient
What is Transient variable in Java
Transient variable in Java is used to show that a particular field should not be serialized when the class instance containing that transient variable being... from getting converted, one must declare the variable Transient keyword Interview,Corejava questions,Corejava Interview Questions,Corejava
variable for
your class name.
Q 11 : What... only to the unique instances,
* permits a variable number of instances
CoreJava
corejava
Create Session Variable PHP
Create Session Variable PHP hi,
Write a program which shows the example of how to create session variable in PHP. Please suggest the reference... session Variable in PHP. May be this reference will solve your query.
Thanks
I want to store the value of local variable into Global variable
I want to store the value of local variable into Global variable <%=cnt%>=x; is it a valid Statement
What is a local, member and a class variable?
What is a local, member and a class variable? Hi,
What is a local, member and a class variable?
thanks
javascript variable value insertion in DB
javascript variable value insertion in DB how can I insert javascript variable value into database using php
Extracting variable from jsp
Extracting variable from jsp how to Write a java program which will be extracting the variables and putting them in an excel format?
The given code allow the user to enter some fields and using the POI API
java basics
when the class instance containing that transient variable is needed...java basics What is the use of transient keyword? The transient keyword is applicable to the member variables of a class.
The transient
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/83367 | CC-MAIN-2015-32 | refinedweb | 447 | 55.44 |
Investment Basics - Course 101 - Stocks and ETFs Versus Other Investments
This is the first course in a series of 38 called "Investment Basics" - created by Professor Steven Bauer, a retired university professor and still a proactive asset manager and consultant / mentor.
Stocks & ETFs Versus Other Investments
Introduction:
We all have financial goals in life: to pay for college for our children, to be able to retire by a reasonable age, to buy and own the things we need. However, you must not discount the importance of "experience." Combine these with an understanding of how money flows and how businesses compete with one another, along with a dash of accounting knowledge, and you have all the mental tools needed to get started. Then it's a matter of discipline, practice and experience.
Prof's. Guidance: I will teach you all these things and more over the coming weeks. operates, like it or not.
What Is a Stock?
Perhaps the most common misperception among new investors is that stocks are simply pieces of paper to be traded. This is simply not the case. In stock investing, trading is a means, not an end.
A stock is an ownership interest in a company. A business or company is started by a person or small group of people who put their money in, as seed capital investment. How much of the business each founder owns is a function of how much money each invested. At this point, the company is considered "private." Once a business reaches a certain size, the company may decide to "go public" and sell a chunk of itself to the investing public. This is how stocks are created, and how you can participate.
When you buy a stock, you become a business owner. Period. Over the long term, the value of that ownership stake will rise and fall according to the success of the underlying business. The better the business does, the more your ownership stake will be worth.
Prof's. Guidance: This is best measured by the "earnings" of the company. And the earnings are subject to many variables.
Why Invest in Stocks?
Stocks are but one of many possible ways to invest your hard-earned money. Why choose stocks instead of other options, such as bonds, rare coins, or antique sports cars, Etc? Quite simply, the reason that savvy investors invest in stocks is that they have historically provided the highest potential returns. And over the long term, no other type of investment tends to perform better.
On the downside, stocks tend to be one of the most volatile investments. This means that the value of stocks can drop in the short term. Sometimes stock prices may fall for a protracted period. For instance, those who put all their savings in stocks in early 2000 are probably still underwater today. Bad luck or bad timing can easily sink your returns, but you can minimize this by taking a long look and different investing approaches.
There's also no guarantee you will actually realize any sort of positive return. If you have the misfortune of consistently picking stocks that decline in value, you can obviously lose money.
Prof's. Guidance: That is why you are taking your time to learn. Of course, I think that by educating yourself and using the knowledge in these courses, you can make the risk acceptable relative to your expected reward. I will help you pick the right companies to own and help you spot the ones to avoid. Again, this effort is well worth it, because over the long haul, your money can work harder for you in equities than in just about any other investment.
ETFs (Exchange Traded Funds)
Exchange Trade Funds (ETFs) are very much like mutual funds. That is, they are baskets of stock that are bought and sold, just like stock. They differ from mutual funds in that shares of ETFs can be traded at any time while the host stock market is open.
Many ETFs are based on an Index, Sector, Industry Group, County, Commodity, etc. making them exchange traded (specialty) funds.
For example: An Index fund is a passively managed collection of stocks that Index. One of the more common Index funds is one that closely matches the holdings and performance of the Standard & Poor's 500 - Index (S&P 500).
Prof's. Guidance: I recommend sticking with the big name ETF firms such as, iShares, PowerShares, or ProShares - that is if they have an ETF of interest.
Other Basic financial analysts, held 50 - 100 or more stocks, it would be very unlikely that all of those stocks become worthless.
The flip, often more than you may be aware of. The professionals running mutual funds do not do so for free. They charge fees, and fees eat into returns.
Plus, the more money you have invested in mutual funds, the larger the absolute value of fees you will pay every year. For instance, paying 1%, 2% or even 3% the advent of $10 (or less) per-trade commissions on stocks, this is no longer the case.
Just as picking the wrong stock is a risk, so is picking the wrong fund. What if the group of people you selected to manage your investment does not perform well? Just like stocks, there is no guarantee of a return in mutual funds.
It's also worth noting that investing in a mix of mutual funds and stocks can be a perfectly prudent strategy. Stocks versus funds (or any other investment vehicle) is really a personal decision.
Bonds. At their most basic, bonds are loans. When you buy a bond, you become a lender to an institution, and that institution pays you interest. As long as the institution does not go bankrupt, it will also pay back the principal on the bond, but no more than the principal.
There are two basic types of bonds: government bonds and corporate bonds. U.S. government bonds (otherwise known as T-bills or Treasuries) are issued and guaranteed (in the US) by Uncle Sam. They typically offer a modest return with low risk. Corporate bonds are issued by companies and carry a higher degree of risk (should the company default) as well as return.
Bond investors must also consider interest rate risk. When prevailing interest rates rise, the market value of existing bonds tends to fall. (The opposite is also true.) The only way to alleviate interest rate risk is by holding the bond to maturity. Investing in corporate bonds also tends to require just as much homework as stock investing, yet bonds generally have lower returns.
Given their lower risk, there is certainly a place for bonds - but be weary of owning bond mutual funds in most portfolios, but their relative safety comes with the price of lower expected returns compared with stocks over the long term.
Real Estate. Most people's homes are indeed their largest investments. We all have to live somewhere, and a happy side effect is that real estate tends to appreciate in value over time. But if you are going to use real estate as a true investment vehicle by buying a second home, a piece of land, or a rental property, it's important to keep the following in mind.
First, despite the exceptionally strong appreciation real estate values have had in the past, real estate can and does occasionally decline in value. Second, real estate taxes will constantly eat into returns. Third, real estate owners must worry about physically maintaining their properties or must pay someone else to do it. Likewise, they often must deal with tenants and collect rents. Finally, real estate is rather illiquid and takes time to sell--a potential problem if you need your money back quickly.
Some people do nothing but invest their savings in real estate, but just as stock investing requires effort, so does real estate investing.
Bank Savings Accounts. The problem with bank savings accounts and certificates of deposit is that they offer very low returns. The upside is that there is essentially zero risk in these investment vehicles, and your principal is protected. These types of accounts are fine as rainy-day funds--a place to park money for short-term spending needs or for an emergency. But they really should not be viewed as long-term investment vehicles.
The low returns of these investments are a problem because of inflation. For instance, if you get a 3% return on a savings account, but inflation is also dropping the buying power of your dollar by 3% a year, you really aren't making any money. Your real return (return adjusted for inflation) is zero, meaning that your money is not really working for you at all.
Prof's. Guidance: Just so you will know, my personal investment focus is the stock market and investing in Companies and ETFs.
Wrapping Up:
Though investing in stocks may indeed require more work and carry a higher degree of risk compared with other investment opportunities, you cannot ignore the higher potential return that stocks provide. And as I will share in the next course, given enough time, a slightly higher return on your investments can lead to dramatically larger dollar sums for whatever your financial goals in life may be.
Quiz 101
There is only one correct answer to each question.
- Which of the following types of investments provide the largest long-term returns?
- Stocks.
- Bonds.
- Savings accounts.
- Which of the following types of investments are the most volatile in their pricing?
- Stocks.
- Bonds.
- Savings accounts.
- Which of the following skills sets is NOT needed to be a successful investor?
- Discipline.
- A critical eye.
- Advanced statistics.
- Over the long term, which type of investment provides the lowest real (inflation adjusted) returns?
- Stocks.
- Mutual funds.
- Savings accounts.
- When you buy a stock, you are:
- Making a loan to a company.
- Buying an ownership interest in a company.
- Investing in the government.
Thanks for attending class this week - and - don't put off doing some extra homework (using Google - type "info" and the word or question) and sharing with or asking the Prof. questions and concerns.
Investment Basics (a 38 Week - Comprehensive Course)
By: Professor Steven Bauer
Text: Google has the answers
Junior Year
Course 301 - The Income Statement
Course 302 - The Balance Sheet
Course 303 - The Statement of Cash Flows
Course 304 - Interpreting the Numbers
Course 305 - Quantifying Competitive Advantages
Senior?
TweetTweet | http://www.safehaven.com/article/18039/investment-basics-course-101-stocks-and-etfs-versus-other-investments | crawl-003 | refinedweb | 1,741 | 62.88 |
Many experts recognize that the government will still step in to support some financial institutions rather than allow them to go through bankruptcy. This “too-big-to-fail” doctrine remains at least as prominent now—and as costly to taxpayers—as it was prior to the 2008 crisis, partly because the Dodd–Frank bill exacerbated the problem. For instance, in the post–Dodd–Frank world, any firm deemed a high risk to U.S. financial stability enjoys implicit government protection.
One of the many ways in which Dodd–Frank worsened the too-big-to-fail problem is its expansion of the capital requirements that contributed to the 2008 financial crisis. For decades, federal regulators have required banks to hold a certain amount of capital based on how much money they lend to customers. These rules are supposed to force banks to build a cushion against unexpected losses, but they ultimately contributed to the financial meltdown because they were filled with arbitrary measures of risk.
Although quite simple in theory, these capital requirements have always been incredibly complex, and Dodd–Frank has only made the situation worse. The new risk-based requirements are not yet fully implemented but have already placed an enormous regulatory burden on financial firms, even small banks for which these rules were never intended. There is no reason to believe that these new capital regulations will prevent or even mitigate future financial crises, much less solve the too-big-to-fail problem. Implementing these rules will most likely impede economic growth without any real reduction in systemic risk.
Ending Too Big to Fail
The best way to end too big to fail would be for the government to announce credibly that it will not use taxpayer funds to support failing firms. A credible commitment to let troubled firms fail would alleviate the need for regulatory capital standards because markets would price risk and develop their own capital standards accordingly. Such a commitment is not possible in the current environment, so the best way to lessen the impact of the too-big-to-fail problem is to make regulatory changes that can lead to a believable no-bailout policy.
For example, people would be likely to lower their expectations of government bailouts if banks’ capital requirements were reformed to make financial distress less disruptive to the economy.[1] Despite many different proposals to reform capital standards, Dodd–Frank essentially imposed an updated version of the requirements that were in place before the 2008 crisis. This development is counterproductive because risk-based capital requirements, a centerpiece of the Dodd–Frank rules, were a key contributor to the meltdown.
Risk-based standards are not part of the solution to the too-big-to-fail problem. A better approach, more in line with basic free-market principles, would be to simplify, lower, and improve the incentive effects of capital standards. This plan should be implemented by eliminating risk-based capital standards and removing other costly regulations, which ultimately lead bank managers to use more debt to increase their shareholders’ returns.[2] Merely increasing the percentage of required equity that banks must hold against their assets does not solve the problems that contributed to the 2008 crisis. Instead, increasing the percentage of required equity arguably amplifies those difficulties.
The Main Problem with Higher Capital Standards
In the present context, capital refers to money that people can use to run a corporation. Business owners can raise this money by borrowing or by selling shares of equity in their company.[3] While borrowed funds must be paid back to avoid bankruptcy, money raised by selling equity does not need to be repaid. In the event a firm fails, equity holders can lose all of their investment while the firm’s assets are sold to pay back the lenders. Thus, investors who buy equity in a business take on more risk than those who lend money to the company.
In general, firms do not employ large amounts of equity capital because it is too expensive and because it produces incentives to take high risks. When a company enjoys abnormally high profits, only the shareholders benefit because lenders agree (ahead of time) to receive a fixed rate of interest for providing funds. For instance, lenders would be due 4 percent interest on their investment in the company whether the firm has minimal, abnormally large, or zero profit. Managers that employ large amounts of equity capital (relative to debt) have an incentive to take on high-risk, high-reward projects to satisfy shareholders’ required return.
Therefore, from a bank safety standpoint, requiring too much equity capital is a bad idea because only shareholders can profit from these high-risk earnings. Excessively high equity requirements also impose higher costs that, to some extent, will be passed on to customers through some combination of higher interest rates and less lending. Thus, while requiring any given percentage of equity for financial firms is somewhat arbitrary, setting them “too high” will most likely be self-defeating. This arbitrary nature applies even to the risk-based capital standards that have been used for decades because markets have essentially never determined bank capital standards.
The Basel I Risk-Based Capital Standards
The Federal Reserve and the Federal Deposit Insurance Corporation (FDIC) jointly adopted risk-based capital requirements for U.S. commercial banks in 1988. These rules, phased in through 1990, were based on the Basel I accords, an international agreement reached through the Basel Committee on Banking and Supervision.[4] In recognition of the high cost and inherent problems associated with equity capital, the Basel accords sought to better match capital requirements to the risk level of banks’ assets.
Under these rules, U.S. commercial banks have been required to maintain several different minimum equity capital ratios. Banks that fail to meet these requirements can ultimately be dissolved by the FDIC.[5] U.S. banking regulators were implementing Basel II, an updated version of the original rules, at the onset of the financial crisis. As a result of the crisis, regulators stopped that process and, instead, went to work on developing Basel III. These newest rules have not yet been fully implemented in the U.S.
Basel’s Tiers and Risk Weights. The Basel I rules use a tiered definition of capital that distinguishes between different “qualities” of capital. In this framework, Tier 1 (core) capital consists of common stock, retained earnings, some preferred stock, and certain intangible assets.[6] Tier 2 (supplementary) capital includes reserve allowances for loan losses, several types of debt, other types of preferred stock, and several types of debt/equity hybrid instruments.[7] A detailed discussion of these components is beyond the scope of this paper, but these definitions of Tier 1 and Tier 2 capital highlight the difficulty of defining exactly what makes up a bank’s capital.
Under the Basel I rules, regulators determine whether a bank is adequately capitalized by using these tiered capital figures to calculate several ratios. For instance, a bank is considered adequately capitalized if its ratio of total capital (the sum of Tier 1 and Tier 2 capital) to total risk-weighted assets is at least 8 percent and if its ratio of Tier 1 capital to total risk-weighted assets is at least 4 percent. To calculate its risk-weighted assets, a bank must apply a predefined (by regulators) weight to each asset on its balance sheet as well as to “off-balance-sheet” assets.[8]
Table 1 provides a simplified example of how these risk weights are used to calculate a bank’s total capital ratio. Each asset’s risk weight is provided in the middle column. The risk weight is used to calculate both the required amount of capital and the total amount of the bank’s risk-weighted assets. The bank’s total risk-weighted assets, rather than total assets, are used to calculate the capital ratio. The riskier the asset is perceived, the more capital is required in case that asset loses value. Because cash and U.S. Treasury securities are deemed risk-free, no capital is required against these assets. Hence, they have risk weights of zero.
Basel I assigned risk weights of 20 percent for government-sponsored enterprise (GSE) mortgage-backed securities (MBS), so they contribute only $1,000 to this hypothetical bank’s risk-weighted assets ($5,000 x 0.20 = $1,000).[9] This bank would also be required to hold $80 in capital against its MBS ($5,000 x 0.20 x 0.08 = $80). At the other end of the perceived risk spectrum, commercial loans have a risk weight of 100 percent, so every dollar of these loans counts as a dollar of risk-weighted assets, and the bank must hold the full 8 percent in total capital.
As shown on Table 1, the total capital ratio for this bank is 8 percent (1,040/13,000 = 0.08). However, measured against the bank’s total assets, this amount represents less than 8 percent. In other words, the risk weights reduce the total amount of required capital versus a non-weighted scheme. More specifically, the weights allow the bank to hold capital of less than 5 percent of its total assets.
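The arithmetic behind this example can be sketched in a few lines of code. The risk weights and the $5,000 of MBS come from the text; the other dollar amounts are hypothetical, chosen only so that the totals match the article's figures ($13,000 in risk-weighted assets and $1,040 in required capital).

```python
# Basel I-style risk-weighted capital calculation (illustrative sketch).
# Risk weights are expressed in percent and divided out at the end so the
# arithmetic stays exact.

RISK_WEIGHTS_PCT = {        # Basel I risk weights cited in the text
    "cash": 0,
    "us_treasuries": 0,
    "gse_mbs": 20,
    "home_mortgages": 50,
    "commercial_loans": 100,
}

balance_sheet = {           # dollar amounts
    "cash": 1000,           # hypothetical
    "us_treasuries": 2000,  # hypothetical
    "gse_mbs": 5000,        # from the text
    "home_mortgages": 8000, # hypothetical
    "commercial_loans": 8000,  # hypothetical
}

MIN_TOTAL_CAPITAL_RATIO_PCT = 8  # "adequately capitalized" threshold

# Each asset contributes (amount x risk weight) to risk-weighted assets.
risk_weighted_assets = sum(
    amount * RISK_WEIGHTS_PCT[asset]
    for asset, amount in balance_sheet.items()
) / 100

required_capital = risk_weighted_assets * MIN_TOTAL_CAPITAL_RATIO_PCT / 100
total_assets = sum(balance_sheet.values())

print(risk_weighted_assets)              # 13000.0
print(required_capital)                  # 1040.0
print(required_capital / total_assets)   # well under 0.05 of total assets
```

Note how the zero weights on cash and Treasuries, and the 20 percent weight on MBS, shrink the denominator: the bank holds $1,040 against $24,000 of total assets (about 4.3 percent) while still reporting an 8 percent risk-based ratio.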
While somewhat oversimplified, this example replicates the manner in which banks were required to estimate their capital ratios under Basel I. Although brief, the example provides a glimpse into the complexity and subjectivity of estimating bank safety and soundness under these rules. For example, MBS proved to be much riskier than even regulators thought.
That mistake is not entirely surprising because regulators set risk weights based on how risky they think various assets will be in the future—a process that is inherently error prone. Mistakes are likely not only because people lack clairvoyance, but also because the process essentially prevents the private sector from finding norms for capital requirements. In other words, managers have been forced to adhere to—and influence—arbitrary standards as opposed to letting their own losses dictate the amount of capital they should hold.[10] Although it is not entirely clear that any form of legal capital requirements is economically necessary, the Basel risk-based system certainly did not provide financial safety.
Basel I and the 2008 Financial Crisis
In the wake of the 2008 crisis, the Basel I risk-based standards were clearly inadequate. According to the FDIC, U.S. commercial banks exceeded their minimum capital requirements by 2 to 3 percentage points (on average) for six years leading up to the crisis.[11] One factor contributing to the risk-based standards’ failure was that purchasing MBS enabled banks to reduce their capital and remain (nominally) adequately capitalized.
As noted, the Basel I capital standards called for banks to maintain 8 percent total capital against their risk-weighted assets. This system gave banks the incentive to invest in assets with low risk weights to reduce their cost of capital. Banks employed this strategy by investing heavily in the MBS issued by Fannie Mae and Freddie Mac, two GSEs. The MBS carried only a 20 percent risk weight and had the advantage of providing a higher return than government bonds.
Banks needed to hold only $1.60 in capital per $100 of MBS ($100 x 0.08 x 0.20 = $1.60) because of the lower risk weight. Home mortgages, on the other hand, carried a 50 percent risk weight, requiring capital of $4 for every $100 ($100 x 0.08 x 0.50 = $4.00). Thus, selling its mortgages to GSEs and then buying GSE-issued MBS allowed banks to lower their required capital by 60 percent (from $4 to $1.60) while earning a return (on MBS) that was higher than what was available on risk-free securities.
Stating these facts is not meant to suggest that banks “gamed” the system or did anything nefarious by purchasing MBS. In fact, there is very little reason to believe that banks thought the MBS they were buying would lose so much value—bank managers tend to prefer staying in business, after all. Regardless, the Basel requirements were—and still are—a system designed to match lower capital requirements against lower risk assets.
It is this part of the Basel standards that broke down. The Basel standards were inadequate not because they required too little capital per se, but because regulators failed to measure risk properly. This problem will always exist because the true risk of any financial asset can never be known until after the fact. People poorly estimated the risk of MBS prior to 2008, but the same could have happened with virtually any other asset.
Dodd–Frank and Basel III
The 2010 Dodd–Frank Wall Street Reform and Consumer Protection Act required federal banking agencies to develop countless rules and regulations. Although the legislation did not explicitly require adoption of the Basel III rules, the bill included language—mostly in Sections 165 and 171—that effectively directed federal banking agencies to implement the Basel III proposals.[12] These new regulations go well beyond minor adjustments to banks’ capital requirements..[13] In general, the new Basel III rules are supposed to be an improvement over earlier versions because they apply a “macro” regulatory view as opposed to micro-level scrutiny.
This approach is supposed to be better because Basel III’s predecessors focused too much on the safety and soundness of individual institutions. Purportedly, the new rules are tailored to prevent financial difficulties at any one institution from carrying over into the broader economy. One problem with this claim is that it ignores a basic justification for creating the Federal Reserve. Congress created the Fed in 1913 to prevent banking crises from causing widespread economic harm, not to save a few individual banks. Yet the new rules are supposed to improve financial stability because now the Fed will finally shift to a macro-oriented view of regulation.
The Fed, Congress, and the U.S. Treasury have openly discussed their roles in stemming economy-wide systemic risk and financial stability for decades. In fact, these concepts were mentioned in Federal Reserve testimony before the House Subcommittee on Economic Stabilization in 1991, shortly after the Basel I accords were accepted.[14] Aside from these issues, no empirical evidence shows that any of the new Basel III regulations will prevent financial crises any better than the old rules did.[15] The fact that some of the most glaring weaknesses of the original Basel framework remain unchanged in the Basel III rules offers little hope for success. For example, Fannie and Freddie MBS still carry only a 20 percent risk weight.[16]
A Better Approach
Specific capital requirements could be improved in many ways, but any changes should be balanced against the enormous regulatory burden that Dodd–Frank imposed on financial firms. For example, a much simpler approach to minimum capital ratios would be to require banks to maintain a 5 percent common equity to total asset ratio. However, regulators should move cautiously because this sort of “flat” capital requirement would actually increase the capital buffer that many banks hold. Simply increasing capital ratios without reducing banks’ regulatory burden will likely harm economic growth and do nothing to improve managerial incentives toward taking risks.
Other proposals, such as requiring banks to issue contingent convertible bonds (CoCos), could supplement simplified capital requirements and mitigate excessive risk taking. CoCo bonds serve the dual purpose of bringing market discipline to firm managers and, in the event of financial stress, automatically providing new equity capital through the private sector. Several different types of CoCos have been proposed, but they all share the same basic principles: They are issued as long-term debt securities (bonds) that may convert into shares of equity if the firm runs into financial trouble.[17]
Ideally, CoCos convert from debt to equity when a pre-agreed trigger event occurs. For instance, a capital-ratio trigger would impose conversion if the firm’s capital ratio falls below its required minimum.[18] Naturally, this type of CoCo would still require an arbitrarily selected minimum capital ratio because markets have not been allowed to determine the “correct” amount of CoCos that banks should hold. Ultimately, policymakers should fix this glaring weakness in the financial industry and let banks interact with their customers to determine what their capital requirements should be.
What Congress Should Do
The Basel risk-weighted capital standards have proven inadequate. Congress should direct regulators to replace the Basel III capital standards that are being implemented with standards that do not require subjective risk assessments of individual assets. Simultaneously, Congress should begin to dismantle the regulations that Dodd–Frank imposed on the financial sector.
Going forward, Congress’s best courses of action include:
- Repealing the Dodd–Frank Wall Street Reform and Consumer Protection Act.
- Short of a full repeal of the Dodd–Frank act, repealing Title I and Title II of Dodd–Frank or, at the very least, eliminating the Financial Stability Oversight Council.
- Until these changes are politically possible, allowing banks to opt out of all federal banking regulations and government assistance if they convert to a partnership entity. This option should be paired with an explicit statement that these entities will not be eligible for any federal assistance, including FDIC deposit insurance.
The best way to ensure that firms do not take undue risk is to state credibly that owners and creditors—not taxpayers—will be responsible for financial losses. Such a commitment is not possible in the current environment, so Congress can lessen the impact of the too-big-to-fail problem by making the structural changes suggested above. In exchange for relief from the federal regulatory burden, Congress can allow bank owners to assume the risk of their operation, as should be the case with any businesses in any sector of the economy.
Conclusion
The desire to end the too-big-to-fail problem has led to calls for everything from steep increases in capital requirements to arbitrarily breaking up financial institutions deemed too large. These types of proposals are not the answer because they would unduly harm consumers and would not end government bailouts. The new Dodd–Frank rules are similarly misguided because they essentially force financial institutions to comply with recycled versions of old risk-based capital, leverage, and liquidity standards that have already proven themselves inadequate.
The best way to end the too-big-to-fail problem is through a credible federal commitment not to use taxpayer funds to save financially troubled companies. A credible commitment to let firms fail would allow markets to price risk as accurately as possible and would alleviate the need for formal capital standards. A good first step toward making such a commitment believable would be to eliminate subjective risk projections from capital requirements and to expose financial firms’ managers to more market discipline.
—Norbert J. Michel, PhD, is a Research Fellow in Financial Regulations in the Thomas A. Roe Institute for Economic Policy Studies, and John L. Ligon is Senior Policy Analyst and Research Manager in the Center for Data Analysis, at The Heritage Foundation. | https://www.heritage.org/markets-and-finance/report/basel-iii-capital-standards-do-not-reduce-the-too-big-fail-problem | CC-MAIN-2019-04 | refinedweb | 3,190 | 50.06 |
Consider the following program that finds the second prime number between 1000 and 10000:
((1000 to 10000) filter isPrime)(1)
This is much shorter than the recursive alternative:
def secondPrime(from: Int, to: Int) = nthPrime(from, to, 2) def nthPrime(from: Int, to: Int, n: Int): Int = if (from >= to) throw new Error("no prime") else if (isPrime(from)) if (n == 1) from else nthPrime(from + 1, to, n - 1) else nthPrime(from + 1, to, n)
But from a standpoint of performance, the first version is pretty bad; it constructs
all prime numbers between
1000 and
10000 in a list, but only ever looks at
the first two elements of that list.
Reducing the upper bound would speed things up, but risks that we miss the second prime number all together.
However, we can make the short-code efficient by using a trick:
This idea is implemented in a new class, the
Stream.
Streams are similar to lists, but their tail is evaluated only on demand.
Streams are defined from a constant
Stream.empty and a constructor
Stream.cons.
For instance,
val xs = Stream.cons(1, Stream.cons(2, Stream.empty))
Let's try to write a function that returns a
Stream representing a range of numbers
between
lo and
hi:
def streamRange(lo: Int, hi: Int): Stream[Int] = if (lo >= hi) Stream.empty else Stream.cons(lo, streamRange(lo + 1, hi))
Compare to the same function that produces a list:
def listRange(lo: Int, hi: Int): List[Int] = if (lo >= hi) Nil else lo :: listRange(lo + 1, hi)
The functions have almost identical structure yet they evaluate quite differently.
listRange(start, end)will produce a list with
end - startelements and return it.
streamRange(start, end)returns a single object of type
Streamwith
startas head element.
tailon the stream.
Stream supports almost all methods of
List.
For instance, to find the second prime number between 1000 and 10000:
(streamRange(1000, 10000) filter isPrime)(1)
The one major exception is
::.
x :: xs always produces a list, never a stream.
There is however an alternative operator
#:: which produces a stream.
x #:: xs == Stream.cons(x, xs)
#:: can be used in expressions as well as patterns.
The implementation of streams is quite close to the one of lists.
Here's the trait
Stream:
trait Stream[+T] extends Seq[T] { def isEmpty: Boolean def head: T def tail: Stream[T] … }
As for lists, all other methods can be defined in terms of these three.
Concrete implementations of streams are defined in the
Stream companion object.
Here's a first draft:
object Stream { def cons[T](hd: T, tl: => Stream[T]) = new Stream[T] { def isEmpty = false def head = hd def tail = tl override def toString = "Stream(" + hd + ", ?)" } val empty = new Stream[Nothing] { def isEmpty = true def head = throw new NoSuchElementException("empty.head") def tail = throw new NoSuchElementException("empty.tail") override def toString = "Stream()" } }
The only important difference between the implementations of
List and
Stream
concern
tl, the second parameter of
Stream.cons.
For streams, this is a by-name parameter: the type of
tl starts with
=>. In such
a case, this parameter is evaluated by following the rules of the call-by-name model.
That's why the second argument to
Stream.cons is not evaluated at the point of call.
Instead, it will be evaluated each time someone calls
tail on a
Stream object.
The other stream methods are implemented analogously to their list counterparts.
For instance, here's
filter:
class Stream[+T] { … def filter(p: T => Boolean): Stream[T] = if (isEmpty) this else if (p(head)) cons(head, tail.filter(p)) else tail.filter(p) }
Consider the following modification of
streamRange. When you write
streamRange(1, 10).take(3).toList what is the value of
rec?
var rec = 0 def streamRange(lo: Int, hi: Int): Stream[Int] = { rec = rec + 1 if (lo >= hi) Stream.empty else Stream.cons(lo, streamRange(lo + 1, hi)) } streamRange(1, 10).take(3).toList rec shouldBe res0
The proposed
Stream implementation suffers from a serious potential performance
problem: If
tail is called several times, the corresponding stream
will be recomputed each time.
This problem can be avoided by storing the result of the first
evaluation of
tail and re-using the stored result instead of recomputing
tail.
This optimization is sound, since in a purely functional language an expression produces the same result each time it is evaluated.
We call this scheme lazy evaluation (as opposed to by-name evaluation in
the case where everything is recomputed, and strict evaluation for normal
parameters and
val definitions.)
Haskell is a functional programming language that uses lazy evaluation by default.
Scala uses strict evaluation by default, but allows lazy evaluation of value definitions
with the
lazy val form:
lazy val x = expr
Using a lazy value for
tail,
Stream.cons can be implemented more efficiently:
def cons[T](hd: T, tl: => Stream[T]) = new Stream[T] { def head = hd lazy val tail = tl … }
val builder = new StringBuilder val x = { builder += 'x'; 1 } lazy val y = { builder += 'y'; 2 } def z = { builder += 'z'; 3 } z + y + x + z + y + x builder.result() shouldBe res0 | https://www.scala-exercises.org/scala_tutorial/lazy_evaluation | CC-MAIN-2017-39 | refinedweb | 854 | 63.7 |
October 2008
INSIDE:
Part 2 of our LUMA interview • Introversion speaks about Multiwinia • Audacity explained • More Blender • Did someone ask about Level Design? • Quad Trees • News + Reviews + Other stuff too!
REGULARS
3
Editorial
4
Netbriefs
Features
6
Not so Introverted
Mark Morris takes some time out of his making-awesome-games time to answer a few of our questions
10
Blurring the lines (Pt2)
We finish up our interview with the big boys at Luma, chatting about their next offering: BLUR.
Reviews
15
Glyph Hunter
Braaaaaaaaaaaains! And lots of dying!
17
Lost Garden
Art, design, and game challenges. Awesome.
Development
19
Level Design
Ever wondered about effective level design? Of course you have! Look no further as we delve into just that!
23
Audacity
Oh dear! Nandrew is at it again! This time he’s looking to make some neat sound effects using Audacity!
28
Quad Trees
OMG it’s long! But entirely worth the read! We take a look at using Quad trees to represent 2D data.
37
Blender
The only time you can say “Rigid Body” without slapping an R-rating on the front cover! Warning: Involves “solid objects”
Tailpiece
42
Game.Dev Comp History
Part 1 of our Competition retrospective looks at the first 10 challenges!
DreamBuildPlay is done, rAge preparations are nearing conclusion and we’re all looking forward to a crazy weekend where most of the Dev.Mag staff and Game.Dev community will actually meet each other, some for the first time. It’s odd to consider that, even though Dev.Mag has been running for nearly three years, a large majority of the people who contribute to this magazine on a monthly basis are still faceless pseudonyms to me. Such is the curious nature of an online venture, where contributors are so disjoint yet still rather intimately connected.

And all the above means the busiest month of the year will be over by the time you read this. Although that last statement was a bit of a hopeful conjecture on my part, mostly because I cannot imagine a month that was busier than the last. More accurately, I cannot imagine the aftermath of such a month; finishing the largest game project I’ve ever been involved in as well as preparing for what is, quite possibly, the most important event of the year qualifies any month as Freakishly Busy. No person should need to endure many of these every year, and I shall certainly look forward to a break after the rAge dust storm settles.

However, in hindsight, all the effort was well worth it. By the end of it all, Game.Dev as a community will have two complete Xbox games under its collective belt, both of which were entered into a high-profile competition where even the slightest honourable mention will have a potentially gargantuan effect for our little fellowship of developers. Needless to say, we’re all incredibly excited and hopeful. And proud.

That’s enough blathering, though. Now comes the fun part, where I get to tell you about all the things that you’d already find on the index page, just in many more words. Most importantly, we have two feature interviews this month – a first for us. One is an excellent chat with Mark Morris of Introversion on Multiwinia, the other being the conclusion of the Luma interviews we started last month. We’ve got another audio-related piece discussing the interrelationships between bwumphs and blonks and other words that make spellcheckers cry, and our tailpiece goes back and looks at the history of Game.Dev’s regular competitions, discussing what aspiring designers could learn from them, whether or not they have participated.

That’s it for this month. Read, enjoy and get out there and make games! Oh, and finally: rAge!

~ Claudio, Editor

Claudio “Claudius” de Sa
James “Calamitas” Etherington-Smith
Quinton “Voluptarius” Bronkhorst
Rodain “Venustas” Joubert
Simon “Tr0jan” de la Rouviere
Ricky “SecusObdormio” Abell
William “Cairnswm...us?” Cairns
Danny “RecitoPessime” Day
Andre “Fengolus” Odendaal
Luke “FrigusManus” Lamothe
Rishal “IntegerProprius” Hurbans
Gareth “GazzanusEnios” Wilcock
Sven “TergumChucciaios” Bergstrom
Kyle “ErepoCaudos” van Duffelen
Chris “LeoPenitus” Dudley
Herman Tulleken
Robbie “Squid” Fraser

devmag@gmail.com

This magazine is a project of the South African Game.Dev community. Visit us at:

All images used in the mag are copyright and belong to their respective owners.

I’ve stealthily hidden a purple ninja somewhere in this issue. If you find him, stop taking drugs immediately! Also: burning ducks. Why has no one made this a basis for a game yet?
Shred Nebula Design Documents released.
com/features/603/documents_of_newly_published_xbox_.php
In an unprecedented move, James Goddard of CrunchTime Games released both design documents that were used during the Xbox Live Arcade pitch of his game, Shred Nebula. Both documents - a ’60 seconds of gameplay’ essay and the actual pitch/design document - are freely available to download and view at the site above. These should provide a valuable insight into how the usually hidden internal processes work for XBLA and, by extension, for most publishing deals.

Luma Arcade on InstantAction.
com/2008/07/blur-game-forme.html
The local team over at Luma Arcade has been slaving away on the arcade racer BLUR, a game slated to be released into open beta on GarageGames’ InstantAction games portal soon. The game, currently in a private beta testing phase, puts players behind the wheel of one of two selectable vehicles in an adrenaline-fuelled 8-player race. Be sure to try it out when it’s available for public play.
Bioshock Postmortem available at Gamasutra.
com/view/feature/3774/postmortem_2k_boston2k_.php
In a first for Gamasutra, notable postmortems and other articles from the vaunted Game Developer magazine will be published on the site. The first fruit of this new arrangement is this Bioshock postmortem, detailing the creation of the game from the perspective of project lead Alyssa Finley. Worth reading.
Retro Remakes competition 2008.
com/
Retro Remakes, a site dedicated to the retro gaming scene and retro-styled games, has launched its 2008 remake competition. The contest, running till 6 December, challenges developers to submit a freeware entry under any of 6 categories, with over £5000 in total prizes to be claimed by the first 10 places in the grand prize, the winners of each category, and a special judges’ prize to be handed out at their discretion. Entries are open until 2 December.
Torque X 3D Engine now bundled with Softimage|XSI Mod Tool Pro
id=508463
The powerful, professional modelling and animation package made by Softimage, more popularly known for its use with Valve’s Half Life 2, is now included absolutely free with a Torque X 3D engine license. The Torque X Engine contains flexible game authoring tools built on XNA and, with the inclusion of the XSI Mod Tool, should become a tool that every XNA developer would want to have at hand. With indie licenses available for only $250 it isn’t far out of reach either.

World of Goo is Gold.
com/2008/09/09/pretty-big-news/
After a long, long wait, and a massive amount of evolution from the original Tower of Goo prototype, World of Goo has finally gone gold. Made by a tiny two-man team, World of Goo is a construction-based puzzle game that won the 2008 IGF Innovation award, an honour previously bestowed on XBLA’s Braid and PSN’s Everyday Shooter.
“As always, we will live or die by our next title, and in this case it's Multiwinia.”
Simon “Tr00jg” de la Rouviere
not so introverted
Having thoroughly enjoyed playing through the preview code of Multiwinia (Dev.Mag issue 25) and bringing our readers a first look at the game,
Dev.Mag decided to have a chat with Mark Morris, the Managing Director of Introversion. Perhaps he can help us find our missing Darwinian…
Dev.Mag: Why did you decide to take Darwinia into multiplayer?

Mark: Just after we won the 2006 IGF (Independent Games Festival) realized that!

Dev.Mag: How much did DEFCON help with creating Multiwinia, from a design and coding perspective?

Mark: rewritten for Multiwinia.

Dev.Mag: What hurdles did you encounter with the development of Multiwinia?

Mark: The biggest challenge with Multiwinia was making sure that there was enough variation in the maps and the different game modes. We spent most of 2007 coming up with, testing and rejecting lots of game modes before settling on the final six. We're really pleased with the end results, and I'm sure that everyone will have their own favourite modes and maps. I personally love attacking in Assault.

Dev.Mag: How did the design process for the game modes work?

Mark: Design at Introversion is really iterative. Some of the modes were really obvious, like King of the Hill; but the more complex modes, like Assault, Rocket Riot and Blitzkrieg.
Dev.Mag: What was it like working with the Xbox 360?

Mark:!

Dev.Mag: Was it difficult to adapt the Xbox controls?

Mark: Multiwinia was designed with the 360 controller in mind, but it’s Darwinia!

Dev.Mag: First you rattle our collective gaming minds with Subversion, and then we hear about Chronometer. When are we going to get some info on this? When do you plan to release it?

Mark: *smiles roguishly*

Dev.Mag: How does your design process work in general? How do you get your wild ideas onto paper and then into a solid game?

Mark: We have a very relaxed and free flowing design process. Chris has most of the big ideas and then he’ll go away and jam for a few months. It probably takes him about six months, working part time, for him to come up with?

Dev.Mag: You've been going for quite some time. Do you foresee a shiny future ahead for you guys?

Mark: We're in a very strong position now. We're pretty well known in the industry, and our back catalogue of games still sells in reasonable numbers. We have a few ports in the pipeline, which should help to support the main effort; developing great new games. As always, we will live or die by our next title, and in this case it's Multiwinia. Keep your fingers crossed for us!
Dev.Mag: Do you plan on releasing Darwinia merchandise? Those Darwinians are a hot commodity.

Mark: We already have a store full of cool Darwinia stuff.

Dev.Mag: After working on the Xbox, do you still see the PC as your main platform?

Mark:.

Dev.Mag: The entry point for great indie games is getting higher and higher. Don't you think it is discouraging for beginners?

Mark: It's very, very hard to make a great video game, but if you want it enough, you'll get there!

Short and Sweet

Mark Morris

Peanuts or Raisins? Diablo 3 or StarCraft 2? DRM or No DRM?

Mark: Listening to DRM-free MP3’s whilst eating peanuts and playing StarCraft 2.
Last issue, we featured Luke Lamothe from Luma, talking about their latest offering, BLUR. This issue, we finish off by talking to BLUR’s lead artist, Chris Cunnington, and creative director, Dale Best, about some of the challenges they faced bringing this anticipated title to life!
BLURRING The lines 2
Sven ‘Fuzzyspoon’ Bergstrom
Chris Cunnington
Dev.Mag: As lead artist on the BLUR project,
which aspects brought new challenges? We came into BLUR just off the back of MINI#37. With MINI we had been very restricted by TGE in the way we produced the road & pavement sections, not allowing us to make flowing, curved roads with textures that followed the curves. So heading into BLUR one of the first things I wanted to re-engineer was our road production. The outcome of a couple of weeks work was a modular system, that allowed for completely flowing surfaces. The drawbacks were that for each single road section, I had two sections, one you see and one you don’t. The visual improvement over MINI#37 completely outweighed the drawbacks of handling double the art assets. Otherwise, we generally pushed the bar as much as we could, as we always will, allowing a huge visual leap! Hmmm, that’s a tough one! Each level has its own unique quality that jumps out for me. SkyCity was the first to be built, and I really dig the shiny clean feel of this world. ParadiseCove was planned to be more a gritty, graffiti-cum-old–European, coastal town; The beauty of the InstantAction platform is that it is able to load any game engine – obviously with a few tech changes – into the browser. So when we set out creating BLUR, we never limited ourselves at all with any thoughts of, “It’s an in browser game.” We created it as if it was a standard PC game. We actually only got it into the browser very late on in the project. The only limitation we then set on ourselves was file size. We wanted to keep it small and neat so that people don’t need to download huge amounts of data. but it got changed slowly into the world it is now. Now it is awesome; who can complain about a level filled with Palm Trees? Reactor Station was great fun to make. We wanted an industrial, piping filled world, but went wild with it when we added a huge glowing reactor to the scene. 
So yeah, worlds with more detail are awesome, even though racing games still don’t let you get to the level of detail needed for a modern FPS.
Dev.Mag: Working on worlds with good detail has its perks. What was your favourite map or scene to create?
Dev.Mag: With developing for a browser
environment, were there any art limitations, compared to desktop games?
Dev.Mag: As an artist, what ‘ideal’ map would you have made?

Chris: Hmmm, well, for a racing game, I pretty much got my wish with ParadiseCove, making an island beach level. Moving away from racing games, I am very keen to start building more open, free worlds, with a more sandbox feel!

Dev.Mag: Tips for aspiring 3D game artists?

Chris: Be prepared! Game art is not all that it appears on game-artisans.org. A regular day is easily split between working out bugs with your art pipeline, asset management, and somewhere in between, making new art assets.

Dev.Mag: Any last comments?

Chris: As with most games, everyone always looks at the quality of the art, but let’s not forget the programmers. As a game artist, I have to say a big thanks to the programming guys. Without them so many things would not have been possible visually. Being able to ask for a custom tool to be made to help your workflow, and watching it being developed, is the coolest thing ever!
Dale Best
Dev.Mag: What part did you play in the overall production of this game?

Dale: I oversaw the creative direction in terms of style and game play, as well as the direction of the levels and cars.
Dev.Mag: What aspect slowed you down most as a creative director?

Dale: Once I’ve had my third cup of coffee, there’s no place for slowing down. We had milestones to meet as a requirement by our publisher, and that’s that.
Dev.Mag: What aspects of the game were swayed by InstantAction integration, in terms of design changes?

Dale: Well, the whole idea around InstantAction is that of ‘pick up and play’, so that is a major consideration in the game design. Pushing the game into ‘hard to master’ territory is also key, because you want to keep your players coming back. We feel the game is nicely balanced in this regard, and continue to get feedback from the closed beta stage it’s currently in.
Dev.Mag: If you were to make another racing game for InstantAction, what would you choose?

Dale: Luma Arcade won’t be making another racing game for a while, unless we have to. We may incorporate a racing component into new games we develop if the design requires that. BLUR will continue to grow over time, with new car packs that users can buy, and new tracks as well. The game will grow, it won’t be replaced. We are happy with BLUR, and wouldn’t do it another way. The old-school arcade quality is spot on for the platform, as far as I’m concerned.
Dev.Mag: Any tips for aspiring creative directors?

Dale: Well, I just kind of moved ‘organically’ into this position, and usually that’s the case. It becomes more of a managerial role, but I’m still hands-on, which is cool. Just do what you enjoy doing. Take it seriously enough to do well, but try to keep a balance in your life. When you start getting symptoms of carpal tunnel syndrome, you know you need to get out more!
Glyph Hunter
“The addiction lies in the difficulty.”
There exists a theory: that to truly express your hatred of the zombies in Glyph Hunter would bring about a black hole of contempt, capable of devouring the universe and replacing it with something even more frustrating. There is conjecture that this has already happened.
Chris “Braaaaaaains” Dudley
That said, this is not a horrible game in any way. While a mixture of dungeons, monsters, and swords is hardly the most innovative combination in the industry, for local indie developer Rodain “Nandrew” Joubert to take those elements and create a game as enjoyable as this one is quite a feat. So much quality has been packed into this title that it is easy to get drawn into the action and forget the generic ‘magical quest’ storyline. The meat-and-potatoes of the game involves hacking, slashing, and various combinations of the two deadly assaults. The tutorial gives the player the gist of the simple gameplay, throwing a few enemies at them to get them started. After that, the game lets the player get to the carnage, with the occasional pop-up informing them of their progress. For the most part, the gameplay is cut-and-dried: hack through monstrous hordes to a certain point, flip a switch, slash all the way back through more monstrous hordes. Along the way, tension and addictive frustration mount as the player watches their life slowly dwindle. Mana steadily drops as they frantically attempt to dispatch enemy mages who are lobbing flaming balls of death. Desperate whimpers escape as hordes of respawning zombies claw for the jugular. It’s at these moments that the game shines; it creates tense, enjoyable fights that entertain and challenge (boy-oh-boy, do they challenge).
This is where the aforementioned zombies come into play, with their unsettling obsession with having a chunk of cranium for dinner. The zombies don’t merely gang up, they swarm. They amass. No matter how many times the player shoves the business end of a sharp metal object into their faces, they pick themselves up and stagger on, with a chorus of “You’re never gonna keep me down!” A quick fireball will turn them into a zombie flambé and send their tortured souls back to hell permanently; but mana has to be used sparingly if the player is to make it past the ghouls and to the haven of the next save point. Once there, they can rest assured with the knowledge that the next section will prove to be far more difficult. The sense of euphoria that the player receives upon slipping into safety with their avatar on the verge of a grisly death is so immensely satisfying that they will suddenly find themselves unable to tear away from it, striving to reach ‘just one more checkpoint’. The addiction lies in the difficulty though - be warned - this is not the most relaxing of games, but it is still a high quality production coming out of the local dev scene.
Warning: side effects of Glyph Hunter may include hair loss; the invention of new swearwords; and spontaneous bursts of animalistic battle cries.
Lost Garden
Claudio “Chippit” de Sa

Solid game design knowledge is traditionally kept close to the hearts of those who possess it; rarely does one find people willing to divulge their insights into the art, and rarer still are those willing to do so regularly, for no gain whatsoever other than the simple act of seeing others benefit because of it.
Daniel Cook is one such person. Also a contributor to the well-known development related sites GameDev.net and Gamasutra.com, Cook – under the moniker Danc – helms a game design blog named Lost Garden, where he regularly posts insightful game design essays and thoughts, provides free hand-drawn art for game prototypes, and occasionally challenges his readers to create game prototypes based on his design and theme. One such challenge, concluding just last month, tasked developers with creating a game in which world shadows are an inherent part of the game dynamic. A player would need to lead mushrooms harvested from the world back to a ‘home’ point. Since the mushrooms would shrivel and die in the sun, the player would need to hug dynamically changing shadows created by the changing time of day to achieve the best performance. The game design insights posted on Lost Garden are also well presented and offer incredibly handy knowledge for the indie developer. In fact, all the offerings on the site will be invaluable for an independent game developer: design challenges to test your skill; design essays to impart useful tips; insights and knowledge; and artwork to facilitate your masterpiece creation. All in all, a priceless resource that should live on anyone’s bookmark list.
“Game design insights posted on Lost Garden are well presented and offer incredibly handy knowledge.”
Looking at
Level Design
Paul "Nostrick" Myburgh
Have you ever had a really cool concept for a level of a game you enjoy? Have you ever made a level, but it just didn't come out quite the way you
planned? Ever given up because you just couldn't find a way to get your (brilliant) ideas out of your head and into the game? Perhaps, on the other hand, you have never thought about level design or modding in the slightest; but someday you'd like to give it a shot. Whatever your level designing experience may be, you are reading the right article!
Here we aim to give you a grasp of the design aspect of level creation; it’s a little something on how one might go about the design process, the process by which you bring your ideas into being, to create a playable and fantastic level for all to enjoy! Although simplified, this explanation should be effective enough to get you on your way. To explain this process, we are going to use the example of building a level (or track) in Track Mania. The explanation will be as generic as possible, so that you may take these basic concepts and apply them in other games as well.

To understand what makes a good level in a game, it’s always best to know the game well beforehand; experiment with the gameplay mechanics, and take note of what makes the levels fun to play. There is no better example to follow in level design than that set by the creators of the game. The most downloaded and highly rated user levels are also great examples, so you might want to find a good website with a database of levels pertaining to your game of choice. Start by gathering ideas from these designers, and take that inspiration to fuel your own creations.

Once kitted up with a bit of knowledge and inspiration, it’s time to get started on the actual building and design process. We have an idea; now how do we go about making a track? Do not be intimidated by the ‘blank canvas’ of the level editor; it is your playground, so treat it as such. Just go at it with everything you have, and don’t stop to think twice. Begin by laying down the basic level design, similar to sketching outlines on a drawing before you clean up and shade in. In Track Mania, run tracks all around, add loops, corners and little jumps. Just let loose; allow that idea in your head to flow down your arm, into the mouse and onto the screen!
Always begin rough; treat it as a draft. Don’t become concerned over little things such as how the player is going to take the first corner, or perfecting the first jump. This is just a distraction and will be detrimental to your master plan; leave it for later.

Once you’ve experimented a bit with the rough draft and found something that works, we move on to the next step: the flow. This is where play-testing begins to play a role, as refinements to the roughness of the level take place, shaping it into something more playable. This is a very important step. Ask yourself: could someone else race on this track (or play through this level) without becoming too frustrated? Make sure that the overall flow of your level provides a fun experience, while still providing a challenge. By first creating the ‘skeleton’ of your level, you will find it easy to go back and add more interesting and fun ways to link areas together, and fine-tune certain areas for an even better experience. You should constantly have your mind focused on how a first-timer to your level would experience it, and how they will either grow to like it or hate it. Never, ever, forget about the end user.

Once finished with fine-tuning the mechanics of the level, the beautification begins. Try to add hints, such as which way one should be turning next, or in which direction one should be facing to make a jump, by strategically placing surroundings. As much as some people find beautifying a level/track unimportant, I’d say it plays a big role in the overall experience, but remember - don’t overdo it! Sometimes less is better than more!

Finally, and most importantly, have lots of fun! It should be something you do in good spirits and enjoy as you design and play-test your very own creation. If you are having fun, you will be more involved and proud of your creation, and you might find yourself tweaking and play-testing to perfection right through the night! So, find a fun game, get those creative juices flowing and just go at it. You might be surprised by the awesome things you can come up with. Happy level making, everyone!

When design ideas run low, nature is often our best resource.
Trackmania has an excellent level editor which lets your imagination run wild!
The best part about designing a level is testing it out!
Interested in making your own sound effects for videogames?
This month we'll be looking at Audacity and a few of the common effects that can be used to turn your humble “blink-blonks” into fantastic “kaphwooms!”
Looking at Effects with
Audacity
“Turn your humble ‘blink-blonks’ into fantastic ‘kaphwooms!’”
This article deals with the mildly technical side of sound production in Audacity, so it'll assume that you already have a wave file loaded up and ready for priming. In terms of format, you may want to look for a file that's based on the standard PCM WAV structure (it's the most common WAV file format, so you're probably using it already) with a 16-bit, 44100Hz quality (these details will be shown in the info box to the left of the file when you load it up in Audacity.)
You can also opt for a stereo sound format, but mono means a smaller file size and will usually do the job just fine unless you specifically need two channels. So, to clarify, you'll prefer these settings for an input file:
• PCM WAV format;
• 16-bit;
• 44100Hz (or similar);
• Mono.
Higher quality is optional, but isn't always necessary. If you really can't find an input file to meet these requirements, no biggie – you can still try to convert it to the desired format by clicking on the filename in the left info box and selecting the necessary properties in the pop-up menu. Also check the Edit -> Preferences -> File Format menu to make sure that your uncompressed export format is set to the 16-bit PCM.

Maybe some of you remember that funky little sound idea thing back in Issue 24? If you haven't checked it out yet, don't panic – you'll be able to understand this article just the same, but try and have a gander at it anyway. Also, be sure to grab your own copy of Audacity from. Right, now that we've got all the details out of the way, let's look at the interesting stuff.
The Audacity effects
Select all, or part of the file that you've loaded up in Audacity (this can be done by clicking and dragging the mouse over the desired file component). This is going to be the section of the file that you apply your effects and filters to. Now select Effects from the top menu. You'll see a whole list of neat things that can be done to the innocent sound file which is now under your control. We won't be looking at every effect that's on display, but a few of the simpler ones will be covered. What follows is a description of several effects, giving you their job, their potential for game development and a few useful pointers to get you going in the right direction. Let's go!
Change Pitch

Technical Description: This effect alters, well, the pitch of the sound. How high or low the notes are, so to speak. You can plug in a spoken sentence and decrease the pitch for a deep, manly-man voice or conversely increase the pitch to make it sound like a chipmunk. Wheee!

Game Use: This is one of the most commonly used effects to tweak sounds. You may want to change the pitch of an in-game explosion. Perhaps you have a piano somewhere in your game and you want to get several pitch variations of the same sound clip. Or you have a nice cartoony game where you want to take a standard set of sound effects and make them all cutesy by swinging the pitch up.

Hints: This filter is great if you want to speak in your game but need to mask your voice or simply make it sound cooler. Lowering or heightening the pitch ever-so-slightly will greatly improve the sound in your own ears, and you could potentially voice several in-game characters simply by altering the way you speak each time and applying a pitch effect.

Change Tempo

Technical Description: This is the counterpart of changing pitch. Do you want the same sound to play much faster or a little bit slower than the one you currently have? Tempo can make a “bwooooooooooom” into a “bwumph,” and vice versa.

Game Use: Maybe you want a quick and tiny explosion noise but only have the Manhattan Project on hand. Or maybe your character is using a weapon with a high rate of fire and you've only got sound effects which last for at least two seconds. No problemo! Increase the tempo until you're able to justifiably go “powpowpow” for as long as your character needs.

Hint: By doing some extreme compression or extension of sound effects with the tempo changer, you'll actually start hearing some very, very weird things. This can be rather neat if you're looking for some exotic and/or sci-fi sound effects, so give it a shot.
Change Speed

Technical Description: This function is the equivalent of changing both the pitch and tempo in one go. A higher pitch delivers a faster tempo, and vice versa.

Game Use: If you're going to be using both pitch and tempo change for a sound effect, this can be handy for doing it with one effect.

Fade In / Out

Technical Description: Gradually brings a sound clip from zero volume to full, or vice versa.

Game Use: A very specific effect which is probably most useful for tailoring background music or longer sound effects (power up and power down sequences, for example). Can replace amplify in certain circumstances.
Echo

Technical Description: Adds a basic echo.

Game Use: Handy for dramatic announcements or sound effects in a cave.

Hints: Most uses for the echo will involve decreasing the default delay time. A high delay time may sound good in certain areas (experimenting to find exotic sound effects is great) but generally it just makes the effect rather confusing.

Reverse

Technical Description: Puts the sound clip back-to-front, so that the end plays first.

Game Use: Can be used for a wide range of funky effects. If you're keen to experiment with a sound, try reversing it and see how it comes out!

Hints: If you know how to use stereo tracks in Audacity (it's a more convoluted process than you may experience in some other programs – check Audacity's help file for more details), you can reverse one of the channels and leave the other one playing normally. This occasionally grants a really cool effect.
Amplify

Technical Description: Increases or decreases the volume of the selected sound clip.

Game Use: This is mainly to place emphasis on a certain portion of a sound. For example, if you want an explosion to start off loud and then trail off, you can use amplification to tweak various parts of the sound. You can also use it to get all the sound effects in your game on acceptable volume levels. There's no point in your water sound effects drowning out the sound of loud bangs, after all.

Hints: If you're recording your own sounds, don't rely too heavily on amplify to fix your volume. Making sounds louder will increase the chance of background noise becoming noticeable. Conversely, screaming into the microphone and then reducing the volume probably won't get rid of the distortion that sometimes crops up. Use amplification for minor tweaking only. Also note that there are several more advanced amplification tools in Audacity (Normalize, Compressor, Equalize, etc.). Learn to use these if you want to fine-tune or perform finicky volume tasks; otherwise you should be able to ignore them.

Remove Clicks

Technical Description: This is the easiest way to remove that horrible crackly effect that one typically encounters when using amateur recording equipment. Make your sounds crisper and less ‘polluted’ with this filter.

Game Use: If you've got ‘clean’ sound effects mixed with crumbly messes in your game, it helps to apply this effect to the culprits.

Hints: This isn't a perfect effect, so try not to rely on it too heavily. Rather make an effort to generate a smooth sound before it goes into the Audacity editor. If you truly feel confident, try using the Noise Removal effect instead (this will require you to capture a separate noise profile and then use it to remove noise in other parts of your clip).
In Conclusion
These are just a few of the simpler effects in Audacity. Fiddling with some of the more complex tools can lend more interesting effects, but the point of this tutorial is to provide you with the basics, so that you can confidently tweak your own sounds and get rid of the more glaring problems in your files. If you seriously want to go into sound editing and file fixing, it's worthwhile to consider finding a more powerful and/or specialised application to do the job. Many programs come with sets of filters and tweaks that can offer you a wider variety of sound wizardry (useful in generating robot voices, impacts and other common effects). However, do not underestimate the power of a well-recorded sound and a few choice effects – after giving it a shot, you'll wonder how you ever settled for your database of 1001 Free Sounds for your day-to-day gamecrafting.
Quad Trees
Herman Tulleken

Quad trees are 2D data structures, useful for efficient representation of 2D data (such as images), and lookup in a 2D space (where are those monsters?). In this tutorial, we focus on the implementation of quad trees that represent 2D data efficiently; that is, where quad trees can be used to compress data. Although quad trees can be used with a variety of data types, we will focus on colour data (images) in this tutorial. In part 2 we will look at more general applications, such as occupancy maps and force fields. We will also look at the properties of data that determine whether it is suitable to represent as a quad tree.

Quad trees take advantage of the fact that cells in a grid are often the same as adjacent cells – for example, a red pixel is very likely surrounded by other red pixels. Thus, we do not need a data point for every pixel, as we do when we use a grid – we can use a single point for an entire section of the grid.
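To make the "one point per uniform region" idea concrete, here is a toy sketch using plain ints instead of colours. The names (ToyNode, build, uniform) are illustrative only, and it assumes a square, power-of-two-sized grid: a region whose cells are all identical collapses into a single leaf, while any variation forces a split into quadrants.

```java
// Toy quad tree over an int grid. Names are illustrative, not from
// the tutorial's implementation below; assumes a square grid whose
// size is a power of two.
class ToyNode {
    ToyNode[] children; // null for a leaf
    int value;          // the single data point a leaf stores

    // Builds a node for the size-by-size region with top-left (x, y).
    static ToyNode build(int[][] grid, int x, int y, int size) {
        ToyNode node = new ToyNode();
        if (size == 1 || uniform(grid, x, y, size)) {
            node.value = grid[y][x];          // no detail: one data point
        } else {
            int h = size / 2;                 // detail: split into quadrants
            node.children = new ToyNode[] {
                build(grid, x,     y,     h), // upper left
                build(grid, x + h, y,     h), // upper right
                build(grid, x,     y + h, h), // lower left
                build(grid, x + h, y + h, h), // lower right
            };
        }
        return node;
    }

    // True if every cell in the region holds the same value.
    static boolean uniform(int[][] grid, int x, int y, int size) {
        for (int j = y; j < y + size; j++)
            for (int i = x; i < x + size; i++)
                if (grid[j][i] != grid[y][x]) return false;
        return true;
    }
}
```

Building from a 4×4 grid of identical values yields a single leaf, while flipping one cell forces subdivision around it. The real implementation below replaces the all-or-nothing uniform test with a detail measure and a threshold.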
Implementation
Every node on a quad tree has exactly 4 or 0 children. A quad tree is always constructed from a grid that contains the raw data. The root node represents the entire grid. If the grid does not have enough detail, no children are necessary, and the entire grid is represented by one data item contained in the root. If, however, the data is interesting enough, every quadrant of the grid is represented by a child node. These nodes can be further divided based on the squares of the original image they are to represent, and so on, until every piece of the image is represented by a single node.
The original image, represented as a grid.
New groupings for the quad tree, with no tolerance. Note that there is no detail in the larger blocks, so we do not need to subdivide them.
New groupings for the quad tree, with a higher tolerance.
The tree representation.
The tree representation.
Original
Quad Tree
Quad Tree with Rectangles
To implement a quad tree, you need to do five things:
• define a QuadTree class;
• define a Node class;
• implement a detail measuring function;
• implement a construction algorithm;
• implement an access algorithm.
These are explained below.
Defining the Quad Tree Class
This class is very simple: it stores the root node and the algorithms, which are explained in the sections below. Here is how that would look in Java:
class QuadTree
{
    Node root;

    QuadTree(Grid grid) {…}
    Color get(int x, int y) {…}
}
Defining the Node class
The Node class is where most of the work is done. It should store its four children (possibly all null) and the colour for leaf nodes, and be able to handle the construction and access functions.

class Node
{
    Node children[];
    Color color;
    int x, y;
    int width, height;

    Node(Grid grid, int x, int y, int width, int height, int threshold) {…}
    Color get(int x, int y) {…}
}
Detail Measure Algorithm
This algorithm calculates the amount of detail in a rectangle of the grid. How that detail is measured depends on the application. For images, the average Manhattan distance between the colours and the average colour is a crude measure that often works well. The Manhattan distance between two colours is defined as:

d = |r1 – r2| + |g1 – g2| + |b1 – b2|,

where r1 is the red component of colour 1, and so on. Note that the entire grid is passed to the algorithm, with extra parameters to indicate the boundaries of the piece we actually want to measure. We also define a helper function to calculate the average colour of a rectangle in a grid.
// Calculates the average colour of a rectangular region of a grid.
Color average(Grid grid, int x, int y, int width, int height)
{
    int redSum = 0;
    int greenSum = 0;
    int blueSum = 0;

    // Adds the colour values for each channel.
    for(int i = x; i < x + width; i++)
        for(int j = y; j < y + height; j++)
        {
            Color color = grid.get(i, j);
            redSum += color.getRed();
            greenSum += color.getGreen();
            blueSum += color.getBlue();
        }

    // Number of pixels evaluated.
    int area = width * height;

    // Returns the colour that represents the average.
    return new Color(redSum / area, greenSum / area, blueSum / area);
}

// Measures the amount of detail of a rectangular region of a grid.
int measureDetail(Grid grid, int x, int y, int width, int height)
{
    Color averageColor = average(grid, x, y, width, height);

    int red = averageColor.getRed();
    int green = averageColor.getGreen();
    int blue = averageColor.getBlue();
    int colorSum = 0;

    // Calculates the distance between every pixel in the region
    // and the average colour. The Manhattan distance is used, and
    // all the distances are added.
    for(int i = x; i < x + width; i++)
        for(int j = y; j < y + height; j++)
        {
            Color cellColor = grid.get(i, j);
            colorSum += Math.abs(red - cellColor.getRed());
            colorSum += Math.abs(green - cellColor.getGreen());
            colorSum += Math.abs(blue - cellColor.getBlue());
        }

    // Calculates the average distance, and returns the result.
    // We divide by three, because we are averaging over 3 channels.
    int area = width * height;
    return colorSum / (3 * area);
}
Construction Algorithm
The construction algorithm can be put in the constructor of the Node class. The constructor of the actual QuadTree class simply constructs the root node. The algorithm is simple: the detail is measured over the part of the grid that the node is meant to represent. If it is higher than some threshold, the node constructs four children nodes, each representing a quadrant of the original rectangle. If the detail in the rectangle is lower than the threshold, the average colour is calculated for the rectangle, and stored in the node. The threshold value passed to the function is often determined empirically – that is, you change it until you get what you want. Obviously, the smaller it is, the more accurately the quadtree will represent the original data, and the more memory and processing time will be used.
// Constructs a new QuadTree node from a grid, and parameters
// that indicate the region this node is to represent, as well as
// the threshold used to decide whether to split this node further.
Node(Grid grid, int x, int y, int width, int height, int threshold)
{
    this.x = x;
    this.y = y;
    this.width = width;
    this.height = height;

    if (measureDetail(grid, x, y, width, height) < threshold)
    {   // too little detail: store the average colour
        color = average(grid, x, y, width, height);
    }
    else
    {   // enough detail to split
        children = new Node[4];

        // upper left quadrant
        children[0] = new Node(grid, x, y, width/2, height/2, threshold);
        // upper right quadrant
        children[1] = new Node(grid, x + width/2, y, width - width/2, height/2, threshold);
        // lower left quadrant
        children[2] = new Node(grid, x, y + height/2, width/2, height - height/2, threshold);
        // lower right quadrant
        children[3] = new Node(grid, x + width/2, y + height/2, width - width/2, height - height/2, threshold);
    }
}
Access Algorithm
The access works as follows: if the node from which the method is called is a leaf (a node without any children), that node’s colour is returned. Otherwise, the call is delegated to the child node of the correct quadrant. The method is shown below:
// Returns whether this node has any children.
boolean isLeaf()
{
    return children == null;
}

// Gets the colour at the given pixel location.
Color get(int i, int j)
{
    if (isLeaf())
    {
        return color;
    }
    else
    {
        // Decides in which quadrant the pixel lies,
        // and delegates the method to the appropriate node.
        if(i < x + width/2)
        {
            if(j < y + height/2)
                return children[0].get(i, j);
            else
                return children[2].get(i, j);
        }
        else
        {
            if(j < y + height/2)
                return children[1].get(i, j);
            else
                return children[3].get(i, j);
        }
    }
}
Real-world Implementation
The typical real-world implementation will differ in several respects from the simple implementation described above:
• Integers will be used to represent colours in the raw data and Nodes, rather than Colour objects;
• Intermediate values of colours will be stored as floats, where component values are scaled to lie between 0 and 1;
• Whatever detail function is used, its output will be scaled to lie between 0 and 1 (this makes it easier to determine a correct threshold);
• Adding several hundred (or even thousand) red, green and blue values will cause overflow problems. The summands are often scaled before they are added (for example, the division by the area can be done before they are added to the sums);
• Calculating the average is expensive; where it is used as part of the detail measuring algorithm, it will be calculated separately and passed to the function, so that the value can be reused.

Debugging Tips

• Implement a way to visualise the quad tree, even if the quad tree does not represent an image. In addition to the normal visualisation, also implement visualisations that:
o render every square in a different colour (see example algorithms);
o render outlines of blocks over the normal visualisation.
• Implement a visualisation of the error of your quad tree representation against the original;
• For benchmarking, implement node count methods for counting:
o all nodes;
o all leaf nodes.
This is useful to make sure that a quad tree is indeed a more efficient representation of your data (for example, white noise will be better represented by a grid).

Resources

A list of quadtree resources from Geometry in Action.
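The node-count methods used for benchmarking can be sketched as simple recursions over the tree. This is a minimal sketch: the stripped-down CountableNode class here is an illustrative stand-in for the article's Node class, not part of the tutorial itself.

```java
// Minimal stand-in node with the benchmarking counters suggested in
// the debugging tips. Names are illustrative, not from the tutorial.
class CountableNode {
    CountableNode[] children; // null for a leaf

    boolean isLeaf() { return children == null; }

    // Counts this node plus every descendant.
    int countNodes() {
        if (isLeaf()) return 1;
        int count = 1;
        for (CountableNode child : children)
            count += child.countNodes();
        return count;
    }

    // Counts only the leaves, i.e. the stored data points. Comparing
    // this against the cell count of the original grid shows whether
    // the quad tree is actually a more compact representation.
    int countLeaves() {
        if (isLeaf()) return 1;
        int count = 0;
        for (CountableNode child : children)
            count += child.countLeaves();
        return count;
    }
}
```

A root with four leaf children reports 5 nodes and 4 leaves; a quad tree whose leaf count approaches the pixel count of the original grid (as with white noise) is gaining you nothing over the plain grid.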
Downloads
You can download examples of implementations in Java and Python on code-spot:
Blender Physics: Rigid Body
Claudio “Chippit” de Sa
Rigid body dynamics is a term used to describe the motion and behaviour of solid physical objects, simulating such things as friction, gravity and collision responses in the six degrees of freedom that describe the state of most physical objects. Movement along any of the three dimensional axes is referred to as ‘translation’, and is described by the nautical terms heaving, swaying and surging. ‘Rotation’ around the axes is described by roll, pitch and yaw. Rigid body physics ignores the potential deformations of objects during collision and motion, and focuses only on linear momentum (movement and force in a certain direction) and angular momentum and torque (governing rotation and rotational force). This is commonly used in the current generation of video game engines and is also simulated by Blender’s Game Engine.
Blender Game Engine
An underused feature of Blender, the ability to create games in its engine, has been steadily increasing in flexibility and power, as is true for most other aspects of Blender’s functionality. While the more advanced features of the BGE are reserved for future issues, and out of the scope of this tutorial, we’ll be using some of its basic functionality to create rigid body physical simulations: the ‘record to IPO’ feature and the physics engine itself. To get started, you need to ensure that Blender is set to simulate using the Bullet physics engine – the most advanced engine currently implemented. This should be the default setting for all Blender versions newer than 2.42. If you don’t have it, it’s probably time to upgrade. The relevant options are grouped under the Shader context menu -> World buttons. In the ‘Mist/Stars/Physics’ tab you’ll find the choice of physics engine, as well as a global gravity value.
Scene set up
Physics engines need objects to operate on. Whilst the Bullet engine can operate on complex objects, boxes, spheres and planes are the fastest and simplest to use. I’ve set up a quick example scene using a haphazardly stacked pile of boxes, a ball, a plane to represent the floor, and another plane for the ball to roll down. The key to a successful rigid body simulation is - as with the other simulations previously covered - a good setup. Every object needs to have certain properties defined, the most important being their collision type and their mass. All the relevant options are visible in the Logic context. Note that you will need to define physical properties for each item individually, so before you place a large quantity of objects in your scene, consider this: if you plan to have a lot of items with identical or similar properties, it is often best to create a single one, set up its collision properties, then make duplicates of it.
Object Properties
Rigid body objects are actually comparatively simple to set up and use. A quick overview of each relevant option follows:

Actor: This object is active and will be evaluated by the game engine.
Ghost: This object will not collide.
Dynamic: Object obeys the laws of physics.
Rigid Body: For accurate rigid body calculations, including angular velocity and momentum, necessary for rolling motions.
No sleeping: Object will never enter a ‘rest’ state where simulation stops running on it.
Mass: Total mass of the object.
Radius: The radius of the sphere that represents this object’s bounds. Only necessary for a spherical collision setting.
Damp / RotDamp: Damping values for all movement and rotation.
Bounds: Specify object collision bounds for physics. If you do not specify one of these, the collision engine will use the actual mesh. Choosing one of these usually speeds up your computation time, however, so it is recommended where possible. Within the bounds menu are the following settings:
Static triangle mesh: For complex shapes that do not move, commonly used for terrain or static obstacles.
Convex hull polytope: Will use the smallest convex hull as the collision mesh for this object.
Sphere/Cone/Cylinder/Box: Uses the specified shape as a collision mesh for the object.
Compound: The object is made of multiple compound shapes. Used for more complex simulations where objects are tied together with child/parent relationships to make more complex shapes.

Because our planes aren’t going to move, we don’t need to do anything to them; they’ll collide with objects automatically. The others need a few changes, however. The boxes should have ‘Actor’, ‘Dynamic’ and ‘Rigid Body’ enabled, together with Box bounds. The ball is the same, but with spherical bounds. Be sure to set the radius setting to match the size of the sphere. You’ll notice that the radius is represented graphically in the Blender 3D view, though it’s often easier to see in wireframe mode.
Tying everything up
Once you’re confident that all your objects are set up, simply press P or select Start Game from the Game menu in the main Blender menu bar to start the simulation. If all goes well, you should see the ball roll down the slope and strike the boxes placed at the bottom, all in real-time. Press Escape to end the game simulation when you’re done. Now, it’s all well and good that everything is moving as it should, but it still won’t do any good if we want to render this as a proper scene. The BGE provides a Record Game Physics to IPO feature that will take all movement in a game and record it to all the involved objects’ IPO curves, for tweaking and proper rendering. Enable the option in the Game menu, then run the simulation again. Once it completes, you’ll see that you can step through the baked animation just like any normal keyframed animation. You can also edit the IPO curve, just as you would do otherwise. Note that the IPO curve also defines where an object will be at the beginning of the game simulation. If you wish to move an object after you’ve already recorded a previous animation to IPO, you’ll either need to clear the IPO curve or add a keyframe with the new position at the starting frame. That’s all for this month. Next time we’ll get to actually using the BGE for what it was intended: interaction and making games! Have fun till then.
A Retrospective glance at
Game.Dev Competions
Rodain “Nandrew” Joubert
Part 1. Since January 2005, Game.Dev has worked to inspire and lead game developers with these competitions, and some truly intriguing titles have come about as a result. What follows is an overview of early Game.Dev Comp history, along with the lessons that people have learned along the way. Read on and be inspired.
42
Comp2
Circles vs Squares
After a month of downtime (and the creation of its own forum), Game.Dev decided to host its second competition, themed around circles versus squares. The group was still rather young and wide-eyed at this point, but the competition provided several entries from members who would later become influential components of the Game. Dev group. In any game, it’s important to look at the fun factor first – you can get to the rest of it later. Comp 2 produced some hearty entries from people who used circles and squares to their best The Game.Dev competitions had very humble beginnings, as most things tend to. Comp 1 started off as a simple idea posted on the NAG forums in January 2005, before Game.Dev itself even existed. The concept was basic and the criteria were broad: make a game, any game, and post it on the forums for judgement. The competition eventually produced five games, most of them using the recommended development tool, Game Maker. These entries were crude compared to later offerings, but they proved one thing: there was an interest in game development amongst gamers (who would have thought?). Even though some people scoffed at the idea of such a 'childish' tool being used to craft games, anyone who bothered to download this free application and take the time to fiddle about with it was generally able to produce results by the time the competition came to an end. effect to create a fun and engaging experience, limited to a simple graphics set and forced to figure out how they can make their game stand out from a field of similar-looking entries. Game.Dev’s Comp 3 asked gamers to do a remake of famous old-school games (aside from a few horribly cliché ones such as Pong and Tetris). The results were interesting, to say the least. Some opted to take the classics and improve upon old dynamics with the availability of better development tools and greater processing power. Others took even more creative routes and merged several classics to cre-
GL
It's possible to make videogames, whoever you are, whatever your experience level.
GL
It pays to study the classics.
GL
ate an entirely new game using rules from each. This competition was possibly the first to display the game development maturity of entrants: the top games homed in on the most fun aspects of these bygone offerings, proving that they understood what made great games great; adding improvements in the correct places to make these titles even better. After all, everybody knows that PacMan is famous; not everybody truly understands why. To excel in game development, Game.Dev wanted entrants to analyse the games they play more critically, and adopt that special ‘game developer’ mindset that’s critical for anybody who wants to do gamecrafting for a living.
Great games can be made with the simplest of graphics.
Comp1
Make a game. Any game. Go for it.
43
Comp3
Remakes
Comp4
Simple Rules, Complex Game
Too often, game developers try to make a good game by adding more bells and whistles. Not enough variety in your project? No problem, just add more enemies and abilities... Right? Wrong. A flawed game doesn’t become better simply because you add more features – it’s the core dynamic, that little kernel of your game which defines it and makes it special. This was an exercise to create a few rules that the game developer could twist and manipulate to generate a massive variety of gaming scenarios, and exercised the creativity and flexibility of developers. This particular competition produced one of the finest games of Game.Dev’s early era – an offering titled Roach Toaster which stood head and shoulders above the rest of the entries up until that point and raised the bar for all competitions that followed. It didn’t have mind-blowing graphics. It didn’t have a load of flashy scripted events. It didn’t even have sound effects. It just had a basic roach generation algorithm and a few well-balanced roach busting tools that were meticulously considered; providing a player with a simple experience that felt like an epic.
GL
Comp6
Polishing an Old Game
Playtesting and fellow developers are golden.
Game.Dev’s Comp 6 decided to go in a slightly different direction and forced entrants to look at previous work for inspiration. Most new game developers are quick to generate a fun or quirky title, but tend to lose steam after they’ve finished a “full go” of the
Action games are difficult. They tend to be real-time and a lot of control leaves the developer: you can’t force the player to take a turn, deal a specific amount of damage and tailor the enemy response to provide a balanced counter-attack. Every split-second matters; which means that the developer needed a lot of help to make sure that the game felt ‘just right’ no matter who played it. By the time Comp 5 came about, a flood of new developers had entered the forum and it fell upon the established crowd to help them get into the swing of things. This revealed a trait about the community which has successfully lasted to this very day: an openness and friendliness which is crucial for allowing good game development. Whether an entrant was a development veteran or a complete newbie who had just learned the concept of ‘player.x + 1’, feedback from the community was inevitably constructive and helped make early, clumsy offerings into golden games by the end of the competition month. Those who posted early drafts excelled in this competition, because instead of relying on a single developer to playtest and hunt for bugs in their title, these entries had the feedback and collective expertise of at least a dozen enthusiasts to back them up.
game or realised that there were too many extra resources to generate easily. Comp 6 was very much a discipline competition – people are often reluctant to revisit their old creations, favouring a hop to new titles, rather than lingering with the old. But polish is important for good game creation, and most of the successful games out there weren’t simply done with one take – they repeatedly changed as development progressed, and no matter how heartbreaking it may be to throw away a particular piece of code or artwork and start again, it’s necessary to allow growth in your game where it’s needed. Successful competitors were also generally able to modify their games quite easily – they’d left enough room in the design for change, rather than creating a static game with no opportunity for expansion. Remember to plan ahead when designing your game – you never know how it may change at the end, and it’s much better to modify a small amount of code rather than being forced to restart the whole mess.
GL
Less is more.
GL
You can always improve.
Comp5
Action!
44
Comp8
Consume!
By this time, Game.Dev had evolved even further and was beginning to look at other development groups
GL
for inspiration and ideas. The idea for this Comp came from Experimental Gameplay, a site known for its interesting prototypes, which was holding a similar competition at the time. There were two major points in this competition: firstly, the description was simply ’Consume’, affording a great deal of flexibility to entrants keen to pump up their creativ-
GL
Game ideas can’t be pulled out of a hat.
Style counts too.
Comp 9 was odd in the way that the lesson it taught was quite dramatically different from the one which was originally thought up. After more than a year of competitions (running one every two months), Game.Dev decided to engage developers a little more and have them looking in rather exotic directions. The premise for this one was therefore quite creative: entrants were each given three words to use, and all of their in-game graphics needed to consist of imagery extracted from Google image searches based on these three words. This was meant to be another competition which focused on the generation of good gameplay, while forgetting about complicated graphics. Unfortunately, for some developers, this task was a little too restrictive – they found themselves developing games that they didn’t feel entirely comfortable with, and the
Have you ever played a game with that certain X factor that made it really special? That feeling or vibe which turns an average Joe game into something a little more involving? Style is an elusive aspect of game development and competitors found it difficult to define. In all respects, this was the most advanced Game.Dev Comp to date – not only were people required to craft a game, but they had to grasp an abstract concept and try make it show in their final work. To ease the process, this competition was once again oriented around remakes, to ensure that game developers had a springboard to launch from instead of floating about in a haze. Many of today’s remakes often have some sort of revamp or stylish factor to make them more appealing to players, whether it’s a particular colour theme, the type of sound effects employed or even just a funky change of art direction. Once consolidated in a remake, these sort of ideas can be carried over and used in original games, to put your own unique stamp on your work.
ity. Secondly, each entrant was required to submit two games instead of one. The result was that developers had to learn the skill of prototyping – rapidly conceptualising and establishing the framework for potential games without getting bogged down in details or long-term development. Prototyping is an incredibly important skill in game development: new developers often try to make “the next big game” and end up getting bogged down with a concept that often isn’t all that good. A far better idea is the rapid generation of several minor game concepts, allowing the developer to gather a broader range of experience and browse through an entire collection of ideas to see which one works the best.
GL
results showed in these cases. Discipline is important in game development, but it’s also important to remember that game development is an expression of your own creativity and enjoyment, and that the best titles are created when the developers love what they’re doing.
Prototyping is vital.
Comp7
Style.
45
Comp9
Google it!
Comp10
Management Games
By the time Game.Dev had hit the double digits for its competitions, it had gained enough influence and enough of a following, to attract a sponsorship of a R10 000 (just over $1000) cash pool for Comp 10. This was met with considerable enthusiasm from the community, and to do justice to this cash sponsorship, it was decided that the competition would run for an extra month, focusing on a particularly challenging subject: management games. This genre, more than most, requires developers to carefully think out their game design in advance, considering every addition to their game and how such an addition would affect the rest of the objects already in play. Although it was the players who would ultimately be keeping track of resources and variables, it was the developer who needed to pay meticulous attention to ALL of these values to ensure that the game remained consistently challenging and fun to play. Planning was key, and the winner of the competition (a game titled “Fast Food in Space”) exemplified this principle by providing players with a management game that kept developing and offering new challenges as play-time increased, ultimately providing a steadily rising difficulty curve, which managed to keep gamers hooked for multiple playing sessions.
That’s it...for now
This concludes Part 1 of the Game.Dev Comp series. Check next month's Dev.Mag for the second half of the series, where we investigate more contemporary competitions, and see where they've taken participants following their first tender steps nearly four years ago. If you're from South Africa and are interested in entering Game.Dev's latest competition, keep an eye on the Website () and scout about on the forums for news of the most recent offering.
GL
Balancing is trickier than you think.
46
Gear Count:
It’s time for our favourite game! Ho-down! Go call your mom. OHsnap!
47 | https://www.scribd.com/document/7503855/Dev-Mag-26 | CC-MAIN-2018-22 | refinedweb | 11,004 | 61.06 |
Integrating Facebook and Parse Tutorial: Part 1
Parse is a service you can use to quickly build web backends for your apps. In our previous Parse tutorial, you learned how to use it to create a simple photo sharing app.
Recently, Parse was acquired by Facebook. One of the benefits of this is that integrating apps made with Parse with Facebook is now easier than ever – and that is the subject of this tutorial!
In this tutorial, you will be Facebookifying (okay, I admit I made up that word!) the same type of photo sharing app you built in our previous Parse tutorial.
Your app will allow users to upload their images to Parse, and in turn, see a wall of images uploaded by the user and their friends. Comments are open on all of the images on the wall — which should provide a steady stream of insults... er, witty banter!
This tutorial is split into two parts:
- In Part 1 (you are here!), you will learn how to set up a project with the Facebook and Parse SDKs, authenticate a user with Facebook using Parse’s Facebook toolkit and retrieve Facebook user information using the Facebook Graph API.
- In Part 2, you’ll build a data store for your images and comments in Parse, display uploaded images on an image wall and add a comment system to your app.
So without further delay, read on to get started building your three new apps!
Wait…THREE new apps?!?
Getting Started
Yes, three new apps! You’ll need one Facebook app, one Parse app, and an iOS app to pull it all together. So that means before you can set down any code, you’ll need a verified Facebook account, as well as a Parse account.
To verify your Facebook account you need to provide either a mobile number or a valid credit card on the Account Settings page of Facebook. Once you’ve verified your Facebook account, head on over to the Facebook Developers site and click on the Apps link at the top of the page, as shown below:
On the Apps page, click the Create New App button to create your Facebook app, as such:
Enter the App Name and App Namespace in the popup dialog that appears. The namespace needs to be completely unique across all apps on Facebook, so if you have a naming collision, just try another. Psst — don’t try “fbparse”, it’s already taken! :]
You can optionally provide an App Category for your app, but it’s not mandatory. Click Continue, proceed through the security check, and you’ll be rewarded with your shiny new Facebook app.
Facebook creates apps in sandbox mode, but you’ll need to change this so that all your friends can use your app straight away. Toggle the Sandbox Mode option to Disabled as shown below:
Next, expand the Native iOS App section, and fill in the Bundle ID with com.rwtutorial.FBParse, which is the bundle ID of your starter iOS app. You won’t be submitting this app to the App Store for now, so set iPhone App Store ID and iPad App Store ID to 0. Turn on Facebook Login, and review your settings before saving:
If everything looks good, click Save Changes and your Facebook app is ready to use.
Before you leave this page, make note of your Facebook App ID at the top of the page, as in the screenshot below:
Don’t lose this ID — you’ll need it later on in this tutorial.
That’s it for Facebook; you can now set up your Parse application.
Setting up Parse
Head on over to the Parse website and click Sign Up to create a new account. When prompted, provide a username, email address and password. On the next screen, choose a name for your Parse app and select the Individual Developer option, like so:
Once you’ve done that, you’ll be presented with the Parse welcome page.
At the top right corner of the page you’ll see a drop down menu displaying your username. From this drop down list, select Account:
Then on the Account page, choose App Keys from the buttons at the top:
Find the Application ID and Client Key for your app; copy these to a text editor and save them in a convenient location. You’ll use these later to link your iOS app with Parse.
The Parse setup is complete — everything else you need to do takes place in the iOS app.
The Starter Project
So that you can concentrate on Facebook and Parse, download the starter project here. The starter project provides a storyboard and all the UI elements you’ll need.
Open up the starter project in Xcode and build and run. You’ll see a Login screen, but since you haven’t yet integrated Parse with your app, it won’t do much at this point.
Open MainStoryboard.storyboard; it will appear as below:
Your storyboard contains three view controllers:
- Login: Provides a mechanism to log in to the app using Facebook authentication.
- Image Wall: Displays the images the user and their Facebook friends have uploaded along with comments on the images.
- Image Upload: Uploads images along with their comments.
The Image Wall is a basic table view; the storyboard provides prototype cells to display the following elements:
- A section header cell showing the image, who uploaded it, and a timestamp.
- A comment cell, to show all the comments for that image.
- A cell providing a text field to submit a new comment.
The view controller segues are simple push segues.
In order to use Facebook and Parse in your app, you will need to include the appropriate SDKs in your build.
Adding Requisite Libraries
Head over to the Facebook Developers site and select Download the latest SDK.
Install the downloaded package; the SDK will be installed in ~/Documents/FacebookSDK by default.
In Finder, drag the FacebookSDK.framework folder from the SDK installation folder into the Frameworks section of your project in Xcode. Choose Create groups for any added folders and deselect Copy items into destination group’s folder (if needed) to keep the reference to the SDK installation folder, rather than creating a copy.
Now drag the FBUserSettingsViewResources.bundle file from FacebookSDK.framework/Resources into the Frameworks section of your project in Xcode. As before, choose Create groups for any added folders and deselect Copy items into destination group’s folder (if needed).
To make use of the Facebook features built into iOS 6, the Facebook SDK requires five other Frameworks and Libraries:
- AdSupport
- Accounts
- libsqlite3.dylib
- Security
- Social
To add these, go to the Link Binary With Libraries section of the Build Phases for the FBParse target in your project. Add the Frameworks and Libraries as Optional, rather than Required, so that your app supports devices running iOS 5 as well as iOS 6.
When you’re done, your list of Libraries should look like the following:
Remember that Facebook App ID you captured earlier? (You did save it somewhere, didn’t you?) It’s time to put that to use in your project.
Open the main project’s .plist file Supporting Files\FBParse-Info.plist. Using Facebook requires three new entries in this file. Create the following three keys and values in the .plist file:
- FacebookAppID as a string value containing the Facebook App ID you stored safely away early in the tutorial.
- FacebookDisplayName as a string value containing the display name you set up previously when creating your Facebook app.
- URL types as a single array sub-item named URL Schemes, which contains your Facebook App ID prefixed with fb. This value is used when handling callbacks from the Facebook web dialogs, should the Login not be handled natively by iOS.
When you’re done, your .plist file should look like the following:
Build and run your project to ensure you have no compilation issues with the Facebook SDK and the included Frameworks and Libraries.
Integrating Parse in Your iOS App
That takes care of Facebook — now it’s time to integrate Parse into your app. Head over to the Parse website and download the Parse iOS SDK. In Finder, locate Parse.framework and drag it into the Frameworks section of your project.
Just as with the Facebook SDK, Parse requires a few other Frameworks and Libraries to be included in the project. Okay, admittedly, it’s more than a few. Add the following list of Frameworks and Libraries to your project’s Link Binary With Libraries:
- AudioToolbox
- CFNetwork
- CoreGraphics (usually added by default)
- CoreLocation
- libz.1.1.3.dylib
- MobileCoreServices
- QuartzCore
- StoreKit
- SystemConfiguration
Once you’ve completed this step, your Frameworks structure should look like the following:
Since you’re going to be using Parse throughout the app, you’ll import the Parse header in the pre-compile header, rather than in each individual class header file. Open Supporting Files\FBParse-Prefix.pch and add the following import:
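The import itself is a single line, which normally goes inside the #ifdef __OBJC__ section of the pch file:

```objc
#import <Parse/Parse.h>
```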
Build and run once more to confirm that you have no compilation issues with all the included SDKs, Frameworks and Libraries. The app still doesn’t do much of anything — don’t worry, you’ll see some visible progress soon!
Building the iOS App
That was a lot of prerequisite work, but you came through with flying colors. Now it’s time to work on the code of your app.
Open AppDelegate.m and add the following two lines to the top of application:didFinishLaunchingWithOptions: method:
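A sketch of those two lines, assuming the 2013-era Parse SDK; the key strings here are placeholders for your own values:

```objc
// Link the app to your Parse app, then set up Parse's Facebook support
[Parse setApplicationId:@"YOUR_PARSE_APPLICATION_ID"
              clientKey:@"YOUR_PARSE_CLIENT_KEY"];
[PFFacebookUtils initializeFacebook];
```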
Be sure to replace the placeholder keys here with your actual Parse Application ID and Client Key that you got earlier when creating the app in Parse.
The above code informs your app of the existence of your Parse app and initializes the the Facebook features built in to the Parse SDK.
Add the following methods to the bottom of AppDelegate.m:
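A sketch of the two URL-handling app delegate methods, assuming the PFFacebookUtils helper from the Parse SDK of the time; both simply forward the URL on:

```objc
- (BOOL)application:(UIApplication *)application
            openURL:(NSURL *)url
  sourceApplication:(NSString *)sourceApplication
         annotation:(id)annotation {
    // Hand the OAuth callback URL over to Parse's Facebook utilities
    return [PFFacebookUtils handleOpenURL:url];
}

- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url {
    return [PFFacebookUtils handleOpenURL:url];
}
```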
These methods are required for your app to handle the URL callbacks that are part of OAuth authentication. You simply call a helper method in PFFacebookUtils and it takes care of the rest.
Build and run your app to confirm there are no compilation issues – your app is now linked to your Parse app!
Leveraging the Login
You’ve seen the Login screen a few times now and you’re probably itching to make it do something useful. Here’s your chance.
To keep all of the communications code in a convenient place, you’ll put the login code into a new Comms class. This class will have static methods for all of your calls to Parse and Facebook.
Create a new Objective-C class with a subclass of NSObject in your project and name the class Comms.
Open Comms.h and replace the contents with the following:
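A sketch of what Comms.h can contain. The Comms and CommsDelegate names come from the text; the exact method shapes (the BOOL flag on commsDidLogin:, marking the protocol methods optional) are assumptions:

```objc
#import <Foundation/Foundation.h>

// Delegate protocol for objects that want to hear back from Comms
@protocol CommsDelegate <NSObject>
@optional
- (void)commsDidLogin:(BOOL)loggedIn;
@end

@interface Comms : NSObject
// Logs in to Parse via Facebook and notifies the delegate when done
+ (void)login:(id<CommsDelegate>)delegate;
@end
```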
The Comms class has a single method so far: login:. When you call this method, you will pass an object that implements the CommsDelegate protocol. When the login completes, the Comms class will call the commsDidLogin: method on the delegate object.
Now for the login method itself. Open Comms.m and add the code below inside the @implementation:
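A sketch of the login method, assuming the PFFacebookUtils call from the Parse SDK of the time (passing nil permissions requests the default basic info and friends list); the delegate-notification shape is an assumption:

```objc
+ (void)login:(id<CommsDelegate>)delegate {
    // nil permissions = the default basic user info and friends list
    [PFFacebookUtils logInWithPermissions:nil
                                    block:^(PFUser *user, NSError *error) {
        // A nil user means the login was cancelled or failed
        if ([delegate respondsToSelector:@selector(commsDidLogin:)]) {
            [delegate commsDidLogin:(user != nil)];
        }
    }];
}
```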
The code above uses Parse’s Facebook Utils to login to Parse using Facebook credentials. You don’t need to ask for any specific Facebook permissions because the basic user information and list of friends is part of the default permissions. The code then calls the delegate method to inform the calling object of the success or failure of the logon attempt.
The methods in your Comms class should be accessible from anywhere in the app, so it makes sense to put the Comms class header import into the pre-compiled header.
Open Supporting Files\FBParse-Prefix.pch and add the following import:
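Again, this is a one-liner inside the #ifdef __OBJC__ section:

```objc
#import "Comms.h"
```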
Now you need to call your new login method from the Facebook login button on the login view controller. Open FBLoginViewController.m and add the following code to loginPressed:
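A sketch of loginPressed: matching the description that follows; the outlet names (loginButton, activityIndicator) are assumptions about the starter project:

```objc
- (IBAction)loginPressed:(id)sender {
    // Stop trigger-happy users from tapping while the login runs
    self.loginButton.enabled = NO;
    [self.activityIndicator startAnimating];

    // Kick off the Facebook login via the Comms class
    [Comms login:self];
}
```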
This method is wired up in your storyboard to the touch-up-inside event on the Facebook login button. When the button is pressed, you disable the Facebook login button so that trigger-happy users can’t hit it multiple times. Next, you display an activity indicator to inform the user that something is happening behind the scenes. Finally, you call the new login: method that you added to the Comms class.
The only thing left to do is make FBLoginViewController conform to the CommsDelegate protocol so that it can be informed about Comms actions. Still in FBLoginViewController.m, add the CommsDelegate protocol to the class extension like so:
Then implement the required delegate method as follows:
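A sketch of the delegate callback matching the description below; the segue identifier, outlet names and alert wording are assumptions:

```objc
- (void)commsDidLogin:(BOOL)loggedIn {
    // Tidy up the UI whether or not the login worked
    self.loginButton.enabled = YES;
    [self.activityIndicator stopAnimating];

    if (loggedIn) {
        // On success, move on to the image wall
        [self performSegueWithIdentifier:@"LoginSuccessful" sender:self];
    } else {
        // On failure, tell the user what happened
        [[[UIAlertView alloc] initWithTitle:@"Login Failed"
                                    message:@"Facebook login failed. Please try again."
                                   delegate:nil
                          cancelButtonTitle:@"OK"
                          otherButtonTitles:nil] show];
    }
}
```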
The above method is your callback from the Comms class. First, the method does a bit of cleanup by re-enabling the Login button and hiding the activity indicator. Then, on a successful login, it performs a segue to the next view controller. However, if the login attempt failed, it displays an alert to the user.
Build and run your project, and behold as your app asks for permission to access your basic Facebook account details:
Choose OK, and if authentication succeeds the app will automatically transition to the image wall:
Also, if you check the console you will see that the login was successful:
FBParse[1727:907] User signed up and logged in through Facebook!
The Parse PFFacebookUtils class makes use of the three Facebook authentication methods available on iOS devices. First, it tries to use the native Facebook app. If the Facebook app is not installed on the device, it attempts to use the Facebook account set up in the operating system on iOS 6 and above. Finally, if neither of these options is available, it falls back on the basic Facebook web dialogs.
Logging Out
Now that you can log into the app, take a moment to browse through the various views. The Upload navigation item takes you to the Upload screen – but this doesn’t work yet. The Logout navigation item takes you back to the login screen.
Hmm, that’s not really what you want to do, is it? Currently, tapping the Logout button simply pops the Image Wall view controller from the navigation stack. You’ll need to implement the calls to log the user out of Parse.
Open ImageWallViewController.m and add the following line to the top of logoutPressed:
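That one line is simply:

```objc
[PFUser logOut];
```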
Hey — that was much easier than expected. This is the only method you need to call to clean up your user on Parse and log out of Facebook.
Ordinarily, you would allow the PFUser to persist between launches of the app so that the app would automatically log the user in when it starts. However, since the login is an important part of this tutorial, you’ll need to call logOut as well when the app loads.
Open FBLoginViewController.m and add the following to the bottom of viewDidLoad:
Build and run again; log in to the app as before and click Logout and you will return to the Login screen. When you run the app again, you’ll be prompted to re-login, since logOut was called when the main view was loaded.
Uploading Images
Your app now logs in and out of Facebook properly; now it’s time to upload some images.
In order to share an image with your friends, Parse needs to associate your Facebook ID with your Parse User. This way, if you have a list of Facebook IDs of friends, you can find the Parse Users to share the image with.
Unfortunately, when Parse stores the logged-in user in its Users table, it doesn’t store the Facebook user ID as a field; instead, it stores Facebook authentication data as an object. So you’ll have to put the user data into the model yourself.
Open up Comms.m and replace the following code in login:
…with this code:
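A sketch of the replacement block: on a successful Parse login it asks Facebook for the current user’s details and stores the ID on the PFUser. The fbId field name comes from the text; the requestForMe pattern is the typical one for the 2013-era Facebook SDK, and the delegate-call shape is an assumption:

```objc
[PFFacebookUtils logInWithPermissions:nil
                                block:^(PFUser *user, NSError *error) {
    if (user) {
        // Ask Facebook for the logged-in user's profile details
        [[FBRequest requestForMe] startWithCompletionHandler:
         ^(FBRequestConnection *connection,
           NSDictionary<FBGraphUser> *me,
           NSError *fbError) {
             if (!fbError) {
                 // Store the Facebook user ID on the current Parse user
                 [[PFUser currentUser] setObject:me.id forKey:@"fbId"];
                 [[PFUser currentUser] saveInBackground];
             }
         }];
    }
    if ([delegate respondsToSelector:@selector(commsDidLogin:)]) {
        [delegate commsDidLogin:(user != nil)];
    }
}];
```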
The code above sends a request to Facebook for all of the user’s details, including their Facebook ID. If the request returns without an error, the code takes the FBGraphUser from the returned object and reads the Facebook user ID stored in the me.id element.

[PFUser currentUser] provides you with the currently logged-in Parse user. You then add the Facebook ID to the current user’s dictionary of fields and issue a saveInBackground command on the PFUser object. Your Parse app now has all the information about the logged-in user, including the user’s Facebook ID, needed to run future queries.
Build and run the app, and log in as usual.
To check that it worked, go to the Parse website in your browser and choose your app name under the Dashboard entry in the drop down menu in the top right hand corner.
In the next screen, select Data Browser at the top and you will see the User’s data, which includes your new fbId field containing your Facebook user ID:
Leave this browser page open as you will return to this page shortly to view your uploaded images.
Uploading Images with Comments
On a recent Sunday afternoon stroll, you were surprised to learn that the Loch Ness Monster was alive and well and living in a nearby stretch of canal. You jumped at the chance to snap this once in a lifetime photo, and now you can’t wait to share this experience with your Facebook friends through your new app.
To send the image and a comment to Parse you’ll use your Comms class. Open Comms.h and add the following method declaration:
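The declaration might look like this; the exact parameter list is an assumption based on how the method is used later in the tutorial:

```objc
+ (void)uploadImage:(UIImage *)image
        withComment:(NSString *)comment
           delegate:(id<CommsDelegate>)delegate;
```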
Then add the following callback methods to the CommsDelegate protocol at the top of the file:
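Inside the CommsDelegate protocol, the two callbacks might be declared like this (the int percentage and BOOL success flag are assumptions):

```objc
- (void)commsUploadImageProgress:(int)percentDone;
- (void)commsUploadImageComplete:(BOOL)success;
```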
Parse provides you with an upload progress callback as your image uploads. You need to pass this on to the delegate using commsUploadImageProgress: so that your app can display the upload progress to the user. Once the upload is complete, you call commsUploadImageComplete: so that the delegate knows the image upload is complete.
Open up Comms.m and add the following method:
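A sketch of the upload method, with comments numbered to match the points described next. The Parse calls (PFFile, PFObject, saveInBackgroundWithBlock:, the progressBlock) are the standard ones from the SDK of the time, but the class names (WallImage, WallImageComment) and field keys are assumptions:

```objc
+ (void)uploadImage:(UIImage *)image
        withComment:(NSString *)comment
           delegate:(id<CommsDelegate>)delegate {
    // 1. Get the image data for uploading
    NSData *imageData = UIImagePNGRepresentation(image);

    // 2. Convert the data into a Parse file and save it asynchronously
    PFFile *imageFile = [PFFile fileWithName:@"image.png" data:imageData];
    [imageFile saveInBackgroundWithBlock:^(BOOL succeeded, NSError *error) {
        if (succeeded) {
            // 3. Create a new Parse object holding the image file,
            //    the user's name and their Facebook user ID
            PFUser *me = [PFUser currentUser];
            PFObject *wallImage = [PFObject objectWithClassName:@"WallImage"];
            [wallImage setObject:imageFile forKey:@"image"];
            [wallImage setObject:me.username forKey:@"user"];
            [wallImage setObject:[me objectForKey:@"fbId"] forKey:@"fbId"];
            [wallImage saveInBackgroundWithBlock:^(BOOL savedImage, NSError *imageError) {
                if (savedImage) {
                    // 4. Save the comment in its own Parse object,
                    //    again with the user's name and Facebook ID
                    PFObject *wallComment =
                        [PFObject objectWithClassName:@"WallImageComment"];
                    [wallComment setObject:comment forKey:@"comment"];
                    [wallComment setObject:me.username forKey:@"user"];
                    [wallComment setObject:[me objectForKey:@"fbId"] forKey:@"fbId"];
                    [wallComment setObject:wallImage forKey:@"image"];
                    [wallComment saveInBackgroundWithBlock:^(BOOL savedComment, NSError *commentError) {
                        // 5. All done: report the outcome to the delegate
                        if ([delegate respondsToSelector:@selector(commsUploadImageComplete:)]) {
                            [delegate commsUploadImageComplete:savedComment];
                        }
                    }];
                } else {
                    // 6. The Wall Image object failed to save
                    if ([delegate respondsToSelector:@selector(commsUploadImageComplete:)]) {
                        [delegate commsUploadImageComplete:NO];
                    }
                }
            }];
        } else {
            // 7. The image file itself failed to save
            if ([delegate respondsToSelector:@selector(commsUploadImageComplete:)]) {
                [delegate commsUploadImageComplete:NO];
            }
        }
    } progressBlock:^(int percentDone) {
        // 8. Pass upload progress back to the delegate
        if ([delegate respondsToSelector:@selector(commsUploadImageProgress:)]) {
            [delegate commsUploadImageProgress:percentDone];
        }
    }];
}
```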
There’s a fair bit of code here, but the points below describe what you do in each step:
- Get the image data for uploading.
- Convert the image data into a Parse file type, PFFile, and save the file asynchronously.
- If the save was successful, create a new Parse object to contain the image and all the relevant data (the user’s name and Facebook user ID). The timestamp is saved automatically with the object when it is sent to Parse. Save this new object asynchronously.
- If the save was successful, save the comment in another new Parse object. Again, save the user’s name and Facebook user ID along with the comment string.
- Once this is all done, report success back to the delegate class.
- If there was an error saving the Wall Image Parse object, report the failure back to the delegate class.
- If there was an error saving the image to Parse, report the failure back to the delegate class.
- During the image upload, report progress back to the delegate class.
Now you need to call your new upload method from the upload screen.
Open UploadImageViewController.m and add the following call to the end of uploadImage:
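The call might look like this; imageView and commentTextField are assumed outlet names from the starter project:

```objc
// Hand the selected image and comment over to the Comms class
[Comms uploadImage:self.imageView.image
       withComment:self.commentTextField.text
          delegate:self];
```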
You’ll notice when you add this method Xcode presents a warning. Can you figure out the source of the warning?
To fix this, add CommsDelegate to the protocols in the class extension at the top of UploadImageViewController.m, like so:
The warning is gone, but before you test this, you need to handle those callbacks from the Comms class.
Still in UploadImageViewController.m, add the following method:
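A sketch of the completion callback matching the description that follows; the outlet names and alert wording are assumptions:

```objc
- (void)commsUploadImageComplete:(BOOL)success {
    // Reset the UI ready for the next image upload
    self.imageView.image = nil;
    self.commentTextField.text = @"";
    self.progressView.progress = 0.0f;

    if (success) {
        // Back to the Image Wall
        [self.navigationController popViewControllerAnimated:YES];
    } else {
        // Let the user know the upload failed
        [[[UIAlertView alloc] initWithTitle:@"Upload Failed"
                                    message:@"Your image could not be uploaded. Please try again."
                                   delegate:nil
                          cancelButtonTitle:@"OK"
                          otherButtonTitles:nil] show];
    }
}
```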
The code above resets the UI to be ready for the next image upload. If the upload is successful, the view controller is popped and returns the user to the Image Wall screen. If the upload fails, the code alerts the user of the failure.
All that’s left to add is the progress indicator. Add the following method to UploadImageViewController.m:
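The progress callback converts the percentage into the 0.0 to 1.0 range a UIProgressView expects; progressView is an assumed outlet name:

```objc
- (void)commsUploadImageProgress:(int)percentDone {
    self.progressView.progress = percentDone / 100.0f;
}
```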
The upload progress is displayed as a percentage, which fits nicely into the progress bar on the screen.
Build and run your project; select an image from the upload screen, add a comment and tap the Send To Image Wall button.
You can either navigate to an image on the web using Mobile Safari, or simply drag and drop an image from Finder into the Simulator. Press and hold on the image in the simulator and you can then save it to the iOS simulator’s photo album.
The image upload progress indicator will change as the image is uploaded; when the image is completely uploaded, the view controller is reset and popped and you will be returned to the Image Wall, safe in the knowledge that your images and comment are saved in your Parse app.
Or are they?
Verifying Image Uploads in Parse
Okay, so you need some proof that your images and comments made it to Parse. Since you haven’t actually coded the Image Wall yet, the only way to prove that your image is saved to Parse is to take a look at the Parse Data Browser.
If you kept the browser window open from the previous section, simply refresh the page and you’ll see two new tables – Wall Image and Wall Image Comment. Take a look at the Wall Image table and you’ll see one new row. Click on the img button and Parse will show you your uploaded image as demonstrated below:
Where To Go From Here?
Here is the sample code which includes everything you have done up to this point in this tutorial.
In Part 1 of this tutorial, you built an app that uses both the Facebook and Parse SDKs, added Facebook authentication using Parse’s Facebook Utils, and have successfully uploaded images to Parse. This knowledge gives you a good foundation to develop apps that use Facebook and Parse together.
In Part 2 of this tutorial you will complete your app by fleshing out the Image Wall and allowing the user to create comments on friends’ images — and vice versa!
If you want to learn more about the Parse SDK, check out the Parse iOS SDK reference. In particular, you might be interested in the documentation for the classes you’ve used so far – PFFacebookUtils, PFFile, and PFUser.
Please use the comments section below to ask any questions about the tutorial, integrating Facebook and/or Parse and with any suggestions for related tutorials going forward.
Team
Each tutorial at is created by a team of dedicated developers so that it meets our high quality standards. The team members who worked on this tutorial are:
- Author
Toby Stephens
You can read earlier discussions of this topic in the archives | https://www.raywenderlich.com/44640/integrating-facebook-and-parse-tutorial-part-1 | CC-MAIN-2016-44 | refinedweb | 3,789 | 69.52 |
Scrapy Proxy Guide: How to Integrate & Rotate Proxies With Scrapy
If you are scraping at scale then using proxies is a must to avoid your spiders getting blocked or returning unreliable data.
There are many different proxy types to choose from with different integration methods, so in this guide, we're going to go through step by step:
- What Are Proxies & Why Do We Need Them?
- The 3 Most Popular Proxy Integration Methods
- How To Integrate & Rotate Proxy Lists
- How To Use Rotating/Backconnect Proxies
- How To Use Proxy APIs
First, let's quickly go over some the very basics.
Need help scraping the web?
Then check out ScrapeOps, the complete toolkit for web scraping.
What Are Proxies & Why Do We Need Them?
Web scraping proxies are IP addresses that you route your requests through instead of using your own or servers IP address.
We need them when web scraping as they allow us to spread our requests over thousands of proxies so that you can easily scrape a website at scale, without the target website blocking us.
If you doing a very small scraping project or scraping a website without a sophisticated anti-bot countermeasures then you mightn't need them. However, when you start scraping big websites or at larger volumes then proxies quickly become a must as they allow you to:
- Bypass anti-bot countermeasures
- Get country specific data from websites
- Hide your identity from the websites you are scraping
There are many different types of proxies (datacenter proxies, residential proxies, mobile proxies, ISP proxies, SOAX proxies), however, for the purposes of this guide we will focus on how to integrate them into our Scrapy spiders.
The 3 Most Popular Proxy Integration Methods
When it comes to proxies there are 3 main integration methods that are most commonly used:
- Proxy Lists
- Rotating/Backconnect Proxies
- Proxy APIs
All 3 have their pros and cons, and can have an impact on whether you have dedicated proxies or proxies in a shared pool. However, which type you use is really down to your own personal preferences and project requirements (budget, performance, ease of use, etc).
The easiest proxies to use are smart proxies that either allow you to send your requests to a single proxy endpoint or to a HTTP API.
These smart proxy providers take care of all the proxy selection, rotation, ban detection, etc. within their proxy, and allow you to easily enable extra functionality like JS rendering, country-level geotargeting, residential proxies, etc. by simply adding some flags to your request.
Examples of smart proxy providers are: ScraperAPI, Scrapingbee, Zyte SmartProxy
How To Integrate & Rotate Proxy Lists
The most fundamental way of using proxies, is to insert a list of proxy IPs into your spider and configure it to select a random proxy every time it makes a request.
'proxy1.com:8000',
'proxy2.com:8031',
'proxy3.com:8032',
When you sign up to some proxy providers, they will give you a list of proxy IP addresses that you will then need to use in your spider. Most free proxy lists online use this approach and some large providers still offer this method for datacenter IPs or if you want dedicated proxies.
To integrate the a list of proxies with your spider, we can build our own proxy management layer or we can simply install an existing Scrapy middleware that will manage our proxy list for us.
There are a number of free Scrapy middlewares out there that you can choose from (like scrapy-proxies), however, for this guide we're going to use the scrapy-rotating-proxies middleware as it was developed by the some of Scrapy's lead maintainers and has some really cool functionality.
scrapy-rotating-proxies is very easy to setup and is very customisable..
Alternatively, you could give the scrapy-rotating-proxies middleware a path to a file that contains the proxy list and your spider will use the proxies from this list when making requests.
## settings.py
ROTATING_PROXY_LIST_PATH = '/my/path/proxies.txt'
The very cool thing about the scrapy-rotating-proxies middleware is that it will actively monitor the health of each individual proxy and remove any dead proxies from the proxy rotation.
You can also define your own ban detection policies, so you can tell the scrapy-rotating-proxies middleware what constitutes a dead proxy so it can remove it from the rotation. For more on this functionality then check out the docs.
How To Use Rotating/Backconnect Proxies
Once upon a time, all proxy providers gave you lists of proxy IPs when you purchased a plan with them.
However, it is far more common for them to provide you a single proxy endpoint that you send your requests too and they handle the selection and rotation of the proxies on their end. Making it much easier for you to integrate a proxy solution into your spider.
Examples of such proxies include BrightData, Oxylabs, NetNut. Their proxy endpoints look something like this:
'zproxy.lum-superproxy.io:22225', # BrightData
'pr.oxylabs.io:7777', # Oxylabs
'gw.ntnt.io:5959', # Netnut
Important: When using a single proxy endpoint, you shouldn't use a rotating proxy middleware like the scrapy-rotating-proxies middleware as it could interfere with the correct functioning of the proxy.
You have a couple of options on how you integrate one of these proxy endpoints into your spider.
1. Via Request Parameters
Simply include the proxy connection details in the meta field of every request within your spider.
## your_spider.py
def start_requests(self):
for url in self.start_urls:
return Request(url=url, callback=self.parse,
meta={"proxy": ""})
Scrapy's HttpProxyMiddleware, which is enabled by default, will then route the request through the proxy you defined.
2. Create Custom Middleware
A cleaner and more modular approach is to create a custom middleware which you then enable in your
settings.py file. This will ensure all spiders will use the proxy.
Here is an example custom middleware that you can add to
Then you just need to enable it in your
settings.py file, and fill in your proxy connection details:
## settings.py
PROXY_USER = 'username'
PROXY_PASSWORD = 'password'
PROXY_ENDPOINT = 'proxy.proxyprovider.com'
PROXY_PORT = '8000'
DOWNLOADER_MIDDLEWARES = {
'myproject.middlewares.MyProxyMiddleware': 350,
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 400,
}
Note: For this middleware to work correctly, you will need to put it before the default Scrapy HttpProxyMiddleware by assigin it a lower number.
How To Use Proxy APIs
Over the last few years, a number of smart proxy solution have been launched that take care of all the proxy/user-agent selection, rotation, ban detection, and are easily customisable.
Typically, these smart proxy solutions allow you to make requests via their HTTP endpoint. Some even have dedicated SDKs and traditional proxy endpoints.
Instead, of adding a proxy to your request, you send the url you want to scrape to them via their API and then they return the HTML response to you. Only charging you if the request has been successful.
For this example, we're going to use ScraperAPI.
1. Via API Endpoint
To send the pages we want to scrape to ScraperAPI we simply just need to forward the urls we want to scrape to their API endpoint. We can do this by creating a simple function:
## myspider.py
API_KEY = 'YOUR_API_KEY'
def get_proxy_url(url):
payload = {'api_key': API_KEY, 'url': url}
proxy_url = '?' + urlencode(payload)
return proxy_url
And use this function in our Scrapy request:
## myspider.py
yield scrapy.Request(url=get_proxy_url(url), callback=self.parse)
This is how your final code should look.
## myspider.py
import scrapy
from urllib.parse import urlencode
API_KEY = 'YOUR_API_KEY'
def get_proxy_url(url):
payload = {'api_key': API_KEY, 'url': url}
proxy_url = '?' + urlencode(payload)
return proxy_url
class QuotesSpider(scrapy.Spider):
name = "QuotesSpider"
def start_requests(self):
urls = [
'',
]
for url in urls:
yield scrapy.Request(url=get_proxy_url(url), callback=self.parse)
def parse(self, response):
for quote in response.css('div.quote'):
yield {
'text': quote.css('span.text::text').get(),
'author': quote.css('small.author::text').get(),
'tags': quote.css('div.tags a.tag::text').getall(),
}
next_page_url = response.css("li.next > a::attr(href)").extract_first()
if next_page is not None:
next_page_url = '' + next_page
yield response.follow(get_proxy_url(next_page_url), callback=self.parse)
2. Using SDK
Alternatively, a lot of proxy providers now have their own SDKs and custom middlewares that make it even easier to integrate them into your scrapers.
For ScraperAPI, simply install their SDK:
pip install scraperapi-sdk
Then integrate the SDK into your code by initialising the ScraperAPIClient with your API key and then using the
client.scrapyGet method to make requests.
## myspider.py
import scrapy
from scraper_api import ScraperAPIClient
client = ScraperAPIClient('YOUR_API_KEY')
class QuotesSpider(scrapy.Spider):
name = "quotes"
def start_requests(self):
urls = [
'',
'',
]
for url in urls:
yield scrapy.Request(client.scrapyGet(url=url), callback=self.parse)
More Scrapy Tutorials
That's it for all the most common ways you can integrate proxies with Scrapy.
If you would like to take things a step further by using multiple types of proxies from different proxy providers, and create your own hierarchical waterfall system then be sure to check out our Proxy Waterfall System guide.
If you would like to learn more about Scrapy in general, then be sure to check out The Scrapy Playbook. | https://scrapeops.io/python-scrapy-playbook/scrapy-rotating-proxy-guide/ | CC-MAIN-2022-40 | refinedweb | 1,534 | 53.31 |
Apart from sessions in which an overview of a certain topic is presented, JavaOne also had a few sessions in which the audience got the chance to actually do some work theirselves. These so called labs typically took up two hours in contrast to "normal" sessions that took up one hour. I attended two of the labs. In this article I will tell you all about "Dynamic Service Composition with OpenESB" that took place on the third day of JavaOne.
Line out of the lab
Tuhin Kumar and Rupesh Ramachandran, both Sun Microsystems, hosted the lab aided by several other people. The lab started with a presentation about the basics of SOA and BPEL as well as a short overview of OpenESB. The use case presented in the lab was a full-functioning JavaOne conference survey/poll application. The idea was that the audience would create a Composite Application that connects to an existing MySQL database to push the results of a small survey. The lab presenters already prepared a Petaho application that would retrieve the survey results out of the MySQL database and show them in a few plots. The survey could be submitted in two ways. The first way was using the built in Composite Application tester, the second way was a prepared web application that calls the Composite Application via a web service call.
Creating the BPEL application
The Sun folks already prepared the whole infrastructure to do the lab. This means they had installed a central MySQL database with a "javaonedb" database on it. On the lab machines NetBeans 6.1 and GlassFish v2 with OpenESB were already installed and configured. Everything was setup for us to get going. The first exercise was to build a BPEL module that would take all the answers to the survey and return success. The second exercise was to enhance the BPEL module to store all survey answers in the MySQL database. At the end of the first exercise, the BPEL module was loaded into a new Composite Application that can be deployed to GlassFish.
Unfortunately the lab was done on pre-installed machines in the lab room. So I had to recreate the BPEL module to be able to add a screenshot of it to this article. Fortunately a CD was made available with all labs on it so I was able to use the prepared WSDL document that defines the web service on which the BPEL process is built. This is what my BPEL process looks like
The process doesn’t do much yet. It only consumes the input that was passed on with the web service call to the javaOnePoll web service. The assign in between the receive and reply only returns a pre-defined string as can be seen in this image
However, this still is a valid BPEL process! And it will run in a JBI compliant esb, like OpenESB, if deployed correctly.
Deploying to OpenESB
The module I created above is a Service Unit. This is the smallest deployable unit for JBI containers. In order to deploy it, the Service Unit must be encapsulated in a Service Assembly. Basically, a Service Assembly is a zip file containing Service Units and a deployment descriptor. In NetBeans, such a Service Unit is also refered to as a Composite Application or as a Composite Application Service Assemble (a.k.a. CASA or C.A.S.A.). So, the steps to deploy the BPEL process created a bove is to create a CASA project, add the BPEL module to it and deploy to e.g. OpenESB. These steps result in yet another graphical representation shown in this image:
The graph indicates that whenever the javaOnePoll web service is called, the request is passed on to the CaptureJ1Polls BPEL module. Building and deploying the Composite Application to OpenESB results in the following output in the NetBeans log:
run-jbi-deploy:<br />[deploy-service-assembly]<br /> Deploying a service assembly...<br /> host=localhost<br /> port=4848<br /> file=/home/wouter/NetBeansProjects/J1PollCASA/dist/J1PollCASA.zip<br />[start-service-assembly]<br /> Starting a service assembly...<br /> host=localhost<br /> port=4848<br /> name=J1PollCASA<br />run:<br />BUILD SUCCESSFUL (total time: 14 seconds)
Testing the Composite Application
Now the Composite Application is deployed we can test it with the testing facilities that NetBeans provides. Creating such a test case can be done by right clicking the Test node under the Composite Application tree in NetBeans and it involves three steps. The first step is to provide a name for the test case. I chose JavaOnePollTest. The next step is to select the WSDL file that the test will be based on. Here the WSDL file that was provided for the lab needs to be chosen. The final step is to choose the web service operation that needs to be tested. In this case the JavaOnePollOperation needs to be selected.
NetBeans will now create an input.xml file based on the WSDL. The values in this file can be adjusted to your needs. Here’s the one I used to test my Composite Application:
<soapenv:Envelope <br /> xsi:<br /> <soapenv:Body><br /> <jav:javaOnePoll><br /> <jav:PersonalInfo><br /> <jav:Name>Wouter van Reeven</jav:Name><br /> <jav:Continent>Europe</jav:Continent><br /> <jav:Industry>ICT</jav:Industry><br /> <jav:TechnicalBackground>JavaSE</jav:TechnicalBackground><br /> <jav:SOAExpertiseLevel>2</jav:SOAExpertiseLevel><br /> </jav:PersonalInfo><br /> <jav:JavaOneOpinionPoll><br /> <jav:JavaOneTechnicalInterest>SOA</jav:JavaOneTechnicalInterest><br /> <jav:TechnicalContentRating>3</jav:TechnicalContentRating><br /> <jav:VenueRating>3</jav:VenueRating><br /> <jav:WeatherRating>2</jav:WeatherRating><br /> <jav:PreferredVenue>San Francisco</jav:PreferredVenue><br /> </jav:JavaOneOpinionPoll><br /> </jav:javaOnePoll><br /> </soapenv:Body><br /></soapenv:Envelope>
The rating values range from 1 (bad) to 3 (excellent). Right clicking the
test case
and selecting Run generated this output
<?xml version="1.0" encoding="UTF-8"?><br /><SOAP-ENV:Envelope xmlns:<br /> <SOAP-ENV:Body><br /> <ns1:javaOnePollResponse xmlns:<br /> Success<br /> </ns1:javaOnePollResponse><br /> </SOAP-ENV:Body><br /></SOAP-ENV:Envelope>
Great!
Connecting to a database
The next exercise was to store the input values of the web service call into a database. During the lab, MySQL was used (of course, Sun recently acquired MySQL) but it should make no difference what database is used. To prove that, I will try to store the data in an Oracle XE database. The first steps involve creating the database schema and making a connection from GlassFish to that schema. This is very trivial stuff and I will not show you how to do that. If you have troubles making the connection from GlassFish to a database, check out one of my previous blog entries.
In order to be able to store the data to the database, we need to setup a partner link to the database. Every partner link is based on WSDL so we need to create a WSDL based on the database. NetBeans has a nice wizard for this. To start the wizard, go to the CaptureJ1Polls project and choose File -> New File -> SOA -> WSDL From Database
Follow the steps in the wizard to create the WSDL. There is one tricky part to it. In the final screen you need to provide the JNDI name of the GlassFish datasource that connects to the database by yourself. I hope the NetBeans team will extend this screen in the future so the available JNDI names are presented in a drop down list and the possibility to create a new data source will be available too. This is all possible in Java EE project wizards so it shouldn’t be too hard to create that here too.
With the newly created WSDL it’s not hard to create a partner link in our BPEL module. Reopen the CaptureJ1Polls BPEL diagram if closed and drag and drop the JavaOnePollsTable WSDL file to the partner link lane to the right (or left) of the BPEL diagram. Next, an Assign activity needs to be added before the Invoke so the data that was passed on to the web service can be passed on to the database. The final BPEL diagram looks like this
The mapDataToDB Assign activity is shown with warnings due to a namespace mismatch that can safely be ignored. The new Assign activity itself looks like this
After recompiling both the CaptureJ1Polls and the J1PollCASA projects, the J1PollCASA project looks like this
Testing the Composite Application again goes ok and using SQL Developer I can now see the data in the database
Conclusion
NetBeans provides a very good graphical editor for orchestration of Business Process Modeling and for creating Composite Applications. The facilities for laying out BPEL diagrams, Assign activities and XSTL mappings are very easy to use and provide powerfull (BPEL related) tools. I will need to do a LOT more investigation, though, to find out all the details.
By the way, on May 22, Alexis Moussine-Pouchkine from Sun in Paris will come over to our company to do a presentation about OpenESB. After that I will do a presentation on NetBeans and BPEL and Composite Applications. If you are interested to attend this event and live in (or close by) the Netherlands, then please visit our agenda for more info and how to subscribe.
Thanks for the link. Please note that the documentation in the link also specifies how to install the JDBC Service Engine for OpenESB.
Great!!! You can download JavaONE 2008 Hands on labs from here:
Enjoy!
You need to join Sun Developer Network (SDN).
Hi Mark,
You’re welcome. Like I said when we met: if you are able to drop by our event at May 22nd please let me know!
Wouter
Thanks for posting this, Wouter! And thanks for dropping by our demo pod at JavaOne and introducing yourself. | http://technology.amis.nl/2008/05/12/javaone-2008-lab-dynamic-service-composition-with-openesb-and-netbeans/ | CC-MAIN-2014-42 | refinedweb | 1,631 | 53.21 |
PEP 324 – subprocess - New process module
- Author:
- Peter Astrand <astrand at lysator.liu.se>
- Status:
- Final
- Type:
- Standards Track
- Created:
- 19-Nov-2003
- Python-Version:
- 2.4
- Post-History:
-
Table of Contents
- Abstract
- Motivation
- Rationale
- Specification
- Replacing older functions with the subprocess module
- Open Issues
- Backwards Compatibility
- Reference Implementation
- References
Abstract ‘args’ argument, just like the
Popenclass constructor. It waits for the command to complete, then returns the
returncodeattribute. The implementation is very simple:
def call(*args, **kwargs): return Popen(*args, **kwargs).wait()
The motivation behind the
call()function is simple: Starting a process and wait for it to finish is a common task.
While
Popensupports:
argsshould be a string, or a sequence of program arguments. The program to execute is normally the first item in the args sequence or string, but can be explicitly set by using the executable argument.
On UNIX, with
shell=False(default): In this case, the
Popenclass uses
os.execvp()to execute the child program.
argsshould normally be a sequence. A string will be treated as a sequence with the string as the only item (the program to execute).
On UNIX, with
shell=True: If
argsis a string, it specifies the command string to execute through the shell. If
argsis a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments.
On Windows: the
Popenclass uses
CreateProcess()to execute the child program, which operates on strings. If
argsis a sequence, it will be converted to a string using the
list2cmdlinemethod. Please note that not all MS Windows applications interpret the command line the same way: The
list2cmdline
bufsizemeans to use the system default, which usually means fully buffered. The default value for
bufsizeis 0 (unbuffered).
stdin,
stdoutand
stderrspecify the executed programs’ standard input, standard output and standard error file handles, respectively. Valid values are
PIPE, an existing file descriptor (a positive integer), an existing file object, and
None.
PIPEindicates that a new pipe to the child should be created. With
None, no redirection will occur; the child’s file handles will be inherited from the parent. Additionally,
stderrcan be STDOUT, which indicates that the stderr data from the applications should be captured into the same file handle as for stdout.
- If
preexec_fnis set to a callable object, this object will be called in the child process just before the child is executed.
- If
close_fdsis true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed.
- If
shellis true, the specified command will be executed through the shell.
- If
cwdis not
None, the current directory will be changed to cwd before the child is executed.
- If
envis not
None, it defines the environment variables for the new process.
- If
universal_newlinesis true, the file objects stdout and stderr are opened as a text file, but lines may be terminated by any of
\n, the Unix end-of-line convention,
\r, the Macintosh convention or
\r\n, the Windows convention. All of these external representations are seen as
\nby the Python program. Note: This feature is only available if Python is built with universal newline support (the default). Also, the newlines attribute of the file objects stdout, stdin and stderr are not updated by the
communicate()method.
- The
startupinfoand
creationflags, if given, will be passed to the underlying
CreateProcess()function. They can specify things such as appearance of the main window and priority for the new process. (Windows only)
This module also defines two shortcut functions:
call(*args, **kwargs):
- Run command with arguments. Wait for command to complete, then return the
returncodeattribute.
The arguments are the same as for the Popen constructor. Example:
retcode = call(["ls", "-l"])
Exceptions
Exceptions raised in the child process, before the new program has started to execute, will be re-raised in the parent. Additionally, the exception object will have one extra attribute called ‘child.
Security
Unlike some other popen functions, this implementation will never call /bin/sh implicitly. This means that all characters, including shell meta-characters, can safely be passed to child processes.
Popen objects
Instances of the Popen class have the following methods:
poll()
- Check if child process has terminated. Returns
returncodeattribute.
wait()
- Wait for child process to terminate. Returns
returncodeattribute.argument is
PIPE, this attribute is a file object that provides input to the child process. Otherwise, it is
None.
stdout
- If the
stdoutargument is
PIPE, this attribute is a file object that provides output from the child process. Otherwise, it is
None.
stderr
- If the
stderrargument is
PIPE, this attribute is file object that provides error output from the child process. Otherwise, it is
None.
pid
- The process ID of the child process.
returncode
- The child return code. A
Nonevalue) sts = os.waitpid(p.pid, 0)
Note:
- Calling the program through the shell is usually not required.
- It’s easier to look at the returncode attribute than the exit status.raises an exception if the execution fails
- the
capturestderrargument is replaced with the stderr argument.
stdin=PIPEand
stdout=PIPEmust be specified.
popen2closes all file descriptors by default, but you have to specify
close_fds=Truew
This document has been placed in the public domain.
Source:
Last modified: 2017-11-11 19:28:55 GMT | https://peps.python.org/pep-0324/ | CC-MAIN-2022-27 | refinedweb | 871 | 56.96 |
VisionVision
Computer Vision allows your robots to understand their environment. For the competition, this is used to locate cubes and arena walls.
When you tell you robot to
see, it will give you a list of all the markers it can see. The objects it returns will give you information about the type of the marker, the distance/angle to the marker along with other assorted information.
PythonPython
To look for markers call
see():
markers = R.see()
markers is now a Python list of
marker objects. Each
marker object contains the following properties.
BlocklyBlockly
Blocks for vision can be found in the Vision section.
Here's an example of a Blockly program that does some basic vision stuff:
Changing the resolutionChanging the resolution
The default the camera takes pictures at a resolution of 640x480px. You can change this by specifying a
res parameter to
R.see(). This maybe be helpful when trying to see things far away.
markers = R.see(res=(1920, 1088))
You must use one of the following resolutions:
(640, 480)
(1296, 736)(default)
(1296, 976)
(1920, 1088)
(1920, 1440)
TIP
Using a higher resolution will increase the amount of time it takes to process the image, but you may be able to see more. Using a smaller resolution will be faster, but markers further away may stop being visible.
Here's a more complete example:
import robot R = robot.Robot() markers = R.see() for marker in markers: if marker.info.token_type == robot.TOKEN_GOLD: move(marker.dist)
Definition of AxesDefinition.
TIP
Note that the axes are all defined relative to the camera. Since we have no way to know how you've mounted your camera, you may need to account for that in your usage of the vision system's data.
Objects of the Vision SystemObjects of the Vision System
Marker
A
Marker object contains information about a detected marker.
It has the following attributes:
info
: A
MarkerInfo object containing information about the type of marker that was detected.
centre
: A
Point describing the position of the centre of the marker.
vertices
: A list of 4
Point instances, each representing the position of the black corners of the marker.
dist
: An alias for
centre.polar.length
rot_y
: An alias for
centre.polar.rot_y
orientation
: An
Orientation instance
token_type
: The type of token the marker represents.
One of:
TOKEN_NONE
TOKEN_ORE
TOKEN_GOLD
TOKEN_FOOLS_GOLD
offset
: The offset of the numeric code of the marker from the lowest numbered marker of its type. object.
Using USB camerasUsing USB cameras
WARNING
Your robots ability to see is very much dependant on the camera you use. We strongly recomend testing your webcams accuracy and maxium distance against that of the Pi cam in the Brain Box.
Cheap webcameras do tend to hurt how well your robot can see.
To use a USB camera you will need to initialize the robot object with the
use_usb_camera parameter. Then just call
R.see() as you would normally.
import robot R = robot.Robot(use_usb_camera=True) print R.see()
You will then need to calibrate your camera as the distance that it reports will not be accurate. You can do this by changing the value in the
usbcamera_focal_lengths dictionary up or down.
To get the current value print it:
import robot.vision as vision print vision.usbcamera_focal_lengths
Assign a new value and print the distance and rotation use the following code.
# usbcamera_focal_lengths[(resx, resy)] = (newValue,newValue) # Where (resx, resy) is the resolution that you want to tune # To set the resolution (640, 480) to the focal length (100,100) do import robot.vision as vision vision.usbcamera_focal_lengths[(640, 480)] = (100, 100) R = robot.Robot(use_usb_camera=True) while True: markers = R.see() for marker in markers: dist = marker.dist rot_y = marker.rot_y print "dist:", dist, "rot_y:", rot_y
We recommend that you tune this value by placing a marker exactly 2m away, printing
R.see() (remember to take an average), and tuning the focal length up or down until you get a value that is close to 2m. If you are feeling fancy you could even write a function to automatically tune the value.
The default resolutions are as follows.
usbcamera_focal_lengths = { (1920, 1440): (1393, 1395), (1920, 1088): (2431, 2431), (1296, 976): (955, 955), (1296, 736): (962, 962), (640, 480): (463, 463), } | https://hr-robocon.org/docs/vision.html | CC-MAIN-2019-43 | refinedweb | 710 | 58.18 |
errno - number of last error
Synopsis
Description
Notes
Colophon
#include <errno.h>.
A common mistake is to dowhere errno no longer needs to have the value it had upon return from somecall() (i.e., it may have been changed by the printf(3)). If the value of errno should be preserved across a library call, it must be saved:where errno no longer needs to have the value it had upon return from somecall() (i.e., it may have been changed by the printf(3)). If the value of errno should be preserved across a library call, it must be saved:
if (somecall() == -1) { printf("somecall() failed\n"); if (errno == ...) { ... } }.
err(3), error(3), perror(3), strerror(3)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.sgvulcan.com/errno.3.php | CC-MAIN-2017-47 | refinedweb | 146 | 73.47 |
Discussion in 'Social Networking Sites' started by nitro909, Dec 27, 2010.
I am looking to queue up tweets, any way to do this?
If you're running Windows, then Powershell 2.0 gives you a great way to do it with the net.webclient namespace.
It takes a little bit of scripting knowledge, but you could queue up messages in a text file (easy), or even excel file (a bit more advanced) and have them post on a schedule.
Hm, any sites that tell you how? Never used anything like that!
you can use socialoomph service. It's free.
You can use Twuffer to schedule pre-written, post-dated tweets
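The queue-from-a-text-file idea mentioned above is a few lines in any scripting language. A hedged Python sketch, where `post_tweet` is a hypothetical stand-in for whatever posting client or API you actually use:

```python
def load_queue(lines):
    """Parse 'HH:MM|message' lines into (time, message) pairs."""
    queue = []
    for line in lines:
        when, _, message = line.strip().partition("|")
        queue.append((when, message))
    return queue

def post_tweet(message):
    # Hypothetical placeholder: swap in a real posting-client call here.
    print("posting:", message)

# Example: run through a queue, posting each message.
queue = load_queue(["09:00|good morning", "17:30|wrapping up"])
for when, message in queue:
    # In a real script you would sleep until `when` before posting.
    post_tweet(message)
```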
Contents
- 1 Open Mobile Money API for MTN, Airtel, Africell and UTL m-sente Uganda
- 2 Requirements for Open Mobile Money API
- 3 Enable Mobile Money API for MTN, Airtel, Africell and UTL m-sente Uganda
- 4 Step 1 – Fill in your Information
- 5 Anatomy of Request
- 6 Anatomy of Response
- 7 Mobile Money Deposit & Payout API
- 8 Instant Payment Notification
- 9 Other methods worth looking into;
Open Mobile Money API for MTN, Airtel, Africell and UTL m-sente Uganda
Open Mobile Money API – EasyPay provides an easy RESTful API that makes integrating Mobile Money into your website or system a breeze. Start collecting mobile money instantly for your products or services.
Easypay provides two types of mobile money APIs:
Mobile Money Deposit Api (Incoming)
This API allows for you to collect payments from your customers mobile money accounts. This API allows any connected internet device process payments using Mobile money of MTN Uganda or Airtel Uganda. Example applications include e-commerce websites, ticketing systems, school fees systems, insurance, tax collection etc. Basically, if you have any kind of service you want to charge your customers for.
Mobile Money Payout Api (Outgoing)
This Api allows you to send money from your Easypay account to either MTN, Airtel, Africell or UTL m-sente Mobile Money accounts. This API has a variety of uses like salary payments, bulk payments to suppliers, betting payouts etc.
Requirements for Open Mobile Money API
- Accounts are limited to transactions worth 10 USD/month for both incoming and outgoing transactions. To remove this limitation, you will have to contact us with your company details (KYC).
Enable Mobile Money API for MTN, Airtel, Africell and UTL m-sente Uganda
Easypay will provide you with the following credentials to make the process quicker;

Anatomy of Request

Every request is sent as a JSON POST body in the following format:

{
"username": "your_clientId",
"password": "your_secret",
"action": "do_something",
"parameter": "value"
}
Anatomy of Response
A response comes in the following JSON format. You have to test the success field: 1 for success or 0 for failed.
Successful Response
{
"success": 1,
"data": object
}
Failed Response
{
"success": 0,
"errormsg": "error message here describing failure"
}
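As a sketch of this request/response cycle in Python (standard library only; the endpoint URL, credentials, and action below are placeholders, not real values):

```python
import json
import urllib.request

API_URL = "https://example.com/api"  # placeholder; use the endpoint EasyPay gives you

def build_request(client_id, secret, action, **params):
    """Assemble the JSON payload that every EasyPay call shares."""
    payload = {"username": client_id, "password": secret, "action": action}
    payload.update(params)
    return payload

def parse_response(raw):
    """Return the data object on success, or raise with the error message."""
    body = json.loads(raw)
    if body.get("success") == 1:
        return body.get("data")
    raise RuntimeError(body.get("errormsg", "unknown error"))

def send(payload):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_response(resp.read().decode("utf-8"))

# Example payload (no network call made here):
print(build_request("your_clientId", "your_secret", "mmdeposit", amount=500))
```

The samples further down this page post the payload with cURL and WebRequest; whether the endpoint expects a JSON body or a form-encoded one should be confirmed against EasyPay's own samples before relying on this sketch.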
Mobile Money Deposit & Payout API
This API contains two methods that facilitate topping up your account with mobile money (incoming) or sending Easypay money to mobile money accounts (outgoing).

Mobile Money Deposit involves asking the user for their phone number and the amount they would like to deposit. When successful, a network overlay is shown on the user's phone asking them to enter their mobile money PIN to approve.
Mobile Money Deposit Charges
These charges are automatically added based on the amount being deposited. It is good practice to display these before confirming the transaction. Easypay charges 3% of the transaction amount when receiving money from mobile money payments/deposits into your account.
Mobile Money Payout Charges
Easypay charges 3% of the transaction amount + UGX 400 network charge (flat charge), when sending money from your wallet to a mobile money account.
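The two fee schedules above come down to a few lines of arithmetic; a sketch (the 3% rate and UGX 400 flat network charge come from the text above, everything else is illustrative):

```python
def deposit_charge(amount):
    """Fee EasyPay takes on an incoming mobile money deposit: 3% of the amount."""
    return amount * 0.03

def payout_charge(amount):
    """Fee on an outgoing payout: 3% of the amount plus a flat UGX 400."""
    return amount * 0.03 + 400

# Worked example: a UGX 50,000 deposit and a UGX 20,000 payout.
print(deposit_charge(50_000))  # 1500.0
print(payout_charge(20_000))   # 1000.0
```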
Mobile Money (Incoming) API action
Instant Payment Notification
When a mobile money deposit has been successfully completed at the network level, we notify your system by POSTing raw JSON data to the callback URL you supplied us. This is not strictly required for mobile money transactions, but it is the standard mechanism you should rely on: it works for both the card payment and mobile money APIs.
{
"phone": "phonenumber",
"reference": "your order id",
"transactionId": "Easypay Transaction ID",
"amount": "1000",
"reason":"your reason or narrative"
}
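On your side, the callback handler only needs to read the raw POST body and pull out those fields. A framework-agnostic Python sketch (the field names mirror the JSON above; marking the order as paid is left as a stub):

```python
import json

def handle_ipn(raw_body):
    """Parse an EasyPay IPN POST body and return the fields your system needs."""
    data = json.loads(raw_body)
    return {
        "reference": data["reference"],        # your order id - mark this as paid
        "transaction_id": data["transactionId"],
        "amount": data["amount"],
        "phone": data["phone"],
        "reason": data.get("reason"),
    }

# Example body shaped like the JSON above (values are illustrative):
body = '{"phone": "2567XXXXXXXX", "reference": "12", "transactionId": "TX1", "amount": "1000", "reason": "test"}'
print(handle_ipn(body)["reference"])  # 12
```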
Other methods worth looking into;
Mobile Money Incoming Sample in PHP
<?php
// Testing Mobile money incoming
$url = '';
$payload = array(
    'username' => '___YOUR CLIENT ID___',
    'password' => '___YOUR CLIENT SECRET___',
    'action' => 'mmdeposit',
    'amount' => 500,
    'phone' => '25675XXXXXXX',
    'currency' => 'UGX',
    'reference' => 12,
    'reason' => 'Testing MM DEPOSIT'
);
// ... cURL call that POSTs $payload to $url goes here
Mobile Money Tutorial for JAVA Programmers
For those of you who use JAVA, Waagana Alex has released two tutorials including code samples to help you get started. Visit the links below;
In three steps Add Mobile Payments to your application using MTN Mobile Money, Airtel Money, Africell Mobile Money and UTL m-sente, Payments to your Android Application, or any other JAVA application
Android Payment Using MTN Mobile Money, Airtel Money, Africell Mobile Money and UTL m-sente, Free Code Project
Mobile Money Incoming Sample in C#
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Text;
using System.Web.Script.Serialization;
public String Demo()
{
    var js = new JavaScriptSerializer();
    var clientId = "xxxxxxxxxxxxxxx";
    var secret = "xxxxxxxxxxxxxxx";
    var action = "mmdeposit";
    var amount = "500";
    var phone = "256XXXXXXXXX";
    var currency = "UGX";
    var reference = "1234";
    var reason = "Testing MM DEPOSIT";
    var url = "";
    var dics = new Dictionary<string, string>();
    dics.Add("username", clientId);
    dics.Add("password", secret);
    dics.Add("action", action);
    dics.Add("amount", amount);
    dics.Add("currency", currency);
    dics.Add("phone", phone);
    dics.Add("reference", reference);
    dics.Add("reason", reason);
    var payload = js.Serialize(dics);
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
    var requestScore = (HttpWebRequest)WebRequest.Create(url);
    byte[] data = Encoding.UTF8.GetBytes(payload);
    requestScore.Method = "Post";
    requestScore.ContentType = "application/x-www-form-urlencoded";
    requestScore.ContentLength = data.Length;
    requestScore.KeepAlive = true;
    var stream = requestScore.GetRequestStream();
    stream.Write(data, 0, data.Length);
    stream.Close();
    var responseSorce = (HttpWebResponse)requestScore.GetResponse();
    var reader = new StreamReader(responseSorce.GetResponseStream(), Encoding.GetEncoding("GB2312"));
    string content = reader.ReadToEnd();
    responseSorce.Close();
    return content;
}
Mobile Money Outgoing Sample in PHP
<?php
// Testing Mobile money payout
$url = '';
$payload = array(
    'username' => 'your clientId',
    'password' => 'your secret',
    'action' => 'mmpayout',
    'amount' => 500,
    'phone' => '0787xxxxxx'
);
// ... cURL call that POSTs $payload to $url goes here
PHP sample for IPN or callback
<?php
$post = file_get_contents('php://input');
$data = json_decode($post);
$reference = $data->reference;       // This is your order id, mark this as paid
$reason = $data->reason;             // reason you stated
$txid = $data->transactionId;        // Easypay transaction Id
$amount = $data->amount;             // amount deposited
$phone = $data->phone;               // phone number that deposited
// With the above details you can update your system and mark the order as paid on your side
45 Comments
Is it possible for me to use this API in my wordpress developed website? I would like to know how to use it
yes, we are releasing a plugin to repository but if you cant wait contact us at info@easypay.co.ug and we shall email it to you
You can get our wordpress plugin from
public function successful_payment()
{
$post = file_get_contents(‘php://input’);
$data = json_decode($post);
$reference=$data->reference; //This is my appointment id, mark this as paid
$reason=$data->reason; //reason you stated
$txid=$data->transactionId; //Easypay transaction Id
$amount=$data->amount; //amount deposited
$phone=$data->phone; //phone number that deposited
}
I use the CodeIgniter framework. I supplied my URL as my callback URL. Will I get the post data?
Thank you!
First off, you need to be on the internet not localhost. Get your code on a real server to get a response posted to you
Is it available for Uganda developers? Im from Ghana can I use it as well?
You can use it if you are collecting from Ugandan Mobile Money channels. As for Ghana, we are working on an expansion for that
What about getting user registration details given number before using sending api. Usually required for confirmation purposes , like the one MTN offers in their USSD client.
e.g
Is it possible to integrate the api in an android application?
The API is RESTful so any internet connected device can access it and use it 🙂
Use that Am sure it will help you
Can I integrate the api in a parse server cloud code (back-end) and if so do you have a java library for front end client mobile application (specifically android)
Hi, our business has an Airtel Merchant number — looking to see if we could integrate IPN capability for our ERP systems. Is that something EasyPay can help with?
just requesting that you improve on your documentation. I can hardly find where to enable the API from on this page.
On trying mmdeposit request got the following result:
{“success”:0,”errormsg”:”FAILED TO COMPLETE TRANSACTION AT TELECOM”}
Could you suggest the issues, which may cause this kind of failures?
This issue is caused due to insufficient funds on mobile money account to handle amount+charge
Notice: Trying to get property ‘reference’ of non-object in C:\xampp\htdocs\sacco\callback.php on line 4
Notice: Trying to get property ‘reason’ of non-object in C:\xampp\htdocs\sacco\callback.php on line 5
Notice: Trying to get property ‘transactionId’ of non-object in C:\xampp\htdocs\sacco\callback.php on line 6
Notice: Trying to get property ‘amount’ of non-object in C:\xampp\htdocs\sacco\callback.php on line 7
how can i solve this problem
Our system cannot call back to a localhost. Please ensure you are on a public server that can receive responses. Also, research how to make your XAMPP server public.
How do we add this API in android apps to enable purchase of goods using mobile money.
This API is json restful, meaning any internet connected device can access it.
Use that Am sure it will help you
Any Java code example?
check this
Do you provide automatic payment options
yes we do
How can i customize from PEGASUS TECHNOLOGIES LIMITED to my own name?
Thank you
Currently you cannot. To get your own you need to integrate with telcos directly
Can the url be used as the IPN
Not really… the IPN is a url on your site that is called when transactions are successful. It is really needed with VISA (3D) as transactions can be forwarded to bank and terminate there. without IPN you would never know the status of these transactions
hello how can i embed easy pay on my website to accept donations?
The API is restful and uses POST. so you can create a form that you can put on your website and integrate the api within the form submission function or url to handle logic
If some one deposits UGX.50,000 into my account, how much output remains in my account as net income after all the charges are taken off.
If you deposit 50k, you will have 50k in your account. The person paying is charged 50k plus 3%.
Sorry, how much output remains if i also use that UGX.50,000 to send UGX.20,000 to another account from my wallet after all charges are taken off.
How do i add this API to an Ecwid e-commerce site?
How do i create a custom message that show on the user’s side while confirming a payment. Its currently bring “Merchant PEGASUS TECHNOLOGIES LIMITED has initiated……….”
At the moment you cannot
is it possible to withdraw my profits from my account using VISA or Master cards say i have traveled
Not at the moment. At the moment, the process is still manual and would require you contact us (InApp) and give us permission to send to your bank account. We are pushing to work with the local banks to have funds delivered to your bank accounts instead.
Does it require connecting the website payments to a bank account where money deposited by customers can be saved or easypay has an account that can hold this cash including big sums for our business
When enabling the API, you are required to register for an Easypay account and that is where money deposited by your customers is saved, then when you wish to withdraw it, you send from your Easypay account to your mobile money account
In case of extra assistance with the API and IPN, which place in Uganda can i visit to find a Easy pay assistant.
You can contact us on Skype (jordah.ferguson) for further assistance with the API, or visit our office in Kampala, Kabalagala at Tirupati Mazima Mall, or call us on 0705630793/0787297589.
Thanks I like easy pay | https://www.easypay.co.ug/kb/knowledge-base/open-mobile-money-api-uganda-mtn-airtel-africell-utl-m-sente/ | CC-MAIN-2019-35 | refinedweb | 1,946 | 53.31 |
Shortcut Expert is a platform for all application shortcuts.
Shortcut Expert is build with VueJS, Gridsome and Vuetify.
It is served with Netlify and is statically generated, which means that every page is distributed through a CDN and is SEO friendly. Additionally, this makes the application blazing fast.
There is no database, once you fork it, you have all the data necessary to run it locally(including the application shortcuts).
You can fork the GitHub repo and create pull requests for anything. However, below are some common contributions:
All application data is in
src/data/applications. Each
json file represents an application. You need to prepare a
json file for your application and create a pull request.
Preparing a
json file for your application is pretty straightforward. There are a manual way and a preferred way to do that.
First, create a Google Sheets file and prepare it as in this example. You need to create a different sheet (tab) for each operating system and each tab needs to have below columns.
After you prepare your Google Sheets file, you can use our Test Application Page to test it out.
Once it is ready, use Create Application Page to prepare a json file. Details for each field is explained on the page.
Afterward, download your json file and create a pull request to add your application to
src/data/applications. Once the pull request is merged, our server will generate a static page for the application and distribute it worldwide through Netlify CDN.
You can manually create
json files. Just check out the examples in the
src/data/applications folder.
Again, there are the manual and the preferred methods.
Each application has a Google Sheet URL in its json file (additionally, there is a link pointing to that file on each application page). Once you go to the URL, you cannot edit that file since you do not have permissions. However, you can copy that Google Sheet to your own drive. After you copy it, make the necessary updates and make sure that you publish your Google Sheet. Once it is published, click the share button and make your file accessible to anyone on the web.
Again, you can use Create Application Page to prepare a json file and create a pull request for your updates.
Just find the application
json file in
src/data/applications and create a pull request after your edit.
You can contribute to this repo; however, you cannot use any variant for commercial purposes.
Author: giray123
Demo:
Source Code:
#vue #vuejs #javascript
If you’re a Python developer thinking about getting started with mobile development, then the Kivy framework is your best bet. With Kivy, you can develop platform-independent applications that compile for iOS, Android, Windows, macOS, and Linux. In this article, we’ll cover Android specifically because it is the most used.
We’ll build a simple random number generator app that you can install on your phone and test when you are done. To follow along with this article, you should be familiar with Python. Let’s get started!
First, you’ll need a new directory for your app. Make sure you have Python installed on your machine and open a new Python file. You’ll need to install the Kivy module from your terminal using either of the commands below. To avoid any package conflicts, be sure you’re installing Kivy in a virtual environment:
pip install kivy // pip3 install kivy
Once you have installed Kivy, you should see a success message from your terminal that looks like the screenshots below:
Kivy installation
Successful Kivy installation
Next, navigate into your project folder. In the
main.py file, we’ll need to import the Kivy module and specify which version we want. You can use Kivy v2.0.0, but if you have a smartphone that is older than Android 8.0, I recommend using Kivy v1.9.0. You can mess around with the different versions during the build to see the differences in features and performance.
Add the version number right after the
import kivy line as follows:
kivy.require('1.9.0')
Now, we’ll create a class that will basically define our app; I’ll name mine
RandomNumber. This class will inherit the
app class from Kivy. Therefore, you need to import the
app by adding
from kivy.app import App:
class RandomNumber(App):
In the
RandomNumber class, you’ll need to add a function called
build, which takes a
self parameter. To actually return the UI, we’ll use the
build function. For now, I have it returned as a simple label. To do so, you’ll need to import
Label using the line
from kivy.uix.label import Label:
import kivy
from kivy.app import App
from kivy.uix.label import Label

class RandomNumber(App):
    def build(self):
        return Label(text="Random Number Generator")
Now, our app skeleton is complete! Before moving forward, you should create an instance of the
RandomNumber class and run it in your terminal or IDE to see the interface:
import kivy
from kivy.app import App
from kivy.uix.label import Label

class RandomNumber(App):
    def build(self):
        return Label(text="Random Number Generator")

randomApp = RandomNumber()
randomApp.run()
When you run the class instance with the text
Random Number Generator, you should see a simple interface or window that looks like the screenshot below:
Simple interface after running the code
You won’t be able to run the text on Android until you’ve finished building the whole thing.
Next, we’ll need a way to outsource the interface. First, we’ll create a Kivy file in our directory that will house most of our design work. You’ll want to name this file the same name as your class using lowercase letters and a
.kv extension. Kivy will automatically associate the class name and the file name, but it may not work on Android if they are exactly the same.
Inside that
.kv file, you need to specify the layout for your app, including elements like the label, buttons, forms, etc. To keep this demonstration simple, I’ll add a label for the title
Random Number, a label that will serve as a placeholder for the random number that is generated
_, and a
Generate button that calls the
generate function.
My
.kv file looks like the code below, but you can mess around with the different values to fit your requirements:
<BoxLayout>:
    orientation: "vertical"
    Label:
        text: "Random Number"
        font_size: 30
        color: 0, 0.62, 0.96
    Label:
        text: "_"
        font_size: 30
    Button:
        text: "Generate"
        font_size: 15
In the
main.py file, you no longer need the
Label import statement because the Kivy file takes care of your UI. However, you do need to import
boxlayout, which you will use in the Kivy file.
In your main file, you need to add the import statement and edit your
main.py file to read
return BoxLayout() in the
build method:
from kivy.uix.boxlayout import BoxLayout
If you run the command above, you should see a simple interface that has the random number title, the
_ place holder, and the clickable
generate button:
Random Number app rendered
Notice that you didn’t have to import anything for the Kivy file to work. Basically, when you run the app, it returns
boxlayout by looking for a file inside the Kivy file with the same name as your class. Keep in mind, this is a simple interface, and you can make your app as robust as you want. Be sure to check out the Kv language documentation.
Now that our app is almost done, we’ll need a simple function to generate random numbers when a user clicks the
generate button, then render that random number into the app interface. To do so, we’ll need to change a few things in our files.
First, we’ll import the module that we’ll use to generate a random number with
import random. Then, we’ll create a function or method that calls the generated number. For this demonstration, I’ll use a range between
0 and
2000. Generating the random number is simple with the
random.randint(0, 2000) command. We’ll add this into our code in a moment.
Next, we’ll create another class that will be our own version of the
box layout. Our class will have to inherit the
box layout class, which houses the method to generate random numbers and render them on the interface:
class MyRoot(BoxLayout):
    def __init__(self):
        super(MyRoot, self).__init__()
Within that class, we’ll create the
generate method, which will not only generate random numbers but also manipulate the label that controls what is displayed as the random number in the Kivy file.
To accommodate this method, we’ll first need to make changes to the
.kv file . Since the
MyRoot class has inherited the
box layout, you can make
MyRoot the top level element in your
.kv file:
<MyRoot>:
    BoxLayout:
        orientation: "vertical"
        Label:
            text: "Random Number"
            font_size: 30
            color: 0, 0.62, 0.96
        Label:
            text: "_"
            font_size: 30
        Button:
            text: "Generate"
            font_size: 15
Notice that you are still keeping all your UI specifications indented in the
Box Layout. After this, you need to add an ID to the label that will hold the generated numbers, making it easy to manipulate when the
generate function is called. You need to specify the relationship between the ID in this file and another in the main code at the top, just before the
BoxLayout line:
<MyRoot>:
    random_label: random_label
    BoxLayout:
        orientation: "vertical"
        Label:
            text: "Random Number"
            font_size: 30
            color: 0, 0.62, 0.96
        Label:
            id: random_label
            text: "_"
            font_size: 30
        Button:
            text: "Generate"
            font_size: 15
The
random_label: random_label line basically means that the label with the ID
random_label will be mapped to
random_label in the
main.py file, meaning that any action that manipulates
random_label will be mapped on the label with the specified name.
We can now create the method to generate the random number in the main file:
def generate_number(self):
    self.random_label.text = str(random.randint(0, 2000))

# Notice how the class method manipulates the text attribute of the random label by
# assigning it a new random number generated by the 'random.randint(0, 2000)' function.
# Since the random number generated is an integer, typecasting is required to make it
# a string; otherwise you will get a TypeError in your terminal when you run it.
The
MyRoot class should look like the code below:
class MyRoot(BoxLayout):
    def __init__(self):
        super(MyRoot, self).__init__()

    def generate_number(self):
        self.random_label.text = str(random.randint(0, 2000))
Congratulations! You’re now done with the main file of the app. The only thing left to do is make sure that you call this function when the
generate button is clicked. You need only add the line
on_press: root.generate_number() to the button selection part of your
.kv file:
<MyRoot>:
    random_label: random_label
    BoxLayout:
        orientation: "vertical"
        Label:
            text: "Random Number"
            font_size: 30
            color: 0, 0.62, 0.96
        Label:
            id: random_label
            text: "_"
            font_size: 30
        Button:
            text: "Generate"
            font_size: 15
            on_press: root.generate_number()
Now, you can run the app.
Before compiling our app on Android, I have some bad news for Windows users. You’ll need Linux or macOS to compile your Android application. However, you don’t need to have a separate Linux distribution, instead, you can use a virtual machine.
To compile and generate a full Android
.apk application, we’ll use a tool called Buildozer. Let’s install Buildozer through our terminal using one of the commands below:
pip3 install buildozer // pip install buildozer
Now, we'll install some of Buildozer's required dependencies. I am on Linux, ergo I'll use Linux-specific commands. You should execute these commands one by one:
sudo apt update
sudo apt install -y git zip unzip openjdk-13
# add the following line at the end of your ~/.bashrc file
export PATH=$PATH:~/.local/bin/
After executing the specific commands, run
buildozer init. You should see an output similar to the screenshot below:
Buildozer successful initialization
The command above creates a Buildozer
.spec file, which you can use to make specifications to your app, including the name of the app, the icon, etc. The
.spec file should look like the code block below:
[app] # (str) Title of your application title = My Application # (str) Package name package.name = myapp # (str) Package domain (needed for android/ios packaging) package.domain = org.test # (str) Source code where the main.py live source.dir = . # (list) Source files to include (let empty to include all the files) source.include_exts = py,png,jpg,kv,atlas # (list) List of inclusions using pattern matching #source.include_patterns = assets/*,images/*.png # = 0.1 # (str) Application versioning (method 2) # version.regex = __version__ = \['"\](.*)['"] # version.filename = %(source.dir)s/main.py # (list) Application requirements # comma separated e.g. requirements = sqlite3,kivy requirements = python3,kivy # (str) Custom source folders for requirements # Sets custom source for any requirements with recipes # requirements.source.kivy = ../../kivy # (list) Garden requirements #garden_requirements = # (str) Presplash of the application #presplash.filename = %(source.dir)s/data/presplash.png # (str) Icon of the application #icon.filename = %(source.dir)s/data/icon.png # (str) Supported orientation (one of landscape, sensorLandscape, portrait or all) orientation = portrait # (list) List of service to declare #services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY # # OSX Specific # # # author = © Copyright Info # change the major version of python used by the app osx.python_version = 3 # Kivy version to use osx.kivy_version = 1.9.1 # # Android specific # # (bool) Indicate if the application should be fullscreen or not fullscreen = 0 # (string) Presplash background color (for new android toolchain) #. #android.presplash_color = #FFFFFF # (list) Permissions #android.permissions = INTERNET # (int) Target Android API, should be as high as possible. #android.api = 27 # (int) Minimum API your APK will support. #android.minapi = 21 # (int) Android SDK version to use #android.sdk = 20 # (str) Android NDK version to use #android.ndk = 19b # (int) Android NDK API to use. 
This is the minimum API your app will support, it should usually match android.minapi. #android.ndk_api = 21 # (bool) Use --private data storage (True) or --dir public storage (False) #android.private_storage = True # (str) Android NDK directory (if empty, it will be automatically downloaded.) #android.ndk_path = # (str) Android SDK directory (if empty, it will be automatically downloaded.) #android.sdk_path = # (str) ANT directory (if empty, it will be automatically downloaded.) #android.ant_path = # (bool) If True, then skip trying to update the Android sdk # This can be useful to avoid excess Internet downloads or save time # when an update is due and you just want to test/build your package # android.skip_update = False # (bool) If True, then automatically accept SDK license # agreements. This is intended for automation only. If set to False, # the default, you will be shown the license when first running # buildozer. # android.accept_sdk_license = False # (str) Android entry point, default is ok for Kivy-based app #android.entrypoint = org.renpy.android.PythonActivity # (str) Android app theme, default is ok for Kivy-based app # android.apptheme = "@android:style/Theme.NoTitleBar" # (list) Pattern to whitelist for the whole project #android.whitelist = # (str) Path to a custom whitelist file #android.whitelist_src = # (str) Path to a custom blacklist file #android.blacklist_src = # = # (list) Android AAR archives to add (currently works only with sdl2_gradle # bootstrap) #android.add_aars = # (list) Gradle dependencies to add (currently works only with sdl2_gradle # bootstrap) #android.gradle_dependencies = # (list) add java compile options # this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option # see for further information # android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8" # (list) Gradle repositories to add {can be necessary for some 
android.gradle_dependencies} # please enclose in double quotes # e.g. android.gradle_repositories = "maven { url '' }" #android.add_gradle_repositories = # (list) packaging options to add # see # can be necessary to solve conflicts in gradle_dependencies # please enclose in double quotes # e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'" #android.add_gradle_repositories = # (list) Java classes to add as activities to the manifest. #android.add_activities = com.example.ExampleActivity # (str) OUYA Console category. Should be one of GAME or APP # If you leave this blank, OUYA support will not be enabled #android.ouya.category = GAME # (str) Filename of OUYA Console icon. It must be a 732x412 png image. #android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png # (str) XML file to include as an intent filters in <activity> tag #android.manifest.intent_filters = # (str) launchMode to set for the main activity #android.manifest.launch_mode = standard # (list) Android additional libraries to copy into libs/armeabi #android.add_libs_armeabi = libs/android/*.so #android.add_libs_armeabi_v7a = libs/android-v7/*.so #android.add_libs_arm64_v8a = libs/android-v8/*.so #android.add_libs_x86 = libs/android-x86/*.so #android.add_libs_mips = libs/android-mips/* = # (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag #android.uses_library = # (str) Android logcat filters to use #android.logcat_filters = *:S python:D # (bool) Copy library instead of making a libpymodules.so #android.copy_libs = 1 # (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64 android.arch = armeabi-v7a # (int) overrides automatic versionCode computation (used in build.gradle) # this is not the same as app version and should only be edited if you know what you're doing # android.numeric_version = 1 # # Python for android (p4a) specific # # (str) 
python-for-android fork to use, defaults to upstream (kivy) #p4a.fork = kivy # (str) python-for-android branch to use, defaults to master #p4a.branch = master # (str) python-for-android git clone directory (if empty, it will be automatically cloned from github) #p4a.source_dir = # (str) The directory in which python-for-android should look for your own build recipes (if any) #p4a.local_recipes = # (str) Filename to the hook for p4a #p4a.hook = # (str) Bootstrap to use for android builds # p4a.bootstrap = sdl2 # (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask) #p4a.port = # # iOS specific # # (str) Path to a custom kivy-ios folder #ios.kivy_ios_dir = ../kivy-ios # Alternately, specify the URL and branch of a git checkout: ios.kivy_ios_url = ios.kivy_ios_branch = master # Another platform dependency: ios-deploy # Uncomment to use a custom checkout #ios.ios_deploy_dir = ../ios_deploy # Or specify URL and branch ios.ios_deploy_url = ios.ios_deploy_branch = 1.7.0 # (str) Name of the certificate to use for signing the debug version # Get a list of available identities: buildozer ios list_identities #ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)" # (str) Name of the certificate to use for signing the release version #ios.codesign.release = %(ios.codesign.debug)s [buildozer] # (int) Log level (0 = error only, 1 = info, 2 = debug (with command output)) log_level = 2 # (int) Display warning if buildozer is run as root (0 = False, 1 = True) warn_on_root = 1 # (str) Path to build artifact storage, absolute or relative to spec file # build_dir = ./.buildozer # (str) Path to build output (i.e. .apk, .ipa) storage # bin_dir = ./bin # ----------------------------------------------------------------------------- #
If you want to specify things like the icon, requirements, loading screen, etc., you should edit this file. After making all the desired edits to your application, run `buildozer -v android debug` from your app directory to build and compile your application. This may take a while, especially if you have a slow machine.
After the process is done, your terminal should have some logs, one confirming that the build was successful:
Android successful build
You should also have an APK version of your app in your bin directory. This is the application executable that you will install and run on your phone:
Android .apk in the bin directory
Congratulations! If you have followed this tutorial step by step, you should have a simple random number generator app on your phone. Play around with it and tweak some values, then rebuild. Running the rebuild will not take as much time as the first build.
As you can see, building a mobile application with Python is fairly straightforward, as long as you are familiar with the framework or module you are working with. Regardless, the logic is executed the same way.
Get familiar with the Kivy module and its widgets. You can never know everything all at once. You only need to find a project and get your feet wet as early as possible. Happy coding.
It’s October and we’re calling all programmers, designers, content writers and open-source contributors to join Hacktoberfest 2020. This is a fantastic opportunity to contribute to open-source or try your hand at something new.
For those who are new to programming or open source, you may be wondering what open source or Hacktoberfest is.
_Open source_ refers to source code that is publicly accessible and allows anyone to inspect, modify, or learn from it. Open source projects encourage collaboration and the freedom to use the software for any purpose you wish.
_Hacktoberfest_ is a month-long celebration of open source software run by DigitalOcean and is open to everyone in our global community.
Seven years ago, Hacktoberfest kick-started the celebration along with 676 excited participants contributing to open source projects and earning a limited-edition T-shirt. Now, hundreds of thousands of developers participate in Hacktoberfest from 150 countries.
If you want to contribute to open-source projects, but don’t know where to start, then Hacktoberfest is the perfect opportunity for you.
Hacktoberfest is a month-long celebration of open source software sponsored by Digital Ocean, Intel, and DEV.
The goal of the event is to encourage participation in the open-source community all across the globe. The challenge is quite simple: open four high-quality pull requests in October on any open source project to get some swag.
If you complete 4 valid PRs, you stand to get a T-shirt, some stickers and a cup coaster (I got one last year; I'm not sure if they'll be doing it this year also).
They also introduced the option to plant a tree instead of receiving a T-shirt as a reward to reduce the environmental impact.
#hacktoberfest #github #git #open-source #opensource #contributing-to-open-source #open-source-contribution #first-open-source-contribution
FrameMaker 7.2 (Mostly Windows, may also apply to Solaris)
July 14, 2006
Note: See also issues present in previous releases of FrameMaker, as they may also apply to FrameMaker 7.2 (unless indicated otherwise)
FM7.2b128, released September 2005; see Adobe’s description of New Features (PDF: 245K), Reviewer’s Guide (PDF: 3.07M)
- (fixed in FM7.2b158) Structured FrameMaker: Cannot delete the value of a required attribute
When trying to delete the value of a required attribute from an element in FM7.2, the normal procedure is followed (and the standard warning is displayed), a confirmation is given… yet the attribute is left unchanged and it is not deleted. Using the same steps in FM7.1 or earlier does produce the expected result.
- (fixed in FM7.2b158) In FM 7.1, 7.0 and 6.0 it is possible to drag files from a Windows Explorer window onto a FrameMaker book window to add files to a book. This also avoided the problem where (often) the files get added in the wrong order when clicking the "Add file to book" button. In FrameMaker 7.2, attempting to drag from an Explorer window has no effect, so the only option is to use the "Add file" button. [Jon White]
Drag and drop doesn’t work when you drop… (ATN 326392)
- FrameMaker crashes when rotating an object using the mouse (Alt-drag), if it is a group of items and in the course of rotating, part of the object goes off-screen temporarily. To reproduce, open the Transport.fm file from the clip art library, go to page 2; select the ship, and Alt-drag a corner handle to rotate the object (error code: 7204, 6107874, 7775428, 0)
- Control-drag to copy an object cannot be undone and clears the history stack; use Alt-drag instead.
- (was fixed in FM7.1, but resurfaced in FM7.2) File Info values (File > File Info) are reset when importing formats from other documents with the Document Properties category turned on.
Note: A new entry can be added to the Preferences section in maker.ini file in FM7.2b158 to control this behavior.
See also: File Info metadata values are not imported from one document to another in FrameMaker (ATN 331895), where this problem is described as a feature
- Imported CGM files are smaller than expected (ATN 331867)
- Font size drop-down in the formatting bar (new in FM7.2/Windows) supports a limited set of font sizes (7, 9, 10, 12, 14, 18, 24, 36) and therefore may not reflect the text size at the cursor location (showing “Other” if the current size is not one of the supported set, or if there is a mix of sizes in the selected area).
- Font choice drop-down in the formatting bar (new in FM7.2/Windows) shows the first font in the alphabetical list of fonts available when current selection has multiple fonts; names of CJK fonts may be displayed with additional non-text characters.
- Understanding Multiple Undo (ATN 331774)
The following commands clear the history palette: Print, Save, SaveAs, Revert, ImportFile (when an anchored frame is selected or if placed directly on a page), ImportFormats, DocumentNumbering, CombinedFonts, PageAdd, PageDelete, ColumnLayout, PageSize, Pagination, MasterPageUsage, UpdateColumnLayout, NewMasterPage, FreezePagination, ConnectTextFrames, CutBoth, EditMarkerTypes, ColorViews, ViewSeparation, AtomizeInset, SwapRedBlue, ColorDefinitions, DeleteWordFromDocDict, EditRulingStyle, ImportElementDefns, Update, NormalizeTags, InclusionElementGrouping, EditLinks, BookRenameFile
- History palette is blank (ATN 331866)
- Disable or enable the clear history stack warning (ATN 331868)
- Troubleshoot errors or freezes during installation (ATN 331828)
- ICU 2.8 libraries required to compile a mapping file into a binary converter file (ATN 322830)
- Install and remove FrameMaker 7.2 and Acrobat Distiller 7 using the command line (ATN 331867)
FM7.2b144 (released December 2005; downloadable update, 37.3MB); the ReadmeForPatch144.txt file reports the following fixes:
- #1244994: FrameMaker 7.2b128, while adding TOC in book, adds an entry in the history palette of structured book, even if the operation is cancelled.
- #1244610: FrameMaker 7.2b128 generates invalid content when exporting UTF-16 encoded XML.
- #1244390: FrameMaker 7.2b128 makes the paragraph tag definition incorrect, if pgf unify command is applied twice with selection in different paragraphs and the second unify command is undone.
- #1244353: The Set on “Numbering Properties” clears the undo history even on cancelling the action.
- #1239625: FrameMaker 7.2b128 crashes when you rotate an object using the mouse (Alt-drag).
- #1236151: FrameMaker 7.2b128 crashes/hangs/takes lot of time, when exporting XML in the case when the relative path of the DTD/Schema w.r.t. the output folder is greater than the absolute path of the DTD/Schema.
- #1234318: In FrameMaker 7.2b128 Undo of color change does not work if a graphics group has a textline object in it.
- #1232166: In FrameMaker 7.2b128, Undo of Split element leads to invalid structure. Undo all commands after that leads to a crash.
- #1230468: In FrameMaker 7.2b128, UNIX: API clients are not getting initialized if language parameter passed is in uppercase.
- #1225193: In FrameMaker 7.2b128, insert an image element in DITA template (at a location where it is valid with any graphic doc) and save the template as XML. Saved document has dpi and nsoffset attributes which are not defined for image element. Hence graphic object does not round trip.
- #1225150: In FrameMaker 7.2b128, DITA: Online manuals should include DITA_Starter_Kit.pdf document for details of DITA application.
- #1225114: In FrameMaker 7.2b128, Clicking on apply master page usage without changing anything clears Multiple undo history without any warning
- #1224317: EPV: Moving anchor frame from outside the text frame into the text frame, does not get undone after some move and undo of graphic objects.
- #1221337: FrameMaker 7.2b128 can not open primary document, if the filepath/stylesheet contains characters like “???”
- #1221290: FrameMaker 7.2b128 FrameMaker Crashes if Undo-Redo is done after Drag and Drop of Book Component.
- #1204754: FMR: Xalan:write extension should be provided with FrameMaker 7.2.
The Xalan namespace "" is used as the namespace of the write extension element. The "write" element can have a file attribute and/or a select attribute to designate the output file, and a method attribute to identify the method that should be used for outputting the subsidiary output file. The method attribute is optional.
The file attribute takes a string which can be used to directly specify the output file name. The select attribute takes an XPath expression or a stylesheet parameter which allows dynamic generation of the output file name. If both attributes are used, the “write” extension first evaluates the select attribute, and falls back to the file attribute. The “write” extension element opens the specified file, writes to it using the output method specified in its method attribute, and closes the file at the end of the element.
The following example shows the usage of this element in a stylesheet.
Stylesheet Example:
<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xalan="...">
  <xsl:template match="/">
    <p>
      Hello from the main file.
      <xsl:comment>This is a comment in the main file.</xsl:comment>
      <xalan:write file="sub.html">
        <sub>
          <xsl:comment>This is a comment in the new file "sub.html".</xsl:comment>
        </sub>
      </xalan:write>
    </p>
  </xsl:template>
</xsl:stylesheet>
- #1245829: In FrameMaker 7.1p116, When “convert referenced graphics; export to file” rule is used, all the graphic files present within xml are exported in a specified format but the exported xml file refers to the original graphic file paths. Hence when you open the exported file, you will get the missing graphics dialog.
- #1244634: FrameMaker 7.1P116 and earlier versions removed the pgfReferenced tag from MIF when a paragraph containing marker for named destination is deleted and then undone.
- #1236208: FrameMaker 6.0 and later do not show the current Paragraph Format Name in the formatting toolbar.
- #1219618: EPV: File attribute not generated when saving graphics to XML.
- #1215327: structapps.fm does not include Schema in default application definition content model.
- #1211754: EPV: French: URI notation not coming in XML files.
- #1202191: When using the Find/Change box to do a search and replace across a book, an Internal Error occurs. [note]
- #1165638: Command !pu (Update Page layout) places the insertion point into another text frame with same flow tag but not autoconnected.
FM7.2b158 (released April 2006; downloadable update, 39.5MB); the ReadmeForPatchp158.txt file reports the following fixes and corrections:
- #0655371: Tagged PDF not getting generated for a book when option to generate separate PDFs is not checked
- #1199273: CodePoint Correction: Support added for few missing characters on Cyrillic, Central European and Baltic Codepages for Windows.
“Please note that we haven’t added full support for these codepages. We have just enabled these codepoints. You can enter these characters in a document and use them only for display/print and pdf creation. You, however still can’t use these characters for pdf bookmarks or save as html or save as xml. These codepoints are also not supported with WebWorks Publisher Standard Edition 8.0 and for RTF Import/Export. You can’t copy/paste these characters to/from other applications. Also, there is no dictionary or spell-check support for the languages supported by these code pages.”
(for additional details, see ATN 332864 for details; a message by Stefan Genz)
- #1239618: When importing formats from one document to another, with the “Document Properties” category turned on, File Info items (such as Title) are reset as a result.
The boolean flag “CopyFileInfoOnImport” can now be used in maker.ini (valid values: on/off) or .fmpreferences (valid values: true/false) to allow/disable import of File Info.
(see “solution 3” in ATN 331895)
- #1243985: FrameMaker 7.2b144 crashes on undo of import of 4th object.
- #1248348: After importing an xml file, a vertically straddled cell in a table is not generated correctly when the table is forced across pages. If the table is forced into one page it is generated correctly.
- #1249265: In FrameMaker 7.2b144, Undo of UpdateAll removes the table catalog entry from the table catalog.
- #1249268: In FrameMaker 7.2b144, Undo of GlobalUpdate removes the table catalog entry from the table catalog.
- #1253783: In FrameMaker 7.2b144, some CGM files from ISODraw import gives smaller dimensions than in 7.1 or 7.0 (or 6).
(see ATN 332605 for details)
- #1259520: In FrameMaker 7.2b144, you could not drag and drop files from windows explorer onto the FrameMaker book window.
- #1260794: In FrameMaker 7.2b144, DITA/Samples/Concepts/garageconceptsovervi should be renamed to garageconceptsoverview.xml.
- #1263600: If you roundtrip an xml document that uses a schema with higher ASCII characters in FrameMaker 7.2b144 using “no application”, you get a “file does not exist” error during import.
- #1263959: In FrameMaker 7.2b144, if you have the character “ƒ” within filename/filepath of xml file and you open this XML with a StyleSheet, FrameMaker does not open the file.
Note: A similar issue is present for character “¥”.
Xerces replaces “¥” with slash and hence you should not use “¥” in your filename/filepath.
- #1264951: In FrameMaker 7.2b144, History palette displays garbage text for “Delete Text” in Japanese version.
- #1265686: In FrameMaker 7.2b144, attribute value deletion failed if the attribute is declared as #required.
- #1265695, #1265699: In FrameMaker 7.2b144, when you import or export an XML file the DTD will be searched in the following order: $HOME, $SRCDIR, $STRUCTDIR. Hence if you have a “<!DOCTYPE root SYSTEM “./a.dtd”>” and a.dtd also exists in your $HOME then the one in $HOME will be picked.
- #1273612: In FrameMaker 7.2b144, an underline and strikethrough are applied when an equation is exported to RTF.
- #1266621: FrameMaker 7.2b128 & 7.2 Updater cannot be installed with Korean language settings on a Chinese OS.
- #1250031: In FrameMaker 7.1, in "Save As", when you select Microsoft Word * options, the file gets saved with a .msw extension.
- #1207612: In FrameMaker 7.1 WMF/EMF import filter is weak -WMF couldn’t be imported.
(listing of maker.ini entries related to the WMF filter omitted — please consult the readme file after applying the patch)
The only reason I can think of is if he (or a library used by him) utilises just one string to hold the base URL.
Some compiled libraries might also come with URLs hardcoded. Not a major bother, but possibly inconvenient as he notes.

On Fri, Nov 27, 2009 at 12:30 AM, Rich <rhyl...@gmail.com> wrote:
> I can't understand why it would be difficult to develop your app using
> api.twitter.com/1 instead of twitter.com, it's just a minor url change
> and will make your app as future proof as you can get until v1 is
> deprecated.
>
> Sorry if I'm missing some big reason why.
>
> On Nov 26, 3:45 pm, Raffi Krikorian <ra...@twitter.com> wrote:
> > In general, our recommendation is to use api.twitter.com/1 from here
> > on out as we are beginning our transition of all our endpoints to the
> > versioned api namespace. We don't know exact dates, but at some point
> > we will deprecate accessing the API from twitter.com directly.
> >
> > On Nov 26, 2009, at 1:10 AM, bang <bang...@gmail.com> wrote:
> > > I found new APIs use api.twitter.com instead of twitter.com
> > > in some lists APIs
> > > is the same as
> > > but some other lists API , for example
> > > is ok
> > > but
> > > is not found
> > > this's a bug? this problem is very unconvient for me to develop my app

--
Harshad RJ
A type-safe non-negative index class. More...
#include <drake/common/type_safe_index.h>
A type-safe non-negative index class.
This class serves as an upgrade to the standard practice of passing ints around as indices. In the common practice, a method that takes indices into multiple collections would have an interface like:

It is possible for a programmer to accidentally switch the two index values in an invocation. This mistake would still be syntactically correct; it will successfully compile but lead to inscrutable run-time errors. The type-safe index provides the same speed and efficiency of passing ints, but provides compile-time checking. The function would now look like:

and the compiler will catch instances where the order is reversed.
The type-safe index is a stripped down int. Each uniquely declared index type has the following properties:

- Indexes are constructed from int values.
- An index implicitly converts to an int (to serve as an index).
- Arithmetic operations on indexes produce int return values. One can even use operands of different index types in such a binary expression. It is the programmer's responsibility to confirm that the resultant int value has meaning.
While there is the concept of an "invalid" index, this only exists to support default construction where appropriate (e.g., using indices in STL containers). Using an invalid index in any operation is considered an error. In Debug build, attempts to compare, increment, decrement, etc. an invalid index will throw an exception.
A function that returns TypeSafeIndex values which needs to communicate failure should not use an invalid index. It should return an std::optional<Index> instead.
It is the designed intent of this class that indices derived from it can be passed and returned by value. Passing indices by const reference should be considered a misuse.
This is the recommended method to create a unique index type associated with class Foo:
This references a non-existent, and ultimately anonymous, class FooTag. This is sufficient to create a unique index type. It is certainly possible to use an existing class (e.g., Foo). But this provides no functional benefit.
Examples of valid and invalid operations
The TypeSafeIndex guarantees that index instances of different types can't be compared or combined. Efforts to do so will cause a compile-time failure. However, comparisons or operations on other types that are convertible to an int will succeed. For example:
As previously stated, the intent of this class is to seamlessly serve as an index into indexed objects (e.g., vector, array, etc.). At the same time, we want to avoid implicit conversions from int to an index. These two design constraints combined lead to a limitation in how TypeSafeIndex instances can be used. Specifically, we've lost a common index pattern:
This pattern no longer works because it requires implicit conversion of int to TypeSafeIndex. Instead, the following pattern needs to be used:
Type-safe Index vs Identifier
In principle, the TypeSafeIndex is related to the Identifier. In some sense, both are "type-safe ints". They differ in their semantics. We can consider ints, indexes, and identifiers as a list of int types with decreasing functionality.

- An index can be compared with an int and other indexes of the same type. This behavior arises from the intention of having them serve as an index in an ordered set (e.g., std::vector).

Ultimately, indexes can serve as identifiers (within the scope of the object they index into), although their mutability could make this a dangerous practice for a public API. Identifiers are more general in that they don't reflect an object's position in memory (hence the inability to transform to or compare with an int). This decouples details of implementation from the idea of the object. Combined with its immutability, it would serve well as an element of a public API.
Default constructor; the result is an invalid index.
This only exists to serve applications which require a default constructor.
Construction from a non-negative int value.
Constructor only promises to enforce non-negativity in Debug build.
Disallow construction from another index type.
Reports if the index is valid–the only operation on an invalid index that doesn't throw an exception in Debug builds.
Implicit conversion-to-int operator.
Whitelist inequality test with indices of this tag.
Blacklist inequality test with indices of other tags.
Prefix increment operator.
Postfix increment operator.
Addition assignment operator.
In Debug builds, this method asserts that the resulting index is non-negative.
Whitelist addition for indices with the same tag.
Blacklist addition for indices of different tags.
Prefix decrement operator.
In Debug builds, this method asserts that the resulting index is non-negative.
Postfix decrement operator.
In Debug builds, this method asserts that the resulting index is non-negative.
Subtraction assignment operator.
In Debug builds, this method asserts that the resulting index is non-negative.
Whitelist subtraction for indices with the same tag.
Blacklist subtraction for indices of different tags.
Whitelist less than test with indices of this tag.
Blacklist less than test with indices of other tags.
Whitelist less than or equals test with indices of this tag.
Blacklist less than or equals test with indices of other tags.
Assign the index a value from a non-negative int.
In Debug builds, this method asserts that the input index is non-negative.
Whitelist equality test with indices of this tag.
Blacklist equality tests with indices of other tags.
Whitelist greater than test with indices of this tag.
Blacklist greater than test with indices of other tags.
Whitelist greater than or equals test with indices of this tag.
Blacklist greater than or equals test with indices of other tags. | http://drake.mit.edu/doxygen_cxx/classdrake_1_1_type_safe_index.html | CC-MAIN-2017-43 | refinedweb | 926 | 60.11 |
At JetBrains we not only bring you new powerful features that make your life better, but also take care to polish the good old stuff to perfection. Let’s take a look at the Move refactoring for ActionScript classes, Flex components and all other types of top-level declarations (namespaces, functions, variables and constants), that has just got a little smarter. By the way, this refactoring also works for inner declarations (also known as helpers or file-local declarations), defined in ActionScript file out of the package statement.
Generally speaking there’s nothing special about Move refactoring, but in complex multi-module projects choosing a right folder may turn out harder than it seems to be. Well, not anymore it isn’t, and here’s the proof.
We’ll start with, invoking Move action by pressing F6 in editor, Project view or UML diagram (don’t forget to select your class/top-level function/whatever else first):
Here we specify the target package (thinking in terms of packages instead of directories is a lot more natural, isn't it?). As we type, IntelliJ IDEA assists us with completion and highlights packages that don't exist in red, meaning they will be created on the fly. That's good, because we can start coding right away without preparing the package structure in advance.
We can opt to search for out-of-the-code usages (e.g. comments, string constants and properties files), and finally, if we need to move a class to a different source folder (e.g. to another module), all we need to do is select the appropriate option! This way, after pressing Refactor or Preview we'll be able to choose which module and source root to move to (the current source root will be preselected):
And that’s it! No need to bother with directories anymore!
Of course, all changes can be previewed first, so that we can see the entire picture without making any actual changes, which is cool, because not all changes are safe. Press Preview and here we go:
All the usages of the moved element are grouped by type and location, and we can easily exclude usages we don't want modified.
Another interesting case is moving an inner class to the top level. This may be useful when some utility class has grown enough to deserve a separate file. Here's an example:
Just as before, place the caret at the class and press F6:
Now accept the name and package of the new top-level class, and IntelliJ IDEA will take care of the rest.
We wish you safe refactoring, and keep watching this blog for the latest news!
The scale to apply to the inverse mass and inertia tensor of the body prior to solving the constraints.
Scale mass and the inertia tensor to make the joints solver converge faster, thus resulting in less stretch of the limbs of a typical ragdoll. Most useful in conjunction with Joint.connectedMassScale.
For example, if you have two objects in a ragdoll of masses 1 and 10, the physics engine will typically resolve the joint by changing the velocity of the lighter body much more than the heavier one. Applying a mass scale of 10 to the first body makes the solver change the velocity of both bodies by an equal amount. Applying mass scales such that the joint sees similar effective masses and inertias makes the solver converge faster, which can make individual joints seem less rubbery or separated, and sets of jointed bodies appear less twitchy.
Note that scaling mass and inertia is fundamentally nonphysical and momentum won't be conserved.
The following script is useful to adjust the mass and inertia scaling in order to get the same corrective velocity out of the solver. Attach it to the ragdoll's root, or to a limb that is over-stretched during the gameplay and it will find all joints down in the transform hierarchy below itself and adjust the mass scale.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class NormalizeMass : MonoBehaviour
{
    private void Apply(Transform root)
    {
        var j = root.GetComponent<Joint>();

        // Apply the inertia scaling if possible
        if (j && j.connectedBody)
        {
            // Make sure that both of the connected bodies will be moved by the solver with equal speed
            j.massScale = j.connectedBody.mass / root.GetComponent<Rigidbody>().mass;
            j.connectedMassScale = 1f;
        }

        // Continue for all children...
        for (int childId = 0; childId < root.childCount; ++childId)
        {
            Apply(root.GetChild(childId));
        }
    }

    public void Start()
    {
        Apply(this.transform);
    }
}
I need to create a default constructor that generates a random question, addition or subtraction. When adding, the numbers must be random from 0-12; when subtracting, the first number must be from 6-12, while the second is less than the first number. Here's my progress as of now:
Also, when called via the toString method I get: 6-0. Every time I run it.

package project4;

public class Question {
    private int num1;
    private int num2;
    private char operand;

    public Question() {
        operand = '+';
        num1 = (int)(Math.random())*12;
        num2 = (int)(Math.random())*12;

        operand = '-';
        num1 = ((int)(Math.random())*12+6);
        num2 = (int)(Math.random()) << num1;
    }

    public String toString() {
        String str = new String(num1 + " " + operand + " " + num2);
        return str;
    }
}
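For reference, the constant "6-0" output follows from Java operator precedence: the cast binds tighter than `*`, so `(int)(Math.random())` truncates the random double to 0 first, making `(int)(Math.random())*12` always 0 and `((int)(Math.random())*12+6)` always 6; the constructor also unconditionally overwrites the '+' assignments with the '-' ones. Below is a corrected sketch (hypothetical: it uses `java.util.Random` and illustrative class/accessor names, which may differ from what the assignment requires):

```java
import java.util.Random;

// Sketch of a corrected constructor: pick the operator at random, then
// draw operands in the required ranges.
class QuestionSketch {
    private final int num1;
    private final int num2;
    private final char operand;

    public QuestionSketch() {
        Random rng = new Random();
        if (rng.nextBoolean()) {
            operand = '+';
            num1 = rng.nextInt(13);        // 0..12 inclusive
            num2 = rng.nextInt(13);        // 0..12 inclusive
        } else {
            operand = '-';
            num1 = 6 + rng.nextInt(7);     // 6..12 inclusive
            num2 = rng.nextInt(num1);      // strictly less than num1
        }
    }

    public int getNum1() { return num1; }
    public int getNum2() { return num2; }
    public char getOperand() { return operand; }

    @Override
    public String toString() {
        return num1 + " " + operand + " " + num2;
    }

    public static void main(String[] args) {
        System.out.println(new QuestionSketch());
    }
}
```

The equivalent one-line fix to the original expressions is to move the multiplication inside the cast, e.g. `(int)(Math.random()*13)` for a value in 0-12.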
On Sun, Sep 27, 2009 at 07:39:04PM +0100, Russell King - ARM Linux wrote:
> On Sun, Sep 27, 2009 at 08:27:07PM +0200, Sam Ravnborg wrote:
> > On Sun, Sep 27, 2009 at 05:41:16PM +0100, Russell King - ARM Linux wrote:
> > > Sam,
> > >
> > > Any idea how to solve this:
> > >
> > > WARNING: arch/arm/kernel/built-in.o(.text+0x1ebc): Section mismatch in reference from the function cpu_idle() to the function .cpuexit.text:cpu_die()
> > > The function cpu_idle() references a function in an exit section.
> > > Often the function cpu_die() has valid usage outside the exit section
> > > and the fix is to remove the __cpuexit annotation of cpu_die.
> > >
> > > WARNING: arch/arm/kernel/built-in.o(.cpuexit.text+0x3c): Section mismatch in reference from the function cpu_die() to the function .cpuinit.text:secondary_start_kernel()
> > > The function __cpuexit cpu_die() references
> > > a function __cpuinit secondary_start_kernel().
> > > This is often seen when error handling in the exit function
> > > uses functionality in the init path.
> > > The fix is often to remove the __cpuinit annotation of
> > > secondary_start_kernel() so it may be used outside an init section.
> > >
> > > Logically, the annotations are correct - in the first case, cpu_die()
> > > will only ever be called if hotplug CPU is enabled, since you can't
> > > offline a CPU without hotplug CPU enabled. In that case, the __cpuexit.*
> > > sections are not discarded.
> >
> > The annotation of cpu_die() is wrong.
> > To be annotated __cpuexit the function shall:
> > - be used in exit context and only in exit context with HOTPLUG_CPU=n
> > - be used outside exit context with HOTPLUG_CPU=y
> >
> > cpu_die() fails on the first condition because it is only used
> > if HOTPLUG_CPU=y.
> > The annotation is wrongly used as a replacement for an ifdef.
> > As cpu_die() is already inside ifdef CONFIG_HOTPLUG_CPU is should
> > be enough to just remove the annotation.
> >
> > Like this (copy'n'paste so it does not apply)
> >
> > diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
> > index e0d3277..de4ef1c 100644
> > --- a/arch/arm/kernel/smp.c
> > +++ b/arch/arm/kernel/smp.c
> > @@ -214,7 +214,7 @@ void __cpuexit __cpu_die(unsigned int cpu)
> >   * of the other hotplug-cpu capable cores, so presumably coming
> >   * out of idle fixes this.
> >   */
> > -void __cpuexit cpu_die(void)
> > +void cpu_die(void)
> >  {
> >  	unsigned int cpu = smp_processor_id();
>
> This is wrong. cpu_die() does not need to exist if hotplug CPU is
> disabled. In that case, it should be discarded and this is precisely
> what __cpuexit does. The annotation is, therefore, correct.

From arch/arm/kernel/smp.c

#ifdef CONFIG_HOTPLUG_CPU
...
void __cpuexit cpu_die(void)
{
	unsigned int cpu = smp_processor_id();

	local_irq_disable();
	idle_task_exit();

	/*
	 * actual CPU shutdown procedure is at least platform (if not
	 * CPU) specific
	 */
	platform_cpu_die(cpu);

	/*
	 * Do not return to the idle loop - jump back to the secondary
	 * cpu initialisation. There's some initialisation which needs
	 * to be repeated to undo the effects of taking the CPU offline.
	 */
	__asm__("mov sp, %0\n"
	"	b	secondary_start_kernel"
	:
	: "r" (task_stack_page(current) + THREAD_SIZE - 8));
}
#endif /* CONFIG_HOTPLUG_CPU */

Please look at the above and realise that cpu_die() is only ever defined
in case that HOTPLUG_CPU is defined. So there is nothing to discard if
HOTPLUG_CPU equals to n.

And just to repeat myself....
The only correct use of __cpu* annotation is for function/data that is
used with or without HOTPLUG_CPU equals to y.
Which is NOT the case for cpu_die().
The __cpu* annotation is not a replacement for ifdeffed out code that is
not relevant for the non-HOTPLUG_CPU case.

	Sam
Greetings, and welcome back to "Twisted Web in 60 Seconds". In the previous entry, back at the beginning of December, I promised to cover Twisted Web's proxying capabilities. For various reasons I've decided to dump that topic and cover something else instead. So, prepare to learn about Twisted Web's CGI capabilities!
twisted.web.twcgi.CGIScript and twisted.web.twcgi.FilteredScript are the two most interesting classes in this area. They are both Resource subclasses, so many of the features of resources that I've covered so far apply to them. For example, you can use them as the resource in an .rpy script:
from twisted.web.twcgi import CGIScript
resource = CGIScript("/path/to/date-example.sh")
date-example.sh might look like this:
#!/bin/sh
echo "Content-Type: text/plain"
echo
/bin/date
That is, just a regular CGI - nothing special about Twisted Web going on there.
If you need to specify an interpreter for some reason (for example, the CGI itself isn't set executable, or doesn't specify its own interpreter with #! on the first line), you can use FilteredScript instead of CGIScript:
from twisted.web.twcgi import FilteredScript
resource = FilteredScript("/path/to/date-example.sh")
resource.filter = "/bin/sh"
Set up this way, /bin/sh will always be run and passed an argument of /path/to/date-example.sh.
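Note that CGIScript doesn't care what language the child process is written in; any executable that prints CGI headers, a blank line, and then a body will work. As a sketch, here is the same date CGI written in Python instead of sh (the file name and helper function are hypothetical, not part of Twisted's API):

```python
import datetime

def cgi_response():
    # A CGI responds with headers, then a blank line, then the body --
    # exactly what date-example.sh does with echo and /bin/date.
    body = datetime.datetime.now().isoformat()
    return "Content-Type: text/plain\r\n\r\n" + body

if __name__ == "__main__":
    # Saved as an executable script, this is all CGIScript needs to run it.
    print(cgi_response())
```

Point CGIScript at a script like this instead of the shell version and the behaviour is the same.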
re_path syntax in application urls.py
I have been getting this error for almost a week. I have googled and checked the docs and looked for youtube videos. I cannot find an answer to this seemingly simple and obvious question: What is the syntax for re_path() in the included urls from my apps?
error:
Reverse for 'jhp_url' with keyword arguments '{'slug': ''}' not found. 1 pattern(s) tried: ['(?P[a-z]{2,3})/courts/(?P<slug>[-a-zA-Z0-9_]+)/$']
That pattern is correct! So obviously, the problem is slug has an empty string. But why? I have it in reverse():
def get_absolute_url(self):
    return reverse('jhp_url', kwargs={'slug': self.slug})
Q1: Why isn't it seeing the kwarg from reverse() and using self.slug?
If I try to put self.slug in the view or the url as extra arguments, PyCharm complains.
Putting the namespace in reverse() and the template makes absolutely no difference! I get the same error.
BUT, if I take the namespace out of those two(1) places, I get a different error:
Reverse for 'jhp_url' not found. 'jhp_url' is not a valid view function or pattern name.
(1) as opposed to having it in one but not the other
So it seems like the first error I mentioned here is closer to being right. My debug_toolbar template context says:
'slug': '^(?P<slug>[-a-zA-Z0-9_]+)/$'
I’m pretty sure that’s wrong. It should be the actual slug and not the pattern. That’s why I have focused on the app urls.py. But, as I said at the top of this rant. I have not been able to find anything on the syntax of re_path() in the included app urls!
bench.urls.py:

urlpatterns = [
    re_path(r"^$", twodigit_testt1, {'slug': r"^(?P<slug>[-a-zA-Z0-9_]+)/$"}, name='tdr'),
    re_path(r"(?P<slug>[-a-zA-Z0-9_]+)/$", courtdetail, name='jhp_url'),
]
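As a side note, the detail pattern itself can be sanity-checked with the standard re module outside Django. This sketch shows that the group name must be written ?P<slug> in full, and that an empty slug can never match, which is consistent with the reverse() error above:

```python
import re

# The detail pattern from urls.py, with the named group written out in full.
pattern = re.compile(r"^(?P<slug>[-a-zA-Z0-9_]+)/$")

# A non-empty slug matches and is captured as a keyword argument.
match = pattern.match("some-court/")
assert match.groupdict() == {"slug": "some-court"}

# An empty slug cannot match, which is why reverse() fails when
# self.slug is an empty string.
assert pattern.match("/") is None
```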
Of course I still get these errors, but my point here is that the interpreter runs with that. But when I try things like
re_path(r"^$", twodigit_testt1, {‘slug’: r’^(?P=slug)/$’}, name=‘tdr’),
I just get syntax errors.
Finally, please note that these errors are coming because the list template that twodigit_test1 is calling has urls to the individual detail pages in it. If I take the detail urls out of the template, it works. But if I go directly to the detail page, after importing my app views into the project urls, that works, too! It’s only the list template + detail urls combination that is the problem - and if you can’t list your details on your list page, what’s the point? I have tried both the url template tag and get_absolute_url in the template. Finally, I did ask an earlier version of this question on SO. I know some people don’t like that but it did not resolve this issue. I have reworked and refocused the question so it is not identical. Plus, I wasn’t using re_path() then. | https://forum.djangoproject.com/t/re-path-syntax-in-application-urls-py/9474 | CC-MAIN-2022-21 | refinedweb | 494 | 76.32 |
Johannes Mockenhaupt commented on TIKA-1435:
--------------------------------------------
Chris thanks.
Here's what I found out so far:
netcdf has a dependency on jdom with scope provided. Rome 1.0 has a dependency on jdom as
well and provided netcdf with jdom so far. This was probably not intended and hid netcdf's
dependency on jdom.
By upgrading Rome to 1.5, which in turn upgraded its jdom dependency to jdom 2.0, which changed
its namespace, netcdf now fails a test because its jdom (1.0) dependency isn't satisfied anymore.
One solution is to make the netcdf dependency explicit by adding a dependency on jdom 1.0.
This leads to both jdom 1 & 2 being on the classpath, in different namespaces. This adds
a 150k jar.
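For the first option, the explicit dependency would look something like the following in the pom (a sketch; the exact jdom 1.x coordinates and version are assumptions, not taken from this ticket):

```xml
<!-- Hypothetical: make netcdf's provided-scope jdom 1.x dependency explicit -->
<dependency>
  <groupId>org.jdom</groupId>
  <artifactId>jdom</artifactId>
  <version>1.1</version>
</dependency>
```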
The other option is to upgrade netcdf 4.2.20 -> 4.3.22, which also uses jdom 2.0. That
way, only one jdom library is needed. netcdf declares its dependency on jdom 2.0 in that version
properly (AFAIK), as it uses the default scope rather than provided.
The latter option seems good, however, the 4.2.20 version of netcdf seems to be the last that
has its dependencies released to maven.org; the later release requires ~2 dependencies (a
parent pom and udunits - see attachment), for which a repository needs to be added. Is that
an option for a project like tika? (Repo is:)
> Update rome dependency to 1.5
> -----------------------------
>
> Key: TIKA-1435
> URL:
> Project: Tika
> Issue Type: Improvement
> Components: parser
> Affects Versions: 1.6
> Reporter: Johannes Mockenhaupt
> Assignee: Chris A. Mattmann
> Priority: Minor
> Fix For: 1.7
>
>
> Rome 1.5 has been released to Sonatype ().
Though the website () is blissfully ignorant of that. The
update is mostly maintenance, adopting slf4j and generics as well as moving the namespace
from _com.sun.syndication_ to _com.rometools_. PR upcoming.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.us.apache.org/mod_mbox/tika-dev/201410.mbox/%3CJIRA.12745699.1412328342000.200303.1412622754866@Atlassian.JIRA%3E | CC-MAIN-2020-45 | refinedweb | 318 | 70.19 |
I have been using arcpy intermittently over the past year and a half mainly for automating and chaining batch processing to save myself countless hours of repetition. This week, however, I had to implement a facet of arcpy that I had not yet had the opportunity to utilise – the data access module.
The Scenario
A file geodatabase with 75 feature classes, each containing hundreds to thousands of features. These feature classes were the product of a CAD (Bentley MicroStation) to GIS conversion via FME, with data coming from 50+ CAD files. As a result of the conversion, each feature class could contain features with various attributes from one or multiple CAD files, but each feature class consisted of the same schema, which was helpful.
The main issue was that the version number for a chunk of the CAD files had not been corrected. Two things needed to be fixed: i) the 'REV_NUM' attribute for all features needed to be 'Ver2' (there would be a mix of 'Ver1' and 'Ver2'), and ii) in the 'MODEL_SUMMARY' text, any occurrence of 'Ver1' needed to be replaced with 'Ver2'. There was one other issue, stemming from creating new features without attributing them: this would have left a 'NULL' value in the 'MODEL' field (and the other fields). All features had to have standardised attributes. The script would not fix these but merely highlight the affected feature classes.
OK so a quick recap…
1. Set the 'REV_NUM' for every feature to 'Ver2'
2. Find and replace 'Ver1' with 'Ver2' in the text string of 'MODEL_SUMMARY' for all features.
3. Find all feature classes that have 'NULL' in the 'MODEL' field.
The Script
Let’s take a look at the thirteen lines of code required to complete the mission.
import arcpy

arcpy.env.workspace = r"C:\Users\*****\Documents\CleanedUp\Feature_Classes.gdb"
fc_list = arcpy.ListFeatureClasses()
fields = ["MODEL", "MODEL_SUMMARY", "REV_NUM"]

for fc in fc_list:
    with arcpy.da.UpdateCursor(fc, fields) as cursor:
        for row in cursor:
            if row[0] == None or row[0] == "":
                print fc + ": Null value found for MODEL"
                break
            if row[1] != None:
                row[1] = row[1].replace("Ver1", "Ver2")
            row[2] = "Ver2"
            cursor.updateRow(row)
The Breakdown
Import the arcpy library (you need ArcGIS installed and a valid license to use it)
import arcpy
Set the workspace path to the relevant file geodatabase
arcpy.env.workspace = r"C:\Users\*****\Documents\CleanedUp\Feature_Classes.gdb"
Create a list of all the feature classes within the file geodatabase.
fc_list = arcpy.ListFeatureClasses()
We know the names of the fields we wish to access so we add these to a list.
fields = ["MODEL", "MODEL_SUMMARY", "REV_NUM"]
For each feature class in the geodatabase we want to access the attributes of each feature for the relevant fields.
for fc in fc_list:
    with arcpy.da.UpdateCursor(fc, fields) as cursor:
        for row in cursor:
If the ‘MODEL’ attribute has a None (NULL) or empty string value then print the feature class name to the screen. Once one is found we can break out and move onto the next feature class.
if row[0] == None or row[0] == "":
    print fc + ": Null value found for MODEL"
    break
We know have a list of feature classes that we can fix the attributes manually.
Next we find any instance of ‘Ver1’ in ‘MODEL_SUMMARY’ text strings and replace it with ‘Ver2’….
if row[1] != None:
    row[1] = row[1].replace("Ver1", "Ver2")
…and update all ‘REV_NUM’ attributes to ‘Ver2’ regardless of what is already attributed. This is like using the Field Calculator to update.
row[2] = "Ver2"
Perform and commit the above updates for each feature.
cursor.updateRow(row)
Very handy to update the data you need and this script can certainly be extended to handle more complex operations using the arcpy.da.UpdateCursor module. | https://glenbambrick.com/tag/arcgis/ | CC-MAIN-2020-16 | refinedweb | 633 | 71.44 |
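The attribute logic itself is plain Python and can be sanity-checked without an ArcGIS licence. The sketch below (a hypothetical helper, not part of the original script) applies the same per-row rules to a plain list standing in for a cursor row:

```python
def fix_row(row):
    """Apply the UpdateCursor rules to one [MODEL, MODEL_SUMMARY, REV_NUM] row.

    Returns the updated row, or None when MODEL is empty -- the case the
    script only reports rather than fixes.
    """
    if row[0] is None or row[0] == "":
        return None  # feature class would be flagged for manual fixing
    if row[1] is not None:
        row[1] = row[1].replace("Ver1", "Ver2")
    row[2] = "Ver2"
    return row

print(fix_row(["M01", "Model summary: Ver1 drawing", "Ver1"]))
```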
FireCloud Basics
This document is in the process of being deprecated and replaced by individual articles about the topics it covers.
The FireCloud world is organized into workspaces.
Workspaces and their components are detailed below, including using and creating workspaces, uploading and downloading data, and configuring methods and applying tools in workflows.
FireCloud Basics
Table of Contents
- Workspace Overview
- Workspace Concepts
- Google Buckets
- Workspace Attributes
- Accessing an Existing Workspace
- Workspace Navigation
- Creating a New Workspace
- Workspace Access Controls
- Uploading and Downloading Data
- Entity Attributes
- Data Model
- Methods
- Method Repository
- Method Configurations
- Inputs and Outputs
- Configuring a Method and Launching an Analysis
- Statuses in FireCloud
Workspace Overview
Workspace Concepts
Workspaces contain a data model to organize data and metadata, and simplify analysis runs for large data sets. The data model includes predefined entity types (e.g., participant and sample set), relationships, and attributes. For your convenience, results from analyses are populated directly to the data model. Currently, the data model is tailored to TCGA data, but will be extensible to non-TCGA projects with a germline or cell-line focus.
The data model includes entities and entity attributes. Entities refer to a physical thing (e.g., a participant) or a collection of physical things (e.g., participant sets). FireCloud uses entities to provide organization and hierarchical structure for data. For example, a participant entity refers to a participant; a sample entity refers to a sample from that participant.
Meanwhile, entity attributes are used to describe entities and associate data to entities. An entity attribute can include values (e.g., numbers or strings) and file paths to data (e.g., the URL of a Google bucket). For example, a participant (entity) can have an age (entity attribute). A sample (entity) can also have an associated BAM that resides in a Google bucket.
Entity attributes can serve as inputs and outputs to methods. For example, a sample (entity) can reference a BAM file path (entity attribute) that serves as the input to a method. This method can in turn generate outputs that populate new entity attributes as results.
Google Buckets
Upon creation, a workspace generates a single Google bucket within Google Cloud Platform (GCP). FireCloud uses Google buckets to store the data generated in your workspace in the cloud. All storage costs are charged to FireCloud Billing Projects. Go here for more information about billing and projects.
In the example below, if you create a new workspace called "Broad_GTEx_RNASeq," the system creates a new bucket, e.g., fc-25ec6523-aad2-49e7-9b59-c89a737276c6 with a clickable link below Google Bucket.
Total Estimated Storage Fee per Month
FireCloud displays a Total Estimated Storage Fee per month for every bucket associated with a workspace. You can view this information in the Workspace Summary tab, below the Google bucket id.
To calculate the estimate, FireCloud applies the Google Cloud Storage (GCS) general pricing model ($0.026/GB/month, or $26/TB/month) to the total size of all files in your Google bucket. The estimate includes any files that you uploaded or copied directly to your bucket, as well as files that populated to your bucket from analysis submissions within your workspace. Example: your Google bucket includes a dozen files with a total size of 1.141 GB. The Total Estimated Storage Fee per month is $0.03.
Please note that FireCloud updates the estimate on a daily basis as it receives storage information from Google. If you add files to your Google bucket, please allow at least 24 hours for FireCloud to display the updated estimate.
Workspace Attributes
Workspaces attributes are globally accessible to all methods within a workspace. If you enter workspace attributes in the workspace Summary tab, they can serve as inputs for any method you run within your workspace.
Click Edit and Add new to add a new workspace attribute. In the example below, the Key markers_file refers to the name of the file and the Value gs://fc-44cb2981-5e5a-4ec0-bb0c-0a9ff5966c6f/markers_file.txt refers to the Google Bucket file path. After you click Save, any method configurations within this workspace can reference this file as an input to run an analysis.
You can also click Import Attributes and Download Attributes to import or download workspace attributes as a tab-separated-value (TSV) file.
When importing a new TSV file for workspace attributes, note the file format requirement:
Only the first column header MUST begin with workspace: followed by the [Key]. Enter the Value in the row below each Key. To enter multiple workspace attributes, simply add a new column with a Key row and Value row.
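For example, a two-attribute workspace TSV would look like this (columns are tab-separated; the markers_file value is the bucket path shown earlier, the second attribute is hypothetical):

```
workspace:markers_file	reference_build
gs://fc-44cb2981-5e5a-4ec0-bb0c-0a9ff5966c6f/markers_file.txt	hg19
```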
Workspaces also include method configurations that bind data to methods, containing tools. You can use a method config to specify which entity attributes to use as inputs to an analysis runs and for which entity attributes you want to populate results.
Accessing an Existing Workspace
After you log on to FireCloud, any workspaces for which you have access will display in the Workspaces List.
From here you can search for workspaces using the filter box. To enter a workspace, click on its name (e.g., my_workspace).
Workspace Navigation
The Summary tab describes basic workspace details such as its owner(s) and description.
To view the workspace data model and attributes, click on the Data tab.
You can view different entities (participant, sample, pair, pair sets) in your data model by clicking the buttons above.
Use the Method Configurations tab to view all method configurations for your workspace. For tutorial workspaces, method configurations may be pre-populated with methods to run analyses. You can also import method configurations from the Method Repository.
You can click on a single method configuration to view its settings.
Information about the method displays below the Method Configurations tab. Additionally, you can view methods you have access to by clicking the Method Repository link at the top of the page.
Each method has a set of inputs and outputs that define how the data model feeds data to methods and updates the data model from analysis results. Input parameters get their values from attributes of the pair entity that the method is executed on.
In the example below, the user entered the "case_bam" attribute value for the "tumor_bam" input parameter. The method will execute on the "case_bam" attribute for this input parameter when the user launches an analysis.
The Monitor tab displays analyses that you run in FireCloud.
You can check the status of an analysis by clicking on it.
Creating a New Workspace
From the Workspaces view, click the Create New Workspace... button.
Upon creation, you will be prompted to enter information about your new workspace. If you provided a Google Billing Account during registration, you can select a Google Cloud Platform (GCP) Project to track compute and storage costs for this workspace. Refer to Projects and Billing Accounts for more information.
Workspace Access Controls
FireCloud workspace access controls (ACLs) contain three access levels: READER, WRITER, and OWNER where each access level represents an expanded set of permissions from the previous.
You can update workspace access controls from the workspace Summary tab.
READER Access
If a workspace ACL grants a user READER access, the user can:
enter the workspace and view its contents
clone the workspace
copy data and method configs from that workspace to one in which the user has been granted WRITER or OWNER access
The user cannot:
make changes to the data model (add/delete entities, edit metadata)
add/delete method configs
edit method configs
launch an analysis (submit a method config for execution)
abort submissions
WRITER Access
If a workspace ACL grants a user WRITER access, the user has all the permissions granted to a user with READER access, and in addition can:
can make changes to the data model (add/delete entities)
can create new collections (sample sets, individual sets, pair sets) from existing non-set entities (samples, participants, pairs)
can delete and edit entities
can add/modify entities, including the ability to
copy entities from another workspace’s data model into the workspace, provided user has at least READER access to the source workspace
upload data entities and their data files directly to workspace
can add/modify/delete method configs, including the ability to
copy method configs to the workspace from the method repository (provided user has read access to the method config)
copy method configs from another workspace provided user has at least READER access to the source workspace
can edit method configs within the workspace
OWNER Access
If a workspace ACL grants a user OWNER access, the user has all the permissions granted to a user with WRITER access, and in addition:
can edit the workspace’s ACL
can delete a workspace
When you create or clone a workspace, the new workspace’s ACL automatically grants you OWNER-level permissions.
Uploading and Downloading Data
In order to add data to your workspace, you need to upload your data files to the workspace bucket. You can upload data to your bucket through either the Google Developers Console or Google’s command line utility gsutil.
Google Developers Console
Users can also upload and download data through the Google Developers Console.
To access buckets, navigate to your Workspace Summary tab and click on the bucket URL, .e.g., fc-f498747a-b7d8-4d78-937e-26f0eb27cfa0.
Once you are in a bucket, you can click Upload Files. Or to download the file, click on its name, .e.g,. panel_100_genes.interval_list.
gsutil
First, install gsutil to your local computer. The Google Cloud SDK installation includes gsutil. To install Google Cloud SDK
You can run the following command using bash shells in your Terminal: curl | bash Or download google-cloud-sdk.zip or google-cloud-sdk.tar.gz and unpack it. Note: The command is only supported in bash shells.
Restart your shell: exec -l $SHELL or open a new bash shell in your Terminal.
Run gcloud init to authenticate, set up a default configuration, and clone the project's Git repository.
Before uploading data using gsutil, you can list buckets you have access to by running gsutil ls, or gsutil ls -p [project name] to list buckets for a specific project.
To upload data to a bucket, run gsutil cp [local file path] [bucket URL]. You must have read/write access to the bucket.
The bucket URL is the path to your file in the Google Cloud SDK. It will look like gs://[bucket name], e.g. gs://jntest10052015 or for folders within a bucket, gs://[bucket name]/[folder name], e.g., gs://jntest10052015/gene_files.
To download data from a bucket, run gsutil cp [bucket URL]/[file name] [local destination path], e.g., gsutil cp gs://jntest10052015/HCC1143.100_gene_250bp_pad.bam . (where . is the current directory).
Entity Attributes
FireCloud uses entity attributes to describe data entities (e.g., a participant identifier) and reference entity file locations (e.g., the URL to a Google Cloud Storage bucket). From the Data tab, you can click the Import Data… button to import new entity attributes or add to existing attributes within a workspace.
You can import entity attributes by clicking Import from file or Copy from another workspace. Note that copying from another workspace will not import the data into your workspace bucket. Rather, it will refer to file paths in the bucket of the workspace you copied. Thus, if that workspace bucket is deleted, your workspace data model will no longer refer to an existing bucket path.
Data Model
This section describes format requirements for load files and how FireCloud translates files into data model entities and relationships.
FireCloud Load File Format
Data can be imported to FireCloud as tab-separated-value (TSV) files (e.g., a .txt file) where each line in the file corresponds to an entity. All of the lines in a load file must reference entities of the same type and separate files must be used for each entity type.
The FireCloud data model supports the following entity types:
Participant
Sample
Pair
Participant Set
Sample Set
Pair Set
The first line for TSV files must contain the appropriate field names in their respective column headers.
Below are load file entity types and their corresponding first-column headers.
Non-Set Entity Load Files
Participants
first column: entity:participant_id (key)
subsequent columns: entity attributes
exactly one row per entity in the file
Example of Participants Load File
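A minimal participants load file (hypothetical identifiers and attribute values; columns are tab-separated) might look like:

```
entity:participant_id	gender	age
HCC1143_participant	female	52
TCGA-A1-A0SB	male	61
```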
Samples
first column: entity:sample_id (key)
subsequent columns (no ordering requirement):
participant_id (foreign key)
entity attributes
exactly one row per entity in a file
Example of Samples Load File
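A corresponding samples file might look like the following (hypothetical ids and bucket path; the participant_id column links each sample to an existing participant):

```
entity:sample_id	participant_id	bam_file
HCC1143_tumor	HCC1143_participant	gs://my-bucket/HCC1143_tumor.bam
```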
Pairs
first column: entity:pair_id (key)
subsequent columns (no ordering requirement):
case_sample_id (foreign key)
control_sample_id (foreign key)
participant_id (foreign key)
entity attributes (optional)
exactly one row per entity in a file
Example of Pairs Load File
Set Entity Load Files
FireCloud uses Membership load files to specify set entity membership and Update load files to specify set entity attributes. Update load files are ONLY necessary if you want to specify attributes for a set.
In Membership load files, each line lists the membership of a non-set entity (e.g., participant) in a set (e.g., participant set). The first column contains the identifier of the set entity and the second column contains a key referencing a member of that set.
Example of Participant Set Membership Load File
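For instance, a participant set membership file might look like the following (the membership: header prefix and participant ids are assumptions for illustration):

```
membership:participant_set_id	participant_id
TCGA_BRCA	TCGA-A1-A0SB
TCGA_BRCA	TCGA-A1-A0SD
```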
Note: Multiple rows in the Membership load file may have the same set entity id (e.g., TCGA_BRCA).
Meanwhile, Update load files specify set entity attributes. The first column contains the set entity identifier and subsequent columns contain entity attributes, e.g., a gistic2_input file.
In Update load files, the set entity referenced in the first column must already exist in the workspace data model.
Example of Participant Set Update Load File
Note: Multiple rows for the same set entity are NOT permitted in Update load files. To add additional attributes for the same set entity id, you should use additional columns.
Generating Load Files
The FireCloud load file format permits users to copy and paste data from Excel into their TSV file of choice. When you copy data from Excel and paste it into a text editor, the columns are separated by tabs.
Order for Uploading Load Files
Load files must be imported in a strict order due to references to other entities. You can click the Import Data… button in order to browse for files to import:
The order is as follows ("A > B" means entity type A must precede B in order of upload):
participants > samples
samples > pairs
participants > participant sets
samples > sample sets
pairs > pair sets
set membership > set entity, e.g., participants > samples > sample set membership > sample set entity.
After you upload a load file successfully, a confirmation message appears.
If the import failed, a failure message specifies what went wrong with the import data.
Overwriting and Deleting Entity Attributes
If an entity attribute already exists, you can import or copy load files to overwrite its values. For example, you may have previously imported a participant load file that had several columns for entity attributes. If you import another participant load file, it will overwrite values in all columns that existed in the previous load file, and create new entries for any new columns.
You can also enter DELETE in a load file to remove an entity attribute value.
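For example, this hypothetical participant load file clears the age attribute for one participant while leaving gender as-is:

```
entity:participant_id	gender	age
TCGA-A1-A0SB	male	DELETE
```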
Methods
A method can correspond to a single task or a workflow. Tasks are executable programs (e.g., a Tool) bundled into a Docker image. Workflows, comprised of one or more tasks, contain the method and the method input parameters. FireCloud submits both tasks and workflows to Google Job Execution System (JES) when they are ready to run (i.e., inputs are available, which may be the outputs of upstream tasks).
An Analysis is what FireCloud submits to JES when launching a method configuration against an entity or entity set. It is a combination of a method config and entity or entity set identifying the
method that runs;
number of times the method runs; and
inputs and outputs for each run.
Inputs and Outputs are mapped to attributes on data model entities. In response to the user submission, JES launches a workflow for each run of the method.
FireCloud uses WDL (Workflow Description Language), a domain specific language, to describe tasks and workflows.
In the Method Repository, a method is described by a WDL file whose tasks reference Docker images. The FireCloud Method Repository does not store the Docker images; rather these are stored on Docker Hub.
FireCloud identifies methods through the following unique identifiers:
a namespace;
a method name; and
a snapshotID.
When a method is added to the method repository, it automatically displays a new snapshotID. This ensures methods in the method repository are never overwritten and that provenance is fully captured.
For example, if you add the tool myAligner to the method repository under the namespace myNamespace, its identifier in the method repository might be: myNamespace/myAligner/1
Method Repository
The method repository contains namespaces, methods, and method configs.
Method Repository Namespaces
The namespace is a "folder" of methods and method configs. The method repository has namespaces under which both methods and method configs are stored. You can name your namespaces however you see fit, provided the chosen namespace does not already exist. Administrators may verify namespaces, which, like verified Twitter accounts, establish the authenticity of namespaces attached to specific organizations (e.g., Broad Institute).
Uploading Methods through the UI
You can upload your method (WDL) directly through the FireCloud UI. In the Method Repository, click on Create new method.... Then, copy/paste your WDL into the WDL text block or click Load from file... to load a WDL file from your computer.
Namespace: the "folder" where you want to store this method
Name: the desired name of this method
Type: FireCloud supports both single task WDLs and workflows that comprise multiple tasks
Synopsis (optional): a description of this method
Documentation (optional): any markdown text you want to add to describe this method
First, you must create a text file on your local filesystem that contains the WDL for the method.
Below is an example of a WDL file.
task M2 {
  File ref_fasta
  File ref_fasta_dict
  File ref_fasta_fai
  File tumor_bam
  File tumor_bai
  File normal_bam
  File normal_bai
  File intervals
  String m2_output_vcf_name

  command {
    java -jar /task/GenomeAnalysisTK_latest_unstable.jar -T M2 \
      --no_cmdline_in_header -dt NONE -ip 50 \
      -R ${ref_fasta} \
      -I:tumor ${tumor_bam} \
      -I:normal ${normal_bam} \
      -L ${intervals} \
      -o ${m2_output_vcf_name} \
      --disable_auto_index_creation_and_locking_when_reading_rods
  }

  runtime {
    docker: "gcr.io/broad-dsde-staging/cep3"
    memory: "12 GB"
    defaultDisks: "local-disk 100 SSD"
  }

  output {
    File m2_output_vcf = "${m2_output_vcf_name}"
  }
}

workflow CancerExomePipeline_v2 {
  call M2
}
Pushing Methods through the CLI
The FireCloud Command Line Interface (CLI) provides another option to upload methods (WDL) to the Method Repository.
First, enter cd firecloud-cli in your Terminal to change directories.
Then, enter the FireCloud CLI push command. The box below shows an example of a command you use to push a method.
firecloud -u -m push -t Workflow -s broad-firecloud-jn-test -n test-method -y 'Test synopsis' file.wdl
You will see a "Successfully pushed" message indicating your method uploaded to FireCloud. You can also confirm your Method pushed by viewing the Method Repository. If your push failed, you will see an error message. To troubleshoot, please share the error message in the Forum.
Note: This example assumes that you downloaded the FireCloud Command Line Interface. You can also run a Docker version without downloading to your local environment.
Description of CLI Commands
Enter -u, then state the URL of the FireCloud environment.
Enter -m, then state the command (push).
Enter -t, then state Workflow as the type for pushing your Method.
Enter -s, then state the Method Namespace.
Enter -n, then state the Method name.
Enter -y, then describe your Method
Finally, enter the file name for your WDL.
Method Permissions
In order to share a method, you must set its ACL. Each method in the method repository has an ACL attached to it. When a user is granted READER permission, the user:
will see the method listed when viewing the contents of the method repository.
can execute the method (i.e., run analyses that call for the running of the method)
When a user is granted OWNER permission, the user:
has READER permission
can edit the method’s ACL
can redact a method
When a user uploads a method to the method repository, she is by default given OWNER permission to method.
Methods that are set to Publicly Readable will be shared with all FireCloud users.
Method Configurations
Method Configurations bind inputs and outputs to a root entity and its attributes in a data model. The Method Configuration can specify attributes to update upon method completion or output files to generate from the method.
Inputs and Outputs
Inputs and outputs tell the method configuration how to associate with entities and entity attributes. When writing an expression, this refers to the root entity that is selected when configuring a method. To write an expression that binds to attributes associated with the root entity, use the this.<attribute_name> syntax. For example, to access a reference FASTA associated with the root entity, use this.ref_fasta, where ref_fasta is the attribute name associated with your FASTA file pointer.
Example Entity Attributes: control_bai, case_bai, output_vcf, ref_pac, ref_ann, case_bam, case_sample, participant, vcf_output_name, ref_fasta, ref_amb, ref_intervals, control_sample, ref_sa, ref_bwt, control_bam, ref_fai, ref_dict
Input expressions may also dereference intermediate entities from the root entity. For example, to dereference an attribute called ref_fasta on the entity intermediate_entity, use the expression this.intermediate_entity.ref_fasta. Output expressions do not have the ability to dereference intermediate entities.
To map the ref_fasta attribute of HCC1143_pair to the input ref_fasta in the CancerExomePipeline_v2, use an Input Name of CancerExomePipeline.M2.ref_fasta and an Input Value of this.ref_fasta.
When writing an expression, you can also type workspace.<attribute_name> to refer to Workspace Attributes you entered in the workspace Summary tab. For example, if you entered a workspace attribute called markers_file, your input can reference this file if you type workspace.markers_file.
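As a sketch of how these expressions resolve, consider the toy resolver below. The function itself is hypothetical (it is not FireCloud's actual expression engine); the attribute names come from the examples above.

```python
def resolve(expression, root_entity, workspace_attrs):
    """Resolve a FireCloud-style input expression against a root entity.
    Supports this.attr, this.entity.attr (intermediate entities), and
    workspace.attr. A toy resolver for illustration only."""
    head, *path = expression.split(".")
    if head == "this":
        node = root_entity
    elif head == "workspace":
        node = workspace_attrs
    else:
        raise ValueError("expressions start with 'this' or 'workspace'")
    for part in path:
        node = node[part]   # step through attributes / intermediate entities
    return node

pair = {
    "ref_fasta": "gs://bucket/ref.fasta",
    "case_sample": {"case_bam": "gs://bucket/case.bam"},  # intermediate entity
}
workspace = {"markers_file": "gs://bucket/markers.txt"}

assert resolve("this.ref_fasta", pair, workspace) == "gs://bucket/ref.fasta"
assert resolve("this.case_sample.case_bam", pair, workspace) == "gs://bucket/case.bam"
assert resolve("workspace.markers_file", pair, workspace) == "gs://bucket/markers.txt"
```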
Workflow Expansion Expressions
Workflow expansion expressions can be used to expand a workflow whose root entity type is a single entity (participant, sample, and pair) to run on multiple entities (pair_set, participant_set, and sample_set). To launch a workflow on each pair in a pair_set, you would toggle to pair_set in the Launch Analysis window and enter this.pairs in the Define Expression field.
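The expansion step above can be sketched like this. The helper is hypothetical; this.pairs mirrors the example expression, and one workflow is produced per member of the set.

```python
def expand(expression, set_entity):
    """Expand a workflow-expansion expression such as this.pairs:
    return the member entities of the set, one workflow per member.
    A toy sketch, not FireCloud's actual expansion logic."""
    head, attr = expression.split(".")
    assert head == "this"
    return set_entity[attr]

pair_set = {"pairs": [{"name": "HCC1143_pair"}, {"name": "HCC1954_pair"}]}
# One workflow is created for each pair in the set:
workflows = [{"root_entity": p} for p in expand("this.pairs", pair_set)]
assert len(workflows) == 2
assert workflows[0]["root_entity"]["name"] == "HCC1143_pair"
```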
Default Method Configurations
The method repository will contain "default" method configurations for each of the methods it holds. If you want to run a particular method in your workspace, copy the method’s default configuration in the Method Repository to your workspace.
Method Configurations can be created within a workspace. Users may publish a workspace’s method configuration to the method repository. Each method configuration in the method repository has an ACL.
Users granted READER permission:
will see the method configuration listed when viewing the contents of the method repository
can copy the method configuration from the method repository to a workspace they have COLLABORATOR or OWNER permissions for.
Users granted OWNER permission:
have READER permission
can edit the method configuration’s ACL
can redact the method configuration
OWNER permission can only be granted to individuals; it cannot be granted to a group.
When a user publishes a method config (copy from workspace to the method repository), that user is granted OWNER permission for the method configuration.
Importing Method Configurations into a Workspace
You can import Method configurations from the Method Repository by clicking the Import Configuration… button within the Method Configurations tab.
The import configuration dialog will show all methods you have access to within the repository.
You can click on an individual method configuration to get details and import it into your workspace. The imported method configuration can have a new name and namespace within your workspace, which allows the same configuration to be imported more than once.
Configuring a Method and Launching an Analysis
Each method configuration has a set of inputs and outputs that must be configured in order to run an analysis.
Each of these inputs has two parts: a name of the parameter as shown in a gray box, followed by the expression used to convert data from the data model into input values.
The expression on the right specifies which attributes in the data model map to this given input parameter. Every expression starts with the word "this", which corresponds to a single entity.
Each method configuration has a root entity type that defines the entity on which this configuration is intended to run. Input expressions are of the form this.attribute, this.entity.attribute, this.entity1.entity2.attribute, etc.
After you click on Method Configuration, you can click Launch Analysis… to start an analysis.
When launching the method, choose the entity type on which to run the method.
If the method runs on a sample, you can choose the sample entity type from the dropdown and select any of the samples that appear in the table. You can also choose an entity type different from the intended entity type in the method configuration.
For instance, if a method runs against a sample and you want to run against the control sample within a pair, you could choose the given pair and write an expression to get the control sample.
Within the method configuration, "this" then refers to the pair’s control sample. Similarly, if the method ran on a sample and you selected a sample set, “this” refers to each sample in the sample set. In this case, one workflow would be created for each entity in the set and submitted all at once.
Each method also has a set of outputs that must be configured to populate analysis results.
These output expressions are the name of an attribute on the entity on which the method was run. For instance, if the method runs on a sample entity and the output expression is "output_vcf", the given output from the method will be set to an attribute called "output_vcf" on that entity. If this attribute does not exist on the entity, it will be added. If it already exists, it will be replaced.
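The add-or-replace behavior described above can be sketched as follows. The helper function is hypothetical; the attribute names follow the examples in this section.

```python
def apply_output(entity, attribute_name, value):
    """Set a workflow output on the entity the method ran on:
    the attribute is added if absent, replaced if present.
    A toy sketch of the behavior described above."""
    entity[attribute_name] = value
    return entity

sample = {"case_bam": "gs://bucket/case.bam"}
apply_output(sample, "output_vcf", "gs://bucket/run1.vcf")   # added
assert sample["output_vcf"] == "gs://bucket/run1.vcf"
apply_output(sample, "output_vcf", "gs://bucket/run2.vcf")   # replaced
assert sample["output_vcf"] == "gs://bucket/run2.vcf"
assert sample["case_bam"] == "gs://bucket/case.bam"          # untouched
```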
To launch an analysis, click the Launch button at the bottom of the dialog.
After an analysis is launched, the Monitor tab opens to show the status of the analysis.
You can check the status of an analysis by clicking on it.
Each analysis will contain 0 or more workflows as shown in the Workflows section.
Statuses in FireCloud
FireCloud displays statuses for workspaces and analysis submissions at varying levels of granularity. This section describes where to find statuses in FireCloud and what each status means at each level.
For information about the high level status of FireCloud services, go to status.firecloud.org/.
Workspace Summary Tab
You can view the status of any workspace from the workspace Summary tab. The screenshot below displays the "Complete" status in the workspace Summary tab.
Workspace Statuses
There are three color-coded statuses for workspaces:
Running: the workspace has running analysis submissions.
Exception: the workspace has at least one failed analysis submission AND the last failed analysis submission is more recent than the last successful one.
Complete: there are no currently running analysis submissions AND the most recent analysis submission completed successfully.
Note: You can also view color-coded workspace statuses from the Workspaces List.
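One way to read the three rules above is as a function of the workspace's submission history. This is an illustrative sketch, not FireCloud's actual implementation; the timestamps and outcome labels are assumptions made for the example.

```python
def workspace_status(submissions):
    """Derive the color-coded workspace status from its submissions.
    Each submission is (finish_time, outcome) with outcome one of
    'running', 'failed', 'succeeded'. A sketch of the rules above."""
    if any(outcome == "running" for _, outcome in submissions):
        return "Running"
    failed = [t for t, o in submissions if o == "failed"]
    succeeded = [t for t, o in submissions if o == "succeeded"]
    # Exception: a failure exists and it is newer than the last success
    if failed and (not succeeded or max(failed) > max(succeeded)):
        return "Exception"
    return "Complete"

assert workspace_status([(1, "succeeded"), (2, "running")]) == "Running"
assert workspace_status([(1, "succeeded"), (2, "failed")]) == "Exception"
assert workspace_status([(1, "failed"), (2, "succeeded")]) == "Complete"
```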
Monitor Tab Statuses
If you click on any workspace, you can view the status of its analysis submissions through the Monitor tab. The Monitor tab displays statuses at three levels of granularity:
Analysis submission
Workflow submission
Call/Task submission
Analysis Submissions
Analysis submissions display in a table when you first click on the Monitor tab. Each analysis submission displays below the Date column (e.g., May 18, 2016 9:28 PM). The status of each analysis submission displays below the Status column (e.g., Done).
By clicking on an analysis submission in this table (e.g., May 18, 2016 9:28 PM), you can view the status of individual workflows in a separate screen.
Analysis Submission Statuses
A submission is not considered fully successful if it has any unstarted workflows OR any finished workflow that is not in status "Succeeded".
Workflow Submissions
Workflow submission statuses display in a new screen after you click on an analysis submission. This screen displays the high level status of the analysis submission (e.g., Done) at the top of the page and workflow statuses in the Workflows table.
In the Workflows table, each workflow submission displays below the Data Entity column (e.g., HCC1143_WE_pair). Note that "HCC1143_WE_pair" refers to the entity on which the workflow submission was run. For each workflow submission, the status displays below the Status column (e.g., Succeeded).
Workflow Submission Statuses
Submitted - The workflow has been received by Cromwell and an associated workflow id has been generated.
Running - The workflow is being actively processed by Cromwell.
Aborting - Cromwell has processed the request to abort this workflow but the abort is not complete.
Aborted - The workflow has halted completely due to an abort request.
Succeeded - The workflow has successfully completed.
Failed - The workflow has terminated abnormally due to some error.
Launching - The recently launched workflow is being sent to Cromwell.
Queued - Each user is permitted 1000 active (either running or aborting) workflows in FireCloud. Each workflow is initially queued, and is started provided the user has not exceeded this limit.
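The per-user limit in the Queued status can be sketched as below. The limit of 1000 comes from the text; the scheduler function itself is a hypothetical illustration.

```python
ACTIVE_LIMIT = 1000  # active = Running or Aborting workflows, per the text

def next_status(active_workflow_count):
    """Decide whether a newly queued workflow may start.
    A workflow begins Queued and is started only while the user is
    under the active-workflow limit. Illustrative sketch only."""
    if active_workflow_count < ACTIVE_LIMIT:
        return "Launching"        # sent on to Cromwell
    return "Queued"               # waits until capacity frees up

assert next_status(0) == "Launching"
assert next_status(1000) == "Queued"
```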
By clicking on a workflow submission (e.g., HCC1143_WE_pair), you can view more information about the Started and Ended times, as well as the status for any calls within that workflow. This information will appear in a new screen as shown below. Note that every call represents a task within a workflow.
Call/Task Submissions
In this screen, you can view the status for any call (task) within a workflow by clicking Show. For example, if you click on a call (e.g., ContEstMuTectOncotatorWorkflow.OncotatorTask), you can view the Call ID, Call Status (e.g., Done) and Started and Ended times.
Call/Task Submission Statuses
NotStarted - The call/task has not started processing yet.
Starting - The call/task has been identified to start processing but is currently initializing internally.
Running - The call/task has been sent to Google JES for processing.
Failed - The call/task exited abnormally due to an unrecoverable error.
Done - The call/task successfully completed.
Aborted - The call/task was halted due to a workflow abort request.
The section on the data model needs to be updated to include a description of the TSV file format for uploading/downloading workspace attributes. The format is described (very briefly) in GAWB-23.
Will do, thanks for pointing this out @birger. | https://gatkforums.broadinstitute.org/firecloud/discussion/comment/37889 | CC-MAIN-2020-10 | refinedweb | 5,084 | 54.02 |
Andrew Morton writes:> Paul Mackerras <paulus@samba.org> wrote:> > > Is that enough to be worth factoring out? Note that> > update_wall_time_one_tick() needs both time_adjust_step and> > delta_nsec, so to share more, we would have to have a function> > returning two values and it would start to get ugly.> > > > update_wall_time_one_tick() gets:> > long delta_nsec = new_function();> > and your new function becomes> > return (u64)new_function() << (SHIFT_SCALE - 10)) + time_adj;and update_wall_time_one_tick misses out on this bit: /* Reduce by this step the amount of time left */ time_adjust -= time_adjust_step;That's why I said that update_wall_time_one_tick needs bothtime_adjust_step and delta_nsec. If you have a nice way to returnboth values, please let me know...Paul.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2006/2/15/340 | CC-MAIN-2017-17 | refinedweb | 133 | 62.38 |
This python article is a compilation of project examples of Scrapy.
This article isn’t a proper tutorial or article. It’s merely a collection of Scrapy programs from our various tutorials throughout our site, CodersLegacy. Each project example will have a brief description as to what it does, with a link to it’s respective tutorial where you can learn how to do it yourself.
You can also think of this as a place for you to get some ideas for your own Scrapy projects through the python examples we show you here.
Extracting Data
This is a Scrapy Spider with a rather simple purpose. It goes through the entire
quotes.toscrape site extracting all available Quotes along with the name (Author) of the person who actually said the Quote.
Scraping an entire site can be a pretty complex task, which is why we are also using the Rules Class which define a set of rules for the Spider to follow while Scraping the site.(),
Extracting Links
This project example features a Scrapy Spider that scans a Wikipedia page and extracts all the links from it, storing them in a output file.
This can easily be expanded to crawl through the entire Wikipedia although the total time required to scrape through it would be very long.() }
Link Follower
The project example below is that of a Spider that “follows” links. This means that is can read a link, open the page to which it leads, and begin extracting data from that page. You can even follow links continuously till you’re spider has crawled and followed every link in the entire site.
You don’t have to include all the urls in the start_urls this way, just one is required.
The only reason we’ve set the depth limit to 1 is to keep the total time of the scraping reasonable (More on this in the tutorial).
from scrapy.spiders import CrawlSpider class SuperSpider(CrawlSpider): name = 'follower' allowed_domains = ['en.wikipedia.org'] start_urls = [''] base_url = '' custom_settings = { 'DEPTH_LIMIT': 1 } def parse(self, response): for next_page in response.xpath('.//div/p/a'): yield response.follow(next_page, self.parse) for quote in response.xpath('.//h1/text()'): yield {'quote': quote.extract() }
Scrapy Automated Login
Another powerful feature of Scrapy is FormRequest which allows for automated logins into sites. While most sites you want to scrape won’t require it, there are some sites whose data can only be accessed after a successful login.
Using FormRequest we can make the Scrapy Spider imitate this login, as we have shown below.())
-> Link to Tutorial
Additional Features
Scrapy has many different features and opportunities to further enhance and improve your Spider. Putting aside the examples we discussed we above, we compiled all the important (main) features that might interest you.
This marks the end of the Python Scrapy Project Examples article. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the article can be asked in the comments section below. | https://coderslegacy.com/python/scrapy-project-examples/ | CC-MAIN-2021-21 | refinedweb | 495 | 63.49 |
Sample the color of a vertex
Back in releases prior to R20, I have a set of code that would sample the color that a polygonal object had at any vertex, assuming it had a single material and a UVW tag assigned to it.
This was all I required: a polygonal object with a material tag and a UVW tag.
But now, in R20, that code (that was quite long) is not working anymore.
And I can't seem to make it work, no matter what I do.
Is there any R20 code that would get me the result I need?
Just to be clear, you sampled the mapped material color per vertex, not the color of the vertex attribute (aka known as vertex color)?
Yes, exactly.
I had two sets of code that sampled the mapped material per vertex.
One would sample the color using a modified version of a snipped code from Remo, and another one would calculate the texture with an internal method to bake the texture.
Neither one seems to be working now, in R20.
Hi rui_mac, thanks for reaching us.
With regard to your support request, in R20 I confirm that nothing has changed compared to R19 on how a shader is sampled in Cinema 4D API.
Again without having any evidence of where your code is failing or what kind of behavior your code is exhibiting is actually helpless to me.
Something simple like this
import c4d from c4d import gui # Welcome to the world of Python # Main function def main(): if op is None: return # get the current TextureTag / Material / UVWTag currentTextureTag = op.GetTag(c4d.Ttexture) currentMaterial = currentTextureTag[c4d.TEXTURETAG_MATERIAL] currentUVWTag = op.GetTag(c4d.Tuvw) # get the shader associated to the color slot shd = currentMaterial[c4d.MATERIAL_COLOR_SHADER] # init via InitRenderStruct() initRS = c4d.modules.render.InitRenderStruct() c4d.INITRENDERRESULT_OK == shd.InitRender(initRS) chanData = c4d.modules.render.ChannelData() # loop over the data in the UVW set to retrieve the color for i in xrange(currentUVWTag.GetDataCount()): uvwdict = currentUVWTag.GetSlow(i) chanData.p = uvwdict["a"] col = shd.Sample(chanData) print uvwdict["a"], "/", col chanData.p = uvwdict["b"] col = shd.Sample(chanData) print uvwdict["b"], "/", col chanData.p = uvwdict["c"] col = shd.Sample(chanData) print uvwdict["c"], "/", col chanData.p = uvwdict["d"] col = shd.Sample(chanData) print uvwdict["d"], "/", col # free the allocated resources shd.FreeRender() # Execute main() if __name__=='__main__': main()
Does at least the above script works for you? Can you provide further details on your code?
Cheers, Riccardo
It seems quite simple, reading your code.
However, that is python and I'm doing it in c++.
I was trying to attach the old code I was using, but the uploader only allows for images.
However, I'm now trying to make more generic, by internally baking the material, as it allows for more complex material assignments to objects.
I'm still struggling but I will try to solve it myself, before coming back here ;-) | https://plugincafe.maxon.net/topic/11280/sample-the-color-of-a-vertex | CC-MAIN-2020-50 | refinedweb | 489 | 58.69 |
I have been programming in C# for a while and now I want to brush up on my C++ skills.
Having the class:
class Foo { const std::string& name_; ... };
What would be the best approach (I only want to allow read access to the name_ field):
- use a getter method:
inline const std::string& name() const { return name_; }
- make the field public since it’s a constant
Thanks.
It tends to be a bad idea to make non-const fields public because it then becomes hard to force error checking constraints and/or add side-effects to value changes in the future.
In your case, you have a const field, so the above issues are not a problem. The main downside of making it a public field is that you’re locking down the underlying implementation. For example, if in the future you wanted to change the internal representation to a C-string or a Unicode string, or something else, then you’d break all the client code. With a gettor, you could convert to the legacy representation for existing clients while providing the newer functionality to new users via a new gettor.
I’d still suggest having a getter method like the one you have placed above. This will maximize your future flexibility.
Using a getter method is a better design choice for a long-lived class as it allows you to replace the getter method with something more complicated in the future. Although this seems less likely to be needed for a const value, the cost is low and the possible benefits are large.
As an aside, in C++, it’s an especially good idea to give both the getter and setter for a member the same name, since in the future you can then actually change the the pair of methods:
class Foo { public: std::string const& name() const; // Getter void name(std::string const& newName); // Setter ... };
Into a single, public member variable that defines an
operator()() for each:
// This class encapsulates a fancier type of name class fancy_name { public: // Getter std::string const& operator()() const { return _compute_fancy_name(); // Does some internal work } // Setter void operator()(std::string const& newName) { _set_fancy_name(newName); // Does some internal work } ... }; class Foo { public: fancy_name name; ... };
The client code will need to be recompiled of course, but no syntax changes are required! Obviously, this transformation works just as well for const values, in which only a getter is needed.
As an aside, in C++, it is somewhat odd to have a const reference member. You have to assign it in the constructor list. Who owns the actually memory of that object and what is it’s lifetime?
As for style, I agree with the others that you don’t want to expose your privates. 🙂 I like this pattern for setters/getters
class Foo { public: const string& FirstName() const; Foo& FirstName(const string& newFirstName); const string& LastName() const; Foo& LastName(const string& newLastName); const string& Title() const; Foo& Title(const string& newTitle); };
This way you can do something like:
Foo f; f.FirstName("Jim").LastName("Bob").Title("Programmer");
I think the C++11 approach would be more like this now.
#include <string> #include <iostream> #include <functional> template<typename T> class LambdaSetter { public: LambdaSetter() : getter([&]() -> T { return m_value; }), setter([&](T value) { m_value = value; }), m_value() {} T operator()() { return getter(); } void operator()(T value) { setter(value); } LambdaSetter operator=(T rhs) { setter(rhs); return *this; } T operator=(LambdaSetter rhs) { return rhs.getter(); } operator T() { return getter(); } void SetGetter(std::function<T()> func) { getter = func; } void SetSetter(std::function<void(T)> func) { setter = func; } T& GetRawData() { return m_value; } private: T m_value; std::function<const T()> getter; std::function<void(T)> setter; template <typename TT> friend std::ostream & operator<<(std::ostream &os, const LambdaSetter<TT>& p); template <typename TT> friend std::istream & operator>>(std::istream &is, const LambdaSetter<TT>& p); }; template <typename T> std::ostream & operator<<(std::ostream &os, const LambdaSetter<T>& p) { os << p.getter(); return os; } template <typename TT> std::istream & operator>>(std::istream &is, const LambdaSetter<TT>& p) { TT value; is >> value; p.setter(value); return is; } class foo { public: foo() { myString.SetGetter([&]() -> std::string { myString.GetRawData() = "Hello"; return myString.GetRawData(); }); myString2.SetSetter([&](std::string value) -> void { myString2.GetRawData() = (value + "!"); }); } LambdaSetter<std::string> myString; LambdaSetter<std::string> myString2; }; int _tmain(int argc, _TCHAR* argv[]) { foo f; std::string hi = f.myString; f.myString2 = "world"; std::cout << hi << " " << f.myString2 << std::endl; std::cin >> f.myString2; std::cout << hi << " " << f.myString2 << std::endl; return 0; }
I tested this in Visual Studio 2013. Unfortunately in order to use the underlying storage inside the LambdaSetter I needed to provide a “GetRawData” public accessor which can lead to broken encapsulation, but you can either leave it out and provide your own storage container for T or just ensure that the only time you use “GetRawData” is when you are writing a custom getter/setter method.
Even though the name is immutable, you may still want to have the option of computing it rather than storing it in a field. (I realize this is unlikely for “name”, but let’s aim for the general case.) For that reason, even constant fields are best wrapped inside of getters:
class Foo { public: const std::string& getName() const {return name_;} private: const std::string& name_; };
Note that if you were to change
getName() to return a computed value, it couldn’t return const ref. That’s ok, because it won’t require any changes to the callers (modulo recompilation.)
Avoid public variables, except for classes that are essentially C-style structs. It’s just not a good practice to get into.
Once you’ve defined the class interface, you might never be able to change it (other than adding to it), because people will build on it and rely on it. Making a variable public means that you need to have that variable, and you need to make sure it has what the user needs.
Now, if you use a getter, you’re promising to supply some information, which is currently kept in that variable. If the situation changes, and you’d rather not maintain that variable all the time, you can change the access. If the requirements change (and I’ve seen some pretty odd requirements changes), and you mostly need the name that’s in this variable but sometimes the one in that variable, you can just change the getter. If you made the variable public, you’d be stuck with it.
This won’t always happen, but I find it a lot easier just to write a quick getter than to analyze the situation to see if I’d regret making the variable public (and risk being wrong later).
Making member variables private is a good habit to get into. Any shop that has code standards is probably going to forbid making the occasional member variable public, and any shop with code reviews is likely to criticize you for it.
Whenever it really doesn’t matter for ease of writing, get into the safer habit.
From the Design Patterns theory; “encapsulate what varies”. By defining a ‘getter’ there is good adherence to the above principle. So, if the implementation-representation of the member changes in future, the member can be ‘massaged’ before returning from the ‘getter’; implying no code refactoring at the client side where the ‘getter’ call is made.
Regards, | https://exceptionshub.com/c-getterssetters-coding-style.html | CC-MAIN-2021-21 | refinedweb | 1,229 | 50.87 |
In this tutorial, you’ll learn how to use and edit the
service.properties
file. You’ll also learn about the properties included in this file and how to
set them to fit your needs.
Service Builder generates a
service.properties file in your
*-service
module’s
src/main/resources folder. Liferay DXP uses the properties in this file
to alter your service’s database schema. You should not modify this file, but
rather make any necessary overrides in a
service-ext.properties file in that
same folder.
Here are some of the properties included in the
service.properties file:
build.namespace: This is the namespace you defined in your.
include-and-override: The default value of this property defines
service-ext.propertiesas an override file for
service.properties.
Awesome! You now have all the tools necessary to set up your own
service-ext.properties file. | https://help.liferay.com/hc/en-us/articles/360017882052-Configuring-service-properties | CC-MAIN-2022-40 | refinedweb | 146 | 53.07 |
Working with Python means working with objects because, in Python, everything is an object. So, for example:
>>> type(1) <class 'int'> >>> type('x') <class 'str'>
As you can see, even basic types like integer and strings are objects, in particular, they are respectively instances of int and str classes. So, since everything is an object and given that an object is an instance of a class… what is a class?
Let’s check it:
>>> type(int) <class 'type'> >>> type(str) <class 'type'>
It turns out that classes are an object too, specifically they are instances of the “type” class, or better, they are instances of the “type” metaclass.
A metaclass is the class of a class and the use of metaclasses could be convenient for some specific tasks like logging, profiling and more.
So, let’s start demonstrating that a class is just an instance of a metaclass. We’ve said that type is the base metaclass and instantiating this metaclass we can create some class so… let’s try it:
>>> my_class = type("Foo", (), {"bar": 1}) >>> print(my_class) <class '__main__.Foo'>
Here you can see that we have created a class named “Foo” just instantiating the metaclass type. The parameters we have passed are:
- The class name (Foo)
- A tuple with the class superclasses (in this example we are creating a class without specifying any superclass)
- A dictionary of attributes for the class (in this example we are creating the attribute “bar” with an int value of 1)
If everything is clear so far, we can try to create and use a custom metaclass. To define a custom metaclass it’s enough to subclass the type class.
Look at this example:
class Logging_Meta(type): def __new__(cls, name, bases, attrs, **kwargs): print(str.format("Allocating memory space for class {0} ", cls)) return super().__new__(cls, name, bases, attrs, **kwargs)
def __init__(self, name, bases, attrs, **kwargs): print(str.format("Initializing object {0}", self)) return super().__init__(name, bases, attrs)
class foo(metaclass=Logging_Meta): pass
foo_instance = foo() print(foo_instance) print(type(foo))
on my PC, this code returns:
Allocating memory space for class <class '__main__.Logging_Meta'> Initializing object <class '__main__.foo'> <__main__.foo object at 0x000000B54ACC0B00> <class '__main__.Logging_Meta'>
In this example we have defined a metaclass called Logging_Meta and using the magic methods __new__ and
__init__ we have redefined the behavior of the class when the object is created and initialized. Then, we’ve declared a foo class specifying which is the metaclass to use for this class and as you can see, our class behavior is changed according to the Logging_Meta metaclass implementation.
A concrete use-case: Abstract Base classes (ABC’s)
A concrete use of metaclasses is the abc module. The abc module is a module of the standard library that provides the infrastructure for defining an abstract base class. Using abc you can check that a derived class that inherits from an abstract base class implements all the abstract methods of the superclass when the class is instantiated.
For example:
from abc import ABCMeta, abstractmethod
class my_base_class(metaclass=ABCMeta): @abstractmethod def foo(self): pass
class my_derived_class(my_base_class): pass
a_class = my_derived_class()
If you try this example, you will see that the last line (the one that tries to instantiate the derived class) will raise
the following exception:
TypeError: Can't instantiate abstract class my_derived_class with abstract methods foo
That’s because my_derived_class does not implement the method foo as requested from the abstract base class.
It’s worth to be said that if you subclass a base class that uses a specific metaclass, your new object will use the metaclass as well. In fact, since Python 3.4 the module abc now provide also the ABC class that is just a generic class that uses the ABCMeta metaclasses. This means that the last example can be rewritten as follows:
from abc import ABC
class my_base_class(ABC): @abstractmethod def foo(self): pass
class my_derived_class(my_base_class): pass
a_class = my_derived_class()
This was just a brief introduction to metaclasses in Python. It’s more or less what I think should be known about this topic because it could lead to a better understanding of some internals of Python.
But let’s be clear: this is not something that every single Python user needs to know in order to start working in Python. As Python Guru).” — Tim Peters
Enjoy! | https://www.thepythoncorner.com/2016/12/python-metaclasses/ | CC-MAIN-2019-30 | refinedweb | 726 | 57.81 |
Chair: Jon Gunderson
Date: Wednesday, May 19th
Time: 12:00 noon to 1:30 pm Eastern Standard Time
Call-in: W3C Tobin Bridge (+1) 617-252-7000
Update on outstanding action items
Discusion and resolution of support of math
Discussion and resolution of navigation of element with associated events
Clean up some outstanding checkpoint issues from the issue list
/* Brief discussion of difficulty of getting UA Guidelines to Recommendation */
HB: This is where the rubber hits the road.
JG: People will buy into them when they start asking their tools for support.
Issue #12: Recommend full implmentation of MathML .
JG: Three main conclusions:.
JG: What's between navigation of math document tree and navigation of linear navigation of equation.
IJ: What about lower bound/upper bound of an integral?
JG: Linear is appropriate for simple equations. Need this for simple situations and naive users. Discussion last week about some middle ground. Maybe just talk about these two approaches in techniques; this might be sufficient for math in this document.
CW: You need to get through a math tree unambiguously. Linear is depth-first. Tree is a "node-announcing breadth-first search". Users should be able to shrink or expand subtrees. Need clues about what the children are (math nodes tend to be very sparse). You may need to know about the depth of subtrees in order to guess where to navigate.
IJ: I see the same techniques useful for, e.g., navigating a document structure such as headers.
CW: Math is a somewhat special case due to sparseness. We should be sure that the guidelines ensure that the tree structure is apparent. Yes, it fits under a checkpoint about navigating the document tree.
CONCLUSION: Just include math in current checkpoints.
Action CW: Write math technique proposal (with others if desired, e.g., Raman and Gardner).
Action MR: Review this proposal.
JG: Ensure that we mention XML in checkpoint about exposing tree.
IJ: Do we need to say anything specific about math if it's covered in sections on (1) Use W3C specs (2) Use the DOM?
HB: Need to mention entities/mapping to semantics. User agent issue to convert symbols into (notably spoken) rendering that is part of the math terminology.
IJ: To me, I hear "semantics" and I think "schemas."
JB: Do we need to say "support schemas"?
IJ: Political issue right now about convergence of RDF and XML.
CW: If the author has made the spoken rendering available (e.g., through a schema), then the user agent should make that information available to the user.
MK: What about the reading order?
JG: Is this a navigation issue? Or do we need to say something about "when markup is available for content substitution, make the content available".
IJ: Covered by 5.2.1. Perhaps ensure that namespaces included in examples.
CW: MathML provides separate content markup for use by computation engines. It's supposed to be included with the math and can be passed around, e.g., for evaluation of expressions.
JG: Put details in techniques doc.
RESOLVED: Adopt single-digit numbering of guidelines as per WCAG. Push "structure" to introduction.
CW: I like single-digit number systems. They should agree across guidelines for consistency.
IJ: Will be difficult to track changes in WCAG. I think this WG should choose the most important and put them first.
RESOLVED: Move terms/definitions to a glossary at the end.
Introduction: EAL - Instant Drink Cooler (IDC)
This Instructable shows you how to build an instant drink cooling machine. The drink cooling machine works by pouring cold water over a spinning drink, either a can or a bottle. This method helps cool down the bottle material, while the fluid inside is mixed.
Step 1: Materials
In this project I have used:
- One cordless drill motor, with the gearing and the speed controller
- The battery of the cordless drill ( 12V )
- A windshield washer pump
- An Arduino Uno R3
- One LCD display
- A rotary encoder with clickbutton
- One relay module
- Various pieces of wire
- M3 nuts and bolts
- A USB power bank, for powering the Arduino, due to the onboard voltage regulator being underpowered for this purpose.
Furthermore, the cordless drill has an LED attached. This looks nice but is not necessary!
Step 2: Tank - Design and Assembly
The tank is made out of acrylic glass.
The design of the tank was done using a 3D sketch program, which provides a base for the whole building process. Making the sketch creates a design template on where to cut pieces of acrylic glass, and where to mount the different parts.
The tank itself is made out of 5 sheets of acrylic glass in various sizes, which are scored with a knife, and thereafter, broken along the score line. To glue the pieces together, acrylic glue is used. In this example, Acrifix 1R 0192 was used, which is hardened by UV light.
A step drill is a nice tool to work with in these sheet sizes, because it has several steps with different hole diameters.
Step 3: Machine Assembly
The machine is assembled with the rod attached to a bearing part on one end, and the motor on the other. The motor and gear is screwed directly onto the tank, and it is press fitted to the shaft.
The LCD, Arduino, and the rotary encoder are bolted to the front panel after being fitted in their respective slots.
The water hose from the pump is run along the back up to its bracket along with the LED from the cordless drill. A holder for the battery was printed, allowing for the battery poles to connect the right way. This was then attached with zip-ties.
Step 4: Arduino Code
#include <LiquidCrystal.h>

LiquidCrystal lcd(13, 12, 11, 10, 9, 8);

//CLK signal
const int PinA = 3;
//DT signal
const int PinB = 2;
//push button switch
const int PinSW = 4;

//Menu variables
int menuselect = 0;
int submenu = 0;
int menuitems = 2;

//rotary value
int lastCount = 2;

//Runtime variables
unsigned long runtime = 0;
unsigned long timetorun = 60000;
unsigned long lasttimetorun = 60000;

// Updated by the ISR (Interrupt Service Routine)
volatile int virtualPosition = 1;
static unsigned long lastInterruptTime = 0;

//Motor I/O
int Motors = 6;

void setup() {
  // Just whilst we debug, view output on serial monitor
  Serial.begin(9600);

  // Rotary pulses are INPUTs
  pinMode(PinA, INPUT);
  pinMode(PinB, INPUT);

  // Motors are OUTPUTs
  pinMode(Motors, OUTPUT);

  // Switch is floating so use the in-built PULLUP so we don't need a resistor
  pinMode(PinSW, INPUT_PULLUP);

  // Attach the routine to service the interrupts
  attachInterrupt(digitalPinToInterrupt(3), isr, LOW);

  // Ready to go! Print to LCD
  lcd.begin(16, 2);
  Serial.println("Start");
  lcd.setCursor(7, 0);
  lcd.print("IDC");

  // Make sure Motors are off at startup
  digitalWrite(Motors, LOW);
}

void loop() {
  // Is someone pressing the rotary switch?
  // If yes, and submenu is 0, we select the desired item and go 1 deeper into its submenu.
  // If submenu is larger than 0, we go back to the main page.
  if ((!digitalRead(PinSW))) {
    if (submenu > 0)
      submenu--;
    else {
      menuselect = virtualPosition;
      submenu++;
    }
    while (!digitalRead(PinSW))
      delay(10);

    //Print what happens in the Serial monitor... Debugging info only
    Serial.print("Select: ");
    Serial.println(menuselect);
    lastCount = 10;
  }

  // If the current rotary switch position has changed then update everything,
  // and send info to the Serial monitor
  if (virtualPosition != lastCount || timetorun != lasttimetorun) {
    // Write out to serial monitor the value and direction
    Serial.print(virtualPosition > lastCount ? "Up :" : "Down:");
    Serial.println(virtualPosition);

    //Two switch cases, submenu and virtualPosition. If submenu is 0, we are on the main page.
    //If submenu is 1 we are in a submenu of one of the main items.
    switch (submenu) {
      case 0:
        switch (virtualPosition) {
          case 1:
            lcd.setCursor(0, 1);
            lcd.print("                ");
            lcd.setCursor(0, 1);
            lcd.print("Start");
            break;
          case 2:
            lcd.setCursor(0, 1);
            lcd.print("                ");
            lcd.setCursor(0, 1);
            lcd.print("Run time");
            break;
        }
        break;
      case 1:
        //Submenu for one of the main items.
        switch (menuselect) {
          //menuselect is what the last virtual position was, so it shows the corresponding menu.
          case 1:
            //Print to screen, and start the cooling program.
            lcd.setCursor(0, 1);
            lcd.print("                ");
            lcd.setCursor(0, 1);
            lcd.print("Running...");
            virtualPosition = 1;
            CoolDrink();
            break;
          case 2:
            //Show the time for a run cycle in seconds, set with the rotary encoder
            lcd.setCursor(0, 1);
            lcd.print("                ");
            lcd.setCursor(0, 1);
            lcd.print(timetorun / 1000);  //Print remaining seconds to the LCD
            lcd.print(" Seconds");
            break;
        }
    }

    // Set everything to the current value, so we can skip if nothing changes
    lastCount = virtualPosition;
    lasttimetorun = timetorun;
  }
}

void isr() {
  //Rotary encoder interrupt routine
  unsigned long interruptTime = millis();

  // If interrupts come faster than 10ms, assume it's a bounce and ignore
  if (interruptTime - lastInterruptTime > 10) {
    if (digitalRead(PinB) == LOW) {
      //Depending on the menu and submenu, change some values
      if (menuselect == 2 && submenu == 1)
        timetorun = timetorun - 1000;
      else
        virtualPosition--;
    } else {
      //Depending on the menu and submenu, change some values
      if (menuselect == 2 && submenu == 1)
        timetorun = timetorun + 1000;
      else
        virtualPosition++;
    }

    // Restrict menu value to the range 1..menuitems
    if (virtualPosition > menuitems) virtualPosition = 1;
    if (virtualPosition < 1) virtualPosition = menuitems;

    // Keep track of when we were here last (no more than every 5ms)
    lastInterruptTime = interruptTime;
  }
}

void CoolDrink() {
  //Register when we started the loop.
  runtime = millis();
  digitalWrite(Motors, HIGH);  //Turn on the two motors

  while (millis() < runtime + timetorun) {  // While the current time is less than the run timer
    lcd.setCursor(10, 1);
    lcd.print((timetorun + runtime - millis()) / 1000);  //Print remaining seconds to the LCD
    lcd.print("  ");
    if ((!digitalRead(PinSW)) && millis() > runtime + 1000) {
      //If the button is pressed after 1 second, cancel. 1 second to avoid fail clicks
      while (!digitalRead(PinSW))
        delay(10);
      break;
    }
  }

  digitalWrite(Motors, LOW);  //Turn off the two motors

  // Reset all values, so loop returns to the menu.
  menuselect = 0;
  submenu = 0;
  lastCount = 10;
  virtualPosition = 1;
  lcd.setCursor(0, 1);
  lcd.print("                ");
  loop();
}
Step 5: Schematics
Step 6: 3D Parts
Various parts of the machine have been printed by a 3D printer.
This includes the motor shaft that connects the motor to the rotary shaft, a shaft holder that acts as a bearing (a bearing would have been better), and a motor bracket that press fits onto the gearing, and attaches, with screws, onto the motor and tank.
If you want to check out the 3D parts, I have attached most of them here as STL files.
Step 7: Function
The drink cooler can cool cans and 0.5L plastic bottles. First, fill the tank with two litres of water. Next, plug in the batteries. The onboard screen starts at "Start", from here you can scroll either way to "Run time". If you press the encoder, you can choose the time for the machine to run. The standard runtime is 60 seconds. You can press the encoder again to return to the main menu. Here, you can scroll back to "Start", and press the encoder to begin the cooling process. The Arduino will then time the process, and print the remaining time onto the LCD screen. When it is done, the process stops and returns to the main menu. If you want the machine to stop while the process is running, simply press the encoder.
Recommendations
We have a be nice policy.
Please be positive and constructive.
Native duplicate line works only with one line, but would love to have a universal and intelligent command to duplicate not only the current line, but any selection, any selected part of code, any multiline selection. Duplicate Anything
For example, in JetBrain WebStorm (PhpStorm, IDEA) command Duplicate Block works this way.
Hi Ilya,
I've created a plugin that works almost exactly the same way as the TextMate Duplicate Line command works for ST2, I think it might be what you're after:
import sublime, sublime_plugin

class DuplicateLineCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        sel = self.view.sel()
        for region in sel:
            if region.empty():
                line = self.view.full_line(region)
                s = self.view.substr(line)
                self.view.insert(edit, line.end(), s)
                x = region.end() + len(s)
                y = x
            else:
                s = self.view.substr(region)
                self.view.insert(edit, region.end(), s)
                x = region.end()
                y = x + len(s)
        sel.clear()
        sel.add(sublime.Region(x, y))
Hope it helps!
Thanks,
Dom
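As an aside (not part of the original thread), the core behaviour of the plugin above can be exercised outside Sublime on a plain string. This is a mock of the logic, not the real `sublime` API:

```python
def duplicate(text, start, end):
    """Mimic the duplicate-line plugin's behavior on a plain string.

    With an empty selection (start == end) the whole line containing the
    caret is duplicated; otherwise the selected span is duplicated in place.
    """
    if start == end:  # empty selection: duplicate the full line
        line_start = text.rfind("\n", 0, start) + 1
        line_end = text.find("\n", start)
        line_end = len(text) if line_end == -1 else line_end + 1
        s = text[line_start:line_end]
        return text[:line_end] + s + text[line_end:]
    # non-empty selection: insert a copy right after the selection
    s = text[start:end]
    return text[:end] + s + text[end:]

print(duplicate("abc\ndef\n", 5, 5))   # caret in "def" -> "abc\ndef\ndef\n"
print(duplicate("hello", 1, 3))        # selection "el" -> "helello"
```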
Dom, thank you very much, this is exactly what I needed.
Hi,
This plugin looks awesome -- but I am a lowly newbie and I haven't a clue how to install this (or create a key binding).
I tried saving it as DuplicateLineCommand.py in C:\Users\Andy\AppData\Roaming\Sublime Text 2\Packages\User
and I edited C:\Users\Andy\AppData\Roaming\Sublime Text 2\Packages\User\Default (Windows).sublime-keymap to contain:
{ "keys": "ctrl+shift+alt+d"], "command": "DuplicateLine" }
]
How is this meant to be done? Am I being stupid?
Many thanks,
Andy
atwright, you can download last dev build, which now included this functionality out of the box: sublimetext.com/dev
Hey Andy,
Ilya is right, the same day I posted my code, a dev build was released that included (a probably much cleaner!) updated command that does the same. But for future reference, you look like you were almost right with the key binding the only change is the command name should be underscored instead of CamelCased so if you had duplicate_line it should have worked.
Hope that helps,
Cheers,
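The underscored-name rule Dom mentions can be sketched as a small conversion function. This is a rough reimplementation for illustration (the real rule lives inside Sublime Text, which strips the `Command` suffix and snake-cases the rest):

```python
import re

def sublime_command_name(class_name):
    # Strip the "Command" suffix, then convert CamelCase to snake_case,
    # mirroring how Sublime Text derives command names from plugin classes.
    if class_name.endswith("Command"):
        class_name = class_name[:-len("Command")]
    return re.sub(r'(?<!^)(?=[A-Z])', '_', class_name).lower()

print(sublime_command_name("DuplicateLineCommand"))  # -> duplicate_line
```

So the key binding for the `DuplicateLineCommand` class above would reference `duplicate_line`.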
Thanks guys.
How do I access this new, in-built function?
Hi Andy,
I'm not currently using dev builds on my machines, but there should be a reference to duplicate somewhere in your default or user keybindings. If you have a reference to this old plugin or something like it, you may need to remove that to get the new built-in function working. (Please, someone correct me if I'm wrong!)
Hope that helps!
08 March 2010 08:02 [Source: ICIS news]
SINGAPORE (ICIS news)--Mab Ta Phut Olefins Company is in the process of starting up its 900,000 tonne/year naphtha cracker this week, a source close to the company said on Monday.
“On-spec production for ethylene and propylene should be achieved by this week,” he said.
The company is a joint venture between
New crackers coming on stream in the region and ample supplies from the
Spot prices were assessed down $100-120/tonne at $1,120-1,160/tonne CFR SE Asia last week, according to ICIS pricing.
A Developer.com Site
An Eweek.com Site
Type: Posts; User: salem_c
So, the dialog goes away and some worker thread then tries to update some non-existent UI element and promptly dies.
This is generally regarded as being a big no-no....
Are your separate worker thread(s) directly updating the UI?
Cross-posting madness....
> SocketAddr.sin_port = htons(9050);
> SocketAddr.sin_addr.s_addr = inet_addr("127.0.0.1");
So, is your local TOR service running?
...
> That implies that folks are only allowed to post a question to a single forum
and it's little sister...
Also here ->
It means everyone here can make an informed choice as to whether it's worth spending time answering the question.
Also here ->
Yes, you should be able to do that.
Also here ->
Maybe:
#include<iostream>
using namespace std;
void f(int &v)
{
v = 10;
}
int main()
I guess you need to decide who is actually responsible for opening the file.
filename = "C:/Development/Test/Input.hun";
pFile = fopen(filename, "rt");
// Call the Input.cpp file and his...
CertUtil perhaps?
Bear in mind that if the code was truly malicious, then it's possible that it could detect your attempt to run in a sandbox / VM and "play along" to lure you into a false sense of security....
I've no idea, since I use real Linux distros, not some hackery to make Linux run in Windows.
Try googling "WSL configure printer".
It's a BASH script.
Does your source file end in .c or .cpp?
It should be .c, if you're writing C code.
> The C++ code is from an Arduino library that i wanted to convert to Visual C++ dll for use in one of my C# applications.
Why?
There's already AES built into C# by Microsoft themselves....
It looks like you forgot to say 'struct' or 'class'.
If you choose 'class', then you need to make your member functions public.
What does this tell you?
> The path [/websocketmms/{kind}/{uname}/] contains one or more empty segments which are is not permitted
Hello,
I've just registered, and observed the login was insecure.
Is the forum itself accessible via https?
If not, is there a plan to make it so?
I notice the main site is all. | http://forums.codeguru.com/search.php?searchid=20184081 | CC-MAIN-2019-43 | refinedweb | 378 | 75.81 |
Subject: Re: [OMPI devel] Simple program (103 lines) makes Open-1.4.3 hang
From: Eugene Loh (eugene.loh_at_[hidden])
Date: 2010-11-23 15:55:36
To add to Jeff's comments:
Sébastien Boisvert wrote:
>The reason is that I am developping an MPI-based software, and I use
>Open-MPI as it is the only implementation I am aware of that send
>messages eagerly (powerful feature, that is).
>
>
As wonderful as OMPI is, I am fairly sure other MPI implementations also
support eager message passing. That is, there is a capability for a
sender to hand message data over to the MPI implementation, freeing the
user send buffer and allowing an MPI_Send() call to complete, without
the message reaching the receiver or the receiver being ready.
>Each byte transfer layer has its default limit to send eagerly a
>message. With shared memory (sm), the value is 4096 bytes. At least it
>is according to ompi_info.
>
>
Yes. I think that 4096 bytes can be a little tricky... it may include
some header information. So, the amount of user data that could be sent
would be a little bit less... e.g., 4,000 bytes or so.
>To verify this limit, I implemented a very simple test. The source code
>is test4096.cpp, which basically just send a single message of 4096
>bytes from a rank to another (rank 1 to 0).
>
>
I don't think the test says much at all. It has one process post an
MPI_Send and another post an MPI_Recv. Such a test should complete
under a very wide range of conditions.
Here is perhaps a better test:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int me, np;
    char buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    MPI_Comm_size(MPI_COMM_WORLD, &np);   /* np was missing from the original listing */
    MPI_Send(buf, N, MPI_BYTE, 1 - me, 343, MPI_COMM_WORLD);
    MPI_Recv(buf, N, MPI_BYTE, 1 - me, 343, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("%d of %d done\n", me, np);
    MPI_Finalize();
    return 0;
}
Compile with the preprocessor symbol N defined to, say, 64. Run for
--np 2. Each process will try to send. The code will complete for
short, eager messages. If the messages are long, nothing is sent
eagerly and both processes stay hung in their sends. Bump N up slowly.
For N=4096, the code hangs. For N slightly less -- say, 4000 -- it runs.
>>>>> On Wed, 18 Jun 2003 01:14:11 +0200, Vojtech Pavlik <vojtech@suse.cz> said:

 >> Sounds much better to me. Wouldn't something along the lines of
 >> this make the most sense:

 >> #ifdef __ARCH_PIT_FREQ
 >> # define PIT_FREQ __ARCH_PIT_FREQ
 >> #else
 >> # define PIT_FREQ 1193182
 >> #endif

 >> After all, it seems like the vast majority of legacy-compatible
 >> hardware _do_ use the standard frequency.

Vojtech> Now, if this was in some nice include file, along with the
Vojtech> definition of the i8253 PIT spinlock, that'd be great.
Vojtech> Because not just the beeper driver uses the PIT, also some
Vojtech> joystick code uses it if it is available.

ftape, too. The LATCH() macro should also be moved to such a header
file, I think. How about just creating asm/pit.h? Only platforms
that need to (potentially) support legacy hardware would need to
define it. E.g., on ia64, we could do:

 #ifndef _ASM_IA64_PIT_H
 #define _ASM_IA64_PIT_H

 #include <linux/config.h>

 #ifdef CONFIG_LEGACY_HW
 # define PIT_FREQ 1193182
 # define LATCH ((CLOCK_TICK_RATE + HZ/2) / HZ)
 #endif

 #endif /* _ASM_IA64_PIT_H */

This way, machines that support legacy hardware can define
CONFIG_LEGACY_HW and on others, the macro can be left undefined, so
that any attempt to compile drivers requiring legacy hw would fail to
compile upfront (much better than accessing random ports!).

	--david
2005
> threads for thursday july 14
DTS Issue while calling from another domain
Posted by zacH at 7/14/2005 11:19:08 PM
hey guys, i have this Applicaation that is hosted on one server. This makes a call to a DTS which is on teh DB server on another Domain which is behind a firewall. When the app calls it the call never excutes the DTS. Thought could be a problem with the DTS but the DTS has error loggin but t...
more >>
Differences among VARCHAR, VARCHAR2, and CHAR
Posted by jrefactors NO[at]SPAM hotmail.com at 7/14/2005 9:51:52 PM
What are the major differences among data type VARCHAR, VARCHAR2, and CHAR? please advise. thanks!! ...
more >>
how to avoid inserting duplicate key to the table?
Posted by jrefactors NO[at]SPAM hotmail.com at 7/14/2005 9:50:48 PM
how to avoid inserting duplicate key to the table? For example, the EMPLOYEE TABLE has field EMPID (primary key) and NAME If I execute the following sql statement one by one, I will get error "violation of primary key contraint. cannot insert duplicate key in table employee." INSERT INTO ...
more >>
Finding How Many users connected ?
Posted by WhiteJul at 7/14/2005 9:49:38 PM
if there is any SP to check how many users are connected to my sql server? This is good in case I need to know who is connected or not, so I can bounce the server or do some maintenance without interfering with users's connections. Thanks ...
more >>
What the N is going on?
Posted by Mator DeSchenna at 7/14/2005 9:18:55 PM
What the heck is with these Ns that SQL script generator sticks in here everywhere. It is enough to make me quit my SQL Farm Managerial position. Why do I have to deal with Ns everywhere?? EXECUTE sp_rename N'dbo.Tmp_Table1', N'Table1', 'OBJECT' ...
more >>
Can I do this?
Posted by Brian Selzer at 7/14/2005 9:16:14 PM
DECLARE @inserted TABLE ( RowNumber INT IDENTITY(1, 1) NOT NULL, BadgeNo CHAR(6) NOT NULL, LastName VARCHAR(35) NOT NULL, FirstName VARCHAR(25) NOT NULL, Street VARCHAR(125) NOT NULL, City VARCHAR(40) NOT NULL, State CHAR(2) NOT NULL, ZipCode CHAR(9) NOT NULL, HomePhone...
more >>
how to use join hints ?
Posted by Hassan at 7/14/2005 9:14:53 PM
Can someone send me a query example on how to force join hints ? Id like to try different options such as nested, merge,hash Thanks ...
more >>
San Diego Users Group
Posted by Patrick at 7/14/2005 6:06:02 PM
Gang, I'm looking for a SQL Server Users group in San Diego. -- Patrick....
more >>
flexible database filtering
Posted by ChrisB at 7/14/2005 5:38:43 PM
Hello: I would like to add filters to some Sql Server 2000 queries and was wondering if someone might be able to recommend a proven design pattern. To be more specific, like most applications, I have several queries that reduce the number of returned records through the use of filtering. ...
more >>
How to get the 4th record and on from select query
Posted by TdarTdar at 7/14/2005 4:42:02 PM
SELECT [itemVal] FROM #TempTB WHERE [ID] > (the fourth Record in the #TempTB) Thanks Tdar ...
more >>
Basic question on object naming
Posted by vvenk at 7/14/2005 4:32:01 PM
Hello: If I have a table named venki.TableA and if I log in as Venki, is the following syntax valid: SELECT * FROM TableA I tried it and it complained about an invalid object, TableA. However, the following works no matter who has been logged in" SELECT * FROM Venki.TableA I thou...
more >>
Stored Procedure with a Field of Type "text"
Posted by honcho at 7/14/2005 4:29:30 PM
Hello, Does anyone have an example of an SQL Server stored procedure that updates a record, where one of its field is of type "text"? My procedure is /* ** Update the client note and production cycle in a Sites record. Set the ** When_submitted field to the current date/time. */ CRE...
more >>
Field length question
Posted by Mark at 7/14/2005 4:21:02 PM
Hello, Will SQL Server alert you to the fact that you are trying to put more characters into a nvarchar column than is allowed or will it truncate the data? Thanks in advance. Any help would be greatly appreciated!...
more >>
delete ?
Posted by KevinE at 7/14/2005 3:48:38 PM
is there a keyword in sql that will allow me to run a delete statement that delete's 1000 rows (example) but breaks the processing into deleting 100 rows 10 times? TIA, KevinE. ...
more >>
Custom Aggregate function
Posted by Oleg at 7/14/2005 3:39:03 PM
Can I write a custom aggregate function? something like bult in 'SUM' function. The reason is that I want to culculate a sum of integer column as bitmaps when I have a 'group by' clause. Another question. Which way is better to write SELECT statement with multiple tables? Use 'inner join' f...
more >>
Multi columns in a sub query?
Posted by JP at 7/14/2005 2:54:04 PM
select Addresses.ID, Addresses.Address1, Addresses.Address2 , Addresses.City, Addresses.State, Addresses.ZipCode, (select top 1 Phone,Email from Attributes where attAddrID=Addresses.ID), Addresses.AreaName how do I select multiple filelds in the sub query. Analyser will only let me selec...
more >>
Create data and upload from web to local machine
Posted by Neil Jarman at 7/14/2005 2:49:02 PM
Hi, I realise this is a tall order for one group, hence the cross-posting. I have data in a web server which I need to process and package as a csv and then upload to a local machine so that a mail merge can be printed off with the data that has been created. My host appears to have dis...
more >>
Duplicate entry issue
Posted by Lontae Jones at 7/14/2005 2:21:04 PM
Hello I have a table called log and the schema is below Create table [log] ([AutoID] [int] IDENTITY (1, 1) NOT NULL , [Timestamp] [smalldatetime] , [PageSource] [varchar] (100), [Domain] [varchar] (100), [Faxedby] [varchar] (5), [Agent] [varchar] (10) , [FaxPhone] [varchar] (14)) ...
more >>
question
Posted by Britney at 7/14/2005 2:05:14 PM
how to find out when was last time the sql server was down? ...
more >>
Executing a RESTORE across servers
Posted by E2TheC at 7/14/2005 1:59:02 PM
I have a procedure in a db on Server A that I want to have execute a RESTORE statement to restore a database on Server B. Is this possible programmatically? Server B is a Linked Server on Server A, but how can I execute the RESTORE? Thanks, E2TheC...
more >>
Way to see how many records will be returned?
Posted by ChrisR at 7/14/2005 1:49:32 PM
sql2k I could have sworn there was a way to see how many records will be returned from a query in Query Analyzer. Either under "Query" or "Tools"? Ive tried and cant find it. Is there such a thing? TIA, ChrisR ...
more >>
sp_who2 shows old connections
Posted by Mike at 7/14/2005 1:49:05 PM
Hi, when I tried sp_who2, I see connections that are old, please take a look at this and suggest what I need to do - (I have connections showing up since 6/27), how can I clean them up and how did it stay so long? SPID Status Login ...
more >>
is there a set-based solution to this task?
Posted by jason at 7/14/2005 1:02:41 PM
given the following simplified tables: create table apples ( appleid int not null, column1 varchar(50) null, column2 varchar(50) null, orangeid int null) create table oranges ( orangeid int not null, column1 varchar(50) null, column2 varchar(50) null) ...
more >>
sp_OAMethod returns resultset, but how to get the resultset?
Posted by Rene Anstötz at 7/14/2005 12:48:21 PM
Hello, I have a vb programm that creates a fax and send it to the fax server, here is the important part: JobID = objFaxDocument.ConnectedSubmit(objFaxServer) MsgBox "The Job ID is :" & JobID(0) So the ID is stored in an array. I transfered this code to the SQL server: .... EXECUT...
more >>
mscomm
Posted by gerry at 7/14/2005 12:30:59 PM
we have need to use mscomm from within sql server to do a simple output. using sp_OA everything works fine with the exception of setting the Output property. we are getting the error 0x800A017C Invalid property value I assume this is because the proprty expects a Variant and we are sending a st...
more >>
an index creation question...
Posted by === Steve L === at 7/14/2005 12:08:00 PM
i'm using sql2k i have a big table with client names in it. first and last names are seperate fields. users would like to search on either first or last name, or both at the same time. what's the best way to create indexes for the best search performance? one on first name, one on last name, ...
more >>
best way to calculate aggregate products from materialized paths?
Posted by Paul at 7/14/2005 12:01:06 PM
I need to implement some complicated hierarchy manipulation features. To make this happen I can no longer maintain expanded structures relationally - doing it purely in SQL would be a nightmare. I'm going to move the logic that maintains them into OO; this will force me to do row updates individ...
more >>
Best way to implement tedious trigger?
Posted by Kyle at 7/14/2005 11:26:04 AM
Hi. I'm working on a trigger for logging changes made to a particular table. For ugly legacy reasons, the table has 150+ fields, and currently the trigger is a long string of 'If UPDATE(Fieldname)' statements. Is there a better way? I am considering trying to make a select statement that finds...
more >>
sp_Execute
Posted by Mike Labosh at 7/14/2005 11:08:14 AM
Someone is running a big batch in MS Access connected to some SQL Server table(s) -- What Access calls "linked tables". I am monitoring a batch running on my local box with SQL Profiler. I can see this other user's process spewing gobbles of calls like this: exec sp_execute 8, 7485 sp_...
more >>
@@spid
Posted by JT at 7/14/2005 10:21:46 AM
can someone point me to an article or give me a brief explanation of how sql server 2000 assigns a value to @@spid? does each stored procedure get its own spid? what if a stored procedure calls another stored procedure? do each of these get the same spid? thanks, jt ...
more >>
xpsql.cpp: Error 87 from GetProxyAccount on line 604
Posted by carloscajas at 7/14/2005 10:09:03 AM
they reckoned, I am executing these commands in slqanalyzer: set @ruta ='\\aBsrvNET\reportes_TC\' + @mess + '\' + @mes + dbo.f_it_fill_campo(convert(varchar(2),day(@pp_fecha)),1,'0') + convert(varchar(4),year(@pp_fecha)) + '_LISTADOS\OTROS\' -- BUSCA EL ARCHIVO PARA SABER -- SI YA SE...
more >>
Select Query
Posted by Sam at 7/14/2005 9:54:50 AM
I have a POITEM Table which has just three columns InvetoryID, DateofOrder, VendorID I have under the same InventoryID ,multiple DateofOrders and VendorrID;s. SO for eg here is a couple of lines of the data, InvetoryID, DateofOrder, VendorID ACCKAPCONS01 05/05/2005 ALTPRO0...
more >>
how to check if #temp table exists?
Posted by Rich at 7/14/2005 9:12:01 AM
I create a temp table Create Table #temp... and do stuff - without dropping #temp. Then I tried using the following in Query Analyzer if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[#temp]') and OBJECTPROPERTY(id, N'IsUserTable') = 1) drop table [dbo].[#temp] ...
more >>
sql server 2000 datetime data type question
Posted by Wendy Elizabeth at 7/14/2005 8:46:08 AM
more >>
Set default value from another table?
Posted by MartyNg at 7/14/2005 8:29:00 AM
I have a field in a table which I need to set a default value for. The default value needs to be a string concatenation of two fields from a DIFFERENT table. Anyone know how to do this? Thanks! SQL Server 2000. ...
more >>
Can we connect?
Posted by Rogers at 7/14/2005 8:26:16 AM
can we connect named instance through IP like in the network utility I just defined the IP of the server where the named instance installed and define the port.... Client Netowork Utility > Alias > Alias: MyServer Server: 209.45.23.55 Port : 1434 Protocol: TCP/IP any thing else I n...
more >>
Selecting a Specific row from a range
Posted by -Ldwater at 7/14/2005 8:18:08 AM
Hi all, Just wondering.. is there currently a function in SQL to be able to return a specfic row number from a record set. For example, from my Select statement, I only want the 5th record. Just wondering if there is any function already available that means that I dont have to iterate ...
more >>
SQL Server Replication Question
Posted by Yosh at 7/14/2005 8:06:02 AM
I have a problem and I am trying to decide the best solution. I have a production database that is running very slow because of the = volume of data. We don't need all this data so we have a purge process. = However, we need to keep the purged data so we can run historical = reports. What i...
Migrating data from old database to new???
Posted by Tim::.. at 7/14/2005 8:01:02 AM
Can someone please tell me why this doesn't work! I'm trying to migrate some data from an old database into a new database but it keeps returning the error: Server: Msg 208, Level 16, State 1, Line 1 Invalid object name 'dbo.cpncms.tblPageContent'. Server: Msg 208, Level 16, State 1, Line ...
SQL Server Enterprise Manager..
Posted by Rogers at 7/14/2005 7:18:01 AM
I am trying to connect named instance of remote machine into my local machine but I have failed to do so..."sql server doesn't exist or access denied..." I am successfully registered the default instance of remote machine in my client machine... Can any one guide me.... what the problem wou...
Diff btw SP3 and SP4
Posted by Boomessh at 7/14/2005 6:36:02 AM
Hai all, We have been using SP3 and now we are updated to SP4, due to which we get some problems in the queries that was working fine with SP3. (FYI: we are working on SQL 2000 and application developed using ASP) Can i know any site which clearly states the difference? Thanks, V.Boome...
update only first time
Posted by Souris at 7/14/2005 6:23:02 AM
I want to update a record only if the field is empty (null), or has nothing in it. If there is anything in the field then I want to leave it as it is. I have the following SQL; it seems not to update at all, and if I take out the case statement then it updates all the time. "UPDATE MYTABLE SET MYVALUE ...
Ownership Issue
Posted by jsfromynr at 7/14/2005 6:18:05 AM
Hi All, Please consider the following details [Env.: sql server 2000]: User "A" is having "db_owner" database role permission; it executes a procedure (owned by dbo), this procedure create a view "vw1" using dynamic query (its owner is user "A"), but when I search this "vw1" view in the sam...
Multilanguage support
Posted by Senna at 7/14/2005 5:11:02 AM
Hi Wonder if there a general way to build a database with support for multilanguage. Say I have a Product table. CREATE TABLE ( Id Title Info Price Quantity ... ) Here I want Title and Info to be stored in x languages. Should I just add columns like: Title_D...
index not used when column is nullable and using jdbc
Posted by Palaniappan N at 7/14/2005 4:45:03 AM
I have a table with a few columns, and one of the non-PK columns is indexed and defined as nullable. Most of my selects happen on this column, but I notice that as the no. of rows in this table increase, the time for selects keeps going up. This happens while accessing this table using jdbc fr...
SQL performance problem
Posted by Arne at 7/14/2005 4:14:04 AM
I have a table that doesn't perform. The table gets 200-300 new records a day. The table gets cleaned up and trimmed down once a month. Now I can't even do select count(*) from the enterprise manager for this table, which results in a timeout error. If I copy the whole table to the development...
Internal SQL Server error (INSTEAD OF trigger on a view)
Posted by Razvan Socol at 7/14/2005 4:04:40 AM
Running the following code (on SQL Server 2000 SP4, build 8.00.2040) results in an "Internal SQL Server error" message, at the last UPDATE statement: CREATE DATABASE Bug GO USE Bug CREATE TABLE dbo.Conturi ( Cont varchar (20) , ) GO CREATE TABLE dbo.Perioade ( Cont varchar (20) , ...
Building A Report Tool
Posted by jsfromynr at 7/14/2005 2:56:41 AM
Hi all, I am looking for the development of a reporting tool (something like Crystal Reports). Can someone provide links and material which will help me in developing this tool. I wish to use SQL Standards for the queries that will be generated intermediately , so no cube , rollup etc. ...
Connect Two databases in a single query
Posted by atul saxena at 7/14/2005 2:56:02 AM
As it is possible to connect two different databases within same server from a single query, I want to whether connecting two different Sybase servers from within a query is also possible or not. If yes, Can anyone suggest me how to accomplish it. ...
Self Join!
Posted by Arpan at 7/14/2005 2:24:04 AM
Can someone explain me the logic behind the following query which makes use of SELF JOIN especially the 'ON' clause (the resultset retrieves 11 rows)? --------------------------------------------- USE pubs SELECT au1.au_fname,au1.au_lname,au2.au_fname,au2.au_lname FROM Authors AS au1 INNER ...
Quickly add a field to a table with 4 million records
Posted by Cynthia at 7/14/2005 2:05:02 AM
I have a problem in adding a field to the table which has 4 million records. When I do the above process, it taked around 12 hrs and the transaction log is getting full. So I am not able to add the field in the table. Is there any way to add it in one min for a table which has 4 million rec...
Carriage return in the column alias.
Posted by manishkaushik at 7/14/2005 2:00:28 AM
Hi Friends, I have the following query, i am using the column alias by this way, -*select work_Code as "Work Code",work_nature as "Work Nature" fro sb_cm_work_nature*- it works fine and i get this output. Work Code Work Nature 1 External ...
Different identifier on SELECT
Posted by Jeff at 7/14/2005 12:48:03 AM
Hello, I'd like to do something like these : Select MyColumn AS "MyColumnName " (1) Select MyColumn AS "MyColumnNameIsEmpty" (2) FROM .... WHERE .... If my query returns a record then I'd like to use (1) else (2) Is it possible to do this ? And how ? Thx for your help. Jeff...
Stored Procedure ... HELP!
Posted by hurricane NO[at]SPAM tin.it at 7/14/2005 12:38:23 AM
Hi to all, excuse my english ( i'm an italian student... ) I have the necessity of make a stored procedure that convert one parameter passed from base64 to binary before store in a field image. Is possible? How? I have this necessity because a lot of page made in ASP get the binary and p...
union query for full double joins
Posted by David Shorthouse at 7/14/2005 12:34:33 AM
Hello folks, I'm attempting to set-up a DSN-less connection Access template for clients who may not be particularly adept at handling appends and update queries. So, I was toying with the idea of making a client-side table that they could mess with, add records, change other records, et...
xp_cmdshell by proxy account is not working, please help
Posted by I.P. at 7/14/2005 12:00:00 AM
Hi, I have SQL2000-SP3. I defined a proxy account to execute the xp_cmdshell instead of the simple sql account which I don't want to define as sysadmin. I don't succeed to execute the command as I get an unprivilaged error. If I define the simple sql account as sysadmin, everything works... ...
Strange SP Problem
Posted by Neil at 7/14/2005 12:00:00 AM
I have a strange situation. I have a stored procedure that is hanging upon execution, but only some machines and not others. The db is an Access 2000 MDB using ODBC linked tables and a SQL 7 back end. The sp is executed as a pass-through. The sp is fairly simple: UPDATE CUSTOMER SET Las...
Combining Two Tables ?
Posted by Bongee at 7/14/2005 12:00:00 AM
Hello, I have two tables, TABLE 1 ---------- AccRef Debit Credit ------------------------- LL001 100 200 LL002 150 300 LL003 300 250 TABLE 2 ---------- AccRef Dbt Crdt ------------------------- LL001 300 400 LL002 950 ...
DTS package for exporting data from excel into temp tbl
Posted by Ilin S via SQLMonster.com at 7/14/2005 12:00:00 AM
I need to create a DTS package , which will export data from excel into temp table in MS SQL and then use stored proc to manipulate this data from temp table and some permanent tables. Any help will be appreciated. Regards, Ilin -- Message posted via SQLMonster.com...
Reserving Identity values
Posted by GMG at 7/14/2005 12:00:00 AM
Is it possible to reserve a number of identity values before inserting into a table in a multiuser environment ? ...
Can use Subquery like this?
Posted by Bongee at 7/14/2005 12:00:00 AM
Hello, SELECT B.CODE, C.CURCODE, SUM(A.AMOUNT) AS TL_TOPLAM, SUM(A.TRNET) AS DOVIZ_TOPLAM, SUM(A.TRNET * (SELECT RATES1 FROM L_DAILYEXCHANGES WHERE LREF = 1) ) AS FARK FROM LG_001_01_CLFLINE A LEFT JOIN LG_001_CLCARD B ON A.CLIENTREF = B.LOGICALREF ...
Help with grouping-type query
Posted by epigram at 7/14/2005 12:00:00 AM
I think a simple (but fictitious) example is the best way to express what I am trying to do. I have 4 tables. Family: ID Description Person: ID FamilyID Name Person_Car: ID PersonID CarID Car: ID Make Model Year This setup allows me to have many common types of cars...
partitioned data
Posted by simon at 7/14/2005 12:00:00 AM
Hi, I have one big table with 5 million of data. If applocation works on that table, it's very slow. So, I should break one huge table into smaller tables, for example one for each year. Lets say, I have tables: table2003 table2004 table2005 Then, should I create partitioned in...
Dynamic cursor with change of context first
Posted by Marc Eggenberger at 7/14/2005 12:00:00 AM
Hi. I use SQL Server 2000 SP3a on Windows 2003SP1. I'm creating a stored procedure which should do the following: a. If no database name is supplied to the sp then it should loop over all user databases on that server and do what it does when a database name is specified. b. If a data...
Need help in defining a stored procedure
Posted by romy at 7/14/2005 12:00:00 AM
Hi I need to define a SP which returns the frequency of occurrences of Events in a certain range of dates. Parameters: range of dates (DateFrom ,DateTo) Input: An events table which its relevant fields are: EventCode, EventDate Output: For each eventcode return the average Frequency ...
Merge Question
Posted by Shahriar at 7/14/2005 12:00:00 AM
I have the following table. I want to copy(merge) all ID's being 1 and 2 into ID 3 producing the result set shown. A requirement is to add the QTY columns for C1 being the same in both ID1 and ID2, otherwise just insert a row. What would be a good approach to do this? Would one update stat...
Sql server encryption?
Posted by perspolis at 7/14/2005 12:00:00 AM
Hi all I have a application that uses MSDE as database. you know when I give my application to someone he/she can see design of my tables. is there any way ,like encription of stored procedure,to prevent user from seeing deign of tables?? thx ...
Conversion to int error
Posted by quiglepops at 7/14/2005 12:00:00 AM
Here is a code excerpt from a large proc I have inherited. It runs from within an application, but when I take it outside and try to run it in Query Analyzer I encounter problems... (Note that these are only code sections and I have included all relevant lines) DECLARE @Entity varchar(12) D...
SQL - Browse directory list
Posted by pixmind NO[at]SPAM hotmail.com at 7/14/2005 12:00:00 AM
in the Enterprise Manager, if you have a remote SQL Server registered, you could right-click the database and choose 'New Database', in the 'Data Files' tab and press the 'Loacation' button, you could get the remote server's directory. I wonder if I could get it by API's or other. (I know ...
Object does not reference any object, and no objects reference it.
Posted by Kiran at 7/14/2005 12:00:00 AM
Hi, I altered a table by moving it from one database to another. In the = process it lost it's all dependencies. When I do a sp_depends on that table, it says "Object does not reference = any object, and no objects reference it." I know that there are dependencies for this table. Is th...
import delimited text file
Posted by Sam at 7/14/2005 12:00:00 AM
how i can import data into sql table from a delimited text file? thanks ...
Author:         Donal K. Fellows <[email protected]>
State:          Draft
Type:           Project
Vote:           Pending
Created:        23-Jan-2020
Post-History:
Tcl-Version:    8.7
Keywords:       Tk, TclOO, configuration, properties, options
Tk-Branch:      tip-560
Abstract
This TIP is a companion for TIP #558 and builds upon the basic facilities described in it; it describes how to build a configuration system based on TclOO that can support making Tk megawidgets.
Rationale and Design Requirements
Tk megawidgets should be a natural fit for TclOO, as Tk widgets have long behaved like classes. However, configuration of Tk-like objects is quite complex and has long been one of the most awkward parts for authors of megawidgets, often leading to only partial implementations. It is also rather different to the simple system described in TIP #558. In particular, options can be configured from defaults, from the option database, or explicitly, and two methods, configure and cget, to handle scripted access; configure returns an option descriptor when reading, whereas cget just reads the value of the option. (Terminology note: Tcl has properties while Tk has options.)
It should be noted that this scheme is necessarily semantically incompatible with the configure of TIP #558; the results of configure are entirely different, with one returning the property value itself and the other returning an option descriptor (a short list); both mechanisms are present "in the wild" so attempting to unify this is extremely difficult with how things stand. Therefore it is a non-goal of this TIP to design a system that can allow an object to be accessed by both systems at once; that is never going to work right.
There are also options that can only be configured during widget creation (e.g., the -use option of toplevels and the -class option of quite a few widgets) though all options are always readable. Another complexity is that calls to configure are transactional; no changes are applied unless all changes are applied (though a useless redisplay might be triggered). What's more, any -class option needs to be handled early as it changes how the configuration database is read from. (Indeed, this is omitted from the standard mechanism, just as it is also handled specially in Tk frame widgets and so on.)
Finally, there should be a mechanism for supporting aliases of options.
In support of this, we will want to support typing of options as this is a common feature of Tk widgets. While there is a substantial set of standard types (such as strings, colors, images, and screen distances) it is an open set: we need a way of allowing user code to add custom types. A common custom type is the table-driven type, where values must be chosen from a given list of strings but can be abbreviated, so we should ensure that we provide special support for that.
Specification
Tk will supply a TclOO class, tk::Configurable, that classes may inherit from to gain a configure and a cget method, as well as a non-exported Initialise method (that may only be called once; subsequent calls will do nothing) intended to be used from constructors, and a non-exported PostConfigure method intended as a point for user- and subclass-interception. In addition, Tk will supply a metaclass, tk::configurable (notice the capitalisation difference), that will allow the creation of definitions suitable for configuration (that class may gain other behaviours in the future) with the option declaration (note that this isn't the built-in option command, but has the same name so that options are always called that). As with the oo::configurable metaclass, option names will be given without leading hyphens when they are specified. Classes created with tk::configurable will have tk::Configurable mixed in.
An example of use of this:
tk::configurable create myLabel {
    # Conventional setup of constructor/destructor
    variable window
    constructor {w args} {
        set window [label $w]
        my Initialise $w {*}$args
    }
    destructor {
        destroy $window
    }

    # Define some options for this class
    option label
    option borderwidth -type distance -default 1px \
            -name borderWidth -class BorderWidth
    option bd -alias borderwidth
}
As we can see from this, we want to support some configuration properties for an option. The full list (not fully shown above) is:
-name optName
This gives the name used for looking up a default in the option database. It defaults to the main name of the option with string tolower applied to it.
-class clsName
This gives the class name used for looking up a default in the option database (this is not a TclOO name). It defaults to the main name of the option with string totitle applied to it.
-default value
The default value of the option, used when no other default is available. When this option is not present, the default will depend on the type of the option (see below), but is usually either the empty string or a zero value. The default will be validated against the type (see below); it is an error for the default to be not type-valid.
-type typeName
The type of the value. The types will be subcommands of an ensemble (see Option Types below) that will provide defaults and validation. The default type will be string, which will do no validation and use a default that is empty.
Defaults from the option database (read during Initialise) are subject to validation by the type, but if the default from the option database fails validation, it is ignored and does not trigger a failure of the megawidget to initialise. This is because the option database is not wholly under script control.
-initonly boolean
Controls whether this is an initialisation only option. Initialisation only options may only be set by the Initialise method, and not by configure. (For example, real initialise-only options are the -use option of toplevels and the -container option of frames.) Options are not initialisation only by default; this is expected to be a rare use case.
-alias optionName
Makes this option an alias for another option, which must exist at this point (for sanity's sake if nothing else). The other configuration properties to option (as described above) will be illegal if this is given. Note that alias options are not set by initialisation (unless explicitly provided), since their underlying option should be set instead.
The configure and cget methods will work in ways that should be immediately recognisable to Tk users. There will also be a non-exported PostConfigure method (taking no arguments) that will be called by configure after any call that could have changed the state (no determination will be done of whether the state actually changed); the default implementation of PostConfigure will be empty, but it will provide a convenient place to hook generation of events for state changes or validation of the whole configuration (errors will trigger the same rollback behavior as validation failures). It will be recommended (but out of the scope of this TIP to implement) that idle events are used to combine state update events.
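As a sketch of how the PostConfigure hook is intended to be used (the <<LabelModified>> event name and the use of oo::define on the earlier example class are illustrative assumptions, not mandated by this TIP), a class might generate a virtual event after any successful configure:

```tcl
# Illustrative subclass hook; the <<LabelModified>> event name is an
# assumption, not part of this TIP.
oo::define myLabel {
    method PostConfigure {} {
        next    ;# preserve any inherited validation/behaviour
        my variable window
        event generate $window <<LabelModified>>
    }
}
```

If an error is raised here, configure rolls the options back exactly as for a validation failure; an implementation wanting to coalesce a burst of changes into one event would schedule the event generate via an after idle callback instead, as recommended above.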
Initialisation will be done by calling the non-exported Initialise method, which will take its first argument to be the widget path name (we do not assume that this is the same as the object name) and an even number of following arguments that will be the same as if for configure. The initialisation will write all elements of the array, using the information from the option configuration and retrieved from the option database (see option get), and is the only method that will write initialisation-only elements. Note that this method is intended to be called from a constructor, and will not call the PostConfigure method or perform state rollback on failure; the caller can do any system validation afterwards, and validation failures are expected to abort widget creation altogether rather than rolling anything back.
The Initialise method may only be called (successfully) once.
Interaction with Fundamental TclOO and Tk Mechanisms
As all options are readable in Tk, all will be listed in the readable properties of the class (see TIP #558 for the Tcl mechanism for this). Most options will also be listed in the writable properties of the class, but initialisation only options will not. (Note once again: megawidgets are not oo:configurable in the sense of TIP #558, but they do use the same basic TclOO mechanisms.)
The following methods will be created for each option (where name is the hyphen-removed version of the name):
<OptDescribe-name> — takes no arguments and returns a description of the parts of the option descriptor that do not change. In particular:
For ordinary options, this is a list of three items; these are the option database name of the option, the option database class of the option, and the actual default value of the option.
For aliases, this is a single-element list containing the name of the target option.
<OptValidate-name> — takes a single value, checks whether the value is of the type of the option, and returns the normalized option value. Typically forwarded to the validate method of the option's type object.
<OptRead-name> — how to actually read the option out of whatever storage it is implemented with. Takes no arguments and returns the current value of the option. Note that if a method exists with this name in the class, it is not overridden by the option declaration; this makes providing implementations in C on a case-by-case basis relatively straight forward.
The default implementation forwards to the <StdOptRead> method (see below).
<OptWrite-name> — how to actually write the option to whatever storage it is implemented with. Takes a single argument, the value to write (after validation and normalization). Return value is ignored. Note that if a method exists with this name in the class, it is not overridden by the option declaration; this makes providing implementations in C on a case-by-case basis relatively straight forward.
The default implementation forwards to the <StdOptWrite> method (see below).
The default storage mechanism for options will be the array in the object instance with the empty local name (so the option foo will be in array element variable (foo) in the instance namespace; this is a trick stolen from stooop). This can be overridden by defining appropriate non-exported methods, for which there are these implementations provided by default:
<StdOptRead> — This method reads an element of the array. It takes a single argument, the full name of the element, and returns the value inside. It is never called with the name of an alias. (The <OptRead-name> methods described above delegate to this.)
If an implementation provides custom readers for each option so that it goes to the right slot in a C structure, it does not need to provide an override of this method.
<StdOptWrite> — This method writes an element of the array. It takes a two arguments, the full name of the element and the value to write. Its return value is ignored. It is never called with the name of an alias. (The <OptWrite-name> methods described above delegate to this.)
If an implementation provides custom writers for each option so that it comes from the right slot in a C structure, it does not need to provide an override of this method.
<OptionsMakeCheckpoint> — This method saves the state so that it may be restored. It takes no arguments and returns the saved state that can be used with <OptionsRestoreCheckpoint> on the same class if required. The default implementation uses array get, but the method may be overridden; the only constraint is that whatever the checkpoint creator creates must be consumable by the matching checkpoint restorer. Any overriding implementation should include the superclass's checkpoint state in its own checkpoint state (calling via next).
If any options are implemented with C backing store, an override for this method should be provided. Note that this means that it is probably unwise to override this method and not <OptionsRestoreCheckpoint>.
<OptionsRestoreCheckpoint> — This method restores a saved state created with <OptionsMakeCheckpoint>. It takes a single argument, the saved state. Its return value is ignored. The default implementation uses array set, but the method may be overridden. No attempt is made to validate the contents of a saved state. If overridden, the overriding implementation should also call the superclass's implementation (calling via next) to restore that portion of the checkpointed state.
If any options are implemented with C backing store, an override for this method should be provided. Note that this means that it is probably unwise to override this method and not <OptionsMakeCheckpoint>.
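For example, a class that keeps its options in a dict rather than the default array could override the standard hooks along these lines (an illustrative sketch only, reusing the earlier example class; note that the checkpoint methods pass the superclass portion through via next, as required above):

```tcl
oo::define myLabel {
    variable Opts
    method <StdOptRead> {name} {
        dict get $Opts $name
    }
    method <StdOptWrite> {name value} {
        dict set Opts $name $value
    }
    method <OptionsMakeCheckpoint> {} {
        # Pair our dict with whatever state the superclasses checkpoint.
        list $Opts [next]
    }
    method <OptionsRestoreCheckpoint> {checkpoint} {
        lassign $checkpoint Opts inherited
        next $inherited
    }
}
```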
Note that Initialise requires an existing widget name. A consequence of that is that any true initialisation-only options that need to be passed to that widget must be manually parsed before the widget is created (or the widget can be created, used for parsing, and then destroyed and rebuilt with the correct options; that's not too expensive if the temporary widget is never mapped).
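One way that pattern can look in practice is to pre-scan the arguments for the creation-time-only option before making the real widget; the scanning code below is an illustrative sketch, not something specified by this TIP:

```tcl
constructor {w args} {
    # -class must be known before the widget exists, since it changes
    # how the option database is consulted.
    set cls MyWidget
    foreach {opt value} $args {
        if {$opt eq "-class"} {
            set cls $value
        }
    }
    set window [frame $w -class $cls]
    my Initialise $w {*}$args
}
```

For this to work end to end, the class would also declare a matching option with -initonly true so that Initialise accepts the option while later configure calls reject writes to it.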
Note also that the implementations of <StdOptRead>, <StdOptWrite>, <OptionsMakeCheckpoint>, <OptionsRestoreCheckpoint>, and PostConfigure are installed in a place in the class hierarchy where it is maximally easy for instances of tk::configurable to override. Their implementation class is ::tk::ConfigurableStandardImplementations.
Supporting Changes to the Tk Core
The option get is to gain an extra optional argument after all its current mandatory ones, default, which will be the value returned when the underlying call to Tk_GetOption() cannot find a value to return (the case where it returns NULL) and where Tk used to always return the empty string. Since option get previously did not take any optional arguments at all, this is a compatible change.
- option get window name class ?default?
The value of this change is when we have any code where we already know what we want to use instead (such as with the option specified in this TIP) it is less ambiguous to get Tk to handle the switch over to our known default value rather than assuming that the empty string always means that there was no value specified in the option database.
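A brief sketch of how the extra argument simplifies calling code (the widget variable and option names here are illustrative):

```tcl
# Old: the empty string is ambiguous between "no entry in the option
# database" and "explicitly set to the empty string".
set bw [option get $w borderWidth BorderWidth]
if {$bw eq ""} {
    set bw 1px
}

# New: the fallback is stated explicitly and unambiguously.
set bw [option get $w borderWidth BorderWidth 1px]
```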
Option Types
One key part of this specification is a system for typing of options, since it is extremely common for Tk widget options to be constrained to be of particular types. This will be done using an ensemble of type implementation commands, tk::OptionType, with the member elements of the ensemble being themselves ensemble-like (probably objects, but not necessarily), supporting at least two subcommands, validate and default.
The validate subcommand will take a single argument, the value to be validated, and will produce an error if the validation fails and return the value to be actually set otherwise (to allow a value to be converted to canonical form if desired). The default subcommand will take no arguments, and return the default value usually associated with the type. (Note that there is no need to make either of these commands aware of which class or instance they’re being used with; types are independent of how they are used and these defaults can be overridden when the option is created.)
For example, this will allow the validation of a proposed value, $foobar, for an option of type $gorp, to be done by calling:
tk::OptionType $gorp validate $foobar
The standard types will be:
string: any string. Defaults to the empty string. Validation always succeeds on this type and never changes the value.
boolean: value acceptable to string is boolean -strict (e.g., true or off). Defaults to false.
zboolean: empty string or boolean. Defaults to the empty string.
integer: any integer (i.e., acceptable to string is entier -strict). Defaults to 0.
zinteger: any integer (as above) or empty string. Defaults to the empty string.
float: any float (except NaN) acceptable to string is double -strict. Defaults to 0.0.
zfloat: any float (as above) or empty string. Defaults to the empty string.
distance: any screen distance (as parsed by winfo fpixels). Defaults to 0px.
image: name of any Tk image, or empty string. Defaults to the empty string.
color: any color (as parsed by winfo rgb). Defaults to black.
zcolor: any color, or empty string. Defaults to the empty string.
font: any font (as parsed by font actual). Defaults to TkDefaultFont.
relief: any relief (flat, groove, raised, ridge, solid, or sunken) or unambiguous prefix. Defaults to flat.
justify: any justification (left, right, or center) or unambiguous prefix. Defaults to left.
anchor: any anchor (n, ne, e, se, s, sw, w, nw, or center) or unambiguous prefix. Defaults to center.
window: any existing window path name, or empty string. Defaults to the empty string.
cursor: any of the forms acceptable as a cursor on the current platform, or the empty string. Defaults to the empty string.
list: any valid Tcl list. Defaults to the empty list (and empty string).
dict: any valid Tcl dictionary. Defaults to the empty dict (and empty string).
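For instance, assuming the standard types above are registered, validation and defaults would be exercised like this (the exact normalized results are illustrative; table-driven types expand unambiguous prefixes to the full word):

```tcl
tk::OptionType integer default        ;# returns "0"
tk::OptionType boolean validate off   ;# succeeds, returns "off"
tk::OptionType relief validate rai    ;# succeeds, returns "raised"
tk::OptionType distance validate 2m   ;# succeeds; a valid screen distance
tk::OptionType integer validate 1.5   ;# raises an error
```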
The official mechanism for adding a new type will be via the class tk::optiontype. Instances of that will automatically plug themselves inside using their names, and will be implemented using a callback provided to the constructor. This will result in a class definition (approximately) like this:
oo::class create tk::optiontype {
    constructor {default testCommand} {
        ... # trivial implementation that saves the params
    }
    method validate {value} {
        if {![{*}$testCommand $value]} {
            return -code error ... # error message generation
        }
        return $value
    }
    method default {} {
        return $default
    }
    self {
        # class-level definitions
    }
}
In practice, things are more complex because there are three basic ways to validate a value. In particular, there are types for which there are tests that return a boolean, types for which there are parsers that error on failure, and types that are driven by a table of permitted values. As such, tk::optiontype is actually an abstract class and there are concrete implementations of each of the validation options.
When a type is created with tk::optiontype createbool, a boolean test is expected to be provided as a command fragment that takes a single extra argument. An example of the use is this, which makes the boolean type described above:
tk::optiontype createbool boolean "false" { string is boolean -strict }
When a type is created with tk::optiontype createthrow, the test instead is expected to throw an error on failure. Because this can be more complex, we assume that the value being tested is passed in the (local) variable $value. An example of the use is this, which makes the distance type described above:
tk::optiontype createthrow distance "0px" { winfo fpixels . $value }
When a type is created with tk::optiontype createtable, the test is driven by tcl::prefix match and all the caller has to do is supply the table. An example of this one is:
tk::optiontype createtable justify "left" { center left right }
It is up to the caller to ensure that each type's default values actually pass validation checks.
Note that all of the types above are created with unqualified names; the names are mangled internally by the above methods so that they plug in correctly into the tk::OptionType ensemble. This is implemented using the unexported Create method of tk::optiontype, for example like this:
method Create {realClass name args} {
    # Condition the class name first
    set name [namespace current]::[namespace tail $name]
    # Delegate to the concrete subclass's create method
    tailcall $realClass create $name {*}$args
}
forward createbool  my Create ::tk::BoolTestType
forward createthrow my Create ::tk::ThrowTestType
forward createtable my Create ::tk::TableType
Implementation

An implementation is under development on the tip-560 branch (see the Tk-Branch header above).

Copyright

This document is placed in the public domain.
Keywords play an important role when reading a long text to understand the subject and context of the text. Search engines also analyze an article’s keywords before indexing it. In this article, I will walk you through how to extract keywords using Python.
Well, we can also train a machine learning model to extract keywords, but here I am just going to walk you through how to use a Python library for this task, so that even beginners can understand how keyword extraction works before training a machine learning model.
Extract Keywords using Python
There are many Python libraries for the task of extracting keywords; among the best are spaCy, Rake-NLTK, and YAKE. In this tutorial, I will use Rake-NLTK as it is beginner-friendly and easy to install. You can easily install it with the pip command: pip install rake-nltk.
RAKE stands for Rapid Automatic Keyword Extraction. It is built to extract keywords using the NLTK library in Python. Now let's see how to use this library for extracting keywords.
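Before using the library, it helps to see what RAKE is doing under the hood. Here is a deliberately simplified, dependency-free sketch of the idea (this is not the rake-nltk implementation, and the stopword list is a tiny hard-coded stand-in for NLTK's full corpus): candidate phrases are the runs of words between stopwords, each word gets a degree/frequency score, and a phrase scores the sum of its words.

```python
import re
from collections import defaultdict

# Tiny hard-coded stopword list for the demo only; the real rake-nltk
# uses NLTK's full English stopword corpus.
STOPWORDS = {"i", "am", "a", "and", "to", "you", "with", "for", "in",
             "your", "will", "here", "me", "the", "of", "is", "from"}

def rake_sketch(text):
    # 1. Candidate phrases are runs of words between stopwords/punctuation.
    words = re.findall(r"[a-z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)

    # 2. Word score = degree / frequency (degree counts co-occurring words,
    #    including the word itself).
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)

    # 3. A phrase's score is the sum of its words' scores.
    def score(phrase):
        return sum(degree[w] / freq[w] for w in phrase)

    return sorted((" ".join(p) for p in phrases),
                  key=lambda p: score(p.split()), reverse=True)

ranked = rake_sketch("I am a programmer from India, and I am here to guide "
                     "you with Data Science and Machine Learning.")
print(ranked)
```

Multi-word phrases like "data science" outrank single words because their words have higher degrees, which is exactly the ordering you will see from the real library below.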
I will first start with importing the Rake module from the rake-nltk library:
from rake_nltk import Rake

rake_nltk_var = Rake()
Now I will store some text into a variable:
text = """ I am a programmer from India, and I am here to guide you
with Data Science, Machine Learning, Python, and C++ for free. I hope
you will learn a lot in your journey towards Coding, Machine Learning
and Artificial Intelligence with me."""
Now let’s extract the keywords from the text and print the output:
rake_nltk_var.extract_keywords_from_text(text)
keyword_extracted = rake_nltk_var.get_ranked_phrases()
print(keyword_extracted)
Output:

['journey towards coding', 'machine learning', 'data science', 'c ++',
'artificial intelligence', 'python', 'programmer', 'lot', 'learn',
'india', 'hope', 'guide', 'free']
Summary
The process of extracting keywords helps us identify the importance of words in a text. This task can also be used for topic modelling. It is very useful to extract keywords for indexing articles on the web, so that people searching for those keywords can find the best articles to read.
This technique is also used by various search engines. Obviously they don't use any off-the-shelf library, but the process of extracting keywords remains the same. You can learn how to train a machine learning model to extract keywords from here.
I hope you liked this article on how to extract keywords using the Python programming language. Feel free to ask your valuable questions in the comments section below. | https://thecleverprogrammer.com/2021/02/07/extract-keywords-using-python/ | CC-MAIN-2021-43 | refinedweb | 425 | 61.77 |
On 5/5/06, Adrian Tarau <ady@daxtechnologies.com> wrote:
> I just did what you told me:
Have you tried printing out what is in the destinationMap?
The keys are ActiveMQDestination objects - since topics and queues
both can have identical names, using a String key could cause clashes;
so looking things up by String is not gonna work - use the
ActiveMQDestination (e.g. the ActiveMQQueue object) as the key and it
should work. Or worse case, iterate through the Map and compare the
String value of the key etc.
> On the other hand, this piece of code seems to work fine :
>
> private int getQueueSize(Queue queue) throws JMSException {
> MessageConsumer consumer = session.createConsumer(queue);
> try {
> if (consumer instanceof ActiveMQMessageConsumer) {
> return ((ActiveMQMessageConsumer)
> consumer).getMessageSize();
> }
> } finally {
> consumer.close();
> }
> int count = 0;
> QueueBrowser browser = session.createBrowser(queue);
> Enumeration enumeration = browser.getEnumeration();
> while (enumeration.hasMoreElements()) {
> enumeration.nextElement();
> count++;
> }
> return count;
> }
Sure - the thing is, if you had 1 million messages on a queue, asking
for the size that way could take a very long time since basically you
have to connect to the broker, set up a browser, send every message to
the client (which will involve much querying of the database and tons
of data being sent) just to find out that there are 1 million messages
on the queue.
The broker already maintains a counter for this, so it's a trivial
operation to just ask the broker.
Seriously, it's probably way simpler to just use JMX if you're having
trouble with this Map. Just point JConsole at the broker and you can
see the stats....
--
James
XML 2.0?
Anne van Kesteren: The time has probably arrived to define graceful error handling for XML, put some IETF, W3C or WHATWG sticker on it, label it XML 2.0 and ship it. Perhaps we can drop this internal subset thing in the process.
I’ve been slowly but steadily prototyping this in the html5lib svn repository. Since this post, I’ve added both W3C DOM support and can produce SAX2 events (with or without namespaces) from that DOM. A SAX w/namespaces interface will make it easier for me to replace sgmllib as the fallback in the event of XML parsing errors in the Universal Feed Parser. | http://www.intertwingly.net/blog/2007/01/26/XML-2-0 | crawl-002 | refinedweb | 110 | 70.63 |
Microchip USB: Part 1
Here's the simple test program:
#include <p18f2550.h>

#pragma config FOSC = HSPLL_HS
#pragma config PLLDIV = 5
#pragma config USBDIV = 2
#pragma config CPUDIV = OSC1_PLL2
#pragma config MCLRE = ON
#pragma config WDT = OFF
#pragma config LVP = OFF
#pragma config DEBUG = ON

#include <delays.h>

void InitApp(void)
{
    /* TODO Initialize User Ports/Peripherals/Project here */
    TRISA = 0xFB;    // LED output
    ADCON1 = 0xD;    // two analog channels
    CMCON = 0xF;
    ADCON2 = 0x86;   // 64 cycles for acq time, right justify
    ADCON0 = 0;      // start with channel 0
}

// use the built-in 100 cycle delay to get 1 mS delays
void dly(unsigned int duration)
{
    // the way the clock is set up the frequency should be 48 MHz
    // and 4 clock cycles per instruction
    // so 12 MHz effective or about 83 nS per tick
    // therefore 12000 ticks should be 1 ms
    while (duration--) Delay100TCYx(120);
}

int read_analog()
{
    int result;
    ADCON0 |= 1;                // turn on A/D
    ADCON0 |= 2;                // start conversion
    while (ADCON0 & (1 << 1));  // wait for conversion to finish
    result = ADRESH;
    result <<= 8;
    result |= ADRESL;
    return result;
}

void main(void)
{
    int v;
    /* Initialize I/O and Peripherals for application */
    InitApp();
    while (1) {
        LATA = 0xff;
        dly(500);
        while ((PORTA & 8) == 0);
        LATA = 0;
        v = read_analog();
        dly(500);
    }
}
If you get this far, you should see a blinking LED; it will stop blinking when you push the attached push button. Not exciting, I agree, but it does show you have the build environment and the hardware ready.
That catches you up with me. The next step (maybe next week or maybe the week after) is to load the USB libraries and see how far I can get towards making a working HID device.
Actually, you don't need support for a USB stack from the vendor, but it sure helps. If you want to get an idea of what it takes to roll your own, have a look at V-USB, an open source USB implementation for the Atmel AVR.
Another controller I thought looked promising was the NXP LPC11U1X family. I even bought the LPCXpresso board, and will probably eventually experiment with it and share what I find here. The attractive part is it has USB firmware built into the chip. The documentation, however, is a little daunting, so I need to set aside a block of time to play with it properly. | http://www.drdobbs.com/cpp/microchip-usb-part-1/232602786?pgno=2 | CC-MAIN-2017-09 | refinedweb | 376 | 59.16 |
NAME
VOP_READDIR - read contents of a directory
SYNOPSIS
#include <sys/param.h>
#include <sys/dirent.h>
#include <sys/vnode.h>

int
VOP_READDIR(struct vnode *vp, struct uio *uio, struct ucred *cred,
    int *eofflag, int *ncookies, u_long **cookies);
DESCRIPTION
Read directory entries.

vp        The vnode of the directory.
uio       Where to read the directory contents.
cred      The caller's credentials.
eofflag   Return end of file status (NULL if not wanted).
ncookies  Number of directory cookies generated for NFS (NULL if not wanted).
cookies   Directory seek cookies generated for NFS (NULL if not wanted).

The directory contents are read into struct dirent structures. If the
on-disc data structures differ from this then they should be translated.
LOCKS
The directory should be locked on entry and will still be locked on exit.
RETURN VALUES
Zero is returned on success, otherwise an error code is returned.

If this is called from the NFS server, the extra arguments eofflag,
ncookies and cookies are given. The value of *eofflag should be set to
TRUE if the end of the directory is reached while reading. The directory
seek cookies are returned to the NFS client and may be used later to
restart a directory read part way through the directory. There should be
one cookie returned per directory entry. The value of the cookie should
be the offset within the directory where the on-disc version of the
appropriate directory entry starts. Memory for the cookies should be
allocated using:

    ...;
    *ncookies = number of entries read;
    *cookies = (u_int *)malloc(*ncookies * sizeof(u_int), M_TEMP, M_WAITOK);
TSEARCH(3)               OpenBSD Programmer's Manual              TSEARCH(3)
NAME
tsearch, tfind, tdelete, twalk - manipulate binary search trees
SYNOPSIS
#include <search.h>
void *
tdelete(const void *key, void **rootp,
    int (*compar)(const void *, const void *));

void *
tfind(const void *key, void * const *rootp,
    int (*compar)(const void *, const void *));

void *
tsearch(const void *key, void **rootp,
    int (*compar)(const void *, const void *));

void
twalk(const void *root, void (*action)(const void *, VISIT, int));

DESCRIPTION
The tdelete(), tfind(), tsearch(), and twalk() functions manage binary
search trees. The comparison function passed in by the user has the same
style of return values as strcmp(3).

tfind() searches for the datum matched by the argument key in the binary
tree rooted at rootp, returning a pointer to the datum if it is found and
NULL if it is not.

tsearch() is identical to tfind() except that if no match is found, key
is inserted into the tree and a pointer to it is returned.

tdelete() deletes a node from the specified binary search tree and
returns a pointer to the parent of the node to be deleted. It takes the
same arguments as tfind() and tsearch(). If the node to be deleted is
the root of the binary search tree, rootp will be adjusted.

twalk() walks the binary search tree rooted in root and calls the
function action on each node.
SEE ALSO
bsearch(3), lsearch(3)
RETURN VALUES
The tsearch() function returns NULL if allocation of a new node fails
(usually due to a lack of free memory).
tfind(), tsearch(), and tdelete() return NULL if rootp is NULL or the
datum cannot be found.
The twalk() function returns no value.
OpenBSD 2.6 June 15, 1997 1 | http://www.rocketaware.com/man/man3/tsearch.3.htm | crawl-002 | refinedweb | 189 | 68.7 |
Description:
------------
PHP is probably the only language I know which requires an opening tag
(i.e. <?php). It's one of those things with PHP that people rarely question. While PHP
is a rather unique programming language in that it's basically a templating
engine at its core, I feel that requiring the opening <?php is not catering to
the majority of the use cases. Instead, I'd rather PHP assume that the file
being executed has PHP from line 1 which is most commonly the case. In the less
common scenario where PHP is not the first text encountered, the user would need
to close the assumed PHP execution block with a ?>.
In the early days, when web pages were mostly static, and PHP was used to add
dynamic elements, it made sense to require an opening tag to drop-into PHP
execution. These days however, the opposite is more often the case. You normally
have a complete PHP web application, into which HTML and other static text is
injected, rather than injecting dynamic elements into static web pages.
What I'd like to see is a new directive added to php.ini. Call it what you want,
e.g. assume_open_tag or omit_open_tag.
This would require a few changes in coding practice. For example, if
omit_open_tag is On, then the behaviour of the include() and require()
constructs will change. They too will assume the files being required contain
PHP from line 1. Programmers will no longer be able to use include() and
require() to load file contents; instead the programmer would have to use
file_get_contents() or some other alternative, though this would arguably be
a good thing, as using require() and include() to load and output non-PHP
could be a vulnerability; hence it's already bad practice to use
include/require() to load non-PHP files.
I think this change would be consistent with some of the changes made in 5.4,
which demonstrate PHP embracing modern programming idioms from other languages.
Ideally, I'd like this to become the default behaviour of PHP, though obviously
for at least the first major release, it would of course be defaulted to Off.
Thoughts?
So this would basically be a "break all existing code" .ini switch? I don't think
that is a good idea.
Are there not other directives that can break a lot of code? Remember, this
would default to off. I don't see why as a server owner, I should have this
option made unavailable purely because it can break other code. If you wanted to
write code that worked regardless of this setting, you could do something like:
<?php
ini_set('implicit_open_tag', false);
?>
Of course, for that to work when implicit_open_tag is On, the parser would have
to ignore the "<?php". The rule could be that if <?php is the first non-
whitespace sequence encountered in the file, then it's ignored.
'optional_open_tag' may therefore be a more appropriate name for this setting.
Except for legacy templates which may start with something other than <?php,
this would allow for cross-environment code. Any such template code that breaks,
would break in a manner no different to how new features like namespaces break
in older versions of PHP.
A new tag could be introduced: "<?php>". This would be shorthand for opening and
closing a php tag, and should be placed at the top of any template file that has
the requirement to work regardless of whether the opening tag is optional.
I hope this idea isn't dismissed on the grounds that it's difficult to
implement, because I think it's workable. Having optional opening tags would no
doubt be a step in the right direction for PHP, and I'm sure that if you didn't
have backwards compatibility to be concerned about, you'd probably make opening
tags implicit with no option to make it otherwise. As I said earlier, the
decision to make the opening tag explicit was desirable at the time PHP was
conceived, but I believe it's one of those legacy decisions that needs to be
re-evaluated.
So, basically, you're suggesting that programmers should write "<?php>" at the beginning of the file to not write "<?php". Blarg.
Of course this "<?php>" thing is optional. The problem is that virtually any code has to use it to be portable. This means it's not really optional.
Sorry, I misunderstood the "<?php>" tag. I thought it was used to *enable* `assume_open_tag`. But if you want to use it to disable the feature, it's even worse. This breaks lots of existing code. Mixing PHP with HTML is an example of bad design, but there are lots of such things and PHP devs can't just say "from today your code is not working, because we have decided to break it". Adding some magical sequence of characters at the beginning is not a solution for this problem. Do you imagine administrators going through all of the files on their servers and adding "<?php>" to fix the broken code? Even harder to imagine in the case of obfuscated or PHARed code.
I believe there are enough problems with server incompatibilities already. No need to create another one. I would be much happier to see UTF-8 become a standard feature on every host. A requirement to write "<?php" doesn't bother me. Copyright notices and namespace declarations/imports use much more space.
After a bit of thinking I came to a conclusion that PHARs can, in theory, have such thing implemented. Some metadata may be included in PHAR to tell PHP that every source file in the archive assumes "<?php" tag to be open. Since such information is included by the author of the code, nothing can be broken.
However I don't know if it's worth being implemented. As I said before, it gives almost nothing. Five characters is not much if one have to put dozen lines at the begining of each file. Also PHARs, that assume opening tag to be open, should be incompatible with older versions of PHP to prevent sending source code to the client by accident. Too much trouble IMO.
It's not just about the extra characters, but like the end ?> tag (which
thankfully is optional), any white-space or otherwise non-printable characters
before the opening tag can cause "headers sent" issues. You could solve that
problem by implementing the ignore white-space rule I've already mentioned,
where any white-space before the opening tag is ignored.
The more I think about this and talk to the others, the more it becomes apparent
that what I'm actually asking for is a distinction to be made between PHP
templates, and PHP scripts/applications. If PHP were to define these two
distinct concepts, then you could do more than just make the opening tag
optional. For example, you could have a template() or render() method to act as
an include() for php templates. Unlike include() however, this render() method
would return the output of the file, instead of sending it straight to the
browser. This would negate the need to capture template output using the output
buffer functions (something that I believe most frameworks end up using).
Making such a distinction would also allow web servers like Apache to treat PHP
files differently. You may create a rule in Apache to render all .phpt files as
PHP templates, rendering everything else as PHP script or application files. We
may then see mod_php implement an application mode, where one can define a
single-point of entry into their application. This could have flow-on
performance benefits, where mod_php could cache the parsed PHP, then either fork
it on each request, or instantiate a new application class. Such a feature would
mean frameworks wouldn't have to stuff around with .htaccess files, and would
mean that programmers don't need to add the following to the top of all their
files: if (defined('SOME_CONSTANT')) exit;
While there's momentum among the PHP developers to move forward with modernising
the language, I think now would be a good idea to consider some of these more
fundamental changes. PHP's built-in template engine, ease of deployment, and
its dynamic, traditional OO constructs would still remain PHP's strengths.
With all this said, I'd be happy to save such changes to a major release
intended to break legacy code, like PHP 6. I'd like to keep in mind too that
code portability isn't relevant to most people who don't intend to release their
code as open source. Typically, those people using PHP in a business context
have control of their server. It's only shared hosting environments where
portability becomes a potential issue. All I'm saying is don't rule out ideas
based on the lowest common denominator.
Note: I'm NOT against the idea itself. I'm just thinking that in the current form it can do more harm than good.
What you're asking for is redefining the whole PHP world. Let's imagine that PHP6 includes your idea and it's *not* optional. What happens?
1. Many of PHP books become obsolete
We all know mixing code and output is bad, but the books
take this approach because it's simpler. It allows authors
to show the basic ideas of PHP without requiring the reader
to download/install third party template engine.
But if .php files are no longer templates, books need
to be rewritten. Lots of money for authors, but I think
it's not dev's goal.
2. Lots of currently used code becomes obsolete
If one needs to write code for a server that has
this feature enabled, any template-like code should
be avoided. This means we can use only "safe" libraries.
Which are "safe"? Only those for which the author states
they're compatible with `assume_open_tags`.
In other words: less code for us to use. Many things needs
to be rewritten. This is bad.
3. Admins will simply refuse to enable the feature
I love the idea of removing magic_quotes. At the same time
I believe many admins will hesitate to upgrade to PHP6,
because they have irrational belief that the magic_quotes
feature was protecting them.
Now imagine what will they do with `assume_open_tags`.
Will they enable it? Will they risk breaking already
deployed applications? I don't think so. If they're afraid
to leave their servers without magic_quote "protection",
they'll be even more scared of the fact that they can beak
something seriously by enabling `assume_open_tags`.
Setting `assume_open_tags` on per-directory basis (for example
with .htaccess in Apache) doesn't solve the problem, because
PHP libraries may be shared between multiple applications.
I believe that books should be rewritten, real template engines should be used, we should update our code et cetera. But real life is real life. Encountering pieces of software that were not upgraded for 20-30 years is not an uncommon thing ("20-30yrs" does not apply to PHP, but I know apps that were not updated since PHP4). `magic_quotes` are deprecated for years and many people seen they're bad even earlier. There was enough time to update applications that depends on them. And even if some code is not fixed, removing magic_quotes doesn't make it stop working. The case of `assume_open_tags` is different. If it's optional, it needs to become a standard to be accepted. And this should be done quickly. I can't imagine building separate versions of libraries for server with this feature enabled and without it. Authors will simply keep using versions with "<?php" to maintain compatibility and the proposed feature will stay unused.
OTOH forcing it to be enabled will cause problems mentioned above. This is a lose-lose situation.
IMHO this may work only if the author of the code decides which mode to use. This makes the feature really optional. It may be included in the server without breaking any existing code and be enabled if new code requires it. This way the feature may be introduced gradually. Evolution, not revolution.
The question is: how to enable authors to tell that their code assumes that opening tags are open?
The first idea was to add some metainfo to a PHAR.
The second, based on your last post, is to add an optional argument to include*/require* constructs. In such a case, the following code would cause the included file to be parsed as raw PHP source, not requiring an additional "<?php".
require_once some_magic 'ns1/ns2/Example.class.php'
There still needs to be a file with "<?php" at the beginning to use this code. However, currently the trend is to use a single dispatcher, so it's not a big deal.
Still I'm not sure if the feature is really worth being implemented.
I do agree with you, hence my last comment. Adding optional open tags alone
would be more hassle than it's worth, you're right. However, if PHP was to
provide more than just optional open tags, like some of the application-
orientated features I've mentioned, then the hassle would probably be worth it.
Of course, this goes beyond the scope of this feature request, but it would make
for an interesting discussion none-the-less. PHP is overdue to receive some kind
of application persistence so code isn't re-parsed and re-initialized after
every request.
I'm happy for this ticket to be closed, though I would love to see more thought
put into evolving PHP beyond its current template-orientated form, which lately
seems to work against developers more than it helps them.
There are still people who mix PHP and HTML code, and that is still valid. Yes, this change would save typing 6 characters (<?php + a newline) per file but would cause lots of compatibility and related issues, which isn't really worth it.
I have to agree with the sentiment.
Standard convention seems to be to exclude the close tag on PHP-only files, but
I find it fundamentally flawed and improper syntax to have an unclosed tag. I
just can't look at a PHP file with it excluded and see it as "clean".
That said, I'd support an initiative to change the nature of the open tag--
either through its removal or changing it to something else identifiable as a
PHP "starter" that's clearly not an opening tag.
I also find it completely viable for this to be a server-level option, or
something that can be toggled through Apache on a per-vhost basis.
There's a few logical paths I can think of:
1. If the file's extension is .php then ASSUME it's pure PHP, but if there's a "
<?php" tag anywhere in it, fall back to the current assumption of it being a
mixed file. This should help ensure backwards compatibility and might not even
need to be a configuration change.
2. This one being a configuration change, assume all .php files are pure PHP,
assume all .phtml files are mixtures.
3. Support "<?php?>" (or something else) as a first-line identifier of a pure
PHP file
There are plenty of ways to evolve "<?php" for pure PHP files. Some that can
work in parallel to today's conventions, some that are more abrasive.
In the meantime, I just don't want to be condemned for closing my tags, which
has been a discipline encouraged (if not enforced) for many years. Frankly it
absolutely amazes me that so many people are suddenly alright going against that
simple principle. So much for consistency... | https://bugs.php.net/bug.php?id=61182&edit=3 | CC-MAIN-2019-35 | refinedweb | 2,638 | 73.27 |
Hi, I'm kind of new to C++, but not to programming in general. I'm trying to run a program that will download a file, but I'm getting errors at lines 9 and 12.
I'm just trying to simplify the API to get used to using functions. Any help would be appreciated!
Code:
#include <iostream>
#include <string>
#include <windows.h>
int Url(LPCTSTR szURL, LPCTSTR szFileName){
HRESULT URLDownloadToFile(
LPUNKNOWN pCaller,
LPCTSTR szURL,//URL
LPCTSTR szFileName,//Save file
DWORD 0,/ERROR 1
LPBINDSTATUSCALLBACK lpfnCB
);
if (HRESULT == "S_OK") {//ERROR 2
return 1;
}
}
int WINAPI
WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR pszCmdLine, int iCmdShow)
{
int n = Url("","g.html");
return 0;
} | http://www.antionline.com/printthread.php?t=272231&pp=10&page=1 | CC-MAIN-2017-04 | refinedweb | 130 | 58.48 |
// Connect a scope to pin 13.
// Measure difference in time between first pulse with no context switch
// and second pulse started in thread 2 and ended in thread 1.
// Difference should be about 10 usec on a 16 MHz 328 Arduino.
#include <NilRTOS.h>

const uint8_t LED_PIN = 13;

// Semaphore used to trigger a context switch.
Semaphore sem = {0};
//------------------------------------------------------------------------------
/*
 * Thread 1 - high priority thread to set pin low.
 */
NIL_WORKING_AREA(waThread1, 128);
NIL_THREAD(Thread1, arg) {
  while (TRUE) {
    // wait for semaphore signal
    nilSemWait(&sem);
    // set pin low
    digitalWrite(LED_PIN, LOW);
  }
}
//------------------------------------------------------------------------------
/*
 * Thread 2 - lower priority thread to toggle LED and trigger thread 1.
 */
NIL_WORKING_AREA(waThread2, 128);
NIL_THREAD(Thread2, arg) {
  pinMode(LED_PIN, OUTPUT);
  while (TRUE) {
    // first pulse to get time with no context switch
    digitalWrite(LED_PIN, HIGH);
    digitalWrite(LED_PIN, LOW);
    // start second pulse
    digitalWrite(LED_PIN, HIGH);
    // trigger context switch for task that ends pulse
    nilSemSignal(&sem);
    // sleep until next tick (1024 microseconds tick on Arduino)
    nilThdSleep(1);
  }
}
//------------------------------------------------------------------------------
/*
 * Threads static table, one entry per thread. Thread priority is determined
 * by position in table.
 */
NIL_THREADS_TABLE_BEGIN()
NIL_THREADS_TABLE_ENTRY("thread1", Thread1, NULL, waThread1, sizeof(waThread1))
NIL_THREADS_TABLE_ENTRY("thread2", Thread2, NULL, waThread2, sizeof(waThread2))
NIL_THREADS_TABLE_END()
//------------------------------------------------------------------------------
void setup() {
  // Start nil.
  nilBegin();
}
//------------------------------------------------------------------------------
void loop() {
  // Not used.
}
In this project a simple RTOS might have been a huge timesaver. But we also had to have timer interrupts and serial I/O interrupts and I am not aware of how easily RTOSes can fit with those. If it is possible to weave RTOS features into your own needed I/O hardware support, that could be helpful but also could get a bit complex.
My concern would be: is the RTOS granular so we only need to use what fits our case, and can it co-exist with I/O device interrupt handlers?
On some architectures ChibiOS supports a special class of "Fast Interrupts"; such interrupt sources have a higher hardware priority than the kernel, so it is not possible to invoke system APIs from there. The invocation of any API is forbidden here because fast interrupt handlers can preempt the kernel even within its critical zones, in order to minimize latency.
The .NET Micro Framework CLR has only one thread of execution and owns all of the memory in the system. During normal operation, the CLR iterates in the interpreter loop and schedules managed threads using a round-robin algorithm, according to the priority of the threads involved. Each managed thread gets a 20-millisecond (ms) time quantum during which the runtime executes intermediate language (IL) code that belongs to the stack of the managed thread being serviced. When the managed thread goes to sleep or waits for a synchronization primitive, such as a monitor that cannot be acquired or an event that is not signaled, the CLR puts the thread in the waiting queue and tries to schedule another thread. In between scheduling managed threads, the CLR checks for any hardware events that might have been raised at the native device driver level. If an event occurred, the CLR tries to dispatch the event to the managed thread that requested it. A hardware event is associated with some kind of input/output (I/O) event, whether it be an interrupt-based GPIO event or a serial USART, USB, or I2C event.

The runtime interpreter loop starts by checking to discover whether a thread is ready for a time slice. If a thread is ready, the CLR schedules the thread for processing and tests to determine whether any hardware interrupt events require processing. If there are, the CLR processes the hardware interrupt event and moves the thread that handles the interrupt into the queue of threads that are ready to be processed. If no hardware interrupts have occurred, the CLR again tests to see whether a thread is ready for a time slice.

If the CLR determines that there are no threads that are ready to be processed, it sets a timeout timer and goes to sleep in order to conserve power. When the timeout timer triggers or an interrupt occurs, it wakes the CLR back up. At that time, the CLR determines whether an interrupt or the timer woke it up.
If it was an interrupt, the CLR determines if processing the interrupt is required. If not, it sets the timeout timer and goes back to sleep. If it is the timeout timer that wakes the CLR, the CLR executes any pending completions and checks again for interrupts. If there are waiting interrupts, it goes through the normal interrupt processing procedures. If there are no waiting interrupts, the CLR sets its timeout timer and goes back to sleep.
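The loop described above — run a thread for its quantum, then service pending hardware events and wake the threads that were waiting on them — can be caricatured in a few lines. This is a conceptual toy only (every name in it is invented for illustration), not the actual CLR interpreter:

```python
from collections import deque

QUANTUM_MS = 20  # each managed thread gets a 20 ms slice

def run_scheduler(ready, pending_events, handlers, max_rounds=10):
    """Toy round-robin loop: run one thread slice, then dispatch events."""
    trace = []
    rounds = 0
    while ready and rounds < max_rounds:
        thread = ready.popleft()          # pick next thread, FIFO round-robin
        trace.append(f"run {thread} for {QUANTUM_MS}ms")
        # Between slices, check for hardware events (GPIO, serial, ...).
        while pending_events:
            event = pending_events.popleft()
            waiter = handlers.get(event)
            if waiter is not None:
                trace.append(f"dispatch {event} -> wake {waiter}")
                ready.append(waiter)      # waiting thread becomes ready
        ready.append(thread)              # back of the queue
        rounds += 1
    return trace

trace = run_scheduler(
    ready=deque(["main"]),
    pending_events=deque(["gpio-interrupt"]),
    handlers={"gpio-interrupt": "isr-thread"},
    max_rounds=3,
)
print("\n".join(trace))
```

The point of the sketch is only the interleaving: event dispatch happens between time slices, so a thread waiting on a hardware event is woken no sooner than the next scheduling point.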
namespace NetduinoApplication1
{
    public class Program
    {
        static Thread myThread = new Thread(new ThreadStart(myFunc));

        public static void Main()
        {
            myThread.Start();

            // suspend at some point, wait for event
            //myThread.Suspend();

            // resume at some point, wait for event
            //myThread.Resume();

            Thread.Sleep(Timeout.Infinite);
        }

        public static void myFunc()
        {
            // do some stuff
        }
    }
}
My opinion: Arduino needs a better delay() function

The question "RTOS or no RTOS" just heats up the discussion, but does not help the migration. This is my approach:

I want a better delay() function - this is a feeling. If I introspect this feeling I recognize:
- it is a waste of energy for me if delay() is just doing busy waiting.
- I want/need to do several things in parallel, but delay() does not allow me to do it.
- I do not want to throw everything away because of a big-bang RTOS solution; I want to go step by step from here to there.

As a seasoned software engineer I have seen again and again that migration is the key. Here are my suggestions:

Migration step 1: Introduce event handling
------------------------------------------
The Arduino "core library" has the loop() function. An event-handling Arduino program will have only one statement in the loop() function: the do_event() function. The do_event() function has to look after the events and service them. A useful vehicle to show that event handling is useful would be a keyboard scan library that provides its services within an event-handling framework. The typical pocket calculator keyboard has the keys arranged between row and column lines. A 12-button keyboard needs 3+4=7 scan lines. The keyboard scan function is not trivial. Everybody expects to get:
- key debouncing
- key repeat (repetition)
- 2-key rollover (rapid typists may sometimes inadvertently press a key before releasing the previous one)
- no electrical short circuit if two keys are pressed simultaneously

To implement these functions, a timer is needed. The keyboard lib should provide a timeout functionality to the user. The function set of the keyboard lib is:

button_register(): bind an event (key press) to a function (event handler)
after():           after N milliseconds call a function
every():           every N milliseconds call a function
do_event():        check if there is a pending event and execute the event handler function.
Needs to be called again and again within loop().A simple Arduino program that fits nicely the event paradigma is a math trainer program. The Arduino displays a math question like "1 + 2 = ?" and the user should enter the correct answer within a time limit. In an event program there will be a keyboard event for every digit and a timeout event. If the user presses the digit "3", the button-3-event function will say "correct answer". The other button-event functions will say "wrong answer". And the timeout event-function will say "timeout".Migration step 2: Create a event-driven LiquidCrystal lib----------------------------------------------------------Everybody is using LiquidCrystal. But LiquidCrystal uses delay() and busy waiting is the nemesis to all real time programming. Thanks to the keyboard lib above, we have one-shot timer. And an event driven LCD output function does look like this:do_lcd() {switch (state) {case 1: // do state 1 stuff state=2; after(N, do_lcd); return; // set next state, use after to delay execution by N milliseconds, exit do_lcd()case 2: // do state 2 stuff state=3; after(M, do_lcd); return; // set next state, use after to delay execution by M milliseconds, exit do_lcd()} }The busy waiting version of do_lcd() was:do_lcd() { // do state 1 stuff delay(N); // do state 2 stuff delay(M);} }The do_lcd() function is ugly, but it is working without busy waiting. If all Arduino users are annoyed by event handling uglyness, they are ready for phase 3 of the RTOS brainwash program:Migration step 3: Show that multi-threading does look better----------------------------------------------------------------Multi threading is dangerous. The worst thing you can do with busy waiting is to freeze everything up. The worst thing in multi threading is that you literally tear apart your machine because thread 1 says "go left" and thread 2 says "go right" as fast as time-division multiplexing can do. 
But, if you have multi-threading and a multi-threaded version of delay(), the do_lcd() function will no longer look ugly, and you are still out of busy waiting land: do_lcd() { // do state 1 stuff delay(N); // the multi-thread version of delay() will switch to another thread or will wait in the operating system // do state 2 stuff delay(M); // the multi-thread version of delay() will switch to another thread or will wait in the operating system} }I bet, everybody will be happy about a keyboard() lib. And they will learn that "fu**ing" event crap, because using a lib is better then writing a lib. And everybody who uses keyboard() will use after() and every(). And now the future is open: If you can live with the event programming uglyness, you never have to go to multi-threading land. Maybe you remember the X windows system, the GUI for UNIX and Linux? Well this was all done by using event programming, and it was done a long time before multi-threading was invented.
Yeah, I think we need to stop thinking of the implementation, and start thinking about what the API should look like if it's going to be usable by "The Arduino Community."timer mytimer;void timerSet(mytimer, timeInms)void timerSet(mytimer, hours, int minutes, int seconds)boolean timerExpired(mytimer)long timerTimeLeft(mytimer)alarmClock myclockalarmClockSet()alarmSet(endTime)In this case "timers" implement a delay for some duration, while alarmclocks implement checking whether some absolute time has ocurred.
sleep(); causes THIS thread to release the CPU back to the RTOS.
Quotesleep(); causes THIS thread to release the CPU back to the RTOS.No, no! If we had an RTOS with threads we can just use delay() (naming conflicts aside.)I want to know what APIs we can use FROM NOTHING, because I think basic RTOS concepts are too difficult.
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=142494.msg1073088 | CC-MAIN-2015-14 | refinedweb | 1,795 | 59.53 |
0
Hello everybody!
I'm trying to display the result of difference of two-dimensional vectors using friend overloading. Builder shows the following error after compilation (line 40). What should I change for in my code in order to find the problem?
Thanks in advance.
[C++ Error] 05_Part.cpp(40): E2034 Cannot convert 'double' to 'Vector2D'
#include <iostream.h> #include <conio.h> class Vector2D { double x, y; friend Vector2D operator- ( const Vector2D &, const Vector2D & ); public: friend ostream & operator << (ostream & os, const Vector2D & cv) { os << cv.x << ',' << cv.y; return os; } }; Vector2D operator- ( const Vector2D & vector1, const Vector2D & vector2 ) { Vector2D vector_tmp = vector1; vector_tmp.x -= vector2.x; vector_tmp.y -= vector2.y; return vector_tmp; } int main() { double a1, b1; double a2, b2; cout << "Enter the first two-dimensional vector: "; cin >> a1; cin >> b1; cout << "Enter the second two-dimensional vector: "; cin >> a2; cin >> b2; // vector declaration Vector2D a(a1, b1); Vector2D b(a2, b2); cout << "Vector #1: " << a << endl; cout << "Vector #2: " << b << endl; // vector difference Vector2D d = a - b - Vector2D(3.0, 9.0); cout << "Difference is equal " << d << endl; getch(); return 0; } | https://www.daniweb.com/programming/software-development/threads/479733/displaying-difference-of-vectors | CC-MAIN-2017-47 | refinedweb | 181 | 57.77 |
Details
- Type:
Bug
- Status: Resolved
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Labels:None
Description
We have met empty index file during system test when it shouldn't be empty. In this case, there're around 100 messages in each segment, each of size around 100 bytes, given the "logIndexIntervalBytes" 4096, there should be at least 2 log index entries, but we see empty index file. The kafka and zookeeper logs are attached
[yye@yye-ld kafka_server_3_logs]$ cd test_1-2/
[yye@yye-ld test_1-2]$ ls -l
total 84
rw-r r- 1 yye eng 8 Oct 29 15:22 00000000000000000000.index
rw-r r- 1 yye eng 10248 Oct 29 15:22 00000000000000000000.log
rw-r r- 1 yye eng 8 Oct 29 15:22 00000000000000000100.index
rw-r r- 1 yye eng 10296 Oct 29 15:22 00000000000000000100.log
rw-r r- 1 yye eng 0 Oct 29 15:23 00000000000000000200.index
rw-r r- 1 yye eng 10293 Oct 29 15:23 00000000000000000200.log
rw-r r- 1 yye eng 0 Oct 29 15:23 00000000000000000300.index
rw-r r- 1 yye eng 10274 Oct 29 15:23 00000000000000000300.log
rw-r r- 1 yye eng 0 Oct 29 15:23 00000000000000000399.index
rw-r r- 1 yye eng 10276 Oct 29 15:23 00000000000000000399.log
rw-r r- 1 yye eng 0 Oct 29 15:23 00000000000000000498.index
rw-r r- 1 yye eng 10256 Oct 29 15:23 00000000000000000498.log
rw-r r- 1 yye eng 10485760 Oct 29 15:23 00000000000000000596.index
rw-r r- 1 yye eng 3564 Oct 29 15:23 00000000000000000596.log
Activity
- All
- Work Log
- History
- Activity
- Transitions
Producer used sync mode. So, there is 1 message per batch and each segment has about 100 messages. I expect at least 2 index entries being added per segment.
Here is the issue. We rolled a new segment in the follower. The follower in one fetch gets 10k bytes of data and appends to its log. This won't add any index entry since it's the very first append to this segment. After the append, the log rolled since the max segment size is reached. This leaves an empty index.
Technically, the logic is still correct. It does mean that index entries may not be generated as frequently as one expects, depending on the fetch size used in the follower fetcher thread and how far behind a follower is. This may impact consumer performance a bit.
Makes sense, clever. I thought a bit about this during the implementation and decided not to try to make the index entries exact just to reduce complexity. It would be possible to be more exact by having LogSegment calculate inter-messageset positions and append multiple entries to the index if necessary. This would not necessarily be a bad change, but there is definitely some complexity, especially in the face of truncation, so this would need pretty thorough testing.
Basically the problem was during restarting the server and truncation from existing log and index files.
When after the restart or truncation the existing index file is empty (so it's always full), one of the conditions of maybeRoll() will be true, so a new log segment will be rolled ---- we will end up with two log segments starting from the same offset, one of them is empty.
To fix it, we change the function trimToSize() in OffsetIndex to trimOrReallocate(), it does either trimming or reallocating the offset index file (and memory mapping). At truncation or restart, it will do reallocation, so that enough space for offset index is allocated.
We also check existing log segments at roll() function, and throws exception if some segment exists with the same offset as the target offset. (This should not happen)
Thanks for the patch. A couple of comments:
1. Log.rollToOffset(): We just need to verify that the starting offset of the last segment doesn't equal to the new offset.
2. OffsetIndex: When loading the log segment on broker startup, it is possible to have a segment with empty data and empty index. So, we will hit the same issue of rolling a segment with a duplicated name. We probably should extend the index size to max index size in the constructor of OffsetIndex.
trimOrReallocate(isReallocate: Boolean) is a bit of a hacky interface. This supports resizing to two sizes: entries or maxSize, but it would be better to just implement the more general
resize(numEntries: Int)
numEntries is the mmap.limit/8. It might be nice to leave the helper method trimToSize as a less error prone alias
def trimToSize() = resize(entries)
If we do this we should be able to use resize to implement OffsetIndex.truncateTo (the difference between truncateTo and resize is that truncateTo is in terms of offset whereas resize is in terms of entries).
We should also add a test case to cover these corner cases.
Jay, Jun,
Thanks for the comments.
Jun also pointed out that there may still be cases where at startup, the last log segment has empty index and empty log file ---- and the trimOrReallocate() is not called because it was a clean shutdown before.
What about an alternative idea, which is, whenever we load a log segment, we always make sure its offset index has enough disk space and memory. In this case, when we truncate back to old segment, its index will not be full, when we start the last log segment with empty log file and index file, its index will also not be full.
To do this, we just need to change the constructor of OffsetIndex, by always set the index file size and mmap limit to maxIndexSize.
Thanks for the discussions, and here's the third patch which is also verified to be working.
Thanks for patch v3. Looks good. Some minor comments.
30. Log.rollToOffset(): segmentsView.last.index.file.getName.split("
.")(0).toLong can just be segmentsView.last.start.
31. Log.loadSegments(): Just to be consistent. Should we use index.maxIndexSize instead of maxIndexSize in the following statement?
logSegments.get(logSegments.size() - 1).index.resetSizeTo(maxIndexSize)
For 30,
It's fixed
For 31,
It seems simpler to keep as it is
Jay, can you help have a look at this patch?
Looks good, two minor things:
1. Can we name resetSize to resize?
2. Can we change the argument to be in terms of number of entries rather than number of bytes? It is incorrect to set to a number of bytes that is not a multiple of the entry size and the entry size is kind of an implementation detail of that class so this would be nicer.
3. Can we add a test for this case?
Discussed with Victor. (1) and (3) should be doable, but (2) is not very convenient because the usage actually resizes to indexMaxSize which is in terms of bytes. So our solution was to have the API take bytes and just round to a multiple of 8.
Thanks for the comments, changes in v5:
1. rename resetSize() to resize()
2. add using roundToMultiple() in resize()
3. add comments to resize()
4. add one unit test "testIndexResizingAtTruncation" in LogTest for this case
+1 on v5. Committed this patch
I am not sure that this is a bug. The index entries are not placed at exact intervals. Rather, we add a single index entry per append if the bytesSinceLastIndexEntry > indexIntervalBytes. What this means is that you could get a log of 10,296 bytes by appending a single message set of that size or by appending one message set <= 4095 and then another that made up the difference.
What was batch size being used in this test? | https://issues.apache.org/jira/browse/KAFKA-593 | CC-MAIN-2015-48 | refinedweb | 1,297 | 72.87 |
Hi all!
In that tutorial I'll explain about abstract classes, abstract methods and interfaces.
Plus, the differences between an interface and an abstract class.
Ok! lets start!
Abstract class
An abstract class is a class that is declared abstract.
It cannot be instantiated, so if you'll try running the following code you'll get an error messege:
public abstract class AbstractClass { public AbstractClass(){//the constructor } public static void main(String args[]){ AbstractClass abstractClassInstance = new AbstractClass(); //Error! } }
However, an abstract class can be subclassed.
public abstract class AbstractClass { } public class SubClass extends AbstractClass{ }
After defining an abstract class, lets check why would you want to declare your class as an abstract class?
you would want to declare your class as an abstract class when your class describes an abstract idea
rather than a specific one.
Lets take an example from real life for an abstact class:
public abstract class Food{ }
Food! who doesnt like food! there are so much types of Food
but wait... what exactly is Food???? or, why did I declare Food abstract????
I'll answer that question with another question
Suppose i asked you "what did you eat for lunch?"
you may answer: "I ate pizza for lunch!" or maybe "I ate Pancake for lunch!"
but you will never answer, "I ate Food for lunch!"
why? because Food is an abstract definition for "Things we eat".
while Pizza, and Pancake are subclasses of food. in code it would look something like this:
public abstract class Food { } public class Pizza extends Food { } public class Pancake extends Food { }
You can declare instances of Pizza, and Pancake but not of food.
Ok. now, after we understood what an abstract class is, lets move to:
Abstract methods!
An abstract method is declared that way:
public abstract void abstractMethod();
Some important things concerned abstract methods:
1. Abstract method cannot be declared private!
2. Abstract methods don't have a body. these are empty methods.
so when declaring an abstract method, end the line with ";".
3.Only abstract classes can contain abstract methods!
(or interfaces, will be explained later).
Why would you want to use abstract method?
lets return to our food example.
to eat food, you have to prepare it.(cook\fry or so).
so how do you prepare Food?
there is no question for this question. why??
because each instance of food(pizza, Pancake, salad) is prepared in a different way!
public abstract class Food{ public food(){ } public abstract void prepare();//the abstract method } public class Pizza extends Food { public Pizza(){ } public void prepare(){ //cut tomatoes, have the dough, add cheese, insert the oven... } }//end of pizza subclass public class Pancake extends Food{ public pancake(){ } public void prepare(){ //add all ingredients on the pan, add sauce, onions, fry it... } }//end of Pancake class
In that example you see how you use an abstract method.
abstract method should be used when you want to do the same operation, but in different ways.
After defining abstract methods and abstract classes we are ready for
the next topic.
Interfaces!
What is an interface?
Think of an interface as a contract that a class does.
in order to implement an interface a class must implement the interfaces methods.
An interface contains constants and abstract methods.(only abstract methods!).
it looks like:
public interface newInterface{//newInterface can be replaced by every name you want. int CONST1;//can be every data type. (long,char etc...). String CONST2; //more constants.. void Method1(); int Method2();//can return all sort of data (long,char, boolean etc). //more methods... }
This is how an interface should be look like.
important!
1.notice that I don't declare the constant variables as public\private\protected.
in an interface they all get: public static.
2.notice that i dont declare the methods with public\private and also not abstract.
in an interface all methods get by default: "public abstract".
3.There is another type of interface. Marker Interfaces. marker interfaces are markers without methods. like the serializable interface.
It looks like that:
public interface serializable { }
later on my tutorial you'll understand why will you need an empty interface????
Lets?
first lets check the differences between an abstract class and an interface.
1.a certain class can extend only ONE superclass, while it can implement multiple interfaces.
2.an abstract class can contain implementation for some of its methods, while an interface only declares methods.
notice that if you declare an abstract class with only abstract methods, it should be declared as an interface!
3.an abstract class can contain all sort of variables(not only public static), and also can contain private, protected methods.
Now after understanding all basic definitions, we are ready for the main subject!
When will you use an interface rather than an abstract class?
Suppose you want to have Washable items.
who doesnt wash his stuff???
By stuff i mean your vehicle, your clothes etc.
Probably you have some kind of vehicle. right? a car? motorbike? a bicycle? rollerblades??
And i assume you wash them once a week..
And of course you have clothes you wear. (shirt, pants, jacket).
And i HOPE you wash them at least once a week!
so first declare a new class Washable:
NOTE: definately different than washing a rollerblades. right?
also, washing colored clothes is different than washing white clothes.
so, what will you do if you want to create a car class? public class car extends vehicle, extends Washable { //cant extend both! }
NOTE:the end of the wrong code i mentioned before.
What you want to do in such a case is declare Washable as an interface.
public interface Washable { void wash(); }
Now you can implement it with all these classes, and implement the wash method differently!
public abstract class Vehicle implements Washable{ public Vehicle(){ } public abstract void wash();//wash is abstract because it is not yet implemented! } public abstract class Clothes implements Washable{ public Clothes(){ } public abstract void wash();//wash is abstract because it is not yet implemented! } public class Car extends Vehicle { public Car(){ } public void wash(){ //the wash method for a car } } public class Bicycle extends vehicle { public Bicycle(){ } public void wash(){ //the wash method for a bicycle } } //now for clothes public Shirt extends Clothes { public Shirt(){ } public void wash(){ the wash method for shirt } } //and so on!
most important note:
When implementing an interface's method, you can't assign it different access privelege other than public.
The reason is that all methods in theare declared public. and in java you cant override a method by assign it a weaker access.
implementing the Washable interface in both Clothes and Vehicles has benefits!
Suppose you have a new Robot and you want it to wash all your stuff.
BY stuff i mean all your Clothes and all your Vehicles. (i want that robot too!)
you could have a list of Washable objects and use the wash method on all of them!
public class Robot { public Robot(){ } public void washAllMyStuff(){ List<Washable> objectsToWash = new ArrayList<Washable>(); //add your washable items to the list //then go over the list and invoke wash() for(Washable washableObject : objectsToWash){ washableObject.wash(); } }
Of course, you can create other abstract classes like Furniture and implement Washable on those classes.
then add them to the robot's washing list!
Have i already mention how Cool that Robot is?
ok that is the end of my tutorial, where i covered abstract classes, abstract methods and interfaces.
hope it helped
| http://www.dreamincode.net/forums/topic/130490-abstract-classes-vs-interfaces/ | CC-MAIN-2016-40 | refinedweb | 1,241 | 67.25 |
One of the questions that I’ve been asked on multiple occasions when presenting on Kubernetes security [1] is: “Which distribution should I install?”
There are a bewildering number of options for deploying Kubernetes, with over 60 commercial products or open source projects providing methods of deploying it. Making an informed choice can therefore be a difficult task.
If you’re planning on using Kubernetes in production, one of the key things to consider from a security perspective is your threat model. This may sound like a fancy term but it really just boils down to what kind of security attacks you are worried about. I see three main groups of threats to a Kubernetes cluster:
- External attackers: People who have no access to your cluster apart from being able to reach the applications running on it and/or the management port(s) over a network.
- Malicious containers: Here, an attacker has access to a single container (likely through some application vulnerability) and would like to expand their access to take over the whole cluster.
- Malicious user/stolen credentials: Here, an attacker has valid credentials to execute commands against the Kubernetes API, as well as network access to the port.
When looking at which Kubernetes deployment option you want, you need to consider whether each of these groups applies to you and ensure that the configuration of your cluster is modified appropriately.
External attacker
The first category, external attackers, is relatively straightforward. Here, the controls relate to ensuring that your management services (e.g. the API server, kubelet and etcd) are not exposed to untrusted networks without authentication controls in place.
Most cloud-based deployments will only expose the API server to the internet and will generally enforce authentication, so you should be in a reasonable place there. On-premise installations can get slightly more complicated as some deployments leave the kubelet accessible without authentication which could, for example, expose the cluster to attack over a corporate network.
Deciding if this affects you
Here are some questions you can think about to decide if this category of attack matters to you:
- Are my Kubernetes management services exposed to the internet? Obviously if the answer here is yes then external attackers are a potential problem.
- How much trust can I place in my internal network? If your cluster isn’t exposed to the internet, then you need to consider how ‘trusted’ your internal networks are. In a small organisation you might be quite comfortable trusting your internal networks, but in larger networks it’s generally best to assume that an external attacker can get access somehow.
Malicious container
The second category, malicious containers, is where things get more ‘interesting’ from a security standpoint.
Several Kubernetes distributions have made the decision that they don’t consider malicious containers part of their threat model. As such, once an attacker has that level of access then there are minimal controls, by default, stopping them getting full cluster-admin rights. In Kubernetes there are a number of well-known privilege escalation mechanisms, via the Kubelet, via access to etcd or via service tokens, which are deployed on all containers with high privileged rights in many cases.
Deciding if this affects you
Here are some questions you can think about relating to this:
- Do I have any internet-facing applications running in my cluster? If you’re running internet-facing applications then it’s a fact of life that attackers will try to compromise them. Therefore, the chances of having a container compromised increases.
- Am I running multiple applications in my cluster? If you’re running a lot of applications then there’s an increased risk that one will get compromised. However, running multiple applications in one cluster also increases the impact of any compromise. An attacker who can take out a container will get even more payoff as they can compromise multiple applications in the one cluster.
- Do my containers have access to my cloud service provider’s API? Some Kubernetes implementations will have application tokens for things like AWS or Azure APIs so they can interact with services there. Again, this will increase the impact of a single container being compromised, so is worth thinking about.
From a security person’s standpoint, I’d argue that all production clusters should take this scenario into account. It’s not rare to see a single issue in one application being leveraged to take control of other systems, and it’s important that your infrastructure can defend itself against this kind of attack.
Malicious users
The third scenario, malicious users, is more problematic. But with careful configuration of a cluster it is possible to make it hard for a user to escalate their privileges. This is most likely to occur when a user’s credentials have been lost or stolen. And with Kubernetes credentials tending to be static files with SSL keys in them, it’s fairly easy for them to be leaked or lost.
Deciding if this affects you
Here are some questions you can think about relating to this:
- Do I have a large number of users who have access to my cluster? If you’ve only got a small, well-controlled number of users, this scenario might not apply to you. However, if you have a larger number of users, the risk of a lost credential increases.
- Do I have a lot of users who I want to only have limited rights to my cluster? If all your users have cluster admin privileges, then controls to restrict user rights are perhaps less critical. But if you have groups of users who should only be able to take certain actions, then it’s important to consider this threat model.
Key controls
There are a large number of security settings that can be applied to a Kubernetes cluster, and with the current version of the CIS benchmark [2] weighing in at over 250 pages long, it can be a bit daunting to think of which ones should be looked at first. Depending on the threat models above, here are some key controls to think about:
External attacker
- Ensure that all management ports that are visible externally require authentication, including:
- Main Kubernetes API
- Kubelet API
- Etcd API
- Where services that don’t allow authentication (e.g. cAdvisor or the read-only Kubelet) are required, restrict access to a whitelist of source addresses.
Malicious container
- As with the external attacker approach, ensure that all management ports visible on the cluster network require authentication for all users.
- Ensure that service accounts are either not mounted in containers or have restricted rights (i.e. not cluster admin).
- Use Network Policies to restrict access between namespaces and pods.
Malicious users
- Ensure that RBAC policies are in place for all users, providing ‘least privilege’ access to cluster resources.
- Ensure that Pod Security Policies are in place for all users to restrict the rights of pods that can be created, paying particular attention to high risk items such as privileged containers.
Conclusion
Aligning the threat model you’re worried about with what your software suppliers think you should be worried about is always an important part of any deployment. Kubernetes is no different.
With the wide range of options available, it’s easy to go down a path that could leave you with unexpected, and ultimately costly, security issues. Even worse, you could face a compromised cluster and all the problems that would ensue.
References
[1]
[2]
Written by Rory McCune
First published on 23/11/17 | https://research.nccgroup.com/2017/11/23/kubernetes-security-consider-your-threat-model/ | CC-MAIN-2022-40 | refinedweb | 1,254 | 50.57 |
I thought people might find this fix interesting because it annoyed the hell out of me when I first started using Vista. In the XP days, I was a huge Windows Desktop Search (WDS) fan, and like many WDS fans I found Vista search exciting not so much for its new features, but because it gave mainstream Windows users visibility into how desktop search would change the way they compute.
When I used WDS I typically indexed my whole C:\ drive, and I carried that practice over to Vista's indexing service. That's when I started having issues. I wanted to search all the content on my machine, but when I searched for files from the Start Menu, no results were ever returned unless they were applications. For example, if I searched for an Excel file, it would not show up in the Start Menu results even though it was located in "Documents" -- it was very frustrating.
As it turns out, the quick fix was to change the Start Menu settings so that the search bar is set to "search entire index" instead of the default "search this user's files." The problem arises when the user changes the "indexed locations" to include all of C:\ while the Start Menu is set to "search this user's files." This is a known bug and will be fixed in Vista SP1.
Since learning from this experience, I've changed my view on indexing all of C:\. Index only what you need, so that indexing performs well and Start Menu search results come back quickly. The reality is that if you properly use the USER namespace as opposed to ROOT for your documents and content, then you should never need to index all of C:\
As a best practice, I keep all my temp files, downloads, logs and content in the Windows USER space. So remember, index what you want, but if you are insistent on indexing the whole disk before SP1, change the Start Menu settings to "search entire index". --VT
I'm always a bit dubious about staying in hotels, as they often let you down; however, Viva On The Beach was an exception. Upon arrival a very friendly manager (Fabio) showed me a number of rooms I could choose from, and I picked a basic room, but it had air con, a fan, a lovely large shower and toilet, and a little kitchen - perfect.
Located directly on the beach and within walking distance of everything in Chaloklum, I did not have to hire a scooter. I also met the owner, a big character named Eddie, who made our stay a bit more special by giving us a Viva On The Beach cocktail for free! The barman was also very friendly and seemed to really care. All in all, Viva On The Beach was excellent value for money, and when I return (not if) to Koh Phangan I will stay there again.
- Official Description (provided by the hotel):
Wi-Fi, hot shower, air-con or fan.
People have been discovering that the VS Team System profiler can collect allocation data for an application. Not long after, they discover that it only works on managed code, not native code. Sadly, the documentation is not clear on this.
The memory alloction profiling support in VSTS uses the profiler API provided by the CLR. This gives us a rich set of information that allows us to track the lifetimes of individual objects. There is no such off-the-shelf support in native memory management, since there are nearly as many heap implementations as there are applications in the world.
Memory allocation profiling is a much bigger deal for managed code, as the CLR has effectively turned what used to be a correctness issue (leaks, double frees, etc) into performance issues (excessive GCs and memory pressure)*. This does not mean that some kind memory allocation profiling wouldn't benefit the world, but the combination of it being less important and more difficult keeps it out of the product for now.
If you think you are facing memory issues in native code, there is at least one utility I can offer up: In the server resource kits, the vadump utility can give you information about your virtual address space, including memory allocated by VirtualAlloc.
Unfortunately, this is a pale shadow (if that) of what you can get from the managed side.
* The idea that correctness issues become performance issues as we develop more advanced runtimes was something I heard David Detlefs mention somewhere..);
Note that both the native and managed versions return zero on success, so it is possible to detect whether the call succeeded:
On the forums, someone was using the /INCLUDE option in VsInstr.exe. It is possible to use multiple instances of this option to include different sets of functions. For a big chunk of functions, you might want to use dozens of function specifications. Who the heck wants to do all that typing? You could make a batch file, but a response file would be better.
Response files are text files where each line in the file is a single command line option. Since each line holds exactly one option, quotes are not necessary. They are much easier to edit than a batch file (which would have single, really long line). To use a response file, simply use @filename on the command line of the tool. For example:
VsPerfCmd /start:sample "/output:c:\Documents and Settings\AngryRichard\foo.vsp" "/user:NETWORK SERVICE"
VsPerfCmd /start:sample "/output:c:\Documents and Settings\AngryRichard\foo.vsp" "/user:NETWORK SERVICE"
can be turned into a response file like this:
Startup.rsp:
/start:sample/output:c:\Documents and Settings\AngryRichard\foo.vsp/user:NETWORK SERVICE
VsPerfCmd @Startup.rsp
/start:sample/output:c:\Documents and Settings\AngryRichard\foo.vsp/user:NETWORK SERVICE
VsPerfCmd @Startup.rsp
All of the command line tools for the profiler accept response files. It beats all that error prone typing, and if you run scenarios from the command line a lot, it can pay to have some response files laying about for common scenarios.
I've just posted an article on the pitfalls of profiling services with the Visual Studio profiler. It includes a sample service with a quick walkthrough. Enjoy.
Profiling Windows™ Services with the Visual Studio Profiler
Typically, one can use the sampling profiler to nail down the hot spot in an application. Having done that, what does one do when the sampling data doesn't provide enough information? The trace profiler can offer up more detail, particularly if the issue revolves around thread interaction. However, if you profile a heavily CPU bound application, you may find that you are getting huge trace files, or that the profiler is significantly impacting the performance of your application. The VisualStudio profiler offers a mechanism to stem the avalanche of data.
I'll illustrate the general idea with an example.
Suppose we'd like to run trace profiling on the following highly useful piece of code. We've decided that we only care about profiling in the context of the function "OnlyProfileThis()"
using System;
public class A{ private int _x; public A(int x) { _x = x; } public int DoNotProfileThis() { return _x * _x; } public int OnlyProfileThis() { return _x + _x; } public static void Main() { A a; a = new A(2); Console.WriteLine("2 square is {0}", a.DoNotProfileThis()); Console.WriteLine("2 doubled is {0}", a.OnlyProfileThis()); }}
The VisualStudio profiler provides an API for controlling data collection from within the application. For native code, this API lives in VSPerf.dll. A header (VSPerf.h) and import library (VSPerf.lib) provided in the default Team Developer install allows us to use the profiler control API from native code. For managed code, this API is wrapped by the DataCollection class in Microsoft.VisualStudio.Profiler.dll. We can update the example to the following to control the profiler during our run:
using System;using Microsoft.VisualStudio.Profiler;
public class A{ private int _x; public A(int x) { _x = x; } public int DoNotProfileThis() { return _x * _x; } public int OnlyProfileThis() { return _x + _x; } public static void Main() { A a; a = new A(2);
int x; Console.WriteLine("2 square is {0}", a.DoNotProfileThis());
DataCollection.StartProfile( DataCollection.ProfileLevel.DC_LEVEL_GLOBAL, DataCollection.PROFILE_CURRENTID);
x = a.OnlyProfileThis();
DataCollection.StopProfile( DataCollection.ProfileLevel.DC_LEVEL_GLOBAL, DataCollection.PROFILE_CURRENTID);
Console.WriteLine("2 doubled is {0}", x);
}}
We still need to instrument the application as normal. We also have one additional step. When running the code above, data collection will be enabled by default, so the API won't appear to do anything beyond stopping data collection after the call to OnlyProfileThis. We need to disable data collection before running the application. The profiler control tool,VSPerfCmd, has options to do this.
To run this scenario from the command line:
StartProfile(ProfileLevel level, UInt32 id)StopProfile(ProfileLevel level, UInt32 id).
SuspendProfile(ProfileLevel level, UInt32 id)ResumeProfile(ProfileLevel level, UInt32 id)
This works very much like StartProfile and StopProfile, however, calls to these functions are reference counted. If you call SuspendProfile twice, you must call ResumeProfile twice to enable profiling.
This works very much like StartProfile and StopProfile, however, calls to these functions are reference counted. If you call SuspendProfile twice, you must call ResumeProfile twice to enable profiling.
MarkProfile(Int32 markId)CommentMarkProfile(Int32 markId, String comment)CommentMarkAtProfile(Int64 timeStamp, Int32 markId, String comment)
Inserts a 32-bit data value into the collection stream. Optionally, you can include a comment. With the last function, the mark can be inserted at a specific time stamp. The id value and the optional comment will appear in the CallTrace report.
Inserts a 32-bit data value into the collection stream. Optionally, you can include a comment. With the last function, the mark can be inserted at a specific time stamp. The id value and the optional comment will appear in the CallTrace report.
The profiler control tool provides similar functionality through a command line interface, though the notion of a "current" process or thread id is obviously not relevant.
VSPerfCmd /?
-GLOBALON Sets the global Start/Stop count to one (starts profiling).
-GLOBALOFF Sets the global Start/Stop count to zero (stops profiling).
-PROCESSON:pid Sets the Start/Stop count to one for the given process.
-PROCESSOFF:pid Sets the Start/Stop count to zero for the given process.
-THREADON:tid Sets the Start/Stop count to one for the given thread. Valid only in TRACE mode.
-THREADOFF:tid Sets the Start/Stop count to zero for the given thread. Valid only in TRACE mode.
-MARK:marknum[,marktext] Inserts a mark into the global event stream, with optional text. [...]
If you find yourself buried under a ton of trace data, investigate the profiling API to help focus on the important parts of your application.
I've frequently heard the question asked, "Can I use the profiler on a Virtual PC?" It has even come up on the blog feedback a few times. My answer has always been, "Theoretically, yes." I didn't want to post this answer externally until I'd actually gotten around to trying it myself.
I've finally been nagged into it.
In my limited experience with our VirtualPC product, it has quite impressed me with its functionality. However, it does not emulate the hardware performance counters upon which the profiler implicitly depends. For this reason, you can not run the sampling profiler using a performance counter based interrupt. My collegue Ishai pointed out* that you should be able to use page-fault or system-call based sampling, but the VPC has a different problem with these modes that is still under investigation.
Instrumentation based profiling will work on the VPC. However, as I've already mentioned, there is a bug check issue with the driver when it unloads. Fortunately, instrumentation based profiling doesn't rely on the presence of the driver.
By renaming the driver to prevent the profiling monitor from installing it at startup, I was able to use instrumentation based profiling on the VPC. This is obviously just a workaround, but I hope this will allow you to investigate some of our tools in the comfort of a VirtualPC environment.
Here is how you can prevent the driver from loading on your VPC installation.
Happy hunting!
* I'm pretty sure Ishai was hired to point out things I do wrong. Fortunately, he usually points out solutions as well.
I had a nice long email chat with members of the Virtual PC team.
The good news: The Virtual PC emulates the host processor well enough that our kernel-mode driver can detect what features are enabled.
The bad news: The Virtual PC does not emulate an APIC or performance counters.
So, if you were planning on running the profiler inside a Virtual PC, the best you can hope to do is get function trace data on an instrumented app. Sampling will not work at all, and collecting perfomance counter data in the instrumentation will fault the application.
Bummer. I wish I had better news.
I'm so pleased. Someone did something exciting and dangerous with the profiler. In case you're not reading the newsgroups, an intrepid customer tried to profile on a Virtual PC, and discovered that it only leads to pain and misery via the BSOD.
So don't do that.
Seriously, is this something people want to do? I mean, VPC is about the coolest thing ever, but we do use hardware performance counters by default, and VPC is not exactly a real life environment for performance analysis and measurement. Still, maybe y'all have good reasons for this.
At the very least, we'll fix that BSOD thing. I mean, how 1990's is that?
This is an excellent time to point out that the profiler does in fact install a kernel-mode device driver in order to play with the hardware counters on your Intel and AMD processors. There are some fun implications from this:
A bunch of the guys on the team I work for have been starting up blogs. I started feeling left out, which made me very angry.
It appears all blogs start with "Hi, I'm a developer who does X and I'm going to talk about Y and maybe Z."It's all part of Microsoft's new image -- we're transparent now.
Transparency's good, right?
Go down to your local German car dealer, or, if you own a newer German car, go out to your driveway. Shiny, pretty, isn't it? Open the hood. Look at that -- a big, fat, intake pipe that goes into a big box that says "BMW" in a 4" Century Gothic font. Cool. Now pop that plastic thing off the top. Go ahead, I dare you. Not so pretty now. Look at all those wires and tubes. Look at all that stuff you could cut yourself on. That's why that plastic thing is there; it makes owning all that power a little less scary.
Some of us spend all our time under the shiny plastic thing, hands full of wires and tubes and spark plugs. That'd be me and some of my less 'transparent' friends, working on the instrumentation and data collection engine in the profiler.
Of course, with great power, comes great potential for disaster. Some day you'll have your turn with the profiler. Trust me, something's always too slow. If it works, great. If you find yourself upside down in a ditch, tell us, we want to know where we need to cover up the sharp edges. | http://blogs.msdn.com/angryrichard/default.aspx | crawl-002 | refinedweb | 2,103 | 56.66 |
Data Structures for Drivers
no-involuntary-power-cycles(9P)
usb_completion_reason(9S)
usb_other_speed_cfg_descr(9S)
usb_request_attributes(9S)
- USB interrupt request structure
#include <sys/usb/usba.h>
Solaris DDI specific (Solaris DDI)
An interrupt request (that is, a request sent through an interrupt pipe), is used to transfer small amounts of data infrequently, but with bounded service periods. (Data flows in either direction.) Please refer to Section 5.7 of the USB 2.0 specification for information on interrupt transfers. (The USB 2.0 specification is available at.)
The fields in the usb_intr_req_t are used to format an interrupt request. Please see below for acceptable combinations of flags and attributes.
The usb_intr_req_t fields are:
ushort_t intr_len; /* Size of pkt. Must be set */ /* Max size is 8K for low/full speed */ /* Max size is 20K for high speed */ mblk_t *intr_data; /* Data for the data phase */ /* IN: zero-len mblk alloc by client */ /* OUT: allocated by client */ usb_opaque_t intr_client_private; /* client specific information */ uint_t intr_timeout; /* only with ONE TIME POLL, in secs */ /* If set to zero, defaults to 5 sec */ usb_req_attrs_t intr_attributes; /* Normal callback function, called upon completion. */ void (*intr_cb)( usb_pipe_handle_t ph, struct usb_intr_req *req); /* Exception callback function, for error handling. */ void (*intr_exc_cb)( usb_pipe_handle_t ph, struct usb_intr_req *req); /* set by USBA/HCD on completion */ usb_cr_t intr_completion_reason; /* overall completion status */ /* See usb_completion_reason(9S) */ usb_cb_flags_t intr_cb_flags; /* recovery done by callback hndlr */ /* See usb_callback_flags(9S) */
Request attributes define special handling for transfers. The following attributes are valid for interrupt requests:
Accept transfers where less data is received than expected.
Have USB framework reset pipe and clear functional stalls automatically on exception.
Have USB framework reset pipe automatically on exception.
Perform a single IN transfer. Do not start periodic transfers with this request.
Please see usb_request_attributes(9S) for more information.
Interrupt transfers/requests are subject to the following constraints and caveats: 1) The following table indicates combinations of usb_pipe_intr_xfer() flags argument and fields of the usb_intr_req_t request argument (X = don't care): "none" as attributes in the table below indicates neither ONE_XFER nor SHORT_XFER_OK flags Type attributes data timeout semantics ---------------------------------------------------------------- X IN X !=NULL X illegal X IN !ONE_XFER X !=0 illegal X IN !ONE_XFER NULL 0 See table note (A) no sleep IN ONE_XFER NULL 0 See table note (B) no sleep IN ONE_XFER NULL !=0 See table note (C) sleep IN ONE_XFER NULL 0 See table note (D) sleep IN ONE_XFER NULL !=0 See table note (E) X OUT X NULL X illegal X OUT ONE_XFER X X illegal X OUT SHORT_XFER_OK X X illegal no sleep OUT none !=NULL 0 See table note (F) no sleep OUT none !=NULL !=0 See table note (G) sleep OUT none !=NULL 0 See table note (H) sleep OUT none !=NULL !=0 See table note (I) Table notes: A) Continuous polling, new data is returned in cloned request structures via continous callbacks, original request is returned on stop polling. B) One time poll, no timeout, callback when data is received. C) One time poll, with timeout, callback when data is received. D) One time poll, no timeout, one callback, unblock when transfer completes. E) One time poll, timeout, one callback, unblock when transfer completes or timeout occurs. F) Transfer until data exhausted, no timeout, callback when done. G) Transfer until data exhausted, timeout, callback when done. H) Transfer until data exhausted, no timeout, unblock when data is received. I) Transfer until data exhausted, timeout, unblock when data is received. 2) USB_FLAGS_SLEEP indicates here just to wait for resources, except when ONE_XFER is set, in which case it also waits for completion before returning. 3) Reads (IN): a) The client driver does *not* provide a data buffer. By default, a READ request would mean continuous polling for data IN. 
The USBA framework allocates a new data buffer for each poll. intr_len specifies the amount of 'periodic data' for each poll. b) The USBA framework issues a callback to the client at the end of a polling interval when there is data to return. Each callback returns its data in a new request cloned from the original. Note that the amount of data read IN is either intr_len or "wMaxPacketSize" in length. c) Normally, the HCD keeps polling the interrupt endpoint forever even if there is no data to be read IN. A client driver may stop this polling by calling usb_pipe_stop_intr_polling(9F). d) If a client driver chooses to pass USB_ATTRS_ONE_XFER as 'xfer_attributes' the HCD polls for data until some data is received. The USBA framework reads in the data, does a callback, and stops polling for any more data. In this case, the client driver need not explicitly call usb_pipe_stop_intr_polling(). e) All requests with USB_ATTRS_ONE_XFER require callbacks to be specified. f) When continuous polling is stopped, the original request is returned with USB_CR_STOPPED_POLLING. g) If the USB_ATTRS_SHORT_XFER_OK attribute is not set and a short transfer is received while polling, an error is assumed and polling is stopped. In this case or the case of other errors, the error must be cleared and polling restarted by the client driver. Setting the USB_ATTRS_AUTOCLEARING attribute will clear the error but not restart polling. (NOTE: Polling can be restarted from an exception callback corresponding to an original request. Please see usb_pipe_intr_xfer(9F) for more information. 4) Writes (OUT): a) A client driver provides the data buffer, and data, needed for intr write. b) Unlike read (see previous section), there is no continuous write mode. c) The USB_ATTRS_ONE_XFER attribute is illegal. By default USBA keeps writing intr data until the provided data buffer has been written out. The USBA framework does ONE callback to the client driver. d) Queueing is supported. 
The intr_completion_reason indicates the status of the transfer. See usb_completion_reason(9S) for usb_cr_t definitions. The intr:
usb_alloc_request(9F), usb_pipe_ctrl_xfer(9F), usb_pipe_bulk_xfer(9F), usb_pipe_intr_xfer(9F), usb_pipe_isoc_xfer(9F), usb_bulk_request(9S), usb_callback_flags(9S), usb_completion_reason(9S), usb_ctrl_request(9S), usb_isoc_request(9S), usb_request_attributes(9S) | http://docs.oracle.com/cd/E23824_01/html/821-1478/usb-intr-request-9s.html | CC-MAIN-2014-23 | refinedweb | 973 | 57.27 |
I am using VS6, and after changing my #define (macros..) values, I got warnings, of macro redefinition. If I comment the #defines, the game works, as if I would be using my previous definitions.
First time, I defined the values, I had no warnings, no errors. It looks like the macros are stored somewhere else. I am using "win32 console application".
The Code:
#define USE_CONSOLE
#include <ALLEGRO.H>
#include <MATH.H>
#define RESOX 800
#define RESOY 800
#define RESOZ 800
#define MiD 400
#define LENGTH 200
#define PER_0 1600
The warnings:
D:\nvm\C\3DA3b\MAiN n ONLY.cpp(6) : warning C4005: 'RESOY' : macro redefinition
d:\nvm\c\3da3b\main n only.cpp(6) : see previous definition of 'RESOY'
D:\nvm\C\3DA3b\MAiN n ONLY.cpp(7) : warning C4005: 'RESOZ' : macro redefinition
d:\nvm\c\3da3b\main n only.cpp(7) : see previous definition of 'RESOZ'
D:\nvm\C\3DA3b\MAiN n ONLY.cpp(9) : warning C4005: 'MiD' : macro redefinition
d:\nvm\c\3da3b\main n only.cpp(9) : see previous definition of 'MiD'
D:\nvm\C\3DA3b\MAiN n ONLY.cpp(10) : warning C4005: 'LENGTH' : macro redefinition
d:\nvm\c\3da3b\main n only.cpp(10) : see previous definition of 'LENGTH'
Help please, i hate getting warnings, also slows down the progress when i get something important.
Thanks..
You have them defined somewhere elese in your code.
"Code is like shit - it only smells if it is not yours"Allegro Wiki, full of examples and articles !!
It can't be, there's only 1 file. Here's all the project. (a lot of other files, made auto by VS..)
Also, the code given is the top(of the file). And, pressing F4, following the message directs me there..
make a test project, and see if you get the same errors if you make the same defines. If so, they're reserved words for VS6, for some reason.
edit: never mind. The problem is that you don't use header guards, and most of your .cpp files include header.h, wich in turn includes #define.h. So the defines get included multiple times.
Linking...
test1.exe - 0 error(s), 0 warning(s)
I made a new project, empty, made a file, copied there all the source, and it's fine.
I changed values of the macros, and didn't get any warning.
See my edit above.
Oh. How do I use them though?
In the define.h file, do this:
#ifndef DEFINE_H
#define DEFINE_H
...
rest of header file here
...
#endif
Similarly for EVREY header file that will be included by more than one file. The exact term you define doesn't really matter, but what I did above is standard (taking the header name, capitalizing it, and replacing the . with _).
#include <ALLEGRO.H>
#include <MATH.H>
No, it's "allegro.h" and "math.h", in lowercase. Windows doesn't mind, but every other operating system does.
-- Tomasu: Every time you read this: hugging!
Ryan Patterson - <>
Thanks, though in my case than doesn't matter. | https://www.allegro.cc/forums/thread/591066 | CC-MAIN-2018-17 | refinedweb | 506 | 62.14 |
This:
This library uses a 16 bit timer for each group of 12 servos so PWM output with analogWrite() for pins associated with these timers are disabled when the first servo is attached to the timer. For example on a standard Arduino board, Timer1 is used, so once you attach a servo, analogWrite on pins 9 and 10 are disabled.
Here is a table of PWM pin usage on the Mega board:
New version updated 8 Jun:
- supports boards with 8MHz clock. - read_us() renamed to readMicroseconds() - writeMicroseconds() method added
Servo motors have three wires: power, ground, and signal. The power wire is typically red, and can be connected to the 5V pin on the Arduino board. The ground wire is typically black or brown and should be connected to a ground pin on the Arduino board. The signal pin is typically yellow, orange or white and should be connected to the pins attached in your sketch. You probably need to use an external power supply for more than one or two servos, don’t forget to connect the ground of the power supply to Arduino and servo grounds.
Note that write() expects parameters as an angle from 0 to 180 writeMicroseconds() expects values as microseconds.
The standard Arduino servo examples will work unchanged with this library. The following code demonstrates how to position 12 servos according to the voltage on potPin:
#include <MegaServo.h> #define NBR_SERVOS 12 // the number of servos, up to 48 for Mega, 12 for other boards #define FIRST_SERVO_PIN 2 MegaServo Servos[NBR_SERVOS] ; //); } | http://arduino.cc/playground/Code/MegaServo | crawl-003 | refinedweb | 256 | 66.98 |
pkg_resources fails in buildout with two-level namespace packages
I'm trying to run "buildout init" on a system where zc.buildout and distribute are already installed, and I get a traceback when pkg_resources tries to declare peak.utils as a namespace package : {{{ 2692, in <module> add_activation_listener(lambda dist: dist.activate()) File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 668, in subscribe callback(dist) File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2692, in <lambda> add_activation_listener(lambda dist: dist.activate()) File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2196, in activate map(declare_namespace, self._get_metadata('namespace_packages.txt')) File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1777, in declare_namespace import(parent) ImportError: No module named peak An error occurred when trying to install distribute 0.6.16. Look above this message for any errors that were output by easy_install. }}}
peak and peak.utils are namespace packages, and I found a way to fix it : I check if the parent is not a namespace package itself before trying to import it. Here's the patch : {{{ --- /usr/lib/python2.7/site-packages/pkg_resources.py.orig 2011-05-13 06:44:46.165418517 +0200 +++ /usr/lib/python2.7/site-packages/pkg_resources.py 2011-05-13 07:50:14.827284238 +0200 @@ -1773,11 +1773,12 @@ if '.' in packageName: parent = '.'.join(packageName.split('.')[:-1]) declare_namespace(parent) - import(parent) - try: - path = sys.modules[parent].path - except AttributeError: - raise TypeError("Not a package:", parent) + if parent not in _namespace_packages: + import(parent) + try: + path = sys.modules[parent].path + except AttributeError: + raise TypeError("Not a package:", parent)
# Track what packages are namespaces, so when new path items are added, # they can be updated
}}}
With this, buildout init works fine, and I did not see regressions anywhere. Does it look OK to you ?
Looks good, please write a test and commit this
Thx
Don't try to import the parent of a namespace package in declare_namespace -- fixes
#204
→ a3f0d30e94c2
Your patch is, ironically enough, breaking my buildout.
Specifically, it's causing the minitage.recipe.cmmi recipe to break. Log:
My buildout.cfg:
And an excerpt from the cmmi package's setup (it looks perfectly valid, I think)
Yes this breakage was confirmed in distutils-SIG see
Aurelien could you have a quick look ? thanks
OK, I kind of understand where it comes from. The minitage.cmmi recipe's init.py files contain more than the simple declare_namespace that is recommanded here : They added an ImportError handler :
As a result, the code worked without my patch. Now if I understand correctly, my patch bypasses the import call, but also bypasses the path assignment right under it. This may be why the import fails later on.
I kind of see where it comes from, I've reproduced it, but I'm having problems finding a proper fix, since I don't know the details of the python import system. Here's a patch that seems to be fixing it :
Does it look OK ? Does it work for you ?
Your patch appears to fix the issue (on a fresh clone of the repository) :).
Martín.
Update the child's path in declare_namespace, even if the parent is already a namespace package -- fixes
#204
→ 191f38f47256 | https://bitbucket.org/tarek/distribute/issue/204 | CC-MAIN-2015-18 | refinedweb | 537 | 59.9 |
Opened 3 years ago
Closed 3 years ago
Last modified 2 years ago
#21584 closed Uncategorized (invalid)
prefetch_related child queryset does not update on create
Description
When a child foreign key relationship has been prefetched, calling
the .create method on the queryset does not update the queryset.
I've reproduced this bug in Django 1.5.4 and Django 1.6.
How to reproduce:
models.py
from django.db import models class Parent(models.Model): pass class Child(models.Model): parent = models.ForeignKey(Parent)
In the shell:
>>> p = Parent.objects.create() >>> list(p.child_set.all()) [] >>> child = p.child_set.create() >>> list(p.child_set.all()) [<Child: Child object>] >>> >>> p2 = Parent.objects.create() >>> parents = Parent.objects.filter(pk=p2.id).prefetch_related('child_set') >>> [p2_prefetched] = parents >>> list(p2_prefetched.child_set.all()) [] >>> p2_prefetched.child_set.create() <Child: Child object> >>> list(p2_prefetched.child_set.all()) []
The last expression should return a list with one child in it, but returns an empty list instead.
Change History (3)
comment:1 Changed 3 years ago by
comment:2 Changed 3 years ago by
This is not a bug, I'm afraid.
It's very unintuitive that the child_set won't keep track of what elements are in it when it is mutated by its own methods. This seems to break the "iterable" abstraction.
There may be some small exceptions, such as when you update a FK object, and the FK ID on that object may be updated as well, but only those that can be done with no dependency tracking
Isn't that precisely the behavior that's being described here? The foreign key relationship should know that a new entry was created, because the create method was called on the child_set itself. Create should invalidate the cache, because there's no way the cache can remain valid after create is called. Couldn't create set self._results_cache to None, and self.prefetch_done to False when create is called?
comment:3 Changed 2 years ago by
Sorry, that's just how the ORM works. Objects that represent collections do not keep track of their elements, because they represent querysets i.e. queries that may or may not yet have been evaluated, not actual collections of objects. If you have:
my_objects = Foo.objects.all().filter(bar=1) list(my_objects) # evaluate query my_objects.update(bar=2)
then you will find that the 'update' has not affected anything in
my_objects - either by changing the instances, or by removing them from the collection (since they no longer match the requirement
bar=1).
In the same way,
p.child_set does not keep track of elements that are referred to. When you call
all(), it executes a query every time, (rather than tracking creates/adds/deletes etc.). If you have used
prefetch_related, however, it never executes a query when you just do
all() because it has been prefetched. This is exactly what
prefetch_related is supposed to do - the
all() will not return data to reflect what is in the DB at that moment in time, but what was in the DB when the query was first evaluated.
This is not a bug, I'm afraid. The results of DB queries that have already been run are never updated automatically by the Django ORM, and that is by design. (There may be some small exceptions, such as when you update a FK object, and the FK ID on that object may be updated as well, but only those that can be done with no dependency tracking). When you specify 'prefetch_related', you are specifying this exact behaviour i.e. the 'child_set.all()' is not lazy, but prefetched in a single query.
Changing this would really require an identity mapper, and a very fundamental change to the way the Django ORM works. | https://code.djangoproject.com/ticket/21584 | CC-MAIN-2016-50 | refinedweb | 620 | 66.64 |
OS: Win2k
Compiler: Borland Turbo C++ v 4.52
Issue:Issue:Code:
#include <stdlib.h>
#include <stdio.h>
int main(void)
{
printf("About to spawn command.com and run a DOS command\n");
system("dir");
return 0;
}
The above code compiles just fine. When I rebuild (which includes linking) I get the following error message:
Compiling TEST1.CPP:
Linking test1.exe:
Linker Warning: No module definition file specified: using defaults
Linker Error: Undefined symbol _system in module TEST1.CPP.
(Throws the same linker error for all of the "execl" series commands as well.)
I've changed the system command to system() and rebuilt. The error changes to a compile error "too few parameters in call to system(const *char)... like we would normally expect.
I have a feeling I'm missing something really simple here but, not having seen this before, I am at a loss.
Any kick in the right direction would be greatly appreciated.
Many thanks!
Dan | http://cboard.cprogramming.com/c-programming/39262-linker-error-using-system-*-*-printable-thread.html | CC-MAIN-2014-35 | refinedweb | 159 | 61.12 |
, I am looking for a good (read: high-precision values) comprehensive table of units and conversion factors to add to this. Before I add new units, however, I need to add a filter that rejects converted values if they get too large (i.e. if I ask for 1500 miles I don't want to know how much that is in millimeters).
But that's on the wishlist - for now, I needed to scratch an itch, and most every unit conversion CGI page I found out there was at least mildly painful. The too large filter is actually a bit more complex than the nobrainer too small filter. I'll update the script and add more many units when I have a few spare brain cycles (time is plentiful, but my mind is always elsewhere :-) ).
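One way such a magnitude filter could be sketched (in Python here; the bounds are arbitrary assumptions, not values from the script above):

```python
def readable(value, lo=1e-2, hi=1e5):
    """Hypothetical magnitude filter; the bounds are arbitrary choices."""
    return value == 0 or lo <= abs(value) < hi

# 1500 miles expressed in millimetres is ~2.4e9: reject it.
print(readable(1500 * 1_609_344))   # False
# 1500 miles in kilometres (~2414) is fine to show.
print(readable(1500 * 1.609344))    # True
```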
Makeshifts last the longest..
Carter's compass: I know I'm on the right track when by deleting something, I'm adding functionality
Thanks for the compliment :)
Anyone have suggestions how I should go about modularizing this?
At first I didn't see any way to meaningfully transform it into a module since it has to have a useful interface. Then I began thinking.. the following is a brainstorming transcript if you will, so bear with me and give me feedback.
I guess I should swap the inner loop of the script out into a subroutine and package it with the table. The rest of the hoohah for calling the script from the command line could ship as a script like the GET, HEAD etc. ones that come with LWP. The inner loop alone probably offers too little control; extra routines to allow asking for specific conversions should be useful. So I should probably add an interface to ask whether a unit is known. Actually, the best way to do that is probably to have an interface that returns all possible conversion targets for a given unit, which would obviously return nothing (= false in scalar context) if the unit is unknown.
The question of whether there should be an interface to add new conversions at runtime and how it should look repeatedly crossed my mind during all this, but I don't feel like it's a direction I'd want to take the module in. It's probably better if it remains something a.. "metaconstant" module, a bunch of oft needed calculations you just drop in your script and don't think about. It almost makes me think the "introspection" interface (requesting what conversion targets exist for a given unit) is overkill, but then it is probably useful even for static uses for things like maybe dynamically populating dropdowns or such.
If I'm gonna go there, maybe there should be plaintext names and/or descriptions for the units supported.. on second thought, that would require translation and all the countless headaches it brings along, which is definitely farther out than I want to go. It would require an extra interface for the language selection and querying the available languages too, and people will probably still have to reimplement those themselves if it doesn't happen to support their language. I could include a lot of languages, but neither do I know who I'd ask for translations, nor would I be willing to put in the effort to maintain all of that. And it would probably be useless bloat for the vast majority of users. Maybe as an addon module ::Descriptions or something, should the interest in this module ever warrant that amount of work.
So I have a module containing the conversion table, a routine for one-shot conversions, one for broadside salvo conversions (calculate any, all and every related unit you can get to), and one to ask whether a unit is known and what conversion targets it has, if so.
Then the query routine should probably differentiate between all available direct conversion targets that can be reached via the one-shot routine and the full list of related units you can get via the broadside converter.
Maybe there should be a single unit to single unit conversion routine which does not care whether a direct conversion is possible or intermediate conversions have to be done. But that would be complex - choosing the conversion steps such that you get from the source to the destination in the shortest possible way - or even at all - is far from trivial. It is simpler to just bruteforce a broadside conversion and pluck the result out of it. But the user can do that too, esp if the broadside conversion function returns its results in a convenient format. There's no point in adding garden decoration to the module.
The most convenient format is probably to return either a hashref or flat list according to the context.
...
Ok, I'm done. Suggestions, anyone?
The interface I would want would be something like:
my @new= ConvertTo( $toUnits, $fromUnits, @old );
which would convert the numbers in @old from $fromUnits to $toUnits and return the results. So:
# <--------- inches ---------><----- ft ----->
my @inches= ConvertTo( 'in', 'ft', 1, 2, 3 );
# @inches is now ( 12, 24, 36 )
If I want to do a lot of conversions but not all at once, then:
my $toInFromFt= ConvertTo( 'in', 'ft' );
while( <IN> ) {
chomp;
print OUT $toInFromFt->($_), $/;
}
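tye's dual calling convention (convert immediately when values are supplied, otherwise hand back a reusable converter) can be sketched in Python; the two-entry factor table is a made-up stand-in:

```python
# Hypothetical two-entry factor table; a real module would ship many units.
FACTORS = {("in", "ft"): 12.0, ("ft", "in"): 1 / 12.0}

def convert_to(to_units, from_units, *values):
    k = FACTORS[(to_units, from_units)]
    if values:                        # eager form: convert right away
        return [v * k for v in values]
    return lambda v: v * k            # curried form: a reusable converter

print(convert_to("in", "ft", 1, 2, 3))   # [12.0, 24.0, 36.0]
to_in_from_ft = convert_to("in", "ft")
print(to_in_from_ft(5))                   # 60.0
```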
I'd probably put the unit data after __DATA__ so you could just append more units to be able to support them.
The "all possible conversions" is interesting for interactive exploration, but I don't see nearly as much use for it as requesting a specific conversion.
For finding the conversion, I'd look for a direct conversion, but if there isn't one, form the list of all possible conversions and then repeat for each of those:
my %table;
for my $row ( @table ) {
    my( $from, $to, $conv, $rev )= @$row;
    if( exists $table{$from}{$to} ) {
        warn "Duplicate conversions to $to from $from.\n";
    }
    $table{$from}{$to}= $conv;
    if( $rev ) {
        if( exists $table{$to}{$from} ) {
            warn "Duplicate conversions to $from from $to.\n";
        }
        $table{$to}{$from}= $rev;
    }
}
# Handle reverse conversions when a better one isn't provided:
for my $row ( @table ) {
    my( $from, $to, $conv )= @$row;
    if( ! ref($conv) && ! exists $table{$to}{$from} ) {
        $table{$to}{$from}=
            sub { $_[0] / $conv };
    }
}
sub FindConversionPathTo {
    my( $dest, $source )= @_;
    my @sol;
    my %source= ( $source => "$source " );
    while( ! @sol ) {
        my $known= keys %source;
        for my $src ( keys %source ) {
            if( exists $table{$src}{$dest} ) {
                $source{$src} .= "$dest ";
                push @sol, $src;
            } else {
                for my $next ( keys %{ $table{$src} } ) {
                    if( ! exists $source{$next} ) {
                        $source{$next}=
                            $source{$src} . "$next ";
                    }
                }
            }
        }
        return if ! @sol && $known == keys %source;  # no progress: no path
    }
    # Pick one of the solutions at random: (:
    return split ' ', (@source{@sol})[rand @sol];
}
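The search sketched above is a breadth-first search over the graph of direct conversions. A Python rendering of the same idea, with a small hypothetical table:

```python
from collections import deque

# Hypothetical direct-conversion table: TABLE[src][dst] is the multiplier.
TABLE = {
    "ft": {"in": 12.0, "yd": 1 / 3.0},
    "yd": {"m": 0.9144},
}
# Fill in reverse conversions where only one direction was given.
for src in list(TABLE):
    for dst, k in list(TABLE[src].items()):
        TABLE.setdefault(dst, {}).setdefault(src, 1.0 / k)

def find_path(src, dst):
    """Breadth-first search for the shortest chain of direct conversions."""
    seen = {src: [src]}
    queue = deque([src])
    while queue:
        unit = queue.popleft()
        if unit == dst:
            return seen[unit]
        for nxt in TABLE.get(unit, {}):
            if nxt not in seen:
                seen[nxt] = seen[unit] + [nxt]
                queue.append(nxt)
    return None                      # the units are not connected at all

print(find_path("in", "m"))          # ['in', 'ft', 'yd', 'm']
```

Because BFS explores by path length, the first path found is a shortest one, which sidesteps the "pick one at random" step.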
I haven't looked at your code in detail yet, nor tried to use the interface you've got already, specifically so that I would not be influenced.
My first thought on how I would like to use a conversions module is that I would pass the source and destination units and it would create a named sub in my package namespace (like use constant does).
In use, it might look something like this:
# if (units) specified after text unit description
# (which should understand most normal abbrevs.)
# then if the input is a string it is inspected for units,
# and the conversion done in the appropriate direction
# If the input is purely numeric (if ONLY Perl_looks_like_number() was accessible!)
# then the conversion is in the direction specified by the order of declaration-time parameters.
use Units::Convert FT_IN_2_MTRS => q[ft(')inches(") meters(m)];
print FT_IN_2_MTRS q{5'10"}; # prints '1.778m'
print FT_IN_2_MTRS 5.8333; # prints 1.778
# No (units) specified on declaration, input must be numeric, conversion works in 1 direction only.
use Units::Convert MPH_2_KPH => q[mph kph];
print MPH_2_KPH 70; # prints 112
print MPH_2_KPH '70mph'; # Causes warn or die
my @limits = qw(30 40 50 60 70);
print "@{[ MPH_2_KPH @limits ]}"; # prints 48 64 80 96 112
# An extension would be for the user to supply a sprintf-style format string
# that is used for the formatting/precision of the output.
# Once we get string .v. numeric contexts, the sub could determine when to append the units or not
use Units::Convert CI_2_CC => 'inch^3(%.2f ci) cm^3(%.f cc)';
print CI_2_CC 500; # prints 8194
print CI_2_CC '500 ci'; # prints '8194 cc'
# If an intermediate conversion is required, this could be specified on the declaration
# I'm not sure this is a good example, but it's the one that came to mind.
use Units::Convert UK_2_METRIC_WEIGHTS => 'stones(st)_pounds(lbs) lbs kilograms(%.1f kilos)';
print UK_2_METRIC_WEIGHTS '11st 7lbs'; # prints '73.0 kilos'
print UK_2_METRIC_WEIGHTS 11.5; # prints 73.0
print UK_2_METRIC_WEIGHTS '11.5_'; # prints '73.0 kilos' maybe?
# The presence of an underscore forces output formatting (if supplied)?
Final thought on the precision and under/overflow thing. Perhaps, if a flag is set, the routines could return BigInt/Floats if the standard precisions will cause accuracy loss? I haven't thought that through, so I don't know what the implications are.
Now I'll read your code and see if I'm completely off-base, but I like to look at things from my own perspective first when I can :^).
If you decide not to go ahead with the module, let me know and I will.
Examine what is said, not who speaks.
How about putting the info from @table into an XML format and then reading it in when the module is loaded? You could also add an option to update the XML tables from an internet site.
Varnish is an HTTP accelerator designed for content-heavy websites and highly consumable APIs. You can easily spin up a Varnish server on top of your Azure Web Apps to boost your website's performance. Varnish can cache web pages and serve content to your website users blazing fast. This blog post shows you how to install and configure Varnish, with sample configuration files.
Step 1: Create a cloud service using Linux virtual machine on Azure
First, you need to setup a cloud service with a Linux virtual machine, click here for details. For most web apps a single VM is sufficient. However, if you need a failure resilient front end cache, I recommend using at least two virtual machines on your cloud service. For the purpose of this blog post, I will be using Ubuntu LTS.
Step 2: Install Varnish on all VMs
It is recommended to use Varnish packages provided by varnish-cache.org. The only supported architecture is amd64 for Ubuntu LTS. For other Linux distributions, please see install instructions here. Connect to each virtual machine using PuTTY and do the following as root user:
- Add the security key [Debian and Ubuntu].
wget https://repo.varnish-cache.org/debian/GPG-key.txt
apt-key add GPG-key.txt
- Add the package URL to apt-get repository sources list.
echo "deb precise varnish-3.0" | sudo tee -a /etc/apt/sources.list
- Update the package manager and download/install Varnish Cache
apt-get update
apt-get install varnish
Step 3: Varnish configuration
The default settings do not listen on the front-facing ports 80 (HTTP) or 443 (HTTPS), so the configuration needs to be modified to use the port you need for your web app. Port 80 is the default TCP port for HTTP traffic. If you plan on using SSL with your website, you will also need to open port 443, which is the default port for HTTPS traffic.
Log in to the Azure Preview portal and select your virtual machine to add the endpoint for port 80 (HTTP) or 443 (HTTPS). This needs to be done for every virtual machine. The configuration file on Ubuntu is at /etc/default/varnish. Edit the file with your favorite editor; in this blog post I'm using nano.
nano /etc/default/varnish
The file will have a few default settings. If you scroll down, you will see a block of text defining the Varnish daemon options starting with the text DAEMON_OPTS, similar to:
DAEMON_OPTS="-a :6081 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -S /etc/varnish/secret \ -s malloc,256m"
Change the port from 6081 to 80 (HTTP) or 443 (HTTPS):
DAEMON_OPTS="-a :80 \ -T localhost:6082 \ -f /etc/varnish/default.vcl \ -S /etc/varnish/secret \ -s malloc,256m"
By default, ports 80 and 443 are blocked by the firewall, and hence you need to explicitly open them by using the iptables command.
Using iptables:
By running the following commands as root, you can open port 80 so that the VM accepts regular Web traffic on port 80.
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A OUTPUT -p tcp -m tcp --sport 80 -j ACCEPT
To allow access to secure websites you must open port 443 as well.
iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp -m tcp --sport 443 -j ACCEPT
Step 4: Modifying the default VCL file under /etc/varnish/
Varnish uses a .vcl file (default located at /etc/varnish/ as default.vcl) containing instructions written in VCL Language in order to run its program. This is used to define how Varnish should handle the requests and how the document caching system should work.
Open the editor once again to modify the contents of default.vcl (located under /etc/varnish/) by using the following command.
nano /etc/varnish/default.vcl
Create a default backend with .host and .port referring to your Azure web app. Here is a sample of basic VCL configuration file (replace my-azure-webapp.azurewebsites.net with your actual web application custom domain or azurewebsite.net domain URL). Note, if you are using Varnish 4.0 and above you need to include vcl 4.0 at the beginning of the file. To learn more about Varnish 4.0 VCL documentation click here.
vcl 4.0;

backend default {
    .host = "my-azure-webapp.azurewebsites.net";
    .port = "80";
    .connect_timeout = 600s;
    .first_byte_timeout = 600s;
    .between_bytes_timeout = 600s;
}

sub vcl_recv {
    set req.http.host = "my-azure-webapp.azurewebsites.net";
    set req.backend_hint = default;
    return (hash);
}
Troubleshooting
If you run into any issues with the Varnish server, you can view the logs by running the following command.
varnishlog
Browse your site again and look at the log in your VM. For more information, click here.
Sample VCL configuration files
- WordPress
If you are using a WordPress web app, click here to download a sample Varnish configuration for WordPress.
- Drupal
If you are using a Drupal web app, click here to download a sample Varnish configuration for Drupal. | https://azure.microsoft.com/hu-hu/blog/using-varnish-as-front-end-cache-for-azure-web-apps/ | CC-MAIN-2018-05 | refinedweb | 833 | 65.01 |
Flow Control
After this first introduction to C#, we'll examine flow control and control structures. We'll need this information to implement code that is executed only under certain circumstances.
If/Else
Conditional execution is a core component of every programming language. Just like C and C++, C# supports If statements. To see how If statements work, we've implemented a trivial example:
using System;

class Hello
{
    public static void Main()
    {
        int number = 22;

        if (number > 20)
            Console.WriteLine("if branch ...");
        else
        {
            Console.WriteLine("else branch ...");
        }
    }
}
Inside the C# program, we define an integer value. After that, the system checks whether the value is higher than 20. If the condition is true, the code inside the If block is executed. Otherwise, the Else branch is called. It's important to mention that the blocks should be marked with curly braces, but for a single statement this is not a must. Braces are normally used to make the code clearer.
When the program is called, one line is displayed:
if branch ...
As we expected, Mono called the If branch.
However, in many real-world scenarios, simple If statements are not enough. It's often useful to combine If statements. When working with Mono and C#, this is no problem:
using System;

class Hello
{
    public static int Main(String[] args)
    {
        Console.WriteLine("Input: " + args[0]);

        if (args[0] == "100")
        {
            Console.WriteLine("correct ...");
            return 0;
        }
        else if (args[0] == "0")
        {
            Console.WriteLine("not correct ...");
        }
        else
        {
            Console.WriteLine("error :(");
        }
        return 1;
    }
}
This program is supposed to tell us whether a user has entered a correct number. If 0 is passed to the program, we want a special message to be displayed. Our problem can be solved with the help of else if because it can be used to define a condition inside an If statement. The comparison operator demands some extra treatment. As you can see, we use == to compare two values with each other.
Do not use the = operator for checking whether two values are the same. The = operator is used for assigning values; it isn't an operator for comparing values. The C and C++ programmers among you already know about this subject matter.
The way data is passed to the program is important as well. The array called args contains all the values that a user passes to the script. Indexing the array starts with zero. Let's see what happens when we call the program with a wrong number:
[hs@duron mono]$ mono if.exe 23
Input: 23
error :(
In this case, a message is displayed.
Case/Switch Statements
Especially when a lot of values are involved, If statements can soon lead to unclear and hard-to-read code. In this case, working with case/switch statements is a better choice. In the next example, we see how the correct translation of a word can be found:
using System;

class Hello
{
    public static int Main()
    {
        String inp;
        String res = "unknown";

        // Reading from the keyboard
        Console.Write("Enter a value: ");
        inp = Console.ReadLine();
        Console.WriteLine("Input: " + inp);

        // Read the translation
        switch(inp)
        {
            case "Fernseher":
                res = "TV";
                break;
            case "Honig":
                res = "honey";
                break;
            case "Geschlecht":
            case "Sex":
                res = "sex";
                break;
        }
        Console.WriteLine("Result: " + res);
        return 0;
    }
}
First of all, we read a string. To fetch the values from the keyboard, we use the ReadLine method, which is part of the Console object. After reading the value, we call Console.WriteLine and display the value. Now the switch block is entered. All case statements are processed one after the other until the correct value is found.
One thing has to be taken into consideration: A case block is not exited before the system finds a break statement. This is an extremely important concept. If you use switch, case, and break cleverly, it's possible to implement complex decision trees. A good example is the words Geschlecht and Sex. In German, the words are different, but they have the same English translation. Because we do not use a break in the Geschlecht block, C# jumps directly to the Sex block where the correct word is found. In this block, a break statement is used and so the switch block is exited. Many advanced programmers appreciate this feature.
Let's compile and execute the program:
[hs@duron mono]$ mono case.exe
Enter a value: Fernseher
Input: Fernseher
Result: TV
[hs@duron mono]$ mono case.exe
Enter a value: Geschlecht
Input: Geschlecht
Result: sex
As you can see, the correct result has been found.
Case/Switch statements also provide default statements. Default values help you to define the default behavior of a block if no proper values are found. Using strings in Switch statements isn't possible in most other languages; that's a real benefit of C#.
Whilst working with ASP.NET, sometimes we need to call server-side methods asynchronously without having to post back, whether a full page postback or a partial one. Thanks to the ASP.NET team for providing the implementation of ICALLBACK.
ICALLBACK
ICALLBACK is a lightweight mechanism. It uses the well-known XMLHTTP object internally to call a server-side method; it doesn't cause a page postback, and therefore no page re-rendering, so we show the output at the client side. We need to build the output HTML ourselves and render the controls manually.
ICALLBACK is implemented in ASP.NET by using the ICALLBACKEVENTHANDLER interface, which has two methods: one is called from JavaScript (client-side code) and the other returns the result asynchronously back to the JavaScript function.
ICALLBACKEVENTHANDLER
We just need to perform some action in server-side code and return the result, but the result could be an instance of any class, which JavaScript code cannot necessarily handle easily, so here we prefer JSON, which stands for JavaScript Object Notation.
JSON is a lightweight data-interchange format. ASP.NET gives good support for JSON as well. It's rapidly being adopted because it is lightweight and easily readable by humans and machines.
Let’s first implement ICALLBACKEVENTHANDLER to call a server side method asynchronously step by step:
Implement the server-side (C#) page/control class using System.Web.UI.ICallbackEventHandler. Following are the definitions of the two methods which need to be implemented:
The RaiseCallbackEvent method is invoked through a JavaScript function:
public void RaiseCallbackEvent(string eventArgument)
{
//to do code here
}
The GetCallbackResult method is invoked automatically once processing of the RaiseCallbackEvent method is completed:
public string GetCallbackResult()
{
return "";
}
In the Page_Load or Page_Init event, the following statements are used to register the client-side methods (this is the standard registration pattern from the ICallbackEventHandler documentation):
String cbReference = Page.ClientScript.GetCallbackEventReference(this, "arg", "ReceiveServerData", "context");
String callbackScript = "function CallServer(arg, context) {" + cbReference + "; }";
Page.ClientScript.RegisterClientScriptBlock(this.GetType(), "CallServer", callbackScript, true);
CallServer(arg, context), as the name implies, is used to call/raise the server-side method RaiseCallbackEvent(string eventArgument).
ReceiveServerData(arg, context) is used to get the result, produced by GetCallbackResult(), through its arg parameter.
Callback Client Side Code
<script language="javascript" type=text/javascript>
function ReceiveServerData(arg, context)
{
alert(arg);
}
function CallSrv()
{
CallServer('get customer', '');
}
</script>
<input type=”button” value=”get customer” onclick=”CallSrv()” />
That is it. These are the steps you need in order to call server-side code and get the result back using ICALLBACK.
Now we will see some very easy steps for JSON based JavaScript serialization to return results to JavaScript in an easily parseable format.
Suppose we have the following class whose object we need to return to a JavaScript function through JavaScript serialization.
public class Customer
{
public string Name;
public int Age;
}
Declare the string jsonResult at class level, which would be used to contain the final result for returning.
In RaiseCallbackEvent, serialize the object into jsonResult, and return jsonResult from GetCallbackResult; with the sample code in both methods, the result is available within a millisecond and without a postback.
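JSON itself is language-neutral; purely to illustrate the wire format the ReceiveServerData handler will receive for a Customer-like object, here is a Python sketch with made-up values:

```python
import json

# Stand-in values for the Customer object above (hypothetical).
customer = {"Name": "Alice", "Age": 30}

json_result = json.dumps(customer)
print(json_result)                      # {"Name": "Alice", "Age": 30}

# The client-side handler can parse the string back into an object:
parsed = json.loads(json_result)
print(parsed["Name"], parsed["Age"])    # Alice 30
```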
Callback is a lightweight technique used to call server-side methods asynchronously from JavaScript without any postback and without reloading or re-rendering unnecessary parts of the page.
JSON is a lightweight data-interchange format that makes server-side class objects easily parseable by client-side code to show the output on the page.
Don't know exactly which forum to post this in... anyways, below is my problem:
1) I have a project. (obviously)
2) In this project I have a "Resource.h" that has a series of #defines
3) I include a header & cpp from a completely different directory and in these files they need to know about some of the #define(s) in the project's "Resource.h".
Problem is, when I try to compile/build the sucker it complains with "error C2065: 'IDD_CONTROL': undeclared identifier" for the externally included header & cpp. I know there is a way to get this to work without having to include "Resource.h", as I've seen a project do it, but even after staring at it for 30 minutes comparing settings I can't figure out what they are doing that I'm not.
Specifically including in the external header:
#include "..\project folder\project\Resource.h" <-- this works, but this external header & cpp are shared among multiple projects so I don't want to do that.
Any ideas? I'm using Visual Studio's IDE. | http://cboard.cprogramming.com/windows-programming/103112-externally-included-file-project-level-sharpdefine.html | CC-MAIN-2013-48 | refinedweb | 181 | 63.19 |