I am finding myself a little confused about how Python imports. Would the following be a correct statement:
In any Python interactive shell, if the shell (since it was initiated or opened) has never before imported "myScript.py", then if the user types in "myScript" he will get an error like: NameError: name 'myScript' is not defined. At this point, entering the command "import myScript" will import myScript.py from the native OS disk file system, i.e. from a Windows folder or Unix directory, based upon a search path directed by "sys.path". Even if "myScript" is defined or has been imported in some other currently existing Python shell, the shell we are currently in will import from disk and not from the other shell.
The reason I ask is, the Blender 3D application I am working with comes with a built-in Python shell. Blender can run continuously while the user chooses to open, close, and re-open the built-in Python shell. If, after importing "myScript" into a shell, I close the shell, make some changes to myScript.py through an external editor, and re-open a new Python shell, the definition for "myScript" is no longer there (NameError: name 'myScript' is not defined). But when I import myScript, it imports the old version, so apparently Blender has saved the old version somewhere and is allowing the Python shell to reload it. This violates the Python rule that if a module is not currently defined in the namespace, "import myScript" should always import from the native OS file system, never from some previously loaded module, correct?
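The behavior in question can be demonstrated in any Python interpreter: import consults the sys.modules cache before searching sys.path, so a second import of an already-loaded module never re-reads the disk, and importlib.reload forces the re-read. A minimal sketch, using the standard-library json module as a stand-in for myScript:

```python
import sys
import importlib

# First import: Python searches sys.path, loads the file from disk,
# and caches the resulting module object in sys.modules.
import json

# A second "import json" would be a cache hit: no disk access happens,
# the already-loaded module object is simply bound again.
print("json" in sys.modules)   # True

# The cache lives as long as the embedding interpreter does. If Blender
# keeps one interpreter alive across shell windows, sys.modules (not the
# shell's namespace) persists, so a re-import returns the stale module.
# Forcing a fresh read from disk requires an explicit reload:
importlib.reload(json)
```

This is consistent with what is described above: closing the shell clears the namespace, but the embedding application's interpreter, and with it the module cache, keeps running.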
mister-fett
I am new to C and C++, but I wrote this calculator in C++. It compiles okay on Borland 5.5, and can do addition, subtraction, multiplication, and division of two numbers. If you type any characters except numbers, +, -, *, /, or e, there will be lots of beeping noises and error messages!
Please post any ways I can improve this code or my programming in general!
(Additions such as square roots, etc. will be appreciated)
*****Planned changes: Add a constant for pi, add squared and square root, allow calculation of roman numerals.*****
/* Program: Fett Calculator
   File: C:\borland\bcc55\bin\cpp\calc
   Function: Calculator, very simple
   Author: Mister-Fett
   Revision: Version 1.0, distributed 9/17/04 */

#include <iostream.h>
#include <conio.h>

int mc(int x, int y) // Multiply two numbers
{
    cout << "\n\n" << x << " times " << y << " equals ";
    return (x * y);
}

int ac(int a, int b) // Add two numbers
{
    cout << "\n\n" << a << " plus " << b << " equals ";
    return (a + b);
}

int sc(int z, int c) // Subtract two numbers
{
    cout << "\n\n" << z << " minus " << c << " equals ";
    return (z - c);
}

int dc(int o, int t) // Divide two numbers
{
    cout << "\n\n" << o << " divided by " << t << " equals ";
    return (o / t);
}

void calc(char choice)
{
    int on, tw, thr;
    // This whole block checks what the user wants to calculate,
    // and refers to the proper routine to calculate it.
    if (choice == '+')
    {
        cout << "You selected " << choice << ". Please enter two numbers,\nseparated by spaces, ";
        cout << "that you want to add." << endl; // print instructions for the user
        cin >> on;        // Get the value of variable on
        cin >> tw;        // Get the value of variable tw
        thr = ac(on, tw); // Get the sum of on and tw, and assign that value to thr
        cout << thr << "\n\n\n\aThanks for using my calculator!"; // Print a thank-you message
    }
    else if (choice == '-')
    {
        cout << "You selected " << choice << ". Please enter two numbers,\nseparated by spaces, that you want to subtract." << endl;
        cin >> on;
        cin >> tw;
        thr = sc(on, tw);
        cout << thr << "\n\n\n\aThanks for using my calculator!";
    }
    else if (choice == '*')
    {
        cout << "You selected " << choice << ". Please enter two numbers,\nseparated by spaces, that you want to multiply." << endl;
        cin >> on;
        cin >> tw;
        thr = mc(on, tw);
        cout << thr << "\n\n\n\aThanks for using my calculator!";
    }
    else if (choice == '/')
    {
        cout << "You selected " << choice << ". Please enter two numbers,\nseparated by spaces, that you want to divide." << endl;
        cin >> on;
        cin >> tw;
        thr = dc(on, tw);
        cout << thr << "\n\n\n\aThanks for using my calculator";
    }
    else
    {
        cout << "\nPlease reenter that value.\n\a";
        cin >> choice;
        calc(choice);
    }
}

int main()
{
    clrscr();
    char choice = ' ';
    while (choice != 'e')
    {
        cout << "\nPlease enter +, -, *, or / and then two numbers,\nseparated by spaces, that you wish to\nadd, subtract, multiply, or divide.\nType e and press enter to exit.";
        cin >> choice;
        if (choice != 'e')
        {
            calc(choice);
        }
    }
    return 0;
}
To compare running with and without threading, there are three routers, R1 to R3; the task is to save the output of show version from each router to all.txt.
Do the tasks without threading
The script runs sequentially on each device, and the total time is 15 seconds.
Do the tasks with threading
The time taken to complete the tasks for three routers is 5 seconds.
When is threading used?
Threading should be used when a task requires a wait before another task can start, and the tasks do not require high CPU usage. Threading fits the requirements of this scenario.
Connecting to a router and writing the data to the file are blocking tasks that must finish before the next task of the same kind can start and run.
With threading, each connection is opened, the command executed, and the resulting data saved to the file at almost the same time; this is known as concurrency. However, all of these near-concurrent tasks run within a single process.
Synchronization primitive
Because the file all.txt is a shared resource, as all threads need to save their data to the same file, a lock is required to ensure the data saved to the file is not corrupted: a thread acquires the lock, opens the file, and writes its contents; after the write is finished the lock is released so another thread can run the same task.
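As a minimal sketch of this idea (the values appended below are stand-ins for the router output), the essential point is that every thread must share the same Lock object for the protection to work:

```python
import threading

write_lock = threading.Lock()   # one lock, shared by every thread
results = []                    # stand-in for the shared all.txt file

def save(line):
    # Simulate the "write to a shared resource" step: only one
    # thread at a time may append, so the data cannot interleave.
    with write_lock:            # acquire the lock, release it on exit
        results.append(line)

threads = [threading.Thread(target=save, args=(f"R{i}",)) for i in range(1, 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))          # ['R1', 'R2', 'R3']
```

Using the lock as a context manager (`with write_lock:`) is equivalent to the acquire/release pair used in the scripts below, but releases the lock even if the write raises an exception.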
crypto.py
This script contains functions for creating a symmetric key, for encrypting and decrypting contents, and for saving the encrypted contents to a file.
from cryptography.fernet import Fernet
from os.path import exists
import json


# This checks the existence of enc.key,
# which is the symmetric key for encryption and decryption.
# If there is no key, create one; but previously encrypted data
# will then be locked forever.
def check_key():
    if not exists("enc.key"):
        key = Fernet.generate_key()
        with open("enc.key", "wb") as file:
            file.write(key)


# This encrypts the data and saves it to a file.
def encrypt(filename, data):
    key_byte = read_key()
    key = Fernet(key_byte)
    ciphertext = key.encrypt(data.encode('utf-8'))
    with open(filename, "wb") as cred:
        cred.write(ciphertext)


# This decrypts the data retrieved from the encrypted file.
# If convert_dict is False, return the string.
# If convert_dict is True, convert the string to a dictionary.
def decrypt(filename, convert_dict=False):
    key_byte = read_key()
    key = Fernet(key_byte)
    with open(filename, "rb") as file:
        ciphertext = file.read()
    # plaintext is the string decoded from the byte slice.
    plaintext = key.decrypt(ciphertext).decode('utf-8')
    if convert_dict:
        # A string cannot be converted to a dict directly:
        # first parse the string as JSON,
        # then build a dict() object from the parsed result.
        return dict(json.loads(plaintext))
    else:
        return plaintext


# Use the key; the key is a file.
# Get the key from the file and load it in memory.
def read_key():
    check_key()
    with open("enc.key", "rb") as file:
        key_byte = file.read()
    return key_byte
network_threads.py: a subclass of threading.Thread
network_threads.py subclasses the threading.Thread class in order to customize the run method, which executes the task of connecting to a router and saving the contents to the file.
from netmiko import ConnectHandler
import threading
from time import sleep

# Create one lock, shared by all threads, for access to the shared resource.
# (The lock must live outside the threads: a per-thread lock would not
# synchronize anything.)
run_lock = threading.Lock()


class SaveNetworkData(threading.Thread):
    def __init__(self, config):
        # It could also be threading.Thread.__init__(self);
        # super() refers to the parent class, and calling it
        # is a requirement when subclassing threading.Thread.
        super().__init__()
        # The class parameter takes the config dictionary,
        # which will be passed into netmiko.
        self.config = config

    # The method has to be named run(self), else it will not work.
    def run(self):
        # This pause of 0.2s ensures there is a short period of time
        # before the next thread starts.
        sleep(0.2)
        with ConnectHandler(**self.config) as conn:
            print(f"Connected to {self.config['ip']}")
            # This gets the prompt of the cisco router, such as R1# or R1>.
            hostname = conn.find_prompt()
            print(f"Got the prompt of {self.config['ip']}")
            result = conn.send_command("show version")
            print(f"Got the show version of {self.config['ip']}")
            print("Preparing data and acquiring write lock...")
            # Concatenate all data into one data object.
            # hostname[:-1] gets the hostname without the > or #.
            data = hostname[:-1] + "\n" + 10 * "*" + "\n" + result + "\n"
            # Get the lock to access the console to print the message
            # and open the all.txt file to write the data into it.
            run_lock.acquire()
            print(f"Write lock acquired by thread for {self.config['ip']}")
            print(f"Append data extracted from {self.config['ip']} to all.txt")
            with open("all.txt", "a") as file:
                file.write(data)
            print("Done appending, releasing write lock...")
            # Release the lock after the data is saved.
            run_lock.release()
config.json
This is the base config required for netmiko.
{
"ip": "",
"device_type": "cisco_ios"
}
main.py
The main.py script performs the entire task.
from network_threads import SaveNetworkData
from security.crypto import decrypt
import json
from time import time
from os.path import exists
from os import remove

devices = [
    "192.168.1.32",
    "192.168.1.31",
    "192.168.1.30"
]

config_list = list()
threads = list()

if __name__ == "__main__":
    if exists("all.txt"):
        remove("all.txt")
    start = time()
    # cred is a dictionary of username and password.
    cred = decrypt("cred", convert_dict=True)
    with open("devices/config.json", "r") as j:
        config = dict(json.load(j))
    # Add the username and password dictionary to the existing config dictionary.
    config.update(cred)
    for device in devices:
        # Copy the config dict to a tmp object,
        # else every appended entry would point at the same dict.
        tmp = config.copy()
        tmp["ip"] = device
        # Add the per-device dictionary to the list.
        config_list.append(tmp)
    for cfg in config_list:
        t = SaveNetworkData(cfg)
        threads.append(t)
        t.start()
    for thread in threads:
        thread.join()
    print("All threads finished. Time executed: {} seconds.".format(time() - start))
[deal.II] Memory loss in system solver
Dear community I have written the simple code below for solving a system using PETSc, having defined Vector incremental_displacement; Vector accumulated_displacement; in the class LargeStrainMechanicalProblem_OneField. It turns out that this code produces a memory loss, quite significant
Re: [deal.II] KDTree implementation error
KDTree needs nanoflann to be available. Did you compile deal.II with nanoflann exnabled? Check in the summary.log if DEAL_II_WITH_NANOFLANN is ON. RTree, on the other hand, does not require nanoflann, as it is included with boost (and it is faster than nanoflann). L. > On 24 Jul 2020, at
Re: [deal.II] KDTree implementation error
Dear Heena, here is a snippet to achieve what you want: #include namespace bgi = boost::geometry::index; … Point<2> p0; const auto tree = pack_rtree(tria.get_vertices()); for (const auto : tree | bgi::adaptors::queried(bgi::nearest(p0, 3))) // do something with p
Re: [deal.II] Cannot find local compiled petsc library
Yuesun, Apparently, CMake was able to find the file petscvariables, but not the include directories or the library. Can you search for "libpetsc.so" yourself? Our CMake find module tries to find this library in {PETSC_DIR}/lib or {PETSC_DIR}/lib64. See if you can adjust PETSC_DIR accordingly.
Re: [deal.II] Accessing nodal values of a FEM solution
On Thu, Jul 23, 2020 at 5:14 PM Daniel Arndt wrote: > > You can do similarly, > > Quadrature q(fe.get_unit_support_points()); > FEValues fe_values (..., q, update_q_points); > for (const auto& cell) > ... > points = fe_values.get_quadrature_points(); >
Re: [deal.II] KDTree implementation error
Dear Luca, Thank you very much. It now works in both ways. Thanks for advice. Regards, Heena On Fri, Jul 24, 2020 at 12:31 PM luca.heltai wrote: > KDTree needs nanoflann to be available. Did you compile deal.II with > nanoflann exnabled? Check in the summary.log if
[deal.II] Re: Memory loss in system solver
Dear community, if I am not mistaking my analysis, it turned out that the memory loss is caused by this call: BiCG.solve (this->system_matrix, distributed_incremental_displacement, this->system_rhs, preconditioner); because if I turn it off the top command shows no change in the RES at all.
Re: [deal.II] Cannot find local compiled petsc library
Dear Daniel, Thank you for the instruction! I gave the architecture directory, which is a sub-directory : /home/yjin6/petsc/arch-linux-c-debug. It returns message like this: ***
Re: [deal.II] Re: Memory loss in system solver
Alberto, Have you tried running valgrind (in parallel) on your code? Admittedly, I expect quite a bit of false-positives from the MPI library but it should still help. Best, Daniel Am Fr., 24. Juli 2020 um 12:07 Uhr schrieb Alberto Salvadori < alberto.salvad...@unibs.it>: > Dear community, > >
Re: [deal.II] Cannot find local compiled petsc library
Dear all, This problem has been solved. I copied the petscversion.h file to the arch/include folder therefore cmake found all petsc files and finished compilation. Best regards On Fri, Jul 24, 2020 at 3:17 PM yuesu jin wrote: > Dear Daniel, > Thank you for the instruction! I gave the
Re: [deal.II] Accessing nodal values of a FEM solution
On 7/23/20 12:07 PM, Xuefeng Li wrote: Well, the above function calculates the gradients of a finite element at the quadrature points of a cell, not at the nodal points of a cell. Such a need arises in the following situation. for ( x in vector_of_nodal_points ) v(x) = g(x, u(x), grad
Re: [deal.II] Location for Boundary Condition Application
On 7/23/20 11:47 AM, Daniel Arndt wrote: McKenzie, I'm interested in applying a non-homogeneous Dirichlet boundary condition to a specific edge. However, I'm unsure how to identify or specify a particular edge or face to add the boundary condition to. Could you help clear this
Re: [deal.II] Memory loss in system solver
On 7/24/20 3:32 AM, Alberto Salvadori wrote: It turns out that this code produces a memory loss, quite significant since I am solving my system thousands of times, eventually inducing the run to fail. I am not sure what is causing this issue and how to solve it, maybe more experienced users
Re: [deal.II] Re: KDTree implementation error
Dear Luca, I am using 9.2 version and the implementation I try to follow from your presentation at SISSA 2018 but it gives me error. Following are the lines I added to step-1. I want to implement K nearest neighbor. I will work on your suggestion. *#include *
Hi all!
I have the problem that I have an object model with not-so-many entities but they have quite complex relationships with each other and a change to one entity can result in the auto-creation of quite a few others... at least that's how its supposed to work.
Now the problem occurs that it is difficult saving the entities (because its difficult to know which ones got created).
Generally, I have two options now: either cascading a lot, which I fear makes persisting slow, or having entities save "themselves", probably even in the constructor.
The question is: Can an EJB3 entity access the EntityManager? With @Inject? Can it then persist "itself"? in the constructor?
Any hints very welcome!
kind regards,
Messi
I don't think that this is a good idea.
Why don't you simply write a Home-Interface like PersonHome for an entity called Person which has all the logic:
@Stateless
public class PersonHome {

    private @Inject EntityManager em;

    public Person persist( Person p ) {
        // logic before persist
        em.persist( p );
        // logic after persist
        return p;
    }
}
EJB3 entities cannot receive injection so @Inject is forbidden in them.
Do not assume cascading is slow until you actually see that it is.
STAT 39000: Project 9
Start by reading in your train.csv data into tensors called x_train and y_train.
import pandas as pd
import torch

dat = pd.read_csv("/depot/datamine/data/sim/train.csv")
x_train = torch.tensor(dat['x'].to_numpy())
y_train = torch.tensor(dat['y'].to_numpy())
In the previous project, we estimated the parameters of our regression model using a closed form solution. What does this do? At the heart of the regression model, we are minimizing our loss. Typically, this loss is the mean squared error (MSE). The formula for MSE is:
$MSE = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \hat{Y_i})^2$
Using our closed form solution formulas, we can calculate the parameters such that the MSE is minimized over the entirety of our training data. This time, we will use gradient descent to iteratively calculate our parameter estimates!
By plotting our data, we can see that our data is parabolic and follows the general form:
$y = \beta_{0} + \beta_{1} x + \beta_{2} x^{2}$
If we substitute this into our formula for MSE, we get:
$MSE = \frac{1}{n} \sum_{i=1}^{n} ( Y_{i} - ( \beta_{0} + \beta_{1} x_{i} + \beta_{2} x_{i}^{2} ) )^{2} = \frac{1}{n} \sum_{i=1}^{n} ( Y_{i} - \beta_{0} - \beta_{1} x_{i} - \beta_{2} x_{i}^{2} )^{2}$
The first step in gradient descent is to calculate the partial derivatives with respect to each of our parameters: $\beta_0$, $\beta_1$, and $\beta_2$.
These derivatives will let us know the slope of the tangent line for the given parameter with the given value. We can then use this slope to adjust our parameter, and eventually reach a parameter value that minimizes our loss function. Here is the calculus.
$\frac{\partial MSE}{\partial \beta_0} = \sum_{i=1}^{n}\frac{\partial MSE}{\partial \hat{y_i}} \cdot \frac{\partial \hat{y_i}}{\partial \beta_0}$

$\frac{\partial MSE}{\partial \hat{y_i}} = \frac{2}{n}(\hat{y_i} - y_i)$

$\frac{\partial \hat{y_i}}{\partial \beta_0} = 1$

$\frac{\partial MSE}{\partial \beta_1} = \sum_{i=1}^{n}\frac{\partial MSE}{\partial \hat{y_i}} \cdot \frac{\partial \hat{y_i}}{\partial \beta_1}$

$\frac{\partial \hat{y_i}}{\partial \beta_1} = x_i$

$\frac{\partial MSE}{\partial \beta_2} = \sum_{i=1}^{n}\frac{\partial MSE}{\partial \hat{y_i}} \cdot \frac{\partial \hat{y_i}}{\partial \beta_2}$

$\frac{\partial \hat{y_i}}{\partial \beta_2} = x_i^2$
If we clean things up a bit, we can see that the partial derivatives are:
$\frac{\partial MSE}{\partial \beta_0} = \frac{-2}{n}\sum_{i=1}^{n}(y_i - \beta_0 - \beta_1 x_i - \beta_2 x_i^2) = \frac{-2}{n}\sum_{i=1}^{n}(y_i - \hat{y_i})$

$\frac{\partial MSE}{\partial \beta_1} = \frac{-2}{n}\sum_{i=1}^{n}x_i(y_i - \beta_0 - \beta_1 x_i - \beta_2 x_i^2) = \frac{-2}{n}\sum_{i=1}^{n}x_i(y_i - \hat{y_i})$

$\frac{\partial MSE}{\partial \beta_2} = \frac{-2}{n}\sum_{i=1}^{n}x_i^2(y_i - \beta_0 - \beta_1 x_i - \beta_2 x_i^2) = \frac{-2}{n}\sum_{i=1}^{n}x_i^2(y_i - \hat{y_i})$
Pick 3 random values, 1 for each parameter: $\beta_0$, $\beta_1$, and $\beta_2$. For consistency, let's try 5, 4, and 3 respectively. These values will be our random "guess" as to the actual values of our parameters. Using those starting values, calculate the partial derivative for each parameter.
Okay, once you have your 3 partial derivatives, we can update our 3 parameters using those values! Remember, those values are the slope of the tangent line for each of the parameters for the corresponding parameter value. If by increasing a parameter value we increase our MSE, then we want to decrease our parameter value as this will decrease our MSE. If by increasing a parameter value we decrease our MSE, then we want to increase our parameter value as this will decrease our MSE. This can be represented, for example, by the following:
$\beta_0 = \beta_0 - \frac{\partial MSE}{\partial \beta_0}$
This will however potentially result in too big of a "jump" in our parameter value — we may skip over the value of $\beta_0$ for which our MSE is minimized (this is no good). In order to "fix" this, we introduce a "learning rate", often shown as $\eta$. This learning rate can be tweaked to either ensure we don’t make too big of a "jump" by setting it to be small, or by making it a bit larger, increasing the speed at which we converge to a value of $\beta_0$ for which our MSE is minimized, at the risk of having the issue of over jumping.
$\beta_0 = \beta_0 - \eta \frac{\partial MSE}{\partial \beta_0}$
Update your 3 parameters using a learning rate of $\eta = 0.0003$.
Code used to solve this problem.
Output from running the code.
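A sketch of how these by-hand calculations might look in code (the data values below are stand-ins, since the real x_train and y_train come from train.csv):

```python
import torch

# Stand-in data; in the project these come from train.csv.
x_train = torch.tensor([0.0, 1.0, 2.0, 3.0])
y_train = torch.tensor([5.1, 12.3, 24.7, 44.2])   # roughly 5 + 4x + 3x^2

b0, b1, b2 = 5.0, 4.0, 3.0   # the starting "guesses"
lr = 0.0003                  # learning rate (eta)

# Forward pass with the current parameter values.
y_hat = b0 + b1 * x_train + b2 * x_train**2

# Partial derivatives of the MSE with respect to each parameter.
grad_b0 = (-2 * (y_train - y_hat)).mean()
grad_b1 = (-2 * x_train * (y_train - y_hat)).mean()
grad_b2 = (-2 * x_train**2 * (y_train - y_hat)).mean()

# One gradient-descent update for each parameter.
b0 = b0 - lr * grad_b0
b1 = b1 - lr * grad_b1
b2 = b2 - lr * grad_b2

print(b0.item(), b1.item(), b2.item())
```

Each update moves the parameter a small step against its gradient; with this tiny made-up dataset the gradients are small because the starting guesses are already close.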
Question 2
Woohoo! That was a lot of work for what ended up being some pretty straightforward calculations. The previous question represented a single epoch. You can define the number of epochs yourself; the idea is that hopefully, after all of your epochs, the parameters will have converged, leaving you with the parameter estimates you can use to calculate predictions!
Write code that runs 10000 epochs, updating your parameters as it goes. In addition, include code in your loops that prints out the MSE every 100th epoch. Remember, we are trying to minimize our MSE — so we would expect that the MSE decreases each epoch.
Print the final values of your parameters — are the values close to the values you estimated in the previous project?
In addition, approximately how many epochs did it take for the MSE to stop decreasing by a significant amount? Based on that result, do you think we could have run fewer epochs?
Code used to solve this problem.
Output from running the code.
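One possible shape for this loop, again with stand-in data in place of the real train.csv tensors:

```python
import torch

x_train = torch.tensor([0.0, 1.0, 2.0, 3.0])
y_train = torch.tensor([5.1, 12.3, 24.7, 44.2])

b0, b1, b2 = 5.0, 4.0, 3.0
lr = 0.0003

for epoch in range(10000):
    # Forward pass.
    y_hat = b0 + b1 * x_train + b2 * x_train**2
    error = y_train - y_hat
    mse = (error**2).mean()

    if epoch % 100 == 0:
        print(f"epoch {epoch}: MSE = {mse.item():.6f}")

    # Manual parameter updates using the partial derivatives.
    b0 = b0 - lr * (-2 * error).mean()
    b1 = b1 - lr * (-2 * x_train * error).mean()
    b2 = b2 - lr * (-2 * x_train**2 * error).mean()

print(b0.item(), b1.item(), b2.item())
```

With a small enough learning rate the printed MSE should decrease each epoch, rapidly at first and then more and more slowly as the parameters converge.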
Question 3
You may be thinking at this point that pytorch has been pretty worthless, and that it still doesn't make any sense how it simplifies anything. There was too much math, and we still performed a bunch of vector/tensor/matrix operations; what gives? Well, while this is all true, we haven't really utilized pytorch quite yet, but we are going to soon.
First, let’s cover some common terminology you may run across. In each epoch, when we calculate the newest predictions for our most up-to-date parameter values, we are performing the forward pass.
There is a similarly named backward pass that refers (roughly) to the step where the partial derivatives are calculated! Great.
pytorch can perform the backward pass for you, automatically, from our MSE. For example, see the following.
mse = (error**2).mean()
mse.backward()
Try it yourself!
Code used to solve this problem.
Output from running the code.
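One detail worth spelling out: for mse.backward() to compute gradients, the parameter tensors must be created with requires_grad=True. A sketch of the full pattern, with the same stand-in data as before:

```python
import torch

x_train = torch.tensor([0.0, 1.0, 2.0, 3.0])
y_train = torch.tensor([5.1, 12.3, 24.7, 44.2])

# Parameters must opt in to gradient tracking.
b0 = torch.tensor(5.0, requires_grad=True)
b1 = torch.tensor(4.0, requires_grad=True)
b2 = torch.tensor(3.0, requires_grad=True)

# Forward pass.
y_hat = b0 + b1 * x_train + b2 * x_train**2
error = y_train - y_hat

# Backward pass: autograd fills b0.grad, b1.grad, b2.grad with
# exactly the partial derivatives we derived by hand.
mse = (error**2).mean()
mse.backward()

print(b0.grad, b1.grad, b2.grad)
```

No partial-derivative formulas appear anywhere; autograd traces the forward computation and differentiates it for us.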
Question 4
Whoa! That is crazy powerful! That greatly reduces the amount of work we need to do. We didn’t use our partial derivative formulas anywhere, how cool!
But wait, there’s more! You know that step where we update our parameters at the end of each epoch? Think about a scenario where, instead of simply 3 parameters, we had 1000 parameters to update. That would involve a linear increase in the number of lines of code we would need to write — instead of just 3 lines of code to update our 3 parameters, we would need 1000! Not something most folks are interested in doing.
pytorch to the rescue.
We can use an optimizer to perform the parameter updates, all at once! Update your code to utilize an optimizer to perform the parameter updates.
There are a variety of different optimizers available. For this project, let's use the SGD optimizer. You can see the following example, directly from the linked webpage.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer.zero_grad()
loss_fn(model(input), target).backward()
optimizer.step()
Here, you can just focus on the following lines.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer.step()
The first line is the initialization of the optimizer. Here, you really just need to pass our initialized parameters (the betas) as a list as the first argument to optim.SGD. The second argument, lr, should just be our learning rate (0.0003). Then, the second line replaces the code where the three parameters are updated.
Code used to solve this problem.
Output from running the code.
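Putting the pieces together, the loop might look as follows (stand-in data again; note that the gradients must be zeroed each epoch so they do not accumulate across backward passes):

```python
import torch

x_train = torch.tensor([0.0, 1.0, 2.0, 3.0])
y_train = torch.tensor([5.1, 12.3, 24.7, 44.2])

b0 = torch.tensor(5.0, requires_grad=True)
b1 = torch.tensor(4.0, requires_grad=True)
b2 = torch.tensor(3.0, requires_grad=True)

# The optimizer takes the parameter list and the learning rate.
optimizer = torch.optim.SGD([b0, b1, b2], lr=0.0003)

for epoch in range(1000):
    y_hat = b0 + b1 * x_train + b2 * x_train**2
    mse = ((y_train - y_hat)**2).mean()

    optimizer.zero_grad()   # clear gradients from the previous epoch
    mse.backward()          # backward pass
    optimizer.step()        # update all parameters at once

print(b0.item(), b1.item(), b2.item())
```

However many parameters the model has, the update is still the single optimizer.step() call.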
Question 5
You are probably starting to notice how pytorch can really simplify things. But wait, there's more!

In each epoch, you are still calculating the loss manually. Not a huge deal, but it could be a lot of work, and MSE is not the only type of loss function. Use pytorch to create your MSE loss function, and use it instead of your manual calculation.
You can find the torch.nn.MSELoss documentation here. Use the option reduction='mean' to get the mean MSE loss. Once you've created your loss function, simply pass your y_train as the first argument and your y_predictions as the second argument. Very cool!

This has been a lot to work on; the main takeaways should be that pytorch is capable of greatly simplifying code (and calculus!) like the code used for the gradient descent algorithm. At the same time, pytorch is particular, its error messages aren't extremely clear, and it definitely involves a learning curve.

We've barely scraped the surface of pytorch; there is (always) a lot more to learn! In the next project, we will provide you with the opportunity to utilize a GPU to speed up calculations, and SLURM to parallelize some costly calculations.
Code used to solve this problem.
Output from running the code.
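A sketch of the loss-function swap, with stand-in data as before; the only change from the previous loop is that the manual mean-squared-error expression is replaced by the built-in loss object:

```python
import torch

x_train = torch.tensor([0.0, 1.0, 2.0, 3.0])
y_train = torch.tensor([5.1, 12.3, 24.7, 44.2])

b0 = torch.tensor(5.0, requires_grad=True)
b1 = torch.tensor(4.0, requires_grad=True)
b2 = torch.tensor(3.0, requires_grad=True)

optimizer = torch.optim.SGD([b0, b1, b2], lr=0.0003)
loss_fn = torch.nn.MSELoss(reduction='mean')

for epoch in range(1000):
    y_hat = b0 + b1 * x_train + b2 * x_train**2
    mse = loss_fn(y_train, y_hat)   # replaces ((y_train - y_hat)**2).mean()

    optimizer.zero_grad()
    mse.backward()
    optimizer.step()

print(mse.item())
```

Since MSE is symmetric in its two arguments, the argument order used here matches the text above and gives the same value as the manual calculation.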
Xcode 4 – Time to make the leap?
Not having a lot of time to deal with silly issues I’m wondering if anyone would care to volunteer experiences with Xcode 4.
If I have an xcode 3.2 project which builds and I install xcode 4, should I expect it to just build properly? Or to I have to jump through a bunch of hoops?
I have a few projects I’m tinkering with and I don’t want to delay them just because I decided to jump to xcode 4 before they’d worked out the bugs…
it will probably build. personally i can’t get used to xcode 4, have tried twice and gone back to 3 twice. It is still a bit buggy as well. You can install it alongside 3 though, just change the path in the installer.
I’m sticking with 3 until I have a reason not to.
yeah, that’s been my feeling about it but wasn’t sure how well founded it was.
I’m sure soon I’ll be forced to upgrade because of iPhone development and I’m just wondering how and when would make sense to get it over with.
Thanks for the feedback!
In terms of the Max SDK, you will need to make a couple of changes to the maxmspsdk.xcconfig file, but after that everything should compile correctly.
1. Change the compiler version from
GCC_VERSION = 4.0
to
GCC_VERSION = 4.2
2. Change the SDK from:
SDKROOT = /Developer/SDKs/MacOSX10.4u.sdk
to:
SDKROOT = /Developer/SDKs/MacOSX10.6.sdk
3. Change the deployment target from:
MACOSX_DEPLOYMENT_TARGET = 10.4
to:
MACOSX_DEPLOYMENT_TARGET = 10.6
Hope this helps,
Tim
TIm –
Will this:
MACOSX_DEPLOYMENT_TARGET = 10.6
preclude the objects form running on older 10.5 systems?
brad
Hello Maxers,
the switch to XCode 4 is a bit more problematic. If you want to keep supporting PPC, you need to keep GCC 4 and the 10.4 SDK, at least as I know. As XCode 4 doesn’t support GCC 4 nor SDK 10.4, you have to add this manually to XCode. Here’s a really good how-to, which describes the entire process:
However, for me this didn’t work with XCode 4.0.2, just with XCode 4.0, and (although I don’t have Lion, so I can’t confirm) I guess this would also break on XCode 4.1…
Hope this helps,
Ádám
I have heard reports from very credible colleagues that this, as you suspected, does not work on Lion/Xcode 4.1.
Cheers,
Tim
@Brad, yes, this will limit your code from running on pre-OS10.6 systems. If you want to continue to target OS 10.4 or 10.5, then you should stick to Xcode 3.
FWIW, I upgraded my computer (with Xcode 3.2.5 already installed) to Lion, and Xcode 3.2.5 seems to still work. However, I’ve heard that trying to install Xcode 3.x onto a Lion computer does not work. Your mileage may vary here…
best,
Tim
Tim –
Is it possible to set the deployment target to a lower OS version in XCode 4?
I’ve heard there is a way to reinstall and run XCode 3.x, but it was all rumors and such.
brad
Hey Brad, I haven’t any luck with it, but maybe I’m just missing the required magic incantation.
Cheers,
Tim
I haven’t tried this, but the information seemed worth hanging on to:
<>
HTH, Charles
I just started testing Xcode 4.1 on Lion. I had to comment out the following lines in "ext_proto.h" to avoid compiler errors.
#ifndef WIN_VERSION
//int sprintf(char *, const char *, ...);
//int sscanf(const char *, const char *, ...);
#endif //WIN_VERSION
The following warning appears for projects with "~" in their path:
warning: invalid character ‘~’ in Bundle Identifier at column 19. This string must be a uniform type identifier (UTI) that contains only alphanumeric (A-Z,a-z,0-9), hyphen (-), and period (.) characters.
After that, the compiled externals work fine on build 47449.
Eric
I just got a new laptop with lion installed on it.
I first installed xcode 4.1 and I managed to make max externals by changing some things Tim said and using the Modernize Project option that xcode offered. The externals that were compiled worked on Lion, but they don't work on my old laptop that runs Leopard (and I would like to keep that backwards compatibility if possible).
It was not possible to select the 10.4 sdk (it’s simply not included in the xcode 4 install). So, I installed xcode 3.2 in a separate dir (not ‘Developer’ and excluding the parts that would overwrite the xcode 4.1 installation). I used the method mentioned here:
This works! (though you have to change the '/Developer/' paths in maxmspsdk.xcconfig to the folder you have chosen.) The externals work fine on Lion and Leopard.
Then I simply copied the 'MacOSX10.4u.sdk' folder from the xcode 3 install into the SDK folder of the xcode 4 install and tried again (with the original maxmspsdk.xcconfig that comes with the max sdk) and that works as well (again, the externals work both on Lion and Leopard). So I plan to use xcode 4 now for developing externals for max.
I hope this helps someone. My only disclaimer is that I only quickly checked this and I don't know if I will encounter other problems; I'll let you know.
Timo
I had heard of the trick of copying the older sdk folder into the 4.x Xcode SDK dir, but had not heard that it was ok to have both Xcode versions running on the same 10.7 machine. Makes sense, given changing the path info and all… All this will have to wait until I am assured my converter drivers are updated for Lion compatibility (RME Fireface 800) before I take the leap. Wonderful info, Timo!!
you’re welcome, btw my fireface 400 seems to work fine here…
I’ve gotten half way with the Wikipedia Viewer API project but am stuck on retrieving JSON from parsed data in separate chunks.
I wanted to make my search a bit more dynamic so I decided to have search results appear as the user types into the input box. The results are being output successfully onto a DIV called ‘output’ but as one piece, and I can’t seem to slice the results individually on the page using Javascript. I can only output and reference two JSON variables, which result in [Object Object] and can’t seem to pinpoint the ID field, which is dynamically created as the user types into the input box. So as the user puts in their text the JSON url is created on the fly and the field coming after “query”:{“pages”: is usually a dynamic number field that is the label I can’t grab with Javascript.
This project is still unfinished so you’ll see a submit button that doesn’t yet work and some other things.
The dynamic Javascript variable I am using to store the URL is:
var urlsearchT = “*&generator=search&gsrsearch=” + up + “&gsrnamespace=0&gsrlimit=10&gsroffset=0&gsrwhat=text”;
Essential issue is when I try to ID the JSON field within the populate() function using this variable:
var title = data.query.pages. I don’t know what identifier to use to reference the “query”:{“pages”: key, which is created dynamically as the user types into the input box. I could stringify() my JSON and then use a loop to find the page number that way, but that seems really long and unnecessary.
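To illustrate the problem, here is a sketch of the response shape and one way to read it without naming the numeric key at all: Object.keys/Object.values enumerate the dynamic page IDs for you. The sample data below is made up and only mimics the MediaWiki generator=search format described above:

```javascript
// Sample shape of a MediaWiki generator=search response (made-up data):
// the keys under query.pages are page IDs that change with every search.
const data = {
  query: {
    pages: {
      "308": { pageid: 308, title: "Aristotle", index: 1 },
      "9228": { pageid: 9228, title: "Earth", index: 2 }
    }
  }
};

// Enumerate the dynamic keys instead of hard-coding them:
const pages = Object.values(data.query.pages);
const titles = pages.map(page => page.title);

console.log(titles); // [ 'Aristotle', 'Earth' ]
```

Note that integer-like keys enumerate in ascending numeric order, so the `index` property is the thing to sort by if you need the original search ranking.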
Here is my codepen:
Any help or suggestions greatly appreciated.
Best,
Oscar

Source: https://www.freecodecamp.org/forum/t/stuck-on-wikipedia-viewer-json-dynamically-created-as-input-box-is-filled/161131
to display the pop up.
Full code after the jump.
<?xml version="1.0" encoding="utf-8"?>
<!--  -->
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">

    <mx:Script>
        <![CDATA[
            import mx.managers.PopUpManager;

            private function launchMoreInfo():void {
                var win:Dialog = PopUpManager.createPopUp(this, Dialog, true) as Dialog;
                PopUpManager.centerPopUp(win);
            }
        ]]>
    </mx:Script>

    <mx:ApplicationControlBar dock="true">
        <mx:Button label="Launch more info" click="launchMoreInfo();" />
    </mx:ApplicationControlBar>

</mx:Application>
<?xml version="1.0" encoding="utf-8"?>
<!-- Dialog.mxml -->
<mx:TitleWindow xmlns:mx="http://www.adobe.com/2006/mxml"
        title="More Information"
        showCloseButton="true"
        close="titleWindow_close(event);">

    <mx:Script>
        <![CDATA[
            import mx.events.CloseEvent;
            import mx.managers.PopUpManager;

            private function titleWindow_close(evt:CloseEvent):void {
                PopUpManager.removePopUp(this);
            }
        ]]>
    </mx:Script>

    <mx:String id="info" source="info.txt" />

    <mx:TextArea htmlText="{info}" />

</mx:TitleWindow>
info.txt
<font size="+2"><i>More Information...</i></font>
<p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Donec at arcu in lacus iaculis ultricies. Vivamus consectetuer. Donec vulputate aliquam leo. Nam metus. Aliquam nulla odio, ultrices vitae, nonummy eget, viverra accumsan, tellus. Curabitur neque ante, nonummy ut, fermentum eu, elementum a, ligula. Fusce hendrerit lectus ac velit. Suspendisse lorem pede, sagittis ac, fermentum non, auctor quis, nulla. Integer eu lacus sit amet justo vestibulum sodales. In euismod tellus eget magna. Vestibulum sed ante. Suspendisse eros libero, gravida ac, cursus et, porta vitae, lectus.</p>
View source is enabled in the following example.
Bah, I forgot to set the TextArea control’s editable property to false, but you get the idea.
Peter
Can you explain why the spacing between paragraphs is so big? (It is often a big problem.)
s’ergo,
Try setting the condenseWhite property to true on the TextArea control.
Peter
Thank you!
All works fine.
You the great!
Thank you! (谢谢)
…createPopUp(this, Dialog, true)…
how do I make the relation between the Dialog in the .as code and our Dialog.mxml?
and thks ;)
yes u forgot to import it in as like :
import src.Dialog;
northwebs.net,
You shouldn’t need the import. The Dialog.mxml, info.txt, and main.mxml should all be in the same directory. Sorry, I think the filenames above the code are a bit confusing. You can check out the source code to see the exact folder structure.
Peter
u’re right Peter, thks a lot ;)
Hi,
If I want to put the Dialog.mxml file in another folder (i.e. src/myComponents/Dialog.mxml), how can I then reference it in the code “PopUpManager.createPopUp(this, Dialog, true) as Dialog”? I tried to put myComponents.Dialog but it doesn’t work.
Any help please
Thanks
Gael,
How about this:
Peter
Hi,
I’m trying to apply the style of the TitleWindow you used in your “Creating a pop up TitleWindow using the PopUpButton control” example. However, when I use the PopUpManager class, the style isn’t applied; it uses the default one, like the one in this example. Is there anything I can do?
Thanx in advance
Instead of the static info.txt, I want dynamic content that comes from a database into the text field. How can I do that? Can anyone help me?
Could you make the popup re-sizable using the cursor?
Thanks
‘All works fine.’
That’s not true, the example didn’t work. What about this?
var win:Dialog = PopUpManager.createPopUp(this, Dialog, true) as Dialog;
here is the error: undefined property dialog
Excellent tutorial as usual!
Just one question: if I change the width/height of the popup to percentWidth/percentHeight, it doesn’t work. How do I set the size of the popup to, say, 80% of the parent?
Thank you,
You are doing a great job!
Eric,
This is a little crude, and you probably want to ask this question on the FlexCoders list, but you may be able to do what you want if you create a custom TitleWindow component and calculate the width/height yourself based on the Application’s width/height multiplied by the desired percent width/height. Something like the following:
Peter
I am sorry Peter, I didn’t mean to offend anyone. I am very new to flex and I am not familiar with flexcoder list. Thanks a lot for your help anyway. Great blog.
Eric,
No problem. Sorry, after re-reading my reply, I worded it very poorly. I meant my solution was a little crude and I’m sure somebody on the FlexCoders list would have a more elegant solution.
Peter
Peter,
Thanks for the solution. I have a question: the new height and width are received on creationComplete; is it now possible to set the height and width of child components based on these new values? Could you please give an example of this?
Thank You
–
Chetan
Hi Peter,
I’ve been looking at your example and also the examples in the livedocs. I’ve been doing Flex for about 2 months now, and previously I was developing in AS 2.0, C#, Java and VB. I’m just trying to get my head around this class and it has been annoying. I have an application that uses a socket connection to raise events. When these events are raised, I am using a custom Alert component that is displayed using the PopUpManager class. Now it all works great; the problem is that I am using the same PopUpManager class to display another custom Alert component. After adding one, it seems as though I cannot add the second one, even after removing it. Is there anything that I am missing to be getting this error? Should I possibly have one component with the two others nested within it, and the parent of these two be the panel that I use to display in PopUpManager?
the error i am getting is as follows:
Hi Peter
Thank you for the good examples. I wonder if you have an answer for this. I want to popup a custom window like this example, and be able to dispatch an event back to the opener. Basically when the custom window closes I want run a function from the opener.
Are there any problems with this approach? Thanks!
Hi AC, this is the problem I also face just now. It can be easily solved with Cairgnorm framework but w/o it, you can do this way.
1. You create a custom event in your custom popup window, then dispatch the event as usual.
2. In your opener, you have a reference to the popup window; register it with the custom event listener above, i.e. (in your opener’s code):
var popup:CustomPopup = new CustomPopup();
popup.addEventListener(yourEvent, functionHandler);
private function functionHandler(yourEvent):void { }
Regards,
Anh
P.S. This is a great flex blog :D Thank you, Peter
Do you know how to make modal dialog only cover specific section of the swf application (say some panel) and not the full swf application ?
I’m also wondering the same as Shailendra (previous poster). Is there a way to make the modal window cover only a component, not the entire stage? I debugged and found where the modal window is created. I guess we could just inherit from the proper classes and resize/relocate the modal window.
Hello Sir
I am having a problem now! I tried to use the pop up window, but I am not able to compile this code. Can anyone explain the exact syntax?
In “var win : XXXX”, should XXXX be the name of the other class we are going to call, or the Dialog class?
Can you please explain that line?
isaac,
Try this:
Peter

Source: http://blog.flexexamples.com/2008/03/20/creating-custom-pop-up-windows-with-the-popupmanager-class-redux/
Theming Views in Drupal 8 – Custom Style Plugins
By Daniel Sipos
Views is in Drupal 8 core. We all know that by now. Twig is the new templating engine in Drupal 8. This we also know. But do we know how to interact programmatically with the first in order to theme a View using the second? Aside from overriding View templates like with any other subsystem, we have available a more powerful alternative in the form of Views plugins (Display, Style, Row and Field).
We will not cover the details on how you can use Bootstrap in your project. However, you can check out the documentation page on assets or even this article on how to make sure anonymous users can benefit from jQuery being loaded on the page. And if you want to see the code we write ahead of time, you can find it in this repository within the Demo module.
What Is the Style Plugin?
The Views Style plugin is the one responsible for rendering the listing. Notable examples of core Style plugins are Unformatted List, HTML List, Table or Grid. They are used by the Display plugin and they in turn use Row plugins that represent one item in the listing.
In Drupal 8, all Views plugin types are built using the new Plugin system and share some common functionality (they always extend from the same Views PluginBase).
Let’s now create our own such Style plugin that can be used by most Display types (Page, Block, etc) and which uses the Field row plugin.
The Bootstrap Tabs Style Plugin
The first step is to create our plugin class located in the Plugin/views/style folder of our module:
namespace Drupal\demo\Plugin\views\style;

use Drupal\Core\Form\FormStateInterface;
use Drupal\views\Plugin\views\style\StylePluginBase;

/**
 * A Views style that renders markup for Bootstrap tabs.
 *
 * @ingroup views_style_plugins
 *
 * @ViewsStyle(
 *   id = "bootstrap_tabs",
 *   title = @Translation("Bootstrap Tabs"),
 *   help = @Translation("Uses the Bootstrap Tabs component."),
 *   theme = "demo_bootstrap_tabs",
 *   display_types = {"normal"}
 * )
 */
class BootstrapTabs extends StylePluginBase {

  /**
   * Does this Style plugin allow Row plugins?
   *
   * @var bool
   */
  protected $usesRowPlugin = TRUE;

  /**
   * Does the Style plugin support grouping of rows?
   *
   * @var bool
   */
  protected $usesGrouping = FALSE;

  /**
   * {@inheritdoc}
   */
  protected function defineOptions() {
    $options = parent::defineOptions();
    $options['tab_nav_field'] = array('default' => '');
    return $options;
  }

  /**
   * {@inheritdoc}
   */
  public function buildOptionsForm(&$form, FormStateInterface $form_state) {
    parent::buildOptionsForm($form, $form_state);
    $options = $this->displayHandler->getFieldLabels(TRUE);
    $form['tab_nav_field'] = array(
      '#title' => $this->t('The tab navigation field'),
      '#description' => $this->t('Select the field that will be used as the tab navigation. The rest of the fields will show up in the tab content.'),
      '#type' => 'select',
      '#default_value' => $this->options['tab_nav_field'],
      '#options' => $options,
    );
  }
}
The Drupal plugin type we are creating an instance of is ViewsStyle, with some basic configuration passed in the annotation. Leaving aside the obvious ones, we have the theme and display_types keys that are worth mentioning. The first declares which theme function this Style plugin will use to render its data, while the second declares which kinds of Display plugins this Style can be used by (in our case, all Display types which don’t otherwise specify a custom type: normal). For more information on all the available annotation configuration for this plugin type, check out the Drupal\views\Annotation\ViewsStyle annotation class.
Using the two class properties, we declare that our Style uses row plugins but does not allow grouping. Make sure you check out the parent classes to learn more about what other options can be specified like this. For example, the class we are extending already declares that Views fields can be used with the Style plugin.
As mentioned before, using the two methods we create a plugin option and form element to be able to specify which field should act as the tab navigation. Using the current display handler ($this->displayHandler) we can load up all the available View fields the site builder has added to it. And this new form element will be available on the Style settings form:
Since we are extending from the StylePluginBase class, there is nothing more we need to do. For the markup output, we can rely on the demo_bootstrap_tabs theme, which receives the relevant variables from the executed View. If we want, we can override any of the render methods and add more variables, change the theme, or whatever we need. We are good with the defaults, especially since we will implement a preprocessor to handle the variables that the template receives.
The Theme
It’s time to define the demo_bootstrap_tabs theme as we normally do (inside our .module file):
/**
 * Implements hook_theme().
 */
function demo_theme($existing, $type, $theme, $path) {
  return array(
    'demo_bootstrap_tabs' => array(
      'variables' => array('view' => NULL, 'rows' => NULL),
      'path' => drupal_get_path('module', 'demo') . '/templates',
    ),
  );
}
The Style plugin passes the $view object and the resulting $rows by default to the template. It is up to the preprocessor to do a bit of handling of these variables (if needed) before they are sent to the template:
/**
 * Prepares variables for views demo_bootstrap_tabs template.
 *
 * Template: demo-bootstrap-tabs.html.twig.
 *
 * @param array $variables
 *   An associative array containing:
 *   - view: The view object.
 *   - rows: An array of row items. Each row is an array of content.
 */
function template_preprocess_demo_bootstrap_tabs(&$variables) {
  $view = $variables['view'];
  $rows = $variables['rows'];
  $variables['nav'] = array();

  // Prepare the tab navigation.
  $field = $view->style_plugin->options['tab_nav_field'];
  if (!$field || !isset($view->field[$field])) {
    template_preprocess_views_view_unformatted($variables);
    return;
  }

  $nav = array();
  foreach ($rows as $id => $row) {
    $nav[$id] = array(
      '#theme' => 'views_view_field',
      '#view' => $view,
      '#field' => $view->field[$field],
      '#row' => $row['#row'],
    );
  }

  template_preprocess_views_view_unformatted($variables);
  $variables['nav'] = $nav;
}
So what’s happening here? First, we check the Style plugin options for the field name to be used (the one that was selected when configuring the View). If one is not there, we return, but not before doing a bit of default preprocessing that the template_preprocess_views_view_unformatted function already does well. So we delegate to it. Then, we loop through the Views results and build an array of content for our tab navigation. For this, we use the default Views views_view_field theme function to render the selected field. Finally, we pass this array to the template and also run the default preprocessor of the unformatted list style.
The Template
In Drupal 8 there are no more theme functions; everything is now handled in Twig templates. So let’s see how the demo-bootstrap-tabs.html.twig file looks in our module’s templates folder:
<div>
  <!-- Nav tabs -->
  <ul class="nav nav-tabs" role="tablist">
    {% for tab in nav %}
      {% set active = '' %}
      {% if loop.index0 == 0 %}
        {% set active = 'active' %}
      {% endif %}
      <li role="presentation" class="{{ active }}">
        <a href="#tab-{{ loop.index0 }}" aria-controls="tab-{{ loop.index0 }}" role="tab" data-toggle="tab">{{ tab }}</a>
      </li>
    {% endfor %}
  </ul>

  <!-- Tab panes -->
  <div class="tab-content">
    {% for row in rows %}
      {% set active = '' %}
      {% if loop.index0 == 0 %}
        {% set active = 'active' %}
      {% endif %}
      <div role="tabpanel" class="tab-pane {{ active }}" id="tab-{{ loop.index0 }}">{{ row.content }}</div>
    {% endfor %}
  </div>
</div>
As you can see, this is the necessary markup for the Bootstrap tabs. It won’t work, of course, without making sure the relevant Bootstrap styles and script are loaded in your theme first.
The first thing we render is the tab navigation items (from our nav variable). While looping through this array, we also make use of the loop index value in order to default the first item as active and be able to target the tab content panes below using unique IDs. For the actual value of the items, we just print the render array we created in our preprocessor and Drupal takes care of rendering that. That being said, it is probably a good idea to make sure that the field you use here is relatively short, without a link and plain markup. Titles would probably work just fine. But this is a matter of configuring the View accordingly.
Below the navigation, we print the actual view rows, using the same loop index to default the first row as the active tab pane and identify them uniquely so the navigation above can control their visibility. As for the content, we print the entire row.content variable (which is prepared inside template_preprocess_views_view_unformatted) and which contains all the fields in our View. And if we want to not include the field we used for the navigation, we can just exclude that one from display in the View configuration. It will still appear in the navigation (because we explicitly print it there) but not in the main tab pane.
Conclusion
And there we have it. A Views Style plugin to output the View results as Bootstrap tabs. All we need now is to make sure the Bootstrap assets are loaded and simply configure our View to use the new Style plugin. Do keep in mind that this is not meant for Views with lots of results and it only serves as an example to demonstrate how to create Style plugins.
If you have questions, comments, or suggestions, please leave them below!

Source: https://www.sitepoint.com/theming-views-in-drupal-8-custom-style-plugins/
Correct and Efficient Vuex Using. Part I
With this article, we begin a series of publications about Vue.js technology, examining application development and all its components from different practical sides. In this part, we explain what the Vuex library is and analyze in detail such components as the store, state, getters, mutations, and actions.
Also, in the second part, we will consider modules, application structure, plugins, strict mode, work with forms, testing and strengths/benefits of Vuex Storage.
What is Vuex, and Where is it Used?
VueX is a state management library inspired by Flux, Redux, and Elm architecture, but specially designed and tuned to integrate well with Vue.js and take advantage of Vue’s Reactivity.
What is a state management pattern? Let's start with a simple Vue application that implements a counter. This stand-alone application consists of the following parts:
- State that controls the application;
- The view is a state display specified declaratively;
- Actions are possible ways to change the state of the app in response to user interactions with the view.
Sometimes several components may appear that are based on the same state:
- multiple views may depend on the same part of the application state;
- actions from different views can affect the equal parts of the application state.
To solve the first problem, you have to pass the same data as input parameters to deeply nested components. This is often complicated and tedious, and for sibling components it will not work at all. To solve the second problem, you may resort to solutions such as referring to parent/child instances, or trying to change and synchronize multiple copies of the state through actions. Both approaches are fragile and quickly lead to code that cannot be maintained.
So why not take out the overall general state of the application from the components and manage it in a global singleton? At the same time, our component tree becomes one big "view" and any component can access the application state or trigger actions to change the state, regardless of where they are in the tree!
By clearly defining and separating the concepts that arise in state management, and by requiring certain rules that maintain independence between views and states, we better structure the code and make it easier to maintain.
This is the core idea of Vuex, inspired by Flux, Redux, and Elm Architecture. Unlike other patterns, Vuex is implemented as a library designed explicitly for Vue.js to use its reactivity system for efficient updates.
The Main Components and Capabilities of Vuex
Store
At the center of any Vuex application is a store. The store is a container that stores the state of your application. Two points distinguish Vuex store from a simple global object:
- The Vuex store is reactive. When Vue components rely on its state, they will be reactively and efficiently updated if the state of the store changes.
- You cannot directly change the state of the store. The only way to make changes is to cause a mutation explicitly. This ensures that any change in the state leaves a mark and allows the use of tools to better understand the progress of the application.
After installing Vuex, we create a store. It’s quite simple: you need to specify the initial state object and some actions and mutations.
const store = new Vuex.Store({
  state: {
    counter: 0 // initial store state
  },
  actions: {
    increment({ commit, dispatch, getters }) {
      commit('INCREMENT')
    },
    decrement({ commit, dispatch, getters }) {
      commit('DECREMENT')
    }
  },
  mutations: {
    INCREMENT(state) {
      state.counter++
    },
    DECREMENT(state) {
      state.counter--
    }
  },
  getters: {
    counter(state) {
      return state.counter
    }
  }
})
State. Single state tree
Vuex uses a single state tree: one object contains the entire global state of the application and serves as the single source of truth. This also means that the app will have only one such store. A single state tree makes it easy to find the part you need, or to take snapshots of the current state of the application for debugging purposes.
The data you store in Vuex follows the same rules as the data in a Vue instance, i.e. the state object must be plain. So how do we display state from the store in our Vue components? Since Vuex stores are reactive, the simplest way to "retrieve" state is simply to return some store state from within a computed property. Whenever store.state.counter changes, it will cause the computed property to re-evaluate and trigger associated DOM updates.
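As a plain-JavaScript sketch of that idea (no Vue here, just showing that the computed function reads the store each time it runs; all names below are made up, and Vue would call the computed property for you):

```javascript
// A stand-in "store" and a component whose computed property
// delegates to the store's state instead of keeping a local copy.
const store = { state: { counter: 0 } };

const component = {
  computed: {
    counter() {
      return store.state.counter; // always reads the single source of truth
    }
  }
};

store.state.counter = 41;
store.state.counter++;

// Vue would re-evaluate this automatically; we call it by hand:
console.log(component.computed.counter()); // 42
```

The point of the pattern: because the component never copies the value, there is nothing to keep in sync.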
export default {
  methods: {
    incrementCounter() {
      this.$store.dispatch('increment')
    }
  }
}
import { mapState } from 'vuex';

export default {
  computed: {
    ...mapState({
      counter: state => state.counter
    }),
    counterSquared() {
      return Math.pow(this.counter, 2)
    }
  }
}
We can also pass a string array to mapState when the name of a mapped computed property is the same as a state sub tree name.

Getters
Sometimes we may need to compute derived state based on store state, for example filtering through a list of items and counting them. Like computed properties, a getter's result is cached based on its dependencies, and will only re-evaluate when some of its dependencies have changed.
// In store
getters: {
  counter(state) {
    return state.counter
  },
  counterSquared(state) {
    return Math.pow(state.counter, 2)
  }
}

// In component
import { mapGetters } from 'vuex';

export default {
  computed: {
    ...mapGetters([
      'counter',
      'counterSquared'
    ])
  }
}
You can also pass arguments to getters by returning a function. This is particularly useful when you want to query an array in the store. Note that getters accessed via methods will run each time you call them, and the result is not cached.
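To make that concrete, here is a sketch of such a "method-style" getter, written as plain JavaScript so it can run on its own. In a real store you would call it as this.$store.getters.getTodoById(2); the todos data below is made up:

```javascript
const state = {
  todos: [
    { id: 1, text: 'learn Vuex', done: true },
    { id: 2, text: 'write getters', done: false }
  ]
};

const getters = {
  // Returning a function turns the getter into one that accepts arguments.
  getTodoById: (state) => (id) => state.todos.find(todo => todo.id === id)
};

// Vuex would inject `state` itself; here we pass it by hand:
const getTodoById = getters.getTodoById(state);

console.log(getTodoById(2)); // { id: 2, text: 'write getters', done: false }
```

This is also why such getters are not cached: every call re-runs the inner function.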
The mapGetters helper simply maps store getters to local computed properties.
Mutations

You cannot directly call a mutation handler. Think of it more like event registration: "When a mutation with type INCREMENT is triggered, call this handler." To invoke a mutation handler, you need to call store.commit with its type.
export default {
  methods: {
    incrementCounter() {
      this.$store.commit('INCREMENT')
    }
  }
}
You can pass an additional argument to store.commit, which is called the payload for the mutation. In most cases, the payload should be an object so that it can contain multiple fields, and the recorded mutation will also be more descriptive. An alternative way to commit a mutation is by directly using an object that has a type property. When using object-style commit, the entire object will be passed as the payload to mutation handlers, so the handler remains the same.
However, using constants to indicate the types of mutations is completely optional, although this may be useful in large projects.
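The following sketch ties those three ideas together: a constant for the mutation type, a payload object, and both commit styles. The tiny commit helper only mimics what Vuex does internally, and all the names here are made up:

```javascript
const INCREMENT_BY = 'INCREMENT_BY'; // constant for the mutation type

const state = { counter: 0 };
const mutations = {
  [INCREMENT_BY](state, payload) {
    state.counter += payload.amount;
  }
};

// Minimal stand-in for store.commit, supporting both call styles:
function commit(type, payload) {
  if (typeof type === 'object') { // object-style: commit({ type, ...payload })
    payload = type;
    type = payload.type;
  }
  mutations[type](state, payload);
}

commit(INCREMENT_BY, { amount: 5 });        // payload style
commit({ type: INCREMENT_BY, amount: 10 }); // object style
console.log(state.counter); // 15
```

Either style reaches the same handler; the object style is just more convenient when the commit call is built dynamically.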
One important rule to remember is that mutation handler functions must be synchronous. If a mutation performed state changes inside an asynchronous callback, those changes would be essentially untrackable!
You can commit mutations in components with this.$store.commit('xxx'), or use the mapMutations helper which maps component methods to store.commit calls (requires root $store injection). In Vuex, mutations are synchronous transactions. To handle asynchronous operations, while keeping the two concepts separate, we use Actions.
Actions
Actions are similar to mutations with a few differences:
- Instead of mutating the state, actions commit mutations.
- Actions can contain arbitrary asynchronous operations.
actions: {
  signIn({ commit }, payload) {
    // Show spinner when user submits the form
    commit('LOGIN_IN_PROGRESS', true);

    // axios - Promise based HTTP client for the browser and node.js
    axios
      .post('/api/v1/sign_in', {
        email: payload.email,
        password: payload.password
      })
      .then((response) => {
        const { user, token } = response.data;
        commit('SET_AUTH_TOKEN', token);
        commit('SET_USER', user);
        commit('LOGIN_IN_PROGRESS', false);
      })
      .catch((error) => {
        commit('SET_SIGN_IN_ERROR', error.response.data.reason);
        commit('LOGIN_IN_PROGRESS', false);
      })
  }
}
The code above shows an asynchronous action, using authorization as an example. Note the argument destructuring ({ commit }), which simplifies the code a bit, especially when we need to call commit multiple times. Actions are triggered with the store.dispatch method. This may look silly at first sight: if we want to increment the counter, why don't we just call store.commit('INCREMENT') directly? Remember that mutations have to be synchronous? Actions don't. We can perform asynchronous operations inside an action. Actions support the same payload format and object-style dispatch.
A more practical example of real-world actions would be an action to check out a shopping cart, which involves calling an async API and committing multiple mutations: performing a flow of asynchronous operations, and recording the side effects (state mutations) of the action by committing them.
You can dispatch actions in components with this.$store.dispatch('xxx'), or use the mapActions helper which maps component methods to store.dispatch calls (requires root $store injection). It's possible for a store.dispatch to trigger multiple action handlers in different modules. In such a case, the returned value will be a Promise that resolves when all triggered handlers have been resolved.
This is only a small part of what we are going to cover in our next articles about Vue.js and all of its additional tools and benefits. Next, we will continue our review of the Vuex library and its components.
Also, in our blog section, you can read more about Vue.js tools here.

Source: https://amoniac.eu/blog/post/correct-and-efficient-vuex-using-part-i
- Join Date
- May 2006
- 14
apache run server side code via php
I have a usb temperature stick called TEMPer. I am using a c program I found which returns the current air temperature. What I would like to do is call this binary code from a webpage.
I put the compiled code in /usr/bin and created the following file "index.php"
Code:
<?php print("Current Temp:\n\n"); $thetemp = system('temper'); echo $thetemp; ?>
"TemperCreate: Operation not permitted"
Where TemperCreate is a function in the original C code. I guess this is a good thing as it is stopping apache from running server side code.
So to my question. Is it possible to allow apache to run compiled C code ?
Safe mode is turned off and I have tried making apache the owner of the binary, still no joy.
Any suggestion would be great.
Thanks,
Pete.
It sounds like it is running the C program. Does the TemperCreate function require another file that Apache doesn't have permission to?
update
The C code includes the following.
#include <usb.h>
and I think the library function that is causing the problem is:
usb_control_msg
which has the following description:
int usb_control_msg(usb_dev_handle *dev, int requesttype, int request, int value, int index, char *bytes, int size, int timeout);
usb_control_msg performs a control request to the default control pipe on a device. The parameters mirror the types of the same name in the USB specification. Returns number of bytes written/read or < 0 on error.
Is there a way of allowing the apache user to run this function, i.e. to access USB devices?
Thanks,
Pete.
Solved
I just placed the binary executable in the same directory as my index.php file (instead of in /usr/bin) and I edited the system call to be:
Code:
$thetemp = system('./temper');
and all works great now
Pete.

Source: http://www.linuxforums.org/forum/servers/164743-apache-run-server-side-code-via-php.html
trigger a proxy to signal a process
#include <sys/types.h>
#include <sys/kernel.h>

pid_t Trigger( pid_t proxy );
The kernel function Trigger() triggers proxy to send a message to the process that owns it. The calling process doesn't block. If more than one trigger occurs before the proxy message is received, that number of messages (up to 65535) will be received.
Returns: The pid that owns the proxy. On error, -1 is returned and errno is set.
See qnx_proxy_attach().
Classification: QNX

Caveats: Trigger() is a macro.
See also: Creceive(), Creceivemx(), errno, qnx_proxy_attach(), qnx_proxy_detach(), Receive(), Receivemx(), Reply(), Replymx(), Readmsg(), Readmsgmx(), Send(), Sendfd(), Sendfdmx(), Sendmx(), Writemsg(), Writemsgmx()

Source: https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/trigger.html
Keyboard map chords (e.g., Ctrl+X Ctrl+C)
My apologies if this is a duplicate…
Is it possible to map multi-key shortcuts like in EMACS or (I am dating myself) WordStar/Borland IDEs?
For example, in WordStar, the keyboard shortcut to close a file is Ctrl+K Ctrl+Q (or Ctrl+K Q).
If this is not possible, how do I make this a feature request?
Thanks
Bill
welcome to the notepad++ community.
you can use multi key shortcuts, but you can’t use multi key sequences as shortcuts in notepad++.
eg. you can set ctrl+q instead of ctrl+w to close a tab in settings > shortcut mapper, but you can’t trigger this shortcut with a key sequence like ctrl+k followed by ctrl+q.
currently there is no dedicated place for feature requests like this one.
but you can click the feature requests link from here
which will lead you to the notepad++ issues page at github
@Bill-Stewart said:
or (I am dating myself) WordStar/Borland IDEs?
You definitely are. But the fact that I know what you are talking about–what does that say about me? (but I’ve never heard this called “keyboard chords” before)
how do I make this a feature request?
Proper way is here:
But do you REALLY think such a feature request will be taken seriously? :) It’s 2019 for goodness sake…been waiting all year to use that line
Note I have no decision-making power–that is just my feelings.
Another option might be to do something in a scripting language. Watch the keyboard for the first keycombo, followed by something valid in the second keycombo (within a certain period of time maybe ?), then invoke the corresponding menu option, from the script. It would have to be something configured in code rather than something nicely set up in Shortcut Mapper… Some downsides but depending how much you want it… :)
BTW, your avatar dates you as well! :)
Thanks, I found the Github page.
It’s 2019 for goodness sake…
Don’t know what that’s got to do with anything. These kinds of keyboard shortcuts are supported in (3 examples) Visual Studio, Visual Studio Code, EMACS (and probably others also).
@Bill-Stewart said:
Don’t know what that’s got to do with anything
Maybe nothing. It was my attempt at humor. Because you hinted at this being something only “dinosaurs” (there I go with humor again) that remember Wordstar would find useful. It probably would have been better for you to have just cited the VS/VSCode/emacs examples in the first post, and dropped the Wordstar/Borland stuff. :) Certainly no offense to the people out there that love this type of keystroking. I still maintain the opinion that it probably isn’t widely enough used these days to be seriously considered as a Notepad++ change, but the feature request link is the way to go to pursue it. Good luck to ya.
For anyone interesting in viewing/voting for this feature:
I was the one who proposed adding input ‘modes’, like in Vim, which somewhat relates to what you describe. Namely I proposed to add a command mode, as an additional ‘page’ for keyboard input where keys do not type characters but rather can be bound to commands.
As it seems though, majority of users do not appretiate any departure from standard windows’ edit box behaviour, mainly because it is additional learning.
At the same time, adding such features requires changing the core parts of the editor — that means the change must be really worth the deal, and there hardly will ever be consensus what should be optimal input scheme.
As for me, I have found an excellent solution - Autohotkey app. This app is made exactly for input customization and automation. Also I have Pythonscript plugin in Npp, so using combinations of Autohotkey scripts and some Pythonscript plugin scripts, I can do almost any input customization.
E.g. I can have Vim-like command mode with only ~50 lines of Ahk script plus some Pythonscript code for visual feedback (it changes GUI colors in different modes).
So, to be honest, if you like experimenting with input, you could start with Autohotkey. It would be IMO more productive than hoping for this feature addition ;-). Just make up an more or less concrete idea of what input combination/scheme and functionality you seek for and try to implement in Autohotkey / Pythonscript plugin.
Sounds interesting. How about posting a working example from your own efforts, both to illustrate the idea more fully and give a proof-of-concept.
Sure, here is an examlpe AHK I use for toggling command mode with Capslock, and example commands respectively, comment and un-comment bound to
qand
wkeys:
#NoEnv ; Recommended for performance and compatibility with future AutoHotkey releases. SendMode Input ; Recommended for new scripts due to its superior speed and reliability. CMode := false #If winActive("ahk_exe notepad++.exe") && WinActive("ahk_class Notepad++") ; global hotkeys (active in all modes) capslock:: if (CMode = false) { CMode := true send +{F11} ; execute visual feedback PS script inside NPP } else { CMode := false send +{F12} ; execute visual feedback PS script inside NPP } setCapslockState off ; keep capslock always off return #If CMode && WinActive("ahk_exe notepad++.exe") && WinActive("ahk_class Notepad++") ; command mode hotkeys ; comment q:: send ^q return ; uncomment w:: send ^+q return
Visual feedback is implemented in PS plugin scripts which are executed on Shift-F12 and Shift-F11 shortcuts (see send +{F12} above code).
E.g. this indicates command mode with blue folder column highlighting:
editor.setFoldMarginHiColour(1, (180, 200, 220)) editor.setFoldMarginColour(1, (180, 200, 220))
And the second sets it back to my default color:
editor.setFoldMarginHiColour(1, (226,226,220)) editor.setFoldMarginColour(1, (226,226,220))
As for OP idea with sequences - it is actually almost the same, if you want only 2-command sequences.
Just need to add code to get back to default mode from the hotkey in the second section.
If want support for more than 2 commands in a sequence, it will get more complicated of course.
- Eko palypse last edited by Eko palypse
That issue made be curious and I wanted to see if this could be solved with the PythonScript plugin in some way.
It is more of a proof of concept than a working solution but I thought it might be useful for some.
The script doesn’t prevent npp from executing registered shortcuts, if those are similar to the ones configured in the script.
Like if the script defines CTRL+K+L and you do have CTRL+K or CTRL+L mapped to some functions those get called,
as said - more of a proof of concept. But was really fun :-)
from __future__ import print_function import ctypes from ctypes.wintypes import DWORD, HRESULT, INT, WPARAM, LPARAM, HINSTANCE, HHOOK, BOOL, MSG import threading user32 = ctypes.WinDLL('user32', use_last_error=True) def ErrorIfNone(result, func, arguments): if result is None: raise ctypes.WinError(ctypes.get_last_error()) else: return result def ErrorIfZero(result, func, arguments): if result == 0: raise ctypes.WinError(ctypes.get_last_error()) else: return result HOOKPROC = ctypes.WINFUNCTYPE(HRESULT, INT, WPARAM, LPARAM) SetWindowsHookEx = user32.SetWindowsHookExW SetWindowsHookEx.argtypes = [INT, HOOKPROC, HINSTANCE, DWORD] SetWindowsHookEx.restype = HHOOK SetWindowsHookEx.errcheck = ErrorIfNone UnhookWindowsHookEx = user32.UnhookWindowsHookEx UnhookWindowsHookEx.argtypes = [HHOOK] UnhookWindowsHookEx.restype = BOOL UnhookWindowsHookEx.errcheck = ErrorIfZero WH_KEYBOARD = 0x2 HC_ACTION = 0 VK_SHIFT = 0x10 VK_CONTROL = 0x11 VK_MENU = 0x12 MODIFIER_MAP = {VK_SHIFT: 'SHIFT', VK_CONTROL: 'CTRL', VK_MENU: 'ALT',} SHORTCUT_MAP = {'CTRL+L+K': lambda: print('do something\n'), 'CTRL+K+L': lambda: print('do something other\n'),} class KEYLOGGER(threading.Thread): def __init__(self): threading.Thread.__init__(self) self.hookHandle = None self.hookFunction = None self.keyCombo = '' self.modifier = { VK_SHIFT:False, VK_CONTROL:False, VK_MENU:False } self.modifier_keys = self.modifier.keys() self.KEYUP = False self.curr_class = ctypes.create_unicode_buffer(256) def register(self): self.hookFunction = HOOKPROC(self.keyboardHook) self.hookHandle = SetWindowsHookEx(WH_KEYBOARD, self.hookFunction, user32._handle, 0) def unregister(self): if self.hookHandle is None: return UnhookWindowsHookEx(self.hookHandle) self.hookHandle = None def exec_shortcut(self, shortcut): func = SHORTCUT_MAP.get(shortcut, None) if func: func() def keyboardHook(self, nCode, wParam, lParam): user32.GetClassNameW(user32.GetFocus(), self.curr_class, 256) if self.curr_class.value == u'Scintilla' and nCode == 
HC_ACTION: self.KEYUP = user32.GetKeyState(wParam) & 0x8000 == 0 if wParam not in self.modifier_keys and self.KEYUP and any(self.modifier.values()): if 65 <= wParam <= 90: self.keyCombo += '+{}'.format(chr(wParam)) elif wParam in self.modifier_keys: if self.KEYUP: self.modifier[wParam] = False if not any(self.modifier.values()): self.exec_shortcut(self.keyCombo) self.keyCombo = '' else: self.keyCombo += '+{}'.format(MODIFIER_MAP[wParam]) else: if not any(self.modifier.values()): self.modifier[wParam] = True self.keyCombo = '{}'.format(MODIFIER_MAP[wParam]) else: self.modifier = { VK_SHIFT:False, VK_CONTROL:False, VK_MENU:False } return user32.CallNextHookEx(_keyLogger.hookHandle, nCode, wParam, lParam) def run(self): self.register() msg = MSG() user32.GetMessageW(ctypes.byref(msg), 0, 0, 0) try: _keyLogger.unregister() del(_keyLogger) except NameError: _keyLogger = KEYLOGGER() _keyLogger.start()
@Eko palypse
Interestingly, that’s what I meant when I said above:
Another option might be to do something in a scripting language.
However, since then I’ve looked at the code I had that was somewhat similar to yours and it suffers the same problem: If a keycombo is mapped via Shortcut Mapper, I haven’t found a way to snag the keycombo for my own purpose and NOT have Shortcut Mapper decoding also see it. Too bad @ClaudiaFrank isn’t around any more because I’m sure she’d solve that one in a second! :)
To be more accurate, if a keycombo is mapped via Shortcut Mapper, then SM will see it and my code then won’t even get a chance at it.
- Eko palypse last edited by Eko palypse
if a keycombo is mapped via Shortcut Mapper, then SM will …
you are right - two cooks in a kitchen isn’t good.
If I’m right, the only way to intercept the message flow is to hook into the message queue,
which I should have done in first place anyway, but playing with some kind of keylogger sounded more exciting at that time.
And concerning the code, I have to admit, I got a lot of inspiration, sounds better than stealing,
doesn’t it, from @Claudia-Frank and @Scott-Sumner :-)
Maybe I will have a look at it some time later but for now I’m more interested in the the topic LSP,
as this has some benefits for me as well.
Thanks for posting the example, it really clarifies. I see it uses coloring of the fold margin. What about when you are working with files with no fold margin, example
*.txtfiles? Then you don’t get the feedback about what “mode” you are in. :(
Wouldn’t it be better to color the line-number margin (presumes you have that turned on–side note: I’m always amazed that people post asking how to turn that OFF)? Of course, I can suggest that but I myself don’t see how one would change the color (via code) of that margin.
- Alan Kilborn last edited by Alan Kilborn
@Alan-Kilborn said:
but I myself don’t see how one would change the color (via code) of that margin.
I must correct myself. I DO see how to do it, half-an-hour later:
editor.styleSetBack(STYLESCOMMON.LINENUMBER, (255, 0, 0))
will turn the background of the line-number margin a rather glaring red, for example.
@Alan-Kilborn said:
editor.styleSetBack(STYLESCOMMON.LINENUMBER, (255, 0, 0))
will turn the background of the line-number margin a rather glaring red, for example.
Yes, changing the line number color is actually better solution in general and it can be better visible.
I’ve used folder margin from the beginning and just was too lazy to experiment further :)
BTW, if you want to make permanently visible folder margin in any file, it can be done with Pythonscript plugin via “startup.py” script.
- Disable folder margin:
Settings - > Preference -> Editing : set “Folder margin style” to “None”
- Add e.g. this to startup.py:
editor.setMarginWidthN (2, 16) # set folder margin to 16 pixel editor.markerDefine (MARKEROUTLINE.FOLDEROPEN, MARKERSYMBOL.ARROWDOWN) editor.markerDefine (MARKEROUTLINE.FOLDERSUB, MARKERSYMBOL.EMPTY) editor.markerDefine (MARKEROUTLINE.FOLDERTAIL, MARKERSYMBOL.EMPTY) ...
then Restart Npp. So can even define custom marker symbols or just set them all to EMPTY.
see: | https://community.notepad-plus-plus.org/topic/16861/keyboard-map-chords-e-g-ctrl-x-ctrl-c | CC-MAIN-2019-43 | refinedweb | 2,041 | 56.55 |
On Wed, Feb 1, 2017 at 6:31 AM, Erik Rijkers <e...@xs4all.nl> wrote: > On 2017-02-01 09:27, Corey Huinker wrote: > >> 0001.if_endif.v4.diff >> > > A few thoughts after a quick try: > > I dislike the ease with which one gets stuck inside an \if block, in > interactive mode. > > (for instance, in my very first session, I tried '\? \if' to see if > there is more info in that help-screen, but it only displays the normal > help screen. But after that one cannot exit with \q anymore, and there > is no feedback of any kind (prompt?) in which black hole one has ended up. > Only a \endif provides rescue.) >
Good find. I'll have to bulk up the help text. This raises a question: in interactive mode, should we give some feedback as to the result of an \if or \elif test? (see below) > > Therefore making it possible to break out of \if-mode with Ctrl-C would be > an improvement, I think. > I would even prefer it when \q would exit psql always, even from within > \if-mode. > This whole thing got started with a \quit_if <expr> command, and it was pointed out that \if :condition \q \endif SELECT ... FROM ... would be preferable. So I don't think we can do that. At least not in non-interactive mode. As for CTRL-C, I've never looked into what psql does with CTRL-C, so I don't know if it's possible, let alone desirable. > > Also, shouldn't the prompt change inside an \if block? > That's a good question. I could see us finding ways to print the t/f of whether a branch is active or not, but I'd like to hear from more people before diving into something like that. | https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg303968.html | CC-MAIN-2021-04 | refinedweb | 298 | 82.95 |
jGuru Forums
Posted By:
Claude_Devarenne
Posted On:
Friday, December 14, 2001 01:21 PM
Hi,
I am new to all of this and I tried to build the Java HTML example and got the
following error:
Main.java:18: cannot access TokenBuffer
bad class file:
C:srcjavaantlr-2.7.antlrTokenBuffer.class
class file contains wrong class: antlr.TokenBuffer
Please remove or make sure it appears in the correct subdirectory of the classpath.
TokenBuffer buffer = new TokenBuffer(lexer);
^
1 error
I tried also the HTML grammar posted under the resources page at antlr.org and got the same result.
I built and ran other examples with no problem.
Thank you in advance for your help.
Re: problem with TokenBuffer class
Posted By:
Claude_Devarenne
Posted On:
Friday, December 14, 2001 04:27 PM
import antlr.TokenBuffer;import antlr.TokenStreamException;import antlr.TokenStreamIOException;
This even though the file Main.javaalready had
import antlr.*;
I am not entirely statisfied with thissolution so I'll try to track it down and figure out what is going on. | http://www.jguru.com/forums/view.jsp?EID=585430 | CC-MAIN-2014-52 | refinedweb | 172 | 50.02 |
The Update API (previously referred to in the Protocol v3.0 Developer's Guide as the Safe Browsing API) is an experimental API that allows client applications to check URLs against Google's constantly updated blacklists of suspected phishing, malware, and unwanted software pages. Your client application can use the API to download an encrypted table for local, client-side lookups of URLs.
This document describes the capabilities of the Update API, provides code samples for interacting with the API by sending HTTP messages to download lists or perform lookups. This document also includes information and examples on how to perform client-side lookups in a downloaded list.
Audience
This document is intended for programmers who want to use Google anti-phishing and anti-malware data to protect users from potentially malicious websites. It provides examples of basic data API interactions, guidelines for how the data may be accessed, and information on how the data can be used.
Overview
Google publishes phishing, malware, and unwanted software data in three separate blacklists (googpub-phish-shavar, goog-malware-shavar, and goog-unwanted-shavar). Each is a list of SHA-256 hash values that are usually truncated to a 4-byte hash prefix. The client MUST be able to handle both 4-byte prefixes and full 32-byte hashes. The client should keep a local copy of the lists and consult them for every URL that is to be scanned or visited. The client should store the lists as it receives them and make no attempt at converting a hashed list to plaintext. Clients MUST follow the protocol's requirements for update frequency, as this behavior is designed to prevent clients from overwhelming the service in case of excess demand on the service or the service recovering from failure. Clients MAY check for data less frequently than specified (e.g. for research purposes), but restrictions apply on how data may be used based on its age. Specifically, your application is not permitted to show warnings to end users unless they are based on current data, as defined in the Age of Data, Usage section. Also, if you do show warnings to end users, you must adhere to Google's guidelines for the warning text and provide appropriate attribution as discussed in End User Visible Warnings.
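The local-lookup flow described above can be sketched as follows. This is a simplified illustration only: it skips the URL canonicalization and expression-generation steps required by the protocol, and the stored prefix and helper names are hypothetical.

```python
import hashlib

# Hypothetical local store of 4-byte hash prefixes downloaded from the
# goog*-shavar lists (a real client persists these between sessions).
local_prefixes = {
    hashlib.sha256(b"malware.example.com/").digest()[:4],
}

def url_hash(expression: bytes) -> bytes:
    """Full SHA-256 hash of a canonicalized URL expression."""
    return hashlib.sha256(expression).digest()

def needs_full_hash_check(expression: bytes) -> bool:
    """True if the 4-byte prefix of the expression's hash matches a
    locally stored prefix.

    A prefix match is NOT a verdict: the client must then issue a
    full-length hash request and compare the returned 32-byte hashes
    before showing any warning.
    """
    return url_hash(expression)[:4] in local_prefixes

print(needs_full_hash_check(b"malware.example.com/"))  # prefix matches
print(needs_full_hash_check(b"benign.example.com/"))   # no match
```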
Key Differences From Previous Versions
Differences Between Version 2.2 and 3.0
Changes since version 2.2:
- shavar chunk data is now encoded using Protocol Buffers for improved efficiency.
- shavar chunks no longer include host keys. Clients should request full-length hashes anytime a hash prefix matches the URL, subject to the caching behavior described below.
- The HTTP Response for Full-Length Hashes now includes optional metadata associated with each full hash.
- The goog-malware-shavar list uses the new metadata functionality to distinguish between types of sites and allow for more informative warnings. See metadata contents and how they apply to warnings.
- The caching semantics for full hashes have changed:
- The HTTP Response for Full-Length Hashes now includes an expiration time in the response. As a result of this change, these requests will no longer return a 204 response.
- Clients must clear cached full-length hashes each time they send an update request.
- Clients migrating from version 2.2 to version 3.0 need to make sure that any previously-received full hashes follow these guidelines. These clients may need to clear their database if there is no other way to accomplish this.
- Full-length hashes obtained in shavar add chunks must also be verified via a full-length hash request prior to showing a warning. As with hash prefixes, the response will include an expiration time that specifies how long it may be cached.
- Message Authentication Code (MAC) support has been removed in favor of HTTPS. Requesting a MAC is not allowed when using pver=3.0, though it is still supported for older protocol versions. HTTPS is required for pver 3.0.
- The format of API keys has changed. API keys are now managed through the Google Developers Console, as described in Getting Started. Note that the CGI parameter is now called key.
- The URLs in Reporting Incorrect Data section have been updated. Note that we now recommend using HTTPS for these URLs.
Differences between Version 1 and Version 2
Version 1 of the update protocol is inefficient and not scalable. Caveats for version 1 of the protocol include:
- It does not support partial list updates unless a client has a recent version of the list already fully downloaded. A new client must download the entire list of phishing entries at once or else it will never get any data. As a result, some clients using slow connections take a very long time to download the full list, the request times out, and they never download anything.
- It sends phishing data to the client in oldest to newest order, which is inefficient for phishing sites since they have a very short lifetime.
- Expiring old entries requires listing them in updates, which actually consumes bandwidth.
- Clients only rarely find a match with any given listed pattern, so sending all the data is somewhat wasteful.
To address the above concerns, we have implemented a new version of the protocol, v2. The key differences are:
- The list is comprised of a series of "chunks" rather than a single versioned list.
- Updating now involves sending the list of chunks you have, and getting back a list of URLs that you should fetch for more data.
- The majority of data in the chunks are 32-bit truncated hashes (the first 32 bits of a SHA-256 hash). When you find a match, you send this 32-bit fragment to Google and get back a full list of 256-bit hashes.
Getting Started
To interact with the Safe Browsing service, you need to enable the Safe Browsing API in the Google Developers Console and get an API key to authenticate as an API user. You will pass this key as a CGI parameter in your HTTP requests to the server. Follow these steps to enable the API and get an API key:
- Open the Google Developers Console API Library.
- From the project drop-down, select a project or create a new one.
- In the Google APIs tab, search for and select the Safe Browsing API, then click Enable API.
- Next, in the sidebar on the left select Credentials.
- Select the Create credentials drop-down, then choose API key.
- Depending on your application, from the Create a new key pop-up, select Browser key or Server key.
- Enter a name for the key, set up the optional referrers or IP addresses, then click Create. Your key is created and displayed in a pop-up window. The key is also listed on the Credentials page.
If you need more help, check out the Google Developers Console Help Center.
See HTTP Request for Data for information on how to start downloading Safe Browsing updates.
Protocol Basics
Version 3 of the update protocol has the following characteristics:
- Each list type has one canonical list divided into chunks. Each chunk is assigned a unique identifier and describes entries to be added or removed from the blacklist.
- Clients can recommend a preferred download size, but that request is not guaranteed to be honored by the server.
- Clients inherently perform partial updates each time they connect, and the server will send the most valuable data to a client first. This could, for example, be the most recent data.
- The chunk structure is determined by the list type. Currently, all of the lists contain hashed expressions.
- Chunks that contain hash values do not necessarily contain the full hash; they are often only a prefix for that hash. A second request (a gethash request) can be issued to get the list of full-length hashes that start with the prefix.
- Within each chunk, all hash prefixes are the same length, but different chunks may contain prefixes of different lengths.
As with the previous protocol, the v3 protocol supports many different blacklists or whitelists. List names are in the form "provider-type-format", such as "googpub-phish-shavar", where the format component describes the contents of the list. (See List Contents for details.)
The lists are divided into chunks, the smallest unit of data that will be sent to the client. This allows for supporting partial updates to all users, including new users, and allows for more flexibility in choosing which data to send the client. The actual chunk size is determined by the server.
There are two kind of chunks:
- "add" chunks contain new entries for the list.
- "sub" chunks contain entries that need to be removed from the client's list.
Chunks are assigned a number, which is a sequence number for chunks of the same type. For example, for a given list, there will be:
- "Add" chunk #1, "add" chunk #2,..., "add" chunk #N.
- "Sub" chunk #1, "sub" chunk #2,..., "sub" chunk #M.
- The total number of "add" and "sub" chunks will generally be different.
- There is no chunk number 0. Chunk numbers start with 1.
- Chunk numbers within the same chunk type increase monotonically.
For a blacklist, "add" chunks contain the new hashes to add to the blacklist and "sub" chunks contain the false positives that need to be removed from the client's blacklist.
The server does not explicitly list all hashes that need to be removed. Instead, to save bandwidth, the server indicates which chunks need to be deleted by specifying a previously-seen "add" or "sub" chunk number.
Protocol Specification
The client-server exchange uses a simple pull model: the client connects regularly to the server and pulls updates. The data exchange can be summarized as follows:
- The client sends an HTTP POST request to the server and specifies which lists it wants to download. It indicates which chunks it already has. It specifies the desired download size.
- The server replies with an HTTP status code and an HTTP response. If there is any data, the response contains the chunk data URLs for the various requested lists.
Besides the data exchange, the server provides a way for the client to discover which lists are available.
R-BNF
This document uses a R-BNF notation to specify the format of requests and responses. This notation is a mix of Extended BNF and PCRE-style regular expressions:
- Rules are in the form: name = definition. Rule names are referenced as-is in the definition. Angle brackets may be used to make rule names easier to distinguish.
- Literals are surrounded by quotation marks: "literal".
- Sequences: (rule1 rule2) or simply rule1 rule2.
- Alternatives groups: (rule1 | rule2).
- Optional groups: [rule].
- Repetition: rule* means 0 or more of this rule or this group.
- Repetition: rule+ means 1 or more of this rule or this group.
The following basic rules that describe the US-ASCII character set are also used as defined in RFC 2616:
- UPALPHA = <any US-ASCII uppercase letter "A".."Z">
- LOALPHA = <any US-ASCII lowercase letter "a".."z">
- ALPHA = UPALPHA | LOALPHA
- DIGIT = <any US-ASCII digit "0".."9">
- UNRESERVED = ALPHA | DIGIT | "-" | "_" | "." | "!" | "~" | "*" | "'" | "(" | ")"
- CR = <US-ASCII CR, carriage return (13)>
- LF = <US-ASCII LF, line-feed (10)>
- <"> = <US-ASCII double-quote mark (34)>
- EOF = End of File / End of Stream
HTTP Request for List
Clients use this to discover the available list types.
Request's URL
The client performs an HTTP POST request to a URL of the following form:

https://safebrowsing.google.com/safebrowsing/list?client=CLIENTID&key=APIKEY&appver=CLIENTVER&pver=3.0

where client identifies the client implementation, key is the API key obtained in Getting Started, appver is the client's version number, and pver is the protocol version.
There is no body content for this request—any body data will be ignored by the server.
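As a concrete illustration, the request URL can be assembled as below. The endpoint host/path and the parameter names (client, key, appver, pver) reflect the v3.0 service as deployed at the time of writing; treat them as assumptions and verify them against the current documentation before relying on them.

```python
from urllib.parse import urlencode

# Assumed v3.0 list endpoint (verify against current documentation).
LIST_ENDPOINT = "https://safebrowsing.google.com/safebrowsing/list"

def build_list_url(client_id: str, api_key: str, client_ver: str) -> str:
    params = {
        "client": client_id,   # identifies the client implementation
        "key": api_key,        # API key from the Developers Console
        "appver": client_ver,  # client version string
        "pver": "3.0",         # protocol version
    }
    return LIST_ENDPOINT + "?" + urlencode(params)

print(build_list_url("myapp", "APIKEY", "1.0"))
# The client would then POST to this URL with an empty body, e.g. with
# urllib.request.urlopen(Request(url, data=b"", method="POST")).
```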
HTTP Response for List

Response Code

The server generates one of the following HTTP status codes:

- 200: OK—Data is available in the HTTP response body.
- 401: Not Authorized—The client id is invalid.
- 503: Service Unavailable—The server cannot handle the request. Clients MUST follow the backoff behavior specified in the Request Frequency section.
- 505: HTTP Version Not Supported—The server CANNOT handle the requested protocol major version.
Response Body
There is no data in the response body for codes in 3xx, 4xx, and 5xx.
The response body may be empty. When present, the response body contains the name of each list that this client can access. Formal R-BNF description of the response body:
BODY = (LISTNAME LF)* EOF LISTNAME = (LOALPHA | DIGIT)+ "-" LOALPHA+ "-" (LOALPHA | DIGIT)+
Example:
googpub-phish-shavar
goog-malware-shavar
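Parsing this response is straightforward: one list name per LF-terminated line. The regular expression below is a direct translation of the LISTNAME rule; the function name is illustrative.

```python
import re

# LISTNAME = (LOALPHA | DIGIT)+ "-" LOALPHA+ "-" (LOALPHA | DIGIT)+
LISTNAME_RE = re.compile(r"^[a-z0-9]+-[a-z]+-[a-z0-9]+$")

def parse_list_response(body: str):
    """Return the valid list names found in a list-request response body."""
    names = []
    for line in body.split("\n"):
        if LISTNAME_RE.match(line):
            names.append(line)
    return names

print(parse_list_response("googpub-phish-shavar\ngoog-malware-shavar\n"))
# → ['googpub-phish-shavar', 'goog-malware-shavar']
```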
HTTP Request for Data
Clients use this to get new data for known list types.
The request body is used to specify what the client has and wants:
- The client optionally specifies the maximum size of the download it wants to retrieve.
- The client specifies which lists it wants to retrieve.
- For each list, the client specifies the chunk numbers it already has.
The format of the body is line-oriented. Lines are separated by LF. Lines that cannot be understood are ignored by the server.
Formal R-BNF description of the request body:
BODY = [SIZE LF] (LIST LF)+ EOF SIZE = "s;" DIGIT+ # Optional size, in kilobytes and >= 1 LIST = LISTNAME ";" LISTINFO (":" LISTINFO)* LISTINFO = CHUNKTYPE ":" CHUNKLIST LISTNAME = (LOALPHA | DIGIT)+ "-" LOALPHA+ "-" (LOALPHA | DIGIT)+ CHUNKTYPE = "a" | "s" # 'Add' or 'Sub' chunks CHUNKLIST = (RANGE | NUMBER) ["," CHUNKLIST] NUMBER = DIGIT+ # Chunk number >= 1 RANGE = NUMBER "-" NUMBER
Note that the last line of the body MUST have a trailing line-feed.
Clients must collapse consecutive chunk numbers into a RANGE to reduce the request size.
The size request is optional. If present, the number indicates the ideal maximum response size, in kilobytes, that the server should return. The size is used as a hint by the server; the actual reply size may vary and could be larger or smaller than the ideal size specified by the client.
We strongly recommend that clients omit the size field unless they have a special need to limit the response size. Clients who are operating on a small bandwidth, such as a modem, may want to use the size field to limit the response size. However, doing so may cause the client to permanently lag behind. If unsure, clients should omit the size field and let the server decide the appropriate response size.
Example 1:
googpub-phish-shavar;a:1-3,5,8:s:4-5 acme-white-shavar;a:1-7:s:1-2
In this example, the client requests data for two lists. It then lists the chunks it already has for each list type.
Example 2:
s;200 googpub-phish-shavar;a:1-3,5,8:s:4-5 acme-white-shavar;a:1-7:s:1-2
In this example, the client requests a response size of 200 kilobytes for the two given lists. It then lists the chunks it already has for each list type.
Note that at first, the client has no data, so it has no chunk number on its side. If a client does not have any chunks of a type, it should not list the corresponding chunk type. Example (inline comments start after a # and are not part of the protocol):
googpub-phish-shavar;a:1-5 # The client has 'add' chunks but no 'sub' chunks acme-malware-shavar; # The client has no data for this list.
Examples of good chunk lists:
googpub-phish-shavar;a:1-5,10,12:s:3-8 googpub-phish-shavar;a:1-5,10,12,15,16 googpub-phish-shavar;a:16-20,3-5,1
Examples of bad chunk lists:
googpub-phish-shavar # Missing ; at end of list name googpub-phish-shavar;a:1,2,3,4,5,10,12,15,16 # Chunk range is not collapsed googpub-phish-shavar;5-7,16-10 # Missing 'a:' or 's:' for chunk type googpub-phish-shavar;a:5-4,16-10 # Invalid range (high < low) googpub-phish-shavar;a:5-7:s: # Missing chunk numbers for 's:'
Server Behavior:
- The server MUST reject a request with an empty body.
- The server MUST ignore ill-formated lines and MUST reply to the correctly formatted ones.
- The server SHALL try to accommodate the desired response size. The requested size takes into account only chunk data, not any metadata.
- However, if the desired size is less than at least one chunk, the server MUST send at least one chunk.
Client Behavior:
- The client MUST request at least one list.
HTTP Response for Data

Response Code

The server generates one of the following HTTP status codes:

- 200: OK—Data is available in the HTTP response body.
- 400: Bad Request—The HTTP request was not correctly formed or the body did not contain any meaningful entries.
- 403: Forbidden—The client id or API key is invalid or unauthorized.
Response Body

On success, the response body contains:

- The next polling interval to use; that is, the number of seconds before the client should contact the server again.
- For each list, its name followed by redirect URLs containing chunk data.
The response body is line-oriented. Formal R-BNF description of the response body:
BODY = NEXT LF (RESET | (LIST LF)+) EOF NEXT = "n:" DIGIT+ # Minimum delay before polling again in seconds RESET = "r:pleasereset" LIST = "i:" LISTNAME LF LISTDATA LISTNAME = (LOALPHA | DIGIT | "-")+ # e.g. "googpub-phish-shavar" LISTDATA = ((REDIRECT_URL | ADDDEL-HEAD | SUBDEL-HEAD) LF)* REDIRECT_URL = "u:" URL URL = Defined in RFC 1738 (the scheme is omitted; see below) ADDDEL-HEAD = "ad:" CHUNKLIST SUBDEL-HEAD = "sd:" CHUNKLIST CHUNKLIST = (RANGE | NUMBER) ["," CHUNKLIST] NUMBER = DIGIT+ # Chunk number >= 1 RANGE = NUMBER "-" NUMBER
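A parser for this body, following the grammar above, might look like the sketch below (names are illustrative and error handling is omitted; the grammar guarantees that an "i:" line precedes the "u:", "ad:", and "sd:" lines for its list).

```python
def parse_data_response(body: str):
    """Parse a data-request response into the next-poll delay, a reset
    flag, and per-list redirect URLs / adddel / subdel instructions."""
    result = {"next": None, "reset": False, "lists": {}}
    current = None
    for line in body.splitlines():
        tag, _, value = line.partition(":")
        if tag == "n":
            result["next"] = int(value)          # seconds until next poll
        elif tag == "r" and value == "pleasereset":
            result["reset"] = True               # clear the whole database
        elif tag == "i":
            current = result["lists"].setdefault(
                value, {"urls": [], "adddel": [], "subdel": []})
        elif tag == "u":
            current["urls"].append(value)        # fetch via HTTPS, in order
        elif tag == "ad":
            current["adddel"].append(value)      # add chunks to delete
        elif tag == "sd":
            current["subdel"].append(value)      # sub chunks to delete
    return result

# Hypothetical response; the redirect host is invented for illustration.
resp = parse_data_response(
    "n:1200\ni:googpub-phish-shavar\nu:cache.example.com/first_redirect\nad:1-2\n")
print(resp["next"], resp["lists"]["googpub-phish-shavar"]["urls"])
```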
A reset response from the server means to clear out all current data in the database before requesting again.
The response doesn't actually contain the data associated with the lists; instead, it tells you where to find the data via redirect URLs. These URLs should be visited in the order that they are given, and if an error is encountered fetching any of the URLs, then the client must NOT fetch any URL after that. Parallel fetching is NOT allowed.
REDIRECT_URL, ADDDEL-HEAD, and SUBDEL-HEAD can be presented in any order. They can even be intermixed. Clients MUST NOT rely on any particular ordering.
The adddel and subdel chunks are used to expire previous add and sub chunks. Consequently, they have no associated chunk data. More than one chunk can be specified, either by listing each number, using a range, or a combination of both. When an add chunk is deleted, the client can delete the data associated with that chunk. Clients should no longer report that they have received that chunk. When a sub chunk is deleted, the client no longer needs to keep track of the removals for any unreceived add chunks, and no longer reports that it received that sub chunk in the past.
There may not be any chunks of a given type. In this case, no redirect URLs will contain the given chunk type, and there will be no adddels or subdels in the response.
The format for each redirect URL is a host and path, for example "example.com/redirect/123". The client should use HTTPS to fetch the URL.
Formal R-BNF description of redirect response:
BODY = (UINT32 CHUNKDATA)+ UINT32 = Unsigned 32-bit integer in network byte order. CHUNKDATA = Encoded ChunkData protocol message, see below.
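Splitting a redirect response into its length-prefixed messages is a simple framing loop; here is a sketch (decoding each ChunkData payload is left to a protobuf parser, and the payload bytes in the demo are placeholders rather than real encoded messages):

```python
import struct

def iter_chunk_payloads(data: bytes):
    """Yield the raw ChunkData payloads from a redirect response.

    Each message is preceded by an unsigned 32-bit length in network
    byte order (big-endian).
    """
    offset = 0
    while offset < len(data):
        (length,) = struct.unpack_from(">I", data, offset)
        offset += 4
        yield data[offset:offset + length]
        offset += length

# Two framed payloads, b"abc" and b"de":
stream = struct.pack(">I", 3) + b"abc" + struct.pack(">I", 2) + b"de"
print(list(iter_chunk_payloads(stream)))  # → [b'abc', b'de']
```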
The format for add and sub chunks is exactly the same. A length is given first, which specifies the size of the ChunkData message that immediately follows.
Following this length is an encoded ChunkData protocol message, which is defined as follows:
// Chunk data encoding format for the shavar-proto list format. message ChunkData { required int32 chunk_number = 1; // The chunk type is either an add or sub chunk. enum ChunkType { ADD = 0; SUB = 1; } optional ChunkType chunk_type = 2 [default = ADD]; // Prefix type which currently is either 4B or 32B. The default is set // to the prefix length, so it doesn't have to be set at all for most // chunks. enum PrefixType { PREFIX_4B = 0; FULL_32B = 1; } optional PrefixType prefix_type = 3 [default = PREFIX_4B]; // Stores all SHA256 add or sub prefixes or full-length hashes. The number // of hashes can be inferred from the length of the hashes string and the // prefix type above. optional bytes hashes = 4; // Sub chunks also encode one add chunk number for every hash stored above. repeated int32 add_numbers = 5 [packed = true]; }
See the protocol buffers documentation for details on how to generate language-specific bindings from this message definition, which can be used to parse the message.
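Before the generated bindings can decode each ChunkData message, the client has to split the redirect body into its length-prefixed messages. A minimal sketch (the function name is ours, and it assumes the whole body is in memory):

```python
import struct

def iter_chunkdata(body):
    """Yield each encoded ChunkData message from a redirect response body.

    The body is a sequence of (big-endian UINT32 length, message bytes)
    pairs; each yielded value can then be parsed with the generated
    protocol buffer bindings.
    """
    pos = 0
    while pos < len(body):
        (length,) = struct.unpack_from(">I", body, pos)
        pos += 4
        yield body[pos:pos + length]
        pos += length
```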
Add and sub chunks can be presented in any order. They can even be intermixed. The order of the chunks depends on the implementation of the server and the clients MUST NOT rely on any empirical behavior. Moreover, the sequence order in which chunks of the same type are present in the stream is not guaranteed.
A chunk's
hashes may be empty. In this case, the prefix size
will still be set, but will have no meaning. Chunks may be given this way to
prevent fragmentation of chunk numbers and reduce request size.
In the case of an empty add chunk, it's possible that the client has or will receive a sub chunk that contains an expression that points to the empty add. In this case, the client is allowed to drop the sub expression.
The client may receive an empty chunk after previously receiving a non-empty version of the same chunk number. In this situation, no action is needed by the client. The prefix size of the empty chunk may not match the originally received chunk.
A sub chunk may refer to an add chunk that the client has not yet received. In this situation, the client must keep track of the pending removal, and apply it if the referenced add chunk is received in the future.
Example:
n:1200 i:googpub-phish-shavar u:cache.google.com/first_redirect_example sd:1,2 i:acme-white-shavar u:cache.google.com/second_redirect_example ad:1-2,4-5,7 sd:2-6
Contents of first_redirect_example (contents shown in text format for demonstration purposes):
ChunkData < chunk_number: 4 // chunk_type not set, default value of ADD // prefix_type not set, default value of PREFIX_4B hashes: 0x1122334455667788 // 2 4-byte hash prefixes > ChunkData < chunk_number: 3 chunk_type: SUB // prefix_type not set, default value of PREFIX_4B hashes: 0x1212343445456767 // 2 4-byte hash prefixes add_numbers: 3 4 // an add chunk number for each prefix > ChunkData < chunk_number: 6 // chunk_type not set, default value of ADD // prefix_type not set, default value of PREFIX_4B // empty hashes >
Contents of second_redirect_example:
ChunkData < chunk_number: 10 // chunk_type not set, default value of ADD // prefix_type not set, default value of PREFIX_4B hashes: 0x0011998800119977 // 2 4-byte hash prefixes >
In this example, there are no adddel chunks for the "googpub-phish-shavar" list, and there are no sub chunks for the "acme-white-shavar" redirect response.
Server Behavior:
- The server CAN change the "next" value (i.e. "n:" line) for each response.
Client Behavior:
- The client MUST respect the "next" value and not contact the server again until the specified delay has expired. See the Request Frequency section for more information on how often the server can be contacted after replying with an HTTP error code.
- The client MUST ignore a line starting with a keyword that it doesn't understand.
- If a redirect request returns an error code, the client MUST perform backoff behavior as indicated in the Request Frequency section.
- A client MUST perform a download request again if a redirect request returns an error.
- The client SHOULD keep all data delivered prior to a bad request.
- The client MUST refuse to use the whole response if any of the adddel and subdel metadata headers, or the encoded chunk data, cannot be parsed successfully.
- Upon successful decoding of all the response and all the binary data, the client MUST update its lists in an atomic fashion.
List Contents
The content of each chunk depends on its list type. Currently, the possible lists are:
- googpub-phish-shavar: A list of hashed suffix/prefix expressions representing sites that should be blocked, because they are hosting or redirecting to phishing pages.
- goog-malware-shavar: A list of hashed suffix/prefix expressions representing sites that should be blocked, because they are hosting or redirecting to malware.
- goog-unwanted-shavar: A list of hashed suffix/prefix expressions representing sites that should be blocked, because they are hosting or redirecting to unwanted software pages.
The "shavar" (short for "Variable-length SHA256") list type relies on suffix/prefix expressions. Each of the suffix/prefix expressions consists of a host suffix (or full host) and a path prefix (or full path). The path prefix consists of full path components. If the expression contains the full path, there may optionally be query parameters appended to the path.
For a more complete description of suffix/prefix expressions, see the Suffix/Prefix Expression Lookup section.
shavar List Format
For the "shavar" list format, hash prefixes are used to reduce bandwidth. A hash prefix is some number of the most significant bytes of a full-length, 256-bit hash. Each ChunkData message contains zero or more hash prefixes, and indicates the length of the hash prefixes in that chunk.
Examples of "shavar" hashes based on the examples from FIPS-180-2:
- Example B1:
- Input is "abc".
- SHA 256:
- Input is "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq".
- SHA 256.
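As a sanity check, the FIPS-180-2 example input "abc" and its 4-byte prefix can be computed with any SHA-256 implementation, for instance Python's hashlib:

```python
import hashlib

full_hash = hashlib.sha256(b"abc").digest()  # 32-byte full-length hash
prefix = full_hash[:4]                       # 4-byte hash prefix (PREFIX_4B)

print(full_hash.hex())  # ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
print(prefix.hex())     # ba7816bf
```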
HTTP Request for Full-Length Hashes
A client may request the list of full-length hashes for a hash prefix. This usually occurs when a client is about to download content from a URL whose calculated hash starts with a prefix listed in a blacklist. See the Lookup section for details.
Client Behavior:
- The client MUST specify the client, appver, and pver CGI parameters. If client is "api" (it should be unless we explicitly tell you otherwise), you MUST also specify the key parameter.
Request Body
The request body specifies the list of hash prefixes for which the client should receive full-length hashes.
Formal R-BNF description of the request body:
BODY = HEADER LF PREFIXES EOF HEADER = PREFIXSIZE ":" LENGTH PREFIXSIZE = DIGIT+ # Size of each prefix in bytes LENGTH = DIGIT+ # Size of PREFIXES in bytes PREFIXES = <LENGTH number of unsigned bytes> # PREFIXSIZE prefixes in binary
PREFIXES is a list of PREFIXSIZE values. Note that the server returns the full hash for any matching prefixes given in the request. There may be 0 or more matches for each prefix given.
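A hypothetical helper for building this body from binary hash prefixes (the function name and interface are ours, not part of the spec):

```python
def build_gethash_body(prefixes, prefix_size=4):
    """Build the gethash request body from binary hash prefixes.

    All prefixes in a single request must be the same size.
    """
    assert all(len(p) == prefix_size for p in prefixes)
    data = b"".join(prefixes)
    # HEADER = PREFIXSIZE ":" LENGTH, followed by LF and the raw prefixes
    header = "%d:%d\n" % (prefix_size, len(data))
    return header.encode("ascii") + data
```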
HTTP Response for Full-Length Hashes
The server replies using the status code and response body of the HTTP response. No specific HTTP headers are set by the server—some HTTP headers MAY be present but are not authoritative.
Response Code
The server generates the following HTTP status codes:
- 200: OK—Data is available in the HTTP response body.
- 400: Bad Request—The HTTP request was not correctly formed. The client did not provide all required CGI parameters.
- 403: Forbidden—The client id is invalid.
Response Body
The body of the response contains:
- The cache lifetime of the response.
- The number of hash entries starting with the requested prefix in decimal.
- The matching full-length hashes.
- Optional metadata for the full-length hashes.
Formal R-BNF description of the response body:
BODY = CACHELIFETIME LF HASHENTRY* EOF CACHELIFETIME = DIGIT+ HASHENTRY = LISTNAME ":" HASHSIZE ":" NUMRESPONSES [":m"] LF HASHDATA (METADATALEN LF METADATA)* HASHSIZE = DIGIT+ # Length of each full hash NUMRESPONSES = DIGIT+ # Number of full hashes in HASHDATA HASHDATA = <HASHSIZE*NUMRESPONSES number of unsigned bytes> # Full length hashes in binary METADATALEN = DIGIT+ # Length of METADATA METADATA = <METADATALEN number of unsigned bytes> # Parsing depends on list. See below.
HASHDATA is grouped by LISTNAME. Clients must ignore any unrecognized list names in the response.
CACHELIFETIME specifies for how many seconds this response is valid.
Each hash in the response MUST be cached as a full-length hash for the requested prefix in the list indicated in the response, until either the cache lifetime elapses or the client restarts.
If there is a valid cached response for a hash prefix, then the client MUST not send any further full-length hash requests for that prefix.
Metadata contains additional list-specific information. If ":m" is present in the response header, then a metadata entry will be included for each full hash in HASHDATA. The metadata entries are provided in the same order as the corresponding full hashes; for example, the second metadata entry corresponds to the second full hash. The metadata entries are variable-sized, so each metadata entry includes its length (METADATALEN), followed by a newline, then the metadata payload. See Full Hash Metadata for a further description of how to interpret the metadata. Metadata MUST be cached along with the full-length hashes in the response.
If no hashes start with the requested prefix, the response body will contain only the cache lifetime. This is expected and may occur if a client has not yet downloaded an update to a list that deletes the requested prefix. In this situation, the client should still cache the response and refrain from sending further full-length hash requests for this prefix until the cache lifetime has expired.
Example Responses (line breaks indicate an LF byte):
600
googpub-phish-shavar:32:1
01234567890123456789012345678901

This response contains a single 32-byte full hash (01234567890123456789012345678901) in googpub-phish-shavar, with no metadata. The cache lifetime is 10 minutes.
900
goog-malware-shavar:32:2:m
01234567890123456789012345678901987654321098765432109876543210982
AA3
BBBgoogpub-phish-shavar:32:1
01234567890123456789012345678901

This response contains 2 32-byte full hashes (01234567890123456789012345678901 and 98765432109876543210987654321098) in goog-malware-shavar. The first entry has metadata "AA", the second has metadata "BBB". The response also contains a single 32-byte full hash (01234567890123456789012345678901) in googpub-phish-shavar, with no metadata. The cache lifetime for all entries is 15 minutes.
900

This response indicates that no full hashes matched the given prefix. The empty result should be cached for 15 minutes.
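As a non-authoritative sketch, responses in this format can be parsed as follows (the function name and return shape are ours):

```python
def parse_gethash_response(body):
    """Parse a gethash response body into (cache_lifetime, {list_name: entries}).

    Each entry is a (full_hashes, metadata) pair; metadata is an empty
    list when the ":m" flag is absent from the HASHENTRY header.
    """
    lifetime_line, _, rest = body.partition(b"\n")
    lifetime = int(lifetime_line)
    results = {}
    while rest:
        header, _, rest = rest.partition(b"\n")
        fields = header.split(b":")
        name, size, num = fields[0].decode(), int(fields[1]), int(fields[2])
        # HASHDATA: num full hashes of size bytes each, back to back
        hashes = [rest[i * size:(i + 1) * size] for i in range(num)]
        rest = rest[num * size:]
        metadata = []
        if header.endswith(b":m"):
            # One (METADATALEN LF METADATA) entry per full hash
            for _ in range(num):
                mlen_line, _, rest = rest.partition(b"\n")
                mlen = int(mlen_line)
                metadata.append(rest[:mlen])
                rest = rest[mlen:]
        results.setdefault(name, []).append((hashes, metadata))
    return lifetime, results
```

Running this on the first example above returns a 600-second lifetime and one 32-byte hash for googpub-phish-shavar; on the last example it returns (900, {}).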
Full Hash Metadata
This section describes the Safe Browsing lists that include accompanying metadata, and how to parse that metadata. Any lists not mentioned here do not currently return metadata.
goog-malware-shavar
The metadata for the
goog-malware-shavar list is an encoded
protocol buffer,
as follows:
message MalwarePatternType { enum PATTERN_TYPE { LANDING = 1; DISTRIBUTION = 2; } required PATTERN_TYPE pattern_type = 1; }
PATTERN_TYPE is used to target end-user warnings more precisely. See "End-User Visible Warnings".
Request Frequency
In order to ensure high availability of the API, Google limits the frequency of client requests. This is handled differently depending on the type of request.
HTTP Request for Data
When requesting a download of data from the server, two mechanisms are available to control request frequency:
- In its response, the server gives an update interval; that is, the delay in seconds before the next connection attempt should occur.
- The client watches for timeouts or HTTP errors (specifically HTTP response codes 3xx, 4xx or 5xx) from the server. If too many errors occur, it increases the time between requests. For example, a request returning an error code may be repeated 2 times in 2 minutes, and then not again for 30-60 minutes.
Client Behavior:
- The first request for data MUST happen at a random interval between 0 and 5 minutes after the client starts.
- After that, each update MUST happen at the update interval last specified by the server.
Client Behavior on error or timeout:
- If the client receives an error during update, it MUST try again in one minute.
- If it receives two errors in a row, it MUST continue to skip updates for a period of time defined by the following formula:
30mins * (rand + 1), where rand is a random number between 0 and 1. Thus, depending on the value of rand, the client will skip updates for 30-60 minutes.
- If it receives another (3rd) error, it MUST skip updates for double the length of time. Thus, depending on the value of rand, the client will skip updates for 60-120 minutes.
- If it receives another (4th) error, it MUST skip updates for double the length of time. Thus, depending on the value of rand, the client will skip updates for 120-240 minutes.
- If it then receives another (5th) error, it MUST skip updates for double the length of time. Thus, depending on the value of rand, the client will skip updates for 240-480 minutes.
- For every error after that, it SHOULD continue to check once every 480 minutes until the server responds with a success message.
- Once the client receives successful HTTP replies, the error stats are reset.
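This schedule can be sketched as follows (illustrative only; a real client would draw rand once when entering backoff and keep doubling that same delay, rather than redrawing it on every call):

```python
import random

def update_backoff_minutes(consecutive_errors):
    """Minutes to wait before the next update request after repeated errors."""
    if consecutive_errors <= 1:
        return 1                                # first error: retry in one minute
    base = 30 * (random.random() + 1)           # 30-60 minutes for the 2nd error
    doublings = min(consecutive_errors - 2, 4)  # each further error doubles the delay
    return min(base * (2 ** doublings), 480)    # capped at 480 minutes
```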
HTTP Request for Full-Length Hashes
Clients should follow the caching requirements described in HTTP Response for Full-Length Hashes - Response Body. In addition, clients should handle errors and timeouts as follows:
- If a client receives 2 errors within 5 minutes, it enters backoff mode.
- After this point, if the client receives one non-error response, or the last error occurred at least 8 hours ago, it exits backoff mode.
- While in backoff mode, the client MUST not ping for at least a certain amount of time from the last error. This time changes exponentially up to a maximum of 2 hours.
- When the client receives the first error, it MUST not ping for at least 30 minutes from the last error.
- If it receives another error, the client MUST not ping for at least 1 hour.
- If it receives another error, the client MUST not ping for at least 2 hours.
- After that, the client MUST wait at least 2 hours between pings.
- The client has two options for tracking the granularity of errors. It can treat any error during a request for a full length hash equally, triggering backoff mode as specified above. Or it can track errors separately by unique hash prefix; that is, only gethash requests for that particular hash prefix should be skipped for the length of time specified above, extending with each additional error as specified.
Performing Lookups
Canonicalization
Before lookup in any list, the URL must be canonicalized.
We assume that the client has parsed the URL and made it valid according to RFC 2396. If the URL uses an internationalized domain name (IDN), it should be converted to its ASCII Punycode representation. The URL must include a path component; that is, it must have a trailing slash.
If the URL ends in a fragment, remove the fragment. For example, shorten '' to ''.
Next, canonicalize the hostname by extracting it from the URL and then:
- Remove all leading and trailing dots.
- Replace consecutive dots with a single dot.
- If the hostname can be parsed as an IP address, normalize it to 4 dot-separated decimal values. Handle any legal IP-address encoding, including octal, hex, and fewer than 4 components.
- Lowercase the whole string.
To canonicalize the path:
- The sequences "/../" and "/./" in the path should be resolved by replacing "/./" with "/" and removing "/../" along with the preceding path component.
- Runs of consecutive slashes should be replaced with a single slash character.

Suffix/Prefix Expression Lookup
Currently, all valid list types rely on suffix/prefix expressions, as described in List Contents. To perform a lookup for a given URL, the client will try to form different possible host suffix and path prefix combinations and see whether they match each list. Depending on the list type, the suffix/prefix combination may be hashed before lookup. These lookups only use the host and path components of the URL. The scheme, username, password, and port are disregarded. If the URL includes query parameters, the client will include a lookup with the full path and query parameters.
For the hostname, the client will try at most 5 different strings. They are:
- the exact hostname in the URL
- up to 4 hostnames formed by starting with the last 5 components and successively removing the leading component. The top-level domain can be skipped. These additional hostnames should not be checked if the host is an IP address.
For the path, the client will also try at most 6 different strings. They are:
- the exact path of the URL, including query parameters
- the exact path of the URL, without query parameters
- the 4 paths formed by starting at the root (/) and successively appending path components, including a trailing slash.
For example, for the URL "a.b.c/1/2.html?param=1", the client would try these eight strings: a.b.c/1/2.html?param=1, a.b.c/1/2.html, a.b.c/, a.b.c/1/, b.c/1/2.html?param=1, b.c/1/2.html, b.c/, and b.c/1/.
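Under the assumption that the URL is already canonicalized, the candidate expressions can be generated with a sketch like this (the function name and the simplistic IP-address check are ours):

```python
from urllib.parse import urlsplit

def lookup_expressions(url_without_scheme):
    """Generate candidate host-suffix/path-prefix expressions for a URL."""
    parts = urlsplit("http://" + url_without_scheme)
    host, path, query = parts.hostname, parts.path or "/", parts.query

    hosts = [host]
    labels = host.split(".")
    is_ip = all(label.isdigit() for label in labels)  # simplistic IPv4 check
    if not is_ip:
        # Up to 4 extra hostnames: start with the last 5 components and
        # successively remove the leading component.
        for i in range(max(len(labels) - 5, 1), len(labels) - 1):
            hosts.append(".".join(labels[i:]))

    paths = []
    if query:
        paths.append(path + "?" + query)  # exact path with query parameters
    paths.append(path)                    # exact path without query parameters
    components = [c for c in path.split("/") if c]
    if not path.endswith("/"):
        components = components[:-1]      # the trailing filename is not a prefix
    prefix = "/"
    prefixes = [prefix]                   # root plus at most 3 more components
    for component in components[:3]:
        prefix += component + "/"
        prefixes.append(prefix)
    paths.extend(p for p in prefixes if p not in paths)

    return [h + p for h in hosts for p in paths]
```

For "a.b.c/1/2.html?param=1" this produces the eight expressions from the example above.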
Age of Data, Usage
Applications that retrieve data using the API must never use data older than what is specified by the service. Specifically, a warning can only be shown if a URL matches a full-length hash obtained in a response to an HTTP Request for Full-Length Hashes, and the cached response is still valid as described in HTTP Response for Full-Length Hashes - Response Body at the time a warning is to be shown.
Important: Under no other circumstances may a warning be shown.
Acceptable Usage in Clients
Please note that if you violate the requirements detailed in Acceptable Usage in Clients, your key may be disabled for a period of time.
Usage Restrictions
A single API key can make requests for up to 10,000 clients per 24-hour period.
We limit the number of different clients you can support with a single API key. If you expect that more than 10,000 distinct clients per day will request updates, you must contact us to have your API key provisioned for additional capacity. We want to make sure that we have contact information for large users that may potentially affect the service and its availability. At the present time there is no cost for this. For further questions about large deployments, contact antiphish-malware-cap-req@google.com.
User Visible Warnings
If you use the Update API to show users warnings about specific URLs, you must make it clear to the user that the page is only suspected of phishing or of distributing malware or unwanted software, and that the warnings merely identify possible risk.
- In your user visible warning, you may not lead users to believe that the page in question is, without a doubt, a phishing page or a page that distributes malware or unwanted software. When you refer to the page being identified or the potential risks it may pose to users, you must qualify the warning using terms such as: suspected, potentially, possible, likely, may be.
- Your warning must enable the user to learn more by reviewing information at (for phishing warnings), (for malware warnings), or (for unwanted software warnings).
- When you show warnings for pages identified as risky by the Update API, you must give attribution to Google by including the line "Advisory provided by Google," with a link to the Safe Browsing Advisory. If your product also shows warnings based on other sources, you may not include the Google attribution in warnings derived from non-Google data.
Types of Malware sites

As of API version 3.0, goog-malware-shavar lists two different types of Malware sites: Landing sites and Distribution sites. Landing sites are gateways to malware. They are often hacked sites that include iframes, scripts, or redirects that load content from other sites that launch the actual attacks. Distribution sites are the sites that launch the attacks.

An API client that uses the Update API to show warnings in browsers can leverage this data to tailor which warnings to show, and when to show them. Such clients should show warnings in the following circumstances:

- A user browses directly to a page on a site of either type.
- A user browses to a page that includes any resource from a Distribution site.
- A user browses to a page that uses frames to include content from a Landing site.
Unwanted Software sites
As of our update on March 9, 2015, goog-unwanted-shavar lists landing sites for unwanted software. Landing sites are gateways as described above. For more information on unwanted software, please see our Unwanted Software Policy.
Suggested warning language:
Warning—The site ahead may contain harmful programs. Attackers might attempt to trick you into installing programs that harm your browsing experience (for example, by changing your homepage or showing extra ads on sites you visit). You can learn more about unwanted software at our Unwanted Software Policy.
Notice to Users About Phishing, Malware, and Unwanted Software Protection
Our Terms of Service require that if you indicate to users that your service provides malware, phishing, or unwanted software protection:
References
- RFC 2119 — Keywords for use in RFCs.
- RFC 2616 — Hypertext Transfer Protocol HTTP/1.1.
- Mozilla/Firefox Phishing Protection.
- FIPS-180-2 — SHA 256
- Protocol Buffers library | https://developers.google.com/safe-browsing/v3/update-guide?hl=es-419 | CC-MAIN-2018-30 | refinedweb | 6,513 | 62.88 |
Working with Excel Spreadsheets
Excel is a popular and powerful spreadsheet application for Windows. The
openpyxlmodule allows your Python programs to read and modify Excel spreadsheet files. For example, you might have the boring task of copying certain data from one spreadsheet and pasting it into another one. Or you might have to go through thousands of rows and pick out just a handful of them to make small edits based on some criteria. Or you might have to look through hundreds of spreadsheets of department budgets, searching for any that are in the red. These are exactly the sort of boring, mindless spreadsheet tasks that Python can do for you.
Although Excel is proprietary software from Microsoft, there are free alternatives that run on Windows, OS X, and Linux. Both LibreOffice Calc and OpenOffice Calc work with Excel's .xlsx file format for spreadsheets, which means the
openpyxlmodule can work on spreadsheets from these applications as well. You can download the software from and, respectively. Even if you already have Excel installed on your computer, you may find these programs easier to use. The screenshots in this chapter, however, are all from Excel 2010 on Windows 7.
Excel Documents
First, let’s go over some basic definitions: An Excel spreadsheet document is called a workbook. A single workbook is saved in a file with the .xlsx extension. Each workbook can contain multiple sheets (also called worksheets). The sheet the user is currently viewing (or last viewed before closing Excel) is called the active sheet.
Each sheet has columns (addressed by letters starting at A) and rows (addressed by numbers starting at 1). A box at a particular column and row is called a cell. Each cell can contain a number or text value. The grid of cells with data makes up a sheet.
Installing the openpyxl Module
Python does not come with OpenPyXL, so you’ll have to install it. Follow the instructions for installing third-party modules in Appendix A; the name of the module is
openpyxl. To test whether it is installed correctly, enter the following into the interactive shell:
>>> import openpyxl
If the module was correctly installed, this should produce no error messages. Remember to import the
openpyxl module before running the interactive shell examples in this chapter, or you’ll get a
NameError: name 'openpyxl' is not definederror.
This book covers version 2.3.3 of OpenPyXL, but new versions are regularly released by the OpenPyXL team. Don’t worry, though: New versions should stay backward compatible with the instructions in this book for quite some time. If you have a newer version and want to see what additional features may be available to you, you can check out the full documentation for OpenPyXL at.
Reading Excel Documents
The examples in this chapter will use a spreadsheet named example.xlsx stored in the root folder. You can either create the spreadsheet yourself or download it from. Figure 12-1 shows the tabs for the three default sheets named Sheet1, Sheet2, and Sheet3 that Excel automatically provides for new workbooks. (The number of default sheets created may vary between operating systems and spreadsheet programs.)
Figure 12-1. The tabs for a workbook’s sheets are in the lower-left corner of Excel.
Sheet 1 in the example file should look like Table 12-1. (If you didn't download example.xlsx from the website, you should enter this data into the sheet yourself.)
Table 12-1. The example.xlsx Spreadsheet
Now that we have our example spreadsheet, let’s see how we can manipulate it with the
openpyxl module.
Opening Excel Documents with OpenPyXL
Once you’ve imported the
openpyxl module, you’ll be able to use the
openpyxl.load_workbook() function. Enter the following into the interactive shell:
>>> import openpyxl >>> wb = openpyxl.load_workbook('example.xlsx') >>> type(wb) <class 'openpyxl.workbook.workbook.Workbook'>
The
openpyxl.load_workbook() function takes in the filename and returns a value of the
workbook data type. This
Workbook object represents the Excel file, a bit like how a
File object represents an opened text file.
Remember that example.xlsx needs to be in the current working directory in order for you to work with it. You can find out what the current working directory is by importing
os and using
os.getcwd(), and you can change the current working directory using
os.chdir().
Getting Sheets from the Workbook
You can get a list of all the sheet names in the workbook by calling the
get_sheet_names() method. Enter the following into the interactive shell:
>>> import openpyxl >>> wb = openpyxl.load_workbook('example.xlsx') >>> wb.get_sheet_names() ['Sheet1', 'Sheet2', 'Sheet3'] >>> sheet = wb.get_sheet_by_name('Sheet3') >>> sheet <Worksheet "Sheet3"> >>> type(sheet) <class 'openpyxl.worksheet.worksheet.Worksheet'> >>> sheet.title 'Sheet3' >>> anotherSheet = wb.active >>> anotherSheet <Worksheet "Sheet1">
Each sheet is represented by a
Worksheet object, which you can obtain by passing the sheet name string to the
get_sheet_by_name() workbook method. Finally, you can read the
active member variable of a
Workbook object to get the workbook’s active sheet. The active sheet is the sheet that’s on top when the workbook is opened in Excel. Once you have the
Worksheet object, you can get its name from the
titleattribute.
Getting Cells from the Sheets
Once you have a
Worksheet object, you can access a
Cell object by its name. Enter the following into the interactive shell:
>>> import openpyxl >>> wb = openpyxl.load_workbook('example.xlsx') >>> sheet = wb.get_sheet_by_name('Sheet1') >>> sheet['A1'] <Cell Sheet1.A1> >>> sheet['A1'].value datetime.datetime(2015, 4, 5, 13, 34, 2) >>> c = sheet['B1'] >>> c.value 'Apples' >>> 'Row ' + str(c.row) + ', Column ' + c.column + ' is ' + c.value 'Row 1, Column B is Apples' >>> 'Cell ' + c.coordinate + ' is ' + c.value 'Cell B1 is Apples' >>> sheet['C1'].value 73
The
Cell object has a
value attribute that contains, unsurprisingly, the value stored in that cell.
Cell objects also have
row,
column, and
coordinate attributes that provide location information for the cell.
Here, accessing the
value attribute of our
Cell object for cell B1 gives us the string
'Apples'. The
row attribute gives us the integer
1, the
column attribute gives us
'B', and the
coordinate attribute gives us
'B1'.
OpenPyXL will automatically interpret the dates in column A and return them as
datetime values rather than strings. The
datetime data type is explained further in Chapter 16.
Specifying a column by letter can be tricky to program, especially because after column Z, the columns start by using two letters: AA, AB, AC, and so on. As an alternative, you can also get a cell using the sheet’s
cell() method and passing integers for its
row and
column keyword arguments. The first row or column integer is
1, not
0. Continue the interactive shell example by entering the following:
>>> sheet.cell(row=1, column=2) <Cell Sheet1.B1> >>> sheet.cell(row=1, column=2).value 'Apples' >>> for i in range(1, 8, 2): print(i, sheet.cell(row=i, column=2).value) 1 Apples 3 Pears 5 Apples 7 Strawberries
As you can see, using the sheet’s
cell() method and passing it
row=1 and
column=2gets you a
Cell object for cell
B1, just like specifying
sheet['B1'] did. Then, using the
cell() method and its keyword arguments, you can write a
for loop to print the values of a series of cells.
Say you want to go down column B and print the value in every cell with an odd row number. By passing
2 for the
range() function’s “step” parameter, you can get cells from every second row (in this case, all the odd-numbered rows). The
forloop’s
i variable is passed for the
row keyword argument to the
cell() method, while
2 is always passed for the
column keyword argument. Note that the integer
2, not the string
'B', is passed.
You can determine the size of the sheet with the
Worksheet object’s
max_row and
max_column member variables. Enter the following into the interactive shell:
>>> import openpyxl >>> wb = openpyxl.load_workbook('example.xlsx') >>> sheet = wb.get_sheet_by_name('Sheet1') >>> sheet.max_row 7 >>> sheet.max_column 3
Note that the
max_column method returns an integer rather than the letter that appears in Excel.
Converting Between Column Letters and Numbers
To convert from letters to numbers, call the
openpyxl.cell.column_index_from_string()function. To convert from numbers to letters, call the
openpyxl.cell.get_column_letter() function. Enter the following into the interactive shell:
>>> import openpyxl >>> from openpyxl.cell import get_column_letter, column_index_from_string >>> get_column_letter(1) 'A' >>> get_column_letter(2) 'B' >>> get_column_letter(27) 'AA' >>> get_column_letter(900) 'AHP' >>> wb = openpyxl.load_workbook('example.xlsx') >>> sheet = wb.get_sheet_by_name('Sheet1') >>> get_column_letter(sheet.max_column) 'C' >>> column_index_from_string('A') 1 >>> column_index_from_string('AA') 27
After you import these two functions from the
openpyxl.cell module, you can call
get_column_letter() and pass it an integer like 27 to figure out what the letter name of the 27th column is. The function
column_index_from_string() does the reverse: You pass it the letter name of a column, and it tells you what number that column is. You don't need to have a workbook loaded to use these functions. If you want, you can load a workbook, get a
Worksheet object, and call a
Worksheet object method like
max_column to get an integer. Then, you can pass that integer to
get_column_letter().
Getting Rows and Columns from the Sheets
You can slice
Worksheet objects to get all the
Cell objects in a row, column, or rectangular area of the spreadsheet. Then you can loop over all the cells in the slice. Enter the following into the interactive shell:
>>> import openpyxl >>> wb = openpyxl.load_workbook('example.xlsx') >>> sheet = wb.get_sheet_by_name('Sheet1') >>> tuple(sheet['A1':'C3']) ((<Cell Sheet1.A1>, <Cell Sheet1.B1>, <Cell Sheet1.C1>), (<Cell Sheet1.A2>, <Cell Sheet1.B2>, <Cell Sheet1.C2>), (<Cell Sheet1.A3>, <Cell Sheet1.B3>, <Cell Sheet1.C3>)) ❶ >>> for rowOfCellObjects in sheet['A1':'C3']: ❷ for cellObj in rowOfCellObjects: print(cellObj.coordinate, cellObj.value) print('--- END OF ROW ---') A1 2015-04-05 13:34:02 B1 Apples C1 73 --- END OF ROW --- A2 2015-04-05 03:41:23 B2 Cherries C2 85 --- END OF ROW --- A3 2015-04-06 12:46:51 B3 Pears C3 14 --- END OF ROW ---
Here, we specify that we want the
Cell objects in the rectangular area from A1 to C3, and we get a
Generator object containing the
Cell objects in that area. To help us visualize this
Generator object, we can use
tuple() on it to display its
Cell objects in a tuple.
This tuple contains three tuples: one for each row, from the top of the desired area to the bottom. Each of these three inner tuples contains the
Cell objects in one row of our desired area, from the leftmost cell to the right. So overall, our slice of the sheet contains all the
Cell objects in the area from A1 to C3, starting from the top-left cell and ending with the bottom-right cell.
To print the values of each cell in the area, we use two
for loops. The outer
for loop goes over each row in the slice ❶. Then, for each row, the nested
for loop goes through each cell in that row ❷.
To access the values of cells in a particular row or column, you can also use a
Worksheet object’s
rows and
columns attribute. Enter the following into the interactive shell:
>>> import openpyxl
>>> wb = openpyxl.load_workbook('example.xlsx')
>>> sheet = wb.active
>>> sheet.columns[1]
(<Cell Sheet1.B1>, <Cell Sheet1.B2>, <Cell Sheet1.B3>, <Cell Sheet1.B4>,
<Cell Sheet1.B5>, <Cell Sheet1.B6>, <Cell Sheet1.B7>)
>>> for cellObj in sheet.columns[1]:
        print(cellObj.value)

Apples
Cherries
Pears
Oranges
Apples
Bananas
Strawberries
Using the
rows attribute on a
Worksheet object will give you a tuple of tuples. Each of these inner tuples represents a row, and contains the
Cell objects in that row. The
columns attribute also gives you a tuple of tuples, with each of the inner tuples containing the
Cell objects in a particular column. For example.xlsx, since there are 7 rows and 3 columns,
rows gives us a tuple of 7 tuples (each containing 3
Cell objects), and
columns gives us a tuple of 3 tuples (each containing 7
Cell objects).
To access one particular tuple, you can refer to it by its index in the larger tuple. For example, to get the tuple that represents column B, you use
sheet.columns[1]. To get the tuple containing the
Cell objects in column A, you’d use
sheet.columns[0]. Once you have a tuple representing one row or column, you can loop through its
Cell objects and print their values.
Workbooks, Sheets, Cells
As a quick review, here’s a rundown of all the functions, methods, and data types involved in reading a cell out of a spreadsheet file:
1. Import the openpyxl module.
2. Call the openpyxl.load_workbook() function.
3. Get a Workbook object.
4. Read the active member variable or call the get_sheet_by_name() workbook method.
5. Get a Worksheet object.
6. Use indexing or the cell() sheet method with row and column keyword arguments.
7. Get a Cell object.
8. Read the Cell object’s value attribute.
Project: Reading Data from a Spreadsheet
Say you have a spreadsheet of data from the 2010 US Census and you have the boring task of going through its thousands of rows to count both the total population and the number of census tracts for each county. (A census tract is simply a geographic area defined for the purposes of the census.) Each row represents a single census tract. We’ll name the spreadsheet file censuspopdata.xlsx, and you can download it from. Its contents look like Figure 12-2.
Figure 12-2. The censuspopdata.xlsx spreadsheet
Even though Excel can calculate the sum of multiple selected cells, you’d still have to select the cells for each of the 3,000-plus counties. Even if it takes just a few seconds to calculate a county’s population by hand, this would take hours to do for the whole spreadsheet.
In this project, you’ll write a script that can read from the census spreadsheet file and calculate statistics for each county in a matter of seconds.
This is what your program does:
- Reads the data from the Excel spreadsheet.
- Counts the number of census tracts in each county.
- Counts the total population of each county.
- Prints the results.
This means your code will need to do the following:
- Open and read the cells of an Excel document with the openpyxl module.
- Calculate all the tract and population data and store it in a data structure.
- Write the data structure to a text file with the .py extension using the pprint module.
Step 1: Read the Spreadsheet Data
There is just one sheet in the censuspopdata.xlsx spreadsheet, named
'Population by Census Tract', and each row holds the data for a single census tract. The columns are the tract number (A), the state abbreviation (B), the county name (C), and the population of the tract (D).
Open a new file editor window and enter the following code. Save the file as readCensusExcel.py.
#! python3
# readCensusExcel.py - Tabulates population and number of census tracts for
# each county.

❶ import openpyxl, pprint

print('Opening workbook...')
❷ wb = openpyxl.load_workbook('censuspopdata.xlsx')
❸ sheet = wb.get_sheet_by_name('Population by Census Tract')
countyData = {}

# TODO: Fill in countyData with each county's population and tracts.
print('Reading rows...')
❹ for row in range(2, sheet.max_row + 1):
    # Each row in the spreadsheet has data for one census tract.
    state = sheet['B' + str(row)].value
    county = sheet['C' + str(row)].value
    pop = sheet['D' + str(row)].value

# TODO: Open a new text file and write the contents of countyData to it.
This code imports the
openpyxl module, as well as the
pprint module that you’ll use to print the final county data ❶. Then it opens the censuspopdata.xlsx file ❷, gets the sheet with the census data ❸, and begins iterating over its rows ❹.
Note that you’ve also created a variable named
countyData, which will contain the populations and number of tracts you calculate for each county. Before you can store anything in it, though, you should determine exactly how you’ll structure the data inside it.
Step 2: Populate the Data Structure
The data structure stored in
countyData will be a dictionary with state abbreviations as its keys. Each state abbreviation will map to another dictionary, whose keys are strings of the county names in that state. Each county name will in turn map to a dictionary with just two keys,
'tracts' and
'pop'. These keys map to the number of census tracts and population for the county. For example, the dictionary will look similar to this:
{'AK': {'Aleutians East': {'pop': 3141, 'tracts': 1},
        'Aleutians West': {'pop': 5561, 'tracts': 2},
        'Anchorage': {'pop': 291826, 'tracts': 55},
        'Bethel': {'pop': 17013, 'tracts': 3},
        'Bristol Bay': {'pop': 997, 'tracts': 1},
        --snip--
If the previous dictionary were stored in
countyData, the following expressions would evaluate like this:
>>> countyData['AK']['Anchorage']['pop']
291826
>>> countyData['AK']['Anchorage']['tracts']
55
More generally, the
countyData dictionary’s keys will look like this:
countyData[state abbrev][county]['tracts'] countyData[state abbrev][county]['pop']
Now that you know how
countyData will be structured, you can write the code that will fill it with the county data. Add the following code to the bottom of your program:
#! python3
# readCensusExcel.py - Tabulates population and number of census tracts for
# each county.

--snip--

for row in range(2, sheet.max_row + 1):
    # Each row in the spreadsheet has data for one census tract.
    state = sheet['B' + str(row)].value
    county = sheet['C' + str(row)].value
    pop = sheet['D' + str(row)].value

    # Make sure the key for this state exists.
❶     countyData.setdefault(state, {})
    # Make sure the key for this county in this state exists.
❷     countyData[state].setdefault(county, {'tracts': 0, 'pop': 0})

    # Each row represents one census tract, so increment by one.
❸     countyData[state][county]['tracts'] += 1
    # Increase the county pop by the pop in this census tract.
❹     countyData[state][county]['pop'] += int(pop)

# TODO: Open a new text file and write the contents of countyData to it.
The last two lines of code perform the actual calculation work, incrementing the value for
tracts ❸ and increasing the value for
pop ❹ for the current county on each iteration of the
for loop.
The other code is there because you cannot add a county dictionary as the value for a state abbreviation key until the key itself exists in
countyData. (That is,
countyData['AK']['Anchorage']['tracts'] += 1 will cause an error if the
'AK' key doesn’t exist yet.) To make sure the state abbreviation key exists in your data structure, you need to call the
setdefault() method to set a value if one does not already exist for
state ❶.
Just as the
countyData dictionary needs a dictionary as the value for each state abbreviation key, each of those dictionaries will need its own dictionary as the value for each county key ❷. And each of those dictionaries in turn will need keys
'tracts' and
'pop' that start with the integer value
0. (If you ever lose track of the dictionary structure, look back at the example dictionary at the start of this section.)
Since
setdefault() will do nothing if the key already exists, you can call it on every iteration of the
for loop without a problem.
Step 3: Write the Results to a File
After the
for loop has finished, the
countyData dictionary will contain all of the population and tract information keyed by county and state. At this point, you could program more code to write this to a text file or another Excel spreadsheet. For now, let’s just use the
pprint.pformat() function to write the
countyData dictionary value as a massive string to a file named census2010.py. Add the following code to the bottom of your program (making sure to keep it unindented so that it stays outside the
for loop):
#! python3
# readCensusExcel.py - Tabulates population and number of census tracts for
# each county.

--snip--

for row in range(2, sheet.max_row + 1):
--snip--

# Open a new text file and write the contents of countyData to it.
print('Writing results...')
resultFile = open('census2010.py', 'w')
resultFile.write('allData = ' + pprint.pformat(countyData))
resultFile.close()
print('Done.')
The
pprint.pformat() function produces a string that itself is formatted as valid Python code. By outputting it to a text file named census2010. py, you’ve generated a Python program from your Python program! This may seem complicated, but the advantage is that you can now import census2010. py just like any other Python module. In the interactive shell, change the current working directory to the folder with your newly created census2010. py file (on my laptop, this is C:\Python34), and then import it:
>>> import os
>>> os.chdir('C:\\Python34')
>>> import census2010
>>> census2010.allData['AK']['Anchorage']
{'pop': 291826, 'tracts': 55}
>>> anchoragePop = census2010.allData['AK']['Anchorage']['pop']
>>> print('The 2010 population of Anchorage was ' + str(anchoragePop))
The 2010 population of Anchorage was 291826
The readCensusExcel.py program was throwaway code: Once you have its results saved to census2010.py, you won’t need to run the program again. Whenever you need the county data, you can just run
import census2010.
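The reason this trick works is that pprint.pformat() returns a string that is itself valid Python source code. A minimal sketch of the round trip, using exec() on the string to stand in for the file-write-then-import step:

```python
import pprint

data = {'AK': {'Anchorage': {'pop': 291826, 'tracts': 55}}}

# pformat() produces a string that is valid Python code...
source = 'allData = ' + pprint.pformat(data)

# ...so writing it to a .py file makes it importable. Here we exec() the
# string directly to simulate what "import census2010" does.
namespace = {}
exec(source, namespace)

print(namespace['allData']['AK']['Anchorage']['pop'])  # 291826
```

This is why the generated census2010.py behaves like any other module: importing it simply executes the assignment statement that pformat() produced.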
Calculating this data by hand would have taken hours; this program did it in a few seconds. Using OpenPyXL, you will have no trouble extracting information that is saved to an Excel spreadsheet and performing calculations on it. You can download the complete program from.
Ideas for Similar Programs
Many businesses and offices use Excel to store various types of data, and it’s not uncommon for spreadsheets to become large and unwieldy. Any program that parses an Excel spreadsheet has a similar structure: It loads the spreadsheet file, preps some variables or data structures, and then loops through each of the rows in the spreadsheet. Such a program could do the following:
Compare data across multiple rows in a spreadsheet. Open multiple Excel files and compare data between spreadsheets. Check whether a spreadsheet has blank rows or invalid data in any cells and alert the user if it does. Read data from a spreadsheet and use it as the input for your Python programs.
Writing Excel Documents
OpenPyXL also provides ways of writing data, meaning that your programs can create and edit spreadsheet files. With Python, it’s simple to create spreadsheets with thousands of rows of data.
Creating and Saving Excel Documents
Call the
openpyxl.Workbook() function to create a new, blank
Workbook object. Enter the following into the interactive shell:
>>> import openpyxl
>>> wb = openpyxl.Workbook()
>>> wb.get_sheet_names()
['Sheet']
>>> sheet = wb.active
>>> sheet.title
'Sheet'
>>> sheet.title = 'Spam Bacon Eggs Sheet'
>>> wb.get_sheet_names()
['Spam Bacon Eggs Sheet']
The workbook will start off with a single sheet named Sheet. You can change the name of the sheet by storing a new string in its
title attribute.
Any time you modify the
Workbook object or its sheets and cells, the spreadsheet file will not be saved until you call the
save() workbook method. Enter the following into the interactive shell (with example.xlsx in the current working directory):
>>> import openpyxl
>>> wb = openpyxl.load_workbook('example.xlsx')
>>> sheet = wb.active
>>> sheet.title = 'Spam Spam Spam'
>>> wb.save('example_copy.xlsx')
Here, we change the name of our sheet. To save our changes, we pass a filename as a string to the
save() method. Passing a different filename than the original, such as
'example_copy.xlsx', saves the changes to a copy of the spreadsheet.
Whenever you edit a spreadsheet you’ve loaded from a file, you should always save the new, edited spreadsheet to a different filename than the original. That way, you’ll still have the original spreadsheet file to work with in case a bug in your code caused the new, saved file to have incorrect or corrupt data.
Creating and Removing Sheets
Sheets can be added to and removed from a workbook with the
create_sheet() and
remove_sheet() methods. Enter the following into the interactive shell:
>>> import openpyxl
>>> wb = openpyxl.Workbook()
>>> wb.get_sheet_names()
['Sheet']
>>> wb.create_sheet()
<Worksheet "Sheet1">
>>> wb.get_sheet_names()
['Sheet', 'Sheet1']
>>> wb.create_sheet(index=0, title='First Sheet')
<Worksheet "First Sheet">
>>> wb.get_sheet_names()
['First Sheet', 'Sheet', 'Sheet1']
>>> wb.create_sheet(index=2, title='Middle Sheet')
<Worksheet "Middle Sheet">
>>> wb.get_sheet_names()
['First Sheet', 'Sheet', 'Middle Sheet', 'Sheet1']
The
create_sheet() method returns a new
Worksheet object named
SheetX, which by default is set to be the last sheet in the workbook. Optionally, the index and name of the new sheet can be specified with the
index and
title keyword arguments.
Continue the previous example by entering the following:
>>> wb.get_sheet_names()
['First Sheet', 'Sheet', 'Middle Sheet', 'Sheet1']
>>> wb.remove_sheet(wb.get_sheet_by_name('Middle Sheet'))
>>> wb.remove_sheet(wb.get_sheet_by_name('Sheet1'))
>>> wb.get_sheet_names()
['First Sheet', 'Sheet']
The
remove_sheet() method takes a
Worksheet object, not a string of the sheet name, as its argument. If you know only the name of a sheet you want to remove, call
get_sheet_by_name() and pass its return value into
remove_sheet().
Remember to call the
save() method to save the changes after adding sheets to or removing sheets from the workbook.
Writing Values to Cells
Writing values to cells is much like writing values to keys in a dictionary. Enter this into the interactive shell:
>>> import openpyxl
>>> wb = openpyxl.Workbook()
>>> sheet = wb.get_sheet_by_name('Sheet')
>>> sheet['A1'] = 'Hello world!'
>>> sheet['A1'].value
'Hello world!'
If you have the cell’s coordinate as a string, you can use it just like a dictionary key on the
Worksheet object to specify which cell to write to.
Project: Updating a Spreadsheet
In this project, you’ll write a program to update cells in a spreadsheet of produce sales. Your program will look through the spreadsheet, find specific kinds of produce, and update their prices. Download this spreadsheet from. Figure 12-3 shows what the spreadsheet looks like.
Figure 12-3. A spreadsheet of produce sales
Each row represents an individual sale. The columns are the type of produce sold (A), the cost per pound of that produce (B), the number of pounds sold (C), and the total revenue from the sale (D). The TOTAL column is set to the Excel formula =ROUND(B3*C3, 2), which multiplies the cost per pound by the number of pounds sold and rounds the result to the nearest cent. With this formula, the cells in the TOTAL column will automatically update themselves if there is a change in column B or C.
Now imagine that the prices of garlic, celery, and lemons were entered incorrectly, leaving you with the boring task of going through thousands of rows in this spreadsheet to update the cost per pound for any garlic, celery, and lemon rows. You can’t do a simple find-and-replace for the price because there might be other items with the same price that you don’t want to mistakenly “correct.” For thousands of rows, this would take hours to do by hand. But you can write a program that can accomplish this in seconds.
Your program does the following:
- Loops over all the rows.
- If the row is for garlic, celery, or lemons, changes the price.
This means your code will need to do the following:
- Open the spreadsheet file.
- For each row, check whether the value in column A is Celery, Garlic, or Lemon.
- If it is, update the price in column B.
- Save the spreadsheet to a new file (so that you don’t lose the old spreadsheet, just in case).
Step 1: Set Up a Data Structure with the Update Information
The prices that you need to update are as follows:

Celery   1.19
Garlic   3.07
Lemon    1.27
You could write code like this:
if produceName == 'Celery':
    cellObj = 1.19
if produceName == 'Garlic':
    cellObj = 3.07
if produceName == 'Lemon':
    cellObj = 1.27
Having the produce and updated price data hardcoded like this is a bit inelegant. If you needed to update the spreadsheet again with different prices or different produce, you would have to change a lot of the code. Every time you change code, you risk introducing bugs.
A more flexible solution is to store the corrected price information in a dictionary and write your code to use this data structure. In a new file editor window, enter the following code:
#! python3
# updateProduce.py - Corrects costs in produce sales spreadsheet.

import openpyxl

wb = openpyxl.load_workbook('produceSales.xlsx')
sheet = wb.get_sheet_by_name('Sheet')

# The produce types and their updated prices
PRICE_UPDATES = {'Garlic': 3.07,
                 'Celery': 1.19,
                 'Lemon': 1.27}

# TODO: Loop through the rows and update the prices.
Save this as updateProduce.py. If you need to update the spreadsheet again, you’ll need to update only the
PRICE_UPDATES dictionary, not any other code.
Step 2: Check All Rows and Update Incorrect Prices
The next part of the program will loop through all the rows in the spreadsheet. Add the following code to the bottom of updateProduce.py:
#! python3
# updateProduce.py - Corrects costs in produce sales spreadsheet.

--snip--

# Loop through the rows and update the prices.
❶ for rowNum in range(2, sheet.max_row + 1):    # skip the first row
❷     produceName = sheet.cell(row=rowNum, column=1).value
❸     if produceName in PRICE_UPDATES:
        sheet.cell(row=rowNum, column=2).value = PRICE_UPDATES[produceName]

❹ wb.save('updatedProduceSales.xlsx')
We loop through the rows starting at row 2, since row 1 is just the header ❶. The cell in column 1 (that is, column A) will be stored in the variable
produceName ❷. If
produceName exists as a key in the
PRICE_UPDATES dictionary ❸, then you know this is a row that must have its price corrected. The correct price will be in
PRICE_UPDATES[produceName].
Notice how clean using
PRICE_UPDATES makes the code. Only one
if statement, rather than code like
if produceName == 'Garlic':, is necessary for every type of produce to update. And since the code uses the
PRICE_UPDATES dictionary instead of hardcoding the produce names and updated costs into the
for loop, you modify only the
PRICE_UPDATES dictionary and not the code if the produce sales spreadsheet needs additional changes.
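The same dictionary-driven pattern works on any tabular data, not just spreadsheets. A minimal sketch using plain lists in place of worksheet rows (the produce rows here are made up for illustration):

```python
PRICE_UPDATES = {'Garlic': 3.07, 'Celery': 1.19, 'Lemon': 1.27}

# Stand-ins for spreadsheet rows: [produce name, cost per pound].
rows = [['Celery', 2.00], ['Apples', 1.50], ['Lemon', 9.99]]

for row in rows:
    # One membership test replaces a chain of if statements, and adding a
    # new produce type means editing only PRICE_UPDATES, not the loop.
    if row[0] in PRICE_UPDATES:
        row[1] = PRICE_UPDATES[row[0]]

print(rows)  # [['Celery', 1.19], ['Apples', 1.5], ['Lemon', 1.27]]
```

Rows whose names aren’t in the dictionary, like the Apples row above, pass through unchanged.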
After going through the entire spreadsheet and making changes, the code saves the
Workbook object to updatedProduceSales.xlsx ❹. It doesn’t overwrite the old spreadsheet just in case there’s a bug in your program and the updated spreadsheet is wrong. After checking that the updated spreadsheet looks right, you can delete the old spreadsheet.
You can download the complete source code for this program from.
Ideas for Similar Programs
Since many office workers use Excel spreadsheets all the time, a program that can automatically edit and write Excel files could be really useful. Such a program could do the following:
Read data from one spreadsheet and write it to parts of other spreadsheets. Read data from websites, text files, or the clipboard and write it to a spreadsheet. Automatically “clean up” data in spreadsheets. For example, it could use regular expressions to read multiple formats of phone numbers and edit them to a single, standard format.
Setting the Font Style of Cells
Styling certain cells, rows, or columns can help you emphasize important areas in your spreadsheet. In the produce spreadsheet, for example, your program could apply bold text to the potato, garlic, and parsnip rows. Or perhaps you want to italicize every row with a cost per pound greater than $5. Styling parts of a large spreadsheet by hand would be tedious, but your programs can do it instantly.
To customize font styles in cells, import the
Font() function from the
openpyxl.styles module.
from openpyxl.styles import Font
This allows you to type
Font() instead of
openpyxl.styles.Font(). (See Importing Modules to review this style of
import statement.)
Here’s an example that creates a new workbook and sets cell A1 to have a 24-point, italicized font. Enter the following into the interactive shell:
>>> import openpyxl
>>> from openpyxl.styles import Font
>>> wb = openpyxl.Workbook()
>>> sheet = wb.get_sheet_by_name('Sheet')
❶ >>> italic24Font = Font(size=24, italic=True)
❷ >>> sheet['A1'].font = italic24Font
>>> sheet['A1'] = 'Hello world!'
>>> wb.save('styled.xlsx')
A cell’s style can be set by assigning the
Font object to the
font attribute.
In this example,
Font(size=24, italic=True) returns a
Font object, which is stored in
italic24Font ❶. The keyword arguments to
Font(),
size and
italic, configure the
Font object. And when
italic24Font is assigned to the cell’s
font attribute ❷, all that font styling information gets applied to cell A1.
Font Objects
To set font style attributes, you pass keyword arguments to
Font(). Table 12-2 shows the possible keyword arguments for the
Font() function.
Table 12-2. Keyword Arguments for Font style Attributes

Keyword argument   Data type   Description
name               String      The font name, such as 'Calibri' or 'Times New Roman'
size               Integer     The point size
bold               Boolean     True, for bold font
italic             Boolean     True, for italic font
You can call
Font() to create a
Font object and store that
Font object in a variable. You then assign that variable to a
Cell object’s
font attribute. For example, this code creates various font styles:
>>> import openpyxl
>>> from openpyxl.styles import Font
>>> wb = openpyxl.Workbook()
>>> sheet = wb.get_sheet_by_name('Sheet')

>>> fontObj1 = Font(name='Times New Roman', bold=True)
>>> sheet['A1'].font = fontObj1
>>> sheet['A1'] = 'Bold Times New Roman'

>>> fontObj2 = Font(size=24, italic=True)
>>> sheet['B3'].font = fontObj2
>>> sheet['B3'] = '24 pt Italic'

>>> wb.save('styles.xlsx')
Here, we store a
Font object in
fontObj1 and then set the A1
Cell object’s
font attribute to
fontObj1. We repeat the process with another
Font object to set the style of a second cell. After you run this code, the styles of the A1 and B3 cells in the spreadsheet will be set to custom font styles, as shown in Figure 12-4.
Figure 12-4. A spreadsheet with custom font styles
For cell A1, we set the font name to
'Times New Roman' and set
bold to
True, so our text appears in bold Times New Roman. We didn’t specify a size, so the
openpyxl default, 11, is used. In cell B3, our text is italic, with a size of 24; we didn’t specify a font name, so the
openpyxl default, Calibri, is used.
Formulas
Formulas, which begin with an equal sign, can configure cells to contain values calculated from other cells. In this section, you’ll use the
openpyxl module to programmatically add formulas to cells, just like any normal value. For example:
>>> sheet['B9'] = '=SUM(B1:B8)'
This will store =SUM(B1:B8) as the value in cell B9. This sets the B9 cell to a formula that calculates the sum of values in cells B1 to B8. You can see this in action in Figure 12-5.
Figure 12-5. Cell B9 contains the formula =SUM(B1:B8), which adds the cells B1 to B8.
A formula is set just like any other text value in a cell. Enter the following into the interactive shell:
>>> import openpyxl
>>> wb = openpyxl.Workbook()
>>> sheet = wb.active
>>> sheet['A1'] = 200
>>> sheet['A2'] = 300
>>> sheet['A3'] = '=SUM(A1:A2)'
>>> wb.save('writeFormula.xlsx')
The cells in A1 and A2 are set to 200 and 300, respectively. The value in cell A3 is set to a formula that sums the values in A1 and A2. When the spreadsheet is opened in Excel, A3 will display its value as 500.
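Because a formula is just a string, your program can assemble one from variables instead of hardcoding it. A small sketch; sum_formula() is a hypothetical helper for illustration, not part of openpyxl:

```python
def sum_formula(column, first_row, last_row):
    """Build an Excel SUM formula string for a run of cells in one column."""
    return '=SUM({col}{first}:{col}{last})'.format(
        col=column, first=first_row, last=last_row)

print(sum_formula('B', 1, 8))  # =SUM(B1:B8)
```

Assigning the returned string to a cell, as in sheet['B9'] = sum_formula('B', 1, 8), produces the same result as typing the formula by hand.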
Excel formulas offer a level of programmability for spreadsheets but can quickly become unmanageable for complicated tasks. For example, even if you’re deeply familiar with Excel formulas, it’s a headache to try to decipher what =IFERROR(TRIM(IF(LEN(VLOOKUP(F7, Sheet2!$A$1:$B$10000, 2, FALSE))>0, SUBSTITUTE(VLOOKUP(F7, Sheet2!$A$1:$B$10000, 2, FALSE), " ", ""), "")), "") actually does. Python code is much more readable.
Adjusting Rows and Columns
Rows and columns can also be hidden entirely from view. Or they can be “frozen” in place so that they are always visible on the screen and appear on every page when the spreadsheet is printed (which is handy for headers).
Setting Row Height and Column Width
Worksheet objects have
row_dimensions and
column_dimensions attributes that control row heights and column widths. Enter this into the interactive shell:
>>> import openpyxl
>>> wb = openpyxl.Workbook()
>>> sheet = wb.active
>>> sheet['A1'] = 'Tall row'
>>> sheet['B2'] = 'Wide column'
>>> sheet.row_dimensions[1].height = 70
>>> sheet.column_dimensions['B'].width = 20
>>> wb.save('dimensions.xlsx')
A sheet’s
row_dimensions and
column_dimensions are dictionary-like values;
row_dimensions contains
RowDimension objects and
column_dimensions contains
ColumnDimension objects. In
row_dimensions, you can access one of the objects using the number of the row (in this case, 1 or 2). In
column_dimensions, you can access one of the objects using the letter of the column (in this case, A or B).
The dimensions.xlsx spreadsheet looks like Figure 12-6.
Figure 12-6. Row 1 and column B set to larger heights and widths
Once you have the
RowDimension object, you can set its height. Once you have the
ColumnDimension object, you can set its width. The row height can be set to an integer or float value between
0 and
409. This value represents the height measured in points, where one point equals 1/72 of an inch. The default row height is 12.75. The column width can be set to an integer or float value between
0 and
255. This value represents the number of characters at the default font size (11 point) that can be displayed in the cell. The default column width is 8.43 characters. Columns with widths of
0 or rows with heights of
0 are hidden from the user.
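The units described above are easy to get wrong, so it can help to wrap them in tiny helpers. These are hypothetical conveniences for illustration, not openpyxl functions:

```python
def row_height_in_inches(points):
    """Row heights are measured in points; one point is 1/72 of an inch."""
    return points / 72

def is_hidden(size):
    """A row height or column width of 0 hides that row or column."""
    return size == 0

print(row_height_in_inches(72))  # 1.0
print(is_hidden(0))              # True
```

For example, the default row height of 12.75 points works out to just under a fifth of an inch.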
Merging and Unmerging Cells
A rectangular area of cells can be merged into a single cell with the
merge_cells() sheet method. Enter the following into the interactive shell:
>>> import openpyxl
>>> wb = openpyxl.Workbook()
>>> sheet = wb.active
>>> sheet.merge_cells('A1:D3')
>>> sheet['A1'] = 'Twelve cells merged together.'
>>> sheet.merge_cells('C5:D5')
>>> sheet['C5'] = 'Two merged cells.'
>>> wb.save('merged.xlsx')
The argument to
merge_cells() is a single string of the top-left and bottom-right cells of the rectangular area to be merged:
'A1:D3' merges 12 cells into a single cell. To set the value of these merged cells, simply set the value of the top-left cell of the merged group.
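You can verify the 12-cell count by parsing the range string yourself. A sketch with a hypothetical helper:

```python
def cells_in_range(ref):
    """Count the cells in a rectangular range string like 'A1:D3'."""
    def parse(coord):
        # Split a coordinate like 'D3' into its column letters and row number.
        col_letters = ''.join(ch for ch in coord if ch.isalpha())
        row = int(''.join(ch for ch in coord if ch.isdigit()))
        col = 0
        for ch in col_letters.upper():
            col = col * 26 + (ord(ch) - ord('A') + 1)
        return row, col

    top_left, bottom_right = ref.split(':')
    (r1, c1), (r2, c2) = parse(top_left), parse(bottom_right)
    return (r2 - r1 + 1) * (c2 - c1 + 1)

print(cells_in_range('A1:D3'))  # 12
print(cells_in_range('C5:D5'))  # 2
```

A1:D3 spans 3 rows by 4 columns, hence 12 cells; C5:D5 spans a single row of 2 columns.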
When you run this code, merged.xlsx will look like Figure 12-7.
Figure 12-7. Merged cells in a spreadsheet
To unmerge cells, call the
unmerge_cells() sheet method. Enter this into the interactive shell.
>>> import openpyxl
>>> wb = openpyxl.load_workbook('merged.xlsx')
>>> sheet = wb.active
>>> sheet.unmerge_cells('A1:D3')
>>> sheet.unmerge_cells('C5:D5')
>>> wb.save('merged.xlsx')
If you save your changes and then take a look at the spreadsheet, you’ll see that the merged cells have gone back to being individual cells.
Freeze Panes
For spreadsheets too large to be displayed all at once, it’s helpful to “freeze” a few of the top rows or leftmost columns onscreen. Frozen column or row headers, for example, are always visible to the user even as they scroll through the spreadsheet. These are known as freeze panes. In OpenPyXL, each
Worksheet object has a
freeze_panes attribute that can be set to a
Cell object or a string of a cell’s coordinates. Note that all rows above and all columns to the left of this cell will be frozen, but the row and column of the cell itself will not be frozen.
To unfreeze all panes, set
freeze_panes to
None or
'A1'. Table 12-3 shows which rows and columns will be frozen for some example settings of
freeze_panes.
Table 12-3. Frozen Pane Examples

freeze_panes setting                                    Rows and columns frozen
sheet.freeze_panes = 'A2'                               Row 1
sheet.freeze_panes = 'B1'                               Column A
sheet.freeze_panes = 'C1'                               Columns A and B
sheet.freeze_panes = 'C2'                               Row 1 and columns A and B
sheet.freeze_panes = 'A1' or sheet.freeze_panes = None  No frozen panes
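The rule behind these examples — everything above the cell’s row and to the left of its column is frozen — can be sketched in a few lines. frozen_counts() is a hypothetical helper for illustration, not part of openpyxl:

```python
def frozen_counts(coordinate):
    """Return (frozen_rows, frozen_cols) for a freeze_panes coordinate.

    Everything above the cell's row and left of its column freezes, so
    'A2' freezes 1 row and 0 columns, and 'C2' freezes 1 row and 2 columns.
    """
    if coordinate is None:
        return (0, 0)
    col_letters = ''.join(ch for ch in coordinate if ch.isalpha())
    row = int(''.join(ch for ch in coordinate if ch.isdigit()))
    col = 0
    for ch in col_letters.upper():
        col = col * 26 + (ord(ch) - ord('A') + 1)
    return (row - 1, col - 1)

print(frozen_counts('C2'))  # (1, 2)
```

Note that 'A1' yields (0, 0), which matches the rule that setting freeze_panes to 'A1' (or None) unfreezes all panes.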
Make sure you have the produce sales spreadsheet from. Then enter the following into the interactive shell:
>>> import openpyxl
>>> wb = openpyxl.load_workbook('produceSales.xlsx')
>>> sheet = wb.active
>>> sheet.freeze_panes = 'A2'
>>> wb.save('freezeExample.xlsx')
If you set the
freeze_panes attribute to
'A2', row 1 will always be viewable, no matter where the user scrolls in the spreadsheet. You can see this in Figure 12-8.
Figure 12-8. With
freeze_panes set to
'A2', row 1 is always visible even as the user scrolls down.
Charts
OpenPyXL supports creating bar, line, scatter, and pie charts using the data in a sheet’s cells. To make a chart, you need to do the following:
1. Create a Reference object from a rectangular selection of cells.
2. Create a Series object by passing in the Reference object.
3. Create a Chart object.
4. Append the Series object to the Chart object.
5. Add the Chart object to the Worksheet object, optionally specifying the cell where the top-left corner of the chart should be positioned.
The
Reference object requires some explaining.
Reference objects are created by calling the
openpyxl.chart.Reference() function and passing the following:

1. The Worksheet object containing your chart data.
2. The min_row and min_col keyword arguments, giving the row and column of the top-left cell of the rectangular selection of cells containing your chart data. Note that 1 is the first row, not 0.
3. The max_row and max_col keyword arguments, giving the row and column of the bottom-right cell of that selection.
Figure 12-9 shows some sample coordinate arguments.
Figure 12-9. From left to right:
(1, 1), (10, 1);
(3, 2), (6, 4);
(5, 3), (5, 3)
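To connect the (row, column) coordinate pairs in Figure 12-9 to the A1-style names used elsewhere in the chapter, here is a hypothetical converter (not part of openpyxl):

```python
def corners_to_range(top_left, bottom_right):
    """Turn (row, col) corner tuples into an 'A1:D3'-style range string."""
    def name(cell):
        row, col = cell
        letters = ''
        while col > 0:
            # Base-26 conversion with no zero digit: A=1, Z=26.
            col, rem = divmod(col - 1, 26)
            letters = chr(ord('A') + rem) + letters
        return letters + str(row)
    return name(top_left) + ':' + name(bottom_right)

print(corners_to_range((1, 1), (10, 1)))  # A1:A10
print(corners_to_range((3, 2), (6, 4)))   # B3:D6
```

So the three selections in the figure correspond to A1:A10, B3:D6, and the single cell C5:C5.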
Enter this interactive shell example to create a bar chart and add it to the spreadsheet:
>>> import openpyxl
>>> wb = openpyxl.Workbook()
>>> sheet = wb.active
>>> for i in range(1, 11):        # create some data in column A
        sheet['A' + str(i)] = i

>>> refObj = openpyxl.chart.Reference(sheet, min_col=1, min_row=1, max_col=1,
max_row=10)
>>> seriesObj = openpyxl.chart.Series(refObj, title='First series')
>>> chartObj = openpyxl.chart.BarChart()
>>> chartObj.title = 'My Chart'
>>> chartObj.append(seriesObj)
>>> sheet.add_chart(chartObj, 'C5')
>>> wb.save('sampleChart.xlsx')
This produces a spreadsheet that looks like Figure 12-10.
Figure 12-10. A spreadsheet with a chart added
We’ve created a bar chart by calling
openpyxl.chart.BarChart(). You can also create line charts, scatter charts, and pie charts by calling
openpyxl.chart.LineChart(),
openpyxl.chart.ScatterChart(), and
openpyxl.chart.PieChart().
Unfortunately, in the current version of OpenPyXL (2.3.3), the
load_workbook() function does not load charts in Excel files. Even if the Excel file has charts, the loaded
Workbook object will not include them. If you load a
Workbook object and immediately save it to the same .xlsx filename, you will effectively remove the charts from it.
Summary
Often the hard part of processing information isn’t the processing itself but simply getting the data in the right format for your program. But once you have your spreadsheet loaded into Python, you can extract and manipulate its data much faster than you could by hand.
You can also generate spreadsheets as output from your programs. So if colleagues need your text file or PDF of thousands of sales contacts transferred to a spreadsheet file, you won’t have to tediously copy and paste it all into Excel.
Equipped with the openpyxl module and some programming knowledge, you’ll find processing even the biggest spreadsheets a piece of cake.
Practice Questions
For the following questions, imagine you have a Workbook object in the variable wb, a Worksheet object in sheet, a Cell object in cell, a Comment object in comm, and an Image object in img.
Practice Projects
For practice, write programs that perform the following tasks.
Multiplication Table Maker
Create a program multiplicationTable.py that takes a number N from the command line and creates an N×N multiplication table in an Excel spreadsheet. For example, when the program is run like this:
py multiplicationTable.py 6
… it should create a spreadsheet that looks like Figure 12-11.
Figure 12-11. A multiplication table generated in a spreadsheet
Row 1 and column A should be used for labels and should be in bold.
Blank Row Inserter
Create a program blankRowInserter.py that takes two integers and a filename string as command line arguments. Let’s call the first integer N and the second integer M. Starting at row N, the program should insert M blank rows into the spreadsheet. For example, when the program is run like this:
python blankRowInserter.py 3 2 myProduce.xlsx
… the “before” and “after” spreadsheets should look like Figure 12-12.
Figure 12-12. Before (left) and after (right) the two blank rows are inserted at row 3
You can write this program by reading in the contents of the spreadsheet. Then, when writing out the new spreadsheet, use a for loop to copy the first N lines. For the remaining lines, add M to the row number in the output spreadsheet.
Spreadsheet Cell Inverter
Write a program to invert the row and column of the cells in the spreadsheet. For example, the value at row 5, column 3 will be at row 3, column 5 (and vice versa). This should be done for all cells in the spreadsheet. For example, the “before” and “after” spreadsheets would look something like Figure 12-13.
Figure 12-13. The spreadsheet before (top) and after (bottom) inversion
You can write this program by using nested for loops to read in the spreadsheet’s data into a list of lists data structure. This data structure could have sheetData[x][y] for the cell at column x and row y. Then, when writing out the new spreadsheet, use sheetData[y][x] for the cell at column x and row y.
Text Files to Spreadsheet
Write a program to read in the contents of several text files (you can make the text files yourself) and insert those contents into a spreadsheet, with one line of text per row. The lines of the first text file will be in the cells of column A, the lines of the second text file will be in the cells of column B, and so on.
Use the readlines() File object method to return a list of strings, one string per line in the file. For the first file, output the first line to column 1, row 1. The second line should be written to column 1, row 2, and so on. The next file that is read with readlines() will be written to column 2, the next file to column 3, and so on.
Spreadsheet to Text Files
Write a program that performs the tasks of the previous program in reverse order: The program should open a spreadsheet and write the cells of column A into one text file, the cells of column B into another text file, and so on.
Support the author by purchasing the print & ebook bundle from No Starch Press.
Last Updated on January 18, 2021
The Wasserstein Generative Adversarial Network, or Wasserstein GAN, is an extension to the generative adversarial network that both improves the stability when training the model and provides a loss function that correlates with the quality of generated images.
The development of the WGAN has a dense mathematical motivation, although in practice requires only a few minor modifications to the established standard deep convolutional generative adversarial network, or DCGAN.
In this tutorial, you will discover how to implement the Wasserstein generative adversarial network from scratch.

Code a Wasserstein Generative Adversarial Network (WGAN) From Scratch.
Photo by Feliciano Guimarães, some rights reserved.
Tutorial Overview
This tutorial is divided into three parts; they are:
- Wasserstein Generative Adversarial Network
- Wasserstein GAN Implementation Details
- How to Train a Wasserstein GAN Model
Wasserstein Generative Adversarial Network
The Wasserstein GAN, or WGAN for short, was introduced by Martin Arjovsky, et al. in their 2017 paper titled “Wasserstein GAN.”
It is an extension of the GAN that seeks an alternate way of training the generator model to better approximate the distribution of data observed in a given training dataset.
Instead of using a discriminator to classify or predict the probability of generated images as being real or fake, the WGAN changes or replaces the discriminator model with a critic that scores the realness or fakeness of a given image.
This change is motivated by a theoretical argument that training the generator should seek a minimization of the distance between the distribution of the data observed in the training dataset and the distribution observed in generated examples.
The benefit of the WGAN is that the training process is more stable and less sensitive to model architecture and choice of hyperparameter configurations. Perhaps most importantly, the loss of the discriminator appears to relate to the quality of images created by the generator.
Wasserstein GAN Implementation Details
Although the theoretical grounding for the WGAN is dense, the implementation of a WGAN requires a few minor changes to the standard Deep Convolutional GAN, or DCGAN.
The image below provides a summary of the main training loop for training a WGAN, taken from the paper. Note the listing of recommended hyperparameters used in the model.
Algorithm for the Wasserstein Generative Adversarial Networks.
Taken from: Wasserstein GAN.
The differences in implementation for the WGAN are as follows:
- Use a linear activation function in the output layer of the critic model (instead of sigmoid).
- Use -1 labels for real images and 1 labels for fake images (instead of 1 and 0).
- Use Wasserstein loss to train the critic and generator models.
- Constrain critic model weights to a limited range after each mini batch update (e.g. [-0.01,0.01]).
- Update the critic model more times than the generator each iteration (e.g. 5).
- Use the RMSProp version of gradient descent with a small learning rate and no momentum (e.g. 0.00005).
Using the standard DCGAN model as a starting point, let’s take a look at each of these implementation details in turn.
Want to Develop GANs from Scratch?
Take my free 7-day email crash course now (with sample code).
Click to sign-up and also get a free PDF Ebook version of the course.
1. Linear Activation in Critic Output Layer
The DCGAN uses the sigmoid activation function in the output layer of the discriminator to predict the likelihood of a given image being real.
In the WGAN, the critic model requires a linear activation to predict the score of “realness” for a given image.
This can be achieved by setting the ‘activation‘ argument to ‘linear‘ in the output layer of the critic model.
The linear activation is the default activation for a layer, so we can, in fact, leave the activation unspecified to achieve the same result.
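As a minimal sketch (the layer sizes here are illustrative assumptions, not the tutorial's exact architecture), the critic's output layer can simply omit the activation, which gives the default linear activation:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dense, Flatten, LeakyReLU

# illustrative critic skeleton: the final Dense(1) layer has no activation
# argument, so it uses the default linear activation and outputs an
# unbounded "realness" score rather than a probability
model = Sequential([
    Conv2D(16, (3, 3), strides=(2, 2), padding='same',
           input_shape=(28, 28, 1)),
    LeakyReLU(0.2),
    Flatten(),
    Dense(1),  # equivalent to Dense(1, activation='linear')
])
```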
2. Class Labels for Real and Fake Images
The DCGAN uses the class 0 for fake images and class 1 for real images, and these class labels are used to train the GAN.
In the DCGAN, these are precise labels that the discriminator is expected to achieve. The WGAN does not have precise labels for the critic. Instead, it encourages the critic to output scores that are different for real and fake images.
This is achieved via the Wasserstein loss function that cleverly makes use of positive and negative class labels.
The WGAN can be implemented where -1 class labels are used for real images and +1 class labels are used for fake or generated images.
This can be achieved using the ones() NumPy function.
For example:
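A sketch of generating these targets with NumPy's ones() function:

```python
from numpy import ones

n_samples = 64
# -1 labels for real images
y_real = -ones((n_samples, 1))
# +1 labels for fake (generated) images
y_fake = ones((n_samples, 1))
```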
3. Wasserstein Loss Function
The DCGAN trains the discriminator as a binary classification model to predict the probability that a given image is real.
To train this model, the discriminator is optimized using the binary cross entropy loss function. The same loss function is used to update the generator model.
The primary contribution of the WGAN model is the use of a new loss function that encourages the discriminator to predict a score of how real or fake a given input looks. This transforms the role of the discriminator from a classifier into a critic for scoring the realness or fakeness of images, where the difference between the scores is as large as possible.
We can implement the Wasserstein loss as a custom function in Keras that calculates the average score for real or fake images.
The score is maximizing for real examples and minimizing for fake examples. Given that stochastic gradient descent is a minimization algorithm, we can multiply the class label by the mean score (e.g. -1 for real and 1 for fake, which has no effect), which ensures that the loss for real and fake images is minimizing to the network.
An efficient implementation of this loss function for Keras is listed below.
This loss function can be used to train a Keras model by specifying the function name when compiling the model.
For example:
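The listing itself is missing from this copy of the page; an implementation consistent with the description (using tf.reduce_mean in place of the Keras backend mean, which behaves the same here), plus an illustrative compile call on a stand-in model:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop

# Wasserstein loss: the class label (-1 real, +1 fake) times the mean score
def wasserstein_loss(y_true, y_pred):
    return tf.reduce_mean(y_true * y_pred)

# illustrative model: pass the loss function by name when compiling
model = Sequential([Dense(1, input_shape=(10,))])
model.compile(loss=wasserstein_loss,
              optimizer=RMSprop(learning_rate=0.00005))
```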
4. Critic Weight Clipping
The DCGAN does not use any gradient clipping, although the WGAN requires gradient clipping for the critic model.
We can implement weight clipping as a Keras constraint.
This is a class that must extend the Constraint class and define an implementation of the __call__() function for applying the operation and the get_config() function for returning any configuration.
We can also define an __init__() function to set the configuration, in this case, the symmetrical size of the bounding box for the weight hypercube, e.g. 0.01.
The ClipConstraint class is defined below.
To use the constraint, the class can be constructed, then used in a layer by setting the kernel_constraint argument; for example:
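A sketch of the ClipConstraint class described above, together with its use via the kernel_constraint argument (tf.clip_by_value stands in for the Keras backend clip; the layer configuration is illustrative):

```python
import tensorflow as tf
from tensorflow.keras.constraints import Constraint
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.models import Sequential

# clip model weights to a hypercube [-clip_value, clip_value]
class ClipConstraint(Constraint):
    # set the symmetric clip value when the constraint is created
    def __init__(self, clip_value):
        self.clip_value = clip_value

    # clip the weights to the bounding box on each call
    def __call__(self, weights):
        return tf.clip_by_value(weights, -self.clip_value, self.clip_value)

    # return the configuration so the constraint can be serialized
    def get_config(self):
        return {'clip_value': self.clip_value}

# usage: pass an instance via the kernel_constraint argument of a layer
const = ClipConstraint(0.01)
model = Sequential([
    Conv2D(16, (3, 3), padding='same', kernel_constraint=const,
           input_shape=(28, 28, 1)),
])
```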
The constraint is only required when updating the critic model.
5. Update Critic More Than Generator
In the DCGAN, the generator and the discriminator model must be updated in equal amounts.
Specifically, the discriminator is updated with a half batch of real and a half batch of fake samples each iteration, whereas the generator is updated with a single batch of generated samples.
In the WGAN model, the critic model must be updated more than the generator model.
Specifically, a new hyperparameter is defined to control the number of times that the critic is updated for each update to the generator model, called n_critic, and is set to 5.
This can be implemented as a new loop within the main GAN update loop; for example:
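The inner loop can be sketched as follows; the tiny models and random data are stand-ins (not the tutorial's MNIST models) so that the snippet runs on its own:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop

def wasserstein_loss(y_true, y_pred):
    return tf.reduce_mean(y_true * y_pred)

latent_dim, n_batch, n_critic = 5, 16, 5
half_batch = n_batch // 2

# stand-in critic: scores a 4-value "image" vector
c_model = Sequential([Dense(8, activation='relu', input_shape=(4,)), Dense(1)])
c_model.compile(loss=wasserstein_loss, optimizer=RMSprop(learning_rate=0.00005))

# stand-in generator: maps latent points to 4-value vectors
g_model = Sequential([Dense(4, input_shape=(latent_dim,))])

dataset = np.random.rand(64, 4).astype('float32')

# update the critic n_critic times for every single generator update
for _ in range(n_critic):
    # a half batch of real samples with target -1
    ix = np.random.randint(0, dataset.shape[0], half_batch)
    X_real, y_real = dataset[ix], -np.ones((half_batch, 1))
    c_loss1 = c_model.train_on_batch(X_real, y_real)
    # a half batch of fake samples with target +1
    z = np.random.randn(half_batch, latent_dim).astype('float32')
    X_fake, y_fake = g_model.predict(z, verbose=0), np.ones((half_batch, 1))
    c_loss2 = c_model.train_on_batch(X_fake, y_fake)
```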
6. Use RMSProp Stochastic Gradient Descent
The DCGAN uses the Adam version of stochastic gradient descent with a small learning rate and modest momentum.
The WGAN recommends the use of RMSProp instead, with a small learning rate of 0.00005.
This can be implemented in Keras when the model is compiled. For example:
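A minimal sketch of the recommended optimizer configuration:

```python
from tensorflow.keras.optimizers import RMSprop

# RMSProp with the small learning rate recommended for the WGAN
opt = RMSprop(learning_rate=0.00005)
# the optimizer instance is then passed to model.compile(optimizer=opt, ...)
```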
How to Train a Wasserstein GAN Model
Now that we know the specific implementation details for the WGAN, we can implement the model for image generation.
In this section, we will develop a WGAN to generate a single handwritten digit (‘7’) from the MNIST dataset. This is a good test problem for the WGAN as it is a small dataset requiring a modest model that is quick to train.
The first step is to define the models.
The critic model takes as input one 28×28 grayscale image and outputs a score for the realness or fakeness of the image. It is implemented as a modest convolutional neural network using best practices for DCGAN design such as using the LeakyReLU activation function with a slope of 0.2, batch normalization, and using a 2×2 stride to downsample.
The critic model makes use of the new ClipConstraint weight constraint to clip model weights after mini-batch updates and is optimized using the custom wasserstein_loss() function, the RMSProp version of stochastic gradient descent with a learning rate of 0.00005.
The define_critic() function below implements this, defining and compiling the critic model. The critic can then be combined with the generator into one larger model.
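The define_critic() listing is not visible in this copy of the page; a sketch consistent with the description (the filter counts and kernel sizes are illustrative assumptions) is:

```python
import tensorflow as tf
from tensorflow.keras.constraints import Constraint
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (BatchNormalization, Conv2D, Dense,
                                     Flatten, LeakyReLU)
from tensorflow.keras.optimizers import RMSprop

def wasserstein_loss(y_true, y_pred):
    return tf.reduce_mean(y_true * y_pred)

class ClipConstraint(Constraint):
    def __init__(self, clip_value):
        self.clip_value = clip_value
    def __call__(self, weights):
        return tf.clip_by_value(weights, -self.clip_value, self.clip_value)
    def get_config(self):
        return {'clip_value': self.clip_value}

# define and compile the critic model
def define_critic(in_shape=(28, 28, 1)):
    const = ClipConstraint(0.01)  # weight clipping after each update
    model = Sequential()
    # downsample 28x28 -> 14x14
    model.add(Conv2D(64, (4, 4), strides=(2, 2), padding='same',
                     kernel_constraint=const, input_shape=in_shape))
    model.add(BatchNormalization())
    model.add(LeakyReLU(0.2))
    # downsample 14x14 -> 7x7
    model.add(Conv2D(64, (4, 4), strides=(2, 2), padding='same',
                     kernel_constraint=const))
    model.add(BatchNormalization())
    model.add(LeakyReLU(0.2))
    # linear output: an unbounded realness score
    model.add(Flatten())
    model.add(Dense(1))
    model.compile(loss=wasserstein_loss,
                  optimizer=RMSprop(learning_rate=0.00005))
    return model

critic = define_critic()
```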
This larger model will be used to train the model weights in the generator, using the output and error calculated by the critic model. The critic model is trained separately, and as such, the model weights are marked as not trainable in this larger GAN model to ensure that only the weights of the generator model are updated. This change to the trainability of the critic weights only has an effect when training the combined GAN model, not when training the critic standalone.
This larger GAN model takes as input a point in the latent space, uses the generator model to generate an image, which is fed as input to the critic model, then output scored as real or fake. The model is fit using RMSProp with the custom wasserstein_loss() function.
The define_gan() function below implements this, taking the already defined generator and critic models as arguments.

With the models defined, we can load the MNIST dataset. Only a subset of the images is selected (about 5,000) that belongs to class 7, e.g. are a handwritten depiction of the number seven. Then the pixel values must be scaled to the range [-1,1] to match the output of the generator model.
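The define_gan() listing is not shown in this copy; the sketch below follows the description, using a minimal stand-in generator and critic (their layer sizes are assumptions, not the tutorial's exact models):

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2DTranspose, Dense, Flatten,
                                     LeakyReLU, Reshape)
from tensorflow.keras.optimizers import RMSprop

def wasserstein_loss(y_true, y_pred):
    return tf.reduce_mean(y_true * y_pred)

# stand-in generator: maps a latent point to a 28x28x1 image in [-1,1]
def define_generator(latent_dim):
    model = Sequential()
    model.add(Dense(128 * 7 * 7, input_dim=latent_dim))
    model.add(LeakyReLU(0.2))
    model.add(Reshape((7, 7, 128)))
    model.add(Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same'))
    model.add(LeakyReLU(0.2))
    model.add(Conv2DTranspose(1, (7, 7), strides=(2, 2), padding='same',
                              activation='tanh'))
    return model

# combine generator and critic into a larger model for updating the generator
def define_gan(generator, critic):
    # freeze the critic's weights so only the generator is updated here
    critic.trainable = False
    model = Sequential()
    model.add(generator)
    model.add(critic)
    model.compile(loss=wasserstein_loss,
                  optimizer=RMSprop(learning_rate=0.00005))
    return model

# stand-in critic for demonstration only
critic = Sequential([Flatten(input_shape=(28, 28, 1)), Dense(1)])
critic.compile(loss=wasserstein_loss,
               optimizer=RMSprop(learning_rate=0.00005))
generator = define_generator(50)
gan = define_gan(generator, critic)
```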
The load_real_samples() function below implements this, returning the loaded and scaled subset of the MNIST training dataset ready for modeling.
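The load_real_samples() function is named above but its body is missing from this copy; a sketch consistent with the description (the scale_pixels helper is an addition for illustration):

```python
import numpy as np
from tensorflow.keras.datasets.mnist import load_data

# scale pixel values from [0,255] to [-1,1] to match the tanh generator output
def scale_pixels(X):
    X = np.expand_dims(X, axis=-1).astype('float32')
    return (X - 127.5) / 127.5

# load the MNIST training images, keep only the sevens, and scale them
def load_real_samples():
    (trainX, trainy), (_, _) = load_data()
    selected_ix = trainy == 7
    return scale_pixels(trainX[selected_ix])
```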
We will require one batch of real images and their corresponding labels for the critic, specifically target=-1 to indicate that they are real, and a batch of generated (fake) images with target=1 to indicate that they are fake. The loss for the critic on real and fake samples can be tracked for each model update, as can the loss for the generator for each update. These can then be used to create line plots of loss at the end of the training run. The plot_history() function below implements this and saves the results to file.
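Sketches of these helpers follow; plot_history is named in the text, while the generate_* function names are assumptions chosen for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # write plots to file without a display
from matplotlib import pyplot

# select a random batch of real images, with target -1
def generate_real_samples(dataset, n_samples):
    ix = np.random.randint(0, dataset.shape[0], n_samples)
    return dataset[ix], -np.ones((n_samples, 1))

# sample points in the latent space as input for the generator
def generate_latent_points(latent_dim, n_samples):
    return np.random.randn(n_samples, latent_dim).astype('float32')

# use the generator to create fake images, with target +1
def generate_fake_samples(generator, latent_dim, n_samples):
    x_input = generate_latent_points(latent_dim, n_samples)
    X = generator.predict(x_input, verbose=0)
    return X, np.ones((n_samples, 1))

# create line plots of critic and generator loss and save to file
def plot_history(d1_hist, d2_hist, g_hist):
    pyplot.plot(d1_hist, label='crit_real')
    pyplot.plot(d2_hist, label='crit_fake')
    pyplot.plot(g_hist, label='gen')
    pyplot.legend()
    pyplot.savefig('plot_line_plot_loss.png')
    pyplot.close()
```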
We are now ready to fit the GAN model.
The model is fit for 10 training epochs, which is arbitrary, as the model begins generating plausible number-7 digits after perhaps the first few epochs. A batch size of 64 samples is used, and each training epoch involves 6,265/64, or about 97, batches of real and fake samples and updates to the model. The model is therefore trained for 10 epochs of 97 batches, or 970 iterations.
First, the critic model is updated for a half batch of real samples, then a half batch of fake samples, together forming one batch of weight updates. This is then repeated n_critic (5) times as required by the WGAN algorithm.
The generator is then updated via the composite GAN model. Importantly, the target label is set to -1, or real, for the generated samples. The loss of the critic and generator models is reported each iteration. Sample images are generated and saved every epoch, and line plots of model performance are created and saved at the end of the run.
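Putting the described loop together, a runnable sketch with tiny stand-in models and random data (not the tutorial's MNIST architecture) might look like:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop

def wasserstein_loss(y_true, y_pred):
    return tf.reduce_mean(y_true * y_pred)

# training loop following the description above
def train(g_model, c_model, gan_model, dataset, latent_dim,
          n_epochs=10, n_batch=64, n_critic=5):
    bat_per_epo = dataset.shape[0] // n_batch
    n_steps = bat_per_epo * n_epochs
    half_batch = n_batch // 2
    c1_hist, c2_hist, g_hist = [], [], []
    for i in range(n_steps):
        c1_tmp, c2_tmp = [], []
        # update the critic n_critic times per generator update
        for _ in range(n_critic):
            # half batch of real samples with target -1
            ix = np.random.randint(0, dataset.shape[0], half_batch)
            X_real, y_real = dataset[ix], -np.ones((half_batch, 1))
            c1_tmp.append(float(c_model.train_on_batch(X_real, y_real)))
            # half batch of fake samples with target +1
            z = np.random.randn(half_batch, latent_dim).astype('float32')
            X_fake = g_model.predict(z, verbose=0)
            y_fake = np.ones((half_batch, 1))
            c2_tmp.append(float(c_model.train_on_batch(X_fake, y_fake)))
        c1_hist.append(np.mean(c1_tmp))
        c2_hist.append(np.mean(c2_tmp))
        # update the generator via the composite model, labeling fakes as real (-1)
        z = np.random.randn(n_batch, latent_dim).astype('float32')
        y_gan = -np.ones((n_batch, 1))
        g_hist.append(float(gan_model.train_on_batch(z, y_gan)))
    return c1_hist, c2_hist, g_hist

# tiny stand-in models and random data so the sketch runs in seconds
latent_dim = 5
c_model = Sequential([Dense(8, activation='relu', input_shape=(4,)), Dense(1)])
c_model.compile(loss=wasserstein_loss, optimizer=RMSprop(learning_rate=0.00005))
g_model = Sequential([Dense(4, input_shape=(latent_dim,))])
c_model.trainable = False
gan_model = Sequential([g_model, c_model])
gan_model.compile(loss=wasserstein_loss, optimizer=RMSprop(learning_rate=0.00005))
dataset = np.random.rand(32, 4).astype('float32')
c1, c2, g = train(g_model, c_model, gan_model, dataset, latent_dim,
                  n_epochs=1, n_batch=16, n_critic=2)
```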
Now that all of the functions have been defined, we can create the models, load the dataset, and begin the training process.
Tying all of this together, the complete example is listed below.
Running the example is quick, taking approximately 10 minutes on modern hardware without a GPU.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
First, the loss of the critic and generator models is reported to the console each iteration of the training loop. Specifically, c1 is the loss of the critic on real examples, c2 is the loss of the critic in generated samples, and g is the loss of the generator trained via the critic.
The c1 scores are inverted as part of the loss function; this means if they are reported as negative, then they are really positive, and if they are reported as positive, they are really negative. The sign of the c2 scores is unchanged.
Recall that the Wasserstein loss seeks scores for real and fake that are more different during training. We can see this towards the end of the run, such as the final epoch where the c1 loss for real examples is 5.338 (really -5.338) and the c2 loss for fake examples is -14.260, and this separation of about 10 units is consistent at least for the prior few iterations.
We can also see that in this case, the model is scoring the loss of the generator at around 20. Again, recall that we update the generator via the critic model and treat the generated examples as real with the target of -1, therefore the score can be interpreted as a value around -20, close to the loss for fake samples.
Line plots for loss are created and saved at the end of the run.
The plot shows the loss for the critic on real samples (blue), the loss for the critic on fake samples (orange), and the loss for the critic when updating the generator with fake samples (green).
There is one important factor when reviewing learning curves for the WGAN, and that is the trend.
Line Plots of Loss and Accuracy for a Wasserstein Generative Adversarial Network
In this case, more training seems to result in better quality generated images, with a major hurdle occurring around epoch 200-300 after which quality remains pretty good for the model.
Before and around this hurdle, image quality is poor; for example:
Sample of 100 Generated Images of a Handwritten Number 7 at Epoch 97 from a Wasserstein GAN.
After this epoch, the WGAN continues to generate plausible handwritten digits.
Sample of 100 Generated Images of a Handwritten Number 7 at Epoch 970 from a Wasserstein GAN.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Papers
- Wasserstein GAN, 2017.
- Improved Training of Wasserstein GANs, 2017.
API
- Keras Datasets API.
- Keras Sequential Model API
- Keras Convolutional Layers API
- How can I “freeze” Keras layers?
- MatplotLib API
- NumPy Random sampling (numpy.random) API
- NumPy Array manipulation routines
Articles
- WassersteinGAN, GitHub.
- Wasserstein Generative Adversarial Networks (WGANS) Project, GitHub.
- Keras-GAN: Keras implementations of Generative Adversarial Networks, GitHub.
- Improved WGAN, keras-contrib Project, GitHub.
Summary
In this tutorial, you discovered how to implement the Wasserstein generative adversarial network from scratch.
Specifically, you learned how the WGAN differs from the standard DCGAN, the implementation details the WGAN requires, and how to train a WGAN model from scratch.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Thanks for your great tutorial.
I am curious about 2 things:
1. What is exactly clipping weight does? (I read the source code of Keras for clip_by_value() function but I can’t figure out what exactly it does)
2. According to your previous article, I knew that Critic’s outcome is just a real number for optimizing the generator, as long as critic(fake) > critic(real). The critic is Wasserstein distance. Therefore, can Wasserstein loss use in common CNN replacing CNN loss function like MSE, PSNR? say, I am training a deblurring CNN using Wasserstein loss, if I minimize this loss value, then my CNN will converge? Am I correct?
Thanks for reading my question.
Good questions.
The weight clipping ensure weight values (model parameters) stay within a pre-defined bounds.
You can use MSE instead, but it is different. In fact, this is called a least-squares GAN and does not converge, but does very well.
Thanks for your response!
In my question 1, I am still confused about “stay within pre-defined bounds”. In my question 2, there is a misunderstanding.
Question 1: Assume I have weights like this W = [0.3 -2.5, 4.5, 0.05, -0.02] then what is the result of clip(W, (-0.1, 0.1)) ?
Question 2: I know least-squares GAN, my question is: “how to adopt Wasserstein loss metric as loss function in NORMAL architecture CNN (not GAN architecture)?”. I did search on the internet to find the answer but there is no study adopting this Wasserstein Distance as their loss function in normal CNN. I didn’t figure out the reason why they didn’t use it.
I am so sorry for my bad English
Hope you get what I mean.
Thank you
Good questions.
The clipping is defined as:
– If a weight is less than -0.01, it is set to -0.01.
– If a weight is more than 0.01, it is set to 0.01.
I’m not sure it is an appropriate loss function for a normal image classification task. i.e. you cannot.
Thank you very much, Sir
You’re welcome.
Great article!
I trained the GAN on all 9 digits and the results are wild. Lots of digits that look halfway between one digit and another… my model loves generating 0&8 hybrids. Have a few ideas for limiting this effect such as : using a bigger latent space, feeding the class as input into the network, using a bigger conv model, maybe adding a fc layer that appends to the output of the discriminator that my model can use as a class predictor(??) and obviously training longer.
Well done James!
So i implemented your model just as you provided it, but my loss functions look entirely different. Crit_real and crit_fake drop all the way down to -150 after 400 epochs while gen goes to 150 after 400 epochs.
Up to that point the resulting images look good, but afterwards they just turn all black.
Can you give an explanation why this would happen?
You changed the loss function and got bad results. Sorry, I don’t know why other than you changed the loss function.
Perhaps try the code as listed first?
I used your model, once local on a MS Surface Pro 3 (very slow of course) and once with Colab from Google, got very different results for the coefficiants and also for the final plot of loss and accuracy. I changed nothing in the proposed loss function.
Can this be an expected result
Yes, sometimes the models can be quite unstable, they can be tricky.
I changed the machine in Colab from CPU to GPU and the results are similar to the script
Nice!
I am trying to use Wasserstein loss for pix2pix and yet haven’t got any good results. one strange thing that happens is that the last hundred steps, it doesn’t matter if I have 1000 steps or 5000 steps, the last hundred steps suddenly the g_loss starts spiking, for example
>935, c1=-8.903, c2=-5.745 g=9.406
>936, c1=-12.613, c2=-9.001 g=-5.578
>937, c1=-9.143, c2=-8.499 g=-8.585
>938, c1=-12.172, c2=-5.203 g=1.296
>939, c1=-11.419, c2=-7.457 g=2.835
>940, c1=-11.785, c2=-7.998 g=-6.124
>941, c1=-12.427, c2=-8.275 g=-2.679
>942, c1=-12.758, c2=-8.763 g=4.127
>943, c1=-9.551, c2=-6.013 g=6.745
>944, c1=-9.443, c2=-8.791 g=-3.048
>945, c1=-10.753, c2=-7.275 g=-2.918
>946, c1=-12.762, c2=-8.732 g=10.906
>947, c1=-10.392, c2=-5.713 g=-7.287
>948, c1=-12.502, c2=-8.810 g=0.580
>949, c1=-12.936, c2=-9.329 g=-2.742
>950, c1=-7.656, c2=-7.766 g=-1.436
>951, c1=-10.732, c2=-4.744 g=-7.908
>952, c1=-12.870, c2=-9.123 g=5.914
>953, c1=-9.666, c2=-7.812 g=6.430
>954, c1=-12.867, c2=-7.669 g=11.883
>955, c1=-10.069, c2=-9.066 g=-4.029
>956, c1=-13.045, c2=-9.065 g=11.350
>957, c1=-11.683, c2=-0.542 g=11.850
>958, c1=-10.341, c2=-7.843 g=-0.476
>959, c1=-13.147, c2=-9.353 g=-3.374
>960, c1=-6.914, c2=-6.306 g=-3.759
even if I reduce the steps to 500, still at the last steps I get this.
another problem is my g-loss starts at like 80 and then comes down and it doesn’t start as 0.
I don’t get good results at all. Do you have any suggestions?
Good question.
Hmmm, I don’t have any good off the cuff suggestions other than ensure the implementation is correct and try making small changes to see if you can uncover the cause of the fault. Debug!
Let me know how you go.
Hi,
Thanks for the post. It was really helpful. I am planning to do a project about generating fake sentences/text/reviews based on text data. I did some research online where I found out a softmax encoder/decoder is the best way to generate fake text for a GAN. Another way is Reinforcement Learning though. Can you give me some ideas about how I can use text data instead of image data?
I would recommend a language model for text instead of a GAN:
Thanks for the all the links for NLP. Can we break the words as vectors and feed them to discriminators and then generate some random text to figure out whether the reviews/sentences are actually from data or just generated from Generator?
I don’t see why not.
Hi Jason,
I am trying to load a dataset from a directory which has some .jpg images. I am trying to follow your load dataset tutorial but it seems there are some issues. When I ran the code the program goes on forever. Could you please guide me on loading a custom image dataset like cats or cars instead of the built-in MNIST dataset? Here is my code and output.
trainX = datagen.flow_from_directory(‘/celeb/train’, class_mode=’binary’, batch_size=64)
X = np.array(trainX, dtype=’float32′)
X = expand_dims(X, axis=-1)
X = X.astype(‘float32’)
X = (X – 127.5) / 127.5
print(trainX.shape)
Found 93 images belonging to 5 classes.
Program got stuck. No result for printing X or trainX value.
Perhaps this will help:
For some reason, when I try to run this example using Keras from Tensorflow 2, it doesn’t converge.
I’ve tried it both in Colab and Kaggle and it only worked when I downgraded my TF version to 1.14.
Did anyone else had this issue?
GANs do not converge, they find equilibrium.
Perhaps start with some of the simpler tutorials here to understand GAN basics:
I have tried on Google cloud with TF 2.7.0 and Keras 2.7.0. The GAN fails to generate any image. I’m not sure I can change to TF 1.14.
In the complete example, on line 209 of the train() method why do we have “y_gan = -ones((n_batch, 1))”? Wouldn’t this give the fake samples a label of -1? I thought that real samples have a label of -1 while the fake samples have a label of 1.
Ouch, I should have been consistent, sorry.
Both approaches work the same.
In the complete example on line 58, why is the kernel_constraint parameter not provided for the Dense layer? Is weight clipping not required in this layer of the critic, and if so, why?
Thank you for this awesome tutorial, by the way!
That is the output layer, I typically don’t add constraints to output layers.
Why? Habit/experience I guess. Perhaps try it and see.
Any tips on adapting this for using a gradient penalty instead of weight clipping?
Your series on GANs is currently the most helpful on the internet IMO, thanks!
Thanks.
No. From memory, the gradient penalty was a challenge to implement in Keras. I’m sure some bright mind figured it out though.
Hi Jason
Read one of your books… mailed a few times… 🙂
I think this is what Lennert S’s problem was. Because mine is similar. On TensorFlow 1.15 this stabilises happily and I get good 7’s for both CPU and GPU. Copy-paste of your code.
On TensorFlow 2.0+ I get 100 little black squares and it does not stabilise. There is definitely a difference here. I previously narrowed it down to the batchnorm when I was playing with GANs and removing BN improved things a bit. It doesn’t for WGAN so I’m not sure.
Wondering if you understand what’s going on here? Have you tried this with TF 2.0?
I can mail you my (your) plots if you like?
Greg
Fascinating. I have not seen this myself. I will investigate (adding to trello now).
UPDATE: I do not see any problem with Keras 2.3 and Tensoflow 2.1.
Try running the example a few times and try inspecting the results from different epochs.
Some suggestions here:
I met the same problem, Keras 2.3 and Tensoflow 2.2 get different result.
Why the new versions are having problems in training these GANS, since I saw the failure both in LSGAN and WGAN.
Try running the code a few times and compare the results.
Same here. Both WGAN and LSGAN are having the same problem with tensorflow.
All examples have been re-tested on Keras 2.4 and TensorFlow 2.3 without any problem.
Perhaps confirm your library versions and perhaps try running on AWS EC2 instance with GPUs to speed things up.
Ran on Google Cloud VM:
tensorflow: 2.7.0
keras: 2.7.0
Other libraries:
scipy: 1.7.3
numpy: 1.19.5
matplotlib: 3.5.1
pandas: 1.3.5
statsmodels: 0.13.1
sklearn: 1.0.1
I see dark boxes. I’m using Tesla T4 GPU on Google Cloud VM. Can you suggest how to make the GAN generate the digits?
Tried tensorflow 1.15 and 2.3 without any result. However, by removing the batch normalization layers in the discriminator/define critic the network seemed again able to produce satisfying results. Would be very interesting if someone have any idea as to why batch normalization is so problematic.
Also: Thank you so much for sharing your insight and providing such a good explanation of wasserstein gans!
Best regards
Well done!
Not just batch norm, GANs themselves are problematic to train.
Removing batch normalization really helped. Thanks!
Hi Jason,
I guess, you do mean “iterations” (steps) not epochs, in your loss graphs along x-axis? There are 10 epochs, each having 97 steps. Otherwise, I obtain the results for “7”, similar to yours. I observe quite a sharp “transition” at about step 194 when all generated images turn extremely dark, but after one epoch this darkness fades away, primarily from the background, thus leaving a beautiful “7” image alone. Thanks for the tutorial!
You’re welcome!
I believe, I get it now: in the first step (inner loop) you are trying to find a maximal difference between both fixed statistical distributions of the real and generated sets, by adjusting w-parameters of the critic and their subsequent clipping. This is needed to ensure, that the difference stays closer to the true Wasserstein distance, i.e. that we are in a Lipshitz space . In the second step (oughter loop) you are trying to minimize this best-defined difference, by adjusting the generator (via theta-parameters) to bring its distribution closer to the real one.
What I don’t understand – is why do c2 and g losses have different signs? In both cases it is the same generator loss estimated by the critic. In both cases it is evaluated on the fake samples and printed out as it is.
We are trying to improve the generator (g), which is the inverse of the capability/expectation of the critic (c).
Hi, I am confused of the loss function, where two losses of generator and discriminator presented in the paper are different, so how do you transform the two losses into the same loss function in this course. Thank you, Jason.
Perhaps this tutorial will help:
Hi Jason,
Do you have any implementations regarding the Improved WGAN method?
Not at this stage.
By the way sir, this is a brilliant explanation like the rest of your blog posts. It is helping me massively in my projects.
However, when I initially trained the above implementations for my dataset as well as for MNIST, the loss was only going upwards (in my dataset it went up to 25000 before I killed the process!). Then I figured out that in line 58 of the code, there is an activation function missing in the Dense layer! So I just added a ‘sigmoid’ activation function and things have been smooth ever since!
Thanks.
That is incorrect. The activation function is linear, loss can go up or down – it is not MSE!
Perhaps re-read the tutorial.
Oops! My bad! Thanks for your guidance sir!
No problem.
Hi Jason,
Do you have the implementation of WGAN for the MNIST dataset.
Yes, the above tutorial is exactly this!
Hi
Thanks for the post. It was really helpful. How to add checkpoints? Is that any way to add checkpoints?
Thanks
You can manually save the model each time it is evaluated. Or manually save any time during the manual updates.
Thanks for your prompt answer, Can you write a code for that or update the checkpoints code? Thanks in advance
See the summarize_performance() function in the above tutorial – it saves the model. Change it to save whenever you want.
summarize_performance() function saved every epoch weights. But I want to little bit confused when my training stopped then how I can start my training again from last epoch weights saved.
Thanks
You can load the model and continue the training procedure:
Hi, thank you for your tutorial.
I want to ask why the value of my result is so large.
>962, c1=-412.128, c2=390.963 g=-370.047
>963, c1=-411.156, c2=391.335 g=-367.996
>964, c1=-414.337, c2=387.317 g=-372.890
>965, c1=-412.908, c2=388.804 g=-371.697
>966, c1=-408.901, c2=387.349 g=-375.952
>967, c1=-412.134, c2=392.314 g=-372.287
>968, c1=-416.299, c2=388.089 g=-372.689
>969, c1=-411.435, c2=390.020 g=-373.012
>970, c1=-413.991, c2=391.278 g=-376.979
WGAN can do this. Monitor the generated images instead.
I am trying to train WGAN on CIFAR-10 following exactly the same approach with possible changes in architecture. But I am not able to get good results. look at the results. Also generated images are of not good quality.
>1, c1=-1.686, c2=-4.668 g=13.810
>2, c1=-9.374, c2=-9.605 g=16.668
>14, c1=-36.906, c2=-38.834 g=-33.304
>17, c1=-38.848, c2=-40.553 g=-37.991
>18, c1=-39.186, c2=-41.070 g=-38.564
>26, c1=-43.379, c2=-45.001 g=-43.369
>27, c1=-43.877, c2=-45.480 g=-43.876
>1505, c1=-1617.052, c2=-1619.644 g=959.692
>1506, c1=-1618.585, c2=-1620.744 g=1155.889
Nice work.
Focus on the generated images, not the loss.
Consider changing the model architecture.
Hiee Jason , I’m facing same problem as others . I took the architecture from your DCGAN implementation for cifar 10 .Changed the last activation to Linear ,and also used gradient clipping in Critic .Used wasserstein loss with RMS Prop ,rest everthing like this tutorial. Currently i have a Dgx-2 with me so i tried so many hyper parameters like batch size ,learning rate ,numer of filters and layers but loss just keeps on increasing and output images saved periodically are totally black and i’m just not getting the clue that wgan are said to be stable are why not able to provide any output in our case .
You may need to tune the architecture for the new dataset. E.g. large models.
Hi Dr. Brownlee,
Thank you for sharing those excellent tutorials with really good explanation. I learnt a lot following your tutorial.
For this one, I implemented and noticed that ‘trainable’ might cause some issue for some users. For example, in my main() function, I use your code to create critic_model, generator_model and GAN_model. If I print all three’s summary, I noticed that nearly all the parameters in critic_model is Non-trainable. However, same code, if I just comment out the GAN_model, and print the other two models’ summary, then all the parameters from critic_model becomes trainable.
Therefore, my guessing is that when we compile the GAN_model, the trainable attribute got changed, even though we already compiles critic_model beforehand. The critic_model’s trainable could still be affected if we change ‘trainable’ after it’s compiled.
In training phase, probably we just need to specifically state ‘critic_model.trainable=True’ and ‘critic_model.trainable=False’ under its appropriate loop.
You’re welcome.
No need, training is fixed for all layers in a model once compile is called and this state is preserved for separate models – e.g. reuse of layers in different models with different trainable state does not cause a problem.
Learn more here:
How can I freeze layers and do fine-tuning?
Thanks Jason for this great article!
If I am using pix2pix where the discriminator training has x_realB as its input too like
d_loss1 = d_model.train_on_batch([X_realA, X_realB], y_real), do I need to change the wasserstein_loss function input to address the extra x_realB input (def wasserstein_loss(y_true, y_pred))?
Another question, do c1 and c2 loss necessarily need to have different signs at the end of training like your results? Is is incorrect if both have minus signs during the training and can this be fixed by changing hyper parameters?
You’re welcome.
I have not used wloss with pix2pix, you may have to experiment.
I see that at the end of your training the c1 and c2 loss have different signs, Is having different signs important or they may have the same signs (both negatives) and still have a valid training?
The reason that your gen_loss is increasing (green plot) is that you are trining the generator with label -1? if we change the label to +1 as you used for the critic fake samples, the gen_loss decrease?
Thanks!
Not required, I believe this is discussed in the tutorial.
Perhaps.
I am confused about how the critic losses and gen_loss can be interpreted. In my case all the losses are decreasing and I am not sure how the values and their range can be interpreted. Any reading suggestion on interpretation of losses in WGAN?
Thanks.
Great question – generally the loss cannot be interpreted directly.
Hello,
I have tried to implement a Conditional WGAN, I just added the labels to the inputs of both the generator and critic, and did the embedding and concatenation as usual, but the GAN is not learning anything, any idea if WGAN can be conditioned normally as in vanilla DCGAN?
Well done!
No idea, experimentation is required.
Hello,
Thanks for the great article! your article always impressive.
I have tried to implement an Auxiliary Classifier GAN with the Wasserstein loss (WACGAN) by following your tutorials (WGAN, ACGAN, and CGAN). However, I got confused when calculating the loss value.
# WGAN loss (critic model)
c_loss1 = c_model.train_on_batch(X_real, y_real)
# CGAN loss (discriminator)
d_loss1, _ = d_model.train_on_batch([X_real, labels_real], y_real)
# ACGAN loss (discriminator model)
_, dr_1, dr_2 = d_model.train_on_batch(X_real, [y_real, labels_real])
and this is my code for calculating the loss:
# the WACGAN loss (my trial)
_, dr_1, dr_2 = d_model.train_on_batch(X_real, [y_real, labels_real])
I got confused about which value represents the critic model loss. Since in the ACGAN we can have two loss values (loss on real/fake and loss for the classification).
I want to ask:
Is it right that d_r1 is the loss for critic on samples and d_r2 is the loss for critic on classification?
Because when I tried to check the value, d_r1 always gives a value of -1.0 while d_r2 gives a different value for each iteration.
Sorry for this kind of question.
Could you give me any information on this case?
Thank you very much~
Sorry, I have not adapted acgan to use wgan loss, I cannot give you good off the cuff advice.
hi i’m trying to use this example but with time-series data instead of images so i use a bidirectional LSTM instead of convolutional nets. I tried to use the same kernel_constraint as you used here but I’m receiving an error:
ValueError: Unknown constraint: ClipConstraint
I used the ClipConstraint as is.
This is my critic:
def define_critic():
# weight initialization
init = RandomNormal(stddev=0.02)
# weight constraint
const = ClipConstraint(0.01)
# define model
model = Sequential()
model.add(
Bidirectional(LSTM(128, activation=’tanh’, return_sequences=True, kernel_initializer=init, kernel_constraint=const), input_shape=(TIME_STEPS, NUM_OF_FEATURES)))
model.add(LeakyReLU(alpha=0.2))
model.add(Bidirectional(LSTM(128, activation=’tanh’, kernel_initializer=init, kernel_constraint=const)))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(1, activation=’linear’))
# compile model
opt = RMSprop(lr=0.00005)
model.compile(loss=wasserstein_loss, optimizer=opt)
model.summary()
return model
do you have an idea what went wrong?
Thank you!
I’m eager to help, but I don’t have the capacity to debug your example sorry. Perhaps these tips will help:
Hi, I’m sorry, I didn’t mean for you to debug my code.
I’ll rephrase the question:
Should the ClipConstraint from this tutorial also work with LSTM layers? And if so, am I using it right by only adding ‘kernel_initializer=init, kernel_constraint=const’ to each LSTM layer in my model?
Thanks a lot!
Maybe – you might have to experiment/adapt it. Off the cuff, I don’t think it would be appropriate for LSTMs as-is.
Dear Jason,
Thanks for your articles they are very inspiring!
I found one thing about the implementation of wgan that slightly differs IMO from the original paper, I think that might cause some trouble to your readers.
In principle when training the wassertein loss is defined as:
# implementation of wasserstein loss
def wasserstein_loss(y_true, y_pred):
return backend.mean(y_true * y_pred)
However, I had to customise the model for my application and it seem to me that was fine to use the following cost from your example:
disc_cost = tf.reduce_mean(crit_fake – crit_real)
However, it is clear in the wgan original paper arXiv:1701.07875v3 that the actual critic’s loss is:
disc_cost = tf.reduce_mean(crit_fake) – tf.reduce_mean(crit_real)
Therefore, it should be clear for at least reader’s like me that when customising the critic’s loss that
tf.reduce_mean(crit_fake – crit_real) is not equal to tf.reduce_mean(crit_fake) – tf.reduce_mean(crit_real)
Thanks for sharing.
Dear Jason,
thank you very much for this thorough explanation. I am confused regarding the following statement of yours:
.”
Let’s assume that the generator is perfectly trained and the critic cannot tell anymore reals and fakes apart. I would say that roughly the 50% of generated images and the 50% of real images will be assigned a very positive/negative score. I would then expect that the loss_critic over the real and the loss_critic over the fake both goes to 0, as I am averaging out positive and negative scores more or less equally present in my batch (besides the sign in front of the mean).
Besides my previous point, I still do not understand why a well trained WGAN should have a critic_loss over generated images going down: this would mean that the critic keeps on assigning very negative values to the generated images, then flagging them as extremely unrealistic. Therefore, the generator is doing a really poor job.
Thank you very much in advance for your attention.
You’re welcome.
Yes, the loss never sits still, the models remain adversarial pushing the loss around/apart forever.
Thanks a lot. It was hard to understand the loss function, but with the second article maid it clear.
Now – it worked on CPU only (under Win 10) – but the losses made a crazy turmoil around 6-8 epochs but then crit_fake went very high (image quality was decent though). Nothing like yours.
I uploaded PNG to flickr if you interested:
I recommend focusing on image quality and save many different versions of the model during training – choose the one with the best images.
Also, try a few runs to see if it makes a difference – given the stochastic nature of the learning algorithm.
Thanks for your tutorial. I have one small question:
In the line plots of Loss and Accuracy, you mentioned that the line of gen (the green one) is about -20, close to the loss for fake samples.
Did that show the generator has bad performance, even though the line of crit_fake (the orange one) trends down?
Maybe a perfect generator in WGAN should make the loss of gen (green) be closed to the loss of crit_real(blue), not the crit_fake (orange)?
Hope your answer.
I’m not convinced the learning curves can be interpreted for wgan.
Why sometimes WGAN loss is represented as -critic(true_dist)+critic(fake_dist) for critic/discriminator step and -critic(fake_dist) for generator/actor step ?
Why indeed!
Thanks for the very in-depth article and example code. I’m running a WGAN and have maybe a simple question. I read the 2017 paper which introduces the Wasserstein loss in GANs, and in that paper there is a theorem which says the following are equivalent:
1. W_loss(P_real, P_t) –> 0 (as t –> infinity)
2. P_t converges in distribution to P_real
Where P_t is the probability distribution our model generates, parametrized by t and P_real is the distribution we want to model.
According to this, shouldn’t we see the best results as loss goes to 0, and shouldn’t loss tend there as the epochs go on? Why does more negative fake loss correspond with better images? I would think this is only the case if real loss is growing equally, so that their sum (total loss) is tending to 0. If fake loss is plummeting but real loss is nearly constant, we shouldn’t be converging. Similarly for the generator, loss just grows (or decreases, factoring out the -1 sign), shouldn’t we want a loss which tends to 0?
You’re welcome.
No, we don’t see this in practice.
For the people who are asking for the gradient penalty, you can find in keras documentation:
Thanks for sharing.
Jason, I’m trying to generate tabular data but sequential. Example a consumer session data of clicks, like:
SessionID | ItemClickedId | CategoryItemClickedId |HourClicked | DayClicked | MonthClicked
01 | 20 | 100 | 02 | 12 | 03
01 | 20 | 100 | 02 | 12 | 03
01 | 21 | 100 | 03 | 12 | 03
01 | 21 | 100 | 03 | 12 | 03
01 | 21 | 100 | 03 | 12 | 03
This example show the customer clicked on session 01, in two items, with number 20 and 21 with the same category at 2 and 3 o’clock.
Can you help me? Which gan and techiniques should I use for this?
Thanks
I would recommend using a method like SMOTE for tabular data:
Hey Jason great stuff again. I am trying to modify my AC Gan with a Wasserstein loss I have 2 Questions. First this “kernel_constraint=const” can I use it also for the generator is is it only for the discriminator Model. My second questions ist should I use this only for ConV layer because I only want use Dense layers is it also possible to avoid Overfitting?
Thanks.
In both cases – perhaps try it and see.
So does it mean that i can use this is not only for discfiminator it can also put it for the generator this loss?
Try it and see what happens.
Hello, Thanks for sharing. I have a question. “1. Linear Activation in Critic Output Layer
The DCGAN uses the sigmoid activation function in the output layer of the discriminator…”
I was reading the guidelines of DCGAN for that paper. It says “Use LeakyReLU activation in the discriminator for all layers.”
Could you please clarify it? Thank you.
Perhaps try both and discover what works best for your specific dataset.
Hi Brownlee,
Thanks for your useful posts and information! Logically speaking and based on your knowledge, is it possible to create an Auxilliary Classifier Wasserstein GAN? I am trying to create one! However, the loss values go to NaN after epoch one …
Not sure, try it and see.
Thanks for your asnwer!
Hi Dr. Brownlee,
I hope that you are doing well during the COVID-19 pandemic. Thanks for your fascinating and valuable articles.
I have a general question about ordinary and Wasserstein GANs. According to your online articles, when a regular GAN is being trained, finally, It should achieve an equilibrium (I think it is called Nash Equiblirium) between the Generator and Discriminator. Moreover, After attaining this state, if the training process is continued, the discriminator may produce false losses for the generator and break this equilibrium (Therefore, the quality of generated images gets worse). My question is whether it is the case in Wasserstein GANs? To explain more, I want to know if it is assumed that a WGAN is completely achieved its equilibrium between the Generator and Critic, is this state of equilibrium as breakable as ordinary GANs?
Another question is regarding WGAN, how could one check whether it has achieved its final equilibrium or not? Does this type of network get better indefinitely as the training process goes on?
Finally, as the last question, is FID score an appropriate parameter to check the quality of fake images generated by WGAN?
It is the common belief that WGAN can remain stable at equilibrium. But I am yet to find any paper to prove or disprove it (if you know one, I am happy to learn about that). Whether you are at the equilibrium or not, you may try to plot the loss function against the training epoch to see if you have plateaued.
Jason, it seems you have no complied your generator! Is that right?
Correct. As mentioned, “intentionally does not compile it as it is not trained directly”
Based on a conda install, these import statements work….
# example of a wgan for generating handwritten digits
from numpy import expand_dims
from numpy import mean
from numpy import ones
from numpy.random import randn
from numpy.random import randint
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.keras.layers import Input, Dense
from tensorflow.keras import layers
from tensorflow.python.keras import Sequential
from tensorflow.keras.layers import Reshape,Flatten,Conv2D, Conv2DTranspose,LeakyReLU,BatchNormalization
from tensorflow.python.keras.datasets.mnist import load_data
from tensorflow.python.keras import backend
from tensorflow.keras.optimizers import RMSprop
from tensorflow.python.keras.initializers import RandomNormal
from tensorflow.python.keras.constraints import Constraint
from matplotlib import pyplot
I have purchased and enjoyed some of your courses. Thought I would give back to help keep the great and useful examples working. Sept 20, 2021
Thank you. Hope you enjoy the other posts here as well!
Thanks for your tutorial. I don’t understand this code:
# make weights in the critic not trainable
for layer in critic.layers:
if not isinstance(layer, BatchNormalization):
layer.trainable = False
Why we don’t freeze Batch Norm layer of critic? I think that we need to freeze all layers of the critic.?
Hello guys. Why it is generating only the number 7? Is it because of mode collapse? Is it possible to generate other numbers with the above code?
Thanks.
See line 109 of the complete code. That’s intentional as an example here.
could you add python example about how to evaluate the generated data quantitatively?
*could you add python example about how to evaluate the similarity between the generated data and original data quantitatively?
Its painless for every machine learning expert to work on MNIST data set as majority of blogs are written on neural networks using this dataset.it was better if writers used other high resolution datasets to really make their articles worth trying on practical problems.Moreover, most of codes of machinelearningmastery cannot be run.Please make your blogs to be more followable and functional
Thank you for the feedback Asifa!
Hi there!
Just one question: when we define GAN model that combines both generator and critic models into one larger one, why do we freeze weights from all layers but BacthNormalization?
Thanks!
Hi Laura…I am not following your question. Could you rephrase so that I may better assist you?
Thank you so much for your tutorial.
I have a question from the below lines of code:
# make weights in the critic not trainable
for layer in critic.layers:
if not isinstance(layer, BatchNormalization):
layer.trainable = False
Why we don’t freeze Batch Norm layer of critic? I think that we need to freeze all layers of the critic.?
Hi Rave…did you implement your idea?
can we train text data to wgan ? if so, how? if anyone knows the then share the code
only use text data set not image
Hi Basma…Please clarify what you are attempting to accomplish with your model so that we may better assist you. | https://machinelearningmastery.com/how-to-code-a-wasserstein-generative-adversarial-network-wgan-from-scratch/ | CC-MAIN-2022-27 | refinedweb | 8,349 | 65.83 |
This post was originally posted on my personal blog.
A while ago, I was reading an RFC from react's RFCs called
useMutableSource; it was an experimental feature that, in a nutshell, lets you safely read, write and even edit an external source (outside of the react components tree). It's a banger feature, which I'm really chuffed for it, but it's experimental at the same time. You may know that I'm working on an Open-source state-management library called jotai. This library announced a new feature, the Provider-less mode. To know what it is, think of React Context, but no need for a
Provider component, it's not exactly that, but it gives you the idea.
Why a new one?
Yea, we have patterns and libraries that allow us to read and write from an external source, but as I said, this one lets you do things safely; no tearing anymore.
Tearing
Think of tearing as something like if we have a value(state) that A and B read from it, but somehow in the rendering, the value changes. The B component is later than A, So in Rendering, the value in the A component is 0, and in the newer component (B), the value is 1. We call this tearing; it means you see two different values in the viewport from one exact source. It's a new and hard to understand implementation in React concurrent mode; for more information, see this.
Experimental, Why should I use it?
So I thought about this, we have two options:
- Experimental version of react:
yarn add react@experimental
- Consistent version of
useMutableSource, you can copy paste it from here
I recommend the second option because it's not going to change, and good for now as long as we don't have
useMutableSource in a major react version.
Context with no Provider
I think we have reached what brought you here, but wait before all of this, don't forget to look at my Github and Twitter; you're going to see cool stuff there and help me with my learning journey too. So let's start.
Some of the code below is written with typescript, so it will be more understandable, even for people who don't know typescript.
Start
First we need to create a simple global object, which contains three properties:
const globalStore = { state: { count: 0 }, version: 0, listeners: new Set<() => any>() };
state: simple value like react Context value
version: important part that has to change whenever any part of the state changes
listeners: a set of functions that we call them every time we change part of the
state, so we notify them about the changes
Now we need to create a mutable source from
globalStore and give it the version, so it'll help it with triggering new changes, so we're going to access it in
getSnapshot and
subscribe; we'll talk about these soon.
const globalStoreSource = createMutableSource( globalStore, () => globalStore.version // (store) => store.version (Optional) if you use the consistent and non-experimental version of useMutableSource );
Now it's the time to talk about
getSnapshot; in a nutshell, it's a function that
useMutableSource returns its returned value whenever the state changes.
const cache = new Map(); const getSnapshot = (store: typeof globalStore) => { const setState = ( cb: (prevState: typeof store.state) => typeof store.state ) => { store.state = cb({ ...store.state }); store.version++; store.listeners.forEach((listener) => listener()); }; if (!cache.has(store.state) || !cache.has(store)) { cache.clear(); // remove all the old references cache.set(store.state, [{ ...store.state }, setState]); // we cache the result to prevent the useless re-renders // the key (store.state) is more consistent than the { ...store.state }, // because this changes everytime as a new object, and it always going to create a new cache cache.set(store, store); // check the above if statement, if the store changed completely (reference change), we'll make a new result and new state } return cache.get(store.state); // [state, setState] }; // later: const [state, setState] = useMutableSource(...)
Take a look at the
setState function, first we use
cb and pass it the previous state, then assign its returned value to our state, then we update the store version and notify all the listeners of the new change.
we used the spread operator
({ ...store.state })because we have to clone the value, so we make a new reference for the new state object and disable direct mutations.
We don't have any
listener yet, so how we can add one? with the
subscribe function, take a look at this:
const subscribe = (store: typeof globalStore, callback: () => any) => { store.listeners.add(callback); return () => store.listeners.delete(callback); };
This function's going to get called by
useMutableSource, So it passes
subscribe two parameters:
store: which is our original store
callback: this is going to cause our component a re-render (by
useMutableSource)
So when
useMutableSource calls the subscribe, we're going to add the
callback to our listeners. Whenever something changes in the state (
setState), we call all of our listeners so that the component will get re-rendered. That's how we have the updated value every time with
useMutableSource.
So you may wonder we delete the callback in return, the answer is that when the component unmounts,
useMutableSource will call
subscribe(), or in another term, we call it
unsubscribe. When it gets deleted, we'll no longer call a useless callback that will cause a re-render to an unmounted (or sometimes an old) component.
useContext
Now we reached the end line, don't think too much about the name, we just wanted to mimic the Provider-less version of React context.
export function useContext() { return useMutableSource(globalStoreSource, getSnapshot, subscribe); } // returns [state, setState]
Now we can use this function everywhere we want. Take a look at this example, or if you want, you could go straight for the codesandbox.
function Display1() { const [state] = useContext(); return <div>Display1 component count: {state.count}</div>; } function Display2() { const [state] = useContext(); return <div>Display2 component count: {state.count}</div>; } function Changer() { const [, setState] = useContext(); return ( <button onClick={() => setState((prevState) => ({ ...prevState, count: ++prevState.count })) } > +1 </button> ); } function App() { return ( <div className="App"> <Display1 /> <Display2 /> <Changer /> </div> ); }
Now whenever you click the +1 button, you can see the beautiful changes without any
Provider.
I hope you enjoyed this article, and don't forget to share and reaction to my article. If you wanted to tell me something, tell me on Twitter or mention me anywhere else, You can even subscribe to my newsletter.
- Cover image: Experiment, Nicolas Thomas, unsplash
Discussion (2)
Great article will definitely look into this. But I do question, couldn't this behavior be achieved with way less code using RxJs and Observables?
Thanks, Austin; I don't know actually about RxJS, but every solution except the react internals has some trade-offs, maybe tearing and ...., I just made a vanilla fast solution with uMS, but if you can merge the observables solution with uMS, why not, it would be great. The point of this article is teaching uMS, and we can do many amazing things with it. | https://dev.to/aslemammad/react-context-without-provider-usemutablesource-4aph | CC-MAIN-2021-17 | refinedweb | 1,186 | 59.94 |
Read and write to Amazon S3 using a file-like object
Read and write files to S3 using a file-like object. Refer to S3 buckets and keys using full URLs.
The underlying mechanism is a lazy read and write using cStringIO as the file emulation. This is an in memory buffer so is not suitable for large files (larger than your memory).
As S3 only supports reads and writes of the whole key, the S3 key will be read in its entirety and written on close. Starting from release 1.2 this read and write are deferred until required and the key is only read from if the file is read from or written within and only updated if a write operation has been carried out on the buffer contents.
More tests and docs are needed.
Requirements
boto
Usage
Basic usage:
from s3file import s3open f = s3open("") f.write("Lorem ipsum dolor sit amet...") f.close()
with statement:
with s3open(path) as remote_file: remote_file.write("blah blah blah")
S3 authentication key and secret may be passed into the s3open method or stored in the boto config file.:
f = s3open("", key, secret)
Other parameters to s3open include:
- expiration_days
- Sets the number of days that the remote file should be cached by clients. Default is 0, not cached.
- private
- If True, sets the file to be private. Defaults to False, publicly readable.
- content_type
- The content_type of the file will be guessed from the URL, but you can explicitly set it by passing a content_type value.
- create
- New in version 1.1 If False, assume bucket exists and bypass validation. Riskier, but can speed up writing. Defaults to True.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/python-s3file/ | CC-MAIN-2017-34 | refinedweb | 298 | 75.71 |
[classlib][luni] Runtime.exec hangs on out-of-file-handles
----------------------------------------------------------
Key: HARMONY-5485
URL:
Project: Harmony
Issue Type: Bug
Components: Classlib
Environment: Unix
Reporter: Mark Hindess
Assignee: Mark Hindess
Priority: Minor
Runtime.exec (in unix/procimpl.c) creates 5 pipes - 10 file handles - with no error checking.
If you run as simple test case such as:
import java.lang.Runtime;
import java.io.IOException;
public class FileHandleHang {
public static void main(String[] args) throws IOException {
Runtime.getRuntime().exec("no-such-program");
}
}
using "strace -q -f -e pipe java FileHandleHang" and look for the pipe creates which will
be
something like:
[pid 6841] pipe([50, 51]) = 0
[pid 6841] pipe([52, 53]) = 0
[pid 6841] pipe([54, 55]) = 0
[pid 6841] pipe([56, 57]) = 0
[pid 6841] pipe([58, 59]) = 0
then limit the number of file handles with "ulimit -n 56" and re-run the test then it hangs.
The strace output would be something like:
[pid 6853] pipe([50, 51]) = 0
[pid 6853] pipe([52, 53]) = 0
[pid 6853] pipe([54, 55]) = 0
[pid 6853] pipe(0x40167760) = -1 EMFILE (Too many open files)
[pid 6853] pipe(0x40167770) = -1 EMFILE (Too many open files)
There really needs to be error checking on all of these pipe calls to fix this and related
problems.
I noticed this problem because recent gcc with -Wall complains about these calls with the
following warning:
ignoring return value of 'pipe', declared with attribute warn_unused_result
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/harmony-commits/200802.mbox/%3C4021812.1202730009112.JavaMail.jira@brutus%3E | CC-MAIN-2015-06 | refinedweb | 260 | 56.59 |
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
from math import sqrt, pi, erf
import scipy.stats
We have some data, $x$ and $y$, and we want to fit a line to the data. Our model will be:$$\hat{y} = \hat{\alpha} + \hat{\beta}x$$
where a $\hat{}$ indicates our best estimate of something. We make an assumption that the process that generates our data looks like:$$y = \alpha + \beta x + \epsilon$$
where $\epsilon$ is noise drawn from a normal distribution.
One way to view this problem is as an optimization. Can we write down an equation that results in a single value that we minimize? Our ultimate goal is to make $\hat{y}$ as close as possible to $y$. Mathematically, that means we want $\sum_i(y_i - \hat{y}_i)^2$ to be small.
That is the objective function to minimize, but which parameters do we get to change? Those are $\hat{\alpha}$ and $\hat{\beta}$. So we need to write down a function which takes in $\hat{\alpha}$ and $\hat{\beta}$ and returns how good the fit is:
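A minimal sketch of such a function, the sum of squared residuals (the name `ssr` is my own, not from the notebook):

```python
import numpy as np

def ssr(alpha_hat, beta_hat, x, y):
    # residuals of the candidate line y_hat = alpha_hat + beta_hat * x
    y_hat = alpha_hat + beta_hat * x
    # a smaller sum of squared residuals means a better fit
    return np.sum((y - y_hat) ** 2)
```

Minimizing `ssr` over $\hat{\alpha}$ and $\hat{\beta}$ for fixed data gives the least-squares line.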
We can minimize this equation using any of our minimization techniques or we can do it analytically.
Using calculus you can show that the minimum to $f(\alpha, \beta)$ is:$$\hat{\beta} = \frac{\sum_i(x_i - \bar{x})(y_i - \bar{y})}{\sum_i(x_i - \bar{x})^2}$$
With a little bit of algebra, you can show this is$$\hat{\beta} = \frac{\sigma_{xy}}{\sigma_x^2}$$
where $\sigma_{xy}$ is the sample covariance of $x$ and $y$ and $\sigma_x^2$ is the sample variance of $x$.
To find the intercept, you can just take the average of the residuals (not their squares!) given the model so far:$$\hat{\alpha} = \frac{1}{N}\sum_i (y_i - \hat{\beta}x_i)$$
Let's see this in action
# Make some data -> this is problem setup # Do NOT copy this because it only generates random data # This does not perform regression x = np.linspace(0,10, 20) y = 1 + x * 2.5 + scipy.stats.norm.rvs(scale=2, size=20) plt.plot(x,y, 'o') plt.show()
cov = np.cov(x,y, ddof=2) #recall that the diagonal is variances, so we use that directly beta_hat = cov[0,1] / cov[0,0] alpha_hat = np.mean( y - beta_hat * x) print(f'alpha_hat = {alpha_hat:.2} ({1})') print(f'beta_hat = {beta_hat:.2} ({2.5})')
alpha_hat = 0.79 (1) beta_hat = 2.5 (2.5)
plt.plot(x,y, 'o') plt.plot(x, alpha_hat + beta_hat * x) plt.show()
Notice that we didn't get exactly the correct answer. The points were generated with a slope of 2.5 and an intercept of 1, whereas our fit was a little bit off
Data Type: single group of samples
Compares: If the samples came from an unknown parent normal distribution (are they normally distributed)
Null Hypothesis: The samples are from the unknown parent normal distribution
Conditions: None
Python:
scipy.stats.shapiro
Notes: There are many other tests for normality. This one is not the simplests, but is the most effective
data = [12.4, 12.6, 11.8, 11.5, 11.9, 12.2, 12.0, 12.1, 11.8] scipy.stats.shapiro(data)
(0.9825882911682129, 0.9762558937072754)
The $p$-value is quite high, so we don't reject the null hypothesis
data = np.linspace(0,1, 100) scipy.stats.shapiro(data)
(0.9547258019447327, 0.0017220161389559507)
The $p$-value is 0.002, so we correctly reject the null hypothesis
One of our assumptions is that the noise is normally distributed. Recall our model:$$y = \alpha + \beta x + \epsilon$$$$\epsilon = y - \alpha - \beta x \approx y - \hat{\alpha} - \hat{\beta} x = y - \hat{y}$$
Where the $\approx$ is because we are using our estimates for $\alpha$ and $\beta$. We can now check our assumption by histogramming the residuals, which should be the same as looking at the $\epsilon$ distribution.
plt.hist(y - beta_hat * x - alpha_hat) plt.show()
At this point, it's unclear if they are normally distributed. Luckily we just learned the Shapiro–Wilk Test!
scipy.stats.shapiro(y - beta_hat * x - alpha_hat)
(0.9317496418952942, 0.16685707867145538)
It looks like the residuals may indeed be normally distributed. So, our assumption was valid.
There are a few ways of measuring goodness of fit. One way is to just compute the SSR, the sum of the squared residuals. However, this has the negative that the units of $y$ appear in the goodness of fit. Here is what people typically use, the coefficient of determination:$$R^2 = 1 - \frac{\textrm{SSR}}{\textrm{TSS}} = 1 - \frac{\sum_i \left(\hat{y}_i - y\right)^2}{\sum_i \left(\bar{y} - y\right)^2}$$
This equation has the property that it's unitless, it's $1$ when the fit is perfect, and $0$ when the fit is awful. In the case of linear regression, $R$ is the same as the correlation coefficient.
ssr = np.sum((y - alpha_hat - beta_hat * x)**2) tss = np.sum((np.mean(y) - y)**2) rsq = 1 - ssr / tss print(rsq, sqrt(rsq)) print(np.corrcoef(x,y))
0.9454803725748993 0.9723581503617374 [[1. 0.97235815] [0.97235815 1. ]]
As you may have noticed, sometimes it gets confusing that the difference between the sample mean and the true mean, $\mu - \bar{x}$, are distributed according to a normal or $T$ distribution if the samples are distributed according to a normal distribution. Thus we end up with two distributions: one with parameters $\mu$, $\sigma$ that describes the samples and one with parameters $\mu$, $\sigma / \sqrt{N}$ that describes the difference between sample mean and true mean. Often this $\sigma / \sqrt{N}$ term is called Standard Error to distinguish it from the standard deviation of the population (true/hidden) distribution. Thus you can write:$$T = \frac{\bar{x} - \mu}{\sigma_x / \sqrt{N}} = \frac{\bar{x} - \mu}{S} $$
We're to be referring to Standard Error often now. You then can use Standard Error in confidence intervals or hypothesis tests. The same rules as previously apply: $N < 25$ requires a $t$-distribution and above is normal.
One other important thing to remember is that the denominator in the standard error should be the square root of the degrees of freedom. Usually this is written as $N - D$, where $D$ is the deducted degrees of freedom. In the case of linear regression, we have $D$ being the number of coefficients we're fitting. That $N - D$ is also the degrees of freedom in the $t$-distribution.
You may read the derivation in Bulmer on page 226. The variance in our estimated values ($S^2$) for slope and intercept are:$$S^2_{\epsilon} =\frac{\sigma^2_{\epsilon}}{N - D} = \frac{1}{N - D}\sum_i \left(\hat{y}_i - y_i\right)^2$$$$S^2_{\alpha} = S^2_{\epsilon} \left[ \frac{1}{N - D} + \frac{\bar{x}^2}{\sum_i\left(x_i - \bar{x}\right)^2}\right]$$$$S^2_{\beta} = \frac{S^2_{\epsilon}}{\sum_i \left(x_i - \bar{x}\right)^2}$$
$D$ here is the number of fit coefficients. 2 in our case.
df = len(x) - 2 s2_epsilon = np.sum((y - alpha_hat - beta_hat * x) ** 2) / df s2_alpha = s2_epsilon * (1. / df + np.mean(x) ** 2 / (np.sum((np.mean(x) - x) ** 2))) print('The standard error for the intercept is', np.sqrt(s2_alpha))
The standard error for the intercept is 0.8439285616547768
Let's just visualize now what the distribution for where the true intercept looks like. We know it is distributed according to:$$P(\alpha) = T(\mu=\hat{\alpha}, \sigma=S_\alpha, df=18)$$
The degrees of freedom here is
alpha_grid = np.linspace(-2, 2, 100) P_alpha = scipy.stats.t.pdf(alpha_grid, loc=alpha_hat, scale=np.sqrt(s2_alpha), df=len(x) - 2) plt.plot(alpha_grid, P_alpha) plt.axvline(1, color='red') plt.axvline(alpha_hat) plt.show()
Once we compute $S_\beta$, we can use the formulas we've seen before:$$P(\beta = \hat{\beta} \pm y) = 0.95$$$$T = \frac{y}{S_\beta}$$$$y = TS_\beta$$
The confidence interval will then be:$$\beta = \hat{\beta} \pm TS_\beta$$
with 95% confidence
s2_beta = s2_epsilon / np.sum((x - np.mean(x))**2) T = scipy.stats.t.ppf(0.975, len(x) - 2) print(s2_beta, T) print('beta = ', beta_hat, '+/-', T * np.sqrt(s2_beta), ' with 95% confidence')
0.0202139147447661 2.10092204024096 beta = 2.511951711962952 +/- 0.29869995143838984 with 95% confidence
Notice that just like in confidence intervals, once $N$ becomes large we can replace the $t$-distribution with a normal distribution.
The next interesting consequence of the $t$-distribution is that we can construct a hypothesis test. For example, we could test if the intercept should be $0$.
Our null hypothesis is that the intercept is $0$. Then, we can compute how big an interval would have to be constructed around $0$ to just capture what we calculated for $\alpha$. The distribution for that interval is:$$P(\alpha) = T(\mu=0, \sigma=S_\alpha, df=N - D = 19)$$
We take $D = 1$ here because we work under the assumption of the null hpyothesis (i.e., only slope needs to be fit). For the actual regression analysis, you should still use $D = 2$. Only when you do the hypothesis test itself do you use the $D = 1$
To convert to a standard $t$-distribution (variance is 1), so we have:$$T = \frac{\hat{\alpha}}{S_\alpha}$$$$\int_{-T}^T p(T)\,dT = 1 - p$$
df = len(x) - 2 s2_epsilon = np.sum((y - alpha_hat - beta_hat * x) ** 2) / df s2_alpha = s2_epsilon * (1. / df + np.mean(x) ** 2 / (np.sum((np.mean(x) - x) ** 2))) #ensure our T-value is positive, so our integral doesn't get flipped T = abs(alpha_hat / sqrt(s2_alpha)) p = 1 - (scipy.stats.t.cdf(T, len(x) - 1) - scipy.stats.t.cdf(-T, len(x) - 1)) print('alpha = ', alpha_hat, ' T = ', T, ' p-value = ', p)
alpha = 0.7923280760210064 T = 0.93885680852821 p-value = 0.3595872204956516
Depends on random date above!
When I ran this, the $p$-value is $\approx 0.04$ which says the evidence is weak but we can reject the null hypothesis. There is an intercept, since the $p$-value is below our threshold of 5%.
Our governing equation is now:$$y = {\mathbf X\beta} + \epsilon$$
where $\mathbf X$ is an $N\times D$ matrix, where $N$ is the number of data points and $D$ is the number of dimensions. $\beta$ is then a $D$ length column vector. We want to find $\hat{\beta}$ and get a model that looks like:$$\hat{y} = {\mathbf X\hat{\beta}}$$
This can be done with optimization just like last time, but we can more easily do it with matrix algebra. The equation for $\hat{\beta}$ is:$$\hat{\beta} = (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^Ty$$
#NOTE: THIS IS NOT PART OF REGRESSION!!!! #DO NOT COPY PASTE THIS CODE INTO HW/EXAM #generate data #I add some noise to the x coordinate to just spread the points out a little. x1 = np.linspace(0,1,15)+ scipy.stats.norm.rvs(size=15) x2 = np.linspace(0,1,15) + scipy.stats.norm.rvs(size=len(x1)) y = 3 * x1 - 2 * x2 + 3 + scipy.stats.norm.rvs(size=len(x1)) y
array([ 8.59319365, 8.2434237 , 12.21993175, 0.08357724, 10.2652057 , 2.56500889, 2.10507327, 0.20001633, -0.80413993, 5.18654211, 9.00227094, 3.56647433, -0.94086123, 5.75923108, 4.54752716])
import numpy.linalg as linalg x_mat = np.column_stack( (np.ones(len(x1)), x1, x2) ) x_mat
array([[ 1. , 1.48872779, -0.36739723], [ 1. , 0.64261885, -1.06126457], [ 1. , 1.16473294, -1.89915635], [ 1. , -1.21189489, -0.06291139], [ 1. , 1.14115123, -1.92180539], [ 1. , -0.38402235, 0.01482152], [ 1. , -0.12382456, 0.55549077], [ 1. , 0.28917314, 1.90087508], [ 1. , -0.22601835, 1.51870358], [ 1. , 1.51075808, 0.74723702], [ 1. , 1.55939315, -0.0256333 ], [ 1. , -0.59703574, -0.80557387], [ 1. , -0.19451134, 1.31367909], [ 1. , 0.35150002, -0.63603463], [ 1. , 1.10545322, 1.71878621]])
Now we have our $X$ matrix set-up. Now we need to evaluate the matrix equation for $\hat{\beta}$ above:
#dot -> matrix multiplication #transpose -> take a transpose #linalg.inv -> compute a matrix inverse beta_hat = linalg.inv(x_mat.transpose() @ x_mat) @ x_mat.transpose() @ y
Since it is tedius to type that whole equation out, you can instead use a shortcut:
beta_hat, *_ = linalg.lstsq(x_mat, y)
.
The
*_ symbol means put the rest of the return value into the
_ variable, which recall is how we indicate that we're making a variable which we will not use.
Let's now see how well the regression did!
y_hat = x_mat @ beta_hat
The first plot will be a $\hat{y}$ vs $y$ plot. This is called a parity plot. If $y$ and $\hat{y}$ are the same, you would see a $y=x$ line. How far the deviation is from that line is how bad the fit is.
plt.plot(y, y_hat, 'o') plt.plot(y, y, 'r') plt.xlabel('$y$') plt.ylabel('$\hat{y}$') plt.show()
Of course we can also look at the histogram of residuals
plt.hist(y - y_hat) plt.show()
Just like before, if we can find the standard error for the noise/residual, $\epsilon$, then we can find everything else. The equation for that is:$$S^2_{\epsilon} =\frac{\sigma^2_{\epsilon}}{N - D} = \frac{1}{N - D}\sum_i \left(\hat{y}_i - y_i\right)^2$$
where $D$ is the dimension of $\beta$. Knowing that, the standard error for $\hat{\beta}$ is$$S^2_{\beta} = S^2_{\epsilon} \left(\mathbf{X}^T\mathbf{X}\right)^{-1}$$
where the diagonal elements are the standard errors. The off-diagonal elements are unrelated.
And again, $P(\beta - \hat{\beta}) \propto T(0, \sigma = S_\beta, df = N - D)$
The $N - D$ term is called the degrees of freedom. $D$ is the number of fit coefficients.
One of the most common uses for this form of least-squares is if your problem is non-linear and you want to linearize it. For example, let's say I am in 1 dimension and the model I'd like to have is:$$y = \beta_2 x^2 + \beta_1 x + \beta_0 + \epsilon$$
I can package that into a vector, like this: $[x^2, x, 1]$ and create an $\mathbf{X}$ matrix by creating a vector for each $x_i$. If my $x$ values are 1,2,3, that would like:$$\left[ \begin{array}{lcr} 1^2 & 1 & 1\\ 2^2 & 2 & 1\\ 3^2 & 3 & 1\\ \end{array}\right]$$
#NOTE THIS IS NOT PART OF REGRESSION! #make some data to regress x = np.linspace(-3, 3, 25) y = 2 * x ** 2 - 3 * x + 4 + scipy.stats.norm.rvs(size=len(x), loc=0, scale=1.5) #END
plt.plot(x,y, 'o', label='data') plt.plot(x,2 * x ** 2 - 3 * x + 4, '-', label='exact solution') plt.legend(loc='upper right') plt.show()
x_mat = np.column_stack( (x**2, x, np.ones(len(x))) ) x_mat
array([[ 9. , -3. , 1. ], [ 7.5625, -2.75 , 1. ], [ 6.25 , -2.5 , 1. ], [ 5.0625, -2.25 , 1. ], [ 4. , -2. , 1. ], [ 3.0625, -1.75 , 1. ], [ 2.25 , -1.5 , 1. ], [ 1.5625, -1.25 , 1. ], [ 1. , -1. , 1. ], [ 0.5625, -0.75 , 1. ], [ 0.25 , -0.5 , 1. ], [ 0.0625, -0.25 , 1. ], [ 0. , 0. , 1. ], [ 0.0625, 0.25 , 1. ], [ 0.25 , 0.5 , 1. ], [ 0.5625, 0.75 , 1. ], [ 1. , 1. , 1. ], [ 1.5625, 1.25 , 1. ], [ 2.25 , 1.5 , 1. ], [ 3.0625, 1.75 , 1. ], [ 4. , 2. , 1. ], [ 5.0625, 2.25 , 1. ], [ 6.25 , 2.5 , 1. ], [ 7.5625, 2.75 , 1. ], [ 9. , 3. , 1. ]])
beta,*_ = linalg.lstsq(x_mat, y) print(beta) plt.plot(x,y, 'o', label='data') plt.plot(x,2 * x ** 2 - 3 * x + 4, '-', label='exact solution') plt.plot(x,x_mat.dot(beta), label='least squares') plt.legend(loc='upper right') plt.show()
[ 1.98114133 -2.87140368 4.227188 ]
.
yhat = x_mat @ beta resids = yhat - y SSR = np.sum(resids**2) se2_epsilon = SSR / (len(x) - len(beta)) print(se2_epsilon)
2.2894065474411858
se2_beta = se2_epsilon * linalg.inv(x_mat.transpose() @ x_mat) print(se2_beta)
[[ 0.01088978 0. -0.03539179] [ 0. 0.02817731 0. ] [-0.03539179 0. 0.20659959]]
Now that we have the standard error matrix, we can create confidence intervals for the $\beta$ values.
for i in range(len(beta)): #get our T-value for the confidence interval T = scipy.stats.t.ppf(0.975, len(x) - len(beta)) # Get the width of the confidence interval using our previously computed standard error cwidth = T * np.sqrt(se2_beta[i,i]) # print the result, using 2 - i to match our numbering above print(f'beta_{i} is {beta[i]:.2f} +/- {cwidth:.2f} with 95% confidence')
beta_0 is 1.98 +/- 0.22 with 95% confidence beta_1 is -2.87 +/- 0.35 with 95% confidence beta_2 is 4.23 +/- 0.94 with 95% confidence
TSS = np.sum( (np.mean(y) - y)**2) R2 = 1 - SSR / TSS R2, np.sqrt(R2)
(0.967408962984229, 0.9835695008408043)
You can justify a particular model by:
You CANNOT use a goodness of fit to compare models, since each time you add a new parameter you get a better fit
You should do the following when you want to do a good job: | https://nbviewer.jupyter.org/github/whitead/numerical_stats/blob/master/unit_12/lectures/lecture_1.ipynb | CC-MAIN-2020-40 | refinedweb | 2,844 | 66.44 |
MaixPy Python3 library
Project description
MaixPy3
MaixPy3 is a Python3 toolkit based on cpython, which simplifies the development of applications on Linux AI edge devices through Python programming.
Usage
Display the camera image on the screen.
from maix import display, camera display.show(camera.capture())
After inputting the image to the model, the result of the forward algorithm is returned.
from PIL import Image from maix import nn m = nn.load({ "param": "/root/models/resnet_awnn.param", "bin": "/root/models/resnet_awnn.bin" }, opt={ "model_type": "awnn", "inputs": { "input0": (224, 224, 3) }, "outputs": { "output0": (1, 1, 1000) }, "first_layer_conv_no_pad": False, "mean": [127.5, 127.5, 127.5], "norm": [0.00784313725490196, 0.00784313725490196, 0.00784313725490196], }) img = Image.open("input.jpg") out = m.forward(img, quantize=True) print(out.shape) out = nn.F.softmax(out) print(out.max(), out.argmax())
Some examples of accessing hardware peripherals.
- GPIO
import time from maix import gpio PH_BASE = 224 # "PH" gpiochip1 = gpio.chip("gpiochip1") led = gpiochip1.get_line((PH_BASE + 14)) # "PH14" config = gpio.line_request() config.request_type = gpio.line_request.DIRECTION_OUTPUT led.request(config) while led: led.set_value(0) time.sleep(0.1) led.set_value(1) time.sleep(0.1)
- PWM
import time from maix import pwm with pwm.PWM(6) as pwm6: pwm6.period = 1000000 pwm6.duty_cycle = 10000 pwm6.enable = True while True: for i in range(50, 1, -10): pwm6.duty_cycle = i * 10000 time.sleep(0.1) for i in range(1, 50, +10): pwm6.duty_cycle = i * 10000 time.sleep(0.1)
- I2C
from maix import i2c i2c = i2c.I2CDevice('/dev/i2c-2', 0x26) i2c.write(0x1, b'\xAA') print(i2c.read(0x1, 1))
- SPI
from maix import spi spi = spi.SpiDev() spi.open(1, 0) # '/dev/spidev1.0' spi.bits_per_word = 8 spi.max_speed_hz = 1 spi.mode = 0b11 import time while True: time.sleep(0.1) to_send = [0x01, 0x02, 0x01] print(spi.xfer2(to_send, 800000)) # 800Khz
- UART
from maix import serial with serial.Serial("/dev/ttyS1",115200) as ser: ser.write(b"Hello Wrold !!!\n") ser.setDTR(True) ser.setRTS(True) tmp = ser.readline() print(tmp) ser.write(tmp) ser.setDTR(False) ser.setRTS(False)
- EVENT
from maix import evdev from select import select dev = evdev.InputDevice('/dev/input/event9') while True: select([dev], [], []) for event in dev.read(): print(event.code, event.value)
See the documentation for more information
Jupyter
Install rpyc_ikernel kernel in jupyter notebook & lab to get an IDE editor that can remotely call Python code, videos, and image streaming.
Click here to view the effect usage_display_hook.ipynb. Note that jupyter runs on your computer.
Progress
The development progress is in no particular order.
Build
Under
linux x86_64, use
python3 setup.py build to complete the general package construction.
For other platforms, take the version of
maix_v831 as an example, match the Python3 + cross-compilation chain of the corresponding platform, and run
python3.8 setup.py build maix_v831 to complete the construction of the target platform package.
Welcome to provide configurations of different platforms to MaixPy3/envs/ to adapt to the MaixPy3 environment.
If you need
.whlpre-compiled package, please change
buildto
bdist_wheel.
Develop
Tested glibc >= 27 on Ubuntu20.04 & manjaro20.03.
Each catalog function of the project.
- docs [store some general development documents]
- envs [store compilation configurations for different platforms]
- examples [store examples or applications on different platforms]
- ext_modules [store project modules that need to be compiled]
- maix [Provide maix entry Python module]
- tests [Provide tox common test items]
- setup.py [MaixPy3 project compilation entry]
If you want to submit the configuration of other platforms, please refer to the following:
_maix_modules = [ libi2c_module, ] _maix_data_files = [ ('/maix', get_srcs(ext_so, ['so', 'ko'])), ] _maix_py_modules = [ "numpy", ]
If you want to submit some Python and C modules that need to be compiled, it is recommended to use the sub-repository to import, refer to the following:
If you want to submit some useful Python tools or sample code, refer to the following:
Thanks
All this comes from the power of open source. Thanks to them, they are listed in no particular order.
- cpython
- rpyc
- py-spidev
- libi2c
- python-evdev
- python3-gpiod
- pwmpy
- pyserial
- Pillow
- numpy
- opencv-python
- ipython
- VideoStream-python
- ipykernel
- jupyter
- MaixPy
- rpyc_ikernel
The open source repositories that may be cited in the future are also grateful to them.
You are welcome to recommend the open source projects you need. If you find any missing projects, please let me know immediately.
License
Licensed under the MIT license.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/MaixPy3/0.2.6/ | CC-MAIN-2021-17 | refinedweb | 747 | 53.17 |
Python – Searching Algorithms
Searching is a very basic necessity when you store data in different data structures. The simplest appraoch is to go across every element in the data structure and match it with the value you are searching for.
This is known as Linear search. It is inefficient and rarely used, but creating a program for it gives an idea about how we can implement some advanced
search algorithms.
Linear Search
In this type of search, a sequential search is made over all items one by one. Every item is checked and if a match is found then that particular item is returned, otherwise the search continues till the end of the data structure.
def linear_search(values, search_for): search_at = 0 search_res = False # Match the value with each data element while search_at < len(values) and search_res is False: if values[search_at] == search_for: search_res = True else: search_at = search_at + 1 return search_res l = [64, 34, 25, 12, 22, 11, 90] print(linear_search(l, 12)) print(linear_search(l, 91))
When the above code is executed, it produces the following result −
True False
Interpolation Search
This search algorithm works on the probing position of the required value. For this algorithm to work properly, the data collection should be in a sorted form and equally distributed.
Initially, the probe position is the position of the middle most item of the collection.If a match occurs, then the index of the item is returned..
There is a specific formula to calculate the middle position which is indicated in the program below.
def intpolsearch(values,x ): idx0 = 0 idxn = (len(values) - 1) while idx0 <= idxn and x >= values[idx0] and x <= values[idxn]: # Find the mid point mid = idx0 +\ int(((float(idxn - idx0)/( values[idxn] - values[idx0])) * ( x - values[idx0]))) # Compare the value at mid point with search value if values[mid] == x: return "Found "+str(x)+" at index "+str(mid) if values[mid] < x: idx0 = mid + 1 return "Searched element not in the list" l = [2, 6, 11, 19, 27, 31, 45, 121] print(intpolsearch(l, 2))
When the above code is executed, it produces the following result −
Found 2 at index 0 | https://scanftree.com/tutorial/python/python-data-structure/python-searching-algorithms/ | CC-MAIN-2022-40 | refinedweb | 357 | 51.72 |
Decklin Foster: man ascii
    Fetched 1B in 0s (42B/s)

Well. That settles that question! :-)
Twitter : IRC :: Blogs : Usenet

(This applies equally to other "micro-blogging" services, but I am about to explain why I believe that's not the right metaphor. You may also substitute mailing lists for Usenet.)

With the older media, you have a place -- a newsgroup, or a channel, that people went to, with a distinct culture, and that (mostly) weren't "owned" by anyone, but rather by the community. With the new ones, we are all sole proprietors of our own streams, and we "tune in" to the subset of people we find interesting, rather than topics we invest in. So, instead of bumping into the same person in a couple different groups, or never reading their words at all, you might find that your feeds overlap a bit more than they do with most people. This is how I find people to "follow" and blogs to read, in fact -- as my network expands, more people become loosely joined to it, and as I notice ones worth reading I add them.

Is one model better than the other? Probably not. I could make an analogy to music. In theory, I'd rather read what my favorite critics have to say about a wide variety of new releases -- some of which I'd never know about otherwise -- than keep up with the discussion of bands and genres I really like, even if most people writing about them are terrible (remember, 90% of everything is crap). But I also lose that sense of community of being a "fan" of something; I no longer have a deep connection to what's going on in the fandom or the scene, which I also would never know about otherwise (some of it is just too obscure for my favorite writers to cover).

In practice, it seems the new approach has been more popular, but maybe that's because more people are on the net now, and both kinds of communication are/were shaped by the technology available at the time (destinations make much more sense with limited/centralized computing resources, and aggregation makes much more sense with powerful clients and a wider, less specialized user base). Anyway.
This post is not actually about social media theory or whatever you want to call it; it's about some software I have packaged. Because of all the above, I have always wished that I could use Twitter from something more like my IRC client. Like, say, my IRC client. One could abuse the concepts of IRC to make an "on the fly" channel of whoever happens to be in my feed. (I once read a blog comment somewhere complaining that Twitter could easily be implemented on any existing IRC server using one +m channel for everyone and some client-side direction of messages from all such channels to a single window. +1 for cleverness, but they did sort of miss the point of why normal people sign up for web sites rather than installing and configuring clients for obscure chat protocols.)

So, I had grand plans to write a Twitter client which was an IRC server, with some clever mapping of IRC concepts and commands to their equivalents over there. I never got around to it, as I barely have enough time for anything I do now. But last month I noticed that someone else had implemented such a thing: tircd. I had seen Twitter/IRC services before, but like the official Twitter XMPP service, they were all implemented as bots, which I detest for this sort of application. Bitlbee, for example, translates various IM protocols to IRC, but only halfway -- for anything else you have to use a bot as a sort of poor man's command line. If I want a command line for Twitter, I already have several; IRC can do better.

And tircd really does! It's great. You're not required to edit the config file, and there's no extra layer on top of IRC for things like logging in or adding people. I've finally packaged the latest release, which is waiting in NEW currently. I got a request for sneak preview packages, so if you want to install some unapproved .debs check this repository.
I may still pick my own project back up, as Twitter itself, being a centralized service, feels like a stopgap solution on the way to a more generalized 140-character equivalent of the, er, blogosphere as envisioned by open-source projects like Identi.ca. In the future, perhaps, when we use a certain micro-blogging "service" we might be randomly connected to one of any number of servers run by different individuals but all mirroring messages back and forth to each other. Which, now that I think of it, sounds vaguely like some obscure, obsolete chat and news-posting protocols I know.
    git filter-branch \
        --tag-name-filter cat \
        --index-filter 'git update-index --remove .hgtags' \
        --commit-filter \
        'if [ $# = 3 ] && git diff-tree --quiet $1 $3; then
            skip_commit "$@"
        else
            git commit-tree "$@"
        fi' \
        HEAD

The tag name filter is always necessary if you want tags to be updated to point to the corresponding commits on the new, rewritten branch. I consider this a UI failure -- when a branch is rewritten, the ref is modified, and the old one moved to refs/original. Tags, on the other hand, stay where they are, without any indication on the new branch that this is where you might want to move that old tag and sign it again or whatever. IMHO they ought to be handled the same as branches.

The index filter is simply an efficient way of removing the unwanted file from all commits. This and the tag filter are both covered in the manual page.

Writing a commit filter is a little more obscure. After .hgtags is removed from the index, we may end up at one of those useless "Added tag foo" commits and have no changes to record in the commit. By default, of course, filter-branch still records these -- the commit message might be useful, or something. But I want to suppress them.

The commit filter is called with a tree -- you're at the point between write-tree and commit-tree (I recommend Git from the bottom up if you're confused here.) It gets that tree ($1), and then "-p PARENT" for each parent, just like commit-tree. So, if this is a normal commit with one parent, there will be 3 arguments. (If there's only one argument, there is no parent, i.e., the first commit, and if there are more, then it's a merge.) This is the only case we want to mess with. If there are no changes between our tree and the parent's tree, then it's one of those no-op commits, and we can skip it (skip_commit, a shell function defined by filter-branch, uses some deep magic to hand us the original parent again next time).

I think diffing the index and the parent would work as well, but this seemed clearer. It still feels like a hack, so I'd love to hear from anyone who can suggest improvements. Since this is a special case, maybe it's better off being implemented in hg-to-git.py itself. There's always more than one way to do it.

Update: Teemu Likonen points out that the next version of Git (1.6.2, not yet in unstable) will have a --prune-empty option which makes this particular problem totally trivial. I am starting to get the feeling that the Git developers are all reading our minds... :-)
I think diffing the index and the parent would work as well, but this seemed clearer. It still feels like a hack, so I'd love to hear from anyone who can suggest improvements. Since this is a special case, maybe it's better off being implemented in hg-to-git.py itself. There's always more than one way to do it. Update: Teemu Likonen points out that the next version of Git (1.6.2, not yet in unstable) will have a --prune-empty option which makes this particular problem totally trivial. I am starting to get the feeling that the Git developers are all reading our minds... :-)
class Server < Struct.new(:host, :port, :password) def to_s port == 6600 ? host : "# host :# port " end endThat particular one's from Njiiri (which may clue you in that it took me several months to get around to writing this post). For educational purposes, it's nice: you can point out how we give a name to a class (I've also tried to explain assignment as "naming" rather than "storing"), that there's an unnamed class (classes are also objects!), and overriding a method (which uses the dynamically defined stuff), without any boilerplate to get in the way. So it's tempting to use this in a lesson. Unfortunately, Ruby is a little bit weird here, and you run into those distracting "practical" issues about the particular language you're working in. Your classes can't subclass Class, and thus you can't say Foo.new like you can Struct.new. Struct is actually implemented in C (or Java, or whatever's native). So you have to do some handwaving. This is because the "metaclasses" we have in Ruby are implemented, so to speak, as singleton classes of objects (including class objects). I do not mean this as a criticism of Matz at all, but they seem like more of a serendipitous thing than an intentional design -- "hey, if we implement classes in this particular way, we get this sort of metaclassing automatically [1]." (The clearest explanation of why this is that I've found is in the first chapter of Advanced Rails by Brad Ediger.) I wanted to be able to just subclass Class, rather than have all that fun power that we normally get to abuse in Ruby, only because I think this is the clearest way to explain the abstraction. Even I still have trouble describing singletons in plain English. Struct is intuitively a class of classes, and factors out similar/boring stuff -- a good practice. It could be an example of how to refactor some simple classes, if only we could follow it. I decided to give up on the idea for my tutorial, but it kept me up at night. 
Ruby metaclasses clearly can do anything that the sort of Struct-like metaclasses I have in my mind -- "parametric" classes, if you will -- can do, and I can dynamically define just about anything; why not make it happen? We can instantiate Class itself, but the tool we have to shape that class is singleton methods. This can certainly be abstracted away. So, I whipped something up. Before getting to it, though, let's visit a new construct in Ruby 1.9 and 1.8.7: Object#tap. Apart from the obvious debugging use described in its documentation, it makes it quite easy to factor out this pattern:
def gimme_a_thing thing = Thing.new thing.do_stuff_to_it thing endInto something closer to the style of functional or declarative programming:
def gimme_a_thing Thing.new.tap thing thing.do_stuff_to_it end end(Well, "do stuff" is obviously still procedural, but A for effort.) Which one is "better" could probably be the subject of much debate, but: I really prefer the second one; even though it has exactly the same effect, it looks like "what" rather than "how" (something else I try to beat into impressionable young heads. :-)), which is I think easier to write tests for. I'm going to use it here, because it's shorter. Now, Object#tap passes the object as an argument to the block; in our case, we are going to define a metaclass, so we want to work with self, instead. So we define a new version, class_tap -- by analogy with class_def, I suppose --- which class_evals the block rather than simply evaluating it:
class Class def class_tap(&blk) tap _self _self.class_eval &blk end endAnd to do the following trickery, we make use of MetAid, written by why the lucky stiff -- it's very small, so we could always just incorporate the bits we want into the code here, but this short file provides a common vocabulary for talking about metaclass stuff which is quite valuable. Now, we can write a method to create our new classes on the fly. Here's what I came up with:
require 'metaid' class << Class def meta(_super=Object, &blk) new.class_tap do # 1 meta_def :new do *args # 2 Class.new(_super).class_tap do # 3 class_exec *args, &blk # 4 end end end end end(Note the "class << Class", opening the singleton, rather than "class Class", opening Class itself. Also, the distinction between new and Class.new -- they are the same method, but from inside the meta_def we're no longer in the Class class.) The lines of meta itself mean:.
- The value of this thing is a generated class, which we will describe thusly:
- Its singleton class has a new method, which gives you a value that is:
- Another generated class, which is defined by:
- the original block, which now gets run with new's arguments.
class MyStruct < Class.meta \ do *args attr_accessor *args class_def :initialize do *instance_args args.zip(instance_args).each do attr, val instance_variable_set :"@# attr ", val end end end endThe real Struct class does a few other nice things for you, but this is the heart of it; I can go back to that Njiiri example and just swap in MyStruct for Struct (Providing an instant performance gain of -200%... :-)). Here's a "shapes" example from my abandoned OO lesson:
class Polygon < Struct.new(:sides) def perimeter @sides.inject(&:+) end end class RegularPolygonClass < Class.meta(Polygon) \ do n_sides class_def :initialize do side_length @sides = [side_length] * n_sides end class_def :area do @sides.size * @sides.first**2 / Math.tan(Math::PI / @sides.size) / 4 end end end class Square < RegularPolygonClass.new(4); end class Pentagon < RegularPolygonClass.new(5); end(Note that area has no free variables and thus could actually be defined with def. It would just look funny.) You only have to change one number here to make new polygon classes, rather than accumulating parameter lint or explicitly subclassing and redefining something implicit just for the derived class [2]. In a way it the exact analogue of the imperative vs. tap style described above. There is quite a bit of aesthetics involved; one way is not "right". Apart from the syntactical wart, I like being able to do things this way. Ruby is, as they say, optimized for programmer happiness and the principle of least surprise. Still, I'm sure it is quite slow, and I don't particularly need it for any real-world application right now. For an intro to OO it's way too much complexity to have lurking unexplained beneath the surface and still requires getting bogged down in the language you happen to be using. But hey! It's neat.
Hello, please use for all support inquiries. Regards, your Joker.com teamThis is complete and utter bullshit [2]. If a company is going to mail me, I expect to be able to mail them. I have a mail client; it manages communication the way I want it to, and runs my preferred editor to compose messages (which is, in fact, how I am writing this very blog post). I am sick of dicking around on JavaScript-requiring web forms that all work differently and typing in postage-stamp-sized little textareas. Anything I do type in them is lost, because unlike my email client, web forms don't save sent messages unless the authors feel like letting you have a Cc:. When a company forces me to do this, I take it as a sign of disrespect. I tolerate a web browser [3] for reading pages; "applications" are universally painful [4]. One of the things I used to like about Joker was the PGP mail interface. AFAIK, they have not killed it (I didn't bother to check), but with automatic "we don't want to listen to you unless you inconvenience yourself" bounces like this, what's the point? I surmise that there is, or was, some smart person there who understood how (and more to the point, why) to hook a pseudo-mailbox up to a software system, and that they have been overridden by someone in management who realized it's much easier to have dime-a-dozen webmonkeys hook a form up to the same system since 90% of users just don't care (and even people who do notice and dislike this, but are not as inflamed as I, have come to expect it because everyone else refuses mail, so, y'know, pick yr battles son, etc.). Not the sort of culture I put faith my in. The irony of it all was that I was registering this domain to run a service I had decided to create specifically because of another site refusing mail and directing me to an even lamer web form. That would take incoming mail and, you know... process it with software. Suffice it to say, I am no longer going to be their customer. 
In deciding who to use instead, I figured I'd do a survey of where the domains in that "Subscription" column [5] on Planet Debian were registered, but... WHOIS is basically useless. Every server just returns free text, formatted differently by (apparently) every implementation under the sun. Cheaply parsing something out from .com/.org/.net is possible, but InterNIC kindly blacklists your IP after making more than a few requests in a few minutes. I guess I'm just gonna go with Gandi (but other suggestions would be welcome). In somewhat unrelated developments, I went to nic.at to update my nameservers for Where the Bus At? last week. There was no authentication or anything on the request form, so I just filled it in and sent it off. I got some automatic mail saying I need to print out a PDF, sign it, and fax it internationally. Annoying, and hardly as secure as mailing them with PGP, but whatever. But then, also yesterday, I got another mail saying the update was complete (lo and behold, it was). I am now somewhat concerned about the security of my domain: it seems like anyone can come by and put something in the form and if I'm not around to notice the courtesy mail and ask that they stop the request, it'll eventually go through, no questions asked. I have not yet written them to figure out what the deal is, though. Kinda burned out. I guess I didn't really have high hopes for dealing directly with a ccTLD registrar (this was the first time I've done it... I can't believe I blew 60 on a cutesy domain name) rather than a reseller who competes in a market, but then, I go and google "domain registrar" and look at all the AdWords dollars spent trying to compete with GoDaddy [6] and just kind of want to put my head in my hands. On DJB's DNS pages there's this bit about setting up a domain. It doesn't say "How to (buy register whatever) a domain name". It says, "How to receive a delegation from .com". Which is of course, how it works. And what I want to buy. 
I don't want "parking" or even gratis nameservers. Just a delegation from .com. Please. No AdWords came up when I googled for that phrase to copy the link. Sometimes I guess markets just sink to the bottom. Anyway. I feel like there's a free-software angle here. My continuing irrational hatred of using other people's forms, web-based mailing-list substitutes, nameservers, etc. stems not so much from their suckage but from the fact that there is no longer any software there, in front of me, for the four freedoms to possibly apply to. Being able to run your mail reader for any purpose doesn't win you much if no one uses mail. I don't really know what to do about this.:
- Sup has no folders, a la Gmail. After watching many friends and even fellow hackers switch to Gmail, I have to admit: this literal hierarchical organization thing doesn't scale. I was planning to totally redo my mail folder system Any Day Now for about six months prior to starting on this. It was never going to happen.
- Sup uses a Ferret full-text index to make this approach plausible. Search is super fast and beats (for me) both any kind of "organization" I could have disciplined myself into and the fine-grained control of something like mutt's search. It's sort of like git: until you do it, you don't realize how much more productive you can be when previously-expensive operations become instantaneous.
- Sup works with threads, not messages; this is another thing Gmail got right. I used to waste brain cells thinking about which messages in a thread were worthwhile enough to save or not. Given the absurdly cheap price of disk relative to what we can type out in plain text since, like, a decade ago, this is crazy. In the index, I only have to look at whether a thread has new chatter or not, not its size, shape, or where the new messages are relative to it. All that's in the thread-view buffer where I actually read content.
- Sup is written in Ruby. Back in the dawn of time, I used Gnus, and while I wasn't very good at elisp, the hackability afforded by being written in a high-level language was very nice compared to programs mostly implemented in C (even if they had a tacked-on scripting language). Plus, I love Ruby right now..)
- At version 0.6, sup is very much not-yet-1.0. While it handles insanely large amounts of email without breaking a sweat, I still keep an additional backup of everything. (If Ferret crashes, the original copies of mail will be untouched, but it never hurts to be paranoid.)
- The flow of data from your physical mail store to the sup index is currently one-way only. Actually removing deleted/spam messages is a big hack (if it works at all), and labels/flags/etc live entirely in Ferret-land. If you want to manipulate an actual mailbox, mutt is still the tool for the job (and then, you need to re-sync sup). This is probably the deal-breaker for most of us. I jumped in anyway because I feel like it can be solved (or more likely, made irrelevant) later.
- William (upstream) is currently re-designing the whole thing from scratch, replacing the index library with Sphinx, and decoupling the index from the console frontend. As a result, the previous item is pretty much a non-priority (and bugs in general are not going to get the same amount of love as usual). I am hoping that we end up dumping mail into the index directly, then writing more frontends to write to Maildir backup, serve as webmail/whatever, but this is a long way off. On the plus side, thanks to Thrift, they will not be limited to Ruby.
- Ruby's ncurses library still doesn't handle Unicode correctly. It can be patched (still doesn't work totally right), but I'm trying to find a more permanent solution for Debian.
"Obama had the additional skill of criticizing George W. Bush."Basically: yes. And that's what passes for politics over here, folks. Sorry about your economies and all that. | https://planet-search.debian.org/cgi-bin/search.cgi?terms=%22Decklin+Foster%22 | CC-MAIN-2022-40 | refinedweb | 3,880 | 69.92 |
ranking 1.0.1
ranking: ^1.0.1 copied to clipboard
A simple library to rank players.
This package contains functions to rank players, given the results of their games.
For static rankings, where players don't change their skill-level, the Bradley-Terry model computes a score for each player that predicts the probability at which one player would win against another.
For dynamic rankings, where players change their skill-level over time, the ELO ranking system provides a score that dynamically adjust with the results of the players.
Usage #
A simple usage example for Bradley-Terry:
import 'package:ranking/ranking.dart'; main() { // In the following list the first entry in the pair (representing a game) // won that game. var games = [ ["Player 2", "Player 1"], // Player 2 won over Player 1. ["Player 2", "Player 3"], ["Player 3", "Player 2"], ["Player 3", "Player 2"], ]; var scores = computeBradleyTerryScores(games); // The `scores` map contains a score for each player: // Player 1 -> 0.000 // Player 2 -> 0.333 // Player 3 -> 0.667 }
A simple usage example for ELO:
import 'package:ranking/ranking.dart'; main() { // The game-scope (3rd column) indicates the result of the game: // * 1.0: the first player won decisively. // * 0.5: a draw. // * 0.0: the second player won decisively. var games = [ ["Player 2", "Player 1", 1.0], ["Player 3", "Player 2", 0.5], ["Player 3", "Player 1", 0.0], ["Player 3", "Player 2", 1.0], ["Player 2", "Player 3", 1.0], ]; var elo = new Elo( defaultInitialRating: 100, // With an `n` of 30, a difference of 30 in score means that the // stronger player is 10 times more likely to win. n: 30, // How much new game-results move the score. With 10, players with // the same score will have a difference of 10 after a decisive // victory/loss. kFactor: 10 ); games.forEach((list) { elo.recordResult(list[0], list[1], list[2]); }); var ratings = elo.ratings; // The `rating` map contains a rating for each player: // Player 1 -> 101 // Player 2 -> 103 // Player 3 -> 95 }
Features and bugs #
Please file feature requests and bugs at the issue tracker. | https://pub.dev/packages/ranking | CC-MAIN-2021-25 | refinedweb | 347 | 68.36 |
2014-01-30 S.), Rafeal Weinstein (RWN), Dmitry Lomov (DL), Niko Matsakis (NM), Simon Kaegi (SK), Dave Herman (DH)
Parallel JavaScript
RH: API is stable
RH: Parallel array data type is no longer there. Instead as methods on arrays and typed objects.
NM: To be clear you can have an array with structs in it.
RH: The sweet spot is games and image processing.
RH: No current show stoppers on implementation. Some things to work out related to GC. The strategy going forward. Be complimentary to the typed object spec. Will track it and spec parts of it Typed Object spec. "Close follower". Will move in tandem.
LH: To move to phase 1 we need to see some examples.
YK: Agree, we need to see where we're at.
AWB: Agree, we should have presentations to move from phase 0 to phase 1.
RH: I have presented twice already...
YK: Moving to phase 1 should require a presentation.
YK: Concerned that we are exploding the API with mapPar, filterPar, fooPar etc.
YK: Prefer static functions
EA: Or a standard module
import {map} from '@parallel'
YK:
this in functions?
arr.map(obj.method, obj)
DH: The signature should just match the existing functions.
RH: Not less surprising. Just as surprising.
parallelModule.map(arr, func, ...) // [T] -> .... -> [T] // same return type
NM: What would you call
from?
WH: It's too confusing to require completely different static function style for invoking parallel maps as compared to non-parallel maps. mapPar etc. method style is better than the proposed alternatives.
DH: It is always nicer to write a method call than a function call.
DH: Don't want a "bring me a rock" exercise.
EA: File issues on GitHub on the drafts (that are also hosted on GitHub).
MM: Worked well for Promises.
YK/MN to talk through the concern about a "ton of methods".
Conclusion/resolution
- Move Parallel JavaScript to phase 1
- Talk offline about design issues further
Structured Clone
DL: Is implemented in all browsers. Part of HTML spec. Hixie speced it. Hixie is happy with TC39 moving this to ES.
YK: Like to object to this motion. It is currently a giant set of scenario hacks.
DL: We want to add language objects that we want to transfer.
AWB: Cloning framework in ES.
DH: Is it possible to reform or do we have to start from scratch? Seems hard to reform. Too many issues.
MM: Fears that if we do not take it over and introduce something new. The old will continue to exist. We need a path to replace the existing system, including what PostMessage does.
DH: We need a roadmap. How do we handle transferables?
DH: Hixie (or Anne) added Map and Set to structured clone to HTML spec.
DL: We cannot add extensibility mechanisms if we do not own the spec.
YK: We should own the spec. Opposed to DOM specific extensibility methods. General extensibility mechanism are important.
BE: How would symbols work?
AWB: We have a symbol registry. As long as both side cooperate. Serialize to a registration key. The two sides need to agree.
MM: It is unobservable that the symbol is not the same across the two workers.
WH: What if you have the same symbol registered under two different strings?
MM: That can't be allowed.
BE: in the cross-machine limit, structured clone is serlialization/deserialization; start there, allow optimizations like Sun XDR, Van Jacobsen TCP/IP, Google protocolbuffers [discussion on optimization]
WH: What do we mean by optimization?
DH: [explains difference between opaque and structured clone, including implications on optimization]
WH: Are we going to settle on one of the two or do both?
DH: Both [more discussion on optimization]
BE: [explains history of prior work]
BE: Can't tell Hixie and Anne to stop adding to structured clone. Anne: Want to give them assurance that we will take over the effort.
WH: What's the consensus?
YK: We'll take it on. Move to stage 0.x
defineGetter, defineSetter etc in Annex B?
BT: IE ships this.
MM: It would just be speced using defineProperty etc
AWB: Firefox does some strange things.
BE: please enumerate "strange things", file bugs
YK: It starts at level 1. Or 3? there are already implementations.
RW: What sites?
BT: Will attempt to furnish a list of sites...
Conclusion/resolution:
- Makes sense to put in ES7 annex B
- Brian to write an initial speec draft
Process document
Process doc is now public docs.google.com/a/chromium.org/document/d/1QbEE0BsO4lvl7NFTn5WXWeiEIBfaVUF7Dk0hpPpPDzU
Scheduling for next meeting
April 8-10 at Mozilla, San Francisco May 20-22 at Facebook, Menlo Park July 29-32 at Microsoft, Redmond Sept 23-25 at Bocoup, Boston Nov 18-20 at PayPal, San Jose
Async/await
LH: lukehoban/ecmascript-asyncawait
LH/MM: await syntax is important because the precedence of await needs to be different than yield.
LH: async functions could be combined with function*; in such a thing we'd need both yield and await
MM/WH: What would the behavior of such a thing be?
LH: That's a seperate proposal - something we can discuss later.
BE: Syntax conflict with => functions (elided parameters). what does: async() // newline, no semi here => {...} ... mean? We can make it be an async arrow if we want, but second line looks like 0-param arrow function expression....
WH: async (a, b, c) => await d looks too much like a function call of a function named 'async'. Need to parse a long way before figuring out it's an async lambda. This wouldn't fall under the existing cover grammar.
LH: I'll look into that.
DH: Initially concerned about hard-coding the scheduler.
LH: Identical to Q.async. There is only one way to do this.
MM: September Promises consensus is superior to the Promises spec we have now. [Debate about whether we'll end up with two Promises APIs]
WH: Do the two APIs use the same scheduler (that would be hardcoded)?
DH: No.
LH: Can we move async/await to stage 1? General agreement
AWB: This means that we agree that this is something in a future version of ES
Conclusion/Resolution
- Moved async/await to stage 1.
- Next step is to write real spec language
Promises discussion
MM: Advocacy for .then()/.cast()
STH: Advocacy for .chain()
LH: Proposal of Promise.chain() compromise
YK: I would probably be ok with this
MM: I would probably be ok with this ... extensive discussion ...
MM: A resolve of a resolve does not create multiple levels of wrapping. Without chain this is not observable.
BE: .chain() requires over-wrapping/-unwrapping, without chain this is unobservable and therefore optimizable -- says to reject chain (surprised by own position changing)
YK: This persuades me that we shouldn't have .chain()
STH: I strongly disagree, but I'm not going to hold up ES6 over this
Conclusion/Resolution
- Promisecast is renamed to Promise.resolve (remove old Promise.resolve)
- Keep then, reject chain (NOT DEFER, reject!)
- Renaming .cast thus removes over-wrapping (always-wrap) deoptimization in old Promise.resolve
Some further discussion on keeping the spec inheritance/AOP-friendly by not using .then internally some times, short-cutting through internal methods or internal helpers other times YK, MM, AWB have details | https://esdiscuss.org/notes/2014-01-30 | CC-MAIN-2019-18 | refinedweb | 1,209 | 68.57 |
, let's just change this to something more meaningful, like FirstTestCase.
- In the Solution Explorer window, which is on the right side of the Visual Studio in the above image. Right Click on the Program.cs and Select Rename.
- Notice that the text Program.cs is selected by default, now just type the new test case name 'FirstTestCase'.
- The new name will start reflecting everywhere in the project or code window.
Steps to Download Selenium WebDriver
- Go to Tools >> Nuget Package Manager >> Manage Nuget Packages for Solution....
Note: The above screenshot is wrongly taken, please select Nuget Package Manager >> Manage Nuget Packages for Solution...
- In the Seach Box, search for Selenium.WebDriver or Selenium. This will take a few seconds to populate the Selenium. Once done, just select Selenium.WebDriver and click on Install to start the installation process.
-
- At the top of your project code after the last ‘using’ namespace add the following Selenium namespaces:
using OpenQA.Selenium; using OpenQA.Selenium.Firefox;
- Add the following code in your static void Main section:
IWebDriver driver = new FirefoxDriver(); driver.Url = "";.
- Run the test by clicking on the Start button given on the top bar.
Notice that the Visual Studio started a Console application and just followed that it initiated a Firefox driver and opened the website.
Similar Articles
| https://www.toolsqa.com/selenium-webdriver/c-sharp/set-up-selenium-webdriver-with-visual-studio-in-c/ | CC-MAIN-2022-27 | refinedweb | 217 | 60.41 |
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Paul Eggert wrote: > "Derek R. Price" <address@hidden> writes: > >> Is there any reason that a few files (getndelim2.c, fsusage.c, >> mkdtemp.c, obstack.c, tempname.c, utimecmp.c) still do #ifdef >> HAVE_STDINT_H, #include <stdint.h>, #endif rather than assuming the >> GNULIB stdint module? > > Mostly because I haven't had time to carefully review the stdint > module. It relies on undefined behavior in several cases, and should > be nailed down better. I'll try to boost the priority of reviewing > it. If it's useful to you, the current stdint module works on the following list of platforms (from CVS nightly testing): x86 NetBSD 2.0.2 IRIX64 6.5 sparc SunOS 5.9 Generic_118558-11 BSD/OS 4.2 BSDI BSD/OS 4.2 Kernel #5 amd64 Linux 2.6.9-1.667smp #1 SMP alpha Linux 2.2.20 #2 ppc64 Linux 2.6.5-7.97-pseries64 #1 SMP x86 Linux 2.6.10-1.771_FC2smp x86 Linux 2.6.8-2-386 x86 Linux 2.6.13-1.1526_FC4 sparc SunOS 5.9 Generic_112233-03 I believe it is also working on the following two hosts: AIX 2 5 HP-UX B.11.00 U 9000/785 I am not sure how many of those platforms are actually building lib/stdint.h. The module may be slightly broken on IRIX. I am working with Larry Jones from the CVS team to develop a more comprehensive test. Regards, Derek - -- Derek R. Price CVS Solutions Architect Ximbiot <> v: +1 248.835.1260 f: +1 248.835.1263 <address@hidden> -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2.2 (GNU/Linux) Comment: Using GnuPG with Fedora - iD8DBQFERwC+LD1OTBfyMaQRAqXWAJ97/6nxPMtz7wWbxLYT7MBgspllpACg6pBr 4tdGAlGKy3UWqRoc3djc6Ws= =59EO -----END PGP SIGNATURE----- | http://lists.gnu.org/archive/html/bug-cvs/2006-04/msg00032.html | CC-MAIN-2015-14 | refinedweb | 291 | 71.61 |
Up to [cvs.NetBSD.org] / src / lib / libc / gen
Request diff between arbitrary revisions
Default branch: MAIN
Revision 1.12 / (download) - annotate - [select for diffs], Thu Aug 7 16:42:50.11: +2 -6 lines
Diff to previous 1.11 (colored)
Move UCB-licensed code from 4-clause to 3-clause licence. Patches provided by Joel Baker in PR 22280, verified by myself.
Revision 1.11 / (download) - annotate - [select for diffs], Wed Apr 16 13:34:37 2003 UTC (13 years, 7 months ago) by wiz
Branch: MAIN
Changes since 1.10: +2 -2 lines
Diff to previous 1.10 (colored)
Use .In header.h instead of .Fd #include \*[Lt]header.h\*[Gt] Much easier to read and write, and supported by groff for ages. Okayed by ross.
Revision 1.9.12.2 / (download) - annotate - [select for diffs], Fri Mar 22 20:42:08 2002 UTC (14 years, 8 months ago) by nathanw
Branch: nathanw_sa
CVS Tags: nathanw_sa_end
Changes since 1.9.12.1: +1 -1 lines
Diff to previous 1.9.12.1 (colored) to branchpoint 1.9 (colored) next main 1.10 (colored)
Catch up to -current.
Revision 1.9.12.1 / (download) - annotate - [select for diffs], Fri Mar 8 21:35:06 2002 UTC (14 years, 8 months ago) by nathanw
Branch: nathanw_sa
Changes since 1.9: +2 -2 lines
Diff to previous 1.9 (colored)
Catch up to -current.
Revision 1.10 / (download) - annotate - [select for diffs], Thu Feb 7 07:00:13 2002 UTC .9: +2 -2 lines
Diff to previous 1.9 (colored)
Generate <>& symbolically.
Revision 1.9 / (download) - annotate - [select for diffs], Tue Mar 2 14:02:02.8: +2 -2 lines
Diff to previous 1.8 (colored)
const poisoning.
Revision 1.8 / (download) - annotate - [select for diffs], Sat Jan 16 08:05:33 1999 UTC (17 years, 10 months ago) by lukem
Branch: MAIN
Changes since 1.7: +6 -13 lines
Diff to previous 1.7 (colored)
reference nsswitch.conf(5)
Revision 1.7 / (download) - annotate - [select for diffs], Sat Aug 29 08:32:33 1998 UTC (18 years, 3 months ago) by lukem
Branch: MAIN
Changes since 1.6: +2 -2 lines
Diff to previous 1.6 (colored)
first pass at fixing up capitalization of function names and arguments; ensure that each is correct with respect to the implementation, rather than being correct as per english.
Revision 1.6 / (download) - annotate - [select for diffs], Thu Feb 5 18:46:49 1998 UTC (18 years, 10 months ago) by perry
Branch: MAIN
Changes since 1.5: +3 -1 lines
Diff to previous 1.5 (colored)
add LIBRARY section to man page
Revision 1.1.1.2 / (download) - annotate - [select for diffs] (vendor branch), Mon Feb 2 00:12:02 1998 UTC (18 years, 10 months ago) by perry
Branch: CSRG
CVS Tags: lite-2
Changes since 1.1.1.1: +1 -2 lines
Diff to previous 1.1.1.1 (colored)
import lite-2
Revision 1.5 / (download) - annotate - [select for diffs], Mon May 26 14:02:53 1997 UTC (19 -7 lines
Diff to previous 1.4 (colored)
cleanup some problems with the use of mandoc macros
Revision 1.4 / (download) - annotate - [select for diffs], Sat Feb 25 08:51:17 1995 UTC (21)
clean up Id's on files previously imported...
Revision 1.3 / (download) - annotate - [select for diffs], Sun Dec 11 22:47:06 1994 UTC (21 years, 11 months ago) by christos
Branch: MAIN
CVS Tags: ivory_soap
Changes since 1.2: +5 -4 lines
Diff to previous 1.2 (colored)
NIS -> YP changes and other typos fixed (From Jason Thorpe)
Revision 1.2 / (download) - annotate - [select for diffs], Sun Dec 4 18:12:12 1994 UTC (22 years ago) by christos
Branch: MAIN
Changes since 1.1: +8 -7 lines
Diff to previous 1.1 (colored)
New netgroup implementation; replaces Rick's old one that did not expand recursively or handle YP.
Revision 1.1.1.1 / (download) - annotate - [select for diffs] (vendor branch), Tue May 17 13:30:51 1994 UTC (22 years, 6 months ago) by mycroft
Changes since 1.1: +0 -0 lines
Diff to previous 1.1 (colored)
From 4.4-Lite.
Revision 1.1 / (download) - annotate - [select for diffs], Tue May 17 13:30:50 1994 UTC (22 years, 6 months ago) by mycroft
Branch: MAIN
Initial revision
This form allows you to request diff's between any two revisions of a file. You may select a symbolic revision name using the selection box or you may type in a numeric name using the type-in text box. | http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/gen/getnetgrent.3 | CC-MAIN-2016-50 | refinedweb | 765 | 76.11 |
Hey,
first of all i have a GET request with a response... now i wanna save the response in a JSON file to compare this with a existing JSON file.
HOW can i do this?
best regards
kai
Solved! Go to Solution.
You can use the following code:
... if (!(it.value == outputResp[it.key])) { log.info ("File Mismatch at : " + it.key) testRunner.fail( "File mismatch..." ) } ...
This should be simple to do. But just to clarify before giving an answer:
1. If you want to compare the response with an existing content of a file why do you want to save the response to a file. Wouldn't it be simpler to just read the content of the initial file and compare it directly with the current response.
2. I am not aware of the full context of your test but I generally see this kind of tests unreliable. If something is to change in the response you may have to do a lot of rework.
1.Yes. Is that possible?
2. True!
i found some Code and it works but maybe u can help me. If there is a Mismatch there is only a log.info and the test continues with "PASS". Is taht possible to say is there a mismatch the test failed?
import groovy.json.JsonSlurper def filePath = "C:/qwerty/test.json" def jsonResp = context.expand('${GET ProCon#Response}') def outputResp = new JsonSlurper().parseText(jsonResp) def baselineResp = new JsonSlurper().parseText(new File(filePath).text) baselineResp.each{ if (!(it.value == outputResp[it.key])) { log.info ("File Mismatch at : " + it.key) } }
You can use a keyword assert i.e assert expected == actual
You can use the following code:
... if (!(it.value == outputResp[it.key])) { log.info ("File Mismatch at : " + it.key) testRunner.fail( "File mismatch..." ) } ...
@Lucian
it works.
and now my next question. i have a [] in my json. Is there a error/mismatch in the [] let me take a example... lets say i have Items[] and in Itmes i have a OID = 123456, Name = XYZ for now the test can say me there is a error/mismatch in ITEMS but not where. It cant say error or mismatch in Items>OID.
@jhanzeb1
im complity new to groovy where i need to add "assert i.e assert expected == actual" and did i need to delete anything?
assuming you hit the if condition you mentioned you should just be able to use
assert (it.value == outputResp[it.key])
if it passes, it wouldn't have hit your if so nothing would have been logged out, but if it fails it will hit the if and subsequently fail this assertion
Hope this helps,
Mo | https://community.smartbear.com/t5/SoapUI-Pro/compare-a-response-with-a-json-file/m-p/168934/highlight/true | CC-MAIN-2019-47 | refinedweb | 442 | 77.74 |
Function chaining in Python
On codewars.com I encountered the following task:
Create a function
addthat adds numbers together when called in succession. So
add(1)should return
1,
add(1)(2)should return
1+2, ...
While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function
f(x) that can be called as
f(x)(y)(z).... Thus far, I'm not even sure how to interpret this notation.
As a mathematician, I'd suspect that
f(x)(y) is a function that assigns to every
x a function
g_{x} and then returns
g_{x}(y) and likewise for
f(x)(y)(z).
Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising.
How do you call this concept and where can I read more about it?
I don't know whether this is function chaining as much as it's callable chaining, but, since functions are callables I guess there's no harm done. Either way, there's two ways I can think of doing this:
Sub-classing
int and defining
__call__:
The first way would be with a custom
int subclass that defines
__call__ which returns a new instance of itself with the updated value:
class CustomInt(int): def __call__(self, v): return CustomInt(self + v)
Function
add can now be defined to return a
CustomInt instance, which, as a callable that returns an updated value of itself, can be called in succession:
>>> def add(v): ... return CustomInt(v) >>> add(1) 1 >>> add(1)(2) 3 >>> add(1)(2)(3)(44) # and so on.. 50
In addition, as an
int subclass, the returned value retains the
__repr__ and
__str__ behavior of
ints. For more complex operations though, you should define other dunders appropriately.
As @Caridorc noted in a comment,
add could also be simply written as:
add = CustomInt
Renaming the class to
add instead of
CustomInt also works similarly.
Define a closure, requires extra call to yield value:
The only other way I can think of involves a nested function that requires an extra empty argument call in order to return the result. I'm not using
nonlocal and opt for attaching attributes to the function objects to make it portable between Pythons:
def add(v): def _inner_adder(val=None): """ if val is None we return _inner_adder.v else we increment and return ourselves """ if val is None: return _inner_adder.v _inner_adder.v += val return _inner_adder _inner_adder.v = v # save value return _inner_adder
This continuously returns itself (
_inner_adder) which, if a
val is supplied, increments it (
_inner_adder += val) and if not, returns the value as it is. Like I mentioned, it requires an extra
() call in order to return the incremented value:
>>> add(1)(2)() 3 >>> add(1)(2)(3)() # and so on.. 6
★ Back to homepage or read more recommendations:★ Back to homepage or read more recommendations:
From: stackoverflow.com/q/39038358 | https://python-decompiler.com/article/2016-08/function-chaining-in-python | CC-MAIN-2019-26 | refinedweb | 542 | 59.53 |
a version resource or put a TensorFlow SavedModel in a Cloud Storage location that your project can access.
- If you choose to use a version resource for batch prediction, you must create the version with the
mls1-c1-m2machine type.
Set up a:
-. For batch prediction, the version must use the
mls1-c1-m2machine type.
If you provide a Model URI (see the following section), omit these fields.
- Model URI
You can get predictions from a model that isn't deployed on AI Platform by specifying the URI of the SavedModel you want to use. The SavedModel must be stored APIs Client Library for Python, you can use Python dictionaries to represent the Job and PredictionInput resources.
Format your project name and your model or version name with the syntax used by the AI Platform_name, input_paths, output_path, model_name, region, data_format='JSON',.
You can use the Google APIs Client Library for Python to call the AI Platform Training and Prediction API without manually constructing HTTP requests. Before you run the following code sample, you must set up authentication.
How to]
import googleapiclient.discovery as discovery project_id = 'projects/{}'.format(project_name) ml = discovery.build('ml', 'v1') Console:
Go to the AI Platform Jobs page in the Google Cloud Console:
Go to the Cloud. | https://cloud.google.com/ml-engine/docs/tensorflow/batch-predict?hl=no | CC-MAIN-2019-51 | refinedweb | 210 | 57.06 |
>
Hey you guys,
I have an idea for a personal test project, in which I would like to throw rigidbodies (projectiles) at a wall or other rigidbodies and collapse it / move them with Physics, and then implement some kind of time reversal in which I would be able to build the wall anew in which the physics would play in reverse.
I really have no idea as to how to go about this in Unity. I do feel that storing, at every frame, each object's name, position and rotation in an array (or a list) would be crazy in terms of memory usage and processing power and would be absolutely inefficient. Plus, this would only be feasible for a limited amount of time, and not for the whole game (not that I absolutely want to do this for the WHOLE game, but it would be cool it if I could...)
I was wondering if the PhysX engine had some sort of reverse button, or if I would be able to implement that reversal through code (by, for example, using a negative force, if that is even possible -- even though the collision of two rigidbodies is not governed by a scripted force...)
I am at a loss here, so I would greatly appreciate any help from you learned folks here :-)
Thanks a bunch!
Ari ;o)
Answer by YoungDeveloper
·
Jun 13, 2016 at 07:18 PM
I suggest getting down at least some prototype, at least with couple objects, before optimizing it. Storing position and euler angles in a struct list seems like a good starting point to me.
edit: Note that i wouldnt mark the struct as serializable or serialize the list itself, to reduce possible bottleneck from unity editor inspector drawer.
Hey @YoungDeveloper, thank you for your answer :) Correct me if I'm wrong, but a struct cannot inherit from MonoBehaviour (or from anything else, for that matter?), so how am I supposed to include such things as position (a Vector3) and rotation (a Quaternion) or eulerAngles (another Vector3)?
Plus I'm having a hard time figuring out what to do with the issue of time, frame after frame. Does this mean that I have to make a list of lists? (a list of all the objects' names, positions and rotations at time 0 + another for all the objects' names, positions, etc. at time 1, and so on, then all collected in a "list of lists" themselves ?)
Just create some Reversable Object class, with List of Vector3s for positions and List of Quaternions for rotations(or another vector3s for euler angles). Then put something like positions.Add(transform.position); rotations.Add(transform.rotation) in your Update, and it basically should do the work
Reversable Object
positions.Add(transform.position); rotations.Add(transform.rotation)
You are over complicating things, you need the position and rotation, thats it, no names or inheritance. I slapped a quick example for you.
Demonstration:
Magic code:
public class Magic : MonoBehaviour {
[SerializeField]private Rigidbody _rigidbody;
private List<Node> _history = new List<Node>();
public bool record = true;
[Range(0f, 1f)]
public float lerp = 1f;
private struct Node {
public Node(Vector3 position, Vector3 eulerAngles) {
this.position = position;
this.eulerAngles = eulerAngles;
}
public readonly Vector3 position;
public readonly Vector3 eulerAngles;
}
public void Update() {
if(record) {
_rigidbody.isKinematic = false;
_history.Add(new Node(transform.position, transform.eulerAngles));
}
else {
_rigidbody.isKinematic = true;
int index = (int)((_history.Count-1)*lerp);
Node historyNode = _history[index];
transform.position = historyNode.position;
transform.eulerAngles = historyNode.eulerAngles;
}
}
}
Instead of recording every frame id implement record ratio, which is basically how often we record per second. In update you would add time delta to some value, if its larger than that ratio then we write in a history node.
Answer by ahungrybear
·
Jun 14, 2016 at 06:05 AM
You don't have to store EVERY frame also. Depending on the complexity of the motion and how accurate you want the simulation, you could store the position and rotation every couple of frames and lerp in between them. That way you can store 3 or 4 times less data.
@ahungrybear: Thanks for the tip, that's actually a great idea!!
Answer by tanoshimi
·
Jun 14, 2016 at 05:49 AM
I've never used it, but for $40 this asset might seem a good investment for you:
@tanoshimi: This looks like a sweet asset indeed, but I'm not working on a proper game or anything. It was just a test project, sort of a little coding challenge for me. So the asset would kind of defeat the purpose, wouldn't it? :p Thank you, I'll keep the reference in mind, though, if I need this for real, one day!
Assets aren't just for slapping in to finished games. I regularly acquire assets for the sole purpose of taking them apart and examining the source code to learn how a particular effect was achieved.
If they're well-structured and commented, you can treat them as interactive tutorials. For me personally, a one-off $40 seems like good value compared to the amount of time I'did otherwise spend trawling blogs or forums searching for tidbits of how to implement the same :)
Answer by MrYOLO
·
Jun 14, 2016 at 06:09 AM
hey there, from what I see you can just record every frame as a pic the just play them in reverse. It will gives players a feeling that the whole scene is reversing. The only con is you can not just rewind one object. The performance should be all right as it is just like playing video. hope it is clear with my poor English;)
@MrYOLO: Your English is just perfect, don't you worry! Thanks for the idea, I must admit I hadn't even thought of it. I'd rather go down the road of recording data about the objects rather than still frames entirely, though, because I'd like some objects to be reversable and some not to be :)
Answer by Ari-Bouaniche
·
Jun 14, 2016 at 07:07 AM
So, after reading all of you guys' suggestions here, I decided to try my hand at implementing something. Right now I'm storing name, position and rotation @ every frame (I'll work on skipping frames and lerping later...), all into a giant "deported" list placed on an object.
Uh oh! While I'm typing this, I'm thinking this is awfully inefficient and I should save the data onto each object, not in a central list containing everything (which then has me looking for the gameObjects by name, which I read is a big no no...)
I'm going to try and assign a script to each object I want to reverse rather than centralize everything... More later! Thanks again to all of you who contribute to making those projects a little less lonely! :).
Shooting a cannonball.
4
Answers
My object falls through terrain.
8
Answers
Disable Rigidbody Function Start
1
Answer
Turning off rigidbody without turning the entire object inactive?
1
Answer
Prevent intersection of rigidbodies at any cost
0
Answers | https://answers.unity.com/questions/1202192/reverse-physics.html | CC-MAIN-2019-26 | refinedweb | 1,180 | 59.74 |
Internal Tk Routines for Cocoa. More...
#include <vtkCocoaTkUtilities.h>
Internal Tk Routines for Cocoa.
vtkCocoaTkUtilities provide access to the Tk internals for Cocoa implementations of Tk. These internals must be implemented in a .mm file, since Cocoa is Objective-C, but the header file itself is pure C++ so that it can be included by other VTK classes.
Definition at line 41 of file vtkCocoaTkUtilities.h.
Definition at line 45 of file vtkCocoaTkUtilities.h.
Definition at line 55 of file vtkCocoaTkUtilities.h.
Definition at line 56 of file vtkCocoaTkUt.
Return the NSView for a Tk_Window.
It is returned as a void pointer so that users of this function don't need to compile as Objective C. | https://vtk.org/doc/nightly/html/classvtkCocoaTkUtilities.html | CC-MAIN-2019-47 | refinedweb | 116 | 52.87 |
Seam Project with ScalaGreg Zoller Jan 19, 2010 1:08 AM
Hello,
I've seen a couple threads about the possibility of using Seam and Scala and understand it should be possible in principle.
I'm interested if anyone has actually done it. I'd like to take a project I created w/seamgen and while bit pieces of it will still be Java, I want to use some Scala classes as well.
I'm using Eclipse (Galileo). I've installed both the JBoss Tools for Seam as well as the latest Scala Eclipse plugin. I can successfully work with a Seam project and a (separate) Scala project, but now I'm trying to blend Seam w/some Scala code.
When I tried to add a trivial Scala class to my Seam project's source tree I got bizarre error messages, like it didn't know what to do with the Scala or couldn't find a library or something. I then added the scala-library.jar file to the project's lib dir and build path. No dice (same problem). Error slice is below.
scala.tools.nsc.MissingRequirementError: class scala.Array not found. at scala.tools.nsc.symtab.Definitions$definitions$.getModuleOrClass(Definitions.scala:514) at scala.tools.nsc.symtab.Definitions$definitions$.getClass(Definitions.scala:472) ...
Anyone out there already climbed this mountain and know where to put all the magic files?
Thanks!
Greg
1. Re: Seam Project with ScalaArbi Sookazian Jan 19, 2010 9:56 PM (in response to Greg Zoller)
I don't have any hands-on experience (yet) with Scala but it sounds like you may be missing a Scala library in your classpath unless a Scala-Seam integration library is required? I'm pretty sure Scala is like Groovy in that they just require a JVM to exec... Is there a Groovy-Seam integration library, maybe you can check that first??
2. Re: Seam Project with ScalaGreg Zoller Jan 21, 2010 10:22 PM (in response to Greg Zoller)
Ok... got it. Here's the HOWTO in order to use Seam w/Scala.
1) Download/install the nightly Scala Eclipse plug-in. This gives you Scala 2.8 (beta). May work w/2.7.x but frankly the community buzz confirms the older Eclipse plugin is bumpy. Take your best shot here.
2) Import a seam-gen'ed project like normal
3) In your Eclipse left-hand-side project view (Seam perspective), right click your project and go down to the Scala menu and select Add Scala Nature.
4) In your project's lib directory import the scala-library.jar (from the normal Scala distribution) for Scala 2.8 (or whatever, to match your Eclipse plugin's Scala version
5) Edit deployed-jars-ear.list and add scala-library.jar so that the library will deploy to your app server as part of the ear.
You can now use Scala in your project with the normal Java/Scala limitations. Here's a sample of a Seam component that injects a trivial Scala class (also a component):
import org.jboss.seam.ScopeType; import org.jboss.seam.annotations.AutoCreate; import org.jboss.seam.annotations.In; import org.jboss.seam.annotations.Name; import org.jboss.seam.annotations.Scope; @Name("message") @Scope(ScopeType.CONVERSATION) @AutoCreate public class Message { @In(create=true) MsgText msgText; // Scala object injected public String getSay() { return "Hey! "+msgText.say(); } }
and the Scala component:
import org.jboss.seam.ScopeType; import org.jboss.seam.annotations.Name; import org.jboss.seam.annotations.Scope; @Name("msgText") @Scope(ScopeType.CONVERSATION) class MsgText { val msg = "Some message text" def say() = { msg } }
I'm sure there'll be a gotcha here or there, but this is a wonderful prospect for those, like me, who love the potential of Scala but want the features Seam provides. | https://developer.jboss.org/thread/191008 | CC-MAIN-2018-17 | refinedweb | 632 | 67.86 |
)
Older versions
gregjor said on 2007-01-07 15:07
The application I'm working on uses PostegreSQL, and even though pgdb is minimally thread-safe you can't share connections across threads if you are using transactions. What I've done is create a wrapper around the database code that manages a pool of open connections using a Queue. When my module loads it allocates a queue to hold database connections:
import Queue ... dbq = Queue.Queue(10)
When a function or method needs a database connection it requests one from the queue. If the queue is empty a new connection is opened:
def opendbconnection(): try: conn = dbq.get_nowait() except Queue.Empty: try: conn = pgdb.connect(host=xxx, user=xxx, password=xxx, database=xxx) except: # failed to open a database connection raise return conn
The calling function/method uses the database connection, then releases it when it's done. If the queue is full the connection is closed.
def closedbconnection(conn): try: dbq.put_nowait(conn) except Queue.Full: conn.close
In my application this is done with decorators, so functions that need a database connection are decorated and an open database connection is passed to them. The decorator takes care of making sure the connection is released back to the queue.
@usedb def somefn(db, ...):
Our application also has a timed thread that periodically goes through the queue and closes connections that haven't been used in a while, because the connections seem to go stale after a while for reasons I haven't tracked down. There's also a function called when CherryPy shuts down or restarts that closes all connections in the queue.
We've used this fairly simple scheme in a couple of live applications with no problems. The beauty of the Queue is that it is a simple thread-safe mechanism for managing a pool of connections. If a connection is in the queue it's available for use, and connections are opened and added to the queue as needed.
The above document may be a bit misleading as to when a database connection may be safely shared among threads. If you are using a thread safe database, such as psycopg2, then you can feel free to use a shared connection, but only if you never use that connection to update the database. I wonder how realistic this scenario is though. I cannot imagine a very fun website that doesn't allow the webserver to update the database at all.
The problem is as follows. When the database is updated with a cursor, assuming a standard DB-API interface, a commit is required on the connection. If there are multiple threads using that connection, the commit will apply to all open cursors -- not just the one you want it to. This can lead to unexpected and hard to predict behavior.
To be safe, there needs to be an independent connection per thread. This can be achieved in the following ways.
- (Best/Slowest to code) Use Connection Pools (like psycopg2.pool).
- (Good/Decent to code) One connection per thread as in the above article.
- (Ok/Quick to code) Use a per-cursor commit extension (like psycopg1 has with serialize=False in connect).
David Sankel | http://tools.cherrypy.org/wiki/Databases?version=5 | CC-MAIN-2014-41 | refinedweb | 537 | 64.1 |
Lambda API: A lightweight web framework that will rock your serverless world
The serverless universe is expanding once again and this time its welcomes Lambda API, a lightweight web framework for use with AWS API Gateway and AWS Lambda using Lambda Proxy Integration.
With more tools and technologies joining in, the serverless universe constantly expands!
Today we present you Lambda API, a lightweight web framework for use with AWS API Gateway and AWS Lambda using Lambda Proxy Integration created by serverless advocate Jeremy Daly.
Although it is based on and mirrors other web frameworks like Express.js and Fastify, Lambda API is quite unique in one key aspect. It has zero dependencies!
When it comes to serverless applications that need to load quickly, all these extra dependencies that you have to use with your additional Node.js modules slow down execution and use more memory than necessary. But Lambda API is here to change that!
As fast as The Flash?
Lambda API was built to be lightweight and fast. And it looks like it’s on the right track! Let’s have a closer look at its main features:
- Zero dependencies. Yup, still not a joke!
- Super lightweight and built specifically for serverless applications using AWS Lambda and API Gateway
- Provides support for API routing, serving up HTML pages, issuing redirects, serving binary files and more
- Offers a built-in logging engine that can even periodically sample requests for things like tracing and benchmarking
- It has a powerful middleware and error handling system
- Designed to work with Lambda’s Proxy Integration, automatically handling all the interaction with API Gateway for you
- It parses
REQUESTSand formats
RESPONSES, allowing you to focus on your application’s core functionality
- Create just one function that handled all your user management features
If you are still not convinced, let’s take a closer look at some of the framework’s functions.
Routes and HTTP Methods – Routes are defined by using convenience methods or the
METHOD method. There are currently eight convenience route methods:
get(),
put(),
patch(),
delete(),
head(),
options() and
any().
Returning Responses – Supports both
callback-style and
async-await for returning responses to users. The RESPONSE object has several callbacks that will trigger a response (
send(),
json(),
html(), etc.)
Route Prefixing – Easy to create multiple versions of the same api without changing routes by hand. The
register()method allows you to load routes from an external file and prefix all of those routes using the
prefix option.
Debugging Routes – Offers a
routes() method that can be called on the main instance that will return an array containing the
METHODand full
PATH of every configured route. This will include base paths and prefixed routes.
REQUEST – The
REQUEST object contains a parsed and normalized request from API Gateway. It contains a number values by default. Check out the full list here.
RESPONSE – The
RESPONSE object is used to send a response back to the API Gateway. The
RESPONSE object contains several methods to manipulate responses. All methods are chainable unless they trigger a response. You can find the list here.
SEE ALSO: Architect simplifies Lambda functions for the cloud
Logging – Includes a robust logging engine specifically designed to utilize native JSON support for CloudWatch Logs. Not only is it super fast, but it’s also highly configurable.
Middleware – It supports middleware to preprocess requests before they execute their matching routes. Middleware is defined using the
use method and requires a function with three parameters for the
REQUEST,
RESPONSE, and
next callback.
Clean Up – It has a built-in clean up method called ‘finally()’ that will execute after all middleware and routes have been completed, but before execution is complete.
Error Handling – Automatically catches and logs errors using the Logging system.
Namespaces – Allows you to map specific modules to namespaces that can be accessed from the
REQUEST object.
Lambda Proxy Integration – That’s an option in API Gateway that allows the details of an API request to be passed as the
eventparameter of a Lambda function.
If you are interested in a full walk-through on how to build a serverless API with serverless, AWS Lambda and Lambda API, check out Jeremy Daly’s tutorial.
Getting started
To install Lambda API simply run
npm i lambda-api --save
Keep in mind, however, that you will need AWS Lambda running Node 8.10 and AWS API Gateway using Proxy Integration. | https://jaxenter.com/lambda-api-framework-serverless-149693.html | CC-MAIN-2018-51 | refinedweb | 732 | 53.81 |
React Bootstrap with Material Design
Built with React and Bootstrap 4. Absolutely no jQuery.
400+ material UI elements, 600+ material icons, 74 CSS animations, SASS files and many more.
All fully responsive. All compatible with different browsers.
Table of Contents
- Other Technologies
- Demo
- Version
- Quick start
- Available commands
- How to install MDB via npm
- Supported Browsers
- Documentation
- Pro version
- Highlights
- Useful Links
- Social Media
Other Technologies
Demo:
Version:
- MDBReact 4.19.0
- React 16.8.6
Quick start
- Clone following repo:
git clone .
note "." at the end. It will clone files directly into current folder.
- Run
npm i
- Run
npm start
- Voilà! Open browser and visit
Now you can navigate to our documentation, pick any component and place within your project.
Available commands
- npm start - runs the app in development mode.
- npm run remove-demo - remove demo directory from your project and generate a boilerplate for your app
- npm run build - builds the app for production to the build folder.
- npm test - runs the test watcher in an interactive mode.
How to install MDB via npm:
- create new project
create-react-app myApp
cd myApp
npm install --save mdbreact
- Import style files into the src/index.js before the App.js file:
import "@fortawesome/fontawesome-free/css/all.min.css"; import "bootstrap-css-only/css/bootstrap.min.css"; import "mdbreact/dist/css/mdb.css";
Run server
npm.
Documentation:
Huge, detailed documentation avilable online
PRO version:
React Bootstrap with Material Design PRO. | https://nicedoc.io/mdbootstrap/React-Bootstrap-with-Material-Design | CC-MAIN-2019-35 | refinedweb | 241 | 53.07 |
Advanced: Decorating the example workflow
Now that the basic concepts of Snakemake have been illustrated, we can introduce some advanced functionality.
Step 1: Specifying the number of used threads
For some tools, it is advisable to use more than one thread in order to speed up the computation.
Snakemake can be made aware of the threads a rule needs with the
threads directive.
In our example workflow, it makes sense to use multiple threads for the rule
bwa_map:
rule bwa_map:
    input:
        "data/genome.fa",
        "data/samples/{sample}.fastq"
    output:
        "mapped_reads/{sample}.bam"
    threads: 8
    shell:
        "bwa mem -t {threads} {input} | samtools view -Sb - > {output}"
The number of threads can be propagated to the shell command with the familiar braces notation (i.e.
{threads}).
If no
threads directive is given, a rule is assumed to need 1 thread.
When a workflow is executed, the number of threads the jobs need is considered by the Snakemake scheduler.
In particular, the scheduler ensures that the sum of the threads of all jobs running at the same time does not exceed a given number of available CPU cores.
This number is given with the
--cores command line argument, which is mandatory for
snakemake calls that actually run the workflow.
For example
$ snakemake --cores 10
would execute the workflow with 10 cores.
Since the rule
bwa_map needs 8 threads, only one job of the rule can run at a time, and the Snakemake scheduler will try to saturate the remaining cores with other jobs, e.g.
samtools_sort.
The threads directive in a rule is interpreted as a maximum: when fewer cores than threads are provided, the number of threads a rule uses will be reduced to the number of given cores.
If
--cores is given without a number, all available cores are used.
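The capping behavior can be sketched in a few lines of plain Python (a toy model for illustration only, not Snakemake's actual scheduler code; the function name is made up):

```python
# Toy model of how the scheduler treats the threads directive as a maximum:
# a job never gets more threads than the cores provided via --cores.
def effective_threads(rule_threads: int, available_cores: int) -> int:
    return min(rule_threads, available_cores)

print(effective_threads(8, 10))  # bwa_map with --cores 10 -> runs with 8 threads
print(effective_threads(8, 4))   # bwa_map with --cores 4  -> capped to 4 threads
```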
Step 2: Config files
So far, we specified which samples to consider by providing a Python list in the Snakefile.
However, often you want your workflow to be customizable, so that it can easily be adapted to new data.
For this purpose, Snakemake provides a config file mechanism.
Config files can be written in JSON or YAML, and are used with the
configfile directive.
In our example workflow, we add the line
configfile: "config.yaml"
to the top of the Snakefile.
Snakemake will load the config file and store its contents into a globally available dictionary named
config.
In our case, it makes sense to specify the samples in
config.yaml as
samples:
    A: data/samples/A.fastq
    B: data/samples/B.fastq
Now, we can remove the statement defining
SAMPLES from the Snakefile and change the rule
bcftools_call to
rule bcftools_call:
    input:
        fa="data/genome.fa",
        bam=expand("sorted_reads/{sample}.bam", sample=config["samples"]),
        bai=expand("sorted_reads/{sample}.bam.bai", sample=config["samples"])
    output:
        "calls/all.vcf"
    shell:
        "samtools mpileup -g -f {input.fa} {input.bam} | "
        "bcftools call -mv - > {output}"
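It may not be obvious why passing config["samples"], which is a dict, to expand works: iterating over a Python dict yields its keys, so expand fills the sample wildcard with the sample names. A simplified stand-in for expand illustrates this (a hypothetical helper for illustration; the real expand lives in Snakemake itself and supports several wildcards and more features):

```python
# Simplified stand-in for Snakemake's expand(), handling a single wildcard.
def expand(pattern, **wildcards):
    name, values = next(iter(wildcards.items()))
    # Iterating over a dict yields its keys, e.g. "A" and "B".
    return [pattern.replace("{%s}" % name, value) for value in values]

config = {"samples": {"A": "data/samples/A.fastq",
                      "B": "data/samples/B.fastq"}}

print(expand("sorted_reads/{sample}.bam", sample=config["samples"]))
# -> ['sorted_reads/A.bam', 'sorted_reads/B.bam']
```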
Step 3: Input functions
Since we have stored the paths to the FASTQ files in the config file, we can also generalize the rule bwa_map to use these paths. This case is different from the rule bcftools_call we modified above.
To understand this, it is important to know that Snakemake workflows are executed in three phases.
- In the initialization phase, the files defining the workflow are parsed and all rules are instantiated.
- In the DAG phase, the directed acyclic dependency graph of all jobs is built by filling wildcards and matching input files to output files.
- In the scheduling phase, the DAG of jobs is executed, with jobs started according to the available resources.
The expand functions in the list of input files of the rule
bcftools_call are executed during the initialization phase.
In this phase, we don’t know about jobs, wildcard values and rule dependencies.
Hence, we cannot determine the FASTQ paths for rule
bwa_map from the config file in this phase, because we don’t even know which jobs will be generated from that rule.
Instead, we need to defer the determination of input files to the DAG phase.
This can be achieved by specifying an input function instead of a string inside of the input directive.
For the rule
bwa_map this works as follows:
def get_bwa_map_input_fastqs(wildcards):
    return config["samples"][wildcards.sample]

rule bwa_map:
    input:
        "data/genome.fa",
        get_bwa_map_input_fastqs
    output:
        "mapped_reads/{sample}.bam"
    threads: 8
    shell:
        "bwa mem -t {threads} {input} | samtools view -Sb - > {output}"
Any normal function would work as well.
Input functions take a single argument, a wildcards object, which allows accessing the wildcard values via attributes (here
wildcards.sample).
They have to return a string or a list of strings, which are interpreted as paths to input files (here, we return the path that is stored for the sample in the config file).
Input functions are evaluated once the wildcard values of a job are determined.
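These mechanics can be mimicked in plain Python: during the DAG phase Snakemake determines the wildcard values of a job and only then calls the input function with a wildcards object (a sketch; SimpleNamespace stands in for Snakemake's richer Wildcards class):

```python
from types import SimpleNamespace

config = {"samples": {"A": "data/samples/A.fastq",
                      "B": "data/samples/B.fastq"}}

def get_bwa_map_input_fastqs(wildcards):
    return config["samples"][wildcards.sample]

# For the job producing mapped_reads/A.bam, Snakemake infers sample="A"
# and only then evaluates the input function with that wildcard value:
wildcards = SimpleNamespace(sample="A")
print(get_bwa_map_input_fastqs(wildcards))  # -> data/samples/A.fastq
```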
Step 4: Rule parameters
Sometimes, shell commands are not only composed of input and output files and some static flags.
In particular, it can happen that additional parameters need to be set depending on the wildcard values of the job.
For this, Snakemake allows defining arbitrary parameters for rules with the
params directive.
In our workflow, it is reasonable to annotate aligned reads with so-called read groups, which contain metadata like the sample name.
We modify the rule
bwa_map accordingly:
rule bwa_map:
    input:
        "data/genome.fa",
        lambda wildcards: config["samples"][wildcards.sample]
    output:
        "mapped_reads/{sample}.bam"
    params:
        rg=r"@RG\tID:{sample}\tSM:{sample}"
    threads: 8
    shell:
        "bwa mem -R '{params.rg}' -t {threads} {input} | samtools view -Sb - > {output}"
Similar to input and output files,
params can be accessed from the shell command, the Python-based
run block, or the script directive (see Step 6: Using custom scripts).
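How the two substitution steps interact can be illustrated with plain string formatting (a sketch; Snakemake performs these substitutions internally):

```python
# The params value is itself a pattern containing the {sample} wildcard.
# Snakemake first fills the wildcard, then inserts the result into the
# shell command via {params.rg}.  Sketched here with str.format:
rg_pattern = r"@RG\tID:{sample}\tSM:{sample}"   # raw string: literal \t, expanded later by bwa
rg = rg_pattern.format(sample="A")
print(rg)                                       # -> @RG\tID:A\tSM:A

shell_cmd = "bwa mem -R '{rg}' -t {threads} ...".format(rg=rg, threads=8)
print(shell_cmd)                                # -> bwa mem -R '@RG\tID:A\tSM:A' -t 8 ...
```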
Exercise
- Variant calling can consider a lot of parameters. A particularly important one is the prior mutation rate (1e-3 by default). It is set via the flag -P of the bcftools call command. Consider making this flag configurable by adding a new key to the config file and using the params directive in the rule bcftools_call to propagate it to the shell command.
Step 5: Logging
When executing a large workflow, it is usually desirable to store the logging output of each job into a separate file, instead of just printing all logging output to the terminal—when multiple jobs are run in parallel, this would result in chaotic output.
For this purpose, Snakemake allows specifying log files for rules.
Log files are defined via the
log directive and handled similarly to output files, but they are not subject to rule matching and are not cleaned up when a job fails.
We modify our rule
bwa_map as follows:
rule bwa_map:
    input:
        "data/genome.fa",
        get_bwa_map_input_fastqs
    output:
        "mapped_reads/{sample}.bam"
    params:
        rg=r"@RG\tID:{sample}\tSM:{sample}"
    log:
        "logs/bwa_mem/{sample}.log"
    threads: 8
    shell:
        "(bwa mem -R '{params.rg}' -t {threads} {input} | "
        "samtools view -Sb - > {output}) 2> {log}"
The shell command is modified to collect STDERR output of both
bwa and
samtools and pipe it into the file referred to by
{log}.
Log files must contain exactly the same wildcards as the output files to avoid file name clashes between different jobs of the same rule.
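The parentheses around the pipe matter: without them, 2> {log} would capture only the stderr of the last command in the pipeline. A toy demonstration with stand-in commands (shell functions echoing fixed text instead of bwa and samtools):

```shell
# Stand-ins for bwa and samtools that write to both stdout and stderr.
step1() { echo "mapped reads"; echo "step1 warning" >&2; }
step2() { cat; echo "step2 warning" >&2; }

# Grouping the whole pipeline redirects stderr of BOTH stages into the log,
# while stdout still flows through the pipe into the output file.
( step1 | step2 ) 2> pipeline.log > out.txt

cat pipeline.log   # contains both warnings
cat out.txt        # contains "mapped reads"
```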
Exercise
- Add a log directive to the bcftools_call rule as well.
- Time to re-run the whole workflow (remember the command line flags to force re-execution). See how log files are created for variant calling and read mapping.
- The ability to track the provenance of each generated result is an important step towards reproducible analyses. Apart from the report functionality discussed before, Snakemake can summarize various provenance information for all output files of the workflow. The flag --summary prints a table associating each output file with the rule used to generate it, the creation date, and optionally the version of the tool used for creation. Further, the table informs about updated input files and changes to the source code of the rule after creation of the output file. Invoke Snakemake with --summary to examine the information for our example.
Step 6: Temporary and protected files
In our workflow, we create two BAM files for each sample, namely
the output of the rules
bwa_map and
samtools_sort.
When not dealing with examples, the underlying data is usually huge.
Hence, the resulting BAM files need a lot of disk space and their creation takes some time.
To save disk space, you can mark output files as temporary.
Snakemake will delete the marked files for you, once all the consuming jobs (that need it as input) have been executed.
We use this mechanism for the output file of the rule bwa_map:

rule bwa_map:
    input:
        "data/genome.fa",
        get_bwa_map_input_fastqs
    output:
        temp("mapped_reads/{sample}.bam")
    params:
        rg=r"@RG\tID:{sample}\tSM:{sample}"
    log:
        "logs/bwa_mem/{sample}.log"
    threads: 8
    shell:
        "(bwa mem -R '{params.rg}' -t {threads} {input} | "
        "samtools view -Sb - > {output}) 2> {log}"
This results in the deletion of the BAM file once the corresponding
samtools_sort job has been executed.
Since the creation of BAM files via read mapping and sorting is computationally expensive, it is reasonable to protect the final BAM file from accidental deletion or modification.
We modify the rule
samtools_sort to mark its output file as
protected:
rule samtools_sort:
    input:
        "mapped_reads/{sample}.bam"
    output:
        protected("sorted_reads/{sample}.bam")
    shell:
        "samtools sort -T sorted_reads/{wildcards.sample} "
        "-O bam {input} > {output}"
After successful execution of the job, Snakemake will write-protect the output file in the filesystem, so that it can’t be overwritten or deleted by accident.
Exercise¶
- Re-execute the whole workflow and observe how Snakemake handles the temporary and protected files.
- Run Snakemake with the target
mapped_reads/A.bam. Although the file is marked as temporary, you will see that Snakemake does not delete it because it is specified as a target file.
- Try to re-execute the whole workflow again with the dry-run option. You will see that it fails (as intended) because Snakemake cannot overwrite the protected output files.
Summary¶
For this advanced part of the tutorial, we have now created a
config.yaml configuration file:
samples:
    A: data/samples/A.fastq
    B: data/samples/B.fastq

prior_mutation_rate: 0.001
With this, the final version of our workflow in the
Snakefile looks like this:
configfile: "config.yaml"

rule all:
    input:
        "plots/quals.svg"

def get_bwa_map_input_fastqs(wildcards):
    return config["samples"][wildcards.sample]

rule bwa_map:
    input:
        "data/genome.fa",
        get_bwa_map_input_fastqs
    output:
        temp("mapped_reads/{sample}.bam")
    params:
        rg=r"@RG\tID:{sample}\tSM:{sample}"
    log:
        "logs/bwa_mem/{sample}.log"
    threads: 8
    shell:
        "(bwa mem -R '{params.rg}' -t {threads} {input} | "
        "samtools view -Sb - > {output}) 2> {log}"

rule samtools_sort:
    input:
        "mapped_reads/{sample}.bam"
    output:
        protected("sorted_reads/{sample}.bam")
    shell:
        "samtools sort -T sorted_reads/{wildcards.sample} "
        "-O bam {input} > {output}"

rule samtools_index:
    input:
        "sorted_reads/{sample}.bam"
    output:
        "sorted_reads/{sample}.bam.bai"
    shell:
        "samtools index {input}"

rule bcftools_call:
    input:
        fa="data/genome.fa",
        bam=expand("sorted_reads/{sample}.bam", sample=config["samples"]),
        bai=expand("sorted_reads/{sample}.bam.bai", sample=config["samples"])
    output:
        "calls/all.vcf"
    params:
        rate=config["prior_mutation_rate"]
    log:
        "logs/bcftools_call/all.log"
    shell:
        "(samtools mpileup -g -f {input.fa} {input.bam} | "
        "bcftools call -mv -P {params.rate} - > {output}) 2> {log}"

rule plot_quals:
    input:
        "calls/all.vcf"
    output:
        "plots/quals.svg"
    script:
        "scripts/plot-quals.py"
Q: How can I invoke a static method dynamically without an instance reference? Method.invoke(Object obj, Object[] parms) needs a concrete instance, but I want to call the static method directly on a Class object! Is this possible in Java?
The workaround is to create an instance dynamically with
newInstance() and call
invoke with that instance, but this will not work if the class does not have an empty constructor. And I don't want to create instances I really don't need!
A: According to the JDK API documentation for
Method.invoke(Object obj, Object[] args), "If the underlying method is static, then the specified
obj argument is ignored. It may be null." So, instead of passing in an actual object, a null may be passed; therefore, a static method can be invoked without an actual instance of the class.
The following sample program tests this fact, and correctly produces the output below. A concrete instance of class
Foo is never created.
import java.lang.reflect.*;

public class Test {
    public static void main(String[] args) {
        try {
            Class c = Class.forName("Foo");
            System.out.println("Loaded class: " + c);
            Method m = c.getDeclaredMethod("getNum", null);
            System.out.println("Got method: " + m);
            Object o = m.invoke(null, null);
            System.out.println("Output: " + o);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

class Foo {
    public static int getNum() {
        return 5;
    }
}
Program output:
Loaded class: class Foo
Got method: public static int Foo.getNum()
Output: 5
graphlib --- Functionality to operate with graph-like structures¶
Source code: Lib/graphlib.py
- class graphlib.TopologicalSorter(graph=None)¶
Provides functionality to topologically sort a graph of hashable nodes.
A topological order is a linear ordering of the vertices in a graph such that for every directed edge u -> v from vertex u to vertex v, vertex u comes before vertex v in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this example, a topological ordering is just a valid sequence for the tasks. A complete topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph.
If the optional graph argument is provided it must be a dictionary representing a directed acyclic graph where the keys are nodes and the values are iterables of all predecessors of that node in the graph (the nodes that have edges that point to the value in the key). Additional nodes can be added to the graph using the add() method.
In the general case, the steps required to perform the sorting of a given graph are as follows:
Create an instance of the TopologicalSorter with an optional initial graph.
Add additional nodes to the graph.
Call prepare() on the graph.
While is_active() is True, iterate over the nodes returned by get_ready() and process them. Call done() on each node as it finishes processing.
In case just an immediate sorting of the nodes in the graph is required and no parallelism is involved, the convenience method TopologicalSorter.static_order() can be used directly:
>>> graph = {"D": {"B", "C"}, "C": {"A"}, "B": {"A"}}
>>> ts = TopologicalSorter(graph)
>>> tuple(ts.static_order())
('A', 'C', 'B', 'D')
The class is designed to easily support parallel processing of the nodes as they become ready. For instance:
topological_sorter = TopologicalSorter()

# Add nodes to 'topological_sorter'...

topological_sorter.prepare()
while topological_sorter.is_active():
    for node in topological_sorter.get_ready():
        # Worker threads or processes take nodes to work on off the
        # 'task_queue' queue.
        task_queue.put(node)

    # When the work for a node is done, workers put the node in
    # 'finalized_tasks_queue' so we can get more nodes to work on.
    # The definition of 'is_active()' guarantees that, at this point, at
    # least one node has been placed on 'task_queue' that hasn't yet
    # been passed to 'done()', so this blocking 'get()' must (eventually)
    # succeed. After calling 'done()', we loop back to call 'get_ready()'
    # again, so put newly freed nodes on 'task_queue' as soon as
    # logically possible.
    node = finalized_tasks_queue.get()
    topological_sorter.done(node)
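As a concrete, runnable variant of this pattern, a thread pool can stand in for the worker processes. The graph and node names below are invented for illustration; the "work" is reduced to reporting completion:

```python
import queue
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each key depends on the nodes in its set.
graph = {"app": {"lib1", "lib2"}, "lib1": {"core"}, "lib2": {"core"}}

ts = TopologicalSorter(graph)
ts.prepare()

finalized = queue.Queue()  # workers report finished nodes here
ready_order = []           # records the order in which nodes became ready

with ThreadPoolExecutor(max_workers=2) as pool:
    while ts.is_active():
        for node in ts.get_ready():
            ready_order.append(node)
            # The "work" is trivial here: just report completion.
            pool.submit(finalized.put, node)
        # Blocks until some submitted node finishes, then unblock successors.
        ts.done(finalized.get())

print(ready_order[0], ready_order[-1])  # core first, app last
```

Because "core" is the only node without predecessors it is always ready first, and "app" can only become ready after both libraries are done.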
add(node, *predecessors)¶
Add a new node and its predecessors to the graph. Both the node and all elements in predecessors must be hashable.
If called multiple times with the same node argument, the set of dependencies will be the union of all dependencies passed in.
It is possible to add a node with no dependencies (predecessors is not provided) or to provide a dependency twice. If a node that has not been provided before is included among predecessors it will be automatically added to the graph with no predecessors of its own.
Raises ValueError if called after prepare().
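A small sketch of these rules (node names are arbitrary):

```python
from graphlib import TopologicalSorter

ts = TopologicalSorter()
ts.add("b", "a")       # "a" is added automatically, with no predecessors
ts.add("c", "b")
ts.add("c", "b", "a")  # repeated calls union the dependencies of "c"
ts.add("d")            # a node with no predecessors at all

order = list(ts.static_order())
print(order)
```

Whatever the exact ordering, "a" must precede "b", which must precede "c", while "d" may appear anywhere.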
prepare()¶
Mark the graph as finished and check for cycles in the graph. If any cycle is detected, CycleError will be raised, but get_ready() can still be used to obtain as many nodes as possible until cycles block more progress. After a call to this function, the graph cannot be modified, and therefore no more nodes can be added using add().
is_active()¶
Returns True if more progress can be made and False otherwise. Progress can be made if cycles do not block the resolution and either there are still nodes ready that haven't yet been returned by TopologicalSorter.get_ready() or the number of nodes marked TopologicalSorter.done() is less than the number that have been returned by TopologicalSorter.get_ready().
The __bool__() method of this class defers to this function, so instead of:
if ts.is_active():
    ...
it is possible to simply do:
if ts:
    ...
Raises ValueError if called without calling prepare() previously.
done(*nodes)¶
Marks a set of nodes returned by TopologicalSorter.get_ready() as processed, unblocking any successor of each node in nodes for being returned in the future by a call to TopologicalSorter.get_ready().
Raises ValueError if any node in nodes has already been marked as processed by a previous call to this method, if a node was not added to the graph by using TopologicalSorter.add(), if called without calling prepare(), or if a node has not yet been returned by get_ready().
get_ready()¶
Returns a tuple with all the nodes that are ready. Initially it returns all nodes with no predecessors, and once those are marked as processed by calling TopologicalSorter.done(), further calls will return all new nodes that have all their predecessors already processed. Once no more progress can be made, empty tuples are returned.
Raises ValueError if called without calling prepare() previously.
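A short session (graph invented for illustration) showing how nodes are returned in batches as their predecessors complete:

```python
from graphlib import TopologicalSorter

ts = TopologicalSorter({"d": {"b", "c"}, "b": {"a"}, "c": {"a"}})
ts.prepare()

first = ts.get_ready()   # only "a" has no predecessors
ts.done(*first)

second = ts.get_ready()  # "b" and "c" become ready together, in either order
ts.done(*second)

third = ts.get_ready()   # finally "d"
ts.done(*third)

print(first, sorted(second), third)
```

After "d" is done, further calls to get_ready() return empty tuples.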
static_order()¶
Returns an iterable of nodes in a topological order. Using this method does not require calling TopologicalSorter.prepare() or TopologicalSorter.done(). This method is equivalent to:
def static_order(self):
    self.prepare()
    while self.is_active():
        node_group = self.get_ready()
        yield from node_group
        self.done(*node_group)
The particular order that is returned may depend on the specific order in which the items were inserted in the graph. For example:
>>> ts = TopologicalSorter()
>>> ts.add(3, 2, 1)
>>> ts.add(1, 0)
>>> print([*ts.static_order()])
[2, 0, 1, 3]

>>> ts2 = TopologicalSorter()
>>> ts2.add(1, 0)
>>> ts2.add(3, 2, 1)
>>> print([*ts2.static_order()])
[0, 2, 1, 3]
This is due to the fact that "0" and "2" are in the same level in the graph (they would have been returned in the same call to
get_ready()) and the order between them is determined by the order of insertion.
If any cycle is detected, CycleError will be raised.
New in version 3.9.
Exceptions¶
The
graphlib module defines the following exception classes:
- exception graphlib.CycleError¶
Subclass of ValueError raised by TopologicalSorter.prepare() if cycles exist in the working graph. If multiple cycles exist, only one undefined choice among them will be reported and included in the exception.
The detected cycle can be accessed via the second element in the args attribute of the exception instance and consists of a list of nodes, such that each node is, in the graph, an immediate predecessor of the next node in the list. In the reported list, the first and the last node will be the same, to make it clear that it is cyclic.
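A short illustration (the three-node cycle is made up; which node the reported cycle starts from is an implementation detail):

```python
from graphlib import TopologicalSorter, CycleError

ts = TopologicalSorter({"a": {"b"}, "b": {"c"}, "c": {"a"}})
try:
    ts.prepare()
    cycle = None
except CycleError as exc:
    cycle = exc.args[1]  # second element of args holds the cycle

print(cycle)  # a list such as ['a', 'c', 'b', 'a']
```

Note that the first and last entries are the same node, as documented above.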
41 thoughts on “Tweet-a-Watt kits”
WTF is the point of this crap…?
i can’t think of a better way to spend $110…
Fail.
hey – it’s phil, i founded hackaday and later went on to become the senior editor at MAKE and i also co-designed this project.
@ninja, this is the only “free” low-cost and public domain way to monitor and publish power usage at this time. you’ll see a few dozen companies that try and patent this and sell expensive versions. with this kit you can make your own. it’s also a great way to learn about xbee.
i’m a little surprised by the snarks – it’s an open source hardware project, we released the project to the public domain, it hacks/mods a current device and then adds a lot of smarts on the “web 2.0″ side for data (you can use it with google app engine, twitter, facebook, irc, whatever).
i’d like to see what one of the snarky folks here could do better, cheaper and more open. please post up your projects…
@phil / pt
I agree with you 100%. A hack is a hack.
This one is pretty slick, and doesn’t just apply to twitter.
I believe that anything that shows up here with the word ‘twitter’ on it is shunned. I really do not understand why. This stuff can be ported on to so many different environments.
Quit looking at ‘twitter’, and look at the hack itself, and what could be changed.
@firetech – thanks so much!
@phil / pt
Just ignore these guys. It is not worth
the effort to reply to them.
I’m a regular reader of hack-a-day, I’ve
seen amazing hacks published in here. Just
keep your good work and don’t let this
project go down just because of this idiots.
@pt
the backlash isn’t about your hack – which is cool, let me make that clear – but it’s about Twitter. i posted about this when this originally hit hackaday: focusing on the twitter aspect raises the ire of people who don’t really like twitter (like me). it just sounds like hype-shooting marketingspeak that’s latching on to a current meme (open-source or not, everyone markets their own projects).
ok, ok, so there’s some hype thrown in, what’s wrong with that? i think it dilutes the purpose of your technology to shackle it with something as limited as twitter. i won’t go over why i don’t like twitter, other than to say what’s relevant: it’s very narrow and very transient. the absolute most useful part of this project, aside from hacking the kill-a-watt, is that it outputs to a real, live spreadsheet. this project would be far better described as “Spreadsheet-a-Watt” or “Log-a-Watt” or “Anal-a-Watt” (!), not “Put-it-on-a-stack-of-poorly-organized-messages-a-Watt”.
I am on the fence, I agree with the “twitter” comments.
I like the idea of logging, and I like the idea of “wireless” transmission of power usage.
I keep pushing the idea down because instead of logging to a database or spreadsheet “you can ‘tweet’ your power usage!!”
As far as “cheaper” and “more open source” goes, you can apparently use an iron ballast and a few turns of wire to record your amperage usage. Combine that with an X10-like wire transmitter and a $1.50 Atmega and you can have nearly the same functionality for $30 or so (less if you have junk to raid for parts), not $110. Not to mention that you can probably attach multiple ammeters, or just one on your house shunt. (in fact, if you could design one to safely clamp around your bus-bar you would only need one.) You could use a portable device to measure your devices’ draw and then when your usage increments by that amount, the device is active.
(somebody please show me a $20 Kill-a-watt, all I can find is ebay [dodgy] and Newegg, $25+ after tax and shipping.)
@emilio – you’re saying that you have issues with twitter and the project would have been better if we named it “spreadsheet-a-watt”…. ok, next revision we will consider renaming it.
@nubie – on the fence? it’s pretty clear that you can use *any* way to display the data, twitter is one easy and free way to do it (and it’s a free SMS gateway).
the data goes to google app engine, or anywhere else (just read the how-to pages).
please design and publish the project you outlined – i’d love to see that and make something better and cheaper, that’s the point of this too – to get others to make things.
lastly, you can find a kill-a-watt for $15 or less, i see them in NYC and a trip to google’s old froogle thing has the same results.
Oh, did I mention I am too broke to buy the kill-a-watt at $25 as opposed to $20?
I live in California, not NYC, and the local Home Depot stores have them listed for $40.
I do understand that you can use it any way you see fit, if you can afford it in the first place.
Perhaps a version that can communicate via wire or Infra-red to the PC? It is just the Xbee system that seems frightfully expensive.
I am looking for the links to the project I was thinking of, they may be on my other PC, DOH!
Just a quick confirmation on prices, “froogle” (or Google shopping to me) has exactly 3 links for the kill-a-watt under $15:
The first is an ebay “$5″ link, to an auction that ended at $13.39 with $5 shipping, so $18.39
the second is iKitchen for $14.98, but of course they are out of stock (and who knows what shipping will cost?)
The most legit under $15 link is this one for $14.99:
Unfortunately they expect you to pay this much for shipping:
UPS / USPS Standard $13.99
UPS 3 Day Select $47.99
UPS 2nd Day Air $59.99
UPS Next Day Air $95.99
Sorry, it appears that you can’t get one for less than $25
As a person with little money I try to save where I can, and have found that armchair budgeting from ‘Froogle’ adds $8-20 more at each store. (When will google add real shipping costs?)
@nubie – you can get it online and also at other non-big-box store for less than $15. in general the price of things are also cheaper in most cities in CA as opposed to nyc.
you could easily use a wire – everything we did is documented, it’s up to anyone and everyone to use the information for better and cheaper ways to make a project like this if they want.
on a related note, the xbee is cheap compared to other wireless solutions – just my opinion.
Hmmm ok PT..
add a pic and a rs232 port to your PC. cheaper, easier, and does not clutter up the internet with useless information. I’d rather have this info on my Pc at home than on twitter.
$90.00 is a really high price for what the Xbee stuff does. Honestly it’s why it is not used in much because it is really really overpriced. Use the low grade 433mhz modules that you can get from sparkfun for $8.99 each and do the exact same thing with just a bit more programming.
It’s a neat idea, dont get me wrong, but It’s like seeing a “remote control light! uses 2 iphones!” using overkill parts for the sake of using them is silly. The foundation of hacking is to use the cheap stuff, not the highest priced RF link I can find.
@fartface – sure there are many ways to do this – in fact, please make & publish your suggestion – it might be better… as far as ” and does not clutter up the internet with useless information” – really? you think power data from a modded power meter is worse than 99% of what’s on twitter (and the internet)? this is real data that actually is useful – we lowered our power bill and other power services are tapping in to our data for their own use and study.
$90 is cheap compared to trying to do this with wifi – keep in mind, $90 = 2 xbee modules, 2 adapter kits, 1 ftdi cable and parts. if you were to buy all this on your own it’s well over $100 and if you were to try and re-create it using other tech it might not work and it would be a lot more expensive.
we used an off the shelf $15 power meter and the cheapest/best way to do wireless – again, do a better a cheaper version and publish it, i’d love to check it out and make it :)
congrats!
Please again point me to a source for $15 pricing.
What is a “non-big-box” store?
I just find it wasteful that the main parts of the “hack” involve costly complicated parts that are all-together used as:
1. An ammeter
2. An input cable.
over $100 is way too much to spend for what you receive in return (in my opinion).
The necessity for wireless communication is rare, no matter the cost of the Xbee.
In these ways I find the device inelegant (waste of hardware resources, high cost), and to add insult to the injury it is promoted with “tweeting”.
I am looking for a better solution, namely a source of clamp ammeters similar to that used in this instructable:
It should be possible to make your own with a few winds of wire if you want to go that route.
I would assume that an Atmega8 should be able to read the resulting voltage signal and then communicate the result over Serial or USB (Using this firmware)
Cost goes way down, reliance on proprietary modules goes down as well.
I like that there’s such a convo going on about this.
my question would be, how much power to you use keeping a pc running to log all this. not everyone would normally keep one running and it promotes doing so. of course you *could* use a low watt laptop or netbook, but lets be honest, alot of people might wanna run this on an old p3 based system sitting in a closet burning 300watts.
it seems more gimmick. now if it had an xport or wiznet 5100 built into the design, different story, less power usage, and you could skip the required xbee altogether, or still use it and transmit to an xbee + ethernet adaptor instead of the computer.
I question whether this solves a problem or adds to product consumption for gimmicks sake. much like the botanical kit…
props for making, don’t take me wrong, hopefully you see my point, but from the defensive tone here I think I know the response I’ll get…
not to add to the flame wars, but while the XBees are great, they do seem like the biggest waste of money in the Tweet-a-Watt design. A complete wireless XBee link costs $40, while this complete link (which is slower, but high-speed data shouldn’t be necessary for the design) costs only $4 + $5 = $9 total:
and
Here’s an example design they’re used in (nicely documented):
Next version should have an ethernet port and not have anything to do with Twitter.
/useless post
@fyrebug the router was $45
@fartface sure, 433 mhz modules are cheap. dont forget you’ll also need: a microcontroller with A/D AND a microcontroller programmer PLUS all the code to deal with A/D, sleeping and retransmits, packetization, error correction, addressing, collision detection, etc. if you’re psyched to write that code go for it! but…im not. its not easy and not worth the time. the xbee modules Just Work with no programming or tweaking.
then once you’ve spent that time you can reuse the code i’ve posted & comment out the single line of code (at the very end of the project) that sends a twitter
@nubie if you live in new york, J&R is right downtown and has them for…$20!
There are a few suggestions here for “just a wire coil” around the bus-bar/feed. Which works pretty ok if your entire house is purely resistive, but is wildly inaccurate if you have a non-unity power factor.
Given the killawatt represents the “it can kill you” and “burn your house down” part of the process, I think spending the extra money may be warranted. Especially given it may not be more than an equivalent clamp-on ammeter. killawatt was $8.50 from the library (they used to rent them out) while a used 75A clamp-on ammeter that wasn’t routinely overwhelmed by RF noise was $17.
Granted you can build a functional current-shunt meter or hall-effect meter for less.
correct me if I am wrong, but doesn’t the unit only measure power usage of whats plugged into it anyway? I mean, wow.. you managed to log your desk-lamp power usage to the internet.
The real use of these devices is to go around your house with one and collect data on EVERYTHING. Then come back, sit with all of it on paper and figure out where you can cut your bill. (wow, that 40 watt bulb in the halway is really sucking the current down, maby I can change that for a 20?) constantly monitoring only one appliance is sort of like missing the point. Knowing that your washer is a power hog doesn’t change the fact that you still need to wash your clothes.
as pointed out by several people, some way to watch your entire house power usage would be nice… ohh.. wait.. the electric company installed one of those fancy meters with the spinning wheel on it right there on the side of your house.
So, here is my published hack:
parts needed – pencil, paper (ruled), eyeglasses (Depends on your eyesight), house, gumption (that attribute that determines your willingness to get off your ass for 5 minutes a month, rather than relying on a fancy plug in module to wirelessly deliver all its data to your lazyboy while you watch the big game)
optional – ruler (to draw straight lines)
Step 1 – using pencil (and optional ruler) divide paper into 4 vertical columns. Mark the top of each column with the following headings: date, previous reading, this reading, monthly usage
Step 2 – go to your meter on the side of your house. Apply eyeglasses (if necessary). Read meter. Enter this information in the “this reading” column. Also not today’s date in the date column.
Step 3 – collect underpants, make profit, wait one month.
Step 4 – (30 days later) go to meter. Re-apply eyeglasses (if necessary). Read meter. Note this information in “this reading” column. Also note the date.
Step 5 – carry the data from the previous month’s reading in “this reading” down to today’s entry under the “last reading” column
Step 6 – subtract last reading from this reading. Enter the result under "monthly usage"
Hack complete
Variation – use PC and spreadsheet software in place of paper. You can also slightly automate the process.
Tip: publish your data on twitter to track your progress and allow others to learn how much you depend on the air conditioner.
I suppose if you wanted to make one of these devices really useful, wire it into your air conditioner, or behind your fridge, or your washer/dryer combo..
ohh, and Amillio, “anal-watt” is *not* a good name ;)
@nubie – there’s one for $14.50 – just google around. a big box store is like a home depot, best buy, etc. they have huge mark ups, don’t shop there for things like kill-a-watts.
please please please make a better cheaper one, it’s fun to research around and try and figure a way to make things, the hard work is ahead – make one :)
@fyrebug – we do not use a PC to “keep this running” – please read the page(s) – we use a hacked ASUS router that is already on and doing things. again, you can use other things – we wanted low-cost wireless. if you look around there are dozens of “funded” companies doing wireless power metering and they’ll all using xbee too it seems. i see your point, but i don’t think you read the site or the details of the project. most folks just react to the headline.
@dano – you can use anything, the xbee is just one way of doing this project… and again, if you can do one better and cheaper and publish, i’d love to see it… just linking to lower-cost products doesn’t count :)
@error404 – you can use ethernet (please read the project page).
@mre “correct me if i am wrong, but doesn’t the unit only measure power usage of whats plugged into it anyway? i mean, wow.. you managed to log your desk-lamp power usage to the internet.”
we have our entire apartment on this, so it’s more than just a desk lamp. a house would be another type of project – and you could use what we did to tap the breaker too and publish…
.”
it does log to a spreadsheet, including google app engine (please read the site).
i can’t help your twitter issues, it’s a cheap and free sms gateway – you can comment the one line of code out if you want :)
if someone can point me to a good resource for do-it-yourself ammeter, I’ll have a go at adapting a propeller to do this with a built in ethernet cable like the ybox. Word on the street is prop prices are gonna drop, we’ll see. t’would be cool to see this combined with a relay so you could turn the plug off and on over the net.
@ Ladyada, Much respect, I love your site.
Unfortunately for me I live in California and the JR website offers them for $26.95 shipped to me.
At that point I might as well go with newegg (last I checked it was $17.99, it is now back to 19.99 + tax and shipping.)
I like your use of the router, can the router be connected directly to the Kill-a-watt to bring the price down?
Argh, I don’t really mean to fill up the comments here, but . . .
I found at least one of the pages with the DIY clamp on current transformers
I really wish I could find all the info that I saw before, it may be in some older bookmark backups.
It would be great if there was an option that was entirely DIY that only provided the functionality to interface with a computer or micro-controller.
(could it have been at eco-modder forums? Or a link from Dansdata? Or even here? Nuts, wish I knew, maybe a slashdot comment had the link I clicked.)
@ladyada that’s great! you’re right I didn’t fully read that, and having had people do that on my projects I know it can be annoying, but now understandt it from both ends.
much, much better with a router. though I’m still no fan of the xbee. just a personal preference unrelated and unneeded. seems like an overcost large hammer to drive in a tiny finsihing nail.
or you can buy DT266 Digital Clamp Meter for ~$10 (real price in two shops in my City/EU) and hack it
For house bus bars there *should be no ‘shunt’ involved!* I would *highly* recommend some prior research before you start comparing a ‘shunt’ to a ‘current transformer’. frankly, you stand to **LEARN** more by designing a clamp-on current transformer and actually understand where your power usage is coming from (rather than buying one)
@phill
Well its still a miserable hack :P
and using twitter means u deserve all the flack you got.
@medix,
Whoops, that was an error in vocabulary, I did indeed mean a current transformer or transducer.
(I am aware that a shunt must be installed in-line, and thus is much harder to install and is potentially more dangerous)
@rasz , that is very cool for $10
(if you already have an xbee and a kill-a-watt laying around I guess the t%$*t a watt is a neat hack, but looking from a cost-performance ratio it leaves a little to be desired. It should top the product range and at least one cheaper version should be considered.)
Once again I think people are missing the point here. Yes this may be expensive for some people. And if ~$110 is too much for you, having it is NOT going to save you any money probably. On other hand, for some people this $110 investment is returned within a first month on energy saved. And like others pointed out, you can just use an ammeter and have that data sent. The only flaw with the this meter is that its only use-full to monitor a single circuit. It’s more use-full to audit your per-device consumption but to get a better understanding or view of your energy consumption you have to monitor your total consumption. For most people the energy wasted is not caused by a single appliance, though outdated/rusted water heater will add a nice chunck to your monthly bill (as I learned). For most people it will be all those phantom devices that lay around plugged in but unused.
They should name the kit “twatter” – that would pretty much sum up all it’s features.
If you placed it at the junctionbox you could not only see your use but you could watch your house for any unwanted guest since you’d see peaks if they switched on a light or something.
Assuming one of these could handle the load of many devices at once.
You’d have to make some baselines for when the fridge motor is on and off and such periodic peaks devices of course.
Just musing.
@adafruit:
Maybe I’m missing something, but the project just seems to be about pumping serial data over xBee. The only mention of a non-computer-dependent Ethernet implementation was hacking an Asus router and adding xBee to it. I mean do away with all of this and build a simple data logging client and Ethernet interface into the device itself to cut out the cost of xBee and the need to have a power-sucking computer online to log how much power you’re sucking.
I think this is a brilliant concept – it’s an idea I’ve been sitting on for about 15 years. If people are simply conscious of what energy is going where (and it’s presented in the right way) then they’ll instinctively move towards efficiency.
The ultimate (in my most humble of opinions) is a web-based interface where you can measure every single outlet… and have a clamp meter over the entire supply… AND control every outlet via the web.
I think this would lead to economisation software, and possibly a social-type thing where (for example) one school can compare itself to another school and exchange tips/tricks into how to get better results.
I think the killer-ap aspect of this will be remote control – the energy monitoring part of it is an adjunct to this – more value than the control itself, but it will come as an “extra”
I think the price is too high at the moment though… for any currently commercially available unit anyway. Although the ROI angle is pretty clear, it’s still not low enough to be an impulse buy.
Some tools that we have these days are very helpful indeed, it didn’t just help us but it also make us aware in many things like how many power we had already used and the like.
I like it… very cool hack. | http://hackaday.com/2009/03/26/tweet-a-watt-kits/?like=1&source=post_flair&_wpnonce=18b5244a97 | CC-MAIN-2015-11 | refinedweb | 4,086 | 78.28 |
On 24 Jun 1998, Ben Gertzfield wrote:> "Connection timed out" is not documented as a valid return from> accept(2) and this is believed to be a bug in the Linux kernel.What a lame excuse. Anyhow, here's what I did in Apache, where I caremore about reliability than whether man pages are up to date. Someone maywant to use this to form a better patch for sendmail... or to get the manpages updated.Dean /* * search for EPROTO. * *. See PR#981 for example. It's hard to * handle both uses of EPROTO. */#ifdef EHOSTUNREACH case EHOSTUNREACH:#endif#ifdef ENETUNREACH case ENETUNREACH:#endif break; default: ap_log_error(APLOG_MARK, APLOG_ERR, server_conf, "accept: (client socket)"); clean_child_exit(1); }-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.rutgers.edu | http://lkml.org/lkml/1998/6/24/166 | CC-MAIN-2015-11 | refinedweb | 135 | 57.98 |
Creating custom tables
There are two applications related to custom tables in Kentico:
- Custom tables application allows you to create new custom tables, given you are the site's global administrator. You can also view and manipulate data in existing tables using this interface.
- Custom table data application allows you to view and manipulate data in existing custom tables only.
Note that you can create new custom tables using the wizard in the Custom tables application only. There is currently no simple way of creating custom tables using the API. This is due to the need to create related objects and settings — these tasks are handled by the wizard.
Creating a custom table
Follow the example to create a new custom table using the New custom table wizard.
Starting the New custom table wizard
- Open the Custom tables application.
- Click New custom table. A New custom table wizard starts.
Step 1
- Enter the following details:
- Custom table display name: People
- display name is used in Kentico user interface.
- Namespace: customtable
- Name: people
- Name is used in website code — always preceded by a Namespace, which allows you to have different tables of the same name in different contexts.
- Click Next.
Step 2
Specify the details of the database table that the custom table uses.
- Choose Create a new database table.
Leave the value customtable_People in the Database table name field.Click here for information about the available options...
- Create a new database table - choose this option to create a brand new table in the system's database.
- Database table name - the actual name of the table in the system database. A name in the <namespace><code name>_ format is pre-filled automatically.
- Primary key name - primary key column name, pre-filled with the ItemID value.
- Use an existing database table - choose this option if you already have a physical table in the system database and want to register it in the system.
- Database table name - the actual name of the table in the system database. The drop-down list offers only database tables which are not part of the default Kentico database schema and are not yet registered for any object.
- Primary key name - primary key column name. The original table's primary key column is used automatically.
The following check-boxes allow you to choose the default fields that you want to include in the custom table:
- Include ItemGUID field - globally unique ID of the particular custom table data record. Note that this field is mandatory if you want to stage the table's data to a different instance of Kentico.
- Include ItemCreatedBy field - user name of the user who created the item.
- Include ItemCreatedWhen field - date and time of when the item was created.
- Include ItemModifiedBy field - user name of the user who last modified the item.
- Include ItemModifiedWhen field - date and time of last modification.
- Include ItemOrder field - order of the item when table content list is displayed. The lower number, the earlier position in the list.
- Make sure all the checkboxes are enabled.
- Click Next.
Step 3
Third step contains a field editor. The field editor lets you define the columns in the database table.
Create the following fields to follow this example:
Create a field using the New field button with the following properties:
- Field name: FirstName
- Field type: Text
- Field size: 100
- Required: Yes (check the box)
- Field caption: First name
- Form control: Text box
- Click Save.
Create a second field using New field with the following properties:
- Field name: LastName
- Field type: Text
- Field size: 100
- Required: Yes (check the box)
- Field caption: Last name
- Form control: Text box
- Click Save.
- Create a third field using New field with the following properties:
- Field name: DateOfBirth
- Field type: Date
- Required: No (leave the box unchecked)
- Field caption: Date of birth
- Form control: Calendar
- Editing control settings -> Edit time: No
- Editing control settings -> Show 'Now' link: No
- Click Save.
- Click Next.
Step 4
Select for which sites you want to make the custom table available
- Click Add sites. A dialog box opens.
- Choose Corporate Site (or any site that you want to add the custom table to).
- Click Next.
Step 5
The last step gives you an overview of the tasks executed by the wizard.
Finish the wizard. The system redirects you to the General tab of the custom table that you created. | https://docs.xperience.io/k8/developing-websites/defining-website-data-structure/custom-tables/creating-custom-tables | CC-MAIN-2021-04 | refinedweb | 728 | 55.34 |
Hello all, I have a number of functions and data structures I'd like to see added to the stm package. The package is listed as being maintained by the list, so I thought I'd ask how "sacred" stm is considered before going through the process to propose things. The extra functions are no-brainers. Some are common conveniences (e.g., modifyTVar and modifyTVar'), some aim to make the API more consistent (e.g., add swapTVar like swapTMVar), and some can be optimized if they're included in the package (e.g., tryReadTMVar[1]). The data structures are all variations on TChan which seem common enough to be included in the package, though I could also fork off a stm-data package instead. The variations include channels that can be closed, channels which have bounded depth (i.e., cause extra writes to block until room is available), and channels which are both bounded and closeable. Thoughts? Also, to make proper patches, where is the stm repo? [1] From outside the package we must define this as: tryReadTMVar :: TMVar a -> STM (Maybe a) tryReadTMVar var = do m <- tryTakeTMVar var case m of Nothing -> return () Just x -> putTMVar var x return m Whereas, from inside the package we can just do: tryReadTMVar :: TMVar a -> STM (Maybe a) tryReadTMVar (TMVar t) = readTVar t thus saving an extraneous read and write. -- Live well, ~wren | http://www.haskell.org/pipermail/libraries/2011-February/015969.html | CC-MAIN-2014-35 | refinedweb | 231 | 69.72 |
Functions in C Programming
Function is a logically grouped set of statements that perform a specific task. In C program, a function is created to achieve something. Every C program has at least one function i.e. main() where the execution of the program starts. It is a mandatory function in C.
2. Components of Function
1. Function Prototype/Declaration
2. Function Definition
3. Function Call
3. Types of Function
Advantages of Function
The advantages of using functions are:
- Avoid repetition of codes.
- Increases program readability.
- Divide a complex problem into simpler ones.
- Reduces chances of error.
- Modifying a program becomes easier by using function.
Components of Function
A function usually has three components. They are:
- Function Prototype/Declaration
- Function Definition
- Function Call
1. Function Prototype/Declaration
Function declaration is a statement that informs the compiler about
- Name of the function
- Type of arguments
- Number of arguments
- Type of Return value
Syntax for function declaration
returntype function_name ([arguments type]);
For example,
void sort(int []); /*function name = sort, receives an array of integer as argument and returns nothing*/ int product(int,int); /*function name = product, receives two integers as argument and returns an integer*/
A function declaration doesn't require name of arguments to be provided, only type of the arguments can be specified.
2. Function Definition
Function definition consists of the body of function. The body consists of block of statements that specify what task is to be performed. When a function is called, the control is transferred to the function definition.
Syntax for function definition
returntype function_name ([arguments]) { statement(s); ... ... ... }
Return Statement
A return statement is used to return values to the invoking function by the invoked function. The data type of value a function can return is specified during function declaration. A function with void as return type don't return any value. Beside basic data type, it can return object and pointers too. A return statement is usually place at the end of function definition or inside a branching statement.
Syntax of return statement
return value;
For example,
int product (int x, int y) { int p = x*y; return p; }
In this function, the return type of product() is int. So it returns an integer value p to the invoking function.
3. Function call
A function call can be made by using a call statement. A function call statement consists of function name and required argument enclosed in round brackets.
Syntax for function call
function_name ([actual arguments]);
For example,
sort(a); p = product(x,y);
A function can be called by two ways. They are:
- Call by value
- Call by reference
Call by value
When a function is called by value, a copy of actual argument is passed to the called function. The copied arguments occupy separate memory location than the actual argument. If any changes done to those values inside the function, it is only visible inside the function. Their values remain unchanged outside it.
Example: C program to find the sum of two numbers using function (call by value)
#include<stdio.h> int sum(int,int); // function prototype int main() { int a,b,s; printf("Enter two integers:"); scanf("%d%d",&a,&b); s=sum(a,b); // fucntion call printf("Sum = %d",s); return 0; } int sum(int x,int y) // function definition { return (x+y); // return statement }
In this program, two integer values are entered by user. A user-defined function sum() is defined which takes two integer as arguments and return the sum of these values. This function is called from main() and the returned value is printed.
Output
Enter two integers:11 6 Sum = 17
Call by reference
In this method of passing parameter, the address of argument is copied instead of value. Inside the function, the address of argument is used to access the actual argument. If any changes is done to those values inside the function, it is visible both inside and outside the function.
Example: C program to swap two values using function (call by reference).
#include <stdio.h> void swap(int *, int *); // function prototype int main() { int a,b; printf("Enter two numbers: "); scanf("%d%d",&a,&b); printf("Before swapping\n"); printf("a = %d\n",a); printf("b = %d\n",b); swap(&a,&b); // function call by reference printf("After swapping\n"); printf("a = %d\n",a); printf("b = %d\n",b); return 0; } void swap(int *x, int *y) // function definition { int temp; temp = *x; *x = *y; *y = temp; }
This program swaps the value of two integer variables. Two integer values are entered by user which is passed by reference to a function swap() which swaps the value of two variables. After swapping these values, the result is printed. In this program, the arguments must be passed by value because we are changing the values of two variables inside a function. If they are passed by value then the change won't be seen outside the function.
Output
Enter two numbers: 12 35 Before swapping a = 12 b = 35 After swapping a = 35 b = 12
Types of function
There are two kinds of function. They are:
- Library functions
- User-defined functions
1. Library functions
Library functions are the built in function that are already defined in the C library. The prototype of these functions are written in header files. So we need to include respective header files before using a library function. For example, the prototype of math functions like pow(), sqrt(), etc is present in math.h, the prototype of exit(), malloc(), calloc() etc is in stdlib.h and so on.
2. User-defined functions
Those functions that are defined by user to use them when required are called user-defined function. Function definition is written by user. main() is an example of user-defined function. | https://www.programtopia.net/c-programming/docs/functions | CC-MAIN-2019-30 | refinedweb | 961 | 56.15 |
Red Hat Bugzilla – Bug 135673
Anaconda Crash During Install
Last modified: 2007-11-30 17:10:51 EST
Description of problem:
The install crash with the following;
'import site' failed; use -v for traceback
file '/usr/bin/anaconda' line 348 in ?
import signal, traceback, string, isys, iutil, time
file 'usr/lib/anaconda/isys.py' line 24 in ?
import kudzu
Importerror : No module named kudzu
Version-Release number of selected component (if applicable):
How reproducible:
Always on my one amd64 machine.
Steps to Reproduce:
1. Attempt install on x86_64 machine
2. After storage drivers loaded (1394, usb, sata, via_sata, sym53c8xx)
3.
Actual results:
Crash.
Expected results:
Successful info
Additional info:
Asus K8VSE Deluxe, AMD64 3200, Nvidia 5700.
FYI - systems has been running FC2 since release.
Can you provide the complete traceback you received?
That was it I !
I did not get any further info.
BTW - I also tried 'text' mode and it crashed but there was no error,
it was just after starting anaconda and it send 'installer aborting'
and rebooted.
Something that may be significant is that the media I used to burn the
ISO image too is not the best quality.
I have burned about 8 (out of 25) and every one failed to read in
other players etc.
I will test this evening with A-Grade media and ensure that I'm not
wasting your time.
Did things work better with the better quality media?
Hello,
I managed to do an 'upgrade' with i386 DVD but the 'install' crashed.
This was on A-grade media.
Not sure whats going on.
Does booting with 'linux ide=nodma' help?. | https://bugzilla.redhat.com/show_bug.cgi?id=135673 | CC-MAIN-2017-04 | refinedweb | 269 | 68.26 |
Index
Links to LINQ
LINQ Farm Seeds are short posts designed to be read in a few minutes. The first seed showed how to list in alphabetical order all the operators used in LINQ to Objects. At the top of the list one found the Aggregate operator.
The LINQ Aggregation operator is easy to use. Consider the following declaration:
List<int> ints = new List<int> { 1, 2, 3 };
The aggregate of the numbers in this list would be 1 + 2 + 3 or 6. Consider this array:
List<int> ints = new List<int> { 1, 2, 3, 4 };
The aggregate of the numbers in this list would be 10:
10 = 1 + 2 + 3 + 4;
To perform this kind of calculation one could take the first two numbers in the list, add them together to get 3, then add the third number to their sum to get 6, then add the fourth number that sum to get 10:
This algorithm is used by the first of the three overrides of the LINQ to Objects Aggregate operator. Using typical LINQ syntax, one could call the operator as follows:
int sum = integers.Aggregate((a, b) => a + b);
The Aggregate operator is passed a simple lambda expression that takes two parameters and adds them together. In our case, using the List<int> shown above, the first two parameters would be 1 and 2 and the return value would be 3. The function derived from the lambda expression would then be called again with the value 3 and 3, and so on.
If you are not yet quite comfortable using lambdas, you could accomplish the same ends with the following code:
private int AddEm(int a, int b)
{
return a + b;
}
int sum = integers.Aggregate(AddEm);
This code also compiles and runs correctly. It is semantically equivalent to the lambda expression code shown above it. The word lambda sounds complicated, but in some ways lambda expressions are even easier to create and understand than a method such as AddEm().
A complete program implementing the Aggregate operator is shown in Listing 1.
Listing 1: A short LINQ program that outputs the number 10.
using System;
using System.Collections.Generic;
using System.Linq;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
int count = 5;
List<int> integers = new List<int>();
for (int i = 0; i < count; i++)
{
integers.Add(i);
}
int sum = integers.Aggregate((a, b) => a + b);
Console.WriteLine(sum);
}
}
}
It would be a bit more fun to use this program if would could dynamically change the value of count. To make this possible I have created a simple Windows Form application. You can download this implementation from the LINQ Farm on Code Gallery. A screen shot from the application is shown in Figure 1.
Figure 01: The aggregate of 0 + 1 + 2 + 3 + 4 is 10.
Here is the declaration for this first override of the Aggregate operator:
public static TSource Aggregate<TSource>(this IEnumerable<TSource> source, Func<TSource, TSource, TSource> func);
Since it is an extension method, the first parameter to this generic method is effectively ignored. As a result, we need only be concerned with the second parameter, which looks like this:
Func<TSource, TSource, TSource> func
Assuming we are passing in an array of integers to this LINQ operator, then this declaration states that Func takes two integers as parameters and returns an integer:
int Func(int a, int b)
If you look up above at the AddEm method, you will see that it has the same signature. The lambda expression we have been using also generates code with the same signature:
(a, b) => a + b;
In this short post you have seen how to work with the first override of the Aggregate operator. In future posts, I will show how to work with other overrides of this same operator.
---
Down the source code from the LINQ Farm.
I want to get my hands dirty; show me more LINQ Farm posts.
If you would like to receive an email when updates are made to this post, please register here
RSS
PingBack from
I left this additional posting on a friend's blog. I love the 3rd overload on Aggrigate.
return list.Aggregate(new StringBuilder(),
(sb, n) => sb.Length == 0 ? sb.Append(n) : sb.Append(",").Append(n),
sb => sb.ToString());
More details here:
If I understand how this works from your description, then the following are common aggregations as well using the same overload:
int min = integers.Aggregate((a, b) => Math.Min(a, b))
int max = integers.Aggregate((a, b) => Math.Max(a, b))
I don't have 2008 or 3/3.5 frameworks loaded yet so I could be wrong.
Also what happens if the list is only one unit long? Shouldn't the int aggregate start from 0 (default value) and item 1. Then taking that aggregate and the next value in the list return the new aggregate and so on.
Also what happens if the list was 0 length? Does it return 0 the default int value or null because there is nothing to do? Returning null is an issue if we are dealing with ints though.
WarePhreak, the min and max examples you gave work fine - very useful.
If the list is only one unit long, it still seems to work. The addition aggregate just gives the value of the single list entry. The Min and Max aggregations also just give the value of the single list entry.
If the list is empty, a System.InvalidOperationException is thrown, with additional information of 'Sequence contains no elements'. So you'll either have to catch the exception or check the length of the list before calling the Aggregate.
LINQ Farm Seeds are short posts designed to be read in a few minutes. In the previous seed we used the
Charlie Calvert's LINQ Farm Seeds
Charlie Calvert is a C# Community Evanelist at Microsoft and his blog is filled with informative posts
> Also what happens if the list is only one unit long?
It returns that element (with the overload looked at here, the other overloads return the seed if zero length).
> return list.Aggregate(new StringBuilder(),
I think that the need for two function calls can make this look obscure, and given the simple overload's handling of one length lists an approach more like below is simpler:
list.Select(x => x.ToString()).Aggregate((x,y) => x + "," + y); | http://blogs.msdn.com/charlie/archive/2008/02/05/linq-farm-seed-i-aggregate-operator-part-i.aspx | crawl-002 | refinedweb | 1,067 | 62.88 |
The issue is that each item in the list called urls is a tuple. A
tuple is a container for other items and is also immutable. When you
do item + "
", you are asking the interpreter to concatenate a tuple and a string
which is not possible.
What you want to do instead is inspect the tuple and select one of the
fields in each item to write to the outfile:
with open("1.txt", "w+") as outfile:
f
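What that loop might look like in full, as a sketch with sample data, assuming the field you want is the first element of each tuple (adjust the index to match your data):

```python
# Each item in urls is a tuple; write out just the field you want.
urls = [("http://example.com", 200), ("http://example.org", 404)]

with open("1.txt", "w+") as outfile:
    for item in urls:
        outfile.write(item[0] + "\n")  # item[0] is the chosen field
```

Concatenating item[0] (a string) with "\n" is fine; it was only the whole tuple that could not be concatenated.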
1) Anywhere in your project directory. I made a new file called
exceptions.py where I place all my HTTP status code and validation
exceptions. I placed this file in the same directory as my views.py,
models.py, etc.
2) That bad taste in your mouth is Python, because importing methods
is the Pythonic way to go about using classes and functions in other
files, rather than some sort of magic. Might
It seems you've misunderstood how the link works. Django doesn't offer
a selection for reverse foreign keys. From the Item admin you could
select the Category like that. But not the other way around.
One workaround would be to use a project that adds custom widgets such
as Django Tags Input which adds a tag-like input field to your admin.
In this case the configuration would look something like
Well, I usually conditionally assign to the target of the loops. You
could also use an associative array containing the assignments.
if SOMECONDITION1:
    target1 = js
    target2 = ks
    target3 = ls
elif SOMECONDITION2:
    target1 = ks
    target2 = ls
    target3 = js
elif SOMECONDITION3:
    target1 = ls
    target2 = js
    target3 = ks

for j in target1:
    for k in target2:
        for l in target3:
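The "associative array" alternative mentioned above might look like this: a dict maps each condition to the desired loop ordering, replacing the if/elif chain with a single lookup (the condition labels here are illustrative):

```python
js, ks, ls = [1, 2], [3, 4], [5, 6]
condition = 2  # hypothetical: whichever of the three conditions holds

# One dict lookup replaces the if/elif chain.
orderings = {
    1: (js, ks, ls),
    2: (ks, ls, js),
    3: (ls, js, ks),
}
target1, target2, target3 = orderings[condition]

for j in target1:
    for k in target2:
        for l in target3:
            print(j, k, l)
```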
Rather than a garbage token simply encode no data:
def generate_auth_token(username=None, expiration=600):
    gen_serial = Serializer(secret_key, expires_in=expiration)
    data = {'username': username} if username is not None else {}
    return gen_serial.dumps(data)
Then you can have an invalidate endpoint that requires a login and
returns a token without a username:
def invalidate(self):
That's probably because \b is for word boundaries, not just characters.
The following pattern will only give matches for what you initially
specified (a word ending in * that doesn't have it in the middle or
beginning):
(?m)(?<!\w|\*)\w+?\*(?=[^\w]|$)
Works in RegexBuddy when tested against this string:
word *word wo*rd word* *word* words
Check for yourself
If all you wanted was something to c
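For reference, the pattern with its backslashes intact can also be checked with Python's re module:

```python
import re

# Lookbehind rejects a preceding word char or '*'; the lookahead requires
# a non-word char or end of line after the trailing '*'.
pattern = r"(?m)(?<!\w|\*)\w+?\*(?=[^\w]|$)"
text = "word *word wo*rd word* *word* words"

print(re.findall(pattern, text))  # ['word*']
```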
How about using zipWithIndex?
Zips this RDD with its element indices. The ordering is first based
on the partition index and then the ordering of items within each
partition. So the first item in the first partition gets index 0, and
the last item in the last partition receives the largest index. This
is similar to Scala's zipWithIndex but it uses Long instead of Int as
the index type. This me
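As a local mental model (plain Python, not Spark code), zipWithIndex produces (element, index) pairs, much like enumerate with the pair order swapped:

```python
data = ["a", "b", "c"]

# Spark: rdd.zipWithIndex() would yield ("a", 0), ("b", 1), ("c", 2)
pairs = [(x, i) for i, x in enumerate(data)]
print(pairs)  # [('a', 0), ('b', 1), ('c', 2)]
```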
I found following link. This seems to work great for me.
You can use a nested serializer to make this work.
class FSerializer(serializers.Serializer):
    value = serializers.Field(source="f_value")
    metric = serializers.Field(source="f_metric")

class MyModelSerializer(serializers.ModelSerializer):
    somefield = serializers.Field()
    f = FSerializer(source="*")
This should give you the nested output you are looking for. You can
find more inf
Have you thought about using a template system like Jinja2? I've used
it to store and use templates for a variety of situations (ranging
from web development to automatic code generation) and it works very
well.
You could have your SQL queries in template files, and load and fill
them as you wish.
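A minimal sketch of the idea, assuming Jinja2 is installed (the table and column names are made up; in practice you would load the template from a .sql file via an Environment with a FileSystemLoader, and keep untrusted values out of the template by using bound query parameters):

```python
from jinja2 import Template

# A SQL query kept as a template; render() fills in the placeholders.
sql_template = Template("SELECT {{ column }} FROM users WHERE id = {{ user_id }};")
query = sql_template.render(column="name", user_id=42)
print(query)  # SELECT name FROM users WHERE id = 42;
```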
Here is an undirected graph built from your dataframe. An edge (x,y) means
that there was some data line in which both x and y were mentioned.
For example, the last line "A1,B4,C2" added the edges (A1,B4), (B4,C2),
(A1,C2).
Now it's possible to sort A, B, C according to your wishes.
Finding the hierarchy with the least number of (extra) multi-parent
edges
We can bruteforce all arrangements (it's quite fast for N
Try using levels.
def plot_countour(x, y, z):
    # define grid.
    xi = np.linspace(-2.1, 2.1, 100)
    yi = np.linspace(-2.1, 2.1, 100)
    ## grid the data.
    zi = griddata((x, y), z, (xi[None,:], yi[:,None]), method='cubic')
    levels = [0.2, 0.4, 0.6, 0.8, 1.0]
    # contour the gridded data, plotting dots at the randomly spaced data points.
    CS = plt.contour(xi, yi, zi, len(levels), linewi
You'd have to make a function for that. Something like this should
suffice?
import time

epoch = lambda: int(time.time()*1000)
print epoch()
time.sleep(2)
print epoch()
You could also encase this into a class and make a @property out of
it, so that you can get a value without using brackets ()
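That class-plus-@property variant might look like this (the names are illustrative):

```python
import time

class Clock:
    @property
    def epoch_ms(self):
        # Milliseconds since the Unix epoch; no () needed at the call site.
        return int(time.time() * 1000)

clock = Clock()
print(clock.epoch_ms)
```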
If it were me doing this in SWIG I'd try to write the least amount of
weird extra code possible. So my initial approach would be to write
some Python code to handle the heterogeneous list and figure out which
add_TYPE to call for each entry in the list.
As an example:
%module test
%{
#include <iostream>
#include <limits>
%}
%include <std_string.i>
%inline %{
struct MyObj {
From your code you want the program to wait for job to finish and
return code, right? If so, the qsub sync option is likely what you
want:
If the two commands can be run together then just run them together,
but as you say you have to run them separately you could try using
Export-Clixml to save the object to a file then Import-Clixml to
reload it. Here's a working example:
#!/usr/bin/env python3
import base64, subprocess, sys
def powershell(cmd, input=None):
    cmd64 = base64.encodebytes(cmd.encode('utf-16-le')).decode('ascii').s
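For context, the truncated line above is building the base64/UTF-16LE form that powershell.exe's -EncodedCommand parameter expects. In isolation, that helper might look like:

```python
import base64

def encode_powershell_command(cmd):
    # -EncodedCommand wants the command text encoded as UTF-16LE,
    # then base64-encoded into ASCII.
    return base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")

encoded = encode_powershell_command("Get-Date | Export-Clixml out.xml")
print(encoded)
print(base64.b64decode(encoded).decode("utf-16-le"))  # round-trips to the original
```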
From the linked question, you can do something like this. Multiple
names can point to the same DataFrame, so this will just grab the
"first" one.
def first_name(obj):
    return [k for k in globals() if globals()[k] is obj and not k.startswith('_')][0]
In [24]: first_name(name_of_df)
Out[24]: 'name_of_df'
I don't know, but maybe the answer is this:
With the amazing help of @PadraicCunningham I managed to find the
solution. The problem turned out to be that heroku-buildpack-apt
installs things in a newly created folder /app/.apt/ which was not in
the PYTHONPATH.
So I added the relevant folder to my PYTHONPATH on heroku as
follows:
heroku config:add PYTHONPATH=/app/.apt/usr/lib/python2.
You need to change the job part a bit:
@shared_task
def job():
    chain = (task1.s() | task2.s() | task3.s())
    result = chain().get()
    return result
Since the execution of each task is dependent on its preceding task, you
don't gain anything by applying it asynchronously. You can start job
asynchronously, however.
Ok since no one has answered my question, I wrote this simple script
that I added in my path folder in utils.
So when I execute this in my project root folder, the VCS roots get
added in pycharm upon restart.
I think this is what you are looking for:
n1 = raw_input('Εισάγετε Λεφτά :') or 0
n2 = raw_input('Εισάγετε Πλυσίματα Μέσα - Έξω :') or 0
n3 = raw_input('Εισάγετε Πλυσίματα Μηχανών :') or 0
n4 = raw_input('Εισάγετε Πλυσίματα Εξωτερικά :') or 0
n5 = raw_input('Εισάγετε Τι Ποσό Πήρε ο Σπύρο
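The trick here is that the 'or' operator returns its right-hand operand when the left one is falsy, so an empty string (the user just pressing Enter) falls back to 0. In isolation:

```python
def read_number(raw):
    # "" is falsy, so (raw or 0) falls back to 0 on empty input;
    # int() then normalizes both branches to a number.
    return int(raw or 0)

print(read_number(""))    # 0
print(read_number("42"))  # 42
```

Note that in the snippet above a non-empty answer stays a string; wrapping the result in int() as here makes the values uniformly numeric.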
It is possible to indicate where to retrieve the egg from by including the
full path to it in the find-links parameter. The following example is
taken from the pip docs:
eggs = PILwoTk
find-links =
A contiguous array is just an array stored in an unbroken block of
memory: to access the next value in the array, we just move to the
next memory address.
Consider the 2D array arr = np.arange(12).reshape(3,4). It looks like
this:
[[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]]
In the computer's memory, the values of arr are stored like this:
0 1 2 3 4 5 6 7 8 9 10 11
This means arr is a C contiguous array because the rows are stored as
contiguous blocks of memory.
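A quick way to check this, assuming NumPy is available:

```python
import numpy as np

arr = np.arange(12).reshape(3, 4)
print(arr.flags['C_CONTIGUOUS'])    # True: rows sit one after another in memory
print(arr.strides)                  # (4 * itemsize, itemsize): one row step, one item step

# A transposed view walks the same memory out of row order,
# so it is not C contiguous:
print(arr.T.flags['C_CONTIGUOUS'])  # False
```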
According to the pycURL documentation, your POST data needs to be
key-value pairs and URL-encoded.
Finally I got the solution, and here is how I reached it.
The problem with a recursion depth error in Flask is that it's not easy
to find the root cause, so with a trial-and-error approach I tracked down
the tail of the problem.
The method call flow is like this:
dateformat filter > format_date() > to_user_timezone() >
get_timezone()
Now get timezone method is overridden here as:
def get_timezone():
"""
Retur
The issue was not having the right signature for one of the functions.
It was resolved by changing the argument passed to the MonkeyPatch
function to an empty dictionary {} instead of a 'None' value, which is
kind of specific to my code.
The reason I was initially hitting the issue was that the current
function's call (cli.port_register) was failing when the parameters
were passed to port_register.
The program is executed in the current working directory as reported
by os.getcwd(). For a command line program, it's typically the
directory you are in when you run the program. To run a command in the
same directory as your Python script, use the __file__ variable to
figure out where you are:
import os
import envoy
my_path = os.path.dirname(os.path.abspath(__file__))
envoy.run('./scripts.sh', c
You need to add newlines. Anyway, your method of making csvs (printing
the list and removing brackets from it) is not the best way to do it.
Try this instead:
csvList = '\n'.join(','.join(row) for row in csvList)
Or use the csv module:
import io, csv

dest = io.StringIO()
writer = csv.writer(dest)
for row in csvList:
    writer.writerow(row)
# Now dest is a file-like object containing your CSV data
The problem is that many email clients (including Gmail) send
non-ascii emails in base64. stdin on the other hand passes everything
into a string. If you parse that with Parser.parse(), it returns a
string type with base64 inside.
Instead the optional decode argument should be used on the
get_payload() method. When that is set, the method returns a bytes
type. After that you can use the builtin d
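A self-contained illustration: the base64 body only becomes real bytes once decode=True is passed. The message below is hand-built for the example; 'aMOpbGxv' is the base64 of 'héllo' in UTF-8:

```python
from email.parser import Parser

raw = (
    "Content-Type: text/plain; charset=utf-8\n"
    "Content-Transfer-Encoding: base64\n"
    "\n"
    "aMOpbGxv\n"
)

msg = Parser().parsestr(raw)
print(msg.get_payload())             # still the raw base64 string
body = msg.get_payload(decode=True)  # bytes, already un-base64'd
print(body.decode("utf-8"))          # héllo
```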
This gives you the ticks on the scale of the y-axis of the colorbar,
which has limits (0.0, 1.0)
cbar.ax.get_yticks()
This is what you need:
np.interp(cbar.ax.get_yticks(), cbar.ax.get_ylim(), cbar.get_clim())
The result is:
array([ 10., 20., 30., 40., 50., 60., 70., 80., 90.])
It only "works on the first string in the list" because you return
from inside the loop. Keep track of your result outside of the loop.
def test_find_anagrams():
    stringlist = ['aces', 'sidebar', 'adverb', 'fuels', 'hardset', 'praised']
    result = []
    for str1 in stringlist:
        result.extend(word for word in get_dictionary_word_list() if anagram(word, str1))
    return result
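The helpers the snippet relies on (anagram and get_dictionary_word_list) come from the question and aren't shown; a minimal anagram check, purely for illustration, could be:

```python
def anagram(a, b):
    # Two strings are anagrams when their sorted characters are equal.
    return sorted(a) == sorted(b)

print(anagram("aces", "case"))   # True
print(anagram("aces", "cases"))  # False
```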
You can't. A process can't read its own exit code. That would not
make much sense. But in order to force a process shutdown, the OS has to
send a signal to that process. As long as it is not SIGKILL and/or
SIGSTOP, you can intercept it via the signal library. SIGKILL and
SIGSTOP cannot be blocked; e.g. if someone does kill -9 then there's
nothing you can do.
To use signal library:
import signal
im
Neither method will lead to leaks.
The lifetime of Python objects is governed by reference counting; if
there are no references left, the object is removed. In all cases here
are the instances and classes in your Outer.AppendToMenu() method only
referenced by locals, and will not lead to leaks.
In both your cases, the counts for the class and instance are as
follows:
class _Inner creates the c
You could use Weasyprint. You could easily render directly.
You could do something like that:
html = HTML(string=htmlstring)
main_doc = html.render()
pdf = main_doc.write_pdf()
return HttpResponse(pdf, content_type='application/pdf')
To render your Django view to HTML, you could simply use the shortcut
render_to_string(self.template_name, context,
context_instance=RequestConte
You can approach like this,
for itm in db.collection.find():
print {itm.pop('b'): itm}
Here collection is the name of your collection in the database. If
you iterate over the pymongo cursor object you will get dict type
object which you can modify like normal python dict.
First off you aren't passing the form to the template in articles().
You need to have something along the lines of:
args['form'] = YourForm()
In your first view function. Also, in your create view you do this:
a.user = User.objects.get(username = request.user.username)
Which does an unnecessary database lookup of the user again, you can
just do this to be clearer and faster:
a.user = requ
ttk.Label(mainframe, text = notice).grid(column = 1, row = 6)
Changing the value of the string notice won't update the text in this
label. You need to use a StringVar for this.
notice = Tkinter.StringVar()
ttk.Label(mainframe, textvariable = notice).grid(column = 1, row = 6)
Then, instead of doing notice = "whatever" in your functions, do
notice.set("whatever").
Alternatively, don't have a
To many brackets and you do not need brackets in python.
Try changing:
if ((not(i == car) and not(i == guess1)):
to
if i != car and i != guess1:
Use a SAX parser such as xml.sax. This gives you callbacks as it scans
the XML file for each of the various xml tags or 'events' (ie opening
a tag, closing a tag, seeing an attribute, seeing some data, etc).
Keep track of whether you are in part of the XML file you do or do not
want to keep (or delete) as you get these callbacks. Stream the data
into a new file if you are in "keeping" mode, and do | http://www.w3hello.com/category/python/11169/1 | CC-MAIN-2018-17 | refinedweb | 2,278 | 64.3 |
Hi,
Yes, it can be done like this, but like I said, I don't view the Lua script part as a separate module (or the DLL if you reverse the roles of DLL and script). The script is really private to the DLL and both communicate through private functions and tables that are unreachable from outside of the module.
I had a similar problem in previous versions of LuaSocket. The implementation of socket.select was messy in the sense that it was composed of both C and Lua code, with Lua parts that used C parts and vice-versa. I don't know exactly what is your scenario, but I now tend to avoid this double-dependency. Having C code depend on Lua code tends to complicate the setup, although it might simplify the implementation itself.Why worry about someone being able to load an internal module? If someone does that, it should be their problem, shouldn't it? Can't you simply not document it? This should make it obvious that a module is internal. I know this is not a solution to your problem,
it's just sort of denying it exists. Sometimes this is the best solution, though. :)
It is not worth it to worry about paths. I would suggest you simply use the engine provided by require itself.I wouldn't have to worry if require simply passed the path as an argument. :-) Look at it as a win-win situation since both your approach and mine could be implemented easily. Moreover, the changes it would take to loader_C (or require) are minimal.
This is somewhat assymetric. One of the ideas of the packaging system, I think, is that you can place the same function in the package.preload table and everything should work fine. In that case, you would be in bad shape.
All you have to do is return _M in the end of your module definition (as I added above). In C, you only have to return 1 from your loader, because luaL_module already leaves the namespace table on the stack.Don't you agree that it would be much nicer if we only need to use "module" and not rely on return values? Then it is simple: "module" determines the package table, period. If a) modules are not placed in the globals automatically and b) return values of packages are ignored, there'd be no way around using "module" to properly setup a package.
I am not sure. The current scheme makes it work in 99% of the cases with no setup. With one extra line for the other 1% you can get it to work. Is this too bad?
In fact, inverting the order, in the case you choose to implement your modules by simply filling up a table and returning it, would still not work.But that is exactly what I *don't* want to do. I think the "module" approach is nice and easy. (If only it would would bind to the required name instead of the module name. Again, the changes to "module" and "require" to support this binding are minimal.) Here's another example of possibly weird behaviour. Suppose we implement two packages "a" and "b" both adding to module "a", but without "b" requiring "a". Then local b = require "b" local a = require "a" doesn't load package "a", while local a = require "a" local b = require "b" does. With a modified module function, both would work as expected. Or should we compose a list of additional coding rules like - When in doubt, end your package with "return _M" - Before doing "module(name)" where <name> is not the package name, you should also do "require(name)"
But here you are focusing on a cases that are not commonly needed, are somewhat pathological, and for which there are cookbook solutions. 1) If two modules are recursive, invoke module() on yourself before requiring() your mate. This is just like a forward declaration. 2) If you are invoking module with a name other than your own, BEWARE. This means you probably are extending someone else. In that case, require() whoever it is that you are extending (it only makes sense). Also, because you are not exporting where you should, make sure you return your namespace, i.e., the one you are borrowing from. Alternatively, store your namespace into the loaded table where it belongs. Finally, you might consider placing your extension module as a submodule of whoever it is that you are extending, just to make things obvious, regardless of whether you export symbols to your own namespace or to that of your master.
You try very hard! ;-)
I know. :) There has to be some inertia, otherwise this thing never converges... But if there is a solution to a problem that is simpler than the work around, I am all for it.
Seriously though, I think my small wish list does not conflict with the spirit of the require proposal (and would take only minimal changes): 1) "module" binds to the "require"-ed name, not to the module name.
And how is "module" supposed to know what was the required name? Maybe some magic environment trick...
3) Drop the support for package return values, they're just confusing
I don't like this idea. One might desire to write a package whose only exported symbol is a function. Returning that function works and is useful. []s, Diego. | http://lua-users.org/lists/lua-l/2005-08/msg00469.html | CC-MAIN-2019-35 | refinedweb | 909 | 73.17 |
Is there a way to export aggregate variables from batch dataflows? If not, is there a workaround to that.
You can export it by encapsulating it in the smallest table with 1 row and 1 column.
def get_agg(field):
return field
Apply a map operation on the column and use this UDF. Pass the aggregate variable eg. "^avg" as a parameter to this UDF.
This will generate another table with 1 row, 1 column and the aggregate variable value in it. You can then export it.
You can generate similar scenarios, if you had multiple aggregates to export.
Here is an alternate approach to export an aggregate variable in a batch dataflow. The aggregate variable still needs to be output in a table with 1 row and 1 column.
Assume the aggregate is called ^sum_retail_price and is a float datatype. | https://discourse.xcalar.com/t/white-check-mark-exporting-aggregate-variables-from-batch-dataflows/278 | CC-MAIN-2019-13 | refinedweb | 140 | 68.16 |
import "github.com/go-mysql/slowlog"
Package slowlog provides functions and data structures for working with the MySQL slow log.
aggregator.go class.go event.go metrics.go parser.go
const ( // MAX_EXAMPLE_BYTES defines the maximum Example.Query size. MAX_EXAMPLE_BYTES = 1024 * 10 )
var ( // ErrStarted is returned if Parser.Start is called more than once. ErrStarted = errors.New("parser is started") )
An Aggregator groups events by class ID. When there are no more events, a call to Finalize computes all metric statistics and returns a Result.
NewAggregator returns a new Aggregator.
func (a *Aggregator) AddEvent(event Event, id, fingerprint string)
AddEvent adds the event to the aggregator, automatically creating new classes as needed.
func (a *Aggregator) Finalize() Result
Finalize calculates all metric statistics and returns a Result. Call this function when done adding events to the aggregator.
BoolStats are boolean-based metrics like QC_Hit and Filesort.
type Class struct { Id string // 32-character hex checksum of fingerprint Fingerprint string // canonical form of query: values replaced with "?" Metrics Metrics // statistics for each metric, e.g. max Query_time TotalQueries uint // total number of queries in class UniqueQueries uint // unique number of queries in class Example *Example `json:",omitempty"` // sample query with max Query_time // contains filtered or unexported fields }
A Class represents all events with the same fingerprint and class ID. This is only enforced by convention, so be careful not to mix events from different classes.
NewClass returns a new Class for the class ID and fingerprint. If sample is true, the query with the greatest Query_time is saved.
AddEvent adds an event to the query class.
Finalize calculates all metric statistics. Call this function when done adding events to the class.
type Event struct { Offset uint64 // byte offset in file at which event starts Ts string // raw timestamp of event Admin bool // true if Query is admin command Query string // SQL query or admin command User string Host string Db string TimeMetrics map[string]float64 // *_time and *_wait metrics NumberMetrics map[string]uint64 // most metrics BoolMetrics map[string]bool // yes/no metrics RateType string // Percona Server rate limit type RateLimit uint // Percona Server rate limit value }
An Event is a query like "SELECT col FROM t WHERE id = 1", some metrics like Query_time (slow log) or SUM_TIMER_WAIT (Performance Schema), and other metadata like default database, timestamp, etc. Metrics and metadata are not guaranteed to be defined--and frequently they are not--but at minimum an event is expected to define the query and Query_time metric. Other metrics and metadata vary according to MySQL version, distro, and configuration.
NewEvent returns a new Event with initialized metric maps.
type Example struct { QueryTime float64 // Query_time Db string // Schema: <db> or USE <db> Query string // truncated to MAX_EXAMPLE_BYTES Ts string `json:",omitempty"` // in MySQL time zone }
A Example is a real query and its database, timestamp, and Query_time. If the query is larger than MAX_EXAMPLE_BYTES, it is truncated and "..." is appended.
FileParser represents a file-based Parser. This is the canonical Parser because the slow log is a file.
func NewFileParser(file *os.File) *FileParser
NewFileParser returns a new FileParser that reads from the open file. The file is not closed.
func (p *FileParser) Error() error
Error returns an error, if any, encountered while parsing the slow log.
func (p *FileParser) Events() <-chan Event
Events returns the channel to which events from the slow log are sent. The channel is closed when there are no more events. Events are not sent until Start is called.
func (p *FileParser) Start(opt Options) error
Start starts the parser. Events are sent to the unbuffered Events channel. Parsing stops on EOF, error, or call to Stop. The Events channel is closed when parsing stops.
func (p *FileParser) Stop()
Stop stops the parser before parsing the next event or while blocked on sending the current event to the event channel.
type Metrics struct { TimeMetrics map[string]*TimeStats `json:",omitempty"` NumberMetrics map[string]*NumberStats `json:",omitempty"` BoolMetrics map[string]*BoolStats `json:",omitempty"` }
Metrics encapsulate the metrics of an event like Query_time and Rows_sent.
NewMetrics returns a pointer to an initialized Metrics structure.
AddEvent saves all the metrics of the event.
Finalize calculates the statistics of the added metrics. Call this function when done adding events.
type NumberStats struct { Sum uint64 Min uint64 `json:",omitempty"` Avg uint64 `json:",omitempty"` Med uint64 `json:",omitempty"` // median P95 uint64 `json:",omitempty"` // 95th percentile Max uint64 `json:",omitempty"` // contains filtered or unexported fields }
NumberStats are integer-based metrics like Rows_sent and Merge_passes.
type Options struct { StartOffset uint64 // byte offset in file at which to start parsing FilterAdminCommand map[string]bool // admin commands to ignore }
Options encapsulate common options for making a new LogParser.
A Parser parses events from a slow log. The canonical Parser is FileParser because the slow log is a file. The caller receives events on the Events channel. This channel is closed when there are no more events. Any error during parsing is returned by Error.
type Result struct { Global *Class // all classes Class map[string]*Class // keyed on class ID RateLimit uint Error string }
A Result contains a global class and per-ID classes with finalized metric statistics. The classes are keyed on class ID.
type TimeStats struct { Sum float64 Min float64 `json:",omitempty"` Avg float64 `json:",omitempty"` Med float64 `json:",omitempty"` // median P95 float64 `json:",omitempty"` // 95th percentile Max float64 `json:",omitempty"` // contains filtered or unexported fields }
TimeStats are microsecond-based metrics like Query_time and Lock_time.
Package slowlog imports 11 packages (graph). Updated 2017-02-26. Refresh now. Tools for package owners. This is an inactive package (no imports and no commits in at least two years). | https://godoc.org/github.com/go-mysql/slowlog | CC-MAIN-2019-13 | refinedweb | 935 | 56.15 |
I.
Anther thing is web services project, where there may be different consumers, it is not really a front end.
a) Fix all the converted code to match the new standard
b) Fix all new classes to use the 1.1 scheme as they are created.
c) Live with inconsistancy in naming and namespacing.
None of these options seem particularly appealing. Is there a 4th option that I'm missing?
I think the 2.0 migration is rough. The new Web Application Project will make the job easier and keep code consistent, but that's no good if you've already converted. :(
I've come across this article as I've hit a snag caused by this new 'web site' vs 'web app' feature in 2.0, and it seems to me I may have found a case for going back to the namespace model. This is my first 2.0 project, though, and I may just be missing something in understanding how it's all supposed to work.
I have a few similar pages which use a single MasterPage. On the MasterPage I've defined a control for displaying messages, and each page can set the property (MyMasterPage)Master.UserMessage to have their message displayed. All groovy so far.
Now suppose I want to make the call to display the message a bit more complex. Maybe I want to trap exceptions bubbled up to the page, log them to the database and display a helpfull message to the user. Logically, for me, I would abstract this code into a base class from which each of the pages derives, so the page calls base.HandleException(ex) and the base class deals with it.
Now I hit the snag: the base class, which seems to need to live in ~/App_Code cannot see the MyMasterPage type. So I can't call MyMasterPage.UserMessage from BasePage because it doesn't know what MyMasterPage is.
Now if they were all in the same namespace (as in 1.1) I'm pretty sure this wouldn't be happening.
Maybe the intended solution is that I put my error logging code in the masterpage's codebehind file, but this doesn't feel right somehow. Not a very good reason I know, and I'll give it a try now...
I think that leaves us with an approach to encapsulate the reusable code ina separate project, most likely a class library, then make calls to that library within asp.net 2.0
But I run into this issue as I was trying some AJAX.Net 2.0 code...
I don't know what to do with that ...
:)
like
in my ascx page_load
if typeof(me.page) is myhostingpage then
' addhandler for page event to a method here
end if
with 2003 i did not have any problems, the pages in the project are accessible across other controls and pages.
but in 2005, I am not able to find myhostingpage at all in my ascx. Is this approach a right one, even if its not, there has to be a way to access my page properties in my user control. right?
How about defining a base class in App_Code and letting the user control interact with the page through the base class?
Alternatively, you could just have the user control raise an event for the page to handle, or some other approach where the page passes a delegate for the user control to fire.
<asp:content>
<asp:Wizard>
</asp:Wizard>
<HeaderTemplate>
<asp:WebPartZone>
<ZoneTemplate>
<asp:gridview />
</ZoneTemplate>
</asp:WebPartZone>
</HeaderTemplate>
<WizardSteps>
<%-- Method1 -->
<asp:WizardStep>
<asp:WebPartZone>
<ZoneTemplate>
<UC:usercontrol />
</ZoneTemplate>
</asp:WebPartZone>
</asp:WizardStep>
<%-- Method2 -->
<asp:WizardStep>
<asp:WebPartZone>
<ZoneTemplate>
<asp:panel>
-- All the controls in the user control go here directly
</asp:panel>
</ZoneTemplate>
</asp:WebPartZone>
</asp:WizardStep>
</WizardSteps>
</asp:content>
I have the above design in one of my pages.
I need to get/set values, properties for my controls inside the webpartzone.
Problem1: accessing the controls is pain in the neck.
problem2: if i try to use a generic findcontrol and pass the wizard.controlscollection, webpartzone does not have the same behaviour for HasControls property. so i have to go into the zone.webparts collection and iterate under each webpart to get to my control.
this will be a killer as far as performance is concerned.
I am trying to make full use of web parts. Is there a easier way to "find" controls in wizard and in webpartZones? (both method1 and method2)
IF there is a way, then i dont need to worry about catching events in my user control from my page. your time is much appreciated. thanks!
I don't think the role a class plays changes it's accessibility. That is, just because I make a licensing component protected, internal, or private doesn't make it "safe" from someone who shouldn't be calling it.
I am through my first project in asp.net 2.0..
I have the same problem..
First of all I didn't like the app_code this.
Again I have a simple helper class in the AppCode with a namespace..The pages are not able to find the classess.
Even if its one helper class ..We should build a separate Library Project..Is that the only way out?
If you don't like App_Code then a class library project will work. App_Code should work, too, if the classes are public. | http://odetocode.com/blogs/scott/archive/2006/02/08/namespaces-and-asp-net-2-0.aspx | CC-MAIN-2014-49 | refinedweb | 903 | 74.39 |
Xcore
Latest revision as of 05:04, 9 March 2018
Modeling for Programmers and Programming for Modelers
Xcore is an extended concrete syntax for Ecore that, in combination with Xbase, transforms it into a fully fledged programming language with high quality tools reminiscent of the Java Development Tools. You can use it not only to specify the structure of your model, but also the behavior of your operations and derived features as well as the conversion logic of your data types. It eliminates the dividing line between modeling and programming, combining the advantages of each. All this is supported for both generated and dynamic EMF models.
Getting Started
The first version of Xcore will be part of the Juno release train and is currently available in the Juno p2 repository. To get started, you'll need to install the hardware-appropriate platform version of Eclipse 4.2 or Eclipse 3.8. Once you have that installed and running, you're ready to install Xcore.
- Use "Help → Install New Software..." to bring up the "Install" wizard; the Juno repository should be available in the "Work with" dropdown.
- Select Juno and wait for the content of the wizard to populate.
- Either enter "Xcore" in the filter field or locate "Modeling → EMF - Eclipse Modeling Framework Xcore SDK" in the content and check mark it.
- Drive the wizard to completion.
Xcore and everything it needs will be automatically installed. Of course you'll need to restart Eclipse for the changes to properly take effect.
Please keep in mind that this is all work in progress, so there are bound to be bugs. For bleeding edge changes, you can use the Xtext nightly builds and EMF nightly builds; the install wizard supports drag and drop so you can drag the Xtext link first and then the EMF link before following the steps above to locate Xcore and install its latest nightly version. For "released" versions, look under EMF releases, typically using the p2-browser. If you have questions, the EMF newsgroup (eclipse.tools.emf) is the best place to ask them; please use "[xcore]" as the prefix for your subject line.
Creating an Xcore Project
Xcore can be used in any properly-configured Java project. There's a convenient wizard for creating an empty pre-configured project.
- Use "File → Project..." and enter "Xcore" in the filter field or locate "Xcore → Xcore Project".
- Use "Next" to advance the "New Project" wizard and enter the name of your project. It's best to use a qualified name that will be appropriate as a plug-in ID and as a prefix for your Java packages, i.e, org.example.library.
- Use "Finish" to complete the wizard.
In your workspace you'll see your new project.
It's a project with Java, PDE, and Xtext natures that contains the following:
- A "src" folder for your hand written Java code.
- A "src-gen" folder in which your model code will be generated by default.
- A MANIFEST.MF with prepopulated dependencies for EMF Ecore and Xtext Xbase.
- And an empty model folder.
Of course it's possible to start with any Java project and use the "Convert → Add Xtext Nature" or "Convert → Convert to Plug-in Projects" actions to introduce the missing natures, and to edit the MANIFEST.MF to add the missing dependencies.
Creating an Xcore model
An Xcore model is created in the "model" folder via its context menu using "New → File" to create an empty new file with extension ".xcore", e.g., "Company.xcore". This will open the Xcore-aware editor. It starts out with an error marker, because an empty file isn't a valid Xcore instance.
Specifying a Package
The editor supports syntax completion so if you enter Ctrl-Space, you'll see it fills in the "package" keyword. Next you need to enter a package name, e.g., org.example.company. If you save, you'll see the error markers go away. You've created an empty Xcore package. Files 'build.properties', 'plugin.properties' and 'plugin.xml' are automatically created at the top level of the project during the save. As you can see, like Java, an Xcore model starts with a package declaration but note that that there's no semicolon. Note too that nothing is generated in the "src-gen" folder yet; EMF doesn't generate anything for empty packages.
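For reference, the entire content of the file at this stage is just the one-line package declaration:

```xcore
package org.example.company
```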
Specifying a Class
Now you're ready to create something more meaningful. Add two blank lines and try hitting Ctrl-Space again to see what you're allowed to enter next.
We'll start by defining a class called "Library". After entering the "class" keyword and the name of the class, type a curly brace: the closing curly brace is automatically inserted. If you now save the editor, you'll see the following:
Noticed that the model code has been generated automatically.
Specifying an Attribute
Now you're ready to define the structure of the "Library" class. Within the curly braces, hit Ctrl-Space to see what you're allowed to enter next.
To specify an attribute, you need to specify the name of a data type followed by the name of the feature. So if you choose "String", enter "name", and save, you'll see the following:
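The class body now declares the attribute:

```xcore
class Library {
    String name
}
```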
Of course the corresponding feature accessor methods, i.e., getName() and setName(String), will immediately be generated in the "Library" interface. Note that while the Xcore source shows the use of "String", which in the completion proposal you saw earlier was listed as "java.lang.String", from the hover information you can see that the reference actually resolves to the "EString" data type from the built-in "Ecore" package. Xcore provides familiar aliases for all of the Ecore data types that correspond to Java built-in and mathematical types. If your attribute is a reserved word such as "type" or "id", you will need to escape it with a "^". For example: String ^type.
class Library {
    int ^id
    String ^type
}
If you need to specify a default value for an attribute, you can use the following syntax:
class Library {
    String name = "Default Name"
    boolean stateOwned = "true"
}
In the above example, the string after the equals sign will be used to populate the "Default Value Literal" property of the Ecore feature. Thus, the generated Java implementation code will contain:
public class LibraryImpl ... {
    protected static final String NAME_EDEFAULT = "Default Name";
    protected static final boolean STATE_OWNED_EDEFAULT = true;
    ...
}
Specifying a Containment Reference
We'll want libraries to contain books and authors, so let's add two more empty classes, "Book" and "Writer" so that we can specify references to them in the library. If we try the same approach as defining the name feature, you'll end up with the following:
That's because Xcore interprets this as an attribute and the type of an attribute must be a data type, not a class. If you hover over the error indicator, you'll see how to fix the problem.
In this case, we want to define a containment reference, so choose that option. Notice that if you select the reference to "Book" you can use "F3" to navigate to the definition for the "Book" class. Of course we wanted to define this to be a multi-valued reference, but here we have specified a single-valued one. The multiplicity of a feature is specified with bounds in square brackets, where "[]" is used as a short-hand for "[0..*]" as follows:
Let's define another multi-valued containment reference called "authors" and save the result, which will look as follows:
Again, the expected results are immediately generated. Notice that the outline view uses icons are much like the GenModel's icons, though with a purplish cast rather than a bluish one.
Specifying a Container Reference
If you wanted a convenient API for navigating from a "Book" or "Writer" to its containing "Library" you'd specify container reference with an opposite like this:
Notice that completion support is available for specifying the opposite. When you select a choice, the opposite is automatically updated to refer back.
Let's add a container feature for "Writer" as well, and let's define "title" and "pages" features for "Book" and a "name" feature for "Writer" to produce the following:
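A sketch of the two classes at this stage (the "container" keyword declares the container reference, and "opposite" pairs it with the containing feature):

```xcore
class Book {
    String title
    int pages
    container Library library opposite books
}

class Writer {
    String name
    container Library library opposite authors
}
```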
Note how we've made use of "int" as an alias for "EInt" to define the type for "pages".
Specifying a Cross Reference
The most important thing left to complete the picture is specifying the bidirectional relationship between "Books" and "Writers". Specifying a cross reference is done much like we did for containment and container references, but using the "refers" keyword. We start by specifying one of the two references, i.e, a "Book" has "authors", then the other, i.e., a "Writer" has "books", at which point we can also specify the opposite.
Notice that opposite completion is supported here as well and that upon completion, both opposites are properly paired.
Of course "F3" navigation is supported for the reference to an opposite.
References can be declared with the "local" modifier keyword, which is equivalent to setting the Ecore "Resolve Proxies" attribute to false.
Specifying an Enumeration
Supposed we wanted to categorize the kinds of books we have in the library. We could specify an enumeration named "BookCategory" for that and use it to specify a feature named "bookCategory" in "Book" as follows:
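A minimal sketch of the enumeration and its use as an attribute type; the explicit literal and integer values are optional and can be added as shown in the variants that follow:

```xcore
enum BookCategory {
    Mystery
    ScienceFiction
    Biography
}

class Book {
    // ... other features ...
    BookCategory bookCategory
}
```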
If you want to specify the integer values for the enum:
enum BookCategory {
    Mystery = 0
    ScienceFiction = 1
    Biography = 2
}
If you want to specify the literal values for the enum:
enum BookCategory {
    Mystery as "M"
    ScienceFiction as "S"
    Biography as "B"
}
If you want to specify both the integer value and the literal values for the enum:
enum BookCategory {
    Mystery as "M" = 0
    ScienceFiction as "S" = 1
    Biography as "B" = 2
}
Specifying a Data Type
Suppose we wanted to specify the copyright date of a book. Of course we could use "EDate" from Ecore, but its serialization format is more like date and time, so that might not exactly fit our needs. Instead we could define our own "Date" data type to wrap "java.util.Date" as follows:
A data type acts as a wrapper for an existing Java type, so the completion proposal supports choosing a Java class that matches the name we've started typing. Upon selection, an import is added, so we can use the short form of the name.
Because classifier names and Java type names can never be used interchangeably in any Xcore scope, it's not a conflict to import the Java name in a scope that also defines a classifier with the same name.
So far everything we've shown are things you could do with Ecore directly, so we've only seen how Xcore provides a concise textual syntax for Ecore. To complete the support for our own "Date" type, it would normally be necessary to modify the generated "LibraryFactoryImpl" class' methods that implement string conversion for this data type. With Xcore, we can specify this logic directly in the model. Note that if we look at the proposals for what may follow the type specification, we can see that the "create" and "convert" keywords are expected.
So we can choose to start specifying the "create" logic for creating an instance of the data type from a "String" value, which is specified within the curly braces of a block. We can use Java's "java.text.SimpleDateFormat" class as follows:
Notice that completion proposals understand what's on the classpath so we can use that for calling the constructor. To refer to the instance of the data type within the body we use "it", which acts much like the implicit "this" variable in Java. We also need to consider that "parse" can throw a "ParseException" that we need to handle properly. In addition, the value of the data type might be null, so best we guard for that case too. And finally, we can use a similar approach to specify the "convert" logic to produce the following complete result:
Notice we've also used the new data type to define a "copyright" features in the "Book" class and that all the Java classes we've used were automatically imported by completion. The notation for expressing behavior in Xcore uses the model defined by Xbase. It's very similar to Java, but is expression oriented, so even things that are statements in Java, return a value in Xcore/Xbase. That's why you don't see a "return" statement.
Specifying an Operation
Suppose you wanted convenience methods on Library, e.g., an operation to retrieve a "Book" give its title. We could specify that as follows:
Again, the syntax is very Java-like and of course completion proposals are aware of the Java APIs implied by the Xcore model definition so feature names such as "books" are in the proposals. Here's how we would complete the definition:
We return "null" if we reach the end of the loop without a match. That's all there is to it. If you look at the generated implementation class for "Library" you'll see it compiles to the following Java code:
Notice that the "==" comparison for title is compiled to a null safe "Object.equals" test. Notice also that all the things that look like simple field accesses in the Xcore code actually compile down to proper calls to the generated API accessor methods. One of the nice things about Xcore is that you get the advantages of writing what look like simple Java classes with simple fields declarations, including the ability to write a notation as if you had simple fields, but you end up with properly defined APIs and proper uses of them.
To return Lists from operations, don't use the Java syntax but XCore syntax, e.g. Book[] getBooks(String filter) instead of (E)List<Book> getBooks().
To have your operation return any Java type, define it as a Data Type first (see previous chapter).
Specifying a Derived Feature
With Xcore it's even possible to define the behavior of a derived feature. You only need to mark the feature as "derived" and, using the "get" keyword, define within a block how the value is completed. For example, you could specify a derived attribute to compute the last name of the "Writer" as follows:
Implementing an Interface
If your class needs to implement an interface such as java.lang.Iterable, you wrap it just as you would a type. Example:
type Iterator wraps Iterator<EObject> interface Iterable wraps java.lang.Iterable<EObject> {}
class MyClass extends Iterable { op Iterator iterator() { return new SpecialIterator(); } }
Specifying an Annotation
Ecore supports general-purpose annotations comprising a source string (typically a URI) for specifying identification and a map of string-based key-value pairs for specifying the details. Rather than requiring the typically large source URI to be repeated on each use, Xcore provide support for specifying a simple alias for it as follows:
We can then use it as follows:
Annotations with the GenModel's nsURI have specialized support in Xcore. Any key that matches the name of a feature in the GenModel, will be used to populate that feature. If you have a look at your generated model after saving the above, you will notice that the base class of each modeled class's implementation class has changed to use the one we've just specified.
Note that even the annotation definition itself is not necessary for this case because this "alias" is defined in "xcore.lang" and is therefore visible everywhere by default:
Creating an ESON Textual Instance
The ESON EMF Simple Object Notation can be used to create instances in a textual representation.
Creating a Dynamic Instance
Not only does Xcore generate you model code as you'd expect, everything we've specified works in dynamic models as well. Let's create a dynamic instance of "Library" by selecting it and bringing up the context menu for it as follows:
This brings up the following wizard, which lets you control the name and location of the new instance:
The defaults should be fine for now do drive the wizard to completion. This will open the "Sample Reflective Editor" as follows:
Expanding the first resource and double clicking on the "Library" instance will bring up the properties view. Notice that the "Library.xcore" resource is loaded in order to load the dynamic Library instance. If you expand it, you'll notice it contains the Xcore model, the GenModel, the Ecore model, and a bunch of JVM model instances. The GenModel and Ecore model are derived from the details in Xcore model. The various JVM model instances in turn are derived from the GenModel, i.e, all the information about the Java artifacts that are generated by the GenModel. Additional resources contain things that are referenced directly and indirectly from our model. Let's make the model more complete by creating a "Book".
We can do the same thing to create a "Writer". Let's have a look at the properties for a "Writer". We can given the "Writer" a "Name", say "Arthur C Clark" using the properties view.
Notice that the "Last Name" updates correctly whenever we change the full "Name".
Let's populate some of the information about the "Book". We'll call it "2001: A Space Odyssey" and set the copyright date to September 1st, 1968.
Notice that as we start entering a date, we get feedback about parse failures demonstrating that the data type conversion logic we implemented is hard at work.
Finally, we can have a look at how the logic for finding a book from its title is working in the "Library" instance. Let's select it and use "Windows → Show View → Other..." and locate "Eclipse Modeling Framework → Operation Invocation View". This view responds to the selection in the active editor and provides a drop-down for choosing which operation to invoke. If the operation has arguments, it provides properties for setting those values. And of course there is an "Invoke" button for actually invoking the operation. We'd can use it as follows:
After invoking the operation, the result will be selected in the editor, so when the right book is selected, we can see that this too is working well.
Configuring GenModel Properties
The Xcore editor directly supports the same Properties view as you're familiar with from the Generator. Use "Windows → Show View → Other..." followed by "General → Properties" to bring up the Properties view. It responds to selection it the editor and the Outline, so selecting the package produces the following:
Note that the properties for the GenModel and the GenPackage are merged into a single set of properties.
Changing the values in the Properties view will modify the source with the corresponding @GenModel annotations.
If you save the Xcore resource now, you'll see that the "org.example.library.edit" project is created and the item providers are generated there.
Converting a GenModel to an Xcore Model
To make it easy to migrate existing artifacts to Xcore, we've added migration support. It is integrated with the GenModel's exporter framework, so from the context menu of the Generator editor opened for a "*.genmodel" resource, you can invoke "Export Model...". Suppose we did that for the XML Schema-based library model.
That will bring up the "Export EMF Model" wizard from which you can choose Xcore as the target.
The next page allows you to choose where the new Xcore resources will be saved.
The final page allows you to choose which packages to export and the name of the Xcore resource in which to save them.
Completing that page produces the following resource:
Notice all the annotations that capture details that would otherwise be missing:
- The "nsURI" of the model, which, if unspecified by an "Ecore" annotation, is just the fully qualified package name.
- The "ExtendedMetaData" annotations which record the mapping onto XML Schema.
Notice too in the import directives that keywords can be used as identifiers when escaped with a "^".
Converting an Xcore Model to a GenModel
If for some reason you need the legacy format, e.g., to use a tool which expects that format, you can convert your Xcore model to a GenModel using the importer framework. From the context menu for the "*.xcore" resource, you can create a new "*.genmodel" resource as follows:
This brings up the following wizard in which we can choose to create a new EMF Generator model:
Proceeding to the next page, where we can choose the location and name of the new "*.genmodel" resource.
Proceeding from there, we can choose which importer to use.
After that, we can choose which "*.xcore" resource to import. Note that one is already suggested because we have it selected in the package explore. Also note that you must hit the "Load" button in order to proceed from this page
That brings us to the final page where we can choose which packages to import and which to reuse from existing sources.
Given that the Xcore resource does physically contain a GenModel and an Ecore model, it should be possible in the future for any tool to consume Xcore resources directly. | http://wiki.eclipse.org/index.php?title=Xcore&diff=423583&oldid=323066 | CC-MAIN-2019-47 | refinedweb | 3,665 | 61.46 |
How to call a remote interface in ear from stand-alone sar (MBean) ?busters Nov 15, 2011 8:22 AM
Hi,
hello everyone, I am new here and already have a question to this annoying circumstance. I thought it's a simple task, but it already took me days by now.
I have a complete and fine working ear (including jar, war, all you need for this).
On the other hand I made a very simple SAR (implementing an interface ending to "MBean", start(), stop(), etc.) The jboss-service.xml is located in the META-INF Folder.
Both files are deployed without any errors, when I put them into the deploy folder, and I can find my MBean (from SAR) in JMX Console.
The MBean contains a test method for calling a remote interface provided by the ear (in the including jar).
I tried to use a JNDI lookup to get access to the interface, the way i found in several postings and turorials. I copied the string for the lookup from the global JNDI namespace, and tried variations of this but it never ever worked.
Always get something like this:
<javax.naming.NamingException: Could not dereference object [Root exception is java.lang.RuntimeException: Can not find interface declared by Proxy in our CL + BaseClassLoader...>
A problem I see, I am not sure about the correct <depends> tag in the jboss-service.mxl. At the moment it looks like this: "<depends>jboss.j2ee:ear=PGN_AS_ear.ear,jar=PGN_AS.jar,name=ApplicationBean,service=EJB3</depends>" That's the String I can find via the JMX Console.
For the JDNI lookup I use "PGN_AS_ear/ApplicationBean/remote-....pgn.as.business.application.facade.ApplicationRI" (global JNDI namespace).
Note: When I do this lookup from a simple "main" class it is all working fine. I even just need a more simple way for this:
ApplicationRI appBeanRI = (ApplicationRI) ctx.lookup("ApplicationBean");
I don't know what to try else. Is this even the right way to create such a connection ?
Things I am not sure about:
- classpath codebase tag for jboss-service.xml
- really correct string for the lookup
- correct <depends> tag in jboss-service.xml
Thank in advance
edit: I am using JBoss 5.1 in the standard server configuration
1. Re: How to call a remote interface in ear from stand-alone sar (MBean) ?busters Nov 15, 2011 9:46 AM (in response to busters)
Unbelievable...a team-mate stumbled over certain words in a forum posting. Using process of elemination and comparing several configuration files we really could identify the "mistake".
Just for your information...
If you are not using the default server configuration, make sure to set the flag 'isolated' to false in bean tag for "EARClassLoaderDeployer" located in the file: ".../deployers/ear-deployer-jboss-beans.xml"
...
...
<bean name="EARClassLoaderDeployer" class="org.jboss.deployment.EarClassLoaderDeployer">
<property name="isolated">false</property>
</bean>
...
I am very happy now | https://community.jboss.org/thread/174868?tstart=0 | CC-MAIN-2014-15 | refinedweb | 484 | 59.19 |
This is the third post in a series covering Yelp's real-time streaming data infrastructure. Our series will explore in-depth how we stream MySQL updates in real-time with an exactly-once guarantee, how we automatically track & migrate schemas, how we process and transform streams, and finally how we connect all of this into datastores like Redshift and Salesforce.
When you have a system that streams billions of messages a day, in real time, from MySQL into Kafka, how do you effectively manage all of the schemas stored across all the databases? When you are connecting hundreds of services, you will encounter thousands of different schemas, and manually managing them is unsustainable. An automated solution became necessary to handle schema changes from upstream data sources to all downstream consumers. Confluent's Schema Registry and Kafka Connect are excellent choices to begin with, except they did not exist when we started building the Yelp Data Pipeline. This is how the Schematizer was born.
Schematizer… who?
One of the key design decisions of Yelp’s Data Pipeline is schematizing all data. In other words, the pipeline forces all the data flowing through it to conform to predefined schemas, instead of existing in a free form. Standardizing the data format is crucial to the Yelp Data Pipeline because we want the data consumers to have an expectation about what type of data they are getting, as well as being able to avoid immediate impact when the upstream data producers decide to change the data they publish. Having a uniform schema representation also gives the Data Pipeline a really easy way to integrate and support various systems that use different data formats.
The Schematizer is a schema store service that tracks and manages all the schemas used in the Data Pipeline and provides features like automatic documentation support. We use Apache Avro to represent our schemas. Avro has several very attractive features we need in the Data Pipeline, particularly schema evolution, which is one of the key ingredients that make it possible to decouple data producers from data consumers. Each message that flows through the Data Pipeline is serialized with an Avro schema. To reduce the message size, rather than embedding the entire schema into the message, the message only contains a schema ID. Data consumers can then retrieve the schema from the Schematizer and deserialize the messages at run time. The Schematizer is the single source of truth for all the predefined schemas.
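The ID-based envelope can be sketched in a few lines (an illustration only; the Data Pipeline's real wire format differs):

```python
import struct

# Toy stand-in for the Schematizer: schema IDs map to registered schemas.
SCHEMA_REGISTRY = {7: '{"type": "record", "name": "Business", "fields": []}'}

def pack_message(schema_id, payload):
    # Prefix the Avro-encoded payload with a 4-byte schema ID instead of
    # embedding the whole schema in every message.
    return struct.pack(">I", schema_id) + payload

def unpack_message(message):
    # Split a message back into its schema ID and payload; a consumer then
    # fetches (and caches) the schema from the Schematizer to deserialize.
    (schema_id,) = struct.unpack(">I", message[:4])
    return schema_id, message[4:]

envelope = pack_message(7, b"avro-encoded-bytes")
schema_id, payload = unpack_message(envelope)
writer_schema = SCHEMA_REGISTRY[schema_id]  # one lookup per schema, not per message
```

Because the ID is a fixed-size prefix, the per-message overhead stays constant no matter how large the schema grows.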
Look at all these schemas. We can organize them in various ways.
The Schematizer associates and organizes schemas in two ways: one from the data producer’s perspective and the other from the data consumer’s perspective.
The first method groups the schemas based on the data origin. Each group is defined by a namespace and a source. The data producers must specify a namespace and a source when they register the schemas with the Schematizer. For instance, a service that wishes to publish its database data into the Data Pipeline can choose its service name as the namespace, and each table as a source.
Group schemas based on namespace and source
The second method is based on the data destination. Each data destination, such as a Redshift cluster or a MySQL database, is a data target, and can have one or more data origins. Each data origin consists of one or more schemas, that is, one or more namespace and source pairs as defined in the first method.
Group schemas based on data origins of single data target
These two different approaches allow us to search and query related schemas based on different needs. For instance, an application may want to know all the topics to which it is publishing, while another service may want to know the sources of all the data in its Redshift cluster.
Let’s register schemas, shall we?
The Data Pipeline requires all the data to be schematized and serialized with predefined Avro schemas. Therefore, when a data producer would like to publish data into the Data Pipeline, the first thing the producer does is register the schema with the Schematizer. The most common way to do this is to register an Avro schema directly.
For the data producers that do not have or cannot create Avro schemas, schema converters can be easily added into the Schematizer to convert a non-Avro schema to an Avro schema. A good example is the MySQLStreamer, a service that pumps data from MySQL databases into the Data Pipeline and only knows about MySQL table schemas. The Schematizer can take a MySQL table schema definition and create the corresponding Avro schema. The data producer must also re-register the new schema whenever there is a schema change.
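A schema converter of that kind can be sketched as a simple type mapping (highly simplified and illustrative — not the MySQLStreamer's actual logic, which handles many more types plus defaults and keys):

```python
# Illustrative type map; a real converter covers far more MySQL types.
MYSQL_TO_AVRO = {"int": "int", "bigint": "long", "varchar": "string", "double": "double"}

def mysql_table_to_avro(table_name, columns):
    # columns is a list of (name, mysql_type, nullable) tuples.
    fields = []
    for name, mysql_type, nullable in columns:
        avro_type = MYSQL_TO_AVRO[mysql_type]
        if nullable:
            # Nullable columns become Avro unions with null as the default.
            fields.append({"name": name, "type": ["null", avro_type], "default": None})
        else:
            fields.append({"name": name, "type": avro_type})
    return {"type": "record", "name": table_name, "fields": fields}

schema = mysql_table_to_avro(
    "business", [("id", "int", False), ("name", "varchar", True)]
)
```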
OMG, the upstream schema has changed! Will my service break?
A common pain point that every data pipeline must address is how to deal with upstream schema changes. Oftentimes such an event requires a lot of communication and coordination between upstream data producers and downstream data consumers. Yelp is not immune to this problem. We have batch jobs and systems that ingest data coming from other batch jobs and systems. Every upstream data change has been painful because it may cause the downstream batch jobs or systems to crash, or necessitate a backfill. The entire process is pretty labor-intensive.
We tackle this problem with schema compatibility. During schema registration, the Schematizer determines which topic should be assigned to the new schema based on schema compatibility. Only compatible schemas can use the same topic. When an incompatible schema is registered, the Schematizer creates a new topic in the same namespace and source for the new schema. How does the Schematizer determine the compatibility? Avro resolution rules. The resolution rules ensure that, in the same topic, the messages serialized with later versions of the schemas can always be deserialized with older versions of the schemas, and vice versa.
Different topics are assigned to incompatible schemas
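A heavily simplified, field-level version of the Avro resolution rules might look like this (a toy checker for flat record schemas; the real rules also handle type promotion, aliases, and nested types):

```python
def can_read(reader_fields, writer_fields):
    # Toy Avro-style resolution: every reader field must either appear in
    # the writer schema with the same type, or carry a default value.
    writer_by_name = {f["name"]: f for f in writer_fields}
    for field in reader_fields:
        match = writer_by_name.get(field["name"])
        if match is None:
            if "default" not in field:  # unknown field, no default: give up
                return False
        elif match["type"] != field["type"]:  # toy rule: no type promotion
            return False
    return True

def is_compatible(a, b):
    # Sharing a topic requires resolution to work in both directions, so
    # old and new readers can each handle the other's messages.
    return can_read(a, b) and can_read(b, a)

v1 = [{"name": "id", "type": "int"}]
v2 = v1 + [{"name": "rating", "type": "int", "default": 0}]  # compatible
v3 = [{"name": "id", "type": "string"}]                      # int -> string: incompatible
```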
Right now, Yelp's main database data flows through the Data Pipeline via the MySQLStreamer. Let's say that at some point we decide to add a column with a default value to the Business table. The MySQLStreamer will re-register the updated Business table schema with the Schematizer. Since such a schema change is a compatible change based on the resolution rules, the Schematizer will create a new Avro schema and assign the latest existing topic of the same namespace and source to it. Later, if someone decides to change one column type of the Business table from int to varchar, this will cause an incompatible schema change, and the Schematizer will create a new topic for the updated Business table schema.
The guarantee of schema compatibility within each topic ensures that when the upstream schema changes, the downstream data consumers can continue to consume data from the same topic using their current schema, without worrying that the change may break the downstream systems. They also have the flexibility to transition to the newer topics on their own timeline. This gives the pipeline further automation and reduces the need for human intervention during schema change events.
Besides resolution rules, we also build custom rules into the Schematizer to support some Data Pipeline features. The primary key fields of a schema are used for log compaction in the Data Pipeline. Because the key for log compaction must remain the same for a single topic, any change to the primary key fields is considered an incompatible change, and the Schematizer will create a new topic for the new schema. Also, when a non-PII (Personally Identifiable Information) schema starts to include a PII field, that change qualifies as an incompatible change as well. The PII data and non-PII data will be stored in separate topics, which simplifies the security implementation for the PII data and prevents downstream consumers from accidentally accessing data they do not have permission to access.
Logic flow to decide whether a new topic is required
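The flow pictured above can be condensed into a few lines (a sketch with illustrative field names; the is_compatible argument stands in for the full resolution rules):

```python
def needs_new_topic(new, latest, is_compatible):
    # `new` and `latest` are dicts describing registered schemas; `latest`
    # is None when no schema exists yet for this namespace/source.
    if latest is None:
        return True                                   # first registration
    if new["pkey"] != latest["pkey"]:
        return True                                   # log-compaction key changed
    if new["contains_pii"] != latest["contains_pii"]:
        return True                                   # PII isolation
    return not is_compatible(new["avro"], latest["avro"])

latest = {"pkey": ["id"], "contains_pii": False, "avro": "v1"}
evolved = {"pkey": ["id"], "contains_pii": False, "avro": "v2"}
now_pii = {"pkey": ["id"], "contains_pii": True, "avro": "v2"}
```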
One thing worth noting is that the schema registration operation is idempotent. If identical schema registration calls are made multiple times, only the first one creates a new schema; the subsequent calls simply return the registered schema. This also provides a simpler way for an application or a service to initially set up its Avro schemas. Most applications and services have Avro schemas defined in files or in code, but they do not hardcode schema IDs, since schema IDs depend on the Schematizer. Instead of first querying the Schematizer to get the schema information back and registering it if it doesn't exist yet, the application or the service can use the schema registration endpoint to achieve these two operations at the same time.
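Idempotency of this kind can be sketched by keying registrations on a canonical form of the schema (a toy in-memory registry; the Schematizer's actual API and storage differ):

```python
import json

class ToyRegistry:
    # Registering an identical schema twice returns the same schema ID.
    def __init__(self):
        self._ids = {}
        self._next_id = 1

    def register(self, namespace, source, schema):
        # Canonicalize so that key ordering in the JSON body doesn't matter.
        key = (namespace, source, json.dumps(schema, sort_keys=True))
        if key not in self._ids:
            self._ids[key] = self._next_id
            self._next_id += 1
        return self._ids[key]

registry = ToyRegistry()
first = registry.register("yelp_biz", "business", {"type": "record", "name": "Business"})
again = registry.register("yelp_biz", "business", {"name": "Business", "type": "record"})
```

Since `first == again`, a service can simply re-run its registration calls on every startup without creating duplicates.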
Streamline all the way.
To fully streamline the pipeline for schema change events, the Schematizer can further generate the schema migration plan for the downstream systems to be applied based on the existing schema and the new schema. Currently the Schematizer is only able to create the schema migration plan for Redshift tables. A downstream system that loads data into a Redshift cluster from the Data Pipeline can simply query and apply the schema migration plan when a schema change event occurs, and then automatically pick up the new table schema without any manual process. This feature is designed to be easily extensible, and each schema migration plan generator is an exchangeable component, so later we can add more generators to support different schema types, or switch to the ones that use more sophisticated algorithms to generate the migration plans.
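Such a generator can be sketched as a diff over column lists (a simplified illustration, not the Schematizer's actual output format, which also covers type changes and ordering):

```python
def redshift_migration_plan(table, old_columns, new_columns):
    # Columns are (name, sql_type) pairs; this sketch only handles
    # additions and drops.
    old_names = {name for name, _ in old_columns}
    new_names = {name for name, _ in new_columns}
    plan = []
    for name, sql_type in new_columns:
        if name not in old_names:
            plan.append("ALTER TABLE %s ADD COLUMN %s %s;" % (table, name, sql_type))
    for name, _ in old_columns:
        if name not in new_names:
            plan.append("ALTER TABLE %s DROP COLUMN %s;" % (table, name))
    return plan

plan = redshift_migration_plan(
    "business",
    [("id", "INTEGER"), ("name", "VARCHAR(64)")],
    [("id", "INTEGER"), ("name", "VARCHAR(64)"), ("rating", "SMALLINT")],
)
```

A downstream loader would apply the returned statements before resuming ingestion under the new schema.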
Who’s the data producer? Who consumes these data? The Schematizer knows it all.
In addition to registered schemas, the Schematizer also tracks the data producers and data consumers in the Data Pipeline, including which team and which service is responsible for producing or consuming the data, and how often they expect to publish the data. We use this data to effectively contact and communicate with the corresponding teams when events that require human intervention occur. This information also allows us to monitor and decide which schemas and topics may be out of service and can be deprecated or removed. As a result, it simplifies the compatibility validation during schema registration. The Schematizer can now skip deprecated schemas and check the schema compatibility only against active ones instead of every single schema for the same topic.
The data producers and consumers are required to provide this information when they start up. Initially, we planned to only store this information in the Schematizer. Since this data is also very valuable for exploratory analysis and alerting, we instead decided to write the data into separate Kafka topics outside the Data Pipeline. The data can then be ingested into Redshift and Splunk, as well as loaded into the Schematizer and displayed through the front-end web interface. We chose to use the async Kafka producer, a non-blocking Kafka producer developed at Yelp that writes data through Clog, because it will not interfere with the normal producer publishing messages. In addition, it avoids the circular dependency in which the normal producer tries to register itself by using another copy of the same producer, which in turn also tries to register itself.
Wait, which Kafka topic should I use? The Schematizer takes care of it for you.
Unlike regular Kafka producers, the data producers of the Data Pipeline do not need to know which Kafka topics they should publish the data to in advance. Since the Schematizer dictates the topic which each registered schema uses, the producers only need to tell the pipeline the schema they use to serialize the messages. The pipeline then asks the Schematizer for the topic information and publishes the messages to the correct topic. Abstracting away the topic awareness makes the interface simpler and easier to use.
The same mechanism exists for data consumers. Although they may define a specific Kafka topic to consume from, the more common use-case is to allow the Schematizer to provide the correct topics to consume from based on the group the data consumer is interested in. We introduced various grouping mechanisms in the Schematizer earlier. The data consumer can specify either a namespace and a source, or a data target, and the Schematizer will figure out the right topics in that group. This is especially useful when the data consumer is interested in a group of topics that may change over time due to incompatible schema changes. It relieves the data consumers from the burden of keeping track of every single topic in the group.
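The lookups a consumer relies on can be sketched like this (illustrative topic names and data model; real topic metadata is richer):

```python
# Toy view of the Schematizer's bookkeeping. Each (namespace, source)
# pair accumulates a new topic whenever an incompatible schema arrives.
TOPICS = {
    ("yelp_biz", "business"): ["yelp_biz.business.v1", "yelp_biz.business.v2"],
    ("yelp_biz", "review"): ["yelp_biz.review.v1"],
}
DATA_TARGETS = {"biz_redshift": [("yelp_biz", "business"), ("yelp_biz", "review")]}

def topics_for_source(namespace, source):
    # Everything a consumer of one data origin should subscribe to.
    return list(TOPICS[(namespace, source)])

def topics_for_data_target(target):
    # Union of topics for every data origin feeding one destination.
    result = []
    for origin in DATA_TARGETS[target]:
        result.extend(TOPICS[origin])
    return result
```

Because the consumer asks by group rather than by topic name, newly created topics show up in the answer without any consumer-side change.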
Schemas are good. Documentation is even better!
Schemas standardize the data format, but they may not provide enough information for people who want to understand the meaning of the data. To close that gap, we expose documentation for every schema through Watson, our front-end web interface.
Watson provides valuable information about the state of the Data Pipeline: existing namespaces, sources, and the Avro schema information within those sources. Most importantly, Watson provides an easy interface to view documentation on every source and schema the Schematizer is aware of.
Those docs aren’t going to write themselves.
The majority of the data flowing through the Data Pipeline right now comes from databases. To document the sources and the schemas of these data, we leverage SQLAlchemy models. At Yelp, SQLAlchemy is used to describe all models in our databases. Besides the docstring, SQLAlchemy also allows users to include additional information for the columns of the model. Therefore, it becomes a natural location for us to put documentation on the purpose and usage of both the model and its fields.
A new ownership field is also introduced to the SQLAlchemy model to capture the maintainers and the experts for each model. We think the people who generate the data are the best source to provide documentation. Also, this approach encourages us to keep the actual data models and their descriptions in sync all the time.
class BizModel(Base):
    __yelp_owner__ = Ownership(
        teams=[TEAM_OWNERS['biz_team']],
        members=[],
        contacts=[],
    )
    __tablename__ = 'my_biz_table'
    __doc__ = 'Business information.'

    id = Column(Integer, primary_key=True, doc=r"""ID of the business.""")
    name = Column(String(64), doc=r"""Name of the business.""")
Simple SQLAlchemy model with documentation and ownership information
Developers may not always remember to include the documentation when they work on the SQLAlchemy models. To prevent that, we set up automated tests to enforce that the model is attributed and documented. Hard checks are also put in place to ensure that we never regress. Whenever a new model is added, the test suite will fail if the model is not properly documented or is missing ownership information. These automated tests and checks have moved us much closer to our goal of 100% documentation coverage.
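A check of this kind can be sketched without any SQLAlchemy machinery — walk the models and report anything missing documentation or ownership (illustrative shape, not Yelp's actual test suite):

```python
def find_documentation_gaps(models):
    # Each model is a plain dict here: {'name', 'doc', 'owner',
    # 'columns': {column_name: column_doc}}.
    problems = []
    for model in models:
        if not model.get("doc"):
            problems.append("%s: missing model docstring" % model["name"])
        if not model.get("owner"):
            problems.append("%s: missing ownership info" % model["name"])
        for column, doc in model["columns"].items():
            if not doc:
                problems.append("%s.%s: missing column doc" % (model["name"], column))
    return problems

models = [{
    "name": "BizModel",
    "doc": "Business information.",
    "owner": "biz_team",
    "columns": {"id": "ID of the business.", "name": ""},
}]
```

A test suite would assert that this list is empty, so any undocumented model fails the build.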
Extract delicious documentation to feed Watson.
Once the documentation is available in the data models, we are ready to get it into the Schematizer and eventually present it in Watson. Before diving into the extraction process, we first introduce another component that plays an important role in this process: the Application Specific Transformer, or AST for short. The AST, as its name suggests, is a framework that takes in a stream of messages from one or more Data Pipeline topics, applies transformation logic to the messages’ schema and data payloads, and then outputs the transformed messages to a new set of Data Pipeline topics. Transformation components that provide specific transformation logic are chainable, and therefore multiple components can be combined to perform more sophisticated transformation logic.
We use a set of transformation components in the AST to generate more understandable data using SQLAlchemy model reflection. Since the components are chainable, we now simply create a transformation component which extracts the documentation and ownership information from the SQLAlchemy models and add it into the existing transformation chain. The documentation and ownership information of the models are then automatically extracted and loaded into the Schematizer through the existing pipeline. The implementation is surprisingly simple and seamlessly integrates into the entire pipeline, so it's a very effective way to produce good quality documentation.
Transformation components in AST
As mentioned above, some transformation components already exist in the AST to generate more meaningful data for end users. The bit flag field transformation component flattens a single integer flags field into multiple boolean fields, one for each bit of the integer value. Similarly, the enum field transformation component converts the numeric enum value into a readable string representation. A nice bonus from these transformation components is that they also produce self-explanatory, self-documented schemas at the same time, and consequently create better documentation.
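Both components, and the chaining the AST relies on, can be sketched as plain functions over message payloads (an illustration only; real components rewrite the Avro schema as well, not just the data):

```python
def flatten_bit_flags(field, bit_names):
    # Turn one integer flags field into a boolean field per bit.
    def transform(payload):
        payload = dict(payload)
        flags = payload.pop(field)
        for bit, name in enumerate(bit_names):
            payload[name] = bool(flags & (1 << bit))
        return payload
    return transform

def map_enum(field, mapping):
    # Replace a numeric enum value with its readable name.
    def transform(payload):
        payload = dict(payload)
        payload[field] = mapping[payload[field]]
        return payload
    return transform

def chain(*components):
    # Chainability: the output of one component feeds the next.
    def transform(payload):
        for component in components:
            payload = component(payload)
        return payload
    return transform

pipeline = chain(
    flatten_bit_flags("flags", ["is_open", "is_claimed"]),
    map_enum("status", {0: "ACTIVE", 1: "CLOSED"}),
)
out = pipeline({"flags": 0b01, "status": 1})
# out == {"status": "CLOSED", "is_open": True, "is_claimed": False}
```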
Collaborate, contribute, and find.
The story doesn’t end with developer documentation. Watson also provides mechanisms for end users to collaborate and contribute toward making all of Yelp’s data easily understandable.
The first mechanism is tagging. Watson allows users to tag any source with a relevant category. A source may be a MySQL database table or a data model. For example, a Business source can be tagged with the “Business Information” category, and a User source may be tagged with the “User Information” category. End users can tag related sources with the same category and organize them in a way that makes the most sense to them. The effect of tagging is a richer understanding of how our own data sources relate and connect to each other.
Business source tagged with “Business Info”
Adding notes is the second mechanism Watson provides. This enables users, especially non-technical users, to contribute their own documentation to a source or a field. Users such as business analysts often have valuable experience on using data, and notes have been a great way for them to share gotchas, edge-cases, and time-sensitive information.
The number one feature requested by end users for Watson was a more nuanced search. To support that, we have implemented a simple search engine in Watson that allows users to search on various aspects of the data such as schemas, topics, and data model descriptions. We chose the Whoosh Python package to back the search functionality, as opposed to something like Elasticsearch, because it allowed us to get development moving quickly. Whoosh also provides decent performance for the volume of search data we have so far. As the volume of data increases, we will consider switching to a more scalable engine.
Conclusion
The Schematizer is a key component of Yelp’s Data Pipeline. Its schema registration operation enables the major features of the pipeline, including mitigating the upstream schema change impact on downstream consumer applications and services. The Schematizer also takes care of topic assignment for data publishing, removing the need for users to determine which topic to use. Finally, it requires and provides the documentation for every piece of data flowing through the pipeline to facilitate knowledge-sharing across the entire organization. Combined with Watson, all the employees at Yelp now have a powerful tool to access up-to-date information.
We have now dug into the Schematizer and its front-end documentation system, Watson. Next, we are going to explore our stream processor: PaaStorm. Stay tuned!
Acknowledgements
Many thanks to Josh Szepietowski, Will Cheng, and Bo Wu for bringing Watson to life. Special thanks to Josh, the author of the Application Specific Transformer, for providing invaluable inputs on the AST and Watson sections in this blog post.
New to Telerik Reporting? Download free 30-day trial
Creating Chart Programmatically
This article is obsolete. The Chart item is now superseded by the more advanced Graph item. The Graph item is most often used for building powerful OLAP/Pivot charts.
The steps below show how to create a minimal chart definition (i.e. Chart) programmatically. See Creating Chart Programmatically - more complex example topic for more info how to create multiple series and how appearance can be tailored at run-time.
Once the chart definition is created, you need to create ChartSeries and ChartSeriesItem collections.
The example below is an alternative to using the Report Designer for creating a chart. The approach of hardcoding data in the series is for the sake of the example and would produce a static chart. This code should not be used for programmatic creation of dynamic charts based on data, for such scenario, see How to Programmatically Data Binding Chart to a Generic List of Objects.
- First add the namespaces that support the objects to be referenced. The Telerik.Reporting.Charting namespace supports the Chart declaration.
- Next construct the Chart item itself.
Telerik.Reporting.Chart progchart = new Telerik.Reporting.Chart(); progchart.BitmapResolution = 96));
Dim progchart As New Telerik.Reporting.Chart() progchart.BitmapResolution = 96.0))
Construct a new ChartSeries object and assign a name to it. Set the ChartSeriesType to be Bar. Using the ChartSeries.AddItem(Double, String) method, add a series of ChartSeriesItem objects to the series Items ChartSeriesItemsCollection collection. The latter method takes as parameters a double "Value" and a string "Label".
// chart series collection and add the chart to a report section (detail section in the example).
- The finished chart should look like this example:Programmatically Created Chart at Runtime | https://docs.telerik.com/reporting/buildingprogrammaticcreate | CC-MAIN-2022-05 | refinedweb | 288 | 50.53 |
On Sun, 24 Feb 2002, John Belmonte wrote: > We've come to it in a roundabout fashion, but it seems to me that the syntax > > global (T) * > > would be better recognized as: > > namespace T Namespaces is only one of the uses of the global declaration. But remember that, in Lua, almost every function is global (print, write, etc.), so a simple "namespace T" would not be very helpful. Moreover, a simple "namespace T" helps to define modules, but not to use them. > However I'm not sure I understood the intended semantics of your proposal, The semantics is: "global (exp) name-list" translates to _temp = exp after that, any use of a name in name-list is read as "_temp.name", subjected to the usual visibility rules for variable declarations. If name-list is '*', then the rule applies to any name not declared in another local/global declaration. So, > 1) the access rules apply not only when the module is > executed, but also when functions defined within the module are > executed, Yes, if the functions are declared inside the scope of the global declaration. (The change from "name" to "_temp.name" is done at compile time...) > 2) in nesting of dofile's/modules, the access rules properly nest. Scopes in "dofile"s do not nest. > If upvalues were any lesson, we'll end up with proper namespaces despite > any contrary intentions ;-). What do you call "proper namespace"? -- Roberto | http://lua-users.org/lists/lua-l/2002-02/msg00381.html | CC-MAIN-2016-22 | refinedweb | 238 | 63.59 |
Thank you! As long as I manually set projectDir to some valid path all that
works for me! However, you specificaly said that absolute paths should be
avoided, and I agree. So, how should projectDir be getting set without
doing it overtly? For the moment this works, so I'm off high-center, but I
need to understand the more general coase for the longer haul.
--Hank
-----Original Message-----
From: webware-discuss-admin@...
[mailto:webware-discuss-admin@...]On Behalf Of Chuck
Esterbrook
Sent: Thursday, January 19, 2006 11:23 AM
To: webware-discuss@...
Subject: Re: [Webware-discuss] "No Module named Middle"
Sender ALLOWED [ Remove ] [ Block ] details
Vanquish Anti-Spam Control Panel
On 1/18/06, Hank Freeman <hfreeman@...> wrote:
> I am working on learning MiddleKit, and having a great time. Things
have
> gone well until I try to actually build a Store from my model. After
> creating a simple model, generating it, creating the database and it's
> objects, and loading my sample data all successfully, I've stumbled.
When I
> try to do store.readModelFileNamed(_filename) I get "ModelError: (<path
to
> my model.mkmodel>', "Could not import module for class '<First class in
my
> model>' due to 'No module named Middle'. If you added this class
recently,
> you need to re-generate your model.')
>
> I have a Settings.config file that contains:
> {
> 'Package': 'Middle',
> 'SQLLOG': {'File': 'mk-sql.log' },
> 'Database': 'MyDbNameHere'
> }
> and my "Middle" directory does have (an empty) __init__.py file to
identify
> it as a package.
>
> I suspect I'm making a common noob oversite here, but (admitting to
being a
> newbie) I cannot see it. Any advice on common pitfalls I might be
hitting
> here? Thanks in advance!
The directory containing your Middle directory needs to be in the
python path which is stored in sys.path at runtime, and augmented by
the environment variable PYTHONPATH when Python starts.
I like to make my code impervious to its environment when possible, so
my personal pref is to enhance sys.path in my Python code. Also:
- I also like to make my code impervious to what directory it's run
in, so I don't use absolute paths.
- sys.path[0] is often a blank string or the path of the main python
program. Whatever the case, I never pre-empt it.
- My program is usually "next door" to the Middle directory in another
directory like "bin" or "WebApp".
So given all that, a command line tool I write that uses Middle might say:
import os, sys
progPath = os.path.join(os.getcwd(), sys.argv[0])
progDir = os.path.dirname(progPath)
projectDir = os.path.dirname(projectDir)
assert os.path.exists(projectDir), projectDir
sys.path.insert(1, projectDir)
# now test it:
import Middle
For a WebKit app, you'd do this fixup during context initialization.
Or you could just set the PYTHONPATH environment variable and not
write any Python code.
Btw any time you have problems importing a module, you can "print
sys.path" to help you troubleshoot it, although I find that output
hard to read and do this instead:
for p in sys.path:
print repr(p)
HTH,
Chuck
-------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc. Do you grep through log
files
for problems? Stop! Download the new AJAX search engine that makes
searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
_______________________________________________ | https://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200601&viewday=20 | CC-MAIN-2017-26 | refinedweb | 562 | 67.65 |
First time here? Check out the FAQ!
In an IvyScriptStep variables are auto initialized as soon as they are accessed. The following code will log "bad":
java.util.Date date = null;
if ( date!=null) {
ivy.log.info("bad");
} else {
ivy.log.info("good");
}
It looks like the object "date" will never be a java.util.Date, it is an IvyDate. There is also no chance to import java.util.Date because of a conflict.
asked
06.11.2013 at 14:42
Adrian Imfeld
146●6●7●15
accept rate:
33%
edited
07.11.2013 at 12:12
Reguel Werme... ♦♦
7.8k●1●15●49
A simple solution is to transfer the code to a handler class and call the methode from the IvyScript-Step.
If you still want the code in a ScriptStep, you could do the following:
import java.util.concurrent.atomic.AtomicReference;
AtomicReference date = new AtomicReference();
//date.set(new java.util.Date());
if ( date.get()!=null) {
ivy.log.info("bad");
} else {
ivy.log.info("good");
}
answered
06.11.2013 at 14:45
edited
06.11.2013 at 14:49
IvyScript auto initialise default types like:
To handle this inside IvyScript the is initialized statement is the recommended operator to check if a variable is null or was auto initialized. Example:
is initialized
Date myDate = foo.bar.getMyDate();
if (myDate is initialized) {
ivy.log.info("myDate is not the default value");
} else {
ivy.log.error("myDate is auto initialized");
}
As long as we are working inside IvyScript anything works fine. When we want to use the variable as a parameter, e.g. to call a method in Java, we have to pay attention to the auto initialization. Example:
// getMyDate() returns null
Date myDate = foo.bar.getMyDate();
// below line prints: 'Sat Jan 01 00:00:00 CET 1',
// because is was auto initialized even getMyDate() returns null
MyJavaClass.printDate(myDate);
// below line prints: 'null', becaue we pass null if the variable was auto initialized
MyJavaClass.printDate(myDate is initialized ? myDate : null);
If we navigating through beans we could also use the #-operator. This operator prevents auto initialization. Example:
#
if (in.foo.#adressBean == null)
{
// yes, the field 'adressBean' is null and was not auto initialized
}
When we working with types which could not be initialized (like interfaces), we have to use the #-operator too. Example:
IInterfaceX interfaceX = foo.bar.getInterfaceX();
if (#interfaceX != null)
{
// call a method on the interface.
}
// below call will fail with an exception, because IvyScript could not auto initialized an interface
interfaceX.getString();
See also the chapter IvyScript & Null Handling for more details.
answered
08.11.2013 at 15:43
Flavio Sadeghi ♦♦
1.8k●5●7●23
accept rate:
75%
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
ivyscript ×31
Asked: 06.11.2013 at 14:42
Seen: 3,321 times
Last updated: 08.11.2013 at 15 | https://answers.axonivy.com/questions/52/how-to-work-with-java-util-date-in-ivyscript-step-without-auto-initialization | CC-MAIN-2019-22 | refinedweb | 486 | 62.04 |
report request to the header section of a V7 BIRT Report.
The ‘out of the box’ Security Group Access Report, security_group.rptdesign,
will be updated as an example to display the User Name at the bottom of the
header section highlighted by the red arrow below.
Add UserName of Person
Executing Report Here
1. First, 0pen up the Report Design in BIRT Designer and expand the Report
Parameters Section.
1
2. Save the report design. 2 . Note this parameter is case sensitive. Add a label and data for the new parameter in the report as highlighted below. and should be specified as Not Required and Hidden. 4. Add a new parameter: username. 3.
import the report design into the V7 Instance. From the action menu. sign into V7 as an administrator. the new parameter ‘userName’ displays. Notice after the Report XML is generated. select Import Report. Filter for the report you updated. and go to its Report Tab. Navigate to the location of the updated report design file and import it. (If it does not display. Next. and go to the Report Administration Application. click Refresh) 3 . Save the report. 6. and then generate the Report XML. To do this.5.
7. test out the change. (*For details on this. Finally. Because this parameter should not display to the user on the request page. 4 . see note at the end of this doc) 8. Go to the Security Group Application. and select this report from the Run Report Menu. Notice its Request Page does not display the UserName Parameter. delete it from the parameter section in Report Administration by clicking on the garbage can. Save and regenerate the XML.
The report displays with the username of the individual who submitted the report request. Any parameter in the REPORTLOOKUP table then displays as a parameter in Report Admin and on the Report’s Request Page. In this case.9. the user executing the report was ‘MARKK’ as highlighted. A future Enhancement request has been filed so if a username is added as parameter in report design file. (Reference Issue #09-18636) September 2009 5 . Click submit. it will not be added to REPORTLOOKUP table on import. *Note: The username displays as a parameter in Report Administration after importing because the . ‘where’ etc – which is why they aren’t added to REPORTLOOKUP and don’t display on the Report’s Request Page. System code has been implemented to not import other standard hidden parameter values of ‘appname’ .rptdesign file identifies it as a parameter value and enters it into the REPORTLOOKUP table. | https://www.scribd.com/document/385100755/How-to-Add-a-Username-to-a-V7-BIRT-Report | CC-MAIN-2019-04 | refinedweb | 430 | 61.93 |
= 'share' 'a:
<mediawiki xmlns="" xmlns: .
Note: you can view and fetch the source from github here abij/hadoop-wiki-pageranking.
"] public class WikiPageLinksMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, Text> { private static final Pattern wikiLinksPattern = Pattern.compile("\
");(..) } }
The mapper class that will parse the chunks of xml to key page and value outLinks tuples. In this implementation all links are added to the map, even if they appear multiple times on the page.)); } }
The reducer class that will store the page with the initial PageRank and the outgoing links. This output format is used as input format for the next job. Key<tab>rank<tab>CommaSeparatedList-of-linksOtherPages.
First Run result:
Hilversum 1.0 Country,Netherlands,Province,North_Holland,Mayor,Democrats_66,A...
Get a bigger file! The 500Mb latest Dutch Wiki is a sufficient start. Extracted the big xml is around 2.3 Gb.
Upload the file to your HFDS in the wiki/in folder and remove the old result folder 'ranking'.:
sample input:
---------------------------------------
Page_A 1.0
Page_B 1.0 Page_A
Page_C 1.0 Page_A,Page_D.
sample output:
---------------------------------------
Page_A !
Page_C |Page_A
Page_B !
Page_B |Page_A
Page_A Page_B 1.0 1
Page_C !
Page_A Page_C 1.0 2
Page_D Page_C 1.0 2.
sample input (sorted on key):
---------------------------------------
Page_A !
Page_A Page_C 1.0 2
Page_A Page_B 1.0 1
Page_B !
Page_B |Page_A
Page_C !
Page_C |Page_A
Page_D Page_C 1.0 2)); } }
The output of the reducer contains the new pageRank for the existing pages with the links on those pages.
sample output:
Page_A 1.425
Page_B 0.15 Page_A
Page_C 0.15 Page_A,Page_D
We need to configure the main class so the new job is executed for a couple of times after the xml-parsing job. I have commented out the last job for now, we will create it after in the next paragraph..
sample input:
---------------------------------------
Page_A 1.425
Page_B 0.15 Page_A
Page_C 0.15 Page_A,Page_D.
sample output:
---------------------------------------
1.425 Page_A
0.15 Page_B
0.15 Page_C.
Ajay -
October 13, 2011 at 4:34 am
Hi,
I didn't understand whats the content of the dataset you have provided. Can you explain about it.
abij -
October 14, 2011 at 4:52 pm.
Anca -
December 17, 2011 at 2:51 pm
abij -
January 6, 2012 at 2:35 pm.
Kai -
February 12, 2012 at 11:17 pm
Hi abij, can you send me please the jar file, I want to run the job on my own PC (k.elloumi@gmail.com)
Thanks
Kai
Allan Kardec -
January 18, 2012 at 5:12 pm
Sure it's an excellent analysis that I disagree with all of your views. Do not they say that differences of opinion make a difference? I am delighted to have found your site through Google and will not fail to add to my bookmarks.
小e的分享 | 独乐乐不如众乐乐 » 迭代式MapReduce解决方案(一) -
February 10, 2012 at 5:18 pm
[...])。 [...]
Fadi -
March 21, 2012 at 9:01 pm
Hi abij,
Excellent tutorial, the information flows smoothly. But i couldn't find the data set, the link is broke?
can you send me please the jar file, I want to run the job on my own PC (fadi20052002@gmail.com)
Thanks
Fad
Anca -
May 28, 2012 at 1:25 pm
Hi Abij,
I ran the page rank job for the same data input twice and got different results?
Is this possible? I am missing something?
Thanks,
Anca
Algorithms: How is pagerank distributed? - Quora -
October 14, 2012 at 8:36 pm
[...] use Map Reduce. The blog below explains a simpler version by considering only the Wikipedia links. QuoteComment Loading... • Share • Embed • Just now Add [...]
Athresh -
December 30, 2012 at 1:10 am
Hey,
Have you tried using the Amazon Elastic Map Reduce to compute the pagerank? If yes, how did you go about doing it?
Alexander Bij -
December 30, 2012 at 9:44 pm
hobbit -
February 26, 2013 at 1:00 pm '.' at import import WikiPageLinksMapper ;
^
expecting ';' at import import WikiPageLinksMapper ;
^
for 3 files import....
can anyone please help me
thanx in advance...
Alexander Bij -
March 14, 2013 at 12:47 pm
Make sure the single-node cluster is of the same Hadoop version (0.20.204.0).
I should update to code and dependencies to the latest version.
Ricardo -
January 22, 2014 at 6:10 am
Great post. I used to be checking continuously this weblog and I'm inspired! Very useful information specifically the last part
I handle such information much. I was seeking this particular info for a long time. Thank you and good luck.
rohith -
March 19, 2014 at 6:26 am
hai,
i would like to execute this algorithm on my computer will u please mail me jar file to my mail (gurrapurohith@gmail.com) Thanks in advance
Kevin -
May 6, 2014 at 1:00 pm):1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
I use wiki-dump as my input data. I use my own code for job1. It also output the same form as yours. Then I use your job2. I have no idea why these errors happen.
Please help.
Thanks.
Alexander Bij -
May 7, 2014 at 7:38 am 'Yang
Kevin -
May 8, 2014 at 10:36 am
Deep Pradhan -
July 4, 2014 at 11:31 am | http://blog.xebia.com/2011/09/27/wiki-pagerank-with-hadoop/ | CC-MAIN-2014-52 | refinedweb | 871 | 78.35 |
This C Program Calculates the Value of sin(x). It’s a non-differentiable function. Start at zero, then goes up to 1, then back down to 0. But then, instead of going negative, it will just “reflect” about the x-axis. The derivative is 1 and then -1 for every x such that sin(x) = 0 (i.e. 0, 180, 360, 540, 720 …).
Here is source code of the C program to Calculate the Value of sin(x). The C program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
* C program to find the value of sin(x) using the series
* up to the given accuracy (without using user defined function)
* also print sin(x) using library function.
*/
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
void main()
{
int n, x1;
float accuracy, term, denominator, x, sinx, sinval;
printf("Enter the value of x (in degrees) \n");
scanf("%f", &x);
x1 = x;
/* Converting degrees to radians */
x = x * (3.142 / 180.0);
sinval = sin(x);
printf("Enter the accuracy for the result \n");
scanf("%f", &accuracy);
term = x;
sinx = term;
n = 1;
do
{
denominator = 2 * n * (2 * n + 1);
term = -term * x * x / denominator;
sinx = sinx + term;
n = n + 1;
} while (accuracy <= fabs(sinval - sinx));
printf("Sum of the sine series = %f \n", sinx);
printf("Using Library function sin(%d) = %f\n", x1, sin(x));
}
$ cc pgm14.c -lm $ a.out Enter the value of x (in degrees) 60 Enter the accuracy for the result 0.86602540378443864676372317075294 Sum of the sine series = 0.855862 Using Library function sin(60) = 0.866093 $ a.out Enter the value of x (in degrees) 45 Enter the accuracy for the result 0.70710678118654752440084436210485 Sum of the sine series = 0.704723 Using Library function sin(45) = 0.707179. | https://www.sanfoundry.com/c-program-value-sin-x/ | CC-MAIN-2018-13 | refinedweb | 303 | 67.04 |
PHP and WMI – Dig deep into Windows with PHP.
What is WMI and why we need to deal with it
The MSDN site has its official definition of WMI in this article, out of which a few lines are extracted below:
Windows Management Instrumentation (WMI) is the Microsoft implementation of Web-Based Enterprise Management (WBEM), which is an industry initiative to develop a standard technology for accessing management information in an enterprise environment. has the so called CIM (Common Information Model) to encapsulate the information in an object-oriented manner. It also provides several programming interfaces to retrieve said information. In a pure Windows environment, these will be PowerShell, VB Script and .NET languages. But in our case, it will be PHP.
One of the fundamental questions when programming with WMI is: which "information" is available? In other words, which objects/classes are available? Luckily, Microsoft provides a full list of what WMI offers in terms of classes and their properties. Please visit here for a complete reference. In WMI programming, most of the time we are referring to Win32 Classes.
Pre-requisites
On a host Windows machine, WMI must be installed to provide the CIM. By default, any Windows system newer than Windows XP should have WMI installed and enabled...." button to invoke a new dialog similar to the one shown below:
Normally, we don't need to change a thing.
root\cimv2 is the system built-in namespace for our WMI interface. Just click the "
Connect" button in this dialog. It will bring us back to the previous window with all the buttons enabled.
Being able to connect to a machine's WMI interface is just one of the pre-requisites. We also need to make sure the Windows Firewall will allow WMI calls to pass through.
In Windows Firewall, choose "
Advanced Settings" then enable both inbound and outbound access rules for WMI related entries. Please see the screenshots below.
After we enable the WMI firewall rules in a remote machine, we can test the connection as illustrated in Step 2 above. To connect to a remote machine, we need to prefix the default namespace ("
root\cimv2") with the IP or name of the PC we need to connect ("
\\192.168.1.2\root\cimv2 for example") and provide the user name and password for that remote machine.
PHP extension for WMI (.NET)
To use WMI (or more precisely, .NET functionality) in PHP, we will need to enable
php_com_dotnet.dll. Add one line to the
php.ini like this:
extension=php_com_dotnet.dll
and restart the web server.
Note:
php_com_dotnet.dll is a Windows only extension. This means that we will have to run the WMI-calling PHP file in a WAMP-like environment, and not in a *nix environment.?
We can expect the WMI to provide information regarding the BIOS, CPU, disks, memory usage, etc. But how is this information presented?
Besides digging into the official documents provided, let's bring up the
wbemtest dialog again and connect to our local machine. In the
WMI Tester dialog, click the
Enum Classes... button and bring up the below dialog:
In this dialog, don't enter anything in the text box, choose
Recursive and click
OK. It should bring up another dialog like this:
This is a very long list (1,110 objects in my Windows 8.1 PC). Your PC may give out a different list but should be more or less the same as this one. Please take some time to scroll through it and look at the names of the classes that WMI provides. For example, in the above image, we have highlighted a class
Win32_LogicalDisk. This contains all the information related to the machine's logical disks. To get a deeper insight of what this class offers, please double click on that class and another
Object editor dialog will appear:
Take a closer look at the Properties panel. All the properties listed here are those we can retrieve. For example,
VolumeName will be the name we assigned for a logical disk.
WMI's Win32 Classes have a lot of entries to look through. Some of the most frequently used are:
- Computer System Hardward Classes, including Cooling Device, Input Device (Keyboard, Mouse, etc), Mass Storage, Motherboard, Networking Device, Printing, Video & Monitor, etc.
- Installed Application Classes, including Font, etc.
- Operating System Classes, including Drivers, Memory, Processes, Registry, Users, etc.
- Performance Counter Classes, including all performance related classes.
- etc, etc.
We now have a much clearer picture of the structure of the WMI classes and their associated properties.
Programming WMI in PHP
The code snippet below shows some basic information about the logical disks of a remote machine on the IP 192.168.1.4:
<?php $pc = "192.168.1.4"; //IP of the PC to manage $WbemLocator = new COM ("WbemScripting.SWbemLocator"); $WbemServices = $WbemLocator->ConnectServer($pc, 'root\\cimv2', 'your account', 'your password'); $WbemServices->Security_->ImpersonationLevel = 3; $disks = $WbemServices->ExecQuery("Select * from Win32_LogicalDisk"); foreach ($disks as $d) { $str=sprintf("%s (%s) %s bytes, %4.1f%% free\n", $d->Name,$d->VolumeName,number_format($d->Size,0,'.',','), $d->FreeSpace/$d->Size*100.0); echo $str; }
On my system, the above will print out something like:
C: (System) 104,864,059,392 bytes, 60.4% free D: (Data) 209,719,963,648 bytes, 84.3% free E: (Misc) 185,521,188,864 bytes, 95.3% free
This is a very simple example but it lays down the fundamental structure and flow of a PHP WMI program.
Firstly, a COM object instance of type
WbemScripting.SWbemLocator is created.
Then the connection to the PC will be established via the
ConnectServer method. The four parameters for this method call are self explanatory. Finally, we need to set the security impersonation to a proper level. Level 3 is the recommended level for WMI scripts. A detailed explanation of the level is documented here. Level 3 means "
Impersonation", which means and we quote:
The server process can impersonate the client's security context on its local system. The server cannot impersonate the client on remote systems.
In short, our script (and the service instance we created) are "impersonating" the user with the account/password provided. Perfect for what we need here.
Please note the code above is the way to create a remote COM connection to manage a remote PC. To manage a local PC, the syntax will be slightly different but not much:
<?php $pc = "."; $obj = new COM ("winmgmts:\\\\".$pc."\\root\\cimv2"); $disks = $obj->ExecQuery("Select * from Win32_LogicalDisk"); // Rest of the code is the same as previous remote connection sample
It is somewhat simpler as we don't need to provide a credential and impersonate but this is based on the assumption that the user running this snippet has the Administrator privilege.
To get the classes and their associated data, we have used a WQL (WMI Query Language) statement. It is very similar to SQL statements we issue to a MySQL server but in this case, we are retrieving data from WMI.
Win32_LogicalDisk is one "table" in WMI that stores all information related to logical disks. To access data from other tables, please use the name listed in the
Query Result dialog as shown above. This also allows us to filter the results. For example,
Select * from Win32_LogicalDisk where size > 150000000000 will only return those logical devices with size over 150G (roughly).
The
ExecQuery statement, if successful, will return a
variant typed object. One downside is that if we try to
var_dump that object, PHP will simply print something like
object (variant) #3.... Same thing happens when we try to
var_dump the
$d variable. There is actually nothing useful for further programming in the output.
In actuality, we just need to know that the object is iterable. In this case, when we use a
foreach loop, every
$d instance will hold an object reference to a logical disk. Then we can access the properties in that logical disk instance with the familiar
-> notation. The properties list can be found in the
Object editor dialog for that particular class as shown above.
Be sure to spell the class name (
Win32_LogicalDisk) and property names (like
Size,
Name) correctly. Windows is not case sensitive but if we provide the wrong name, an error will be thrown and returned.
As we mentioned earlier, WMI programming can be done with other languages as well – languages like C#, VB Script, etc. However, the WMI COM interface is such a dynamic interface that we can't count on any of these languages to provide a code completion hint to have easy access to all the properties. We have to rely on the dialogues shown above.
One solution to help the programmers is to further encapsulate each WMI class into a PHP class with necessary methods. This should be a very straightforward task and I will leave it to those interested to play around with it.
Conclusion
WMI is a powerful tool holding some of the most hidden secrets kept by the Windows operating system. In a large scale network with homogeneous Windows based machines, we can rely on WMI to retrieve this vital information and help the system admins better manage all the machines.
In this article, we only cover the very basics of WMI and PHP WMI programming but have laid down the fundamentals for further work.
Please leave your comments below if you'd like to see a more detailed WMI tutorial!
- Wiki Chua | http://www.sitepoint.com/php-wmi-dig-deep-windows-php/ | CC-MAIN-2014-35 | refinedweb | 1,573 | 56.35 |
Summary: This article shows how to create a simple low-pass filter, starting from a cutoff frequency \(f_c\) and a transition bandwidth \(b\). This article is complemented by a Filter Design tool that allows you to create your own custom versions of the example filter that is shown below, and download the resulting filter coefficients.
How do you create a simple low-pass filter? A low-pass filter is meant to allow low frequencies to pass, but to stop high frequencies. Theoretically, the ideal (i.e., perfect) low-pass filter is the sinc filter. The sinc function (normalized, hence the \(\pi\)’s, as is customary in signal processing) is defined as
\[\mathrm{sinc}(x)=\frac{\sin(\pi x)}{\pi x}.\]
The sinc filter is a scaled version of this that I’ll define below. When convolved with an input signal, the sinc filter results in an output signal in which the frequencies up to the cutoff frequency are all included, and the higher frequencies are all blocked. This is because the sinc function is the inverse Fourier transform of the rectangular function. Multiplying the frequency representation of a signal by a rectangular function completely removes the frequencies above the cutoff point, which generates the ideal frequency response. And, since multiplication in the frequency domain is equivalent to convolution in the time domain, the sinc filter has exactly the same effect.
The windowed-sinc filter that is described in this article is an example of a Finite Impulse Response (FIR) filter.
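NumPy's np.sinc function implements exactly this normalized definition, which keeps the code later in this article short. A quick sanity check (the test points below are arbitrary):

```python
import numpy as np

x = np.linspace(-2.5, 2.5, 11)  # Arbitrary test points, including x = 0.

# Manual evaluation of sin(pi*x) / (pi*x), with the limit value 1 at x = 0.
manual = np.ones_like(x)
nz = x != 0
manual[nz] = np.sin(np.pi * x[nz]) / (np.pi * x[nz])

print(np.allclose(np.sinc(x), manual))  # → True
```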
Sinc Filter
The sinc function must be scaled and sampled to create a sequence and turn it into a (digital) filter. The impulse response of the sinc filter is defined as
\[h[n]=2f_c\mathrm{sinc}(2f_cn),\]
where \(f_c\) is the cutoff frequency. The cutoff frequency should be specified as a fraction of the sampling rate. For example, if the sampling rate is 10 kHz, then \(f_c=0.1\) will result in the frequencies above 1 kHz being removed. The central part of a sinc filter with \(f_c=0.1\) is illustrated in Figure 1.
The problem with the sinc filter is that it has an infinite length, in the sense that its values do not drop to zero. This means that the delay of the filter will also be infinite, making this filter unrealizable. The straightforward solution is to simply stop computing at a certain point (effectively truncating the filter), but that produces excessive ripple. A better solution is to window the sinc filter, which results in, you guessed it, a windowed-sinc filter.
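The samples shown in Figure 1 follow directly from this definition. A short sketch with \(f_c=0.1\) (the range of \(n\) is an arbitrary choice for the central part of the infinite response):

```python
import numpy as np

fc = 0.1  # Cutoff frequency as a fraction of the sampling rate.
n = np.arange(-30, 31)  # A finite window on the infinite impulse response.
h = 2 * fc * np.sinc(2 * fc * n)

# The center tap is 2 * fc = 0.2, and the zero crossings sit at multiples
# of 1 / (2 * fc) = 5 samples (up to floating-point rounding).
print(h[30])  # → 0.2
```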
Window
A window function is a function that is zero outside of some interval. There exists a great variety of these functions, each tuned for different properties, but I’ll simply use the well-known Blackman window here, which is a good choice for general usage. It is defined as (for \(N\) points)
\[w[n]=0.42-0.5\cos\left({\frac{2\pi n}{N-1}}\right)+0.08\cos\left({\frac{4\pi n}{N-1}}\right),\]
with \(n\in[0,\,N-1]\). It is shown in Figure 2, for \(N=51\).
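This is the same formula that NumPy's built-in np.blackman uses, so the window can be computed either way:

```python
import numpy as np

N = 51
n = np.arange(N)
w = 0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1)) + \
    0.08 * np.cos(4 * np.pi * n / (N - 1))

print(np.allclose(w, np.blackman(N)))  # → True
```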
Windowed-Sinc Filter
The final windowed-sinc filter is then simply the product of the two preceding expressions, as follows (with the sinc filter shifted to the range \([0,\,N-1]\)).
\[h[n]=\mathrm{sinc}\left(2f_c\left(n-\frac{N-1}{2}\right)\right)\left(0.42-0.5\cos\left({\frac{2\pi n}{N-1}}\right)+0.08\cos\left({\frac{4\pi n}{N-1}}\right)\right),\]
with \(h[n]=0\) for \(n\notin[0,\,N-1]\). I have dropped the factor \(2f_c\) from the sinc filter, since it is much easier to ignore constants at first and normalize the complete filter at the very end, by simply making sure that the sum of all coefficients is one, giving the filter unity gain, with
\[h_\mathrm{normalized}[n]=h[n]/\sum_{i=0}^{N-1}h[i].\]
This results in the normalized windowed-sinc filter of Figure 3.
Transition Bandwidth
The final task is to incorporate the desired transition bandwidth (or roll-off) of the filter. To keep things simple, you can use the following approximation of the relation between the transition bandwidth \(b\) and the filter length \(N\),
\[b\approx\frac{4}{N},\]
with the additional condition that it is best to make \(N\) odd. This is not really required, but an odd-length symmetrical FIR filter has a delay that is an integer number of samples, which makes it easy to compare the filtered signal with the original one. Setting \(N=51\) above was reached by setting \(b=0.08\). As for \(f_c\), the parameter \(b\) should be specified as a fraction of the sampling rate. Hence, for a sampling rate of 10 kHz, setting \(b=0.08\) results in a transition bandwidth of about 800 Hz, which means that the filter transitions from letting through frequencies to blocking them over a range of about 800 Hz. The values for \(f_c\) and \(b\) in this article were chosen to make the figures as clear as possible. The frequency response of the final filter (with \(f_c=0.1\) and \(b=0.08\)) is shown in Figure 4.
Python Code
In Python, all these formulas can be implemented concisely. sinc filter. h = np.sinc(2 * fc * (n - (N - 1) / 2)) # Compute Blackman window. w = 0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1)) + \ 0.08 * np.cos(4 * np.pi * n / (N - 1)) # Blackman window can be computed with
w = np.blackman(N).
In the follow-up article How to Create a Simple High-Pass Filter, I convert this low-pass filter into a high-pass one using spectral inversion. Both kinds of filters are then combined in How to Create Simple Band-Pass and Band-Reject Filters
Filter Design Tool
This article is complemented with a Filter Design tool. Experiment with different values for \(f_c\) and \(b\), visualize the resulting filters, and download the filter coefficients. Try it now!
Thanks! That's a very useful article.
Could you explain, what can I do with delay between source and filtered signals?
The nice thing about these kinds of filters is that it is easy to compensate for their delay. Symmetrical FIR filters, of which the presented windowed-sinc filter is an example, delay all frequency components in the same way. This means that the delay can be characterized by a single number. The delay of a filter of length M equals (M-1)/2. Hence, the shown filter with 51 coefficients has a delay of exactly 25 samples. So, if you want to overlay the original signal with the filtered one to compare them, you just need to shift one of them by 25 samples!
I don't understand how to compensate for the signal delay so that the signal matches the source signal.
If the delay is 25 samples, as in the example, then you can simply remove the first 25 samples of the filtered signal, or add 25 zeros at the beginning of the original signal. If you do that, and you plot the two signals on top of each other, then there will be no delay between them anymore.
Really appreciate how concise this article is. Thanks!
Thank you! It's a very nice article. Could you maybe explain why it is necessary to normalize each element in h by the sum(h)?
It’s not really necessary, but if you make sure that the sum of the coefficients is one, then the gain of the filter at DC is also one. Additionally, it allows you to make the gain of the filter whatever you want simply by multiplying the coefficients of the normalized filter by the required gain factor.
Thanks very much for your article... I had considered this topic whole week and now my mind is clear!
Thanks, this is really helpful !
Great article!
How do you make your plots?
Thanks
Thanks! The plots for most of the articles, including this one, were made with Python (using matplotlib).
Very interesting and clear article, thanks Tom! Question: You say that the number of coefficients is approx 4/b. If i use your fir designer tool and design a lowpass (windowed sinc) with Samplerate=44100, Cutoff=1000, Bandwidth=1000, i would expect approx 4/(1000/44100) = 176.4 thus 177 coefficients, but it gives 203 coefficients. I have been searching for the logic in that but i don't understand it. How do you define the specific number of coefficients? Thanks.
That’s a good observation. The reason for this is that the number of coefficients in the tool depends on the window function. This is because the window has a large influence on the transition bandwidth, so that, e.g., the rectangular window can get by with much less coefficients than the Blackman window. Maybe I'll write a separate article on this with more details, because it’s quite interesting stuff. In practice, the tool uses 4.6/b for Blackman, 3.1/b for Hamming, and 0.91/b for rectangular. From your example, I can tell that you’ve used a Blackman window, since 4.6/(1000/44100) = 202.86.
In the meantime, I’ve added the article The Transition Bandwidth of a Filter Depends on the Window Type with more details on this.
Thanks for your tutorial, well detailed!
Is the signal that you filter an array containing the amplitude of the signal?
Thanks! And yes, the variable
sin the example is an array of numbers. I would say that this is the signal, and not just its amplitude.
Hi, Can you explain why its not just the amplitudes?
Great articles, thank you very much!
I was just being nitpicky a bit, I think... :-) The signal is indeed really just a series of numbers that represent an amplitude. It’s just that saying “the amplitude of the signal” could suggest that you’ve left something out or did some sort of preprocessing. That’s why I’d just write “the signal”…
Thanks for this great article. I have a question regarding Figure 4. How did you create the frequency response diagram?
This is another great idea for a follow-up article! I'll do one where I explain this in detail. In short, you first pad the filter with zeros to increase the resolution of the frequency plot, then take an fft, compute the power, and plot the result, either on a linear scale or in dB.
In the meantime, I've written an article that shows exactly how I typically plot the frequency response of a filter.
Thank you. Would you write something about IIR?
Currently, I indeed only have an article on the (important) low-pass single-pole IIR filter. Are there any specific things on IIR filters or filter types that you would find interesting?
Well-elaborated! Many thanks.
This was very useful, thanks!
Hello, i'm implementing a fir filter, band reject to be exact, could the signal s be the data of an fft from an audio.wav?
Your signal
sshould be the data of the audio.wav file, not the FFT of the data. Of course, if you are going to implement the convolution in the frequency domain, then you need to take the FFT of both the signal and the filter coefficients.
Sorry Tom,
I was wrong in my earlier comment. fS/fL evaluates to zero in Python 3 as well, so the code gives the Blackman weights.
Your plots are, of course, correct but the display maybe doesn't come from the sample code shown below ?
I am a geophysicist, seismic sampling is often 500 -1000Hz The input box is labelled Hz.
Once again, thanks !
very impressive !
Jim
______________________________
import numpy as np
# Example code, computes the coefficients of a low-pass windowed-sinc filter.
# Configuration.
fS = 1000 # Sampling rate, ***** works fS = 1000.0 ? *****
fL = 20 # Cutoff frequency, *****works fL = 20.0 ? *****
N = 461 # Filter length, must be odd.
# Compute sinc filter.
h = np.sinc(2 * fL / fS * (np.arange(N) - (N - 1) / 2.))
# Apply window.
h *= np.blackman(N)
# Normalize to get unity gain.
h /= np.sum(h)
print(h)
# Applying the filter to a signal s can be as simple as writing
# s = np.convolve(s, h)
Hello Jim,
fL/fS (or fS/fL) does definitely not evaluate to zero in Python 3. Maybe two version of Python are installed on your computer, and you're still using Python 2? The plots on fiiir.com are indeed not generated in Python.
Tom
Tom,
Yes, indeed, I did have both versions active and must have used 2.7 again.
I just ran it in Python 3.4.5 and it works fine, a lesson learned.
I should have checked, was confused debugging in Idle
Jim
Awesome! you made windowed-sinc-filter design very easy!
Hi Tom,
I'm trying to apply this filter to my data which has sample rate of 250Hz and contains 1000 samples. After applying the filter my data has 1050 samples instead of 1000. Am I doing something wrong? Please help.
Thanks in advance.
This is normal. The length of the output signal is the length of the input signal plus the length of the filter minus one. This follows from the definition of convolution, but for a more intuitive understanding, have a look at the nice animations on. An effect of this is that you will see a so-called transient response of the filter in the beginning of your output signal, and that you have to wait a number of samples (the length of the filter, i.e., 51 samples in case of the example filter) before the filter is "filled up" and you get the actual response for which the filter was designed (the so-called steady state response).
Thanks Tom. I understand it better now.
I found it myself :). That was due to numpy.convolve. I used mode='same' and I got what I want.
Very good article.
Thanks for the articale and online filter-designer!
But it seems to me there must bу 2. * fL шт formula:
h = np.sinc(2 * fL / fS * (np.arange(N) - (N - 1) / 2.))
Ran it on Python 2.7 and found out with integer 2 the filter coefficients differ drastically (but with float 2, they correspond to the generated in pyhon list).
Thanks for pointing this out! (For other readers, the code snipped is from one of the generated Python programs from fiiir.com.) I've assumed Python 3 for a long time for all code on this site and on fiiir.com (you know, looking towards the future and all), but I think that I'll have to fold and make all my code compatible with both Python 2 and Python 3, because people keep reporting these kinds of problems… I'll look into it…
The code is now fully compatible with both Python 2 and Python 3. I've added
from __future__ import division(see One Code to Run Them All (Python 2 and Python 3) for details).
Thanks for your article.
That's a very useful article.
Could you explain, could we define this(=low_pass_filter) as one of FFT filters?
I saw someone called it(=low_pass_filter) as one of FFT filters.
Sorry for my English.
Yes, you can use FFT convolution with these filters, with exactly the same result. Instead of applying the filter with
s = np.convolve(s, h)as described above, you could use
from fft.signal import fftconvolve
s = fftconvolve(s, h)
According to the documentation for SciPy fftconvolve(), the SciPy
convolve()even picks the best algorithm (direct or FFT) automatically. The NumPy
convolve()that I've used above doesn't do that.
Hi Tom,
Thanks for the article. How would you determine the transition band width of Kaiser window function? Is it possible to provide the article where you found the transition band width for other window functions? And what is the criteria to determine the cutoff frequency? Thank you in advance!
I might misunderstand your question, but the cutoff frequency and rolloff normally follow from the problem that you are trying to solve by applying the filter that is being designed. For example, if the useful signal in the input has low frequencies and there is high frequency interference, then you could put the cutoff frequency "halfway" and make the rolloff as large as possible (to make the filter shorter) without removing part of the signal or letting through part of the noise.
I don't remember where I got the values for the other window functions… For (some) more details, see The Transition Bandwidth of a Filter Depends on the Window Type).
Thank you for the response, Tom. I am doing a lab experience, so I need to find a way to determine the cutoff frequency and rolloff parameters. I am new for signal processing, so I'm not quite sure the limitation to determine the parameters. When you say to make the rolloff as large as possible, is that mean to make the transition bandwith is smaller (the slope is steeper)?
As large as possible means to make the transition bandwitdh larger, so the that the slope is less steep. The effect of this is that the filter becomes shorter (has less coefficients). This can be important if you have large amounts of data or for real-time processing, because a short filter can be faster to compute (depending on its implementation). You can play with this on fiiir.com. To determine the cutoff frequency and rolloff, you have to look at your data: which frequencies do you want to let through, and which ones do you want to block? That's what will determine your choice of parameters.
I am a bit confused. I thought an ideal case for the filter is when it has a straight down transition bandwidth (zero width), which means we need the roll-off as narrow as possible, thus the slope is steeper. This can increase the signal-to-noise ratio. Am I misunderstanding the terminologies? Thanks!
It all depends on your input signal: the rolloff only needs to be steep enough to filter out the unwanted frequencies. It's fine if it's steeper, but then your filter will be longer…
thank you TomRoelandts. the article really help.
Hi. I want to convolve an image with sinc(t) function for t>0. Does anyone can help me how I should do that?
Thanks for this. I was looking for the relation between cutoff freq and the sinc function on FIR filters and it was not until I found this that doubts disappeared. cheers
Hi, Tom:) I just took a General Education course on sound and music and wanted to create a talk about subtractive synthesis and this lowpass filter in code will really help my presentation! Thanks a lot! but for some reason, i cant seem to apply the filter to any music clips or any soundwaves i generate. There are many errors that are being thrown up. is it possible for you to show a sample of how to apply it to a sound clip that you can easily find from anywhere ie Youtube?
Hello Sean, This is a completely different problem from the filtering operation itself, of course. The thought of adding a post on this more technical aspect of audio processing has crossed my mind, but I am currently very busy professionally and don't have a lot of spare time for things like that. My advise would be to start by writing a Python program that reads an audio file, converts it to floating point numbers, converts it to integers again, and then writes it to a different file. If that works without problems, then adding the filter will be easy...
Hello Tom ! I used your FIIIR tools in order to construct some anti-aliasing filter. But I have some problems when the cut-off frequency is small for example I have a nice 81 pts FIR sinc*blackman with parameters (sampling rate;cut-off_freq;transition) = (1;0.25;0.0575) :) But when I want upsampling for example with a factor 10 (in order to apply a lowpass filter and decimate with a filter (sampling rate;cut-off_freq;transition) = (1;0.025;0.0575) the filter seems to become smooth in frequency domain. Is it normal ? Can't we preserve the sharp-cut off ?
Hi, Tom
Thanks very much for this useful site to design a low pass filter.
I am using this online tool for an audio stream of which sampling rate is 384KHz.
I need to cut off the stream around 300Hz - 1KHz. However, your tool allows the lowest cutoff frequency as low as 10% of the sampling rate, that is 3840Hz.
I wonder why there is such limitation in your tool. Is it the limit of a low pass FIR filter? And it would be much appreciated if you suggest how to get the coefficients for the filter with lower cutoff frequency.
Thank you again for this useful site.
Best regards,
Jason.
It's 1%, of course. But you are right that the limit is arbitrary. However, there is a practical reason: The length of the filter is determined by the transition bandwidth, and I wanted to keep that length under 1000 (also for band-pass filters), so I set the limit of the transition bandwidth to 0.01. And then it doesn't make sense to allow a cutoff frequency that is much smaller…
As a practical solution, you could use the Python code from the this article directly. That has no such arbitrary limit.
Another option, to avoid that your filters become very long, is to filter in stages and subsample (i.e., throw away samples) in between stages.
Add new comment | https://tomroelandts.com/comment/1228 | CC-MAIN-2022-40 | refinedweb | 3,659 | 65.32 |
In the post I’ll show an example of how to extend the ASP.NET MVC HtmlHelper class that you can use within your MVC views. In the example I’ll provide a simple solution for building an Html table.
The HtmlHelper class is a class that can be used within ASP.NET MVC framework in order to help us render Html fragments of views. The class is provided with a lot of methods that can help you render Html types (textboxes, checkboxes etc) or Html parts (<form> for example). The ASP.NET MVC framework helpers are currently the following:
For example if I would like to render a checked checkbox with the name myChkbox I can do it like that in my view:
<%= Html.CheckBox("myChkbox", true) %>
All the Html helpers are built as extension methods and can be located in the System.Web.Mvc.Html namespace for review (if you like).
In my example I wrote an extension method for HtmlHelper to supply the Html table rendering method. The example is supplied as is and you can modify it or build your own example of how to render an Html table:
public static class MVCHelpers
{
public static string Table(this HtmlHelper helper, string name, IList items, IDictionary<string, object> attributes)
{
if (items == null || items.Count == 0 || string.IsNullOrEmpty(name))
{
return string.Empty;
}
return BuildTable(name, items, attributes);
}
private static string BuildTable(string name, IList items, IDictionary<string, object> attributes)
StringBuilder sb = new StringBuilder();
BuildTableHeader(sb, items[0].GetType());
foreach (var item in items)
{
BuildTableRow(sb, item);
TagBuilder builder = new TagBuilder("table");
builder.MergeAttributes(attributes);
builder.MergeAttribute("name", name);
builder.InnerHtml = sb.ToString();
return builder.ToString(TagRenderMode.Normal);
private static void BuildTableRow(StringBuilder sb, object obj)
Type objType = obj.GetType();
sb.AppendLine("\t<tr>");
foreach (var property in objType.GetProperties())
sb.AppendFormat("\t\t<td>{0}</td>\n", property.GetValue(obj, null));
sb.AppendLine("\t</tr>");
private static void BuildTableHeader(StringBuilder sb, Type p)
foreach (var property in p.GetProperties())
sb.AppendFormat("\t\t<th>{0}</th>\n", property.Name);
}
As you can see I’m extending the HtmlHelper with an extension method that is called Table. The main function to render a table is the BuildTable method that uses the ASP.NET MVC TagBuilder class in order to build the table tag. You can also see that in my example I use reflection to get the properties of the list of items I get and I use those properties as my header and their values as the values of the table cells.
If you want to use the new custom Html helper for table that I showed earlier you should do the following:
<%@ Import Namespace="TaskList.Models" %>
<%= Html.Table("myTable", (IList)ViewData.Model, null) %>
Lets sum up, in today’s post I introduced the HtmlHelper class and showed how to build a simple extension for that class. There is another way to extend the HtmlHelper class by building your own class (for example TableHelper) and all you have to do is to build methods that return the rendered html to be added to the view. The use of extension method is easier in my opinion.
You've been kicked (a good thing) - Trackback from DotNetKicks.com
Pingback from ASP.NET MVC Archived Blog Posts, Page 1
Thanks for sharing. A working and downloadable app would be even more helpful.
Pingback from Extending ASP.NET MVC HtmlHelper Class - Gil Fink on .Net
@cypher,
Thanks for the comment. You can copy the code I wrote to a class in your MVC Framework project and start working with it as described in the post.
Why make it take IList instead of IEnumerable?
it seems we need more work if we want to display a gridview like traditional aspx page
@Evan Freeman,
I could have used IEnumerable instead of IList. As I wrote "The example is supplied as is and you can
modify it or build your own example of how to render an Html table". I agree that in some situations IEnumerable is preferable then IList.
@francis,
Yes. We need a lot of work if we want to display a gridview like traditional aspx page. I suggest to look at Stephen Walther's example (Creating a DataGrid Helper) in the following link: stephenwalther.com/.../chapter-6-understanding-html-helpers.aspx
原文地址:ExtendingASP.NETMVCHtmlHelperClass在这篇帖子中我会使用一个示例演示扩展ASP.NETMVCHtmlHelper类,让它们可以在你的MVC视图中...
Thanks,
This helps alot as I'm still playing catchup with MVC.
Hi,
How can you say that this is better than the normal HTML tables ? Please let me know the advantages.
Thani
@Thanigainathan,
The solution is rendering Html tables from the List that is provided to the HtmlHelper's extension method that I wrote. Its advantage is by supporting the rendering of Html tables instead of writing your Html table using server script tags. The two methods (script tags and my solution) will give the same result (Html table).
Pingback from C#| [C#]??????ASP.NET MVC HtmlHelper??? | Mikel
Extending the ASP.NET MVC HtmlHelper in VB.NET Specifically
Extending ASP.NET MVC HtmlHelper Class
Pingback from Expand asp.net MVC HtmlHelper class (turn)
Pingback from ASP.NET MVC 3 Css & Js Otomatik Versiyonlama | HaKoSe | Web Developer
Pingback from ASP.NET MVC 3 Css & Js Otomatik Versiyonlama
Pingback from How to create MVC HtmlHelper table from list of objects | C Language Development | C Programming Language Tutorial | http://blogs.microsoft.co.il/blogs/gilf/archive/2009/01/13/extending-asp-net-mvc-htmlhelper-class.aspx | CC-MAIN-2013-20 | refinedweb | 890 | 66.03 |
Apple Inc (Symbol: AAPL). So this week we highlight one interesting put contract, and one interesting call contract, from the January 2019 expiration for AAPL. The put contract our YieldBoost algorithm identified as particularly interesting, is at the $85 strike, which has a bid at the time of this writing of $2.10. Collecting that bid as the premium represents a 2.5% return against the $85 commitment, or a 1.4% annualized rate of return (at Stock Options Channel we call this the YieldBoost ).
Turning to the other side of the option chain, we highlight one call contract of particular interest for the January 2019 expiration, for shareholders of Apple Inc (Symbol: AAPL) looking to boost their income beyond the stock's 1.6% annualized dividend yield. Selling the covered call at the $170 strike and collecting the premium based on the $7.60 bid, annualizes to an additional 2.9% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost ), for a total of 4.6% annualized rate in the scenario where the stock is not called away. Any upside above $170 would be lost if the stock rises there and is called away, but AAPL shares would have to advance 20.3% from current levels for that to happen, meaning that in the scenario where the stock is called, the shareholder has earned a 25.7% return from this trading level, in addition to any dividends collected before the stock was called.
Top YieldBoost AA. | http://www.nasdaq.com/article/interesting-january-2019-stock-options-for-aapl-cm762970 | CC-MAIN-2017-13 | refinedweb | 259 | 64.1 |
timer_settime, timer_gettime, timer_getoverrun - per-process timers (REALTIME)
#include <time.h> int timer_settime(timer_t timerid, int flags, const struct itimerspec *value, struct itimerspec zero, if the implementation supports the Realtime Signals Extension,, or if the Realtime Signals Extension is not supported, the meaning of the overrun count returned is undefined.
If the timer_settime() or timer_gettime() functions succeed, a value of 0 is returned. If an error occurs for either of these functions, the value -1 is returned, and errno is set to indicate the error. If the timer_getoverrun() function succeeds, it returns the timer expiration overrun count as explained above.
The timer_settime(), timer_gettime() and timer_getoverrun() functions will fail if:
- [EINVAL]
- The timerid argument does not correspond to an id returned by timer_create() but not yet deleted by timer_delete().
- [ENOSYS]
- The functions timer_settime(), timer_gettime(), and timer_getoverrun() are not supported by this implementation.
The timer_settime() function will fail if:
- [EINVAL]
- A value structure specified a nanosecond value less than zero or greater than or equal to 1000 million.
None.
None.
None.
clock_gettime(), timer_create(), <time.h>.
Derived from the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995) | http://www.opengroup.org/onlinepubs/007908799/xsh/timer_settime.html | crawl-002 | refinedweb | 184 | 53.71 |
2017-02-26 09:12 PM - edited 2017-02-26 09:15 PM
Hi,
I am planning to configure LS mirrors for root volumes of Vservers. We have a huge environment with SVMs hosting SAN-FC and NAS-NFS&CIFS.
Please let me know if there will be any impact while configuring LS mirrors on production system. Do we need downtime if yes how much and what would be the Impact.
Its FAS8060 / 8.3.2P6 = 4 node cluster
Thanks in Advance :-)
Solved! SEE THE SOLUTION
2017-02-27 08:07 PM
Hi,
Setting up LS mirrors does not need downtime. Please refer for more information.
2017-02-27 10:33 PM
Something to consider is that changed export rules will not propogate until the LS mirror syncs - this means that there could be some delay in using new datastores or new hosts.
But in general - you do want the LS mirrors in place, in case of cluster partition you will still have some data service occuring.
2017-03-01 06:05 AM
Good point..I prefer using DP mirrors (there is a KB on doing that instead).. then you don't have to update the mirror after any change for users to see the namespace change. The difference if you recover vsroot (never have had to do that) then you need to break the mirror and make-vsroot the volume instead of the single snapmirror promote command for LS mirrors. If you have a snapmirror license, I prefer this method... also I just heard from support that the new best practice is we only need 2 vsroot mirrors regardless of number of nodes in the cluster, so a 12 node cluster doesn't need 12 mirrors per SVM anymore.
2017-03-01 06:08 AM
here is the kb article to restore vsroot from a DP mirror. a reference in case you need official support... | http://community.netapp.com/t5/OnCommand-Storage-Management-Software-Discussions/LS-Mirror/td-p/128457 | CC-MAIN-2017-34 | refinedweb | 316 | 72.05 |
Hi, I have some problems here.
I’m using Panda3D v1.2.1 and I want to use the dynamic cube map by executing makeCubeMap() function.
So I executed the dynamic cube mapping example provided in the manual.
I simply use my own geometry for the scene and keep the teapot for the shiny object.
But it generated error during the cameras assertion process.
Here is the errror message:
Assertion failed: (mask & PandaNode::get_overall_bit()).is_zero() at line 112 of
c:\temp\mkpr\panda3d-1.2.1\built\include\camera.I
This message occured for all cameras, that is 6 times.
Here is my import section:
from pandac.PandaModules import *
…(changing some config vars).
import direct.directbase.DirectStart
from direct.interval.IntervalGlobal import *
from direct.gui.DirectGui import *
from direct.showbase.DirectObject import *
from math import *
import sys
Where is actually the erronous part?
Is it in the module “camera.i” or within my scene?
I may miss some modules to import.
I didn’t supply the bitmask for the cameras, so every object should be visible to the cameras.
This is my 2nd problem:
- Can we move the local pivot of a NodePath (lets name it ‘MYNODE’) ?
I used to solve this problem by attaching a new NodePath (lets name it ‘NEWNODE’) on the MYNODE’s topmost parent , so it’s located at the topmost parent’s origin. And then I move NEWNODE to the center of MYNODE by calling :
NEWNODE.setPos(MYNODE.getBounds().getCenter())
and then reparent MYNODE with respect to NEWNODE, so if I rotate NEWNODE, I get MYNODE rotated at it’s centerpoint.
It works well if I want to rotate MYNODE at it’s centerpoint, but how if I want to rotate it NOT at it’s centerpoint?
Lets say that now I want the centerpoint located at the center of MYNODE’s parent (lets name it ‘MYNODE-PARENT’, it’s not the topmost parent), and I set a simple 1 faced geometry as this node. Then I should proceed as the above solving, but now the boundingsphere is calculated for MYNODE-PARENT and its children too.
So, I call getInternalBound() to get the boundingsphere for MYNODE-PARENT only (without the chidren).
Function getInternalBound() is a property of PandaNode, so I call it this way:
NEWNODE.setPos(MYNODE-PARENT.node().getInternalBound().getCenter())
and the result was ‘isEmpty’ error, like there wasn’t any geometry on MYNODE-PARENT.
What is wrong?
The pivot point of every nodepath is kept in the nodepath, isn’t it?
How can I flush this pivot matrix to the nodepath’s translation matrix as addition and then the pivot matrix becomes zero matrix, so the pivot is located at the center and the previous pivot is considered as translation addition from the parent’s nodepath ?
This is my 3rd problem:
I tried to apply antialiasing to my scene, but I still don’t understand how I should type the antialias mode.
I read the reference about AntialiasAttrib and I found the modes are M_none, M_auto, M_point, M_line, M_polygon, etc.
When calling NodePath.setAntialias(mode,priority), the mode should appear like this :
MNone, MAuto, MPoint, MLine, MPolygon, or etc.
Is this correct ?
Please help me, thanks a lot. | https://discourse.panda3d.org/t/major-problems-here-anyone-interested/1312 | CC-MAIN-2022-27 | refinedweb | 535 | 58.69 |
Using Workflow app with Pythonista
I'm just beginning to play with the new iOS Workflow app and trying to understand how it works with Pythonista. I was trying to see if I could create a WF which simply ran a script and spoke the output (obviously I don't need WF to do this, but it is simply a way to test how data can flow between the apps). I must be missing something elementary, but if my script prints text to the console, how do I access that in WF? Also, it would seem that WF can't call a script without requiring that the user manually return to WF after the script executes. Is that correct?
If others have written any scripts or workflows that interact, it would be great if you could post them.
OK, using the clipboard (as opposed to writing to console) brings the output back to WF. However, it still requires the manual action of returning from Pythonista to WF. I guess that is unavoidable?
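For anyone trying the same round-trip, here is a minimal sketch of what such a script could look like. This is an assumption-heavy sketch based on this thread, not an official recipe: it assumes the workflow copies its input to the clipboard before launching the script via a `pythonista://` URL, `process()` is just a placeholder for the real work, and the `clipboard` module only exists inside Pythonista, so the glue is kept in `main()`:

```python
import webbrowser

def process(text):
    # Placeholder for whatever the workflow actually needs done.
    return text.upper()

def main():
    import clipboard  # Pythonista-only module

    # The workflow put its input on the clipboard before opening us.
    result = process(clipboard.get())

    # Put the result back where a "Get Clipboard" action can read it.
    clipboard.set(result)

    # Jump back to the Workflow app (assumed URL scheme from this thread).
    webbrowser.open('workflow://')

# Inside Pythonista you would simply call main() here.
```

In the workflow itself this would pair with a Copy to Clipboard action before the Open URL action, and a Get Clipboard action after returning.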
Here's a little script to extract the workflow URL from the Add to Home Screen page:
```python
import sys, clipboard, base64, re, console

orig = sys.argv[1] if len(sys.argv) > 1 else clipboard.get()
data = orig.replace('data:text/html;base64,', '')
page = base64.b64decode(data)
match = re.search('.*<a id="jump" href="([^"]+)">.*', page)
clipboard.set(match.group(1))
console.hud_alert('Workflow URL copied.')
```
And a companion bookmarklet:
```javascript
javascript:location.href='pythonista://Get%20WorkflowApp%20URL.py?action=run&argv='+encodeURIComponent(location.href);
```
You can use this and the clipboard, but unfortunately you're going to have to split your workflow in two. I suppose it may be possible to use just
workflow000000://to return to Workflow?
Just got Workflow myself (and it looks extremely promising) and wrote a quick test workflow/script pair. I get a text input from WF and pass it to a py script, which prints it out with the regular
dgelessus: The Run Script action in Workflow passes its input to its output unchanged. Your script is not running at all, I guess…
@dgelessus That's interesting. When you say the print output shows up in WF, how are you accessing it in the workflow? Also, I realized that I could get the script to go back to WF by just opening the url workflow://.
My conclusion is that Workflow.app's pythonista action is very very broken and we may just have to wait until it is fixed.
webbrowser.open("workflow000000://")does work for switching back to the app though.
@0942v8653, just noticed that as well. For whatever reason the script indeed is not actually run at all, neither a
print("something")nor
console.alert("something")show up. (I'm using Beta 1.6 of Pythonista btw.)
I filed a bug with Workflows - the bug occurs when passing anything to Pythonista via Workflows - the Pythonista script will not run. Looking on Twitter seems like it's a widespread bug.
OK, I experimented with a few things. x-callback-url is not an option due to Pythonista not actually supporting it. Giving Workflow a Pythonista script run URL as the x-callback-url doesn't help either, because the callback part starts with an unescaped & character and thus none of it lands in the &args= part of the URL. And Workflow requires the callback to be to a GUID unique to each run of the workflow, so we can't blindly send a callback to Workflow.
Here's a slightly crazy idea that might work though (untested):
- Get some input in Workflow.
- Run a Pythonista script with the input as args.
- Have that script run a HTTP server on some obscure port.
- Back in Workflow, send some request to that port.
- The script will return the desired output.
- Workflow receives output and sends a special request to shut the server down.
- Script does as it is asked to and ends with
webbrowser.open("workflow://").
- Workflow has its output.
This is way more complicated than it should be and might not even work. But if everything else fails I suppose there is always a way :P
(@omz, can we have an x-callback-url-compatible version of the URL scheme? Pretty please?)
dgelessus: Implemented!
from SimpleHTTPServer import SimpleHTTPRequestHandler import BaseHTTPServer import webbrowser stopping = False class StoppableHTTPServer (BaseHTTPServer.HTTPServer): def serve_forever(self, poll_interval=0.5): global stopping while not stopping: self._handle_request_noblock() class RequestHandler (SimpleHTTPRequestHandler): def do_GET(self): global stopping self.send_response(200) self.send_header("Content-type", "text/plain") self.end_headers() self.wfile.write('hello') stopping = True serv = StoppableHTTPServer(('', 25565), RequestHandler) port = serv.server_address[1] webbrowser.open('workflow://') serv.serve_forever()
The workflow is just a Pythonista Run action, a URL action with "localhost:25565", and a Get Contents of URLs action.
If you're just starting another workflow (not continuing the current one) you can also use the
inputparameter:
webbrowser.open('workflow://x-callback-url/run-workflow?name=PythonistaTest&input='+urllib.quote('Hi!'))
Hey there - this is Ari, one of the creators of Workflow. Awesome that you all are trying to get this to work!
Workflow should work great with Pythonista, but as some of you mentioned, there is currently an issue which prevents Workflow's Pythonista action from working correctly. This will be fixed in an update this week! Once the update is out, here is the general process I've used for integrating workflows with Pythonista scripts:
Make a new workflow with some sort of content to be passed to the Pythonista script. For example, maybe a Text action. Then add Run Script and Get Clipboard.
Make a corresponding Pythonista script and put its name into the Run Script action in your workflow. Start with this as your python script:
import sys import console import clipboard import webbrowser console.alert('argv', sys.argv.__str__(), 'OK') clipboard.set('here is some output!') webbrowser.open('workflow://')
This example shows how Workflow can provide input to the Python script (in this case, the Python script will show its input as an alert), and how the Python script can pass output back to Workflow via the clipboard.
(Optionally, you could preserve the clipboard by backing it up when running the workflow. At the beginning of your workflow, add Get Cilpboard and Set Variable, and give the variable a name. Then, at the end of the workflow, add Get Variable followed by Set Clipboard.)
By the way, before the update comes out, you can simulate this behavior by doing the following in place of the Run Script action:
Add a URL action, followed by Open URL and Wait to Return. Set the URL to something along the lines of along the lines of
pythonista://[[script name]]?action=run&argv=[[some argument to pass, can be a variable]]
We don't have URL encode actions yet, so this may be kind of limited at the moment.
Let me also second the request to @omz to make Pythonista x-callback compliant ;)
@AriX Thanks for including workflow into pythonista. This will be quite fun :) I do believe the correct url to pass pythonista argv is 'args' not 'argv'
<pre>
pythonista://[scriptname]?action=run&args=[argv values]
</pre>
@briarfox Either works - argv lets you pass arguments one-by-one, and is what we use inside Workflow. See the documentation.
- TutorialDoctor
I think for now I will use Editorial, which does have x-callback.
Edit: A simple "Console Output" Action in Editorial automatically jumps back to Workflow.
I wonder if Workflow could be used to make an "open in Pythonista" action since it had to be removed from the app natively. I haven't had a chance to mess with Workflow yet, but if this could work it would make my life much easier.
@Omega0, you can already do this using a bookmarklet in safari. See for example
gistcheck
You create a bookmarklet in safari, then when viewing a gist you click the bookmark, and it downloads and opens in pythonista. I'm not sure what the use case is that you are trying to solve, but with slight modifications you could make this more general.
@JonB,
I was unaware of the Safari bookmark method, but what I was looking for was something more along the lines of loading a PDF from the Google Drive app. Which I don't think would be possible with a bookmark.
Omega0: That would be really cool, but unless workflow includes their own web server (or at the very least a base64 conversion action) at some point, it doesn't seem possible.
Edit: I guess if there was an upload action that would work too. Right now the fastest way seems to be using Dropbox :( | https://forum.omz-software.com/topic/1918/using-workflow-app-with-pythonista | CC-MAIN-2017-47 | refinedweb | 1,447 | 64.91 |
Web scraping is extracting data from websites. It is a form of copying, in which specific data is gathered and copied from the web into a central local database or spreadsheet for later analysis or retrieval.
Since YouTube is the biggest video sharing website in the internet, extracting data from it can be very helpful, you can find the most popular channels, keeping track on the popularity of channels, recording likes, dislikes and views on videos and much more. In this tutorial, you will learn how to extract data from YouTube videos using requests and BeautifulSoup in Python.
Installing required dependencies:
pip3 install requests bs4
Before we dive into the quick script, we gonna need to experiment on how to extract such data from websites using BeautifulSoup, open up a Python interactive shell and write this lines of code:
import requests from bs4 import BeautifulSoup as bs # importing BeautifulSoup # sample youtube video url video_url = "" # get the html content content = requests.get(video_url) # create bs object to parse HTML soup = bs(content.content, "html.parser") # write all HTML code into a file open("video.html", "w", encoding='utf8').write(content.text)
This will create a new HTML file in the current directory, open it up on a browser and see how BeautifulSoup will see the YouTube video web page.
When you scroll a little bit down in the web page, you will see the number of views of the video, right click and click Inspect (atleast in Chrome) as shown in the following figure:
You will see the HTML tag element which contains that information:
As you can see, the number of video views is wrapped in a div with a class of "watch-view-count". This is trivial to extract in BeautifulSoup:
In [17]: soup.find("div", attrs={'class': 'watch-view-count'}).text Out[17]: '9,072 views'
This way, you will be able to extract everything you want from a web page. Now let's make our script, open up a new python file and follow along:
Importing necessary modules:
import requests from bs4 import BeautifulSoup as bs
Let's make a function, given an URL of a YouTube video, it will return all the data in a dictionary:
def get_video_info(url): # download HTML code content = requests.get(url) # create beautiful soup object to parse HTML soup = bs(content.content, "html.parser") # initialize the result result = {}
Retrieving the video title:
# video title result['title'] = soup.find("span", attrs={"class": "watch-title"}).text.strip()
The video title is in a span tag with the attribute class of "watch-title", the above line extracts it.
Number of views converted to an integer:
# video views (converted to integer) result['views'] = int(soup.find("div", attrs={"class": "watch-view-count"}).text[:-6].replace(",", ""))
Get the video description:
# video description result['description'] = soup.find("p", attrs={"id": "eow-description"}).text
The date when the video was published:
# date published result['date_published'] = soup.find("strong", attrs={"class": "watch-time-text"}).text
The number of likes and dislikes as integers:
# number of likes as integer result['likes'] = int(soup.find("button", attrs={"title": "I like this"}).text.replace(",", "")) # number of dislikes as integer result['dislikes'] = int(soup.find("button", attrs={"title": "I dislike this"}).text.replace(",", ""))
Since in a YouTube video, you can see the channel details, such as the name, and number of subscribers, let's grab that as well:
# channel details channel_tag = soup.find("div", attrs={"class": "yt-user-info"}).find("a") # channel name channel_name = channel_tag.text # channel URL channel_url = f"{channel_tag['href']}" # number of subscribers as str channel_subscribers = soup.find("span", attrs={"class": "yt-subscriber-count"}).text.strip() result['channel'] = {'name': channel_name, 'url': channel_url, 'subscribers': channel_subscribers} # return the result return result
Since soup.find() function returns a Tag object, you can still find HTML tags within other tags. As a result, It is a common practice to call find() more than once.
Now, let's finish up our script:
if __name__ == "__main__": import argparse parser = argparse.ArgumentParser(description="YouTube Video Data Extractor") parser.add_argument("url", help="URL of the YouTube video") args = parser.parse_args() url = args.url # get the data data = get_video_info(url) # print in nice format print(f"Title: {data['title']}") print(f"Views: {data['views']}") print(f"\nDescription: {data['description']}\n") print(data['date_published']) print(f"Likes: {data['likes']}") print(f"Dislikes: {data['dislikes']}") print(f"\nChannel Name: {data['channel']['name']}") print(f"Channel URL: {data['channel']['url']}") print(f"Channel Subscribers: {data['channel']['subscribers']}")
Nothing special here, since we need a way to retrieve the video URL from the command line, the above does just that, and then print it in a format, here is my output when running the script:
C:\youtube-extractor>python extract_video_info.py Title: Me at the zoo Views: 75909913 Description: The first video on YouTube. Maybe it's time to go back to the zoo?sub2sub kthxbai -- fast and loyal if not i get a subs back i will unsubs your cahnnel(Credit: The name of the music playing in the background is Darude - Sandstorm) Published on Apr 23, 2005 Likes: 2337823 Dislikes: 81210 Channel Name: jawed Channel URL: Channel Subscribers: 616K
This is it! Now you can not only extract YouTube video details, you can apply this skill to any website you want. If you want to extract Wikipedia pages, there is a tutorial for that ! Or maybe you want to scrape weather data from Google? There is a tutorial for that as well.
Learn also: How to Convert HTML Tables into CSV Files in Python.
Happy Scraping ♥View Full Code | https://www.thepythoncode.com/article/get-youtube-data-python | CC-MAIN-2020-16 | refinedweb | 927 | 64 |
Description
Jsleri is an easy-to-use parser created for SiriDB. We first used lrparsing and wrote jsleri for auto-completion and suggestions in our web console. Later we found small issues in lrparsing and also had difficulties keeping the language the same in both projects. That is when we decided to create both Jsleri and Pyleri where Pyleri can export it's grammar to JavaScript. Ofcourse you still can write your grammar in Javascript too.
Javascript Left-Right Parser alternatives and similar libraries
Based on the "Code highlighting" category.
Alternatively, view Javascript Left-Right Parser alternatives based on common mentions on social networks and blogs.
Highlight.js9.1 9.5 L2 Javascript Left-Right Parser VS Highlight.jsJavaScript syntax highlighter with language auto-detection and zero dependencies.
PrismJS7.9 9.2 L2 Javascript Left-Right Parser VS PrismJSLightweight, robust, elegant syntax highlighting.
* Code Quality Rankings and insights are calculated and provided by Lumnify.
They vary from L1 to L5 with "L5" being the highest.
Do you think we are missing an alternative of Javascript Left-Right Parser or a related project?
README
Javascript Left-Right Parser
Jsleri is an easy-to-use language parser for JavaScript.
- Installation
- Related projects
- Quick usage
- Grammar
- Result
- Elements
Installation
Using npm:
$ npm i jsleri
In your project:
import * as jsleri from 'jsleri'; // Exposes: // - jsleri.version // - jsleri.noop // - jsleri.Keyword // - jsleri.Regex // - jsleri.Token // - jsleri.Tokens // - jsleri.Sequence // - jsleri.Choice // - jsleri.Repeat // - jsleri.List // - jsleri.Optional // - jsleri.Ref // - jsleri.Prio // - jsleri.THIS // - jsleri.Grammar // - jsleri.EOS
Or... download the latest release from here and load the file in inside your project. For example:
<!-- Add this line to the <head> section to expose window.jsleri --> <script src="jsleri-1.1.6.min.js"></script>
Related projects
- pyleri: Python parser (can export grammar to pyleri, cleri and jsleri)
- libcleri: C parser
- goleri: Go parser
- jleri: Java parser
Quick usage
import { Regex, Keyword, Sequence, Grammar } from 'jsleri'; // create your grammar class MyGrammar extends Grammar { static START = Sequence( Keyword('hi'), Regex('(?:"(?:[^"]*)")+') ); } // create a instance of your grammar const myGrammar = new MyGrammar(); // do something with the grammar alert(myGrammar.parse('hi "Iris"').isValid); // alerts true alert(myGrammar.parse('hello "Iris"').isValid); // alerts false
Grammar
When writing a grammar you should subclass Grammar. A Grammar expects at least a
START property so the parser knows where to start parsing. Grammar has a parse method:
parse().
parse
syntax:
myGrammar.parse(string)
The
parse() method returns a result object which has the following properties that are further explained in Result:
expecting
isValid
pos
tree
Result
The result of the
parse() method contains 4 properties that will be explained next.
isValid
isValid returns a boolean value,
True when the given string is valid according to the given grammar,
False when not valid.
node_result.isValid) # => False
Let us take the example from Quick usage.
alert(myGrammar.parse('hello "Iris"').isValid); // alerts false
Position
pos returns the position where the parser had to stop. (when
isValid is
True this value will be equal to the length of the given string with
str.rstrip() applied)
Let us take the example from Quick usage.
alert(myGrammar.parse('hello "Iris"').pos); // alerts 0
Tree
tree contains the parse tree. Even when
isValid is
False the parse tree is returned but will only contain results as far as parsing has succeeded. The tree is the root node which can include several
children nodes. The structure will be further clarified in the example found in the "example" folder. It explains a way of visualizing the parse tree.
The nodes in the example contain 5 properties:
startproperty returns the start of the node object.
endproperty returns the end of the node object.
elementreturns the type of Element (e.g. Repeat, Sequence, Keyword, etc.).
stringreturns the string that is parsed.
childrencan return a node object containing deeper layered nodes provided that there are any. In our example the root node has an element type
Repeat(), starts at 0 and ends at 24, and it has two
children. These children are node objects that have both an element type
Sequence, start at 0 and 12 respectively, and so on.
Expecting
expecting returns an array containing elements which jsleri expects at
pos. Even if
isValid is true there might be elements in this object, for example when an
Optional() element could be added to the string. Expecting is useful if you want to implement things like auto-completion, syntax error handling, auto-syntax-correction etc. In the "example" folder you will find an example. Run the html script in a browser. You will see that
expecting is used to help you create a valid query string for SiriDB. SiriDB is an open source time series database with its own grammar class. Start writing something, click one of the options that appear and see what happens.
Elements
Jsleri has several Elements which can be used to create a grammar.
Keyword
Keyword(keyword, ignCase)
The parser needs to match the keyword which is just a string. When matching keywords we need to tell the parser what characters are allowed in keywords. By default Jsleri uses
^\w+ which equals to
^[A-Za-z0-9_]+. Keyword() accepts one more argument
ignCase to tell the parser if we should match case insensitive.
Example:
const grammar = new Grammar( Keyword('tic-tac-toe', true), // case insensitive '[A-Za-z-]+' // alternative keyword matching ); console.log(grammar.parse('Tic-Tac-Toe').isValid); // true
Regex
Regex(pattern, ignCase)
The parser compiles a regular expression. Argument ignCase is set to
false by default but can be set to
true if you want the regular expression to be case insensitive. Note that
ignore case is the only
re flag from pyleri which will be compiled and accepted by
jsleri.
See the Quick Usage example for how to use
Regex.
Token
Token(token)
A token can be one or more characters and is usually used to match operators like
+,
-,
// and so on. When we parse a string object where jsleri expects an element, it will automatically be converted to a
Token() object.
Example:
// We could just write '-' instead of Token('-') // because any string will be converted to Token() const grammar = new Grammar(List(Keyword('ni'), Token('-'))); console.log(grammar.parse('ni-ni-ni-ni-ni').isValid); // true
Tokens
Tokens(tokens)
Can be used to register multiple tokens at once. The tokens argument should be a string with tokens separated by spaces. If given tokens are different in size the parser will try to match the longest tokens first.
Example:
const grammar = new Grammar(List(Keyword('ni'), Tokens('+ - !='))); grammar.parse('ni + ni != ni - ni').isValid // => True
Sequence
Sequence(element, element, ...)
The parser needs to match each element in a sequence.
Example:
const grammar = new Grammar(Sequence( Keyword('Tic'), Keyword('Tac'), Keyword('Toe') )); console.log(grammar.parse('Tic Tac Toe').isValid); // true
Repeat
Repeat(element, mi, ma)
The parser needs at least
mi elements and at most
ma elements. When
ma is set to
undefined we allow unlimited number of elements.
mi can be any integer value equal or higher than 0 but not larger then
ma. The default value for
mi is 0 and
undefined for
ma
Example:
const grammar = new Grammar(Repeat(Keyword('ni'))); console.log(grammar.parse('ni ni ni ni').isValid); // true
One should avoid to bind a name to the same element twice and Repeat(element, 1, 1) is a common solution to bind the element a second (or more) time(s).
For example consider the following:
const r_name = Regex('(?:"(?:[^"]*)")+'); // Do NOT do this const r_address = r_name; // WRONG // Instead use Repeat const r_address = Repeat(r_name, 1, 1); // Correct
List
List(element, delimiter, mi, ma, opt)
List is like Repeat but with a delimiter. A comma is used as default delimiter but any element is allowed. When a string is used as delimiter it will be converted to a Token element.
mi and
ma work excatly like with Repeat.
opt kan be set to set to
true to allow the list to end with a delimiter. By default this is set to
false which means the list has to end with an element.
Example:
const grammar = new Grammar(List(Keyword('ni'))); console.log(grammar.parse('ni, ni, ni, ni, ni').isValid); // true
Optional
Optional(element)
The parser looks for an optional element. It is like using
Repeat(element, 0, 1) but we encourage to use
Optional since it is more readable. (and slightly faster)
Example:
const grammar = new Grammar(Sequence( Keyword('hi'), Optional(Regex('(?:"(?:[^"]*)")+')) )); console.log(grammar.parse('hi "Iris"').isValid); // true console.log(grammar.parse('hi').isValid); // true
Ref
Ref(Constructor)
The grammar can make a forward reference to make recursion possible. In the example below we create a forward reference to START but note that a reference to any element can be made.
Warning: A reference is not protected against testing the same position in in a string. This could potentially lead to an infinite loop. For example:
let r = Ref(Optional); r.set(Optional(r)); // DON'T DO THIS
Use Prio if such recursive construction is required.
Example:
// make a forward reference START to a Sequence. let START = Ref(Sequence); // we can now use START const ni_item = Choice(Keyword('ni'), START); // here we actually set START START.set(Sequence('[', List(ni_item), ']')); // create and test the grammar const grammar = Grammar(START); console.log(grammar.parse('[ni, [ni, [], [ni, ni]]]').isValid); // true
Prio
Prio(element, element, ...)
Choose the first match from the prio elements and allow
THIS for recursive operations. With
THIS we point to the
Prio element. Probably the example below explains how
Prio and
THIS can be used.
Note: Use a Ref when possible. A
Prioelement is required when the same position in a string is potentially checked more than once.
Example:
const grammar = new Grammar(Prio( Keyword('ni'), Sequence('(', THIS, ')'), Sequence(THIS, Keyword('or'), THIS), Sequence(THIS, Keyword('and'), THIS) )); console.log(grammar.parse('(ni or ni) and (ni or ni)').isValid); // true | https://js.libhunt.com/jsleri-alternatives | CC-MAIN-2021-43 | refinedweb | 1,652 | 57.87 |
No Joke IT — Information technology notes, tips and rants worth sharing. By SixDimensionalArray.<br /><br /><b>Shutting down ESXi 5.1 guest VMs and the host (free edition) via SSH - the easy way!</b><br /><br />Thanks go out to reader Everett for pointing out an easier way to gracefully shut down guest VMs and the host on a VMware ESXi 5.1 (free) server. <br /><br />This is much easier than the method described in the <a href="" target="_blank">previous post</a>.<br /><br />You may want to gracefully shut down your guest VMs and host ESXi 5.1 server via SSH, for example, on the triggering of a UPS power outage event or something similar. <br /><br />The method is as follows:<br /><br />1) Install VMware Tools in all guest VMs.<br /><br />2) Make sure each guest VM is set up to perform the shutdown action "Guest Shutdown" (or you could also use a suspend, if you wanted to) in the virtual host settings "Virtual Machine Startup and Shutdown" section.<br /><br />3) The following two commands, run in sequence, will shut down the properly configured guest VMs and the host server as well:<br /><br />/sbin/shutdown.sh && /sbin/poweroff<br /><br />These commands can be run in sequence via an SSH connection from another system (for example, a batch file and plink on Windows, on a machine running a UPS). The poweroff will only run if the shutdown.sh script runs successfully.<br /><br />4) That's it!<br /><br />Thanks Everett!<br /><br /><b>Graceful shutdown of an ESXi 5.1 host and guest VMs (free edition) using the shell/command line/scripting (UPS friendly)</b><br /><br /><i>Update 2/11/2013: A much easier method for doing this has been <a href="" target="_blank">documented in this blog post</a>. Thanks to reader Everett for the suggestion!</i><br /><br /><i>Update 2/7/13: A shell script that does what this post describes has been <a href="" target="_blank">posted at github</a>. 
Enjoy!</i><br /><br />On a single ESXi 5.1 host (INCLUDING the free edition), I have been able to gracefully shut down, power off or reboot the host and guest VMs using the commands documented below from the ESXi 5.1 shell.<br /><br />You may want to do this in response to an uninterruptible power supply (UPS) power failure event trigger. In that case, you will need to install at least one guest VM (consider the <a href="" target="_blank">VMware Virtual Management Appliance</a>) that can run your UPS' software or Linux's <a href="" target="_blank">Network UPS Tools (NUT)</a>.<br /><br />Or you might just want to shut things down or do other maintenance via the shell/command line, which these commands allow you to do.<br /><br /><b>The two command-line tools used here are vim-cmd and esxcli.</b><br /><br />If you type vim-cmd, or vim-cmd <namespace>, the tool has pretty good command-line help for figuring out what it can do - and that is quite a bit! <br /><br />NOTE: I have not seen this method documented elsewhere, so you must assume this method is not officially supported by VMware - but it seems to work fine (and it may be improved on as well)!<br /><br /><b>Command List/Sequence:</b><br /><br /><b>1) List all VMs</b><br /><br />~ # vim-cmd vmsvc/getallvms<br /><br /><b>2) Gracefully shut down a VM (uses the VM's "world id") - you can also use power.off, power.reboot, power.suspend, etc.</b><br /><br />~ # vim-cmd vmsvc/power.shutdown <VM/"world id" from step 1><br /><br /><b>3) Enter maintenance mode (immediately with no delay; this can only be done if ALL guest VMs have been shut down)</b><br /><br />~ # esxcli system maintenanceMode set -e true -t 0<br /><br /><b>4) Shut down the ESXi host server</b><br /><br />~ # esxcli system shutdown poweroff -d 10 -r "Shell initiated system shutdown"<br /><br /><b>5) Try to exit maintenance mode real quick before shutdown!</b><br /><br />~ # esxcli system maintenanceMode set 
-e false -t 0<br /><br />If step #5 does not succeed, your system will reboot in maintenance mode and you will have to manually take the system out of maintenance mode and restart your guest VMs.<br /><br />These commands can be built into a simple shell script that you can then deploy on the ESXi host server itself. I have written one such script, and you can download it from GitHub.<br /><br /><a href="" target="_blank">Download esxidown (via github)</a><br /><br />There may be more information available on <a href="" target="_blank">this VMware forums post</a> (11/30/2012).<br /><br /><b>Chrome pages not loading, pages appear gray</b><br /><br /><b>Update 11/27/2012 7:25PM:</b><br /><br />More information about this virus is available on this <a href="" target="_blank">Microsoft website</a> and on <a href="" target="_blank">VirusTotal</a>.<br /><br />Scans with the latest up-to-date version of Microsoft's Security Essentials caught the virus (and hopefully other anti-virus vendors have now implemented signatures for it as well).<br /><br />Try an updated anti-virus scan and see if it resolves your issue!<br /><br /><b>Update 9/6/2012 11:08AM</b> - <a href="" target="_blank">see this post on the Google product forums</a>.<br /><br />The gray pages look like this:<br /><br /><i>[screenshot: Chrome rendering pages as solid gray]</i><br /><br />I located the folder in which Chrome is installed. In this case (Windows 7 64-bit), it was:<br /><br />C:\Users\<your username>\AppData\Local\Google\Chrome\Application<br /><br />where <your username> is the Windows user account you used to log on. 
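As a quick sanity check before trying the fix, a small script can confirm whether the fallback binary is actually present. This is an illustrative sketch only — the folder layout assumes a default per-user Chrome install on Windows, and the helper names are my own, not anything shipped by Google:

```python
# Sketch: build the expected path to Chrome's old_chrome.exe fallback
# binary and check whether it exists. Assumes the default per-user
# install location under %LOCALAPPDATA% described above.
import ntpath  # Windows-style path joining, regardless of host OS
import os


def old_chrome_path(local_appdata):
    """Build the expected path to old_chrome.exe from %LOCALAPPDATA%."""
    return ntpath.join(local_appdata, "Google", "Chrome",
                       "Application", "old_chrome.exe")


def has_old_chrome(local_appdata):
    """True only if the fallback binary actually exists on disk."""
    return os.path.exists(old_chrome_path(local_appdata))


if __name__ == "__main__":
    # On a real Windows box, %LOCALAPPDATA% comes from the environment:
    base = os.environ.get("LOCALAPPDATA",
                          r"C:\Users\<your username>\AppData\Local")
    print(old_chrome_path(base))
```

If the path prints but the file is missing, the temporary fix below will not apply and you would need another workaround.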
<br /><br /><b>TEMPORARY/POTENTIAL FIX:</b><br /><br />Although it is odd and may not work for everyone, I was able to run the program "C:\Users\<your username>\AppData\Local\Google\Chrome\Application\old_chrome.exe", and it loaded up a previous version of Chrome which loaded normally, as expected.<br /><br /><b>Navigate using Windows Explorer in Windows 7 to:</b><br /><br />C:\Users\<username>\AppData\Local\Google\Chrome\Application\old_chrome.exe<br /><br /><b>Run "old_chrome" just once and then close the program and run Chrome as you normally do.</b><br /><br />I'm not completely sure why it fixed the problem (for that, we will have to wait and see what Google says regarding the issue), but it did.<br /><br />Here is a <a href="" target="_blank">related post ("Page not loading in Chrome") on the Google product forums</a> that might help you if the fix above doesn't solve the problem for you.<br /><br />Looks like a little Google bug. Oops!<br /><br /><b>The Curse of "Being the IT Guy/Gal"</b><br /><br />This is a truth that my fellow "IT Guys/Gals" can most likely identify with: after a long day on the job, you <i>really</i> don't want to mess with your <b>own</b> personal IT stuff.<br /><br />I find it especially true for folks like me whose hobby became their job. Hey, I'm grateful I have a job, and damn lucky my hobby fit the bill - don't get me wrong. That said, if you happen to be the local IT hero or <a href="">MacGyver programmer</a> of your office, where your day is spent doing anything from fixing printers, to writing shell scripts in Linux, to supporting legacy code, answering that support call (or 100s of them) or writing reams of new code - you know exactly what I'm talking about.<br /><br />
<br /><br />Maybe better said, even if you love technology and want to mess with it 24/7, eventually, it will mess with <b>you</b> and when you feel that intense need to take a break and do something else - DO IT!<br /><br />After all, if you don't, the machines win. And we can't have that happen, can we? It didn't go too well for <a href="">John Connor</a>.<br /><br />Actually... envisioning my router with glowing blue LEDs as a <a href="">T2</a>... oh brother.. we're already there. Haha, pulled the power cable - die T2 die!! Oh crap, maybe I should plug that backbone connection back in. I felt the power and now I feel the pain! <br /><br />Heed the call fellow IT warriors, for constant IT work at home and on the job is a surefire path to burnout! Remember to have some fun once in a while! <br /><br />Now to try to take my own advice, and put down the keyboard and mouse for you know, ten minutes. At least until the next server alert or upgrade or status bar appears.<img src="" height="1" width="1" alt=""/>SixDimensionalArray 5.0 Auto-Start Broken - Fix/Patch Released!!If <a href="" target="_blank">revert back to the old version (5.0.0)</a>. <br /><br />It appears that a <a href="">patch has been released</a> - see this <a href="" target="_blank">VMWare blog post</a> for more information. Here's the direct link to the patch on the VMWare website: <a href="">ESXi500-201207001.zip</a><br /><br />Here is a <a href="" target="_blank">method of installing the patch via the CLI</a>.<br /><br />I applied the patch against an ESXi 5.0.0 U1 server in my lab by uploading it using vSphere Client to the main datastore, SSH'ing into the machine, and then running the following command:<br /><br />esxcli software vib install --depot=/vmfs/volumes/datastore1/ESXi500-201207001.zip<br /><br />I'm hoping they will roll this patch into the next major release... no idea when that comes out though.<img src="" height="1" width="1" alt=""/>SixDimensionalArray I buy Facebook stock? 
Why is FB valuable?Should I buy Facebook stock? The better question is, <b>why is Facebook valuable</b>? <br /><br />On the eve of the <a href="" target="_blank">Facebook</a> (<a href="" target="_blank">NASDAQ:FB</a>) IPO, I felt compelled as a technologist to share an important observation about the company and its products:<br /><blockquote class="tr_bq"><b>T</b><b>he value of Facebook rests in identity, communication, sharing, and recording/making relationships between data about people, places, events and things.</b></blockquote><br />Facebook has done what no other organization in the entire world has managed to do, and that is to <b>catalog the identity</b> (usually at least a name, photograph, maybe a hometown) of 800 million people.<br /><br />Even such projects as <a href="" target="_blank">OpenID</a>, explicitly designed to try solve the problem of giving you one unique identifier to use at websites around the world, have never taken hold to the extent that Facebook has.<br /><br />Using the simplest search tools, and the relationships you have with your contacts/friends (and the power of <a href="" target="_blank">6 degrees of separation</a>), there is a very likely chance that you can locate and identify nearly any person you know, who is on Facebook, and that they can locate you.<br /><br /.<br /><br /.<br /><br />They are all made somewhat extraneous if one can verify identity using Facebook. <br /><br />Your Facebook page is a public way of identifying yourself, to a large number of people and organizations. 
This is the reason for the advice - watch what you say on Facebook, you never know who will see it and how long it will be around for!<br /><br />The <a href="" target="_blank">IPO is set to price around $38 a share </a>and raise approximately $16 billion dollars at that price.<br /><br />Think about what $16 billion can do in making your life better through ancillary services that surround Facebook, but making use of this core <b>identity</b> feature.<br /><br /><br />Add to it, the fact that every website you see these days, every news article, everything on the web, has a giant "LIKE" button next to it. Everything is personalized to suit what these systems think your tastes are (also known as a "<a href="" target="_blank">filter bubble</a>"). What about all the other actions that could be tracked?<br /><br /><b>The like button is an action</b>. It is a communication tool, it is a sharing tool. By clicking it, content from around the world gets tagged and stored in Facebook's giant "<a href="" target="_blank">open graph</a>" database.<br /><br />I... like... something (already exists) or someone.<br />I... am in a relationship with... someone (already exists). <br />I... bought.. something.<br />I... talk to... someone. <br />I... went.... somewhere (think "Check-ins").<br />I... ate...... something.<br />I... made... something.<br />I... work... somewhere (job info).<br />I... have... something.<br />I... saw... something (why did they buy Instagram?). <br /><br /).<br /><br />If you are familiar with the popular game <a href="" target="_blank">The Sims</a> made by <a href="" target="_blank">Will Wright</a> you know how much data about mood, physical status, etc. the game tracks about your characters. 
Imagine a Sims character generated from all this data that Facebook has collected about you - I wonder what that would look like!<br /><br /><br />If Facebook gets this right, $16 billion or more will help them <b>turn open graph into the world's largest centralized repository of data about human activity</b> of many different types <b>that has ever existed,</b> outside of perhaps technologies used by governments through intelligence gathering organizations.<br /><br />Yes, there are privacy concerns. Yes, the government wants to get at this data.<br /><br />But there is more. Just because the <b>open graph database</b> exists, does not mean that everything needs to exist within it.<br /><br />Instead, Facebook could work with partner companies to build private, closed databases, where you are identified by your Facebook ID, but the Facebook application itself has NO access to the data inside these repositories.<br /><br /><b>Facebook wants to get into business organizations</b>. They are going to use much of this IPO money to try (or so I believe).<br /><br /><b>Facebook wants to be an integral part of your life, </b? <br /><br />Lately, there has been much discussion about electronic health records, and centralized healthcare systems, and health information exchange in the United States. <br /><br />Do you think such data should be on Facebook? How about your banking data? How about any private data at all?<br /><br /><b>Of course not!</b> But, that's the thing people don't realize - the data doesn't HAVE to live inside of Facebook. The data can be stored securely away, anywhere else, but <b>access and identification</b> of who you are could be done through Facebook.<br /><br /><b>Facebook is a platform to build on</b>. I believe this is what they will push the company forward with. Use Facebook as a platform to build around.<br /><br />Look at their <a href="" target="_blank">recent announcement</a> of the "App Center/Store". 
Look at the tools that Facebook offers for <a href="" target="_blank">building applications</a> that live inside of Facebook?<br /><br />There is value here! It's crazy, scary and powerful.<br /><br />Or is there value? And is it crazy, scary, powerful?<br /><br />One thing is for sure - they didn't get to where they are without having succeeding at implementing many major innovations and new ideas, and I would wager a guess that they will continue pushing hard and growing out however they can.<br /><br /><img src="" height="1" width="1" alt=""/>SixDimensionalArray Zywall - VPN issues, firmware updates, problems, review<u><b>Firmware updates for Zywall products</b></u> <br /><br />If you happen to have any Zyxel <a href="" target="_blank">Zywall products</a> (such as the USG 50, USG 200, etc.), keep your eyes out for firmware updates. It is clear to me that they are consistently having to update their firmware, and there have been a lot of changes recently. For the USG 200 product alone, there have been at least two firmware updates in only 3 months!<br /><br />Zyxel Zywall USG 200 firmware updates - <a href=""></a><br /><br />All Zyxel support downloads - "Download Library" - <a href=""></a>.<br /><br />It would be nice if they sent out an email notification every time their was a new firmware release!<br /><br /><u><b>Nailed-up VPNs</b></u><br /><br />Regarding setting up VPNs on Zywall USG products, if you have a problem where your VPN connections do not restore automatically after a reboot, you may consider activating the "nailed-up" option in the Advanced Settings section of the VPN Connection tab for that particular connection.<br /><br />According to a <a href="" target="_blank">knowledge base article on the Zyxel site (article 1633)</a>, "nailed-up" as applied to PPP connections means: <br /><div style="text-align: left;"><blockquote class="tr_bq"><i><span style="font-size: small;">"A nailed-up connection is always up regardless of there is traffic 
transmitted. The ZyXEL Device performs two actions when the nailed-up feature is enabled. First, connection idle timeout is disabled. Second, the ZyXEL Device will try to bring up the connection when turned on and whenever the connection is down. A nailed-up connection can be very expensive for some reasons. It is always a better idea not to enable a nailed-up connection unless the broadband service provider offers flat-rate service or you need a constant connection and the cost is not a concern. You can enable/disable WAN connection nail-up in SMT menu 11 or the web GUI."</span></i></blockquote></div><span style="font-size: small;">I did not find useful documentation, but I believe the term "nailed up" as applies to VPN has the same meaning - so in case you want your VPN connections to dial automatically after a reboot, consider this setting! So far, I have not seen any negative affect for having used it. If you were paying for metered/limited bandwidth</span> and leave your VPN "nailed up", though, you may have consequences from your connection being in use at all times, so do be careful. That said, for most people, site-to-site VPN connections are meant to be up constantly, so it isn't a problem.<br /><br /><u><b>Quick review of Zyxel as a vendor</b></u><br /><br />Overall, having worked with a lot of vendor's firewalls over the years (<a href="" target="_blank">Sonicwall</a>, <a href="" target="_blank">Watchguard</a>, <a href="" target="_blank">Zyxel</a> and <a href="" target="_blank">Cisco</a> to name a few), I have to say, the Zyxel stuff is affordable, but not entirely intuitive and somewhat roughly documented. That said, their tech support seems pretty responsive and helpful, as long as you only need their help during business hours (8-5PM Pacific Time). 
<br /> <span style="font-size: x-small;"> </span><br />Phew, it's been a while since I had time to post any tips!<img src="" height="1" width="1" alt=""/>SixDimensionalArray Firewall blocked netsession_win.exe - Akamai NetSession Installware.<br /><br />Of course, we all know <a href="" target="_blank">Akamai</a>, one of the leading providers of content caching and distribution networks among other things.<br /><br />Upon further review, it appears that the <a href="" target="_blank">Akamai NetSession Interface</a> is some sort of download accelerator/caching tool, but it is not clear how the user got that particular tool on their system. It does have an entry in the Windows control panel with some administrative tools. <br /><br />This app appears to be <a href="" target="_blank">installware</a> - i.e. a program, not necessarily malicious (but annoying) that was installed without the user's knowledge or direct consent, but included with some other download or via an automatic download mechanism. There is a <a href="" target="_blank">long list of companies</a> that appear to use this tool.<br /><br /!<br /><br />The <a href="" target="_blank">Akamai website also includes another uninstallation method</a>:<br /><br />"<b>How do I uninstall the Akamai NetSession Interface?</b><br /"<br /><br /><b>UPDATE 11/8/2011:</b> <a href="" target="_blank">list of companies</a>) do often bundle the installer.<br /><br /.". <br /><br /. <br /><br />It seems almost a guarantee that something automatically triggered the installer, whether it was timed, an update of some sort, or some other process.<img src="" height="1" width="1" alt=""/>SixDimensionalArray: The "Cloud" and BandwidthThis post is the start of a new type of post in which I will make predictions for the future of IT. These are purely my opinion and should be taken as such - don't bet the farm on any of them! 
<br /><br />To start it off, here's my first prediction:<br /><br /><i>When national bandwidth infrastructure improves drastically, the "cloud" finally has a chance to be relevant for storing/retrieving large data efficiently.</i><br /><br />By the way, by "cloud", I do mean a private cloud, or a public/shared cloud. <br /><br />What I mean by this is, right now, so many of us are limited to a few Mbps (usually 3 or less) download speed. For businesses with symmetric connections, upload speed is roughly the same (3 or less), and for residential areas, unless you can get FIOS or another high speed connection, usually upload is below 1Mbps.<br /><br />I predict that when we have at least 100Mbps, if not much greater (such as 1Gbps) to the home, and to the business, and transferring bits becomes cheaper and more affordable, having all our data (large and small) live in the cloud can finally be more feasible. The only problem standing in the way of the cloud at that point is security. No predictions about that!<br /><br />Think about trying to back up your servers from your datacenter to a cloud location or off-site. Got enough bandwidth? How many hours does it take? How about backing up your own personal photo collections off-site? How many hours does it take to get it to a service like Mozy or Carbonite?<br /><br />These types of services, among others, such as streaming audio and video, will only improve with more bandwidth. And maybe, just maybe, with more bandwidth, the idea that my operating system lives mostly in the cloud becomes ever closer a reality. Should we want that? Again, that's not a prediction for today!<br /><br />We're hungry for bandwidth. 
Who will feed the need?<img src="" height="1" width="1" alt=""/>SixDimensionalArray 5.5 and NX Server >3.4If you've set up NX Server on CentOS 5.5 by downloading it directly from the NoMachine website, and you try to connect to your newly minted install using SSH and a DSA key, and you encounter a problem where the server gives you a message something like this:<br /><br /><i>NX> 203 NXSSH running with pid: NNNN <br />NX> 285 Enabling check on switch command <br />NX> 285 Enabling skip of SSH config files <br />NX> 285 Setting the preferred NX options </i><br /><i>NX> 200 Connected to address: NNN.NNN.NNN.NNN on port: 22 <br />NX> 202 Authenticating user: nx <br />NX> 208 Using auth method: publickey </i><br /><i>NX> 204 Authentication failed.</i><br /><br />Check to make sure that you have synchronized the name of the authorized_keys file in the NX server.cfg, node.cfg and your sshd_config files. I discovered that server.cfg and node.cfg were looking for authorized_keys2 and the sshd_config was looking for authorized_keys. Match those values and restart the server (as described <a href="">here</a>) and you should have better luck. Apparently, authorized_keys2 was deprecated a long time ago.<br /><br />Other useful links:<br /><ul><li><a href="">NX Server Administrator's Guide</a></li></ul><img src="" height="1" width="1" alt=""/>SixDimensionalArray Search and Network File/Share Locations<b>NOTE:</b> <i>If you're finding this post, and having a terrible time setting up Windows Search features in Windows 7, particularly to index network file locations, I feel your pain!</i><br /><br />Windows Desktop Search 4.0 in Windows XP was, in my opinion, a legitimate competitor to Google Desktop Search. Using this product, you can easily build search indexes for your desktop as well as networked file locations.<br /><br />In Windows 7, Microsoft (in its infinite wisdom) chose to more deeply integrate this search technology with the core of the operating system. 
It works, <i>kind of</i>, although it seems to be lacking in documentation and strangely more complex, even though I think MS intended it to be easier, actually.<br /><br /.<br /><br />First, I enabled the Windows Search Service Role in File Services on the Windows 2008 R2 file server. This caused the network file shares to be indexed.<br /><br />Second, I created a library on the Windows 7 desktop whose sole purpose was to be a pointer to the network file share. This key step seems to be what "simplified" and made searching the network file share much easier.<br /><br />With these two steps completed, I could now search both the local machine (after including relevant locations in the index), and the network file shares, and the results showed in the Start Menu's search area.<br /><br /.<br /><br />Now to figure out how to automate the creation of the Windows 7 library creation via Group Policy..... if possible!<img src="" height="1" width="1" alt=""/>SixDimensionalArray Policy to Turn Off DisplayIf you want to turn off the user's monitor/display to save power and extend monitor life using Group Policy & Active Directory (Server 2008 R2, probably works in earlier versions as well), consider the following group policy setting:<br /><br /><ul><li>Computer Configuration > Policies > Administrative Templates > System > Power Management > Video and Display Settings > Turn Off the Display (Plugged In or On Battery)</li></ul>Just set a timeout value, and go!<br /><br /.<br /><br />The group policy wording states: "<i.</i>". It's confusing because they say "Turn Off" is "enabled" - but I think what they mean is,<b> the adaptive timeout for turning off the display policy is enabled or disabled.</b> That makes more sense to me at least. 
I will test it and see what happens.<br /><br />There is also <a href="">nice post over on the Microsoft Directory Services team blog</a> that discusses other power management options with AD.<img src="" height="1" width="1" alt=""/>SixDimensionalArray out and ping someone!If you are testing network connectivity, packet loss, latency, etc. and you need an external server to ping (ICMP echo request), consider using the IP <b><i>8.8.8.8</i></b>. This IP represents <a href="">Google's public DNS</a>.<br /><br />Funny, in the past, I remember using <b><i>4.2.2.2</i></b> and never knowing exactly who it was that ran that IP. Turns out, as with most things strange and interesting on the net, <a href="">there is a small story behind that IP</a>. Kudos to <a href="">Tummy.com</a> for posting it!<img src="" height="1" width="1" alt=""/>SixDimensionalArray a Network Device by MAC/Hardware AddressIf you are trying to identify a network device, and all you have is a MAC address for that device, you might try identifying which hardware vendor the MAC address range is associated with. For example, I had a device connected to my wireless which had a MAC address starting with the prefix 30:69:4B. I could not identify exactly which device it was, although it was most likely a valid one.<br /><br />The device also did not show up in other tools, such as the Windows command line arp -a command, <a href="">Angry IP Scanner</a> or <a href="">Colasoft's MAC Scanner</a>, so identifying it that way was not possible. ARP stands for <a href="">Address Resolution Protocol</a>, i.e. the protocol used to determine MAC addresses from IP addresses so that transmission at the link layer can occur. You didn't forget your <a href="">OSI model</a> did you? :)<br /><br />Using <a href=""></a> I was able to determine the device was a coworker's Blackberry phone which was associating with the wireless access point (AP). The MAC prefix is owned by the manufacturer RIM/Research In Motion. 
Another similar search site is <a href="">Vendor/Ethernet/Bluetooth MAC Address Lookup and Search</a>, although I didn't find what I was looking for on that one.<img src="" height="1" width="1" alt=""/>SixDimensionalArray Remote Desktop in Server 2008 R2 Active Directory Group PolicyIf you need to configure Windows Server 2008 R2 Active Directory group policy so that Remote Desktop is enabled on a domain, note that it is no longer referred to as Terminal Services in the Group Policy Management interfaces, it was renamed Remote Desktop Services.<br /><br />Two group policy changes should do the trick, followed by a gpupdate /force or waiting for the policy to be distributed to domain members/clients:<br /><ol><li>Computer Configuration > Administrative Templates > Network > Network Connections > Windows Firewall > Domain Profile > Allow inbound Remote Desktop exception. Note that I recommend limiting the IP addresses that have access as explained in the notes of that policy, if possible, as a best practice.</li><li>Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Connections > Allow users to connect remotely using Remote Desktop Services </li></ol>Now you should be able to remote desktop into any domain member which the policy is applied to! <br /><ol></ol><img src="" height="1" width="1" alt=""/>SixDimensionalArray Profile Migration with USMTI recently had to migrate about 20 domain accounts from a small, dying, badly designed Windows 2000 domain controller to a newly named and properly configured Windows 2008 R2 domain. The domain names had to be different between the two systems, and all the users were using local Windows profiles on XP or Windows 7. The customer didn't want to implement roaming profiles or folder redirection due to concerns about the extra burden on the network. 
This meant I had to migrate the local user profiles from one domain to another.<br /><br />I tried using several different tools, including: Microsoft's <a href="">Active Directory Migration Tool version 3.2</a> (which is probably best for Server 2003 -> Server 2008 and above migrations in larger enterprises, where both AD servers are functioning normally and the old one still is online), evaluated the built in Easy Transfer or the Files and Settings Transfer Wizards, and ended up with the User State Migration Tool (USMT).<br /><br />USMT can basically copy the files from one profile to another, to a local disk or to a network share. This had to be done in order to change the association of the profile, unfortunately, from the old domain to the new one. <br /><br />Here are some useful download links:<br /><ul><li><a href="">USMT 3.01 - for Windows XP and Vista migrations</a></li><li><a href="">USMT 4 (part of the Automated Installation Kit - AIK for Windows 7)</a> - by the way, this one is an annoying 1.7 GB ISO file. There is a repackaged version provided from <a href="">wintools.com.au</a>, but it may not be always up-to-date. If you download the ISO, you then have to install it (or maybe you can extract directly from a CAB file? I didn't try that).</li><li><a href="">Hotfix for USMT 4</a> which migrates Outlook 2010 settings (oops, they left that part out of the RTM version!). Also, USMT DOES migrate Outlook files - but it screwed up a lot of user settings, including email rules and Outlook Address books which required manual reconfiguration.</li></ul><i>I will post a follow up with more instructions on how I applied these tools soon</i>, and the issues/workarounds I encountered. 
I highly encourage testing the USMT tools, as my migrations have worked but things didn't exactly go flawlessly (the most problems revolved around Outlook settings).<br /><br />I don't know how the heck a large organization would use any of these tools to successfully migrate like I did without some degree of manual intervention. What a headache!<br /><br />Also, quick tip - for small Active Directory domains that might grow up into larger ones some day, the best internal Active Directory domain naming scheme is an unused corporate public domain or subdomain that you own and will keep for a very long time. For example, I might own <i>mycompany.com</i> and <i>mycompany.net</i>. Therefore, you might use <i>corp.mycompany.com</i> for your internal domain name, or <i>mycompany.net</i> if you are not using it for anything else. I <b>DO NOT</b> recommend using anything else, including <i>.lan</i> or <i>.local</i>.<img src="" height="1" width="1" alt=""/>SixDimensionalArray Post!Greetings, and welcome to No Joke IT. I have a favorite saying in the information technology business - that IT vendors build or invent wonderful technologies in their "infinite wisdom", and then don't bother to document them for the rest of us. Hey, we all do it! Anyway, I will try to share tips and hints that crop up as I go along, so that other IT folks searching (just like I am) will find the solution to that odd problem that isn't documented anywhere else.<br /><br />It's always great to get first post, even on your own blog! :)<img src="" height="1" width="1" alt=""/>SixDimensionalArray | http://feeds.feedburner.com/NoJokeIt | CC-MAIN-2018-47 | refinedweb | 5,766 | 61.36 |
Do you have an existing Java application, or are you developing a new one, and want to take advantage of the cloud? Azure is an attractive platform for deploying a variety of applications, including Java-based apps.
In Azure you simply upload your application and the plumbing is all put in place for you. No need to manage Operating System installs or patches. Understanding how to package a Java application for deployment in Azure and the options you have in that process is what I will discuss here.
I am assuming you have a basic understanding of Azure, so I won't introduce any concepts here. If that is not the case, you might want to get an overview of Azure prior to reading this post.
How to run a Java Application in Azure
Azure requires a ‘Role’ to run your application. The Worker Role is the best type of Role to host a Java application. Instances of a Worker Role can be seen as fresh installs of Windows Server ready to run an application.
The Worker Role configuration is included in the deployment package you upload to Azure along with your application. The Role provides hooks to launch a Java application and ways to communicate with the Azure runtime.
Creating a Role involves adding it to the service definition file, setting attributes in the service configuration file, and providing a class to handle the Role lifecycle. Let's look into the main options in each of these files as they relate to deploying Java applications. I am using the Azure SDK 1.3 and will be talking about features introduced in that release.
Service Definition
The service definition file lists all Roles in your Azure service. I will cover Port Reservation and Startup Tasks.
Port Reservation
Of critical importance to Java applications running in the cloud is the specification of the TCP ports the application uses. Azure will block communications to any port you don't explicitly list in your service definition. You will need to add an InputEndpoint entry for every public port your application uses to receive requests. These ports will be configured both as Windows Firewall rules and in the load balancer. Remember you can have any number of instances of the same Role, and they will all be configured behind a load balancer.
This is an example of how you would configure a Role that hosts a Java application which uses port 80 to accept HTTP requests:
<InputEndpoint name="Http" protocol="tcp" port="80" localPort="8080" />
Notice the protocol is set to "tcp" even though the application is actually using HTTP. "http" is a valid value for the protocol attribute but you must never use it for Java applications. The "http" value has a specific meaning in Azure and is reserved for .NET applications.
The port attribute specifies the public port that will be available for client requests. The localPort tells Azure the port your application is actually listening on. Azure will map the public port on the load balancer to the local port your application is using. Those values don’t need to be different, in fact most applications will likely use the same local and external port.
It is also important to know that localPort is an optional attribute. If you don't define it, Azure will dynamically assign a port for your application. Since all requests will be mapped to that port, your application must query the Azure runtime for the assigned port number to receive requests. Although optional, you should probably always set localPort.
The local port reservation has some limitations when it comes to the Compute Emulator (or Development Fabric). Because the Compute Emulator is running on your desktop, it may not be able to bind to the port you requested, since another application might have already reserved it. That is always the case for port 80 because IIS is already using it. If you intend to use the Compute Emulator, you will need to get the dynamically assigned port at runtime, at least in development.
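To make the port mapping concrete, here is a minimal Java HTTP listener built on the JDK's built-in com.sun.net.httpserver package. This is a sketch, not part of the Azure SDK: the "http.port" system property is a name I made up, on the assumption that whatever launches the JVM (a Startup Task or the Role Entry Point) passes the assigned port along, with a default of 8080 to match the localPort in the InputEndpoint example above.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class MiniServer {
    // Start a trivial HTTP server on the given port (0 = pick any free port).
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", new HttpHandler() {
            public void handle(HttpExchange ex) throws IOException {
                byte[] body = "Hello from a Worker Role".getBytes("UTF-8");
                ex.sendResponseHeaders(200, body.length);
                OutputStream os = ex.getResponseBody();
                os.write(body);
                os.close();
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        // "http.port" is a hypothetical property set by the launcher;
        // 8080 matches the localPort declared in the service definition.
        int port = Integer.getInteger("http.port", 8080);
        start(port);
        System.out.println("Listening on port " + port);
    }
}
```

Clients always hit the public port (80 in the example) on the load balancer; Azure maps that to whatever local port the server actually binds.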
Startup Tasks
Startup Tasks are commands launched before your Role is initialized. They were created to allow a Role to perform any configuration before requests are forwarded to the Role instance.
A unique feature of Startup Tasks is the ability to run with elevated (administrator) privileges. This is usually required if your application needs to launch an installer. All installers must also run in silent or unattended mode. In fact the Oracle JVM installer does require administrator privileges to run and does support silent mode. If you plan on installing it you will have to use a Startup Task.
Another characteristic of Startup Tasks is the ability to write to any folder. Regular Azure applications must define working directories and can only write to them. That limitation does not apply to Startup Tasks. That facilitates the job of some installers and can also be beneficial to some Java application servers.
By now you must be thinking that you can just launch your Java application as a Startup Task and be done. In fact, that is one option for launching Java applications, but as usual it has some drawbacks. I will talk about this option shortly.
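As an illustration, a Startup Task that runs an installer with administrator rights is declared inside the WorkerRole element of the service definition. The script name here is hypothetical; executionContext="elevated" grants administrator privileges, and taskType="simple" makes the Role wait for the task to finish before continuing startup:

```xml
<Startup>
  <Task commandLine="installjvm.cmd" executionContext="elevated" taskType="simple" />
</Startup>
```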
Service Configuration
Configurations are companions to Service Definitions. For every Role you define you must have a configuration entry.
Here is where you specify how many instances of your Role you would like to have. You will also define various settings for your Role.
One such setting of interest is the version of Windows Server. You can choose between Windows Server 2008 and Windows Server 2008 R2. For the most part the underlying operating system is irrelevant to a Java application. That continues to be true in Azure. If your application doesn't rely directly on a specific Windows feature, and doesn't depend on a third-party tool that does, you can ignore this setting.
There are important differences between these Windows Server families though. The one I am interested in is PowerShell. Windows Server 2008 R2 includes PowerShell 2.0 which – as I'll soon demonstrate – can be used to greatly simplify the deployment of Java applications.
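The OS family is chosen with attributes on the root element of the service configuration file. As far as I recall from SDK 1.3, osFamily="1" selects Windows Server 2008 SP2 and osFamily="2" selects Windows Server 2008 R2, while osVersion="*" requests automatic guest OS upgrades (the service name below is made up):

```xml
<ServiceConfiguration serviceName="MyJavaService" osFamily="2" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- Role elements go here -->
</ServiceConfiguration>
```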
Azure Role Entry Point
The Role Entry Point is the interface between your application and the Azure runtime. Azure will instantiate and call your Role Entry Point providing you with the opportunity to take actions at different stages of the Role lifecycle.
This is where you will be doing some .NET programming. At this point Azure requires the Role Entry Point to be written in either C# or VB.NET. The amount of code in your Role Entry Point will vary based on the complexity of your application. It can be as simple as a few lines of code.
Three methods define the role lifecycle: OnStart, OnStop and Run. OnStart is called before your application receives live traffic. You should do any initialization in this step. To a certain extent this is akin to a synchronous Startup Task. OnStop is called when Azure is shutting down your role to allow for a graceful shutdown. Run is called when Azure is ready to forward live traffic to your application. The call to Run must not return while your Role is operational. If it returns Azure will interpret it as a failure of your application and will recycle your Role instance. This behavior can be of interest to certain applications and should be used to request Azure to recycle your Role instance in case of a failure. You can envision a heartbeat check on the JVM being executed from Run and returning in case of either application or JVM failure, causing Azure to restart your Role and bring your application back to a running state.
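There is a small Java-side counterpart to OnStop worth noting: if your Role Entry Point terminates the java process gracefully during shutdown, a JVM shutdown hook gives your application a chance to close sockets and flush state before exiting. A minimal sketch, with the actual cleanup work left to you (note that hooks run on orderly JVM termination, not on a hard kill):

```java
public class GracefulShutdown {
    // Register cleanup work to run when the JVM is asked to exit.
    public static Thread install(final Runnable cleanup) {
        Thread hook = new Thread() {
            public void run() {
                cleanup.run();
            }
        };
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }
}
```

A typical use would close the application's listening server and flush any pending log writes inside the Runnable passed to install().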
Java Virtual Machine, Libraries and Resources
Now that you understand how to configure a Role to run your Java application you need to provide it with a JVM. The best way to do it is to upload the JVM to Azure Blob Storage and install it during Role initialization, either as a Startup Task or in the Role Entry Point’s OnStart.
Most Java runtimes can be simply copied to a new machine, no install is necessary. If your JVM can be installed this way you should create a ZIP archive and upload it to Azure Blob Storage. Because you may need to maintain multiple versions of the JVM I recommend creating a separate container to hold all the JVMs you need. Inside that container you can keep JDKs and/or JREs named after the corresponding Java version.
If you need to run an installer which requires administrator privileges you have to use a Startup Task. If you can copy the JVM you have a choice.
Depending on your build process you might want to use the same strategy for other dependencies. Libraries like Spring, Log4J, Apache Commons can also be in the cloud and pulled from Azure Blob Storage before your application starts. Imagine doing the same for the application server or web server your application uses. I personally like to create a separate container called ‘apps’ for Jetty, Tomcat and other servers.
Launching the Java application
With everything in place the only step left is to launch the Java application. It can also be done from either the Role Entry Point or from a Startup Task.
Believe it or not there is enough to discuss in each of these alternatives that will save them for future posts.
As a cliffhanger I will tell you the high level differences between the two options. Using the Role Entry Point to launch your Java application will give you full control over the lifecycle of the role and allows for your application to react to environment changes (e.g. discover when instances are added or removed). It does require you to write.NET code. Using Startup Tasks requires potentially no .NET knowledge plus gives you the ability to run with elevated privileges at the cost of having little to no access to the role lifecycle.
The path you take will vary based on your application.
Packaging and Deploying the Java application
The Azure SDK includes tools to package an application along with the Service Definition and Configuration. The Windows Azure Packaging Tool (cspack.exe) is the main tool to use for this job. Visual Studio provides a convenient frontend for cspack and is also capable of initiating a deployment.
Cspack.exe is more desirable for integrating with exiting Java builds. It is only available on Windows so you will have to do the packaging of the Azure deployment on a Windows box. I will also save the explanation on using cpspack for a future post.
Putting it all together: Jetty 7 in Azure
I did not want to finish without showing how all of this can work. I will demonstrate how to get Jetty 7 working with the minimum amount of effort. Running a full web application on Jetty will require extra steps like deploying your own WAR/EAR but for demonstration purposes I will deploy Jetty as is and access the ROOT web app.
I picked Jetty because I wanted to show an application that uses NIO working in Azure. You should be able to deploy any other web server the same way. David Chou has previously posted a great article on running Jetty in Azure. He wrote it for the SDK 1.2 but a lot of the information there still applies.
You will need a Java 6 compatible JVM, the latest Jetty 7 distribution in zip format, the Azure SDK 1.3 (or later) and Visual Studio 2010. The free edition of Visual Studio 2010 will do. You will also need a .NET library to handle ZIP files. I used but other popular libraries exist and can be used with little change to the deployment scripts.
Start by installing Visual Studio and the Azure SDK. If you are using Oracle’s JVM you will need to install it locally then create a zip file of the JRE. If you installed the JDK in the default location you can navigate to C:\Program Files\Java\jdk1.6.0_<update #>, right click on the JRE directory and create a zip folder. The deployment script expects this file to be named jre1.6.0_22.zip.
Upload both the JRE.zip and Jetty to your Azure Storage account. Put the jre.zip in a container called ‘java’ and the jetty-distribution-7.x.x.<date>.zip in a container called ‘apps’. You can change these container and file names but you will need to update the Launch.ps1 script.
You can download this Visual Studio 2010 project (almost) ready to be deployed. You will need to enter your storage account credentials to get it to work.
If you decide to create the project from scratch you should add the SharpZipLib DLL to a ‘lib’ folder in your project.
Minimal role code
This example will launch Jetty from a Startup Task. The only requirement for the Role Entry Point is to not return from Run so this code blocks indefinitely.
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;
namespace JettyWorkerRole
{
public class WorkerRole : RoleEntryPoint
{
public override void Run()
{
while (true)
Thread.Sleep(10000);
}
}
}
Service definition
There is a single Startup Task to execute ‘Run.cmd’ with regular user (limited) privileges. The task type ‘background’ decouples the execution of the task from the Role lifecycle.
I also specify the public port 80 to accept request but internally Jetty will use port 8080.
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MinimalJavaWorkerRole" xmlns="">
<WorkerRole name="JettyWorkerRole">
<Startup>
<Task commandLine="Run.cmd" executionContext="limited" taskType="background" />
</Startup>
<Endpoints>
<InputEndpoint name="Http" protocol="tcp" port="80" localPort="8080" />
</Endpoints>
</WorkerRole>
</ServiceDefinition>
Service configuration
The default service configuration sets the number of instances for the role to 1. Feel free to increase it if you’d like, this example will work with any number of instances.
A little hidden in this configuration is the osFamily. This is where you specify the family of Windows Server you would like to use. It must be set to Windows Server 2008 R2 which is indicated by the family 2. This is needed because I use PowerShell 2.0 in this example.
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MinimalJavaWorkerRole" xmlns="" osFamily="2" osVersion="*">
<Role name="JettyWorkerRole">
<Instances count="1" />
</Role>
</ServiceConfiguration>
Run.cmd
This is a command used to launch the PowerShell script which contains the actual deployment logic. This file is not strictly needed but Visual Studio makes checks for the existence of the commandLine. So if you use Visual Studio you must have it.
I’m relaxing the security on the PowerShell script execution. In production you should sign the PowerShell script instead.
powershell -executionpolicy unrestricted -file .\Launch.ps1
A word of caution if you are new to Visual Studio – text files are saved in UTF-8 by default. It should be otherwise fine with this content if it wasn’t for the byte order mark. Because of that you must make sure to change the encoding for this file. You can do this in the “Save As..” option, here are the steps.
Launch.ps1
This is an overly simplified version of a deployment script. I will expand on it and explain the details of how it works on the post on using Startup Tasks. Basically this script downloads both JRE and Jetty from your storage account, unzips both and launches Jetty.
You must change the $connection_string to use your Storage Account credentials. You might also need to change $jre and $jetty to match the versions you used.
function unzip ($zip, $destination) {
Add-Type -Path ((Get-Location).Path + '\lib\ICSharpCode.SharpZipLib.dll')
$fs = New-Object ICSharpCode.SharpZipLib.Zip.FastZip
$fs.ExtractZip($zip, $destination, '')
}
function download_from_storage ($container, $blob, $connection, $destination) {
Add-Type -Path ((Get-Location).Path + '\Microsoft.WindowsAzure.StorageClient.dll')
$storageAccount = [Microsoft.WindowsAzure.CloudStorageAccount]::Parse($connection)
$blobClient = New-Object Microsoft.WindowsAzure.StorageClient.CloudBlobClient($storageAccount.BlobEndpoint, $storageAccount.Credentials)
$remoteBlob = $blobClient.GetBlobReference($container + '/' + $blob)
$remoteBlob.DownloadToFile($destination + "\" + $blob)
}
$connection_string = 'DefaultEndpointsProtocol=http;AccountName=<YOUR_ACCOUNT>;AccountKey=<YOUR_KEY>'
# JRE
$jre = 'jre1.6.0_22.zip'
download_from_storage 'java' $jre $connection_string (Get-Location).Path
unzip ((Get-Location).Path + "\" + $jre) (Get-Location).Path
# Jetty
$jetty = 'jetty-distribution-7.2.2.v20101205.zip'
download_from_storage 'apps' $jetty $connection_string (Get-Location).Path
unzip ((Get-Location).Path + "\" + $jetty) (Get-Location).Path
# Launch Jetty
cd jetty-distribution-7.2.2.v20101205
..\jre\bin\java `-jar start.jar
You must also tell Visual Studio do include this script in your build. Right-click on Launch.ps1 and select Properties. Make sure the ‘Copy to Output Directory’ option is set to ‘Copy Always’. You will need to do the same thing for Run.cmd and the Zip library.
Now you are ready to deploy. You can deploy the app directly from Visual Studio, information on how to set it up on this page. Once your Azure account is configured in Visual Studio right-click on the cloud project and select ‘Publish…’.
Once the deployment is complete you can navigate to your cloudapp.net domain to see Jetty running. You will be greeted by the welcome page.
Remember Jetty defaults to the SelectChannelConnector showing your NIO application can run fine in Azure.
This is a great article Mario. Not only did it allow me to set up a Glassfish setup on Azure, it also helped me understand the concepts of Windows Azure along the way. Thanks for posting this!
I am curious about the use of startup tasks versus onStart(). I am currently undecided on which one to use for downloading and launching my JDK and Glassfish server.
Secondly, I have a separate worker role running MySQL. I am interested in knowing your opinion on the pros and cons of running MySQL in the same worker role as Glassfish – versus running it in a separate worker role. So far, I am unable to get the Worker role running Glassfish to communicate with the one running MySQL (even though the latter has the requisite ports exposed via the csdef).
I need to elaborate on the differences of both approaches, that’s going to be my next article.
In short, if you don’t need to interact with the Azure runtime you should be safe using startup tasks. If you do need to know when new instances of or your worker roles are brought up or down, discover IP addresses for inter-role communication, programmatically take an instance of the role running Glassfish off service, etc, then you must write your own Role Entry Point implementation and therefore use OnStart().
The best practice is to run DBs on separate hosts from application or web servers. As applications grow you most likely will need more application than DB instances, having them in separate hosts makes that possible.
😉 if you mean people to set an option to "Copy always" … the screenshot you provide for this should not highlight a different option 😉 | https://blogs.msdn.microsoft.com/mariok/2011/01/05/deploying-java-applications-in-azure/?replytocom=63 | CC-MAIN-2018-09 | refinedweb | 3,169 | 57.27 |
Hello,
I applied dark color scheme (VS2008) and now namespaces and some other colors are now unreadable. In 4.5 I could go to Tools>Options>Fonts And Colors and there were a lot of Reshareper* entires I could adjust.
I tried to repair installation but this did not help (it did help when I had similar issue with R# 4.5). Is there any way I could change color settings in R# 5 EAP, or is this a know issues with beta release?
Thanks for great tool, guys and gals.
I have a somewhat similar issue. VS2008 works just fine, as it did in R#4.5. VS2010, however, does not have have R# coloring (yes, i do have "Color Identifiers" turned on). What's more, it does not have that list of "ReSharper *" items in "Fonts an Colors". Is this just because it is still an early release of R#, or should I have identifier coloring? In which case, is there anything I can do to make it work? I tried reinstalling R#, repairing it, resettign VS2010, repairing it as well. Nothing helped.
Thanks in advance.
I have tried to import the color scheme I exported from my dev box that runs corp licensed R# 4.5. The import was sucessful and I can see the fixed color scheme that was created with v4.5.
The color scheme persists until I open Fonts and Colors dialog and press OK to dismiss it (without changing anything) and then close and reopen VS.
They were included in an earlier nightly build, but apparently not
anymore. They are also missing from latest build 1528.
I can verify this issue--the "ReSharper" highlighting styles have been
missing from the "Fonts and Colors" panel in the 5.0 EAP. I was able to get
most of my color scheme restored by importing an old .vssettings file, but
my settings for the current line background aren't getting applied. I'd
really like to be able to configure the R# highlighting styles again.
Cheers,
Mike
"Rouslan Grabar" <no_reply@jetbrains.com> wrote in message
news:25588389.173541257271694150.JavaMail.clearspace@app8.labs.intellij.net...
>
>
>
>
Build 1529 seems to be fixing the missing colors issue. Thank you!
Mine are still missing. Did you do anything other than run the new MSI?
Thanks,
Mike
"Rouslan Grabar" <no_reply@jetbrains.com> wrote in message
news:1637065.184461257493707836.JavaMail.clearspace@app8.labs.intellij.net...
>
Hi Mike,
Unfortunatelly, on my laptop that runs win7, the latest build 1529 has the same issue. I tried everthing - installing, reinstalling, devenv.com /setup, etc without a sucess.
The devbox I installed the latest build on earlier today is a Windows 2003 R2 and it previously had v4.5 installed, so the R# did an upgrade and the Resharper color options were in place. Maybe it's "clean install" problem?
I'm having the same issue on two boxes. The first is an xp box that I upgraded resharper from 4.5.1 to 5.0 The second box was a clean install of win7 and resharper 1529. No color settings for either box.
For the first box I was able to get the settings back by importing a colleague's font and color settings (who did not install 5.0). Of coarse if more settings were introduced, I wont see them.
Is there an issue set up for this yet?
-eric
I've added a issue to the tracker
RSRP-131987
I had a similar issue and had someone comment with a fix. It might apply to the problems you are having here. | https://resharper-support.jetbrains.com/hc/en-us/community/posts/206068169-Missing-color-scheme-settings-in-Tools-Options-FontsAndColors | CC-MAIN-2020-24 | refinedweb | 598 | 75.91 |
Description
A custom UIDatePicker object that allows design customization of various user interface attributes such as font, color, etc. This pod
aims to replicate the default UIDatePicker functionality while adding additional customization in the user interface.
This project is inspired by and uses code from PIDatePicker.
In addition, it has the option to set the picker type to one of .year, .yearMonth, .date, or .dateTime.
It is also possible to change the calendar identifier.
Example
To run the example project, clone the repo, and run
pod install from the Example directory first.
Installation
HEDatePicker is available through CocoaPods. To install
it, simply add the following line to your Podfile:
pod "HEDatePicker"
Because this project was written in Swift, your project must have a minimum target of iOS 8.0 or greater. Cocoapods
does not support Swift pods for previous iOS versions. If you need to use this on a previous version of iOS,
import the code files directly into your project or by using git submodules.
Customization
There are several options available for customizing your date picker:
The following public methods are available for calling in your module:
Delegate
A class can implement
PIDatePickerDelegate and the following method to respond to changes in user selection.
func pickerView(pickerView: PIDatePicker, didSelectRow row: Int, inComponent component: Int)
Contributing
To report a bug or enhancement request, feel free to file an issue under the respective heading.
If you wish to contribute to the project, fork this repo and submit a pull request.
License
HEDatePicker is available under the MIT license. See the LICENSE file for more info.
Latest podspec
{
  "name": "HEDatePicker",
  "version": "1.0.4",
  "summary": "A short description of HEDatePicker.",
  "description": "TODO: Add long description of the pod here.",
  "homepage": "",
  "license": {
    "type": "MIT",
    "file": "LICENSE"
  },
  "authors": {
    "Hassan Eskandari": "[email protected]"
  },
  "source": {
    "git": "",
    "tag": "1.0.4"
  },
  "platforms": {
    "ios": "8.0"
  },
  "source_files": "HEDatePicker/Classes/**/*",
  "pushed_with_swift_version": "3.0"
}
Sun, 20 Aug 2017 01:40:16 +0000 | https://tryexcept.com/articles/cocoapod/hedatepicker | CC-MAIN-2020-24 | refinedweb | 320 | 57.87 |
David Vriend noted a problem in Example 3-3
of Java I/O,
StreamCopier, as well as several similar examples from that
book. The
copy() method attempts to synchronize on
the input and output streams to "not allow other threads to read from the
input or write to the output while copying is
taking place". Here's the relevant method:
public static void copy(InputStream in, OutputStream out)
 throws IOException {

  synchronized (in) {
    synchronized (out) {
      byte[] buffer = new byte[256];
      while (true) {
        int bytesRead = in.read(buffer);
        if (bytesRead == -1) break;
        out.write(buffer, 0, bytesRead);
      }
    }
  }

}

However, this only helps if the other threads using those streams are also kind enough to synchronize on them. In the general case, that seems unlikely. The question is this: is there any way to guarantee thread safety in a method like this when:
1. You're trying to write a library routine to be used by many different programmers in their own programs, so you can't count on the rest of the program outside this utility class being written in a thread safe fashion.
2. You have not written the underlying classes that need to be thread safe (InputStream and OutputStream in this example) so you can't add synchronization directly to them.
3. Wrapping the unsynchronized classes in a synchronized class is insufficient because the underlying unsynchronized class may still be exposed to other classes and threads.
Note that although the specific instance of this question deals with streams, the actual question is really more about threading. Since anyone answering this question probably already has a copy of Java I/O, I'll send out a free copy of XML: Extensible Markup Language for the best answer.
There were several thoughtful answers to this question, but ultimately the answer is no.
This does seem
to be a design
flaw in Java. There is simply no way to guarantee exclusive access
to an object your own code did not
create. Apparently other languages do exist that do solve this problem.
In Java, however, the only thing you can do is fully document the intended use of the class
and hope programmers read and understand the documentation. What's especially pernicious
is that the behavior in a multithreaded environment of
most of the Sun supplied classes in the
java packages is undocumented.
This has gotten a little better in JDK 1.2, but not enough.
Several people argued that the question was silly; that it was ridiculous to attempt to guarantee behavior in the face of unknown objects produced by unknown programmers. The argument essentially went that classes only exist as part of a system and that you can only guarantee thread safety by considering the entire system. The problem with this attitude is that it runs completely counter to the alleged benefits of data encapsulation and code reuse that object oriented programming is supposed to provide. If you can only write safe code by writing every line of code in the system, then we might as well go back to C and Pascal and forget all the lessons learned in the last twenty years.
More pointedly, any solution that requires complete knowledge of all code in the entire program is extremely difficult to maintain in a multi-programmer environment. It may be possible with small teams. It's completely infeasible for large teams. And it's absolutely impossible for anyone trying to write a class library to be used by many different programmers on many different projects.
All the answers were quite well though out this time around.
You'll find them on the complete question page.
I think the best answer came from Michael Brundage who gets a copy of
XML: Extensible Markup Language as a prize just as soon as
he sends me his snail mail address. For the rest of you, I'll have a new question for
you soon which delves into some undocumented behavior in the
InputStream class.
Subject: ThreadSafeStreams Date: Mon, 7 Jun 1999 14:41:22 -0700 From: Michael Brundage brundage@ipac.caltech.edu
Hi Elliotte,
Saw your Cafe au Lait question regarding thread-safety and streams, and thought I would toss in a few cents:
The problem actually has nothing to do with streams or I/O at all. Rather, it's a fundamental problem with the thread/synchronization model chosen for Java. Monitors (as opposed to, say, concurrent sequential programming) just don't give the developer an opportunity to determine thread safety within the context of a single method. The developer is forced to look at the global picture to determine whether deadlocks or other threading problems might occur.
I once had a really informative conversation with Frode Odegard about this topic. I've attached a post he made to the LA Java Users' Group mailing list (including some useful references) to the end of this message for your reading pleasure.
A classic example is the thread-safe library class
public class Safe {
    protected int x, y;
    public synchronized void setXY(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
You see something like this in almost every library in existence: The class is meant to be extended, so the members are made protected. They can only be accessed through thread-safe methods, so everything's okay, right?
Wrong -- there is no way to prevent a developer from coming along and writing:
public class Scary extends Safe {
    public void mySetXY(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
Now, calls to
setXY() and
mySetXY() can be interleaved, resulting in
unexpected results. There is no way, as long as there are
protected/public fields or protected/public unsynchronized methods, to
prevent developers from coming along and shooting themselves in the foot
with a subclass that bypasses synchronization.
(Btw, you can simplify this example by using just one member of type double; however, I've heard that there are plans to make assignment to doubles atomic in the future, and in any case no JVM I've seen treats it as nonatomic.)
One can avoid this particular case by never declaring anything that could result in a thread-safety problem in subclasses or elswhere protected or public. The example above could be repaired using:
public class Safe {
    private int x, y;
    public synchronized void setXY(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
Now any subclass that wants to change the value of x or y is required to go through the setXY() method on Safe. As long as every field is always private and every public and protected method is synchronized, your object can be threadsafe (which says nothing about liveness issues like deadlock, unfortunately). You could even imagine an automated tool to check your source code for this kind of potential threading problem.
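That "automated tool" is easy to prototype with reflection. Here is a minimal sketch (my own illustration, not from the original message; it only checks declared members and ignores the static/instance lock mixing discussed next):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class SafetyCheck {

    // Flags a class when any field is non-private or any public/protected
    // method is unsynchronized -- the two rules discussed above.
    static boolean looksThreadSafe(Class<?> c) {
        for (Field f : c.getDeclaredFields()) {
            if (!Modifier.isPrivate(f.getModifiers())) return false;
        }
        for (Method m : c.getDeclaredMethods()) {
            int mods = m.getModifiers();
            boolean exposed = Modifier.isPublic(mods) || Modifier.isProtected(mods);
            if (exposed && !Modifier.isSynchronized(mods)) return false;
        }
        return true;
    }

    public static class Safe {
        private int x, y;
        public synchronized void setXY(int x, int y) { this.x = x; this.y = y; }
    }

    public static class Leaky {
        protected int x; // non-private field: subclasses can bypass locking
        public void setX(int x) { this.x = x; }
    }

    public static void main(String[] args) {
        System.out.println(looksThreadSafe(Safe.class));  // true
        System.out.println(looksThreadSafe(Leaky.class)); // false
    }
}
```

A real tool would also walk superclasses and inspect synchronized blocks, but even this crude check catches the Safe/Scary pattern above.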
There are some other areas where subclassing is problematic with threading, such as mixing static and instance methods (which synchronize differently), and deadlock is particularly thorny with subclasses. Doug Lea gives several examples in his book, Concurrent Programming in Java.
The example I gave above demonstrates how to settle the subclassing problem when the lock is required on the instance -- just always require subclasses to go through your (synchronized) methods on the superclass to do any modifications to fields in the superclass. However, the example you gave on Cafe au Lait is trickier, because the library class is not one you wrote yourself and two locks need to be acquired instead of one. Even for this trickier problem, the same idea of containment applies: You want to prevent other methods from modifying the objects you're using. The only way to guarantee that other developers don't bypass your synchronization is to prevent all other kinds of unsynchronized access.
Because the standard Java I/O classes are inherently un-thread-safe, this means you cannot allow external code to get ahold of the original stream objects, only thread-safe wrapped versions of the original stream. In particular, any stream class you write with the intention of making it thread-safe cannot have a constructor taking a generic stream (InputStream, OutputStream) since the calling code could then modify the stream it passed in the ctor in an un-thread-safe way. This restriction alone makes it almost impossible to guarantee thread-safety when working with the pre-existing java.io stream classes, since they are designed to be attached to other streams through the ctor.
Although this demonstrates that it is not possible to accept a (potentially unsafe) stream from the user's code, it is still possible to create a safe bridge by putting a stream factory into the library, wrapping the stream in an opaque way for its travels through the user's code, and then extracting the original stream again in the library code. By preventing the developer from ever getting at the underlying (unsafe) instance, you can guarantee that the developer's code never interferes with your library's synchronization.
If your library resides in a package that you control, say my.library, then you could do the following:
package my.library;

import java.io.*;
import java.lang.reflect.Constructor;

public interface SafeStream { }

final class MySafeInputStream implements SafeStream { // package-local
    private InputStream stream;
    MySafeInputStream(InputStream s) { stream = s; }
    InputStream getInputStream() { return stream; }
}

final class MySafeOutputStream implements SafeStream { // package-local
    private OutputStream stream;
    MySafeOutputStream(OutputStream s) { stream = s; }
    OutputStream getOutputStream() { return stream; }
}

/* this implementation is just a sketch, to give the idea; it won't compile
   as-is because (among other things) it's missing error-handling. You'd also
   rewrite parts of it for better efficiency, etc. The main point of this
   class is that (together with the opaque handlers MySafeInputStream and
   MySafeOutputStream) it prevents the underlying InputStream or OutputStream
   instances from "leaking" to external (untrusted) code. */
public final class StreamFactory {

    public static SafeStream createInputStream(Class type) throws Exception {
        return new MySafeInputStream((InputStream) type.newInstance());
    }

    public static SafeStream createInputStream(Class type, SafeStream inner)
            throws Exception {
        Constructor ctor = type.getConstructor(new Class[] { InputStream.class });
        InputStream wrapped = ((MySafeInputStream) inner).getInputStream();
        return new MySafeInputStream(
            (InputStream) ctor.newInstance(new Object[] { wrapped }));
    }

    public static SafeStream createOutputStream(Class type) throws Exception {
        return new MySafeOutputStream((OutputStream) type.newInstance());
    }

    public static SafeStream createOutputStream(Class type, SafeStream inner)
            throws Exception {
        Constructor ctor = type.getConstructor(new Class[] { OutputStream.class });
        OutputStream wrapped = ((MySafeOutputStream) inner).getOutputStream();
        return new MySafeOutputStream(
            (OutputStream) ctor.newInstance(new Object[] { wrapped }));
    }

    public static SafeStream createFileInputStream(String file)
            throws IOException {
        return new MySafeInputStream(new FileInputStream(file));
    }
}

public final class SafeStreamUtils {

    public static void copy(SafeStream safeIn, SafeStream safeTo)
            throws IOException {
        InputStream in = ((MySafeInputStream) safeIn).getInputStream();
        OutputStream out = ((MySafeOutputStream) safeTo).getOutputStream();
        // now copy(in, out), exactly as before
    }
}
You can always replace the one interface SafeStream with two, SafeInputStream and SafeOutputStream, and personally I would also add methods to these interfaces so that the user can work with them directly (but in a thread-safe way, mind you) instead of going through a global "Utils" object (which is so non-OO).
This factory/bridging mechanism works to solve the "guaranteed threadsafe" problem in general, not just for streams, although it also brings up issues of its own (mainly when trying to get two independent libraries to work well together). It can even be made to work better with custom stream subclasses by modifying the factory to accept arbitrary constructor method descriptors, although that opens you up to the possibility that the developer retains a reference to some underlying object (like another stream) and uses it to bypass your synchronization checks.
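The same opaque-handle trick works for any delicate object, not only streams. A hypothetical sketch (all names invented for illustration): user code only ever holds the opaque Handle, so every access is funneled through the library's synchronized methods:

```java
// Hypothetical generalization of the factory/bridge pattern: because Box is
// private and final, no caller can reach the underlying StringBuilder, so
// external code cannot bypass the library's locking.
public final class Registry {

    public interface Handle { } // opaque marker, like SafeStream above

    private static final class Box implements Handle {
        final StringBuilder payload; // the "delicate" object, never exposed
        Box(StringBuilder payload) { this.payload = payload; }
    }

    public static Handle create() {
        return new Box(new StringBuilder());
    }

    // All mutation goes through the library, under one lock per handle.
    public static void append(Handle h, String s) {
        Box b = (Box) h;
        synchronized (b) { b.payload.append(s); }
    }

    public static String read(Handle h) {
        Box b = (Box) h;
        synchronized (b) { return b.payload.toString(); }
    }
}
```

The cast to Box fails fast if someone passes a foreign Handle implementation, which is exactly the point: only handles minted by the library can be used with it.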
To: elharo@metalab.unc.edu Subject: ThreadSafeStreams Mime-Version: 1.0 Date: Wed, 09 Jun 1999 17:03:32 +1000 From: Michael Lawley lawley@dstc.edu.au
Hi Elliotte,
If Thread.suspend() were not deprecated, then one could even imagine a solution that involved attempting to suspend all other threads for the duration (not very pretty, and you'd have to contend with ThreadGroup security issues).
My only other suggestion is a real hack - grab the class files for the underlying implementation, hack them to change the class names, then wrap them with a new implementation using the original class names and arrange for the resulting classes to appear at the beginning of the classpath (or even the bootclasspath).
OTOH I wonder if this is really a problem in practice? Other than stdin, stdout and stderr, how often does one have multiple reader/ writer threads on a stream that one doesn't otherwise have control over either all the methods or the creation of the stream?
Date: Sat, 12 Jun 1999 18:23:08 -0400 From: Irum Parvez irum@sprint.ca Organization: Irum Parvez CA To: elharo@metalab.unc.edu Subject: ThreadSafeStreams
Hi Rusty
One of the difficulties with streams is that you are often using a chain of them. Which stream (of the chain) do you synchronize on - the first, the last, or one of the middle streams? Let me define:

last - the node stream that the filter streams use (there is one of these in a chain)
first - a filter stream that you are using
middle - a filter stream that other filter streams often use (delegate to)
It would be very nice if Sun would add this to the stream API: java.io.InputStream (& OutputStream) would contain an abstract method called getNode(). The filter streams would delegate this to the node stream, which would implement the method by returning itself (this). This would allow you to synchronize on just the node stream of a chain and eliminate the overhead and risk of multiple locks. You would just synchronize on the object returned by getNode().
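Sun never added anything like this, but the proposal is easy to sketch. In this hypothetical version (getNode() is not part of java.io), node streams answer with themselves and filter streams delegate the question down the chain:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

// Hypothetical sketch of the proposed API; getNode() is NOT part of java.io.
interface NodeAware {
    InputStream getNode(); // the single node stream at the end of the chain
}

// A node stream returns itself.
class NodeByteArrayInputStream extends ByteArrayInputStream implements NodeAware {
    NodeByteArrayInputStream(byte[] buf) { super(buf); }
    public InputStream getNode() { return this; }
}

// A filter stream delegates down the chain.
class NodeBufferedInputStream extends BufferedInputStream implements NodeAware {
    NodeBufferedInputStream(InputStream in) { super(in); }
    public InputStream getNode() {
        // 'in' is FilterInputStream's protected field
        return ((NodeAware) in).getNode();
    }
}
```

With that in place, synchronized (chain.getNode()) locks an entire chain under one monitor, no matter how many filters are stacked on the node.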
What do you think?
P.S. I think your question could be phrased a little better:
//------------------------------------------------------------------------------ 3.Wrapping the unsynchronized classes in a synchronized class is insufficient because the underlying unsynchronized class may still be >exposed to other classes and threads.
>'unsynchronized classes' - Java mutex locks synchronize on objects - not classes.
I prefer to use the term delicate data or delicate object rather than unsynchronized class.
There are a few ways to protect delicate data from being corrupted.

A java primitive variable that is shared between threads can be protected by declaring it as volatile (so it is not stored in a register).

A delicate object can only be protected (from corruption) by encapsulation and synchronization:

NOTE: Protecting public delicate data is a waste of time. You must both synchronize and encapsulate.
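A minimal illustration of that rule (my own hypothetical class, not from the post): because the pair below is private and reachable only through synchronized methods, no thread can ever observe a half-finished update:

```java
// Hypothetical illustration: the delicate pair is private, and every path
// to it is synchronized, so the invariant can never be seen broken.
public class Delicate {

    // Invariant: lo <= hi. Both fields must always change together.
    private int lo = 0;
    private int hi = 0;

    public synchronized void set(int a, int b) {
        lo = Math.min(a, b);
        hi = Math.max(a, b);
    }

    public synchronized boolean invariantHolds() {
        return lo <= hi;
    }
}
```

Make either field protected, or either method unsynchronized, and a second thread can catch the object between the two writes - which is exactly the Safe/Scary failure described earlier.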
Date: Thu, 03 Jun 1999 18:51:36 +0200 From: Michael Peter <Michael.Peter@stud.informatik.uni-erlangen.de> X-Accept-Language: en MIME-Version: 1.0 To: elharo@metalab.unc.edu Subject: ThreadSafeStreams
Hi!
Since you have no control over the classes you wish to synchronize, it is not possible to use them directly. In your example your StreamCopier class tries to use synchronized(in) and synchronized(out) to make the method thread safe. This is of no use, since the classes themselves are not synchronized and synchronization only takes places within synchronized blocks. Even if they were synchronized, your method could possibly create a problem if the classes use another object for synchronization like in the following example:
public class foo {
    private Object lock = new Object();
    private int someVariable;

    public int someSynchronizedMethod() {
        synchronized( lock ) { // lock is used instead of this to synchronize
            return someVariable;
        }
    }
}
Since you cannot access the lock, you cannot synchronize from the outside.
There is however a way to provide synchronization by using a wrapper. You write that wrapping the class is insufficient, but it is only if you don't create the class in your wrapper, but accept a previously created object. As for StreamCopier the following works:
public class SynchronizedInputStream extends InputStream {

    private InputStream in;

    public SynchronizedInputStream() {
        throw new IllegalArgumentException(
            "Use createXXXInputStream to create a synchronized input stream" );
    }

    private SynchronizedInputStream( InputStream in ) {
        this.in = in;
    }

    public static InputStream createFileInputStream( File f ) throws IOException {
        return new SynchronizedInputStream( new FileInputStream( f ) );
    }

    public static InputStream createByteArrayInputStream( byte[] buf ) {
        return new SynchronizedInputStream( new ByteArrayInputStream( buf ) );
    }

    /* ... You have to write a method for every InputStream type you want to use */

    /* Wrap all InputStream methods */
    public int read() throws IOException {
        synchronized( this ) {
            return in.read();
        }
    }

    public int available() throws IOException {
        synchronized( this ) {
            return in.available();
        }
    }

    public boolean markSupported() {
        synchronized( this ) {
            return in.markSupported();
        }
    }

    /* Do this for all remaining methods */
}
This method might not be very efficient if multiple InputStreams are chained, but I think there is some room for optimisation, like checking if a class is already a instance of SynchronizedInputStream and then synchronizing only once. This is left as an exercise to the reader. (I always wanted to say this!)
The copy method from StreamCopier would then look like this:
public static void copy(SynchronizedInputStream in, SynchronizedOutput); } } } }
Date: Thu, 3 Jun 1999 16:41:04 -0700 To: elharo@metalab.unc.edu From: Greg Guerin <glguerin@amug.org> Subject: ThreadSafeStreams
Hi,
Good Q o'Week...
I think that parts of the question as stated are misleading (as in "leading to incorrect, unsupported, or erroneous conclusions or constraints":
>However, this only helps if the other threads using those streams >are also kind enough to synchronize them. In the general case, that >seems unlikely.
It better not be -- those other threads don't magically pop themselves into existence executing whatever code they feel like. Those other threads are spawned by *MY* code, and presumably execute the run() method *I* designate for them. Kindness is not the question: "The question is which is to be master -- that's all".
>1.You're trying to write a library routine to be used by many different >programmers in their own programs so you can't count on the rest of the >program outside this utility class being written in a thread safe fashion.
Fine, that's what synchronized methods are for. If you have encapsulate an entire object so only one thread can use it over a longer period of time than a single method-call, that's what wait/notify are for. See more under 3 below.
>2.You have not written the underlying classes that need to be thread safe >(InputStream and OutputStream in this example) so you can't add >synchronization directly to them.
True, but that doesn't mean I can't sub-class them and only give a SynchronizedInputStream and/or a SynchronizedOutputStream to the threads that need to coordinate their I/O. Either I write the threads to do that themselves, or I pass args to them that guarantee the needed coordination. A Thread is a thread of execution, not a monolithic self-contained self-determined omnipotent blob of code with only a few puppet-strings emerging from it. An ActiveX control or a Java Applet may fit that mold, but not a Thread.
>3.Wrapping the unsynchronized classes in a synchronized class is insufficient >because the underlying unsynchronized class may still be exposed to other >classes and threads.
So what if the *CLASS* is exposed? The threads are either:
If I don't have that level of control over my threads, then I have a bigger problem than merely coordinating some I/O streams.
This situation is no different than classes that take InputStream arguments, but happily work with FileInputStream, PipeInputStream, FilterInputStream, StrongCryptoInputStream, or MyBeamedInFromMarsInputStream. Indeed, a sub-class of FilterInputStream and FilterOutputStream with synchronization added to its methods should handily solve the problem, as I understand it.
If you need thread-safe access to an entire object over multiple method-calls, then a centralized arbitrator operating with wait/notify will be needed. You call the arbitrator to get the object, you use it until you don't need it any more, then you return the object to the arbitrator so it can hand it out again. Threads that fail to return an object to the arbitrator are defective -- they are resource-tyrants. If your question is asking "How can an arbitrator wrest contol of a resource back from a resource-tyrant?" the answer is "you can't". If that's a problem, then either don't write resource-tyrant threads, or don't share resources with a resource-tyrannical thread.
I think your StreamCopier.copy() example is kind of an odd case. First, I don't think I'd call it copy() -- seems to me that expand() or concatenate() might make more sense. Second, I probably wouldn't have structured it as a static method, but as an instance-method of a class that either expanded a given InputStream arg to an instance-variable OutputStream (guaranteeing the expansion would occur without intrusion), or a class constructed with several InputStream sources that guaranteed reading would occur in sequence (concatenation). The former makes more sense to me for your "copy" feature. The latter is essentially a java.io.SequenceInputStream. Either way, I think the StreamCopier is just a poorly-designed class for its intended purpose (judging by its copy() method alone, since I haven't read your book).
Since I haven't read your new book yet, I don't know to what use StreamCopier is being put. If it's guaranteeing non-intrusion of other writes, then a sub-classed FilterOutputStream could easily fill the bill. If it's concatenation, then SequenceInputStream might work, though I doubt its Enumeration is guaranteed thread-safe, so you might need a custom-made InputStream sub-class with thread-safety and concatenation.
Finally, here's a common idiom, overextended to make a point, that shares certain characteristics with the problem as you posed it. Consider:
OutputStream out1 = mySocket.getOutputStream(); OutputStream out2 = new BufferedOutputStream( out1 ); OutputStream out3 = new CRLFingFilterOutputStream( out2 ); OutputStream out4 = new BufferedOutputStream( out3 );
Here we have 4 different OutputStreams, and none of them are thread-safe. A single thread that writes to out1, out2, out3, and out4 at different points is going to screw things up badly, due to buffering, filtering, etc. OH NO, HOW DO WE PREVENT THAT?! Simple -- you create the thread's code to only write on out4. It can't mangle what it can't touch.
The same principle applies to the thread-safety issue with StreamCopier. If it's reading and writing to a synchronized stream, then each buffer-full that it processes has guaranteed sequential integrity. If it needs to guarantee sequential integrity of an entire InputStream onto an OutputStream, then it needs to participate in the Designated Sharing Ritual just like every other thread that has access to either stream. If a thread can't play by those rules, then you shouldn't let it play with other threads who are willing to share their toys and play nicely.
Sorry if that went too long.
-- GG
p.s. I don't have either of your Java I/O or XML books, but even if I don't win, you could just leave the choice up to the winner -- if unsure of customer's needs, ask customer. ;-) | http://www.cafeaulait.org/questions/06031999.html | crawl-002 | refinedweb | 3,680 | 51.89 |
Objective Evidence Against Gotos
Irrefutable proof that gotos will not be helping the programmers of the future:
Reminiscent of the actual Cobol Tombstone:
Can it be objectively proven that control-structure blocks are "better" than goto's? (regardless of personal preference)
MuAnswer
. Better for what purpose?
That's part of the issue to discuss. Perhaps some background. Some have alleged that certain technologies are objectively better. The goto issue is used as a testing ground of proving techniques.
I've rarely seen people allege certain technologies are objectively better without providing implicitly or explicitly a 'purpose'. The proposed 'goto issue' remains an invalid testing ground until offered a similar basis.
I would say "Developer productivity", but I doubt that is specific enough for you. However, the topic intro is not the place to hash it out in my opinion.
Most important point:
default
behavior of a code snippet is hard to know, since a goto can go anywhere.
What do you mean by "default"? Code with lots of IF statemnts and flags can also make it tricky to see what's "ahead".
{It would be nice if you didn't screw up this page with your thread mode bullets.}
Now here is an example of unnecessary rudeness. You could have simply said, "Please move discussion below the bullet points." I would have gladly done it. Don't let your reptilian attack-mode instincts control how you say things.
{An
if
or
case
statement by default goes downward. A procedure or function by default goes downward. These are all default behaviors. Even the exit, break, and continue statements are defaulted - they cannot be several different things all at once - which is what a goto/label combination can be. A default behavior is a form of useful restriction. And no one is saying that goto's should be banned - this page should be directed toward evidence against harmful goto's. As with Dijkstra's paper on the case against Goto, unfortunately a lot of people misinterpret the "case against it" to mean that ALL goto's should be banned as a religion. This is not the case.}
Most goto's went downward also, as I recall. Thus, direction is not necessarily a difference maker.
This lack of "default" behavior means goto's are more unpredictable and hard to follow when reading a snippet of code.
How is "unpredictable" being measured?
{You admitted yourself further down that it is more predictable, and now you are asking how it is unpredictable? Look at your own words.. obviously you think so yourself - the measurement therefore is empirically proven by YOUR own words}
Please use a named reference, such as a
PageAnchor
.
Assuming everyone makes errors (which is true in all observations thus far) the potential error for a 'goto' statement is the full collection of labels accessible from the goto statement's context (which, depending on the language and compiler, might be a single routine or the whole program). Thus, when dealing with common classes of 'goto' patterns (including loops), avoiding use of 'goto' can automatically avoid a variety of potential errors.
This is not specific enough.
(*Waves hands in front of your eyes in return and speaks in a low monotone*)
Yes it is.
. And I'll even offer a
reasonable
argument: If you write a language-supported loop, you have no risk of error figuring out which 'label' to go to at the end of the loop. If you use 'gotos', and there are two or more labels in context, then you have potential to choose the wrong one. Therefore a class of potential errors is prevented simply by avoiding use of goto. QED.
That assumes that using blocks has no downsides.
No, it doesn't.
Further, there's often confusion about which closing "}" a loop ends with. An explicit and unique name eliminates this problem. If not for the indentation, it would be very difficult. Again, I am not necessarily disagreeing from a personal perspective, but cannot turn my preferences into something outside my mind. A different person or alien may prefer something different.
One can have named loops that require a name at the top and bottom be equal if one wishes it in a language, and it still avoids the class of error associated with 'goto' labels (since it would still syntactically prevent overlapping of loops). You're attempting to use 'preferences' as an argument against hard facts that come from the very top of the
EvidenceTotemPole
. Is this your normal modus operandi?
If the block names are generic, such as "end-if", then one IF block ender can still be confused for another IF-block ender.
That's ultimately irrelevant. One is still prevented from making a mistake. 'Confusion' and such subjective purely-inside-the-programmer issues are entirely irrelevant to the above argument, which regards only degrees of freedom to make actual errors and the observed truth that people, given enough opportunities, make every error they have freedom to make.
If we "repair" it by introducing unique labels, then we create many of the same name-space problems that goto's have.
It's a matter of trading context for unique naming
in order to match and identify things.
That's simply false. If, for example, you have named "if" blocks with named end-if pairs, you'll still be unable to end-if the
incorrect
'if' statement due to the required block structuring of the 'if' statements - that is: one fully within another. If you write the incorrect end-if name, it will be caught at parse time - or even edit-time with a highlighting editor. You can't say the same of gotos.
I have my preference, but this is not about personal preference.
WaterbedTheory
is popping up here between context and unique labels. Braces are the far end of context-only, block-type markers (end-if, end-while, etc.) are a compromise between these, and labels-only are the far end of this. --top
The 'confusion' issue was
never
a counter-argument for the degrees-of-freedom problem... even confusing '}' statements still prevent the errors associated with the gotos. Aiming at confusion is a separate issue.
May I suggest you provide examples.
Also, given a 'goto' label in a large 'goto' context, it is rarely clear at a glance how one might reach it and, in the presence of errors, how one is intended to reach it - i.e. even if you're looking right at the error, it will be difficult (as a maintainer) to recognize it as one. The difficulties resulting from this have been observed often enough to receive a pattern name:
SpaghettiCode
.
Same can be true with lots of flags and IF statements. Reduction of goto's often results in more flags and more IF statements. Even though the "statement pointer" may flow in a more predictable manner, the results of flags and conditionals still adds a lot of uncertainty.
Oh! Oh! And I can pound a nail in with a hammer a lot more efficiently than you can drill a screw in with your tongue! Essentially, you're arguing that X done well is better than Y done badly, which is not particularly enlightening or valuable. If you translate the 'if' pattern to 'gotos' or labeled branches, the same problems will exist (you still need to conditionally hit all the correct segments of code) coming free with all the additional confusion regarding the branch labels. There are some resource-cleanup-patterns (in event of errors) that use gotos cleanly where 'if' based approaches are horribly obtuse, and I suspect you might be referring to these. But there are many other efficient patterns that don't use gotos (RAII, 'finally' clauses, 'withFile' routines, etc.). Mostly, one can make a good case against the use of 'IF' statements in the context of resource-cleanup and such, because that is 'ifs done badly'.
I suggest you break this up into sections, with an example each. There's too much mixed up together in that paragraph to easily follow and reply to cleanly.
Using 'goto' when you really mean 'foreach' or 'until' is semantic noise - you aren't expressing exactly what you mean. Long experience with languages of all varieties has proven that not saying what you mean introduces greater potential for confusion among people who read the code, including maintainers. Maintainers will need to learn to recognize patterns of code that vaguely correspond to a purpose rather than seeing some sort of language-statement or macros that makes its purpose as clear as possible.
Create a label called "for-each". Just because Fortran used numbers as labels does not mean that all goto's should.
And that fixes the problem how? 'goto foreach'... doesn't quite catch my meaning either, and 'goto until' is even worse.
Just because you can? There's another point against Goto's: programmers who use clever tricks just because they can, should be banned from programming. Goto's offer these tricks to programmers who want to follow the "just because I can" style.
If true, isn't that a psychological issue? Further, every technique is abusable.
Using 'goto' with naive macro systems to create language-statements generally runs headlong into namespace problems (symbol conflicts for 'goto' labels) when need arises to nest macros, so you rarely have that choice in modern languages. That might change in the future, but it's a truth that exists today.
Semantic noise can also affects other readers of the language: compilers, interpreters, and optimizers... e.g. use of 'goto' in place of a subroutine call makes it much more difficult to verify whether inlining of the call can legally be performed.
This is an issue of inlining versus subroutines, not so much goto's versus blocks.
Not really. It is an issue of semantic noise in general; the use of 'goto' in place of a subroutine is just one example. I could also talk about unrolling of loops and a variety of other patterns where hand-inlining 'goto' and other low-level implementation of a high-level ideas makes (more) difficult the analysis necessary to validate automated optimizations. This is fundamental because optimizations depend on finding invariants and high-level ideas are essentially abstractions of invariants and thus make them quite obvious to the analysis. But that really isn't the subject of this page.
Can we agree to limit this to blocks-versus-goto's and exclude the issue of subroutines? I don't want to introduce any more factors than we need to. Further, subroutines or functions were readily available in most early 2nd-generation languages. --top
I don't believe that a reasonable request, unless you're aiming to limit this page to: ObjectiveEvidenceAgainstGotosButIgnoringVariousFactorsArbitrarilyChosenByTop
?
.
It's not "arbitrary", it's based on what actually happened in
history
. That code is available for analysis. A subroutine-free era never actually existed. And why would your choice be any less arbitrary?
If statements and blocks have been around since Fortran and Lisp - the first two programming languages. Should we ignore evidence against gotos associated with these, too? You're making
arbitrary
decisions "based on what actually happened in history". Your excuses don't change that.
IIRC, Fortran and COBOL (the most widely-used 2nd-gen languages until the 80's) did not introduce blocks until the mid 70's, but it took a while before those became widely used. Thus, we have about 20-25 years of wide goto usage. I've seen such code with my own eyes. The biggest problem I see with eliminating subroutines for the comparison is that the issue of subroutines/functs versus no sub/functs may become the
overriding factor
of any metrics, and we couldn't really tell. We are testing goto's here, not subroutines, and making subroutines the focus is arguably off-topic. If you wish to ignore this line of reasoning and continue on your way, may I ask that you at least separate the two lines of comparisons because I do not wish to participate in a subroutine-versus-no-subroutine debate. Lisp was not widely used.
Feh. You talk as though evidence against gotos in one case would be a deciding 'metric' for policy regarding gotos in all other places. How extremely illogical.
I don't know how you are coming to this conclusion.
In any case, you bring up pages like this because you are convinced that there is no objective proof to support
any
methodology over another, which would include use of subroutines over gotos (and vice versa). I find your attempting to delimit the scope of evidence to be inconsistent with your prior assertions.
If you wish to talk about objective evidence for subroutines, that's fine, but I'd strongly suggest creating a new topic for it. This one is about goto's. (I won't promise I'd participate in it, by the way.)
Most of the above are generalized thought experiments, not rigorous counts.
Most of your responses, including that one, can be properly characterized as rigorous hand-waving, not reasonable counterpoints. 'Objective evidence' doesn't mean 'counts' or even 'measures'. It means 'not subjective evidence'. Logical inference in accordance with rules that aren't subject to arbitrary hand-waving is not subjective. Using rules that have traditionally lead to correct answers in the past, such as classical logic, is (in addition to being 'not subjective) even useful and well proven... and can even be rigorous.
An example of something concrete and objective[1] would be
CodeChangeImpactAnalysis
. We would have numbers and know how those numbers came about. It may produce a result such as, "the goto version required 22 percent more lines-of-code being changed."
If you wish to produce your own artificial soviet-shoe-factory problem by measuring the irrelevant, go ahead.
Why is it irrelevant? I agree it is not thorough, for it is one metric among many, like say rebounding stats in Basketball. (Stat-based selection has proven superior in Baseball over scouting according to the book Super-Crunchers.) The Soviet example didn't use a bad metric, just not enough metrics. Your choice of the word "artificial" also is confusing. And, what numeric metric *do* you propose instead?
In any case,
CodeChangeImpactAnalysis
itself isn't particularly relevant to the question of evidence for and against 'gotos'.
CodeChangeImpactAnalysis
is something you do either after you have a codebase or when you're deciding upon a codebase architecture in the presence of foreseeable need for change. That doesn't exist here, so the most you'll do with your acclaimed
CodeChangeImpactAnalysis
here is wave your arms some more, create 'code' and 'change' scenarios that favor whichever changes you happen to want to favor (probably using the 'X done well vs. Y done poorly' approach to building straw-men), then make whichever conclusion you happen to desire.
It is my personal assessment that you are likely projecting.
It is my personal assessment that you are likely incorrect.
[1] Some may argue that the choice of tests is subjective, but at least how the numbers come about is objective.
It doesn't matter whether they were motivated for subjective reasons; every choice you ever make is objective.
The above clearly shows a) how difficult it is to make general statements and b) how easy it is to uselessly criticize them.
The first poster should have clearly delimited his context. Given no context allows all kinds of counterpoints even ones that are silly and unhelpful. The first poster could have written:
Comparison of
GoTo
against structured means to express looping with respect to readability of the expressed loop.
Probably nobody would have disagreed with the finding that
GoTo
is worse at this task.
One could then list more uses of
GoTo
and probably find more areas where
GoTo
is not that useful (e.g. conditionals).
But an exhaustive list of
GoTo
usages is hard to get (though I bet there are disertations about this out there). I have no difficulty giving examples where a
GoTo
is
more readable than the normal structured means. First among these being a simple
StateMachine
simulation. Every solution that decomposes this into loop+switch or lots of state classes or using recursion is inherently less readable and the state-transitions are more difficult to follow.
This quickly leans to the conclusion that
GoTo
is not inherently evil but - as always - the usage is. No
GoldenHammer
, but no TracelessPoison
?
either.
More involved and heated discussion about goto evidence in
ObjectiveEvidenceAgainstGotosDiscussion
.
See also:
GotoConsideredHarmful
,
BickeringConsideredHarmful
CategoryMetrics
,
CategoryBranchingAndFlow
JuneZeroEight
View edit of
May 16, 2013
or
FindPage
with title or text search | http://c2.com/cgi/wiki?ObjectiveEvidenceAgainstGotos | CC-MAIN-2015-18 | refinedweb | 2,782 | 55.64 |
0
Hi,
I am working on a project and when I run my code, I get a segfault. I checked it with valgrind, and it says that I am trying to access a memory location that I maybe not malloc'd, stack'd or recently free'd. The only 'mysterious' thing I'm doing is, passing the pointer of a structure list to a function. Here is the code I get the segfault.
int GetFieldInt(list<fieldInfo> *listIn) { list<fieldInfo>::iterator tempItr; int listCount = 0; tempItr = (*listIn).begin(); while(tempItr != (*listIn).end()) { listCount++; (*tempItr).percentage = ((*tempItr).percentage)*targetCount/100; //segfault here tempItr++; } if(listCount == 0) { cout<<"Field not defined!\n"; return -1; } else return (int)MAX_RAND/listCount; }
It's a rather long code. Whenever I pass the list pointer to a function, funny things happen. Sometimes, in the linux terminal, the font changes to some weird symbols. Can anyone help me with this?
Your help is much appreciated. Thanks in advance.
Thilan | https://www.daniweb.com/programming/software-development/threads/297414/passing-a-list-pointer-to-a-function | CC-MAIN-2018-43 | refinedweb | 162 | 60.51 |
Important: Please read the Qt Code of Conduct -
[Solved] Using QmlDesktopViewer from within QtCreator
I'm using Qt (4.8), QtMobility and qt-components-desktop from Gitorious on Archlinux x86_64 and built with "qmake PREFIX=/usr" and all seemed to install okay but when I run a basic example all I get is...
@~/Devel/qt-components-desktop qmldesktopviewer/qmldesktopviewer examples/Gallery.qml
registerying types now
@
and it just hangs there. Any clues as to what might be happening?
The Gallery example is not a Window, hence it will not show up in the QmlDesktopViewer. There is a different TopLevel example you can use to try out that. The Gallery example should open fine in the regular QMLViewer.
Thanks for your help Jens, you are right. I was a bit confused getting a Qt devel system set up from scratch and I see the wiki page specifically mentions to try examples/TopLevel.qml and sure enough it works as advertised. I added another Tools -> External -> Qt Quick -> QDesktopViewer option to qtcreator so now it also runs from within it as well.
I am a complete newbie to both Qt & Qml. I downloaded the qml desktop components and opened the project in QtCreator (by double clicking on desktop.pro).
When I tried to run the project, I get the message as given below.
Starting C:\QtProjects\desktop-build-desktop-Qt_4_7_4_for_Desktop_-_MinGW_4_4__Qt_SDK__Release\qmldesktopviewer\release\qmldesktopviewer.exe…
registerying types now
I cannot see any windows or any other kind of outputs. When I try to close the qmldesktopviewer.exe tab (in Application Output pane), I get the message “qmldesktopviewer is still running. Force it to quit?”. So it means qmldesktopviewer started. But I don't know what else to do.
I had posted "here": also the same issue and the author gave this reply:
[quote]
as i already mentioned, i believe that on windows “nmake install” should install the plugin to the right place. Unfortunately i don’t have a windows machine here right now to verify that. Once the plugin is installed in the right place, you have to run QmlDesktopViewer with the appropriate file as an argument, otherwise you will simply see nothing happening.
The appropriate files are either: TopLevel.qml or TopLevelBrowser.qml.
In case you are running QmlDesktopViewer from QtCreator, please make sure that you add the appropriate file as an argument in Run settings.
[/quote]
Could you guys help me in getting this run? I am not even sure whether I am doing it the right way.
Yes, it's a bit confusing but the key point is the last sentence of the reply above. After loading the desktop example then go to the Projects section and note the Build|Run widget, select run then make sure "Run configuration:" is set to qmldesktopviewer and then put the full path to the TopLevel.qml demo in the Arguments field.
I got the full path of the TopLevel.qml file by going back to the Edit section and RMB on the desktop -> examples -> QML -> TopLevel.qml file and "Show Containing Folder" then did another RMB "Copy" operation on the TopLevel.qml file itself and pasted that into the Projects / Run / Arguments field. If the examples are not visible in the initial desktop project load then open desktop.pro and add "examples" to the end of the SUBDIRS line and reopen the project.
My project path is this: C:\QtProjects\QMLDesktopComponents\
I added the qml file path in Run configuration as you explained above. Now my Projects->Run->Arguments has "C:\QtProjects\QMLDesktopComponents\examples\TopLevel.qml" as the argument.
But when I run the project, I get this message now.
[quote]
Starting C:\QtProjects\desktop-build-desktop-Qt_4_7_4_for_Desktop_-_MinGW_4_4__Qt_SDK__Release\qmldesktopviewer\release\qmldesktopviewer.exe...
registerying types now: module "QtDesktop" is not installed
import QtDesktop 0.1
^
[/quote]
I do not want to appear lazy, but can't figure out what is wrong. :(
A wild guess would be to try a copy of the whole examples folder in the same folder as qmldesktopviewer.exe, change the Arguments field to match, and see if that works. It might be a waste of time but I don't have a windows machine. Hopefully someone else with more windows experience can help.
unni: Did you do a make install after compiling the desktop component project itself? This should copy the relevant files into QTDIR/imports/Qt/labs/QtDesktop and allow you to import QtDesktop into the project.
Does anybody tried to use it with SDK 1.2.
It compliles fine, but Qt Quick (inside QtCretor) crashes if you try to see the designer with it.
Any idea why? | https://forum.qt.io/topic/9381/solved-using-qmldesktopviewer-from-within-qtcreator/2 | CC-MAIN-2021-25 | refinedweb | 771 | 66.64 |
Basics to Best Practices
Covers Struts 1.1
Srikanth Shenoy
Austin
ObjectSource LLC books are available for bulk purchases for corporations and other organizations. The publisher offers discounts when ordered in bulk. For more information please contact:

Sales Department
ObjectSource LLC
2811 La Frontera Blvd., Suite 517
Austin, TX 78728
Email: sales@objectsource.com

First corrected reprint

Copyright © 2004, 2005 ObjectSource LLC. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise) without the prior written permission of the publisher.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and ObjectSource LLC was aware of a trademark claim, the designations have been printed in initial capital letters.

The author and the publisher have taken care in the preparation of this book, but make no express or implied warranty of any kind and assume no responsibility for errors or omissions. In no event shall ObjectSource LLC be liable for any damages in connection with or arising out of the use of the information or programs contained herein.
Published by
ObjectSource LLC
2811 La Frontera Blvd., Suite 517, Austin TX 78728

Printing
RJ Communications
51 East 42nd Street, Suite 1202, New York NY 10017

Cover Design
Matt Pramschufer, Budget Media Design, Pleasantville, New York

Library of Congress Catalog Number: 2004100026
ISBN: 0-9748488-0-8 (paperback)
Printed in the United States of America
Table of Contents
Chapter 1. Getting Started
    J2EE Platform
    J2EE web application
    JSPs
    1.1 Model 1 Architecture
        Problems with Model 1 Architecture
    1.2 Model 2 Architecture - MVC
        Advantages of Model 2 Architecture
        Controller gone bad – Fat Controller
    1.3 MVC with configurable controller
    1.4 First look at Struts
    1.5 Tomcat and Struts installation
    1.6 Summary
Chapter 2. Struts Framework Components
    2.1 Struts request lifecycle
        ActionServlet
        RequestProcessor and ActionMapping
        ActionForm
        Action
        ActionForward
        ActionErrors and ActionError
    2.2 Struts Configuration File – struts-config.xml
    2.3 View Components
        How FormTag works
        How ErrorsTag works
    2.4 Summary
Chapter 3. Your first Struts application
    3.1 Introduction
    3.2 Hello World – step by step
    3.3 Lights, Camera, Action!
    3.4 Handling multiple buttons in HTML Form
    3.5 Value replacement in Message Resource Bundle
    3.6 Summary
Chapter 4. All about Actions
    4.1 ForwardAction
        MVC compliant usage of LinkTag
        Using LinkTag's action attribute
        Using LinkTag's forward attribute
        Using ForwardAction for Integration
        ForwardAction Hands-on
    4.2 Protecting JSPs from direct access
    4.3 IncludeAction
    4.4 DispatchAction
    4.5 LookupDispatchAction
    4.6 Configuring multiple application modules
    4.7 Roll your own Base Action and Form
    4.8 Handling Duplicate Form Submissions
    4.9 What goes into Action (and what doesn't)
    4.10 When to use Action chaining (and when not to)
    4.11 Actions for complex transitions
        Wiring the handlers
        State aware Forms
    4.12 Managing struts-config.xml
        Struts-GUI
        Struts Console
        XDoclet
    4.13 Guidelines for Struts Application Development
    4.14 Summary
Chapter 5. Form Validation
    5.1 Using Commons Validator with Struts
        The twin XML files
        validation-rules.xml – The global rules file
        validation.xml – The application specific rules file
        More validation.xml features
        Using the ValidationForm
        Configuring the Validator
        Steps to use Commons Validator in Struts
    5.2 DynaActionForm – The Dynamic ActionForm
        DynaValidatorForm
    5.3 Validating multi-page forms
    5.4 Validating form hierarchy
    5.5 Summary
Chapter 6. Struts Tag Libraries
    6.1 Struts HTML Tags
        Modifying the Base Tag
        Form Tag
        FileTag
        Smart Checkbox – The state aware checkbox
        Using CSS with Struts HTML Tags
        Enhancing the error display with customized TextTag
        The recommended way to use ImgTag
    6.2 Using Images for Form submissions
        ImageButton and JavaScript
    6.3 Struts Bean Tags
        Message Tag and Multiple Resource Bundles
        Write Tag
    6.4 Struts Logic Tags
        Nested Logic Tags
        Iterate Tag
    6.5 A crash course on JSTL
        JSTL Binaries – Who's who
    6.6 Struts-EL
        Struts-EL hands-on
        Practical uses for Struts-EL
    6.7 List based Forms
    6.8 Multi-page Lists and Page Traversal frameworks
        Pager Taglib
        DisplayTag and HtmlTable frameworks
        Creating the Model for iteration
    6.9 Summary
Chapter 7. Struts and Tiles
    7.1 What is Tiles
    7.2 Your first Tiles application
        Step 1: Creating the Layout
        Step 2: Creating the XML Tile definition file
        Step 3: Modifying the forwards in struts-config.xml
        Step 4: Using TilesRequestProcessor
        Step 5: Configuring the TilesPlugIn
    7.3 Tiles and multiple modules
    7.4 Summary
Chapter 8. Struts and I18N
    Terminology
    What can be localized?
    8.1 The Java I18N and L10N API
        Accessing Locale in Servlet Container
    8.2 Internationalizing Struts Applications
    8.3 Internationalizing Tiles Applications
    8.4 Processing Localized Input
    8.5 Character encodings
        Struts and character encoding
        native2ascii conversion
    8.6 Summary
Chapter 9. Struts and Exception Handling
    9.1 Exception Handling Basics
    9.2 Log4J crash course
    9.3 Principles of Exception Handling
    9.4 The cost of exception handling
    9.5 JDK 1.4 and exception handling
    9.6 Exception handling in Servlet and JSP specifications
    9.7 Exception handling – Struts way
        Declarative exception handling
        Using the ExceptionHandler
        When not to use declarative exception handling
        Exception handling and I18N
    9.8 Logging Exceptions
    9.9 Strategies for centralized logging
    9.10 Reporting exceptions
    9.11 Summary
Chapter 10. Effectively extending Struts
    Customizing the action mapping
    10.1 A rudimentary page flow controller
    10.2 Controlling the validation
    10.3 Controlling duplicate form submissions
    10.4 DispatchAction for Image Button form submissions
    10.5 Summary
Preface
I started using Struts in late 2000. I was immediately drawn to its power and ease of use. In early 2001, I landed in a multi-year J2EE project, a large project by any measure. Struts 1.0 was chosen as the framework for the web tier in that project. Recently that project upgraded to Struts 1.1. I did the upgrade over a day. It cannot get any easier!

This book makes no assumptions about your Struts knowledge. It starts with the basics of Struts, teaches you what is important in Struts from a usage perspective and covers a lot of practical issues, all in a short 200-page book. No unnecessary explanations. Concise, clear and straight to the topic.

I am a consultant, not an author by profession. Hence my writing also tends to reflect the mindset shaped by the various assignments I have undertaken in the past. Large projects and their inbuilt complexities excite me. In large projects, decoupling layers is a big thing. Also, minor decisions made during architecture and design (probably without complete knowledge of the framework used) can impact the project in a big way down the line. Clearly understanding the strengths and shortcomings of a framework, and making minor customizations to the framework, go a long way in keeping applications cleaner. From that perspective, I have attempted to give a practical face to this book based on my experience. Chapters 4, 5, 6, 9 and 10 will be extremely valuable to all those wishing to use Struts effectively in J2EE projects.

I have enjoyed writing this book a lot. Even though I knew Struts well, there were some crevices that I had not explored, and exploring them has made me that much better. If you are a beginner, this book is your fastest track to mastering Struts. There are also a lot of best practices and strategies related to Struts that make this book valuable even to experienced developers and architects.

Srikanth Shenoy
January 2004
Acknowledgements
A good book is the end result of the efforts of many. For the first edition, Sandeep Nayak helped by agreeing to be a beta reader and pointing out the problems in the initial two chapters. The first edition also owes a debt to Fulco Houkes from Neuchâtel, Switzerland, who reviewed the book and tried parts of the source code to ensure it is working. Likewise, I am indebted to booksjustbooks.com for making their book on publishing basics freely available online, without which this book wouldn't have been a reality. Thanks to RJ Communications for printing this book. Many thanks to Matt Pramschufer from Budget Media Design for the cover design.

I would like to thank Amazon.com for being a small-publisher friendly retailer. It is impossible to sell books without the reach that Amazon has. This book comes from an independent small publisher lacking the distribution network that big publishers possess. Had it not been for Amazon, the first edition of this book would still be lying in my warehouse and I would not have been able to offer this ebook free of cost.

I owe thanks to my wife for patiently putting up with me when I was working evenings and weekends on the book, and for editing it as well. I also owe thanks and gratitude to my parents, who showed me the right path as I was growing up and instilled in me a passion for perfection, the courage to achieve anything, and the will to never give up. Finally, thanks to God, through whom all things are made possible.
Srikanth Shenoy March 2005
Where to get the Source Code for the book

The source code for this book can be downloaded from the website. The examples download also includes a companion workbook for this Survival Guide. The workbook illustrates the concepts with step by step instructions. Also don't forget to download the PDF slides used in a short Struts training.
Author Profile
Srikanth Shenoy is the co-founder and chief mentor at ObjectSource LLC. ObjectSource is an Austin, TX based company providing J2EE training and consulting that also publishes great technical books. Srikanth has over 10 years of experience in the software industry. Previously he worked as a J2EE Consultant and Architect for J2EE and CORBA product vendors and large system integrators. He has helped clients in the manufacturing, logistics, and financial sectors to realize Java's "write once, run anywhere" dream. He has written articles and papers for IBM developerWorks, TheServerSide and others, on topics ranging from EJBs to JavaServer Faces and Maven. Most recently he created the OTOM framework () - a Java tool that allows graphical mapping of one object to another and subsequent code generation. He also contributes to several other Open Source projects. He can be reached at shenoy@objectsource.com.

ObjectSource LLC provides solutions and services including architecture & design, strategic architecture review, consulting in areas such as J2EE applications and O-R mapping (specifically TopLink and Hibernate), and training on topics such as J2EE, Struts, Hibernate, TopLink, JavaServer Faces and Spring.
Struts Survival Guide – Basics to Best Practices
Chapter 1
Getting Started
In this chapter:
1. You will learn about Model 1 and Model 2 (MVC) architectures and their differences.
2. You will understand the shortcomings of Model 1.
3. You will understand the problems with Model 2 – the Fat Controller antipattern.
4. You will learn how to overcome the problems in Model 2 by using Model 2 with a configurable controller.
5. You will see how Struts fills the gap by providing the configurable controller and much more to boost developer productivity.
6. You will look at installing Tomcat and Struts on your computer.

What is Struts?

Struts is a Java MVC framework for building web applications on the J2EE platform. That's it! As you can see, a whole lot of buzzwords appear in that one sentence. This chapter analyzes the above definition word by word and presents the big picture of Struts. It also shows how Struts makes it easy to develop web applications for the J2EE platform. But first, I will start with a quick overview of the J2EE platform, followed by the basics of J2EE web application development, before looking at the various strategies and frameworks available today for developing the J2EE presentation tier.

J2EE Platform

As you may already know, J2EE is a platform for executing server side Java applications. Before J2EE was born, server side Java applications were written using vendor specific APIs. Each vendor had unique APIs and architectures. This resulted in a huge learning curve for Java developers and architects to learn and program with each of these API sets, and higher costs for the companies. The development community could not reuse the lessons learnt in the
trenches. Consequently the entire Java developer community was fragmented, isolated and stunted, making it very difficult to build serious enterprise applications in Java.

Fortunately, the introduction of J2EE and its adoption by the vendors has resulted in standardization of its APIs. This in turn reduced the learning curve for server side Java developers. The J2EE specification defines a whole lot of interfaces and a few classes. Vendors (like BEA and IBM, for instance) have provided implementations for these interfaces adhering to the J2EE specification. These implementations are called J2EE Application Servers. The J2EE application servers provide infrastructure services such as threading, pooling and transaction management out of the box. Application developers can thus concentrate on implementing business logic.

Consider the J2EE stack from a developer perspective. At the bottom of the stack is Java 2 Standard Edition (J2SE). J2EE Application Servers run in the Java Virtual Machine (JVM) sandbox. They expose the standard J2EE interfaces to the application developers. Two types of applications can be developed and deployed on J2EE application servers – Web applications and EJB applications.¹ These applications are deployed and executed in "containers". The J2EE specification defines containers for managing the lifecycle of server side components. There are two types of containers – Servlet containers and EJB containers. Servlet containers manage the lifecycle of web applications and EJB containers manage the lifecycle of EJBs. Only Servlet containers are relevant to our discussion, as Struts, the theme of this book, is a web application framework.

J2EE web application

Any web application that runs in the servlet container is called a J2EE web application. The servlet container implements the Servlet and JSP specifications. It provides various entry points for handling the request originating from a web browser.
There are three entry points for the browser into the J2EE web application – Servlet, JSP and Filter. You can create your own Servlets by extending the javax.servlet.http.HttpServlet class and implementing the doGet() and doPost() methods. You can create JSPs simply by creating a text file containing JSP markup tags. You can create Filters by implementing the javax.servlet.Filter interface. The servlet container becomes aware of Servlets and Filters when they are declared in a special file called web.xml.² A J2EE web application has exactly
¹ There are actually three types of applications that can be developed and deployed on J2EE app servers. The third one is an application conforming to the J2EE Connector Architecture (J2CA). However, I will leave this out for simplicity.
² There are various kinds of Listeners that you can declare in web.xml. You can also declare Tag Library Descriptors (TLD) in web.xml. More details can be found in the Servlet Specification. Again, I am leaving this out for simplicity.
one web.xml file. The web application is deployed into the servlet container by bundling it in a zipped archive called a Web ARchive – commonly referred to as a WAR file.
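As an illustration, a Servlet and a Filter might be declared and mapped in web.xml as shown below. The class names and URL patterns here are invented for this example:

```xml
<web-app>
  <!-- Declare the servlet class and give it a logical name -->
  <servlet>
    <servlet-name>mine</servlet-name>
    <servlet-class>com.example.MyServlet</servlet-class>
  </servlet>
  <!-- Map a URL pattern to the servlet -->
  <servlet-mapping>
    <servlet-name>mine</servlet-name>
    <url-pattern>/mine/*</url-pattern>
  </servlet-mapping>
  <!-- Declare and map a filter the same way -->
  <filter>
    <filter-name>audit</filter-name>
    <filter-class>com.example.AuditFilter</filter-class>
  </filter>
  <filter-mapping>
    <filter-name>audit</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>
</web-app>
```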
Listing 1.1 Sample doGet() method in a Servlet handling HttpRequest
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MyServlet extends HttpServlet {
    public void doGet(HttpServletRequest httpRequest,
                      HttpServletResponse httpResponse)
                      throws ServletException, IOException {
        //Extract data from HTTP request parameters
        //Business logic goes here
        //Now write output HTML
        ServletOutputStream os = httpResponse.getOutputStream();
        os.println("<html><body>");
        //Write formatted data to the output stream
        os.println("</body></html>");
        os.flush();
        os.close();
    }
}
A servlet is the most basic J2EE web component. It is managed by the servlet container. All servlets implement the Servlet interface directly or indirectly. In general terms, a servlet is the endpoint for requests adhering to a protocol. However, the Servlet specification mandates an implementation only for servlets that handle HTTP requests. But you should know that it is possible to implement the servlet and the container to handle other protocols such as FTP too. When writing Servlets for handling HTTP requests, you generally subclass the HttpServlet class. HTTP defines several request methods – GET, POST, PUT, HEAD, DELETE, OPTIONS and TRACE. Of these, GET and POST are the only forms of request submission relevant to application developers. Hence your subclass of HttpServlet should implement two methods – doGet() and doPost() – to handle GET and POST respectively. Listing 1.1 shows a doGet() method from a typical Servlet. With this background, let us now dive straight into presentation tier strategies. This coverage of presentation tier strategies will kick start your thought process on how and where Struts fits in the big picture.
1.1 Presentation Tier Strategies
Technologies used for the presentation tier can be roughly classified into three categories:
- Markup based Rendering (e.g. JSPs)
- Template based Transformation (e.g. Velocity, XSLT)
- Rich content (e.g. Macromedia Flash, Flex, Laszlo)

Markup based Rendering

JSPs are perfect examples of markup based presentation tiers. In markup based presentation, a variety of tags are defined (just like HTML tags). The tag definitions may be purely for presentation or they can contain business logic. They are mostly client tier specific, e.g. JSP tags producing HTML content. A typical JSP is interpreted in the web container, resulting in the generation of HTML. This HTML is then rendered in the web browser. The next few paragraphs cover the role played by JSPs in comparison to Servlets in a J2EE web application.

In the last section, you saw how Servlets produced output HTML in addition to executing business logic. So why aren't Servlets used for the presentation tier? The answer lies in the separation of concerns essential in real world J2EE projects. Back in the days when JSPs didn't exist, servlets were all that you had to build J2EE web applications. They handled requests from the browser, invoked middle tier business logic and rendered responses in HTML to the browser. Now that's a problem. A Servlet is a Java class coded by Java programmers. It is okay to handle browser requests and have business and presentation logic in the servlets, since that is where they belong. HTML formatting and rendering is the concern of the page author, who most likely does not know Java. So the question arises: how to separate these two concerns intermingled in Servlets?

JSPs are the answer to this dilemma. JSPs are servlets in disguise! The philosophy behind JSP is that page authors know HTML. HTML is a markup language. Hence learning a few more markup tags will not cause a paradigm shift for the page authors. At least it is much easier than learning Java and OO! JSP provides some standard tags and Java programmers can provide custom tags.
Page authors can write server side pages by mixing HTML markup and JSP tags. Such server side pages are called JSPs. JSPs are called server side pages because it is the servlet container that interprets them to generate HTML. The generated HTML is sent to the client browser.
Presentation Logic and Business Logic – What's the difference?

The term Business Logic refers to the middle tier logic – the core of the system, usually implemented as Session EJBs. The code that controls the JSP navigation, handles user inputs and invokes appropriate business logic is referred to as Presentation Logic. The actual JSP – the front end to the user –
contains HTML and custom tags to render the page, and as little logic as possible. A rule of thumb: the dumber the JSP gets, the easier it is to maintain. In reality, however, some of the presentation logic percolates into the actual JSP, making it tough to draw a line between the two.

We just said JSPs are server side pages. Server side pages in other languages are parsed every time they are accessed and are hence expensive. In J2EE, the expensive parsing is replaced by generating a Java class from the JSP. The first time a JSP is accessed, its contents are parsed and an equivalent Java class is generated; subsequent accesses are fast as a snap. Here is a twist to the story. The Java classes that are generated by parsing JSPs are nothing but Servlets! In other words, every JSP is parsed at runtime (or precompiled) to generate Servlet classes.

Template based Transformation

In template based transformation, a template engine uses a pre-defined template to transform a given data model into the desired output. XSLT is a perfect example of template based transformation. XSLT stands for XML Stylesheet Language Transformation. XSLT is used to transform an XML document in one format into another XML document in another format. Since HTML can be treated as a form of XML (strictly speaking, XHTML), XSLT can be used for generating HTML from XML. In a J2EE application, J2EE components can generate XML that represents the data. The XML is then transformed into HTML (the presentation/view) by using a stylesheet written in the XML Stylesheet Language (XSL).

Velocity is another fine example of a template based transformation mechanism used to generate the view. In fact Velocity is a general purpose templating framework that can be used to generate almost anything, not just a replacement for JSPs. For more information on Velocity, check out the project website. Velocity with Struts is not covered in this edition of the book.
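As a concrete illustration of template based transformation, the following self-contained sketch uses the JAXP API shipped with the JDK to run a tiny stylesheet over a tiny XML data model. The XML and XSL documents here are invented examples, not taken from any real application:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltDemo {
    // A tiny XML data model, as a J2EE component might produce it
    static final String XML = "<customer><name>John</name></customer>";

    // A tiny stylesheet that turns the data model into an HTML fragment
    static final String XSL =
          "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
        + "<xsl:output method='html' omit-xml-declaration='yes'/>"
        + "<xsl:template match='/customer'><b><xsl:value-of select='name'/></b></xsl:template>"
        + "</xsl:stylesheet>";

    public static String transform() throws Exception {
        // Compile the stylesheet and apply it to the XML input
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSL)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(XML)), new StreamResult(out));
        return out.toString();
    }
}
```

Running transform() yields an HTML fragment with the customer name wrapped in the markup chosen by the template – the data model and the presentation stay cleanly separated.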
Rich Content in Rich Internet Applications (RIA)

Rich content delivery over the internet to the good old browser is not an entirely new paradigm, but something that's drawing a lot of attention lately. The traditional browser's presentation capabilities are fairly limited, even with the addition of DHTML and JavaScript. In addition, browser incompatibilities cause a lot of headaches when developing rich applications with just DHTML and JavaScript. Enter Macromedia Flash, a freely available plugin for all the popular browsers that can render rich content uniformly across all browsers and operating systems. This strategy can be of interest to Struts developers because
Macromedia has also released Flex – a presentation tier solution to deliver internet applications with rich content using Struts. Laszlo is another platform to deliver rich internet applications. Laszlo renders content using the same Flash player, but it is open source. It can be integrated with Struts too.

NOTE: Struts can be used as the controller framework for any of the view generation strategies described above. Struts can be combined with JSPs – the most popular option among developers. Struts can also be combined with Velocity templating or XSLT. Struts is also an integral part of Macromedia Flex. Laszlo and Struts can be combined to deliver rich internet applications.

So far, we have looked at various strategies that can be applied in the presentation tier to generate the view. We also saw that Struts can play an effective role in each of these strategies as a controller. Well, I didn't explain exactly how it plays the role of a controller. That is the topic of the next few sections. We will start by introducing the two models of designing JSP based applications – Model 1 and Model 2 architectures – in the next two sections, and then arrive at Struts as an improvement over the Model 2 architecture.
1.2 Model 1 Architecture
Figure 1.1 Model 1 Architecture.
1.3 Model 2 Architecture - MVC

1. The Controller Servlet handles the user's request. (This means the hyperlink in the JSP should point to the controller servlet.)
2. The Controller Servlet then instantiates appropriate JavaBeans based on the request parameters (and optionally also based on session attributes).
3. The Controller Servlet then, by itself or through a controller helper,
communicates with the middle tier or directly with the database to fetch the required data.
4. The Controller sets the resultant JavaBeans (either the same or a new one) in one of the following contexts – request, session or application.
5. The Controller then dispatches the request to the next view based on the request URL.
6.
Figure 1.2
Controller gone bad – Fat Controller

If MVC is all that great, why do we need Struts after all? The answer lies in the difficulties associated with applying bare bones MVC to real world complexities. In medium to large applications, centralized control and processing logic in the servlet – the greatest plus of MVC – is also its weakness. Consider a mediocre application with 15 JSPs. Assume that each page has five hyperlinks (or five form submissions). The total number of user requests to be handled in the application is 75. Since we are using the MVC framework, a centralized controller servlet handles every user request. For each type of incoming request there is an "if" block in the doGet method of the controller Servlet to process the request and dispatch to the next view. For this mediocre application of ours, the controller Servlet has 75 if blocks. Even if you assume that each if block delegates the request handling to helper classes, it is still no good. You can only imagine how bad it gets for a complex enterprise web application. So, we have a problem at hand. The Controller Servlet that started out as the greatest thing next to sliced bread has gone bad. It has put on a lot of weight to become a Fat Controller.
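The body of such a controller's doGet() degenerates into one branch per request. The following sketch shows the shape of the problem; the class, path and view names are all invented for illustration, and the dispatch is reduced to a plain method so it can stand alone:

```java
// A hypothetical "Fat Controller" sketch. Only the if-else dispatch
// structure matters -- this is what grows without bound as pages are added.
public class FatControllerSketch {
    // Stands in for the body of doGet()/doPost(): one "if" block per request.
    public String handle(String requestPath) {
        if (requestPath.equals("/listCustomers")) {
            // fetch the customer list from the middle tier ...
            return "CustomerList.jsp";
        } else if (requestPath.equals("/showCustomer")) {
            // fetch one customer ...
            return "CustomerDetail.jsp";
        } else if (requestPath.equals("/submitCustomer")) {
            // validate the input and save ...
            return "ThankYou.jsp";
        }
        // ... 72 more blocks like these for our mediocre 15-page application
        return "Error.jsp";
    }
}
```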
1.4 MVC with configurable controller
You must be wondering what went wrong with MVC.
Figure 1.3 MVC with configurable controller Servlet.
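The configurable controller idea can be sketched in plain Java. The Handler interface, its execute() method and the class names below are all invented for illustration – Struts' real counterparts are introduced in the next section. In real life, the path-to-class map would be loaded from an XML or properties file at startup, not populated by hand:

```java
import java.util.HashMap;
import java.util.Map;

// The controller no longer hard-codes request handling; it looks up a
// handler class name for the URL and instantiates it reflectively.
interface Handler {
    String execute(String request);
}

// One concrete handler; in a real application there is one per use case.
class HelloHandler implements Handler {
    public String execute(String request) {
        // business logic would go here
        return "Hello.jsp";
    }
}

public class ConfigurableController {
    // Maps URL paths to handler class names (normally read from a config file)
    private final Map<String, String> handlerClasses = new HashMap<>();

    public void register(String path, String className) {
        handlerClasses.put(path, className);
    }

    public String service(String path, String request) throws Exception {
        String className = handlerClasses.get(path);
        if (className == null) {
            return "Error.jsp";
        }
        // Instantiate the handler by reflection and delegate to it
        Handler handler = (Handler) Class.forName(className)
                                         .getDeclaredConstructor()
                                         .newInstance();
        return handler.execute(request);
    }
}
```

Adding a new page now means writing a new handler class and adding one line of configuration – the controller itself never changes.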
1.4 First look at Struts
In the last section, you saw the underlying principle behind the Struts framework. Now let us look closely at the Struts terminology for the controller servlet and Handler objects that we mentioned, and understand Figure 1.4. Figure 1.4 is a rehash of Figure 1.3 using Struts terminology. Since this is your first look at Struts, we will not get into every detail of the HTTP request handling lifecycle in the Struts framework. Chapter 2 will get you there. For now, let us concentrate on the basics.
Listing 1.2 Sample ActionForm
public class MyForm extends ActionForm {
    private String firstName;
    private String lastName;

    public MyForm() {
        firstName = "";
        lastName = "";
    }
    public String getFirstName() {
        return firstName;
    }
    public void setFirstName(String s) {
        this.firstName = s;
    }
    public String getLastName() {
        return lastName;
    }
    public void setLastName(String s) {
        this.lastName = s;
    }
}

The ActionServlet then instantiates a Handler. The Handler class name is obtained from an XML file based on the URL path information. This XML file is referred to as the Struts configuration file and is by default named struts-config.xml.
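Struts populates a form bean like the one above from the HTTP request parameters through its JavaBeans-style setters. A toy, reflection-based illustration of that idea follows – this is not the actual Struts code, and the class names are invented:

```java
import java.lang.reflect.Method;
import java.util.Map;

// A minimal bean standing in for an ActionForm subclass
class NameBean {
    private String firstName = "";
    public String getFirstName() { return firstName; }
    public void setFirstName(String s) { this.firstName = s; }
}

public class FormPopulator {
    // Matches each parameter name to a JavaBeans setter and invokes it
    public static void populate(Object bean, Map<String, String> params)
            throws Exception {
        for (Map.Entry<String, String> e : params.entrySet()) {
            // firstName -> setFirstName
            String setter = "set"
                    + Character.toUpperCase(e.getKey().charAt(0))
                    + e.getKey().substring(1);
            Method m = bean.getClass().getMethod(setter, String.class);
            m.invoke(bean, e.getValue());
        }
    }
}
```

Given a request parameter firstName=John, the populator calls setFirstName("John") on the bean – conceptually what happens to your ActionForm before the Handler runs.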
forwards to the selected view.
Figure 1.4 A first look at Struts.
1.5 Tomcat and Struts installation
We will use a Windows environment to develop Struts applications and the Tomcat servlet container to deploy and test them. Precisely, we will use Tomcat 5.0.14 Beta, the latest milestone release of Tomcat. For Struts, we will use the Struts 1.1 release – the latest production quality build available. Once you download the zipped archive of the Struts 1.1 release, unzip the file to a convenient location. It automatically creates a folder named "jakarta-struts-1.1". It has three subfolders. However, be sure to read through this book before you dive into the Struts source code.
1.6 Summary
In this chapter, we refreshed your memory on Model 1 and Model 2 architectures for JSPs and pointed out the problems with bare bones MVC in real life – how it gets big and ugly. You understood how MVC with a configurable controller solves that real life problem. You also took a first look at the high level Struts architecture and saw how it matches the configurable MVC controller. Finally, you briefly looked at Struts and Tomcat installation and warmed up for the forthcoming chapters.
Chapter 2
Struts Framework Components
In this chapter:
1. You will learn more about Struts components and their categories – Controller and View.
2. You will understand the sequence of events in the Struts request handling lifecycle.
3. You will understand the role of the controller classes ActionServlet, RequestProcessor, ActionForm, Action, ActionMapping and ActionForward in the request handling lifecycle.
4. You will also learn about the role of Struts tags as View components in rendering the response.
5. You will understand the various elements of the Struts configuration file – struts-config.xml.

In the last chapter, you had a cursory glance at the Struts framework. In this chapter you will dive deeper and cover the various Struts framework components. Here is something to remember all the time:

1. All the core components of the Struts framework belong to the Controller category.
2. Struts has no components in the Model category.
3. Struts has only auxiliary components in the View category – a collection of custom tags making it easy to interact with the controller.

The View category is neither the core of the Struts framework, nor is it necessary. However, it is a helpful library for using Struts effectively in JSP based rendering.

Controller Category: The ActionServlet and its collaborating classes form the controller and are the core of the framework. The collaborating classes are RequestProcessor, ActionForm, Action, ActionMapping and ActionForward.
View Category: The View category contains utility classes – a variety of custom tags making it easy to interact with the controller. It is not mandatory to use these utility classes. You can replace them with classes of your own. However, when using the Struts framework with JSP, you would be reinventing the wheel by writing custom tags that mimic the Struts view components. If you are using Struts with Cocoon or Velocity, then you have to roll your own classes for the View category.

Model Category: Struts does not offer any components in the Model category. You are on your own in this turf. This is probably how it should be. Many component models (CORBA, EJB) are available to implement the business tier.

NOTE: Some people argue that ActionForm is a model component. However, ActionForm is really part of the controller. The Struts documentation also speaks along similar lines. It is just a View Data Transfer Object – a regular JavaBean that has dependencies on the Struts classes and is used for transferring data to various classes within the controller.
2.1 Struts request lifecycle
In this section you will learn about the Struts controller classes – ActionServlet, RequestProcessor, ActionForm, Action, ActionMapping and ActionForward – all residing in the org.apache.struts.action package, and struts-config.xml – the Struts configuration file. Instead of the traditional "here is the class – go use it" approach, you will study the function of each component in the context of the HTTP request handling lifecycle in Struts.

ActionServlet

The central component of the Struts Controller is the ActionServlet. It is a concrete class and extends javax.servlet.HttpServlet. It performs two important things:

1. On startup, it reads the Struts configuration file and loads it into memory in the init() method.
2. In the doGet() and doPost() methods, it intercepts HTTP requests and handles them appropriately.
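The first task – loading the configuration file into memory once – can be illustrated with a toy parser. The ConfigLoader class below is invented for illustration; the real ActionServlet builds much richer configuration objects than a flat map:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Parses a struts-config-like XML document once and holds the
// path -> Action class mapping in memory for the life of the servlet.
public class ConfigLoader {
    public static Map<String, String> load(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Map<String, String> mappings = new HashMap<>();
        // Collect every <action> element and record its path and type
        NodeList actions = doc.getElementsByTagName("action");
        for (int i = 0; i < actions.getLength(); i++) {
            Element a = (Element) actions.item(i);
            mappings.put(a.getAttribute("path"), a.getAttribute("type"));
        }
        return mappings;
    }
}
```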
The name of the Struts config file is not cast in stone. It is a convention followed since the early days of Struts to call this file struts-config.xml and place it under the WEB-INF directory of the web application. In fact, you can name the file any way you like and place it anywhere in WEB-INF or its subdirectories. The name of the Struts config file can be configured in web.xml. The web.xml entry for configuring the ActionServlet and the Struts config file is as follows.
<servlet>
  <servlet-name>action</servlet-name>
  <servlet-class>
    org.apache.struts.action.ActionServlet
  </servlet-class>
  <init-param>
    <param-name>config</param-name>
    <param-value>/WEB-INF/config/myconfig.xml</param-value>
  </init-param>
  <load-on-startup>1</load-on-startup>
</servlet>
In the above snippet, the Struts config file is present in the WEB-INF/config directory and is named myconfig.xml. The ActionServlet takes the Struts config file name as an init-param. At startup, in the init() method, the ActionServlet reads the Struts config file and creates the appropriate Struts configuration objects (data structures) in memory. You will learn more about the Struts configuration objects in Chapter 7. For now, assume that the Struts config file is loaded into a set of objects in memory, much like a properties file loaded into a java.util.Properties class.

Like any other servlet, ActionServlet's init() method is invoked by the servlet container when it receives the first HTTP request from the caller. Loading the Struts config file into configuration objects is a time consuming task. If the Struts configuration objects were to be created on the first call from the caller, it would adversely affect performance by delaying the response for the first user. The alternative is to specify load-on-startup in web.xml as shown above. By specifying load-on-startup to be 1, you are telling the servlet container to call the init() method immediately on startup of the servlet container.

The second task that the ActionServlet performs is to intercept HTTP requests based on the URL pattern and handle them appropriately. The URL pattern can be either a path or a suffix. This is specified using the servlet-mapping in web.xml. An example of suffix mapping is as follows.
<servlet-mapping>
    <servlet-name>action</servlet-name>
    <url-pattern>*.do</url-pattern>
</servlet-mapping>
Chapter 2. Struts Framework Components
33
When the user submits a URL from the browser, it will be intercepted and processed by the ActionServlet if it matches the pattern *.do – that is, if it has the suffix "do". Once the ActionServlet intercepts the HTTP request, it doesn't do much. It delegates the request handling to another class called RequestProcessor by invoking its process() method. Figure 2.1 shows a flowchart with the Struts controller components collaborating to handle an HTTP request within the RequestProcessor's process() method. The next subsections describe the flowchart in detail. It is very important that you understand and even memorize this flowchart. Most of the Struts Controller functionality is embedded in the process() method of the RequestProcessor class. Mastery over this flowchart will determine how fast you can debug problems in real-life Struts applications. Let us understand the request handling in the process() method step by step with an example covered in the next several subsections.
Figure 2.1 Flowchart for the RequestProcessor process method.
RequestProcessor and ActionMapping

The RequestProcessor does the following in its process() method:

Step 1: The RequestProcessor first retrieves the appropriate XML block for the URL from struts-config.xml. This XML block is referred to as
34
Struts Survival Guide – Basics to Best Practices
ActionMapping in Struts terminology. In fact there is a class called ActionMapping in the org.apache.struts.action package. ActionMapping is the class that does what its name says – it holds the mapping between a URL and an Action. A sample ActionMapping from the Struts configuration file looks as follows.
Listing 2.1 A sample ActionMapping from struts-config.xml
<action path="/submitDetailForm"
        type="mybank.example.CustomerAction"
        name="CustomerForm"
        scope="request"
        validate="true"
        input="CustomerDetailForm.jsp">
    <forward name="success" path="ThankYou.jsp" redirect="true"/>
    <forward name="failure" path="Failure.jsp"/>
</action>
Step 2: The RequestProcessor looks up the configuration file for the URL pattern /submitDetailForm. (i.e. URL path without the suffix do) and finds the XML block (ActionMapping) shown above. The type attribute tells Struts which Action class has to be instantiated. The XML block also contains several other attributes. Together these constitute the JavaBeans properties of the ActionMapping instance for the path /submitDetailForm. The above ActionMapping tells Struts to map the URL request with the path /submitDetailForm to the class mybank.example.CustomerAction. The Action class is explained in the steps ahead. For now think of the Action as your own class containing the business logic and invoked by Struts. This also tells us one more important thing.
Since each HTTP request is distinguished from the others only by its path, there should be one and only one ActionMapping for every path attribute. Otherwise Struts overwrites the former ActionMapping with the latter.

ActionForm

Another attribute in the ActionMapping that you should know right away is name. It is the logical name of the ActionForm to be populated by the RequestProcessor. After selecting the ActionMapping, the RequestProcessor instantiates the ActionForm. However it has to know the fully qualified class name of the ActionForm to do so. This is where the name attribute of ActionMapping comes in handy. The name attribute is the logical
name of the ActionForm. Somewhere else in struts-config.xml, you will find a declaration like this:
<form-bean name="CustomerForm"
           type="mybank.example.CustomerForm"/>
This form-bean declaration associates the logical name CustomerForm with the actual class mybank.example.CustomerForm.

Step 3: The RequestProcessor instantiates the CustomerForm and puts it in the appropriate scope – either session or request. The RequestProcessor determines the appropriate scope by looking at the scope attribute in the same ActionMapping.

Step 4: Next, the RequestProcessor iterates through the HTTP request parameters and populates the CustomerForm properties that have the same names as the HTTP request parameters, using Java Introspection. (Java Introspection is a special form of Reflection based on JavaBeans properties. Instead of using reflection to set the field values directly, it uses the setter method to set a field value and the getter method to retrieve it.)

Step 5: Next, the RequestProcessor checks for the validate attribute in the ActionMapping. If validate is set to true, the RequestProcessor invokes the validate() method on the CustomerForm instance. This is the method where you can put all the HTML form data validations. For now, let us pretend that there were no errors in the validate() method and continue. We will come back later and revisit the scenario when there are errors in the validate() method.
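The population performed in Step 4 can be illustrated with plain JavaBeans introspection, outside Struts. The sketch below is illustrative only – BeanPopulator and the sample form class are not Struts classes, and the real RequestProcessor delegates to a more general utility – but the mechanism (find the setter whose name matches the request parameter, invoke it) is the same:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.Map;

class BeanPopulator {
    // Copies each request-parameter-style entry onto the same-named String
    // property of the bean, using the setter found via JavaBeans introspection.
    static void populate(Object bean, Map<String, String> params) throws Exception {
        for (PropertyDescriptor pd
                : Introspector.getBeanInfo(bean.getClass()).getPropertyDescriptors()) {
            String value = params.get(pd.getName());
            if (value != null && pd.getWriteMethod() != null
                    && pd.getPropertyType() == String.class) {
                pd.getWriteMethod().invoke(bean, value);   // e.g. setFirstName("John")
            }
        }
    }
}

class SampleCustomerForm {
    private String firstName = "";
    private String lastName = "";
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}
```

Given request parameters {firstName=John, lastName=Doe}, populate() leaves the form with getFirstName() returning "John" – the same effect the RequestProcessor achieves for the CustomerForm.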
Action

Step 6: The RequestProcessor instantiates the Action class specified in the
ActionMapping (CustomerAction) and invokes the execute() method on the CustomerAction instance. The signature of the execute method is as follows.
public ActionForward execute(ActionMapping mapping,
                             ActionForm form,
                             HttpServletRequest request,
                             HttpServletResponse response) throws Exception
Apart from the HttpServletRequest and HttpServletResponse, the ActionForm is also available to the Action instance. This is what the ActionForm was meant for: a convenient container to hold and transfer data from the HTTP request parameters to other components of the controller, instead of having to look for them every time in the HTTP request. The execute() method itself should not contain the core business logic, irrespective of whether or not you use EJBs or any fancy middle tier. The first and foremost reason for this is that business logic classes should not have any dependencies on the Servlet packages. By putting the business logic in the Action
class, you are letting the javax.servlet.* classes proliferate into your business logic. This limits the reuse of the business logic, say for a pure Java client. The second reason is that if you ever decide to replace the Struts framework with some other presentation framework (although we know this will not happen), you don't have to go through the pain of modifying the business logic. The execute() method should preferably contain only the presentation logic and be the starting point in the web tier for invoking the business logic. The business logic can be present either in protocol-independent Java classes or in Session EJBs.

The RequestProcessor creates an instance of the Action (CustomerAction) if one does not exist already. There is only one instance of each Action class in the application. Because of this you must ensure that the Action class and its attributes, if any, are thread-safe. The general rules that apply to Servlets hold good: the Action class should not have any writable attributes that can be changed by users in the execute() method.

ActionForward

The execute() method returns the next view to be shown to the user. If you are wondering what ActionForward is, you have just found the answer. ActionForward is the class that encapsulates the next view information. Struts, being the good framework it is, encourages you not to hardcode the JSP names for the next view. Rather you should associate a logical name with the next JSP page. This association of the logical name and the physical JSP page is encapsulated in the ActionForward instance returned from the execute() method. The ActionForward can be local or global. Look again at the good old ActionMapping XML block in Listing 2.1. It contains sub-elements called forwards with three attributes – name, path and redirect. The name attribute is the logical name of the physical JSP as specified in the path attribute. These forward elements are local to the ActionMapping in Listing 2.1.
Hence they can be accessed only from this ActionMapping – passed as an argument to the CustomerAction's execute() method – and nowhere else. On the other hand, when forwards are declared in the global-forwards section of the struts-config.xml, they are accessible from any ActionMapping. (In the next section, you will look closely at the Struts Config file.) Either way, the findForward() method on the ActionMapping instance retrieves the ActionForward as follows.
ActionForward forward = mapping.findForward("success");
The logical name of the page (success) is passed as the key to the findForward() method. The findForward() method searches for the forward named "success", first within the ActionMapping and then in the global-forwards section. The CustomerAction's execute() method returns the ActionForward and the RequestProcessor dispatches the physical JSP
to the user. In J2EE terms, this is referred to as dispatching the view to the user. The dispatch can be either an HTTP Forward or an HTTP Redirect. For instance, the dispatch to "success" is an HTTP Redirect (note the redirect="true" in Listing 2.1) whereas the dispatch to "failure" is an HTTP Forward.

Difference between HTTP Forward and HTTP Redirect

HTTP Forward is the process of simply displaying a page when requested by the user. The user asks for a resource (page) by clicking on a hyperlink or submitting a form, and the next page is rendered as the response. In a Servlet Container, HTTP Forward is achieved by invoking the following.
RequestDispatcher dispatcher =
        httpServletRequest.getRequestDispatcher(url);
dispatcher.forward(httpServletRequest, httpServletResponse);
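For contrast, an HTTP Redirect (discussed next) does not dispatch within the server at all; it answers with an interim 302 response that tells the browser where to go. The following plain-Java sketch – illustrative only, not servlet API code – shows the shape of that raw response:

```java
class RedirectSketch {
    // Builds the minimal wire-level response a sendRedirect(location) call
    // produces: a 302 status line plus a Location header naming the new URL.
    static String rawResponse(String location) {
        return "HTTP/1.1 302 Found\r\n"
             + "Location: " + location + "\r\n"
             + "\r\n";
    }
}
```

The browser reacts to the 302 by issuing a second request for the Location URL, which is why a redirect costs an extra round trip.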
HTTP Redirect is a bit more sophisticated. When a user requests a resource, a response is first sent to the user. This is not the requested resource itself. Instead, it is a response with HTTP code 302 that contains the URL of the requested resource. This URL could be the same as or different from the originally requested URL. The client browser automatically makes a request for the resource again using the new URL, and this time the actual resource is sent to the user. In the web tier you can use HTTP Redirect through the simple sendRedirect() API on the HttpServletResponse instance. The rest of the magic is done by HTTP. HTTP Redirect involves an extra round trip to the client and is used only in special cases. Later in this book, we will show a scenario where HTTP Redirect can be useful.

ActionErrors and ActionError

So far, we have covered the Struts request handling lifecycle as a happy day scenario. In reality, form data can be invalid, and letting a request that is bound to fail proceed further is undesirable for two reasons:

1. Server time and resources are precious since they are shared. Spending too much of the server's time and resources on a request that we know is eventually going to fail is a waste of server resources.

2. It has a negative impact on code quality, since one has to prepare for the possibility of null data and appropriate checks have to be put in place.
We will postpone the discussion of how Struts reports the errors to the end user until we discuss the View components later in this chapter. As shown in the flowchart (Figure 2.1), the validate() method is called after the ActionForm instance is populated with the form data. A sample validate() method is shown in Listing 2.2.
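A validate() method of that shape can be approximated in plain Java. The sketch below is illustrative only – the class name and error keys are invented for this example, and a real Struts validate() returns ActionErrors populated with ActionError objects rather than a list of strings – but the accumulate-a-key-per-failed-check pattern is the same:

```java
import java.util.ArrayList;
import java.util.List;

class CustomerFormValidator {
    // Accumulates a message-resource key for every failed check, mirroring the
    // way a Struts validate() method adds ActionError keys to ActionErrors.
    static List<String> validate(String firstName, String lastName) {
        List<String> errors = new ArrayList<>();
        if (firstName == null || firstName.trim().isEmpty()) {
            errors.add("error.cust.firstname.null");  // illustrative key
        }
        if (lastName == null || lastName.trim().isEmpty()) {
            errors.add("error.cust.lastname.null");   // illustrative key
        }
        return errors;  // an empty list means "no errors", like an empty ActionErrors
    }
}
```

Each key is later resolved against the Message Resource Bundle to produce the text shown to the user.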
We will cover the Message Resource Bundle in depth in Chapter 10 on Internationalization and Localization. For now it suffices to know that the key supplied to an ActionError maps to a message in a properties file. That completes our overview of the working of the Struts Controller components. Now, let us formally look at the Struts configuration file in detail.
2.2 Struts Configuration File – struts-config.xml
As you learnt in Chapter 1, the Struts Config file is what ties the Struts components together, and extending it takes this capability to the next dimension. We will deal with extending Struts in Chapter 7. In this section, we will just look at the normal facilities offered by the struts-config.xml.
The Struts configuration file adheres to struts-config_1_1.dtd. The Struts config DTD can be found in the lib directory of the Struts distribution. It shows every possible element, its attributes and its description. Covering all of them at once would only result in information overload. Hence we will look only at the five important sections of this file relevant to our discussion and their important attributes. In fact we have already covered most of these in the lifecycle discussion earlier, but we summarize them again to refresh your mind. The five important sections are:

1. Form bean definition section
2. Global forward definition section
3. Action mapping definition section
4. Controller configuration section
5. Application Resources definition section

Listing 2.3 shows a sample Struts Config file with all five sections. The form bean definition section contains one or more entries for each ActionForm. Each form bean is identified by a unique logical name. The type is the fully qualified class name of the ActionForm. An interesting point to note is that you can declare the same ActionForm class any number of times, provided each entry has a unique name associated with it. This feature is useful if you want to store multiple forms of the same type in the servlet session.
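For instance, the same ActionForm class might be registered twice under different logical names so that two forms of that type can live in the session side by side. The names and class below are illustrative, not from the book's sample application:

```xml
<form-bean name="billingAddressForm"  type="mybank.example.AddressForm"/>
<form-bean name="shippingAddressForm" type="mybank.example.AddressForm"/>
```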
Table 2.1 Important attributes and elements of the ActionMapping entry in struts-config.xml

path – The URL path (either path mapping or suffix mapping) for which this ActionMapping is used. The path should be unique.

type – The fully qualified class name of the Action.

name – The logical name of the Form bean. The actual ActionForm associated with this ActionMapping is found by looking in the Form bean definition section for a form-bean with the matching name. This informs the Struts application which action mappings should use which ActionForms.

scope – Scope of the Form bean. Can be session or request.

validate – Can be true or false. When true, the Form bean is validated on submission. If false, the validation is skipped.

input – The physical page (or another ActionMapping) to which control should be forwarded when validation errors exist in the form bean.

forward – The physical page (or another ActionMapping) to which control should be forwarded when the ActionForward with this name is selected in the execute method of the Action class.
The ActionMapping section contains the mapping from URL path to an Action class (and also associates a Form bean with the path). The type attribute is the fully qualified class name of the associated Action. Each action entry in the action-mappings should have a unique path. This follows from the fact that each
URL path needs a unique handler. There is no facility to associate multiple Actions with the same path. The name attribute is the name of the Form bean associated with this Action. The actual form bean is defined in Form bean definition section. Table 2.1 shows all the relevant attributes discussed so far for the action entry in action-mappings section.
Listing 2.3 Sample struts-config.xml
<?xml version="1.0" encoding="ISO-8859-1" ?>
<!DOCTYPE struts-config PUBLIC
    "-//Apache Software Foundation//DTD Struts Configuration 1.1//EN"
    "">

<struts-config>

    <!-- Form bean definitions -->
    <form-beans>
        <form-bean name="CustomerForm"
                   type="mybank.example.CustomerForm"/>
    </form-beans>

    <!-- Global forward definitions -->
    <global-forwards>
        <forward name="logon" path="/logon.jsp"/>
    </global-forwards>

    <!-- Action mappings -->
    <action-mappings>
        <action path="/submitDetailForm"
                type="mybank.example.CustomerAction"
                name="CustomerForm"
                scope="request"
                validate="true"
                input="CustomerDetailForm.jsp">
            <forward name="success" path="/ThankYou.jsp" redirect="true"/>
            <forward name="failure" path="/Failure.jsp"/>
        </action>
        <action path="/logoff"
                parameter="/logoff.jsp"
                type="org.apache.struts.action.ForwardAction"/>
    </action-mappings>

    <!-- Controller configuration -->
    <controller processorClass="org.apache.struts.action.RequestProcessor"/>

    <!-- Message resource definition -->
    <message-resources

</struts-config>
In the ActionMapping there are two forwards. Those forwards are local forwards – which means those forwards can be accessed only within the ActionMapping. On the other hand, the forwards defined in the Global Forward section are accessible from any ActionMapping. As you have seen earlier, a forward has a name and a path. The name attribute is the logical name assigned. The path attribute is the resource to which the control is to be forwarded. This resource can be an actual page name as in
<forward name="logon" path="/logon.jsp"/>

or it can be another ActionMapping, as in

    <forward name="logoff" path="/logoff.do"/>

The /logoff (notice the absence of ".do") would be another ActionMapping in the struts-config.xml. The forwards – either global or local – are used in the execute() method of the Action class to forward control to another physical page or ActionMapping.

The next section in the config file is the controller. The controller is optional. Unless otherwise specified, the default controller is always org.apache.struts.action.RequestProcessor. There are cases when you want to replace or extend this to have your own specialized processor. For instance, when using Tiles (a JSP page template framework) in conjunction with Struts, you would use the TilesRequestProcessor.

The last section of immediate interest is the Message Resource definition. In the ActionErrors discussion, you saw a code snippet that used a cryptic key as the argument for the ActionError. We stated that this key maps to a value in a properties file. We declare that properties file in struts-config.xml in the Message Resources definition section. The declaration in Listing 2.3 states that the Message Resource Bundle for the application is called ApplicationResources.properties and that the file is located in the java package mybank. If you are wondering how (and why) a properties file can be located in a java package, recall that any file (including a class file) is a resource and is loaded by the class loader by specifying the package. An example in the next chapter will make things clearer.
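Before moving on, note that the local-then-global search findForward() performs can be modeled in a few lines of plain Java. This is an illustrative sketch of the lookup rule only, not the Struts implementation:

```java
import java.util.HashMap;
import java.util.Map;

class ForwardLookup {
    private final Map<String, String> localForwards = new HashMap<>();
    private final Map<String, String> globalForwards = new HashMap<>();

    void addLocal(String name, String path)  { localForwards.put(name, path); }
    void addGlobal(String name, String path) { globalForwards.put(name, path); }

    // Mirrors findForward(): a local forward shadows a global one of the same
    // name; if neither is declared, the lookup yields nothing.
    String findForward(String name) {
        String path = localForwards.get(name);
        return (path != null) ? path : globalForwards.get(name);
    }
}
```

This is why a forward such as "logon" declared in global-forwards is reachable from every ActionMapping, while "success" and "failure" from Listing 2.1 are visible only inside their own mapping.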
2.3 View Components
In Struts, View components are nothing but six custom tag libraries for JSP views – HTML, Bean, Logic, Template, Nested, and Tiles tag libraries. Each one caters to a different purpose and can be used individually or in combination with others. For other kinds of views (For instance, Template based presentation) you
are on your own. As it turns out, the majority of developers using Struts tend to use JSPs. You can extend the Struts tags, build your own tags, and mix and match them. You already know that the ActionForm is populated on its way in by the RequestProcessor class using Java Introspection. In this section you will learn how Struts tags interact with the controller and its helper classes to display the JSP, using two simple scenarios – how the FormTag displays the data on the way out and how the ErrorsTag displays the error messages. We will not cover every tag in Struts though. That is done in Chapter 6.

What is a custom tag?

Custom Tags are Java classes written by Java developers that can be used in the JSP using XML markup. Think of them as view helper beans that can be used without the need for scriptlets. Scriptlets are Java code snippets intermingled with JSP markup. You need a Java developer to write such scriptlets. JSP pages are normally developed and tweaked by page authors, who cannot interpret the scriptlets; moreover, scriptlets blur the separation of duties in a project. Custom Tags are the answer to this problem. They are XML based and, like any markup language, can be easily mastered by page authors. You can get more information on Custom Tags in Chapter 6. There are also numerous books written about JSP fundamentals that cover this topic very well.
Listing 2.4 CustomerDetails JSP
<html>
<head>
    <html:base/>
</head>
<body>
    <html:form action="/submitDetailForm">
        <html:text property="firstName"/>
        <html:text property="lastName"/>
        <html:submit>Continue</html:submit>
    </html:form>
</body>
</html>
How FormTag works

Consider a case when the user requests the page CustomerDetails.jsp. The CustomerDetails JSP page has a form in it. The form is constructed using the
Struts html tags and is shown in Listing 2.4. The <html:form> represents the org.apache.struts.taglib.html.FormTag class, a body tag. The <html:text> represents the org.apache.struts.taglib.html.TextTag class, a normal tag. The resulting HTML is shown in Listing 2.5. The FormTag can contain other tags in its body. The SubmitTag generates the Submit button at runtime. The TextTag <html:text> generates an html textbox at runtime as follows.
<input name="firstName" type="text" value="" />
The FormTag has an attribute called action. Notice that the value of the action attribute is /submitDetailForm in the JSP snippet shown above. This represents the ActionMapping. The generated HTML <form> has action="/submitDetailForm.do" in its place. The servlet container parses the JSP and renders the HTML.
Listing 2.5 Generated HTML from CustomerDetails JSP
<html>
<head>
    <html:base/>
</head>
<body>
    <form name="CustomerForm" action="/submitDetailForm.do">
        <input type="text" name="firstName" value="" />
        <input type="text" name="lastName" value="" />
        <input type="submit" name="Submit" value="" />
    </form>
</body>
</html>
When the container encounters the FormTag, it invokes the doStartTag() method. The doStartTag() method in the FormTag class does essentially what the RequestProcessor does in its process() method:

1. The FormTag checks for an ActionMapping with /submitDetailForm in its path attribute.
2. When it finds the ActionMapping, it looks for an ActionForm with the name CustomerForm (which it gets from the ActionMapping) in the request or session scope (which it also gets from the ActionMapping).
3. If it does not find one, it creates a new one and puts it in the specified context. Otherwise it uses the existing one. It also makes the Form name available in the page context.
4. The form field tags (e.g. TextTag) access the ActionForm by its name from the PageContext and retrieve values from the ActionForm
attributes with matching names. For instance, the TextTag <html:text property="firstName"/> retrieves the value of the attribute firstName from mybank.example.CustomerForm and substitutes it as the value. If the CustomerForm existed in the request or session and the firstName field in the CustomerForm had a value "John", then the TextTag will generate HTML that looks like this:
<input name="firstName" type="text" value="John" />
If the firstName field was null or empty in the CustomerForm instance, the TextTag will generate HTML that looks like this
<input name="firstName" type="text" value="" />
And thus the ActionForm is displayed as an HTML form. The moral of the story is that the ActionForm should be made available in advance in the appropriate scope if you are editing existing form data. Otherwise the FormTag creates the ActionForm in the appropriate scope with no data; the latter is suited for creating fresh data. The FormTag reads the same old ActionMapping while looking for the ActionForm in the appropriate scope, and then displays the data from that ActionForm if available.

How ErrorsTag works

When dealing with ActionErrors, you learnt that the validation errors in an
ActionForm are reported to the framework through the ActionErrors
container. Let us now see what facilities the framework provides to display those errors in the JSP. Struts provides the ErrorsTag to display the errors in the JSP. When the ActionForm returns the ActionErrors, the RequestProcessor sets it in the request scope under a pre-defined and well-known name (within the Struts framework) and then renders the input page. The ErrorsTag iterates over the ActionErrors in the request scope and writes out each raw error text to the HTML output. You can add the ErrorsTag to the JSP with <html:errors />. The tag has no attributes and no body. It displays the errors exactly in the location where you put the tag. The ErrorsTag prints three elements to the HTML output – header, body and footer. The error body consists of the list of raw error texts written out by the tag. A sample error display from struts-example.war (available with the Struts 1.1 download) is shown in Figure 2.2. You can configure the error header and footer through the Message Resource Bundle. The ErrorsTag looks for the predefined keys errors.header and errors.footer in the default (or specified) Message Resource Bundle and their values are written out AS IS. In struts-example.war, these are set as follows:
errors.header=<h3><font color="red">Validation Error</font></h3> You must correct the following error(s) before proceeding:<ul>
errors.footer=</ul><hr>
For each ActionError, the ErrorsTag also looks for the predefined keys errors.prefix and errors.suffix in the default (or specified) Message Resource Bundle. By setting errors.prefix=<li> and errors.suffix=</li>, the generated HTML looks as follows and appears in the browser as shown in Figure 2.2.
<h3><font color="red">Validation Error</font></h3>
You must correct the following error(s) before proceeding:
<ul>
    <li>From Address is required.</li>
    <li>Full Name is required.</li>
    <li>Username is required</li>
</ul>
Figure 2.2 Struts error display.
Note that all the formatting information is added as html markup in these values. The bold red header, the line breaks and the horizontal rule are the result of the html markup in errors.header and errors.footer respectively.

Tip: A common problem while developing Struts applications is that <html:errors/> does not seem to display the error messages. This generally means one of the following: the properties file could not be located, or the key was not found. Set the
<message-resources null="false"...> for debugging.
Another reason for not seeing the error messages has to do with the positioning of the tag itself. If you added the tag in a <tr> instead of a <td>, the browser cannot display the messages even though the tag worked properly by writing the errors to the response stream.

The View components were the last piece of the puzzle to be sorted out. As it turns out, all the work is performed in the controller part of the framework. The View tags look for information in the request or session scope and render it as HTML. Now, that is how a view should be – as simple as possible and yet elegant. Struts lets you do that, easily and fast.
2.4 Summary
In this chapter you learnt the Struts request lifecycle in quite a bit of detail. You also got a good picture of Struts framework components when we covered the controller and view components. You also got to know relevant sections of struts-config.xml – the Struts configuration file. Armed with this knowledge we will build a Hello World web application using Struts framework in the next chapter.
Chapter 3
Your first Struts application
In this chapter:
1. You will build your first Struts web application step by step
2. You will build a Web ARchive (WAR) and deploy the web application in Tomcat

In the last two chapters you have learnt a lot about Struts. This chapter will take you step by step through building your first Struts application and deploying it on Tomcat.
3.1 Introduction
You can access the sample application by typing its URL in the browser. The index.jsp contains a single hyperlink. On clicking the link, CustomerDetails.jsp is displayed. CustomerDetails.jsp contains an HTML form with two buttons – Submit and Cancel. When the user submits the form by clicking Submit, Success.jsp is shown if the form validations go through. If the validations fail, the same page is shown back to the user with the errors. If the user clicks Cancel on the form, index.jsp is shown to the user.
Figure 3.1 The JSP flow diagram for the Hello World Struts application.
Chapter 3. Your First Struts Application
49
Directory Structure overview

This is the first time you are building a sample application in this book. Hence we will introduce you to the standard directory structure followed throughout the book when developing applications, and then move on to the actual steps involved. Figure 3.2 shows the directory structure. The structure is very logical. The top-level directory for every sample application is named after the application itself. In this case all the files are located under the directory named App1. The directory src/java beneath App1 contains the Java source files (CustomerForm.java and CustomerAction.java) and also the application's Message Resource Bundle (App1Messages.properties). Another directory called web-root beneath App1 contains all the JSPs (index.jsp, CustomerDetails.jsp and Success.jsp) and images (banner.gif). The web-root contains a WEB-INF subdirectory with the files web.xml and struts-config.xml.
Figure 3.2 The directory structure used throughout the book for sample Struts applications.
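In tree form, the layout just described looks like this (the mybank/app1 package path under src/java follows from the package named later in this chapter):

```
App1/
├── src/java/mybank/app1/
│   ├── CustomerForm.java
│   ├── CustomerAction.java
│   └── App1Messages.properties
└── web-root/
    ├── index.jsp
    ├── CustomerDetails.jsp
    ├── Success.jsp
    ├── banner.gif
    └── WEB-INF/
        ├── web.xml
        └── struts-config.xml
```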
3.2 Hello World – step by step
Here are the steps involved in creating the Struts application.

1. Add relevant entries into the web.xml:
   a. Add the ActionServlet configuration with initialization parameters
   b. Add the ActionServlet mapping
   c. Add the relevant taglib declarations
2. Start with a blank template for the struts-config.xml and add the following:
   a. Declare the RequestProcessor
   b. Create a properties file and declare it as the Message Resource Bundle
   c. Declare the Message Resource Bundle
   d. Declare the Form-bean
   e. Declare the ActionMapping for the Form-bean
   f. Add the forwards in the ActionMapping
3. Create the Form-bean class
4. Create the Action class
5. Create the JSP with Struts tags
6. For every <bean:message> tag in the JSP, add key-value pairs to the Message Resource Bundle (properties file) created in Step 2b
7. Add validation in the Form-bean
8. Define the error messages in the Message Resource Bundle
9. Create the rest of the JSPs.
Listing 3.1 web.xml for the Struts Application
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app PUBLIC
    "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
    "">

<web-app>
    <display-name>Hello World Struts Application</display-name>
    <servlet>
        <servlet-name>action</servlet-name>
        <servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
        <init-param>
            <param-name>config</param-name>
            <param-value>/WEB-INF/struts-config.xml</param-value>
        </init-param>
        <init-param>
            <param-name>debug</param-name>
            <param-value>3</param-value>
        </init-param>
        <init-param>
            <param-name>detail</param-name>
            <param-value>3</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>action</servlet-name>
        <url-pattern>*.do</url-pattern>
    </servlet-mapping>
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
    <taglib>
        <taglib-uri>/WEB-INF/struts-html.tld</taglib-uri>
        <taglib-location>/WEB-INF/struts-html.tld</taglib-location>
    </taglib>
    <taglib>
        <taglib-uri>/WEB-INF/struts-bean.tld</taglib-uri>
        <taglib-location>/WEB-INF/struts-bean.tld</taglib-location>
    </taglib>
</web-app>
Step 1. As you already know from Chapter 2, the first step in writing a Struts
application is to add the ActionServlet entry in web.xml and to map the servlet to the url-pattern *.do. This is shown in Listing 3.1. You already know the meaning of the initialization parameter named config. Here we will introduce two more initialization parameters: debug and detail. The debug initialization parameter lets you set the level of detail in the debug log. A lower number means less detail and a higher number implies more detailed logging. It is absolutely essential that you use this logging feature, especially in the beginning and while setting up a Struts application for the first time. The debug messages give you enough insight to resolve any configuration related issues. Use them to their fullest capability. In Listing 3.1, we have set the value of debug to 3. The detail initialization parameter lets you set the level of detail in the digester log. Digester is the component that parses the Struts Config file and loads it into Java objects. Some errors can be traced by looking at the log created by the Digester as it parses the XML file. Later in this chapter, you will also use two of the Struts tag libraries to construct the JSP. Hence the relevant tag library definition files – struts-html.tld and struts-bean.tld – are also declared in web.xml.
Struts Survival Guide – Basics to Best Practices
Another setting of interest in web.xml is the <welcome-file-list>. Typically you would want the application URL typed in the browser to take the user to index.jsp automatically. This goal is achieved by declaring index.jsp as one of the welcome files.
Step 2. Select a blank template for struts-config.xml and add to it the entries described in the following steps.
Listing 3.2 struts-config.xml with all entries for App1
<?xml version="1.0" encoding="ISO-8859-1" ?>
<!DOCTYPE struts-config PUBLIC
  "-//Apache Software Foundation//DTD Struts Configuration 1.1//EN"
  "">
<struts-config>
  <form-beans>
    <form-bean name="CustomerForm"
               type="mybank.app1.CustomerForm" />
  </form-beans>
  <global-forwards>
    <forward name="mainpage" path="/index.jsp" />
  </global-forwards>
  <action-mappings>
    <action path="/submitCustomerForm"
            type="mybank.app1.CustomerAction"
            name="CustomerForm"
            scope="request"
            validate="true"
            input="CustomerDetails.jsp">
      <forward name="success" path="Success.jsp" />
      <forward name="failure" path="Failure.jsp" />
    </action>
  </action-mappings>
  <controller processorClass="org.apache.struts.action.RequestProcessor" />
  <message-resources parameter="mybank.app1.App1Messages" />
</struts-config>
Step 2a. Declare the controller element in Struts Config file. The
<controller> element tells the Struts framework to use org.apache.struts.action.RequestProcessor for this application. For
a simple Struts application like App1, this RequestProcessor will suffice. You will use specialized subclasses of RequestProcessor as controllers later in this book. The struts-config.xml is shown in Listing 3.2.
<controller processorClass="org.apache.struts.action.RequestProcessor" />
Step 2b. Create a properties file under the mybank.app1 java package and name it App1Messages.properties. You will later add key value pairs into this file. Instead of hard coding field names in the JSP, you will use key names from this file to access them. In this way, the actual text can be changed outside the JSP. For now, add the following entry into the Struts Config file.
<message-resources parameter="mybank.app1.App1Messages" />
This is the instruction to the Struts controller to use the App1Messages.properties file as the Message Resource Bundle.
Step 2c. Define the form bean by adding a form-bean entry in the form-beans section.
<form-bean name="CustomerForm" type="mybank.app1.CustomerForm" />
Step 2d. Define an ActionMapping by adding the following to the action-mappings section.
<action path="/submitCustomerForm"
        type="mybank.app1.CustomerAction"
        name="CustomerForm"
        scope="request"
        validate="true"
        input="CustomerDetails.jsp">
</action>
Step 2e. Add the local forwards to the ActionMapping
<forward name="success" path="Success.jsp" />
<forward name="failure" path="Failure.jsp" />
At this point, the struts-config.xml looks as shown in Listing 3.2. All entries in bold are added for App1.
Step 3. Create the Form bean by extending ActionForm in the org.apache.struts.action package. Listing 3.3 shows the Form bean. For every field in the HTML Form, there is an instance variable with getter and setter methods in the Form bean. The Struts controller populates the HTML Form by calling the getter methods on the Form bean. When the user submits the HTML
Form, the Struts controller populates the Form bean with data from the HTML Form by calling the setter methods on the Form bean instance.

Step 4. Next, create the Action bean by extending the org.apache.struts.action.Action class. Let us call it CustomerAction. Every class that extends Action implements the execute() method. As you saw earlier in Chapter 2, the RequestProcessor calls the execute() method after populating and validating the ActionForm. In this method you typically implement logic to access the middle tier and return the next page to be displayed to the user. Listing 3.4 shows the execute() method in CustomerAction. In this method, a check is performed to see if the Cancel button was pressed. If so, the "mainpage" (Global Forward for index.jsp) is shown to the user. The isCancelled() method is defined in the parent Action class. If the operation requested is not Cancel, then the normal flow commences and the user sees Success.jsp.
Listing 3.3 CustomerForm – Form Bean for App1
public class CustomerForm extends ActionForm {
    private String firstName;
    private String lastName;

    public CustomerForm() {
        firstName = "";
        lastName = "";
    }
    public String getFirstName() {
        return firstName;
    }
    public void setFirstName(String s) {
        this.firstName = s;
    }
    public String getLastName() {
        return lastName;
    }
    public void setLastName(String s) {
        this.lastName = s;
    }
}
Step 5. Create the JSP using Struts html and bean tags.
All Struts HTML tags, including the FormTag, are defined in struts-html.tld. These tags generate appropriate HTML at runtime. The TLD files struts-html.tld and struts-bean.tld are declared at the top of the JSP and associated with the logical names "html" and "bean" respectively. The JSP then uses the tags with the prefixes "html:" and "bean:" instead of the actual tag class names. Listing 3.5 shows the CustomerDetails.jsp. Let us start from the top of this Listing.
Listing 3.4 The execute() method in CustomerAction

public class CustomerAction extends Action {
    public ActionForward execute(ActionMapping mapping,
            ActionForm form,
            HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        if (isCancelled(request)) {
            return mapping.findForward("mainpage");
        }
        CustomerForm custForm = (CustomerForm) form;
        String firstName = custForm.getFirstName();
        String lastName = custForm.getLastName();
        System.out.println("Customer First name is " + firstName);
        System.out.println("Customer Last name is " + lastName);
        ActionForward forward = mapping.findForward("success");
        return forward;
    }
}
<html:html>: Under normal circumstances, this JSP tag just generates the opening and closing html tags for the page, i.e. <html> and </html>. However the real advantage of this tag shows when the browser has to render the HTML based on the locale. For instance, when the user's locale is set to Russia, the tag generates <html lang="ru"> instead of the plain old <html>, so that the browser can attempt to render the Russian characters (if any) in the best possible manner. Setting the locale attribute on <html:html> tells Struts to look for the locale-specific resource bundle (more on this later).

<html:base>: As you might already be aware, one of the best practices in authoring pages is to use relative URLs instead of absolute ones. In order to use relative URLs in HTML, you need to declare the page context root with the <base href="…"> declaration. All URLs (not starting with "/") are
assumed to be relative to the base href. This is exactly what the <html:base/> tag generates.
Listing 3.5 CustomerDetails.jsp
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
<%@ taglib uri="/WEB-INF/struts-bean.tld" prefix="bean" %>
<html:html>
<head>
  <html:base/>
</head>
<body>
  <html:errors/>
  <html:form action="/submitCustomerForm">
    <bean:message key="prompt.customer.firstname"/>:
    <html:text property="firstName"/>
    <BR>
    <bean:message key="prompt.customer.lastname"/>:
    <html:text property="lastName"/>
    <BR>
    <html:submit>
      <bean:message key="button.save"/>
    </html:submit>
    <html:cancel>
      <bean:message key="button.cancel"/>
    </html:cancel>
  </html:form>
</body>
</html:html>
<html:form>: The FormTag represented by <html:form> generates the
HTML representation of the Form as follows:
<form name=".." action=".." method="GET">
It has one mandatory attribute – action. The action attribute represents the ActionMapping for this form. For instance, the action attribute in Listing 3.5 is /submitCustomerForm. Note that the FormTag converts this into an HTML equivalent as follows:
<form name="CustomerForm" action="/App1/submitCustomerForm.do">
The corresponding ActionMapping in the Struts Config file is associated with CustomerForm. Before displaying the page to the user, the FormTag searches the request scope for an attribute named CustomerForm. In this case, it does not find one and hence it instantiates a new one. All attributes are initialized to zero-length strings in the constructor. The embedded tags use the attributes of the CustomerForm in the request scope to display their respective values.
<html:text>: The <html:text> tag generates the HTML representation for the text box. It has one mandatory attribute named property. The value of this attribute is the name of the JavaBeans property from the Form bean that is being represented. For instance, <html:text property="firstName"/> represents the JavaBeans property firstName from the CustomerForm. The <html:text> tag gets the value of the JavaBeans property indicated by the property attribute. Since the CustomerForm was newly instantiated, all its fields have a value of zero-length string. Hence <html:text property="firstName"/> generates an HTML textbox tag of <input type="text" name="firstName" value="" />. Listing 3.6 shows the generated HTML.

<html:submit>: This tag generates the HTML representation for the Submit button as <input type="submit" value="Save Me">.

<html:cancel>: This tag generates the HTML representation for the Cancel button. This must have started a thought process in your mind by now: why do I need an <html:cancel> when I already have <html:submit>? Well, this is because of the special meaning of Cancel in everyday form processing. Pressing a Cancel button also results in Form submission. You already know that when validate is set to true, the form submission results in a validation. However it is absurd to validate the form when form processing is cancelled. Struts addresses this problem by assigning a unique name to the Cancel button itself. Accordingly, a JSP tag <html:cancel>Cancel Me</html:cancel> will generate equivalent HTML as follows:
<input type="submit" name="org.apache.struts.taglib.html.CANCEL" value="Cancel Me">
Just before the RequestProcessor begins the Form validation, it checks if the button name was org.apache.struts.taglib.html.CANCEL. If so, it abandons the validation and proceeds further. And that’s why <html:cancel> is different from <html:submit>.
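To see why this works, remember that only the button actually pressed is submitted as a request parameter. The following self-contained sketch (not the real RequestProcessor code; the class and method names are ours) reduces the cancel check to looking for that reserved parameter name:

```java
import java.util.Map;

public class CancelCheck {
    // Reserved parameter name generated by the <html:cancel> tag
    static final String CANCEL_KEY = "org.apache.struts.taglib.html.CANCEL";

    // True if the submitted parameters indicate the Cancel button was pressed
    static boolean isCancelled(Map<String, String> params) {
        return params.containsKey(CANCEL_KEY);
    }

    public static void main(String[] args) {
        Map<String, String> cancelled = Map.of(CANCEL_KEY, "Cancel Me");
        Map<String, String> saved = Map.of("firstName", "John");
        System.out.println(isCancelled(cancelled)); // true
        System.out.println(isCancelled(saved));     // false
    }
}
```

In the real framework this check happens inside the RequestProcessor just before validation, exactly as described above.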
<html:errors>: This tag displays the errors from the ActionForm
validation method. You already looked at its working in the last chapter.
In the generated HTML, you might notice that the <html:errors/> tag did not translate into any meaningful HTML. When the form is displayed for the first time, the validate() method in CustomerForm hasn't been executed yet and hence there are no errors. Consequently the <html:errors/> tag does not output any HTML response. There is another tag used in Listing 3.5 called <bean:message> for which we have not provided any explanation yet. The <bean:message> tags in the JSP generate regular text output in the HTML (see Listing 3.6). The <bean:message> tag has one attribute named "key". This is the key into the Message Resource Bundle. Using the key, <bean:message> looks up the properties file for the appropriate value. Hence our next task is to add some key value pairs to the properties file created in Step 2b.
Listing 3.6 Generated HTML for CustomerDetails.jsp
<html lang="en">
<head>
<base href="" />
</head>
<body>
<form name="CustomerForm" action="/App1/submitCustomerForm.do">
First Name: <input type="text" name="firstName" value="" />
<BR>
Last Name: <input type="text" name="lastName" value="" />
<BR>
<input type="submit" value="Save Me"/>
<input type="submit" name="org.apache.struts.taglib.html.CANCEL" value="Cancel Me">
</form>
</body>
</html>
Step 6. For every <bean:message> tag in the JSP, add key value pairs to the Message Resource Bundle (App1Messages.properties) created in Step 2b. This is pretty straightforward. Listing 3.7 shows the App1Messages.properties. We will add more content to this file in Step 8. But for now, this is all we have in the Message Resource Bundle.
Step 7. Now that the CustomerForm is displayed to the user, what if the user enters wrong data and submits the form? What if the user does not enter any data? These boundary conditions have to be handled as close to the user interface as possible for reasons discussed in Chapter 2. That's why the validate() method is coded in every Form bean. You have seen the validate() method before in Chapter 2. It is repeated in Listing 3.8.
Listing 3.7 App1Messages.properties
prompt.customer.firstname=First Name
prompt.customer.lastname=Last Name
button.save=Save Me
button.cancel=Cancel Me
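You can exercise the same key-to-value lookup that <bean:message> performs, outside any container, with plain JDK classes. This standalone sketch (the class and method names are ours) feeds the entries above into a PropertyResourceBundle:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.PropertyResourceBundle;

public class BundleDemo {
    // Look up a key in properties-format text, the way a resource bundle does
    static String lookup(String props, String key) {
        try {
            return new PropertyResourceBundle(new StringReader(props)).getString(key);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        String props = "prompt.customer.firstname=First Name\n"
                     + "prompt.customer.lastname=Last Name\n"
                     + "button.save=Save Me\n"
                     + "button.cancel=Cancel Me\n";
        System.out.println(lookup(props, "prompt.customer.firstname")); // First Name
        System.out.println(lookup(props, "button.save"));               // Save Me
    }
}
```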
According to the business requirements set for this application, the first name has to exist at all times. Hence the validate() method checks to see if the first name is null or if the first name is all spaces. If either of these conditions is met, then it is an error and accordingly an ActionError object is created and added to the ActionErrors. Think of the ActionErrors as a container for holding individual ActionError objects. In Listing 3.8, the ActionError instance is created by supplying a key "error.cust.firstname.null" to the ActionError constructor. This key is used to look up the Message Resource Bundle. In the next step, the keys used for error messages are added to the Message Resource Bundle.
Listing 3.8 validate() method in CustomerForm
public ActionErrors validate(ActionMapping mapping,
                             HttpServletRequest request) {
    ActionErrors errors = new ActionErrors();
    if (firstName == null || firstName.trim().equals("")) {
        errors.add("firstName",
            new ActionError("error.cust.firstname.null"));
    }
    return errors;
}
Step 8. In Step 7, the validate() method was provided with error messages identified by keys. In this step, the error message keys are added to the same old App1Messages.properties. The modified App1Messages.properties is shown in Listing 3.9. The new entry is shown in bold. Note that we have used a prefix of "error" for the error message entries, a prefix of "button" for button labels and a prefix of "prompt" for regular HTML text. There is no hard and fast rule here and it is only a matter of preference. You can name the keys any way you want.
Step 9. In the previous steps, you created most of the artifacts needed for the
Struts application. There are two more left. They are index.jsp and Success.jsp. These two JSPs are pretty simple and are shown in Listing 3.10 and Listing 3.11 respectively.
Listing 3.9 Updated App1Messages.properties
prompt.customer.firstname=First Name
prompt.customer.lastname=Last Name
button.save=Save Me
button.cancel=Cancel Me
error.cust.firstname.null=First Name is required
Here we are introducing a new tag – <html:link>. This generates a hyperlink in the HTML. You must be wondering why you would need another tag when <a href="…"> might as well do the job. There are many advantages to using the <html:link> tag. We will explain one advantage relevant to our discussion – URL rewriting. We will look at other uses of the <html:link> tag in Chapter 4.

Since HTTP is stateless, J2EE web applications maintain data in a special object called HttpSession. A key on the server side uniquely identifies every user's HttpSession. You can think of the Servlet container as storing all the active sessions in a big Hash Map. A per-session cookie is created when the user accesses the web application for the first time. Thereafter the browser sends the per-session cookie to the server on every hit. The cookie serves as the key into the Servlet container's global Hash Map to retrieve the user's HttpSession. Under normal circumstances this works fine. But when the user has disabled cookies, the Servlet container uses a mechanism called URL rewriting as a workaround. In URL rewriting, the Servlet container encodes the per-session cookie information into the URL itself. However the container does not do this unless you ask it to explicitly. You should make this provision to support users with cookie-disabled browsers, since you can never anticipate user behavior in advance. Therefore, when using the regular <a href="…"> for hyperlinks, you have to manually encode the URL with the HttpServletResponse.encodeURL() method to maintain the session as follows:
<a href="<%= response.encodeURL("CustomerDetails.jsp") %>">Customer Form</a>
Now, that’s a painful and laborious thing to do for every link in your application. In addition, you are unnecessarily introducing a scriptlet for every encoding. The good news is that the <html:link> does that automatically. For instance,
<html:link page="CustomerDetails.jsp">Customer Form</html:link>
generates the following HTML, rewriting the URL to include the jsessionid.
<a href=";jsessionid=1Os1Ame91Z5XCe3l648VNohduUlhA69urqOL1C2mT1EXzsQyw2Ex!824689399">Customer Form</a>
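Conceptually, the rewriting boils down to splicing the session id into the URL when no cookie is available. The sketch below is a simplified stand-in for HttpServletResponse.encodeURL() (it ignores query strings and anchors, and the names are ours):

```java
public class UrlRewrite {
    // Simplified model of URL rewriting: append ";jsessionid=<id>"
    // only when the cookie cannot carry the session id
    static String encodeURL(String url, String sessionId, boolean cookiesEnabled) {
        if (cookiesEnabled) {
            return url; // the per-session cookie identifies the session
        }
        return url + ";jsessionid=" + sessionId;
    }

    public static void main(String[] args) {
        System.out.println(encodeURL("CustomerDetails.jsp", "ABC123", false));
        // CustomerDetails.jsp;jsessionid=ABC123
        System.out.println(encodeURL("CustomerDetails.jsp", "ABC123", true));
        // CustomerDetails.jsp
    }
}
```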
Listing 3.10 index.jsp
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
<html:html>
<head>
<html:base/>
</head>
<body>
<html:link page="CustomerDetails.jsp">Customer Form</html:link>
</body>
</html:html>
Listing 3.11 Success.jsp
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
<html:html>
<head>
<html:base/>
</head>
<body>
<h1>My First Struts Application is a Success.</h1>
</body>
</html:html>
3.3 Lights, Camera, Action!
In the previous steps, you completed all the coding that was required. Now you should compile the Java classes, create the WAR artifact and deploy it onto Tomcat. Compiling the classes is a no-brainer. Just set the right classpath and invoke javac. The classpath should consist of servlet-api.jar from Tomcat (this JAR can be found in <TOMCAT_HOME>/common/lib, where TOMCAT_HOME is the Tomcat installation directory) and all the JAR files
from the Struts distribution. They are found under the jakarta-struts-1.1/lib directory. After compiling, you have to construct the WAR. Ant, the Java community's de facto build tool, can be used to perform these tasks. However we have chosen to create the WAR manually to illustrate which component goes where in the WAR. A clear understanding of the structure of Struts web applications is key to writing effective Ant scripts. In Figure 3.2, you saw the directory structure of the Struts application. Now let us follow these steps to create the WAR that will look as shown in Figure 3.3 upon completion.
1. Create a directory called temp under the App1 directory.
2. Copy all the contents of App1/webroot AS IS into the temp directory.
3. Create a subdirectory called classes under temp/WEB-INF.
4. Copy the compiled classes into the directory WEB-INF/classes, retaining the package structure.
5. Copy App1Messages.properties into the directory WEB-INF/classes, again following the java package structure. See Figure 3.3 for the final structure of the WAR.
Figure 3.3 The structure of the WAR file.
6. Create a directory lib under WEB-INF and copy all the JAR files from the Struts distribution into the lib folder. These JAR files are required by your web application at runtime.
7. Copy struts-bean.tld and struts-html.tld from the Struts distribution into the WEB-INF directory.
8. Zip (or jar) the temp directory into a file named App1.war.
Your WAR is ready now. Drop it into the webapps sub-directory in Tomcat. Start Tomcat and test it out!
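The layout built in steps 1–7 can also be sketched programmatically. The snippet below (paths and names are illustrative; in practice Ant or the jar tool does this job) just creates the WEB-INF skeleton of the WAR:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class WarLayout {
    // Create the WEB-INF skeleton of the WAR under the given staging root
    static Path createLayout(Path root) {
        try {
            // compiled classes and App1Messages.properties, by java package
            Files.createDirectories(root.resolve("WEB-INF/classes/mybank/app1"));
            // Struts JARs needed at runtime
            Files.createDirectories(root.resolve("WEB-INF/lib"));
            return root;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path temp = createLayout(Path.of("temp"));
        System.out.println(Files.isDirectory(temp.resolve("WEB-INF/lib"))); // true
    }
}
```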
Congratulations! You have successfully developed and deployed your first Struts application. However we are not done yet. Let us look at some practical issues that need to be addressed.
3.4 Handling multiple buttons in HTML Form
In the example application, we used the <html:submit> tag to submit the HTML form. Our usage of the tag was as follows:
<html:submit><bean:message key="button.save"/></html:submit>
This generates the following HTML.
<input type="submit" value="Save Me">
This worked okay for us since there was only one button with "real" Form submission (the other one was a Cancel button). Hence it sufficed for us to process the request straight away in CustomerAction. But consider a Form with more than one submit button. The Form still submits to the single URL declared in its action attribute:
<form name="CustomerForm" action="/App1/submitCustomerForm.do">
and is unique to the Form. You have to use a variation of the <html:submit> as shown below to tackle this problem.
<html:submit property="step">
  <bean:message key="button.save"/>
</html:submit>
The generated HTML submit button has a name associated with it. You have to now add a JavaBeans property to your ActionForm whose name matches the
submit button name. In other words, an instance variable with a getter and setter is required. If you were to make this change in the application just developed, you have to add a variable named "step" to the CustomerForm, along with its getter and setter. Listing 3.12 shows the modified execute() method from CustomerAction. The changes are shown in bold. When the Save Me button is pressed, the custForm.getStep() method returns a value of "Save Me" and the corresponding code block is executed.
Listing 3.12 CustomerAction modified for multiple button Forms

public class CustomerAction extends Action {
    public ActionForward execute(ActionMapping mapping,
            ActionForm form,
            HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        CustomerForm custForm = (CustomerForm) form;
        ActionForward forward = null;
        if ("Save Me".equals(custForm.getStep())) {
            System.out.println("Save Me Button Clicked");
            String firstName = custForm.getFirstName();
            String lastName = custForm.getLastName();
            System.out.println("Customer First name is " + firstName);
            System.out.println("Customer Last name is " + lastName);
            forward = mapping.findForward("success");
        }
        return forward;
    }
}
Now suppose the Form has a second submit button labeled "Spike Me". The submit button can still have the name "step" (same as the "Save Me" button). This means the CustomerForm class has a single JavaBeans property "step" for the submit buttons. In the CustomerAction you can then check if custForm.getStep() is "Save Me" or "Spike Me". If each of the buttons had different names like button1, button2 etc. then the CustomerAction would have to perform checks as follows:
if ("Save Me".equals(custForm.getButton1())) {
    // Save Me Button pressed
} else if ("Spike Me".equals(custForm.getButton2())) {
    // Spike Me Button pressed
}

There is a catch in checking for button labels this way. The button labels come from the Message Resource Bundle, and for a Spanish locale the Save Me button might read "Excepto Mí" instead of "Save Me". However the CustomerAction class is still looking for the hard coded "Save Me". Consequently the code block meant for the "Save Me" button never gets executed. In Chapter 4, you will see how a specialized subclass of Action called LookupDispatchAction solves this problem.
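The idea behind that fix can be sketched without Struts at all: map message keys to handler names once, then reverse-look-up the localized label at runtime, so no label is ever hard coded. Everything below (the class, the method and the exact labels) is our illustration, not framework code:

```java
import java.util.HashMap;
import java.util.Map;

public class LookupDispatchSketch {
    // Given key->method and key->localized-label maps, find the handler
    // method for the label the browser actually submitted
    static String methodFor(Map<String, String> keyMethodMap,
                            Map<String, String> bundle,
                            String submittedLabel) {
        Map<String, String> labelToMethod = new HashMap<>();
        bundle.forEach((key, label) ->
                labelToMethod.put(label, keyMethodMap.get(key)));
        return labelToMethod.get(submittedLabel);
    }

    public static void main(String[] args) {
        Map<String, String> keyMethodMap =
                Map.of("button.save", "save", "button.spike", "spike");
        // Spanish bundle: the labels differ, the keys do not
        Map<String, String> bundle =
                Map.of("button.save", "Excepto Mí", "button.spike", "Spike Me");
        System.out.println(methodFor(keyMethodMap, bundle, "Excepto Mí")); // save
    }
}
```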
3.5 Value replacement in Message Resource Bundle
When you constructed the web application earlier in this chapter, you used static messages in the Resource Bundle. However consider this: you have a dozen fields in the form and the only validation rule is that all fields are required. The error messages for the fields then differ from one another only by the field name: First name is required, Last name is required, Age is required and so on. It would be ideal if there were a mechanism to substitute the field name into a fixed error message template. The good news is that one already exists. In the resource bundle file, you can define a template for the above error message as:
errors.required={0} is required.
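The {0} placeholder follows java.text.MessageFormat conventions, which you can try standalone:

```java
import java.text.MessageFormat;

public class ErrorTemplate {
    public static void main(String[] args) {
        // The errors.required entry from the resource bundle
        String template = "{0} is required.";
        // {0} is replaced by the first argument, {1} by the second, and so on
        String msg = MessageFormat.format(template, "First name");
        System.out.println(msg); // First name is required.
    }
}
```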
In the validate() method, the ActionError is then constructed using one of the following overloaded constructors.
public ActionError(String key, Object value0);
public ActionError(String key, Object value0, Object value1);
. . .
public ActionError(String key, Object[] values);
The first overloaded constructor accepts the key and one replacement value. The second overloaded constructor accepts a key and two replacement values. The last constructor accepts a key and an array of objects for replacement. You can now construct an ActionError for the first name as follows:
String[] strArray = {"First name"};
ActionError err = new ActionError("errors.required", strArray);
This will result in the error message: First name is required. Beautiful, isn't it? Now you can make this even better. Notice that in the above example, we hard coded the field name in the replacement value array while reducing the set of error messages to a single error message template. Let us go one step further and get the field name from the resource bundle too. The following code shows how to do it.
MessageResources msgRes = (MessageResources)
    request.getAttribute(Globals.MESSAGES_KEY);
String firstName =
    msgRes.getMessage("prompt.customer.firstname");
ActionError err =
    new ActionError("errors.required", firstName);
First, a MessageResources for the current module is obtained. Next, the display value of the first name field is obtained from the
MessageResources (resource bundle) using the getMessage() method with the key for the first name – prompt.customer.firstname.
Finally, the display value of the first name field is used as a replacement parameter in the ActionError using the first of the overloaded constructors. This is generally the preferred way of constructing reusable error messages when the validate() method is coded manually.

TIP: Using the struts-blank.war as a template
In this application we put together everything from scratch. You can keep the result as a template for future use, or you can use the ready-made template available in the Struts distribution. The ready-made template, called struts-blank.war, is something that you can unwar and use as a
template for your applications. It has all the tlds and jars included in the WAR. Plus it provides the web.xml and struts-config.xml ready to be used as placeholders with default values.
3.6 Summary
In this chapter you applied all that you have learned so far and built a Hello World Struts application. This application, although simple, illustrates the basic steps in building a Struts application. In the coming chapters we will go beyond the basics, learn other features of Struts and apply them effectively to tackle real life scenarios.
Chapter 4
All about Actions
In this chapter:
1. You will learn about all the built-in Struts Actions – ForwardAction, IncludeAction, DispatchAction, LookupDispatchAction and SwitchAction
2. You will learn about multiple sub application support in Struts and using SwitchAction to transparently navigate between them
3. You will see examples of effectively using the built-in Actions
4. You will learn of ways to prevent direct JSP access by the users

In Chapter 2, you understood the basics of the Struts framework. In Chapter 3, you applied those basics to build a simple web application using Struts and got a clear picture of how the pieces fit together. In this chapter we take you beyond the basics as you explore Struts Controller components that are interesting, timesaving and prepare you to handle realistic scenarios.

Action classes are where your presentation logic resides. In Chapter 2, you saw how to write your own Action. Struts 1.1 provides some types of Action out-of-the-box, so you don't have to build them from scratch. The Actions provided by Struts are ForwardAction, IncludeAction, DispatchAction, LookupDispatchAction and SwitchAction. All these classes are defined in the org.apache.struts.actions package. These built-in actions are very helpful in addressing problems faced in day to day programming. Understanding them is the key to using them effectively. All of these Actions are frequently used, except for IncludeAction.
4.1 ForwardAction
Consider a hyperlink in PageA that navigates to PageB. Normally you would code the link as follows:
<a href="PageB.jsp">Go to Page B</a>
or even better, as follows:
<html:link page="/PageB.jsp">Go to Page B</html:link>
This is what we did in Chapter 3 when navigating from index.jsp to CustomerDetails.jsp. However this violates the MVC spirit by directly accessing the JSP. In Model 2 applications, it is the responsibility of the Controller to select and dispatch to the next view. In Struts, the ActionServlet and Action classes together form the controller. They are supposed to select and dispatch to the next view. Moreover the ActionServlet is responsible for intercepting your request and providing appropriate attributes such as Message Resource Bundles. If you bypass this step, the behavior of the Struts tags may become unpredictable.

MVC compliant usage of LinkTag

Struts provides a built-in Action class called ForwardAction to address this issue. With ForwardAction, the Struts Controller is still in the loop while navigating from PageA to PageB. There are two steps involved in using the ForwardAction. First, declare the PageA hyperlink that takes you to PageB as follows:
<html:link page="/gotoPageB.do">Go to Page B</html:link>
Next, add an ActionMapping in the Struts Config file as follows:
<action path="/gotoPageB"
        parameter="/PageB.jsp"
        type="org.apache.struts.actions.ForwardAction" />
The PageA.jsp hyperlink now points to "/gotoPageB.do" instead of "PageB.jsp". This ensures that the controller is still in the loop. The three attributes shown above are mandatory in a ForwardAction. The type attribute is always org.apache.struts.actions.ForwardAction instead of a custom Action of yours. The path attribute identifies the URL path, as in any other ActionMapping. The parameter attribute in the above definition is the URL for the next JSP.
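Stripped of the framework types, a ForwardAction-style execute() reduces to handing back the URI named by the parameter attribute. The stand-in classes below are ours, not the real Struts API:

```java
public class ForwardSketch {
    // Stand-in for the <action> entry in the Struts Config file
    static class Mapping {
        final String path;      // e.g. "/gotoPageB"
        final String parameter; // e.g. "/PageB.jsp"
        Mapping(String path, String parameter) {
            this.path = path;
            this.parameter = parameter;
        }
    }

    // Essence of a forward-only action: the next view is whatever
    // the parameter attribute names
    static String execute(Mapping mapping) {
        return mapping.parameter;
    }

    public static void main(String[] args) {
        Mapping gotoPageB = new Mapping("/gotoPageB", "/PageB.jsp");
        System.out.println(execute(gotoPageB)); // /PageB.jsp
    }
}
```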
In the above ActionMapping you might have noticed there is no ActionForm. The Struts Config file DTD specifies that the Form bean is optional in an ActionMapping. Logically speaking, an ActionForm makes sense only where there is data to be collected from the HTML request. In situations like this, where no HTML data is involved in the navigation, there is no need for an ActionForm.

Using LinkTag's action attribute

The LinkTag (<html:link>) has several variations. It can be used in a variety of ways in conjunction with ForwardAction. You just saw one usage of the LinkTag. A second way of using this tag is as follows. First, declare the PageA hyperlink that takes you to PageB as follows:
<html:link action="gotoPageB">Go to Page B</html:link>
Next, add the ActionMapping for /gotoPageB in the Struts Config file the same way as before:
<action path="/gotoPageB"
        parameter="/PageB.jsp"
        type="org.apache.struts.actions.ForwardAction" />
When you use the action attribute instead of the page attribute in <html:link>, you need not specify the ".do" explicitly.

Using LinkTag's forward attribute

There is yet another way to use <html:link>. In this approach you use the forward attribute of the <html:link> tag instead of action. There are two steps involved in this approach. First, declare the PageA hyperlink that takes you to PageB as follows:
<html:link forward="pageBForward">Go to Page B</html:link>
Add a Global Forward for "pageBForward" as follows in the global-forwards section:
<global-forwards>
  <forward name="pageBForward" path="/PageB.jsp" />
</global-forwards>
When used in this manner, the <html:link> gets transformed into the following HTML link:

<a href="App1/PageB.jsp">Go to Page B</a>

Oops, that doesn't seem right. The HTML link is now displaying the actual JSP name directly in the browser. Ideally you would love to hide the JSP name
from the user. And with a slight twist you can! First, define an ActionMapping as follows:
<action path="/gotoPageB"
        parameter="/PageB.jsp"
        type="org.apache.struts.actions.ForwardAction" />
Next, modify the global forward itself to point to the above ActionMapping.
<global-forwards>
  <forward name="pageBForward" path="/gotoPageB.do" />
</global-forwards>
When used in this manner, the <html:link> gets transformed into the following HTML link:

<a href="App1/gotoPageB.do">Go to Page B</a>

There you go! The generated HTML does not display the JSP name anymore. From a design perspective this seems to be the best way of using the <html:link> tag, since the link is completely decoupled from the associated ActionMapping, thanks to the global-forward. The <html:link> points to the global-forward and the global-forward points to the ForwardAction. The extra level of indirection, although it looks confusing in the beginning, is a good design decision for the following reason: as is true with any application, requirements change, and it might become necessary to do some processing during the navigation from PageA to PageB. A conversion from ForwardAction to a custom Action is easier to manage with the extra level of indirection.

Using ForwardAction for Integration

In general, the ForwardAction's parameter attribute specifies the resource to be forwarded to. It can be a physical page like PageB.jsp or it can be a URL pattern handled by another controller, maybe somewhere outside Struts. For instance, consider the following ForwardAction.
<action path="/gotoPageB"
        parameter="/xoom/AppB"
        type="org.apache.struts.actions.ForwardAction" />
In the snippet above, the value of the parameter is not a physical page. It is a logical resource that might be mapped to another Servlet totally outside the control of Struts. Yet from PageA's perspective, you are still dealing with a Struts URL. This is the second use of ForwardAction: you can integrate Struts applications transparently with already existing non-Struts applications.

NOTE: Even with the ForwardAction, you cannot prevent a nosy user from accessing the JSP directly. See the section Protecting JSPs from direct access for techniques to protect your JSPs from direct access.
Struts Survival Guide – Basics to Best Practices
ForwardAction Hands-on

In the last chapter, we modeled the navigation from index.jsp to CustomerDetails.jsp with a direct link. Let us correct the mistake we made by applying the knowledge we have gained so far. Think of index.jsp as PageA and CustomerDetails.jsp as PageB. The <html:link> in index.jsp will look as follows:

<html:link forward="CustomerDetailsPage">Customer Form</html:link>

The following Global Forward and ForwardAction are added to the Struts Config file.
<global-forwards>
  ..
  <forward name="CustomerDetailsPage"
           path="/gotoCustomerDetails.do" />
</global-forwards>

<action-mappings>
  ..
  <action path="/gotoCustomerDetails"
          parameter="/CustomerDetails.jsp"
          type="org.apache.struts.actions.ForwardAction" />
</action-mappings>
And now, we have an application that strictly adheres to MVC. What a relief!
4.2 Protecting JSPs from direct access

A nosy user might attempt to guess the JSP name from the operation performed in that page, from request parameters, or worse, if the page author used HTML comment tags for SCM and code comments instead of JSP comments. Armed with this information, the user attempts to access the JSPs directly. A JSP, as you know, is a view and it displays information based on model objects stored in one of the four scopes – page, request, session or application, the first three being the most
common. These objects are created by the back end presentation and business logic and made available for the JSP to act upon. When the JSP is accessed out of context or out of order, the required model objects may not exist in the appropriate scope, which almost always leads to exceptional situations in the JSP code. It is not common to perform null checks in every bit of code in the JSP tags, scriptlets and other helper classes. These checks are generally limited to interfaces and boundaries between modules and not later on. For instance, in a typical Model 2 scenario, when the model object cannot be created for some reason, the controller instead takes an alternate route and displays an alternate view corresponding to the null model object. This assumption that model objects are not null in the main path of the presentation logic and view highly simplifies the coding. In fact, when the system is accessed as intended, everything works smoothly. However, whenever somebody tries to access the views out of order, all hell breaks loose. Every view starts throwing NullPointerExceptions, IllegalArgumentExceptions and other unchecked and checked exceptions depending on how the JSP page and its tags and scriptlets are authored. This is exactly what a nosy user is trying out. The implications are even more serious when a malicious user tries to find weak points in the design to bring the system down to its knees.

The first thing that might occur to you is to put checks for nulls and unintended access in the system. Invariably, this is nothing but a collection of if-else blocks in every part of the JSP page, making it messy and buggy to maintain. Two prominent alternatives exist. Let us look at the easiest one first. As we glossed over earlier, the servlet specification explicitly states that contents located in WEB-INF and its sub-directories are protected from outside access. Let us take a simple example to illustrate this.
All contents located in a WAR belong to the same protection domain. A protection domain is a set of entities known (or assumed) to trust each other. Consequently any resource within a WAR can access resources located under the WEB-INF directory without restrictions. A JSP is also a resource and thus any class within the same WAR can forward to a JSP under WEB-INF. (This part is not explicitly stated in the specification.) However, when the request originates outside the container, direct access to resources under WEB-INF is denied. What if the hyperlink in one of your pages wants to really just forward to another JSP? Is that disallowed as well? Yeah! You cannot have different rules in your system, right? However there is a way around.
Consider the case when a hyperlink in page A needs to forward the request to page B. Instead of directly forwarding to page B, which is disallowed, you can put the following entry in the struts-config.xml
<action path="/gotoPageB"
        parameter="/WEB-INF/pageB.jsp"
        type="org.apache.struts.actions.ForwardAction" />
On pageA, the hyperlink can point to "pageB.do" if suffix mapping is used, or some other path if path mapping is used. Either way, the ActionMapping shown above is picked up and, as its type indicates, the action is just a ForwardAction, which as the name suggests performs a forward. However, since the forward occurs from within the container (in the protection domain), it is allowed. A question might be popping up in your mind: the technique just highlighted is the easiest and also supposedly the best. Why would I need anything lesser than the best? The answer is that not all containers support the behavior just mentioned. As we stated earlier, since the specification is clear about not letting direct access to resources under WEB-INF, all J2EE compliant application servers implement it. However, the second part is not stated in the specification and consequently it is the vendor's prerogative to implement it or not. Certain providers do (e.g. Tomcat) and others don't (e.g. WebLogic). Hence we have to have an alternate mechanism for the less fortunate ones. This one is not difficult either. Instead of putting the JSPs underneath WEB-INF, they can stay wherever they are. The following entries are added to the web.xml.
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Deny Direct Access</web-resource-name>
    <description>
      Deny direct access to JSPs by associating them
      with denied role
    </description>
    <url-pattern>*.jsp</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>Denied</role-name>
  </auth-constraint>
</security-constraint>

<security-role>
  <role-name>Denied</role-name>
</security-role>
First, all the URL patterns ending with the suffix ".jsp" are associated with a Role named "Denied". Any user who wants to access the JSP pages directly should be in that role. We further ensure that no user of the system is in that Role. Role and user association is done depending on your implementation of authentication and authorization. For instance, if you are using LDAP as the user persistence mechanism, then the users, their passwords and Roles are stored in LDAP. If you ensure nobody gets the Denied role, then you have effectively prevented everyone from directly accessing the JSPs. You will still have to have the ForwardAction as shown earlier in this section if you have a situation where page A needs to just navigate to page B. The internal forwards to other JSPs using RequestDispatcher are okay because the container does not intercept and cross check internal forwards even though the url-pattern matches the ones in web.xml.

NOTE: The default pagePattern and forwardPattern values for the <controller> element in struts-config.xml are $M$P, where $M is replaced with the module prefix and $P is replaced with the path attribute of the selected forward. If you place your JSP files under WEB-INF for access protection, you have to set the pagePattern and forwardPattern attributes of the <controller> element in struts-config.xml to /WEB-INF/$M$P to tell Struts to construct the paths correctly.
4.3 IncludeAction

Wouldn't it be better to have a <jsp:include> that pretends as if the resource exists in the current Struts application? It would be ideal if the page include looked as follows:
<jsp:include page="/App1/legacyA.do" />
The /legacyA.do cannot be a ForwardAction because it would perform a HTTP Forward to the above resource instead of including the resource in the HTTP response. Since the HTTP Response OutputStream closes (the J2EE jargon for this is that the response has been committed) after a HTTP Forward, the rest of the page cannot be written. What is needed is a HTTP include instead, and that is exactly what IncludeAction performs. Consequently the output of the LegacyServletA is displayed in the same HTML as that of the Struts application. You have to add the following ActionMapping in the Struts Config file:
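The mapping would look something like the following sketch; the parameter value (the URL pattern of the legacy servlet) is an assumption based on the surrounding text:

```xml
<!-- Sketch only: /legacyServletA is an assumed URL pattern for
     LegacyServletA, mapped in web.xml outside of Struts -->
<action path="/legacyA"
        parameter="/legacyServletA"
        type="org.apache.struts.actions.IncludeAction" />
```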
This requires the use of <jsp:include> in the page. If your web application is aggregating responses from legacy servlet applications, portlets seem to be the way to go. The Portlet API (JSR 168) has been finalized and it is a matter of time before you can develop standardized portals aggregating contents from disparate web applications. The Tiles framework is the way to go if you are on a short-term project that wants to aggregate information now (from different applications or may be from various Actions in the same Struts application). Tiles provides a robust alternative to the primitive <jsp:include>. Chapter 7 provides an in-depth coverage of Tiles in conjunction with Struts.
4.4 DispatchAction

Consider the list of credit applications shown in Figure 4.1. The bank staff can act on each application in one of three ways – Approve, Reject or Add Comment. Consequently there are three images, each being a <html:link>. One way of dealing with this situation is to create three different Actions – one per operation. A better alternative is the
DispatchAction. Let us assume that CreditAppAction, a sub-class of DispatchAction, is used to implement the above-mentioned presentation logic. It has three methods – reject(), approve() and addComment(). The CreditAppAction class definition is shown in Listing 4.1.
You might be wondering why all the three methods take the same four arguments – ActionMapping, ActionForm, HttpServletRequest and HttpServletResponse. Don't worry, you will find the answer soon. For a moment, look at the URLs submitted when the bank staff perform the three actions mentioned before. They would look something like this:

bank/screen-credit-app.do?step=approve&id=2
Figure 4.1 Screen Credit Applications page as seen by the bank staff.

How does Struts know which method to invoke on the CreditAppAction? It doesn't. You will have to tell it explicitly. And this is done in the ActionMapping for /screen-credit-app.do. The ActionMapping for
the URL path "/screen-credit-app.do" is declared in struts-config.xml as shown in Listing 4.2. The section highlighted in bold is what makes this Action different from the rest. The type is declared as mybank.example.list.CreditAppAction – you already knew that. Now, let us look at the second attribute in bold. This attribute, named parameter, has the value "step". Notice that one of the HTTP request parameters in the URLs above is also named "step". This is no coincidence: for every request, the DispatchAction reads the value of the request parameter named in the parameter attribute and invokes the method of that name on itself.
Listing 4.1 Example DispatchAction
public class CreditAppAction extends DispatchAction {

    public ActionForward reject(ActionMapping mapping,
            ActionForm form, HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        String id = request.getParameter("id");
        // Logic to reject the application with the above id
        ... ... ...
        return mapping.findForward("reject-success");
    }

    public ActionForward approve(ActionMapping mapping,
            ActionForm form, HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        String id = request.getParameter("id");
        // Logic to approve the application with the above id
        ... ... ...
        return mapping.findForward("approve-success");
    }

    public ActionForward addComment(ActionMapping mapping,
            ActionForm form, HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        String id = request.getParameter("id");
        // Logic to view application details for the above id
        ... ... ...
        return mapping.findForward("viewDetails");
    }
    ... ...
}
DispatchAction can be confusing in the beginning. But don't worry. Follow these steps to set up the DispatchAction and familiarize yourself with the approach.
1. Create a subclass of DispatchAction.
2. Identify the related actions and create a method for each of the logical actions. Verify that the methods have the fixed method signature shown earlier.
3. Identify the request parameter that will uniquely identify all actions.
4. Define an ActionMapping for this subclass of DispatchAction and assign the previously identified request parameter as the value of the parameter attribute.
5. Set your JSP so that the previously identified request parameter (Step 3) takes on DispatchAction subclass method names as its values.
Listing 4.2 ActionMapping
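Based on the description above, the mapping would look roughly as follows. The path, type and parameter values come from the discussion; the forward paths are assumptions for illustration:

```xml
<!-- The forward paths below are assumed for illustration -->
<action path="/screen-credit-app"
        type="mybank.example.list.CreditAppAction"
        parameter="step">
  <forward name="reject-success" path="/appRejected.jsp"/>
  <forward name="approve-success" path="/appApproved.jsp"/>
  <forward name="viewDetails" path="/appDetails.jsp"/>
</action>
```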
DispatchAction also works when a form is submitted with one of several buttons (instead of the image buttons). Just name all the buttons the same. For instance,
<html:submit property="step">Update</html:submit>
<html:submit property="step">Delete</html:submit>
and so on. Image buttons are a different ball game. Image button usage for form submission and DispatchAction are mutually exclusive; you have to choose one. See Chapter 6 on Struts tags for details on image buttons. In the above example we used the DispatchAction with methods that have ActionForm as one of their arguments. As you learnt in the last chapter, an ActionForm always existed in conjunction with the Action. Earlier in this chapter, we dealt with ForwardAction and we developed neither our own Action nor ActionForm. In that context we stated that having an ActionForm was optional. That holds true even if the Action is a custom coded one like the CreditAppAction. If the ActionMapping does not specify a form bean, then the ActionForm argument has a null value. In Listing 4.1, for instance, none of the methods actually uses the ActionForm argument.
4.5 LookupDispatchAction
In Chapter 3 you were introduced to a Localization problem with the Action class when the form has multiple buttons. Using LookupDispatchAction is one way of addressing the problem when regular buttons are used. Chapter 6 presents another alternative that works irrespective of whether an Image or a grey button is used to submit the Form. One has to choose the most appropriate solution under the given circumstances.
Figure 4.2 Modified Screen Credit Applications page as seen by the bank staff.
LookupDispatchAction is a subclass of DispatchAction as its name
suggests. We will use a slightly modified example to illustrate the use of
LookupDispatchAction. We will still use the list of credit applications as
before, but with one twist. Each row in the list is a HTML Form and the images are now replaced with grey buttons to submit the Form. Figure 4.2 shows the modified application list as seen by the bank personnel. A LookupDispatchAction for this example is created by following these steps:

1. Create a subclass of LookupDispatchAction.
2. Identify the related actions and create a method for each of the logical actions. Verify that the methods have the fixed method signature similar to the DispatchAction methods in Listing 4.1.
3. Identify the request parameter that will uniquely identify all actions.
4. Define an ActionMapping in struts-config.xml in the same way as for DispatchAction (Listing 4.2). Assign the previously identified request parameter as the value of the parameter attribute in the ActionMapping.

All the steps up until this point are the same as what you did before with DispatchAction. From here on, they differ.

5. Implement a method named getKeyMethodMap() in the subclass of the LookupDispatchAction. The method returns a java.util.Map. The keys used in the Map should also be used as keys in the Message Resource Bundle. The values of those keys in the Resource Bundle should be the method names from step 2 above.

If the CreditAppAction from the bank example were to be implemented as a LookupDispatchAction, it would look like Listing 4.3.
Listing 4.3 Example LookupDispatchAction
public class CreditAppAction extends LookupDispatchAction {

    public ActionForward reject(ActionMapping mapping,
            ActionForm form, HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        ... ... ...
    }

    //Other methods go here

    public Map getKeyMethodMap() {
        Map map = new HashMap();
        map.put("button.approve", "approve");
        map.put("button.reject", "reject");
        map.put("button.comment", "addComment");
        return map;
    }
}
6. Next, create the buttons in the JSP by using <bean:message> for their names. This is very important: if you hardcode the button names you will not get the benefit of the LookupDispatchAction. For instance, the JSP snippets for the Approve and Add Comment buttons are:
<html:submit property="step">
  <bean:message key="button.approve"/>
</html:submit>
<html:submit property="step">
  <bean:message key="button.comment"/>
</html:submit>
The <bean:message> keys point to the messages in the Resource Bundle.
button.approve=Approve
button.reject=Reject
button.comment=Add Comment
In summary, for every form submission, LookupDispatchAction does a reverse lookup on the resource bundle to find the key whose value matches the submitted button label, and then invokes the method associated with that key in getKeyMethodMap(). That was quite a bit of work just to execute a method; DispatchAction was much easier! But the implications of LookupDispatchAction are significant. The method invoked in the Action is not driven by the button's label in the front end, but by the Locale-independent key into the resource bundle. Since the key is always the same, the LookupDispatchAction shields your application from the side effects of I18N.
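The reverse lookup just described can be sketched in plain Java, outside the servlet container. The bundle contents mirror the example above; the class name is made up for illustration, and the dispatch returns the method name rather than invoking it through reflection as the real LookupDispatchAction does:

```java
import java.util.HashMap;
import java.util.Map;

// Servlet-free sketch of LookupDispatchAction's dispatch logic.
class LookupDispatchSketch {
    // Resource bundle: key -> localized button label
    static final Map<String, String> bundle = new HashMap<>();
    // getKeyMethodMap(): key -> handler method name
    static final Map<String, String> keyMethodMap = new HashMap<>();
    static {
        bundle.put("button.approve", "Approve");
        bundle.put("button.reject", "Reject");
        bundle.put("button.comment", "Add Comment");
        keyMethodMap.put("button.approve", "approve");
        keyMethodMap.put("button.reject", "reject");
        keyMethodMap.put("button.comment", "addComment");
    }

    // Given the submitted button label, reverse-look-up its bundle key,
    // then map that key to the name of the method to invoke.
    static String methodFor(String submittedLabel) {
        for (Map.Entry<String, String> e : bundle.entrySet()) {
            if (e.getValue().equals(submittedLabel)) {
                return keyMethodMap.get(e.getKey());
            }
        }
        return null; // unknown label
    }
}
```

If the bundle were switched to a French Locale, only the labels would change; the keys, and therefore the dispatched methods, would stay the same.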
4.6 Configuring multiple application modules
So far we have covered several important built-in Actions with examples. There is one more feature that is a very important and useful addition in 1.1 – multiple application module support. In Struts 1.0 (and earlier), only a single config file was supported. This file, normally called struts-config.xml, was specified in web.xml as an initialization parameter for the ActionServlet as follows:
<servlet> <servlet-name>mybank</servlet-name> <servlet-class>org.apache.struts.action.ActionServlet </servlet-class> <init-param> <param-name>config</param-name> <param-value>/WEB-INF/struts-config.xml</param-value> </init-param> </servlet>
The single configuration file is a bottleneck in large projects, as all developers have to contend for this one resource. In addition, managing a monolithic file is painful and error prone. With Struts 1.1 this problem has been resolved by the addition of multiple sub-application support, better known as application modules. You can now have multiple configuration files, one for each module or logical group of forms. The configuration files are specified in the web.xml file using multiple <init-param> initialization parameters as shown in Listing 4.4.
Listing 4.4 web.xml setting for Multiple Application module Support
<servlet> <servlet-name>mybank</servlet-name> <servlet-class>org.apache.struts.action.ActionServlet </servlet-class> <init-param> <param-name>config</param-name> <param-value>/WEB-INF/struts-config.xml</param-value> </init-param> <init-param> <param-name>config/module1</param-name> <param-value> /WEB-INF/struts-module1-config.xml </param-value> </init-param> </servlet>
The newly added application module is shown in bold. The default application module based on struts-config.xml can still continue to exist. The new module is defined by adding an initialization parameter config/module1. In fact, any init-param prefixed with "config/" is interpreted as the configuration for a separate module. Its corresponding value – /WEB-INF/struts-module1-config.xml – is the Struts configuration file containing Form bean definitions and ActionMappings for the module module1. If an ActionMapping in the default struts-config.xml were moved to struts-module1-config.xml, its URL would change accordingly: the application URL contains the module name after the web application context (App1), as if it were a sub-directory name. Even though each application module has a separate Struts configuration file and a sub-directory-like URL pattern when accessed through the browser, the physical organization need not necessarily be the same, although that is generally the route taken. After all, the application module was created for logical division (driven by functional requirements) and there are fewer headaches if the physical organization matches the logical division as much as possible.

The benefits of application modules are immediately obvious. You can now split your monolithic Struts application into logical modules, thus making maintenance easier. It causes less contention during development time, as developers working on different modules get to work on their own Struts configuration files. Each Struts configuration file, and hence each application module, can choose its own RequestProcessor, MessageResources and PlugIn. You can now choose to implement one or more modules with Tiles. If you find this convenient and useful, then you can migrate your application to Tiles or JSF or plug in any other Struts extension one module at a time.

Here is a tip: Generally web applications are organized so that navigation occurs from generic to specific. For instance, you start from the initial welcome page for the web application and then navigate to a specific module. You can organize your Struts modules so that the initial welcome page and other top-level pages are defined in the default application module (struts-config.xml). The pages corresponding to individual use cases are defined in different application modules (struts-config-xxx.xml).
You can then navigate from the default application module to the use case specific module. That brings up the question: how do you move between application modules? It is quite simple actually. Struts 1.1 provides a specialized Action called SwitchAction. We will illustrate its usage with an example. Consider a Struts banking application with a default module (with top level pages) and another module named loanModule. The JSPs of the loan module are present in a directory called loanModule and its action mappings are defined in struts-config-loan.xml. The top-level page defined in the default application module provides a hyperlink to navigate to the loan module as shown below. This hyperlink points to a global-forward named goto-loanModule in the default struts-config.xml.
<html:link forward="goto-loanModule">
  Go to Loan Module
</html:link>
Add the action mapping for SwitchAction to the default struts-config.xml as follows:
<action path="/switch" type="org.apache.struts.actions.SwitchAction"/>
Now, add a global-forward named goto-loanModule to the default struts-config.xml as follows:
<forward name="goto-loanModule"
         path="/switch.do?page=/listloans.do&amp;prefix=/loanModule" />
This global-forward in turn points to an action mapping called switch.do and also adds two request parameters. The switch.do is the ActionMapping for the SwitchAction. The two request parameters – prefix and page – stand for the module and the action mapping within that module. In this case, the module is loanModule (identified by struts-config-loan.xml) and listloans.do stands for an action mapping within struts-config-loan.xml – the Struts Config file for the Loan module. In struts-config-loan.xml, add the action mapping for listloans.do as follows:
<action path="/listloans"
        type="mybank.example.loan.ListLoanAction">
</action>
The ListLoanAction is a normal Struts Action that decides the next resource to forward to in its execute() method. If you don't have additional processing to do, you can use a ForwardAction too. If you want to go from the Loan module back to the default module, repeat the same process, setting the prefix request parameter to a zero-length string.
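For instance, the return trip could be wired with a global-forward in struts-config-loan.xml along these lines; welcome.do is an assumed name for a top-level action mapping in the default module:

```xml
<!-- Sketch: welcome.do is an assumed action in the default module;
     the empty prefix switches back to the default module -->
<forward name="goto-defaultModule"
         path="/switch.do?page=/welcome.do&amp;prefix=" />
```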
4.7 Roll your own Base Action and Form
You have looked at different types of Actions offered by Struts. Now, let us look at some recommended practices in using Action. When it comes to using Actions, the brute force approach is to extend the actions directly from the org.apache.struts.action.Action. But a careful look at your web application will certainly reveal behavior that needs to be centralized. Sooner or later you will discover functionality common to all the actions. While it is
impossible to predict the exact purposes for which you might need the base Action, here are some samples:

- You might like to perform logging in Action classes for debugging purposes, to track user behavior, or for security audit purposes.
- You might want to retrieve the user's profile from an application specific database to check if the user has access to your application and act appropriately.

Whatever the purpose, there is always something done in every web application warranting a parent Action class. Start with a common parent Action class. Let us call it MybankBaseAction. Depending on the complexities of the web application, you can further create child classes for specific purposes. For instance, an Action subclass for dealing with form submissions and another for dealing with hyperlink-based navigation is a logical choice if the Action classes handling hyperlinks don't need an ActionForm. You might want to filter out some words typed in the form fields.

In conjunction with the base Action, you can also roll a base Form extending org.apache.struts.action.ActionForm. Let us call this class MybankBaseForm. The base form fits well into the base action strategy. In chapter 2, we introduced the term View Data Transfer Object to refer to an ActionForm. This isn't without a reason. Data Transfer Object is a Core J2EE pattern name. It is typically used between tiers to exchange data. The ActionForm serves a similar purpose in a Struts application, and you should use it to its very best. Typical uses of a base form would be:

- Add attributes to the base form that are needed frequently in the web application. Consider a case when every Action in your web application needs to reference an attribute in the request or session. Instead of adding the code to access this attribute as request.getAttribute("attribName") everywhere, you can set this as an ActionForm attribute and access it in a type-safe manner in the application.
- Retrieving the user's profile from an application specific database and then setting it as a form attribute on every call to MybankBaseAction's execute() method.

Listing 4.5 shows the MybankBaseAction using the MybankBaseForm. It implements the execute() method and adds audit logging for entry and exit points. Further down the line, it retrieves the application specific profile for the user. This is helpful if you have a portal with a single sign-on and the user rights and profiles differ from one application to another. Then it casts the ActionForm to MybankBaseForm and assigns its variables the values of commonly accessed request and session attributes. MybankBaseAction defines three abstract methods – preprocess(), process() and postprocess(). These methods, when implemented by the subclasses, respectively perform pre-processing, processing and post-processing activities. Their signatures are as follows:
protected abstract void preprocess(ActionMapping mapping,
    MybankBaseForm form, HttpServletRequest request,
    HttpServletResponse response) throws Exception;

protected abstract ActionForward process(ActionMapping mapping,
    MybankBaseForm form, HttpServletRequest request,
    HttpServletResponse response) throws Exception;

protected abstract void postprocess(ActionMapping mapping,
    MybankBaseForm form, HttpServletRequest request,
    HttpServletResponse response) throws Exception;
Pre-processing activities involve validating the form (validations requiring access to backend resources are typically performed in the Action instead of the ActionForm, where the validations are limited to trivial checking and interdependent fields), checking for duplicate form submissions (in the next section you will look at the facilities in Struts to handle duplicate form submissions; in Chapter 10 we will develop the generalized strategy for duplicate form handling – not just repeating the synchronizer token logic in the Action classes), checking if the action (page) was invoked in the right order (if a strict wizard-like behavior is desired) etc. Processing activities are the meat of the application and do not need any more explanation. Validation errors can be discovered in this stage too. Post-processing activities may involve setting the sync token (for checking duplicate form submission), cleaning up unnecessary objects from request and session scopes and so on. The bottom line is that all applications have recurring tasks that need to be refactored into the parent class, and hence a base Form and Action are a must for every serious application. In a later chapter we will add a lot of functionality into the base Action, giving you that many more reasons to create one.
Listing 4.5 The MybankBaseAction

public class MybankBaseAction extends Action {

    public ActionForward execute(ActionMapping mapping,
            ActionForm form, HttpServletRequest request,
            HttpServletResponse response) throws Exception {

        // Add centralized logging here (Entry point audit)
        // or retrieve app specific profile for the user
        MybankBaseForm myForm = (MybankBaseForm) form;
        // Set common MybankBaseForm variables using request &
        // session attributes for type-safe access in subclasses.
        // For e.g. myForm.setUserProfile(
        //     request.getAttribute("profile"));

        preprocess(mapping, myForm, request, response);
        ActionForward forward = process(mapping, myForm,
                                        request, response);
        postprocess(mapping, myForm, request, response);

        // More code to be added later.
        // Add centralized logging here (Exit point audit)
        return forward;
    }

    // The three abstract method declarations shown above go here.
}
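The execute() flow in Listing 4.5 is the classic Template Method pattern. A servlet-free sketch of the same idea follows; the class names are illustrative and plain Strings stand in for the Struts types:

```java
import java.util.ArrayList;
import java.util.List;

// The base class fixes the processing skeleton; subclasses fill the hooks.
abstract class BaseActionSketch {
    public final String execute(String request) throws Exception {
        preprocess(request);               // validation, token checks, ...
        String forward = process(request); // the real work
        postprocess(request);              // cleanup, set sync token, ...
        return forward;
    }
    protected abstract void preprocess(String request) throws Exception;
    protected abstract String process(String request) throws Exception;
    protected abstract void postprocess(String request) throws Exception;
}

// A hypothetical subclass that records the order in which the hooks run.
class ListAccountsAction extends BaseActionSketch {
    final List<String> trace = new ArrayList<>();
    protected void preprocess(String r)  { trace.add("pre"); }
    protected String process(String r)   { trace.add("process"); return "success"; }
    protected void postprocess(String r) { trace.add("post"); }
}
```

Because execute() is final in the base class, every subclass gets the audit and pre/post behavior for free and can only plug into the three hooks.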
4.8 Handling Duplicate Form Submissions
Duplicate form submissions can occur in many ways:

- Using the Refresh button
- Using the browser back button to traverse back and resubmit the form
- Using the browser history feature to re-submit the form
- Malicious submissions intended to adversely impact the server or for personal gain
- Clicking more than once on a transaction that takes longer than usual

Duplicate form submissions are acceptable in some cases. Such scenarios are called idempotent transitions. When multiple submissions of data are not critical enough to impact the behavior of the application, duplicate form submissions do not pose a threat. They can cause a lot of grief, however, if for instance you are buying from an online store and accidentally press refresh on the page where you are charged. If the storefront is smart enough, it will recognize the duplicate submission and handle it graciously without charging you twice.

Why is the form submitted again after all, when the refresh button is pressed? The answer lies in the URL seen in the URL bar of your browser after the form submission. Consider a form such as:

<form name="CustomerForm" action="/App1/submitCustomerForm.do">

The above form is submitted with the URL /App1/submitCustomerForm.do and the same URL is shown
in the URL bar. On the back end, Struts selects the action mapping associated with submitCustomerForm and executes the action instance. When you press refresh, the same URL is submitted and the same action instance is executed again. The easy solution to this problem is to use a HTTP redirect after the form submission. Suppose that the CustomerForm submission results in showing a page called Success.jsp. When HTTP redirect is used, the URL in the URL bar becomes /App1/Success.jsp instead of /App1/submitCustomerForm.do. When the page is refreshed, it is Success.jsp that is loaded again instead of /App1/submitCustomerForm.do. Hence the form is not submitted again. To use the HTTP redirect feature, the forward is set as follows:
<forward name="success" path="/Success.jsp" redirect="true" />
However there is one catch. With the above setting, the actual JSP name is shown in the URL. Whenever the JSP name appears in the URL bar, it is a candidate for ForwardAction. Hence change the above forward to be as follows:
<forward name="success" path="/GotoSuccess.do" redirect="true" />
where GotoSuccess.do is another action mapping using ForwardAction as follows:
<action path="/GotoSuccess"
        type="org.apache.struts.actions.ForwardAction"
        parameter="/Success.jsp"
        validate="false" />
You have now addressed the duplicate submission due to accidental refreshing by the customer. However, this does not prevent a user from intentionally going back in the browser history and submitting the form again. Malicious users might attempt this if the form submissions benefit them or adversely impact the server. Struts provides you with the next level of defense: the Synchronizer Token. To understand how the Synchronizer Token works, some background about built-in functionalities in the Action class is required. The Action class has a method called saveToken() whose logic is as follows:
HttpSession session = request.getSession();
String token = generateToken(request);
if (token != null) {
    session.setAttribute(Globals.TRANSACTION_TOKEN_KEY, token);
}
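The generateToken() call above builds the token from the session id and the current time using a MessageDigest. A rough, self-contained sketch of that idea is shown below; the exact algorithm inside Struts may differ, and the class and method names here are purely illustrative:

```java
import java.security.MessageDigest;

// Illustrative sketch only: hash the session id and the current time with
// MD5 and hex-encode the digest to get an unguessable token.
public class TokenSketch {
    public static String generateToken(String sessionId, long now) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        md.update(sessionId.getBytes());
        md.update(String.valueOf(now).getBytes());
        byte[] digest = md.digest();
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(Character.forDigit((b >> 4) & 0x0f, 16));
            sb.append(Character.forDigit(b & 0x0f, 16));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String token = generateToken("ABC123", System.currentTimeMillis());
        System.out.println(token.length()); // an MD5 digest yields 32 hex characters
    }
}
```

Because the time is part of the input, each rendering of the form gets a fresh token.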
Struts Survival Guide – Basics to Best Practices
The method generates a random token using the session id, the current time and a MessageDigest, and stores it in the session under the key org.apache.struts.action.TOKEN (this is the value of the static variable TRANSACTION_TOKEN_KEY in the org.apache.struts.Globals class). The Action class that renders the form invokes the saveToken() method to create a session attribute with the above name. In the JSP, you have to use the token as a hidden form field as follows:
<input type="hidden"
       name="<%= org.apache.struts.taglib.html.Constants.TOKEN_KEY %>"
       value="<bean:write name='<%= org.apache.struts.Globals.TRANSACTION_TOKEN_KEY %>'/>" />
The embedded <bean:write> tag shown above looks for a bean named org.apache.struts.action.TOKEN (which is the value of Globals.TRANSACTION_TOKEN_KEY) in session scope and renders its value as the value attribute of the hidden input variable. The name of the hidden input variable is org.apache.struts.taglib.html.TOKEN (this is nothing but TOKEN_KEY in the class org.apache.struts.taglib.html.Constants). When the client submits the form, the hidden field is also submitted. In the Action that handles the form submission (which most likely is different from the Action that rendered the form), the token in the form submission is compared with the token in the session by using the isTokenValid() method. The method compares the two tokens and returns true if both are the same. Be sure to pass true for the reset argument so that the token is removed from the session after the comparison; when the check fails, handle the duplicate submission in the manner acceptable to your users.

NOTE: We could also have chosen to have the synchronizer token as an ActionForm attribute. In that case, the <html:hidden> tag could have been used instead of the above <input type="hidden"> tag (which looks complicated at first sight). However we have not chosen to go down this path since protection from duplicate submission is not a characteristic of the form and it does not logically fit there very well.

Although the above approach is good, it requires you as an application developer to add the token checking method pair – saveToken() and isTokenValid() – in the methods rendering and submitting the sensitive forms respectively. Since the two tasks are generally performed by two different Actions, you have to identify the pairs and add them manually. In Chapter 10, we will look at an approach to declaratively turn on the synchronizer token.
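The save/check/reset cycle can be sketched with a plain Map standing in for the HttpSession. The class below is purely illustrative (it is not the Struts implementation), but it shows why a second submission of the same token fails the check:

```java
import java.util.Map;

// Illustrative sketch of the synchronizer token cycle. The method names
// mirror the Struts ones, but a Map stands in for the HttpSession.
public class TokenCycle {
    private static final String KEY = "org.apache.struts.action.TOKEN";

    // Called by the Action that renders the form.
    public static void saveToken(Map<String, Object> session, String token) {
        session.put(KEY, token);
    }

    // Called by the Action that handles the submission. With reset=true the
    // token is removed, so resubmitting the same token returns false.
    public static boolean isTokenValid(Map<String, Object> session,
                                       String submitted, boolean reset) {
        Object saved = session.get(KEY);
        boolean valid = saved != null && saved.equals(submitted);
        if (reset) {
            session.remove(KEY);
        }
        return valid;
    }
}
```

The first check of a freshly saved token succeeds; the second check of the same token fails because the reset removed it from the session.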
Chapter 4. All about Actions
You can use the same approach for sensitive hyperlink navigations. Just set the transaction attribute in <html:link> to true and use the same logic in the Action classes to track the duplicate hyperlink navigations.

The reset argument of isTokenValid() is useful for the multi-page form scenario. Consider a form that spans multiple pages. The form is submitted every time the user traverses from one page to another. You definitely want to validate the token on every page submission. However you also want to allow the user to traverse back and forth using the browser Back button until the point of final submission. If the token is reset on every page submission, the possibility of back and forth traversal using the browser button is ruled out. The solution is not disabling the Back button (using JavaScript hacks) but to handle the token intelligently. This is where the reset argument is useful. The token is initially set before showing the first page of the form. The reset argument is false for all the isTokenValid() invocations except in the Action for the last page. The last page uses a true value for the reset argument and hence the token is reset in the isTokenValid() method. From this point onwards you cannot use the Back button to traverse to the earlier form pages and successfully submit the form.
4.9 What goes into Action (and what doesn’t)
Don’t even think twice – Action classes should contain only presentation logic. If it is business logic, it does not belong here. What qualifies as presentation logic? The following do – analyzing request parameters and creating data transfer objects (for server side processing), invoking business logic (preferably through business delegates), creating view-models – the model JavaBeans for the JSPs, selecting the next view and converting exceptions into appropriate action errors. That’s probably it.

The common mistake while coding the Action is stuffing the execute() method with a lot of things that don’t belong there. By the time it is noticed, the execute() method has intermingled request handling and business logic beyond the point of separation without considerable effort. The separation is tough because, when there is no architectural separation, the HttpServletRequest and HttpSession attributes will be used all over the place and hence the code cannot be moved en masse to the server side to “extract a class”. The first resolution you have to make for a cleaner and better design is to avoid this temptation.

A preferred way of splitting the code in the Action’s execute() method (or rather MybankBaseAction’s process() method) is by layering. The functionality in the process() method can be divided into three distinctive steps.
1. User Action Identification
2. Transfer Object Assembly
3. Business Logic Invocation using Business Delegates

The postprocess() method is suitable for forwarding the user to the chosen view based on the output from the business tier. Let us start looking in detail at the above three steps in process().

User Action Identification: The first step in process() is to check what action the user performed. You don’t have to do this if DispatchAction or LookupDispatchAction is used. The framework itself calls the appropriate method.
if (user pressed save button) {
    //Do something
} else if (user pressed delete button) {
    //Do something else
}
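In plain Java the check typically boils down to looking for the submit button's name among the request parameters. The sketch below uses a Map in place of HttpServletRequest so that it is self-contained; the parameter names "save" and "delete" are illustrative:

```java
import java.util.Map;

// Illustrative sketch: in a real Action you would call
// request.getParameter("save") etc.; a Map stands in for the request here.
public class UserActionIdentifier {
    public static String identify(Map<String, String> params) {
        if (params.containsKey("save")) {
            return "save";        // user pressed the Save button
        } else if (params.containsKey("delete")) {
            return "delete";      // user pressed the Delete button
        }
        return "unknown";
    }
}
```

Only the button actually pressed appears among the submitted parameters, which is what makes this dispatch possible.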
Transfer Object Assembly: The next step is creating serializable data transfer objects (DTOs) that are independent of HttpServletRequest and HttpServletResponse (and the entire javax.servlet.http package). This involves copying the ActionForm attributes into regular serializable JavaBeans. The formal term used to describe this copying process is Transfer Object Assembly. The class that assembles the transfer object is called a Transfer Object Assembler. Every tier uses object assemblers when transferring objects across the tier boundary. In general, the object assemblers used to send data from the business tier to the presentation tier have some intelligence. However the object assemblers used to send data from the presentation tier to the business tier are straightforward. They are monotonous and dumb (it better be dumb; otherwise you are coding business logic here). You can take advantage of their straightforward nature and easily develop a framework using the Java Reflection API to perform the object assembly. The framework thus developed takes the ActionForm-to-DTO mapping information in an XML file and creates the DTOs. To make life a bit easier, you can offload some of the conversions to the BeanUtils class in Commons BeanUtils. This jar is packaged along with Struts. You can use the BeanUtils.copyProperties(dest, orig) method to copy the properties with the same names between the form bean and the DTO. It also does the required data type conversions in the process.

Business Logic Invocation: The DTOs thus created are transferred to the business tier as arguments while invoking the business logic methods. Consider how a Loan Session EJB containing the business logic for loan management is invoked using the standard Service Locator pattern. Service Locator is a core J2EE pattern that is used widely to locate the business service – in this case used to locate the EJB.
LoanMgmt loanmgmt = (LoanMgmt)
    ServiceLocator.getInstance().lookup("LoanMgmtEJB");
The above method call can throw RemoteException and CreateException. If the same business service is implemented using CORBA, a different exception might be thrown. At times you will certainly have a lethal combination of EJB and mainframe serving as the business tier. Whatever the case, you should isolate the web tier from these dependencies that are a direct result of the choice of implementation for the business logic tier. This is exactly where the Business Delegate comes in.
Figure 4.3 Business Delegate.
The Business Delegate is another core J2EE pattern and decouples the web tier from dependencies on the choice of business logic implementation. Typically the business delegate is a class with implementations for all the business methods. Figure 4.3 shows the Business Delegate class. The client invokes the methods on the business delegate. The delegate, true to its name, delegates the client calls to the actual implementation. It uses the ServiceLocator to look up the service, invokes methods on it and converts the implementation exceptions into application exceptions, thus reducing coupling.
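A minimal sketch of the idea follows. LoanMgmtDelegate, LoanMgmt and LoanAppException are hypothetical names, and the service is injected rather than looked up so the sketch is self-contained; in a real application the delegate would obtain the service through the ServiceLocator as described above:

```java
// Illustrative Business Delegate sketch: hypothetical names throughout.
public class LoanMgmtDelegate {
    // Stand-in for the remote business interface.
    public interface LoanMgmt {
        double getLoanBalance(String id) throws Exception;
    }

    // Application-level exception the web tier can safely depend on.
    public static class LoanAppException extends RuntimeException {
        public LoanAppException(String msg, Throwable cause) { super(msg, cause); }
    }

    private final LoanMgmt service;

    // In a real application the service would come from a ServiceLocator
    // (e.g. a JNDI lookup); it is injected here to keep the sketch runnable.
    public LoanMgmtDelegate(LoanMgmt service) { this.service = service; }

    public double getLoanBalance(String loanId) {
        try {
            return service.getLoanBalance(loanId);
        } catch (Exception e) {  // RemoteException, CreateException, ...
            // Translate implementation exceptions into an application exception.
            throw new LoanAppException("Could not fetch balance for " + loanId, e);
        }
    }
}
```

The web tier now compiles against LoanMgmtDelegate and LoanAppException only; swapping the EJB for a CORBA or mainframe implementation touches the delegate, not the Actions.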
4.10 When to use Action chaining (and when not to)
The process of forwarding to another action mapping from an action is called Action Chaining. Let’s say that the execute() method from an Action forwards to an ActionForward called pageB. Assume that the forward is as follows:
<forward name="pageB" path="/pageBAction.do" />
The forward itself points to another action mapping called pageBAction. Accordingly the Action instance associated with pageBAction is invoked. This can continue until an actual JSP is shown to the user.

There are scenarios where action chaining is a good idea. Consider the example used earlier in the chapter: a page shows a list of loans with the option to delete loans one at a time. After deletion, the same loan list is shown again. If the user is forwarded directly to the List JSP after deletion, then the task of creating the loan list is left to the JSP. That is a bad design. Action chaining saves the day here. In the Action for the delete, just forward to listLoan.do after a successful deletion. The Action corresponding to listLoan.do then creates the List of Loans to display. Using the action mapping of self as the input attribute is preferred over using a JSP name. This is a special case of action chaining and comes in handy when a lot of preprocessing is needed to show a page, irrespective of whether the page is shown for the first time in the normal way or because of validation errors.

Then there are scenarios where action chaining is a bad idea. If the chaining is used for linking several units of business logic one after the other, it is better to do this in the business tier. If this is one of your goals, then use a session EJB method as a façade to hide the execution of fine-grained business logic snippets as one unit instead of chaining actions. Use the Transfer Object Assembly from the last section to create a DTO from the form bean and pass it to the business tier. Also, avoid having more than two actions in the chain. If you are having more than two actions in the chain, chances are that you are trying to do business logic by chaining Actions. A strict no-no. Nada.
4.11 Actions for complex transitions
Perfectly reusable Actions are not a reality yet. Suppose that you have a common page accessed from two different pages, and what the page shows and where the page goes next depend on where you came from. You can never create totally reusable Actions and chain them in this scenario.

Wiring the handlers

If the web application you are designing is entirely of the format “where you came from drives what to do and where to go next”, then consider using a different approach. Split the current request handling and presenting the next page into two different handler classes. Write atomic pieces of “do”s as Commands for each. In a separate XML file, wire them up together as you would like. The Action class serves very little purpose here other than to figure out which handlers are wired together. In fact a single Action for the whole application suffices. All that this Action does is look up in the XML for the commands to be executed in a pipeline. Similarly if your web application provides personalization features, then you have to create atomic handlers and wire them together dynamically.

State aware Forms

Consider a scenario where you can enter a page from N different places and exit in N different ways. Figure 4.4 shows the scenario. There is a common page. It can be accessed from Page1 and Page2. After executing the common action in the common page, the user is forwarded to Page3 and Page4 respectively on success. On pressing Cancel, Page1 and Page2 are shown respectively.
Figure 4.4 Complex Page transition example.
An easy way to achieve this is to track in the session where the user came from and then act accordingly in the common action. This however makes the common action less reusable. If a lot of your pages behave in this manner, you should consider developing a framework to abstract the complexities of the transition. A simple approach is illustrated here. Start with an interface with two methods as shown below:
public interface StateAware {
    public String getPreviousState();
    public String getNextState();
}
The ActionForms involved in the complex transitions implement this interface. Consider that ActionForm1 is associated with Page1. The ActionForm1 implements the StateAware interface. The getPreviousState() returns the forward for Page1 and the getNextState() returns the forward for Page3. The Common Action now becomes really reusable.
public class CommonAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        StateAware sw = (StateAware) form;
        if (isCancelled(request)) {
            return mapping.findForward(sw.getPreviousState());
        }
        //do common action here
        //success
        return mapping.findForward(sw.getNextState());
    }
}
Refer to for more about Struts based workflow approach to solve similar problems. For multi page forms, use multiple Action classes; one Action per page submission. Multi-page forms generally have buttons with same names: Prev, Next etc. This will result in confusion when forwarding to appropriate page if a single Action class is used to handle buttons with the same names in multiple pages.
4.12 Managing struts-config.xml

Popular tools to manage struts-config.xml are described below.

Struts-GUI

Struts-GUI is a Visio stencil from Alien Factory (). It lets you visually edit the struts-config.xml as a Visio diagram and generate the xml file from it. One of the biggest challenges in maintaining the struts-config.xml is understanding the flow and tracking down what is going on. With Struts-GUI, you can visually trace the actions, their forwards and the web pages they lead to. You can even trace action chaining. You can add documentation in the Visio diagram, reducing the maintenance hurdle even further. Struts-GUI is a commercial tool.

Struts Console

Struts Console is a Swing based editor to manage the struts-config.xml from James Holmes (). It is not visually driven like Struts-GUI, but it is very intuitive. It has tree-like navigation to traverse the individual elements. It is much more flexible than Struts-GUI in that it can be used to maintain customized struts-config.xml files (more about Struts customization in Chapter 10). Wizards and drop downs are provided to add individual elements, thus eliminating the chances of typos.

XDoclet

XDoclet based management of struts-config.xml is an entirely different concept. The two tools cited earlier are based on maintaining the struts-config.xml, while in the XDoclet approach, there is no struts-config.xml at all! All the requisite information linked to the <form-bean> and <action> is specified in the Action and Form classes using special XDoclet tags as follows:
 * @struts.action name="custForm" path="/editCustomer"
 *                scope="request" input="mainpage"
 *                validate="false" parameter="action"
 * @struts.action-forward name="showCustForm"
 *                        path="/ShowCustomerForm.jsp"
The above tags generate the Struts action mapping as follows in the struts-config.xml at build time.
<action path="/editCustomer"
        type="mybank.app1.ShowCustomerAction"
        name="custForm"
        scope="request"
        input="mainpage"
        unknown="false"
        validate="false">
    <forward name="showCustForm"
             path="/ShowCustomerForm.jsp"
             redirect="false"/>
</action>
XDoclet is a project on SourceForge that started off with auto-generating the home interface, remote interface and ejb-jar.xml from special tags in the EJB implementation class. It was a good idea with EJBs since the EJB implementation class would drive everything else – home, remote and the ejb-jar.xml. With struts-config.xml, none of the individual classes drives the flow in its entirety. Everything always works in relation to another. You always want to track what is going on, how various pieces interact together and how the flow works. You always want to see which action takes you where and get the whole big picture as you develop. Hence the struts-config.xml serves much like whiteboarding – visualizing what’s going on in its entirety. Providing this information via a piecemeal approach in different files using XDoclet tags defeats the purpose. Hence our advice is not to use the XDoclet approach for auto-generating struts-config.xml.
4.13 Guidelines for Struts Application Development
Struts application development in enterprise applications requires discipline. We are not referring to any particular methodology; just some guidelines for Struts based application development for enterprise applications. In this section a step-by-step approach for the Struts application development cycle is provided.

1. First design your flow at a use case level on a whiteboard. A JAD session with business expert, page author and developer is recommended. JAD stands for Joint Application Development. Judging by its name, you might think that this technique only applies to developing software, but that’s not the case. The JAD technique can be applied to a wide variety of areas where consensus is needed.
2. Decide how many forms are involved in the flow, which comes when and so on. This will tell you which forms should be in request scope and which in session scope. (If possible, try to maintain as many forms as possible in request scope.)
3. Forms in a web application have aspects related to them – creating and populating a form and setting it in the right scope before display, and handling the submitted form. Decide when each of these would happen.
4. The JAD session will give the following inputs: the page author knows which form to create, using DynaActionForm (refer to Chapter 5 for more on DynaActionForm), and the navigation, using ForwardAction with <html:link>; the application developer knows what inputs are available for the business logic invocation methods (Session EJB methods).
5. The application developer designs the business logic for each page (not the Action class) and unit tests it, while the page author develops and formats the pages. Both tasks can occur in parallel.
6. The application developer creates the Form and Action using the DynaActionForm, updates the struts-config.xml and invokes the already tested business logic from the Action classes.
7. The page author and developer integrate the pieces and unit test them with StrutsTestCase ().
4.14 Summary
Make the best use of the built-in Actions. Review the systems you build and see how you can use ForwardAction to stick to MVC, and how to use DispatchAction and LookupDispatchAction to simplify things and perhaps even internationalize your application. Split your application into modules and create separate Struts config files. Smaller files are easier to comprehend and manage. Doing so will benefit you in the long run. Define a base Form and Action in your application. You will be glad you did. Handle duplicate form submissions using redirects and synchronizer tokens. Use a tool to manage the Struts config files and strictly follow the guidelines about what goes into the Action and what does not.

Author’s note: Struts ActionForms tend to get really huge in big projects. The data in Struts forms needs to be transferred to the middle tier and persisted to the database. This problem of copying (mapping) data from ActionForms to ValueObjects (which carry the data to the middle tier) has traditionally been handled by BeanUtils. When the structures of the Value Objects and Action Forms differ significantly, it is tough to use BeanUtils. The Object To Object Mapping (OTOM) framework () is designed to solve this problem. With OTOM, any Java object can be mapped to another via a GUI. Then, the mapping Java code can be generated from the GUI or an Ant task.
Chapter 5

Form Validation
In this chapter:
1. You will learn how to use Commons Validator with Struts via ValidatorForm and ValidatorActionForm
2. You will learn about DynaActionForm and DynaValidatorForm

Validation is a beast that needs to be addressed at various levels using different strategies for different complexity levels in the validation itself.

1. It is a commonly accepted principle that user input validation should happen as close to the presentation tier as possible. If there are only a bunch of HTML forms with trivial checks and the validation is limited to the UI, you can afford to implement the validations using JavaScript.
2. Getting too close with JavaScript is one extreme and is not pragmatic in everyday projects. On the other hand, postponing the validation until it manifests as a business logic exception, runtime exception or a database exception is unacceptable too. Another option is to programmatically validate the HTML form data using helper classes in the web tier or code the validation right in the validate() method of the ActionForm itself – which is what we did in Chapter 3.
3. The third option is to externalize the validation into an XML file that conforms to the Commons Validator syntax and integrate it into Struts. This approach works very well for trivial checks, which is the case in approximately 50% of the projects. Examples of trivial checks are: Null check – checking if a field is null; Number check – checking if the field value is numeric; Range check – checking if a numeric value lies within a range. These validations depend just on the fields being validated and nothing else.

Validator is a part of the Jakarta Commons project and depends on the following Commons projects – BeanUtils, Logging, Collections, Digester – and also on the Jakarta ORO library. All of these are shipped with Struts 1.1. You can find them in the lib directory of the Struts distribution.
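The three trivial checks named above (null, number and range) can be expressed as plain Java predicates. The class below is only an illustration of what such checks do, not part of Commons Validator:

```java
// Illustrative predicates for the three "trivial checks" described above.
public class TrivialChecks {
    // Null check: the field must be present and non-blank.
    public static boolean isRequired(String v) {
        return v != null && v.trim().length() > 0;
    }

    // Number check: the field value must parse as a number.
    public static boolean isNumber(String v) {
        try {
            Double.parseDouble(v);
            return true;
        } catch (Exception e) {   // null or non-numeric input
            return false;
        }
    }

    // Range check: the numeric value must lie within [min, max].
    public static boolean isInRange(double v, double min, double max) {
        return v >= min && v <= max;
    }
}
```

Each predicate depends only on the field being validated, which is exactly what makes these checks good candidates for the declarative XML approach.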
Struts is bundled with Commons Validator 1.0. Commons Validator 1.1.1 has support for validating interdependent fields. It can be downloaded from the Jakarta Commons website and used instead of the validator bundled with Struts.
5.1 Using Commons Validator with Struts
The interoperation of Commons Validator and Struts is like a jigsaw puzzle with several pieces. It is not possible to explain one piece in its entirety and move on to the next since they are all interconnected. Hence our approach is to explain part of the puzzle before moving on to the next, and then join the several half-baked pieces together. You might have to read through this section twice to get a clear picture.

The twin XML files

In Struts, the XML based validations are located in two files – validation-rules.xml and validation.xml. The validation-rules.xml file contains the global set of rules that are ready to use (henceforth referred to as the global rules file). It is shipped along with the Struts distribution in the lib directory. The second file – validation.xml – is application specific. It associates the rules from the global rules file with individual fields of your ActionForm. Suppose there is a generic rule named required in the global rules file that checks if a field is empty. You can use this rule to check if any field is empty, including the firstName field in the CustomerForm, by adding the following declaration in validation.xml:
<form name="CustomerForm">
    <field property="firstName"
        ..
        ..
    </field>
    <field
        ..
        ..
        ..
    </field>
    ..
</form>
The above XML contains a <form> element, which stands for an ActionForm named CustomerForm. All the rule associations for the CustomerForm fields exist inside this <form> block. One such validation – the validation for the firstName field – is also shown in a <field> element. The <field> has an attribute named depends that lists the set of rules (comma separated) on which the field depends. In other words, the validation.xml is just an association of the actual rules with the application specific forms. The actual rules are defined in validation-rules.xml.

validation-rules.xml – The global rules file

For a while, let us go back to validation-rules.xml – the global rules file where all the rules are actually defined. Listing 5.1 shows a sample file. Each <validator> element defines one validation rule. The listing shows the required rule validator. The required rule validator uses a class called org.apache.struts.validator.FieldChecks. Where did this come from? Well, that requires some background too.
Listing 5.1 Required rule in validation-rules.xml
<form-validation>
    <global>
        <validator name="required"
                   classname="org.apache.struts.validator.FieldChecks"
                   method="validateRequired"
                   methodParams="java.lang.Object,
                                 org.apache.commons.validator.ValidatorAction,
                                 org.apache.commons.validator.Field,
                                 org.apache.struts.action.ActionErrors,
                                 javax.servlet.http.HttpServletRequest"
                   msg="errors.required">
        </validator>
        <validator name="…">
            …
        </validator>
        <!-- More validators defined here -->
    </global>
</form-validation>
The basic validator class in Commons Validator is org.apache.commons.validator.GenericValidator. It contains atomic and fine-grained validation routines such as isBlankOrNull(), isFloat(), isInRange() etc. Struts provides the FieldChecks class that uses the GenericValidator but has coarse grained methods such as validateRequired(), validateDate(), validateCreditCard() etc.
Each of these methods accepts arguments of type java.lang.Object, org.apache.commons.validator.ValidatorAction, org.apache.commons.validator.Field, org.apache.struts.action.ActionErrors and javax.servlet.http.HttpServletRequest, in that order. Notice that the same arguments are listed under the methodParams attribute in Listing 5.1.

NOTE: The FieldChecks couples the ActionForm validations to the Struts framework by adding dependency on Struts specific classes in the XML, but makes it easy to use the Commons Validator with Struts.

With this background info, the required validator in Listing 5.1 translates into plain English as: “The rule named required is defined in a method validateRequired within a class named FieldChecks that accepts the above listed arguments in that order. On error, an error message identified by the key errors.required is displayed. The errors.required is a key into the Resource Bundle.” Quite a mouthful indeed! The next step is to add the message for errors.required to the Resource Bundle. The key-value pair added is: errors.required={0} is required. By default, the rules in the global rules file use the following keys for the error messages – errors.required, errors.minlength, errors.maxlength, errors.date and so on. To use different error keys, make appropriate changes in validation-rules.xml.

A rule in the global rules file can itself depend on another rule. For example, consider the minlength rule. It checks that a field’s length is not less than a specified minimum. However it doesn’t make sense to check the length of an empty field. In other words, the minlength rule depends on the required rule. If the required rule fails, the minlength rule is not executed. This depends relationship among rules is shown below.
<validator name="minlength"
           classname="org.apache.struts.validator.FieldChecks"
           method="validateMinLength"
           depends="required"
           msg="errors.minlength">
</validator>
validation.xml – The application specific rules file

Now, let us get back to validation.xml. A sample is shown in Listing 5.2. The XML consists of a <formset> block with multiple <form> blocks, one for each form.
Listing 5.2 Application specific validations for CustomerForm
<form-validation>
    <formset>
        <form name="CustomerForm">
            <field property="firstName"
                   depends="required,minlength">
                <arg0 key="customerform.firstname"/>
                <arg1 name="len" key="1" resource="false"/>
            </field>
        </form>
    </formset>
</form-validation>
In Listing 5.2, the topmost XML block is <form-validation>. It contains a single <formset> element, which in turn can contain a collection of <form>s. Each <form> corresponds to a Struts form. The <form> contains a set of <field>s to be validated. The firstName field depends on two rules – required and minlength. The required and minlength rules are defined in validation-rules.xml. Then come arg0 and arg1. The <field> element accepts up to four args – arg0, arg1, arg2 and arg3. These argNs are the keys for replacement values in ActionError. Sure sounds confusing. Here is an example to make things clear. Assume that the required rule has failed. An ActionError with key errors.required needs to be created. The error message for this key is defined in the resource bundle as “{0} is required”. This message needs a literal value to replace {0}. That replacement value itself is obtained by first looking up the resource bundle with the key attribute of the <arg0> element. In Listing 5.2, the key attribute of <arg0> is customerform.firstname. The key is used to look up the resource bundle and obtain the replacement value. Suppose that the resource bundle defines these messages.
customerform.firstname=First Name
errors.required={0} is required
Then, the replacement value for {0} is First Name. This value is used to replace {0} and the resulting error message is First Name is required. Notice that the resource bundle is looked up twice – once using the arg0 key and then during the rendering of the ActionError itself. You might be wondering why arg1 is needed. The answer is that when the minlength rule fails, it looks for an error message with a predefined key called errors.minlength. The errors.minlength requires two replacement values – arg0 and arg1. arg0 was also used by the errors.required key. The errors.minlength needs arg1 in addition to arg0. I can hear you saying – “All that is fine. But how will I know what predefined error keys should be added to the resource bundle?” It is simple actually. Just open validation-rules.xml and you will find all the error message keys provided – errors.required, errors.minlength, errors.maxlength, errors.range, errors.date and so on.
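The two-step lookup just described can be sketched with a Map standing in for the resource bundle and java.text.MessageFormat doing the {0} substitution (the Struts message machinery uses the same pattern syntax, though the class below is illustrative only):

```java
import java.text.MessageFormat;
import java.util.Map;

// Illustrative sketch of the two-step message lookup described above.
public class ErrorMessageDemo {
    public static String render(Map<String, String> bundle,
                                String errorKey, String arg0Key) {
        // Step 1: resolve the arg0 key to its replacement value.
        String arg0 = bundle.get(arg0Key);
        // Step 2: resolve the error key and substitute {0}.
        return MessageFormat.format(bundle.get(errorKey), arg0);
    }

    public static void main(String[] args) {
        Map<String, String> bundle = Map.of(
            "customerform.firstname", "First Name",
            "errors.required", "{0} is required");
        System.out.println(
            render(bundle, "errors.required", "customerform.firstname"));
        // prints: First Name is required
    }
}
```

This makes the "looked up twice" observation concrete: once for the arg0 replacement value and once for the error message pattern itself.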
As you can see, every error message key needs arg0. The errors.minlength, errors.maxlength and errors.range keys need arg1. In addition, errors.range also needs arg2. In Listing 5.2, arg1 has an attribute called resource, set to false. resource="false" implies that there is no need to look up the message resource bundle for arg1 (as was done with the arg0 key – customerform.firstname); the key is used literally.

More validation.xml features

Let us investigate some more interesting validator features. Listing 5.3 shows the same CustomerForm validation rules with some additions and modifications, highlighted in bold. The first addition is the <global> block in <form-validation>. The <global> element can hold as many <constant>s as needed. A <constant> is much like a Java constant: declare it once and use it wherever needed. In this case, a constant called nameMask is declared and a regular expression ^[A-Za-z]*$ is assigned to it. This regular expression is interpreted as: “The field can have any number of characters as long as each of them is between A-Z or a-z”. This constant is used to define the mask rule for CustomerForm in two steps, as follows:

1. First, a variable <var> called mask is created and the value of nameMask is assigned to it. This is done by setting the <var-value> to ${nameMask}. [Any variable within the ${ and } delimiters is evaluated. You will find the same convention in JSTL too.] The <var> scope is limited to
Chapter 5. Form Validation
the <field> where it is declared.
2. Next, a rule called mask is added to the CustomerForm’s depends attribute. The mask rule is defined in validation-rules.xml. It checks whether the current field value conforms to the regular expression in a predefined variable called mask. (This is the reason why we created a variable called mask in the firstName <field> and assigned it the nameMask value. Doing so lets us reuse the nameMask expression for all the forms in validation.xml if necessary, and at the same time satisfy the constraint imposed by the mask rule that the regular expression is always available in a <var> called mask.)
Listing 5.3 Application specific validations for CustomerForm
<form-validation>
  <global>
    <constant>
      <constant-name>nameMask</constant-name>
      <constant-value>^[A-Za-z]*$</constant-value>
    </constant>
  </global>
  <formset>
    <form name="CustomerForm">
      <field property="firstName" depends="required,minlength,mask">
        <arg0 key="customerform.firstname"/>
        <arg1 name="minlength" key="${var:minlen}" resource="false"/>
        <var>
          <var-name>minlen</var-name>
          <var-value>1</var-value>
        </var>
        <var>
          <var-name>mask</var-name>
          <var-value>${nameMask}</var-value>
        </var>
      </field>
    </form>
  </formset>
</form-validation>
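The nameMask expression used in Listing 5.3 can be exercised directly with java.util.regex to see exactly which values the mask rule would accept – a standalone sketch, independent of the Commons Validator:

```java
import java.util.regex.Pattern;

// Demonstrates the semantics of the nameMask constant: ^[A-Za-z]*$
public class MaskDemo {
    static final Pattern NAME_MASK = Pattern.compile("^[A-Za-z]*$");

    public static boolean matches(String value) {
        return NAME_MASK.matcher(value).matches();
    }

    public static void main(String[] args) {
        System.out.println(matches("John"));   // letters only: accepted
        System.out.println(matches("John9"));  // digit: rejected
        System.out.println(matches(""));       // empty string also matches the * quantifier
    }
}
```

Note that the * quantifier accepts the empty string, which is exactly why the field in Listing 5.3 also depends on required and minlength.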
The second new feature in Listing 5.3 is the use of a variable for arg1. arg1, as you know, represents the minimum length of the first name. In Listing 5.2, the arg1 key was hard coded. A bit of flexibility is added this time around by declaring it as a field-scoped variable and then accessing it through the ${..} syntax.

Using the ValidatorForm

There is one last piece pending in the puzzle. How does a validation failure become an ActionError and get displayed to the user? We will answer that right away. Struts has a class called ValidatorForm in the org.apache.struts.validator package. It is a subclass of ActionForm and implements the validate() method. The validate() method invokes the Commons Validator, executes the rules using the two XML files and generates ActionErrors using the Message Resources defined in struts-config.xml. All you have to do is extend your form from ValidatorForm and write your rules in XML. The framework does the rest. More details on the Validator are covered later in this chapter. For now, let us see how the Validator is configured.

Configuring the Validator

Starting from 1.1, Struts provides a facility to integrate third party utilities seamlessly through what is called a PlugIn. A PlugIn is simply a configuration wrapper for a module-specific resource or service that needs to be notified about application startup and shutdown events (through the methods init() and destroy()). A PlugIn is a class that implements the org.apache.struts.action.PlugIn interface. This interface defines two methods:
public void init(ActionServlet servlet, ModuleConfig config)
public void destroy()
You can implement logic to initialize and destroy custom objects in these methods respectively. PlugIns are configured in the struts-config.xml file, without the need to subclass ActionServlet simply to perform application lifecycle activities. For instance, the following XML snippet (from struts-config.xml) configures the Validator PlugIn:
<plug-in className="org.apache.struts.validator.ValidatorPlugIn">
  <set-property property="pathnames"
                value="/WEB-INF/validation-rules.xml,/WEB-INF/validation.xml"/>
</plug-in>
The ValidatorPlugIn is a class that implements the PlugIn interface. It has a property called pathnames. The two input rule XML file names are specified using this property. As you know already, Struts reads the struts-config.xml file during initialization – during which it also reads the Validator PlugIn configuration and initializes it accordingly. Consequently the rules are loaded and available to the ValidatorForm class when the time comes to execute the validate() method.

Steps to use Commons Validator in Struts

Now, let us summarize the steps involved in using Commons Validator with Struts. They are:

1. Create the application-specific ActionForm by extending ValidatorForm.
2. Add the corresponding <form> element with a <field> sub-element for every form field that needs validation.
3. List the rules to execute in the <field>’s depends attribute.
4. For every rule, add the error message with the predefined name to the message bundle.
5. For every rule, supply the argNs either as inline keys or as keys to the resource bundle.
6. If the rules in validation-rules.xml do not meet your needs, add new rules and follow the steps above for the new rules. Be sure the classes executing the rules are available in the appropriate class path.
5.2 DynaActionForm – The Dynamic ActionForm
In Struts 1.0, you had to write an ActionForm class in Java even for the simplest of forms. Starting with Struts 1.1, you can instead declare the form and its properties entirely in the struts-config.xml file using DynaActionForm, as shown in Listing 5.4.
Listing 5.4 Sample DynaActionForm
<form-bean name="CustomerForm"
           type="org.apache.struts.action.DynaActionForm">
  <form-property name="firstName" type="java.lang.String"/>
  <form-property name="lastName" type="java.lang.String"/>
</form-bean>
There are two major differences between a regular ActionForm and a DynaActionForm.
1. For a DynaActionForm, the type attribute of the form-bean is always org.apache.struts.action.DynaActionForm.
2. A regular ActionForm is developed in Java and declared in the struts-config.xml file, whereas a DynaActionForm is declared entirely in the struts-config.xml file along with its JavaBeans properties.

In Listing 5.4, CustomerForm is declared as a DynaActionForm with two JavaBeans properties – firstName and lastName. The type attribute of the <form-property> is the fully qualified Java class name for that JavaBeans property; it cannot be a primitive. For instance, int is not allowed. Instead you should use java.lang.Integer. You can also initialize the form-property, so that the HTML form shows up with an initial value.
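Under the hood, a DynaActionForm stores its properties in a map keyed by the names declared in struts-config.xml. A minimal self-contained sketch (not the real Struts class) conveys the idea:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for org.apache.struts.action.DynaActionForm:
// properties live in a map instead of compile-time checked fields.
public class DynaFormSketch {
    private final Map<String, Object> values = new HashMap<>();

    public void set(String name, Object value) {
        values.put(name, value);
    }

    public Object get(String name) {
        if (!values.containsKey(name)) {
            throw new IllegalArgumentException("No such property: " + name);
        }
        return values.get(name);
    }

    public static void main(String[] args) {
        DynaFormSketch form = new DynaFormSketch();
        form.set("firstName", "John");
        // A typo in the property name surfaces only at runtime -
        // one of the trade-offs of dynamic forms.
        System.out.println((String) form.get("firstName"));
    }
}
```

The map-style get()/set() access is exactly what you see in the CustomerAction of Listing 5.5.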
Listing 5.5 CustomerAction using DynaActionForm

public class CustomerAction extends Action {
  public ActionForward execute(ActionMapping mapping, ActionForm form,
      HttpServletRequest request, HttpServletResponse response)
      throws Exception {
    DynaActionForm custForm = (DynaActionForm) form;
    String firstName = (String) custForm.get("firstName");
    String lastName = (String) custForm.get("lastName");
    System.out.println("Customer First name is " + firstName);
    System.out.println("Customer Last name is " + lastName);
    ActionForward forward = mapping.findForward("success");
    return forward;
  }
}
How about an example of using DynaActionForm? Remember the Hello World application from Chapter 3? Well, now let us rewrite that example using DynaActionForm. You will be surprised how easy it is. The first step obviously is to develop the DynaActionForm itself. Listing 5.4 is the DynaActionForm version of the CustomerForm from Chapter 3. The <form-property> tags are the equivalent of the JavaBeans properties of the ActionForm.

What about the validate() method? In Chapter 3, you were able to code the validate() method since you had the CustomerForm as a Java class. With DynaActionForm, unfortunately this is not possible. Don’t be disappointed. You can use the DynaValidatorForm (a subclass of DynaActionForm) in concert with the Validator PlugIn. We will cover this topic in the next section.

Rearranging the execute() method in CustomerAction is the second and final step in the ActionForm to DynaActionForm conversion. Listing 5.5 shows the CustomerAction. Compare this with the CustomerAction in Chapter 3 (Listing 3.5). Instead of using compile-time checked getters and setters, the JavaBeans properties in DynaActionForm are accessed just like a HashMap.

One thing is obvious: the DynaActionForm is quick and easy. It is very convenient for rapid prototyping. Imagine a Struts 1.0 world where an ActionForm was absolutely needed even to prototype an application. Since the application being developed is constantly changing, the developer does local builds on his machine. Similarly, the page author would certainly like to add or remove fields from the prototype during the page design. Since the HTML forms map to ActionForms, the above scenario implies one of two things:

1. The page author constantly pesters the Java application developer to modify the ActionForm.
2. The page author modifies the ActionForm Java class himself and rebuilds the application.

With DynaActionForm, a page author can be isolated from the Java application development by having an application server environment available for page design. He develops the pages against DynaActionForms declared in struts-config.xml and uses <html:link> to prototype navigation instead of <html:submit>.

There are more downsides of using DynaActionForms:

1. The DynaActionForm bloats up the Struts config file with XML-based definitions. This gets annoying as the Struts config file grows larger.
4. ActionForms were designed to act as a firewall between HTTP and the Action classes, i.e. to isolate and encapsulate the HTTP request parameters from direct use in Actions. With DynaActionForm, the property access is no different than using request.getParameter("..").
5. DynaActionForm construction at runtime requires a lot of Java reflection machinery that can be expensive.
6. The time savings from DynaActionForm are insignificant (it takes less time to generate getters and setters in the IDE than fixing your Action code and redeploying your web application).

That said, DynaActionForms have an important role to play in the project lifecycle as described earlier; let us limit them to just that. Use them with caution, only when you absolutely need them.

DynaValidatorForm

An application-specific form can take advantage of XML-based validation by virtue of subclassing the ValidatorForm. The XML-based dynamic forms can also avail this feature by specifying the type of the form to be DynaValidatorForm as follows:
<form-bean name="CustomerForm"
           type="org.apache.struts.validator.DynaValidatorForm">
  <form-property name="firstName" type="java.lang.String"/>
  <form-property name="lastName" type="java.lang.String"/>
</form-bean>
DynaValidatorForm is actually a subclass of DynaActionForm. It implements the validate() method much like ValidatorForm does and invokes the Commons Validator. DynaValidatorForm thus brings the capability of writing XML-based validation rules to dynamic forms too.
5.3 Validating multi-page forms
When a large amount of data is collected from the user, it is customary to split the form into multiple pages that follow a wizard-like fashion. However, the ActionForm still exists as a single Java class. Moreover, at any point, the data validation should be limited to only those pages that have been submitted. Fortunately, this feature is already built into the Validator; however, it requires some setup on your side. There are two alternatives – the first uses a single action mapping and the second uses multiple action mappings. The struts-validator.war provided with the Struts distribution adopts the first approach, while we recommend the latter. Both approaches require the use of an optional hidden variable called page.

Consider an HTML form split into two JSPs – PageA.jsp and PageB.jsp. Since both JSPs have the hidden variable mentioned earlier, it is sent as a request parameter from both form submissions. The hidden variable is assigned the value of 1 in PageA and 2 in PageB. The ValidatorForm already has a JavaBeans property named page of type int. All validation for any field on a page less than
or equal to the current page is performed on the server side. This will of course require that each <field> defined in validation.xml has a page attribute, as follows:
<form name="CustomerForm">
  <field property="firstName" page="1" depends="required">
    <arg0 key="customerform.firstname"/>
  </field>
  <field property="fieldX" page="2" depends="required">
    <arg0 key="customerform.fieldX"/>
  </field>
</form>
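The page-scoping behavior just described – run a field's rules only when its page attribute is less than or equal to the page submitted so far – can be sketched independently of the Validator:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Validator's page-scoping: a field is validated only if
// its page attribute is <= the page just submitted.
public class PageScopeDemo {
    static class Field {
        final String property;
        final int page;
        Field(String property, int page) {
            this.property = property;
            this.page = page;
        }
    }

    public static List<String> fieldsToValidate(List<Field> fields, int currentPage) {
        List<String> result = new ArrayList<>();
        for (Field f : fields) {
            if (f.page <= currentPage) {
                result.add(f.property);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Field> fields = new ArrayList<>();
        fields.add(new Field("firstName", 1));
        fields.add(new Field("fieldX", 2));
        // After PageA (page=1) only firstName is validated.
        System.out.println(fieldsToValidate(fields, 1));
        // After PageB (page=2) both fields are validated.
        System.out.println(fieldsToValidate(fields, 2));
    }
}
```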
With this background, we will first explain the single action mapping approach. The HTML forms in both pages have the same action – <html:form action="/submitForm">. In the Struts config file, set validate="false" for the /submitForm action mapping and add forwards for each of the pages as follows:
<action path="/submitForm"
        type="mybank.example.CustomerAction"
        name="CustomerForm"
        scope="request"
        validate="false">
  <forward name="success" path="/Success.jsp"/>
  <forward name="cancel" path="/Cancel.jsp"/>
  <forward name="input1" path="/PageA.jsp"/>
  <forward name="input2" path="/PageB.jsp"/>
</action>
Since validate is set to false, the execute() method in the Action gets control immediately after the RequestProcessor populates the form. You now have to explicitly call form.validate() in the execute() method (since CustomerForm extends ValidatorForm, validate() is already implemented). After that you have to forward to the appropriate page depending on the current page and whether there are ActionErrors for it. For instance, if PageA is submitted and there are no ActionErrors, then PageB is displayed to the user. However, if there were ActionErrors in PageA, it is displayed back to the user. The code is shown below.
public ActionForward execute(.. ..) throws Exception {
  CustomerForm info = (CustomerForm) form;
  // Was this transaction cancelled?
  if (isCancelled(request)) {
    // Add code here to remove Form Bean from appropriate scope
    return (mapping.findForward("cancel"));
  }
  ActionErrors errors = info.validate(mapping, request);
  if (errors != null && errors.isEmpty()) {
    if (info.getPage() == 1)
      return mapping.findForward("input2");
    if (info.getPage() == 2) {
      // Data collection completed. Invoke Business Logic here
      return mapping.findForward("success");
    }
  } else {
    saveErrors(request, errors);
    return mapping.findForward("input" + info.getPage());
  }
}
This approach is counter-intuitive. After all, the validate() method was supposed to be invoked automatically by the framework, not manually in the execute() method. The second approach eliminates the need to manually invoke validate(). In this approach, the two forms in the two pages have different actions, as follows:

Page A form submission – <html:form action="/submitPageA">
Page B form submission – <html:form action="/submitPageB">

Two action mappings are added to the Struts config file for the above form submissions. Note that both action mappings use the same Action class. Moreover, there is no need to set validate=false. The action mapping for the PageA form submission is as follows:
<action path="/submitPageA"
        type="mybank.example.CustomerAction"
        name="CustomerForm"
        scope="request"
        validate="true"
        input="/PageA.jsp">
  <forward name="success" path="/PageB.jsp"/>
  <forward name="cancel" path="/Cancel.jsp"/>
</action>
Similarly, the action mapping for PageB form submission is as follows:
<action path="/submitPageB"
        type="mybank.example.CustomerAction"
        name="CustomerForm"
        scope="request"
        validate="true"
        input="/PageB.jsp">
  <forward name="success" path="/Success.jsp"/>
  <forward name="cancel" path="/Cancel.jsp"/>
</action>
Both action mappings define an input value. When the form is validated by the RequestProcessor and there are errors, the mapping.getInput() page is shown to the user. Similarly, the mapping.findForward("success") page is shown when there are no ActionErrors. Any business logic invocation happens only after the PageB data is collected. The code below shows the execute() method.
public ActionForward execute(.. ..) throws Exception {
  CustomerForm info = (CustomerForm) form;
  // Was this transaction cancelled?
  if (isCancelled(request)) {
    // Add code here to remove Form Bean from appropriate scope
    return (mapping.findForward("cancel"));
  }
  if (info.getPage() == 2) {
    // Data collection completed. Invoke Business Logic here
  }
  return mapping.findForward("success");
}
With the second approach, the execute() method in the Action is simplified. While you may not see much difference between the two execute() methods shown earlier, the difference becomes much more pronounced as the number of pages increases – and the last thing you want is page navigation logic intermingled with business logic invocation.
5.4 Validating form hierarchy
There are still two more validation-related Form classes – ValidatorActionForm and DynaValidatorActionForm. A class diagram will resolve some of the confusion arising out of this plethora of Form classes. Figure 5.1 shows the relationship between these classes. ActionForm and DynaActionForm reside at the top of the figure as the root classes of two branches. ValidatorForm and DynaValidatorForm are their immediate subclasses. Each of them in turn has a subclass – ValidatorActionForm and DynaValidatorActionForm respectively. The last two classes deserve some explanation. Suppose that you have a form and want to reuse it in various scenarios, each with its own validation. With the XML-based validation, however, a set of rules is associated with the form name, not with where it is invoked from. Both ValidatorActionForm and DynaValidatorActionForm match on the action mapping instead of the form name: the name attribute in validation.xml is matched against the action mapping, and thus multiple sets of rules can be defined for the same form based on the action mapping.
Figure 5.1 Relationship hierarchy among Validating Forms.
5.5 Summary
In this chapter, you learnt about using the Commons Validator with Struts – this is probably the approach you will adopt in your project too. You also understood the importance of DynaActionForm and its role in projects, and you learnt the best approach to handle validation in multi-page forms.
Chapter 6

Struts Tag Libraries
In this chapter:
1. You will learn about frequently used Html, Bean and Logic tags
2. We will customize these Html tags – base, text, checkbox, errors & image
3. You will learn about JSTL and the Expression Language
4. You will understand how to use Struts-EL tags and which of the Struts tags should be replaced with JSTL and Struts-EL
5. You will see how various Struts tags, their derivatives and other related tags can work together to create multi-page lists and editable lists.

Custom tags were introduced in the JSP 1.1 specification. They are an elegant replacement for scriptlets. Without custom tags, the “edge of the system” – where presentation logic decisions are made based on middle tier models – would be exposed to the JSP page author as Java scriptlets. Besides causing confusion and headaches for the page author, scriptlets also required the involvement of the Java developer in page authoring. Custom tags changed all that. The application developer now provides the custom tags written as Java classes with a pre-defined structure and hierarchy. The page author independently designs the pages and decides on the contents using the custom tags, and on their formatting using general HTML and CSS.

Struts ships with these tag libraries – Html, Bean, Logic, Template, Nested and Tiles. We will deal with the first three tag libraries in this chapter. The TLD file for each of these libraries is included in the Struts distribution. For instance, the Html tags are defined in struts-html.tld, the Bean tags are defined in struts-bean.tld and so on. These tags are like any other custom tags. You have to include the TLD declarations in the web.xml and also in the JSP. For example, you have to add the following lines to the web.xml to use the Html tag library:
<taglib> <taglib-uri>/WEB-INF/struts-html.tld</taglib-uri> <taglib-location>/WEB-INF/struts-html.tld</taglib-location> </taglib>
and the following line in the JSP:
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
Excellent documentation is available with the Struts distribution for each of the custom tags and their attributes. Merely repeating each of those attributes and tags here would add little value. Instead we will gloss over the categories and characteristics of the Struts tags and, more importantly, cover tags that need to be customized for use in serious applications. Here is how to access the tag documentation in the Struts distribution: deploy the struts-documentation.war from the Struts webapps in Tomcat and access it in the browser. Click on the link named “Learning” on the left hand side, then click on “User and Developer Guides” on the resulting page. The page that you see at this point is loaded with information and links, including the direct links to the Struts HTML and Bean tag documentation.
6.1 Struts HTML Tags
Struts HTML tags are useful for generating HTML markup. The Struts HTML tag library defines tags for generating HTML forms, textboxes, check boxes, drop downs, radio buttons, submit buttons and so on. You have already used some of these in Chapter 3. We will look at other important HTML tags not covered there.

Modifying the Base Tag

This tag renders the <base href="…"> HTML tag pointing to the absolute location of the JSP containing the tag, as follows:

<base href="http://localhost:8080/App1/customer/CustomerDetails.jsp"/>

This can be problematic at times. Assume that the JSP itself is present somewhere down in a hierarchy of directories. The images directory, on the other hand, is generally at the top level in a web application (see the WAR structure in Figure 3.3). Since the base href refers to the absolute location of the JSP, the URL for the images might look like “../../images/banner.jpg”. Four reasons why this is not a good idea:

1. Referring to the same image with different URLs depending on which JSP it is called from is error prone and creates a lot of confusion.
2. If the JSP is moved from one folder to another (which is not uncommon),
every URL in the page should be inspected and changed if needed. Not a great idea.
3. Even though the Servlet specification encourages the idea of bundling the images, JavaScript and other static resources along with the WAR, it is not a good idea in practice. It is the norm to deploy the static resources separately so that the web server serves these documents instead of the servlet container.
4. When using frameworks such as Tiles (Chapter 7), there is no concept of a single JSP. There is a single layout that aggregates the JSPs.

The solution is to modify the Base Tag itself so that the output is:
<base href="http://localhost:8080/App1/" />
Listing 6.1 MybankBaseTag – Customized BaseTag

public class MybankBaseTag extends BaseTag {
  public int doStartTag() throws JspException {
    HttpServletRequest request =
        (HttpServletRequest) pageContext.getRequest();
    String baseTag = renderBaseElement(
        request.getScheme(), request.getServerName(),
        request.getServerPort(), request.getContextPath());
    JspWriter out = pageContext.getOut();
    try {
      out.write(baseTag);
    } catch (IOException e) {
      pageContext.setAttribute(Globals.EXCEPTION_KEY, e,
          PageContext.REQUEST_SCOPE);
      throw new JspException(e.toString());
    }
    return EVAL_BODY_INCLUDE;
  }
  ..
}
Now, the URL of the image is always a constant no matter which JSP it is used in. Another advantage of this arrangement is that a directory named App1 can be created on the web server to contain the static resources and images, with no impact on the image URLs. With this background let us get started on modifying the BaseTag. Consider the URL http://localhost:8080/App1/customer/CustomerDetails.jsp, generated as the output of the BaseTag. It can be dissected into:
request.getScheme() (http), request.getServerName() (localhost), request.getServerPort() (8080) and request.getRequestURI() (/App1/customer/CustomerDetails.jsp).

The desired output for the BaseTag is http://localhost:8080/App1/. This can be dissected into request.getScheme() (http), request.getServerName() (localhost), request.getServerPort() (8080) and request.getContextPath() (/App1).
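The string assembly performed by the customized tag boils down to concatenating these four pieces. A plain-Java sketch (the method name and layout are illustrative, not the exact Struts implementation; a fuller version might omit the default ports 80 and 443):

```java
// Assembles a base href from the four request components, mirroring
// what the customized BaseTag emits.
public class BaseHrefDemo {
    public static String renderBase(String scheme, String serverName,
                                    int serverPort, String contextPath) {
        return "<base href=\"" + scheme + "://" + serverName + ":"
                + serverPort + contextPath + "/\" />";
    }

    public static void main(String[] args) {
        System.out.println(renderBase("http", "localhost", 8080, "/App1"));
    }
}
```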
There you go! This is what we want to output from our version of the BaseTag. Let us call it MybankBaseTag. Listing 6.1 shows the doStartTag() method from MybankBaseTag.

Form Tag

Another tag that deserves extra attention is the FormTag. You have learnt about the working of this tag in Chapter 2 and used it in Chapter 3. At that point, we looked at only one attribute of this tag – the action attribute. It also has a set of attributes based on JavaScript events. For instance, the onreset and onsubmit attributes do exactly what their JavaScript equivalents do; they invoke the corresponding JavaScript event handler functions. The JavaScript event-based attributes are not limited to just the FormTag. In fact, all the tags in the HTML tag library have similar features. Another attribute of interest is the enctype. Normally you don’t have to set the enctype. When you are uploading files however, the value of enctype should be set to multipart/form-data. More details await you in the section on FileTag.

FileTag
FileTag lets you select a file to upload from the HTML page. When you are uploading files, the value of enctype (on the FormTag) should be set to multipart/form-data. The FileTag in its simplest form generates an output of <input type="file" name="xyz" value="abc" />. This results in the rendering of a text field for entering the file name and a Browse button as shown in the figure below.
On clicking the Browse button a file selection dialog box appears. The selected file is uploaded when the form is submitted. In the JSP, the FileTag is used as <html:file property="uploadFile"/>. The uploadFile is a
JavaBeans property in the ActionForm. Struts mandates the type of this property to be org.apache.struts.upload.FormFile. FormFile is an interface with methods to get the InputStream for the uploaded file. For more details refer to the example web application named struts-upload.war in the webapps directory of wherever you installed Struts.

Smart Checkbox – The state aware checkbox

Consider a HTML form containing a checkbox in a JSP as follows:

<html:form action="/submitCustomer">
  <html:text property="firstName"/>
  <html:checkbox property="agree"/>
  <html:submit>Submit</html:submit>
</html:form>
In addition to the usual text field, it has a checkbox that the customer checks to indicate he agrees with the terms and conditions. Assume that the associated ActionForm has a validate() method checking that firstName is not null. If the first name is not present, then the user gets the same page back with the error displayed. The user can then submit the form again after correcting the errors. Further assume that the associated ActionForm is stored in session scope. Now starts the fun.

1. First, submit the form by checking the checkbox but leaving the firstName blank. The form submission request looks like this:

?firstName=""&agree="true"

The ActionForm is created in the session with a blank firstName and the agree attribute set to true (a checkbox is mapped to a boolean attribute in the ActionForm).

2. Since the firstName is blank, the user gets the same page back. Now fill in the firstName but uncheck the agree checkbox. The form submission request looks like this:

?firstName="John"
Note that the agree request parameter is missing. This is nothing unusual. According to the HTML specification, if a checkbox is unchecked, it is not submitted as a request parameter. However, since the ActionForm is stored in session scope, we have landed in a problem. In our case, Struts retrieves the ActionForm from the session and sets the firstName to “John”. Now the ActionForm has firstName=John and agree=true, although you intended agree to be false.
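The stale-value problem can be reproduced without a container: populate a session-scoped map from request parameters the way the framework would, and watch the unchecked box keep its old value. This is a simulation of the behavior, not Struts code:

```java
import java.util.HashMap;
import java.util.Map;

// Simulates a session-scoped ActionForm being repopulated from request
// parameters. An unchecked checkbox sends no parameter at all, so the
// stale 'true' from the first submission survives the second one.
public class StaleCheckboxDemo {
    public static void populate(Map<String, String> form, Map<String, String> request) {
        // Only parameters present in the request overwrite form values.
        form.putAll(request);
    }

    public static void main(String[] args) {
        Map<String, String> sessionForm = new HashMap<>();

        Map<String, String> firstSubmit = new HashMap<>();
        firstSubmit.put("firstName", "");
        firstSubmit.put("agree", "true");
        populate(sessionForm, firstSubmit);

        Map<String, String> secondSubmit = new HashMap<>();
        secondSubmit.put("firstName", "John");  // checkbox unchecked: no 'agree' key
        populate(sessionForm, secondSubmit);

        // agree is still "true" even though the user unchecked the box.
        System.out.println(sessionForm.get("agree"));
    }
}
```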
The Smart Checkbox we are about to present is the solution to this problem. The solution uses JavaScript, and it works as expected only if your target audience enables JavaScript in their browsers. The solution is as follows: define the ActionForm as usual with the boolean property for the checkbox. Define a new class SmartCheckboxTag by extending the CheckboxTag in the org.apache.struts.taglib.html package and override doStartTag(). In doStartTag(), do the following:

1. Render a checkbox with the name “agreeProxy”, where agree is the name of the boolean property in the ActionForm.
2. Render a hidden field with the name agree.
3. Define an inline JavaScript function within a <script> block as follows, substituting appropriate values for [property] and [formName]:
<script>
function handle[property]Click(obj) {
  if (obj.checked == true) {
    document.forms["[formName]"].[property].value = 'true';
  } else {
    document.forms["[formName]"].[property].value = 'false';
  }
}
</script>
Invoke the above JavaScript function for the onclick event of the checkbox. The crux of this solution is to invoke a JavaScript function on clicking (checking or unchecking) the checkbox to appropriately set the value of a hidden field. The hidden field is then mapped to the actual property in the ActionForm. If you can ensure that your target audience has JavaScript enabled, this solution works like a charm! Many might classify this solution as a hack, but the truth is there is no elegant solution for this problem. Where applicable and feasible you can adopt this solution. If you are unsure about your target audience or are deploying the application into the public domain, never use this solution. It is impossible to predict the environment and behavior of an Internet user.
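The markup the SmartCheckboxTag must emit can be assembled as plain strings. The sketch below is illustrative only (names and formatting are assumptions, not the tag's actual output): it builds the proxy checkbox, the onclick wiring and the hidden field for a given boolean property:

```java
// Builds the markup a SmartCheckboxTag-style tag would emit for a
// boolean property: a proxy checkbox plus a hidden field that carries
// the real value. Names and formatting are illustrative.
public class SmartCheckboxMarkup {
    public static String render(String property) {
        String handler = "handle" + property + "Click(this)";
        return "<input type=\"checkbox\" name=\"" + property + "Proxy\""
                + " onclick=\"" + handler + "\" />"
                + "<input type=\"hidden\" name=\"" + property + "\" value=\"false\" />";
    }

    public static void main(String[] args) {
        System.out.println(render("agree"));
    }
}
```

Because the hidden field is always submitted, the server sees an explicit 'true' or 'false' on every request, sidestepping the unchecked-checkbox problem described above.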
Using CSS with Struts HTML Tags

Cascading Style Sheets (CSS) are the mechanism used by page authors to centralize and control the appearance of an HTML page. Some of the uses of CSS are:

1. Text formatting, indentation and fonts
2. Adding background color to text, links and other HTML tags
3. Setting table characteristics such as styled borders, widths and cell spacing

CSS allows the page author to make changes to formatting information in one location, and those changes immediately reflect on all pages using that stylesheet – resulting in an application with a consistent appearance for the least amount of work; in other words, a highly maintainable application. The developers of the Struts tags had this in mind, and thus most of the HTML tags support the usage of stylesheets in the form of the styleClass and style attributes. The styleClass refers to the CSS stylesheet class to be applied to the HTML element and the style attribute refers to inline CSS styles in the HTML page. You can use either, but styleClass is most frequently used.

Enhancing the error display with a customized TextTag

You already know how to validate an ActionForm and display the error messages to the user. This approach works great so long as the forms are small enough and the resulting error display fits into the viewable area without scrolling. If the forms are larger, it is a hassle for the user to scroll down and check for the messages. We address this usability challenge with an error indicator next to the field, as shown in the figure below. In addition to the quick visual impact, the error indicator can also provide more information, such as displaying a popup box with the errors for that field in a JavaScript-enabled browser, thus enriching the user experience.
There are many ways to implement this. One simple way is to extend the TextTag class and override the doStartTag() method. The doStartTag() from the Struts TextTag generates the <input type="text" name=".." > markup. The subclass of the Struts TextTag then has to add an image next to it when
there is ActionError(s) associated with the input field. Listing 6.2 shows the implementation with the above approach. The new tag called MyTextTag is used in the JSP as follows:
<mytags:mytext property=”….” errorImageKey=”img.error.alert” />
The errorImageKey is the key to get the name of the error image from the resource bundle. In the doStartTag() method, a check is performed to see whether the text field has any associated errors. If there are no errors, no extra processing is done. However, if there are errors, the errorImageKey is used to retrieve the image source and an <img src=”…”> markup is constructed alongside the text tag. There are other ways of implementing this feature. One of them is to develop a separate custom tag to generate the error indicator.
Listing 6.2 TextTag with built-in error indicator
public class MyTextTag extends TextTag {

    private String errorImageKey;

    public int doStartTag() throws JspException {
        int returnValue = super.doStartTag();
        ActionErrors errors = RequestUtils.getActionErrors(
                pageContext, this.property);
        if ((errors != null) && !errors.isEmpty()) {
            String imageSrc = RequestUtils.message(pageContext,
                    getBundle(), getLocale(), this.errorImageKey);
            if (imageSrc != null) {
                StringBuffer imageResults = new StringBuffer();
                imageResults.append("<img src=\"");
                imageResults.append(imageSrc);
                imageResults.append("\"");
                // Terminate the img tag
                imageResults.append(">");
                // Print the image to the output writer
                ResponseUtils.write(pageContext, imageResults.toString());
            }
        }
        return returnValue;
    }
    ...
    public void release() {
        super.release();
        errorImageKey = null;
    }
}
Further customizations can also be performed to pop up JavaScript alerts showing the errors, if needed. This requires communication between Java and JavaScript. Sounds complex, right? It is easier than you think! You can achieve this in three steps. All you need is a basic understanding of JavaScript. First, create a JavaScript function as shown in Listing 6.3. This function simply creates a JavaScript data structure and adds each individual ActionError to a JavaScript object called errorMessageArray. An array is created for every form field to hold multiple error messages.
Listing 6.3 JavaScript function to add ActionError into a JavaScript data structure
function addActionError(window, formFieldName, errorMessage) {
    if (!window.errorMessageArray) {
        window.errorMessageArray = new Object();
    }
    var value = window.errorMessageArray[formFieldName];
    if (typeof(value) == "undefined") {
        window.errorMessageArray[formFieldName] = new Array();
        window.errorMessageArray[formFieldName][0] = errorMessage;
    } else {
        var length = window.errorMessageArray[formFieldName].length;
        window.errorMessageArray[formFieldName][length] = errorMessage;
    }
}
Second, create your own Errors tag by extending the ErrorsTag from Struts. This JavaScript function is invoked repetitively from the ErrorsTag’s doStartTag() method for every ActionError in ActionErrors. Listing 6.4 shows the doStartTag() method for the MyErrorsTag. As usual the method first invokes the super.doStartTag() to write the ActionErrors as locale specific error messages to the output stream. It then invokes the JavaScript function addActionError() inline with the rest of HTML for every
ActionError. The JavaScript invocation is made inline by using <script> and </script> demarcations. At the end of this method, every ActionError
associated with the form fields is added to the JavaScript data structure (errorMessageArray). Any JavaScript code in the page can now access the data structure to do whatever it likes. Finally the error messages in the JavaScript data structure (added by MyErrorsTag) have to be displayed when clicking on the error indicator. This can be done with a simple JavaScript function as shown in Listing 6.5. The displayAlert() function iterates over the error messages for the given form field. This function has to be invoked on the onclick JavaScript event of the error indicator image.
Listing 6.4 MyErrorsTag invoking the JavaScript functions
public class MyErrorsTag extends ErrorsTag {

    public int doStartTag() throws JspException {
        int returnValue = super.doStartTag();
        // Retrieve the ActionErrors
        ActionErrors errors = RequestUtils.getActionErrors(
                pageContext, Globals.ERROR_KEY);
        StringBuffer results = new StringBuffer();
        results.append("<script>");
        // Retrieve all the form field names having ActionErrors
        Iterator properties = errors.properties();
        String formFieldName = null;
        while (properties.hasNext()) {
            formFieldName = (String) properties.next();
            if (formFieldName.equals(ActionMessages.GLOBAL_MESSAGE)) {
                continue;
            }
            // Retrieve every ActionError for the form field
            Iterator reports = errors.get(formFieldName);
            String message = null;
            while (reports.hasNext()) {
                ActionError report = (ActionError) reports.next();
                message = RequestUtils.message(pageContext, bundle,
                        locale, report.getKey(), report.getValues());
                // Invoke the JavaScript function for every ActionError
                results.append("addActionError(window,\"" + formFieldName
                        + "\",\"" + message + "\");\n");
            }
        }
        results.append("</script>");
        ResponseUtils.write(pageContext, results.toString());
        return returnValue;
    }
    ...
}
Listing 6.5 JavaScript function to display alert with ActionError messages
function displayAlert(window, formFieldName) {
    var length = window.errorMessageArray[formFieldName].length;
    var aggregateErrMsg = "";
    for (var i = 0; i < length; i++) {
        aggregateErrMsg = aggregateErrMsg +
                window.errorMessageArray[formFieldName][i];
    }
    alert(aggregateErrMsg);
}
The recommended way to use ImgTag

The ImgTag is used to render an HTML <img> element such as the following:
<img src=”images/main.gif” alt=”The Main Image”/>
If you are wondering why you need a Struts tag for such a simple HTML tag, consider this. Sometimes the images actually spell out English words. Users worldwide access your application, and you want to localize the images displayed to them. You also want the alt text on your images to be internationalized. How do you do this without adding a big and ugly if-else block in your JSPs? The answer is to use the ImgTag. With ImgTag, the actual image (src) and the alternate text (alt) can be picked from the Message Resources. You can easily set up different Resource Bundles for different Locales and there you have it: your images are internationalized without any extra effort. Even if you are not internationalizing, the effort is well worth it. JSPs can remain untouched when the image is changed. The usage of the ImgTag is as follows:
<html:img srcKey=”image.main” altKey=”image.main.alttext” />
There are many more attributes in ImgTag and you can find them in the Struts documentation.
6.2 Using Images for Form submissions
All along, you have submitted HTML forms with the grey lackluster buttons. Life requires a bit more color, and these days most web sites use images for Form submission. The images add aesthetic appeal to the page as well. Struts provides the <html:image> tag for image based Form submission. Although the ImageTag belongs to the HTML Tag library, it requires an in-depth treatment and deserves a section by itself. Let us look at the ImageTag and how it fits into the scheme of things. Consider an ImageTag used in a JSP as follows:
<html:image src=”images/createButton.gif” property=”createButton” />
This gets translated to:
<input type=”image” name=”createButton”
src=”images/createButton.gif” />
When the Form is submitted by clicking on the image, the name is appended with the X and Y coordinates and sent to the server. In this case, two request parameters, createButton.x and createButton.y, are sent. Suppose that the HTML form has two or more images, each with a different name. How do you capture this information in the ActionForm and convey it to the Action? The answer is ImageButtonBean in the org.apache.struts.util package. The ImageButtonBean has five methods – getX(), setX(), getY(), setY() and isSelected(). All you have to do is add a JavaBeans property of type ImageButtonBean to the ActionForm (Listing 6.6). For instance, if the JSP has image buttons named createButton and updateButton, you have to add two ImageButtonBean properties to the ActionForm with the same names. When the createButton image is clicked, two request parameters createButton.x and createButton.y are sent to the server. Struts interprets the dot separated names as nested JavaBeans properties. For example, the property reference:
<.. property=”address.city”/>
is translated into
getAddress().getCity()
while getting the property. The setters are called for setting the property as follows:
getAddress().setCity()
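The dot separated resolution described above can be sketched with plain reflection. This is an illustrative approximation, not the actual Struts (BeanUtils) code; the Customer and Address classes are invented for the example:

```java
import java.lang.reflect.Method;

class Address {
    private String city;
    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }
}

class Customer {
    private Address address = new Address();
    public Address getAddress() { return address; }
}

class NestedProperty {
    // Resolve a dot separated property like "address.city" by
    // chaining the getters: getAddress().getCity()
    static Object get(Object bean, String property) throws Exception {
        Object current = bean;
        for (String part : property.split("\\.")) {
            String getter = "get" + Character.toUpperCase(part.charAt(0))
                    + part.substring(1);
            Method m = current.getClass().getMethod(getter);
            current = m.invoke(current);
        }
        return current;
    }

    static String demo() {
        try {
            Customer customer = new Customer();
            customer.getAddress().setCity("Boston");
            return (String) get(customer, "address.city");
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // Boston
    }
}
```

The real framework additionally handles setters, indexed properties and type conversion; the getter chain above is the essential idea.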
For createButton.x and createButton.y, Struts invokes getCreateButton() on the ActionForm and then setX() and setY() on createButton. Since createButton is an ImageButtonBean, its x and y are set to non-null values when the button is clicked. The isSelected() method returns true if at least one of x or y is non-null. Listing 6.6 shows the ActionForm with createButton and updateButton. It is pretty straightforward. In the Action instance, you can find out which of the buttons was pressed by using the isSelected() method as follows:
public ActionForward execute(ActionMapping mapping,
        ActionForm form, HttpServletRequest request,
        HttpServletResponse response) throws Exception {
    CustomerForm custForm = (CustomerForm) form;
    if (custForm.getCreateButton().isSelected()) {
        System.out.println("Create Image Button is pressed");
    } else if (custForm.getUpdateButton().isSelected()) {
        System.out.println("Update Image Button is pressed");
    }
    ...
}

Listing 6.6 CustomerForm using ImageButtonBean
public class CustomerForm extends ActionForm {

    private String firstName;
    ..
    ..
    private ImageButtonBean createButton;
    private ImageButtonBean updateButton;

    public CustomerForm() {
        firstName = "";
        lastName = "";
        createButton = new ImageButtonBean();
        updateButton = new ImageButtonBean();
    }
    ...
    public ImageButtonBean getCreateButton() {
        return createButton;
    }
    public void setCreateButton(ImageButtonBean imgButton) {
        this.createButton = imgButton;
    }
    public ImageButtonBean getUpdateButton() {
        return updateButton;
    }
    public void setUpdateButton(ImageButtonBean imgButton) {
        this.updateButton = imgButton;
    }
}
Compare this with the Action using grey buttons. It would look like custForm.getCreateButton().equals(“Create”). Obviously, changing the grey button to an image button on the JSP has not gone unnoticed in the Action. The Action class has changed accordingly, and the ActionForm has changed too. Previously a String held on to the submit button’s name; now an ImageButtonBean has taken its place. You might be wondering if it is possible to eliminate this coupling between the Action and the JSP. The good news is that this can be achieved quite easily. Listing 6.7 shows HtmlButton, which extends ImageButtonBean but overrides the isSelected() method. ImageButtonBean has already taken care of handling the image button in its isSelected() method. The extra functionality in HtmlButton takes care of grey button submission. The attribute called name is the name of the grey button submitting the form. The isSelected() method now checks that the name is not null, in addition to invoking super.isSelected(). Now you can use the HtmlButton for either mode of JSP submission – grey button or image button. The ActionForm uses HtmlButton in both cases and never changes when the JSP changes. Neither does the Action change. Decoupling Nirvana indeed! The image button in the JSP will look like:
<html:image property=”createButton” src=”images/createButton.gif” />
If grey button were used in the JSP, it would look like:
<html:submit property=”createButton.name”>
    <bean:message key=”button.create.name” />
</html:submit>
Notice that the grey button is called “createButton.name”. The dot separated naming is essential to keep the ActionForm and Action unchanged. Moreover, the suffix of the property – “.name” – is fixed, since HtmlButton has a JavaBeans property called name (Listing 6.7). Large projects need a simple and systematic way of handling Form submissions and deciding what happened on the back end. Minor innovations like HtmlButton go a long way in making your application cleaner and better.
The alt text and the image source for ImageTag can also be externalized into the Message Resource Bundle, much like the ImgTag. As it turns out, the names of the attributes for externalizing these are also the same: <html:image> has srcKey to externalize the image src and altKey to externalize the alternate text (alt). In Chapter 10, we will develop a DispatchAction-like capability for HtmlButton by exploiting the Struts customization facility.

ImageButton and JavaScript

ImageButton is all cool and dandy as long as you don’t have to execute JavaScript and support multiple browser versions. Microsoft Internet Explorer 4 (and above) and Netscape 6 (and above) provide support for JavaScript event handlers in <input type=”image”>. If JavaScript execution is critical to the application logic, you might want to switch between image buttons and grey buttons depending on the browser version. This is another instance where HtmlButton saves the day. Irrespective of whether the front end uses an image button or a grey button, the presentation logic in the ActionForm and Action doesn’t change.
Listing 6.7 HtmlButton
public class HtmlButton extends ImageButtonBean {

    private String name;

    public String getName() {
        return name;
    }
    public void setName(String aName) {
        this.name = aName;
    }
    public boolean isSelected() {
        boolean returnValue = super.isSelected();
        if (returnValue == false) {
            returnValue = (name != null && name.trim().length() > 0);
        }
        return returnValue;
    }
}
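To see the combined behavior outside the container, here is a trimmed, framework-free sketch. The ImageButton class below merely stands in for Struts’ ImageButtonBean, and GreyOrImageButton plays the role of HtmlButton; both class names are invented for this illustration:

```java
// Stand-in for org.apache.struts.util.ImageButtonBean (sketch only):
// an image submission sends <name>.x and <name>.y coordinates
class ImageButton {
    private String x;
    private String y;
    public void setX(String x) { this.x = x; }
    public void setY(String y) { this.y = y; }
    public boolean isSelected() {
        return (x != null) || (y != null);
    }
}

// Same idea as Listing 6.7: also selected when a grey button
// named <property>.name submitted the form
class GreyOrImageButton extends ImageButton {
    private String name;
    public void setName(String aName) { this.name = aName; }
    public boolean isSelected() {
        boolean selected = super.isSelected();
        if (!selected) {
            selected = (name != null && name.trim().length() > 0);
        }
        return selected;
    }
}

class ButtonDemo {
    public static void main(String[] args) {
        GreyOrImageButton imageSubmit = new GreyOrImageButton();
        imageSubmit.setX("12");       // browser sent createButton.x=12
        imageSubmit.setY("7");        // and createButton.y=7

        GreyOrImageButton greySubmit = new GreyOrImageButton();
        greySubmit.setName("Create"); // browser sent createButton.name=Create

        GreyOrImageButton untouched = new GreyOrImageButton();

        System.out.println(imageSubmit.isSelected()); // true
        System.out.println(greySubmit.isSelected());  // true
        System.out.println(untouched.isSelected());   // false
    }
}
```

Either submission mode flips isSelected() to true, which is exactly why the Action code shown earlier works unchanged for both.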
In Chapter 4, you looked at the goodies offered by DispatchAction and LookupDispatchAction. In particular with LookupDispatchAction, you
were able to assign methods in the Action instance based on the button names in a locale independent manner. The only prerequisite was that all the form submission buttons have the same name. With the HtmlButton (or ImageButtonBean for that matter), we started with different names for different buttons from the outset. For this reason, DispatchAction and LookupDispatchAction cannot be used in conjunction with image based form submissions. They can, however, be used with HTML links using images.
6.3 Struts Bean Tags
Struts Bean tag library contains tags to access JavaBeans and resource bundles, among others. Two frequently used tags are MessageTag (bean:message) and WriteTag (bean:write).

Message Tag and Multiple Resource Bundles

You have already used the message tag for accessing externalized messages in resource bundles using locale independent keys. In this section, we will go further and investigate the applicability of multiple resource bundles. When the application is small, a single resource bundle suffices. When the application gets larger, the single properties file gets cluttered, much like a single struts-config.xml getting cluttered. It is advisable to have multiple resource bundles based on the message category from the outset. This saves the pain involved in splitting the single bundle into pieces and updating all the resources accessing it. The first step in using multiple resource bundles is to declare them in struts-config.xml. The syntax for declaring multiple resource bundles is as follows:
<message-resources parameter=”mybank.MyMessages” />
<message-resources key=”bundle.alt” parameter=”mybank.AltMsgResource” />
<message-resources key=”bundle.error” parameter=”mybank.ErrorMsgResource” />
The above snippet declares three resource bundles identified by a key. The default resource bundle does not have a key. As the key suggests, the AltMsgResource contains alternate messages and the ErrorMsgResource contains error messages. The message tag accesses the default resource bundle as follows:
<bean:message key=”msg.key” />
The key specified in the <bean:message> tag is the key used in the properties file to identify the message. The non-default resource bundles are accessed by specifying the bundle key as declared in struts-config.xml (key=”bundle.alt”, key=”bundle.error” etc.). For instance, a message tag accesses a message in AltMsgResource as follows:
<bean:message key=”msg.key” bundle=”bundle.alt” />
Similarly, an errors tag accesses the messages in a non-default bundle by using the bundle attribute as follows:
<html:errors bundle=”bundle.error” />
You can also specify alternate bundles for the following tags in the HTML Tag library: messages, image, img and option.

Write Tag

The write tag is another frequently used tag. The usage of this tag is as follows:
<bean:write name=”customer” property=”firstName” />
It accesses the bean named customer in the page, request, session and application scopes in that order (unless a scope attribute is specified), then retrieves the property named firstName and renders it to the current JspWriter. If the format attribute is specified, the value is formatted accordingly. The format can be externalized to the resource bundle by using the formatKey attribute instead. An alternate resource bundle can also be specified. This is handy when the display format is locale specific. Going forward, <c:out> and the JSTL formatting tags are preferred over the write tag for formatting and output.
6.4 Struts Logic Tags
The frequently used tags in the Logic tag library are for logical comparison between values and for iteration over collections. The important logical comparison tags are: equal, notEqual, greaterEqual, greaterThan, lessEqual and lessThan. The following important attributes are common to these tags:

value – The constant value against which the comparison is made.
name – The name of the bean in one of the four scopes.
property – The property of the bean that is compared with the value.
A sample usage of the tags is as follows:
<logic:equal name=”customer” property=”firstName” value=”John”> //do whatever when the customer first name is John </logic:equal>
The above tag searches for a bean named customer in the 4 scopes and checks if its firstName property is equal to John. You can also specify the scope to restrict the search scope by using the scope attribute on these tags. Another tag attribute is parameter. You have to specify only one: parameter or (name and property). As the name suggests, the parameter attribute looks for the specified request parameter and compares it with the value attribute. In the example below, the request parameter named firstName is compared with a value of John.
<logic:equal parameter=”firstName” value=”John”> //do whatever when the request parameter firstName //is equal to John </logic:equal>
There are more comparison tags of interest: empty, notEmpty, present and notPresent. These tags do not compare against a given value, but check if an entity (bean property or request parameter) is empty or present respectively. Hence they don’t need the value attribute. The following snippet checks if a request parameter named firstName is present.
<logic:present parameter=”firstName”>
    //do whatever when the request parameter firstName is present
    //(irrespective of its value)
</logic:present>
Nested Logic Tags

Consider how you would write the following logical condition using Logic tags:
if (customer.firstName == “John” &&
        customer.lastName == “Doe” &&
        customer.age == 28) {
    //do something….
}
This can be done by nesting the logic:equal tags as follows:
<logic:equal name=”customer” property=”firstName” value=”John”>
  <logic:equal name=”customer” property=”lastName” value=”Doe”>
    <logic:equal name=”customer” property=”age” value=”28”>
      //do something….
    </logic:equal>
  </logic:equal>
</logic:equal>
Nesting of <logic:xxx> tags always results in logical ANDing. There is no convenient way to do an “OR” test, however; that’s where the expression language in JSTL comes in handy (introduced in the next section). With JSTL, the above AND condition can be written as follows:
<c:if test=’${customer.firstName == “John” && customer.lastName == “Doe” && customer.age == 28}’> //do something ... </c:if>
Writing the OR condition is no different:
<c:if test=’${customer.firstName == “John” || customer.lastName == “Doe” || customer.age == 28}’> //do something ... </c:if>
The c in c:if stands for JSTL’s core tag library TLD. There are other tag libraries in JSTL, such as formatting. Refer to the section “A crash course on JSTL” for details.

Iterate Tag

The iterate tag is used to iterate over a collection (or a bean containing a collection) in any of the four scopes (page, request, session and application) and execute the body content for every element in the collection. For instance, the following tag iterates over the collection named customers.
<logic:iterate name=”customers”> //execute for every element in the collection </logic:iterate>
Another alternative is to use a bean and iterate over its property, identified by the property attribute. The following tag accesses the company bean from one of the scopes, invokes getCustomers() on it to retrieve a collection, and iterates over it.
<logic:iterate name=”company” property=”customers”> // Execute for every element in the customers // collection in company bean </logic:iterate>
Most of the time a collection is iterated over to display its contents, perhaps in table format. This requires that the individual element of the collection be exposed as a scripting variable to the inner tags and scriptlets. This is done using the id attribute as follows:
<logic:iterate id=”customer” name=”company” property=”customers”>
  // Execute for every element in the customers
  // collection in company bean.
  // Use the scripting variable named customer
  <bean:write name=”customer” property=”firstName” />
</logic:iterate>
NOTE: The JSTL tag <c:forEach> performs similar functionality. It is recommended that you switch to these new tags where applicable.
6.5 A crash course on JSTL
JSTL stands for JSP Standard Tag Library. It is one of the new specifications from Sun for standardizing common tags. Due to the lack of standard tags for most common tasks, such as iterating over a collection and displaying it as a table, custom tags from different vendors have sprung up (including Struts Logic tags), presenting a formidable learning curve for developers every time a new vendor is chosen. JSTL has standardized this set of tags. This standardization lets you learn a single tag and use it across the board. Table 6.1 shows the JSTL tag categories. The Core and Formatting tags are most relevant to the discussion on Struts.
Table 6.1 JSTL Libraries

Library      Description
Core         Core tags for if/then, output and iterating over collections
Formatting   I18N and formatting tags: localizing text, setting resource bundles, formatting and parsing numbers, currency and dates
SQL          Database access tags
XML          Tags for XML parsing and transforming with XPath
JSTL also introduced a new expression language (called EL henceforth) as an alternative to full-blown JSP expressions. For example, consider the following scriptlet. It checks for the “user” bean in the page, request, session and application scopes, and if it is not null, prints out the roles of that user.
<%
    User user = (User) pageContext.findAttribute(“user”);
    if (user != null) {
        Role[] roles = user.getRoles();
%>
<ul>
<%
        for (int i = 0; i < roles.length; i++) {
%>
    <li>Role Name is <%= roles[i].getName() %></li>
<%
        }
%>
</ul>
<%
    }
%>
This can be easily written using JSTL and EL as follows:
<ul>
  <c:forEach items=”${user.roles}” var=”role”>
    <li><c:out value=”${role.name}”/></li>
  </c:forEach>
</ul>
Any value to be evaluated in a JSTL tag lies between ${ and }. EL defines several implicit objects, much like JSP defines its implicit objects. Table 6.2 gives the complete list of implicit objects. If the name of the variable between ${ and } does not match one of the implicit objects, then EL searches the page, request, session and application scopes in that order for the variable name specified. In the above code snippet, “user” is the name of an attribute in one of these scopes. Once the <c:forEach> tag gets the value, it iterates over the specified collection. In this case it iterates over the array of roles and provides a scripting variable called role (var=”role”) for the embedded tags. The <c:out> tag accesses the name property of the Role object (obtained from the role scripting variable) and renders it. The c in <c:out> represents the Core JSTL tag library.
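The scope search EL performs for a name that is not an implicit object can be approximated with ordinary maps. The maps below are hypothetical stand-ins for the four attribute collections a container maintains:

```java
import java.util.HashMap;
import java.util.Map;

class ScopeSearch {
    // EL checks page, then request, then session, then application
    // scope, and returns the first non-null binding it finds
    static Object findAttribute(String name,
                                Map<String, Object> page,
                                Map<String, Object> request,
                                Map<String, Object> session,
                                Map<String, Object> application) {
        for (Map<String, Object> scope :
                java.util.Arrays.asList(page, request, session, application)) {
            Object value = scope.get(name);
            if (value != null) {
                return value;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, Object> page = new HashMap<>();
        Map<String, Object> request = new HashMap<>();
        Map<String, Object> session = new HashMap<>();
        Map<String, Object> application = new HashMap<>();

        session.put("user", "john");   // bound in session scope
        request.put("user", "mary");   // request scope wins over session

        System.out.println(
            findAttribute("user", page, request, session, application)); // mary
    }
}
```

The same ordering is what pageContext.findAttribute() uses, which is why the scriptlet and EL versions above behave alike.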
Table 6.2 Implicit objects in EL

Category                    Identifier        Description
JSP                         pageContext       PageContext for the current page
Scope                       pageScope         Map holding page scoped attributes
                            requestScope      Map holding request scoped attributes
                            sessionScope      Map holding session scoped attributes
                            applicationScope  Map holding application scoped attributes
Request parameters          param             Map holding request parameter names
                            paramValues       Map holding request parameter values as arrays
Request headers             header            Map holding header names
                            headerValues      Map holding header values
Cookies                     cookie            Map holding cookies by name
Initialization parameters   initParam         Map holding web application context initialization parameters by name
NOTE: JSTL 1.0 works with JSP 1.2 containers only, such as Tomcat 4.x. JSTL 1.1 works only with JSP 2.0 containers, such as Tomcat 5.x. With JSP 1.2, the expression language can be used only within JSTL tags. The JSP 2.0 specification defines a portable expression language; with JSP 2.0, the expression language becomes part of the specification and can be used even outside JSTL. You have already seen an example of using the JSTL Core library earlier in conjunction with EL. Now, let us look at an example of the formatting library tags. Consider the case when you want to display the currency amount 12.37 in the user’s Locale. You can use the formatNumber tag for this purpose. In the following example, the formatted currency is stored in a variable called “money”. For the U.S. Locale, money will contain the value “$12.37”.
<fmt:formatNumber value=”12.37” type=”currency” var=”money” />
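Under the covers, this kind of locale sensitive formatting is what java.text.NumberFormat provides. A quick standalone illustration (plain Java, no JSTL):

```java
import java.text.NumberFormat;
import java.util.Locale;

class CurrencyDemo {
    static String format(double amount, Locale locale) {
        // The currency instance picks the symbol and separators
        // appropriate for the given locale
        return NumberFormat.getCurrencyInstance(locale).format(amount);
    }

    public static void main(String[] args) {
        System.out.println(format(12.37, Locale.US)); // $12.37
        // Other locales render the same amount differently;
        // Locale.GERMANY, for instance, uses a comma decimal separator
        System.out.println(format(12.37, Locale.GERMANY));
    }
}
```

Because the rendering varies by locale, externalizing the format (rather than hardcoding it in the JSP) is what keeps the page internationalized.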
This is somewhat similar to the <bean:write> tag in terms of formatting the currency. Similarly there is a JSTL equivalent for <bean:message> tag.
<fmt:message key=”msg.key” />
In JSTL, the Resource Bundle for the above tag can be specified in a number of ways. Unless specified otherwise, JSTL looks for a servlet context parameter named javax.servlet.jsp.jstl.fmt.localizationContext and uses its value as the bundle name. You can also use the tag <fmt:setBundle basename=”mybank.MyMessages”> in a JSP, and the rest of the JSP uses the specified bundle. You can scope a bundle by wrapping other tags with the <fmt:bundle> tag as follows:
<fmt:bundle basename=”mybank.MySecondMessages”>
  <fmt:message … />
  <fmt:message … />
</fmt:bundle>
In the above snippet, all the inner tags use mybank.MySecondMessages as their resource bundle. The resource bundle lookup is similar to Struts. In the above scenario, for instance, the servlet container looks for MySecondMessages.properties in WEB-INF/classes/mybank and then in the system classpath.

Design Tip

Since Struts allows you to specify resource bundles on a per-tag basis, it seems easier (and logical) to use separate bundles per category. For instance, all the errors can reside in one bundle, all the messages of one type can reside in another bundle, and so on.
JSTL, on the other hand, seems to encourage the practice of using a resource bundle per module. This is evident from the way you specify the bundles at a JSP level, scope them and so on. It is easier this way to use a resource bundle for a set of JSPs belonging to one module.

JSTL Binaries – Who’s who

If you download the JSTL Reference Implementation from Sun, it has two important jar files – jstl.jar and standard.jar. The former contains the classes from the javax.servlet.jsp.jstl package and the latter contains Sun’s JSTL Reference Implementation. From the perspective of this book, we will be using Struts-EL, the JSTL compliant port of Struts tags. Struts-EL is shipped with the Struts 1.1 release as a contributed library and can be found under the contrib folder. Struts-EL uses the same jstl.jar containing the javax.servlet.jsp.jstl package – it is the vendor independent JSTL standard. However, it uses the implementation from Jakarta Taglibs as the underlying expression evaluation engine (this implementation is also named standard.jar and found under Struts-EL/lib). If you explode the standard.jar, you will find classes belonging to the org.apache.taglibs package.
6.6 Struts-EL
As you might know already, Struts-EL is a port of Struts tags to JSTL. This provides a migration path for the existing Struts applications to the expression language syntax in a non-intrusive manner. Normal Struts tags rely on runtime scriptlet expressions to evaluate dynamic attribute values. For example, the key of the bean:message below is dependent on some business logic.
<bean:message key=”<%= stringVar %>” />
This assumes that stringVar exists as a JSP scripting variable. This tag can be rewritten with the Struts-EL version of the message tag as follows:
<bean-el:message key=”${stringVar}” />
Although not much exciting is going on in the above tag, it shows how easy it is to port existing Struts tags to Struts-EL. The real power of Struts-EL comes to the fore when the scriptlet deciding the attribute value starts becoming complex. Not all tags from Struts are ported to Struts-EL. In areas where there is already a JSTL tag available, porting the Struts tags would only cause
redundancy. Hence those Struts tags are not ported. For example, the bean:write tag can be replaced with the c:out JSTL tag. Similarly, most of the logic tags (such as equal, notEqual, lessThan etc.) are not ported, since the JSTL tag c:if can take any expression and evaluate it (with the test=”${….}” option). You have already seen how a logic:equal tag can be replaced with c:if in the earlier section on Nested Logic Tags.

Struts-EL hands-on

Enough theory. Let’s get down to business and use some Struts-EL tags to get a feel for them. Here is the step-by-step process. You will need new jar files to use Struts-EL in your application. Copy the following jars from the Struts contrib folder into the WEB-INF/lib folder of the web application: jstl.jar, standard.jar (remember to use the Jakarta Taglibs version, not the Sun reference implementation jar) and struts-el.jar. These jars are needed in addition to the already existing jars from regular Struts. From the Struts-EL/lib folder copy the following TLDs to the WEB-INF of your web application: c.tld, struts-bean-el.tld, struts-html-el.tld and struts-logic-el.tld. Add the <taglib> declaration for all the new TLDs in web.xml as follows:
<taglib>
  <taglib-uri>/WEB-INF/struts-bean-el</taglib-uri>
  <taglib-location>/WEB-INF/struts-bean-el.tld</taglib-location>
</taglib>
<taglib>
  <taglib-uri>/WEB-INF/struts-html-el</taglib-uri>
  <taglib-location>/WEB-INF/struts-html-el.tld</taglib-location>
</taglib>
<taglib>
  <taglib-uri>/WEB-INF/struts-logic-el</taglib-uri>
  <taglib-location>/WEB-INF/struts-logic-el.tld</taglib-location>
</taglib>
<taglib>
  <taglib-uri>/WEB-INF/c</taglib-uri>
  <taglib-location>/WEB-INF/c.tld</taglib-location>
</taglib>
In the JSPs, add the declaration for these TLDs as follows:
<%@ taglib uri="/WEB-INF/struts-bean-el" prefix="bean-el" %>
<%@ taglib uri="/WEB-INF/struts-html-el" prefix="html-el" %> <%@ taglib uri="/WEB-INF/struts-logic-el" prefix="logic-el" %> <%@ taglib uri="/WEB-INF/c" prefix="c" %>
That’s it! Now you are ready to use the Struts-EL tags in conjunction with JSTL tags to reap the benefits of the expression language and make your applications a little bit simpler and cleaner.

Practical uses for Struts-EL

When was the last time you wrestled to use a custom tag as the attribute value of another tag and failed? Something like this:
<html:radio name="anotherbean" value="<bean:write name="mybean" property="myattrib"/>" />
Nesting a custom tag within another tag’s attribute is illegal by taglib standards. The alternatives are no good. Thankfully now, with JSTL, you can solve this problem in a clean way. In Struts tags, JSTL can be combined only with Struts-EL, and the problem can be solved as follows:
<html-el:radio name=”anotherbean” value=”${mybean.myattrib}” />
Beautiful, isn’t it? Struts-EL provides you the best of both worlds: the elegance of JSTL and the power of Struts.
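To tie this back to the tags that were not ported, here is how a bean:write and a logic:equal translate to their JSTL replacements (a sketch; the customer bean and its properties are made-up names for illustration):

```
<%-- bean:write replaced by c:out --%>
<bean:write name="customer" property="firstName"/>
<c:out value="${customer.firstName}"/>

<%-- logic:equal replaced by c:if with an EL test --%>
<logic:equal name="customer" property="status" value="premium">Welcome back!</logic:equal>
<c:if test="${customer.status == 'premium'}">Welcome back!</c:if>
```

In each pair, the second line is the JSTL form that makes the dedicated Struts tag redundant.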
6.7 List based Forms
All along you have seen how to handle regular forms. Now let us see how to handle list-based forms. List-based forms are used for editing collections of objects. Examples include weekly hours of operation, contacts etc. Such collections may be limited to a single page or span multiple pages. We will deal with a collection limited to a single page first. Techniques for dealing with multi-page lists are illustrated later.
144
Struts Survival Guide – Basics to Best Practices
Figure 6.1 HTML form for collecting the weekly hours of operation
Indexed struts-html tags are used to display editable collections of objects. Consider an HTML form used to collect information about the weekly hours of operation for a company and send the data back to an Action, as shown in Figure 6.1. The brute force approach is to create 7 pairs of text fields to collect the opening and closing time for each day of the week. A more elegant approach is to use indexed <html:...> tags. The ActionForm for the HTML in Figure 6.1 is shown in Listing 6.8. The ListForm has a java.util.List named hourOfOperationList. It is a list containing hours of operation. The HourOfOperation itself is a Serializable Java class with three JavaBeans properties – day, openingTime and closingTime. The zeroth day is a Sunday and the sixth day is a Saturday. Back to the ListForm. The ListForm has a getter method for the hours of operation List, but no setter method. The reset() method initializes the List with exactly seven HourOfOperation objects. In reality, you would populate this list from the database. There is also an odd method called getTiming() that takes an integer index as argument and returns the corresponding HourOfOperation object from the List. This method replaces the setter method and is the key for the Struts framework when populating the list from form data. The details will become clear once you look at the JSP code in Listing 6.9 and the generated HTML in Listing 6.10.
Listing 6.8 ListForm
public class ListForm extends ActionForm {
  private List hourOfOperationList;

  public ListForm() {
    reset();
  }

  public void reset() {
    hourOfOperationList = new ArrayList(7);
    hourOfOperationList.add(new HourOfOperation(0));
    hourOfOperationList.add(new HourOfOperation(1));
    hourOfOperationList.add(new HourOfOperation(2));
    hourOfOperationList.add(new HourOfOperation(3));
    hourOfOperationList.add(new HourOfOperation(4));
    hourOfOperationList.add(new HourOfOperation(5));
    hourOfOperationList.add(new HourOfOperation(6));
  }

  public List getHourOfOperationList() {
    return hourOfOperationList;
  }

  public HourOfOperation getTiming(int index) {
    return (HourOfOperation) hourOfOperationList.get(index);
  }
}
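The book does not show HourOfOperation itself. From the description – Serializable, with day, openingTime and closingTime properties, day 0 being Sunday – it might look like the sketch below; the day-name mapping and the "N/A" defaults (taken from Listing 6.10) are assumptions:

```java
import java.io.Serializable;

// A sketch of the HourOfOperation bean. The book only states that it is
// Serializable with day, openingTime and closingTime properties and that
// the zeroth day is Sunday; everything beyond that is assumed.
class HourOfOperation implements Serializable {
    private static final String[] DAYS = {
        "Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"
    };

    private final String day;            // display name, e.g. "Sunday"
    private String openingTime = "N/A";  // defaults match Listing 6.10
    private String closingTime = "N/A";

    public HourOfOperation(int dayIndex) {
        this.day = DAYS[dayIndex];
    }

    public String getDay() { return day; }
    public String getOpeningTime() { return openingTime; }
    public void setOpeningTime(String openingTime) { this.openingTime = openingTime; }
    public String getClosingTime() { return closingTime; }
    public void setClosingTime(String closingTime) { this.closingTime = closingTime; }
}
```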
In Listing 6.9, the JSP displays a form containing the company name and the hours of operation List. The <logic:iterate> tag is used inside the <html:form> tag to iterate over the hourOfOperationList property in the ListForm bean. Each hour of operation is exposed as a scripting variable named timing. You can now see the relation between the getTiming() method in the ListForm and this scripting variable. The indexed="true" setting on each of the html tags makes the array index part of the text field name. For instance, the following tag
<html:text name="timing" property="openingTime" indexed="true"/>
generates the HTML as follows in the second iteration (i=1):
<input type=”text” name="timing[1].openingTime" .. />
Listing 6.9 JSP for the ListForm
<html:form action="/submitListForm">
Company Name: <html:text property="companyName"/><BR>
<table border=1 cellpadding=1>
<tr><td>Day</td><td>Opening Time</td><td>Closing Time</td></tr>
<logic:iterate id="timing" name="ListForm" property="hourOfOperationList">
  <tr>
    <td><bean:write name="timing" property="day"/></td>
    <td><html:text name="timing" property="openingTime" indexed="true"/></td>
    <td><html:text name="timing" property="closingTime" indexed="true"/></td>
  </tr>
</logic:iterate>
</table>
<BR>
<html:submit>Save</html:submit>
<html:cancel>Cancel</html:cancel>
</html:form>
Notice the relation between the Struts text tag and the generated input tag. Each text field now has a unique name, since the name is partly driven by the array index. This magic was done by the indexed="true" setting. When the form is edited and submitted via POST, the request parameter names are unique (timing[0].openingTime, timing[1].openingTime and so on), thanks to the array index being part of the text field names. The generated HTML is shown in Listing 6.10. Upon form submission, when Struts sees the request parameter named timing[1].openingTime, it calls the following method:
listForm.getTiming(1).setOpeningTime(...)
and so on for every request parameter. This is exactly where the getTiming() method in ListForm comes in handy. Without it, Struts could never access the individual items in the list. Thanks to getTiming(), the individual HourOfOperation objects are accessed and their attributes are set using the corresponding request parameters. List based form editing is frequently a necessity in day-to-day Struts usage. The above approach is perhaps the only clean way to achieve it.
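The population step can be simulated in plain Java to see why the indexed getter is essential. This is a sketch, not actual Struts internals (Struts does the equivalent reflectively via Commons BeanUtils), and the Timing/Form classes are trimmed-down stand-ins for the book's HourOfOperation and ListForm:

```java
import java.util.ArrayList;
import java.util.List;

public class IndexedParamDemo {
    static class Timing {
        private String openingTime;
        public void setOpeningTime(String t) { openingTime = t; }
        public String getOpeningTime() { return openingTime; }
    }

    static class Form {
        private final List<Timing> list = new ArrayList<>();
        Form() { for (int i = 0; i < 7; i++) list.add(new Timing()); }
        public Timing getTiming(int i) { return list.get(i); }  // the crucial indexed getter
    }

    // Mimics the population step: parse "timing[1].openingTime" and set the value.
    static void populate(Form form, String param, String value) {
        int index = Integer.parseInt(
            param.substring(param.indexOf('[') + 1, param.indexOf(']')));
        // Struts effectively does: listForm.getTiming(index).setOpeningTime(value)
        form.getTiming(index).setOpeningTime(value);
    }

    public static void main(String[] args) {
        Form form = new Form();
        populate(form, "timing[1].openingTime", "8:00 AM");
        System.out.println(form.getTiming(1).getOpeningTime()); // prints 8:00 AM
    }
}
```

Without a method like getTiming(int), there is no way to navigate from the parameter name to the individual list element.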
Listing 6.10 Generated HTML from JSP in Listing 6.9
<form name="ListForm" action="/mouse/submitListForm.do">
Company Name: <input type="text" name="companyName" value="ObjectSource"><BR>
<table border=1 cellpadding=1>
<tr><td>Day</td><td>Opening Time</td><td>Closing Time</td></tr>
<tr>
  <td>Sunday</td>
  <td><input type="text" name="timing[0].openingTime" value="N/A"></td>
  <td><input type="text" name="timing[0].closingTime" value="N/A"></td>
</tr>
<tr>
  <td>Monday</td>
  <td><input type="text" name="timing[1].openingTime" value="8:00 AM"></td>
  <td><input type="text" name="timing[1].closingTime" value="6:00 PM"></td>
</tr>
.. .. ..
</table>
<BR><input type="submit" value="Save">
<input type="submit" name="org.apache.struts.taglib.html.CANCEL"
       value="Cancel" onclick="bCancel=true;">
</form>
6.8 Multi-page Lists and Page Traversal frameworks
As seen in the last section, <logic:iterate> can be used in conjunction with indexed html tags to display and edit list forms. However, read-only tabular displays are more common than editable list forms in enterprise applications. Such read-only tables span multiple pages, with data ranging from ten rows to thousands. The IterateTag can also be used for displaying read-only data by iterating over a collection and rendering the data using <bean:write>. For multi-page lists, the attributes offset and length are useful. The offset indicates the index from where to start the iteration in the page, relative to the first element (index = 0). The length indicates the maximum number of entries from the collection to be displayed in the page. Using these two attributes it is possible to build a multi-page list. But the task is more daunting than you can imagine. Believe us. Multi-page list display will not be your only worry. You will be asked to provide a browsing mechanism – previous, next, first and last – to traverse the collection. You will have to sort the data for the chosen column (and still do previous, next etc.). You will be asked to group the data, aggregate and sum columns and format them. In addition, you will have to make it easier for the page author to apply different display styles to the list. Before you know it, the seemingly trivial task has turned into a Frankenstein! The plain vanilla IterateTag simply cannot be stretched that far. A robust framework exclusively for performing the above tasks is needed. Fortunately such frameworks are available at no charge. Why reinvent the wheel unless you have a unique and stringent requirement not satisfied by one of these frameworks? Three such frameworks are reviewed below; one is free, the other two are open source. Let us examine what is available and what is missing in these frameworks and how they fit into the Struts way of building applications. The three frameworks are:
1. Pager Taglib
2. DisplayTag
3. HtmlTable

Pager Taglib

Pager Taglib covers the display aspects of list traversal very well. Provide it minimal information, such as rows per page and the URL, and it will control the entire paging logic. You are in complete control of the iterating logic and table display. (If using the IterateTag, the offset and length attributes are not needed.) Hence you can completely customize the look and feel of the table using CSS. The Pager taglib does not provide any assistance for the table display. Neither does it handle editable list forms, sorting or grouping. If all you need is an easy and elegant way to traverse list data, you should definitely consider using the Pager taglib and you will be glad you did. Below is a short note on how to use the Pager Taglib with Struts. Start with an Action that creates the collection to iterate and puts it in HttpSession using results as the key name. Then forward to the JSP that uses the Pager taglib. This JSP is shown in Listing 6.11. The resulting HTML is shown in Figure 6.2. The pg:pager tag has two important attributes – url and maxPageItems. They specify the destination URL when any of the navigation links is clicked and the number of items per page, respectively. In Listing 6.11, the url is traverse.do – a simple ForwardAction that forwards to the same JSP. The JSP uses the iterate tag to iterate the collection. The pg:item tag defines each displayable row. The pg:index, pg:prev, pg:pages and pg:next tags together display the page numbers and the previous and next links. These tags even provide you the flexibility of using your own images instead of plain old hyperlinks. Using pg:param (not shown in the listing), additional request parameters can also be submitted with the url.
Figure 6.2 Traversing the multi page list using Pager Taglib from jsptags.com

Table 6.3 Feature Comparison between DisplayTag and HtmlTable

Feature                    | DisplayTag                                                     | HtmlTable
---------------------------|----------------------------------------------------------------|----------------------------------------------------------------
Display Formatting         | Very rich and customizable using CSS.                          | Limited features. Formatting is based on a back-end XML file and hence not directly under the page author's control.
Column Grouping            | Yes                                                            | Yes
Nested Tables              | Yes                                                            | No
Coding Style               | The display model should be created in advance, but the formatting can be invoked from the JSP using hooks called decorators. Does not require a controller (such as Struts or its Action). The JSP itself can control the paging. | The display model and its formatting should be performed in advance (in an Action). The paging is tied to Struts. Needs a predefined Action called ServeTableAction. Strictly MVC based.
Paging                     | Customizable auto paging                                       | Fixed style auto paging
Sorting                    | Yes                                                            | Yes
I18N                       | No. Messages can be externalized to a properties file but cannot be localized as of 1.0b2. Full support is expected soon. | Yes. Can use Struts resource bundle.
Editable column            | No                                                             | Yes, but the form is submitted to a predefined Action. Action chaining for custom processing can be set up with minor customization.
Documentation and examples | Good                                                           | Limited
User community             | Relatively high                                                | Less
DisplayTag and HtmlTable frameworks

The Pager taglib does paging through a table and nothing more. If sorting and grouping are among your requirements, you can use one of the DisplayTag or HtmlTable frameworks. Each of them has its own paging logic and should not be used in conjunction with the Pager Taglib. Covering these frameworks in depth is beyond the scope of this book. Please check their respective web sites for documentation and user guides. Table 6.3 provides a comprehensive feature comparison between the two. DisplayTag shines in many categories but lacks the table based editing features. DisplayTag is not tied to Struts in any way. Neither does it enforce MVC. HtmlTable, on the other hand, mandates strict adherence to MVC. All the pagination and sort requests are handled by a pre-defined Action class (ServeTableAction) provided with the library. Further customization is needed to chain it to your own business processing before/after ServeTableAction does its job.
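As an illustration only (consult the DisplayTag documentation for the exact attribute set and TLD URI), a typical DisplayTag page looks roughly like this – the results collection and column names are made up:

```
<%@ taglib uri="" prefix="display" %>
<display:table name="results" pagesize="10">
  <display:column property="firstName" title="First Name" sortable="true"/>
  <display:column property="lastName"  title="Last Name"  sortable="true"/>
</display:table>
```

The tag iterates over the collection named results, renders one column per display:column, and emits the paging and sorting links automatically.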
Listing 6.11 Using Pager taglib with Struts
<pg:pager url="traverse.do" maxPageItems="10">
<TABLE width="100%">
  <TR><TD align="center">
    .. .. ..
  </TD></TR>
  <TR align="center"><TD>
    <pg:index>
      <pg:prev><a href="<%= pageUrl %>">[<< Prev]</a></pg:prev>
      <pg:pages><a href="<%= pageUrl %>"><%= pageNumber %></a></pg:pages>
      <pg:next><a href="<%= pageUrl %>">[Next >>]</a></pg:next>
    </pg:index>
  </TD></TR>
</TABLE>
</pg:pager>
Creating the Model for iteration

In the last two sections, you looked at three options for traversing and displaying the collection. For a limited amount of data, creating the collection is a no-brainer. The size of the result set is manageable and you can tolerate the database returning it in one shot. As the collection gets larger, it consumes a significant amount of memory and it absolutely does not make sense to waste precious runtime resources. Instead of maintaining the entire collection in memory, you can use the Value List Handler pattern (Core J2EE Patterns). Figure 6.3 shows the class diagram for the Value List Handler.
Figure 6.3 Value List Handler pattern
ValueListHandler can be thought of as a façade for the underlying collection. The ValueList – the data object collection – is traversed using the ValueListIterator. The Data Access Object encapsulates the logic to access the database in the specified manner – read-only EJB, direct JDBC or an O/R mapper, the latter two approaches being preferred. We recommend designing the Value List Handler intelligently so that it fetches data in bulk using a read-ahead (a.k.a. pre-fetch) mechanism – i.e. data to serve two to three pages is retrieved in advance, so that the delay in retrieval is minimized. The beauty of this pattern is that you can expose the ValueListIterator to the IterateTag and the tag will believe that it is traversing the original Iterator, while you can intelligently serve the requested rows and keep fetching in the background. In this context it is advantageous to use an O/R mapping framework that allows you to use SQL to search the database. Most of the O/R mapping frameworks provide caching mechanisms. Hence the overhead of object creation after retrieval is eliminated, since the object is already in the cache. Moreover, you can take advantage of the features provided by the RDBMS. For instance, DB2 provides a feature called ROW_NEXT. Suppose that the requirement is to display 10 rows per page. Here is a strategy for devising a responsive system while working with a large data set. When the query is first made, data for three pages (30 rows) is prefetched and maintained in the HttpSession. When the user requests the third page, the ValueListHandler realizes that the end of the cache is reached. It goes ahead and serves the third page (rows 21-30). After that it initiates an asynchronous request to fetch another 30 rows from the database. (This will need a scheduling mechanism to which individual ValueListHandlers submit their pre-fetch requests.) When the next 30 rows are retrieved, it caches them along with the original 30 rows. Hence a cache of 60 rows is maintained per user. (This is to prevent a “cache-fault” if the user decides to go to the previous page while on the third page.) Depending on the size of the displayed objects, you have to choose an optimal cache size within the Value List Handler. If the objects are 1K each, 60 objects mean 60K of memory consumed by the corresponding HttpSession. This is absolutely not recommended. A rule of thumb is that HttpSession size should not exceed 20K per user. [Another reason to make the display objects only as big as needed and no bigger. Overcome the tendency to reuse bloated value objects and data transfer objects from elsewhere.] Coming back to the database features for large result sets, the following SQL can be used to fetch the first 30 rows:
SELECT FIRST_NAME, LAST_NAME, ADDRESS
FROM CUSTOMER, … …
WHERE … … …
ORDER BY FIRST_NAME
FETCH FIRST 30 ROWS ONLY
OPTIMIZED FOR READ ONLY
When the user reaches the third page, the ValueListHandler makes a prefetch request for the next 30 rows (Rows 31 to 60). The following SQL can be used to fetch them:
SELECT * FROM (
  SELECT FIRST_NAME, LAST_NAME, ADDRESS
  FROM CUSTOMER, … …
  WHERE … … …
  ORDER BY FIRST_NAME
) AS CUST_TEMP
WHERE ROW_NEXT BETWEEN 31 AND 60
OPTIMIZED FOR READ ONLY
This SQL consists of two parts. The inner SQL is exactly the same as the SQL issued earlier and can be thought of as fetching the data into a temporary table. The ROW_NEXT in the outer SQL identifies the exact rows to be returned from the retrieved result set. The values 31 and 60 can be substituted dynamically. The proprietary SQL no doubt impacts portability, but almost every database used in the enterprise today has this feature. The Java code is still portable.
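The pre-fetch strategy above can be sketched in plain Java. Names like CustomerDao are illustrative (not from the book), rows are simplified to Strings, and the asynchronous scheduling is omitted: here the handler fetches three pages' worth whenever the cache runs short.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative DAO contract: fetch up to `count` rows starting at a
// 1-based row number (e.g. by running the paged SQL shown in the text).
interface CustomerDao {
    List<String> fetch(int startRow, int count);
}

// A minimal Value List Handler with read-ahead: pages are served from an
// in-memory cache, and three pages' worth of rows are fetched at a time.
class ValueListHandler {
    private final CustomerDao dao;
    private final int pageSize;
    private final List<String> cache = new ArrayList<>();
    private boolean exhausted = false;

    ValueListHandler(CustomerDao dao, int pageSize) {
        this.dao = dao;
        this.pageSize = pageSize;
    }

    // Returns the rows for the given 1-based page number.
    List<String> getPage(int page) {
        int needed = page * pageSize;
        while (cache.size() < needed && !exhausted) {
            List<String> chunk = dao.fetch(cache.size() + 1, pageSize * 3);
            if (chunk.size() < pageSize * 3) {
                exhausted = true;   // the database has no more rows
            }
            cache.addAll(chunk);
        }
        int from = Math.min((page - 1) * pageSize, cache.size());
        int to = Math.min(needed, cache.size());
        return new ArrayList<>(cache.subList(from, to));
    }
}
```

In a real deployment the cache would be bounded (per the 20K-per-session rule of thumb) and the fetch beyond the current page would happen asynchronously.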
6.9 Summary
In this chapter you got an overview of Struts tags and more importantly learnt to customize these tags for your projects. In addition you looked at JSTL and Struts-EL. Hopefully this chapter has prepared you to use Struts tags better.
Chapter 7
Struts and Tiles
In this chapter:
You will learn to use Tiles with Struts for web page design using Layouts
7.1 What is Tiles
Consider a banking application whose current web page layout has a header, body and footer, as shown by the first layout in Figure 7.1. The management recently decided that all pages in the application should conform to the corporate look and feel shown in the second layout in Figure 7.1. The new layout has a header, footer, a varying body and a navigation sidebar.
Figure 7.1 Current and Future page layout for the banking application
When the application was first designed, the development team had two alternatives. The first was the “JSP based” approach, in which each JSP page is coded separately. Although the header and footer are common to every page, the common JSP markup for header and footer was added to each JSP by direct copy and paste. This quick and dirty solution is unacceptable even for the
Chapter 7. Struts and Tiles
155
smallest of web applications and poses a maintenance hurdle. Anytime the header or footer changes, the change has to be manually applied to every page. Further, any changes to the page layout have to be made in every page. The second alternative was the <jsp:include> approach. This approach is better than the previous one since it avoids repetition of common markup. The common markup for the header and footer is moved into JSPs of their own. The header and footer JSPs are added to the main JSP using the standard <jsp:include> directive. Whenever the header or footer changes, it affects only one or two files. However, if at any point in time the layout of the pages itself changes (as has happened for the banking application now), every JSP page with this structure has to be updated accordingly. The team chose the second option at the time of development. However, the new management directive is now posing a challenge. It is a tedious task to change every page of the system, and there are chances that the current system might break in the process. Had they had the Tiles framework at their disposal at the time of development, this change would have been a breeze!
Figure 7.2 Current Customer Details Page for My Bank
Figure 7.2 shows a sample HTML page from the banking application. Listing 7.1 shows the (simplified) JSP for that page. The JSP contains the <jsp:include> for the common header and footer, but has the entire layout written in terms of an html table with various tr and td elements. All JSPs in the application might have the same layout copied over and over. This is “copy and paste technology” taken to the next dimension, and exactly where Tiles comes into the picture.
Listing 7.1 CustomerDetail JSP using <jsp:include>
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
<%@ taglib uri="/WEB-INF/struts-bean.tld" prefix="bean" %>
<html:html>
<head>
<html:base/>
<title><bean:message key=".."/></title>
</head>
<body>
<TABLE border="0" width="100%" cellspacing="5">
  <tr><td><jsp:include</td></tr>
  <tr>
    <td>
      <html:form action="/submitCustomerForm">
        <table>
          .. .. ..
        </table>
      </html:form>
    </td>
  </tr>
  <tr><td><jsp:include</td></tr>
</TABLE>
</body>
</html:html>
The basic principle behind Tiles is to refactor the common layout out of the individual JSPs to a higher level and then reuse it across JSPs.
If the management wants a new look and feel, so be it; you can change the common layout JSP and the whole web application has the new look and feel! Redundancy is out and reuse is in. In OO parlance, this is similar to refactoring common functions from a set of classes into their parent class.

In Tiles, layouts represent the structure of the entire page. A layout is simply a JSP. Think of it as a template with placeholders (or slots). You can place other JSPs in these slots declaratively. For instance, you can create a layout with slots for header, body and footer. In a separate XML file (called the XML tile definition file), you specify which JSPs go into these slots. At runtime, the Tiles framework composes the aggregate page by using the layout JSP and filling its slots with the individual JSPs.

In essence, Tiles is a document assembly framework that builds on the "include" feature provided by the JSP specification for assembling presentation pages from component parts. Each part (called a tile, which is also a JSP) can be reused as often as needed throughout the application. This reduces the amount of markup that needs to be maintained and makes it easier to change the look and feel of a website. The Tiles framework uses a custom tag library to implement the templates. Comparing this approach with <jsp:include> will help you understand Tiles better. In the <jsp:include> approach, all included JSPs (header, footer etc.) become part of the core JSP before being rendered. In Tiles, all the JSPs – header, footer and the core – become part of the Layout JSP before being rendered. The outermost JSP rendered to the user is always the same: the layout JSP. This approach reduces redundancy of HTML and makes maximum reuse of formatting logic. The rest of this chapter deals with using Tiles for effective design in conjunction with Struts. In the next section, you will see how the banking application can be converted into a Tiles oriented design.
7.2 Your first Tiles application
In this section, you will learn how to assemble a Tiles application. We will start with the CustomerDetails.jsp in Listing 7.1 and change it to use Tiles. The Customer Details page is first shown to the user. When the submit button in the Customer Form is pressed, a Success page is shown. Note that we are not referring to “.jsp” files any more. Instead they are being referred to as “pages”. There is a reason for this. Strictly speaking, the only JSP file that the user gets every time is the Layout JSP – the aggregate page. Hence the several incarnations of Layout.jsp that the user sees are distinguished by their core contents – Customer Details information, Success information and so on.
Step 1: Creating the Layout

To start with, let us concentrate on Tiles-enabling the Customer Details page. The first step is to create the Layout JSP with placeholders for header, footer, body and anything else you want, by using Tiles insert tags. The insert tag is defined in struts-tiles.tld, the TLD file that is part of the Struts distribution. The Layout JSP factors out most of the formatting markup. Typical things performed in a layout are:

1. Defining the outer structure of the page with html tables with defined widths.
2. Creating placeholders for pages relative to one another in the overall presentation scheme.

The first tag used in Listing 7.2 is getAsString. This tag retrieves the title as a string. The insert tags are used to insert different JSPs into the SiteLayout.jsp. For example, the header of the page is inserted as <tiles:insert attribute="header"/>. The layout we have used is simple. In reality, however, nested tables with bells and whistles are used for professional looking pages. Although required, they result in deep indentation and hence are error prone and visually displeasing. With the layout taking care of these two things, the individual pages don’t have to deal with them. That makes the design of the included pages simpler and cleaner.
Listing 7.2 SiteLayout.jsp – The layout used by Tiles in the banking app
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
<%@ taglib uri="/WEB-INF/struts-tiles.tld" prefix="tiles" %>
<html:html>
<head>
<html:base/>
<title><tiles:getAsString name="title"/></title>
</head>
<body>
<TABLE border="0" width="100%" cellspacing="5">
  <tr><td><tiles:insert</td></tr>
  <tr><td><tiles:insert</td></tr>
  <tr><td><hr></td></tr>
  <tr><td><tiles:insert</td></tr>
</TABLE>
</body>
</html:html>
Step 2: Creating the XML Tile definition file

The SiteLayout.jsp created in the previous step uses the insert tag to insert the individual JSPs. The insert tags, however, do not specify the JSPs directly. They contain an attribute named attribute. The value of attribute is a reference to the actual JSP. The actual JSP name is specified in an XML file called the tiles definition file. A sample definition is shown below.
<definition name="/customer.page" path="/Sitelayout.jsp">
  <put name="title"  value="My Bank – Customer Form"/>
  <put name="header" value="/common/header.jsp"/>
  <put name="footer" value="/common/footer.jsp"/>
  <put name="body"   value=".."/>
</definition>
The Tiles definition shown above defines the JSPs that go into each of the insert tag placeholders in the SiteLayout.jsp for the Customer Details page and identifies them with a unique name. Note that the name of each put in the definition is the same as the value of attribute in the corresponding insert tag. Similarly, an XML definition for the Success page is added as follows:
<definition name="/success.page" path="/Sitelayout.jsp">
  <put name="title"  value="MyBank – Success"/>
  <put name="header" value="/common/header.jsp"/>
  <put name="footer" value="/common/footer.jsp"/>
  <put name="body"   value=".."/>
</definition>
Compare the above definition with the Customer Details page definition shown earlier. You will see that only the title and body differ between the two. The header and footer remain the same. Tiles allows you to factor these common elements out into a base definition. Individual definitions can then extend the base definition, much like concrete classes extend an abstract base class. Factoring out the common elements of the two page definitions results in a base definition as follows:
<definition name="base.definition" path="/Sitelayout.jsp">
  <put name="title"  value="MyBank"/>
  <put name="header" value="/common/header.jsp"/>
  <put name="footer" value="/common/footer.jsp"/>
  <put name="body"   value=""/>
</definition>
The individual definitions are created by extending the above definition. Accordingly, the new definitions for the Customer Detail and Success pages are as follows:
<definition name="/customer.page" extends="base.definition">
  <put name="title" value="MyBank – Customer Form"/>
  <put name="body"  value=".."/>
</definition>

<definition name="/success.page" extends="base.definition">
  <put name="title" value="MyBank – Success"/>
  <put name="body"  value=".."/>
</definition>
Each definition extends base.definition and overrides the settings for title and body. They will, however, reuse the header and footer settings from base.definition. Notice that we have left the body section blank in the base definition but provided a default title. Individual page definitions must provide the body JSP. If a title is not provided, the default title from the base definition is used. The definitions thus created are stored in a file called tiles-defs.xml. Generally this file is placed under WEB-INF and is loaded at Struts startup. The file contains the definitions for each aggregate page (a combination of several JSPs) accessed by the user.

Step 3: Modifying the forwards in struts-config.xml

Suppose that you had the following action mapping in struts-config.xml for the Customer form submission prior to using Tiles:
<action path="/submitCustomerForm"
        type="mybank.app1.CustomerAction"
        name="CustomerForm"
        scope="request"
        validate="true"
        input="CustomerDetails.jsp">
  <forward name="success" path=".."/>
</action>
The above action mapping uses the JSP name directly. With Tiles, you have to replace the JSP name with the Tiles definition name. The resulting action mapping is shown below, with the success forward now pointing to the Tiles definition.
<action path="/submitCustomerForm"
        type="mybank.app1.CustomerAction"
        name="CustomerForm"
        scope="request"
        validate="true"
        input="/customer.page">
  <forward name="success" path="success.page"/>
</action>
Step 4: Using TilesRequestProcessor

You have so far used org.apache.struts.action.RequestProcessor as the request processor with regular Struts pages. This request processor forwards to a specified JSP and commits the response stream. This does not work with Tiles, since individual JSPs have to be included in the response stream even after the stream is flushed and the data is committed. Moreover, the regular Struts RequestProcessor can only interpret forwards pointing to a direct physical resource such as a JSP name or another action mapping. It is unable to interpret “/customer.page” – a Tiles definition. Hence Tiles provides a specialized request processor called TilesRequestProcessor to handle this scenario. For a given Struts module, only one request processor is used. A Tiles enabled module uses the TilesRequestProcessor even if the module has regular Struts pages. Since TilesRequestProcessor extends the regular Struts RequestProcessor, it inherits all its features and can handle regular Struts pages as well. TilesRequestProcessor is declared in struts-config.xml as follows:
<controller processorClass= "org.apache.struts.tiles.TilesRequestProcessor"/>
The TilesRequestProcessor contains the logic to process includes and forwards. It checks if the specified URI is a Tiles definition name. If so, then the definition is retrieved and included. Otherwise the original URI is included or forwarded as usual. Step 5: Configuring the TilesPlugIn As you know, TilesRequestProcessor needs the XML Tiles definition at runtime to interpret Tiles specific forwards. This information created in Step 2 is stored in a file called tiles-defs.xml. Generally this file is placed under WEB-INF. At startup this file is loaded by using the Tiles PlugIn. The TilesPlugIn initializes Tiles specific configuration data. The plugin is added to the struts-config.xml as shown below.
<plug-in className="org.apache.struts.tiles.TilesPlugin">
  <set-property property="definitions-config"
                value="/WEB-INF/tiles-defs.xml"/>
  <set-property property="moduleAware" value="true"/>
</plug-in>
The className attribute refers to the plugin class that will be used – in this case, org.apache.struts.tiles.TilesPlugin.

NOTE: CSS, or Cascading Style Sheets, is a way to add formatting rules and layout to existing HTML tags. CSS greatly simplifies changes to page appearance: only the stylesheets have to be edited. Tiles, as we saw in the previous section, deals with the organization of the different parts of the JSP page, as opposed to enhancing the look and feel of individual components. CSS deals more with enhancing the individual features of the components in each tile or area of the page. Tiles and CSS are complementary and can be used together to improve the look and feel of a JSP page.

In this section, you converted the Struts based Customer page and the subsequent page to use Tiles. The complete working application can be downloaded from the website.

Rules of thumb

1. Although Tiles provides several ways to construct a page, some of them don’t provide much advantage over the <jsp:include> approach at all. The approach we have illustrated above is the one used most, and it is in this approach that the real strength of Tiles gets expressed.
2. Thanks to the multiple application module support in Struts 1.1, you don’t have to Tiles-enable your entire application and watch it collapse. Start by breaking the application into multiple modules. Test if the modules are working as expected. Also test inter-module navigation. Then Tiles-enable the modules one at a time. This provides you a rollback mechanism if something goes wrong.
3. Never use the Tiles definition as the URL in the browser. This will not work. Struts can forward to a Tiles definition only when the control is within the TilesRequestProcessor, not when an external request arrives. If you want to display an aggregate Tiles page on clicking a link, define an action mapping for the URL (you can also use a global-forward instead).
Then create an action mapping for a ForwardAction and set the parameter attribute to be the Tiles definition. 4. In the application shown earlier, JSPs were used as Tiles. You can also use action mappings as page names.
<definition name="/customer.page" extends="base.definition">
  <put name="body" value="/custdet.do" />
</definition>
7.3 Tiles and multiple modules
The application seen earlier used Tiles in a single application module. In this section you will see how Tiles works across modules. Tiles provides two modes of operation: non-module-aware and module-aware. They are distinguished by the moduleAware attribute setting on the Tiles PlugIn. The definition file is specified by the definitions-config attribute on the Tiles PlugIn. In non-module-aware mode, all modules use the same Tiles definition file specified in the struts-config.xml for the default module. If there is no default module, all modules use the Tiles definition file specified in the struts-config.xml for the first module listed in web.xml. In module-aware mode, each module has its own Tiles definition file. A module cannot see definitions in a Tiles definition file belonging to another module unless it uses that file itself.
7.4 Summary
In this chapter you saw how to use Tiles along with Struts to build a maintainable and cleaner page layout. By transitioning your Struts modules to Tiles, you will see a boost in productivity for both developers and page authors.
Chapter 8
Struts and I18N
In this chapter:
1. You will understand the basics of I18N
2. You will learn the basics of the Java I18N API
3. You will review the features in Struts for I18N
4. You will look at how a Tiles application is I18N enabled
5. You will understand how localized input is processed

The Internet has no boundaries and neither should your web application. People all over the world access the net to browse web pages that are written in different languages. A user in Japan can access the web and check her Yahoo! email in Japanese. How does Yahoo do it? Is it because the user's machine has a Japanese operating system, or do web-based applications automatically adjust according to the users' region? This chapter answers these questions and shows you how to internationalize and localize your Struts web applications.

Terminology

Before diving deep into the bliss of Internationalization and Localization, coverage of some basic terminology is essential. That's what we are doing in this section.

Internationalization or I18n is the process of enabling your application to cater to users from different countries and to support different languages. With I18n, software is made portable between languages or regions. For example, the Yahoo! web site supports users from English, Japanese and Korean speaking countries, to name a few.

Localization or L10n, on the other hand, is the process of customizing your application to support a specific location. When you customize your web application for a specific country, say Germany, you are localizing your application. Localization involves establishing on-line information to support a specific language or region.

A Locale is a term that is used to describe a certain region and possibly a language for that region. In software terms, we generally refer to applications as
supporting certain locales. For example, a web application that supports a locale of "fr_FR" is enabling French-speaking users in France to navigate it. Similarly, a locale of "en_US" indicates an application supporting English-speaking users in the US.

A ResourceBundle is a class that is used to hold locale specific information. In Java applications, the developer creates an instance of a ResourceBundle and populates it with information specific to each locale, such as text messages, labels, and also objects. There will be one ResourceBundle object per Locale.

What can be localized?

When your application runs anywhere in the US, everyone, well almost everyone, speaks English and hence won't have any trouble trying to figure out what your application is trying to say. Now, consider the same application being accessed by a user in a country, say Japan, where English is not the mainstream language. There is a good chance that the very same message might not make much sense to a Japanese user. The point in context is very simple: present your web application to foreign users in a way they can comprehend it and navigate freely without facing any language barriers. Great, now you know where this is leading, right? That's right, localization!

In order to localize your web application, you have to identify the key areas that will have to change. There are three such key areas. From a Struts perspective, you only have to deal with the first two.

a. The visible part of your application – the user interface. The user interface specific changes could mean changes to text, date formats, currency formats etc.
b. Glue layer – presentation logic that links the UI to the business logic.
c. The invisible parts of your application – database support for different character encoding formats and your back-end logic that processes this data.

Here is a list of the commonly localized areas in a web application. We will be dealing only with the highlighted ones in this chapter.

1. Messages and labels on GUI components – labels, button names
2. Dates and times
3. Numbers and currencies
4. Personal titles, phone numbers and addresses
5. Graphics – images specific for every locale, catering to each region's cultural tastes.
6. Colors – colors play a very important role in different countries. For example, death is represented by the color white in China.
7. Sounds
8. Page layouts – that's right. Just like colors, page layouts can vary from locale to locale based on the country's cultural preferences.
9. Presentation logic in Struts Action classes.

There are other properties that you might require to be localized, but the ones mentioned are the commonly used ones. Struts provides mechanisms to address some of these, but the actual I18N and L10N capabilities lie in the Java API itself. In the next section, you will see a brief overview of the Java Internationalization API and some examples of how to update some of these fields dynamically based on Locale information.
8.1 The Java I18N and L10N API
The primary I18N and L10N Java APIs can be found in the java.util and java.text packages. This section shows some of the commonly used classes and their functions. Figure 8.1 shows the classes in the Java I18n API. If you are already familiar with the Java Internationalization API, you can skip this section and proceed to the next section.
Figure 8.1 The I18n classes provided by the Java Internationalization API
java.util.Locale

The Locale class represents a specific geographical or cultural region. It contains information about the region and its language, and sometimes a variant specific to the user's system. The variant is vendor specific and can be WIN for a Windows system, MAC for a Macintosh, etc. The following examples show you how to create a Locale object for different cases. A Locale object that describes only a language (French):
Locale frenchSpeakingLocale = new Locale("fr", "");
A Locale object that describes both the spoken language and the country (French Canada):
Locale canadaLocale = new Locale("fr", "CA");
A Locale object that describes the spoken language, country and a variant representing the user’s operating system (French Canada and Windows Operating system):
Locale canadaLocaleWithVariant = new Locale("fr", "CA", "WIN");
Accessing the Locale in a Servlet Container

On every request, the client's locale preference is sent to the web server as part of the HTTP header. The "Accept-Language" header contains the preferred Locale or Locales. This information is also available to the servlet container, and hence to your web tier, through the HttpServletRequest. ServletRequest, the interface that HttpServletRequest extends, defines the following two methods to retrieve the Locale:
public java.util.Locale getLocale();
public java.util.Enumeration getLocales();
The second method returns a set of Locales in descending order of preference. You can set the request's Locale preference in the browser. For instance, in Internet Explorer you can add, remove or change the Locales using Tools > Internet Options > Languages. The <controller> (RequestProcessor) setup in the Struts Config file has a locale attribute. If this is set to true, then Struts retrieves the Locale information from the request only the first time and stores it in the HttpSession with the key org.apache.struts.action.LOCALE. (Don't get confused. This is not a class name. It is the actual String used as the session key.) The default value of the locale attribute is false, in which case Struts does not store the Locale information in the HttpSession.
168
Struts Survival Guide – Basics to Best Practices
A tip from a usability perspective: although it is possible to change the Locale preference from the browser, I18N usability experts suggest that it might still be valuable to explicitly provide the choice to the users and let them decide. Every web site has a navigation bar or menu or something of that sort. You can provide an HTML choice or drop down to let the users choose the Locale that shall override all other settings. This is easy from a Struts perspective because the Locale from the HttpServletRequest can be overridden with the setting in the HttpSession, and Struts will never bother about the request header.

java.util.ResourceBundle

ResourceBundle is an abstract base class that represents a container of resources. It has two subclasses: ListResourceBundle and PropertyResourceBundle. When you are localizing your application, all the locale specific resources like text messages, icons and labels are stored in subclasses of the ResourceBundle. There will be one instance of the ResourceBundle per locale. The getBundle() method in this class retrieves the appropriate ResourceBundle instance for a given locale. The location of the right bundle is implemented using an algorithm explained later.
Listing 8.1 Extracting data from a ResourceBundle

Locale myLocale = new Locale("fr", "FR");
// Get the resource bundle for myLocale
ResourceBundle mybankBundle = ResourceBundle.getBundle("MybankResources", myLocale);
// Get the localized strings from this resource bundle
String myHeader = mybankBundle.getString("header.title");
System.out.println(myHeader);
Let us see how a resource bundle instance is retrieved with a simple example. Consider a custom ResourceBundle subclass called MybankResources that will contain data specific to your application. In this example, you will see how to use PropertyResourceBundles, assuming that all the resources to be localized are strings. In order to use PropertyResourceBundle, you will have to create Java properties files that will hold the data in key=value format. The file name itself identifies the Locale. For instance, if MybankResources.properties contains strings to be localized for the language English in the United States (en_US), then MybankResources_fr_FR.properties contains strings to be localized for the language "fr" (French) and region of "FR" (France). In order to use the data in these files, you have to get the ResourceBundle instance as shown in Listing 8.1. In order to understand Listing 8.1, assume that the English properties file, MybankResources.properties, contains a key value pair: header.title=My Bank.

Next assume that the French properties file, MybankResources_fr_FR.properties, also contains a key value pair: header.title=Ma Banque. The code snippet in Listing 8.1 produces the output "Ma Banque". What happens if the MybankResources_fr_FR.properties file was missing? Just to see what happens, rename the file to something else and run the program again. This time the output will be My Bank. But the locale was "fr_FR"! Here's what happened. Because the locale was "fr_FR", the getBundle() method looked up MybankResources_fr_FR.properties. When it did not find this file, it looked for the "next best match", MybankResources_fr.properties. But this file doesn't exist either. Finally getBundle() found the MybankResources.properties file and returned an instance of PropertyResourceBundle for this file. Accordingly the String myHeader is looked up using the header.title key from the MybankResources.properties file and returned to the user. In general, the algorithm for looking up a properties file is:
MybankResources_language_country_variant.properties
MybankResources_language_country.properties
MybankResources_language.properties
MybankResources.properties
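This lookup order can be sketched as a small standalone routine. The helper below is our own illustration, not part of the Java API, and it deliberately ignores the additional fallback through the JVM's default locale that the real getBundle() also performs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class BundleLookup {

    // Illustrative helper: list the candidate properties files in the
    // order the resource bundle lookup tries them for the given locale
    // (simplified: the real lookup also falls back through the JVM's
    // default locale before using the base bundle).
    static List<String> candidateNames(String baseName, Locale locale) {
        List<String> names = new ArrayList<>();
        String lang = locale.getLanguage();
        String country = locale.getCountry();
        String variant = locale.getVariant();
        if (!variant.isEmpty()) {
            names.add(baseName + "_" + lang + "_" + country + "_" + variant + ".properties");
        }
        if (!country.isEmpty()) {
            names.add(baseName + "_" + lang + "_" + country + ".properties");
        }
        if (!lang.isEmpty()) {
            names.add(baseName + "_" + lang + ".properties");
        }
        names.add(baseName + ".properties");
        return names;
    }

    public static void main(String[] args) {
        // For fr_FR, the search order matches the list shown above
        for (String name : candidateNames("MybankResources", new Locale("fr", "FR"))) {
            System.out.println(name);
        }
    }
}
```

Running it for the fr_FR locale prints the same three-step fallback that the renamed-file experiment above demonstrated.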
Java properties files are commonly used for web tier localization in Struts web applications. Hence we have shown you how to use them for localizing string data. If your requirement involves extracting locale specific resources besides strings, you might want to use the ListResourceBundle class.

NOTE: When the above program runs from the command line, the properties file is located and loaded by the default command line class loader – the system classpath class loader. Similarly, in a web application, the properties file should be located where the web application class loader can find it.

java.text.NumberFormat

NumberFormat is an abstract base class that is used to format and parse numeric data specific to a locale. This class is used primarily to format numbers and currencies. A sample example that formats currencies is shown in Listing 8.2. A currency format for the French Locale is first obtained. Then a double is formatted and printed using the currency format for the French Locale. The output is: Salary is: 5 124,75 €

In the above example, the double amount was hard coded as a decimal in en_US format and printed in the French format. Sometimes you will have to do the reverse while processing user input in your web applications. For instance, a user in France enters a currency in the French format into a text field in the web application and you have to get the double amount for the business logic to
process it. The NumberFormat class has the parse() method to do this. Listing 8.3 shows this. The output of the program is: Salary is: 5124.75
Listing 8.2 Formatting currencies using NumberFormat

Locale frLocale = new Locale("fr", "FR");
// Get an instance of NumberFormat
NumberFormat currencyFormat = NumberFormat.getCurrencyInstance(frLocale);
double salaryAmount = 5124.75;
// Format the amount for the French locale
String salaryInFrench = currencyFormat.format(salaryAmount);
System.out.println("Salary is: " + salaryInFrench);
There is a subclass of NumberFormat called DecimalFormat that can be used to format locale specific decimal numbers, with the additional capability of providing patterns and symbols for formatting. The symbols are stored in a DecimalFormatSymbols object. When using the NumberFormat factory methods, the patterns and symbols are read from localized resource bundles.
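A minimal sketch of this, using an explicit pattern together with US symbols (the pattern and locale here are chosen purely for illustration):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class PatternDemo {
    public static void main(String[] args) {
        // Explicit pattern combined with locale-specific symbols:
        // grouping and decimal separators come from the US symbol set.
        DecimalFormat df = new DecimalFormat("#,##0.00",
                new DecimalFormatSymbols(Locale.US));
        System.out.println(df.format(5124.75));  // 5,124.75
    }
}
```

Swapping in DecimalFormatSymbols for another locale changes the separators without touching the pattern.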
Listing 8.3 Parsing currencies using NumberFormat

// Get the amount from a text field (5 124,75 €)
String salaryInFrench = salaryField.getText();
// Parse it back into a regular number
System.out.println("Salary is: " + currencyFormat.parse(salaryInFrench));
java.text.DateFormat
DateFormat is an abstract class that is used to format dates and times. When a locale is specified, it formats the dates accordingly. The following code formats a date independent of locale:
Date now = new Date();
String dateString = DateFormat.getDateInstance().format(now);
To format a date for a given locale:
DateFormat dateFormat = DateFormat.getDateInstance(DateFormat.DEFAULT, Locale.GERMANY);
dateFormat.format(now);
java.text.MessageFormat
MessageFormat is used to create concatenated messages in a language neutral way. It takes a set of input objects, formats them and inserts the formatted strings
into specific places in a given pattern. Listing 8.4 shows how to create a meaningful message by inserting string objects into specific locations in an already existing message. When you run the program, you will get the following output: John Doe logged in at 8/28/03 2:57 PM
Listing 8.4 Using MessageFormat to create a message

Object[] myObjects = {
    "John",
    "Doe",
    new java.util.Date(System.currentTimeMillis())
};
String messageToBeDisplayed = "{0} {1} logged in at {2}";
String message = java.text.MessageFormat.format(messageToBeDisplayed, myObjects);
System.out.println(message);
8.2 Internationalizing Struts Applications
The I18N features of the Struts framework build upon the Java I18N features. The I18N support in Struts applications is limited to the presentation of text and images.

I18N features of Struts Resource Bundle

The Struts Resource Bundle is very similar to the Java ResourceBundle. Struts has an abstract class called org.apache.struts.util.MessageResources and a subclass org.apache.struts.util.PropertyMessageResources which, as the name suggests, is based on property files. In spite of the similar functionality, the above Struts classes (surprisingly) do not inherit from their java.util counterparts. However, if you understand the working of java.util.ResourceBundle, you have more or less understood how the Struts Resource Bundles work. In general, Struts applications deal with internationalization in the following way:

1. The application developer creates several properties files (one per Locale) that contain the localized text for messages, labels and image file names to be displayed to the user. The naming convention for the Locale specific properties files is the same as that of java.util.ResourceBundle. The base properties file name (corresponding to en_US) is configured in the Struts Config file (refer to Chapter 3). For other Locales, Struts figures out the names of the properties files by the standard naming conventions.

2. The properties file should be placed so that the web application class loader can locate it. The classes in the WEB-INF/classes folder are loaded by the web application class loader and it is an ideal place to put the properties file.
Naming conventions of Java classes apply to the properties files too. Suppose that the classes in an application are packaged in the mybank.app1 package. If App1Messages.properties is placed in the mybank/app1 folder, it finally ends up in the WEB-INF/classes/mybank/app1 directory in the WAR.

3. When the Struts Controller Servlet receives a request, it checks the user's Locale (by looking up the HttpSession for the key org.apache.struts.action.LOCALE) and then looks up a resource bundle conforming to that locale and makes it available. Interested parties (read: your application logic) can then look up Locale specific messages using the Locale independent keys. For instance, an ActionError can be constructed in the ActionForm's validate() method as follows:
ActionError error1 = new ActionError("error.firstname.required");
The actual ActionError constructed has the Locale dependent message for the key error.firstname.required. Some of the commonly used constructors are:
ActionError(String key)
ActionError(String key, Object value)
ActionError(String key, Object values[])
The second and the third constructors are used if any parameters need to be passed in dynamically. These constructors take the key and an array of strings containing the replacement parameters to be used in the validation error messages. This is similar to the behavior of java.text.MessageFormat. E.g., the properties file contains a key value pair as:
validation.range.message={0} cannot be less than {1} characters
The ActionError to access this message is:
String[] strArray = {"First Name", "35"};
new ActionError("validation.range.message", strArray);
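Since the substitution behaves like java.text.MessageFormat, the text that would be rendered for the example above can be previewed with plain Java (the pattern string is the validation.range.message entry shown):

```java
import java.text.MessageFormat;

public class ErrorMessageDemo {
    public static void main(String[] args) {
        // Pattern corresponding to the validation.range.message entry
        String pattern = "{0} cannot be less than {1} characters";
        // Replacement parameters, as passed to the ActionError constructor
        String[] strArray = {"First Name", "35"};
        String message = MessageFormat.format(pattern, (Object[]) strArray);
        System.out.println(message);  // First Name cannot be less than 35 characters
    }
}
```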
I18N features of MessageTag
You have already used the MessageTag (<bean:message>), not in the context of I18N but for externalizing messages. We used this tag to retrieve messages from the external properties file. Now that the same properties files are put to use in internationalizing the web application, the MessageTag has donned the role of providing Locale specific text in the JSP. This is one of the most frequently used tags whether you are localizing the application or not. Since you already know the workings of this tag, we will not bore you with more verbosity. Instead we will compare this Struts tag with the JSTL equivalents. As has been stated earlier in Chapter 6, the word on the street is that the Struts tags should preferably be replaced with their JSTL equivalents.

I18N features of HTML Tag Library

The Struts HTML tag library is the key to rendering JSPs as HTML and is filled with tags offering I18N features. Look for the tag attributes whose name ends with key. For instance, the <html:img> tag offers srcKey to look up the src of the image and altKey to look up the alt text from the message resource bundle.

I18N features of LookupDispatchAction

As you already know, LookupDispatchAction offers excellent capability to handle the business logic in a locale independent manner. Certain restrictions apply in that it can be used only with grey buttons or HTML links and not with image buttons. More details are in Chapter 4.
8.3 Internationalizing Tiles Applications
In Chapter 7 you saw how to use Tiles to organize your JSP pages. The Tiles framework provides an easy way to add tiles or templates to a JSP page to present content in a dynamic fashion. The Tiles framework, just like Struts, can be localized to provide different tiles based on a user's preferred locale. For example, the header tile in Chapter 7 could be replaced with a different header that corresponds to a specific locale. It could contain an area-specific flag, for instance, or simply a different background color. A Tiles application has a Tiles definition file (e.g. /WEB-INF/tiles-defs.xml) that defines the structure of a JSP page using various tiles for the header, menu, body, footer etc. In the case of a localized Tiles application, there will be one such file per locale along with the default tiles-defs.xml file. For example, if your application supports US English and French, there will be two definition files, one for each locale, as well as a default one – tiles-defs_fr.xml, tiles-defs_en.xml and tiles-defs.xml. The naming conventions for the Tiles definition files are the same as for a java.util.ResourceBundle class, as explained earlier in the chapter. Again, just as in a localized Struts application, the session attribute
Action.LOCALE_KEY is looked up for the user's preferred or default locale and the appropriate definition file is loaded. For instance, if the default file tiles-defs.xml is:
<tiles-definitions>
  <definition name="foo.bar" path="MybankLayout.jsp">
    <put name="title"  value="My Bank" />
    <put name="header" value="/header.jsp" />
    <put name="menu"   value="/menu.jsp" />
    <put name="body"   value="/body.jsp" />
    <put name="footer" value="/footer.jsp" />
  </definition>
</tiles-definitions>
Then the localized Tiles definition file for French is:
<tiles-definitions>
  <definition name="foo.bar" path="MybankLayout.jsp">
    <put name="title"  value="Ma Banque" />
    <put name="header" value="/header_fr.jsp" />
    <put name="menu"   value="/menu_fr.jsp" />
    <put name="body"   value="/body_fr.jsp" />
    <put name="footer" value="/footer_fr.jsp" />
  </definition>
</tiles-definitions>
This approach is justified if you use different JSPs per locale. However, if the JSPs themselves are fully I18N capable, meaning a single JSP can adapt itself to render locale sensitive UI, then the only difference between the tiles definitions for the two locales is the title. The need for different definition files in that case could be eliminated if there were a mechanism to specify the key to the message resource bundle in the <put> element above. Unfortunately such a mechanism doesn't seem to exist at the time of writing and hence you are left with creating definitions for each locale.
8.4 Processing Localized Input
Localized input is data input in a native language using locale specific formats. How does your back-end Java code process data input in a native language? Let us consider a simple form with two fields, fullName and monthlySalary as shown below.
public class CustomerForm extends ActionForm {
    private String fullName = null;
    private double monthlySalary = 0.0;
    ...
}
John Doe enters his monthly salary as 5431.52 and submits the form. That's it; the form fields are populated nicely and the application works without a hitch. The conversion of the monthly salary from a String to a double is automatically taken care of by Struts, and John Doe won't have any problems with the application. What happens if the same application is viewed by a user in France and he decides to enter the same amount in the French format as 5 431,52? When the French user submits the form, the monthlySalary attribute in CustomerForm ends up being populated with 0.0 instead of 5431.52. Why so? When the form is submitted, the RequestProcessor populates the JavaBeans properties of the ActionForm with the request parameters by using the RequestUtils and BeanUtils classes. The actual population is done by the BeanUtils.populate() method. That method tries to parse the String "5 431,52" and assign it to monthlySalary – a double field – without caring much for the Locale of the user. This obviously throws an exception, on which the default action is to set 0.0 in the monthlySalary field. What is the solution then? How can you make Struts applications process localized input? Since the BeanUtils class does not check the locale at the time of populating the form, the only way out of this situation is to make the monthlySalary field a String instead of a double. Now BeanUtils does not try to parse a double from the String. Instead the value is assigned AS IS. A customized routine has to be written to convert the String into a double in a Locale dependent manner.
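Such a routine can lean on java.text.NumberFormat. A minimal sketch follows; the method name is our own, and for simplicity the input below omits the French grouping separator (which Java expects to be a non-breaking space rather than a plain space):

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class LocalizedParse {

    // Illustrative helper: convert a localized string to a double.
    // In a Struts ActionForm this would be called on the String property
    // before handing the value to the business logic.
    static double parseAmount(String input, Locale locale) {
        try {
            NumberFormat nf = NumberFormat.getNumberInstance(locale);
            return nf.parse(input).doubleValue();
        } catch (ParseException e) {
            throw new IllegalArgumentException(
                "Not a valid number for locale " + locale, e);
        }
    }

    public static void main(String[] args) {
        // French decimal separator is a comma
        System.out.println(parseAmount("5431,52", Locale.FRANCE));  // 5431.52
    }
}
```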
8.5 Character encodings
Earlier, applications were built for one language. Those were the days of "code pages". Code pages described how binary values mapped to human readable characters. A currently executing program was considered to be in a single code page. These approaches were fine until internationalization came along. Then came the issue of how to represent multiple character sets and encodings for an application. Hence came character sets and encodings. Character sets are sets of text and graphic symbols mapped to positive integers. ASCII was one of the first character sets to be used. ASCII, though efficient, was good at representing only US English.
A character encoding, as mentioned earlier, maps a character to fixed width units. It also defines ordering rules and byte serializing guidelines. Different character sets have multiple encodings. For example, Java programs represent Cyrillic character sets using KOI8-R or KOI8-U encodings. Unicode enables us to write multilingual applications. Other examples of encodings include ISO 8859, UTF-8 etc. UTF-8, a Unicode Transformation Format, is used to encode 16-bit Unicode characters as one to four bytes. A UTF-8 byte is equivalent to 7-bit ASCII if its high order bit is zero. You might have come across many JSP pages which have a line that looks like:
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
Here, charset=UTF-8 indicates that the page uses a response encoding of UTF-8. When internationalizing the web tier, you need to consider three types of encodings:

• Request encoding
• Page encoding
• Response encoding
Request encoding deals with the encoding used to encode request parameters. Browsers typically send the request encoding with the Content-type header. If this is not present, the servlet container will use ISO-8859-1 as the default encoding.

Page encoding is used in JSP pages to indicate the character encoding for that file. You can find the page encoding from:

• The Page Encoding value of a JSP property group whose URL pattern matches the page. (JSP property groups are described in the JSP specification.)
• The pageEncoding attribute in a JSP page, specified along with the page directive. If the value of the pageEncoding attribute differs from the value specified in the JSP property group, a translation error can occur.
• The CHARSET value of the contentType attribute in the page directive.
If none of these encodings is mentioned, then the default encoding of ISO-8859-1 is used.

Response encoding is the encoding of the text response sent by a servlet or a JSP page. This encoding governs the way the output is rendered on a client's browser, based on the client's locale. The web container sets a response encoding from one of the following:
• The CHARSET value of the contentType attribute in the page directive.
• The encoding in the pageEncoding attribute of the page directive.
• The Page Encoding value of a JSP property group whose URL pattern matches the page.
If none of these encodings is mentioned, then the default encoding of ISO-8859-1 is used.

Early on, when internationalization of computer applications became popular, there was a boom in the number of encodings available to the user. Unfortunately these encodings were unable to cover multiple languages. For instance, the European Union was not able to cover all the European languages in one encoding, resulting in having to create multiple encodings to cover them. This further worsened the problem, as multiple encodings could use the same number to represent different characters in different languages. The result: higher chances of data corruption.

A big company had its applications working great with a default locale of US English, until it decided to go global. One of the requirements was to support Chinese characters. The application code was modified accordingly, but each time the application ran, it was just not able to produce meaningful output, as the text seemed to be distorted. The culprit was the database encoding. Chinese characters, just like Korean and Japanese, have writing schemes that cannot be represented by single byte code formats such as ASCII and EBCDIC. These languages need at least a Double Byte Character Set (DBCS) encoding to handle their characters. Once the database was updated to support DBCS encoding, the applications worked fine.

These problems led to the creation of a universal character encoding format called Unicode. Unicode is a 16-bit character encoding that assigns a unique number to each character in the major languages of the world. Though it can officially support up to 65,536 characters, it has also reserved some code points for mapping into additional 16-bit planes, with the potential to cope with over a million unique characters. Unicode is more efficient as it defines a standardized character set that represents most of the commonly used languages. In addition, it can be extended to accommodate any additions.
Unicode characters are represented as escape sequences of the form \uXXXX, where XXXX is the character's 16-bit representation in hexadecimal, in cases where a Java program's source encoding is not Unicode compliant.

Struts and character encoding

Setting the character encoding in the web application requires the following steps:
1. Configure the servlet container to support the desired encoding. For instance, you have to set the servlet container to interpret the input as UTF-8 for Unicode. This configuration is vendor dependent.

2. Set the response content type to the required encoding (e.g. UTF-8). In Struts 1.1, this information is specified in the <controller> element in struts-config.xml using the contentType attribute.

3. This can also be set in the JSPs with the @page directive as follows:

<%@ page contentType="text/html; charset=UTF-8" %>

4. Next add the following line in the HTML <head>:

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

5. Make sure you are using the I18N version rather than the US version of the JRE. (If you are using the JDK, this problem may not arise.)

6. Make sure that the database encoding is also set to Unicode.

NOTE: Setting <html:html locale="true"> doesn't set the encoding of the stream. It is only a signal to Struts to use the locale-specific resource bundle.

native2ascii conversion

Java programs can process only those files that are encoded in Latin-1 (ISO 8859-1) encoding or in Unicode encoding. Files in any other encoding will not be processed. The native2ascii tool is used to convert such non-Latin-1 or non-Unicode files into a Unicode encoded file. Any characters that are not in ISO 8859-1 will be encoded using Unicode escapes. For example, if you have a file encoded in a different language, say myCyrillicFile in Cyrillic, you can use the native2ascii tool to convert it into a Unicode encoded file as follows:
native2ascii –encoding UTF-8 myCyrillicFile myUnicodeFile
You can use other encodings besides UTF-8 too. Use the above tool on the Struts properties files (message resource bundles) containing non-Latin-1 encoding. Without this conversion, the Struts application (or Java for that matter) will not be able to interpret the encoded text. Consequently <bean:message> and <html:errors/> will display garbage.
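The escaping that native2ascii performs can be sketched in a few lines of Java. This is a simplified illustration of the idea only (the real tool also handles input encodings, streams and the reverse direction); the class and method names are ours:

```java
public class Native2AsciiSketch {
    // Convert a string to pure ASCII, escaping anything above 0x7F as \uXXXX.
    // This is essentially what native2ascii does to each character of a file.
    public static String toAscii(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c <= 0x7F) {
                sb.append(c);                                // plain ASCII: keep as-is
            } else {
                sb.append(String.format("\\u%04x", (int) c)); // escape: \u + 4 hex digits
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toAscii("price"));   // price (unchanged)
        System.out.println(toAscii("\u0416"));  // \u0416 (a Cyrillic letter, escaped)
    }
}
```

The resulting file contains only ASCII characters, which is why Java's property-file loader can read it regardless of platform encoding.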
Chapter 8. Struts and I18N
8.6 Summary
In this chapter you started with what I18N and L10N are, their need and their advantages. You also got a quick overview of the Java and Struts Internationalization API. Then you looked at the various ways to internationalize the web tier using the features in Struts and Tiles. You also saw how to process localized input using Struts applications.
Chapter 9

Struts and Exception Handling
In this chapter:
1. You will learn the basics of exception handling
2. You will understand exception handling from the servlet specification perspective
3. You will understand the exception handling facilities in Struts 1.1
4. We will develop a simple yet robust utility to log exceptions
5. We will cover strategies to centralize logging in production environments

Exception handling is a crucial but often overlooked part of web application development that has ramifications far beyond deployment. You know how to handle exceptions using the built-in Java construct to catch one and handle it appropriately. But what is appropriate? The basic rationale behind exception handling is to catch errors and report them. What is the level of detail needed in reporting the exception? How should the user be notified of the exception? How should customer support handle problem reports and track and trace the exception from the logs? As a developer, where do you handle the exceptions? These are some of the major questions we will answer in this chapter, first in a generic way and then as applicable to Struts applications.

Under normal circumstances when you catch an exception in a method, you print the stack trace using the printStackTrace() method or declare the method to throw the exception. In a production system, when an exception is thrown it's likely that the system is unable to process the end user's request. When such an exception occurs, the end user normally expects the following:

- A message indicating that an error has occurred
- A unique error identifier that he can use while reporting it to customer support
- Quick resolution of the problem

The customer support team should have access to back-end mechanisms to resolve the problem. The customer service team should, for example, receive immediate error notification, so that the service representative is aware of the problem
before the customer calls for resolution. Furthermore, the service representative should be able to use the unique error identifier (reported by the user) to look up the production log files for quick identification of the problem, preferably down to the exact line number (or at least the exact method). In order to provide both the end user and the support team with the tools and services they need, you as a developer must have a clear picture, as you are building the system, of everything that can go wrong with it once it is deployed.
9.1 Exception Handling Basics
It is common for developers to put System.out.println() calls in the code to track exceptions and the flow of execution. While they come in handy, they have to be avoided for the following reasons:

1. System.out.println is expensive. These calls are synchronized for the duration of the disk I/O, which significantly slows down the application.

It is pretty customary these days to use a utility like Log4J for logging. With the right coding conventions in place, a logging utility will pretty much take care of recording any type of message, whether a system error or a warning. However it is up to you as a developer to make the best use of the utilities. It requires a lot of forethought to handle exceptions effectively. In this chapter we will use Log4J to log exceptions effectively. Hence we will review Log4J before proceeding to look at some commonly accepted principles of exception handling in Java.
9.2 Log4J crash course
Log4J is the logging implementation from Apache's Jakarta project. It has been around since long before JDK logging appeared and quite naturally has a larger developer base. A lot of material is freely available online if you want to dig deeper into Log4J, so we have held back from a detailed treatment here. As with any logging mechanism, this library provides powerful capabilities to declaratively control logging and the level of logging. In Log4J, all logging occurs through the Logger class in the org.apache.log4j package. The Logger class supports five levels of logging: FATAL, ERROR, WARN, INFO and DEBUG. Without Log4J, you would perhaps use a boolean flag to control the logging. With such a boolean flag, there are only two states – logging or no logging. In Log4J the levels are defined to fine tune the amount of logging. Here is how you would use Log4J:
Logger logger = Logger.getLogger("foo.bar");
logger.debug("This is a debug message");
The code above first obtains the Logger instance named foo.bar and logs a message at DEBUG level. You can declaratively turn off logging for messages at a lower level than WARN. This means that messages logged at INFO and DEBUG level will not be logged. Logged messages always end up in a destination such as a file or a database table. The destination of the log message is specified using an Appender. The Appender can represent a file, the console, an email address or something as exotic as a JMS channel. If you need a destination that is not supported by the classes out of the box, you can write a new class that implements the Appender interface. Appenders can be configured at startup in a variety of ways. One way to configure them is through an XML file. An XML file is shown below.
<appender name="Mybank-Warn" class="org.apache.log4j.FileAppender">
  <param name="Threshold" value="WARN"/>
  <param name="File" value="./logs/mybank-warnings.log"/>
  <param name="Append" value="false"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d [%x][%t] %-5p %c{2} - %m%n"/>
  </layout>
</appender>
<category name="foo.bar" additivity="false">
  <appender-ref ref="Mybank-Warn"/>
</category>
The above XML, when translated to simple English, reads as follows: the Appender named Mybank-Warn logs messages to the file mybank-warnings.log. Only messages with a threshold of WARN or higher are logged. The format of the message is as specified by the PatternLayout. The format of the output message is specified using a Layout. Standard classes for specifying the layout, like PatternLayout, are used most of the time, and the format is declaratively specified using symbols like %d, which instructs Log4J to include the date and time in the log, %m, the actual message itself, and so on.

As you saw earlier, logging is performed through a named Logger instance. If you are wondering how the Logger knows which Appender to log to, it is the <category> element in the above XML that provides the link between the two. The Logger uses the <category> setting in the XML to get this information. The <category> in the above XML is called foo.bar. Recall that we logged using a Logger named foo.bar. The foo.bar Logger gets the Mybank-Warn FileAppender through the foo.bar category setting in the XML, and the messages end up in the file mybank-warnings.log. There can be more than one appender associated with a category. This implies that messages logged with a Logger can potentially end up in multiple locations if needed.
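The threshold filtering described above can be illustrated with a toy model. The sketch below is our own simplification of the idea, not Log4J code; it only shows how comparing a message's level against a configured threshold decides whether the message is logged:

```java
public class LevelSketch {
    // Log4J-style levels in ascending order of severity
    public enum Level { DEBUG, INFO, WARN, ERROR, FATAL }

    private final Level threshold;

    public LevelSketch(Level threshold) {
        this.threshold = threshold;
    }

    // A message is logged only when its level meets or exceeds the threshold,
    // which is how the Threshold param on the appender above filters messages
    public boolean isLoggable(Level messageLevel) {
        return messageLevel.ordinal() >= threshold.ordinal();
    }

    public static void main(String[] args) {
        LevelSketch logger = new LevelSketch(Level.WARN);
        System.out.println(logger.isLoggable(Level.DEBUG)); // false: filtered out
        System.out.println(logger.isLoggable(Level.ERROR)); // true
    }
}
```

With a threshold of WARN, DEBUG and INFO messages are dropped while WARN, ERROR and FATAL get through, exactly as the Mybank-Warn appender configuration behaves.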
9.3 Principles of Exception Handling
The following are some of the generally accepted principles of exception handling:

1. If you can't handle an exception, don't catch it.
2. Catch an exception as close as possible to its source.
3. If you catch an exception, don't swallow it.
4. Log an exception where you catch it, unless you plan to re-throw it.
5. Preserve the stack trace when you re-throw the exception by wrapping the original exception in the new one.
6. …
7. If you are programming application logic, use unchecked exceptions to indicate an error from which the user cannot recover. If you are creating third party libraries to be used by other developers, use checked exceptions for unrecoverable errors too.
8. …
9. Throw application exceptions as checked exceptions and unrecoverable system exceptions as unchecked exceptions.
10. Structure your methods according to how fine-grained your exception handling must be.

Principle 1 is obviously in conflict with Principle 2. The practical solution is a tradeoff between how close to the source you catch an exception and how far you let it fall before you've completely lost the intent or content of the original exception. Principles 3, 4, and 5 address a problem developers face when they catch an exception but do not know how to handle it, and hence throw a new exception of the same or a different type. When this happens, the original exception's stack trace is lost. Listing 9.1 shows such a scenario. The SQLException is caught on Line 15 and re-thrown as an application-specific UpdateException on Line 16. In the process, the stack trace with valuable information about the SQLException is lost. Thus the developer can only trace back to Line 16, where the UpdateException is thrown, and not beyond that. (This is the best case scenario, with compiler debug flags turned on. If the HotSpot compiler was used, the stack trace would only have the method name without any line number.) Listing 9.2 shows an almost identical scenario, but the actual exception is logged to the console. This is not a good choice and is sometimes not feasible for the reasons cited earlier in this section.
Listing 9.1 Losing Exception stack trace
10 public void updateDetails(CustomerInfo info) throws UpdateException
11 {
12    try {
13       CustomerDAO custDAO = CustDAOFactory.getCustDAO();
14       custDAO.update(info);
15    } catch (SQLException e) {
16       throw new UpdateException("Details cannot be updated");
17    }
18 }
Listing 9.2 Losing Exception stack trace
public void updateDetails(CustomerInfo info) throws UpdateException {
   try {
      CustomerDAO custDAO = CustDAOFactory.getCustDAO();
      custDAO.update(info);
   } catch (SQLException e) {
      e.printStackTrace();
      throw new UpdateException("Details cannot be updated");
   }
}
Listing 9.3 Preserving Exception stack trace
public void updateDetails(CustomerInfo info) throws UpdateException {
   try {
      CustomerDAO custDAO = CustDAOFactory.getCustDAO();
      custDAO.update(info);
   } catch (SQLException e) {
      throw new UpdateException(e);
   }
}
A better approach is shown in Listing 9.3. Here, the SQLException is wrapped in the UpdateException. The caller of updateDetails() can catch the UpdateException and get at the embedded SQLException. Principles 7, 8 and 9 in the above list pertain to the discussion of using checked vs. unchecked exceptions. Checked exceptions are those that extend java.lang.Exception. If your method throws checked exceptions, then the caller is forced to catch these exceptions at compile time or declare them in the throws clause of the method. On the other hand, unchecked exceptions are those that extend java.lang.RuntimeException, generally referred to as runtime exceptions. If your method throws a runtime exception, the caller of the method is not forced to catch the exception or add it to the method signature at compile time.
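The wrapping pattern of Listing 9.3 can be exercised end to end. The snippet below is a self-contained sketch (the exception class and the simulated DAO failure are our own illustration) showing that the embedded SQLException survives the re-throw and remains reachable via the wrapper:

```java
import java.sql.SQLException;

public class ChainingDemo {
    // Application exception that wraps its cause, as in Listing 9.3
    public static class UpdateException extends Exception {
        public UpdateException(Throwable cause) {
            super(cause);
        }
    }

    public static void updateDetails() throws UpdateException {
        try {
            // Simulate the DAO failing with a low-level exception
            throw new SQLException("connection refused");
        } catch (SQLException e) {
            // Wrap instead of discarding: the original stack trace is preserved
            throw new UpdateException(e);
        }
    }

    public static void main(String[] args) {
        try {
            updateDetails();
        } catch (UpdateException ue) {
            // The original SQLException is still reachable through the wrapper
            System.out.println(ue.getCause() instanceof SQLException); // true
        }
    }
}
```

Printing the wrapper's stack trace also prints the "Caused by:" section for the SQLException, so nothing about the original failure is lost.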
A rule of thumb is to model application exceptions as checked exceptions and system exceptions as unchecked exceptions. The code below is an example of an application exception.
if (withdrawalAmt > accountBalance) {
   throw new NotEnoughBalanceException(
      "Your account does not have enough balance");
}
When the account does not have enough balance for the requested withdrawal amount, the user gets a NotEnoughBalanceException. The user can decide to withdraw a smaller amount. Notice that the application exception is not logged. In the case of application exceptions, the developer explicitly throws them in the code and the intent is very clear. Hence there is no need for context (log or stack trace).

Principle 10 is about the use of debug flags during compilation. At compile time it is possible to tell the JVM to ignore line number information. Byte code without the line information is optimized for HotSpot or server mode and is the recommended way of deployment for production systems. In such cases, the exception stack traces do not provide line number information. You can overcome this handicap by refactoring your code during development and creating smaller, modular methods, so that guessing the line numbers for the exceptions is relatively easy.
9.4 The cost of exception handling
In the example used earlier to illustrate application exceptions, we are checking if the withdrawal amount is greater than the balance before throwing the exception. This is not something you should be doing every time. Exceptions are expensive and should be used exceptionally. In order to understand some of the issues involved, let us look at the mechanism used by the Java Virtual Machine (JVM) to handle exceptions. The JVM maintains a method invocation stack containing all the methods that have been invoked by the current thread, in the reverse order of invocation. In other words, the first method invoked by the thread is at the bottom of the stack and the current method is at the top. Actually it is not the method itself that is present in the stack; instead, a stack frame representing the method is added to the stack. The stack frame contains the method's parameters, return value, local variables and JVM specific information. When an exception is thrown in the method at the top of the stack, code execution stops and the JVM takes over. The JVM searches the current method for a catch clause for the exception thrown or one of the parent classes of the thrown exception. If one is
not found, then the JVM pops the current stack frame and inspects the calling method (the next method in the stack) for a catch clause for the exception or its parents. The process continues until the bottom of the stack is reached. In summary, exception handling requires a lot of time and effort on the part of the JVM. Exceptions should be thrown only when there is no meaningful way of handling the situation. If these conditions can be handled programmatically in a meaningful manner, then throwing exceptions should be avoided. For instance, if it is possible to handle the problem of the withdrawal amount exceeding the balance in some other way, that has to be chosen over throwing an application exception.

An example of a system exception is a ConfigurationException, which might indicate that the data load during start up failed. There is really nothing a user or even customer support could do about it, except correct the problem and restart the server. Hence it qualifies as a system exception and can be modeled as a runtime exception. Certain exceptions like SQLException might indicate a system error or an application problem depending on the case. In either case, it makes sense to model SQLException as a checked exception, because it is not thrown from your application logic; rather it is thrown in a third party library, and the library developer wants you to explicitly handle such a scenario.
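A related cost, not spelled out above, is that merely constructing a Throwable captures the entire method invocation stack via fillInStackTrace(). For exceptions used purely as signals, where the trace is never examined, one known trick is to override that method. This sketch is our own illustration, not from the book, and trades away all diagnostic information:

```java
public class CheapExceptionDemo {
    // An exception that skips the expensive stack capture. Use only when the
    // exception is a pure control signal and its trace will never be needed.
    public static class CheapException extends Exception {
        public CheapException(String msg) {
            super(msg);
        }

        @Override
        public synchronized Throwable fillInStackTrace() {
            return this; // do not walk the JVM's method invocation stack
        }
    }

    public static void main(String[] args) {
        CheapException e = new CheapException("signal");
        // No trace was captured at construction time
        System.out.println(e.getStackTrace().length); // 0
    }
}
```

For ordinary application and system exceptions you should not do this; losing the trace defeats Principle 5. It merely shows where the construction cost lives.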
9.5 JDK 1.4 and exception handling
If you are modeling the UpdateException as an unchecked exception, you will have to extend RuntimeException. In addition, if you are using JDK 1.3.x or lower, you will also have to provide the wrapped exception attribute in your own exception class. From JDK 1.4 onwards, you can wrap the "causative exception" in the parent class, which stores it as a java.lang.Throwable attribute, thus allowing you to carry the causative exception around. For example, SQLException is the causative exception in Listing 9.3. The Throwable class has a new method, getCause(), which returns the wrapped exception if one exists. This can result in an exception chain, since the cause itself can have a cause. Prior to 1.4, exception classes had their own non-standard exception chaining mechanisms. For instance, RemoteException was used to carry the actual exception across different JVMs or from the EJB tier to the web tier. As of 1.4, all of these classes have been retrofitted to use the standard exception chaining mechanism.

Additional exception handling features in JDK 1.4 include programmatic access to the stack trace. This is a boon for real time error monitoring and alert facilities. Often these systems need to manually parse the stack dump for keywords; this has been made much easier. One can invoke the getStackTrace()
method on the Exception (or Throwable) and get back an array of StackTraceElements. Each StackTraceElement provides the following methods:

- getClassName
- getFileName
- getLineNumber
- getMethodName
- isNativeMethod

By calling the above methods, you can display the stack trace in any format you like. In addition, elegant error monitoring systems can be written. For instance, an error monitoring system could alert the appropriate support team for a sub system by intelligently analyzing the stack trace. The following code snippet can be used with JDK 1.4:
StackTraceElement[] elements = e.getStackTrace();
for (int i = 0, n = elements.length; i < n; i++) {
   if (elements[i].getClassName().equals("LegacyAccessEJB") &&
       elements[i].getMethodName().equals("invokeCOBOL")) {
      // Alert the COBOL support team
   }
}
This code snippet checks whether the exception originated in LegacyAccessEJB while invoking the method named invokeCOBOL, and if so alerts the COBOL support team. Obviously a real decision tree is not as simple as shown, but at least this removes the headache of parsing the trace text for the same information.
9.6 Exception handling in Servlet and JSP specifications
In the previous sections, you looked at the general principles of exception handling without a J2EE tilt. In this section, we will cover what the servlet specification has to say about exception handling. Consider the doGet() method signature in HttpServlet.
public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException
The above method signature implies that a Servlet or a JSP (and hence a web application) is only allowed to throw:
- ServletException or its subclasses
- IOException or its subclasses
- RuntimeExceptions

All other checked exceptions have to be handled in the Servlet/JSP code in one of the following ways:

- Catch the checked exception, log the error message and (or) take any business related action.
- Wrap the exception in a ServletException and throw the ServletException. (ServletException has overloaded constructors to wrap the actual exception.)

The servlet specification provides exception-handling support through web.xml. In web.xml, you can declare <error-page> to handle exceptions that are thrown but not caught.
<error-page>
  <exception-type>UnhandledException</exception-type>
  <location>UnhandledException.jsp</location>
</error-page>
What this means is that if an exception of type UnhandledException is thrown from your web application but not caught anywhere, then the user gets to see UnhandledException.jsp. This works well for ServletException, IOException, RuntimeException and their subclasses. If the UnhandledException is a subclass of ServletException and none of the error-page declarations containing an exception-type fits the class hierarchy of the thrown exception, then the servlet container gets the wrapped exception using the ServletException.getRootCause() method. The container then attempts again to match an error-page declaration. A different route is needed if the UnhandledException is a checked exception but not a subclass of ServletException or IOException: you have to throw a ServletException (or a subclass) wrapping the UnhandledException in it, and the servlet container does the rest of the magic.

There are times when the user cannot see a page due to incorrect access rights, or the page simply does not exist. The servlet sends an error response with an appropriate HTTP error code. For instance, 404 corresponds to Page Not Found, 500 corresponds to Internal Server Error and so on. You can also assign JSPs to default HTTP error codes as follows.
<error-page> <error-code>404</error-code>
<location>exceptions/Page404.jsp</location> </error-page>
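The matching process described above (try the exception's own class, then its superclasses, then retry with the wrapped root cause) can be sketched in plain Java. This is our own illustration of the algorithm the specification describes, not actual container code; the method name and the map-based registry are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class ErrorPageMatcher {
    // Walk the thrown exception's class hierarchy looking for a declared
    // error page; if nothing matches, unwrap the cause and try again,
    // analogous to the container's use of ServletException.getRootCause()
    public static String findErrorPage(Throwable t, Map<Class<?>, String> pages) {
        for (Class<?> c = t.getClass(); c != null; c = c.getSuperclass()) {
            String page = pages.get(c);
            if (page != null) {
                return page;
            }
        }
        Throwable cause = t.getCause();
        return (cause != null) ? findErrorPage(cause, pages) : null;
    }

    public static void main(String[] args) {
        Map<Class<?>, String> pages = new HashMap<Class<?>, String>();
        pages.put(IllegalStateException.class, "/state-error.jsp");

        // The wrapper itself has no mapping, but its cause does
        RuntimeException wrapper = new RuntimeException(new IllegalStateException());
        System.out.println(findErrorPage(wrapper, pages)); // /state-error.jsp
    }
}
```

The same lookup returns null when neither the exception, its ancestors, nor any cause in the chain has a declared page, which is when the container falls back to its default error response.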
Similarly, exceptions can occur in JSPs, in scriptlets and custom tags, and these can throw runtime exceptions. In addition, scriptlets can throw ServletException and IOException, since a JSP gets translated into the body of the _jspService() method and the signature of _jspService() is the same as that of doGet().
public void _jspService(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException
Tags, however, throw JspException in their tag callback methods (doStartTag(), doEndTag() and so on). JspException is a direct subclass of java.lang.Exception and has no relationship with ServletException or IOException. The _jspService() method is container dependent, but its contract is to catch all those exceptions and forward the request to the errorPage specified by the JSP. Hence it is a best practice to assign error pages in JSPs with the declaration: <%@ page errorPage="/error.jsp" %>. When forwarding to the exception page specified by the errorPage setting shown above, the exception describing the error is stored as a request attribute with the key javax.servlet.jsp.jspException. If the JSP assigned to handle the exceptions has the directive <%@ page isErrorPage="true" %> at the top of the page, then the exception is provided as the implicit scripting variable named exception.
9.7 Exception handling – Struts way
ServletException, IOException, RuntimeException and their subclasses can be declaratively mapped to appropriate JSP files through the web.xml settings. What about the other exceptions? Fortunately, since Struts 1.1 you can assign JSP files for other checked exceptions too. Let us start by examining the features in Struts 1.1 for exception handling.

Declarative exception handling

Consider the method signature of the execute() method in the Struts Action class.
public ActionForward execute(ActionMapping mapping, ActionForm form, HttpServletRequest request,
HttpServletResponse response) throws java.lang.Exception
The execute() method has java.lang.Exception in its throws clause. Hence you don’t have to handle the exceptions explicitly in Action. You can let them fall through. Consider the execute() method from an Action class.
public ActionForward execute(...) throws java.lang.Exception {
   ActionForward nextPage = null;
   ...
   userControllerEJB.createUser(info);
   ...
   nextPage = mapping.findForward("success");
   return nextPage;
}
The execute() method invokes the createUser() method on UserControllerEJB, a session EJB that is responsible for creating users. The createUser() method throws two exceptions: RemoteException and DuplicateUserException. If the user cannot be created because another user with the same id exists, then the session EJB throws DuplicateUserException. A RemoteException is thrown if the user cannot be created because of problems in looking up or creating the session EJB. If everything goes fine, then the user is forwarded to the ActionForward identified by success. However, we have made no attempt to catch and handle the exceptions. Instead we have deferred their handling to Struts through the declarative exception handling facility.
Listing 9.5 Declarative Exception Handling in Struts
<struts-config>
  <action-mappings>
    <action path="/submitCustomerForm"
            type="mybank.example.CustomerAction"
            name="customerForm"
            scope="request"
            input="/CustomerDetails.jsp">
      <exception key="database.error.duplicate"
                 path="/UserExists.jsp"
                 type="mybank.account.DuplicateUserException"/>
      <exception key="rmi.error"
                 type="java.rmi.RemoteException"
                 path="/rmierror.jsp"/>
    </action>
  </action-mappings>
</struts-config>
Listing 9.5 shows the Struts config file with declarative exception handling for the two exceptions: DuplicateUserException and RemoteException. For each exception, an <exception> element is defined in the action mapping. The path attribute in the <exception> element specifies the page shown to the user upon that exception. For instance, if a DuplicateUserException is thrown when submitting the modified user profile, the controller will forward control to the UserExists.jsp page. The key attribute is used to retrieve the error message template from the associated resource bundle. Since the <exception> is local to the action mapping, it applies only for that action invocation. As you might have already noticed, the J2EE and Struts ways of declaratively handling exceptions are complementary to one another. In Listing 9.5 the declarative exception handling was local to the CustomerAction. You can add global declarative exception handling too. For instance, if you want to handle the RemoteException the same way across the board, use the following approach:
<struts-config>
  <global-exceptions>
    <exception key="rmi.error"
               type="java.rmi.RemoteException"
               path="/rmierror.jsp"/>
  </global-exceptions>
</struts-config>
Before forwarding to the page indicated in the <exception> element, Struts sets the exception as a request attribute with the name org.apache.struts.action.EXCEPTION. (This is the value of Globals.EXCEPTION_KEY; Globals is a Java class in the org.apache.struts package.) The exception can be retrieved in the error page by using the method: request.getAttribute(Globals.EXCEPTION_KEY).
Using the ExceptionHandler

Apart from key, type and path, the <exception> element also takes several optional attributes, of which handler is a significant one. It is the fully qualified class name of the exception handler for that exception. By default org.apache.struts.action.ExceptionHandler is the class used. You
can create a custom handler by extending the ExceptionHandler and overriding the execute() method. The execute() method has the following signature:
public ActionForward execute(Exception ex, ExceptionConfig ae, ActionMapping mapping, ActionForm formInstance, HttpServletRequest request, HttpServletResponse response) throws ServletException
To understand the ExceptionHandler, you have to understand how the RequestProcessor deals with exceptions. As it does everything else, the RequestProcessor invokes the execute() method on the Action instance. Hence it is natural that an exception thrown in execute() is caught by the RequestProcessor. On receiving the exception, here is what the RequestProcessor does:

- It checks to see if the exception has an associated <exception> declaration in either local or global scope.
- If none exists, then the exception is thrown as is if it is a ServletException, IOException or one of their subclasses, and is wrapped in a ServletException and thrown otherwise.
- If there is an <exception> element declared, then it retrieves the handler class, instantiates it and invokes its execute() method.

The default exception handler returns the path attribute of the <exception> element as an ActionForward. As you will see later in this section, you can use a custom ExceptionHandler to centralize exception logging in the web tier.

When not to use declarative exception handling

Very frequently you would like to generate an ActionError and display it to the user instead of an exception page. Let us look back at Listing 9.5 for a moment. When RemoteException is thrown, the user sees rmierror.jsp. This makes sense, since RemoteException is tantamount to a system exception and the only thing you can do is ask the user to start all over again. However, it does not make sense to ask the user to start all over when DuplicateUserException is thrown, since this is an application exception from which the user has a recovery path. A better option is to show this as an ActionError and give the user a chance to change the user id. For situations like this, you have to resort to programmatic exception handling.
Listing 9.6 shows the execute() method with programmatic exception handling. It catches the DuplicateUserException and creates an ActionErrors object to hold the error. The ActionErrors is set into the HTTP request as an attribute and then the same form is shown again. The last part, showing the same page, is achieved by the line using mapping.getInput(). In this case you have to remove the declarative exception handling from the Struts config file, since the exception is being explicitly handled in the code. If you use declarative exception handling, the default ExceptionHandler will still generate an ActionErrors object; however, that ActionErrors is associated with the page rather than a particular field. If you don't need field-level association, declarative exception handling is preferred over programmatic exception handling. Just set the initial JSP as the path for the <exception> and use <html:errors/> on the JSP, and you get the exception as if it were an ActionError without any effort on your side.
Listing 9.6 Alternative to declarative exception handling
public ActionForward execute(...) throws java.lang.Exception {
   ActionForward nextPage = null;
   try {
      ...
      userControllerEJB.createUser(info);
      ...
      nextPage = mapping.findForward("success");
   } catch (DuplicateUserException due) {
      ActionErrors errors = new ActionErrors();
      ActionError error = new ActionError("userid.taken", due.getUserId());
      errors.add("userid", error);
      // This saves the ActionErrors in the request attribute
      // with the key Action.ERROR_KEY
      saveErrors(request, errors);
      nextPage = new ActionForward(mapping.getInput());
   }
   return nextPage;
}
Exception handling and I18N

Another important concern with exception handling is I18N. Even though exception logging can occur in the language of your operating system, the messages should still be displayed in the language of the user's choice. This is not much of a concern if the message is generic. For instance, in Listing 9.5, the message shown to the user on RemoteException is identified by the key rmi.error. The key can have a generic message in the resource bundle. The problem starts when the message has to get specific or requires replacement values. There are two possible solutions to this problem, neither of which is ideal.

Here is the first approach: if you want to keep the internationalization in the web tier, then the specific exceptions from the server side should encapsulate the resource bundle keys and some (if not all) replacement values in them. The key and the replacement values can be exposed through getter methods on the exception class. This approach makes the server side code dependent on the web tier resource bundle. It also requires programmatic exception handling, since you have to pass appropriate replacement values to the ActionError.

The second approach is to send the user's Locale as one of the arguments to the server side and let the server side generate the entire message. This removes the server's dependency on the web tier code, but requires the Locale to be sent as an argument on every method call to the server.
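The first approach can be sketched as an exception that carries its resource-bundle key and replacement values. The class below is our own illustration; the key reuses database.error.duplicate from Listing 9.5, and the getUserId() accessor mirrors the one used in Listing 9.6:

```java
public class I18NExceptionDemo {
    // Application exception that encapsulates the message-resource key and
    // a replacement value, so the web tier can build a localized ActionError
    public static class DuplicateUserException extends Exception {
        private final String userId;

        public DuplicateUserException(String userId) {
            this.userId = userId;
        }

        // Resource bundle key; the localized template lives in the web tier
        public String getKey() {
            return "database.error.duplicate";
        }

        // Replacement value to substitute into the message template
        public String getUserId() {
            return userId;
        }
    }

    public static void main(String[] args) {
        DuplicateUserException e = new DuplicateUserException("jdoe");
        // The Struts layer would do: new ActionError(e.getKey(), e.getUserId())
        System.out.println(e.getKey());    // database.error.duplicate
        System.out.println(e.getUserId()); // jdoe
    }
}
```

Note the coupling the text warns about: the server-side exception now knows a key that only means something in the web tier's resource bundle.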
9.8 Logging Exceptions
It is common knowledge that exceptions can occur anywhere – web-tier, ejb-tier, database. Wherever they occur, they must be caught and logged with appropriate context. It makes more sense to handle a lot, if not all of the exceptions originating in the ejb tier and database tier on the client side in the web tier. Why should exception logging take place on the client side?
Listing 9.7 Enumeration class for Exception Category
public class ExceptionCategory implements java.io.Serializable {
    public static final ExceptionCategory INFO = new ExceptionCategory(0);
    public static final ExceptionCategory WARNING = new ExceptionCategory(1);
    public static final ExceptionCategory GENERAL_PROBLEM = new ExceptionCategory(2);
    public static final ExceptionCategory DATA_PROBLEM = new ExceptionCategory(3);
Chapter 9. Struts and Exception Handling
197
    public static final ExceptionCategory CONFIG_PROBLEM = new ExceptionCategory(4);
    public static final ExceptionCategory FATAL = new ExceptionCategory(5);

    private int type;

    private ExceptionCategory(int aType) {
        this.type = aType;
    }
}
First, the control hasn't passed outside of the application server yet (assuming the web tier and ejb tier do not belong to disparate entities). The so-called client tier, which is composed of JSP pages, servlets and their helper classes, runs on the J2EE application server itself. Second, the classes in a well-designed web tier have a hierarchy (for example, in the Business Delegate classes, Intercepting Filter classes, JSP base class, or the Struts Action classes) or a single point of invocation in the form of a FrontController servlet (Business Delegate, Intercepting Filter and Front Controller are Core J2EE Patterns. Refer to Sun blueprints for more details). The base classes of these hierarchies, or the central point in the FrontController option, are natural places to centralize exception handling if you have co-located web and EJB tiers and you don't have a requirement to support any other type of client.

To develop a full-fledged exception handling strategy, let us start with a simple class shown in Listing 9.7. This class, ExceptionCategory, categorizes exceptions as INFO, WARNING, GENERAL_PROBLEM, DATA_PROBLEM, CONFIG_PROBLEM or FATAL. This identification helps when the notification of support personnel depends on the severity of the exception.
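One caveat with this typesafe-enum style: because ExceptionCategory is Serializable, deserialization creates new instances unless the class supplies a readResolve method, which would break == comparisons against the constants. A trimmed sketch of the fix (only three categories shown for brevity; the class name and indices are illustrative):

```java
import java.io.Serializable;

// Trimmed sketch: a serialization-safe typesafe enum. The real class
// would list all six categories.
public class SafeCategory implements Serializable {
    public static final SafeCategory INFO = new SafeCategory(0);
    public static final SafeCategory WARNING = new SafeCategory(1);
    public static final SafeCategory FATAL = new SafeCategory(2);

    private static final SafeCategory[] VALUES = { INFO, WARNING, FATAL };

    private final int type;

    private SafeCategory(int aType) {
        this.type = aType;
    }

    // Called by the serialization machinery on deserialization;
    // returns the canonical constant instead of the freshly read copy.
    private Object readResolve() {
        return VALUES[type];
    }
}
```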
Listing 9.8 Exception Info class
public class ExceptionInfo implements java.io.Serializable {
    private ExceptionCategory exceptionCategory;
    private String errorCode;
    private String exceptionID;
    private boolean logged;

    public ExceptionInfo(ExceptionCategory aCategory, String aErrorCode) {
        this.exceptionCategory = aCategory;
        this.errorCode = aErrorCode;
        this.logged = false;
        this.exceptionID =
            UniqueIDGeneratorFactory.getUniqueIDGenerator().getUniqueID();
    }
}
The next class to look at is the ExceptionInfo class, shown in Listing 9.8. This class provides information about the exception, as the name indicates. Apart from the ExceptionCategory, this class also holds a unique id associated with the exception and a boolean indicating if the exception has already been logged. The UniqueIDGeneratorFactory is a factory class that returns a UniqueIDGenerator. UniqueIDGenerator is represented by an interface, IUniqueIDGenerator. This interface has just one method – getUniqueID(). Listing 9.9 shows a simple unique ID generator implementation based on the host name and the current time.
Listing 9.9 Simple Unique ID Generator
public class UniqueIDGeneratorDefaultImpl implements IUniqueIDGenerator {
    private static IUniqueIDGenerator instance =
        new UniqueIDGeneratorDefaultImpl();
    private long counter = 0;

    public String getUniqueID() throws UniqueIDGeneratorException {
        String exceptionID = null;
        try {
            exceptionID = InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException ue) {
            throw new UniqueIDGeneratorException(ue);
        }
        exceptionID = exceptionID + System.currentTimeMillis() + counter++;
        return exceptionID;
    }
}
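Note that getUniqueID() increments counter without synchronization, so two concurrent requests can read the same value; separators between the parts also rule out accidental collisions in the concatenated string. A minimal thread-safe variant (host name hardcoded here for brevity; the real class would obtain it from InetAddress as above):

```java
// Sketch of a thread-safe variant of the unique ID generator.
public class SafeUniqueIDGenerator {
    private final String host = "myhost";
    private long counter = 0;

    // synchronized so the counter never hands out the same value twice
    public synchronized String getUniqueID() {
        return host + "-" + System.currentTimeMillis() + "-" + counter++;
    }
}
```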
Listing 9.10 MybankException class
public abstract class MybankException extends Exception {
    private ExceptionInfo exceptionInfo;

    public MybankException(ExceptionInfo aInfo) {
        super();
        this.exceptionInfo = aInfo;
    }
}
And finally, Listing 9.10 shows the actual exception class. This is the base class for all the checked exceptions originating in MyBank. It is always better to have a base class for all exceptions originating in a system and then create new types as required. In this way, you can decide how fine-grained you want the catch blocks to be. Similarly, you can have a base class for all unchecked exceptions thrown from the system. Listing 9.11 shows such a class.
Listing 9.11 MybankRuntimeException class
public abstract class MybankRuntimeException extends RuntimeException {
    private ExceptionInfo exceptionInfo;
    private Throwable wrappedException;

    public MybankRuntimeException(ExceptionInfo aInfo,
            Throwable aWrappedException) {
        super();
        this.exceptionInfo = aInfo;
        this.wrappedException = aWrappedException;
    }
}
Notice that MybankRuntimeException has only one constructor that takes both ExceptionInfo and a Throwable. This is because if someone is explicitly throwing a runtime exception from his or her code, it is probably because a system error or serious unrecoverable problem has occurred. We want to get hold of the actual cause of the problem and log it. By enforcing development time disciplines like this, one can decrease the chances of exceptions in the system without a context.
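On JDK 1.4 and later, the hand-rolled wrappedException field can be replaced by the cause chaining built into Throwable, which also makes printStackTrace() show the root cause automatically. A minimal sketch (class name hypothetical):

```java
// Hypothetical sketch: letting Throwable carry the root cause instead
// of a custom wrappedException field (requires JDK 1.4+).
public class AppRuntimeException extends RuntimeException {
    public AppRuntimeException(String message, Throwable cause) {
        super(message, cause); // printStackTrace() now prints "Caused by: ..."
    }
}
```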
Finally, we also need to look at the actual logging utility – a stack trace printing utility shown in Listing 9.12. The StackTraceTool class has two overloaded getStackTraceAsString() methods – one takes a MybankException as the parameter, the other takes a Throwable. All exceptions of type MybankException already have the unique id built in. For other exceptions, the unique id has to be explicitly generated before logging. MybankException also has the flag indicating whether the exception has been logged, making it easier to prevent multiple logging, as you will see very soon. Other exceptions don't have this capability and it is up to the caller and callee to collaborate and ensure that duplicate logging does not happen.
Listing 9.12 Stack Trace printing utility.
public final class StackTraceTool {
    private StackTraceTool() {}

    public static String getStackTraceAsString(MybankException exception) {
        String message = " Exception ID : "
            + exception.getExceptionInfo().getExceptionID()
            + "\n " + "Message :" + exception.getMessage();
        return getStackMessage(message, exception);
    }

    public static String getStackTraceAsString(Throwable throwable) {
        String message = " Exception ID : "
            + UniqueIDGeneratorFactory.getUniqueIDGenerator().getUniqueID()
            + "\n " + "Message :" + throwable.getMessage();
        return getStackMessage(message, throwable);
    }

    private static String getStackMessage(String message,
            Throwable exception) {
        StringWriter sw = new StringWriter();
        PrintWriter pw = new PrintWriter(sw);
        pw.print(" [ ");
        pw.print(exception.getClass().getName());
        pw.print(" ] ");
        pw.print(message);
        exception.printStackTrace(pw);
        return sw.toString();
    }
}
Armed with this knowledge, let us look at a scenario that leads to duplicate logging when an exception occurs. Consider a case where a method, foo(), in an entity EJB component throws an exception: every layer that catches it on the way up may log it and re-throw it, so the same stack trace appears in the logs several times. Fortunately, addressing these problems is fairly easy to do in a generic way. All you need is a mechanism for the caller to:

Access the unique ID
Find out if the exception has already been logged
If the exception has already been logged, not log it again

We have already developed the MybankException and ExceptionInfo classes that let us check if the exception is already logged. If not logged already, log the exception and set the logged flag to true. These classes also generate a unique id for every exception. Listing 9.13 shows a sample.
Listing 9.13 Sample Exception Logging
try {
    CustomerDAO cDao = CustomerDAOFactory.getDAO();
    cDao.createCustomer(customerValue);
} catch (CreateException ce) {
    // Assume CreateException is a subclass of MybankException
    if (!ce.isLogged()) {
        String traceStr = StackTraceTool.getStackTraceAsString(ce);
        LogFactory.getLog(getClass().getName()).error(
            ce.getUniqueID() + ":" + traceStr);
        ce.setLogged(true);
    }
    throw ce;
}
Listing 9.13 shows the logging scenario when the exception caught is of type MybankException. Of course, not all of the exceptions thrown by your application are in this hierarchy. Under such conditions it is even more important that the logging is centralized in one place, since there is no mechanism to prevent duplicate logging for exceptions outside the MybankException hierarchy. That brings us to the idea of centralized logging. In the beginning of this section we said that it is easy and convenient to log exceptions on the web tier since most of the web-tier classes have a hierarchy. Let us examine this claim in more detail.
9.9 Strategies for centralized logging
In the previous section, we saw how to avoid duplicate logging. But when it comes to the entire application, logging should not only be done once but also centralized for the disparate modules of the system if possible. There are various strategies to achieve centralized logging in the web tier. This section will cover those strategies.

Consider the web tier for MyBank. After the Struts Forms are populated, the RequestProcessor invokes the execute method on the Action classes. Typically, in the execute method you access enterprise data and business logic in session ejbs and legacy systems. Since you want to decouple your web tier from the business logic implementation technology (EJB for example – which forces you to catch RemoteException) or the legacy systems, you are most likely to introduce Business Delegates. (Business Delegate is a Core J2EE Pattern.) The Business Delegates might throw a variety of exceptions, most of which you want to handle by using the Struts declarative exception handling.

When using declarative exception handling you are most likely to log the exceptions in the JSPs, since control passes out of your code at the end of the execute method. Instead of adding the exception logging code to every error JSP declared in the Struts Config file, you can create a parent of all the error JSPs and put the logging code in there. Listing 9.14 shows a sample base JSP class.

There is quite a bit going on in Listing 9.14. First, the class implements the javax.servlet.jsp.HttpJspPage interface. All the methods in this interface except _jspService() have concrete implementations. These methods represent the various methods called during the JSP life cycle. In particular you will recognize the service method that is similar to the servlet's service method. In the course of this method execution, the _jspService() method is also executed. The _jspService() method is not implemented by the
page author or the developer. Instead, it is auto-generated by the servlet container during JSP pre-compilation or at run time. All the markup, tags and scriptlets contained in the JSP are transformed into Java code and form the gist of the _jspService() method. The page author indicates that the JSP extends from this Java class by adding the directive
<%@ page extends="mybank.webtier.MybankBaseErrorJsp" %>
If all of the error JSPs extend from this abstract JSP class, centralized logging is achieved. Before you celebrate for having nailed down the problem, we should remind you that this solution may not work in all servlet containers. The reason is that the JspFactory and PageContext implementations are vendor dependent. Normally the calls to JspFactory.getDefaultFactory() and factory.getPageContext() occur in the auto-generated _jspService() method. It is possible that some implementations may not initialize the objects we accessed in the service() method until they reach the _jspService() method!
Listing 9.14 Base JSP class for error pages
public abstract class MybankBaseErrorJsp implements HttpJspPage {
    private ServletConfig servletConfig;

    public ServletConfig getServletConfig() {
        return servletConfig;
    }

    public String getServletInfo() {
        return "Base JSP Class for My Bank Error Pages";
    }

    public void init(ServletConfig config) throws ServletException {
        this.servletConfig = config;
        jspInit();
    }

    public void jspInit() {}

    public void jspDestroy() {}

    public void service(ServletRequest req, ServletResponse res)
            throws ServletException, IOException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        JspFactory factory = JspFactory.getDefaultFactory();
        PageContext pageContext = factory.getPageContext(
            this,
            request,
            response,
            null,                     // errorPageURL
            false,                    // needsSession
            JspWriter.DEFAULT_BUFFER,
            true                      // autoFlush
        );
        Exception exc = pageContext.getException();
        String trace = StackTraceTool.getStackTraceAsString(exc);
        Logger.getLogger(getClass().getName()).error(trace);
        // proceed with container generated code from here
        _jspService(request, response);
    }

    public abstract void _jspService(HttpServletRequest request,
        HttpServletResponse response) throws ServletException, IOException;
}
Don't panic. We have an alternate solution, which is less elegant but is guaranteed to work across vendor implementations. Let us create a custom tag to be invoked from all of the error JSPs. Listing 9.15 shows the logic for the doStartTag() method of this custom tag. You will notice that it is very similar to the service() method in Listing 9.14. After obtaining the exception object, it is logged by obtaining the Logger from Log4J. Since this tag is invoked within the _jspService() method of the JSP, it is guaranteed to have access to all of the implicit objects, including pageContext and exception, in every vendor implementation.
Listing 9.15 Custom Tag for exception logging
public class ExceptionLoggingTag extends TagSupport {
    public int doStartTag() throws JspException {
        Exception exc = pageContext.getException();
        String trace = StackTraceTool.getStackTraceAsString(exc);
        LogFactory.getLog(getClass().getName()).error(trace);
        return EVAL_BODY_INCLUDE;
    }
}
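For completeness, the tag would be declared in a TLD and invoked from the error page roughly as follows (the tag name and taglib URI are assumptions). Note that the page must be marked isErrorPage="true" for pageContext.getException() to be populated:

```
<!-- TLD entry (names assumed) -->
<tag>
    <name>logException</name>
    <tag-class>mybank.webtier.ExceptionLoggingTag</tag-class>
    <body-content>JSP</body-content>
</tag>

<%-- In each error JSP --%>
<%@ page isErrorPage="true" %>
<%@ taglib uri="/WEB-INF/mybank.tld" prefix="mybank" %>
<mybank:logException/>
```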
For those of you who are visualizing the big picture, you will realize that logging from the JSP is not ideal. However, there is no perfect way to achieve centralized logging without taking this approach. Each mechanism has its drawbacks and tradeoffs. This is something you will experience whenever design abstractions meet reality.

Until now you have seen a JSP based approach to exception handling and logging. What if you have a requirement to handle the exceptions originating from your application differently? Let us consider the application exceptions from our very own MyBank. The exceptions originating from MyBank are subclasses of MybankException and MybankRuntimeException. When using Struts as the framework in your web applications, you will most likely have a base Action class with trivial functionality common to all of the Actions factored out. The base Action class is the ideal location to centralize the special processing for the application exceptions. Listing 9.16 shows a sample base Action, MybankBaseAction, for the special processing just mentioned.
Listing 9.16 Mybank Base Action
public abstract class MybankBaseAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        ActionForward aForward = null;
        MybankBaseForm aForm = (MybankBaseForm) form;
        try {
            preprocess(mapping, aForm, request, response);
            aForward = process(mapping, aForm, request, response);
            postprocess(mapping, aForm, request, response);
        } catch (MybankException ae) {
            // Any special processing for Mybank goes here
            if (!ae.isLogged()) {
                String trace = StackTraceTool.getStackTraceAsString(ae);
                LogFactory.getLog(getClass().getName()).error(
                    ae.getUniqueID() + ":" + trace);
                ae.setLogged(true);
            }
            aForward = mapping.findForward("error"); // the relevant error page
        }
        return aForward;
    }

    protected abstract void preprocess(ActionMapping mapping,
        MybankBaseForm form, HttpServletRequest request,
        HttpServletResponse response) throws Exception;

    protected abstract void process(ActionMapping mapping,
        MybankBaseForm form, HttpServletRequest request,
        HttpServletResponse response) throws Exception;

    protected abstract void postprocess(ActionMapping mapping,
        MybankBaseForm form, HttpServletRequest request,
        HttpServletResponse response) throws Exception;
}
This class implements the execute method, but defines three abstract methods – preprocess(), process() and postprocess() – which take the same arguments as execute() but are invoked before, during and after the actual processing of the request. In the execute method, the MybankException is caught, any special processing required is done, and then the code either re-throws the exception for the declarative exception handling to work or programmatically forwards to the relevant error page. Note that you can achieve the same result by creating a custom exception handler for the MybankException. The custom exception handler's execute() method would do exactly what the catch block in Listing 9.16 is doing.
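Declaratively, such a handler is wired up in struts-config.xml through the handler attribute of the <exception> element; the key and path values below are assumptions:

```
<global-exceptions>
    <exception type="mybank.MybankException"
        key="error.mybank.general"
        handler="mybank.webtier.MybankExceptionHandler"
        path="/error.jsp"/>
</global-exceptions>
```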
9.10 Reporting exceptions
Until now, you have looked at various exception logging strategies. After the exception is logged, there is also a need to report the fatal and serious ones by sending out emails and pager messages to the support team. Several approaches exist and the choices are numerous, but in this chapter we would like to consolidate the logging and error reporting for better coordination and control. For this, let us look at what Log4J has to offer.
Listing 9.17 SMTP Appender setup
<appender name="Mybank-Mail" class="org.apache.log4j.net.SMTPAppender">
    <param name="Threshold" value="ERROR" />
    <param name="Subject" value="Mybank Online has problems" />
    <param name="From" value="prod-monitor@mybank.com" />
    <!-- use commas in value to separate multiple recipients -->
    <param name="To" value="prod-support@mybank.com" />
    <param name="SMTPHost" value="mail.mybank.com" />
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%m" />
    </layout>
</appender>
As you already know, Log4J has three main components: Layout, Appender, and Category (also known as Logger). Log4J comes with several standard appenders, one of which is the SMTPAppender. By using the SMTPAppender, you can declaratively send email messages when errors occur in your system. You configure the SMTPAppender like any other Appender – in the Log4J configuration file. Listing 9.17 shows a sample setup for the SMTPAppender. It takes a Threshold beyond which the Appender is operational, a subject for the email, the From and To addresses, the SMTP server name, and the pattern for the email message body. You can set up the category for the above SMTPAppender as
<category name="com.mybank.webtier.action" additivity="false">
    <priority value="ERROR"/>
    <appender-ref ref="Mybank-Mail"/>
    <appender-ref ref="Mybank-ErrorLog"/>
</category>
With this setup, all the exceptions that are caught in the base Struts Action – MybankBaseAction – are reported to the email address prod-support@mybank.com. This is because the category name is identified by the package name for MybankBaseAction, and while logging in Listing 9.16 we used the category whose name is the fully qualified class name for MybankBaseAction, which happens to be com.mybank.webtier.action.MybankBaseAction. The email address prod-support@mybank.com is generally an email group configured in the enterprise mail server to include several individual recipients.
Alternatively, you can explicitly specify multiple recipients in the To param in Listing 9.17, with commas separating the recipients. You can take a similar approach if you are logging in the base JSP class of Listing 9.14 or the custom tag class of Listing 9.15. But what if you are logging the exception using a scriptlet in the JSP? Although this approach is not recommended, suppose that you already have it in place and want to retrofit the Log4J email feature. In this case, you can still set up the appender as in Listing 9.17. But what about the JSP? What is the fully qualified class name for the JSP? This depends on the vendor. For instance, in WebLogic a JSP in a directory called mortgage will reside in a package named jsp_servlet.mortgage. Accordingly, for WebLogic, you can set up the category as follows
<category name="jsp_servlet.mortgage" additivity="false">
    <priority value="ERROR"/>
    <appender-ref ref="Mybank-Mail"/>
    <appender-ref ref="Mybank-ErrorLog"/>
</category>
Note that this setting is vendor specific and may not be portable to other application servers. But this is a minor change and should not be a problem if you decide to port to another application server, say JBoss. If you are wondering, "Email messages are all fine. How do I send pager beeps?", the quick answer is "No problem". Pagers have email addresses too. You can ask your service provider to associate an email address with the pager. Telecommunications companies and providers can use the JavaMail API to implement a PAGER transport protocol that sends email messages to alphanumeric pagers. A similar approach works for Short Message Service (SMS) too, since you can email an SMS device.
9.11 Summary
In development environments, the developer can go back, fix the root cause of the exception and move on. Not so in production systems. Exception handling is a crucial part of enterprise applications. It is the key to quick response from the support team and resolution of problems in working systems. A delayed resolution, or none at all, can leave the customer frustrated. In the internet world, where the competitor is just a click away, the importance of exception handling, logging, reporting and resolution cannot be stressed enough. This chapter gave you insights into the various gotchas on your way, common mistakes, and strategies to address them from a web tier perspective.
Chapter 10
Effectively extending Struts
In this chapter:
1. You will learn how to extend Struts using PlugIn as an example
2. You will see how to construct a rudimentary page flow controller by customizing ActionMapping
3. You will develop a mechanism for controlling validation for image button form submission
4. You will see how to handle sensitive resubmissions in a generic way rather than handling them in every form
5. You will devise a strategy to get DispatchAction-like functionality for image button form submission

Struts is a generic framework. It works fine without modification. But there are times when it has to be customized. And we are not talking about straightforward customizations like extending the Form, Action and custom tags. We are referring to the "hooks" that Struts provides to extend the framework. In this chapter you will see several practical uses of these hooks.

A word of caution though: the customization features are probably going to be around without modification until Struts 2.0. The main candidates for overhaul are ActionMapping and RequestProcessor. The replacements would be designed using the Command, Interceptor and Chain of Responsibility patterns. However, since the above classes are part of the public API, an alternative strategy will definitely emerge to seamlessly migrate the customizations discussed in this chapter, so that none of the application code is affected and only the customization code might change. Of course, this is just speculation.

To understand the hooks, consider a Struts PlugIn for, say, a slick menu utility (such a utility indeed exists. Check out). The menu utility needs to read the configuration data from an external file. If the PlugIn were instead implemented as a servlet, it would read the file name from an <init-param> in web.xml. The <set-property> can do the same task for the PlugIn. The file name is set in struts-config.xml by using <set-property>.
210
Struts Survival Guide – Basics to Best Practices
<plug-in className="mybank.example.MyMenuPlugIn">
    <set-property property="fileName" value="/WEB-INF/menudata.xml"/>
</plug-in>
A JavaBeans property with the same name (fileName) is then added to the PlugIn class. The <set-property> tells the Struts framework to set the corresponding JavaBeans property in the plugin class (or any class associated with the configuration) with the value attribute of the <set-property> element. In addition, the Struts PlugIn implements the PlugIn interface from org.apache.struts.action package. Accordingly, the MyMenuPlugIn class is defined as:
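Conceptually, Struts matches each <set-property> to the corresponding setter via reflection (it actually uses Commons BeanUtils for this). A self-contained sketch of the idea, with hypothetical class names:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

// Conceptual sketch: applying a <set-property> name/value pair to a
// JavaBeans property via reflection. Class names are hypothetical.
public class PropertySetter {
    public static void set(Object bean, String property, String value)
            throws Exception {
        PropertyDescriptor[] pds =
            Introspector.getBeanInfo(bean.getClass()).getPropertyDescriptors();
        for (int i = 0; i < pds.length; i++) {
            if (pds[i].getName().equals(property)) {
                // invoke the matching setter, e.g. setFileName(value)
                pds[i].getWriteMethod().invoke(bean, new Object[] { value });
                return;
            }
        }
        throw new IllegalArgumentException("No such property: " + property);
    }
}
```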
public class MyMenuPlugIn implements PlugIn {
    private String fileName;

    public String getFileName() {
        return fileName;
    }

    public void setFileName(String name) {
        this.fileName = name;
    }

    public void init(ActionServlet servlet, ModuleConfig config)
            throws ServletException {
        ...
    }

    public void destroy() {
        ...
    }
}
During startup, Struts sets the fileName property using the corresponding setter method (and any other properties that exist) and finally calls the init() method. Since PlugIns are the last ones to be configured by Struts, all other data from struts-config.xml is guaranteed to be loaded before the init() method is invoked. The init() method is thus an opportunity to override and change any other settings, including the RequestProcessor! Frameworks like SAIF (which stands for Struts Action Invocation Framework. Available at) utilize this to change the RequestProcessor to one of their own.
Back to <set-property>. The <set-property> is the cornerstone of hook-based customization. Its DTD entry is as follows:
<!ATTLIST set-property
    id       ID    #IMPLIED
    property CDATA #REQUIRED
    value    CDATA #REQUIRED
>
Both property and value are mandatory and ID is never set explicitly. The following elements in struts-config.xml can be customized using <set-property>: Form-bean, Exception, DataSource, PlugIn, RequestProcessor, MessageResources, ActionForward and ActionMapping.

Customizing the action mapping

The action mapping is the most frequently customized element. One way to customize the action mapping is by setting the className in struts-config.xml, as shown in Listing 10.1.
Listing 10.1 struts-config.xml for custom action mapping
<action path="/submitCustomerForm"
    className="mybank.struts.MyActionMapping"
    ... >
The className attribute tells Struts to use the specified class (mybank.struts.MyActionMapping) for storing the action-mapping configuration. MyActionMapping extends ActionMapping in the package org.apache.struts.action. In addition it has a JavaBeans property for each of the <set-property> elements. MyActionMapping class is shown below:
public class MyActionMapping extends ActionMapping {
    private String buttons;
    private String forwards;

    // getters and setters for buttons and forwards

    public MyActionMapping() {
    }
}
The custom action mapping is now ready to use. During startup, Struts instantiates the subclass of ActionMapping (instead of ActionMapping itself) and sets its JavaBeans properties to the values specified in the corresponding <set-property> elements. As you know, the execute() method in the Action accepts the ActionMapping as one of its arguments. When the execute() method in the CustomerAction is invoked at runtime, a MyActionMapping is passed as the ActionMapping argument due to the setting in Listing 10.1. It can then be cast to MyActionMapping to access its JavaBeans properties as follows:
public ActionForward execute(ActionMapping mapping, ActionForm form,
        HttpServletRequest request, HttpServletResponse response)
        throws Exception {
    MyActionMapping customMapping = (MyActionMapping) mapping;
    String actions = customMapping.getButtons();
    ...
}

Listing 10.2 struts-config.xml with global custom action mapping

<action-mappings type="mybank.struts.MyActionMapping">
    <action path="/submitCustomerForm"
        ... />
</action-mappings>
There are several uses for simple customizations like this. As you will see later in this chapter, a lot of code that would otherwise be repeated everywhere can be eliminated by simple customizations. When a single customized ActionMapping is to be used for all the action mappings in the application, setting the className on each individual action mapping as shown in Listing 10.1 is painful. The alternative is to specify the type attribute on the <action-mappings> element as shown in Listing 10.2. This tells Struts to use the corresponding ActionMapping class for all the <action> elements. Listing 10.2 forms the basis for some of the utilities we develop in this chapter – a rudimentary page flow controller, an auto-validation feature for image based form submissions, a generic mechanism to handle sensitive form resubmissions, and a DispatchAction-like facility for image based form submissions. All of these will use the base Action class and base form conceived in Chapter 4 in conjunction with the HtmlButton described in Chapter 6 for form submission.
10.1 A rudimentary page flow controller
In the last section you saw how the ActionMapping can be customized. Let us use the customized action mapping to build a rudimentary page flow controller. Every Action has to render the next view to the user after a successful business logic invocation. This would normally mean a standard mapping.findForward() call in every one of your methods. The rudimentary page flow controller eliminates this by providing the information declaratively in struts-config.xml, utilizing MyActionMapping. That information is used at runtime to automatically decide which page to forward to. The reason the page flow controller is called rudimentary is that it has a serious limitation: if the page transitions are dynamic, it cannot work. The controller serves as an example of customized action mapping usage. Some of the groundwork for the page flow controller is already done in Listing 10.2, in case you didn't notice it already! In particular, pay attention to the two lines:
<set-property property="buttons" value="..."/>
<set-property property="forwards" value="..."/>
The first property (buttons) is a comma-separated list of the names of all the buttons in the form. The second property (forwards) is a comma-separated list of the names of the views rendered to the user when the corresponding buttons are selected. The view names refer to forwards instead of the actual JSPs. Since the forwards
are provided declaratively, the task of deciding the next view can be refactored into the base Action. This functionality has been added to the MybankBaseAction from Chapter 4. The code is shown in Listing 10.3, with the changes highlighted in bold.
Listing 10.3 The new and modified methods in MybankBaseAction

public abstract class MybankBaseAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        MybankBaseForm myForm = (MybankBaseForm) form;
        MyActionMapping myMapping = (MyActionMapping) mapping;
        String selectedButton = getSelectedButton(myForm, myMapping);
        preprocess(myMapping, myForm, request, response);
        // Returns a null forward if the page controller is used.
        ActionForward forward = process(myMapping, myForm, request, response);
        postprocess(myMapping, myForm, request, response);
        ...
        if (forward == null) {
            // For page controller only
            String forwardStr = myMapping.getForward(selectedButton);
            forward = mapping.findForward(forwardStr);
        }
        return forward;
    }

    protected String getSelectedButton(MybankBaseForm form,
            MyActionMapping mapping) {
        String selectedButton = null;
        String[] buttons = mapping.getButtons();
        for (int i = 0; i < buttons.length; i++) {
            HtmlButton button = (HtmlButton)
                PropertyUtils.getProperty(form, buttons[i]);
            if (button.isSelected()) {
                selectedButton = buttons[i];
                break;
            }
        }
        return selectedButton;
    }
Chapter 10. Effectively extending Struts
215
}
First notice that the ActionMapping is cast to MyActionMapping. Also notice that the signatures of the three abstract methods – process(), preprocess() and postprocess() – have been changed to accept MyActionMapping as the argument instead of ActionMapping. The page flow controller logic is implemented at the end of the execute() method. The logic is simple: the code first checks which button has been selected. This is done in the getSelectedButton() method. It then retrieves the corresponding forward name and returns the matching ActionForward. The RequestProcessor subsequently renders the view as usual. Since the code has been refactored into the base Action class, the child classes need not worry about mapping.findForward(). They can simply return null. MybankBaseAction is now capable of automatically selecting the appropriate ActionForward.
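The MyActionMapping used above comes from Listing 10.2, which is outside this excerpt. As a rough, framework-free sketch (hypothetical code, not the book's actual class – the real one extends org.apache.struts.action.ActionMapping), the page-flow half of it could look like this:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

// Hypothetical stand-in for the book's MyActionMapping (Listing 10.2).
// The real class extends org.apache.struts.action.ActionMapping; this
// framework-free sketch keeps only the page-flow properties.
class MyActionMapping {
    private String buttons;
    private String forwards;
    // button name -> logical forward name
    private Map forwardMap = new HashMap();

    public void setButtons(String buttons) {
        this.buttons = buttons;
        resolve();
    }

    public void setForwards(String forwards) {
        this.forwards = forwards;
        resolve();
    }

    // Pair up the two comma-separated lists once both are set.
    private void resolve() {
        if (buttons != null && forwards != null) {
            StringTokenizer stButtons = new StringTokenizer(buttons, ",");
            StringTokenizer stForwards = new StringTokenizer(forwards, ",");
            while (stButtons.hasMoreTokens()) {
                forwardMap.put(stButtons.nextToken(), stForwards.nextToken());
            }
        }
    }

    // Logical forward name to render for the selected button.
    public String getForward(String selectedButton) {
        return (String) forwardMap.get(selectedButton);
    }

    public static void main(String[] args) {
        MyActionMapping mapping = new MyActionMapping();
        mapping.setButtons("save,cancel");
        mapping.setForwards("success,cancel");
        System.out.println(mapping.getForward("save")); // success
    }
}
```

With such a mapping in place, the base Action only needs getForward(selectedButton) to pick the next view, which is exactly what the null-forward branch in Listing 10.3 does.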
10.2 Controlling the validation
The default mechanism in Struts to skip validations when a button is pressed is by using <html:cancel> in the JSP. Behind the scenes, this tag creates a button with the name org.apache.struts.taglib.html.CANCEL. When the page is finally submitted, one of the first things the RequestProcessor does is to check if the request has a parameter with the name org.apache.struts.taglib.html.CANCEL. If so, the validation is cancelled and the processing continues. While this may be acceptable for grey buttons (even on pages with multiple buttons), image buttons cannot be named org.apache.struts.taglib.html.CANCEL due to their peculiar behavior. When images are used for form submission, the browsers do not submit the name and value, but the X and Y coordinates of the image. This is in accordance with the W3C specification. Even though an image corresponding to Cancel was pressed, the RequestProcessor is oblivious to this fact. It innocently requests the page validation and the end user is surprised to see the validation pop up! This is an area where some minor customization goes a long way. Let us start by customizing the action mapping in struts-config.xml. Listing 10.4 shows the new addition in bold. In addition to the existing <set-property> elements, a new <set-property> is added for a property called validationFlags. This is a comma-separated list of true and false values telling Struts whether validation needs to be performed when the corresponding buttons (also comma-separated values) are selected in the browser. The validationFlags in the Listing are interpreted as: “When the next and cancel buttons are selected, no validation is necessary. When the save button is selected, validation is required”. Another interesting change you will find in Listing 10.4 is that the validation is turned off by setting validate="false". With this setting, the validation in the RequestProcessor is completely turned off for all buttons. The validation will be explicitly invoked in the base Action’s execute() method. Listing 10.5 shows the execute() method. The changes are shown in bold.
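The X/Y behavior described above is why getSelectedButton() in Listing 10.3 asks each button bean whether it was selected. Here is a minimal sketch of such an image-button bean (an assumed shape for illustration – the book's actual HtmlButton/ImageButtonBean class is not reproduced in this excerpt):

```java
// Hypothetical image-button backing bean. For <input type="image"
// name="save">, browsers submit save.x and save.y rather than a
// name/value pair, so "selected" simply means a coordinate arrived.
class HtmlButton {
    private String x;
    private String y;

    public void setX(String x) { this.x = x; }
    public void setY(String y) { this.y = y; }
    public String getX() { return x; }
    public String getY() { return y; }

    public boolean isSelected() {
        return x != null || y != null;
    }

    public static void main(String[] args) {
        HtmlButton save = new HtmlButton();
        HtmlButton cancel = new HtmlButton();
        // Simulate the request parameters save.x=12&save.y=7
        save.setX("12");
        save.setY("7");
        System.out.println(save.isSelected());   // true
        System.out.println(cancel.isSelected()); // false
    }
}
```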
Listing 10.4 struts-config.xml with global custom action mapping
<action-mappings type="mybank.struts.MyActionMapping">
    <action path="/submitCustomerForm"
            type="mybank.app1.CustomerAction"
            name="CustomerForm"
            scope="request"
            validate="false"
            input="CustomerDetails.jsp">
        <set-property property="buttons" value="next,cancel,save"/>
        <set-property property="validationFlags" value="false,false,true"/>
        <forward name="page2" path="..."/>
        <forward name="success" path="success.jsp"/>
        <forward name="cancel" path="cancel.jsp"/>
    </action>
</action-mappings>
The new validationFlags setting requires some minor code changes in MybankBaseAction. The changes involve explicitly running the form validation and saving the ActionErrors. The key logic deciding whether validation is required for the selected button is in the MyActionMapping class, in the isValidationRequired() method. The method requires the selected button name as an argument. A sample implementation of the isValidationRequired() method is as follows:
public boolean isValidationRequired(String selectedButton) {
    String validationStr = (String) validationFlagMap.get(selectedButton);
    return Boolean.valueOf(validationStr).booleanValue();
}
The above method uses the selected button name to look up a HashMap named validationFlagMap. As you know, the JavaBeans properties – buttons and validationFlags – were provided as comma-separated values. Parsing through the comma-separated values at runtime for every user is a sheer waste of time and resources. Hence the comma-separated values are parsed in their corresponding setters to create a HashMap with the button name as the key. This ensures a fast retrieval of the values.
Listing 10.5 execute() method for controlling validation

public ActionForward execute(ActionMapping mapping, ActionForm form,
        HttpServletRequest request,
        HttpServletResponse response) throws Exception {
    ...
    MyActionMapping myMapping = (MyActionMapping) mapping;
    MyActionForm myForm = (MyActionForm) form;
    String selectedButton = getSelectedButton(myForm, myMapping);
    if (myMapping.isValidationRequired(selectedButton)) {
        ActionErrors errors = myForm.validate(myMapping, request);
        if (errors != null && !errors.isEmpty()) {
            saveErrors(request, errors);
            return myMapping.getInputForward();
        }
    }
    preprocess(myMapping, myForm, request, response);
    ActionForward forward = process(myMapping, myForm, request, response);
    postprocess(myMapping, myForm, request, response);
    ...
    return forward;
}
A sample implementation of the setters and the resolve() method is shown below:
public void setButtons(String buttonNames) {
    this.buttons = buttonNames;
    resolve();
}
public void setValidationFlags(String flags) {
    this.validationFlags = flags;
    resolve();
}

public void resolve() {
    if (buttons != null && validationFlags != null) {
        validationFlagMap = new HashMap();
        StringTokenizer stButtons = new StringTokenizer(buttons, ",");
        StringTokenizer stFlags = new StringTokenizer(validationFlags, ",");
        while (stButtons.hasMoreTokens()) {
            String buttonName = stButtons.nextToken();
            String flagValue = stFlags.nextToken();
            validationFlagMap.put(buttonName, flagValue);
        }
    }
}
As seen above, every setter invokes the resolve() method. When the final setter is invoked, all the attributes are non-null and the if block in resolve() is entered. At this point every instance variable is guaranteed to be set by the Struts start up process. The resolve() method creates a StringTokenizer and parses the comma-delimited values to create a HashMap with button name as the key and the validation flag as the value. This HashMap thus created is utilized at runtime for a faster retrieval of flag values in the isValidationRequired() method.
10.3 Controlling duplicate form submissions
In chapter 4, you looked at how duplicate form submission can be handled effectively at individual form level. Here is a recap. The isTokenValid() is invoked in the execute() method (or one its derivatives). If the page is the last in a multi-page form, the token is reset. After processing, the user is forwarded or redirected to the next page. If the next page thus shown also has a form with sensitive submission, the saveToken() is called to set the token in session just before forwarding to the page.
Page after page, the logic remains the same as above, with two blanks to be filled:

1. Should the execute() method (or any equivalent method in the Action) check whether the submission was valid for the current page (through the isTokenValid() method)?
2. Does the page rendered after processing (in execute() or any equivalent method in the Action) have sensitive submission needs?

Two approaches emerge to fill in the blanks. The first is to use the Template Method pattern: encapsulate the logic of handling sensitive resubmissions in the base class and delegate the task of filling the two blanks to the child classes by declaring two abstract methods in the base Action. While this sounds like the logical thing to do, there is an even better way. You got it – customizing Struts. For a moment, consider what the two methods would do if you chose the former option. The first method would simply provide a boolean value (without any logic) indicating whether the page should handle duplicate submission or not. The second method would decide (a simple if logic) whether the next view rendered needs the token in session. This information is best provided as configuration, and that is exactly what the forthcoming customization does.
Listing 10.6 struts-config.xml for duplicate form submission handling

<action-mappings type="mybank.struts.MyActionMapping">
    <action path="/submitCustomerForm"
            type="mybank.app1.CustomerAction"
            name="CustomerForm"
            scope="request"
            validate="true"
            input="CustomerDetails.jsp">
        <set-property property="validateToken" value="true"/>
        <set-property property="resetToken" value="true"/>
        <forward name="success"
                 className="mybank.struts.MyActionForward"
                 path="success.jsp">
            <set-property property="setToken" value="true"/>
        </forward>
        <forward name="cancel" path="cancel.jsp"/>
    </action>
</action-mappings>
Listing 10.6 shows all the customizations needed. The application flow used is the same as before: CustomerForm is a single page form. On Submit, a success page is shown. On Cancel, cancel.jsp is shown. The only twist is that success.jsp is treated as a JSP with a form that needs to avoid duplicate submission (for simplicity, we are not showing the redirect="true" setting). The action mapping in Listing 10.6 provides all the information needed for the sensitive resubmission logic to be retrieved in a generic manner in the base Action class. Before looking at the actual code in MybankBaseAction, let us look at what Listing 10.6 conveys. It has two new <set-property> elements. The first setting, validateToken, is used to determine if token validation is necessary on entering execute(). The second setting, resetToken, is useful for the multi-page form scenario when the token has to be reset only on the final page (see Chapter 4 for more information). These two settings fill in the first blank. Next, there is a new kind of ActionForward called mybank.struts.MyActionForward. This is an example of extending the ActionForward class to add custom settings. The <forward> itself now contains a <set-property> for a JavaBeans property called setToken on MyActionForward. This setting fills in the second blank. Now, let us look at the actual code that handles form submission. This code goes into the base Action class and is shown in Listing 10.7. The new code is shown in bold. In addition, the listing includes all the useful code discussed so far that should make it into the base Action (except the page flow controller). You can use this listing as the template base Action for real world applications. The getValidateToken() method retrieves the validateToken <set-property> from MyActionMapping. This setting tells the framework whether to check for sensitive resubmissions on the current page.
After the check is done, duplicate form submissions need to be handled as prescribed by your business. For regular submissions, retrieve the ActionForward for the next page. If the next page happens to be one of those requiring a token in the Http Session, saveToken() is invoked and then the ActionForward is returned.
Listing 10.7 The complete base Action

public abstract class MybankBaseAction extends Action {

    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        MyActionMapping myMapping = (MyActionMapping) mapping;
        ActionForward forward = null;
        MybankBaseForm myForm = (MybankBaseForm) form;
        // Set common MybankBaseForm variables using request &
        // session attributes for type-safe access in subclasses.
        // For e.g. myForm.setUserProfile(
        //     request.getAttribute("profile"));
        // or retrieve app specific profile for the user

        // If there are errors through form validation,
        // return immediately.
        String selectedButton = getSelectedButton(myForm, myMapping);
        if (myMapping.isValidationRequired(selectedButton)) {
            ActionErrors errors = myForm.validate(myMapping, request);
            if (errors != null && !errors.isEmpty()) {
                saveErrors(request, errors);
                return myMapping.getInputForward();
            }
        }

        // Check if token is valid, but don't reset the token.
        boolean tokenIsValid = true;
        if (myMapping.getValidateToken()) {
            // validate token
            tokenIsValid = isTokenValid(request);
        }
        if (tokenIsValid) {
            preprocess(myMapping, myForm, request, response);
            forward = process(myMapping, myForm, request, response);
            postprocess(myMapping, myForm, request, response);
        } else {
            // duplicate submission
            // Adopt a strategy to handle duplicate submissions.
            // This is up to you and unique to your business.
        }
        if (forward.getClass().equals(
                mybank.struts.MyActionForward.class)) {
            MyActionForward myForward = (MyActionForward) forward;
            if (myForward.getResetToken()) {
                resetToken(request);
            }
            /* If there are no errors and the next page requires a new
               token, set it. The setToken is false in the ActionForwards
               for that ActionMapping. Hence a multi-page form flow has a
               single token – a unique identifier for the business
               transaction. */
            if (myForward.getSetToken() && !hasErrors(request)) {
                // next page is a form with sensitive resubmission
                saveToken(request);
            }
        }
        // Add centralized logging here (Exit point audit)
        return forward;
    }
}
10.4 DispatchAction for Image Button form submissions
DispatchAction and LookupDispatchAction work by invoking a method on the Action whose name matches a predefined request parameter. This works fine for form submissions when all the buttons have the same name, but does not work for image button form submissions. In this section, we will further customize the ActionMapping to support a DispatchAction-like feature for image based form submissions. This can be used in an enterprise application without a second thought. It will definitely prove a useful timesaver.
As before, a new <set-property> needs to be added to the struts-config.xml as follows:
<set-property property="methods" value="..."/>
This setting works in conjunction with the <set-property> for the buttons property in MyActionMapping. The methods property is a comma-separated list of method names to be invoked for the button names defined in the comma-separated buttons <set-property>. A subclass of MybankBaseAction called MyDispatchAction is created to provide the DispatchAction-like features. This class has a concrete implementation of MybankBaseAction’s process() method. To use this class, you subclass MyDispatchAction. At runtime, MyDispatchAction invokes the appropriate method of your subclass via reflection. The process() method is shown in Listing 10.8. The underlying logic is similar to the previous utilities. In the process() method, the method to be invoked for the currently selected button is retrieved from MyActionMapping. Then, using MethodUtils (another helper class from the BeanUtils package), the actual method is invoked. The method name to be invoked is specified in the action mapping. These methods are similar to any method you would write had it been a regular DispatchAction. The methods have the fixed signature:
public ActionForward methodName(MyActionMapping mapping,
        MybankBaseForm form, HttpServletRequest request,
        HttpServletResponse response) throws Exception
Listing 10.8 Base Action class with DispatchAction-like features
public class MyDispatchAction extends MybankBaseAction {

    protected ActionForward process(MyActionMapping mapping,
            MybankBaseForm form, HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        ActionForward forward = null;
        String selectedButton = getSelectedButton(form, mapping);
        String methodName = mapping.getMethod(selectedButton);
        Object[] args = {mapping, form, request, response};
        // this invokes the appropriate method in the subclass
        forward = (ActionForward) MethodUtils.invokeMethod(this,
                methodName, args);
        return forward;
    }
}
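Under the hood, MethodUtils.invokeMethod() resolves the target method by name through reflection. The dispatch idea can be sketched without Struts or BeanUtils at all (hypothetical class and method names, plain java.lang.reflect):

```java
import java.lang.reflect.Method;

// Framework-free sketch of name-based dispatch, the idea behind
// MethodUtils.invokeMethod() in Listing 10.8 (hypothetical example).
class DispatchDemo {

    public String save(String customer) {
        return "saved " + customer;
    }

    public String cancel(String customer) {
        return "cancelled " + customer;
    }

    // Look up a public method by name and invoke it with the given argument.
    public String dispatch(String methodName, String arg) throws Exception {
        Method method = getClass().getMethod(methodName, String.class);
        return (String) method.invoke(this, arg);
    }

    public static void main(String[] args) throws Exception {
        DispatchDemo demo = new DispatchDemo();
        System.out.println(demo.dispatch("save", "john"));   // saved john
        System.out.println(demo.dispatch("cancel", "john")); // cancelled john
    }
}
```

MyDispatchAction does the same thing, except that the name-to-method wiring comes from the methods <set-property> in the action mapping.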
10.5 Summary
In this chapter you looked at how to effectively customize Struts to reap maximum benefit and minimize the burdens while developing web applications. Hopefully you were able to realize the strengths of extending Struts and its hidden potential for making your application cleaner and better.
I think I have a quite reasonable 80-line answer to this question in Scala, but it was rejected because the compile time limit is exceeded... Either this limit should be raised, or Scala should not be accepted as a language option. It's otherwise a waste of time.
@entartete Hi, could you please attach your code here so I can troubleshoot the problem? You can also send the code directly to me at support@leetcode.com. Thanks!
Hi @1337c0d3r , thanks for the reply!
Please find the code attached below. I haven't tested it thoroughly, but it worked on the sample input used in the question when I ran it locally.
object Solution {
  def main(args: Array[String]): Unit = {
    val result = shoppingOffers(List(2, 5), List(List(3, 0, 5), List(1, 2, 10)), List(3, 2))
    println(result)
  }

  def shoppingOffers(price: List[Int], special: List[List[Int]], needs: List[Int]): Int = {
    // for every price, compute the set of remaining needs if we spent that much.
    // for every offer, apply offer to every remaining need set at price-offer.cost
    var costToRemainingNeeds = Map(0 -> Set(needs))
    var reachedZeroNeed = false
    var currentCost = 0
    while (!reachedZeroNeed) {
      currentCost += 1
      // check every offer and regular price for applicability
      // if we can reach zero need, the current cost is the solution
      for ((itemPrice, itemIndex) <- price.zipWithIndex) {
        costToRemainingNeeds.get(currentCost - itemPrice) match {
          case Some(needsSet) => {
            // if regular purchase is applicable, add new remaining needs for this cost
            costToRemainingNeeds += currentCost ->
              (costToRemainingNeeds.getOrElse(currentCost, Set()) ++
                applyRegularPurchase(needsSet, itemIndex))
          }
          case None => {
            // item cannot be purchased.
          }
        }
      }
      for (offer <- special) {
        val offerPrice = offer.last
        val offerItems = offer.take(offer.size - 1)
        costToRemainingNeeds.get(currentCost - offerPrice) match {
          case Some(needsSet) => {
            costToRemainingNeeds += currentCost ->
              (costToRemainingNeeds.getOrElse(currentCost, Set()) ++
                applyOffer(needsSet, offerItems))
          }
          case None => {
            // offer cannot be applied.
          }
        }
      }
      costToRemainingNeeds.get(currentCost) match {
        case Some(needsSet) => {
          reachedZeroNeed = needsSet.exists(needs => needs.forall(n => n == 0))
        }
        case None =>
      }
    }
    currentCost
  }

  def applyRegularPurchase(remainingNeeds: Set[List[Int]], itemIndex: Int): Set[List[Int]] = {
    val matchingNeeds = remainingNeeds.filter(needs => needs(itemIndex) > 0)
    matchingNeeds map (needs => needs.patch(itemIndex, Seq(needs(itemIndex) - 1), 1))
  }

  def applyOffer(remainingNeeds: Set[List[Int]], offer: List[Int]): Set[List[Int]] = {
    val matchingNeeds = remainingNeeds.filter(needs => offerApplicable(needs, offer))
    matchingNeeds map (needs => applyOffer(needs, offer))
  }

  def offerApplicable(needs: List[Int], offer: List[Int]): Boolean = {
    needs.zip(offer).forall { case (needItem, offerItem) => needItem >= offerItem }
  }

  def applyOffer(needs: List[Int], offer: List[Int]): List[Int] = {
    needs.zip(offer) map { case (needItem, offerItem) => needItem - offerItem }
  }
}
Hi, I am wondering if there are any updates on this problem? My Scala solutions to the usual practice problems are being rejected because of time limit issues as well. As a simple example, the following one-line solution to the Hamming Distance problem:
public int hammingDistance(int x, int y) { return Integer.bitCount(x ^ y); }
computes in 11ms in Java, but 478 ms in Scala.
Thanks.
I have the same problem too, with the first problem, Two Sum. This is my code snippet; it runs in 6ms in my IDE:
def twoSum(nums: Array[Int], target: Int): Array[Int] = {
  nums.zipWithIndex.toList.combinations(2)
    .find(i => i.map(_._1).sum == target).get
    .map(_._2).toArray
}
The problem with Scala submissions persists...
Tried submitting 557. Reverse words in string III, but always exceeds time limit.
Same approach in Java and Python works no problem.
Must conclude that Scala code functionality is broken, unusable.
Hi, we have increased Scala's time limit for both compile and run code. I believe this should resolve most issues you're having. Please let me know if you still see issues, and I'll fix it. Thanks.
@1337c0d3r I'm still getting TLE for simple scala code. Here is my two sum solution, which is almost the same as the one posted earlier. It also gets TLE.
object Solution {
  def twoSum(nums: Array[Int], target: Int): Array[Int] = {
    nums.combinations(2).find(_.sum == target).get.map(nums.indexOf)
  }
}
Is it possible to ignore time limits or to further raise them? I would be ok with my solution not being considered "accepted", but I'd like to know if my outputs are correct.
Immutable / functional code is sometimes an order of magnitude slower than its mutable counterpart (estimation I've heard thrown around), but it's the style of programming that is used for my line of work. Scala compile times are also known to be pretty poor unfortunately.
Learn To Write Java With Java Code Examples
Java is an object-oriented language used to develop a wide variety of applications. These include applications like simple desktop address books, enterprise level ERP systems, complex dynamic websites, and mobile applications for Android. Java is portable and can run on various platforms, including Linux, Windows, and Mac.
Developed on the principle of Write Once Run Anywhere (WORA), Java is a great place to jump into the exciting world of programming. Learning Java is highly recommended for software developers who want to add to their development skills. This is because it is one of the most popular programming languages in the world.
Last Updated February 2022
This article contains some basic and interesting Java code examples for beginners. These Java tutorials will help the reader understand the basic features of any Java application.
1. A basic Java calculator
The following code example is a simple calculator application in Java. The code takes two numbers and one of four mathematical operations (add, subtract, multiply, and divide) as input from the console and then displays the result on the console.
The application makes use of a Maths Java class which contains four methods. The methods perform each of the four mathematical operations and return the result to the calling method.
The definition of the Maths class looks like this:
public class Maths {

    public int Add(int num1, int num2) {
        return num1 + num2;
    }

    public int Subtract(int num1, int num2) {
        return num1 - num2;
    }

    public int Multiply(int num1, int num2) {
        return num1 * num2;
    }

    public int Divide(int num1, int num2) {
        return num1 / num2;
    }
}
You can see from the above code that all the methods in the code return integer values, and all of them take two integer data type parameters.
To make use of the Maths class for performing mathematical functions, its object has to be created, and then you can call the methods from that object. The following code snippet demonstrates how a calculator application actually works.
import java.util.Scanner;

public class MyClass {

    public static void main(String[] args) {
        Maths math = new Maths();
        Scanner userInput = new Scanner(System.in);

        System.out.println("Welcome to the Java Calculator");
        System.out.println("==============================");
        System.out.print("Enter the first number:");
        int num1 = userInput.nextInt();
        System.out.print("Enter the second number:");
        int num2 = userInput.nextInt();
        System.out.print("Enter operation to perform (+,-,x,/):");
        String operation = userInput.next();

        if (operation.equals("+"))
            System.out.println(math.Add(num1, num2));
        else if (operation.equals("-"))
            System.out.println(math.Subtract(num1, num2));
        else if (operation.equals("x"))
            System.out.println(math.Multiply(num1, num2));
        else if (operation.equals("/"))
            System.out.println(math.Divide(num1, num2));
        else
            System.out.println("The operation is not valid.");
    }
}
If you look closely at the above code snippet, the java.util.Scanner class has been imported at the beginning of the code. The Scanner class reads and parses input from sources such as the keyboard or files.
In the main method of the class, the object of Maths class has been declared. Then, using the Scanner class object, the first number, second number, and operation to be performed are retrieved from the user. Finally, the string comparison of the operation is performed using the equals method. This is to decide which method of the Maths class will be called.
If the matched string is “+,” the Add method of the Maths class will be called. If the matched string is “-,” then the Subtract method is called, and so on. The numbers taken as input from the user will be passed to the relevant method to be processed and the answer returned and printed out.
A sample run of the program is shown below.
Welcome to the Java Calculator
==============================
Enter the first number:10
Enter the second number:20
Enter operation to perform (+,-,x,/):+
30

Process finished with exit code 0
The addition of 20 to 10 gives the answer 30 in the output displayed.
2. Calculating factorial using recursive functions in Java
The second code example demonstrates how a factorial of any number can be calculated using a recursive method in Java. A recursive method in Java is a method that keeps calling itself until a particular condition is met.
The following code snippet demonstrates the use of recursion within a Java method to calculate a factorial of a user entered number:
public int factorial(int f) {
    if (f == 1) {
        return f;
    } else {
        f = f * factorial(f - 1);
        return f;
    }
}
And this is the code that asks the user to enter the number, calls the factorial method, and prints the result:

MyClass test = new MyClass();
Scanner userInput = new Scanner(System.in);
System.out.print("Enter the number to find the factorial for: ");
int num = userInput.nextInt();
System.out.println("The factorial of " + num + " is: " + test.factorial(num));
Both methods would be added to a class in order to allow them to be executed. Here is the full listing:
import java.util.Scanner;

public class MyClass {

    public static void main(String[] args) {
        MyClass test = new MyClass();
        Scanner userInput = new Scanner(System.in);
        System.out.print("Enter the number to find the factorial for: ");
        int num = userInput.nextInt();
        System.out.println("The factorial of " + num + " is: "
                + test.factorial(num));
    }

    public int factorial(int f) {
        if (f == 1) {
            return f;
        } else {
            f = f * factorial(f - 1);
            return f;
        }
    }
}
In the above code, the factorial method in the class MyClass is the recursive function that keeps calling itself. The method stops recursing when the value of the passed parameter reaches one, at which point it returns to its caller. Otherwise, the factorial method calls itself with a value one less than the value it was passed.
This is a sample of the output:
Enter the number to find the factorial for: 11
The factorial of 11 is: 39916800

Process finished with exit code 0
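To see the recursive calls wind up and unwind, here is a small instrumented variant of the same idea (an added illustration, not part of the original article; it also guards with f <= 1 so that zero is handled):

```java
// Instrumented variant of the factorial logic that prints each call
// and each return, showing how the recursion winds up and unwinds.
class FactorialTrace {

    static int factorial(int f, String indent) {
        System.out.println(indent + "factorial(" + f + ") called");
        int result;
        if (f <= 1) {
            result = 1;                        // base case stops the recursion
        } else {
            result = f * factorial(f - 1, indent + "  ");
        }
        System.out.println(indent + "factorial(" + f + ") returns " + result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(4, ""));  // prints the call trace, then 24
    }
}
```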
3. Displaying first ‘n’ prime numbers
The last code example demonstrates how the first ‘n’ prime numbers can be calculated where ‘n’ is any number. For instance, if the user specifies ‘n’ as 5, the first five prime numbers starting from 2 and ending at 11 would be displayed.
This is because the first five prime numbers are 2, 3, 5, 7, and 11.
The code of the application is as follows:
import java.util.Scanner;

public class MyClass {

    public boolean numberIsPrime(int n) {
        for (int i = 2; i < n; i++) {
            if (n % i == 0)
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        MyClass test = new MyClass();
        Scanner userInput = new Scanner(System.in);
        System.out.print("Enter the number of primes to be displayed: ");
        int num = userInput.nextInt();

        int count = 0;
        for (int i = 2; count < num; i++) {
            if (test.numberIsPrime(i)) {
                System.out.print(i + " ");
                count++;
            }
        }
    }
}
A method named numberIsPrime takes an integer type number and checks if it is a prime number. If the number passed to the method is not prime, it returns false. Otherwise, it returns true.
In the main method, a number is obtained from the user using the console. Starting from the number 2, the program starts checking if every integer number is a prime number. If it is a prime number, the number is output, and a count of found prime numbers is incremented. This continues until it reaches the number of prime numbers requested by the user, after which the program terminates.
Here is a sample run of the program:
Enter the number of primes to be displayed: 5 2 3 5 7 11 Process finished with exit code 0
To learn more about Java programming, you can check out how to learn Java and how to get Java certifications.
Recommended Articles
How to Get an Oracle Java Certification
How to Learn Java Programming
Basics of Java: Learn Java Coding
How to Write a Simple Java Hello World Program
Understanding the Java String Matches Method
Introduction to the String Array: Java Tutorial
How to Use the for each Loop in Java with Arrays
Java Swing Tutorial for Beginners
Java String Length: Counting Strings
Java Applet Tutorial: Learning the Basics
Top courses in Java
Java students also learn
Empower your team. Lead the industry.
Get a subscription to a library of online courses and digital learning tools for your organization with Udemy for Business. | https://blog.udemy.com/java-code-examples/ | CC-MAIN-2022-33 | refinedweb | 1,344 | 52.29 |
I was having problems with one of my projects every time I pressed for first time the screen. At first I thought it was caused by a script that disabled an UI Text but after one day of researching I couldn't find any answer so I decided to try recreating the problem in an empty scene. The results are that whenever I place a script that checks for Input.GetTouch(0), the first time I touch the screen, the game freezes for a fraction of a second even if the scene is completely empty. I took a look at the profiler and the script checking for the input was generating that lag spike. Note that after the first touch everything works smoothly and without any kind of lag spike.
This was the script used using System.Collections; using System.Collections.Generic; using UnityEngine;
public class Scr_Input : MonoBehaviour
{
// Update is called once per frame
void Update()
{
if (Input.touchCount > 0)
{
Touch touch = Input.GetTouch(0);
if (touch.phase == TouchPhase.Began)
{
Debug.Log("AAA");
}
}
}
}
And here's the profiler when I first touch the device's screen. The device I tested on is a Xiaomi MI A1 but I had this kind of problem before using a Samsung Galaxy S9.
Now you may notice that the lag spike is generated by the Debug.Log() code but It just doesn't make sense to me since in my other projects I don't have any Debug.Log() going on and there are other things happening before I touch the screen so there is no other reason for the lag spike that the Input.GetTouch(). Also, in the picture you may see smaller spikes, those are the touches after the first one, just to prove that it only happens on the first touch.
So if you know how to avoid this I would be really thankful since it's completely ruining the game experience because the first touch when you open the game will always generate a lag spike no matter the device.
UPDATE
Since I've read that Debug.Log() is expensive for perfomance I've changed it and placed a transform.position += new Vector3(1, 0, 0);
And as expected the lag spike isn't that massive but even though the first touch is still generating a bigger lag spike than the ones after that and if you consider a bigger script (as the ones in my other project), the lag spike gets way too big and you can tell that the game lags at the first touch.
Here's the Profiler
Answer by Visuallization
·
Jan 17 at 01:09 PM
Okay I found a solution which works for me. It is split into 2 parts: 1.Trigger a drag start via code. It might be the only part you need to "fix" this issue. This is how I did it:
void Start() {
// Trigger drag start via code to prevent initial drag delay on android
PointerEventData pointer = new PointerEventData(EventSystem.current);
ExecuteEvents.Execute(gameObject, pointer, ExecuteEvents.beginDragHandler);
}
Thank you for the answer! I thought this question was already dead. I haven't tested your workaround but made something similar to fix it when I needed to (I simulated the first touch in the Start() function)
Answer by Visuallization
·
Jan 15 at 01:20 PM
I am actually experiencing the same issue on android even with the native unity scrollrect in an otherwise empty scene. Is there anyone out there who can help with this issue? Any unity folks
353 People are following this question.
Having an issue with touch input lag on certain Android devices, any help?
0
Answers
I want to move my cube on right or left when I touch screen for Android devices; Please help me which script I have to use
0
Answers
Touch radius always returns zero on Android
0
Answers
UNITY 2D ANDROID DEVICE TOUCH PROBLEM
0
Answers
Android| Camera scrolling
1
Answer
EnterpriseSocial Q&A | https://answers.unity.com/questions/1662339/massive-lag-spike-on-first-inputgettouch-with-noth.html | CC-MAIN-2021-17 | refinedweb | 661 | 69.82 |
Hi there,
It’s been a while since version 2.1 came out. As you know, this version adds support for assignment expressions. Have you tried it?
And when I use VS Code, it helps me highlight the variable by default when I move the cursor over. However, if I hadn’t made the extra settings, it wouldn’t have helped me check the strings. Em, you know, if I carelessly write two IDs,
id='table' and
id='tabel', I probably find it out until the process reports an error. But if they’re variables, it’s different, and I will get a very obvious underline.
from dash import html, dcc, Dash, Output, Input, State, callback_context, no_update app = Dash(__name__) app.layout = html.Div([my_input := dcc.Input(), my_output := html.Div()]) @app.callback(Output(my_output, 'children'), Input(my_input, 'value')) def update(value): return value if __name__ == "__main__": app.run_server(debug=True)
Then you might ask, where did my ID go? What if I want to use
callback_context.triggered to know which component triggers the callback?
Don’t worry. Let me print it out.
print(my_btn.id)
82e2e662-f728-b4fa-4248-5e3a0a5d2f34
See, dash gave it a component ID in UUID format. Then you can use it in callbacks.
if callback_context.triggered[0]["prop_id"].split(".")[0] != my_btn.id: return no_update
Hope this helps you. XD | https://community.plotly.com/t/daily-tips-give-your-component-a-name/62864 | CC-MAIN-2022-21 | refinedweb | 223 | 69.38 |
oslo_config.cfg.
ConfigOpts¶
Config options which may be set on the command line or in config files.
ConfigOpts is a configuration option manager with APIs for registering option schemas, grouping options, parsing option values and retrieving the values of options.
It has built-in support for
config_file and
config_dir options.
GroupAttr(conf, group)¶
Helper class.
Represents the option values of a group as a mapping and attributes.
StrSubWrapper(conf, group=None, namespace=None)¶
Helper class.
Exposes opt values as a dict for string substitution.
SubCommandAttr(conf, group, dest)¶
Helper class.
Represents the name and arguments of an argparse sub-parser.
Template(template)¶
clear()¶
Reset the state of the object to before options were registered.
This method removes all registered options and discards the data from the command line and configuration files.
Any subparsers added using the add_cli_subparsers() will also be removed as a side-effect of this method.
clear_default(name, group=None)¶
Clear an override an opt’s default value.
Clear a previously set override of the default value of given option.
clear_override(name, group=None)¶
Clear an override an opt value.
Clear a previously set override of the command line, config file and default values of a given option.
find_file(name)¶
Locate a file located alongside the config files.
Search for a file with the supplied basename in the directories which we have already loaded config files from and other known configuration directories.
The directory, if any, supplied by the config_dir option is searched first. Then the config_file option is iterated over and each of the base directories of the config_files values are searched. Failing both of these, the standard directories searched by the module level find_config_files() function is used. The first matching file is returned.
get_location(name, group=None)¶
Return the location where the option is being set.
See also
New in version 5.3.0.
import_group(group, module_str)¶
Import an option group from a module.
Import a module and check that a given option group is registered.
This is intended for use with global configuration objects like cfg.CONF where modules commonly register options with CONF at module load time. If one module requires an option group defined by another module it can use this method to explicitly declare the dependency.
import_opt(name, module_str, group=None)¶
Import an option definition from a module.
Import a module and check that a given option is registered.
This is intended for use with global configuration objects like cfg.CONF where modules commonly register options with CONF at module load time. If one module requires an option defined by another module it can use this method to explicitly declare the dependency.
list_all_sections()¶
List all sections from the configuration.
Returns a sorted list of all section names found in the configuration files, whether declared beforehand or not.
log_opt_values(logger, lvl)¶
Log the value of all registered opts.
It’s often useful for an app to log its configuration to a log file at startup for debugging. This method dumps to the entire config state to the supplied logger at a given log level.
mutate_config_files()¶
Reload configure files and parse all options.
Only options marked as ‘mutable’ will appear to change.
Hooks are called in a NON-DETERMINISTIC ORDER. Do not expect hooks to be called in the same order as they were added.
print_help(file=None)¶
Print the help message for the current program.
This method is for use after all CLI options are known registered using __call__() method. If this method is called before the __call__() is invoked, it throws NotInitializedError
print_usage(file=None)¶
Print the usage message for the current program.
This method is for use after all CLI options are known registered using __call__() method. If this method is called before the __call__() is invoked, it throws NotInitializedError
register_cli_opt(opt, group=None)¶
Register a CLI option schema.
CLI option schemas must be registered before the command line and config files are parsed. This is to ensure that all CLI options are shown in –help and option validation works as expected.
register_cli_opts(opts, group=None)¶
Register multiple CLI option schemas at once.
register_group(group)¶
Register an option group.
An option group must be registered before options can be registered with the group.
register_mutate_hook(hook)¶
Registers a hook to be called by mutate_config_files.
register_opt(opt, group=None, cli=False)¶
Register an option schema.
Registering an option schema makes any option value which is previously or subsequently parsed from the command line or config files available as an attribute of this object.
register_opts(opts, group=None)¶
Register multiple option schemas at once.
reload_config_files()¶
Reload configure files and parse all options
reset()¶
Clear the object state and unset overrides and defaults.
set_default(name, default, group=None)¶
Override an opt’s default value.
Override the default value of given option. A command line or config file value will still take precedence over this default.
set_override(name, override, group=None)¶
Override an opt value.
Override the command line, config file and default values of a given option.
unregister_opt(opt, group=None)¶
Unregister an option.
unregister_opts(opts, group=None)¶
Unregister multiple CLI option schemas at once.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/oslo.config/latest/reference/configopts.html | CC-MAIN-2018-51 | refinedweb | 870 | 50.63 |
Answered by:
Google Earth in C# (Noob Ahead)
Hi,
I'm trying to make google earth appear in my little program but I'm not sure how to do that. This is the code I got from someone else but I don't know how to continue.Code Snippet
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Text;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using EARTHLib;
using System.Threading;
namespace GoogleEarth
{
public partial class GoogleEarthViewPort : UserControl
{
public delegate int EnumWindowsProc(IntPtr hwnd, int lParam);
[DllImport("user32", CharSet = CharSet.Auto)]
public extern static IntPtr GetParent(IntPtr hWnd);
[DllImport("user32", CharSet = CharSet.Auto)]
public extern static bool MoveWindow(IntPtr hWnd, int X, int Y, int nWidth, int nHeight, bool bRepaint);
[DllImport("user32", CharSet = CharSet.Auto)]
public extern static IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);
public GoogleEarthViewPort()
{
InitializeComponent();
}
private IntPtr GEHrender = (IntPtr)0;
private IntPtr GEParentHrender = (IntPtr)0;
public ApplicationGEClass googleEarth;
private void GoogleEarthViewPort_Load(object sender, EventArgs e)
{
if (this.DesignMode == false)
{
googleEarth = new ApplicationGEClass();
GEHrender = (IntPtr)googleEarth.GetRenderHwnd();
GEParentHrender = GetParent(GEHrender);
MoveWindow(GEHrender, 0, 0, this.Width, this.Height,true);
SetParent(GEHrender, this.Handle);
}
}
public void Unload()
{
if (GEParentHrender != (IntPtr)0)
SetParent(GEHrender, GEParentHrender);
}
private void GoogleEarthViewPort_Resize(object sender,EventArgs e)
{
if (GEHrender != (IntPtr)0)
MoveWindow(GEHrender, 0, 0, this.Width, this.Height,true);
}
}
}
I already added the reference EARTHLib in Google Earth.
Can someone help me please? Thank you.
Question
Answers
All replies
- Hi,
You will have much more luck using the supported plug-in api rather than the un-supported COM api.
Take a look at this small example I blogged some time ago.
Also, I have been working on a library of controls and helper classes to work with the plug-in from managed code.
You can see the project here.
The basis of the control library interaction is the COM Google Earth Plugin Type Library (plugin_ax.dll) converted into the equivalent definitions in a common language runtime assembly. This is great as the Api for the plugin lets you work with a whole host of features and objects that are not exposed via the EARTH COM api.
Anyhow, I hope that is some use to you.
Regards,
Fraser.
Yea right...
From source, right click the reference item that has EARTHLib and click properties, Set interop to FALSE. Done
- Proposed as answer by guayaco10040 Wednesday, January 04, 2012 7:39 PM
- I suspect that after more than 3 years the problem is solved, or the OP has given up. :)
Regards David R
---------------------------------------------------------------
Every program eventually becomes rococo, and then rubble. - Alan Perlis
The only valid measurement of code quality: WTFs/minute. | http://social.msdn.microsoft.com/Forums/vstudio/en-US/79c9215e-b028-4126-aa28-dedd3c91e89c/google-earth-in-c-noob-ahead?forum=csharpgeneral | CC-MAIN-2014-35 | refinedweb | 443 | 51.34 |
int main() { char p[25]; printf("Hello world!\n"); scanf("%s",p); printf("%s\n",p); return 0; }
Or
#include <stdio.h> #include <stdlib.h> int main() { char *p=NULL; p=malloc(10); printf("Hello world!\n"); scanf("%s",p); printf("%s\n",p); return 0; }
Or
#include <stdio.h> #include <stdlib.h> int main() { char *p=NULL; p=malloc(10); printf("Hello world!\n"); gets(p); printf("%s\n",p); return 0; }
It's somewhat amusing how you managed to pick the two worst possible options. gets() is completely unsafe, and the latest C standard actually removed it from the library. scanf()'s "%s" specifier without a field width is no better than gets(). In both cases input longer than the memory can hold will cause buffer overflow. They also do different things in that gets() reads a whole line while
scanf("%s"...) only reads up to the nearest whitespace.
The recommended go to solution is fgets():
#include <stdio.h> #include <string.h> int main(void) { char buf[BUFSIZ]; printf("Enter a string: "); fflush(stdout); if (fgets(buf, sizeof buf, stdin)) { buf[strcspn(buf, "\n")] = '\0'; // Optionally remove any newline character printf("'%s'\n", buf); } return 0; } | https://www.daniweb.com/programming/software-development/threads/446577/how-to-accept-strings | CC-MAIN-2016-50 | refinedweb | 198 | 61.73 |
Facebook Interview QuestionSenior Software Development Engineers
Country: India
1. "why are you checking the first element before the binary search?" -
Only for acceleration this case: bug has been present since the beginning.
2. "what case this condition will be true? (e == s )" -
When array has only one element.
The idea here is correct, but your implementation leaves out a few details.
Note that there are three comparisions against revs[mid], when there only needs to be one.
The correct implementation for the binaryFindBugs method is as follows:
public static int binaryFindBugs(int[] revisions, int start, int end) {
if (start == end || start - end == 1) return revisions[start];
int mid = (start + end) / 2;
if (isBug(revisions[mid]) {
// Note that we check [start, mid - 1] because we've already checked revisions[mid] and we're moving left (decreasing).
return binaryFindBugs(revisions, start, mid - 1);
} else {
// Note that we check [mid + 1, end] because we've already checked revisions[mid] and we're moving right (increasing).
return binaryFindBugs(revisions, mid + 1, end);
}
}
How would binary search be faster than linear search? It is not like if we find that if middle version of the range didn't introduce bug we can eliminate upper half or lower half. We will have to do linear search from bottom or top of the range.
Actually it's possible to do modified binary search such that we always end up at the last occurrence. This can be done by going to the interval containing the current index and upper half of the array every time the current number is <= to the version number with the bug, which avoids linear search.
public class Branch{
public int findBug(int[] branch){
if(branch == null || branch.length == 0){ //if branch is null or empty, no bugs
return -1
} else if(isBug(branch[0])){ //checks if the bug is the only element in array, or first element of array
return branch[0];
} else { //runs if branch is length > 1, linear search. Can't do binary because there is no comparison to eliminate left/right half
int bugVersion;
for(int i = 0; i < branch.length; i++){
if(isBug(branch[i]))
bugVersion = branch[i];
}
if(bugVersion)
return bugVersion;
else
return -1;
}
}
}
This is a good start, but there's some work to be done here. First, in terms of your class design, things are a bit funky. The classname is Branch, which provides the assumption that it represents a branch. In that case, I would give Branch the method Branch.isBuggy() -> boolean, and create another class, BranchSet (or RevisionSet) which essentially wraps a Set<Branch>. In that class, I would define the method BranchSet.findBug() -> Branch.
In the second half of the code, you note that we can't preform binary search because there is no comparison to eliminate the left/right half, but there are assumptions that are safe to make that DO allow us to eliminate halves of the search. Let's assume that bugs are introduced linearly, such that if we introduce a bug in rev x, it will be contained in all revisions following x. This assumption is safe to make because we are primarily concerned with our implementation of binary search.
As such, if a bug is not found in the middle, we can eliminate the revisions [start, middle], and assume that the buggy revision lies between (middle, end], making binary search a prime choice to solve this problem.
The same approach with corrections, need to return v[start] as its lowest version where bug could have been introduced.
Also in the else, we can start from mid+1
static int findbugIntroduced(int[] v,int start, int end)
{
if(start == end || end-start == 1) return v[start];//return the lowest, where the bug is introduced
int mid = (start+end)/2;
if(isBug(v[mid]))//then introduced in lower version or mid
{
return findbugIntroduced(v,start,mid);
}else
{
return findbugIntroduced(v,mid+1,end);//can do mid+1 as mid doesnt have bug
}
}
- Ann January 24, 2015 | https://careercup.com/question?id=5156825198493696 | CC-MAIN-2021-43 | refinedweb | 666 | 58.72 |
> [mailto:haskell-cafe-bounces at haskell.org] On Behalf Of Vivian McPhail > > I just setup and installed hs-plugins from darcs on WinXP > using ghc-6.6 and > MSYS. > > The hs-plugin test suite all passes. > > Can you send me something that generates your error and I'll > have a look at > it. > > Vivian Well, if I haven't misunderstood what you're asking for, there is the original test case at the root of this mail thread (copied below). This produces the same error message that Conal gets: Main: c:/ghc/ghc-6.6/HSbase.o: unknown symbol `_free' Main: user error (Dynamic loader returned: user error (resolvedObjs failed.)) Alistair --------------------------------------------------------------- module Test1 where test1 = putStrLn "test1" module Main where import Prelude hiding (catch) import Control.Exception import Data.List import System.Environment import System.Plugins instance Show (LoadStatus a) where show (LoadFailure errors) = "LoadFailure - " ++ (concat (intersperse "\n" errors)) show (LoadSuccess m p) = "LoadSuccess" main = do a <- getArgs let modName = case a of (n:_) -> n _ -> "Test1" let modPath = "./" ++ modName ++ ".o" let method = "test1" fc <- catch (load modPath [""] [] method) (\e -> return (LoadFailure ["Dynamic loader returned: " ++ show e])) case fc of LoadFailure errors -> do fail (concat (intersperse "\n" errors)) LoadSuccess modul proc -> do let p :: IO (); p = proc proc *****************************************************************. ***************************************************************** | http://www.haskell.org/pipermail/haskell-cafe/2007-March/023612.html | CC-MAIN-2014-35 | refinedweb | 209 | 66.33 |
On Mon, Dec 29, 2003 at 04:04:40PM -0800, Matthew Dillon wrote: > > : > :Speaking of 'hlt' instruction, if I revert the revision 1.22 of > :sys/kern/lwkt_thread.c, the temperature stays low. > > 1.22? That's 24 versions down from the current version. Your kernel > shouldn't compile with lwkt_thread.c out of sync that much :-). No, I didn't say that I reverted to 1.22. What I meant was `undo the change made in revision 1.22', or surround it by #ifdef SMP: Index: lwkt_thread.c =================================================================== RCS file: /home/source/dragonfly/cvs/src/sys/kern/lwkt_thread.c,v retrieving revision 1.46 diff -u -r1.46 lwkt_thread.c --- lwkt_thread.c 4 Dec 2003 20:09:33 -0000 1.46 +++ lwkt_thread.c 30 Dec 2003 02:35:47 -0000 @@ -509,8 +509,10 @@ * pending interrupts, spin in idle if so. */ ntd = &gd->gd_idlethread; +#if SMP if (gd->gd_reqflags) ntd->td_flags |= TDF_IDLE_NOHLT; +#endif } } KASSERT(ntd->td_pri >= TDPRI_CRIT, Obviously this doesn't help SMP or HTT kernel. | http://leaf.dragonflybsd.org/mailarchive/kernel/2003-12/msg00436.html | CC-MAIN-2015-06 | refinedweb | 167 | 71.41 |
In my seminar, I often hear the question: How can I safely pass a plain array to a function? With C++20, the answer is quite easy: Use a std::span.
A std::span stands for an object that can refer to a contiguous sequence of objects. A std::span, sometimes also called a view, is never an owner. This contiguous memory can be a plain array, a pointer with a size, a std::array, a std::vector, or a std::string. A typical implementation consists of a pointer to its first element and a size. The main reason for having a std::span<T> is that a plain array will be decay to a pointer if passed to a function; therefore, the size is lost. This decay is a typical reason for errors in C/C++.
In contrast, std::span<T> automatically deduces the size of contiguous sequences of objects.
// printSpan.cpp
#include <iostream>
#include <vector>
#include <array>
#include <span>
void printMe(std::span<int> container) {
std::cout << "container.size(): " << container.size() << '\n'; // (4)
for(auto e : container) std::cout << e << ' ';
std::cout << "\n\n";
}
int main() {
std::cout << std::endl;
int arr[]{1, 2, 3, 4}; // (1)
printMe(arr);
std::vector vec{1, 2, 3, 4, 5}; // (2)
printMe(vec);
std::array arr2{1, 2, 3, 4, 5, 6}; // (3)
printMe(arr2);
}
The C-array (1), std::vector (2), and the std::array (3) have int's. Consequently, std::span also holds int's. There is something more interesting in this simple example. For each container, std::span can deduce its size (4).
All of the big three C++ compilers MSVC, GCC, and Clang, support std::span.
There are more ways to create a std::span.
You can create a std::span from a pointer and a size.
// createSpan.cpp
#include <algorithm>
#include <iostream>
#include <span>
#include <vector>
int main() {
std::cout << std::endl;
std::cout << std::boolalpha;
std::vector myVec{1, 2, 3, 4, 5};
std::span mySpan1{myVec}; // (1)
std::span mySpan2{myVec.data(), myVec.size()}; // (2)
bool spansEqual = std::equal(mySpan1.begin(), mySpan1.end(),
mySpan2.begin(), mySpan2.end());
std::cout << "mySpan1 == mySpan2: " << spansEqual << std::endl; // (3)
std::cout << std::endl;
}
As you may expect, the from a std::vector created mySpan1 (1) and the from a pointer and a size created mySpan (2) are equal (3).
You may remember that a std::span is sometimes called a view.Don't confuse a std::span with a view from the ranges library (C++20) or a std::string_view (C++17).
A view from the ranges library is something that you can apply on a range and performs some operation. A view does not own data, and it's time to copy, move, assignment it's constant. Here is a quote from Eric Nieblers range-v3 implementation, which is the base for the C++20 ranges: "Views are composable adaptations of ranges where the adaptation happens lazily as the view is iterated." These are all my posts to then ranges library: category ranges library.
A view (std::span) and a std::string_view are non-owning views and can deal with strings. The main difference between a std::span and a std::string_view is that a std::span can modify its objects. When you want to read more about std::string_view, read my previous post: "C++17 - What's New in the Library?" and "C++17 - Avoid Copying with std::string_view".
You can modify the entire span or only a subspan. When you modify the span, you modify the referenced objects..
The following program shows how a subspan can be used to modify the referenced objects from a std::vector.
// spanTransform.cpp
#include <algorithm>
#include <iostream>
#include <vector>
#include <span>
void printMe(std::span<int> container) {
std::cout << "container.size(): " << container.size() << std::endl;
for(auto e : container) std::cout << e << ' ';
std::cout << "\n\n";
}
int main() {
std::cout << std::endl;
std::vector vec{1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
printMe(vec);
std::span span1(vec); // (1)
std::span span2{span1.subspan(1, span1.size() - 2)}; // (2)
std::transform(span2.begin(), span2.end(), // (3)
span2.begin(),
[](int i){ return i * i; });
printMe(vec);
}
span1 references the std::vector vec (1). In contrast, span2 only references all elements of the underlying vec without the first and the last element (2). Consequently, the mapping of each element to its square (3) only addresses these elements.
There are many convenience functions to refer to the elements of the span.
The table presents the functions to refer to the elements of a span.
span<count>.first()
span.first(count)
span<count>last()
span.last<count>
span<first, count>.subspan()
span.subspan(first, count)
The small program shows the usage of the function subspan.
// subspan.cpp
#include <iostream>
#include <numeric>
#include <span>
#include <vector>
int main() {
std::cout << std::endl;
std::vector<int> myVec(20);
std::iota(myVec.begin(), myVec.end(), 0); // (1)
for (auto v: myVec) std::cout << v << " ";
std::cout << "\n\n";
std::span<int> mySpan(myVec); // (2)
auto length = mySpan.size();
auto count = 5; // (3)
for (long unsigned int first = 0; first <= (length - count); first += count ) {
for (auto ele: mySpan.subspan(first, count)) std::cout << ele << " ";
std::cout << std::endl;
}
}
The program fills the vector with all numbers from 0 to 19 (1), and initializes a std::span with it (2). The algorithm std::iota fills myVec with the sequentially increasing values, starting with 0. Finally, the for-loop (3) uses the function subspan to create all subspans starting at first and having count elements until mySpan is consumed.
Containers of the STL become with C++20 more powerful. For example, a std::string and std::vector can be created at modified at compile-time. Further, thanks to the functions std::erase and std::erase_if, the deletion of the elements of a container works like a charm. write-up. Question/comment: as far as I understand span is not bounds-safe. states that operator[] is undefined behaviour on out of bounds access.
Hunting
Today 512
Yesterday 7029
Week 40836
Month 107502
All 7375342
Currently are 161 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
I should clarify. I mean that std::span automatically deduces the size of a contiguous sequence of objects. | https://modernescpp.com/index.php/c-20-std-span | CC-MAIN-2021-43 | refinedweb | 1,049 | 65.01 |
I am a very beginner (5 days) and I am trying to make a program that generates frequency on whatever sided dice you want. I dont want to know how to make it more efficient. Just what I did wrong. Here's the code:
import java.util.Scanner; import java.util.Random; public class dieRoll { public static void main(String[] args) { Scanner input = new Scanner(System.in); Random rand = new Random(); System.out.println("Type number of sides on die: "); int faces = input.nextInt(); int freq[] = new int[1 + faces]; for(int roll = 1; roll < 1000; roll++) { ++freq[1 + rand.nextInt(faces)]; } System.out.println("Face\tFrequency"); for(int side = 1; side < freq.length; side++); { System.out.println(side+ "\t" + freq[side]); } } }
And here's the problem:
the problem is the last print line. | http://www.javaprogrammingforums.com/whats-wrong-my-code/18302-dice-roller.html | CC-MAIN-2015-48 | refinedweb | 134 | 62.54 |
Hey guys, it's been a while since I wrote a tutorial, but this one is something I'm actually working on, so I decided to share with you what I learned ❤️.
BTW, we are building a small wrapper for PokeAPI.
what we will do
- Start a node project
- Install our dependencies
- Setup eslint & prettier
- Setup our package.json
- Start coding
- Setup small project for testing
- Let's Publish
Start a node project
So I will assume you at least know how to do this, but if not, here is how:
You just need an empty folder; run the next command in it
npm init -y
Now, I made some changes to my package.json (keywords, author, repo, and version). You don't need to make these changes, but take a look at them if you'd like to.
{
  "name": "pokeapi",
  "version": "0.1.0",
  "description": "",
  "main": "index.js",
  "scripts": {
  },
  "keywords": [
    "pokemon",
    "api",
    "sdk",
    "typescript",
    "tutorial"
  ],
  "author": "David M.",
  "license": "GPLv3"
}
You will notice scripts is empty 👀 we will fill it later
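One thing worth knowing ahead of time: because TypeScript will compile into a dist folder (we set that up in the next section), the main field will eventually need to point at the build output, and a types field tells consumers where the type declarations live. Here is a rough sketch of where the package.json is heading — the exact values here are my guess at this point; we will set it up for real in the "Setup our package.json" step:

```json
{
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "scripts": {
    "build": "tsc",
    "lint": "eslint . --ext .ts"
  }
}
```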
Install our dependencies
Now we will install one of our dev dependencies
npm install -D typescript
great! now we need another file on our folder root, it's called "tsconfig.json" you can copy the one that I used (below here) or you can generate it with the next command.
./node_modules/.bin/tsc --init
If you decided for this approach just make sure to adjust the declaration and outDir options according to the JSON bellow.
Setting the declaration attribute to true ensures that the compiler generates the respective TypeScript definitions files aside of compiling the TypeScript files to JavaScript files. The outDir parameter defines the output directory as the dist folder.
or just use mine ¯\_(ツ)_/¯
{ "compilerOptions": { "target": "ES2015", /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019', 'ES2020', or 'ESNEXT'. */ "module": "commonjs", /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', 'es2020', or 'ESNext'. */ . */ "declaration": true, "outDir": "./dist", } }
once we have this setup we will need to add some dependencies (this ones may not apply for your sdk)
npm install -S axios
now we are over with our dependencies... for now 👀
Setup eslint and prettier
Eslint
I think this part it's actually the shortest so let's start
you will need to run the next command:
npx eslint --init
Now... I recommend the next answers for the eslint init
Prettier
You need to run the next command
npm install -D prettier eslint-config-prettier eslint-plugin-prettier
After you have all that installed change the content of your
.eslintrc.json with this
{ "env": { "es6": true, "node": true }, "extends": [ "airbnb-base", "prettier/@typescript-eslint", "plugin:prettier/recommended" ], "globals": { "Atomics": "readonly", "SharedArrayBuffer": "readonly" }, "parser": "@typescript-eslint/parser", "parserOptions": { "ecmaVersion": 11, "sourceType": "module" }, "plugins": [ "@typescript-eslint" ], "rules": {} }
and add the file
.prettierrc.json with this inside
{ "printWidth": 100, "tabWidth": 2, "singleQuote": true, "jsxBracketSameLine": true, "trailingComma": "es5" }
Setup our package.json
now that we finally have all the development setup ready we need to modify a bit our
package.json so it know it's a TypeScript project
{ "name": "pokeapi", "version": "0.1.0", "description": "", "main": "dist/index.js", "types": "dist/index.d.ts", "scripts": { "prepare": "npm run build", "build": "tsc" }, "keywords": [ "pokemon", "api", "sdk", "typescript", "tutorial" ], "author": "David M.", "license": "GPLv3", "devDependencies": { "@typescript-eslint/eslint-plugin": "^3.9.0", "@typescript-eslint/parser": "^3.9.0", "eslint": "^7.6.0", "eslint-config-airbnb-base": "^14.2.0", "eslint-config-prettier": "^6.11.0", "eslint-plugin-import": "^2.22.0", "eslint-plugin-prettier": "^3.1.4", "prettier": "^2.0.5", "typescript": "^3.9.7" }, "dependencies": { "axios": "^0.19.2" } }
If you notice, all that we changed is the scripts and added some settings main and types,
remember if you change your outputdir on
tsconfig.json change it on your
package.json.
Start coding
FINALLY
Let's make a new file called index.ts (on our root)
this is where our SDK will leave, we obviously and separate it on different files and import them but my example is short and simple so I will use the same file for all of it.
First we will import everything
import axios from 'axios';
Let's add some variables we will need
import axios from 'axios'; const API_URL: string = '
perfect! now that we have "all" setup lets start by adding our first sdk method (getPokemonById)
import axios from 'axios'; const API_URL: string = ' export function getPokemonById(id: number): Promise<object> { return new Promise((resolve, reject) => { axios .get(`${API_URL}/pokemon/${id}`) .then((resp) => { resolve(resp.data); }) .catch(reject); }); } export default { getPokemonById };
Finally our code should look something like this, notice that we export our function and as an export default we use "all of our functions" I will add another function so we can have a better idea of multiple functions working from the sdk. It should look like this...
import axios from 'axios'; const API_URL: string = ' export function getPokemonById(id: number): Promise<object> { return new Promise((resolve, reject) => { axios .get(`${API_URL}/pokemon/${id}`) .then((resp) => { resolve(resp.data); }) .catch(reject); }); } export function getPokemonTypeById(id: number): Promise<object> { return new Promise((resolve, reject) => { axios .get(`${API_URL}/type/${id}`) .then((resp) => { resolve(resp.data); }) .catch(reject); }); } export default { getPokemonById, getPokemonTypeById };
Setup small project for testing
Now that we have a really bare bones version of our SDK we will try to use it, but first we should build it!
for simplicity we will make a new node project inside our project like so...
npm run build mkdir testing cd testing npm init -y npm install ..
now this should make our new project ready to make import our sdk and running it.
my test looked a little like this
const pokeapi = require('pokeapi'); pokeapi.getPokemonById(1).then((pokemon) => { console.log(pokemon.name); }); // it should say "bulbasaur"
Let's Publish
Great to know that you made it until here ❤️
lets start right away!
we will need a new file called
.npmignore where we will add all the folders we don't want our sdk to bring with itself like our "testing" folder
it should look like this
testing/
and that should be all for your code ❤️
now the last part is to have an account on Npm do the next commands
npm login #do all the steps necessary npm publish
and your sdk should be ready to be installed in any other node projects.
Here's some links that you might want:
Npm
Repo
I hope this quick tutorial really helped someone, because I wasn't lucky enough to find one as explicit as this one haha.
Discussion (4)
Good article. Thanks!
The following line can be removed as it is now merged github.com/prettier/eslint-config-....
Great article, thanks for share. the article does not cover the build of a single file sdk, here are some inputs.
a webpack.config.js file
install ts-loader, webpack-cli and npm script for build
build: "webpack"
this work for me and let me build a sdk in a single file.
in the html file dont forget to use
for include the js file and for the code
Great article, thank you!
Can we use this sdk for plain javascript project?
Like writing sdk in typescript has support for plain js projects or not? | https://dev.to/mendoza/how-to-build-a-simple-sdk-on-typescript-21gg | CC-MAIN-2022-21 | refinedweb | 1,214 | 62.48 |
no safe boot possible
Playing with the deepsleep of my pytrack with sipy I managed to get into a loop that I can not get out of.
Doing the hard reset pin 12 on 3V for one second, which used to work is not doing anything anymore
What can I do to get the hand back on my cards??
Thank you very much
- goldfishalpha last edited by
@robert-hh
To qoute the docs directly, " After reset, if P12 pin is held high (i.e. connect it to the 3V3 output pin), the heartbeat LED will begin flashing orange slowly. If after 3 seconds the pin is still held high..." suggesting that the pin should be pulled high AFTER reset.
In fact I did get a flashing orange led, using this method.
My problem was that it appeared to reset once I released P12. I released P12 because I did not want to go to the prvious OTA firware and only wanted to disable boot.py and main.py.
@Martinn
I am sure that I have P12 (and not GPIO12 or '12') as I get the flashing led..
I haven't solved it yet. Just documenting for posterity.
@goldfishalpha Are you sure you have P12? As you have noted, documentation is pretty buggy and pinout description a mess. You might well have the wrong pin.
@goldfishalpha Pulling P12 high and then pushing the reset button should activate safe boot, which does not execute boot.py and main.py on boot. That's almost the same as typing Ctrl-F at the REPL prompt.
- goldfishalpha last edited by
Just for the record, I had a similar issue with a LoPy on a Pysense board.
I foolishly got it stuck in a while loop. Pulling p12 high as per the official documentation could not get me back into REPL to reflash the file system, i.e
import os os.mkfs('/flash')
Reinstalling the firmware with the 'Erase flash file system' option checked worked the end.
Additional: This is not the first time that the official documentation has been wrong or incomplete.
Maybe an update of docs is in order?
@robert-hh Sorted. I just had desengage the sequence number on my sigfox account.
@devinv Erasing flash erase all data, including any credentials that existed. But these will normally be restored when you do a firmware install through the Pycom updater. And that needs to work, because sometimes devices are in a state where only a full erase help.
But you may also ask Pycom service. Maybe Sigfox requires another configuration step.
@robert-hh Is it possible that all what I did could create problem with my sipy sending messages to Sigfox?
It doesn't work anymore.
This was working before]))
But not anymore.
No messages arrives to my sigfox account!?
@robert-hh Thank you so much. I'm back on track.
@devinv Oh yes. You have to connect P2 to GND and press reset, to go in boot mode.
esptool.py does not support the PIC boot mode setting, it uses the espressif mode switch, which is sadly not supported by Pycom hardware.
And there is a more recent version of esptool.py here:
@robert-hh ok, I tryed this
python esptool.py -p COM6 -b 460800 -c esp32 erase_flash
esptool.py v2.0.1
Got this
esptool.py:2124: DeprecationWarning: inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()
operation_args,,,_ = inspect.getargspec(operation_func)
then this for 30 seconds
Connecting.....................................................
(that takes 30 seconds...)
and then
A fatal error occurred: Failed to connect to ESP32: Timed out waiting for packet header
Is there something else which has to be done before?
@devinv You should install the stable version. But besides that all is fine. I use verison 1.15.2 of the updater. Links are here:
But firmware-wise the results are the same.
As next step, try erasing with esptool.py & reflash. That is the crowbar style of reset and brings the device definitely back to the manufacturing state after assembly.
@robert-hh As I don't see the options you are talking about I'm not sure I'm using the right tool
Following your link I used pycom_firmware_update_1.15.1.exe.zip
There is no advanced section
There is "Include developement releases", which I don't know if I need to tick or not (tryed both, same result)
In the Communiction section I choose the PORT (COM6), the Speed (115200)
I have the option 'erase flash file system' (which I ticked)
Force update Lora region (I have a sipy, so I don't tick that right??)
A 'flash from local file' option
A type (pybytes, stable or development), which one do I have to choose?
It asks for a pybytes activation token (no idea where I'm supposed to get that?). I can skip that part
Then it instal 1.18.1.r7 [pybytes] and it mentions it is Sipy
and it write after a few minutes : Your Sipy was successfully updated
But then, nothing change I have the same
flash read err, 1000 loop in Atom
@devinv I suggested to reinstall the firmware using the pycom firmware updater at.
When you run the downloader, you can select "show advanced seeting" first, an the on one of the next pages you have the option to erase the file system too. verify on that page, that the proper device is shown.
If all is OK, that should bring you to an clean device.
Another method of erasing flash completely is using esptool.py from. The call is:
python3 esptool.py -p <your_com_port> -b 460800 -c esp32 erase_flash
After erasing with esptool.py, you can reload the firmware using the pycom updater.
@robert-hh So what do I do now?
I can not upload anything.
And when I press reset it is definitly trying to launch something.
What can I do with the tiny basic interpreter? It doesn't let me do any command?
How can I go back to a normal state where I can upload things are input basic commande?
@devinv it was 'erase flash'. The promt you see now after Ctrl-C is from an embedded tiny basic interpreter in the boot ROM.
@robert-hh erase iprion?
I don't know what you mean. I launched the Pycom Upgrade
and then got this from the Atom
Connecting on COM6...
ets592
entry 0x400a059c
Starting theets Jun 8 2016 00:22:57
rst:0x7 (TG0WDT_SYS_RESET),boot:0x17 (SPI_FAST_FLASH_BOOT)
flash read err, 1000
Falling back to built-in command interpreter.
OK
ets Jun 8 2016 00:22:57
I can manage to break it with Control C
but I get the
CWhat?
And there is nothing I can do unless rebooting and starting again.
I can not upload anything
@devinv you can always re-flash the firmware using the pycom updater. If you select the erase iprion, then you have a clean empty device. Obviously that will also delete all files. | https://forum.pycom.io/topic/4155/no-safe-boot-possible/6 | CC-MAIN-2019-18 | refinedweb | 1,165 | 76.11 |
This is part 8 of Categories for Programmers. Previously: Functors. See the Table of Contents.
Now that you know what a functor is, and have seen a few examples, let’s see how we can build larger functors from smaller ones. In particular it’s interesting to see which type constructors (which correspond to mappings between objects in a category) can be extended to functors (which include mappings between morphisms).
Bifunctors
Since functors are morphisms in Cat (the category of categories), a lot of intuitions about morphisms — and functions in particular — apply to functors as well. For instance, just like you can have a function of two arguments, you can have a functor of two arguments, or a bifunctor. On objects, a bifunctor maps every pair of objects, one from category C, and one from category D, to an object in category E. Notice that this is just saying that it’s a mapping from a cartesian product of categories C×D to E.
That’s pretty straightforward. But functoriality means that a bifunctor has to map morphisms as well. This time, though, it must map a pair of morphisms, one from C and one from D, to a morphism in E.
Again, a pair of morphisms is just a single morphism in the product category C×D. We define a morphism in a cartesian product of categories as a pair of morphisms which goes from one pair of objects to another pair of objects. These pairs of morphisms can be composed in the obvious way:
(f, g) ∘ (f', g') = (f ∘ f', g ∘ g')
The composition is associative and it has an identity — a pair of identity morphisms (id, id). So a cartesian product of categories is indeed a category.
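When both categories are the category of Haskell types and functions, this pairwise composition can be spelled out directly (a small sketch; `compPair` and `idPair` are my names):

```haskell
-- Composition of morphism pairs in the product category, when both
-- component categories are the category of Haskell types and functions:
compPair :: (b -> c, b' -> c') -> (a -> b, a' -> b') -> (a -> c, a' -> c')
compPair (f, g) (f', g') = (f . f', g . g')

-- The identity morphism is a pair of identities:
idPair :: (a -> a, b -> b)
idPair = (id, id)
```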
But an easier way to think about bifunctors is that they are functors in both arguments. So instead of translating functorial laws — associativity and identity preservation — from functors to bifunctors, it’s enough to check them separately for each argument. If you have a mapping from a pair of categories to a third category, and you prove that it is functorial in each argument separately (i.e., keeping the other argument constant), then the mapping is automatically a bifunctor. By functorial I mean that it acts on morphisms like an honest functor.
Let’s define a bifunctor in Haskell. In this case all three categories are the same: the category of Haskell types. A bifunctor is a type constructor that takes two type arguments. Here’s the definition of the
Bifunctor typeclass taken directly from the library
Data.Bifunctor:
class Bifunctor f where bimap :: (a -> c) -> (b -> d) -> f a b -> f c d bimap g h = first g . second h first :: (a -> c) -> f a b -> f c b first g = bimap g id second :: (b -> d) -> f a b -> f a d second = bimap id
The type variable
f represents the bifunctor. You can see that in all type signatures it’s always applied to two type arguments. The first type signature defines
bimap: a mapping of two functions at once. The result is a lifted function,
(f a b -> f c d), operating on types generated by the bifunctor’s type constructor. There is a default implementation of
bimap in terms of
first and
second, which shows that it’s enough to have functoriality in each argument separately to be able to define a bifunctor.
The two other type signatures,
first and
second, are the two
fmaps witnessing the functoriality of
f in the first and the second argument, respectively.
The typeclass definition provides default implementations for both of them in terms of
bimap.
When declaring an instance of
Bifunctor, you have a choice of either implementing
bimap and accepting the defaults for
first and
second, or implementing both
first and
second and accepting the default for
bimap (of course, you may implement all three of them, but then it’s up to you to make sure they are related to each other in this manner).
Product and Coproduct Bifunctors
An important example of a bifunctor is the categorical product — a product of two objects that is defined by a universal construction. If the product exists for any pair of objects, the mapping from those objects to the product is bifunctorial. This is true in general, and in Haskell in particular. Here’s the
Bifunctor instance for a pair constructor — the simplest product type:
instance Bifunctor (,) where bimap f g (x, y) = (f x, g y)
There isn’t much choice:
bimap simply applies the first function to the first component, and the second function to the second component of a pair. The code pretty much writes itself, given the types:
bimap :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
The action of the bifunctor here is to make pairs of types, for instance:
(,) a b = (a, b)
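As a quick check with the `Data.Bifunctor` module from `base`, we can map over both components of a pair at once, or over each one separately:

```haskell
import Data.Bifunctor

-- bimap maps both components at once; first and second map one each
p1 :: (Int, String)
p1 = bimap (+1) show (5 :: Int, True)

p2 :: (Int, Bool)
p2 = first (*2) (5, True)
```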
By duality, a coproduct, if it’s defined for every pair of objects in a category, is also a bifunctor. In Haskell, this is exemplified by the
Either type constructor being an instance of
Bifunctor:
instance Bifunctor Either where bimap f _ (Left x) = Left (f x) bimap _ g (Right y) = Right (g y)
This code also writes itself.
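A small usage sketch shows that each mapping only touches the branch that is actually present:

```haskell
import Data.Bifunctor

e1, e2 :: Either String Int
e1 = Left "oops"
e2 = Right 42

-- first maps the Left branch, second maps the Right branch
r1 :: Either Int Int
r1 = first length e1

r2 :: Either String String
r2 = second show e2
```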
Now, remember when we talked about monoidal categories? A monoidal category defines a binary operator acting on objects, together with a unit object. I mentioned that
Set is a monoidal category with respect to cartesian product, with the singleton set as a unit. And it’s also a monoidal category with respect to disjoint union, with the empty set as a unit. What I haven’t mentioned is that one of the requirements for a monoidal category is that the binary operator be a bifunctor. This is a very important requirement — we want the monoidal product to be compatible with the structure of the category, which is defined by morphisms. We are now one step closer to the full definition of a monoidal category (we still need to learn about naturality, before we can get there).
Functorial Algebraic Data Types
We’ve seen several examples of parameterized data types that turned out to be functors — we were able to define
fmap for them. Complex data types are constructed from simpler data types. In particular, algebraic data types (ADTs) are created using sums and products. We have just seen that sums and products are functorial. We also know that functors compose. So if we can show that the basic building blocks of ADTs are functorial, we’ll know that parameterized ADTs are functorial too.
So what are the building blocks of parameterized algebraic data types? First, there are the items that have no dependency on the type parameter of the functor, like
Nothing in
Maybe, or
Nil in
List. They are equivalent to the
Const functor. Remember, the
Const functor ignores its type parameter (really, the second type parameter, which is the one of interest to us, the first one being kept constant).
Then there are the elements that simply encapsulate the type parameter itself, like
Just in
Maybe. They are equivalent to the identity functor. I mentioned the identity functor previously, as the identity morphism in Cat, but didn’t give its definition in Haskell. Here it is:
data Identity a = Identity a
instance Functor Identity where fmap f (Identity x) = Identity (f x)
You can think of
Identity as the simplest possible container that always stores just one (immutable) value of type
a.
Everything else in algebraic data structures is constructed from these two primitives using products and sums.
With this new knowledge, let’s have a fresh look at the
Maybe type constructor:
data Maybe a = Nothing | Just a
It’s a sum of two types, and we now know that the sum is functorial. The first part,
Nothing can be represented as a
Const () acting on
a (the first type parameter of
Const is set to unit — later we’ll see more interesting uses of
Const). The second part is just a different name for the identity functor. We could have defined
Maybe, up to isomorphism, as:
type Maybe a = Either (Const () a) (Identity a)
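We can witness this isomorphism with a pair of conversion functions, reusing the `Const` and `Identity` functors that ship with `base` (the names `toCanonical` and `fromCanonical` are mine):

```haskell
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- The canonical sum-of-functors form of Maybe
type Maybe' a = Either (Const () a) (Identity a)

toCanonical :: Maybe a -> Maybe' a
toCanonical Nothing  = Left (Const ())
toCanonical (Just x) = Right (Identity x)

fromCanonical :: Maybe' a -> Maybe a
fromCanonical (Left (Const ()))    = Nothing
fromCanonical (Right (Identity x)) = Just x
```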
So
Maybe is the composition of the bifunctor
Either with two functors,
Const () and
Identity. (
Const is really a bifunctor, but here we always use it partially applied.)
We’ve already seen that a composition of functors is a functor — we can easily convince ourselves that the same is true of bifunctors. All we need is to figure out how a composition of a bifunctor with two functors works on morphisms. Given two morphisms, we simply lift one with one functor and the other with the other functor. We then lift the resulting pair of lifted morphisms with the bifunctor.
We can express this composition in Haskell. Let’s define a data type that is parameterized by a bifunctor
bf (it’s a type variable that is a type constructor that takes two types as arguments), two functors
fu and
gu (type constructors that take one type variable each), and two regular types
a and
b. We apply
fu to
a and
gu to
b, and then apply
bf to the resulting two types:
newtype BiComp bf fu gu a b = BiComp (bf (fu a) (gu b))
That’s the composition on objects, or types. Notice how in Haskell we apply type constructors to types, just like we apply functions to arguments. The syntax is the same.
If you’re getting a little lost, try applying
BiComp to
Either,
Const (),
Identity,
a, and
b, in this order. You will recover our bare-bone version of
Maybe b (
a is ignored).
The new data type
BiComp is a bifunctor in
a and
b, but only if
bf is itself a
Bifunctor and
fu and
gu are
Functors. The compiler must know that there will be a definition of
bimap available for
bf, and definitions of
fmap for
fu and
gu. In Haskell, this is expressed as a precondition in the instance declaration: a set of class constraints followed by a double arrow:
instance (Bifunctor bf, Functor fu, Functor gu) => Bifunctor (BiComp bf fu gu) where bimap f1 f2 (BiComp x) = BiComp ((bimap (fmap f1) (fmap f2)) x)
The implementation of
bimap for
BiComp is given in terms of
bimap for
bf and the two
fmaps for
fu and
gu. The compiler automatically infers all the types and picks the correct overloaded functions whenever
BiComp is used.
The
x in the definition of
bimap has the type:
bf (fu a) (gu b)
which is quite a mouthful. The outer
bimap breaks through the outer
bf layer, and the two
fmaps dig under
fu and
gu, respectively. If the types of
f1 and
f2 are:
f1 :: a -> a' f2 :: b -> b'
then the final result is of the type
bf (fu a') (gu b'):
bimap :: (fu a -> fu a') -> (gu b -> gu b') -> bf (fu a) (gu b) -> bf (fu a') (gu b')
If you like jigsaw puzzles, these kinds of type manipulations can provide hours of entertainment.
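To convince ourselves that the pieces fit, here's a small sketch that plugs the bare-bone Maybe back into `BiComp` and maps over it (the helpers `just`, `nothing`, and `runBiComp` are mine, added only for testing):

```haskell
import Data.Bifunctor
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

newtype BiComp bf fu gu a b = BiComp (bf (fu a) (gu b))

instance (Bifunctor bf, Functor fu, Functor gu) =>
    Bifunctor (BiComp bf fu gu) where
  bimap f1 f2 (BiComp x) = BiComp (bimap (fmap f1) (fmap f2) x)

-- The bare-bone Maybe from before, as a composed bifunctor:
just :: b -> BiComp Either (Const ()) Identity a b
just = BiComp . Right . Identity

nothing :: BiComp Either (Const ()) Identity a b
nothing = BiComp (Left (Const ()))

-- Unwrap, so results can be inspected
runBiComp :: BiComp bf fu gu a b -> bf (fu a) (gu b)
runBiComp (BiComp x) = x
```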
So it turns out that we didn’t have to prove that
Maybe was a functor — this fact followed from the way it was constructed as a sum of two functorial primitives.
A perceptive reader might ask the question: If the derivation of the
Functor instance for algebraic data types is so mechanical, can’t it be automated and performed by the compiler? Indeed, it can, and it is. You need to enable a particular Haskell extension by including this line at the top of your source file:
{-# LANGUAGE DeriveFunctor #-}
and then add
deriving Functor to your data structure:
data Maybe a = Nothing | Just a deriving Functor
and the corresponding
fmap will be implemented for you.
The regularity of algebraic data structures makes it possible to derive instances not only of
Functor but of several other type classes, including the
Eq type class I mentioned before. There is also the option of teaching the compiler to derive instances of your own typeclasses, but that’s a bit more advanced. The idea though is the same: You provide the behavior for the basic building blocks and sums and products, and let the compiler figure out the rest.
Functors in C++
If you are a C++ programmer, you obviously are on your own as far as implementing functors goes. However, you should be able to recognize some types of algebraic data structures in C++. If such a data structure is made into a generic template, you should be able to quickly implement
fmap for it.
Let’s have a look at a tree data structure, which we would define in Haskell as a recursive sum type:
data Tree a = Leaf a | Node (Tree a) (Tree a) deriving Functor
As I mentioned before, one way of implementing sum types in C++ is through class hierarchies. It would be natural, in an object-oriented language, to implement
fmap as a virtual function of the base class
Functor and then override it in all subclasses. Unfortunately this is impossible because
fmap is a template, parameterized not only by the type of the object it’s acting upon (the
this pointer) but also by the return type of the function that’s been applied to it. Virtual functions cannot be templatized in C++. We’ll implement
fmap as a generic free function, and we’ll replace pattern matching with
dynamic_cast.
The base class must define at least one virtual function in order to support dynamic casting, so we’ll make the destructor virtual (which is a good idea in any case):
template<class T> struct Tree { virtual ~Tree() {}; };
The
Leaf is just an
Identity functor in disguise:
template<class T> struct Leaf : public Tree<T> { T _label; Leaf(T l) : _label(l) {} };
The
Node is a product type:
template<class T> struct Node : public Tree<T> { Tree<T> * _left; Tree<T> * _right; Node(Tree<T> * l, Tree<T> * r) : _left(l), _right(r) {} };
When implementing
fmap we take advantage of dynamic dispatching on the type of the
Tree. The
Leaf case applies the
Identity version of
fmap, and the
Node case is treated like a bifunctor composed with two copies of the
Tree functor. As a C++ programmer, you’re probably not used to analyzing code in these terms, but it’s a good exercise in categorical thinking.
template<class A, class B> Tree<B> * fmap(std::function<B(A)> f, Tree<A> * t) { Leaf<A> * pl = dynamic_cast <Leaf<A>*>(t); if (pl) return new Leaf<B>(f (pl->_label)); Node<A> * pn = dynamic_cast<Node<A>*>(t); if (pn) return new Node<B>( fmap<A>(f, pn->_left) , fmap<A>(f, pn->_right)); return nullptr; }
For simplicity, I decided to ignore memory and resource management issues, but in production code you would probably use smart pointers (unique or shared, depending on your policy).
Compare it with the Haskell implementation of
fmap:
instance Functor Tree where fmap f (Leaf a) = Leaf (f a) fmap f (Node t t') = Node (fmap f t) (fmap f t')
This implementation can also be automatically derived by the compiler.
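For comparison, here's the derived instance at work on a small tree (`Show` and `Eq` are derived only so the result can be inspected):

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Tree a = Leaf a | Node (Tree a) (Tree a)
  deriving (Show, Eq, Functor)

t :: Tree Int
t = Node (Leaf 1) (Node (Leaf 2) (Leaf 3))
-- fmap (*10) t maps every label, preserving the shape of the tree
```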
The Writer Functor
I promised that I would come back to the Kleisli category I described earlier. Morphisms in that category were represented as “embellished” functions returning the
Writer data structure.
type Writer a = (a, String)
I said that the embellishment was somehow related to endofunctors. And, indeed, the
Writer type constructor is functorial in
a. We don’t even have to implement
fmap for it, because it’s just a simple product type.
But what’s the relation between a Kleisli category and a functor — in general? A Kleisli category, being a category, defines composition and identity. Let’ me remind you that the composition is given by the fish operator:
(>=>) :: (a -> Writer b) -> (b -> Writer c) -> (a -> Writer c) m1 >=> m2 = \x -> let (y, s1) = m1 x (z, s2) = m2 y in (z, s1 ++ s2)
and the identity morphism by a function called
return:
return :: a -> Writer a return x = (x, "")
It turns out that, if you look at the types of these two functions long enough (and I mean, long enough), you can find a way to combine them to produce a function with the right type signature to serve as
fmap. Like this:
fmap f = id >=> (\x -> return (f x))
Here, the fish operator combines two functions: one of them is the familiar
id, and the other is a lambda that applies
return to the result of acting with
f on the lambda’s argument. The hardest part to wrap your brain around is probably the use of
id. Isn’t the argument to the fish operator supposed to be a function that takes a “normal” type and returns an embellished type? Well, not really. Nobody says that
a in
a -> Writer b must be a “normal” type. It’s a type variable, so it can be anything, in particular it can be an embellished type, like
Writer b.
So
id will take
Writer a and turn it into
Writer a. The fish operator will fish out the value of
a and pass it as
x to the lambda. There,
f will turn it into a
b and
return will embellish it, making it
Writer b. Putting it all together, we end up with a function that takes
Writer a and returns
Writer b, exactly what
fmap is supposed to produce.
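We can check this construction concretely (a sketch; `ret` and `fmapW` are renamed to avoid clashing with the Prelude's `return` and `fmap`):

```haskell
type Writer a = (a, String)

(>=>) :: (a -> Writer b) -> (b -> Writer c) -> (a -> Writer c)
m1 >=> m2 = \x ->
  let (y, s1) = m1 x
      (z, s2) = m2 y
  in (z, s1 ++ s2)

ret :: a -> Writer a
ret x = (x, "")

-- fmap built from the fish operator and ret alone
fmapW :: (a -> b) -> Writer a -> Writer b
fmapW f = id >=> (\x -> ret (f x))
```

Mapping a function over a `Writer` value transforms the payload and leaves the log untouched, exactly as the ordinary product-type `fmap` would.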
Notice that this argument is very general: you can replace
Writer with any type constructor. As long as it supports a fish operator and
return, you can define
fmap as well. So the embellishment in the Kleisli category is always a functor. (Not every functor, though, gives rise to a Kleisli category.)
You might wonder if the
fmap we have just defined is the same
fmap the compiler would have derived for us with
deriving Functor. Interestingly enough, it is. This is due to the way Haskell implements polymorphic functions. It’s called parametric polymorphism, and it’s a source of so called theorems for free. One of those theorems says that, if there is an implementation of
fmap for a given type constructor, one that preserves identity, then it must be unique.
Covariant and Contravariant Functors
Now that we’ve reviewed the writer functor, let’s go back to the reader functor. It was based on the partially applied function-arrow type constructor:
(->) r
We can rewrite it as a type synonym:
type Reader r a = r -> a
for which the
Functor instance, as we’ve seen before, reads:
instance Functor (Reader r) where fmap f g = f . g
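As a reminder of how this instance behaves: lifting a function with the reader functor is just post-composition (a tiny sketch):

```haskell
double :: Int -> Int
double = (* 2)

-- fmap for ((->) r) is function composition: fmap show double == show . double
described :: Int -> String
described = fmap show double
```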
But just like the pair type constructor, or the
Either type constructor, the function type constructor takes two type arguments. The pair and
Either were functorial in both arguments — they were bifunctors. Is the function constructor a bifunctor too?
Let’s try to make it functorial in the first argument. We’ll start with a type synonym — it’s just like the
Reader but with the arguments flipped:
type Op r a = a -> r
This time we fix the return type,
r, and vary the argument type,
a. Let’s see if we can somehow match the types in order to implement
fmap, which would have the following type signature:
fmap :: (a -> b) -> (a -> r) -> (b -> r)
With just two functions taking
a and returning, respectively,
b and
r, there is simply no way to build a function taking
b and returning
r! It would be different if we could somehow invert the first function, so that it took
b and returned
a instead. We can’t invert an arbitrary function, but we can go to the opposite category.
A short recap: For every category C there is a dual category Cᵒᵖ. It's a category with the same objects as C, but with all the arrows reversed.
Consider a functor that goes between Cᵒᵖ and some other category D:
F :: Cᵒᵖ → D
Such a functor maps a morphism fᵒᵖ :: a → b in Cᵒᵖ to the morphism F fᵒᵖ :: F a → F b in D. But the morphism fᵒᵖ secretly corresponds to some morphism f :: b → a in the original category C. Notice the inversion.
Now, F is a regular functor, but there is another mapping we can define based on F, which is not a functor — let's call it G. It's a mapping from C to D. It maps objects the same way F does, but when it comes to mapping morphisms, it reverses them. It takes a morphism f :: b → a in C, maps it first to the opposite morphism fᵒᵖ :: a → b and then uses the functor F on it, to get F fᵒᵖ :: F a → F b.
Considering that F a is the same as G a and F b is the same as G b, the whole trip can be described as:
G f :: (b → a) → (G a → G b)
It’s a “functor with a twist.” A mapping of categories that inverts the direction of morphisms in this manner is called a contravariant functor. Notice that a contravariant functor is just a regular functor from the opposite category. The regular functors, by the way — the kind we’ve been studying thus far — are called covariant functors.
Here’s the typeclass defining a contravariant functor (really, a contravariant endofunctor) in Haskell:
class Contravariant f where
    contramap :: (b -> a) -> (f a -> f b)
Our type constructor Op is an instance of it:
instance Contravariant (Op r) where
    -- (b -> a) -> Op r a -> Op r b
    contramap f g = g . f
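As a side note, a partially applied type synonym can’t appear in an instance head in actual Haskell, so a compilable version wraps the function in a newtype. A self-contained sketch (the class is redeclared here so the snippet stands alone):

```haskell
class Contravariant f where
  contramap :: (b -> a) -> f a -> f b

-- Wrap the function a -> r in a newtype so (Op r) can be an instance:
newtype Op r a = Op { runOp :: a -> r }

instance Contravariant (Op r) where
  contramap f (Op g) = Op (g . f)

main :: IO ()
main = print (runOp (contramap show (Op length)) (12345 :: Int))
-- length (show 12345) = 5
```

Pre-composing with show turns a length-of-string measurement into a length-of-decimal-representation measurement, running against the direction of the original arrow.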
Notice that the function f inserts itself before (that is, to the right of) the contents of Op — the function g.
The definition of contramap for Op may be made even terser, if you notice that it’s just the function composition operator with the arguments flipped. There is a special function for flipping arguments, called flip:
flip :: (a -> b -> c) -> (b -> a -> c)
flip f y x = f x y
With it, we get:
contramap = flip (.)
Profunctors
We’ve seen that the function-arrow operator is contravariant in its first argument and covariant in the second. Is there a name for such a beast? It turns out that, if the target category is Set, such a beast is called a profunctor. Because a contravariant functor is equivalent to a covariant functor from the opposite category, a profunctor is defined as:
C^op × D → Set
Since, to first approximation, Haskell types are sets, we apply the name Profunctor to a type constructor p of two arguments, which is contra-functorial in the first argument and functorial in the second. Here’s the appropriate typeclass taken from the Data.Profunctor library:
class Profunctor p where
    dimap :: (a -> b) -> (c -> d) -> p b c -> p a d
    dimap f g = lmap f . rmap g
    lmap :: (a -> b) -> p b c -> p a c
    lmap f = dimap f id
    rmap :: (b -> c) -> p a b -> p a c
    rmap = dimap id
All three functions come with default implementations. Just like with Bifunctor, when declaring an instance of Profunctor, you have a choice of either implementing dimap and accepting the defaults for lmap and rmap, or implementing both lmap and rmap and accepting the default for dimap.
Now we can assert that the function-arrow operator is an instance of a Profunctor:
instance Profunctor (->) where
    dimap ab cd bc = cd . bc . ab
    lmap = flip (.)
    rmap = (.)
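To see dimap in action on a concrete function, here is a minimal sketch (the class and instance are redeclared so the snippet stands alone; in practice you would import Data.Profunctor from the profunctors package):

```haskell
class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

instance Profunctor (->) where
  dimap ab cd bc = cd . bc . ab

main :: IO ()
main =
  -- dimap pre-composes the first function and post-composes the second:
  -- dimap (+ 1) show (* 2) = show . (* 2) . (+ 1)
  print (dimap (+ 1) show (* 2) 3)  -- show ((3 + 1) * 2) = "8"
```

The contravariant slot adapts the input, the covariant slot adapts the output — exactly the twist the definition demands.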
Profunctors have their application in the Haskell lens library. We’ll see them again when we talk about ends and coends.
Challenges
- Show that the data type:
data Pair a b = Pair a b
is a bifunctor. For additional credit implement all three methods of Bifunctor and use equational reasoning to show that these definitions are compatible with the default implementations whenever they can be applied.
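One possible solution sketch for this challenge, using the Bifunctor class from base’s Data.Bifunctor (the deriving clause is only there so the result can be printed):

```haskell
import Data.Bifunctor

data Pair a b = Pair a b deriving Show

instance Bifunctor Pair where
  bimap f g (Pair a b) = Pair (f a) (g b)  -- map both components
  first  f  (Pair a b) = Pair (f a) b      -- map only the first
  second g  (Pair a b) = Pair a (g b)      -- map only the second

main :: IO ()
main = print (bimap (+ 1) show (Pair 1 (2 :: Int)))
-- Pair 2 "2"
```

Expanding the defaults, first f = bimap f id and second = bimap id, which agree with the explicit definitions above.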
- Show the isomorphism between the standard definition of Maybe and this desugaring:
type Maybe' a = Either (Const () a) (Identity a)
Hint: Define two mappings between the two implementations. For additional credit, show that they are the inverse of each other using equational reasoning.
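A sketch of the two mappings the hint asks for (the names toMaybe' and fromMaybe' are my own):

```haskell
import Data.Functor.Const
import Data.Functor.Identity

type Maybe' a = Either (Const () a) (Identity a)

toMaybe' :: Maybe a -> Maybe' a
toMaybe' Nothing  = Left (Const ())
toMaybe' (Just x) = Right (Identity x)

fromMaybe' :: Maybe' a -> Maybe a
fromMaybe' (Left (Const ()))     = Nothing
fromMaybe' (Right (Identity x)) = Just x
```

A case analysis on each constructor shows that the two functions compose to the identity in both directions.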
- Let’s try another data structure. I call it a PreList because it’s a precursor to a List. It replaces recursion with a type parameter b.
data PreList a b = Nil | Cons a b
You could recover our earlier definition of a List by recursively applying PreList to itself (we’ll see how it’s done when we talk about fixed points).
Show that PreList is an instance of Bifunctor.
- Show that the following data types define bifunctors in a and b:
data K2 c a b = K2 c
data Fst a b = Fst a
data Snd a b = Snd b
For additional credit, check your solutions against Conor McBride’s paper Clowns to the Left of me, Jokers to the Right.
- Define a bifunctor in a language other than Haskell. Implement bimap for a generic pair in that language.
- Should std::map be considered a bifunctor or a profunctor in the two template arguments Key and T? How would you redesign this data type to make it so?
Next: Function Types.
Acknowledgment
As usual, big thanks go to Gershom Bazerman for reviewing this article.
Opened 7 years ago
Closed 7 years ago
#12796 closed (wontfix)
Change in django.db.models.options.get_verbose_name
Description
In order to make the code more readable I suggest changing the lambda function to a named function (def), like this:
def get_verbose_name(class_name):
    return re.sub('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))', ' \\1', class_name).lower().strip()
It came up in the Brazilian Django list[1].
[1] - pt_BR:
Change History (1)
comment:1 Changed 7 years ago by
I'm not sure I understand why this is any clearer, and my portuguese isn't good enough to follow the discussion on the referenced mailing list. If you can explain why this has caused confusion, please reopen. | https://code.djangoproject.com/ticket/12796 | CC-MAIN-2017-30 | refinedweb | 127 | 61.97 |